
Deploy a Self-Hosted Git Server on Kubernetes: Gitea with Raw Manifests

A step-by-step companion to Episode 3 of the Kubernetes on Raspberry Pi series. Covers Namespaces, Deployments, Services, and PersistentVolumeClaims by deploying Gitea without Helm.

Watch the Video

Helm is coming in the next episode. But before reaching for a package manager, it's worth deploying one real application by hand: writing each manifest, understanding each field, watching the pieces fit together. By the end you'll know exactly what a Helm chart is doing under the hood, and that knowledge doesn't go away.

This is the companion article to Episode 3 of the Kubernetes on Raspberry Pi series. We deploy Gitea, a self-hosted Git server, using raw Kubernetes manifests, without Helm.

All YAML configs are in the kubernetes-series GitHub repo under video-03-gitea/.

What We're Building

Gitea requires five Kubernetes resources: a Namespace to isolate it from other workloads, a PersistentVolume backed by NFS, a PersistentVolumeClaim to request that storage, a Deployment to run the container, and a Service to expose it on the network. We'll create each one explicitly with no abstractions hiding the work.

Namespaces and Storage

Start by creating the namespace:

kubectl create namespace gitea

Namespaces are logical partitions within a cluster. Everything Gitea-related lives in gitea, which makes it easy to find, manage, and eventually delete without touching other workloads.
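kubectl create is fine for a quick start, but since everything else in this episode is a manifest, the namespace can be declared the same way and kept in the repo alongside the other YAML. A small sketch (the filename gitea-namespace.yaml is just a suggested convention):

```yaml
# gitea-namespace.yaml — declarative alternative to `kubectl create namespace gitea`
apiVersion: v1
kind: Namespace
metadata:
  name: gitea
```

Apply it with `kubectl apply -f gitea-namespace.yaml`; apply is idempotent, so it's safe to run even if the namespace already exists.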

Next, create the directory on the NAS. The Gitea container runs as UID 2000 (we'll use the rootless image), so ownership on disk needs to match:

ssh todd@10.51.51.154
sudo mkdir -p /mnt/raid/kubernetes/gitea
sudo chown 2000:2000 /mnt/raid/kubernetes/gitea
exit

With the NFS directory ready, create the PV and PVC. Unlike Episode 2, where we used ReadWriteMany for nginx, Gitea needs ReadWriteOnce: it's a stateful application that shouldn't have multiple instances writing to the same data simultaneously.

# gitea-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gitea-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 10.51.51.154
    path: /mnt/raid/kubernetes/gitea
  persistentVolumeReclaimPolicy: Retain

# gitea-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitea-pvc
  namespace: gitea
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

kubectl apply -f gitea-pv.yaml
kubectl apply -f gitea-pvc.yaml -n gitea
kubectl get pv,pvc -n gitea
# Both should show Bound

Deploying Gitea

We use the rootless Gitea image (gitea/gitea:1.25.4-rootless), which runs as UID 2000, matching our NFS directory ownership. The Deployment mounts the PVC at /var/lib/gitea, which is where Gitea stores all its data: repositories, configuration, and database.

# gitea-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitea
  namespace: gitea
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gitea
  template:
    metadata:
      labels:
        app: gitea
    spec:
      containers:
        - name: gitea
          image: gitea/gitea:1.25.4-rootless
          ports:
            - containerPort: 3000
            - containerPort: 2222
          volumeMounts:
            - name: gitea-data
              mountPath: /var/lib/gitea
      volumes:
        - name: gitea-data
          persistentVolumeClaim:
            claimName: gitea-pvc

kubectl apply -f gitea-deployment.yaml
kubectl get pods -n gitea -w

If the pod doesn't come up, kubectl describe pod and kubectl logs are your tools. Getting comfortable with that debugging loop is worth practicing. It's the same workflow for any broken workload.
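A natural next step, once the pod runs reliably, is to let Kubernetes check Gitea's health for you instead of watching `kubectl get pods`. A hedged sketch, assuming Gitea's /api/healthz endpoint (present in recent releases) and the delay values picked arbitrarily for a Raspberry Pi:

```yaml
# Add under the gitea container in gitea-deployment.yaml.
# Assumes Gitea serves /api/healthz; delays are illustrative, tune for your hardware.
readinessProbe:
  httpGet:
    path: /api/healthz
    port: 3000
  initialDelaySeconds: 10
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /api/healthz
    port: 3000
  initialDelaySeconds: 30
  periodSeconds: 30
```

The readiness probe keeps the pod out of Service endpoints until Gitea actually answers HTTP; the liveness probe restarts the container if it stops responding.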

Exposing Gitea with a Service

The pod is running, but nothing can reach it yet. A Service exposes it on the network. We use NodePort here, which assigns a port from the 30000-32767 range and opens it on every cluster node, so you can access Gitea at any node's IP address.

# gitea-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: gitea
  namespace: gitea
spec:
  type: NodePort
  selector:
    app: gitea
  ports:
    - name: http
      port: 3000
      targetPort: 3000
    - name: ssh
      port: 2222
      targetPort: 2222

kubectl apply -f gitea-service.yaml
kubectl get svc -n gitea

Note the NodePort assigned to port 3000 and access Gitea at http://<any-node-ip>:<nodeport>.
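If a randomly assigned port is inconvenient (e.g. for bookmarks or firewall rules), a NodePort can be pinned explicitly. A sketch, with 30300 chosen arbitrarily (any free port in the 30000-32767 range works):

```yaml
# In gitea-service.yaml, replace the http port entry:
- name: http
  port: 3000
  targetPort: 3000
  nodePort: 30300   # pinned; must be unused and within 30000-32767
```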

Verifying Persistence

Open Gitea in the browser via the NodePort URL and complete the setup wizard. Set your admin password here; you'll need it later. Then create a test repository, clone it, commit something, and push it back.
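The clone/commit/push round trip can be rehearsed locally against a throwaway bare repository before pointing at Gitea; the commands are identical, only the remote URL differs. A sketch (substitute REMOTE with your Gitea clone URL, http://&lt;node-ip&gt;:&lt;nodeport&gt;/&lt;user&gt;/test.git):

```shell
set -e
workdir=$(mktemp -d)

# Stand-in for the Gitea remote; swap for your real clone URL.
REMOTE="$workdir/origin.git"
git init --bare -q -b main "$REMOTE"

# Clone, commit, push — exactly the workflow against Gitea.
git clone -q "$REMOTE" "$workdir/test"
cd "$workdir/test"
git config user.email you@example.com   # local identity for the commit
git config user.name "You"
echo "hello from kubernetes" > README.md
git add README.md
git commit -qm "initial commit"
git push -q origin HEAD:main

# A fresh clone should contain the pushed file.
git clone -q "$REMOTE" "$workdir/verify"
ls "$workdir/verify/README.md"
```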

Now delete the pod and watch Kubernetes recreate it:

kubectl delete pod -n gitea <gitea-pod-name>
kubectl get pods -n gitea -w

Once the new pod is up, open Gitea and your repository is still there. The data lived on NFS and survived the restart completely.

What's Next

We just wrote five separate YAML files to deploy one application. In Episode 4 we deploy Prometheus and Grafana, which would require 20+ manifests by hand. That's where we meet Helm.
