
MinIO

Object storage for the cluster — S3-compatible. Used for GitLab runner cache, application assets, and general-purpose storage.

  • S3 API: https://s3.zaroz.cloud
  • Console: https://console.s3.zaroz.cloud
  • Namespace: minio
  • Operator namespace: minio-operator
  • Storage: 1Ti on TrueNAS NFS (truenas-nfs StorageClass)

Credentials

Admin credentials are stored in the minio-env-config secret in the minio namespace:

kubectl get secret minio-env-config -n minio \
  -o jsonpath='{.data.config\.env}' | base64 -d
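To pull just the root credentials out of the decoded config, filter for the relevant lines (operator-managed tenants write them as `export MINIO_ROOT_USER=...` and `export MINIO_ROOT_PASSWORD=...`; adjust the pattern if yours differs):

```shell
# Decode the secret and keep only the root credential lines
kubectl get secret minio-env-config -n minio \
  -o jsonpath='{.data.config\.env}' | base64 -d \
  | grep -E 'MINIO_ROOT_(USER|PASSWORD)'
```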

Architecture

Installed via the MinIO Operator. The operator manages a Tenant CRD (zaroz) in the minio namespace.

  • 1 server, 1 volume (scale by adding pools to the tenant spec)
  • TLS terminated at Traefik ingress (cert-manager, zaroz-cluster-issuer)
  • Internal services: minio:80 (S3 API), zaroz-console:9090 (UI)
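As a quick smoke test of the internal S3 service, a throwaway pod can hit MinIO's standard liveness endpoint (`/minio/health/live`; the `curlimages/curl` image is just a convenient example):

```shell
# Exits 0 if the in-cluster S3 endpoint answers the liveness check
kubectl run -n minio curl-smoke --rm -i --restart=Never \
  --image=curlimages/curl -- \
  -sf http://minio.minio.svc.cluster.local/minio/health/live \
  && echo "S3 service is up"
```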

Expanding Storage

To expand the MinIO PVC, see Expanding a PVC.

Creating an Access Key

Access keys are scoped to a user and used to authenticate S3 API requests from applications.

Not all versions of the MinIO console UI expose access key creation. Use mc inside the pod instead:

# Set up the alias (only needed once per shell session)
kubectl exec -n minio zaroz-pool-0-0 -c minio -- \
  mc alias set local http://localhost:9000 admin <admin-password>

# Create a key with a descriptive name
kubectl exec -n minio zaroz-pool-0-0 -c minio -- \
  mc admin accesskey create local/ --name <name>

The output contains the Access Key and Secret Key. Copy the secret key immediately; it is not shown again.
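A key created this way works with any S3 client pointed at the public endpoint. For example, with the AWS CLI (the placeholders are the values printed by the create command above):

```shell
export AWS_ACCESS_KEY_ID=<access-key>
export AWS_SECRET_ACCESS_KEY=<secret-key>
aws --endpoint-url https://s3.zaroz.cloud s3 ls
```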

To list existing keys:

kubectl exec -n minio zaroz-pool-0-0 -c minio -- \
  mc admin accesskey list local/

To revoke a key:

kubectl exec -n minio zaroz-pool-0-0 -c minio -- \
  mc admin accesskey rm local/ <access-key>

Adding a Bucket

Via the console at https://console.s3.zaroz.cloud, or with mc:

kubectl exec -n minio zaroz-pool-0-0 -c minio -- \
  mc mb local/<bucket-name>
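To confirm the new bucket accepts writes, round-trip a test object through it (the `/tmp/smoke.txt` path is arbitrary):

```shell
kubectl exec -n minio zaroz-pool-0-0 -c minio -- sh -c \
  'echo ok > /tmp/smoke.txt \
   && mc cp /tmp/smoke.txt local/<bucket-name>/ \
   && mc ls local/<bucket-name>'
```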

Scaling to Multiple Nodes

Add a new pool to the tenant spec:

pools:
- name: pool-0
  servers: 1
  volumesPerServer: 1
  ...
- name: pool-1
  servers: 2
  volumesPerServer: 1
  volumeClaimTemplate:
    spec:
      resources:
        requests:
          storage: 1Ti
      storageClassName: truenas-nfs

Apply with kubectl apply -f tenant.yaml. The operator rolls out the new pool; MinIO then favors the emptier pool for new objects. Existing data is not rebalanced automatically; mc admin rebalance start local/ can do that if needed.
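Once the new pool is online, the server, drive, and pool layout can be verified from inside a pod with `mc admin info`:

```shell
kubectl exec -n minio zaroz-pool-0-0 -c minio -- \
  mc admin info local/
```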

GitLab Runner Cache

The gitlab-runner-cache bucket is already created and the runner is configured to use it. See GitLab Runner for details.

To create a new access key for the runner (e.g. after rotation):

kubectl exec -n minio zaroz-pool-0-0 -c minio -- \
  mc admin accesskey create local/ --name gitlab-runner
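The rotated key then has to be propagated to wherever the runner stores its cache credentials. If that is a Kubernetes secret (the secret name and namespace below are examples only; check the runner's actual configuration), it can be replaced idempotently:

```shell
# Recreate the secret with the rotated key (example names; verify yours)
kubectl create secret generic gitlab-runner-cache-s3 \
  --namespace gitlab-runner \
  --from-literal=accesskey=<access-key> \
  --from-literal=secretkey=<secret-key> \
  --dry-run=client -o yaml | kubectl apply -f -
```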