Helm Part III: Finalizing Our Helm Charts

Scott Kragen

In the last post, we created the raw Kubernetes deployment and service files for our custom Nginx and AdaptixC2 containers, then verified they worked correctly.

In this entry, we’ll package those deployments into Helm charts to make them easier to reuse and deploy, especially for red team operators who need to spin up infrastructure quickly and consistently.

Helm charts are essentially packages of Kubernetes templates. I prefer Helm over Kustomize for red team use because its values.yaml files are easier to read and modify because they act more like configuration files rather than layered patches.

Helm templates use the Go Template Language, which should feel familiar if you’ve worked with YAML-based systems like Ansible. It supports variables and basic logic, though we’ll keep things simple for these charts.
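
For instance, a tiny snippet like this (a made-up example, not from our charts) shows the basics of value lookups, defaults and conditionals:

metadata:
  name: {{ .Values.name | default "demo" }}
  {{- if .Values.debug }}
  annotations:
    debug: "true"
  {{- end }}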

The first chart we are going to make is the AdaptixC2 chart. The plan is to generate the generic set of templates Helm can create, then modify them.

Let’s start by creating the chart structure. Helm can scaffold a standard chart layout for us:

mkdir helmcharts && cd helmcharts
helm create adaptixc2

This will generate the chart structure with standard template files:

NOTES.txt
_helpers.tpl
deployment.yaml
hpa.yaml
ingress.yaml
service.yaml
serviceaccount.yaml

Deployment.yaml
#

We will clean up some of these files later, but for now we are going to modify deployment.yaml, combining our base_deployment into it and adding the variables we need.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.excerciseName }}{{ .Values.tier }}adc2
  labels:
    excercise: {{ .Values.excerciseName }}
    tier: {{ .Values.tier }}
    app: adaptixc2
  namespace: {{ .Values.excerciseName }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      excercise: {{ .Values.excerciseName  }}
      tier: {{ .Values.tier  }}
      app: adaptixc2
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        excercise: {{ .Values.excerciseName  }}
        tier: {{ .Values.tier }}
        app: adaptixc2
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      # serviceAccountName: {{ include "adaptixc2.serviceAccountName" . }}
      securityContext:
        fsGroup: 999
      initContainers:
      - name: mkcert
        image: alpine/mkcert:latest
        volumeMounts:
        - mountPath: /mnt
          name: server-certs
        #command: ["/bin/sh","-c"]
        args:
          - -cert-file
          - /mnt/cert.pem
          - -key-file
          - /mnt/cert.key
          - adaptix
          - adaptix.internal

      - name: fixpermissions
        image: alpine:latest
        volumeMounts:
        - mountPath: /mnt
          name: server-certs
        command: ["/bin/sh"]
        args: ["-c", "chown 999:999 /mnt/cert.*"]

      containers:
        - name: {{ .Values.excerciseName }}{{ .Values.tier }}adc2
          securityContext:
            runAsUser: 999
            runAsGroup: 999
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          args:
            - -profile
            - /home/adaptix/profile.cfg
          ports:
            - name: httpport
              containerPort: 8080
              protocol: TCP
            - name: httpsport
              containerPort: 8443
            - name: operatorport
              containerPort: 4321
            - name: proxyport
              containerPort: {{ int .Values.proxyPort }}
          livenessProbe:
            tcpSocket:
              port: 4321
          readinessProbe:
            tcpSocket:
              port: 4321
          {{- with .Values.resources }}
          resources:
            {{- toYaml . | nindent 12 }}
          {{- end }}
          volumeMounts:
          - name: adaptix-data
            mountPath: /home/adaptix/data
          - name: profile-secret
            mountPath: /home/adaptix/profile.cfg
            subPath: profile.cfg
          - name: server-certs
            mountPath: /home/adaptix/cert/
      volumes:
      - name: adaptix-data
        persistentVolumeClaim:
          claimName: {{ .Values.excerciseName }}{{ .Values.tier }}adc2-pvc
      - name: server-certs
        emptyDir: {} #mkcert will create the certs here
      - name: profile-secret
        secret:
          secretName: {{ .Values.excerciseName }}{{ .Values.tier }}adc2-profile
          items:
          - key: profile-data
            path: profile.cfg
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}

To break it down, {{ .Values.XYZ }} references a variable from the values file. The key variables we have are:

excerciseName: "tomato"
tier: ops
proxyPort: 4444

The .Values.excerciseName is usually whatever code word the team is using in place of the client's name, so the exercise can be discussed more openly.

The tier identifies which C2 channel we're using.

The proxyPort is where the SOCKS proxy is opened for the operators on the exercise.

These labels make it easy to find all the related pods in the cluster. For instance, if I wanted to find all the deployed pods for ops I could run:

kubectl -n tomato get pods -l tier=ops

When done, the output should look something like this:

NAME                                      READY   STATUS    RESTARTS   AGE
tomatoopsadc2-bb58cb858-8fpbr             1/1     Running   0          2m37s
tomatoopsnginx-gitsync-7567479667-s9lzh   2/2     Running   0          47h

If I wanted to see every AdaptixC2 instance deployed, especially when we have multiple channels:

kubectl -n tomato get pods -l app=adaptixc2
NAME                            READY   STATUS    RESTARTS   AGE
tomatoopsadc2-bb58cb858-8fpbr   1/1     Running   0          5m8s

The other key variables to note are the image settings. These let us change the image per exercise, or even per tier, pinning the version we want or upgrading to a new one.
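
For example, you could pin a different image tag at install time without touching the chart (the tag here is just illustrative):

helm install test adaptixc2 --set image.tag=2025-10-01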

The rest of the files reuse these variables. Let's review them, and I'll call out where we use a variable we haven't defined yet.

Service Files
#

Instead of the single service from our test deployment, we will need two.

adaptixweb_service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.excerciseName }}{{ .Values.tier }}adc2-webservice
  labels:
    excercise: {{ .Values.excerciseName }}
    tier: {{ .Values.tier }}
    app: adaptixc2
  namespace: {{ .Values.excerciseName }}
spec:
  type: ClusterIP
  ports:
    - protocol: TCP
      port: 443
      targetPort: httpsport
      name: httpsport
  selector:
    excercise: {{ .Values.excerciseName }}
    tier: {{ .Values.tier }}
    app: adaptixc2

operator_service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.excerciseName }}{{ .Values.tier }}adc2-operatorservice
  labels:
    excercise: {{ .Values.excerciseName }}
    tier: {{ .Values.tier }}
    app: adaptixc2
  namespace: {{ .Values.excerciseName }}
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: {{ int .Values.operatorPort }}
      targetPort: operatorport
      name: operator-port
    - protocol: TCP
      port: 4444
      targetPort: {{ int .Values.proxyPort }}
      name: proxy-port
  selector:
    excercise: {{ .Values.excerciseName }}
    tier: {{ .Values.tier }}
    app: adaptixc2

operator_service.yaml defines where our operators will connect; it introduces the operatorPort variable and reuses proxyPort.
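
Once installed, you can sanity-check that the operator service got an external address (output depends on your load balancer):

kubectl -n tomato get svc tomatoopsadc2-operatorservice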

Persistent Volumes
#

The original template didn't include a PVC. Since storage is opinionated per deployment, our template defines longhorn as the expected storage class. And if we need to change the maximum allocated space, we have a variable for that too.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Values.excerciseName }}{{ .Values.tier }}adc2-pvc
  namespace: {{ .Values.excerciseName }}
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: {{ .Values.MaxStorage }}
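
After an install you can verify the claim bound to a Longhorn volume:

kubectl -n tomato get pvc tomatoopsadc2-pvc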

Unchanged and unneeded files
#

The service account was giving me issues and wasn't needed, so I ended up deleting serviceaccount.yaml and removing the references to it.
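
If you are following along, removing it from the chart looks like:

rm adaptixc2/templates/serviceaccount.yaml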

_helpers.tpl:

{{/*
Expand the name of the chart.
*/}}
{{- define "adaptixc2.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "adaptixc2.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}

{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "adaptixc2.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Common labels
*/}}
{{- define "adaptixc2.labels" -}}
helm.sh/chart: {{ include "adaptixc2.chart" . }}
{{ include "adaptixc2.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}

{{/*
Selector labels
*/}}
{{- define "adaptixc2.selectorLabels" -}}
app.kubernetes.io/name: {{ include "adaptixc2.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

{{/*
Create the name of the service account to use
*/}}
{{- define "adaptixc2.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "adaptixc2.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}

The serviceAccountName reference in the deployment was commented out as well.

The ingress.yaml and hpa.yaml are unchanged and currently unused, but they could be useful in the future, so I left them.

Since Adaptix will most likely be deployed behind multiple redirectors and the nginx service, it's best to define the ingress outside of the helm chart.

Test Case
#

Finally, for the test case I set up a netcat check that the operator port is open.

apiVersion: v1
kind: Pod
metadata:
  name: "{{ include "adaptixc2.fullname" . }}-test-connection"
  labels:
    {{- include "adaptixc2.labels" . | nindent 4 }}
  namespace: {{ .Values.excerciseName }}
  annotations:
    "helm.sh/hook": test
spec:
  containers:
    - name: ncat-test
      image: busybox
      command: ['sh','-c']
      args: ['nc -zv {{ .Values.excerciseName }}{{ .Values.tier }}adc2-operatorservice {{ .Values.operatorPort }}']
  restartPolicy: Never
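
Because the pod carries the helm.sh/hook: test annotation, it only runs on demand; once the release is installed (we'll name it test below) you can trigger it with:

helm test test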

Values file
#

The values.yaml file in the root of the chart is where the chart's default values live.

This is the file as I have it currently; a lot of it is unused, but I tried to keep the parts that are used at the top of the file.

# Default values for adaptixc2.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.



excerciseName: "tomato"
tier: ops
operatorPort: 4321
proxyPort: 4444
MaxStorage: 2Gi



# This sets the container image more information can be found here: https://kubernetes.io/docs/concepts/containers/images/
image:
  repository: gitea.dev.th3redc0rner.com/redcorner/adaptixc2
  # This sets the pull policy for images.
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: "2025-09-15"


resources:
  # Default limits are set here; requests are left commented out as a
  # conscious choice for the user. Adjust as necessary for your environment.
  limits:
    cpu: 100m
    memory: 512Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi



# This will set the replicaset count more information can be found here: https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/
replicaCount: 1

# This is for the secrets for pulling an image from a private repository more information can be found here: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
imagePullSecrets: []
# This is to override the chart name.
nameOverride: ""
fullnameOverride: ""

# This section builds out the service account more information can be found here: https://kubernetes.io/docs/concepts/security/service-accounts/
serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Automatically mount a ServiceAccount's API credentials?
  automount: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

# This is for setting Kubernetes Annotations to a Pod.
# For more information checkout: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
podAnnotations: {}
# This is for setting Kubernetes Labels to a Pod.
# For more information checkout: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
podLabels: {}

# This is for setting up a service more information can be found here: https://kubernetes.io/docs/concepts/services-networking/service/
service:
  # This sets the service type more information can be found here: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
  #  type: ClusterIP
  # This sets the ports more information can be found here: https://kubernetes.io/docs/concepts/services-networking/service/#field-spec-ports
  #  port: 80

# This block is for setting up the ingress for more information can be found here: https://kubernetes.io/docs/concepts/services-networking/ingress/
ingress:
  enabled: false
  className: ""
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

# This section is for setting up autoscaling more information can be found here: https://kubernetes.io/docs/concepts/workloads/autoscaling/
autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80

# Additional volumes on the output Deployment definition.
#volumes: []
# - name: foo
#   secret:
#     secretName: mysecret
#     optional: false

# Additional volumeMounts on the output Deployment definition.
volumeMounts: []
# - name: foo
#   mountPath: "/etc/foo"
#   readOnly: true

nodeSelector: {}

tolerations: []

affinity: {}

When we actually deploy this in an environment beyond testing, I'll cover passing only the values we intend to change. For now, this file defines the defaults.
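
As a preview, a per-exercise override file only needs the values that differ from these defaults; something like this (the names here are illustrative):

# exercise-values.yaml
excerciseName: "potato"
tier: longhaul
proxyPort: 5555

helm install longhaul adaptixc2 -f exercise-values.yaml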

Testing the chart
#

To make sure everything works, we can test this chart. There are some items we have to create first: for the chart to actually deploy, we need a namespace and the secret for the profile.

To create the namespace run:

kubectl create namespace tomato

Tomato was the default value we used for our exercise name.

The profile is not part of the templates so that we can deploy it separately and keep it encrypted in our git repository. For testing purposes we are going to modify the one we used in the base deployment.

profile-secret.yaml

apiVersion: v1
data:
  profile-data: ewogICJUZWFtc2VydmVyIjogewogICAgImludGVyZmFjZSI6ICIwLjAuMC4wIiwKICAgICJwb3J0IjogNDMyMSwKICAgICJlbmRwb2ludCI6ICIvZW5kcG9pbnQiLAogICAgInBhc3N3b3JkIjogInBhc3MiLAogICAgImNlcnQiOiAiY2VydC9jZXJ0LnBlbSIsCiAgICAia2V5IjogImNlcnQvY2VydC5rZXkiLAogICAgImV4dGVuZGVycyI6IFsKICAgICAgImV4dGVuZGVycy9saXN0ZW5lcl9iZWFjb25faHR0cC9jb25maWcuanNvbiIsCiAgICAgICJleHRlbmRlcnMvbGlzdGVuZXJfYmVhY29uX3NtYi9jb25maWcuanNvbiIsCiAgICAgICJleHRlbmRlcnMvbGlzdGVuZXJfYmVhY29uX3RjcC9jb25maWcuanNvbiIsCiAgICAgICJleHRlbmRlcnMvYWdlbnRfYmVhY29uL2NvbmZpZy5qc29uIiwKICAgICAgImV4dGVuZGVycy9saXN0ZW5lcl9nb3BoZXJfdGNwL2NvbmZpZy5qc29uIiwKICAgICAgImV4dGVuZGVycy9hZ2VudF9nb3BoZXIvY29uZmlnLmpzb24iCiAgICBdLAogICAgImFjY2Vzc190b2tlbl9saXZlX2hvdXJzIjogMTIsCiAgICAicmVmcmVzaF90b2tlbl9saXZlX2hvdXJzIjogMTY4CiAgfSwKCiAgIlNlcnZlclJlc3BvbnNlIjogewogICAgInN0YXR1cyI6IDQwNCwKICAgICJoZWFkZXJzIjogewogICAgICAiQ29udGVudC1UeXBlIjogInRleHQvaHRtbDsgY2hhcnNldD1VVEYtOCIsCiAgICAgICJTZXJ2ZXIiOiAiQWRhcHRpeEMyIiwKICAgICAgIkFkYXB0aXggVmVyc2lvbiI6ICJ2MC44IgogICAgfSwKICAgICJwYWdlIjogIjQwNHBhZ2UuaHRtbCIKICB9LAoKICAiRXZlbnRDYWxsYmFjayI6IHsKICAgICJUZWxlZ3JhbSI6IHsKICAgICAgInRva2VuIjogIiIsCiAgICAgICJjaGF0c19pZCI6IFtdCiAgICB9LAogICAgIm5ld19hZ2VudF9tZXNzYWdlIjogIk5ldyBhZ2VudDogJXR5cGUlICglaWQlKVxuXG4ldXNlciUgQCAlY29tcHV0ZXIlICglaW50ZXJuYWxpcCUpXG5lbGV2YXRlZDogJWVsZXZhdGVkJVxuZnJvbTogJWV4dGVybmFsaXAlXG5kb21haW46ICVkb21haW4lIiwKICAgICJuZXdfY3JlZF9tZXNzYWdlIjogIk5ldyBzZWNyZXQgWyV0eXBlJV06XG5cbiV1c2VybmFtZSUgOiAlcGFzc3dvcmQlICglZG9tYWluJSlcblxuU3RvcmFnZTogJXN0b3JhZ2UlXG5Ib3N0OiAlaG9zdCUiLAogICAgIm5ld19kb3dubG9hZF9tZXNzYWdlIjoiRmlsZSBzYXZlZDogJXBhdGglIFslc2l6ZSVdIGZyb20gJWNvbXB1dGVyJSAoJXVzZXIlKSIKICB9Cn0K
kind: Secret
metadata:
  creationTimestamp: null
  name: tomatoopsadc2-profile
  namespace: tomato

This can be applied by running:

kubectl apply -f profile-secret.yaml
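
Alternatively, if you would rather not hand-edit base64, kubectl can generate an equivalent secret manifest straight from a local copy of the profile (assuming it is saved as profile.cfg):

kubectl -n tomato create secret generic tomatoopsadc2-profile \
  --from-file=profile-data=profile.cfg \
  --dry-run=client -o yaml > profile-secret.yaml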

Now we can actually test the deployment of this chart. From the root of the helmcharts directory, run the following command:

helm install test adaptixc2

This will name the release test and deploy our templates as we created them.
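
Before (or instead of) installing, you can also lint the chart and render the manifests locally to catch templating mistakes:

helm lint adaptixc2
helm template test adaptixc2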

We can check the status of the chart by running:

helm status test

The output should be:

admin1@devadmin1:~/dev/helmcharts$ helm status test
NAME: test
LAST DEPLOYED: Tue Oct 21 01:08:25 2025
NAMESPACE: default
STATUS: deployed
REVISION: 1

Then we can check that it deployed correctly by running:

kubectl get pods -n tomato

The output should be:

NAME                                      READY   STATUS    RESTARTS   AGE
tomatoopsadc2-bb58cb858-8fpbr             1/1     Running   0          2m37s
tomatoopsnginx-gitsync-7567479667-s9lzh   2/2     Running   0          3d

Nginx Helm Chart
#

We are going to create the scaffold as we did before, then apply our changes from the original deployment files.

helm create nginx-gitsync

When we are done, these are the files we will have left in the templates folder, so you can delete the others (see the removal command after the list).

NOTES.txt
_helpers.tpl
deployment.yaml
hpa.yaml
nginx-cm.yaml
service.yaml
tests
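
Removing the scaffolded files we don't need could look like this (nginx-cm.yaml is a file we'll add ourselves):

rm nginx-gitsync/templates/ingress.yaml nginx-gitsync/templates/serviceaccount.yaml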

Deployment File
#

Our deployment.yaml file is going to look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.excerciseName }}{{ .Values.tier }}nginx-gitsync
  labels:
    excercise: {{ .Values.excerciseName }}
    tier: {{ .Values.tier }}
    app: nginxgitsync
  namespace: {{ .Values.excerciseName }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      excercise: {{ .Values.excerciseName }}
      tier: {{ .Values.tier }}
      app: nginxgitsync
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        excercise: {{ .Values.excerciseName }}
        tier: {{ .Values.tier }}
        app: nginxgitsync
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
{{/*      #serviceAccountName: {{ include "nginx-gitsync.serviceAccountName" . }} */}}
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
      containers:
        - name: {{ .Values.excerciseName }}{{ .Values.tier }}nginx-gitsync
          securityContext:
            runAsUser: 1000
            runAsGroup: 1000
            runAsNonRoot: true
          image: "{{ .Values.nginxImage.repository }}:{{ .Values.nginxImage.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.nginxImage.pullPolicy }}
          ports:
            - name: httpport
              containerPort: 8080
              protocol: TCP
          {{- with .Values.livenessProbe }}
          livenessProbe:
            {{- toYaml . | nindent 12 }}
          {{- end }}
          {{- with .Values.readinessProbe }}
          readinessProbe:
            {{- toYaml . | nindent 12 }}
          {{- end }}
          {{- with .Values.resources }}
          resources:
            {{- toYaml . | nindent 12 }}
          {{- end }}
          args: ["nginx", "-g", "daemon off;"]
          volumeMounts:
            - name: webroot
              mountPath: /usr/share/nginx/html
            - name: nginx-cache
              mountPath: /var/cache/nginx
            - name: nginx-run
              mountPath: /run
            - name: nginx-conf
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
        - name: git-sync
          image: "{{ .Values.gitsyncImage.repository }}:{{ .Values.gitsyncImage.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.gitsyncImage.pullPolicy }}
          volumeMounts:
            - name: webroot
              mountPath: /git
          env:
            - name: GITSYNC_REPO
              value: {{ .Values.gitRepo }}   # <-- update to your repo
            - name: GITSYNC_REF
              value: "main"
            - name: GITSYNC_ROOT
              value: "/git"
            - name: GITSYNC_LINK
              value: "website"
            - name: GITSYNC_PERIOD
              value: "30s"
      volumes:
        - name: webroot
          emptyDir: {}
        - name: nginx-cache
          emptyDir: {}
        - name: nginx-run
          emptyDir: {}
        - name: nginx-conf
          configMap:
            name: {{ .Values.excerciseName }}{{ .Values.tier }}nginx-conf
            items:
              - key: nginx.conf
                path: nginx.conf
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}

We follow a similar pattern to the AdaptixC2 chart, defining our exercise, tier, and app labels.

The other important variable to note is gitRepo, which defines the repo the websites are pulled from.

ConfigMap
#

Our nginx-cm.yaml file is the config map for our nginx config.

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Values.excerciseName }}{{ .Values.tier }}nginx-conf
  labels:
    excercise: {{ .Values.excerciseName }}
    tier: {{ .Values.tier }}
    app: nginx-gitsync
  namespace: {{ .Values.excerciseName }}
data:
  nginx.conf: |
    events {}
    http {
      server {
        listen 8080;
        root /usr/share/nginx/html/website/{{ .Values.webRoot }};
        location / {
          index index.html;
        }
      }
    }

The additional variable we define here is webRoot. It points to the directory in the git repo that holds the static files for this website.
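
With the default values, the root directive renders to:

root /usr/share/nginx/html/website/www.example.com;

so whichever directory webRoot names inside the synced repo becomes the site nginx serves.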

Service File
#

service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.excerciseName }}{{ .Values.tier }}nginx-webservice
  labels:
    excercise: {{ .Values.excerciseName }}
    tier: {{ .Values.tier }}
    app: nginx-gitsync
  namespace: {{ .Values.excerciseName }}
spec:
  type: ClusterIP
  ports:
    - protocol: TCP
      port: {{ int .Values.httpPort }}
      targetPort: httpport
      name: httpport
  selector:
    excercise: {{ .Values.excerciseName }}
    tier: {{ .Values.tier }}
    app: nginxgitsync

This file is pretty straightforward. The httpPort is exposed as a variable so that another deployment can set a port other than the default in values.yaml.

Helper file
#

_helpers.tpl:

{{/*
Expand the name of the chart.
*/}}
{{- define "nginx-gitsync.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "nginx-gitsync.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}

{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "nginx-gitsync.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}

{{/*
Common labels
*/}}
{{- define "nginx-gitsync.labels" -}}
helm.sh/chart: {{ include "nginx-gitsync.chart" . }}
{{ include "nginx-gitsync.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}

{{/*
Selector labels
*/}}
{{- define "nginx-gitsync.selectorLabels" -}}
app.kubernetes.io/name: {{ include "nginx-gitsync.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}

{{/*
Create the name of the service account to use
{{- define "nginx-gitsync.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "nginx-gitsync.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}
*/}}

The only change is the commented-out service account.

Test file tests/test-connection.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: "{{ include "nginx-gitsync.fullname" . }}-test-connection"
  labels:
    {{- include "nginx-gitsync.labels" . | nindent 4 }}
  annotations:
    "helm.sh/hook": test
spec:
  containers:
    - name: wget
      image: busybox
      command: ['wget']
      args: ['{{ .Values.excerciseName }}{{ .Values.tier }}nginx-webservice:{{ .Values.httpPort }}']
  restartPolicy: Never
  

This test checks that our web service is responding properly.
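
As with the AdaptixC2 chart, the hook only runs when invoked, so after installing the release you can run:

helm test test2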

Values File
#


# Default values for nginx-gitsync.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

excerciseName: "tomato"
tier: ops
httpPort: 80
webRoot: "www.example.com" #the directory in our git repo that would become the webroot
gitRepo: "https://gitea.dev.th3redc0rner.com/RedCorner/hostedsites"
# This will set the replicaset count more information can be found here: https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/
replicaCount: 1

# This sets the container image more information can be found here: https://kubernetes.io/docs/concepts/containers/images/
nginxImage:
  repository: nginx
  # This sets the pull policy for images.
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: "1.29.2-alpine3.22"

gitsyncImage:
  repository: registry.k8s.io/git-sync/git-sync
  # This sets the pull policy for images.
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: "v4.2.3"
# This is for the secrets for pulling an image from a private repository more information can be found here: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
imagePullSecrets: []
# This is to override the chart name.
nameOverride: ""
fullnameOverride: ""

resources:
  # Default limits are set here; requests are left commented out as a
  # conscious choice for the user. Adjust as necessary for your environment.
  limits:
    cpu: 100m
    memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi
# This section builds out the service account more information can be found here: https://kubernetes.io/docs/concepts/security/service-accounts/
#serviceAccount:
  # Specifies whether a service account should be created
  # create: true
  # Automatically mount a ServiceAccount's API credentials?
  # automount: true
  # Annotations to add to the service account
  # annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  # name: ""

# This is for setting Kubernetes Annotations to a Pod.
# For more information checkout: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
podAnnotations: {}
# This is for setting Kubernetes Labels to a Pod.
# For more information checkout: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
podLabels: {}

# This is to setup the liveness and readiness probes more information can be found here: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
livenessProbe:
  httpGet:
    path: /
    port: httpport
readinessProbe:
  httpGet:
    path: /
    port: httpport

# This section is for setting up autoscaling more information can be found here: https://kubernetes.io/docs/concepts/workloads/autoscaling/
autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80

# Additional volumes on the output Deployment definition.
#volumes: []
# - name: foo
#   secret:
#     secretName: mysecret
#     optional: false

# Additional volumeMounts on the output Deployment definition.
#volumeMounts: []
# - name: foo
#   mountPath: "/etc/foo"
#   readOnly: true

nodeSelector: {}

tolerations: []

affinity: {}

Just like with AdaptixC2, the key values to change are at the top of the file, and the image repositories and tag versions can be modified per deployment.

That wraps up the nginx helm chart.

One thing to note: in both charts I ended up commenting out all the lines in NOTES.txt. NOTES.txt normally holds basic usage instructions for the chart that print after it is deployed.

Testing the chart
#

From the helmcharts root directory, run an install:

helm install test2 nginx-gitsync

If successful, both deployments should show up:

kubectl get pods -n tomato

Should output:

NAME                                      READY   STATUS    RESTARTS        AGE
tomatoopsadc2-bb58cb858-8fpbr             1/1     Running   0               2d
tomatoopsnginx-gitsync-7567479667-s9lzh   2/2     Running   2 (5h55m ago)   4d

To clean up the test deployments, run:

helm uninstall test
helm uninstall test2

In the next post we will focus on pushing the charts into our repo, packaging them, and showing how to use them in our deployment pipeline on our staging server.