Helm: Managing My Dev and Prod Environments in K3s
I’d been deploying my Zillow Housing Forecast application using the traditional kubectl apply -f method. Now that I’d created different types of deployment YAMLs (single-node, multi-node, dev/prod), it was time to package and organize them into a Helm chart. I did this while setting up my dev/staging environment, which is almost 1:1 with my “prod” environment, save for the hostname, image pull policy, and container tags.
Installing Helm on K3s
It’s important to note that k3s stores the kubeconfig file at /etc/rancher/k3s/k3s.yaml rather than the traditional ~/.kube/config, since installing the cluster system-wide is simpler than installing it per user. That makes sense for my homelab use case, where I’m only managing the one cluster. So we need to export the KUBECONFIG variable:
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
# Persist on reboot:
echo 'export KUBECONFIG=/etc/rancher/k3s/k3s.yaml' >> ~/.bashrc
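With the kubeconfig sorted, installing Helm itself is quick. A minimal sketch, assuming the official get-helm-3 installer script is used (a package manager install works just as well):

```bash
# Install Helm 3 via the official installer script, then confirm it can reach the cluster
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
helm version
helm ls   # should return an empty release list, not a kubeconfig error
```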
Parameterizing replicas and setting pod affinity rules
I started by creating the chart scaffold and copying my definition YAMLs into the templates/ directory (excluding my namespace definition file). I also removed the namespace lines from my YAMLs because I want Helm to handle the namespace when the chart is deployed via the CLI.
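Roughly what that step looks like on the CLI; the source manifest file names below are placeholders rather than my exact paths:

```bash
# Scaffold the chart, clear out the generated example templates, and drop in my own manifests
helm create zhf-chart
rm zhf-chart/templates/*.yaml    # keeps _helpers.tpl, removes the helm create boilerplate
cp app-deployment.yaml app-service.yaml db-deployment.yaml db-service.yaml zhf-chart/templates/
# namespace.yaml stays out of templates/ -- the namespace is supplied at install time instead
```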
Since my idea is to separate single-node deployments from multi-node deployments, I want to parameterize replicas and podAffinity. On a single-node deployment, podAffinity isn’t needed, but a multi-node deployment should have “preferred” anti-affinity based on node hostname.
This is a snippet of my frontend app deployment, and I have a similar setup for my DB deployment. The default is 1 replica, and I’m assuming that anyone installing with 1 replica doesn’t need pod affinity rules. If the replica count is greater than 1, I apply soft anti-affinity rules so that the pods prefer to spread across nodes.
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      io.kompose.service: app
      environment: {{ .Values.environment }}
  template:
    metadata:
      labels:
        io.kompose.service: app
        environment: {{ .Values.environment }}
    spec:
      # Spread pods across nodes if running multiple replicas
      {{- if gt (int (default 1 .Values.replicaCount)) 1 }}
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: io.kompose.service
                      operator: In
                      values:
                        - app
                topologyKey: kubernetes.io/hostname
      {{- end }}
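A quick way to sanity-check the conditional is to render the chart locally and toggle the replica count; the anti-affinity block should only appear when replicaCount is greater than 1:

```bash
# With a single replica, no anti-affinity should be rendered
helm template zhf-chart --set replicaCount=1 | grep podAntiAffinity || echo "no anti-affinity rendered"
# With multiple replicas, the preferred anti-affinity rules should show up
helm template zhf-chart --set replicaCount=3 | grep -A 6 podAntiAffinity
```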
values.yaml for dev and prod
I’ve set up two values files, one for each environment, with some key differences. I’m using the “Always” pull policy for dev to ensure that the latest container image is pulled on every deployment. For prod, I’ll be updating the container image much less frequently and prefer the stable “IfNotPresent” policy to optimize rollout times.
DEV
replicaCount: 2 # For a 1:1 match with prod. If I eventually run into resource constraints, I'll drop dev down to 1
# Monitoring
monitoring:
  enabled: true
  prometheusRelease: prometheus
# Specify dev or prod
environment: "dev"
# This sets the container image. More information can be found here: https://kubernetes.io/docs/concepts/containers/images/
appImage:
  repository: dstanecki/zhf
  pullPolicy: Always
  tag: "latest"
dbImage:
  repository: dstanecki/zhf-mariadb
  pullPolicy: Always
  tag: "latest"
PROD
replicaCount: 2
# Monitoring
monitoring:
  enabled: true
  prometheusRelease: prometheus
# Specify dev or prod
environment: "prod"
# This sets the container image. More information can be found here: https://kubernetes.io/docs/concepts/containers/images/
appImage:
  repository: dstanecki/zhf
  pullPolicy: IfNotPresent
  tag: "v1.0.0"
dbImage:
  repository: dstanecki/zhf-mariadb
  pullPolicy: IfNotPresent
  tag: "v1.0.0"
Helper template
I’m using a helper template to handle my hostname variable which is repeated frequently.
{{/*
Return the correct hostname based on the environment.
*/}}
{{- define "zhf.hostname" -}}
{{- if eq .Values.environment "prod" -}}
zhf.danielstanecki.com
{{- else -}}
zhf-dev.danielstanecki.com
{{- end -}}
{{- end -}}
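Any template that needs the hostname can then consume the helper with include. A hypothetical Ingress excerpt (the real template has more fields than this):

```yaml
# Illustrative Ingress snippet: the helper is pulled in via "include"
spec:
  rules:
    - host: {{ include "zhf.hostname" . }}
```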
Final Thoughts & Notes
This was a solid start to learning how Helm works, and I plan to continue using it to deploy and parameterize my Kubernetes apps.
Notes
- A Chart is a package that contains K8s manifest templates and parameterization files, and can have subcharts
- A Release is an instance of a chart running in a cluster. If you install the same chart twice, each will have its own “release” so that they can coexist
- In this command, happy-panda is the release name, bitnami is the repo, and wordpress is the chart
helm install happy-panda bitnami/wordpress
- Can use --generate-name in place of a release name
- Check status
helm status happy-panda
- Charts have default configurations that are installed if you don’t specify values
- See what options are configurable on a chart
helm show values bitnami/wordpress
# Apply your own custom values.yaml:
helm install -f values.yaml bitnami/wordpress --generate-name
- Deploying my own zhf Helm chart:
```bash
helm create zhf-chart
helm lint zhf-chart      # Validate formatting
helm template zhf-chart  # Double-check chart templates
helm package zhf-chart
helm install <release-name> ./zhf-chart-<version>.tgz
```
- Helm renders template actions like `{{ if }}` out as whitespace, which can cause empty lines in the generated manifests
- To fix, use the hyphen form (`{{- if }}` / `{{- end }}`) and the hyphen will signal to NOT create an empty whitespace line