Helm – Sane App Management in Kubernetes

So you know all about Kubernetes and how it manages your containers, hosts your ingresses, mounts your volumes, schedules your jobs, feeds your dog and makes your coffee. You know that Kubernetes is for devops folks who don’t want to be woken up at 4 AM on Sunday to rebuild a failed server or to spend three months re-architecting an application when the CFO decides you should use a cheaper cloud provider. Kubernetes lets you replace your server after lunch on Monday and migrate your app to a new cloud between meetings on Tuesday.

Your app’s Kubernetes primitives (deployments, pods, jobs, volumes, claims, services, ingresses, and so on) need to be managed somehow. If you’re like us, your first attempt would be to write your resource YAMLs explicitly and keep them in source control. If you need to support a few different environments, such as multiple cloud providers, a cloud plus a bare-metal cluster, or Minikube, then you’d probably copy your YAMLs into a directory per environment and make whatever changes each environment requires.

This method has a few problems:

  • You’ve duplicated parts of many resources. Now when you make a change to a resource, you need to remember to duplicate that change in each variation of that resource.
  • When you deploy a new instance of your application, you must apply all the necessary resources in the right order and handle any failures. When you update the application, you must update whatever resources have changed and delete whatever resources have been removed. There’s a lot of room for error, and errors may not be easy to fix.
  • You need a method to manage tasks outside of Kubernetes. For example, if each app instance needs its own database user account, you need to script or document that step separately from the installation in Kubernetes.
  • You have to install your application dependencies manually. For example, if you need a Redis cache service, you must install it in the appropriate order in your setup process.


Helm solves all these problems. In Helm, Kubernetes resource YAMLs are written as templates. The collection of templates and related information is called a “chart”. Templates are very flexible and allow resources to be included and configured based on data provided to Helm. Helm also understands the application management lifecycle, making installation, upgrade, and removal a breeze.

As an example of a Helm-templated resource definition, consider:

{{- if .Values.dbdump.enabled }}
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: dbdump
spec:
  schedule: '{{ default "0 8 * * *" .Values.dbdump.schedule }}'
[...many lines snipped...]
{{- end }}

This simple example shows two basic template techniques:

  1. The use of an if block to install the CronJob only when the value dbdump.enabled is true.
  2. The use of a template expression with the default function to run the job on the schedule given in dbdump.schedule, or at 8:00 AM every day if dbdump.schedule is not set.
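For instance, the template above could be driven by a values file along these lines (a sketch; only dbdump.enabled and dbdump.schedule appear in the example above):

```yaml
# values.yaml (sketch) -- enables the dbdump CronJob and overrides the
# default schedule. With dbdump.enabled false (or unset), the whole
# resource is omitted from the rendered manifests.
dbdump:
  enabled: true
  schedule: "30 2 * * *"   # nightly at 02:30 instead of the 08:00 default
```

The same values can also be supplied on the command line, e.g. `helm install --set dbdump.enabled=true ...`.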

Here are a few benefits of using Helm:

Support multiple environments with single resources

There’s no need to keep resource YAML that’s specific to a single environment. YAML files are templated and can install whatever resources are necessary for the environment where the application is being installed. For example, when a pod is configured, a value can come from the Helm command line, and a default value can be given by the chart. This is useful for allowing an application to use a specific persistent volume claim, if provided, and to create a new PVC if none is given. The templates can also be used to install a resource only under certain conditions. For example, when using Google Cloud, the chart could deploy resources for a Google Cloud Load Balancer, but when using Minikube it could deploy an ingress resource. Alternatively, the conditions can be based on business requirements, such as deploying a search service only if the customer has paid for a search feature.
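The persistent volume claim pattern described above might be sketched like this (the value names persistence.existingClaim and persistence.size are assumptions for illustration, not from any particular chart):

```yaml
# Sketch: use a caller-supplied claim if one is given,
# otherwise create a fresh PVC for this release.
{{- if not .Values.persistence.existingClaim }}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Release.Name }}-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: {{ default "8Gi" .Values.persistence.size }}
{{- end }}
```

The pod spec would then reference either the supplied claim name or the release-specific one, depending on which value is set.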

Remove risk and tedium from application installation and updating

Helm understands Kubernetes primitives. For example, when installing an application Helm will install volumes and networking before the deployments that depend on them. You don’t need to worry about adding resources in the correct order or about forgetting to add a resource. Helm also understands what resources need to be restarted when they are reconfigured and what needs to be deleted if a resource is removed from a chart.

This capability also makes it easy to recover from failures. For example, if you accidentally delete a ConfigMap, you can simply reinstall the application to restore the missing resource.

Helm charts are versioned, so it’s easy to see which version of the chart each installed application came from.

Automating app lifecycle events

Helm even helps manage tasks outside the Kubernetes cluster. It offers lifecycle hooks: containers that Helm runs when certain release events happen, such as before or after installing, upgrading, or removing an application.

These hooks can be used to automate provisioning of databases, DNS records, storage accounts, or any other resource inside or outside the cluster.
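A hook is declared with an annotation on an ordinary resource. As a rough sketch, a pre-install provisioning Job might look like this (the image and script names are hypothetical placeholders):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-provision-db
  annotations:
    # Tells Helm to run this Job before the release's resources are installed
    "helm.sh/hook": pre-install
    # Clean up the Job once it completes successfully
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: provision-db
          image: mycompany/db-tools        # hypothetical image
          command: ["/scripts/provision-db.sh"]  # hypothetical script
```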

Manage dependencies automatically

Helm manages your application dependencies. A chart can specify other charts as dependencies, ensuring important services are installed when the application is deployed. The dependencies can be conditional, allowing the flexibility to, for example, use a managed cloud database in production but run the database inside the cluster when developing on Minikube.
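In Helm 2, dependencies are declared in a requirements.yaml file (later Helm versions moved this into Chart.yaml). A sketch of a conditional Redis dependency:

```yaml
# requirements.yaml (Helm 2 sketch) -- install the Redis chart alongside
# this application unless redis.enabled is set to false in the values.
dependencies:
  - name: redis
    version: "4.x.x"
    repository: https://kubernetes-charts.storage.googleapis.com
    condition: redis.enabled
```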

I think you’ll agree: Helm is the sane way to deploy applications in Kubernetes.

Categories: DevOps

By Matt Fox

November 11, 2018


VP TopLeft

