Our Kubernetes journey at Algolia began with one question: How can our development team deploy new services with more flexibility?

Two years ago, we were heavy users of bare-metal machines. This changed when we assessed Kubernetes as a way to automate hardware allocation and manage the life cycle of our workloads and services. Today, most of our products are deployed on Kubernetes.

Below are seven “best practices” for using Kubernetes — a combination of tips, explanations and lessons learned from the field.

1. Do not use the “root” user in containers

Kubernetes runs applications in containers: logical partitions of the underlying host’s resources. Unlike full virtualization, applications running in containers work directly within the host operating system. This greatly improves resource efficiency and startup time, but also loosens security isolation. By default, the root user in a container is the same as the host’s root, so an application running as root that manages to escape its container has full access to the entire host.

There are several ways to avoid this. When running on stock Kubernetes, you can simply modify the setup of the container for your application (typically described in a Dockerfile) to create a dedicated user with limited rights, and run the application under that identity.
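As an illustration, here is a minimal sketch of what this can also look like at the Kubernetes level, using a container-level “securityContext” (the pod name, image and user ID below are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app                # hypothetical pod name
spec:
  containers:
    - name: my-app
      image: my-app:1.2.3     # hypothetical image
      securityContext:
        runAsNonRoot: true    # refuse to start if the image would run as root
        runAsUser: 1000       # run as an unprivileged user ID
        allowPrivilegeEscalation: false
```

Note that this complements, rather than replaces, creating a non-root user in the image itself (the “USER” instruction in a Dockerfile).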

2. Configure resource requests and limits

Kubernetes is a container orchestrator. Ask Kubernetes to run a Docker image, and it will select “nodes” (machines), create “pods” (entities that manage containers), and run the image. For Kubernetes to allocate hardware well, it is recommended to declare the resources a pod typically consumes for a given image.

With the “requests” property, you can tell Kubernetes what resources a pod absolutely requires to run a given container. Resist the urge to systematically ask for big machines: you would reserve more than you need, which will cost you in the long run.

With the “limits” property, you can tell Kubernetes the amount of resources a pod is not supposed to exceed for a given container. See it as the last safeguard against CPU loops and memory leaks: Kubernetes throttles a container’s CPU usage above its limit, and kills any container exceeding its memory limit.
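As a sketch, requests and limits are declared per container in the pod manifest (the values below are arbitrary):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: my-app:1.2.3
      resources:
        requests:
          cpu: "250m"      # a quarter of a CPU core is guaranteed
          memory: "256Mi"
        limits:
          cpu: "500m"      # CPU usage is throttled above half a core
          memory: "512Mi"  # the container is killed above this
```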

3. Specify pod anti-affinity

Assigning a pod to a node is not limited to finding a machine matching specific hardware requirements. Consider a critical service with high-availability requirements: when deploying several instances of a server (several “replicas” of the same “pod” in Kubernetes parlance), it is generally desirable to run each of them on a different machine.

This can be accomplished by specifying “anti-affinity”: rules preventing a given pod from being scheduled onto certain nodes, in this instance onto nodes already running a replica of that pod.
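A minimal sketch, assuming the replicas carry a hypothetical “app: my-app” label (this fragment goes into the pod template of a Deployment, for instance):

```yaml
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: my-app                        # the other replicas of this pod
          topologyKey: kubernetes.io/hostname    # at most one replica per node
```

The “required…” variant is a hard rule; “preferredDuringSchedulingIgnoredDuringExecution” expresses the same preference without blocking scheduling when no suitable node is left.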

4. Configure the liveness and readiness probes

Liveness and readiness are ways for applications to communicate their health. Configuring both helps Kubernetes manage pods’ lifecycles correctly.

The liveness probe assesses whether the applications running in a pod are answering in an acceptable amount of time. When an application enters a faulty state, its liveness signal should reflect it, so that Kubernetes can take the appropriate action, typically restarting the container.

The readiness probe tells Kubernetes whether a pod is ready to receive traffic. Like the liveness probe, it is checked continuously; it is a great way to temporarily disconnect a pod from traffic, for example while it warms up or reloads its configuration.
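As a sketch, both probes are declared on the container; the endpoints and timings below are hypothetical:

```yaml
containers:
  - name: my-app
    image: my-app:1.2.3
    livenessProbe:
      httpGet:
        path: /healthz          # hypothetical health endpoint
        port: 8080
      initialDelaySeconds: 10   # leave the application time to start
      periodSeconds: 10
      failureThreshold: 3       # restart the container after 3 consecutive failures
    readinessProbe:
      httpGet:
        path: /ready            # hypothetical readiness endpoint
        port: 8080
      periodSeconds: 5          # stop routing traffic as soon as this fails
```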

5. Specify a Pod Disruption Budget

Pods can be terminated at any time; this is called a “disruption.” Involuntary disruptions are triggered by exceptional events, such as hardware failures. Voluntary disruptions are initiated by a person or by Kubernetes itself, for example when a node is drained for maintenance.

Defining a “Pod Disruption Budget” tells Kubernetes how many pods it can afford to lose to voluntary disruptions. This is crucial for improving the availability of your system, as it prevents Kubernetes from voluntarily terminating too many instances of your service at once.
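A minimal sketch, assuming a service labeled “app: my-app” running with several replicas:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 2        # never voluntarily evict below 2 running replicas
  selector:
    matchLabels:
      app: my-app
```

You can also express the budget the other way around, with “maxUnavailable”.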

6. Handle the “SIGTERM” signal

When a shutdown is decided, Kubernetes notifies the pod that it is about to be terminated.

This is notably done by sending the “SIGTERM” signal to the main process of each of the pod’s containers. Make sure your applications react appropriately to it (e.g. by closing connections, or saving their state) and stop gracefully: once the grace period expires, Kubernetes forcibly kills any container still running with “SIGKILL.”
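On the Kubernetes side, you can give your application more time between “SIGTERM” and the final “SIGKILL” by extending the grace period in the pod spec; a sketch:

```yaml
spec:
  terminationGracePeriodSeconds: 60   # default is 30 seconds
  containers:
    - name: my-app
      image: my-app:1.2.3
```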

7. Prefer declarative manifests

There are two ways to interact with Kubernetes: the imperative way (asking Kubernetes to create, update or delete entities) and the declarative way (sending Kubernetes a manifest describing a target state).

The declarative way makes the description of your setup independent of the state Kubernetes is currently in. This is key for performing rollbacks painlessly. Note that for this to work correctly, manifests themselves must reference external resources in a stable way, via immutable identifiers such as image digests, and not through mutable tags such as “latest.”
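A minimal sketch of such a manifest, with the image pinned by digest (the digest below is a placeholder):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          # pinned by an immutable digest, not by a mutable tag like "latest"
          image: registry.example.com/my-app@sha256:0000000000000000000000000000000000000000000000000000000000000000
```

Applying this file with “kubectl apply” lets Kubernetes reconcile the cluster toward the described state; rolling back then amounts to applying the previous version of the file.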

While we see these practices as good recommendations for working with Kubernetes, they are by no means absolute. Your team will have its own interpretation, and will come up with its own set of tips. When this happens, write them down and share them, so that we can all benefit from our discoveries!