Dan Bond

Kubernetes Deployment: Canary release

May 11, 2018

Kubernetes makes light work of deploying a canary release.

Canary release is a technique to reduce the risk of introducing a new software version in production by slowly rolling out the change to a small subset of users before rolling it out to the entire infrastructure and making it available to everybody.

https://martinfowler.com/bliki/CanaryRelease.html

For example, let's say we want to deploy 4 stable replicas of my-service to a cluster. The YAML might look something like this:

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: my-service
  labels:
    app: my-service
spec:
  replicas: 4
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
        release: stable
    spec:
      containers:
      - name: my-service
        image: my-service:stable
        command:
        - run
        ports:
        - containerPort: 6000
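
Assuming we save this manifest as my-service.yaml (a filename chosen here just for illustration), we can apply it with:

kubectl apply -f my-service.yaml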

We can use labels to attach identifying metadata to a Deployment and its Pods. (Tip: it's good practice to keep these consistent throughout each service.)

In our Deployment, the spec selector uses the same app label as the Pod template's metadata. The selector field defines how the Deployment finds the Pods it should manage.
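
These labels also give us a handy way to inspect things from the command line. A quick sketch, assuming the default namespace:

kubectl get pods -l app=my-service                  # every Pod carrying the app label
kubectl get pods -l app=my-service,release=stable   # stable replicas only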

To expose our service to the internet, we can create a Service of type LoadBalancer:

apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: my-service
spec:
  ports:
  - port: 80
    targetPort: 6000
  type: LoadBalancer
  selector:
    app: my-service

Here we use the selector field to match the app label on our Deployment's Pods, which enables the Service to route traffic to the correct Pods.
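
As a quick sanity check, assuming the manifests above have been applied, we can confirm the Service has discovered our Pods by listing its endpoints:

kubectl get endpoints my-service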

Now it's time to introduce our canary release. As with our stable build, we will create another Deployment:

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: my-canary-service
  labels:
    app: my-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
        release: canary
    spec:
      containers:
      - name: my-service
        image: my-service:canary
        command:
        - run
        ports:
        - containerPort: 6000

However, this time we use a different name, a canary image tag, and only 1 replica, while keeping the same app label as our stable build. This allows the Service to route traffic from the Load Balancer to both our stable and canary Pods, yet gives us the flexibility to control everything about the canary container in isolation.
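
For instance, because the canary lives in its own Deployment, we can roll a new image onto it, scale it, or remove it entirely without touching the stable replicas. A sketch, where my-service:canary-v2 is a hypothetical image tag:

kubectl set image deployment/my-canary-service my-service=my-service:canary-v2
kubectl scale deployment/my-canary-service --replicas=2
kubectl delete deployment my-canary-service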

As we are now running 5 replicas of our service, in theory roughly 1 in 5 requests should be processed by the canary build. (Tip: this is not always the case, as a Kubernetes Service does not guarantee an even distribution of traffic.)
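
One rough way to eyeball the split is to hit the Load Balancer repeatedly and count responses per release. This assumes a hypothetical /release endpoint in my-service that echoes its release label, with EXTERNAL-IP taken from kubectl get service my-service:

for i in $(seq 1 50); do curl -s http://EXTERNAL-IP/release; echo; done | sort | uniq -c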


Resources