Kubernetes Local Development
If you run your application or service on Kubernetes, you most likely already have Kubernetes clusters for development and probably staging, and you may already have encountered challenges in managing and promoting both the Kubernetes configuration and the application code.
In the past few years, while running Kubernetes clusters, it became clear that we had a few challenges to tackle. One of the biggest hurdles was providing a local Kubernetes development environment for the Dev team.
Context
One of the most overlooked aspects of development is immediate feedback.
Even with a mature CI/CD pipeline, proper TDD/BDD code testing, and clean code promotion across environments, every time the dev team updates a service or a Kubernetes YAML file and deploys it to the development Kubernetes cluster, the round trip takes 10 to 30 minutes. Waiting that long for even a single-line change is a waste of productive time, and on top of that we are battling context switching and the time needed to refocus on the task.
The Problem
How can we eliminate dead time and give the development team fast feedback on any code or deployment change before promoting it to the Dev k8s cluster?
One of the prerequisites is to have mature mock services, as not everybody can afford to run a full-fledged Kubernetes cluster locally with all services available on the spot. Assuming we can mock external dependencies, we need a flexible local Kubernetes cluster with minimal resource requirements that allows us to run and validate changes before promoting them further through the CI/CD pipeline.
Solution
In this article, we will cover a few available options to run local development Kubernetes clusters.
These options may not work for all use cases. For example, a DevOps/SRE team may need to work on CNI Kubernetes policies that depend on the exact CNI and version being deployed, which some local Kubernetes solutions may not support.
We will cover k3d, kind, and minikube as potential local Kubernetes clusters for rapid development.
Minikube
Minikube is a cross-platform, community-driven Kubernetes distribution targeted to be used primarily in local environments.
Minikube supports the latest Kubernetes release, and you can also deploy older versions of Kubernetes up to 6+ previous minor versions.
Minikube can be deployed and used on Linux, macOS, and Windows.
Minikube has three deployment modes: as a VM (Hyper-V, VMware, VirtualBox, QEMU/KVM, etc.), as a container, or on bare metal.
Minikube supports multiple container runtimes (CRI-O, containerd, Docker).
Minikube supports multiple features such as LoadBalancer, filesystem mounts, and FeatureGates.
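For example, you can pin a specific (older) Kubernetes version, pick the driver, and use the load-balancer tunnel or host-folder mounts; the version and paths below are only illustrative:
~> minikube start --driver=docker --kubernetes-version=v1.20.7
# expose LoadBalancer services on the host (run in a separate terminal)
~> minikube tunnel
# mount a host directory into the cluster
~> minikube mount $HOME/src:/src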
In the example below, I will cover minikube running Kubernetes in Docker containers on Linux; for Windows, a few years back I posted a short tutorial on running minikube in Windows Hyper-V.
Minikube Kubernetes in docker
By default, minikube will run the latest Kubernetes in a Docker container if you do not specify a driver with --driver (formerly --vm-driver).
~> minikube start
😄 minikube v1.22.0 on Nixos 21.05.3248.6120ac5cd20 (Okapi)
✨ Automatically selected the docker driver. Other choices: kvm2, none, ssh
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
🔥 Creating docker container (CPUs=2, Memory=15900MB) ...
🐳 Preparing Kubernetes v1.21.2 on Docker 20.10.7 ...
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
Once you run minikube start, you will have a single-node cluster running in Docker.
⎈ minikube>~> docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2c70e46a9a73 gcr.io/k8s-minikube/kicbase:v0.0.25 "/usr/local/bin/entr…" 2 minutes ago Up 2 minutes 127.0.0.1:49162->22/tcp, 127.0.0.1:49161->2376/tcp, 127.0.0.1:49160->5000/tcp, 127.0.0.1:49159->8443/tcp, 127.0.0.1:49158->32443/tcp minikube
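You can also do a quick sanity check of the control plane with minikube status:
~> minikube status
# reports the state of the host, kubelet, apiserver, and kubeconfig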
Kubernetes version and service
⎈ minikube>~> kubectl version --v=10
I0920 14:42:50.480948 37492 loader.go:372] Config loaded from file: /home/mudrii/.kube/config
I0920 14:42:50.481396 37492 round_trippers.go:435] curl -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.22.1 (linux/amd64) kubernetes/632ed30" 'https://192.168.49.2:8443/version?timeout=32s'
I0920 14:42:50.481401 37492 cert_rotation.go:137] Starting client certificate rotation controller
I0920 14:42:50.486055 37492 round_trippers.go:454] GET https://192.168.49.2:8443/version?timeout=32s 200 OK in 4 milliseconds
I0920 14:42:50.486062 37492 round_trippers.go:460] Response Headers:
I0920 14:42:50.486065 37492 round_trippers.go:463] Cache-Control: no-cache, private
I0920 14:42:50.486068 37492 round_trippers.go:463] Content-Type: application/json
I0920 14:42:50.486070 37492 round_trippers.go:463] X-Kubernetes-Pf-Flowschema-Uid: 3ecc2902-bedc-48a3-ab73-67960d80ad8f
I0920 14:42:50.486072 37492 round_trippers.go:463] X-Kubernetes-Pf-Prioritylevel-Uid: 0518e829-243c-4582-a699-250fe4b0be5e
I0920 14:42:50.486074 37492 round_trippers.go:463] Content-Length: 263
I0920 14:42:50.486076 37492 round_trippers.go:463] Date: Mon, 20 Sep 2021 06:42:50 GMT
I0920 14:42:50.492363 37492 request.go:1181] Response Body: {
"major": "1",
"minor": "21",
"gitVersion": "v1.21.2",
"gitCommit": "092fbfbf53427de67cac1e9fa54aaa09a28371d7",
"gitTreeState": "clean",
"buildDate": "2021-06-16T12:53:14Z",
"goVersion": "go1.16.5",
"compiler": "gc",
"platform": "linux/amd64"
}
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.1", GitCommit:"632ed300f2c34f6d6d15ca4cef3d3c7073412212", GitTreeState:"archive", BuildDate:"1980-01-01T00:00:00Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"092fbfbf53427de67cac1e9fa54aaa09a28371d7", GitTreeState:"clean", BuildDate:"2021-06-16T12:53:14Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"linux/amd64"}
⎈ minikube>~> kubectl get nodes
NAME STATUS ROLES AGE VERSION
minikube Ready control-plane,master 13m v1.21.2
⎈ minikube>~> kubectl get po -ALL
NAMESPACE NAME READY STATUS RESTARTS AGE L
kube-system coredns-558bd4d5db-jn4zr 1/1 Running 0 13m
kube-system etcd-minikube 1/1 Running 0 13m
kube-system kube-apiserver-minikube 1/1 Running 0 13m
kube-system kube-controller-manager-minikube 1/1 Running 0 13m
kube-system kube-proxy-ct8hm 1/1 Running 0 13m
kube-system kube-scheduler-minikube 1/1 Running 0 13m
kube-system storage-provisioner 1/1 Running 1 13m
⎈ minikube>~> kubectl get services -ALL
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE L
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 14m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 14m
Running minikube Kubernetes in Docker allows you to run multiple nodes with multiple configuration options for CNI, CPU, runtime, etc., as in the example below.
Note: the following runs a four-node cluster with Calico as the CNI and the containerd runtime.
~> minikube start -n=4 --cni='calico' --container-runtime='containerd'
😄 minikube v1.22.0 on Nixos 21.05.3248.6120ac5cd20 (Okapi)
✨ Automatically selected the docker driver. Other choices: kvm2, none, ssh
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
💾 Downloading Kubernetes v1.21.2 preload ...
> preloaded-images-k8s-v11-v1...: 922.45 MiB / 922.45 MiB 100.00% 53.15 Mi
🔥 Creating docker container (CPUs=2, Memory=3975MB) ...
📦 Preparing Kubernetes v1.21.2 on containerd 1.4.6 ...
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔗 Configuring Calico (Container Networking Interface) ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
👍 Starting node minikube-m02 in cluster minikube
🚜 Pulling base image ...
🔥 Creating docker container (CPUs=2, Memory=3975MB) ...
🌐 Found network options:
▪ NO_PROXY=192.168.49.2
📦 Preparing Kubernetes v1.21.2 on containerd 1.4.6 ...
▪ env NO_PROXY=192.168.49.2
🔎 Verifying Kubernetes components...
👍 Starting node minikube-m03 in cluster minikube
🚜 Pulling base image ...
🔥 Creating docker container (CPUs=2, Memory=3975MB) ...
🌐 Found network options:
▪ NO_PROXY=192.168.49.2,192.168.49.3
📦 Preparing Kubernetes v1.21.2 on containerd 1.4.6 ...
▪ env NO_PROXY=192.168.49.2
▪ env NO_PROXY=192.168.49.2,192.168.49.3
🔎 Verifying Kubernetes components...
👍 Starting node minikube-m04 in cluster minikube
🚜 Pulling base image ...
🔥 Creating docker container (CPUs=2, Memory=3975MB) ...
🌐 Found network options:
▪ NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
📦 Preparing Kubernetes v1.21.2 on containerd 1.4.6 ...
▪ env NO_PROXY=192.168.49.2
▪ env NO_PROXY=192.168.49.2,192.168.49.3
▪ env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
🔎 Verifying Kubernetes components...
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
⎈ minikube>~> kubectl get nodes
NAME STATUS ROLES AGE VERSION
minikube Ready control-plane,master 4m36s v1.21.2
minikube-m02 Ready <none> 3m36s v1.21.2
minikube-m03 Ready <none> 2m55s v1.21.2
minikube-m04 Ready <none> 2m11s v1.21.2
⎈ minikube>~> docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ac473f83f900 gcr.io/k8s-minikube/kicbase:v0.0.25 "/usr/local/bin/entr…" 11 minutes ago Up 11 minutes 127.0.0.1:49192->22/tcp, 127.0.0.1:49191->2376/tcp, 127.0.0.1:49190->5000/tcp, 127.0.0.1:49189->8443/tcp, 127.0.0.1:49188->32443/tcp minikube-m04
d214ea1bd215 gcr.io/k8s-minikube/kicbase:v0.0.25 "/usr/local/bin/entr…" 12 minutes ago Up 12 minutes 127.0.0.1:49187->22/tcp, 127.0.0.1:49186->2376/tcp, 127.0.0.1:49185->5000/tcp, 127.0.0.1:49184->8443/tcp, 127.0.0.1:49183->32443/tcp minikube-m03
b134ac6d11e1 gcr.io/k8s-minikube/kicbase:v0.0.25 "/usr/local/bin/entr…" 13 minutes ago Up 13 minutes 127.0.0.1:49182->22/tcp, 127.0.0.1:49181->2376/tcp, 127.0.0.1:49180->5000/tcp, 127.0.0.1:49179->8443/tcp, 127.0.0.1:49178->32443/tcp minikube-m02
3479fa2fb55e gcr.io/k8s-minikube/kicbase:v0.0.25 "/usr/local/bin/entr…" 14 minutes ago Up 14 minutes 127.0.0.1:49177->22/tcp, 127.0.0.1:49176->2376/tcp, 127.0.0.1:49175->5000/tcp, 127.0.0.1:49174->8443/tcp, 127.0.0.1:49173->32443/tcp minikube
Note: In the above example, we have one master node and three worker nodes that we can use in our testing. Make sure you have enough resources to allocate to the minikube nodes.
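If resources are tight, you can cap what each node gets at start time; the values below are just an example:
~> minikube start -n=4 --cpus=2 --memory=4g --cni='calico' --container-runtime='containerd'
# 2 CPUs and 4 GiB of RAM per node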
Minikube comes with multiple addons we can use.
⎈ minikube>~> minikube addons list
|-----------------------------|----------|--------------|-----------------------|
| ADDON NAME | PROFILE | STATUS | MAINTAINER |
|-----------------------------|----------|--------------|-----------------------|
| ambassador | minikube | disabled | unknown (third-party) |
| auto-pause | minikube | disabled | google |
| csi-hostpath-driver | minikube | disabled | kubernetes |
| dashboard | minikube | disabled | kubernetes |
| default-storageclass | minikube | enabled ✅ | kubernetes |
| efk | minikube | disabled | unknown (third-party) |
| freshpod | minikube | disabled | google |
| gcp-auth | minikube | disabled | google |
| gvisor | minikube | disabled | google |
| helm-tiller | minikube | disabled | unknown (third-party) |
| ingress | minikube | disabled | unknown (third-party) |
| ingress-dns | minikube | disabled | unknown (third-party) |
| istio | minikube | disabled | unknown (third-party) |
| istio-provisioner | minikube | disabled | unknown (third-party) |
| kubevirt | minikube | disabled | unknown (third-party) |
| logviewer | minikube | disabled | google |
| metallb | minikube | disabled | unknown (third-party) |
| metrics-server | minikube | disabled | kubernetes |
| nvidia-driver-installer | minikube | disabled | google |
| nvidia-gpu-device-plugin | minikube | disabled | unknown (third-party) |
| olm | minikube | disabled | unknown (third-party) |
| pod-security-policy | minikube | disabled | unknown (third-party) |
| registry | minikube | disabled | google |
| registry-aliases | minikube | disabled | unknown (third-party) |
| registry-creds | minikube | disabled | unknown (third-party) |
| storage-provisioner | minikube | enabled ✅ | kubernetes |
| storage-provisioner-gluster | minikube | disabled | unknown (third-party) |
| volumesnapshots | minikube | disabled | kubernetes |
|-----------------------------|----------|--------------|-----------------------|
⎈ minikube>~> minikube addons enable dashboard
▪ Using image kubernetesui/metrics-scraper:v1.0.4
▪ Using image kubernetesui/dashboard:v2.1.0
💡 Some dashboard features require the metrics-server addon. To enable all features please run:
minikube addons enable metrics-server
🌟 The 'dashboard' addon is enabled
⎈ minikube>~> minikube dashboard --url
🤔 Verifying dashboard health ...
🚀 Launching proxy ...
🤔 Verifying proxy health ...
http://127.0.0.1:44069/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
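When you are done, the cluster can be stopped or deleted and recreated later, which keeps laptop resources free:
~> minikube stop          # stops the cluster but keeps its state
~> minikube delete        # removes the cluster completely
~> minikube delete --all  # removes all minikube profiles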
Minikube Kubernetes in VM (QEMU/KVM)
If you intend to run Kubernetes on your laptop or desktop using virtualisation, you can run minikube in a VM by specifying --vm-driver. In the example below, I am running minikube in Linux KVM.
This solution is more permanent and implies you have a high-end laptop or desktop with enough resources to spare for running Kubernetes.
~> minikube start -n=3 --cni='calico' --container-runtime='containerd' --vm-driver kvm2
😄 minikube v1.22.0 on Nixos 21.05.3248.6120ac5cd20 (Okapi)
✨ Using the kvm2 driver based on user configuration
👍 Starting control plane node minikube in cluster minikube
🔥 Creating kvm2 VM (CPUs=2, Memory=5300MB, Disk=20000MB) ...
📦 Preparing Kubernetes v1.21.2 on containerd 1.4.4 ...
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔗 Configuring Calico (Container Networking Interface) ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: default-storageclass, storage-provisioner
👍 Starting node minikube-m02 in cluster minikube
🔥 Creating kvm2 VM (CPUs=2, Memory=5300MB, Disk=20000MB) ...
🌐 Found network options:
▪ NO_PROXY=192.168.39.165
📦 Preparing Kubernetes v1.21.2 on containerd 1.4.4 ...
▪ env NO_PROXY=192.168.39.165
🔎 Verifying Kubernetes components...
👍 Starting node minikube-m03 in cluster minikube
🔥 Creating kvm2 VM (CPUs=2, Memory=5300MB, Disk=20000MB) ...
🌐 Found network options:
▪ NO_PROXY=192.168.39.165,192.168.39.171
📦 Preparing Kubernetes v1.21.2 on containerd 1.4.4 ...
▪ env NO_PROXY=192.168.39.165
▪ env NO_PROXY=192.168.39.165,192.168.39.171
🔎 Verifying Kubernetes components...
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
⎈ minikube>~> kubectl get nodes
NAME STATUS ROLES AGE VERSION
minikube Ready control-plane,master 2m59s v1.21.2
minikube-m02 Ready <none> 111s v1.21.2
minikube-m03 Ready <none> 59s v1.21.2
⎈ minikube>~> virsh list --all
Id Name State
-------------------------------
2 minikube running
3 minikube-m02 running
4 minikube-m03 running
- nixos shut off
Now we can use the dashboard and metrics-server addons to monitor the worker nodes.
⎈ minikube>~> minikube addons enable dashboard
▪ Using image kubernetesui/dashboard:v2.1.0
▪ Using image kubernetesui/metrics-scraper:v1.0.4
💡 Some dashboard features require the metrics-server addon. To enable all features please run:
minikube addons enable metrics-server
🌟 The 'dashboard' addon is enabled
⎈ minikube>~> minikube addons enable metrics-server
▪ Using image k8s.gcr.io/metrics-server/metrics-server:v0.4.2
🌟 The 'metrics-server' addon is enabled
⎈ minikube>~> minikube dashboard --url
🤔 Verifying dashboard health ...
🚀 Launching proxy ...
🤔 Verifying proxy health ...
http://127.0.0.1:43507/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
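With metrics-server enabled, you can also watch resource usage straight from the CLI:
~> kubectl top nodes
~> kubectl top pods -A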
Minikube is an excellent and very flexible solution for running Kubernetes on a local machine or in VMs. It comes preconfigured with many add-ons that can be enabled with a single command. But it is also fairly heavy, and if you need speed over features, you may want to take a look at k3d or kind.
Minikube works best for DevOps/SRE teams that need to test CNI policies and RBAC and need an environment as close as possible to the Kubernetes running in Production/Dev/Staging.
Kind
“Kind is a tool for running local Kubernetes clusters using Docker container “nodes”. kind was primarily designed for testing Kubernetes itself, but may be used for local development or CI.”
Kind tries to solve the same problem as minikube by creating a local development cluster. Kind is faster than minikube as it only deals with containers as hosts and has less built-in functionality.
“kind or Kubernetes in docker is a suite of tooling for local Kubernetes “clusters” where each “node” is a Docker container. kind is targeted at testing Kubernetes.”
For reference, see the kind architecture diagram in the kind documentation.
Kind is written entirely in Go; to install it, you either need the Go toolchain or you can download a statically linked binary.
Kind can be installed on Linux, macOS, and Windows.
Kind Installation
~> GO111MODULE="on" go get sigs.k8s.io/kind@v0.11.1 && kind create cluster
Another option is to download the binary directly from the kind GitHub repository:
~> curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.11.1/kind-linux-amd64
~> chmod +x ./kind
~> mv ./kind /some-dir-in-your-PATH/kind
Once you have kind installed, you can create a new cluster with:
~> kind create cluster
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.21.1) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂
By default, kind will create a single node that plays the role of both master and worker.
Once the Kubernetes cluster is created, kind automatically configures kubectl context.
⎈ kind-kind>~> kubectl cluster-info --context kind-kind
Kubernetes control plane is running at https://127.0.0.1:39827
CoreDNS is running at https://127.0.0.1:39827/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
⎈ kind-kind>~> kubectl get nodes
NAME STATUS ROLES AGE VERSION
kind-control-plane Ready control-plane,master 72s v1.21.1
⎈ kind-kind>~> docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7892a9aa93a1 kindest/node:v1.21.1 "/usr/local/bin/entr…" 2 minutes ago Up 2 minutes 127.0.0.1:39827->6443/tcp kind-control-plane
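A few kind commands that are handy for local development: listing clusters, loading a locally built image into the cluster nodes so pods can use it without a registry, and deleting the cluster (the image name below is hypothetical):
~> kind get clusters
~> kind load docker-image my-service:dev   # hypothetical locally built image
~> kind delete cluster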
Kind is very customizable, and you can configure it to run in a multi-node setup. To do so, create a kind configuration file in YAML format, as in the example below:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
- role: control-plane
- role: control-plane
- role: worker
- role: worker
- role: worker
Kind multi-node cluster creation
⎈ kind-kind>~> kind create cluster --config kind.yaml
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.21.1) 🖼
✓ Preparing nodes 📦 📦 📦 📦 📦 📦
✓ Configuring the external load balancer ⚖️
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✓ Joining more control-plane nodes 🎮
✓ Joining worker nodes 🚜
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Have a nice day! 👋
⎈ kind-kind>~> docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ced47990f3ca kindest/haproxy:v20200708-548e36db "/docker-entrypoint.…" 7 minutes ago Up 7 minutes 127.0.0.1:42437->6443/tcp kind-external-load-balancer
f71f59a44ef9 kindest/node:v1.21.1 "/usr/local/bin/entr…" 8 minutes ago Up 7 minutes 127.0.0.1:32955->6443/tcp kind-control-plane2
d341ee581b9c kindest/node:v1.21.1 "/usr/local/bin/entr…" 8 minutes ago Up 7 minutes kind-worker2
6cd893350528 kindest/node:v1.21.1 "/usr/local/bin/entr…" 8 minutes ago Up 7 minutes kind-worker3
c75e38bfaa48 kindest/node:v1.21.1 "/usr/local/bin/entr…" 8 minutes ago Up 7 minutes kind-worker
f3290aa18692 kindest/node:v1.21.1 "/usr/local/bin/entr…" 8 minutes ago Up 7 minutes 127.0.0.1:42019->6443/tcp kind-control-plane3
f5095a2fde95 kindest/node:v1.21.1 "/usr/local/bin/entr…" 8 minutes ago Up 7 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 127.0.0.1:42303->6443/tcp kind-control-plane
⎈ kind-kind>~> kubectl get nodes
NAME STATUS ROLES AGE VERSION
kind-control-plane Ready control-plane,master 7m16s v1.21.1
kind-control-plane2 Ready control-plane,master 6m48s v1.21.1
kind-control-plane3 Ready control-plane,master 5m57s v1.21.1
kind-worker Ready <none> 5m37s v1.21.1
kind-worker2 Ready <none> 5m37s v1.21.1
kind-worker3 Ready <none> 5m37s v1.21.1
Note: kind by default installs a simple networking implementation (“kindnetd”) as the CNI, but many other CNI manifests are known to work too, like Calico, Weave, Flannel, etc.
Kind is a good alternative to minikube and works best for developers who need a quickly deployable Kubernetes cluster for testing in a local dev environment.
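As a sketch, if you want to bring your own CNI, the kind config lets you disable the default one and set the pod subnet, and you then apply the CNI manifest yourself. The file name is hypothetical and the Calico manifest URL is the one commonly referenced at the time of writing and may change:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true      # skip kindnetd
  podSubnet: "192.168.0.0/16"  # match the Calico default
~> kind create cluster --config kind-no-cni.yaml
~> kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml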
K3d
K3d was developed as a wrapper to run k3s in docker containers.
k3s (Rancher Lab’s minimal Kubernetes distribution) was designed as a very light Kubernetes distribution for edge locations, IoT, or ARM platforms where every CPU cycle counts.
K3d significantly simplifies the creation of single-node or multi-node k3s clusters in Docker.
K3s has a very small footprint, yet it ships with many Kubernetes components: containerd and runc, Flannel for CNI, CoreDNS, Metrics Server, Traefik for ingress, Klipper-lb as an embedded service load balancer, Kube-router for network policy, Helm-controller for CRD-driven deployment of Helm manifests, Kine as a datastore shim that allows etcd to be replaced with other databases, Local-path-provisioner for provisioning volumes using local storage, and host utilities such as iptables/nftables, ebtables, ethtool, and socat.
K3d is the fastest Kubernetes cluster creation tool compared with kind and minikube.
K3d, similar to kind, runs only on Docker and uses a YAML-formatted configuration file.
K3d can be deployed on Windows, macOS, and Linux.
Example deployment on Linux:
~> wget -q -O - https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash
# Or
~> curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | TAG=v4.0.0 bash
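After installation, you can verify that the binary is on your PATH and which k3s version it bundles:
~> k3d version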
Creating a new Kubernetes cluster is as easy as running:
~> k3d cluster create mycluster
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-mycluster' (4bb8f280f158cc9fc0a7b22268483e10f045641230f5e8df518fce5ebb4733af)
INFO[0000] Created volume 'k3d-mycluster-images'
INFO[0001] Creating node 'k3d-mycluster-server-0'
INFO[0001] Creating LoadBalancer 'k3d-mycluster-serverlb'
INFO[0001] Starting cluster 'mycluster'
INFO[0001] Starting servers...
INFO[0001] Starting Node 'k3d-mycluster-server-0'
INFO[0006] Starting agents...
INFO[0006] Starting helpers...
INFO[0006] Starting Node 'k3d-mycluster-serverlb'
INFO[0007] (Optional) Trying to get IP of the docker host and inject it into the cluster as 'host.k3d.internal' for easy access
INFO[0011] Successfully added host record to /etc/hosts in 2/2 nodes and to the CoreDNS ConfigMap
INFO[0011] Cluster 'mycluster' created successfully!
INFO[0011] --kubeconfig-update-default=false --> sets --kubeconfig-switch-context=false
INFO[0011] You can now use it like this:
kubectl config use-context k3d-mycluster
kubectl cluster-info
Note: in case you start a k3d Kubernetes cluster and it is unresponsive, you can check the logs with
docker logs k3d-mycluster-server-0
and if you see the error:
I0920 14:01:59.608876 7 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_max' to 524288
F0920 14:01:59.608924 7 server.go:495] open /proc/sys/net/netfilter/nf_conntrack_max: permission denied
you will need to start k3d with the settings below.
~> k3d cluster create \
--k3s-server-arg "--kube-proxy-arg=conntrack-max-per-core=0" \
--k3s-agent-arg "--kube-proxy-arg=conntrack-max-per-core=0"
The Kubernetes cluster should be up and running in no time:
⎈ k3d-k3s-default>~> docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
11786bfb7cdb rancher/k3d-proxy:v4.4.7 "/bin/sh -c nginx-pr…" About a minute ago Up About a minute 80/tcp, 0.0.0.0:45107->6443/tcp k3d-k3s-default-serverlb
3431f9bd178e rancher/k3s:v1.20.6-k3s1 "/bin/entrypoint.sh …" About a minute ago Up About a minute k3d-k3s-default-server-0
⎈ k3d-k3s-default>~> kubectl get nodes
NAME STATUS ROLES AGE VERSION
k3d-k3s-default-server-0 Ready control-plane,master 113s v1.20.6+k3s1
⎈ k3d-k3s-default>~> kubectl get po -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system local-path-provisioner-5ff76fc89d-pxtzj 1/1 Running 0 107s
kube-system metrics-server-86cbb8457f-26kqq 1/1 Running 0 107s
kube-system coredns-854c77959c-7mg8g 1/1 Running 0 107s
kube-system helm-install-traefik-8ktk2 0/1 Completed 0 107s
kube-system svclb-traefik-qxndx 2/2 Running 0 78s
kube-system traefik-6f9cbd9bd4-mqkgc 1/1 Running 0 78s
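Similar to kind, you can import a locally built image into the k3d nodes without pushing it to a registry (the image and cluster names below are just placeholders):
~> k3d image import my-service:dev -c mycluster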
K3d, similar to kind, allows you to configure clusters the way you want by creating a YAML-formatted configuration file, as in the example below.
kind: Simple
apiVersion: k3d.io/v1alpha2
name: my-cluster
image: rancher/k3s:v1.20.4-k3s1
servers: 3
agents: 3
ports:
  - port: 80:80
    nodeFilters:
      - loadbalancer
~> k3d cluster create \
--k3s-server-arg "--kube-proxy-arg=conntrack-max-per-core=0" \
--k3s-agent-arg "--kube-proxy-arg=conntrack-max-per-core=0" \
--config k3d.yaml
INFO[0000] Using config file k3d.yaml
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-my-cluster' (c8c4a2763f2b4a613189f236c71af10d2a6411d68fec3f0ae3e57617d37ad5e2)
INFO[0000] Created volume 'k3d-my-cluster-images'
INFO[0000] Creating initializing server node
INFO[0000] Creating node 'k3d-my-cluster-server-0'
INFO[0003] Pulling image 'rancher/k3s:v1.20.4-k3s1'
INFO[0017] Creating node 'k3d-my-cluster-server-1'
INFO[0018] Creating node 'k3d-my-cluster-server-2'
INFO[0018] Creating node 'k3d-my-cluster-agent-0'
INFO[0018] Creating node 'k3d-my-cluster-agent-1'
INFO[0018] Creating node 'k3d-my-cluster-agent-2'
INFO[0018] Creating LoadBalancer 'k3d-my-cluster-serverlb'
INFO[0018] Starting cluster 'my-cluster'
INFO[0018] Starting the initializing server...
INFO[0019] Starting Node 'k3d-my-cluster-server-0'
INFO[0019] Starting servers...
INFO[0019] Starting Node 'k3d-my-cluster-server-1'
INFO[0039] Starting Node 'k3d-my-cluster-server-2'
INFO[0054] Starting agents...
INFO[0054] Starting Node 'k3d-my-cluster-agent-0'
INFO[0061] Starting Node 'k3d-my-cluster-agent-1'
INFO[0069] Starting Node 'k3d-my-cluster-agent-2'
INFO[0076] Starting helpers...
INFO[0076] Starting Node 'k3d-my-cluster-serverlb'
INFO[0077] (Optional) Trying to get IP of the docker host and inject it into the cluster as 'host.k3d.internal' for easy access
INFO[0086] Successfully added host record to /etc/hosts in 7/7 nodes and to the CoreDNS ConfigMap
INFO[0086] Cluster 'my-cluster' created successfully!
INFO[0086] --kubeconfig-update-default=false --> sets --kubeconfig-switch-context=false
INFO[0086] You can now use it like this:
kubectl config use-context k3d-my-cluster
kubectl cluster-info
⎈ k3d-k3s-default>~> kubectl get nodes
NAME STATUS ROLES AGE VERSION
k3d-my-cluster-agent-0 Ready <none> 31s v1.20.4+k3s1
k3d-my-cluster-agent-1 Ready <none> 24s v1.20.4+k3s1
k3d-my-cluster-agent-2 Ready <none> 16s v1.20.4+k3s1
k3d-my-cluster-server-0 Ready control-plane,etcd,master 64s v1.20.4+k3s1
k3d-my-cluster-server-1 Ready control-plane,etcd,master 50s v1.20.4+k3s1
k3d-my-cluster-server-2 Ready control-plane,etcd,master 36s v1.20.4+k3s1
⎈ k3d-k3s-default>~> k3d cluster list
NAME SERVERS AGENTS LOADBALANCER
my-cluster 3/3 3/3 true
⎈ k3d-k3s-default>~> docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
62b10f1477c3 rancher/k3d-proxy:v4.4.7 "/bin/sh -c nginx-pr…" 3 minutes ago Up 2 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:34057->6443/tcp k3d-my-cluster-serverlb
f5edcf43f303 rancher/k3s:v1.20.4-k3s1 "/bin/entrypoint.sh …" 3 minutes ago Up 2 minutes k3d-my-cluster-agent-2
8d12afe08cde rancher/k3s:v1.20.4-k3s1 "/bin/entrypoint.sh …" 3 minutes ago Up 2 minutes k3d-my-cluster-agent-1
7d4d12e7a1d7 rancher/k3s:v1.20.4-k3s1 "/bin/entrypoint.sh …" 3 minutes ago Up 2 minutes k3d-my-cluster-agent-0
1ba6883505f2 rancher/k3s:v1.20.4-k3s1 "/bin/entrypoint.sh …" 3 minutes ago Up 3 minutes k3d-my-cluster-server-2
ebf43e7cbf84 rancher/k3s:v1.20.4-k3s1 "/bin/entrypoint.sh …" 3 minutes ago Up 3 minutes k3d-my-cluster-server-1
127c50851a1c rancher/k3s:v1.20.4-k3s1 "/bin/entrypoint.sh …" 3 minutes ago Up 3 minutes k3d-my-cluster-server-0
In the above example, we created 3 master nodes and 3 worker nodes.
Now we can start deploying and testing the application/service in the newly created cluster.
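As a quick smoke test (the deployment name and image are just placeholders), you can deploy a throwaway workload and check that scheduling and service creation work before moving on to your real services:
~> kubectl create deployment hello --image=nginx
~> kubectl expose deployment hello --port=80
~> kubectl get pods -o wide
~> kubectl delete deployment,service hello   # clean up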
Conclusions
Based on my experience using all three local development Kubernetes cluster options:
* minikube — works best for DevOps/SRE teams who need to configure clusters in detail, like CNI and policies, and test them locally.
* kind and k3d — work best for development teams who want quick validation and immediate feedback on microservice code changes or Kubernetes Deployment, Service, ConfigMap, or Secret changes before promoting the code to a Dev/Staging environment.