Hello! We are:
👷🏻♀️ AJ (@s0ulshake, EphemeraSearch)
🐳 Jérôme (@jpetazzo, Enix SAS)
The training will run from 9:30 to 13:00
There will be a break at 11:00
Feel free to interrupt for questions at any time
Especially when you see full screen container pictures!
This was initially written by Jérôme Petazzoni to support in-person, instructor-led workshops and tutorials
Credit is also due to multiple contributors — thank you!
You can also follow along on your own, at your own pace
We included as much information as possible in these slides
We recommend having a mentor to help you ...
... Or be comfortable spending some time reading the Kubernetes documentation ...
... And looking for answers on StackOverflow and other outlets
We recommend that you open these slides in your browser:
Use arrows to move to next/previous slide
(up, down, left, right, page up, page down)
Type a slide number + ENTER to go to that slide
The slide number is also visible in the URL bar
(e.g. .../#123 for slide 123)
Slides will remain online so you can review them later if needed
(let's say we'll keep them online at least 1 year, how about that?)
You can download the slides using that URL:
https://2020-06-enix.container.training/slides.zip
(then open the file 5.yml.html)
You will find new versions of these slides on:
You are welcome to use, re-use, share these slides
These slides are written in markdown
The sources of these slides are available in a public GitHub repository:
Typos? Mistakes? Questions? Feel free to hover over the bottom of the slide ...
👇 Try it! The source file will be shown and you can view it on GitHub and fork and edit it.
This slide has a little magnifying glass in the top left corner
This magnifying glass indicates slides that provide extra details
Feel free to skip them if:
you are in a hurry
you are new to this and want to avoid cognitive overload
you want only the most essential information
You can review these slides another time if you want, they'll be waiting for you ☺
We've set up a chat room that we will monitor during the workshop
Don't hesitate to use it to ask questions, or get help, or share feedback
The chat room will also be available after the workshop
Join the chat room: Gitter
Say hi in the chat room!
(auto-generated TOC)
Pre-requirements
(automatically generated title slide)
Kubernetes concepts
(pods, deployments, services, labels, selectors)
Hands-on experience working with containers
(building images, running them; doesn't matter how exactly)
Familiar with the UNIX command-line
(navigating directories, editing files, using kubectl)
We are going to build and break multiple clusters
Everyone will get their own private environment(s)
You are invited to reproduce all the demos (but you don't have to)
All hands-on sections are clearly identified, like the gray rectangle below
This is the stuff you're supposed to do!
Go to https://2020-06-enix.container.training/ to view these slides
Each person gets their own private set of VMs
Each person should have a printed card with connection information
We will connect to these VMs with SSH
(if you don't have an SSH client, install one now!)
We are using basic cloud VMs with Ubuntu LTS
Kubernetes packages or binaries have been installed
(depending on what we want to accomplish in the lab)
We disabled IP address checks
we want to route pod traffic directly between nodes
most cloud providers will treat pod IP addresses as invalid
... and filter them out; so we disable that filter
Kubernetes architecture
(automatically generated title slide)
We can arbitrarily split Kubernetes in two parts:
the nodes, a set of machines that run our containerized workloads;
the control plane, a set of processes implementing the Kubernetes APIs.
Kubernetes also relies on underlying infrastructure:
servers, network connectivity (obviously!),
optional components like storage systems, load balancers ...
The control plane can run:
in containers, on the same nodes that run other application workloads
on a dedicated node
(example: a cluster installed with kubeadm)
on a dedicated set of nodes
(example: Kubernetes The Hard Way; kops)
outside of the cluster
(example: most managed clusters like AKS, EKS, GKE)
Our containerized workloads
A container engine like Docker, CRI-O, containerd...
(in theory, the choice doesn't matter, as the engine is abstracted by Kubernetes)
kubelet: an agent connecting the node to the cluster
(it connects to the API server, registers the node, receives instructions)
kube-proxy: a component used for internal cluster communication
(note that this is not an overlay network or a CNI plugin!)
Everything is stored in etcd
(it's the only stateful component)
Everyone communicates exclusively through the API server:
we (users) interact with the cluster through the API server
the nodes register and get their instructions through the API server
the other control plane components also register with the API server
API server is the only component that reads/writes from/to etcd
The API server exposes a REST API
(except for some calls, e.g. to attach interactively to a container)
Almost all requests and responses are JSON following a strict format
For performance, the requests and responses can also be done over protobuf
(see this design proposal for details)
In practice, protobuf is used for all internal communication
(between control plane components, and with kubelet)
The kubelet agent uses a number of special-purpose protocols and interfaces, including:
CRI (Container Runtime Interface)
CNI (Container Network Interface)
The Kubernetes API
(automatically generated title slide)
(Clayton Coleman, Kubernetes Architect and Maintainer)
What does that mean?
We cannot tell the API, "run a pod"
We can tell the API, "here is the definition for pod X"
The API server will store that definition (in etcd)
Controllers will then wake up and create a pod matching the definition
We can create, read, update, and delete objects
We can also watch objects
(be notified when an object changes, or when an object of a given type is created)
Objects are strongly typed
Types are validated and versioned
Storage and watch operations are provided by etcd
(note: the k3s project allows us to use sqlite instead of etcd)
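As a quick illustration (these commands are not part of the exercises), watching resources from the command line looks like this:
# watch namespaces: the command blocks and prints a line whenever one changes
kubectl get namespaces --watch
# the same thing at the REST level, using raw API access through kubectl
kubectl get --raw '/api/v1/namespaces?watch=true'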
SSH to the first node of the test cluster
Check that the cluster is operational:
kubectl get nodes
All nodes should be Ready
kubectl create -f- <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: hello
EOF
This is equivalent to kubectl create namespace hello.
kubectl get namespace hello -o yaml
We see a lot of data that wasn't here when we created the object.
Some data was automatically added to the object (like spec.finalizers).
Some data is dynamic (typically, the content of status).
Almost every Kubernetes API payload (requests and responses) has the same format:
apiVersion: xxx
kind: yyy
metadata:
  name: zzz
  (more metadata fields here)
(more fields here)
The fields shown above are mandatory, except for some special cases
(e.g.: in lists of resources, the list itself doesn't have a metadata.name)
We show YAML for convenience, but the API uses JSON
(with optional protobuf encoding)
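If we want to see the JSON form (the format actually used on the wire), we can ask kubectl for it:
kubectl get namespace hello -o json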
The apiVersion field corresponds to an API group
It can be either v1 (aka "core" group or "legacy group"), or group/version; e.g.:
apps/v1
rbac.authorization.k8s.io/v1
extensions/v1beta1
It does not indicate which version of Kubernetes we're talking about
It indirectly indicates the version of the kind
(which fields exist, their format, which ones are mandatory...)
A single resource type (kind) is rarely versioned alone
(e.g.: the batch API group contains jobs and cronjobs)
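To list the API groups and versions available on our cluster, and see which group and version each resource type belongs to:
kubectl api-versions
kubectl api-resources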
Let's update our namespace object
There are many ways to do that, including:
kubectl apply (and provide an updated YAML file)
kubectl edit
kubectl patch
kubectl label, or kubectl set
In each case, kubectl will update the object and submit the change to the API server (with PATCH requests)
For demonstration purposes, let's add a label to the namespace
The easiest way is to use kubectl label
In one terminal, watch namespaces:
kubectl get namespaces --show-labels -w
In the other, update our namespace:
kubectl label namespaces hello color=purple
We demonstrated update and watch semantics.
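For the record, the same label could have been added with kubectl patch, which sends the PATCH request more explicitly (this is just an alternative, not something we need to do here):
kubectl patch namespace hello -p '{"metadata":{"labels":{"color":"purple"}}}'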
The API server itself doesn't do anything: it's just a fancy object store
All the actual logic in Kubernetes is implemented with controllers
A controller watches a set of resources, and takes action when they change
Examples:
when a Pod object is created, it gets scheduled and started
when a Pod belonging to a ReplicaSet terminates, it gets replaced
when a Deployment object is updated, it can trigger a rolling update
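One easy way to see controllers reacting to changes (not part of the exercises, just a suggestion) is to stream cluster events while creating something for them to act on:
# in one terminal: stream events as they are recorded
kubectl get events --watch
# in another terminal: create an object for the controllers to process
kubectl create deployment hello --image=nginx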
Other control plane components
(automatically generated title slide)
API server ✔️
etcd ✔️
Controller manager
Scheduler
This is a collection of loops watching all kinds of objects
That's where the actual logic of Kubernetes lives
When we create a Deployment (e.g. with kubectl create deployment web --image=nginx),
we create a Deployment object
the Deployment controller notices it, and creates a ReplicaSet
the ReplicaSet controller notices the ReplicaSet, and creates a Pod
When a pod is created, it is in Pending state
The scheduler (or rather: a scheduler) must bind it to a node
Kubernetes comes with an efficient scheduler with many features
if we have special requirements, we can add another scheduler
(example: this demo scheduler uses the cost of nodes, stored in node annotations)
A pod might stay in Pending state for a long time:
if the cluster is full
if the pod has special constraints that can't be met
if the scheduler is not running (!)
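If a pod seems stuck in Pending, kubectl describe usually tells us why (look at the Events section at the end of the output):
kubectl describe pod name-of-the-pod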
They say, "a picture is worth one thousand words."
The following 19 slides show what really happens when we run:
kubectl create deployment web --image=nginx
Building our own cluster
(automatically generated title slide)
Let's build our own cluster!
Perfection is attained not when there is nothing left to add, but when there is nothing left to take away. (Antoine de Saint-Exupery)
Our goal is to build a minimal cluster allowing us to:
create a Deployment (with kubectl create deployment)
"Minimal" here means:
For now, we don't care about security
For now, we don't care about scalability
For now, we don't care about high availability
All we care about is simplicity
We will use the machine indicated as dmuc1
(this stands for "Dessine Moi Un Cluster" or "Draw Me A Cluster",
in homage to Saint-Exupery's "The Little Prince")
This machine:
runs Ubuntu LTS
has Kubernetes, Docker, and etcd binaries installed
but nothing is running
Log into the dmuc1 machine
Get root:
sudo -i
Check available versions:
etcd -version
kube-apiserver --version
dockerd --version
Start API server
Interact with it (create Deployment and Service)
See what's broken
Fix it and go back to step 2 until it works!
We are going to start many processes
Depending on what you're comfortable with, you can:
open multiple windows and multiple SSH connections
use a terminal multiplexer like screen or tmux
put processes in the background with &
(warning: log output might get confusing to read!)
kube-apiserver
# It will fail with "--etcd-servers must be specified"
Since the API server stores everything in etcd, it cannot start without it.
etcd
Success!
Note the last line of output:
serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
Sure, that's discouraged. But thanks for telling us the address!
Try again, passing the --etcd-servers argument
That argument should be a comma-separated list of URLs
kube-apiserver --etcd-servers http://127.0.0.1:2379
Success!
List nodes:
kubectl get nodes
List services:
kubectl get services
We should get No resources found. and the kubernetes service, respectively.
Note: the API server automatically created the kubernetes service entry.
kubeconfig?
We didn't need to create a kubeconfig file
By default, the API server is listening on localhost:8080
(without requiring authentication)
By default, kubectl connects to localhost:8080
(without providing authentication)
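If we wanted to be explicit about it, we could pass the API server address on the command line (optional here, since it matches the default):
kubectl --server=http://localhost:8080 get nodes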
kubectl create deployment web --image=nginx
Success?
kubectl get all
Our Deployment is in bad shape:
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/web   0/1     0            0           2m26s
And, there is no ReplicaSet, and no Pod.
We stored the definition of our Deployment in etcd
(through the API server)
But there is no controller to do the rest of the work
We need to start the controller manager
kube-controller-manager
The final error message is:
invalid configuration: no configuration has been provided
But the logs include another useful piece of information:
Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
The controller manager needs to connect to the API server
It does not have a convenient localhost:8080 default
We can pass the connection information in two ways:
--master and a host:port combination (easy)
--kubeconfig and a kubeconfig file
For simplicity, we'll use the first option
kube-controller-manager --master http://localhost:8080
Success!
kubectl get all
We now have a ReplicaSet.
But we still don't have a Pod.
In the controller manager logs, we should see something like this:
E0404 15:46:25.753376 22847 replica_set.go:450] Sync "default/web-5bc9bd5b8d" failed with No API token found for service account "default", retry after the token is automatically created and added to the service account
The service account default was automatically added to our Deployment
(and to its pods)
The service account default exists
But it doesn't have an associated token
(the token is a secret; creating it requires a signature, and therefore a CA)
There are many ways to solve that issue.
We are going to list a few (to get an idea of what's happening behind the scenes).
Of course, we don't need to perform all the solutions mentioned here.
Restart the API server with --disable-admission-plugins=ServiceAccount
The API server will no longer add a service account automatically
Our pods will be created without a service account
Add automountServiceAccountToken: false to the Deployment spec
or
Add automountServiceAccountToken: false to the default ServiceAccount
The ReplicaSet controller will no longer create pods referencing the (missing) token
Patch the default ServiceAccount:
kubectl patch sa default -p "automountServiceAccountToken: false"
This is the most complex option!
Generate a key pair
Pass the private key to the controller manager
(to generate and sign tokens)
Pass the public key to the API server
(to verify these tokens)
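Here is a rough sketch of what option 3 could look like; the file names are arbitrary, and both components would need to be restarted with these extra flags:
# generate an RSA key pair
openssl genrsa -out /tmp/sa.key 2048
openssl rsa -in /tmp/sa.key -pubout -out /tmp/sa.pub
# the controller manager signs tokens with the private key
kube-controller-manager --master http://localhost:8080 \
    --service-account-private-key-file=/tmp/sa.key
# the API server verifies tokens with the public key
kube-apiserver --etcd-servers http://127.0.0.1:2379 \
    --service-account-key-file=/tmp/sa.pub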
kubectl get all
Note: we might have to wait a bit for the ReplicaSet controller to retry.
If we're impatient, we can restart the controller manager.
Our pod exists, but it is in Pending state
Remember, we don't have a node so far
(kubectl get nodes shows an empty list)
We need to:
start a container engine
start kubelet
dockerd
Success!
Feel free to check that it actually works with e.g.:
docker run alpine echo hello world
If we start kubelet without arguments, it will start
But it will not join the cluster!
It will start in standalone mode
Just like with the controller manager, we need to tell kubelet where the API server is
Alas, kubelet doesn't have a simple --master option
We have to use --kubeconfig
We need to write a kubeconfig file for kubelet
We can copy/paste a bunch of YAML
Or we can generate the file with kubectl
Create ~/.kube/config with kubectl:
kubectl config \
    set-cluster localhost --server http://localhost:8080
kubectl config \
    set-context localhost --cluster localhost
kubectl config \
    use-context localhost
The ~/.kube/config file
The file that we generated looks like the one below.
That one has been slightly simplified (removing extraneous fields), but it is still valid.
apiVersion: v1
kind: Config
current-context: localhost
contexts:
- name: localhost
  context:
    cluster: localhost
clusters:
- name: localhost
  cluster:
    server: http://localhost:8080
kubelet --kubeconfig ~/.kube/config
Success!
kubectl get nodes
Our node should show up.
Its name will be its hostname (it should be dmuc1).
kubectl get all
Our pod is still Pending. 🤔
Which is normal: it needs to be scheduled.
(i.e., something needs to decide which node it should go on.)
Why do we need a scheduling decision, since we have only one node?
The node might be full, unavailable; the pod might have constraints ...
The easiest way to schedule our pod is to start the scheduler
(we could also schedule it manually)
The scheduler also needs to know how to connect to the API server
Just like for controller manager, we can use --kubeconfig
or --master
kube-scheduler --master http://localhost:8080
Our pod will go through a short ContainerCreating phase
Then it will be Running
kubectl get pods
Success!
We can schedule a pod in Pending state by creating a Binding, e.g.:
kubectl create -f- <<EOF
apiVersion: v1
kind: Binding
metadata:
  name: name-of-the-pod
target:
  apiVersion: v1
  kind: Node
  name: name-of-the-node
EOF
This is actually how the scheduler works!
It watches pods, makes scheduling decisions, and creates Binding objects
Check our pod's IP address:
kubectl get pods -o wide
Send some HTTP request to the pod:
curl X.X.X.X
We should see the Welcome to nginx! page.
Expose the Deployment's port 80:
kubectl expose deployment web --port=80
Check the Service's ClusterIP, and try connecting:
kubectl get service web
curl http://X.X.X.X
This won't work. We need kube-proxy to enable internal communication.
kube-proxy also needs to connect to the API server
It can work with the --master flag
(although that will be deprecated in the future)
kube-proxy --master http://localhost:8080
kubectl get service web
curl http://X.X.X.X
Success!
kube-proxy watches Service resources
When a Service is created or updated, kube-proxy creates iptables rules
Check out the OUTPUT chain in the nat table:
iptables -t nat -L OUTPUT
Traffic is sent to KUBE-SERVICES; check that too:
iptables -t nat -L KUBE-SERVICES
For each Service, there is an entry in that chain.
There should be a KUBE-SVC-... entry corresponding to our service
Check that KUBE-SVC-... chain:
iptables -t nat -L KUBE-SVC-...
It should show a jump to a KUBE-SEP-... chain; check it out too:
iptables -t nat -L KUBE-SEP-...
This is a DNAT rule to rewrite the destination address of the connection to our pod.
This is how kube-proxy works!
With recent versions of Kubernetes, it is possible to tell kube-proxy to use IPVS
IPVS is a more powerful load balancing framework
(remember: iptables was primarily designed for firewalling, not load balancing!)
It is also possible to replace kube-proxy with kube-router
kube-router uses IPVS by default
kube-router can also perform other functions
(e.g., we can use it as a CNI plugin to provide pod connectivity)
What about the kubernetes service?
If we try to connect, it won't work
(by default, it should be 10.0.0.1)
If we look at the Endpoints for this service, we will see one endpoint:
host-address:6443
By default, the API server expects to be running directly on the nodes
(it could be as a bare process, or in a container/pod using the host network)
... And it expects to be listening on port 6443 with TLS
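We can check this for ourselves:
kubectl get service kubernetes
kubectl get endpoints kubernetes
# the endpoint should show the host address with port 6443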
Adding nodes to the cluster
(automatically generated title slide)
So far, our cluster has only 1 node
Let's see what it takes to add more nodes
We are going to use another set of machines: kubenet
We have 3 identical machines: kubenet1, kubenet2, kubenet3
The Docker Engine is installed (and running) on these machines
The Kubernetes packages are installed, but nothing is running
We will use kubenet1 to run the control plane
Start the control plane on kubenet1
Join the 3 nodes to the cluster
Deploy and scale a simple web server
Log into node kubenet1
Clone the repository containing the workshop materials:
git clone https://github.com/jpetazzo/container.training
Go to the compose/simple-k8s-control-plane directory:
cd container.training/compose/simple-k8s-control-plane
Start the control plane:
docker-compose up
Show control plane component statuses:
kubectl get componentstatuses
kubectl get cs
Show the (empty) list of nodes:
kubectl get nodes
Differences from the dmuc cluster:
Our new control plane listens on 0.0.0.0 instead of the default 127.0.0.1
The ServiceAccount admission plugin is disabled
We need to generate a kubeconfig file for kubelet
This time, we need to put the public IP address of kubenet1
(instead of localhost or 127.0.0.1)
Generate the kubeconfig file:
kubectl config set-cluster kubenet --server http://X.X.X.X:8080
kubectl config set-context kubenet --cluster kubenet
kubectl config use-context kubenet
cp ~/.kube/config ~/kubeconfig
We need that kubeconfig file on the other nodes, too
Copy kubeconfig to the other nodes:
for N in 2 3; do
  scp ~/kubeconfig kubenet$N:
done
Don't forget sudo!
Join the first node:
sudo kubelet --kubeconfig ~/kubeconfig
Open more terminals and join the other nodes to the cluster:
ssh kubenet2 sudo kubelet --kubeconfig ~/kubeconfig
ssh kubenet3 sudo kubelet --kubeconfig ~/kubeconfig
We should now see all 3 nodes
At first, their STATUS will be NotReady
They will move to Ready state after approximately 10 seconds
kubectl get nodes
Let's create a Deployment and scale it
(so that we have multiple pods on multiple nodes)
Create a Deployment running NGINX:
kubectl create deployment web --image=nginx
Scale it:
kubectl scale deployment web --replicas=5
The pods will be scheduled on the nodes
The nodes will pull the nginx image, and start the pods
What are the IP addresses of our pods?
kubectl get pods -o wide
🤔 Something's not right ... Some pods have the same IP address!
Without the --network-plugin flag, kubelet defaults to "no-op" networking
It lets the container engine use a default network
(in that case, we end up with the default Docker bridge)
Our pods are running on independent, disconnected, host-local networks
On a normal cluster, kubelet is configured to set up pod networking with CNI plugins
This requires:
installing CNI plugins
writing CNI configuration files
running kubelet with --network-plugin=cni
We need to set up a better network
Before diving into CNI, we will use the kubenet plugin
This plugin creates a cbr0 bridge and connects the containers to that bridge
This plugin allocates IP addresses from a range:
either specified to kubelet (e.g. with --pod-cidr)
or stored in the node's spec.podCIDR field
See here for more details about this kubenet plugin.
What kubenet does and does not do
It allocates IP addresses to pods locally
(each node has its own local subnet)
It connects the pods to a local bridge
(pods on the same node can communicate together; not with other nodes)
It doesn't set up routing or tunneling
(we get pods on separated networks; we need to connect them somehow)
It doesn't allocate subnets to nodes
(this can be done manually, or by the controller manager)
On each node, we will add routes to the other nodes' pod network
Of course, this is not convenient or scalable!
We will see better techniques to do this; but for now, hang on!
There are multiple options:
passing the subnet to kubelet with the --pod-cidr flag
manually setting spec.podCIDR on each node
allocating node CIDRs automatically with the controller manager
The last option would be implemented by adding these flags to controller manager:
--allocate-node-cidrs=true --cluster-cidr=<cidr>
kubenet needs the pod CIDR, but other plugins don't need it
(e.g. because they allocate addresses in multiple pools, or a single big one)
The pod CIDR field may eventually be deprecated and replaced by an annotation
We need to stop and restart all our kubelets
We will add the --network-plugin and --pod-cidr flags
You all have a "cluster number" (let's call that C) printed on your VM info card
We will use pod CIDR 10.C.N.0/24 (where N is the node number: 1, 2, 3)
Stop all the kubelets (Ctrl-C is fine)
Restart them all, adding --network-plugin=kubenet --pod-cidr 10.C.N.0/24
When we stop (or kill) kubelet, the containers keep running
When kubelet starts again, it detects the containers
kubectl get pods -o wide
🤔 But our pods still use local IP addresses!
The IP address of a pod cannot change
kubelet doesn't automatically kill/restart containers with "invalid" addresses
(in fact, from kubelet's point of view, there is no such thing as an "invalid" address)
We must delete our pods and recreate them
Delete all the pods, and let the ReplicaSet recreate them:
kubectl delete pods --all
Wait for the pods to be up again:
kubectl get pods -o wide -w
Let's start kube-proxy to provide internal load balancing
Then see if we can create a Service and use it to contact our pods
Start kube-proxy:
sudo kube-proxy --kubeconfig ~/.kube/config
Expose our Deployment:
kubectl expose deployment web --port=80
Retrieve the ClusterIP address:
kubectl get svc web
Send a few requests to the ClusterIP address (with curl)
Sometimes it works, sometimes it doesn't. Why?
Our pods have new, distinct IP addresses
But they are on host-local, isolated networks
If we try to ping a pod on a different node, it won't work
kube-proxy merely rewrites the destination IP address
But we need that IP address to be reachable in the first place
How do we fix this?
(hint: check the title of this slide!)
The technique that we are about to use doesn't work everywhere
It only works if:
all the nodes are directly connected to each other (at layer 2)
the underlying network allows the IP addresses of our pods
If we are on physical machines connected by a switch: OK
If we are on virtual machines in a public cloud: NOT OK
on AWS, we need to disable "source and destination checks" on our instances
on OpenStack, we need to disable "port security" on our network ports
We need to tell each node:
"The subnet 10.C.N.0/24 is located on node N" (for all values of N)
This is how we add a route on Linux:
ip route add 10.C.N.0/24 via W.X.Y.Z
(where W.X.Y.Z is the internal IP address of node N)
We can see the internal IP addresses of our nodes with:
kubectl get nodes -o wide
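As a purely illustrative example (the gateway address below is made up; use the addresses shown by kubectl get nodes -o wide), with cluster number 1, the route to node 2's pod subnet could look like:
ip route add 10.1.2.0/24 via 10.10.0.2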
By default, Docker prevents containers from using arbitrary IP addresses
(by setting up iptables rules)
We need to allow our containers to use our pod CIDR
For simplicity, we will insert a blanket iptables rule allowing all traffic:
iptables -I FORWARD -j ACCEPT
This has to be done on every node
Create all the routes on all the nodes
Insert the iptables rule allowing traffic
Check that you can ping all the pods from one of the nodes
Check that you can curl the ClusterIP of the Service successfully
We did a lot of manual operations:
allocating subnets to nodes
adding command-line flags to kubelet
updating the routing tables on our nodes
We want to automate all these steps
We want something that works on all networks
The Container Network Interface
(automatically generated title slide)
Allows us to decouple network configuration from Kubernetes
Implemented by plugins
Plugins are executables that will be invoked by kubelet
Plugins are responsible for:
allocating IP addresses for containers
configuring the network for containers
Plugins can be combined and chained when it makes sense
Interface could be created by e.g. the vlan or bridge plugin
IP address could be allocated by e.g. the dhcp or host-local plugin
Interface parameters (MTU, sysctls) could be tweaked by the tuning plugin
The reference plugins are available here.
Look in each plugin's directory for its documentation.
The plugin (or list of plugins) is set in the CNI configuration
The CNI configuration is a single file in /etc/cni/net.d
If there are multiple files in that directory, the first one is used
(in lexicographic order)
That path can be changed with the --cni-conf-dir flag of kubelet
When we set up the "pod network" (like Calico, Weave...) it ships a CNI configuration
(and sometimes, custom CNI plugins)
Very often, that configuration (and plugins) is installed automatically
(by a DaemonSet featuring an initContainer with hostPath volumes)
Examples:
Calico CNI config and volume
kube-router CNI config and volume
There are two slightly different configuration formats
Basic configuration format:
.conf name suffix
type string field in the top-most structure
Configuration list format:
.conflist name suffix
plugins list field in the top-most structure
Parameters are given through environment variables, including:
CNI_COMMAND: desired operation (ADD, DEL, CHECK, or VERSION)
CNI_CONTAINERID: container ID
CNI_NETNS: path to network namespace file
CNI_IFNAME: what the network interface should be named
The network configuration must be provided to the plugin on stdin
(this avoids race conditions that could happen by passing a file path)
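For illustration only, a minimal single-plugin configuration (.conf format) using the bridge and host-local reference plugins could look like the sketch below (the network name, bridge name, and subnet are made up):
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.1.1.0/24"
  }
}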
We are going to set up a new cluster
For this new cluster, we will use kube-router
kube-router will provide the "pod network"
(connectivity with pods)
kube-router will also provide internal service connectivity
(replacing kube-proxy)
Very simple architecture
Does not introduce new CNI plugins
(uses the bridge plugin, with host-local for IPAM)
Pod traffic is routed between nodes
(no tunnel, no new protocol)
Internal service connectivity is implemented with IPVS
Can provide pod network and/or internal service connectivity
kube-router daemon runs on every node
Connect to the API server
Obtain the local node's podCIDR
Inject it into the CNI configuration file
(we'll use /etc/cni/net.d/10-kuberouter.conflist)
Obtain the addresses of all nodes
Establish a full mesh BGP peering with the other nodes
Exchange routes over BGP
BGP (Border Gateway Protocol) is the protocol used between internet routers
It scales pretty well (it is used to announce the 700k CIDR prefixes of the internet)
It is spoken by many hardware routers from many vendors
It also has many software implementations (Quagga, Bird, FRR...)
Experienced network folks generally know it (and appreciate it)
It is also used by Calico (another popular network system for Kubernetes)
Using BGP allows us to interconnect our "pod network" with other systems
We'll work in a new cluster (named kuberouter)
We will run a simple control plane (like before)
... But this time, the controller manager will allocate podCIDR subnets
(so that we don't have to manually assign subnets to individual nodes)
We will create a DaemonSet for kube-router
We will join nodes to the cluster
The DaemonSet will automatically start a kube-router pod on each node
Log into node kuberouter1
Clone the workshop repository:
git clone https://github.com/jpetazzo/container.training
Move to this directory:
cd container.training/compose/kube-router-k8s-control-plane
Check the content of /etc/cni/net.d
(On most machines, at this point, /etc/cni/net.d doesn't even exist.)
We will use a Compose file to start the control plane
It is similar to the one we used with the kubenet cluster
The API server is started with --allow-privileged
(because we will start kube-router in privileged pods)
The controller manager is started with extra flags too:
--allocate-node-cidrs and --cluster-cidr
We need to edit the Compose file to set the Cluster CIDR
Our cluster CIDR will be 10.C.0.0/16 (where C is our cluster number)
Edit the Compose file to set the Cluster CIDR:
vim docker-compose.yaml
Start the control plane:
docker-compose up
In the same directory, there is a kuberouter.yaml file
It contains the definition for a DaemonSet and a ConfigMap
Before we load it, we also need to edit it
We need to indicate the address of the API server
(because kube-router needs to connect to it to retrieve node information)
The address of the API server will be http://A.B.C.D:8080
(where A.B.C.D is the public address of kuberouter1, running the control plane)
Edit the YAML file to set the API server address:
vim kuberouter.yaml
Create the DaemonSet:
kubectl create -f kuberouter.yaml
Note: the DaemonSet won't create any pods (yet) since there are no nodes (yet).
Generate the kubeconfig file, like we did for the kubenet cluster
(replacing X.X.X.X with the address of kuberouter1):
kubectl config set-cluster cni --server http://X.X.X.X:8080
kubectl config set-context cni --cluster cni
kubectl config use-context cni
cp ~/.kube/config ~/kubeconfig
Copy kubeconfig to the other nodes:
for N in 2 3; do
  scp ~/kubeconfig kuberouter$N:
done
We don't need the --pod-cidr option anymore
(the controller manager will allocate these automatically)
We need to pass --network-plugin=cni
Join the first node:
sudo kubelet --kubeconfig ~/kubeconfig --network-plugin=cni
Open more terminals and join the other nodes:
ssh kuberouter2 sudo kubelet --kubeconfig ~/kubeconfig --network-plugin=cni
ssh kuberouter3 sudo kubelet --kubeconfig ~/kubeconfig --network-plugin=cni
At this point, kube-router should have installed its CNI configuration
(in /etc/cni/net.d)
Check the content of /etc/cni/net.d
There should be a file created by kube-router
The file should contain the node's podCIDR
Create a Deployment running a web server:
kubectl create deployment web --image=jpetazzo/httpenv
Scale it so that it spans multiple nodes:
kubectl scale deployment web --replicas=5
Expose it with a Service:
kubectl expose deployment web --port=8888
Get the ClusterIP address for the service:
kubectl get svc web
Send a few requests there:
curl X.X.X.X:8888
Note that if you send multiple requests, they are load-balanced in a round-robin manner.
This shows that we are using IPVS (vs. iptables, which picks endpoints at random).
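If we want to double-check, we can inspect the IPVS table on a node (this requires the ipvsadm tool, which might need to be installed first):
sudo ipvsadm -Ln
# our Service's ClusterIP should show up as a virtual server using the "rr" (round-robin) scheduler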
Check the IP addresses of our pods:
kubectl get pods -o wide
Check our routing table:
route -n
ip route
We should see the local pod CIDR connected to kube-bridge, and the other nodes' pod CIDRs having individual routes, with each node being the gateway.
We can also look at the output of the kube-router pods (with kubectl logs)
kube-router also comes with a special shell that gives lots of useful info
(we can access it with kubectl exec)
But with the current setup of the cluster, these options may not work!
Why?
kubectl logs / kubectl exec
Try to show the logs of a kube-router pod:
kubectl -n kube-system logs ds/kube-router
Or try to exec into one of the kube-router pods:
kubectl -n kube-system exec kube-router-xxxxx bash
These commands will give an error message that includes:
dial tcp: lookup kuberouterX on 127.0.0.11:53: no such host
What does that mean?
To execute these commands, the API server needs to connect to kubelet
By default, it creates a connection using the kubelet's name
(e.g. http://kuberouter1:...)
This requires our nodes names to be in DNS
We can change that by setting a flag on the API server:
--kubelet-preferred-address-types=InternalIP
We can also get the logs directly from the container engine
First, get the container ID, with docker ps or like this:
CID=$(docker ps -q \
      --filter label=io.kubernetes.pod.namespace=kube-system \
      --filter label=io.kubernetes.container.name=kube-router)
Then view the logs:
docker logs $CID
We don't need kube-router and BGP to distribute routes
The list of nodes (and associated podCIDR subnets) is available through the API
This shell snippet generates the commands to add all required routes on a node:
NODES=$(kubectl get nodes -o name | cut -d/ -f2)
for DESTNODE in $NODES; do
  if [ "$DESTNODE" != "$HOSTNAME" ]; then
    echo $(kubectl get node $DESTNODE -o go-template="
      route add -net {{.spec.podCIDR}} gw {{(index .status.addresses 0).address}}")
  fi
done
This could be useful for embedded platforms with very limited resources
(or lab environments for learning purposes)
Interconnecting clusters
(automatically generated title slide)
We assigned different Cluster CIDRs to each cluster
This allows us to connect our clusters together
We will leverage kube-router BGP abilities for that
We will peer each kube-router instance with a route reflector
As a result, we will be able to ping each other's pods
There are many methods to interconnect clusters
Depending on your network implementation, you will use different methods
The method shown here only works for nodes with direct layer 2 connection
We will often need to use tunnels or other network techniques
Someone will start the route reflector
(typically, that will be the person presenting these slides!)
We will update our kube-router configuration
We will add a peering with the route reflector
(instructing kube-router to connect to it and exchange route information)
We should see the routes to other clusters on our nodes
(in the output of e.g. route -n or ip route show)
We should be able to ping pods of other nodes
Only do this slide if you are doing this on your own
There is a Compose file in the compose/frr-route-reflector directory
Before continuing, make sure that you have the IP address of the route reflector
This can be done in two ways:
with command-line flags to the kube-router process
with annotations to Node objects
We will use the command-line flags
(because it will automatically propagate to all nodes)
Note: with Calico, this is achieved by creating a BGPPeer CRD.
Edit the kuberouter.yaml file
Add the following flags to the kube-router arguments:
- "--peer-router-ips=X.X.X.X"- "--peer-router-asns=64512"
(Replace X.X.X.X with the route reflector address)
Update the DaemonSet definition:
kubectl apply -f kuberouter.yaml
The DaemonSet will not update the pods automatically
(it is using the default updateStrategy, which is OnDelete)
We will therefore delete the pods
(they will be recreated with the updated definition)
kubectl delete pods -n kube-system -l k8s-app=kube-router
Note: the other updateStrategy for a DaemonSet is RollingUpdate.
For critical services, we might want to precisely control the update process.
We can see informative messages in the output of kube-router:
time="2019-04-07T15:53:56Z" level=info msg="Peer Up"Key=X.X.X.X State=BGP_FSM_OPENCONFIRM Topic=Peer
We should see the routes of the other clusters show up
For debugging purposes, the reflector also exports a route to 1.0.0.2/32
That route will show up like this:
1.0.0.2 172.31.X.Y 255.255.255.255 UGH 0 0 0 eth0
We should be able to ping the pods of other clusters!
kube-router can also export ClusterIP addresses
(by adding the flag --advertise-cluster-ip)
They are exported individually (as /32)
This would allow us to easily access other clusters' services
(without having to resolve the individual addresses of pods)
Even better if it's combined with DNS integration
(to facilitate name → ClusterIP resolution)
API server availability
(automatically generated title slide)
When we set up a node, we need the address of the API server:
for kubelet
for kube-proxy
sometimes for the pod network system (like kube-router)
How do we ensure the availability of that endpoint?
(what if the node running the API server goes down?)
Set up an external load balancer
Point kubelet (and other components) to that load balancer
Put the node(s) running the API server behind that load balancer
Update the load balancer if/when an API server node needs to be replaced
On cloud infrastructures, some mechanisms provide automation for this
(e.g. on AWS, an Elastic Load Balancer + Auto Scaling Group)
Set up a load balancer (like NGINX, HAProxy...) on each node
Configure that load balancer to send traffic to the API server node(s)
Point kubelet (and other components) to localhost
Update the load balancer configuration when API server nodes are updated
Distribute the updated configuration (push)
Or regularly check for updates (pull)
The latter requires an external, highly available store
(it could be an object store, an HTTP server, or even DNS...)
Updates can be facilitated by a DaemonSet
(but remember that it can't be used when installing a new node!)
Put all the API server nodes behind a round-robin DNS
Point kubelet (and other components) to that name
Update the records when needed
Note: this option is not officially supported
(but since kubelet supports reconnection anyway, it should work)
Many managed clusters expose a high-availability API endpoint
(and you don't have to worry about it)
You can also use HA mechanisms that you're familiar with
(e.g. virtual IPs)
Tunnels are also fine
(e.g. k3s uses a tunnel to allow each node to contact the API server)
Setting up Kubernetes
(automatically generated title slide)
Kubernetes is made of many components that require careful configuration
Secure operation typically requires TLS certificates and a local CA
(certificate authority)
Setting up everything manually is possible, but rarely done
(except for learning purposes)
Let's do a quick overview of available options!
Are you writing code that will eventually run on Kubernetes?
Then it's a good idea to have a development cluster!
Development clusters only need one node
This simplifies their setup a lot:
pod networking doesn't even need CNI plugins, overlay networks, etc.
they can be fully contained (no pun intended) in an easy-to-ship VM image
some of the security aspects may be simplified (different threat model)
Examples: Docker Desktop, k3d, KinD, MicroK8s, Minikube
(some of these also support clusters with multiple nodes)
Many cloud providers and hosting providers offer "managed Kubernetes"
The deployment and maintenance of the cluster is entirely managed by the provider
(ideally, clusters can be spun up automatically through an API, CLI, or web interface)
Given the complexity of Kubernetes, this approach is strongly recommended
(at least for your first production clusters)
After working for a while with Kubernetes, you will be better equipped to decide:
whether to operate it yourself or use a managed offering
which offering or which distribution works best for you and your needs
Pricing models differ from one provider to another
nodes are generally charged at their usual price
control plane may be free or incur a small nominal fee
Beyond pricing, there are huge differences in features between providers
The "major" providers are not always the best ones!
Most providers let you pick which Kubernetes version you want
some providers offer up-to-date versions
others lag significantly (sometimes by 2 or 3 minor versions)
Some providers offer multiple networking or storage options
Others will only support one, tied to their infrastructure
(changing that is in theory possible, but might be complex or unsupported)
Some providers let you configure or customize the control plane
(generally through Kubernetes "feature gates")
If you want to run Kubernetes yourselves, there are many options
(free, commercial, proprietary, open source ...)
Some of them are installers, while some are complete platforms
Some of them leverage other well-known deployment tools
(like Puppet, Terraform ...)
A good starting point to explore these options is this guide
(it defines categories like "managed", "turnkey" ...)
kubeadm is a tool that is part of Kubernetes and facilitates cluster setup
Many other installers and distributions use it (but not all of them)
It can also be used by itself
Excellent starting point to install Kubernetes on your own machines
(virtual, physical, it doesn't matter)
It even supports highly available control planes, or "multi-master"
(this is more complex, though, because it introduces the need for an API load balancer)
The resources below are mainly for educational purposes!
Kubernetes The Hard Way by Kelsey Hightower
step by step guide to install Kubernetes on Google Cloud
covers certificates, high availability ...
“Kubernetes The Hard Way is optimized for learning, which means taking the long route to ensure you understand each task required to bootstrap a Kubernetes cluster.”
Deep Dive into Kubernetes Internals for Builders and Operators
conference presentation showing step-by-step control plane setup
emphasis on simplicity, not on security and availability
How did we set up these Kubernetes clusters that we're using?
We used kubeadm on freshly installed VM instances running Ubuntu LTS
Install Docker
Install Kubernetes packages
Run kubeadm init on the first node (it deploys the control plane on that node)
Set up Weave (the overlay network) with a single kubectl apply command
Run kubeadm join on the other nodes (with the token produced by kubeadm init)
Copy the configuration file generated by kubeadm init
Check the prepare VMs README for more details
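As a rough sketch (not the exact commands used for these clusters; the address is made up and the token and hash are placeholders printed by kubeadm init), the kubeadm workflow looks like this:
# on the first node
sudo kubeadm init
# on each other node, using the command shown in the output of kubeadm init
sudo kubeadm join A.B.C.D:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>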
kubeadm "drawbacks"
Doesn't set up Docker or any other container engine
(this is by design, to give us choice)
Doesn't set up the overlay network
(this is also by design, for the same reasons)
HA control plane requires some extra steps
Note that HA control plane also requires setting up a specific API load balancer
(which is beyond the scope of kubeadm)
Running a local development cluster
(automatically generated title slide)
Let's review some options to run Kubernetes locally
There is no "best option", it depends what you value:
ability to run on all platforms (Linux, Mac, Windows, other?)
ability to run clusters with multiple nodes
ability to run multiple clusters side by side
ability to run recent (or even, unreleased) versions of Kubernetes
availability of plugins
etc.
Available on Mac and Windows
Gives you one cluster with one node
Rather old version of Kubernetes
Very easy to use if you are already using Docker Desktop:
go to Docker Desktop preferences and enable Kubernetes
Ideal for Docker users who need good integration between both platforms
Based on K3s by Rancher Labs
Requires Docker
Runs Kubernetes nodes in Docker containers
Can deploy multiple clusters, with multiple nodes, and multiple master nodes
As of June 2020, two versions co-exist: stable (1.7) and beta (3.0)
They have different syntax and options, this can be confusing
(but don't let that stop you!)
Get the k3d beta 3 binary on https://github.com/rancher/k3d/releases
Create a simple cluster:
k3d create cluster petitcluster --update-kubeconfig
Use it:
kubectl config use-context k3d-petitcluster
Create a more complex cluster with a custom version:
k3d create cluster groscluster --update-kubeconfig \
    --image rancher/k3s:v1.18.3-k3s1 --masters 3 --workers 5 --api-port 6444
(note: API port seems to be necessary when running multiple clusters)
Kubernetes-in-Docker
Requires Docker (obviously!)
Deploying a single node cluster using the latest version is simple:
kind create cluster
More advanced scenarios require writing a short config file
(to define multiple nodes, multiple master nodes, set Kubernetes versions ...)
Can deploy multiple clusters
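For instance, a small config file describing a three-node cluster could look like the sketch below (the file name is arbitrary):
# kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
Then we would create the cluster with:
kind create cluster --config kind-config.yaml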
The "legacy" option!
(note: this is not a bad thing, it means that it's very stable, has lots of plugins, etc.)
Supports many drivers
(HyperKit, Hyper-V, KVM, VirtualBox, but also Docker and many others)
Can deploy a single cluster; recent versions can deploy multiple nodes
Great option if you want a "Kubernetes first" experience
(i.e. if you don't already have Docker and/or don't want/need it)
Available on Linux, and since recently, on Mac and Windows as well
The Linux version is installed through Snap
(which is pre-installed on all recent versions of Ubuntu)
Also supports clustering (as in, multiple machines running MicroK8s)
DNS is not enabled by default; enable it with microk8s enable dns
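On a recent Ubuntu machine, getting started could look like this (standard MicroK8s commands; check their documentation for other platforms):
sudo snap install microk8s --classic
microk8s enable dns
microk8s kubectl get nodes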
Choose your own adventure!
Pick any Linux distribution!
Build your cluster from scratch or use a Kubernetes installer!
Discover exotic CNI plugins and container runtimes!
The only limit is yourself, and the time you are willing to sink in!
Deploying a managed cluster
(automatically generated title slide)
"The easiest way to install Kubernetes is to get someone
else to do it for you."
(Jérôme Petazzoni)
Let's see a few options to install managed clusters!
This is not an exhaustive list
(the goal is to show the actual steps to get started)
The list is sorted alphabetically
All the options mentioned here require an account with a cloud provider
... And a credit card
Install the Azure CLI
Login:
az login
Select a region
Create a "resource group":
az group create --name my-aks-group --location westeurope
Create the cluster:
az aks create --resource-group my-aks-group --name my-aks-cluster
Wait about 5-10 minutes
Add credentials to kubeconfig:
az aks get-credentials --resource-group my-aks-group --name my-aks-cluster
Delete the cluster:
az aks delete --resource-group my-aks-group --name my-aks-cluster
Delete the resource group:
az group delete --resource-group my-aks-group
Note: delete actions can take a while too!
(5-10 minutes as well)
The cluster has useful components pre-installed, such as the metrics server
There is also a product called AKS Engine:
leverages ARM (Azure Resource Manager) templates to deploy Kubernetes
it's "the library used by AKS"
fully customizable
think of it as a "half-managed" Kubernetes option
Create service roles, VPCs, and a bunch of other oddities
Try to figure out why it doesn't work
Start over, following an official AWS blog post
Try to find the missing Cloud Formation template
(╯°□°)╯︵ ┻━┻
Install eksctl
Set the usual environment variables
(AWS_DEFAULT_REGION, AWS_ACCESS_KEY, AWS_SECRET_ACCESS_KEY)
Create the cluster:
eksctl create cluster
Cluster can take a long time to be ready (15-20 minutes is typical)
Add cluster add-ons
(by default, it doesn't come with metrics-server, logging, etc.)
Delete the cluster:
eksctl delete cluster <clustername>
If you need to find the name of the cluster:
eksctl get clusters
Note: the AWS documentation has been updated and now includes eksctl instructions.
Convenient if you have to use AWS
Needs extra steps to be truly production-ready
The only officially supported pod network is the Amazon VPC CNI plugin
integrates tightly with security groups and VPC networking
not suitable for high density clusters (with many small pods on big nodes)
other plugins should still work but will require extra work
Install doctl
Generate API token (in web console)
Set up the CLI authentication:
doctl auth init
(It will ask you for the API token)
Check the list of regions and pick one:
doctl compute region list
(If you don't specify the region later, it will use nyc1
)
Create the cluster:
doctl kubernetes cluster create my-do-cluster [--region xxx1]
Wait 5 minutes
Update kubeconfig:
kubectl config use-context do-xxx1-my-do-cluster
The cluster comes with some components (like Cilium) but no metrics server
List clusters (if you forgot its name):
doctl kubernetes cluster list
Delete the cluster:
doctl kubernetes cluster delete my-do-cluster
Install gcloud
Login:
gcloud auth init
Create a "project":
gcloud projects create my-gke-project
gcloud config set project my-gke-project
Pick a region
(example: europe-west1, us-west1, ...)
Create the cluster:
gcloud container clusters create my-gke-cluster --region us-west1 --num-nodes=2
(without --num-nodes you might exhaust your IP address quota!)
The first time you try to create a cluster in a given project, you get an error
Cluster should be ready in a couple of minutes
List clusters (if you forgot its name):
gcloud container clusters list
Delete the cluster:
gcloud container clusters delete my-gke-cluster --region us-west1
Delete the project (optional):
gcloud projects delete my-gke-project
Well-rounded product overall
(it used to be one of the best managed Kubernetes offerings available; now that many other providers entered the game, that title is debatable)
The cluster comes with many add-ons
Versions lag a bit:
latest minor version (e.g. 1.18) tends to be unsupported
previous minor version (e.g. 1.17) supported through alpha channel
previous versions (e.g. 1.14-1.16) supported
After creating your account, make sure you set a password or get an API key
(by default, it uses email "magic links" to sign in)
Install scw (you need CLI v2, which is in beta as of May 2020)
Generate the CLI configuration with scw init
(it will prompt for your API key, or email + password)
Create the cluster:
scw k8s cluster create name=my-kapsule-cluster version=1.18.3 cni=cilium \
    default-pool-config.node-type=DEV1-M default-pool-config.size=3
After less than 5 minutes, cluster state will be ready
(check cluster status with e.g. scw k8s cluster list on a wide terminal)
Add connection information to your .kube/config file:
scw k8s kubeconfig install CLUSTERID
(the cluster ID is shown by scw k8s cluster list)
If you want to obtain the cluster ID programmatically, this will do it:
scw k8s cluster list
# or
CLUSTERID=$(scw k8s cluster list -o json | \
    jq -r '.[] | select(.name=="my-kapsule-cluster") | .id')
Get the cluster ID (e.g. with scw k8s cluster list)
Delete the cluster:
scw k8s cluster delete cluster-id=$CLUSTERID
Warning: as of May 2020, load balancers have to be deleted separately!
The create command is a bit more complex than with other providers
(you must specify the Kubernetes version, CNI plugin, and node type)
To see available versions and CNI plugins, run scw k8s version list
As of May 2020, Kapsule supports:
multiple CNI plugins, including: cilium, calico, weave, flannel
Kubernetes versions 1.15 to 1.18
multiple container runtimes, including: Docker, containerd, CRI-O
To see available node types and their price, check their pricing page
Kubernetes distributions and installers
(automatically generated title slide)
Sometimes, we need to run Kubernetes ourselves
(as opposed to "use a managed offering")
Beware: it takes a lot of work to set up and maintain Kubernetes
It might be necessary if you have specific security or compliance requirements
(e.g. national security for states that don't have a suitable domestic cloud)
There are countless distributions available
We can't review them all
We're just going to explore a few options
Deploys Kubernetes using cloud infrastructure
(supports AWS, GCE, Digital Ocean ...)
Leverages special cloud features when possible
(e.g. Auto Scaling Groups ...)
Provisions Kubernetes nodes on top of existing machines
kubeadm init to provision a single-node control plane
kubeadm join to join a node to the cluster
Supports HA control plane with some extra steps
Based on Ansible
Works on bare metal and cloud infrastructure
(good for hybrid deployments)
The expert says: ultra flexible; slow; complex
Opinionated installer with low requirements
Requires a set of machines with Docker + SSH access
Supports highly available etcd and control plane
The expert says: fast; maintenance can be tricky
Sometimes it is necessary to build a custom solution
Example use case:
deploying Kubernetes on OpenStack
... with highly available control plane
... and Cloud Controller Manager integration
Solution: Terraform + kubeadm (kubeadm driven by remote-exec)
Docker Enterprise Edition
Lokomotive, leveraging Terraform and Flatcar Linux
Pivotal Container Service (PKS)
Tarmak, leveraging Puppet and Terraform
Tectonic by CoreOS (now being integrated into Red Hat OpenShift)
Typhoon, leveraging Terraform
VMware Tanzu Kubernetes Grid (TKG)
Each distribution / installer has pros and cons
Before picking one, we should sort out our priorities:
cloud, on-premises, hybrid?
integration with existing network/storage architecture or equipment?
are we storing very sensitive data, like finance, health, military?
how many clusters are we deploying (and maintaining): 2, 10, 50?
which team will be responsible for deployment and maintenance?
(do they need training?)
etc.
Static pods
(automatically generated title slide)
Hosting the Kubernetes control plane on Kubernetes has advantages:
we can use Kubernetes' replication and scaling features for the control plane
we can leverage rolling updates to upgrade the control plane
However, there is a catch:
deploying on Kubernetes requires the API to be available
the API won't be available until the control plane is deployed
How can we get out of that chicken-and-egg problem?
Since each component of the control plane can be replicated...
We could set up the control plane outside of the cluster
Then, once the cluster is fully operational, create replicas running on the cluster
Finally, remove the replicas that are running outside of the cluster
What could possibly go wrong?
What if anything goes wrong?
(During the setup or at a later point)
Worst case scenario, we might need to:
set up a new control plane (outside of the cluster)
restore a backup from the old control plane
move the new control plane to the cluster (again)
This doesn't sound like a great experience
Pods are started by kubelet (an agent running on every node)
To know which pods it should run, the kubelet queries the API server
The kubelet can also get a list of static pods from:
a directory containing one (or multiple) manifests, and/or
a URL (serving a manifest)
These "manifests" are basically YAML definitions
(As produced by kubectl get pod my-little-pod -o yaml)
Kubelet will periodically reload the manifests
It will start/stop pods accordingly
(i.e. it is not necessary to restart the kubelet after updating the manifests)
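For reference, here is a quick way to see where the kubelet looks for these manifests (a sketch; it assumes a kubeadm-style setup where the kubelet configuration lives in /var/lib/kubelet/config.yaml):

```bash
# The static pod directory is set by the staticPodPath field of the kubelet
# configuration (or by the legacy --pod-manifest-path command-line flag):
sudo grep staticPodPath /var/lib/kubelet/config.yaml
# Typical output on a kubeadm cluster:
# staticPodPath: /etc/kubernetes/manifests
```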
When connected to the Kubernetes API, the kubelet will create mirror pods
Mirror pods are copies of the static pods
(so they can be seen with e.g. kubectl get pods)
We can run control plane components with these static pods
They can start without requiring access to the API server
Once they are up and running, the API becomes available
These pods are then visible through the API
(We cannot upgrade them from the API, though)
This is how kubeadm has initialized our clusters.
The API only gives us read-only access to static pods
We can kubectl delete a static pod...
...But the kubelet will re-mirror it immediately
Static pods can be selected just like other pods
(So they can receive service traffic)
A service can select a mixture of static and other pods
Once the control plane is up and running, it can be used to create normal pods
We can then set up a copy of the control plane in normal pods
Then the static pods can be removed
The scheduler and the controller manager use leader election
(Only one is active at a time; removing an instance is seamless)
Each instance of the API server adds itself to the kubernetes service
Etcd will typically require more work!
Alright, but what if the control plane is down and we need to fix it?
We restart it using static pods!
This can be done automatically with the Pod Checkpointer
The Pod Checkpointer automatically generates manifests of running pods
The manifests are used to restart these pods if API contact is lost
(More details in the Pod Checkpointer documentation page)
This technique is used by bootkube
Is it better to run the control plane in static pods, or normal pods?
If I'm a user of the cluster: I don't care, it makes no difference to me
What if I'm an admin, i.e. the person who installs, upgrades, repairs... the cluster?
If I'm using a managed Kubernetes cluster (AKS, EKS, GKE...) it's not my problem
(I'm not the one setting up and managing the control plane)
If I already picked a tool (kubeadm, kops...) to set up my cluster, the tool decides for me
What if I haven't picked a tool yet, or if I'm installing from scratch?
static pods = easier to set up, easier to troubleshoot, less risk of outage
normal pods = easier to upgrade, easier to move (if nodes need to be shut down)
On this cluster, the staticPodPath is /etc/kubernetes/manifests
Have a look at that directory:
ls -l /etc/kubernetes/manifests
We should see YAML files corresponding to the pods of the control plane.
Copy a manifest to the directory:
sudo cp ~/container.training/k8s/just-a-pod.yaml /etc/kubernetes/manifests
Check that it's running:
kubectl get pods
The output should include a pod named hello-node1.
In the manifest, the pod was named hello.
apiVersion: v1
kind: Pod
metadata:
  name: hello
  namespace: default
spec:
  containers:
  - name: hello
    image: nginx
The -node1 suffix was added automatically by kubelet.
If we delete the pod (with kubectl delete), it will be recreated immediately.
To delete the pod, we need to delete (or move) the manifest file.
:EN:- Static pods :FR:- Les static pods
Upgrading clusters
(automatically generated title slide)
It's recommended to run consistent versions across a cluster
(mostly to have feature parity and latest security updates)
It's not mandatory
(otherwise, cluster upgrades would be a nightmare!)
Components can be upgraded one at a time without problems
Log into node test1
Check the version of kubectl and of the API server:
kubectl version
In a HA setup with multiple API servers, they can have different versions
Running the command above multiple times can return different values
kubectl get nodes -o wide
Different nodes can run different kubelet versions
Different nodes can run different kernel versions
Different nodes can run different container engines
List the images used by the pods in the kube-system namespace:
kubectl --namespace=kube-system get pods -o json \
  | jq -r '
      .items[]
      | [.spec.nodeName, .metadata.name]
        + (.spec.containers[].image | split(":"))
      | @tsv
    ' \
  | column -t
When I say, "I'm running Kubernetes 1.15", is that the version of:
kubectl
API server
kubelet
controller manager
something else?
etcd
kube-dns or CoreDNS
CNI plugin(s)
Network controller, network policy controller
Container engine
Linux kernel
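A few quick commands to check some of these versions (nothing exotic; only kubectl features already used elsewhere in this section):

```bash
kubectl version --short    # kubectl and API server versions
kubectl get nodes -o wide  # kubelet, kernel, and container engine versions
# Control plane component versions show up in their image tags:
kubectl --namespace=kube-system get pods \
        -o custom-columns=NAME:.metadata.name,IMAGES:.spec.containers[*].image
```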
To update a component, use whatever was used to install it
If it's a distro package, update that distro package
If it's a container or pod, update that container or pod
If you used configuration management, update with that
Sometimes, we need to upgrade quickly
(when a vulnerability is announced and patched)
If we are using an installer, we should:
make sure it's using upstream packages
or make sure that whatever packages it uses are current
make sure we can tell it to pin specific component versions
Should we upgrade the control plane before or after the kubelets?
Within the control plane, should we upgrade the API server first or last?
How often should we upgrade?
How long are versions maintained?
All the answers are in the documentation about version skew policy!
Let's review the key elements together ...
Kubernetes versions look like MAJOR.MINOR.PATCH; e.g. in 1.17.2, MAJOR is 1, MINOR is 17, and PATCH is 2
It's always possible to mix and match different PATCH releases
(e.g. 1.16.1 and 1.16.6 are compatible)
It is recommended to run the latest PATCH release
(but it's mandatory only when there is a security advisory)
API server must be more recent than its clients (kubelet and control plane)
... Which means it must always be upgraded first
All components support a difference of one¹ MINOR version
This allows live upgrades (since we can mix e.g. 1.15 and 1.16)
It also means that going from 1.14 to 1.16 requires going through 1.15
¹Except kubelet, which can be up to two MINOR behind API server, and kubectl, which can be one MINOR ahead or behind API server.
There is a new PATCH release whenever necessary
(every few weeks, or "ASAP" when there is a security vulnerability)
There is a new MINOR release every 3 months (approximately)
At any given time, three MINOR releases are maintained
... Which means that MINOR releases are maintained approximately 9 months
We should expect to upgrade at least every 3 months (on average)
We are going to update a few cluster components
We will change the kubelet version on one node
We will change the version of the API server
We will work with cluster test (nodes test1, test2, test3)
This cluster has been deployed with kubeadm
The control plane runs in static pods
These pods are started automatically by kubelet
(even when kubelet can't contact the API server)
They are defined in YAML files in /etc/kubernetes/manifests
(this path is set by a kubelet command-line flag)
kubelet automatically updates the pods when the files are changed
Log into node test1
Check API server version:
kubectl version
Edit the API server pod manifest:
sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml
Look for the image: line, and update it to e.g. v1.16.0
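To double-check the edit, we can grep for that line; it should now reference the new version:

```bash
# The exact registry/repository may differ, but the tag should be the new one,
# e.g.  image: k8s.gcr.io/kube-apiserver:v1.16.0
sudo grep 'image:' /etc/kubernetes/manifests/kube-apiserver.yaml
```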
Check the API server version again:
kubectl version
No!
Remember the guideline we gave earlier:
To update a component, use whatever was used to install it.
This control plane was deployed with kubeadm
We should use kubeadm to upgrade it!
Let's make it right, and use kubeadm to upgrade the entire control plane
(note: this is possible only because the cluster was installed with kubeadm)
sudo kubeadm upgrade plan
Note 1: kubeadm thinks that our cluster is running 1.16.0.
It is confused by our manual upgrade of the API server!
Note 2: kubeadm itself is still version 1.15.9.
It doesn't know how to upgrade to 1.16.X.
Upgrade kubeadm:
sudo apt install kubeadm
Check what kubeadm tells us:
sudo kubeadm upgrade plan
Problem: kubeadm doesn't know how to handle upgrades from version 1.15.
This is because we installed version 1.17 (or even later).
We need to install kubeadm version 1.16.X.
View available versions for package kubeadm:
apt show kubeadm -a | grep ^Version | grep 1.16
Downgrade kubeadm:
sudo apt install kubeadm=1.16.6-00
Check what kubeadm tells us:
sudo kubeadm upgrade plan
kubeadm should now agree to upgrade to 1.16.6.
Ideally, we should revert our change to the image: line
(so that kubeadm executes the right migration steps)
Or we can try the upgrade anyway
sudo kubeadm upgrade apply v1.16.6
These nodes have been installed using the official Kubernetes packages
We can therefore use apt or apt-get
Log into node test3
View available versions for package kubelet:
apt show kubelet -a | grep ^Version
Upgrade kubelet:
sudo apt install kubelet=1.16.6-00
Log into node test1
Check node versions:
kubectl get nodes -o wide
Create a deployment and scale it to make sure that the node still works
Almost!
Yes, kubelet was installed with distribution packages
However, kubeadm took care of configuring kubelet
(when doing kubeadm join ...)
We were supposed to run a special command before upgrading kubelet!
That command should be executed on each node
It will download the kubelet configuration generated by kubeadm
We need to upgrade kubeadm, upgrade kubelet config, then upgrade kubelet
(after upgrading the control plane)
for N in 1 2 3; do
  ssh test$N "
    sudo apt install kubeadm=1.16.6-00 &&
    sudo kubeadm upgrade node &&
    sudo apt install kubelet=1.16.6-00"
done
kubectl get nodes -o wide
This example worked because we went from 1.15 to 1.16
If you are upgrading from e.g. 1.14, you will have to go through 1.15 first
This means upgrading kubeadm to 1.15.X, then using it to upgrade the cluster
Then upgrading kubeadm to 1.16.X, etc.
Make sure to read the release notes before upgrading!
:EN:- Best practices for cluster upgrades :EN:- Example: upgrading a kubeadm cluster
:FR:- Bonnes pratiques pour la mise à jour des clusters :FR:- Exemple : mettre à jour un cluster kubeadm
Backing up clusters
(automatically generated title slide)
Backups can have multiple purposes:
disaster recovery (servers or storage are destroyed or unreachable)
error recovery (human or process has altered or corrupted data)
cloning environments (for testing, validation...)
Let's see the strategies and tools available with Kubernetes!
Kubernetes helps us with disaster recovery
(it gives us replication primitives)
Kubernetes helps us clone / replicate environments
(all resources can be described with manifests)
Kubernetes does not help us with error recovery
We still need to back up/snapshot our data:
with database backups (mysqldump, pgdump, etc.)
and/or snapshots at the storage layer
and/or traditional full disk backups
The deployment of our Kubernetes clusters is automated
(recreating a cluster takes less than a minute of human time)
All the resources (Deployments, Services...) on our clusters are under version control
(never use kubectl run; always apply YAML files coming from a repository)
Stateful components are either:
stored on systems with regular snapshots
backed up regularly to an external, durable storage
outside of Kubernetes
If our deployment system isn't fully automated, it should at least be documented
Litmus test: how long does it take to deploy a cluster...
for a senior engineer?
for a new hire?
Does it require external intervention?
(e.g. provisioning servers, signing TLS certs...)
Full machine backups of the control plane can help
If the control plane is in pods (or containers), pay attention to storage drivers
(if the backup mechanism is not container-aware, the backups can take way more resources than they should, or even be unusable!)
If the previous sentence worries you:
automate the deployment of your clusters!
Ideal scenario:
never create a resource directly on a cluster
push to a code repository
a special branch (production or even master) gets automatically deployed
Some folks call this "GitOps"
(it's the logical evolution of configuration management and infrastructure as code)
What do we keep in version control?
For very simple scenarios: source code, Dockerfiles, scripts
For real applications: add resources (as YAML files)
For applications deployed multiple times: Helm, Kustomize...
(staging and production count as "multiple times")
Various tools exist (Weave Flux, GitKube...)
These tools are still very young
You still need to write YAML for all your resources
There is no tool to:
list all resources in a namespace
get resource YAML in a canonical form
diff YAML descriptions with current state
Start describing your resources with YAML
Leverage a tool like Kustomize or Helm
Make sure that you can easily deploy to a new namespace
(or even better: to a new cluster)
When tooling matures, you will be ready
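For instance, here is a very small Kustomize sketch (the manifest file names are hypothetical; they stand for the YAML files mentioned above, and the target namespace is assumed to exist):

```bash
# kustomization.yaml lists our resources and overrides the target namespace
cat > kustomization.yaml <<EOF
namespace: staging
resources:
- deployment.yaml
- service.yaml
EOF

# Render and apply everything into the "staging" namespace
# (kubectl apply -k is available in kubectl 1.14 and later):
kubectl apply -k .
```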
What if we can't describe everything with YAML?
What if we manually create resources and forget to commit them to source control?
What about global resources, that don't live in a namespace?
How can we be sure that we saved everything?
All objects are saved in etcd
etcd data should be relatively small
(and therefore, quick and easy to back up)
Two options to back up etcd:
snapshot the data directory
use etcdctl snapshot
The basic command is simple:
etcdctl snapshot save <filename>
But we also need to specify:
an environment variable to specify that we want etcdctl v3
the address of the server to back up
the path to the key, certificate, and CA certificate
(if our etcd uses TLS certificates)
The following command will work on clusters deployed with kubeadm
(and maybe others)
It should be executed on a master node
docker run --rm --net host -v $PWD:/vol \
  -v /etc/kubernetes/pki/etcd:/etc/kubernetes/pki/etcd:ro \
  -e ETCDCTL_API=3 k8s.gcr.io/etcd:3.3.10 \
  etcdctl --endpoints=https://[127.0.0.1]:2379 \
          --cacert=/etc/kubernetes/pki/etcd/ca.crt \
          --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
          --key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
          snapshot save /vol/snapshot
This will create a file named snapshot in the current directory
Older versions of kubeadm did add a healthcheck probe with all these flags
That healthcheck probe was calling etcdctl with all the right flags
With recent versions of kubeadm, we're on our own!
Exercise: write the YAML for a batch job to perform the backup
(how will you access the key and certificate required to connect?)
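If you want a starting point for that exercise, here is one possible (non-official) sketch: it runs etcdctl in a Job pinned to a control plane node, and mounts the certificates and a backup directory from the host with hostPath volumes.

```bash
kubectl apply -f - <<EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: etcd-backup
spec:
  template:
    spec:
      restartPolicy: Never
      # Run on a control plane node, next to etcd
      nodeSelector:
        node-role.kubernetes.io/master: ""
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      hostNetwork: true
      containers:
      - name: etcdctl
        image: k8s.gcr.io/etcd:3.3.10
        command:
        - etcdctl
        - --endpoints=https://127.0.0.1:2379
        - --cacert=/etc/kubernetes/pki/etcd/ca.crt
        - --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt
        - --key=/etc/kubernetes/pki/etcd/healthcheck-client.key
        - snapshot
        - save
        - /backup/snapshot
        env:
        - name: ETCDCTL_API
          value: "3"
        volumeMounts:
        - { name: pki, mountPath: /etc/kubernetes/pki/etcd, readOnly: true }
        - { name: backup, mountPath: /backup }
      volumes:
      - name: pki
        hostPath: { path: /etc/kubernetes/pki/etcd }
      - name: backup
        hostPath: { path: /var/tmp/etcd-backup }
EOF
```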
Execute exactly the same command, but replacing save with restore
(Believe it or not, doing that will not do anything useful!)
The restore command does not load a snapshot into a running etcd server
The restore command creates a new data directory from the snapshot
(it's an offline operation; it doesn't interact with an etcd server)
It will create a new data directory in a temporary container
(leaving the running etcd node untouched)
Create a new data directory from the snapshot:
sudo rm -rf /var/lib/etcd
docker run --rm -v /var/lib:/var/lib -v $PWD:/vol \
  -e ETCDCTL_API=3 k8s.gcr.io/etcd:3.3.10 \
  etcdctl snapshot restore /vol/snapshot --data-dir=/var/lib/etcd
Provision the control plane, using that data directory:
sudo kubeadm init \
  --ignore-preflight-errors=DirAvailable--var-lib-etcd
Rejoin the other nodes
This only saves etcd state
It does not save persistent volumes and local node data
Some critical components (like the pod network) might need to be reset
As a result, our pods might have to be recreated, too
If we have proper liveness checks, this should happen automatically
Kubernetes documentation about etcd backups
etcd documentation about snapshots and restore
A good blog post by elastisys explaining how to restore a snapshot
Another good blog post by consol labs on the same topic
Also back up the TLS information
(at the very least: CA key and cert; API server key and cert)
With clusters provisioned by kubeadm, this is in /etc/kubernetes/pki
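For instance, a crude way to snapshot that directory (just a sketch; in real life, store the archive in a safe, encrypted location):

```bash
# Archive the whole PKI directory, timestamped
sudo tar czf kubernetes-pki-$(date +%Y%m%d).tar.gz -C /etc/kubernetes pki
```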
If you don't:
you will still be able to restore etcd state and bring everything back up
you will need to redistribute user certificates
TLS information is highly sensitive!
Anyone who has it has full access to your cluster!
It's totally fine to keep your production databases outside of Kubernetes
Especially if you have only one database server!
Feel free to put development and staging databases on Kubernetes
(as long as they don't hold important data)
Using Kubernetes for stateful services makes sense if you have many
(because then you can leverage Kubernetes automation)
Option 1: snapshot volumes out of band
(with the API/CLI/GUI of our SAN/cloud/...)
Option 2: storage system integration
(e.g. Portworx can create snapshots through annotations)
Option 3: snapshots through Kubernetes API
(now in alpha for a few storage providers: GCE, OpenSDS, Ceph, Portworx)
Stash: back up Kubernetes persistent volumes
ReShifter: cluster state management
Heptio Ark / Velero: full cluster backup
kube-backup: simple scripts to save resource YAML to a git repository
bivac: Backup Interface for Volumes Attached to Containers
:EN:- Backing up clusters :FR:- Politiques de sauvegarde
Pod Security Policies
(automatically generated title slide)
By default, our pods and containers can do everything
(including taking over the entire cluster)
We are going to show an example of a malicious pod
Then we will explain how to avoid this with PodSecurityPolicies
We will enable PodSecurityPolicies on our cluster
We will create a couple of policies (restricted and permissive)
Finally we will see how to use them to improve security on our cluster
For simplicity, let's work in a separate namespace
Let's create a new namespace called "green"
Create the "green" namespace:
kubectl create namespace green
Change to that namespace:
kns green
Create a Deployment using the official NGINX image:
kubectl create deployment web --image=nginx
Confirm that the Deployment, ReplicaSet, and Pod exist, and that the Pod is running:
kubectl get all
We will now show an escalation technique in action
We will deploy a DaemonSet that adds our SSH key to the root account
(on each node of the cluster)
The Pods of the DaemonSet will do so by mounting /root from the host
Check the file k8s/hacktheplanet.yaml with a text editor:
vim ~/container.training/k8s/hacktheplanet.yaml
If you would like, change the SSH key (by changing the GitHub user name)
Create the DaemonSet:
kubectl create -f ~/container.training/k8s/hacktheplanet.yaml
Check that the pods are running:
kubectl get pods
Confirm that the SSH key was added to the node's root account:
sudo cat /root/.ssh/authorized_keys
Remove the DaemonSet:
kubectl delete daemonset hacktheplanet
Remove the Deployment:
kubectl delete deployment web
To use PSPs, we need to activate their specific admission controller
That admission controller will intercept each pod creation attempt
It will look at:
who/what is creating the pod
which PodSecurityPolicies they can use
which PodSecurityPolicies can be used by the Pod's ServiceAccount
Then it will compare the Pod with each PodSecurityPolicy one by one
If a PodSecurityPolicy accepts all the parameters of the Pod, it is created
Otherwise, the Pod creation is denied and it won't even show up in kubectl get pods
With RBAC, using a PSP corresponds to the verb use on the PSP
(that makes sense, right?)
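To make that concrete, here is a minimal sketch of what a PSP and its matching ClusterRole look like (for illustration only; the policies we will actually use later come from files in the repository):

```bash
# No need to apply this; it only shows the shape of the two objects.
cat > psp-example.yaml <<EOF
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: example
spec:
  privileged: false
  seLinux:            { rule: RunAsAny }
  runAsUser:          { rule: RunAsAny }
  supplementalGroups: { rule: RunAsAny }
  fsGroup:            { rule: RunAsAny }
  volumes: [ "configMap", "secret", "emptyDir", "persistentVolumeClaim" ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp:example
rules:
- apiGroups:     [ "policy" ]
  resources:     [ "podsecuritypolicies" ]
  resourceNames: [ "example" ]
  verbs:         [ "use" ]
EOF
```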
If no PSP is defined, no Pod can be created
(even by cluster admins)
Pods that are already running are not affected
If we create a Pod directly, it can use a PSP to which we have access
If the Pod is created by e.g. a ReplicaSet or DaemonSet, it's different:
the ReplicaSet / DaemonSet controllers don't have access to our policies
therefore, we need to give access to the PSP to the Pod's ServiceAccount
We are going to enable the PodSecurityPolicy admission controller
At that point, we won't be able to create any more pods (!)
Then we will create a couple of PodSecurityPolicies
...And associated ClusterRoles (giving use access to the policies)
Then we will create RoleBindings to grant these roles to ServiceAccounts
We will verify that we can't run our "exploit" anymore
To enable Pod Security Policies, we need to enable their admission plugin
This is done by adding a flag to the API server
On clusters deployed with kubeadm, the control plane runs in static pods
These pods are defined in YAML files located in /etc/kubernetes/manifests
Kubelet watches this directory
Each time a file is added/removed there, kubelet creates/deletes the corresponding pod
Updating a file causes the pod to be deleted and recreated
Have a look at the static pods:
ls -l /etc/kubernetes/manifests
Edit the one corresponding to the API server:
sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml
There should already be a line with --enable-admission-plugins=...
Let's add PodSecurityPolicy on that line
Locate the line with --enable-admission-plugins=
Add PodSecurityPolicy
It should read: --enable-admission-plugins=NodeRestriction,PodSecurityPolicy
Save, quit
The kubelet detects that the file was modified
It kills the API server pod, and starts a new one
During that time, the API server is unavailable
Try to create a Pod directly:
kubectl run testpsp1 --image=nginx --restart=Never
Try to create a Deployment:
kubectl create deployment testpsp2 --image=nginx
Look at existing resources:
kubectl get all
We can get hints at what's happening by looking at the ReplicaSet and Events.
We will create two policies:
privileged (allows everything)
restricted (blocks some unsafe mechanisms)
For each policy, we also need an associated ClusterRole granting use of that policy
We have a couple of files, each defining a PSP and associated ClusterRole:
k8s/psp-privileged.yaml: policy privileged, role psp:privileged
k8s/psp-restricted.yaml: policy restricted, role psp:restricted
kubectl create -f ~/container.training/k8s/psp-restricted.yaml
kubectl create -f ~/container.training/k8s/psp-privileged.yaml
The privileged policy comes from the Kubernetes documentation
The restricted policy is inspired by that same documentation page
We haven't bound the policy to any user yet
But cluster-admin can implicitly use all policies
Check that we can now create a Pod directly:
kubectl run testpsp3 --image=nginx --restart=Never
Create a Deployment as well:
kubectl create deployment testpsp4 --image=nginx
Confirm that the Deployment is not creating any Pods:
kubectl get all
We can create Pods directly (thanks to our root-like permissions)
The Pods corresponding to a Deployment are created by the ReplicaSet controller
The ReplicaSet controller does not have root-like permissions
We need to either:
grant the use permission to the ReplicaSet controller, or
grant the use permission to the Pods' ServiceAccount
The first option would allow anyone to create pods
The second option will allow us to scope the permissions better
Let's bind the role psp:restricted to ServiceAccount green:default
(aka the default ServiceAccount in the green Namespace)
This will allow Pod creation in the green Namespace
(because these Pods will be using that ServiceAccount automatically)
kubectl create rolebinding psp:restricted \
        --clusterrole=psp:restricted \
        --serviceaccount=green:default
The Deployments that we created earlier will eventually recover
(the ReplicaSet controller will retry to create Pods once in a while)
If we create a new Deployment now, it should work immediately
Create a simple Deployment:
kubectl create deployment testpsp5 --image=nginx
Look at the Pods that have been created:
kubectl get all
Create a hostile DaemonSet:
kubectl create -f ~/container.training/k8s/hacktheplanet.yaml
Look at the state of the namespace:
kubectl get all
The restricted PSP is similar to the one provided in the docs, but:
it allows containers to run as root
it doesn't drop capabilities
Many containers run as root by default, and would require additional tweaks
Many containers use e.g. chown, which requires a specific capability
(that's the case for the NGINX official image, for instance)
We still block: hostPath, privileged containers, and much more!
If we list the pods in the kube-system namespace, kube-apiserver is missing
However, the API server is obviously running
(otherwise, kubectl get pods --namespace=kube-system wouldn't work)
The API server Pod is created directly by kubelet
(without going through the PSP admission plugin)
Then, kubelet creates a "mirror pod" representing that Pod in etcd
That "mirror pod" creation goes through the PSP admission plugin
And it gets blocked!
This can be fixed by binding psp:privileged to the group system:nodes
Our cluster is currently broken
(we can't create pods in namespaces kube-system, default, ...)
We need to either:
disable the PSP admission plugin
allow use of PSP to relevant users and groups
For instance, we could:
bind psp:restricted to the group system:authenticated
bind psp:privileged to the ServiceAccount kube-system:default
(see the sketch below)
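Here is a sketch of what those bindings could look like (the binding names are arbitrary; this assumes the psp:restricted and psp:privileged ClusterRoles created earlier):

```bash
# Let every authenticated user "use" the restricted policy:
kubectl create clusterrolebinding psp:authenticated:restricted \
        --clusterrole=psp:restricted --group=system:authenticated

# Let the default ServiceAccount of kube-system "use" the privileged policy:
kubectl create rolebinding psp:default:privileged \
        --namespace=kube-system \
        --clusterrole=psp:privileged --serviceaccount=kube-system:default
```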
Edit the Kubernetes API server static pod manifest
Remove the PSP admission plugin
This can be done with this one-liner:
sudo sed -i s/,PodSecurityPolicy// /etc/kubernetes/manifests/kube-apiserver.yaml
:EN:- Preventing privilege escalation with Pod Security Policies :FR:- Limiter les droits des conteneurs avec les Pod Security Policies
The CSR API
(automatically generated title slide)
The Kubernetes API exposes CSR resources
We can use these resources to issue TLS certificates
First, we will go through a quick reminder about TLS certificates
Then, we will see how to obtain a certificate for a user
We will use that certificate to authenticate with the cluster
Finally, we will grant some privileges to that user
TLS (Transport Layer Security) is a protocol providing:
encryption (to prevent eavesdropping)
authentication (using public key cryptography)
When we access an https:// URL, the server authenticates itself
(it proves its identity to us; as if it were "showing its ID")
But we can also have mutual TLS authentication (mTLS)
(client proves its identity to server; server proves its identity to client)
To authenticate, someone (client or server) needs:
a private key (that remains known only to them)
a public key (that they can distribute)
a certificate (associating the public key with an identity)
A message encrypted with the private key can only be decrypted with the public key
(and vice versa)
If I use someone's public key to encrypt/decrypt their messages,
I can be certain that I am talking to them / they are talking to me
The certificate proves that I have the correct public key for them
This is what I do if I want to obtain a certificate.
Create public and private keys.
Create a Certificate Signing Request (CSR).
(The CSR contains the identity that I claim and a public key.)
Send that CSR to the Certificate Authority (CA).
The CA verifies that I can claim the identity in the CSR.
The CA generates my certificate and gives it to me.
The CA (or anyone else) never needs to know my private key.
The Kubernetes API has a CertificateSigningRequest resource type
(we can list them with e.g. kubectl get csr)
We can create a CSR object
(= upload a CSR to the Kubernetes API)
Then, using the Kubernetes API, we can approve/deny the request
If we approve the request, the Kubernetes API generates a certificate
The certificate gets attached to the CSR object and can be retrieved
We will show how to use the CSR API to obtain user certificates
This will be a rather complex demo
... And yet, we will take a few shortcuts to simplify it
(but it will illustrate the general idea)
The demo also won't be automated
(we would have to write extra code to make it fully functional)
We will create a Namespace named "users"
Each user will get a ServiceAccount in that Namespace
That ServiceAccount will give read/write access to one CSR object
Users will use that ServiceAccount's token to submit a CSR
We will approve the CSR (or not)
Users can then retrieve their certificate from their CSR object
...And use that certificate for subsequent interactions
For a user named jean.doe, we will have:
ServiceAccount jean.doe in Namespace users
CertificateSigningRequest user=jean.doe
ClusterRole user=jean.doe giving read/write access to that CSR
ClusterRoleBinding user=jean.doe binding ClusterRole and ServiceAccount
Most Kubernetes identifiers and names are fairly restricted
They generally are DNS-1123 labels or subdomains (from RFC 1123)
A label is lowercase letters, numbers, dashes; can't start or finish with a dash
A subdomain is one or multiple labels separated by dots
Some resources have more relaxed constraints, and can be "path segment names"
(uppercase letters are allowed, as well as some characters like #:?!,_)
This includes RBAC objects (like Roles, RoleBindings...) and CSRs
See the Identifiers and Names design document and the Object Names and IDs documentation page for more details
If you want to use a name other than jean.doe, update the YAML file!
Create the global namespace for all users:
kubectl create namespace users
Create the ServiceAccount, ClusterRole, ClusterRoleBinding for jean.doe:
kubectl apply -f ~/container.training/k8s/user=jean.doe.yaml
Let's obtain the user's token and give it to them
(the token will be their password)
List the user's secrets:
kubectl --namespace=users describe serviceaccount jean.doe
Show the user's token:
kubectl --namespace=users describe secret jean.doe-token-xxxxx
Let's configure kubectl to use the token
Add a new identity to our kubeconfig file:
kubectl config set-credentials token:jean.doe --token=...
Add a new context using that identity:
kubectl config set-context jean.doe --user=token:jean.doe --cluster=kubernetes
(Make sure to adapt the cluster name if yours is different!)
Use that context:
kubectl config use-context jean.doe
Try to access any resource:
kubectl get pods
(This should tell us "Forbidden")
Try to access "our" CertificateSigningRequest:
kubectl get csr user=jean.doe
(This should tell us "NotFound")
There are many tools to generate TLS keys and CSRs
Let's use OpenSSL; it's not the best one, but it's installed everywhere
(many people prefer cfssl, easyrsa, or other tools; that's fine too!)
openssl req -newkey rsa:2048 -nodes -keyout key.pem \
        -new -subj /CN=jean.doe/O=devs/ -out csr.pem
The command above generates:
a private key (in key.pem)
a CSR (in csr.pem) for the user jean.doe in group devs
The Kubernetes CSR object is a thin wrapper around the CSR PEM file
The PEM file needs to be encoded to base64 on a single line
(we will use base64 -w0 for that purpose)
The Kubernetes CSR object also needs to list the right "usages"
(these are flags indicating how the certificate can be used)
kubectl apply -f - <<EOF
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: user=jean.doe
spec:
  request: $(base64 -w0 < csr.pem)
  usages:
  - digital signature
  - key encipherment
  - client auth
EOF
By default, the CSR API generates certificates valid 1 year
We want to generate short-lived certificates, so we will lower that to 1 hour
For now, this is configured through an experimental controller manager flag
Edit the static pod definition for the controller manager:
sudo vim /etc/kubernetes/manifests/kube-controller-manager.yaml
In the list of flags, add the following line:
- --experimental-cluster-signing-duration=1h
Switch back to cluster-admin:
kctx -
Inspect the CSR:
kubectl describe csr user=jean.doe
Approve it:
kubectl certificate approve user=jean.doe
Switch back to the user's identity:
kctx -
Retrieve the updated CSR object and extract the certificate:
kubectl get csr user=jean.doe \
        -o jsonpath={.status.certificate} \
        | base64 -d > cert.pem
Inspect the certificate:
openssl x509 -in cert.pem -text -noout
Add the key and certificate to kubeconfig:
kubectl config set-credentials cert:jean.doe --embed-certs \
        --client-certificate=cert.pem --client-key=key.pem
Update the user's context to use the key and cert to authenticate:
kubectl config set-context jean.doe --user cert:jean.doe
Confirm that we are seen as jean.doe (but don't have permissions):
kubectl get pods
We have just shown, step by step, a method to issue short-lived certificates for users.
To be usable in real environments, we would need to add:
a kubectl helper to automatically generate the CSR and obtain the cert
(and transparently renew the cert when needed)
a Kubernetes controller to automatically validate and approve CSRs
(checking that the subject and groups are valid)
a way for the users to know the groups to add to their CSR
(e.g.: annotations on their ServiceAccount + read access to the ServiceAccount)
Larger organizations typically integrate with their own directory
The general principle, however, is the same:
users have long-term credentials (password, token, ...)
they use these credentials to obtain other, short-lived credentials
This provides enhanced security:
the long-term credentials can use long passphrases, 2FA, HSM...
the short-term credentials are more convenient to use
we get strong security and convenience
Systems like Vault also have certificate issuance mechanisms
:EN:- Generating user certificates with the CSR API :FR:- Génération de certificats utilisateur avec la CSR API
OpenID Connect
(automatically generated title slide)
The Kubernetes API server can perform authentication with OpenID Connect
This requires an OpenID provider
(external authorization server using the OAuth 2.0 protocol)
We can use a third-party provider (e.g. Google) or run our own (e.g. Dex)
We are going to give an overview of the protocol
We will show it in action (in a simplified scenario)
We want to access our resources (a Kubernetes cluster)
We authenticate with the OpenID provider
we can do this directly (e.g. by going to https://accounts.google.com)
or maybe a kubectl plugin can open a browser page on our behalf
After authenticating us, the OpenID provider gives us:
an id token (a short-lived signed JSON Web Token, see next slide)
a refresh token (to renew the id token when needed)
We can now issue requests to the Kubernetes API with the id token
The API server will verify that token's content to authenticate us
A JSON Web Token (JWT) has three parts:
a header specifying algorithms and token type
a payload (indicating who issued the token, for whom, which purposes...)
a signature generated by the issuer (the issuer = the OpenID provider)
Anyone can verify a JWT without contacting the issuer
(except to obtain the issuer's public key)
Pro tip: we can inspect a JWT with https://jwt.io/
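If we prefer the terminal over jwt.io, here is a rough equivalent (a sketch; it assumes the token is stored in the JWT environment variable and that jq is installed):

```bash
# The payload is the second dot-separated field, encoded in base64url;
# convert it to regular base64, fix the padding, then decode and pretty-print:
payload=$(echo "$JWT" | cut -d. -f2 | tr '_-' '/+')
while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="$payload="; done
echo "$payload" | base64 -d | jq .
```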
Server side
enable OIDC authentication
indicate which issuer (provider) should be allowed
indicate which audience (or "client id") should be allowed
optionally, map or prefix user and group names
Client side
obtain JWT as described earlier
pass JWT as authentication token
renew JWT when needed (using the refresh token)
We will use Google Accounts as our OpenID provider
We will use the Google OAuth Playground as the "audience" or "client id"
We will obtain a JWT through Google Accounts and the OAuth Playground
We will enable OIDC in the Kubernetes API server
We will use the JWT to authenticate
If you can't or won't use a Google account, you can try to adapt this to another provider.
The API server logs will be particularly useful in this section
(they will indicate e.g. why a specific token is rejected)
Let's keep an eye on the API server output!
kubectl logs kube-apiserver-node1 --follow --namespace=kube-system
We will use the Google OAuth Playground for convenience
In a real scenario, we would need our own OAuth client instead of the playground
(even if we were still using Google as the OpenID provider)
Open the Google OAuth Playground:
https://developers.google.com/oauthplayground/
Enter our own custom scope in the text field:
https://www.googleapis.com/auth/userinfo.email
Click on "Authorize APIs" and allow the playground to access our email address
The previous step gave us an "authorization code"
We will use it to obtain tokens
The JWT is the very long id_token that shows up on the right hand side
(it is a base64-encoded JSON object, and should therefore start with eyJ)
We need to create a context (in kubeconfig) for our token
(if we just add the token or use kubectl --token, our certificate will still be used)
Create a new authentication section in kubeconfig:
kubectl config set-credentials myjwt --token=eyJ...
Try to use it:
kubectl --user=myjwt get nodes
We should get an Unauthorized response, since we haven't enabled OpenID Connect in the API server yet.
We should also see invalid bearer token in the API server log output.
We need to add a few flags to the API server configuration
These two are mandatory:
--oidc-issuer-url → URL of the OpenID provider
--oidc-client-id → app requesting the authentication
(in our case, that's the ID for the Google OAuth Playground)
This one is optional:
--oidc-username-claim → which field should be used as user name
(we will use the user's email address instead of an opaque ID)
See the API server documentation for more details about all available flags
The instructions below will work for clusters deployed with kubeadm
(or where the control plane is deployed in static pods)
If your cluster is deployed differently, you will need to adapt them
Edit /etc/kubernetes/manifests/kube-apiserver.yaml
Add the following lines to the list of command-line flags:
- --oidc-issuer-url=https://accounts.google.com
- --oidc-client-id=407408718192.apps.googleusercontent.com
- --oidc-username-claim=email
The kubelet monitors the files in /etc/kubernetes/manifests
When we save the pod manifest, kubelet will restart the corresponding pod
(using the updated command line flags)
After making the changes described on the previous slide, save the file
Issue a simple command (like kubectl version) until the API server is back up
(it might take between a few seconds and one minute for the API server to restart)
Restart the kubectl logs command to view the logs of the API server
Then, try to access resources with our token again:
kubectl --user=myjwt get nodes
kubectl --user=myjwt get pods
We should see a message like:
Error from server (Forbidden): nodes is forbidden: User "jean.doe@gmail.com" cannot list resource "nodes" in API group "" at the cluster scope
→ We were successfully authenticated, but not authorized.
As an extra step, let's grant read access to our user
We will use the pre-defined ClusterRole view
Create a ClusterRoleBinding allowing us to view resources:
kubectl create clusterrolebinding i-can-view \
        --user=jean.doe@gmail.com --clusterrole=view
(make sure to put your Google email address there)
Confirm that we can now list pods with our token:
kubectl --user=myjwt get pods
This was a very simplified demo! In a real deployment...
We wouldn't use the Google OAuth Playground
We probably wouldn't even use Google at all
(it doesn't seem to provide a way to include groups!)
Some popular alternatives: Dex, Keycloak
We would use a helper (like the kubelogin plugin) to automatically obtain tokens
The tokens used by Service Accounts are JWT tokens as well
They are signed and verified using a special service account key pair
Extract the token of a service account in the current namespace:
kubectl get secrets -o jsonpath={..token} | base64 -d
Copy-paste the token to a verification service like https://jwt.io
Notice that it says "Invalid Signature"
JSON Web Tokens embed the URL of the "issuer" (=OpenID provider)
The issuer provides its public key through a well-known discovery endpoint
(similar to https://accounts.google.com/.well-known/openid-configuration)
There is no such endpoint for the Service Account key pair
But we can provide the public key ourselves for verification
On clusters provisioned with kubeadm, the Service Account key pair is:
/etc/kubernetes/pki/sa.key (used by the controller manager to generate tokens)
/etc/kubernetes/pki/sa.pub (used by the API server to validate the same tokens)
Display the public key used to sign Service Account tokens:
sudo cat /etc/kubernetes/pki/sa.pub
Copy-paste the key in the "verify signature" area on https://jwt.io
It should now say "Signature Verified"
:EN:- Authenticating with OIDC :FR:- S'identifier avec OIDC