Kubernetes is known for being complex and for its steep learning curve. Over time, many tools have been developed to spare you the daunting task of setting everything up yourself. Crucial parts of a working Kubernetes cluster are now “hidden” behind user-friendly tools that automate essential tasks, and you no longer need to worry about the underlying structure. Or do you? When it comes to managing any type of system, one rule always applies: in order to fully master a system, you need to fully understand the architecture behind it. Due to the complexity of Kubernetes, it is simply impossible to learn about all its parts and components in a single session. With that in mind, the following will focus on the role of TLS certificates in a basic Kubernetes cluster. By the end, you should have a rough idea of when and how TLS certificates might be used.

Disclaimer: While TLS is used to secure communication, please note that this article is not about cluster security, and it will not evaluate how secure the TLS certificates in the examples below are. The aim of this article is solely to provide an overview of where TLS certificates can be used, so that you have a better understanding of Kubernetes architecture.

Basics of TLS Certificates

Before focusing on Kubernetes, let’s briefly review how TLS certificates work in general. To obtain a server or client certificate, you send a Certificate Signing Request (CSR) to a trusted Certificate Authority (CA). The CA uses its private key to sign the certificate. The certificate binds a public key to the subject (identifying information, e.g., a name) that owns the corresponding private key. Each certificate is valid for a set time frame (notBefore, notAfter).

Note: If the certificate requester uses their own private key to sign their own certificate, the resulting certificate is known as a “self-signed certificate.” The certificates of root CAs are usually self-signed.
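
To make these relationships concrete, here is a minimal sketch using Go’s standard library: it creates a self-signed root CA and then uses that CA to sign a server certificate. All names (example-root-ca, server.example.com) and validity periods are made up for illustration; in a real exchange, the CA would build the certificate from the requester’s CSR rather than from a template of its own.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"log"
	"math/big"
	"time"
)

func main() {
	// The CA generates its own key pair and a self-signed root certificate:
	// template and parent are the same certificate, signed with its own key.
	caKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "example-root-ca"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0), // validity window
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// The server has its own key pair; the CA binds the server's *public*
	// key to the server's identity by signing with the CA's *private* key.
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "server.example.com"},
		DNSNames:     []string{"server.example.com"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	srvCert, _ := x509.ParseCertificate(srvDER)
	fmt.Println(srvCert.Subject, srvCert.NotBefore, srvCert.NotAfter)
}
```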

When a client wants to communicate with a server, the two parties need to successfully complete a TLS handshake in order to establish a shared secret. We will not go into detail about the handshake itself; the part that is relevant here is the exchange of certificates.

Example:

Let’s assume there is a “server” with a certificate and a “client” that sends a request. The server sends its server certificate to the client. The client makes sure that the server’s information matches the information in the certificate. The client then uses the public key of the respective CA to verify that the certificate was actually signed by that CA. For this to work, the client needs to trust the respective CA (the trust anchor). Once the client has verified the authenticity of the connection using the server certificate, the server and the client can establish encrypted communication.
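
For illustration, here is a minimal sketch of the client’s side of this verification in Go, assuming the trust anchor is available as a local PEM file (the file name and server address are made up):

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"os"
)

func main() {
	// Load the root CA certificate the client trusts (the trust anchor).
	caPEM, err := os.ReadFile("root-ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		log.Fatal("failed to parse root CA certificate")
	}

	// crypto/tls verifies the server's certificate chain against RootCAs
	// and checks that the certificate matches the host name being dialed.
	conn, err := tls.Dial("tcp", "server.example.com:443", &tls.Config{RootCAs: pool})
	if err != nil {
		log.Fatal(err) // verification failures surface here
	}
	defer conn.Close()
	log.Println("server authenticated as", conn.ConnectionState().PeerCertificates[0].Subject)
}
```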

The example above is a basic server-only TLS handshake. TLS can also include an optional step of client certificate authentication; a TLS handshake with this option is often referred to as mutual TLS (mTLS). With mTLS, both the server and the client are expected to provide a valid certificate: the server sends its certificate to the client and requests the client’s certificate in return. Both entities verify the authenticity of the certificate they receive. If both certificates are valid, the communication continues. If the client does not provide a certificate, the server can decide whether it wants to continue the communication.
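
Here is the corresponding sketch of a server configured for mTLS, again with illustrative file names: it presents its own certificate and only accepts clients whose certificates were signed by a CA it trusts.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"net/http"
	"os"
)

func main() {
	// CA used to verify *client* certificates.
	caPEM, err := os.ReadFile("client-ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	clientCAs := x509.NewCertPool()
	clientCAs.AppendCertsFromPEM(caPEM)

	server := &http.Server{
		Addr: ":8443",
		TLSConfig: &tls.Config{
			ClientCAs: clientCAs,
			// RequireAndVerifyClientCert rejects clients that present no valid
			// certificate; VerifyClientCertIfGiven would instead let the server
			// decide how to treat clients that send none.
			ClientAuth: tls.RequireAndVerifyClientCert,
		},
	}
	// The server's own certificate and key cover the server-authentication half.
	log.Fatal(server.ListenAndServeTLS("server.pem", "server-key.pem"))
}
```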

TLS Certificates in Kubernetes

With the basics of TLS and mTLS in mind, it is now time to look at Kubernetes. The official Kubernetes documentation includes a full overview of the required certificates, but it assumes that you create your cluster with kubeadm.

Certificate overview with kubeadm

Note: Intermediate CAs are allowed, but that goes beyond the scope of this article.

It is possible to use a single CA for all certificates. However, Kubernetes recommends using three different CAs. The certificates for the control plane components and the kubelet are always required. The certificate for the extension API is only required if you run kube-proxy to support an extension API server. Extension APIs enable a cluster to support new resource types that are not provided by Kubernetes itself; when you use kubectl to create one of these new resources, Kubernetes forwards the request to the API you added (the extension API server).

Looking at the certificates themselves shows that the etcd nodes need to authenticate as servers and as clients, yet they only have one certificate each. That’s because they use a single certificate for both server authentication and client authentication. For the other components, each certificate is usually used for one purpose only: either server authentication or client authentication.
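
This dual purpose is visible in a certificate’s extended key usage field. As an illustration, the following Go sketch prints what a given certificate (an assumed local file, etcd-server.pem) may be used for:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("etcd-server.pem") // illustrative file name
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM data found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// A dual-purpose certificate, like etcd's, lists both usages here.
	for _, eku := range cert.ExtKeyUsage {
		switch eku {
		case x509.ExtKeyUsageServerAuth:
			fmt.Println("usable for server authentication")
		case x509.ExtKeyUsageClientAuth:
			fmt.Println("usable for client authentication")
		}
	}
}
```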

When you set up a Kubernetes cluster manually, you should also be aware of the default authentication methods, as they set the minimum requirements for CAs and TLS certificates. Here is an overview of the minimum certificates required to install a Kubernetes cluster manually.

Minimum certificate overview with manual installation

Note that this is the minimum requirement, not the recommended option. In terms of understanding the Kubernetes architecture, the table above is quite abstract and does not say much about where and how those certificates are used. For a better understanding, below you’ll see a basic overview of the default authentication methods between the main components.

If you want to increase security, you can use other authentication methods and more certificates. For now, let’s focus on the default settings.

Requests to the API server

The API server listens on an HTTPS port with client authentication enabled. This means that clients always need to provide some form of client credential, for example client certificates, bearer tokens or an authenticating proxy. For control plane components and the kubelet, client certificates are recommended.

First, let’s discuss requests from worker node components to the API server. The authentication method of kube-proxy depends on the way it was installed on the worker node:

  • With kubeadm, kube-proxy is installed as a DaemonSet.
  • With manual installation, kube-proxy is installed directly as a process on the node.

The pods managed by a DaemonSet have a service account, and each service account automatically comes with a signed bearer token. With kubeadm, kube-proxy uses the service account’s bearer token to authenticate to the API server.

If you install kube-proxy manually, the process on the worker node has no bearer token or any other client credential. To authenticate to the API server, you need to create a client certificate for kube-proxy.

This is why the certificate overview with kubeadm does not include a client certificate for kube-proxy, while the certificate overview with manual installation does.
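
To tie this back to the TLS basics above, here is a minimal sketch of generating a CSR for such a kube-proxy client certificate in Go. Kubernetes reads the Common Name of a client certificate as the user name (and Organization fields as group memberships), and the kube-proxy user is conventionally named system:kube-proxy; the resulting CSR would then be signed by the cluster CA.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"os"
)

func main() {
	// kube-proxy's own key pair; only the public half ends up in the CSR.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	csrDER, err := x509.CreateCertificateRequest(rand.Reader, &x509.CertificateRequest{
		Subject: pkix.Name{CommonName: "system:kube-proxy"}, // read as the user name
	}, key)
	if err != nil {
		log.Fatal(err)
	}
	// Print the PEM-encoded CSR, ready to be handed to the cluster CA.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE REQUEST", Bytes: csrDER})
}
```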

Additional information about service accounts: If your pods need to communicate with the API server, they can securely do so via a service account. When a pod is scheduled, the containers within the pod mount a secret of type “kubernetes.io/service-account-token”, which contains the trust anchor for the API server’s server certificate (the root CA), the namespace of the pod, and the token. Each token is signed by the token controller, which runs as part of kube-controller-manager and needs a private key for this purpose.
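
Inside a running pod, these three pieces of information are mounted as plain files. Here is a minimal Go sketch that reads them; the paths are the well-known defaults:

```go
package main

import (
	"fmt"
	"log"
	"os"
)

// saDir is the default mount point for service account credentials.
const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"

func main() {
	token, err := os.ReadFile(saDir + "/token") // the signed bearer token
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := os.ReadFile(saDir + "/ca.crt") // trust anchor for the API server's certificate
	if err != nil {
		log.Fatal(err)
	}
	namespace, err := os.ReadFile(saDir + "/namespace") // the pod's namespace
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("namespace=%s token=%d bytes ca=%d bytes\n", namespace, len(token), len(caCert))
}
```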

Second, let’s discuss requests from control plane components and service accounts to the API server. Both usually send their API server requests to the DNS name “kubernetes.default.svc”. The IP address assigned to the “kubernetes” service in the “default” namespace is redirected to the HTTPS endpoint on the API server. There, the control plane components and the service accounts need to provide their client credentials.
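
Putting the pieces together, here is a hedged sketch of what such an in-cluster request looks like at the TLS level: the mounted CA bundle is used to authenticate the server, and the mounted token serves as the client credential. The /api path is just an example endpoint:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"strings"
)

const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"

func main() {
	token, err := os.ReadFile(saDir + "/token")
	if err != nil {
		log.Fatal(err)
	}
	caPEM, err := os.ReadFile(saDir + "/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{RootCAs: pool}, // verify the API server's certificate
	}}
	req, err := http.NewRequest("GET", "https://kubernetes.default.svc/api", nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Authorization", "Bearer "+strings.TrimSpace(string(token))) // client credential
	resp, err := client.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```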

Requests from the API server

The certificate overviews above suggest that the API server does not need a single client certificate, whether installed with kubeadm or manually. However, this does not mean that the API server never needs to provide client credentials to other components.

First, let’s discuss requests from the API server to the kubelet. The illustration above shows that the kubelet has an HTTPS endpoint, but the API server does not verify the kubelet’s server certificate. To verify it, you would need to provide the root certificate to the API server; in the example above, this would be the self-signed certificate of kubernetes-ca. You can and should change the default setting to mTLS. If you do, you will need a client certificate for the API server.
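
In client-side Go terms, the difference between the two modes looks roughly like this (the file name for the kubernetes-ca root certificate is illustrative):

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"os"
)

func main() {
	// Default behavior: accept the kubelet's server certificate unverified.
	cfg := &tls.Config{InsecureSkipVerify: true}

	// With the root certificate available, verify against that trust anchor.
	if caPEM, err := os.ReadFile("kubernetes-ca.pem"); err == nil {
		pool := x509.NewCertPool()
		pool.AppendCertsFromPEM(caPEM)
		cfg = &tls.Config{RootCAs: pool}
	}
	fmt.Println("verifying server certificates:", !cfg.InsecureSkipVerify)
}
```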

Second, let’s discuss requests from the API server to etcd. By default, no authentication takes place. You can and should change this default setting to mTLS, both for communication between the etcd members themselves and for communication from the API server to etcd. If you do, you will need client/server certificates for the etcd cluster members, a server certificate for communication between etcd and its clients, and a client certificate for the API server.
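
The API server’s side of such an mTLS connection is, in essence, an ordinary TLS client that also presents a certificate of its own. Here is a minimal sketch with illustrative file names and endpoint (2379 is etcd’s conventional client port):

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"os"
)

func main() {
	// The client's own certificate and key, presented when the server
	// requests them during the handshake.
	clientCert, err := tls.LoadX509KeyPair("apiserver-etcd-client.pem", "apiserver-etcd-client-key.pem")
	if err != nil {
		log.Fatal(err)
	}
	// CA used to verify the etcd *server* certificate.
	caPEM, err := os.ReadFile("etcd-ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	conn, err := tls.Dial("tcp", "etcd.example.internal:2379", &tls.Config{
		Certificates: []tls.Certificate{clientCert},
		RootCAs:      pool,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	log.Println("mutual TLS established")
}
```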

Third, let’s discuss requests from the API server to the scheduler and the controller manager. The illustration above shows that authorized TLS plus a bearer token is used. However, the certificate overview from the beginning does not include server certificates for the scheduler or the controller manager. The reason is that the scheduler and the controller manager each automatically create a self-signed, in-memory certificate for incoming requests from the API server; during a TLS handshake, they can send this self-signed server certificate to the API server. The API server, in turn, automatically creates an ephemeral loopback token at initialization and sends this token as its client credential to the scheduler or controller manager. As a result, you do not need to worry about this authentication process and the respective certificates, even when setting up a Kubernetes cluster manually.
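
To illustrate what an ephemeral, in-memory serving certificate looks like, here is a hedged Go sketch in the same spirit: the key and the self-signed certificate exist only in process memory and are never written to disk. The address mirrors the scheduler’s conventional secure port, purely for illustration:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/tls"
	"crypto/x509"
	"crypto/x509/pkix"
	"log"
	"math/big"
	"net/http"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "localhost"},
		DNSNames:     []string{"localhost"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour), // short-lived by design
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed: template and parent are the same certificate.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	server := &http.Server{
		Addr: "127.0.0.1:10259",
		TLSConfig: &tls.Config{
			Certificates: []tls.Certificate{{
				Certificate: [][]byte{der},
				PrivateKey:  key,
			}},
		},
	}
	// Empty file paths: the certificate comes from TLSConfig.Certificates.
	log.Fatal(server.ListenAndServeTLS("", ""))
}
```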

Hopefully, this guide has given you a better understanding of the underlying architecture and maybe even a little bit more appreciation for the tools that take on the daunting task of certificate management for us.

