TLS
- 1: Configure Certificate Rotation for the Kubelet
- 2: Manage TLS Certificates in a Cluster
- 3: Manual Rotation of CA Certificates
1 - Configure Certificate Rotation for the Kubelet
This page shows how to enable and configure certificate rotation for the kubelet.
Kubernetes v1.19 [stable]
Before you begin
- Kubernetes version 1.8.0 or later is required
Overview
The kubelet uses certificates for authenticating to the Kubernetes API. By default, these certificates are issued with a one-year expiration so that they do not need to be renewed too frequently.
Kubernetes contains kubelet certificate rotation, which automatically generates a new key and requests a new certificate from the Kubernetes API as the current certificate approaches expiration. Once the new certificate is available, it will be used for authenticating connections to the Kubernetes API.
Enabling client certificate rotation
The kubelet process accepts an argument --rotate-certificates that controls if the kubelet will automatically request a new certificate as the expiration of the certificate currently in use approaches.
The kube-controller-manager process accepts an argument --cluster-signing-duration (--experimental-cluster-signing-duration prior to 1.19) that controls how long certificates will be issued for.
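For illustration, the flags could be set as follows; the file paths and the one-year (8760h) signing duration below are example values, not required defaults:
# Illustrative only: a kubelet started with client certificate rotation enabled
kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
  --kubeconfig=/etc/kubernetes/kubelet.conf \
  --rotate-certificates=true
# Illustrative only: a kube-controller-manager that issues kubelet client certificates valid for one year
kube-controller-manager --cluster-signing-duration=8760h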
Understanding the certificate rotation configuration
When a kubelet starts up, if it is configured to bootstrap (using the
--bootstrap-kubeconfig
flag), it will use its initial certificate to connect
to the Kubernetes API and issue a certificate signing request. You can view the
status of certificate signing requests using:
kubectl get csr
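If you want to narrow the list to kubelet client certificate requests, you can filter on the signer name; the sketch below assumes the standard kubernetes.io/kube-apiserver-client-kubelet signer is in use:
# List only the CSRs addressed to the kubelet client certificate signer
kubectl get csr --field-selector spec.signerName=kubernetes.io/kube-apiserver-client-kubelet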
Initially a certificate signing request from the kubelet on a node will have a status of Pending. If the certificate signing request meets specific criteria, it will be auto-approved by the controller manager and will then have a status of Approved. Next, the controller manager will sign a certificate, issued for the duration specified by the --cluster-signing-duration parameter, and the signed certificate will be attached to the certificate signing request.
The kubelet will retrieve the signed certificate from the Kubernetes API and write it to disk, in the location specified by --cert-dir. Then the kubelet will use the new certificate to connect to the Kubernetes API.
As the expiration of the signed certificate approaches, the kubelet will automatically issue a new certificate signing request, using the Kubernetes API. This can happen at any point between 30% and 10% of the time remaining on the certificate. Again, the controller manager will automatically approve the certificate request and attach a signed certificate to the certificate signing request. The kubelet will retrieve the new signed certificate from the Kubernetes API and write that to disk. Then it will update the connections it has to the Kubernetes API to reconnect using the new certificate.
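If you want to check when the current kubelet client certificate expires, you can inspect it directly with openssl; this sketch assumes the default certificate directory of /var/lib/kubelet/pki:
# Print the expiration (notAfter) date of the kubelet's current client certificate
openssl x509 -noout -enddate -in /var/lib/kubelet/pki/kubelet-client-current.pem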
2 - Manage TLS Certificates in a Cluster
Kubernetes provides a certificates.k8s.io API, which lets you provision TLS certificates signed by a Certificate Authority (CA) that you control. This CA and these certificates can be used by your workloads to establish trust.
The certificates.k8s.io API uses a protocol that is similar to the ACME draft.
Certificates created using the certificates.k8s.io API are signed by a dedicated CA. It is possible to configure your cluster to use the cluster root CA for this purpose, but you should never rely on this. Do not assume that these certificates will validate against the cluster root CA.
Before you begin
You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube or you can use one of the Kubernetes playgrounds.
You need the cfssl tool. You can download cfssl from https://github.com/cloudflare/cfssl/releases.
Some steps in this page use the jq tool. If you don't have jq, you can install it via your operating system's software sources, or fetch it from https://stedolan.github.io/jq/.
Trusting TLS in a cluster
Trusting the custom CA from an application running as a pod usually requires some extra application configuration. You will need to add the CA certificate bundle to the list of CA certificates that the TLS client or server trusts. For example, you would do this with a golang TLS config by parsing the certificate chain and adding the parsed certificates to the RootCAs field in the tls.Config struct.
You can distribute the CA certificate as a ConfigMap that your pods have access to use.
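As a sketch (the ConfigMap name my-ca, the file ca.crt, and the pod and image names below are hypothetical), distributing and mounting the CA bundle could look like this:
# Create a ConfigMap from a CA bundle file named ca.crt
kubectl create configmap my-ca --from-file=ca.crt
# Mount the ConfigMap into a pod so the application can load it into its trust store
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: trust-example
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest
    volumeMounts:
    - name: ca
      mountPath: /etc/my-ca
      readOnly: true
  volumes:
  - name: ca
    configMap:
      name: my-ca
EOF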
Requesting a certificate
The following section demonstrates how to create a TLS certificate for a Kubernetes service accessed through DNS.
Create a certificate signing request
Generate a private key and certificate signing request (or CSR) by running the following command:
cat <<EOF | cfssl genkey - | cfssljson -bare server
{
  "hosts": [
    "my-svc.my-namespace.svc.cluster.local",
    "my-pod.my-namespace.pod.cluster.local",
    "192.0.2.24",
    "10.0.34.2"
  ],
  "CN": "my-pod.my-namespace.pod.cluster.local",
  "key": {
    "algo": "ecdsa",
    "size": 256
  }
}
EOF
Where 192.0.2.24 is the service's cluster IP, my-svc.my-namespace.svc.cluster.local is the service's DNS name, 10.0.34.2 is the pod's IP and my-pod.my-namespace.pod.cluster.local is the pod's DNS name. You should see output similar to:
2022/02/01 11:45:32 [INFO] generate received request
2022/02/01 11:45:32 [INFO] received CSR
2022/02/01 11:45:32 [INFO] generating key: ecdsa-256
2022/02/01 11:45:32 [INFO] encoded CSR
This command generates two files: server.csr containing the PEM-encoded PKCS#10 certification request, and server-key.pem containing the PEM-encoded key for the certificate that is still to be created.
Create a CertificateSigningRequest object to send to the Kubernetes API
Generate a CSR manifest (in YAML), and send it to the API server. You can do that by running the following command:
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: my-svc.my-namespace
spec:
  request: $(cat server.csr | base64 | tr -d '\n')
  signerName: example.com/serving
  usages:
  - digital signature
  - key encipherment
  - server auth
EOF
Notice that the server.csr file created in step 1 is base64 encoded and stashed in the .spec.request field. You are also requesting a certificate with the "digital signature", "key encipherment", and "server auth" key usages, signed by an example example.com/serving signer. A specific signerName must be requested. See the documentation for supported signer names for more information.
The CSR should now be visible from the API in a Pending state. You can see it by running:
kubectl describe csr my-svc.my-namespace
Name:               my-svc.my-namespace
Labels:             <none>
Annotations:        <none>
CreationTimestamp:  Tue, 01 Feb 2022 11:49:15 -0500
Requesting User:    yourname@example.com
Signer:             example.com/serving
Status:             Pending
Subject:
  Common Name:    my-pod.my-namespace.pod.cluster.local
  Serial Number:
Subject Alternative Names:
  DNS Names:     my-pod.my-namespace.pod.cluster.local
                 my-svc.my-namespace.svc.cluster.local
  IP Addresses:  192.0.2.24
                 10.0.34.2
Events:  <none>
Get the CertificateSigningRequest approved
Approving the certificate signing request is either done by an automated approval process or on a one-off basis by a cluster administrator. If you're authorized to approve a certificate request, you can do that manually using kubectl; for example:
kubectl certificate approve my-svc.my-namespace
certificatesigningrequest.certificates.k8s.io/my-svc.my-namespace approved
You should now see the following:
kubectl get csr
NAME                  AGE   SIGNERNAME            REQUESTOR              REQUESTEDDURATION   CONDITION
my-svc.my-namespace   10m   example.com/serving   yourname@example.com   <none>              Approved
This means the certificate request has been approved and is waiting for the requested signer to sign it.
Sign the CertificateSigningRequest
Next, you'll play the part of a certificate signer, issue the certificate, and upload it to the API.
A signer would typically watch the CertificateSigningRequest API for objects with its signerName, check that they have been approved, sign certificates for those requests, and update the API object status with the issued certificate.
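For example, a signer responsible for the example.com/serving signer used in this walkthrough could list the requests addressed to it with a field selector; this is a sketch of the query, not a full controller:
# List CertificateSigningRequests addressed to the example.com/serving signer
kubectl get csr --field-selector spec.signerName=example.com/serving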
Create a Certificate Authority
You need an authority to provide the digital signature on the new certificate.
First, create a signing certificate by running the following:
cat <<EOF | cfssl gencert -initca - | cfssljson -bare ca
{
  "CN": "My Example Signer",
  "key": {
    "algo": "rsa",
    "size": 2048
  }
}
EOF
You should see output similar to:
2022/02/01 11:50:39 [INFO] generating a new CA key and certificate from CSR
2022/02/01 11:50:39 [INFO] generate received request
2022/02/01 11:50:39 [INFO] received CSR
2022/02/01 11:50:39 [INFO] generating key: rsa-2048
2022/02/01 11:50:39 [INFO] encoded CSR
2022/02/01 11:50:39 [INFO] signed certificate with serial number 263983151013686720899716354349605500797834580472
This produces a certificate authority key file (ca-key.pem) and certificate (ca.pem).
Issue a certificate
Create a server-signing-config.json file with the following signing configuration:
{
  "signing": {
    "default": {
      "usages": [
        "digital signature",
        "key encipherment",
        "server auth"
      ],
      "expiry": "876000h",
      "ca_constraint": {
        "is_ca": false
      }
    }
  }
}
Use the server-signing-config.json signing configuration and the certificate authority key file and certificate to sign the certificate request:
kubectl get csr my-svc.my-namespace -o jsonpath='{.spec.request}' | \
base64 --decode | \
cfssl sign -ca ca.pem -ca-key ca-key.pem -config server-signing-config.json - | \
cfssljson -bare ca-signed-server
You should see output similar to:
2022/02/01 11:52:26 [INFO] signed certificate with serial number 576048928624926584381415936700914530534472870337
This produces a signed serving certificate file, ca-signed-server.pem.
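As an optional sanity check, you can confirm that the new certificate chains to the CA you created and inspect its contents with openssl:
# Verify the serving certificate against the example CA
openssl verify -CAfile ca.pem ca-signed-server.pem
# Inspect the subject and subject alternative names
openssl x509 -in ca-signed-server.pem -noout -text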
Upload the signed certificate
Finally, populate the signed certificate in the API object's status:
kubectl get csr my-svc.my-namespace -o json | \
jq '.status.certificate = "'$(base64 ca-signed-server.pem | tr -d '\n')'"' | \
kubectl replace --raw /apis/certificates.k8s.io/v1/certificatesigningrequests/my-svc.my-namespace/status -f -
This uses the command line tool jq to populate the base64-encoded content in the .status.certificate field. If you do not have jq, you can also save the JSON output to a file, populate this field manually, and upload the resulting file.
Once the CSR is approved and the signed certificate is uploaded, run:
kubectl get csr
The output is similar to:
NAME                  AGE   SIGNERNAME            REQUESTOR              REQUESTEDDURATION   CONDITION
my-svc.my-namespace   20m   example.com/serving   yourname@example.com   <none>              Approved,Issued
Download the certificate and use it
Now, as the requesting user, you can download the issued certificate and save it to a server.crt file by running the following:
kubectl get csr my-svc.my-namespace -o jsonpath='{.status.certificate}' \
| base64 --decode > server.crt
Now you can populate server.crt and server-key.pem in a Secret that you could later mount into a Pod (for example, to use with a webserver that serves HTTPS).
kubectl create secret tls server --cert server.crt --key server-key.pem
secret/server created
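A minimal sketch of consuming that Secret from a Pod (the pod name and image are hypothetical; a TLS Secret is mounted as tls.crt and tls.key, and the web server must be configured to read those files):
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: https-server-example
spec:
  containers:
  - name: web
    image: registry.example.com/https-server:latest
    volumeMounts:
    - name: tls
      mountPath: /etc/tls   # tls.crt and tls.key appear here
      readOnly: true
  volumes:
  - name: tls
    secret:
      secretName: server
EOF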
Finally, you can populate ca.pem into a ConfigMap and use it as the trust root to verify the serving certificate:
kubectl create configmap example-serving-ca --from-file ca.crt=ca.pem
configmap/example-serving-ca created
Approving CertificateSigningRequests
A Kubernetes administrator (with appropriate permissions) can manually approve (or deny) CertificateSigningRequests by using the kubectl certificate approve and kubectl certificate deny commands. However, if you intend to make heavy usage of this API, you might consider writing an automated certificates controller.
The ability to approve CSRs decides who trusts whom within your environment; it should not be granted broadly or lightly.
You should make sure that you confidently understand both the verification requirements that fall on the approver and the repercussions of issuing a specific certificate before you grant the approve permission.
Whether a machine or a human using kubectl as above, the role of the approver is to verify that the CSR satisfies two requirements:
- The subject of the CSR controls the private key used to sign the CSR. This addresses the threat of a third party masquerading as an authorized subject. In the above example, this step would be to verify that the pod controls the private key used to generate the CSR.
- The subject of the CSR is authorized to act in the requested context. This addresses the threat of an undesired subject joining the cluster. In the above example, this step would be to verify that the pod is allowed to participate in the requested service.
If and only if these two requirements are met, the approver should approve the CSR and otherwise should deny the CSR.
For more information on certificate approval and access control, read the Certificate Signing Requests reference page.
Configuring your cluster to provide signing
This page assumes that a signer is set up to serve the certificates API. The Kubernetes controller manager provides a default implementation of a signer. To enable it, pass the --cluster-signing-cert-file and --cluster-signing-key-file parameters to the controller manager with paths to your Certificate Authority's keypair.
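A sketch of enabling the built-in signer, assuming a kubeadm-style layout where the CA keypair lives under /etc/kubernetes/pki (adjust the paths and keep your other flags as they are):
kube-controller-manager \
  --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt \
  --cluster-signing-key-file=/etc/kubernetes/pki/ca.key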
3 - Manual Rotation of CA Certificates
This page shows how to manually rotate the certificate authority (CA) certificates.
Before you begin
You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube or you can use one of the Kubernetes playgrounds.
Your Kubernetes server must be at or later than version v1.13. To check the version, enter kubectl version.
- For more information about authentication in Kubernetes, see Authenticating.
- For more information about best practices for CA certificates, see Single root CA.
Rotate the CA certificates manually
Make sure to back up your certificate directory along with configuration files and any other necessary files.
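For example, on a kubeadm-style control plane node (the paths below are assumptions; adjust them to your layout) a quick backup could be taken with:
# Archive the certificate directory and the kubeconfig files before rotating the CA
sudo tar -czf /root/kubernetes-backup-$(date +%F).tar.gz \
  /etc/kubernetes/pki /etc/kubernetes/*.conf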
This approach assumes operation of the Kubernetes control plane in an HA configuration with multiple API servers. Graceful termination of the API server is also assumed, so clients can cleanly disconnect from one API server and reconnect to another.
Configurations with a single API server will experience unavailability while the API server is being restarted.
- Distribute the new CA certificates and private keys (for example: ca.crt, ca.key, front-proxy-ca.crt, and front-proxy-ca.key) to all your control plane nodes in the Kubernetes certificates directory.
- Update the kube-controller-manager's --root-ca-file to include both old and new CA, then restart the component. Any service account created after this point will get secrets that include both old and new CAs.
  Note: The files specified by the kube-controller-manager flags --client-ca-file and --cluster-signing-cert-file cannot be CA bundles. If these flags and --root-ca-file point to the same ca.crt file which is now a bundle (includes both old and new CA), you will face an error. To work around this problem, you can copy the new CA to a separate file and make the flags --client-ca-file and --cluster-signing-cert-file point to the copy. Once ca.crt is no longer a bundle, you can restore the problem flags to point to ca.crt and delete the copy.
- Update all service account tokens to include both old and new CA certificates.
  If any pods are started before the new CA is used by API servers, they will get this update and trust both old and new CAs.
  base64_encoded_ca="$(base64 -w0 <path to file containing both old and new CAs>)"
  for namespace in $(kubectl get ns --no-headers | awk '{print $1}'); do
      for token in $(kubectl get secrets --namespace "$namespace" --field-selector type=kubernetes.io/service-account-token -o name); do
          kubectl get $token --namespace "$namespace" -o yaml | \
            /bin/sed "s/\(ca.crt:\).*/\1 ${base64_encoded_ca}/" | \
            kubectl apply -f -
      done
  done
- Restart all pods using in-cluster configs (for example: kube-proxy, CoreDNS, etc.) so they can use the updated certificate authority data from ServiceAccount secrets.
  - Make sure CoreDNS, kube-proxy and other pods using in-cluster configs are working as expected.
- Append both the old and new CA to the file against the --client-ca-file and --kubelet-certificate-authority flags in the kube-apiserver configuration.
- Append both the old and new CA to the file against the --client-ca-file flag in the kube-scheduler configuration.
- Update certificates for user accounts by replacing the content of client-certificate-data and client-key-data respectively.
  For information about creating certificates for individual user accounts, see Configure certificates for user accounts.
  Additionally, update the certificate-authority-data section in the kubeconfig files, respectively with Base64-encoded old and new certificate authority data.
- Follow the steps below in a rolling fashion.
  - Restart any other aggregated API servers or webhook handlers to trust the new CA certificates.
  - Restart the kubelet by updating the file against clientCAFile in the kubelet configuration and certificate-authority-data in kubelet.conf to use both the old and new CA on all nodes.
    If your kubelet is not using client certificate rotation, update client-certificate-data and client-key-data in kubelet.conf on all nodes along with the kubelet client certificate file usually found in /var/lib/kubelet/pki.
  - Restart API servers with the certificates (apiserver.crt, apiserver-kubelet-client.crt and front-proxy-client.crt) signed by the new CA. You can use the existing private keys or new private keys. If you changed the private keys, then update these in the Kubernetes certificates directory as well.
    Since the pod trusts both old and new CAs, there will be a momentary disconnection after which the pod's kube client will reconnect to the new API server that uses the certificate signed by the new CA.
  - Restart the scheduler to use the new CAs.
  - Make sure control plane components log no TLS errors.
    Note: To generate certificates and private keys for your cluster using the openssl command line tool, see Certificates (openssl). You can also use cfssl.
  - Annotate any DaemonSets and Deployments to trigger pod replacement in a safer rolling fashion. Example:
    for namespace in $(kubectl get namespace -o jsonpath='{.items[*].metadata.name}'); do
        for name in $(kubectl get deployments -n $namespace -o jsonpath='{.items[*].metadata.name}'); do
            kubectl patch deployment -n ${namespace} ${name} -p '{"spec":{"template":{"metadata":{"annotations":{"ca-rotation": "1"}}}}}';
        done
        for name in $(kubectl get daemonset -n $namespace -o jsonpath='{.items[*].metadata.name}'); do
            kubectl patch daemonset -n ${namespace} ${name} -p '{"spec":{"template":{"metadata":{"annotations":{"ca-rotation": "1"}}}}}';
        done
    done
    Note: To limit the number of concurrent disruptions that your application experiences, see configure pod disruption budget.
- If your cluster is using bootstrap tokens to join nodes, update the ConfigMap cluster-info in the kube-public namespace with the new CA.
  base64_encoded_ca="$(base64 -w0 /etc/kubernetes/pki/ca.crt)"
  kubectl get cm/cluster-info --namespace kube-public -o yaml | \
      /bin/sed "s/\(certificate-authority-data:\).*/\1 ${base64_encoded_ca}/" | \
      kubectl apply -f -
- Verify the cluster functionality.
  - Validate that the logs from control plane components, along with the kubelet and the kube-proxy, are not throwing any TLS errors; see looking at the logs.
  - Validate logs from any aggregated API servers and pods using in-cluster config.
- Once the cluster functionality is successfully verified:
  - Update all service account tokens to include the new CA certificate only.
    All pods using an in-cluster kubeconfig will eventually need to be restarted to pick up the new SA secret for the old CA to be completely untrusted.
  - Restart the control plane components by removing the old CA from the kubeconfig files and from the files against the --client-ca-file and --root-ca-file flags respectively.
  - Restart the kubelet by removing the old CA from the file against the clientCAFile flag and from the kubelet kubeconfig file.