Using Wildcard Certificates with Traefik and K3s
K3s is a lightweight Kubernetes distribution built for IoT and edge computing. Recently, I decided to use it as the basis for all my self-hosted services. My goal was to set up a Kubernetes distribution that would run well across a bunch of ARM64 compute modules. As is standard for me, I decided to over-engineer it and ensure that my home network was as "enterprisey" as humanly possible.
A big part of this was making sure that all my services were strictly available over HTTPS (via private subdomains of lachlan.io). A tool like cert-manager can be used to automatically issue regular TLS certificates, but I wasn't satisfied with the fact that my internal domain names would be leaked via publicly available certificate transparency logs.
As an example, take a look at the logs for my domain: https://crt.sh/?q=lachlan.io. In that list, you'll see home.lachlan.io, a certificate I initially issued for one of my internally hosted services. That record will remain publicly available forever, leaking private information about which services I'm using. On the other hand, a wildcard certificate like *.lachlan.io leaks nothing more than the fact that I'm using some number of subdomains.
In this post, I will describe how I used cert-manager to configure a default wildcard certificate for K3s's built-in Traefik ingress controller.
Warning
Careful consideration must be given when using wildcard TLS certificates. If a server holding the private key for a wildcard certificate is compromised, then the confidentiality and integrity of traffic to every other server using that certificate is also compromised. On the other hand, the compromise of a regular TLS certificate only affects a single subdomain, significantly lowering the blast radius of an incident.
As a general rule of thumb, you probably shouldn't be using a wildcard certificate unless you have one of the following requirements:
- You don't want your subdomains showing up in certificate transparency logs.
- You're issuing for so many subdomains that certificate providers are rate-limiting you.
Preparing the Cluster
Deploying the actual K3s cluster is out of scope for this article, so please use the official quick-start guide if needed. Once your cluster is up and running, the first step is to install cert-manager. You can do this by kubectl applying regular manifests, but I'm partial to using Helm charts wherever possible:
```bash
# Create the namespace for cert-manager
kubectl create namespace cert-manager

# Add the Jetstack Helm repository and update your local cache
helm repo add jetstack https://charts.jetstack.io && helm repo update

# Install cert-manager with CRD resources
helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --version v1.2.0 \
  --create-namespace \
  --set installCRDs=true
```
Once it's finished, you should be able to run kubectl get pods --namespace cert-manager to check the cert-manager namespace for running pods.
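If you'd rather block until everything is healthy instead of eyeballing pod statuses, something like the following should do the trick (a minimal sketch; the timeout is arbitrary):

```bash
# Block until every deployment in the cert-manager namespace reports Available
kubectl wait deployment --all \
  --namespace cert-manager \
  --for=condition=Available \
  --timeout=120s
```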
Create a ClusterIssuer
Now that cert-manager is up and running, we should start by creating a ClusterIssuer. There are a multitude of different ways to configure it, so the best solution will depend on your specific requirements. In my case, I'm using the ACME issuer type with DNS01 challenges via Cloudflare. This involves first getting an API token from Cloudflare and then providing it to K3s as a Secret resource:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cloudflare-api-token
  namespace: cert-manager
type: Opaque
stringData:
  api-token: REDACTED
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: REDACTED
    privateKeySecretRef:
      name: letsencrypt-prod-key
    solvers:
      - dns01:
          cloudflare:
            email: REDACTED
            apiTokenSecretRef:
              name: cloudflare-api-token
              key: api-token
```
The above is fairly straightforward. Using a ClusterIssuer (over a standard Issuer) will make it possible to create the wildcard certificate in the kube-system namespace that K3s uses for Traefik. Also, note that any referenced Secret resources will (by default) need to be in the cert-manager namespace.
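As an aside, if you'd prefer not to keep the token in a manifest file at all, you can create the Secret imperatively and then confirm the issuer has registered successfully. A rough sketch (the token value is a placeholder):

```bash
# Create the Cloudflare API token Secret directly, avoiding a YAML file containing the token
kubectl create secret generic cloudflare-api-token \
  --namespace cert-manager \
  --from-literal=api-token='<your-cloudflare-api-token>'

# Once applied, the ClusterIssuer should eventually report Ready after its ACME account registers
kubectl get clusterissuer letsencrypt-prod -o wide
```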
Request a Wildcard Certificate
Now comes the (arguably) fun part: certificate generation. Apply something like the following to get started:
```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wildcard-lachlan-io
  namespace: kube-system
spec:
  secretName: wildcard-lachlan-io-tls
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
    - "*.lachlan.io"
```
You can follow along with the progress of the certificate request by using kubectl describe certificate -n kube-system. Watch the events until you see the message "certificate issued successfully". If you encounter any errors, this will also be the place to start investigating the issue.
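If the certificate appears stuck, the intermediate resources that cert-manager creates for each ACME order are usually the quickest way to see where things stalled. A rough troubleshooting sketch:

```bash
# The Certificate's READY column flips to True once issuance completes
kubectl get certificate -n kube-system --watch

# List the intermediate resources created for the ACME order
kubectl get certificaterequests,orders,challenges -n kube-system

# Describing a pending challenge usually explains why DNS01 validation hasn't finished
kubectl describe challenges -n kube-system
```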
Configure Traefik
We're in the home stretch now. The final step is to reconfigure the default K3s Traefik installation so that it uses our shiny new wildcard certificate by default. When an Ingress resource is defined without a spec.tls.secretName, Traefik will attempt to use its configured default TLS certificate instead. If we mount our wildcard certificate in such a way that it overrides the one Traefik generates, we will effectively be setting it as the new default certificate. K3s makes this easy because it automatically redeploys Traefik whenever changes are made to its Helm chart manifest.

In your text editor of choice, open /var/lib/rancher/k3s/server/manifests/traefik.yaml on your K3s server node and add the following to the spec.valuesContent string:
```yaml
extraVolumeMounts:
  - name: ssl
    mountPath: /ssl
extraVolumes:
  - name: ssl
    secret:
      secretName: wildcard-lachlan-io-tls
```
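After saving the file, K3s should notice the change and redeploy Traefik on its own. To confirm the rollout completed and check which certificate is now served by default, something like the following works (this assumes the stock K3s Traefik deployment name; replace <node-ip> with the address of one of your nodes):

```bash
# Confirm the Traefik deployment rolled out after the manifest change
kubectl -n kube-system rollout status deployment traefik

# Inspect the default certificate presented for an arbitrary hostname
openssl s_client -connect <node-ip>:443 -servername test.lachlan.io </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -dates
```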
Since Traefik runs in the kube-system namespace, it will easily pick up your wildcard secret, mount it, and use it as the new default certificate! At this point, you're free to expose services by creating arbitrary ingress resources. As long as you don't specify spec.tls.secretName, the wildcard certificate will be used:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: home
  namespace: home
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/redirect-entry-point: https
spec:
  rules:
    - host: home.lachlan.io
      http:
        paths:
          - backend:
              service:
                name: home
                port:
                  number: 80
            path: /
            pathType: Prefix
  tls:
    - hosts:
        - home.lachlan.io
```
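Once DNS for the host resolves to your cluster, a quick curl is enough to verify that the wildcard certificate is the one being presented (hostname taken from the example above):

```bash
# The verbose output includes the TLS handshake; look for a certificate subject of CN=*.lachlan.io
curl -vsS https://home.lachlan.io/ -o /dev/null
```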
This saves you from having to do any hacky secret copying/syncing across namespaces while providing the privacy benefits that wildcard certificates afford. I couldn't find a guide online that adequately explained how to cleanly manage a wildcard certificate in this fashion, so I hope this article is able to help you with your own projects (be they personal or professional).