K8s mTLS Auth with TLS Passthrough

Joshua Casey
5 min read · Jun 22, 2023


It’s well known that the default way to authenticate with a Kubernetes cluster uses mTLS: any user who presents a valid client certificate signed by the cluster’s certificate authority (CA) is considered authenticated (see https://kubernetes.io/docs/reference/access-authn-authz/authentication/).

What if you need your user identities to flow from an external identity provider and be automatically translated into K8s identities? To add another layer of complexity, what if you don’t have access to the cluster’s CA to issue certificates for your users?

Enter Pinniped’s Impersonation Proxy, which sits “in front of” the K8s API server. Currently (up through v0.24.0) it uses generated certificate authorities both to serve TLS and to issue client certificates that clients can use for mTLS authentication. I’ll leave it to the Pinniped docs to describe what actually happens in more detail; for now, we’ll just say that it can translate a user’s OIDC identity (ID token) into a client certificate used for mTLS authentication with the Impersonation Proxy. The Impersonation Proxy then uses impersonation headers to perform the user’s request against the K8s API.
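To make “impersonation headers” concrete, here is a rough sketch of the kind of request the proxy ends up making to the real API server on the user’s behalf. The endpoint, token path, and CA path are illustrative placeholders; the Impersonate-User and Impersonate-Group headers are the standard Kubernetes impersonation mechanism.

# Illustrative sketch only: the proxy authenticates to the real API server with its own
# credentials and adds impersonation headers naming the end user
$ curl https://kubernetes.default.svc/api/v1/namespaces/default/pods \
--cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
--header "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
--header "Impersonate-User: pinny" \
--header "Impersonate-Group: group-for-mtls"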

A quick side note about mTLS authentication: I can’t describe it any better than this article from Cloudflare. But I will point out that, in the general case, the certificates used by the server and client don’t actually need to have any CA relationship with each other; they can be issued by entirely different CAs. The Pinniped Impersonation Proxy behaves differently from the K8s API server in that it uses different CAs for the server and client certificates.
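If you want to see this for yourself, here is a minimal sketch using openssl: two unrelated CAs, a server certificate signed by one and a client certificate signed by the other, and an mTLS handshake that still succeeds because each side trusts the CA that signed the other side’s certificate. File names and subjects are arbitrary.

# Minimal sketch: two independent CAs, server cert from one, client cert from the other
$ openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=server-ca" -keyout server-ca.key -out server-ca.crt
$ openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=client-ca" -keyout client-ca.key -out client-ca.crt
$ openssl req -newkey rsa:2048 -nodes -subj "/CN=localhost" -keyout server.key -out server.csr
$ openssl x509 -req -days 1 -in server.csr -CA server-ca.crt -CAkey server-ca.key -CAcreateserial -out server.crt
$ openssl req -newkey rsa:2048 -nodes -subj "/CN=pinny" -keyout client.key -out client.csr
$ openssl x509 -req -days 1 -in client.csr -CA client-ca.crt -CAkey client-ca.key -CAcreateserial -out client.crt
# The server verifies clients against client-ca; the client verifies the server against server-ca
$ openssl s_server -accept 8443 -cert server.crt -key server.key -CAfile client-ca.crt -Verify 1 &
$ openssl s_client -connect localhost:8443 -cert client.crt -key client.key -CAfile server-ca.crt </dev/null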

So what happens if you want traffic to the Impersonation Proxy to come in through a Kubernetes Ingress? Out of the box, an Ingress terminates TLS and then uses unencrypted traffic between the ingress and the backing services/pods. This won’t work for the Impersonation Proxy, since it needs to see the client’s certificate to perform mTLS auth.
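For contrast, here is roughly what a standard TLS-terminating Ingress looks like (the names and port are placeholders, and this manifest is not used in this walkthrough): the ingress controller serves the certificate from the referenced Secret and forwards plain HTTP to the backend, so the backend never sees the client’s certificate.

# Illustrative only: a typical TLS-terminating Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-terminating-ingress
spec:
  tls:
  - hosts:
    - example.local
    secretName: example-tls # cert/key served by the ingress controller, not the backend
  rules:
  - host: example.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-backend
            port:
              number: 8080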

Let’s see if we can get around this behavior by using Contour Ingress with TLS session passthrough.

The setup:

Docker Desktop v1.20.1
kind v0.20.0
Contour v1.25
Pinniped v0.24.0
pinniped CLI v0.24.0 (https://pinniped.dev/docs/howto/install-cli/)
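If you want a quick sanity check that the tools are on your PATH (your versions may differ from mine):

$ docker version --format '{{.Server.Version}}'
$ kind version
$ pinniped version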

Set up kind to run with Contour, using the example kind cluster configuration file provided by Contour.

$ wget https://raw.githubusercontent.com/projectcontour/contour/main/examples/kind/kind-expose-port.yaml
$ kind create cluster \
--config kind-expose-port.yaml \
--name kind-with-contour \
--kubeconfig kind-with-contour.kubeconfig.yaml
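Optionally, confirm the cluster came up before installing Contour:

$ kubectl get nodes \
--kubeconfig kind-with-contour.kubeconfig.yaml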

Install Contour (see https://projectcontour.io/getting-started/ for more details).

# From https://projectcontour.io/getting-started/
$ kubectl apply \
--filename https://projectcontour.io/quickstart/contour.yaml \
--kubeconfig kind-with-contour.kubeconfig.yaml
# Verify that the Contour pods are running
$ kubectl get pods \
--namespace projectcontour \
--output wide \
--kubeconfig kind-with-contour.kubeconfig.yaml

Install Pinniped’s local-user-authenticator and add some sample users (see https://pinniped.dev/docs/tutorials/concierge-only-demo/ for more details).

# Install Pinniped's local-user-authenticator so that you have some demo users to play with
$ kubectl apply \
--filename https://get.pinniped.dev/v0.24.0/install-local-user-authenticator.yaml \
--kubeconfig kind-with-contour.kubeconfig.yaml
# Create a local user "pinny" with password "password123"
$ kubectl create secret generic pinny \
--namespace local-user-authenticator \
--from-literal=groups=group-for-mtls \
--from-literal=passwordHash=$(htpasswd -nbBC 10 x password123 | sed -e "s/^x://") \
--kubeconfig kind-with-contour.kubeconfig.yaml
# The local-user-authenticator will serve a TLS endpoint for the Concierge to talk to,
# so you need to configure the Concierge's webhook to verify TLS with the appropriate CA
# Make sure this prints the CA certificate (the secret can take a few seconds to appear)
$ kubectl get secret local-user-authenticator-tls-serving-certificate \
--namespace local-user-authenticator \
--output jsonpath={.data.caCertificate} \
--kubeconfig kind-with-contour.kubeconfig.yaml \
| tee local-user-authenticator-ca-base64-encoded
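# Install the Pinniped Concierge CRDs (the Concierge itself is installed in the next step)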
$ kubectl apply \
--filename https://get.pinniped.dev/v0.24.0/install-pinniped-concierge-crds.yaml \
--kubeconfig kind-with-contour.kubeconfig.yaml

Install Pinniped’s Concierge.

# Need to perform a custom install of Pinniped's Concierge, requiring it to always use the impersonation proxy

$ git clone \
--depth 1 \
--branch v0.24.0 \
git@github.com:vmware-tanzu/pinniped.git

$ cat << EOF > concierge-with-impersonation-proxy.values.yaml
#@data/values
---
impersonation_proxy_spec:
  mode: enabled
  external_endpoint: impersonation-proxy-mtls.local
  service:
    type: ClusterIP

EOF
$ ytt --file pinniped/deploy/concierge \
--file concierge-with-impersonation-proxy.values.yaml \
> concierge-with-impersonation-proxy.resources.yaml
$ kubectl apply \
--filename concierge-with-impersonation-proxy.resources.yaml \
--kubeconfig kind-with-contour.kubeconfig.yaml

# Confirm that the CredentialIssuer looks as expected
$ kubectl get credentialissuers \
--output yaml \
--kubeconfig kind-with-contour.kubeconfig.yaml
...
spec:
  impersonationProxy:
    externalEndpoint: impersonation-proxy-mtls.local
    mode: enabled
    service:
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "4000"
      type: ClusterIP
...
# Confirm that the ClusterIP service was automatically created (may take a minute)
$ kubectl get service pinniped-concierge-impersonation-proxy-cluster-ip \
--namespace pinniped-concierge \
--output yaml \
--kubeconfig kind-with-contour.kubeconfig.yaml

# Configure a webhook authenticator to use the local-user-authenticator
$ cat << EOF > concierge.webhookauthenticator.yaml
apiVersion: authentication.concierge.pinniped.dev/v1alpha1
kind: WebhookAuthenticator
metadata:
  name: local-user-authenticator
spec:
  endpoint: https://local-user-authenticator.local-user-authenticator.svc/authenticate
  tls:
    certificateAuthorityData: $(cat local-user-authenticator-ca-base64-encoded)
EOF

# Create the webhook authenticator
$ kubectl apply \
--filename concierge.webhookauthenticator.yaml \
--kubeconfig kind-with-contour.kubeconfig.yaml
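Optionally, confirm that the WebhookAuthenticator was created:

$ kubectl get webhookauthenticators \
--kubeconfig kind-with-contour.kubeconfig.yaml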

Now deploy a Contour HTTPProxy ingress that fronts the ClusterIP service. Note in particular the spec.tcpproxy block, which is different from the typical spec.routes block; spec.tcpproxy is required when using spec.virtualhost.tls.passthrough: true. See https://projectcontour.io/docs/1.25/config/tls-termination/#tls-session-passthrough for more details.

$ cat << EOF > contour-ingress-impersonation-proxy.yaml
---
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: impersonation-proxy
  namespace: pinniped-concierge
spec:
  virtualhost:
    fqdn: impersonation-proxy-mtls.local
    tls:
      passthrough: true
  tcpproxy:
    services:
    - name: pinniped-concierge-impersonation-proxy-cluster-ip
      port: 443

EOF

$ kubectl apply \
--filename contour-ingress-impersonation-proxy.yaml \
--kubeconfig kind-with-contour.kubeconfig.yaml
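Contour records whether it accepted the HTTPProxy in the resource’s status, so it’s worth checking that it shows as valid:

# The STATUS column should report "valid" once Contour has accepted the HTTPProxy
$ kubectl get httpproxy impersonation-proxy \
--namespace pinniped-concierge \
--kubeconfig kind-with-contour.kubeconfig.yaml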

Now generate the Pinniped kubeconfig so that you can perform mTLS with the impersonation proxy.

# add 127.0.0.1 impersonation-proxy-mtls.local to your /etc/hosts!
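# For example (adjust for your OS; this appends the entry to /etc/hosts):
$ echo "127.0.0.1 impersonation-proxy-mtls.local" | sudo tee -a /etc/hosts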
$ pinniped get kubeconfig \
--static-token "pinny:password123" \
--concierge-authenticator-type webhook \
--concierge-authenticator-name local-user-authenticator \
--concierge-mode ImpersonationProxy \
--kubeconfig kind-with-contour.kubeconfig.yaml \
> pinniped-kubeconfig.yaml

Now perform an action as user pinny!

$ kubectl get pods -A \
--kubeconfig pinniped-kubeconfig.yaml
Error from server (Forbidden): pods is forbidden: User "pinny" cannot list resource "pods" in API group "" at the cluster scope: decision made by impersonation-proxy.concierge.pinniped.dev

This results in an error because the cluster does not have any RoleBindings or ClusterRoleBindings that allow the user pinny or the group group-for-mtls to perform any actions on the cluster. Note that authentication itself succeeded: the request was attributed to pinny, and it was authorization that failed. Let’s make a ClusterRoleBinding that grants this group cluster-admin privileges.

# Perform this as the cluster admin using the kind kubeconfig
$ kubectl create clusterrolebinding mtls-admins \
--clusterrole=cluster-admin \
--group=group-for-mtls \
--kubeconfig kind-with-contour.kubeconfig.yaml
# Now try again with the Pinniped kubeconfig
$ kubectl get pods -A \
--kubeconfig pinniped-kubeconfig.yaml
NAMESPACE            NAME                                 READY   STATUS    RESTARTS   AGE
pinniped-concierge   pinniped-concierge-f4c78b674-bt6zl   1/1     Running   0          3h36m

Congratulations, you have successfully performed mTLS authentication between your local client (kubectl, using the pinniped CLI) and the Pinniped Concierge Impersonation Proxy inside the cluster!
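As a final check, the pinniped CLI can ask the cluster who it thinks you are; you should see the pinny user and the group-for-mtls group reflected back (the exact output format may vary by CLI version).

$ pinniped whoami \
--kubeconfig pinniped-kubeconfig.yaml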
