Below I describe an approach for running a proxy that handles SSL termination and certificate renewal using letsencrypt.org.
You’ve got a cat naming service, you’ve containerized it and it is running happily in kubernetes. It’s only serving http, and you want to start securing incoming connections.
Here’s the state of your cat-name serving API:
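Something along these lines (a minimal sketch; the names, image and port are placeholders, not your actual config):

```yaml
# Illustrative starting point: a plain-HTTP service plus replication controller.
apiVersion: v1
kind: ReplicationController
metadata:
  name: cat-names-api
spec:
  replicas: 2
  selector:
    app: cat-names-api
  template:
    metadata:
      labels:
        app: cat-names-api
    spec:
      containers:
      - name: cat-names-api
        image: example/cat-names-api:latest  # placeholder image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: cat-names-api
spec:
  selector:
    app: cat-names-api
  ports:
  - port: 80
    targetPort: 8080
```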
SSL Termination proxy
Google has provided an nginx-ssl-proxy container that can be configured to route traffic through to any service.
A very useful walkthrough is available on the kubernetes blog that gives an idea of how this termination works.
Getting a certificate
Letsencrypt is great to use - it’ll give us a free 90-day certificate for the cat name service, and we can completely automate fetching new ones.
The verification step works by expecting a shared secret to be served at a particular URL on the domain being certified.
But for letsencrypt to work with our proxy, we need to be able to:
- trigger a certificate fetch
- serve the challenge that letsencrypt gives us (verification)
- store the certificate somewhere that the proxies will have access to.
Kubernetes secrets allow sensitive information to be stored outside of your application but accessible by any containers in the namespace. This is a great way to store private keys and certificates that will be updated infrequently but read by potentially many proxies.
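As a sketch, a certificate/key pair stored as a secret might look like this (the key names `proxycert` and `proxykey` are the ones used in the walkthrough linked above; the base64 values are truncated placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: certs-example.com
type: Opaque
data:
  # base64-encoded PEM certificate and private key (placeholders)
  proxycert: LS0tLS1CRUdJTi...
  proxykey: LS0tLS1CRUdJTi...
```

Any container in the namespace can then mount this secret as a volume and read the files at the mount path.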
The walkthrough shows how these can be used with the nginx-ssl-proxy to provide your proxy with certificates.
A letsencrypt container
ployst/letsencrypt is a docker container that provides:
- Monthly certificate/key regeneration
- Storage of artifacts
- Restarting of containers that use the certificates + keys.
This will attempt to get a certificate for the domains you configure, registered to the email address you provide. It will store the result in a secret named certs-example.com, with filenames that are usable by the nginx-ssl-proxy. It will also restart all containers that are owned by the rc you name.
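Put together, the env section for the letsencrypt rc might look roughly like this (the variable names here are illustrative guesses, not the container’s documented interface — check the ployst/letsencrypt README for the exact names):

```yaml
env:
- name: DOMAINS              # domains to certify (illustrative name)
  value: example.com
- name: EMAIL                # registration email (illustrative name)
  value: admin@example.com
- name: SECRET_NAME          # secret the certs are written into (illustrative name)
  value: certs-example.com
- name: PROXY_RC             # rc whose pods get restarted on renewal (illustrative name)
  value: nginx-ssl-proxy-api
```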
It also supports using the letsencrypt staging endpoint. Simply add this to your yaml env section:
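Something like the following (the URL shown is letsencrypt’s staging directory at the time of writing — verify it against the letsencrypt docs):

```yaml
env:
- name: LETSENCRYPT_ENDPOINT
  value: https://acme-staging.api.letsencrypt.org/directory
```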
Please note that there are strict rate limits on the production letsencrypt endpoint; use the LETSENCRYPT_ENDPOINT env var to specify the staging server during testing, or face being locked out for a week.
A letsencrypt-friendly ssl-terminating proxy
ployst/nginx-ssl-proxy is a fork of the GoogleCloudPlatform repo that supports an additional env variable:
All requests to /.well-known/acme-challenge will be routed through to that service. The service address can be constructed from two additional env variables, in the same way that TARGET_SERVICE is used by the proxy.
In kubernetes, we can use the env variables that are exposed in all containers to construct the correct configuration. Assuming that you have a service named ‘letsencrypt-service’ pointing to the letsencrypt container:
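Kubernetes automatically injects, into every container in the namespace, env variables derived from each service’s name (uppercased, dashes replaced with underscores). So for a service named ‘letsencrypt-service’ the proxy container will see, among others:

```shell
# Injected automatically by kubernetes for a service named 'letsencrypt-service'
# (the values shown are examples):
LETSENCRYPT_SERVICE_SERVICE_HOST=10.0.0.42
LETSENCRYPT_SERVICE_SERVICE_PORT=80
```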
- Challenge requests made by letsencrypt.org are routed through to the container that kicked off the certification process.
- The nginx-ssl-proxy-api-1 pod has a mounted secret that is populated by letsencrypt-rc-1.
- letsencrypt-rc-1 will restart (using a rolling deploy) all containers controlled by nginx-ssl-proxy-api after a secret update.
- All https requests are routed through to the cat naming service application
- All other http requests are redirected to https
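Inside the proxy, that routing corresponds roughly to an nginx config like this — a simplified sketch of the idea, not the fork’s actual configuration (upstream names and cert paths are illustrative):

```nginx
# Simplified sketch of the proxy's routing rules.
server {
    listen 80;
    # ACME challenge requests go to the letsencrypt container...
    location /.well-known/acme-challenge {
        proxy_pass http://letsencrypt-service;
    }
    # ...all other http requests are redirected to https.
    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl;
    # Certificate and key read from the mounted secret (illustrative paths).
    ssl_certificate     /etc/secrets/proxycert;
    ssl_certificate_key /etc/secrets/proxykey;
    # https traffic is proxied through to the cat naming service.
    location / {
        proxy_pass http://cat-names-api;
    }
}
```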
I’m led to believe that from kubernetes 1.2, the ingress resource will support https and SSL termination. When that happens, this proxy will probably be unnecessary.
Cron is not the long-term solution to scheduled jobs in kubernetes. Soon it will be possible to schedule jobs to run at particular times, at which point the current cron setup can be moved into such a job. The discussion around this feature can be found here.