# Let's Encrypt with nginx - easy TLS for your website

### Nowhere near as hard as people think it is

Posted by Niko Montonen on Sat 24 June 2017. Updated on Fri 30 June 2017.

## The what now?

The Let's Encrypt project was formed in 2014 with the aim of making TLS encryption in web traffic ubiquitous.

The project accomplishes this by providing free TLS certificates to anyone who can satisfy its (extremely trivial) criteria.

The Let's Encrypt root certificate has been cross-signed by IdenTrust [1] so that it is trusted by clients while the project's own root certificate propagates, meaning clients with older certificate stores will still work with a Let's Encrypt certificate.

## How do I get a certificate?

For the purposes of this post, we're going to be using nginx as our web server. However, the certificate you get can be used by a lot of other software - I use mine with both nginx and postfix.

To acquire and renew your certificate, you must use an ACME [2] client. Due to the simplicity of the protocol there are many clients [3] implemented in many languages. However, we'll be focusing on certbot, the recommended client for beginners.

## Getting the software

Installing certbot is trivial on most platforms.

### Debian

On Debian 9 (Stretch), you can simply install the certbot package [4].

```shell
sudo apt install certbot
```


If you haven't updated your Debian machines yet, on Debian 8 (Jessie) you can install the package [5] from jessie-backports. If you have no experience with using backports on Debian, you can follow their official instructions [6] to enable the backports repository, and then install the package from backports.

```shell
sudo apt -t jessie-backports install certbot
```


### Red Hat/CentOS/Fedora

On Red Hat/CentOS 7, certbot is packaged in EPEL [7]. Enable the EPEL repositories [8] as documented by the Fedora Project, and install the package.

```shell
sudo yum install python-certbot-nginx
```


On Fedora, certbot has been available in the repositories since Fedora 24.

```shell
sudo dnf install python-certbot-nginx
```


## Getting the certificate

certbot has an automatic mode for nginx, but I'm not going to touch it. You can give it a try, if you wish, with the following:

```shell
sudo certbot --nginx
```


Instead of the automated module, we're going to be using webroot.

If you look at how the ACME protocol works, you'll notice that our domain is validated either by provisioning a DNS record or by provisioning an HTTP resource on the domain. The webroot module puts a file in a directory of our choosing, which nginx then serves to the validation server to prove our control of the domain.
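To make the mechanism concrete, here's a sketch of the layout certbot creates under the webroot. The paths use a temporary directory and the token value is made up for illustration; certbot handles all of this for you:

```shell
# Illustration only: certbot normally creates these files itself.
# Using a temporary directory as a stand-in for the real webroot.
WEBROOT=$(mktemp -d)
TOKEN="example-token"  # hypothetical token; real tokens are random

# certbot places the challenge file here...
mkdir -p "$WEBROOT/.well-known/acme-challenge"
echo "$TOKEN.key-authorization" > "$WEBROOT/.well-known/acme-challenge/$TOKEN"

# ...and Let's Encrypt's validation server then fetches it over HTTP at
# http://example.com/.well-known/acme-challenge/example-token via nginx.
cat "$WEBROOT/.well-known/acme-challenge/$TOKEN"
```

Once the validation server sees the expected content at that URL, the challenge passes.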

A typical use case would be this:

```shell
certbot certonly --webroot -w /var/www/example.com/ -d example.com
```


This will put a file in /var/www/example.com/, tell Let's Encrypt to verify it exists, and then delete the file. After the domain verification is done, your certificate files will be stored in /etc/letsencrypt/live/example.com/.

If we look in the directory, we'll see the files relating to our certificate:

```shell
$ ls /etc/letsencrypt/live/example.com
cert.pem  chain.pem  fullchain.pem  privkey.pem  README
```
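If you ever want to check when a certificate expires, openssl can read the dates out of cert.pem (or fullchain.pem). The sketch below generates a throwaway self-signed certificate to demonstrate, since inspecting the real files under /etc/letsencrypt/ requires root:

```shell
# Generate a disposable self-signed cert just for demonstration;
# point -in at /etc/letsencrypt/live/example.com/cert.pem for the real thing.
DIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout "$DIR/privkey.pem" -out "$DIR/cert.pem" \
    -days 90 -subj "/CN=example.com"

# Print the validity window (notBefore/notAfter).
openssl x509 -in "$DIR/cert.pem" -noout -dates
```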


But what if we're doing something slightly more complicated, like using nginx as a reverse proxy for a web application? Running nginx as a reverse proxy in front of Gunicorn and the like isn't exactly rare nowadays in the world of Django, NodeJS and other things.

For this, we're going to have to make some changes to our nginx configuration.

The webroot module uses /.well-known/ on your domain for its domain verification. If your web application doesn't need this path, we can simply serve it from another location.

Create a directory that certbot can access, and we'll use that for verification instead.

```shell
mkdir /var/www/ssl-proof
```


After this, we'll go to our nginx domain configuration and use the location directive to serve that path from our new directory.

```nginx
location /.well-known {
    root /var/www/ssl-proof;
}
```


Then we invoke certbot.

```shell
certbot certonly --webroot -w /var/www/ssl-proof/ -d example.com
```


After this, we have a certificate we can use.

## nginx configuration

Moving over to nginx, there are some things we need to add to our domain configuration.

First, we need to listen on port 443.

```nginx
server {
    server_name example.com;
    listen 443 ssl;
    listen [::]:443 ssl;
}
```


Next, we'll need to turn TLS on, and give nginx the certificate.

```nginx
ssl on;
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
```


At this point, you're done. Restart nginx and TLS works.

But you're not really satisfied, are you? We have things to do.

If you're on nginx 1.9.5 or newer, you might as well move over to HTTP/2. HTTP/2 has many benefits, such as multiplexing all requests over a single persistent connection; with HTTP/1.1, requests on a connection were handled sequentially, which slowed page loads.

You can see a demo showcasing how this affects load speeds on Akamai's demo page.

```nginx
server {
    server_name example.com;
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
}
```


After that, we'll generate stronger parameters for Diffie-Hellman,

```shell
openssl dhparam -out /etc/nginx/dhparams.pem 4096
```


and add that to our configuration.

```nginx
ssl_dhparam /etc/nginx/dhparams.pem;
```


From this point forward, the options we'll be setting have large implications for security, so instead of coming up with our own answers, we'll follow https://cipherli.st/.

We'll go ahead and disable support for SSL and older versions of TLS.

```nginx
ssl_protocols TLSv1.2;
```


We'll also change which ciphers we support, and in what order. Just be careful if you need to support older clients - a cipher list for older clients is also available on https://cipherli.st/.

```nginx
ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
```


Then we'll need to prefer our own ciphers instead of what the client wants.

```nginx
ssl_prefer_server_ciphers on;
```


Next, we'll turn off session tickets so we don't compromise forward secrecy [9].

```nginx
ssl_session_tickets off;
```


Also, we'll use the session cache so the client doesn't have to perform a full handshake for every new connection. I assume this matters less under HTTP/2's persistent connections, but not every client supports HTTP/2 yet. They should, though.

```nginx
ssl_session_cache shared:SSL:10m;
```


We'll set the curve used for the elliptic-curve key exchange, because math. People smarter than either of us have decided this is a good idea. To get rid of the existential crisis you're now experiencing, remember that there are no new ideas.

```nginx
ssl_ecdh_curve secp384r1;
```


Then we'll introduce OCSP stapling. This adds a timestamped response signed by our certificate authority to the TLS handshake, so the client doesn't have to waste time connecting to the certificate authority.

```nginx
ssl_stapling on;
ssl_stapling_verify on;
```
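For reference, here's what the whole server block looks like with everything from this section put together. This is a sketch, not a drop-in config: the paths come from the earlier examples, and the resolver address is an assumption - nginx needs some DNS resolver it can reach so that stapling can query the OCSP responder.

```nginx
server {
    server_name example.com;
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_dhparam /etc/nginx/dhparams.pem;

    ssl_protocols TLSv1.2;
    ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
    ssl_prefer_server_ciphers on;
    ssl_session_tickets off;
    ssl_session_cache shared:SSL:10m;
    ssl_ecdh_curve secp384r1;

    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 8.8.8.8;  # assumption: any DNS resolver nginx can reach works
}
```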


There are other options specified on cipherli.st, but I'm not going to recommend implementing them for a beginner. Denying frame embedding can break some applications (like Gogs), and Strict Transport Security can cause a lot of destruction if done incorrectly. If you do want to play around with it, set a short lifetime.

After you're done, you're going to want to run Qualys SSL Labs against your server and see what they have to say.

Keep in mind that Let's Encrypt issues certificates with fairly short lifetimes, so you're going to want to renew your certificate before it expires. You can do so by simply running the command again.

You might also consider forcing renewal of the certificate with the option --renew-by-default. Running this command in crontab will force a renewal of your certificate when the cronjob is run.
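For example, reusing the webroot invocation from earlier, a crontab entry along these lines would force a renewal every week (adjust the webroot path and domain to your setup):

```
# crontab entry: force a certificate renewal once a week
@weekly  certbot certonly --webroot -w /var/www/ssl-proof/ -d example.com --renew-by-default
```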

Personally, I have a small Bash script located at /automate_cert.sh that contains everything I want to be run when the certificate is renewed, and I run this with cron using the handy @weekly timer.

```
@weekly  /automate_cert.sh
```