23 Jun 2020 · guides

From JAMstack to RAMstack

First of all, the term “RAMstack” is simply used in jest. What we’re doing is using Varnish and Cloudflare to serve up a static site that’s generated and hosted behind the scenes on Netlify — thereby reducing cost and improving performance.

TL;DR

For the curl | bash crowd among us, here’s a quick gist of this post:

  • Host your static site’s origin on Netlify
  • Install Varnish Cache on a VPS
  • Serve your site from the Varnish server
  • Never exceed your Netlify bandwidth
  • Faster page load times (RAM vs. Disk)
  • Can accelerate your site further with Cloudflare, etc.

Assumptions

  1. You have an existing CI/CD workflow that uses Netlify to host your static site

  2. You have a DigitalOcean account and have created a $5 droplet using the Debian 10 image

  3. You’re using Cloudflare for DNS and SSL management

Setup Varnish

Having connected to your droplet via SSH, you’ll need to install the following packages, since the Debian 10 image used by DigitalOcean doesn’t ship with them by default:

apt-get install curl gpg apt-transport-https lsof -y

Varnish houses its packages over at Package Cloud, and they offer simple install scripts per distribution to set up their repositories as sources.

In this case, you can visit https://packagecloud.io/varnishcache/varnish64 to get the latest stable version of Varnish at the time of writing this post.

On that page you’ll see Quick Install instructions for deb and rpm packages. Since we’re using Debian 10, the following command, when run as root, will set up the repositories correctly:

curl -s https://packagecloud.io/install/repositories/varnishcache/varnish64/script.deb.sh | sudo bash

Once that’s complete, install the latest version for Debian 10 (Buster). In this case it’s 6.4.0-1~buster which can now be installed with the following command:

apt-get install varnish=6.4.0-1~buster -y

Configure Varnish

Varnish Configuration Language (VCL) is one of the most powerful configuration formats you will ever come across. That said, I’ve tweaked the default VCL configuration file, and my version can be found here:

https://gist.github.com/oedmarap/bd72763eb03fbce490e738e0bbb71450

You’ll need to edit the file to match your needs. Specifically, the following parts need to be adjusted.

Line 17 — this is your website that Netlify serves when a GET request is sent:

"Host: domain.com"

Line 37 — add your IP address(es) here if you wish to remotely send a PURGE request that will clear the cache. If you’re using a VPN, your egress IP will suffice:

"123.45.67.89"/32;

Line 172 — these are the types of files that I instruct Varnish to cache. Feel free to change them if you so desire, and read the comment in the VCL file for further understanding of this option:

if (req.url ~ "^[^?]*\.(css|gif|ico|jpg|js|png|svg|txt|webp|woff2)(\?.*)?$") {
  unset req.http.Cookie;
  return (hash);
}
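Pulling those pieces together, the relevant parts of the VCL look roughly like this. This is only a condensed, illustrative sketch — the structure in the linked Gist differs, so treat your own copy as authoritative. The backend IP is the apex-record address Netlify publishes, and domain.com / 123.45.67.89 are placeholders:

```vcl
vcl 4.0;

# Netlify's load-balancer IP (the one Netlify gives out for apex records).
backend netlify {
    .host = "104.198.14.52";
    .port = "80";
}

# IPs allowed to issue PURGE requests (your egress IP goes here).
acl purge {
    "localhost";
    "123.45.67.89"/32;
}

sub vcl_recv {
    # Netlify routes requests by Host header, so pin it to your site.
    set req.http.Host = "domain.com";

    # Allow trusted clients to clear objects from the cache.
    if (req.method == "PURGE") {
        if (!client.ip ~ purge) {
            return (synth(405, "Not allowed."));
        }
        return (purge);
    }
}
```

With the ACL in place, a request like curl -X PURGE http://your-server/some/path sent from an allowed IP evicts that object from the cache.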

Now it’s time to replace the default Varnish configuration file. Use the following commands to empty and then open the file for editing:

> /etc/varnish/default.vcl && nano /etc/varnish/default.vcl

Paste the contents of the configuration file you customized earlier from the Gist and save the file. Next, edit Varnish’s systemd service file with the following command:

nano /lib/systemd/system/varnish.service

Look for the start command within the service definition; it should be the line containing the string ExecStart. Once found, change the parameters to match the following:

ExecStart=/usr/sbin/varnishd -a :80 -f /etc/varnish/default.vcl -s malloc,512m

This binds Varnish to port 80 and sets the cache memory pool to 512 MB (you can of course use a larger pool based on the size of your VM).

Before starting Varnish, you can check whether any other service is listening on port 80 with the command:

lsof -i :80 | grep LISTEN

If it doesn’t return anything, Varnish is free to utilize the standard HTTP port. After you’ve saved the varnish.service file, start Varnish with the following command:

systemctl daemon-reload && systemctl restart varnish

It’s important to note that running the above command at any time will both restart Varnish and completely clear the cache. That’s useful to know if you’re logged into your VM and want to do just that, or if you’d like to automate it with a cron job or script.
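If you want that restart-and-flush on a schedule, a root crontab entry is all it takes. The timing below (4:00 AM every Sunday) is just an example:

```
# m h dom mon dow   command
0 4 * * 0   systemctl restart varnish
```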

Configure Netlify

In order to configure cache-control headers to be read by Varnish, you’ll need to add a netlify.toml configuration file to the base directory of the repository that Netlify builds from (e.g. your Jekyll site’s folder hosted at GitLab). I’ll provide my own configuration as an example:

[[headers]]
  for = "/*"
  [headers.values]
	cache-control = '''
	public,
	max-age=86400,
	max-stale=3600'''

I currently set all my content to be publicly cached for 1 day (86400 seconds) and allow stale content to be served for 1 hour (3600 seconds) before the cache is refreshed. The max-stale directive simply prevents a flood of direct requests from hitting Varnish while it’s in the process of rebuilding its cache. Feel free to customize the above file to match your needs.
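To confirm the headers are actually coming through, you can inspect a response from your origin. The live check below uses domain.com as a placeholder; the offline part just demonstrates pulling the max-age value out of a sample header string:

```shell
# Live check (placeholder domain):
#   curl -sI https://domain.com/ | grep -i '^cache-control'
# Offline sketch: extract max-age from a sample cache-control header.
hdr='cache-control: public, max-age=86400, max-stale=3600'
max_age=$(printf '%s' "$hdr" | grep -o 'max-age=[0-9]*' | cut -d= -f2)
echo "max-age is ${max_age}s"   # max-age is 86400s
```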

It’s important to note that Varnish does not terminate SSL by itself. With this setup, you add your custom domain in Netlify but skip adding or configuring SSL there, letting Varnish fetch from the backend over plain HTTP. This ensures that Varnish can effectively sit in front of the backend and handle HTTP-to-HTTP traffic.

If you already have a website in Netlify, you’ll have to delete and re-add the site from your connected repository in order to use the non-SSL domain (without going through the final Let’s Encrypt step).

Handling SSL is entirely up to you. If you wish to terminate SSL on the server, you can install NGINX and simply have it listen on 443 on behalf of Varnish; I’ve used this before and it works flawlessly. For certificates, one option is Let’s Encrypt; another is to install one of Cloudflare’s origin certificates that matches the domain name you’re serving through Cloudflare.

Since JAMstack sites are entirely static, I prefer to use the simpler method of just enabling Cloudflare’s Flexible SSL option at Cloudflare’s edge — letting Cloudflare enforce SSL upgrades and also handle rewrites. That’s how I do it in the case of static sites and for the purposes of this guide.

The final step is to add an Apex record in Cloudflare’s DNS that points to your Varnish server’s IP, then enable Flexible SSL and Always Use HTTPS. Further, setting Cloudflare’s caching option to Respect Existing Headers ensures that the netlify.toml file we edited earlier sets the correct cache-control headers.

Notes

If you don’t want Varnish probing Netlify with a health check, you can manually force Varnish to mark the Netlify backend as healthy with the command:

varnishadm backend.set_health netlify healthy

Here, netlify is the name of the backend as specified in the VCL configuration. If you wish to revert this, you can simply run the command varnishadm backend.set_health netlify auto and Varnish will go back to health-checking the backend.

The linked Netlify VCL file uses an IP address of 104.198.14.52 as the backend. This is the IP that Netlify provides when you configure a domain with an Apex record (versus using a CNAME record).

One of the added benefits of putting Varnish in front of Netlify is the ability to sidestep Netlify’s generous but nonetheless finite usage caps.

Netlify’s Free tier allows for 100 GB of bandwidth transfer per month. Their Pro tier (which costs $45 per month) allows 400 GB of transfer. Add in the natural inclination to host multiple repositories as sites and it’s easy to end up in a situation where the usable data transfer per site becomes negligible. A single Hacker News hug of death, or simply traffic accrued over time, can quickly push your site past the Free tier’s limits.

For comparison, a $5 DigitalOcean virtual machine allows for 1 TB of transfer, and that allowance scales linearly as you increase the VM’s size. Cloudflare also plays a large part in saving bandwidth that would otherwise hit Varnish, thanks to its proxy caching.
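To put rough numbers on that, here’s a back-of-the-envelope cost per GB of transfer using the tiers quoted above (Pro at $45 for 400 GB versus a $5 droplet with 1 TB):

```shell
# Marginal cost per GB of transfer, in dollars.
netlify=$(awk 'BEGIN { printf "%.4f", 45 / 400 }')
droplet=$(awk 'BEGIN { printf "%.4f", 5 / 1000 }')
echo "Netlify Pro: \$${netlify}/GB vs. Droplet: \$${droplet}/GB"
# Netlify Pro: $0.1125/GB vs. Droplet: $0.0050/GB
```

Roughly a 20x difference per GB, before Cloudflare’s edge cache absorbs any of the traffic.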

Static sites are by their nature very small in size. This means that it would be trivial to fit a dozen or so JAMstack sites into a 512 MB Varnish cache pool, which in turn would be able to serve a tremendous amount of traffic without breaking a sweat.

In closing: for the cost of a coffee or two per month, it’s possible to get the CI/CD benefits of Netlify’s build pipeline while freeing yourself from any bandwidth restrictions, letting the power of Varnish and Cloudflare do the heavy lifting and serving your static site(s) directly from RAM, straight to your visitors.

Copyright © Paramdeo Singh. #JAMstacked in Guyana. All Rights Reserved.
