Moved to DigitalOcean


I’ve moved nearly all of WebAware’s websites to DigitalOcean, and stuck NginX in front of everything.

For a long time, I’ve had nearly everything on a cheap but pretty good shared hosting plan. That has allowed me to focus on work and not worry about server configuration, server-level security and the like, but at the cost of performance. I’ve been planning to move off shared hosting for ages, and had been considering my options, including premium hosting, managed VPS, and basic self-managed VPS.

In September 2014, I listened to Vlad Lasky present his pitch about self-managed VPS at WordCamp Sydney 2014, and my mind was made up. I set about evaluating which service to go with, and documenting all the things I’d need to set up (and learn!) to make it happen. I also reviewed many different approaches to take in building a VPS for hosting WordPress websites. Brad Touesnard’s post on managing your own server helped me along a bit too, especially the comments section. To make sure it would all work, I tested a few scenarios in local QEMU-KVM virtual machines.

What I settled on was:

  • DigitalOcean for hosting the VPS
  • Ubuntu as the server platform
  • MariaDB as the database server
  • NginX as the web server
  • a proxy to either php-fpm or Apache for executing PHP
  • memcached to speed up WordPress without page caching
  • WP Super Cache for basic anonymous visitor page caching
  • CloudFlare to handle static asset caching and SSL

I chose DigitalOcean for a few reasons. Other options were Linode, Vultr, Amazon EC2, and some Aussie hosts like Crucial and Exigent. DigitalOcean got my business because it offered great value for money, stability, terrific documentation, backups (!), and payment by PayPal. The others all look pretty good too, except Amazon, which is overpriced, but DigitalOcean was the pick for me.

As a long-time Red Hat user (from Red Hat 5.0 back in 1997 through to Fedora 21 today), I knew that the Red Hat family wasn’t what I wanted for a server platform. Fedora is great, but not what you’d call totally stable. Red Hat Enterprise Linux, and its free offshoot CentOS, are rock-solid, but old. I wanted something that was up-to-date and stable, and that also had lots of good documentation. Hence Ubuntu.

I chose NginX for performance and simplicity. Hah! The past few days have proven that simplicity can actually be hiding complexity, but I think I’ve surmounted all challenges now. The big thing that Apache has over NginX is the ability to drop a .htaccess file into a folder to change how the server handles that folder, but a properly thought-out NginX configuration can give you all the flexibility you need without the performance hit of Apache.
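
To give you a feel for it, here’s a minimal sketch of what that looks like for a WordPress site (the domain, web root, and php-fpm socket path are placeholders, not my actual setup). The try_files line does the job of WordPress’s stock .htaccess rewrite rules:

    server {
        listen 80;
        server_name example.com;
        root /var/www/example.com;
        index index.php;

        # pretty permalinks: try the file, then the folder, then hand off to WordPress
        location / {
            try_files $uri $uri/ /index.php?$args;
        }

        # pass PHP scripts to php-fpm over a local socket
        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass unix:/var/run/php5-fpm.sock;
        }
    }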

A common configuration I see these days is Varnish out the front of Apache, sometimes even with NginX on the side for SSL termination. I’m sure that’s a good choice on hardware with lots of RAM, but I chose to stay small, and also to simplify my configuration requirements. NginX has a very small RAM footprint, whereas Apache needs more RAM the busier your sites get. I can still play with Apache when I need it (my test sites have NginX proxying PHP requests to Apache), but I now run the sites that matter to me as NginX + php-fpm.
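
The proxying side is pleasantly simple. Roughly, the test sites look like this (the hostname and port here are made up for the example): Apache listens on a local port, and NginX forwards everything to it with the headers Apache needs to know who really asked:

    # NginX out the front; Apache on a local port does the PHP work
    server {
        listen 80;
        server_name test.example.com;

        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }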

I briefly looked at using NginX’s page caching. It’s actually really good, especially with SSD instead of spinning metal hard drives. Sticking the cache folders into RAM means it truly rivals Varnish for speed. With WordPress, however, you need a good solution for automatically purging stale cache files, and I didn’t think much of the options available. They did mostly work, but I could get what I needed much more simply with WP Super Cache. And with all the files on SSD, it’s blindingly fast.
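
Serving those WP Super Cache files straight from NginX is the key to the speed. The usual approach is something like this sketch (simplified from the common examples floating around, so treat it as a starting point, not gospel): check for a cached file first, and fall back to WordPress when there isn’t one, or when the visitor shouldn’t get a cached page:

    # inside the server block: decide whether a Super Cache file may be served
    set $cache_uri $request_uri;

    # never serve cached pages for POST requests or query strings...
    if ($request_method = POST) {
        set $cache_uri 'null cache';
    }
    if ($query_string != "") {
        set $cache_uri 'null cache';
    }

    # ...nor to logged-in users or recent commenters
    if ($http_cookie ~* "comment_author|wordpress_logged_in|wp-postpass") {
        set $cache_uri 'null cache';
    }

    location / {
        try_files /wp-content/cache/supercache/$http_host$cache_uri/index.html
                  $uri $uri/ /index.php?$args;
    }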

The tricky bit in setting this all up was learning NginX configuration files. The model is very different to Apache’s, and has a few conceptually gnarly bits and gotchas. There are lots of example configs out there telling you how to set up NginX + php-fpm + WordPress, which at first seems great, until you discover that most of them are garbage, and even the good ones have serious problems (like not handling cached files for HTTPS connections). Expect some more blog posts once the dust settles.
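
To pick one concrete gotcha: WP Super Cache writes separate files for pages cached over SSL, and most of those example configs only ever look for index.html. One way to deal with it (a sketch, not necessarily the final word) is a map on $scheme, with try_files then looking up $supercache_file instead of a hard-coded name:

    # in the http block: pick the right Super Cache file for the connection scheme
    map $scheme $supercache_file {
        default index.html;
        https   index-https.html;
    }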

Memcached was a no-brainer, which was handy because that’s how I felt after sorting out NginX. WordPress makes use of memcached, via an object cache drop-in, to very effectively reduce the load on MySQL (or MariaDB if you prefer). Like, from dozens of queries per page to queries in the single digits (or in the case of some big websites I’ve worked on, from hundreds of queries per page to around 20). I’m using a fork of Zack Tollman’s wordpress-pecl-memcached-object-cache, which is the best-documented memcached drop-in and has an active GitHub project.

CloudFlare — well basically, I don’t have time right now to sort out a CDN and proper SSL, so I’ve just stuck CloudFlare in front of things. I created self-signed SSL certificates so that I can make use of CloudFlare’s Universal SSL in Full mode. It gets me part-way to where I want to be, and I can come back later to sort it out properly.
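
The NginX side of that is just a standard SSL server block pointing at the self-signed certificate (the paths and domain here are illustrative). It only has to satisfy CloudFlare, since visitors never talk to the origin directly:

    # the certificate can be self-signed because only CloudFlare connects to
    # the origin; generated with something like:
    #   openssl req -x509 -nodes -days 3650 -newkey rsa:2048 \
    #       -keyout example.com.key -out example.com.crt
    server {
        listen 443 ssl;
        server_name example.com;

        ssl_certificate     /etc/nginx/ssl/example.com.crt;
        ssl_certificate_key /etc/nginx/ssl/example.com.key;

        # ...same root and location blocks as the port 80 server...
    }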

So there it is. The new, faster WebAware websites are up and, I think, working OK. There’s more to be done, which means more to be learned, but that’s life. And now I’m going for a walk…