Cloudflare is pretty cool.
When I first set up my network’s PiHole(s) I was using OpenNIC as the DNS provider, but I ended up switching over to Cloudflare after running into some problems with domain resolution on OpenNIC. OpenNIC is still great, but I wasn’t making enough use of the unofficial TLDs they serve to justify spending my time on resolving the issue. So I switched to Cloudflare, and since I’ve had no issues, I haven’t needed to switch to any other provider.

I made the switch to them, rather than any of the other providers included by default in the PiHole configuration, since we had started to use them more and more at Reclaim. Better to go with the devil you know, and all.
One such example of us using their services is the SSH Gateway server(s) I put together, which uses Cloudflare to provide a web terminal for folks. Very useful for everyone who doesn’t spend all day looking at a terminal like us in the Infrastructure team.
Last November (2021), we took a team trip to Nashville and spent quite a bit of time discussing security and multiregion setups. The security discussions were the genesis of projects like the SSH Gateway servers, but the multiregion stuff was (at least for me) left mostly on the backburner. Recently though, we’ve decided to start testing out Reclaim Cloud’s multiregion capabilities by setting up the main Reclaim Hosting site in such a fashion. Currently the site is hosted (by itself, with no other sites on the same server) on a DigitalOcean Droplet, but we intend to bring it over to Reclaim Cloud and utilize Cloudflare.
Even though the goal is to improve the availability of such a highly trafficked site, moving it is a lot of work and could itself cause downtime. So it’s something we want to test first, because if something can go wrong, it will.
Enter: reclaimhosting.dev
Rather than use the default Reclaim Cloud subdomain or one of our .reclaim.hosting dev URLs, we got one of them fancy new TLDs and did a partial (CNAME) setup in Cloudflare. We looked at doing CNAME Flattening so we could point the apex record to Cloudflare as well, but that’s something our registrar doesn’t support. So we only pointed the www subdomain, and set up a redirect from reclaimhosting.dev to www.reclaimhosting.dev. A redirect ain’t elegant, but it works.
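If you were doing that apex-to-www redirect with an Apache rewrite rule, it’d look something like this (a sketch only; ours isn’t necessarily configured this exact way, and it could just as easily live at the registrar or in cPanel):

```apacheconf
# Sketch: send the bare apex over to www with a permanent redirect
RewriteEngine On
RewriteCond %{HTTP_HOST} ^reclaimhosting\.dev$ [NC]
RewriteRule ^(.*)$ https://www.reclaimhosting.dev/$1 [R=301,L]
```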

To make managing DNS easier, I created a cPanel account that owns this domain on one of our Shared Hosting servers. This way, I could manage DNS right from cPanel’s Zone Editor. It could totally be managed directly at the registrar, but this just seems easier.

Before all the DNS and Cloudflare stuff, we had to actually have somewhere ready to point everything to. The Virtuozzo App Platform (formerly Jelastic) has an installer in beta (alpha?) for a really performant multiregion WordPress cluster. But with all of its separate SQL nodes and load balancers and all, it was kind of overkill. While the main Reclaim Hosting site needs to be up and running pretty much all the time, it’s not a super heavy site; it does little more than sit there, and our billing system is on a separate server. Luckily, they also have a multiregion standalone installer that’s just two LAMP (actually LLSMP) stack containers. Perfect for what we need here.
I didn’t have too much luck when I tried to install it, but Jim did, and was able to get everything from the main site copied over. We did initially want three servers, but the installer we were using only allows for two. Two is fine for now.
The installer also automatically configures things so that changes to the files and DB from the primary node will automatically sync over to the secondary node. Very useful.
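I haven’t dug into exactly how the installer wires that sync up, but a common way to do this kind of one-way file sync is lsyncd watching the webroot and pushing changes to the other node over rsync/SSH. Purely as an illustration (the hostname and paths are placeholders, and the DB side would be handled separately via MySQL replication):

```lua
-- Illustrative only: one-way file sync from the primary to the secondary
-- node. The installer sets up its own mechanism; this is just one common
-- approach using lsyncd.
sync {
    default.rsyncssh,
    source    = "/var/www/webroot/ROOT",
    host      = "node2.example.com",
    targetdir = "/var/www/webroot/ROOT",
    delay     = 5,  -- batch changes for a few seconds before syncing
}
```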
With the actual servers set up, all the files and DB copied over, and the www subdomain pointed to Cloudflare, we were ready to start actually using Cloudflare. The first step was to create DNS records in Cloudflare for the www subdomain pointing to the two servers. So many DNS records.
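Concretely, that pair of records looks something like this in zone-file notation (the IPs here are documentation placeholders, not our actual servers):

```
; Sketch only: one A record per origin server
www.reclaimhosting.dev.  300  IN  A  203.0.113.10   ; primary node
www.reclaimhosting.dev.  300  IN  A  203.0.113.20   ; secondary node
```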

With DNS now pointed at Cloudflare, we had to set up the load balancer on Cloudflare’s end. For this, I mostly relied on Virtuozzo’s documentation. After I had set up the Origin Pool and configured monitoring, both origins were listed as “unhealthy” even though the site itself was up and running. This was because, while Cloudflare had issued an SSL certificate and the site did load securely, there were no certs on the servers themselves. The fix was issuing a Let’s Encrypt cert for both servers, which was actually a step mentioned in the documentation I was using. A step I totally overlooked.
And with that, the site was up and running; being hosted on two separate servers in two separate locations and protected by Cloudflare!
To squeeze even more performance out of the site, we decided to hook it up to Cloudflare’s CDN (which is included in the plan we purchased for this dev space anyways). While there are a number of ways to do this with a WordPress site, we went the simplest route for our setup: the LiteSpeed Cache plugin. The servers the dev site is hosted on are already running the LiteSpeed Web Server, so it makes sense to use the WordPress caching plugin built for it. The fact that, with just an API token, it can hook right up to Cloudflare’s CDN is a bonus.
We also enabled Cloudflare’s Always Online service to ensure that if everything came crashing down, the site would still remain up and running, at least in a static form. This is done by storing a copy of the site with the Internet Archive, which is just a fantastic service that needs to be protected.
Right now, if you’re not signed in, the dev site is just a basic maintenance page. And that it will likely remain. But soon we’ll be moving the main site over to this kind of setup. Very exciting!
I wanted to add an update to this.
We’ve gone ahead and added a third server to the multiregion setup. But rather than run yet another WordPress instance and figure out how to sync not only the files but the DB as well, this third server is just an Nginx container hosting a static copy of the site mirrored with “wget” (https://www.gnu.org/software/wget/manual/wget.html#Recursive-Retrieval-Options). I wrote a script that runs once a day as a cronjob and grabs a copy of the site, then set up this third server as a “Fallback Origin” (https://developers.cloudflare.com/load-balancing/understand-basics/traffic-steering/steering-policies/standard-options/) in Cloudflare’s Load Balancing options. I will mention that the script checks the HTTP status code and makes sure the site isn’t throwing a 4xx or 5xx before it grabs the copy, so we’re not left with a broken static mess in case traffic does end up going to the static mirror.
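A simplified sketch of that script (the destination path and exact wget flags here are illustrative, not the production versions):

```shell
#!/bin/sh
# Daily mirror job, roughly: check the live site's HTTP status first,
# and only re-mirror when it isn't erroring out.

SITE="https://www.reclaimhosting.dev"
DEST="/var/www/static-mirror"   # placeholder path

# Refuse to mirror on a 4xx/5xx (or a failed request, which curl reports
# as 000), so a broken deploy never overwrites the last good static copy.
safe_to_mirror() {
  case "$1" in
    000|4??|5??) return 1 ;;
    *)           return 0 ;;
  esac
}

run_mirror() {
  status=$(curl -s -o /dev/null -w '%{http_code}' "$SITE")
  if safe_to_mirror "$status"; then
    wget --mirror --convert-links --adjust-extension \
         --page-requisites --no-parent -P "$DEST" "$SITE"
  else
    echo "site returned $status, keeping existing mirror" >&2
    return 1
  fi
}

# Only hit the network when explicitly asked to, e.g. from cron:
#   0 4 * * * /usr/local/bin/mirror-site.sh --run
if [ "${1:-}" = "--run" ]; then
  run_mirror
fi
```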
Because of some Let’s Encrypt weirdness (no doubt caused, at least in part, by the whole “Fallback” thing) I wasn’t able to issue an SSL cert for the third server the same way I did for the original two. This was solved by issuing one from Cloudflare (https://developers.cloudflare.com/ssl/origin-configuration/origin-ca). It only works for communication between the server and Cloudflare, but that’s all we need here.
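Once the cert and key are downloaded from the Cloudflare dashboard, wiring them into the Nginx container is only a few lines (the paths here are placeholders for wherever you save them):

```nginx
server {
    listen 443 ssl;
    server_name www.reclaimhosting.dev;

    # Cloudflare Origin CA cert + key. Only Cloudflare trusts this cert,
    # which is fine here since all traffic reaches this origin through them.
    ssl_certificate     /etc/nginx/ssl/cloudflare-origin.pem;
    ssl_certificate_key /etc/nginx/ssl/cloudflare-origin.key;

    root /var/www/static-mirror;
}
```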
This way, if the two “real” servers go down, (a static copy of) the site will still remain up. And if that goes down too, then (while I’m scrambling to bring everything back up) we’re still covered by the copy stored on the Internet Archive through Cloudflare’s Always Online thing. Uptime, uptime, uptime!