My cloud server had been running my old setup for a while and, along with this site, was in much need of some updates and housekeeping. I will go into the updates to this site in another article, as I'm also planning on giving it a fresh coat of paint over the next few weeks. This post is all about the issues I encountered, the benefits I saw, and is also just a way for me to document the process in case I do a similar thing again in future and stumble upon the same hurdles.

Old Setup

Let's start with the setup I had before. I use Digital Ocean to host a single Droplet, on which I currently host two sites: this blog and the BulmaJS documentation site. This site is a Laravel application with the Statamic CMS, while the BulmaJS documentation is just a static site.

The server software itself consisted of:

  • Ubuntu 20.04

  • Webmin / Virtualmin

  • Nginx

  • Apache

  • PHP 7.3, 8.0, 8.1, 8.2

  • MariaDB

  • and a bunch of other stuff bundled with Webmin/Virtualmin

I did not need all of this. As a baseline, not that it really matters, the resource usage at this time was:

  • 59% RAM usage (~1.2GB)

  • CPU would be consistently between 2–6% (even though I get barely any traffic)

Time for the plan!

The Plan

I had decided that I wanted to upgrade the OS to the next LTS. Webmin/Virtualmin are, at this point, just bloat for my use case; I barely use them, and since setting the server up I've become much more comfortable configuring things myself via SSH rather than needing a GUI. I was upgrading this site to the latest Statamic, Laravel, and PHP versions, so there was a lot of potential clean-up there. I also didn't have any site hosted on the server that required a database, plus there was some other general clean-up to do.

Not only that, but I also like to try new things. I had been using the Caddy web server in some test Docker containers for a couple of months, and found it so easy to use that I wanted my freshly updated server to make use of it too.

So the plan was:

  1. Upgrade to Ubuntu 22.04 LTS (24.04 LTS is not yet available at the time of writing)

  2. Remove Webmin and Virtualmin

  3. Remove Nginx

  4. Remove Apache

  5. Remove all PHP versions

  6. Remove MariaDB

  7. Remove everything else I don't need

  8. Install and setup Caddy for both sites

So, I grabbed some coffee, and got started. The first step was upgrading Ubuntu.

Upgrading Ubuntu

Upgrading Ubuntu is a fairly simple process. I do see a lot of recommendations to create a brand-new server and just move the sites over; however, for projects like this I don't mind upgrading in place. If it was a super-critical production server then I would be more inclined to take the safer approach.

The first step was to ensure the current packages were all up to date:

sudo apt update
sudo apt upgrade

Once this was completed, we could call Ubuntu's upgrade command. However, the first time this failed for me, due to a couple of reasons:

  1. The Webmin/Virtualmin keys installed had expired, so fetching from the repository would fail

  2. Digital Ocean uses mirrors for the package repositories and by default Ubuntu disables third party sources when doing the upgrade

So, first, as I was removing Webmin/Virtualmin anyway, I just removed their repositories. This meant removing the list files from /etc/apt/sources.list.d, and then using the apt-key command to find and remove the keys for those repositories.
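
A rough sketch of what that looked like; the exact .list filename and key IDs will vary on your system, so treat these as placeholders:

sudo rm /etc/apt/sources.list.d/webmin.list   # filename is an example - check the directory first
sudo apt-key list                             # find the IDs of the Webmin/Virtualmin keys
sudo apt-key del KEY_ID                       # remove each one (apt-key is deprecated, but still available here)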

For the second issue, there was a simple flag I needed to pass to the upgrade command so that it would use Digital Ocean's mirrors. Time for the upgrade:

sudo do-release-upgrade --allow-third-party

Now I sipped my coffee and answered any prompts the upgrade threw at me. Once it was complete I restarted the server, and then checked the two sites still functioned. Always check after each stage; that way you know which step may have broken something!

Time for the purge!

The Purge

Now that we're on a newer version of Ubuntu, the next stage was to purge everything we did not need. This was effectively the point of no return, as it would be quicker to set up the new system than to reconfigure Webmin and Virtualmin. So I started. This may not be the best approach, and it did cause me some issues later on, so I would recommend not being quite so heavy-handed.

This stage is pretty simple: effectively, see what is installed using apt's list command and then remove what I don't want. I started with Webmin and Virtualmin, as a lot of dependencies were installed by these packages.
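
To see what's actually installed, something like this does the job; the grep filter is just an illustration:

apt list --installed
apt list --installed | grep -i php   # narrow the output down to one package family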

sudo apt purge webmin* virtualmin*

Then I rechecked the list of installed packages and removed the rest; a snippet of this:

sudo apt purge php7.2* php7.3* php8.0* php8.1* php8.2* webalizer* apache2* nginx* ...

This was actually the point at which I messed up, and later on I would spend quite a while trying to fix problems that originated here. Before doing this, I had already upgraded this site, so I did have PHP 8.3 installed. This purge may have unintentionally broken that install, as I would find out later.

After uninstalling all of that, I then ran autoremove to ensure any dependencies that were no longer needed got removed as well.
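
For reference, this is just the standard apt command; adding --purge also clears out leftover configuration files:

sudo apt autoremove --purge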

As expected, since we no longer had a web server, both sites were now offline. Caddy time!

Caddy 2 install and setup

Caddy, if you're not familiar with it, is a web server and reverse proxy that prides itself on security and minimal configuration. Their own site will do a far better job of explaining it than I will.

What I have really liked about Caddy is how easy it is to set up. HTTPS is configured by default, so no extra manual configuration for that, and getting a basic file-based server takes about 4 lines of configuration. A PHP-FPM proxy is just an extra line!

To install Caddy 2 I followed the official installation documentation; I won't cover that here just in case it changes in future.

Once it was installed, I then configured my Caddyfile so that both my sites could start being served again. This file is really simple, and I've included it below:

tomerbe.co.uk {
    redir https://www.{host}{uri}
}

www.tomerbe.co.uk {
    root * /var/www/html/tomerbe.co.uk/public
    encode gzip
    php_fastcgi unix//run/php/php8.3-fpm.sock
    file_server
}

bulmajs.tomerbe.co.uk {
    root * /var/www/html/bulmajs.tomerbe.co.uk
    encode gzip
    file_server
}

To give a quick explanation of this:

  • The first configuration block is to redirect tomerbe.co.uk to www.tomerbe.co.uk

  • The second configuration block

    • Sets the root directory; as this is a Laravel application, that is the public folder

    • Enables gzip encoding

    • Sets up PHP FastCGI to the php8.3-fpm UNIX socket

    • Enables the file server built into Caddy for static files

  • The third configuration block

    • Sets the root directory

    • Enables gzip encoding

    • Enables the file server for static files

The BulmaJS documentation doesn't have any PHP and is purely HTML files, so this is configured as just a simple static file server.
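
Worth noting: after editing the Caddyfile (which lives at /etc/caddy/Caddyfile on the standard package install), the configuration needs to be reloaded. Assuming the systemd service that comes with the official packages, either of these should do it:

sudo systemctl reload caddy
caddy reload --config /etc/caddy/Caddyfile   # talks to the running Caddy's admin API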

When Caddy is started, it'll expand on this configuration internally with additional boilerplate, and then get SSL certificates for all of these domains and install them. Within a few seconds, these sites would then be available on the internet again... right...

Well, the BulmaJS documentation site was. However, it was at this point I stumbled upon a couple of hurdles.

The Hurdles

I would like to start by saying these issues could have been caused by me (one definitely was). The other I hadn't encountered before when using Caddy with PHP FPM; however, I had only been doing this in Docker containers before, so it may just be an extra step I wasn't aware of.

The first issue was my fault, and took a long time to figure out. Do you remember all that purging I did? Yeah, well that happened to also delete all the PHP 8.3 configuration in the crossfire. What made this really tricky to track down is that I was not getting any log output anywhere. The Caddy log just had a 500 error but no details, there was no error from PHP FPM, and the request wasn't getting far enough into Laravel to use its error log.
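
For anyone debugging something similar, the usual places to look on a systemd-based setup are roughly these (the Laravel log path assumes the site root from the Caddyfile above):

sudo journalctl -u caddy --since "10 minutes ago"        # Caddy's own logs
sudo journalctl -u php8.3-fpm --since "10 minutes ago"   # PHP-FPM service logs
tail -n 50 /var/www/html/tomerbe.co.uk/storage/logs/laravel.log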

I eventually tried to run a composer install just in case there was a dependency problem, and this is when I got a lot of errors about missing configuration files for PHP extensions. I tried a few suggestions I found online; ultimately I just purged all of PHP 8.3 and then reinstalled it, so the configuration files would be regenerated.
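
A sketch of that reinstall; the extension list here is just an example and depends on what the site needs, and it assumes the PHP 8.3 packages are still available from whatever repository they originally came from (e.g. the ondrej/php PPA):

sudo apt purge php8.3*
sudo apt install php8.3-fpm php8.3-cli php8.3-mbstring php8.3-xml php8.3-curl php8.3-zip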

After doing this I was then getting Laravel's 500 error. This next issue was unusual for me and one I hadn't encountered before, but it may also be documentation I missed or my setup being different.

Caddy creates a caddy user and group, and the files are expected to be owned by the user and/or group, with suitable permissions. I, personally, like to have my user own the files and then have the web server's group own them as well. This way I can more easily edit them and use Git etc.

However, PHP FPM assumes the www-data user and group are being used, as this is what Nginx and Apache both use. I had to change this by editing the user and group settings in /etc/php/8.3/fpm/pool.d/www.conf to my user and Caddy's group.
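
A minimal sketch of that edit, assuming "myuser" stands in for your own account (the stock pool file uses www-data for both values):

# Illustrative only: swap "myuser" for your own username
sudo sed -i 's/^user = www-data/user = myuser/' /etc/php/8.3/fpm/pool.d/www.conf
sudo sed -i 's/^group = www-data/group = caddy/' /etc/php/8.3/fpm/pool.d/www.conf
# The listen.owner / listen.group settings in the same file may also need adjusting,
# depending on who needs to connect to the socket
sudo systemctl restart php8.3-fpm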

After restarting PHP FPM, voilà! The site was back up... kind of. I then just had to clear Statamic's Stache cache, and then it was all back up and running!
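
For reference, that's done through Statamic's please CLI from the site root on recent Statamic versions; warming it again afterwards is optional:

cd /var/www/html/tomerbe.co.uk
php please stache:clear
php please stache:warm   # optional: rebuild the cache straight away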

I don't know if the Docker containers for PHP FPM and Caddy work slightly differently and so do not have this issue, but once I worked it out it did make sense.

Final Comments

So, now my server is running an updated Ubuntu version. This site is on the latest PHP version. I'm using Caddy for my web server. And I've removed a lot of unnecessary software. How do the server resources look now? Well, it's probably not surprising that they look a lot better; after all, there is less running on it now. This improvement doesn't impact me too much, since I pay a fixed price anyway. However, it does mean that if I wanted to I could downgrade to the tier below, although I'm probably not going to. For the last 12+ hours, at the time of writing, the server has been running at:

  • 27% RAM usage (~540MB)

  • CPU is consistently between 0.3–0.4%

Overall a really good improvement, and if the traffic to the server were much higher, this would have increased the available capacity too!