[Completed] Short interruption on anuhosting.net server

Update 23:15 GMT: the migration is complete, after a slightly longer-than-estimated 37-minute data sync. All service checks are coming up green again.

21:45 GMT. In order to improve performance, we are going to migrate anuhosting.net to another server with faster CPUs. Since 15 February, anuhosting.net has been running on backup hardware after the RAID array it was stored on failed. The migration will entail a short interruption to service; we expect around 20 minutes, plus or minus 10. Apologies for any inconvenience caused. We will start the migration at 22:15 GMT.

[Completed] SpamTitan server upgrade

2016-02-19 17:55 GMT

As announced 2 weeks ago, we are performing an upgrade on our SpamTitan server. We are commencing work now and will update this blog when complete.

There will be a brief interruption to the filtering of inbound mail; we expect this to last no more than 10 minutes.

Update 18:20 GMT: the disruption was less than 5 minutes, and mail is now flowing through the new server. We’ve had to make a few manual config tweaks that weren’t carried over automatically by the configuration export/import tool, but overall the process went fairly smoothly.

Update 18:30 GMT: we’ve tested and monitored log files for the past 20 minutes and everything is running smoothly. The old SpamTitan server is available at https://oldspamtitan.anu.net/, we’ll leave it running for 14 days (after which all quarantined mail will have expired) in case any customers need to log in and retrieve any false positives from it.

A brief analysis of the ongoing anuhosting.net outage

While we wait and watch the painfully slow process of restoring our anuhosting.net shared hosting and reseller server, let me write a brief analysis of what happened and what’s going on.

Last year, we spent almost £20,000 updating our server and network hardware in our Amsterdam 1 datacenter, to accommodate continued growth in shared hosting and reseller hosting services. We put in 10Gbps Ethernet, new 16-core/256GB RAM servers and pure SSD RAID storage arrays that are over 10 times faster than spinning disks.

We planned (and still plan) to continue investing in our physical infrastructure this year, including adding high-speed, replicated, redundant network-attached storage arrays and new backup servers.

The server that failed this morning was just 8 months old. At this point we can only assume we ended up with 2 SSDs from a bad batch, which failed at almost exactly the same time, leaving no time for the RAID array to rebuild.

The storage for our anuhosting.net server was on the RAID array that failed, and was backed up daily to a separate backup server. Those incremental backups run over 1Gbps Ethernet, at night and in the background.
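
For the curious, here is a minimal sketch of what a nightly rsync-based incremental backup like this can look like. The host name, paths and layout below are illustrative placeholders, not our actual production configuration:

```python
#!/usr/bin/env python3
"""Sketch of a nightly incremental backup using rsync hard-link snapshots.
The host name, paths and layout are illustrative placeholders, not our
production configuration."""
import subprocess
from datetime import date

SRC = "/var/www/"                  # data on the live server
DEST_HOST = "backup.example.net"   # hypothetical backup server
SNAP_ROOT = "/backups/anuhosting"  # snapshot directory on the backup server
today = date.today().isoformat()

# Unchanged files are hard-linked against the previous snapshot, so each
# night only the data that actually changed is sent over the wire.
subprocess.run(
    [
        "rsync", "-aH", "--delete",
        f"--link-dest={SNAP_ROOT}/latest",
        SRC,
        f"{DEST_HOST}:{SNAP_ROOT}/{today}/",
    ],
    check=True,
)

# Point the 'latest' symlink at the snapshot we just made.
subprocess.run(
    ["ssh", DEST_HOST, f"ln -sfn {SNAP_ROOT}/{today} {SNAP_ROOT}/latest"],
    check=True,
)
```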

The problem we are facing today is that the sheer volume of data needed to restore the server is taking many, many hours to copy from the backup server's spinning SATA disks over 1Gbps Ethernet. Our (woefully inadequate) backup plan for a disaster like this was to take a fresh copy of all the data, put it on a new server and fire it up. We do this sort of thing regularly for routine jobs like cloning servers or testing, but what we failed to account for in this case was how long it would take to copy all the data.
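
To put some rough numbers on it (the 3TB dataset size below is an assumed example for illustration, not the actual size of the anuhosting.net data):

```python
# Back-of-envelope restore time over a 1Gbps link. The 3TB figure is an
# assumed example, not the actual size of the anuhosting.net dataset.
data_tb = 3.0            # assumed amount of data to restore, in TB
link_gbps = 1.0          # 1Gbps Ethernet between backup and new server
efficiency = 0.7         # protocol overhead plus slow reads from SATA disks

throughput_mb_s = link_gbps * 1000 / 8 * efficiency  # ~87 MB/s effective
hours = data_tb * 1_000_000 / throughput_mb_s / 3600
print(f"Estimated copy time: {hours:.1f} hours")      # roughly 9.5 hours
```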

Plan B was that instead of copying all the data, we'd spin up a new virtual machine and connect it directly to the data stored on the backup server. The theory was that we could at least get sites back online, even if they ran somewhat slowly. Around midday today, we made the call to implement plan B. Again, though, we failed to account for the I/O required to run anuhosting.net, and almost as soon as we booted it up, the server crashed due to insufficient I/O throughput.
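
The rough arithmetic behind that failure looks like this, using typical ballpark figures rather than measurements from our own hardware:

```python
# Rough random-I/O comparison using typical ballpark figures, not
# measurements from our own hardware.
sata_disk_iops = 100     # a 7200rpm SATA disk manages on the order of 100 random IOPS
backup_array_disks = 4   # assumed number of disks in the backup server's array
ssd_iops = 50_000        # a single datacenter SSD handles tens of thousands

backup_array_iops = sata_disk_iops * backup_array_disks
print(f"Backup array: roughly {backup_array_iops} random IOPS")
print(f"The SSD array the sites normally run on: {ssd_iops}+ random IOPS")
# A busy shared-hosting server (MySQL, mail, thousands of small PHP files)
# needs thousands of random IOPS, so the spinning disks saturated almost
# as soon as the live VM was pointed at them.
```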

So we came up with plan C: continue copying the data from the backup server in the background, and start minimal services on anuhosting.net so we could at least restore some functionality.

That’s where we’re currently at: DNS and email are running, while the restore process continues in the background (albeit at a slower pace, due to increased I/O load).

Once the restore process completes, we will temporarily shut down DNS and email services again while we synchronise the latest changed data to the new server, and boot up from local storage. At this point we will be able to start up the Apache/PHP/MySQL servers again, as well as DNS and email.
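
For those interested, the cutover will look roughly like the sketch below; the service names, host name and paths are placeholders rather than our exact setup:

```python
#!/usr/bin/env python3
"""Sketch of the planned cutover; service names, host name and paths are
illustrative placeholders rather than our exact setup."""
import subprocess

def run(*cmd):
    subprocess.run(cmd, check=True)

# 1. Stop the services that are currently writing to the data.
for svc in ("named", "postfix", "dovecot"):
    run("systemctl", "stop", svc)

# 2. Final delta sync: only the data changed since the bulk copy finished is
#    transferred, so this pass should take minutes rather than hours.
run("rsync", "-aH", "--delete",
    "backup.example.net:/backups/anuhosting/latest/", "/var/www/")

# 3. Bring everything back up from the new server's local SSD storage.
for svc in ("named", "postfix", "dovecot", "mysql", "httpd"):
    run("systemctl", "start", svc)
```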

We don’t know exactly how long this will take; at a guess, around 4 hours.

We know where we went wrong, and we know what needs to be done to fix our infrastructure going forward. All we can ask for is continued patience and understanding from our customers while we keep working to restore service. Be assured that we are working as fast as we possibly can.