[Resolved] Amsterdam 1 Server Failure

2016-02-15 06:00 GMT: Two hard drives failed simultaneously in a server at our Amsterdam 1 data centre, causing the RAID array to fail. This has taken several clients and some of our internal infrastructure offline (including our support helpdesk).

We are currently working hard to restore backups onto another server and will provide regular updates. If you need to contact us, you can tweet @anuinternet.

Update 08:55 GMT: our support desk is back online. Some virtual machines are already back online; others are still restoring from backup.

Update 10:15 GMT: all servers except the anuhosting.net shared/reseller server are now back online. anuhosting.net has been priority #1 since we started the restore procedure four hours ago; unfortunately, it is also by far the largest and is taking quite some time to restore. We estimate it may take another 2-3 hours to complete.

Update 13:40 GMT: we are working on recovering services on anuhosting.net. We aim to have mail services running very shortly, followed by MySQL and Apache/PHP. Apologies for the ongoing service interruption.

Update 14:00 GMT: DNS and mail on anuhosting.net are operational again. MySQL was unable to recover InnoDB to a functional state, so we are restoring the last consistent database snapshot available, which is from 04:00 on Sunday. We will make the recovered InnoDB databases from the latest backup available to anyone who wants to try to extract missing data from Sunday. Most of the data is there, but we were unable to recover it to a fully functional state, so we made the decision to roll back to a known-good copy. We expect PHP/MySQL/Apache services to be back online within 30 minutes.

Update 15:30 GMT: we are running into problem after problem while restoring web functionality and do not currently have an ETA. We are working as fast as possible to recover MySQL, Apache and PHP services on anuhosting.net. All other services are currently operational. Our sincere apologies for the continued downtime on shared and reseller hosting servers.

Update 01:30 GMT: It’s been a very long day; thankfully, at this point we can say our anuhosting.net server is finally operational again. We have spent the past half hour testing as many sites as possible and things seem to be running well. We are concerned there may be a handful of InnoDB tables with errors; if your site is not functioning 100%, this may be the cause. Please contact support@anu.net ASAP and we will do what we can to help get you back up and running. We will be on hand tomorrow to answer any questions and help with any remaining issues.

A big thank you to all our customers for their patience, understanding and encouragement throughout this difficult day.

We will of course follow up with a detailed review of our storage systems, redundancy measures, backup and disaster recovery plans.

Today’s Packet Loss

When both our Amsterdam 1 and Amsterdam 2 data centres, separate facilities at opposite ends of town, began to experience similar levels of packet loss, it became clear that something outside of our control was amiss.

Both facilities are connected to the internet by a handful of global transit providers, two of which they share in common: Level3 Communications and Cogent Communications. A quick search on Twitter, often the best place to spot trending issues, revealed that Level3 was the problem.
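
For those interested in the nuts and bolts, here is a minimal sketch, in Python and purely for illustration, of how packet loss towards a handful of destinations might be measured from a monitoring host using the standard Linux ping utility. The destination addresses below are documentation placeholders, not our real monitoring targets.

    import re
    import subprocess

    # Placeholder destinations (documentation IPs), e.g. one reached via each transit provider.
    DESTINATIONS = ["198.51.100.10", "203.0.113.20"]

    def packet_loss(host, count=20):
        """Send `count` ICMP echo requests and return the reported loss percentage."""
        result = subprocess.run(["ping", "-c", str(count), "-q", host],
                                capture_output=True, text=True)
        match = re.search(r"([\d.]+)% packet loss", result.stdout)
        return float(match.group(1)) if match else 100.0

    for host in DESTINATIONS:
        print(host, "->", packet_loss(host), "% loss")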

Level3 is a Tier 1 network, which means it operates many of the inter-country and inter-continental cables that are vital for everybody to remain connected online. Individual home and office broadband providers sign agreements, either directly or via partner companies, with one or more Tier 1 networks, which connects their customers with the rest of the web. As part of this, the companies agree to receive and deliver traffic to the houses and businesses on their lines – it is a two-way process.

Telecom Malaysia, an internet service provider in Malaysia, is a customer of Level3. Early this morning, probably due to human error, they began giving Level3 incorrect information about who they could and couldn’t deliver traffic to and how much traffic they could handle. Their systems gave Level3 the go-ahead to dump tremendous amounts of traffic on them, which crippled their infrastructure, slowed down the internet for millions of people and left website owners unsure as to who could and couldn’t visit them. For illustration’s sake, for a short period of time, we could say that “25 percent of people online could not access 25 percent of the internet” because of this.
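
To make that a little more concrete, the toy model below (a simplified sketch, not real BGP) shows why a leak like this pulls traffic onto the wrong path: transit providers generally prefer routes learned from customers over routes learned from peers, so when a customer mistakenly re-announces prefixes it learned elsewhere, the provider can start sending traffic for those prefixes through the customer regardless of how small its links are. All AS names and prefixes here are illustrative.

    # Candidate routes for one prefix: (learned_from, relationship, AS path).
    routes = {
        "192.0.2.0/24": [
            ("peer-AS", "peer", ["peer-AS", "origin-AS"]),
        ],
    }

    def best_path(candidates):
        # A crude subset of BGP best-path selection: prefer routes from
        # customers, then the shortest AS path.
        preference = {"customer": 2, "peer": 1}
        return max(candidates, key=lambda r: (preference[r[1]], -len(r[2])))

    print("before leak:", best_path(routes["192.0.2.0/24"])[0])  # peer-AS

    # The customer accidentally re-announces a prefix it learned from another upstream.
    routes["192.0.2.0/24"].append(
        ("customer-AS", "customer", ["customer-AS", "other-upstream", "origin-AS"])
    )
    print("after leak: ", best_path(routes["192.0.2.0/24"])[0])  # customer-AS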

Because our facilities are connected directly to multiple Tier 1 networks, we are able to respond to problems like this fairly promptly. In Amsterdam 1, the NOC immediately severed our connection to Level3, diverting all traffic that would otherwise have been lost through our other providers. In Amsterdam 2, whilst we don’t have the official write-up yet, we saw traffic flowing again very promptly, so we imagine the NOC took similar steps.
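
Continuing the same toy model, this is roughly what severing that connection amounts to: shutting the session down withdraws every route learned from that provider, and traffic for the affected prefixes falls back to the next-best path via the remaining upstreams. The provider names and prefixes below are placeholders, not our actual routing table.

    # Routes per prefix, keyed by the upstream they were learned from.
    rib = {
        "198.51.100.0/24": {"Level3": ["Level3", "origin"],
                            "OtherTransit": ["OtherTransit", "x", "origin"]},
        "203.0.113.0/24": {"OtherTransit": ["OtherTransit", "y", "origin"]},
    }

    def withdraw(rib, provider):
        # Dropping the BGP session removes every route learned from that provider.
        return {prefix: {up: path for up, path in ups.items() if up != provider}
                for prefix, ups in rib.items()}

    def chosen_upstream(ups):
        # Shortest AS path wins in this toy model; None means no path is left.
        return min(ups, key=lambda up: len(ups[up])) if ups else None

    after = withdraw(rib, "Level3")
    for prefix, ups in after.items():
        print(prefix, "now routed via", chosen_upstream(ups))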

This does open our eyes to how fragile the internet can be: a misconfiguration between just two companies can disconnect large parts of the internet from each other. We are lucky enough to own our own hardware and to work directly with facilities that are connected to Tier 1 networks, giving us flexibility and the ability to control our own fate. Clients of a reseller, or even of a reseller of a reseller, may find that in scenarios like this they are left offline or confused for a much longer period of time.