[Resolved] SpamTitan service interruption

Update 2013-10-23 12:30 UTC

The backlog has been cleared and service is back to 100%.

Update 2013-10-23 11:45 UTC

We’re still seeing some delays processing incoming mail due to the backlog created by the earlier outage. No mail has been lost, and we expect the backlog to be cleared within one hour.

Update 2013-10-23 08:35 UTC

Service fully restored. Apologies to all affected customers for the inconvenience caused.

Update 2013-10-23 08:05 UTC

The ETA for a fix is 20 minutes.

2013-10-23 06:15 UTC

Unfortunately we are currently experiencing a service interruption on our SpamTitan email filtering system. We require assistance from SpamTitan support, which comes online at 09:00 British time; we expect the problem to be resolved shortly thereafter.

[Resolved] Network outage Amsterdam 1 datacenter

Update 2013-10-17 00:30 UTC

Packet loss has stopped; we are back to 100% connectivity. Awaiting an RFO (Reason for Outage) report from network ops.

Update 2013-10-16 18:45 UTC

Service has been restored, but 25–30% packet loss is ongoing. We are awaiting an RFO report from network ops.

Update 2013-10-16 16:45 UTC

Network ops are performing a full reload on the affected routers.

Update 2013-10-16 16:15 UTC

Network ops have identified the problem and are working to resolve it as soon as possible. Still no firm ETA available.

Update 2013-10-16 15:45 UTC

Our hardware is working and links are up, but no traffic is reaching our switches. Network ops are investigating.

2013-10-16 15:15 UTC

We are currently experiencing an outage at our Amsterdam 1 datacenter. Engineers are working on a solution but we do not yet have a firm ETA. Please check back shortly for an update.

Customers affected include shared Web hosting, reseller Web hosting, ns1.anu.net and ns2.anu.net DNS resolvers (ns3 is hosted in Chicago and is still up), and customers with virtual servers hosted on ams1-cloudmin.anu.net.

[Resolved] ams2-cloudmin.anu.net crash

The Cloudmin VM management system for our Amsterdam 2 datacenter crashed last night (11th October). The crash was due to a bug in the VM status collection system, which caused it to consume excessive resources and eventually run out of memory altogether.

We have resolved this issue by restarting the Cloudmin server. An unofficial patch for this bug is also available, which we have now applied pending the next maintenance release of Cloudmin, which will fix this known issue.

Services affected: DNS resolution for ams2-cloudmin.anu.net zone, Web GUI VM management for VMs in Amsterdam 2 datacenter, API/customer portal management of VMs in Amsterdam 2 datacenter.