All systems are operational

Past Incidents

Wednesday 3rd July 2019

No incidents reported

Tuesday 2nd July 2019

3rd Party Service Interruption

Disruption

  • Update (14:53): Customers are reporting issues with Gateway Timeouts; the common factor among affected customers is the use of CloudFlare. We recommend that all customers disable CloudFlare completely before raising a support ticket.
  • Update (15:19): CloudFlare report that a temporary fix is in place and services should return to normal.

Do not raise an emergency ticket if you use CloudFlare. Customers are advised to check https://www.cloudflarestatus.com/ and disable CloudFlare in the meantime.
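For customers who prefer to monitor the CloudFlare status page programmatically rather than refreshing it manually, a minimal sketch is shown below. It assumes that cloudflarestatus.com exposes the standard Statuspage JSON endpoint at /api/v2/status.json with the usual "status" fields; this is an assumption for illustration only, not an API that we operate.

    # Sketch: poll the CloudFlare status feed. Assumes the standard
    # Statuspage endpoint /api/v2/status.json is available (not operated by us).
    import json
    import urllib.request

    STATUS_URL = "https://www.cloudflarestatus.com/api/v2/status.json"

    def cloudflare_status():
        # Fetch the overall status indicator and description reported by the page.
        with urllib.request.urlopen(STATUS_URL, timeout=10) as resp:
            payload = json.load(resp)
        status = payload["status"]
        # "indicator" is typically one of: none, minor, major, critical.
        return status["indicator"], status["description"]

    if __name__ == "__main__":
        indicator, description = cloudflare_status()
        print(f"CloudFlare status: {indicator} ({description})")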

Monday 1st July 2019

No incidents reported

Sunday 30th June 2019

No incidents reported

Saturday 29th June 2019

No incidents reported

Friday 28th June 2019

No incidents reported

Thursday 27th June 2019

Network Interruption

Disruption

  • Update (17:47): Some customers have reported connectivity issues with their stacks.
  • Update (18:25): Connectivity has now been restored across the majority of our network, but some customers are still affected.
  • Update (19:21): We are still experiencing connectivity issues across our network; our engineers are investigating.
  • Update (19:34): The majority of our network has connectivity, but some customers are still affected.
  • Update (20:09): Connectivity has been restored. Please get in touch with us if you are still experiencing issues.
  • Update (21:30): Services are confirmed fully operational and the underlying issue appears to have been tracked to a failing/malfunctioning core aggregation switch. This has been permanently powered off and will be replaced. A full RFO will be available within 24 hours.

Post-Mortem

Our report on the incident is as follows.

Issue

Loss of connectivity, high load and periods of unavailability for the entire MA3 facility and a single isolated network segment.

Outage Length

The duration was between 60 and 180 minutes.

Underlying cause

Unfortunately, this was a repeat incident, similar in nature to https://status.sonassi.com/incident/149/.

We believe a malfunctioning aggregation switch in the backbone of the network core began sending out malformed/erroneous L2 packets, driving up CPU utilisation on other routers and switches. Control plane traffic was disrupted and multi-chassis aggregated links degraded, resulting in a loss of downstream connectivity to rack pods and, consequently, to customer stacks.

Because the symptoms differed from the previous incident, diagnosis was initially led down an incorrect path of resolution, resulting in extended resolution times and the isolation of an entire network segment (a single rack pod).

Symptoms

Our facilities monitoring and service monitoring probes immediately reported the incident. Customers would have experienced anything from slow page load times to a completely inaccessible site.

Resolution

The malfunctioning aggregation switch, a repeat of the previous incident, appeared to be the source of the increased CPU load throughout the network; permanently powering off the device (with a replacement to follow) resolved the underlying issue.

Prevention

The network architecture and equipment in use are more than adequate, offering extremely high levels of availability, with multiple layers of redundancy designed into every tier of the network stack. However, a failed/failing device caused significant network disruption despite never having explicitly failed or demonstrated signs of error or malfunction, other than the successful operation of the network in its absence.

Whilst rare, eventual degradation of the switch silicon is believed to be the cause, and the device (and its paired device) will be replaced with latest-generation hardware.