Amazon’s AWS UAE data centre hit by fire



A fire at an Amazon Web Services data centre in the UAE knocked out one of the region's three availability zones on Sunday, triggering a service disruption that stretched well past 18 hours and left customers scrambling to recover workloads or shift them elsewhere.


The incident occurred on the same day that Iran launched retaliatory missile and drone strikes against targets across the UAE, following U.S. and Israeli military action against Iran. AWS has not stated whether the objects that struck the data centre were connected to those strikes.

The incident began at around 4:30 AM PST (4:30 PM UAE time) on March 1, when objects struck the data centre facility housing mec1-az2 — one of three availability zones in AWS's ME-CENTRAL-1 region. The impact caused sparks and ignited a fire. As a standard safety measure, the fire department cut power to the entire facility, including its backup generators, while they worked to extinguish the blaze.

AWS confirmed the situation in a public update at 9:41 AM PST, stating it was "still awaiting permission to turn the power back on" from the fire department.

What went down

With power cut to the zone, the damage was immediate and wide-ranging. EC2 instances, EBS volumes, RDS database instances, and other resources inside mec1-az2 went offline. On top of that, several EC2 networking APIs — including AllocateAddress, AssociateAddress, DescribeRouteTables, and DescribeNetworkInterfaces — began failing across the broader region, affecting customers even in the unaffected zones.
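For readers who want a sense of what those calls look like, here is a minimal probe in Python with boto3, assuming configured AWS credentials. It exercises only the two read-only operations from AWS's list, since AllocateAddress and AssociateAddress create and modify real resources; during the incident, calls like these would have surfaced as elevated error rates.

    import boto3
    from botocore.exceptions import ClientError

    # The region is real; whether these calls succeed depends on
    # the state of the regional API endpoints.
    ec2 = boto3.client("ec2", region_name="me-central-1")

    # Read-only operations from AWS's list of degraded APIs.
    probes = {
        "DescribeRouteTables": lambda: ec2.describe_route_tables(MaxResults=5),
        "DescribeNetworkInterfaces": lambda: ec2.describe_network_interfaces(MaxResults=5),
    }

    for name, call in probes.items():
        try:
            call()
            print(f"{name}: OK")
        except ClientError as err:
            # During the outage, customers saw failures like this one.
            print(f"{name}: failed with {err.response['Error']['Code']}")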

AWS began rerouting traffic away from mec1-az2 by 6:09 AM PST, and most services had weighted away from the affected zone by 7:09 AM. But for customers with EC2 instances and EBS volumes physically sitting inside mec1-az2, there was no quick fix — those resources were simply down until power could be restored.

Through the afternoon, AWS worked on multiple fronts simultaneously — trying to restore power to the zone while also deploying workarounds for the broken networking APIs so customers in healthy zones weren't completely stuck.


By 2:28 PM PST, the AllocateAddress API had partially recovered, allowing customers to create new network addresses. But the AssociateAddress API — needed to actually attach those addresses to instances — remained broken. AWS told customers to expect a two-to-three-hour window before that was resolved.

By 4:26 PM, AssociateAddress was showing "significant signs of recovery." By 6:01 PM, AWS confirmed both APIs were back, and crucially, customers could now move Elastic IP addresses away from resources stuck in the affected zone and reassign them to instances running in the healthy zones.
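As a rough sketch of that recovery path, assuming a VPC Elastic IP and boto3, with all resource IDs hypothetical: detach the address from the unreachable instance, then attach it to one running in a healthy zone.

    import boto3

    ec2 = boto3.client("ec2", region_name="me-central-1")

    # Hypothetical IDs: the EIP's current association in mec1-az2,
    # its allocation ID, and a replacement instance in a healthy AZ.
    OLD_ASSOCIATION_ID = "eipassoc-0123456789abcdef0"
    ALLOCATION_ID = "eipalloc-0123456789abcdef0"
    HEALTHY_INSTANCE_ID = "i-0123456789abcdef0"

    # Step 1: release the address from the unreachable instance.
    ec2.disassociate_address(AssociationId=OLD_ASSOCIATION_ID)

    # Step 2: point the same public IP at an instance in a healthy zone.
    # This is the AssociateAddress call that stayed broken until ~6:01 PM PST.
    resp = ec2.associate_address(
        AllocationId=ALLOCATION_ID,
        InstanceId=HEALTHY_INSTANCE_ID,
    )
    print("New association:", resp["AssociationId"])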

As of that update, however, power had still not been restored to mec1-az2 and AWS said there was no ETA.

Then, at 9:59 PM PST — more than 17 hours after the incident started — AWS flagged a fresh wave of connectivity issues and elevated error rates in the ME-CENTRAL-1 region, saying it was investigating.

Who was affected

AWS was clear on one point throughout: customers running applications redundantly across multiple availability zones were not impacted. The other two AZs in the UAE region, mec1-az1 and mec1-az3, remained fully operational during the incident.
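One practical wrinkle: AWS reports incidents by zone ID (mec1-az2), but each account sees its own mapping of zone names (me-central-1a and so on) to those IDs. A minimal boto3 sketch, assuming nothing beyond working credentials, shows how to check which of your zone names corresponds to the affected ID.

    import boto3

    ec2 = boto3.client("ec2", region_name="me-central-1")

    # ZoneId is consistent across accounts; ZoneName is account-specific.
    for az in ec2.describe_availability_zones()["AvailabilityZones"]:
        marker = "  <-- affected zone" if az["ZoneId"] == "mec1-az2" else ""
        print(f'{az["ZoneName"]} -> {az["ZoneId"]} ({az["State"]}){marker}')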

The pain was felt by customers who had workloads concentrated in mec1-az2, or those who relied on the regional networking APIs that degraded as a knock-on effect of the outage.

AWS also noted that, with customers surging workloads into the surviving AZs, provisioning times in those zones were longer than usual and some instance types were harder to obtain.
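One common way teams cope with that kind of capacity crunch, sketched here with boto3, an illustrative AMI ID, and an illustrative list of acceptable types, is to fall back through substitute instance types when a launch fails for lack of capacity.

    import boto3
    from botocore.exceptions import ClientError

    ec2 = boto3.client("ec2", region_name="me-central-1")
    AMI_ID = "ami-0123456789abcdef0"  # hypothetical image ID

    # Preferred type first, then substitutes that may still have capacity.
    for instance_type in ["m5.large", "m5a.large", "t3.large"]:
        try:
            ec2.run_instances(
                ImageId=AMI_ID,
                InstanceType=instance_type,
                MinCount=1,
                MaxCount=1,
            )
            print(f"Launched {instance_type}")
            break
        except ClientError as err:
            if err.response["Error"]["Code"] != "InsufficientInstanceCapacity":
                raise  # some other failure; don't mask it
            print(f"No capacity for {instance_type}, trying next type")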

For now, the ME-CENTRAL-1 region remains in a fragile state. Power restoration to mec1-az2 has not been confirmed, and the late-night flare-up of connectivity issues suggests the incident is not fully behind AWS or its customers just yet.
