
An Amazon server failure hampered Alexa, Ring, Disney Plus, PUBG, League of Legends and delivery services


Amazon Web Services (AWS)

Large sections of the internet are experiencing poor loading or crashes due to issues with various Amazon Web Services cloud servers. Because Amazon's huge network of data centers powers many of the internet services you use, including this website, any outage has significant ramifications, as we've seen in previous AWS disruptions. People began to notice problems at 10:45 a.m. ET.

Though some Amazon Web Services (AWS) systems have been restored, the internet remains slower and more unreliable than usual. The most critical apps affected by the outage may be those used by Amazon's own workers. According to CNBC, Reddit posts from Amazon Flex, warehouse, and delivery workers say that the apps that track parcels, tell drivers where to go, and generally keep packages on schedule have also gone down.

Streaming services Disney Plus and Netflix, as well as games such as PUBG, League of Legends, and Valorant, have been reported to be unavailable. Amazon.com and other Amazon products, including the Alexa assistant, Kindle ebooks, and Amazon Music, also had trouble, as did Ring and Wyze security cameras. The list of services with simultaneous spikes in outage complaints on DownDetector includes practically every well-known name: Tinder, Roku, Coinbase, Cash App, Venmo, and so on.

Network administrators around the world reported errors connecting to Amazon's instances and to the AWS Management Console, which controls their access to those servers. Amazon's official status page was updated with notifications acknowledging the outage after nearly an hour of issues.

[9:37 AM PST] Multiple AWS APIs in the US-EAST-1 Region are being impacted. This problem is also affecting some of our monitoring and incident response software, causing delays in providing updates. We've figured out what's causing the problem and are working to fix it.

[10:12 AM PST] Multiple AWS APIs in the US-EAST-1 Region are being impacted. This problem is also affecting some of our monitoring and incident response software, causing delays in providing updates. The root cause of the issue causing service API and console issues in the US-EAST-1 Region has been found, and we are beginning to see signs of recovery. At this time, there is no estimate for full recovery.

[11:26 AM PST] Multiple AWS APIs in the US-EAST-1 Region are being impacted. This problem is also affecting some of our monitoring and incident response software, causing delays in providing updates. EC2, Connect, DynamoDB, Glue, Athena, Timestream, and Chime, as well as other AWS services in US-EAST-1, are affected. A malfunction of multiple network devices in the US-EAST-1 Region is the root cause of this problem. We are pursuing multiple mitigation paths in parallel and have observed some signs of recovery, but we do not yet have an ETA for full recovery. This issue affects root logins for consoles in all AWS regions; however, customers can use an IAM role for authentication on consoles other than US-EAST-1.

[12:34 PM PST] In the US-EAST-1 Region, we continue to see higher API error rates for multiple AWS services. A malfunction of numerous network devices is the root cause of this problem. We are continuing to work toward mitigation and are pursuing a variety of mitigation and resolution options. We've seen some early signs of improvement, but we don't have a timetable for complete recovery. Customers who are having trouble logging in to the AWS Management Console in US-EAST-1 should try a different Management Console endpoint (such as https://us-west-2.console.aws.amazon.com/). Additionally, even when using console endpoints outside of US-EAST-1, you may be unable to log in using root credentials. If this affects you, we recommend using IAM Users or Roles for authentication. As additional information becomes available, we will continue to post updates here.

[2:04 PM PST] In the US-EAST-1 Region, we have implemented a mitigation strategy that has resulted in significant recovery. We're still keeping an eye on the health of the network devices, and we expect to keep making progress toward full recovery. At this time, there is no estimated time for full recovery.

[2:43 PM PST] The underlying issue that caused some network devices in the US-EAST-1 Region to malfunction has been resolved. Availability is improving across the board for most AWS services. All services are now working on service-by-service recovery on their own. We're still working to get all impacted AWS Services and API operations back up and running. We've temporarily suspended Event Deliveries for Amazon EventBridge in the US-EAST-1 Region to speed up overall recovery. These events will still be received and accepted, and they will be queued for delivery at a later time. 

[3:03 PM PST]  Although many services have recovered, we are working to ensure that all services have fully recovered. SSO, Connect, API Gateway, ECS/Fargate, and EventBridge continue to be affected. Engineers are working hard to minimize the impact on these services. 

[4:35 PM PST] We are presently working to restore any services that have been impacted as a result of the network device difficulties. In the Service Health Dashboard, we will provide additional updates for impaired services. 
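The status updates above point to two workarounds: sign in through a console endpoint outside US-EAST-1 and authenticate with IAM credentials rather than the root account. The same idea applies to API clients. Here is a minimal, illustrative boto3 sketch (the profile and region names are assumptions, not taken from the article) that pins a session to another region and uses an IAM profile:

```python
import boto3

# Minimal sketch (assumed names): pin an SDK session to a region other
# than the impaired us-east-1, and authenticate with an IAM user/role
# profile instead of root credentials.
session = boto3.Session(
    profile_name="my-iam-user",  # hypothetical IAM profile in ~/.aws/credentials
    region_name="us-west-2",     # any healthy region outside us-east-1
)

# Calls made through this client go to the us-west-2 endpoints, e.g.
# https://ec2.us-west-2.amazonaws.com, bypassing the affected region.
ec2 = session.client("ec2")
for region in ec2.describe_regions()["Regions"]:
    print(region["RegionName"])
```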

The troubles appear to be limited to the US-EAST-1 AWS region in Virginia, so customers elsewhere may not experience as many problems; if you do, they may manifest as slightly delayed loading while the network reroutes your requests elsewhere. When contacted for comment, Amazon referred to the updates on its status page, which show the firm is "actively working toward recovery."
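For applications built on AWS, the broader lesson is to avoid pinning everything to a single region. As a rough sketch only (it assumes the data is already replicated across regions, for example via DynamoDB global tables, and the table name, key shape, and region list are made up), a client can fall back to a second region when calls to the first fail:

```python
import boto3
from botocore.exceptions import ClientError, EndpointConnectionError

# Hypothetical client-side failover: try the primary region first and
# fall back to a secondary one if the call errors out.
def get_item_with_failover(key, table="example-table",
                           regions=("us-east-1", "us-west-2")):
    last_error = None
    for region in regions:
        try:
            dynamodb = boto3.client("dynamodb", region_name=region)
            return dynamodb.get_item(TableName=table, Key=key)
        except (ClientError, EndpointConnectionError) as err:
            last_error = err  # region is impaired; try the next one
    raise last_error

# Example call (assumes the table exists in both regions):
# item = get_item_with_failover({"id": {"S": "order-123"}})
```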

