#aws#outage
Today's deep dive is a rare behind-the-scenes look at how AWS handles an outage. Details on October's outage from Senior Principal Engineer Gavin McCullagh, who was part of the crew that resolved it. Plus details on how incidents are handled at AWS: newsletter.pragmaticengineer.com/p/how-aws-de...
December 16, 2025 at 6:33 PM
How AWS deals with a major outage • What happens when there’s a massive outage at AWS? A member of AWS’s Incident Response team lifts the lid, after playing a key role in resolving the leading cloud provider’s most recent major outage
newsletter.pragmaticengineer.com
December 16, 2025 at 5:28 PM
Business Insider says that Downdetector is a reliable source, and it has noted a spike in outage reports.

downdetector.com/status/aws-a...
AWS live status. Problems and outages for Amazon Web Services
Real-time AWS (Amazon Web Services) status. Is AWS down or suffering an outage? Here you see what is going on.
downdetector.com
December 15, 2025 at 3:54 PM
Is there an AWS or cloudflare outage? Struggling to use the internet in any meaningful way.
December 15, 2025 at 3:43 PM
I cannot wait for this nonsense to blow up in their faces.

We have had multiple, significant outages this year alone (AWS, Azure, etc.) because companies aren't doing proper, modern QA. And that takes staff.

People should prepare for a modern financial system outage. Cash on hand won't help.
December 15, 2025 at 3:33 PM
B) Mostly to make him understand how complicated things are. Because nothing short of a Cloudflare/AWS outage would be causing me problems.
If you had an annoying issue trying to access the World Wide Web and its associated "web sites" whom would you ask for help?

A) the closest eye-rolling 16yr-old

B) Patrick O'Donovan the Minister for Culture, Communications, and Sport
December 15, 2025 at 1:47 PM
can aws, cloudflare or github have an outage so we can go enjoy the snow plz
December 12, 2025 at 2:44 PM
When the Cloud Sneezes: a look at the ‘Outage Season’

The past few months have been a bruising reminder that even the biggest cloud providers can stumble. AWS, Microsoft Azure, and Cloudflare have all suffered major outages, disrupting any services that rely on them, from websites and shopping sites to CRM and finance systems and AI tools. For businesses (and their customers and users) these outages cause huge problems: they lower confidence, disrupt services, and damage reputation.
robquickenden.blog
December 12, 2025 at 12:46 PM
New AWS outage explainer: your zone was unexpectedly deorbited.
The heat dispersal part is so funny. They think space means free cooling.
December 11, 2025 at 7:14 PM
First AWS outage, and you're a back door man.
December 11, 2025 at 3:23 PM
🚨 Major #AzureFrontDoor & #Cloudflare outages shook the web in Nov 2025. How can your apps survive the next one?

📄 New DCAC white paper: From Outage to Opportunity — resilience strategies you need now.

👉 Read here: bit.ly/4oMfzcU

#cloudComputing #Azure #AWS #HA #Devops #sysadmin #Availability
From Outage to Opportunity: Strengthening Web Applications After Azure & Cloudflare Downtime
In today’s digital economy, your website is your business. When it goes down, you lose sales, leads, and reputation instantly. Recent global outages at Microsoft Azure Front Door and Cloudflare expose...
bit.ly
December 10, 2025 at 10:26 PM
The Register -
"Botnet takes advantage of AWS outage to smack 28 countries"

www.theregister.com/2025/11/26/m...

==========================
#librecanada #linux #opensource
Botnet takes advantage of AWS outage to smack 28 countries
Even worse, it might have been a 'test run' for future attacks
www.theregister.com
December 10, 2025 at 9:23 PM
ShadowV2 Botnet Activity Quietly Intensified During AWS Outage - CySecurity News - Latest Information Security and Hacking Incidents https://www.cysecurity.news/2025/12/shadowv2-botnet-activity-quietly.html
December 10, 2025 at 4:24 AM
🔴 AWS down? GitHub having issues?
Now you’ll know in Slack, the moment it happens.
Read more on integrating Slack and StatusGator for outage alerts across your dependencies ➡️ statusgator.com/blog/third-p...
December 9, 2025 at 2:15 PM
ShadowV2 Botnet Activity Quietly Intensified During AWS Outage #AWS #CloudOutageRisks #CyberThreats
A recently discovered wave of malicious activity has raised fresh concerns among cybersecurity analysts: ShadowV2, a fast-evolving malware strain, is quietly assembling a global network of compromised devices. The operation leans heavily on Mirai's source code, is noticeably more deliberate and calculated than earlier variants, and now spans more than 20 countries.

ShadowV2's operators exploit widespread misconfigurations in everyday Internet of Things hardware, an increasingly common weakness in modern digital ecosystems, with the aim of building a resilient, stealthy, and scalable botnet. FortiGuard Labs discovered the campaign during the Amazon Web Services disruption in late October, which the operators appear to have used to cover their activity. The malware's activity spiked during the outage, something investigators interpret as a controlled test run rather than an opportunistic attack.

Across the devices it analyzed, FortiGuard observed ShadowV2 exploiting a wide range of long-known IoT vulnerabilities: DD-WRT (CVE-2009-2765), D-Link (CVE-2020-25506, CVE-2022-37055, CVE-2024-10914, CVE-2024-10915), DigiEver (CVE-2023-52163), TBK (CVE-2024-3721), and TP-Link (CVE-2024-53375). The campaign's reach across industries and geographies, coupled with its precise use of IoT flaws, points to a maturing cybercriminal ecosystem that is increasingly adept at turning consumer-grade technology into a staging ground for sophisticated, coordinated attacks.

Many of the exploited weaknesses have been documented for years, particularly in devices their manufacturers have already retired. Research by NetSecFish identified several vulnerabilities affecting end-of-life D-Link products. The most concerning is CVE-2024-10914, a command-injection flaw in those end-of-life devices; a related issue, CVE-2024-10915, was reported by NetSecFish in November 2024, and when researchers found no advisory for it, D-Link confirmed that the affected devices had reached end of support and would remain unpatched. In response to inquiries, the vendor updated an existing bulletin to include the newly assigned CVE and issued a further announcement tied directly to the ShadowV2 campaign, reminding customers that outdated hardware no longer receives security updates or maintenance.

In the same period, another vulnerability exploited by the botnet, CVE-2024-53375 in TP-Link devices, was reportedly resolved through a beta firmware update. Taken together, these lapses illustrate how aging consumer devices, many of which keep running long after support ends, remain fertile ground for large-scale malicious operations.

Analysis of the campaign shows that ShadowV2's operators use a familiar yet effective distribution chain to spread as widely as possible.
After exploiting one of these vulnerable IoT devices, the attackers download a script named binary.sh from the command server at 81[.]88[.]18[.]108. When executed, the script fetches the ShadowV2 payload; every sample carries the "Shadow" prefix, and the code closely resembles the well-known Mirai offshoot LZRD.

Analysis of the x86-64 build, shadow.x86_64, shows that the malware initializes its configuration and attack routines using a lightweight single-byte XOR encoding (key 0x22) that obscures file system paths, HTTP headers, and User-Agent strings (a minimal illustrative sketch follows below). Once these parameters are decoded, the bot connects to its command-and-control server and waits for instructions to launch distributed denial-of-service attacks. The streamlined design reflects a disciplined, purpose-built approach that makes the malware easy to deploy across diverse hardware without attracting immediate attention. According to Fortinet, the XOR-encoded configuration data and compact binaries underscore how much ShadowV2 shares with the Mirai-derived LZRD strain, and they help it minimize its visibility on compromised systems.

The infection sequence observed across multiple incidents follows a consistent pattern: attackers break into a vulnerable device, download the ShadowV2 payload via 81[.]88[.]18[.]108, and install it. The malware then connects to its command server at silverpath[.]shadowstresser[.]info, joining a distributed network geared toward coordinated attacks. Supporting a range of DDoS techniques over UDP, TCP, and HTTP, the botnet is well suited to high-volume denial-of-service operations, including for-hire DDoS services, criminal extortion, and targeted disruption campaigns.

Researchers suggest ShadowV2's initial activity window may have been chosen deliberately: during a major outage such as the late-October AWS disruption, the traffic irregularities of an early-stage botnet test blend easily into the broader instability of the service. By targeting both consumer-grade and enterprise-grade IoT systems, the operators appear to be building an attack fabric that is flexible, geographically diffuse, and capable of scaling rapidly even against strong defensive measures. While the observed activity was brief, analysts believe it served as a controlled proof of concept that could precede a far more expansive or destructive return during future widespread outages or high-profile international events.

In light of the campaign's implications, Fortinet has urged consumers and organizations to strengthen their defenses before similar operations resurface.
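Before turning to those recommendations, here is a minimal, hypothetical Python sketch of the kind of single-byte XOR decoding the report describes. The 0x22 key comes from the analysis above; the function name and the example string are illustrative assumptions, not material recovered from an actual sample.

# Minimal sketch of decoding single-byte-XOR-obfuscated strings, as
# described in the ShadowV2 analysis (key 0x22). The sample plaintext
# below is illustrative, not taken from a real binary.

KEY = 0x22  # single-byte XOR key reported in the analysis

def xor_decode(data: bytes, key: int = KEY) -> bytes:
    # XOR every byte with the key; XOR is its own inverse, so the same
    # routine both obfuscates and recovers a string.
    return bytes(b ^ key for b in data)

# Example: obfuscate and recover a config-style string such as a
# User-Agent header (hypothetical value).
plain = b"Mozilla/5.0 (compatible)"
obfuscated = xor_decode(plain)      # what would sit in the binary
recovered = xor_decode(obfuscated)  # what an analyst recovers
assert recovered == plain
print(recovered.decode())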
Fortinet's guidance starts with installing the latest firmware on all supported IoT and networking devices, decommissioning any end-of-life D-Link or other vendor hardware, and disabling unnecessary internet-exposed features such as remote management and UPnP. Beyond that, IoT hardware should be isolated on segmented networks, outbound traffic and DNS queries should be monitored for anomalies, and strong, unique passwords should be enforced on every interface of every connected device. Together, these measures shrink the attack surface that has allowed IoT-driven botnets such as ShadowV2 to flourish.

Although ShadowV2's observed activity was confined to the short window of the Amazon Web Services outage, researchers stress that it should serve as a timely reminder of the fragile state of global IoT security: keep internet-connected devices protected, update firmware regularly, and watch network activity for unfamiliar or high-volume traffic patterns that may signal an early compromise. Fortinet has released an extensive set of indicators of compromise to support proactive threat hunting, reinforcing what researcher Li describes as an ongoing reality in cybersecurity: IoT hardware remains one of the most vulnerable entry points for cybercriminals.

Concern deepened when, just days after ShadowV2's suspected test run, Microsoft disclosed that Azure had defended against what it called the largest cloud-based DDoS attack ever recorded. The attack, attributed to the Aisuru botnet, reached an unprecedented 15.72 Tbps and nearly 3.64 billion packets per second; Microsoft reported that its cloud DDoS protection systems absorbed it on October 24 without disrupting customer workloads.

Analysts suggest the timing of the two incidents points to a rapidly intensifying threat landscape in which adversaries are increasingly prepared to launch large-scale attacks with little advance notice. They argue that ShadowV2 is not an isolated event but a preview of what a more volatile era of botnet-driven disruption might look like once the dust settles on these consecutive warning shots.

With aging consumer hardware, incomplete patch ecosystems, and increasingly sophisticated adversaries converging, a single overlooked device can become a launchpad for global-scale attacks. Experts say real resilience will require more than reactive patching: organizations need sustained visibility into their networks, strict asset lifecycle management, and architectures that limit the blast radius of inevitable compromises. Consumers also play a crucial role in starving botnets of vulnerable nodes by replacing unsupported devices, enabling automatic updates, and regularly reviewing router and IoT configurations.
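To make the threat-hunting suggestion concrete, here is a minimal, hypothetical Python sketch that checks a DNS query log against an indicator list seeded with the C2 domain named above. The log format, file name, and matching logic are illustrative assumptions, not Fortinet's published tooling or its full IoC set.

# Hypothetical sketch: flag DNS queries that match known indicators of
# compromise, e.g. the ShadowV2 C2 domain named in the article.
# Log format ("timestamp client_ip queried_name" per line), file path,
# and the indicator list are assumptions for illustration only.

INDICATORS = {
    "silverpath.shadowstresser.info",  # C2 domain from the article (shown defanged in the prose)
}

def find_ioc_hits(log_path: str) -> list[str]:
    hits = []
    with open(log_path, encoding="utf-8") as log:
        for line in log:
            parts = line.split()
            if len(parts) < 3:
                continue  # skip malformed lines
            queried = parts[2].rstrip(".").lower()
            if queried in INDICATORS:
                hits.append(line.strip())
    return hits

if __name__ == "__main__":
    # "dns_queries.log" is a hypothetical resolver log exported for review.
    for hit in find_ioc_hits("dns_queries.log"):
        print("possible ShadowV2 contact:", hit)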
Faced with attackers who are plainly willing to demonstrate their capabilities during moments of widespread disruption, cybersecurity experts warn that proactive preparedness must replace reactive, event-driven response as soon as possible. The ShadowV2 incident, they argue, is a timely reminder that strengthening the foundations of IoT security today is crucial to preventing far more disruptive campaigns from unfolding tomorrow.
dlvr.it
December 9, 2025 at 2:02 PM
Oh good a brand new Cloudflare Reddit AWS outage for the holidays.
December 9, 2025 at 5:15 AM
again?
December 9, 2025 at 5:08 AM
We invite you to join our Webinar “Apply Anycast Best Practices for Resilient & Performant Global Applications” on December 9th, 2:00 PM EST.

REGISTER HERE: www.brighttalk.com/webcast/2088...

#NetActuate #BGP #Anycast #CloudComputing #EdgeComputing #AWS #Outage #IPv6 #DDOS #CDN
December 8, 2025 at 5:19 PM
WhatIs: Network, observability and Kubernetes management news at re:Invent aligned around themes of multi-cloud scale and resilience amid AI growth and cloud outage concerns. "AWS CloudOps hones multi-cloud support for AI, resilience"

www.techtarget.com
December 7, 2025 at 9:49 PM
This AWS outage was Monday, October 20th. It started around 3 am ET, and the internal issue was identified within about 3 hours, around 6 am ET; that's when recovery started.

Many systems were back online by 9:30 am ET, but full recovery took 15 hours. This had a global impact.

This is a major vulnerability.
December 7, 2025 at 9:03 PM
The aws outage where my alarm went off through her, but she wouldn’t stop when I told her. That was a rough way to wake up.
December 6, 2025 at 3:55 PM
“When AWS went down, their share price went up, because people realised how many people are using them. In some ways [the outage] is great marketing, because you see how many people are using Cloudflare.”

Source: www.theguardian.com/technology/2...
December 5, 2025 at 7:25 PM
oh yeah i get the AWS thing but also i was very pleased to discover the recent outage had no bearing on my personal or professional day online, pro level Bezos avoider!
December 5, 2025 at 3:57 PM