AWS News (Unofficial)
@awsnews.bsky.social
I am a bot 🤖

I post about all #AWS service, feature, and region expansion announcements as they are released.

RSS Feeds:
https://aws.amazon.com/new/
https://aws.amazon.com/blogs/aws/

Source Code: https://github.com/thulasirajkomminar/aws-news-bot
Announcing new memory optimized Amazon EC2 X8aedz instances

#AWS #AmazonEc2
AWS announces Amazon EC2 X8aedz, next-generation memory optimized instances powered by 5th Gen AMD EPYC processors (formerly code named Turin). These instances offer the highest maximum CPU frequency in the cloud, 5 GHz, and deliver up to 2x higher compute performance compared to previous-generation X2iezn instances. X8aedz instances are built using the latest sixth-generation AWS Nitro System (https://aws.amazon.com/ec2/nitro/) and are ideal for electronic design automation (EDA) workloads such as physical layout and physical verification jobs, and for relational databases that benefit from high single-threaded processor performance and a large memory footprint. The combination of 5 GHz processors and local NVMe storage enables faster processing of memory-intensive backend EDA workloads such as floor planning, logic placement, clock tree synthesis (CTS), routing, and power/signal integrity analysis.

X8aedz instances feature a 32:1 ratio of memory to vCPU and are available in 8 sizes ranging from 2 to 96 vCPUs with 64 to 3,072 GiB of memory, including two bare metal variants, and up to 8 TB of local NVMe SSD storage.

X8aedz instances are now available in the US West (Oregon) and Asia Pacific (Tokyo) Regions. Customers can purchase X8aedz instances via Savings Plans, On-Demand Instances, and Spot Instances. To get started, sign in to the AWS Management Console. For more information, visit the X8aedz product page (https://aws.amazon.com/ec2/instance-types/x8aedz) or read the AWS News Blog post (https://aws.amazon.com/blogs/aws/introducing-amazon-ec2-x8aedz-instances-powered-by-5th-gen-amd-epyc-processors-for-memory-intensive-workloads/).
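The sizing quoted above is internally consistent: with a 32:1 GiB-to-vCPU ratio, the endpoints of the size range land exactly on the advertised 64 GiB and 3,072 GiB figures. A quick sanity check of that arithmetic:

```python
# Memory scales at 32 GiB per vCPU across the X8aedz family.
RATIO_GIB_PER_VCPU = 32

def memory_gib(vcpus: int) -> int:
    """Expected memory for an X8aedz size with the given vCPU count."""
    return vcpus * RATIO_GIB_PER_VCPU

# Smallest and largest sizes quoted in the announcement.
assert memory_gib(2) == 64
assert memory_gib(96) == 3_072
```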
aws.amazon.com
December 2, 2025 at 9:05 PM
Amazon Bedrock AgentCore Runtime now supports bi-directional streaming

#AWS #AmazonBedrock
Amazon Bedrock AgentCore Runtime now supports bi-directional streaming, enabling real-time conversations where agents listen and respond simultaneously while handling interruptions and context changes mid-conversation. This feature eliminates conversational friction by enabling continuous, two-way communication where context is preserved throughout the interaction.

Traditional agents require users to wait for them to finish responding before providing clarification or corrections, creating stop-start interactions that break conversational flow and feel unnatural, especially in voice applications. Bi-directional streaming addresses this limitation by enabling continuous context handling, helping power voice agents that deliver natural conversational experiences where users can interrupt, clarify, or change direction mid-conversation, while also enhancing text-based interactions through improved responsiveness. Built into AgentCore Runtime, this feature eliminates months of engineering effort required to build real-time streaming capabilities, so developers can focus on building innovative agent experiences rather than managing complex streaming infrastructure.

This feature is available in all nine AWS Regions (https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/agentcore-regions.html) where Amazon Bedrock AgentCore Runtime is available: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), and Europe (Ireland). To learn more about AgentCore Runtime bi-directional streaming, read the blog, visit the AgentCore documentation, and get started with the AgentCore starter toolkit (https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/agentcore-get-started-toolkit.html). With AgentCore Runtime's consumption-based pricing (https://aws.amazon.com/bedrock/agentcore/pricing/), you only pay for resources consumed during agent execution, with no charges for idle time and no upfront costs.
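The interruption-handling pattern described here can be sketched with plain asyncio. This is a conceptual illustration of bi-directional streaming, not the AgentCore API: one coroutine streams the agent's reply token by token while a concurrent "user" coroutine can cut it off mid-response.

```python
import asyncio

async def agent_reply(tokens, interrupt: asyncio.Event, out: list):
    # Stream the reply token by token, checking for an interruption
    # before emitting each one -- the agent "listens while speaking".
    for tok in tokens:
        if interrupt.is_set():
            out.append("<interrupted>")
            return
        out.append(tok)
        await asyncio.sleep(0)  # yield control so the user coroutine can run

async def impatient_user(interrupt: asyncio.Event):
    await asyncio.sleep(0)  # let a token or two through, then cut in
    interrupt.set()

async def conversation() -> list:
    interrupt, out = asyncio.Event(), []
    await asyncio.gather(
        agent_reply(["The", "quarterly", "report", "shows", "..."], interrupt, out),
        impatient_user(interrupt),
    )
    return out

transcript = asyncio.run(conversation())
print(transcript)  # the reply is cut short once the user interrupts
```

In a real voice agent the "tokens" would be audio frames flowing both ways over a single stream, which is exactly the infrastructure this feature provides out of the box.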
aws.amazon.com
December 2, 2025 at 8:05 PM
Amazon CloudWatch GenAI observability now supports Amazon AgentCore Evaluations

#AWS #AmazonCloudwatch
Amazon CloudWatch now enables automated quality assessment of AI agents through AgentCore Evaluations. This new capability helps developers continuously monitor and improve agent performance based on real-world interactions, allowing teams to identify and address quality issues before they impact customers.

AgentCore Evaluations comes with 13 pre-built evaluators covering essential quality dimensions like helpfulness, tool selection, and response accuracy, while also supporting custom model-based scoring systems. You can access unified quality metrics and agent telemetry in CloudWatch dashboards, with end-to-end tracing capabilities to correlate evaluation metrics with prompts and logs. The feature integrates seamlessly with CloudWatch's existing capabilities, including Application Signals, Alarms, Sensitive Data Protection, and Logs Insights. This capability eliminates the need for teams to build and maintain custom evaluation infrastructure, accelerating the deployment of high-quality AI agents. Developers can monitor their entire agent fleet through the AgentCore section in the CloudWatch GenAI observability console.

AgentCore Evaluations is now available in US East (N. Virginia), US West (Oregon), Europe (Frankfurt), and Asia Pacific (Sydney). To get started, visit the AgentCore Evaluations documentation (https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/evaluations.html) and see the pricing details (https://aws.amazon.com/bedrock/agentcore/pricing/). Standard CloudWatch pricing (https://aws.amazon.com/cloudwatch/pricing/) applies for the underlying telemetry data.
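The continuous check this automates can be pictured as a quality gate over recent evaluator scores. The sketch below is a toy illustration, not the AgentCore Evaluations API; the evaluator name and the 0.8 threshold are made up for the example:

```python
def quality_gate(helpfulness_scores: list[float], threshold: float = 0.8) -> bool:
    """Pass when the mean helpfulness score over recent traces clears the bar.

    In practice, evaluator scores like these are produced by AgentCore's
    pre-built or custom evaluators and surfaced as CloudWatch metrics,
    where a CloudWatch Alarm plays the role of this threshold check.
    """
    return sum(helpfulness_scores) / len(helpfulness_scores) >= threshold

print(quality_gate([0.9, 0.85, 0.80]))  # mean 0.85 clears the bar
print(quality_gate([0.5, 0.6]))         # mean 0.55 signals a regression
```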
aws.amazon.com
December 2, 2025 at 8:05 PM
Announcing Amazon EC2 M4 Max Mac instances (Preview)

#AWS #AmazonEc2
Amazon Web Services announces the preview of Amazon EC2 M4 Max Mac instances, powered by the latest Mac Studio hardware. Amazon EC2 M4 Max Mac instances are the next-generation EC2 Mac instances that enable Apple developers to migrate their most demanding build and test workloads onto AWS. These instances are ideal for building and testing applications for Apple platforms such as iOS, macOS, iPadOS, tvOS, watchOS, visionOS, and Safari.

M4 Max Mac instances are powered by the AWS Nitro System, providing up to 10 Gbps of network bandwidth and 8 Gbps of Amazon Elastic Block Store (Amazon EBS) storage bandwidth. These instances are built on Apple M4 Max Mac Studio computers featuring a 16-core CPU, 40-core GPU, 16-core Neural Engine, and 128 GB of unified memory. Compared to EC2 M4 Pro Mac instances, M4 Max instances offer twice the GPU cores and more than 2.5x the unified memory, offering customers more choice to match instance capabilities to their specific workload requirements and further expanding the selection of Apple silicon Mac hardware on AWS.

To learn more or request access to the Amazon EC2 M4 Max Mac instances preview, visit the EC2 Mac instances page (https://aws.amazon.com/ec2/instance-types/mac/).
aws.amazon.com
December 2, 2025 at 8:05 PM
Amazon S3 Tables now offer the Intelligent-Tiering storage class

#AWS #AmazonS3
Amazon S3 Tables now offer the Intelligent-Tiering storage class, which optimizes costs based on access patterns, without performance impact or operational overhead. Intelligent-Tiering automatically transitions data in tables across three low-latency access tiers as access patterns change, reducing storage costs by up to 80%. Additionally, S3 Tables automated maintenance operations such as compaction, snapshot expiration, and unreferenced file removal never cause your data to move back up to a more expensive tier. This helps you keep your tables optimized while saving on storage costs.

With the Intelligent-Tiering storage class, data in tables not accessed for 30 consecutive days automatically transitions to the Infrequent Access tier (40% lower cost than the Frequent Access tier). After 90 days without access, that data transitions to the Archive Instant Access tier (68% lower cost than the Infrequent Access tier). You can now select Intelligent-Tiering as the storage class when you create a table, or set it as the default for all new tables in a table bucket.

The Intelligent-Tiering storage class is available in all AWS Regions where S3 Tables are available (https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-regions-quotas.html). For pricing details, see S3 pricing (https://aws.amazon.com/s3/pricing/). To learn more about S3 Tables, visit the S3 Tables feature page (https://aws.amazon.com/s3/features/tables/), the Intelligent-Tiering documentation (https://docs.aws.amazon.com/AmazonS3/latest/userguide/tables-intelligent-tiering.html), and read the launch post (https://aws.amazon.com/blogs/aws/announcing-replication-support-and-intelligent-tiering-for-amazon-s3-tables).
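The headline "up to 80%" figure follows from chaining the two tier discounts quoted in the announcement. A quick arithmetic check, with rates normalized so the Frequent Access tier costs 1.0:

```python
FREQUENT = 1.00
INFREQUENT = FREQUENT * (1 - 0.40)          # 40% below Frequent Access
ARCHIVE_INSTANT = INFREQUENT * (1 - 0.68)   # 68% below Infrequent Access

savings = 1 - ARCHIVE_INSTANT / FREQUENT
print(f"Archive Instant Access saves {savings:.0%} vs Frequent Access")
# roughly 81%, consistent with the "up to 80%" headline
```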
aws.amazon.com
December 2, 2025 at 7:05 PM
Amazon SageMaker AI announces serverless MLflow capability for faster AI development

#AWS #AmazonSagemaker
Amazon SageMaker AI now offers a serverless MLflow capability that dynamically scales to support AI model development tasks. With MLflow, AI developers can begin tracking, comparing, and evaluating experiments without waiting for infrastructure setup.

As customers across industries accelerate AI development, they require capabilities to track experiments, observe behavior, and evaluate the performance of AI models, applications, and agents. However, managing MLflow infrastructure requires administrators to continuously maintain and scale tracking servers, make complex capacity planning decisions, and deploy separate instances for data isolation. This infrastructure burden diverts resources away from core AI development and creates bottlenecks that impact team productivity and cost effectiveness. With this update, MLflow now scales dynamically to deliver fast performance for demanding and unpredictable model development tasks, then scales down during idle time. Administrators can also enhance productivity by setting up cross-account access via AWS Resource Access Manager (RAM) to simplify collaboration across organizational boundaries.

The serverless MLflow capability on Amazon SageMaker AI is offered at no additional charge and works natively with familiar Amazon SageMaker AI model development capabilities like SageMaker AI JumpStart, SageMaker Model Registry, and SageMaker Pipelines. Customers can access the latest version of MLflow on Amazon SageMaker AI with automatic version updates. Amazon SageMaker AI with MLflow is now available in select AWS Regions. To learn more, see the MLflow documentation (https://docs.aws.amazon.com/sagemaker/latest/dg/mlflow.html) and the launch blog post (https://aws.amazon.com/blogs/aws/accelerate-ai-development-using-amazon-sagemaker-ai-with-serverless-mlflow).
aws.amazon.com
December 2, 2025 at 7:05 PM
Amazon Bedrock AgentCore now includes Policy, Evaluations (preview) and more

#AWS #AmazonBedrock
Today, Amazon Bedrock AgentCore introduces new offerings, including Policy and Evaluations (preview), to give teams the controls and quality assurance they need to confidently scale agent deployment across their organization, transforming agents from prototypes into production solutions.

Policy in AgentCore integrates with AgentCore Gateway to intercept every tool call in real time, ensuring agents stay within defined boundaries without slowing down. Teams can create policies using natural language that automatically convert to Cedar, the AWS open-source policy language, helping development, compliance, and security teams set up, understand, and audit rules without writing custom code. AgentCore Evaluations helps developers test and continuously monitor agent performance based on real-world behavior to improve quality and catch issues before they cause widespread customer impact. Developers can use 13 built-in evaluators for common quality dimensions, such as helpfulness, tool selection, and accuracy, or create custom model-based scoring systems, drastically reducing the effort required to develop evaluation infrastructure. All quality metrics are accessible through a unified dashboard powered by Amazon CloudWatch.

We've also added new features to AgentCore Memory, AgentCore Runtime, and AgentCore Identity to support more advanced agent capabilities. AgentCore Memory now includes episodic memory, enabling agents to learn and adapt from experiences, building knowledge over time to create more humanlike interactions. AgentCore Runtime supports bidirectional streaming for natural conversations where agents simultaneously listen and respond while handling interruptions and context changes mid-conversation, unlocking powerful voice agent use cases. AgentCore Identity now supports custom claims for enhanced authentication rules across multi-tenant environments while maintaining seamless integration with your chosen identity providers.

AgentCore Evaluations is available in preview in four AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (Frankfurt). Policy in AgentCore is available in preview in all AWS Regions where AgentCore is available (https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/agentcore-regions.html). Learn more about the new AgentCore updates through the blog, deep dive using AgentCore resources, and get started with the AgentCore Starter Toolkit. AgentCore offers consumption-based pricing with no upfront costs.
aws.amazon.com
December 2, 2025 at 7:05 PM
Announcing Amazon EC2 Memory optimized X8i instances (Preview)

#AWS #AmazonEc2
Amazon Web Services is announcing the preview of Amazon EC2 X8i, next-generation memory optimized instances. X8i instances are powered by custom Intel Xeon 6 processors delivering the highest performance and fastest memory among comparable Intel processors in the cloud. X8i instances offer 1.5x more memory capacity (up to 6 TB) and up to 3.4x more memory bandwidth compared to previous-generation X2i instances. X8i instances will be SAP-certified and deliver 46% higher SAPS compared to X2i instances for mission-critical SAP workloads.

X8i instances are a great choice for memory-intensive workloads, including in-memory databases and analytics, large-scale traditional databases, and electronic design automation (EDA). X8i instances offer 35% higher performance than X2i instances, with even higher gains for some workloads. To learn more or request access to the X8i instances preview, visit the Amazon EC2 X8i page.
aws.amazon.com
December 2, 2025 at 7:05 PM
Amazon S3 Storage Lens adds performance metrics, support for billions of prefixes, and export to S3 Tables

#AWS #AwsGovcloudUs #AmazonS3
Amazon S3 Storage Lens provides organization-wide visibility into your storage usage and activity to help optimize costs, improve performance, and strengthen data protection. Today, we are adding three new capabilities to S3 Storage Lens that give you deeper insights into your S3 storage usage and application performance: performance metrics that provide insights into how your applications interact with S3 data, analytics for billions of prefixes in your buckets, and metrics export directly to S3 Tables for easier querying and analysis.

We are adding three specific types of performance metrics. Access pattern metrics identify inefficient requests, including those that are too small and create unnecessary network overhead. Request origin metrics, such as cross-Region request counts, show when applications access data across Regions, impacting latency and costs. Object access count metrics reveal when applications frequently read a small subset of objects that could be optimized through caching or moving to high-performance storage.

We are expanding the prefix analytics in S3 Storage Lens to enable analyzing billions of prefixes per bucket, whereas previously metrics were limited to the largest prefixes that met minimum size and depth thresholds. This gives you visibility into storage usage and activity across all your prefixes. Finally, we are making it possible to export metrics directly to managed S3 Tables, making them immediately available for querying with AWS analytics services like Amazon QuickSight and enabling you to join this data with other AWS service data for deeper insights.

To get started, enable performance metrics or expanded prefixes in your S3 Storage Lens advanced metrics dashboard configuration. These capabilities are available in all AWS Regions except the AWS China Regions and AWS GovCloud (US) Regions. You can enable metrics export to managed S3 Tables in both free and advanced dashboard configurations in AWS Regions where S3 Tables are available. To learn more, visit the S3 Storage Lens page (https://aws.amazon.com/s3/storage-lens/), the documentation (https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage_lens.html), and S3 pricing (https://aws.amazon.com/s3/pricing/), and read the AWS News Blog.
aws.amazon.com
December 2, 2025 at 7:05 PM
Announcing the Apache Spark upgrade agent for Amazon EMR

#AWS #AwsGovcloudUs #AwsGlue #AmazonEmr
AWS announces the Apache Spark upgrade agent, a new capability that accelerates Apache Spark version upgrades for Amazon EMR on EC2 and EMR Serverless. Through automated code analysis and transformation, the agent converts complex upgrade processes that typically take months into projects spanning weeks. Organizations invest substantial engineering resources analyzing API changes, resolving conflicts, and validating applications during Spark upgrades. The agent introduces conversational interfaces where engineers express upgrade requirements in natural language, while maintaining full control over code modifications.

The Apache Spark upgrade agent automatically identifies API changes and behavioral modifications across PySpark and Scala applications. Engineers can initiate upgrades directly from SageMaker Unified Studio, the Kiro CLI, or an IDE of their choice via Model Context Protocol (MCP) compatibility. During the upgrade process, the agent analyzes existing code and suggests specific changes, which engineers can review and approve before implementation. The agent validates functional correctness through data quality validations, currently supports upgrades from Spark 2.4 to 3.5, and maintains data processing accuracy throughout the upgrade process.

The Apache Spark upgrade agent is now available in all AWS Regions where SageMaker Unified Studio is available. To start using the agent, visit SageMaker Unified Studio and select IDE Spaces, or install the Kiro CLI. For detailed implementation guidance, reference documentation, and migration examples, visit the Spark upgrade documentation (https://docs.aws.amazon.com/emr/latest/ReleaseGuide/spark-upgrades.html).
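For a flavor of the code analysis involved, the sketch below flags a few Spark idioms deprecated since Spark 2.x that typically get rewritten during a 3.x upgrade (`registerTempTable`, `unionAll`, `SQLContext` are real deprecations; the replacements shown are their documented successors). This is a toy regex scan only; the actual agent performs far deeper, semantics-aware analysis and transformation:

```python
import re

# Deprecated Spark idiom -> upgrade advice (toy illustration only).
SPARK2_IDIOMS = {
    r"\.registerTempTable\(": "replace with .createOrReplaceTempView()",
    r"\.unionAll\(": "replace with .union()",
    r"\bSQLContext\b": "replace with SparkSession.builder",
}

def scan_for_upgrade_findings(source: str) -> list[str]:
    """Return upgrade advice for each deprecated Spark idiom in the source."""
    return [
        advice
        for pattern, advice in SPARK2_IDIOMS.items()
        if re.search(pattern, source)
    ]

legacy_job = "df.registerTempTable('events'); sqlCtx = SQLContext(sc)"
print(scan_for_upgrade_findings(legacy_job))
```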
aws.amazon.com
December 2, 2025 at 7:05 PM
Announcing Amazon EC2 General purpose M8azn instances (Preview)

#AWS #AmazonEc2
Starting today, new general purpose high-frequency, high-network Amazon Elastic Compute Cloud (Amazon EC2) M8azn instances are available in preview. These instances are powered by fifth-generation AMD EPYC (formerly code named Turin) processors, offering the highest maximum CPU frequency in the cloud, 5 GHz. The M8azn instances offer up to 2x the compute performance of previous-generation M5zn instances, and 24% higher performance than M8a instances.

M8azn instances are built on the AWS Nitro System (https://aws.amazon.com/ec2/nitro/), a collection of hardware and software innovations designed by AWS that enables the delivery of efficient, flexible, and secure cloud services with isolated multitenancy, private networking, and fast local storage. These instances are ideal for applications such as gaming, high-performance computing, high-frequency trading (HFT), CI/CD, and simulation modeling for the automotive, aerospace, energy, and telecommunications industries.

To learn more or request access to the M8azn instances preview, visit the M8a instances page (https://aws.amazon.com/ec2/instance-types/m8a).
aws.amazon.com
December 2, 2025 at 7:05 PM
Amazon SageMaker Catalog now exports asset metadata as queryable dataset

#AWS #AmazonSagemaker
Amazon SageMaker Catalog now exports asset metadata as an Apache Iceberg table through Amazon S3 Tables. This allows data teams to query catalog inventory and answer questions such as "How many assets were registered last month?", "Which assets are classified as confidential?", or "Which assets lack business descriptions?" using standard SQL, without building custom ETL infrastructure for reporting.

This capability automatically converts catalog asset metadata into a queryable table accessible from Amazon Athena, SageMaker Unified Studio notebooks, AI agents, and other analytics and BI tools. The exported table includes technical metadata (such as resource_id and resource_type), business metadata (such as asset_name and business_description), ownership details, and timestamps. Data is partitioned by snapshot_date for time travel queries and automatically appears in SageMaker Unified Studio under the aws-sagemaker-catalog bucket.

This capability is available in all AWS Regions where SageMaker Catalog is supported, at no additional charge. You pay only for underlying services, including S3 Tables storage and Amazon Athena queries. You can control storage costs by setting retention policies on the exported tables to automatically remove records older than your specified period.

To get started, activate dataset export using the AWS CLI, then access the asset table through S3 Tables or SageMaker Unified Studio's Data tab within 24 hours. Query using Amazon Athena, Studio notebooks, or connect external BI tools through the S3 Tables Iceberg REST Catalog endpoint. For instructions, see the Amazon SageMaker documentation (https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/export-asset-metadata.html).
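The "which assets lack business descriptions?" question maps to a short Athena query against the exported table. In the sketch below, the table name is a placeholder (check the aws-sagemaker-catalog bucket in your account for the real location); the snapshot_date, asset_name, resource_type, and business_description columns are the ones named in the announcement:

```python
# Hypothetical fully qualified table name -- adjust to your account.
TABLE = "aws_sagemaker_catalog.assets"

def assets_missing_descriptions(snapshot: str) -> str:
    """Athena SQL listing assets with no business description on a snapshot.

    Filtering on the snapshot_date partition keeps the scan (and the
    per-query Athena cost) small.
    """
    return (
        f"SELECT asset_name, resource_type FROM {TABLE} "
        f"WHERE snapshot_date = DATE '{snapshot}' "
        "AND (business_description IS NULL OR business_description = '')"
    )

sql = assets_missing_descriptions("2025-12-01")
print(sql)
```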
aws.amazon.com
December 2, 2025 at 6:05 PM
Amazon API Gateway adds MCP proxy support

#AWS #AmazonApiGateway
Amazon API Gateway now supports Model Context Protocol (MCP) proxy, allowing you to transform your existing REST APIs into MCP-compatible endpoints. This new capability enables organizations to make their APIs accessible to AI agents and MCP clients. Through integration with Amazon Bedrock AgentCore's Gateway service, you can securely convert your REST APIs into agent-compatible tools while enabling intelligent tool discovery through semantic search.

The MCP proxy capability, alongside Bedrock AgentCore Gateway services, delivers three key benefits. First, it enables REST APIs to communicate with AI agents and MCP clients through protocol translation, eliminating the need for application modifications or managing additional infrastructure. Second, it provides comprehensive security through dual authentication: verifying agent identities for inbound requests while managing secure connections to REST APIs for outbound calls. Finally, it enables AI agents to search for and select the REST APIs that best match the prompt context.

To learn about pricing for this feature, see the AgentCore pricing page (https://aws.amazon.com/bedrock/agentcore/pricing/). The Amazon API Gateway MCP proxy capability is available in the nine AWS Regions where Amazon Bedrock AgentCore is available: Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), US East (N. Virginia), US East (Ohio), and US West (Oregon). To get started, visit the documentation (https://docs.aws.amazon.com/apigateway/latest/developerguide/mcp-server.html).
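Under the hood, an MCP client discovers the proxied REST APIs with a standard JSON-RPC 2.0 `tools/list` request, as defined by the Model Context Protocol specification. The sketch below just builds that wire message; the endpoint URL a real client would POST it to comes from your API Gateway deployment:

```python
import json

# The MCP tool-discovery request an MCP client sends to the endpoint
# (JSON-RPC 2.0; the "tools/list" method name is from the MCP spec).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

wire = json.dumps(request)
print(wire)
```

The response enumerates each proxied REST operation as a tool with a name, description, and input schema, which is what lets agents select the API that best matches the prompt context.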
aws.amazon.com
December 2, 2025 at 6:05 PM
AWS previews EC2 C8ine instances

#AWS #AmazonEc2
AWS launches the preview of Amazon EC2 C8ine instances, powered by custom sixth-generation Intel Xeon Scalable processors (Granite Rapids) and the latest AWS Nitro v6 card. These instances are designed specifically for dataplane packet processing workloads.

Amazon EC2 C8ine instance configurations can deliver up to 2.5x higher packet performance per vCPU versus prior-generation C6in instances. They can offer up to 2x higher network bandwidth through internet gateways and up to 3x more Elastic Network Interfaces (ENIs) compared to existing C6in network optimized instances. They are ideal for packet processing workloads requiring high performance at small packet sizes, including security virtual appliances, firewalls, load balancers, DDoS protection systems, and telco 5G UPF applications.

These instances are available in preview upon request through your AWS account team. Connect with your account representatives to sign up.
aws.amazon.com
December 2, 2025 at 6:05 PM
AWS Support transformation: AI-powered operations with the human expertise you trust

#AWS #AwsSupport
AWS Support announces a transformation of its Support portfolio, simplified into three intelligent, experience-driven plans: Business Support+, Enterprise Support, and Unified Operations. Each plan combines the speed and precision of AI with the expertise of AWS engineers, and each higher plan builds on the previous one, adding faster response times, proactive guidance, and smarter operations. The result: reduced engineering burden, stronger reliability and resiliency, and streamlined cloud operations.

Business Support+ delivers 24/7 AI-powered assistance that understands your context, with direct engagement with AWS experts for critical issues within 30 minutes, twice as fast as current plans. Enterprise Support expands on this with designated Technical Account Managers (TAMs) who blend generative AI insights with human judgment to provide strategic operational guidance across resiliency, cost, and efficiency. It also includes AWS Security Incident Response (https://aws.amazon.com/security-incident-response/) at no additional cost, which customers can activate to automate security alert investigation and triage. Unified Operations, the top plan, is designed for mission-critical workloads, offering a global team of designated experts who deliver architecture reviews, guided testing, proactive optimization, and five-minute context-specific response times for critical incidents.

Customers using AWS DevOps Agent (preview) can engage AWS Support with one click from an investigation when needed, giving AWS experts immediate context for faster resolution. AWS DevOps Agent is a frontier agent that resolves and proactively prevents incidents, continuously improving the reliability and performance of applications in AWS, multicloud, and hybrid environments.

Business Support+, Enterprise Support, and Unified Operations are available in all commercial AWS Regions. Existing customers can continue with their current plans or explore the new offerings for enhanced performance and efficiency. To see how AWS blends AI intelligence and human expertise to transform your cloud operations, visit the AWS Support page (https://aws.amazon.com/premiumsupport).
aws.amazon.com
December 2, 2025 at 6:05 PM
AWS Security Agent (Preview): AI agent for proactive app security

#AWS
Today, AWS announces the preview of AWS Security Agent, an AI-powered agent that proactively secures your applications throughout the development lifecycle. AWS Security Agent conducts automated security reviews tailored to your organizational requirements and delivers context-aware penetration testing. By continuously validating security from design to deployment, it helps prevent vulnerabilities early in development across all your environments.

Security teams define organizational security requirements once in the AWS Security Agent console, such as approved encryption libraries, authentication frameworks, and logging standards. AWS Security Agent then automatically validates these requirements throughout development by evaluating architectural documents and code against your defined standards, providing specific guidance when violations are detected. For deployment validation, security teams define their penetration testing scope, and AWS Security Agent develops application context, executes sophisticated attack chains, and discovers and validates vulnerabilities. This delivers consistent security policy enforcement across all teams, scales security reviews to match development velocity, and transforms penetration testing from a periodic bottleneck into an on-demand capability that dramatically reduces risk exposure.

AWS Security Agent (Preview) is currently available in the US East (N. Virginia) Region. Your data remains private: your queries and data are never used to train models, and AWS Security Agent logs API activity to AWS CloudTrail for auditing and compliance. To learn more about AWS Security Agent, visit the product page and read the launch announcement. For technical details and to get started, see the AWS Security Agent documentation.
aws.amazon.com
December 2, 2025 at 6:05 PM
Introducing Amazon Nova 2 Omni in Preview

We are excited to announce Amazon Nova 2 Omni, an all-in-one model for multimodal reasoning and image generation. It is the industry’s first reasoning model that supports text, images, video, and speech inputs while generating both text and ima...

#AWS
Introducing Amazon Nova 2 Omni in Preview
We are excited to announce Amazon Nova 2 Omni, an all-in-one model for multimodal reasoning and image generation. It is the industry’s first reasoning model that supports text, images, video, and speech inputs while generating both text and image outputs. It enables multimodal understanding, image generation and editing using natural language, and speech transcription. Unlike traditional approaches that often force organizations to stitch together various specialized models, each supporting different input and output types, Nova 2 Omni eliminates the complexity of managing multiple AI models. This helps accelerate application development while reducing complexity and costs, enabling developers to tackle diverse tasks from marketing content creation and customer support call transcription to video analysis and documentation with visual aids. The model supports a 1M token context window, 200+ languages for text processing, and 10 languages for speech input. It can generate and edit high-quality images using natural language, enabling character consistency, text rendering within images, and object and background modification. Nova 2 Omni delivers superior speech understanding with native reasoning to transcribe, translate, and summarize multi-speaker conversations. And with flexible reasoning controls for depth and budget, developers can ensure optimal performance, accuracy, and cost management across different use cases. Nova 2 Omni is in preview, with early access available to all Nova Forge customers and to other authorized customers. Please reach out to your AWS account team for access. To learn more about Amazon Nova 2 Omni, read the https://docs.aws.amazon.com/nova/latest/userguide/what-is-nova.html.
aws.amazon.com
December 2, 2025 at 6:05 PM
AWS Lambda announces durable functions for multi-step applications and AI workflows

AWS Lambda announces durable functions, enabling developers to build reliable multi-step applications and AI workflows within the Lambda developer experience. Durable functions automatically ch...

#AWS #AwsLambda
AWS Lambda announces durable functions for multi-step applications and AI workflows
AWS Lambda announces durable functions, enabling developers to build reliable multi-step applications and AI workflows within the Lambda developer experience. Durable functions automatically checkpoint progress, suspend execution for up to one year during long-running tasks, and recover from failures - all without requiring you to manage additional infrastructure or write custom state management and error handling code. Customers use Lambda for the simplicity of its event-driven programming model and built-in integrations. While traditional Lambda functions excel at handling single, short-lived tasks, developers building complex multi-step applications, such as order processing, user onboarding, and AI-assisted workflows, previously needed to implement custom state management logic or integrate with external orchestration services. Lambda durable functions address this gap by extending the Lambda programming model with new operations like "steps" and "waits" that let you checkpoint progress and pause execution without incurring compute charges. The service handles state management, error recovery, and efficient pausing and resuming of long-running tasks, allowing you to focus on your core business logic. Lambda durable functions are generally available in US East (Ohio) with support for Python (versions 3.13 and 3.14) and Node.js (versions 22 and 24) runtimes. For the latest Region availability, visit the AWS Capabilities by Region https://builder.aws.com/build/capabilities. You can activate durable functions for new Python or Node.js based Lambda functions using the AWS Lambda API, AWS Management Console, AWS Command Line Interface (AWS CLI), AWS CloudFormation, AWS Serverless Application Model (AWS SAM), AWS SDK, and AWS Cloud Development Kit (AWS CDK). For more information on durable functions, visit the https://docs.aws.amazon.com/lambda/latest/dg/durable-functions.html and the launch blog post.
To learn about pricing, visit https://aws.amazon.com/lambda/pricing/. 
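The checkpoint-and-replay idea behind "steps" can be sketched in plain Python. This is a conceptual illustration only, not the actual Lambda durable functions SDK: the `checkpoints` dict stands in for the service-managed state, and `run_step` is a hypothetical helper showing how a re-invocation after a failure skips past already-completed work.

```python
# Conceptual sketch of checkpointed "steps": completed results are
# persisted, so a replayed invocation returns saved results instead of
# redoing work. `checkpoints` and `run_step` are illustrative names,
# not the real Lambda durable functions API.
checkpoints: dict[str, object] = {}  # in Lambda, this state is service-managed


def run_step(name, fn):
    """Run fn once; on replay, return the checkpointed result instead."""
    if name in checkpoints:
        return checkpoints[name]      # replay: skip completed work
    result = fn()
    checkpoints[name] = result        # checkpoint before moving on
    return result


def order_handler(order_id):
    payment = run_step("charge", lambda: f"charged:{order_id}")
    shipment = run_step("ship", lambda: f"shipped:{order_id}")
    return {"payment": payment, "shipment": shipment}


first = order_handler("o-123")
replay = order_handler("o-123")  # second run replays from checkpoints
```

On the replay call, neither lambda body runs again; both results come straight from the checkpoint store, which is what lets the real service pause for up to a year and resume without recomputing.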
aws.amazon.com
December 2, 2025 at 6:05 PM
Amazon GuardDuty Extended Threat Detection now supports Amazon EC2 and Amazon ECS

AWS announces further enhancements to Amazon GuardDuty Extended Threat Detection with new capabilities to detect multistage attacks targeting Amazon Elastic Compute Cloud (Amazon EC2) insta...

#AWS #AmazonGuardduty
Amazon GuardDuty Extended Threat Detection now supports Amazon EC2 and Amazon ECS
AWS announces further enhancements to Amazon GuardDuty Extended Threat Detection with new capabilities to detect multistage attacks targeting Amazon Elastic Compute Cloud (Amazon EC2) instances and Amazon Elastic Container Service (Amazon ECS) clusters running on AWS Fargate or Amazon EC2. GuardDuty Extended Threat Detection uses artificial intelligence and machine learning algorithms trained at AWS scale to automatically correlate security signals and detect critical threats. It analyzes multiple security signals across network activity, process runtime behavior, malware execution, and AWS API activity over extended periods to detect sophisticated attack patterns that might otherwise go unnoticed. With this launch, GuardDuty introduces two new critical-severity findings: AttackSequence:EC2/CompromisedInstanceGroup and AttackSequence:ECS/CompromisedCluster. These findings provide attack sequence information, allowing you to spend less time on initial analysis and more time responding to critical threats, minimizing business impact. For example, GuardDuty can identify suspicious processes followed by persistence attempts, crypto-mining activities, and reverse shell creation, representing these related events as a single, critical-severity finding. Each finding includes a detailed summary, events timeline, mapping to MITRE ATT&CK® tactics and techniques, and remediation recommendations. While GuardDuty Extended Threat Detection is automatically enabled for GuardDuty customers at no additional cost, its detection comprehensiveness depends on your enabled GuardDuty protection plans. To improve attack sequence coverage and threat analysis of Amazon EC2 instances, enable Runtime Monitoring for EC2. To enable detection of compromised ECS clusters, enable Runtime Monitoring for Fargate or EC2 depending on your infrastructure type. To get started, enable GuardDuty protection plans via the Console or API. 
New GuardDuty customers can start with a https://portal.aws.amazon.com/billing/signup?pg=guarddutyprice&cta=herobtn&redirect_url=https%3A%2F%2Faws.amazon.com%2Fregistration-confirmation, and existing customers who haven't used Runtime Monitoring can also try it free for 30 days. For additional information, visit the blog post and https://aws.amazon.com/guardduty/.
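To surface only the two new attack-sequence finding types, you can filter by finding type. The sketch below builds a finding-criteria filter in the shape that boto3's `guardduty.list_findings` expects (`FindingCriteria` / `Criterion` / `Eq`); it is constructed locally here, and the detector ID you would pass alongside it is your own.

```python
# Sketch: a filter for the two new critical-severity attack-sequence
# finding types. Pass it to boto3 as:
#   guardduty.list_findings(DetectorId=..., FindingCriteria=finding_criteria)
attack_sequence_types = [
    "AttackSequence:EC2/CompromisedInstanceGroup",
    "AttackSequence:ECS/CompromisedCluster",
]

finding_criteria = {
    "Criterion": {
        "type": {"Eq": attack_sequence_types},  # exact-match on finding type
    }
}
```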
aws.amazon.com
December 2, 2025 at 6:05 PM
Amazon S3 Batch Operations introduces performance improvements

Amazon S3 Batch Operations now completes jobs up to 10x faster at a scale of up to 20 billion objects in a job, helping you accelerate large-scale storage operations.

With S3 Batch Operations, yo...

#AWS #AwsGovcloudUs #AmazonS3
Amazon S3 Batch Operations introduces performance improvements
Amazon S3 Batch Operations now completes jobs up to 10x faster at a scale of up to 20 billion objects in a job, helping you accelerate large-scale storage operations. With S3 Batch Operations, you can perform operations at scale such as copying objects between staging and production buckets, tagging objects for S3 Lifecycle management, or computing object checksums to verify the content of stored datasets. S3 Batch Operations now pre-processes objects, executes jobs, and generates completion reports up to 10x faster for jobs processing millions of objects with no additional configuration or cost. To get started, create a job in the AWS Management Console and specify operation type as well as filters like bucket, prefix, or creation date. S3 automatically generates the object list, creates an AWS Identity and Access Management (IAM) role with permission policies as needed, then initiates the job. S3 Batch Operations performance improvements are available in all AWS Regions, except for AWS China Regions and AWS GovCloud (US) Regions. For pricing information, please visit the Management & Insights tab of the https://aws.amazon.com/s3/pricing/. To learn more about S3 Batch Operations, visit the https://aws.amazon.com/s3/features/batch-operations/ and https://docs.aws.amazon.com/AmazonS3/latest/userguide/batch-ops.html.
aws.amazon.com
December 2, 2025 at 6:05 PM
Amazon CloudWatch launches unified management and analytics for operational, security, and compliance data

Amazon CloudWatch now provides new data management and analytics capabilities that allow you to unify operational, security, and compliance data across your AWS en...

#AWS #AmazonCloudwatch
Amazon CloudWatch launches unified management and analytics for operational, security, and compliance data
Amazon CloudWatch now provides new data management and analytics capabilities that allow you to unify operational, security, and compliance data across your AWS environment and third-party sources. DevOps teams, security analysts, and compliance officers can now access all their data in a single place, eliminating the need to maintain multiple separate data stores and complex extract-transform-load (ETL) pipelines. CloudWatch now offers greater flexibility in where and how customers gain insights into this data, either natively in CloudWatch or with any Apache Iceberg-compatible tool. With the unified data store enhancements, customers can now easily collect and aggregate logs across AWS accounts and Regions aligned to geographic boundaries, business units, or persona-specific requirements. With AWS Organization-wide enablement for AWS sources such as AWS CloudTrail, Amazon VPC, and AWS WAF, and managed collectors for third-party sources such as CrowdStrike, Okta, and Palo Alto Networks, CloudWatch makes it easy to bring more of your logs together. Customers can use pipelines to transform and enrich their logs to standard formats such as Open Cybersecurity Schema Framework (OCSF) for security analytics, and define facets to accelerate insights on their data. Customers can make their data available in managed Amazon S3 Tables at no additional storage charge, enabling teams to query data in Amazon SageMaker Unified Studio, Amazon Quick Suite, Amazon Athena, Amazon Redshift, or any Apache Iceberg-compatible analytics tool. To get started, visit the Ingestion page in the CloudWatch console and add one or more data sources. To learn more about Amazon CloudWatch unified data store, visit the product page, pricing page, and https://docs.aws.amazon.com/cloudwatch/. For Regional availability, visit the https://builder.aws.com/build/capabilities.
aws.amazon.com
December 2, 2025 at 6:05 PM
Amazon OpenSearch Service adds GPU-accelerated and auto-optimized vector indexes

You can now build billion-scale vector databases in under an hour on Amazon OpenSearch Service with GPU-acceleration, and auto-optimize vector indexes for optimal trade-offs between ...

#AWS #AmazonOpensearchService
Amazon OpenSearch Service adds GPU-accelerated and auto-optimized vector indexes
You can now build billion-scale vector databases in under an hour on Amazon OpenSearch Service with GPU acceleration, and auto-optimize vector indexes for the best trade-offs between search quality, speed, and cost. Previously, large-scale vector indexes took days to build, and optimizing them required experts to spend weeks on manual tuning. The time, cost, and effort slowed innovation, and customers often forwent cost and performance optimizations. You can now run serverless auto-optimize jobs to generate optimization recommendations. You simply specify search latency and recall requirements, and these jobs will evaluate index configurations (k-NN algorithms, quantization, and engine settings) automatically. Then, you can use vector GPU acceleration to build an optimized index up to 10x faster at a quarter of the indexing cost. Serverless GPUs dynamically activate and accelerate your domain or collection, so you’re only billed when you benefit from speed boosts, all without you managing GPU instances. These capabilities help you scale AI applications including semantic search, recommendation engines, and agentic systems more efficiently. By simplifying and accelerating the time to build large-scale, optimized vector databases, your team will be empowered to innovate faster. Vector GPU acceleration is available for vector collections and OpenSearch 3.1+ domains in US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), Europe (Ireland), and Asia Pacific (Tokyo) Regions. Vector auto-optimize (https://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-auto-optimize.html) is available for vector collections and OpenSearch 2.17+ domains in US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), and Europe (Ireland) Regions.
Learn more.
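The "index configurations" that auto-optimize jobs evaluate are the settings a k-NN vector index is created with. The sketch below shows one such index body in the standard OpenSearch `knn_vector` mapping shape; the dimension and HNSW parameters are illustrative defaults, not recommendations produced by an actual auto-optimize job.

```python
# Sketch: a k-NN vector index body of the kind auto-optimize jobs tune
# (algorithm, engine, and graph parameters). Values are illustrative.
vector_index_body = {
    "settings": {"index": {"knn": True}},     # enable k-NN on the index
    "mappings": {
        "properties": {
            "embedding": {
                "type": "knn_vector",
                "dimension": 768,             # must match your embedding model
                "method": {
                    "name": "hnsw",           # graph-based ANN algorithm
                    "engine": "faiss",
                    "parameters": {"ef_construction": 128, "m": 16},
                },
            }
        }
    },
}
```

An auto-optimize job would, in effect, search over alternatives to these values (different engines, quantization schemes, `ef_construction`/`m` settings) against your stated latency and recall targets.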
aws.amazon.com
December 2, 2025 at 6:05 PM
Amazon S3 Tables now support automatic replication of Apache Iceberg tables

Amazon S3 Tables now support automatic replication of Apache Iceberg tables across AWS Regions and accounts. This new capability replicates your complete table structure, including all snapshots and met...

#AWS #AmazonS3
Amazon S3 Tables now support automatic replication of Apache Iceberg tables
Amazon S3 Tables now support automatic replication of Apache Iceberg tables across AWS Regions and accounts. This new capability replicates your complete table structure, including all snapshots and metadata, to reduce query latency and improve data accessibility for global analytics workloads. S3 Tables replication automatically creates read-only replica tables in your destination table buckets, backfills them with the latest state of the source table, and continuously monitors for new updates to keep replicas in sync. Replica tables can be configured with independent snapshot retention policies and encryption keys from source tables to meet compliance and data protection requirements. You can query replica tables using Amazon SageMaker Unified Studio or any Iceberg-compatible engine including Amazon Athena, Amazon Redshift, Apache Spark, and DuckDB. S3 Tables replication is now available in all https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-regions-quotas.html. For pricing details, visit the https://aws.amazon.com/s3/pricing/. To learn more about S3 Tables, visit the https://aws.amazon.com/s3/features/tables/, https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-replication-tables.html, and read the AWS News Blog.
aws.amazon.com
December 2, 2025 at 6:05 PM
Amazon EMR Serverless eliminates local storage provisioning for Apache Spark workloads

Amazon EMR Serverless now offers serverless storage that eliminates local storage provisioning for Apache Spark workloads, reducing data processing costs by up to 20% and prev...

#AWS #AmazonEmr #AwsGovcloudUs
Amazon EMR Serverless eliminates local storage provisioning for Apache Spark workloads
Amazon EMR Serverless now offers serverless storage that eliminates local storage provisioning for Apache Spark workloads, reducing data processing costs by up to 20% and preventing job failures from disk capacity constraints. You no longer need to configure local disk type and size for each application. EMR Serverless automatically handles intermediate data operations such as shuffle with no local storage charges. You pay only for the compute and memory resources your job consumes. EMR Serverless offloads intermediate data operations to fully managed, auto-scaling serverless storage that encrypts data in transit and at rest with job-level isolation. Serverless storage decouples storage from compute, allowing Spark to release workers immediately when idle rather than keeping workers active to preserve temporary data. It eliminates job failures from insufficient disk capacity and reduces costs by avoiding idle worker charges. This is particularly valuable for jobs using dynamic resource allocation, such as recommendation engines processing millions of customer interactions, where initial stages process large datasets with high parallelism then narrow as data aggregates. This feature is generally available for EMR release 7.12 and later. See https://docs.aws.amazon.com/emr/latest/EMR-Serverless-UserGuide/jobs-serverless-storage.html#jobs-serverless-storage-regions for Regional availability. To get started, see the serverless storage for EMR Serverless documentation.
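"No local storage provisioning" shows up in the job-submission request by what is absent. The sketch below builds `start_job_run` parameters for a Spark job in the shape boto3's `emr-serverless` client expects, with no per-worker disk sizing configured at all; the application ID, role ARN, and S3 path are placeholders, and the dict would be passed to `boto3.client("emr-serverless").start_job_run(**start_job_run_params)`.

```python
# Sketch: start_job_run parameters with no local-disk sizing. With
# serverless storage, shuffle spill is handled by the service, so no
# disk-size configuration appears anywhere in the request.
# All identifiers below are placeholders.
start_job_run_params = {
    "applicationId": "00f1abcdexample",                            # placeholder
    "executionRoleArn": "arn:aws:iam::111122223333:role/emr-job",  # placeholder
    "jobDriver": {
        "sparkSubmit": {
            "entryPoint": "s3://my-bucket/jobs/etl.py",            # placeholder
            # dynamic allocation pairs well with decoupled storage:
            # idle workers can be released without losing shuffle data
            "sparkSubmitParameters": "--conf spark.dynamicAllocation.enabled=true",
        }
    },
}
```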
aws.amazon.com
December 2, 2025 at 6:05 PM
Announcing Database Savings Plans with up to 35% savings

Today, AWS announces Database Savings Plans, a new flexible pricing model that helps you save up to 35% in exchange for a commitment to a consistent amount of usage (measured in $/hour) over a one-year ter...

#AWS #CloudFinancialManagement
Announcing Database Savings Plans with up to 35% savings
Today, AWS announces Database Savings Plans, a new flexible pricing model that helps you save up to 35% in exchange for a commitment to a consistent amount of usage (measured in $/hour) over a one-year term with no upfront payment. Database Savings Plans automatically apply to eligible serverless and provisioned instance usage regardless of supported engine, instance family, size, deployment option, or AWS Region. For example, with Database Savings Plans, you can change between Aurora db.r7g and db.r8g instances, shift a workload from Europe (Ireland) to US East (Ohio), modernize from Amazon RDS for Oracle to Amazon Aurora PostgreSQL or from RDS to Amazon DynamoDB, and still benefit from the discounted pricing offered by Database Savings Plans. Database Savings Plans will be available starting today in all AWS Regions, except China Regions, with support for Amazon Aurora, Amazon RDS, Amazon DynamoDB, Amazon ElastiCache, Amazon DocumentDB (with MongoDB compatibility), Amazon Neptune, Amazon Keyspaces (for Apache Cassandra), Amazon Timestream, and AWS Database Migration Service (DMS). You can get started with Database Savings Plans from the AWS Billing and Cost Management Console or by using the AWS CLI. To realize the largest savings, you can make a commitment to Savings Plans by using purchase recommendations provided in the console. For a more customized analysis, you can use the Savings Plans Purchase Analyzer to estimate potential cost savings for custom purchase scenarios. For more information, visit the Database Savings Plans pricing page and the AWS Savings Plans FAQs.
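The headline number is straightforward to put in dollar terms. The sketch below works through a hypothetical workload with a steady $10/hour On-Demand database spend at the maximum 35% discount; the rate is illustrative only, since actual discounts vary by service and usage.

```python
# Back-of-envelope savings arithmetic for a hypothetical steady workload.
# The On-Demand rate and the maximum 35% discount are illustrative.
on_demand_hourly = 10.00     # $/hour On-Demand spend (hypothetical)
max_discount = 0.35          # "up to 35%" headline discount

effective_hourly = on_demand_hourly * (1 - max_discount)        # $6.50/hour
annual_savings = (on_demand_hourly - effective_hourly) * 24 * 365
```

At these assumed rates the commitment would be $6.50/hour, saving $3.50/hour, or $30,660 over the one-year term.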
aws.amazon.com
December 2, 2025 at 6:05 PM