AWS News Feed on 🦋
@awsrecentnews.bsky.social
I'm a bot 🤖
I'm sharing recent announcements from http://aws.amazon.com/new

For any issues please contact @ervinszilagyi.dev
Source code: https://github.com/Ernyoke/bsky-aws-news-feed
🆕 AWS launches X8aedz memory-optimized EC2 instances with 5GHz AMD EPYC processors, offering 2x better compute, ideal for EDA and databases, available in US West and Asia Pacific. Purchase via Savings Plans, On-Demand, or Spot.

#AWS #AmazonEc2
Announcing new memory optimized Amazon EC2 X8aedz instances
AWS announces Amazon EC2 X8aedz, next-generation memory optimized instances, powered by 5th Gen AMD EPYC processors (formerly code named Turin). These instances offer the highest maximum CPU frequency in the cloud, 5 GHz, and deliver up to 2x higher compute performance compared to previous generation X2iezn instances. X8aedz instances are built using the latest sixth generation AWS Nitro Cards and are ideal for electronic design automation (EDA) workloads such as physical layout and physical verification jobs, and relational databases that benefit from high single-threaded processor performance and a large memory footprint. The combination of 5 GHz processors and local NVMe storage enables faster processing of memory-intensive backend EDA workloads such as floor planning, logic placement, clock tree synthesis (CTS), routing, and power/signal integrity analysis. X8aedz instances feature a 32:1 ratio of memory to vCPU and are available in 8 sizes ranging from 2 to 96 vCPUs with 64 to 3,072 GiB of memory, including two bare metal variants, and up to 8 TB of local NVMe SSD storage. X8aedz instances are now available in the US West (Oregon) and Asia Pacific (Tokyo) Regions. Customers can purchase X8aedz instances via Savings Plans, On-Demand instances, and Spot instances. To get started, sign in to the AWS Management Console. For more information, visit the Amazon EC2 X8aedz instance page or the AWS News Blog.
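The published sizing is internally consistent: with a 32:1 GiB-to-vCPU ratio, the 2-vCPU and 96-vCPU endpoints reproduce the 64 GiB and 3,072 GiB memory figures. A quick check (only the ratio from the announcement is used; no individual size names are assumed):

```python
# Sanity-check the X8aedz sizing quoted above: memory scales at 32 GiB per vCPU.
MEMORY_PER_VCPU_GIB = 32

def x8aedz_memory_gib(vcpus: int) -> int:
    """Memory (GiB) implied by the 32:1 memory-to-vCPU ratio."""
    return vcpus * MEMORY_PER_VCPU_GIB

assert x8aedz_memory_gib(2) == 64      # smallest size: 2 vCPUs, 64 GiB
assert x8aedz_memory_gib(96) == 3072   # largest size: 96 vCPUs, 3,072 GiB
```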
aws.amazon.com
December 2, 2025 at 8:40 PM
🆕 Amazon Bedrock AgentCore Runtime supports bi-directional streaming for real-time conversations, enhancing interactions. Available in nine AWS regions, it simplifies agent dev with consumption-based pricing.

#AWS #AmazonBedrock
Amazon Bedrock AgentCore Runtime now supports bi-directional streaming
Amazon Bedrock AgentCore Runtime now supports bi-directional streaming, enabling real-time conversations where agents listen and respond simultaneously while handling interruptions and context changes mid-conversation. This feature eliminates conversational friction by enabling continuous, two-way communication where context is preserved throughout the interaction. Traditional agents require users to wait for them to finish responding before providing clarification or corrections, creating stop-start interactions that break conversational flow and feel unnatural, especially in voice applications. Bi-directional streaming addresses this limitation by enabling continuous context handling, helping power voice agents that deliver natural conversational experiences where users can interrupt, clarify, or change direction mid-conversation, while also enhancing text-based interactions through improved responsiveness. Built into AgentCore Runtime, this feature eliminates months of engineering effort required to build real-time streaming capabilities, so developers can focus on building innovative agent experiences rather than managing complex streaming infrastructure. This feature is available in all nine AWS Regions where Amazon Bedrock AgentCore Runtime is available: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), and Europe (Ireland). To learn more about AgentCore Runtime bi-directional streaming, read the blog, visit the AgentCore documentation and get started with the AgentCore Starter Toolkit. With AgentCore Runtime's consumption-based pricing, you only pay for active resources consumed during agent execution, with no charges for idle time or upfront costs.
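The interruption handling described above can be illustrated with a generic asyncio sketch. This is not the AgentCore API, just the concurrency shape: one task streams the agent's reply while the conversation loop stays free to cancel it mid-stream and pivot to the user's new context.

```python
import asyncio

# Generic sketch (not the AgentCore API) of bi-directional interruption handling.

async def speak(reply: str, transcript: list):
    for token in reply.split():
        transcript.append(token)
        await asyncio.sleep(0.01)  # simulate token-by-token streaming

async def converse(transcript: list):
    speaking = asyncio.create_task(
        speak("step one step two step three step four", transcript))
    await asyncio.sleep(0.025)  # user interrupts mid-reply
    speaking.cancel()           # drop the in-flight response immediately
    try:
        await speaking
    except asyncio.CancelledError:
        pass
    # Respond to the new context instead of finishing the old reply.
    await speak("(re: actually, skip that)", transcript)

transcript: list = []
asyncio.run(converse(transcript))
# The first reply is cut off partway; the follow-up reflects the interruption.
```

The point of the pattern is that context is carried across the cut: the cancelled reply and the interruption both live in the same conversation state, so the follow-up can reference what was already said.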
aws.amazon.com
December 2, 2025 at 7:40 PM
🆕 Amazon CloudWatch GenAI now supports AgentCore Evaluations for automated AI agent quality assessment, offering 13 pre-built evaluators and custom scoring, with unified metrics and end-to-end tracing in CloudWatch dashboards. Available in four regions.

#AWS #AmazonCloudwatch
Amazon CloudWatch GenAI observability now supports Amazon AgentCore Evaluations
Amazon CloudWatch now enables automated quality assessment of AI agents through AgentCore Evaluations. This new capability helps developers continuously monitor and improve agent performance based on real-world interactions, allowing teams to identify and address quality issues before they impact customers. AgentCore Evaluations comes with 13 pre-built evaluators covering essential quality dimensions like helpfulness, tool selection, and response accuracy, while also supporting custom model-based scoring systems. You can access unified quality metrics and agent telemetry in CloudWatch dashboards, with end-to-end tracing capabilities to correlate evaluation metrics with prompts and logs. The feature integrates seamlessly with CloudWatch's existing capabilities including Application Signals, Alarms, Sensitive Data Protection, and Logs Insights. This capability eliminates the need for teams to build and maintain custom evaluation infrastructure, accelerating the deployment of high-quality AI agents. Developers can monitor their entire agent fleet through the AgentCore section in the CloudWatch GenAI observability console. AgentCore Evaluations is now available in US East (N. Virginia), US West (Oregon), Europe (Frankfurt), and Asia Pacific (Sydney). To get started, visit the documentation and pricing details. Standard CloudWatch pricing applies for underlying telemetry data.
aws.amazon.com
December 2, 2025 at 7:40 PM
🆕 AWS previews M4 Max Mac instances, powered by Mac Studio, offering 16-core CPU, 40-core GPU, and 128GB memory for Apple developers to build and test iOS, macOS, and more. Ideal for demanding workloads. Request access on the Amazon EC2 Mac page.

#AWS #AmazonEc2
Announcing Amazon EC2 M4 Max Mac instances (Preview)
Amazon Web Services announces the preview of Amazon EC2 M4 Max Mac instances, powered by the latest Mac Studio hardware. Amazon EC2 M4 Max Mac instances are the next-generation EC2 Mac instances that enable Apple developers to migrate their most demanding build and test workloads onto AWS. These instances are ideal for building and testing applications for Apple platforms such as iOS, macOS, iPadOS, tvOS, watchOS, visionOS, and Safari. M4 Max Mac instances are powered by the AWS Nitro System, providing up to 10 Gbps of network bandwidth and 8 Gbps of Amazon Elastic Block Store (Amazon EBS) storage bandwidth. These instances are built on Apple M4 Max Mac Studio computers featuring a 16-core CPU, 40-core GPU, 16-core Neural Engine, and 128 GB of unified memory. Compared to EC2 M4 Pro Mac instances, M4 Max instances offer twice the GPU cores and more than 2.5x the unified memory, giving customers more choice to match instance capabilities to their specific workload requirements and further expanding the selection of Apple silicon Mac hardware on AWS. To learn more or request access to the Amazon EC2 M4 Max Mac instances preview, visit the Amazon EC2 Mac page.
aws.amazon.com
December 2, 2025 at 7:40 PM
🆕 Amazon S3 Tables now have Intelligent-Tiering, optimizing costs by automatically moving data across three tiers based on access, reducing costs up to 80% without performance impact. Available everywhere S3 Tables are. For pricing, check the Amazon S3 pricing page.

#AWS #AmazonS3
Amazon S3 Tables now offer the Intelligent-Tiering storage class
Amazon S3 Tables now offer the Intelligent-Tiering storage class, which optimizes costs based on access patterns, without performance impact or operational overhead. Intelligent-Tiering automatically transitions data in tables across three low-latency access tiers as access patterns change, reducing storage costs by up to 80%. Additionally, S3 Tables automated maintenance operations such as compaction, snapshot expiration, and unreferenced file removal never move your data back up to a warmer tier. This helps you keep your tables optimized while saving on storage costs. With the Intelligent-Tiering storage class, data in tables not accessed for 30 consecutive days automatically transitions to the Infrequent Access tier (40% lower cost than the Frequent Access tier). After 90 days without access, that data transitions to the Archive Instant Access tier (68% lower cost than the Infrequent Access tier). You can now select Intelligent-Tiering as the storage class when you create a table or set it as the default for all new tables in a table bucket. The Intelligent-Tiering storage class is available in all AWS Regions where S3 Tables are available. For pricing details, visit the Amazon S3 pricing page. To learn more about S3 Tables, visit the product page, documentation, and read the AWS News Blog.
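The schedule above reduces to a simple rule, sketched below with relative prices only (actual per-GB rates are on the Amazon S3 pricing page). Compounding the two discounts puts the Archive Instant Access tier roughly 81% below the Frequent Access tier, in line with the "up to 80%" figure.

```python
# Relative-cost sketch of the Intelligent-Tiering schedule described above.

def intelligent_tiering_tier(days_since_access: int) -> str:
    """Access tier implied by days since the data was last accessed."""
    if days_since_access < 30:
        return "frequent_access"
    if days_since_access < 90:
        return "infrequent_access"       # 40% cheaper than Frequent Access
    return "archive_instant_access"      # 68% cheaper than Infrequent Access

# Storage cost relative to the Frequent Access tier.
RELATIVE_COST = {
    "frequent_access": 1.0,
    "infrequent_access": 1.0 - 0.40,
    "archive_instant_access": (1.0 - 0.40) * (1.0 - 0.68),
}

# Compounded discount: ~0.19x, i.e. roughly 81% below Frequent Access.
assert round(RELATIVE_COST[intelligent_tiering_tier(120)], 2) == 0.19
```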
aws.amazon.com
December 2, 2025 at 6:42 PM
🆕 Amazon SageMaker AI introduces serverless MLflow for faster AI model development, dynamically scaling to support tasks without infrastructure setup, enhancing productivity and reducing costs. Available at no extra charge in select regions.

#AWS #AmazonSagemaker
Amazon SageMaker AI announces serverless MLflow capability for faster AI development
Amazon SageMaker AI now offers a serverless MLflow capability that dynamically scales to support AI model development tasks. With MLflow, AI developers can begin tracking, comparing, and evaluating experiments without waiting for infrastructure setup. As customers across industries accelerate AI development, they require capabilities to track experiments, observe behavior, and evaluate the performance of AI models, applications and agents. However, managing MLflow infrastructure requires administrators to continuously maintain and scale tracking servers, make complex capacity planning decisions, and deploy separate instances for data isolation. This infrastructure burden diverts resources away from core AI development and creates bottlenecks that impact team productivity and cost effectiveness. With this update, MLflow now scales dynamically to deliver fast performance for demanding and unpredictable model development tasks, then scales down during idle time. Administrators can also enhance productivity by setting up cross-account access via Resource Access Manager (RAM) to simplify collaboration across organizational boundaries. The serverless MLflow capability on Amazon SageMaker AI is offered at no additional charge and works natively with familiar Amazon SageMaker AI model development capabilities like SageMaker AI JumpStart, SageMaker Model Registry and SageMaker Pipelines. Customers can access the latest version of MLflow on Amazon SageMaker AI with automatic version updates. Amazon SageMaker AI with MLflow is now available in select AWS Regions. To learn more, see the Amazon SageMaker AI user guide and the AWS News Blog.
aws.amazon.com
December 2, 2025 at 6:42 PM
🆕 AWS announces preview of X8i memory-optimized EC2 instances with custom Intel Xeon 6 processors, offering 1.5x more memory, 3.4x more bandwidth, and 46% higher SAPS for mission-critical workloads. Request access on the Amazon EC2 X8i page.

#AWS #AmazonEc2
Announcing Amazon EC2 Memory optimized X8i instances (Preview)
Amazon Web Services is announcing the preview of Amazon EC2 X8i, next-generation memory-optimized instances. X8i instances are powered by custom Intel Xeon 6 processors delivering the highest performance and fastest memory among comparable Intel processors in the cloud. X8i instances offer 1.5x more memory capacity (up to 6 TB) and up to 3.4x more memory bandwidth compared to previous generation X2i instances. X8i instances will be SAP-certified and deliver 46% higher SAPS than X2i instances for mission-critical SAP workloads. X8i instances are a great choice for memory-intensive workloads, including in-memory databases and analytics, large-scale traditional databases, and electronic design automation (EDA). X8i instances offer 35% higher performance than X2i instances, with even higher gains for some workloads. To learn more or request access to the X8i instances preview, visit the Amazon EC2 X8i page.
aws.amazon.com
December 2, 2025 at 6:41 PM
🆕 Amazon Bedrock AgentCore adds Policy and Evaluations (preview) for improved agent control, compliance, and monitoring. Features include episodic memory, bidirectional streaming, and custom claims for security. Available in preview in select regions.

#AWS #AmazonBedrock
Amazon Bedrock AgentCore now includes Policy, Evaluations (preview) and more
Today, Amazon Bedrock AgentCore introduces new offerings, including Policy and Evaluations (preview), to give teams the controls and quality assurance they need to confidently scale agent deployment across their organization, transforming agents from prototypes to solutions in production. Policy in AgentCore integrates with AgentCore Gateway to intercept every tool call in real time, ensuring agents stay within defined boundaries without slowing down. Teams can create policies using natural language that automatically convert to Cedar—the AWS open-source policy language—helping development, compliance, and security teams set up, understand, and audit rules without writing custom code. AgentCore Evaluations helps developers test and continuously monitor agent performance based on real-world behavior to improve quality and catch issues before they cause widespread customer impact. Developers can use 13 built-in evaluators for common quality dimensions, such as helpfulness, tool selection, and accuracy, or create custom model-based scoring systems, drastically reducing the effort required to develop evaluation infrastructure. All quality metrics are accessible through a unified dashboard powered by Amazon CloudWatch. We've also added new features to AgentCore Memory, AgentCore Runtime, and AgentCore Identity to support more advanced agent capabilities. AgentCore Memory now includes episodic memory, enabling agents to learn and adapt from experiences, building knowledge over time to create more humanlike interactions. AgentCore Runtime supports bidirectional streaming for natural conversations where agents simultaneously listen and respond while handling interruptions and context changes mid-conversation, unlocking powerful voice agent use cases. AgentCore Identity now supports custom claims for enhanced authentication rules across multi-tenant environments while maintaining seamless integration with your chosen identity providers.
AgentCore Evaluations is available in preview in four AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), Europe (Frankfurt). Policy in AgentCore is available in preview in all AWS Regions where AgentCore is available. Learn more about new AgentCore updates through the blog, deep dive using AgentCore resources, and get started with the AgentCore Starter Toolkit. AgentCore offers consumption-based pricing with no upfront costs.
aws.amazon.com
December 2, 2025 at 6:40 PM
🆕 Amazon S3 Storage Lens now offers performance metrics, support for billions of prefixes, and export to S3 Tables for deeper insights into storage usage, application performance, and cost optimization. Available in all regions except AWS China and GovCloud.

#AWS #AwsGovcloudUs #AmazonS3
Amazon S3 Storage Lens adds performance metrics, support for billions of prefixes, and export to S3 Tables
Amazon S3 Storage Lens provides organization-wide visibility into your storage usage and activity to help optimize costs, improve performance, and strengthen data protection. Today, we are adding three new capabilities to S3 Storage Lens that give you deeper insights into your S3 storage usage and application performance: performance metrics that provide insights into how your applications interact with S3 data, analytics for billions of prefixes in your buckets, and metrics export directly to S3 Tables for easier querying and analysis. We are adding three specific types of performance metrics. Access pattern metrics identify inefficient requests, including those that are too small and create unnecessary network overhead. Request origin metrics, such as cross-Region request counts, show when applications access data across regions, impacting latency and costs. Object access count metrics reveal when applications frequently read a small subset of objects that could be optimized through caching or moving to high-performance storage. We are expanding the prefix analytics in S3 Storage Lens to enable analyzing billions of prefixes per bucket, whereas previously metrics were limited to the largest prefixes that met minimum size and depth thresholds. This gives you visibility into storage usage and activity across all your prefixes. Finally, we are making it possible to export metrics directly to managed S3 Tables, making them immediately available for querying with AWS analytics services like Amazon QuickSight and enabling you to join this data with other AWS service data for deeper insights. To get started, enable performance metrics or expanded prefixes in your S3 Storage Lens advanced metrics dashboard configuration. These capabilities are available in all AWS Regions, except for AWS China Regions and AWS GovCloud (US) Regions. 
You can enable metrics export to managed S3 Tables in both free and advanced dashboard configurations in AWS Regions where S3 Tables are available. To learn more, visit the S3 Storage Lens overview page, documentation, S3 pricing page, and read the AWS News Blog.
aws.amazon.com
December 2, 2025 at 6:40 PM
🆕 AWS's Apache Spark upgrade agent for Amazon EMR speeds up Spark upgrades from 2.4 to 3.5 via automated code analysis, reducing months to weeks. Available globally with SageMaker Unified Studio.

#AWS #AwsGovcloudUs #AwsGlue #AmazonEmr
Announcing the Apache Spark upgrade agent for Amazon EMR
AWS announces the Apache Spark upgrade agent, a new capability that accelerates Apache Spark version upgrades for Amazon EMR on EC2 and EMR Serverless. The agent converts complex upgrade processes that typically take months into projects spanning weeks through automated code analysis and transformation. Organizations invest substantial engineering resources analyzing API changes, resolving conflicts, and validating applications during Spark upgrades. The agent introduces conversational interfaces where engineers express upgrade requirements in natural language, while maintaining full control over code modifications. The Apache Spark upgrade agent automatically identifies API changes and behavioral modifications across PySpark and Scala applications. Engineers can initiate upgrades directly from SageMaker Unified Studio, the Kiro CLI, or an IDE of their choice with the help of MCP (Model Context Protocol) compatibility. During the upgrade process, the agent analyzes existing code and suggests specific changes, which engineers can review and approve before implementation. The agent validates functional correctness through data quality validations. The agent currently supports upgrades from Spark 2.4 to 3.5 and maintains data processing accuracy throughout the upgrade process. The Apache Spark upgrade agent is now available in all AWS Regions where SageMaker Unified Studio is available. To start using the agent, visit SageMaker Unified Studio and select IDE Spaces, or install the Kiro CLI. For detailed implementation guidance, reference documentation, and migration examples, visit the documentation.
aws.amazon.com
December 2, 2025 at 6:40 PM
🆕 AWS announces M8azn EC2 instances preview, powered by AMD EPYC, offering 5GHz CPU, 2x compute performance, and ideal for gaming, HPC, HFT, CI/CD, and simulation. Built on AWS Nitro System for efficiency and security. Request access on Amazon EC2 M8a page.

#AWS #AmazonEc2
Announcing Amazon EC2 General purpose M8azn instances (Preview)
Starting today, new general purpose high-frequency, high-network Amazon Elastic Compute Cloud (Amazon EC2) M8azn instances are available in preview. These instances are powered by fifth generation AMD EPYC (formerly code named Turin) processors, offering the highest maximum CPU frequency in the cloud, 5 GHz. The M8azn instances offer up to 2x the compute performance of previous generation M5zn instances. These instances also deliver 24% higher performance than M8a instances. M8azn instances are built on the AWS Nitro System, a collection of hardware and software innovations designed by AWS. The AWS Nitro System enables the delivery of efficient, flexible, and secure cloud services with isolated multitenancy, private networking, and fast local storage. These instances are ideal for applications such as gaming, high-performance computing, high-frequency trading (HFT), CI/CD, and simulation modeling for the automotive, aerospace, energy, and telecommunication industries. To learn more or request access to the M8azn instances preview, visit the Amazon EC2 M8a page.
aws.amazon.com
December 2, 2025 at 6:40 PM
🆕 AWS launches new compute-optimized C8a EC2 instances with AMD EPYC processors, offering up to 30% higher performance and better price-performance than C7a instances, ideal for compute-intensive workloads. Available in US East and US West regions.

#AWS #AmazonEc2
Announcing New Compute-Optimized Amazon EC2 C8a Instances
AWS announces the general availability of new compute-optimized Amazon EC2 C8a instances. C8a instances are powered by 5th Gen AMD EPYC processors (formerly code named Turin) with a maximum frequency of 4.5 GHz, delivering up to 30% higher performance and up to 19% better price-performance compared to C7a instances. C8a instances deliver 33% more memory bandwidth compared to C7a instances, making these instances ideal for latency-sensitive workloads. Compared to Amazon EC2 C7a instances, they are up to 57% faster for GroovyJVM, allowing better response times for Java-based applications. C8a instances offer 12 sizes, including 2 bare metal sizes, allowing customers to precisely match their workload requirements. C8a instances are built on the AWS Nitro System and are ideal for high performance, compute-intensive workloads such as batch processing, distributed analytics, high performance computing (HPC), ad serving, highly scalable multiplayer gaming, and video encoding. C8a instances are available in the US East (N. Virginia), US East (Ohio), and US West (Oregon) AWS Regions. To get started, sign in to the AWS Management Console. Customers can purchase these instances via Savings Plans, On-Demand instances, and Spot instances. For more information, visit the Amazon EC2 C8a instance page.
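One way to read the two headline figures together, assuming both "up to" numbers refer to the same workload (a rough reading, not an AWS-published comparison): since price-performance is performance divided by price, the two ratios imply the C8a price premium over C7a.

```python
# Back-of-envelope reading of the C8a headline numbers vs. C7a.
perf_ratio = 1.30        # up to 30% higher performance
price_perf_ratio = 1.19  # up to 19% better price-performance

# price-performance = performance / price  =>  price = performance / price-performance
implied_price_ratio = perf_ratio / price_perf_ratio
assert round(implied_price_ratio, 2) == 1.09  # ~9% higher price, outweighed by the perf gain
```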
aws.amazon.com
December 2, 2025 at 4:43 PM
🆕 AWS announces general availability of Amazon Nova Act for automating production UI workflows with high reliability and cost efficiency. Start prototyping at nova.amazon.com/act. Available in US East (N. Virginia).

#AWS
Build agents to automate production UI workflows with Amazon Nova Act (GA)
We are excited to announce the general availability of Amazon Nova Act, a new AWS service for developers to build and manage fleets of highly reliable agents for automating production UI workflows. Nova Act is powered by a custom Nova 2 Lite model and provides high reliability with unmatched cost efficiency, the fastest time-to-value, and ease of implementation at scale. Nova Act can reliably complete repetitive UI workflows in the browser, execute APIs or tools (e.g., write to PDF), and escalate to a human supervisor when appropriate. Developers who need to automate repetitive processes across the enterprise can define workflows combining the flexibility of natural language with more deterministic Python code. Technical teams using Nova Act can start prototyping quickly on the online playground at nova.amazon.com/act, refine and debug their scripts using the Nova Act IDE extension, and deploy to AWS in just a few steps. Nova Act is available today in the AWS Region US East (N. Virginia). Learn more about Nova Act.
aws.amazon.com
December 2, 2025 at 4:43 PM
🆕 Nova Forge is now available to build custom frontier models using Amazon SageMaker AI, blending proprietary data with Nova-curated data, and offering features like Reinforcement Fine Tuning and custom safety guardrails. Available in US East (N. Virginia), with more regions coming soon.

#AWS
Amazon Nova Forge: Build your own Frontier Models using Nova
We are excited to announce the general availability of Nova Forge, a new service to build your own frontier models using Nova. With Nova Forge, you can start your model development on SageMaker AI from early Nova checkpoints across pre-training, mid-training, or post-training phases. You can blend proprietary data with Amazon Nova-curated data to train the model. You can also take advantage of model development features available exclusively on Nova Forge, including the ability to execute Reinforcement Fine Tuning (RFT) with reward functions in your environment and to implement custom safety guardrails using the built-in responsible AI toolkit. Nova Forge allows you to build models that deeply understand your organization's proprietary knowledge and reflect your expertise, while preserving general capabilities like reasoning and minimizing risks like catastrophic forgetting. In addition, Nova Forge customers get early access to new Nova models, including Nova 2 Pro and Nova 2 Omni. Nova Forge is available today in the US East (N. Virginia) AWS Region and will be available in additional regions in the coming months. Learn more about Nova Forge on the AWS News Blog, the Amazon Nova product page, or the Amazon Nova user guide. You can get started with Nova Forge today from the Amazon SageMaker AI console.
aws.amazon.com
December 2, 2025 at 4:43 PM
🆕 AWS launches Trn3 UltraServers with Trainium3 chip for faster, cost-effective generative AI training, offering 4.4x higher performance, 3.9x memory bandwidth, and 4x better efficiency vs. Trn2, with up to 144 chips, HBM3e memory, and NeuronSwitch-v1 fabric.

#AWS #AmazonEc2
Announcing Amazon EC2 Trn3 UltraServers for faster, lower-cost generative AI training
AWS announces the general availability of Amazon Elastic Compute Cloud (Amazon EC2) Trn3 UltraServers powered by Trainium3, our fourth-generation AI chip and first 3nm AWS AI chip, purpose-built to deliver the best token economics for next-generation agentic, reasoning, and video generation applications. Each AWS Trainium3 chip provides 2.52 petaflops (PFLOPs) of FP8 compute and, compared to Trainium2, increases memory capacity by 1.5x to 144 GB of HBM3e and memory bandwidth by 1.7x to 4.9 TB/s. Trainium3 is designed for both dense and expert-parallel workloads with advanced data types (MXFP8 and MXFP4) and improved memory-to-compute balance for real-time, multimodal, and reasoning tasks. Trn3 UltraServers can scale up to 144 Trainium3 chips (362 FP8 PFLOPs total) and are available in EC2 UltraClusters 3.0 to scale to hundreds of thousands of chips. A fully configured Trn3 UltraServer delivers up to 20.7 TB of HBM3e and 706 TB/s of aggregate memory bandwidth. The next-generation Trn3 UltraServers feature NeuronSwitch-v1, an all-to-all fabric that doubles interchip interconnect bandwidth over Trn2 UltraServers. Trn3 delivers up to 4.4x higher performance, 3.9x higher memory bandwidth, and 4x better performance/watt compared to our Trn2 UltraServers, providing the best price-performance for training and serving frontier-scale models, including reinforcement learning, Mixture-of-Experts (MoE), reasoning, and long-context architectures. On Amazon Bedrock, Trainium3 is our fastest accelerator, delivering up to 3x faster performance than Trainium2 with over 5x higher output tokens per megawatt at similar latency per user. New Trn3 UltraServers are built for AI researchers and powered by the AWS Neuron SDK to unlock breakthrough performance. With native PyTorch integration, developers can train and deploy without changing a single line of model code.
For AI performance engineers, we’ve enabled deeper access to Trainium3 so they can fine-tune performance, customize kernels, and push models even further. Because innovation thrives on openness, we are committed to engaging with our developers through open-source tools and resources.
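The fully configured UltraServer figures quoted above follow directly from the per-chip specs multiplied out over 144 chips:

```python
# Reproduce the fully configured Trn3 UltraServer figures from per-chip specs.
CHIPS = 144
FP8_PFLOPS_PER_CHIP = 2.52   # FP8 compute per Trainium3 chip
HBM3E_GB_PER_CHIP = 144      # HBM3e capacity per chip
MEM_BW_TBPS_PER_CHIP = 4.9   # memory bandwidth per chip

total_pflops = CHIPS * FP8_PFLOPS_PER_CHIP       # 362.88 -> "362 FP8 PFLOPs total"
total_hbm_tb = CHIPS * HBM3E_GB_PER_CHIP / 1000  # 20.736 -> "up to 20.7 TB of HBM3e"
total_bw_tbps = CHIPS * MEM_BW_TBPS_PER_CHIP     # 705.6  -> "706 TB/s"

assert int(total_pflops) == 362
assert round(total_hbm_tb, 1) == 20.7
assert round(total_bw_tbps) == 706
```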
aws.amazon.com
December 2, 2025 at 4:42 PM
🆕 AWS launches DevOps Agent preview, an agent that autonomously resolves incidents, proactively prevents issues, and improves application reliability and performance across AWS, multicloud, and hybrid environments at no extra cost in US East (N. Virginia).

#AWS
Introducing AWS DevOps Agent (preview), frontier agent for operational excellence
We're excited to launch AWS DevOps Agent in preview, a frontier agent that resolves and proactively prevents incidents, continuously improving reliability and performance of applications in AWS, multicloud, and hybrid environments. AWS DevOps Agent investigates incidents and identifies operational improvements as an experienced DevOps engineer would: by learning your resources and their relationships, working with your observability tools, runbooks, code repositories, and CI/CD pipelines, and correlating telemetry, code, and deployment data across all of them to understand the relationships between your application resources. AWS DevOps Agent autonomously triages incidents and guides teams to rapid resolution to reduce Mean Time to Resolution (MTTR). AWS DevOps Agent begins investigating the moment an alert comes in, whether at 2 AM or during peak hours, to quickly restore your application to optimal performance. It analyzes patterns across historical incidents to provide actionable recommendations that strengthen key areas including observability, infrastructure optimization, and deployment pipeline enhancement. AWS DevOps Agent helps access the untapped insights in your operational data and tools without changing your workflows. AWS DevOps Agent is available at no additional cost during preview in the US East (N. Virginia) Region. To learn more, read the AWS News Blog and see getting started.
aws.amazon.com
December 2, 2025 at 4:42 PM
🆕 AWS announces Amazon Nova 2 foundation models in Amazon Bedrock: Nova 2 Lite for everyday tasks and Nova 2 Pro (Preview) for complex multistep tasks. Both offer advanced reasoning, customizable fine-tuning, and global inference. Learn more on AWS News Blog.

#AWS #AmazonBedrock #AmazonSagemaker
Announcing Amazon Nova 2 foundation models now available in Amazon Bedrock
Today AWS announces Amazon Nova 2, our next generation of general models that deliver reasoning capabilities with industry-leading price performance. The new models available today in Amazon Bedrock are:
• Amazon Nova 2 Lite, a fast, cost-effective reasoning model for everyday workloads.
• Amazon Nova 2 Pro (Preview), our most intelligent model for highly complex, multistep tasks.
Amazon Nova 2 Lite and Amazon Nova 2 Pro (Preview) offer significant advancements over our previous generation models. These models support extended thinking with step-by-step reasoning and task decomposition and include three thinking intensity levels—low, medium, and high—giving developers control over the balance of speed, intelligence, and cost. The models also offer built-in tools such as code interpreter and web grounding, support remote MCP tools, and provide a one-million-token context window for richer interactions. Nova 2 Lite can be used for a broad range of your everyday tasks. It offers the best combination of price, performance, and speed. Early customers are using Nova 2 Lite for customer service chatbots, document processing, and business process automation. Amazon Nova 2 Pro (Preview) can be used for highly complex agentic tasks such as multi-document analysis, video reasoning, and software migrations. Nova 2 Pro is in preview with early access available to all Amazon Nova Forge customers. If interested, reach out to your AWS account team regarding access. Nova 2 Lite can be customized using supervised fine-tuning (SFT) on Amazon Bedrock and Amazon SageMaker, and full fine-tuning is available on Amazon SageMaker. Amazon Nova 2 Lite and Nova 2 Pro (Preview) are now available in Amazon Bedrock via global cross-region inference in multiple locations. Learn more at the AWS News Blog, Amazon Nova models product page, and Amazon Nova user guide.
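A minimal sketch of selecting one of the three thinking intensity levels through the Bedrock Converse API. The model ID and the `reasoning_effort` field name are illustrative assumptions, not confirmed parameter names; check the Amazon Nova user guide for the exact request shape.

```python
# Sketch: build a Converse request with a thinking intensity level.
# The model ID and "reasoning_effort" field name are hypothetical.

def build_nova2_request(prompt: str, intensity: str = "low") -> dict:
    """Build a Converse request payload with a thinking intensity level."""
    if intensity not in ("low", "medium", "high"):  # the three documented levels
        raise ValueError(f"unknown intensity: {intensity}")
    return {
        "modelId": "amazon.nova-2-lite-v1:0",  # hypothetical model ID
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "additionalModelRequestFields": {"reasoning_effort": intensity},
    }

request = build_nova2_request("Summarize this contract.", intensity="high")
# An actual invocation would then be (requires AWS credentials):
#   boto3.client("bedrock-runtime").converse(**request)
```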
aws.amazon.com
December 2, 2025 at 4:41 PM
🆕 AWS AI Factories deploy AI infrastructure in your data center with Trainium, GPUs, and AWS AI services. Use Amazon Bedrock and SageMaker for foundation models. Contact AWS for secure, specialized AI environments.

#AWS
Introducing AWS AI Factories
AWS AI Factories are now available, providing rapidly deployable, high-performance AWS AI infrastructure in your own data centers. By combining the latest AWS Trainium accelerators and NVIDIA GPUs, specialized low-latency networking, high-performance storage, and AWS AI services, AI Factories accelerate your AI buildouts by months or years compared to building independently. Leveraging nearly two decades of AWS cloud leadership, AWS AI Factories eliminate the complexity of procurement, setup, and optimization that typically delays AI initiatives. With integrated AWS AI services like Amazon Bedrock and Amazon SageMaker, you gain immediate access to leading foundation models without negotiating separate contracts with individual model providers.
AWS AI Factories operate as dedicated environments built exclusively for you or your designated trusted community, ensuring complete separation and operating independence while integrating with the broader set of AWS services. This approach helps governments and enterprises meet digital sovereignty requirements while benefiting from the unparalleled security, reliability, and capabilities of the AWS Cloud. You provide the data center space and power capacity you've already acquired, while AWS deploys and manages the infrastructure.
AWS AI Factories deliver advanced AI technologies to enterprises across all industries and to government organizations seeking secure, isolated environments with strict data residency requirements. These dedicated environments provide access to the same advanced technologies available in public cloud Regions, allowing you to build AI-powered applications as well as train and deploy large language models using your own proprietary data. Rather than spending years building capacity independently, AWS accelerates deployment timelines so you can focus on innovation instead of infrastructure complexity.
Contact your AWS account team to learn more about deploying AWS AI Factories in your data center and accelerating your AI initiatives with AWS's proven expertise in building and maintaining dedicated AI infrastructure at scale.
aws.amazon.com
December 2, 2025 at 4:40 PM
🆕 Amazon announces Nova 2 Sonic, a speech-to-speech model for real-time conversational AI with industry-leading quality, expanded language support, polyglot voices, and seamless voice-to-text integration. Available in Amazon Bedrock.

#AWS
Announcing Amazon Nova 2 Sonic for real-time conversational AI
Today, Amazon announces the availability of Amazon Nova 2 Sonic, our speech-to-speech model for natural, real-time conversational AI, delivering industry-leading quality and price for voice-based applications. It offers best-in-class streaming speech understanding with robustness to background noise and users' speaking styles, efficient dialog handling, and speech generation with expressive voices that can speak natively in multiple languages (polyglot voices). It has superior reasoning, instruction following, and tool invocation accuracy over the previous model.
Nova 2 Sonic builds on the capabilities introduced in the original Nova Sonic model with new features including expanded language support (Portuguese and Hindi), polyglot voices that enable the model to speak different languages with native expressivity using the same voice, and turn-taking controllability that lets developers set low, medium, or high pause sensitivity. The model also adds cross-modal interaction, allowing users to seamlessly switch between voice and text in the same session, asynchronous tool calling to support multi-step tasks without interrupting conversation flow, and a one-million-token context window for sustained interactions.
Developers can integrate Nova 2 Sonic directly into real-time voice systems using Amazon Bedrock's bidirectional streaming API. Nova 2 Sonic also seamlessly integrates with Amazon Connect and other leading telephony providers, including Vonage, Twilio, and AudioCodes, as well as open source frameworks such as LiveKit and Pipecat.
Amazon Nova 2 Sonic is available in Amazon Bedrock in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), and Europe (Stockholm). To learn more, read the AWS News Blog and the Amazon Nova Sonic User Guide. To get started with Nova 2 Sonic in Amazon Bedrock, visit the Amazon Bedrock console.
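Bedrock's bidirectional streaming API exchanges JSON events between the client and the model. The sketch below builds the first events of a voice session as plain payloads; the event names and field shapes are assumptions modeled on the original Nova Sonic protocol, not a verified Nova 2 Sonic schema.

```python
import json

# Event shapes below are illustrative assumptions based on the original
# Nova Sonic streaming protocol; consult the Amazon Nova Sonic User Guide
# for the authoritative Nova 2 Sonic schema.

def session_start_event(max_tokens: int = 1024, temperature: float = 0.7) -> str:
    """Serialize the first event sent on the bidirectional stream."""
    return json.dumps({
        "event": {
            "sessionStart": {
                "inferenceConfiguration": {
                    "maxTokens": max_tokens,
                    "temperature": temperature,
                }
            }
        }
    })

def audio_input_event(prompt_name: str, content_name: str, b64_audio: str) -> str:
    """Wrap a base64-encoded audio chunk as a streaming input event."""
    return json.dumps({
        "event": {
            "audioInput": {
                "promptName": prompt_name,
                "contentName": content_name,
                "content": b64_audio,
            }
        }
    })

first = json.loads(session_start_event())
print(first["event"]["sessionStart"]["inferenceConfiguration"])
```

In a real integration these serialized events would be written to, and model responses read from, the duplex stream opened via the Bedrock Runtime bidirectional streaming operation.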
aws.amazon.com
December 2, 2025 at 4:40 PM
🆕 AWS Transform now offers agentic AI for automating VMware migrations, reducing manual effort and complexity. It discovers environments, prioritizes apps, generates migration plans, and migrates servers securely to 16 AWS Regions. Available in all Regions where AWS Transform is offered.

#AWS
AWS Transform adds new agentic AI capabilities for enterprise VMware migrations
AWS Transform adds powerful new agentic AI capabilities to automate VMware migrations to AWS. The migration agent collaborates with migration teams to understand business priorities and intelligently plan and migrate hundreds of applications spanning thousands of servers, significantly reducing manual effort, time, and complexity.
The agent can now discover your on-premises environment and prioritize applications for migration using the AWS Transform discovery tool, inventory data from various third-party discovery tools, and unstructured data such as documents, notes, and business rules. It analyzes infrastructure, database, and application details, maps dependencies, and generates migration plans grouped by business and technical priorities such as ownership, department, function, subnet, and operating systems. It generates networks with hub-and-spoke and isolated network configurations, provides flexible IP address management options, deploys to multiple accounts, generates network configurations for your AWS landing zones, and migrates from source environments like NSX, Palo Alto, FortiGate, and Cisco ACI.
The agent migrates servers to AWS securely and iteratively in waves and provides clear progress updates throughout the deployment. It also migrates Windows and Linux x86 servers, hypervisors such as VMware, Hyper-V, Nutanix, and KVM, and bare-metal physical environments to multiple target accounts. Throughout your migration, you can ask the agent questions as it guides your decisions, whether that's repeating or skipping steps, or adjusting plans. To simplify internal approvals, the agent also generates a detailed report with the migration plan and mapping of networks, servers, and applications.
With AWS Transform, you can accelerate time to value, lower risk, and reduce the complexity of VMware migrations. These new capabilities are available in all AWS Regions where AWS Transform is offered, with support for migrating servers and networks to 16 AWS Regions.
Learn more on the product page and user guide, and get started with AWS Transform.
aws.amazon.com
December 1, 2025 at 8:40 PM
🆕 AWS Transform for mainframe now supports reimagining legacy apps with advanced data analysis, business logic extraction, and automated workflows, available in multiple regions. Optimize modernization efforts with AI-powered insights and flexible job plans.

#AWS
AWS Transform for mainframe now supports application reimagining
AWS Transform for mainframe delivers new data and activity analysis capabilities to extract comprehensive insights to drive the reimagining of mainframe applications. These insights can be combined with business logic extraction to inform decomposition of legacy applications into logical business domains. Together, these form the basis of a comprehensive specification for coding agents like Kiro to reimagine applications into cloud-native architectures. The new capabilities empower organizations to reimagine legacy workloads, providing a comprehensive reverse engineering workflow that includes automated code and data structure analysis, activity analysis, technical documentation generation, business logic extraction, and intelligent code decomposition. Through in-depth data and activity analysis, AWS Transform helps identify application components with high utilization or business value, allowing teams to optimize their modernization efforts and make data-informed architectural decisions. In the AI-powered chat interface, users can customize their modernization approach through flexible job plans that allow them to select predefined comprehensive workflows—full modernization, analysis focus, or business logic focus—or create their own combination of capabilities based on specific objectives. The reimagine capabilities in AWS Transform for mainframe are available today in US East (N. Virginia), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), and Europe (London) Regions. To learn more about reimagining mainframe applications with AWS Transform for mainframe, read the AWS News Blog post or visit the AWS Transform product page.
aws.amazon.com
December 1, 2025 at 8:39 PM
🆕 AWS Transform now modernizes .NET Framework to .NET 10, supports UI porting, and offers an enhanced developer experience with customizable transformation plans, real-time updates, and next steps markdown. Available in multiple regions.

#AWS
AWS Transform expands .NET transformation capabilities and enhances developer experience
Today, AWS announces the general availability of expanded .NET transformation capabilities and an enhanced developer experience in AWS Transform. Customers can now modernize .NET Framework and .NET code to .NET 10 or .NET Standard. New transformation capabilities include UI porting of ASP.NET Web Forms to Blazor on ASP.NET Core and porting Entity Framework ORM code. The new developer experience, available with the AWS Toolkit for Visual Studio 2026 or 2022, is customizable, interactive, and iterative. It includes an editable transformation plan, estimated transformation time, real-time updates during transformation, the ability to repeat transformations with a revised plan, and next steps markdown for easy handoff to AI code companions. With these enhancements, AWS Transform provides a path to modern .NET for more project types, supports the latest releases of .NET and Visual Studio, and gives developers oversight and control of transformations. Developers can now streamline their .NET modernization through an enhanced IDE experience. The process begins with automated code analysis that produces a customizable transformation plan. Developers can customize the transformation plan, such as fine-tuning package updates. Throughout the transformation, they benefit from transparent progress tracking and detailed activity logs. Upon completion, developers receive a Next Steps document that outlines remaining tasks, including Linux readiness requirements, which they can address through additional AWS Transform iterations or by leveraging AI code companion tools such as Kiro. AWS Transform is available in the following AWS Regions: US East (N. Virginia), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), and Europe (London). To get started with AWS Transform, refer to the AWS Transform documentation.
aws.amazon.com
December 1, 2025 at 7:41 PM
🆕 AWS Transform for mainframe automates test planning, data collection, and case automation to cut modernization testing time and effort by up to 50%. Available in multiple regions, it streamlines complex tasks, boosting accuracy and confidence in projects.

#AWS
AWS Transform for mainframe delivers new testing automation capabilities
AWS Transform for mainframe now offers test planning and automation features to accelerate mainframe modernization projects. New capabilities include automated test plan generation, test data collection scripts, and test case automation scripts, alongside functional test environment tools for continuous delivery and regression testing, helping accelerate and de-risk testing and validation during mainframe modernization projects. The new capabilities address key testing challenges across the modernization lifecycle, reducing the time and effort required for mainframe modernization testing, which typically consumes over 50% of project duration. Automated test plan generation helps teams reduce upfront planning efforts and align on critical functional tests needed to mitigate risk and ensure modernization success, while test data collection scripts accelerate the error-prone, complex process of capturing mainframe data. Test automation scripts then enable scalable execution of test cases by automating test environment staging, test case execution, and results validation against expected outcomes. By automating complex testing tasks and reducing dependency on scarce mainframe expertise, organizations can now modernize their applications with greater confidence while improving accuracy through consistent, automated processes. The new testing capabilities in AWS Transform for mainframe are available today in US East (N. Virginia), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), and Europe (London) Regions. To learn more about automated testing in AWS Transform for mainframe, and how it can help your organization accelerate modernization, read the AWS News Blog, visit the AWS Transform for mainframe product page, or explore the AWS Transform User Guide.
aws.amazon.com
December 1, 2025 at 7:40 PM
🆕 AWS Transform now offers a full-stack Windows modernization agent for .NET apps and SQL Server, automating migration to Aurora PostgreSQL and container deployment on ECS/EC2, accelerating modernization 5x and cutting costs by 70%. Available in US East (N. Virginia).

#AWS
AWS Transform launches an AI agent for full-stack Windows modernization
AWS Transform expands its .NET modernization agent into a full-stack Windows modernization agent that handles both .NET applications and their associated databases. The new agent automates the transformation of .NET applications and Microsoft SQL Server databases to Amazon Aurora PostgreSQL and deploys them to containers on Amazon ECS or Amazon EC2 Linux. AWS Transform accelerates full-stack Windows modernization by 5x across application and database layers, while reducing operating costs by up to 70%.
With AWS Transform, customers can accelerate their full-stack modernization journey through automated discovery, transformation, and deployment. The full-stack Windows modernization agent scans Microsoft SQL Server databases in Amazon EC2 or Amazon RDS instances, and it scans .NET application code from source repositories (GitHub, GitLab, Bitbucket, or Azure Repos) to create customized, editable modernization plans. It automatically transforms SQL Server schemas to Aurora PostgreSQL and migrates databases to new or existing Aurora PostgreSQL target clusters. For .NET application transformation, the agent updates database connections in the source code and modifies database access code written in Entity Framework and ADO.NET to be compatible with Aurora PostgreSQL, all in a unified workflow with human supervision. All the transformed code is committed to a new repository branch. Finally, the transformed application along with the databases can be deployed into a new or existing environment to validate the transformed applications and databases. Customers can monitor transformation progress through worklog updates and interactive chat, and they can use the detailed transformation summaries for next-step recommendations and for easy handoff to AI code companions.
AWS Transform for full-stack Windows modernization is available in the US East (N. Virginia) AWS Region.
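One piece of the workflow, rewriting database connections for Aurora PostgreSQL, can be sketched conceptually. The function below is purely illustrative: AWS Transform performs this rewrite automatically inside the application code, and a real migration would point at the new Aurora endpoint rather than reuse the legacy host and port. It simply shows the kind of key mapping involved when a SQL Server-style connection string becomes a PostgreSQL URI.

```python
def sqlserver_to_postgres_uri(conn: str) -> str:
    """Map common SQL Server connection-string keys onto a PostgreSQL URI.

    Illustrative sketch only; key names and defaults are assumptions.
    """
    parts = {}
    for pair in conn.split(";"):
        if "=" in pair:
            key, value = pair.split("=", 1)
            parts[key.strip().lower()] = value.strip()
    # "Server=tcp:host,port" is a common SQL Server form.
    server = parts.get("server", "localhost").removeprefix("tcp:")
    host, _, port = server.partition(",")
    port = port or "5432"  # PostgreSQL's default, used when none is given
    user = parts.get("user id", parts.get("uid", ""))
    password = parts.get("password", parts.get("pwd", ""))
    database = parts.get("database", parts.get("initial catalog", ""))
    auth = f"{user}:{password}@" if user else ""
    return f"postgresql://{auth}{host}:{port}/{database}"

uri = sqlserver_to_postgres_uri(
    "Server=tcp:legacy-sql,1433;Database=orders;User Id=app;Password=s3cret"
)
print(uri)  # postgresql://app:s3cret@legacy-sql:1433/orders
```

The agent's actual work goes far beyond strings, rewriting Entity Framework and ADO.NET access code, but the mapping above captures the connection-level change a reviewer would see in the transformed branch.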
To learn more, visit the overview page and AWS Transform documentation.
aws.amazon.com
December 1, 2025 at 7:40 PM