SageMaker Training Plans allows you to reserve GPU capacity within specified time frames in cluster sizes of up to 64 instances. Today, Amazon SageMaker AI announces that Training Plans can now be extended when your AI workloads take longer than anticipated, ensuring uninterrupted access to capacity. You can extend plans in 1-day increments up to 14 days, or in 7-day increments up to 182 days (26 weeks). Extensions can be initiated via the API or the SageMaker console. Once the extension is purchased, the workload continues to run uninterrupted without you needing to reconfigure it.
SageMaker AI helps you create the most cost-efficient training plan that fits within your timeline and AI budget. Once you create and purchase your training plan, SageMaker automatically provisions the infrastructure and runs the AI workloads on these compute resources without requiring any manual intervention. See the SageMaker AI pricing page for a detailed breakdown of instance availability by AWS Region.
To learn more about training plan extensions, see the Amazon SageMaker Training Plans User Guide.
You can now create provisioned Amazon Managed Streaming for Apache Kafka (Amazon MSK) clusters with Express brokers in the Africa (Cape Town) and Asia Pacific (Taipei) Regions.
Express brokers are a new broker type for Amazon MSK Provisioned designed to deliver up to 3x more throughput per broker, scale up to 20x faster, and reduce recovery time by 90% compared to standard Apache Kafka brokers. Express brokers come pre-configured with Kafka best practices by default, support all Kafka APIs, and provide the same low-latency performance that Amazon MSK customers expect, so they can continue using existing client applications without any changes.
To get started, create a new cluster with Express brokers through the Amazon MSK console or the AWS CLI, and read our Amazon MSK Developer Guide for more information.
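As a rough illustration of what a programmatic cluster creation might look like, the sketch below builds a request for the Kafka `CreateClusterV2` operation. The `express.m7g.large` instance type, Kafka version, and three-broker layout are illustrative assumptions, not values taken from this announcement; verify them against the Amazon MSK Developer Guide before use.

```python
def express_cluster_request(cluster_name, subnet_ids, security_group_ids):
    """Build an illustrative CreateClusterV2 request body for an MSK
    Provisioned cluster using an Express broker instance type.
    Instance type, Kafka version, and broker count are assumptions."""
    return {
        "ClusterName": cluster_name,
        "Provisioned": {
            "KafkaVersion": "3.6.0",
            "NumberOfBrokerNodes": 3,
            "BrokerNodeGroupInfo": {
                # Express broker types are prefixed "express." (assumed example)
                "InstanceType": "express.m7g.large",
                "ClientSubnets": subnet_ids,
                "SecurityGroups": security_group_ids,
            },
        },
    }

request = express_cluster_request(
    "demo-express-cluster",
    ["subnet-aaa", "subnet-bbb", "subnet-ccc"],
    ["sg-0123456789abcdef0"],
)
# With credentials configured, the request could be sent via boto3:
# client = boto3.client("kafka")
# response = client.create_cluster_v2(**request)
```

The same parameters map directly onto the `aws kafka create-cluster-v2` CLI command if you prefer shell-based provisioning.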
Starting today, customers can use Amazon Bedrock in the Asia Pacific (New Zealand) Region to easily build and scale generative AI applications using a variety of foundation models (FMs) and powerful supporting tools.
Amazon Bedrock is a fully managed service that offers a choice of high-performing large language models (LLMs) and other FMs from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, OpenAI, and Stability AI, as well as Amazon, via a single API. Amazon Bedrock also provides a broad set of capabilities customers need to build generative AI applications with security, privacy, and responsible AI built in. These capabilities help you build tailored applications for multiple use cases across different industries, helping organizations unlock sustainable growth from generative AI while maintaining privacy and security.
With this launch, customers can now use models from Anthropic (Sonnet 4.5, Sonnet 4.6, Opus 4.5, Opus 4.6, Haiku 4.5) and Amazon (Nova 2 Lite) in New Zealand with cross-region inference.
To get started, visit the Amazon Bedrock page and see the Amazon Bedrock documentation for more details.
Amazon Bedrock AgentCore Runtime now supports InvokeAgentRuntimeCommand, a new API that lets you execute shell commands directly inside a running AgentCore Runtime session. Developers can send a command, stream the output in real time over HTTP/2, and receive the exit code — without building custom command execution logic in their containers.
AI agents often operate in workflows where deterministic operations such as running tests, installing dependencies, or executing git commands need to run alongside LLM-powered reasoning. Previously, developers had to build custom logic inside their containers to distinguish agent invocations from shell commands, spawn child processes, capture stdout and stderr, and handle timeouts. InvokeAgentRuntimeCommand eliminates this undifferentiated work by providing a platform-level API for command execution. Commands run inside the same container, filesystem, and environment as the agent session, and can execute concurrently with agent invocations without blocking.
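To make the request shape concrete, here is a minimal sketch of building an `InvokeAgentRuntimeCommand` call. The parameter names (`agentRuntimeArn`, `runtimeSessionId`, `command`), the boto3 client name, and the response handling shown in comments are assumptions for illustration; check the AgentCore API reference for the actual signatures.

```python
def command_request(runtime_arn, session_id, command):
    """Assemble an illustrative InvokeAgentRuntimeCommand request.
    All parameter names here are assumed, not confirmed API fields."""
    return {
        "agentRuntimeArn": runtime_arn,
        "runtimeSessionId": session_id,
        "command": command,
    }

req = command_request(
    "arn:aws:bedrock-agentcore:us-east-1:111122223333:runtime/demo",
    "session-1234",
    "python -m pytest -q",
)
# With credentials configured, the command could be sent and its output
# streamed over HTTP/2 (client name and stream shape are assumptions):
# client = boto3.client("bedrock-agentcore")
# for event in client.invoke_agent_runtime_command(**req)["responseStream"]:
#     ...  # stdout/stderr chunks arrive in real time; a final event
#          # carries the command's exit code
```

Because commands run in the same container and filesystem as the agent session, a request like the one above can run a test suite against files the agent just wrote, concurrently with ongoing agent invocations.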
Executing shell commands in AgentCore Runtime is supported across fourteen AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Canada (Central), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), and Europe (Stockholm).
To learn more, see Execute shell commands in AgentCore Runtime.
Amazon SageMaker Unified Studio now provides an aggregated view of data lineage, displaying all jobs contributing to your dataset. The aggregated view gives you a complete picture of data transformations and dependencies across your entire lineage graph, helping you quickly identify all upstream sources and downstream consumers of your datasets.
Previously, SageMaker Unified Studio showed the lineage graph as it existed at a specific point in time, which is useful for troubleshooting and investigating specific data processing events. The aggregated view now provides a complete picture of data transformations and dependencies across multiple levels of the lineage graph. You can use this view to understand the full scope of jobs impacting your datasets and to identify all upstream sources and downstream consumers.
The aggregated view is available as the default lineage view in Amazon SageMaker Unified Studio for IdC-based domains. You can switch to the previous view by toggling the "display in event timestamp order" option. You can also query the lineage graph using the new QueryGraph API, which provides lineage node graphs with metadata and augmented business context.
Aggregated view of lineage is available in all existing Amazon SageMaker Unified Studio regions. For detailed information on how to get started with lineage using these new features, refer to the documentation and API.
AWS Blu Insights capabilities are now available as part of AWS Transform, enabling customers to launch mainframe refactoring projects from the AWS Transform console. This launch unifies all three mainframe modernization patterns — refactor, replatform, and reimagine — within AWS Transform for mainframe. Code transformation is now offered at no cost, replacing the previous lines-of-code based pricing model.
With this launch, you can access AWS Transform for mainframe refactor directly from the AWS Transform console using your existing AWS credentials. The mandatory three-level certification requirement to access the Transformation Center has been removed, lowering the friction to exploring refactor projects. Self-paced training content remains available within the application for those who want to build deeper knowledge.
AWS Transform for mainframe refactor is available in 18 AWS Regions. In regions where AWS Transform for mainframe is not yet available, you can continue to access the service through the AWS Mainframe Modernization console.
To get started, visit the AWS Transform for mainframe refactor user guide.
Amazon Corretto 26, a Feature Release (FR) version, is now available for download. Amazon Corretto is a no-cost, multi-platform, production-ready distribution of OpenJDK. You can download Corretto 26 for Linux, Windows, and macOS from our downloads page. Corretto 26 will be supported through October 2026.
A detailed description of these features can be found on the OpenJDK 26 Project page. Amazon Corretto 26 is distributed by Amazon under an open source license.
Amazon Relational Database Service (Amazon RDS) for SQL Server now supports Additional Storage Volumes, Resource Governor, and SQL Server 2019 with SQL Server Developer Edition. SQL Server Developer Edition is an ideal choice to build and test applications because it includes all the functionality of Enterprise edition, and is free of license charges for use as a development and test system, not as a production server.
You can add Additional Storage Volumes to your Amazon RDS for SQL Server Developer Edition instances, providing up to 256 TiB of storage, 4x more than before. You can also use SQL Server Resource Governor, which lets you manage workload and resource consumption by defining resource pools and workload groups to control CPU and memory usage, enabling more realistic performance testing. Amazon RDS for SQL Server Developer Edition now also supports SQL Server 2019 (CU32 GDR - 15.0.4455.2), so you can match the SQL Server version used in your development and testing environments with the one you use in production.
For more information about these features and region availability, see Working with SQL Server Developer Edition on RDS for SQL Server. For pricing details, see Amazon RDS for SQL Server Pricing.
AWS Glue Data Catalog now supports AWS IAM-based authorization for Amazon S3 Tables and Apache Iceberg materialized views. With IAM-based authorization, you can define all necessary permissions across storage, catalog, and query engines in a single IAM policy.
This capability simplifies the integration of S3 Tables or materialized views with any AWS Analytics service, including Amazon Athena, Amazon EMR, Amazon Redshift, and AWS Glue. You can also opt in to AWS Lake Formation at any time to manage fine-grained access controls using the AWS Management Console, AWS CLI, API, and AWS CloudFormation.
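With IAM-based authorization, a single identity policy can cover catalog and table-data access. The sketch below assembles such a policy as a Python dict; the action names (`glue:GetTable`, `glue:GetDatabase`, `s3tables:GetTableData`) and the wildcard resource are illustrative assumptions that should be verified against, and scoped down per, the S3 Tables documentation.

```python
import json

# Hedged sketch: one identity policy granting read access to an S3 table
# through the Glue Data Catalog. Action names are assumed examples.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadS3TableViaGlueCatalog",
            "Effect": "Allow",
            "Action": [
                "glue:GetDatabase",
                "glue:GetTable",
                "s3tables:GetTableData",
            ],
            # In practice, scope this down to specific table bucket
            # and catalog ARNs rather than using a wildcard.
            "Resource": "*",
        }
    ],
}

print(json.dumps(policy, indent=2))
```

A query engine such as Athena or Redshift assuming a role with this policy would then need no separate Lake Formation grants unless you later opt in to fine-grained access control.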
This feature is now available in select AWS Regions. To learn more, visit the S3 Tables documentation and the AWS Glue Data Catalog documentation.
Amazon Connect now supports 13 new languages for voice AI agents, bringing the total to 40 language locales. New languages include Arabic (Saudi Arabia), Czech, Danish, Dutch (Belgium), English (Ireland), English (New Zealand), English (Wales), German (Switzerland), Icelandic, Romanian, Spanish (Mexico), Turkish, and Welsh.
Amazon Connect's agentic self-service capabilities enable AI agents to understand, reason, and take action across voice and digital channels to automate routine and complex customer service tasks across multiple languages.
To learn more about this feature, see the Amazon Connect Administrator Guide. To learn more about Amazon Connect, a complete AI-powered contact center solution delivering personalized customer experiences at scale, visit the Amazon Connect website.
Starting today, AWS Elemental MediaConnect supports NDI® (Network Device Interface) as a live video source, enabling broadcasters and live production teams to ingest NDI streams and convert them to transport stream outputs such as SRT for downstream distribution. NDI is a widely adopted IP video technology used in live production environments and supported by more than 500 hardware products and 400 software applications.
With this new capability, live production teams can bridge NDI-based production environments with standards-based cloud distribution workflows without requiring custom transcoding or protocol conversion infrastructure. For example, you can route an NDI feed from an EC2 instance running NDI Tools directly into a MediaConnect flow, convert it to a transport stream, and pass it downstream to AWS Elemental MediaLive for transcoding and AWS Elemental MediaPackage for origin and packaging. This eliminates the complexity of egressing NDI content from the AWS Cloud and enables seamless integration with existing IP-based broadcast workflows.
NDI support is available in most regions where MediaConnect is currently deployed. For more information and details on pricing, please refer to the NDI documentation and the MediaConnect pricing page.
Amazon Connect now offers generative text-to-speech voices in three additional AWS Regions: Europe (London), Asia Pacific (Seoul), and Asia Pacific (Sydney). Amazon Connect also expands support for nine new generative text-to-speech voices across US English, UK English, European French, German, and Italian: Tiffany (en-US), Amy (en-GB), Brian (en-GB), Ambre (fr-FR), Florian (fr-FR), Tina (de-DE), Lennart (de-DE), Beatrice (it-IT), and Lorenzo (it-IT).
Amazon Connect's agentic self-service capabilities enable AI agents to understand, reason, and take action across voice and messaging channels to automate routine and complex customer service tasks. Connect's voice AI agents understand not only what customers say but how they say it, adapting voice responses to match customer tone and sentiment while maintaining natural conversational pace. With these updates, you can deliver natural, human-like voice AI experiences to a broader range of customers across more regions and languages.
To learn more about this feature, see the Amazon Connect Administrator Guide. To learn more about Amazon Connect, a complete AI-powered contact center solution delivering personalized customer experiences at scale, visit the Amazon Connect website.
AWS Security Agent now provides the ability to download penetration testing reports. This enhancement allows users to create customized reports based on specific filters. Each report includes an executive summary with a high-level overview of security posture and findings, the scope of the test, the test methodology detailing the approach and techniques used along with task details, and comprehensive findings details with vulnerability information and risk assessments.
The new report download capability allows users to filter findings based on risk level, confidence level, finding status, risk types, and task status. Reports are downloadable in PDF format, making it easy to share and review findings across teams. This functionality gives added flexibility to teams that use AWS Security Agent's on-demand penetration testing capability to accelerate penetration testing from weeks to hours.
To learn more about AWS Security Agent and its new report generation feature, visit the AWS Security Agent page.
We’re excited to announce that Amazon Web Services (AWS) has completed its second GDV (German Insurance Association) community audit, with 36 members from the German insurance industry participating, corresponding to over 63% coverage of the German market in terms of insurance premiums. Community audits are an efficient method to provide additional assurance to a group […]
Bulletin ID: 2026-009-AWS
Scope: AWS
Content Type: Important (requires attention)
Publication Date: 2026/03/17 12:15 PM PDT
Description:
Kiro is an AI-powered IDE for agentic software development. We identified CVE-2026-4295, where improper trust boundary enforcement allowed arbitrary code execution when a user opened a maliciously crafted project directory.
Impacted versions: < 0.8.0
Please refer to the article below for the most up-to-date and complete information related to this AWS Security Bulletin.
In this post, we’ll explore how Atos used the AWS AI League to help accelerate AI education across 400+ participants, highlight the tangible benefits of gamified, experiential learning, and share actionable insights you can apply to your own AI enablement programs.