To improve developer experience, AWS Transform now includes an interactive agentic AI assistant in the AWS Toolkit for Visual Studio. This enables .NET developers to modernize applications through a conversational, step-by-step guided experience directly in their IDE. The assistant provides visibility, checkpointing, and enhanced steering capabilities, so developers who live in the IDE can keep working there with fine-grained control. The agent analyzes source code, provides a detailed assessment report, and generates a transformation plan. It then executes modernization tasks interactively, allowing developers to review, edit, and approve each step before proceeding, all without switching to the web console.
You can pause at any step, inspect generated diffs, upload a custom plan, and direct the agent with natural language. The agent automatically attempts to fix build errors encountered during transformation, provides detailed worklogs for transparency, and generates a downloadable HTML summary report upon completion along with recommended next steps. You can start a modernization project in the AWS Transform web console and continue directly in Visual Studio, with full context and progress preserved across both environments, eliminating the need to restart or reconfigure your workflow. In addition to Visual Studio, you can invoke AWS Transform agents from Kiro and other AI coding assistants and coding environments. Through the Kiro power for AWS Transform and AWS Transform MCP agents, you get a unified tool experience that reduces context-switching and lets you continue iterating on transformed code in your preferred development environment.
This capability is available in the following AWS Regions: US East (N. Virginia), Canada (Central), Europe (Frankfurt), Europe (London), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Sydney), and Asia Pacific (Tokyo).
To get started, download the latest AWS Toolkit for Visual Studio from the Visual Studio Marketplace. To learn more, visit the AWS Transform for Windows .NET page.
AWS CloudFormation now supports a new intrinsic function, Fn::GetStackOutput, which enables you to reference stack outputs across AWS accounts and Regions directly within your CloudFormation templates and CDK applications. This new capability simplifies the provisioning and management of multi-account and multi-Region workloads in CloudFormation and CDK, and eliminates deployment deadlocks when restructuring cross-stack dependencies in CDK apps.
When managing multi-account AWS environments, teams often need to share infrastructure values, such as VPC IDs or database endpoints, across account boundaries. Previously, achieving this required multiple steps, including copying values between templates or coordinating parameter updates across teams. Now, with Fn::GetStackOutput, you simply specify the target stack name, output key, an IAM role ARN for cross-account access, and optionally a Region. CloudFormation assumes the specified role, retrieves the output value, and resolves it during template processing, reducing manual coordination and the risk of configuration drift. In CDK applications, cross-account and cross-Region references now use this function automatically, eliminating the need for custom resources and SSM parameters that the previous approach required. Customers can also call Fn.getStackOutput directly to create weak references between stacks, simplifying stack refactoring.
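The reference described above might look like the following template fragment. This is a hypothetical sketch: the property names (StackName, OutputKey, RoleArn, Region) follow the announcement's description of the function's inputs, but the released schema may differ, so check the CloudFormation User Guide for the exact syntax.

```yaml
# Hypothetical sketch of Fn::GetStackOutput -- property names are assumptions
# based on the announcement (target stack, output key, role ARN, optional Region).
Resources:
  AppSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: App tier security group
      VpcId:
        Fn::GetStackOutput:
          StackName: shared-network-stack          # stack in the other account
          OutputKey: VpcId                         # output to resolve
          RoleArn: arn:aws:iam::111122223333:role/OutputReaderRole
          Region: us-east-1                        # optional
```

CloudFormation assumes OutputReaderRole at deployment time, reads the VpcId output from shared-network-stack, and substitutes the value during template processing.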
To get started, add the Fn::GetStackOutput function to your CloudFormation template and configure the appropriate IAM permissions for cross-account access. In CDK, cross-account and cross-Region references use this function automatically. Visit the AWS CloudFormation User Guide or the CDK developer guide to learn more.
This feature is available in all AWS Regions where CloudFormation is supported. Refer to the AWS Region table for service availability details.
Amazon Application Recovery Controller (ARC) Region Switch helps customers orchestrate the failover of their multi-Region applications to achieve a bounded recovery time in the event of a Regional impairment. Today, we are announcing the Lambda event source mapping execution block, which automates the coordinated failover of event streams for multi-Region workloads.
Customers running event-driven architectures use Lambda functions with event source mappings to process event streams from Kinesis, DynamoDB Streams, MSK, or SQS. For active-passive workloads, customers may maintain Lambda functions in each Region but process events in only one Region at a time. These event source mappings must be toggled during failover to avoid duplicate processing—a manual, error-prone step. The Lambda event source mapping execution block automates this by enabling or disabling event source mappings in either the activating or deactivating Region. To control duplicate processing, customers can configure two Lambda event source mapping execution blocks in sequence: a disable block to stop event processing in the deactivating Region, and an enable block to start it in the activating Region. The disable block can be overridden by running the plan in "ungraceful" mode for unplanned failovers where the deactivating Region may be impaired. Native cross-account support enables a single plan to handle event stream failover across multiple accounts.
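The toggling that the execution block automates can be sketched as a small helper around the Lambda API. This is an illustrative sketch, not the Region Switch implementation; it takes a boto3 Lambda client (e.g. `boto3.client("lambda", region_name=...)`) so it can target either Region.

```python
def toggle_event_source_mappings(lambda_client, function_name, enabled):
    """Enable or disable every event source mapping on a function.

    Sketch of what the ARC Region Switch execution block automates: the
    disable call (enabled=False) pauses polling in the deactivating Region,
    and the enable call (enabled=True) resumes it in the activating Region.
    """
    toggled = []
    paginator = lambda_client.get_paginator("list_event_source_mappings")
    for page in paginator.paginate(FunctionName=function_name):
        for esm in page["EventSourceMappings"]:
            # Enabled=False stops event polling; Enabled=True restarts it.
            lambda_client.update_event_source_mapping(
                UUID=esm["UUID"], Enabled=enabled
            )
            toggled.append(esm["UUID"])
    return toggled
```

Running the disable helper in the deactivating Region before the enable helper in the activating Region mirrors the two-block sequence described above.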
To get started, see the Lambda event source mapping execution block documentation. ARC Region Switch is available in all commercial Regions; see ARC Region Switch availability for details.
Amazon Aurora DSQL introduces support for change data capture (CDC) in preview, enabling you to stream real-time database changes directly to Amazon Kinesis Data Streams. This fully managed capability removes the need to build or maintain custom streaming pipelines, making it easier to build event-driven applications, power real-time analytics pipelines, and synchronize data across systems.
Aurora DSQL automatically captures the result of insert, update, and delete operations as change events. You can use these events to synchronize data across microservices, trigger downstream processing with AWS Lambda, or deliver to Amazon S3, Amazon Redshift, and Amazon OpenSearch Service through Amazon Data Firehose for analytics. CDC streaming requires no infrastructure setup and is designed to have zero impact on your database workload, so you can stream changes without affecting database throughput or latency.
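Because the change events land on a standard Kinesis data stream, they can be read with the ordinary Kinesis consumer API. The sketch below shows one batch read; the stream name, shard ID, and the assumption that each record's payload is JSON are illustrative, and the actual CDC event schema is documented in the Aurora DSQL guide. Pass a boto3 Kinesis client such as `boto3.client("kinesis")`.

```python
import json

def read_change_events(kinesis_client, stream_name, shard_id, limit=100):
    """Read one batch of Aurora DSQL CDC events from a Kinesis shard.

    Sketch only: the per-record payload layout (JSON) is an assumption;
    consult the CDC documentation for the real event schema.
    """
    iterator = kinesis_client.get_shard_iterator(
        StreamName=stream_name,
        ShardId=shard_id,
        ShardIteratorType="TRIM_HORIZON",  # start from the oldest retained record
    )["ShardIterator"]
    resp = kinesis_client.get_records(ShardIterator=iterator, Limit=limit)
    return [json.loads(record["Data"]) for record in resp["Records"]]
```

For production consumption you would typically use enhanced fan-out or a Lambda event source mapping on the stream rather than polling shards by hand.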
CDC streaming in preview is available in all AWS Regions where Aurora DSQL is available. Streams are billed using Distributed Processing Units (DPUs) based on the volume of data captured, with standard Amazon Kinesis Data Streams pricing applying separately. To learn more, read the blog and see getting started.
Today, AWS announces that the AWS Transform agents, built on decades of AWS migration and modernization experience, are now accessible through a Kiro power, agent plugins, and the AWS Transform MCP server. Developers can now consume all of AWS Transform's capabilities directly from their preferred development environment, whether working interactively in an agentic IDE, managing jobs through the web console, or integrating programmatically via MCP.
This launch gives builders flexibility to choose the surface that fits their workflow while gaining the depth of transformation expertise behind the AWS Transform agents for Windows, VMware, mainframe and more. A developer can start a transformation in their agentic IDE, monitor progress and collaborate in the web console, then see results back in their IDE — all against the same underlying job with consistent state. Additionally, AWS Transform now supports IAM role authentication. Customers who start using AWS Transform in their IDE or the web app can use their existing AWS credentials to create a Transform environment, workspace, and transformation job.
The agent plugins and MCP server are available on GitHub, and the Kiro power is available in the Kiro marketplace. To learn more, see https://aws.amazon.com/transform.
Today, as part of the AWS Transform composability initiative, AWS announces the general availability of the agent builder toolkit, a Kiro power for AWS Transform. With the agent builder toolkit, AWS Partners and customers can build agents tailored to their specific modernization needs and ensure they work seamlessly within AWS Transform.
This capability enables Migration and Modernization Competency Partners, ISVs, or customers to create differentiated transformation solutions by integrating their specialized agents, tools, knowledge bases, and workflows with AWS Transform's agentic AI capabilities. The agent builder toolkit provides the end-to-end lifecycle for transformation agents: build agents using the Kiro power, share them with teams or across partner networks, and register them with AWS Transform for discovery.
The agent builder toolkit for AWS Transform is available in the Kiro power marketplace. To learn more, see AWS Transform (https://aws.amazon.com/transform).
Amazon SageMaker AI now supports serverless model customization for the Qwen3.6 27B parameter model using supervised fine-tuning (SFT) and reinforcement fine-tuning (RFT). Qwen3.6 is a popular open-weight model family from Alibaba Cloud. This launch adds to our existing support for fine-tuning Qwen3.5 and other popular models. Before this launch, you could deploy the Qwen3.6 base model on SageMaker AI; now, you can also adapt it to your specific domains and workflows.
Model customization enables you to tailor foundation models with your proprietary data so they more accurately reflect your domain knowledge, terminology, and quality standards. Rather than building models from scratch, fine-tuning lets you start from a capable base model and specialize it for your use cases, whether that's improving accuracy on domain-specific tasks, aligning outputs with your organization's tone, or improving performance on new tasks using your labeled data. With serverless customization, SageMaker AI handles all infrastructure provisioning and training orchestration, so you can focus on your data and evaluation rather than cluster management, and only pay for what you use.
Serverless model customization for Qwen3.6 on SageMaker AI is available in US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), and EU (Ireland). To get started, navigate to the Models page in Amazon SageMaker Studio to launch a customization job, or use the SageMaker Python SDK for programmatic access. To learn more, see the Amazon SageMaker AI model customization documentation.
Migrating your TLS endpoints to post-quantum cryptography (PQC) starts with understanding your current TLS endpoint inventory and posture. This post introduces the PQC Readiness Scanner — an automated tool that inventories your Application Load Balancer (ALB), Network Load Balancer (NLB), and Amazon API Gateway endpoints and continuously monitors their TLS configurations for PQC readiness. The […]
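The inventory step the scanner performs can be sketched with the Elastic Load Balancing API: each TLS/HTTPS listener carries an SslPolicy whose name you would then compare against AWS's PQC guidance. Which policies count as PQC-ready is an assumption left to that comparison; pass a boto3 client such as `boto3.client("elbv2")`.

```python
def inventory_lb_tls_policies(elbv2_client):
    """List each ALB/NLB listener's TLS security policy.

    Sketch of the inventory step only: deciding which SslPolicy values are
    PQC-ready is out of scope here and should follow AWS's published guidance.
    """
    findings = []
    for lb in elbv2_client.describe_load_balancers()["LoadBalancers"]:
        listeners = elbv2_client.describe_listeners(
            LoadBalancerArn=lb["LoadBalancerArn"]
        )["Listeners"]
        for listener in listeners:
            # Only TLS/HTTPS listeners carry an SslPolicy.
            if "SslPolicy" in listener:
                findings.append(
                    (lb["LoadBalancerName"], listener["Port"], listener["SslPolicy"])
                )
    return findings
```

A real scanner would paginate both calls and also cover API Gateway domain names, as the post describes.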
Bulletin ID: 2026-031-AWS
Scope: AWS
Content Type: Important (requires attention)
Publication Date: 05/14/2026 1:00 PM PDT
Description:
Amazon SageMaker Python SDK is an open-source library for training and deploying machine learning models on Amazon SageMaker. The ModelBuilder component simplifies model deployment by automating model artifact preparation and SageMaker model creation.
We identified two issues affecting the model artifact integrity verification mechanism in the ModelBuilder/Serve component:
- CVE-2026-8596: We identified a cleartext storage of sensitive information issue in the ModelBuilder/Serve component. When building models using ModelBuilder, the SDK stored an HMAC signing key as a container environment variable (SAGEMAKER_SERVE_SECRET_KEY). This key was returned in plaintext by SageMaker describe APIs (DescribeModel, DescribeEndpointConfig, DescribeModelPackage). A remote authenticated actor with permissions to call these APIs and S3 write access to the model artifact path could extract the key, forge valid integrity signatures for specially crafted model artifacts, and achieve code execution in inference containers.
- CVE-2026-8597: We identified a missing integrity verification issue in the Triton inference handler. The Triton handler deserialized model artifacts without performing integrity verification before execution. A remote authenticated actor with S3 write access to the model artifact path could replace model artifacts with a specially crafted pickle payload that would be deserialized without verification, achieving code execution in inference containers.
Affected versions: Amazon SageMaker Python SDK >= v2.199.0 AND <= v2.257.1, >= v3.0.0 AND <= v3.7.1
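As a quick convenience check (not an official tool), the affected ranges above can be encoded as a small version test; your installed version is available as `sagemaker.__version__`.

```python
def is_affected(sdk_version):
    """Return True if a SageMaker Python SDK version falls in the bulletin's
    affected ranges: 2.199.0-2.257.1 and 3.0.0-3.7.1, both inclusive."""
    # Normalize "v2.257.1" or "2.257.1" to a comparable (major, minor, patch) tuple.
    parts = tuple(int(p) for p in sdk_version.lstrip("v").split(".")[:3])
    return (2, 199, 0) <= parts <= (2, 257, 1) or (3, 0, 0) <= parts <= (3, 7, 1)
```

If `is_affected(sagemaker.__version__)` is True, upgrade to a patched release per the bulletin.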
See more details at Security Bulletin (ID: 2026-031-AWS).
In this post, you will configure Chrome enterprise policies to restrict a browser agent to a specific website, observe the policy enforcement through session recording, and demonstrate custom root CA certificates using a public test site. The walkthrough produces a working solution that researches Amazon Bedrock AgentCore documentation while operating under enterprise browser restrictions.
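While the post's exact policy set isn't reproduced here, restricting a browser to a single site is commonly done with Chrome's URLBlocklist and URLAllowlist enterprise policies. The fragment below is an illustrative sketch: it blocks all navigation except the allowlisted documentation domain (the domain is chosen as an example).

```json
{
  "URLBlocklist": ["*"],
  "URLAllowlist": ["docs.aws.amazon.com"]
}
```

With policies like these applied to the managed browser, any attempt by the agent to navigate elsewhere is blocked, which is what the session recording in the walkthrough makes visible.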
Today, we're announcing cross-account Athena access for Amazon Quick. With this feature, customers can query Athena data in other AWS accounts using AWS Identity and Access Management (IAM) role chaining, with query costs billed to the account where the data resides.
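IAM role chaining, which this feature relies on, means assuming a sequence of roles where each hop's temporary credentials are used to assume the next role. The helper below is a sketch under assumptions: `make_sts_client(creds)` is a caller-supplied factory (not an AWS API) that returns an STS client built from the given credentials, e.g. via `boto3.client("sts", aws_access_key_id=..., aws_secret_access_key=..., aws_session_token=...)`, or from default credentials when `creds` is None.

```python
def chain_assume_roles(make_sts_client, role_arns, session_name="quick-athena"):
    """Assume each role in sequence, using the previous hop's credentials.

    Sketch of IAM role chaining; the final hop's credentials would be used
    to build the Athena client in the data-owning account.
    """
    creds = None
    for arn in role_arns:
        sts = make_sts_client(creds)
        creds = sts.assume_role(
            RoleArn=arn, RoleSessionName=session_name
        )["Credentials"]
    return creds
```

Note that chained role sessions are capped at one hour regardless of the roles' configured maximum session durations.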
In this post, you learn how to combine Stream's Vision Agents open-source framework with Amazon Bedrock and Amazon Nova 2 Sonic to build real-time voice agents that can be production-ready in minutes. You'll see how the integration works under the hood, walk through code examples, and explore advanced capabilities like function calling, automatic reconnection, and multilingual voice support.
In this post, you will learn how to implement Assisted NLU effectively: improve your bot design with effective intent and slot descriptions, validate your implementation using Test Workbench, and plan your transition from traditional NLU to Assisted NLU for both new and existing bots.