AWS Firewall Manager announces that it is now available in AWS Asia Pacific (New Zealand) Region. AWS Firewall Manager helps cloud security administrators and site reliability engineers protect applications while reducing the operational overhead of manually configuring and managing rules.
With AWS Firewall Manager, customers hosting their applications and workloads in the AWS Asia Pacific (New Zealand) Region can deploy defense-in-depth policies across the full range of supported AWS security services. Customers wishing to secure their assets with AWS WAF can create and maintain security policies centrally with AWS Firewall Manager.
To learn more about how AWS Firewall Manager works, see the AWS Firewall Manager documentation, and see the AWS Region Table for the list of Regions where AWS Firewall Manager is currently available. For features and pricing, visit the AWS Firewall Manager website.
AWS DataSync now supports AWS Secrets Manager for credential management across all location types, including Hadoop Distributed File System (HDFS), Amazon FSx for Windows File Server, and Amazon FSx for NetApp ONTAP. Previously, Secrets Manager integration was limited to a subset of location types, requiring you to provide credentials directly through the DataSync API or console.
You can centralize credential management for all DataSync locations in Secrets Manager, providing a single, consistent approach across all your data transfers. You can also encrypt credentials with your own AWS KMS key instead of the default AWS-owned key, helping you meet your organization's security requirements and governance policies. All secrets are stored in your account, allowing you to update credentials as needed, independent of the DataSync service.
DataSync supports two approaches for credential management. You can provide a secret ARN referencing credentials you manage in Secrets Manager for full control over rotation, auditing, and access policies. Alternatively, DataSync can automatically create and manage secrets on your behalf.
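The first approach above can be sketched with the AWS SDK for Python (boto3). The secret name, credential values, and KMS key ARN below are hypothetical, and the exact DataSync location parameter that accepts the secret ARN varies by location type, so check the DataSync API reference for your location:

```python
import json

def build_secret_request(name, username, password, kms_key_arn=None):
    """Build kwargs for secretsmanager.create_secret().

    Passing KmsKeyId encrypts the secret with your own AWS KMS key
    instead of the default AWS-owned key.
    """
    req = {
        "Name": name,
        "SecretString": json.dumps({"username": username, "password": password}),
    }
    if kms_key_arn:
        req["KmsKeyId"] = kms_key_arn
    return req

req = build_secret_request(
    "datasync/fsx-windows-creds",          # hypothetical secret name
    "transfer-user",
    "example-password",
    kms_key_arn="arn:aws:kms:us-east-1:111122223333:key/example",  # hypothetical
)

# With credentials configured, you would then create the secret and pass
# its ARN when creating or updating the DataSync location:
# secret_arn = boto3.client("secretsmanager").create_secret(**req)["ARN"]
```

Managing the secret yourself this way gives you full control over rotation, auditing, and resource policies; if you let DataSync create the secret instead, the service owns that lifecycle.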
This capability is available in the majority of AWS Regions where AWS DataSync is offered. For the full list of supported Regions, visit the AWS Capabilities tool in Builder Center. To get started, visit the AWS DataSync console. For more information, see Managing credentials with AWS Secrets Manager in the AWS DataSync documentation.
AWS Database Migration Service (DMS) Schema Conversion with GenAI is now available in nine additional AWS Regions: Asia Pacific (Tokyo, Osaka, Sydney), Europe (Ireland, London, Stockholm, Paris), Canada (Central) and US East (Ohio). This feature leverages Amazon Bedrock foundation models—including Claude 3.5 Sonnet v2, Claude 3.7 Sonnet, and Claude Sonnet 4—to automate database schema and code conversion, helping organizations accelerate their database modernization initiatives. The regional expansion enables customers to process their migration workloads locally, reducing latency and supporting data residency requirements.
DMS Schema Conversion with GenAI automatically converts database schemas and code from Oracle, SQL Server, MySQL, PostgreSQL, and Sybase to Amazon Aurora PostgreSQL-Compatible Edition and Amazon RDS for PostgreSQL. By automating the conversion process, the service significantly reduces manual effort and accelerates migration project timelines, enabling database administrators and migration specialists to focus on strategic modernization activities rather than time-consuming manual code transformation.
DMS Schema Conversion is available at no additional charge and can be accessed through the AWS Management Console or AWS Command Line Interface (CLI). To learn more about supported database engines, conversion capabilities, and regional availability, visit the DMS Schema Conversion documentation and cross-region inference documentation.
Today, we are excited to announce the general availability of 10 new highly expressive Amazon Polly Generative voices across 8 locales: Tiffany (American English), Brian (British English), Aria (New Zealand English), Jasmine (Singapore English), Florian (French), Ambre (French), Lorenzo (Italian), Beatrice (Italian), Lennart (German), and Sabrina (Swiss German).
Alongside these new voices, we have expanded the Generative engine to two new AWS regions in Europe (London) and Canada (Central). We have also introduced the Bidirectional Streaming API support for the Generative engine, allowing customers to stream text to Polly and receive synthesized audio back simultaneously. This makes it easy to feed output directly from a large language model (LLM) into speech synthesis, enabling real-time applications like chatbots and bespoke characters in games.
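For request/response (non-streaming) synthesis, selecting the Generative engine and one of the new voices is a matter of setting two parameters on the standard Polly SynthesizeSpeech call. A minimal sketch with boto3; the voice's availability in your Region is an assumption, so check the Polly documentation:

```python
def build_synthesis_request(text, voice_id="Tiffany"):
    """Kwargs for polly.synthesize_speech() using the Generative engine.

    "Tiffany" is one of the newly announced American English voices;
    swap in any Generative voice supported in your Region.
    """
    return {
        "Text": text,
        "VoiceId": voice_id,
        "Engine": "generative",   # select the Generative engine explicitly
        "OutputFormat": "mp3",
    }

req = build_synthesis_request("Hello from Amazon Polly!")

# polly = boto3.client("polly", region_name="eu-west-2")  # e.g. Europe (London)
# audio = polly.synthesize_speech(**req)["AudioStream"].read()
```

The Bidirectional Streaming API uses a different, streaming-oriented interface; see the Amazon Polly documentation for its request shape.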
Amazon Polly is a fully managed service that turns text into lifelike speech. This expansion addresses the growing demand for natural-sounding, lifelike speech generation in conversational AI and content creation. Developers building LLM-based interactive systems and speech-enabled applications can take advantage of the enhanced voice quality and variety, expanded language and feature support, as well as broader AWS region availability.
To hear how Polly voices sound, go to Amazon Polly Features. For more details on the Polly offerings and use, see the Amazon Polly documentation and pricing page.
AWS announces the Neuron Dynamic Resource Allocation (DRA) driver for Amazon Elastic Kubernetes Service (EKS), bringing Kubernetes-native hardware-aware scheduling to AWS Trainium-based instances. The Neuron DRA driver publishes rich device attributes — including hardware topology and Neuron-EFA PCIe co-location — directly to the Kubernetes scheduler, enabling topology-aware placement decisions without custom scheduler extensions.
Deploying AI workloads on Kubernetes requires ML engineers to make infrastructure decisions that are not directly related to model development, such as determining device counts, understanding hardware and network topologies, and writing accelerator-specific manifests. This creates friction, slows iteration, and tightly couples workloads to underlying infrastructure. As use cases expand to distributed training, long-context inference, and disaggregated architectures, this complexity becomes a scaling bottleneck.
The Neuron DRA driver removes this burden by separating infrastructure concerns from ML workflows. Infrastructure teams define reusable ResourceClaimTemplates that capture device topology, allocation, and networking policies—for example, mapping instance types to optimal NeuronDevice and EFA configurations. ML engineers can simply reference these templates in their manifests, without needing to reason about hardware details. This enables consistent deployment across workload types while allowing per-workload configuration so multiple workloads can efficiently share the same nodes.
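The split described above might look like the following sketch: an infrastructure-owned ResourceClaimTemplate, and a pod that only names the template. The API version, device class name, and attribute names here are illustrative assumptions; consult the Neuron DRA documentation for the values the driver actually publishes.

```yaml
# Owned by the infrastructure team: encodes device topology/allocation policy.
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaimTemplate
metadata:
  name: neuron-training          # hypothetical template name
spec:
  spec:
    devices:
      requests:
        - name: neuron
          deviceClassName: neuron.amazonaws.com   # hypothetical device class
---
# Referenced by the ML engineer: no hardware details in the workload manifest.
apiVersion: v1
kind: Pod
metadata:
  name: trainer
spec:
  resourceClaims:
    - name: neuron
      resourceClaimTemplateName: neuron-training
  containers:
    - name: train
      image: my-training-image   # hypothetical image
      resources:
        claims:
          - name: neuron
```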
The Neuron DRA driver supports all AWS Trainium instance types and is available in all AWS Regions where AWS Trainium is available.
For documentation, sample templates, and implementation guides, visit the Neuron DRA documentation.
Amazon Elastic Kubernetes Service (Amazon EKS) now offers a 99.99% Service Level Agreement (SLA) for clusters running on Provisioned Control Plane, up from the 99.95% SLA offered on standard control plane. Amazon EKS is also introducing the 8XL scaling tier, the largest available Provisioned Control Plane tier.
Provisioned Control Plane gives you the ability to select your cluster's control plane capacity from a set of well-defined scaling tiers, ensuring the control plane is pre-provisioned and ready to handle traffic spikes or unpredictable bursts. The higher 99.99% SLA is measured in 1-minute intervals, providing a more granular and stringent availability commitment for mission-critical workloads. The new 8XL tier offers double the Kubernetes API server request processing capacity of the next lower 4XL tier, enabling workloads such as ultra-scale AI/ML training, high-performance computing (HPC), and large-scale data processing.
Both the 99.99% SLA and the 8XL tier are available today in all AWS regions where Amazon EKS Provisioned Control Plane is offered. To learn more about the SLA, see the Amazon EKS Service Level Agreement. For 8XL pricing and capabilities, see the EKS pricing and EKS Provisioned Control Plane documentation.
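To put the tighter SLA in perspective, a rough back-of-the-envelope calculation shows the maximum downtime each availability target implies over a 30-day month (the actual SLA credit calculation in the Amazon EKS Service Level Agreement differs; this is only an illustration of the availability percentages):

```python
def max_downtime_minutes(sla_percent, days=30):
    """Maximum downtime (in minutes) implied by a monthly availability SLA."""
    minutes_in_month = days * 24 * 60   # 43,200 minutes in a 30-day month
    return minutes_in_month * (1 - sla_percent / 100)

print(round(max_downtime_minutes(99.95), 1))  # 21.6 minutes/month
print(round(max_downtime_minutes(99.99), 1))  # 4.3 minutes/month
```

In other words, the move from 99.95% to 99.99% cuts the permissible monthly downtime by roughly a factor of five.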
Amazon Bedrock AgentCore Runtime now supports WebRTC for real-time bidirectional streaming between clients and agents, adding to the existing WebSocket protocol support. With WebRTC, developers can build voice agents for browser and mobile applications that stream audio and video bidirectionally with low latency using peer-to-peer, UDP-based transport, enabling natural, real-time conversational experiences.
WebRTC joins WebSocket as the second bidirectional streaming protocol supported by AgentCore Runtime. While WebSocket provides persistent, full-duplex connections for text and audio streaming over TCP, WebRTC is optimized for real-time media delivery where low latency is critical, such as voice agents in browser and mobile applications. WebRTC requires a TURN relay for media traffic, and AgentCore Runtime gives you flexibility in how you set that up: Amazon Kinesis Video Streams managed TURN for a fully managed experience with native AWS IAM integration, a third-party provider, or your own self-hosted TURN infrastructure. Both protocols benefit from AgentCore Runtime session isolation, observability, and scaling.
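If you choose the Kinesis Video Streams managed TURN option, a WebRTC client typically obtains TURN URIs and short-lived credentials from the KVS signaling APIs. A minimal sketch with boto3; the channel ARN and client id are hypothetical, and how AgentCore Runtime provisions or names the signaling channel is an assumption:

```python
def build_ice_config_request(channel_arn, client_id="agentcore-client"):
    """Kwargs for kinesis-video-signaling get_ice_server_config(),
    which returns TURN server URIs and short-lived credentials."""
    return {
        "ChannelARN": channel_arn,
        "ClientId": client_id,   # hypothetical client identifier
        "Service": "TURN",
    }

req = build_ice_config_request(
    "arn:aws:kinesisvideo:us-east-1:111122223333:channel/example/123"  # hypothetical
)

# endpoint = boto3.client("kinesisvideo").get_signaling_channel_endpoint(
#     ChannelARN=req["ChannelARN"],
#     SingleMasterChannelEndpointConfiguration={
#         "Protocols": ["HTTPS"], "Role": "VIEWER",
#     },
# )["ResourceEndpointList"][0]["ResourceEndpoint"]
# signaling = boto3.client("kinesis-video-signaling", endpoint_url=endpoint)
# ice_servers = signaling.get_ice_server_config(**req)["IceServerList"]
```

The returned ICE server list is what the client hands to its WebRTC peer connection so media can relay through TURN when a direct path is unavailable.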
WebRTC is supported in AgentCore Runtime across fourteen AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Canada (Central), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), and Europe (Stockholm).
To get started, see Bidirectional streaming in the Amazon Bedrock AgentCore documentation, which includes ready-to-deploy examples for both protocols: an Amazon Nova Sonic voice agent with a KVS TURN server; Pipecat voice agents with WebSocket, WebRTC, and Daily transports; a LiveKit voice agent; and a Strands Agents SDK voice agent.