Amazon Connect now offers agentic speech-to-speech voice experiences in an additional AWS Region: Europe (London). Amazon Connect also adds three new speech-to-speech voices across US Spanish and UK English: Pedro (es-US), Amy (en-GB), and Brian (en-GB).
Amazon Connect's agentic self-service capabilities enable AI agents to understand, reason, and take action across voice and messaging channels to automate routine and complex customer service tasks. Connect's agentic speech-to-speech voice AI agents understand not only what customers say but how they say it, adapting voice responses to match customer tone and sentiment while maintaining natural conversational pace. With these updates, you can deliver agentic speech-to-speech voice experiences to customers across a new region with a wider selection of voices.
To learn more about this feature, see the Amazon Connect Administrator Guide. To learn more about Amazon Connect, a complete AI-powered contact center solution delivering personalized customer experiences at scale, visit the Amazon Connect website.
Amazon Redshift improves the performance of BI dashboards and ETL workloads by speeding up new queries by up to 7x. This significantly improves the response times of low-latency SQL queries, such as those used in near real-time analytics applications, BI dashboards, ETL pipelines, and autonomous, goal-seeking AI agents. Customers experience substantially faster query response times as Redshift accelerates the process of preparing the SQL query for execution. Queries start faster and return results more quickly. This improvement is enabled automatically at no additional cost.
To deliver this major improvement, Redshift added a new optimization to query compilation where new queries are processed immediately using composition. Composition is a technique that generates a lightweight arrangement of pre-existing logic while simultaneously creating highly optimized, query-specific code that is compiled and executed across available compute resources to further boost performance. Composition removes compilation from the critical path of query execution and provides immediate execution while compilation proceeds in the background. With this optimization, new queries processed by Redshift start faster and deliver performance consistent with subsequent runs.
This optimization is enabled by default for any SQL query across all provisioned clusters and serverless workgroups, in all commercial AWS Regions where Amazon Redshift operates. It is available on the Redshift current track with other tracks following in upcoming patch releases. No action is required from customers to benefit from this enhancement, and it is free of charge.
You can now run OpenSearch version 3.5 on Amazon OpenSearch Service. OpenSearch 3.5 introduces significant improvements in agentic AI capabilities, search relevance tooling, and observability features to help you build powerful agentic applications.
With this launch, agentic conversation memory captures conversation context and tool reasoning in persistent storage, enabling your agents to provide coherent, accurate responses across multi-turn conversations. In addition, context management optimizes what you send to large language models (LLMs) through automatic truncation and summarization, reducing your token costs while maintaining response quality. Finally, a redesigned no-code agent interface supports Model Context Protocol (MCP) integration, search templates, conversational memory, and single model configurations, allowing you to build sophisticated agents without writing code.
You can now tune search quality faster with expanded search relevance workbench capabilities. LLM-powered evaluation automatically assesses search results with customizable prompts, letting you scale relevance testing beyond manual judgments and accelerate quality improvements. Scheduled experiments run tests nightly, weekly, or monthly, helping you track search quality trends over time and catch regressions early. Enhanced single query comparison displays agentic search queries alongside agent summaries, making it easier to validate and optimize agent-driven search experiences.
For information on upgrading to OpenSearch 3.5, please see the documentation. OpenSearch 3.5 is now available in all AWS Regions where Amazon OpenSearch Service is available.
Amazon Inspector now offers expanded agentless EC2 scanning with enhanced detection coverage, including new support for Windows operating system vulnerability scanning without requiring an agent. Security teams and IT administrators can now detect vulnerabilities across a broader range of software and applications on their EC2 instances — including WordPress, Apache HTTP Server, Python packages, and Ruby gems — as well as Windows OS vulnerabilities, all through agentless scanning. Customers automatically receive findings for newly supported software and applications with no configuration changes required.
Amazon Inspector is also introducing Windows Knowledge Base (KB)-based findings for Windows OS vulnerabilities. Rather than receiving a separate finding for each CVE addressed by a single Microsoft patch, customers now receive a single consolidated KB finding that groups all related CVEs together. Each KB finding surfaces the highest CVSS score, EPSS score, and exploit availability from its constituent CVEs, and includes a direct link to the relevant Microsoft KB article, making it straightforward to understand exactly which patch to apply and why. All existing CVE-based Windows OS findings will automatically transition to KB-based findings, and customers do not need to take any additional action.
Both capabilities are available in all AWS Regions where Amazon Inspector is available. To learn more, visit the Amazon Inspector product page and the Amazon Inspector documentation.
You can now create Amazon S3 Access Grants in the AWS Asia Pacific (New Zealand) Region.
Amazon S3 Access Grants map identities in directories such as Microsoft Entra ID, or AWS Identity and Access Management (IAM) principals, to datasets in S3. This helps you manage data permissions at scale by automatically granting S3 access to end users based on their corporate identity.
Visit the AWS Region Table for complete regional availability information. To learn more about Amazon S3 Access Grants, visit our product page.
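As a hedged sketch of how such a grant is defined, the following builds a request for the S3 Control CreateAccessGrant API; the account ID, location ID, prefix, and grantee identifier are all placeholders, not real resources:

```python
import json

# Hypothetical CreateAccessGrant request. Every identifier below is a
# placeholder; substitute your own account, location, and directory user.
grant_request = {
    "AccountId": "111122223333",              # account that owns the grants instance
    "AccessGrantsLocationId": "default",      # a registered S3 location
    "AccessGrantsLocationConfiguration": {
        "S3SubPrefix": "analytics-team/*"     # scope the grant to one prefix
    },
    "Grantee": {
        "GranteeType": "DIRECTORY_USER",      # e.g. a Microsoft Entra ID user
        "GranteeIdentifier": "example-directory-user-id",
    },
    "Permission": "READ",
}

# With AWS credentials configured, the equivalent boto3 call would be:
#   boto3.client("s3control").create_access_grant(**grant_request)
print(json.dumps(grant_request, indent=2))
```

The grant maps one directory identity to one prefix with one permission level; managing access at scale means creating a grant per identity-to-dataset mapping rather than editing bucket policies.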
Starting today, the compute-optimized Amazon EC2 C8a instances are available in the Asia Pacific (Tokyo) Region. C8a instances are powered by 5th Gen AMD EPYC processors (formerly codenamed Turin) with a maximum frequency of 4.5 GHz, delivering up to 30% higher performance and up to 19% better price-performance compared to C7a instances.
C8a instances deliver 33% more memory bandwidth compared to C7a instances, making these instances ideal for latency-sensitive workloads. Compared to Amazon EC2 C7a instances, they are up to 57% faster for Groovy JVM, enabling better response times for Java-based applications. C8a instances offer 12 sizes, including 2 bare metal sizes. This range of instance sizes allows customers to precisely match their workload requirements.
C8a instances are built on the AWS Nitro System and are ideal for high performance, compute-intensive workloads such as batch processing, distributed analytics, high performance computing (HPC), ad serving, highly-scalable multiplayer gaming, and video encoding.
To get started, sign in to the AWS Management Console. Customers can purchase these instances via Savings Plans, On-Demand instances, and Spot instances. For more information visit the Amazon EC2 C8a instance page.
AWS Config announces the launch of an additional 75 managed Config rules for various use cases such as security, durability, and operations. You can now search, discover, enable, and manage these additional rules directly from AWS Config and govern more use cases for your AWS environment.
With this launch, you can now enable these controls across your account or across your organization. For example, you can assess your security posture across AWS Amplify, Amazon SageMaker, Amazon Route 53, and more. Additionally, you can leverage Conformance Packs to group these new controls and deploy them across an account or an entire organization, streamlining your multi-account governance.
For the full list of recently released rules, visit the AWS Config developer guide. For a description of each rule and the AWS Regions in which it is available, refer to our Config managed rules documentation. To start using Config rules, refer to our documentation.
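As an illustrative sketch, enabling any one managed rule comes down to a PutConfigRule request naming an AWS-managed source identifier. The rule shown here (S3_BUCKET_VERSIONING_ENABLED) is a long-standing managed rule used purely as an example, not one of the 75 newly launched rules:

```python
import json

# Sketch of a PutConfigRule request for an AWS managed rule.
config_rule = {
    "ConfigRuleName": "s3-bucket-versioning-enabled",
    "Source": {
        "Owner": "AWS",                                   # AWS managed rule
        "SourceIdentifier": "S3_BUCKET_VERSIONING_ENABLED",
    },
    "Scope": {
        # Restrict evaluation to S3 buckets only.
        "ComplianceResourceTypes": ["AWS::S3::Bucket"],
    },
}

# With AWS credentials configured, the equivalent boto3 call would be:
#   boto3.client("config").put_config_rule(ConfigRule=config_rule)
print(json.dumps(config_rule, indent=2))
```

The newly launched rules are enabled the same way, substituting the rule's managed source identifier from the Config managed rules documentation.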
Amazon EC2 High Memory U7i instances with 6TB of memory (u7i-6tb.112xlarge) are now available in AWS Asia Pacific (Malaysia). U7i instances are part of the AWS 7th generation and are powered by custom 4th Generation Intel Xeon Scalable processors (Sapphire Rapids). U7i-6tb instances offer 6TiB of DDR5 memory, enabling customers to scale transaction processing throughput in a fast-growing data environment.
U7i-6tb instances deliver 448 vCPUs with up to 100 Gbps of Amazon EBS bandwidth for faster data loading and backups, 100 Gbps of network bandwidth, and ENA Express. U7i instances are ideal for customers using mission-critical in-memory databases like SAP HANA, Oracle, and SQL Server.
To learn more about U7i instances, visit the High Memory instances page.
Amazon Elastic Container Registry (Amazon ECR) pull through cache now supports Chainguard’s registry as an upstream source. With today’s release, customers now benefit from the security and availability of Amazon ECR for private Chainguard images.
As customers continue to scale their use of Chainguard images, keeping them synchronized with Chainguard's registry becomes increasingly important. With ECR's pull through cache feature, customers can keep Chainguard images in sync without additional workflows or tools to manage. Amazon ECR's pull through cache supports frequent registry syncs, helping to keep container images sourced from Chainguard up to date. Customers can also apply ECR features such as image scanning and lifecycle policies to their cached Chainguard images.
The pull through cache for Chainguard is available in all AWS Regions where Amazon ECR pull through cache is supported. To get started, review our documentation.
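As a hedged sketch, a pull through cache rule pairs a local repository prefix with the upstream registry and a Secrets Manager secret holding the Chainguard credentials. The upstream URL (cgr.dev) and the secret ARN below are illustrative; confirm the exact values in the ECR documentation:

```python
import json

# Sketch of a CreatePullThroughCacheRule request for Chainguard's registry.
# The secret ARN is a placeholder for a Secrets Manager secret containing
# your Chainguard registry credentials.
cache_rule = {
    "ecrRepositoryPrefix": "chainguard",   # images cached under <registry>/chainguard/...
    "upstreamRegistryUrl": "cgr.dev",      # Chainguard's registry (illustrative)
    "credentialArn": (
        "arn:aws:secretsmanager:us-east-1:111122223333:"
        "secret:ecr-pullthroughcache/chainguard-EXAMPLE"
    ),
}

# With AWS credentials configured, the equivalent boto3 call would be:
#   boto3.client("ecr").create_pull_through_cache_rule(**cache_rule)
print(json.dumps(cache_rule, indent=2))
```

After the rule exists, pulling an image through the prefix (for example `<account>.dkr.ecr.<region>.amazonaws.com/chainguard/<image>`) caches it in ECR and keeps it synchronized on subsequent pulls.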
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M6in and M6idn instances are available in the AWS Europe (London) Region. These sixth-generation network optimized instances, powered by 3rd Generation Intel Xeon Scalable processors and built on the AWS Nitro System, deliver up to 200 Gbps of network bandwidth, 2x more than comparable fifth-generation instances. Customers can use M6in and M6idn instances to scale the performance and throughput of network-intensive workloads such as high-performance file systems, distributed web scale in-memory caches, caching fleets, real-time big data analytics, and Telco applications such as 5G User Plane Function.
M6in and M6idn instances are available in 10 different instance sizes including metal, offering up to 128 vCPUs and 512 GiB of memory. They deliver up to 100Gbps of Amazon Elastic Block Store (EBS) bandwidth, and up to 400K IOPS. M6in and M6idn instances offer Elastic Fabric Adapter (EFA) networking support on 32xlarge and metal sizes. M6idn instances offer up to 7.6 TB of high-speed, low-latency instance storage.
With this regional expansion, M6in and M6idn instances are available in the following AWS Regions: US East (Ohio, N. Virginia), US West (N. California, Oregon), Europe (Ireland, Frankfurt, Spain, Stockholm, Zurich, London), Asia Pacific (Mumbai, Singapore, Tokyo, Sydney, Seoul), Canada (Central), and AWS GovCloud (US-West). Customers can purchase the new instances through Savings Plans, On-Demand, and Spot instances. To learn more, see the M6in and M6idn instances page.
Amazon Bedrock expands model selection for customers by adding support for GLM 5 and MiniMax M2.5. GLM 5 is a frontier‑class, general‑purpose large language model optimized for complex systems engineering and long‑horizon agentic tasks. It builds on the GLM 4.5 agent‑centric lineage and is designed to support multi‑step reasoning, math (including AIME‑style benchmarks), advanced coding, and tool‑augmented workflows, with long context support suitable for sophisticated agents and enterprise applications. MiniMax M2.5 is an agent‑native frontier model trained explicitly to reason efficiently, decompose tasks optimally, and complete complex workflows under real‑world time and cost constraints. It achieves task completion speeds comparable to or faster than leading proprietary frontier models by combining high inference throughput with reinforcement learning focused on token‑efficient reasoning and better decision‑making in agentic scaffolds.
MiniMax M2.5 and GLM 5 are now available in Amazon Bedrock across select AWS Regions. For the full list of available AWS Regions, refer to the documentation.
Amazon Bedrock now supports NVIDIA Nemotron 3 Super, an open hybrid Mixture-of-Experts (MoE) model designed for complex multi-agent applications. Built for agentic workloads, Nemotron 3 Super delivers fast, cost-efficient inference, enabling AI agents to maintain focus and accuracy across long, multi-step tasks without losing context. Fully open with weights, datasets, and recipes, the model supports easy customization and secure deployment, making it well-suited for enterprises, startups, and individual developers building multi-agent workflows and advanced reasoning applications.
Amazon Bedrock gives customers access to Nemotron 3 Super through a single, fully managed API — with no infrastructure to provision or models to host. Bedrock's serverless inference, built-in security controls, and compatibility with OpenAI API specifications make it easy to integrate Nemotron 3 Super into existing workflows and deploy at production scale with confidence.
NVIDIA Nemotron 3 Super is now available in Amazon Bedrock across select AWS Regions. For the full list of available AWS Regions, refer to the documentation. To learn more and get started, visit the Amazon Bedrock console or see the service documentation. To get started with Amazon Bedrock OpenAI API-compatible service endpoints, see the Amazon Bedrock documentation.
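Because Bedrock exposes OpenAI-compatible endpoints, integrating a model like Nemotron 3 Super can look like a standard chat-completions request. The following is a hypothetical sketch: the Region, model ID, and API key are placeholders, so consult the Bedrock documentation for the real values before use:

```python
import json

# Placeholder Region and model ID; substitute real values from the
# Bedrock documentation before sending any request.
region = "us-east-1"
base_url = f"https://bedrock-runtime.{region}.amazonaws.com/openai/v1"

payload = {
    "model": "nvidia.nemotron-3-super",   # hypothetical model ID
    "messages": [
        {"role": "user", "content": "Summarize our open tickets by priority."}
    ],
    "max_tokens": 512,
}

# With a Bedrock API key, an OpenAI-compatible client would point at
# base_url, e.g.:
#   client = OpenAI(base_url=base_url, api_key=BEDROCK_API_KEY)
#   client.chat.completions.create(**payload)
print(base_url)
print(json.dumps(payload, indent=2))
```

The benefit of the compatible endpoint is that existing OpenAI-style tooling can switch to Bedrock by changing only the base URL, credentials, and model ID.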
Amazon SageMaker Unified Studio adds custom metadata search filters, enabling customers to narrow catalog search results using organization-specific attributes. This helps customers find the right assets faster by filtering on fields like business region, data classification, or study name, in addition to existing keyword and semantic search.
With custom metadata search filters, customers can add filters based on any custom metadata fields available in their catalog, such as sample type or study ID. Filters support string fields with a "contains" operator and numeric fields (Integer, Long) with equals, greater than, and less than operators. Customers can also filter by asset name, description, and date range. Multiple filters can be combined, and filter selections persist across browser sessions.
Custom metadata search filters are available in all AWS Regions where Amazon SageMaker Unified Studio is supported. Standard Amazon SageMaker pricing applies.
To get started, navigate to the Browse Assets page in Amazon SageMaker Unified Studio and use the "+ Add Filter" button to create custom filters. You can also use the SearchListings API with metadata form attributes in the filters parameter. For more information, see the Amazon SageMaker Unified Studio documentation.
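As a hedged sketch of the programmatic path, a SearchListings request can combine keyword search with custom metadata filters. The domain identifier and the attribute names ("studyId", "dataClassification") below are hypothetical, organization-specific fields, not built-in attributes:

```python
import json

# Illustrative SearchListings request: keyword search plus two custom
# metadata filters combined with "and". All identifiers are placeholders.
search_request = {
    "domainIdentifier": "dzd_EXAMPLE123",
    "searchText": "clinical trial results",
    "filters": {
        "and": [
            # String fields match with a "contains" semantics.
            {"filter": {"attribute": "studyId", "value": "STUDY-042"}},
            {"filter": {"attribute": "dataClassification", "value": "internal"}},
        ]
    },
}

# With AWS credentials configured, the equivalent boto3 call would be:
#   boto3.client("datazone").search_listings(**search_request)
print(json.dumps(search_request, indent=2))
```

Additional clauses can be appended to the `and` list to narrow results further, matching the behavior of stacking filters in the Browse Assets UI.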
Amazon Quick is now available in the AWS Europe (London) region (eu-west-2). This launch allows customers in the United Kingdom to access the full power of Amazon Quick while meeting local and regional requirements for data sovereignty.
Amazon Quick provides business users an agentic teammate that quickly answers questions at work and turns those answers into actions. With Amazon Quick, every user is empowered to make better decisions faster and take action without switching applications, using AI they can trust. Today’s launch allows customers to take advantage of Amazon Quick’s capabilities including AI-powered chat, Research, Spaces, Flows, and QuickSight dashboards, with their data stored and processed locally within the London region. This expansion also supports in-region inference through EU-CRIS (Europe Cross-Region Inference), ensuring that inference requests from London instances are routed exclusively within European AWS Regions. Customers in regulated industries such as financial services, healthcare, and the public sector can meet strict data sovereignty requirements of UK data protection frameworks.
For a full list of AWS regions where Amazon Quick is available, visit the Quick regional availability page. To learn more, visit the Amazon Quick documentation or product detail page.
Amazon Quick is now available in the AWS Europe (Frankfurt) region (eu-central-1). This launch allows customers in Germany to access the full power of Amazon Quick while meeting local and regional requirements for data sovereignty.
Amazon Quick provides business users an agentic teammate that quickly answers questions at work and turns those answers into actions. With Amazon Quick, every user is empowered to make better decisions faster and take action without switching applications, using AI they can trust. Today’s launch allows customers to take advantage of Amazon Quick’s capabilities including AI-powered chat, Research, Spaces, Flows, and QuickSight dashboards, with their data stored and processed locally within the Frankfurt region. This expansion also supports in-region inference through EU-CRIS (Europe Cross-Region Inference), ensuring that inference requests from Frankfurt instances are routed exclusively within European AWS Regions. Customers in regulated industries such as financial services, healthcare, and the public sector can meet strict data sovereignty requirements of EU data protection frameworks including GDPR.
For a full list of AWS regions where Amazon Quick is available, visit the Quick regional availability page. To learn more, visit the Amazon Quick documentation or product detail page.
Amazon Quick is now available in the AWS Asia Pacific (Tokyo) region (ap-northeast-1). This launch allows customers in Japan to access the full power of Amazon Quick while meeting local and regional requirements for data sovereignty.
Amazon Quick provides business users an agentic teammate that quickly answers questions at work and turns those answers into actions. With Amazon Quick, every user is empowered to make better decisions faster and take action without switching applications, using AI they can trust. Today’s launch allows customers to take advantage of Amazon Quick’s capabilities including AI-powered chat, Research, Spaces, Flows, and QuickSight dashboards, with their data stored and processed locally within the Tokyo region. This expansion also supports in-region inference through JP-CRIS (Japan Cross-Region Inference), ensuring that inference requests from Tokyo instances are routed exclusively within Japanese AWS Regions. Customers in regulated industries such as financial services, healthcare, and the public sector can meet strict data sovereignty requirements of Japan's data protection frameworks, including the Act on the Protection of Personal Information (APPI).
For a full list of AWS regions where Amazon Quick is available, visit the Quick regional availability page. To learn more, visit the Amazon Quick documentation or product detail page.
We’re thrilled to celebrate three exceptional developer community leaders as AWS Heroes. These individuals represent the heart of what makes the AWS community so vibrant. In addition to sharing technical knowledge, they build connections, forge genuine human relationships, and create pathways for others to grow. From pioneering cloud culture in mountain villages to leading cybersecurity […]
AWS Security Hub is expanding with new capabilities that unify security operations across multicloud environments. By consolidating security signals in a common data layer, providing consistent posture management, and prioritizing risk analysis, it delivers a single, unified experience for detecting and responding to security risks that span multiple cloud environments.
Amazon threat intelligence has identified an active Interlock ransomware campaign exploiting CVE-2026-20131, a critical vulnerability in Cisco Secure Firewall Management Center (FMC) Software, disclosed by Cisco on March 4, 2026, that could allow an unauthenticated, remote attacker to execute arbitrary Java code as root on an affected device. After Cisco’s disclosure, Amazon threat […]
In this post, you'll learn how AWS DevOps Agent integrates with your existing observability stack to provide intelligent, automated responses to system events.
In this post, you will learn how to migrate from Nova 1 to Nova 2 on Amazon Bedrock. We cover model mapping, API changes, code examples using the Converse API, guidance on configuring new capabilities, and a summary of use cases. We conclude with a migration checklist to help you plan and execute your transition.
Working with the AWS Generative AI Innovation Center, Bark developed an AI-powered content generation solution that demonstrated a substantial reduction in production time in experimental trials while improving content quality scores. In this post, we walk you through the technical architecture we built, the key design decisions that contributed to success, and the measurable results achieved, giving you a blueprint for implementing similar solutions.
This post shows you how to build an AI-powered A/B testing engine using Amazon Bedrock, Amazon Elastic Container Service, Amazon DynamoDB, and the Model Context Protocol (MCP). The system improves traditional A/B testing by analyzing user context to make smarter variant assignment decisions during the experiment.
In this post, we show how to evaluate AI agents systematically using Strands Evals. We walk through the core concepts, built-in evaluators, multi-turn simulation capabilities, and practical approaches and patterns for integration.
Today, we are launching the Nova Forge SDK, which makes LLM customization accessible, empowering teams to harness the full potential of language models without the challenges of dependency management, image selection, and recipe configuration, ultimately lowering the barrier to entry.
In this post, we walk you through the process of using the Nova Forge SDK to train an Amazon Nova model using Amazon SageMaker AI Training Jobs.