AWS and OpenAI are expanding their partnership to bring frontier intelligence to the infrastructure millions of organizations already trust. Enterprises want the most capable AI models and agents, with the security, operational maturity, and data governance that production workloads demand. Today, we’re bringing those together with three new offerings on Amazon Bedrock, all in limited preview: the latest OpenAI models, Codex, and Managed Agents powered by OpenAI.
First, the latest OpenAI models are available on Amazon Bedrock. For the first time, AWS customers can access OpenAI frontier models through the same Bedrock services they already use for model access, fine-tuning, and orchestration. OpenAI models on Bedrock inherit the enterprise controls customers depend on, including IAM, AWS PrivateLink, guardrails, encryption, and CloudTrail logging.

Second, Codex on Amazon Bedrock brings the OpenAI coding agent into the AWS environments where enterprise teams already build. Customers authenticate with AWS credentials and run inference through Bedrock. Codex will be available through Bedrock via the Codex CLI, desktop app, and VS Code extension. Usage of both OpenAI models and Codex can be applied toward existing AWS cloud commitments.

Lastly, Amazon Bedrock Managed Agents, powered by OpenAI, makes it fast to deploy production-ready OpenAI-powered agents on AWS. At the core are the latest OpenAI frontier models and the OpenAI agent harness, engineered for faster execution, sharper reasoning, and reliable steering of long-running tasks. Every agent has its own identity, logs each action, and runs in your environment with all inference on Amazon Bedrock. Managed Agents works with Amazon Bedrock AgentCore, which provides the default compute environment.
Read the blog to learn more. To follow our progress and be among the first to hear about the latest updates, register here.
Amazon Connect Talent is now available in Preview, giving talent acquisition leaders an AI-powered hiring solution that accelerates candidate selection at scale. Informed by decades of Amazon's hiring science, Amazon Connect Talent uses AI agents to conduct structured voice interviews, administer science-backed assessments, and score candidates consistently — freeing recruiters to focus on strategic decisions. Candidates interview 24/7 from any device. Recruiters review scores, transcripts, and detailed candidate evaluations generated by their AI teammate — empowering them to make faster hiring decisions with consistent objectivity.
Preview capabilities include AI-driven skills assessments, AI-led voice interviews with adaptive questioning, a brand-customizable mobile-first candidate portal, a comprehensive recruiter dashboard, system admin onboarding tools, and Applicant Tracking System (ATS) integrations for quick deployment. Amazon Connect Talent scales to handle hiring surges, evaluating hundreds of candidates simultaneously.
Amazon Connect Talent is available in AWS US East (N. Virginia) and US West (Oregon) regions. To learn more and request access, visit the Amazon Connect Talent page.
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C8gn instances, powered by the latest-generation AWS Graviton4 processors, are available in the AWS Europe (Milan) and Asia Pacific (Hong Kong) regions. The new instances provide up to 30% better compute performance than Graviton3-based Amazon EC2 C7gn instances. Amazon EC2 C8gn instances feature the latest 6th generation AWS Nitro Cards, and offer up to 600 Gbps network bandwidth, the highest network bandwidth among network-optimized EC2 instances.
Take advantage of the enhanced networking capabilities of C8gn to scale performance and throughput, while optimizing the cost of running network-intensive workloads such as network virtual appliances, data analytics, CPU-based artificial intelligence and machine learning (AI/ML) inference.
For increased scalability, C8gn instances offer instance sizes up to 48xlarge, up to 384 GiB of memory, and up to 120 Gbps of bandwidth to Amazon Elastic Block Store (EBS). C8gn instances support Elastic Fabric Adapter (EFA) networking on the 16xlarge, 24xlarge, 48xlarge, metal-24xl, and metal-48xl sizes, which enables lower latency and improved cluster performance for workloads deployed on tightly coupled clusters.
C8gn instances are available in the following AWS Regions: US East (N. Virginia, Ohio), US West (Oregon, N. California), Europe (Frankfurt, Stockholm, Ireland, London, Spain, Zurich, Milan), Asia Pacific (Singapore, Malaysia, Sydney, Thailand, Mumbai, Seoul, Melbourne, Jakarta, Hyderabad, Tokyo, Hong Kong), Middle East (UAE), Africa (Cape Town), Canada (Central), Canada West (Calgary), South America (São Paulo), and AWS GovCloud (US-East, US-West).
To learn more, see Amazon EC2 C8gn Instances. To begin your Graviton journey, visit the Level up your compute with AWS Graviton page. To get started, see AWS Management Console, AWS Command Line Interface (AWS CLI), and AWS SDKs.
Amazon WorkSpaces Personal now provides an enhanced experience for administrators migrating WorkSpaces from PCoIP to DCV protocol, including a guided console action for protocol modification, checkpoint snapshots for rollback support, and session blocking during migration.
Amazon DCV is a high-performance streaming protocol built by AWS that powers Amazon WorkSpaces services. By migrating to DCV, customers gain access to broader operating system support including Windows 11 and Windows Server 2025, enhanced security features such as certificate-based authentication and WebAuthn, and improved streaming performance. Administrators can now modify a WorkSpace's streaming protocol directly from the AWS Management Console through a single-click action, in addition to the existing command line interface (CLI) and API methods. Before migration begins, WorkSpaces automatically takes a checkpoint snapshot, so administrators can restore to a known-good state without data loss if migration fails. Session provisioning is also blocked during migration, with clear error messaging for end users who attempt to connect, preventing connection attempts from interfering with the migration process. Together, these enhancements help administrators migrate WorkSpaces to DCV with greater confidence and operational simplicity.
These enhancements are available in all AWS commercial and AWS GovCloud (US) Regions where Amazon WorkSpaces Personal is supported.
To get started, sign in to the Amazon WorkSpaces console. For more information, see Modify protocols section in the Amazon WorkSpaces Administration Guide. To learn more about Amazon WorkSpaces, visit the Amazon WorkSpaces product page.
AWS Cost Optimization Hub now supports direct CSV download in the console, enabling you to export your cost optimization recommendations to your local machine with a single click. This one-click export complements the existing Data Export feature for automated exports to Amazon S3.
With CSV download, you can instantly export recommendations that use your current console filters, sorting preferences, and grouping settings. The download begins immediately, making it easy to analyze recommendations in spreadsheet applications, share with stakeholders who don't have AWS console access, or work with recommendations offline in your preferred tools.
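For teams that prefer scripting over the console, a similar CSV can be assembled from the Cost Optimization Hub ListRecommendations API. A minimal sketch, using illustrative field names and sample data (the real column set comes from the API response, not from this example):

```python
import csv
import io

def recommendations_to_csv(items: list) -> str:
    """Flatten recommendation records into CSV text, mirroring the
    console's one-click download. The field names below are
    illustrative; actual columns come from the Cost Optimization Hub
    ListRecommendations response."""
    fields = ["recommendationId", "resourceId", "actionType",
              "estimatedMonthlySavings"]
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields, extrasaction="ignore")
    writer.writeheader()
    writer.writerows(items)
    return buf.getvalue()

# Sample records standing in for an API response.
sample = [{"recommendationId": "rec-1", "resourceId": "i-0abc123",
           "actionType": "Rightsize", "estimatedMonthlySavings": 42.50}]
print(recommendations_to_csv(sample))
```

In practice you would feed the function the items returned by a paginated ListRecommendations call (for example via boto3's "cost-optimization-hub" client) instead of the sample list.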
This feature is available now in all regions where AWS Cost Optimization Hub is offered. To learn more, visit the Cost Optimization Hub page.
Amazon GameLift Streams now supports Proton 10, an updated version of the Proton compatibility layer for running Windows games on Linux-based stream classes. Proton 10 improves compatibility with newer titles, updates the graphics translation layers (VKD3D and DXVK) for better performance in many titles, and updates Media Foundation to fix long-standing video playback issues such as black screens and color bars, among other improvements.
With Proton 10, game developers can stream a broader catalog of Windows titles — including modern DirectX 12 games — to end users on any device with improved rendering quality and performance. Proton 10 is available at no additional cost; existing Amazon GameLift Streams pricing for Linux stream classes applies.
You can use Proton 10 in all AWS Regions where Amazon GameLift Streams is available. For a full list of supported Regions, see the AWS Region table.
To get started, select Proton 10 as the runtime when creating or updating stream groups. To learn more, see Runtime environment in the Amazon GameLift Streams Developer Guide.
AWS Glue 5.1 is now available in the Asia Pacific (New Zealand), AWS GovCloud (US-West) and AWS GovCloud (US-East) Regions.
AWS Glue is a serverless, scalable data integration service that simplifies discovering, preparing, moving, and integrating data from multiple sources. AWS Glue 5.1 upgrades core engines to Apache Spark 3.5.6, Python 3.11, and Scala 2.12.18, bringing performance and security enhancements. This release also updates support for open table format libraries, including Apache Hudi 1.0.2, Apache Iceberg 1.10.0, and Delta Lake 3.3.2. Additionally, AWS Glue 5.1 introduces support for Apache Iceberg format version 3.0, adding default column values, deletion vectors for merge-on-read tables, multi-argument transforms, and row lineage tracking. This release extends AWS Lake Formation fine-grained access control to write operations (both DML and DDL) for Spark DataFrames and Spark SQL. Previously, this capability was limited to read operations only. AWS Glue 5.1 also adds full-table access control in Apache Spark for Apache Hudi and Delta Lake tables, providing more comprehensive security options for your data.
With this expansion, AWS Glue 5.1 is now available in all AWS commercial and AWS GovCloud (US) Regions.
You can get started with AWS Glue 5.1 using AWS APIs, AWS CLI, AWS SDK, or AWS Glue Studio. To learn more, visit the AWS Glue product page and our documentation.
Amazon Bedrock AgentCore Runtime now supports Node.js as a managed language runtime for direct code deployment, alongside the existing Python support. Developers can bring their Node.js-based agents to AgentCore Runtime by packaging their agent code and dependencies into a .zip file archive, without building or managing a container image.
To deploy, write your agent in Node.js, zip it up with its dependencies, upload the zip to Amazon S3, and create your agent runtime. You can deploy a plain Node.js app, a TypeScript project (compiled to JavaScript first), or an agent built with any agent framework like the Strands Agents SDK. Dependencies can be included as a `node_modules` folder in the zip, or bundled into a single JavaScript file using esbuild to keep the package smaller.
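The packaging step above can be scripted. A minimal Python sketch follows; the archive layout expectations should be checked against the AgentCore documentation, and the bucket and key names in the trailing comment are placeholders:

```python
import zipfile
from pathlib import Path

def package_agent(src_dir: str, zip_path: str) -> str:
    """Zip a Node.js agent directory (code plus node_modules) for
    direct code deployment. Archive paths are made relative to
    src_dir so the entry point sits at the archive root."""
    src = Path(src_dir)
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(src.rglob("*")):
            if path.is_file():
                # as_posix() keeps forward slashes in archive names.
                zf.write(path, path.relative_to(src).as_posix())
    return zip_path

# Next steps (not run here): upload the archive to Amazon S3 and create
# the agent runtime, e.g.
#   boto3.client("s3").upload_file(zip_path, "my-bucket", "agent.zip")
# "my-bucket" and "agent.zip" are placeholder names.
```

Bundling with esbuild, as noted above, can shrink the archive considerably compared with shipping a full node_modules tree.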
Node.js agents on AgentCore Runtime benefit from the same capabilities as other supported runtimes, including session isolation, built-in authentication with SigV4 and OAuth 2.0, bidirectional streaming, managed session storage, and observability with Amazon CloudWatch. Observability is available through the AWS Distro for OpenTelemetry Node.js auto-instrumentation package.
To learn more, see Direct code deployment for Node.js in the Amazon Bedrock AgentCore documentation.
Amazon OpenSearch Service now supports JSON Web Key Set (JWKS) URL configuration for JWT authentication. You can configure a JWKS URL as part of your JWT authentication setup, allowing your OpenSearch domains to automatically fetch and validate public keys from your identity provider's JWKS endpoint.
Previously, JWT authentication required you to manually configure and update static public keys. With JWKS URL support, your domains automatically retrieve the latest public keys from your identity provider, eliminating the need to manually update keys when your identity provider rotates signing keys. The configuration includes built-in security validation checks and clear error messaging to help troubleshoot issues.
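To make the mechanism concrete, here is a minimal stdlib sketch of the key-resolution step any JWKS consumer performs: matching the token's "kid" header against the fetched key set. Signature verification itself, which the service also handles, requires a cryptography library and is omitted here; this is an illustration, not the service's implementation.

```python
import base64
import json

def select_jwk(jwt_token: str, jwks: dict) -> dict:
    """Return the JWKS entry whose 'kid' matches the token header.

    The token header is the first dot-separated segment of a JWT,
    base64url-encoded JSON. After this lookup, a verifier would use
    the selected key to check the token's signature."""
    header_b64 = jwt_token.split(".")[0]
    header_b64 += "=" * (-len(header_b64) % 4)  # restore base64url padding
    header = json.loads(base64.urlsafe_b64decode(header_b64))
    for key in jwks.get("keys", []):
        if key.get("kid") == header.get("kid"):
            return key
    raise KeyError(f"no JWKS key with kid={header.get('kid')!r}")
```

With a JWKS URL configured, the domain refreshes this key set from the identity provider itself, which is what removes the manual key-rotation step described above.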
JWKS URL support requires OpenSearch version 3.3 or later. You can set up JWKS URL configuration using the Amazon OpenSearch Service console, the AWS CLI, or the CreateDomain and UpdateDomainConfig APIs.
JWKS URL configuration for JWT authentication is available in all AWS Regions where Amazon OpenSearch Service is available. To learn more, see JWT authentication and authorization in the Amazon OpenSearch Service Developer Guide.
Amazon EMR 7.13 is now available with Python 3.11 and version upgrades for additional applications.
EMR 7.13 ships with Python 3.11 for Apache Spark by default. This release also includes patch version upgrades for Apache HBase 2.6.3, Apache Hadoop 3.4.2, Apache Phoenix 5.3.0, and AWS SDK v2.41.11.
Amazon EMR 7.13 is available in all AWS regions where Amazon EMR is available. To learn more about EMR 7.13, visit the Amazon EMR 7.13 Release Guide.
Amazon Relational Database Service (Amazon RDS) for Db2 is now available in the AWS GovCloud (US-East, US-West) Regions. Amazon RDS for Db2 makes it easy to set up, operate, and scale Db2 databases in the cloud. Customers can deploy a Db2 database in minutes with automatically configured parameters for optimal performance. For databases set up with Multi-AZ configuration, Amazon RDS performs synchronous replication to a standby instance in a different Availability Zone to provide high availability.
To use Amazon RDS for Db2, customers can use Bring Your Own License (BYOL) available in Standard and Advanced Editions. Your RDS for Db2 usage may be eligible for Database Savings Plan, a flexible pricing model that offers savings in exchange for a commitment to a specific amount of usage (measured in $/hour) over a 1-year term. You can learn more about eligible usage on the Database Savings Plans pricing page.
To learn more about Amazon RDS for Db2, refer to documentation and pricing pages.
AWS Transfer Family Terraform module now includes end-to-end examples for deploying Transfer Family endpoints integrated with Okta and Microsoft Entra ID as custom identity providers (IdPs) for authentication and access control. This allows enterprises already using these platforms to automate and streamline the deployment of Transfer Family servers with their existing identity infrastructure.
The Terraform module and examples are based on the open source Custom IdP solution, which provides standardized integration with widely used identity providers and includes built-in security controls such as multi-factor authentication, audit logging, and per-user IP allowlisting. The Okta example supports password-based authentication flows, time-based one-time password (TOTP)-based MFA, and attribute retrieval, while the Entra ID example demonstrates password-based authentication for organizations standardized on Microsoft's identity platform.
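The time-based one-time passwords in that MFA flow follow RFC 6238. As a rough illustration of what a TOTP verifier computes (a sketch of the standard algorithm, not the Custom IdP solution's actual code):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 TOTP: HOTP (RFC 4226) applied to the current time step."""
    counter = struct.pack(">Q", unix_time // step)  # 8-byte big-endian counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: key "12345678901234567890", time 59,
# SHA-1, 8 digits -> "94287082".
print(totp(b"12345678901234567890", 59, digits=8))
```

A verifier typically accepts codes from the current time step plus one step on either side, to absorb clock drift between the authenticator app and the server.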
Customers can get started by using the new module from the Terraform Registry. To learn more about the Transfer Family Custom IdP solution, visit the user guide. To see all the AWS Regions where Transfer Family is available, visit the AWS Capabilities table.
At the "What's Next with AWS" 2026 event, AWS launched Amazon Quick—an AI assistant for work with a desktop app and expanded integrations—and expanded Amazon Connect into four agentic AI solutions for supply chain, hiring, customer experience, and healthcare. AWS also extended its partnership with OpenAI, bringing models like GPT-5.5, Codex, and Managed Agents to Amazon Bedrock in limited preview.
As organizations expand their Amazon Web Services (AWS) footprint, managing secure, scalable, and cost-efficient access across multiple accounts becomes increasingly important. AWS IAM Identity Center offers a centralized, unified solution for managing workforce access to AWS accounts. It simplifies authentication, enhances security, and provides a seamless user sign-in experience to AWS services across diverse environments. […]
The AWS Customer Incident Response Team (AWS CIRT) regularly encounters patterns that repeat across their engagements when helping customers respond to security incidents. We’re passionate about making sure that information is widely accessible so that everyone can improve their security posture and their organization’s resilience to disruption. The primary method we use to share this […]
Today, we are excited to announce the day zero availability of NVIDIA Nemotron 3 Nano Omni on Amazon SageMaker JumpStart. In this post, we walk through the model architecture and key capabilities of Nemotron 3 Nano Omni, explore the enterprise use cases it unlocks, and show you how to deploy and run inference using Amazon SageMaker JumpStart.
In this post, we explore what it takes to migrate a traditional text agent into a conversational voice assistant using Amazon Nova 2 Sonic. We compare text and voice agent requirements, highlight design priorities for different use cases, break down agent architecture, and address common concerns like tools and sub-agents for reuse and system prompt adaptation. This post helps you navigate the migration process and avoid common pitfalls.
This post shows you how to deploy a serverless MCP proxy on Amazon Bedrock AgentCore Runtime that gives you a programmable layer to implement proper governance, controls, and observability aligned with an organization's security policies.
In this post, you'll learn how Vanguard built their Virtual Analyst solution by focusing on eight guiding principles of AI-ready data, the AWS services that powered their implementation, and the measurable business outcomes they achieved.