Today, AWS announces increased Amazon Elastic Block Store (Amazon EBS) performance for Amazon EC2 C8gn, M8gn, and R8gn instances in 48xlarge and metal-48xl sizes.
EC2 C8gn, M8gn, and R8gn instances are network-optimized instances powered by AWS Graviton4 processors and the latest 6th-generation AWS Nitro Cards. With the latest enhancements to the AWS Nitro System, we have doubled the maximum EBS performance on these instances in 48xlarge and metal-48xl sizes, from 60 Gbps of EBS bandwidth and 240,000 IOPS to 120 Gbps of EBS bandwidth and 480,000 IOPS. Customers running network-intensive workloads that also require additional block storage performance, such as data analytics and high-performance file systems, can benefit from the improved EBS performance.
All existing and new C8gn, M8gn, and R8gn instances in 48xlarge and metal-48xl sizes launched starting today will benefit from this performance increase at no additional cost. For running instances, customers can stop and start instances to enable this performance increase. The higher EBS performance is available in all AWS regions where these instance types are generally available today.
To learn more, see Amazon EC2 C8gn, M8gn, and R8gn instances and EBS-optimized instance types.
AWS today announced a new delivery option for AWS Data Exports, enabling FinOps teams to send Standard exports—including Cost and Usage Report 2.0 (CUR 2.0), FOCUS, Cost Optimization Recommendations, and Carbon Emissions reports—directly to any authorized AWS account's Amazon S3 bucket. This capability eliminates the need for customers to replicate the data across accounts or pay for duplicate storage.
With this launch, customers can now specify the destination S3 bucket in any AWS account when creating an export. The destination account owner controls which source accounts can deliver data through S3 bucket policies, so both accounts explicitly authorize where billing data flows. For example, a FinOps team can configure CUR 2.0 exports from their management account to flow directly into a centralized analytics account within their organization where their cost optimization tools reside, without building custom replication processes. This also supports the security best practice of keeping non-administrative workloads out of management accounts.
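As a rough illustration of the authorization model described above, the destination account owner could attach a bucket policy like the one built below. This is a hedged sketch only: the service principal (`billingreports.amazonaws.com`), the condition keys, the `cur` ARN shape, and the account ID and bucket name are assumptions modeled on the standard Data Exports delivery policy, not details taken from this announcement. Consult the AWS Data Exports documentation for the exact required policy.

```python
import json

# Hypothetical identifiers for illustration only.
SOURCE_ACCOUNT = "111122223333"    # management (source) account
BUCKET = "central-finops-exports"  # bucket in the analytics (destination) account

# Assumed policy shape: the billing service principal may write export
# objects, but only when acting on behalf of the authorized source account.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowDataExportsDelivery",
            "Effect": "Allow",
            "Principal": {"Service": "billingreports.amazonaws.com"},
            "Action": ["s3:PutObject", "s3:GetBucketPolicy"],
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            "Condition": {
                "StringLike": {
                    "aws:SourceAccount": SOURCE_ACCOUNT,
                    "aws:SourceArn": f"arn:aws:cur:us-east-1:{SOURCE_ACCOUNT}:definition/*",
                }
            },
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Because delivery requires this explicit statement on the destination bucket, the source account cannot route billing data anywhere the destination owner has not opted into.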
This feature is available in all commercial AWS Regions, except the AWS GovCloud (US) Regions and the China Regions.
To learn more about this feature, see AWS Data Exports and AWS Billing and Cost Management in the AWS Cost Management User Guide.
AWS Secrets Manager now supports hybrid post-quantum key exchange using ML-KEM (Module-Lattice-based Key-Encapsulation Mechanism) to secure TLS connections for retrieving and managing secrets. This protection is automatically enabled in Secrets Manager Agent (version 2.0.0+), AWS Lambda Extension (version 19+), and Secrets Manager CSI Driver (version 2.0.0+). For SDK-based clients, hybrid post-quantum key exchange is available in supported AWS SDKs including Rust, Go, Node.js, Kotlin, Python (with OpenSSL 3.5+), and Java v2 (v2.35.11+).
With this launch, your applications retrieve secrets over TLS connections that combine classical key exchange with post-quantum cryptography, helping protect against both traditional cryptographic attacks and future quantum computing threats known as "harvest now, decrypt later" (HNDL). No code changes, configuration updates, or migration effort are required for customers using the latest client versions except for Java v2. For example, a microservice requiring multiple secrets at startup can now retrieve them over quantum-resistant TLS connections by simply upgrading to the latest Secrets Manager Agent version. You can verify hybrid post-quantum key exchange is active by checking CloudTrail logs for the "X25519MLKEM768" key exchange algorithm in the tlsDetails field of GetSecretValue API calls.
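The CloudTrail check described above can be scripted. The sketch below is illustrative: the sample record is heavily simplified, and the exact field inside `tlsDetails` that carries the key-exchange algorithm (here assumed to be `keyExchangeAlgorithm`) may differ in real records, so the checker scans all `tlsDetails` values for the `X25519MLKEM768` marker rather than relying on one field name.

```python
# Simplified CloudTrail record for illustration; real records carry many
# more fields. Verify the exact tlsDetails layout against your own logs.
sample_event = {
    "eventSource": "secretsmanager.amazonaws.com",
    "eventName": "GetSecretValue",
    "tlsDetails": {
        "tlsVersion": "TLSv1.3",
        "cipherSuite": "TLS_AES_256_GCM_SHA384",
        "keyExchangeAlgorithm": "X25519MLKEM768",  # assumed field name
    },
}

def used_pq_key_exchange(event: dict) -> bool:
    """Return True if the event's TLS details mention the hybrid
    post-quantum group X25519MLKEM768 anywhere."""
    details = event.get("tlsDetails", {})
    return any("X25519MLKEM768" in str(value) for value in details.values())

print(used_pq_key_exchange(sample_event))  # True for the sample above
```

Running a check like this across recent `GetSecretValue` events gives a quick inventory of which clients have picked up hybrid post-quantum key exchange and which still need an upgrade.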
Hybrid post-quantum key exchange using ML-KEM for AWS Secrets Manager is available in all AWS Regions where AWS Secrets Manager is supported. To learn more, visit the AWS Secrets Manager documentation and the AWS Post-Quantum Cryptography migration page.
AWS Transform is now available through two additional developer tools: Kiro and VS Code. AWS Transform is an agentic migration and modernization factory designed to compress enterprise transformation timelines from years to months — handling everything from large-scale infrastructure migrations to continuous tech debt reduction, without the manual handoffs and lost context that commonly stall these programs.
With today’s launch, you can get started with AWS Transform custom transformations from wherever you already work: install the AWS Transform Power in Kiro, or install the AWS Transform extension in VS Code. AWS Transform custom transformations help you crush tech debt at scale — choose from AWS-managed transformations for common patterns like Java, Python, and Node.js version upgrades and AWS SDK migrations (boto2 to boto3, Java SDK v1 to v2, JS SDK v2 to v3), or define your own. These new surfaces make it easier to discover additional capabilities as they become available, build and iterate on your own custom transformations, and run any agent repeatedly or across thousands of repositories at once. Custom transformations are the first in a growing library of playbooks coming to developer tools, complementing the existing AWS Transform web console and CLI so you can start a job in your IDE, track progress in the web console, and finish transformations wherever it makes sense — with job state and context shared across every surface.
AWS Transform supports deploying to all AWS commercial Regions, and AWS Transform custom transformations are available in US East (N. Virginia) and Europe (Frankfurt). To learn more, visit the AWS Transform product page and user guide.
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) P6-B300 instances are available in the AWS GovCloud (US-East) Region. P6-B300 instances provide 8x NVIDIA Blackwell Ultra GPUs with 2.1 TB of high-bandwidth GPU memory, 6.4 Tbps EFA networking, 300 Gbps of dedicated ENA throughput, and 4 TB of system memory.
P6-B300 instances deliver 2x the networking bandwidth, 1.5x the GPU memory size, and 1.5x the GPU TFLOPS (at FP4, without sparsity) of P6-B200 instances, making them well suited for training and deploying trillion-parameter foundation models (FMs) and large language models (LLMs) with sophisticated techniques. The higher networking bandwidth and larger memory deliver faster training times and higher token throughput for AI workloads.
P6-B300 instances are now available in p6-b300.48xlarge size in the following AWS Regions: US West (Oregon) and AWS GovCloud (US-East). To learn more about P6-B300 instances, visit Amazon EC2 P6 instances.
Today, we’re announcing the general availability of AWS Interconnect – multicloud, a managed private connectivity service that connects your Amazon Virtual Private Cloud (Amazon VPC) directly to VPCs on other cloud providers. We’re also introducing AWS Interconnect – last mile, a new capability that simplifies how you establish high-speed, private connections to AWS from your […]
Developers who work in the terminal need tools that fit their workflow, not the other way around. That's why we built Kiro CLI, an agentic terminal that works out of the box and helps you ship high-quality code faster. Since launch, the response has been fantastic: you've told us what you love, what needs improvement, and what's missing. We listened, and today we're releasing three major features you've been asking for.
As AI adoption in the workplace expands, many companies are facing the next challenge. Although individual AI initiatives have begun, […]
Following the previous Week in Review: in 2026, AI-Driven Development with our customers […]
Compliance management can feel overwhelming. For many engineering teams, it is work that demands sustained, significant attention: teams spend 40+ hours per annual cycle gathering evidence, navigating cloud provider consoles, and building spreadsheets as audit deadlines loom. Planview, a leader in strategic portfolio management serving more than 3,000 customers worldwide, faced the same challenge. Maintaining SOC 2 compliance across its multi-service AWS infrastructure was consuming engineering hours that should have gone toward customer-facing features. Here's how Planview transformed its compliance workflows with Kiro CLI, saving more than 40 hours per compliance cycle.
AI agents and coding assistants interact with AWS resources through the Model Context Protocol (MCP). Unlike traditional applications with deterministic code paths, agents reason dynamically, choosing different tools or accessing different data depending on context. You must assume an agent can do anything within its granted entitlements, whether OAuth scopes, API keys, or AWS Identity […]
Bulletin ID: 2026-011-AWS
Scope: AWS
Content Type: Important (requires attention)
Publication Date: 2026/03/31 10:15 AM PDT
Description:
The AWS Common Runtime library is used by several AWS SDKs to communicate with event-stream services (e.g., Amazon Kinesis, Amazon Transcribe). We identified CVE-2026-5190: the AWS Common Runtime event-stream decoder component before version 0.6.0 might allow a third party operating a server to cause memory corruption, leading to arbitrary code execution in a client application that processes crafted event-stream messages.
Impacted versions:
- aws-c-event-stream < 0.6.0, and the following higher-level libraries that expose event-stream functionality:
- aws-iot-device-sdk-cpp-v2 < 1.42.1
- aws-iot-device-sdk-java-v2 < 1.30.1
- aws-iot-device-sdk-python-v2 < 1.28.2
- aws-iot-device-sdk-js-v2 < 1.25.1
- aws-sdk-swift < 1.6.70
- aws-sdk-cpp < 1.11.764
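A quick way to triage a dependency pin against the list above is a numeric version comparison. The sketch below mirrors the fixed (first safe) versions from the impacted-versions list; it is illustrative only and assumes plain dotted numeric versions with no pre-release or build suffixes (use a real version-parsing library such as `packaging` for anything more complex).

```python
# First safe versions, mirroring the impacted-versions list above.
FIXED = {
    "aws-c-event-stream": "0.6.0",
    "aws-iot-device-sdk-cpp-v2": "1.42.1",
    "aws-iot-device-sdk-java-v2": "1.30.1",
    "aws-iot-device-sdk-python-v2": "1.28.2",
    "aws-iot-device-sdk-js-v2": "1.25.1",
    "aws-sdk-swift": "1.6.70",
    "aws-sdk-cpp": "1.11.764",
}

def parse(version: str) -> tuple:
    """Parse a dotted numeric version ('1.11.764') into a comparable tuple.
    Assumes no pre-release tags."""
    return tuple(int(part) for part in version.split("."))

def is_impacted(package: str, installed: str) -> bool:
    """True if the installed version is strictly below the fixed version."""
    return parse(installed) < parse(FIXED[package])

print(is_impacted("aws-sdk-cpp", "1.11.700"))      # True: below 1.11.764
print(is_impacted("aws-c-event-stream", "0.6.0"))  # False: already at the fix
```

Tuple comparison works element by element, so `1.9.0` correctly sorts below `1.11.764` even though a naive string comparison would get it wrong.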
Please refer to the article below for the most up-to-date and complete information related to this AWS Security Bulletin.
Bulletin ID: 2026-012-AWS
Scope: AWS
Content Type: Important (requires attention)
Publication Date: 2026/04/02 11:30 AM PDT
Description:
Kiro IDE is an agentic development environment that makes it easy for developers to ship real engineering work with the help of AI agents.
We identified CVE-2026-5429, where unsanitized input during web page generation in the Kiro Agent webview in Kiro IDE before version 0.8.140 allows a remote unauthenticated threat actor to execute arbitrary code via a maliciously crafted color theme name when a local user opens the workspace. This issue requires the user to trust the workspace when prompted.
Impacted versions: < 0.8.140
Please refer to the article below for the most up-to-date and complete information related to this AWS Security Bulletin.
Bulletin ID: 2026-013-AWS
Scope: AWS
Content Type: Important (requires attention)
Publication Date: 2026/04/03 1:00 PM PDT
Description:
The Amazon Athena ODBC driver implements standard ODBC application programming interfaces (APIs). The ODBC driver provides access to Amazon Athena from any C/C++ application. The Amazon Athena ODBC driver provides 64-bit ODBC drivers for the Windows, Linux, and macOS operating systems.
We identified the following:
- CVE-2026-5485: OS command injection in browser-based authentication component (Linux only, fixed in 2.0.5.1)
- CVE-2026-35558: Improper neutralization of special elements in authentication components
- CVE-2026-35559: Out-of-bounds write in query processing components
- CVE-2026-35560: Improper certificate validation in identity provider connection components
- CVE-2026-35561: Insufficient authentication security controls in browser-based authentication components
- CVE-2026-35562: Allocation of resources without limits in parsing components
Impacted versions: CVE-2026-5485 was addressed in version 2.0.5.1 (Linux only). The remaining five (CVE-2026-35558 through CVE-2026-35562) were addressed in version 2.1.0.0 and apply to all supported platforms.
Please refer to the article below for the most up-to-date and complete information related to this AWS Security Bulletin.
Bulletin ID: 2026-014-AWS
Scope: AWS
Content Type: Important (requires attention)
Publication Date: 2026/04/06 2:00 PM PDT
Description:
Research and Engineering Studio (RES) on AWS is an open source web portal designed for administrators to create and manage secure cloud-based research and engineering environments. We identified the following issues in Research and Engineering Studio (RES) on AWS.
CVE-2026-5707: Unsanitized input in an OS Command in the virtual desktop session name handling in AWS Research and Engineering Studio (RES) version 2025.03 through 2025.12.01 might allow a remote authenticated actor to execute arbitrary commands as root on the virtual desktop host via a crafted session name.
CVE-2026-5708: Improper control of user-modifiable attributes in the session creation component in AWS Research and Engineering Studio (RES) before version 2026.03 might allow an authenticated remote user to escalate privileges and assume the Virtual Desktop Host instance profile permissions and interact with other AWS resources and services via a crafted API request.
CVE-2026-5709: Unsanitized input in the FileBrowser API in AWS Research and Engineering Studio (RES) version 2024.10 through 2025.12.01 might allow a remote authenticated actor to execute arbitrary commands on the cluster-manager EC2 instance via crafted input when using the FileBrowser functionality.
Impacted versions: <= 2025.12.01
Please refer to the article below for the most up-to-date and complete information related to this AWS Security Bulletin.
Bulletin ID: 2026-015-AWS
Scope: AWS
Content Type: Important (requires attention)
Publication Date: 2026/04/07 3:30 PM PDT
Description:
Firecracker is an open source virtualization technology that is purpose-built for creating and managing secure, multi-tenant container and function-based services.
We identified CVE-2026-5747, an out-of-bounds write issue in the virtio PCI transport in Firecracker 1.13.0 through 1.14.3 and 1.15.0 on x86_64 and aarch64 that might allow a local guest user with root privileges to crash the Firecracker VMM process or potentially execute arbitrary code on the host via modification of virtio queue configuration registers after device activation. Achieving code execution on the host requires additional preconditions, such as the use of a custom guest kernel or specific snapshot configurations.
No AWS service is affected.
Impacted versions: Firecracker 1.13.0 through 1.14.3, and 1.15.0
Please refer to the article below for the most up-to-date and complete information related to this AWS Security Bulletin.
With the new Spring AI AgentCore SDK, you can build production-ready AI agents and run them on the highly scalable AgentCore Runtime. The Spring AI AgentCore SDK is an open source library that brings Amazon Bedrock AgentCore capabilities into Spring AI. In this post, we build an AI agent starting with a chat endpoint, then adding streaming responses, conversation memory, and tools for web browsing and code execution.
In this post, we walk through how Guidesly built Jack AI on AWS using AWS Lambda, AWS Step Functions, Amazon Simple Storage Service (Amazon S3), Amazon Relational Database Service (Amazon RDS), Amazon SageMaker AI, and Amazon Bedrock to ingest trip media, enrich it with context, apply computer vision and generative AI, and publish marketing-ready content across multiple channels—securely, reliably, and at scale.
This post explores how Amazon SageMaker HyperPod provides a comprehensive solution for inference workloads. We walk you through the platform’s key capabilities for dynamic scaling, simplified deployment, and intelligent resource management. By the end of this post, you’ll understand how to use the HyperPod automated infrastructure, cost optimization features, and performance enhancements to reduce your total cost of ownership by up to 40% while accelerating your generative AI deployments from concept to production.
We're excited to announce the launch of Amazon SageMaker JumpStart optimized deployments. SageMaker JumpStart optimized deployments address the need for rich and straightforward deployment customization on SageMaker JumpStart by offering predefined deployment configurations designed for specific use cases. Customers maintain the same level of visibility into the details of their proposed deployments, but deployments are now optimized for their specific use case and performance constraints.
In this post, we introduce the Generative AI Path-to-Value (P2V) framework, a structured approach to help you move generative AI initiatives from concept to production and sustained value creation.
Organizations using AWS Outposts racks commonly manage capacity from a single AWS account and share resources through AWS Resource Access Manager (AWS RAM) with other AWS accounts (consumer accounts) within AWS Organizations. In this post, we demonstrate one approach to create a multi-account serverless solution to surface costs in shared AWS Outposts environments using Amazon […]