Starting today, Amazon EC2 Capacity Manager supports tag-based dimensions, enabling you to use tags from your EC2 resources to group and filter capacity metrics. EC2 Capacity Manager helps you monitor and optimize capacity usage across On-Demand Instances, Spot Instances, and Capacity Reservations. This launch also introduces Account Name as a new built-in dimension.
You can activate up to five custom tag keys, such as environment, team, or cost-center, and use them alongside built-in dimensions like Region, Instance Type, and Availability Zone to group and filter capacity metrics by tag values in the console and APIs. You can also include tag data as additional columns in newly created S3 data exports. Capacity Manager additionally provides four tags by default: EC2 Auto Scaling group name, EKS cluster name, EKS Kubernetes node pool, and Karpenter node pool. The new Account Name dimension makes it easier to identify accounts when analyzing cross-account capacity data across your organization.
This feature is available in all AWS Regions where EC2 Capacity Manager is available. To get started, navigate to the Settings tab in Capacity Manager and choose Manage tag keys, or use the AWS CLI. To learn more, see Managing monitored tag keys in the Amazon EC2 User Guide. For more information about Amazon EC2 Capacity Manager, visit the EC2 Capacity Manager documentation.
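Before a tag key can surface as a dimension, your EC2 resources need to carry it. As a minimal sketch using the standard EC2 tagging commands (the instance ID and tag values below are placeholders):

```shell
# Tag an instance with the keys you plan to activate in Capacity Manager.
aws ec2 create-tags \
  --resources i-0123456789abcdef0 \
  --tags Key=environment,Value=production Key=team,Value=platform

# Verify the tags were applied.
aws ec2 describe-tags \
  --filters "Name=resource-id,Values=i-0123456789abcdef0"
```

Once the resources are tagged, activate the corresponding tag keys under Settings > Manage tag keys in the Capacity Manager console so they appear as dimensions.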
AWS Private Certificate Authority (AWS Private CA) now supports customer managed permissions in AWS Resource Access Manager (AWS RAM). AWS Private CA lets you share certificate authorities (CAs) across accounts using AWS RAM so you can centralize your PKI instead of creating separate CAs in every account. With customer managed permissions, you can now select exactly which AWS Private CA API operations to allow when sharing a CA, granting only the specific operations each consuming account needs.
Previously, you could only use AWS managed permissions, which provide predefined sets of actions and restrict cross-account issuers to specific certificate templates. Now you can select from read operations (e.g., DescribeCertificateAuthority, GetCertificate, and GetCertificateAuthorityCertificate) and write operations (e.g., IssueCertificate and RevokeCertificate) to tailor access for each consuming account or organizational unit. With customer managed permissions, cross-account issuers are not restricted to a specific certificate template.
Customer managed permissions for AWS Private CA are available in all AWS Regions where AWS Private CA and AWS RAM are available. To learn more, see Customer managed permissions in RAM in the AWS Private CA User Guide and Creating and using customer managed permissions in the AWS RAM User Guide.
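As a rough sketch of creating such a permission with the AWS RAM CLI, the resource type string and the exact action list below are assumptions for illustration; check the RAM console for the supported values for AWS Private CA:

```shell
# Create a customer managed permission that allows the read operations
# plus certificate issuance, but not revocation.
aws ram create-permission \
  --name PrivateCaIssueOnly \
  --resource-type "acm-pca:CertificateAuthority" \
  --policy-template '{
    "Effect": "Allow",
    "Action": [
      "acm-pca:DescribeCertificateAuthority",
      "acm-pca:GetCertificate",
      "acm-pca:GetCertificateAuthorityCertificate",
      "acm-pca:IssueCertificate"
    ]
  }'
```

You can then attach the permission to a resource share so each consuming account receives only the operations it needs.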
Amazon OpenSearch Serverless now supports Zstandard codecs for index storage, giving customers greater control over the trade-off between storage costs and query performance. With this launch, customers can configure Zstandard compression to achieve up to 32% reduction in index size compared to the default LZ4 codec, helping lower managed storage costs for data-intensive workloads.
Customers running large-scale log analytics, observability pipelines, and time-series workloads on Amazon OpenSearch Serverless can benefit most from Zstandard compression where high data volumes make storage efficiency a significant cost driver. The Zstandard compression algorithm is available in two different modes in Amazon OpenSearch Serverless: zstd and zstd_no_dict. Customers can tune the compression level to balance their specific needs: lower levels (e.g., level 1) deliver meaningful storage savings with minimal impact on indexing throughput and query latency, while higher levels (e.g., level 6) maximize compression ratios at the cost of slower indexing speeds.
Zstandard codec support is available today in all AWS Regions where Amazon OpenSearch Serverless is supported. To get started, you can specify these codecs in your index settings at creation time. For more information, see the Amazon OpenSearch Serverless documentation.
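A minimal sketch of setting the codec at index creation time follows. The collection endpoint and index name are placeholders, and requests to OpenSearch Serverless must be SigV4-signed, which plain curl does not do (the awscurl tool is used here for that reason):

```shell
# Create an index that uses the zstd_no_dict codec at compression level 3.
awscurl --service aoss --region us-east-1 \
  -X PUT "https://my-collection.us-east-1.aoss.amazonaws.com/application-logs" \
  -H "Content-Type: application/json" \
  -d '{
    "settings": {
      "index.codec": "zstd_no_dict",
      "index.codec.compression_level": 3
    }
  }'
```

Choosing zstd_no_dict trades a small amount of compression ratio for lower CPU overhead on retrieval, which tends to suit log analytics workloads.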
Today, AWS Marketplace announces the Discovery API, giving you programmatic access to product and pricing information across the AWS Marketplace catalog, including SaaS, AI agents and tools, AMI, containers, and machine learning models.
With the Discovery API, buyers can embed catalog data into internal portals, enrich procurement tools with current pricing and offer terms, and streamline vendor evaluation workflows. Sellers and channel partners can surface product listings, public pricing, and private offer details directly within their own websites and storefronts — helping customers browse, compare, and move to purchase without leaving the partner experience.
The API provides access to product descriptions, categories, pricing across public and private offers, and offer terms, so you can build experiences tailored to how your organization discovers and procures software through AWS Marketplace.
The AWS Marketplace Discovery API is available in US East (N. Virginia), US West (Oregon), and Europe (Ireland).
You can get started by configuring IAM permissions for your AWS account and calling the API through the AWS SDK. For more information, see the AWS Marketplace Discovery API Reference.
AWS Agent Registry, available through Amazon Bedrock AgentCore, is now in preview. It is a private, governed catalog and discovery layer for agents, tools, skills, MCP servers, and custom resources within your organization. It gives teams complete visibility into their AI landscape, enabling them to discover existing agents and tools instead of rebuilding capabilities that already exist. The registry can be accessed via the AgentCore console UI, APIs (AWS CLI, AWS SDK), or as an MCP server that builders can query and invoke directly from their IDEs. The registry supports both IAM-based and OAuth (custom JWT)-based access.
Teams can register resources manually through the console or API, or use URL-based discovery, which automatically retrieves metadata such as tool schemas and capability descriptions from a live MCP server or agent endpoint. Records go through an approval workflow where administrators can approve records before they become discoverable, and they can plug the registry into their existing approval workflows to enforce governance policies. AWS CloudTrail provides complete audit trails of all registry access and administrative actions, ensuring compliance and security oversight. For discovery, the registry offers both semantic and keyword search, so developers can quickly find agents by describing their use case in natural language.
AWS Agent Registry (preview) is available in five AWS Regions where AgentCore is available: US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Sydney), Europe (Ireland), and US East (N. Virginia). To learn more, read the blog post and dive deeper in the documentation.
Amazon RDS Blue/Green Deployments now supports Amazon RDS Proxy, enabling faster application recovery during switchover by eliminating DNS propagation delays. Blue/Green Deployments create a fully managed staging environment (Green) that allows you to deploy and test production changes, keeping your current production database (Blue) safe. When ready, you can switchover to the new production environment and your applications begin accessing it immediately without any configuration changes.
During a Blue/Green Deployment switchover for single-Region configurations, RDS Proxy actively monitors database instances and detects when the Green environment becomes the new production environment. This allows RDS Proxy to quickly redirect connections to the Green environment, enabling faster application recovery. You don't need to modify your drivers or change your existing application setup.
Amazon RDS Blue/Green Deployments with Amazon RDS Proxy is available for Amazon Aurora with MySQL compatibility, Amazon Aurora with PostgreSQL compatibility, Amazon RDS for MySQL, Amazon RDS for PostgreSQL, and Amazon RDS for MariaDB in all commercial AWS Regions where RDS Proxy is available.
In a few clicks, update your databases using RDS Blue/Green Deployments via the Amazon RDS console or the AWS CLI. To learn more, see the Blue/Green Deployments overview in the Amazon RDS documentation.
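The console flow above can also be sketched with the AWS CLI; the deployment name, source ARN, and target engine version below are placeholders:

```shell
# Create a Blue/Green Deployment for an existing RDS database.
aws rds create-blue-green-deployment \
  --blue-green-deployment-name my-upgrade \
  --source arn:aws:rds:us-east-1:123456789012:db:my-database \
  --target-engine-version 8.0.40

# After validating the green environment, switch it over to production.
aws rds switchover-blue-green-deployment \
  --blue-green-deployment-identifier <deployment-id>
```

With RDS Proxy in front of the database, clients connected through the proxy endpoint are redirected to the green environment during switchover without application changes.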
Amazon S3 Lifecycle now prevents expiration and transition actions on objects that failed replication, helping you to coordinate replication configuration or permissions changes with actions defined in your lifecycle rules.
Incorrect permissions or replication configuration can prevent objects from being replicated. With this change, S3 Lifecycle no longer expires or transitions objects that have failed replication, even if they match one of the lifecycle rules that you have defined. Once you have corrected your replication configuration or permissions, you can use S3 Batch Replication to replicate objects that previously failed. After successful replication, S3 Lifecycle will automatically process these objects according to your configured rules.
This change applies automatically to all existing and new S3 Lifecycle configurations, across 37 AWS Regions, including the AWS China and AWS GovCloud (US) Regions. We are in the process of deploying this change and plan to complete the deployment in the coming days. To learn more, visit S3 Lifecycle documentation and S3 Replication troubleshooting documentation.
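To find objects held in this state, you can inspect an object's replication status; the bucket and key names below are placeholders:

```shell
# Inspect an object's replication status. A value of FAILED means
# lifecycle expiration and transition actions are now held for this
# object until it is re-replicated (for example with S3 Batch Replication).
aws s3api head-object \
  --bucket my-source-bucket \
  --key logs/2026/04/01/app.log \
  --query ReplicationStatus
```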
Amazon OpenSearch Service now provides a unified observability experience that brings together metrics, logs, traces, and AI agent tracing in a single interface. This release introduces native integration with Amazon Managed Service for Prometheus and comprehensive agent tracing capabilities, addressing the dual challenges of prohibitive costs from premium observability platforms and operational complexity from fragmented tooling. Site Reliability Engineers, DevOps Engineers, and Platform Engineering teams can now consolidate their observability stack without costly data duplication or constant context switching between multiple tools.
You can now query Prometheus metrics directly using native PromQL syntax alongside logs and traces in OpenSearch UI's observability workspace—without duplicating data. Combined with new application monitoring workflows powered by RED metrics (Rate, Errors, Duration) and AI agent tracing using OpenTelemetry GenAI semantic conventions, operations teams can correlate slow traces to application logs, overlay Prometheus metrics on service dashboards, and trace LLM agent execution—all without switching tools. This live query architecture delivers significant cost reduction compared to premium platforms while maintaining operational excellence.
The new unified observability experience is available on OpenSearch UI in 20 AWS Regions: US East (N. Virginia, Ohio), US West (N. California, Oregon), Asia Pacific (Hong Kong, Mumbai, Osaka, Seoul, Singapore, Sydney, Tokyo), Europe (Frankfurt, Ireland, London, Milan, Paris, Spain, Stockholm), Canada (Central), and South America (São Paulo).
To learn more, visit the OpenSearch Service observability documentation and direct query documentation.
Amazon Bedrock now supports cost allocation by IAM principal, such as IAM users and IAM roles, in AWS Cost and Usage Report 2.0 (CUR 2.0) and Cost Explorer. This enables customers to understand and attribute Bedrock model inference costs across users, teams, projects, and applications.
With this launch, customers can tag their IAM users and roles with attributes like team, project, or cost center, activate them as cost allocation tags, and analyze Bedrock model inference costs by the tags in Cost Explorer or at the line-item level in CUR 2.0. To get started, tag your IAM users and roles and activate them as cost allocation tags in the Billing and Cost Management console. Then create a CUR 2.0 data export and select "Include caller identity (IAM principal) allocation data" or filter by tags in Cost Explorer.
This feature is available in all AWS commercial Regions where Amazon Bedrock is available. To learn more, see Using IAM principal for Cost Allocation documentation. To get started with Amazon Bedrock, visit Amazon Bedrock documentation.
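The tagging and activation steps above can be sketched with the AWS CLI; the role name and tag values are placeholders:

```shell
# Tag the IAM role that your application uses to invoke Bedrock.
aws iam tag-role \
  --role-name bedrock-app-role \
  --tags Key=team,Value=ml-platform Key=cost-center,Value=cc-1234

# Activate the tag key as a cost allocation tag via the Cost Explorer API.
aws ce update-cost-allocation-tags-status \
  --cost-allocation-tags-status TagKey=team,Status=Active
```

Activated tag keys can take up to 24 hours to appear in Cost Explorer and CUR 2.0, so newly tagged usage may not be attributable immediately.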
Amazon Timestream for InfluxDB now supports customer-defined maintenance windows, giving you control over when routine maintenance is performed on your InfluxDB databases. This feature is available for both InfluxDB 2 instances and InfluxDB 3 clusters across all supported editions.
With this launch, you can specify a weekly maintenance window using a day-and-time format in your preferred timezone. Timestream for InfluxDB supports IANA timezone identifiers such as America/New_York, Europe/London, and Asia/Tokyo, and automatically handles Daylight Saving Time transitions so you don't need to manually adjust your schedule. If you don't specify a maintenance window, the service continues to manage maintenance timing automatically.
You can set or update your preferred maintenance window when creating or modifying a resource using the Amazon Timestream for InfluxDB console, AWS CLI, or AWS SDKs. You can use Amazon Timestream for InfluxDB Customer-Defined Maintenance Windows in all Regions where Timestream for InfluxDB is offered.
To get started with Amazon Timestream for InfluxDB, visit the Amazon Timestream for InfluxDB console. For more information, see the Amazon Timestream for InfluxDB documentation and pricing page.
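As a rough sketch of setting the window from the CLI: the maintenance-window parameter name and value format below are assumptions based on the day-and-time description above, so check the Timestream for InfluxDB CLI reference for the exact shape:

```shell
# Set a weekly maintenance window on an InfluxDB instance
# (Sunday 03:00-04:00 in the America/New_York timezone).
aws timestream-influxdb update-db-instance \
  --identifier my-influxdb-instance \
  --maintenance-window "Day=Sunday,StartTime=03:00,Duration=1h,TimeZone=America/New_York"
```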
Covering Project Glasswing, a joint effort between AWS and Anthropic, the announcement of the Claude Mythos Preview, the general availability of AWS Security Agent for autonomous penetration testing, and hallucination prevention through Automated Reasoning in Amazon Bedrock, this post introduces AWS's latest work on AI-powered security defense at scale and its security philosophy of getting ahead of threats before they materialize.
On April 1, 2026, Amazon Elastic Container Service (Amazon […]
On April 3, 2026, cross-account safeguards in Amazon Bedrock Guardrails […]
During the week of March 30, 2026, my team and I visited the AWS Hong Kong User Group. Hong Kong has a small […]
In this post, you will learn how to build stateful MCP servers that request user input during execution, invoke LLM sampling for dynamic content generation, and stream progress updates for long-running tasks. You will see code examples for each capability and deploy a working stateful MCP server to Amazon Bedrock AgentCore Runtime.
This post walks you through three steps: starting a session and generating the Live View URL, rendering the stream in your React application, and wiring up an AI agent that drives the browser while your users watch. At the end, you will have a working sample application you can clone and run.
Today, we're announcing AWS Agent Registry (preview) in AgentCore, a single place to discover, share, and reuse AI agents, tools, and agent skills across your enterprise.
This post shows you how to manage FM transitions in Amazon Bedrock, so you can make sure your AI applications remain operational as models evolve. We discuss the three lifecycle states, how to plan migrations with the new extended access feature, and practical strategies to transition your applications to newer models without disruption.