Amazon CloudWatch announces Query Studio in public preview, a unified query and visualization experience that brings native PromQL querying to CloudWatch for the first time. Query Studio combines PromQL and CloudWatch Metric Insights in a single interface, enabling you to query AWS vended metrics and OpenTelemetry metrics using the language you prefer without switching between consoles.
Query Studio provides a visual form builder with autocomplete and a code editor with syntax highlighting, making it accessible to both new and experienced users. For example, a team running applications on Amazon EC2 can correlate their custom OpenTelemetry application metrics with EC2 vended metrics side by side, quickly spot issues across their stack, and create alarms or add charts to dashboards directly from their query results.
Amazon CloudWatch Query Studio is available in public preview in US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), Asia Pacific (Singapore), and Europe (Ireland). Standard CloudWatch dashboard pricing applies; see the pricing page for details.
To get started, open Query Studio from the Metrics console or dashboard edit mode in the Amazon CloudWatch console. Learn more on the Amazon CloudWatch documentation page.
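The Metric Insights side of Query Studio maps onto the existing CloudWatch GetMetricData API. As a rough sketch of what such a query looks like programmatically (the SQL expression, time range, and period below are illustrative assumptions, not values from the announcement):

```python
import json

# Illustrative GetMetricData request carrying a Metrics Insights SQL query.
# The expression ranks EC2 instances by average CPU; values are examples only.
request = {
    "MetricDataQueries": [
        {
            "Id": "cpu_by_instance",
            "Expression": (
                'SELECT AVG(CPUUtilization) FROM SCHEMA("AWS/EC2", InstanceId) '
                "GROUP BY InstanceId ORDER BY AVG() DESC LIMIT 10"
            ),
            "Period": 300,  # 5-minute resolution
        }
    ],
    "StartTime": "2026-03-23T00:00:00Z",
    "EndTime": "2026-03-23T06:00:00Z",
}

# With boto3 this would be passed to cloudwatch.get_metric_data(**request);
# here we only render the payload so the sketch runs without AWS credentials.
print(json.dumps(request, indent=2))
```

Query Studio itself is a console experience, so this is only a sketch of the equivalent API-level request.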
Quick Sight in Amazon Quick now supports sparklines, enabling authors to embed compact, inline trend visualizations directly within table cells. Sparklines allow users to display line or area charts inside individual table columns, providing at-a-glance trend context alongside their data without requiring separate chart visuals. Authors can add sparklines to tables by configuring a metric with a date dimension, and Quick automatically renders a miniature trend line within each row.
This enhancement helps organizations quickly identify patterns, anomalies, and directional trends within tabular data — such as revenue over time by product, weekly ticket volumes by team, or monthly utilization by region — without navigating away from the table view. Sparklines support customization options including visual type (line or area), line color, line interpolation (linear, smooth, or stepped), and Y-axis behavior (shared or independent scaling across rows), giving authors flexibility to tailor the visualization to their dashboard's needs.
This feature is now available in all Amazon Quick regions where Quick Sight is supported. Learn more about how to add sparklines to your tables in Amazon Quick.
AWS Secrets Manager console now allows you to specify a custom customer managed AWS Key Management Service (KMS) key when creating secrets. You can now provide a KMS key Amazon Resource Name (ARN) directly in the console, in addition to selecting from the pre-populated list of KMS keys in your current account.
Previously, when creating a secret through the AWS Secrets Manager console, you could only select customer managed KMS keys from a dropdown list that displayed keys within the same AWS account. With this enhancement, you can now enter a KMS key ARN to use a key from a different account, aligning the console experience with the existing API capabilities. This simplifies cross-account encryption workflows and provides greater flexibility in managing your encryption keys across multiple accounts.
This feature is available in all AWS Regions where AWS Secrets Manager is available. To learn more about using customer managed KMS keys with AWS Secrets Manager, visit the AWS Secrets Manager documentation.
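At the API level this capability has existed for some time; the console change brings parity. A minimal sketch of the underlying CreateSecret call with a cross-account key ARN (account IDs, names, and the secret value are placeholders):

```python
import json

# Illustrative parameters for secretsmanager.create_secret with a customer
# managed KMS key identified by full ARN. The key's account (111122223333)
# may differ from the account creating the secret; all values are examples.
params = {
    "Name": "prod/payments/db-credentials",
    "KmsKeyId": (
        "arn:aws:kms:us-east-1:111122223333:"
        "key/1234abcd-12ab-34cd-56ef-1234567890ab"
    ),
    "SecretString": json.dumps({"username": "app_user", "password": "example-only"}),
}

# With boto3: secretsmanager.create_secret(**params)
# The account segment of the ARN identifies the key-owning account.
print(params["KmsKeyId"].split(":")[4])
```

Cross-account use still requires the key policy in the owning account to grant the caller `kms:GenerateDataKey` and `kms:Decrypt` on the key.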
Today, AWS announces the general availability of Partner Revenue Measurement User Agent string — a new capability that enables AWS Partners to measure AWS service consumption driven by their solutions using AWS APIs and SDKs. Partner Revenue Measurement allows Partners to better understand their AWS revenue impact and product consumption patterns. The User Agent string capability allows Partners to embed a unique product code from their AWS Marketplace listing as a user agent to quantify and measure the AWS revenue impact of that solution across certain services.
Partners can now add a user agent (format APN_1.1/pc_&lt;AWS Marketplace product-code&gt;$) in their application to enable AWS service consumption measurement by solution across partner-managed and customer-managed accounts. Partners can also set an environment variable in their SDKs or configure a setting in their AWS shared configuration file to automatically apply the User Agent string to all AWS service calls. This method supports automated deployments across the Python, Node.js, Java, Kotlin, and other AWS SDKs by capturing attribution for API-driven workloads, and it complements Partner Revenue Measurement's Resource Tagging and AWS Marketplace Metering integration capabilities.
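One way the environment-variable approach could look in practice is via the AWS SDKs' shared "app ID" setting. The variable name `AWS_SDK_UA_APP_ID` and config key `sdk_ua_app_id` below are the SDKs' generic app-ID mechanism and are assumptions here; the product code is a placeholder. Confirm the exact mechanism against the implementation guide:

```python
import os

# Hypothetical AWS Marketplace product code; the string follows the
# APN_1.1/pc_<product-code>$ format from the announcement.
product_code = "example1234productcode"
user_agent = f"APN_1.1/pc_{product_code}$"

# Setting the SDK app-ID env var appends the string to every AWS service
# call's User-Agent header for SDK clients created in this process.
os.environ["AWS_SDK_UA_APP_ID"] = user_agent

# Equivalent shared-config entry (~/.aws/config):
#   [default]
#   sdk_ua_app_id = APN_1.1/pc_example1234productcode$
print(os.environ["AWS_SDK_UA_APP_ID"])
```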
Partner Revenue Measurement is generally available in all commercial regions. To learn more about User Agent string implementation, visit the User Agent implementation guide and the list of supported services. To learn more about Partner Revenue Measurement capabilities, review the onboarding guide.
Today, AWS announces the launch of Partner Revenue Measurement integration with AWS Marketplace Metering for Amazon Machine Image (AMI) and Machine Learning (ML) products listed in AWS Marketplace. Partner Revenue Measurement allows Partners to better understand their AWS revenue impact and product consumption patterns. The AWS Marketplace Metering capability automatically measures AWS service consumption when customers purchase and use AMI and ML products via AWS Marketplace.
Partners can now gain visibility into how their solutions impact Amazon Elastic Compute Cloud (Amazon EC2) and Amazon SageMaker AI service consumption across partner-managed and customer-managed accounts. This method complements Partner Revenue Measurement’s Resource Tagging and User Agent string capabilities by capturing attribution without requiring additional Partner implementation.
Partner Revenue Measurement is generally available in all commercial regions. To learn more about AWS Marketplace Metering, review the AWS Marketplace metering guide. To learn more about Partner Revenue Measurement capabilities, review the onboarding guide.
Amazon Bedrock Guardrails now enables centralized enforcement of safety controls across all AWS accounts within an organization through cross-account safeguards. Amazon Bedrock Guardrails offers configurable safeguards that help block up to 88% of harmful multimodal content from both input prompts and model responses, while filtering hallucinated responses from foundation models. Central security teams and administrators can now automatically implement these controls for all foundation model interactions in Amazon Bedrock across their organization, eliminating the operational overhead of manually configuring guardrails for each account.
With cross-account safeguards, you can specify a guardrail ID from your management account in a new Amazon Bedrock policy that automatically enforces configured safeguards across all member entities including organizational units (OUs) and individual accounts for all model invocations with Amazon Bedrock. This enables operational efficiency through automatic enforcement from a single control point in your management account. You can implement organization-level enforcement for uniform baseline protection, account-level controls for specific departmental requirements, and application-specific safeguards that complement organizational policies, with the union of multiple guardrails enforced during model inference calls.
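Individual applications can already reference a guardrail at inference time; cross-account safeguards enforce an organization-level guardrail in addition to any such application-specific one (the union of both applies). A minimal sketch of an application-level Converse request, where the model ID, guardrail ID, and version are placeholders:

```python
# Illustrative bedrock-runtime Converse request that names an
# application-specific guardrail. Under an organizational Bedrock policy,
# the management account's guardrail is enforced on top of this one.
request = {
    "modelId": "anthropic.claude-3-5-sonnet-20240620-v1:0",
    "messages": [
        {"role": "user", "content": [{"text": "Summarize our refund policy."}]}
    ],
    "guardrailConfig": {
        "guardrailIdentifier": "gr-EXAMPLE1234",  # placeholder guardrail ID
        "guardrailVersion": "1",
    },
}

# With boto3: bedrock_runtime.converse(**request)
print(request["guardrailConfig"]["guardrailIdentifier"])
```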
Organizational safeguards in Amazon Bedrock Guardrails are now available in all AWS commercial and AWS GovCloud (US) Regions where Bedrock Guardrails is supported. You can access this capability through the AWS Management Console or using the supported APIs.
To learn more about implementing centralized guardrails enforcement across your organization, read the News blog, visit the Amazon Bedrock Guardrails documentation, and explore the Amazon Bedrock Guardrails service page.
Amazon SageMaker Data Agent now supports interactive charting, SQL analytics on Snowflake data sources, and materialized view management in Amazon SageMaker Unified Studio notebooks. Data Agent now provides a complete analytics workflow that goes beyond code generation, enabling you to explore AWS and external data sources, visualize results, and optimize query performance, all with natural language prompts.
You can ask "plot monthly revenue trends by region for 2025" and Data Agent generates an interactive chart directly in your notebook, where you can hover over data points and modify the chart without writing code. When your analysis spans AWS and Snowflake, you can query Snowflake tables through external connections and join them with your AWS Glue Data Catalog data in a single prompt. Additionally, you can ask "analyze my notebook and suggest which queries would benefit from materialized views" and the agent recommends optimizations based on your query patterns, creates the views, and sets refresh schedules.
To get started, open a notebook in your SageMaker Unified Studio project and use the Data Agent chat panel. These features are available in all AWS Regions where Amazon SageMaker Unified Studio is supported. To learn more, see SageMaker Data Agent in the SageMaker Unified Studio User Guide.
You can now use the AWS Glue Schema Registry, a serverless and free feature of AWS Glue, in the Asia Pacific (Jakarta), Europe (Spain), and Europe (Zurich) regions to validate and control the evolution of streaming data using registered Apache Avro, JSON, and Protobuf schema formats.
The Schema Registry acts as a centralized repository for managing data format and structure between decoupled applications in data streaming systems. By using it, you can eliminate data validation logic and cross-team coordination, improve streaming data quality, and reduce downstream application failures. Through Apache-licensed serializers and deserializers, the Schema Registry integrates with C# and Java applications developed for Apache Kafka/Amazon Managed Streaming for Apache Kafka, Amazon Kinesis Data Streams, Apache Flink/Amazon Kinesis Data Analytics for Apache Flink, and AWS Lambda.
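Registering a schema is a small Glue API call. A minimal sketch using an Avro record (registry name, schema name, and field layout are placeholders; `Compatibility` governs which schema evolutions the registry will accept):

```python
import json

# Illustrative glue.create_schema parameters registering an Avro schema
# in the Schema Registry. Names and fields are example values.
avro_schema = {
    "type": "record",
    "name": "ClickEvent",
    "fields": [
        {"name": "user_id", "type": "string"},
        {"name": "ts", "type": "long"},
    ],
}

params = {
    "RegistryId": {"RegistryName": "streaming-events"},
    "SchemaName": "click-event",
    "DataFormat": "AVRO",         # JSON and PROTOBUF are also supported
    "Compatibility": "BACKWARD",  # controls allowed schema evolution
    "SchemaDefinition": json.dumps(avro_schema),
}

# With boto3: glue.create_schema(**params)
print(params["SchemaName"])
```

Producers then serialize records against the registered schema, and the registry rejects any new version that violates the chosen compatibility mode.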
To get started, visit the AWS Glue Schema Registry documentation. For a full list of AWS Regions where AWS Glue Schema Registry is available, see the AWS Regional Services List.
Amazon Bedrock Data Automation (BDA) now supports custom vocabulary through a new capability called Data Automation Library, enabling you to improve speech recognition and transcription accuracy for domain-specific terminology when processing audio and video content. BDA automates the generation of insights from unstructured multimodal content such as documents, images, audio, and videos for your GenAI-powered applications. This new capability is designed for organizations in healthcare, legal, financial services, media, and contact center analytics that need accurate recognition of specialized terminology.
With custom vocabulary, you can provide BDA with lists of domain-specific words and phrases—including brand names, acronyms, medical terms, and technical jargon—so that these terms are accurately recognized and extracted from your content. You can also specify display forms to control how recognized terms appear in output, such as displaying "electrocardiogram" as "ECG" or "discounted cash flow" as "DCF". For example, a healthcare company processing doctor-patient conversations can provide lists of disease and drug names, or a contact center analytics provider can supply industry-specific terminology across verticals such as insurance, telecom, and public sector to ensure accurate transcriptions. Custom vocabulary supports 11 languages including English, Spanish, French, German, Italian, Portuguese, Japanese, Korean, and Chinese (Simplified, Traditional, and Cantonese), and is included at no additional charge.
Data Automation Library with custom vocabulary is available in US East (N. Virginia), US West (Oregon), Europe (Ireland), Europe (London), Europe (Frankfurt), Asia Pacific (Mumbai), and Asia Pacific (Sydney). To learn more, visit the Bedrock Data Automation page or view the Bedrock Data Automation User Guide.
The Apache Spark troubleshooting agent and upgrade agent for Amazon EMR are now available as Kiro powers, bringing one-click access to AI-assisted Spark operations directly in Kiro. With these powers, data engineers can reduce troubleshooting time from hours to minutes and compress Spark version upgrades from months to weeks.
When a Spark job fails, the troubleshooting power identifies the root cause by analyzing logs, metrics, and configurations across EMR on EC2 and EMR Serverless, and provides specific code recommendations for PySpark applications. The upgrade power automates Spark version upgrades, such as moving from EMR 6.5 to EMR 7.12, by handling code transformation and dependency resolution through remote validation and data quality comparison on EMR. Both powers connect to Spark agents through MCP Proxy for AWS with IAM role-based authentication, and all actions are recorded in AWS CloudTrail for full auditability.
The Apache Spark troubleshooting and upgrade powers are available with Amazon EMR in all AWS commercial regions. To get started, install the Apache Spark troubleshooting power or the upgrade power from the Kiro IDE. For more information, see the troubleshooting agent and upgrade agent documentation.
Organizational safeguards are now generally available in Amazon Bedrock Guardrails, enabling centralized enforcement and management of safety controls across multiple AWS accounts within an AWS Organization.
What excited me most among the events of the week of March 23, 2026 was the AWS Agentic AI Vice Presi […]
Kiro has added two new enterprise governance features: an MCP server registry, which lets administrators allowlist approved MCP servers in a JSON-format registry, and model governance, which lets organizations restrict the AI models available to their developers. The MCP registry syncs at startup and every 24 hours and blocks connections to unapproved servers. Model governance also helps meet data residency requirements and can keep experimental models disabled until approval is complete. These features are available to enterprise users on Kiro IDE 0.11.28 / CLI 1.23 and later.
The AWS Sustainability console is now available. It is a standalone service that consolidates sustainability reports and resources in one place, with a permission model independent of the Billing console, Scope 1-3 emissions views, custom CSV reports, and programmatic access via API.
If you run high-scale applications that encrypt large volumes of data, you might be concerned about tracking encryption limits and rotating keys. This post explains how AWS Key Management Service (AWS KMS) and the AWS Encryption SDK handle Advanced Encryption Standard in Galois Counter Mode’s (AES-GCM) encryption limits or bounds automatically by using derived key […]