Amazon RDS Custom for SQL Server now provides customers with the ability to view and schedule new operating system (OS) updates for RDS provided engine versions (RPEV). With RPEV, RDS Custom provides a SQL Server version pre-installed on an Amazon Machine Image (AMI). When new OS updates are available for an RPEV, customers can view the upcoming updates and either apply them immediately or schedule them for the next maintenance window using RDS Custom APIs.
To view available OS updates, customers can use the describe-pending-maintenance-actions API, or subscribe to RDS-EVENT-0230 to receive an alert when new updates become available for their database instance. Customers can then use the apply-pending-maintenance-action API to apply the updates immediately or schedule them for their next maintenance window.
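The two API calls above can be combined into a simple update-planning routine. The sketch below mirrors the shape of a describe-pending-maintenance-actions response, but the sample ARN and values are illustrative, not real output; in practice you would feed the real response (for example, from boto3) into the same logic.

```python
# Hedged sketch: decide which pending OS updates to apply immediately versus
# defer to the maintenance window. The response shape mirrors the
# describe-pending-maintenance-actions API; the values are invented samples.

sample_response = {
    "PendingMaintenanceActions": [
        {
            "ResourceIdentifier": "arn:aws:rds:us-east-1:123456789012:db:example-db",
            "PendingMaintenanceActionDetails": [
                {"Action": "system-update", "Description": "New OS update available"},
            ],
        }
    ]
}

def plan_os_updates(response, apply_immediately=False):
    """Return (resource_arn, action, opt_in_type) tuples for pending system updates."""
    opt_in = "immediate" if apply_immediately else "next-maintenance"
    plans = []
    for resource in response["PendingMaintenanceActions"]:
        for detail in resource["PendingMaintenanceActionDetails"]:
            if detail["Action"] == "system-update":
                plans.append((resource["ResourceIdentifier"], detail["Action"], opt_in))
    return plans

for arn, action, opt_in in plan_os_updates(sample_response):
    # Each tuple maps directly onto an apply-pending-maintenance-action call.
    print(arn, action, opt_in)
```

Each resulting tuple corresponds to one apply-pending-maintenance-action call, with the opt-in type selecting immediate application or the next maintenance window.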
Using these features, customers can efficiently track and apply OS updates. To learn more, refer to the Amazon RDS Custom for SQL Server User Guide. These features are available in all the AWS Regions where RDS Custom for SQL Server is available.
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C8gn instances, powered by the latest-generation AWS Graviton4 processors, are available in the Asia Pacific (Jakarta, Hyderabad, Tokyo), South America (Sao Paulo), and Europe (Zurich) AWS Regions. The new instances provide up to 30% better compute performance than Graviton3-based Amazon EC2 C7gn instances. Amazon EC2 C8gn instances feature the latest 6th-generation AWS Nitro Cards and offer up to 600 Gbps of network bandwidth, the highest among network-optimized EC2 instances.
Take advantage of the enhanced networking capabilities of C8gn to scale performance and throughput, while optimizing the cost of running network-intensive workloads such as network virtual appliances, data analytics, CPU-based artificial intelligence and machine learning (AI/ML) inference.
For increased scalability, C8gn instances offer instance sizes up to 48xlarge, up to 384 GiB of memory, and up to 60 Gbps of bandwidth to Amazon Elastic Block Store (EBS). C8gn instances support Elastic Fabric Adapter (EFA) networking on the 16xlarge, 24xlarge, 48xlarge, metal-24xl, and metal-48xl sizes, which enables lower latency and improved cluster performance for workloads deployed on tightly coupled clusters.
C8gn instances are available in the following AWS Regions: US East (N. Virginia, Ohio), US West (Oregon, N. California), Europe (Frankfurt, Stockholm, Ireland, London, Spain, Zurich), Asia Pacific (Singapore, Malaysia, Sydney, Thailand, Mumbai, Seoul, Melbourne, Jakarta, Hyderabad, Tokyo), Middle East (UAE), Africa (Cape Town), Canada (Central), Canada West (Calgary), South America (Sao Paulo), and AWS GovCloud (US-East, US-West).
To learn more, see Amazon EC2 C8gn Instances. To begin your Graviton journey, visit the Level up your compute with AWS Graviton page. To get started, use the AWS Management Console, AWS Command Line Interface (AWS CLI), or the AWS SDKs.
AWS Lambda now provides Availability Zone (AZ) metadata through a new metadata endpoint in the Lambda execution environment. With this capability, developers can determine the AZ ID (e.g., use1-az1) of the AZ their Lambda function is running in, enabling them to build functions that make AZ-aware routing decisions, such as preferring same-AZ endpoints for downstream services to reduce cross-AZ latency. This capability also enables operators to implement AZ-aware resilience patterns like AZ-specific fault injection testing.
Lambda automatically provisions and maintains execution environments ready to serve function invocations across multiple AZs within an AWS Region to provide high availability and fault tolerance without any additional configuration or management overhead for customers. As development teams scale their serverless applications, their functions often need to interact with other AWS services like Amazon ElastiCache and Amazon RDS that provide endpoints specific to each AZ. Until now, Lambda did not provide a way for functions to determine which AZ they were running in. With the new metadata endpoint, functions can now retrieve their AZ ID with a simple HTTP request, making it easy to implement AZ-aware logic without building and maintaining custom solutions.
To get started, use the Powertools for AWS Lambda metadata utility or call the metadata endpoint directly using the environment variables that Lambda automatically sets in the execution environment. This capability is supported for all Lambda runtimes, including custom runtimes and functions packaged as container images, and integrates seamlessly with Lambda capabilities like SnapStart and provisioned concurrency, regardless of whether your functions are VPC-enabled.
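Once a function knows its AZ ID, AZ-aware routing reduces to a lookup with a fallback. The sketch below is a minimal illustration of that decision: the AZ-to-endpoint mapping and hostnames are hypothetical, and in a real function the AZ ID would come from the new metadata endpoint (or the Powertools metadata utility) rather than being passed in by hand.

```python
# Hedged sketch of AZ-aware routing once a Lambda function knows its AZ ID.
# The mapping and hostnames below are invented for illustration.

AZ_ENDPOINTS = {
    "use1-az1": "replica-az1.example.internal",
    "use1-az2": "replica-az2.example.internal",
}
DEFAULT_ENDPOINT = "replica-any.example.internal"

def pick_endpoint(az_id, endpoints=AZ_ENDPOINTS, fallback=DEFAULT_ENDPOINT):
    """Prefer a same-AZ downstream endpoint to avoid cross-AZ hops."""
    return endpoints.get(az_id, fallback)

# With a same-AZ replica available, the function stays inside its AZ;
# for an unknown AZ ID it falls back to the Region-wide endpoint.
print(pick_endpoint("use1-az1"))
print(pick_endpoint("use1-az9"))
```

The fallback matters: because Lambda spreads execution environments across AZs automatically, a function should always handle the case where no same-AZ endpoint exists.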
AZ metadata support is available at no additional cost in all commercial AWS Regions where Lambda is available. To learn more, visit Lambda documentation.
AWS announces support for NVIDIA Inference Xfer Library (NIXL) with Elastic Fabric Adapter (EFA) to accelerate disaggregated large language model (LLM) inference on Amazon EC2. This integration enhances disaggregated inference serving through three key improvements: increased KV-cache throughput, reduced inter-token latency, and optimized KV-cache memory utilization.
NIXL with EFA enables high throughput, low-latency KV-cache transfer between prefill and decode nodes, and it enables efficient KV-cache movement between various storage layers. NIXL is interoperable with all EFA-enabled EC2 instances and integrates natively with frameworks including NVIDIA Dynamo, SGLang, and vLLM. Combined, NIXL with EFA enables flexible integration with your EC2 instance and framework of choice, providing performant disaggregated inference at scale.
AWS supports NIXL version 1.0.0 or higher with EFA installer version 1.47.0 or higher on all EFA-enabled EC2 instance types in all AWS Regions at no additional cost. For more information, visit the EFA documentation.
Amazon EC2 Fleet now supports interruptible Capacity Reservations. EC2 Fleet allows you to launch instances across multiple instance types and Availability Zones. Starting today, you can specify interruptible Capacity Reservation IDs across your Launch Templates to provision instances in a single EC2 Fleet call.
When On-Demand Capacity Reservations are not in use, customers can make them temporarily available as interruptible reservations within their AWS Organization to improve utilization and save costs. When these interruptible reservations are available to your account, you can now use EC2 Fleet to easily consume them.
This feature is available in all AWS commercial Regions. To get started, refer to the EC2 Fleet documentation. To learn more about interruptible Capacity Reservations, visit the EC2 Capacity Reservations user guide.
Today, AWS announced the opening of a new AWS Direct Connect location at Equinix SY5 in Sydney, Australia. You can now establish private, direct network access to all public AWS Regions (except those in China), AWS GovCloud Regions, and AWS Local Zones from this location. This site is the fourth AWS Direct Connect location in Sydney and the tenth AWS Direct Connect location within Australia. This Direct Connect location offers dedicated 10 Gbps and 100 Gbps connections with MACsec encryption available.
The Direct Connect service enables you to establish a private, physical network connection between AWS and your data center, office, or colocation environment. These private connections can provide a more consistent network experience than those made over the public internet.
For more information on the over 150 Direct Connect locations worldwide, visit the locations section of the Direct Connect product detail pages. Or, visit our getting started page to learn more about how to purchase and deploy Direct Connect.
Amazon Redshift federated permissions are now supported with AWS IAM Identity Center (IdC) in multiple AWS Regions. You can extend IdC from your primary AWS Region to additional Regions for improved performance through proximity to users, and for improved reliability. In the additional Regions, you now have simplified administration of Redshift fine-grained access controls at the table and column level using existing workforce identities with IdC.
When a new Region is added in IdC, you can create Redshift and Lake Formation Identity Center applications in the new Region without replicating identities from the primary Region. This enables you to use existing workforce identities to query data across warehouses in the new Region. Regardless of which warehouse is used for querying, row-level, column-level, and masking controls always apply automatically, delivering fine-grained access compliance. You can also access Amazon Redshift with single sign-on in these new Regions from Amazon QuickSight, Amazon Redshift Query Editor, or third-party SQL tools.
To get started with Redshift federated permissions using IdC, read the blog and documentation. To extend IdC support to multiple Regions, read the IdC documentation, Redshift documentation, and Lake Formation documentation, and see the Region availability.
Amazon Bedrock AgentCore now enables customers to configure Chrome Enterprise policies for AgentCore Browser and specify custom root Certificate Authority (CA) certificates for both AgentCore Browser and Code Interpreter. These enhancements help ensure enterprise requirements are met when allowing AI agents to operate within organizations that have strict security policies and internal infrastructure using custom certificates.
With Chrome policies, you can use more than 100 configurable policies for managing browser behavior across security, URL filtering, content settings, and more to enforce organizational compliance requirements. For example, you can restrict agents to specific URLs for kiosk-mode operations, disable password managers and downloads for data-entry tasks, or implement URL blocklists for regulatory compliance. Custom root CA support enables agents to seamlessly connect to internal services such as Artifactory, Jira, and finance portals that use SSL certificates signed by your organization's internal Certificate Authority, and to work with corporate proxies performing TLS interception.
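As a concrete illustration of the data-entry lockdown described above, the sketch below assembles a small policy set. The policy names (URLBlocklist, URLAllowlist, PasswordManagerEnabled, DownloadRestrictions) come from the Chrome Enterprise policy list; how AgentCore accepts this mapping is an assumption here, not the documented request shape, so treat it as a sketch of the policy values rather than a working configuration call.

```python
# Hedged sketch: a Chrome Enterprise policy set for a data-entry agent.
# Policy names are from the Chrome Enterprise policy list; the surrounding
# structure is illustrative, not the AgentCore API shape.

data_entry_policies = {
    "URLBlocklist": ["*"],                        # block everything by default
    "URLAllowlist": ["https://erp.example.com"],  # except the one approved app
    "PasswordManagerEnabled": False,              # no credential autofill
    "DownloadRestrictions": 3,                    # 3 = block all downloads
}

def is_locked_down(policies):
    """Check the agent cannot leave the approved URL or save files locally."""
    return (
        policies.get("URLBlocklist") == ["*"]
        and bool(policies.get("URLAllowlist"))
        and policies.get("DownloadRestrictions") == 3
    )

print(is_locked_down(data_entry_policies))
```

A blocklist-of-everything plus a narrow allowlist is the usual kiosk-mode pattern: the agent can only reach the pages it needs for the task.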
These features are available in all 14 AWS Regions where Amazon Bedrock AgentCore Browser and Code Interpreter are available: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Seoul), and Canada (Central).
To learn more, visit the AgentCore Browser documentation.
Today, AWS announces that the AWS MCP Server (preview) now publishes operational metrics to Amazon CloudWatch and introduces scalable Agent SOPs discovery using semantic similarity. Agent SOPs are pre-built, tested workflows that guide AI assistants through complex multi-step AWS tasks. These updates give you visibility into your MCP Server usage and provide a guided path for your agents to perform tasks on AWS.
Previously, customers had no way to monitor the changes agents made through the AWS MCP Server, so they could not track usage patterns, identify permission issues, or set up alarms on errors. With this update, the AWS MCP Server now automatically publishes metrics under the AWS-MCP namespace in CloudWatch at no additional cost. You can monitor invocation counts, success rates, client errors, server errors, and throttling for individual tools such as the AWS API caller (call_aws) and the Agent SOP retriever (retrieve_agent_sop). These metrics help you track usage patterns, identify permission issues, and set up alarms when error rates exceed your thresholds. Additionally, the documentation search tool (search_documentation) now uses semantic similarity to return relevant Agent SOPs alongside AWS documentation results, allowing AI assistants to discover the right SOP through natural language queries.
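The error-rate alarm idea can be sketched as plain arithmetic over per-tool counters. The sample numbers below are invented; in practice you would configure this as a CloudWatch metric-math alarm on the AWS-MCP namespace rather than computing it by hand.

```python
# Hedged sketch: deciding which MCP tools exceed an error-rate threshold
# from per-tool metric samples. Values are invented for illustration.

samples = {
    "call_aws":           {"Invocations": 200, "ClientErrors": 12, "ServerErrors": 1},
    "retrieve_agent_sop": {"Invocations": 40,  "ClientErrors": 0,  "ServerErrors": 0},
}

def error_rate(m):
    """Fraction of invocations that ended in a client or server error."""
    errors = m["ClientErrors"] + m["ServerErrors"]
    return errors / m["Invocations"] if m["Invocations"] else 0.0

def tools_over_threshold(metrics, threshold=0.05):
    """Return tool names whose error rate exceeds the threshold, sorted."""
    return sorted(t for t, m in metrics.items() if error_rate(m) > threshold)

print(tools_over_threshold(samples))
```

With these sample counters, call_aws is at a 6.5% error rate and would trip a 5% alarm, while retrieve_agent_sop stays clean.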
The AWS MCP Server is available in preview in the US East (N. Virginia) AWS Region at no additional cost.
To get started with the AWS MCP Server, read the documentation.
Celebrating twenty years of innovation in ML and AI technology at AWS. Countless developers—myself included—have embraced cloud computing and actively used its capabilities to accomplish what was previously impossible.
Amazon S3 became generally available on March 14, 2006, twenty years ago last week […]
Amazon Threat Intelligence has identified an Interlock ransomware campaign exploiting CVE-2026-20131, a critical vulnerability in Cisco Secure Firewall Management Center. The investigation found that the vulnerability had been exploited as a zero-day for 36 days before public disclosure. A misconfiguration by the attackers exposed their infrastructure and revealed the full attack toolkit; this post shares the technical analysis, indicators of compromise (IoCs), and defense recommendations, including the importance of defense in depth.
If an AWS member account is compromised, an attacker could remove the account from its organization and disable all governance controls. This post explains how to protect your AWS environment using layered security controls such as service control policies (SCPs), secure account migration, and centralized root access management.
Event overview: For customers running CAE workloads, mainly in the automotive and manufacturing industries, in February of this year (for the Tokyo Region, […]
Today, we are announcing the availability of AWS Console for SAP Applications, a new centralized management experience that gives SAP customers an application-centric view for registering and managing SAP HANA-based applications running on AWS. The console provides a unified dashboard for viewing registered SAP applications, checking landing zone setup status, and visualizing the resources used by SAP workloads. From the application details page, you can view the application topology and associated resources, and perform management operations such as application-aware start/stop, automated validation of SAP workload configuration, and scheduled operations.
This article is a translation of "Well-Architected design for resili […]", posted on 2026/2/24.
AWS recently announced the general availability of Kubernetes Gateway API support in the AWS Load Balancer Controller. Previously, the AWS Load Balancer Controller provisioned Application Load Balancers (ALB) and Network Load Balancers (NLB) to satisfy Kubernetes Ingress and Service resources, respectively. With this new capability, you can define AWS load balancing using the standard Kubernetes Gateway API.
This post presents ten best practices for optimizing the performance, cost, and scalability of Amazon EMR Serverless. It collects practical recommendations for building efficient data processing pipelines, covering application design, right-sizing workers, using Graviton processors, storage selection, multi-AZ configuration, and more.
This is a guide to choosing storage patterns in the Amazon SageMaker lakehouse architecture. It compares the characteristics of data lakes (general-purpose S3 and S3 Tables) and data warehouses (Redshift Managed Storage), and explains how to select the best architecture for your use case, along with data ingestion patterns such as ETL, zero-ETL, and data federation.
Bulletin ID: 2026-010-AWS
Scope: AWS
Content Type: Important (requires attention)
Publication Date: 2026/03/19 1:30 PM PDT
Description:
AWS-LC is a general-purpose cryptographic library maintained by AWS. We identified CVE-2026-4428 affecting X.509 certificate verification.
A logic error in the CRL (Certificate Revocation List) distribution point matching in AWS-LC allows a revoked certificate to bypass revocation checks during certificate validation, when the application enables CRL checking and uses partitioned CRLs with Issuing Distribution Point (IDP) extensions.
Applications that do not enable CRL checking (X509_V_FLAG_CRL_CHECK) are not affected. Applications using complete (non-partitioned) CRLs without IDP extensions are also not affected.
Impacted versions:
- CRL Distribution Point Scope Check Logic Error in AWS-LC >= v1.24.0, < v1.71.0
- CRL Distribution Point Scope Check Logic Error in AWS-LC-FIPS >= AWS-LC-FIPS-3.0.0, < AWS-LC-FIPS-3.3.0
- CRL Distribution Point Scope Check Logic Error in aws-lc-sys >= v0.15.0, < v0.39.0
- CRL Distribution Point Scope Check Logic Error in aws-lc-fips-sys >= v0.13.0, < v0.13.13
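The ranges above can be checked mechanically. The helper below encodes the bulletin's affected ranges as half-open intervals and compares plain dotted version strings with tuple comparison; version strings are normalized (the AWS-LC-FIPS entries are written without their package prefix), and this is a sketch rather than the project's own version-comparison logic.

```python
# Hedged helper: check whether a version string falls inside the affected
# half-open range [lo, hi) from the bulletin. Versions are compared as
# integer tuples, which is an assumption about the versioning scheme.

def parse(v):
    """Turn '1.24.0' into (1, 24, 0) for lexicographic comparison."""
    return tuple(int(p) for p in v.split("."))

AFFECTED = {
    "aws-lc":          ("1.24.0", "1.71.0"),
    "aws-lc-fips":     ("3.0.0",  "3.3.0"),
    "aws-lc-sys":      ("0.15.0", "0.39.0"),
    "aws-lc-fips-sys": ("0.13.0", "0.13.13"),
}

def is_affected(package, version):
    """True if the version is >= the first affected and < the first fixed release."""
    lo, hi = AFFECTED[package]
    return parse(lo) <= parse(version) < parse(hi)

print(is_affected("aws-lc", "1.50.0"))  # inside the affected range
print(is_affected("aws-lc", "1.71.0"))  # first fixed release
```

Note the upper bound is exclusive: 1.71.0, 3.3.0, 0.39.0, and 0.13.13 are the first fixed releases of their respective packages.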
Please refer to the article below for the most up-to-date and complete information related to this AWS Security Bulletin.
In this post, we show you how to enforce data residency when deploying Amazon Quick Microsoft Teams extensions across multiple AWS Regions. You will learn how to configure multi-Region Amazon Quick extensions that automatically route users to Region-appropriate resources, helping you maintain compliance with GDPR and other data sovereignty requirements.
SageMaker AI endpoints now support enhanced metrics with configurable publishing frequency. This launch provides the granular visibility needed to monitor, troubleshoot, and improve your production endpoints.
This post introduces Video Retrieval-Augmented Generation (V-RAG), an approach to help improve video content creation. By combining retrieval-augmented generation with advanced video AI models, V-RAG offers an efficient and reliable solution for generating AI videos.
In this post, we explore our approach to video generation through V-RAG, transforming natural language text prompts and images into grounded, high-quality videos. With this fully automated solution, you can generate realistic, AI-powered video sequences from structured text and image inputs, streamlining the video creation process.
This post explores the technical characteristics of the Nemotron 3 Super model and discusses potential application use cases. It also provides technical guidance to get started using this model for your generative AI applications within the Amazon Bedrock environment.