AWS Updates Feed

AWS Updates - 2026-02-12

AWS What's New

Amazon Bedrock increases default quotas for Anthropic’s Claude Sonnet 4.5 model in AWS GovCloud (US)

Amazon Bedrock has increased the default quotas for Anthropic’s Claude Sonnet 4.5 in AWS GovCloud (US-West) and AWS GovCloud (US-East) to 5,000,000 tokens per minute and 1,000 requests per minute, aligning with commercial AWS Regions. This 25x increase enables customers to scale their AI workloads more effectively in regulated environments.
Claude Sonnet 4.5 is Anthropic's latest Sonnet model, excelling at building complex agents, coding, and long-horizon tasks while maintaining optimal speed and cost-efficiency for high-volume use-cases.
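For capacity planning against these new quotas, a client-side budget check can be sketched in a few lines. The quota figures below come from this announcement; the `QuotaWindow` helper and its interface are purely illustrative and not part of any AWS SDK:

```python
# Client-side quota check for Amazon Bedrock in AWS GovCloud (US).
# Quota figures are from the announcement; the helper itself is
# illustrative and not part of any AWS SDK.

TOKENS_PER_MINUTE = 5_000_000
REQUESTS_PER_MINUTE = 1_000

class QuotaWindow:
    """Tracks token and request usage within a one-minute window."""

    def __init__(self, tpm=TOKENS_PER_MINUTE, rpm=REQUESTS_PER_MINUTE):
        self.tpm = tpm
        self.rpm = rpm
        self.tokens_used = 0
        self.requests_made = 0

    def can_send(self, estimated_tokens):
        """Return True if the request fits in the remaining quota."""
        return (self.requests_made < self.rpm
                and self.tokens_used + estimated_tokens <= self.tpm)

    def record(self, tokens):
        """Account for a request that was actually sent."""
        self.tokens_used += tokens
        self.requests_made += 1

window = QuotaWindow()
assert window.can_send(200_000)  # well within both limits
window.record(200_000)
```

In practice you would reset the window each minute and fall back to retry-with-backoff when the service throttles anyway.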


AI Troubleshooting in the AWS Support Center Console now supports 7 additional languages

AI troubleshooting in the AWS Support Center Console is now available in seven languages in addition to English: Japanese, Korean, Mandarin (Simplified), Mandarin (Traditional), Spanish, Portuguese, and French. The AWS Support Center Console is the primary interface where customers manage their AWS support experience, including creating and tracking support cases. Previously, AI troubleshooting capabilities were available only in English, creating a barrier for customers who prefer to work in their native language. With this launch, customers can interact with AI-powered troubleshooting assistance in their preferred language.

AWS Support's AI troubleshooting helps customers resolve issues faster by providing immediate, contextual recommendations while they create a support case. For example, a Japanese developer troubleshooting an EC2 connectivity issue can now receive AI-generated insights and potential solutions in Japanese, reducing the time needed to understand and implement fixes. This capability is seamlessly integrated into the support experience and is available to all customers regardless of support plan, ensuring that language is no longer a barrier to self-service support.

All customers, regardless of support plan, can access the experience by selecting a supported language in their console settings and clicking the “Try it now” link in the banner at the top of the AWS Support Center Console.


Amazon RDS for PostgreSQL supports minor versions 18.2, 17.8, 16.12, 15.16 and 14.21

Amazon Relational Database Service (RDS) for PostgreSQL now supports the latest minor versions 18.2, 17.8, 16.12, 15.16, and 14.21. We recommend that you upgrade to the latest minor versions to fix known security vulnerabilities in prior versions of PostgreSQL, and to benefit from the bug fixes added by the PostgreSQL community. This release also includes the new pg_stat_monitor extension, which enables you to collect performance metrics and evaluate query performance insights in a unified view.

You can upgrade your databases during scheduled maintenance windows using automatic minor version upgrades. To simplify operations at scale, enable automatic minor version upgrades and use the AWS Organizations Upgrade Rollout Policy to orchestrate thousands of upgrades in phases, first to development environments before upgrading production systems. You can also use Amazon RDS Blue/Green deployments with physical replication to minimize downtime for minor version upgrades.
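When orchestrating upgrades across a fleet, it helps to map each instance's major version to the newest supported minor from this release. The mapping below is taken from the announcement; the helper function is a hypothetical illustration, not an AWS API (the authoritative list comes from `rds describe-db-engine-versions`):

```python
# Map a PostgreSQL major version to the latest RDS-supported minor
# version from this release. The helper is illustrative; query
# "aws rds describe-db-engine-versions" for the authoritative list.

LATEST_MINOR = {
    18: "18.2",
    17: "17.8",
    16: "16.12",
    15: "15.16",
    14: "14.21",
}

def target_minor(current_version: str) -> str:
    """Return the newest supported minor for an instance's major version."""
    major = int(current_version.split(".")[0])
    if major not in LATEST_MINOR:
        raise ValueError(f"No supported minor listed for major {major}")
    return LATEST_MINOR[major]

assert target_minor("16.4") == "16.12"
```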

Amazon RDS for PostgreSQL makes it simple to set up, operate, and scale PostgreSQL deployments in the cloud. See Amazon RDS for PostgreSQL Pricing for pricing details and regional availability. Create or update a fully managed Amazon RDS database in the Amazon RDS Management Console or by using the AWS Command Line Interface (CLI).

 


Announcing new Amazon EC2 general purpose M8azn instances

AWS is announcing the general availability of new Amazon EC2 M8azn instances: general-purpose, high-frequency, high-network-bandwidth instances powered by fifth-generation AMD EPYC (formerly code-named Turin) processors, offering the highest maximum CPU frequency in the cloud at 5 GHz. M8azn instances offer up to 2x the compute performance of previous-generation M5zn instances, and up to 24% higher performance than M8a instances.

M8azn instances deliver up to 4.3x higher memory bandwidth and 10x larger L3 cache compared to M5zn instances, allowing latency-sensitive and compute-intensive workloads to achieve results faster. These instances also offer up to 2x the networking throughput and up to 3x the EBS throughput of M5zn instances. Built on the AWS Nitro System using sixth-generation Nitro Cards, these instances are ideal for applications such as real-time financial analytics, high-performance computing, high-frequency trading (HFT), CI/CD, intensive gaming, and simulation modeling for the automotive, aerospace, energy, and telecommunications industries. M8azn instances feature a 4:1 ratio of memory to vCPU and are available in 9 sizes ranging from 2 to 96 vCPUs with up to 384 GiB of memory, including two bare metal variants.

M8azn instances are available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), and Europe (Frankfurt). Customers can purchase these instances via Savings Plans, On-Demand Instances, and Spot Instances. To get started, sign in to the AWS Management Console. For more information, visit the Amazon EC2 M8azn instance page.


Amazon S3 Tables add partition and sort order definition in the CreateTable API

Amazon S3 Tables now support partition and sort order definitions in the CreateTable API. This enhancement simplifies setting these properties programmatically, making it easier to manage and optimize data in tables when they are created.

To use this feature, you can specify fields for partition transforms and sort order in the CreateTable API call. You can also define these properties when you create tables using the AWS CLI or the AWS SDK.
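As a sketch, such a request might carry the partition transforms and sort order alongside the table schema. The field names below are illustrative assumptions only, not the authoritative API shape; consult the S3 Tables CreateTable reference for the exact request syntax:

```python
# Illustrative CreateTable request body for an S3 Table with a
# partition spec and sort order. Field names are assumptions for
# illustration; see the S3 Tables API reference for the real syntax.

create_table_request = {
    "namespace": "sales",
    "name": "orders",
    "format": "ICEBERG",
    "metadata": {
        "iceberg": {
            "schema": {
                "fields": [
                    {"name": "order_id", "type": "long", "required": True},
                    {"name": "region", "type": "string"},
                    {"name": "order_ts", "type": "timestamp"},
                ]
            },
            # Hypothetical: partition by day of the order timestamp.
            "partitionSpec": [
                {"sourceField": "order_ts", "transform": "day"},
            ],
            # Hypothetical: keep files clustered by region, ascending.
            "sortOrder": [
                {"sourceField": "region", "direction": "asc"},
            ],
        }
    },
}
```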

To create tables with partition and sort order, upgrade to the latest version of the AWS CLI and AWS SDKs. This support is available in all AWS Regions where S3 Tables are available. To learn more, visit the Amazon S3 Tables overview page and documentation.


AWS Backup adds cross-Region database snapshot copy to logically air-gapped vaults

AWS Backup now supports single-action database snapshot copies to logically air-gapped vaults across AWS Regions. This capability is available for Amazon Aurora, Amazon Neptune, and Amazon DocumentDB snapshots, eliminating the need for an intermediate copying step in target Regions.

You can perform cross-Region and cross-account snapshot copies to protect against incidents like ransomware events and Region outages that might affect your production accounts or primary Regions. Previously, this required a two-step process—first copying snapshots to the target Region in a backup vault, then copying them to the logically air-gapped vault in the same Region. Now, you can complete this in one step, achieving faster recovery point objectives (RPOs) while eliminating costs associated with intermediate copies. This streamlined process also removes the need for custom scripts or AWS Lambda functions to monitor intermediate copy status.
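A single cross-Region copy into a logically air-gapped vault maps to one StartCopyJob request. The sketch below only builds the request arguments (all ARNs and names are placeholders) rather than issuing the call:

```python
# Build arguments for a single-step cross-Region copy of an Aurora
# snapshot into a logically air-gapped vault. ARNs and names are
# placeholders; the request is constructed here but not sent.

copy_job_args = {
    "RecoveryPointArn": (
        "arn:aws:rds:us-east-1:111122223333:cluster-snapshot:example"
    ),
    "SourceBackupVaultName": "primary-vault",
    # Destination is the logically air-gapped vault in another Region.
    "DestinationBackupVaultArn": (
        "arn:aws:backup:us-west-2:111122223333:backup-vault:airgapped-vault"
    ),
    "IamRoleArn": "arn:aws:iam::111122223333:role/BackupCopyRole",
}

# With boto3 this would be passed as:
#   boto3.client("backup").start_copy_job(**copy_job_args)
```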

This feature is available for Amazon Aurora, Amazon Neptune and Amazon DocumentDB, in all Regions where AWS Backup supports these databases and logically air-gapped vaults. You can start using this feature today through the AWS Management Console, AWS Command Line Interface (CLI), or AWS SDKs. To get started, refer to the AWS Backup documentation.


Announcing Amazon EC2 C8i, M8i, and R8i instances on second-generation AWS Outposts racks

AWS is announcing local support for the latest generation of x86-powered Amazon EC2 instances on second-generation AWS Outposts racks, including C8i compute-optimized instances, M8i general-purpose instances, and R8i memory-optimized instances. These new instances deliver 20% better performance and 2.5x more memory bandwidth compared to the C7i, M7i, and R7i instances on second-generation Outposts racks. In addition, C8i, M8i, and R8i instances on second-generation Outposts racks deliver 20% more compute capacity than C7i, M7i, and R7i instances within the same rack space and power draw, enabling better space and energy efficiency for your on-premises workloads.

C8i, M8i, and R8i instances on second-generation Outposts racks are powered by custom Intel Xeon 6 processors available only on AWS and are ideal for a broad range of on-premises workloads requiring enhanced performance, such as larger databases, more memory-intensive applications, advanced real-time big data analytics, high-performance video encoding and streaming, and CPU-based edge inference with more sophisticated machine learning (ML) models.

To learn more about second-generation Outposts racks, refer to the Outposts racks product page and the user guide. For the most updated list of countries and territories and the AWS Regions where second-generation Outposts racks are supported, check out the Outposts rack FAQs page.


Amazon Bedrock adds support for the latest open-weight models in Asia Pacific (Sydney)

Amazon Bedrock is a fully managed service that provides secure, enterprise-grade access to high-performing foundation models from leading AI companies, enabling you to build and scale generative AI applications. Today, Amazon Bedrock announced support for the latest open-weight models in Asia Pacific (Sydney) using the bedrock-mantle endpoint. These include models from industry-leading providers such as DeepSeek, Google, MiniMax, Mistral, Moonshot AI, Nvidia, and OpenAI. The bedrock-mantle endpoint is powered by Project Mantle, a new distributed inference engine for large-scale machine learning model serving on Amazon Bedrock. Project Mantle simplifies and expedites onboarding of new models onto Amazon Bedrock. It provides highly performant and reliable serverless inference with sophisticated quality-of-service controls, unlocks higher default customer quotas with automated capacity management and unified pools, and delivers out-of-the-box compatibility with OpenAI API specifications.


To learn more and get started, visit the Amazon Bedrock console or the Amazon Bedrock service documentation. To get started with Amazon Bedrock OpenAI API-compatible service endpoints, visit the OpenAI API compatibility documentation.
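Because the bedrock-mantle endpoint is OpenAI API compatible, a chat completion request takes the familiar OpenAI shape. The payload below is a sketch only (the model identifier is a placeholder), constructed but not sent:

```python
# Sketch of an OpenAI-API-compatible chat request for the
# bedrock-mantle endpoint. The model identifier is a placeholder;
# the request is only constructed here, not sent.

chat_request = {
    "model": "example-open-weight-model",  # placeholder model ID
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize AWS PrivateLink in one line."},
    ],
    "max_tokens": 128,
}

# With the OpenAI Python SDK pointed at the Bedrock endpoint, this
# would be passed to client.chat.completions.create(**chat_request).
```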


Amazon S3 Access Grants are now available in the AWS Asia Pacific (Taipei) Region

You can now create Amazon S3 Access Grants in the AWS Asia Pacific (Taipei) Region.

Amazon S3 Access Grants map identities in directories such as Microsoft Entra ID, or AWS Identity and Access Management (IAM) principals, to datasets in S3. This helps you manage data permissions at scale by automatically granting S3 access to end users based on their corporate identity.

Visit the AWS Region Table for complete regional availability information. To learn more about Amazon S3 Access Grants, visit our product page.


Amazon Bedrock expands support for AWS PrivateLink

Amazon Bedrock is a fully managed service that provides secure, enterprise-grade access to high-performing foundation models from leading AI companies. It enables you to build and scale generative AI applications. Amazon Bedrock already supported AWS PrivateLink for the bedrock-runtime endpoint. With this launch, you can also use AWS PrivateLink to privately access Amazon Bedrock through the bedrock-mantle endpoint. The bedrock-mantle endpoint is powered by Project Mantle, a new distributed inference engine for large-scale machine learning model serving on Amazon Bedrock. Project Mantle simplifies and expedites onboarding of new models onto Amazon Bedrock. It provides highly performant and reliable serverless inference with sophisticated quality-of-service controls, unlocks higher default customer quotas with automated capacity management and unified pools, and delivers out-of-the-box compatibility with OpenAI API specifications.

AWS PrivateLink support for OpenAI API-compatible endpoints is available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Jakarta), Asia Pacific (Tokyo), Asia Pacific (Mumbai), Asia Pacific (Sydney), South America (São Paulo), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Milan), and Europe (Stockholm) AWS Regions. To learn more and get started, visit the Amazon Bedrock console or the Amazon Bedrock service documentation. To get started with Amazon Bedrock OpenAI API-compatible service endpoints, visit the OpenAI API compatibility documentation.
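Using PrivateLink means creating an interface VPC endpoint for the service. The sketch below only builds the arguments; the service name and all resource IDs are assumptions/placeholders, so verify the exact service name in the VPC console before use:

```python
# Arguments for an interface VPC endpoint to reach the bedrock-mantle
# endpoint privately. The service name and resource IDs below are
# placeholders/assumptions; verify the exact service name in the VPC
# console before use. The request is constructed here but not sent.

endpoint_args = {
    "VpcEndpointType": "Interface",
    "VpcId": "vpc-0123456789abcdef0",
    "ServiceName": "com.amazonaws.us-east-1.bedrock-mantle",  # assumed name
    "SubnetIds": ["subnet-0123456789abcdef0"],
    "SecurityGroupIds": ["sg-0123456789abcdef0"],
    "PrivateDnsEnabled": True,
}

# With boto3: boto3.client("ec2").create_vpc_endpoint(**endpoint_args)
```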


AWS expands Resource Control Policies support to Amazon DynamoDB

AWS Resource Control Policies (RCPs) now support Amazon DynamoDB. RCPs are a type of organization policy that you can use to manage permissions in your organization. RCPs offer central control over the maximum available permissions for resources in your organization.

With this expansion, you can now use RCPs to manage permissions for Amazon DynamoDB. For example, you can create policies that prevent identities outside your organization from accessing DynamoDB, helping you build a data perimeter and enforce baseline security standards across your AWS environment.  
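A data-perimeter RCP for DynamoDB typically denies access unless the calling principal belongs to your organization, using the aws:PrincipalOrgID condition key. A minimal example policy follows (the organization ID is a placeholder):

```python
import json

# Example RCP denying DynamoDB access to principals outside the
# organization. The organization ID "o-exampleorgid" is a placeholder.

dynamodb_rcp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "EnforceOrgIdentities",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "dynamodb:*",
            "Resource": "*",
            "Condition": {
                # Deny unless the caller's org ID matches ours.
                "StringNotEqualsIfExists": {
                    "aws:PrincipalOrgID": "o-exampleorgid"
                }
            },
        }
    ],
}

policy_document = json.dumps(dynamodb_rcp)
```

The IfExists variant avoids denying AWS service principals that carry no organization ID; tighten or relax the condition to match your own perimeter design.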

RCPs are available in all AWS commercial Regions and AWS GovCloud (US) Regions. To learn more about RCPs and view the full list of supported AWS services, visit the Resource control policies (RCPs) documentation in the AWS Organizations User Guide.


Amazon RDS now supports latest Cumulative Update (CU23) for Microsoft SQL Server

Amazon Relational Database Service (Amazon RDS) for SQL Server now supports the latest Cumulative Update (CU23) KB5078297 for SQL Server 2022.

We recommend that you upgrade your Amazon RDS for SQL Server instances to apply this update using the Amazon RDS Management Console, the AWS SDK, or the AWS CLI. You can learn more about upgrading your database instance in the Amazon RDS User Guide section on upgrading your RDS Microsoft SQL Server DB engine.
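Applying a cumulative update comes down to a ModifyDBInstance request with the target engine version. The sketch below only builds the arguments; the instance identifier is a placeholder and the engine version string is left as a placeholder rather than guessed:

```python
# Arguments to apply a SQL Server engine update during the next
# maintenance window. The identifier is a placeholder and the engine
# version is deliberately left as a placeholder; look up the exact
# CU23 version string with "aws rds describe-db-engine-versions".

modify_args = {
    "DBInstanceIdentifier": "example-sqlserver-instance",
    "EngineVersion": "<CU23-engine-version>",  # placeholder, do not guess
    "ApplyImmediately": False,  # defer to the maintenance window
}

# With boto3: boto3.client("rds").modify_db_instance(**modify_args)
```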


Amazon Connect launches granular access controls for analytics dashboards

Amazon Connect dashboards now provide granular access controls for analytics dashboards. This enables you to apply resource tags that control who is able to see metrics for specific resources such as agents, queues, and routing profiles. You can now filter metrics using tags to view aggregate metrics for agents or queues that share the same tags. For example, you can tag agents with Department:Customer Service to restrict dashboard metrics visibility to Customer Service team managers.
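The tag-based visibility rule can be illustrated with a small filter. The Department:Customer Service tag comes from the announcement's example; the helper itself is hypothetical and not an Amazon Connect API:

```python
# Illustrative tag-based visibility filter for dashboard metrics.
# The Department tag comes from the announcement's example; the
# helper is hypothetical, not an Amazon Connect API.

def visible_agents(agents, required_tags):
    """Return agents whose tags include all required key/value pairs."""
    return [
        a for a in agents
        if all(a.get("tags", {}).get(k) == v for k, v in required_tags.items())
    ]

agents = [
    {"name": "alice", "tags": {"Department": "Customer Service"}},
    {"name": "bob", "tags": {"Department": "Sales"}},
]

names = [a["name"] for a in
         visible_agents(agents, {"Department": "Customer Service"})]
assert names == ["alice"]
```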


Amazon Connect dashboards are available in all AWS commercial and AWS GovCloud (US-West) Regions where Amazon Connect is offered. To learn more about dashboards, see the Amazon Connect Administrator Guide. To learn more about Amazon Connect, the AWS cloud-based contact center, please visit the Amazon Connect website.


Amazon EC2 X8i instances are now available in Europe (Stockholm)

Amazon Web Services (AWS) is announcing the general availability of Amazon EC2 X8i instances, next-generation memory optimized instances powered by custom Intel Xeon 6 processors available only on AWS. X8i instances are SAP-certified and deliver the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. They deliver up to 43% higher performance, 1.5x more memory capacity (up to 6TB), and 3.3x more memory bandwidth compared to previous generation X2i instances.

X8i instances are designed for memory-intensive workloads like SAP HANA, large databases, data analytics, and Electronic Design Automation (EDA). Compared to X2i instances, X8i instances offer up to 50% higher SAPS performance, up to 47% faster PostgreSQL performance, 88% faster Memcached performance, and 46% faster AI inference performance. X8i instances come in 14 sizes, from large to 96xlarge, including two bare metal options.

X8i instances are available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt) and Europe (Stockholm).

To get started, visit the AWS Management Console. X8i instances can be purchased via Savings Plans, On-Demand Instances, and Spot Instances. For more information, visit the X8i instances page.