AWS Updates - 2026-02-10
AWS What's New
Amazon Aurora Global Database now supports managed minor version upgrades
- Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-aurora-global-database-managed-minor/
- Published: 2026-02-10
Amazon Aurora Global Database now supports managed minor version upgrades across your global topology with minimal downtime, reducing the operational overhead of global cluster management.
Aurora Global Database allows a single Aurora database to span up to 11 AWS Regions, providing disaster recovery from Region-wide outages and enabling fast local reads for globally distributed applications. You can now perform managed minor version upgrades through the AWS Management Console, SDK, or CLI, and every cluster in the global database, across all Regions, is upgraded to the selected minor version, eliminating the need to manually upgrade each cluster individually.
This capability is currently supported only for Aurora PostgreSQL-compatible engines and is available in all commercial AWS Regions and the AWS GovCloud (US) Regions. To learn more about managed minor version upgrades, see our documentation.
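As a rough sketch of what a programmatic upgrade might look like, the snippet below builds the parameters for the existing RDS ModifyGlobalCluster API, which accepts a target engine version. The cluster identifier and version value are illustrative placeholders, and whether managed minor upgrades use exactly this call is an assumption; see the Aurora documentation for the confirmed procedure.

```python
# Sketch: trigger a managed minor version upgrade for an Aurora global
# database. The parameter shape follows the RDS ModifyGlobalCluster API;
# the identifier and target version below are illustrative placeholders.
def build_global_upgrade_request(global_cluster_id: str, target_version: str) -> dict:
    """Build the request dict for rds.modify_global_cluster()."""
    return {
        "GlobalClusterIdentifier": global_cluster_id,
        # A newer minor version of the same engine, e.g. aurora-postgresql
        "EngineVersion": target_version,
    }

params = build_global_upgrade_request("my-global-db", "16.4")
# With boto3 this would be passed as:
#   boto3.client("rds").modify_global_cluster(**params)
```

Because the upgrade is managed, a single call against the global cluster covers every Regional cluster in the topology.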
Amazon SageMaker HyperPod now supports node actions from the console
- Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-sagemaker-hyperpod-node-actions/
- Published: 2026-02-10
Amazon SageMaker HyperPod now enables you to manage individual cluster nodes directly from the AWS Console. HyperPod cluster operators managing large-scale AI/ML workloads often need to connect to nodes for troubleshooting, reboot unresponsive instances, or replace degraded nodes. Previously, connecting to a node required manually constructing SSM connection strings, while node recovery actions such as reboot and replace required CLI commands; the console now provides a single interface for all node actions.
With node actions in the console, you can now connect to any node via AWS Systems Manager (SSM). The console provides pre-populated SSM CLI commands with copy-to-clipboard support, as well as direct SSM session launch. While SageMaker HyperPod clusters already support automatic replacement and reboot of unhealthy instances, scenarios such as memory overruns or undetectable hardware degradation may still require manual intervention. Node actions in the console now provide a consistent way to manually reboot nodes to recover from transient issues, delete unhealthy nodes, and replace nodes, with batch operations that apply an action to multiple nodes simultaneously, letting you resolve node issues in minutes. This capability is especially valuable for time-sensitive AI training and inference workloads where minimizing downtime is essential.
This feature is available in all AWS Regions where Amazon SageMaker HyperPod is supported. You can perform all of these node actions on the HyperPod cluster management page in the console. Click the respective links to learn more about replacing or rebooting a node and about connecting to a node.
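For context on what the console pre-populates, the SSM target for a HyperPod node is conventionally built from the cluster ID, instance group name, and instance ID. The helper below sketches that string format under the assumption that it matches the documented `sagemaker-cluster:<cluster-id>_<group>-<instance-id>` pattern; the IDs used are placeholders.

```python
# Sketch: construct the SSM target string the console pre-populates for
# connecting to a HyperPod node. The format is assumed to follow the
# documented "sagemaker-cluster:<cluster-id>_<group>-<instance-id>"
# pattern; all IDs below are placeholders.
def hyperpod_ssm_target(cluster_id: str, instance_group: str, instance_id: str) -> str:
    return f"sagemaker-cluster:{cluster_id}_{instance_group}-{instance_id}"

target = hyperpod_ssm_target("abc123de", "worker-group-1", "i-0123456789abcdef0")
# Then, from a shell with SSM permissions:
#   aws ssm start-session --target <target>
```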
Amazon EC2 C8id, M8id, and R8id instances are available in additional regions
- Link: https://aws.amazon.com/about-aws/whats-new/2026/02/c8id-m8id-and-r8id-in-additional-regions/
- Published: 2026-02-10
Amazon Elastic Compute Cloud (EC2) C8id, M8id, and R8id instances powered by custom Intel Xeon 6 processors feature up to 384 vCPUs, 3TiB of memory, and 22.8TB of NVMe SSD storage and deliver up to 43% higher performance and 3.3x more memory bandwidth compared to previous generation C6id, M6id, and R6id instances. Starting today, C8id and M8id instances are available in Europe (Frankfurt) and Asia Pacific (Tokyo) regions, with M8id also available in Europe (Spain) region. Additionally, R8id instances are now available in Europe (Spain) and Asia Pacific (Tokyo) regions.
These instances deliver up to 46% higher performance for I/O-intensive database workloads, and up to 30% faster query results for I/O-intensive real-time data analytics, than previous sixth-generation instances. Additionally, these instances support Instance Bandwidth Configuration, which lets you flexibly shift up to 25% of bandwidth between network and Amazon EBS, allocating resources optimally for each workload.
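Instance Bandwidth Configuration is exposed at launch time. The sketch below builds RunInstances parameters using the `NetworkPerformanceOptions`/`BandwidthWeighting` field from the EC2 API, biasing bandwidth toward EBS for an I/O-heavy database node; the AMI ID is a placeholder.

```python
# Sketch: launch parameters that bias instance bandwidth toward EBS on an
# R8id instance. NetworkPerformanceOptions/BandwidthWeighting follows the
# EC2 RunInstances API ("ebs-1" shifts bandwidth toward EBS, "vpc-1"
# toward networking, "default" leaves the standard split). AMI ID is a
# placeholder.
def build_run_instances_request(ami_id: str, weighting: str = "ebs-1") -> dict:
    assert weighting in ("default", "vpc-1", "ebs-1")
    return {
        "ImageId": ami_id,
        "InstanceType": "r8id.4xlarge",
        "MinCount": 1,
        "MaxCount": 1,
        "NetworkPerformanceOptions": {"BandwidthWeighting": weighting},
    }

# With boto3: boto3.client("ec2").run_instances(**build_run_instances_request("ami-..."))
```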
C8id instances are ideal for compute-intensive workloads such as high-performance web servers, batch processing, distributed analytics, ad serving, video encoding, and gaming servers. M8id instances are well-suited for balanced workloads including application servers, microservices, enterprise applications, and small to medium databases. R8id instances are ideal for memory-intensive workloads such as in-memory databases, real-time big data analytics, large in-memory caches, and scientific computing applications.
C8id, M8id and R8id instances are available in US East (N. Virginia, Ohio), US West (Oregon), Europe (Frankfurt), and Asia Pacific (Tokyo) regions. M8id and R8id instances are additionally available in Europe (Spain) region. Customers can purchase these instances via Savings Plans, On-Demand instances, and Spot instances. For more information visit the Amazon EC2 instance type page.
Amazon Athena now supports 1-minute reservations and 4 DPU minimum capacity
- Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-athena-one-minute-capacity-reservations/
- Published: 2026-02-10
Amazon Athena now supports 1-minute Capacity Reservations and a lower minimum capacity of 4 Data Processing Units (DPUs) for all reservations. You can now get started with less capacity and make more frequent, fine-grained adjustments to match your workload patterns, with no long-term commitments and cost savings of up to 95% for short-duration query workloads.
Capacity Reservations provide dedicated serverless compute and are ideal for workloads requiring query prioritization and concurrency controls. You pay only for the capacity that you reserve, and there are no data-scanned charges. Reserved capacity works seamlessly with existing Athena queries and workgroups: simply attach workgroups to a reservation and submit queries, with no changes to your SQL queries or application code required.
To learn more, see the Athena User Guide and Athena pricing page.
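The workflow above maps onto two Athena API calls: creating a reservation and assigning workgroups to it. The sketch below builds both request payloads using the CreateCapacityReservation and PutCapacityAssignmentConfiguration shapes, with the new 4-DPU minimum from this announcement; the reservation and workgroup names are placeholders.

```python
# Sketch: reserve the new 4-DPU minimum and assign a workgroup to it.
# Request shapes follow the Athena CreateCapacityReservation and
# PutCapacityAssignmentConfiguration APIs; names are placeholders.
def build_reservation_requests(name: str, workgroup: str, dpus: int = 4) -> tuple:
    assert dpus >= 4  # new minimum per this announcement
    create = {"Name": name, "TargetDpus": dpus}
    assign = {
        "CapacityReservationName": name,
        "CapacityAssignments": [{"WorkGroupNames": [workgroup]}],
    }
    return create, assign

# With boto3:
#   athena = boto3.client("athena")
#   create, assign = build_reservation_requests("etl-reservation", "primary-wg")
#   athena.create_capacity_reservation(**create)
#   athena.put_capacity_assignment_configuration(**assign)
```

Queries submitted through the attached workgroup then run on the reserved capacity with no SQL changes.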
Amazon Bedrock adds support for six fully-managed open weights models
- Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-bedrock-adds-support-six-open-weights-models
- Published: 2026-02-10
Amazon Bedrock now supports six new models spanning frontier reasoning and agentic coding: DeepSeek V3.2, MiniMax M2.1, GLM 4.7, GLM 4.7 Flash, Kimi K2.5, and Qwen3 Coder Next. These six models bring customers access to the most capable open weights models available today, delivering frontier-class performance at significantly lower inference costs. They collectively cover the full spectrum of enterprise AI workloads: DeepSeek V3.2 and Kimi K2.5 push the frontier on reasoning and agentic intelligence, GLM 4.7 and MiniMax M2.1 set new standards for autonomous coding with massive output windows, and Qwen3 Coder Next and GLM 4.7 Flash offer lightweight, cost-efficient alternatives purpose-built for production deployment.
These models on Amazon Bedrock are powered by Project Mantle, a new distributed inference engine for large-scale machine learning model serving on Amazon Bedrock. Project Mantle simplifies and expedites onboarding of new models onto Amazon Bedrock, provides highly performant and reliable serverless inference with sophisticated quality of service controls, unlocks higher default customer quotas with automated capacity management and unified pools, and provides out-of-the-box compatibility with OpenAI API specifications.
To learn more and get started, visit the Amazon Bedrock console or the service documentation. To get started with Amazon Bedrock's OpenAI API-compatible service endpoints, see the documentation.
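To illustrate the OpenAI compatibility mentioned above, the sketch below builds a chat-completions request in the OpenAI wire format against a Bedrock runtime endpoint. The endpoint path is assumed from Bedrock's OpenAI-compatibility documentation, and the model ID is a hypothetical placeholder for one of the newly added models, not a confirmed identifier.

```python
# Sketch: a chat-completions payload for Bedrock's OpenAI-compatible
# endpoint. The "/openai/v1/chat/completions" path is assumed from the
# compatibility docs; the model ID passed by callers is a hypothetical
# placeholder, not a confirmed Bedrock model identifier.
def build_chat_request(region: str, model_id: str, prompt: str) -> tuple:
    url = f"https://bedrock-runtime.{region}.amazonaws.com/openai/v1/chat/completions"
    body = {
        "model": model_id,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, body

# An OpenAI-compatible client or plain HTTPS POST (with Bedrock
# credentials) can then send `body` to `url` unchanged.
```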
Amazon OpenSearch Serverless now supports Collection Groups
- Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-opensearch-serverless-supports-collection-groups/
- Published: 2026-02-10
Amazon OpenSearch Serverless now supports Collection Groups, a new capability that enables you to share OpenSearch Compute Units (OCUs) across collections with different AWS KMS keys. This new capability delivers enhanced cost optimization through a shared compute model that reduces overall OCU expenses while maintaining collection-level security and access controls. Additionally, Collection Groups introduce the ability to specify minimum OCU allocations alongside maximum OCU limits, allowing you to provision compute capacity upfront at startup for more predictable performance.
Collection Groups are particularly valuable for multi-tenant workloads where different tenants require data encryption with separate KMS keys while still benefiting from shared compute resources. By grouping collections together, you can optimize OCU utilization across workloads, reduce costs through resource sharing, and maintain the security isolation required by different encryption keys. The minimum OCU setting ensures your collections have guaranteed baseline capacity from the moment they start, eliminating cold start delays and providing consistent performance for latency-sensitive applications.
Collection groups are available in all regions where Amazon OpenSearch Serverless is currently available. To learn more about configuring and managing collection groups, visit the Amazon OpenSearch Serverless documentation.
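To make the min/max OCU idea concrete, the sketch below shows a hypothetical collection-group specification. The function name and field names are illustrative only, not the actual OpenSearch Serverless API; consult the documentation for the real call shapes.

```python
# Sketch (hypothetical API shape): a collection-group spec with minimum
# and maximum OCU allocations, shared across collections that use
# different KMS keys. Field names here are illustrative, not the actual
# OpenSearch Serverless API.
def build_collection_group_spec(name: str, min_ocus: int, max_ocus: int) -> dict:
    # Minimum OCUs guarantee baseline capacity at startup (avoiding cold
    # starts); maximum OCUs cap shared spend across the group.
    assert 0 < min_ocus <= max_ocus
    return {
        "name": name,
        "capacityLimits": {"minOcus": min_ocus, "maxOcus": max_ocus},
    }
```

The key design point is that the compute pool is defined once at the group level, while each member collection keeps its own KMS key and access policy.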
Amazon EKS Auto Mode Announces Enhanced Logging for its Managed Kubernetes Capabilities
- Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-eks-auto-mode-enhanced-logging
- Published: 2026-02-10
Amazon Elastic Kubernetes Service (Amazon EKS) Auto Mode’s managed capabilities can now be configured as log delivery sources using Amazon CloudWatch Vended Logs. This integration enables customers to monitor and troubleshoot their EKS Auto Mode clusters more effectively by automatically collecting logs from Auto Mode’s managed Kubernetes capabilities for compute autoscaling, block storage, load balancing, and pod networking.
Customers can configure log delivery for Auto Mode capabilities using CloudWatch APIs or the AWS Console. Each Auto Mode capability can be configured as a CloudWatch Vended Logs delivery source, enabling reliable, secure log delivery with built-in AWS authentication and authorization at a reduced price compared to standard CloudWatch Logs. Customers can deliver these logs to CloudWatch Logs, Amazon S3, or Amazon Kinesis Data Firehose destinations.
This feature is available today in all regions where EKS Auto Mode is available. Standard CloudWatch Logs, S3, or Kinesis charges apply depending on the chosen destination.
To learn more about EKS Auto Mode logging capabilities, visit the Amazon EKS documentation.
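Configuring a Vended Logs delivery generally takes three CloudWatch Logs calls: registering the source, registering the destination, and connecting the two. The sketch below builds those payloads using the PutDeliverySource, PutDeliveryDestination, and CreateDelivery API shapes; the `logType` value for an Auto Mode capability and all ARNs are illustrative placeholders.

```python
# Sketch: the three CloudWatch Vended Logs requests that wire an EKS Auto
# Mode capability to an S3 destination. Request shapes follow the
# CloudWatch Logs PutDeliverySource / PutDeliveryDestination /
# CreateDelivery APIs; the logType value and ARNs are placeholders.
def build_delivery_requests(cluster_arn: str, bucket_arn: str, log_type: str) -> list:
    source = {"name": "eks-auto-source", "resourceArn": cluster_arn,
              "logType": log_type}
    destination = {"name": "s3-dest", "deliveryDestinationConfiguration":
                   {"destinationResourceArn": bucket_arn}}
    # The destination ARN comes back from put_delivery_destination; shown
    # here as a placeholder.
    delivery = {"deliverySourceName": "eks-auto-source",
                "deliveryDestinationArn": "<destination-arn>"}
    return [source, destination, delivery]

# With boto3 these map to logs.put_delivery_source(**source),
# logs.put_delivery_destination(**destination), logs.create_delivery(**delivery).
```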
Amazon CloudWatch Alarm Mute Rules help eliminate alert fatigue
- Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-cloudwatch-alarm-muting-rules
- Published: 2026-02-10
Amazon CloudWatch now supports Alarm Mute Rules, enabling customers to temporarily mute alarm notifications during planned deployments, maintenance windows, and off-hours without compromising monitoring visibility. This new capability helps eliminate alert fatigue while maintaining complete situational awareness across your infrastructure.
Alarm Mute Rules let teams create one-time or recurring rules that silence notifications for up to 100 individual alarms during deployment windows, scheduled maintenance, or predictable off-hours periods when non-critical alerts become disruptive. Customers can mute actions for the OK, ALARM, and INSUFFICIENT_DATA states. When a mute rule expires, any previously muted actions are automatically triggered as long as the alarm remains in the same state it was in when the actions were muted, ensuring critical issues are never overlooked while preventing unnecessary alert fatigue.
This eliminates the operational risk of forgotten script-based workarounds and reduces alert noise during planned activities, enabling engineering teams to focus on core business initiatives rather than managing notification fatigue.
CloudWatch Alarm Mute Rules are available today in all AWS Regions that support alarm-level muting.
To get started, see CloudWatch User Guide for Alarm Mute Rules. You can create mute rules through the Amazon CloudWatch Console.
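As a hypothetical sketch of what a recurring rule might specify, the snippet below assembles a mute-rule definition covering a weekly maintenance window. The field names are illustrative only, not the actual CloudWatch API; the per-rule alarm limit and the three muteable states come from this announcement.

```python
# Sketch (hypothetical API shape): a recurring mute rule covering a
# weekly maintenance window. Field names here are illustrative; see the
# CloudWatch User Guide for the actual Alarm Mute Rules API. The 100-alarm
# limit and the three states come from this announcement.
def build_mute_rule(alarm_names: list, cron: str, duration_minutes: int) -> dict:
    assert len(alarm_names) <= 100  # per-rule limit from this announcement
    return {
        "alarmNames": alarm_names,
        "schedule": {"recurrence": cron, "durationMinutes": duration_minutes},
        "mutedStates": ["OK", "ALARM", "INSUFFICIENT_DATA"],
    }

# Example: mute two alarms for 2 hours every Saturday at 02:00 UTC.
rule = build_mute_rule(["cpu-high", "latency-p99"], "cron(0 2 ? * SAT *)", 120)
```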