AWS Updates Feed


AWS Updates - 2025-12-02

AWS News Blog

Announcing replication support and Intelligent-Tiering for Amazon S3 Tables

New features enable automatic cost optimization through intelligent storage tiering and simplified table replication across AWS Regions and accounts.


Amazon S3 Storage Lens adds performance metrics, support for billions of prefixes, and export to S3 Tables

New capabilities help optimize application performance, analyze unlimited prefixes, and simplify metrics analysis through S3 Tables integration.
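
For reference, enabling a Storage Lens dashboard with prefix-level metrics looks roughly like the boto3 sketch below; the config ID and account ID are placeholders, and the new performance metrics and S3 Tables export options are assumed to surface as additional configuration fields not shown here.

```python
import boto3

s3control = boto3.client("s3control")

# Minimal Storage Lens dashboard with account-, bucket-, and prefix-level
# storage metrics enabled. "my-dashboard" and the account ID are placeholders;
# the new performance metrics and S3 Tables export settings are not shown.
s3control.put_storage_lens_configuration(
    ConfigId="my-dashboard",
    AccountId="111122223333",
    StorageLensConfiguration={
        "Id": "my-dashboard",
        "IsEnabled": True,
        "AccountLevel": {
            "ActivityMetrics": {"IsEnabled": True},
            "BucketLevel": {
                "ActivityMetrics": {"IsEnabled": True},
                "PrefixLevel": {
                    "StorageMetrics": {
                        "IsEnabled": True,
                        "SelectionCriteria": {
                            "MaxDepth": 3,
                            "MinStorageBytesPercentage": 1.0,
                        },
                    }
                },
            },
        },
    },
)
```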


Amazon Bedrock AgentCore adds quality evaluations and policy controls for deploying trusted AI agents

Deploy AI agents with confidence using new quality evaluations and policy controls—enabling precise boundaries on agent actions, continuous quality monitoring, and experience-based learning while maintaining natural conversation flows.


Build multi-step applications and AI workflows with AWS Lambda durable functions

New Lambda capability lets you build applications that coordinate multiple steps reliably over extended periods, from seconds up to one year, without paying for idle compute time when waiting for external events or human decisions.


New capabilities to optimize costs and improve scalability on Amazon RDS for SQL Server and Oracle

Manage development, testing, and production database workloads more efficiently with new features including Developer Edition support for SQL Server, M7i and R7i instance support with Optimize CPU, and expanded storage options up to 256 TiB.
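
As a rough illustration of the instance-class and Optimize CPU changes, a hedged boto3 sketch might look like the following; the instance identifier and the db.r7i.2xlarge class name are placeholders, so confirm the classes actually offered for SQL Server and Oracle in your Region.

```python
import boto3

rds = boto3.client("rds")

# Move an existing instance to an R7i class and reduce licensed cores with
# Optimize CPU. The identifier and "db.r7i.2xlarge" are placeholders; check
# the class names supported for your engine and Region.
rds.modify_db_instance(
    DBInstanceIdentifier="my-sqlserver-instance",
    DBInstanceClass="db.r7i.2xlarge",
    ProcessorFeatures=[
        {"Name": "coreCount", "Value": "4"},
        {"Name": "threadsPerCore", "Value": "1"},
    ],
    ApplyImmediately=False,
)
```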


Introducing Database Savings Plans for AWS Databases

New Savings Plans pricing model offers lower prices in exchange for a usage commitment while preserving flexibility across database services and deployment options.


Amazon CloudWatch introduces unified data management and analytics for operations, security, and compliance

Reduce data management complexity and costs with automatic normalization across sources, native analytics integration, and built-in support for industry-standard formats like OCSF and Apache Iceberg.


New and enhanced AWS Support plans add AI capabilities to expert guidance

Prevent cloud infrastructure issues before they impact your business with AWS Support plans that combine AI-powered insights with expert guidance, offering faster response times and proactive monitoring across performance, security, and cost dimensions.


Amazon OpenSearch Service improves vector database performance and cost with GPU acceleration and auto-optimization

Build and optimize large-scale vector databases up to 10 times faster and at a quarter of the cost with new GPU acceleration and auto-optimization capabilities that automatically balance search quality, speed, and resource usage.
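
The GPU acceleration and auto-optimization are handled on the managed-service side; the sketch below only shows the standard opensearch-py k-NN index definition such workloads build on, with the endpoint and credentials as placeholders.

```python
from opensearchpy import OpenSearch

# Placeholders: replace with your domain endpoint and authentication method.
client = OpenSearch(
    hosts=[{"host": "my-domain.us-east-1.es.amazonaws.com", "port": 443}],
    http_auth=("user", "pass"),
    use_ssl=True,
)

# Standard k-NN vector index; the new GPU acceleration and auto-optimization
# are applied by the managed service and are not index settings shown here.
client.indices.create(
    index="product-embeddings",
    body={
        "settings": {"index": {"knn": True}},
        "mappings": {
            "properties": {
                "embedding": {
                    "type": "knn_vector",
                    "dimension": 768,
                    "method": {"name": "hnsw", "engine": "faiss", "space_type": "l2"},
                }
            }
        },
    },
)
```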


Amazon S3 Vectors now generally available with increased scale and performance

Scale vector storage and querying to new heights with S3 Vectors' general availability—now supporting up to 1 billion vectors per index, 100ms query latencies, and expanded regional availability, while reducing costs up to 90% compared to specialized databases.
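
A minimal sketch of writing and querying vectors with boto3 follows; the client operations and parameter names are assumptions based on the preview-era interface and may differ at general availability, and the bucket and index names are placeholders.

```python
import boto3

s3vectors = boto3.client("s3vectors")

# Assumes a vector bucket and an index with a matching dimension already
# exist; names, keys, and metadata below are placeholders.
s3vectors.put_vectors(
    vectorBucketName="my-vector-bucket",
    indexName="docs",
    vectors=[
        {"key": "doc-1", "data": {"float32": [0.1, 0.2, 0.3]}, "metadata": {"source": "faq"}},
    ],
)

results = s3vectors.query_vectors(
    vectorBucketName="my-vector-bucket",
    indexName="docs",
    queryVector={"float32": [0.1, 0.2, 0.3]},
    topK=5,
    returnMetadata=True,
)
```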


Amazon Bedrock adds 18 fully managed open weight models, including the new Mistral Large 3 and Ministral 3 models

Access fully managed foundation models from leading providers like Google, Kimi AI, MiniMax AI, Mistral AI, NVIDIA, OpenAI, and Qwen, including the new Mistral Large 3 and Ministral 3 3B, 8B, and 14B models through Amazon Bedrock.
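
Invoking one of these models goes through the standard Bedrock Converse API, as in the sketch below; the model ID string is a placeholder, so look up the actual identifier for Mistral Large 3 in the Bedrock model catalog.

```python
import boto3

bedrock = boto3.client("bedrock-runtime")

# The Converse API is the standard Bedrock invocation path; the model ID below
# is a placeholder, not the real identifier for Mistral Large 3.
response = bedrock.converse(
    modelId="mistral.mistral-large-3-placeholder",
    messages=[{"role": "user", "content": [{"text": "Summarize today's AWS announcements."}]}],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)
print(response["output"]["message"]["content"][0]["text"])
```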


Introducing Amazon EC2 X8aedz instances powered by 5th Gen AMD EPYC processors for memory-intensive workloads

New memory-optimized instances deliver up to 5 GHz processor speeds and 3 TiB of memory—ideal for electronic design automation workloads and memory-intensive databases requiring high single-threaded performance.
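
Launching one of the new instances uses the usual EC2 RunInstances call, as sketched below; the size suffix and AMI ID are placeholders since the blurb does not list the available sizes.

```python
import boto3

ec2 = boto3.client("ec2")

# The size suffix and AMI ID are placeholders; check which X8aedz sizes are
# offered in your Region before launching.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="x8aedz.32xlarge",
    MinCount=1,
    MaxCount=1,
)
```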


AWS DevOps Agent helps you accelerate incident response and improve system reliability (preview)

New service acts as an always-on DevOps engineer, helping you respond to incidents, identify root causes, and prevent future issues through systematic analysis of incidents and operational patterns.


Accelerate AI development using Amazon SageMaker AI with serverless MLflow

Simplify AI experimentation with zero-infrastructure MLflow that launches in minutes, scales automatically, and seamlessly integrates with SageMaker's model customization and pipeline capabilities.
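
A minimal sketch of tracking a run against a SageMaker-managed MLflow tracking server (with the sagemaker-mlflow plugin installed) is shown below; the tracking server ARN is a placeholder, and the serverless offering may expose this endpoint differently.

```python
import mlflow

# Point the MLflow client at a SageMaker-managed tracking server. The ARN below
# is a placeholder; the serverless offering may hand you this value differently.
mlflow.set_tracking_uri(
    "arn:aws:sagemaker:us-east-1:111122223333:mlflow-tracking-server/my-server"
)
mlflow.set_experiment("fine-tuning-experiments")

with mlflow.start_run():
    mlflow.log_param("learning_rate", 3e-4)
    mlflow.log_metric("eval_loss", 0.42)
```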