New features enable automatic cost optimization through intelligent storage tiering and simplified table replication across AWS Regions and accounts.
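The tiering and replication settings themselves aren't detailed in this roundup, but here is a minimal sketch, assuming the boto3 `s3tables` client, of the kind of table bucket and table these features apply to (all names are placeholders):

```python
import boto3

s3tables = boto3.client("s3tables")

# Create a table bucket, a namespace, and an Iceberg table.
# The new tiering and replication configuration is not shown here.
bucket = s3tables.create_table_bucket(name="analytics-tables")

s3tables.create_namespace(
    tableBucketARN=bucket["arn"],
    namespace=["sales"],
)

s3tables.create_table(
    tableBucketARN=bucket["arn"],
    namespace="sales",
    name="orders",
    format="ICEBERG",
)
```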
New capabilities help optimize application performance, analyze unlimited prefixes, and simplify metrics analysis through S3 Tables integration.
Deploy AI agents with confidence using new quality evaluations and policy controls—enabling precise boundaries on agent actions, continuous quality monitoring, and experience-based learning while maintaining natural conversation flows.
New Lambda capability lets you build applications that reliably coordinate multiple steps over extended periods, from seconds up to one year, without paying for idle compute time while waiting for external events or human decisions.
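The new Lambda API itself isn't shown in this roundup. As a rough illustration of the underlying pattern (pause until an external decision arrives, then resume, with no compute billed while waiting), here is the familiar Step Functions callback-token approach; the event field names are illustrative:

```python
import boto3

sfn = boto3.client("stepfunctions")

def lambda_handler(event, context):
    # Invoked when the external party (for example, a human approver) decides.
    # The task token was handed out earlier by a state machine step that uses
    # the .waitForTaskToken integration pattern.
    task_token = event["taskToken"]
    if event.get("approved"):
        sfn.send_task_success(
            taskToken=task_token,
            output='{"decision": "approved"}',
        )
    else:
        sfn.send_task_failure(
            taskToken=task_token,
            error="Rejected",
            cause="Reviewer declined the request",
        )
```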
Manage development, testing, and production database workloads more efficiently with new features including SQL Server Developer Edition support, M7i and R7i instances with Optimize CPUs, and expanded storage options up to 256 TiB.
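As a sketch using the standard boto3 RDS API, here is roughly how Optimize CPUs is expressed when creating an instance on an M7i class. The instance class and engine value below are illustrative, and the Developer Edition engine identifier isn't covered here:

```python
import boto3

rds = boto3.client("rds")

rds.create_db_instance(
    DBInstanceIdentifier="dev-sqlserver",
    Engine="sqlserver-se",                 # illustrative edition
    LicenseModel="license-included",
    DBInstanceClass="db.m7i.4xlarge",      # illustrative M7i class
    MasterUsername="admin",
    ManageMasterUserPassword=True,         # let RDS manage the password
    AllocatedStorage=2048,                 # GiB
    ProcessorFeatures=[                    # Optimize CPUs: trim licensed vCPUs
        {"Name": "coreCount", "Value": "4"},
        {"Name": "threadsPerCore", "Value": "1"},
    ],
)
```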
New pricing model helps you maintain cost efficiency while preserving flexibility in your choice of database services and deployment options.
Reduce data management complexity and costs with automatic normalization across sources, native analytics integration, and built-in support for industry-standard formats like OCSF and Apache Iceberg.
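As a hedged sketch of what the analytics integration can look like once the normalized data lands in Iceberg tables, here is an Athena query against a hypothetical OCSF table; the database, table, and output location are placeholders:

```python
import boto3

athena = boto3.client("athena")

# Query a hypothetical OCSF-formatted Iceberg table for high-severity events.
response = athena.start_query_execution(
    QueryString="""
        SELECT time, severity, class_name
        FROM security_db.cloudtrail_events  -- hypothetical table
        WHERE severity = 'High'
        LIMIT 20
    """,
    QueryExecutionContext={"Database": "security_db"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
print(response["QueryExecutionId"])
```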
Prevent cloud infrastructure issues before they impact your business with AWS Support plans that combine AI-powered insights with expert guidance, offering faster response times and proactive monitoring across performance, security, and cost dimensions.
Build and optimize large-scale vector databases up to 10 times faster and at a quarter of the cost with new GPU acceleration and auto-optimization capabilities that balance search quality, speed, and resource usage for you.
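The roundup doesn't spell out the engine here, but assuming an OpenSearch-style k-NN index, the application-side setup stays the same; GPU acceleration and auto-optimization happen on the service side. A minimal sketch with opensearch-py (endpoint placeholder, authentication omitted):

```python
from opensearchpy import OpenSearch

client = OpenSearch(
    hosts=[{"host": "my-domain-endpoint.example.com", "port": 443}],
    use_ssl=True,
)

# Create an index with a k-NN vector field; index builds and tuning are
# handled by the service.
client.indices.create(
    index="docs",
    body={
        "settings": {"index.knn": True},
        "mappings": {
            "properties": {
                "embedding": {"type": "knn_vector", "dimension": 768},
                "text": {"type": "text"},
            }
        },
    },
)
```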
Scale vector storage and querying to new heights with the general availability of S3 Vectors, now supporting up to 1 billion vectors per index, 100 ms query latencies, and expanded regional availability, while reducing costs by up to 90% compared to specialized databases.
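A minimal sketch of writing and querying vectors, assuming the preview-era boto3 `s3vectors` client shapes; bucket and index names are placeholders, real embeddings have far more dimensions, and parameter names may differ at GA:

```python
import boto3

s3v = boto3.client("s3vectors")

# Write a vector with metadata into an existing vector index.
s3v.put_vectors(
    vectorBucketName="my-vector-bucket",
    indexName="docs-index",
    vectors=[{
        "key": "doc-001",
        "data": {"float32": [0.12, 0.05, 0.98]},
        "metadata": {"source": "faq"},
    }],
)

# Run a similarity query and print the raw response.
print(s3v.query_vectors(
    vectorBucketName="my-vector-bucket",
    indexName="docs-index",
    queryVector={"float32": [0.11, 0.07, 0.95]},
    topK=3,
    returnMetadata=True,
))
```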
Access fully managed foundation models in Amazon Bedrock from leading providers like Google, Kimi AI, MiniMax AI, Mistral AI, NVIDIA, OpenAI, and Qwen, including the new Mistral Large 3 and the Ministral 3 models in 3B, 8B, and 14B sizes.
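A quick sketch using the Bedrock Converse API; the model ID below is a placeholder, not a confirmed identifier for Mistral Large 3:

```python
import boto3

bedrock = boto3.client("bedrock-runtime")

# Send a single-turn request through the unified Converse API.
response = bedrock.converse(
    modelId="mistral.mistral-large-3-v1:0",   # placeholder model ID
    messages=[{"role": "user", "content": [{"text": "Summarize OCSF in one line."}]}],
    inferenceConfig={"maxTokens": 256, "temperature": 0.2},
)
print(response["output"]["message"]["content"][0]["text"])
```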
New memory-optimized instances deliver up to 5 GHz processor speeds and 3 TiB of memory—ideal for electronic design automation workloads and memory-intensive databases requiring high single-threaded performance.
New service acts as an always-on DevOps engineer, helping you respond to incidents, identify root causes, and prevent future issues through systematic analysis of incidents and operational patterns.
Simplify AI experimentation with zero-infrastructure MLflow that launches in minutes, scales automatically, and seamlessly integrates with SageMaker's model customization and pipeline capabilities.
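A minimal sketch of pointing MLflow at the managed tracking server, assuming the sagemaker-mlflow plugin is installed; the tracking server ARN is a placeholder, and the MLflow calls themselves are unchanged:

```python
import mlflow

# Use the SageMaker managed MLflow tracking server as the tracking backend.
mlflow.set_tracking_uri(
    "arn:aws:sagemaker:us-east-1:111122223333:mlflow-tracking-server/my-server"
)
mlflow.set_experiment("prompt-tuning")

# Log parameters and metrics exactly as with any other MLflow backend.
with mlflow.start_run():
    mlflow.log_param("learning_rate", 3e-4)
    mlflow.log_metric("eval_loss", 0.42)
```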