Oracle Database@AWS is now generally available in five additional AWS Regions: EU-West-1 (Dublin), EU-West-2 (London), AP-South-1 (Mumbai), AP-South-2 (Hyderabad), and AP-Northeast-2 (Seoul). Oracle Database@AWS enables customers to access Oracle Cloud Infrastructure (OCI) managed Oracle Exadata systems within AWS data centers. With this launch, customers in Europe and Asia Pacific with in-region data residency requirements can migrate on-premises Oracle Exadata and Oracle Real Application Clusters (RAC) applications to AWS. Dublin, Mumbai, and Hyderabad are available with two Availability Zones (AZs), while London and Seoul are available with one Availability Zone. Additionally, CA-Central-1 (Canada Central) and AP-Southeast-2 (Sydney) now support two Availability Zones, providing enhanced high availability for production workloads.
With this expansion, Oracle Database@AWS services are now available in twelve Regions: US-East-1 (N. Virginia), US-West-2 (Oregon), US-East-2 (Ohio), CA-Central-1 (Canada Central), EU-Central-1 (Frankfurt), EU-West-1 (Dublin), EU-West-2 (London), AP-Northeast-1 (Tokyo), AP-Southeast-2 (Sydney), AP-South-1 (Mumbai), AP-South-2 (Hyderabad), and AP-Northeast-2 (Seoul). To use Oracle Database@AWS services, request a private offer from Oracle through the AWS Marketplace, and use the AWS Management Console to set up and manage your databases.
To learn more, visit Oracle Database@AWS overview and documentation.
Amazon OpenSearch Service now supports I8ge instances, the latest generation of storage-optimized instances, offering the best performance for storage-intensive workloads.
Powered by AWS Graviton4 processors, I8ge instances deliver up to 60% better compute performance compared to previous-generation Graviton2-based storage-optimized Im4gn instances. I8ge instances use the latest third-generation AWS Nitro SSDs, local NVMe storage that delivers up to 55% better real-time storage performance per TB, along with up to 60% lower storage I/O latency and up to 75% lower storage I/O latency variability, compared to previous-generation Im4gn instances. Built on the AWS Nitro System, these instances offload CPU virtualization, storage, and networking functions to dedicated hardware and software, enhancing the performance and security of your workloads.
I8ge instances are available in sizes up to 18xlarge, with up to 45 TB of instance storage. At 112.5 Gbps, these instances have the highest networking bandwidth among storage-optimized instances available in Amazon OpenSearch Service.
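As a rough sketch of adopting the new instance type, the following boto3 call moves an existing domain's data nodes to I8ge. The domain name is hypothetical, and the "i8ge.18xlarge.search" instance-type string is an assumption based on the service's usual naming (compare "im4gn.16xlarge.search"); confirm the exact value on the supported instance types page before applying it.

```python
# Hedged sketch: switch an OpenSearch Service domain's data nodes to I8ge.
# Domain name and instance-type string are assumptions for illustration.
import boto3

opensearch = boto3.client("opensearch")

opensearch.update_domain_config(
    DomainName="logs-analytics",                # hypothetical domain
    ClusterConfig={
        "InstanceType": "i8ge.18xlarge.search",  # assumed type string
        "InstanceCount": 6,
    },
)
```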
I8ge instances support all OpenSearch versions and Elasticsearch (open source) versions 7.9 and 7.10.
Amazon OpenSearch Service supports I8ge instances in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (Stockholm), Asia Pacific (Malaysia), Asia Pacific (Mumbai), Asia Pacific (Singapore), and Asia Pacific (Sydney).
For Region-specific availability and pricing, visit our pricing page. To learn more about Amazon OpenSearch Service and its capabilities, visit our product page.
Amazon WorkSpaces Advisor is a new AI-powered tool that helps administrators quickly troubleshoot and resolve issues with Amazon WorkSpaces Personal. Using generative AI capabilities, it analyzes WorkSpace configurations, identifies problems, and provides actionable recommendations to restore service and optimize performance.
WorkSpaces Advisor streamlines administrative workflows by reducing the time needed to investigate and fix common issues. Administrators can leverage AI-driven insights to proactively maintain their virtual desktop infrastructure, improve end-user experience, and minimize downtime across their WorkSpaces.
Amazon WorkSpaces Advisor is now available in all AWS commercial regions where Amazon WorkSpaces is offered. Visit the Amazon WorkSpaces console to access WorkSpaces Advisor and begin troubleshooting your environment. Learn more in the feature blog and user guide.
Amazon Bedrock AgentCore Browser now supports OS-level interaction capabilities, enabling automation of browser workflows that require direct operating system control beyond Chrome DevTools Protocol (CDP) capabilities. This enhancement addresses automation scenarios where CDP alone is insufficient, such as mouse operations, print dialogs, native system alerts, and keyboard shortcuts. The feature serves AI agent developers, test automation engineers, and organizations building LLM-powered web interaction tools.
The new capabilities provide automation through mouse operations (click, move, drag, scroll), keyboard operations (type, press, shortcuts like ctrl+a and ctrl+p), and full desktop screenshots, all at OS-level coordinates extending beyond the browser viewport. Key use cases include automated testing with system dialog handling, document management workflows, complex UI interactions with right-click menus, and vision-based AI agents that require complete browser environment visibility.
This feature is available by default on all browser instances in all 14 AWS Regions where Amazon Bedrock AgentCore Browser is available: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Seoul), and Canada (Central).
To learn more, visit the AgentCore Browser documentation.
Amazon Elastic Kubernetes Service (Amazon EKS) managed node groups now support Auto Scaling warm pools, enabling you to maintain pre-initialized EC2 instances ready for rapid scale-out. This reduces node provisioning latency for applications with burst traffic patterns, time-sensitive workloads, or long instance boot times due to complex initialization scripts and software dependencies.
With warm pools enabled, your EKS managed node group maintains a pool of instances that have already completed OS initialization, user data execution, and software configuration. When demand increases and the Auto Scaling group scales out, instances transition from the warm pool to active service without repeating the full cold-start sequence. You can configure instances in the warm pool as Stopped (lower cost, longer transition) or Running (higher cost, faster transition). You can also enable reuse on scale-in, which returns instances to the warm pool during scale-down instead of terminating them. Warm pools work with Cluster Autoscaler without requiring any additional configuration.
You can enable warm pools through the EKS API, AWS CLI, AWS Management Console, or AWS CloudFormation by adding a warmPoolConfig to your CreateNodegroup or UpdateNodegroupConfig requests. Existing managed node groups that do not enable warm pools are unaffected.
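As a rough sketch of what enabling this could look like with boto3 (assuming an up-to-date SDK that includes the new parameter), the call below adds a warmPoolConfig to an existing node group. The keys inside warmPoolConfig are assumptions modeled on EC2 Auto Scaling warm pool settings, and the cluster and node group names are hypothetical; check the EKS API reference for the exact field names.

```python
# Illustrative sketch only: enable a warm pool on an existing EKS managed node group.
import boto3

eks = boto3.client("eks")

response = eks.update_nodegroup_config(
    clusterName="my-cluster",              # hypothetical cluster name
    nodegroupName="burst-workers",         # hypothetical node group name
    warmPoolConfig={                       # inner field names are assumptions
        "poolState": "Stopped",            # Stopped = lower cost, slower transition
        "minSize": 2,                      # keep 2 pre-initialized instances ready
        "maxGroupPreparedCapacity": 10,
        "instanceReusePolicy": {"reuseOnScaleIn": True},
    },
)
print(response["update"]["status"])
```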
This feature is available in all AWS Regions where Amazon EKS is available, except the China (Beijing) Region, operated by Sinnet, and the China (Ningxia) Region, operated by NWCD. To get started, see the Amazon EKS managed node groups documentation.
Amazon Interactive Video Service (Amazon IVS) Real-Time Streaming now supports redundant ingest, helping protect your live streams against source encoder failures and first-mile network issues. With redundant ingest, you can stream from two encoders simultaneously to a single stage with automated failover, ensuring uninterrupted delivery to your viewers.
Redundant ingest is ideal for live events, 24/7 live streams, or any scenario where uninterrupted delivery is essential. This capability helps you maintain viewer engagement during unexpected disruptions and enables continuous 24/7 streaming.
Amazon IVS is a managed live streaming solution designed to make low-latency or real-time video available to viewers around the world. Visit the AWS region table for a full list of AWS Regions where the Amazon IVS console and APIs for control and creation of video streams are available.
To learn more, please visit the Amazon IVS Real-Time Streaming RTMP ingest documentation page.
Amazon SageMaker HyperPod task governance now supports gang scheduling, which ensures all pods required for a distributed training job are ready before training begins. Administrators can configure gang scheduling to prevent wasted compute from partial job runs and avoid deadlocks from jobs waiting for resources.
Data scientists running distributed AI/ML training jobs on Amazon SageMaker HyperPod clusters using the EKS orchestrator require multiple pods to work together across nodes with pod-to-pod communication. When some pods start but others do not, jobs can hold onto resources without making progress, block other workloads, and increase costs. Gang scheduling resolves this by monitoring all pods in a workload and pulling the workload back if not all pods are ready within a set time. Pulled-back workloads are automatically requeued to prevent stalling. Administrators can adjust settings on the HyperPod Console, such as how long to wait for pods to be ready, how to handle node failures, whether to admit workloads one at a time to avoid deadlocks on busy clusters, and how retries are scheduled.
This capability is currently available for Amazon SageMaker HyperPod clusters using the EKS orchestrator across the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Jakarta), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Stockholm), Europe (Spain), and South America (São Paulo).
To learn more, visit the SageMaker HyperPod webpage and the HyperPod task governance documentation.
Starting today, domain name system (DNS) delegation for private hosted zone subdomains can be used with Route 53 inbound and outbound Resolver endpoints in AWS GovCloud (US) Regions. This allows you to delegate the authority for a subdomain from your on-premises infrastructure to the Route 53 Resolver cloud service and vice versa, enabling a simplified cloud experience across namespaces in AWS and on your own local infrastructure.
Many AWS customers delegate subdomain management to individual teams while maintaining central control of apex domains. Previously, we launched Route 53 Resolver delegation support for private hosted zones in commercial AWS Regions, enabling customers to use standard name server records to delegate subdomain authority between Route 53 and on-premises DNS — eliminating the need for conditional forwarding rules across their organization. With today's release, this delegation capability is now available for Route 53 Resolver endpoints in AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions as well.
Inbound and outbound delegation is provided at no additional cost beyond Resolver endpoint usage. For more details on pricing, visit the Route 53 pricing page, and to learn more about this feature, visit the developer guide.
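To illustrate the delegation itself, the sketch below creates the name server records for a subdomain in an existing private hosted zone using the ChangeResourceRecordSets API, so Route 53 Resolver can follow the delegation instead of relying on conditional forwarding rules. The hosted zone ID, domain names, and on-premises name server hostnames are hypothetical.

```python
# Minimal sketch, assuming a private hosted zone for example.internal already exists:
# delegate the corp.example.internal subdomain to two on-premises name servers.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",  # hypothetical private hosted zone ID
    ChangeBatch={
        "Comment": "Delegate corp.example.internal to on-premises DNS",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "corp.example.internal",
                    "Type": "NS",
                    "TTL": 300,
                    "ResourceRecords": [
                        {"Value": "ns1.corp.example.internal"},
                        {"Value": "ns2.corp.example.internal"},
                    ],
                },
            }
        ],
    },
)
```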
Entrepreneurs, thank you for the many applications we received for the December startup credits. When we launched the Kiro startup credit program last year, the response far exceeded our expectations. Thousands of applications came in, and the need was clear: early-stage teams need developer tools that scale as they grow. So we are bringing the program back. Starting today, eligible startups can apply for up to one year of Kiro Pro+ for free, and accelerate development with spec-driven development and advanced AI agents without worrying about cost.
Introduction: This blog post is jointly […] by Daiho Corporation (大豊建設株式会社) and Amazon Web Services Japan G.K. […]
In a panel discussion at AAAI 2026, researchers and practitioners from Microsoft, Mistral, the National University of Singapore, LinkedIn, and AWS discussed the practical challenges of putting coding agents into production. While research focuses on optimizing capability, production environments must optimize reliability, cost, latency, trust, and organizational fit all at once, and that gap shows up at several levels: architecture design, building scalable reinforcement learning environments, the divergence between evaluation benchmarks and reality, and building trust between humans and agents. The panel concluded that successful AI agents require not only better model performance but also mechanisms for trust, such as auditability and explainability, and that the human role is shifting from writing code to judging, delegating, and resolving ambiguity.
Students are the future decision-makers who will shape the world we live in. That belief is at the core of everything we are announcing today. We want to put serious tools in the hands of those who are still learning, experimenting, and figuring out what they want to build. Starting today, we are launching the Kiro Students plan. Eligible university students can use Kiro, with 1,000 credits per month, free for one year. No credit card required, and no trial-period limits.
When customers experience a security incident, they need to acquire forensic artifacts to identify root cause, extract indicators of compromise (IoCs), and validate remediation efforts. NIST 800-86, Guide to Integrating Forensic Techniques into Incident Response, defines digital forensics as a process comprising four basic phases: collection, examination, analysis, and reporting. This blog post focuses […]
In this post, we demonstrate how you can build a scalable, multi-tenant configuration service using the tagged storage pattern, an architectural approach that uses key prefixes (like tenant_config_ or param_config_) to automatically route configuration requests to the most appropriate AWS storage service. This pattern maintains strict tenant isolation and supports real-time, zero-downtime configuration updates through event-driven architecture, alleviating the cache staleness problem.
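To make the routing idea concrete, here is a minimal sketch of prefix-based routing with boto3. It is not the post's implementation: the DynamoDB table name, key attribute names, and Parameter Store path are assumptions for illustration.

```python
# Minimal sketch of the tagged storage pattern: route configuration lookups
# by key prefix to the storage service best suited to that kind of data.
import boto3

dynamodb = boto3.resource("dynamodb")
ssm = boto3.client("ssm")

CONFIG_TABLE = dynamodb.Table("TenantConfiguration")  # hypothetical table


def get_config(tenant_id: str, key: str):
    """Route a configuration lookup based on its key prefix."""
    if key.startswith("tenant_config_"):
        # Tenant-scoped items live in DynamoDB, partitioned by tenant ID.
        item = CONFIG_TABLE.get_item(Key={"tenant_id": tenant_id, "config_key": key})
        return item.get("Item", {}).get("value")
    if key.startswith("param_config_"):
        # Shared parameters live in Parameter Store under a per-tenant path.
        name = f"/tenants/{tenant_id}/{key}"  # hypothetical naming convention
        return ssm.get_parameter(Name=name, WithDecryption=True)["Parameter"]["Value"]
    raise ValueError(f"Unknown configuration key prefix: {key}")
```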
In this post, we explore where RFT is most effective, using the GSM8K mathematical reasoning dataset as a concrete example. We then walk through best practices for dataset preparation and reward function design, show how to monitor training progress using Amazon Bedrock metrics, and conclude with practical hyperparameter tuning guidelines informed by experiments across multiple models and use cases.
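As an example of the kind of reward function discussed for GSM8K-style tasks, the sketch below scores a completion by exact match on its final number; the grader actually used in the post may differ.

```python
# Illustrative reward function for GSM8K-style math problems: extract the last
# number in the completion and compare it with the ground-truth answer.
import re


def gsm8k_reward(completion: str, ground_truth: str) -> float:
    """Binary reward: 1.0 if the final number in the completion matches the label."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", completion.replace(",", ""))
    if not numbers:
        return 0.0  # no numeric answer produced
    return 1.0 if float(numbers[-1]) == float(ground_truth) else 0.0


print(gsm8k_reward("... so the farmer has 18 eggs left.", "18"))  # prints 1.0
```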
This post walks you through understanding audio embeddings, implementing Amazon Nova Multimodal Embeddings, and building a practical search system for your audio content. You'll learn how embeddings represent audio as vectors, explore the technical capabilities of Amazon Nova, and see hands-on code examples for indexing and querying your audio libraries. By the end, you'll have the knowledge to deploy production-ready audio search capabilities.
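As a heavily hedged sketch of the indexing-and-querying flow, the snippet below embeds an audio clip through the Bedrock runtime and ranks stored clips by cosine similarity. The model ID and the request/response field names are assumptions made for illustration; consult the Amazon Nova documentation for the exact schema.

```python
# Hedged sketch: embed an audio clip and rank stored clips by cosine similarity.
import base64
import json

import boto3
import numpy as np

bedrock = boto3.client("bedrock-runtime")
MODEL_ID = "amazon.nova-multimodal-embeddings-v1:0"  # assumed model ID


def embed_audio(path: str) -> np.ndarray:
    with open(path, "rb") as f:
        audio_b64 = base64.b64encode(f.read()).decode()
    body = {"audio": {"format": "wav", "source": {"bytes": audio_b64}}}  # assumed schema
    response = bedrock.invoke_model(modelId=MODEL_ID, body=json.dumps(body))
    payload = json.loads(response["body"].read())
    return np.array(payload["embedding"])  # assumed response field


def top_matches(query_vec: np.ndarray, index: dict[str, np.ndarray], k: int = 5):
    """Return the k stored clips most similar to the query embedding."""
    scores = {
        name: float(np.dot(query_vec, vec) / (np.linalg.norm(query_vec) * np.linalg.norm(vec)))
        for name, vec in index.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]
```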
In healthcare and life sciences, AI agents help organizations process clinical data, submit regulatory filings, automate medical coding, and accelerate drug development and commercialization. However, the sensitive nature of healthcare data and regulatory requirements like Good Practice (GxP) compliance require human oversight at key decision points. This is where human-in-the-loop (HITL) constructs become essential. In this post, you will learn four practical approaches to implementing human-in-the-loop constructs using AWS services.
In this post, we'll walk you through a complete implementation of model fine-tuning in Amazon Bedrock using Amazon Nova models, demonstrating each step through an intent classifier example that achieves superior performance on a domain-specific task. Throughout this guide, you'll learn to prepare high-quality training data that drives meaningful model improvements, configure hyperparameters to optimize learning without overfitting, and deploy your fine-tuned model for improved accuracy and reduced latency. We'll show you how to evaluate your results using training metrics and loss curves.
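To give a feel for data preparation, the sketch below writes one intent-classification training record to a JSONL file. The "bedrock-conversation-2024" schema shown is an assumption about the Nova fine-tuning format, and the labels and prompts are invented; verify the exact JSONL schema in the Amazon Bedrock model customization documentation before preparing your dataset.

```python
# Hedged sketch of a single intent-classifier training record for fine-tuning.
import json

record = {
    "schemaVersion": "bedrock-conversation-2024",  # assumed schema version
    "system": [{"text": "Classify the user's request into one intent label."}],
    "messages": [
        {"role": "user", "content": [{"text": "I need to reset my account password."}]},
        {"role": "assistant", "content": [{"text": "account_password_reset"}]},
    ],
}

# Fine-tuning datasets are typically uploaded as JSONL: one record per line in S3.
with open("train.jsonl", "a") as f:
    f.write(json.dumps(record) + "\n")
```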