Amazon Relational Database Service (Amazon RDS) for Oracle now supports Oracle Management Agent (OMA) version 24.1.0.0.v1 for Oracle Enterprise Manager (OEM) Cloud Control 24ai Release 1. OEM 24ai offers web-based tools to monitor and manage your Oracle databases. Amazon RDS for Oracle installs OMA, which communicates with your Oracle Management Service (OMS) to provide monitoring information. Customers running OMS version 24.1 Release 1 or later can now manage databases by installing OMA 24.1.0.0.v1.
To enable version 24.1.0.0.v1 of OMA for OEM 24ai Release 1 or later, navigate to "Option Groups" in the AWS Management Console, add the "OEM_AGENT" option to a new or existing option group, and set AGENT_VERSION to "24.1.0.0.v1". You will also need to configure option settings, including the OMS hostname (or IP address), port, agent registration password, and a minimum TLS version of TLSv1.2, so that OMA on your Amazon RDS for Oracle database instances can communicate with your existing OMS stack.
For more information on enabling and configuring OEM agents, refer to the Amazon RDS for Oracle documentation.
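The console steps above can also be expressed programmatically. The sketch below builds the request parameters you would pass to an SDK call such as boto3's `rds.modify_option_group`; the option-group name is a placeholder, and the option-setting names (OMS_HOST, OMS_PORT, AGENT_REGISTRATION_PASSWORD, MINIMUM_TLS_VERSION) are illustrative assumptions based on this announcement, so confirm the exact names in the RDS documentation.

```python
# Sketch of the parameters for enabling the OEM_AGENT option on an option
# group. Setting names and the agent port are assumptions to be verified
# against the Amazon RDS for Oracle documentation.
params = {
    "OptionGroupName": "oem-24ai-options",  # your new or existing option group
    "ApplyImmediately": True,
    "OptionsToInclude": [
        {
            "OptionName": "OEM_AGENT",
            "Port": 3872,  # agent listener port; confirm in the RDS docs
            "OptionSettings": [
                {"Name": "AGENT_VERSION", "Value": "24.1.0.0.v1"},
                {"Name": "OMS_HOST", "Value": "oms.example.com"},
                {"Name": "OMS_PORT", "Value": "4903"},
                {"Name": "AGENT_REGISTRATION_PASSWORD", "Value": "<password>"},
                {"Name": "MINIMUM_TLS_VERSION", "Value": "TLSv1.2"},
            ],
        }
    ],
}

# Flatten the settings for inspection before sending the request.
settings = {s["Name"]: s["Value"] for s in params["OptionsToInclude"][0]["OptionSettings"]}
print(settings["AGENT_VERSION"])
```

You would then pass `params` to the ModifyOptionGroup API (for example, `boto3.client("rds").modify_option_group(**params)`) and attach the option group to your DB instance.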
As announced on November 19, 2025, Amazon S3 is now deploying a new default bucket security setting that automatically disables server-side encryption with customer-provided keys (SSE-C) for all new general purpose buckets. For existing buckets in AWS accounts with no SSE-C encrypted objects, S3 will also disable SSE-C for all new write requests. For AWS accounts with SSE-C usage, S3 will not change the bucket encryption configuration on any existing buckets in those accounts. To learn more about this change, visit the S3 User Guide.
Amazon S3 will deploy this new default to both new and existing general purpose buckets in 37 AWS Regions including the AWS China and AWS GovCloud (US) Regions over the next few weeks.
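Accounts that want to block SSE-C writes ahead of the rollout can already do so with a bucket policy. The sketch below builds one well-known pattern: denying any PutObject request that carries the SSE-C algorithm header. The bucket name is a placeholder, and you would attach the resulting JSON with the PutBucketPolicy API.

```python
import json

# Deny uploads that supply a customer-provided encryption key. The "Null"
# condition evaluating to "false" means the SSE-C header IS present on the
# request, so the Deny applies only to SSE-C writes.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenySSECUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::example-bucket/*",  # placeholder bucket
            "Condition": {
                "Null": {"s3:x-amz-server-side-encryption-customer-algorithm": "false"}
            },
        }
    ],
}
policy_json = json.dumps(policy, indent=2)
print(policy_json)
```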
We're excited to announce the launch of a new Greengrass component SDK for AWS IoT Greengrass applications. This new SDK addresses the challenge of deploying sophisticated applications on edge devices with limited resources, enabling industries such as automotive, industrial IoT, robotics, and smart buildings to run more complex AI and ML workloads at the edge. Moreover, the new SDK maintains full compatibility with both AWS IoT Greengrass nucleus and nucleus lite capabilities.
The new Greengrass component SDK offers a significant memory footprint reduction, with a footprint of less than 0.5 MB compared to 30 MB for the existing SDK, enabling deployment on resource-constrained devices. It provides native C, C++, and Rust bindings optimized for performance- and cost-critical embedded applications. This SDK opens new possibilities for edge computing applications where memory constraints have previously been a limiting factor.
The new Greengrass component SDK is available in all AWS Regions where AWS IoT Greengrass is available.
AWS today announced the general availability of Smithy-Java, an open-source Java framework for generating type-safe clients and standalone classes from Smithy models. Smithy-Java addresses one of the most consistently requested capabilities from enterprise Smithy users: production-grade Java SDK generation. The framework lets you generate clients directly from your models, replacing the hand-written clients and async patterns that increase cognitive load and maintenance burden for developers building modern Java applications.
Built on Java 21's virtual threads, Smithy-Java provides a blocking-style API that is both simpler to use and competitive in performance with complex async alternatives. Key benefits include auto-generated type-safe clients from Smithy models and protocol flexibility, with runtime protocol swapping for gradual migration paths. The GA release includes the Java client code generator; support for AWS SigV4 and all major AWS protocols (AWS JSON, REST-JSON, REST-XML, AWS Query, and Smithy RPCv2-CBOR); standalone type code generation for sharing types across multiple services or for data modeling; and a dynamic client that can call Smithy services without a codegen step.
The framework pioneers two architectural innovations: schema-driven serialization that reduces SDK size while improving performance, and binary decision diagrams (BDD) for endpoint rules resolution delivering significant latency improvements. Internal Amazon teams have already built complete services in weeks rather than months using Smithy-Java, with service teams depending on it internally. The framework is ideal for organizations invested in the Smithy ecosystem, teams requiring protocol-agnostic development, and developers building new services with generated server stubs.
To learn more, visit our blog post and follow the Smithy Java Quickstart guide.
Customers can now create Amazon FSx for OpenZFS file systems in the AWS Asia Pacific (Melbourne) Region, providing fully managed shared file storage built on the OpenZFS file system.
Amazon FSx makes it easier and more cost effective to launch, run, and scale feature-rich, high-performance file systems in the cloud. It supports a wide range of workloads with its reliability, security, scalability, and broad set of capabilities. Amazon FSx for OpenZFS provides fully managed, cost-effective, shared file storage powered by the popular OpenZFS file system, and is designed to deliver sub-millisecond latencies and multi-GB/s throughput along with rich ZFS-powered data management capabilities (like snapshots, data cloning, and compression).
To learn more about Amazon FSx for OpenZFS, visit our product page, and see the AWS Region Table for complete regional availability information.
Amazon WorkSpaces Personal now provides unique, publicly resolvable Domain Name System (DNS) names for each AWS PrivateLink Virtual Private Cloud (VPC) endpoint, enabling enterprise customers to deploy WorkSpaces across multiple AWS VPCs and accounts without DNS resolution conflicts. Each interface VPC endpoint now receives a globally unique AWS-managed DNS name in addition to the previous generic DNS name that was shared across all endpoints.
This enhancement enables customers to route traffic appropriately in multi-account environments with centralized DNS infrastructure. Customers can now deploy WorkSpaces Personal directories across different VPCs and AWS accounts while maintaining proper security isolation, eliminating the DNS name collision that previously prevented customers from using separate interface VPC endpoints across accounts. The publicly resolvable DNS names simplify configuration while maintaining security, as they resolve to private IP addresses accessible only from within the respective VPC. The unique DNS names are automatically managed by AWS throughout their lifecycle, requiring no additional Route 53 configuration or custom DNS management.
This feature is available in all AWS Regions where AWS PrivateLink is available for Amazon WorkSpaces Personal.
To learn more, see Amazon WorkSpaces PrivateLink documentation. For configuration details, refer to the WorkSpaces Administration Guide. Existing customers will automatically benefit from this enhancement, as the system maintains backward compatibility with previous DNS configurations.
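In practice, each interface VPC endpoint's DNS entries can be read from the EC2 DescribeVpcEndpoints response, where the endpoint-specific name now appears alongside the generic regional name. The response fragment below is illustrative (the endpoint ID and name shapes are assumptions for the sketch), and the helper shows one way to select the unique, per-endpoint name.

```python
# Illustrative DescribeVpcEndpoints-shaped response fragment. The DNS name
# formats below are assumptions for the sketch, not guaranteed shapes.
response = {
    "VpcEndpoints": [
        {
            "VpcEndpointId": "vpce-0abc123",
            "DnsEntries": [
                # unique, endpoint-specific name (new behavior)
                {"DnsName": "vpce-0abc123-x1y2z3.workspaces.us-east-1.vpce.amazonaws.com"},
                # generic name previously shared across all endpoints
                {"DnsName": "workspaces.us-east-1.amazonaws.com"},
            ],
        }
    ]
}

def endpoint_specific_names(resp):
    """Return only the DNS names that embed the endpoint's own ID."""
    names = []
    for ep in resp["VpcEndpoints"]:
        for entry in ep["DnsEntries"]:
            if entry["DnsName"].startswith(ep["VpcEndpointId"]):
                names.append(entry["DnsName"])
    return names

print(endpoint_specific_names(response))
```

Centralized DNS setups can then forward each account's WorkSpaces traffic to its own endpoint-specific name without collisions.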
Today, AWS announces support for policy store aliases and for named policies and policy templates in Amazon Verified Permissions, simplifying multi-tenant deployments and day-to-day policy management. Amazon Verified Permissions is a fine-grained authorization service that helps you manage and enforce permissions across your applications using Cedar policies. These new capabilities eliminate the need to maintain separate mapping tables for associating tenant identifiers with policy store IDs or for tracking individual policy and template IDs.
With policy store aliases, multi-tenant application developers can assign a human-readable alias based on a tenant identifier and use it in any API call, removing the need for a lookup table. Similarly, named policies and policy templates let you reference policies by meaningful names instead of system-generated IDs, making it easier to manage authorization logic as your application grows.
Amazon Verified Permissions policy store aliases and named policies and templates are available in all AWS Regions where Amazon Verified Permissions is available. For a full list of supported Regions, see Amazon Verified Permissions endpoints and quotas.
To get started, see Policy store aliases and Creating static policies in the Amazon Verified Permissions User Guide, or visit the Amazon Verified Permissions API Reference.
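The mapping-table elimination can be sketched in a few lines. Before aliases, a multi-tenant application needed its own lookup from tenant ID to policy store ID; with aliases, the reference can be derived directly. The alias naming convention below ("tenant-&lt;id&gt;") and the store IDs are illustrative, not a service requirement.

```python
# Before: applications maintained a separate mapping table from tenant
# identifier to system-generated policy store ID (IDs here are made up).
tenant_to_store = {
    "acme": "PSEXAMPLEabcdefg111111",
    "globex": "PSEXAMPLEabcdefg222222",
}

def store_ref_before(tenant_id):
    return tenant_to_store[tenant_id]  # requires lookup infrastructure

# After: assign each policy store a human-readable alias derived from the
# tenant identifier, and pass that alias wherever a policy store ID was
# accepted. "tenant-<id>" is an assumed convention for this sketch.
def store_ref_after(tenant_id):
    return f"tenant-{tenant_id}"

print(store_ref_after("acme"))  # → tenant-acme
```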
Amazon SageMaker Unified Studio notebooks now support import/export capabilities, enabling migration from JupyterLab and other notebook platforms. This release also introduces developer acceleration features, including cell reordering, keyboard shortcuts, cell renaming, and multi-line SQL support, designed to enhance productivity for data engineering and data science professionals working with notebook-based workflows.
The new import/export functionality supports .ipynb, .json, and .py formats while preserving cell types and metadata, making platform migration straightforward. You can export notebooks in four formats: Jupyter notebook with requirements (.zip), standard .ipynb, Python script (.py), and the SageMaker Unified Studio native format (.json). Developer acceleration features let you reorder cells without copy-paste duplication, assign custom names to cells for improved navigation in large notebooks, use familiar keyboard shortcuts for faster development, and execute multiple SQL statements in a single cell, with results displayed in separate tabs for easy comparison and analysis.
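To illustrate what an .ipynb-to-.py export involves, the sketch below walks a minimal hand-built notebook document in nbformat-4 shape and concatenates its code cells, keeping markdown cells as comments. This is a simplified model of the idea, not SageMaker Unified Studio's exporter, which also preserves cell names and metadata.

```python
# Minimal notebook document in nbformat-4 shape (hand-built for the sketch).
notebook = {
    "nbformat": 4,
    "cells": [
        {"cell_type": "markdown", "source": ["# Load data\n"]},
        {"cell_type": "code", "source": ["import pandas as pd\n", "df = pd.read_csv('data.csv')\n"]},
        {"cell_type": "code", "source": ["print(len(df))\n"]},
    ],
}

def export_to_py(nb):
    """Concatenate code cells into a script; keep markdown cells as comments."""
    lines = []
    for cell in nb["cells"]:
        if cell["cell_type"] == "code":
            lines.append("".join(cell["source"]))
        else:
            lines.append("".join(f"# {line}" for line in cell["source"]))
    return "\n".join(lines)

script = export_to_py(notebook)
print(script)
```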
This feature is available in all AWS Regions where Amazon SageMaker Unified Studio is available. To learn more, visit the Amazon SageMaker Unified Studio marketing page and user guide.
Last week, I visited AWS Hong Kong User Group with my team. Hong Kong has a small but strong community, and their energy and passion are high. They recently started a new AI user group, and we hope more people will join. I was able to strengthen my bond with the community through great food […]
This blog explains autonomous incident response using AWS DevOps Agent. Traditionally, SRE engineers had to manually gather information from multiple logs and tools when an incident occurred, taking hours to identify the root cause. AWS DevOps Agent is a fully managed AI operations team member with application topology understanding, cross-account investigation, and continuous learning capabilities. Through six key capabilities (Context, Control, Convenience, Collaboration, Continuous Learning, and Cost Effective), it delivers full-fledged operational support that sets it apart from simple LLM wrappers. By reading this blog, you will understand how AWS DevOps Agent reduces operational complexity and automates and accelerates incident response.
CloudWatch metrics for BGP monitoring added to AWS Direct Connect; Amazon SageMaker Data Agent now available in the Unified Studio Query Editor; Amazon Athena launches Capacity Reservations in additional Regions; Amazon CloudFront now supports BYOIP for IPv6; Amazon RDS for Oracle now available on AWS Outposts; automated codebase analysis for AWS Transform custom reaches GA; AWS Security Agent on-demand penetration testing reaches GA; AWS DevOps Agent reaches GA; Amazon S3 Vectors expands to 17 Regions; agentic AI for log analytics comes to Amazon OpenSearch Service; and more
Amazon CloudWatch now supports native ingestion of OpenTelemetry metrics and PromQL queries. A high-cardinality metric store supporting up to 150 labels per metric lets you send label-rich metrics from Kubernetes and microservices directly to CloudWatch without transformation. Combined with automatic enrichment with AWS resource metadata, you can centrally manage infrastructure, container, and application metrics and query them all with PromQL.
This article shows how to use Autodesk 3ds Max on AWS Deadline Cloud service-managed fleets (SMF). It walks through using a configuration script to install 3ds Max on fleet workers, combining the convenience of managed infrastructure with the flexibility of your rendering pipeline.
We love the terminal. If you're reading this, you probably do too. The CLI has a unique appeal: speed, focus, and directness. It's fast, responsive, and returns results instantly. Kiro CLI already brings the power of agentic coding to the terminal through direct chat with agents, plan creation, and multi-step task execution. In pursuing an even better experience, we concluded that a redesigned interface was needed. Today, we're introducing the refreshed Kiro CLI UX. It ships as an experimental mode that you can switch back from at any time, and we'd love your feedback. Just install Kiro CLI and type kiro-cli --tui on the command line to try it.
This week's "Weekly Generative AI with AWS" is packed with content on putting generative AI into practice and rolling it out across organizations, including a case study from Taisei, where non-engineers built a contract-management AI agent with Amazon Bedrock and Amazon Q Developer, and an organizational, persona-based guide to moving agentic AI from PoC to production. On the service side, it covers key updates supporting production operation of AI agents: the GA of AWS DevOps Agent, the GA of Amazon Bedrock AgentCore Evaluations and of cross-account safeguards for Amazon Bedrock Guardrails, agentic AI features added to Amazon OpenSearch Service, and the expansion of Amazon S3 Vectors to 31 Regions.
Like many of you, I am a parent. And like you, I think about the world we are building for our children […]
Kiro adds two new open-weight models: MiniMax M2.5 and GLM-5. MiniMax M2.5 is a low-cost model with a 0.25x credit multiplier that achieves 80.2% on SWE-Bench Verified, making it ideal for multi-step implementations and long agent sessions. GLM-5 is a large MoE model with a 200K context window that excels at repository-scale, complex architectural changes and long-running agent workflows. Both models are immediately available from the IDE and the CLI.
On March 31, 2026, we welcomed healthcare industry customers to AWS for a "Security Incident Simulation GameDay for Health-Tech Companies." In this post we introduce the event, share the atmosphere on the day, and hope to give you a sense of our security efforts.
In this post, we walk through the new installation experience, demonstrate three deployment methods (console, CLI, and Terraform), and show how features like multi-instance-type deployment and native node affinity give you fine-grained control over inference scheduling.
Amazon Bedrock AgentCore Gateway provides a centralized layer for managing how AI agents connect to tools and MCP servers across your organization. In this post, we walk through how to configure AgentCore Gateway to connect to an OAuth-protected MCP server using the Authorization Code flow.
This blog post demonstrates how Windward helps enhance and accelerate alert investigation processes by combining geospatial intelligence with generative AI, enabling analysts to focus on decision-making rather than data collection.
In this post, we show how to implement a generative AI agentic assistant that uses both semantic and text-based search using Amazon Bedrock, Amazon Bedrock AgentCore, Strands Agents, and Amazon OpenSearch.
In this post, we walk through how we fine-tuned Qwen 2.5 7B Instruct for tool calling using RLVR. We cover dataset preparation across three distinct agent behaviors, reward function design with tiered scoring, training configuration and results interpretation, evaluation on held-out data with unseen tools, and deployment.
In this post, we walk through building a custom HR onboarding agent with Quick. We show how to configure an agent that understands your organization’s processes, connects to your HR systems, and automates common tasks, such as answering new-hire questions and tracking document completion.