Today, we're announcing the General Availability of the AWS for SAP MCP Server on Amazon Bedrock AgentCore, purpose-built to connect AI agents directly to SAP ERP systems, securely and at scale. Built on the Model Context Protocol (MCP) and SAP's Open Data Protocol (OData) standards, this solution addresses the challenge of making SAP business data and processes accessible to AI agents while maintaining enterprise-grade security and comprehensive observability. Organizations running SAP systems can now empower their AI agents to interact with SAP processes across finance, procurement, logistics, and supply chain operations.
By leveraging SAP ERP business data, the AWS for SAP MCP Server enables AI agents to create, read, update, and delete SAP business objects such as sales orders, purchase orders, materials, and finance documents. Deployed on the fully managed Amazon Bedrock AgentCore Runtime, the server handles session isolation, private connectivity, and dual-layer authentication through AgentCore Identity with support for OAuth 2.0. Key capabilities include dynamic service catalog discovery, telemetry through CloudWatch for complete visibility into agent actions, and flexible connectivity options for SAP S/4HANA and SAP ECC.
Organizations can deploy the AWS for SAP MCP Server in minutes using CloudFormation templates with no infrastructure management required. The AWS for SAP MCP server works seamlessly with MCP clients like Amazon Quick, Strands SDK based custom agents, and SAP Joule, and ships as a container image at no cost.
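Because the server speaks OData, agent tool calls ultimately resolve to OData read and write requests against SAP entity sets. As a minimal sketch of what such a request looks like (the service path and entity set below are illustrative, not taken from the announcement), here is how a client might assemble an OData read URL with `$filter` and `$top` query options:

```python
from typing import Optional
from urllib.parse import quote, urlencode

def build_odata_query(base_url: str, entity_set: str,
                      filters: Optional[str] = None,
                      top: Optional[int] = None) -> str:
    """Build an OData-style read URL for an SAP entity set."""
    params = {}
    if filters:
        params["$filter"] = filters
    if top is not None:
        params["$top"] = str(top)
    # quote_via=quote encodes spaces as %20; '$' and quotes stay readable.
    query = urlencode(params, safe="$'", quote_via=quote)
    return f"{base_url}/{entity_set}" + (f"?{query}" if query else "")

# Hypothetical S/4HANA sales order service path -- for illustration only.
url = build_odata_query(
    "https://sap.example.com/sap/opu/odata/sap/API_SALES_ORDER_SRV",
    "A_SalesOrder",
    filters="SalesOrganization eq '1000'",
    top=5,
)
print(url)
```

In practice the MCP server discovers these services through its service catalog and issues the requests on the agent's behalf; the sketch only shows the shape of the underlying OData call.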
Early adopters including Fortescue, Harman International, and PLDT are already demonstrating the transformative potential of the AWS for SAP MCP Server, using it to orchestrate enterprise-scale AI integration, modernize test management, automate Procure-to-Pay workflows at scale, and more. To learn more, visit the AWS for SAP MCP Server documentation page.
AWS Compute Optimizer now supports the latest generation of Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Relational Database Service (Amazon RDS) instance types. This expansion enables Compute Optimizer to help you take advantage of the price-to-performance improvements offered by the newest EC2 and RDS instance types.
AWS Compute Optimizer has expanded support to include the latest generation EC2 instance types, including Compute Optimized (C8a, C8gb, C8i, C8i-flex, C8id), General Purpose (M8a, M8azn, M8gb, M8gn, M8id), Memory Optimized (R8a, R8gb, R8gn, R8id), Memory Intensive (X8i), and Storage Optimized (I7i) in its EC2 and EC2 Auto Scaling group recommendations. For RDS recommendations, Compute Optimizer has added support for M7i, M8g, R8g, X1, and Z1d DB instance classes across RDS for MySQL, RDS for PostgreSQL, Amazon Aurora MySQL, and Aurora PostgreSQL.
This new feature is available in all AWS Regions where Compute Optimizer is available except the AWS GovCloud (US) Regions, the AWS China (Beijing) Region, operated by Sinnet, and the AWS China (Ningxia) Region, operated by NWCD. For more information about Compute Optimizer, visit our product page and documentation. You can start using Compute Optimizer through the AWS Management Console, AWS CLI, or AWS SDK.
Amazon SageMaker Unified Studio now offers the CI/CD CLI (aws-smus-cicd-cli), an open-source command line tool that automates deployment of multi-service data and AI applications across development, test, and production. Organizations building applications in SageMaker Unified Studio combine multiple AWS services, including AWS Glue, Amazon Athena, Amazon MWAA, Amazon SageMaker AI, Amazon Bedrock, and Amazon QuickSight, into single applications. The CLI allows data teams to define applications once in a YAML manifest while DevOps teams deploy with a single command, reducing deployment bottlenecks and configuration drift.
The CLI reads a declarative manifest.yaml that maps each pipeline stage to an isolated SageMaker Unified Studio project. At deploy time, it substitutes stage-specific configurations (S3 paths, IAM roles, account IDs, and connection strings) and provisions resources in dependency order. Four commands cover the lifecycle: describe validates permissions and connections, bundle packages an immutable artifact from the source target, deploy writes that artifact to the destination target, and test runs post-deployment validation. It works with existing CI/CD solutions such as GitHub Actions, Jenkins, and GitLab CI.
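A manifest along these lines illustrates the stage-to-project mapping described above. The field names below are assumptions for illustration only, not the documented `aws-smus-cicd-cli` schema:

```yaml
# Illustrative manifest.yaml sketch -- field names are assumptions,
# not the documented aws-smus-cicd-cli schema.
pipeline:
  name: sales-analytics
  stages:
    dev:
      project: sales-analytics-dev      # isolated Unified Studio project
      account: "111111111111"
      region: us-east-1
      variables:
        s3_path: s3://sales-analytics-dev/artifacts/
    prod:
      project: sales-analytics-prod
      account: "222222222222"
      region: us-east-1
      variables:
        s3_path: s3://sales-analytics-prod/artifacts/
```

With a manifest like this, a release would validate the stages with `describe`, package an immutable artifact from the dev stage with `bundle`, promote it with `deploy`, and confirm the result with `test`; exact command arguments are documented with the CLI.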
The CI/CD CLI is available at no additional cost in all AWS Regions where Amazon SageMaker Unified Studio is available. You pay only for the underlying AWS resources provisioned during deployment.
To get started, visit the following resources:
Amazon SageMaker HyperPod now automatically selects and continuously maintains the optimal network topology configuration for Slurm clusters based on the GPU instance types in the cluster. Network topology directly impacts distributed training performance — when jobs are placed on nodes that are topologically close, GPU-to-GPU communication is faster, NCCL collective operations are more efficient, and training throughput improves. HyperPod dynamically adapts the topology as the cluster evolves through scaling operations and node replacements, so job placement remains optimized throughout the cluster lifecycle without requiring manual updates to topology files or Slurm reconfiguration.
HyperPod inspects the instance types across all instance groups at cluster creation, identifies the networking and interconnect characteristics of each instance type, and automatically selects the best-fit topology model. HyperPod supports tree topology for instance types with hierarchical interconnects such as ml.p5.48xlarge, ml.p5e.48xlarge, and ml.p5en.48xlarge, and block topology for instance types with uniform high-bandwidth connectivity such as ml.p6e-gb200.NVL72. For clusters with mixed instance types, HyperPod selects a compatible topology that works across all nodes. As the cluster changes through scale-up, scale-down, or node replacement events, HyperPod automatically updates the topology configuration without manual intervention, so the topology always reflects the actual state of the cluster.
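HyperPod generates and maintains this configuration for you, but for context, a hand-written tree topology in Slurm's `topology.conf` looks roughly like the following (switch and node names are illustrative examples, not HyperPod output):

```
# Illustrative Slurm topology.conf (tree topology); names are examples.
# Leaf switches group nodes that share a low-latency interconnect domain.
SwitchName=leaf1 Nodes=ip-10-1-0-[1-16]
SwitchName=leaf2 Nodes=ip-10-1-1-[1-16]
# A spine switch connects the leaf switches.
SwitchName=spine1 Switches=leaf[1-2]
```

With topology-aware scheduling enabled, Slurm prefers to place a job's nodes under as few switches as possible; keeping this file in sync with the real cluster after every scaling or replacement event is exactly the maintenance burden HyperPod now removes.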
To get started, create a SageMaker HyperPod Slurm cluster with supported GPU instance types. Topology-aware scheduling is enabled by default and requires no configuration.
This feature is available in all AWS Regions where Amazon SageMaker HyperPod is supported. To learn more about topology-aware scheduling, visit the Amazon SageMaker HyperPod documentation.
AWS Client VPN now supports native integration with AWS Transit Gateway, simplifying centralized remote access for your end users across multiple VPCs and on-premises networks, and providing end-to-end source IP visibility.
AWS Transit Gateway interconnects your Amazon Virtual Private Clouds (VPCs) and on-premises networks, while AWS Client VPN enables secure remote access to AWS and on-premises resources connected through your AWS network. Previously, connecting Client VPN to multiple VPCs required provisioning and managing an intermediate VPC, adding operational complexity as you needed to manage additional resources. Moreover, client source IPs were translated through Source Network Address Translation (SNAT), making it difficult to identify which remote user generated specific traffic and complicating security audits. Native Transit Gateway attachment eliminates the need for an intermediate VPC, letting you provide centralized remote access to multiple VPCs and on-premises networks directly from your Client VPN endpoint. Additionally, the end-user source IP is now preserved end-to-end, so you can create authorization rules based on actual client IPs and trace traffic back to specific users, simplifying security, compliance, and troubleshooting workflows. Furthermore, Transit Gateway flow logs capture connection-level details tied to preserved source IPs for improved troubleshooting and compliance audits.
This integration is available in all AWS Regions where AWS Client VPN is available. There are no additional charges for this native integration beyond standard pricing of AWS Client VPN and AWS Transit Gateway.
To learn more about Client VPN:
During the week of April 13, I attended the University of Namur (uNamur) graduation ceremony for academic year 2025 […]
In April 2026, Japan's Ministry of Education, Culture, Sports, Science and Technology (MEXT) opened the call for proposals for the AI for Science exploratory challenge research program (SPReAD). Roughly 1,000 projects are expected to be selected, each funded at up to 5 million yen, and AWS compute resources and API usage fees are eligible expenses. This article introduces AI use cases and pioneering examples across six domains, including drug discovery, genomics, and materials science, and maps them to the corresponding AWS services and technology foundations. It should also be a useful reference for researchers considering applying to SPReAD.
This blog post describes how AUMOVIO used Amazon Web Services (AWS) services and expertise to build an innovative automotive coding assistant for the Software-Defined Vehicle (SDV) domain. AUMOVIO's solution uses multiple AI models to accelerate each phase of the development lifecycle while complying with automotive industry standards and AUMOVIO's own coding best practices. By reusing code wherever possible and keeping changes to a minimum, the assistant significantly reduces the work required in the other phases of the V-model.