AWS launches AWS Interconnect - last mile, a fully managed connectivity offering that allows customers to connect their branch offices, data centers, and remote locations to AWS with just a few clicks, eliminating the friction and complexity of network setup. As a milestone collaboration between AWS and Lumen, AWS Interconnect - last mile combines AWS cloud innovation with Lumen’s extensive network footprint to redefine how businesses connect to the cloud.
Through the AWS Console, customers can now instantly establish private, high-speed connections to AWS by simply choosing their preferred AWS Region, bandwidth, Direct Connect Gateway ID, and partner subscriber ID. Once initiated, AWS generates an activation key to complete provisioning with Lumen. The launch simplifies the connectivity experience by pre-provisioning capacity and automating complex network configuration, including BGP peering, VLAN setup, and ASN assignment. Customers can dynamically scale bandwidth from 1 Gbps to 100 Gbps through the AWS Console and benefit from zero-downtime maintenance. The service is designed for high availability and is backed by an SLA. MACsec encryption is enabled by default for enhanced security between AWS and partner devices.
AWS Interconnect - last mile is available in the US through our launch partner, Lumen. Partners can also adopt the service via an open API package published on GitHub. For more information, see the AWS Interconnect - last mile documentation and pricing pages.
AWS announces general availability (GA) of AWS Interconnect - multicloud, providing simple, resilient, high-speed private connections to other cloud service providers (CSPs). Google Cloud is the first launch partner at GA, with Microsoft Azure to follow later in 2026.
Customers have been adopting multicloud strategies while migrating more applications to the cloud. They do so for many reasons, including interoperability requirements, the freedom to choose the technology that best suits their needs, and the ability to build and deploy applications in any environment with greater ease and speed. Previously, when interconnecting workloads across multiple cloud providers, customers had to take a 'do-it-yourself' multicloud approach and shoulder the complexity of managing global, multi-layered networks at scale. AWS Interconnect - multicloud is the first purpose-built product of its kind and a new way for clouds to connect and communicate with each other. By simplifying connectivity into AWS, Interconnect - multicloud enables customers to quickly establish private, secure, high-speed network connections with dedicated bandwidth and built-in resiliency between their Amazon VPCs and other cloud environments. Customers can also quickly scale connectivity to multiple VPCs or Regions by associating Interconnect with other networking services such as AWS Transit Gateway and AWS Cloud WAN, instead of taking weeks or months. Interconnect - multicloud introduces a new, single-fee pricing structure based on the customer's selected bandwidth and the geographical scope of the connectivity to other CSPs. Customers can also use one free, local 500 Mbps interconnect per Region starting in May. To learn more, see the Interconnect - multicloud pricing documentation page.
Interconnect - multicloud is available in five AWS Regions. You can enable this capability using the AWS Management Console, Command Line Interface (CLI), or API, and CSPs can also adopt via a published open API package on GitHub. For more information, see the AWS Interconnect - multicloud documentation and pricing pages.
AWS Elastic Disaster Recovery (AWS DRS) now supports IPv6 for both data replication and control plane connections. Customers operating in IPv6-only or dual-stack network environments can now configure AWS DRS to replicate using IPv6, eliminating the need for IPv4 addresses in their disaster recovery setup.
AWS DRS minimizes downtime and data loss with fast, reliable recovery of on-premises and cloud-based applications using affordable storage, minimal compute, and point-in-time recovery. Previously, AWS DRS required IPv4 connectivity for all replication and service communication. Now, customers can set the internet protocol to IPv6 in their replication configuration to use dual-stack endpoints for agent-to-service communication and data replication. This helps customers meet network modernization requirements and enables disaster recovery in environments where IPv4 addresses are unavailable or restricted. Existing replication configurations are not affected and continue to use IPv4 by default.
This capability is available in all AWS Regions where AWS DRS is available and where Amazon EC2 supports IPv6. See the AWS Regional Services List for the latest availability information.
To learn more about AWS DRS, visit our product page or documentation. To get started, sign in to the AWS Elastic Disaster Recovery Console.
Amazon FSx now supports copying file system backups across opt-in Regions (AWS Regions that are disabled by default) for Amazon FSx for Windows File Server, Amazon FSx for Lustre, and Amazon FSx for OpenZFS. This launch makes it easier for customers to meet business continuity, disaster recovery, and compliance requirements by extending cross-Region, cross-account backup and recovery capabilities beyond AWS Regions that are enabled by default.
Amazon FSx is a fully managed service that makes it easy and cost-effective to launch, run, and scale feature-rich, high-performance file systems in the AWS Cloud. Opt-in Regions are AWS Regions that are disabled by default, in contrast to regions that are enabled by default. Previously, customers could copy Amazon FSx file system backups across regions enabled by default, within the same AWS account or across AWS accounts in the same AWS Organization. Starting today, you can copy backups into and out of opt-in Regions within the same AWS account using the Amazon FSx console, API, or CLI, or across AWS accounts in the same AWS Organization using AWS Backup. This allows you to design resilient, multi-account, cross-Region backup and recovery architectures across a broader set of AWS Regions.
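Under the hood, the console and CLI drive the existing FSx CopyBackup API, which accepts the source Region in the request. A minimal Python sketch of the request parameters follows; the backup ID and the opt-in destination Region in the comment are illustrative assumptions, not values from the announcement:

```python
def build_copy_backup_request(source_backup_id, source_region, copy_tags=True):
    """Assemble parameters for the Amazon FSx CopyBackup API call."""
    return {
        "SourceBackupId": source_backup_id,
        "SourceRegion": source_region,  # with this launch, may be an opt-in Region
        "CopyTags": copy_tags,          # carry the source backup's tags over
    }

# CopyBackup is called in the DESTINATION Region and pulls the backup from
# the source Region named in the request, e.g. with boto3:
#   fsx = boto3.client("fsx", region_name="me-central-1")  # hypothetical opt-in destination
#   fsx.copy_backup(**params)
params = build_copy_backup_request("backup-0123456789abcdef0", "us-east-1")
```

Remember that the opt-in destination Region must be enabled for your account before the copy can succeed.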
To get started, visit the Amazon FSx console or the AWS Backup console. For more details, see the Amazon FSx product page and the AWS Backup product page.
AWS IoT Core and AWS IoT Device Management services are now available in the Israel (Tel Aviv) and Europe (Milan) AWS Regions. With this expansion, organizations operating in these regions can better serve their local customers and unlock multiple benefits, including faster response times, stronger data residency controls, and reduced data transfer expenses.
AWS IoT Core is a managed cloud service that lets you securely connect billions of Internet of Things (IoT) devices to the cloud and manage them at scale. It routes trillions of messages between IoT devices and AWS endpoints over bi-directional, industry-standard protocols such as MQTT, HTTPS, and LoRaWAN (available in select Regions). AWS IoT Device Management allows customers to search, organize, monitor, and remotely manage connected devices at scale.
With the expansion to these regions, AWS IoT is now available in 27 AWS Regions worldwide. To get started and to learn more, refer to the technical documentation for AWS IoT Core and AWS IoT Device Management.
Starting today, Amazon EC2 M8i and M8i-flex instances are available in the AWS GovCloud (US-West) Region. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. The M8i and M8i-flex instances offer up to 15% better price-performance and 2.5x more memory bandwidth compared to previous-generation Intel-based instances. They deliver up to 20% better performance than M7i and M7i-flex instances, with even higher gains for specific workloads: up to 30% faster for PostgreSQL databases, up to 60% faster for NGINX web applications, and up to 40% faster for AI deep learning recommendation models.
M8i-flex instances are the easiest way to get price-performance benefits for a majority of general-purpose workloads such as web and application servers, microservices, small and medium data stores, virtual desktops, and enterprise applications. They offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don't fully utilize all compute resources.
M8i instances are a great choice for all general-purpose workloads, especially those that need the largest instance sizes or sustain continuously high CPU usage. The SAP-certified M8i instances offer 13 sizes, including two bare-metal sizes and the new 96xlarge size for the largest applications.
To get started, sign in to the AWS Management Console. For more information about the new instances, visit the M8i and M8i-flex instance page or visit the AWS News blog.
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R8i and R8i-flex instances are available in the AWS GovCloud (US-West) Region. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. The R8i and R8i-flex instances offer up to 15% better price-performance and 2.5x more memory bandwidth compared to previous-generation Intel-based instances. They deliver 20% higher performance than R7i instances, with even higher gains for specific workloads: up to 30% faster for PostgreSQL databases, up to 60% faster for NGINX web applications, and up to 40% faster for AI deep learning recommendation models.
R8i-flex, our first memory-optimized Flex instances, are the easiest way to get price-performance benefits for a majority of memory-intensive workloads. They offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don't fully utilize all compute resources.
R8i instances are a great choice for all memory-intensive workloads, especially those that need the largest instance sizes or sustain continuously high CPU usage. R8i instances offer 13 sizes, including two bare-metal sizes and the new 96xlarge size for the largest applications. R8i instances are SAP-certified and deliver 142,100 aSAPS, providing exceptional performance for mission-critical SAP workloads.
To get started, sign in to the AWS Management Console. For more information about the R8i and R8i-flex instances, visit the AWS News blog.
Amazon CloudWatch Logs Insights saved queries now support parameters, allowing you to pass values to reusable query templates with placeholders. This eliminates the need to maintain multiple copies of nearly identical queries that differ only in specific values such as log levels, service names, or time intervals.
You can define up to 20 parameters in a query, with each parameter supporting optional default values. For example, you can create a single template to query logs by severity level (such as ERROR or WARN) and pass different service names each time you run it. To execute a query with parameters, invoke it using the query name prefixed with $ and pass your parameter values, such as $ErrorsByService(logLevel="ERROR", serviceName="OrderEntry"). You can also use multiple saved queries with parameters together for complex log analysis, significantly reducing query maintenance overhead while improving reusability.
Saved queries with parameters are available in all commercial AWS regions. You can create and use saved queries with parameters using the Amazon CloudWatch console, AWS Command Line Interface (AWS CLI), AWS Cloud Development Kit (AWS CDK), and AWS SDKs. To learn more, see the Amazon CloudWatch Logs documentation.
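The invocation syntax above is easy to generate programmatically. The following small sketch builds the $Name(key="value", ...) invocation string described in the announcement; the helper function and the Billing service name are illustrative, not part of the CloudWatch API:

```python
def invoke_string(query_name, **params):
    """Build the invocation string for a parameterized saved query,
    following the $Name(key="value", ...) syntax from the announcement."""
    args = ", ".join(f'{k}="{v}"' for k, v in params.items())
    return f"${query_name}({args})"

# Reuse one saved template for different services and severities:
q1 = invoke_string("ErrorsByService", logLevel="ERROR", serviceName="OrderEntry")
q2 = invoke_string("ErrorsByService", logLevel="WARN", serviceName="Billing")
```

Generating invocation strings like this is handy when the same template is run from dashboards or scripts with many parameter combinations.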
Today we are announcing the release of the Aurora DSQL Connector for PHP (PDO_PGSQL), which makes it easy to build PHP applications on Aurora DSQL. The PHP connector streamlines authentication and eliminates the security risks associated with traditional user-generated passwords by automatically generating a token for each connection, ensuring valid tokens are always used while maintaining full compatibility with existing PDO_PGSQL features.
The connector handles IAM token generation, SSL configuration, and connection pooling, enabling customers to scale from simple scripts to production workloads without changing their authentication approach. It also provides opt-in optimistic concurrency control (OCC) retry with exponential backoff, custom IAM credential providers, and AWS profile support, making it easier to develop client retry logic and manage AWS credentials.
To get started, visit the Connectors for Aurora DSQL documentation page. For code examples, visit our GitHub page for the PHP connector. Get started with Aurora DSQL for free with the AWS Free Tier. To learn more about Aurora DSQL, visit the webpage.
Amazon Quick now supports document-level access controls (ACLs) for Google Drive knowledge bases, enabling organizations to maintain native Google Drive permissions when indexing content. Quick combines ACL replication for efficient pre-retrieval filtering with an additional layer of real-time permission checks directly with Google Drive at query time. This dual approach means you get the performance benefits of indexed ACLs while also guarding against stale or incorrectly mapped permission data. When a user submits a query, Quick verifies their current permissions with Google Drive before generating a response—ensuring answers are based on live access rights.
With document-level access controls, Amazon Quick now respects individual file and folder permissions from Google Drive. This feature is available in all AWS Regions where Amazon Quick is available.
To get started, create or update a Google Drive knowledge base in the Amazon Quick console and configure document-level access controls in your integration settings. For more information, see Google Drive integration in the Amazon Quick User Guide.
Amazon Redshift further optimizes the processing of top-k queries (queries with ORDER BY and LIMIT clauses) by intelligently skipping irrelevant data blocks, dramatically reducing the amount of data processed and returning results faster. The optimization reorders and prunes the data blocks to be read based on the ORDER BY column's min/max values, keeping only the top K qualifying rows in memory. When the ORDER BY column is sorted or partially sorted, Amazon Redshift now reads only the minimal set of data blocks rather than scanning entire tables, eliminating unnecessary I/O and compute overhead.
This enhancement particularly benefits top-k queries over data stored in descending order (ORDER BY ... DESC LIMIT K) on large tables where qualifying rows are appended at the end of storage. Common examples include time-ordered event, log, and clickstream tables where the most recent rows are queried most often.
This optimization for top-k queries is available in Amazon Redshift at no additional cost starting with patch release P199, in all AWS Regions where Amazon Redshift is available. It applies automatically to eligible queries, with no query rewrites or configuration changes required.
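As an illustration, a query of the shape that benefits from this optimization looks like the following; the table and column names are hypothetical:

```sql
-- Fetch the 100 most recent events from a large, time-ordered table.
-- Redshift can now skip blocks whose max(event_time) falls below the
-- current top-100 threshold instead of scanning the entire table.
SELECT event_id, event_time, payload
FROM clickstream_events
ORDER BY event_time DESC
LIMIT 100;
```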
Amazon OpenSearch Serverless introduces support for Derived Source, a new feature that can help reduce the amount of storage required for your OpenSearch Serverless collections. With Derived Source support, you can skip storing source fields and dynamically derive them when required.
With Derived Source, OpenSearch Serverless reconstructs the _source field on the fly using the values already stored in the index, eliminating the need to maintain a separate copy of the original document. This can significantly reduce storage consumption, particularly for time-series and log analytics collections where documents contain many indexed fields. You can enable derived source at the index level when creating or updating index mappings.
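The announcement does not spell out the request shape, so as a hedged sketch, enabling derived source when creating an index might look like the following; the index name and field layout are made up, and the exact setting key should be confirmed in the OpenSearch Serverless documentation:

```json
PUT /app-logs
{
  "settings": {
    "index.derived_source.enabled": true
  },
  "mappings": {
    "properties": {
      "timestamp":   { "type": "date" },
      "level":       { "type": "keyword" },
      "status_code": { "type": "integer" }
    }
  }
}
```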
Derived Source support is available today in all AWS Regions where Amazon OpenSearch Serverless is supported. For more information, see the Amazon OpenSearch Serverless documentation.
NVIDIA’s Nemotron-3-Super-120B and the Qwen3.5-9B and Qwen3.5-27B models are now available on Amazon SageMaker JumpStart, expanding the portfolio of foundation models available to AWS customers. These three models bring specialized capabilities spanning agentic reasoning, multilingual coding, and advanced instruction following, enabling customers to deploy high-performance, scalable AI solutions on AWS infrastructure.
These models address different enterprise AI challenges with specialized capabilities:
Nemotron-3-Super-120B is optimized for collaborative agents and high-volume workloads such as IT ticket automation. It employs a hybrid Latent Mixture-of-Experts (LatentMoE) architecture with Mamba-2 and MoE layers, enabling strong agentic, reasoning, and conversational capabilities useful for multi-agent applications like software development and cybersecurity triaging.
Qwen 3.5 9B excels in multilingual coding, instruction following, and long-horizon planning, automating software development workflows and executing complex, multi-step office tasks. Its compact design balances efficiency and performance for resource-constrained environments.
Qwen 3.5 27B provides deeper contextual understanding, extended reasoning capabilities, and enhanced spatial/complex scenario comprehension, ideal for advanced multimodal reasoning and large-scale document processing.
With SageMaker JumpStart, customers can deploy any of these models with just a few clicks to address their specific AI use cases.
To get started with these models, navigate to the SageMaker JumpStart model catalog in the SageMaker console or use the SageMaker Python SDK to deploy the models to your AWS account. For more information about deploying and using foundation models in SageMaker JumpStart, see the Amazon SageMaker JumpStart documentation.
In my last Week in Review post, I mentioned how much time I’ve been spending on AI-Driven Development Lifecycle (AI-DLC) workshops with customers this year. A common theme in those sessions is the need for better cost visibility. Teams are moving fast with AI, but as they go from experimenting to full production, finance and […]
Amazon S3 begins rolling out new security best practices by default for new and existing buckets, AWS announces general availability of the Smithy-Java client framework, Amazon WorkSpaces Personal now supports unique DNS names for PrivateLink, AWS Transfer Family adds IPv6 support for connectors and web apps, Announcing Amazon S3 Files, making S3 buckets accessible as a file system, Claude Mythos Preview (limited research preview) now available in Amazon Bedrock, Amazon Bedrock AgentCore Browser adds OS-level interaction capabilities, Amazon EKS managed node groups now support EC2 Auto Scaling warm pools, Amazon EC2 Capacity Manager adds support for tag-based dimensions, Amazon Bedrock now supports cost allocation by IAM user and role
Weekly Generative AI with AWS, the week of April 6, 2026 issue, packed with agent-related topics - introducing Japan customer case-study blogs from Daiho Corporation and the Marubeni Group, along with blog posts on Claude Mythos Preview, the AWS Security Agent GA, and the AWS DevOps Agent GA. On the service-update side, the issue covers six updates, including the AWS Agent Registry preview and Amazon Bedrock IAM cost allocation.
Ebara Corporation held an internal cloud event, "Ebara Cloud Day." Through AWS sessions and practical lightning talks by in-house engineers (covering new-hire training, EC2 operations, adopting the China Regions, a Wrike integration tool, and a case study of halving costs), participants' motivation to use the cloud rose significantly, and the event laid the groundwork for an ongoing learning community. The blog post covers the event's content and the results it achieved.
Hello, I'm Mizuno, a Solutions Architect. At AWS Professional Services, Brother Industries […]
Spec-Driven Presentation Maker is an open-source sample implementation that designs "what to communicate" first and delegates slide construction to AI. This post introduces the spec-driven approach and how to deploy it in an AWS environment.
Hello, I'm Umeda, a Solutions Architect for Amazon Connect. Have you read the February 2026 issue […]
This post demonstrates how Lambda enables scalable, cost-effective reward functions for Amazon Nova customization. You'll learn to choose between Reinforcement Learning via Verifiable Rewards (RLVR) for objectively verifiable tasks and Reinforcement Learning via AI Feedback (RLAIF) for subjective evaluation, design multi-dimensional reward systems that help you prevent reward hacking, optimize Lambda functions for training scale, and monitor reward distributions with Amazon CloudWatch. Working code examples and deployment guidance are included to help you start experimenting.
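To give a flavor of the RLVR side, here is a minimal, hypothetical Lambda reward function that returns 1.0 when the model's answer exactly matches a verifiable ground truth; the event shape and score scale are assumptions for illustration, not the Nova customization contract:

```python
def lambda_handler(event, context):
    """Hypothetical RLVR-style reward function.

    The event is assumed to carry the model's response and a verifiable
    ground truth; the reward is 1.0 for an exact (normalized) match, else 0.0.
    """
    response = event["model_response"].strip().lower()
    expected = event["ground_truth"].strip().lower()
    return {"reward": 1.0 if response == expected else 0.0}

# Local usage example (no AWS deployment needed to sanity-check the logic):
exact = lambda_handler({"model_response": " 42 ", "ground_truth": "42"}, None)
miss = lambda_handler({"model_response": "41", "ground_truth": "42"}, None)
```

Real reward functions for training would typically combine several such checks (format, correctness, length) into one score, which is the multi-dimensional design the post discusses.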