Amazon Quick Automate now enables you to migrate automation versions across automation groups, AWS accounts, and AWS Regions. Previously, moving automations between environments required time-consuming manual recreation of workflows. This new capability is designed for DevOps teams and organizations managing multi-environment deployments who need to promote tested automations from development to production, deploy across geographic regions, or share proven workflows with other teams.
The migration process packages your workflow, runtime configuration, and process step metadata into a secure, encrypted link using AWS KMS encryption. Export links remain valid for 12 hours and can be reused multiple times, eliminating repeated exports. This streamlined approach saves significant time compared to manual recreation and enables point-in-time snapshots for disaster recovery scenarios. Key use cases include promoting automations between development, staging, and production environments, deploying across AWS Regions, and sharing proven automation with teams outside your automation group.
This feature is available in all AWS Regions where Quick Automate is enabled, including US East (N. Virginia), US West (Oregon), Europe (Dublin), Europe (London), Europe (Frankfurt), Asia Pacific (Tokyo), and Asia Pacific (Sydney). Note that dependencies such as action connectors, credentials, and human-in-the-loop task queue configurations are not included in the migration bundle and must be configured separately in the destination environment.
To learn more, visit the Amazon Quick Automate marketing page or see the export and import documentation.
Amazon Connect Outbound Campaigns now allows you to dial contacts in configurable priority order based on up to 10 profile attributes for voice campaigns and voice activities in journeys. This helps you focus agent time on the most valuable customers or time-sensitive opportunities, improving campaign effectiveness and conversion rates.
With contact priority ordering, you can sort segments on attributes such as customer lifetime value, account tier, or appointment date. For example, a financial services team can prioritize outreach to high-value accounts nearing contract renewal, or a healthcare provider can ensure patients with the earliest upcoming appointments are contacted first. Initial dial attempts always take precedence over reattempts, ensuring your priority order is maintained throughout campaign execution.
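The ordering rule described above can be illustrated with a short, self-contained sketch. This is not Amazon Connect's actual implementation, only a pure-Python illustration of the stated behavior: initial dial attempts always precede reattempts, and within each group contacts are sorted by the configured profile attributes (account tier, then earliest appointment, in this hypothetical example).

```python
# Illustrative only: how a dial list could be ordered so that initial
# attempts precede reattempts, then by configured profile attributes.
contacts = [
    {"name": "A", "attempt": 2, "account_tier": 1, "appointment": "2026-03-02"},
    {"name": "B", "attempt": 1, "account_tier": 2, "appointment": "2026-03-01"},
    {"name": "C", "attempt": 1, "account_tier": 1, "appointment": "2026-03-03"},
]

def dial_order(contact):
    # Initial attempts (attempt == 1) sort before reattempts; within each
    # group, sort by account tier, then by earliest appointment date.
    return (contact["attempt"] > 1, contact["account_tier"], contact["appointment"])

ordered = sorted(contacts, key=dial_order)
print([c["name"] for c in ordered])  # ['C', 'B', 'A']
```

Contact A is a reattempt, so it dials last even though it has the highest tier, matching the rule that priority order is maintained across attempts.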
This capability is available at no additional cost in all AWS Regions where Amazon Connect Outbound Campaigns is offered. To get started, configure sort attributes when building segments in Amazon Connect Customer Profiles. To learn more, see the Amazon Connect Outbound Campaigns best practices and learn how to build customer segments.
Amazon Quick Automate now provides APIs that enable you to programmatically start automation jobs and check their status from external applications and services. The new StartAutomationJob and DescribeAutomationJob APIs allow automation developers and DevOps engineers to invoke deployed automations with custom input data and retrieve structured results when jobs complete, extending Quick Automate's capabilities beyond scheduled execution.
These APIs enable seamless integration of Quick Automate workflows into your existing applications and event-driven architectures. You can trigger automations in response to application events such as new user registrations or order completions, pass dynamic input parameters with typed schemas, and use the output data for further processing. Use cases include incorporating automations into data pipelines, coordinating workflows across multiple AWS services or third-party applications, and executing batch operations with different input parameters from a single application.
This feature is available through AWS SDK and AWS CLI in all AWS Regions where Quick Automate is enabled, including US East (N. Virginia), US West (Oregon), Europe (Dublin), Europe (London), Europe (Frankfurt), Asia Pacific (Tokyo), and Asia Pacific (Sydney). To learn more, visit the Amazon Quick Automate marketing page or see the API documentation.
Today, AWS announces the general availability of the AWS Lambda Durable Execution SDK for Java, empowering Java developers to build resilient, long-running workflows using Lambda durable functions. With this SDK, developers can create multi-step applications like order processing pipelines, AI agent orchestration, and human-in-the-loop approvals directly in their applications without implementing custom progress tracking or integrating external orchestration services.
Lambda durable functions extend Lambda's event-driven programming model with operations that checkpoint progress automatically and pause execution for up to a year when waiting on external events. The AWS Lambda Durable Execution SDK for Java provides an idiomatic Java experience for building with Lambda durable functions. It includes steps for progress tracking, callback integration for human and agent-in-the-loop workflows, durable invocation for reliable function chaining, and waits for efficient suspension. The SDK is compatible with Java 17+ and can be deployed using Lambda managed runtimes or functions packaged as container images. The local testing emulator in the SDK enables developers to build and debug locally before deploying to production.
To get started, see the Lambda durable functions developer guide and the AWS Lambda Durable Execution SDK for Java on GitHub. For Regional availability and pricing details, see the AWS Regional Services List and AWS Lambda Pricing.
Amazon Quick Automate now supports in-app file storage, enabling you to seamlessly manage files within your automations without requiring external storage solutions or connectors. This feature provides a centralized location for uploading, downloading, and sharing files across multiple automations within the same automation group, eliminating the friction previously experienced when working with files in automation workflows.
Previously, providing files as inputs to automations or receiving file outputs required uploading to external services like Amazon S3, OneDrive, or SharePoint, and configuring connectors. With shared file storage, automation builders and business users can now upload files directly through an intuitive drag-and-drop interface and access them immediately across all automations in their automation group. Key use cases include static lookup files that business users can update as needed, such as contact lists or pricing tables; configuration files where business users can manage agent guidelines outside the automation itself; multi-automation workflows where one automation's output feeds into another; and daily or weekly reporting where automations share outputs with users for review.
This feature is available in all AWS Regions where Quick Automate is enabled, including US East (N. Virginia), US West (Oregon), Europe (Dublin), Europe (London), Europe (Frankfurt), Asia Pacific (Tokyo), and Asia Pacific (Sydney). To learn more about shared file storage, visit the Amazon Quick Automate product page or read the shared file actions documentation.
AWS Backup now supports Amazon Redshift Serverless namespaces and Amazon Aurora DSQL clusters as resource types in AWS Organizations backup policies. Organization administrators can now define backup policy rules that directly target these resource types across member accounts.
Previously, backing up Redshift Serverless namespaces and Aurora DSQL clusters through organization backup policies required using tag-based selections or backing up all resources in a member account. With this launch, administrators can specify these resource types directly in their backup policy selections, providing more precise control over which resources are included in or excluded from Organization-wide backup plans.
This capability is available in all AWS Commercial and GovCloud Regions where AWS Backup and the respective services are available. To get started, visit the AWS Organizations backup policies documentation or the AWS Backup console.
Amazon Aurora serverless — the autoscaling database that scales up to support your most demanding workloads and down to zero when you don't need it — just got faster and smarter, with up to 30% better performance than the previous version and enhanced scaling that understands your workload. It's especially well-suited for agentic AI applications, which typically have bursts of activity, long idle windows, and unpredictable patterns. Aurora serverless handles all of it automatically, scaling capacity with your agents rather than against them, and you only pay for what you actually use. When not in use, the database automatically scales down to zero to save cost.
With improved performance and scaling, you can now use serverless for even more demanding workloads. The enhanced scaling algorithm enables you to efficiently run workloads where multiple tasks compete for resources, such as busy web applications and API services. These improvements are available in platform version 4 at no additional cost. All new clusters, database restores, and new clones will automatically launch on platform version 4. Existing clusters on platform version 1, 2, or 3 can upgrade directly to platform version 4 by applying the pending maintenance action, stopping and restarting the cluster, or using blue/green deployments. You can verify your cluster's platform version in the AWS Management Console under the instance configuration section, or via the RDS API's ServerlessV2PlatformVersion parameter. To learn more, read the blog.
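To check the platform version programmatically, you could read the ServerlessV2PlatformVersion field mentioned above from a describe-clusters response. The exact field placement in the response is an assumption based on this announcement; verify it against the RDS DescribeDBClusters API reference. A hand-written sample response stands in for a live `boto3.client("rds").describe_db_clusters()` call.

```python
def serverless_platform_version(describe_response):
    """Map each cluster identifier to its serverless platform version.

    Assumes ServerlessV2PlatformVersion appears on each DBClusters entry
    of a describe_db_clusters-style response (illustrative, not confirmed).
    """
    return {
        c["DBClusterIdentifier"]: c.get("ServerlessV2PlatformVersion")
        for c in describe_response.get("DBClusters", [])
    }

# Sample of what rds.describe_db_clusters() might return for one cluster:
sample = {"DBClusters": [{"DBClusterIdentifier": "app-db",
                          "ServerlessV2PlatformVersion": "4"}]}
print(serverless_platform_version(sample))  # {'app-db': '4'}
```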
Aurora serverless is an on-demand, automatic scaling configuration for Amazon Aurora. For pricing details and Region availability, visit Amazon Aurora Pricing. To learn more, read the documentation, and get started by creating an Aurora serverless database using only a few steps in the AWS Management Console.
Today, AWS announces the general availability of Amazon Elastic Compute Cloud (Amazon EC2) G7e instances in AWS Local Zones in Los Angeles, California. G7e instances feature NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs and 5th generation Intel Xeon Scalable (Emerald Rapids) processors, bringing high-performance GPU compute closer to end users in Los Angeles.
For creative workloads, you can use G7e instances to run studio workstation workloads with low-latency access to local storage, and post-production workloads including visual effects (VFX) editorial, color correction, and VFX finishing. G7e instances support enhanced real-time rendering on graphics engines and 2D/3D VFX composition software. For AI workloads, you can also use G7e instances to deploy Large Language Models (LLMs), inference, and agentic AI at the edge.
To get started, opt in to the Los Angeles Local Zone (us-west-2-lax-1b) from AWS Global View. You can enable G7e instances from the Amazon EC2 console, AWS Command Line Interface (AWS CLI), and AWS SDKs. G7e instances are available through On-Demand and Savings Plans. To learn more, visit the AWS Local Zones Features page.
Amazon Location Service now offers bulk address validation for the United States, Canada, Australia, and the United Kingdom. Customers can now validate, correct, and standardize large volumes of addresses at scale, whether cleaning customer databases before a CRM migration, verifying shipping addresses to reduce failed deliveries, screening addresses for identity verification and fraud prevention, or improving direct mail targeting and insurance underwriting accuracy. This capability supports use cases across healthcare, financial services, transportation and logistics, retail, and more.
Address validation checks addresses against authoritative postal data, corrects common errors like misspellings, missing postal codes, and non-standard abbreviations, and standardizes formatting to match regional postal rules. Each result includes a confidence score and deliverability indicators so applications know exactly what to trust and act on. Using the new Amazon Location Service Jobs API, customers upload their address records to their own Amazon S3 bucket, submit a validation job, and retrieve enriched, standardized results when processing is complete. For addresses in the United States, Canada, and Australia, customers can optionally request position (geocode) coordinates alongside validated address results in the same job.
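The job flow above (records in your own S3 bucket, a submitted validation job, enriched results) can be sketched as a request builder. The field names below are illustrative placeholders rather than the confirmed Jobs API request shape; consult the Amazon Location Service documentation for the actual parameters. The geocode option is gated to the three countries the announcement says support it.

```python
def build_validation_job(input_uri, output_uri, country, include_geocode=False):
    """Assemble parameters for a bulk address validation job (field names
    are hypothetical stand-ins for the Amazon Location Service Jobs API)."""
    job = {
        "InputS3Uri": input_uri,    # your bucket holding the address records
        "OutputS3Uri": output_uri,  # where enriched, standardized results land
        "Country": country,
    }
    # Position (geocode) coordinates are only offered for US, CA, and AU jobs.
    if include_geocode and country in ("US", "CA", "AU"):
        job["IncludePosition"] = True
    return job

req = build_validation_job("s3://my-bucket/in/addresses.csv",
                           "s3://my-bucket/out/", "US", include_geocode=True)
```

A UK job built the same way would simply omit the position flag, matching the announcement's country list for geocoding.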
Address validation is available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (Stockholm), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Mumbai), Canada (Central), Europe (London), and South America (São Paulo). To learn more, visit the Amazon Location Service bulk address validation feature page.
AWS Transform custom is now available in six additional AWS Regions: Asia Pacific (Mumbai, Tokyo, Seoul, Sydney), Canada (Central), and Europe (London).
AWS Transform custom enables organizations to modernize and transform code at scale using AWS-managed and custom transformations. You can upgrade language versions, migrate frameworks, optimize performance, and analyze code bases using transformations that are ready to use or can be customized to meet your organization's specific requirements. These transformations benefit from continuous improvement, learning from each engagement to deliver increasingly accurate and efficient results.
With this expansion, AWS Transform custom is now available in a total of eight AWS Regions: US East (N. Virginia), Asia Pacific (Mumbai, Tokyo, Seoul, Sydney), Canada (Central), and Europe (Frankfurt, London). To learn more, visit the AWS Transform product page and user guide.
You can now connect your Apache Kafka applications to Amazon MSK Serverless in the Asia Pacific (Hong Kong), Asia Pacific (Hyderabad), Asia Pacific (Jakarta), Asia Pacific (Malaysia), Asia Pacific (Melbourne), Asia Pacific (New Zealand), Asia Pacific (Osaka), Asia Pacific (Thailand), Europe (Milan), Europe (Spain), Europe (Zurich), Israel (Tel Aviv), and Mexico (Central) AWS Regions.
Amazon MSK is a fully managed service that makes it easier for you to build and run applications that use Apache Kafka as a data store. Amazon MSK Serverless is a cluster type for Amazon MSK that allows you to run Apache Kafka without having to manage and scale cluster capacity. MSK Serverless automatically provisions and scales compute and storage resources, so you can use Apache Kafka on demand.
To learn more about Amazon MSK Serverless, visit our Amazon MSK Developer Guide.
Starting today, AWS Glue supports OAuth 2.0 authorization and authentication for native Snowflake connectivity, enabling customers to read from and write to Snowflake without sharing user credentials. This makes it easier for enterprises to maintain security compliance while building data integration pipelines. With OAuth support, you can now securely access Snowflake data within AWS Glue using temporary token-based authorization.
AWS Glue provides a built-in connector to Snowflake, which helps you integrate Snowflake data with other sources on a single platform while leveraging the scalability and performance of the AWS Glue Spark engine—all without installing or managing connector libraries. Previously, connecting to Snowflake required using persistent credentials or private keys. With OAuth 2.0 support, you can now eliminate credential management entirely, relying instead on secure, temporary tokens that enhance security and simplify access control. This approach enables granular access control, allowing you to define precise permissions for different users and applications. Additionally, token-based authentication provides improved auditability, making it easier to track and monitor data access patterns across your organization.
OAuth 2.0 support for AWS Glue's Snowflake connector is available in all AWS commercial regions where AWS Glue is available.
To get started with configuring your AWS Glue Snowflake connection with OAuth, visit the AWS Glue documentation.
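As a rough sketch of what an OAuth-backed Snowflake connection definition might look like, the builder below assembles a `ConnectionInput` for `glue.create_connection`. The property and field names are assumptions modeled on Glue's connection model, not the confirmed schema for this launch; verify every key against the AWS Glue documentation before use.

```python
def snowflake_oauth_connection_input(name, snowflake_host):
    """Hypothetical ConnectionInput for an OAuth 2.0 Snowflake connection.

    All property names here are illustrative assumptions -- confirm the
    real schema in the AWS Glue CreateConnection documentation.
    """
    return {
        "Name": name,
        "ConnectionType": "SNOWFLAKE",
        # "HOST" as the property key is an assumption for illustration.
        "ConnectionProperties": {"HOST": snowflake_host},
        "AuthenticationConfiguration": {
            # Token-based auth instead of stored credentials or private keys.
            "AuthenticationType": "OAUTH2",
            # Client and token-endpoint details would come from your
            # Snowflake OAuth security integration.
            "OAuth2Properties": {"OAuth2GrantType": "AUTHORIZATION_CODE"},
        },
    }

conn = snowflake_oauth_connection_input(
    "sf-oauth", "https://acct.snowflakecomputing.com")
# glue = boto3.client("glue"); glue.create_connection(ConnectionInput=conn)
```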
Amazon CloudWatch pipelines now lets you configure log processors using natural language descriptions powered by generative AI. CloudWatch pipelines is a fully managed service that ingests, transforms, and routes log data to CloudWatch without requiring you to manage infrastructure. Setting up the right combination of processors to parse and enrich logs can be time-consuming, especially when working with complex log formats. With AI-assisted configuration, you can simply describe the processing you need in plain language and have the pipeline configuration generated for you automatically.
When creating a pipeline in the CloudWatch console, toggle the AI-assisted option during the processing step and enter a natural language description of your desired transformations. The system generates the processor configuration along with a sample log event, so you can immediately verify the output before deploying. This reduces setup time and makes it easier to get your pipelines running correctly without needing deep familiarity with individual processor settings.
AI-assisted processor configuration is available at no additional cost in all AWS Regions where CloudWatch pipelines is generally available. Standard CloudWatch Logs ingestion and storage rates still apply.
To get started, open the Amazon CloudWatch console, navigate to pipelines under Ingestion, and follow the pipeline wizard. To learn more, see the CloudWatch pipelines documentation.
AWS Lambda now supports Amazon S3 Files, enabling your Lambda functions to mount Amazon S3 buckets as file systems and perform standard file operations without downloading data for processing. Built using Amazon EFS, S3 Files gives you the performance and simplicity of a file system with the scalability, durability, and cost-effectiveness of S3. Multiple Lambda functions can connect to the same S3 Files file system simultaneously, sharing data through a common workspace without building custom synchronization logic.
The S3 Files integration simplifies stateful workloads in Lambda by eliminating the overhead of downloading objects, uploading results, and managing ephemeral storage limits. This is particularly valuable for AI and machine learning workloads where agents need to persist memory and share state across pipeline steps. Lambda durable functions make these multi-step AI workflows possible by orchestrating parallel execution with automatic checkpointing. For example, an orchestrator function can clone a repository to a shared workspace while multiple agent functions analyze the code in parallel. The durable function handles checkpointing of execution state while S3 Files provides seamless data sharing across all steps.
To use S3 Files with Lambda, configure your function to mount an S3 bucket through the Lambda console, AWS CLI, AWS SDKs, AWS CloudFormation, or AWS Serverless Application Model (SAM). To learn more about how to use S3 Files with your Lambda function, visit the Lambda developer guide.
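Once the bucket is mounted, the function body is just standard file I/O. The sketch below shows a handler reading an input file from the mount path, transforming it, and writing the result back with no explicit S3 download or upload calls. The mount path, event fields, and transformation are illustrative assumptions; `base` is parameterized only so the sketch can run against a local directory standing in for the mount.

```python
import os

MOUNT_PATH = "/mnt/shared"  # assumed mount point configured on the function

def handler(event, context=None, base=MOUNT_PATH):
    """Read a file from the mounted file system, uppercase each line, and
    write the result back -- plain file operations, no S3 API calls."""
    src = os.path.join(base, event["input_key"])
    dst = os.path.join(base, event["output_key"])
    with open(src) as f:
        lines = [line.strip().upper() for line in f]
    with open(dst, "w") as f:
        f.write("\n".join(lines))
    return {"written": dst, "lines": len(lines)}
```

Because multiple functions can mount the same file system, a second function could pick up `dst` as its own input, giving the shared-workspace pattern the announcement describes without custom synchronization logic.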
S3 Files is supported for Lambda functions not configured with a capacity provider, in all AWS Regions where both Lambda and S3 Files are available, at no additional charge beyond standard Lambda and S3 pricing.
Amazon Athena Spark now supports AWS PrivateLink so that you can access APIs and endpoints from your Amazon Virtual Private Cloud (VPC) without traversing the public internet. This feature can help you meet compliance requirements by allowing you to access and use Athena Spark APIs and endpoints entirely within the AWS network.
You can now create AWS PrivateLink interface endpoints to connect from clients in your VPC. The Athena VPC endpoint supports all Athena Spark APIs and endpoints, including the Spark Connect, Spark Live UI and Spark History Server endpoints. Communication between your VPC and Athena Spark APIs and endpoints is then conducted entirely within the AWS network, providing a secure pathway for your data.
To get started, you can create an interface VPC endpoint to connect to Amazon Athena Spark using the AWS Management Console or AWS Command Line Interface (AWS CLI) commands or AWS CloudFormation. This new feature is available in all AWS Regions where Amazon Athena Spark and AWS PrivateLink are available. For more information, refer to the AWS PrivateLink documentation and Athena Spark documentation.
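The interface endpoint creation step can be sketched with the EC2 `create_vpc_endpoint` API. The `com.amazonaws.<region>.athena` service name below is an assumption for illustration; confirm the exact endpoint service name for Athena Spark in the AWS PrivateLink documentation. The builder returns the parameter dict so the sketch runs without AWS access.

```python
def athena_interface_endpoint_params(region, vpc_id, subnet_ids, sg_ids):
    """Parameters for ec2.create_vpc_endpoint reaching Athena over PrivateLink.
    The ServiceName format is an assumption -- verify it for Athena Spark."""
    return {
        "VpcEndpointType": "Interface",
        "ServiceName": f"com.amazonaws.{region}.athena",
        "VpcId": vpc_id,
        "SubnetIds": subnet_ids,
        "SecurityGroupIds": sg_ids,
        # Lets clients in the VPC resolve the default Athena endpoint name
        # to the private interface endpoint.
        "PrivateDnsEnabled": True,
    }

params = athena_interface_endpoint_params(
    "us-east-1", "vpc-0abc123", ["subnet-1a2b"], ["sg-3c4d"])
# ec2 = boto3.client("ec2"); ec2.create_vpc_endpoint(**params)
```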
AWS Marketplace now offers sellers a streamlined self-service process to submit Value Added Tax (VAT) invoices and receive automated VAT disbursements for deemed supply of digital services in the European Union, Norway, and the United Kingdom. Under the European Union, United Kingdom, and Norwegian VAT laws, when AWS Marketplace facilitates digital service sales, the law creates a deemed supply arrangement between sellers and the marketplace. To receive VAT payment, sellers are required to invoice the relevant AWS Europe, Middle East, and Africa (EMEA) SARL branch facilitating their transaction. This new capability provides sellers a unified experience within AWS Marketplace to submit VAT invoices and receive VAT payments, simplifying tax compliance under deemed supply arrangements.
Sellers can now access the new experience through AWS Marketplace Management portal or AWS Partner Central, submit VAT invoices, track invoice status in real-time, and receive automated VAT payments. The system automatically validates invoices against mandatory fields and disburses VAT amounts once buyer payment is received. Sellers can consolidate multiple deemed supply transactions into a single invoice per period, provided they relate to the same AWS EMEA branch and currency. Sellers can also submit invoices before buyer payment is received, with the system automatically processing disbursements when all conditions are met. Enhanced reporting capabilities through the Seller Reports help sellers identify eligible transactions and reconcile disbursements for audit and financial reporting purposes. This launch eliminates the previous manual process and separate platform onboarding while reducing the administrative burden of tracking VAT invoices and payments.
This capability is available for transactions where both seller and buyer AWS accounts are located in the same country when transacting via the AWS EMEA branch across 20 jurisdictions: Austria, Belgium, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Ireland, Italy, Luxembourg, Netherlands, Norway, Poland, Portugal, Romania, Spain, Sweden, and the United Kingdom.
To learn more about VAT payment for deemed supply transactions and invoice submission requirements, visit the AWS Marketplace Seller Guide or VAT on Deemed Supply FAQs.
Amazon Elastic Kubernetes Service (EKS) now offers the Amazon EKS Hybrid Nodes gateway, a feature that automates networking between your Amazon EKS cluster VPC and Kubernetes Pods running on Amazon EKS Hybrid Nodes. The Amazon EKS Hybrid Nodes gateway eliminates the need to make on-premises pod networks routable or coordinate network infrastructure changes when running in hybrid Kubernetes environments.
Networking in hybrid Kubernetes environments can be complex, often requiring changes to on-premises routing configurations, coordination with network teams, and ongoing maintenance as workloads scale. The Amazon EKS Hybrid Nodes gateway addresses these challenges by automatically enabling Kubernetes control plane-to-webhook communication, pod-to-pod traffic across cloud and on-premises environments, and connectivity for AWS services such as Application Load Balancers, Network Load Balancers, and Amazon Managed Service for Prometheus. Customers deploy the Amazon EKS Hybrid Nodes gateway to Amazon EC2 instances using Helm, and the gateway automatically maintains VPC route tables as workloads scale. The Amazon EKS Hybrid Nodes gateway codebase is open source.
The Amazon EKS Hybrid Nodes gateway is available in all AWS Regions where Amazon EKS Hybrid Nodes is available, except the China Regions. The Amazon EKS Hybrid Nodes gateway is offered at no additional charge. You pay for the underlying AWS infrastructure used to run the gateway, including Amazon EC2 instance charges and any associated data transfer fees. To get started, visit the Amazon EKS Hybrid Nodes gateway documentation.
Today, AWS announced the availability of Qwen3-Coder-Next, Qwen3-30B-A3B, Qwen3-30B-A3B-Thinking-2507, Qwen3-Coder-30B-A3B-Instruct, and Qwen3.5-4B in Amazon SageMaker JumpStart, expanding the portfolio of foundation models available to AWS customers. These five models from Qwen bring specialized capabilities spanning agentic coding, efficient reasoning, extended thinking, and multimodal understanding, enabling customers to build sophisticated AI applications across diverse use cases on AWS infrastructure.
These models address different enterprise AI challenges with specialized capabilities:
Qwen3-Coder-Next excels at long-horizon reasoning, complex tool use, and recovery from execution failures, making it ideal for powering coding agents in CLI/IDE platforms.
Qwen3-30B-A3B uniquely supports seamless switching between thinking and non-thinking modes, making it well suited for general-purpose assistant tasks like multilingual dialogue, math reasoning, and tool calling.
Qwen3-30B-A3B-Thinking-2507 delivers significantly improved performance on complex reasoning tasks in math, science, and coding, with enhanced long-context understanding.
Qwen3-Coder-30B-A3B-Instruct is designed for agentic coding workflows with a custom function call format and repo-scale context understanding.
Qwen3.5-4B supports unified vision-language training and 201 languages, making it ideal for lightweight multimodal deployments.
With SageMaker JumpStart, customers can deploy any of these models with just a few clicks to address their specific AI use cases.
To get started with these models, navigate to the Models section of SageMaker Studio or use the SageMaker Python SDK to deploy the models to your AWS account. For more information about deploying and using foundation models in SageMaker JumpStart, see the Amazon SageMaker JumpStart documentation.
Amazon SageMaker now supports multi-Region replication from IAM Identity Center (IdC), enabling you to deploy SageMaker Unified Studio domains in Regions other than the one hosting your IdC instance. This new capability empowers enterprise customers, particularly those in regulated industries like financial services and healthcare, to maintain compliance while leveraging centralized workforce identity management.
As an Amazon SageMaker Unified Studio administrator, you can deploy SageMaker domains closer to your workforce based on data residency needs while maintaining seamless single sign-on (SSO) access. Organizations can address use cases such as maintaining IdC in one region while processing sensitive data in compliance-required regions, supporting global operations with centralized identity management, and meeting data sovereignty requirements without compromising SSO capabilities.
To get started, see the SageMaker Unified Studio documentation; to learn about setting up IAM Identity Center multi-Region support, see the IAM Identity Center User Guide.
Amazon SageMaker AI now supports inference recommendations, a new capability that eliminates the manual optimization and benchmarking otherwise needed to achieve optimal inference performance. By delivering validated, optimal deployment configurations with performance metrics, SageMaker AI accelerates the path to production and keeps your model developers focused on building accurate models, not managing infrastructure.
Customers bring their own generative AI models, define expected traffic patterns, and specify a performance goal (optimize for cost, minimize latency, or maximize throughput). SageMaker AI then analyzes the model's architecture and applies optimizations aligned to that goal across multiple instance types, benchmarking each configuration on real GPU infrastructure using NVIDIA AIPerf. By evaluating multiple instance types, customers can select the most price-performant option for their workload. The result is deployment-ready configurations with validated metrics including time to first token, inter-token latency, request latency percentiles, throughput, and cost projections.
The capability is available today in seven AWS Regions: US East (N. Virginia), US West (Oregon), US East (Ohio), Asia Pacific (Tokyo), Europe (Ireland), Asia Pacific (Singapore), and Europe (Frankfurt). To learn more, visit the SageMaker AI documentation.
On March 17, 2026, the first session of the "AWS Business Innovation Series - West Japan" was held at the AWS Japan Osaka office. Participants experienced spec-driven development with the AI IDE "Kiro" in three steps (a lecture, a hands-on session, and a hackathon), and even attendees who do not usually write code built a working prototype in half a day. This post shares highlights from the day and feedback from participants.
Microsoft Windows Server licensing is now available on Amazon EVS. You can choose between two options, BYOL or AWS-provided licensing billed per vCPU-hour, giving you the flexibility to run Windows Server VMs in your EVS environment. This post walks through the steps from creating a vCenter connector and configuring license entitlements to activation with a KMS server.
With IAM principal-based cost allocation for Amazon Bedrock, the caller of each Bedrock API call (an IAM user or role) is automatically recorded in CUR 2.0 and Cost Explorer. This post explains how to set up this feature and how to track and allocate Bedrock costs by team, department, or project.
Find the "bottleneck" in operations and replace it with an AI agent. Measure the results with KPIs and report them. This perfectly legitimate […]
This article is a translation of "Innovation sandbox on […]" published on January 28, 2026.
This post explores how Oldcastle used AWS services to transform their analytics and AI capabilities by integrating Infor ERP with Amazon Aurora and Amazon Quick Sight. We discuss how they overcame the limitations of traditional cloud ERP reporting to deploy real-time dashboards and build a scalable analytics system. This practical, enterprise-grade approach offers a blueprint that organizations can adapt when extending ERP capabilities with cloud-native analytics and AI.
In this post, we show how to combine DVC (Data Version Control), Amazon SageMaker AI, and Amazon SageMaker AI MLflow Apps to build end-to-end ML model lineage. We walk through two deployable patterns — dataset-level lineage and record-level lineage — that you can run in your own AWS account using the companion notebooks.
Today, we're excited to announce Claude Cowork in Amazon Bedrock. You can now run Cowork and Claude Code Desktop through Amazon Bedrock, directly or using an LLM gateway. In this post, we walk through how Claude Cowork integrates with Amazon Bedrock and show an example of how knowledge workers use it in practice.