Customers in India can now use UPI (Unified Payments Interface) Scan and Pay to sign up for AWS or pay their invoices.
UPI is a popular and convenient payment method in India that enables instant bank-to-bank transfers between two parties through internet-connected mobile phones. The new Scan and Pay experience simplifies payments by allowing customers to scan a QR code displayed on the AWS Console using their UPI mobile app (such as Google Pay, PhonePe, Paytm, or Amazon Pay), eliminating the need to manually enter a UPI ID.
This enhancement makes the UPI payment experience more secure, convenient, and error-free for customers signing up for AWS or making one-time payments. Scan and Pay reduces friction and aligns with how customers commonly use UPI for everyday transactions. Customers can also set up UPI AutoPay using Scan and Pay for automatic monthly payments up to INR 15,000.
To use this feature, customers log in to the AWS Console and select UPI as their payment method during signup or when making a payment. A QR code is displayed on screen, which customers scan using their UPI mobile app to verify and authorize the transaction.
To learn more, see Managing Payment Methods in India.
AWS is announcing the general availability of Amazon EC2 R8idn and Amazon EC2 R8idb instances, powered by custom sixth generation Intel Xeon Scalable processors, available only on AWS. These instances also feature the latest sixth generation AWS Nitro cards. R8idn and R8idb deliver up to 43% better compute performance per vCPU compared to previous generation R6in instances.
Amazon EC2 R8idn instances offer up to 600 Gbps network bandwidth, the highest network bandwidth among enhanced networking EC2 instances, combined with up to 22,800 GB of local NVMe instance storage. Amazon EC2 R8idb instances deliver up to 300 Gbps EBS bandwidth and up to 1,440K IOPS, the highest EBS performance among non-accelerated compute EC2 instances.
R8idn instances are ideal for memory-intensive workloads requiring high network throughput and local storage, such as in-memory databases, real-time big data analytics, and large-scale distributed caching layers. R8idb instances are ideal for memory-intensive workloads requiring high block storage performance, such as large-scale commercial databases, high-performance file systems, and enterprise analytics platforms.
Amazon EC2 R8idn and R8idb instances are available in US East (N. Virginia, Ohio), US West (Oregon), and Europe (Spain). R8idn and R8idb instances are available via Savings Plans, On-Demand, and Spot instances. For more information, visit the Amazon EC2 R8i instance page.
AWS is announcing the general availability of Amazon EC2 M8idn and Amazon EC2 M8idb instances, powered by custom sixth generation Intel Xeon Scalable processors, available only on AWS. These instances also feature the latest sixth generation AWS Nitro cards. M8idn and M8idb deliver up to 43% better compute performance per vCPU compared to previous generation M6idn instances.
Amazon EC2 M8idn instances offer up to 600 Gbps network bandwidth, the highest network bandwidth among enhanced networking EC2 instances. Amazon EC2 M8idb instances deliver up to 300 Gbps EBS bandwidth, the highest EBS performance among non-accelerated compute EC2 instances.
M8idn instances are ideal for network-intensive general purpose workloads requiring local storage, such as distributed compute, data analytics, and high-performance file systems. M8idb instances are ideal for storage-intensive general purpose workloads such as large commercial databases, data lakes, and NoSQL databases that benefit from both high EBS throughput and low-latency local NVMe storage.
Amazon EC2 M8idn and Amazon EC2 M8idb instances are available in US East (N. Virginia), US West (Oregon), and Europe (Spain). M8idn and M8idb instances are available via Savings Plans, On-Demand, and Spot instances. For more information, visit the Amazon EC2 M8i instance page.
AWS WAF now supports dynamic label interpolation, enabling you to forward WAF classification signals to your origin and embed context in responses with a single rule. Security engineers who previously maintained a separate rule for every signal value can now use ${namespace:} syntax in custom request headers, response headers, and response bodies to forward an entire label namespace at once. For example, one rule with a dynamic variable can forward all IP reputation signals to your application, which can then respond adaptively, such as by enforcing MFA.
Interpolation also introduces synthetic labels: built-in values resolved from request context, including client IP address, WAF request ID, and JA3 and JA4 fingerprints. You can embed these in custom block pages and challenge pages so users reporting false positives have a reference ID to cite, or forward TLS fingerprints to your application for adaptive auth decisions. Interpolation works with any label namespace, including AWS Managed Rules, AWS Marketplace rule groups, and your own custom labels. Headers automatically adapt as new labels are added to the namespace, and when multiple labels match, values resolve to a comma-separated list.
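As an illustration, the sketch below shows what a rule of this kind might look like when expressed as a boto3-style rule definition: a Count rule that matches the AWS managed IP reputation label namespace and inserts a custom request header whose value interpolates every label in that namespace. The rule name, priority, header name, and namespace string are illustrative assumptions, not values taken from the launch documentation.

```python
# Hypothetical AWS WAF rule sketch. Names, priority, and the namespace string are
# assumptions for illustration; check the AWS WAF Developer Guide for the exact
# interpolation syntax supported at launch. The rule must be prioritized after
# the managed rule group that adds the labels it matches.
forward_ip_reputation_rule = {
    "Name": "ForwardIpReputationLabels",
    "Priority": 10,
    "Statement": {
        "LabelMatchStatement": {
            "Scope": "NAMESPACE",
            "Key": "awswaf:managed:aws:amazon-ip-reputation-list:",
        }
    },
    "Action": {
        "Count": {
            "CustomRequestHandling": {
                "InsertHeaders": [
                    {
                        # One header forwards every matching label in the namespace;
                        # multiple matches resolve to a comma-separated list.
                        "Name": "x-amzn-waf-ip-reputation",
                        "Value": "${awswaf:managed:aws:amazon-ip-reputation-list:}",
                    }
                ]
            }
        }
    },
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "ForwardIpReputationLabels",
    },
}
# This rule dict would go into the Rules list of an UpdateWebACL call.
```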
Dynamic label interpolation is available in all AWS Regions where AWS WAF is available at no additional cost. There are no new API fields or configuration steps. To get started, see Dynamic label interpolation in the AWS WAF Developer Guide, or explore the sample on GitHub.
Amazon SageMaker HyperPod now supports AMI-based configuration that provisions Slurm cluster nodes with the software and configurations needed for a production-ready environment to run AI/ML training workloads. This removes the need to download, configure, or upload lifecycle configuration scripts to Amazon S3. With fewer operational steps to prepare a cluster and no lifecycle configuration scripts executing during node provisioning, cluster creation time is significantly reduced, so you can start running jobs sooner.
AMI-based configuration includes required software such as Docker, Enroot, and Pyxis, and configurations such as Slurm accounting, SSH key generation, Slurm log rotation and user home directory setup. To enable AMI-based configuration, omit the LifeCycleConfig block from the instance group configuration when creating clusters using the CreateCluster API, or when using the SageMaker AI console, select "None" under Lifecycle scripts in Custom setup. For additional customization on top of the AMI-based configuration baseline, an extension script can be provided, allowing you to focus only on what capabilities and software to add, such as user configuration, observability, or LDAP integration.
Extension scripts can be configured when creating clusters through both the API and the SageMaker AI console. Using the CreateCluster API, specify the new OnInitComplete parameter and SourceS3Uri in the LifeCycleConfig block. Via the console, provide the S3 URI to the extension script in the "Extension script file in S3" field in Custom setup. For advanced use cases that require full control over provisioning, custom lifecycle configuration scripts remain fully supported through both the API and the SageMaker AI console.
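For illustration, a minimal boto3 CreateCluster sketch follows. The cluster name, role ARN, and instance type are placeholders, and the commented-out OnInitComplete/SourceS3Uri shape is an assumption based on this announcement; confirm the exact field names in the CreateCluster API reference.

```python
import boto3

sagemaker = boto3.client("sagemaker")

sagemaker.create_cluster(
    ClusterName="hyperpod-slurm-cluster",   # placeholder name
    InstanceGroups=[
        {
            "InstanceGroupName": "worker-nodes",
            "InstanceType": "ml.g5.8xlarge",  # placeholder instance type
            "InstanceCount": 4,
            "ThreadsPerCore": 1,
            "ExecutionRole": "arn:aws:iam::111122223333:role/HyperPodExecutionRole",
            # Omitting LifeCycleConfig selects the new AMI-based configuration.
            # To layer an extension script on top of the AMI baseline, supply it
            # via the new OnInitComplete parameter (field shape assumed from this
            # announcement; check the API reference):
            # "LifeCycleConfig": {
            #     "SourceS3Uri": "s3://amzn-s3-demo-bucket/hyperpod/",
            #     "OnInitComplete": "extend.sh",
            # },
        }
    ],
)
```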
This feature is available in all AWS Regions where SageMaker HyperPod is available. To get started with creating HyperPod Slurm clusters with AMI-based node lifecycle configuration, see Getting started with SageMaker HyperPod using the AWS CLI or Getting started with SageMaker HyperPod using the SageMaker AI console in the SageMaker AI developer guide.
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M8gn and M8gb instances are available in the AWS Europe (Ireland) Region. These instances are powered by AWS Graviton4 processors, which deliver up to 30% better compute performance than AWS Graviton3 processors, and feature the latest 6th generation AWS Nitro Cards. M8gn instances offer up to 600 Gbps network bandwidth, the highest network bandwidth among network optimized EC2 instances. M8gb instances offer up to 300 Gbps of EBS bandwidth, providing higher EBS performance than other same-sized Graviton4-based instances.
M8gn instances are ideal for network-intensive workloads such as high-performance file systems, distributed web-scale in-memory caches, caching fleets, real-time big data analytics, and telco applications such as 5G User Plane Function (UPF). M8gb instances are ideal for workloads requiring high block storage performance, such as high-performance databases and NoSQL databases.
M8gn instances offer instance sizes up to 48xlarge and metal-48xl, up to 768 GiB of memory, up to 600 Gbps of networking bandwidth, and up to 120 Gbps of bandwidth to Amazon Elastic Block Store (EBS). They also support Elastic Fabric Adapter (EFA) networking on the 16xlarge, 24xlarge, and 48xlarge sizes, metal-24xl, and metal-48xl.
M8gb instances offer sizes up to 48xlarge and metal-48xl, up to 768 GiB of memory, up to 300 Gbps of EBS bandwidth, and up to 400 Gbps of networking bandwidth. They support EFA networking on the 16xlarge, 24xlarge, and 48xlarge sizes, metal-24xl, and metal-48xl, which enables lower latency and improved cluster performance for workloads deployed on tightly coupled clusters.
The new instances are available in the following AWS Regions: US East (N. Virginia), US West (Oregon), and Europe (Ireland). Metal sizes are available in the US East (N. Virginia) Region.
To learn more, see Amazon EC2 M8gn and M8gb Instances. To begin your Graviton journey, visit the Level up your compute with AWS Graviton page.
Amazon Connect Outbound Campaigns now detects customer time zones using all phone numbers and addresses on a customer profile, not just the primary contact fields. Previously, time zone detection used only the primary phone number, which could produce incorrect delivery windows for customers whose contact information spans multiple time zones.
When a profile's contact information spans multiple time zones, the system delivers only during hours that fall within your configured window in every detected time zone, and skips profiles when no overlap exists. For example, if a customer has a mobile number with an Eastern time area code and a business number with a Pacific time area code, and your campaign is configured for 9am–5pm delivery, messages will only be sent between 12pm–5pm ET (9am–2pm PT), when both time zones fall within the allowed window.
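The window arithmetic behaves like an intersection of the configured local window evaluated in each detected time zone. The small Python sketch below is not part of the service; it only illustrates the calculation, using the announcement's 9am-5pm Eastern/Pacific example.

```python
from datetime import datetime, date
from zoneinfo import ZoneInfo

def delivery_overlap(day: date, start_hour: int, end_hour: int, zones: list[str]):
    """Return the interval during which start_hour-end_hour local time holds in
    every detected time zone, or None if the windows never overlap."""
    starts = [datetime(day.year, day.month, day.day, start_hour, tzinfo=ZoneInfo(z)) for z in zones]
    ends = [datetime(day.year, day.month, day.day, end_hour, tzinfo=ZoneInfo(z)) for z in zones]
    start, end = max(starts), min(ends)
    return (start, end) if start < end else None

# 9am-5pm window, one Eastern and one Pacific number:
overlap = delivery_overlap(date(2026, 5, 7), 9, 17, ["America/New_York", "America/Los_Angeles"])
print(overlap)  # 12pm-5pm Eastern (9am-2pm Pacific)
```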
This capability is available at no additional cost in all AWS Regions where Amazon Connect Outbound Campaigns is offered. To learn more, see the Amazon Connect Outbound Campaigns documentation.
The AWS Advanced JDBC Wrapper now provides column-level client-side encryption through its KMS Encryption plugin. The wrapper offers advanced capabilities such as failover handling, AWS authentication integration, and enhanced monitoring for Amazon Aurora and Amazon RDS open-source databases. It enables Java applications to encrypt sensitive data before it reaches the database without changing application code.
Database encryption at rest and TLS in transit are foundational security controls. However, with these controls, data is still decrypted within the database engine. A compromised credential, overprivileged administrator, or SQL injection attack can expose sensitive data in plaintext, creating compliance risk under PCI DSS, HIPAA, and GDPR. The KMS Encryption plugin closes this gap by working at the JDBC driver level. When your application writes to an encrypted column, the plugin encrypts the value before it reaches the database. When reading, it decrypts the value before returning it. Plaintext remains visible only to your application, while the database sees encrypted values. The database can verify data integrity through HMAC validation without needing the encryption key. The plugin integrates seamlessly with your existing SQL, Spring, Hibernate, and connection pool setup without requiring code changes.
The KMS Encryption plugin works with Amazon RDS and Amazon Aurora PostgreSQL and MySQL-compatible databases.
The plugin is available as an open-source project under the Apache 2.0 license. To learn more, see AWS Advanced JDBC Wrapper documentation.
AWS Elemental MediaTailor now supports monetization functions, a new capability that lets customers customize how MediaTailor builds ad decision server (ADS) requests and manages session data during ad-personalized playback. With monetization functions, customers can call external APIs and run inline data transformations at defined points in the playback session — eliminating the need to build and operate middleware between the player and the ADS.
Common use cases include resolving hashed email addresses into privacy-compliant identity envelopes through providers such as LiveRamp, appending contextual metadata from a content management system to every ad request through providers like GraceNote, activating header bidding workflows through providers like The Trade Desk, and running A/B tests across multiple ad decision servers. Monetization functions are fail-open by design: if a function encounters an error, exceeds its timeout, or hits a resource limit, MediaTailor discards the output and proceeds with default ad-insertion behavior, so viewers' playback is never interrupted.
Monetization functions are generally available in all AWS Regions where AWS Elemental MediaTailor operates. You are billed per lifecycle hook invocation at a flat rate that does not depend on the number, type, or complexity of functions. For full details, see the MediaTailor pricing page, the Monetization Functions section of the MediaTailor User Guide, and the MediaTailor product page.
Today, AWS announces availability notifications for AWS Capabilities by Region in AWS Builder Center, a new subscription-based system that automatically alerts builders when AWS services or features become available in their target Regions. Availability notifications make it easy for builders to track availability of 1,500+ services and features across 37 AWS Regions, accelerating infrastructure planning and deployment decisions.
With availability notifications, builders can subscribe at the service level through the AWS Builder Center UI, and the subscription automatically covers all underlying features across selected Regions, so there's no need to track each feature individually. Notifications are delivered through two channels: instantaneous in-app alerts within AWS Builder Center, and a consolidated weekly email digest. Subscriptions and notification preferences can be managed through Settings > Notifications in AWS Builder Center. Common use cases include tracking a specific capability launch, monitoring service parity across AWS Regions, and preparing for upcoming migrations or Regional expansions. For example, a solutions architect expanding a generative AI application into new Regions can subscribe to Amazon Bedrock and receive automatic updates as Knowledge Bases, Guardrails, and other features become available.
Starting today, Amazon EC2 G7e instances accelerated by NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs are available in the Europe (London) Region. G7e instances offer up to 2.3x the inference performance of G6e instances.
Customers can use G7e instances to deploy large language models (LLMs), agentic AI models, multimodal generative AI models, and physical AI models. G7e instances offer the highest performance for spatial computing workloads as well as workloads that require both graphics and AI processing capabilities. G7e instances feature up to 8 NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, with 96 GB of memory per GPU, and 5th Generation Intel Xeon processors. They support up to 192 virtual CPUs (vCPUs) and up to 1600 Gbps of networking bandwidth. G7e instances support NVIDIA GPUDirect Peer to Peer (P2P) that boosts performance for multi-GPU workloads. Multi-GPU G7e instances also support NVIDIA GPUDirect Remote Direct Memory Access (RDMA) with EFA in EC2 UltraClusters, reducing latency for small-scale multi-node workloads.
G7e instances are available in the following AWS Regions: US West (Oregon), US East (N. Virginia, Ohio), Europe (Spain, London), and Asia Pacific (Tokyo, Seoul). You can purchase G7e instances as On-Demand Instances, Spot Instances, or as part of Savings Plans.
To get started, use the AWS Management Console, AWS Command Line Interface (CLI), or AWS SDKs. To learn more, visit the G7e instances page.
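As a sketch, launching a G7e instance with the AWS SDK for Python might look like the following; the AMI ID, key pair, and the specific size (g7e.xlarge) are placeholders, so substitute values for your account and Region.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-2")  # Europe (London)

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI (e.g., a Deep Learning AMI)
    InstanceType="g7e.xlarge",        # placeholder size; choose the size you need
    KeyName="my-key-pair",            # placeholder key pair
    MinCount=1,
    MaxCount=1,
)
```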
Amazon SageMaker Unified Studio announces new administration features that give administrators more control over identity configuration and user management for both IAM and Identity Center domain types.
In SageMaker IAM domains, administrators can now onboard users through single sign-on by configuring AWS IAM Identity Center. After configuration, administrators can add IAM roles, IAM users, IAM Identity Center users, and IAM Identity Center groups as project members. Teams can collaborate on project data and resources regardless of how individual members authenticate. Administrators can set up IAM Identity Center integration in the SageMaker Unified Studio admin portal. A new domain user management page for SageMaker IAM domains gives administrators a consolidated view of all users active in the domain, where they can manage access and update permissions from a single screen.
In SageMaker Identity Center domains, users can now access the SageMaker Unified Studio portal by federating through an IAM role. SageMaker Unified Studio creates a unique user session for each federated user, so users sharing the same role don't overwrite each other's work. Administrators can audit individual actions even when multiple users share a single IAM role.
With these features, customers can use IAM identities or IAM Identity Center corporate identities across both domain types, giving teams the flexibility to collaborate in SageMaker Unified Studio regardless of their authentication method.
These features are available in the following AWS Regions: Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), South America (São Paulo), US East (N. Virginia), US East (Ohio), and US West (Oregon).
To learn more, visit the SageMaker Unified Studio documentation.
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) X8i instances are available in the Europe (Ireland) and Asia Pacific (Mumbai) regions. These instances are powered by custom Intel Xeon 6 processors available only on AWS. X8i instances are SAP-certified and deliver the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. They deliver up to 43% higher performance, 1.5x more memory capacity (up to 6TB), and 3.3x more memory bandwidth compared to previous generation X2i instances.
X8i instances are designed for memory-intensive workloads like SAP HANA, large databases, data analytics, and Electronic Design Automation (EDA). Compared to X2i instances, X8i instances offer up to 50% higher SAPS performance, up to 47% faster PostgreSQL performance, 88% faster Memcached performance, and 46% faster AI inference performance. X8i instances come in 14 sizes, from large to 96xlarge, including two bare metal options.
To get started, visit the AWS Management Console. X8i instances can be purchased via Savings Plans, On-Demand Instances, and Spot Instances. For more information, visit the X8i instances page.
Starting today, the Amazon Elastic Compute Cloud (Amazon EC2) G6 instances powered by NVIDIA L4 GPUs are available in AWS European Sovereign Cloud (Germany). G6 instances can be used for a wide range of graphics-intensive and machine learning (ML) use cases.
Customers can use G6 instances for deploying ML models for natural language processing, language translation, video and image analysis, speech recognition, and personalization. G6 instances are also well-suited for graphics workloads, such as creating and rendering real-time, cinematic-quality graphics and game streaming. G6 instances feature up to 8 NVIDIA L4 Tensor Core GPUs with 24 GB of memory per GPU and third generation AMD EPYC processors. They also support up to 192 vCPUs, up to 100 Gbps of network bandwidth, and up to 7.52 TB of local NVMe SSD storage.
In addition to AWS European Sovereign Cloud (Germany), Amazon EC2 G6 instances are available today in the AWS US East (N. Virginia and Ohio), US West (Oregon), Europe (Frankfurt, London, Paris, Spain, Stockholm and Zurich), Asia Pacific (Mumbai, Tokyo, Malaysia, Seoul and Sydney), South America (Sao Paulo), Middle East (UAE) and Canada (Central) Regions. Customers can purchase G6 instances as On-Demand Instances, Spot Instances, or as part of Savings Plans.
To get started, use the AWS Management Console, AWS Command Line Interface (CLI), or AWS SDKs. To learn more, visit the G6 instance page.
AWS Marketplace launches a new Tax management portal that provides sellers with a streamlined self-service process to view and download invoices, eliminating the need to request invoices through support channels. The Tax management portal integrates invoice management directly into the AWS Partner Central console, providing centralized access to both seller listing fee invoices and invoices issued to buyers in applicable regions. The portal streamlines invoice retrieval and record-keeping for sellers and partner finance teams managing AWS Marketplace operations.
Sellers can now access the new experience through AWS Partner Central or the AWS Marketplace Management Portal, which offers advanced search and filtering so they can find listing fee invoices by invoice ID, date range, or invoicing entity. Sellers can also access these invoices programmatically through the ListInvoiceSummaries API. Sellers can download multiple invoices simultaneously, making it efficient to prepare for audits, reconcile financial records, or retrieve tax-related information. This self-service approach provides transparency into listing fees across different AWS Marketplace invoicing entities, supporting multi-region operations and revenue tracking needs.
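For the programmatic path, a minimal sketch follows. It assumes ListInvoiceSummaries is called through the AWS Invoicing API (the boto3 "invoicing" client) and uses a placeholder account ID; confirm the exact client, selector, and response fields in the Seller Guide and API reference.

```python
import boto3

invoicing = boto3.client("invoicing")

# Placeholder selector value; response field names may differ slightly.
response = invoicing.list_invoice_summaries(
    Selector={"ResourceType": "ACCOUNT_ID", "Value": "111122223333"}
)
for summary in response.get("InvoiceSummaries", []):
    print(summary.get("InvoiceId"), summary.get("DueDate"))
```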
Beyond listing fee invoices, India-based sellers can view and download through the portal the tax invoices generated on their behalf and issued to buyers, with filtering by invoice ID, buyer name, date range, buyer account ID, or invoicing entity.
Seller listing fee invoices are supported for all AWS Marketplace entities. To learn more about accessing and managing invoices, visit the AWS Marketplace Seller Guide.
Amazon Route 53 Resolver endpoints now support DNS64 on inbound endpoints and IPv6 forwarding through the internet gateway (IGW) on outbound endpoints, making it easier to manage hybrid DNS across IPv4 and IPv6 networks. With DNS64 enabled on inbound endpoints, you can synthesize AAAA (IPv6) responses for domains that only have A (IPv4) records, allowing IPv6-only clients on-premises to reach IPv4 services on AWS without changes to those services. You can also configure outbound endpoints to forward DNS queries to public IPv6 name servers through the IGW.
Amazon Route 53 Resolver endpoints simplify hybrid cloud DNS by enabling seamless query resolution between on-premises networks and Amazon Virtual Private Cloud (Amazon VPC). As you transition workloads to IPv6, these capabilities help your IPv6 resources on VPCs and on-premises networks communicate with both IPv4 and IPv6 destinations without additional workarounds.
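To make the DNS64 behavior concrete, the sketch below shows the standard RFC 6052 mapping that DNS64 uses to synthesize an AAAA answer from an A record by embedding the IPv4 address in the well-known 64:ff9b::/96 prefix. The announcement does not state which prefix Route 53 Resolver uses, so treat the prefix here as illustrative.

```python
import ipaddress

def synthesize_aaaa(ipv4: str, nat64_prefix: str = "64:ff9b::") -> str:
    """Embed an IPv4 address in a /96 NAT64 prefix to form a synthetic AAAA record."""
    embedded = int(ipaddress.IPv4Address(ipv4))
    return str(ipaddress.IPv6Address(int(ipaddress.IPv6Address(nat64_prefix)) | embedded))

print(synthesize_aaaa("198.51.100.7"))  # 64:ff9b::c633:6407
```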
These capabilities are available at no additional cost in all AWS Regions where Route 53 Resolver endpoints are supported. To get started, see the Route 53 VPC Resolver documentation. For regional availability, see the Route 53 Region list. For Route 53 Resolver endpoint pricing, see the Route 53 pricing page.
IAM Policy Autopilot now supports Java applications and Terraform-aware policy generation, expanding its language coverage and its ability to generate less permissive IAM policies from code. IAM Policy Autopilot is an open-source tool launched at re:Invent 2025 that helps builders quickly and deterministically create baseline IAM policies on AWS that you can refine as your application evolves, reducing the time you spend writing IAM policies and troubleshooting access issues.
Java has been one of the most requested languages from IAM Policy Autopilot users. With this release, Java developers can now analyze their application source code to generate AWS IAM policies, joining Python, TypeScript, and Go as supported languages. In addition, IAM Policy Autopilot can now cross-reference Terraform resource definitions with SDK calls in your application code to resolve actual resource ARNs for each IAM action. For example, a policy generated for an application that calls S3 GetObject will now reference the specific bucket defined in Terraform rather than defaulting to wildcard (*) resources.
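To illustrate the difference, the statements below compare what a generated policy might contain without and with Terraform context; the bucket name is hypothetical, and IAM Policy Autopilot derives the real ARN from your Terraform resource definitions.

```python
# Without Terraform context: the action is correct but the resource is a wildcard.
statement_without_terraform = {
    "Effect": "Allow",
    "Action": "s3:GetObject",
    "Resource": "*",
}

# With Terraform-aware generation: the resource is scoped to the bucket your
# Terraform configuration actually defines (hypothetical name shown here).
statement_with_terraform = {
    "Effect": "Allow",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::my-app-assets/*",
}
```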
IAM Policy Autopilot is available at no additional cost and can be used from your own machine. To get started, visit the IAM Policy Autopilot GitHub repository.
I spent the week of April 27 on holiday in York in the United Kingdom. York is known as the most haunted city in the country […]
Kiro agents transform how you approach an SAP Clean Core strategy. A Clean Core journey typically unfolds in three phases: assessing custom code, remediating violations, and then operating in a Clean Core manner going forward, and Kiro agents can accelerate all of them. These open-source agents, released on GitHub, automate the classification of ABAP code violations and provide detailed remediation guidance. Instead of manually reviewing thousands of ABAP objects, you can complete comprehensive assessments that used to take weeks in a matter of hours.
The AWS For SAP Management MCP Server brings SAP management capabilities to AI assistants. It consolidates the knowledge the server holds (SAP component dependencies, validated patterns, and cross-service logic) into a single interface you can converse with. Configuring this MCP server in your AI assistant gives it the right SAP context and access to more than 20 SAP-aware tools. These tools can run administrative queries against live SAP environments, correlate data across AWS services, return results that are meaningful in SAP terms, and take actions with your approval.
Today, AWS announced the general availability of the AWS SDK for SAP ABAP Knowledge MCP Server. Before this release, you had to context-switch between your IDE and the AWS documentation and write ABAP code yourself from generic usage guidance and examples. With this Model Context Protocol (MCP) server, agentic IDEs do that work for you, drawing on the same trusted documentation as the official AWS SDK for SAP ABAP reference.
This blog post is from KDDI Corporation, Personal Business Division, System Development Division, Life Design Platform Department […]
Amazon Web Services (AWS) achieved three Standar Nasional Indonesia (SNI) certifications for the AWS Asia Pacific (Jakarta) Region: SNI ISO/IEC 27017:2015, SNI ISO/IEC 27018:2019, and SNI ISO 9001:2015. SNI represents Indonesia’s national standards framework, comprising standards that are broadly applicable across industries within the country. These certifications further demonstrate that AWS services meet nationally recognized […]
Read all about the latest AWS security features, compliance updates, and hands-on resources in our new, monthly digest posts. You’ll find expert blog posts, new service capabilities, code samples, and workshops. AWS Security Blog posts This month’s AWS Security Blog posts covered AI security, identity and access management, threat intelligence, data protection, and multicloud operations. […]
Bulletin ID: 2026-027-AWS
Scope: AWS
Content Type: Important (requires attention)
Publication Date: 2026/05/07 7:45 PM PDT
Description:
Amazon is aware of a class of issues in the Linux kernel related to the original issue (CVE-2026-31431). The issues, commonly referred to as "DirtyFrag", are present in a number of loadable modules, including xfrm_user/esp4/esp6 and ipcomp4/ipcomp6. On systems that allow unprivileged users to create sockets directly or through CAP_NET_ADMIN, or allow the creation of unprivileged user namespaces (user+net), an actor may gain access to kernel memory and thus escalate their privileges.
Please refer to the article below for the most up-to-date and complete information related to this AWS Security Bulletin.
In this post, you will learn how to implement reinforcement learning with verifiable rewards (RLVR) to introduce verification and transparency into reward signals to improve training performance. This approach works best when outputs can be objectively verified for correctness, such as in mathematical reasoning, code generation, or symbolic manipulation tasks. You will also learn how to layer techniques like Group Relative Policy Optimization (GRPO) and few-shot examples to further improve results. You’ll use the GSM8K dataset (Grade School Math 8K: a collection of grade school math problems) to improve math problem solving accuracy, but the techniques used here can be adapted to a wide variety of other use cases.
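As a flavor of what a verifiable reward means here, a minimal GSM8K-style reward function is sketched below; it simply checks whether the final number in the model's output matches the reference answer. The post's actual parsing and reward shaping may differ.

```python
import re

def gsm8k_reward(model_output: str, reference_answer: str) -> float:
    """Verifiable reward: 1.0 if the last number in the output equals the
    reference answer, else 0.0. Illustrative sketch of an RLVR reward for math."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", model_output.replace(",", ""))
    if not numbers:
        return 0.0
    return 1.0 if float(numbers[-1]) == float(reference_answer) else 0.0

print(gsm8k_reward("... so the total is 42 apples.", "42"))  # 1.0
```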
In this post, you will learn how to secure reserved GPU capacity for short-term workloads using Amazon Elastic Compute Cloud (Amazon EC2) Capacity Blocks for ML and Amazon SageMaker training plans. These solutions can address GPU availability challenges when you need short-term capacity for load testing, model validation, time-bound workshops, or preparing inference capacity ahead of a release.
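A minimal sketch of reserving a Capacity Block with the AWS SDK for Python follows; the instance type, count, duration, and dates are placeholders, and the parameters should be checked against the current EC2 API reference.

```python
from datetime import datetime, timezone

import boto3

ec2 = boto3.client("ec2")

# Find Capacity Block offerings that match a short-term GPU need (placeholder values).
offerings = ec2.describe_capacity_block_offerings(
    InstanceType="p5.48xlarge",
    InstanceCount=4,
    CapacityDurationHours=24,
    StartDateRange=datetime(2026, 6, 1, tzinfo=timezone.utc),
    EndDateRange=datetime(2026, 6, 7, tzinfo=timezone.utc),
)["CapacityBlockOfferings"]

# Purchase the first matching offering, if any.
if offerings:
    ec2.purchase_capacity_block(
        CapacityBlockOfferingId=offerings[0]["CapacityBlockOfferingId"],
        InstancePlatform="Linux/UNIX",
    )
```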
In this post, we'll explore how we built a proof-of-concept that converts natural language queries into executable seismic workflows while providing a question-answering capability for Halliburton's Seismic Engine tools and documentation. We'll cover the technical details of the solution, share evaluation results showing workflow acceleration of up to 95%, and discuss key learnings that can help other organizations enhance their complex technical workflows with generative AI.