AWS Updates Feed


AWS Updates - 2026-02-02

AWS What's New

AWS HealthImaging adds JPEG XL support

AWS HealthImaging now supports storing and retrieving lossy compressed medical images in the JPEG XL transfer syntax (1.2.840.10008.1.2.4.112). It is now simpler than ever to integrate HealthImaging with applications that require JPEG XL encoded DICOM data, such as digital pathology whole slide imaging systems.

With this launch, HealthImaging stores your JPEG XL Lossy image data without transcoding, which maintains the fidelity of your data and reduces your storage costs. Further, you can retrieve stored image frames in the JPEG XL format without the latency of transcoding at retrieval time.
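A retrieval sketch of the above, assuming the standard HealthImaging `GetImageFrame` operation; the datastore, image set, and frame IDs below are placeholders, and the boto3 call is shown as a comment:

```python
# Sketch of retrieving a stored JPEG XL image frame from AWS HealthImaging.
# IDs are placeholders; the commented boto3 call is the `medical-imaging`
# GetImageFrame operation described in the HealthImaging API reference.
JPEG_XL_LOSSY_UID = "1.2.840.10008.1.2.4.112"  # transfer syntax from the announcement

request = {
    "datastoreId": "EXAMPLE_DATASTORE_ID",
    "imageSetId": "EXAMPLE_IMAGE_SET_ID",
    "imageFrameInformation": {"imageFrameId": "EXAMPLE_FRAME_ID"},
}

# import boto3
# client = boto3.client("medical-imaging")
# frame = client.get_image_frame(**request)
# jxl_bytes = frame["imageFrameBlob"].read()  # JPEG XL payload, no transcoding

print(sorted(request))
```

Because the frame is stored without transcoding, the returned bytes are the JPEG XL payload as ingested.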


AWS announces Flexible Cost Allocation in AWS GovCloud (US)

AWS Network Firewall now supports flexible cost allocation through AWS Transit Gateway native attachments in AWS GovCloud (US) Regions, enabling you to automatically distribute data processing costs across different AWS accounts. Customers can create metering policies to apply data processing charges based on their organization's chargeback requirements instead of consolidating all expenses in the firewall owner account.

This capability helps security and network teams better manage centralized firewall costs by distributing charges to application teams based on actual usage. Organizations can now maintain centralized security controls while automatically allocating inspection costs to the appropriate business units or application owners, eliminating the need for custom cost management solutions.

Flexible cost allocation is available in AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions. You can enable these features using the AWS Management Console, AWS Command Line Interface (CLI) and the AWS Software Development Kit (SDK).

There are no additional charges for using this attachment or flexible cost allocation beyond standard pricing of AWS Network Firewall and AWS Transit Gateway. To get started, visit the Flexible Cost Allocation on AWS Transit Gateway service documentation.


Amazon Connect now provides APIs to test and simulate voice interactions

Amazon Connect now offers APIs to configure and run tests that simulate contact center experiences, making it easy to validate workflows, self-service voice interactions, and their outcomes. With these APIs, you can programmatically configure test parameters, including the caller's phone number or customer profile, the reason for the call (such as "I need to check my order status"), the expected responses (such as "Your request has been processed"), and business conditions like after-hours scenarios or full call queues. With this launch, you can also integrate testing directly into CI/CD pipelines, run multiple tests simultaneously to validate workflows at scale, and enable automated regression testing as part of your deployment cycles. These capabilities allow you to rapidly validate changes to your workflows and confidently deploy new customer experiences to production.

To learn more about these features, see the Amazon Connect API Reference and Amazon Connect Administrator Guide. These features are available in Asia Pacific (Mumbai), Africa (Cape Town), Europe (Frankfurt), US East (N. Virginia), Asia Pacific (Seoul), Europe (London), Asia Pacific (Tokyo), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), and Canada (Central) regions. To learn more about Amazon Connect, AWS’s AI-native customer experience solution, please visit the Amazon Connect website.


DeepSeek OCR, MiniMax M2.1, and Qwen3-VL-8B-Instruct models are now available on SageMaker JumpStart

Today, AWS announced the availability of DeepSeek OCR, MiniMax M2.1, and Qwen3-VL-8B-Instruct in Amazon SageMaker JumpStart, expanding the portfolio of foundation models available to AWS customers. These three models bring specialized capabilities spanning document intelligence, multilingual coding, advanced multimodal reasoning, and vision-language understanding, enabling customers to build sophisticated AI applications across diverse use cases on AWS infrastructure.

These models address different enterprise AI challenges with specialized capabilities:
DeepSeek OCR explores visual-text compression for document processing. It can extract structured information from forms, invoices, diagrams, and complex documents with dense text layouts.
MiniMax M2.1 is optimized for coding, tool use, instruction following, and long-horizon planning. It automates multilingual software development and executes complex, multi-step office workflows, empowering developers to build autonomous applications.
Qwen3-VL-8B-Instruct delivers superior text understanding and generation, deeper visual perception and reasoning, extended context length, enhanced spatial and video dynamics comprehension, and stronger agent interaction capabilities.
With SageMaker JumpStart, customers can deploy any of these models with just a few clicks to address their specific AI use cases.

To get started with these models, navigate to the SageMaker JumpStart model catalog in the SageMaker console or use the SageMaker Python SDK to deploy the models to your AWS account. For more information about deploying and using foundation models in SageMaker JumpStart, see the Amazon SageMaker JumpStart documentation.
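A minimal SDK-side sketch of the deployment path described above. The model IDs below are placeholders, not the real catalog IDs; look them up in the JumpStart model catalog before deploying. The `JumpStartModel` calls are shown as comments:

```python
# Sketch: deploying a JumpStart model via the SageMaker Python SDK.
# The model IDs are placeholders -- resolve the real IDs in the
# JumpStart model catalog first.
CANDIDATE_MODELS = {
    "DeepSeek OCR": "deepseek-ocr-placeholder-id",
    "MiniMax M2.1": "minimax-m2-1-placeholder-id",
    "Qwen3-VL-8B-Instruct": "qwen3-vl-8b-instruct-placeholder-id",
}

# from sagemaker.jumpstart.model import JumpStartModel
# model = JumpStartModel(model_id=CANDIDATE_MODELS["Qwen3-VL-8B-Instruct"])
# predictor = model.deploy()  # provisions a real-time inference endpoint
# predictor.predict({"inputs": "Describe this image."})

for name in CANDIDATE_MODELS:
    print(name)
```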


Announcing memory-optimized instance bundles for Amazon Lightsail

Amazon Lightsail now offers memory-optimized instance bundles with up to 512 GB memory. The new instance bundles are available in 7 sizes, with Linux and Windows operating system (OS) and application blueprints, for both IPv6-only and dual-stack networking types. You can create instances using the new bundles with pre-configured OS and application blueprints including WordPress, cPanel & WHM, Plesk, Drupal, Magento, MEAN, LAMP, Node.js, Ruby on Rails, Amazon Linux, Ubuntu, CentOS, Debian, AlmaLinux, and Windows.

The new memory-optimized instance bundles enable you to run memory-intensive workloads that require high RAM-to-vCPU ratios in Lightsail. These high-memory instance bundles are ideal for workloads such as in-memory databases, real-time big data analytics, in-memory caching systems, high-performance computing (HPC) applications, and large-scale enterprise applications that process extensive datasets in memory.

These new bundles are now available in all AWS Regions where Amazon Lightsail is available. For more information on pricing, visit the Amazon Lightsail pricing page.
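A sketch of creating an instance from one of the new bundles with boto3. The `bundleId` and `blueprintId` values are assumptions for illustration; list the real IDs with `GetBundles` and `GetBlueprints` first. The API calls are shown as comments:

```python
# Sketch: creating a memory-optimized Lightsail instance with boto3.
# bundleId/blueprintId below are hypothetical -- discover real IDs with
# the GetBundles and GetBlueprints operations before creating instances.
params = {
    "instanceNames": ["in-memory-cache-1"],
    "availabilityZone": "us-east-1a",
    "blueprintId": "amazon_linux_2023",      # assumed blueprint ID
    "bundleId": "memory_optimized_example",  # hypothetical bundle ID
}

# import boto3
# lightsail = boto3.client("lightsail")
# lightsail.get_bundles()            # enumerate available bundle IDs
# lightsail.create_instances(**params)

print(sorted(params))
```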


AWS STS now supports validation of select identity provider specific claims from Google, GitHub, CircleCI and OCI

AWS Security Token Service (STS) now supports validation of select identity provider specific claims from Google, GitHub, CircleCI and Oracle Cloud Infrastructure in IAM role trust policies and resource control policies for OpenID Connect (OIDC) federation into AWS via the AssumeRoleWithWebIdentity API.

With this new capability, you can reference these custom claims as condition keys in IAM role trust policies and resource control policies, expanding your ability to implement fine-grained access control for federated identities and help you establish your data perimeters. This enhancement builds upon IAM's existing OIDC federation capabilities, which allow you to grant temporary AWS credentials to users authenticated through external OIDC-compatible identity providers.
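As a sketch of what such a trust policy can look like for GitHub OIDC: the `aud` and `sub` condition keys are long-standing, while the `repository_owner` key below illustrates a provider-specific claim and should be verified against the IAM documentation before use:

```python
import json

# Illustrative trust policy pinning GitHub Actions OIDC federation to one
# organization. The `repository_owner` condition key is an assumption
# standing in for the newly validated provider-specific claims.
github_oidc = "token.actions.githubusercontent.com"
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Federated": "arn:aws:iam::111122223333:oidc-provider/" + github_oidc},
        "Action": "sts:AssumeRoleWithWebIdentity",
        "Condition": {
            "StringEquals": {
                github_oidc + ":aud": "sts.amazonaws.com",
                github_oidc + ":repository_owner": "my-org",  # provider-specific claim (verify exact key)
            },
            "StringLike": {github_oidc + ":sub": "repo:my-org/*"},
        },
    }],
}
print(json.dumps(trust_policy, indent=2))
```

Attach the document as the role's trust policy; the same condition keys can also appear in resource control policies.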


Amazon CloudFront announces mutual TLS support for origins

Amazon CloudFront announces support for mutual TLS authentication (mTLS) for origins, a security protocol that enables customers to verify that requests to their origin servers come only from their authorized CloudFront distributions using TLS certificates. This certificate-based authentication provides cryptographic verification of CloudFront's identity, eliminating the need for customers to manage custom security controls.

Previously, verifying that requests came from CloudFront distributions required customers to build and maintain custom authentication solutions like shared secret headers or IP allow-lists, particularly for public or externally hosted origins. These approaches required ongoing operational overhead to rotate secrets, update allow-lists, and maintain custom code. Now with origin mTLS support, customers can implement a standardized, certificate-based authentication approach that eliminates this operational burden. This enables organizations to enforce strict authentication for their proprietary content, ensuring that only verified CloudFront distributions can establish connections to backend infrastructure ranging from AWS origins and on-premises servers to third-party cloud providers and external CDNs. Customers can leverage client certificates issued by AWS Private Certificate Authority or third-party private Certificate Authorities, which they import through AWS Certificate Manager.

Customers can configure origin mTLS using the AWS Management Console, CLI, SDK, CDK, or CloudFormation. Origin mTLS is supported for all origins that support mutual TLS on AWS such as Application Load Balancer and API Gateway, as well as on-premises and custom origins. There is no additional charge for origin mTLS. Origin mTLS is also available in the Business and Premium flat-rate pricing plans. For detailed implementation guidance and best practices, visit the CloudFront origin mutual TLS documentation.
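For the certificate-import step mentioned above, a sketch using ACM's `ImportCertificate` operation; the PEM contents are placeholders, and the CloudFront-side mTLS settings are configured separately on the distribution:

```python
# Sketch: importing a client certificate for origin mTLS into ACM.
# PEM bodies are placeholders; ImportCertificate is the ACM operation
# used to bring in certificates from a private CA.
import_params = {
    "Certificate": b"-----BEGIN CERTIFICATE-----\n...placeholder...\n-----END CERTIFICATE-----\n",
    "PrivateKey": b"-----BEGIN PRIVATE KEY-----\n...placeholder...\n-----END PRIVATE KEY-----\n",
    "CertificateChain": b"-----BEGIN CERTIFICATE-----\n...placeholder...\n-----END CERTIFICATE-----\n",
}

# import boto3
# acm = boto3.client("acm")
# response = acm.import_certificate(**import_params)
# cert_arn = response["CertificateArn"]  # reference this ARN in the origin's mTLS settings

print(sorted(import_params))
```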


AWS Multi-party approval now requires one-time password verification for voting

AWS Multi-party approval now requires approvers to verify their voting actions with a one-time password (OTP) sent to their registered AWS IAM Identity Center email address. This additional security layer prevents AWS IAM Identity Center administrators from bypassing multi-party approval controls by impersonating approvers through credential resets or authentication endpoint modifications. When approvers access the Approval Portal and attempt to cast their vote on protected operations, the system generates a six-digit verification code and sends it to their email. Approvers enter this code within 10 minutes to complete their vote, with up to three attempts allowed.

The OTP verification process activates only when approvers submit their vote decision; they can review all approval request details before verification is required. If approvers don't receive the email or the code expires, they can request a new code through the interface.
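The pattern described above (a six-digit code, a 10-minute expiry, up to three attempts) can be sketched as follows. This is an illustrative implementation of the generic OTP pattern, not AWS's actual code:

```python
import hmac
import secrets
import time

# Illustrative OTP flow: six-digit code, 10-minute TTL, three attempts,
# constant-time comparison. Not AWS's implementation.
CODE_TTL_SECONDS = 10 * 60
MAX_ATTEMPTS = 3

def issue_code() -> tuple[str, float]:
    """Generate a random six-digit code and its expiry timestamp."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    return code, time.time() + CODE_TTL_SECONDS

def verify(submitted: str, code: str, expires_at: float, attempts_used: int) -> bool:
    """Accept the vote only if attempts remain, the code is unexpired, and it matches."""
    if attempts_used >= MAX_ATTEMPTS or time.time() > expires_at:
        return False
    return hmac.compare_digest(submitted, code)

code, expires_at = issue_code()
print(verify(code, code, expires_at, attempts_used=0))  # True for a correct, fresh code
```

`hmac.compare_digest` avoids leaking match position through timing, which matters whenever a secret code is compared against attacker-supplied input.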

AWS Multi-party approval with OTP verification for voting is available in all AWS Regions where Multi-party approval is offered at no additional charge. To learn more, visit the AWS Multi-party approval documentation.


Build Production-Ready Drug Discovery and Robotics Pipelines with NVIDIA NIMs on SageMaker JumpStart

Amazon SageMaker JumpStart now enables one-click deployment of four NVIDIA NIM models purpose-built for biosciences and physical AI: ProteinMPNN, Nemotron-3.5B-Instruct, MSA Search NIM, and Cosmos Reason. NVIDIA NIM™ provides prebuilt, optimized inference microservices for rapidly deploying the latest AI models on any NVIDIA-accelerated infrastructure. These models bring advanced capabilities spanning protein design, reasoning with configurable outputs, and physical world understanding, enabling customers to accelerate biosciences research, drug discovery, and embodied AI applications on AWS infrastructure.

ProteinMPNN enables fast and efficient protein sequence optimization guided by structural data. This NIM generates high-quality sequences with enhanced binding affinity and stability, validated through experimental results. Designed for scalability and flexibility, ProteinMPNN integrates seamlessly into protein engineering workflows, transforming applications like enzyme design and therapeutic development.

MSA Search NIM supports GPU-accelerated Multiple Sequence Alignment (MSA) of a query amino acid sequence against a set of protein sequence databases. These databases are searched for sequences similar to the query, and the resulting collection of sequences is aligned to establish similar regions even when the proteins have different lengths and motifs.

Nemotron-3.5B-Instruct delivers high reasoning performance, native tool calling support, and extended context processing with a 256k token context window. This model employs an efficient hybrid Mixture-of-Experts (MoE) architecture to ensure higher throughput than its predecessors for agentic and coding workloads, while maintaining the reasoning depth of a larger model. It is ideal for building multi-agent workflows, developer productivity tools, process automation, and scientific and mathematical reasoning, among other uses.

Cosmos Reason is an open, customizable reasoning vision language model (VLM) for physical AI and robotics. It enables robots and vision AI agents to reason like humans, using prior knowledge, physics understanding, and common sense to understand and act in the real world. This model understands space, time, and fundamental physics, and can serve as a planning model to reason about what steps an embodied agent might take next.

With SageMaker JumpStart, customers can deploy any of these models with just a few clicks to address their specific AI use cases.

To get started with these models, navigate to the SageMaker JumpStart model catalog in the SageMaker console or use the SageMaker Python SDK to deploy the models to your AWS account. For more information about deploying and using foundation models in SageMaker JumpStart, see the Amazon SageMaker JumpStart documentation.


Amazon RDS for MySQL now supports new minor versions 8.0.45 and 8.4.8

Amazon Relational Database Service (Amazon RDS) for MySQL now supports MySQL minor versions 8.0.45 and 8.4.8, the latest minors released by the MySQL community. We recommend upgrading to the newer minor versions to fix known security vulnerabilities in prior versions of MySQL and to benefit from bug fixes, performance improvements, and new functionality added by the MySQL community. Learn more about the enhancements in RDS for MySQL 8.0.45 and 8.4.8 in the Amazon RDS user guide.

You can leverage automatic minor version upgrades to automatically upgrade your databases to more recent minor versions during scheduled maintenance windows. You can also use Amazon RDS Managed Blue/Green deployments for safer, simpler, and faster updates to your MySQL instances. Learn more about upgrading your database instances, including automatic minor version upgrades and Blue/Green Deployments, in the Amazon RDS User Guide.
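The automatic minor version upgrade setting mentioned above can be sketched with boto3's `ModifyDBInstance` operation; the instance identifier is a placeholder and the API call is shown as a comment:

```python
# Sketch: opting an RDS for MySQL instance into automatic minor version
# upgrades. The instance identifier is a placeholder.
modify_params = {
    "DBInstanceIdentifier": "my-mysql-instance",  # placeholder
    "AutoMinorVersionUpgrade": True,              # upgrade during maintenance windows
    "ApplyImmediately": False,                    # take effect at the next window
}

# import boto3
# rds = boto3.client("rds")
# rds.modify_db_instance(**modify_params)

print(sorted(modify_params))
```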

Amazon RDS for MySQL makes it simple to set up, operate, and scale MySQL deployments in the cloud. Learn more about pricing details and regional availability at Amazon RDS for MySQL. Create or update a fully managed Amazon RDS for MySQL database in the Amazon RDS Management Console.


AWS News Blog

AWS Weekly Roundup: Amazon Bedrock agent workflows, Amazon SageMaker private connectivity, and more (February 2, 2026)

Over the past week, we passed Laba festival, a traditional marker in the Chinese calendar that signals the final stretch leading up to the Lunar New Year. For many in China, it’s a moment associated with reflection and preparation, wrapping up what the year has carried, and turning attention toward what lies ahead. Looking forward, […]


AWS Security Bulletins

Security Findings in SageMaker Python SDK

Bulletin ID: 2026-004-AWS
Scope: AWS
Content Type: Important (requires attention)
Publication Date: 2026/02/02 2:30 PM PST

Description:

CVE-2026-1777 - Exposed HMAC in SageMaker Python SDK
SageMaker Python SDK’s remote functions feature uses a per‑job HMAC key to protect the integrity of serialized functions, arguments, and results stored in S3. We identified an issue where the HMAC secret key is stored in environment variables and disclosed via the DescribeTrainingJob API. This allows third parties with DescribeTrainingJob permissions to extract the key, forge cloud-pickled payloads with valid HMACs, and overwrite S3 objects.
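Why a disclosed HMAC key is fatal here: HMAC only authenticates payloads against parties who do not hold the key. This illustrative stdlib sketch (not the SDK's code) shows that anyone who extracts the per-job key can sign an arbitrary payload that passes verification:

```python
import hashlib
import hmac

# Illustrative sketch of the attack class: a leaked HMAC key lets a third
# party forge valid tags for attacker-chosen payloads.
key = b"per-job-secret-leaked-via-environment"  # e.g. recovered from DescribeTrainingJob output

legit = b"pickled-results-v1"
legit_tag = hmac.new(key, legit, hashlib.sha256).hexdigest()

# An attacker holding the same key forges a different payload with a valid tag.
forged = b"attacker-controlled-pickle"
forged_tag = hmac.new(key, forged, hashlib.sha256).hexdigest()

verifies = hmac.compare_digest(
    forged_tag, hmac.new(key, forged, hashlib.sha256).hexdigest()
)
print(verifies)  # True: the forged payload passes the integrity check
```

This is why the fix moves the key out of material visible to anyone with `DescribeTrainingJob` permissions.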

CVE-2026-1778 - Insecure TLS Configuration in SageMaker Python SDK
SageMaker Python SDK is an open source library for training and deploying machine learning models on Amazon SageMaker. We identified an issue where SSL certificate verification was globally disabled in the Triton Python backend. This configuration was introduced to work around SSL errors during model downloads from public sources (e.g., TorchVision) and it affected all HTTPS connections when the Triton Python model was imported.

Impacted versions:

- HMAC Configuration in SageMaker Python SDK v3 < v3.2.0
- HMAC Configuration in SageMaker Python SDK v2 < v2.256.0
- Insecure TLS Configuration in SageMaker Python SDK v3 < v3.1.1
- Insecure TLS Configuration in SageMaker Python SDK v2 < v2.256.0

Please refer to the article below for the most up-to-date and complete information related to this AWS Security Bulletin.