Amazon Redshift announces the general availability of RG instances, a new generation of provisioned cluster nodes powered by AWS Graviton processors that deliver better performance, running data warehouse and data lake workloads up to 2.4x as fast as previous generation RA3 instances, at 30% lower price per vCPU. RG instances include Redshift's custom-built vectorized data lake query engine that processes Apache Iceberg and Parquet data on your cluster nodes — enabling you to run SQL analytics across your data warehouse and data lake using a single engine. This eliminates the need for Redshift Spectrum's separate scanning fleet and its associated per-terabyte charges.
Whether you're running structured data warehouse workloads on Redshift Managed Storage or querying open-format data lake tables in Amazon S3, RG instances deliver significant performance improvements — up to 2.2x as fast as RA3 instances for data warehouse workloads, up to 2.4x as fast for Apache Iceberg queries, and up to 1.5x as fast for Parquet workloads. The natively built data lake engine features a purpose-built I/O subsystem with smart prefetch, NVMe caching, vectorized Parquet scans, and advanced file and partition-level pruning. Just-in-Time (JIT) Analyze delivers consistently fast queries without manual tuning — automatically collecting and updating table statistics as your data and workload patterns evolve. Intelligent NVMe caching keeps frequently accessed datasets close to compute, reducing round-trips to your data lake for faster response times on repeated queries. RG instances are available at launch in two instance sizes — rg.xlarge and rg.4xlarge. Existing RA3 clusters can migrate using Snapshot & Restore, Elastic Resize, or Classic Resize. RG instances are available with flexible pricing options, including On-Demand, and 1-year and 3-year Reserved Instances with No Upfront payment. For pricing details, visit the Amazon Redshift pricing page.
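The migration paths above map to standard Redshift APIs. As a minimal sketch, the following builds the parameters for an elastic (or classic) resize of an existing RA3 cluster to RG node types; the cluster identifier and node count are placeholders, and the `rg.xlarge`/`rg.4xlarge` node type names come from the announcement.

```python
# Sketch: resizing an existing RA3 cluster to RG node types.
# Elastic Resize is the default; pass classic=True to request a
# Classic Resize instead. Values here are illustrative placeholders.

def rg_resize_params(cluster_id: str, node_type: str = "rg.xlarge",
                     num_nodes: int = 2, classic: bool = False) -> dict:
    """Build the parameter set for redshift.resize_cluster()."""
    return {
        "ClusterIdentifier": cluster_id,
        "ClusterType": "multi-node" if num_nodes > 1 else "single-node",
        "NodeType": node_type,
        "NumberOfNodes": num_nodes,
        "Classic": classic,
    }

params = rg_resize_params("my-ra3-cluster", num_nodes=4)
# import boto3
# boto3.client("redshift").resize_cluster(**params)
print(params["NodeType"])
```

Snapshot & Restore is the alternative path: restore a snapshot of the RA3 cluster into a new cluster with an RG node type.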
Amazon Redshift RG instances are now available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), US West (N. California), Canada (Central), South America (São Paulo), Europe (Ireland), Europe (Frankfurt), Europe (London), Europe (Paris), Europe (Stockholm), Europe (Milan), Europe (Spain), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Mumbai), Asia Pacific (Jakarta), Asia Pacific (Hong Kong), Asia Pacific (Osaka), Asia Pacific (Malaysia), Asia Pacific (Hyderabad), Asia Pacific (Taiwan), Asia Pacific (Melbourne), and Middle East (UAE).
To get started, refer to the following resources:
Previously, the Amazon CloudFront Premium flat-rate plan supported a single usage allowance, and customers who outgrew it needed to contact us to discuss custom pricing options. Now, the Premium plan offers a range of self-service monthly usage levels ranging from 500 million to 6 billion requests and 50 TB to 600 TB, so customers can scale within the plan as their applications grow. Enterprises and mid-sized businesses whose baseline traffic previously made them ineligible for flat-rate plans can now adopt the Premium plan at a usage level that fits their application.
You select your Premium plan usage level in the CloudFront console, see your new monthly flat-rate price instantly, and can change your usage level at any time with no commitment required. All Premium plan features are included at every usage level. Flat-rate plans provide a single monthly price covering content delivery, AWS WAF and DDoS protection, bot management, Amazon Route 53 DNS, Amazon CloudWatch Logs ingestion, serverless edge compute, and Amazon S3 storage credits — with no overage charges.
To get started, visit the CloudFront console. To learn more, refer to the Launch Blog or Amazon CloudFront Developer Guide.
Amazon Connect now enables you to embed Cases and Customer Profiles into custom agent applications, helping agents access case details and customer context alongside the tools they already use to resolve issues. Developers can use the Amazon Connect SDK to bring native Connect experiences into custom applications, reducing the need to build and maintain these capabilities from scratch.
The Amazon Connect SDK is available in all AWS Regions where Amazon Connect Cases and Customer Profiles are available. To learn more and get started, visit the administrator guide and developer guide.
Amazon Elastic Kubernetes Service (Amazon EKS) now supports Amazon Application Recovery Controller (ARC) zonal shift and zonal autoshift when using the open source Karpenter project for compute provisioning. ARC helps you manage and coordinate recovery for your applications across AWS Regions and Availability Zones (AZs). With this launch, you can better maintain Kubernetes application availability by automating the process of shifting in-cluster network traffic away from an impaired AZ.
Customers increasingly deploy highly available applications in Amazon EKS across multiple AZs to eliminate a single point of failure. With ARC zonal shift, you can temporarily mitigate an AZ impairment by redirecting in-cluster network traffic away from the impacted AZ. For a fully automated experience, authorize AWS to manage this on your behalf using ARC zonal autoshift, which includes practice runs to verify your cluster functions as expected with one less AZ. When a zonal shift is activated for your EKS cluster, Karpenter stops provisioning new capacity in the impaired AZ, halts voluntary disruptions such as consolidation and drift for nodes in that AZ, and prevents voluntary disruptions in healthy zones if they depend on scheduling pods to the impaired zone. Pods with strict scheduling requirements such as volume affinities that require the impaired zone will not trigger launch attempts. When the zonal shift expires or is canceled, Karpenter resumes normal operations.
This Karpenter feature works with both manual zonal shifts and zonal autoshifts. No custom ARC resources are required as Karpenter integrates directly with the existing EKS cluster ARC resource. To enable zonal shift support, set the ENABLE_ZONAL_SHIFT setting in your Karpenter settings. To learn more, visit the Karpenter documentation and the ARC zonal shift documentation.
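Karpenter settings are commonly surfaced as environment variables on the controller deployment. As a sketch under that assumption, the following builds a strategic-merge patch that sets ENABLE_ZONAL_SHIFT on the controller; the namespace, deployment, and container names ("karpenter", "controller") are the Helm chart defaults and may differ in your install.

```python
import json

# Sketch: build a strategic-merge patch that sets the ENABLE_ZONAL_SHIFT
# environment variable on the Karpenter controller deployment. The
# deployment/container names are assumptions based on the default Helm chart.

def zonal_shift_env_patch(enabled: bool = True) -> str:
    """Return a JSON patch enabling (or disabling) zonal shift support."""
    patch = {
        "spec": {
            "template": {
                "spec": {
                    "containers": [{
                        "name": "controller",
                        "env": [{
                            "name": "ENABLE_ZONAL_SHIFT",
                            "value": "true" if enabled else "false",
                        }],
                    }]
                }
            }
        }
    }
    return json.dumps(patch)

# kubectl -n karpenter patch deployment karpenter -p '<patch JSON>'
print(zonal_shift_env_patch())
```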
Amazon SageMaker Feature Store now supports the SageMaker Python SDK v3, including new capabilities for Lake Formation access controls and Apache Iceberg table properties configuration. Feature Store is a fully managed repository to store, share, and manage features for machine learning models. Data scientists can now use the modern, modular SDK v3 interfaces to manage feature groups with fine-grained access control and optimized offline storage.
Data scientists can use the SageMaker Python SDK v3 to manage feature groups with streamlined workflows and reduced boilerplate. With Lake Formation integration, data scientists can enforce column-level and row-level access control on offline store data through an opt-in setting at feature group creation. With Iceberg properties support, data scientists can configure additional table properties such as compaction and snapshot expiration directly through the SDK to optimize storage and query performance. These capabilities allow data scientists to govern access to feature data and optimize offline store performance from a single SDK without managing separate tools.
These capabilities are available in all AWS Regions where Amazon SageMaker Feature Store is available. To get started, install SageMaker Python SDK v3.8.0 or later. For more information, see Lake Formation access controls and Iceberg metadata management documentation.
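The exact SDK v3 interface for these settings is not reproduced here, but the underlying Apache Iceberg table properties for compaction and snapshot expiration are standard. The following is an illustrative sketch: the property keys are documented Iceberg table properties, while the helper function and the shape in which SDK v3 accepts them are assumptions.

```python
# Sketch of Iceberg table properties a feature group's offline store might
# be tuned with. Property keys are standard Apache Iceberg table properties;
# the helper and default values are illustrative assumptions.

def offline_store_iceberg_properties(
    target_file_size_mb: int = 128,
    snapshot_max_age_days: int = 5,
    min_snapshots_to_keep: int = 1,
) -> dict:
    """Return Iceberg table properties for compaction and snapshot expiry."""
    return {
        # Target size for data files written during compaction.
        "write.target-file-size-bytes": str(target_file_size_mb * 1024 * 1024),
        # Expire snapshots older than this age...
        "history.expire.max-snapshot-age-ms": str(
            snapshot_max_age_days * 24 * 60 * 60 * 1000),
        # ...while always retaining at least this many.
        "history.expire.min-snapshots-to-keep": str(min_snapshots_to_keep),
    }

print(offline_store_iceberg_properties())
```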
Today, AWS announces the release of full repository code review, a new capability in AWS Security Agent that performs deep, context-aware security analysis of your entire codebase. Unlike traditional static analysis tools that match code against known vulnerability patterns, full repository code review reasons about your application's architecture, trust boundaries, and data flows to surface systemic vulnerabilities that pattern-matching tools miss. When vulnerabilities are found, the scanner generates code remediations: specific fixes tied to the exact file and line, so teams can identify and remediate security vulnerabilities faster than ever before. This capability is available at no additional charge for existing AWS Security Agent customers during the preview.
AI-driven cybersecurity capabilities are advancing rapidly. AWS Security Agent can find vulnerabilities and build working exploits at a scale and speed we haven't seen before. AWS is prioritizing free early access for customers, giving defenders the opportunity to strengthen their codebases and share what they learn so the whole industry can benefit.
Full repository code review is available in all AWS Regions where AWS Security Agent is available.
To get started, visit the AWS Security Agent console to enable full repository code review and run your first review. To learn more, see the AWS Security Agent documentation.
Amazon EventBridge Scheduler expands its AWS SDK integrations with 13 additional services and 619 new API actions across new and existing AWS services, including AWS Lambda Managed Instances. You can now schedule direct invocations of a broader set of AWS services without writing custom integration code.
EventBridge Scheduler is a serverless scheduler that allows you to create, run, and manage billions of scheduled events and tasks across more than 270 AWS services, without provisioning or managing the underlying infrastructure. With this expansion, you can now schedule a broader set of AWS API actions directly from Scheduler, including scaling Lambda managed instances up or down on a time-based schedule for precise control over capacity provisioning.
These enhancements are now generally available in all AWS Regions where Amazon EventBridge Scheduler is available. Specific services and API actions are subject to the availability of the target service in the AWS Region. To learn more about Amazon EventBridge Scheduler SDK integrations, visit the Developer Guide.
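Scheduler addresses SDK actions through "universal target" ARNs of the form `arn:aws:scheduler:::aws-sdk:<service>:<apiAction>`. A minimal sketch of constructing one (the specific service and action shown are illustrative; consult the Developer Guide for the actions available to your target service):

```python
# Sketch: EventBridge Scheduler universal targets address any supported AWS
# SDK action via an ARN of the form
#   arn:aws:scheduler:::aws-sdk:<service>:<apiAction>
# The service/action pair below is illustrative.

def universal_target_arn(service: str, action: str) -> str:
    """Build a universal target ARN for an AWS service API action."""
    return f"arn:aws:scheduler:::aws-sdk:{service}:{action}"

arn = universal_target_arn("lambda", "updateFunctionConfiguration")
print(arn)
# A schedule pairs this ARN with a JSON input of the action's parameters:
# scheduler.create_schedule(..., Target={"Arn": arn, "RoleArn": role_arn,
#                                        "Input": json.dumps(params)})
```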
AWS Lambda now supports scheduled scaling for functions running on Lambda Managed Instances, using Amazon EventBridge Scheduler. This capability allows you to define one-time or recurring schedules that proactively adjust your function's capacity limits ahead of expected traffic, to meet your performance targets during peak periods and avoid costs during idle periods.
Lambda Managed Instances lets you run Lambda functions on managed Amazon EC2 instances with built-in routing, load balancing, and autoscaling. Capacity scales between your configured minimum and maximum execution environment limits based on traffic. Previously, customers with predictable traffic patterns, such as business-hours applications or marketing events, were required to manually adjust capacity limits ahead of known demand changes or build custom automation to manage scaling on a schedule. With scheduled scaling, you can now define schedules that proactively adjust your function’s capacity limits ahead of expected traffic. For example, you can schedule capacity limits to increase before business hours so execution environments are ready when the first requests arrive. You can also define a schedule that scales capacity to zero during idle periods (so you only pay when the function is actively serving traffic), and schedule it to scale back up before traffic returns.
Scheduled scaling for functions running on Lambda Managed Instances is available in all AWS Regions where Lambda Managed Instances is supported. You can create schedules using the Amazon EventBridge Scheduler console, AWS CLI, AWS SDK, AWS CDK, or AWS CloudFormation. To learn more, visit the AWS Lambda Managed Instances documentation, Amazon EventBridge Scheduler documentation, AWS Lambda pricing, and Amazon EventBridge pricing.
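The business-hours pattern above can be sketched as a pair of recurring schedules: one raising capacity before the workday, one dropping it to zero afterwards. The schedule expressions are standard EventBridge Scheduler cron syntax; the capacity field name and the target API for Lambda Managed Instances are placeholders, so check the Lambda documentation for the actual parameters.

```python
# Sketch: recurring schedules that raise a function's capacity limit before
# business hours and scale it to zero afterwards. Times are UTC and
# illustrative; "MaxCapacity" is a placeholder field name.

def business_hours_schedules(function_name: str, peak_max: int) -> list:
    """Return schedule definitions for weekday scale-up and scale-down."""
    return [
        {
            "Name": f"{function_name}-scale-up",
            # 07:30 UTC on weekdays, before the first requests arrive.
            "ScheduleExpression": "cron(30 7 ? * MON-FRI *)",
            "MaxCapacity": peak_max,
        },
        {
            "Name": f"{function_name}-scale-down",
            # 19:00 UTC on weekdays: scale to zero so idle time costs nothing.
            "ScheduleExpression": "cron(0 19 ? * MON-FRI *)",
            "MaxCapacity": 0,
        },
    ]

for s in business_hours_schedules("orders-api", peak_max=50):
    print(s["Name"], s["ScheduleExpression"])
```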
Amazon RDS for Oracle now offers M8i and R8i instances with Oracle Database Standard Edition 2 (SE2) under the License Included (LI) model. M8i and R8i instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and memory bandwidth among comparable Intel processors in the cloud. The new instances offer up to 15% better price-performance and 2.5x more memory bandwidth compared to previous generation Intel-based instances.
With RDS for Oracle SE2 LI, customers don't have to purchase an Oracle license and support separately. Amazon RDS for Oracle SE2 LI offers subscription-based, pay-per-use pricing inclusive of the software license, support, compute resources, and a managed database service. To use RDS for Oracle SE2 LI, customers can create database instances from the AWS Management Console or the AWS CLI and specify the LI option. For more details about how you can lower cost and simplify operations of running Oracle databases, refer to the AWS blog Rethink Oracle Standard Edition Two on Amazon RDS for Oracle.
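As a minimal sketch of specifying the LI option at instance creation: the `oracle-se2` engine and `license-included` license model are the documented RDS values, while the instance class name and sizes below are assumptions based on the announcement and should be checked against the instance types page.

```python
# Sketch: parameters for creating an RDS for Oracle SE2 License Included
# instance on the new instance family. Engine and LicenseModel are the
# documented RDS values; the instance class and sizes are assumptions.

def oracle_se2_li_params(instance_id: str,
                         instance_class: str = "db.m8i.large",
                         storage_gb: int = 100) -> dict:
    """Build the parameter set for rds.create_db_instance()."""
    return {
        "DBInstanceIdentifier": instance_id,
        "DBInstanceClass": instance_class,
        "Engine": "oracle-se2",
        # "license-included" selects the LI option: no separately
        # purchased Oracle license or support contract is required.
        "LicenseModel": "license-included",
        "AllocatedStorage": storage_gb,
        "MasterUsername": "admin",
        "ManageMasterUserPassword": True,
    }

params = oracle_se2_li_params("my-oracle-db")
# import boto3
# boto3.client("rds").create_db_instance(**params)
print(params["LicenseModel"])
```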
Configuration details for available instance types can be found on the Amazon RDS for Oracle Instance Types page.
For pricing and AWS Region availability, see Amazon RDS for Oracle Pricing.
Amazon Redshift RG instances, powered by AWS Graviton, run data warehouse and data lake workloads up to 2.4x as fast as RA3 instances at 30% lower price per vCPU. Their integrated data lake query engine supports open table formats such as Apache Iceberg.
Hello. This is 田中 里絵 (Rie Tanaka), a Solutions Architect at Amazon Web Services Japan […]
Imagine a Monday morning. The CFO, the engineering director, and everyone on the finance team, with a comprehensive AWS […]
The latest Amazon Q cost capabilities are transforming how FinOps teams manage cloud spend. Fi […]
Cloud and AI are transforming industries and societies at unprecedented speed, from accelerating research and enhancing customer experiences to optimizing business processes and enriching public services. At Amazon Web Services (AWS), we believe that for the cloud and AI to reach their full potential, customers need control over their data and choices for how and […]
Today, we’re excited to announce the preview release of full repository code review, a new capability in AWS Security Agent that performs deep, context-aware security analysis of your entire code base. AI-driven cybersecurity capabilities are advancing rapidly. AWS Security Agent can now find vulnerabilities and build working exploits across your entire code base at a […]
In this post, we show you how to set up FLOPs tracking during LLM fine-tuning using the open source Fine-Tuning FLOPs Meter toolkit on Amazon SageMaker AI. You learn how to determine your compliance status with a single configuration flag and generate audit-ready documentation.
In this post, we'll show you how our multi-document discovery feature solves this problem. It serves as an automated pre-processing step, analyzing unknown documents, clustering them by type, and generating schemas ready for the IDP Accelerator. You'll learn how the new capability uses visual embeddings for automatic clustering and agents for schema generation. We'll also walk you through running the solution on your own document collections.
In this post, we demonstrate how Amazon FinTech teams are using Amazon Bedrock and other AWS services to build a scalable AI application to transform how regulatory inquiries are handled. Each team using this solution creates and maintains its own dedicated knowledge base, populated with that team's specific documents and reference materials.