AWS Updates - 2026-02-24
AWS What's New
MediaConvert Introduces new video probe API and UI
- Link: https://aws.amazon.com/about-aws/whats-new/2026/02/aws-mediaconvert-introduces-video-probe/
- Published: 2026-02-24
Introducing Probe API, a powerful and free metadata analysis tool for AWS Elemental MediaConvert. Optimized for efficiency, Probe API reads header metadata to quickly return essential information about your media files, including codec specifications, pixel formats, color space details, and container information - all without waiting to process the actual video content. This analysis capability makes it an invaluable tool for content creators, developers, and media professionals who need to quickly validate files, automate workflows, or use AWS Step Functions to make encoding decisions based on source material characteristics.
For complete implementation details and usage examples, please visit the MediaConvert API Reference documentation. The Probe API can be utilized in any region where AWS Elemental MediaConvert is available, making it a versatile tool for streamlining your media workflow analysis.
To get started with Probe API and explore its capabilities, visit the AWS Elemental MediaConvert product page or consult the User Guide for comprehensive documentation.
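As a rough sketch of how a workflow might call Probe with boto3 and pull codec details out of the response (the response field names here follow the MediaConvert API reference and should be verified against it; the bucket and file are placeholders):

```python
def summarize_probe(results):
    """Pull track type and codec out of Probe API results.

    `results` is the "ProbeResults" list from the response; the exact
    field names are taken from the MediaConvert API reference and may
    need adjusting for your SDK version.
    """
    summary = []
    for result in results:
        for track in result.get("Container", {}).get("Tracks", []):
            props = track.get("VideoProperties") or track.get("AudioProperties") or {}
            summary.append({
                "trackType": track.get("TrackType"),
                "codec": props.get("Codec"),
            })
    return summary

def probe_file(file_url, region="us-east-1"):
    # boto3 is imported lazily so summarize_probe stays usable offline.
    # Probe reads only header metadata, so this returns quickly even
    # for large files. The S3 URL is a placeholder.
    import boto3
    mc = boto3.client("mediaconvert", region_name=region)
    resp = mc.probe(InputFiles=[{"FileUrl": file_url}])
    return summarize_probe(resp["ProbeResults"])
```

A validation step could reject files whose video track is not in an expected codec before submitting a transcoding job.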
AWS AppConfig integrates with New Relic for automated rollbacks
- Link: https://aws.amazon.com/about-aws/whats-new/2026/02/aws-appconfig-new-relic-for-automated-rollback/
- Published: 2026-02-24
AWS AppConfig today launched a new integration that enables automated, intelligent rollbacks during feature flag and dynamic configuration deployments using New Relic Workflow Automation. Building on AWS AppConfig's third-party alert capability, this integration provides teams using New Relic with a solution to automatically detect degraded application health and trigger rollbacks in seconds, eliminating manual intervention.
When you deploy feature flags using AWS AppConfig's gradual deployment strategy, the AWS AppConfig New Relic Extension continuously monitors your application health against configured alert conditions. If issues are detected during a feature flag update and deployment, such as increased error rates or elevated latency, the New Relic Workflow automatically sends a notification to trigger an immediate rollback, reverting the feature flag to its previous state. This closed-loop automation reduces the time between detection and remediation from minutes to seconds, minimizing customer impact during failed deployments.
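A gradual deployment only gives the New Relic workflow a rollback window if the rollout is slow enough for degradation to surface. A minimal boto3 sketch of a strategy and deployment shaped for that (the strategy name and all identifiers are placeholders; see the AWS AppConfig documentation for the New Relic extension setup itself):

```python
def gradual_strategy(minutes=20, growth_pct=10.0, bake_minutes=10):
    """Request body for CreateDeploymentStrategy.

    A linear rollout plus a final bake time leaves room for the
    New Relic workflow to detect degraded health and trigger the
    rollback before the deployment completes.
    """
    return {
        "Name": "gradual-with-monitoring",  # placeholder name
        "DeploymentDurationInMinutes": minutes,
        "GrowthFactor": growth_pct,          # percent of targets per step
        "GrowthType": "LINEAR",
        "FinalBakeTimeInMinutes": bake_minutes,
        "ReplicateTo": "NONE",
    }

def start_flag_deployment(app_id, env_id, profile_id, version, strategy_id):
    # boto3 is imported lazily so gradual_strategy stays testable offline.
    import boto3
    client = boto3.client("appconfig")
    return client.start_deployment(
        ApplicationId=app_id,
        EnvironmentId=env_id,
        DeploymentStrategyId=strategy_id,
        ConfigurationProfileId=profile_id,
        ConfigurationVersion=version,
    )
```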
Amazon EKS Node Monitoring Agent is now open source
- Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-eks-node-monitoring-agent-open-source/
- Published: 2026-02-24
Amazon Elastic Kubernetes Service (Amazon EKS) Node Monitoring Agent is now open source. You can access the Amazon EKS Node Monitoring Agent source code and contribute to its development on GitHub.
Running workloads reliably in Kubernetes clusters can be challenging. Cluster administrators often have to resort to manual methods of monitoring and repairing degraded nodes in their clusters. The Amazon EKS Node Monitoring Agent simplifies this process by automatically monitoring and publishing node-level system, storage, networking, and accelerator issues as node conditions, which are used by Amazon EKS for automatic node repair. With the Amazon EKS Node Monitoring Agent’s source code available on GitHub, you now have visibility into the agent’s implementation, can customize it to fit your requirements, and can contribute directly to its ongoing development.
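Because the agent surfaces issues as extra node conditions, you can inspect them with any Kubernetes client. A sketch using the official `kubernetes` Python client, under the simplifying assumption that any non-Ready condition reported `True` indicates a problem (condition names vary by agent version, so treat this as illustrative):

```python
def unhealthy_conditions(node):
    """Return (type, reason) for non-Ready conditions that are True.

    The Node Monitoring Agent publishes problems as additional node
    conditions, so anything beyond Ready reported True is worth a look.
    `node` is a plain dict, e.g. from V1Node.to_dict().
    """
    bad = []
    for cond in node["status"].get("conditions", []):
        if cond["type"] != "Ready" and cond["status"] == "True":
            bad.append((cond["type"], cond.get("reason")))
    return bad

def scan_cluster():
    # Requires the `kubernetes` package and a working kubeconfig;
    # imported lazily so unhealthy_conditions stays testable offline.
    from kubernetes import client, config
    config.load_kube_config()
    for node in client.CoreV1Api().list_node().items:
        issues = unhealthy_conditions(node.to_dict())
        if issues:
            print(node.metadata.name, issues)
```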
The Amazon EKS Node Monitoring Agent is included in Amazon EKS Auto Mode and is available as an Amazon EKS add-on in all AWS Regions where Amazon EKS is available.
To learn more about the Amazon EKS Node Monitoring Agent and node repair, visit the Amazon EKS documentation.
Announcing AWS Elemental Inference
- Link: https://aws.amazon.com/about-aws/whats-new/2026/02/aws-elemental-inference-generally-avail/
- Published: 2026-02-24
AWS Elemental Inference, a fully managed Artificial Intelligence (AI) service that enables broadcasters and streamers to automatically generate vertical content and highlight clips for mobile and social platforms in real time, is now generally available. The service applies AI capabilities to live and on-demand video in parallel with encoding and helps companies and creators to reach audiences in any format without requiring AI expertise or dedicated production teams.
With Elemental Inference you can process video once and optimize it everywhere—creating main broadcasts while simultaneously generating vertical versions for TikTok, Instagram Reels, YouTube Shorts, Snapchat, and other mobile platforms in parallel with live video. For example, sports broadcasters can automatically generate vertical highlight clips during live games and distribute them to social platforms in real-time, capturing viral moments as they happen rather than hours later.
The service launches with two AI features: vertical video cropping that transforms live and on-demand landscape broadcasts into mobile-optimized formats, and advanced metadata analysis that identifies key moments to generate highlight clips from live content. Using an agentic AI application that requires no prompts or human-in-the-loop intervention, broadcasters can scale content production without adding manual workflows or production staff—the system automatically adapts content for each platform. In beta testing, large media companies achieved 34% or more savings on AI-powered live video workflows compared to using multiple point solutions.
AWS Elemental Inference is available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Mumbai), and Europe (Ireland).
For more information, visit the AWS News Blog or explore the AWS Elemental Inference documentation.
Amazon EC2 M8a instances now available in AWS Europe (Frankfurt) region
- Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-ec2-m8a-instances-europe-frankfurt/
- Published: 2026-02-24
Starting today, general-purpose Amazon EC2 M8a instances are available in the AWS Europe (Frankfurt) Region. M8a instances are powered by 5th Gen AMD EPYC processors (formerly code-named Turin) with a maximum frequency of 4.5 GHz, and deliver up to 30% higher performance and up to 19% better price-performance compared to M7a instances.
M8a instances deliver 45% more memory bandwidth compared to M7a instances, making them ideal even for latency-sensitive workloads. M8a instances deliver even higher performance gains for specific workloads: they are up to 60% faster on the GroovyJVM benchmark and up to 39% faster on the Cassandra benchmark compared to Amazon EC2 M7a instances. M8a instances are SAP-certified and offer 12 sizes, including 2 bare metal sizes, so customers can precisely match their workload requirements.
M8a instances are built using the latest sixth-generation AWS Nitro Cards and are ideal for applications that benefit from high performance and high throughput, such as financial applications, gaming, rendering, application servers, simulation modeling, mid-size data stores, application development environments, and caching fleets.
To get started, sign in to the AWS Management Console. Customers can purchase these instances via Savings Plans, On-Demand instances, and Spot instances. For more information visit the Amazon EC2 M8a instance page.
Amazon RDS Snapshot Export to S3 now available in AWS GovCloud (US) Regions
- Link: https://aws.amazon.com/about-aws/whats-new/2026/02/rds-exports-s3-available-gov-cloud/
- Published: 2026-02-24
Amazon RDS Snapshot Export to S3 is now available in AWS GovCloud (US) regions, enabling you to export snapshot data in Apache Parquet format for analytics, data retention, and machine learning use cases.
Snapshot export to S3 supports all DB snapshot types (manual, automated system, and AWS Backup snapshots) and runs directly on the snapshot without impacting database performance. The exported data in Apache Parquet format can be analyzed using other AWS services such as Amazon Athena, Amazon SageMaker, or Amazon Redshift Spectrum, or with big data processing frameworks such as Apache Spark.
You can create a snapshot export with just a few clicks in the Amazon RDS Management Console or by using the AWS SDK or CLI. Snapshot Export to S3 is supported for Amazon Aurora PostgreSQL - Compatible Edition and Amazon Aurora MySQL, Amazon RDS for PostgreSQL, Amazon RDS for MySQL, and Amazon RDS for MariaDB snapshots. For more information, including instructions on getting started, read Aurora documentation or Amazon RDS documentation.
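A sketch of starting an export with boto3 (all ARNs, names, and the GovCloud region are placeholders for your own resources; exports are always encrypted, so `KmsKeyId` is required):

```python
def export_request(snapshot_arn, bucket, role_arn, kms_key_id, prefix="exports/"):
    """Build the StartExportTask request body.

    The IAM role must allow RDS to write to the bucket, and the KMS
    key encrypts the exported Parquet files.
    """
    return {
        "ExportTaskIdentifier": "snapshot-export-demo",  # placeholder name
        "SourceArn": snapshot_arn,
        "S3BucketName": bucket,
        "S3Prefix": prefix,
        "IamRoleArn": role_arn,
        "KmsKeyId": kms_key_id,  # required: exports are KMS-encrypted
    }

def start_export(**kwargs):
    # boto3 is imported lazily so export_request stays testable offline.
    import boto3
    rds = boto3.client("rds", region_name="us-gov-west-1")
    return rds.start_export_task(**export_request(**kwargs))
```

Once the export completes, the Parquet files under the S3 prefix can be queried in place with Amazon Athena.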
AWS Deadline Cloud now supports running tasks together in chunks
- Link: https://aws.amazon.com/about-aws/whats-new/2026/02/aws-deadline-cloud-running-tasks-together-in/
- Published: 2026-02-24
Today, AWS Deadline Cloud announces support for grouping tasks into chunks to efficiently execute multiple tasks together. AWS Deadline Cloud is a fully managed service that simplifies render management for computer-generated 2D/3D graphics and visual effects for films, TV shows, commercials, games, and industrial design.
When your job has short tasks, or tasks that need to run in an environment with a long startup time, chunking them together for execution reduces the time and cost of completing the job. When creating a job, you can now manually specify a chunk size for the number of tasks to group together for execution, or alternatively specify a target run time for the execution of a chunk of tasks. The target run time is used to dynamically adjust the number of tasks grouped together as the job completes, improving execution efficiency and achieving the target run time.
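The intuition behind target-run-time chunking can be shown with a little arithmetic. This is not Deadline Cloud's actual algorithm, just a sketch of the idea: amortize environment startup over a chunk, then size the chunk so startup plus per-task time lands near the target.

```python
import math

def chunk_size_for_target(per_task_seconds, target_seconds, startup_seconds=0.0):
    """How many tasks to group so one chunk approaches the target run time.

    Illustrative only: subtract the one-time environment startup cost,
    then fit as many tasks as the remaining budget allows (at least 1).
    """
    usable = max(target_seconds - startup_seconds, per_task_seconds)
    return max(1, math.floor(usable / per_task_seconds))

# 5-second tasks, a 10-minute target, and a 2-minute environment
# startup give (600 - 120) / 5 = 96 tasks per chunk, so the startup
# cost is paid once per 96 tasks instead of once per task.
```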
Running tasks together in chunks is now available in all AWS Regions where AWS Deadline Cloud is supported. To get started, visit the Deadline Cloud developer guide.
Amazon EC2 R7a instances are now available in the Asia Pacific (Hyderabad) Region
- Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-ec2-r7a-instances-asia-pacific-hyderabad-regions/
- Published: 2026-02-24
Starting today, memory-optimized Amazon EC2 R7a instances are available in the AWS Asia Pacific (Hyderabad) Region. R7a instances, powered by 4th Gen AMD EPYC processors (code-named Genoa) with a maximum frequency of 3.7 GHz, deliver up to 50% higher performance compared to R6a instances.
These instances can be purchased as Savings Plans, Reserved, On-Demand, and Spot instances. To get started, use the AWS Management Console, AWS Command Line Interface (CLI), or AWS SDKs. To learn more, visit the R7a instances page.
Amazon EC2 C8i and C8i-flex instances are now available in Asia Pacific (Malaysia) and South America (Sao Paulo) regions
- Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-ec2-c8i-c8i-flex-instances-asia-pacific-malaysia-south-america-sao--paulo-regions/
- Published: 2026-02-24
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C8i and C8i-flex instances are available in the Asia Pacific (Malaysia) and South America (Sao Paulo) regions. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. C8i and C8i-flex instances offer up to 15% better price-performance and 2.5x more memory bandwidth compared to previous-generation Intel-based instances. They deliver up to 20% higher performance than C7i and C7i-flex instances, with even higher gains for specific workloads: up to 60% faster for NGINX web applications, up to 40% faster for AI deep learning recommendation models, and 35% faster for Memcached stores compared to C7i and C7i-flex.
C8i-flex instances are the easiest way to get price-performance benefits for a majority of compute-intensive workloads, such as web and application servers, databases, caches, Apache Kafka, Elasticsearch, and enterprise applications. They offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don't fully utilize all compute resources.
C8i instances are a great choice for all compute-intensive workloads, especially those that need the largest instance sizes or continuous high CPU usage. C8i instances offer 13 sizes, including 2 bare metal sizes and the new 96xlarge size for the largest applications.
To get started, sign in to the AWS Management Console. Customers can purchase these instances via Savings Plans, On-Demand instances, and Spot instances. For more information about the new C8i and C8i-flex instances visit the AWS News blog.
AWS Compute Optimizer now applies AWS-generated tags to EBS snapshots created during automation
- Link: https://aws.amazon.com/about-aws/whats-new/2026/02/aws-compute-optimizer-applies-tags-ebs-snapshots/
- Published: 2026-02-24
AWS Compute Optimizer now makes it easier to identify the snapshots it creates when snapshotting and deleting unattached Amazon Elastic Block Store (EBS) volumes, by automatically applying an AWS-generated tag at snapshot creation. This enhancement improves visibility and tracking of EBS snapshots created through Compute Optimizer Automation.
When Compute Optimizer creates a snapshot before deleting an unattached EBS volume—whether initiated through manual actions or automation rules—the snapshot now receives the tag aws:compute-optimizer:automation-event-id with a tag value that links the snapshot to the unique identifier of the automation event that created it. This allows you to easily identify, track, and manage snapshots created through the automated optimization process, helping you maintain better governance over your backup resources and understand the source of snapshots in your environment.
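Since the tag key is fixed, finding these snapshots is a standard EC2 `DescribeSnapshots` tag filter. A small boto3 sketch (the event ID value is a placeholder):

```python
TAG_KEY = "aws:compute-optimizer:automation-event-id"

def tag_filters(event_id=None):
    """DescribeSnapshots filters for Compute Optimizer snapshots.

    With no event ID, match any snapshot carrying the tag; with one,
    match only snapshots from that specific automation event.
    """
    if event_id is None:
        return [{"Name": "tag-key", "Values": [TAG_KEY]}]
    return [{"Name": f"tag:{TAG_KEY}", "Values": [event_id]}]

def find_snapshots(event_id=None):
    # boto3 is imported lazily so tag_filters stays testable offline.
    import boto3
    ec2 = boto3.client("ec2")
    pages = ec2.get_paginator("describe_snapshots").paginate(
        OwnerIds=["self"], Filters=tag_filters(event_id))
    return [s["SnapshotId"] for page in pages for s in page["Snapshots"]]
```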
This is available in all AWS Regions where AWS Compute Optimizer Automation is available. To get started with automated optimization, go to the AWS Compute Optimizer console or visit the user guide documentation.
AWS Observability now available as a Kiro power
- Link: https://aws.amazon.com/about-aws/whats-new/2026/02/aws-observability-kiro-power/
- Published: 2026-02-24
Today, AWS announces AWS Observability as a Kiro power, enabling developers and operators to investigate infrastructure and application health issues faster with AI agent-assisted workflows in Kiro. Kiro Powers is a repository of curated and pre-packaged Model Context Protocol (MCP) servers, steering files, and hooks validated by Kiro partners to accelerate specialized software development and deployment use cases.
The AWS Observability power packages four specialized MCP servers with targeted observability guidance: the CloudWatch MCP server for observability data; the Application Signals MCP server for application performance monitoring; the CloudTrail MCP server for security analysis and compliance; and the AWS Documentation MCP server for contextual reference access. This unified platform gives Kiro agents instant context for comprehensive workflows including alarm response, anomaly detection, distributed tracing, SLO compliance monitoring, and security investigation. Additionally, the power includes automated gap analysis that helps you identify and fix missing instrumentation.
With the AWS Observability power, developers can now accelerate troubleshooting their distributed applications and infrastructure in minutes, directly in their IDE. The power addresses two critical needs: reducing mean time to resolution (MTTR) for active incidents and proactively improving your observability stack. For faster incident response, when investigating an active alarm, the power dynamically loads relevant guidance and operational signals so AI agents receive only the context needed for the specific troubleshooting task at hand. For stack improvement, the automated gap analysis examines your code to identify missing instrumentation patterns—such as unlogged errors, missing correlation IDs, or absent distributed tracing—and provides actionable recommendations. The power includes eight comprehensive steering guides covering incident response, alerting, performance monitoring, security auditing, and gap analysis.
The AWS Observability power is available for one-click installation in the Kiro IDE and on the Kiro Powers webpage in all AWS Regions, with each underlying MCP server functional based on regional support of the corresponding AWS service. To learn more about AWS observability MCP servers, visit our documentation.
Amazon EC2 I7ie instances now available in AWS Africa (Cape Town) region
- Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-ec-i-ie-instances-available-aws-africa/
- Published: 2026-02-24
AWS is announcing that Amazon EC2 I7ie instances are now available in the AWS Africa (Cape Town) Region. Designed for large storage-I/O-intensive workloads, I7ie instances are powered by 5th Gen Intel Xeon processors with an all-core turbo frequency of 3.2 GHz, offering up to 40% better compute performance and 20% better price performance over existing I3en instances. I7ie instances offer up to 120TB of local NVMe storage density for storage-optimized instances and up to twice as many vCPUs and memory compared to prior-generation instances. Powered by 3rd-generation AWS Nitro SSDs, I7ie instances deliver up to 65% better real-time storage performance, up to 50% lower storage I/O latency, and 65% lower storage I/O latency variability compared to I3en instances.
I7ie instances are high-density storage-optimized instances, ideal for workloads that require fast local storage with high random read/write performance and consistently low latency when accessing large data sets. These instances are available in 9 virtual sizes and deliver up to 100Gbps of network bandwidth and 60Gbps of bandwidth for Amazon Elastic Block Store (EBS).
To learn more, visit the I7ie instances page.
Amazon RDS Custom now supports the latest GDR updates for Microsoft SQL Server
- Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-rds-custom-supports-latest-gdr-updates-for-microsoft-sql-server/
- Published: 2026-02-24
Amazon Relational Database Service (Amazon RDS) Custom for SQL Server now supports the latest General Distribution Release (GDR) updates for Microsoft SQL Server. This release includes support for SQL Server 2022 Cumulative Update and KB5072936 (16.00.4230.2.v1).
The GDR updates address vulnerabilities described in CVE-2026-20803. For additional information on the improvements and fixes included in these updates, see Microsoft documentation for KB5072936. You can upgrade your Amazon RDS Custom for SQL Server instances to apply these recommended updates using Amazon RDS Management Console, or by using the AWS SDK or CLI. To learn more about upgrading your database instances, see Amazon RDS Custom User Guide.
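Applying the update via the SDK is a `ModifyDBInstance` call targeting the new engine version. A hedged boto3 sketch (the instance identifier is a placeholder; the version string is the build named in this announcement):

```python
def upgrade_request(instance_id, version="16.00.4230.2.v1"):
    """Request body to move an RDS Custom for SQL Server instance to
    the GDR build named in the announcement."""
    return {
        "DBInstanceIdentifier": instance_id,
        "EngineVersion": version,
        "ApplyImmediately": False,  # apply in the next maintenance window
    }

def apply_upgrade(instance_id):
    # boto3 is imported lazily so upgrade_request stays testable offline.
    import boto3
    rds = boto3.client("rds")
    return rds.modify_db_instance(**upgrade_request(instance_id))
```

Setting `ApplyImmediately=True` instead applies the update right away, at the cost of an immediate restart.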
Amazon Bedrock now supports server-side tool execution with AgentCore Gateway
- Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-bedrock-server-side-tool-execution-agentcore-gateway/
- Published: 2026-02-24
Amazon Bedrock now enables server-side tool execution through Amazon Bedrock AgentCore Gateway integration with the Responses API. Customers can connect their AgentCore Gateway tools to Amazon Bedrock models, enabling server-side tool execution without client-side orchestration.
With this launch, customers can specify an AgentCore Gateway ARN as a tool connector in Responses API requests. Amazon Bedrock automatically discovers available tools from the gateway, presents them to the model during inference, and executes tool calls server-side when the model selects them, all within a single API call. This eliminates the need for customers to build and maintain client-side tool orchestration loops, reducing application complexity and latency for agentic workflows. Customers retain full control over tool access through their existing AgentCore Gateway configurations and AWS IAM permissions.
Server-side tool execution with AgentCore Gateway supports all models available through the Amazon Bedrock Responses API. Customers define tools using the MCP server connector type with their gateway ARN, and Amazon Bedrock handles tool discovery, model-driven tool selection, execution, and result injection automatically. Multiple tool calls within a single conversation turn are supported, and tool results are streamed back to the client in real time.
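Conceptually, the request attaches the gateway as an MCP tool connector. The field names below (`"type": "mcp"`, `"server_arn"`) are assumptions inferred from this announcement, not confirmed API shape; check the Amazon Bedrock Responses API reference before relying on them.

```python
def responses_request(model_id, gateway_arn, user_text):
    """Hypothetical sketch of a Responses API request body with an
    AgentCore Gateway tool connector.

    Field names in the tools entry are assumed, not verified; Bedrock
    would discover the gateway's tools and execute calls server-side.
    """
    return {
        "model": model_id,
        "input": user_text,
        "tools": [{
            "type": "mcp",              # assumed MCP server connector type
            "server_arn": gateway_arn,  # your AgentCore Gateway ARN
        }],
    }
```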
This capability is generally available in all AWS Regions where both Amazon Bedrock's Responses API and Amazon Bedrock AgentCore Gateway are available. To get started, visit the Amazon Bedrock documentation or the Amazon Bedrock console. For more information about Amazon Bedrock AgentCore Gateway, see the AgentCore documentation.
AWS WAF announces AI activity dashboard for visibility into AI bot and agent traffic
- Link: https://aws.amazon.com/about-aws/whats-new/2026/02/aws-waf-ai-activity-dashboard/
- Published: 2026-02-24
Today, AWS WAF announced a new AI activity dashboard that provides centralized visibility into AI bot and agent traffic reaching your applications. With this launch, AWS WAF Bot Control expands its detection coverage to track more than 650 unique bots and agents, offering one of the most comprehensive AI bot detection catalogs available.
AI-powered bots and autonomous agents are rapidly reshaping web traffic patterns. AI search crawlers index content, retrieval-augmented generation (RAG) systems fetch data in real time, and autonomous agents execute multi-step tasks across APIs and web applications. Without clear visibility, this traffic can increase infrastructure costs, affect application performance, and access content in ways that may not align with your organization’s security or business policies.
The AI traffic analysis dashboard provides a centralized view of AI bot and agent traffic across your protected resources. You can visualize AI traffic trends over time, identify the most active bots and frequently accessed paths, analyze request volumes by bot category and verification status, and take action directly using AWS WAF Bot Control rules, such as allowing verified AI search crawlers while rate-limiting or blocking unverified agents.
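Acting on that visibility means adding the Bot Control managed rule group to a web ACL. A sketch of the rule structure for the WAFV2 API, with targeted inspection enabled (rule and metric names are placeholders; category-level allow or rate-limit rules would be layered on top of this):

```python
def bot_control_rule(priority=1):
    """Web ACL rule enabling the AWS Bot Control managed rule group.

    TARGETED inspection enables the fuller detection catalog; rule and
    metric names here are placeholders for your own naming scheme.
    """
    return {
        "Name": "ai-bot-visibility",
        "Priority": priority,
        "Statement": {
            "ManagedRuleGroupStatement": {
                "VendorName": "AWS",
                "Name": "AWSManagedRulesBotControlRuleSet",
                "ManagedRuleGroupConfigs": [{
                    "AWSManagedRulesBotControlRuleSet": {
                        "InspectionLevel": "TARGETED"
                    }
                }],
            }
        },
        # Keep the rule group's own actions; override per-rule as needed.
        "OverrideAction": {"None": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "ai-bot-visibility",
        },
    }
```

This dict slots into the `Rules` list of a `wafv2` `create_web_acl` or `update_web_acl` call.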
AWS WAF Bot Control's detection catalog now covers more than 650 unique bots and agents spanning categories including AI search engine crawlers, AI data collectors, AI assistants, and large language model training crawlers. The catalog is continuously updated, enabling customers to identify newly emerging AI bots as they appear.
For customers on flat-rate pricing plans, the dashboard is included with all paid plans. For WAF customers not subscribed to flat-rate plans, the AI traffic analysis dashboard is available at no additional cost. Refer to WAF pricing for details.
The new dashboard and expanded detection capabilities are available in all AWS Regions where AWS WAF is available.
To get started, visit the AWS WAF console or explore the AWS WAF Bot Control documentation.
Amazon EC2 C7i and C7i-flex instances are now available in the Africa (Cape Town) Region
- Link: https://aws.amazon.com/about-aws/whats-new/2026/02/amazon-ec2-c7i-c7i-flex-instances-africa-cape-town-regions/
- Published: 2026-02-24
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C7i-flex and C7i instances are available in the Africa (Cape Town) Region. These instances are powered by custom 4th Generation Intel Xeon Scalable processors (code-named Sapphire Rapids), available only on AWS, and offer up to 15% better performance over comparable x86-based Intel processors used by other cloud providers.
C7i-flex instances expand the EC2 Flex instances portfolio to provide the easiest way for you to get price-performance benefits for a majority of compute-intensive workloads, and deliver up to 19% better price-performance compared to C6i instances. C7i-flex instances offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don't fully utilize all compute resources. With C7i-flex instances, you can seamlessly run web and application servers, databases, caches, Apache Kafka, Elasticsearch, and more.
C7i instances deliver up to 15% better price-performance versus C6i instances and are a great choice for all compute-intensive workloads, such as batch processing, distributed analytics, ad serving, and video encoding. C7i instances offer larger instance sizes, up to 48xlarge, and two bare metal sizes (metal-24xl, metal-48xl). The bare metal sizes support built-in Intel accelerators: Data Streaming Accelerator, In-Memory Analytics Accelerator, and QuickAssist Technology, which enable efficient offload and acceleration of data operations and optimize performance for these workloads.
To learn more, visit Amazon EC2 C7i Instances. To get started, see the AWS Management Console.
AWS News Blog
Transform live video for mobile audiences with AWS Elemental Inference
- Link: https://aws.amazon.com/blogs/aws/transform-live-video-for-mobile-audiences-with-aws-elemental-inference/
- Published: 2026-02-24
AWS Elemental Inference is a fully managed AI service that automatically transforms live and on-demand video broadcasts into vertical formats optimized for mobile and social platforms in real time, enabling broadcasters to reach audiences on TikTok, Instagram Reels, and YouTube Shorts without manual editing or AI expertise.