Amazon Route 53 Domains now supports registration and management of 34 new top-level domains (TLDs), including .app, .dev, .art, .forum, .health, and .realty. This expansion enhances Route 53's domain registration and DNS management capabilities by offering customers industry-specific, technology-focused, and purpose-driven domain name options directly through AWS, enabling businesses and individuals to better establish their online presence.
The new TLDs cater to diverse use cases across multiple sectors. The .app domain is perfect for anyone building digital products — from mobile apps and SaaS platforms to browser extensions and developer tools. Developers can utilize .dev for development projects and technical portfolios, while .art serves creative professionals and galleries. The .forum domain suits community platforms and discussion boards. Healthcare organizations can leverage .health for medical services and wellness platforms. Real estate professionals can establish their presence with .realty domains. Additional domains like .food, .lifestyle, .living, and .love provide opportunities for specialized content and services.
Users can register these domains through the Route 53 console, AWS CLI, or SDKs, enjoying integrated DNS management and automatic renewal features. This seamless integration allows for efficient domain administration alongside existing Route 53 hosted zones and DNS records, providing a unified experience for managing both domain registration and DNS services. Additionally, developers building AI-powered workflows can leverage the AWS Agent Toolkit to register and manage these domains programmatically through a fully managed MCP server.
Complete list of new TLDs: .app, .art, .bar, .boo, .build, .dad, .day, .dev, .diy, .earth, .esq, .fit, .foo, .food, .forum, .health, .how, .lifestyle, .living, .love, .menu, .mov, .my, .nexus, .one, .page, .phd, .prof, .realty, .rest, .rsvp, .soy, .win, .zip
To learn more about Amazon Route 53 Domains and start registering new domains, visit the Amazon Route 53 Domains page. Domain registration pricing varies by TLD. Visit the pricing page for detailed pricing information.
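As an illustrative sketch (not part of the announcement), a small helper can verify that a candidate domain uses one of the 34 newly supported TLDs before you attempt registration; the registration itself goes through the Route 53 Domains API, for example via boto3's `route53domains` client. The helper name below is hypothetical.

```python
# Hypothetical helper: check whether a domain name ends in one of the
# 34 TLDs newly supported by Route 53 Domains (list from the announcement).
NEW_TLDS = {
    "app", "art", "bar", "boo", "build", "dad", "day", "dev", "diy",
    "earth", "esq", "fit", "foo", "food", "forum", "health", "how",
    "lifestyle", "living", "love", "menu", "mov", "my", "nexus", "one",
    "page", "phd", "prof", "realty", "rest", "rsvp", "soy", "win", "zip",
}

def uses_new_tld(domain: str) -> bool:
    """Return True if the domain's TLD is one of the newly added TLDs."""
    tld = domain.rsplit(".", 1)[-1].lower()
    return tld in NEW_TLDS

# Registration itself would use the Route 53 Domains API, e.g.:
#   boto3.client("route53domains").register_domain(DomainName="example.dev", ...)
# (contact details and the other required parameters are omitted here)
```

Checking availability first with the `CheckDomainAvailability` API is also a common step before calling `RegisterDomain`.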
Amazon SageMaker Unified Studio now helps you get productive faster with getting started tutorials, a development environment theme that automatically adapts to your system appearance preference, and in-product release notes that help you discover new capabilities.
On the homepage, a new getting started section helps you get productive in minutes by walking through core workflows such as running your first SQL query, analyzing data from a notebook, building a data pipeline with Visual ETL, and training an ML model. Each tutorial uses pre-loaded sample data and can be completed in under 10 minutes. The development environment now also defaults to match your operating system’s light or dark mode setting, so the interface matches your preference from your first sign-in. A new “What’s New” section surfaces recent feature announcements and release notes directly in the product, so you can stay informed about new capabilities as they launch. In 2026 alone, SageMaker Unified Studio has added over 20 new features, which you can also find in the release notes.
These enhancements are available in all AWS Regions where Amazon SageMaker Unified Studio is supported in IAM-based domains. Sign in to SageMaker Unified Studio to explore what’s new, or start with the getting started tutorials in the Amazon SageMaker Unified Studio User Guide.
We are pleased to announce general availability of Amazon EC2 P5.4xl instances on SageMaker Studio notebooks.
Amazon EC2 P5.4xl instances are powered by NVIDIA H100 Tensor Core GPUs and deliver high performance in Amazon EC2 for deep learning (DL) and high performance computing (HPC) applications. They help you accelerate your time to solution by up to 4x compared to previous-generation GPU-based EC2 instances, and reduce cost to train ML models by up to 40%. Customers can use P5 instances for training and deploying complex large language models (LLMs) and diffusion models powering generative AI applications. These applications include question answering, code generation, video and image generation, and speech recognition.
Amazon EC2 P5.4xl instances are available for SageMaker Studio notebooks in the AWS US East (N. Virginia and Ohio), US West (Oregon), Asia Pacific (Mumbai, Tokyo, Jakarta) and South America (São Paulo) regions.
Visit the developer guides for instructions on setting up and using the JupyterLab and Code Editor applications on SageMaker Studio. For pricing information on these instances, please visit our pricing page.
AWS HealthOmics now supports caching completed task outputs of cancelled runs, enabling customers to reuse outputs and avoid recomputing previously completed tasks. When caching is enabled and a run is cancelled, HealthOmics automatically stores completed task outputs in the customer’s S3 bucket, allowing customers to restart runs from the point of cancellation. AWS HealthOmics is a HIPAA-eligible service that helps healthcare and life sciences customers accelerate scientific breakthroughs at scale with fully managed bioinformatics workflows.
Caching of cancelled runs helps researchers, bioinformaticians, and workflow developers debug and iteratively develop workflows efficiently by storing intermediate files and completed task outputs for inspection. This saves customers the cost of recomputing completed tasks that may have taken hours and accelerates subsequent runs by executing only the remaining incomplete tasks.
Caching cancelled runs is now available for Nextflow, WDL, and CWL runs in all AWS HealthOmics regions: US East (N. Virginia), US West (Oregon), Europe (Frankfurt, Ireland, London), Israel (Tel Aviv), and Asia Pacific (Singapore, Seoul). To learn more, visit the workflow cache documentation.
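The flow above can be sketched with boto3, assuming a workflow ID, IAM role, and S3 URIs of your own (all placeholders); parameter names follow the HealthOmics `CreateRunCache` and `StartRun` APIs.

```python
def start_run_with_cache(workflow_id: str, role_arn: str,
                         cache_s3_uri: str, output_s3_uri: str) -> str:
    """Start a HealthOmics run with a run cache attached, so completed
    task outputs are stored in your S3 bucket and can be reused if the
    run is cancelled. Sketch only: IDs, role, and URIs are placeholders."""
    import boto3  # imported here so the sketch stays importable without the SDK
    omics = boto3.client("omics")

    # Create (or reuse) a run cache backed by your own S3 bucket.
    cache = omics.create_run_cache(
        name="dev-iteration-cache",
        cacheS3Location=cache_s3_uri,      # e.g. "s3://my-bucket/run-cache/"
        cacheBehavior="CACHE_ON_FAILURE",  # per this launch, completed task
                                           # outputs are kept on cancellation too
    )

    # Start the run with the cache attached; if this run is cancelled,
    # a later run using the same cache re-executes only incomplete tasks.
    run = omics.start_run(
        workflowId=workflow_id,
        roleArn=role_arn,
        outputUri=output_s3_uri,
        cacheId=cache["id"],
    )
    return run["id"]
```

A restarted run would simply call `start_run` again with the same `cacheId`.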
Amazon Aurora DSQL single-Region clusters are now available in Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Singapore), Europe (Stockholm), and South America (Sao Paulo). Aurora DSQL is the fastest serverless, distributed SQL database that enables you to build always available applications with virtually unlimited scalability, the highest availability, and zero infrastructure management. It is designed to make scaling and resilience effortless for your applications and offers the fastest distributed SQL reads and writes.
With this launch, Aurora DSQL is available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Canada (Central), Canada West (Calgary), Asia Pacific (Hong Kong), Asia Pacific (Melbourne), Asia Pacific (Mumbai), Asia Pacific (Osaka), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), and South America (Sao Paulo).
Get started with Aurora DSQL for free with the AWS Free Tier. To learn more, visit the Aurora DSQL webpage and documentation.
We are pleased to announce general availability of Amazon EC2 G6e instances in the Middle East (Dubai), Asia Pacific (Tokyo, Seoul) and Europe (Frankfurt, Stockholm, Spain) on SageMaker Studio notebooks.
Amazon EC2 G6e instances are powered by up to 8 NVIDIA L40S Tensor Core GPUs with 48 GB of memory per GPU and third-generation AMD EPYC processors. G6e instances deliver up to 2.5x better performance compared to EC2 G5 instances. Customers can use G6e instances to interactively test model deployment and for interactive model training use cases such as generative AI fine-tuning. You can use G6e instances to deploy large language models (LLMs) with up to 13B parameters and diffusion models for generating images, video, and audio.
Visit the developer guides for instructions on setting up and using the JupyterLab and Code Editor applications on SageMaker Studio. For pricing information on these instances, please visit our pricing page.
We are pleased to announce general availability of Amazon EC2 G6 instances in the Middle East (Dubai) and Asia Pacific (Malaysia) on SageMaker Studio notebooks.
Amazon EC2 G6 instances are powered by up to 8 NVIDIA L4 Tensor Core GPUs with 24 GB of memory per GPU and third-generation AMD EPYC processors. G6 instances offer 2x better performance for deep learning inference compared to EC2 G4dn instances. Customers can use G6 instances to interactively test model deployment and for interactive model training for use cases such as generative AI fine-tuning and inference workloads, natural language processing, language translation, computer vision, and recommender engines.
Visit the developer guides for instructions on setting up and using the JupyterLab and Code Editor applications on SageMaker Studio. For pricing information on these instances, please visit our pricing page.
We are pleased to announce general availability of Amazon EC2 P4de instances in Asia Pacific (Tokyo, Singapore) and Europe (Frankfurt) on SageMaker Studio notebooks.
Amazon EC2 P4de instances are powered by 8 NVIDIA A100 GPUs with 80 GB of high-performance HBM2e memory per GPU, twice that of the GPUs in current P4d instances. P4de instances provide a total of 640 GB of GPU memory, which delivers up to 60% better ML training performance along with up to 20% lower training cost compared to P4d instances. The improved performance allows customers to reduce model training times and accelerate time to market, and the increased GPU memory also benefits workloads that train on large datasets of high-resolution data.
Visit the developer guides for instructions on setting up and using the JupyterLab and Code Editor applications on SageMaker Studio. For pricing information on these instances, please visit our pricing page.
Elastic Network Adapter (ENA) Express now supports traffic between Amazon EC2 instances in different Availability Zones within a Region, delivering up to 25 Gbps single-flow bandwidth. ENA Express is a networking feature that uses the AWS Scalable Reliable Datagram (SRD) protocol to improve network performance. SRD is a reliable network protocol that delivers performance improvements through advanced congestion control and multi-pathing. Amazon Elastic Block Store (EBS) io2 Block Express and Elastic Fabric Adapter (EFA) for high performance computing and machine learning workloads also leverage SRD.
Workloads such as distributed storage, databases, and file systems require deployments spanning multiple Availability Zones for resilience, yet a single flow between zones is limited to 5 Gbps with standard ENA. ENA Express delivers up to 25 Gbps of single-flow bandwidth for traffic between Availability Zones. To achieve this, ENA Express detects compatibility between your EC2 instances and establishes an SRD connection when both communicating instances have ENA Express enabled. Once established, SRD uses multi-pathing to route your traffic across the network and avoids head-of-line blocking because it does not require packets to arrive in order. With these capabilities, ENA Express delivers its performance benefits transparently to your application over both TCP and UDP.
ENA Express for connections between Availability Zones within a Region is available for all supported instance types and sizes in Africa (Cape Town), Asia Pacific (Hong Kong, Hyderabad, Jakarta, Malaysia, Melbourne, Mumbai, New Zealand, Osaka, Seoul, Singapore, Sydney, Taipei, Thailand, Tokyo), Canada (Central), Canada West (Calgary), Europe (Frankfurt, Ireland, London, Milan, Paris, Spain, Stockholm, Zurich), Israel (Tel Aviv), Mexico (Central), US East (N. Virginia, Ohio), US West (N. California, Oregon), and AWS GovCloud (US) Regions. ENA Express comes at no additional cost. For a list of supported instances and configuration guidance, please review the latest EC2 documentation.
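As a minimal sketch (assuming an existing ENI ID of your own), ENA Express is toggled per network interface through the EC2 `ModifyNetworkInterfaceAttribute` API; both communicating instances need it enabled for an SRD connection to be established.

```python
def enable_ena_express(eni_id: str, enable_udp: bool = True) -> None:
    """Enable ENA Express (SRD) on a network interface.
    Sketch only: the ENI ID is a placeholder for your own interface,
    which must belong to a supported instance type."""
    import boto3  # imported here so the sketch stays importable without the SDK
    ec2 = boto3.client("ec2")
    ec2.modify_network_interface_attribute(
        NetworkInterfaceId=eni_id,
        EnaSrdSpecification={
            "EnaSrdEnabled": True,
            # UDP over SRD is opt-in on top of ENA Express itself;
            # TCP benefits as soon as EnaSrdEnabled is True on both ends.
            "EnaSrdUdpSpecification": {"EnaSrdUdpEnabled": enable_udp},
        },
    )
```

Running this against the primary ENI of each instance in a pair (for example, `enable_ena_express("eni-0123456789abcdef0")`) is all that is needed; no application changes are required.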
We are pleased to announce general availability of Amazon EC2 P6-B200 instances in AWS US East (N. Virginia) on SageMaker Studio notebooks.
Amazon EC2 P6-B200 instances are powered by 8 NVIDIA Blackwell GPUs with 1440 GB of high-bandwidth GPU memory and 5th Generation Intel Xeon processors (Emerald Rapids). These instances deliver up to 2x better performance compared to P5en instances for AI training. Customers can use P6-B200 instances to interactively develop and fine-tune large foundation models, including LLMs, mixture of experts models, and multi-modal reasoning models. These instances enable efficient experimentation with larger models directly in JupyterLab or Code Editor environments for generative AI applications such as enterprise copilots and content generation across text, images, and video.
Visit the developer guides for instructions on setting up and using the JupyterLab and Code Editor applications on SageMaker Studio. For pricing information on these instances, please visit our pricing page.
We are pleased to announce general availability of Amazon EC2 G6 instances in Asia Pacific (Tokyo, Mumbai, Sydney) and Europe (London, Paris, Frankfurt, Stockholm, Zurich) on SageMaker notebook instances.
Amazon EC2 G6 instances are powered by up to 8 NVIDIA L4 Tensor Core GPUs with 24 GB of memory per GPU and third-generation AMD EPYC processors. G6 instances offer 2x better performance for deep learning inference compared to EC2 G4dn instances. Customers can use G6 instances to interactively test model deployment and for interactive model training for use cases such as generative AI fine-tuning and inference workloads, natural language processing, language translation, computer vision, and recommender engines.
Visit the developer guides for instructions on setting up and using the JupyterLab and Code Editor applications on SageMaker Studio and SageMaker notebook instances.
We are pleased to announce general availability of Amazon EC2 P5.48xl instances in the AWS US West (N. California), Asia Pacific (Tokyo, Mumbai, Sydney, Jakarta) and Europe (London, Stockholm) regions on SageMaker Studio notebooks.
Amazon EC2 P5.48xl instances are powered by NVIDIA H100 Tensor Core GPUs and deliver high performance in Amazon EC2 for deep learning (DL) and high performance computing (HPC) applications. They help you accelerate your time to solution by up to 4x compared to previous-generation GPU-based EC2 instances, and reduce cost to train ML models by up to 40%. Customers can use P5 instances for training and deploying complex large language models (LLMs) and diffusion models powering generative AI applications. These applications include question answering, code generation, video and image generation, and speech recognition.
Visit the developer guides for instructions on setting up and using the JupyterLab and Code Editor applications on SageMaker Studio. For pricing information on these instances, please visit our pricing page.
My most exciting news of last week: Amazon Bedrock AgentCore previewed the first managed payment capabilities enabling AI agents to autonomously access and pay for APIs, MCP servers, web content, and other agents. Built in partnership with Coinbase and Stripe, it removes the undifferentiated heavy lifting of building customized systems for billing, credential management, and […]
Hitachi Industrial Control Solutions, Ltd. (株式会社日立産業制御ソリューションズ) took part in the 11-company joint "AI-DLC Unicorn Gym" and, using an AI-driven development lifecycle (AI-DLC) with Kiro, built an IT asset and security data integration platform, a job conventionally estimated at three person-months (about 530 hours), in just two days (about 70 hours). In this contributed post, the engineers involved share what it was like to drive every phase, from requirements definition through design, implementation, and deployment, with AI in the lead, and how the engineer's role is changing.
This post was co-authored by Act Node Inc. (株式会社アクト・ノード) and Amazon Web Services Japan G.K. […]
For me, the most exciting news of the week of May 4, 2026 was Amazon Bedrock […]
If you’re looking to strengthen your organization’s security posture on Amazon Web Services (AWS) but aren’t sure where to start, then we’re here to help. Security Activation Days are complimentary, virtual, hands-on workshops designed to help you get practical experience with AWS security services in a single session. What to expect: each Security Activation Day […]
Organizations face critical architectural decisions that can impact their operations for years to come, such as: is it better to maintain a single organization or implement multiple organizations? In this post, I explain the key advantages and disadvantages of both approaches and the scenarios where each model fits best.
In this post, we show you how to build a hybrid multi-tenant architecture that provides strong tenant isolation without requiring per-tenant AWS accounts. You learn how to configure Route 53 weighted routing to distribute traffic across multiple accounts, deploy Application Load Balancer listener rules for tenant-specific routing, create dedicated ECS clusters per tenant, and establish AWS PrivateLink connectivity to shared dependencies.
Amazon Quick helps turn your large enterprise data into fast and accurate AI-powered decisions. In this post, you will learn about five new capabilities of Amazon Quick that accelerate how data professionals deliver trusted AI-powered insights at enterprise scale.
In this post, we dive deep into the architecture and techniques we used to improve Miro’s bug routing, achieving six times fewer team reassignments and five times shorter time-to-resolution, powered by Amazon Bedrock.
In this post, we build a multimodal retrieval system for aerospace manufacturing documents using Amazon Nova Multimodal Embeddings on Amazon Bedrock and Amazon S3 Vectors. We evaluate the system on 26 manufacturing queries and compare generation quality between a text-only pipeline and the multimodal pipeline.
Today, we're excited to announce the general availability of Claude Platform on AWS. Claude Platform on AWS is a new service that gives customers direct access to Anthropic's native Claude Platform experience through their AWS account, with no separate credentials, contracts, or billing relationships required. AWS is the first cloud provider to offer access to the native Claude Platform experience. In this post, we explore how Claude Platform on AWS works and how you can start using it today.
In this post, you will learn how to set up the Exa integration in Strands Agents, understand the two core tools it exposes, and walk through real-world use cases that show how agents use web search to complete multi-step tasks.