AWS re:Invent 2020
See below for a summary of each new service announced at re:Invent 2020 and its key customer benefit.
Red Hat OpenShift Service on AWS

What is it?
Red Hat OpenShift Service on AWS provides an integrated experience to use OpenShift. If you are already familiar with OpenShift, you can accelerate your application development process by leveraging familiar OpenShift APIs and tools for deployments on AWS. With Red Hat OpenShift Service on AWS, you can use the wide range of AWS compute, database, analytics, machine learning, networking, mobile, and other services to build secure and scalable applications faster. Red Hat OpenShift Service on AWS comes with pay-as-you-go hourly and annual billing, a 99.95% SLA, and joint support from AWS and Red Hat.
Red Hat OpenShift Service on AWS makes it easier for you to focus on deploying applications and accelerating innovation by moving cluster lifecycle management to Red Hat and AWS. With Red Hat OpenShift Service on AWS, you can run containerized applications with your existing OpenShift workflows and reduce the complexity of management.

Availability:
ROSA is in limited preview at this time. Customers can register interest at: https://pages.awscloud.com/ROSA_Preview.html

Customer Benefits:
Clear path to running in the cloud: Red Hat OpenShift Service on AWS delivers the production-ready OpenShift that many enterprises already use on-premises today, simplifying the ability to shift workloads to the AWS public cloud as business needs change.
Deliver high-quality applications faster: Remove barriers to development and build high-quality applications faster with self-service provisioning, automatic security enforcement, and consistent deployment. Accelerate change iterations with automated development pipelines, templates, and performance monitoring.
Flexible, cost-efficient pricing: Scale per your business needs and pay as you go with flexible pricing with an on-demand hourly or annual billing model.

Resources: Website

Amazon Elastic Container Registry (ECR) Public

What is it?
Amazon Elastic Container Registry (ECR) is a fully managed container registry that makes it easy to store, manage, share, and deploy your container images and artifacts anywhere. Amazon ECR eliminates the need to operate your own container repositories or worry about scaling the underlying infrastructure. Amazon ECR hosts your images in a highly available and high-performance architecture, allowing you to reliably deploy images for your container applications. You can share container software privately within your organization or publicly worldwide for anyone to discover and download.

Availability:
ECR is available for use globally; see details on the AWS Regions Table.

Use Cases:
NEW: Public container image and artifact gallery: You can discover and use container software that vendors, open source projects, and community developers share publicly in the Amazon ECR public gallery. Popular base images such as operating systems, AWS-published images, Kubernetes add-ons, and files such as Helm charts can be found in the gallery.
Team and public collaboration: Amazon ECR supports the ability to define and organize repositories in your registry using namespaces. This allows you to organize your repositories based on your team’s existing workflows. You can set which API actions another user may perform on your repository (e.g., create, list, describe, delete, and get) through resource-level policies, allowing you to easily share your repositories with different users and AWS accounts, or publicly with anyone in the world.

Customer Benefits:
Reduce your effort with a fully managed registry: Amazon Elastic Container Registry eliminates the need to operate and scale the infrastructure required to power your container registry. There is no software to install and manage or infrastructure to scale. Just push your container images to Amazon ECR and pull the images using any container management tool when you need to deploy.
Securely share and download container images: Amazon Elastic Container Registry transfers your container images over HTTPS and automatically encrypts your images at rest. You can configure policies to manage permissions and control access to your images using AWS Identity and Access Management (IAM) users and roles without having to manage credentials directly on your EC2 instances.
Provide fast and highly available access: Amazon Elastic Container Registry has a highly scalable, redundant, and durable architecture. Your container images are highly available and accessible, allowing you to reliably deploy new containers for your applications. You can reliably distribute public container images as well as related files such as Helm charts and policy configurations for use by any developer. ECR automatically replicates container software to multiple AWS Regions to reduce download times and improve availability.

Resources: Website
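The resource-level repository policies mentioned above are plain JSON documents. As a minimal sketch (the account ID is hypothetical, and the three ECR actions shown are the read-only subset a client needs to pull an image), a cross-account pull policy could be assembled like this:

```python
import json

# Hypothetical trusted account ID, used only for illustration.
TRUSTED_ACCOUNT = "123456789012"

def cross_account_pull_policy(account_id: str) -> str:
    """Build an ECR repository policy document that allows another
    AWS account to pull images from this repository."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowCrossAccountPull",
                "Effect": "Allow",
                # The root principal delegates access to IAM users/roles
                # in the trusted account.
                "Principal": {"AWS": f"arn:aws:iam::{account_id}:root"},
                "Action": [
                    "ecr:GetDownloadUrlForLayer",
                    "ecr:BatchGetImage",
                    "ecr:BatchCheckLayerAvailability",
                ],
            }
        ],
    }
    return json.dumps(policy, indent=2)

print(cross_account_pull_policy(TRUSTED_ACCOUNT))
```

The resulting document would then be attached to a repository via the console, CLI, or SDK; that attachment step is outside this sketch.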
Amazon Elastic Container Service (ECS) Anywhere

What is it?
Amazon Elastic Container Service (ECS) Anywhere is a capability in Amazon ECS that enables customers to easily run and manage container-based applications on-premises, including on virtual machines (VMs), bare metal servers, and other customer-managed infrastructure.
With this announcement, customers will now be able to use ECS on any compute infrastructure, whether in AWS Regions, AWS Local Zones, AWS Wavelength, AWS Outposts, or in any on-premises environment, without installing or operating container orchestration software.

Availability:
Amazon ECS Anywhere is planned to be available in all standard regions where Amazon ECS is available.

Use Cases:
Use ECS as a common tool to deploy “anywhere”: ECS Anywhere offers customers a single container orchestration platform for a consistent tooling and deployment experience across AWS and on-premises environments, now including customer-managed infrastructure. With ECS Anywhere, you get the same powerful simplicity of the ECS API, cluster management, monitoring, and tooling for containers running anywhere.
Run containers on customer-managed infrastructure to meet specific requirements: ECS Anywhere enables customers to run workloads on-premises on their own infrastructure for reasons such as regulatory, latency, security, and data residency requirements.
Leverage the simplicity of ECS while making use of existing capital investments: ECS Anywhere allows customers to utilize their on-premises investments as needed to run containerized applications. Additionally, some customers are looking to use their on-premises infrastructure as base capacity while bursting into AWS during peaks or as their business grows. Over time, as they retire their on-premises hardware, they would continue to move the dial to use more compute on AWS until they have fully migrated.

Customer Benefits:
Fully managed cloud-based control plane: No need to run, update, or maintain container orchestrators on-premises.
Consistent tooling and governance: Use the same tools and APIs for all container-based applications regardless of operating environment.
Manage your hybrid footprint: Run applications in on-premises environments and easily expand to the cloud when you're ready.

Resources: Website

Amazon EKS Anywhere

What is it?
Amazon EKS Anywhere is a new deployment option for Amazon EKS that enables you to easily create and operate Kubernetes clusters on-premises, including on your own virtual machines (VMs) and bare metal servers. EKS Anywhere provides an installable software package for creating and operating Kubernetes clusters on-premises, and automation tooling for cluster lifecycle support.
EKS Anywhere creates clusters based on Amazon EKS Distro, the same Kubernetes distribution used by EKS for clusters on AWS. EKS Anywhere enables you to automate cluster management, reduce support costs, and eliminate the redundant effort of using multiple tools for operating Kubernetes clusters. EKS Anywhere is fully supported by AWS. In addition, you can leverage the EKS console to view all your Kubernetes clusters, running anywhere.

Availability:
As an on-premises offering, EKS Anywhere can run anywhere.

Use Cases:
Train models in the cloud and run inference on-premises: With EKS Anywhere, you can combine the best of both worlds: train your ML model in the cloud using AWS managed services, and use the trained ML model in your on-premises setup.
Workload migration (on-premises to cloud): With EKS Anywhere, you can have the same EKS tooling on-premises, and this consistency provides a quicker on-ramp of your Kubernetes-based workloads to the cloud and increases operational efficiency.
Application modernization: EKS Anywhere empowers you to finally address the modernization of your applications, removing the heavy lifting of keeping up with upstream Kubernetes and security patches, so you can focus on business value.
Data sovereignty: Some large data sets cannot, or will not soon, leave the data center due to legal requirements concerning the location of the data. EKS Anywhere helps move the stateless part of the application to the cloud while keeping data in place.
Bursting: Seasonal workloads can require far more compute (5x to 10x the baseline) for days or weeks, and being able to burst into the cloud provides this temporary capacity. With EKS Anywhere you can manage your workloads across on-premises and the cloud consistently and cost-effectively.

Customer Benefits:
Simplify and automate Kubernetes management: EKS Anywhere provides you with consistent Kubernetes management tooling optimized to simplify cluster installation with default configurations for OS, container registry, logging, monitoring, networking, and storage.
Create consistent clusters: Amazon EKS Anywhere uses EKS Distro, the same Kubernetes distribution deployed by Amazon EKS, allowing you to easily create clusters consistent with Amazon EKS best practices. EKS Anywhere eliminates the fragmented collection of vendor support agreements and tools required to install and operate Kubernetes clusters on-premises.
Deliver a more reliable Kubernetes environment: EKS Anywhere gives you a Kubernetes environment on-premises that is easier to support. EKS Anywhere helps you integrate Kubernetes with existing infrastructure, keep open source software up to date and patched, and maintain business continuity with cluster backups and recovery.

Resources: Website
AWS Proton

What is it?
AWS Proton is the first fully managed application deployment service for container and serverless applications. Platform teams can use Proton to connect and coordinate all the different tools needed for infrastructure provisioning, code deployments, monitoring, and updates.
Proton enables platform teams to give developers an easy way to deploy their code using containers and serverless technologies, using the management tools, governance, and visibility needed to ensure consistent standards and best practices.

Availability:
During preview: us-east-1, us-east-2, us-west-2, ap-northeast-1, and eu-west-1. Global region availability planned for GA.

Use Cases:
Streamlined management: Platform teams use AWS Proton to manage and enforce a consistent set of standards for compute, networking, continuous integration/continuous delivery (CI/CD), and security and monitoring in modern container and serverless environments. With Proton, you can see what was deployed and who deployed it. You can automate in-place infrastructure updates when you update your templates.
Managed developer self-service: AWS Proton enables platform teams to offer a curated self-service interface for developers, using the familiar experience of the AWS Management Console or AWS Command Line Interface (AWS CLI). Using approved stacks, authorized developers in your organization are able to use Proton to create and deploy a new production infrastructure service for their container and serverless applications.
Infrastructure as code (IaC) adoption: AWS Proton uses infrastructure as code (IaC) to define application stacks and configure resources. It integrates with popular AWS and third-party CI/CD and observability tools, offering a flexible approach to application management. Proton makes it easy to provide your developers with a curated set of building blocks they can use to accelerate the pace of business innovation.

Customer Benefits:
Set guardrails: AWS Proton enables your developers to safely adopt and deploy applications using approved stacks that you manage. It delivers the right balance of control and flexibility to ensure developers can continue rapid innovation.
Increase developer productivity: AWS Proton lets you adopt new technologies without slowing your developers down. It gives them infrastructure provisioning and code deployment in a single interface, allowing developers to focus on their code.
Enforce best practices: When you adopt a new feature or best practice, AWS Proton helps you update out-of-date applications with a single click. With Proton, you can ensure consistent architecture across your organization.

AWS Lambda Container Image Support & 1ms Billing Granularity

What is it?
AWS Lambda supports packaging and deploying functions as container images, making it easy for customers to build Lambda-based applications by using familiar container image tooling, workflows, and dependencies. Customers also benefit from the operational simplicity, automatic scaling with sub-second startup times, high availability, native integrations with 140 AWS services, and pay-for-use model offered by AWS Lambda. Enterprise customers can use a consistent set of tools with both their Lambda and containerized applications for central governance requirements such as security scanning and image signing. Customers can create their container deployment images by starting with either AWS Lambda provided base images or by using one of their preferred community or private enterprise images.

Availability:
Container Image Support for AWS Lambda and 1ms billing granularity for AWS Lambda are available in all regions where AWS Lambda is available, except for regions in China.

Use Cases:
Build cross-platform applications with both containers and AWS Lambda.
Large applications, or applications relying on large dependencies, such as machine learning, analytics, or data-intensive apps.
Customers who want to run serverless applications but have standardized on container tooling within their organizations.

Customer Benefits:
Leverage familiar container tooling and workflows: Leverage the flexibility and familiarity of container tooling, and the agility and operational simplicity of AWS Lambda, to be more agile when building applications.
Get the flexibility of containers and agility of AWS Lambda: When invoked, functions deployed as container images are executed as-is, with sub-second automatic scaling. You benefit from high availability, only pay for what you use, and can take advantage of 140 native service integrations.
Build and deploy large workloads to AWS Lambda: With container images of up to 10GB, you can easily build and deploy larger workloads that rely on sizable dependencies, such as machine learning or data-intensive workloads.
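To see what 1ms billing granularity means in practice, the sketch below rounds an invocation's duration up to the billing granularity and prices it per GB-second. The duration and the per-GB-second rate are illustrative numbers chosen for the example, not a quote of current Lambda pricing:

```python
import math

# Illustrative on-demand rate per GB-second; check current pricing pages
# for real numbers.
PRICE_PER_GB_SECOND = 0.0000166667

def billed_ms(duration_ms: float, granularity_ms: int) -> int:
    """Round a function's run time up to the billing granularity."""
    return math.ceil(duration_ms / granularity_ms) * granularity_ms

def cost(duration_ms: float, memory_gb: float, granularity_ms: int) -> float:
    """Charge = (memory * billed seconds) * rate."""
    gb_seconds = memory_gb * billed_ms(duration_ms, granularity_ms) / 1000
    return gb_seconds * PRICE_PER_GB_SECOND

# A 28 ms invocation at 1 GB: the old 100 ms rounding bills 100 ms,
# while 1 ms rounding bills 28 ms.
old = cost(28, 1.0, 100)
new = cost(28, 1.0, 1)
print(round(1 - new / old, 2))  # → 0.72
```

Short, frequent invocations benefit the most from the finer granularity; a function that always runs close to a multiple of 100 ms sees little change.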
Amazon EKS Add-ons

What is it?
Amazon Elastic Kubernetes Service (Amazon EKS) gives you the flexibility to start, run, and scale Kubernetes applications in the AWS cloud or on-premises. Amazon EKS helps you provide highly available and secure clusters and automates key tasks such as patching, node provisioning, and updates.
NEW! Add-ons – Add-ons are common operational software which extend the operational functionality of Kubernetes. You can use EKS to install and keep this software up to date. When you start an Amazon EKS cluster, you can select the add-ons that you would like to run in the cluster, including Kubernetes tools for observability, networking, autoscaling, and AWS service integrations.

Availability:
Amazon EKS is generally available in all AWS public regions as of November 2020. Support in the new Osaka region is coming soon.

Use Cases:
Hybrid Deployments
Web Applications
Big Data
Machine Learning
Batch Processing

Customer Benefits:
NEW! Service Integrations – AWS Controllers for Kubernetes (ACK) lets you directly manage AWS services from Kubernetes. ACK makes it simple to build scalable and highly available Kubernetes applications that utilize AWS services.
NEW! Integrated Kubernetes Console – EKS provides an integrated console for Kubernetes clusters. Cluster operators and application developers can use EKS as a single place to organize, visualize, and troubleshoot their Kubernetes applications running on Amazon EKS. The EKS console is hosted by AWS and is available automatically for all EKS clusters.
NEW! Add-ons – Add-ons are common operational software which extend the operational functionality of Kubernetes. You can use EKS to install and keep this software up to date. When you start an Amazon EKS cluster, you can select the add-ons that you would like to run in the cluster, including Kubernetes tools for observability, networking, autoscaling, and AWS service integrations.

Resources: Website | What’s new post

Amazon EKS Distro

What is it?
Amazon EKS Distro is a Kubernetes distribution used by Amazon EKS to help create reliable and secure clusters. EKS Distro includes binaries and containers of open source Kubernetes, etcd (the cluster configuration database), networking, and storage plugins, all tested for compatibility. You can deploy EKS Distro wherever your applications need to run.
You can deploy clusters and let AWS take care of testing and tracking Kubernetes updates, dependencies, and patches. Each EKS Distro release verifies new Kubernetes versions for compatibility. The source code, open source tools, and settings are provided for reproducible builds. EKS Distro will provide extended support for Kubernetes, with builds of previous versions updated with the latest security patches. EKS Distro is available as open source on GitHub.

Availability:
Amazon EKS Distro is open source software that can be run anywhere.

Customer Benefits:
Get consistent Kubernetes builds: EKS Distro provides the same installable builds and code of open source Kubernetes that are used by Amazon EKS. You can perform reproducible builds with the provided source code, tooling, and documentation.
Run Kubernetes on any infrastructure: You can deploy EKS Distro on your own self-provisioned hardware infrastructure, including bare-metal servers or VMware vSphere virtual machines, or on Amazon EC2 instances.
Have a more reliable and secure distribution: EKS Distro will provide extended support for Kubernetes versions in alignment with the Amazon EKS Version Lifecycle Policy, by updating builds of previous versions with the latest critical security patches.

Resources: Website | What’s new post
Amazon Managed Workflows for Apache Airflow

What is it?
Amazon Managed Workflows is a managed orchestration service for Apache Airflow that makes it easy to set up and operate end-to-end data pipelines in the cloud at scale. Apache Airflow is an open-source tool used to programmatically author, schedule, and monitor sequences of processes and tasks referred to as “workflows.” With Managed Workflows you can use the same open source Airflow platform and Python language to create workflows without having to manage the underlying infrastructure for scalability, availability, and security. Managed Workflows automatically scales its workflow execution capacity up and down to meet your needs, and is integrated with AWS security services to enable fast and secure access to data.

Availability:
us-east-1 (N. Virginia), us-east-2 (Ohio), us-west-2 (Oregon), eu-north-1 (Stockholm), eu-west-1 (Ireland), eu-central-1 (Frankfurt), ap-southeast-2 (Sydney), ap-northeast-1 (Tokyo), and ap-southeast-1 (Singapore)

Use Cases:
Enable complex workflows: Big data platforms often need complicated data pipelines that connect many internal and external services. To use this data, customers need to first build a workflow that defines the series of sequential tasks that prepare and process the data. Managed Workflows executes these workflows on a schedule or on demand.
Coordinate Extract, Transform, and Load (ETL) jobs: You can use Managed Workflows as an open source alternative to orchestrate multiple ETL jobs involving a diverse set of technologies in an arbitrarily complex ETL workflow.
Prepare Machine Learning (ML) data: In order to enable machine learning, source data must be collected, processed, and normalized so that ML modeling systems like the fully managed service Amazon SageMaker can train on that data. Managed Workflows solves this problem by making it easier to stitch together the steps it takes to automate your ML pipeline.

Customer Benefits:
Deploy Airflow rapidly at scale: Get started in minutes from the AWS Management Console, CLI, AWS CloudFormation, or AWS SDK. Create an account and begin deploying Directed Acyclic Graphs (DAGs) to your Airflow environment immediately, without reliance on development resources or provisioning infrastructure.
Run Airflow with built-in security: With Managed Workflows, your data is secure by default as workloads run in your own isolated and secure cloud environment using Amazon’s Virtual Private Cloud (VPC), and data is automatically encrypted using AWS Key Management Service (KMS).
Reduce operational costs: Managed Workflows is a managed service, removing the heavy lift of running open source Apache Airflow at scale. With Managed Workflows, you can reduce operational costs and engineering overhead while meeting the on-demand monitoring needs of end-to-end data pipeline orchestration.

Resources: Website

Amazon MQ

What is it?
Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that makes it easy to set up and operate message brokers on AWS. Amazon MQ reduces your operational responsibilities by managing the provisioning, setup, and maintenance of message brokers for you. Because Amazon MQ connects to your current applications with industry-standard APIs and protocols, you can easily migrate to AWS without having to rewrite code.

Availability:
Amazon MQ is available in 19 AWS Regions; see details on the AWS Regions Table.

Customer Benefits:
Migrate quickly: Connecting your current applications to Amazon MQ is easy because it uses industry-standard APIs and protocols for messaging, including JMS, NMS, AMQP 1.0 and 0-9-1, STOMP, MQTT, and WebSocket. This enables you to move from any message broker that uses these standards to Amazon MQ by simply updating the endpoints of your applications to connect to Amazon MQ.
Offload operational responsibilities: Amazon MQ manages the administration and maintenance of message brokers and automatically provisions infrastructure for high availability. There is no need to provision hardware or install and maintain software, and Amazon MQ automatically manages tasks such as software upgrades, security updates, and failure detection and recovery.
Durable messaging made easy: Amazon MQ is automatically provisioned for high availability and message durability when you connect your message brokers. Amazon MQ stores messages redundantly across multiple Availability Zones (AZs) within an AWS Region and will continue to be available if a component or AZ fails.

Resources: Website
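Because Amazon MQ speaks open standards, a client needs nothing more than a standards-compliant frame to talk to a broker. As an illustration of how simple the STOMP wire format mentioned above is, this sketch builds a STOMP 1.2 SEND frame by hand (the /queue/orders destination is made up, and a real client would first complete a CONNECT handshake over a TLS connection to the broker endpoint):

```python
def stomp_send_frame(destination: str, body: str) -> bytes:
    """Build a minimal STOMP SEND frame: a command line, headers,
    a blank line, the body, and a terminating NUL octet."""
    payload = body.encode("utf-8")
    headers = {
        "destination": destination,
        "content-type": "text/plain",
        "content-length": str(len(payload)),
    }
    head = "SEND\n" + "".join(f"{k}:{v}\n" for k, v in headers.items())
    return head.encode("utf-8") + b"\n" + payload + b"\x00"

# Hypothetical destination name, for illustration only.
frame = stomp_send_frame("/queue/orders", "hello")
print(frame)
```

In practice you would hand framing like this to an existing STOMP client library; the point is only that migrating to Amazon MQ does not require a proprietary protocol.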
AWS Categories: Compute

Amazon EC2 Mac Instances

Availability:
us-east-1 (N. Virginia), us-east-2 (Ohio), us-west-2 (Oregon), eu-west-1 (Ireland), and ap-southeast-1 (Singapore)

Customer Benefits:
Quickly provision macOS environments: Time and resources previously spent building and maintaining on-premises macOS environments can now be refocused on building creative and useful apps. Development teams can now seamlessly provision and access macOS compute environments to enjoy faster app builds and convenient, distributed testing, without having to procure, configure, operate, maintain, and upgrade fleets of physical computers.
Reduce costs: Mac instances allow developers to launch macOS environments within minutes, adjust provisioned capacity as needed, and only pay for actual usage with AWS’s pay-as-you-go pricing. Developers save money since they only need to pay for the systems that are in use. For example, more capacity can be used when building an app, and less capacity when testing.
Extend your toolkits: Amazon EC2 Mac instances provide developers seamless access to the broad set of over 175 AWS services so they can more easily and efficiently collaborate with team members, and develop, test, share, analyze, and improve their apps. Customers can leverage AWS services such as Elastic Block Store (EBS) for block-level storage, Elastic Load Balancer (ELB) for distributing build queues, Simple Storage Service (S3) for extreme scale object storage, Amazon Machine Images (AMIs) for orchestration, and CodeBuild for managed CI/CD.

Resources: Website | What’s new post

Amazon EC2 D3 and D3en Instances

What is it?
D3 instances are a great fit for dense storage workloads including big data and analytics, data warehousing, and high scale file systems. D3en instances are a great fit for dense and distributed workloads including high capacity data lakes, clustered file systems, and other multi-node storage systems with significant inter-node I/O. With D3 and D3en instances, you can easily migrate from previous-generation D2 instances or on-premises infrastructure to a platform optimized for dense HDD storage workloads.

Availability:
us-east-1, us-east-2, us-west-2, and eu-west-1 regions

Customer Benefits:
Lower costs: Next-generation Amazon EC2 D3 instances provide increased price performance, and lower cost, than D2 instances. D3 and D3en instances feature 30% higher compute performance than D2 instances. D3en instances also offer 80% lower cost-per-TB of storage compared to D2 instances.
Better performance: D3 and D3en instances satisfy the needs of applications with high requirements for sequential storage throughput. D3 and D3en instances enable 45% and 100% higher disk throughput respectively compared to D2 instances. D3 and D3en instances provide 2.5x and 7.5x higher networking throughput respectively than D2 instances, allowing for high speed multi-node configurations.
Maximize resource efficiency: D3 and D3en instances are powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor, which delivers practically all of the compute and memory resources of the host hardware to your instances. This frees up additional compute, memory, and I/O, allowing your applications to do more with available hardware resources, including local HDD storage.

Resources: Website | What’s new post
Amazon EC2 Instances Powered by AWS Graviton2 Processors

What is it?
The new general purpose (M6g), general purpose burstable (T4g), compute optimized (C6g), and memory optimized (R6g) Amazon EC2 instances deliver up to 40% improved price performance over comparable x86-based instances for a broad spectrum of workloads including application servers, open source databases, in-memory caches, microservices, gaming servers, electronic design automation, high-performance computing, and video encoding. M6gd, C6gd, and R6gd are variants of these instances with local NVMe-based SSD storage, and C6gn instances deliver 100 Gbps networking for compute intensive applications with support for Elastic Fabric Adapter (EFA). These instances are powered by new AWS Graviton2 processors that deliver up to 7x performance, 4x the number of compute cores, 2x larger private caches per core, and 5x faster memory compared to the first-generation AWS Graviton processors. AWS Graviton2 processors are built on advanced 7 nanometer manufacturing technology. They utilize 64-bit Arm Neoverse cores and custom silicon designed by AWS, and introduce several performance optimizations versus the first generation. AWS Graviton2 processors provide 2x faster floating-point performance per core for scientific and high-performance computing workloads, custom hardware acceleration for compression workloads, fully encrypted DRAM memory, and optimized instructions for faster CPU-based machine learning inference.

Availability:
US East (N. Virginia, Ohio), US West (N. California, Oregon), Europe (Ireland, Frankfurt, London), Canada (Central), and Asia Pacific (Mumbai, Singapore, Sydney, Tokyo) regions

Customer Benefits:
Best price performance for a broad spectrum of workloads: AWS Graviton2-based general-purpose (M6g), general-purpose burstable (T4g), compute-optimized (C6g), and memory-optimized (R6g) EC2 instances deliver up to 40% better price performance over comparable current generation x86-based instances for a broad spectrum of workloads such as application servers, microservices, video encoding, high-performance computing, electronic design automation, compression, gaming, open-source databases, in-memory caches, and CPU-based machine learning inference.
Extensive ecosystem support: AWS Graviton2 processors, based on the 64-bit Arm architecture, are supported by popular Linux operating systems including Amazon Linux 2, Red Hat, SUSE, and Ubuntu. Many popular applications and services from AWS and Independent Software Vendors also support AWS Graviton2-based instances, including Amazon ECS, Amazon EKS, Amazon ECR, Amazon CodeBuild, Amazon CodeCommit, Amazon CodePipeline, Amazon CodeDeploy, Amazon CloudWatch, CrowdStrike, Datadog, Docker, Drone, GitLab, Jenkins, NGINX, Qualys, Rancher, Rapid7, Tenable, and TravisCI. Arm developers can also leverage this ecosystem to build applications natively in the cloud, thereby eliminating the need for emulation and cross-compilation, which are error prone and time consuming.
Enhanced security for cloud applications: Developers building applications for the cloud rely on cloud infrastructure for security, speed, and optimal resource footprint. AWS Graviton2 processors feature key capabilities that enable developers to run cloud native applications securely, and at scale, including always-on 256-bit DRAM encryption and 50% faster per-core encryption performance compared to first-generation AWS Graviton. Graviton2-powered instances are built on the Nitro System, which features the Nitro security chip with dedicated hardware and software for security functions, as well as encrypted EBS storage volumes by default.

Resources: Website

Amazon EC2 G4ad Instances

What is it?
G4ad instances are powered by AMD Radeon Pro V520 GPUs, providing the best price performance for graphics intensive applications in the cloud. These instances offer up to 45% better price performance compared to G4dn instances, which were already the lowest cost instances in the cloud, for graphics applications such as remote graphics workstations, game streaming, and rendering that leverage industry-standard APIs such as OpenGL, DirectX, and Vulkan. They provide up to 4 AMD Radeon Pro V520 GPUs, 64 vCPUs, 25 Gbps networking, and 2.4 TB of local NVMe-based SSD storage.

Availability:
us-east-1 (N. Virginia), us-east-2 (Ohio), us-west-2 (Oregon), eu-west-1 (Ireland), and ap-southeast-1 (Singapore)

Use Cases:
Virtual Workstations
Graphics intensive applications

Customer Benefits:
Highest Performance and Lowest Cost Instances for Graphics Intensive Applications: G4ad instances are the lowest cost instances in the cloud for graphics intensive applications. They provide up to 45% better price performance, including up to 40% better graphics performance, compared to G4dn instances for graphics applications such as remote graphics workstations, game streaming, and rendering that leverage industry-standard APIs such as OpenGL, DirectX, and Vulkan.
Simplified Management of Virtual Workstations at the Lowest Cost in the Cloud: G4ad instances allow customers to configure virtual workstations with high-performance simulation, rendering, and design capabilities in minutes, allowing customers to scale quickly. Customers can use AMD Radeon Pro Software for Enterprise and the high-performance remote display protocol NICE DCV with G4ad instances at no additional cost to manage their virtual workstation environments, with support for up to two 4K monitors per GPU.
Dependability in Third-Party Applications: The AMD professional graphics solution includes an extensive Independent Software Vendor (ISV) application testing and certification process called the Day Zero Certification Program. This helps ensure that developers can leverage the latest AMD Radeon Pro Software for Enterprise features combined with the reliability of certified software on the day of the driver release.

Resources: Website
AWS Wavelength Zone in Las Vegas
What is it?
Today, we are announcing the availability of a new AWS Wavelength Zone on Verizon’s 5G Ultra Wideband network in Las Vegas. Wavelength Zones are now available in eight cities, including the seven previously announced cities of Boston, the San Francisco Bay Area, New York City, Washington, DC, Atlanta, Dallas, and Miami.
AWS Wavelength brings AWS services to the edge of the 5G network, minimizing the latency to connect to an application from 5G-connected devices. Application traffic can reach application servers running in Wavelength Zones, AWS infrastructure deployments that embed AWS compute and storage services within communications service providers’ datacenters at the edge of 5G networks, without leaving the telco provider’s network. This reduces the extra network hops to the Internet that can result in latencies of tens of milliseconds, preventing customers from taking full advantage of the bandwidth and latency advancements of 5G.
Availability:
Today, Wavelength was announced for availability in Las Vegas. In August 2020, AWS announced the launch of two Wavelength Zones, in San Francisco and Boston, with Verizon. Wavelength Zones in 8 other cities in the United States are planned for launch in 2020. Globally, AWS is partnering with other leading telecommunications companies, including KDDI, SK Telecom, and Vodafone, to launch Wavelength across Europe, Japan, and South Korea in 2020, with more telco partners coming soon.
Use Cases:
Connected Vehicles: Cellular Vehicle-to-Everything (C-V2X) is an increasingly important platform for enabling intelligent driving, real-time HD maps, road safety, and more.
Interactive Live Video Streams: Wavelength provides the ultra-low latency needed to live stream high-resolution video and high-fidelity audio, as well as to embed interactive experiences into live video streams.
AR/VR: By accessing compute resources on AWS Wavelength, AR/VR applications can reduce Motion to Photon (MTP) latencies to the <20 ms benchmark needed to offer a realistic customer experience.
Smart Factories: Industrial automation applications use ML inference at the edge to analyze images and videos to detect quality issues on fast-moving assembly lines and trigger actions to remediate the problem.
Real-time Gaming: With AWS Wavelength, the most demanding games can be made available on end devices that have limited processing power by streaming these games from game servers in Wavelength Zones.
Healthcare: ML-assisted diagnostics: AI/ML-driven video analytics and image matching solutions help doctors speed up the diagnosis of observed conditions.
Customer Benefits:
Ultra-low latency for 5G: Wavelength combines AWS compute and storage services with the high bandwidth and low latency of 5G networks to enable developers to innovate and build a whole new class of applications that serve end-users with ultra-low latencies over the 5G network.
Consistent AWS experience: Wavelength enables you to use familiar and powerful AWS tools and services to build, manage, secure, and scale your applications.
Flexible and scalable: With Wavelength, you can start small and scale as your needs grow, without worrying about managing physical hardware or maximizing the utilization of purchased capacity.
Global 5G network: Wavelength will be available within communications service providers’ (CSP) networks such as Verizon, Vodafone, KDDI, and SK Telecom. More CSPs around the world will be available in the near future.
Resources: Website | What’s New post
Amazon EC2 instances powered by Habana Accelerators
What is it?
Amazon EC2 instances powered by Habana accelerators are a new type of EC2 instance specifically optimized for deep learning training workloads, delivering the lowest cost to train machine learning models in the cloud. Habana-based instances are ideal for deep learning training workloads in applications such as natural language processing, object detection and classification, recommendation engines, and autonomous vehicle perception. Habana, an Intel company, will provide the SynapseAI SDK and tools that simplify building with, or migrating from, current GPU-based EC2 instances to Habana-based EC2 instances. SynapseAI will be natively integrated with common ML frameworks like TensorFlow and PyTorch and will make it easy to port existing training models from GPUs to Habana accelerators. Customers will be able to launch the new EC2 instances using AWS Deep Learning AMIs, or via Amazon EKS and Amazon ECS for containerized applications, and will also be able to use these instances via Amazon SageMaker.
Availability:
Amazon EC2 Habana-based instances will be available in April 2021 in 3 sizes across 2 regions: us-east-1 and us-west-2. They can be purchased as On-Demand, Reserved Instances, Savings Plans, or Spot Instances. Habana-based instances are also available for use with Amazon SageMaker, Amazon EKS, and Amazon ECS.
Customer Benefits:
Better performance and lower cost: Habana-based EC2 instances will leverage up to 8 Habana Gaudi accelerators and deliver up to 40% better price performance than current GPU-based EC2 instances for training deep learning models. Habana-based instances also give customers the ability to scale out from a single accelerator to hundreds, significantly reducing time to train.
Resources: Website
AWS Trainium
What is it?
AWS Trainium is a high-performance machine learning (ML) chip, custom designed by AWS to provide the best price performance for training machine learning models in the cloud. The Trainium chip is specifically optimized for deep learning training workloads for applications including image classification, semantic search, translation, voice recognition, natural language processing, and recommendation engines. AWS Trainium uses the AWS Neuron SDK, which is integrated with popular ML frameworks, including TensorFlow, MXNet, and PyTorch, allowing customers to easily migrate from GPU instances for training deep learning models with minimal code changes. AWS Trainium will be available via Amazon EC2 instances and AWS Deep Learning AMIs, as well as managed services including Amazon SageMaker, Amazon ECS, Amazon EKS, and AWS Batch.
Availability:
AWS Trainium will be available in all AWS commercial and GovCloud regions.
Use Cases:
Deep learning training for applications such as image classification, semantic search, translation, voice recognition, natural language processing, and recommendation engines.
Customer Benefits:
Better performance and lower cost: AWS Trainium will deliver the most cost-effective ML training in the cloud and will offer the most TFLOPS of compute power of any ML instance in the cloud. Customers will be able to achieve significantly better performance in training machine learning models and realize dramatically lower cost compared to Amazon EC2 GPU instances.
Resources: Website
AWS Outposts 1U and 2U Servers
What is it?
AWS Outposts is a fully managed service that extends AWS infrastructure, AWS services, APIs, and tools to virtually any datacenter, co-location space, or on-premises facility for a truly consistent hybrid experience. AWS Outposts is ideal for workloads that need low-latency access to on-premises applications or systems, local data processing, or secure storage of sensitive customer data that needs to remain anywhere there is no AWS Region, including inside company-controlled environments or countries.
AWS Outposts 1U and 2U form factors are rack-mountable servers that provide local compute and networking services to edge locations that have limited space or smaller capacity requirements. Outposts servers are ideal for customers with low-latency or local data processing needs at on-premises locations like retail stores, branch offices, healthcare provider locations, or factory floors.
AWS will deliver Outposts servers directly to you, and you can either have your onsite personnel install them or have them installed by a preferred third-party contractor. After the Outposts servers are connected to your network, AWS will remotely provision compute and storage resources so you can start launching applications.
AWS Outposts 1U and 2U form factors will be available in 2021. To receive more information about Outposts servers, sign up here.
Availability:
At GA, Outposts can be shipped to and installed in the following countries: NA - US; EMEA - all EU countries, Switzerland, and Norway; APAC - Australia, Japan, and South Korea.
Use Cases:
Low Latency: Customers with low-latency requirements need to make near-real-time responses to end-user applications or have to communicate with other on-premises systems or control on-site equipment. They have adopted the Amazon cloud for centralized operations but need to run compute-, graphics-, or storage-intensive workloads on premises to execute localized workflows with precision and quality.
Local Data Processing: Customers need to access data stores that will remain on premises for a time. Some customers run data-intensive workloads that collect and process hundreds of TBs of data a day. They would like to process this data locally to respond to events in real time and to have better control over analyzing, backing up, and restoring the data.
Key Verticals:
Manufacturing Automation: Use AWS services to run manufacturing process control systems such as MES and SCADA systems and applications that need to run close to factory floor equipment.
Health Care: Apply AWS analytics and machine learning services to health management systems that need to remain on premises due to low-latency processing or protected health information (PHI) requirements.
Telecommunications: Use cloud services and tools to orchestrate, update, scale, and manage the lifecycle of Virtual Network Functions (VNFs) across cloud, on premises, and edge.
Media & Entertainment: Access the latest GPU innovations on premises for graphics processing and audio and video rendering.
Financial Services: Build next-generation trading and exchange platforms that serve all participants at low latency.
Retail: Leverage AWS database, container, and analytics services to enable retail innovations such as connected store experiences, and run point-of-sale systems to process in-person transactions locally.
Customer Benefits:
Run AWS Services On Premises
Store and Process Data On Premises
Truly Consistent Hybrid Experience
Fully Managed Infrastructure
Resources: Website
AWS Local Zones
What is it?
AWS Local Zones are a type of AWS infrastructure deployment that places AWS compute, storage, database, and other select services closer to large population, industry, and IT centers. With AWS Local Zones, you can easily run latency-sensitive portions of applications local to end-users in a specific geography, delivering single-digit millisecond latency for use cases such as media & entertainment content creation, real-time gaming, live video streaming, AR/VR, and machine learning inference.
Each AWS Local Zone location is an extension of an AWS Region where you can run your latency-sensitive applications using AWS services such as Amazon Elastic Compute Cloud, Amazon Virtual Private Cloud, Amazon Elastic Block Store, Amazon Elastic Container Service, and Amazon Elastic Kubernetes Service in geographic proximity to end-users. AWS Local Zones provide a high-bandwidth, secure connection between local workloads and those running in the AWS Region, allowing you to seamlessly connect back to your other workloads running in AWS and to the full range of in-region services through the same APIs and tool sets. You can build and deploy applications using AWS services in proximity to your end-users and reduce end-to-end throughput needs for your applications.
AWS Local Zones are managed and supported by AWS, bringing you all of the scalability and security benefits of the cloud. With AWS Local Zones, you can easily build and deploy latency-sensitive applications closer to your end-users using a consistent set of AWS services and pay only for the resources that you use.
Availability:
AWS Local Zones are generally available in Los Angeles, CA, and in preview in Boston, Houston, and Miami. Get started with the LA Local Zones here. Customers can sign up for access to the preview of the Local Zones in Boston, Houston, and Miami here.
Use Cases:
Media & Entertainment Content Creation: Run latency-sensitive workloads, such as live production, video editing, and graphics-intensive virtual workstations for artists, in geographic proximity to AWS Local Zones.
Real-time Multiplayer Gaming: Deploy latency-sensitive game servers in AWS Local Zones to run real-time multiplayer game sessions and maintain a reliable gameplay experience. With AWS Local Zones, you can deploy your game servers closer to your players than ever before for a real-time and interactive in-game experience.
ML: Easily host and train models continuously for high-performance, low-latency inference at the edge. Work with your data, experiment with algorithms, and visualize your output faster in AWS Local Zones.
Video Streaming: Live stream video content with single-digit millisecond latency and high fidelity to your end users. Perform computation and analysis of your video content close to the event and seamlessly extend across Availability Zones and AWS Local Zones close to your end users for high-fidelity streaming.
AR/VR: Support AR/VR applications by performing computation and analysis close to your end users with AWS Local Zones. Effectively reduce Motion to Photon (MTP) latencies to the <20 ms benchmark needed to offer a realistic customer experience.
Customer Benefits:
Low latency to local end-users: AWS Local Zones place compute, storage, database, and other select AWS services closer to end-users, enabling you to open up new possibilities and deliver innovative applications and services that require single-digit millisecond latencies to more end users.
Consistent AWS experience: AWS Local Zones enable you to use the same AWS infrastructure, services, APIs, and tool sets that you are familiar with in the cloud. Applications also have fast, secure, and seamless access to the full breadth of services in the parent region.
Resources: Website
Amazon EC2 M5zn Instances
What is it?
Amazon EC2 M5 instances are the next generation of Amazon EC2 General Purpose compute instances. M5 instances offer a balance of compute, memory, and networking resources for a broad range of workloads, including web and application servers, small and mid-sized databases, cluster computing, gaming servers, caching fleets, and app development environments. Additionally, M5d, M5dn, and M5ad instances have local storage, offering up to 3.6 TB of NVMe-based SSDs.
Customer Benefits:
Flexibility and choice: Choose from a selection of 60 different instance choices with multiple processor options (Intel Xeon Scalable processor or AMD EPYC processor), storage options (EBS or NVMe SSD), network options (up to 100 Gbps), and instance sizes to optimize both cost and performance for your workload needs.
Lower TCO: By leveraging the higher number of cores per processor, M5 instances provide customers with a higher instance density than the previous generation, which results in a reduction in per-instance TCO. With the largest instance size of 24xlarge, customers can scale up and consolidate their workloads on fewer instances to help lower their total cost of ownership.
Maximize resource efficiency: M5 instances are built on the AWS Nitro System, a combination of dedicated hardware and a lightweight hypervisor that delivers practically all of the compute and memory resources of the host hardware to your instances for better overall performance and security.
Resources: Website
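The consolidation arithmetic behind the "Lower TCO" point can be sketched with real M5 vCPU counts (m5.xlarge has 4 vCPUs, m5.24xlarge has 96). The function name and the example fleet are illustrative, not part of any AWS tooling:

```python
import math

# vCPU counts for two M5 sizes (actual AWS values).
VCPUS = {"m5.xlarge": 4, "m5.24xlarge": 96}

def instances_needed(total_vcpus: int, size: str) -> int:
    """Smallest number of instances of `size` covering the vCPU demand."""
    return math.ceil(total_vcpus / VCPUS[size])

# A fleet of 48 m5.xlarge (192 vCPUs) consolidates onto two m5.24xlarge.
demand = 48 * VCPUS["m5.xlarge"]
print(instances_needed(demand, "m5.24xlarge"))  # -> 2
```

Fewer, larger instances mean fewer per-instance costs (agents, licenses, management overhead), which is the density argument the section makes.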
Amazon EC2 R5b Instances
What is it?
Amazon EC2 R5 instances are the next generation of memory optimized
instances for Amazon Elastic Compute Cloud. R5 instances are well
suited for memory-intensive applications such as high-performance
databases, distributed web-scale in-memory caches, mid-size in-memory
databases, real-time big data analytics, and other enterprise applications.
Additionally, you can choose from a selection of instances that have options
for local NVMe storage, EBS optimized storage (up to 60 Gbps), and
networking (up to 100 Gbps).
Customer benefits:
Flexibility and Choice: Choose from a selection of almost 60 different
instance choices with options for processors (Intel Xeon Scalable
processor or AMD EPYC processor), instance storage (NVMe SSD), EBS
volumes storage (up to 60 Gbps), networking (up to 100 Gbps), and
instance sizes to optimize both cost and performance for your workload
needs.
More memory: R5 instances support the high memory requirements of
certain applications to increase performance and reduce latency. R5
instances deliver additional memory per vCPU, and the largest size,
r5.24xlarge, provides 768 GiB of memory, allowing customers to scale up
and consolidate their workloads on fewer instances.
Maximize resource efficiency: R5 instances are powered by the AWS Nitro
System, a combination of dedicated hardware and lightweight hypervisor,
which delivers practically all of the compute and memory resources of the
host hardware to your instances. This frees up additional memory for
your workloads which boosts performance and lowers the $/GiB costs.
Resources: Website
AWS Categories: End User Compute
AWS Categories: AI/ML
Amazon DevOps Guru
What is it?
Amazon DevOps Guru is a machine learning (ML)-powered DevOps service that gives you a simpler way to measure and improve an application’s operational performance and availability and reduce expensive downtime, with no machine learning expertise required.
Using machine learning models informed by years of operational expertise in building, scaling, and maintaining highly available applications at Amazon.com, DevOps Guru identifies behaviors that deviate from normal operating patterns. When DevOps Guru identifies a critical issue, it automatically alerts you with a summary of related anomalies, the likely root cause, and context on when and where the issue occurred. When possible, DevOps Guru also provides prescriptive recommendations on how to remediate the issue.
Availability:
us-east-1 (N. Virginia), us-east-2 (Ohio), us-west-2 (Oregon), eu-west-1 (Ireland), and ap-northeast-1 (Tokyo)
Use Cases:
Operational audits: IT managers responsible for the reliability of their applications can use DevOps Guru to get a quick summary of all operationally significant events, identified and sorted by severity. In the console, you can search for issues in specific applications, identify trends, and decide where developers should spend their time and resources.
Proactive resource exhaustion planning: Build predictive alarming for exhaustible resources such as memory, CPU, and disk space with DevOps Guru. It forecasts when resource utilization will exceed the provisioned capacity and informs you by creating a notification in the dashboard, helping you avoid an impending outage.
Predictive maintenance: Site reliability engineers can use DevOps Guru insights to prevent incidents before they occur. DevOps Guru flags medium- and low-severity findings that might not be critical but that, if left alone, could worsen over time and affect the availability of your application. This helps you plan, prioritize, and avoid unforeseen downtime.
Customer Benefits:
Automatically detect operational issues: DevOps Guru continuously analyzes streams of disparate data and watches thousands of metrics to establish normal bounds for application behavior. It discovers and classifies resources like application metrics, logs, events, and traces in your account, automatically identifies deviations from normal activity, and surfaces high-severity issues to quickly alert you of downtime.
Resolve issues quickly with ML-powered insights: DevOps Guru helps reduce your issue resolution time and assists in root cause identification by correlating anomalies across multiple metrics and events. When an operational issue occurs, it generates insights with a summary of related anomalies, contextual information about the issue, and, when possible, actionable recommendations for remediation.
Easily scale and maintain availability: As you migrate to and adopt new AWS services, DevOps Guru automatically adapts to changing behavior and evolving system architecture. With DevOps Guru, you save time and effort otherwise spent monitoring applications and manually updating static rules and alarms. In just a few clicks, DevOps Guru starts analyzing your AWS application activity.
Resources: Website
Amazon SageMaker Feature Store
What is it?
Amazon SageMaker Feature Store is a feature store for machine learning (ML) that serves features in both real time and batch. Using SageMaker Feature Store, you can store, discover, and share features so you don’t need to recreate the same features for different ML applications, saving months of development effort.
Your ML models use inputs called “features” to make predictions. For example, lot size could be a feature in a model that predicts housing prices. Features need to be available in large batches for training and also in real time to make fast predictions. For example, in a housing price predictor model, users expect an immediate update as new listings become available. The quality of your predictions depends on keeping features consistent, but it can take months of coding and deep expertise to keep features consistent across training and development environments.
Amazon SageMaker Feature Store provides a consistent set of features so you get the exact same features for training and inference, and you can easily share features across your organization, which improves collaboration and eliminates rework.
Availability:
Amazon SageMaker Feature Store is available in all AWS Regions where SageMaker is available. See details on the AWS Regions Table.
Use Cases:
Model features are required for all machine learning applications, so Amazon SageMaker Feature Store can be used for all ML use cases.
Customer Benefits:
Develop models faster: Amazon SageMaker Feature Store provides a central repository of features so they can be used for many applications across your organization. By discovering and reusing features that are already deployed, you spend less time on data preparation and feature computation and more time on innovation.
Increase model accuracy: The accuracy of ML models can be increased by looking at model metadata such as the dataset used, model attributes, and hyperparameters. In addition to the actual features, Amazon SageMaker Feature Store stores metadata for each feature so you can understand its impact while building and training models.
Track model lineage for compliance: With Amazon SageMaker Feature Store, you can track the lineage of the feature generation process. The feature store maintains the data lineage for every feature, providing the information required to understand how a feature was generated. This helps address compliance requirements in regulated industries.
Resources: Website | What's new post
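A feature store's core idea, one place to write features with a batch read path for training and a latest-record read path for inference, can be illustrated with a toy in-memory class. SageMaker Feature Store itself is used through the SageMaker SDK; everything below (the `ToyFeatureStore` class and its method names) is a hypothetical sketch of the concept only:

```python
from collections import defaultdict

class ToyFeatureStore:
    """Illustrative in-memory store: one write path for features,
    two read paths (batch history for training, latest for inference)."""
    def __init__(self):
        self._records = defaultdict(list)   # entity_id -> [(event_time, features)]

    def ingest(self, entity_id, event_time, features):
        self._records[entity_id].append((event_time, features))

    def batch_read(self):
        """All historical records, e.g. to build a training set."""
        return {eid: recs[:] for eid, recs in self._records.items()}

    def get_latest(self, entity_id):
        """Most recent feature vector, e.g. for online inference."""
        return max(self._records[entity_id], key=lambda rec: rec[0])[1]

store = ToyFeatureStore()
store.ingest("house-42", 1, {"lot_size": 5000})
store.ingest("house-42", 2, {"lot_size": 5200})   # updated listing
print(store.get_latest("house-42"))               # -> {'lot_size': 5200}
```

Because training reads and inference reads come from the same ingested records, the "exact same features for training and inference" consistency the section describes falls out by construction.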
Distributed training on Amazon SageMaker
What is it?
Training models on large datasets can take hours, slowing down your ability to deploy your latest innovations into production. You can split large training datasets across multiple GPUs (data parallelism), but splitting data efficiently can take weeks of experimentation. Also, more advanced ML use cases may require large models. For example, models can have billions of parameters and be petabytes in size. As a result, such models are often too big to fit on a single GPU. You can split large models across multiple GPUs (model parallelism), but finding the best way to split up the model and adjusting training code can take weeks and delay your time to market.
For customers using GPUs, Amazon SageMaker makes it faster to perform data parallelism and model parallelism. With minimal code changes, SageMaker helps split your data across multiple GPUs in a way that achieves near-linear scaling efficiency. SageMaker also helps split your model across multiple GPUs by automatically profiling and partitioning your model with fewer than 10 lines of code in your TensorFlow or PyTorch training script.
Availability:
Distributed training is available in all AWS Regions where SageMaker is available. See details on the AWS Regions Table.
Resources: Website | What’s new post
Amazon CodeGuru updates
What is it?
Amazon CodeGuru is a developer tool that provides intelligent recommendations to improve your code quality and identify an application’s most expensive lines of code. Integrate CodeGuru into your existing software development workflow to automate code reviews during application development, continuously monitor your application’s performance in production, and receive recommendations and visual clues on how to improve code quality, raise application performance, and reduce overall cost.
Use Cases:
Improve application performance: Amazon CodeGuru Profiler is always searching for application performance optimizations, identifying your most “expensive” lines of code and recommending ways to fix them to reduce CPU utilization, cut compute costs, and improve application performance.
Detect deviation from AWS API and SDK best practices: Amazon CodeGuru Reviewer is trained using rule mining and supervised machine learning models that use a combination of logistic regression and neural networks. It looks at code changes intended to improve code quality and cross-references them against documentation data.
Resources: Website | What’s new post
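The data-parallelism idea above, each worker trains on a disjoint slice of the dataset while gradients are averaged across workers, can be sketched in a few lines. This is not the SageMaker distributed training library's API, just a minimal illustration of strided sharding (the function name is hypothetical):

```python
def shard_dataset(samples, num_workers, worker_rank):
    """Strided sharding: worker r takes every num_workers-th sample
    starting at offset r, so shards are disjoint and cover the dataset."""
    return samples[worker_rank::num_workers]

dataset = list(range(10))
shards = [shard_dataset(dataset, 4, r) for r in range(4)]
print(shards)  # -> [[0, 4, 8], [1, 5, 9], [2, 6], [3, 7]]
```

Getting this right by hand is easy for a toy list but, as the section notes, takes real experimentation for large datasets with uneven sample costs, which is the part SageMaker automates.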
Amazon SageMaker Edge Manager Amazon Lookout for Metrics
What is it? What is it?
Amazon SageMaker Edge Manager provides model management for edge
Amazon Lookout for Metrics uses machine learning (ML) to detect anomalies
devices so you can optimize, secure, monitor, and maintain machine learning
in virtually any time series-driven business and operational metrics–such as
models on fleets of edge devices such as smart cameras, robots, personal
revenue performance, purchase transactions, and customer acquisition and
computers, and mobile devices.
retention rates–with no ML experience required.
Amazon SageMaker Edge Manager makes it easy to manage ML models on
Amazon Lookout for Metrics automatically connects to popular databases
edge devices. SageMaker Edge Manager uses SageMaker Neo to compile
and SaaS applications to continuously monitor metrics that you care about,
and optimize models for edge devices. Then, SageMaker Edge Manager
and sends you alerts as soon as anomalies are detected. When it finds
packages the model with its runtime and credentials for deployment. You
anomalies, Amazon Lookout for Metrics immediately sends you alerts,
have the flexibility to use AWS IoT Greengrass or your own on-device
groups anomalies that might be related to the same event, and helps you
deployment mechanism to deploy models to the edge. Once a model is
identify the root cause so that you can fix an issue or quickly react to
deployed, SageMaker Edge Manager manages each model on each device by
opportunities. It also ranks anomalies in the order of severity, so that you can
collecting metrics, sampling input/output data, and sending the data
focus on what matters the most, and lets you to tune the results by providing
securely to your Amazon S3 buckets for monitoring, labeling, and retraining
feedback based on your knowledge about your business, and uses your
so you can continuously improve model quality. And, because SageMaker
feedback to improve the accuracy of results over time.
Edge Manager enables you to manage models separately from the rest of
the application, you can update the model and the application Availability:
independently reducing costly downtime and service disruptions. Amazon Lookout for Metrics is a gated preview and will available in 5 regions
at launch: us-east-1, us-east-2, us-west-2, ap-northeast-1, and eu-west-1.
Availability:
Use Cases:
us-east-1, us-west-2, us-east-2, eu-west-1, eu-central-1, and ap-northeast-1,
By metric category
see details on the AWS Regions Table.
Customer Engagement: Ensure a seamless customer experience by
Use Cases: detecting sudden changes in metrics across the customer journey such as
Driver-assist dashcam: Connected vehicle solution providers use Amazon during enrollment, login, and engagement.
SageMaker Edge Manager to operate ML models to driver dashcams. The Operational: Proactively monitor metrics like latency, CPU utilization, and
models help detect pedestrians and road hazards to improve the safety of error rates to mitigate service interruptions.
both drivers and pedestrians. Sales: Quickly track changes in win rate, pipeline coverage, and average
Theft detection: Amazon SageMaker Edge Manager is used by retailers to deal size to evaluate business growth opportunities.
identify theft during checkout. Image detection models run on smart Marketing: With actionable marketing analytics, quickly detect how your
cameras at checkout counters and send alerts when the merchandise campaigns, partners, and ad platform metrics affect your overall traffic
does not match the scanned barcode. volume, revenue, churn, and conversion.
Predictive maintenance: Amazon SageMaker Edge Manager runs By Industry
predictive maintenance models on gateway servers at manufacturing
facilities in order to predict which machines are at high risk of failure.
When possible failure is detected, alerts are sent to staff so they can
remediate the issue.
Customer Benefits:
Run ML models up to 28x faster: Amazon SageMaker Edge Manager
automatically optimizes ML models for deployment on a wide variety of
edge devices, including CPUs, GPUs, and embedded ML accelerators.
SageMaker Edge Manager compiles your trained model into an executable
that discovers and applies specific performance optimizations to make
your model run most efficiently on the target hardware platform.
Improve model quality: Amazon SageMaker Edge Manager continuously
monitors each model instance across your device fleet to detect when
model quality declines. Declines in model quality can be caused by
differences between the data used to make predictions and the data used
to train the model, or by changes in the real world. For example, changing
economic conditions could drive new interest rates affecting home
purchasing predictions.
Easily integrate with device applications: Amazon SageMaker Edge
Manager supports gRPC, an open source remote procedure call framework,
which allows you to integrate SageMaker Edge Manager into your existing
edge applications through common programming languages, such as
Android Java, C++, C#, and Python.
Resources: External Website | What's new post
Retail: Gain insights into category-level revenue and margin by monitoring
inventory levels, item pricing, promotional traffic, and conversion.
Gaming: Boost player engagement and optimize gaming revenue by
monitoring changes in new users, active users, level-completion rate,
in-app purchases, and retention rate.
Ad Tech: Optimize ad spend by detecting spikes or dips in metrics like
reach, impressions, views, and ad clicks.
Telecom: Reduce customer frustration by detecting unexpected changes in
network performance metrics, like traffic channel (TCH), evolved packet
core (EPC), and Erlang metrics.
Customer Benefits:
Highly accurate anomaly detection: Detects anomalies in metrics with high
accuracy using ML technology and over 20 years of experience at Amazon.
Actionable results at scale: Helps you identify the root cause by grouping
related anomalies together and ranking them in order of severity, so that
you can diagnose issues or identify opportunities quickly.
Integration with AWS databases and SaaS applications: Connects with
commonly used AWS databases and SaaS applications, sends alerts through
multiple channels, and automatically triggers pre-defined custom actions,
such as filing trouble tickets, when anomalies are detected.
Tunable results: Uses your feedback on detected anomalies to
automatically tune the results and improve accuracy over time.
Resources: External Website | What's new post
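The spike-and-dip detection described in the use cases above can be illustrated with a toy trailing-window detector. This is only a sketch of the idea — the managed service uses trained ML models, not a fixed z-score rule — and the metric values below are made up:

```python
# Toy illustration of spike/dip detection in a metric series.
# The managed anomaly-detection service uses ML models trained on your
# metric history; this fixed z-score rule only conveys the intuition.
from statistics import mean, stdev

def detect_anomalies(series, window=7, threshold=3.0):
    """Flag indices whose deviation from the trailing-window mean
    exceeds `threshold` trailing standard deviations."""
    anomalies = []
    for i in range(window, len(series)):
        trailing = series[i - window:i]
        mu, sigma = mean(trailing), stdev(trailing)
        if sigma > 0 and abs(series[i] - mu) > threshold * sigma:
            anomalies.append(i)
    return anomalies

# Daily ad impressions with a sudden dip on day 10:
impressions = [100, 102, 98, 101, 99, 103, 100, 102, 101, 99, 20, 100]
print(detect_anomalies(impressions))  # -> [10]
```

A real detector would also adapt to seasonality and trend, which is part of what the service automates.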
Amazon SageMaker Debugger
What is it?
With Amazon SageMaker Debugger you can detect bottlenecks and training
problems in real time so you can correct problems before the model is
deployed to production. SageMaker Debugger collects and analyzes training
data and generates alerts, reports, and visualizations, providing insights for
you to act on and train models faster.
Amazon SageMaker Debugger captures model metrics, monitors system
resources, and profiles ML framework resources during ML model training,
without requiring additional code. All metrics are captured in real time so
you can correct issues during training, which speeds up training time and
enables you to get higher quality models to production much faster.
Availability:
Amazon SageMaker Debugger is available in all AWS Regions where
SageMaker is available. See details on the AWS Regions Table.
Use Cases:
Consolidate multiple tools: Amazon SageMaker Debugger provides a
single, unified tool that data scientists can use to collect training data
across different parameters in real time, gain visibility into the effects of
different parameter values, and receive alerts for the appropriate action
to be taken.
Visualize training data: Amazon SageMaker Debugger renders
visualizations of training data and helps you visualize tensors in your
network to determine their state at each point in the training process.
This is useful in scenarios such as detecting stale or saturated data or
mapping the effects of specific parameters on the model.
Explain ML models better: Amazon SageMaker Debugger saves the state
of ML models at periodic intervals and enables you to explain the model's
predictions in real time during training or offline after the training is
completed. This helps you better interpret and explain the predictions
the trained model makes. With SageMaker Debugger, you can explain the
internal mechanics of an ML model and eliminate the black-box aspects of
predictions, leading to better business outcomes.
Customer Benefits:
Generate ML models faster: Amazon SageMaker Debugger helps generate
ML models faster by providing you with full visibility and control during
the training process, so you can quickly troubleshoot and take corrective
measures. With SageMaker Debugger, you can take immediate action if
anomalies such as overfitting or overtraining are detected, resulting in
faster model generation for deployment. With the insights provided by
SageMaker Debugger, you can reduce the time required to troubleshoot
models from weeks to days, with no additional code.
Optimize system resources with no additional code: Using the profiling
capability of Amazon SageMaker Debugger, you can automatically
monitor system resources such as CPU, GPU, network, and memory to
get a complete view of current resource utilization. Additionally, the
profiler recommends how to reallocate resources if they are being
underutilized or if there are bottlenecks, helping you to optimize
resources effectively. You can profile your training job in the SageMaker
Studio visual interface at any time.
Make ML training transparent: Amazon SageMaker Debugger makes the
training process transparent so you can verify whether the ML model is
progressively learning correct parameter values, such as gradients, to
yield the desired results. Insights into the training data are provided by
automatically capturing real-time metrics such as weights and tensors
during training to help improve model accuracy. Debugging is made easy
with a visual interface to analyze the debug data and take corrective
actions specific to the models being trained.
Resources: Website | What's new post | Detailed blog post
Amazon SageMaker Clarify
What is it?
Amazon SageMaker Clarify provides data to help you make your machine
learning (ML) models fair and transparent by detecting bias so you can take
corrective action.
Amazon SageMaker Clarify detects bias across the entire ML workflow,
including during data preparation, after training, and on an ongoing basis
over time, and also includes tools to explain ML models and their
predictions. You can skip the tedious process of implementing third-party
tools and improve fairness and transparency to build trust with your
customers, all within SageMaker. SageMaker Clarify also provides
transparency through model explainability reports that you can share with
customers, business leaders, or auditors, so all stakeholders can see how
and why models make predictions.
Availability:
Amazon SageMaker Clarify is available in all AWS Regions where SageMaker
is available. See details on the AWS Regions Table.
Use Cases:
Regulatory Compliance: Regulations such as the Equal Credit Opportunity
Act (ECOA) or the Fair Housing Act often require companies to remain
unbiased and to be able to explain financial decisions. Amazon SageMaker
can help flag any potential bias present in the initial data or in the financial
model after training, and can also help explain which data caused an ML
model to make a particular financial decision.
Internal Reporting & Compliance: Data science teams are often required
to justify or explain ML models to internal stakeholders, such as internal
auditors or executives who would like more transparency. Amazon
SageMaker can provide data science teams with a graph of feature
importance when requested, and can quantify potential bias in an ML
model or its data to provide the information needed to support internal
presentations or mandates.
Operational Excellence: Machine learning is often applied in operational
scenarios, such as predictive maintenance or supply chain operations.
However, data science teams may want insight into why a given machine
needs to be repaired, or why an inventory model is recommending surplus
stock in a particular location. Amazon SageMaker can detail the causes of
individual predictions, helping data science teams work with other
internal teams to improve operations.
Customer Benefits:
Find imbalances in data: Amazon SageMaker Clarify is integrated with
Amazon SageMaker Data Wrangler, making it simple to identify bias
during data preparation. You specify attributes of interest, such as gender
or age, and Amazon SageMaker Clarify runs a set of algorithms to detect
the presence of bias in those attributes. After the algorithms run,
SageMaker Clarify provides a visual report with a description of the
sources and severity of possible bias so that you can take steps to
mitigate it.
Check your trained model for bias: Ensure that predictions are fair by
checking trained models for imbalances, such as more frequent denial of
services to one protected class than another. Amazon SageMaker Clarify is
integrated with SageMaker Experiments so that after a model has been
trained, you can identify attributes you would like to check for bias, such
as income or marital status.
Monitor your model for bias: While your initial data or model may not
have been biased, changes in the world may cause bias to develop over
time. For example, a substantial change in mortgage rates could cause a
home loan application model to become biased. Amazon SageMaker
Clarify is integrated with SageMaker Model Monitor, enabling you to
configure alerting systems like Amazon CloudWatch to notify you if your
model begins to develop bias.
Resources: Website
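The kind of pre-training bias metric that SageMaker Clarify reports can be sketched in a few lines. The two measures below — class imbalance (CI) and difference in positive proportions in labels (DPL) — are illustrative reimplementations, not Clarify's code, and the loan data is invented:

```python
# Minimal sketch of two pre-training bias measures of the kind Clarify
# reports; illustrative only, not the service's implementation.

def class_imbalance(facet):
    """CI = (n_a - n_d) / (n_a + n_d) for a binary facet column
    (1 = one group, 0 = the other)."""
    n_a = sum(1 for f in facet if f == 1)
    n_d = len(facet) - n_a
    return (n_a - n_d) / (n_a + n_d)

def dpl(facet, labels):
    """Difference in the proportion of positive labels between groups."""
    group_a = [l for f, l in zip(facet, labels) if f == 1]
    group_d = [l for f, l in zip(facet, labels) if f == 0]
    return sum(group_a) / len(group_a) - sum(group_d) / len(group_d)

# Toy loan dataset: facet = applicant group, label = 1 if loan approved.
facet  = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
labels = [1, 1, 1, 1, 0, 0, 1, 0, 0, 0]
print(class_imbalance(facet))  # 0.2   -> more samples from group 1
print(dpl(facet, labels))      # ~0.417 -> group 1 approved more often
```

A nonzero DPL like this is the signal that would prompt the corrective action described above, before any model is trained.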
Amazon SageMaker JumpStart
What is it?
Amazon SageMaker JumpStart helps you quickly and easily get started with
machine learning. SageMaker JumpStart provides a set of solutions for the
most common use cases that can be deployed readily with just a few clicks.
The solutions are fully customizable and showcase the use of AWS
CloudFormation templates and reference architectures so you can
accelerate your ML journey.
SageMaker JumpStart also supports one-click deployment and fine-tuning of
more than 150 popular open source models for modalities such as natural
language processing, object detection, and image classification.
Availability:
Amazon SageMaker JumpStart is available in all AWS Regions where
SageMaker is available. See details on the AWS Regions Table
Use Cases:
There are 15+ pre-built solutions for common ML use cases including
predictive maintenance, demand forecasting, fraud detection, and
personalized recommendations.
Customer Benefits:
Accelerate time to deploy over 150 open source models: Amazon
SageMaker JumpStart provides one-click deployable ML models and
algorithms from popular model zoos, including PyTorch Hub and
TensorFlow Hub, for image classification, object detection, and language
modeling use cases, minimizing the time to deploy ML models originating
from outside of SageMaker.
15+ pre-built solutions for common ML use cases: With Amazon
SageMaker JumpStart, you can move quickly from concept to production
with pre-built solutions that include all of the components needed to
deploy an ML application in SageMaker with a few clicks, including an AWS
CloudFormation template, reference architecture, and getting-started
content. Solutions are fully customizable, so you can easily modify them
to fit your specific use case and dataset. These end-to-end solutions cover
common use cases, from predictive maintenance and demand forecasting
to fraud detection and personalized recommendations.
Get started with just a few clicks: Amazon SageMaker JumpStart provides
notebooks, blogs, and video tutorials designed to help you when you
want to learn something new or encounter roadblocks. Content is easily
accessible within Amazon SageMaker Studio, enabling you to get started
with ML faster.
Resources: Website
Amazon QuickSight Q
What is it?
Amazon QuickSight Q uses machine learning-powered natural language
query (NLQ) technology to enable business users to ask ad hoc questions of
their data in natural language and get answers in seconds. To ask a question,
users simply type it into the Amazon QuickSight Q search bar. Amazon
QuickSight Q uses machine learning (natural language processing, schema
understanding, and semantic parsing for SQL code generation) to generate a
data model that automatically understands the meaning of and relationships
between business data, so users can receive highly accurate answers to their
business questions in seconds simply by using the business language that
they are used to. Amazon QuickSight Q comes pre-trained on large volumes
of real-world data from various domains and industries like sales, marketing,
operations, retail, human resources, pharmaceuticals, insurance, energy,
and more, so it is already optimized to understand complex business
language. For example, sales users can ask, "How is my sales tracking against
quota?", or retail users can ask, "What are the top products sold week-over-
week by region?" Furthermore, users can get more complete and accurate
answers because the query is applied to all of the data, not just the datasets
in a pre-determined model. And because Amazon QuickSight Q does this
automatically, it eliminates the need for BI teams to spend time building
and updating data models, saving weeks of effort.
Availability:
Amazon QuickSight Q will be in Gated Preview; customers need to sign up
to get access.
Use Cases:
Amazon QuickSight Q is optimized to understand complex business
language and data models from multiple domains, including
o Sales ("How is my sales tracking against quota?")
o Marketing ("What is the conversion rate across my campaigns?")
o Retail ("What are the top products sold week over week by region?")
o HR, Advertising, amongst others
Customer Benefits:
Get answers in seconds: With Amazon QuickSight Q, business users can
simply type a question in plain English and get an answer such as a
number, chart, or table in seconds.
Use business language that you are used to: With Amazon QuickSight Q,
you can ask questions using phrases and business language that you use
every day as part of your functional or vertical domain. Amazon
QuickSight Q is optimized to understand complex business language and
data models from multiple domains.
Ask any question on all your data: Amazon QuickSight Q provides answers
to questions on all of your data. Unlike conventional NLQ-based BI tools,
Q is not limited to answering questions from a single dataset or
dashboard.
Resources: Website
Amazon Redshift AQUA
What is it?
Today, in the analytics press release, we announced that the AQUA
(Advanced Query Accelerator) for Amazon Redshift preview is now open to
all customers and that AQUA will be generally available in January 2021.
AQUA is a new distributed and hardware-accelerated cache that enables
Redshift queries to run up to 10x faster than other cloud data warehouses.
Existing data warehousing architectures with centralized storage require
data to be moved to compute clusters for processing. As data warehouses
continue to grow over the next few years, the network bandwidth needed
to move all this data becomes a bottleneck on query performance.
AQUA takes a new approach to cloud data warehousing. AQUA brings the
compute to storage by doing a substantial share of data processing in place
on the innovative cache. In addition, it uses AWS-designed processors and a
scale-out architecture to accelerate data processing beyond anything
traditional CPUs can do today.
Availability:
Customers can sign up for the AQUA preview now and will be contacted
within a week with instructions. In order to use AQUA, customers must be
using RA3.4xl or RA3.16xl nodes in the us-east-1 (N. Virginia), us-west-2
(Oregon), or us-east-2 (Ohio) Regions.
Customer Benefits:
Brings compute closer to storage: AQUA accelerates Redshift queries by
running data-intensive tasks such as filtering and aggregation closer to
the storage layer. This avoids networking bandwidth limitations by
eliminating unnecessary data movement between where data is stored
and compute clusters.
Powered by AWS-designed processors: AQUA uses AWS-designed
processors to accelerate queries. This includes AWS Nitro chips adapted
to speed up data encryption and compression, and custom analytics
processors, implemented in FPGAs, to accelerate operations such as
filtering and aggregation.
Scale-out architecture: AQUA can process large amounts of data in
parallel across multiple nodes, and automatically scales out to add more
capacity as your storage needs grow over time.
Resources: Website
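The reason AQUA's "bring the compute to storage" approach helps can be sketched as predicate pushdown. The toy "storage node" below is hypothetical and has nothing to do with AQUA's actual interfaces; it only shows how filtering and aggregating before shipping data shrinks what crosses the network:

```python
# Toy illustration of predicate/aggregation pushdown: do the filtering
# and aggregation where the data lives, ship only the result.
# (AQUA implements this idea in hardware; this sketch is conceptual.)

ROWS = [{"region": r, "sales": s}
        for r, s in zip(["us", "eu", "us", "apac", "us"], [10, 20, 30, 40, 50])]

def naive_scan(rows, region):
    """Ship every row to the compute layer, then filter and aggregate."""
    shipped = list(rows)  # every row crosses the network
    total = sum(r["sales"] for r in shipped if r["region"] == region)
    return total, len(shipped)

def pushed_down(rows, region):
    """Filter and aggregate at the storage layer; ship one scalar."""
    total = sum(r["sales"] for r in rows if r["region"] == region)
    return total, 1  # only the final value crosses the network

print(naive_scan(ROWS, "us"))   # (90, 5) -- same answer, 5 rows moved
print(pushed_down(ROWS, "us"))  # (90, 1) -- same answer, 1 value moved
```

Both paths compute the same total; the difference is how much data has to move, which is exactly the bottleneck the section above describes.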
Amazon Neptune ML
What is it?
Amazon Neptune ML is a new capability of Amazon Neptune that uses Graph
Neural Networks (GNNs), a machine learning technique purpose-built for
graphs, to make easy, fast, and more accurate predictions using graph data.
With Neptune ML, you can improve the accuracy of most predictions for
graphs by over 50% when compared to making predictions using non-graph
methods.
Using the Deep Graph Library (DGL), an open-source library that makes it
easy to apply deep learning to graph data, Neptune ML automates the heavy
lifting of selecting and training the best ML model for graph data, and lets
users run machine learning on their graph directly using Neptune APIs and
queries. As a result, you can now create, train, and apply ML on Amazon
Neptune data in hours instead of weeks without the need to learn new tools
and ML technologies.
Availability:
Amazon Neptune ML is available in all AWS Regions where Neptune is
available. See details on the AWS Regions Table
Use Cases:
Fraud Detection: Companies lose millions (even billions) of dollars to
fraud, and want to detect fraudulent users, accounts, devices, IP
addresses, or credit cards to minimize losses. You can use a graph-based
representation to capture the interactions of the entities (user, device, or
card) and detect aggregations, such as when a user initiates multiple mini
transactions or uses different accounts that are potentially fraudulent.
Product Recommendation: Traditional recommendation systems use
analytics services manually to make product recommendations. Neptune
ML can identify new relationships directly on graph data, and easily
recommend the list of games a player would be interested in buying,
other players to follow, or products to purchase.
Customer Acquisition: Neptune ML automatically recommends next steps,
or product discounts to certain customers based on where they are in the
acquisition funnel.
Knowledge Graph: Knowledge graphs consolidate and integrate an
organization's information assets and make them more readily available
to all members of the organization. Neptune ML can infer missing links
across data sources and identify similar entities to enable better
knowledge discovery for all.
Customer Benefits:
Make predictions on graph data without ML expertise: Neptune ML
automatically creates, trains, and applies ML models on your graph data.
It uses DGL to automatically choose and train the best ML model for your
workload, enabling you to make ML-based predictions on graph data in
hours instead of weeks.
Improve the accuracy of most predictions by over 50%: Neptune ML uses
GNNs, a state-of-the-art ML technique applied to graph data that can
reason over billions of relationships in graphs, to enable you to make
more accurate predictions.
Resources: Website | What's new post | Leadership authored Blog
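The intuition behind the fraud-detection use case — that an entity's graph neighborhood carries signal a row-by-row model misses — can be sketched with simple neighbor scoring. Neptune ML actually trains Graph Neural Networks via the Deep Graph Library; this stdlib-only toy (with an invented account/device graph) only illustrates why graph structure helps:

```python
# Toy neighbor-based fraud scoring on a tiny account/device graph.
# Neptune ML uses trained GNNs, not this rule; this is the intuition only.
from collections import defaultdict

edges = [("acct1", "device9"), ("acct2", "device9"),
         ("acct3", "device9"), ("acct4", "device7")]
known_fraud = {"acct1", "acct2"}

# Build an undirected adjacency list.
adj = defaultdict(set)
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

def fraud_score(node):
    """Fraction of 2-hop neighbors (e.g. accounts sharing a device)
    that are known to be fraudulent."""
    two_hop = {n for mid in adj[node] for n in adj[mid]} - {node}
    if not two_hop:
        return 0.0
    return sum(1 for n in two_hop if n in known_fraud) / len(two_hop)

print(fraud_score("acct3"))  # 1.0 -- shares a device with two fraud accounts
print(fraud_score("acct4"))  # 0.0 -- no fraudulent neighbors
```

acct3 has no suspicious attributes of its own; only the shared-device relationship exposes it, which is the kind of signal a GNN learns to exploit at scale.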
AWS Categories: Storage
NEW! The Amplify admin UI is an abstraction layer on top of the Amplify CLI
that lets you configure back-ends on AWS with a graphical user interface. It
also allows you to manage content, users, and user groups in the app and to
delegate this work outside of the group of developers working on the
application. The admin UI does not require an AWS account until the point
you need the CLI.
Availability:
All AWS markets.
Customer Benefits:
Easily manage app users and app content: The Amplify admin UI (NEW!)
provides even non-developers with administrative access to manage app
users and app content without an AWS account.
AWS Categories: Management and Governance
AWS SaaS Boost
What is it?
AWS SaaS Boost is an open source, ready-to-use reference environment that
helps Independent Software Vendors (ISVs) accelerate their move to
Software-as-a-Service (SaaS). From small specialized software businesses to
large global solution providers, AWS SaaS Boost helps you accelerate moving
your applications to AWS with minimal modifications. Build, provision, and
manage your SaaS environment with greater confidence based on AWS best
practices and proven patterns from hundreds of successful SaaS companies.
Availability:
Available in all regions. See details in the AWS Regions Table.
Customer Benefits:
Accelerate development to a SaaS model on AWS with fewer resources.
Remove the complexity and risk of building SaaS so product teams can
focus on customer experience and innovation.
Simplify SaaS operations with out-of-the-box availability of key processes
including automated onboarding, tenant monitoring, and upgrade
orchestration.
Resources: Website | Blog
SaaS Lens for the Well-Architected Tool
What is it?
The SaaS Lens for the AWS Well-Architected Tool enables customers to
review and improve their cloud-based architectures and better understand
the business impact of their design decisions. The SaaS Lens for the AWS
Well-Architected Tool measures architecture against best practices and
provides actionable insights to achieve a well-architected system that is
more likely to achieve reliability, security, efficiency, and cost-effectiveness
in the cloud.
Use Cases:
Kick off your SaaS journey: Leverage the SaaS Lens whitepaper, best
practices, and improvement plans as a starting place for technical teams
to learn development concepts and begin the journey to SaaS.
Improve your architecture: Review SaaS workloads against a list of best
practices and leverage the improvement plans and resources to gain
knowledge to improve systems.
Identify and resolve risks: The SaaS Lens for the Well-Architected Tool
provides guidance to identify medium-risk and high-risk issues that can
impact your development roadmap.
Customer Benefits:
Learn architectural best practices for designing and operating systems in
the cloud.
Measure your architecture against best practices and receive actionable
insights for improvement.
Have a well-architected system that is more likely to achieve reliability,
security, efficiency, and cost-effectiveness in the cloud.
Resources: Website | Whitepaper | Blog
AWS SaaS Factory Insights Hub
What is it?
The AWS SaaS Factory Insights Hub is a growing library of business and
technical content to help customers gain insights, make informed decisions,
and enable themselves at any stage of the software-as-a-service (SaaS)
journey on AWS. AWS Partners can search by topics most relevant to their
Foundational Technical Review Lens in the Well-
business, content types, or specific business or technical role to find Architected Tool
whitepapers, case studies, best practices, videos, and more.
What is it?
Use cases: The AWS Foundational Technical Review (FTR) Lens in the AWS Well-
Whether you work for or with an organization offering SaaS solutions to Architected Tool provides a self-service way for AWS Partners to prepare for
customers, or you just want to take your SaaS knowledge to the next the Foundational Technical Review (formally known as the Technical
level, the AWS SaaS Factory Insights Hub will help customers stay up-to- Baseline Review). The AWS Well-Architected FTR Lens includes best practices
date on all things SaaS on AWS. for security, reliability, and operational excellence, representing the best-
Customer Benefits: practice requirements necessary for membership in the AWS Partner
Network. These best practices help partners take their first step to becoming
AWS SaaS Factory Insights Hub allows customers to search and browse
Well-Architected.
available resources by role, knowledge level, content category, content
type, or keywords. Customers can also view all new and featured content Availability:
to follow the latest updates from the AWS SaaS Factory team. They can Available to customers and AWS Partners at no additional charge and is
find various resources covering both business and technical aspects of a offered in all Regions where the AWS Well-Architected Tool is available.
SaaS delivery model, such as SaaS 101, SaaS product strategy, go-to- Customer Benefits:
market (GTM), packaging and pricing, migration strategies, billing and Easily identify risks in your architectures related to the Foundational
metering, tenant isolation, and data partitioning. Technical Review
Resources: Website | Blog Identify how to make workload improvements, mitigate risks, and
successfully complete the FTR.
Resources: Website | User Guide | Foundational Technical Page
ISV Partner Path
What is it?
ISV Partner Path is a distinct partner journey enabling a streamlined AWS
Partner Network (APN) experience for Independent Software Vendors (ISVs)
to build, market, and sell their solutions on AWS. ISV Partner Path
accelerates an ISV's engagement with AWS through prescriptive guidance,
curated programs, focused benefits, Marketplace capabilities, and unique
co-selling access, all accessible with no tier-based requirements. We are
introducing a new partner journey (ISV) in addition to the two that exist
today (Consulting and Technology), separating ISV from the Technology
Partner journey. We will not use APN Tiers (Registered, Select, Advanced)
in the ISV Partner Path as the default leveling framework for ISV Partners.
Availability:
ISV Partner Path will be available in January 2021, following the
announcement at re:Invent on December 3, 2020.
Customer Benefits:
Introducing ISV Partner Path allows us to remove the previous challenges
that ISVs had with the tier structure, as well as reducing requirements for
entry, enabling them to engage more quickly with AWS.
We will focus on the Partner solution instead of the Partner tier, which
makes this more relevant for the way that this Partner type goes to
market with their customers.
ProServe Ready
What is it?
The Public Sector ProServe Ready program provides AWS Consulting
Partners a formal and standardized way to work with AWS Professional
Services ("ProServe") on subcontracted engagements with AWS customers.
Bringing ProServe Ready to our Public Sector Partners and customers
accelerates our customers' journey to the cloud. ProServe Ready offers
partners formalized training on ProServe best practices, enabling them to
work seamlessly with AWS ProServe.
Availability:
Currently in pilot in the US and EMEA.
Customer Benefits:
Learn architectural best practices for designing and operating systems in
the cloud.
Measure your architecture against best practices and receive actionable
insights for improvement.
Have a well-architected system that is more likely to achieve reliability,
security, efficiency, and cost-effectiveness in the cloud.
Think Big for Small Business Pilot
What is it?
Think Big for Small Business is an AWS Partner Network (APN) program to
further enable and accelerate Small and/or Diverse Partners (often Partners
designated as Minority-Owned Businesses). The Program addresses their
challenges in meeting APN tier requirements and incentivizes partners to
grow and sustain their AWS businesses.
Availability:
Ongoing global pilot.
Benefits:
The Program provides small/diverse partners in the Registered and Select
Tiers with provisional access to APN tier benefits through a set of
requirements proportional to partner size, essentially giving them more
time and needed resources to achieve APN requirements. It also offers a
limited-time Technical Capability discount to small/diverse partners in the
Public Sector Solution Provider Program and Public Sector Distribution
while they work towards a competency. In addition, participating partners
will have access to a Small Partner Guide to navigate all relevant AWS
programs and resources for growing their business with AWS.
AWS Public Safety & Disaster Response Competency Expands to Include
Technology
What is it?
We are excited to launch an additional track within this AWS Competency
that showcases specialized and dedicated AWS Technology Partners. The
expansion includes the addition of 16 solutions from independent software
vendors (ISVs) that deliver AWS Partner technology for emergency
management operations, justice and public safety applications, PSDR
infrastructure resilience and recovery, 911 and emergency communications,
and PSDR data and analytics.
Resources: Website
AWS Partner Security Solutions for Government Workloads
What is it?
Government agencies and public sector organizations need rapidly
deployable and dependable security solutions to support their missions. To
respond, Amazon Web Services (AWS) launched the Security Solutions for
Government Workloads initiative under the Authority to Operate (ATO) on
AWS Program. This initiative works with Public Sector partners, members of
the AWS Partner Network (APN), to develop security solutions designed to
meet the unique security and compliance requirements of public sector
workloads.
The Security Solutions for Government Workloads initiative provides six
different partner-designed offerings to support remote workforce security
and web portal security for customer workloads.
AWS Public Sector Partners configure and manage these repeatable
packages. This model enables global scalability and availability while
supporting localized customizations for unique markets.
Customer Benefits:
Rapid solution deployment: Reduce ramp-up time and accelerate security
capabilities for government and public sector customer workloads by
using pre-configured and/or managed solutions.
High standards for privacy and data security: Deploy security solutions
configured and managed by AWS Public Sector Partners with a focus on
end-to-end security enforcement and automation.
Comprehensive security and compliance controls: Meet security and
compliance standards for finance, retail, healthcare, government, and
more with third-party validation of global compliance requirements
achieved and continually monitored by AWS to help customers.
Resources: Website
AI and ML Rapid Adoption Assistance for Public Sector Partners
What is it?
The American AI Initiative directs U.S. government agencies to double down
on efforts to advance artificial intelligence (AI) in order to protect and
improve the security and economy of our nation. AI and related
technologies (including machine learning [ML] and deep learning [DL]) can
effectively transform the way the government operates.
AI and ML Rapid Adoption Assistance is an additional benefit available to
members of the Public Sector Partner (PSP) Program under the AWS Partner
Network (APN). This initiative provides partners with a direct, scalable, and
automated mechanism to reach out to AWS experts for assistance in
delivering AI-based solutions that can help U.S. government agencies
provide better services to United States residents.
Partner Benefits:
Reduce ramp-up time for your AI and ML applications and deliver
advanced technology solutions: AWS AI and ML subject matter experts
will help partners build an AI and ML roadmap and accelerate their
solution development by guiding them through the envision, enablement,
and building phases.
Differentiate your business and grow your AWS practice: Develop a
business plan to expand your public sector customer base through the
American AI Initiative. Achieve recognition for your AI and ML solutions
through the government, education, and nonprofit competencies, the
AWS GovCloud (US) skill, and AWS solution provider programs.
Simplify your cloud procurement strategy and build your portfolio: Set the
stage to win business and contracts in the public sector with dedicated
support from the public sector bid and proposal team. Develop core go-
to-market assets to highlight your expertise on AWS AI and ML and earn
trust with customers.
Resources: Website