AWS re:Invent 2020

Red Hat OpenShift Service on AWS and Amazon Elastic Container Registry (ECR) Public were announced, providing an integrated OpenShift experience on AWS and a public container registry. Amazon Elastic Container Service (ECS) Anywhere and Amazon EKS Anywhere were also announced, allowing customers to run ECS- and EKS-managed containers on any infrastructure, including on-premises. These services aim to make it easier to deploy containers anywhere while reducing management effort and improving availability, security, and collaboration capabilities for container images and software.


Updated 12/9/2020

See below for a summary of each new service and its key customer benefit announced at re:Invent 2020.

News announced today (12/9) has been tagged as new.

AWS Categories: Serverless and Containers

Red Hat OpenShift Service on AWS

What is it?
Red Hat OpenShift Service on AWS provides an integrated experience to use OpenShift. If you are already familiar with OpenShift, you can accelerate your application development process by leveraging familiar OpenShift APIs and tools for deployments on AWS. With Red Hat OpenShift Service on AWS, you can use the wide range of AWS compute, database, analytics, machine learning, networking, mobile, and other services to build secure and scalable applications faster. Red Hat OpenShift Service on AWS comes with pay-as-you-go hourly and annual billing, a 99.95% SLA, and joint support from AWS and Red Hat.
Red Hat OpenShift Service on AWS makes it easier for you to focus on deploying applications and accelerating innovation by moving cluster lifecycle management to Red Hat and AWS. With Red Hat OpenShift Service on AWS, you can run containerized applications with your existing OpenShift workflows and reduce the complexity of management.

Availability:
ROSA is in limited preview at this time. Customers can register interest at: https://pages.awscloud.com/ROSA_Preview.html

Customer Benefits:
• Clear path to running in the cloud: Red Hat OpenShift Service on AWS delivers the production-ready OpenShift that many enterprises already use on-premises today, simplifying the ability to shift workloads to the AWS public cloud as business needs change.
• Deliver high-quality applications faster: Remove barriers to development and build high-quality applications faster with self-service provisioning, automatic security enforcement, and consistent deployment. Accelerate change iterations with automated development pipelines, templates, and performance monitoring.
• Flexible, cost-efficient pricing: Scale per your business needs and pay as you go with flexible pricing on an on-demand hourly or annual billing model.

Resources: Website

Amazon Elastic Container Registry (ECR) Public

What is it?
Amazon Elastic Container Registry (ECR) is a fully managed container registry that makes it easy to store, manage, share, and deploy your container images and artifacts anywhere. Amazon ECR eliminates the need to operate your own container repositories or worry about scaling the underlying infrastructure. Amazon ECR hosts your images in a highly available and high-performance architecture, allowing you to reliably deploy images for your container applications. You can share container software privately within your organization or publicly worldwide for anyone to discover and download.

Availability:
ECR is available for use globally; see details on the AWS Regions Table.

Use Cases:
• NEW: Public container image and artifact gallery: You can discover and use container software that vendors, open source projects, and community developers share publicly in the Amazon ECR public gallery. Popular base images such as operating systems, AWS-published images, Kubernetes add-ons, and files such as Helm charts can be found in the gallery.
• Team and public collaboration: Amazon ECR supports the ability to define and organize repositories in your registry using namespaces. This allows you to organize your repositories based on your team's existing workflows. You can set which API actions another user may perform on your repository (e.g., create, list, describe, delete, and get) through resource-level policies, allowing you to easily share your repositories with different users and AWS accounts, or publicly with anyone in the world.

Customer Benefits:
• Reduce your effort with a fully managed registry: Amazon Elastic Container Registry eliminates the need to operate and scale the infrastructure required to power your container registry. There is no software to install and manage or infrastructure to scale. Just push your container images to Amazon ECR and pull the images using any container management tool when you need to deploy.
• Securely share and download container images: Amazon Elastic Container Registry transfers your container images over HTTPS and automatically encrypts your images at rest. You can configure policies to manage permissions and control access to your images using AWS Identity and Access Management (IAM) users and roles without having to manage credentials directly on your EC2 instances.
• Provide fast and highly available access: Amazon Elastic Container Registry has a highly scalable, redundant, and durable architecture. Your container images are highly available and accessible, allowing you to reliably deploy new containers for your applications. You can reliably distribute public container images as well as related files such as Helm charts and policy configurations for use by any developer. ECR automatically replicates container software to multiple AWS Regions to reduce download times and improve availability.

Resources: Website
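The "Team and public collaboration" use case above mentions sharing a repository with other users and AWS accounts through resource-level policies. The following is a minimal, hypothetical sketch (not from the announcement) of how that could look with boto3; the account ID, repository name, and Region are placeholders.

```python
# Minimal sketch: grant another AWS account pull access to an ECR
# repository via a resource-level repository policy. Account ID,
# repository name, and Region below are hypothetical placeholders.
import json
import boto3

ecr = boto3.client("ecr", region_name="us-east-1")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCrossAccountPull",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},  # hypothetical account
            "Action": [
                "ecr:GetDownloadUrlForLayer",
                "ecr:BatchGetImage",
                "ecr:BatchCheckLayerAvailability",
            ],
        }
    ],
}

ecr.set_repository_policy(
    repositoryName="my-team/app-image",  # hypothetical repository
    policyText=json.dumps(policy),
)
```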

Amazon Elastic Container Service (ECS) Anywhere

What is it?
Amazon Elastic Container Service (ECS) Anywhere is a capability in Amazon ECS that enables customers to easily run and manage container-based applications on-premises, including on virtual machines (VMs), bare metal servers, and other customer-managed infrastructure.
With this announcement, customers will now be able to use ECS on any compute infrastructure, whether in AWS regions, AWS Local Zones, AWS Wavelength, AWS Outposts, or in any on-premises environment, without installing or operating container orchestration software.

Availability:
Amazon ECS Anywhere is planned to be available in all standard regions where Amazon ECS is available.

Use Cases:
• Use ECS as a common tool to deploy "anywhere": ECS Anywhere offers customers a single container orchestration platform for a consistent tooling and deployment experience across AWS and on-premises environments, now including customer-managed infrastructure. With ECS Anywhere, you get the same powerful simplicity of the ECS API, cluster management, monitoring, and tooling for containers running anywhere.
• Run containers on customer-managed infrastructure to meet specific requirements: ECS Anywhere enables customers to run workloads on-premises on their own infrastructure for reasons such as regulatory, latency, security, and data residency requirements.
• Leverage the simplicity of ECS while making use of existing capital investments: ECS Anywhere allows customers to utilize their on-premises investments as they need to in order to run containerized applications. Additionally, some customers are looking to use their on-premises infrastructure as base capacity while bursting into AWS during peaks or as their business grows. Over time, as they retire their on-premises hardware, they would continue to move the dial to use more compute on AWS until they have fully migrated.

Customer Benefits:
• Fully managed cloud-based control plane: No need to run, update, or maintain container orchestrators on-premises.
• Consistent tooling and governance: Use the same tools and APIs for all container-based applications regardless of operating environment.
• Manage your hybrid footprint: Run applications in on-premises environments and easily expand to the cloud when you're ready.

Resources: Website

Amazon EKS Anywhere

What is it?
Amazon EKS Anywhere is a new deployment option for Amazon EKS that enables you to easily create and operate Kubernetes clusters on-premises, including on your own virtual machines (VMs) and bare metal servers. EKS Anywhere provides an installable software package for creating and operating Kubernetes clusters on-premises and automation tooling for cluster lifecycle support.
EKS Anywhere creates clusters based on Amazon EKS Distro, the same Kubernetes distribution used by EKS for clusters on AWS. EKS Anywhere enables you to automate cluster management, reduce support costs, and eliminate the redundant effort of using multiple tools for operating Kubernetes clusters. EKS Anywhere is fully supported by AWS. In addition, you can leverage the EKS console to view all your Kubernetes clusters, running anywhere.

Availability:
As an on-premises offering, EKS Anywhere can run anywhere.

Use cases:
• Train models in the cloud and run inference on premises: With EKS Anywhere, you can combine the best of both worlds: train your ML model in the cloud using AWS managed services, and use the trained ML model in your on-premises setup.
• Workload migration (on-premises to cloud): With EKS Anywhere, you can have the same EKS tooling on-premises, and this consistency provides a quicker on-ramp of your Kubernetes-based workloads to the cloud and increases operational efficiency.
• Application modernization: EKS Anywhere empowers you to finally address the modernization of your applications, removing the heavy lifting of keeping up with upstream Kubernetes and security patches, so you can focus on the business value.
• Data sovereignty: Some large data sets cannot, or will not soon, leave the data center due to legal requirements concerning the location of the data. EKS Anywhere helps move the stateless part of the application to the cloud while keeping data in place.
• Bursting: Seasonal workloads can require a lot of compute (5x to 10x more than the baseline) for days or weeks. Being able to burst into the cloud provides this temporary capacity. With EKS Anywhere you can manage your workloads across on-premises and the cloud consistently and cost-effectively.

Customer Benefits:
• Simplify and automate Kubernetes management: EKS Anywhere provides you with consistent Kubernetes management tooling optimized to simplify cluster installation, with default configurations for OS, container registry, logging, monitoring, networking, and storage.
• Create consistent clusters: Amazon EKS Anywhere uses EKS Distro, the same Kubernetes distribution deployed by Amazon EKS, allowing you to easily create clusters consistent with Amazon EKS best practices. EKS Anywhere eliminates the fragmented collection of vendor support agreements and tools required to install and operate Kubernetes clusters on-premises.
• Deliver a more reliable Kubernetes environment: EKS Anywhere gives you a Kubernetes environment on-premises that is easier to support. EKS Anywhere helps you integrate Kubernetes with existing infrastructure, keep open source software up to date and patched, and maintain business continuity with cluster backups and recovery.

Resources: Website
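The ECS Anywhere use cases above stress that the same ECS API is used everywhere. As an illustration only (ECS Anywhere itself had not yet launched at the time of this announcement), the sketch below registers a task definition and runs a task with boto3 against a standard ECS cluster; the cluster name and container image are hypothetical placeholders.

```python
# Illustrative sketch of the standard ECS API that ECS Anywhere is expected
# to extend to customer-managed infrastructure. Names are placeholders.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Register a minimal task definition.
task_def = ecs.register_task_definition(
    family="demo-web",
    containerDefinitions=[
        {
            "name": "web",
            "image": "nginx:latest",  # illustrative image reference
            "memory": 256,
            "essential": True,
        }
    ],
)

# Run the task on an existing cluster with EC2 capacity. ECS Anywhere is
# expected to let the same call target registered on-premises capacity.
ecs.run_task(
    cluster="demo-cluster",  # hypothetical cluster
    taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
    launchType="EC2",
    count=1,
)
```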

AWS Proton

What is it?
AWS Proton is the first fully managed application deployment service for container and serverless applications. Platform teams can use Proton to connect and coordinate all the different tools needed for infrastructure provisioning, code deployments, monitoring, and updates.
Proton enables platform teams to give developers an easy way to deploy their code using containers and serverless technologies, using the management tools, governance, and visibility needed to ensure consistent standards and best practices.

Availability:
During preview: us-east-1, us-east-2, us-west-2, ap-northeast-1, and eu-west-1. Global region availability planned for GA.

Use Cases:
• Streamlined management: Platform teams use AWS Proton to manage and enforce a consistent set of standards for compute, networking, continuous integration/continuous delivery (CI/CD), and security and monitoring in modern container and serverless environments. With Proton, you can see what was deployed and who deployed it. You can automate in-place infrastructure updates when you update your templates.
• Managed developer self-service: AWS Proton enables platform teams to offer a curated self-service interface for developers, using the familiar experience of the AWS Management Console or AWS Command Line Interface (AWS CLI). Using approved stacks, authorized developers in your organization are able to use Proton to create and deploy a new production infrastructure service for their container and serverless applications.
• Infrastructure as code (IaC) adoption: AWS Proton uses infrastructure as code (IaC) to define application stacks and configure resources. It integrates with popular AWS and third-party CI/CD and observability tools, offering a flexible approach to application management. Proton makes it easy to provide your developers with a curated set of building blocks they can use to accelerate the pace of business innovation.

Customer Benefits:
• Set guardrails: AWS Proton enables your developers to safely adopt and deploy applications using approved stacks that you manage. It delivers the right balance of control and flexibility to ensure developers can continue rapid innovation.
• Increase developer productivity: AWS Proton lets you adopt new technologies without slowing your developers down. It gives them infrastructure provisioning and code deployment in a single interface, allowing developers to focus on their code.
• Enforce best practices: When you adopt a new feature or best practice, AWS Proton helps you update out-of-date applications with a single click. With Proton, you can ensure consistent architecture across your organization.

Resources: Website | What's new post

AWS Lambda Container Image Support & 1ms billing granularity

What is it?
AWS Lambda supports packaging and deploying functions as container images, making it easy for customers to build Lambda-based applications by using familiar container image tooling, workflows, and dependencies. Customers also benefit from the operational simplicity, automatic scaling with sub-second startup times, high availability, native integrations with 140 AWS services, and pay-for-use model offered by AWS Lambda. Enterprise customers can use a consistent set of tools with both their Lambda and containerized applications for central governance requirements such as security scanning and image signing. Customers can create their container deployment images by starting with either AWS Lambda provided base images or by using one of their preferred community or private enterprise images.

Availability:
Container Image Support for AWS Lambda and 1ms billing granularity for AWS Lambda are available in all regions where AWS Lambda is available, except for regions in China.

Use Cases:
• Build cross-platform applications with both containers and AWS Lambda.
• Large applications, or applications relying on large dependencies, such as machine learning, analytics, or data intensive apps.
• Customers who want to run serverless applications but have standardized on container tooling within their organizations.

Customer Benefits:
• Leverage familiar container tooling and workflows: Leverage the flexibility and familiarity of container tooling, and the agility and operational simplicity of AWS Lambda, to be more agile when building applications.
• Get the flexibility of containers and agility of AWS Lambda: When invoked, functions deployed as container images are executed as-is, with sub-second automatic scaling. You benefit from high availability, only pay for what you use, and can take advantage of 140 native service integrations.
• Build and deploy large workloads to AWS Lambda: With container images of up to 10GB, you can easily build and deploy larger workloads that rely on sizable dependencies, such as machine learning or data intensive workloads.
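To make the container packaging model described above concrete, here is a minimal, hypothetical Python handler of the kind that could be packaged into a Lambda container image. The file name, handler name, and base image reference are illustrative; a Dockerfile (not shown) would typically start FROM an AWS-provided base image such as public.ecr.aws/lambda/python and point its CMD at this handler.

```python
# app.py - minimal illustrative handler for a container-packaged Lambda
# function (names are placeholders, not from the announcement).
import json


def handler(event, context):
    # Echo the incoming event; a real function would do application work
    # here, e.g. load a large ML model baked into the container image.
    return {
        "statusCode": 200,
        "body": json.dumps({"received": event}),
    }
```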

Amazon EKS Add-ons

What is it?
Amazon Elastic Kubernetes Service (Amazon EKS) gives you the flexibility to start, run, and scale Kubernetes applications in the AWS cloud or on-premises. Amazon EKS helps you provide highly-available and secure clusters and automates key tasks such as patching, node provisioning, and updates.
NEW! Add-ons – Add-ons are common operational software which extend the operational functionality of Kubernetes. You can use EKS to install and keep this software up to date. When you start an Amazon EKS cluster, you can select the add-ons that you would like to run in the cluster, including Kubernetes tools for observability, networking, autoscaling, and AWS service integrations.

Availability:
Amazon EKS is generally available in all AWS public regions as of November 2020. Support in the new Osaka region is coming soon.

Use Cases:
• Hybrid Deployments
• Web Applications
• Big data
• Machine Learning
• Batch Processing

Customer Benefits:
• NEW! Service Integrations – AWS Controllers for Kubernetes (ACK) lets you directly manage AWS services from Kubernetes. ACK makes it simple to build scalable and highly-available Kubernetes applications that utilize AWS services.
• NEW! Integrated Kubernetes Console – EKS provides an integrated console for Kubernetes clusters. Cluster operators and application developers can use EKS as a single place to organize, visualize, and troubleshoot their Kubernetes applications running on Amazon EKS. The EKS console is hosted by AWS and is available automatically for all EKS clusters.
• NEW! Add-ons – Add-ons are common operational software which extend the operational functionality of Kubernetes. You can use EKS to install and keep this software up to date. When you start an Amazon EKS cluster, you can select the add-ons that you would like to run in the cluster, including Kubernetes tools for observability, networking, autoscaling, and AWS service integrations.

Resources: Website | What's new post

Amazon EKS Distro

What is it?
Amazon EKS Distro is a Kubernetes distribution used by Amazon EKS to help create reliable and secure clusters. EKS Distro includes binaries and containers of open source Kubernetes, etcd (cluster configuration database), networking, and storage plugins, all tested for compatibility. You can deploy EKS Distro wherever your applications need to run.
You can deploy clusters and let AWS take care of testing and tracking Kubernetes updates, dependencies, and patches. Each EKS Distro verifies new Kubernetes versions for compatibility. The source code, open source tools, and settings are provided for reproducible builds. EKS Distro will provide extended support for Kubernetes, with builds of previous versions updated with the latest security patches. EKS Distro is available as open source on GitHub.

Availability:
Amazon EKS Distro is open source software that can be run anywhere.

Customer Benefits:
• Get consistent Kubernetes builds: EKS Distro provides the same installable builds and code of open source Kubernetes that are used by Amazon EKS. You can perform reproducible builds with the provided source code, tooling, and documentation.
• Run Kubernetes on any infrastructure: You can deploy EKS Distro on your own self-provisioned hardware infrastructure, including bare-metal servers or VMware vSphere virtual machines, or on Amazon EC2 instances.
• Have a more reliable and secure distribution: EKS Distro will provide extended support for Kubernetes versions in alignment with the Amazon EKS Version Lifecycle Policy, by updating builds of previous versions with the latest critical security patches.

Resources: Website | What's new post
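The EKS Add-ons section above describes selecting add-ons that EKS installs and keeps up to date. A minimal, hedged sketch of what that could look like through the AWS SDK is below; it assumes a boto3 release recent enough to include the EKS add-ons operations introduced with this announcement, and the cluster name is a hypothetical placeholder.

```python
# Hedged sketch: enable and list EKS add-ons on an existing cluster via
# boto3. Assumes the boto3 version in use exposes create_addon/list_addons.
# The cluster name is a placeholder.
import boto3

eks = boto3.client("eks", region_name="us-east-1")

# Install (or take over management of) the VPC CNI networking add-on.
eks.create_addon(
    clusterName="demo-cluster",     # hypothetical cluster
    addonName="vpc-cni",            # Amazon VPC CNI plugin add-on
    resolveConflicts="OVERWRITE",   # let EKS reconcile existing config
)

# List the add-ons EKS is now managing for the cluster.
print(eks.list_addons(clusterName="demo-cluster")["addons"])
```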

Amazon Managed Workflows for Apache Airflow

What is it?
Amazon Managed Workflows is a managed orchestration service for Apache Airflow that makes it easy to set up and operate end-to-end data pipelines in the cloud at scale. Apache Airflow is an open-source tool used to programmatically author, schedule, and monitor sequences of processes and tasks referred to as "workflows." With Managed Workflows you can use the same open source Airflow platform and Python language to create workflows without having to manage the underlying infrastructure for scalability, availability, and security. Managed Workflows automatically scales its workflow execution capacity up and down to meet your needs, and is integrated with AWS security services to enable fast and secure access to data.

Availability:
us-east-1 (N. Virginia), us-east-2 (Ohio), us-west-2 (Oregon), eu-north-1 (Stockholm), eu-west-1 (Ireland), eu-central-1 (Frankfurt), ap-southeast-2 (Sydney), ap-northeast-1 (Tokyo), and ap-southeast-1 (Singapore)

Use Cases:
• Enable Complex Workflows: Big data platforms often need complicated data pipelines that connect many internal and external services. To use this data, customers need to first build a workflow that defines the series of sequential tasks that prepare and process the data. Managed Workflows executes these workflows on a schedule or on-demand.
• Coordinate Extract, Transform, and Load (ETL) Jobs: You can use Managed Workflows as an open source alternative to orchestrate multiple ETL jobs involving a diverse set of technologies in an arbitrarily complex ETL workflow.
• Prepare Machine Learning (ML) Data: In order to enable machine learning, source data must be collected, processed, and normalized so that ML modeling systems like the fully managed service Amazon SageMaker can train on that data. Managed Workflows solves this problem by making it easier to stitch together the steps it takes to automate your ML pipeline.

Customer Benefits:
• Deploy Airflow rapidly at scale: Get started in minutes from the AWS Management Console, CLI, AWS CloudFormation, or AWS SDK. Create an account and begin deploying Directed Acyclic Graphs (DAGs) to your Airflow environment immediately, without reliance on development resources or provisioning infrastructure.
• Run Airflow with built-in security: With Managed Workflows, your data is secure by default as workloads run in your own isolated and secure cloud environment using Amazon's Virtual Private Cloud (VPC), and data is automatically encrypted using AWS Key Management Service (KMS).
• Reduce operational costs: Managed Workflows is a managed service, removing the heavy lift of running open source Apache Airflow at scale. With Managed Workflows, you can reduce operational costs and engineering overhead while meeting the on-demand monitoring needs of end-to-end data pipeline orchestration.

Resources: Website

Amazon MQ

What is it?
Amazon MQ is a managed message broker service for Apache ActiveMQ and RabbitMQ that makes it easy to set up and operate message brokers on AWS. Amazon MQ reduces your operational responsibilities by managing the provisioning, setup, and maintenance of message brokers for you. Because Amazon MQ connects to your current applications with industry-standard APIs and protocols, you can easily migrate to AWS without having to rewrite code.

Availability:
Amazon MQ is available in 19 AWS Regions; see details on the AWS Regions Table.

Customer Benefits:
• Migrate quickly: Connecting your current applications to Amazon MQ is easy because it uses industry-standard APIs and protocols for messaging, including JMS, NMS, AMQP 1.0 and 0-9-1, STOMP, MQTT, and WebSocket. This enables you to move from any message broker that uses these standards to Amazon MQ by simply updating the endpoints of your applications to connect to Amazon MQ.
• Offload operational responsibilities: Amazon MQ manages the administration and maintenance of message brokers and automatically provisions infrastructure for high availability. There is no need to provision hardware or install and maintain software, and Amazon MQ automatically manages tasks such as software upgrades, security updates, and failure detection and recovery.
• Durable messaging made easy: Amazon MQ is automatically provisioned for high availability and message durability when you connect your message brokers. Amazon MQ stores messages redundantly across multiple Availability Zones (AZ) within an AWS region and will continue to be available if a component or AZ fails.

Resources: Website
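Since Managed Workflows runs standard Apache Airflow DAGs written in Python, a minimal, hypothetical DAG of the kind you would upload to an environment is sketched below. The DAG ID, schedule, and the extract/load callables are illustrative, and the imports follow the Airflow 1.10 style that the service launched with.

```python
# Minimal sketch of an Airflow DAG for a Managed Workflows environment.
# Names, schedule, and the two callables are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python_operator import PythonOperator


def extract():
    print("pull data from a source system")


def load():
    print("write prepared data to a destination")


with DAG(
    dag_id="example_etl",
    start_date=datetime(2020, 12, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # run extract, then load
```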

AWS Categories: Compute

Amazon EC2 Mac Instances

What is it?
Mac instances enable customers to run on-demand macOS workloads in the cloud for the first time, extending the flexibility, scalability, and cost benefits of AWS to all Apple developers. Customers who rely on the Xcode IDE for creating iPhone, iPad, Mac, Apple Watch, Apple TV, and Safari apps can now provision and access macOS environments within minutes with simple mouse clicks or API calls, dynamically scale capacity as needed, and benefit from AWS's pay-as-you-go pricing.
Amazon EC2 Mac instances are built on Mac mini computers, and offer customers a choice of both the macOS Mojave (10.14) and macOS Catalina (10.15) versions.

Availability:
us-east-1 (N. Virginia), us-east-2 (Ohio), us-west-2 (Oregon), eu-west-1 (Ireland), and ap-southeast-1 (Singapore)

Customer Benefits:
• Quickly provision macOS environments: Time and resources previously spent building and maintaining on-premises macOS environments can now be refocused on building creative and useful apps. Development teams can now seamlessly provision and access macOS compute environments to enjoy faster app builds and convenient, distributed testing, without having to procure, configure, operate, maintain, and upgrade fleets of physical computers.
• Reduce costs: Mac instances allow developers to launch macOS environments within minutes, adjust provisioned capacity as needed, and only pay for actual usage with AWS's pay-as-you-go pricing. Developers save money since they only need to pay for the systems that are in use. For example, more capacity can be used when building an app, and less capacity when testing.
• Extend your toolkits: Amazon EC2 Mac instances provide developers seamless access to the broad set of over 175 AWS services so they can more easily and efficiently collaborate with team members, and develop, test, share, analyze, and improve their apps. Customers can leverage AWS services such as Elastic Block Store (EBS) for block-level storage, Elastic Load Balancer (ELB) for distributing build queues, Simple Storage Service (S3) for extreme scale object storage, Amazon Machine Images (AMIs) for orchestration, and CodeBuild for managed CI/CD.

Resources: Website | What's new post

Amazon EC2 D3 and D3en Instances

What is it?
Amazon EC2 D3 and D3en instances provide cost-effective, high capacity local storage-per-vCPU for massively-scaled storage workloads. D3 and D3en instances are the next generation of dense HDD storage instances, offering 30% higher processor performance, increased capacity, and reduced cost compared to D2 instances. Additionally, D3 instances provide 2.5x higher networking speed and 45% higher disk throughput compared to D2 instances. D3en instances, enhanced storage and high-speed networking variants, provide 7.5x higher networking speed, 100% higher disk throughput, 7x more storage capacity (up to 336 TB), and 80% lower cost per-TB of storage compared to D2 instances.
D3 instances are a great fit for dense storage workloads including big data and analytics, data warehousing, and high scale file systems. D3en instances are a great fit for dense and distributed workloads including high capacity data lakes, clustered file systems, and other multi-node storage systems with significant inter-node I/O. With D3 and D3en instances, you can easily migrate from previous-generation D2 instances or on-premises infrastructure to a platform optimized for dense HDD storage workloads.

Availability:
us-east-1, us-east-2, us-west-2, and eu-west-1 regions

Customer Benefits:
• Lower costs: Next-generation Amazon EC2 D3 instances provide increased price-performance and lower cost than D2 instances. D3 and D3en instances feature 30% higher compute performance than D2 instances. D3en instances also offer 80% lower cost-per-TB of storage compared to D2 instances.
• Better performance: D3 and D3en instances satisfy the needs of applications with high requirements for sequential storage throughput. D3 and D3en instances enable 45% and 100% higher disk throughput respectively compared to D2 instances. D3 and D3en instances provide 2.5x and 7.5x higher networking throughput respectively than D2 instances, allowing for high speed multi-node configurations.
• Maximize resource efficiency: D3 and D3en instances are powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor, which delivers practically all of the compute and memory resources of the host hardware to your instances. This frees up additional compute, memory, and I/O, allowing your applications to do more with available hardware resources, including local HDD storage.

Resources: Website | What's new post
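The Mac instances section above highlights provisioning macOS environments with API calls. EC2 Mac instances run as bare metal instances on Dedicated Hosts, so a minimal, hedged boto3 sketch would first allocate a host and then launch onto it; the macOS AMI ID below is a hypothetical placeholder.

```python
# Hedged sketch: allocate a Dedicated Host for the mac1.metal instance type
# and launch a macOS instance onto it. The AMI ID is a placeholder; use a
# macOS AMI available in your Region.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Mac instances are bare metal, so capacity is allocated as a Dedicated Host.
host = ec2.allocate_hosts(
    AvailabilityZone="us-east-1a",
    InstanceType="mac1.metal",
    Quantity=1,
)
host_id = host["HostIds"][0]

# Launch a macOS instance (e.g., Catalina 10.15) on the allocated host.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical macOS AMI ID
    InstanceType="mac1.metal",
    MinCount=1,
    MaxCount=1,
    Placement={"Tenancy": "host", "HostId": host_id},
)
```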

Amazon EC2 Instances Powered by AWS Graviton2 Processors

What is it?
The new general purpose (M6g), general purpose burstable (T4g), compute optimized (C6g), and memory optimized (R6g) Amazon EC2 instances deliver up to 40% improved price performance over comparable x86-based instances for a broad spectrum of workloads including application servers, open source databases, in-memory caches, microservices, gaming servers, electronic design automation, high-performance computing, and video encoding. M6gd, C6gd, and R6gd are variants of these instances with local NVMe-based SSD storage, and C6gn instances deliver 100 Gbps networking for compute intensive applications with support for Elastic Fabric Adapter (EFA). These instances are powered by new AWS Graviton2 processors that deliver up to 7x performance, 4x the number of compute cores, 2x larger private caches per core, and 5x faster memory compared to the first-generation AWS Graviton processors. AWS Graviton2 processors are built on advanced 7 nanometer manufacturing technology. They utilize 64-bit Arm Neoverse cores and custom silicon designed by AWS, and introduce several performance optimizations versus the first generation. AWS Graviton2 processors provide 2x faster floating-point performance per core for scientific and high-performance computing workloads, custom hardware acceleration for compression workloads, fully encrypted DRAM memory, and optimized instructions for faster CPU-based machine learning inference.

Availability:
US East (N. Virginia, Ohio), US West (N. California, Oregon), Europe (Ireland, Frankfurt, London), Canada (Central), and Asia Pacific (Mumbai, Singapore, Sydney, Tokyo) regions

Customer Benefits:
• Best price performance for a broad spectrum of workloads: AWS Graviton2-based general-purpose (M6g), general-purpose burstable (T4g), compute-optimized (C6g), and memory-optimized (R6g) EC2 instances deliver up to 40% better price performance over comparable current generation x86-based instances for a broad spectrum of workloads such as application servers, micro-services, video encoding, high-performance computing, electronic design automation, compression, gaming, open-source databases, in-memory caches, and CPU-based machine learning inference.
• Extensive ecosystem support: AWS Graviton2 processors, based on the 64-bit Arm architecture, are supported by popular Linux operating systems including Amazon Linux 2, Red Hat, SUSE, and Ubuntu. Many popular applications and services from AWS and Independent Software Vendors also support AWS Graviton2-based instances, including Amazon ECS, Amazon EKS, Amazon ECR, Amazon CodeBuild, Amazon CodeCommit, Amazon CodePipeline, Amazon CodeDeploy, Amazon CloudWatch, Crowdstrike, Datadog, Docker, Drone, GitLab, Jenkins, NGINX, Qualys, Rancher, Rapid7, Tenable, and TravisCI. Arm developers can also leverage this ecosystem to build applications natively in the cloud, thereby eliminating the need for emulation and cross-compilation, which are error prone and time consuming.
• Enhanced security for cloud applications: Developers building applications for the cloud rely on cloud infrastructure for security, speed, and optimal resource footprint. AWS Graviton2 processors feature key capabilities that enable developers to run cloud native applications securely, and at scale, including always-on 256-bit DRAM encryption and 50% faster per core encryption performance compared to first-generation AWS Graviton. Graviton2 powered instances are built on the Nitro System, which features the Nitro security chip with dedicated hardware and software for security functions, as well as encrypted EBS storage volumes by default.

Resources: Website

Amazon EC2 G4ad instances

What is it?
G4ad instances are powered by AMD Radeon Pro V520 GPUs, providing the best price performance for graphics intensive applications in the cloud. These instances offer up to 45% better price performance compared to G4dn instances, which were already the lowest cost instances in the cloud, for graphics applications such as remote graphics workstations, game streaming, and rendering that leverage industry-standard APIs such as OpenGL, DirectX, and Vulkan. They provide up to 4 AMD Radeon Pro V520 GPUs, 64 vCPUs, 25 Gbps networking, and 2.4 TB local NVMe-based SSD storage.

Availability:
us-east-1 (N. Virginia), us-east-2 (Ohio), us-west-2 (Oregon), eu-west-1 (Ireland), and ap-southeast-1 (Singapore)

Use Cases:
• Virtual Workstations
• Graphics intensive applications

Customer Benefits:
• Highest Performance and Lowest Cost Instances for Graphics Intensive Applications: G4ad instances are the lowest cost instances in the cloud for graphics intensive applications. They provide up to 45% better price performance, including up to 40% better graphics performance, compared to G4dn instances for graphics applications such as remote graphics workstations, game streaming, and rendering that leverage industry standard APIs such as OpenGL, DirectX, and Vulkan.
• Simplified Management of Virtual Workstations at the Lowest Cost in the Cloud: G4ad instances allow customers to configure virtual workstations with high-performance simulation, rendering, and design capabilities in minutes, allowing customers to scale quickly. Customers can use AMD Radeon Pro Software for Enterprise and the high-performance remote display protocol NICE DCV with G4ad instances at no additional cost to manage their virtual workstation environments, with support for up to two 4K monitors per GPU.
• Dependability in Third Party Applications: The AMD professional graphics solution includes an extensive Independent Software Vendor (ISV) application testing and certification process called the Day Zero Certification Program. This helps ensure that developers can leverage the latest AMD Radeon Pro Software for Enterprise features combined with the reliability of certified software on the day of the driver release.

Resources: Website
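As a small illustration of how the Graviton2-based instance families described above are consumed through the standard EC2 API, the following boto3 sketch launches an M6g instance. The AMI ID is a hypothetical placeholder; you would use an arm64 AMI (for example, Amazon Linux 2 for arm64) available in your Region.

```python
# Illustrative sketch (not from the announcement): launching a Graviton2-
# based m6g instance with boto3. The AMI ID is a hypothetical placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical arm64 AMI ID
    InstanceType="m6g.large",          # Graviton2 general-purpose instance
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```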

AWS Wavelength Zone in Las Vegas

What is it?
Today, we are announcing the availability of a new AWS Wavelength Zone on Verizon's 5G Ultra Wideband network in Las Vegas. Wavelength Zones are now available in eight cities, including the seven previously announced cities of Boston, San Francisco Bay Area, New York City, Washington DC, Atlanta, Dallas, and Miami.
AWS Wavelength brings AWS services to the edge of the 5G network, minimizing the latency to connect to an application from 5G connected devices. Application traffic can reach application servers running in Wavelength Zones, AWS infrastructure deployments that embed AWS compute and storage services within the communications service providers' datacenters at the edge of the 5G networks, without leaving the telco provider's network. This reduces the extra network hops to the Internet that can result in latencies of tens of milliseconds, preventing customers from taking full advantage of the bandwidth and latency advancements of 5G.

Availability:
Today, Wavelength was announced for availability in Las Vegas. In August 2020, AWS announced the launch of two Wavelength Zones, in San Francisco and Boston, with Verizon. Wavelength Zones in 8 other cities in the United States are planned for launch in 2020. Globally, AWS is partnering with other leading telecommunications companies including KDDI, SK Telecom, and Vodafone to launch Wavelength across Europe, Japan, and South Korea in 2020, with more telco partners coming soon.

Use cases:
• Connected Vehicles: Cellular Vehicle-to-Everything (C-V2X) is an increasingly important platform for enabling intelligent driving, real-time HD maps, road safety, and more.
• Interactive Live Video Streams: Wavelength provides the ultra-low latency needed to live stream high-resolution video and high-fidelity audio, as well as to embed interactive experiences into live video streams.
• AR/VR: By accessing compute resources on AWS Wavelength, AR/VR applications can reduce the Motion to Photon (MTP) latencies to the <20 ms benchmark needed to offer a realistic customer experience.
• Smart Factories: Industrial automation applications use ML inference at the edge to analyze images and videos to detect quality issues on fast moving assembly lines and trigger actions to remediate the problem.
• Real-time gaming: With AWS Wavelength, the most demanding games can be made available on end devices that have limited processing power by streaming these games from game servers in Wavelength Zones.
• Healthcare (ML-assisted diagnostics): AI/ML-driven video analytics and image matching solutions help doctors speed up diagnosis of observed conditions.

Customer Benefits:
• Ultra-low latency for 5G: Wavelength combines AWS compute and storage services with the high bandwidth and low latency of 5G networks to enable developers to innovate and build a whole new class of applications that serve end-users with ultra-low latencies over the 5G network.
• Consistent AWS experience: Wavelength enables you to use familiar and powerful AWS tools and services to build, manage, secure, and scale your applications.
• Flexible and scalable: With Wavelength, you can start small and scale as your needs grow, without worrying about managing physical hardware or maximizing the utilization of purchased capacity.
• Global 5G network: Wavelength will be available within communications service providers' (CSP) networks such as Verizon, Vodafone, KDDI, and SK Telecom. More CSPs around the world will be available in the near future.

Resources: Website | What's New post

Amazon EC2 instances powered by Habana Accelerators

What is it?
Amazon EC2 instances powered by Habana accelerators are a new type of EC2 instance specifically optimized for deep learning training workloads to deliver the lowest cost-to-train machine learning models in the cloud. Habana-based instances are ideal for deep learning training workloads of applications such as natural language processing, object detection and classification, recommendation engines, and autonomous vehicle perception. Habana, an Intel company, will provide the SynapseAI SDK and tools that simplify building with, or migrating from, current GPU-based EC2 instances to Habana-based EC2 instances. SynapseAI will be natively integrated with common ML frameworks like TensorFlow and PyTorch, and provide the ability to easily port existing training models from using GPUs to Habana accelerators. Customers will be able to launch the new EC2 instances using AWS Deep Learning AMIs, or via Amazon EKS and ECS for containerized applications, and also have the ability to use these instances via Amazon SageMaker.

Availability:
Amazon EC2 Habana-based instances will be available in April 2021 in 3 sizes across 2 regions: us-east-1 and us-west-2. They can be purchased as On-Demand, Reserved Instances, Savings Plans, or Spot Instances. Habana-based instances are also available for use with Amazon SageMaker, Amazon EKS, and Amazon ECS.

Customer Benefits:
• Better performance and lower cost: Habana-based EC2 instances will leverage up to 8 Habana Gaudi accelerators and deliver up to 40% better price performance than current GPU-based EC2 instances for training deep learning models. Habana-based instances also provide customers the ability to scale out from a single accelerator to hundreds, significantly reducing time-to-train.

Resources: Website
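Wavelength Zones are exposed through the regular EC2 APIs as additional zones you opt in to. A hedged boto3 sketch for discovering them is below; it assumes the zone-type filter supported by DescribeAvailabilityZones, and the Region is a placeholder.

```python
# Hedged sketch: list Wavelength Zones visible to the account in a Region,
# including zones not yet opted in to. Assumes the "zone-type" filter of
# DescribeAvailabilityZones; the Region is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

zones = ec2.describe_availability_zones(
    AllAvailabilityZones=True,
    Filters=[{"Name": "zone-type", "Values": ["wavelength-zone"]}],
)

for zone in zones["AvailabilityZones"]:
    # GroupName is what you opt in to before creating subnets in the zone.
    print(zone["ZoneName"], zone["GroupName"], zone["OptInStatus"])
```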

AWS Trainium

What is it?
AWS Trainium is a high performance machine learning (ML) chip, custom designed by AWS to provide the best price performance for training machine learning models in the cloud. The Trainium chip is specifically optimized for deep learning training workloads for applications including image classification, semantic search, translation, voice recognition, natural language processing, and recommendation engines. AWS Trainium uses the AWS Neuron SDK, which is integrated with popular ML frameworks, including TensorFlow, MXNet, and PyTorch, allowing customers to easily migrate from using GPU instances for training deep learning models with minimal code changes. AWS Trainium will be available via Amazon EC2 instances and AWS Deep Learning AMIs as well as managed services including Amazon SageMaker, Amazon ECS, EKS, and AWS Batch.

Availability:
AWS Trainium will be available in all AWS commercial and GovCloud regions.

Use Cases:
• Deep learning training for applications such as image classification, semantic search, translation, voice recognition, natural language processing, and recommendation engines.

Customer Benefits:
• Better performance and lower cost: AWS Trainium will deliver the most cost-effective ML training in the cloud and will offer the most TFLOPS of compute power of any ML instance in the cloud. Customers will be able to achieve significantly better performance in training machine learning models and realize dramatically lower cost compared to AWS EC2 GPU instances.

Resources: Website

AWS Outposts 1U and 2U Servers

What is it?
AWS Outposts is a fully managed service that extends AWS infrastructure, AWS services, APIs, and tools to virtually any datacenter, co-location space, or on-premises facility for a truly consistent hybrid experience. AWS Outposts is ideal for workloads that need low latency access to on-premises applications or systems, local data processing, and secure storage of sensitive customer data that needs to remain on premises, anywhere there is no AWS region, including inside company-controlled environments or countries.
AWS Outposts 1U and 2U form factors are rack-mountable servers that provide local compute and networking services to edge locations that have limited space or smaller capacity requirements. Outposts servers are ideal for customers with low-latency or local data processing needs for on-premises locations, like retail stores, branch offices, healthcare provider locations, or factory floors.
AWS will deliver Outposts servers directly to you, and you can either have your onsite personnel install them or have them installed by a preferred third-party contractor. After the Outposts servers are connected to your network, AWS will remotely provision compute and storage resources so you can start launching applications.
AWS Outposts 1U and 2U form factors will be available in 2021. To receive more information about Outposts servers, sign up here.

Availability:
At GA, Outposts can be shipped to and installed in the following countries: NA – US; EMEA – all EU countries, Switzerland, and Norway; APAC – Australia, Japan, and South Korea.

Use Cases:
• Low Latency: Customers with low latency requirements need to make near real time responses to end user applications or have to communicate with other on-premises systems or control on-site equipment. They have adopted the Amazon cloud for centralized operations but need to run compute, graphics, or storage intensive workloads on premises to execute localized workflows with precision and quality.
• Local Data Processing: Customers that need to access data stores that will remain on-premises for a time. Some customers run data intensive workloads that collect and process hundreds of TBs of data a day. They would like to process this data locally to respond to events in real time and to have better control on analyzing, backing up, and restoring the data.

Key Verticals:
• Manufacturing Automation: Use AWS services to run manufacturing process control systems such as MES and SCADA systems and applications that need to run close to factory floor equipment.
• Health Care: Apply analytics and machine learning AWS services to health management systems that need to remain on premises due to low latency processing or patient health information (PHI) requirements.
• Telecommunications: Use cloud services and tools to orchestrate, update, scale, and manage the lifecycle of Virtual Network Functions (VNFs) across cloud, on premises, and edge.
• Media & Entertainment: Access the latest GPU innovations on premises for graphics processing, audio and video rendering.
• Financial Services: Build next-generation trading and exchange platforms that serve all participants at low latency.
• Retail: Leverage AWS database, container, and analytics services to enable retail innovations such as connected store experiences, and run point-of-sale systems to process in-person transactions locally.

Customer Benefits:
• Run AWS Services On Premises
• Store and Process Data On Premises
• Truly Consistent Hybrid Experience
• Fully Managed Infrastructure

Resources: Website
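Because Outposts capacity is managed through the same AWS APIs as the Region, a small, hedged boto3 sketch for listing the Outposts registered to an account is shown below; it uses the existing Outposts API, since the 1U and 2U servers themselves were announced for availability in 2021.

```python
# Hedged sketch: list Outposts and their sites for the current account.
# Uses the existing AWS Outposts API; the Region is a placeholder.
import boto3

outposts = boto3.client("outposts", region_name="us-east-1")

for op in outposts.list_outposts()["Outposts"]:
    # Each entry describes an Outpost deployment tied to a physical site.
    print(op["OutpostId"], op.get("Name"), op.get("SiteId"))
```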

AWS Local Zones

What is it?
AWS Local Zones are a type of AWS infrastructure deployment that places AWS compute, storage, database, and other select services closer to large population, industry, and IT centers. With AWS Local Zones, you can easily run latency-sensitive portions of applications local to end-users in a specific geography, delivering single-digit millisecond latency for use cases such as media & entertainment content creation, real-time gaming, live video streaming, AR/VR, and machine learning inference.
Each AWS Local Zone location is an extension of an AWS Region where you can run your latency-sensitive applications using AWS services such as Amazon Elastic Compute Cloud, Amazon Virtual Private Cloud, Amazon Elastic Block Store, Amazon Elastic Container Service, and Amazon Elastic Kubernetes Service in geographic proximity to end-users. AWS Local Zones provide a high-bandwidth, secure connection between local workloads and those running in the AWS Region, allowing you to seamlessly connect back to your other workloads running in AWS and to the full range of in-region services through the same APIs and tool sets. You can build and deploy applications using AWS services in proximity to your end-users and reduce end-to-end throughput needs for your applications.
AWS Local Zones are managed and supported by AWS, bringing you all of the scalability and security benefits of the cloud. With AWS Local Zones, you can easily build and deploy latency-sensitive applications closer to your end-users using a consistent set of AWS services and pay only for the resources that you use.

Availability:
AWS Local Zones are generally available in Los Angeles, CA and in preview in Boston, Houston, and Miami. Get started with the LA Local Zones here. Customers can sign up for access to the preview for Local Zones in Boston, Houston, and Miami here.

Use Cases:
• Media & Entertainment Content Creation: Run latency-sensitive workloads, such as live production, video editing, and graphics-intensive virtual workstations for artists in geographic proximity to AWS Local Zones.
• Real-time Multiplayer Gaming: Deploy latency-sensitive game servers in AWS Local Zones to run real-time multiplayer game sessions and maintain a reliable gameplay experience. With AWS Local Zones, you can deploy your game servers closer to your players than ever before for a real-time and interactive in-game experience.
• ML: Easily host and train models continuously for high performance, low-latency inference at the edge. Work with your data, experiment with algorithms, and visualize your output faster in AWS Local Zones.
• Video Streaming: Live stream video content with single digit millisecond latency and high fidelity to your end users. Perform computation and analysis of your video content close to the event and seamlessly extend across Availability Zones and AWS Local Zones close to your end users for high fidelity streaming.
• AR/VR: Support AR/VR applications by performing computation and analysis close to your end users with AWS Local Zones. Effectively reduce the Motion to Photon (MTP) latencies to the <20 ms benchmark needed to offer a realistic customer experience.

Customer Benefits:
• Low latency to local end-users: AWS Local Zones place compute, storage, database, and other select AWS services closer to end-users to enable you to open up new possibilities and deliver innovative applications and services that require single-digit millisecond latencies for more end users.
• Consistent AWS experience: AWS Local Zones enable you to use the same AWS infrastructure, services, APIs, and tool sets that you are familiar with in the cloud. Applications also have fast, secure, and seamless access to the full breadth of services in the parent region.

Resources: Website

Amazon EC2 M5zn Instances

What is it?
Amazon EC2 M5 instances are the next generation of the Amazon EC2 General Purpose compute instances. M5 instances offer a balance of compute, memory, and networking resources for a broad range of workloads. This includes web and application servers, small and mid-sized databases, cluster computing, gaming servers, caching fleets, and app development environments. Additionally, M5d, M5dn, and M5ad instances have local storage, offering up to 3.6TB of NVMe-based SSDs.

Customer Benefits:
• Flexibility and choice: Choose between a selection of 60 different instance choices with multiple processor options (Intel Xeon Scalable processor or AMD EPYC processor), storage options (EBS or NVMe SSD), network options (up to 100 Gbps), and instance sizes to optimize both cost and performance for your workload needs.
• Lower TCO: By leveraging the higher number of cores per processor, M5 instances provide customers with a higher instance density than the previous generation, which results in a reduction in per-instance TCO. With the largest instance size of 24xlarge, customers can scale up and consolidate their workloads on a fewer number of instances, to help lower their total cost of ownership.
• Maximize resource efficiency: M5 instances are built on the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor which delivers practically all of the compute and memory resources of the host hardware to your instances for better overall performance and security.

Resources: Website
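Local Zones are opted in to per zone group and then used through the same EC2 APIs as the parent Region. A hedged boto3 sketch of the opt-in step is below; the Los Angeles zone group name shown is the one commonly documented for the LA Local Zones, but treat it as an assumption and confirm the group name for your account.

```python
# Hedged sketch: opt in to the Los Angeles Local Zone group of us-west-2,
# then confirm the opt-in status. The group name is assumed from AWS
# documentation for the LA Local Zones.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

ec2.modify_availability_zone_group(
    GroupName="us-west-2-lax-1",   # assumed LA Local Zone group name
    OptInStatus="opted-in",
)

zones = ec2.describe_availability_zones(
    AllAvailabilityZones=True,
    Filters=[{"Name": "zone-type", "Values": ["local-zone"]}],
)
for zone in zones["AvailabilityZones"]:
    print(zone["ZoneName"], zone["OptInStatus"])
```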

Amazon EC2 R5b Instances
What is it?
Amazon EC2 R5 instances are the next generation of memory optimized instances for the Amazon Elastic Compute Cloud. R5 instances are well suited for memory intensive applications such as high-performance databases, distributed web scale in-memory caches, mid-size in-memory databases, real time big data analytics, and other enterprise applications. Additionally, you can choose from a selection of instances that have options for local NVMe storage, EBS optimized storage (up to 60 Gbps), and networking (up to 100 Gbps).

Customer benefits:
• Flexibility and Choice: Choose from a selection of almost 60 different instance choices with options for processors (Intel Xeon Scalable processor or AMD EPYC processor), instance storage (NVMe SSD), EBS volume storage (up to 60 Gbps), networking (up to 100 Gbps), and instance sizes to optimize both cost and performance for your workload needs.
• More memory: R5 instances support the high memory requirements of certain applications to increase performance and reduce latency. R5 instances deliver additional memory per vCPU, and the largest size, r5.24xlarge, provides 768 GiB of memory, allowing customers to scale up and consolidate their workloads on a fewer number of instances.
• Maximize resource efficiency: R5 instances are powered by the AWS Nitro System, a combination of dedicated hardware and lightweight hypervisor, which delivers practically all of the compute and memory resources of the host hardware to your instances. This frees up additional memory for your workloads, which boosts performance and lowers the $/GiB costs.

Resources: Website

AWS Categories: End User Compute

New features in Amazon Connect


What is it?
Amazon Connect is an easy-to-use omnichannel cloud contact center that helps you provide superior customer service at a lower cost. Over 10 years ago, Amazon's retail business needed a contact center that would give our customers personal, dynamic, and natural experiences. We couldn't find one that met our needs, so we built it. We've now made this available for all businesses, and today thousands of companies ranging from 10 to tens of thousands of agents use Amazon Connect to serve millions of customers daily.

Availability:
To learn about Amazon Connect's availability, see the Amazon Connect Regions Table.

Use Cases:
• Omnichannel customer service: Amazon Connect provides a seamless omnichannel experience through a single unified contact center for voice, chat, and task management. Amazon Connect offers high-quality audio capabilities, natural interactive voice response (IVR), and interactive chatbots that operate seamlessly with web and mobile chat contact flows.
• Automated agent assist: Amazon Connect Wisdom leverages machine learning to help agents resolve customer issues faster, using powerful search to quickly find relevant content, like frequently asked questions (FAQs), step-by-step instructions, and wikis, across multiple knowledge repositories, such as Salesforce, ServiceNow, and Zendesk. Amazon Connect Wisdom also uses real-time analytics to detect customer issues and provide the agent relevant content in real time, resulting in faster issue resolution and improved customer satisfaction.

Customer Benefits:
• Make changes in minutes, not months: Amazon Connect is so simple to set up and use, you can increase your speed of innovation. With only a few clicks, you can set up an omnichannel contact center and agents can begin talking and messaging with customers right away. Making changes is easy with an intuitive UI that allows you to create voice and chat contact flows, or agent tasks, without any coding, rather than custom development that can take months and cost millions of dollars.
• Save up to 80% compared to traditional contact center solutions: Amazon Connect costs less than legacy contact center systems. With Amazon Connect you pay only for what you use, plus any associated telephony and messaging charges. With Amazon Connect there are no minimum monthly fees, long-term commitments, or upfront license charges, and pricing is not based on peak capacity, agent seats, or maintenance.
• Easily scale to meet unpredictable demand: Amazon Connect has the flexibility to scale your contact center up or down to any size, onboarding tens of thousands of agents in response to normal business cycles or unplanned events. As part of the AWS cloud, you can support your customers by accessing Amazon Connect from anywhere in the world on secure, reliable, and highly scalable infrastructure. All you need is a supported web browser and an internet connection to engage with customers from anywhere.

Resources: Website 1 | Website 2 | What's new post 1 | What's new post 2
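Amazon Connect also exposes its contact center operations through an API. A minimal, hedged boto3 sketch of placing an outbound call through an existing contact flow is below; the instance ID, contact flow ID, and phone numbers are hypothetical placeholders.

```python
# Hedged sketch: start an outbound voice contact through an existing
# Amazon Connect instance and contact flow. All identifiers and phone
# numbers are placeholders.
import boto3

connect = boto3.client("connect", region_name="us-east-1")

response = connect.start_outbound_voice_contact(
    InstanceId="11111111-2222-3333-4444-555555555555",     # placeholder
    ContactFlowId="66666666-7777-8888-9999-000000000000",  # placeholder
    DestinationPhoneNumber="+15555550100",                 # placeholder
    SourcePhoneNumber="+15555550199",                      # placeholder claimed number
)
print(response["ContactId"])
```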

AWS Categories: AI/ML

Amazon SageMaker Pipelines

What is it?
Amazon SageMaker Pipelines is the world's first machine learning (ML) CI/CD service accessible to every developer and data scientist. SageMaker Pipelines brings CI/CD practices to ML, reducing the months of coding required to manually stitch together different code packages to just a few hours.
ML workflows are typically out of reach for all but the largest enterprises, because they are hard to build. To build ML workflows, you typically need to create hundreds of code packages for data preparation, model training, and model deployment, and stitch them together so they run as a sequence of steps. The process is tedious and error prone because you need to define the order of the steps while keeping track of dependencies between each step, making it slow and difficult to scale model production.
With just a few clicks in SageMaker Pipelines, you can create an automated machine learning workflow. SageMaker Pipelines takes care of all the heavy lifting involved with managing the dependencies between each step of the workflow and orchestrates them so you can scale to thousands of models in production and expand your use of machine learning across more lines of business.
Availability:
Amazon SageMaker Pipelines is available in all AWS Regions where SageMaker is available. See details on the AWS Regions Table.
Use Cases:
 Workflows are required for all machine learning applications, so Amazon SageMaker Pipelines can be used for all ML use cases.
Customer Benefits:
 Compose and manage ML workflows: Amazon SageMaker Pipelines enables you to build an automated sequence of steps to move models from concept to production. You can build every step of the ML lifecycle with an easy-to-use Python interface for creating pipelines to develop and deploy models, automate the process through built-in CI/CD templates, and monitor the pipelines using SageMaker Studio. You can also manage the dependencies between each step, build the correct sequence, and execute the steps automatically, reducing months of coding to a few hours.
 Scale workflows to thousands of models: Amazon SageMaker Pipelines automatically tracks code, datasets, and model versions through each step of the machine learning lifecycle. This enables you to go back and replay model generation steps, troubleshoot problems, and reliably track the lineage of models at scale, across thousands of models in production.
 Track and access model versions in a model registry: You can have hundreds of machine learning workflows in your business, each with a different version of the same model, which makes tracking model versions tedious and time-consuming. To help you track versions, Amazon SageMaker Pipelines provides a central repository of trained models called a model registry. You can access the model registry through SageMaker Studio or programmatically through the Python SDK, making it easy to deploy the models you are responsible for across development and production.

Resources: Website | What's new post

Amazon SageMaker Data Wrangler

What is it?
SageMaker Data Wrangler takes the tedium out of preparing training data by allowing data scientists and ML engineers to analyze and prepare data for machine learning applications from a single interface. Instead of requiring complex queries to collect data from different sources, SageMaker Data Wrangler connects to data sources with just a few clicks. Its ready-to-use visualization templates and built-in data transforms streamline the process of cleaning, verifying, and exploring data so you can produce accurate ML models without writing a single line of code. Once your training data is prepared, you can automate data preparation and, through integration with SageMaker Pipelines, add it as a step into your ML workflow.
Availability:
Amazon SageMaker Data Wrangler is available in all AWS Regions where SageMaker Studio is available. See details on the AWS Regions Table.
Use Cases:
 Cleanse & Explore Your Data: Data scientists need to collect data in various formats from different sources, which requires creating complex queries and using import tools to load the data into a data preparation environment. The data selection tool in Amazon SageMaker Data Wrangler makes it easy to select and query data from one of several data sources. Once data is imported, you can view statistics and access a suite of built-in data transforms designed to reduce tedious tasks such as data cleansing and exploration.
 Visualize & Understand Your Data: SageMaker Data Wrangler provides a set of visualization templates, such as histograms, scatter plots, and box and whisker plots, so you can quickly detect outliers or extreme values within a data set without the need to write code. You can also use ML model report capabilities to gain an understanding of important columns in your data set, and proactively identify potential inconsistencies in the data preparation workflow.
 Enrich Your Data: Data scientists must use feature engineering to transform data into a format that can be used to build an accurate ML model. SageMaker Data Wrangler provides pre-configured data transformation tools so you can easily perform feature engineering. Within SageMaker Data Wrangler, you can also identify imbalance in datasets and spot potential bias in training data.
Customer Benefits:
 Operationalize ML workflows faster: With a single visual interface, you can manage all steps of the data preparation workflow and quickly operationalize it into a production setting. Without manually sifting through and translating hundreds of lines of data preparation code, you can export your data preparation workflow to a notebook or code script to easily bring the workflow into production.
 Select and Query Data with a Few Clicks: Preparing high-quality training data often requires the creation of complex queries to collect data in various formats from different sources. With SageMaker Data Wrangler's data selection tool, you can quickly select data from multiple data sources, such as Amazon Athena, Amazon Redshift, AWS Lake Formation, Amazon S3, and Amazon SageMaker Feature Store. You can write queries for data sources and import data directly into SageMaker from various file formats, such as CSV files, Parquet files, and database tables.
 Easily Transform Data: Amazon SageMaker Data Wrangler offers a rich selection of pre-configured data transforms, such as convert column type, rename column, and delete column, so you can transform your data into formats that can be effectively used for ML models without writing a single line of code. You can convert a text field column into a numerical column with a single click, or author custom transforms in PySpark, SQL, and Pandas to provide flexibility across your organization.

Resources: Website
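Circling back to the SageMaker Pipelines section above, the "Python interface for creating pipelines" can be sketched with the SageMaker Python SDK's workflow classes. This is a minimal, single-step sketch rather than a definitive implementation; the role ARN, S3 paths, and pipeline name are hypothetical placeholders, and a real pipeline would normally add processing, evaluation, and model-registration steps.

# Minimal sketch: a one-step SageMaker Pipeline built with the SageMaker Python SDK.
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import TrainingStep

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # hypothetical role ARN

xgb = Estimator(
    image_uri=sagemaker.image_uris.retrieve("xgboost", session.boto_region_name, version="1.2-1"),
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/pipelines/output",  # hypothetical bucket
    sagemaker_session=session,
)

train_step = TrainingStep(
    name="TrainModel",
    estimator=xgb,
    inputs={"train": TrainingInput("s3://my-bucket/pipelines/train.csv", content_type="text/csv")},
)

pipeline = Pipeline(name="demo-pipeline", steps=[train_step], sagemaker_session=session)
pipeline.upsert(role_arn=role)  # create or update the pipeline definition
execution = pipeline.start()    # kick off an execution; track it in SageMaker Studio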

Amazon DevOps Guru

What is it?
Amazon DevOps Guru is a machine learning (ML) powered DevOps service that gives you a simpler way to measure and improve an application's operational performance and availability and reduce expensive downtime – no machine learning expertise required.
Using machine learning models informed by years of operational expertise in building, scaling, and maintaining highly available applications at Amazon.com, DevOps Guru identifies behaviors that deviate from normal operating patterns. When DevOps Guru identifies a critical issue, it automatically alerts you with a summary of related anomalies, the likely root cause, and context on when and where the issue occurred. DevOps Guru also, when possible, provides prescriptive recommendations on how to remediate the issue.
Availability:
us-east-1 (N. Virginia), us-east-2 (Ohio), us-west-2 (Oregon), eu-west-1 (Ireland), and ap-northeast-1 (Tokyo)
Use Cases:
 Operational audits: IT managers responsible for the reliability of their applications can use DevOps Guru to get a quick summary of all the operationally significant events, identified and sorted by their severity. In the console, you can search for issues in specific applications, identify trends, and decide where developers should spend their time and resources.
 Proactive resource exhaustion planning: Build predictive alarming for exhaustible resources such as memory, CPU, and disk space with DevOps Guru. It forecasts when resource utilization will exceed the provisioned capacity, and informs you by creating a notification in the dashboard, helping you avoid an impending outage.
 Predictive maintenance: Site reliability engineers can use DevOps Guru insights to prevent incidents before they occur. DevOps Guru flags medium- and low-severity findings that might not be critical but, if left alone, worsen over time and affect the availability of your application. This helps you plan, prioritize, and avoid unforeseen downtime.
Customer Benefits:
 Automatically detect operational issues: DevOps Guru continuously analyzes streams of disparate data and watches thousands of metrics to establish normal bounds for application behavior. It discovers and classifies resources like application metrics, logs, events, and traces in your account, automatically identifies deviations from normal activity, and surfaces high-severity issues to quickly alert you of downtime.
 Resolve issues quickly with ML-powered insights: DevOps Guru helps to reduce your issue resolution time and assists in root cause identification by correlating anomalies across multiple metrics and events. When an operational issue occurs, it generates insights with a summary of related anomalies, contextual information about the issue, and, when possible, actionable recommendations for remediation.
 Easily scale and maintain availability: As you migrate and adopt new AWS services, DevOps Guru automatically adapts to changing behavior and evolving system architecture. With DevOps Guru, you save time and effort otherwise spent on monitoring applications and manually updating static rules and alarms. In just a few clicks, DevOps Guru starts analyzing your AWS application activity.

Resources: Website

Amazon SageMaker Feature Store

What is it?
Amazon SageMaker Feature Store is a feature store for machine learning (ML) that serves features in both real time and in batch. Using SageMaker Feature Store, you can store, discover, and share features so you don't need to recreate the same features for different ML applications, saving months of development effort.
Your ML models use inputs called "features" to make predictions. For example, lot size could be a feature in a model that predicts housing prices. Features need to be available in large batches for training and also in real time to make fast predictions. For example, in a housing price predictor model, users expect an immediate update as new listings become available. The quality of your predictions depends on keeping features consistent, but it requires months of coding and deep expertise to keep features consistent across training and development environments.
Amazon SageMaker Feature Store provides a consistent set of features so you get the exact same features for training and inference, and you can easily share features across your organization, which improves collaboration and eliminates rework.
Availability:
Amazon SageMaker Feature Store is available in all AWS Regions where SageMaker is available. See details on the AWS Regions Table.
Use Cases:
 Model features are required for all machine learning applications, so Amazon SageMaker Feature Store can be used for all ML use cases.
Customer Benefits:
 Develop models faster: Amazon SageMaker Feature Store provides a central repository of features so they can be used for many applications across your organization. By discovering and reusing features that are already deployed, you spend less time on data preparation and feature computation and more time on innovation.
 Increase model accuracy: Accuracy of ML models can be increased by looking at model metadata such as the dataset used, model attributes, and hyperparameters. In addition to the actual features, Amazon SageMaker Feature Store stores metadata for each feature so you can understand its impact while building and training models.
 Track model lineage for compliance: With Amazon SageMaker Feature Store, you can track the lineage of the feature generation process. The feature store maintains the data lineage for every feature, providing the required information to understand how a feature was generated. This helps with addressing compliance requirements in regulated industries.

Resources: Website | What's new post
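To make the "training and inference use the same features" point above a bit more concrete, the sketch below writes and then reads a single record through the Feature Store runtime API with boto3. It is a minimal sketch, not an official example; the feature group name and feature names are hypothetical, and the feature group is assumed to already exist with online storage enabled (for example, created earlier with the SageMaker Python SDK).

# Minimal sketch: put and get one record via the Feature Store runtime API.
import boto3

fs_runtime = boto3.client("sagemaker-featurestore-runtime", region_name="us-east-1")

# Ingest one record (all values are passed as strings).
fs_runtime.put_record(
    FeatureGroupName="housing-features",  # hypothetical feature group
    Record=[
        {"FeatureName": "listing_id", "ValueAsString": "12345"},
        {"FeatureName": "lot_size_sqft", "ValueAsString": "5400"},
        {"FeatureName": "event_time", "ValueAsString": "2020-12-09T00:00:00Z"},
    ],
)

# Low-latency read of the same record for real-time inference.
record = fs_runtime.get_record(
    FeatureGroupName="housing-features",
    RecordIdentifierValueAsString="12345",
)
print(record["Record"])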
Distributed training on Amazon SageMaker

What is it?
Training models on large datasets can take hours, slowing down your ability to deploy your latest innovations into production. You can split large training datasets across multiple GPUs (data parallelism), but splitting data can take weeks of experimentation to do efficiently. Also, more advanced ML use cases may require large models. For example, models can have billions of parameters and be petabytes in size. As a result, the models are often too big to fit on a single GPU. You can split large models across multiple GPUs (model parallelism), but finding the best way to split up the model and adjust training code can take weeks and delay your time to market.
For customers using GPUs, Amazon SageMaker makes it faster to perform data parallelism and model parallelism. With minimal code changes, SageMaker helps split your data across multiple GPUs in a way that achieves near-linear scaling efficiency. SageMaker also helps split your model across multiple GPUs by automatically profiling and partitioning your model with fewer than 10 lines of code in your TensorFlow or PyTorch training script.
Availability:
Distributed training is available in all AWS Regions where SageMaker is available. See details on the AWS Regions Table.
Use Cases:
 Object Detection: For object detection, model training time is often a bottleneck, slowing data science teams down as they wait several days or weeks for results. SageMaker's data parallelism library can help data science teams efficiently split training data and quickly scale to hundreds or even thousands of GPUs, reducing training time from days to minutes.
 Natural Language Processing: In natural language understanding, data scientists often improve model accuracy by increasing the number of layers and the size of the neural network, which creates models with billions of parameters such as GPT-2, GPT-3, T5, and Megatron. Splitting model layers and operations across GPUs can take weeks, but the model parallelism library in SageMaker automatically analyzes and splits the model efficiently to enable data science teams to start training large models within minutes.
 Computer Vision: In computer vision, hardware constraints often force data scientists to pick batch sizes or input sizes that are smaller than they would prefer. For example, bigger inputs may improve model accuracy but may cause out-of-memory errors and poor performance with smaller batch sizes. SageMaker offers the flexibility to easily train models efficiently with lower batch sizes or train with bigger inputs by leveraging managed distributed training.
Customer Benefits:
 Reduce training time: Amazon SageMaker reduces training time by 25% or more by making it easy to split training data across GPUs. For example, training Mask R-CNN on p3dn.24xlarge runs 25% faster on SageMaker compared to Horovod. The reduction in training time is possible because SageMaker manages the GPUs running in parallel to achieve optimal synchronization.
 Optimized for AWS: Using open source tools for distributed training that are not optimized for AWS results in poor scaling efficiency. SageMaker's data parallelism library provides communication algorithms that are designed to fully utilize the AWS network and infrastructure to achieve near-linear scaling efficiency. For example, BERT on p3dn.24xlarge instances achieves a scaling efficiency of 88% using SageMaker, a 27% improvement over the same model using Horovod.
 Support for popular ML framework APIs: SageMaker enables you to reuse existing APIs for training without writing any custom SageMaker training code. SageMaker supports DistributedDataParallel (DDP) for PyTorch and Horovod for TensorFlow.

Resources: Website | What's new post

Amazon CodeGuru updates

What is it?
Amazon CodeGuru is a developer tool that provides intelligent recommendations to improve your code quality and identify an application's most expensive lines of code. Integrate CodeGuru into your existing software development workflow to automate code reviews during application development, continuously monitor your application's performance in production, and get recommendations and visual clues on how to improve code quality and application performance and reduce overall cost.
Use Cases:
 Improve application performance: Amazon CodeGuru Profiler is always searching for application performance optimizations, identifying your most "expensive" lines of code and recommending ways to fix them to reduce CPU utilization, cut compute costs, and improve application performance.
 Detect deviation from AWS API and SDK best practices: Amazon CodeGuru Reviewer is trained using rule mining and supervised machine learning models that use a combination of logistic regression and neural networks to look at code changes intended to improve the quality of the code, and cross-references them against documentation data.
Customer Benefits:
 Catch code problems before they hit production: For code reviews, developers commit their code to GitHub, GitHub Enterprise, Bitbucket Cloud, and AWS CodeCommit and add CodeGuru Reviewer as one of the code reviewers, with no other changes to the normal development process. CodeGuru Reviewer analyzes existing code bases in the repository, identifies hard-to-find bugs and critical issues with high accuracy, provides intelligent suggestions on how to remediate them, and creates a baseline for successive code reviews.
 Fix Security Vulnerabilities: CodeGuru Reviewer Security Detector leverages machine learning and AWS's years of security experience to improve your code security. It ensures that your code follows best practices for KMS, EC2 APIs, and common Java crypto and TLS/SSL libraries. When the security detector discovers an issue, a recommendation for remediation is provided along with an explanation for why the code improvement is suggested, thereby enabling security engineers to focus on architectural and application-specific security best practices.
 Continuous monitoring to proactively improve code quality: For every pull request initiated, CodeGuru Reviewer automatically analyzes the incremental code changes and posts recommendations directly on the pull request. Additionally, it supports full repository or code base scans for periodic code maintainability and code due diligence initiatives to ensure that your code quality is consistent.

Resources: Website | What's new post
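Circling back to the distributed training section above, the "minimal code changes" claim largely comes down to passing a distribution argument to a SageMaker estimator. The sketch below uses the SageMaker Python SDK's PyTorch estimator with the data parallelism library enabled; the entry-point script, role ARN, and S3 path are hypothetical placeholders, and train.py is assumed to be an existing DDP-style PyTorch training script.

# Minimal sketch: launching a data-parallel training job on SageMaker.
from sagemaker.pytorch import PyTorch

estimator = PyTorch(
    entry_point="train.py",                                        # existing PyTorch training script
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",  # hypothetical role ARN
    framework_version="1.8.1",
    py_version="py36",
    instance_count=2,                  # scale out across instances
    instance_type="ml.p3dn.24xlarge",  # 8 GPUs per instance
    distribution={"smdistributed": {"dataparallel": {"enabled": True}}},
)

estimator.fit({"training": "s3://my-bucket/datasets/coco/"})  # hypothetical dataset location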


AWS for Industrial

What is it?
'AWS for Industrial' is a new go-to-market umbrella initiative comprising new and existing services and solutions from AWS and our strategic partners, built and packaged specifically for developers, engineers, and operators at industrial sites. AWS solutions can include reference architectures, AWS CloudFormation templates, deployment guides, and Quick Starts to help customers speed deployment of their own applications. Amazon Panorama Appliance, Amazon Panorama Device SDK, Amazon Monitron, Amazon Lookout for Vision, and Amazon Lookout for Equipment join an existing suite of services including AWS IoT SiteWise, the AWS Snow Family, AWS Outposts, and Amazon Timestream to make it easy for customers to digitize, monitor, and optimize their industrial operations.
Increasingly, industrial customers across asset-intensive industries such as manufacturing, energy, mining, transportation, and agriculture are leveraging new digital technologies to drive faster and better decisions. The 'AWS for Industrial' initiative simplifies the process of building and deploying innovative Internet of Things (IoT), Artificial Intelligence (AI), Machine Learning (ML), analytics, and edge solutions to achieve step-change improvements in operational efficiency, quality, and agility. Industrial customers seek cloud and edge solutions to their business problems rather than a collection of individual services, and with the addition of our newly launched industrial services, the 'AWS for Industrial' initiative unites AWS offerings under a single go-to-market motion to meet market demand across the multiple industrial customer sub-segments.
Availability:
For availability of AWS services relevant for industrial customers such as Amazon Panorama (Appliance and SDK), Amazon Monitron, Amazon Lookout for Vision, Amazon Lookout for Equipment, AWS IoT SiteWise, the AWS Snow Family (Snowball, Snowcone), AWS Outposts, and Amazon Timestream, see details on the AWS Regions Table.
Use Cases:
 Engineering & Design: Modern product design requires sophisticated data storage, compute, and collaboration. With AWS and our extensive network of industrial partners, you can transform your engineering, design, and simulation efforts with the most comprehensive set of cloud solutions available today, while leveraging the highest level of security to protect your intellectual property.
 Production & Asset Performance Management: Digital transformation enables industrial customers to maximize productivity and asset availability, and lower costs. To do this, industrial customers must liberate data from their legacy operational technology systems and leverage new tools in the cloud. With AWS and our network of leading industrial partners, you can transform your industrial operations with the most comprehensive and advanced set of cloud solutions available today, while taking advantage of security designed for the most sensitive industries.
 Supply Chain Management: As modern supply chains continue to expand, they also are becoming more complex and disparate; they require a unified view of data, while still being able to independently verify their transactions, such as production and transport updates. Solutions built using AWS services, such as Amazon Managed Blockchain, can provide the end-to-end visibility today's supply chains need to track and trace their entire production process with unprecedented efficiency.
 Worker Safety & Productivity: Industrial companies need to empower their teams with the technology needed to keep the organization healthy, safe, and productive. With AWS and our extensive network of industrial partners, you can keep your staff safe by monitoring employee health to meet pandemic guidelines, reduce errors with digital job aids, automate manual workflows, enhance productivity, and reduce manual processing and documentation.
 Quality Management: Industrial customers are increasingly focused on improving quality to maintain brand reputation, satisfy their customers, and manage costs. AWS and our extensive network of partners can help you customize and automate quality inspection with fast, fully scalable computer vision solutions to improve accuracy, reduce cost, and maintain the quality bar that your customers expect.
Resources: Website | Industrial Blog

Amazon Lookout for Vision

What is it?
Amazon Lookout for Vision enables you to find visual defects in industrial products, accurately and at scale. It uses computer vision to identify missing components in an industrial product, damage to vehicles or structures, irregularities in production lines, and even minuscule defects in silicon wafers – or any other physical item where quality is important, such as a missing capacitor on printed circuit boards.
Visual inspection of industrial processes typically involves manual inspection, which can be tedious and inconsistent. For example, an automobile door assembly line requires quality inspectors to identify scratches or discoloration on newly painted door panels to prevent shipment of defective products. Computer vision brings speed, consistency, and accuracy, but implementation can be complex and require teams of data scientists to build, deploy, and manage the machine learning models needed to identify defects.
With Amazon Lookout for Vision you can automate real-time visual inspection with computer vision for processes like quality control and defect assessment – with no machine learning expertise required. You can get started in minutes by providing as few as 30 images for the process you want to visually inspect, such as machine parts or manufactured products. Amazon Lookout for Vision then analyzes images from your cameras that monitor the process line, in real time, to quickly and accurately identify anomalies like dents, cracks, and scratches. It spots differences between the baseline images provided and the image feed from the process line, and reports the presence of product defects. Reports are available in an easy to use dashboard in the AWS management console, so that you can take action quickly and reduce further defects – saving you time and money.
Availability:
us-east-2, us-west-2, us-east-1, eu-west-1, eu-central-1, ap-northeast-1, ap-northeast-2
Use Cases:
 Detect part damage: With Amazon Lookout for Vision, customers can detect damage to a product's surface quality, color, and shape. For example, you can detect dents, scratches, and poorly welded surfaces on an automotive door panel across the fabrication and assembly processes.
 Identify missing components: Amazon Lookout for Vision will identify missing assembly components related to the absence, presence, or placement and positioning of objects.
 Uncover process issues: Lookout for Vision can detect a defect that has a repeating pattern, which indicates a potential process issue. For example, you can detect repeated scuff marks on a nylon bobbin, which in combination with machine tag information can be used to identify an underlying process issue.
Customer Benefits:
 Quickly and easily improve processes: Amazon Lookout for Vision gives you a fast and easy way to implement computer vision-based inspection in industrial processes, at scale. Provide as few as 30 baseline good images and Lookout for Vision will automatically build a model for you in minutes. You can then process images from IP cameras in batch or in real time to quickly and accurately identify anomalies like dents, cracks, and scratches.
 Increase production quality, fast: With Lookout for Vision you can reduce defects in production processes in real time. It identifies and reports visual anomalies in an easy to use dashboard so you can take action quickly to stop more defects from occurring – increasing production quality and reducing costs.
 Reduce operational costs: Lookout for Vision reports trends in your visual inspection data, such as identifying processes with the highest defect rate, or flagging recent variations in defects. This gives you the ability to determine whether to schedule maintenance on the process line or reroute production to another machine before costly, unplanned downtime occurs.
Resources: Website | What's new post
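To illustrate how the Lookout for Vision section above translates into an application, the sketch below sends one image to a trained model for anomaly detection using boto3. It is a minimal sketch only; the project name, model version, and image path are hypothetical placeholders, and the model version is assumed to have already been trained and started for hosting.

# Minimal sketch: checking a single image for defects with Lookout for Vision.
import boto3

lfv = boto3.client("lookoutvision", region_name="us-east-1")

with open("door_panel_0042.jpg", "rb") as image:  # hypothetical image from a process-line camera
    result = lfv.detect_anomalies(
        ProjectName="door-panel-inspection",  # hypothetical project
        ModelVersion="1",                     # version that has been started for hosting
        Body=image.read(),
        ContentType="image/jpeg",
    )

prediction = result["DetectAnomalyResult"]
print(prediction["IsAnomalous"], prediction["Confidence"])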
Amazon Monitron

What is it?
Amazon Monitron is an end-to-end system that detects abnormal machine behavior, so you can enable predictive maintenance and reduce lost productivity from unplanned machine downtime. Reliability managers can quickly deploy Monitron to easily track machine health for industrial equipment such as bearings, motors, gearboxes, and pumps without any development work or specialized training.
Amazon Monitron enables customers to start proactively monitoring their equipment in just a few hours, without any software development or specialized training. Monitron is a secure end-to-end system that includes sensors to capture vibration and temperature data, gateways to automatically transfer data to the AWS Cloud, ML-based software that analyzes the data for abnormal machine patterns, and a companion mobile app for simple system setup and immediate notifications of abnormal machine behavior.
Availability:
Amazon Monitron is available in us-east-1 and will be available in additional regions soon. You can buy Amazon Monitron Starter Kits, Sensors, and Gateways on amazon.com and Amazon Business and ship them to any location in the US, UK, and EU.
Use Cases:
 Enable predictive maintenance: With Amazon Monitron, you can enable predictive maintenance for your equipment. Predictive maintenance is the activity of monitoring and evaluating the condition of equipment, detecting developing faults, and planning specific corrective maintenance activities at a time when it is most cost effective. Monitron detects developing faults and notifies technicians, allowing them to plan and execute corrective measures at an optimal time.
 Monitor remotely: With Monitron, you can remotely monitor equipment at your site without having to take readings manually. The Amazon Monitron Sensor wakes up periodically and captures readings. When Amazon Monitron notifies you of a developing fault, you can schedule a time to investigate and execute a repair before secondary damage occurs, saving you time and money.
 Track the condition of inaccessible equipment: Today's safety standards require fixed guards to be mounted on rotating equipment to protect people from injury. Often fixed guards restrict maintenance technicians' access to equipment to perform condition monitoring checks. Monitron Sensors are wireless and small, so the condition of components in restricted areas can now be monitored safely.
Customer Benefits:
 Easy to install, and easy to use: Monitron works right out of the box. Monitron Sensors and Gateways are easy to install and use, so technicians can start monitoring equipment in less than an hour.
 Reduce unplanned downtime: Monitron detects abnormal machine conditions proactively with ML technology and industry-recognized vibration ISO standards, and thereby helps reduce costly and unplanned downtime.
 Cost effective: Monitron offers a cost-effective way to start monitoring your equipment, with low upfront hardware investment and pay-as-you-go software.
 Continuously improving: Reliability managers and technicians can add feedback directly in the Monitron mobile app and benefit from continuously improving ML model performance. Monitron Sensors and Gateways are remotely updated over the air (OTA), providing system improvements over the life of your installation.

Resources: Website | What's new post

Amazon Lookout for Equipment

What is it?
Amazon Lookout for Equipment is an industrial equipment anomaly detection service that uses your machine data to detect abnormal equipment behavior automatically, so you can avoid unplanned downtime and optimize performance. Further, Amazon Lookout for Equipment leverages the best machine learning (ML) model for the job by searching 28K algorithms and parameters to define the best-fit analytics – making ML accessible and scalable to industrial customers across all industrial machinery.
Lookout for Equipment enables operators to automatically build custom ML models using their own historical, time-series machine data (temperature, vibration, rotation, pitch, RPMs, flow rates, and more) along with historical maintenance events. The service requires little or no ML expertise, which makes accessing and scaling ML across industrial assets achievable for individual facilities and across industrial fleets. You pay only for what you use; there are no minimum fees and no upfront commitments.
Availability:
US West, eu-west-1, ap-northeast-2
Use Cases:
 Scaling anomaly detection: Amazon Lookout for Equipment automatically searches through up to 28,000 parameters to derive the optimal normal multi-variate relationships between each sensor within hours versus traditionally months of development. The result is being able to develop a custom ML model specific to each piece of equipment's unique operating conditions, effectively across hundreds, if not thousands, of machines.
 Enable advanced ML analytics in the hands of operators: Until now, machine learning has been exclusively leveraged by data scientists. With Amazon Lookout for Equipment, an operator or engineer can enable machine learning insights for abnormal equipment detection for uses such as predictive maintenance. Amazon Lookout for Equipment provides a user-friendly and workflow-agnostic approach to leveraging ML, so that an operator only needs to decide on the right inputs and the right labeled examples of failure to generate insights in hours.
 Integrate ML inference into your monitoring software: Industrial companies are constantly working to avoid unplanned downtime, improve operational efficiency, and get actionable real-time alerts. With Amazon Lookout for Equipment, you can run ML inference on real-time data to detect abnormal equipment behavior. The results can be integrated into your existing monitoring software, or you can leverage AWS IoT SiteWise to get alerts and visualize real-time output.
Customer Benefits:
 Automate the iterative steps of machine learning to enable access and scalability: Amazon Lookout for Equipment provides a user-friendly UI to put advanced ML analytics in the hands of operators. The service also automates time- and resource-intensive iterative machine learning steps to enable scale across equipment, assets, and applications.
 Identify subtle issues earlier: Amazon Lookout for Equipment automatically identifies equipment anomalies by learning the healthy state and operational relationships between sensors on each asset. Lookout for Equipment can then pinpoint subtle changes in patterns and the highest contributing factor, which enables operations to respond quickly with greater confidence.
 Best fit model for the application: Amazon Lookout for Equipment not only automates machine learning steps but searches through thousands of machine data feature combinations to select the best ML model for the application.

Resources: Website | What's new post
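As a light illustration of how the Lookout for Equipment workflow described above is exposed programmatically, the sketch below simply lists existing datasets and models with boto3. This is a minimal sketch under stated assumptions about the boto3 client and response field names; the full create-and-train flow (dataset creation, historical data ingestion, model training, and an inference scheduler) goes through operations such as create_dataset, start_data_ingestion_job, create_model, and create_inference_scheduler, whose request shapes are omitted here.

# Minimal sketch: inspecting Lookout for Equipment resources with boto3.
import boto3

l4e = boto3.client("lookoutequipment", region_name="eu-west-1")

for ds in l4e.list_datasets()["DatasetSummaries"]:
    print("dataset:", ds["DatasetName"], ds["Status"])

for model in l4e.list_models()["ModelSummaries"]:
    print("model:", model["ModelName"], model["Status"])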
AWS Panorama

What is it?
AWS Panorama is a managed service for building, deploying, and managing computer vision applications (Panorama applications) that can be deployed to edge devices (Panorama devices). The first Panorama device will be the AWS Panorama Appliance, a computer vision appliance, available in April 2021. The AWS Panorama Appliance Developer Kit will be available in limited quantity at re:Invent 2020 so application developers can build their apps ahead of AWS Panorama Appliance availability. The Panorama Appliance Developer Kit provides extra on-device logging and debugging to make it easier for developers to test and debug their computer vision applications.
AWS Panorama is a machine learning appliance and SDK, which allow you to bring computer vision (CV) to your on-premises cameras or to new Panorama-enabled devices. This gives you the ability to make real-time decisions to improve your operations. With Panorama, you can use live video feeds to automate monitoring or visual inspection tasks, like evaluating manufacturing quality, finding bottlenecks in industrial processes, and assessing worker safety within your facilities.
Availability:
Panorama is available in the us-east-1 (N. Virginia) and us-west-2 (Oregon) regions.
Use Cases:
 Reimagined retail insights: In retail environments, Panorama enables you to run multiple, simultaneous CV models using your existing onsite cameras. Applications for retail analytics, such as people counting, heat mapping, and queue management, can help you get started quickly. By using the streamlined management capabilities that Panorama offers, you can easily scale your CV applications to include multiple process locations or stores.
 Workplace safety and social distance monitoring: Panorama allows you to monitor workplace safety, get notified immediately about any potential issues or unsafe situations, and take corrective action.
 Supply chain efficiency: In manufacturing and assembly environments, Panorama can help to provide critical input to supply chain operations by tracking throughput, recognizing bar codes or labels of parts or completed products, or monitoring individual workstations to measure productivity.
 Manufacturing quality control: Panorama can help improve product quality and decrease costs from manufacturing defects by processing CV at the edge and notifying you immediately of any anomalies in production so you can take quick corrective action.
Customer Benefits:
 Real-time visibility for fast decision making: You can analyze video feeds within milliseconds, enabling real-time visibility into operations and fast decision making with Panorama-enabled devices or the Panorama Appliance.
 Easily add to your existing infrastructure: Plug the AWS Panorama Appliance in, connect it to your network, and the device automatically identifies camera streams and starts interacting with your existing fleet of IP cameras. The Panorama Appliance also works seamlessly alongside your existing video management systems (VMS).
 Enable CV in limited connectivity environments: AWS Panorama devices run CV models directly on the device (at the edge), meaning you can get access to real-time predictions in remote and isolated places where cloud connectivity can be slow, expensive, or completely non-existent.

Resources: Website | What's new post

Amazon HealthLake

What is it?
Amazon HealthLake is a HIPAA-eligible service that enables healthcare providers, health insurance companies, and pharmaceutical companies to store, transform, query, and analyze health data in a consistent fashion in the AWS Cloud at petabyte scale. Health data is frequently incomplete and inconsistent, and is often unstructured, with information contained in clinical notes, laboratory reports, insurance claims, medical images, recorded conversations, and time series data.
Amazon HealthLake removes the heavy lifting of organizing, indexing, and structuring patient information to provide a complete view of each patient's medical history in a secure, compliant, and auditable manner. It transforms unstructured data using specialized machine learning models, like natural language processing, to automatically understand and extract meaningful medical information from the data, and provides powerful query and search capabilities. Organizations can use advanced analytics and ML tools, such as Amazon QuickSight and Amazon SageMaker, to analyze and understand relationships, identify trends, and make predictions from the newly normalized and structured data.
Availability:
us-east-1 (N. Virginia)
Use Cases:
 Population health management: Amazon HealthLake helps healthcare organizations analyze population health trends, outcomes, and costs. This gives organizations the tools to identify the most appropriate intervention for a patient population and choose better care management options with ready-to-use Jupyter notebooks with pre-trained ML algorithms.
 Improving quality of care: Amazon HealthLake helps hospitals, health insurance companies, and life sciences organizations close gaps in care, improve quality, and reduce cost by bringing together a complete view of a patient's medical history. HealthLake provides a significant leap forward for these organizations by predicting disease onset and identifying patients requiring additional care.
 Streamlined data operations: Medical data, which takes many forms, from prescriptions to insurance claims to imaging, is difficult to ingest and make sense of. Amazon HealthLake removes the heavy lifting and reduces operational overhead using document classification and natural language understanding, such as text extraction, speech-to-text technologies, and medical comprehension capabilities, to streamline data operations.
Customer Benefits:
 Easily transform health data: Amazon HealthLake can automatically understand and extract meaningful medical information from raw, disparate data, such as prescriptions, procedures, and diagnoses, revolutionizing a process that was traditionally manual.
 Identify trends and make predictions: Healthcare organizations can store, transform, and prepare their patient health information to unlock novel insights. This gives healthcare organizations new tools to improve care and intervene more quickly to save lives and reduce costs.
 Support interoperable standards: Interoperability ensures that health data is shared in a consistent, compatible format across multiple applications. Amazon HealthLake creates a complete view of each patient's medical history and structures it in the Fast Healthcare Interoperability Resources (FHIR) standard format to facilitate the exchange of information.

Resources: Website | What's new post
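To give a flavor of the HealthLake API mentioned above, the sketch below creates a FHIR data store and checks its status with boto3. This is a minimal sketch, not an official example; the data store name is a hypothetical placeholder, and importing existing FHIR or unstructured data would follow as a separate import job once the data store is active.

# Minimal sketch: creating a HealthLake FHIR data store and reading its status.
import boto3

healthlake = boto3.client("healthlake", region_name="us-east-1")

created = healthlake.create_fhir_datastore(
    DatastoreName="patient-records-demo",  # hypothetical name
    DatastoreTypeVersion="R4",             # HealthLake structures data in FHIR R4 format
)

status = healthlake.describe_fhir_datastore(DatastoreId=created["DatastoreId"])
print(status["DatastoreProperties"]["DatastoreStatus"])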
Amazon SageMaker Edge Manager

What is it?
Amazon SageMaker Edge Manager provides model management for edge devices so you can optimize, secure, monitor, and maintain machine learning models on fleets of edge devices such as smart cameras, robots, personal computers, and mobile devices.
Amazon SageMaker Edge Manager makes it easy to manage ML models on edge devices. SageMaker Edge Manager uses SageMaker Neo to compile and optimize models for edge devices. Then, SageMaker Edge Manager packages the model with its runtime and credentials for deployment. You have the flexibility to use AWS IoT Greengrass or your own on-device deployment mechanism to deploy models to the edge. Once a model is deployed, SageMaker Edge Manager manages each model on each device by collecting metrics, sampling input/output data, and sending the data securely to your Amazon S3 buckets for monitoring, labeling, and retraining so you can continuously improve model quality. And, because SageMaker Edge Manager enables you to manage models separately from the rest of the application, you can update the model and the application independently, reducing costly downtime and service disruptions.
Availability:
us-east-1, us-west-2, us-east-2, eu-west-1, eu-central-1, and ap-northeast-1; see details on the AWS Regions Table.
Use Cases:
 Driver-assist dashcam: Connected vehicle solution providers use Amazon SageMaker Edge Manager to operate ML models on driver dashcams. The models help detect pedestrians and road hazards to improve the safety of both drivers and pedestrians.
 Theft detection: Amazon SageMaker Edge Manager is used by retailers to identify theft during checkout. Image detection models run on smart cameras at checkout counters and send alerts when the merchandise does not match the scanned barcode.
 Predictive maintenance: Amazon SageMaker Edge Manager runs predictive maintenance models on gateway servers at manufacturing facilities in order to predict which machines are at high risk of failure. When possible failure is detected, alerts are sent to staff so they can remediate the issue.
Customer Benefits:
 Run ML models up to 28x faster: Amazon SageMaker Edge Manager automatically optimizes ML models for deployment on a wide variety of edge devices, including CPUs, GPUs, and embedded ML accelerators. SageMaker Edge Manager compiles your trained model into an executable that discovers and applies specific performance optimizations that will make your model run most efficiently on the target hardware platform.
 Improve model quality: Amazon SageMaker Edge Manager continuously monitors each model instance across your device fleet to detect when model quality declines. Declines in model quality can be caused by differences between the data used to make predictions and the data used to train the model, or by changes in the real world. For example, changing economic conditions could drive new interest rates affecting home purchasing predictions.
 Easily integrate with device applications: Amazon SageMaker Edge Manager supports gRPC, an open source remote procedure call framework, which allows you to integrate SageMaker Edge Manager into your existing edge applications through common programming languages, such as Android Java, C++, C#, and Python.

Resources: External Website | What's new post

Amazon Lookout for Metrics

What is it?
Amazon Lookout for Metrics uses machine learning (ML) to detect anomalies in virtually any time series-driven business and operational metrics – such as revenue performance, purchase transactions, and customer acquisition and retention rates – with no ML experience required.
Amazon Lookout for Metrics automatically connects to popular databases and SaaS applications to continuously monitor metrics that you care about, and sends you alerts as soon as anomalies are detected. When it finds anomalies, Amazon Lookout for Metrics groups those that might be related to the same event and helps you identify the root cause so that you can fix an issue or quickly react to opportunities. It also ranks anomalies in order of severity so that you can focus on what matters most, and lets you tune the results by providing feedback based on your knowledge of your business, using your feedback to improve the accuracy of results over time.
Availability:
Amazon Lookout for Metrics is in gated preview and will be available in 5 regions at launch: us-east-1, us-east-2, us-west-2, ap-northeast-1, and eu-west-1.
Use Cases:
By metric category
 Customer Engagement: Ensure a seamless customer experience by detecting sudden changes in metrics across the customer journey, such as during enrollment, login, and engagement.
 Operational: Proactively monitor metrics like latency, CPU utilization, and error rates to mitigate service interruptions.
 Sales: Quickly track changes in win rate, pipeline coverage, and average deal size to evaluate business growth opportunities.
 Marketing: With actionable marketing analytics, quickly detect how your campaigns, partners, and ad platform metrics affect your overall traffic volume, revenue, churn, and conversion.
By Industry
 Retail: Gain insights into category-level revenue and margin by monitoring inventory levels, item pricing, promotional traffic, and conversion.
 Gaming: Boost player engagement and optimize gaming revenue by monitoring changes in new users, active users, level-completion rate, in-app purchases, and retention rate.
 Ad Tech: Optimize ad spend by detecting spikes or dips in metrics like reach, impressions, views, and ad clicks.
 Telecom: Reduce customer frustration by detecting unexpected changes in network performance metrics, like tracking traffic channel (TCH), evolved packet core (EPC), and Erlang.
Customer Benefits:
 Highly accurate anomaly detection: Detects anomalies in metrics with high accuracy using ML technology and over 20 years of experience at Amazon.
 Actionable results at scale: Helps you identify the root cause by grouping related anomalies together and ranking them in order of severity, so that you can diagnose issues or identify opportunities quickly.
 Integration with AWS databases and SaaS applications: Connects with commonly used AWS databases and SaaS applications. Sends alerts through multiple channels, and automatically triggers pre-defined custom actions, such as filing trouble tickets when anomalies are detected.
 Tunable results: Uses your feedback on detected anomalies to automatically tune the results and improve accuracy over time.

Resources: External Website | What's new post
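As a rough sketch of how the Lookout for Metrics setup described above can be scripted, the snippet below creates and activates an hourly anomaly detector with boto3. The detector name is hypothetical, and in practice a metric set describing the data source (for example S3, Redshift, RDS, CloudWatch, or a SaaS connector) must be attached via create_metric_set before activation; that request shape is omitted here to keep the sketch short.

# Minimal sketch: create and activate an hourly anomaly detector.
import boto3

l4m = boto3.client("lookoutmetrics", region_name="us-east-1")

detector = l4m.create_anomaly_detector(
    AnomalyDetectorName="revenue-monitor",                       # hypothetical name
    AnomalyDetectorConfig={"AnomalyDetectorFrequency": "PT1H"},  # evaluate metrics hourly
)

# ... create_metric_set(...) would be called here to attach the metrics to monitor ...

l4m.activate_anomaly_detector(AnomalyDetectorArn=detector["AnomalyDetectorArn"])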
Amazon SageMaker Debugger

What is it?
With Amazon SageMaker Debugger you can detect bottlenecks and training problems in real time so you can correct problems before the model is deployed to production. SageMaker Debugger collects, analyzes, and generates alerts, reports, and visualizations, providing insights for you to act on and train models faster.
Amazon SageMaker Debugger captures model metrics, monitors system resources, and profiles ML framework resources during ML model training, without requiring additional code. All metrics are captured in real time so you can correct issues during training, which speeds up training time and enables you to get higher quality models to production much faster.
Availability:
Amazon SageMaker Debugger is available in all AWS Regions where SageMaker is available. See details on the AWS Regions Table.
Use Cases:
 Consolidate multiple tools: Amazon SageMaker Debugger provides a single, unified tool that data scientists can use to collect training data across different parameters in real time, gain visibility into the effects of different parameter values, and receive alerts for the appropriate action to be taken.
 Visualize training data: Amazon SageMaker Debugger renders visualizations of training data and helps you visualize tensors in your network to determine their state at each point in the training process. This is useful in scenarios such as determining stale or saturated data or mapping the effects of specific parameters on the model.
 Explain ML models better: Amazon SageMaker Debugger saves the state of ML models at periodic intervals and enables you to explain the model predictions in real time during training or offline after the training is completed. This helps you better interpret and explain the predictions the trained model makes. With SageMaker Debugger, you can explain the internal mechanics of an ML model and eliminate the black box aspects of predictions, leading to better business outcomes.
Customer Benefits:
 Generate ML models faster: Amazon SageMaker Debugger helps generate ML models faster by providing you with full visibility and control during the training process, so you can quickly troubleshoot and take corrective measures. With SageMaker Debugger, you can take immediate action if anomalies such as overfitting or overtraining are detected, resulting in faster model generation for deployment. With the insights provided by SageMaker Debugger, you can reduce the time required to troubleshoot models from weeks to days, with no additional code.
 Optimize system resources with no additional code: Using the profiling capability of Amazon SageMaker Debugger, you can automatically monitor system resources such as CPU, GPU, network, and memory to get a complete view of current resource utilization. Additionally, the profiler suggests recommendations to reallocate resources if they are being underutilized or if there are bottlenecks, helping you to optimize resources effectively. You can profile your training job on the SageMaker Studio visual interface at any time.
 Make ML training transparent: Amazon SageMaker Debugger makes the training process transparent so you can determine whether the ML model is progressively learning correct parameter values, such as gradients, to yield the desired results. Insights into the training data are provided by automatically capturing real-time metrics such as weights and tensors during training to help improve model accuracy. Debugging is made easy with a visual interface to analyze the debug data and take corrective actions specific to the models that are being trained.

Resources: Website | What's new post | Detailed blog post

Amazon SageMaker Clarify

What is it?
Amazon SageMaker Clarify provides data to help you make your machine learning (ML) models fair and transparent by detecting bias so you can take corrective action.
Amazon SageMaker Clarify detects bias across the entire ML workflow—including during data preparation, after training, and ongoing over time—and also includes tools to explain ML models and their predictions. You can skip the tedious processes of implementing third-party tools and improve fairness and transparency to improve trust with your customers, all within SageMaker. SageMaker Clarify also provides transparency through model explainability reports that you can share with customers, business leaders, or auditors, so all stakeholders can see how and why models make predictions.
Availability:
Amazon SageMaker Clarify is available in all AWS Regions where SageMaker is available. See details on the AWS Regions Table.
Use Cases:
 Regulatory Compliance: Regulations such as the Equal Credit Opportunity Act (ECOA) or the Fair Housing Act often require companies to remain unbiased and to be able to explain financial decisions. Amazon SageMaker can help flag any potential bias present in the initial data or in the financial model after training, and can also help explain which data caused an ML model to make a particular financial decision.
 Internal Reporting & Compliance: Data science teams are often required to justify or explain ML models to internal stakeholders, such as internal auditors or executives who would like more transparency. Amazon SageMaker can provide data science teams with a graph of feature importance when requested, and can quantify potential bias in an ML model or its data to provide the information needed to support internal presentations or mandates.
 Operational Excellence: Machine learning is often applied in operational scenarios, such as predictive maintenance or supply chain operations. However, data science teams may want insight into why a given machine needs to be repaired, or why an inventory model is recommending surplus stock in a particular location. Amazon SageMaker can detail the causes for individual predictions, helping data science teams to work with other internal teams to improve operations.
Customer Benefits:
 Find imbalances in data: Amazon SageMaker Clarify is integrated with Amazon SageMaker Data Wrangler, making it simple to identify bias during data preparation. You specify attributes of interest, such as gender or age, and Amazon SageMaker Clarify runs a set of algorithms to detect the presence of bias in those attributes. After the algorithm runs, SageMaker Clarify provides a visual report with a description of the sources and severity of possible bias so that you can take steps to mitigate.
 Check your trained model for bias: Ensure that predictions are fair by checking trained models for imbalances, such as more frequent denial of services to one protected class than another. Amazon SageMaker Clarify is integrated with SageMaker Experiments so that after a model has been trained, you can identify attributes you would like to check for bias, such as income or marital status.
 Monitor your model for bias: While your initial data or model may not have been biased, changes in the world may cause bias to develop over time. For example, a substantial change in mortgage rates could cause a home loan application model to become biased. Amazon SageMaker Clarify is integrated with SageMaker Model Monitor, enabling you to configure alerting systems like Amazon CloudWatch to notify you if your model begins to develop bias.

Resources: Website
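To make the "find imbalances in data" benefit above more concrete, the sketch below runs a pre-training bias analysis with the clarify module of the SageMaker Python SDK. This is a minimal sketch rather than a definitive implementation; the S3 paths, column names, facet, and role ARN are hypothetical placeholders.

# Minimal sketch: pre-training bias analysis with SageMaker Clarify.
import sagemaker
from sagemaker import clarify

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # hypothetical role ARN

processor = clarify.SageMakerClarifyProcessor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

data_config = clarify.DataConfig(
    s3_data_input_path="s3://my-bucket/loans/train.csv",  # hypothetical dataset
    s3_output_path="s3://my-bucket/clarify/output",
    label="approved",                                     # target column
    headers=["approved", "income", "age", "gender"],      # column names in the CSV
    dataset_type="text/csv",
)

bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],  # the favorable outcome
    facet_name="gender",            # attribute to check for imbalance
)

# Produces a bias report (visible in SageMaker Studio and written to S3).
processor.run_pre_training_bias(data_config=data_config, data_bias_config=bias_config)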
Amazon SageMaker JumpStart
What is it?
Amazon SageMaker JumpStart helps you quickly and easily get started with machine learning. It provides a set of solutions for the most common use cases that can be deployed readily with just a few clicks. The solutions are fully
customizable and showcase the use of AWS CloudFormation templates and
reference architectures so you can accelerate your ML journey.
SageMaker JumpStart also supports one-click deployment and fine-tuning of
more than 150 popular open source models for modalities such as natural
language processing, object detection, and image classification.

Availability:
Amazon SageMaker JumpStart is available in all AWS Regions where
SageMaker is available. See details on the AWS Regions Table
Use Cases:
 There are 15+ pre-built solutions for common ML use cases including
predictive maintenance, demand forecasting, fraud detection, and
personalized recommendations.
Customer Benefits:
 Accelerate time to deploy over 150 open source models: Amazon SageMaker JumpStart provides one-click deployable ML models and algorithms from popular model zoos, including PyTorch Hub and TensorFlow Hub. These models cover image classification, object detection, and language modeling use cases, minimizing the time to deploy ML models originating from outside of SageMaker.
 15+ pre-built solutions for common ML use cases: With Amazon
SageMaker JumpStart, you can move quickly from concept to production
with pre-built solutions that include all of the components needed to
deploy a ML application in SageMaker with a few clicks, including an AWS
CloudFormation template, reference architecture, and getting started
content. Solutions are fully customizable so you can easily modify them to fit your specific use case and dataset, and can be readily deployed with just a few clicks. These end-to-end solutions cover common use cases, from predictive maintenance and demand forecasting to fraud detection and personalized recommendations.
 Get started with just a few clicks: Amazon SageMaker JumpStart provides
notebooks, blogs, and video tutorials designed to help you when you
want to learn something new or encounter roadblocks. Content is easily
accessible within Amazon SageMaker Studio, enabling you to get started
with ML faster.

Resources: Website | What’s new post


AWS Categories: Analytics

AWS Glue DataBrew AWS Glue Elastic Views


What is it? What is it?
AWS Glue DataBrew is a new visual data preparation tool that makes it easy AWS Glue Elastic Views is a new capability of AWS Glue that makes it easy to
for data analysts and data scientists to clean and normalize data to prepare it build materialized views to combine and replicate data across multiple data
for analytics and machine learning. You can choose from over 250 pre-built stores without you having to write custom code.
transformations to automate data preparation tasks, all without the need to New applications and features often require you to combine data that
write any code. You can automate filtering anomalies, converting data to resides across multiple data stores, including relational and non-relational
standard formats, correcting invalid values, and other tasks. After your date requires manual work and custom code that can take months of
data is ready, you can immediately use it for analytics and machine learning date requires manual work and custom code that can take months of
projects. You only pay for what you use - no upfront commitment. development time.
Availability: With AWS Glue Elastic Views, you can use familiar Structured Query
us-east-1 (N. Virginia), us-east-2 (Ohio), us-west-2 (Oregon), eu-west-1 Language (SQL) to quickly create a virtual table—called a view—from
(Ireland), eu-central-1 (Frankfurt), ap-southeast-2 (Sydney), and ap- multiple different source data stores. Based on this view, AWS Glue Elastic
northeast-1 (Tokyo). Views copies data from each source data store and creates a replica—called
Use Cases: a materialized view—in a target database. AWS Glue Elastic Views monitors
 Self-service visual data preparation for analytics and machine learning: for changes to data in your source data stores continuously, and provides
AWS Glue DataBrew enables you to explore and experiment with data updates to your target data stores automatically, ensuring data accessed
directly from your data lake, data warehouses, and databases, including through the materialized view is always up-to-date.
Amazon S3, Amazon Redshift, AWS Lake Formation, Amazon Aurora, and Availability:
Amazon RDS. You can choose from over 250 prebuilt transformations in AWS Glue Elastic Views is available in limited preview in US East (N. Virginia),
AWS Glue DataBrew to automate data preparation tasks, such as filtering US East (Ohio), US West (Oregon), EU (Ireland), and Asia Pacific
anomalies, standardizing formats, and correcting invalid values. After the (Tokyo). Customers can apply for the preview here.
data is prepared, you can immediately use it for analytics and machine
learning. Use Cases:
 Combine data across multiple databases and data stores: AWS Glue Elastic
Customer Benefits: Views combines data from more than one data store in near-real time. For
 Profile data to evaluate data quality: Evaluate the quality of your data by example, you can combine data from an Amazon DynamoDB database
profiling it to understand data patterns and detect anomalies; connect with data from an Amazon Aurora database and copy it to Amazon
terabytes and even petabytes of data directly from your data lake, data Redshift.
warehouses, and databases.  Replicate data across multiple databases and data stores: AWS Glue Elastic
 Clean and Normalize data without writing code: Choose from over 250 Views replicates data across multiple databases and data stores. For
built-in transformations to visualize, clean, and normalize your data with example, you can create a copy of a DynamoDB table in Amazon
an interactive, point-and-click visual interface. Elasticsearch Service to enable full text search on the DynamoDB data.
 Map Data Lineage: Visually map the lineage of your data to understand  Integrate operational and analytical systems: AWS Glue Elastic Views
the various data sources and transformation steps that the data has been simplifies running analytical queries on your most recent operational data.
through. For example, you can create database views over data in your operational
 Automate data preparation tasks: Automate data cleaning and databases and materialize those views in your data warehouse or data
normalization tasks by applying saved transformations directly to new lake.
data as it comes into your source system.
Customer Benefits:
 Use familiar SQL to create a materialized view: AWS Glue Elastic Views
Resources: Website
enables you to create materialized views across many databases and data
stores using familiar SQL. AWS Glue Elastic Views supports Amazon
DynamoDB, Amazon Redshift, Amazon S3, and Amazon Elasticsearch
Service, with support for more data stores to follow.
 Copies data from each source data store to a target data store: AWS Glue
Elastic Views handles all of the heavy lifting of copying and combining data
from source to target data stores, without you having to write custom
code or use unfamiliar ETL tools and programming languages. AWS Glue
Elastic Views reduces the time it takes to combine and replicate data
across data stores from months to minutes.
 Automatically keeps the data in the target data store updated: AWS Glue
Elastic Views monitors for changes to data in your source data stores
continuously, and provides updates to your target data stores
automatically. This ensures that applications always access up-to-date
data in the materialized views.

Resources: Website
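For illustration only, a minimal AWS Glue DataBrew sketch using boto3 is shown below: it registers an S3 dataset and starts a recipe job that is assumed to have been authored visually in the DataBrew console. The bucket, prefix, dataset, and job names are placeholders, not values from this announcement.

# Minimal sketch: create a DataBrew dataset from S3 and start an existing recipe job.
import boto3

databrew = boto3.client("databrew")

databrew.create_dataset(
    Name="sales-raw",  # placeholder dataset name
    Input={
        "S3InputDefinition": {
            "Bucket": "my-data-lake",   # placeholder bucket
            "Key": "raw/sales/2020/",   # placeholder prefix
        }
    },
)

# Kick off a recipe job that was defined (visually) in the DataBrew console.
run = databrew.start_job_run(Name="clean-sales-recipe-job")  # placeholder job name
print("Started DataBrew job run:", run["RunId"])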
Amazon QuickSight Q Amazon Redshift AQUA
What is it? What is it?
Amazon QuickSight Q uses machine learning-powered, natural language Today, in the analytics press release, we announced that AQUA (Advanced
query (NLQ) technology to enable business users to ask ad-hoc questions of Query Accelerator) for Amazon Redshift preview is now open to all
their data in natural language and get answers in seconds. To ask a question, customers and AQUA will be generally available in January 2021.
users simply type it into the Amazon QuickSight Q search bar. Amazon AQUA is a new distributed and hardware-accelerated cache that enables
QuickSight Q uses machine learning (natural language processing, schema Redshift queries to run up to 10x faster than other cloud data warehouses.
understanding, and semantic parsing for SQL code generation) to generate a Existing data warehousing architectures with centralized storage require
data model that automatically understands the meaning of and relationships data be moved to compute clusters for processing. As data warehouses
between business data, so users can receive highly accurate answers to their continue to grow over the next few years, the network bandwidth needed to
business questions in seconds by simply using the business language that move all this data becomes a bottleneck on query performance.
they are used to. Amazon QuickSight Q comes pre-trained on large volumes
of real-world data from various domains and industries like sales, marketing, AQUA takes a new approach to cloud data warehousing. AQUA brings the
operations, retail, human resources, pharmaceuticals, insurance, energy, compute to storage by doing a substantial share of data processing in-place
and more, so it is already optimized to understand complex business on the innovative cache. In addition, it uses AWS-designed processors and a
language. For example, sales users can ask, “How is my sales tracking against scale-out architecture to accelerate data processing beyond anything
quota?”, or retail users can ask, “What are the top products sold week-over- traditional CPUs can do today.
week by region?” Furthermore, users can get more complete and accurate Availability:
answers because the query is applied to all of the data, not just the datasets Customers can sign up for the AQUA preview now and will be contacted
in a pre-determined model. And because Amazon QuickSight Q does this
automatically, it eliminates the need for BI teams to spend time building
and updating data models, saving weeks of effort. (Oregon), or us-east-2 (Ohio) regions.
Availability: Customer Benefits:
Amazon QuickSight Q will be in Gated Preview where customers need to  Brings compute closer to storage - AQUA accelerates Redshift queries by
sign up to get access. running data-intensive tasks such as filtering and aggregation
Use Cases: closer to the storage layer. This avoids networking bandwidth limitations
 Amazon QuickSight Q is optimized to understand complex business by eliminating unnecessary data movement between where data is stored
language and data models from multiple domains, including and compute clusters.
o Sales (“How is my sales tracking against quota?”)  Powered by AWS-Designed Processors - AQUA uses AWS-designed
o Marketing (“What is the conversion rate across my campaigns?”) processors to accelerate queries. This includes AWS Nitro chips adapted
o Retail (“What are the top products sold week over week by to speed up data encryption and compression, and custom analytics
region?”) processors, implemented in FPGAs, to accelerate operations such as
o HR, Advertising, amongst others filtering and aggregation.
 Scale out Architecture - AQUA can process large amounts of data in
Customer Benefits:
parallel across multiple nodes, and automatically scales out to add more
 Get answers in seconds: With Amazon QuickSight Q, business users can capacity as your storage needs grow over time.
simply type a question in plain English and get an answer such as a
number, chart, or table in seconds.
Resources: Website
 Use business language that you are used to: With Amazon QuickSight Q,
you can ask questions using phrases and business language that you use
every day as part of your functional or vertical domain. Amazon
QuickSight Q is optimized to understand complex business language and
data models from multiple domains
 Ask any question on all your data: Amazon QuickSight Q provides answers
to questions on all of your data. Unlike conventional NLQ-based BI tools,
Q is not limited to answering questions from a single dataset or
dashboard.

Resources: Website | What’s new post


Amazon Redshift ML
What is it? Amazon Redshift feature updates
Redshift ML is a new capability for Amazon Redshift that makes it easy for
data analysts and database developers to create, train, and deploy Amazon What is it?
SageMaker models using SQL. With Amazon Redshift ML, customers can use We announced several features for Amazon Redshift, including:
SQL statements to create and train Amazon SageMaker models on their data  Amazon Redshift data sharing (preview): A new way to securely share live
in Amazon Redshift and then use those models for predictions such as churn data across Redshift in an organization and externally. Data sharing
detection and risk scoring directly in their queries and reports. improves the agility of organizations by giving them instant, granular and
high-performance access to data across Redshift clusters without the
Availability: need to copy or move it. Data sharing provides live access to the data so
The Redshift ML preview is available in: us-east-1 (N. Virginia), us-east-2 that users can see the most up-to-date and consistent information as it is
(Ohio), us-west-2 (Oregon), ca-central-1 (Canada Central), eu-west-1 updated in the data warehouse.
(Ireland), eu-central-1 (Frankfurt), ap-northeast-1 (Tokyo), ap-southeast-2  RA3.xlplus GA: RA3 with managed storage enables customers to scale and
(Sydney), and ap-southeast-1 (Singapore) pay for compute and storage separately. This new, smaller, node size joins
Use Cases: the RA3.4xl and RA3.16xl nodes we launched last year.
 Predictive analytics with Amazon Redshift: With Redshift ML, you can  Amazon Redshift Automated Performance Tuning GA: A new self-tuning
embed predictions like churn prediction, fraud detection, and risk scoring capability, Automatic Table Optimization, optimizes the physical design of
directly in queries and reports. Use the SQL function to apply the ML tables by automatically setting sort and distribution keys to improve query
model to your data in queries, reports, and dashboards. For example, you speed, without requiring any administrator intervention.
can run the “customer churn” SQL function on new customer data in your  Partner console integration (preview): Enables customers to launch the
data warehouse on a regular basis to predict customers at risk of churn Partner Integration Wizard from the Redshift cluster details page and
and feed this information to your sales and marketing teams so they can select partners already integrated in the console to accelerate data
take preemptive action such as sending these customers an offer designed onboarding. Our launch partners include Matillion, Sisense, FiveTran,
to retain them. Segment and ETLeap.
Customer Benefits:  Cross-AZ cluster recovery: A new ability to move a cluster to another
 No prior ML experience needed: Redshift ML makes it easy to benefit from Availability Zone (AZ) without any loss of data or changes to your
the ML capabilities in Amazon SageMaker directly in Redshift so you don’t applications.
have to learn new platforms, tools, or languages. Redshift ML provides  Federated Query updates (preview): With Redshift Federated query,
simple, optimized, and secure integration between Redshift and Amazon customers can combine operational data that is stored in popular
SageMaker and enables inference within the Redshift cluster, making it databases such as RDS and Aurora PostgreSQL. Now, we also offer RDS
easy to use model predictions in queries and applications. There is no MySQL and Aurora MySQL support in preview.
need to manage a separate inference model endpoint, and the training  Native semi-structured data support with Super data type with JSON
data is secured end-to-end with encryption. support (preview): A new data type SUPER that will support nested data
 Use ML on your Redshift data using standard SQL: With Redshift ML you formats such as JSON and enable customers to ingest, store, and query
can create, train, and apply ML models on your Redshift data using nested data natively in Amazon Redshift. JSON formatted data can be
standard SQL. To get started, use the CREATE MODEL SQL command in stored in SUPER columns.
Redshift and specify training data either as a table or SELECT statement. Availability:
Redshift ML then compiles and imports the trained model inside the  Amazon Redshift data sharing: US East (Ohio), US East (N. Virginia), US
Redshift data warehouse and prepares a SQL inference function that can West (N. California), US West (Oregon), Europe (Frankfurt), Europe
be immediately used in SQL queries. Redshift ML automatically handles all (Ireland), Asia Pacific (Tokyo), Asia Pacific (Sydney), and Asia Pacific
the steps needed to train and deploy a model. (Seoul).
 RA3.xlplus nodes are generally available in Asia Pacific (Seoul, Sydney,
Resources: What’s New blog | External Webpage | Detailed Blog | Tokyo), Brazil (São Paulo), Canada (Central), EU (Ireland, Paris), US East (N.
Leadership authored Blog Virginia, Ohio), and US West (N. California, Oregon) regions.
 Automatic Table Optimization is available on Amazon Redshift version
1.0.21291 in all regions where the Redshift Advisor is available. Refer to
this link for Amazon Redshift Advisor availability.
 Partner console is available to new and existing customers. Refer to the
AWS Region Table for Amazon Redshift availability.
 Cluster relocation capability is available in all commercial regions where
the RA3 instance type is supported.
 Federated Query updates available to all Amazon Redshift customers for
preview. Refer to the AWS Region Table for Amazon Redshift availability.
 The support for native semi-structured data processing in Amazon
Redshift is available as public preview in SQL_PREVIEW track.
Resources: What’s new post [Data Sharing] | What’s new post [RA3] |
What’s new [Automated Performance Tuning] | What’s new [Partner
console] | What’s new [Cross-AZ cluster recovery] | What’s new [Federated
Query updates] | What’s new [Native semi-structured data support]
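For illustration only, the Redshift ML flow described above (CREATE MODEL followed by a SQL inference function) might look like the sketch below, using the redshift_connector Python driver. The cluster endpoint, credentials, table and column names, IAM role, and S3 bucket are placeholders, not values from this announcement.

# Minimal sketch of the Redshift ML flow: train a model with SQL, then call its
# inference function in a query. All identifiers below are placeholders.
import redshift_connector

conn = redshift_connector.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder endpoint
    database="dev",
    user="awsuser",
    password="********",
)
cur = conn.cursor()

# CREATE MODEL hands the SELECT result to Amazon SageMaker for training, then
# registers a SQL inference function (predict_churn) inside the cluster.
cur.execute("""
    CREATE MODEL customer_churn
    FROM (SELECT age, plan_type, monthly_spend, churned FROM customer_activity)
    TARGET churned
    FUNCTION predict_churn
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftMLRole'
    SETTINGS (S3_BUCKET 'my-redshift-ml-bucket')
""")

# Once training completes, the model is used like any other SQL function.
cur.execute("SELECT customer_id, predict_churn(age, plan_type, monthly_spend) FROM new_customers")
print(cur.fetchmany(5))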
AWS Lake Formation features: Transactions, Row-level Security, and Acceleration
What is it?
AWS Lake Formation transactions, row-level security, and acceleration are now available
for preview. These capabilities are available via new, open, and public update and access
APIs for data lakes. These APIs extend AWS Lake Formation’s governance capabilities with
row-level security. In addition, with this preview, we introduce governed tables - a new
Amazon S3 table type that supports atomic, consistent, isolated, and durable (ACID)
transactions. AWS Lake Formation transactions simplify ETL script and workflow
development, and allow multiple users to concurrently and reliably insert, delete, and
modify rows across multiple governed tables. AWS Lake Formation automatically compacts
and optimizes storage of governed tables in the background to improve query performance.
Availability:
This feature is in preview in the US East (N. Virginia) AWS Region.
Resources: Website
Amazon EMR on Amazon EKS
What is it?
Amazon EMR on Amazon EKS provides a new deployment option for Amazon EMR that allows you
to run Apache Spark on Amazon Elastic Kubernetes Service (Amazon EKS). If you already use
Amazon EMR, you can now run Amazon EMR based applications with other types of applications
on the same Amazon EKS cluster to improve resource utilization and simplify infrastructure
management across multiple AWS Availability Zones. If you already run big data frameworks
on Amazon EKS, you can now use Amazon EMR to automate provisioning and management, and run
Apache Spark up to 3x faster. With this deployment option, you can focus on running
analytics workloads while Amazon EMR on Amazon EKS builds, configures, and manages
containers.
Availability:
Amazon EMR on Amazon EKS is available in all commercial AWS Regions except for AWS China
(Beijing), AWS China (Ningxia), Asia Pacific (Osaka-Local), and AWS GovCloud (US) regions.
Use Cases:
 Consolidated Workloads: Amazon EMR on Amazon EKS can rapidly start
and run jobs from multiple customer organizations on the same
infrastructure. Cost sensitive development jobs can be executed on
compute provided by AWS Fargate, while production jobs requiring higher
performance can be backed by Amazon EC2 Reserved Instances.
Additional or unused capacity can be used for other containerized
workloads such as pre- or post-processing of the data.
 Low Latency batch jobs: Amazon EMR on Amazon EKS can begin running
jobs within seconds without having to wait for provisioning a dedicated
cluster. Jobs can then be scheduled at increasing frequency to provide
increased resolution of analytics.
 Distributed Analytics with Multi-AZ workloads: Amazon EMR on Amazon
EKS simplifies operations of Spark workloads by running the job within a
single AZ or, for higher availability, by spreading the job across multiple AZs.
Customer Benefits:
 Simplify Running Spark on Kubernetes: Amazon EKS provides customers
with a managed experience for running Kubernetes on AWS, enabling you
to add compute capacity using EKS Managed Node Groups or using AWS
Fargate. EMR jobs can access their data on Amazon S3 while monitoring
and logging can be integrated with Amazon CloudWatch. AWS Identity
and Access Management (IAM) enables role-based access control both for
jobs and for access to dependent AWS services.
 Consolidate workloads to run on Amazon EKS: Customers can run multiple
Spark jobs simultaneously alongside other containerized workloads on the
same Amazon EKS cluster. This results in reduced management overhead
and increased resource utilization.
 Run jobs without the need to provision clusters: A job’s dependencies and
configuration parameters are stored within the job definition. This
eliminates having to pre-create clusters that are tightly coupled to EMR
versions, Spark parameters or job dependencies. EMR on EKS deploys, on-
demand, the resources required to run the job based on the job
definition, avoiding the need for pre-provisioned clusters for ad-hoc,
interactive or batch workloads.
Resources: Website | What’s new post
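For illustration only, submitting a Spark job to an existing EMR on EKS virtual cluster could look like the boto3 sketch below. The virtual cluster ID, execution role ARN, release label, and S3 script path are placeholders, not values from this announcement.

# Minimal sketch: submit a Spark job to an EMR on EKS virtual cluster with boto3.
import boto3

emr = boto3.client("emr-containers")

response = emr.start_job_run(
    name="daily-aggregation",
    virtualClusterId="abcdefghijklmnopqrstuvwxy",                              # placeholder
    executionRoleArn="arn:aws:iam::123456789012:role/EMRContainersJobRole",    # placeholder
    releaseLabel="emr-6.2.0-latest",                                           # placeholder release
    jobDriver={
        "sparkSubmitJobDriver": {
            "entryPoint": "s3://my-bucket/jobs/aggregate.py",                  # placeholder script
            "sparkSubmitParameters": "--conf spark.executor.instances=2",
        }
    },
)
print("Submitted job run:", response["id"])

Because the job definition carries its own dependencies and configuration, no pre-provisioned EMR cluster is required; the resources are created on demand for the run.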
AWS Categories: Database

Amazon Aurora Serverless v2 Amazon Babelfish for Aurora PostgreSQL


What is it? What is it?
Amazon Aurora Serverless v2 (Preview) is the new version of Aurora Babelfish is a new translation layer for Amazon Aurora PostgreSQL that
Serverless, an on-demand, auto-scaling configuration of Amazon Aurora that enables Aurora to understand commands from applications written for
automatically starts up, shuts down, and scales capacity up or down based Microsoft SQL Server.
on your application's needs. It scales instantly from hundreds to hundreds- Migrating from legacy SQL Server databases can be time consuming and
of-thousands of transactions in a fraction of a second. As it scales, it adjusts resource intensive. When migrating your databases, you can automate the
capacity in fine-grained increments to provide just the right amount of migration of your database schema and data using the AWS Database
database resources that the application needs. There is no database capacity Migration Service (DMS), but there is often more work to do, to migrate the
for you to manage, you pay only for the capacity your application consumes, application itself including re-writing application code that interacts with the
and you can save up to 90% of your database cost compared to the cost of database.
provisioning capacity for peak load. Aurora Serverless v2 (Preview) is With Babelfish, Aurora PostgreSQL now understands T-SQL, Microsoft SQL
currently available in preview for Aurora with MySQL compatibility only. Server's proprietary SQL dialect, and supports the same communications
Availability: protocol, so your apps that were originally written for SQL Server can now
Amazon Aurora Serverless v2 is available in a gated preview for Amazon work with Aurora with fewer code changes. As a result, the effort required to
Aurora with MySQL compatibility in US East (N. Virginia) at this time. modify and move applications running on SQL Server 2014 or newer to
Aurora is reduced, leading to faster, lower risk, and more cost-effective
Use Cases:
migrations.
 Enterprise database fleet management: Enterprises with hundreds or
thousands of applications, each backed by one or more databases, must Availability:
manage resources for their entire database fleet. As application Available in preview in us-east-1. At GA, it will be available in all commercial
requirements fluctuate, continuously monitoring and adjusting capacity regions.
for each and every database to ensure high performance and high availability Customer Benefits:
while remaining under budget is a daunting task. With Aurora Serverless v2  Highly Scalable: Scale instantly, from hundreds to hundreds-of-thousands
(Preview), database capacity is automatically adjusted based on of transactions, in a fraction of a second.
application demand and you no longer need to manually manage  Reduce migration time and risk: With Babelfish, Amazon Aurora
thousands of databases in your database fleet. PostgreSQL supports commonly used T-SQL language and semantics which
 Software-as-a-Service applications: Software-as-a-Service (SaaS) vendors reduces the amount of code changes related to database calls in an
typically operate hundreds or thousands of Aurora databases, each application. As a result, the amount of application code you need to re-
supporting a different customer, in a single cluster to improve utilization write is minimized, reducing the risk of any new application errors.
and cost efficiency. With Aurora Serverless v2 (Preview), SaaS vendors can  Migrate at your own pace: With Babelfish, you can run SQL Server code
provision Aurora database clusters for each individual customer without side-by-side with new functionality built using native PostgreSQL APIs.
worrying about costs of provisioned capacity. It automatically shuts down Babelfish enables Aurora PostgreSQL to work with commonly-used SQL
databases when they are not in use to save costs and instantly adjusts Server query tools, commands, and drivers. As a result, you can continue
database capacity to meet changing application requirements. developing with the tools you are familiar with.
 Scaled-out databases split across multiple servers: Customers with high
write or read requirements often split databases across several instances Resources: Website | What’s new post
to achieve higher throughput. However, customers often provision too
many or too few instances, increasing cost or limiting scale. With Aurora
Serverless v2 (Preview), customers split databases across several Aurora
instances and let the service adjust capacity instantly and automatically
based on need.
Customer Benefits:
 Highly Scalable: Scale instantly, from hundreds to hundreds-of-thousands
of transactions, in a fraction of a second.
 Highly Available: Power your business critical workloads with the full
breadth of Aurora features, including backtrack, cloning, Global Database,
Multi-AZ, and read replicas.
 Cost effective: Scale in fine-grained increments to provide just the right
amount of database resources and pay only for capacity consumed.

Resources: Website
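For illustration only, the Babelfish idea described above can be sketched as follows: a client library that speaks SQL Server's TDS protocol (here, pymssql) connects to the Babelfish port on an Aurora PostgreSQL cluster and runs T-SQL unchanged. The endpoint, credentials, database, and table names are placeholders, not values from this announcement.

# Minimal sketch: an app written for SQL Server connecting to Aurora PostgreSQL
# through Babelfish, using a TDS client library. All identifiers are placeholders.
import pymssql

conn = pymssql.connect(
    server="my-aurora-babelfish.cluster-abc123.us-east-1.rds.amazonaws.com",  # placeholder
    port=1433,               # Babelfish listens on the familiar SQL Server TDS port
    user="app_user",         # placeholder
    password="********",
    database="orders_db",    # placeholder
)
cur = conn.cursor()

# T-SQL syntax (TOP, GETDATE) that an app written for SQL Server would already use.
cur.execute("SELECT TOP 5 order_id, total FROM dbo.orders WHERE created_at < GETDATE()")
for row in cur.fetchall():
    print(row)
conn.close()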
Amazon Neptune ML
What is it?
Amazon Neptune ML is a new capability of Amazon Neptune that uses Graph
Neural Networks (GNNs), a machine learning technique purpose-built for
graphs, to make easy, fast, and more accurate predictions using graph data.
With Neptune ML, you can improve the accuracy of most predictions for
graphs by over 50% when compared to making predictions using non-graph
methods.
Using the Deep Graph Library (DGL), an open-source library that makes it
easy to apply deep learning to graph data, Neptune ML automates the heavy
lifting of selecting and training the best ML model for graph data, and lets
users run machine learning on their graph directly using Neptune APIs and
queries. As a result, you can now create, train, and apply ML on Amazon
Neptune data in hours instead of weeks without the need to learn new tools
and ML technologies.
Availability:
Amazon Neptune ML is available in all AWS Regions where Neptune is
available. See details on the AWS Regions Table
Use Cases:
 Fraud Detection: Companies lose millions (even billions) of dollars to fraud,
and want to detect fraudulent users, accounts, devices, IP address or
credit cards to minimize the loss. You can use a graph-based
representation to capture the interaction of the entities (user, device or
card) and detect aggregations such as when a user initiates multiple mini
transactions or uses different accounts that are potentially fraudulent.
 Product recommendation: Traditional approaches use analytics
services manually to make product recommendations. Neptune ML can
identify new relationships directly on graph data, and easily recommend
the list of games a player would be interested in buying, other players to
follow, or products to purchase.
 Customer Acquisition: Neptune ML automatically recommends next steps,
or product discounts to certain customers based on where they are in the
acquisition funnel.
 Knowledge Graph: Knowledge graphs consolidate and integrate an
organization’s information assets and make them more readily available
to all members of the organization. Neptune ML can infer missing links
across data sources and identify similar entities, enabling better knowledge
discovery for all.
Customer Benefits:
 Make predictions on graph data without ML expertise: Neptune ML
automatically creates, trains, and applies ML models on your graph data.
It uses DGL to automatically choose and train the best ML model for your
workload, enabling you to make ML-based predictions on graph data in
hours instead of weeks.
 Improve the accuracy of most predictions by over 50%: Neptune ML uses
GNNs, a state-of-the-art ML technique applied to graph data that can reason
over billions of relationships in graphs, to enable you to make more
accurate predictions.
Resources: Website | What's new post | Leadership authored Blog
AWS Categories: Storage

Amazon EBS gp3 Volume Amazon EBS Provisioned IOPS Volume


What is it? What is it?
Amazon EBS gp3 volumes are the latest generation of general-purpose SSD- Provisioned IOPS volumes, backed by solid-state drives (SSDs), are the
based EBS volumes that enable customers to provision performance highest performance Elastic Block Store (EBS) storage volumes designed for
independent of storage capacity, while providing up to 20% lower price per your critical, IOPS-intensive and throughput-intensive workloads that require
GB than existing gp2 volumes. With gp3 volumes, customers can scale IOPS low latency.
(input/output operations per second) and throughput without needing to Now in Preview: io2 Block Express: Customers that need sub-millisecond
provision additional block storage capacity. This means customers only pay latency or need to go beyond the current single volume peak performance
for the storage they need. and throughput, can sign up for a preview of io2 volumes running on next
Customer Benefits: generation Amazon EBS storage server architecture (io2 Block Express).
 Ease of use: gp3 volumes take all the guesswork out of provisioning Designed to provide 4,000 MB/s throughput per volume, 256K IOPS/volume,
capacity and performance for your applications. You get sustained, up to 64 TiB storage capacity, and 1,000 IOPS/GB as well as 99.999%
baseline performance of 3,000 IOPS at any volume size. This means that durability and sub-millisecond latency. With io2 Block Express, customers
even if you don’t provision any IOPS, your applications will consistently now get SAN (Storage Area Network) like performance in a high durability
get this baseline performance for the smallest of volumes. For use cases block store in the cloud with the ability to scale, provision, and pay for just
where your application needs more performance than the baseline, you the capacity they need.
simply provision the IOPS or throughput you need, without having to add
more capacity. Resources: Website | What’s new post
 Higher performance and throughput: gp3 volumes make it easy and cost
effective for customers to meet the IOPS and throughput requirements
for the majority of their applications, including virtual desktops, medium
sized single instance databases such as Microsoft SQL Server and Oracle,
latency sensitive interactive applications based on frameworks like Kafka
and Spark, and dev/test environments. The new gp3 volumes deliver a
baseline performance of 3,000 IOPS and 125 MB/s at any volume size.
Customers looking for higher performance can scale up to 16,000 IOPS
and 1,000 MB/s for an additional fee.
 Lower cost: gp3 offers SSD-performance at a 20% lower cost per GB than
gp2 volumes. Furthermore, by decoupling storage performance from
capacity, you can easily provision higher IOPS and throughput without the
need to provision additional block storage capacity, thereby improving
performance and reducing costs.
Resources: Website | What’s new post
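For illustration only, provisioning a gp3 volume with IOPS and throughput set independently of size could look like the boto3 sketch below. The Availability Zone and performance numbers are placeholders chosen within gp3's documented limits.

# Minimal sketch: create a gp3 volume and dial IOPS/throughput independently of size.
import boto3

ec2 = boto3.client("ec2")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",  # placeholder AZ
    VolumeType="gp3",
    Size=200,          # GiB of storage
    Iops=6000,         # above the 3,000 IOPS baseline, no extra capacity required
    Throughput=500,    # MB/s, above the 125 MB/s baseline
)
print("Created volume:", volume["VolumeId"])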
AWS Categories: Mobile

AWS Amplify featuring New Admin UI


What is it?
AWS Amplify is a set of tools and services that can be used together or on
their own, to help front-end web and mobile developers build scalable full
stack applications, powered by AWS. With Amplify, you can configure app
backends and connect your app in minutes, deploy static web apps in a few
clicks, and easily manage app content outside the AWS console. Get to
market faster with AWS Amplify.

NEW! The Amplify admin UI is an abstraction layer on top of the Amplify CLI,
and lets you configure back-ends on AWS with a graphical user interface. It
also allows you to manage content, users, and user groups in the app and to
assign this access outside of the group of developers working on the application.
The admin UI does not require an AWS account until the point you need the
CLI.

Availability:
All AWS markets.
Customer Benefits:
 Easily manage app users and app content: The Amplify admin UI (NEW!)
provides even non-developers with administrative access to manage app
users and app content without an AWS account.
AWS Categories: Management and Governance

AWS Service Catalog AppRegistry


What is it?
AWS Service Catalog allows organizations to create and manage catalogs of
IT services that are approved for use on AWS. These IT services can include
everything from virtual machine images, servers, software, and databases to
complete multi-tier application architectures. AWS Service Catalog allows
you to centrally manage deployed IT services, your applications, resources,
and metadata. This helps you achieve consistent governance and meet your
compliance requirements, while enabling users to quickly deploy only the
approved IT services they need.
With AWS Service Catalog AppRegistry, organizations can understand the
application context of their AWS resources. You can define and manage your
applications and their metadata, to keep track of things like cost,
performance, security, compliance and operational status at the application
level.
Availability:
For a full list of supported AWS Regions, see details on the AWS Regions
Table.
Use Cases:
 Define and Manage Applications and Metadata
 Create application definitions that include resource collections and
metadata from AWS services and ISV partners.
 Integrate AppRegistry with your application development processes to
maintain a single source of truth.
 Get application context - Know what application your resource belongs to,
and vice versa.
Customer Benefits:
 Ensure compliance with corporate standards: AWS Service Catalog
provides a single location where organizations can centrally manage
catalogs of IT services. With AWS Service Catalog you can control which IT
services and versions are available, what is configured in each of the
available services, and who gets access permissions, by individual, group,
department, or cost center.
 Help employees quickly find and deploy approved IT services: With AWS
Service Catalog, you define your own catalog of AWS services and AWS
Marketplace software, and make them available for your organization.
Then, end users can discover and deploy IT services using a self-service
portal.
 Centrally manage IT service lifecycle: AWS Service Catalog enables you to
add new versions of IT services, and end users are notified so they can
keep abreast of the latest updates. With AWS Service Catalog you can
control the use of IT services by specifying constraints, such as limiting the
AWS regions in which a product can be launched.
Resources: Website
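For illustration only, defining an application in AppRegistry and associating an existing CloudFormation stack with it could look like the boto3 sketch below. The application name, description, and stack reference are placeholders, not values from this announcement.

# Minimal sketch: create an AppRegistry application and attach a deployed stack.
import uuid
import boto3

appregistry = boto3.client("servicecatalog-appregistry")

app = appregistry.create_application(
    name="payments-service",                                   # placeholder application name
    description="Payments workload resources and metadata",    # placeholder description
    clientToken=str(uuid.uuid4()),                              # idempotency token
)

# Attach a deployed CloudFormation stack so its resources inherit the application context.
appregistry.associate_resource(
    application=app["application"]["id"],
    resourceType="CFN_STACK",
    resource="payments-service-prod",                           # placeholder stack name
)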
AWS Categories: Security, Identity, and Compliance

AWS Audit Manager


What is it?
AWS Audit Manager helps you continuously audit your AWS usage to
simplify how you assess risk and compliance with regulations and industry
standards. Audit Manager automates evidence collection to make it easier to
assess if your policies, procedures, and activities, also known as controls, are
operating effectively. When it is time for an audit, AWS Audit Manager helps
you manage stakeholder reviews of your controls and enables you to build
audit-ready reports with much less manual effort.
Availability:
us-east-1 (N. Virginia), us-east-2 (Ohio), us-west-1 (N. California), us-west-2
(Oregon), ap-southeast-2 (Sydney), ap-northeast-1 (Tokyo), ap-southeast-1
(Singapore), eu-west-1 (Ireland), eu-central-1 (Frankfurt), eu-west-2
(London)
Use Cases:
 Transition from manual to automated evidence collection: AWS Audit
Manager enables you to move from manually collecting, reviewing, and
managing evidence to a solution that automates evidence collection and
helps to manage evidence security and integrity.
 Continuous auditing and compliance: With AWS Audit Manager, you have
an increased level of transparency into usage activity and changes in the
environment. You can continuously collect evidence, monitor your
compliance posture, and proactively reduce risk by fine-tuning your
controls.
 Internal risk assessments: Easily perform assessments to help assess risks
unique to your business. You can customize a prebuilt framework or build
your own framework from scratch. Then, launch an assessment to
automatically collect evidence helping you validate if your internal
controls are working as intended.
Customer Benefits:
 Easily map your AWS usage to controls: AWS Audit Manager provides
prebuilt frameworks that include mappings of AWS resources to control
requirements for well-known industry standards and regulations. A
prebuilt framework includes a collection of controls with descriptions and
testing information, which are grouped in accordance with the
requirements of an industry standard or regulation, such as CIS AWS
Foundations Benchmarks, GDPR, or PCI DSS. You can fully customize these
prebuilt frameworks and controls to tailor them to your unique needs.
 Save time with automated collection of evidence: AWS Audit Manager
saves you time by automatically collecting and organizing evidence as
defined by each control requirement. With Audit Manager, you can focus
on reviewing the relevant evidence to ensure your controls are working as
intended. For example, you can configure an Audit Manager assessment
to automatically collect configuration snapshots from resources on a
daily, weekly, or monthly basis, subject to underlying AWS service
configurations.
 Streamline collaboration across teams: AWS Audit Manager helps you
streamline audit stakeholder collaboration. For example, the delegation
feature enables you to assign controls in your assessment to a subject
matter expert to review. You might delegate to a network security
engineer to confirm the evidence properly demonstrates that you meet a
specific security requirement. Audit Manager also allows team members
to comment on evidence, upload manual evidence, and update the status
of each control.

Resources: Website | What’s new post
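For illustration only, a first step with AWS Audit Manager from code is listing the prebuilt (standard) frameworks before creating an assessment from one of them; a minimal boto3 sketch is shown below and assumes no specific account setup.

# Minimal sketch: list the prebuilt Audit Manager frameworks with boto3.
import boto3

auditmanager = boto3.client("auditmanager")

frameworks = auditmanager.list_assessment_frameworks(frameworkType="Standard")
for fw in frameworks["frameworkMetadataList"]:
    print(fw["name"], "-", fw.get("complianceType", "n/a"))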


AWS Categories: Marketplace

Professional Services in AWS Marketplace Managed Entitlements in AWS License Manager


What is it? What is it?
Professional Services available in AWS Marketplace enable you to find and AWS License Manager makes it easier to manage your software licenses
buy assessments, implementation, support, managed services, and training from vendors such as Microsoft, SAP, Oracle, and IBM across AWS and on-
for your third-party software. AWS Marketplace helps you find the software premises environments. AWS License Manager lets administrators create
and associated services you need to innovate all in one place, simplifying customized licensing rules that mirror the terms of their licensing
procurement. You can discover complete business solutions and curated agreements. Administrators can use these rules to help prevent licensing
service offerings from independent software vendors and consulting violations, such as using more licenses than an agreement stipulates. Rules in
partners, and select payment options and contract terms that fit your needs. AWS License Manager help prevent a licensing breach by stopping the
All charges are simplified onto your AWS bill. instance from launching or by notifying administrators about the
infringement. Administrators gain control and visibility of all their licenses
You can get started today using AWS Marketplace, and take advantage of with the AWS License Manager dashboard and reduce the risk of non-
streamlined vendor onboarding and standardized license terms to accelerate compliance, misreporting, and additional costs due to licensing overages.
your time to contract. Independent software vendors (ISVs) can also use AWS License Manager to
Availability: easily distribute and track licenses.
Professional Services in AWS Marketplace is available in 24 AWS Regions, see AWS License Manager also simplifies the management of your software
details on the AWS Regions Table. licenses that require Amazon EC2 Dedicated Hosts. In AWS License Manager,
Professional Services Categories: administrators can specify their Dedicated Host management preferences
 Implementation: Help with configuration, set up, and deployment of for host allocation and host capacity utilization. Once set up, AWS License
third-party software Manager takes care of these administrative tasks on your behalf, so that you
 Assessments: Evaluation of your current operating environment to find can seamlessly launch instances just like you would launch an EC2 instance
the right software for your organization with AWS-provided licenses.
 Premium Support: Access to guidance and assistance from independent At re:Invent we announced new capabilities allowing you purchase software
software vendors and consulting partners, designed for your needs licenses in AWS Marketplace and track them in AWS License Manager
 Managed Services: End-to-end environment management from managed entitlements.
independent software vendors or consulting partners on your behalf Customer Benefits:
 Training: Tailored workshops, programs, and educational tools provided  Gain control over license usage: The way organizations manage licenses
by experts to help your employees learn best practices can vary from using simple spreadsheets to highly customized solutions.
Customer Benefits: Often, these approaches require manual and ad-hoc reporting that can be
 Find & buy complete cloud solutions: Purchase an end-to-end business inaccurate and quickly outdated. With AWS License Manager,
solution- all in one place. With Professional Services, you can discover administrators can create custom licensing rules, provision, and track
curated offerings and request associated services alongside your third- licenses across multiple accounts on AWS and on-premises environments.
party software, so accessing the tools you need is easy. AWS License Manager centralizes license usage, providing organizations
 Simplify procurement cycles: Streamline subscriptions as you engage with with greater visibility and control over how software licenses are used and
independent software vendors and consulting partners. AWS Marketplace can prevent misuse before it happens.
enables you to cut down on onboarding time and quickly get the software  Reduce costs: AWS License Manager provides a centralized view of license
and associated services you need. usage, so that administrators can determine the right number of licenses
 Customize pricing, payment schedule, & terms: Obtain payment, contract required, and not purchase more licenses than needed. With this
terms, and pricing options that best fit your organization’s needs. You can improved visibility, you can also control overages and avoid penalties from
pay all charges up front, schedule predictable payments over time, or licensing audits.
define contract terms to align with your requirements. Resources: Website
Resources: Website
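For illustration only, a license configuration in AWS License Manager that mirrors an agreement's terms (here, a hypothetical 100-vCPU hard limit) could be created with the boto3 sketch below; the name, description, and limits are placeholders, not values from this announcement.

# Minimal sketch: create a License Manager configuration with a hard vCPU limit.
import boto3

license_manager = boto3.client("license-manager")

config = license_manager.create_license_configuration(
    Name="sql-server-enterprise",                                  # placeholder name
    Description="Tracks vCPU-based licenses for a hypothetical agreement",
    LicenseCountingType="vCPU",
    LicenseCount=100,
    LicenseCountHardLimit=True,   # block launches that would exceed the agreement
)
print("License configuration ARN:", config["LicenseConfigurationArn"])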
AWS Categories: Partners

AWS SaaS Boost SaaS Lens for the Well Architected Tool
What is it? What is it?
AWS SaaS Boost is an open source ready-to-use reference environment that The SaaS Lens for the AWS Well-Architected Tool enables customers to
helps Independent Software Vendors (ISVs) accelerate their move to review and improve their cloud-based architectures and better understand
Software-as-a-Service (SaaS). From small specialized software businesses to the business impact of their design decisions. The SaaS Lens for the AWS
large global solution providers, AWS SaaS Boost helps you accelerate moving Well-Architected Tool measures architecture against best practices and
your applications to AWS with minimal modifications. Build, provision, and provides actionable insights to achieve a well-architected system that is
manage your SaaS environment with greater confidence based on AWS best more likely to achieve reliability, security, efficiency, and cost-effectiveness
practices and proven patterns from hundreds of successful SaaS companies. in the cloud.
Availability:
Available in all regions. See details in the AWS Regions Table.
Customer Benefits:
 Accelerate development of a SaaS model on AWS with fewer resources.
 Remove the complexity and risk of building SaaS so product teams can focus on
customer experience and innovation.
 Simplify SaaS operations with out-of-the-box availability of key processes including
automated onboarding, tenant monitoring, and upgrade orchestration.
Resources: Website | Blog
Use cases:
 Kick-off your SaaS journey: Leverage the SaaS Lens whitepaper, best practices, and
improvement plans as a starting place for technical teams to learn development concepts
and begin the journey to SaaS.
 Improve your architecture: Review your SaaS workload against a list of best practices
and leverage the improvement plans and resources to gain knowledge to improve systems.
 Identify and resolve risks: The SaaS Lens for the Well-Architected Tool provides
guidance to identify Medium-Risk and High-Risk issues that can impact your development
roadmap.
Customer Benefits:
 Learn architectural best practices for designing and operating systems in the cloud.
 Measure your architecture against best practices and receive actionable insights for
improvement.
 Have a well-architected system that is more likely to achieve reliability, security,
efficiency, and cost-effectiveness in the cloud.
Resources: Website | Whitepaper | Blog
AWS SaaS Factory Insights Hub
What is it?
The AWS SaaS Factory Insights Hub is a growing library of business and
technical content to help customers gain insights, make informed decisions,
and enable themselves at any stage of the software-as-a-service (SaaS)
journey on AWS. AWS Partners can search by topics most relevant to their
Foundational Technical Review Lens in the Well-
business, content types, or specific business or technical role to find Architected Tool
whitepapers, case studies, best practices, videos, and more.
What is it?
Use cases: The AWS Foundational Technical Review (FTR) Lens in the AWS Well-
 Whether you work for or with an organization offering SaaS solutions to Architected Tool provides a self-service way for AWS Partners to prepare for
customers, or you just want to take your SaaS knowledge to the next the Foundational Technical Review (formerly known as the Technical
level, the AWS SaaS Factory Insights Hub will help customers stay up-to- Baseline Review). The AWS Well-Architected FTR Lens includes best practices
date on all things SaaS on AWS. for security, reliability, and operational excellence, representing the best-
Customer Benefits: practice requirements necessary for membership in the AWS Partner
Network. These best practices help partners take their first step to becoming
 AWS SaaS Factory Insights Hub allows customers to search and browse
Well-Architected.
available resources by role, knowledge level, content category, content
type, or keywords. Customers can also view all new and featured content Availability:
to follow the latest updates from the AWS SaaS Factory team. They can Available to customers and AWS Partners at no additional charge and is
find various resources covering both business and technical aspects of a offered in all Regions where the AWS Well-Architected Tool is available.
SaaS delivery model, such as SaaS 101, SaaS product strategy, go-to- Customer Benefits:
market (GTM), packaging and pricing, migration strategies, billing and  Easily identify risks in your architectures related to the Foundational
metering, tenant isolation, and data partitioning. Technical Review
Resources: Website | Blog  Identify how to make workload improvements, mitigate risks, and
successfully complete the FTR.
Resources: Website | User Guide | Foundational Technical Page
ISV Partner Path ProServe Ready
What is it? What is it?
ISV Partner Path, a distinct partner journey enabling a streamlined AWS The Public Sector ProServe Ready program provides AWS Consulting
Partner Network (APN) experience for Independent Software Vendors (ISVs) Partners a formal and standardized way to work with AWS Professional
to build, market, and sell their solutions on AWS. ISV Partner Path Services (“ProServe”) on subcontracted engagements with AWS customers.
accelerates an ISV’s engagement with AWS through prescriptive guidance, Bringing ProServe Ready to our Public Sector Partners and customers
curated programs, focused benefits, Marketplace capabilities, and unique accelerates our customers’ journey to the cloud. ProServe Ready offers
co-selling access—all accessible with no tier-based requirements. We are partners formalized training on ProServe best practices, enabling them to
introducing a new partner journey (ISV) in addition to the two (Consulting work seamlessly with AWS ProServe.
and Technology) today, separating ISV from the Technology Partner journey.
Availability:
We will not use APN Tiers (Registered, Select, Advanced) in the ISV Partner
Currently in pilot in the US and EMEA
Path as the default leveling framework for ISV Partners.
Customer Benefits:
Availability:
ISV Partner Path will be available in January 2021, following the  Learn architectural best practices for designing and operating systems in
announcement at re:Invent on December 3, 2020. the cloud.
 Measure your architecture against best practices and receive actionable
Customer Benefits: insights for improvement.
 Introducing ISV Partner Path allows us to remove the previous challenges  Have a well-architected system that is more likely to achieve reliability,
that ISVs had with the tier structure, as well as reducing requirements for security, efficiency, and cost-effectiveness in the cloud.
entry, thereby enabling them to engage more quickly with AWS.
 We will focus on the Partner solution instead of the Partner tier, which
makes this more relevant to the way that this Partner type goes to
market with their customers.

Think Big for Small Business Pilot AWS Public Safety & Disaster Response
What is it? Competency Expands to include Technology
Think Big for Small Business is an AWS Partner Network (APN) program to
further enable and accelerate Small and/or Diverse Partners (often Partners
designated as Minority-Owned Business). The Program addresses their What is it?
challenges in meeting APN tier requirements and incentivizes partners to We are excited to launch an additional track within this AWS Competency
grow and sustain their AWS businesses. that showcases specialized and dedicated AWS Technology Partners.
Availability: The expansion includes the addition of 16 solutions from independent
Ongoing global pilot software vendors (ISVs) that deliver AWS Partner technology for emergency
management operations, justice public safety applications, PSDR
infrastructure resilience and recovery, 911 and emergency communications,
and PSDR data and analytics.
Resources: Website
Benefits:
 The Program provides small/diverse partners in Registered and Select
Tiers with provisional access to APN tier benefits through a set of
requirements proportional to partner size, essentially giving them more
time and needed resources to achieve APN requirements. It also offers a
limited-time Technical Capability discount to small/diverse partners in the
Public Sector Solution Provider Program and Public Sector Distribution
while they work towards a competency. In addition, participating partners
will have access to a Small Partner Guide to navigate all relevant AWS
programs and resources for growing their business with AWS.
AWS Partner Security Solutions for Government AI and ML Rapid Adoption Assistance
Workloads For Public Sector Partners
What is it? What is it?
Government agencies and public sector organizations need rapidly The American AI Initiative directs U.S. government agencies to double down
deployable and dependable security solutions to support their missions. To on efforts to advance artificial intelligence (AI) in order to protect and
respond, Amazon Web Services (AWS) launched the Security Solutions for improve the security and economy of our nation. AI and related technologies
Government Workloads initiative under the Authority to Operate (ATO) on (including machine learning [ML] and deep learning [DL]) can effectively
AWS Program. This initiative works with Public Sector partners, members of transform the way the government operates.
the AWS Partner Network (APN), to develop security solutions designed to
meet the unique security and compliance requirements of public sector AI and ML Rapid Adoption Assistance, is an additional benefit available for
workloads. members of the Public Sector Partner (PSP) Program under the AWS Partner
Network (APN). This initiative provides partners with a direct, scalable, and
The Security Solutions for Government Workloads initiative provides six
automated mechanism to reach out to AWS experts for assistance in
different partner-designed offerings to support remote workforce security
delivering AI-based solutions that can help U.S. government agencies
and web portal security for customer workloads.
provide better services United States residents.
AWS Public Sector Partners configure and manage these repeatable
Partner Benefits:
packages. This model enables global scalability and availability while
supporting localized customizations for unique markets.  Reduce ramp-up time for your AI and ML applications and deliver
advanced technology solutions: The AWS AI and ML subject matter
Customer Benefits: experts will help partners build an AI and ML roadmap and accelerate
 Rapid solution deployment: Reduce ramp-up time and accelerate security their solution development by guiding through the envision, enablement,
capabilities for government and public sector customer workloads by and building phases.
using pre-configured and/or managed solutions.  Differentiate your business and grow your AWS practice: Develop a
 High standards for privacy and data security: Deploy security solutions business plan to expand your public sector customer base through the
configured and managed by AWS Public Sector Partners with a focus on American AI Initiative. Achieve recognition for your AI and ML solutions
end-to-end security enforcement and automation. through the government, education and nonprofit competencies, the
 Comprehensive security and compliance controls: Meet security and AWS GovCloud (US) skill, and AWS solution provider programs.
compliance standards for finance, retail, healthcare, government, and  Simplify cloud procurement strategy and build your portfolio: Set the
more with third-party validation of global compliance requirements stage to win business and contracts in public sector with dedicated
achieved and continually monitored by AWS to help customers. support from the public sector bid and proposal team. Develop core go-
to-market assets to highlight your expertise on AWS AI and ML and earn
Resources: Website
trust with customers.
Resources: Website

AWS Mainframe Migration Competency
What is it?
Recognizing the complexity of a mainframe migration, our customers seek proven methodologies, tools, and best practices to empower successful migrations. The AWS Partner Network (APN) plays a critical role in these efforts by providing proven technology products and services for customers' mainframe migrations.
We are excited to pre-announce the launch of the AWS Mainframe Migration category within the AWS Migration Competency. These validations give customers confidence in choosing AWS Partner solutions.
AWS Partner solutions:
• The AWS Mainframe Migration Technology Partners category recognizes AWS Partners with proven technology and customer success, migrating both mainframe applications and data to AWS.
• The AWS Mainframe Migration Consulting Partners category recognizes AWS Partners with mature practices and a track record of successful mainframe workload migrations.
Resources: Blog

AWS Energy Competency
What is it?
AWS Partners play a vital role in these efforts by supporting our customers worldwide and building, implementing, and integrating technology to enable the transformation of complex business and operational systems. This helps energy companies drive cost reduction and efficiency, and deliver step-change innovations that position operators for success as energy portfolios transition to a lower carbon world.
We are excited to pre-announce the AWS Energy Competency Program, which will formally launch in 2021 and introduce AWS Technology and Consulting Partners who have achieved this high-bar designation.
Resources: Blog
AWS ISV Accelerate Program
What is it?
The AWS ISV Accelerate Program is a co-sell program for AWS Partners who provide software solutions that run on or integrate with AWS. The program helps you drive new business and accelerate sales cycles by connecting participating Independent Software Vendors (ISVs) with the AWS Sales organization.
The AWS ISV Accelerate Program provides you with co-sell support and benefits to easily gain access to millions of active AWS customers with AWS field sellers globally. Co-selling provides better customer outcomes and assures mutual commitment from AWS and Partners.
Partner Benefits:
• Drive visibility with AWS Sales: Your solutions will be included in an AWS Account Manager-facing solution library with links to your solution collateral (e.g. sales and solution briefs). Additionally, you are eligible to participate in activities that help you drive awareness with the AWS Sales teams.
• Focused co-sell support and resources: You will gain prioritized access to the AWS co-sell support team, which is aligned with AWS Account Managers working closely with AWS customers to drive adoption of ISV solutions. You will have access to webinars that provide guidance on how to successfully work with the AWS Sales organization.
• Reduced AWS Marketplace listing fees: AWS Marketplace is a digital catalog with thousands of software listings from ISVs that make it easy for customers to find, buy, deploy, and manage software that runs on AWS. You are eligible for reduced listing fees for selling your solutions on the AWS Marketplace.
Resources: Website

Updates to Authority to Operate (ATO) on AWS Program
What is it?
The Authority to Operate (ATO) on AWS is an Amazon Web Services (AWS) Partner Network (APN) program which provides resources to solution providers running on AWS who need assistance in their pursuit of a compliance authorization. In addition to FedRAMP, the ATO on AWS Program now supports 1) Financial: PCI-DSS, 2) Health: HIPAA/HITRUST, 3) Public safety & tax: CJIS, IRS 1075, 4) International: IRAP, Protected B, GDPR, and 5) Defense: DoD IL4 / IL5 / IL6, CMMC.
Partner Benefits:
• Accelerates the security & compliance authorization process
• Reduces cost & time (average 18-24 months for FedRAMP)
• Provides reusable artifacts, including guidance, templates, tools, and pre-built templates for APN Solutions
• Builds and optimizes DevOps, SecOps, Continuous Integration/Continuous Delivery (CI/CD), and Continuous Risk Treatment (CRT) strategies
• Develops proven techniques using the AWS Security Automation and Orchestration (SAO) methodology
Resources: Website
AWS Outposts Partners
What is it?
AWS Outposts Ready Partners offer products that integrate with AWS Outposts deployments. Customers can discover products on this page that are tested on AWS Outposts and follow AWS security and architecture best practices. AWS Competency Partners are ready to help AWS customers migrate and deploy their applications to AWS Outposts.
Customer benefits:
• The AWS Service Ready Program helps AWS customers find AWS Technology Partner products that integrate with specific AWS services. These AWS Partners have demonstrated experience and success helping AWS customers evaluate and use their technology productively, at scale with varying levels of complexity.
• The AWS Partner Competency Program has validated that the partners below have demonstrated they can help enterprise customers migrate applications and legacy infrastructure to AWS.
Resources: External Website

AWS Travel and Hospitality Competency
What is it?
AWS Travel and Hospitality Competency Technology and Consulting Partners provide technology products and services to accelerate the industry's modernization and innovation journey, from behind-the-scenes operational efficiencies to guest-facing customer experiences. These include a 360-degree view of customer and operational data, digital engagement with customers, connected experiences with smart assets, and modernized core travel and hospitality applications. These AWS Partners are validated for technical proficiency and customers' success to help travel and hospitality organizations build a resilient business and accelerate innovation.
Availability
The AWS Travel and Hospitality Competency is now open to AWS Partners who provide specialized industry solutions. The AWS T&H Competency Validation Checklists for ISV and Consulting Partners provide the criteria necessary to achieve the AWS Travel and Hospitality Competency designation.
Benefits
• Introducing the AWS Travel and Hospitality Competency will help support customers in building industry resilience for the long run
• This Competency takes on the heavy lifting of identifying and validating the most experienced AWS Partners who can help industry customers succeed, specifically at times of disruption
Resources: Blog
RDS Service Delivery Program
What is it?
The Amazon RDS Service Delivery Program validates SI Partners for following
best practices with Amazon RDS, demonstrating technical proficiency and
proven customer success by specific database engine type. Amazon RDS
Service Delivery Partners have proven success helping customers with
database monitoring, security, and performance using Amazon RDS database
engines (Aurora MySQL, Aurora PostgreSQL, RDS for PostgreSQL, RDS for
MySQL, RDS for MariaDB, RDS for Oracle, and RDS for SQL Server). Amazon
RDS Service Delivery Partners are vetted through a rigorous technical
validation by individual database engine type that verifies they have
successfully implemented Amazon RDS for customers and followed AWS
best practices for the service.
Customer Benefits:
• Customers and database service teams can now identify Partners validated for specific RDS database engine proficiency (e.g. Aurora MySQL, RDS for Oracle).
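As a purely illustrative aside, not part of the announcement, the short Python sketch below uses the AWS SDK for Python (boto3) to list a few of the versions RDS reports for each engine family named above. It assumes boto3 is installed and AWS credentials are already configured; the engine strings are the standard RDS API identifiers for those engine families.

import boto3

# Minimal sketch (assumes configured AWS credentials): list a few available
# versions for each RDS engine family covered by the Service Delivery Program.
rds = boto3.client("rds")

engines = [
    "aurora-mysql",       # Aurora MySQL
    "aurora-postgresql",  # Aurora PostgreSQL
    "postgres",           # RDS for PostgreSQL
    "mysql",              # RDS for MySQL
    "mariadb",            # RDS for MariaDB
    "oracle-ee",          # RDS for Oracle (Enterprise Edition)
    "sqlserver-se",       # RDS for SQL Server (Standard Edition)
]

for engine in engines:
    versions = rds.describe_db_engine_versions(Engine=engine)["DBEngineVersions"]
    print(engine, "->", [v["EngineVersion"] for v in versions[:3]])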
Resources: Website | Blog | PSF | PDP