Cloud Concepts

Cloud computing is a remote, virtual pool of on-demand shared resources offering Compute, Storage,
Database and Network services that can be rapidly deployed at scale.

Virtualization – Shared hardware is achieved through a hypervisor.

A hypervisor is a piece of software used to create a virtualized environment, allowing multiple VMs to
be installed on the same host.

Benefits of Virtualization:

 Reduced capital expenditure.


 Reduced operating costs.
 Smaller footprints.
 Optimization of resources.

Compute objects provide the brains to process your workload.

Storage resources allow you to save and store your data.

Database resources allow you to store structured sets of data used by your applications.

Network resources provide the connectivity allowing all other resources (compute/storage/database) to
communicate with each other.

Cloud Deployment Models

 Public Cloud – A vendor makes available the use of shared infrastructure, including compute,
storage, database and network resources. It can be provisioned on demand and typically accessed
over the internet for public usage. The consumer will never see the hardware or know the exact
physical location of their data.
 Private Cloud – A private cloud differs from a public cloud in that the infrastructure is privately
hosted, managed and owned by the individual company using it.
 Hybrid Cloud – A hybrid cloud is a model that makes use of both public and private clouds. This
model may be used for seasonal burst traffic or for disaster recovery. These are normally short-
term configurations (for example for test/dev purposes) and can often be a transitional state for
enterprises.

Key Cloud Concepts

 On-demand Resourcing – When you want to provision a resource within the cloud, it is almost
immediately available to you to allocate where and when you need it.
 Scalability – Cloud computing offers you the ability to rapidly scale your environment's resources
both up and down, and in and out.
 Economy of Scale – The huge scale of resources that public cloud offerings provide results in
exceptionally low resource costs compared to traditional hosting.
 Flexibility and Elasticity – You can choose the amount of resources you require, how much and
how long you want them for, and at what scale.
 Growth – Cloud computing offers your organisation the ability to grow using a wide range of
resources and services.
 Utility Based Metering – With many cloud services, you only pay for what you use.
 Shared Infrastructure – Hosts within the cloud are virtualized; as a result, multiple tenants can be
running instances on the same hardware.
 Highly Available – Many of the core services within the public cloud and its underlying
infrastructure are replicated across different geographic zones and regions.
 Security – This is achieved by adhering to global compliance programs across multiple industries
and by applying the shared responsibility model.

Cloud Service Models

 Infrastructure as a Service (IaaS) – IaaS offers the highest level of customization and management.
This service allows you to architect your own portion of the cloud by configuring a virtual
network.
 Platform as a Service (PaaS) – PaaS providers offer a greater level of management and control on
your behalf. You have access to a framework from the operating system upwards, while the underlying
architecture, host hardware, network components and OS are typically managed, maintained and supported
by the vendor. This makes it a great deployment service for developers.
 Software as a Service (SaaS) – SaaS offerings are usually simple in design, focusing on ease of use
to appeal to the widest audience. E.g. Gmail is fully managed and accessed over the internet;
there is no requirement to install any software on the local device.
 Disaster Recovery as a Service (DRaaS)
 Communication as a Service (CaaS)
 Monitoring as a Service (MaaS)

Common Use cases of Cloud Computing

 Traffic Bursting – You experience times within the year (predictable seasonal peaks) where your
infrastructure takes a heavier load than at other times of the year.
 Backup / DR – The public cloud's built-in resiliency and durability make it a great solution for
backup requirements.
 Web Hosting – Many organisations choose to host their web services on the cloud due to its
ability to load balance across multiple instances and scale up and down quickly and
automatically as traffic increases and decreases.
 Test/Dev Environments – Using the public cloud allows you to spin up servers as and when you
need them and then shut them down when finished.
 Proof of Concept – The cloud allows you to implement a proof of concept design and bring it
to life at a fraction of the cost.
 Big Data/Data Manipulation – The cloud also makes it easier and cheaper to manage big data.

AWS Well-Architected Framework – Documentation provided by AWS in which best practices and lessons
learned are recorded so that all customers can benefit from the collected know-how. The 6 pillars are:

 Security – Protecting your data and systems.
 Reliability – Dynamically acquiring compute resources to meet demand, and recovering
from infrastructure or service failures by implementing enough redundancy, backup,
restore, and recovery procedures.
 Operational Excellence – Running your workloads with enough automation and visibility to gain
insights into day-to-day operations.
 Performance Efficiency – Using compute resources efficiently while minimizing over-provisioning
as utilization fluctuates.
 Cost Optimization – Eliminating unneeded expenses.
 Sustainability – Minimizing energy consumption and environmental impact.
Compute
Compute is closely related to common server components such as CPUs and RAM. A physical server
within a data center would be considered a Compute resource as it may have multiple CPUs and many
Gigabytes of RAM.

Elastic Compute Cloud (EC2) – EC2 allows you to deploy virtual servers within your AWS environment.
Most people will require an EC2 instance within their environment as part of at least one of their
solutions.

The EC2 service can be broken down into the following components:

 Amazon Machine Images (AMIs) – AMIs are essentially templates of pre-configured EC2
instances which allow you to quickly launch new EC2 instances based on the configuration within
the AMI.
 Instance Types – An instance type simply defines the size of the instance based on different
parameters for CPU, memory and networking capacity.
o Micro instances – low cost, minimal CPU; can be used for low-traffic websites.
o General Purpose – balance of compute, memory and network resources; can be used
for small and medium databases and data processing tasks.
o Compute Optimized – high CPU compared to memory; can be used for high-traffic
front ends and on-demand batch processing.
o GPU instances – provide Graphics Processing Units, used for graphics-intensive applications.
o FPGA instances – provide field programmable gate arrays that can be programmed
to create application-specific hardware accelerations, e.g. for financial computing.
o Memory Optimized – used for applications that need to process large amounts of data in memory.
o Storage Optimized – used for workloads that have high I/O and high storage capacity
requirements.
 Instance Purchasing Options – You can purchase EC2 instances using a variety of purchase options.
o On-Demand Instances – can be launched at any time and used for as long as you want.
Suited to short-term uses such as testing and development environments.
o Reserved Instances – purchased for a set term of 1 or 3 years at a reduced cost
compared to On-Demand instances.
o Scheduled Instances – similar to Reserved Instances; you pay for the reservation on a
recurring schedule, either daily, weekly or monthly. Cheaper than On-Demand, but you
are charged even if you do not use the instance.
o Spot Instances – allow you to bid for unused EC2 compute capacity. There is no
guarantee of capacity for a fixed period of time. Your bid must be higher than the current Spot
price, which fluctuates with supply and demand. As soon as your bid price falls below the
Spot price you are given a 2-minute warning before the instance is terminated and
removed. Useful for processing workloads that can tolerate sudden interruption.
o On-Demand Capacity Reservations – reserve capacity based on different attributes
such as instance type, platform and tenancy, within a particular Availability Zone for a
period of time.
 Tenancy – This relates to the underlying host your EC2 instances will reside on, essentially
the physical server within an AWS Data Center.
o Shared Tenancy – The EC2 instance is launched on any available host with the required
resources; the same host may be used by multiple customers.
o Dedicated Tenancy – Hosted on hardware that no other customer can access, at an
additional cost. The hardware may still be shared by other instances running under the same account.
o Dedicated Host – Gives additional visibility and control of the physical host and allows you to
use the same host for a number of instances.
 User Data – It allows you to enter commands that will run during the first boot cycle of that
instance, e.g. to download the latest OS updates (see the launch sketch after this list).
 Storage Options – Selecting storage for your EC2 instance will depend on the instance selected,
what you intend to use the instance for and how critical the data is.
o Persistent Storage – Available by attaching EBS volumes. EBS volumes are separate
from the EC2 instance: you can disconnect the volume from the EC2 instance while
maintaining the data.
o Ephemeral Storage – Created by EC2 instances using local instance store volumes, which are
physically attached to the underlying host. When the instance is stopped or terminated, all
data saved on this storage is lost.
 Security – During the creation of your EC2 instance you will be asked to select a Security Group
for your instance.
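A minimal boto3 (AWS SDK for Python) sketch of launching an instance with the pieces described above: an AMI, an instance type, a security group and user data that runs on first boot. The AMI ID, key pair name and security group ID are placeholders, not values from these notes.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

user_data = """#!/bin/bash
yum update -y    # example first-boot command: install the latest OS updates
"""

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",            # placeholder AMI (the pre-configured template)
    InstanceType="t3.micro",                    # instance type: defines CPU, memory and network size
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",                      # placeholder key pair for SSH access
    SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder security group
    UserData=user_data,                         # runs during the first boot cycle
)
print(response["Instances"][0]["InstanceId"])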

Status Checks – Status checks are used to check the health of an EC2 instance and help with troubleshooting.

 System Status Checks – If a system status check fails, the issue is likely with the underlying
hardware, e.g. loss of power. It is out of your control.
 Instance Status Checks – If this check fails, your input will be required to resolve it. It looks at
the EC2 instance itself and not the underlying host, e.g. an incorrect network configuration or a corrupt file system.

EC2 Container Service (ECS) – This service allows you to run Docker-enabled applications packaged as
containers across a cluster of EC2 instances without requiring you to manage a complex and
administratively heavy cluster management system.

Launching an ECS Cluster

 Fargate Launch – Requires you to specify the CPU and memory required, define networking and
IAM policies, in addition to you having to package your application into containers.
 EC2 Launch – You are responsible for patching and scaling your instances and you can specify
instance type and how many containers should be in a cluster.

Monitoring Containers – Monitoring is taken care of through the use of Amazon CloudWatch.

Amazon ECS Cluster – It is comprised of a collection of EC2 instances. Features such as Security Groups,
Elastic Load Balancing and Auto Scaling can be used with these instances. Clusters act as a resource pool,
aggregating resources such as CPU and memory. They are dynamically scalable and multiple instances
can be used. They can scale across multiple Availability Zones within the same region, but they cannot
scale across multiple regions.

Elastic Container Registry (ECR) – ECR provides a secure location to store and manage your docker
images. This is a fully managed service, so you don't need to provision any infrastructure to
create this registry of docker images. It allows developers to push, pull and manage their library of
docker images in a central and secure location.

Components:

 Registry – The ECR registry allows you to host and store your docker images, as well as create image
repositories.
 Authorization token – To begin the authorization process that lets your docker client communicate
with your default registry, you can run the get-login command using the AWS CLI. The token that is
generated is valid for the next 12 hours.
 Repository – Repositories are objects within your registry that allow you to group together and
secure different docker images. You can create multiple repositories within the registry, allowing
you to organize and manage your docker images into different categories.
 Repository Policy – You can control access to repositories using IAM policies and repository
policies.
 Image – Once you have configured your registry, repositories and security controls, you can then start
storing your docker images in the required repositories.

Elastic Container Service for Kubernetes (EKS) – Kubernetes is an open source container orchestration
tool designed to automate, deploy, scale and operate containerized applications. AWS provides a
managed service allowing you to run Kubernetes across your AWS infrastructure without having to take
care of provisioning and running the Kubernetes management infrastructure in what’s referred to as the
control plane. You only need to provision and maintain the worker nodes.

AWS Elastic Beanstalk – AWS Elastic Beanstalk is an AWS managed service that takes your uploaded
web application code and automatically provisions and deploys the required resources
within AWS to make the web application operational. These resources include EC2, Auto Scaling,
application health-monitoring and Elastic Load Balancing, in addition to capacity provisioning. It is an
ideal service for engineers who may not have the familiarity or the necessary skills within AWS to deploy,
provision, monitor and scale the correct environment to run the developed application. The service itself
is free; you pay only for the resources it provisions.

AWS Lambda – It is a serverless compute service that allows you to run your application code without
having to manage EC2 instances. The service does require compute power to carry out your code
requests, but because the AWS user does not need to be concerned with managing this compute
power or where it is provisioned, it is considered serverless. You only pay for the compute power
used while your Lambda functions are running.
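A minimal sketch of a Python Lambda function handler; the event fields used here are hypothetical, since the real event shape depends on whatever triggers the function.

import json

# AWS invokes lambda_handler once per event; 'event' carries the trigger payload
# and 'context' carries runtime information (request ID, remaining time, etc.).
def lambda_handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }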

AWS Batch – It is used to manage and run batch computing workloads within AWS. You can create a
highly scalable cluster of compute resources that takes advantage of the elasticity of AWS, coping
with any level of batch processing whilst optimizing the distribution of the workloads.

Amazon Lightsail – It is essentially a VPS backed by AWS infrastructure, much like an EC2 instance but
without as many configurable steps throughout its creation. It is designed to be simple, quick and very
easy to use at a low-cost point, for small scale use cases by small businesses. You can run multiple
lightsail instances together.
Elastic Load Balancer (ELB) – The main function of an ELB is to help manage and control the flow of
inbound requests destined to a group of targets by distributing these requests evenly across the
targeted resources group. The targets defined can be situated between different availability zones.

 Application Load Balancer – Flexible feature set for your web applications running the HTTP or
HTTPS protocols.
 Network Load Balancer – Operates at the connection level, routing traffic to targets within your
VPC.
 Classic Load Balancer – Operates at both the connection and request levels.

ELB Components:

 Listener
 Targets
 Rules
 Health Checks
 Internal ELB
 Internet facing ELB
 ELB Nodes
 Cross Zone Load Balancing

Server Certificates (SSL / TLS) – ACM allows you to create and provision SSL/TLS server certificates to be
used within your AWS environment across different services. IAM is used as your certificate manager
when deploying your ELBs in regions that are not supported by ACM.

Application Load Balancer (ALB) – ALB operates at the application layer. The application layer serves as
the interface for users and application processes to access network services.

Network Load Balancer (NLB) – NLB operates at the transport layer enabling you to balance request
purely based upon the TCP protocol.

Classic Load Balancer (CLB) – It supports TCP, SSL/TLS, HTTP and HTTPS protocols.

EC2 Auto Scaling – Auto Scaling is a mechanism that automatically allows you to increase or decrease
your EC2 resources to meet demand, based on custom-defined metrics and thresholds.
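A hedged boto3 sketch of one way to express such a threshold: a target-tracking scaling policy that keeps average CPU utilization around 50% for an existing Auto Scaling group. The group name is a placeholder.

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-asg",                     # placeholder Auto Scaling group
    PolicyName="keep-average-cpu-at-50-percent",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,                           # scale out/in to hold the metric near this value
    },
)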
Storage
Simple Storage Service (S3) – It is a fully managed object-based storage service that is highly available,
highly durable, cost effective and widely accessible. To store objects in S3 you first need to define and
create a bucket. You can create folders within a bucket.

Parameters considered when billing S3:

 Storage – Costs vary with the number and size of objects stored in S3 buckets.
 Requests – The number and type of requests incur charges.
 Data Transfer – Data transferred out of the Amazon S3 region is charged.

Storage Classes

The main storage classes are S3 Standard, S3 Intelligent-Tiering (S3 INT), S3 Standard-Infrequent Access
(S3 S-IA), S3 One Zone-Infrequent Access (S3 Z-IA), S3 Glacier and S3 Glacier Deep Archive (S3 G-DA).
They compare as follows:

 High throughput and low latency – Standard, Intelligent-Tiering, Standard-IA and One Zone-IA: yes;
Glacier and Glacier Deep Archive: no.
 Access to data – Standard: frequent; Intelligent-Tiering: frequent and infrequent; Standard-IA and
One Zone-IA: infrequent, with data retrieval at a cost; Glacier: access through expedited, standard
and bulk retrieval options; Glacier Deep Archive: minimal access, retrieval within 12 hours.
 Durability – Eleven 9s for all classes (One Zone-IA stores data in a single AZ only).
 Availability – Standard: 99.99%; Intelligent-Tiering and Standard-IA: 99.9%; One Zone-IA: 99.5%;
Glacier and Glacier Deep Archive: 99.99%.
 SSL/TLS to encrypt data in transit – supported by all classes.
 Lifecycle rules to automate data storage management – supported by all classes.
 Availability Zones – >= 3 for all classes except One Zone-IA, which uses 1.
 Minimum storage duration – Standard: none; Intelligent-Tiering, Standard-IA and One Zone-IA:
30 days; Glacier: 90 days; Glacier Deep Archive: 180 days.
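A minimal boto3 sketch tying the classes and lifecycle rules together: upload an object directly into Standard-IA, then add a lifecycle rule that transitions objects under a prefix to Glacier after 90 days. Bucket, key and prefix names are placeholders.

import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="my-example-bucket",
    Key="reports/2024/summary.csv",
    Body=b"example,data",
    StorageClass="STANDARD_IA",          # store the object as S3 Standard-Infrequent Access
)

s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-old-reports",
            "Filter": {"Prefix": "reports/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],  # move to Glacier after 90 days
        }]
    },
)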
S3 Glacier Retrieval Options

 Expedited – Under 250 MB, available in 5 minutes.


 Standard – Any size, available in 3-5 hours.
 Bulk- PB of data at a time, 5-12 hours.

EC2 Instance Storage – Instance store volumes provide ephemeral (temporary) storage. They are not
recommended for critical or valuable data: if the instance is stopped or terminated, your data is lost,
although in the case of a reboot the data remains intact.

Benefits:

 No additional cost; it is included in the price of the instance.
 Offers very high I/O speed.
 It is ideal for rapidly changing data that does not need to be retained, such as caches or buffers.

Elastic Block Storage (EBS) – It provides persistent and durable block level storage. It offers far more
flexibility with regards to managing data. EBS volumes are independent of the EC2 instance: they are
logically attached to the instance rather than directly attached like instance store volumes. An EBS volume
can only be attached to one EC2 instance at a time, but multiple EBS volumes can be attached to the same
EC2 instance. EBS allows you to take backups whenever required. EBS backups are called snapshots. They
can be triggered manually, or Amazon CloudWatch Events can be used for an automated backup schedule.
These backups are stored on S3, so they are very durable. They are also incremental. A new EBS volume
can be created from a snapshot. An EBS volume is only available in a single Availability Zone.
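A minimal boto3 sketch of taking a snapshot of an EBS volume and creating a new volume from it; the volume ID and Availability Zone are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Snapshots are stored on S3 and are incremental.
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",                  # placeholder: an existing EBS volume
    Description="Nightly backup of the data volume",
)

# Wait for the snapshot to complete, then restore it as a new volume,
# which can be in a different Availability Zone within the region.
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])
new_volume = ec2.create_volume(
    SnapshotId=snapshot["SnapshotId"],
    AvailabilityZone="us-east-1b",
)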

EBS Volume Types

 SSD (Solid State Drives) – Suited to work with smaller blocks.
 HDD (Hard Disk Drives) – Suited to workloads that require higher throughput and larger blocks of data.

EBS Security – During setup you just need to select Enable Volume Encryption and EBS will take care of
encrypting the data. It uses AES-256 along with AWS KMS (Key Management Service).

Elastic File System (EFS) –

 Amazon Simple Storage Service (S3) – It is an object storage solution. It stores everything as a single
object. With this kind of storage, if you upload a file and the file then changes, the entire object
is replaced. It is ideal where files are written once and accessed many times, and not ideal for
heavy read and write at the same time. Good for large video, audio and backup files. Netflix
uses S3.
 Amazon Elastic Block Store (EBS) – It is block level storage. Files are not stored as single objects;
they are stored in blocks, so only the portion of a file that has changed is updated. It is
ideal for low-latency access where fast read and write access is needed. It is like a hard drive.
 Amazon Elastic File System (EFS) – It can be accessed by multiple EC2 instances at once. It has a
standard file system hierarchy. You can rename, lock and update files. This type of storage is
accessible to network resources. EFS is a fully managed, highly available and durable
service. It can be easily scaled to petabytes in size with low latency access. EFS has been
designed to maintain a high level of throughput (MB/s). It is replicated across different AZs in the
same region.

Storage Classes and Performance Options

 Standard – Can be accessed any time, so cost is more, standard latency. Charges based on
storage used/month.
 Infrequent Access (IA) – Infrequent access, so cost reduction, higher latency. Charges for every
read and write.

Performance Modes

 General Purpose – Standard throughput, low latency, <= 7k IOPS.
 Max I/O – Unlimited throughput, higher latency, >= 7k IOPS.

Throughput Modes

 Bursting Throughput
 Provisioned Throughput

Amazon EFS vs Amazon FSx

 Amazon EFS – A simple and scalable file system that can be used for Linux-based workloads and can
be accessed from different AZs. Provides high availability and strong durability for your data.
 Amazon FSx
o FSx for Windows File Server – Fully managed Microsoft windows file system.
o FSx for Lustre – Focused on high performance computing. POSIX-Compliant and ready to
use for linux-based applications.

AWS Backup - This service helps you manage and implement backups.

AWS Storage Gateway – For hybrid storage solution, we can use AWS Storage gateway.

 S3 File Gateway – Enables you to store objects in Amazon S3 using the NFS or SMB protocol.
 FSx File Gateway – Enables you to store objects in Amazon FSx for Windows File Server using the
SMB protocol.
 Tape Gateway – Replaces physical Tape libraries with iSCSI-VTL (or virtual tape libraries)
 Volume Gateway – Enables you to store data in block storage using the iSCSI protocol.

AWS Elastic Disaster Recovery (DRS) – DRS allows you to leverage AWS to recover from application
failures that occur on both physical and virtual servers.
Network and Content Delivery
VPC (Virtual Private Cloud) – A VPC is an isolated segment of the AWS infrastructure allowing you to
provision your cloud resources. You are allowed 5 VPCs per region per AWS account.

Subnets – Subnets reside inside your VPC. They allow you to segment your VPC into multiple networks.

Public Subnet – Public subnet is accessible from outside of your VPC i.e. from the internet.

Private Subnet – Any resources created within your private subnet, like backend databases will be
inaccessible from the internet.

All subnets within the VPC can communicate with each other. Each subnet has a route table, with
destinations and targets. You can add a route to the table with the IGW as the target; this will make your
subnet public.

Internet Gateway (IGW) – It is a component managed by AWS that is attached to your VPC. It acts as a
gateway between your VPC and the outside world (the internet).
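A hedged boto3 sketch of the pieces described above: a VPC, a subnet, an Internet Gateway, and the route that makes the subnet public. The CIDR ranges are illustrative.

import boto3

ec2 = boto3.client("ec2")

vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]
subnet = ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock="10.0.1.0/24")["Subnet"]

igw = ec2.create_internet_gateway()["InternetGateway"]
ec2.attach_internet_gateway(InternetGatewayId=igw["InternetGatewayId"], VpcId=vpc["VpcId"])

# A route for 0.0.0.0/0 with the IGW as the target is what turns this into a public subnet.
route_table = ec2.create_route_table(VpcId=vpc["VpcId"])["RouteTable"]
ec2.create_route(
    RouteTableId=route_table["RouteTableId"],
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId=igw["InternetGatewayId"],
)
ec2.associate_route_table(RouteTableId=route_table["RouteTableId"], SubnetId=subnet["SubnetId"])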

If a subnet has 256 IP addresses, you can only use 251; 5 are reserved for:

 Network address – 1st IP 10.0.1.0


 AWS Routing – 2nd IP 10.0.1.1
 AWS DNS – 3rd IP 10.0.1.2
 AWS Future use – 4th IP 10.0.1.3
 Broadcast – Last IP 10.0.1.255

Network Access Control Lists (NACLs) – A network access control list (NACL) is an optional layer of
security for your VPC that acts as a firewall for controlling traffic in and out of one or more subnets. They
operate at the network/subnet level and are stateless.

Security Groups – They operate at the instance level and are stateful.

NAT Gateway – Private subnets do not have access to the internet. If we want to download OS updates, we
can add a NAT Gateway to the public subnet so that it has access to the internet. In the private subnet's
route table, we add an entry for the NAT Gateway. Now any resource in the private subnet that needs to
access the internet for updates can do so via the NAT Gateway. The NAT Gateway uses the IGW to download
the update and send it to the private subnet. So a NAT Gateway allows private instances to access the
internet while blocking connections initiated from the internet.

VPN (Virtual Private Network) – Connects your on-premises networks and remote workers to the cloud.
It uses the internet to connect to your VPC.

VPN Tunnel – It is initiated from the Customer Gateway on your premises and connects to the Virtual
Private Gateway attached to your VPC.

Direct Connect – It is a cloud service solution that makes it easy to establish a dedicated network
connection from your premises to AWS.

VPC Peering – It is a networking connection between 2 VPCs. You can create a VPC peering connection
between your own VPCs, with a VPC in another AWS account, or with a VPC in a different AWS region.
For connecting many VPCs and on-premises networks at scale, see AWS Transit Gateway below.

Amazon Route 53 – It is a highly available and scalable Domain Name System (DNS) service. It provides
secure and reliable routing of requests, both for services within AWS and for infrastructure that is outside
of AWS. It provides this service through its global network of authoritative DNS servers that reduce
latency, and it can be managed via the management console or API.

Hosted Zones – A hosted zone is a container that holds information about how you want to route traffic
for a domain such as CloudAcademy.com.

 Private Hosted Zone – For Amazon VPC, this zone determines how traffic is routed within the
Amazon VPC. If your resources are not accessible outside of the VPC, you can use any domain
name you wish.
 Public Hosted Zone – This zone determines how traffic is routed on the internet and can be
created when you register your domain with Route 53.

Domain Types

 Generic Top-Level Domains (TLDs) – .watch for streaming videos, .clothing for fashion
 Geographic Domains - .com.au for Australia, .uk for United Kingdom

Routing Policies – When you create a resource record set, you must choose a routing policy that will be
applied to it, and this then determines how Route 53 will respond to these queries.

 Simple Routing Policy – This is the default policy, and it is for single resources that perform a
given function.
 Failover Routing Policy – This allows you to route traffic to different resources based upon the
health.
 Geo-Location Routing Policy – This lets you route traffic based on the geographic location of
your users.
 Geoproximity Routing Policy – This policy is based upon the location of both the users and your
resources.
 Latency Routing Policy – This is suitable when you have resources in multiple regions and want
low latency.
 Multivalue Answer Routing Policy - This allows you to get a response from a DNS request from
up to 8 records at once that are picked at random.
 Weighted Routing Policy – This is suitable when you have multiple resource records that
perform the same function.
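A hedged boto3 sketch of the weighted routing policy: two records with the same name whose weights split responses roughly 70/30. The hosted zone ID, domain name and IP addresses are placeholders.

import boto3

route53 = boto3.client("route53")

def upsert_weighted_record(identifier, weight, ip_address):
    route53.change_resource_record_sets(
        HostedZoneId="Z0123456789EXAMPLE",             # placeholder hosted zone
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": identifier,       # distinguishes records that share a name
                    "Weight": weight,                  # relative share of DNS responses
                    "TTL": 60,
                    "ResourceRecords": [{"Value": ip_address}],
                },
            }]
        },
    )

upsert_weighted_record("primary", 70, "203.0.113.10")
upsert_weighted_record("secondary", 30, "203.0.113.20")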

Amazon CloudFront – It is AWS’s fault tolerant and globally scalable content delivery network service. It
provides seamless integration with other Amazon Web Services services to provide an easy way to
distribute content. Securely deliver content with low latency and high transfer speeds.

Amazon Global Accelerator – AWS Global Accelerator is a networking service that helps you improve
the availability, performance, and security of your public applications. Global Accelerator provides two
global static public IPs that act as a fixed entry point to your application endpoints, such as Application
Load Balancers, Network Load Balancers, Amazon Elastic Compute Cloud (EC2) instances, and elastic IPs.
AWS Transit Gateway – AWS Transit Gateway connects your Amazon Virtual Private Clouds (VPCs) and
on-premises networks through a central hub. This connection simplifies your network and puts an end
to complex peering relationships. Transit Gateway acts as a highly scalable cloud router—each new
connection is made only once.
Billing
Five pillars of Cost Optimization

 Right Sizing your Instances


 Increase Elasticity
 Pick the Right Pricing Model
 Match Usage to Storage Cost
 Measuring and Monitoring

Total Cost of Ownership (TCO) – It is a comprehensive assessment of IT’s total costs or other costs over
time. For IT it includes:

 Hardware and software acquisition


 Management and support
 Communications
 End-user expenses
 The opportunity cost of downtime, training, and other productivity losses

Support Plans

 Basic
o Free with all AWS accounts
 Developer
o For individual users who are familiarizing themselves with AWS
o Good for those who need occasional technical support.
 Business
o Designed for companies with multiple accounts running production environments.
o A good option for anyone extensively using multiple AWS services
o Higher Subscription fee
 Enterprise On-Ramp
o Designed for companies running business critical workloads in AWS that require faster
response times.
 Enterprise
o Designed for companies with multiple AWS accounts that are running numerous mission
critical, large scale production environments.
o Highest monthly cost.

All above plans have access to:

 Customer service and communities.


 AWS documentation, whitepapers, and the re:Post community forums.
 AWS Trusted advisor checks.
 Access to Personal Health Dashboard

Basic and Developer are limited to six core Trusted Advisor checks related to AWS best practices.
Business, Enterprise On-Ramp and Enterprise have access to the complete set of Trusted Advisor
checks. They also have access to the AWS Health API; 24/7 phone, web and chat access to Cloud Support
Engineers; unlimited cases and unlimited contacts; and access to the AWS Support App in Slack and the
AWS Support API.

Enterprise also includes Trusted Advisor Priority, which provides prioritized recommendations directly
from your AWS account team.

Response Times

 Developer – System Impaired, AWS will respond in 12 hours.


 Business – Production System down, AWS will respond in 1 hour.
 Enterprise on-Ramp – Business critical system down, AWS will respond in 30 mins
 Enterprise – Business/Mission critical system down, AWS will respond in 15 mins


Migration and Transfer
AWS Cloud Adoption Framework (AWS CAF) – It helps you accelerate your business’s digital
transformation to the AWS cloud. Migrating to AWS cloud helps reduce business risk and increase
operational efficiency.

CAF perspectives and foundational capabilities:

 Business
 People
 Governance
 Platform
 Security
 Operations

AWS Application Discovery Service

AWS Application Migration Service

AWS Database Migration Service

AWS Migration Hub

AWS Schema Conversion Tool

AWS Snow Family

AWS Transfer Family

Three Stages of Migration

 Assess – This stage determines how prepared you are as an organization to begin your migration to
AWS. It enables the formulation of goals and objectives and helps you present an effective business
case. AWS migration services related to this stage are AWS Migration Evaluator and AWS Migration Hub.
 Mobilize – Emphasis on detailed migration planning and strategy, and on identifying any skills gaps in
your workforce. The AWS migration service related to this stage is AWS Application Discovery
Service.
 Migrate and Modernize – Used to design your deployments and solutions, identify any
dependencies, understand the interconnectivity required between AWS services, and validate
your design. AWS migration services used for servers, databases and applications are AWS Application
Discovery Service and AWS Application Migration Service. AWS migration services used for
migration of data are AWS Snow Family, AWS Transfer Family, AWS DataSync, AWS Service
Catalog and AWS Storage Gateway.

AWS Migration Evaluator – It provides a mechanism to help you baseline your on-premises environment.
It projects costs using cost modeling and data analysis, and it accelerates your successful digital
transformation and migration to AWS.
AWS Migration Hub – A powerful tool to help manage large migrations across multiple locations with
multiple services. It provides a dashboard overview of your migration project. It acts as a nerve center of
your migration. Discover and migrate existing services in different locations.

AWS Application Discovery Service

 Agent Based Discovery


 Agentless Discovery

AWS Application Migration Service – A great service to help you migrate your applications to AWS with
minimal downtime and interruption. It is preferred over using CloudEndure.

AWS Database Migration Service – Designed to help you migrate your relational, noSQL databases, and
data warehouses, with minimal downtime and security in mind.

AWS Storage Gateway – Provides a gateway between your own data center storage systems and AWS
storage systems such as Amazon S3 and FSx. Acts as a software appliance stored in your own data
center.

 File Gateways – Allows you to securely store files as objects using File Gateway for Amazon S3 or
Amazon FSx.
 Stored Volume Gateway – Used as a way to back up your local storage volumes to Amazon S3.
 Cached Volume Gateway – Primary data storage is on Amazon S3 instead of your local storage
like it is with Stored Volume Gateway.
 Tape Gateway – Allows you to back up your data to Amazon S3 leveraging the Glacier storage
classes.
Decoupled and Event Driven Architecture
Decoupled Architecture – By using a decoupled architecture you are building a solution put together
using different components and services that can operate and execute independently.

Event Driven Architecture

 Producer – The element within the infrastructure that pushes an event to the event
router.
 Event Router – It processes the event and takes the necessary action in pushing the
outcome to the consumers.
 Consumers – They execute the appropriate action as requested.

Amazon Simple Queue Service (SQS) - Amazon Simple Queue Service (Amazon SQS) lets you send,
store, and receive messages between software components at any volume, without losing messages or
requiring other services to be available. Standard SQS queues do not guarantee that message order will
be maintained and guarantee only at-least-once delivery of messages. They support a maximum of
approximately 120,000 in-flight messages. Amazon SQS provides short polling and long polling to receive
messages from a queue. By default, queues use short polling.

With short polling, the ReceiveMessage request queries only a subset of the servers (based on a
weighted random distribution) to find messages that are available to include in the response. Amazon
SQS sends the response right away, even if the query found no messages.

With long polling, the ReceiveMessage request queries all of the servers for messages. Amazon SQS
sends a response after it collects at least one available message, up to the maximum number of
messages specified in the request. Amazon SQS sends an empty response only if the polling wait time
expires.
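A minimal boto3 sketch of sending a message and receiving it with long polling (WaitTimeSeconds greater than zero); the queue URL is a placeholder.

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"   # placeholder

sqs.send_message(QueueUrl=queue_url, MessageBody="order-42 created")

# Long polling: wait up to 20 seconds for a message instead of returning
# immediately with an empty response, as short polling may do.
response = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=20,
)

for message in response.get("Messages", []):
    print(message["Body"])
    # Delete after successful processing so the message is not delivered again.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])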

FIFO Queue – Maintains message order (first in, first out) and implements exactly-once delivery.
Supports a maximum of 20,000 in-flight messages while processing.

Dead Letter Queue (DLQ) – It is an Amazon SQS queue that an Amazon SNS subscription can target for
messages that can’t be delivered to subscribers successfully. Messages that can’t be delivered are held
in the DLQ for further analysis or reprocessing.

Simple Notification Service (SNS) – Amazon Simple Notification Service (Amazon SNS) sends
notifications two ways, A2A and A2P. A2A provides high-throughput, push-based, many-to-many
messaging between distributed systems, microservices, and event-driven serverless applications. These
applications include Amazon Simple Queue Service (SQS), Amazon Kinesis Data Firehose, AWS Lambda,
and other HTTPS endpoints. A2P functionality lets you send messages to your customers with SMS texts,
push notifications, and email.
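A minimal boto3 sketch of A2A messaging: publishing a message to an SNS topic, which then fans out to whatever is subscribed to it (SQS queues, Lambda functions, email and so on). The topic ARN is a placeholder.

import boto3

sns = boto3.client("sns")

sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:order-events",   # placeholder topic
    Subject="Order created",
    Message="Order 42 was created and is awaiting payment.",
)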

Amazon MQ Service – It is an AWS managed message broker service for Apache ActiveMQ and is compliant
with existing code leveraging JMS, NMS and WebSockets. The idea of this service is to enable you to migrate
your messaging and applications without having to rewrite your code.

 Cost Effective
 Automated administration and maintenance
 Highly available in a region
 Storage is implemented across multiple availability zones
 Can implement active and standby configurations with automatic failover.
 Message encryption in transit.

The service integrates seamlessly with Amazon CloudWatch for monitoring metrics on existing
queues, topics and message brokers. It also integrates with AWS CloudTrail for logging.

Real-time Messaging and Kinesis Data Streams – A real-time data collection messaging service.
Maintains a copy of all the data received in the order received for 24 hours by default and up to 8760
hours if configured using IncreaseStreamRetentionPeriod and DecreaseStreamRetentionPeriod. Kinesis
Data Streams allows for real time processing of streaming big data and the ability to replay records to
multiple Amazon Kinesis applications. The Amazon Kinesis Client Library (KCL) delivers all records for
the given partition key to the same record processor making it easier to build multiple applications that
read the Amazon Kinesis Stream for purpose of counting, aggregating and filtering.
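A minimal boto3 sketch of writing a record to a Kinesis Data Stream; records sharing a partition key land on the same shard, and the KCL delivers them to the same record processor. The stream name is a placeholder.

import boto3, json

kinesis = boto3.client("kinesis")

kinesis.put_record(
    StreamName="clickstream",                                          # placeholder stream
    Data=json.dumps({"user_id": "u-123", "page": "/pricing"}).encode("utf-8"),
    PartitionKey="u-123",              # groups this user's events onto one shard, in order
)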

AWS Serverless and Categories

 Serverless Compute Services


o AWS Lambda – It is an event-driven service: it needs an action or event to trigger the
code you wish to run. A function can run for a maximum of 15 minutes.
o AWS Fargate – It allows you to run serverless containers on Amazon Elastic Container
Service. You are allowed to run your Fargate tasks for an unlimited amount of time.
 Serverless Application Integration Services
o Amazon EventBridge – EventBridge is a serverless service that uses events to connect
application components together, making it easier for you to build scalable event-driven
applications.
o AWS Step Functions – It can be described as a serverless state machine service. It allows
you to create serverless workflows where you can have your system wait for input,
make decisions and process information based on the input variables.
o Amazon SQS – It is a messaging queue system. It can help you decouple your
applications by providing a system for sending, storing, and receiving messages between
multiple software components.
o Amazon SNS – A Pub Sub notification service that provides both application to
application or application to person communication. It works well for high-throughput
applications as well as many-to-many messaging between distributed systems.
o Amazon API Gateway – It helps you deal with building, publishing, monitoring, securing
and maintaining APIs. It supports serverless, generic web applications, and even
containerized workloads on the back end.
o AWS AppSync – It allows you to manage and synchronize data across multiple mobile
devices and users. It allows you to build real-time multi-user collaborative tools and
applications that work between browsers, mobile applications and even Amazon Alexa
skills.
 Serverless Data Storage Services
o Amazon S3 – It is an object-based serverless storage system that is able to handle a
nearly unlimited amount of data. It supports objects of up to 5 TB. It is among the cheapest
data storage options available and has native integrations with AWS Lambda.
o Amazon DynamoDB – It is a fully managed serverless NoSQL database that has been
built to run high-performance applications at any scale. The service can operate at
single-digit millisecond latency.
o Amazon RDS Proxy – It is a fully managed, serverless, highly available database proxy
for Amazon RDS. The proxy allows you to build serverless applications that are more
scalable than your standard direct to RDS implementations.
o Amazon Aurora Serverless – It is a fully on-demand SQL database configuration for
Amazon Aurora. It operates on a pay-per-second basis while the database is active and
can be used through a simple database endpoint.
Management and Governance
AWS CloudTrail - CloudTrail is active in your AWS account when you create it and doesn't require any
manual setup. When activity occurs in your AWS account, that activity is recorded in a CloudTrail event.

CloudTrail Events

 Management Event – Also known as Control Plane Operations, these track information about
management operations taken against AWS resources within your account.
 Data Events – Also known as Data Plane Operations, these show information about resource
operations performed on or in a resource.
 CloudTrail Insight Events – Allows you to capture events triggered by unusual activity within
your account. These events are stored in a different S3 folder from the management and data events and
contain information about:
o Time of the event
o Error codes
o Associated APIs
o Additional Statistics

AWS CloudTrail Lake – You can keep the event data in an event data store for up to seven years, or
2557 days. By default, event data is retained for the maximum period, 2557 days. AWS CloudTrail Lake
lets you run SQL-based queries on your events.

AWS Config – AWS Config continually assesses, audits, and evaluates the configurations and
relationships of your resources on AWS, on premises, and on other clouds.

 Capture resource changes


 Act as a resource inventory
 Store configuration history
 Provides snapshot of configuration
 Notifications about changes
 Provide AWS CloudTrail Integration
 Use rules to check compliancy
 Security Analysis
 Identify relationships

It is region specific, so you need to configure it for each region.

Amazon CloudWatch – It is a global service that has been designed to be your window into the health and
operational performance of your applications and infrastructure. It is able to collect and present
meaningful operational data from your resources and allows you to monitor and review their
performance.
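A hedged boto3 sketch of a CloudWatch alarm that fires when an EC2 instance's average CPU stays above 80% for two consecutive 5-minute periods; the instance ID and SNS topic ARN are placeholders.

import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-on-web-server",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder instance
    Statistic="Average",
    Period=300,                          # evaluate the metric in 5-minute windows
    EvaluationPeriods=2,                 # require two breaching windows in a row
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],       # placeholder SNS topic
)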

Rules – A rule acts as a filter for incoming streams of event traffic and then routes these events to the
appropriate targets defined within the rule. A rule itself can route traffic to multiple targets in the same
region.
Targets – Targets are where the events are sent by the rules, such as AWS Lambda, SNS, Kinesis, SQS. All
events received by the target are in JSON format.

Event Buses – An event bus is the component that actually receives the Event from your applications
and your rules are associated with a specific event bus.

 CloudWatch Dashboards
 CloudWatch Metrics and Anomaly Detection
 CloudWatch Alarms
 CloudWatch EventBridge
 CloudWatch Logs
 CloudWatch Insights

AWS Organizations

 Organizations
 Root
 Organization Units
 Accounts
 Service Control Policies

AWS Control Tower – It is the simplest and most powerful way to create, govern and administer large
numbers of AWS accounts. It is a service that offers greater control when creating, managing,
distributing and auditing multiple accounts.

AWS Service Catalog – It allows you to centrally manage commonly deployed IT services and helps you
achieve consistent governance and meet your compliance requirements, while enabling users to quickly
deploy only the approved IT services they need.

Amazon Cognito – It lets you add user sign-up, sign-in and access control to your web and mobile apps
quickly and easily.
Security, Identity and Compliance
AWS Artifact – AWS Artifact is your go-to, central resource for compliance-related information that matters
to you. It provides on-demand access to security and compliance reports from AWS and ISVs who sell their
products on AWS Marketplace.

AWS Identity and Access Management (IAM) – With AWS Identity and Access Management (IAM), you
can specify who or what can access services and resources in AWS, centrally manage fine-grained
permissions, and analyze access to refine permissions across AWS.

Authentication:

 Username and Password


 Multifactor authentication (MFA)
 Federated Access (no AWS IAM user credentials needed)

Access Management:

 Users – We can configure MFA at the user level. A user can be a member of a maximum of 10 groups.
 Groups – Groups contain users and have policies attached that allow or deny access to AWS resources.
A group can have 10 policies attached at once.
 Roles – IAM Roles allow users, other AWS services and applications to adopt temporary IAM
permissions to access AWS resources.
 Policies – Policies can be attached to users, groups and roles. Policies include managed policies
and in-line policies (see the sketch after this list).
o AWS Managed Policies – These are predefined policies granting varied access to
different AWS services.
o Customer Managed Policies – These are policies created and written by you as the
customer.
o In-Line Policies – These are not stored in a library; instead they are written and
embedded directly in a user, user group or role, so the same policy cannot be reused for
another identity the way a managed policy can.
 Identity Providers – If you want to give federated access to AWS resources then you must add
an identity provider. Federated access allows credentials external to AWS to be used as a means
of authentication to your AWS resources.
 Password Policies – In the account settings you can enforce a password policy. It applies to all IAM
users within your account.
 Security Token Service Endpoints – The STS service is used to allow you to request temporary,
limited-privilege credentials for both IAM users and federated users. A region can be activated
or deactivated for STS.
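A hedged boto3 sketch of a customer managed policy: create a policy that grants read-only access to a single (placeholder) S3 bucket and attach it to a (hypothetical) IAM group.

import boto3, json

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::my-example-bucket",      # placeholder bucket
            "arn:aws:s3:::my-example-bucket/*",
        ],
    }],
}

policy = iam.create_policy(
    PolicyName="ExampleBucketReadOnly",
    PolicyDocument=json.dumps(policy_document),
)

iam.attach_group_policy(
    GroupName="analysts",                          # hypothetical IAM group
    PolicyArn=policy["Policy"]["Arn"],
)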

Access Reports:

 Access Analyzer – AWS IAM Access Analyzer provides the following capabilities:
o IAM Access Analyzer helps identify resources in your organization and accounts that are
shared with an external entity.
o IAM Access Analyzer validates IAM policies against policy grammar and best practices.
o IAM Access Analyzer generates IAM policies based on access activity in your AWS
CloudTrail logs.
 Credential Report – You can generate and download a credential report that lists all users in
your account and the status of their various credentials, including passwords, access keys, and
MFA devices.
 Organization Activity – If using AWS organizations, it allows you to select Organization Unit (OU)
or account to view service activity for the last 365 days. You can drill down to a user account to check
which services they have accessed.
 Service Control Policies (SCP) – Service control policies (SCPs) are a type of organization policy
that you can use to manage permissions in your organization. SCPs offer central control over the
maximum available permissions for all accounts in your organization. SCPs help you to ensure
your accounts stay within your organization’s access control guidelines.

AWS Trusted Advisor – The main function of Trusted Advisor is to recommend improvements across
your AWS account to help optimize and streamline your environment based on AWS best
practices.

 Cost Optimization
 Performance
 Security
 Fault Tolerance
 Service Limit

AWS Web Application Firewall (WAF) – AWS WAF helps you protect against common web exploits and
bots that can affect availability, compromise security, or consume excessive resources.

AWS Firewall Manager – Firewall Manager is particularly useful when you want to protect your entire
organization rather than a small number of specific accounts and resources, or if you frequently add new
resources that you want to protect. Firewall Manager also provides centralized monitoring of DDoS
attacks across your organization.

DDoS Attack – A distributed denial-of-service (DDoS) attack is a malicious attempt to disrupt the normal
traffic of a targeted server, service or network by overwhelming the target or its surrounding
infrastructure with a flood of Internet traffic.

AWS Shield – AWS Shield is a managed DDoS protection service that safeguards applications running on
AWS.

Amazon Inspector - Amazon Inspector is an automated vulnerability management service that
continually scans AWS workloads for software vulnerabilities and unintended network exposure.

Amazon GuardDuty – Amazon GuardDuty is a threat detection service that continuously monitors your
AWS accounts and workloads for malicious activity and delivers detailed security findings for visibility
and remediation.

Amazon Macie – Amazon Macie is a data security service that uses machine learning (ML) and pattern
matching to discover and help protect your sensitive data.
AWS Secrets Manager - AWS Secrets Manager helps you manage, retrieve, and rotate database
credentials, API keys, and other secrets throughout their lifecycles.
Machine Learning and Analytics
Amazon SageMaker – Fully managed service that provides all the tools to build, train and deploy
machine learning (ML) models in a single platform.

Amazon Lex – It is a service for building conversational interfaces using voice and text.

Amazon Athena – Enables you to interactively query data in S3 and other data sources using standard
Structured Query Language (SQL) syntax. Allows you to analyze petabytes of data without the need to
provision any infrastructure.
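A hedged boto3 sketch of running a SQL query with Athena against data in S3; the database, table and results location are placeholders.

import boto3, time

athena = boto3.client("athena")

execution_id = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM access_logs GROUP BY status",  # placeholder table
    QueryExecutionContext={"Database": "weblogs"},                           # placeholder database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},       # placeholder bucket
)["QueryExecutionId"]

# Athena runs queries asynchronously: poll until the query finishes, then read the results.
while True:
    state = athena.get_query_execution(QueryExecutionId=execution_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=execution_id)["ResultSet"]["Rows"]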

AWS Glue – AWS Glue is a serverless data integration service that makes it easier to discover, prepare,
move, and integrate data from multiple sources for analytics, machine learning (ML), and application
development.

Amazon QuickSight – It is a very fast, easy-to-use, cloud-powered business analytics service that makes
it easy for all employees within an organization to build visualizations, perform ad-hoc analysis, and
quickly get business insights from their data, anytime, on any device.

Amazon Rekognition – Amazon Rekognition offers pre-trained and customizable computer vision (CV)
capabilities to extract information and insights from your images and videos.

Amazon Kendra – It is an enterprise service that allows developers to add a search capability to their
applications.

AWS Partner Network (APN) – It is focused on helping partners build successful AWS-based businesses
to drive superb customer experiences. This is accomplished by developing a global ecosystem of
Partners with specialities unique to each customer's need. There are 2 types of APN partners:

 APN Consulting Partners – They are professional services firms that help customers of all sizes
design, architect, migrate or build new applications on AWS.
 APN Technology Partners – They provide software solutions that are either hosted on or
integrated with the AWS platform.
Database
Types of Databases:

 Relational Databases
 Key Value Databases
 Document Databases
 In-Memory Databases
 Graph Databases
 Columnar Databases
 Time Series Databases
 Quantum Ledger Databases
 Search Databases

Amazon Relational Database Service (RDS)

 MySQL
 MariaDB
 PostgreSQL
 Amazon Aurora – Features of Aurora
o Speed
o High Availability
o Cheap
o Fault tolerant and self-healing,
o Replication in 3 AZs
o Low latency
o Continuous backup on S3
o Point in time recovery
 Oracle
 SQL Server

Amazon DynamoDB

 Non-Relational Database
 Schemaless
 Serverless
 Key-value and document database
 Fully managed with built-in security, backup and restore and in-memory caching for internet-
scale applications
 Replicated in 3 AZs, so highly available
 High speed
 Durable
 Scalable
 Global
 Point in time recovery
 Best suited to OLTP workloads that require high scalability and data durability
 Not ideal for OLAP workloads and ad hoc query access

2 capacity modes

Provisioned throughput mode – You specify the number of reads and writes for the table; you pay for the
RCUs (read capacity units) and WCUs (write capacity units) you have configured, whether or not you use them.

On-demand capacity mode – It scales automatically, so you don't need to configure RCUs and WCUs. It costs
more per request than provisioned throughput, but if there are no requests you pay
nothing.
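A minimal boto3 (AWS SDK for Python) sketch of both ideas: create a table in on-demand capacity mode, write an item and read it back by its key. Table and attribute names are placeholders.

import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="Orders",
    AttributeDefinitions=[{"AttributeName": "order_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "order_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",       # on-demand capacity mode: no RCUs/WCUs to size
)
dynamodb.get_waiter("table_exists").wait(TableName="Orders")

table = boto3.resource("dynamodb").Table("Orders")
table.put_item(Item={"order_id": "42", "status": "created"})
item = table.get_item(Key={"order_id": "42"})["Item"]
print(item)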

Interact with DynamoDB using:

 AWS Console
 AWS Command line interface CLI
 AWS SDKs
 NoSQL workbench for DynamoDB

Amazon Neptune – A fast, reliable, secure and fully managed graph database service.

Amazon MemoryDB for Redis – Fully managed, in-memory, Redis-compatible data store
Comparison: Amazon Inspector vs CloudWatch vs AWS Config vs CloudTrail

 Amazon Inspector – It monitors the network accessibility of your EC2 instances and the security
state of the applications that run on those instances.
 CloudWatch – It monitors AWS resources and the applications that run on AWS in real time.
 AWS Config – It monitors and records your AWS resource configurations.
 CloudTrail – It monitors events such as actions taken using AWS services.
Managing RTO and RPO for AWS Disaster Recovery
Disaster Recovery Strategy:

 Recovery Time Objective (RTO) – The maximum amount of time in which a service can remain
unavailable for before it is damaging to the business.
 Recovery Point Objective (RPO) – The maximum amount of time for which data could be lost
for a service.

Different Recovery Strategies:

 Backup and Restore


o Provides highest RTO and RPO
o Typically provides RTO in 24 hours or less
o RPO can generally be measured in hours.
 Pilot Light
o Decreases your RTO to hours
o Decreases your RPO to minutes
o More expensive than Backup and Restore
 Warm Standby
o RTO can be measured in minutes
o RPO can be measured in seconds
o More expensive than Pilot Light
 Multi-Site Active/Active
o The most expensive of the 4 recovery strategies
o RTO and RPO are close to zero
o The most effective strategy to reduce business impact

AWS Elastic Disaster Recovery (AWS DRS) – It minimizes downtime and data loss with fast, reliable
recovery of on-premises and cloud-based applications using affordable storage, minimal compute, and
point-in-time recovery.
Amazon Detective – It is an AWS security service that allows you to assess, investigate, and pinpoint the
source of suspected security vulnerabilities or suspicious activity in your AWS environment. It builds
interactive visualizations and models of the AWS environment using machine learning, statistical
analysis, and graph theory, allowing you to quickly and easily identify and investigate security problems.

When you decouple from the data center you will be able to:

 Decrease your TCO (Total Cost of Ownership)


 Reduce complexity
 Adjust capacity on the fly
 Reduce time to market
 Deploy quickly, even worldwide
 Increase efficiencies
 Innovate more
 Spend your resources strategically
 Enhance security

AWS Health – It provides ongoing visibility into your resource performance and the availability of your
AWS services and accounts. You can use AWS Health events to learn how service and resource changes
might affect your applications running on AWS. It provides relevant and timely information to help you
manage events in progress.

IAM Policy Simulator – It evaluates the policies that you choose and determines the effective
permissions for each of the actions that you specify. The simulator uses the same policy evaluation
engine that is used during real requests to AWS services.

Global Services

 AWS IAM
 AWS Organizations
 AWS Account Management
 AWS Network Manager
 Route 53 Private DNS
 CloudFront
 AWS Security Token Service
 WAF

Regional Services:

 S3
 DynamoDB
 Storage Gateway
 EBS Snapshots
 AWS CloudTrail
 Amazon VPC
 AWS Lambda

Zonal Services:


 EC2
 EBS

AWS Systems Manager – It is a secure end-to-end management solution for resources on AWS and in
multicloud and hybrid environments.

Best practices when you build an application in cloud:

 Design for failure


 Decouple your components
 Implement elasticity
 Think parallel

AWS Marketplace – The AWS Marketplace enables qualified partners to market and sell their software
to AWS customers. AWS Marketplace is an online software store that helps customers find, buy, and
immediately start using the software and services that run on AWS.

AWS Management Console – The AWS Management Console is a web application that brings together a
broad collection of service consoles for managing AWS resources. When you first sign in, you
see the console home page.

Amazon EMR – It is a web service that enables businesses, researchers, data analysts, and developers to
easily and cost-effectively process vast amounts of data.

Amazon Connect - Provide superior customer service at a lower cost with an easy-to-use cloud contact
center

Amazon Chime - Meet, chat, and place business phone calls with a single, secure application.

AWS Resource Access Manager (RAM) – AWS RAM helps you securely share your resources across AWS
accounts, within your organization or organizational units (OUs), and with IAM roles and users for
supported resource types.
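As a hedged sketch of the API shape (the subnet ARN and account IDs are placeholders), a resource share can be created with boto3 and have principals attached at creation time:

import boto3

ram = boto3.client("ram")

# Share a hypothetical subnet with another account in the same organization.
response = ram.create_resource_share(
    name="shared-network",
    resourceArns=["arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0abc1234567890def"],
    principals=["444455556666"],       # account ID, OU ARN, or organization ARN
    allowExternalPrincipals=False,     # restrict sharing to your organization
)

print(response["resourceShare"]["resourceShareArn"])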

AWS CloudHSM – AWS CloudHSM helps you meet corporate, contractual, and regulatory compliance
requirements for data security. It is standards-compliant and enables you to export all of your keys to
most other commercially available HSMs, subject to your configuration. It is a fully managed service that
automates time-consuming administrative tasks for you, such as hardware provisioning, software
patching, high availability, and backups.

AWS Architecture Center – The AWS Architecture Center provides reference architecture diagrams,
vetted architecture solutions, Well-Architected best practices, patterns, icons, and more. This expert
guidance was contributed by cloud architecture experts from AWS, including AWS Solutions Architects,
Professional Services Consultants, and Partners.

Amazon MQ – Message brokers allow software systems, which often use different programming
languages on various platforms, to communicate and exchange information. Amazon MQ is a managed
message broker service for Apache ActiveMQ and RabbitMQ that streamlines setup, operation, and
management of message brokers on AWS. With a few steps, Amazon MQ can provision your message
broker with support for software version upgrades.
Amazon WorkMail – Amazon WorkMail is a secure, managed business email and calendar service with
support for existing desktop and mobile email client applications. Amazon WorkMail gives users the
ability to seamlessly access their email, contacts, and calendars using the client application of their
choice, including Microsoft Outlook, native iOS and Android email applications, any client application
supporting the IMAP protocol, or directly through a web browser.

Amazon Redshift – Amazon Redshift uses SQL to analyze structured and semi-structured data across
data warehouses, operational databases, and data lakes, using AWS-designed hardware and machine
learning to deliver the best price performance at any scale.
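One way to run that SQL without managing database connections is the Redshift Data API. The sketch below is a minimal, hedged example; the cluster name, database, secret ARN, and query are placeholders.

import boto3

redshift_data = boto3.client("redshift-data")

# Submit a query asynchronously; credentials come from a Secrets Manager secret.
response = redshift_data.execute_statement(
    ClusterIdentifier="analytics-cluster",   # hypothetical cluster name
    Database="dev",
    SecretArn="arn:aws:secretsmanager:us-east-1:111122223333:secret:redshift-creds",
    Sql="SELECT event_date, COUNT(*) FROM sales GROUP BY event_date LIMIT 10;",
)

statement_id = response["Id"]
# Poll describe_statement(Id=statement_id) until the query finishes,
# then fetch rows with get_statement_result(Id=statement_id).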

Other Services
Simple Email Service (SES) – SES provides an automated email system you can use to communicate with your
customers.
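A minimal sketch of sending one such email with boto3 (the addresses are placeholders and must be verified identities while the account is in the SES sandbox):

import boto3

ses = boto3.client("ses")

response = ses.send_email(
    Source="noreply@example.com",                        # verified sender identity
    Destination={"ToAddresses": ["customer@example.com"]},
    Message={
        "Subject": {"Data": "Your order has shipped"},
        "Body": {"Text": {"Data": "Hi! Your order is on its way."}},
    },
)

print(response["MessageId"])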

AWS AppConfig – AWS AppConfig feature flags and dynamic configurations help software builders
quickly and securely adjust application behavior in production environments without full code
deployments. AWS AppConfig speeds up software release frequency, improves application resiliency,
and helps you address emergent issues more quickly.
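A hedged sketch of how an application might poll for its configuration using the AppConfig Data session API; the application, environment, and profile identifiers below are placeholders.

import boto3

appconfig_data = boto3.client("appconfigdata")

# Start a configuration session, then poll with the returned token.
session = appconfig_data.start_configuration_session(
    ApplicationIdentifier="my-app",             # hypothetical identifiers
    EnvironmentIdentifier="prod",
    ConfigurationProfileIdentifier="feature-flags",
)

result = appconfig_data.get_latest_configuration(
    ConfigurationToken=session["InitialConfigurationToken"]
)

print(result["Configuration"].read())  # configuration payload (empty if unchanged)
# Use result["NextPollConfigurationToken"] for the next poll.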

AWS CLI – The AWS Command Line Interface (AWS CLI) is a unified tool to manage your AWS services.
With just one tool to download and configure, you can control multiple AWS services from the
command line and automate them through scripts.

AWS Cloud9 – It is a cloud-based integrated development environment (IDE) that lets you write, run,
and debug your code with just a browser.

AWS CloudShell – Using AWS CloudShell, a browser-based shell, you can quickly run scripts with the
AWS Command Line Interface (CLI), experiment with service APIs using the AWS CLI, and use other tools
to increase your productivity. The CloudShell icon appears in AWS Regions where CloudShell is available.

AWS X-Ray – It helps developers analyze and debug production, distributed applications, such as those
built using a microservices architecture. With X-Ray, you can understand how your application and its
underlying services are performing to identify and troubleshoot the root cause of performance issues
and errors.
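As a hedged sketch of instrumenting Python code with the X-Ray SDK (package aws_xray_sdk; it assumes an X-Ray daemon or agent is available to receive the trace data, and the names are illustrative):

from aws_xray_sdk.core import xray_recorder

xray_recorder.configure(service="order-service")   # hypothetical service name

# Outside Lambda you open a segment yourself, then nest subsegments around calls
# you want timed and traced individually.
segment = xray_recorder.begin_segment("handle_order")
try:
    xray_recorder.begin_subsegment("query_database")
    # ... call the database here ...
    xray_recorder.end_subsegment()
finally:
    xray_recorder.end_segment()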

AWS CodeBuild – It is a fully managed continuous integration service that compiles source code, runs
tests, and produces ready-to-deploy software packages.

AWS CodeCommit – It is a secure, highly scalable, fully managed source control service that hosts
private Git repositories.

AWS CodeDeploy – It is a fully managed deployment service that automates software deployments to
various compute services, such as Amazon Elastic Compute Cloud (EC2), Amazon Elastic Container
Service (ECS), AWS Lambda, and your on-premises servers. Use CodeDeploy to automate software
deployments, eliminating the need for error-prone manual operations.

AWS CodePipeline – It is a fully managed continuous delivery service that helps you automate your
release pipelines for fast and reliable application and infrastructure updates.
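A minimal sketch of starting and inspecting a release pipeline with boto3 (the pipeline name is a placeholder):

import boto3

codepipeline = boto3.client("codepipeline")

# Trigger a new run of an existing pipeline.
start = codepipeline.start_pipeline_execution(name="my-release-pipeline")
print("Execution:", start["pipelineExecutionId"])

# Inspect the current state of each stage (Source, Build, Deploy, ...).
state = codepipeline.get_pipeline_state(name="my-release-pipeline")
for stage in state["stageStates"]:
    print(stage["stageName"], stage.get("latestExecution", {}).get("status"))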
AWS CodeStar – It enables you to quickly develop, build, and deploy applications on AWS.

AWS OpsWorks – It is a configuration management service that helps customers configure and operate
applications both on-premises and in the AWS Cloud, using Chef and Puppet.

Amazon AppStream 2.0 – A fully managed non-persistent desktop and application service for remotely
accessing your work.

Amazon WorkSpaces – It is a managed, secure Desktop-as-a-Service (DaaS) solution where you provision
either Windows or Linux desktops in just a few minutes and quickly scale to provide thousands of
desktops to workers across the globe.

AWS Amplify – It is a complete solution that lets frontend web and mobile developers easily build, ship,
and host full-stack applications on AWS, with the flexibility to leverage the breadth of AWS services as
use cases evolve. No cloud expertise needed.

AWS AppSync – Develop applications faster with serverless GraphQL and Pub/Sub APIs.

AWS Device Farm – It is an app testing service that lets you test and interact with your
Android, iOS, and web apps on many devices at once, or reproduce issues on a device in real time.

AWS IoT Core – It is a fully managed platform that allows you to connect IoT devices (sensors, wearables,
and smart appliances) to the AWS cloud without needing to manage any infrastructure. AWS IoT Core
lets you connect billions of IoT devices and route trillions of messages to AWS services without
managing infrastructure. It runs on the cloud.
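A hedged sketch of publishing a message to an MQTT topic from the AWS side using the IoT data plane with boto3 (topic and payload are illustrative; devices themselves would normally publish over MQTT with a device SDK):

import json
import boto3

# IoT data-plane client. Depending on the SDK version you may need to pass
# endpoint_url= with your account's data endpoint, obtainable via
# boto3.client("iot").describe_endpoint(endpointType="iot:Data-ATS").
iot_data = boto3.client("iot-data")

iot_data.publish(
    topic="sensors/greenhouse-1/temperature",   # hypothetical MQTT topic
    qos=1,
    payload=json.dumps({"celsius": 21.4}).encode("utf-8"),
)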

AWS IoT Greengrass – It provides client software and an open-source edge runtime environment for
running IoT applications across fleets of devices. It runs on the edge.

AWS Wavelength – It embeds AWS compute and storage services within 5G networks, providing mobile
edge computing infrastructure for developing, deploying, and scaling ultra-low-latency applications.

AWS Quick Starts – Quick Starts are automated reference deployments built by Amazon Web Services
(AWS) solutions architects and AWS Partners. By using best practices and automating hundreds of
manual procedures, Quick Starts can help you deploy popular technologies to AWS in minutes.

AWS Directory Service – AWS Directory Service for Microsoft Active Directory, also known as AWS
Managed Microsoft AD, activates your directory-aware workloads and AWS resources to use managed
AD on AWS.
