AWS Cloud Book
Learning Best Practices: Books often cover best practices for using AWS
services efficiently, securely, and cost-effectively. This knowledge can help
you design and implement solutions that are robust and scalable.
Deep Dives and Case Studies: Some books offer in-depth explanations
of specific AWS services or case studies illustrating real-world
implementations.
These can help deepen your understanding of how to use AWS effectively in
practical scenarios.
Although we have taken the utmost care to present this book error-free, it may still contain some errors or mistakes. Students are encouraged to bring any mistakes or errors in this document to our notice.
What I’ve found is that the cloud is the future for almost every business, and most young people aren’t very aware of it.
Cloud computing is the on-demand delivery of IT resources over the
Internet with pay-as-you-go pricing. Instead of buying, owning, and
maintaining physical data centers and servers, you can access
technology services, such as computing power, storage, and databases,
on an as-needed basis from a cloud provider like Amazon Web Services
(AWS).
This article will talk about several sub-topics related to cloud technology and Amazon Web Services (AWS), as follows:
What is cloud?
History of cloud computing.
Which companies provide cloud services, and which are the top players in the market.
What is AWS?
What can you do with AWS?
Criteria and way to get a cloud job.
Bonus: Best courses for learning cloud in AWS.
What is cloud?
Cloud, despite what the name might suggest, is not actually a cloud or something up in the air. It’s simply somebody else’s computer, or more precisely, a server. Most of us use the cloud on a daily basis without realising what it is or how we are even using it.
Let’s say you want to set up your business online by creating websites and storing the information provided by all your users. In a typical early-90s scenario, this would require several rooms for servers and data storage, depending on the size of your business. You would also need managers, administrators, engineers, and other professionals to manage and administer the servers.
2011: Apple launches iCloud and Microsoft buys Skype
2015: The global cloud industry exceeds $100 billion in revenue
2016: AWS exceeds $12 billion in IaaS/PaaS revenue and now offers 70 distinct cloud services
2017: Microsoft passes $10 billion in SaaS revenue; Salesforce is the #2 SaaS player with $8.5 billion in revenue
2018: Global cloud IT infrastructure spend exceeds traditional IT spend
2019: The SaaS market exceeds $110 billion in revenue
2020: Total cloud services revenue exceeds $250 billion
Now, I hope you’ve got an idea of what most web hosting companies offer: they manage your platform, server, storage, and security professionally, so you only have to focus on serving great content!
Here is a list of my top 10 cloud service providers:
1. Amazon Web Services (AWS)
2. Microsoft Azure
3. Google Cloud
4. Alibaba Cloud
5. IBM Cloud
6. Oracle
7. Salesforce
8. SAP
9. Rackspace Cloud
10. VMware
The following table summarizes the top 3 key players and their offerings
in the cloud computing world:
What is AWS?
AWS’s revenue in 2018 was $25.6 billion, with a profit of $7.2 billion. Revenue was expected to grow to $33 billion in 2019.
In simple words, AWS allows you to do the following things:
Compute:
Lightsail — If you don’t have any prior experience with AWS, this is for you. It automatically deploys and manages the compute, storage, and networking capabilities required to run your applications.
EKS (Elastic Container Service for Kubernetes) — Allows you to
use Kubernetes on AWS without installing and managing your own
Kubernetes control plane. It is a relatively new service.
Storage:
S3 (Simple Storage Service) — Storage service of AWS in which
we can store objects like files, folders, images, documents, songs, etc.
It cannot be used to install software, games or Operating System.
EFS (Elastic File System) — Provides file storage for use with your
EC2 instances. It uses NFSv4 protocol and can be used concurrently
by thousands of instances.
Databases:
RDS (Relational Database Service) — Allows you to run relational databases like MySQL, MariaDB, PostgreSQL, Oracle, or SQL Server. These databases are fully managed by AWS, which handles tasks such as software installation and patching.
Whether you’re an experienced IT professional seeking to take your career in a new direction or new to cloud computing (or IT, for that matter), there are several reasons why you should consider AWS. Since AWS is the leading public cloud computing service, widely adopted by organizations both large and small, it follows that learning AWS has become a necessity for IT professionals who want to secure their future careers.
Choose a Career Path that Suits You Best:
There are many AWS career paths from which you can choose. You could also choose a specialty area on which to focus your attention and validate advanced skills in specific technical domains.
Why Linux is needed for Cloud and DevOps
professionals
Many core cloud and DevOps tools are open source and run on Linux, so a working knowledge of Linux helps professionals leverage these tools and contribute to the open-source community.
By having a strong foundation in Linux, cloud and DevOps
professionals can navigate the ecosystem, work with essential tools,
automate tasks efficiently, and contribute effectively to the
infrastructure and deployment processes required in these fields.
Linux Commands which are commonly used for
System Admins/Cloud & DevOps Engineers
cp: Copy files and directories.
Example: cp file.txt destination_directory/ (copy a file), cp -r directory destination_directory (copy a directory and its contents).
mv: Move/rename files and directories.
Example: mv file.txt new_location/file.txt (move a file), mv file.txt
new_name.txt (rename a file), mv directory new_location/directory
(move a directory).
cat: Display the contents of a file.
Example: cat file.txt.
less: View the contents of a file interactively.
Example: less file.txt.
ssh: Securely connect to a remote machine over the network.
Example: ssh user@hostname (connect to a remote machine).
top: Display running processes and system resource usage in real time.
Example: top (display live process information).
chroot: Change the root directory for a specific command or process.
Example: chroot /new_root_directory command (run a command
with a different root directory).
iostat: Report CPU and I/O statistics for devices and partitions.
Example: iostat
lsof: List open files and processes, useful for identifying resource
usage.
Example: lsof -i (list network connections)
netstat: Display network connection information, routing tables, and
network interface statistics.
Example: netstat -s (display network statistics)
iotop: Monitor I/O usage information of processes and disks.
Example: iotop
ip: Show or manipulate routing, network devices, and addresses.
Example: ip addr show
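As a quick, hedged illustration of how a few of these commands combine in practice, here is a minimal troubleshooting sequence; the hostname, user, and port are placeholders, not values from this book.
ssh admin@web01.example.com   # log in to the remote host (placeholder user and host)
top -b -n 1 | head -20        # take one snapshot of process and CPU/memory usage
iostat                        # report CPU and disk I/O statistics
lsof -i :80                   # find which process is listening on port 80
ip addr show                  # list interface addresses
netstat -s | less             # page through protocol statistics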
AWS Regions and Availability Zones
Purpose: Regions enable users to deploy applications and data across
multiple locations to enhance availability, reduce latency, and comply
with regulatory requirements.
Reduced Latency
Selecting a Region closest to your end-users can significantly reduce
latency, improving the user experience for your applications.
Regions allow you to store data in specific geographic locations,
meeting legal or regulatory requirements regarding data sovereignty.
Disaster Recovery
By using multiple AZs within a Region or across regions, businesses can
implement disaster recovery strategies that allow for rapid recovery of
IT systems without data loss in the case of a disaster.
Secure Remote Administration and Troubleshooting
of EC2 Instances
Clicking “Launch instance” in the EC2 console will direct you to a page where you can input settings for your new virtual machine. Assign a name to your instance and choose an appropriate AMI. For this project, ensure you select a Linux-based AMI, preferably one that is eligible for the free tier.
Maintain the “Instance Type” as t2.micro, as it qualifies for the free tier
and is ideal for this demonstration. Moving on to the next step will
require delving into slightly more technical details.
Step 2: Creating a Keypair:
Now, we need to generate a key pair to securely establish SSH connections to our instance. Restricting SSH access to specific IP addresses helps, but it is no substitute for key-based authentication using AWS’s integrated key generator. Therefore, when you reach this section, locate the “Create new key pair” link and click on it.
Upon clicking the “Create new key pair” link, a popup will appear
enabling you to generate a new key pair. Assign it a meaningful and
easily memorable name. Ensure that you select an RSA key pair type
and opt for the .pem key file format.
You might have observed the warning message above the “Create key
pair” button. By generating a key pair, you’ll be downloading a key onto
your local computer. Therefore, it’s crucial to remember where you save
it and how to identify the key later when connecting to the instance.
For my use, I downloaded the .pem file it generates; this key pair is what you will use to remote into your EC2 instance. Be aware of where you downloaded the .pem file, and consider placing it in an easily accessible folder, as you’ll need to access it later.
Step 3: Security Groups and Ports:
Next step, you’ll encounter a “Network Settings” tab. Navigate to the
top right-hand corner of the tab and select the “Edit” option.
If you’ve left the setting on the default “anywhere” rule, our key pair still means an unauthorized individual on the internet wouldn’t be able to directly log into your instance. However, they can keep attempting to access the instance, and over time those attempts pose a security risk.
To mitigate this risk, we can take proactive measures by specifying a
particular IP address as the sole source permitted to connect to our
instance (My IP) on port 22.
Up and running.
Once the next page loads, you’ll find a plethora of information about
your instance. Most of this information can be disregarded for the
current task. Locate and click on the “Connect” button to proceed.
When the next page loads, it should automatically open into the “SSH
client” tab, providing all the necessary information to connect to your
instance.
Now, let’s open our command line. If you’re using a Linux or Linux-like
operating system, you’ll need to adjust the permissions on your .pem key
file. AWS provides the command for this, which you can copy and run
in the terminal. However, since I’m on a Windows OS, I can skip this
step.
Before running any commands, we need to navigate our “current
working directory” to the location where our .pem file is stored.
Depending on where you downloaded it, you’ll need to use
the cd command to navigate to the associated directory.
Once you’re in the correct directory, you can copy the command from
the “Connect” tab in the AWS console and paste it into your terminal.
Then, execute the command. If this is your first time connecting to the
instance with this particular IP address, it will prompt you to confirm the
connection. Type “yes” into the command line and press Enter to
proceed.
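For reference, on Linux or macOS the two commands typically look like the sketch below. The key file name and IP address are placeholders, and ec2-user is the default user for Amazon Linux AMIs (other AMIs use different default users, such as ubuntu).
chmod 400 my-keypair.pem                     # restrict permissions so SSH will accept the key
ssh -i my-keypair.pem ec2-user@203.0.113.10  # replace with your instance's public IP or DNS name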
SSH’d.
We are going to replicate the following functions with a Windows
server.
Don’t forget the security group rules! You don’t want your EC2 instance
compromised. Proceed to launch your Windows EC2 instance.
See a difference?
There’s a difference in the “Connect” page compared to the Linux
server.
Click on “Download remote desktop file” to get an RDP file that you can open with the Remote Desktop client on your PC.
Just like that, we have spun up a fully operational Windows Server from the comfort of our home. This demonstrates the power and range of cloud computing.
Elastic Compute Cloud – EC2
EC2 is a web service which provides secure, resizable compute
capacity in the cloud
EC2 interface allows you to obtain & configure capacity with minimal
friction
EC2 offers the broadest and deepest compute platform with choice of
processor, storage, networking, operating system, and purchase model.
Amazon offers the fastest processors in the cloud and is the only cloud with 400 Gbps Ethernet networking.
Amazon has the most powerful GPU instances for machine learning and graphics workloads.
Various configurations of CPU, memory, storage, and networking capacity for your instances, known as instance types
Secure login information for your instances using key pairs (AWS stores the public key, and you store the private key)
Storage volumes for temporary data that are deleted when you stop, hibernate, or terminate your instances, known as instance store volumes
Persistent storage volumes for your data using Elastic Block Store, known as Amazon EBS volumes
Multiple physical locations for your resources, such as instances and EBS volumes, known as Regions and Availability Zones
A firewall that enables you to specify the protocols, ports, and source IP ranges that can reach your instances, known as security groups
Static IPv4 addresses for dynamic cloud computing, known as Elastic IP addresses
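To see how these pieces fit together, here is a hedged AWS CLI sketch that launches a single instance by naming an AMI, an instance type, a key pair, and a security group. All IDs and names below are placeholders to substitute with your own.
# Launch one instance referencing an AMI, instance type, key pair, and security group
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --key-name my-keypair \
  --security-group-ids sg-0123456789abcdef0 \
  --count 1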
On-Demand Instances:
You pay for compute capacity by the hour or by the second, depending on which instances you run.
No long-term commitments or upfront payments are needed.
You can increase or decrease your compute capacity depending on the demands of your application and only pay the specified hourly rate for the instances you use.
On-Demand Instances are recommended for:
Users that prefer the low cost and flexibility of Amazon EC2 without any upfront payment or long-term commitment
Applications with short-term, spiky, or unpredictable workloads that cannot be interrupted
Applications being developed or tested on Amazon EC2 for the first time
Spot Instances:
Amazon EC2 Spot Instances allow you to request spare Amazon EC2 computing capacity at discounts of up to 90% compared to the On-Demand price.
Spot Instances are recommended for:
Applications that have flexible start and end times
Applications that are only feasible at very low compute prices
Workloads that can tolerate interruptions, since there is no guarantee of 24x7 uptime
Reserved Instances:
Reserved Instances provide you with a significant discount (up to 75%) compared to On-Demand Instance pricing.
For applications that have steady-state or predictable usage, Reserved Instances can provide significant savings compared to using On-Demand Instances.
Recommended for:
Applications with steady-state usage
Applications that may require reserved capacity
Customers that can commit to using EC2 over a 1- or 3-year term to reduce their total computing costs
Savings Plans:
Savings Plans are a flexible pricing model that offers low prices on EC2 in exchange for a commitment to a consistent amount of usage for a 1- or 3-year term, with discounts of up to 72%.
Amazon EC2 provides a wide selection of instance types optimized to fit different use cases.
Instance types comprise varying combinations of CPU, memory, storage, and networking capacity. Each instance type includes one or more instance sizes, allowing you to scale your resources to the requirements of your target workload.
General Purpose
Compute Optimized
Accelerated Computing (GPU Optimized)
Memory Optimized
Storage Optimized
General Purpose:
General purpose instances provide a balance of compute, memory and
networking resources, and can be used for a variety of diverse workloads.
These instances are ideal for applications that use these resources in equal
proportions such as web servers and code repositories.
Ex: Mac, T4g, T3, T3a, T2, M6g, M5, M5a, M5n, M5zn, M4, A1
Compute Optimized:
Compute Optimized instances are ideal for compute-bound applications that benefit from high-performance processors. Instances belonging to this family are well suited for batch processing workloads, media transcoding, high-performance web servers, and high-performance computing.
Ex: C6g, C6gn, C5, C5a, C5n, C4
Memory Optimized:
Memory Optimized instances are designed to deliver fast performance for workloads that process large data sets in memory.
Use case: Memory-intensive applications such as open-source databases, in-memory caches, and real-time big data analytics
Ex: R6g, R5, R5a, R5b, R5n, R4, X2gd, X1e, X1, u, Z1d
Accelerated Computing:
Accelerated computing instances use hardware accelerators, or co-
processors, to perform functions, such as floating-point number
calculations, graphics processing, or data pattern matching, more
efficiently than is possible in software running on CPUs.
Storage Optimized:
Storage Optimized instances are designed for workloads that require high, sequential read and write access to very large data sets on local storage. They are optimized to deliver tens of thousands of low-latency, random I/O operations per second (IOPS) to applications.
Ex: I3, I3en, D2, D3, D3en, H1
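If you want to compare these families from the command line, the AWS CLI can list the vCPU and memory of specific instance types. This is a minimal sketch; the queried types are just examples.
# Show vCPU count and memory (MiB) for a few example instance types
aws ec2 describe-instance-types \
  --instance-types t3.micro c5.large r5.large \
  --query "InstanceTypes[].[InstanceType,VCpuInfo.DefaultVCpus,MemoryInfo.SizeInMiB]" \
  --output table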
Instance Features:
Amazon EC2 instances provide a number of additional features to help you deploy, manage, and scale your applications.
Burstable Performance instances
Multiple Storage Options
EBS Optimized Instances
Cluster Networking
Burstable Performance Instances:
Burstable instances such as the t2.small earn CPU credits continuously at a baseline rate equivalent to 20% of a CPU core (20% x 60 mins = 12 credits per hour). If the instance does not use the credits it receives, they are stored in its CPU credit balance up to a maximum of 288 CPU credits. When the t2.small instance needs to burst to more than 20% of a core, it draws from its CPU credit balance to handle this surge automatically.
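As a rough worked example using the t2.small figures above (one CPU credit equals one minute of a full core):
Credits earned per hour = 12
Credits consumed at 100% CPU = 60 per hour
Net drain while bursting = 60 - 12 = 48 credits per hour
With a full balance of 288 credits, the instance can therefore burst at 100% for roughly 288 / 48 = 6 hours before falling back to the 20% baseline.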
Cluster Networking:
Select EC2 instances support cluster networking when launched into a
common cluster placement group. A cluster placement group provides
low-latency networking between all instances in the cluster. The
bandwidth an EC2 instance can utilize depends on the instance type and its networking performance specification.
Tenancy:
When launching an EC2 instance, you can select a tenancy option; this relates to what underlying host your EC2 instance will reside on. There are three options:
Shared Tenancy
Dedicated Instance
Dedicated Host
Shared Tenancy:
This option will launch your EC2 instance on any available host with the resources required for your selected instance type, regardless of which other customers and users also have EC2 instances running on the same host. This means we share the physical hardware with other customers.
AWS implements advanced security mechanisms to prevent one EC2 instance from accessing another on the same host.
Dedicated Instance:
Dedicated Instances are hosted on hardware that no other customer can access; it can only be accessed by your own AWS account. You may be required to launch your instances as Dedicated Instances due to internal security policies or external compliance controls.
Dedicated Instances incur additional charges because you are preventing other customers from running EC2 instances on the same hardware, so there will likely be unused capacity remaining.
Dedicated Host:
A Dedicated Host is effectively the same as a Dedicated Instance; however, it offers additional visibility into, and control over, how your instances are placed on the physical host. Dedicated Hosts also allow you to use your existing licenses, such as PA-VM licenses or Windows Server licenses. Using Dedicated Hosts gives you the ability to use the same host for a number of instances that you want to launch, and to align with any compliance and regulatory requirements.
The following is a list of important terms you need to know before creating EC2 instances:
Amazon Machine Image (AMI)
Instance Type
Network
Subnet
Public IP
Elastic IP
Private IP
Placement Group
Root Volume
Security Group
KeyPair
Instance Type:
Instance types comprise varying combinations of CPU, memory, storage, and networking capacity and give you the flexibility to choose the appropriate mix of resources for your applications.
Subnet:
A subnet is a subnetwork within the virtual network of your Amazon VPC. By default, there is one subnet per Availability Zone.
Public IP:
A public IP is an IP address that can be used to access the internet and allows communication over the internet. The public IP is assigned by Amazon and is dynamic: if you stop and start your EC2 instance, the public IP will change.
Elastic IP (EIP):
An Elastic IP is a fixed public IP address that we can attach to our instances. An Elastic IP will not change if we stop and start our EC2 instances. We need to request an EIP from Amazon; it is free while attached to a running instance, but if you keep the EIP unused in your account it will be charged after the initial first hour.
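For reference, allocating and attaching an Elastic IP from the AWS CLI looks roughly like the sketch below; the instance and allocation IDs are placeholders.
# Allocate a new Elastic IP in the VPC scope
aws ec2 allocate-address --domain vpc
# Associate it with an instance (placeholder IDs)
aws ec2 associate-address \
  --instance-id i-0123456789abcdef0 \
  --allocation-id eipalloc-0123456789abcdef0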
Private IP:
A private IP can be used to establish communication within the same network only. Private (internal) addresses are not routed on the internet and no traffic can be sent to them from the internet, which means no internet access is available over a private address.
Partition Placement Group: Partition placement groups help reduce the likelihood of correlated hardware failures for your application. When using partition placement groups, Amazon EC2 divides each group into logical segments called partitions. Amazon EC2 ensures that each partition within a placement group has its own set of racks. Each rack has its own network and power source. No two partitions within a placement group share the same racks, allowing you to isolate the impact of hardware failure within your application.
Spread placement groups are recommended for applications that have a small number of critical instances that should be kept separate from each other.
A spread placement group can span multiple Availability Zones in the same Region. You can have a maximum of seven running instances per Availability Zone per group.
When you create a new security group, it has no inbound rules.
By default, a security group includes an outbound rule that allows all
outbound traffic.
By default, you can create 2,500 security groups per Region; this limit can be increased up to 5,000 per Region.
Each security group supports 60 inbound and 60 outbound rules by default.
KeyPair: A key pair is a combination of a public key and a private key which can be used to encrypt and decrypt data; it is a set of security credentials that you use to prove your identity when connecting to an instance. Amazon EC2 stores the public key and the user stores the private key.
To-do List 1:
1. Launch a Windows Server 2016 EC2 instance in the N. Virginia Region
While creating the EC2 instance, open the RDP port in the security group from your IP only
Decrypt the password using the key pair file which we used while creating the EC2 instance
Access the Windows server using the Remote Desktop Connection tool
2. Launch an Amazon Linux 2 EC2 instance in the N. Virginia Region
While creating the EC2 instance, open the SSH port in the security group from anywhere
Generate a private key using the PuTTYgen tool from the key pair which we used while creating the EC2 instance
Access the Amazon Linux EC2 instance using the PuTTY tool
3. Install a web server on the Amazon Linux 2 EC2 instance and host a website
4. Create a custom AMI from the Amazon Linux EC2 instance on which we hosted the website.
5. Launch a new EC2 instance from the custom AMI in the N. Virginia Region
6. Copy the AMI from N. Virginia to the Mumbai Region
7. Launch a new instance from the custom AMI in the Mumbai Region
8. Share the custom AMI with a specific Amazon account and launch a new EC2 instance in that other Amazon account
9. Share the custom AMI publicly
VIRTUAL PRIVATE CLOUD
What is a VPC? Let’s Create a VPC in AWS
IP addressing: You can assign IPv4 and IPv6 addresses to your VPCs and subnets. You can also bring your own public IPv4 and IPv6 addresses to AWS and allocate them to resources in your VPC, such as EC2 instances, NAT gateways, and Network Load Balancers.
Routing: Route tables decide where traffic should go. Resources such as instances inside the subnets of a VPC can all reach each other because of the local route that is present in every route table by default.
The description of this figure:
Two stacks of resources are created in one VPC. From the figure, it is a 3-tier infrastructure: the VPC consists of a database tier, an application tier that processes the internal codebase, and a web tier, which is the presentation tier shown to clients or customers.
All resources are kept private for security reasons. We don’t want our databases to be compromised by a hacker, because databases hold the most crucial customer data, such as credit card information, IDs, and so on.
There are two route tables. One is the default route table and is responsible for interconnection between subnets. The other route table routes the subnet to the internet gateway, which leads to external networks.
Here are the steps for setting up a VPC in the AWS
environment:
I have my own diagram to create the structure of the VPC:
Give the VPC an IPv4 CIDR block covering 192.168.0.0 to 192.168.255.255, which means around 65,536 IP addresses can be allocated to instances. In this case, the CIDR is written as 192.168.0.0/16.
b.) Private Subnet:
Choose “VPC ID” as “my_VPC”. Name the Subnet as “Private
Subnet” and choose “Availability Zone (AZ)” as “ap-south-1b”.
Give “IPv4 CIDR block” as “192.168.2.0/24” and create it.
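The same VPC and subnets can also be created with the AWS CLI. A minimal sketch, with the VPC ID and the public-subnet CIDR shown as illustrative placeholders, is:
# Create the VPC with the 192.168.0.0/16 range
aws ec2 create-vpc --cidr-block 192.168.0.0/16
# Create the public and private subnets inside it (placeholder VPC ID)
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 \
  --cidr-block 192.168.1.0/24 --availability-zone ap-south-1a
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 \
  --cidr-block 192.168.2.0/24 --availability-zone ap-south-1b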
Route Tables:
The route table we set up allows the server to reach the internet. Here we set the destination as 0.0.0.0/0, meaning all traffic not destined for the VPC itself, and point it at the internet gateway. The internet gateway is a router that leads to the internet; traffic from the associated subnets passes through the internet gateway to reach the internet.
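In CLI terms, the public route table setup described above corresponds roughly to the following; all IDs are placeholders.
# Create a route table in the VPC
aws ec2 create-route-table --vpc-id vpc-0123456789abcdef0
# Add a default route (0.0.0.0/0) pointing at the internet gateway
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0123456789abcdef0
# Associate the route table with the public subnet
aws ec2 associate-route-table --route-table-id rtb-0123456789abcdef0 \
  --subnet-id subnet-0123456789abcdef0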
Now all the main VPC settings have been done. Let’s test it now and
launch an instance.
4. Change the security group rule to allow All traffic, and leave the storage settings as they are.
Now launch it successfully.
Now let’s test the launched instances. First connect to the Public_instance, since the public instance is attached to the route table that has a route through the internet gateway to 0.0.0.0/0 (the public network). We will be able to use the SSH protocol to connect to the instance over the internet.
If we ping 8.8.8.8 from Public_instance, it will work.
Now let’s test the Private_instance. If you try to connect to it using the SSH protocol over the internet, it won’t work: the instance has no public IP, so it cannot be reached this way.
When you connect to your Public_instance and, from it, ping the Private_instance’s private IP address, the ping succeeds, because within the VPC there is local connectivity between the instances.
Whenever there is a need to access the Private_instance, we can use the Public_instance as a jump host to connect to it, as shown below:
VPC Peering Across Two Regions
A virtual private cloud (VPC) is a virtual network dedicated to your
AWS account. It is logically isolated from other virtual networks in the
AWS Cloud.
A VPC peering connection is a networking connection between two
VPCs that enables you to route traffic between them using private IPv4
addresses or IPv6 addresses. Instances in either VPC can communicate
with each other as if they are within the same network. You can create a
VPC peering connection between your own VPCs, or with a VPC in
another AWS account.
For example, if you have more than one AWS account, you can peer the
VPCs across those accounts to create a file sharing network. You can
also use a VPC peering connection to allow other VPCs to access
resources you have in one of your VPCs. When you establish peering
relationships between VPCs across different AWS Regions, resources in
the VPCs (for example, EC2 instances and Lambda functions) in
different AWS Regions can communicate with each other using private
IP addresses, without using a gateway, VPN connection, or network
appliance.
Pricing for a VPC peering connection: There is no charge to create a VPC peering connection. All data transfer over a VPC peering connection that stays within an Availability Zone (AZ) is free. Charges apply for data transfer over VPC peering connections that cross Availability Zones or Regions.
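For reference, creating and accepting a cross-Region peering connection from the AWS CLI looks roughly like this; the Regions match the walkthrough below, while the resource IDs are placeholders.
# From the requester Region (us-east-1): request peering with the Mumbai VPC
aws ec2 create-vpc-peering-connection \
  --vpc-id vpc-0aaaaaaaaaaaaaaaa \
  --peer-vpc-id vpc-0bbbbbbbbbbbbbbbb \
  --peer-region ap-south-1
# From the accepter Region (ap-south-1): accept the request
aws ec2 accept-vpc-peering-connection \
  --region ap-south-1 \
  --vpc-peering-connection-id pcx-0123456789abcdef0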
We will create two VPCs in two different Regions, establish a peering connection between them, then create two EC2 instances and test their connectivity across both VPCs.
Region 1- Virginia
—————————————
* Create VPC in one region (Virginia region)
* Create subnet -1
Create subnet associated with VPC-1
2. Enter Details and click on create button
* Create Internet gateway
1. Click on Internet Gateway
2. Enter name and click on create
2. Select created internet gateway and attach to VPC-1
b. Click on Routes section and enter internet gateway at 0.0.0.0/0
* Creating EC2 Instance in Virginia region
Go to EC2 dashboard and click on launch instance
5. Click on instance and edit inbound rule in security group
6. connect to instance-virginia
Part II
Region 2 - Mumbai
—————————————
* Create VPC in another region (Mumbai region)
* Create subnet-2
Create subnet associated with VPC-2
2. Enter Details and click on create button
* Create Internet gateway
1. Click on Internet Gateway
2. Enter name and click on create
* Create Route table
a. Create a route table and select VPC-2
c. Associate public subnet into subnet association section as follows
5. Click on instance and edit inbound rule in security group
7. Connect to instance-Mumbai and use command
Command : ping <private ip of Virginia region instance>
Result: As both instances are in different VPCs with no peering connection yet, they will not be able to reach each other.
Enter a name.
Select VPC-1 as the requester VPC to peer with.
Select the account and Region as per your requirement.
Here both VPCs are in different Regions but in the same account, so select “My account” and “Another Region”.
5. Select the Region name and paste the VPC ID (refer to the next point) of the Mumbai Region VPC.
7. Click on create peering connection button
9. Now Go to Mumbai region and then in peering connection
10. Click on actions
11. Select “Accept request”
2. Click on Routes section and enter peering connection at
192.168.0.0/16 (VPC CIDR of vpc-2)
Go to Mumbai region and Select Route-2
2. Click on Routes section and enter peering connection at 10.0.0.0/16
(VPC CIDR of vpc-1)
* Connect Both Instance and test their connectivity in both the VPC
1. Now Connect to Instance-Mumbai
2. Now Connect to Instance-Virginia
AWS TRANSIT GATEWAY
Introduction:
AWS Transit Gateway is a service that allows customers to connect their
Amazon Virtual Private Clouds (VPCs) and on-premises networks to a
central hub. This simplifies network management by providing a single
gateway to manage network connectivity between multiple VPCs and
on-premises networks.
Scalability: A Transit Gateway scales elastically as the number of attached VPCs and on-premises networks grows, making it easier to accommodate growing network traffic.
Connect VPCs across multiple accounts and AWS Regions.
Create a hub-and-spoke model for segmented networks.
Share centralized internet connectivity across accounts.
Migrate from a mesh or hub-and-spoke model to a Transit Gateway.
Connect remote offices and data centers to AWS.
To get started with Transit Gateway, the first step is to create the Transit
Gateway resource in your desired AWS region.
When creating the Transit Gateway, you need to specify a name tag so it
can be easily identified. You also have the option to enable DNS support
if you need resolution between your connected networks.
Some key considerations when creating the Transit Gateway
Transit Gateways are regional resources. So, you need to decide which
region makes the most sense as the connectivity hub for your use case.
A Transit Gateway is not created inside a particular VPC; it is a Regional resource to which you attach the VPCs you want to connect after creation.
Transit Gateway throughput scales automatically, so there is no instance size to choose; instead, plan around the expected traffic volume and the per-attachment bandwidth limits.
You can enable sharing with other accounts upon creation or do it later.
Account sharing allows connections from other accounts.
Logging can be enabled to track connection activity and events. The logs
will be sent to CloudWatch Logs.
Once the Transit Gateway is created, you will get an ID for it that is needed to attach VPCs and other networks. It takes some time for the Transit Gateway to be ready for use after creation.
Transit Gateway to be ready for use after creation.
So those are some of the key options to consider when creating your
Transit Gateway in the region of your choice. The console wizard will
guide you through all the necessary configuration.
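If you prefer the CLI over the console wizard, the equivalent calls are roughly as follows; the description, IDs, and subnets are placeholders.
# Create the Transit Gateway in the chosen Region
aws ec2 create-transit-gateway \
  --description "hub for my VPCs" \
  --options DnsSupport=enable
# Attach a VPC to it, using one subnet per Availability Zone
aws ec2 create-transit-gateway-vpc-attachment \
  --transit-gateway-id tgw-0123456789abcdef0 \
  --vpc-id vpc-0123456789abcdef0 \
  --subnet-ids subnet-0aaaaaaaaaaaaaaaa subnet-0bbbbbbbbbbbbbbbb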
Attachments can use subnets in multiple Availability Zones for redundancy and scaling.
The Transit Gateway provides connectivity between the VPCs as soon
as the attachments are created and routes propagated. So, you can build
out connectivity to more VPCs incrementally.
Enable route propagation.
The attachment creation process will take some time to complete. Once
available, your on-premises network will be able to connect to the VPCs
and networks attached to the Transit Gateway.
You can create multiple VPN or Direct Connect attachments for high
availability and failover between your data center and Transit Gateway.
You can create complex network segmentation policies by leveraging
multiple route tables.
For example, you can create tiers like Public, Private, Restricted and
assign VPC subnets to them via route tables.
Route priorities determine which route takes effect if there are multiple
routes to a destination.
By leveraging custom route tables, you can exercise fine-grained control over how traffic flows between your connected networks using Transit Gateway.
For security, enable VPC route table propagation sparingly for shared
accounts.
Use AWS Resource Access Manager (RAM) to share Transit Gateways across accounts.
By sharing Transit Gateways, you can significantly simplify
connectivity and reduce provisioning time across different accounts in
your organization. But balance the convenience with appropriate access
controls.
AWS provides an easy-to-use wizard in the console to guide you through
the configuration process.
Transit Gateway:
A transit gateway is a network transit hub that you can use to
interconnect your virtual private clouds (VPCs) and on-premises
networks. As your cloud infrastructure expands globally, inter-Region
peering connects transit gateways together using the AWS Global
Infrastructure. All network traffic between AWS data centers is
automatically encrypted at the physical layer.
Select VPC from AWS console and Create VPC
Then click on Create VPC.
In a similar way, create the other two VPCs, naming them VPC B and VPC C, with 10.20.0.0/16 and 10.30.0.0/16 as their CIDR blocks respectively.
From the diagram, we can see that VPC A is public and VPC B and C are private. So, we need to configure an internet gateway for VPC A.
Click on Internet Gateway in the LHS panel and click on Create internet gateway.
Now select the newly created IGW and select Attach to VPC from
Actions
Select VPC A from drop down and click on Attach internet gateway
Next, we need to create Subnets, for that click on Subnets from the LHS
panel and click on create subnet for VPC A as per below
Likewise, we have to create three subnets, one for each VPC. I have created them with the info below:
VPC-A-Public-Subnet1
10.10.1.0/24
VPC-B-Private-Subnet1
10.20.1.0/24
VPC-C-Private-Subnet1
10.30.1.0/24
Now we need to add the route tables. Click on Route Tables in the LHS panel and click on Create route table.
Next, we have to associate the subnet with routing table. For that select
VPC-A-Route -> Click on Subnet associations -> Edit subnet
associations, then select VPC-A-Public-Subnet1 ->Save associations.
We’re all set on the VPC side. Now select EC2 from the AWS console and create two instances as per the diagram above.
Configure Network Settings as below
Leave the rest of the settings at their defaults, the same as for VPC-A-Public. Then create one more instance in the same way as VPC-B-Private.
Also create one more inbound rule in VPC-A-Public as shown below
Once the instances are up and running, take an SSH connection and log in as root. Then, from VPC-A-Public, check whether the private IPs of VPC-B-Private and VPC-C-Private are reachable. They should not be reachable, as shown below.
Select Transit gateway attachment from the LHS panel and create a Transit gateway attachment for every VPC.
Give a name and select the transit gateway as shown below:
Then configure the attachment as below:
Now go to Route Tables, select VPC A and add route as shown below:
Update the same for VPC B and VPC C.
Now check the connectivity from VPC-A-Public again: the private IPs of VPC-B-Private and VPC-C-Private should now be reachable.
Conclusion:
AWS Transit Gateway simplifies cloud network architectures by acting
as a hub to interconnect your VPCs, VPNs, and data centers. It
eliminates complex mesh topologies and provides easy scalability,
centralized management, and secure network segmentation. As your
cloud footprint grows, Transit Gateway is key to maintaining a simple,
efficient, and secure network topology.
CREATE VPC ENDPOINT FOR S3 BUCKET IN
AWS
Here VPC Endpoint for S3 comes to the rescue. VPC Endpoint for S3
provides us a secure link to access resources stored on S3 without
routing through the internet. AWS doesn’t charge anything for using this
service.
VPC Endpoint:
A VPC endpoint enables us to privately connect our VPC to supported AWS services without requiring an internet gateway, NAT device, or VPN connection. Instances in our VPC do not require public IP addresses to communicate with AWS services.
Types of VPC Endpoints:
1. Interface Endpoint: It is an elastic network interface with a private
IP address from the IP address range of your subnet that serves as an
entry point for traffic destined to a supported service.
2. Gateway Endpoint: This type connects your VPC to supported AWS services over a scalable and highly available VPC endpoint that you add as a target in your route tables. Gateway endpoints are available for Amazon S3 and DynamoDB. Here we will talk about the S3 VPC endpoint, which is a type of gateway endpoint.
By using VPC endpoints, you can create a more isolated and secure
environment for your AWS resources while still enabling them to access
the necessary services without exposing them to the public internet.
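A gateway endpoint for S3 can also be created with a single CLI call, which is a useful way to see the pieces involved; the Region, VPC ID, and route table IDs below are placeholders.
# Create an S3 gateway endpoint and associate it with both route tables
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Gateway \
  --service-name com.amazonaws.us-east-1.s3 \
  --route-table-ids rtb-0aaaaaaaaaaaaaaaa rtb-0bbbbbbbbbbbbbbbb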
Step 2: Create two subnets, one public and one private. For the public subnet, give an IPv4 CIDR of 10.0.0.0/24, and for the private subnet, give an IPv4 CIDR of 10.0.1.0/24.
Step 4: Attach your igw to your VPC
Step 5: Create route tables. A route table contains a set of rules (routes) that dictate how network traffic is directed within the VPC. Here we create two route tables, one for the public subnet and another for the private subnet.
Step 6: Subnet Association
Each subnet in a VPC must be associated with a route table. This
association determines how traffic is routed for resources within that
subnet.
Step 7: To provide internet access to resources within a subnet, you
would add a default route (0.0.0.0/0) with the target set to an Internet
Gateway (IGW). This allows traffic from the subnet to flow through the
IGW to the public internet.
Step 8: Perform similar steps for the private route table and associate the private subnet with it.
Step 9: Launch EC2 instances.
Step 10: In the network settings of the EC2 instance, select the VPC we created in Step 1 and the public subnet (where auto-assign public IP is enabled), then launch the instance.
Step 12: Create an S3 bucket named boon123 and upload some files to it.
Our main aim is to access these files from our private server without using the internet. We have not provided it a public IP. If we are able to access these files from our private server, then we have established the endpoint connection correctly.
Step 13: Create the endpoint connection, named ujjwal-endpoint. In the service category, we select AWS services.
In Services, we select the S3 Gateway endpoint, which connects your VPC to S3 over a scalable and highly available VPC endpoint (gateway endpoints are available for Amazon S3 and DynamoDB).
We select both of our route tables and give full access in the policy; with that, our endpoint connection is established.
Step 14: First, we connect to our public server. After successfully connecting to this server, we proceed to configure the AWS CLI on the Amazon Linux instance.
Step 16: Next, to check the S3 content from our private server, we need to get the private key (.pem) onto the public server so we can hop to the private instance. Create a file for the key using the command below, paste the key material into the newly created .pem file, and then restrict its permissions:
vim filename.pem
chmod 600 filename.pem
Step 17: Then, once again, we have to configure the AWS CLI on the private server.
Run the same S3 listing command again.
We can now access the files present in our S3 bucket from the private server without using the internet, thanks to the established endpoint connection.
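That check from the private server is essentially the sketch below; the bucket name follows the example above, while the object name is a placeholder.
# List the bucket contents from the private instance (no internet path exists)
aws s3 ls s3://boon123
# Optionally copy one of the uploaded files down to the instance
aws s3 cp s3://boon123/testfile.txt .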
Conclusion
In conclusion, utilizing an AWS S3 VPC endpoint offers a secure and
efficient means of accessing S3 buckets from within an Amazon Virtual
Private Cloud (VPC). By establishing a direct and private connection
between resources in the VPC and S3 without traversing the public
internet, VPC endpoints enhance security and reduce latency. This
setup ensures that data transfers to and from S3 remain within the AWS
network, mitigating exposure to potential security threats and optimizing
performance. Implementing S3 VPC endpoints is therefore a
recommended best practice for organizations seeking to maximize the
security and efficiency of their AWS infrastructure.
Security Groups and NACL
This section covers Security Groups and Network Access Control Lists (NACLs) in AWS, and helps you understand when to use each and when not to.
Let’s start with the basic definitions
Security Group:
A Security Group is a stateful firewall that can be associated with instances; it acts as a firewall for one or more instances. A Security Group always has a hidden implicit deny in both inbound and outbound rules, so we can only allow traffic explicitly; we cannot deny traffic explicitly in Security Groups.
Default Security Group:
By default, a Security Group is like:
When we talk about the default Security Group, there are two things to
discuss — AWS created Default SG, User Created Default SG.
AWS creates a default SG when it creates a default VPC — in this
security group they will add an inbound rule which says all Instances in
this Security Group can talk to each other.
Any Security Group created explicitly by a user wouldn’t contain this inbound rule allowing communication between the instances; we should explicitly add it if required.
Both in the AWS created SG and User Created Custom SG, the
Outbound Rules would be the same — which allows ALL TRAFFIC
out.
We cannot add a Deny Rule, both in Inbound and Outbound Rules as
there’s a hidden default Implicit Deny Rule in Security Groups. All we
can do is allow which is required, everything else which isn’t allowed by
us is blocked.
A default security group that is created by default in the default VPC by
AWS looks like this —
Default Security Group Outbound Rules.
Stateful Firewall
Connection Tracking
Stateful Firewall:
Stateful means — maintain the state of connection so that you introduce
yourself only once, not every time you start talking — think TCP
session, once established, they start talking till one of them says Finish or
Reset.
The reason why a Security Group is called a Stateful Firewall is —
Security Group basically maintains the State of a connection, meaning —
if an instance sends a request , the response traffic from outside is
allowed back irrespective of the inbound rules, and vice versa.
Example: If my security group inbound rule allows NO TRAFFIC and
outbound rule allows ALL TRAFFIC and I visit a website on my
instance, the response from the WebServer back to my instance will be
allowed even though the inbound rule denies everything.
A Security Group achieves this by leveraging something known as Connection Tracking, which we will discuss shortly.
Connection Tracking:
Security Groups use Connection Tracking to keep track of connection
information that flows in and out of an instance, this information
includes — IP address, Port number and some other information(for
some specific protocols).
A Security Group needs to track a connection only when there is no inbound/outbound rule that already allows all traffic. If we have allowed ALL traffic from outside and ALL traffic to outside, it need not track anything, because whatever comes and goes is allowed anyway.
Type — Type of Traffic which can be TCP, UDP, ICMP. Type field
provides the well-used protocols, when selected it auto fills the Protocol
field. You may also select a Custom Protocol Rule, which allows you to
select the Protocol field from a wide range of Protocols.
Protocol — As mentioned already, if you select a Custom Protocol Rule
in Type field, you can select a Protocol from the available Protocol List.
Port Range — You can specify a single port or a range of ports like this
5000–6000.
Source[Inbound Rules only] — Can be Custom — a single IP address
or an entire CIDR block, anywhere — 0.0.0.0/0 in case of IPv4, My IP
Address — AWS auto-detects your Public IP address. Destination can
only be mentioned in Outbound Rule.
Destination [Outbound Rules only] — Can be Custom — a single IP
address or an entire CIDR block, anywhere — 0.0.0.0/0 in case of IPv4,
My IP Address — AWS auto-detects your Public IP address. Source can
only be mentioned in Inbound Rule.
Description — This field is optional. You can add a description which
helps you to keep a track of which rule is for what.
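Translated into the AWS CLI, adding such an inbound rule looks roughly like this; the group ID and source IP are placeholders.
# Allow SSH (TCP port 22) only from a single source IP
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 22 \
  --cidr 203.0.113.25/32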
Default NACL:
By default, a NACL is like:
When we create a VPC, a default NACL is created which allows ALL inbound and outbound traffic. If we don’t explicitly associate a subnet with a NACL, the default NACL of that VPC is associated with the subnet. A default NACL looks like this —
NACL Features:
Statelessness:
Unlike Security Groups, NACL doesn’t maintain any track of
connections which makes it completely Stateless, meaning — if some
traffic is allowed in NACL Inbound Rule, the response Outbound traffic
is not allowed by default unless specified in the Outbound Rules.
Rule Number — Rules are evaluated starting with the lowest numbered
rule. If a rule matches, it gets executed without checking for any other
higher numbered rules.
Type — Type of traffic, which can be TCP, UDP, or ICMP. The Type field provides the well-used protocols; when selected, it auto-fills the Protocol field. You may also select a Custom Protocol Rule, which allows you to select the Protocol field from a wide range of protocols.
Protocol — As mentioned already, if you select a Custom Protocol Rule
in Type field, you can select a Protocol from the available Protocol List.
Port Range — You can specify a single port or a range of ports like this
5000–6000.
Source [Inbound Rules only] — Can be a Single IP Address or an
entire CIDR block. Destination can only be mentioned in Outbound
Rule.
Destination[Outbound Rules only] — Can be a Single IP Address or
an entire CIDR block. Source can only be mentioned in Inbound Rule.
Allow/Deny — Specifies whether to allow or deny traffic.
Use Case:
I will give an example to make you understand when to use Security
Group and when to use NACL —
Let’s say you have allowed SSH access to an instance for a user on the Dev team, and he’s connected to it and actively accessing it. For some reason (say, you realize the user is involved in some malicious activity) you want to remove his SSH access.
In this case you have two choices —
1) Remove the SSH inbound allow rule for that user in the Security Group inbound rules.
2) Add a NACL rule explicitly denying traffic from his IP address.
If you go with the first one, he would not lose his existing SSH connection; this is due to the connection tracking behavior of Security Groups. If you go with the latter choice, the NACL would immediately block his connection.
So, in this case, it’s better to use a NACL Deny Rule rather than deleting
a Security Group allow Rule.
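A deny entry like that can be added from the CLI as sketched below; the NACL ID, rule number, and offending IP are placeholders. The low rule number matters because NACL rules are evaluated in ascending order.
# Deny inbound SSH from one IP address before the broader allow rules are evaluated
aws ec2 create-network-acl-entry \
  --network-acl-id acl-0123456789abcdef0 \
  --ingress \
  --rule-number 90 \
  --protocol tcp \
  --port-range From=22,To=22 \
  --cidr-block 203.0.113.25/32 \
  --rule-action deny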
Key Points:
A single NACL can be associated with multiple subnets; however, a single subnet cannot be associated with multiple NACLs at the same time, as there could be multiple deny rules contradicting each other.
Security Groups:
VPC Security Groups per Region — 2500
Rules Per Security Group — 60 Inbound and 60 Outbound.
Key Points:
A single Security Group can be associated with multiple instances and, unlike NACLs, a single instance can be associated with multiple Security Groups, since there are no explicit deny rules that could contradict each other here.
These quota limits are the default ones, if you want to increase the limit
you can request AWS to do so. Some quota limits in the VPC are strict
and cannot be increased.
Elastic Block Store
Amazon Elastic Block Store (EBS): Reliable Block Storage
With EBS, businesses can ensure the durability and availability of their data, which is vital for their day-to-day operations.
Another important capability of Amazon EBS is its support for Elastic
Volumes. Elastic Volumes allows businesses to adjust the size,
performance, and type of their EBS volumes without interrupting their
EC2 instances. This feature enables businesses to optimize their storage
resources and adapt to changing workload demands seamlessly.
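Resizing or retyping a volume with Elastic Volumes is a single call; the volume ID and target values here are illustrative.
# Grow a volume to 200 GiB and switch it to gp3 without detaching it
aws ec2 modify-volume \
  --volume-id vol-0123456789abcdef0 \
  --size 200 \
  --volume-type gp3
Note that the filesystem inside the instance still needs to be extended afterwards, for example with growpart and resize2fs or xfs_growfs.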
Additionally, Amazon EBS provides encryption at rest, ensuring that
your data is protected from unauthorized access. By leveraging AWS
Key Management Service (KMS), businesses can encrypt their EBS
volumes and manage encryption keys securely. This feature is
particularly crucial for businesses that handle sensitive or confidential
data.
Throughput Optimized HDD (st1) volumes are designed for frequently accessed, throughput-intensive workloads such as big data, log processing, and data warehouses. These volumes deliver high throughput and low-cost storage, making them cost-effective solutions for data-intensive applications.
Cold HDD (sc1) volumes are designed for infrequently accessed
workloads, such as backups and disaster recovery. These volumes offer
the lowest cost per gigabyte and are suitable for data that does not require
frequent access.
By understanding the characteristics and use cases of each volume type,
businesses can optimize their storage infrastructure and ensure optimal
performance and cost-efficiency.
Securing and optimizing Amazon EBS is crucial to ensure the integrity
and performance of your storage infrastructure. Here are some best
practices to consider:
Monitoring and managing the performance of your Amazon EBS
volumes is essential to ensure optimal storage performance and identify
any potential issues. Amazon CloudWatch provides a range of metrics
and alarms that can help you monitor the health and performance of your
EBS volumes.
Some key metrics to monitor include volume read/write operations,
volume latency, and volume throughput. By tracking these metrics, you
can identify any performance bottlenecks and take appropriate actions to
optimize your storage configuration.
In addition to monitoring, Amazon EBS provides features such as Elastic
Volumes and Enhanced Monitoring that allow you to proactively manage
and optimize your storage resources. Elastic Volumes enables you to
adjust the size and performance of your volumes without interrupting
your EC2 instances, providing flexibility and cost optimization.
Enhanced Monitoring provides additional insights into the performance
of your EBS volumes, allowing you to fine-tune your storage
configuration for optimal performance.
EBS snapshots also integrate with services such as Amazon S3 and AWS Backup for long-term retention and archiving. By integrating EBS with these services, businesses can create cost-effective and scalable backup solutions that meet their specific requirements.
It’s important to note that a comprehensive backup and disaster recovery
strategy should include regular testing and validation of the recovery
process. By periodically restoring snapshots and verifying data integrity,
businesses can ensure the effectiveness of their backup and recovery
procedures.
Case studies: Real-world examples of businesses benefiting from
Amazon EBS
Conclusion: Why Amazon EBS is the ideal solution for
secure and dependable block storage
In conclusion, Amazon Elastic Block Store (EBS) is a powerful and
versatile block storage solution that offers businesses the security,
reliability, and scalability they need to drive their success. With its range
of volume types, robust features, and seamless integration with other
AWS services, EBS provides businesses with the flexibility and control
they require for their data storage needs.
By understanding the benefits, features, and best practices associated
with Amazon EBS, businesses can leverage this solution to optimize
their storage infrastructure, enhance data security, and ensure high-
performance storage for their critical workloads.
So, if you’re looking for a secure and dependable block storage solution
that can drive your business success, look no further than Amazon EBS.
Take advantage of its capabilities, implement best practices, and unlock
the full potential of your data storage infrastructure.
Elastic File System
Mount Elastic File System (EFS) on EC2
Well, you’ve come to the right place! In this guide, we’ll go through the steps to create an Elastic File System, launch and configure two Amazon EC2 instances, mount the EFS on both instances by logging into each instance via SSH, and practice sharing files between the two instances.
Introduction
What’s Amazon Elastic File System (EFS) ?
Amazon EFS is a fully managed, elastic file system that multiple EC2 instances can access concurrently, providing a simple and scalable solution for applications that require shared access to files.
Ease of Use: EFS is easy to set up and manage, eliminating the need
for manual intervention in capacity planning or performance tuning.
Architecture Diagram
Task Steps
Step 1: Sign in to AWS Management Console
On the AWS sign-in page, enter your credentials to log in to your
AWS account and click on the Sign in button.
Once Signed In to the AWS Management Console, Make the default
AWS Region as US East (N. Virginia) us-east-1
3. Click on Instances from the left side bar and then click on Launch
instances.
7. For Instance Type: select t2.micro
10. In Network Settings Click on Edit:
11. Keep the rest of the settings at their defaults and click on the Launch instance button.
15. Take note of the IPv4 public IP addresses of the EC2 instances and save them for later.
4. Enter the details below, Type the Name as EFS-Demo and make
sure default VPC and default Regional options are selected.
5. Uncheck the option of Enable automated backups
6. Leave everything by default and click on the Next button present
below.
7. Network Access:
VPC:
An Amazon EFS file system is accessed by EC2 instances running
inside one of your VPCs.
Choose the same VPC you selected while launching the EC2
instance (leave as default).
Mount Targets:
Instances connect to a file system by using a network interface
called a mount target. Each mount target has an IP address,
which we assign automatically or you can specify.
We will select all the Availability Zones (AZ’s) so that the EC2
instances across your VPC can access the file system.
Select all the Availability Zones, and in the Security Groups,
select EFS Security Group instead of the default value.
Make sure you remove the default security group and select the EFS
Security Group, otherwise you will get an error in further steps.
Click on Next button.
click on Connect button.(Keep everything else as default)
A new tab will open in the browser where you can execute the CLI
Commands.
sudo -s
yum -y update
yum install -y amazon-efs-utils
mkdir efs
8. To do so, navigate to the AWS console and click on the created file
system. On the top-right corner, click on View details then click
on Attach
9. To display information for all currently mounted file systems, we’ll use the command below:
df -h
mkdir aws
Select the EC2 Instance Connect option and click on the Connect button. (Keep everything else as default.)
A new tab will open in the browser where you can execute the CLI
Commands.
3. Switch to root user
sudo -s
yum -y update
mkdir efs
Copy the command of Using the EFS mount helper into the CLI.
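The mount helper command copied from the console typically looks like the line below; the file system ID is a placeholder for your own.
sudo mount -t efs -o tls fs-0123456789abcdef0:/ efs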
df -h
sudo -s
3. Navigate to the efs directory in both the servers using the command
cd efs
touch hello.txt
ls -ltr
cd efs
7. You can see the file created on this server as well. This proves that
our EFS is working.
8. You can try creating files (touch command) or directories (mkdir
command) on other servers to continue to grow the EFS implementation.
Demystifying AWS Load Balancers: Understanding
Elastic, Application, Network, and Gateway Load
Balancers
In the realm of cloud computing, load balancing plays a crucial role in
distributing incoming traffic across multiple targets to ensure high
availability, fault tolerance, and scalability of applications. In the high-
traffic world of cloud applications, ensuring smooth operation and
optimal performance requires a skilled conductor — the load balancer.
AWS offers a robust suite of load balancers, each tailored to specific use cases and requirements. In this blog, we’ll delve
into the distinctions between AWS Elastic Load Balancer (ELB),
Application Load Balancer (ALB), Network Load Balancer (NLB), and
Gateway Load Balancer (GWLB), exploring their features, examples,
and dissimilarities. Additionally, we’ll shed light on the flow hash
algorithm used by AWS load balancers to route traffic efficiently.
The Balancing Act: What They Do
At their core, all these load balancers perform the same essential
function: distributing incoming traffic across a pool of resources,
ensuring no single server gets overwhelmed. This enhances application
availability and responsiveness for your users.
1. Elastic Load Balancer (ELB):
Description: AWS Elastic Load Balancer (ELB) is the original load
balancer service offered by AWS, providing basic traffic distribution
across multiple targets within a single AWS region. It is a simple and
cost-effective way to distribute traffic across multiple EC2 instances.
ELB supports both HTTP and TCP traffic.
Example: Distributing incoming traffic across multiple EC2
instances running web servers to ensure high availability and fault
tolerance for a web application.
Features:
Simple to configure and manage
Supports HTTP and TCP traffic
Can be used to distribute traffic across multiple EC2 instances
Offers a variety of features, including health checks, sticky sessions,
and SSL termination.
Use Cases:
Distributing traffic to web servers
Load balancing for TCP applications, such as databases and mail
servers
Providing SSL termination for web applications
2. Application Load Balancer (ALB):
Description: AWS Application Load Balancer (ALB) operates at the
application layer (Layer 7) of the OSI model and routes traffic based on
request content such as host names, URL paths, and headers, making it a
natural fit for web applications, microservices, and container-based workloads.
Features:
Supports HTTP/2, WebSockets, and container-based applications
Offers a variety of features, including health checks, sticky sessions,
and SSL termination
Can be used to distribute traffic across multiple EC2 instances,
containers, and Lambda functions
Use Cases:
Load balancing for web applications
Distributing traffic to microservices
Load balancing for container-based applications
3. Network Load Balancer (NLB):
Description: AWS Network Load Balancer (NLB) operates at the
transport layer (Layer 4) of the OSI model, offering ultra-low latency
and high throughput for TCP and UDP traffic. It prioritizes speed and
efficiency, making it ideal for high-volume, low-latency applications
such as gaming servers or chat platforms.
Example: Load balancing traffic for TCP-based services such as
databases, FTP servers, and gaming applications that require high
performance and minimal overhead. NLB is ideal for applications that
require low latency, such as gaming, financial trading, and video
streaming.
Features:
Very low latency and high throughput
Supports TCP and UDP traffic
Can be used to distribute traffic across multiple EC2 instances
Offers a variety of features, including health checks and sticky
sessions
Use Cases:
Load balancing for TCP applications, such as gaming, financial
trading, and video streaming
Distributing traffic to EC2 instances that are running in a VPC
4. Gateway Load Balancer (GLB):
Description: AWS Gateway Load Balancer (GLB) operates at the
network layer (Layer 3) and acts as a central gateway for deploying,
scaling, and managing third-party virtual appliances such as firewalls,
intrusion detection systems (IDS), and encryption appliances. It
balances traffic across these appliances while maintaining secure
communication through Gateway Load Balancer endpoints in your
VPCs, which makes it a good fit when traffic bound for private
resources, such as databases and internal APIs, must pass through
inline inspection appliances.
Example: Deploying a third-party firewall appliance to inspect and
filter traffic between VPCs or between on-premises networks and the
AWS cloud.
Features:
Load balances traffic across virtual appliances reachable through VPC endpoints
Passes IP traffic transparently at Layer 3, regardless of application protocol
Can be used to distribute traffic across multiple appliance instances
Offers a variety of features, including health checks and flow stickiness
Use Cases:
Load balancing for applications that require access to private
resources
Distributing traffic to endpoints in a private VPC
Similarities: A United Front
High Availability: All load balancers ensure that even if
individual instances fail, traffic seamlessly flows to healthy ones,
keeping your application up and running.
Dissimilarities:
Layer of Operation: ALB operates at Layer 7 (application layer),
allowing for content-based routing, while NLB operates at Layer 4
(transport layer), focusing on routing traffic based on IP addresses
and ports.
Use Cases: ALB is suited to HTTP/HTTPS workloads that benefit from
content-based routing, while NLB is preferred for TCP-based workloads
requiring high performance and minimal overhead.
Flow Hash Algorithm: AWS load balancers use a flow hash algorithm to
distribute incoming traffic across multiple targets while maintaining affinity
for stateful protocols. The algorithm calculates a hash value from specific
attributes of each incoming flow, such as the source IP address, destination IP
address, source port, destination port, and protocol, and that hash value
determines which target receives the traffic. Because every packet of a given
connection produces the same hash, traffic from the same client connection is
consistently sent to the same target. The result is an even, low-overhead
distribution of traffic that prevents any single instance from being overloaded.
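To make the idea concrete, here is a purely illustrative sketch (not AWS's actual implementation) of how hashing a flow's 5-tuple pins it to one of three targets; the tuple values and target count are made up:
FLOW="203.0.113.10,10.0.1.25,44321,443,tcp"   # src IP, dst IP, src port, dst port, protocol
HASH=$(echo -n "$FLOW" | md5sum | cut -c1-8)  # hash the 5-tuple and keep 8 hex digits
TARGET=$(( 16#$HASH % 3 ))                    # map the hash onto 3 registered targets
echo "Packets of this flow always go to target $TARGET"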
Examples
Example 1: Load balancing a web application
You can use an ALB to load balance traffic to a web application that is
running on multiple EC2 instances. The ALB will distribute traffic
evenly across the instances and, with sticky sessions enabled, can ensure
that requests from the same client are sent to the same instance.
Example 2: Load balancing a TCP application
You can use an NLB to load balance traffic to a TCP application that is
running on multiple EC2 instances. The NLB will provide very low
latency and high throughput, making it ideal for applications that require
low latency, such as gaming, financial trading, and video streaming.
Example 3: Load balancing traffic to a private VPC
You can use a GLB to load balance traffic to endpoints in a private VPC,
such as databases and internal APIs.
Choosing the Right Load Balancer: It All Depends
Selecting the optimal load balancer hinges on your application’s unique
requirements:
ALB: Ideal for web applications requiring intelligent routing based
on application logic.
NLB: Perfect for high-performance applications that prioritize speed
and low latency.
GLB: The go-to choice for managing and scaling virtual appliances
within your network.
Conclusion: AWS offers a range of load balancing options, each
tailored to different use cases and requirements. By understanding the
distinctions between Elastic Load Balancer (ELB), Application Load
Balancer (ALB), Network Load Balancer (NLB), and Gateway Load
Balancer (GWLB), you can choose the right load balancing solution to
optimize the performance, availability, and scalability of your
applications in the AWS cloud. Additionally, the flow hash algorithm
employed by AWS load balancers ensures efficient traffic distribution
while maintaining session affinity, further enhancing the reliability and
performance of your application deployments.
When choosing a load balancer, it is important to consider the following
factors:
The type of traffic that you need to load balance
The latency and throughput requirements of your application
The features that you need
By considering these factors, you can choose the right load balancer for
your application and ensure that your traffic is distributed evenly and
efficiently.
If you are looking for a comprehensive guide on setting up an AWS
Application Load Balancer (ALB) with two EC2 instances, displaying
their IP addresses using a bash script, and demonstrating the load
balancer’s functionality, then you’re in the right place!
In this step-by-step guide, we will take you through the entire process,
starting with the basics and leading you through configuring the load
balancer, setting up the instances, and testing the load balancer’s
functionality.
By the end of this guide, you will clearly understand how to set up an
Application Load Balancer on AWS and use it to distribute traffic across
multiple instances.
Set Up EC2 Instances
Go to Advanced Settings and in user data add the following bash
script to display the IP address of the instance:
#!/bin/bash
# install httpd (Amazon Linux 2)
yum update -y
yum install -y httpd
systemctl start httpd
systemctl enable httpd
echo "<h1>Hello World from $(hostname -f)</h1>" > /var/www/html/index.html
Create instance
Repeat this process to launch another EC2 instance.
Once the instances are ready, copy their IP addresses and paste them into
your internet browser to test that they're working:
hitc-ec2-demo-01
hitc-ec2-demo-02
Go to the “Elastic IPs” section in the EC2 dashboard.
Allocate and associate an Elastic IP to each of your instances. This
ensures that your instances have static public IP addresses.
Set Up Load Balancer
Configure the load balancer name, such as hitc-alb-demo
Scheme should be set to Internet-Facing
IP Address Type to IPv4
Network Mapping — select first 3 AZs in your selected region
(e.g. us-east-1a, us-east-1b, us-east-1c)
Security group —Click on the link Create a new security group for
ALB with the following config
Refresh and add newly created hitc-sg-alb
In the next window we can check that the targets have been successfully
registered.
Go back to your ALB setup, refresh and add the newly created target
group
Wait a few moments until the Provisioning of the Load Balancer is
completed
Then go back to the Target Group that we previously created and check
that both registered targets are showing a Healthy status.
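If you prefer the command line, a quick health check could look like the following (the target group ARN below is a placeholder; use your own):
aws elbv2 describe-target-health \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/hitc-tg-demo/0123456789abcdef \
  --query 'TargetHealthDescriptions[*].TargetHealth.State' \
  --output text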
Congratulations! You have successfully set up an AWS Application
Load Balancer with two EC2 instances, displayed their IP addresses
using a bash script, and demonstrated the load balancer’s functionality.
Closing Thoughts
As you wrap up, it’s important to know how to master the AWS
Application Load Balancer (ALB) if you want to optimise your cloud
infrastructure for scalability, reliability, and performance. This tutorial
will teach you how to set up an ALB, configure EC2 instances, and use
its features to distribute traffic efficiently.
It’s important to regularly review and optimise your configurations to
adapt to changing demands and ensure peak performance as you
continue your journey with AWS and cloud computing. Experiment with
different load balancing strategies, monitor your resources, and stay
updated with best practices to stay ahead in this dynamic field.
AUTO SCALING
Achieve High Availability with AWS Auto Scaling
Use Case:
Your company runs a web server with a volatile level of traffic. It has to
ensure that the web servers are always available, and it currently keeps a
fixed number of instances so that, even at maximum CPU utilization, the web
server can keep up. The problem is that when traffic is low, the unused web
servers unnecessarily cost the company money. A fixed number of instances
is also a problem if, for some reason, a web server goes down and a new one
has to be manually spun up.
To solve this issue for our made-up ABC Company, we will create an
Auto Scaling Group with a policy to scale in or out depending on
demand with a minimum of 2 instances and a maximum of 5. One policy
will scale out if CPU Utilization goes over 80% and the other will scale
in if CPU Utilization goes under 40%.
Prerequisites
Multi Availability Zone VPC with public subnets.
A web server security group that allows inbound HTTP from
0.0.0.0/0 and SSH from your IP.
Launch Template
Navigate to EC2 Dashboard.
5. For Instance type select t2.micro.
6. Since we plan to SSH into an instance later, you will need to select a
key pair. You can use an existing Key pair or you can create a new one.
To create a new one click Create new key pair.
7. A Create Key Pair dialog box should pop up. Enter a Key pair
name. Select a File format and click Create key pair.
10. Scroll down to User data. Copy and paste the following into the User
data field:
#!/bin/bash
yum update -y
yum install -y httpd
systemctl start httpd
systemctl enable httpd
This code will run when each instance from the template boots up. It
updates packages and installs and starts an Apache web server.
11. Click Create launch template.
3. Enter a name for Auto Scaling group name and select our Launch
template from the drop down.
4. Review the information and click Next.
VPC: Select the VPC you’d like to launch the instances in. Be sure to
select the VPC in which your Security Group is associated
Subnets: Select all public subnets in your VPC in your web layer (if
you have a tiered architecture). In my custom VPC for ABC
Company, I have 3 public subnets and have chosen all 3 for high
availability.
7. Click Next.
8. On the Configure advanced options page select No load
balancer and keep the default Health checks. Then click Next.
9. On the Configure group size and scaling policies page:
Desired capacity: 2
Minimum capacity: 2
Maximum capacity: 5
This will ensure that we always have at least 2 instances running and up
to 5 if CPU Utilization gets too high.
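The same group could be created from the CLI; a rough sketch, assuming a hypothetical launch template name and subnet IDs, might look like:
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name abc-web-asg \
  --launch-template LaunchTemplateName=abc-web-template,Version='$Latest' \
  --min-size 2 --max-size 5 --desired-capacity 2 \
  --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222,subnet-cccc3333"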
10. For Scaling policies select None and click Next.
13. If you navigate to the Activity tab you will see that two instances
have been created to meet our minimum capacity.
CloudWatch Alarms
We will need to create two CloudWatch alarms that will trigger our
Scaling Policies if they go into an alarm state.
5. If you have multiple Auto Scaling groups, you can use search to
find the one we created by typing our Auto Scaling group name. Then
select the one for CPU Utilization.
9. click Next.
10. You can add a notification if you’d like but I’m going to
click Remove. One can always be added later.
11. Name our alarm and add a description then click Next.
12. Review then click Create alarm.
13. Repeat steps 1–12 for our scale in alarm with the following changes:
Step 8: Lower/Equal 40
Step 11: Alarm name: Simple-Scaling-AlarmLow-SubtractCapacity
Note: When you first create your alarms, the status will say Insufficient data.
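As a rough CLI equivalent of the scale-out alarm (assuming the hypothetical Auto Scaling group name used above and the scaling policy ARN created in the next section), the call might look like:
aws cloudwatch put-metric-alarm \
  --alarm-name Simple-Scaling-AlarmHigh-AddCapacity \
  --namespace AWS/EC2 --metric-name CPUUtilization --statistic Average \
  --period 300 --evaluation-periods 1 \
  --threshold 80 --comparison-operator GreaterThanOrEqualToThreshold \
  --dimensions Name=AutoScalingGroupName,Value=abc-web-asg \
  --alarm-actions <scale-out-policy-arn>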
Add Scaling Policy
Navigate to Auto Scaling Groups and click our newly created auto
scaling group.
3. For Policy type select Simple scaling. For CloudWatch alarm select
the AddCapacity alarm and for Take the action select Add 1 capacity
units.
5. You should now see two policies added to the Auto Scaling Group.
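The equivalent CLI calls for the two simple scaling policies could look roughly like this (group and policy names are the hypothetical ones used above):
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name abc-web-asg \
  --policy-name AddCapacity \
  --adjustment-type ChangeInCapacity \
  --scaling-adjustment 1

aws autoscaling put-scaling-policy \
  --auto-scaling-group-name abc-web-asg \
  --policy-name SubtractCapacity \
  --adjustment-type ChangeInCapacity \
  --scaling-adjustment -1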
Testing
Let’s first test to make sure our apache web server is running.
Navigate to the EC2 Dashboard.
Click Instances.
Select one of our instances.
In the details tab, in the Instance summary section, copy the Public
IPv4 DNS address and paste it into a new tab.
Success!
DO NOT, I repeat, DO NOT click the open address link. Opening the link
in a new tab seems like what you would want to do on the surface, but
please learn from my mistake: that link opens the IPv4 DNS over https,
and since we have not set up https, we will get an error. I spent an hour
troubleshooting what the possible issue could be before realizing my
mistake.
Let’s test to see if our Auto Scaling Group works if an instance was to
fail and go below our minimum capacity.
Navigate to the EC2 Dashboard.
Click Instances.
Select one of our instances. Click Instance state and click Terminate
instance.
5. You can navigate to the IPv4 DNS for the new instance to verify the
apache web server is working.
Make sure to run the CPU load commands on both instances at around the
same time; it will do no good to run the commands on one instance, wait 10
minutes, and then run the command on the other instance (a sample command
is sketched below).
Note: It took some time before both instances were maxed out. You may
want to play with upping the CPU number if needed.
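One common way to max out CPU on Amazon Linux 2, assuming the EPEL repository is available, is the stress utility:
sudo amazon-linux-extras install epel -y   # enable the EPEL repository
sudo yum install -y stress                 # install the stress utility
stress --cpu 2 --timeout 600               # keep 2 vCPUs busy for 10 minutes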
3. Once our alarm status changes to In alarm, we should see our Auto
Scaling Group launch a new instance.
5. Navigate to Auto Scaling groups, select our group and review
the Activity history.
6. You should also now see three instances when you navigate
to Instances.
7. You can either wait until our commands stop running or you can
cancel in the terminal with Ctrl + C.
8. Once CPU Utilization goes below 40%, we should see a scale in
action triggered by our other alarm. Navigate back to Auto Scaling
groups.
9. Select the Activity tab and note that the Auto Scaling group
terminated an instance.
AWS Web Application Firewall
There are many security threats that exist today in a typical enterprise
distributed application.
DDoS: Flood Attacks (SYN Floods, UDP Floods, ICMP Floods,
HTTP Floods, DNS Query Floods), Reflection Attacks
Web ACL
When WAF is associated with any of these AWS services (CloudFront, an
Application Load Balancer, or API Gateway), the association happens through
a Web ACL. A Web ACL is a fundamental component of WAF, which defines
a set of rules for any of these services (See Figure 2).
In order to demonstrate the WAF capability, it is always good to go
through a simple scenario that can showcase its capability. Here, I am
going to block a CloudFront distribution, which I created some time ago.
So, if you are trying this out, please make sure one of these services
(CloudFront, API Gateway, or ALB) is already created before you begin.
Figure 4
Give a name to the Web ACL and associate a Resource Type with it. Here we
associate a CloudFront distribution (See Figure 5), which was created
earlier. You can attach this not only to CloudFront but to an
ALB and API Gateway as well.
Figure 5
Figure 6
Click the Next button and you will get another page to add your rules to
the Web ACL. We will skip this for the moment and do it at a later stage.
Select Allow for Web ACL Action as well.
Leave Set Rule Priority as it is and click Next.
Leave Configure Metrics and click Next.
Finally review your selections and click Create Web ACL button.
The above will create a Web ACL without any rules. You can go back to the Web
ACL link and you will see the screen below. Make sure not to select a region;
select Global (CloudFront) in the top drop-down to see your created Web ACL
(See Figure 7).
Figure 7
Figure 8
Task 3: Add a Rule to the created condition
In order to create a rule, you need to create a Rule Group.
Go to AWS WAF → Rule Group → Click Create Rule Groups button
(See Figure 9)
Figure 9
Click Next → Click Add Rule button → Set the following parameters to
create a Rule
Rule Name → MyRule
If a Request → Select Matches the statement
Statement (Inspect)→ Select Originates from an IP Address In
Statement (IP Set) → Select the IP Set that you created in Task 2
Action → Select Block
Click Next
Select the Rule Priority. This is not required here since you have only
one rule.
Finally review your selections and click Create Rule Group to confirm
your rule settings.
Task 4: Add the created Rule Group / Rule to the Web ACL
Go to AWS WAF → Web ACL → Select the Web ACL that you have
created → Click Rules tab (See Figure 10).
Figure 10
You can see the Web ACL still does not have its rules attached.
Click Add Rules button drop down → Select Add my own rules and rule
groups
Figure 11
Give a name for the rule that you are specifying here (See Figure 11).
[P.Note: I strongly feel the new WAF UI has some issues related to its
fields. This is a good example of having to define the Rule name twice:
once under the Rule Group and once under the Web ACL rule
attachments.]
Select the Rules Group that you created from the drop down and
click Add rule button and then click Save.
Now you can see the added rule is attached to the Web ACL.
Now it is time to browse the web URL that you have blocked for your
IP. If all is fine, it will look similar to the screen below (See Figure 12).
Figure 12
If you want to remove the blocking, you can go to the Web ACL and
delete the related Rule and try the web link again. After a few refresh
attempts, you will get your site back.
Amazon Web Application Firewall (AWS WAF): Web
Security for AWS Users
How Amazon Web Application Firewall (WAF) Works
The working of WAF in AWS is described below.
Now let's get started with WAF and create a web ACL in a few steps.
Step 2: In the next step you need to create the IP set used to deny access
to the application. Click on IP sets, select Create IP set, add the list of
IPs that should be blocked, and click on Create IP set. The IPs added to
this list will not be able to access the application over the Internet.
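For reference, an IP set can also be created from the CLI; a sketch with a placeholder name and CIDR range (use scope CLOUDFRONT instead if you are protecting a CloudFront distribution) might be:
aws wafv2 create-ip-set \
  --name MyBlockedIPs \
  --scope REGIONAL \
  --ip-address-version IPV4 \
  --addresses 203.0.113.0/24 \
  --region us-east-1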
Step 3: Create web ACL: Open a new tab of the browser then go to
AWS Console and search for Web Application Firewall. You will land
on the WAF home page, and choose to Create Web ACL.
Step 4: Give a Name: Type the name you want to use to identify this
web ACL. After that, enter Description if you want (optional), add the
AWS resources (Application Load Balancer, Amazon API Gateway
REST API, Amazon App Runner service, AWS AppSync GraphQL
API, Amazon Cognito user pool, AWS Verified Access), and then
hit Next.
Step 5: Add your own rules and rule groups: In the next step, you need
to add rules and rule groups. Click on Add my own rules and rule
groups. You will land on a new page; for Rule type select IP set,
choose the IP set created in Step 2, and click on the Add rule
option shown in the snapshot below.
Step 6: Once the rule is created, select the rule and click on Next.
Step 7: Configure CloudWatch metrics.
Step 8: Review the web ACL configuration: In the final step, check all
the rules and hit Create Web ACL.
Finally, a message will pop up: You successfully created web ACL: ACL-name.
Then test access to the application over the internet. The IPs added to the
blocked IP set will get 403 Forbidden, and all other users will be able to access
the application.
Estimating Costs
The cost of AWS WAF can vary depending on the scale of your
deployment, ranging from a few dollars per month for small
deployments to several thousand dollars per month for large-scale
deployments. AWS WAF pricing is based on the number of web
requests processed and the number of security rules that are used.
Example of cost for our example (1 Web ACL with a few managed
rules):
$5.00 per web ACL per month (prorated hourly) * 1 web ACL = $5.00
$1.00 per rule per month (prorated hourly) * 1 rule = $1.00
$0.60 per million requests processed * 1 (we will assume 1 million
request) = $0.60
$0.10 per alarm metric * 1 alarm = $0.10
Total: $6.70 per month
Conclusion
AWS Web Application Firewall provides a managed solution to protect
your web applications and APIs against common exploits and
vulnerabilities. By leveraging WAF’s advanced rulesets and integration
with services like Application Load Balancer, you can effectively filter
malicious web traffic while allowing legitimate users access. With
customizable rules, real-time metrics, and easy association with AWS
resources, WAF is a robust web application firewall to secure your
workloads in the cloud. Carefully monitor your WAF to fine-tune rules
and maximize threat protection. Using AWS WAF can improve your
overall security posture in the cloud.
Image taken from amazon.com
Que 3 - How does AWS WAF work with Amazon CloudFront?
Ans- AWS WAF gives a developer the ability to customize security
rules to allow, block or monitor Web requests. Amazon
CloudFront (AWS’ content delivery network) receives a request from
an end user and forwards that request to AWS WAF for inspection.
AWS WAF then responds to either block or allow the request. A
developer can also use AWS WAF’s integration with CloudFront to
apply protection to sites that are hosted outside of AWS.
Que 4 -Can I use AWS WAF to protect web sites not hosted
in AWS?
Ans- Yes, AWS WAF is integrated with Amazon CloudFront,
which supports custom origins outside of AWS.
Managed Rules are a set of pre-configured WAF rules. You can add
Managed Rules to your existing AWS WAF
web ACL to which you might have already added your own rules.
The number of rules inside a Managed Rule does not count towards
your limit. However, each Managed Rule added to your web ACL will
count as 1 rule.
If a registered instance becomes unhealthy, ELB stops routing traffic to it and resumes only when the instance is healthy again.
ELB monitors the health of its registered targets and ensures that the
traffic is routed only to healthy instances.
ELB’s are configured to accept incoming traffic by specifying one or
more listeners.
use Gateway Load Balancer.
The DNS name of an internal load balancer is publicly resolvable to the private IP addresses of the nodes.
Due to the point above, internal load balancers can only route requests
from clients with access to the VPC for the load balancer.
Note: Both internet-facing and internal load balancers route requests to
your targets using Private IP addresses.
Implement
Task:
Sign in to AWS Management Console. Launch First EC2 Instance
(MyEC2Server1).
Launch Second EC2 Instance (MyEC2Server2).
Create a Target Group (MyWAFTargetGroup)
Create an Application Load Balancer (MyWAFLoadBalancer).
Test Load Balancer DNS.
Create AWS WAF Web ACL (MyWAFWebAcl).
Test Load Balancer DNS.
Solution:
Task 1: Sign in to AWS Management Console and launch
First EC2 Instance
In this task, we are going to launch the first EC2 instance
(MyEC2Server1) by providing the required configurations like name,
AMI selection, security group , instance type and other settings.
Furthermore, we will provide the user data as well.
1) Go to the Services menu in the top left, then click on EC2 in the Compute
section. Navigate to Instances from the left side menu and click
on the Launch Instances button.
2) Enter/select the required details:
√ Name : Enter MyEC2Server1
√ Amazon Machine Image (AMI) : select Amazon Linux 2 AMI
√ Instance Type : Select t2.micro
√ Under the Key Pair (login) section : Click on the Create new key pair
hyperlink
Key pair name: MyWebserverKey
Key pair type: RSA
Private key file format: .pem or .ppk
Click on Create key pair and then select the created key pair from the
drop-down.
Similarly, add rules for HTTP and HTTPS by clicking on Add security
group rule.
#!/bin/bash
sudo su
yum update -y
yum install httpd -y
systemctl start httpd
systemctl enable httpd
echo "<html><h1> Welcome to My Server 1 </h1><html>" >>
/var/www/html/index.html
√ Under the Network Settings section : Click on Edit button
Auto-assign public IP: select Enable
Firewall (security groups) : Select existing security
group MyWebserverSG
√ Under the Advanced details section :
Under the User data: copy and paste the following script to create an
HTML page served by Apache httpd web server:
#!/bin/bash
sudo su
yum update -y
yum install httpd -y
systemctl start httpd
systemctl enable httpd
echo "<html><h1> Welcome to Whizlabs Server 2 </h1><html>" >>
/var/www/html/index.html
Task 3: Create a Target Group (MyWAFTargetGroup)
3. Enter basic configuration:
√ Choose a target type : Select Instances
√ Leave everything as default and click on Next button.
√ Register targets:
Select the two instances we have created
i.e. MyEC2Server1 and MyEC2Server2.
Click on Include as pending below and scroll down.
Review targets and click on Create target group button.
Task 4: Create an Application Load Balancer
(MyWAFLoadBalancer)
In this task, we are going to create an Application Load balancer by
providing the required configurations like name, target group etc.
1) In the EC2 console, navigate to Load Balancers in the left-side panel
under Load Balancing. Click on Create Load Balancer at the top-left to
create a new load balancer for our web servers.
Network mapping:
VPC : Select Default
Mappings : Check All Availability Zones
Security groups: Select an existing security group
i.e. MyWebserverSG from the drop-down menu.
You have successfully created Application Load Balancer.
Task 5: Test Load Balancer DNS
In this task, we will test the working of load balancer by copying the
DNS to the browser and find out whether it is able to distribute the
traffic or not.
1. Now navigate to the Target Groups from the left side menu under
Load balancing.
2. Click on the MyWAFTargetGroup Target group name.
3. Now select the Targets tab and wait till both the targets become
Healthy (Important).
Now again navigate to Load Balancers from the left side menu under
Load balancing. Select the MyWAFLoadBalancer Load Balancer and
copy the DNS name under Description tab.
1. Copy the DNS name of the ELB and enter the address in the browser.
You should see index.html page content of Web Server 1 or Web Server
2
Now Refresh the page a few times. You will observe that the index pages change
each time you refresh.
Note: The ELB will equally divide the incoming traffic to both servers in a Round
Robin manner.
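You can also watch the round-robin behaviour from a terminal; a quick loop like the following (using your own load balancer's DNS name in place of the placeholder) should alternate between the two index pages:
for i in $(seq 1 6); do
  curl -s http://<ELB DNS>/ | grep h1   # prints "Welcome to My Server 1" or "... Server 2"
done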
2. Test SQL Injection :
Along with the ELB DNS, add the following URL
parameter: /product?item=securitynumber'+OR+1=1--
Syntax : http://<ELB DNS>/product?item=securitynumber'+OR+1=1--
You will be able to see output similar to below.
Here the SQL injection reached the server, and since we only have an
index page, the server doesn't know how to resolve the URL; that is why
you got a Not Found page.
3. Test Query String Parameter :
Along with the ELB DNS add the following URL
parameter: /?admin=123456
Syntax : http://<ELB DNS>/?admin=123456
You will be able to see output similar to below.
Here, too, the query string reached the server. The server always passes the
query string to the application, where it is resolved by the code that you
write. Since there is no code to resolve this query string, it doesn't throw an
error; it simply becomes an unused value, so you still got a response back.
Task 6: Create AWS WAF Web ACL (MyWAFWebAcl)
In this task, we are going to create an AWS WAF Web ACL where we
will add some customized rules for location restriction, query strings, and SQL injection.
1. Navigate to WAF by clicking on the Services menu in the top, then
click on WAF & Shield in the Security, Identity &
Compliance section.
2. On the left side menu, select Web ACL’s and then click on Create
web ACL button.
3. Describe web ACL and associate it to AWS resources :
Name : Enter MyWAFWebAcl
Description : Enter WAF for SQL Injection, Geo location and Query
String parameters
CloudWatch metric name : Automatically selects the WAF name, so no
changes required.
Resource type : Select Regional resources
Region : Select current region from the dropdown.
Associated AWS resources : Click on the Add AWS resources button.
Resource type : Select Application Load Balancer
Select MyWAFLoadBalancer Load balancer from the list.
Now click on the Add button. Click on the Next button.
Add rules and rule groups : Here we will be adding three rules.
Rule 1
Under Rules, click on Add rules and then select Add my own rules
and rule groups.
Rule type : Select Rule builder
Name : Enter GeoLocationRestriction
Type : Select Regular type
If a request : Select Doesn’t match the statement (NOT)
Inspect : Select Originates from a country in
Country codes : Select <Your Country> In this example we select
India-IN
IP address to use to determine the country of origin : Select Source IP
address
Note : You can also select multiple countries.
Under Then : Action Select Block. Click on Add rule.
Here we are only allowing requests to come from India and all the
requests that come from other countries will be blocked.
Rule 2
Under Rules, click on Add rules and then select Add my own
rules and rule groups.
Rule type : Select Rule builder
Name : Enter QueryStringRestriction
Type : Select Regular type
If a request : Select matches the statement
Inspect : Select Query string
Match type : Select Contains string
String to match : Enter admin
Text transformation : Leave as default.
Under Then : Action Select Block.
Click on Add rules.
Any time the request URL contains a query string containing admin, WAF will
block that request.
Rule 3
Under Rules, click on Add rules and then select Add managed
rule groups.
It will take a few minutes to load the page. It lists all the rules which
are managed by AWS.
Click on AWS managed rule groups.
Scroll down to SQL database and enable the corresponding Add to
web ACL button.
Under Default web ACL action for requests that don’t match any rules,
Default action Select Allow. Click on the Next button.
Set rule priority:
No changes required, leave as default. Note : You can move the rules
based on your priority.
Click on the Next button.
Configure metrics:
Leave it as default. Click on the Next button.
Review and create web ACL :
Review the configuration done, scroll to the end and click on Create
web ACL button.
Web ACL created.
Now refresh the page a few times. You will observe that the index
pages change each time you refresh; thus, the ELB is working fine.
Note: The ELB will equally divide the incoming traffic to both servers in
a Round Robin manner.
2. Test SQL Injection
Along with the ELB DNS, add the following URL
parameter: /product?item=securitynumber'+OR+1=1--
Syntax : http://<ELB DNS>/product?item=securitynumber'+OR+1=1--
This time you will see the request blocked, unlike the Page Not Found error
you got before.
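A quick way to confirm the block from a terminal is to look at the HTTP status code; with the WAF rules in place, a request like the one below should return 403 instead of 404 (replace the placeholder with your load balancer's DNS name):
curl -s -o /dev/null -w "%{http_code}\n" \
  "http://<ELB DNS>/product?item=securitynumber'+OR+1=1--"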
Do you know?
WAF can offer protection against Distributed Denial of Service (DDoS)
attacks by analyzing traffic patterns, detecting abnormal behaviour, and
mitigating the impact of such attacks.
RDS (Relational Database Service)
We dive into the heart of relational databases on the cloud with Amazon
RDS. In this session, we’ll explore the fundamentals of RDS, its
benefits, and how to get started with launching and configuring RDS
instances. Additionally, we’ll walk through the process of connecting to
RDS instances from EC2 instances.
1. Multiple Database Engines: RDS supports popular engines such as MySQL, PostgreSQL, MariaDB, Oracle, and Microsoft SQL Server, letting users choose the engine that best suits their application requirements.
2. Automated Backups and Point-in-Time Recovery: RDS
automatically takes backups of your databases according to the
retention period you specify. It also enables point-in-time recovery,
allowing you to restore your database to any specific point within the
retention period.
3. High Availability and Replication: RDS provides high availability
features such as Multi-AZ (Availability Zone) deployments and Read
Replicas. Multi-AZ deployments replicate your database
synchronously across multiple Availability Zones to ensure data
durability and fault tolerance, while Read Replicas enable you to
offload read traffic from the primary database instance, improving
performance and scalability.
4. Security and Compliance: RDS offers several security features to
help you secure your databases, including network isolation using
Amazon VPC, encryption at rest using AWS KMS (Key Management
Service), and SSL encryption for data in transit. RDS also supports
database authentication mechanisms such as IAM database
authentication and traditional username/password authentication.
5. Scalability and Performance: With RDS, you can easily scale your
database instance vertically (by increasing instance size) or
horizontally (by adding Read Replicas). RDS also provides
performance monitoring metrics and tools to help you optimize
database performance.
1. Reduced Operational Overhead: Managed services handle routine
database administration tasks such as provisioning, patching, backups,
and monitoring, freeing up developers and DBAs to focus on application
development and business logic.
2. High Availability and Reliability: Managed services typically offer
built-in high availability features such as automated failover, data
replication, and backup/restore capabilities, ensuring that your
databases are highly available and reliable.
3. Scalability and Performance: Managed services make it easy to
scale your databases up or down based on demand. They often provide
tools and features for performance optimization and monitoring,
helping you maintain optimal database performance.
4. Security and Compliance: Managed database services offer robust
security features and compliance certifications to help you meet your
security and regulatory requirements. They handle security patching,
encryption, access control, and auditing, reducing the risk of security
breaches and data loss.
4. Specify Instance Details:
Storage Type and Allocated Storage: Select the storage type (e.g.,
General Purpose SSD, Provisioned IOPS SSD) and specify the
allocated storage space for your database.
Network & Security: Choose the Virtual Private Cloud (VPC) where
you want to launch your RDS instance. Configure the subnet group
and specify security groups to control inbound and outbound traffic.
Database Options: Set the database name, port, and parameter group
(optional).
Backup: Configure automated backups and specify the retention
period for backup storage.
Enable encryption at rest using AWS Key Management Service
(KMS) for enhanced data security.
Configuring Parameters:
Once you’ve launched your RDS instance, you may need to configure
additional parameters based on your application requirements. Here’s
how you can configure parameters for your RDS instance:
1. Parameter Groups: RDS parameter groups contain configuration
settings that govern the behavior of your database instance. You can
create custom parameter groups or use default parameter groups
provided by AWS.
2. Modify Parameters:
Navigate to the RDS dashboard and select your RDS instance.
In the “Configuration” tab, click on “Modify” to change parameter
settings.
4. Monitor Performance: Monitor the performance of your RDS
instance after applying parameter changes to ensure that your database is
operating optimally.
Prerequisites:
Both RDS and EC2 instances must be in the same VPC: Ensure
that your RDS instance and EC2 instance are deployed within the
same Virtual Private Cloud (VPC) to enable network communication
between them.
Security Group Configuration: Configure the security group
associated with your RDS instance to allow inbound traffic from the
security group associated with your EC2 instance on the appropriate
database port (e.g., 3306 for MySQL, 5432 for PostgreSQL).
Install the appropriate database client for your RDS database engine
(e.g., MySQL client for MySQL databases, PostgreSQL client for
PostgreSQL databases). You can install these clients using package
managers like apt (for Ubuntu) or yum (for Amazon Linux).
For PostgreSQL:
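With the PostgreSQL client installed, the connection command would typically look something like the following sketch (the endpoint, user, and database name are placeholders taken from your own RDS console):
psql --host=mydb-instance.abcdefghij.us-east-1.rds.amazonaws.com \
     --port=5432 --username=admin --dbname=mydb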
3. Provide Credentials:
When you run the command, you’ll be prompted to enter the password
for the specified user. Enter the password associated with the username
you provided.
4. Verification:
After successfully connecting, you’ll be presented with a database
prompt or console, indicating that you’re connected to your RDS
instance from your EC2 instance.
You can now execute SQL queries, perform database operations, and
interact with your RDS database as needed from your EC2 instance.
Below are examples covering the topics mentioned earlier: launching an
RDS instance, configuring parameters, and connecting to the RDS
instance from an EC2 instance using MySQL as the database engine.
Example: Launching an RDS Instance (Using AWS CLI)
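A minimal sketch of such a launch (assuming a MySQL engine, a hypothetical instance identifier, and placeholder credentials; adjust every value for your environment) might look like:
aws rds create-db-instance \
  --db-instance-identifier mydb-instance \
  --db-instance-class db.t3.micro \
  --engine mysql \
  --master-username admin \
  --master-user-password 'ChangeMe123!' \
  --allocated-storage 20 \
  --vpc-security-group-ids sg-0123456789abcdef0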
"ParameterName=max_connections,ParameterValue=200,ApplyMethod=immediate"
For Ubuntu/Debian:
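A typical client installation on Ubuntu or Debian (the usual package names; pick the client that matches your engine) might be:
sudo apt-get update
sudo apt-get install -y mysql-client        # MySQL engine
sudo apt-get install -y postgresql-client   # PostgreSQL engine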
For Amazon Linux:
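On Amazon Linux 2, the equivalent installation (package names can vary slightly by Amazon Linux version) might be:
sudo yum install -y mysql        # MySQL/MariaDB client
sudo yum install -y postgresql   # PostgreSQL client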
Conclusion:
In this session, we explored Amazon RDS, from launching instances to connecting to them from EC2.
Managed database services like RDS empower developers to focus on
building applications while offloading the heavy lifting of database
management to AWS.
Deploying multi-AZ RDS instances with Read
Capabilities
What is Multi AZ RDS?
When using AWS RDS, we can deploy instances in multiple Availability
Zones, which increases the availability of the instances in case of a disaster
and also helps make the architecture more reliable and fault tolerant.
Problem Statement:
Although the application is highly available because instances are deployed
across multiple AZs, the standby instances do not accept any read traffic.
Even though we have instances in different AZs, we can't perform any
read/write operations on them; all reads and writes go to the primary
instance, which synchronously replicates them to the standby instances. This
can create performance bottlenecks on the primary instance, since it accepts
the read and write operations and replicates the changes to the standbys in
the other Availability Zones. Also, to offload read operations from the writer
node, we would need to provision Read Replicas, which carry an additional
cost.
Solution:
AWS has introduced a new cluster option for the RDS service which
deploys the instances in Multi AZ fashion while also allowing the
instances to accept Read Only traffic. Thus, resolving the problem
statement as discussed earlier.
Note 1: This option is available only for Amazon RDS for MySQL
version 8.0.26 and PostgreSQL version 13.4 and in the US East (N.
Virginia), US West (Oregon), and EU (Ireland) Regions. This setup is
available only for R6gd or M6gd instance types.
Note 2: The SLA (Service Level Agreement) for this cluster setup does not
match the SLA for the RDS service provided by AWS; hence this setup is
ideal for Development and Test environments, but not for Production
environments that require the standard RDS SLA.
Deployment Steps:
Prerequisites:
A Valid AWS Account with permissions to create, Read and Write
access to RDS.
The VPC has at least one subnet in each of the three Availability
Zones in the Region where you want to deploy your DB cluster. This
configuration ensures that your DB cluster always has at least two
readable standby DB instances available for failover in the unlikely
event of an Availability Zone failure.
The setup incurs costs for the DB, so ensure you don't have
any billing-related issues on the account.
An EC2 instance to connect to the DB or a local machine.
Procedure:
DB Creation and Setup
1. Go to the AWS console, search for RDS and select Create Database
2. Select the Dev/Test Template and select Multi AZ DB Cluster-
preview option
Note: Only available in US East (N. Virginia), US West (Oregon), and
EU (Ireland) Regions
3. Select the checkbox to acknowledge the SLA for the DB cluster.
4. Enter the required parameters such as the DB name, DB username, and
password.
We have selected the m6gd instance type with 100 GB of allocated storage
and 1000 IOPS, as these are the minimum requirements and help keep costs down.
5. For the purpose of this Blog, we have enabled Public access to the
DB, but it is highly recommended that you do not enable public
access as it’s a more secure approach to protect the DB.
7. The DB uses port 5432 to communicate, so we can either use an existing
security group that has the port enabled or create a new Security Group.
9. We have disabled automated backups and encryption for the purpose
of this Blog, but it is highly recommended to enable these for more
security, reliability and disaster recovery.
10. Let the other settings be default, else we can change them according
to our preference and click create Database.
11. The DB takes a couple of minutes to create; once created, the results
will look similar to what is shown below.
12. Once created we can see cluster endpoint names for both the reader
node and writer node. We can select individual instances and view their
endpoints as well
13. Connect from your local machine or an EC2 instance that already has the
psql package installed. For the purpose of this blog, we have used an
Amazon Linux 2 AMI, which is free tier eligible, and installed the psql
package on it to connect to the PostgreSQL DB we created.
Create a Database, connect to the database and insert records into it
If we try to write records via the reader endpoint, we will get an error, as it
is a reader endpoint and not a writer endpoint; we can write records only
via the writer endpoint.
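As an illustration, the behaviour from psql might look like the sketch below (the cluster endpoints, database, user, and table are all placeholders; take the real endpoint names from the RDS console):
WRITER=mycluster.cluster-abcdefghij.us-east-1.rds.amazonaws.com
READER=mycluster.cluster-ro-abcdefghij.us-east-1.rds.amazonaws.com

# Inserts succeed against the writer endpoint
psql -h "$WRITER" -U postgres -d demo -c "INSERT INTO employees VALUES (1, 'Jay');"

# The same insert against the reader endpoint fails with a read-only transaction error
psql -h "$READER" -U postgres -d demo -c "INSERT INTO employees VALUES (2, 'Ann');"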
Let’s connect to Instance 2 using the reader endpoint and query the table.
4. Let’s connect to instance 1 which is the writer instance and write
records to it.
Rebooting Setup
In case the instances are rebooting, we can connect to the other instances
for read operations
Performing Failover:
Let’s perform a manual failover on the DB and check the results.
3. If we look carefully, after the failover Node 3 became the writer
instance while Node 1 and Node 2 became reader instances.
5. Now instance 2 became the writer node while instance 1 and instance
3 became reader nodes.
Deleting Cluster
Once done, we can delete the cluster and terminate the EC2 instances as
well in order to save costs.
Conclusion:
The Multi AZ RDS with Read capability provides the best of both
worlds, High Availability and Read capabilities. This also ensures we
don’t require separate Read Replica just to offset the read traffic which
can be taken care of by the standby nodes, thus ensuring cost saving,
greater reliability and availability of the Database Layer. Since the Nodes
are highly available, even in case of a failover or disaster, our DB
remains safe and completely operational. AWS is expected to roll out this
feature to other DB engines as well and hopefully the SLA would also be
amended to ensure this solution is ideal for the Production environments
as well, which would serve as a Boon for the database layer in various
applications and complex environments.
Amazon DynamoDB
DynamoDB automatically distributes data and traffic across multiple
servers, allowing the database to handle high volumes of requests and
support massive workloads. You can scale up or down based on
demand without any downtime.
If you are new to DynamoDB, an introductory DynamoDB article will help
you get a quick grasp of its use cases.
If you are stuck choosing a database, the article SQL vs NoSQL:
Choosing the Right Database Model for Your Business Needs will
help you get started on the right path.
Wallet Payments
Card Payments
Apart from this, whenever a transaction with an amount greater than $100 is
made, we need to trigger a notification after 3 days to the Payments Service,
which propagates it to the Rewards Service, which then issues a coupon to
the user.
I am listing out the components here. We will be going in depth into each
component after that. Please go through them sequentially, as you will need
the earlier context to understand the later ones.
1. Tables
2. Items
— Time To Live (TTL)
3. Attributes
4. Primary Key
—Partition Key (Hash Key)
—Sort Key (Range Key)
5. Secondary Indexes
— Local Secondary Index (LSI)
— Global Secondary Index (GSI)
6. Streams
7. DynamoDB Accelerator (DAX)
Let’s understand these components one by one with the help of the
example. I will be providing examples of these components to their
counterpart examples in SQL databases.
1. Tables
Tables are the fundamental storage units that hold data records. The
counterpart for a table in the SQL world is also a table.
According to our example, the table would be a Transactions
table that essentially records all the data related to transactions done
by the system, like transaction data, transaction status history data, etc.
2. Items
Item is a single data record in a table. Each item in a table can be
uniquely identified by the stated Primary Key (Simple Primary
Key or Composite Primary Key) of the table.
3. Attributes
Attributes are pieces of data attached to a single item. It can be of
different data types, such as string, number, Boolean, binary, or complex
types like lists, sets, or maps. Attributes are not required to be
predefined, and each item can have different attributes.
An attribute is comparable to a column in the SQL world.
In our example a few attributes can be the following:
4. Primary Key
Primary key is a unique identifier that is used to uniquely identify each
item (row) within a table. The primary key is essential for efficient data
retrieval and storage in DynamoDB. It consists of one or two attributes:
Partition Key (Hash Key):
The partition key is a mandatory attribute. DynamoDB hashes the partition
key's value to determine the physical partition where the item is stored. A
table with only a partition key has a simple primary key, and in that case the
partition key value must be unique for every item. In our example, the
Transactions table could use an identifier such as the user ID or transaction
ID as its partition key.
Sort Key (Range Key):
The sort key is an optional attribute that, when combined with the
partition key, creates a composite primary key. The sort key allows you
to further refine the ordering of items within a partition. It helps in
performing range queries and sorting items based on the sort key’s
value. The combination of the partition key and the sort key must be
unique within the table. Examples of sort keys include timestamps,
dates, or any attribute that provides a meaningful ordering of items
within a partition. In our case, the Transactions table can have a Sort
Key depicting the type of item it is. As an example, one of the item can
have a Sort Key with value TRANSACTION_HISTORY that
essentially stores the status updates of the transaction. Similarly,
something like TRANSACTION_REWARDS_NOTIFICATION can
cater to our requirement of sending a notification of a specific
transaction to the Rewards system.
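To make the key design concrete, a table along these lines could be created from the CLI as follows. This is only a sketch: the attribute names (user_id as the partition key, sk as the sort key, created_at for the index used later) and the index name are illustrative, not a prescribed schema.
aws dynamodb create-table \
  --table-name Transactions \
  --attribute-definitions \
      AttributeName=user_id,AttributeType=S \
      AttributeName=sk,AttributeType=S \
      AttributeName=created_at,AttributeType=S \
  --key-schema \
      AttributeName=user_id,KeyType=HASH \
      AttributeName=sk,KeyType=RANGE \
  --local-secondary-indexes \
      'IndexName=created_at-index,KeySchema=[{AttributeName=user_id,KeyType=HASH},{AttributeName=created_at,KeyType=RANGE}],Projection={ProjectionType=ALL}' \
  --billing-mode PAY_PER_REQUEST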
5. Secondary Index
The primary key uniquely identifies an item in a table, and you may
make queries against the table using the primary key. However,
sometimes you have additional access patterns that would be inefficient
with your primary key. DynamoDB has secondary indexes to enable
these additional access patterns. DynamoDB supports two kinds of
secondary index:
Local Secondary Index (LSI): An LSI uses the same partition key as the
base table but a different sort key.
Same Partition Key: The partition key for the LSI is the same as the
base table's partition key. This means that the LSI partitions the data
in the same way as the base table. There are a few caveats, however,
which are listed as follows:
Subset of Attributes: When creating an LSI, you can specify a
subset of attributes from the base table to include in the index. These
attributes become part of the index and can be projected into the
index’s key schema or as non-key attributes.
Query Flexibility: With an LSI, you can perform queries that utilize
the LSI’s partition key and sort key. This allows you to efficiently
retrieve a subset of data based on specific query requirements without
scanning the entire table.
Read Consistency: LSIs support both eventually consistent and strongly
consistent reads. With eventually consistent reads, the index may not
immediately reflect the updated data after a write operation.
Write Performance: When you modify data in a table with LSIs,
DynamoDB needs to update the base table as well as all the
corresponding LSIs. This can impact the write performance compared
to a table without any secondary indexes.
However, keep in mind that the number of LSIs you can create per table
is limited, and the provisioned throughput is shared between the base
table and all its LSIs.
For example, if there is a requirement to query all transactions for a
specific customer within a particular date range, you can utilize a
composite LSI on the type + created_at attributes to achieve that, as
sketched below.
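A query along those lines might look like the following sketch, which assumes the illustrative created_at-index from the earlier create-table example and placeholder attribute values:
aws dynamodb query \
  --table-name Transactions \
  --index-name created_at-index \
  --key-condition-expression "user_id = :u AND created_at BETWEEN :start AND :end" \
  --expression-attribute-values '{
      ":u":     {"S": "user-123"},
      ":start": {"S": "2024-01-01"},
      ":end":   {"S": "2024-01-31"}
  }'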
In this query, user_id represents the specific user you want to query, and
the sort key contains the range of dates you are interested in. By utilizing
the LSI, DynamoDB can efficiently retrieve the transactions that match
the specified user_id and fall within the given date range.
The LSI in this example helps you perform targeted queries on
transactions based on the customer ID and transaction date, without
having to scan the entire table. It provides an alternative access pattern
for retrieving transaction data and can improve query performance for
specific use cases.
Query Flexibility: With a GSI, you can perform queries based on the
GSI’s partition key and sort key. This allows you to efficiently retrieve a
subset of data based on specific query requirements without scanning the
entire table.
Eventual Consistency: GSIs support only eventually consistent reads, so after a write operation the index may not immediately reflect the updated data.
However, keep in mind that creating GSIs can consume additional
storage and provisioned throughput, so careful consideration is needed
to optimize the indexing strategy based on your application’s
requirements.
Let’s introduce the field called payment_method which we mentioned
earlier. To provide an alternative access pattern based on payment
methods, you can create a GSI on the transaction table with the
following attributes:
6. Streams
Capture Changes:
DynamoDB Streams captures changes happening in real-time as
modifications are made to the table. It provides a durable and reliable
way to track and react to changes in your data. This use-case is often
referred to as CDC or Change Data Capture.
Data Synchronization:
Streams can be used for replicating data across multiple DynamoDB
tables or databases. By consuming the stream and applying the
changes to other destinations, you can keep different data stores
synchronized in near real-time.
Cross-Region Replication:
Streams can be utilized to replicate data from one DynamoDB table
to another in a different AWS region. This helps in creating disaster
recovery setups or distributing read traffic across regions.
To consume DynamoDB Streams, you can use AWS Lambda, which
allows you to write code that runs in response to stream events. You can
also use AWS services like Amazon Kinesis Data Streams or custom
applications to process and react to the stream events.
Remember our use case: whenever a transaction with an amount greater than
$100 is made, we need to trigger a notification after 3 days to the Payments
Service, which propagates it to the Rewards Service, which then issues a
coupon to the user. DDB Streams along with TTL can help us achieve this.
DynamoDB Accelerator (DAX) is a fully managed, in-memory caching service from Amazon Web Services (AWS) specifically designed for DynamoDB.
DAX improves the performance of DynamoDB by caching frequently
accessed data in memory, reducing the need to access the underlying
DynamoDB tables for every request. It provides low-latency read access
to the cached data, resulting in faster response times and reduced
database load.
Here are some key features and benefits of DAX:
In-Memory Caching:
DAX caches frequently accessed data from DynamoDB tables in
memory. This eliminates the need for repeated reads from the
database, resulting in reduced latency and improved response times
for read-intensive workloads.
Seamless Integration:
DAX is fully compatible with DynamoDB and integrates seamlessly
with existing DynamoDB applications. You can simply point your
application to the DAX endpoint, and it will automatically route read
requests to the DAX cluster.
Cost Optimization:
By reducing the number of read operations on the underlying
DynamoDB tables, DAX can help lower the cost of running read-
intensive applications by reducing provisioned throughput and
minimizing the number of DynamoDB read capacity units required.
DAX is particularly useful for applications with high read traffic, such as
Payment Recon systems, real-time analytics, gaming leader-boards,
session stores etc. It improves the overall performance and efficiency of
DynamoDB, providing a seamless caching layer that enhances the speed
and scalability of your applications without sacrificing data consistency.
Features of DynamoDB:
NoSQL database: DynamoDB has a flexible schema, which allows you to
have many different attributes for one item. We can easily adapt to business
requirement changes without having to redefine the table schema. It also
supports key-value and document data models.
Managed and Serverless: DynamoDB is fully managed and serverless, so
there are no servers to provision, patch, or operate yourself.
DynamoDB Streams:
It captures data modification events (create, update, or delete of items in
a table) in near real time. Each record has a unique sequence number,
which is used for ordering.
Secondary Indexes:
DynamoDB provides fast access to items in a table by specifying
primary key values. However, many applications might benefit from
having one or more secondary (or alternate) keys available, to allow
efficient access to data with attributes other than the primary key.
DynamoDB provides 2 kinds of Indexes: Global Secondary Index and
Local Secondary Index.
On-Demand Capacity Mode:
Allows user to scale seamlessly without capacity planning. It ensures
optimal performance and cost efficiency for fluctuating workloads.
Point-in-time Recovery:
Enables users to restore their data to any second within a 35-day
retention period, protecting against accidental data loss. It provides
peace of mind by allowing effortless recovery from user errors or
malicious actions, ensuring data integrity and availability.
Encryption at Rest:
By default, DynamoDB encrypts all data at rest using AWS KMS (Key
Management Service), providing an additional layer of protection
against unauthorized access.
Give the table name (AWS Learners), partition key (LearnerName) and
sort key (Certifications)
For Table settings, select Customize settings (to enable auto scaling for
our table). DynamoDB auto scaling will change the read and write
capacity of your table based on request volume. The rest of the settings
should remain as they are; just add the JSON policy.
263
Add data to the NoSQL table
Click on Create Item
264
Enter the values for LearnerName and Certifications
265
Deleting an existing item
Here I am deleting the item named Jay
266
Delete a NoSQL Table:
Deleting the entire AWSLearners table.
267
Type confirm to delete the entire table.
So, we created our first DynamoDB table, added items to it, and then
queried the table to find the items we wanted. We also learned how to
visually manage DynamoDB tables and items through the AWS Management
Console.
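The same lifecycle can be reproduced with the AWS CLI. This is a hedged
sketch that assumes the table is named AWSLearners with the keys used
above; the sample item values are my own placeholders:

# Create the table with LearnerName as the partition key and Certifications as the sort key
aws dynamodb create-table --table-name AWSLearners --attribute-definitions AttributeName=LearnerName,AttributeType=S AttributeName=Certifications,AttributeType=S --key-schema AttributeName=LearnerName,KeyType=HASH AttributeName=Certifications,KeyType=RANGE --billing-mode PAY_PER_REQUEST

# Add an item
aws dynamodb put-item --table-name AWSLearners --item '{"LearnerName":{"S":"Jay"},"Certifications":{"S":"Solutions Architect"}}'

# Query items for a learner
aws dynamodb query --table-name AWSLearners --key-condition-expression "LearnerName = :n" --expression-attribute-values '{":n":{"S":"Jay"}}'

# Delete the item and then the table
aws dynamodb delete-item --table-name AWSLearners --key '{"LearnerName":{"S":"Jay"},"Certifications":{"S":"Solutions Architect"}}'
aws dynamodb delete-table --table-name AWSLearners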
268
SIMPLE STORAGE SERVICE
269
S3 concepts:
Buckets are containers for objects stored in Amazon S3. Every object
is contained in a bucket. Think of it as a folder for organizing files.
Keys are the unique identifiers for an object within a bucket. Every
object in a bucket has exactly one key that you can use later for object
retrieval.
If you enable versioning on the bucket, S3 doesn't overwrite an existing
object when the key already exists. Instead, it keeps every version of the
object, each with its own version ID. The combination of a bucket, key, and
version ID uniquely identifies each object. When you create an S3 bucket,
you have to select the region it belongs to. When you go to the S3 console,
you see all of your buckets on one screen; this illustrates that an S3
bucket is a regional resource but the S3 namespace is global. The best
practice is to select the region that is physically closest to you, to
reduce transfer latency. You can create folders in an S3 bucket to organize
your data. Data engineers use S3 folders (key prefixes) to partition data
by date in an S3 data lake.
270
S3 Permissions
S3 permissions allow you to have granular control over who can view,
access, and use specific buckets and objects. Permissions functionality
can be found on the bucket and object levels. There are 3 types of
permissions in S3:
Storage Classes
A storage class represents the classification assigned to each object. Each
storage class has varying attributes that dictate:
Storage cost
Frequency of access
271
Storage classes are:
S3 Standard for frequently accessed data. The default option, and the most
expensive.
S3 Standard-IA (Infrequent Access): the storage cost is cheaper, but you
are charged more when retrieving data.
S3 Glacier Instant Retrieval for archive data that needs immediate
access. Decent pricing for object storage.
I'll be regularly using a work file for the next 30 days, so please keep it
in the Standard class during this time. After this initial period, I'll
only need to access the file once a week for the subsequent 60 days.
Therefore, after 30 days, please transfer the file from the Standard class
to the Infrequent Access class.
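This kind of rule can be expressed as an S3 lifecycle configuration. A
minimal sketch, assuming a placeholder bucket name of my-work-bucket:

# Transition all objects to S3 Standard-IA 30 days after creation
aws s3api put-bucket-lifecycle-configuration --bucket my-work-bucket --lifecycle-configuration '{"Rules":[{"ID":"MoveToIA","Status":"Enabled","Filter":{"Prefix":""},"Transitions":[{"Days":30,"StorageClass":"STANDARD_IA"}]}]}'

You could add further transitions (for example to Glacier Instant
Retrieval) or an expiration action in the same rule.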
273
To optimize your website’s performance globally and bolster security,
consider leveraging the CDN (Content Delivery Network) service,
CloudFront, on top of the website hosted on S3. It’s advisable to
implement Origin Access Identity (OAI) as a best practice, as this
secures the website assets stored in Amazon S3. With OAI, direct public
access to the bucket is restricted, ensuring that the public can only access
the website through CloudFront. Additionally, CloudFront aids in cost
reduction by minimizing the number of requests to S3. It’s important to
note that every individual request in S3 incurs a charge, highlighting the
general cost implications associated with cloud-based services.
274
Clients access a static website deployed through the CDN (CloudFront).
CloudFront is a versatile and widely utilized service. Here are the key
aspects:
You can run business logic code and small functions with
Lambda@Edge. For example, inspect the headers of requests and pass
the requests downstream only if a valid token is present.
Presigned URL
The S3 presigned URL stands out as a crucial feature, widely utilized in
practical applications. It allows clients to download or upload an object
to a bucket with a temporary URL. For instance, consider a book-
selling application where customers need immediate access to their
purchased books. In such cases, leveraging S3-presigned URLs allows
you to generate temporary links, granting customers access to download
their books promptly after purchase.
Another practical application of S3-presigned URLs arises when
uploading large files to an S3 bucket through the API Gateway.
Typically, API Gateways serve as the entry point for RESTful endpoints.
However, API Gateway has a limitation of handling requests up to 10
MB. I encountered this issue while dealing with an endpoint for blobs.
The endpoint functioned properly for certain files but failed for others,
leading to extensive debugging efforts. Initially, I couldn’t pinpoint
whether the problem lay with the API Gateway or S3 configuration, as
everything appeared correct. Eventually, I discovered the 10 MB request
limit of API Gateway, which was causing the failure. To address this, I
implemented a solution using pre-signed URLs. Instead of directly
storing files from the API Gateway to S3, I introduced an additional step.
Initially, the request goes to the API Gateway, where authorization for
file storage is verified. If authorized, the API Gateway returns a
presigned URL. Subsequently, the client application makes another call
to store the file using the provided URL.
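For a quick taste of how presigned URLs look, the CLI can generate a
temporary download link (the bucket and key below are placeholders; upload
URLs are typically generated with an SDK call rather than this command):

# Generate a GET URL that expires in one hour
aws s3 presign s3://my-bucket/books/my-book.pdf --expires-in 3600

The returned URL embeds a signature and an expiry, so anyone holding it can
download that one object until it expires.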
276
S3 Gateway endpoint
All resources within the AWS cloud are integrated via the AWS Global
Network, as they operate within the same AWS network and
infrastructure. These endpoints facilitate communication between
instances within a Virtual Private Cloud (VPC) and various AWS
services, enabling efficient interaction across different accounts within
the AWS ecosystem. AWS resources in your account can be connected
to the resources in my AWS account through VPC endpoints.
When I was working on a multi-account initiative, we frequently
encountered the need to link resources across various accounts. VPC
endpoints come in handy for this purpose. When I required connectivity
between resources in different accounts, I simply submitted a ticket to
our DevOps team requesting the creation of a VPC endpoint. This
streamlined the process, enabling seamless connectivity to resources in
different AWS accounts. Although VPC endpoints offer practical
solutions, it’s essential to note that they incur additional charges.
Therefore, it’s advisable to decommission VPC endpoints once their use
is no longer required, as a best practice.
S3 is a publicly accessible service available for anyone to use. To
connect to S3, you need a valid token and an internet connection.
However, in certain scenarios, EC2 servers may not have internet
connectivity due to stringent security reasons. Nonetheless, these servers
can still establish connections with S3 using a VPC endpoint, specifically
the S3 Gateway endpoint. You can configure which resources can access
the S3 bucket through the VPC endpoint by writing an S3 bucket policy.
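As a hedged sketch, a Gateway endpoint for S3 can be created with the CLI
(the VPC and route table IDs are placeholders):

# Create an S3 Gateway endpoint and attach it to a route table in the VPC
aws ec2 create-vpc-endpoint --vpc-id vpc-0123456789abcdef0 --vpc-endpoint-type Gateway --service-name com.amazonaws.us-east-1.s3 --route-table-ids rtb-0123456789abcdef0

In the bucket policy, a condition on aws:SourceVpce can then restrict
access to traffic arriving through that endpoint.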
277
S3 Event notification
You can use the Amazon S3 Event Notifications feature to receive
notifications when certain events happen in your S3 buckets such as a
new object being created or an object being removed. It is a way to
achieve a modern event-driven architecture that emphasizes asynchrony
and loose coupling, which helps achieve better responsiveness.
When users upload profile pictures, synchronously generating a
thumbnail could take seconds. However, employing an Event-Driven
approach with S3 Event Notifications could significantly reduce latency
from seconds to milliseconds. Here’s how it works: After the user
uploads the profile picture, it’s stored in S3. Subsequently, S3 event
notifications initiate a new object-created event, triggering a Lambda
function. This Lambda function asynchronously generates the
thumbnails, optimizing the process for faster response times.
S3 Event Notifications can trigger SNS, SQS, and Lambda. When setting up
the event notification, make sure you have the right resource-based policy
set on the destination SNS topic, SQS queue, or Lambda function.
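A rough sketch of wiring the thumbnail example with the CLI (the bucket
name, Lambda ARN, and account ID are placeholders, and the Lambda function
must already grant S3 invoke permission via its resource-based policy):

# Invoke a thumbnail-generating Lambda whenever a new object is created
aws s3api put-bucket-notification-configuration --bucket my-profile-pictures --notification-configuration '{"LambdaFunctionConfigurations":[{"LambdaFunctionArn":"arn:aws:lambda:us-east-1:111122223333:function:GenerateThumbnail","Events":["s3:ObjectCreated:*"]}]}'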
278
S3 Global Replication
A seasoned software architect shared with me a strategy for architecting
global applications, S3 Global Replication. This feature boasts
impressive capabilities, allowing data replication between regions in a
matter of minutes. The replication enables automatic, asynchronous
copying of objects across Amazon S3 buckets between different accounts
and regions. An object may be replicated to a single destination bucket or
multiple destination buckets.
279
Meet compliance requirements — Although Amazon S3 stores your
data across multiple geographically distant Availability Zones by
default, compliance requirements might dictate that you store data at
even greater distances (regions).
Minimize latency — If your customers are in two geographic
locations, you can minimize latency in accessing objects by
maintaining object copies in AWS Regions that are geographically
closer to your users.
Object Lock
Store objects using a write-once-read-many (WORM) model to help you
prevent objects from being deleted or overwritten for a fixed amount of
time or indefinitely. Object Lock provides two ways to manage object
retention: retention periods and legal holds. While a retention period or
a legal hold is in effect, the object version is protected and can't be
overwritten or deleted.
Multipart Upload
Multipart upload allows you to upload a single object as a set of parts.
Each part is a contiguous portion of the object’s data. You can upload
these object parts independently and in any order. If transmission of any
part fails, you can retransmit that part without affecting other parts. After
all parts of your object are uploaded, Amazon S3 assembles these parts
and creates the object.
In general, when your object size reaches 100 MB, you should consider
using multipart uploads instead of uploading the object in a single
operation. It can make your app faster, but it adds complexity as well. For
example, you have to provide more parameters in the API, and there are edge
cases, such as handling incomplete multipart uploads.
Pause and resume object uploads — You can upload object parts over
time. After you initiate a multipart upload, there is no expiry; you
must explicitly complete or stop the multipart upload.
Begin an upload before you know the final object size — You can
upload an object as you are creating it.
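In practice, the high-level AWS CLI performs multipart uploads
automatically for large files, while the low-level s3api commands expose
the individual steps. A hedged sketch with placeholder names:

# The high-level command splits large files into parts behind the scenes
aws s3 cp ./backup.zip s3://my-bucket/backup.zip

# Inspect and clean up incomplete multipart uploads (they still consume storage)
aws s3api list-multipart-uploads --bucket my-bucket
aws s3api abort-multipart-upload --bucket my-bucket --key backup.zip --upload-id EXAMPLE_UPLOAD_ID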
S3 Transfer Acceleration
Amazon S3 Transfer Acceleration offers a significant boost in content
transfers. Users operating web or mobile applications with a broad user
base or those hosted far from their S3 bucket may encounter prolonged
and fluctuating upload and download speeds over the internet. S3
Transfer Acceleration (S3TA) addresses these challenges by minimizing
variability in internet routing, congestion, and speeds that typically
impact transfers. It effectively reduces the perceived distance to S3 for
remote applications by leveraging Amazon CloudFront's edge locations and
the AWS backbone network. Along with network protocol optimizations, S3TA
enhances transfer performance, ensuring smoother and faster data transfers.
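Acceleration is a per-bucket setting. A minimal sketch, assuming a
placeholder bucket name:

# Turn on Transfer Acceleration for the bucket
aws s3api put-bucket-accelerate-configuration --bucket my-bucket --accelerate-configuration Status=Enabled

# Then upload through the accelerate endpoint
aws s3 cp ./video.mp4 s3://my-bucket/ --endpoint-url https://s3-accelerate.amazonaws.com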
S3 Encryption
In our database, we had a column storing PII (Personal Identifiable
Information) data, which required encryption for security purposes.
Rather than investing significant effort and writing extensive code to
handle this, we opted for a solution with minimal effort. Leveraging
AWS S3 encryption support, we securely stored PII data payloads in an
S3 bucket to comply with regulations. Then we stored the corresponding
S3 object keys in database tables, effectively keeping sensitive
information out of the database.
282
regulations without any effort from you.
S3 Pricing
I had been using Google Drive to store my images and videos for 7+ years.
I paid $20 yearly for 100 GB of storage, and the storage meter had been in
the red (full) for days. So I downloaded all my media and uploaded it to
Amazon S3. Since I did not access my files frequently, I picked the
archival class (Glacier Instant Retrieval), which costs $0.004 per GB per
month. The yearly cost to store my memories dropped from $20 to $4.80,
roughly a 75% cost reduction.
Pricing varies by region, like all other services. S3 charges for several
factors:
283
Requests & data retrieval — Number of requests against the bucket
Data transfer — It is a hidden cost. It is like a tax when running apps
in the cloud.
Management & Analytics (if you enable)
284
IAM Programmatic access and AWS CLI
IAM Programmatic access
To access your AWS account from a terminal or script, you use an AWS
access key ID and an AWS secret access key.
AWS CLI
The AWS Command Line Interface (AWS CLI) is a unified tool to
manage your AWS services. With just one tool to download and
configure, you can control multiple AWS services from the command
line and automate them through scripts.
The AWS CLI v2 offers several new features including improved
installers, new configuration options such as AWS IAM Identity Center
(successor to AWS SSO), and various interactive features.
Task-01
Create AWS_ACCESS_KEY_ID and
AWS_SECRET_ACCESS_KEY from AWS Console.
Log in to your AWS Management Console.
Click on your username in the top right corner of the console and
select “Security Credentials” from the drop-down menu.
285
Click on the “Access keys (access key ID and secret access key)”
section.
286
Click on “Create Access Key.”
Your access key ID and secret access key will be displayed. Make sure
to download the CSV file with your access key information and store it
in a secure location.
287
Task-02
Install the AWS CLI by following the instructions for your operating
system: https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html
288
Check aws-cli version
Once you have installed the AWS CLI, open a terminal or command
prompt and run the following command to configure your account
credentials:
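The command in question is the standard configuration command:

aws configure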
You will be prompted to enter your AWS Access Key ID and Secret
Access Key. Copy and paste access key and secret key from downloaded
csv file. You will also be prompted to enter your default region and
output format. Choose the region that is closest to your location and
select a suitable output format.
Once you have entered your credentials and configured your default
settings, you can test that the CLI is working by running the following
command:
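The test command here is the basic bucket-listing command:

aws s3 ls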
This command should list the S3 buckets in your account. You have now set
up and installed the AWS CLI and configured your account credentials.
290
Data Preservation Strategies for EC2 Instances:
Safeguarding Your Information Before Destruction
We will explore how to preserve data on our EC2 instance before it gets
destroyed.
Use Case:
Imagine you are responsible for an e-commerce website hosted on an
EC2 instance. You decide to upgrade your application to a newer
version. Before initiating the upgrade, you need to preserve customer
data, transaction records, and other critical information stored on the
instance to ensure a seamless transition and avoid any data loss.
Overview :
Workflow Diagram
Steps :
Step 1:
291
Creating the EC2 Instance :
Navigate to the EC2 console.
292
293
Now, I am going to insert some data into the instance.
294
Step 2 :
Creating an AMI from the Instance:
Now, I am going to create an AMI from our instance.
Steps :
295
Step 3 :
Terminating the Instance:
Now, I am going to terminate the instance, and after that I am going to
create a new instance to retrieve the data.
Steps :
296
Step 4 :
Backing Up and Restoring the Data:
Now, I am going to create a new instance to retrieve the data.
Steps :
297
298
299
We successfully backed up and restored the old data.
300
AWS EBS Snapshots
EBS is a network storage drive and can be attached to one EC2 instance at
a time. An EBS volume is tied to a single Availability Zone and cannot be
attached across Availability Zones.
Snapshots are backups of an EBS volume and can be restored in any other
Availability Zone as a copy (with the same data).
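The same workflow can be scripted. A hedged sketch with placeholder volume,
snapshot, and region values:

# Snapshot the volume before making changes
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "Backup before upgrade"

# Copy the snapshot to another region (run in the destination region; the copy returns a new snapshot ID)
aws ec2 copy-snapshot --region ap-south-1 --source-region us-east-1 --source-snapshot-id snap-0123456789abcdef0 --description "Cross-region copy"

# Restore the (copied) snapshot as a new volume in any Availability Zone of that region
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --availability-zone ap-south-1a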
301
Go to the Storage tab of the EC2 instance → Click the volume id
302
Write the description of the snapshot you are making. Click on the
Create Snapshot button
A snapshot is created. Click on the Snapshots link on the left. You will
find your snapshot there.
304
Choose your destination region and click the Copy button.
A new copy of the snapshot will be created with your new region.
305
You can change the Availability Zone of the volume from the selection
below; the rest of the details will be the same as before. Click
Create Volume.
306
is set up.
Closing thoughts:
In this article, we have seen how EBS snapshots work in AWS and how to
restore an EBS volume from a snapshot.
307
Elastic Beanstalk: Advantages and Drawbacks
First, let’s start with the basics: According to the AWS site, “Elastic
Beanstalk makes it even easier for developers to quickly deploy and
manage applications in the AWS cloud. Developers simply upload their
application, and Elastic Beanstalk automatically handles the deployment
details of capacity provisioning, load balancing, auto-scaling, and
application health monitoring.”
Advantages:
Elastic Beanstalk’s main benefits include timesaving server
configuration, powerful customization, and a cost-effective price point.
Lightning-Fast Configuration with Automation
308
Rails Servers: For Rails, you can run either a Passenger or a Puma
application stack — we’ve only used Puma so far, and the servers
will be configured by Elastic Beanstalk to run a Puma process (with
Nginx used as a reverse proxy in front of it) and a reasonable server
configuration for log files, directory security, and user security.
Security: New AWS security groups and policies are created, along
with permissions for these services to securely talk to each other. All
the servers are configured so they can only talk to and have
permissions for what they need. For example, your Rails servers have
just one port open specifically for the load balancers, and nothing can
talk to your DB server, except for your Rails servers. This is fantastic,
because it can be hard to do correctly on your own.
309
account and password and have it accessible to your running Rails
code as an ENV variable.
310
your server needs to your service load. Elastic Beanstalk even has auto-
scaling functionality built in, which we never used, but would be a great
way to save money for larger applications by only bringing up extra
servers when needed. Overall, our costs on an application of similar size
and scope, built during the same timeframe, were 400% higher on Heroku
than on Elastic Beanstalk. Although each application is different, it's a
good ballpark comparison.
Drawbacks
Some of the biggest pains with Elastic Beanstalk include unreliable
deployments, lack of transparency and documentation around stack and
application upgrades, and an overall lack of clear documentation.
Unreliable Deployment
We do a lot of deployments — we have a continuous integration setup
via Codeship, so with every commit (or merged pull request), if tests
pass, we deploy a new version. We practice small, incremental changes
and strive to be as nimble as possible. Some days, we might deploy 10
times.
Over the last year or so of using Elastic Beanstalk, our deploys have
failed five or six times. When the failure happens, we get no indication
why, and further deployments will fail as well. On the positive side, this
didn’t result in downtime for us. We simply couldn’t deploy, and if we
tried again it would fail.
Each time, we needed to troubleshoot and fix things on our own. We found
and tried multiple solutions, such as terminating the instance that had
the deployment issue and letting Elastic Beanstalk recover. Sometimes we
could SSH into the stuck machine and kill a process that was part of the
eb deploy, and the machine would recover. But overall, we didn't know what
failed, and it's never good to be unsure that your machine is in a good
state.
311
Considering we have done over 1,000 deployments, this isn't a high
failure rate. It never hit us at a critical time, but what if this happened
when we were trying to do a hotfix for a performance issue that was
crippling our site? Or, what if we had larger sites with more than two or
three front-end servers? Would this increase our deployment failure
rate? For the two applications we have done, we decided that the risk of
this happening was small and that it didn’t warrant switching to a new
service. For some applications this would not be an acceptable risk.
Deployment Speed
Deployments would take five minutes at least, and sometimes stretch to
15, for a site with just two front-ends. With more servers, deployments
could take even longer. This might not seem like much, but we have set up
other Rails environments where deployment can be done in one or two
minutes. And this can be critical if you are trying to be responsive in
real-time.
Attempts have been made to improve the Elastic Beanstalk deployment
process, and a good summary to start with is this post from HE:labs. We
may try some things from there in the future.
Stack Upgrades
Elastic Beanstalk comes out with new stack versions all the time — but
we have zero information on what has changed. No release notes, no
blog post, not even a forum post. Sometimes, it’s obvious — the version
of Ruby or Puma will change. But other times, it’s just an upgrade.
We’ve done several upgrades, and sometimes it goes smoothly, and
sometimes it takes a week.
Old Application Versions
Another thing we learned is to occasionally delete old application
versions. With every deploy, Elastic Beanstalk archives the old
application version in an S3 bucket. However, if there are 500 old
versions, further deploys fail. You can delete them through the Elastic
Beanstalk UI, but this caught us off guard multiple times. Although this
seems like a small problem, I really don’t like it when deployments fail.
All of these problems are an indication of Elastic Beanstalk’s general
lack of transparency. We had to figure out a lot of these issues on our
own and through blog posts and internet searches. On the plus side, we
had complete transparency into EC2 instances, database, and other
services created, so we were free to learn on our own. And while stack
upgrades and failed deployments are the clearest moments of pain, in
general they were indicators of the types of things you have to learn
on your own.
Summary
Elastic Beanstalk helps us to easily deploy updates to our Rails
application while also leveraging Amazon’s powerful infrastructure.
Enhancing our deployment process with containers — like Docker —
will add even more versatility. Thanks to the fine-grain control offered
by Elastic Beanstalk we get to choose technologies that work best for us.
Ultimately, we found the most helpful thing about Elastic Beanstalk to
be that its automation features let us easily deploy updates to our Rails
application. While it’s certainly not a perfect tool, if you’re looking to
reduce system operations and just focus on what you’re developing,
Elastic Beanstalk is a solid choice.
313
Adding a custom domain for the AWS Elastic
Beanstalk application using Route 53.
314
{
  "name": "node-typescript-boilerplate",
  "version": "1.0.0",
  "description": "",
  "main": "src/index.ts",
  "scripts": {
    "watch": "tsc -w",
    "dev": "nodemon dist/index.js",
    "start": "node dist/index.js",
    "build": "tsc",
    "build:lib": "tsc"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "@types/jsonwebtoken": "^8.5.8",
    "aws-sdk": "^2.1167.0",
    "bcrypt": "^5.0.1",
    "chalk": "^4.1.2",
    "dotenv": "^16.0.1",
    "express": "^4.18.1",
    "express-validator": "^6.14.2",
    "joi": "^17.6.0",
    "jsonwebtoken": "^8.5.1",
    "lodash": "^4.17.21",
    "moment": "^2.29.3",
    "mongoose": "^6.4.4",
    "multer": "^1.4.5-lts.1",
    "multer-s3": "^3.0.1",
    "nodemailer": "^6.7.5",
    "ts-node": "^10.8.2"
  },
  "devDependencies": {
    "@types/aws-sdk": "^2.7.0",
    "@types/bcrypt": "^5.0.0",
    "@types/express": "^4.17.13",
    "@types/lodash": "^4.14.182",
    "@types/multer-s3": "^3.0.0",
    "typescript": "^4.7.4"
  }
}
In the package file specified above, we need to ensure that the scripts tag
contains a start script. This script will be used to bootstrap the
application on the Elastic Beanstalk server. In the sample code, we have
configured the start script as node dist/index.js.
Our app is ready, so let's create the Elastic Beanstalk application.
316
Getting started with AWS Elastic Beanstalk
317
Then set the version label to 1 and choose Single instance in the
"Presets" section and click Next.
Note: Prefer High availability for the production environment.
318
Step 2: Configure service access
In this section, it is necessary to set up IAM roles. We must create two
IAM roles, one for Elastic Beanstalk and one for EC2
For the service role, select Create and use new service role. It'll
automatically create and provide the required permissions
In order to SSH into your EC2 instance via a terminal, create a key pair
and select it. Skip this step if you do not wish to log on to EC2.
Create an IAM role with the following permissions and add the role to
the ‘EC2 instance profile’ and proceed next.
AWSElasticBeanstalkWebTier
AWSElasticBeanstalkWorkerTier
AWSElasticBeanstalkMulticontainerDocker
319
Step 3: Set up networking, database, and tags
I'm going to skip this step because I'm using MongoDB (via Mongoose), so I
don't need to do this step.
Step 4: Configure instance traffic and scaling
It’s not necessary to make any changes here unless you need them. If
you’re creating this sample app, leave the fields with their default values.
An Amazon Linux machine will be created by Elastic Beanstalk by
default.
320
Step 5: Configure updates, monitoring, and logging
Choose Basic in "Health reporting" and uncheck Managed updates
activation.
321
Add your environment variables and click Next.
322
In the end, examine all your configurations and proceed with the next
step.
Now you can see why I spent hours on this process in the first place.
Whenever I made a mistake, I had to wait about 10 to 15 minutes to check
the result and redo all the steps above if anything went wrong. Elastic
Beanstalk will definitely test your patience, so stay calm and relaxed.
When everything is finished, the health will turn green and a domain
URL will be generated.
323
The following page will appear when you open the URL if you used my
example repo.
324
Enter your domain name in the “Domain Name” field. The name of the
custom domain you want to add (e.g., example.com) should be given
here.
Once the hosted zone is established, you will be directed to the records
management page. DNS records need to be added to point to the
resources you want to associate with your custom domain. A records (for
IPv4 addresses), AAAA records (for IPv6 addresses), CNAME records
(for aliases), MX records (for mail servers), etc. are all common record
types.
325
3. Add the Name Servers (NS) at your domain provider.
To point to the set of nameservers on AWS, custom NS records will be
added in GoDaddy next.
Click on My Products on the top toolbar, then click the DNS button on the
justingerber.com domain.
326
Depending on your requirements, you can choose between GoDaddy’s
default nameservers or custom nameservers. Here are the two options:
Default Nameservers: If you’re using GoDaddy’s default
nameservers, you will typically see an option to choose “GoDaddy
Nameservers” or “Default Nameservers.” Select this option if you
want to use GoDaddy’s own nameservers for your domain.
Custom Nameservers: If you have your own nameservers (provided
by a hosting provider, for example), you’ll want to choose the option
to enter custom nameservers. Enter the nameserver addresses
provided by your hosting provider.
327
Fill out the necessary information in the Certificate Manager form.
Make sure to use '*.domainName.com' when adding a name.
After creating the certificate request, you will see a new screen that
contains the necessary details related to your domain and certificate.
Click on Create records in Route 53, which adds the DNS validation records
for your domain's certificate.
328
Press the Create records button. This adds a CNAME record to Route 53.
329
Create a custom domain in Route 53 and select an Elastic Beanstalk
service.
5. Create a type A record in Route 53 for your custom domain.
Create a record by clicking on the “Create record” button.
330
Make sure the Alias switch is turned on.
In the first drop-down list under Route traffic to, select Alias to
Elastic Beanstalk environment.
In the second drop-down list under Route traffic to, select the region of
your environment, for example Asia Pacific (Mumbai) [ap-south-1]; you may
select any region according to where your environment runs.
Ensure that the third drop-down list under Route traffic to is set to your
Elastic Beanstalk environment.
Press the Create records button.
Make sure that both the "A" and "CNAME" records are present in Route 53
for the weather application.
331
6. Adding SSL in your Elastic Beanstalk.
Under the Environment Name column, click the Weather-test-app-dev
app that was added.
332
Click the edit button under the Instance traffic and scaling. Continue
scrolling until you come across Listeners.
Click on the configuration option on the left menu. Click the edit button
located in the load balancer section. Click on the button that says Add
listener.
333
Click the Add button.
Important: adding the listener does not save the changes yet.
Scroll to the bottom of the page and click the Apply button.
Reflecting the environment on your custom domain will take some time.
334
Using CloudWatch for Resource Monitoring, Create
CloudWatch Alarms and Dashboards
Introduction:
What’s Amazon CloudWatch?
Amazon CloudWatch is an AWS service for monitoring and managing
resources in the cloud. It ensures the reliability, availability, and
performance of AWS applications and infrastructure.
Architecture Diagram:
335
Task Steps:
Step 1:
Sign in to AWS Management Console
On the AWS sign-in page, enter your credentials to log in to your AWS
account and click on the Sign in button.
Once signed in to the AWS Management Console, set the default AWS Region
to US East (N. Virginia) us-east-1.
Step 2:
Launching an EC2 Instance
In this step, we are going to launch an EC2 Instance that will be used for
checking various features in CloudWatch.
Make sure you are in the N.Virginia Region.
Navigate to EC2 by clicking on the Services menu in the top, then click
on EC2 in the Compute section.
3. Navigate to Instances from the left side menu and click on Launch
instances button.
336
4. Name : Enter MyEC2Server
5. For Amazon Machine Image (AMI): Select Amazon Linux, and then select
the Amazon Linux 2 AMI from the drop-down.
Note: if there are two AMIs listed for Amazon Linux 2, choose either of
them.
337
7. For Key pair: Select Create a new key pair Button
Key pair name: MyEC2Key
Key pair type: RSA
Private key file format: .pem
338
To add SSH :
Choose Type: Select SSH
Source: Select Anywhere
339
Note: Select the instance and Copy the Instance-ID and save it for later,
we need to search the metrics in CloudWatch based on this.
Step 3 :
SSH into the EC2 Instance and install the necessary software
Follow the instructions below to SSH into your EC2 instance.
Once the instance is launched, select the EC2 Instance Connect option and
click on the Connect button (keep everything else as default).
340
A new tab will open in the browser where you can execute the CLI
Commands.
2. Once you are logged into the EC2 instance, switch to root user.
sudo su
3. Update :
yum update -y
341
4. Stress Tool: The Amazon Linux 2 AMI does not have the stress tool
installed by default, so we will need to install some packages:
sudo amazon-linux-extras install epel -y
yum install stress -y
5. Stress tool will be used for simulating EC2 metrics. Once we create
the CloudWatch Alarm, we shall come back to SSH and
trigger CPUUtilization using it.
Step 4:
Create SNS Topic
In this step, we are going to create a SNS Topic.
Make sure you are in the N.Virginia Region.
Navigate to Simple Notification Service by clicking on the Services
menu available under the Application Integration section.
3. Click on Topics in the left panel and then click on Create topic button.
4. Under Details:
Type: Select Standard
Name: Enter MyServerMonitor
342
Display name: Enter MyServerMonitor
5. Leave other options as default and click on Create topic button. A SNS
topic will be created.
Step 5:
Subscribe to an SNS Topic
Once SNS topic is created, click on SNS topic MyServerMonitor.
Click on Create subscription button.
343
3. Under Details:
Protocol : Select Email
Endpoint : Enter your email address
Note: Make sure you give a valid email address, as this is where your
notifications will be delivered.
4. You will receive a subscription confirmation to your email address
344
5. Click on Confirm subscription.
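The same topic and subscription can be created from the CLI. A hedged
sketch with a placeholder account ID and email address:

# Create the topic and subscribe an email endpoint to it
aws sns create-topic --name MyServerMonitor
aws sns subscribe --topic-arn arn:aws:sns:us-east-1:111122223333:MyServerMonitor --protocol email --notification-endpoint you@example.com

The email recipient still has to click the confirmation link before
notifications are delivered.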
Step 6:
Using CloudWatch to Check EC2 CPU Utilization Metrics in
CloudWatch Metrics
Navigate to CloudWatch by clicking on the Services menu available
under the Management & Governance section.
345
2. Click on All metrics under Metrics in the Left Panel.
3. You should be able to see EC2 under All metrics. If EC2 is not visible,
please wait for 5-10 minutes; CloudWatch usually takes around 5-10 minutes
after the creation of an EC2 instance to start fetching metric details.
5. Here you can see various metrics. Select the CPUUtilization metric to
see the graph.
346
6. Now at the top of the screen, you can see the CPU Utilization graph
(which is at zero since we have not stressed the CPU yet).
Step 7:
Create CloudWatch Alarm
CloudWatch alarms are used to watch a single CloudWatch metric or the
result of a math expression based on CloudWatch metrics.
Click on In alarms under Alarms in the left panel of the CloudWatch
dashboard.
347
Click on Select metric. It will open the Select Metrics page.
Enter your EC2 Instance-ID in the search bar to get metrics for
MyEC2Server
Choose the CPU Utilization metric.
Click on Select metric button.
348
4. Now, configure the alarm with the following details:
Under Metrics
Period: Select 1 Minute
Under Conditions
Threshold type: Choose Static
Whenever CPUUtilization is…: Choose Greater
than: Enter 30
Leave other values as default and click on Next button.
349
5. In Configure actions page:
Under Notification
Alarm state trigger: Choose In Alarm
Select an SNS topic: Choose Select an existing SNS topic
Send a notification to… : Choose MyServerMonitor SNS topic which
was created earlier.
350
7. A preview of the Alarm will be shown. Scroll down and click
on Create alarm button.
8. A new CloudWatch Alarm is now created.
Whenever the CPU utilization goes above 30% for more than 1 minute, an SNS
notification will be triggered and you will receive an email.
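The equivalent alarm can also be defined with one CLI call. A rough sketch;
the instance ID, account ID, and region are placeholders:

# Alarm when average CPUUtilization over a 60-second period exceeds 30%
aws cloudwatch put-metric-alarm --alarm-name MyServerCPUUtilizationAlarm --namespace AWS/EC2 --metric-name CPUUtilization --dimensions Name=InstanceId,Value=i-0123456789abcdef0 --statistic Average --period 60 --evaluation-periods 1 --threshold 30 --comparison-operator GreaterThanThreshold --alarm-actions arn:aws:sns:us-east-1:111122223333:MyServerMonitor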
Step 8:
Testing CloudWatch Alarm by Stressing CPU Utilization
SSH back into the EC2 instance — MyEC2Server.
The stress tool has already been installed. Let’s run a command to
increase the CPU Utilization manually.
sudo stress --cpu 10 -v --timeout 400s
351
3. The stress command we triggered manually will run for 400 seconds (6
minutes and 40 seconds). During that time, CPU utilization should remain
very near 100%.
4. Open another Terminal on your local machine and SSH back in EC2
instance — MyEC2Server.
5. Run this command to see the CPU utilization if you are a Mac or Linux
user. Windows users can open Task Manager instead.
top
352
6. You can now see that %Cpu(s) is 100. By running this stress
command, we have manually increased the CPU utilization of the EC2
Instance.
7. After 400 Seconds, the %Cpu will reduce back to 0.
353
Step 9 :
Checking For an Email from the SNS Topic
Navigate to your mailbox and refresh it. You should see a new email
notification for MyServerCPUUtilizationAlarm.
Step 10:
Checking the CloudWatch Alarm Graph
Navigate back to CloudWatch page, Click on Alarms.
Click on MyServerCPUUtilizationAlarm.
On the Graph, you can see places where CPUUtilization has gone above
the 30% threshold.
354
4. We can trigger CPUUtilization multiple times to see the spike on the
graph.
5. You have successfully triggered a CloudWatch alarm
for CPUUtilization.
Step 11:
Create a CloudWatch Dashboard
We can create a simple CloudWatch dashboard to see
the CPUUtilization and various other metric widgets.
Click on Dashboard in the left panel of the CloudWatch page.
Click on Create dashboard button.
355
Dashboard name: Enter MyEC2ServerDashboard
356
Click on Create Widget button.
3. Depending on how many times you triggered the stress command, you
will see different spikes in the timeline.
4. Now click on the Save button.
5. You can also add multiple Widgets to the same Dashboard by clicking
on Add widget button.
357
Exploring AWS CloudTrail: Auditing and Monitoring
AWS API Activity
What is CloudTrail?
CloudTrail continuously monitors and logs account activity across all
AWS services, including actions taken by a user, role, or AWS service.
The recorded information includes the identity of the API caller, the time
of the API call, the source IP address of the API caller, the request
parameters, and the response elements returned by the AWS service.
358
Here are some key reasons to use CloudTrail:
CloudTrail Events
CloudTrail categorizes events into two types:
Management events: Provide information about management operations
performed on resources in your AWS account, such as launching an EC2
instance or creating an IAM user.
Data events: Provide information about resource operations performed on or
in a resource. These include operations like Amazon S3 object-level API
activity.
You can choose to log both management and data events or just
management events. Data events allow more granular visibility into
resource access.
Enabling CloudTrail
Enabling CloudTrail is simple and can be done in a few
steps:
Sign into the AWS Management Console and open the CloudTrail
console.
Get started by creating a new trail and specify a name.
Choose whether to log management and/or data events.
Specify an existing S3 bucket or create a new one where logs will be
stored.
Click Create to finish enabling CloudTrail.
Once enabled, CloudTrail will begin recording events and delivering
log files to the designated S3 bucket. You can customize trails further
by adding tags, configuring log file validation, logging to CloudWatch
Logs, and more.
Use Cases
Here are some common use cases for CloudTrail:
User Activity Monitoring: Review which users and accounts are
performing actions across services.
Incident Investigation: Determine what happened when a security
incident occurs by reviewing relevant events.
Prerequisites
Before starting, you should have:
An AWS account
Basic understanding of AWS services
An S3 bucket to store the CloudTrail logs
Enabling CloudTrail
Let’s start by enabling CloudTrail across all Regions:
361
Under Storage location, create or select an existing S3 bucket.
For log file encryption, select AWS KMS to encrypt the logs.
Click “Create” to enable the trail.
CloudTrail will now begin recording events and sending log files to
the designated S3 bucket.
Go to the S3 console and open the bucket storing the CloudTrail logs.
Open one of the log files and inspect the JSON content.
You will see API call details like source IP, user agent, resource
affected, and parameters.
The logs provide a comprehensive audit trail of all API activity across
services.
362
Here are some common AWS CLI commands for working with AWS
CloudTrail:
Create CloudTrail trail
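A hedged sketch of the create command (the trail and bucket names are
placeholders; the bucket needs an appropriate bucket policy for
CloudTrail):

aws cloudtrail create-trail --name my-management-trail --s3-bucket-name my-cloudtrail-logs --is-multi-region-trail
aws cloudtrail start-logging --name my-management-trail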
remove-tags — Removes tags from a trail
list-public-keys — Lists public keys for log file validation
get-trail-status — Returns status of CloudTrail logging
list-trails — Lists trails in the account
Final Words
AWS CloudTrail provides a simple yet powerful way to gain visibility
into activity across your AWS account. By recording API calls made to
various AWS services, CloudTrail delivers detailed audit logs that can be
analyzed for security, compliance, and operational purposes. This tutorial
guided you through enabling CloudTrail across all Regions, inspecting
the generated log files, and leveraging CloudTrail Insights to detect
unusual activity. With CloudTrail activated, you now have
comprehensive visibility into changes, user activity, and resource access
within your AWS environment. Be sure to consult the CloudTrail logs
regularly for auditing, monitoring AWS usage, troubleshooting issues,
and investigating security incidents. We encourage you to explore the
other capabilities of CloudTrail such as log file encryption, log
validation, data event logging, and integrating logs with other AWS
services. CloudTrail is a key component of the AWS shared
responsibility model, enabling you to monitor the activity within your
account and respond appropriately.
364
activity within your AWS environments.
With AWS CloudTrail, you can search and track all account activities to
monitor user changes, compliance, error rates, and risks.
The capabilities of CloudTrail are essential to simplifying troubleshooting
in your AWS environment and letting you identify areas that need
improvement.
File integrity: Log file validation lets you verify that log files have
not been modified or corrupted after CloudTrail delivered them. If any log
file has been tampered with, the validation check fails, telling you that
the file's integrity is broken.
Getting Started
In your AWS Management Console, search and click on AWS
CloudTrail.
365
Create a New Trail by clicking on Create Trail.
Choose your Trail attributes. Enter your Trail name and storage
location (select an existing S3 bucket or create a new S3 bucket).
Enable log file encryption along with log file validation. This ensures
your CloudTrail log files are encrypted and lets you detect tampering.
366
When you’re done configuring your Trail attributes, click on Next.
Next, choose your log events. In AWS CloudTrail, there are three types
of events. Management events, Data events, and Insights events.
Management events are free and can be viewed in the event history
tab for 90 days. Data events are not free to the user and cannot be
viewed in the event history tab. Insights events let you identify
unusual activity, errors, or user behavior in your account.
Only Management events are free for your workloads. Data and Insights
events will incur costs. In this tutorial, we’ll be using Management
Events.
When you’re done configuring log events, click on Next, you’ll see
the overview and general details of your configuration, and click
on Create Trail.
In your Trails dashboard, you’ll see the Trail you just created.
367
Integrate other AWS resources with your trail to see how it works and to
see different log events. For example, I'll upload a new file into my S3
bucket. Once I'm done uploading the file, I'll automatically see the
events in CloudTrail.
In your CloudTrail event history, you’ll see all your events and logs
from your S3 bucket.
368
You’ll see your event records and referenced resources when you
click on them.
You can also filter your event history based on AWS access key,
Event ID, Event Name, Event Source, Resource name, and user type.
369
You'll see the PUT event under Event name for the S3 bucket we updated
earlier.
When you click on CloudTrail, you can see the logs from each AWS Region.
370
Conclusion
You can see how fast it is to enable and configure AWS CloudTrail on
your AWS resources and view log events in your Event History
dashboard. CloudTrail is a service whose primary function is to record and
track all AWS API requests. These API calls can be programmatic requests
initiated by a user using an SDK, from the AWS CLI, or within the AWS
Management Console. With our open-source workflows, you can automatically
send an API request with our ops CLI to enable logs and events on your AWS
resources.
371
Route 53
We dive into Route 53, Amazon's highly scalable and reliable Domain
Name System (DNS) web service. Route 53 offers a plethora of features
to manage your domain names and direct internet traffic efficiently.
Let’s explore the key concepts and functionalities of Route 53.
372
Global Coverage: Route 53 has a global network of DNS servers
strategically located around the world. This ensures fast and reliable
DNS resolution for users accessing your applications from different
geographic regions.
Integration with AWS Services: Route 53 seamlessly integrates with
other AWS services such as Elastic Load Balancing (ELB), Amazon
S3, Amazon EC2, and more. This allows you to easily map domain
names to your AWS resources and manage traffic routing efficiently.
Advanced Routing Policies: Route 53 supports various routing
policies like simple routing, weighted routing, latency-based routing,
geolocation routing, and failover routing. These policies enable you
to implement sophisticated traffic management strategies based on
your specific requirements.
Health Checks: Route 53 provides health checks to monitor the
health and availability of your resources. You can configure health
checks for endpoints like web servers, load balancers, and more.
Route 53 automatically routes traffic away from unhealthy endpoints,
helping you maintain high availability and reliability.
373
tolerance.
Let’s explore how to configure DNS records and health checks in Route
53.
374
CNAME records map a domain or subdomain to another domain's canonical
name. This is often used for creating aliases for domains.
Using the AWS CLI: You can use the AWS CLI to manage Route
53 DNS records programmatically. Commands like ‘aws route53
change-resource-record-sets’ enable you to add, update, or delete
DNS records in your hosted zones.
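For example, a hedged sketch of upserting an A record with that command
(the hosted zone ID, record name, and IP address are placeholders):

aws route53 change-resource-record-sets --hosted-zone-id Z0123456789ABCDEFGHIJ --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"www.example.com","Type":"A","TTL":300,"ResourceRecords":[{"Value":"203.0.113.10"}]}}]}'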
375
Define Health Check Settings: Specify the endpoint or resource
you want to monitor, along with the protocol (HTTP, HTTPS, or TCP),
port, and other relevant settings.
376
This is the most basic routing policy where you associate a single
DNS record with a single resource. When a DNS query is received,
Route 53 responds with the IP address associated with the DNS
record.
Useful for directing traffic to a single resource, such as a web server
or a load balancer.
Weighted Routing: Lets you split traffic across multiple resources in
proportions you specify. For example, you might allocate 70% of traffic to
one resource and 30% to another to perform A/B testing or gradually shift
traffic during deployments.
377
Failover Routing: Commonly used for disaster recovery scenarios.
Latency-Based Routing:
Latency-based routing is particularly powerful for optimizing the
performance of globally distributed applications. Here’s how it works:
1. Latency Measurements:
Route 53 measures the latency between end users and your resources
from multiple AWS regions.
It uses this information to determine the optimal resource to which
traffic should be directed based on the lowest latency.
2. Traffic Distribution:
When Route 53 receives a DNS query, it evaluates the latency to each
resource and directs the query to the resource with the lowest latency
for that particular user.
This ensures that users are automatically routed to the resource that
offers the best performance from their location.
Implementation:
To implement latency-based routing in Route 53:
1. Create Resource Records:
Define the DNS records for your resources (e.g., EC2 instances, ELB
endpoints) in your Route 53 hosted zone.
Conclusion:
Route 53 is a powerful tool for managing DNS and routing traffic
effectively within AWS and beyond. Understanding its features and
configurations is essential for building scalable and reliable web
applications.
379
CloudFront in AWS
CloudFront
1. Content Delivery Network (CDN): CloudFront is a CDN
service provided by AWS. CDNs help deliver content (like web pages,
images, videos) to users globally with low latency by caching content
at edge locations.
380
Adobe Media Server and the Real-Time Messaging Protocol (RTMP).
Uses of CloudFront
1. Create a Distribution: Set up a new CloudFront distribution in the
AWS Management Console.
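The same can be done with the CLI by supplying a distribution
configuration document. A hedged sketch (distribution-config.json is a
file you would author yourself; it names the S3 origin, default root
object, and caching behavior):

aws cloudfront create-distribution --distribution-config file://distribution-config.json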
381
performance requirements.
CloudFront is a powerful tool for optimizing content delivery and
enhancing the performance of your web applications globally.
382
AWS ACM
Introduction:
In the rapidly evolving landscape of web security, securing your website
with SSL/TLS certificates has become paramount. Amazon Web
Services (AWS) provides a robust solution for certificate management
through AWS Certificate Manager (ACM). In this blog post, we’ll delve
into the key features of AWS ACM, its benefits, and how it simplifies
the process of obtaining, managing, and deploying SSL/TLS certificates.
Additionally, we’ll explore the concepts of Public and Private Certificate
Authorities (CAs) and how they contribute to the security ecosystem.
383
Integrated with AWS Services: ACM seamlessly integrates
with other AWS services, such as Elastic Load Balancer (ELB),
CloudFront, and API Gateway. This integration simplifies the process
of associating certificates with these services, reducing the time and
effort required for deployment.
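Requesting a public certificate is a single call. A hedged sketch (the
domain names here are placeholders):

# Request a certificate for the apex domain and a wildcard, validated via DNS
aws acm request-certificate --domain-name example.com --subject-alternative-names "*.example.com" --validation-method DNS

ACM then provides the CNAME validation records to add in Route 53, similar
to the flow shown in the Elastic Beanstalk section above.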
384
Private Certificate Authorities (CAs): Private CAs, on the other
hand, are used within a specific organization or network. They are ideal
for internal communication where the trust is established within a closed
environment. AWS ACM supports private CAs, allowing organizations
to manage their internal certificates securely.
Conclusion:
AWS ACM emerges as a powerful tool in the realm of certificate
management, offering a seamless and secure experience for users. By
automating the certificate lifecycle, integrating with various AWS
services, and providing global coverage, ACM empowers businesses to
prioritize application development while ensuring robust security.
Embrace the simplicity and efficiency of AWS ACM, whether you’re
utilizing public or private CAs, to fortify your web applications with the
strength of SSL/TLS encryption.
385
Streamlining Mobile App Development with AWS
Amplify Console
386
Another advantage of using AWS Amplify Console is its scalability and
flexibility. With its automatic branch deployments feature, developers
can easily create new branches for different features or bug fixes and
have them automatically deployed to separate environments. This allows
for easy experimentation and iteration, ensuring that the app
development process remains agile and efficient.
Furthermore, AWS Amplify Console provides a simple and intuitive
user interface that makes it easy for developers to manage their app’s
deployment and hosting. With just a few clicks, developers can
configure their app’s settings, set up custom domains, and monitor the
deployment process. This eliminates the need for complex manual
configurations and reduces the risk of human error.
Key features of AWS Amplify Console
AWS Amplify Console is packed with powerful features that make it an
essential tool for mobile app development. Here are some of its key
features:
Continuous deployment: AWS Amplify Console allows
developers to set up automated deployments for their app. Whenever
changes are pushed to the repository, Amplify Console automatically
builds and deploys the updated app, ensuring a smooth deployment
process.
Environment variables: With Amplify Console, developers can
easily manage environment variables for different stages of their
app’s development. This allows for easy configuration of variables
such as API endpoints, database credentials, and third-party
integrations.
Branch deployments: Amplify Console enables developers to
create separate branches for different features or bug fixes. Each
branch can have its own environment and deployment settings,
allowing for easy testing and experimentation.
387
Custom domains: Developers can easily set up custom domains
for their app with Amplify Console. This gives the app a professional
and branded look, enhancing user trust and engagement.
Automatic SSL certificates: Amplify Console automatically
provisions and manages SSL certificates for custom domains,
ensuring secure communication between the app and its users.
Setting up AWS Amplify Console for mobile app
development
Getting started with AWS Amplify Console is quick and easy. Here’s a
step-by-step guide to setting it up for your mobile app development:
Create an AWS account: If you don’t already have one, sign up
for an AWS account at aws.amazon.com. This will give you access to
all the AWS services, including Amplify Console.
Install the Amplify CLI: The Amplify CLI is a command-line
tool that helps you create and manage your app’s backend resources.
Install it by running the following command in your terminal: npm
install -g @aws-amplify/cli.
Initialize your app: Navigate to your app’s root directory and run
the command amplify init. This will initialize your app with Amplify
and create a new Amplify environment.
Connect your app to the cloud: Once your app is initialized,
you can start connecting it to the cloud. Use the Amplify CLI
commands to add backend services such as authentication, storage,
and databases.
Configure Amplify Console: After setting up the backend, run
the command amplify console to open the Amplify Console in your
browser. Here, you can configure your app's deployment settings,
custom domains, and environment variables.
Deploy your app: Finally, use the Amplify CLI command amplify
push to deploy your app to the Amplify Console. This will build your
app, create the necessary resources, and deploy it to the specified
environment.
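Putting the steps together, a typical session looks roughly like the
following (the backend categories you add depend on your app; auth and
storage here are only examples):

npm install -g @aws-amplify/cli
amplify init            # initialize the project and create an environment
amplify add auth        # add backend resources as needed
amplify add storage
amplify push            # provision resources and deploy
amplify console         # open the Amplify Console to manage hosting and settings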
Integrating AWS Amplify Console with your mobile app
development workflow
AWS Amplify Console seamlessly integrates with popular development
workflows, making it easy for developers to incorporate it into their
existing processes. Here are a few ways you can integrate Amplify
Console with your mobile app development workflow:
Version control integration: Amplify Console supports
integration with popular version control systems like GitHub, GitLab,
and Bitbucket. This allows you to automatically build and deploy
your app whenever changes are pushed to your repository.
Build hooks: Amplify Console provides build hooks that can be
used to trigger custom build scripts or external services. This enables
you to incorporate additional build steps or automated testing into
your app’s deployment pipeline.
Webhooks: Amplify Console can also send webhooks to external
services, enabling you to trigger custom actions or notifications based
on the app’s deployment status. This can be useful for sending
notifications to team members or integrating with other tools in your
development workflow.
API integration: Amplify Console provides a RESTful API that
allows you to programmatically manage your app’s deployments and
settings. This enables you to automate certain tasks or integrate
Amplify Console with other tools in your development workflow.
Streamlining the deployment process with AWS Amplify
Console
One of the biggest challenges in mobile app development is the
deployment process. Traditional deployment methods often involve
manual configurations, complex build scripts, and potential human
errors. However, with AWS Amplify Console, deploying your app
becomes a breeze.
Amplify Console simplifies the deployment process by automating key
tasks and providing an intuitive user interface. Here’s how it streamlines
the deployment process:
Continuous deployment: With Amplify Console, every code
change triggers an automated deployment. This means that as soon as
you push changes to your repository, Amplify Console automatically
builds and deploys the updated app. This eliminates the need for
manual deployments and reduces the risk of human error.
Automatic branch deployments: Amplify Console allows you
to create separate branches for different features or bug fixes. Each
branch can have its own environment and deployment settings. This
enables you to test and iterate on new features without affecting the
main production environment.
Preview deployments: Amplify Console provides a preview URL
for each deployment, allowing you to easily preview and test your
app before making it live. This is particularly useful for testing new
features or bug fixes in a controlled environment.
Rollback feature: In case of any issues or bugs in a deployment,
Amplify Console allows you to easily rollback to a previous version
with just a few clicks. This ensures that you can quickly revert to a
stable version of your app without any downtime.
Optimizing mobile app performance with AWS Amplify
Console
Performance is a critical aspect of mobile app development. Users
expect apps to be fast, responsive, and reliable. AWS Amplify Console
provides several features and optimizations that can help you optimize
your app’s performance:
390
Content delivery network (CDN): Amplify Console
automatically deploys your app to a global CDN, ensuring that your
app’s static assets are served from the closest edge location. This
reduces latency and improves the app’s overall performance.
Automatic asset optimization: Amplify Console automatically
optimizes your app’s static assets, including images, CSS, and
JavaScript files. This reduces the file size of these assets, resulting in
faster load times and better user experience.
GZIP compression: Amplify Console automatically enables GZIP
compression for your app’s assets, reducing the size of transferred
data and improving network performance.
Cache control: Amplify Console allows you to configure cache
control headers for your app’s assets. This enables you to control how
long assets are cached by the user’s browser, reducing the number of
requests made to the server and improving performance.
Monitoring and troubleshooting mobile app development with AWS
Amplify Console
Monitoring and troubleshooting are essential aspects of mobile app
development. AWS Amplify Console provides several tools and features
that help you monitor and troubleshoot your app’s development process:
1. Deployment logs: Amplify Console provides detailed deployment
logs that allow you to track the progress of your app’s deployment.
These logs include information about build times, errors, and
warnings, enabling you to quickly identify and fix any issues.
2. Real-time metrics: Amplify Console provides real-time metrics for
your app’s deployments, including build times, deployment durations,
and success rates. These metrics help you monitor the performance of
your app’s deployment process and identify any bottlenecks or issues.
3. Alerts and notifications: Amplify Console allows you to set up alerts
and notifications for your app’s deployments. You can configure
alerts based on criteria such as deployment failures, long build times,
or high error rates. This enables you to proactively monitor your
app’s development process and take immediate action when
necessary.
4. Integration with AWS CloudWatch: Amplify Console integrates
seamlessly with AWS CloudWatch, allowing you to collect and
analyze logs, metrics, and events from your app’s deployments. This
provides deeper insights into your app’s performance and helps you
troubleshoot any issues.
Case studies: Success stories of mobile app development with AWS
Amplify Console
AWS Amplify Console has been used by numerous organizations to
streamline their mobile app development process. Here are a couple of
success stories:
1. Company A: Company A, a fast-growing startup, used AWS
Amplify Console to build and deploy their mobile app. By leveraging
Amplify Console’s continuous deployment and automatic branch
deployments features, they were able to rapidly iterate on new
features and bug fixes. This allowed them to launch their app in
record time and achieve a high level of user satisfaction.
2. Company B: Company B, a large enterprise, used AWS Amplify
Console to simplify their complex mobile app development
workflow. With Amplify Console’s environment variables and
integration with version control systems, they were able to automate
their deployment process and reduce the risk of human error. This
resulted in significant time and cost savings for the company.
These success stories highlight the effectiveness of AWS Amplify
Console in streamlining mobile app development and enabling
organizations to deliver high-quality apps in a timely manner.
Conclusion: Streamlining mobile app development with AWS Amplify
Console
In conclusion, AWS Amplify Console is revolutionizing the way mobile
apps are developed. Its powerful features and seamless integration with
other AWS services make it a must-have tool for any app developer.
With Amplify Console, developers can streamline the entire app
development lifecycle, from code changes to continuous deployment and
hosting. Its scalability and flexibility enable easy adaptation and
iteration of apps, making it ideal for both small startups and large
enterprises.
Furthermore, Amplify Console’s optimization and monitoring features
help developers optimize their app’s performance and troubleshoot any
issues. With real-time metrics,
detailed deployment logs, and integration with AWS CloudWatch,
developers can ensure that their app is performing at its best.
So why not give AWS Amplify Console a try and experience the
convenience and efficiency it brings to your mobile app development
process? Streamline your workflow, deliver high-quality apps, and stay
ahead in the fast-paced digital landscape.
AWS Lambda
Serverless Architecture:
The advancement of technology has generated new needs. The
increasing demand, load and costs have accelerated the development of
new methods. In addition, the development of cloud technology and
innovations have brought new services and concepts into our lives. One
of these concepts is serverless architecture.
While developing, our primary goal is to create a structure that will
solve a problem. However, in doing so, we are also forced to consider
other things. We have to think about the server configuration where the
application will run, as well as authorization, load balancing, and many
other aspects. Serverless architecture (another term used in place of
“serverless” is “Functions as a Service”) is a design approach that
enables you to build and run applications and services without the need
to manage the infrastructure.
Serverless architecture is not a way of assuming that servers are no
longer required or that applications will not run on servers. Instead, it is
a pattern or approach that helps us think less about servers in the context
of software development and management. This approach allows us to
eliminate the need to worry about issues related to scaling, load
balancing, server configurations, error management, deployment, and
runtime. With serverless architecture, we are essentially outsourcing one
of the most challenging aspects of running software in production,
which is managing operational tasks.
Every technology has its own drawbacks, and serverless is no exception.
Here are the main situations in which it is generally not recommended to
use serverless architecture:
Cold starts can add latency: when a function has not been invoked for a while, its first invocation takes longer while resources are initialized. It is possible to keep the function in an active state by sending periodic requests to it. This helps ensure that the necessary resources are already initialized and ready to handle incoming requests efficiently.
Long-running workloads may be more expensive to run on serverless
platforms compared to using a dedicated server, which can be more
efficient in these cases. When deciding between these options, it is
crucial to carefully consider the specific needs and requirements of
the workload.
Testing and debugging code in a serverless computing environment
can be challenging due to the nature of these cloud systems and the
lack of back-end visibility for developers.
As a serverless application that relies on external vendors for back-
end services, it is natural to have a certain level of reliance on those
vendors. However, if you decide to switch vendors at any point, it can
be challenging to reconfigure your serverless architecture to
accommodate the new vendor’s features and workflows.
Due to time limitations imposed by the vendor (for example, AWS
allows up to 15 minutes), it is not possible to perform long-running
tasks.
On the other hand, serverless architecture has clear benefits. It lets developers focus on developing and deploying their applications without worrying about server management.
Serverless architecture allows for applications to be scaled
automatically. This means that as demand for the application
increases, the necessary resources will be automatically allocated to
meet that demand, without the need for manual intervention. This can
provide a high degree of flexibility and scalability for organizations
using serverless architectures for their applications.
Serverless architectures enable the creation of development
environments that are easier to set up, which can lead to faster
delivery and more rapid deployment of applications.
When using serverless services, you only pay for the specific
instances or invocations you use rather than being charged for idle
servers or virtual machines that you may not be utilizing.
Use Cases
Serverless computing is well suited for tasks that are triggered by an
event. If you have an event that needs to be run based on some
trigger, serverless architecture can be an effective solution. An
example of this is when a user signs up for a service on a website and
a welcome email is automatically sent in response.
Serverless computing allows for the creation of RESTful APIs that
can be easily scaled as needed.
Serverless computing is a relatively new technology; it has advantages
and disadvantages. However, it is not a suitable solution for every
situation, and it is important to carefully consider all infrastructure
requirements before deciding to use it as your execution model. If you
currently host small functions on your own servers or virtual servers, it
may be beneficial for you to consider the benefits of using a serverless
computing solution.
There are a variety of platforms that offer a range of services for
serverless architecture. One such platform is Amazon Web Services,
which offers a number of serverless services. AWS provides AWS
Lambda, AWS Fargate for computing; Amazon EventBridge, AWS Step
Functions, Amazon SQS, Amazon SNS, Amazon API Gateway, AWS
AppSync for application integration; and Amazon S3, Amazon EFS,
Amazon DynamoDB, Amazon RDS Proxy, Amazon Aurora Serverless,
Amazon Redshift Serverless, Amazon Neptune serverless for data store.
I will now provide an explanation of one of the most widely utilized and
practical services among these options, which is AWS Lambda.
AWS Lambda
AWS Lambda is an event-driven cloud service from Amazon Web
Services (AWS) that enables users to execute their own code, known as
“functions,” without the need to worry about the underlying
infrastructure. These functions can be written in various programming
languages and runtimes supported by AWS Lambda and be uploaded to
the service for execution.
AWS Lambda automatically manages the scaling and allocation of
resources for these functions, providing a convenient and efficient way to
run code in the cloud.
AWS Lambda functions can be used to perform a wide range of
computing tasks, such as serving web pages, processing streams of data,
calling APIs, and integrating with other AWS services. These functions
are designed to be flexible and can be used for a variety of purposes,
making them a powerful tool for cloud computing.
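To make this concrete, here is a minimal sketch of what such a function can look like in Python; the event key and the greeting are illustrative only.

import json

def lambda_handler(event, context):
    # 'event' carries the input (from API Gateway, S3, SQS, a test event, etc.)
    name = event.get('name', 'world')
    # whatever we return is handed back to the caller
    return {
        'statusCode': 200,
        'body': json.dumps('Hello, ' + name + '!')
    }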
With AWS Lambda, you are charged based on the amount of allocated memory and the run time required for the function to complete.
AWS manages the entire infrastructure layer of AWS Lambda, so
customers do not have visibility into how the system operates. However,
this also means that customers do not need to worry about tasks, such as
updating the underlying machines or managing network contention, as
these responsibilities are handled by AWS.
When a Lambda function sits behind an API, AWS Lambda scales individual functions based on demand, enabling different parts of the API to scale differently according to current usage levels. This enables cost-effective and flexible API set-ups.
Get ready to dive into the world of serverless web application
development on AWS. In this series, we’ll guide you through the
process of creating a dynamic web app that calculates the area of a
rectangle based on user-provided length and width values. We’ll
leverage the power of AWS Amplify for web hosting, AWS Lambda
functions for real-time calculations, DynamoDB for storing and
retrieving results, and API Gateway for seamless communication. By the
end of this journey, you’ll have the skills to build a responsive and
scalable solution that showcases the true potential of serverless
architecture. Let’s embark on this development adventure together!
Prerequisites
Have an AWS account. If you don’t have one, sign up here and enjoy
the benefits of the Free-Tier Account
Access to the project files: Amplify Web-app
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title>Rectangle</title>
<!-- Styling for the client UI -->
<style>
h1 {
color: #FFFFFF;
font-family: system-ui;
margin-left: 20px;
}
body {
background-color: #222629;
}
label {
color: #86C232;
font-family: system-ui;
font-size: 20px;
margin-left: 20px;
margin-top: 20px;
}
button {
background-color: #86C232;
border-color: #86C232;
color: #FFFFFF;
font-family: system-ui;
font-size: 20px;
font-weight: bold;
margin-left: 30px;
margin-top: 20px;
width: 140px;
}
input {
color: #222629;
font-family: system-ui;
font-size: 20px;
margin-left: 10px;
margin-top: 20px;
width: 100px;
}
</style>
<script>
// callAPI function that takes the length and width numbers as parameters
var callAPI = (length,width)=>{
// instantiate a headers object
var myHeaders = new Headers();
// add content type header to object
myHeaders.append("Content-Type", "application/json");
// using built in JSON utility package turn object to string and store in a variable
var raw = JSON.stringify({"length":length,"width":width});
// create a JSON object with parameters for API call and store in a variable
var requestOptions = {
method: 'POST',
headers: myHeaders,
body: raw,
redirect: 'follow'
};
// make API call with parameters and use promises to get response
fetch("YOUR API URL", requestOptions)
.then(response => response.text())
.then(result => alert(JSON.parse(result).body))
.catch(error => console.log('error', error));
}
</script>
</head>
<body>
<h1>AREA OF A RECTANGLE!</h1>
<form>
<label>Length:</label>
<input type="text" id="length">
<label>Width:</label>
<input type="text" id="width">
<!-- set button onClick method to call function we defined passing
input values as parameters -->
<button type="button"
onclick="callAPI(document.getElementById('length').value,document.getElementById('width').value)">CALCULATE</button>
</form>
</body>
</html>
2. The file should look like this when opened in a browser. It provides fields to input the length and width of a rectangle and a 'Calculate' button.
2. Click on ‘GET STARTED’
4. Select the source for your app files. They can be in a remote repository or local. We will use 'Deploy without Git provider' since our files are local. We also need to use a compressed folder with our files. Click on 'Continue'.
5. Give the app a name and an environment name, choose the method as 'Drag and drop', and select the index.zip file (zip all the app files; in this case, it is only the index.html file). Click on 'Save and deploy'.
7. The app opens in the browser. (You might need to refresh the deployment page on Amplify. Maybe it's a bug or something.)
4. Copy the following Lambda function onto your lambda_function.py
file. Please note the DynamoDB name. We will be using this name later
as we create the DB.
# import the AWS SDK (for Python the package name is boto3)
import json
import boto3
from time import gmtime, strftime

# create a DynamoDB resource and reference the table (we create this table later in the guide)
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('Area_table')
# store the current time as a formatted string
now = strftime("%a, %d %b %Y %H:%M:%S +0000", gmtime())

# define the handler function that the Lambda service will use as an entry point
def lambda_handler(event, context):
    # extract the two numbers from the Lambda service's event object
    Area = int(event['length']) * int(event['width'])
    # write result and time to the DynamoDB table using the object we
    # instantiated and save the response in a variable
    response = table.put_item(
        Item={
            'ID': str(Area),
            'LatestGreetingTime': now
        })
    # return the area so the frontend can display it
    return {
        'statusCode': 200,
        'body': json.dumps('The area is ' + str(Area))
    }
5. Click on ‘Deploy’
2. In the list for ‘Choose an API type’, select ‘Build’ for ‘REST API’
3. Choose the ‘REST’ protocol for the API, select ‘New API’ under
‘Create new API’ and give the API a name, then click on ‘Create API’
4. On the page that appears, select ‘Resources’ on the Left Panel, On the
‘Actions’ drop-down, select ‘Create method’. Select ‘POST’ on the drop
down that appears then click on the ✔. Select ‘Lambda Function’ as the
Integration type and type the name of the lambda function in the
‘Lambda Function’ box. Click on ‘Save’
5. On the dialog box that appears to Add Permission to Lambda
Function, click ‘OK’
8. Once all the checks are complete, click on ‘Actions, then, ‘Deploy
API’
9. Give the ‘Stage name’, then click ‘Deploy’
10. The Invoke URL is what you replace “YOUR API URL” with on the
index.html file. Insert the URL, regenerate the index.zip and reupload to
Amplify
2. Click on ‘Create table’
3. Give the table a name, for ‘Partition key’ input ‘ID’. Leave the rest as
default, scroll to the bottom and click on ‘Create table’
4. Select the table name. Under the overview tab, expand ‘Additional
info’, then take note of the ARN
arn:aws:dynamodb:us-east-1:494225556983:table/Area_table
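If you prefer to script this step, a rough Boto3 equivalent of the console steps above might look like the following; the on-demand billing mode is an assumption, since the walkthrough simply keeps the console defaults.

import boto3

dynamodb = boto3.client('dynamodb')

# Create the table with 'ID' as the partition key (string type)
dynamodb.create_table(
    TableName='Area_table',
    KeySchema=[{'AttributeName': 'ID', 'KeyType': 'HASH'}],
    AttributeDefinitions=[{'AttributeName': 'ID', 'AttributeType': 'S'}],
    BillingMode='PAY_PER_REQUEST'
)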
6. A new tab opens in IAM and we can add permissions to the role.
Click on ‘Add permissions’, then ‘Create inline policy’
7. Select the JSON Tab and copy the following policy. Replace “YOUR-
TABLE-ARN” with the ARN of your table that we copied in step 4,
then click ‘Next’ at the bottom
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"dynamodb:PutItem",
"dynamodb:DeleteItem",
"dynamodb:GetItem",
"dynamodb:Scan",
"dynamodb:Query",
"dynamodb:UpdateItem"
],
"Resource": "YOUR-TABLE-ARN"
}
]
}
8. On the 'Review and create' page, give the policy a name, then click on 'Create policy' at the bottom of the page.
Testing
Now that we are done, let’s see what we have. Open the AWS Amplify
domain. It should open our app.
2. Input values for the Length and Width and click on “Calculate”. The
solution should pop up on the screen. (Returned in the browser through
API Gateway)
3. Yaaaay!!!!! And we are successful
Conclusion
In this comprehensive guide, we’ve embarked on an exciting journey
into the realm of serverless web application development on AWS.
We’ve built a dynamic web app that calculates the area of a rectangle
based on user-provided length and width values. Leveraging the power
of AWS Amplify for web hosting, AWS Lambda functions for real-time
calculations, DynamoDB for result storage, and API Gateway for
seamless communication, we’ve demonstrated the incredible potential of
serverless architecture.
Serverless EC2 Instance Scheduler for Company
Working Hours
Scenario:
In some companies, there is no need to run their EC2 instances 24/7; they
require instances to operate during specific time periods, such as
company working hours, from 8:00 AM in the morning to 5:00 PM in the
evening. To address this scenario, I will implement two Lambda
functions responsible for starting and stopping instances. These Lambda
functions will be triggered by two CloudWatch Events in the morning
and evening. This solution is fully serverless.
Steps:
Step 1 :Creating the Instance :
Navigate to the EC2 Console.
Follow the Outlined steps below.
Step 2 :Creating the Policy:
Navigate to the IAM Console.
Click on “Policies” and then Click on “Create policy”
5. Now we have created a policy for starting instances. We also need to
create a policy for stopping the instances. This is because we are going
to create two Lambda functions: one for starting and one for stopping
the instances. Each function will have its own role, and we will attach
these two policies to their respective roles.
6. Now we are going to repeat the same steps to create the stopping policy as well.
7. Everything is the same, except the Actions, because we are going to stop the instance.
8. The Actions are DescribeInstances and StopInstances.
9. Keep your Policy name as “stop-ec2-instance”.
Now again, go to the Lambda console and then test the code.
1. Now we have created the Lambda function for starting the instance.
2. We have to repeat the same steps again to create a Lambda function for stopping the instance; keep your Lambda function name as "Stop-EC2-demo".
3. The only changes we have to make are to replace the default code
with the ‘stop-ec2-instance.py’ code and attach the policy we created
for stopping instances to the role of this Lambda function.
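The 'stop-ec2-instance.py' code itself is not reproduced in this copy; a minimal sketch of what such a function might look like is shown below, with the region and instance ID as placeholders.

import boto3

# Placeholders: adjust the region and instance ID to your environment
REGION = 'us-east-1'
INSTANCE_IDS = ['i-0123456789abcdef0']

ec2 = boto3.client('ec2', region_name=REGION)

def lambda_handler(event, context):
    # Stop the listed instances when the evening schedule fires
    ec2.stop_instances(InstanceIds=INSTANCE_IDS)
    return {'status': 'Stopping instances: ' + ', '.join(INSTANCE_IDS)}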
Now, we are ready to proceed and create schedules for these functions.
Note: Keep your rule name as "start-ec2-rule". I mistakenly named it 'role'; please do not name it 'role.'
We have now created a schedule for starting the instance every day at
8:00 AM.
Next, we need to create a schedule for stopping instances.
To create the schedule for stopping instances, follow the same steps as for the start schedule, with a few changes. Keep your rule name as "stop-ec2-rule".
1. The changes include modifying the scheduled time and selecting the
appropriate scheduling function.
2. We need to change the schedule time to 17:00 because the Lambda function will stop the instance at 17:00 IST (5:00 PM).
3. We have to change the function to Stop-EC2-demo.
Build a REST API with API Gateway.
Store data in a NoSQL database using DynamoDB.
Manage permissions with IAM policies.
Integrate your frontend code with the backend services.
I recommend you follow the tutorial one time and then try it by yourself
the second time. And before we begin, ensure you have an AWS
account. Sign up for a free tier account if you haven’t already.
Now let’s get started!
flex-direction: column; /* Aligning form elements vertically */
align-items: center; /* Centering form elements horizontally */
background-color: #fff; /* Adding a white background to the form */
padding: 20px; /* Adding padding to the form */
border-radius: 8px; /* Adding border radius to the form */
}
label, button {
color: #FF9900;
font-family: Arial, Helvetica, sans-serif;
font-size: 20px;
margin: 10px 0; /* Adding margin between elements */
}
input {
color: #232F3E;
font-family: Arial, Helvetica, sans-serif;
font-size: 20px;
margin: 10px 0; /* Adding margin between elements */
width: 250px; /* Setting input width */
padding: 5px; /* Adding padding to input */
}
button {
background-color: #FF9900; /* Adding background color to button */
color: #fff; /* Changing button text color */
border: none; /* Removing button border */
padding: 10px 20px; /* Adding padding to button */
cursor: pointer; /* Changing cursor to pointer on hover */
}
h1{
color: #202b3c;
font-family: Arial, Helvetica, sans-serif;
}
</style>
<script>
// Define the function to call the API with the provided first name, last name, and phone number
let callAPI = (fname, lname, pnumber)=>{
// Create a new Headers object and set the 'Content-Type' to 'application/json'
let myHeaders = new Headers();
myHeaders.append("Content-Type", "application/json");
    // Build the request body from the input values
    // (the key names here must match what the Lambda function reads from the event)
    let raw = JSON.stringify({"firstName": fname, "lastName": lname, "phoneNumber": pnumber});
    // Create the options object for the API call
    let requestOptions = {
        method: 'POST',
        headers: myHeaders,
        body: raw,
        redirect: 'follow'
    };
    // Use the fetch API to send the request to the specified URL
    fetch("https://uvibtoen42.execute-api.us-east-1.amazonaws.com/web-app-stage", requestOptions) // Replace with your actual API endpoint
        .then(response => response.text()) // Parse the response as text
        .then(result => alert(JSON.parse(result).message)) // Parse the result as JSON and alert the message
        .catch(error => console.log('error', error)); // Log any errors to the console
</script>
</head>
<body>
<form>
<h1>Contact Management System</h1>
<label>First Name :</label>
<input type="text" id="fName">
<label>Last Name :</label>
<input type="text" id="lName">
<label>Phone Number :</label>
<input type="text" id="pNumber">
<button type="button"
onclick="callAPI(document.getElementById('fName').value,
document.getElementById('lName').value,
document.getElementById('pNumber').value)">Submit</button>
<!-- Button to submit user input without reloading the page -->
<!-- When clicked, it calls the callAPI function with values from the
input fields -->
</form>
</body>
</html>
There are multiple ways to upload our code into Amplify console. For
example, I like using Git and Github. To keep this article simple, I will
show you how to do it directly by drag and drop method into Amplify.
To do this — we have to compress our HTML file.
Now, make sure you’re in the closest region to where you live, you can
see the region name at the top right of the page, right next to the account
name. Then let’s go to the AWS Amplify console. It will look something
like this:
When we click “Get Started,” it will take us to the following screen (we
will go with Amplify Hosting on this screen):
You will start a manual deployment. Give your app a name, I’ll call it
“Contact Management System”, and ignore the environment name.
Then, drop the compressed index file and click Save and Deploy.
Amplify will deploy the code, and return a domain URL where we can
access the website.
Step 2: Create an AWS Lambda Serverless function
We will create a serverless function using the AWS Lambda service in
this step. A Lambda function is a serverless function that executes
code in response to events. You don’t need to manage servers or
worry about scaling, making it a cost-effective solution for simple
tasks. To give you some idea, a great example of serverless computing in real life is a vending machine: it sends the request to the cloud and processes the job only when somebody starts using the machine.
Let’s go to the Lambda service inside the AWS console. By the way,
make sure you are creating the function in the same region in which
you deployed the web application code in Amplify.
Time to create a function. Give it a name, I’ll call it “my-web-app-
function”, and for runtime programming language parameters: I’ve
chosen Python 3.12, but feel free to choose a language and version
that you are more comfortable and familiar with.
After our lambda function is created, scroll down and you will see the
following screen:
Now, let’s edit the lambda function. Here is a function that extracts first
and last names from the event JSON input. And then returns a context
dictionary. The body key stores the JSON, which is a greeting string.
After editing, click Deploy to save my-web-app-function, and then click
Test to create an event.
To configure a test event, give the event a name like “MyEventTest”,
modify the Event JSON attributes and save it.
Now click on the big blue test button so we can test the Lambda
function.
The execution results show the test event name, response, function logs, and request ID.
Step 3: Create Rest API with API Gateway
Now let’s go ahead and deploy our Lambda function to the Web
Application. We will use Amazon API Gateway to create a REST API
that will let us make requests from the web browser. API Gateway acts
as a bridge between your backend services (like Lambda functions) and
your frontend application. It allows you to create APIs that expose
functionality to your web app.
REST: Representational State Transfer.
API: Application Programming Interface.
Go to the Amazon API Gateway to create a new REST API.
At the API creation page, we have to give it a name for example “Web
App API”, and choose a protocol type and endpoint type for the REST
API (select Edge-optimized).
Now we have to create a POST method so click on Create method.
In the Create method page, select the method type as POST, the
integration type should be Lambda function, ensure the Region is the
same Region you’ve used to create the lambda function and select the
Lambda function we just created. Finish by clicking on Create method at
the bottom of the page.
Now we need to enable CORS, so select the / (root resource) and then click Enable CORS.
In the CORS settings, just tick the POST box and leave everything else
as default, then click save.
After enabling CORS headers, click on the orange Deploy API button.
A window will pop up, under stage select new stage and give the stage a
name, for example “web-app-stage”, then click deploy.
When you view the stage, there will be a URL named Invoke URL.
Make sure to copy that URL; we will use it to invoke our lambda
function in the final step of this project.
Now we have to fill out some information about our data table, like the
name “contact-management-system-table”, and the partition key is ID.
The rest leave as default. Click Create table.
So far we have used four services: Amplify, Lambda, DynamoDB, and API Gateway. It's essential to understand how they communicate with each other and what kind of information they share.
Now back to our project, we have to define an IAM policy to give
access to our lambda function to write/update the data in the
DynamoDB table.
So go back to the AWS Lambda console, and click on the lambda
function we just created. Then go to the configuration tab, and on the
left menu click on Permissions. Under Execution role, you will see a
Role name.
Then click on JSON, delete what’s on the Policy editor and paste the
following.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"dynamodb:PutItem",
"dynamodb:DeleteItem",
"dynamodb:GetItem",
"dynamodb:Scan",
"dynamodb:Query",
"dynamodb:UpdateItem"
],
"Resource": "YOUR-DB-TABLE-ARN"
}
]
}
Now close this window, and back to the Lambda function, go to the
Code tab and we will update the lambda function python code with the
following.
import json
import boto3
from time import gmtime, strftime

# create a DynamoDB resource and reference the table we created
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('contact-management-system-table')
# store the current time as a human-readable string
now = strftime("%a, %d %b %Y %H:%M:%S +0000", gmtime())

def lambda_handler(event, context):
    # build the contact's name from the event
    # (the key names are assumed to match what the frontend sends)
    name = event['firstName'] + ' ' + event['lastName']
    # write the contact and the timestamp to the DynamoDB table
    table.put_item(
        Item={
            'ID': name,
            'LatestGreetingTime': now
        })
    # return a response in the REST API format
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda, ' + name)
    }
The response is in REST API format. After making the changes, make
sure to deploy the code. After the deployment is concluded, we can Test
the program by clicking on the blue test button.
We can also check the results on the DynamoDB table. When we run the
function it updates the data on our table. So go to AWS DynamoDB,
click on explore items in the left nav bar, click on your table. Here is the
object returned from the lambda function:
Finally, we update the index.html file, compress it again as we did in step 1, and upload it again to AWS Amplify using the console.
Click on the new link you got and let’s test it.
Our data table receives the POST request with the entered data. When the button is clicked, the JavaScript callAPI function sends the data in JSON format to the API, which in turn invokes the Lambda function. You can find the steps in the callAPI function.
You can find the items returned to my data table below:
Conclusion
You have created a simple web application using the AWS cloud
platform. Cloud computing is snowballing and becoming more and more
part of developing new software and technologies.
If you feel up for a challenge, next you could:
Enhance the frontend design
Add user authentication and authorization
Set up monitoring and analytics dashboards
Implement CI/CD pipelines to automate the build, test, and
deployment processes of your web application using services like
AWS CodePipeline, AWS CodeBuild, and AWS CodeDeploy.
This text explains step by step how to automatically start and stop an EC2 instance in AWS using an AWS Lambda function and Amazon EventBridge.
We may not need the servers (EC2 instances) in AWS to run continuously. Running them only when needed and shutting them down when the work is completed prevents waste of resources and saves our budget.
It can be managed manually at irregular intervals or for servers that are not tied to a specific schedule. However, for servers that need to be started and stopped on a certain schedule, we can automate this process using an AWS Lambda function and Amazon EventBridge. I will now describe the process step by step.
AWS Create Function
We go inside the created function. We delete the default code under the
Code tab and paste the following code.
This script takes the instance_id as a parameter. If the instance_id is
incorrect or missing, the warning ‘instance_id parameter is missing’ is
returned. If the instance_id is correct, the process continues. An EC2
client is created and the status of the instance is checked. If the instance
is already running, no action is taken and the warning “EC2 instance is
already running” is returned. If the instance is not running, it is
started.
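The pasted code is not reproduced in this copy; a minimal sketch that matches the behaviour described above might look like this (the exact wording and structure of the original script may differ).

import boto3

def lambda_handler(event, context):
    instance_id = event.get('instance_id')
    if not instance_id:
        # Missing parameter: return the warning instead of proceeding
        return {'warning': 'instance_id parameter is missing'}

    ec2 = boto3.client('ec2')
    # Look up the current state of the instance
    # (an incorrect instance ID will raise an error from this call)
    reservations = ec2.describe_instances(InstanceIds=[instance_id])['Reservations']
    state = reservations[0]['Instances'][0]['State']['Name']

    if state == 'running':
        return {'warning': 'EC2 instance is already running'}

    # Start the instance if it is not running
    ec2.start_instances(InstanceIds=[instance_id])
    return {'status': 'Starting instance ' + instance_id}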
After pasting the code, we load the code by pressing the “Deploy”
button.
AWS Lambda Function Configuration
Under Basic Settings, a description can optionally be written. Then, we specify the
amount of memory and storage required. In this example, the minimum values will
be enough.
The timeout specifies the maximum running time of the function, from its start to its completion. When the timeout is exceeded, the function execution is stopped and the result is returned, so this parameter determines how long the function has to complete its work. Since the server can take a long time to start, we should set this time to at least 5 minutes. The other properties are left at their defaults, and the settings are saved.
After Basic Settings, we go to the “Permission” menu under the
“Configuration” tab and go to the role settings by clicking the role that
will run the function.
Attach Policy
The default role is not authorized to manage EC2 instances. To provide this, we select the "Attach policies" option under the "Add permissions" button. With the addition of the new policy, the authorizations of the role will change.
You can find the Instance ID in the instance section under the EC2
dashboard.
After typing the correct instance id to the instance_Id variable in the test
section, save it and start the test process by clicking the test button.
Our test was successful. Now we make sure by checking the status on
the ec2 dashboard.
In this way, we have seen that our Lambda function works successfully.
Create Schedule
We will trigger the Lambda Function that we have built by creating a
Schedule on Amazon Event Bridge. For this, we go to Amazon
EventBridge service and click on the “Create schedule” button.
Amazon EventBridge
On the page that appears, we enter a "Schedule name" and select the "Recurring schedule" and "Cron-based schedule" options.
For example, if we want our server to start at 09:00 every day, we fill in the blanks as follows, set the "Flexible time window" option to Off, and click Next.
Amazon EventBridge Cron-based Schedule
Then we select the Lambda Function that we have created and write the
instance id as a parameter in the Payload section, just as we wrote in the
test section of the Lambda Function, and click the next button.
Amazon EventBridge Schedule with AWS Lambda
On the following page, select NONE for the “Action after schedule
completion” question and disable the “Retry policy”.
Then select “Create new role for this schedule” in the “Execution role”
question and click next.
Amazon Event Bridge Schedule
AWS SNS — Simple Notification Service
What if you want to send one message to many receivers? One possibility is direct integration: our "create" service needs to send an email, talk to the XYZ service, talk to the shipping service, and maybe talk to another SQS queue. We could integrate all these things directly, but it would be quite difficult.
The other approach is to do something called Pub/Sub, or publish-subscribe.
And so, our create service publishes data to our SNS topic, and our SNS
topic has many subscribers and all these subscribers get that data in real
time as a notification. So, it could be an email notification, text message,
shipping service, SQS queue. You’re basically able to send your
message once to an SNS topic and have many services receive it.
Basic Introduction:
An Amazon SNS topic is a logical access point which acts as a
communication channel. A topic lets you group multiple endpoints (such
as AWS Lambda, Amazon SQS, HTTP/S, or an email address). To
broadcast the messages of a message-producer system (for example, an
e-commerce website) working with multiple other services that require
its messages (for example, checkout and fulfillment systems), you can
create a topic for your producer system. The first and most common
Amazon SNS task is creating a topic.
Each subscriber to the topic will get all the messages, and there is now a feature to filter messages. SNS supports up to 10,000,000 subscriptions per topic and a limit of 100,000 topics.
Subscribers can be:
SQS
HTTP/HTTPS (with delivery retries — how many times).
Lambda
Emails
SMS messages
Mobile Notification
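For reference, the same flow can also be scripted with Boto3. The sketch below creates a topic, subscribes an email address, and publishes a message; the topic name matches the hands-on later in this section, while the email address is a placeholder.

import boto3

sns = boto3.client('sns')

# Create (or reuse) the topic and note its ARN
topic_arn = sns.create_topic(Name='MyTestTopic')['TopicArn']

# Subscribe an email endpoint; the recipient must confirm the subscription
sns.subscribe(TopicArn=topic_arn, Protocol='email', Endpoint='someone@example.com')

# Publish once; every confirmed subscriber receives the message
sns.publish(TopicArn=topic_arn, Subject='Test message', Message='Hello from SNS!')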
In a fanout setup, for example, an SNS topic delivers each order message to two SQS queues: a server instance attached to one of the queues could handle the processing or fulfillment of the order, while the other server instance could be attached to a data warehouse for analysis of all orders received. It's fully decoupled and there is no data loss, and you have the ability to add receivers of the data later.
Fanout
Let's do a hands-on. We will go to the SNS console and give the topic a name; I'm giving "MyTestTopic" and clicking Next Step.
We're not going to apply any custom changes, so keep the defaults and click Create Topic.
Now, as we see in the picture below, there is no subscription in the topic, so we will create one; click Create Subscription.
Next we choose the protocol. There are many protocols available, such as HTTP/HTTPS, Email, Lambda, SQS, SMS, etc. I'll choose Email, give the email ID, and click Create subscription.
Now you will see the status as Confirmed once I confirm the subscription in my email.
This is my console; you can add more subscriptions. I have added 2 subscriptions. Let's publish a message: click the Publish Message button at the top right-hand side.
Now go to the SQS queue and click View/Delete Messages under Actions. (Make sure the queue is subscribed to the topic first: Actions > Subscribe to SNS Topic. Do this before publishing the message from SNS.)
Now you can see your message in the queue as well as in your email.
AWS SQS - Simple Queue Service
What is SQS?
SQS stands for Simple Queue Service.
SQS was the first service available in AWS
Amazon SQS is a web service that gives you access to a message queue that can be used to store messages while they wait for a computer to process them.
Amazon SQS is a distributed queue system that enables web service applications to quickly and reliably queue messages that one component in the application generates to be consumed by another component, where a queue is a temporary repository for messages that are awaiting processing.
With the help of SQS, you can send, store, and receive messages between software components at any volume without losing messages.
Using Amazon SQS, you can separate the components of an application so that they can run independently, easing message management between components.
Any component of a distributed application can store messages in the queue.
Messages can contain up to 256 KB of text in any format such as JSON, XML, etc.
Amazon SQS can be described as a commoditization of the messaging service. Well-known examples of messaging service technologies include IBM WebSphere MQ and Microsoft Message Queuing.
Unlike these technologies, users do not need to maintain their own server. Amazon does it for them and sells the SQS service at a per-use rate.
Authentication:-
Amazon SQS provides authentication procedures to allow for secure
handling of data. Amazon uses its AMAZON WEB SERVER (AWS)
identification to do this, requiring users to have an Aws enabled account
with amazon.com.Aws assigns a pair of related identifiers ,your Aws
access keys,to an Aws enabled Account to perform identification.
Aws uses the access key ID provided in a service request to look up an
account’s secret Amazon .com then calculates a digital signature with
the key .if they match then the user is considered authentic, if not then
the authentication fails and the request is not processed.
Message delivery:
Amazon SQS guarantees at-least-once delivery. Messages are stored on multiple servers for redundancy and to ensure availability. If a message is delivered while a server is not available, it may not be removed from that server's queue and may be resent.
The service supports both unlimited queues and message traffic.
You can get started with Amazon SQS for free: all customers can make 1 million Amazon SQS requests for free each month, and some applications might be able to operate within this limit. The AWS Free Tier includes 1 million requests with Amazon Simple Queue Service.
messages from a queue at the same time .
Conclusion:
SQS is pull-based, not push-based.
Messages can be up to 256 KB in size.
Messages are kept in a queue from 1 minute to 14 days.
The default retention period is 4 days.
It guarantees that your messages will be processed at least once.
So, that's the short overview of Amazon SQS.
Scenario:
A company wants to create a system that can track customer orders and
send notifications when orders have been shipped. They want to use
AWS services to build the system. They have decided to use SQS,
Lambda, and Python for the project.
1) Create a Standard SQS Queue using Python.
2) Create a Lambda function in the console with a Python 3.7 or higher
runtime
3) Modify the Lambda to send a message to the SQS queue. Your
message should contain either the current time or a random number. You
can use the built-in test function for testing.
4) Create an API gateway HTTP API type trigger.
5) Test the trigger to verify the message was sent.
Prerequisites:
AWS CLI and Boto3 installed
AWS account with IAM user access, NOT root user
Basic AWS command line knowledge
Basic Python programming language
Basic knowledge of AWS Interactive Development Environment
(IDE)
provision, manage, or scale servers. It is highly scalable, cost-effective,
and reliable, and integrates seamlessly with other AWS services to
enable building of scalable and highly available distributed systems and
microservices architectures.
#!/usr/bin/env python3.7
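The rest of the queue-creation script is not shown in this copy; a minimal sketch that continues from the shebang above, creates a standard queue with Boto3, and prints its URL might be:

import boto3

sqs = boto3.client('sqs')

# Create a standard (non-FIFO) queue and print its URL so we can save it for later
response = sqs.create_queue(QueueName='Week15Project-sqs-queue')
print(response['QueueUrl'])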
Alright, let's run the code in the AWS Cloud9 IDE. Great! We see the SQS queue URL displayed. Let's copy and save this URL; we will need it later.
OK, let's double-check that our SQS queue was created in the AWS console.
Author from scratch > Function Name: LambdaSQS > Runtime: Python
3.8 > Architecture x86_64 > Execution role: Create a new role with basic
Lambda permissions > Create Function
We see our Lambda function has been created. Let’s update the
permissions for the role and attach to the Lambda function.
After clicking on the LambdaSQS-role link, you will be taken to the IAM page; click Add permissions > Attach policies.
Search SQS > Check AmazonSQSFullAccess > Add permission
Great! Now our Lambda function has the basic Lambda permissions and full SQS access (AmazonSQSFullAccess) assigned to the attached role.
Add destination > Source: Asynchronous invocation (is a feature
provided by AWS Lambda that allows you to invoke a Lambda function
without waiting for the function to complete its execution) > Condition:
On success > Destination type: SQS_queue > Destination:
Week15Project-sqs-queue
send_message — Boto3 1.26.115 documentation (amazonaws.com)
Go back in Lambda, use the Github gist Lambda SQS python script code
and paste it in the lambda function code source, then click Deploy.
Lambda SQS Python Script Code (github.com)
import json
import boto3
from datetime import datetime
import dateutil.tz
sqs = boto3.client('sqs')
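The remainder of the gist is not reproduced here; a sketch of a handler that continues from those imports and sends the current time to the queue might look like the following (the imports are repeated for completeness, the queue URL is a placeholder for the one printed earlier, and Eastern time is an assumption).

import json
import boto3
from datetime import datetime
import dateutil.tz

sqs = boto3.client('sqs')

# Placeholder: the queue URL printed when the queue was created
QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/Week15Project-sqs-queue'

def lambda_handler(event, context):
    # Build a human-readable current time (Eastern time chosen as an example)
    eastern = dateutil.tz.gettz('US/Eastern')
    current_time = datetime.now(tz=eastern).strftime('%I:%M%p')

    # Send the time as the message body to the queue
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody='Current Time: ' + current_time)

    # Return a response so the API Gateway trigger gets something back
    return {
        'statusCode': 200,
        'body': json.dumps('Current Time: ' + current_time)
    }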
We have successfully deployed the code; let's test it by clicking Test.
The Configure test event page will show up after. Select the test event action:
Create new event > Event Name: testSQSLambda > Event sharing settings: Private > Template: apigateway-aws-proxy > Save
Configure test event dialog (Event name: testSQSLambda, Event sharing: Private, Template: apigateway-aws-proxy, with the generated Event JSON)
We can now click Test again and see if the Test Event was successful.
Awesome! We see StatusCode: 200 and the ‘body’ show our message
along with the timestamp. This means our code is running correctly!
Trigger configuration: Select API Gateway > Intent: Create a new API > API type: HTTP API > Security: Open > Click Add
In Configuration tab > Triggers > Click on the newly created API
Gateway link.
Let’s go back to Amazon SQS Queue > Click on Send and receive
message
Click on Poll for message to see if the Lambda function sent a message
to the queue.
Current Time! Awesome we completed our objectives! We are able to
send a message to the SQS queue by triggering our Lambda function
with API gateway!
ADVANCED:
In Amazon SNS console, create subscription tab.
Subscription successfully created
Amazon Simple Email Service (SES)
Amazon SES is a cloud-based email service that you can use to send transactional email, marketing messages, and other notifications from your applications.
Amazon SES Quick Start:
This procedure leads you through the steps to sign up for AWS, verify
your email address, send your first email, think about how you’ll handle
bounces and complaints, and move out of the Amazon Simple Email Service (Amazon SES) sandbox.
recipients from your mailing list.
After you send a few test emails to yourself, use the Amazon SES
mailbox simulator for further testing because emails to the mailbox
simulator do not count towards your sending quota or your bounce
and complaint rates. For more information on the mailbox simulator,
see Testing Email Sending in Amazon SES .
Monitor your sending activity, such as the number of emails that you
have sent and the number that have bounced or received complaints.
For more information, see Monitoring Your Amazon SES Sending
Activity.
Verify entire domains so that you can send email from any email
address in your domain without verifying addresses individually. For
more information, see Verifying Domains in Amazon SES.
Increase the chance that your emails will be delivered to your
recipients’ inboxes instead of junk boxes by authenticating your
emails. For more information, see Authenticating Your Email in
Amazon SES .
Introduction:
Sending emails is a common requirement for many applications, whether
it’s sending notifications, newsletters, or transactional emails. In this
blog post, we’ll explore how to leverage the power of Amazon Simple
Email Service (SES) and AWS Lambda to automate the process of
sending emails when new objects are uploaded to an Amazon S3 bucket.
This powerful combination allows you to build scalable and efficient
email notification systems.
Setting up S3
Here is how you can create a bucket in S3; the only changes are the name and the ACL, and the rest is left as default.
creating bucket in S3
enable ACL
Here our bucket is created. Simple.
Setting up Lambda
Below is how you can create a Lambda function to handle our Python code.
Triggering S3 and integrating Lambda with the S3 object
Here is, step by step, how you can set up an S3 event trigger that fires when something is uploaded to our bucket.
Here you have to connect the Lambda function that you want your bucket to trigger when something is put in it.
Refresh the Lambda page and you'll see this, which shows our integration with S3 is done.
Giving permissions
Our Lambda needs permissions to perform its activities, so here we will navigate to the permissions section of our Lambda and select the two permissions shown below, as our work involves these two services.
These permissions basically mean that our Lambda is permitted to do anything with S3 and SES, i.e., full access.
import json
import boto3

# clients for S3 (to read the uploaded file) and SES (to send the email)
s3_client = boto3.client('s3')
ses_client = boto3.client('ses')

def lambda_handler(event, context):
    # read the uploaded object from the bucket
    a = s3_client.get_object(Bucket='testqejifiui30', Key='email_test')
    a = a['Body'].read()
    # the file is JSON, so parse it to get the recipient list
    object_data = json.loads(a)

    response = ses_client.send_email(
        Source='shubhangorei@gmail.com',  # Replace with the sender's email address
        Destination={
            'ToAddresses': object_data['emails']  # Replace with the recipient's email address
        },
        Message={
            'Subject': {
                'Data': 'this is my subject',  # Replace with the email subject
            },
            'Body': {
                'Text': {
                    'Data': 'body is good',  # Replace with the email body
                }
            }
        }
    )
    return {
        'statusCode': 200,
        'body': json.dumps('mail send')
    }
The Lambda function retrieves the object from S3, reads its content,
converts it to a JSON format, and extracts the email addresses. Then, it
uses the Boto3 library to interact with Amazon SES and sends the email
to the recipients specified in the JSON file.
Make sure to replace the following placeholders in the code with your
own values:
Additionally, ensure that the appropriate IAM role is assigned to the
Lambda function to grant it the necessary permissions for accessing S3
and sending emails using Amazon SES.
Once you have customized the code and deployed the Lambda function,
it will be triggered whenever a new object is uploaded to the specified
S3 bucket, and it will send an email to the recipients specified in the
JSON file.
Please note that the above code assumes that the email addresses are
provided in the JSON file under the key 'emails'. Adjust the code
accordingly if your JSON file has a different structure.
Setting up SES
Here my account is in the test (sandbox) phase, as all new accounts are. If you want to go to the production phase, where you can send mail to any unknown ID, you can request it in the SES interface.
Here I have verified the email address which I'm going to use to send mail; otherwise an error will show up.
It’s important to note that when using Amazon SES in the sandbox
mode, you can only send emails to verified email addresses. This means
that the email addresses specified in the JSON file need to be verified in
the Amazon SES console. If they are not verified, the emails will not be
sent.
this is my file which I'm going to upload in S3, it’s in JSON format
{
"emails": ["vmrreddy913@gmail.com"]
}
You can adjust the send_email parameters to your needs; the code is self-explanatory.
response = ses_client.send_email(
    Source='vmrreddy913@gmail.com',  # Replace with the sender's email address
    Destination={
        'ToAddresses': object_data['emails']  # Replace with the recipient's email address
    },
    Message={
        'Subject': {
            'Data': 'this is my subject',  # Replace with the email subject
        },
        'Body': {
            'Text': {
                'Data': 'body is good',  # Replace with the email body
            }
        }
    }
)
Creating a Highly Available 3-Tier Architecture for
Web Applications in AWS
What is a 3-Tier Architecture?
A three-tier architecture comprises three layers, namely the presentation
tier, the application tier, and the data tier. The presentation tier serves as
the front-end, hosting the user interface, such as the website that users or
clients interact with. The application tier, commonly referred to as the
back-end, processes the data. Finally, the data tier is responsible for data
storage and management.
Using the architecture diagram as a reference, we will need to start by
creating a new VPC with 2 public subnets and 4 private subnets.
Log into the AWS management console and click the Create VPC
button.
We are going to create a VPC with multiple public and private subnets,
availability zones, and more, so let’s choose “VPC and more.”
Name your VPC. I am using the auto-assigned IPV4-CIDR block of
“10.0.0.0/16.” Choose these settings:
no IPV6
default Tenancy
2 Availability Zones
2 public subnets
4 private subnets
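For reference, a rough Boto3 equivalent of these wizard settings is sketched below; the region, CIDR blocks, and availability zones are illustrative assumptions, and the route tables, internet gateway, and NAT gateway that the wizard also creates are omitted for brevity.

import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# Create the VPC with the 10.0.0.0/16 block and enable DNS hostnames
vpc_id = ec2.create_vpc(CidrBlock='10.0.0.0/16')['Vpc']['VpcId']
ec2.modify_vpc_attribute(VpcId=vpc_id, EnableDnsHostnames={'Value': True})

# 2 public and 4 private subnets spread across two availability zones
subnets = [
    ('10.0.0.0/24', 'us-east-1a'), ('10.0.1.0/24', 'us-east-1b'),  # public (web tier)
    ('10.0.2.0/24', 'us-east-1a'), ('10.0.3.0/24', 'us-east-1b'),  # private (app tier)
    ('10.0.4.0/24', 'us-east-1a'), ('10.0.5.0/24', 'us-east-1b'),  # private (data tier)
]
for cidr, az in subnets:
    ec2.create_subnet(VpcId=vpc_id, CidrBlock=cidr, AvailabilityZone=az)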
Next, for Nat gateway choose “in 1 AZ,” none for VPC endpoints, and
leave the Enable DNS hostnames and Enable DNS resolutions boxes
checked.
Before we create the VPC expand the customize AZ’s and customize
subnets CIDR blocks tabs.
Click the “Create VPC” button.
The diagram below highlights the route that your new VPC will take.
You will then be shown a workflow chart that shows your resources
being created.
Next click on the Subnets tab in the VPC console. Select one of the new
subnets that was created, then under the “Actions” tab, expand the down
arrow and select “Edit subnet settings.”
Check “Enable auto-assign IPv4 address” and “Save.” We need to do
this for all 6 new subnets that were created.
Update Web Tier Public Route Table:
We need to navigate to the route table tag under the VPC dashboard to
make sure that the Route table that was automatically created is
associated with the correct subnets. Below I highlighted in green the
correct public subnets that are already associated. If you have none, or you
only created a stand-alone VPC, you will have to click on “edit subnet
associations” and select the ones needed.
Part 2: Creating a Web Server Tier
Next, we will create our first tier that represents our front-end user
interface (web interface). We will create an auto scaling group of EC2
instances that will host a custom webpage for us. Start by heading to the EC2 console and launching a new instance.
Select the key pair that you will use, and make sure to select your new
VPC and the correct subnet. Auto-assign IP should be enabled.
Create a new security group. For inbound security group rules, add rules
for SSH, HTTP, and HTTPS from anywhere. This is not standard or safe practice, but for this demonstration it is fine.
Leave the configuration storage settings alone. In the Advanced details,
head all the way to the bottom. We are going to use a script to launch an
Apache web server when the instance starts.
Launch your new instance!
Once your instance is up and running, copy the public IP address and
paste it into a web browser.
Next, we will create a launch template. Launch templates outline what resources are going to be allocated when an auto scaling group launches on-demand instances. Under the EC2 dashboard, select Launch Templates, and click the "Create launch template" button.
Use our recently launched AMI t2.micro instance type and select your
key pair.
For the firewall, use “Select existing security group,” and make sure the
security group (SG) that we created for the web tier is selected. Under
Advanced network configuration, enable Auto-assign public IP.
We are going to leave the storage options alone for now. Click on the
Advanced details tab, scroll down, and enter the same script as we did
earlier for our EC2 instance.
Click the “Create launch template” button.
Under Network, make sure to select the VPC that you created earlier,
then also under availability zones and subnets select the public subnets
that were created; yours may differ.
Click the Next button.
Now we are given the option to allocate a load balancer for our ASG. A
load balancer will distribute the load from incoming traffic across
multiple servers. This helps with availability and performance.
Select “Attach to a new load balancer” and “Application load balancer,”
name your load balancer, then select “Internet facing” as this is for our
web tier.
Your VPC and the two public subnets should already be selected.
Under the “Listeners and routing” section, select “Create a target
group,” which should be on port 80 for HTTP traffic.
Leave No VPC Lattice service checked.
Click to turn on Elastic Load Balancing health checks.
Next, we configure the group size and scaling policy for our ASG. For
reliability and performance, enter 2 for desired capacity and minimum
capacity. For maximum capacity, enter 3.
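For reference, the same capacity settings can be expressed with Boto3; in the sketch below the ASG name, launch template name, and subnet IDs are placeholders, not the names used in the console screenshots.

import boto3

autoscaling = boto3.client('autoscaling')

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName='web-tier-asg',
    LaunchTemplate={'LaunchTemplateName': 'web-tier-template', 'Version': '$Latest'},
    MinSize=2,          # minimum capacity
    DesiredCapacity=2,  # desired capacity
    MaxSize=3,          # maximum capacity
    # Comma-separated list of the two public subnet IDs
    VPCZoneIdentifier='subnet-aaaa1111,subnet-bbbb2222'
)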
On the next screen we could add notifications through SNS topics, but I
skipped this for now. Click the Next button.
The next screen allows you to add tags which can help to search, filter,
and track your ASG’s across AWS. For now, just click the Next button.
Review your setting on the next page, and at the bottom click the
“Create auto scaling group” button.
You should see a lovely green banner declaring your success here. After
the ASG is finished updating capacity, navigate to your EC2 dashboard
to confirm that your new instance has been created.
Note: In my previous examples my names were not accepted for the auto scaling group; make sure you follow the naming conventions.
As you can see, the ASG is doing its job.
Name your new template, and select the Guidance tab again.
Select browse more AMIs, then Amazon Linux 2 for your AMI. Select t2.micro for the instance type, and also select your key pair.
Under the network setting, we want to limit access to the application tier
for security purposes. You don't want any public access to the
application layer or the data tier after it. We will create a new security
group. Select our VPC; I realize now that for this part I could have
chosen a better name!
Name your new SG and select the VPC that we created when we started.
We will create 3 security group rules starting with ssh; use My IP as the
source. For Security Group 2, use Custom TCP; the source here is the
security group from our web tier (tier-1). For the third group, select All
ICMP-IPv4, and set the Source type as anywhere. This will allow us to
ping our application tier from the public internet to test if traffic is
routed properly.
This is my updated screenshot
We will again leave the storage volumes alone, and head to the bottom
of Advanced details to enter our script. And then click next.
I went back to the security group and updated my inbound rules.
Once this was fixed, I went back and recreated the application layer
template using the ApplicationTierSG1 that I just altered. I just double
checked the security group rules and used the same settings for
everything else above.
Application Tier Auto Scaling Group:
Ok, now we are ready to create our auto scaling group for the
application layer. Under the EC2 dashboard go to create an auto scaling
group.
Name your new ASG and select the proper launch template, then click
the Next button.
Choose the correct VPC and 2 private subnets, then click the Next
button.
We are again given the option to attach a load balancer, and we want to
do this. Select an application load balancer, name it, and set it as an
internal load balancer. Double check that the VPC and subnets are
correct. Mine are.
Under “Listeners and routing” create a new target group, and use port 80
once again.
Below I have again chosen to turn on health checks and enable group
metrics within CloudWatch.
On the next screen set your desired capacity, minimum capacity, and
maximum capacity again.
Click the Next button, add notifications if you want or need them, then add tags. Review your new ASG settings and create it.
As you can see below, my new application layer ASG is updating the
capacity.
Once the new EC2 instances are created and running, we will try to ssh
into them. If we set it up correctly, we should not be able to.
When I tried to SSH into the running application-tier EC2 instance, the connection timed out; this is exactly what we want here.
After waiting for my new ASG to update its capacity, it only started 1 instance. I must have clicked back and reset my capacity selections to the default of 1 instance. Below I updated the capacity that I wanted.
I apologize for the changes in settings here. I completed this project over a long period of time and on multiple computers.
Add another private subnet to your VPC.
Create a DB Subnet Group:
We will begin by creating a subnet group. Navigate to the RDS console and, on the left side menu, click "Subnet groups" and then the orange "Create DB subnet group" button.
For the next part we need to know the availability zones for the last two
subnets that were automatically created. Head back to the VPC console.
Under Subnets, find the last 2 subnets that you have; make sure not to
select the private subnets that you already used in tier 2.
Back at the RDS console, select the availability zones that you are going
to use.
Next up we need to select the proper subnets; the drop-down menu only lists the subnet ID. Below is another screenshot of my subnets; the second column is the ID.
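If you prefer the command line, the same subnet group can be created with the AWS CLI; the group name and subnet IDs below are placeholders for your own values.
aws rds create-db-subnet-group \
    --db-subnet-group-name <db-tier-subnet-group> \
    --db-subnet-group-description "Private subnets for the database tier" \
    --subnet-ids <subnet-id-1> <subnet-id-2>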
Next you can choose a multi-AZ deployment with 3 database instances: one primary instance with two read-only standby instances. This makes for a very reliable system, but we do not need it at this time.
There are also availability and durability options; however, with the
free-tier, none are available. We do not need them either.
Under Settings, name your DB and create a master username and password. This username and password should be different from your AWS account login, as they are specific to the database you are creating.
You will need your username and password. Make sure to store them in
a secure place!
Under Instance configuration, the burstable classes option is pre-selected because it's the only one available for the free tier. I left my instance type as a db.t2.micro. You can add storage as needed; I left mine on the default settings.
We are going to set up our network manually so choose not to connect to
an EC2 resource. Select the proper VPC; the subnet group that you
created earlier should be listed as default. Select Create new VPC
security group (firewall).
In Database authentication I left the default checked.
Update the Database Tier Security Group:
Navigate over to the VPC console, select Security groups on the left side menu, and find the database tier security group you just created. You need to edit its inbound rules; by default the database SG has an inbound rule allowing MySQL/Aurora traffic on port 3306 from your IP address. Delete this rule.
Create a new rule for MySQL/Aurora on port 3306: for the Source, select Custom and add the security group for your application layer (the tier-2 SG).
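The same change from the AWS CLI would look roughly like this; the security-group IDs and your IP are placeholders.
# Remove the default rule that allows 3306 from your IP
aws ec2 revoke-security-group-ingress --group-id <db-tier-sg-id> \
    --protocol tcp --port 3306 --cidr <your-ip>/32
# Allow 3306 only from the application-tier security group
aws ec2 authorize-security-group-ingress --group-id <db-tier-sg-id> \
    --protocol tcp --port 3306 --source-group <app-tier-sg-id>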
Our three-tier architecture is done! We have already tested our web and application layers, but we are going to go a step further here.
Part 5: Testing
We can't SSH directly to the database tier, but we can use SSH agent forwarding to reach it through the web and application tiers.
You need to add your key pair file to your SSH agent. To do this, first make sure you are on your local host (use the exit command to get out of any EC2 instance you're connected to), then run the following command. The -K flag stores the key in the macOS keychain; on Linux, a plain ssh-add <keypair.pem> works just as well.
ssh-add -K <keypair.pem>
Now that your key pair is loaded, the SSH agent will scan through the keys it holds and find the matching one when you connect.
Now reconnect to the web tier EC2; however, this time use -A to specify
you want to use the SSH agent.
ssh -A ec2-user@<public-ip-address>
Once you are logged back into your tier-1 EC2, use the following
command to check if the SSH agent forwarded the private key.
ssh-add -l
Our key pair has been forwarded to our public instance. Copy the private IP address of your tier-2 application-layer instance and use it in the next command.
ssh -A ec2-user@<private-ip-address>
We have now SSH'ed from the public tier-1 web instance into the private tier-2 application instance!
Testing Connectivity to the Database Tier
There are a few ways you can connect to your RDS database from your application tier. One way is to install a MySQL-compatible client on your private tier-2 instance to access your database. We are going to use this method.
While logged into your application tier instance, use this command:
sudo dnf install mariadb105-server
This command installs the MariaDB package, whose client is compatible with MySQL. Once installed, you should be able to use the following command to log into your RDS MySQL database. You will need your RDS endpoint, username, and password. To find your RDS database endpoint, navigate to the database you created and find the endpoint under Connectivity & security.
mysql -h <rds-database-endpoint> -P 3306 -u <username> -p
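If the login succeeds, a quick sanity check is to list the databases; you can even do it in a single non-interactive command, reusing the same placeholders as above.
mysql -h <rds-database-endpoint> -P 3306 -u <username> -p -e "SHOW DATABASES;"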
How to Deploy a Static Website with AWS Using S3,
Route 53, CloudFront, and ACM
Launching your own website is exciting, but figuring out how to get it
online can be a bit overwhelming, especially if you’re new to the
process.
If you've created a website using, for example, HTML, CSS, and JavaScript and want to get it up and running on the internet, AWS (Amazon Web Services) can help.
In this beginner-friendly guide, we’ll walk through the steps together.
We’ll use AWS services like S3 (for storage), Route 53 (for managing
your domain name), CloudFront (for delivering your content quickly),
and ACM (for keeping your site secure). Additionally, we’ll explore a
straightforward process to purchase a domain for your website,
making it incredibly easy to have your very own web address.
By the end, you’ll have a clear understanding of how to host your
website on AWS and make it accessible to anyone on the web!
Before we delve into the process, let’s understand the potential costs
involved in using these services:
S3: If you’re within the AWS Free Tier, hosting your website files on
S3 should incur no cost. Outside of the Free Tier, expenses are
typically minimal, often just a few cents.
Taking all this into account, let’s dive into deploying our
static website using AWS!
Buy a Domain
Before diving into the intricacies of using AWS services to host your
website, let’s first explore the fundamental step of acquiring a domain.
Your domain is like your home’s address on the internet; it’s how people
will find and remember your website. This step is crucial as it sets the
foundation for your online presence, giving your website its unique
identity in the vast landscape of the web.
There are various platforms where you can purchase a domain, such as:
Hostinger
Namecheap
GoDaddy
Each of them offers a simple process for acquiring your unique web address. For our example, we'll go through Hostinger, but the process remains fairly consistent across these platforms. Once on their website, search for the domain you want and follow the checkout steps to complete the purchase.
AWS — S3
2. Search for S3
Navigate to the “Services” section and in the search bar type “S3” and
press enter. Amazon S3 will appear as the first option in the list of
services.
3. Create an S3 Bucket
AWS S3 is perfect for storing files affordably. When your website
consists of client-side code only, you can set up S3 for hosting a static
website easily.
In my case, the bucket name already exists because I created it before, but you shouldn't have any issues.
As you scroll down, uncheck the option "Block all public access." Typically this is not advised, as indicated by the warning prompt you'll receive upon disabling it. However, since you're crafting a website that you want to be accessible worldwide, turning this off is suitable for your purpose.
Proceed by utilising the default settings for the remaining bucket
configurations.
Then proceed to click on “Create bucket”
By following these steps, you’ll create an S3 bucket that is accessible to
the public, allowing you to host and share your website’s content
effectively.
4. Upload Your Website Files
Open your bucket and click the "Upload" button to open the upload interface.
Choose whether you need to upload a folder or individual files. If you
have folders containing your website content, click on “Add
folder.” Otherwise, if you only have specific files, click on “Add
files.”
Browse and select all the necessary files and folders from your local
machine that make up your website, including files like index.html,
CSS files, and images.
After selecting all the relevant files, click on the “Upload” button
located at the bottom right corner.
5. Enable Web Hosting
In the bucket's Properties tab, scroll to "Static website hosting," click Edit, choose Enable, and set your index document (for example, index.html). After entering the necessary information, scroll down to the end of the page and click "Save changes" located at the bottom right corner.
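If you prefer the command line, the same static-website setting can be applied with the AWS CLI; the bucket name and document names below are placeholders, and I'm assuming your entry point is index.html.
aws s3 website s3://your-bucket-name/ --index-document index.html --error-document error.html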
In the bucket's Permissions tab, under "Bucket policy," click Edit and paste the following policy, replacing your-bucket-name with the name of your bucket:
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "PublicReadObject",
    "Effect": "Allow",
    "Principal": "*",
    "Action": ["s3:GetObject"],
    "Resource": "arn:aws:s3:::your-bucket-name/*"
  }]
}
Remember not to remove the “/*” after the bucket name in the
JSON policy. This ensures that all objects within the bucket are
selected to be made public.
Finally, click on the “Save changes” option at the end to apply the
bucket policy.
It is important to create a bucket policy granting public access to your files; otherwise, accessing the content of your website would not be possible. Instead, you would encounter a "403 Forbidden" message.
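If you ever want to script this step, you can save the policy above to a file and apply it with the AWS CLI; the bucket name and file name here are placeholders.
aws s3api put-bucket-policy --bucket your-bucket-name --policy file://policy.json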
Congratulations on successfully hosting your website in S3 with public access! However, to elevate its professionalism, it would be even more impressive if it were linked to a custom domain. We've previously purchased a domain, and now we'll utilise it to enhance the final result.
Navigate to the “Services” section and in the search bar type “Route
53” and press enter. Route 53 will appear as the first option in the list
of services.
Click on “Hosted zones” and then on “Create hosted zone”.
Domain name: Enter the domain you purchased from the third-party
provider, select “Public hosted zone”, then click “Create hosted
zone”.
After creating your public hosted zone, you will find 4 listed name
servers.
Take a note of these for later!
3. Create a Record to Point to the S3 Bucket
Having set up a public hosted zone, the next step is creating a record
dictating how traffic should be directed when visitors access your
domain name. For that follow these steps:
Enter your “Hosted zone” and click on “Create record”
Record type: A. Turn on the Alias toggle, choose "Alias to S3 website endpoint," and select the region where your bucket lives; your bucket's endpoint should then appear. If nothing appears here, it might be because your bucket is not named the same as your domain. In that case, you'll have to recreate the bucket with your domain's exact name.
And finally:
Routing policy: Simple routing.
Evaluate target health: YES.
Click on “Create records”.
Your changes may take up to 60 seconds to become active. Once the Status switches from PENDING to INSYNC, you're all set to test out your modifications.
Let's run a test! If everything went well, entering your domain name into a browser (like anaquirosa.com) should lead Route 53 to direct you to the S3 website. This means you should see your website!!
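If you want to double-check from the terminal as well, the following commands (assuming dig and curl are installed, and substituting your own domain) confirm that DNS resolves and that the S3 website answers over HTTP.
dig +short yourdomain.com
curl -I http://yourdomain.com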
"Not secure" alert
You'll notice your browser flags the site with a "Not secure" alert because it is served over plain HTTP. To fix that, we'll request a free TLS/SSL certificate with AWS Certificate Manager (ACM).
In ACM, request a public certificate for your domain, making sure you are in the us-east-1 (N. Virginia) region for this section. Creating a certificate in any other region will render it unusable with CloudFront, where you will ultimately need it.
Before the certificate can be issued, Amazon requires confirmation of your domain ownership and your ability to modify DNS settings (within Route 53). Since your hosted zone lives in Route 53, the "Create records in Route 53" button on the certificate page will add the validation record for you.
Congrats! You have a TLS/SSL certificate
Create a CloudFront Distribution
Your website’s files are sitting in S3, but here’s the catch: certificates
don’t work directly with S3 buckets. What you’ll need is
a CloudFront distribution to link to that S3 bucket. Then, you apply
the certificate to this CloudFront setup.
CloudFront is Amazon’s content delivery network (CDN) that speeds up
content delivery worldwide by storing it closer to users. It’s fantastic for
videos and images, making them load faster. If your website is basic or
has small files, you might not notice a huge difference in speed. But
using CloudFront is essential to apply the TLS/SSL certificate you made
earlier.
Navigate to CloudFront.
Click "Create distribution." For the Origin domain, select your S3 bucket; AWS will recommend using the S3 website endpoint instead. Click "Use website endpoint" and AWS will update the endpoint for you.
Scroll down to the "Default cache behaviour" section.
Viewer protocol policy: Redirect HTTP to HTTPS
Scroll down to "Web Application Firewall (WAF)"; enabling it is optional and adds cost, so you can leave it off for this project.
Under Settings, add your domain as an Alternate domain name (CNAME) and, for Custom SSL certificate, select the certificate you requested in ACM. Then click "Create distribution" and wait for the distribution to finish deploying.
Copy the distribution's domain name, open a new browser tab, and paste that address into the navigation bar.
If everything went well, you should now notice the padlock icon in your browser (or something similar, depending on the browser), signaling that you're securely connected via the certificate configured in Certificate Manager.
Finally, head back to Route 53 and edit the A record for your domain (the one that currently points to the S3 website endpoint), keeping the Alias toggle on:
Route traffic to: Alias to CloudFront distribution.
Choose Region: This option is selected for you and grayed out.
Choose your distribution (it should automatically populate in the third dropdown).
Click "Save".
You did it!!
If everything worked, you should be able to navigate to your domain
name and have it load your website on a secure connection!!
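A final command-line check, with your own domain substituted, should show CloudFront's HTTP-to-HTTPS redirect and a successful response over the secure connection.
curl -I http://yourdomain.com    # expect a 301 redirect to HTTPS
curl -I https://yourdomain.com   # expect a 200 response over HTTPS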