
Why You Need This Book

Reading this book about AWS (Amazon Web Services) cloud services can be beneficial for several reasons:

Understanding AWS Services: AWS offers a vast array of services for cloud computing, ranging from storage and computing to machine learning and artificial intelligence. Reading a book on AWS can help you understand the various services available, their features, and how they can be used to meet different business needs.

Learning Best Practices: Books often cover best practices for using AWS services efficiently, securely, and cost-effectively. This knowledge can help you design and implement solutions that are robust and scalable.

Preparing for Certification: If you're pursuing AWS certifications, such as AWS Certified Solutions Architect or AWS Certified Developer, studying from books can provide comprehensive coverage of the topics you need to know for the exams.

Staying Updated: Cloud computing is a rapidly evolving field, and AWS frequently introduces new services and updates existing ones. Books authored by AWS experts often provide insights into the latest trends, updates, and best practices in the AWS ecosystem.

Deep Dives and Case Studies: Some books offer in-depth explanations
of specific AWS services or case studies illustrating real-world
implementations.
These can help deepen your understanding of how to use AWS effectively in
practical scenarios.

Reference Material: Books can serve as valuable reference material that you can consult whenever you encounter challenges or need to refresh your knowledge about specific AWS services or concepts.

Overall, reading AWS cloud books can be a valuable investment in your professional development if you work with or plan to work with AWS cloud services.

About the Author:

Madhukar Reddy Venna has earned a master's degree from California State University. He is a highly acclaimed trainer, author and solutions provider. He regularly trains students in-house at Vcube Software Solutions. He has more than 12 years of experience in industry and training, during which he has successfully delivered more than 200 batches in DevOps, DevSecOps, MLOps and AIOps.

Though we have taken the utmost effort to present this book error-free, it may still contain some errors or mistakes. Students are encouraged to bring any mistakes or errors in this document to our notice.

A few details about the author (photo captions):

 Author work authorization
 Author Social Security Number
 Author with JNTU Kakinada Vice Chancellor (Dr. G.V.R. Prasad Raju)
 Author with Andhra University Vice Chancellor (Prof. P.V.G.D. Prasad Reddy)
 Author with Acharya Nagarjuna University Vice Chancellor (Prof. Rajasekhar P., M.A., M.Phil.)
INDEX
S NO NAME PAGE NO
1 What Is AWS Cloud 1
2 Why Linux Is Needed 11
3 Linux Commands 14
4 Regions & Availability Zones 24
5 Secure Remote Administration 27
6 Elastic Compute Cloud 38
7 Virtual Private Cloud 51
8 Virtual Private Cloud Peering 64
9 Transit Gateway 84
10 Virtual Private Cloud Endpoint 104
11 Security & Network Access Control List 117
12 Elastic Block Storage 125
13 Elastic File System 133
14 Load Balancer 148
15 Auto Scaling 165
16 Web Application Firewall 180
17 Relational Database 220
18 DynamoDB 247
19 Simple Storage Service 269
20 Identity Access Management 285
21 Amazon Machine Image 291
22 Snapshot 301
23 Elastic Beanstalk 308
24 CloudWatch 335
25 CloudTrail 358
26 Route 53 372
27 CloudFront 380
28 AWS Certificate Manager 383
29 Amplify 386
30 Lambda 394
31 Simple Notification Service 462
32 Simple Queue Service 468
33 Simple Email Service 489
34 Project – 1 504
35 Project – 2 551
What is AWS and cloud technology?

What I've found is that cloud is the future for almost every business, yet most young people aren't very aware of it.
Cloud computing is the on-demand delivery of IT resources over the
Internet with pay-as-you-go pricing. Instead of buying, owning, and
maintaining physical data centers and servers, you can access
technology services, such as computing power, storage, and databases,
on an as-needed basis from a cloud provider like Amazon Web Services
(AWS).

This article will talk about several sub-topics related to cloud technology and Amazon Web Services (AWS), as follows:

 What is cloud?
 History of cloud computing.
 Which companies provide cloud services, and which are the top players in the market.
 What is AWS?
 What can you do with AWS?
 Criteria and ways to get a cloud job.
 Bonus: Best courses for learning cloud in AWS.

What is cloud?

Cloud, despite what the name might suggest, is not really a cloud or something up in the air. It's simply somebody else's computer or, more precisely, a server. Most of us don't realise that we use the cloud on a daily basis without actually knowing what it is or how we are even using it.

Let's say you want to set up your business online by creating websites and storing the information provided by all your users. In a typical early-90s scenario this would require several rooms for servers and data storage, depending on the size of your business. You would also require managers, administrators, engineers and other professionals for managing and administering the servers.

Okay, so on a smaller scale, to create a blogging platform as a secondary source of income you would need something like a Core i7 processor and proper storage to keep your data and other information. It's a lot of work, because along with creating new content you would also need to manage your little baby server and organize the data on a daily basis. And keeping your server up 24x7? It's still a lot of work, isn't it?

History of cloud computing:

The story of Cloud computing so far…


 1999: Salesforce.com launches CRM as a service
 2002: Amazon launches AWS for developers
 2006: AWS launches pay-per-use commercial cloud with S3 (storage)
and EC2 (compute) services
 2008: Google launches App Engine offering developers a scalable
application environment
 2010: Microsoft launches Azure IaaS (Beta version)

 2011: Apple launches iCloud and Microsoft buys Skype
 2015: Global Cloud industry exceeds $100 Billion revenues
 2016: AWS exceeds $12 Billion in IaaS/PaaS revenues and now
offers 70 distinct Cloud services
 2017: Microsoft passes $10 Billion in SaaS revenue. Salesforce is #2
SaaS player with $8.5 Billion revenues.
 2018: Global Cloud IT infrastructure spend exceeds traditional IT
 2019: SaaS market exceeds $110 Billion revenues.
 2020: Total Cloud services revenues exceed $250 Billion.

Which companies provide cloud services, and which are the top players in the market?

Now, I hope you've got the idea of what most web hosting companies offer you. They offer management of your platform, server, storage and professional security. So, you only have to focus on serving great content!

Cloud computing makes it easier, cheaper and faster to run state-of-the-art IT architectures in any type of company, large or small. Businesses
benefit from cheaper, faster, more scalable IT resources in the Cloud and
users get a better experience. A virtuous circle exists between software
users and software developers in SaaS Clouds: developers can improve
the software faster because they can see usage and performance data in
real time. Meanwhile, users get the latest software upgrades as soon as
they are released, without having to pay more or having to fiddle with
clumsy downloads.

According to a report by Canalys shown in the chart below, in Q4 2020, AWS cloud grew by 28% and Azure, Google, and Alibaba clouds grew
50%, 58%, and 54% respectively. As of this report, AWS has 31% of
total cloud market share followed by Azure, Google, and Alibaba that
have 20%, 7%, and 6% respectively.

Here is a list of my top 10 cloud service providers:
1. Amazon Web Services (AWS)
2. Microsoft Azure
3. Google Cloud
4. Alibaba Cloud
5. IBM Cloud
6. Oracle
7. Salesforce
8. SAP
9. Rackspace Cloud
10. VMWare

The following table summarizes the top 3 key players and their offerings
in the cloud computing world:

What is AWS?

Amazon Web Services (AWS) is an Amazon company that was launched in the year 2002. AWS is the most popular cloud service provider in the world.

Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform, offering over 165 fully-featured services from data centers globally. This service is used by millions of customers.

AWS’s revenue in the year 2018 was $25.6 billion with a profit of $7.2
billion. The revenue is expected to grow to $33 billion in 2019.

AWS global availability:


AWS offers the largest footprint in the market. No other cloud provider
offers as many regions or Availability Zones (AZs). This includes 78
AZs within 25 geographic regions around the world. Furthermore, AWS
has announced plans for 9 more AZs and three more regions in Cape
Town, Jakarta, and Milan.

In simple words, AWS allows you to do the following things:

 Run web and application servers in the cloud to host dynamic websites.
 Securely store all your files on the cloud so you can access them from anywhere.
 Use managed databases like MySQL, PostgreSQL, Oracle or SQL Server to store information.
 Deliver static and dynamic files quickly around the world using a Content Delivery Network (CDN).
 Send bulk email to your customers.

Compute:

 EC2 (Elastic Compute Cloud) — These are just the virtual machines in the cloud on which you have OS-level control. You can run whatever you want in them.

 LightSail — If you don't have any prior experience with AWS, this is for you. It automatically deploys and manages the compute, storage and networking capabilities required to run your applications.

 ECS (Elastic Container Service) — It is a highly scalable container service that allows you to run Docker containers in the cloud.

 EKS (Elastic Container Service for Kubernetes) — Allows you to
use Kubernetes on AWS without installing and managing your own
Kubernetes control plane. It is a relatively new service.

 Lambda — AWS's serverless technology that allows you to run functions in the cloud. It's a huge cost saver as you pay only when your functions execute.

 Batch — It enables you to easily and efficiently run batch computing workloads of any scale on AWS using Amazon EC2 and EC2 Spot Fleet.

 Elastic Beanstalk — Allows automated deployment and provisioning of resources like a highly scalable production website.

Storage:
 S3 (Simple Storage Service) — Storage service of AWS in which
we can store objects like files, folders, images, documents, songs, etc.
It cannot be used to install software, games or Operating System.

 EFS (Elastic File System) — Provides file storage for use with your
EC2 instances. It uses NFSv4 protocol and can be used concurrently
by thousands of instances.

 Glacier — It is an extremely low-cost archival service to store files for a long time, like a few years or even decades.

 Storage Gateway — It is a virtual machine that you install on your on-premise servers. Your on-premise data can be backed up to AWS, providing more durability.

Databases:
 RDS (Relational Database Service) — Allows you to run relational databases like MySQL, MariaDB, PostgreSQL, Oracle or SQL Server. These databases are fully managed by AWS, which handles tasks like installing patches.

 DynamoDB — It is a highly scalable, high-performance NoSQL database. It provides single-digit millisecond latency at any scale.

 ElastiCache — It is a way of caching data inside the cloud. It can be used to take load off your database by caching the most frequent queries.

 Neptune — It has been launched recently. It is a fast, reliable and scalable graph database service.

 Redshift — It is AWS's data warehousing solution that can be used to run complex OLAP queries.

Demand for AWS Jobs Outstrips Available Professionals:


In the public cloud job market, there are between six and 12 times more
job postings available than there are job seekers, and 60 percent of these
job postings are AWS-related. Employers in the United States, for
example, say that it’s quite a challenge finding professionals with cloud
computing skills in general. This imbalance will continue to be the case
for a long time to come.

Whether you’re already an experienced IT professional seeking to take
your career in a new direction or new to cloud computing (or IT, for that
matter), there are several reasons why you should consider AWS. And since AWS is the leading public cloud computing service, widely adopted by organizations both large and small, it follows that learning AWS has become a necessity for IT professionals who want to secure their future careers.

How to Learn AWS:


Now that you have some solid reasons why an AWS career can be
beneficial, the next step is to find out how you can go about acquiring
the necessary knowledge, skills, and certifications for AWS.

There is an Abundance of AWS Learning Resources:


Choose Wisely
Since AWS certifications were first introduced in 2013, a lot of resources have been made available, ranging from books and manuals to courses, AWS practice exams, and AWS communities. These resources are all useful for those seeking to start and grow their career in AWS. However, choosing the right learning resource is critical: there is a lot to go through, and some courses are simply better than others.

Bonus: AWS Certifications


AWS certifications are divided into four major categories —
Foundational, Associate, Professional, and Specialty.

Choose a Career Path that Suits You Best:
There are a lot of AWS career paths from which you can choose. The
career path you want can be based on either:

 The Role: Such as cloud practitioner, operations, architect, and developer
 The Solution: Such as storage, machine learning, and AWS media services

You could also choose a specialty area on which to focus your attention
and validate advanced skills in specific technical domains.

Why Linux is needed for Cloud and DevOps
professionals

Linux is widely used in the cloud and DevOps professions for several reasons:
 Compatibility: Linux is the dominant operating system in the cloud
and DevOps ecosystem. Most cloud providers, such as Amazon Web
Services (AWS), Google Cloud Platform (GCP), and Microsoft
Azure, offer Linux-based virtual machines and container services.
Having Linux proficiency allows professionals to work seamlessly in
these environments.

 Open-source tools and technologies: Linux is the foundation for many open-source tools and technologies commonly used in cloud and DevOps, such as Docker, Kubernetes, Ansible, Terraform, and Git. Familiarity with Linux enables professionals to effectively leverage these tools and contribute to the open-source community.

 Command-line proficiency: Linux provides a powerful command-line interface (CLI) that allows professionals to efficiently manage and automate tasks. Many cloud and DevOps workflows involve CLI-based operations, including provisioning and configuring cloud resources, deploying applications, and scripting automation processes.

 Scripting and automation capabilities: Linux offers a rich set of scripting and automation capabilities through shells like Bash and
programming languages like Python. DevOps professionals often
write scripts and automation code to orchestrate infrastructure,
deployment pipelines, and various operational tasks.

 Security and performance: Linux is known for its robust security features and performance optimizations. Understanding Linux
security mechanisms, file permissions, and network configurations is
crucial for maintaining secure and high-performing cloud and
DevOps environments.

 Infrastructure-as-Code (IaC): Infrastructure-as-Code is a key principle in cloud and DevOps. Linux provides the flexibility and
control necessary to define infrastructure configurations as code using
tools like Terraform and Ansible. These configurations can be
versioned, tested, and deployed, resulting in consistent and
reproducible infrastructure.

 Troubleshooting and debugging: In complex cloud and DevOps environments, issues can arise at various levels, from applications to
infrastructure. Linux expertise allows professionals to effectively
troubleshoot problems, diagnose performance bottlenecks, and debug
issues at the system and application levels.

 By having a strong foundation in Linux, cloud and DevOps
professionals can navigate the ecosystem, work with essential tools,
automate tasks efficiently, and contribute effectively to the
infrastructure and deployment processes required in these fields.

Linux Commands which are commonly used for
System Admins/Cloud & DevOps Engineers

There are several fundamental Linux commands you should be familiar with to navigate and operate Linux-based systems efficiently:

Linux Basic Commands:


 ls: List files and directories in the current directory.
Example: ls -l (detailed list), ls -a (including hidden files), ls -lh
(human-readable file sizes).
 cd: Change directory.
Example: cd /path/to/directory (absolute path), cd directory (relative
path), cd .. (go up one directory).
 pwd: Print the current working directory (shows the path of the
current directory).
 mkdir: Create a new directory.
Example: mkdir directory_name.
 rm: Remove files and directories.
Example: rm file.txt (remove a file), rm -r directory (remove a
directory and its contents).
 cp: Copy files and directories.
Example: cp file.txt destination_directory (copy a file), cp -r directory destination_directory (copy a directory and its contents).
 mv: Move/rename files and directories.
Example: mv file.txt new_location/file.txt (move a file), mv file.txt
new_name.txt (rename a file), mv directory new_location/directory
(move a directory).
 cat: Display the contents of a file.
Example: cat file.txt.
 less: View the contents of a file interactively.
Example: less file.txt.

 head: Display the first few lines of a file.


Example: head -n 10 file.txt (display the first 10 lines).

 tail: Display the last few lines of a file.


Example: tail -n 5 file.txt (display the last 5 lines).

 grep: Search for a pattern in files.


Example: grep “pattern” file.txt (search for a pattern in a file).
 chmod: Change file permissions.
Example: chmod +x script.sh (add executable permissions to a file).

 chown: Change the owner of a file or directory.


Example: chown user:group file.txt (change the owner and group of
a file).

 sudo: Execute a command with superuser (administrative) privileges.


Example: sudo apt-get install package_name (install a package using
the package manager).

Linux Intermediate Commands:


 find: Search for files and directories based on various criteria.
Example: find /path/to/search -name “*.txt” (find all files with the .txt extension).

 grep: Search for patterns within files.


Example: grep “pattern” file.txt (search for a pattern in a file).
 sed: Stream editor for modifying text.
Example: sed ‘s/foo/bar/’ file.txt (replace “foo” with “bar” in
file.txt).

 awk: Text processing tool for extracting and manipulating data.


Example: awk ‘{print $1}’ file.txt (print the first field of each line in
file.txt).

 sort: Sort lines of text files.


Example: sort file.txt (sort the lines in file.txt alphabetically).
 uniq: Remove duplicate lines from a sorted file.
Example: uniq file.txt (remove duplicate lines from file.txt).

 wc: Word, line, character, and byte count.


Example: wc -l file.txt (count the number of lines in file.txt).

 tar: Archive files into a tarball (compressed file).


Example: tar -czvf archive.tar.gz files/ (create a compressed tarball
of the “files” directory).
 gzip: Compress files.
Example: gzip file.txt (compress file.txt, creating file.txt.gz).
 gunzip: Decompress gzip files.
Example: gunzip file.txt.gz (decompress file.txt.gz).

 wget: Download files from the web.


Example: wget https://example.com/file.txt (download file.txt from a
URL).

 ssh: Secure Shell — remotely connect to another machine over a network.
Example: ssh user@hostname (connect to a remote machine).

 scp: Securely copy files between hosts over a network.


Example: scp file.txt user@remote:/path/to/destination (copy file.txt
to a remote machine).
 du: Estimate file and directory disk usage.
Example: du -sh directory (display the total size of the directory in
human-readable format).
 df: Report file system disk space usage.
Example: df -h (display disk space usage of all mounted file systems
in human-readable format).
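
To see how several of these intermediate commands combine in practice, here is a small illustrative pipeline. It assumes a web server log named access.log whose first field is the client IP address; the file name is only an example.

# count the ten most frequent client IPs in a web server log
grep "GET" access.log | awk '{print $1}' | sort | uniq -c | sort -rn | head -n 10

# count how many unique client IPs appear in the log
awk '{print $1}' access.log | sort | uniq | wc -l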

Linux Advanced Commands:


 rsync: Synchronize files and directories between local and remote
systems.
Example: rsync -avz source/ destination/ (synchronize the contents of
the source directory to the destination directory).

 scp: Securely copy files between hosts over a network.


Example: scp -r user@remote:/path/to/source/ /path/to/destination/
(copy files and directories recursively between remote and local
machines).

 ssh-keygen: Generate SSH key pairs for secure authentication.


Example: ssh-keygen -t rsa -b 4096 (generate a 4096-bit RSA key
pair).

 screen: Create and manage multiple terminal sessions within a single


SSH session.
Example: screen (start a new screen session), screen -r (resume a
detached screen session).

 top: Monitor system processes and resource usage in real-time.
Example: top (display live process information).

 htop: Interactive process viewer with an enhanced UI.


Example: htop (launch htop process viewer).
 cron: Schedule recurring tasks or jobs.
Example: crontab -e (edit the user’s crontab file), crontab -l (list the
user’s crontab entries).

 systemctl: Control and manage system services and daemons.


Example: systemctl start service_name (start a service), systemctl
stop service_name (stop a service).

 journalctl: View and manage system logs.


Example: journalctl -u service_name (display logs for a specific
service), journalctl -f (follow logs in real-time).

 dd: Convert and copy files and disk images.


Example: dd if=/dev/sda of=image.img bs=4M (create an image of
the /dev/sda disk).

 lsof: List open files and processes.


Example: lsof -i :port_number (list processes using a specific port).

 tcpdump: Capture and analyze network traffic.


Example: tcpdump -i eth0 port 80 (capture HTTP traffic on the eth0
interface).

 nc: Netcat — network utility for reading/writing data across network


connections.
Example: nc -l -p port_number (listen on a specific port for incoming
connections).

 strace: Trace system calls and signals of a running program.


Example: strace -p process_id (trace system calls of a specific
process).

 chroot: Change the root directory for a specific command or process.
Example: chroot /new_root_directory command (run a command
with a different root directory).

Linux Networking Commands:


 ifconfig: Display or configure network interfaces.
Example: ifconfig eth0 (display information about the eth0
interface).

 ip: Show or manipulate routing, network devices, and addresses.


Example: ip addr show (display IP addresses of network interfaces).
 ping: Send ICMP echo requests to a specified network host.
Example: ping google.com (send ICMP echo requests
to google.com).

 traceroute: Print the route packets take to a network host.


Example: traceroute google.com (trace the route to google.com).
 netstat: Display network connection information, routing tables, and
network interface statistics.
Example: netstat -tuln (display listening ports).

 ss: Utility to investigate sockets.


Example: ss -tunap (display TCP and UDP sockets and associated
processes).

 dig: DNS lookup utility for querying DNS servers.


Example: dig google.com (perform a DNS lookup for google.com).

 host: DNS lookup utility for querying DNS servers.


Example: host google.com (perform a DNS lookup for google.com).

 wget: Download files from the web.


Example: wget https://example.com/file.txt (download file.txt from a URL).

 curl: Command-line tool for transferring data using various


protocols.
Example: curl https://example.com (retrieve the contents of a
webpage).
 ssh: Secure Shell — remotely connect to another machine over a
network.
Example: ssh user@hostname (connect to a remote machine).
 scp: Securely copy files between hosts over a network.
Example: scp file.txt user@remote:/path/to/destination (copy file.txt
to a remote machine).

 iptables: Firewall administration tool for IPv4 packets.


Example: iptables -L (display the current firewall rules).
 ip6tables: Firewall administration tool for IPv6 packets.
Example: ip6tables -L (display the current IPv6 firewall rules).

 route: Show or manipulate the IP routing table.


Example: route -n (display the routing table).

Linux Performance Commands:


 top: Display real-time system information, including CPU usage,
memory usage, and running processes.
Example: top
 htop: Interactive process viewer with an enhanced UI, providing
detailed system monitoring.
Example: htop
 vmstat: Report virtual memory statistics, including CPU usage,
memory utilization, and I/O statistics.
Example: vmstat

 iostat: Report CPU and I/O statistics for devices and partitions.
Example: iostat

 sar: Collect, report, or save system activity information.


Example: sar -u (display CPU usage)
 free: Display memory usage and statistics.
Example: free
 ps: Report a snapshot of the current processes, including CPU and
memory usage.
Example: ps aux

 pidstat: Report statistics for processes, including CPU, memory, and


I/O usage.
Example: pidstat
 dstat: Versatile resource statistics tool that combines multiple
performance metrics.
Example: dstat

 perf: Powerful performance profiling tool for analyzing and


investigating system behavior.
Example: perf record -p PID (record performance data for a specific
process)

 strace: Trace system calls and signals of a running program.


Example: strace -p PID (trace system calls for a specific process)
 uptime: Display system uptime and load averages.
Example: uptime

 lsof: List open files and processes, useful for identifying resource
usage.
Example: lsof -i (list network connections)

 netstat: Display network connection information, routing tables, and
network interface statistics.
Example: netstat -s (display network statistics)
 iotop: Monitor I/O usage information of processes and disks.
Example: iotop

Linux troubleshooting Commands:


 dmesg: Display the system’s kernel ring buffer messages, which can
provide information about hardware and driver issues.
Example: dmesg

 journalctl: View and manage system logs, including systemd logs.


Example: journalctl
 lsmod: List loaded kernel modules.
Example: lsmod
 lspci: List PCI devices connected to the system.
Example: lspci

 lsusb: List USB devices connected to the system.


Example: lsusb
 lsblk: List information about block devices (disks).
Example: lsblk
 fdisk: Display or manipulate disk partition table.
Example: fdisk -l (list disk partitions)
 blkid: Print block device attributes, such as UUIDs and file system
types.
Example: blkid
 ifconfig: Display or configure network interfaces.
Example: ifconfig

 ip: Show or manipulate routing, network devices, and addresses.
Example: ip addr show

 ping: Send ICMP echo requests to a specified network host.


Example: ping google.com
 traceroute: Print the route packets take to a network host.
Example: traceroute google.com

 netstat: Display network connection information, routing tables, and


network interface statistics.
Example: netstat -tuln
 ssh: Secure Shell — remotely connect to another machine over a
network.
Example: ssh user@hostname

 sudo: Execute a command with superuser (administrative) privileges.


Example: sudo command

AWS Regions and Availability Zones

Amazon Web Services (AWS) is renowned for its global cloud infrastructure, designed to offer high availability, fault tolerance, and
scalability. At the heart of this infrastructure are AWS Regions and
Availability Zones (AZs). Understanding these concepts is crucial for
deploying resilient and efficient applications on AWS. This article will
delve into the definitions, explanations, advantages, use cases, and real-
life examples of AWS Regions and AZs.

What are AWS Regions?


AWS Regions are separate geographic areas around the world, such as
North America, Europe, Asia, etc. Each Region is a collection of
Availability Zones, which are isolated locations within a Region. AWS
operates many Regions worldwide, allowing users to deploy their
applications close to their end-users, reducing latency and improving
performance.

Definition and Explanation:

 AWS Region: A specific geographical location hosting two or more Availability Zones.

 Purpose: Regions enable users to deploy applications and data across
multiple locations to enhance availability, reduce latency, and comply
with regulatory requirements.

What are Availability Zones?


Availability Zones are distinct locations within a Region that are
engineered to be isolated from failures in other AZs. They offer the
ability to operate production applications and databases that are more
highly available, fault-tolerant, and scalable than would be possible from
a single data center.

Definition and Explanation


 Availability Zone: A data center or cluster of data centers within a
Region that is designed to be insulated from failures in other AZs.

 Purpose: AZs provide a foundation for building highly available and fault-tolerant applications by distributing them across multiple, isolated locations within a Region.

Advantages of Using AWS Regions and AZs


Enhanced Reliability and Fault Tolerance
By distributing your applications and data across multiple AZs within a
Region, you can achieve higher levels of fault tolerance. In the event of
an AZ failure, your application can continue to operate from other AZs,
ensuring uninterrupted service.

Reduced Latency
Selecting a Region closest to your end-users can significantly reduce
latency, improving the user experience for your applications.

Compliance and Data Sovereignty

Regions allow you to store data in specific geographic locations,
meeting legal or regulatory requirements regarding data sovereignty.

Use Cases and Real-Life Examples


Global Web Application
A company deploying a web application for a global audience can use
multiple Regions to serve content from the closest Region to the user,
reducing latency and improving load times.

Disaster Recovery
By using multiple AZs within a Region or across regions, businesses can
implement disaster recovery strategies that allow for rapid recovery of
IT systems without data loss in the case of a disaster.

High Availability Database


Deploying a database across multiple AZs within a Region can ensure
that even in the event of a complete AZ failure, the database remains
available, minimizing downtime.

AWS Regions and Availability Zones are fundamental components of
AWS’s global infrastructure, offering significant advantages in terms of
availability, fault tolerance, performance, and compliance. By
strategically deploying applications and data across these geographic
constructs, AWS users can ensure their systems are resilient, responsive,
and compliant with regulatory requirements. Whether you’re running a
global application, implementing a robust disaster recovery plan, or
requiring high availability for critical databases, understanding and
leveraging AWS Regions and AZs is key to achieving your objectives.
As of today, AWS spans 105 Availability Zones within 33 geographic
regions around the world, with announced plans for 12 more Availability
Zones and 4 more AWS Regions in Germany, Malaysia, New Zealand,
and Thailand. Read more at: https://aws.amazon.com/about-aws/global-infrastructure/regions_az/
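
If you have the AWS CLI installed and configured, you can list Regions and Availability Zones yourself. This is only a quick sketch; us-east-1 is used as an example Region.

# list the Regions enabled for your account
aws ec2 describe-regions --output table

# list the Availability Zones within one Region
aws ec2 describe-availability-zones --region us-east-1 --output table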

Secure Remote Administration and Troubleshooting
of EC2 Instances

Step 1: Setting Up the EC2:


Sign in to your AWS account and access the EC2 Dashboard. Locate the
“Launch Instances” button and click on it to proceed.

This action will direct you to a splash page where you can input settings
for your new virtual machine. Assign a name to your instance and
choose an appropriate AMI. For this project, ensure you select a Linux-
based AMI, preferably one that is eligible for the free tier.

Maintain the “Instance Type” as t2.micro, as it qualifies for the free tier
and is ideal for this demonstration. Moving on to the next step will
require delving into slightly more technical details.
Step 2: Creating a Keypair:
Now, we need to generate a key pair to securely establish SSH
connections to our instance. Although restricting SSH access to specific
IP addresses is an option, it’s not as secure as using AWS’s integrated
key generator. Therefore, when you reach this section, locate the “Create
new key pair” link and click on it.

Upon clicking the “Create new key pair” link, a popup will appear
enabling you to generate a new key pair. Assign it a meaningful and
easily memorable name. Ensure that you select an RSA key pair type
and opt for the .pem key file format.

You might have observed the warning message above the “Create key
pair” button. By generating a key pair, you’ll be downloading a key onto
your local computer. Therefore, it’s crucial to remember where you save
it and how to identify the key later when connecting to the instance.

For my use, I made sure to download the .pem file it generates; this key pair is what you use to remote into your EC2 instance. Be aware of where you downloaded the .pem file; you might want to place it in an easily accessible folder, as you'll need to access it later.
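
As an alternative to the console, the same kind of key pair can be created from the AWS CLI. This is a minimal sketch assuming the CLI is already configured; the key name and file name are illustrative.

# create a key pair and save the private key material to a local .pem file
aws ec2 create-key-pair --key-name my-ec2-key --query 'KeyMaterial' --output text > my-ec2-key.pem

# restrict permissions on the private key (required by ssh on Linux/macOS)
chmod 400 my-ec2-key.pem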
Step 3: Security Groups and Ports:

Next step, you’ll encounter a “Network Settings” tab. Navigate to the
top right-hand corner of the tab and select the “Edit” option.

Once clicked, you have the opportunity to input additional options,


including the Virtual Private Cloud (VPC) where the instance resides.

If you’ve left the setting on the default “anything” rule with our key pair,
an unauthorized individual on the internet wouldn’t be able to directly
log into your instance. However, they can attempt to access the instance, and over time, they may potentially breach the key pair, posing a
security risk.
To mitigate this risk, we can take proactive measures by specifying a
particular IP address as the sole source permitted to connect to our
instance (My IP) on port 22.

My IP would be the IP address of your local PC.

AWS offers a convenient feature where it can automatically populate


your IP address for the security group. This enables you to restrict access
to the address associated with the device you’re currently using. It’s
important to note that IP addresses are typically location-dependent and
may change, especially if you’re using a VPN. Therefore, it’s essential
to pay close attention to the specific CIDR (Classless Inter-Domain
Routing) that is auto-populated to ensure accurate access restrictions.
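
For reference, the same "My IP on port 22" restriction can also be applied from the AWS CLI. The security group ID and IP address below are placeholders, not values from this walkthrough.

# allow SSH (port 22) only from a single /32 address
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 \
    --cidr 203.0.113.25/32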
Go ahead and “Launch” your instance.

That means we’ve successfully created an EC2 instance. Now we need


to see if we can connect to it.
Step 4: SSH and Testing that Connection
Next, you’ll need to navigate to the “Instances” tab located on the left-
hand side. Upon clicking it, a list of instances associated with your
account will appear. Locate your instance within this list, and then click
on the “Instance ID” portion of the entry.

Up and running.

Once the next page loads, you’ll find a plethora of information about
your instance. Most of this information can be disregarded for the
current task. Locate and click on the “Connect” button to proceed.

When the next page loads, it should automatically open into the “SSH
client” tab, providing all the necessary information to connect to your
instance.
Now, let’s open our command line. If you’re using a Linux or Linux-like
operating system, you’ll need to adjust the permissions on your .pem key
file. AWS provides the command for this, which you can copy and run in the terminal. However, since I'm on a Windows OS, I can skip this
step.
Before running any commands, we need to navigate our “current
working directory” to the location where our .pem file is stored.
Depending on where you downloaded it, you’ll need to use
the cd command to navigate to the associated directory.
Once you’re in the correct directory, you can copy the command from
the “Connect” tab in the AWS console and paste it into your terminal.
Then, execute the command. If this is your first time connecting to the
instance with this particular IP address, it will prompt you to confirm the
connection. Type “yes” into the command line and press Enter to
proceed.

SSH’d.

Now we run whoami to make certain we’re now connected.
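
On Linux or macOS the whole connection step usually looks something like the following; the key file name and public DNS name are placeholders, and the exact command is shown on the "Connect" tab.

# fix the key permissions, then connect as the default Amazon Linux user
chmod 400 my-ec2-key.pem
ssh -i "my-ec2-key.pem" ec2-user@ec2-203-0-113-25.compute-1.amazonaws.com

# once logged in, confirm which user you are
whoami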

And presto! We've deployed an EC2 instance and connected to it with SSH. Now it's safe to turn it off via the AWS Console so you don't get charged money.
Step 5: Windows Time

We are going to replicate the following functions with a Windows
server.

Still t2.micro; if it's free, it's for me.

Generate a key pair for your Windows Instance.

Don’t forget the security group rules! You don’t want your EC2 instance
compromised. Proceed to launch your Windows EC2 instance.

Once the instance is up and running, click on the “Connect” button in


the upper-right corner. Click on “RDP client”.

See a difference?

There’s a difference in the “Connect” page compared to the Linux
server.
Click on the “Download remote desktop” file to install Windows RDP
onto your PC.

Click on the “Get password” icon bolded in dark grey. Input


your .pem file, which allows you to retrieve the password for your RDP
session. Open your RDP application and input the generated password
associated with the “Administrator” username.

Big Yes here.

Just like that, we have spun a fully operational Windows Server from the
comfort of our home. This demonstrates the power and range of Cloud
Computing.

Don't forget to terminate your instances, Linux and Windows, to prevent charges from occurring. Imagine you surpass the free-tier limit and get a surprise bill. Not the type of surprise you would be hoping for.

Elastic Compute Cloud – EC2
 EC2 is a web service which provides secure, resizable compute
capacity in the cloud
 EC2 interface allows you to obtain & configure capacity with minimal
friction
 EC2 offers the broadest and deepest compute platform with choice of
processor, storage, networking, operating system, and purchase model.
 Amazon offers the fastest processors in the cloud and they are the only cloud with 400 Gbps Ethernet networking
 Amazon has the most powerful GPU instances for machine learning and graphics workloads.

Reliable, Scalable Infrastructure on Demand:

 Increase or decrease capacity within minutes, not hours or days
 SLA commitment of 99.99% availability for each Amazon EC2 Region. Each Region consists of at least 3 Availability Zones
 The Region/AZ model is recognized by Gartner as the recommended approach for running enterprise applications that require high availability.

AWS supports 89 security standards & compliance certifications, including:
 PCI-DSS
 HIPAA/HITECH
 FedRAMP
 GDPR
 FIPS
 NIST etc..

Features of Amazon EC2:


 Virtual computing instances, known as instances
 Pre-configured templates for your instances, known as AMIs, which contain the operating system, configuration and software
 Various configurations of CPU, memory, storage and networking capacity for your instances, known as instance types
 Secure login information for your instances using key pairs (AWS stores the public key, and you store the private key)
 Storage volumes for temporary data that are deleted when you stop, hibernate or terminate your instances, known as instance store volumes
 Persistent storage volumes for your data using Elastic Block Store, known as Amazon EBS volumes
 Multiple physical locations for your resources, such as instances & EBS volumes, known as Regions and Availability Zones
 A firewall that enables you to specify the protocols, ports and source IP ranges that can reach your instance, using security groups
 Static IPv4 addresses for dynamic cloud computing, known as Elastic IP addresses

Amazon EC2 Provides the following purchasing options:


 On-Demand
 Spot Instances
 Reserved Instances
 Savings Plan

On-Demand Instances:
 You pay for compute capacity by the hour or the second depending on
which instances you run
 No long-term commitment or upfront payments are needed
 You can increase or decrease your compute capacity depending on the demands of your application and only pay the specified hourly rates for the instances you use.
On-Demand Instances are recommended for:

 Users that prefer the low cost and flexibility of Amazon EC2 without any upfront payment or long-term commitment
 Applications with short-term, spiky or unpredictable workloads that cannot be interrupted
 Applications being developed or tested on Amazon EC2 for the first time

Spot Instances:
 Amazon EC2 Spot Instances allow you to request spare Amazon EC2 computing capacity at a discount of up to 90% compared to the On-Demand price
 Spot Instances are recommended for:
 Applications that have flexible start and end times
 Applications that are only feasible at very low compute prices
 No guarantee of 24x7 uptime

Reserved Instances:
 Reserved Instances provide you with a significant discount (up to 75%) compared to On-Demand instance pricing
 For applications that have steady-state or predictable usage, Reserved Instances can provide significant savings compared to using On-Demand instances

Recommended for:
 Applications with steady-state usage
 Applications that may require reserved capacity
 Customers that can commit to using EC2 over a 1- or 3-year term to reduce their total computing costs

Savings Plan:
Savings Plans are a flexible pricing model that offers low prices on EC2 in exchange for a commitment to a consistent amount of usage over a 1- or 3-year term, with discounts of up to 72%.

Amazon EC2 Instance Types:


Amazon EC2 provides a wide selection of instance types optimized to fit different use cases.
Instance types comprise varying combinations of CPU, memory, storage, and networking capacity. Each instance type includes one or more instance sizes, allowing you to scale your resources to the requirements of your target workload.
 General Purpose
 Compute Optimized
 Accelerated Computing (GPU Optimized)
 Memory Optimized
 Storage Optimized

General Purpose:
General purpose instances provide a balance of compute, memory and
networking resources, and can be used for a variety of diverse workloads.
These instances are ideal for applications that use these resources in equal
proportions such as web servers and code repositories.
Ex: Mac, T4g, T3, T3a, T2, M6g, M5, M5a, M5n, M5zn, M4, A1

Compute Optimized:
Compute Optimized instances are ideal for compute-bound applications that benefit from high-performance processors. Instances belonging to this family are well suited for batch processing workloads, media transcoding, high-performance web servers, and high-performance computing.
Ex: C6g, C6gn, C5, C5a, C5n, C4
Memory Optimized:
Memory optimized instances are designed to deliver fast performance for workloads that process large data sets in memory.
Use case: Memory-intensive applications such as open-source databases, in-memory caches, and real-time big data analytics
Ex: R6g, R5, R5a, R5b, R5n, R4, X2gd, X1e, X1, u, Z1d

Accelerated Computing:
Accelerated computing instances use hardware accelerators, or co-
processors, to perform functions, such as floating-point number
calculations, graphics processing, or data pattern matching, more
efficiently than is possible in software running on CPUs.

Use Case: Machine learning, high performance computing, computational fluid dynamics, computational finance, seismic analysis, speech recognition, autonomous vehicles, and drug discovery.
Ex: P4, P3, P2, Inf1, G4dn, G3, F1

Storage Optimized:
Storage optimized instances are designed for workloads that require high, sequential read and write access to very large data sets on local storage. They are optimized to deliver tens of thousands of low-latency, random I/O operations per second (IOPS) to applications.
Ex: I3, I3en, D2, D3, D3en, H1

Instance Features:
Amazon EC2 instances provide a number of additional features to help you deploy, manage, and scale your applications.
 Burstable Performance instances
 Multiple Storage Options
 EBS Optimized Instances
 Cluster Networking

Burstable Performance Instances: Amazon EC2 allows you to


choose between Fixed Performance Instances (e.g. M5, C5, and R5) and
Burstable Performance Instances (e.g. T3). Burstable Performance
Instances provide a baseline level of CPU performance with the ability to
burst above the baseline.
For example, a t2.small instance receives credits continuously at a rate of 12 CPU Credits per hour. This capability provides baseline performance equivalent to 20% of a CPU core (20% x 60 mins = 12 mins). If the instance does not use the credits it receives, they are stored in its CPU Credit balance up to a maximum of 288 CPU Credits. When the t2.small instance needs to burst to more than 20% of a core, it draws from its CPU Credit balance to handle this surge automatically.

Multiple Storage Options: Amazon EC2 allows you to choose between multiple storage options based on your requirements. Amazon EBS is a durable, block-level storage volume that you can attach to a single, running Amazon EC2 instance.
Amazon EBS provides three volume types to best meet the needs of your workloads: General Purpose (SSD), Provisioned IOPS (SSD), and Magnetic.

EBS Optimized instances:


For an additional, low, hourly fee, customers can launch selected Amazon
EC2 instance types as EBS-optimized instances. For M6g, M5, M4, C6g,
C5, C4, R6g, P3, P2, G3, and D2 instances, this feature is enabled by
default at no additional cost. EBS-optimized instances enable EC2
instances to fully use the IOPS provisioned on an EBS volume.
EBS-optimized instances deliver dedicated throughput between Amazon
EC2 and Amazon EBS, with options between 500 and 4,000 Megabits per
second (Mbps) depending on the instance type used.

Cluster Networking:
Select EC2 instances support cluster networking when launched into a
common cluster placement group. A cluster placement group provides
low-latency networking between all instances in the cluster. The
bandwidth an EC2 instance can utilize depends on the instance type and
its networking performance specification.

EC2 Tenancy Model:


AWS offers 3 different types of tenancy models for your EC2 instances; this relates to what underlying host your EC2 instance will reside on:
 Shared Tenancy
 Dedicated Instance
 Dedicated Host

Shared Tenancy:
This option will launch your EC2 instance on any available host with the specified resources required for your selected instance type, regardless of which other customers and users also have EC2 instances running on the same host. This means we are going to share the physical resources with other customers.
AWS implements advanced security mechanisms to prevent one EC2 instance from accessing another on the same host.

Dedicated Instance:
Dedicated Instances are hosted on hardware that no other customer can access. They can only be accessed by your own AWS account. You may be required to launch your instances as Dedicated Instances due to internal security policies or external compliance controls.
Dedicated Instances do incur additional charges, because you are preventing other customers from running EC2 instances on the same hardware and so there will likely be unused capacity remaining.

Dedicated Host:
A Dedicated Host is effectively the same as Dedicated Instances; however, it offers additional visibility and control over how you place your instances on the physical host. It also allows you to use your existing licenses, such as PA-VM licenses or Windows Server licenses. Using Dedicated Hosts gives you the ability to use the same host for a number of instances that you want to launch and align with any compliance and regulatory requirements.

Following is the list of important terms you need to know before creating EC2 instances:
 Amazon Machine Image (AMI)
 Instance Type
 Network
 Subnet
 Public IP
 Elastic IP
 Private IP
 Placement Group
 Root Volume
 Security Group
 KeyPair

Amazon Machine Image:

An Amazon Machine Image (AMI) provides the information required to launch an instance. An AMI includes one or more Elastic Block Store snapshots and a template for the root volume of the instance (for example, operating system, software, configurations, etc.).

Instance Type:
Instance types comprise varying combinations of CPU, Memory, storage
& networking capacity and give you the flexibility to choose the
appropriate mix of resources for your applications.

Subnet:
Subnet is a subnetwork in your virtual network of your Amazon Network.
By default, there is onesubnet per availability zone.

Public IP:
A public IP is an IP address which can be used to access the internet and allows communication over the internet. The public IP will be assigned by Amazon and it is dynamic. If you stop and start your EC2 instance, the public IP will change.

Elastic IP (EIP):
An Elastic IP is a kind of fixed public IP address which we can attach to our instances. An Elastic IP will not change if we stop & start our EC2 instances. We need to request an EIP from Amazon; it is free while attached to an instance, but if you keep the EIP unused in your account it will be charged after the initial first hour.
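
As a rough AWS CLI sketch (the instance ID and allocation ID below are placeholders), an Elastic IP is first allocated, then associated with an instance, and released when no longer needed so it does not sit unused and incur charges:

# allocate a new Elastic IP in the VPC scope
aws ec2 allocate-address --domain vpc

# associate it with a running instance, using the AllocationId returned above
aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0123456789abcdef0

# release it when it is no longer needed
aws ec2 release-address --allocation-id eipalloc-0123456789abcdef0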

Private IP:
A private IP can be used to establish communication within the same network only. Private (internal) addresses are not routed on the Internet and no traffic can be sent to them from the Internet, which means no internet access is available over a private address.

Placement group: A placement group is a logical grouping of instances within a single Availability Zone. AWS provides three types of placement groups:
 Cluster
 Partition
 Spread

 Cluster: A cluster placement group is a logical grouping of instances within a single Availability Zone. Instances in the same cluster placement group enjoy a higher per-flow throughput limit for TCP/IP traffic and are placed in the same high-bisection bandwidth segment of the network.
The following image shows instances that are placed into a cluster
placement group.

Partition Placement Group: Partition placement groups help reduce
the likelihood of correlated hardware failures for your application. When
using partition placement groups, Amazon EC2 divides each group into
logical segments called partitions. Amazon EC2 ensures that each
partition within a placement group has its own set of racks. Each rack has
its own network and power source. No two partitions within a placement
group share the same racks, allowing you to isolate the impact of
hardware failure within your application.

 Spread: A spread placement group is a group of instances that are each placed on distinct racks, with each rack having its own network and power source.
The following image shows seven instances in a single Availability Zone that are placed into a spread placement group. The seven instances are placed on seven different racks.

Spread placement groups are recommended for applications that have a
small number of critical instances that should be kept separate from each
other.
A spread placement group can span multiple Availability Zones in the
same Region. You can have a maximum of seven running instances per
Availability Zone per group.
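
If you want to experiment with placement groups from the AWS CLI, a minimal sketch looks like this; the group names and AMI ID are illustrative, and the instance is launched into a group with the --placement option.

# create one placement group of each strategy
aws ec2 create-placement-group --group-name my-cluster-pg --strategy cluster
aws ec2 create-placement-group --group-name my-spread-pg --strategy spread
aws ec2 create-placement-group --group-name my-partition-pg --strategy partition --partition-count 3

# launch an instance into the spread placement group
aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type t3.micro \
    --placement GroupName=my-spread-pg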

Root Volume: The storage which is used to install the operating system for an instance is called the root volume (Ex: the C:\ drive). The following volume types are supported as root volumes: General Purpose SSD, Provisioned IOPS SSD, Magnetic.

Security Group: A security group acts as a virtual firewall for your instance to control incoming & outgoing traffic. Security groups must be attached to instances, and we can attach up to 5 security groups to each instance.
Following are the basic characteristics of security groups:

 You can specify allow rules, but not deny rules


 You can specify separate rules for inbound and outbound traffic.
 Security group rules enable you to filter traffic based on protocols and
port numbers.
 Security groups are stateful — if you send a request from your instance, the response traffic for that request is allowed to flow in regardless of inbound security group rules.
 When you create a new security group, it has no inbound rules.
 By default, a security group includes an outbound rule that allows all outbound traffic.
 By default we can create 2,500 security groups per Region, which can be increased up to 5,000 per Region
 60 inbound and 60 outbound rules per security group
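
The behaviour described above can be observed directly with the AWS CLI. A minimal sketch, with the VPC ID and group ID as placeholders: the new group starts with no inbound rules and an allow-all outbound rule, and only allow rules can be added.

# create a security group in a VPC
aws ec2 create-security-group --group-name web-sg --description "web servers" --vpc-id vpc-0123456789abcdef0

# add an inbound allow rule for HTTP from anywhere, using the GroupId returned above
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0

# inspect the resulting inbound and outbound rules
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0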

KeyPair: A key pair is a combination of a public key and a private key, which can be used to encrypt and decrypt data. It is a set of security credentials that you use to prove your identity when connecting to an instance. Amazon EC2 stores the public key and the user stores the private key.

Elastic Compute Cloud (EC2) Lab:


Prerequisite:
 Amazon Account access
 Putty & puttygen tools on your computer

To-do List 1:
1. Launch Windows Server 2016 EC2 instance in N.Virginia Region
 While creating EC2 instance open RDP port in the security group from
your IP only
 Decrypt the password using keystore file which we used while creating
the EC2 instance
 Access Windows server using Remote desktop connection tool
2. Launch Amazon Linux2 EC2 instance in N.Virginia Region
 While creating EC2 instance open SSH port in the security group from
anywhere
 Generate Private key using puttygen tool from the keypair which we used while creating the EC2 instance
 Access Amazon Linux EC2 instance using putty tool
3. Install webserver on Amazon Linux2 EC2 and host a website
4. Create Custom AMI from the Amazon Linux EC2 in which we hosted the website.
5. Launch New EC2 instance from the custom AMI in N.Virginia Region
6. Copy the AMI from N.Virginia to Mumbai Region
7. Launch New Instance from Custom AMI in Mumbai Region
8. Share custom AMI with a specific Amazon Account & launch New EC2 instance in the other Amazon account
9. Share custom AMI with public
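
Most of this to-do list is performed in the console, but for orientation here is a rough AWS CLI sketch of the core steps. Every ID, key name and security group below is a placeholder, and the web server step runs on the instance itself.

# 2. launch an Amazon Linux 2 instance in N.Virginia (us-east-1)
aws ec2 run-instances --region us-east-1 --image-id ami-0123456789abcdef0 \
    --instance-type t2.micro --key-name my-ec2-key --security-group-ids sg-0123456789abcdef0

# 3. on the instance, install a web server and start it
sudo yum install -y httpd && sudo systemctl enable --now httpd

# 4. create a custom AMI from the configured instance
aws ec2 create-image --region us-east-1 --instance-id i-0123456789abcdef0 --name my-web-ami

# 6. copy the custom AMI from N.Virginia to Mumbai (ap-south-1)
aws ec2 copy-image --region ap-south-1 --source-region us-east-1 \
    --source-image-id ami-0123456789abcdef0 --name my-web-ami-mumbai

# 8. share the custom AMI with a specific AWS account (account ID is a placeholder)
aws ec2 modify-image-attribute --image-id ami-0123456789abcdef0 \
    --launch-permission "Add=[{UserId=123456789012}]"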

VIRTUAL PRIVATE CLOUD
What is VPC ? and Let’s Create VPC in AWS

AWS's Virtual Private Cloud (VPC) is one of its services. A VPC is a private cloud computing environment contained within a public cloud. The VPC is a virtually isolated environment made to provide a private space according to the needs of IT companies and business requirements.
By default, you can create up to 5 VPCs. You can ask for additional
VPCs using the VPC Request Limit Increase form.
An IT company hosts its products and services on servers for customers
to see. They make sure no one has access to their databases or their
internal codebase. That’s why IT companies isolate their databases,
CRM information, and internal code bases from the customers.

The Virtual Private Cloud consists of the following features:


 Subnets: A subnet is a range of IP addresses in your VPC. Subnets
contain resources such as instances (servers), databases, and CRM systems.
A subnet must reside in a single Availability Zone. After you add subnets,
you can launch AWS resources into a specified subnet.

 IP Addressing: You can assign IPv4 addresses and IPv6 addresses
to your VPCs and subnets. You can also bring your public IPv4 and
IPv6 addresses to AWS and allocate them to resources in your VPC,
such as EC2 instances, NAT gateways, and Network Load Balancers.

 Routing: Route tables decide where traffic should go. Within a VPC,
resources such as instances can reach each other because of the local
route that is already present in every route table.

 Gateways: As the name suggests the gateways are meant to connect


to the outside world. In this case, the AWS internet gateway will
connect you to other networks like public networks, internet, and
other VPCs.

 NAT gateway: NAT Gateways allow private subnets to connect to


the Internet but it works only one way. Instances in a private subnet
can connect to the network or services outside of their VPCs but
external services can’t initiate the network connection with those
instances.

 Endpoints: A VPC endpoint lets you connect to AWS services privately,
without the use of an internet gateway or NAT device.

 VPN connections: Connect your VPCs to your on-premises


networks using AWS Virtual Private Network (AWS VPN).

 Peering connections: VPCs have the ability to connect to other VPCs
using VPC peering. With inter-Region peering, a VPC in one Region can
connect to a VPC located in another Region.

Let’s see how the VPC network looks:

The description of this figure:
 Two stacks of resources are created in one VPC. From the figure, it is a
3-tier infrastructure because the VPC consists of a database tier, an
application tier that processes the internal codebase, and a web tier, which
is the presentation tier shown to the clients or customers.
 All resources are kept private for security reasons. We don't want our
databases to be compromised by an attacker, because databases hold the
most crucial customer data such as credit card information, IDs, and so on.

 There are two route tables: one is the default route table and is
responsible for interconnection between subnets, and the other routes the
public subnet to the internet gateway, which leads to external networks.

Here are the steps for setting up a VPC in the AWS
environment:
I have my own diagram to create the structure of the VPC:

So, we are going to implement the same VPC structure shown above.


Let's start with AWS VPC services.

Virtual Private Cloud:


 First, create a VPC and name it whatever you want. I name it as
"my_VPC".
 Give "IPv4 CIDR" as "192.168.0.0/16". "CIDR" means "Classless
Inter-Domain Routing". It is decided based on the netmask. Let me give
you some examples:

If the netmask is 255.255.255.0 and the IP is 192.168.55.0, then the IP range is
192.168.55.0–192.168.55.255, which means around 256 addresses are available;
but since some IPs in each subnet are reserved by AWS, only 251 IPs can be
allocated to Elastic Compute Cloud (EC2) instances.

If the netmask is 255.255.0.0 and the IP is 192.168.0.0, then the IP range is
192.168.0.0 to 192.168.255.255, which means around 65,536 addresses can
be allocated. In this case, the CIDR will be written as
192.168.0.0/16.

If the netmask is 255.0.0.0 and the IP is 192.0.0.0, then the CIDR will be
192.0.0.0/8.
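For comparison, the same VPC could also be created from the AWS CLI (a sketch;
the Name tag matches the console example above):

aws ec2 create-vpc --cidr-block 192.168.0.0/16 --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=my_VPC}]'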

Subnets: Create two subnets “Public Subnet” and “Private Subnet”.

a.) Public Subnet:


 Choose "VPC ID" as "my_VPC". Name the subnet "Public
Subnet" and choose "Availability Zone (AZ)" as "ap-south-1a". Since we
are using the Mumbai Region, there are three AZs; you can choose any one,
but for the next subnet your choice should be a different Availability Zone
so that your public subnet and private subnet can be isolated.
 Give "IPv4 CIDR block" as "192.168.1.0/24" and create it.

b.) Private Subnet:
Choose “VPC ID” as “my_VPC”. Name the Subnet as “Private
Subnet” and choose “Availability Zone (AZ)” as “ap-south-1b”.
Give “IPv4 CIDR block” as “192.168.2.0/24” and create it.
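The two subnets can likewise be created from the CLI (a sketch; the VPC ID shown
is a placeholder for the ID returned when my_VPC was created):

aws ec2 create-subnet --vpc-id vpc-0abc1234 --cidr-block 192.168.1.0/24 --availability-zone ap-south-1a
aws ec2 create-subnet --vpc-id vpc-0abc1234 --cidr-block 192.168.2.0/24 --availability-zone ap-south-1b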

Internet Gateways: Let’s create an Internet gateway.


Give the name as "my_internet_gateway" and attach it to "my_VPC".

Route Tables:

 Create a new route table and name it “my_route_table”.


 Give VPC as “my_VPC” and create it.

3. Edit route: Select Edit routes and Add route respectively.


Set Destination as 0.0.0.0/0 (means public network)
and Target as internet gateway (my_internet_gateway).

The route table we set up allows the server to go to the internet. Here
we set the destination as 0.0.0.0/0, which means any traffic that is not local
to the VPC is sent to the internet gateway. The internet gateway is a router
that leads to the internet; the associated subnets pass through the internet
gateway to be able to connect to the internet.

Here the first destination is set to 192.168.0.0/16, which is the default local
route that makes sure all the resources in the VPC have local
connectivity.

4. Subnet association: This is where we tell the VPC which subnet
we want to attach to the route table that leads to the public
internet. Here, we have to select the subnet we want to associate with
the internet gateway.
a.) Click on Edit subnet associations and select “Public Subnet”,
and save the subnet association.
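The internet gateway, route table, default route, and subnet association above can
also be done from the CLI (a sketch; all IDs are placeholders for the values
returned by the earlier commands):

aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-0aaa1111 --vpc-id vpc-0abc1234
aws ec2 create-route-table --vpc-id vpc-0abc1234
aws ec2 create-route --route-table-id rtb-0bbb2222 --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0aaa1111
aws ec2 associate-route-table --route-table-id rtb-0bbb2222 --subnet-id subnet-0ccc3333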

Now all the main VPC settings have been done. Let’s test it now and
launch an instance.

We will launch two instances. Instead of default VPC, we will


use my_VPC to launch instances. The first instance will use a Public
subnet and another instance will use a Private Subnet.

Launching Test Instances:


Public Instance:
1. This instance will be launched with Public Subnet. Name the instance
as Public_instance. Choose the AMI as Amazon Linux.

2. Instance Type as t2.micro (for just testing). Provide key-pair.

3. Edit the Network settings, Choose VPC as my_VPC. Choose Subnet


as Public Subnet and Auto-assign public IP should be enabled.

4. Change the security group rule to allow All traffic, and leave the
storage settings as they are.

5. Now launch it successfully.


Private Instance:
 Do the same as you did in the Public Instance. Name the Private
instance as Private_instance and choose the subnet as Private
Subnet.

 Here, there is no point in enabling Auto-assign Public IP, because
Private_instance is meant to stay isolated. Keep the public IP disabled.

Now Launch it successfully.
Now let's test the launched instances. First connect to the Public_instance:
since the public instance is attached to the route table, and the route table
has a route through the internet gateway to 0.0.0.0/0 (public network),
we will be able to use the SSH protocol to connect to the instance over the
internet.
If we ping 8.8.8.8 from the Public_instance, it will work.

Let's test the Private_instance. If you try to connect to it using the SSH
protocol, it won't work. In this case, we don't have a public IP, so the
connection fails.

When you connect to your Public_instance and, through it, you ping the
Private_instance's private IP address, the ping will succeed because within
the VPC there is local connectivity between the instances.

Whenever there is a need to access the Private_instance, we can use the
Public_instance as a jump host to connect to it, as shown below:
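One way to do that from your workstation (a sketch; the key file name and the IP
addresses are placeholders, and ec2-user is the default login for Amazon Linux) is
to use the public instance as an SSH jump host:

ssh -i my_key.pem -J ec2-user@<Public_instance public IP> ec2-user@<Private_instance private IP>

Alternatively, SSH to the public instance with agent forwarding (ssh -A after
ssh-add my_key.pem) and then SSH on to the private IP from there; either way, the
private instance is reached over the VPC's local route, not the internet.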

So That’s it, we have created our VPC in AWS.

VPC Peering across Two Region
A virtual private cloud (VPC) is a virtual network dedicated to your
AWS account. It is logically isolated from other virtual networks in the
AWS Cloud.
A VPC peering connection is a networking connection between two
VPCs that enables you to route traffic between them using private IPv4
addresses or IPv6 addresses. Instances in either VPC can communicate
with each other as if they are within the same network. You can create a
VPC peering connection between your own VPCs, or with a VPC in
another AWS account.
For example, if you have more than one AWS account, you can peer the
VPCs across those accounts to create a file sharing network. You can
also use a VPC peering connection to allow other VPCs to access
resources you have in one of your VPCs. When you establish peering
relationships between VPCs across different AWS Regions, resources in
the VPCs (for example, EC2 instances and Lambda functions) in
different AWS Regions can communicate with each other using private
IP addresses, without using a gateway, VPN connection, or network
appliance.
Pricing for a VPC peering connection: There is no charge to create a
VPC peering connection. All data transfer over a VPC peering
connection that stays within an Availability Zone (AZ) is free. Charges
apply for data transfer over VPC peering connections that cross
Availability Zones and Regions.

We will create two VPCs in two different Regions, establish a peering connection
between them, and then create two EC2 instances and test their connectivity across
both VPCs.

Region 1- Virginia
—————————————
* Create VPC in one region (Virginia region)

* Create subnet -1
Create subnet associated with VPC-1
2. Enter Details and click on create button

* Create Internet gateway
1. Click on Internet Gateway
2. Enter name and click on create

2. Select created internet gateway and attach to VPC-1

* Create Route table


1. Now create Route table and select VPC-1

b. Click on Routes section and enter internet gateway at 0.0.0.0/0

c. Associate subnet-1 into subnet association section as follows

* Creating EC2 Instance in Virginia region
Go to EC2 dashboard and click on launch instance

2. Edit Network setting as


VPC= VPC-1
subnet = subnet-1
Auto assign public ip =enable
3. In the security group, SSH, HTTP, and HTTPS should be selected
4. Click On launch Instance

5. Click on instance and edit inbound rule in security group

6. Allow All ICMP - IPv4, i.e., the ICMP protocol

6. connect to instance-virginia

Part II
Region 2 - Mumbai
—————————————
* Create VPC in another region (Mumbai region)

* Create subnet-2
Create subnet associated with VPC-2
2. Enter Details and click on create button

* Create Internet gateway
1. Click on Internet Gateway
2. Enter name and click on create

3. Select created internet gateway and attach to VPC-2

* Create Route table
a. creates Route table and select VPC-2

b. Click on Routes section and enter internet gateway at 0.0.0.0/0

c. Associate public subnet into subnet association section as follows

* Creating EC2 Instance in Mumbai region


Go to EC2 dashboard and click on launch instance

2. Edit Network setting as


VPC= VPC-2
subnet = subnet-2
Auto assign public ip =enable
3. In the security group, SSH, HTTP, and HTTPS should be selected
4. Click on launch Instance

5. Click on instance and edit inbound rule in security group

6. Allow All ICMP - IPv4, i.e., the ICMP protocol

7. Connect to instance-Mumbai and use command
Command : ping <private ip of Virginia region instance>
Result: As both instances are in different VPCs, they will not be able to
connect with each other

The solution for connecting instances in different VPCs is VPC
Peering
VPC Peering
 Click on peering connection

 Enter a name
 Select VPC-1 as the VPC to peer with
 Select the account and Region as per your requirement
Here both VPCs are in different Regions but in the same account, so select
"My Account" and "Another Region"
5. Select the Region name and paste the VPC ID (refer to the next point) of the
Mumbai Region VPC

6. Go to Mumbai region -> Go to Vpc -> Select VPC-2 ->Copy VPC-ID


and paste in above info

7. Click on create peering connection button

8. Peering connection get created

9. Now Go to Mumbai region and then in peering connection
10. Click on actions
11. Select “Accept request”

12. click on Accept request and peering connection is now active

* Edit Route Table


Go to Virginia region and Select Route-1

2. Click on Routes section and enter peering connection at
192.168.0.0/16 (VPC CIDR of vpc-2)

— — — — — — — — — — — — — —*— — — — — — — — —
-* — — — — — — — — — — — —
Go to Mumbai region and Select Route-2

2. Click on Routes section and enter peering connection at 10.0.0.0/16
(VPC CIDR of vpc-1)
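The same peering setup can also be scripted with the AWS CLI (a sketch; the IDs in
angle brackets are placeholders, and the Regions match this example):

aws ec2 create-vpc-peering-connection --region us-east-1 --vpc-id <VPC-1 ID> --peer-vpc-id <VPC-2 ID> --peer-region ap-south-1
aws ec2 accept-vpc-peering-connection --region ap-south-1 --vpc-peering-connection-id <peering connection ID>
aws ec2 create-route --region us-east-1 --route-table-id <Route-1 ID> --destination-cidr-block 192.168.0.0/16 --vpc-peering-connection-id <peering connection ID>
aws ec2 create-route --region ap-south-1 --route-table-id <Route-2 ID> --destination-cidr-block 10.0.0.0/16 --vpc-peering-connection-id <peering connection ID>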

* Connect Both Instance and test their connectivity in both the VPC
1. Now Connect to Instance-Mumbai

Command: ping <private ip of Virginia region instance>


Result:

2. Now Connect to Instance-Virginia

Command : ping <private ip of Mumbai region instance>


Result:


AWS TRANSIT GATEWAY
Introduction:
AWS Transit Gateway is a service that allows customers to connect their
Amazon Virtual Private Clouds (VPCs) and on-premises networks to a
central hub. This simplifies network management by providing a single
gateway to manage network connectivity between multiple VPCs and
on-premises networks.

Components of AWS Transit Gateway


AWS Transit Gateway consists of the following components:
 Transit Gateway: This is the central hub that connects multiple
VPCs and on-premises networks. Transit Gateway acts as a transit
point for traffic between these networks.
 Attachments: Attachments are the connections between the Transit
Gateway and VPCs or on-premises networks. VPC attachments are
created by attaching a VPC to the Transit Gateway, while VPN and
Direct Connect attachments are created by creating a VPN or Direct
Connect connection and attaching it to the Transit Gateway.
 Route tables: Each attachment has its own route table, which
specifies how traffic should be routed between the attachment and
other attachments.

Benefits of AWS Transit Gateway


AWS Transit Gateway offers several benefits, including:

 Simplified network management: With Transit Gateway,


customers can manage network connectivity between multiple VPCs
and on-premises networks from a single gateway, reducing the need
for complex networking configurations and enabling easier network
management.

 Scalability: Transit Gateway is designed to scale as the number of

attached VPCs and on-premises networks grows, making it easier to
accommodate growing network traffic.

 Improved performance: Transit Gateway uses a highly available


architecture and is designed for low-latency connections, enabling fast
and reliable network performance.

Difference between AWS Transit Gateway and VPC Peering


While AWS Transit Gateway and VPC peering, both provide
connectivity between VPCs, there are several key differences between
the two services:

 Scalability: A VPC peering connection is limited to connecting two
VPCs, while Transit Gateway can connect many VPCs and on-premises
networks through a single hub.

 Simplified network management: Transit Gateway provides a


centralized hub for managing network connectivity between multiple
VPCs and on-premises networks, while VPC peering requires
customers to manage each VPC peering connection separately.

 Routing flexibility: Transit Gateway allows for more complex


routing configurations, while VPC peering has more limited routing

capabilities.

What is AWS Transit Gateway?


AWS Transit Gateway is a service that acts as a hub to connect VPCs
and on-premises networks. It acts as a central routing engine that
eliminates the need for each VPC to have individual connections
between them.
With Transit Gateway, you only need to create connections from the
VPCs, VPNs, and Direct Connect links to the Transit Gateway. Transit
Gateway will then dynamically route traffic between all the connected
networks.

Benefits of Using Transit Gateway


Some of the main benefits of using Transit Gateway include:

 Simplified network topology — No need for a mesh network between
VPCs. Centralized connectivity configuration and no need to manage routing
tables between VPCs; just connect each VPC to the Transit
Gateway.
 Scalability — Easily scale up to tens of thousands of VPC and
remote office connections.

 Reduced operational complexity


 Shared network transit — Allows different accounts and VPCs
to use the same Transit Gateway.

 On-premises connectivity — Connect seamlessly to on-premises


data centers.

Transit Gateway Use Cases


Transit Gateway is a versatile service that can cater to a variety of use
cases:

 Connect VPCs across multiple accounts and AWS Regions.
 Create a hub-and-spoke model for segmented networks.
 Share centralized internet connectivity across accounts.
 Migrate from a mesh or hub-and-spoke model to a Transit Gateway.
 Connect remote offices and data centers to AWS.

Getting Started with Transit Gateway


To start using Transit Gateway, you need to perform the following steps:

Create a Transit Gateway in a specific region

To get started with Transit Gateway, the first step is to create the Transit
Gateway resource in your desired AWS region.
When creating the Transit Gateway, you need to specify a name tag so it
can be easily identified. You also have the option to enable DNS support
if you need resolution between your connected networks.
Some key considerations when creating the Transit Gateway:
Transit Gateways are Regional resources. So, you need to decide which
Region makes the most sense as the connectivity hub for your use case.
A Transit Gateway is not created inside any particular VPC; instead, you
attach to it the VPCs (and VPN or Direct Connect connections) that need to
communicate through it.
There is no instance size to pick; a Transit Gateway scales with traffic, and
options such as the Amazon side ASN, DNS support, and default route table
behavior can be adjusted later if needed.
You can enable sharing with other accounts upon creation or do it later.
Account sharing allows connections from other accounts.
Logging can be enabled to track connection activity and events. The logs
will be sent to CloudWatch Logs.
Once the Transit Gateway is created, you will get an ID for it that is

needed to attach VPCs and other networks. It takes some time for the
Transit Gateway to be ready for use after creation.
So those are some of the key options to consider when creating your
Transit Gateway in the region of your choice. The console wizard will
guide you through all the necessary configuration.
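As an illustration (a sketch, not the only configuration), the Transit Gateway
itself can be created from the CLI; the description and option values shown here
are assumptions:

aws ec2 create-transit-gateway --description "central connectivity hub" --options DnsSupport=enable,DefaultRouteTableAssociation=enable,DefaultRouteTablePropagation=enable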

Attach VPCs by creating Transit Gateway attachments


Once the Transit Gateway is created, the next step is to start attaching
VPCs. Each VPC that needs to connect to the Transit Gateway needs to
have an attachment created.

Some key points when creating VPC attachments:


You can attach VPCs from the same account as the Transit Gateway or
from other accounts if account sharing is enabled.
For each VPC attachment, you need to provide the ID of the Transit
Gateway, the ID of the VPC, and the subnets to associate.
A route propagation setting determines whether the attachment's routes get
automatically propagated to the transit gateway route table. You can enable or
disable propagation.
Option to enable DNS support for private IP addresses in the VPC to be
accessible across networks.
You can control access to the VPC by adding a transit gateway route
table and using resource attachments.
After creating an attachment, you add a route in the VPC route table with the
Transit Gateway as the target so the VPC knows to send remote traffic to it.
Attachment creation takes time to complete. The VPC can start sending
traffic to the Transit Gateway once the state changes to available.
You can create multiple attachments from the same VPC for

redundancy and scaling.
The Transit Gateway provides connectivity between the VPCs as soon
as the attachments are created and routes propagated. So, you can build
out connectivity to more VPCs incrementally.
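A VPC attachment can be created from the CLI like this (a sketch; the IDs are
placeholders, and you would supply one subnet ID per Availability Zone you want
the attachment to use):

aws ec2 create-transit-gateway-vpc-attachment --transit-gateway-id <transit gateway ID> --vpc-id <VPC ID> --subnet-ids <subnet ID>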

For on-premises connectivity, create VPN or Direct Connect


attachments:
The Transit Gateway allows you to connect your on-premises networks
and data centers using VPN or Direct Connect links.
For VPN connectivity, you need to create an AWS Site-to-Site VPN
connection from your customer gateway router to the Transit Gateway.
The customer gateway can be a physical device or a software appliance.

To create the VPN attachment:


Provide the Transit Gateway ID, customer gateway ID, VPN connection
ID.
Configure the inside and outside IP addresses for the VPN tunnel.
Specify the AWS side ASN for BGP routing.
Enable route propagation to exchange routes between the Transit
Gateway and on-premises network.
For Direct Connect connectivity, you need to link your Direct Connect
connection or LAG to the Transit Gateway.

To create the Direct Connect attachment:


Specify the Direct Connect connection ID and the Transit Gateway ID.
Provide the inside and outside IP addresses.
Enable BGP for propagating routes.
Specify the ASNs for the AWS and customer side.

Enable route propagation.
The attachment creation process will take some time to complete. Once
available, your on-premises network will be able to connect to the VPCs
and networks attached to the Transit Gateway.
You can create multiple VPN or Direct Connect attachments for high
availability and failover between your data center and Transit Gateway.

Configure route tables to define traffic flow between connections
Here are some more details on configuring route tables with AWS
Transit Gateway to define traffic flow between the connected networks:
Transit Gateway uses route tables to determine how traffic should flow
between the VPCs, VPNs, and Direct Connect attachments.

Some key points on configuring route tables:


By default, there is a default route table that allows full communication
between all attachments and VPCs.
You can create additional, custom route tables that can selectively allow
or deny traffic between resources.
Route tables can be associated with VPC or VPN/Direct Connect
attachments to control which networks they can communicate with.
Each route table can have multiple route table associations and
propagations.
Associations determine which attachments can route traffic using the
route table. Propagations automatically add routes to the associated
attachments.
Routes can be manually added, for example, to route traffic for a
particular VPC subnet to an internet gateway.

You can create complex network segmentation policies by leveraging
multiple route tables.
For example, you can create tiers like Public, Private, Restricted and
assign VPC subnets to them via route tables.
Route priorities determine which route takes effect if there are multiple
routes to a destination.
By leveraging custom route tables, you can dial in fine-grained control
over how traffic flows between your connected networks using Transit
Gateway.
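For example, a custom route table, an association, and a static route can be
created from the CLI (a sketch; all IDs in angle brackets are placeholders):

aws ec2 create-transit-gateway-route-table --transit-gateway-id <transit gateway ID>
aws ec2 associate-transit-gateway-route-table --transit-gateway-route-table-id <TGW route table ID> --transit-gateway-attachment-id <attachment ID>
aws ec2 create-transit-gateway-route --transit-gateway-route-table-id <TGW route table ID> --destination-cidr-block 10.20.0.0/16 --transit-gateway-attachment-id <attachment ID>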

Share the Transit Gateway with other accounts


Transit Gateways can be shared with other AWS accounts to allow inter-
account connectivity. Here are some key points on sharing
When you create a Transit Gateway, you can enable sharing with other
accounts. This allows accounts you authorize to attach their VPCs.
To enable sharing, you need to provide the account ID or organization
ARN with which the Transit Gateway will be shared.
You can share the Transit Gateway only within the same AWS
organization if you have enabled Resource Access Manager.
The owner account has full control over the Transit Gateway. Shared
accounts have limited privileges.
Shared accounts can view and work only with their own VPC
attachments and route tables.
To simplify management, shared accounts can be provided access via
IAM to work with attachments and routes.
Sharing is not transitive: only the owner account can share the TGW, so if
Account A shares the TGW with Account B, B cannot re-share it with Account C.

For security, enable VPC route table propagation sparingly for shared
accounts.
Use AWS RAM resource shares to share Transit Gateways with other accounts; to
connect networks across Regions, peer Transit Gateways using inter-Region peering.
By sharing Transit Gateways, you can significantly simplify
connectivity and reduce provisioning time across different accounts in
your organization. But balance the convenience with appropriate access
controls.
AWS provides an easy-to-use wizard in the console to guide you through
the configuration process.

Transit Gateway:
A transit gateway is a network transit hub that you can use to
interconnect your virtual private clouds (VPCs) and on-premises
networks. As your cloud infrastructure expands globally, inter-Region
peering connects transit gateways together using the AWS Global
Infrastructure. All network traffic between AWS data centers is
automatically encrypted at the physical layer.

Here we’re doing hands-on based on the below diagrams:

Select VPC from AWS console and Create VPC

Then click on Create VPC.
In a similar way, create the other 2 VPCs, naming them VPC B and VPC C,
with 10.20.0.0/16 and 10.30.0.0/16 as CIDR blocks respectively.

From the diagram, we can see VPC A is public and VPC B & C are
private. So, we need to configure an Internet Gateway for VPC A.
Click on Internet Gateway from the LHS panel and click on Create
internet gateway.

Now select the newly created IGW and select Attach to VPC from
Actions

Select VPC A from drop down and click on Attach internet gateway

Next, we need to create Subnets, for that click on Subnets from the LHS
panel and click on create subnet for VPC A as per below

Likewise, we have to create 3 subnets, one for each VPC. I have created them
with the info below:
VPC-A-Public-Subnet1
10.10.1.0/24
VPC-B-Private-Subnet1
10.20.1.0/24
VPC-C-Private-Subnet1
10.30.1.0/24
Now we need to add the route tables. Click on Route Tables from the
LHS panel and click on Create route table.

Next, we have to associate the subnet with routing table. For that select
VPC-A-Route -> Click on Subnet associations -> Edit subnet
associations, then select VPC-A-Public-Subnet1 ->Save associations.

Do the same for VPC-B-Route and VPC-C-Route.


Select the VPC-A-Route and go to Routes->Edit routes and add as per
below , then click on save changes.

We're all set with the VPC part. Now select EC2 from the AWS console and
create 2 instances as per the above diagram.

Configure Network Settings as below

And launch the instance.


Now create VPC-B-Private as below

The rest of the settings are default or the same as for VPC-A-Public. Then
create one more instance in the same way as VPC-B-Private.
Also create one more inbound rule in VPC-A-Public's security group as
shown below

Once the instances are up and running, take the SSH connection and
log in as root. Then from VPC-A-Public, check if the private IPs of
VPC-B-Private and VPC-C-Private are reachable. They should not be
reachable, as shown below

Now we need to connect VPCs among each other.


Go to VPC and click on Transit Gateway.
Create Transit Gateway as shown below

Select Transit gateway attachment from LHS panel and create Transit
gateway attachment for every VPCs.
Give a name and select the transit gateway as shown below:

Then configure the attachment as below:

In the same way, create another 2 attachments for VPC B and VPC C.


Select Transit gateway route tables from the LHS panel and go to the
routes; we can see the VPC CIDRs have been listed there.

Now go to Route Tables, select VPC A and add route as shown below:

Update the same for VPC B and VPC C
Now check the connectivity from VPC-A-Public: check if the private
IPs of VPC-B-Private and VPC-C-Private are reachable.

We can see they are reachable. This is how the Transit Gateway works!

Conclusion:
AWS Transit Gateway simplifies cloud network architectures by acting
as a hub to interconnect your VPCs, VPNs, and data centers. It
eliminates complex mesh topologies and provides easy scalability,
centralized management, and secure network segmentation. As your
cloud footprint grows, Transit Gateway is key to maintaining a simple,
efficient, and secure network topology.

CREATE VPC ENDPOINT FOR S3 BUCKET IN
AWS

By default, all the communication between servers (whether local or on an
AWS EC2 instance) and S3 is routed through the internet. Even though
EC2 instances are also provided by AWS, all requests from EC2 to S3
route through the public internet. Therefore, we will be charged for all
this data transmission.
AWS S3:
AWS S3 (Simple Storage Service) is one of the most well-known
services being offered by aws. It provides a reliable, global and
inexpensive storage option for large quantities of data. It can be used to
store and protect any amount of data for a range of use cases, such as
websites, mobile applications, backup and restore, archive, enterprise
applications, IoT devices, and big data analytics.

Why do we need VPC Endpoint for S3:

Here VPC Endpoint for S3 comes to the rescue. VPC Endpoint for S3
provides us a secure link to access resources stored on S3 without
routing through the internet. AWS doesn’t charge anything for using this
service.
VPC Endpoint:
A VPC endpoint enables us to privately connect our VPC to supported
AWS services without requiring an internet gateway, NAT device, or VPN
connection. Instances in our VPC do not require public IP
addresses to communicate with AWS services.
Types of VPC Endpoints:
1. Interface Endpoint: It is an elastic network interface with a private
IP address from the IP address range of your subnet that serves as an
entry point for traffic destined to a supported service.
2. Gateway Endpoint: This type is used for connecting your VPC to
AWS services over a scalable and highly available VPC endpoint. Gateway
endpoints are used for services that would otherwise be reached over an
Internet Gateway, namely Amazon S3 and DynamoDB. Here we
will talk about the S3 VPC endpoint, which is a type of Gateway Endpoint.
By using VPC endpoints, you can create a more isolated and secure
environment for your AWS resources while still enabling them to access
the necessary services without exposing them to the public internet.

Step 1: Create a VPC named "kuku-vpc" and select IPv4 CIDR
10.0.0.0/16

Step 2: Create two subnets, one public and the other private. In the
public subnet give IPv4 CIDR as 10.0.0.0/24 and
in the private subnet give IPv4 CIDR as 10.0.1.0/24

Step 3: create internet gateway


An Internet Gateway (IGW) is a fundamental component of Amazon
Web Services (AWS) networking that provides a connection between
your Virtual Private Cloud (VPC) and the public internet. It allows
resources within your VPC to access and communicate with services and
resources on the internet and vice versa.

Step 4: Attach your igw to your VPC

Step 5: create route tables


A route table is a networking component used in Amazon Web Services
(AWS), particularly within Virtual Private Clouds (VPCs), to control the
routing of network traffic between different subnets and destinations. A

route table contains a set of rules (routes) that dictate how network
traffic is directed within the VPC

Here we create two route tables, one for the public subnet and another for
the private subnet.
Step 6: Subnet Association
Each subnet in a VPC must be associated with a route table. This
association determines how traffic is routed for resources within that
subnet.

Step 7: To provide internet access to resources within a subnet, you
would add a default route (0.0.0.0/0) with the target set to an Internet
Gateway (IGW). This allows traffic from the subnet to flow through the
IGW to the public internet.

Step 8: perform similar steps for private route table and do subnet
association to it.
Step 9: launch EC2 instances

Step 10: Add network settings to the EC2 instance: we select our own VPC
which we created in Step 1 and select the public subnet where public IP is
enabled, then launch our instance

Step 11: Similarly, we have to create a private EC2 instance where we
select the private subnet with public IP disabled, and launch the private
server.

Step 12: Create an S3 bucket named boon123 and upload some files to
it.

Our main aim is to access these files without using the internet on our
private server. We have not provided a public IP. If we are able to access
these files from our private server, then we have established the endpoint
connection correctly.
Step 13: Create an endpoint connection named ujjwal-endpoint; in the
service category we select AWS services

In services we select the Gateway Endpoint type, which is used for connecting
your VPC to AWS services over a scalable and highly available VPC
endpoint. Gateway endpoints are used for services that would otherwise be
reached over an Internet Gateway, namely Amazon S3 and
DynamoDB.

We select both of our route tables and give full access in the policy; with
that, our endpoint connection is established.
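The equivalent gateway endpoint can be created with a single CLI call (a sketch;
the VPC and route table IDs are placeholders, and the service name assumes the
us-east-1 Region, so adjust it to your Region):

aws ec2 create-vpc-endpoint --vpc-id <VPC ID> --vpc-endpoint-type Gateway --service-name com.amazonaws.us-east-1.s3 --route-table-ids <public route table ID> <private route table ID>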

Step 14: First, we connect to our public server. After connecting
successfully, we proceed to configure the AWS CLI on the
Amazon Linux instance.

Step 15: After configuration, we run commands to access our S3
bucket and check its contents. This provides a quick way to
obtain an overview of the objects stored within a bucket without needing
to use the AWS Management Console.

#aws s3 ls s3://(bucket name)

Step 16: Then we have to check the S3 content from our private
server. On the public server, we create a file for the private server's key
(.pem key) using the command below, copy the private key contents into the
newly created .pem file, and after that restrict its permissions:

vim filename.pem
chmod 600 filename.pem

Then run the SSH command to connect to the private server.
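The SSH command itself looks like this (a sketch; the key file name and private IP
are placeholders, and ec2-user is the default Amazon Linux user):

ssh -i filename.pem ec2-user@<private IP of the private server>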

Step 17: Then once again we have to configure the AWS CLI on the private
server

run same command again

#aws s3 ls s3://(bucket name)

We can access the files present in our S3 bucket from a private server
without using the internet, through the established endpoint connection.

Conclusion
In conclusion, utilizing an AWS S3 VPC endpoint offers a secure and
efficient means of accessing S3 buckets from within an Amazon Virtual
Private Cloud (VPC). By establishing a direct and private connection
between resources in the VPC and S3 without traversing the public
internet, VPC endpoints enhance security and reduce latency. This

setup ensures that data transfers to and from S3 remain within the AWS
network, mitigating exposure to potential security threats and optimizing
performance. Implementing S3 VPC endpoints is therefore a
recommended best practice for organizations seeking to maximize the
security and efficiency of their AWS infrastructure.

Security Groups and NACL
This section covers Security Groups and Network Access Control Lists in
AWS, to understand when to use each of them and when not to.
Let's start with the basic definitions.

Security Group — Security Group is a stateful firewall to the


instances. Here stateful means, security group keeps a track of the State.
Operates at the instance level.

Network Access Control List — NACL is stateless, it won’t keep


any track of the state. Operates at Subnet level.

Security Group and NACL Basic Architecture in AWS

Security Group:
Security Group is a stateful firewall which can be associated with
Instances. Security Group acts like a Firewall to Instance or Instances.
Security Group will always have a hidden Implicit Deny in both Inbound
and Outbound Rules. So, we can only allow something explicitly, but not
deny something explicitly in Security Groups.

Default Security Group:
By default, a Security Group is like:
When we talk about the default Security Group, there are two things to
discuss — the AWS-created default SG and the user-created custom SG.
AWS creates a default SG when it creates a default VPC — in this
security group they will add an inbound rule which says all Instances in
this Security Group can talk to each other.
Any Security Group created by a user explicitly wouldn't contain this
inbound rule allowing communication between the instances;
we should explicitly add it if required.
Both in the AWS created SG and User Created Custom SG, the
Outbound Rules would be the same — which allows ALL TRAFFIC
out.
We cannot add a Deny Rule, both in Inbound and Outbound Rules as
there’s a hidden default Implicit Deny Rule in Security Groups. All we
can do is allow which is required, everything else which isn’t allowed by
us is blocked.
A default security group that is created by default in the default VPC by
AWS looks like this —

Default Security Group Inbound Rules

Default Security Group Outbound Rules.

Security Group Features:


There are two main features which will make Security Groups different
from NACLs —

 Stateful Firewall
 Connection Tracking

Stateful Firewall:
Stateful means — maintain the state of connection so that you introduce
yourself only once, not every time you start talking — think TCP
session, once established, they start talking till one of them says Finish or
Reset.
The reason why a Security Group is called a stateful firewall is that a
Security Group basically maintains the state of a connection, meaning that
if an instance sends a request, the response traffic from outside is
allowed back irrespective of the inbound rules, and vice versa.
Example: If my security group inbound rule allows NO TRAFFIC and
outbound rule allows ALL TRAFFIC and I visit a website on my
instance, the response from the WebServer back to my instance will be
allowed even though the inbound rule denies everything.

Security Groups achieve this by leveraging something known as
Connection Tracking, which we will be discussing shortly.

Connection Tracking:
Security Groups use Connection Tracking to keep track of connection
information that flows in and out of an instance, this information
includes — IP address, Port number and some other information(for
some specific protocols).
A Security Group needs to track a connection only in this case — if
there's no inbound/outbound rule that allows everything. If we
have allowed ALL traffic from outside and ALL traffic to outside, it need
not track anything, because whatever comes and goes is allowed anyway.

Security Group Rule Fields:

Editing Security Group Inbound & Outbound Rules

Type — Type of traffic, which can be TCP, UDP, or ICMP. The Type field
provides the well-used protocols; when selected, it auto-fills the Protocol
field. You may also select a Custom Protocol Rule, which allows you to
select the Protocol field from a wide range of protocols.
Protocol — As mentioned already, if you select a Custom Protocol Rule
in Type field, you can select a Protocol from the available Protocol List.

Port Range — You can specify a single port or a range of ports like this
5000–6000.
Source[Inbound Rules only] — Can be Custom — a single IP address
or an entire CIDR block, anywhere — 0.0.0.0/0 in case of IPv4, My IP
Address — AWS auto-detects your Public IP address. Destination can
only be mentioned in Outbound Rule.
Destination [Outbound Rules only] — Can be Custom — a single IP
address or an entire CIDR block, anywhere — 0.0.0.0/0 in case of IPv4,
My IP Address — AWS auto-detects your Public IP address. Source can
only be mentioned in Inbound Rule.
Description — This field is optional. You can add a description which
helps you to keep a track of which rule is for what.

NACL — Network Access Control List:


NACLs are stateless firewalls which work at the subnet level, meaning
NACLs act like a firewall for an entire subnet or subnets. A default
NACL allows all inbound and outbound traffic. Unlike
Security Groups, in NACLs we have to explicitly tell what to deny in the
inbound and outbound rules (every NACL also ends with a catch-all * rule
that denies any traffic not matched by a numbered rule).

Default NACL:
By default, a NACL is like:
When we create a VPC, a default NACL will be created which will allow
ALL Inbound Traffic and Outbound Traffic. If we don’t associate a
Subnet to NACL, the default NACL in that VPC will be associated to
that Subnet. A default NACL looks like this —

NACL Features:
Statelessness:
Unlike Security Groups, NACL doesn’t maintain any track of
connections which makes it completely Stateless, meaning — if some
traffic is allowed in NACL Inbound Rule, the response Outbound traffic
is not allowed by default unless specified in the Outbound Rules.

NACL Rule Fields:

Editing NACL Inbound Rules

Editing NACL Outbound Rules

Rule Number — Rules are evaluated starting with the lowest numbered
rule. If a rule matches, it gets executed without checking for any other
higher numbered rules.
Type — Type of traffic, which can be TCP, UDP, or ICMP. The Type field
provides the well-used protocols; when selected, it auto-fills the Protocol
field. You may also select a Custom Protocol Rule, which allows you to
select the Protocol field from a wide range of protocols.
Protocol — As mentioned already, if you select a Custom Protocol Rule
in Type field, you can select a Protocol from the available Protocol List.
Port Range — You can specify a single port or a range of ports like this
5000–6000.
Source [Inbound Rules only] — Can be a Single IP Address or an
entire CIDR block. Destination can only be mentioned in Outbound
Rule.
Destination[Outbound Rules only] — Can be a Single IP Address or
an entire CIDR block. Source can only be mentioned in Inbound Rule.
Allow/Deny — Specifies whether to allow or deny traffic.

Security Group and NACL Key Differences:


SG and NACL Differences

Use Case:
I will give an example to make you understand when to use Security
Group and when to use NACL —
Let's say you have allowed SSH access to an instance for a user on the Dev
team, he is connected to it and actively accessing it, and for some
reason (realizing that the user is involved in some malicious activity) you
want to remove his SSH access.
In this case you have two choices —
1) Remove that user's SSH inbound allow rule from the Security Group's
inbound rules.
2) Add an NACL rule explicitly denying traffic from his IP address.
If you go with the first one, he would not lose his existing SSH connection;
this is due to the connection tracking behavior of Security Groups. If you go
with the latter choice, the NACL would immediately block his connection.

So, in this case, it’s better to use a NACL Deny Rule rather than deleting
a Security Group allow Rule.
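For illustration, an explicit NACL deny entry for a single IP can be added from
the CLI (a sketch; the NACL ID and user IP are placeholders, protocol 6 is TCP,
and the rule number just needs to be lower than the rule that currently allows the
traffic so it is evaluated first):

aws ec2 create-network-acl-entry --network-acl-id <NACL ID> --ingress --rule-number 90 --protocol 6 --port-range From=22,To=22 --cidr-block <user IP>/32 --rule-action deny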

NACL & SG Default Quota:


NACL :
 NACLs Per VPC — 200
 Rules per NACL — 20

Key Points:
A single NACL can be associated with multiple subnets; however, a single
subnet cannot be associated with multiple NACLs at the same time, as there
could be multiple deny rules that contradict each other.

Security Groups:
 VPC Security Groups per Region — 2500
 Rules Per Security Group — 60 Inbound and 60 Outbound.

Key Points:
A single Security Group can be associated with multiple instances, and
unlike NACLs, a single instance can be associated with multiple Security
Groups, as there are no explicit deny rules that could contradict each
other here.
These quota limits are the default ones, if you want to increase the limit
you can request AWS to do so. Some quota limits in the VPC are strict
and cannot be increased.

Elastic Block Store
Amazon Elastic Block Store (EBS): Reliable Block Storage

In today’s digital age, businesses rely heavily on data storage solutions to


store and manage their critical information. One such solution that has
gained significant popularity is Amazon Elastic Block Store (EBS). EBS
is a cloud-based block storage service offered by Amazon Web Services
(AWS) that provides secure and reliable storage for your business’s data.

Understanding block storage and its importance for


businesses
Before delving into the benefits and features of Amazon EBS, it’s
crucial to understand what block storage is and why it is essential for
businesses. Block storage is a type of data storage that allows data to be
stored and retrieved in fixed-sized blocks or chunks. This method
provides businesses with more flexibility and control over their data, as
it allows for direct access to specific data blocks.
Block storage is especially crucial for businesses that deal with large
amounts of data or require high-performance storage solutions. It
enables faster data access, efficient data replication, and increased
storage capacity. With block storage, businesses can ensure the integrity

and availability of their data, which is vital for their day-to-day
operations.

Benefits of using Amazon EBS for block storage:


Amazon EBS offers several benefits that make it an ideal choice for
businesses looking for secure and dependable block storage.
Firstly, EBS provides high durability and availability. It automatically
replicates data within a specific Availability Zone (AZ), ensuring that
your data remains safe and accessible even in the event of a hardware
failure. This level of redundancy guarantees business continuity and
minimizes the risk of data loss.
Secondly, EBS allows for easy scalability. Businesses can increase or
decrease their storage capacity as per their requirements without any
disruption to their operations. This flexibility enables businesses to adapt
to changing storage needs, whether it’s handling increased data volumes
or downsizing storage requirements.
Furthermore, Amazon EBS offers snapshot capabilities, allowing
businesses to create point-in-time copies of their volumes. These
snapshots can be used for backup, disaster recovery, or even to create
new volumes in different regions, providing an additional layer of data
protection.

Key features and capabilities of Amazon EBS


Amazon EBS comes with a range of features and capabilities that
enhance its functionality and make it a powerful storage solution for
businesses.
One key feature is the ability to choose between different volume types
based on your workload requirements. EBS offers General Purpose SSD
(gp2), Provisioned IOPS SSD (io1), Throughput Optimized HDD (st1),
and Cold HDD (sc1) volume types. Each type is optimized for specific
use cases, ensuring that businesses can select the most suitable option for

their storage needs.
Another important capability of Amazon EBS is its support for Elastic
Volumes. Elastic Volumes allows businesses to adjust the size,
performance, and type of their EBS volumes without interrupting their
EC2 instances. This feature enables businesses to optimize their storage
resources and adapt to changing workload demands seamlessly.
Additionally, Amazon EBS provides encryption at rest, ensuring that
your data is protected from unauthorized access. By leveraging AWS
Key Management Service (KMS), businesses can encrypt their EBS
volumes and manage encryption keys securely. This feature is
particularly crucial for businesses that handle sensitive or confidential
data.

Types of Amazon EBS volumes and their use cases


Amazon EBS offers different types of volumes, each designed to cater to
specific use cases. Understanding these volume types can help businesses
make informed decisions when selecting the most appropriate storage
solution.
General Purpose SSD (gp2) volumes are suitable for a wide range of
workloads, including boot volumes, small to medium-sized databases,
and development/test environments. These volumes provide a balance of
price and performance, making them a popular choice for many
businesses.
Provisioned IOPS SSD (io1) volumes are designed for applications that
require consistently high performance and low latency. Use cases include
large databases, data warehousing, and applications with high transaction
rates. io1 volumes allow businesses to specify the desired level of
input/output operations per second (IOPS), providing predictable
performance for critical workloads.
Throughput Optimized HDD (st1) volumes are ideal for frequently
accessed, large sequential workloads, such as log processing, big data,

and data warehouses. These volumes deliver high throughput and low-
cost storage, making them cost-effective solutions for data-intensive
applications.
Cold HDD (sc1) volumes are designed for infrequently accessed
workloads, such as backups and disaster recovery. These volumes offer
the lowest cost per gigabyte and are suitable for data that does not require
frequent access.
By understanding the characteristics and use cases of each volume type,
businesses can optimize their storage infrastructure and ensure optimal
performance and cost-efficiency.

How to create and attach an Amazon EBS volume to an EC2


instance
Creating and attaching an Amazon EBS volume to an EC2 instance is a
straightforward process.
To create a new EBS volume, you can navigate to the EC2 dashboard in
the AWS Management Console and click on “Volumes” in the left-hand
menu. From there, you can choose the volume type, size, and other
parameters based on your requirements. Once the volume is created, you
can attach it to an EC2 instance by selecting the instance and clicking on
“Actions” and then “Attach Volume.”
Alternatively, you can use the AWS Command Line Interface (CLI) or
AWS SDKs to create and attach EBS volumes programmatically. This
method is particularly useful for businesses that have automated
deployment processes or require a high level of customization.
When attaching an EBS volume to an EC2 instance, the instance must be
in the same Availability Zone as the volume; a volume cannot be attached
to an instance in a different AZ.
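As a sketch of the CLI route mentioned above (the size, volume type, Availability
Zone, and IDs are example values):

aws ec2 create-volume --availability-zone us-east-1a --size 20 --volume-type gp2
aws ec2 attach-volume --volume-id <volume ID> --instance-id <instance ID> --device /dev/sdf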

Best practices for securing and optimizing Amazon EBS

Securing and optimizing Amazon EBS is crucial to ensure the integrity
and performance of your storage infrastructure. Here are some best
practices to consider:

 Implement encryption at rest: Enable encryption for your EBS


volumes using AWS KMS. This ensures that your data is protected
from unauthorized access, even if the physical volume is
compromised.

 Regularly back up your EBS volumes: Take regular snapshots


of your EBS volumes to create backups. These snapshots can be used
for disaster recovery or to create new volumes in different regions,
providing an additional layer of data protection.

 Monitor and optimize performance: Utilize Amazon


CloudWatch to monitor the performance of your EBS volumes. By
tracking metrics such as volume latency and throughput, you can
identify bottlenecks and optimize your storage configuration for better
performance.

 Use Elastic Volumes to adjust storage capacity: Leverage


Elastic Volumes to resize your EBS volumes as needed. This allows
you to scale your storage resources based on actual usage, reducing
costs and optimizing performance.

 Implement access controls: Use AWS Identity and Access


Management (IAM) to manage user access and permissions for your
EBS volumes. By granting least privilege access, you can ensure that
only authorized users can modify or access your storage resources.
By following these best practices, businesses can enhance the security,
performance, and cost-effectiveness of their Amazon EBS
implementation.

Monitoring and managing Amazon EBS performance

Monitoring and managing the performance of your Amazon EBS
volumes is essential to ensure optimal storage performance and identify
any potential issues. Amazon CloudWatch provides a range of metrics
and alarms that can help you monitor the health and performance of your
EBS volumes.
Some key metrics to monitor include volume read/write operations,
volume latency, and volume throughput. By tracking these metrics, you
can identify any performance bottlenecks and take appropriate actions to
optimize your storage configuration.
In addition to monitoring, Amazon EBS provides features such as Elastic
Volumes and Enhanced Monitoring that allow you to proactively manage
and optimize your storage resources. Elastic Volumes enables you to
adjust the size and performance of your volumes without interrupting
your EC2 instances, providing flexibility and cost optimization.
Enhanced Monitoring provides additional insights into the performance
of your EBS volumes, allowing you to fine-tune your storage
configuration for optimal performance.

Backup and disaster recovery strategies with Amazon EBS


Backup and disaster recovery are critical aspects of any business’s data
storage strategy. Amazon EBS offers several features and capabilities
that enable businesses to implement robust backup and disaster recovery
strategies.
One such feature is the ability to create snapshots of your EBS volumes.
Snapshots are point-in-time copies of your volumes that can be used to
create new volumes or restore existing volumes in the event of data loss
or system failure. By regularly taking snapshots and storing them in
different regions, businesses can ensure data availability and recovery in
case of a disaster.
In addition to snapshots, businesses can also leverage AWS services such
as Amazon S3 and AWS Storage Gateway for long-term data backup and

archiving. By integrating EBS with these services, businesses can create
cost-effective and scalable backup solutions that meet their specific
requirements.
It’s important to note that a comprehensive backup and disaster recovery
strategy should include regular testing and validation of the recovery
process. By periodically restoring snapshots and verifying data integrity,
businesses can ensure the effectiveness of their backup and recovery
procedures.
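For example, a snapshot can be taken and then copied to another Region with two
CLI calls (a sketch; the IDs and Regions are placeholders):

aws ec2 create-snapshot --volume-id <volume ID> --description "nightly backup of data volume"
aws ec2 copy-snapshot --region ap-south-1 --source-region us-east-1 --source-snapshot-id <snapshot ID> --description "cross-Region copy for DR"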
Case studies: Real-world examples of businesses benefiting from
Amazon EBS

To illustrate the real-world benefits of Amazon EBS, let’s take a look at


a couple of case studies:
 Company XYZ, a leading e-commerce platform, relies on Amazon
EBS to store and manage its extensive product catalog. By utilizing
Provisioned IOPS SSD (io1) volumes, Company XYZ ensures fast
and predictable performance for its database-intensive workloads. The
ability to scale storage capacity seamlessly has enabled the company
to handle increased traffic and data volumes without impacting user
experience.

 Company ABC, a healthcare provider, uses Amazon EBS to store


sensitive patient data securely. By leveraging encryption at rest and
regular snapshots, Company ABC ensures data privacy and enables
quick recovery in case of a system failure. The high durability and
availability of EBS have provided the company with peace of mind,
knowing that their critical patient information is protected and
accessible at all times.
These case studies highlight how Amazon EBS has helped businesses
across various industries achieve secure and dependable block storage
solutions.

Conclusion: Why Amazon EBS is the ideal solution for
secure and dependable block storage
In conclusion, Amazon Elastic Block Store (EBS) is a powerful and
versatile block storage solution that offers businesses the security,
reliability, and scalability they need to drive their success. With its range
of volume types, robust features, and seamless integration with other
AWS services, EBS provides businesses with the flexibility and control
they require for their data storage needs.
By understanding the benefits, features, and best practices associated
with Amazon EBS, businesses can leverage this solution to optimize
their storage infrastructure, enhance data security, and ensure high-
performance storage for their critical workloads.
So, if you’re looking for a secure and dependable block storage solution
that can drive your business success, look no further than Amazon EBS.
Take advantage of its capabilities, implement best practices, and unlock
the full potential of your data storage infrastructure.

Elastic File System
Mount Elastic File System (EFS) on EC2

Well, you've come to the right place! In this guide, we'll go through the
steps to create an Elastic File System, we'll launch and configure two
Amazon EC2 instances, we'll practice mounting the EFS on both
instances by logging into each instance via SSH authentication, and we'll
practice sharing files between the two instances.

Introduction
What’s Amazon Elastic File System (EFS) ?

 Amazon EFS is a fully managed, scalable file storage


service designed to provide shared access to files across multiple
Amazon EC2 instances.

 It is particularly useful for applications and workloads that require


shared file storage in a cloud environment.

Key features and aspects of Amazon EFS:


 Scalability: Amazon EFS can scale automatically as your storage
needs grow, accommodating varying workloads without requiring you
to provision additional capacity.

 Shared File Storage: EFS allows multiple EC2 instances to access

the same file system concurrently, providing a simple and scalable
solution for applications that require shared access to files.

 Performance: It is designed to deliver low-latency


performance, suitable for a wide range of applications, including big
data analytics, media processing, and content management.

 Ease of Use: EFS is easy to set up and manage, eliminating the need
for manual intervention in capacity planning or performance tuning.

 Compatibility: It supports the Network File System version 4


(NFSv4) protocol, making it compatible with Linux-based EC2
instances.

 Security: EFS supports encryption of data at rest and in transit,


helping you maintain the security of your file system.

 Lifecycle Management: You can manage the lifecycle of your


files with the EFS Infrequent Access storage class, which provides a
lower-cost storage option for files that are accessed less frequently.

Architecture Diagram

Task Steps
Step 1: Sign in to AWS Management Console
 On the AWS sign-in page, enter your credentials to log in to your
AWS account and click on the Sign in button.
 Once signed in to the AWS Management Console, set the default
AWS Region to US East (N. Virginia) us-east-1

Step 2: Launching two EC2 Instances


1. Make sure you are in the N. Virginia Region.

2. Navigate to the Services menu at the top, then click on EC2 in
the Compute section.

3. Click on Instances from the left side bar and then click on Launch
instances.

4. Number of Instances: Enter 2 on the right side under summary

5. Name: Enter MyEC2

6. For Amazon Machine Image (AMI): Search for Amazon Linux 2 AMI in
the search box and click on the Select button.

7. For Instance Type: select t2.micro

8. For Key pair: Select Create a new key pair Button

 Key pair name: MyEC2Key


 Key pair type: RSA
 Private key file format: .pem

9. Select Create key pair Button.

10. In Network Settings Click on Edit:

 Auto-assign public IP: Enable


 Select Create new Security group
 Security Group Name: Enter EFS Security Group
 To add SSH:
 Choose Type: SSH
 Source: Anywhere
For NFS:
 Click on Add security group rule
 Choose Type: NFS
 Source: Anywhere

11. Keep everything else as default and click on the Launch instance button.

12. Select View all Instances to view the instances you created.

13. Launch Status: Your instance is now launching. Click on the
instance ID and wait until the initialization status changes to Running.

14. Click on each instance and enter a name as MyEC2–1 and MyEC2–2.

15. Take note of the IPv4 Public IP Addresses of the EC2 instances and
save them for later.

Step 3: Creating an Elastic File System


Navigate to EFS by clicking on the Services menu at the top. Click
on EFS in the Storage section.

2. Click on Create file system

3. Click on Customize button.

4. Enter the details below, Type the Name as EFS-Demo and make
sure default VPC and default Regional options are selected.
5. Uncheck the option of Enable automated backups

6. Leave everything by default and click on the Next button present
below.
7. Network Access:
VPC:
 An Amazon EFS file system is accessed by EC2 instances running
inside one of your VPCs.
 Choose the same VPC you selected while launching the EC2
instance (leave as default).

Mount Targets:
 Instances connect to a file system by using a network interface
called a mount target. Each mount target has an IP address,
which we assign automatically or you can specify.

 We will select all the Availability Zones (AZ’s) so that the EC2
instances across your VPC can access the file system.
 Select all the Availability Zones, and in the Security Groups,
select EFS Security Group instead of the default value.
 Make sure you remove the default security group and select the EFS
Security Group, otherwise you will get an error in further steps.

 Click on Next button.

8. File system policy is optional; leave it as is and click on the
Next button.
9. Review and Create: Review the configuration below before
proceeding to create your file system. Click on Create button.
10. Congratulations on creating the EFS File system, it’s time
to mount your EC2 Instance with the EFS File system.

Step 4: Mount the File System to MyEC2–1 Instance


 Select the MyEC2–1 Instance and copy the IPv4 Public IP.
 SSH into the EC2 Instance
 Once the instance is launched, select the EC2 Instance Connect option and
click on the Connect button. (Keep everything else as default.)

 A new tab will open in the browser where you can execute the CLI
Commands.

3. Switch to root user

sudo -s

4. Run the updates using the following command:

yum -y update

5. Install the amazon-efs-utils package (the EFS mount helper):

yum install -y amazon-efs-utils

6. Create a directory by the name efs

mkdir efs

7. We have to mount the file system in this directory.

8. To do so, navigate to the AWS console and click on the created file
system. On the top-right corner, click on View details then click
on Attach

 Copy the command of Using the EFS mount helper.
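The command shown in the Attach dialog will look roughly like the example
below (the file system ID fs-0123456789abcdef0 is a placeholder; use the
ID shown in your own console). Run it from the directory that contains
the efs folder:

mount -t efs -o tls fs-0123456789abcdef0:/ efs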

9. To display information for all currently mounted file systems, we’ll
use the command below:

df -h

10. Create a directory in our current location:

mkdir aws

Step 5: Mount the File System to MyEC2–2 Instance


 Select the MyEC2–2 Instance and copy the IPv4 Public IP.
 SSH into the EC2 Instance

 Select the EC2 Instance Connect option and click
on the Connect button. (Keep everything else as default.)

 A new tab will open in the browser where you can execute the CLI
Commands.
3. Switch to root user

sudo -s

4. Run the updates using the following command:

yum -y update

5. Install the amazon-efs-utils package (the EFS mount helper).

yum -y install amazon-efs-utils

6. Create a directory with the name efs

mkdir efs

7. We have to mount the file system in this directory.


8. To do so, navigate to the AWS console and click on the created file
system. On the top-right corner, click on Attach.

 Copy the command of Using the EFS mount helper into the CLI.

 To display information for all currently mounted file systems, we’ll
use the command:

df -h

Step 6: Testing the File System


1. SSH into both instances in a side-by-side view on your machine, if
possible.
2. Switch to root user

sudo -s

3. Navigate to the efs directory on both servers using the command

cd efs

4. Create a file in any one server.

touch hello.txt

5. Check the file using the command

ls -ltr

6. Now go to the other server and give the command

cd efs

7. You can see the file created on this server as well. This proves that
our EFS is working.
8. You can try creating files (touch command) or directories (mkdir
command) on either server to continue to grow the EFS implementation.

Demystifying AWS Load Balancers: Understanding
Elastic, Application, Network, and Gateway Load
Balancers
In the realm of cloud computing, load balancing plays a crucial role in
distributing incoming traffic across multiple targets to ensure high
availability, fault tolerance, and scalability of applications. In the
high-traffic world of cloud applications, smooth operation and optimal
performance depend on a skilled conductor: the load balancer. Amazon Web
Services (AWS) offers a robust suite of load balancers, each tailored to
different use cases and requirements. In this blog, we’ll delve into the
distinctions between AWS Elastic Load Balancer (ELB), Application Load
Balancer (ALB), Network Load Balancer (NLB), and Gateway Load Balancer
(GWLB), exploring their features, examples, and dissimilarities.
Additionally, we’ll shed light on the flow hash algorithm used by AWS
load balancers to route traffic efficiently.
The Balancing Act: What They Do
At their core, all these load balancers perform the same essential
function: distributing incoming traffic across a pool of resources,
ensuring no single server gets overwhelmed. This enhances application
availability and responsiveness for your users.
1. Elastic Load Balancer (ELB):
 Description: AWS Elastic Load Balancer (ELB) is the original load
balancer service offered by AWS, providing basic traffic distribution
across multiple targets within a single AWS region. It is a simple and
cost-effective way to distribute traffic across multiple EC2 instances.
ELB supports both HTTP and TCP traffic.

 Example: Distributing incoming traffic across multiple EC2
instances running web servers to ensure high availability and fault
tolerance for a web application.
Features:
 Simple to configure and manage
 Supports HTTP and TCP traffic
 Can be used to distribute traffic across multiple EC2 instances
 Offers a variety of features, including health checks, sticky sessions,
and SSL termination.
Use Cases:
 Distributing traffic to web servers
 Load balancing for TCP applications, such as databases and mail
servers
 Providing SSL termination for web applications

2. Application Load Balancer (ALB):


 Description: AWS Application Load Balancer (ALB) operates at
the application layer (Layer 7) of the OSI model, enabling advanced
routing and content-based routing capabilities. ALB is a newer type
of load balancer that is designed for modern applications. It offers a
number of features that are not available in ELB, such as support for
HTTP/2, WebSockets, and container-based applications. ALB inspects
incoming requests based on factors like HTTP headers, path, or
cookies, which allows for intelligent routing based on application
logic. For instance, an ALB can direct traffic to a specific
server based on the user’s location or the type of request.
 Example: Routing traffic based on URL paths or hostnames to
different backend services, such as directing /api requests to a set of
API servers and /app requests to web servers.

Features:
 Supports HTTP/2, WebSockets, and container-based applications
 Offers a variety of features, including health checks, sticky sessions,
and SSL termination
 Can be used to distribute traffic across multiple EC2 instances,
containers, and Lambda functions
Use Cases:
 Load balancing for web applications
 Distributing traffic to microservices
 Load balancing for container-based applications
3. Network Load Balancer (NLB):
 Description: AWS Network Load Balancer (NLB) operates at the
transport layer (Layer 4) of the OSI model, offering ultra-low latency
and high throughput for TCP and UDP traffic. NLB is a high-
performance load balancer that is designed for use with TCP
applications. It offers very low latency and high throughput. This
prioritizes speed and efficiency, making it ideal for high-volume,
low-latency applications like gaming servers or chat platforms.
 Example: Load balancing traffic for TCP-based services such as
databases, FTP servers, and gaming applications that require high
performance and minimal overhead. NLB is ideal for applications that
require low latency, such as gaming, financial trading, and video
streaming.
Features:
 Very low latency and high throughput
 Supports TCP traffic
 Can be used to distribute traffic across multiple EC2 instances

 Offers a variety of features, including health checks and sticky
sessions
Use Cases:
 Load balancing for TCP applications, such as gaming, financial
trading, and video streaming
 Distributing traffic to EC2 instances that are running in a VPC
4. Gateway Load Balancer (GLB):
 Description: The GLB is a versatile player, operating at the network
layer (layer 3) and passing traffic transparently to appliances. It acts as a central
gateway for managing virtual appliances like firewalls or intrusion
detection systems. It balances traffic across these appliances while
maintaining secure communication through VPC endpoints. AWS
Gateway Load Balancer (GLB) is designed for deploying, scaling,
and managing third-party virtual appliances such as firewalls,
intrusion detection systems (IDS), and encryption appliances. GLB is
a load balancer that is designed for use with VPC endpoints. It allows
you to load balance traffic to endpoints in a private VPC. GLB is
ideal for applications that require access to private resources, such as
databases and internal APIs.
Example: Deploying a third-party firewall appliance to inspect and
filter traffic between VPCs or between on-premises networks and the
AWS cloud.
Features:
 Load balances traffic to endpoints in a private VPC
 Supports HTTP and TCP traffic
 Can be used to distribute traffic across multiple endpoints
 Offers a variety of features, including health checks and sticky
sessions

Use Cases:
 Load balancing for applications that require access to private
resources
 Distributing traffic to endpoints in a private VPC
Similarities: A United Front
 High Availability: All load balancers ensure that even if
individual instances fail, traffic seamlessly flows to healthy ones,
keeping your application up and running.

 Scalability: They automatically adjust to traffic fluctuations,
scaling resources up or down as needed.

 Health Monitoring: They constantly monitor the health of target
instances and remove unhealthy ones from the pool.

Dissimilarities:
 Layer of Operation: ALB operates at Layer 7 (application layer),
allowing for content-based routing, while NLB operates at Layer 4
(transport layer), focusing on routing traffic based on IP addresses
and ports.

 Performance Characteristics: NLB offers ultra-low latency and
high throughput for TCP and UDP traffic, making it ideal for high-
performance applications, whereas ALB provides advanced routing
features and supports WebSocket and HTTP/2 protocols.

 Use Cases: ALB is suitable for modern application architectures,
microservices, and container-based environments, while NLB is
preferred for TCP-based workloads requiring high performance and
minimal overhead.

 Routing Intelligence: ALBs excel in application-level routing,
while NLBs prioritize speed and efficiency.

 Supported Protocols: ALBs handle HTTP/HTTPS traffic, while
NLBs work with TCP/UDP protocols.

 Virtual Appliance Management: GLBs are specifically
designed for managing and scaling virtual appliances.
The following table summarizes the key dissimilarities between the four
types of AWS load balancers:

[Table: dissimilarities between the four types of AWS load balancers]

Flow Hash Algorithm: The flow hash algorithm is used by AWS load
balancers to distribute incoming traffic across multiple targets while
maintaining session affinity for stateful protocols. It calculates a hash
value from specific attributes of each incoming request, such as the
source IP address, destination IP address, source port, destination
port, and protocol. That hash value then determines which target
receives the request, so packets belonging to the same flow are
consistently routed to the same target. Because the hash spreads flows
across all registered targets, traffic is distributed evenly without
adding meaningful overhead or overloading individual instances.
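As a rough illustration only (not AWS’s actual implementation), the idea
behind flow hashing can be sketched in a few lines of shell: hash the
5-tuple of a flow and map it onto one of the registered targets.

#!/bin/bash
# Hypothetical 5-tuple for one flow: source IP, destination IP, source port, destination port, protocol
TUPLE="203.0.113.10,10.0.1.25,49152,443,tcp"
TARGET_COUNT=3   # number of registered targets

# Hash the tuple and reduce the first 8 hex digits to an index into the target list
HASH=$(printf '%s' "$TUPLE" | md5sum | cut -c1-8)
INDEX=$(( 16#$HASH % TARGET_COUNT ))
echo "Flow $TUPLE -> target $INDEX"

Because the hash is deterministic, every packet of the same flow lands on
the same target, which is how session affinity is preserved.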
Examples
Example 1: Load balancing a web application
You can use an ALB to load balance traffic to a web application that is
running on multiple EC2 instances. The ALB will distribute traffic
evenly across the instances and, with sticky sessions enabled, can ensure
that requests from the same client are always sent to the same instance.
Example 2: Load balancing a TCP application
You can use an NLB to load balance traffic to a TCP application that is
running on multiple EC2 instances. The NLB will provide very low
latency and high throughput, making it ideal for applications that require
low latency, such as gaming, financial trading, and video streaming.
Example 3: Load balancing traffic to a private VPC
You can use a GLB to load balance traffic to a private VPC. This allows
you to load balance traffic to endpoints in a private VPC, such as
databases and internal APIs.
Choosing the Right Load Balancer: It All Depends
Selecting the optimal load balancer hinges on your application’s unique
requirements:

 ALB: Ideal for web applications requiring intelligent routing based
on application logic.
 NLB: Perfect for high-performance applications that prioritize speed
and low latency.
 GLB: The go-to choice for managing and scaling virtual appliances
within your network.
Conclusion: AWS offers a range of load balancing options, each
tailored to different use cases and requirements. By understanding the
distinctions between Elastic Load Balancer (ELB), Application Load
Balancer (ALB), Network Load Balancer (NLB), and Gateway Load
Balancer (GWLB), you can choose the right load balancing solution to
optimize the performance, availability, and scalability of your
applications in the AWS cloud. Additionally, the flow hash algorithm
employed by AWS load balancers ensures efficient traffic distribution
while maintaining session affinity, further enhancing the reliability and
performance of your application deployments.
When choosing a load balancer, it is important to consider the following
factors:
 The type of traffic that you need to load balance
 The latency and throughput requirements of your application
 The features that you need
By considering these factors, you can choose the right load balancer for
your application and ensure that your traffic is distributed evenly and
efficiently.

Create AWS Application Load Balancer (ALB)

If you are looking for a comprehensive guide on setting up an AWS
Application Load Balancer (ALB) with two EC2 instances, displaying
their IP addresses using a bash script, and demonstrating the load
balancer’s functionality, then you’re in the right place!
In this step-by-step guide, we will take you through the entire process,
starting with the basics and leading you through configuring the load
balancer, setting up the instances, and testing the load balancer’s
functionality.
By the end of this guide, you will clearly understand how to set up an
Application Load Balancer on AWS and use it to distribute traffic across
multiple instances.
Set Up EC2 Instances

Launch EC2 Instances:


 Go to the AWS Management Console and navigate to the EC2
dashboard.
 Click on “Launch Instance” and choose an Amazon Machine
Image (AMI) of your choice (e.g., Amazon Linux 2).
 Select an instance type, configure instance details, such as storage,
tags
 Configure security group (allow HTTP and SSH), and review.

 Go to Advanced Settings and in user data add the following bash
script to display the IP address of the instance:

#!/bin/bash
# install httpd (Amazon Linux 2)
yum update -y
yum install -y httpd
systemctl start httpd
systemctl enable httpd
echo "<h1>Hello World from $(hostname -f)</h1>" > /var/www/html/index.html

 Create instance
 Repeat this process to launch another EC2 instance.

Once the instances are ready, copy their IP addresses and paste them into
your browser to confirm the web server is working:

hitc-ec2-demo-01

hitc-ec2-demo-02

Assign Elastic IPs (Optional but recommended):

 Go to the “Elastic IPs” section in the EC2 dashboard.
 Allocate and associate an Elastic IP to each of your instances. This
ensures that your instances have static public IP addresses.
Set Up Load Balancer

Create a Load Balancer:


 Go to the EC2 dashboard and click on “Load Balancers” under the
“Load Balancing” section.
 Click on “Create Load Balancer” and choose “Application Load
Balancer”.

 Configure the load balancer name, such as hitc-alb-demo
 Scheme should be set to Internet-Facing
 IP Address Type to IPv4
 Network Mapping — select first 3 AZs in your selected region
(e.g. us-east-1a, us-east-1b, us-east-1c)

Security group — Click on the Create a new security group link and
create a security group for the ALB

Refresh and add the newly created hitc-sg-alb

 In Listeners and routing click on Create target group. Target type
should be set to Instances as we have 2 EC2 instances. Target
group name could be hitc-tg-alb. Protocol set to HTTP. IP Address
type should be set to IPv4 and Protocol version to HTTP1. Lastly,
health checks should be set to HTTP. Click Next.
 In Register Targets, select both EC2 instances and click on the
button Include as pending below, and then register the targets.

In the next window we can check that the targets have been successfully
registered.

Go back to your ALB setup, refresh and add the newly created target
group

Click Create Load Balancer

Wait a few moments until the Provisioning of the Load Balancer is
completed

Then go back to the Target Group that we previously created and check
that both registered targets are showing a Healthy status
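If you prefer the command line, target health can also be checked with
the AWS CLI (the target group ARN below is a placeholder for your own):

aws elbv2 describe-target-health \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/hitc-tg-alb/0123456789abcdef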

Test Load Balancer


 Select your load balancer, and copy its DNS name.
 Paste the DNS name into your browser.
 You should see the same page as before, but this time, the IP
addresses displayed should be those of the instances as served by
the load balancer.
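A quick loop of requests from a terminal (assuming curl is installed;
replace <your-alb-dns-name> with the DNS name you copied) makes the
round-robin behaviour easy to see, because the hostname in the response
alternates between the two instances:

for i in {1..6}; do
  curl -s http://<your-alb-dns-name>/
done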

Congratulations! You have successfully set up an AWS Application
Load Balancer with two EC2 instances, displayed their IP addresses
using a bash script, and demonstrated the load balancer’s functionality.
Closing Thoughts
As you wrap up, it’s worth remembering that mastering the AWS
Application Load Balancer (ALB) is key to optimising your cloud
infrastructure for scalability, reliability, and performance. This tutorial
has shown you how to set up an ALB, configure EC2 instances, and use
its features to distribute traffic efficiently.
It’s important to regularly review and optimise your configurations to
adapt to changing demands and ensure peak performance as you
continue your journey with AWS and cloud computing. Experiment with
different load balancing strategies, monitor your resources, and stay
updated with best practices to stay ahead in this dynamic field.

AUTO SCALING
Achieve High Availability with AWS Auto Scaling

Use Case:
Your company has a web server that has a volatile level of traffic. Your
company has to ensure that the webservers are always available and
currently have a fixed amount of instances to guarantee that even at a
max CPU Utilization, the web server will be able to perform. The
problem is that when the traffic is low, the unused web servers are
unnecessarily costing the company money. The current way of having a
fixed number of instances also presents a problem if for some reason a
web server goes down and a new one has to be manually spun up.
To solve this issue for our made-up ABC Company, we will create an
Auto Scaling Group with a policy to scale in or out depending on
demand with a minimum of 2 instances and a maximum of 5. One policy
will scale out if CPU Utilization goes over 80% and the other will scale
in if CPU Utilization goes under 40%.

Prerequisites

 Multi Availability Zone VPC with public subnets.
 A Webserver security group that allows inbound HTTP from
0.0.0.0/0 and SSH from your IP.

Launch Template
1. Navigate to the EC2 Dashboard.

2. Click Launch Templates then click Create launch template.

3. Add a Launch template name and Template version description.

4. For AMI select Amazon Linux 2 AMI (HVM).

5. For Instance type select t2.micro.

6. Since we plan to SSH into an instance later, you will need to select a
key pair. You can use an existing Key pair or you can create a new one.
To create a new one click Create new key pair.

7. A Create Key Pair dialog box should pop up. Enter a Key pair
name. Select a File format and click Create key pair.

8. Back on the Create launch template page under Network settings
select your WebServer Security Group.

9. Scroll down to Advanced details and click to expand.

10. Scroll down to User data. Copy and paste the following into the User
data field:
#!/bin/bash
yum update -y
yum install -y httpd
systemctl start httpd
systemctl enable httpd

This code will be run when each instance from the template boots up. It
is updating patches, installing and starting an Apache webserver.
11. Click Create launch template.

Auto Scaling Group


1. Navigate to EC2 and select Auto Scaling Groups on the left side.

2. Click Create Auto Scaling group.

3. Enter a name for Auto Scaling group name and select our Launch
template from the drop down.

4. Review the information and click Next.

5. On the Configure Settings page select Adhere to launch
template for Instance purchase options.
6. For Network:

 VPC: Select the VPC you’d like to launch the instances in. Be sure to
select the VPC in which your Security Group is associated
 Subnets: Select all public subnets in your VPC in your web layer (if
you have a tiered architecture). In my custom VPC for ABC
Company, I have 3 public subnets and have chosen all 3 for high
availability.

7. Click Next.

8. On the Configure advanced options page select No load
balancer and keep the default Health checks. Then click Next.
9. On the Configure group size and scaling policies page:
Desired capacity: 2

 Minimum capacity: 2
 Maximum capacity: 5
This will ensure that we always have at least 2 instances running and up
to 5 if CPU Utilization gets too high.
10. For Scaling policies select None and click Next.

11. I don’t plan to add notifications or tags so click Skip to review.
Feel free to add either if you’d like, but it’s not necessary for this
project.
12. Click Create auto scaling group.

13. If you navigate to the Activity tab you will see that two instances
have been created to meet our minimum capacity.
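For reference, roughly the same Auto Scaling group could be created with
the AWS CLI; the group name, launch template name, and subnet IDs below
are placeholders for your own values:

aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name abc-web-asg \
  --launch-template LaunchTemplateName=abc-web-template,Version='$Latest' \
  --min-size 2 --max-size 5 --desired-capacity 2 \
  --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222,subnet-cccc3333"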

CloudWatch Alarms
We will need to create two CloudWatch alarms that will trigger our
Scaling Policies if they go into an alarm state.

1. Navigate to CloudWatch. Click Alarms then Create Alarm.

2. Click Select metric.


3. Click EC2.

4. Click By auto scaling group

5. If you have multiple auto scaling groups then you can use search to
find the one we created by typing our auto scaling group name. Then
select the one for CPU Utilization.

6. Click Select metric.


7. Keep all metric defaults.
8. For the Conditions section:

 Threshold type: Static


 Whenever <alarm name> is: Greater/Equal than: 80

9. Click Next.
10. You can add a notification if you’d like but I’m going to
click Remove. One can always be added later.

11. Name our alarm and add a description then click Next.

12. Review then click Create alarm.
13. Repeat steps 1–12 for our scale in alarm with the following changes:
Step 8: Lower/Equal 40
Step 11: Alarm name: Simple-Scaling-AlarmLow-SubtractCapacity
Note: When you first create your alarms, the status will show Insufficient data until enough metric data has been collected.
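If you prefer the AWS CLI, a roughly equivalent scale-out alarm can be
created as shown below (my-asg is a placeholder for your Auto Scaling
group name); the alarm is then attached to a scaling policy in the
console exactly as in the next steps:

aws cloudwatch put-metric-alarm \
  --alarm-name Simple-Scaling-AlarmHigh-AddCapacity \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=AutoScalingGroupName,Value=my-asg \
  --statistic Average \
  --period 300 \
  --evaluation-periods 1 \
  --threshold 80 \
  --comparison-operator GreaterThanOrEqualToThreshold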
Add Scaling Policy
1. Navigate to Auto Scaling Groups and click our newly created auto
scaling group.

2. Select the Auto scaling tab and click Add policy.

3. For Policy type select Simple scaling. For CloudWatch alarm select
the AddCapacity alarm and for Take the action select Add 1 capacity
unit.

4. Repeat steps 1–3 but to remove capacity.

5. You should now see two policies added to the Auto Scaling Group.

Testing

Let’s first test to make sure our apache web server is running.
 Navigate to the EC2 Dashboard.
 Click Instances.
 Select one of our instances.
 In the details tab, in the Instance summary section copy the Public
IPv4 DNS address and paste it into a new tab.

You should see your Test Page.

Success!

DO NOT, I REPEAT DO NOT click the open address link. When you
click this link, it will open the link in a new tab which on the surface
seems like what you would want to do, however please learn from my
mistake. When you click this link, it will open up the IPv4 DNS in https
and since we have not set up https, we will get an error. I spent an hour
troubleshooting what the possible issue could be before realizing my
mistake.

A valuable learning experience

Let’s test to see if our Auto Scaling Group works if an instance was to
fail and go below our minimum capacity.
 Navigate to the EC2 Dashboard.
 Click Instances.
 Select one of our instances. Click Instance state and click Terminate
instance.

4. After some time you should see a new instance created.

5. You can navigate to the IPv4 DNS for the new instance to verify the
apache web server is working.

Let’s test CPU Utilization


 SSH into both of your running instances.

 Make sure to run them both around the same time. It will do no good
to run the commands on one instance, wait 10 minutes, then run the
command on the other instance.

Run the following:


sudo amazon-linux-extras install epel -y
sudo yum install -y stress

3. Then run the following:


stress --cpu 12 --timeout 600

Note: It took some time before both instances were maxed out. You may
want to play with upping the cpu number if needed.
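To see the load climbing while stress runs, you can open a second
terminal on the same instance and watch the load average (press Ctrl + C
to exit):

watch -n 5 uptime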
4. Once our alarm status changes to In alarm, we should see our Auto
Scaling Group launch a new instance.
5. Navigate to Auto Scaling groups, select our group and review
the Activity history.

6. You should also now see three instances when you navigate
to Instances.

7. You can either wait until our commands stop running or you can
cancel in the terminal with Ctrl + C.
8. Once CPU Utilization goes below 40%, we should see a scale in
action triggered by our other alarm. Navigate back to Auto Scaling
groups.
9. Select the Activity tab and note that the Auto Scaling group
terminated an instance.

AWS Web Application Firewall
There are many security threats that exist today in a typical enterprise
distributed application.
 DDoS: Flood Attacks (SYN Floods, UDP Floods, ICMP Floods,
HTTP Floods, DNS Query Floods), Reflection Attacks

 Application Vulnerabilities: SQL Injections, Cross Site Scripting


(XSS), Open Web Application Security Project (OWASP), Common
Vulnerabilities and Exposures (CVE)

 Bad Bots: Crawlers, Content Scrapers, Scanners and Probes


Out of these, AWS WAF can be used to handle security threats such as
SQL injection and Cross Site Scripting (XSS) in a typical web application.
The web application’s HTTP requests can be routed via AWS WAF and
then forwarded to one of the following AWS services.
 AWS CloudFront (A Global Service)
 AWS API Gateway (A Regional Service)
 AWS Application Load Balancer (A Regional Service)
Logging and Monitoring of WAF are handled by Kinesis
Firehose and CloudWatch respectively.

Web ACL
When WAF is associated with any of the above three AWS services, the
association is made through a Web ACL. A Web ACL is a fundamental component of
WAF, which defines a set of rules for any of these services (See Figure
2).

Figure 2 — Conditions, Rules and Web ACLs

As mentioned, a Web ACL is a collection of rules. A rule is a collection
of conditions (See Figure 3).

Figure 3- WAF with Web ACLs

How to create a Web ACL in WAF?

In order to demonstrate WAF, it is always good to go
through a simple scenario that showcases its capability. Here, I am
going to block access to a CloudFront distribution, which I created some time ago.
So, if you are following along, please make sure one of these
services (CloudFront, API Gateway or ALB) is already created.

Task 1: Describe a Web ACL and associate it to AWS resources
Go to AWS WAF → Web ACLs → Click Create Web ACL button (See
Figure 4).

Figure 4

Give a name to the Web ACL and associate a Resource Type to it. Here we
associate a CloudFront distribution (See Figure 5), which I have
already created before. You can attach this to not only CloudFront but
ALB and API Gateway as well.

Figure 5

Click Add AWS Resources button to associate the CloudFront
Distribution that you created before (See Figure 6).

Figure 6

Click Next button and you will get another page to add your rules to
Web ACL. We will skip this for the moment allowing us to do it at a
later stage.
Select Allow for Web ACL Action as well.
Leave Set Rule Priority as it is and click Next.

Leave Configure Metrics and click Next.

Finally review your selections and click Create Web ACL button.

The above will create a Web ACL without any rules. You can go back to the Web
ACLs link and you will see the screen below. Make sure not to select a region;
select Global (CloudFront) in the top drop-down to see your created Web ACL
(See Figure 7).

Figure 7

However, even if you see a created Web ACL, CloudFront propagation
for this update will take a bit of time. You can see it if you visit the
CloudFront console page. Give it a little time to finish the CloudFront
propagation before you start the next step.

Task 2: Add a Condition to block my IP address


Go to AWS WAF → IP Sets → Click Create IP Set button.
Select IPv4 and give your IP address with /32 as the suffix. If you are
not sure how to get your network’s public IP, you may type “What is my
IP” on Google. It is that simple (See Figure 8).

Figure 8

Task 3: Add a Rule to the created condition
In order to create a rule, you need to create a Rule Group.
Go to AWS WAF → Rule Group → Click Create Rule Groups button
(See Figure 9)

Figure 9

Click Next → Click Add Rule button → Set the following parameters to
create a Rule
Rule Name → MyRule
If a Request → Select Matches the requirement
Statement (Inspect)→ Select Originates from an IP Address In
Statement (IP Set) → Select the IP Set that you created in Task 2
Action → Select Block
Click Next
Select the Rule Priority. This is not required here since you have only
one rule.
Finally review your selections and click Create Rule Group to confirm
your rule settings.
Task 4: Add the created Rule Group / Rule to the Web ACL
Go to AWS WAF → Web ACL → Select the Web ACL that you have
created → Click Rules tab (See Figure 10).

Figure 10

You can see the Web ACL still does not have its rules attached.
Click Add Rules button drop down → Select Add my own rules and rule
groups

Figure 11

Give a name for the rule that you are specifying here (See Figure 11).
[P.Note: I strongly feel the new WAF UI has some issues related to its
fields. This is a good example of having to define the Rule name twice:
once under the Rule Group and once under the Web ACL rule
attachments.]
Select the Rules Group that you created from the drop down and
click Add rule button and then click Save.
Now you can see the added rule is attached to the Web ACL.
Now it is time to browse the web URL that you have blocked for your
IP. If all is fine, you will see a screen similar to the one below (See Figure 12).

Figure 12

If you want to remove the blocking, you can go to the Web ACL and
delete the related Rule and try the web link again. After a few refresh
attempts, you will get your site back.

Amazon Web Application Firewall (AWS WAF): Web
Security for AWS Users

What is AWS WAF?


AWS Web Application Firewall (WAF) is a firewall that helps protect
your web applications from common web exploits that could affect
application availability, compromise security, or consume excessive
resources. AWS WAF gives you control over how traffic reaches your
applications by enabling you to create security rules that block common
attack patterns, such as SQL injection or cross-site scripting, and rules
that filter out specific traffic patterns you define. You can deploy AWS
WAF on either Amazon CloudFront as part of your CDN solution or the
Application Load Balancer (ALB) that fronts your web servers or origin
servers running on EC2.

Key Features of AWS WAF


 Customizable Rules: Create rules to filter traffic based on
conditions like IP addresses, HTTP headers, and body contents.
 Real-Time Metrics and Logging: Monitor web traffic and get
real-time metrics and logs for in-depth analysis.
 Integration with AWS Services: Seamlessly integrates with
services like Amazon CloudFront and Application Load Balancer.

Compare AWS Security Groups, NACL, AWS WAF, Network Firewall

How Amazon Web Application Firewall (WAF) Works?
The working of WAF in AWS is outlined below.

 AWS Firewall Manager: Manages multiple AWS Web
Application Firewall deployments.
 AWS WAF: Protect deployed applications from common web
exploits.
 Create a Policy: Now you can build your rules using the visual
rule builder.
 Block Filter: Block filters protect against exploits and vulnerability
attacks.
 Monitor: Use Amazon CloudWatch for incoming traffic metrics &
Amazon Kinesis Firehose for request details, then tune rules based on
metrics and log data.
AWS WAF monitors the web requests that are forwarded
to API Gateway, Amazon CloudFront, and Application Load Balancer.

Now let’s get started with WAF and create web ACL in some steps.

Step 1: Create web ACL:


First, sign up for an AWS account, then go to AWS Console and search
for Web Application Firewall. You will land on the WAF home page.

Step 2: In the next step you need to create the IP set used to deny
application access. Click on IP sets, select Create IP set, add the list
of IPs that should be blocked, and click on Create IP set. The IPs added
to this list will not be able to access the application over the
Internet.
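For reference, the same kind of IP set can also be created from the AWS
CLI; the name and the address 203.0.113.7/32 below are placeholders for
the IP you want to block (use the CLOUDFRONT scope instead of REGIONAL
when protecting a CloudFront distribution):

aws wafv2 create-ip-set \
  --name MyBlockedIPs \
  --scope REGIONAL \
  --ip-address-version IPV4 \
  --addresses 203.0.113.7/32 \
  --region us-east-1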

Step 3: Create web ACL: Open a new tab of the browser then go to
AWS Console and search for Web Application Firewall. You will land
on the WAF home page, and choose to Create Web ACL.

Step 4: Give a Name: Type the name you want to use to identify this
web ACL. After that, enter Description if you want (optional), add the
AWS resources (Application Load Balancer, Amazon API Gateway
REST API, Amazon App Runner service, AWS AppSync GraphQL
API, Amazon Cognito user pool, AWS Verified Access), and then
hit Next.

Step 5: Add your own rules and rule groups: In the next step, you need
to add rules and rule groups. Click on Add my own rules and rule
groups. You will land on a new page; for Rule type, select IP set,
choose the IP set created in Step 2, and click on the Add rule
option mentioned in the below snapshot.

Step 6: Once the rule is created, select the rule and click on Next.
Step 7: Configure CloudWatch metrics.
Step 8: Review Web ACL configuration: In the final step, check all
the rules and hit Create Web ACL.
Finally, a message will pop up: You successfully created web ACL: ACL-name

Then test application access over the internet. The IPs added to the IP
set will get 403 Forbidden, and all other users will be able to access
the application.

Best Practices for Using AWS WAF


 Regularly Update Rules: Keep your rule sets updated to protect
against the latest vulnerabilities and threats.
 Testing and Validation: Regularly test new rules in a non-
production environment to validate their efficacy and minimize false
positives.
 Layered Security: Combine AWS WAF with other AWS security
services for a comprehensive security strategy.

Estimating Costs
The cost of AWS WAF can vary depending on the scale of your
deployment, ranging from a few dollars per month for small
deployments to several thousand dollars per month for large-scale
deployments. AWS WAF pricing is based on the number of web
requests processed and the number of security rules that are used.
Example of cost for our example (1 Web ACL with a few managed
rules):
$5.00 per web ACL per month (prorated hourly) * 1 web ACL = $5.00
$1.00 per rule per month (prorated hourly) * 5 rules = $5.00
$0.60 per million requests processed * 1 million requests = $0.60
$0.10 per alarm metric * 1 alarm = $0.10
Total: $10.70 per month

Conclusion
AWS Web Application Firewall provides a managed solution to protect
your web applications and APIs against common exploits and
vulnerabilities. By leveraging WAF’s advanced rulesets and integration
with services like Application Load Balancer, you can effectively filter
malicious web traffic while allowing legitimate users access. With
customizable rules, real-time metrics, and easy association with AWS
resources, WAF is a robust web application firewall to secure your
workloads in the cloud. Carefully monitor your WAF to fine-tune rules
and maximize threat protection. Using AWS WAF can improve your
overall security posture in the cloud.

AWS WAF-To Block SQL Injection

Implementing AWS WAF with ALB to block SQL Injection, Geo
Location and Query string
AWS WAF helps you protect against common web exploits and bots
that can affect availability, compromise security, or consume excessive
resources.

Image taken from amazon. com

Que1-What is AWS WAF?


Ans- AWS WAF is a web application firewall that helps you to protect
your web applications against common web exploits that might affect
availability and compromise security.
It gives you control over how traffic reaches your applications by
enabling you to create security rules that block common attack patterns
like SQL injection and cross-site scripting.
It only allows the request to reach the server based on the rules or
patterns you define. AWS WAF also allows us to review rules and
customize them to prevent new attacks from reaching the server.

Que 2- What is the difference between a firewall and a
WAF?
Ans- A WAF protects web applications by targeting Hypertext Transfer
Protocol (HTTP) traffic. This differs from a standard firewall, which
provides a barrier between external and internal network traffic. A WAF
sits between external users and web applications to analyze all HTTP
communication.

Thus, a firewall is usually associated with protection of only the network
and transport layers (layers 3 and 4). However, a web application
firewall (WAF) provides protection at layer 7.

Que 3- Explain how AWS WAF works and how it integrates
with CloudFront and CloudWatch?

Ans- AWS WAF gives a developer the ability to customize security
rules to allow, block or monitor Web requests. Amazon
CloudFront (AWS’ content delivery network) receives a request from
an end user and forwards that request to AWS WAF for inspection.
AWS WAF then responds to either block or allow the request. A
developer can also use AWS WAF’s integration with CloudFront to
apply protection to sites that are hosted outside of AWS.

 Developers create rules in AWS WAF that can include placing
limitations on certain IP addresses, HTTP headers and URI strings.
AWS WAF rules can prevent common Web attacks, such as SQL
injection and cross-site scripting (XSS), which look to exploit
vulnerabilities in a site or application. Rules take roughly one minute
to activate, and a developer can track the effectiveness of those rules
by viewing real-time metrics in Amazon CloudWatch or through
sampled Web requests stored in the AWS WAF API or AWS
Management Console. These metrics include IP addresses, geo
locations and URIs for each request.

Que 4 -Can I use AWS WAF to protect web sites not hosted
in AWS?
Ans- Yes, AWS WAF is integrated with Amazon CloudFront,
which supports custom origins outside of AWS.

Que 5- How is Amazon WAF priced?


Ans- The cost of WAF is only for what you use. The pricing is based
on how many rules you deploy and how many web requests your
application receives.
There are no upfront commitments. AWS WAF charges are in addition
to Amazon CloudFront pricing, the Application Load Balancer (ALB)
pricing, Amazon API Gateway pricing, and/or AWS AppSync pricing.

Que 6- Can I use Managed Rules along with my existing
AWS WAF rules?
Ans- Yes, you can use Managed Rules along with your custom AWS
WAF rules. You can add Managed Rules to your existing AWS WAF
web ACL to which you might have already added your own rules.
The number of rules inside a Managed Rule does not count towards
your limit. However, each Managed Rule added to your web ACL will
count as 1 rule.

Que 7-What services does AWS WAF support?


Ans- AWS WAF can be deployed on Amazon CloudFront, the
Application Load Balancer (ALB), Amazon API Gateway, and AWS
AppSync.
As part of Amazon CloudFront it can be part of your Content
Distribution Network (CDN) protecting your resources and content at
the Edge locations. As part of the Application Load Balancer, it can
protect your origin web servers running behind the ALBs. As part of
Amazon API Gateway, it can help secure and protect your REST APIs.
As part of AWS AppSync, it can help secure and protect your GraphQL
APIs.

Que 8- What is Elastic Load Balancing or ELB?


Ans- ELB is a service that automatically distributes incoming
application traffic and scales resources to meet traffic demands. It helps
in adjusting capacity according to incoming application and network
traffic.
It can be enabled within a single availability zone or across multiple
availability zones to maintain consistent application performance.
ELB offers features like:
 Detection of unhealthy EC2 instances.
 Spreading EC2 instances across healthy channels only.
 Centralized management of SSL certificates.
 Optional public key authentication.
 Support for both IPv4 and IPv6.
 ELB accepts incoming traffic from clients and routes requests to its
registered targets.
 When an unhealthy target or instance is detected, ELB stops routing
traffic to it and resumes only when the instance is healthy again.
 ELB monitors the health of its registered targets and ensures that the
traffic is routed only to healthy instances.
 ELB’s are configured to accept incoming traffic by specifying one or
more listeners.

Image taken from amazon. com

Que 9- How do I decide which load balancer to select for my
application?
Ans- Elastic Load Balancing (ELB) supports four types of load
balancers:
 Application Load Balancers.
 Network Load Balancers.
 Gateway Load Balancers.
 Classic Load Balancers.
You can select the appropriate load balancer based on your application
needs. If you need to load balance HTTP requests, we recommend you
use the Application Load Balancer (ALB).
For network/transport protocols (layer4 — TCP, UDP) load balancing,
and for extreme performance/low latency applications we recommend
using Network Load Balancer.
If your application is built within the Amazon Elastic Compute Cloud
(Amazon EC2) Classic network, you should use Classic Load
Balancer.
If you need to deploy and run third-party virtual appliances, you can
use Gateway Load Balancer.

Que 10- What are listeners in ELB?


Ans- A listener is a process that checks for connection requests.
Listeners are configured with a protocol and port number for connections
from the client to the ELB, and likewise from the ELB to its targets.
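As an illustration, a listener like this can be added to an Application
Load Balancer with the AWS CLI (both ARNs below are placeholders):

aws elbv2 create-listener \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/MyWAFLoadBalancer/0123456789abcdef \
  --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/MyWAFTargetGroup/0123456789abcdef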

Que 11- How are load balancers configured?


Ans- Each load balancer is configured differently.
For Application and Network Load Balancers, you register targets in
target groups and route traffic to target groups.
Gateway Load Balancers use Gateway Load Balancer endpoints to
securely exchange traffic across VPC boundaries.
For Classic Load Balancers, you register instances with the load
balancer.
AWS recommends users to work with Application Load Balancer to
use multiple Availability Zones because if one availability zone fails,
the load balancer can continue to route traffic to the next available one.
We can have our load balancer be either internal or internet-facing.
The nodes of an internet-facing load balancer have Public IP addresses,
and the DNS name is publicly resolvable to the Public IP addresses of
the nodes.
Due to the point above, internet-facing load balancers can route requests
from clients over the Internet.
The nodes of an internal load balancer have only Private IP addresses,
and the DNS name is publicly resolvable to the Private IP addresses of
the nodes.
Due to the point above, internal load balancers can only route requests
from clients with access to the VPC for the load balancer.
Note: Both internet-facing and internal load balancers route requests to
your targets using Private IP addresses.
Implement

Task:
 Sign in to AWS Management Console. Launch First EC2 Instance
(MyEC2Server1).
 Launch Second EC2 Instance (MyEC2Server2).
 Create a Target Group (MyWAFTargetGroup)
 Create an Application Load Balancer (MyWAFLoadBalancer).
 Test Load Balancer DNS.
 Create AWS WAF Web ACL (MyWAFWebAcl).
 Test Load Balancer DNS.
Solution:
Task 1: Sign in to AWS Management Console and launch
First EC2 Instance
In this task, we are going to launch the first EC2 instance
(MyEC2Server1) by providing the required configurations like name,
AMI selection, security group , instance type and other settings.
Furthermore, we will provide the user data as well.
1) Go to the Services menu in the top left, then click on EC2 in the Compute
section. Navigate to Instances from the left side menu and click
on the Launch Instances button.
2) Enter/select the required details:
√ Name : Enter MyEC2Server1
√ Amazon Machine Image (AMI) : select Amazon Linux 2 AMI
√ Instance Type : Select t2.micro
√ Under the Key Pair (login) section : Click on the Create new key pair
link
Key pair name: MyWebserverKey
Key pair type: RSA
Private key file format: .pem or .ppk
Click on Create key pair and then select the created key pair from the
drop-down.

√ Under the Network Settings section :


Click on Edit button. Auto-assign public IP: select Enable

√ Firewall (security groups) : Select Create a new security group


Security group name : Enter MyWebserverSG
Description : Enter My EC2 Security Group
To add SSH: Choose Type: SSH
Source: Anywhere (accessible from all IP addresses).

Similarly, to add HTTP and HTTPS, click on Add security
group rule.

√ Under the Advanced details section :


Under the User data: copy and paste the following script to create an
HTML page served by an Apache HTTPD web server.

#!/bin/bash
# User data already runs as root, so no sudo/su is needed
yum update -y
yum install httpd -y
systemctl start httpd
systemctl enable httpd
echo "<html><h1> Welcome to My Server 1 </h1></html>" > /var/www/html/index.html

Keep everything else as default and click on the Launch
instance button.
Task 2: Launch Second EC2 Instances (MyEC2Server2)
Launch the second EC2 instance similar to previous one with slight
variation, by providing the required configurations like name, AMI
selection, security group , instance type and other settings. Furthermore,
here too we will provide the user data as well. Here,
√ Name : Enter MyEC2Server2
√ Instance Type : Select t2.micro
√ Key Pair (login) section : Select MyWebserverKey from the list.

√ Under the Network Settings section : Click on Edit button
Auto-assign public IP: select Enable
Firewall (security groups) : Select existing security
group MyWebserverSG
√ Under the Advanced details section :
Under the User data: copy and paste the following script to create an
HTML page served by Apache httpd web server:

#!/bin/bash
# User data already runs as root, so no sudo/su is needed
yum update -y
yum install httpd -y
systemctl start httpd
systemctl enable httpd
echo "<html><h1> Welcome to My Server 2 </h1></html>" > /var/www/html/index.html

Keep everything else as default, and click on the Launch
instance button.

Task 3: Create a Target Group (MyWAFTargetGroup)


In this task, we are going to create a target group for the load balancer
and will add the target instances so that the load balancer can distribute
the traffic among these instances.
1. In the EC2 console, navigate to Target groups in the left-side panel
under Load Balancer in the Load Balancing section.
2. Click on Create target group button on the top right corner.

3. Enter basic configuration:
√ Choose a target type : Select Instances

√ Target group name : Enter MyWAFTargetGroup


Protocol : Select HTTP
Port : Enter 80
√ Health Checks:
Health check protocol : Select HTTP

√ Leave everything as default and click on Next button.
√ Register targets:
 Select the two instances we have created
i.e. MyEC2Server1 and MyEC2Server2.
 Click on Include as pending below and scroll down.
Review targets and click on Create target group button.

Your Target group has been successfully created.

Task 4: Create an Application Load Balancer
(MyWAFLoadBalancer)
In this task, we are going to create an Application Load balancer by
providing the required configurations like name, target group etc.
1) In the EC2 console, navigate to Load Balancers in the left-side panel
under Load Balancing. Click on Create Load Balancer at the top-left to
create a new load balancer for our web servers.

On the next screen, choose Application Load Balancer since we are
testing the high availability of the web application and click
on Create button.

2) Enter the required Basic configuration:


Load balancer name: Enter MyWAFLoadBalancer
Scheme: Select Internet-facing
IP address type: Choose Ipv4

Network mapping:
VPC : Select Default
Mappings : Check All Availability Zones
Security groups: Select an existing security group
i.e. MyWebserverSG from the drop-down menu.

Listeners and routing:


Protocol : Select HTTP
Port : Enter 80
Default action : Select MyWAFTargetGroup from the drop down
menu

Leave everything as default and click on Create load balancer button.

You have successfully created Application Load Balancer.
Task 5: Test Load Balancer DNS
In this task, we will test the working of load balancer by copying the
DNS to the browser and find out whether it is able to distribute the
traffic or not.
1. Now navigate to the Target Groups from the left side menu under
Load balancing.
2. Click on the MyWAFTargetGroup Target group name.
3. Now select the Targets tab and wait till both the targets become
Healthy (Important).

Now again navigate to Load Balancers from the left side menu under
Load balancing. Select the MyWAFLoadBalancer Load Balancer and
copy the DNS name under Description tab.

1. Copy the DNS name of the ELB and enter the address in the browser.
You should see index.html page content of Web Server 1 or Web Server
2

Now Refresh the page a few times. You will observe that the index pages change
each time you refresh.
Note: The ELB will equally divide the incoming traffic to both servers in a Round
Robin manner.
2. Test SQL Injection :
 Along with the ELB DNS add the following URL
parameter: /product?item=securitynumber’+OR+1=1 —
 Syntax : http://<ELB DNS>/product?item=securitynumber’+OR+1=1 —
You will be able to see output similar to below.

Here the SQL injection reached the server, and since we only have an
index page, the server doesn’t know how to resolve the URL; that is why
you got a Not Found page.

3. Test Query String Parameter :
 Along with the ELB DNS add the following URL
parameter: /?admin=123456
 Syntax : http://<ELB DNS>/?admin=123456
You will be able to see output similar to below.

Here also the query string reached the server. A query string is normally
passed to the application and resolved by the code that you write. Since
there is no code here to handle it, it does not throw an error; it simply
becomes an unused value, so you still got a response back.
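For reference, both tests can also be run from a terminal with curl
(replace <ELB DNS> with your load balancer’s DNS name). Before the Web
ACL from Task 6 is attached, these requests reach the instances; after
it is attached, they should be blocked with a 403 Forbidden response:

curl -i "http://<ELB DNS>/product?item=securitynumber'+OR+1=1--"
curl -i "http://<ELB DNS>/?admin=123456"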
Task 6: Create AWS WAF Web ACL (MyWAFWebAcl)
In this task, we are going to create an AWS WAF Web ACL where we
will add some customized rules for location restriction, query strings, and SQL injection.
1. Navigate to WAF by clicking on the Services menu in the top, then
click on WAF & Shield in the Security, Identity &
Compliance section.
2. On the left side menu, select Web ACL’s and then click on Create
web ACL button.

3. Describe web ACL and associate it to AWS resources :
Name : Enter MyWAFWebAcl
Description : Enter WAF for SQL Injection, Geo location and Query
String parameters
CloudWatch metric name : Automatically selects the WAF name, so no
changes required.
Resource type : Select Regional resources
Region : Select current region from the dropdown.
Associated AWS resources : Click on the Add AWS resources button.
 Resource type : Select Application Load Balancer
 Select MyWAFLoadBalancer Load balancer from the list.

Now click on the Add button. Click on the Next button.
Add rules and rule groups : Here we will be adding three rules.
Rule 1
 Under Rules, click on Add rules and then select Add my own rules
and rule groups.
 Rule type : Select Rule builder
 Name : Enter GeoLocationRestriction
 Type : Select Regular type
 If a request : Select Doesn’t match the statement (NOT)
 Inspect : Select Originates from a country in
 Country codes : Select <Your Country> In this example we select
India-IN
 IP address to use to determine the country of origin : Select Source IP
address
Note : You can also select multiple countries also.
 Under Then : Action Select Block. Click on Add rule.
Here we are only allowing requests to come from India and all the
requests that come from other countries will be blocked.

Rule 2
 Under Rules, click on Add rules and then select Add my own
rules and rule groups.
 Rule type : Select Rule builder
 Name : Enter QueryStringRestriction
 Type : Select Regular type
 If a request : Select matches the statement
 Inspect : Select Query string
 Match type : Select Contains string
 String to match : Enter admin
 Text transformation : Leave as default.
 Under Then : Action Select Block.
 Click on Add rules.
Any time the request URL contains a query string containing admin, WAF will
block that request.

Rule 3

 Under Rules, click on Add rules and then select Add managed
rule groups.
 It will take a few minutes to load the page. It lists all the rules which
are managed by AWS.
 Click on AWS managed rule groups.
 Scroll down to SQL database and enable the corresponding Add to
web ACL button.

Scroll down to the end and click on Add rules button.


Now we have 3 rules added.

Under Default web ACL action for requests that don’t match any rules,
Default action Select Allow. Click on the Next button.

 Set rule priority:
 No changes required, leave as default. Note : You can move the rules
based on your priority.
 Click on the Next button.
Configure metrics:
 Leave it as default. Click on the Next button.
Review and create web ACL :
 Review the configuration done, scroll to the end and click on Create
web ACL button.

It will take a few seconds to create the Web ACL.

Web ACL created.

Note: If the association with the load balancer fails during Web ACL
creation, you can explicitly associate the load balancer with the Web ACL
after it has been created.
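If you prefer the command line, the association can also be done with the
AWS CLI. The sketch below is a minimal, hedged example; the Web ACL ARN,
load balancer ARN, account ID, and region are placeholders you would replace
with your own values.

# Look up the ARN of the Web ACL created above (regional scope)
aws wafv2 list-web-acls --scope REGIONAL --region <your-region>

# Associate the Web ACL with the Application Load Balancer
aws wafv2 associate-web-acl \
--web-acl-arn arn:aws:wafv2:<your-region>:<account-id>:regional/webacl/MyWAFWebAcl/<web-acl-id> \
--resource-arn arn:aws:elasticloadbalancing:<your-region>:<account-id>:loadbalancer/app/MyWAFLoadBalancer/<alb-id> \
--region <your-region>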
Task 7: Test Load Balancer DNS
1. Again navigate to Load Balancers from the left side menu under Load
balancing. Select the MyWAFLoadBalancer Load Balancer and copy
the DNS name under Description tab.
Copy the DNS name of the ELB and enter the address in the browser.
You should see index.html page content of Web Server 1 or Web Server
2.

Now refresh the page a few times. You will observe that the index
page changes each time you refresh; thus, the ELB is working fine.
Note: The ELB will equally divide the incoming traffic to both servers in
a Round Robin manner.
2. Test SQL Injection
 Along with the ELB DNS add the following URL
parameter: /product?item=securitynumber'+OR+1=1--
 Syntax : http://<ELB DNS>/product?item=securitynumber'+OR+1=1--
You will be able to see the output below, unlike the Not Found error you
saw before.

Here the SQL injection is blocked by WAF before it reaches the server.
3. Test Query String Parameter
 Along with the ELB DNS add the following URL
parameter: /?admin=123456
 Syntax : http://<ELB DNS>/?admin=123456
You will be able to see the output below.

Here too, the query string containing admin is blocked by WAF before it
can reach the server.
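You can also repeat both tests from a terminal instead of a browser. This is
a minimal sketch assuming the default WAF block response; a blocked request
normally returns HTTP 403 Forbidden, while an allowed request reaches the
web servers.

# Query string test: should be blocked by the QueryStringRestriction rule
curl -i "http://<ELB DNS>/?admin=123456"

# SQL injection test: should be blocked by the AWS managed SQL database rule group
curl -i "http://<ELB DNS>/product?item=securitynumber%27+OR+1%3D1--"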

Do you know?
WAF can offer protection against Distributed Denial of Service (DDoS)
attacks by analyzing traffic patterns, detecting abnormal behaviour, and
mitigating the impact of such attacks.

RDS (Relational Database Service)
We dive into the heart of relational databases on the cloud with Amazon
RDS. In this session, we’ll explore the fundamentals of RDS, its
benefits, and how to get started with launching and configuring RDS
instances. Additionally, we’ll walk through the process of connecting to
RDS instances from EC2 instances.

Introduction to RDS and managed database services:


Let’s delve deeper into the Introduction to RDS and managed database
services.

Introduction to RDS (Relational Database Service)


Amazon Relational Database Service (RDS) is a fully managed database
service provided by AWS. It allows users to set up, operate, and scale
relational databases in the cloud without worrying about the underlying
infrastructure. RDS automates routine database tasks such as hardware
provisioning, database setup, patching, and backups, enabling
developers to focus more on their applications and less on database
management.

Key Features of RDS:


1. Multiple Database Engine Support: RDS supports various popular
database engines, including MySQL, PostgreSQL, MariaDB, Oracle,
SQL Server, and Amazon Aurora. This flexibility allows users to

choose the engine that best suits their application requirements.
2. Automated Backups and Point-in-Time Recovery: RDS
automatically takes backups of your databases according to the
retention period you specify. It also enables point-in-time recovery,
allowing you to restore your database to any specific point within the
retention period.
3. High Availability and Replication: RDS provides high availability
features such as Multi-AZ (Availability Zone) deployments and Read
Replicas. Multi-AZ deployments replicate your database
synchronously across multiple Availability Zones to ensure data
durability and fault tolerance, while Read Replicas enable you to
offload read traffic from the primary database instance, improving
performance and scalability.
4. Security and Compliance: RDS offers several security features to
help you secure your databases, including network isolation using
Amazon VPC, encryption at rest using AWS KMS (Key Management
Service), and SSL encryption for data in transit. RDS also supports
database authentication mechanisms such as IAM database
authentication and traditional username/password authentication.
5. Scalability and Performance: With RDS, you can easily scale your
database instance vertically (by increasing instance size) or
horizontally (by adding Read Replicas). RDS also provides
performance monitoring metrics and tools to help you optimize
database performance.

Managed Database Services:


Managed database services like RDS abstract much of the operational
overhead associated with traditional database management. They offer
the following benefits:
1. Simplified Database Administration: Managed database services

handle routine database administration tasks such as provisioning,
patching, backups, and monitoring, freeing up developers and DBAs
to focus on application development and business logic.
2. High Availability and Reliability: Managed services typically offer
built-in high availability features such as automated failover, data
replication, and backup/restore capabilities, ensuring that your
databases are highly available and reliable.
3. Scalability and Performance: Managed services make it easy to
scale your databases up or down based on demand. They often provide
tools and features for performance optimization and monitoring,
helping you maintain optimal database performance.
4. Security and Compliance: Managed database services offer robust
security features and compliance certifications to help you meet your
security and regulatory requirements. They handle security patching,
encryption, access control, and auditing, reducing the risk of security
breaches and data loss.

Launching RDS instances and configuring parameters:


These are fundamental steps in setting up relational databases using
Amazon RDS. Here’s a detailed guide on how to launch RDS instances
and configure parameters:

Launching RDS Instances:


1. Navigate to the RDS Console: Log in to your AWS Management
Console and navigate to the RDS service.
2. Click on “Create database”: In the RDS dashboard, click on the
“Create database” button to initiate the instance creation process.
3. Choose Engine and Version: Select the database engine you want to
use (e.g., MySQL, PostgreSQL, Oracle, SQL Server) and choose the
version that best suits your application requirements.

4. Specify Instance Details:

 DB Instance Class: Choose the appropriate instance class based on
your compute and memory requirements.
 Multi-AZ Deployment: Optionally, select Multi-AZ deployment for
high availability and redundancy across multiple Availability Zones.
 Storage Type and Allocated Storage: Select the storage type (e.g.,
General Purpose SSD, Provisioned IOPS SSD) and specify the
allocated storage space for your database.
 DB Instance Identifier: Provide a unique identifier for your RDS
instance.
5. Configure Advanced Settings:

 Network & Security: Choose the Virtual Private Cloud (VPC) where
you want to launch your RDS instance. Configure the subnet group
and specify security groups to control inbound and outbound traffic.
 Database Options: Set the database name, port, and parameter group
(optional).
 Backup: Configure automated backups and specify the retention
period for backup storage.

 Monitoring: Enable enhanced monitoring to collect additional


metrics for your RDS instance.
 Maintenance: Specify the preferred maintenance window for
applying patches and updates to your database instance.
6. Add Database Authentication and Encryption:
 Set the master username and password for your RDS instance.

 Enable encryption at rest using AWS Key Management Service
(KMS) for enhanced data security.

7. Review and Launch:


 Double-check all the configurations you’ve made for your RDS
instance.

 Click on the “Create database” button to launch your RDS instance.

Configuring Parameters:
Once you’ve launched your RDS instance, you may need to configure
additional parameters based on your application requirements. Here’s
how you can configure parameters for your RDS instance:
1. Parameter Groups: RDS parameter groups contain configuration
settings that govern the behavior of your database instance. You can
create custom parameter groups or use default parameter groups
provided by AWS.
2. Modify Parameters:
 Navigate to the RDS dashboard and select your RDS instance.
 In the “Configuration” tab, click on “Modify” to change parameter
settings.

 You can modify various parameters such as database engine settings,


memory allocation, logging options, and performance parameters.
3. Apply Changes:
 After modifying parameters, review the changes and click on “Apply
immediately” or choose a maintenance window for applying changes.
 AWS RDS applies dynamic parameter changes to your database
instance without downtime; static parameters take effect only after the
instance is rebooted.

4. Monitor Performance: Monitor the performance of your RDS
instance after applying parameter changes to ensure that your database is
operating optimally.

Connecting to RDS instances from EC2:


It is a common scenario in cloud-based applications where you might
have your application servers (EC2 instances) accessing databases
hosted on RDS. Here’s a step-by-step guide on how to connect to RDS
instances from EC2 instances:

Prerequisites:
 Both RDS and EC2 instances must be in the same VPC: Ensure
that your RDS instance and EC2 instance are deployed within the
same Virtual Private Cloud (VPC) to enable network communication
between them.
 Security Group Configuration: Configure the security group
associated with your RDS instance to allow inbound traffic from the
security group associated with your EC2 instance on the appropriate
database port (e.g., 3306 for MySQL, 5432 for PostgreSQL).
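As a rough illustration of the security group prerequisite, the AWS CLI
sketch below opens the MySQL port on the RDS instance's security group to
traffic coming from the EC2 instance's security group. The security group
IDs are placeholders, not values from this walkthrough.

aws ec2 authorize-security-group-ingress \
--group-id sg-0123456789abcdef0 \
--protocol tcp \
--port 3306 \
--source-group sg-0fedcba9876543210

Here sg-0123456789abcdef0 stands for the RDS security group and
sg-0fedcba9876543210 for the EC2 instance's security group.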

Steps to Connect to RDS from EC2:


1. Identify RDS Endpoint and Port:
 Log in to the AWS Management Console and navigate to the RDS
service.
 Select your RDS instance and note down the endpoint (e.g., my-
database-instance.abcdefg123.us-west-2.rds.amazonaws.com) and the
port number (default is 3306 for MySQL, 5432 for PostgreSQL).

2. Install Database Client on EC2 Instance:


 Connect to your EC2 instance using SSH.

 Install the appropriate database client for your RDS database engine
(e.g., MySQL client for MySQL databases, PostgreSQL client for
PostgreSQL databases). You can install these clients using package
managers like apt (for Ubuntu) or yum (for Amazon Linux).

3. Connect Using Command Line:


Once the database client is installed, you can connect to your RDS
instance from the command line using the following syntax:
For MySQL:

mysql -h <RDS_endpoint> -P <port> -u <username> -p

For PostgreSQL:

psql -h <RDS_endpoint> -p <port> -U <username> -d <database_name>

Replace <RDS_endpoint>, <port>,


<username>, and <database_name> with your actual RDS endpoint,
port, username, and database name respectively.

4. Provide Credentials:
When you run the command, you’ll be prompted to enter the password
for the specified user. Enter the password associated with the username
you provided.

5. Verification:
 After successfully connecting, you’ll be presented with a database
prompt or console, indicating that you’re connected to your RDS
instance from your EC2 instance.

 You can now execute SQL queries, perform database operations, and
interact with your RDS database as needed from your EC2 instance.

Below are examples covering the topics mentioned earlier: launching an
RDS instance, configuring parameters, and connecting to the RDS
instance from an EC2 instance using MySQL as the database engine.
Example: Launching an RDS Instance (Using AWS CLI)

aws rds create-db-instance \


--db-instance-identifier mydbinstance \
--db-instance-class db.t2.micro \
--engine mysql \
--allocated-storage 20 \
--master-username mymasteruser \
--master-user-password mymasterpassword \
--backup-retention-period 7 \
--port 3306 \
--multi-az \
--engine-version 5.7

Example: Configuring Parameters (Using AWS CLI)

aws rds modify-db-parameter-group \
--db-parameter-group-name mydbparametergroup \
--parameters "ParameterName=innodb_buffer_pool_size,ParameterValue=67108864,ApplyMethod=immediate" \
"ParameterName=max_connections,ParameterValue=200,ApplyMethod=immediate"

Example: Connecting to RDS Instance from EC2 (Using


MySQL Client)
1. Install MySQL Client on EC2 instance:

For Ubuntu/Debian:

sudo apt-get update


sudo apt-get install mysql-client -y

For Amazon Linux:

sudo yum update


sudo yum install mysql -y

2. Connect to RDS Instance:


For MySQL:

mysql -h mydbinstance.abcdefg123.us-west-2.rds.amazonaws.com -u mymasteruser -p

You’ll be prompted to enter the password for the master user


(‘mymasteruser’). After entering the password, you’ll be connected to
the MySQL prompt where you can execute SQL queries.
These examples demonstrate the basic steps for launching an RDS
instance, configuring parameters, and connecting to the RDS instance
from an EC2 instance using MySQL as the database engine. Make sure
to replace placeholders like ‘mydbinstance’, ‘mymasteruser’,
‘mymasterpassword’, and ‘abcdefg123.us-west-2’ with your actual
values.

Conclusion:
In this section we covered Amazon RDS, from launching instances to
connecting to them from EC2. Managed database services like RDS empower
developers to focus on building applications while offloading the heavy
lifting of database management to AWS.

Deploying multi-AZ RDS instances with Read
Capabilities
What is Multi AZ RDS?
When using AWS RDS, we can deploy instances in multiple Availability Zones,
which increases the availability of the instances in case of a disaster and
makes the architecture more reliable and fault tolerant.

In the diagram shown above, a multi-AZ RDS is deployed in


Availability Zones A and B. When there is a database failure in AZ A,
the DNS would switch to AZ B instance which would have all the data
as multi-AZ instances are synchronously replicated which means any
update to the first instance would replicate the same update to the second
instance in a synchronized fashion.
Thus, the application is always able to access the database in case of a
disaster/database failure.

Problem Statement:
Although the application is highly available because instances are deployed
across multiple AZs, the standby instances do not accept any read traffic.
Even though we have instances in different AZs, we cannot perform any
read/write operations on them. All read/write operations must go to the
primary instance, which synchronously replicates every change to the standby
instances. This can create a performance bottleneck on the primary instance,
since it serves the read and write operations while also replicating the
changes to the standby instances in different Availability Zones. Also, in
order to perform read operations on any node other than the writer, we would
need to provision Read Replicas, which carry an additional cost.

Note: We can have instances in multiple AZs with an order/priority assigned
to them, which means that in case of a disaster, DNS switches to the
instances based on the assigned order/priority.

Solution:
 AWS has introduced a new cluster option for the RDS service which
deploys the instances in Multi AZ fashion while also allowing the
instances to accept Read Only traffic. Thus, resolving the problem
statement as discussed earlier.

 The instances are created as a cluster. While communication with the
primary instance takes place through the cluster endpoint, there are also
instance endpoints and a reader endpoint that redirect traffic to the other
instances and use their read capability, ensuring the Multi-AZ instances can
accept read traffic. Since replication between the primary and the standby
instances is synchronous, at any point in time the standby instances have
the same set of changes/updates as the primary instance.

Note 1: This option is available only for Amazon RDS for MySQL
version 8.0.26 and PostgreSQL version 13.4 and in the US East (N.
Virginia), US West (Oregon), and EU (Ireland) Regions. This setup is
available only for R6gd or M6gd instance types.
Note 2: The SLA (Service Level Agreement) for this cluster setup does not
match the SLA of the standard RDS service provided by AWS, and hence this
setup is ideal for development and test environments but not for production
environments, which require the full RDS SLA.

Deployment Steps:
Prerequisites:
 A valid AWS account with permissions to create RDS resources and
read/write access to RDS.
 The VPC has at least one subnet in each of the three Availability
Zones in the Region where you want to deploy your DB cluster. This
configuration ensures that your DB cluster always has at least two
readable standby DB instances available for failover, in the unlikely

event of an Availability Zone failure.

 The setup incurs costs for the DB, so ensure there are no billing-related
issues on the account.
 An EC2 instance to connect to the DB or a local machine.

Procedure:
DB Creation and Setup
1. Go to the AWS console, search for RDS and select Create Database
2. Select the Dev/Test Template and select Multi AZ DB Cluster-
preview option
Note: Only available in US East (N. Virginia), US West (Oregon), and
EU (Ireland) Regions
3. Select the checkbox to acknowledge the SLA for the DB cluster.

4. Enter the required parameters such as DB name, Db username and
Password.

We selected the m6gd instance type with 100 GiB of allocated storage and
1000 IOPS, as these are the minimums, in order to keep costs low.

5. For the purpose of this walkthrough, we have enabled public access to
the DB, but it is highly recommended that you do not enable public
access, as keeping the DB private is the more secure approach.
7. The DB communicates on port 5432, so we can either use an existing
security group that has this port open or create a new security group.

8. We can go to additional configuration for more detailed information


and configuration settings

9. We have disabled automated backups and encryption for the purpose of
this walkthrough, but it is highly recommended to enable these for better
security, reliability, and disaster recovery.

10. Leave the other settings at their defaults (or adjust them to your
preference) and click Create database.
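For reference, an equivalent Multi-AZ DB cluster can also be created from
the AWS CLI. The sketch below is a hedged example based on the settings used
above; the cluster identifier, credentials, and security group ID are
placeholders, and exact parameter support may vary by engine version and
region.

aws rds create-db-cluster \
--db-cluster-identifier my-multiaz-cluster \
--engine postgres \
--engine-version 13.4 \
--db-cluster-instance-class db.m6gd.large \
--storage-type io1 \
--allocated-storage 100 \
--iops 1000 \
--master-username postgresadmin \
--master-user-password <your-password> \
--vpc-security-group-ids sg-0123456789abcdef0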

11. The DB takes a few minutes to create; once created, the result will look
similar to what is shown below.
12. Once created, we can see cluster endpoint names for both the reader and
writer nodes. We can also select individual instances and view their
endpoints.

13. Connect from a local machine or an EC2 instance that has the psql
package installed. For this walkthrough, we used an Amazon Linux 2 AMI
(free tier eligible) and installed the psql package on it to connect to the
PostgreSQL DB we created.

Using Cluster Writer Endpoint


In the below screenshot, we are using the Cluster Writer endpoint to
connect to the cluster and perform write operations on the DB

Create a Database, connect to the database and insert records into it
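Since the original screenshots are not reproduced here, the commands below
are a minimal sketch of what that session might look like. The cluster
endpoint, master user, database, and table names are placeholders.

# Connect through the cluster (writer) endpoint
psql -h my-multiaz-cluster.cluster-abcdefg123.us-east-1.rds.amazonaws.com -p 5432 -U postgresadmin -d postgres

# Inside psql: create a database and a table, then insert a record
# CREATE DATABASE demo;
# \c demo
# CREATE TABLE orders (id int PRIMARY KEY, item text);
# INSERT INTO orders VALUES (1, 'book');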

Connect To Cluster Reader Endpoint


Connect to the Cluster Reader endpoint and query the records.

If we try to write records via the reader endpoint, we get an error: it is a
reader endpoint, not a writer endpoint, so records can be written only via
the writer endpoint.

Connect to Individual Instance Endpoints


Let’s connect to the individual endpoints of the instances and perform the
read and write operations.

Note: In the current configuration, instance 1 is the writer node while


instance 2 and 3 are reader nodes. We can perform read and write
operations on instance 1 but only read operations on instance 2 and 3

Let’s connect to Instance 2 using the reader endpoint and query the table.

Let’s connect to the 3rd instance and query the table.

4. Let’s connect to instance 1 which is the writer instance and write
records to it.

Rebooting Setup
In case the instances are rebooting, we can connect to the other instances
for read operations

Performing Failover:
Let’s perform a manual failover on the DB and check the results.

3. If we view carefully, after the failover, Node 3 became the writer
instance while node 1 and node 2 became the reader instances.

4. Let’s perform the failover again and check the results.

5. Now instance 2 became the writer node while instance 1 and instance
3 became the reader nodes.

Deleting Cluster
Once done, we can delete the cluster and terminate the EC2 instances as
well in order to save costs.

Conclusion:
The Multi-AZ RDS cluster with read capability provides the best of both
worlds: high availability and readable standbys. It also means we don't need
a separate Read Replica just to offload read traffic, since that can be
handled by the standby nodes, giving us cost savings as well as greater
reliability and availability of the database layer. Because the nodes are
highly available, even in case of a failover or disaster our DB remains safe
and fully operational. AWS is expected to roll out this feature to other DB
engines as well, and hopefully the SLA will also be amended so that this
solution becomes suitable for production environments too, which would be a
boon for the database layer in many applications and complex environments.

Amazon DynamoDB

Quick brush up of DynamoDB


DynamoDB is a fully managed NoSQL database service provided by
Amazon Web Services (AWS). It is designed for applications that
require low latency, high scalability, and seamless scalability of
throughput and storage.

Key features of DynamoDB include:


 NoSQL Database:
DynamoDB is a NoSQL database, which means it doesn’t adhere to
the traditional relational database model. It provides a flexible
schema, allowing you to store and retrieve unstructured, semi-
structured, and structured data.

 Fully Managed Service:


DynamoDB is a fully managed service provided by AWS. This
means that AWS handles the underlying infrastructure management,
such as provisioning, scaling, replication, backups, and patching,
allowing you to focus on building applications rather than managing
the database infrastructure.
 Scalability and Performance:
DynamoDB offers seamless scalability for both read and write
capacity. It automatically distributes data and traffic across multiple

servers, allowing the database to handle high volumes of requests and
support massive workloads. You can scale up or down based on
demand without any downtime.

 Flexible Data Model:


DynamoDB uses a key-value data model with support for composite
primary keys. Each item (row) in the database is uniquely identified
by its primary key, which can be a single attribute or a combination of
attributes. This flexible data model enables efficient access patterns
for different types of queries.

 High Availability and Durability:


DynamoDB replicates data across multiple Availability Zones (AZs)
within a region to ensure high availability and durability. It
automatically synchronously replicates data across three AZs,
providing fault tolerance and data redundancy.

 Performance at Any Scale:


DynamoDB offers consistent single-digit millisecond latency for both
read and write operations, regardless of the data volume or
throughput requirements. It achieves this by using SSD storage and a
distributed architecture optimized for low-latency performance.

 Security and Fine-Grained Access Control:


DynamoDB provides several security features, including encryption
at rest and in transit, fine-grained access control through AWS
Identity and Access Management (IAM), and support for IAM roles.

 Integration with AWS Ecosystem:


DynamoDB seamlessly integrates with other AWS services, allowing
you to build powerful and scalable applications. It can be easily
combined with services like AWS Lambda, Amazon S3, Amazon
Redshift, and others to create complete data-driven solutions.

Do check out the DynamoDB 101 : An introduction to

DynamoDB article if you are new to DynamoDB. That will help you get
a quick grasp on what the use cases of something like DynamoDB are.
If you are stuck in choosing a database, the article SQL vs NoSQL:
Choosing the Right Database Model for Your Business Needs will
help you get started on the right path.

Architectural components of DynamoDB


For a clearer understanding, let’s follow an example and try to base our
thinking on that. Let’s take an example of a Payments System. The use-
case is that we are designing the database for the same. Let’s say the
system supports 2 modes of Payments:

 Wallet Payments
 Card Payments
Apart from this, whenever a transaction is made for an amount greater than
$100, we need to trigger a notification after 3 days into the payments
service, which propagates it to the Rewards Service, which then issues a
coupon to the user.
I am listing out the components here, and we will then go in depth into each
one. Please go through them sequentially, as you will need the earlier
context to understand the later components.
1. Tables
2. Items
— Time To Live (TTL)
3. Attributes
4. Primary Key
—Partition Key (Hash Key)
—Sort Key (Range Key)
5. Secondary Indexes
— Local Secondary Index (LSI)
— Global Secondary Index (GSI)

6. Streams
7. DynamoDB Accelerator (DAX)
Let’s understand these components one by one with the help of the
example. I will be providing examples of these components to their
counterpart examples in SQL databases.

1. Tables
Tables are the fundamental storage units that hold data records. The
counterpart for a table in the SQL world is also a table.
According to our example, the table would be a Transactions
table that records all the data related to transactions processed by the
system, such as transaction data, transaction status history data, etc.

2. Items
 Item is a single data record in a table. Each item in a table can be
uniquely identified by the stated Primary Key (Simple Primary
Key or Composite Primary Key) of the table.

 According to our example, in your Transactions table, an item would


be a particular Transaction’s data or Transaction’s history record.

 The counterpart for an item in the SQL world is a record or a row.


 DynamoDB provides a TTL feature that allows you to automatically
delete expired items from a table. By specifying a TTL attribute for a
data record, DynamoDB will automatically remove the item from the
table when the TTL value expires.
 Now, this would be helpful in fulfilling the use-case presented in our
example where we would need to trigger a notification 3 days post
having a successful transaction over 100$. We simply set the
appropriate TTL when the required conditions are met and we are
done.
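As an illustrative sketch, TTL is enabled per table by naming the attribute
that holds the expiry timestamp (epoch seconds). The attribute name
expires_at below is an assumption for this example rather than a value
defined elsewhere in the walkthrough.

aws dynamodb update-time-to-live \
--table-name Transactions \
--time-to-live-specification "Enabled=true, AttributeName=expires_at"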

3. Attributes
Attributes are pieces of data attached to a single item. It can be of
different data types, such as string, number, Boolean, binary, or complex
types like lists, sets, or maps. Attributes are not required to be
predefined, and each item can have different attributes.
An attribute is comparable to a column in the SQL world.
In our example a few attributes can be the following:

 transaction_id → This attribute represents the unique identifier of


each transaction. It could be a string or a number.
 transaction_date → This attribute stores the timestamp or date
when the transaction occurred. It could be stored as a string or in a
specific date/time format.
 user_id → This attribute represents the identifier of the customer
associated with the transaction. It could be a string or a number,
depending on how you identify customers.
 amount → This attribute stores the monetary value of the
transaction. It could be represented as a number or a string, depending
on your application’s requirements.

4. Primary Key
Primary key is a unique identifier that is used to uniquely identify each
item (row) within a table. The primary key is essential for efficient data
retrieval and storage in DynamoDB. It consists of one or two attributes:

 Partition Key (Hash Key):

The partition key is a single attribute that determines the partition, or
physical storage location, of the item. DynamoDB uses the partition
key’s value to distribute the data across multiple partitions for scalability
and performance. If the table uses only a partition key (a simple primary
key), each partition key value must be unique within the table. In our case,
the Transactions table can have a Partition Key of user_id.

 Sort Key (Range Key):

The sort key is an optional attribute that, when combined with the
partition key, creates a composite primary key. The sort key allows you
to further refine the ordering of items within a partition. It helps in
performing range queries and sorting items based on the sort key’s
value. The combination of the partition key and the sort key must be
unique within the table. Examples of sort keys include timestamps,
dates, or any attribute that provides a meaningful ordering of items
within a partition. In our case, the Transactions table can have a Sort
Key depicting the type of item it is. As an example, one of the item can
have a Sort Key with value TRANSACTION_HISTORY that
essentially stores the status updates of the transaction. Similarly,
something like TRANSACTION_REWARDS_NOTIFICATION can
cater to our requirement of sending a notification of a specific
transaction to the Rewards system.
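A minimal sketch of creating such a table from the AWS CLI is shown below,
assuming user_id as the partition key and a generic sk attribute as the sort
key (the attribute name sk and the on-demand billing mode are assumptions
made for this example).

aws dynamodb create-table \
--table-name Transactions \
--attribute-definitions AttributeName=user_id,AttributeType=S AttributeName=sk,AttributeType=S \
--key-schema AttributeName=user_id,KeyType=HASH AttributeName=sk,KeyType=RANGE \
--billing-mode PAY_PER_REQUEST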

Quick display of the Architectural components of DynamoDB

5. Secondary Index
The primary key uniquely identifies an item in a table, and you may
make queries against the table using the primary key. However,
sometimes you have additional access patterns that would be inefficient

with your primary key. DynamoDB has secondary indexes to enable
these additional access patterns. DynamoDB supports two kinds of
secondary index:

Local Secondary Index


Local Secondary Index (LSI) is an additional index that you can create
on a table. LSI allows you to define a different sort key for the index
while keeping the same partition key as the table. They are useful when
you have different access patterns or when you want to query data based
on attributes other than the primary key. They provide more flexibility in
data retrieval without the need to create a separate table or duplicate
data.

 Same Partition Key: The partition key for the LSI is the same as the
base table’s partition key. This means that the LSI partitions the data
in the same way as the base table. There are a few caveats however,
which are listed down as follows:

Subset of Attributes: When creating an LSI, you can specify a
subset of attributes from the base table to include in the index. These
attributes become part of the index and can be projected into the
index’s key schema or as non-key attributes.

Query Flexibility: With an LSI, you can perform queries that utilize
the LSI’s partition key and sort key. This allows you to efficiently
retrieve a subset of data based on specific query requirements without
scanning the entire table.

Read Consistency: Because an LSI shares its partition with the base
table, queries against it can use either eventually consistent or strongly
consistent reads; with eventually consistent reads, the index may not
immediately reflect the most recent write.

Write Performance: When you modify data in a table with LSIs,

DynamoDB needs to update the base table as well as all the
corresponding LSIs. This can impact the write performance compared
to a table without any secondary indexes.
However, keep in mind that the number of LSIs you can create per table
is limited, and the provisioned throughput is shared between the base
table and all its LSIs.
For example, if there is a requirement to query all transactions for a
specific customer within a particular date range, you can use an LSI whose
sort key is a composite of the type + created_at attributes to achieve that.

Local Secondary Index with a hybrid of type + created_at

In this query, user_id represents the specific user you want to query, and
the sort key contains the range of dates you are interested in. By utilizing
the LSI, DynamoDB can efficiently retrieve the transactions that match
the specified user_id and fall within the given date range.

The LSI in this example helps you perform targeted queries on
transactions based on the customer ID and transaction date, without
having to scan the entire table. It provides an alternative access pattern
for retrieving transaction data and can improve query performance for
specific use cases.
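A hedged sketch of such a query via the AWS CLI might look like the
following; the index name type_created_at-index, the composite sort key
attribute type_created_at, and the example values are assumptions for
illustration.

aws dynamodb query \
--table-name Transactions \
--index-name type_created_at-index \
--key-condition-expression "user_id = :uid AND type_created_at BETWEEN :start AND :end" \
--expression-attribute-values '{":uid":{"S":"user-123"},":start":{"S":"TXN#2024-01-01"},":end":{"S":"TXN#2024-01-31"}}'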

Global Secondary Index: It is an additional index that you can


create on a table. It provides an alternative way to query the data within
the table using attributes other than the primary key. GSI allows you to
define a different partition key and sort key from the base table,
providing alternative access patterns and query capabilities.
Different Partition and Sort Key: Unlike Local Secondary Indexes
(LSIs), GSIs allow you to define a different partition key and sort key
from the base table. This means you can partition and sort the data in the
GSI independently of the base table’s primary key.

Query Flexibility: With a GSI, you can perform queries based on the
GSI’s partition key and sort key. This allows you to efficiently retrieve a
subset of data based on specific query requirements without scanning the
entire table.

Projection: When creating a GSI, you can choose to project a subset of


attributes from the base table into the index. These projected attributes
can be included in the index’s key schema or as non-key attributes,
allowing you to retrieve a subset of data directly from the GSI without
accessing the base table.

Separate Provisioned Throughput: GSIs have their own provisioned


throughput capacity, independent of the base table. This means you can
specify different read and write capacity units for the GSI, enabling you
to optimize performance based on the expected query workload.

Eventual Consistency: Unlike LSIs, GSIs support only eventual
consistency for read operations. After a write operation, the index may
not immediately reflect the updated data.
However, keep in mind that creating GSIs can consume additional
storage and provisioned throughput, so careful consideration is needed
to optimize the indexing strategy based on your application’s
requirements.
Let’s introduce the field called payment_method which we mentioned
earlier. To provide an alternative access pattern based on payment
methods, you can create a GSI on the transaction table with the
following attributes:

 Partition Key: A different attribute representing the payment


method, payment_method being that attribute here.
 Sort Key: The transaction_id, which uniquely identifies each
transaction.

Global Secondary Index: Partition Key — payment_method, Sort Key — transaction_id
The GSI in this example helps you perform targeted queries on
transactions based on the payment method, without having to scan the
entire table. It provides an alternative access pattern for retrieving
transaction data and can improve query performance for specific use
cases.
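As a rough sketch, a query against such a GSI could look like the following
AWS CLI call; the index name payment_method-index and the value CARD are
illustrative assumptions.

aws dynamodb query \
--table-name Transactions \
--index-name payment_method-index \
--key-condition-expression "payment_method = :pm" \
--expression-attribute-values '{":pm":{"S":"CARD"}}'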

6. Streams

DynamoDB Streams is a feature of Amazon DynamoDB that provides a


time-ordered sequence of item-level modifications made to a table. It
captures a stream of events that represent changes to the data in a
DynamoDB table, including inserts, updates, and deletions. Each event
in the stream contains information about the modified item, such as its
primary key, attribute values before and after the modification, and a
timestamp indicating when the modification occurred.
Here are some key features and use cases of DynamoDB Streams:

 Capture Changes:
DynamoDB Streams captures changes happening in real-time as
modifications are made to the table. It provides a durable and reliable
way to track and react to changes in your data. This use-case is often
referred to as CDC or Change Data Capture.

 Integration and Event-Driven Architecture:


Streams can be used to integrate DynamoDB with other AWS
services or external systems. By processing the stream events, you
can trigger actions or update downstream systems based on changes
in the DynamoDB table.

 Change History and Audit Logs:


Streams provide a complete history of changes made to a DynamoDB
table, allowing you to maintain an audit trail of all modifications. You
can use the stream to review past changes, investigate issues, or
perform compliance and security checks.

 Data Synchronization:
Streams can be used for replicating data across multiple DynamoDB
tables or databases. By consuming the stream and applying the
changes to other destinations, you can keep different data stores
synchronized in near real-time.

 Data Transformation and Analytics:


By processing the stream events, you can transform the data,
aggregate information, and perform real-time analytics or generate
derived datasets for reporting purposes.

 Cross-Region Replication:
Streams can be utilized to replicate data from one DynamoDB table
to another in a different AWS region. This helps in creating disaster
recovery setups or distributing read traffic across regions.
To consume DynamoDB Streams, you can use AWS Lambda, which
allows you to write code that runs in response to stream events. You can
also use AWS services like Amazon Kinesis Data Streams or custom
applications to process and react to the stream events.
Remember our use-case that whenever there is a transaction done with
an amount greater than 100$, then we need to trigger a notification after
3 days into the payments service that propagates the same to
the Rewards Service which then issues a coupon to the user. DDB
Streams along with TTL can help us achieve this.
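Enabling a stream on the table is a one-line change; the sketch below
assumes the Transactions table and captures both old and new item images so
the Lambda consumer can see what changed.

aws dynamodb update-table \
--table-name Transactions \
--stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES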

Flow diagram for the Rewards Notification DDB Streams use-case

7. DynamoDB Accelerator (DAX)


DynamoDB Accelerator is an in-memory caching service provided by

Amazon Web Services (AWS) specifically designed for DynamoDB.
DAX improves the performance of DynamoDB by caching frequently
accessed data in memory, reducing the need to access the underlying
DynamoDB tables for every request. It provides low-latency read access
to the cached data, resulting in faster response times and reduced
database load.
Here are some key features and benefits of DAX:

 In-Memory Caching:
DAX caches frequently accessed data from DynamoDB tables in
memory. This eliminates the need for repeated reads from the
database, resulting in reduced latency and improved response times
for read-intensive workloads.

 Seamless Integration:
DAX is fully compatible with DynamoDB and integrates seamlessly
with existing DynamoDB applications. You can simply point your
application to the DAX endpoint, and it will automatically route read
requests to the DAX cluster.

 High Performance: DAX delivers microsecond latency for read


operations, enabling applications to handle millions of requests per
second with consistent, predictable performance.

 Automatic Data Management:


DAX automatically manages the cache and keeps it in sync with the
underlying DynamoDB tables. It handles invalidations and updates to
ensure data consistency between the cache and the database.

 Scalability and Availability:


DAX is a managed service that provides scalability and high
availability. It automatically scales the cache capacity based on the
workload, and data is distributed across multiple nodes to ensure
durability and fault tolerance.

 Cost Optimization:
By reducing the number of read operations on the underlying
DynamoDB tables, DAX can help lower the cost of running read-
intensive applications by reducing provisioned throughput and
minimizing the number of DynamoDB read capacity units required.

DAX is particularly useful for applications with high read traffic, such as
Payment Recon systems, real-time analytics, gaming leader-boards,
session stores etc. It improves the overall performance and efficiency of
DynamoDB, providing a seamless caching layer that enhances the speed
and scalability of your applications without sacrificing data consistency.

DynamoDB and its features


How to create a simple table, add data, scan and query the
data, delete data, and delete the table.
What is DynamoDB?
DynamoDB is a fully managed NoSQL serverless database offered by
AWS. DynamoDB is a great fit for mobile, web, gaming, ad tech, and
IoT applications where scalability, throughput, and reliable performance
are key considerations.

Features of DynamoDB:
 NoSQL database:
It has a flexible schema which allows you to have many different attributes
for one item, so we can easily adapt to changing business requirements
without having to redefine the table schema. It also supports key-value and
document data models.

 Managed and Scalable service:


As it is a managed service, you don't need to worry about provisioning,
managing, maintaining, and operating the underlying servers. It is a
pay-as-you-go service that scales to zero and automatically scales tables to
adjust capacity to match your workload.

 DynamoDB Global Table:


With global tables, reads and writes can access data locally in the selected
Regions, giving single-digit millisecond read and write performance. Global
tables automatically scale capacity to accommodate your multi-Region
workloads, which improves your application's multi-Region resiliency.

 DynamoDB Streams:
It captures data modification events (create, update, or delete of items in
a table) in near real time. Each record has a unique sequence number which
is used for ordering.

 Secondary Indexes:
DynamoDB provides fast access to items in a table by specifying
primary key values. However, many applications might benefit from
having one or more secondary (or alternate) keys available, to allow
efficient access to data with attributes other than the primary key.
DynamoDB provides 2 kinds of Indexes: Global Secondary Index and
Local Secondary Index.

 DynamoDB Accelerator (DAX):


A built-in caching service for DynamoDB which is fully managed and
highly available. It can serve millions of requests per second with
microsecond response times for read-heavy workloads.

 On-Demand Capacity Mode:
Allows users to scale seamlessly without capacity planning. It ensures
optimal performance and cost efficiency for fluctuating workloads.

 Point-in-time Recovery:
Enables users to restore their data to any second within a 35-day
retention period, protecting against accidental data loss. It provides
peace of mind by allowing effortless recovery from user errors or
malicious actions, ensuring data integrity and availability.

 Encryption at Rest:
By default, DynamoDB encrypts all data at rest using AWS KMS (Key
Management Service), providing an additional layer of protection
against unauthorized access.

 Built-in support for ACID transactions:


DynamoDB guarantees ACID (Atomicity, Consistency, Isolation,
Durability) properties for transactions, ensuring reliable and predictable
data operations. Amazon DynamoDB provides native, server-side
transactions, simplifying the developer experience of making
coordinated, all-or-nothing changes to multiple items both within and
across tables.

Log in to your AWS Management Console account and search for DynamoDB.
Creating a NoSQL table:
Click on Create Table

Give the table name (AWSLearners), partition key (LearnerName) and
sort key (Certifications).

For Table settings, select Customize settings (to enable auto scaling for
our table). DynamoDB auto scaling will change the read and write
capacity of your table based on request volume. The rest of the settings can
be left as they are; just add the JSON policy if required.

Add data to the NoSQL table
Click on Create Item

Enter the value for LearnerName and Certification

Query the NoSQL Data


Search for data in the table using query operations. In DynamoDB,
query operations are efficient and use keys to find data.
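For reference, the same add and query operations can be done from the AWS
CLI. This is a minimal sketch assuming the table and attribute names used
above (AWSLearners, LearnerName, Certifications) and the example learner
named Jay; the certification value is an assumption.

# Add an item
aws dynamodb put-item \
--table-name AWSLearners \
--item '{"LearnerName":{"S":"Jay"},"Certifications":{"S":"Solutions Architect"}}'

# Query by partition key
aws dynamodb query \
--table-name AWSLearners \
--key-condition-expression "LearnerName = :n" \
--expression-attribute-values '{":n":{"S":"Jay"}}'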

Deleting an existing item
Here I am deleting the item named Jay

Delete a NoSQL Table:
Deleting the entire AWSLearners table.

Type confirm to delete entire table

So, we created our first DynamoDB table, added items to the table, and
then queried the table to find the items we wanted. Also, learned how to
visually manage your DynamoDB tables and items through the AWS
Management Console.

SIMPLE STORAGE SERVICE

Amazon S3 (Simple Storage Service) is an industry-leading object


storage service. You can securely store any objects such as files, images,
videos, and payloads. Conceptually, S3 is similar to the drives you use on a
daily basis, such as Google Drive and Microsoft OneDrive. It is so powerful
and reliable that it can scale practically indefinitely. Its ease of use has garnered
widespread admiration among engineers, cementing its status as a
favored service within the tech community.
In S3, you can sidestep complex setups entirely. All you need to do is
provide valid credentials. You can create an IAM user for the app with
an access key. However, relying solely on a permanent access key isn’t
considered best practice. A preferable approach involves deploying the
app on EC2 or ECS and utilizing IAM roles. This method generates
temporary tokens and manages rotation automatically, enhancing
security.
Novices often misconstrue S3 as a database, when in fact, it lacks the
typical characteristics of a database technology. However, it frequently
complements databases in usage scenarios. For instance, storing actual
images for posts in S3 while retaining object keys in the database as
pointers to these images.
Many organizations in the USA use S3 as a data lake. A data lake is a
centralized repository that allows you to store all your structured and
unstructured data at any scale. On top of the S3 data lake, you can utilize
analytical services like Athena and Glue to analyze organizational data
and extract insights without impacting the business apps.

S3 concepts:
 Buckets are containers for objects stored in Amazon S3. Every object
is contained in a bucket. Think of it as a folder for organizing files.

 Objects are the fundamental entities stored in Amazon S3. Objects
consist of object data and metadata. The maximum object size is 5 TB, and
you can store an unlimited number of objects.

 Keys are the unique identifiers for an object within a bucket. Every
object in a bucket has exactly one key that you can use later for object
retrieval.
If you enable versioning in the bucket, it doesn’t overwrite the existing
object if the key already exists. Instead, it adds enumerated versions.
The combination of a bucket, key, and version ID uniquely identifies
each object. When you create an S3 bucket, you have to select the region
where it belongs. When you go to the S3 console, you will see all
buckets on one screen. That illustrates that S3 buckets are regional
resources but the S3 namespace is global. The best practice is to select the region
that is physically closest to you, to reduce transfer latency. You can
create a folder in an S3 bucket to organize your data. Data engineers use
S3 folders for achieving data partitioning by date in the S3 data lake.

S3 Permissions
S3 permissions allow you to have granular control over who can view,
access, and use specific buckets and objects. Permissions functionality
can be found on the bucket and object levels. There are 3 types of
permissions in S3:

 Identity-based — IAM user or role has permission to access S3.


For example, A developer who has S3FullAccess or an EC2 instance
with an IAM role that has S3FullAccess policy.

 Resource-based — You can write the S3 bucket policy where the


principal tag defines who can access it. For instance, the bucket policy
can allow the public to get all objects for static website hosting.

 Access Control List (ACL) — Applies permissions to both buckets and
individual objects. With identity-based and resource-based policies, you
cannot assign a policy directly to an individual object.

Storage Classes
A storage class represents the classification assigned to each object. Each
storage class has varying attributes that dictate:

 Storage cost

 Object availability — It is the percent (%) over a one-year time period


that a file stored in S3 will be accessible.
 Object durability — It is the percent (%) over a one-year time period
that a file stored in S3 will not be lost. S3 provides 11 nines
(99.999999999%) durability, which means that if you store 10 million objects
in S3, you can on average expect to lose a single object once every 10,000
years. There's a higher likelihood of an asteroid destroying Earth within
that period.

 Frequency of access

Storage classes are:
 S3 Standard for frequently accessed data. The default option and
expensive.

 S3 Express One Zone for your most frequently accessed data. It


provides consistent single-digit millisecond latency. Good for data-
intensive apps that require fast runtime.

 S3 Standard-Infrequent Access (S3 Standard-IA) and S3 One Zone-
Infrequent Access (S3 One Zone-IA) for less frequently accessed
data. The storage cost is cheaper, but you are charged more when
retrieving data.
 S3 Glacier Instant Retrieval for archive data that needs immediate
access. Decent pricing for object storage.

 S3 Glacier Flexible Retrieval (formerly S3 Glacier) for rarely


accessed long-term data that does not require immediate access. Data
retrieval could take minutes to hours.
 Amazon S3 Glacier Deep Archive (S3 Glacier Deep Archive) for
long-term archive and digital preservation with retrieval in hours at
the lowest cost storage in the cloud.

 S3 Intelligent-Tiering for automatic cost savings for data with


unknown or changing access patterns.
You can set lifecycle configurations that automate the migration of an
object’s storage class to a different storage class, or its deletion, based
on specified time intervals for cost optimization. For example, you can
set the following rule:

 I’ll be regularly using a work file for the next 30 days, so please keep
it in the standard class during this time.

 After this initial period, I’ll only need to access the file once a week

for the subsequent 60 days. Therefore, after 30 days, please transfer
the file from the standard class to the infrequently accessed class.

 Following this, I anticipate no further need to access the file after a


total of 90 days, but I’d like to retain it for archival purposes.
Accordingly, after 90 days, please move the file from the infrequently
accessed class to Glacier.
Whenever possible, it’s advisable to establish lifecycle configurations as
part of best practices for cost efficiency. A lifecycle policy can be
applied to the entire bucket, specific folders, and objects.
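The rule described above could be expressed roughly as follows with the AWS
CLI; the bucket name and prefix are placeholders, and the JSON is a hedged
sketch of one way to write it.

aws s3api put-bucket-lifecycle-configuration \
--bucket my-bucket \
--lifecycle-configuration '{
  "Rules": [
    {
      "ID": "work-file-rule",
      "Status": "Enabled",
      "Filter": { "Prefix": "work/" },
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER" }
      ]
    }
  ]
}'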

S3 Static web hosting


One standout feature of S3 is its capability for static web hosting, which
I’ve utilized for hosting my side projects. What’s particularly remarkable
about hosting a website on S3 is its cost-effectiveness — it’s essentially
free. Setting up static web hosting with S3 is straightforward, requiring
just three simple steps:
 Make the bucket public. And make objects public by writing bucket
policy.

 Enable static web hosting in the bucket.


 Build the front-end app for example in React. Then drag and drop the
build files. After these steps, if you get a 403 (Forbidden) error, make
sure the bucket policy allows public reads as I stated in Step 1.
 Another trick I do with the S3 static web hosting feature is
redirecting requests to another website in a serverless and cost-
effective way, without needing to launch a proxy server and configure
tools like Nginx. When you type “fb.com”, it redirects you to
“facebook.com”.
That functionality is achievable through the static website hosting feature
as illustrated below.
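As a rough CLI sketch of the three setup steps above (the bucket name is a
placeholder, and the policy shown is only one possible public-read policy;
Block Public Access settings must also permit it):

# Allow public reads via a bucket policy
aws s3api put-bucket-policy --bucket my-site-bucket --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::my-site-bucket/*"
  }]
}'

# Enable static website hosting
aws s3 website s3://my-site-bucket --index-document index.html --error-document error.html

# Upload the front-end build output
aws s3 sync ./build s3://my-site-bucket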

To optimize your website’s performance globally and bolster security,
consider leveraging the CDN (Content Delivery Network) service,
CloudFront, on top of the website hosted on S3. It’s advisable to
implement Origin Access Identity (OAI) as a best practice, as this
secures the website assets stored in Amazon S3. With OAI, direct public
access to the bucket is restricted, ensuring that the public can only access
the website through CloudFront. Additionally, CloudFront aids in cost
reduction by minimizing the number of requests to S3. It’s important to
note that every individual request in S3 incurs a charge, highlighting the
general cost implications associated with cloud-based services.

Clients directly accessing a static website hosted in Europe

Clients accessing a static website deploy through the CDN (CloudFront)

CloudFront is a versatile and widely utilized service. Here are the key
aspects:

 Blacklisting every IP address of a country would be impractical and


labor-intensive. Instead, use CloudFront for blacklisting countries.
 CloudFront reduces S3 cost. Because the content is delivered from
the edge server and there are no API calls against S3.

 You can run business logic code and small functions with
Lambda@Edge. For example, inspect the headers of requests and pass
the requests downstream only when a valid token is present.

 You can implement an advanced caching behavior based on query


strings and headers and serve private content through the CDN.
 It helps with video streaming.
You can do path-based routing. For example, if the request has “us” in
the path, then forward it to the origin in the US region; if it is “ap”, then
forward it to Asia Pacific, etc.

Presigned URL
The S3-presigned URL stands out as a crucial feature, widely utilized in

practical applications. It allows clients to download and upload an object
to the bucket with a temporary URL. For instance, consider a book-
selling application where customers need immediate access to their
purchased books. In such cases, leveraging S3-presigned URLs allows
you to generate temporary links, granting customers access to download
their books promptly after purchase.
Another practical application of S3-presigned URLs arises when
uploading large files to an S3 bucket through the API Gateway.
Typically, API Gateways serve as the entry point for RESTful endpoints.
However, API Gateway has a limitation of handling requests up to 10
MB. I encountered this issue while dealing with an endpoint for blobs.
The endpoint functioned properly for certain files but failed for others,
leading to extensive debugging efforts. Initially, I couldn’t pinpoint
whether the problem lay with the API Gateway or S3 configuration, as
everything appeared correct. Eventually, I discovered the 10 MB request
limit of API Gateway, which was causing the failure. To address this, I
implemented a solution using pre-signed URLs. Instead of directly
storing files from the API Gateway to S3, I introduced an additional step.
Initially, the request goes to the API Gateway, where authorization for
file storage is verified. If authorized, the API Gateway returns a
presigned URL. Subsequently, the client application makes another call
to store the file using the provided URL.
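Generating a presigned download URL is a single CLI call (or the equivalent
SDK call); the bucket, key, and expiry below are placeholder values for
illustration.

aws s3 presign s3://my-bookstore-bucket/books/purchased-book.pdf --expires-in 900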

S3 Gateway endpoint
All resources within the AWS cloud are integrated via the AWS Global
Network, as they operate within the same AWS network and
infrastructure. VPC endpoints facilitate communication between
instances within a Virtual Private Cloud (VPC) and various AWS
services, enabling efficient interaction across different accounts within
the AWS ecosystem. AWS resources in your account can be connected
to the resources in my AWS account through VPC endpoints.
When I was working on a multi-account initiative, we frequently
encountered the need to link resources across various accounts. VPC
endpoints come in handy for this purpose. When I required connectivity
between resources in different accounts, I simply submitted a ticket to
our DevOps team requesting the creation of a VPC endpoint. This
streamlined the process, enabling seamless connectivity to resources in
different AWS accounts. Although VPC endpoints offer practical
solutions, it’s essential to note that they incur additional charges.
Therefore, it’s advisable to decommission VPC endpoints once their use
is no longer required, as a best practice.
S3 is a publicly accessible service available for anyone to use. To
connect to S3, you need a valid token and an internet connection.
However, in certain scenarios, EC2 servers may not have internet
connectivity due to stringent security reasons. Nonetheless, these servers
can still establish connections with S3 using a VPC endpoint, specifically
the S3 Gateway endpoint. You can configure which resources can access
the S3 bucket through the VPC endpoint by writing an S3 bucket policy.
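A minimal sketch of such a bucket policy, applied with the AWS CLI (the bucket name and VPC endpoint ID are hypothetical placeholders), denies all S3 actions unless the request arrives through the specified endpoint:

aws s3api put-bucket-policy --bucket my-private-bucket --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowAccessOnlyFromVpcEndpoint",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": ["arn:aws:s3:::my-private-bucket", "arn:aws:s3:::my-private-bucket/*"],
    "Condition": {"StringNotEquals": {"aws:SourceVpce": "vpce-0123456789abcdef0"}}
  }]
}'

Be careful with broad Deny statements like this one; they can also lock out console and administrative access that does not go through the endpoint.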

S3 Event notification
You can use the Amazon S3 Event Notifications feature to receive
notifications when certain events happen in your S3 buckets such as a
new object being created or an object getting removed. It is a way to achieve a modern Event-Driven Architecture that emphasizes asynchrony and loose coupling, which helps achieve better responsiveness.
When users upload profile pictures, synchronously generating a
thumbnail could take seconds. However, employing an Event-Driven
approach with S3 Event Notifications could significantly reduce latency
from seconds to milliseconds. Here’s how it works: After the user
uploads the profile picture, it’s stored in S3. Subsequently, S3 event
notifications initiate a new object-created event, triggering a Lambda
function. This Lambda function asynchronously generates the
thumbnails, optimizing the process for faster response times.
The S3 Event Notification can trigger SNS, SQS, and Lambda. When setting up the event notification, make sure the right resource-based policy is set on the destination (the SNS topic, SQS queue, or Lambda function), as sketched below.
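As a hedged sketch of wiring this up with the AWS CLI (bucket name, function name, Region, and account ID are hypothetical placeholders), you first allow S3 to invoke the Lambda function via its resource-based policy and then attach the notification to the bucket:

# Resource-based policy on the Lambda function so S3 may invoke it
aws lambda add-permission --function-name generate-thumbnail --statement-id s3-invoke --action lambda:InvokeFunction --principal s3.amazonaws.com --source-arn arn:aws:s3:::profile-pictures

# Trigger the function whenever a new object is created in the bucket
aws s3api put-bucket-notification-configuration --bucket profile-pictures --notification-configuration '{"LambdaFunctionConfigurations":[{"LambdaFunctionArn":"arn:aws:lambda:us-east-1:123456789012:function:generate-thumbnail","Events":["s3:ObjectCreated:*"]}]}'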

S3 Global Replication
A seasoned software architect shared with me a strategy for architecting
global applications, S3 Global Replication. This feature boasts
impressive capabilities, allowing data replication between regions in a
matter of minutes. The replication enables automatic, asynchronous
copying of objects across Amazon S3 buckets between different accounts
and regions. An object may be replicated to a single destination bucket or
multiple destination buckets.

When to use global replication:

 Data redundancy — If you need to maintain multiple copies of your data in the same or different AWS Regions, or across different accounts. S3 Replication powers your global content distribution needs, compliant storage needs, and data sharing across accounts. Replica copies are identical to the source data and retain all metadata, such as the original object creation time, ACLs, and version IDs.

 Replicate objects to more cost-effective storage classes — You can use S3 Replication to put objects into S3 Glacier, S3 Glacier Deep Archive, or another storage class in the destination buckets.

 Maintain object copies under a different account.
S3 Cross-Region Replication (CRR) is used to copy objects across
Amazon S3 buckets in different AWS Regions. When to use Cross-
Region Replication:

 Meet compliance requirements — Although Amazon S3 stores your
data across multiple geographically distant Availability Zones by
default, compliance requirements might dictate that you store data at
even greater distances (regions).
 Minimize latency — If your customers are in two geographic
locations, you can minimize latency in accessing objects by
maintaining object copies in AWS Regions that are geographically
closer to your users.

 Increase operational efficiency — If you have compute clusters in two different AWS Regions that analyze the same set of objects, you might choose to maintain object copies in those Regions.
Same-Region Replication (SRR) is used to copy objects across Amazon
S3 buckets in the same AWS Region. SRR can help you do the
following:

 Aggregate logs into a single bucket — If you store logs in multiple buckets or across multiple accounts, you can easily replicate logs into a single, in-region bucket. This allows for simpler processing of logs in a single location.

 Configure live replication between production and test accounts — If you or your customers have production and test accounts that use the same data, you can replicate objects between those multiple accounts, while maintaining object metadata.

Object Lock
Store objects using a write-once-read-many (WORM) model to help you
prevent objects from being deleted or overwritten for a fixed amount of
time or indefinitely. Object Lock provides two ways to manage object
retention:

 Retention period — Specifies a fixed period of time during which an object remains locked. During this period, your object is WORM-protected and can't be overwritten or deleted.

 Legal hold — Provides the same protection as a retention period, but it has no expiration date. Instead, a legal hold remains in place until you explicitly remove it.
Object Lock works only in versioned buckets.

Multipart Upload
Multipart upload allows you to upload a single object as a set of parts.
Each part is a contiguous portion of the object’s data. You can upload
these object parts independently and in any order. If transmission of any
part fails, you can retransmit that part without affecting other parts. After
all parts of your object are uploaded, Amazon S3 assembles these parts
and creates the object.
In general, when your object size reaches 100 MB, you should consider
using multipart uploads instead of uploading the object in a single
operation. It can make your app faster, but it adds complexity as well. For example, you have to provide more parameters in the API, and there are edge cases such as handling incomplete (abandoned) uploads.

Benefits of multipart upload:

 Improved throughput — You can upload parts in parallel to improve throughput.

 Quick recovery from any network issues — Smaller part size minimizes the impact of restarting a failed upload due to a network error.

 Pause and resume object uploads — You can upload object parts over time. After you initiate a multipart upload, there is no expiry; you must explicitly complete or stop the multipart upload.

 Begin an upload before you know the final object size — You can upload an object as you are creating it.
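For reference, a minimal sketch with the AWS CLI (bucket and file names are hypothetical placeholders): the high-level s3 commands switch to multipart automatically above a configurable threshold, and the low-level s3api commands expose the incomplete-upload edge case mentioned above.

# High-level copy; the CLI uses multipart automatically for large files
aws s3 cp ./backup.tar s3://my-bucket/backup.tar

# Optionally tune when multipart kicks in and the part size
aws configure set default.s3.multipart_threshold 100MB
aws configure set default.s3.multipart_chunksize 50MB

# Incomplete uploads keep billable parts around; list and abort them if needed
aws s3api list-multipart-uploads --bucket my-bucket
aws s3api abort-multipart-upload --bucket my-bucket --key backup.tar --upload-id <upload-id>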

S3 Transfer Acceleration
Amazon S3 Transfer Acceleration offers a significant boost in content
transfers. Users operating web or mobile applications with a broad user
base or those hosted far from their S3 bucket may encounter prolonged
and fluctuating upload and download speeds over the internet. S3
Transfer Acceleration (S3TA) addresses these challenges by minimizing
variability in internet routing, congestion, and speeds that typically
impact transfers. It effectively reduces the perceived distance to S3 for remote applications by leveraging Amazon CloudFront's Edge Locations and the AWS backbone network. Along with network protocol optimizations, S3TA enhances transfer performance, ensuring smoother and faster data transfers. A minimal CLI sketch follows.
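A minimal sketch with the AWS CLI, assuming a hypothetical bucket name: enable acceleration on the bucket, then route transfers through the accelerate endpoint.

aws s3api put-bucket-accelerate-configuration --bucket my-bucket --accelerate-configuration Status=Enabled
aws s3 cp ./video.mp4 s3://my-bucket/video.mp4 --endpoint-url https://s3-accelerate.amazonaws.com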

S3 Encryption
In our database, we had a column storing PII (Personal Identifiable
Information) data, which required encryption for security purposes.
Rather than investing significant effort and writing extensive code to
handle this, we opted for a solution with minimal effort. Leveraging
AWS S3 encryption support, we securely stored PII data payloads in an
S3 bucket to comply with regulations. Then we stored the corresponding
S3 object keys in database tables, effectively keeping sensitive
information out of the database.

There are four methods of encrypting objects in S3:

 SSE-S3: Encrypts S3 objects using keys handled and managed by AWS. It is an out-of-the-box feature; there is nothing to do on your side. All objects stored in S3 are encrypted, so your app complies with regulations without any effort from you.

 SSE-KMS: AWS Key Management Service (KMS) is a designated service for encryption. You can opt out of the default S3 encryption and use encryption keys managed in KMS for your objects in S3.

 SSE-C: In a highly regulated environment, you may be required to maintain your own encryption keys. S3 supports that with SSE-C.

 Client-Side Encryption: Encrypt files before storing them in S3, in your client app.
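As a hedged sketch, the same options map to AWS CLI flags roughly as follows (bucket, file, and key names are hypothetical placeholders; client-side encryption happens in your own code before upload):

# SSE-S3 (AWS-managed keys)
aws s3 cp pii-payload.json s3://my-secure-bucket/ --sse AES256

# SSE-KMS with a specific KMS key
aws s3 cp pii-payload.json s3://my-secure-bucket/ --sse aws:kms --sse-kms-key-id alias/my-pii-key

# SSE-C with a customer-provided key kept outside AWS
aws s3 cp pii-payload.json s3://my-secure-bucket/ --sse-c AES256 --sse-c-key fileb://./my-256-bit-key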

S3 Pricing
I had been using Google Drive for storing my images and videos for 7+
years. I paid $20 yearly for 100 GB of storage, and it had been full (in the red) for days.
Then, I downloaded all my media and uploaded them to Amazon S3.
Since I did not access my files frequently, I picked the archival class
(Glacier Instant Retrieval) which costs $0.004 per GB per month. So, the
yearly cost to store my memories reduced from $20 to $4.8 which is a
75% cost reduction.

Despite the numerous impressive features covered here, S3 remains one of the pricier services. I know a case where a company switched their file storage service to use S3 instead of on-premise servers. The cost was overwhelming and they switched back to the on-premise servers. S3 cost can be bearable if you configure the storage classes correctly. Storage price is much cheaper when using an IA (Infrequent Access) storage class, but if the object is retrieved frequently, it costs more.

Pricing varies by region like all other services. There are many factors you are charged for:

 Storage — How much data you stored
 Requests & data retrieval — Number of requests against the bucket
 Data transfer — It is a hidden cost, like a tax when running apps in the cloud.
 Management & Analytics (if you enable them)
 Global Replication (if you enable it)

With Requester Pays buckets, the requester pays the cost of the request and the data downloaded from the bucket. The bucket owner
always pays the cost of storing data.
The API call to delete an object is free of charge, but obtaining the object
prior to deletion incurs costs. Retrieving 1 million objects, for instance,
can amount to approximately $5 depending on the region. Deleting or
replicating 1 million objects may take around 30 minutes. These figures
are sourced from another Medium blog and emphasize the importance of vigilance and monitoring of S3 and cloud costs. It's crucial to be mindful
of various factors that contribute to charges.
Additionally, it’s worth noting that the exact timing of object deletion is
uncertain, as it’s handled asynchronously by AWS. It’s possible that
deletions occur once daily, possibly during nighttime, but this remains
unknown.

IAM Programmatic access and AWS CLI
IAM Programmatic access
In order to access your AWS account from a terminal or system, you can
use AWS Access keys and AWS Secret Access keys.

AWS CLI
The AWS Command Line Interface (AWS CLI) is a unified tool to
manage your AWS services. With just one tool to download and
configure, you can control multiple AWS services from the command
line and automate them through scripts.
The AWS CLI v2 offers several new features including improved
installers, new configuration options such as AWS IAM Identity Center
(successor to AWS SSO), and various interactive features.

Task-01
Create AWS_ACCESS_KEY_ID and
AWS_SECRET_ACCESS_KEY from AWS Console.
 Log in to your AWS Management Console.

 Click on your username in the top right corner of the console and
select “Security Credentials” from the drop-down menu.

Click on the “Access keys (access key ID and secret access key)”
section.

Click on “Create Access Key.”

Your access key ID and secret access key will be displayed. Make sure
to download the CSV file with your access key information and store it
in a secure location.

Task-02

Setup and install AWS CLI and configure your account


credentials

Install the AWS CLI by following the instructions for your operating system: https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html

Check aws-cli version
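For example (the reported version will differ on your machine):

aws --version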

Once you have installed the AWS CLI, open a terminal or command
prompt and run the following command to configure your account
credentials:
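A typical way to do this is with the interactive configure command:

aws configure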
You will be prompted to enter your AWS Access Key ID and Secret
Access Key. Copy and paste the access key and secret key from the downloaded CSV file. You will also be prompted to enter your default region and output format. Choose the region that is closest to your location and select a suitable output format.

Once you have entered your credentials and configured your default
settings, you can test that the CLI is working by running the following
command:
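For example:

aws s3 ls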

This command should list the S3 buckets in your account. You have now set up and installed the AWS CLI and configured your account credentials.

Data Preservation Strategies for EC2 Instances:
Safeguarding Your Information Before Destruction
We will explore how to preserve data in our EC2 instance before it gets destroyed.

Use Case:
Imagine you are responsible for an e-commerce website hosted on an
EC2 instance. You decide to upgrade your application to a newer
version. Before initiating the upgrade, you need to preserve customer
data, transaction records, and other critical information stored on the
instance to ensure a seamless transition and avoid any data loss.

Overview :

Workflow Diagram

Steps :

Step 1:

Creating the EC2 Instance :
 Navigate to the EC2 console.

 Follow the Outlines steps below.

Now, I am going to insert some data into the instance.

Step 2:
Creating an AMI from the Instance:
Now, I am going to create an AMI from our instance.
Steps:

Step 3:
Terminating the Instance:
Now, I am going to terminate the instance, and after that I am going to create a new instance to retrieve the data.
Steps:

Step 4:
Backing Up and Restoring the Data:
Now, I am going to create a new instance to retrieve the data.

Steps:

We successfully backed up and restored the old data.

AWS EBS Snapshots
 EBS is a network storage drive and can be attached to one EC2 instance at a time. An EBS volume cannot be attached across Availability Zones.
 Snapshots are backups of an EBS volume and can be restored in any other Availability Zone as a copy (with the same data).

NOTE: We cannot attach EBS storage from one Availability Zone to another directly. Instead, we take a snapshot of the EBS volume and recreate another EBS volume in a different Availability Zone.
In this article, we will cover three ideas:
a. How to create EBS snapshots
b. How to move them to another Availability Zone
c. How to use a snapshot to restore a new drive

A. Creating EBS snapshots:

 Search for the EC2 instance:

 Go to the Storage tab of the EC2 instance → Click the volume ID

 Here we will see a list of EBS volumes. Right-click on the volume whose backup/snapshot you want to create → Create Snapshot

Write the description of the snapshot you are making. Click on the
Create Snapshot button

A snapshot is created. Click on the Snapshots link on the left. You will find your snapshot there.

An image of the EBS drive is now ready (i.e., the backup is ready).

B. Moving to another Availability Zone:

 Regular EBS storage cannot be attached in other Availability Zones or Regions. This is possible with the help of a snapshot.
 So, to connect it to different EC2 instances, we can copy the EBS snapshot to a different Region or Availability Zone.

NOTE: One Region can have many Availability Zones.

 Right-click on the snapshot → Copy

Choose your destination Region and click the Copy button.

A new copy of the snapshot will be created in the new Region.

C. Restore a new EBS drive from a Snapshot:

The snapshot creates an EBS drive that is an exact copy of the backed-up drive.

You can change the Availability Zone of the drive in the selection below; the rest of the details will be the same as before. Click Create Volume.

Now, when we go to Volumes on the left, we see that a new EBS storage volume is set up.

Closing thoughts:
 In this article, we have understood how EBS snapshots work in AWS and how to restore an EBS volume from snapshots.

 Keeping a backup image of the drive is always safe. Additionally, snapshots can be used to move a copy of the storage drive to another Region as well. The equivalent AWS CLI commands are sketched below.
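A minimal CLI sketch of the same workflow, assuming hypothetical volume, snapshot, and Region identifiers:

# 1. Create a snapshot of the volume
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "Backup of data volume"

# 2. Copy the snapshot into another Region (run against the destination Region)
aws ec2 copy-snapshot --source-region us-east-1 --source-snapshot-id snap-0123456789abcdef0 --region ap-south-1 --description "Cross-Region copy"

# 3. Restore a new volume from the copied snapshot in the desired Availability Zone
aws ec2 create-volume --snapshot-id snap-0fedcba9876543210 --availability-zone ap-south-1a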

Elastic Beanstalk: Advantages and Drawbacks

First, let’s start with the basics: According to the AWS site, “Elastic
Beanstalk makes it even easier for developers to quickly deploy and
manage applications in the AWS cloud. Developers simply upload their
application, and Elastic Beanstalk automatically handles the deployment
details of capacity provisioning, load balancing, auto-scaling, and
application health monitoring.”

Advantages:
Elastic Beanstalk’s main benefits include timesaving server
configuration, powerful customization, and a cost-effective price point.
Lightning-Fast Configuration with Automation

Elastic Beanstalk automates the setup, configuration, and provisioning of other AWS services like EC2, RDS, and Elastic Load Balancing to
create a web service. You can log into your AWS management console,
and have a new site up and accessible in less than an hour. Elastic
Beanstalk also creates a fairly standard configuration for a modern Rails
application.
This automation can save precious time by handling all the things that
need to be completed for a production app (configuring log file
rotations, nginx config files, puma service configuration, Linux package
installation, ruby installation, load balancer configuration, and database
setup). There are more services like Heroku, Engine Yard, and others
that do this as well, but in general, we found Elastic Beanstalk was on
par and had a good standard setup, especially when you factor in its
price. These are some of the specifics on the configuration:
 Support: AWS Elastic Beanstalk supports Java, .NET, PHP,
Node.js, Python, Ruby, Go, and Docker web applications.

 Rails Servers: For Rails, you can run either a Passenger or a Puma
application stack — we’ve only used Puma so far, and the servers
will be configured by Elastic Beanstalk to run a Puma process (with
Nginx used as a reverse proxy in front of it) and a reasonable server
configuration for log files, directory security, and user security.

 SQL server: This is configured through Amazon RDS, but at its heart is an EC2 server with a database running on it. We've used both Postgres and MySQL.

 AWS Elastic Load Balancer: Running and with the correct configuration for the Rails servers.

 Security: New AWS security groups and policies are created, along
with permissions for these services to securely talk to each other. All
the servers are configured so they can only talk to and have
permissions for what they need. For example, your Rails servers have
just one port open specifically for the load balancers, and nothing can
talk to your DB server, except for your Rails servers. This is fantastic,
because it can be hard to do correctly on your own.

 Default Configuration: ENV variables on the Rails servers that securely set secrets needed for Rails to run — like database endpoint, username, and password. This is also helpful, because figuring out a
username, and password. This is also helpful, because figuring out a
secure way to do this on your own can be a pain and is easy to get
wrong.

 Custom Configuration: An easy and secure way (either in the Elastic Beanstalk UI or through the AWS command line tools) to set custom ENV variables. For example, you can set your Mailchimp account and password and have it accessible to your running Rails code as an ENV variable.

 Monitoring: Basic monitoring of your servers through CloudWatch.

 Deployment: Easy deployment of new versions through the EB CLI. Once configured, you run `eb deploy` from the root of your git repository, and the deploy just works. This was also easy to integrate with Codeship, a Continuous Integration service that we used.
Elastic Beanstalk’s automated configuration helps avoid mistakes that
happen from missing small details when you try to DIY. These are great
boilerplate specs that you typically look for when using a service like
this because they can be pretty tricky to get right on your own.
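For context, a typical EB CLI workflow looks roughly like the following sketch (application, environment, platform, and variable names are placeholders, not the exact project described here):

eb init my-rails-app --platform ruby --region us-east-1   # one-time project setup
eb create production-env                                  # provision the environment
eb setenv MAILCHIMP_API_KEY=xxxx SECRET_KEY_BASE=yyyy     # custom ENV variables
eb deploy                                                 # deploy the current git revision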
Powerful Customization
With Elastic Beanstalk, it’s all under your control. Everything that is
created is just an AWS service — so you can look at EC2, see the new
instances, and ssh into them. You can update your database config file.
You can update the security group for all of the machines, so that, for
example, the entire application is only accessible from your office IP
address.
While at some level this is a similar service to Heroku, it also gives you
more low-level access and control. Customization is more complicated,
but much more flexible and powerful than using Heroku. For example,
when we wanted to add sidekiq, it wasn’t straightforward (but possible)
— that’s another post for another time.
Price and Flexibility
Elastic Beanstalk's price and flexibility are great. The platform itself costs nothing, so there is no extra charge on top of the AWS services you're using. Also, because you can pick your instance size, as well as easily add more front-end servers to the load balancer, you can easily match your server needs to your service load. Elastic Beanstalk even has auto-scaling functionality built in, which we never used, but it would be a great way to save money for larger applications by only bringing up extra servers when needed. Overall, our costs on an application with a similar size and scope that we built during the same timeframe were 400% higher on Heroku vs. Elastic Beanstalk. Although each application is different, it's a good ballpark comparison.
Drawbacks
Some of the biggest pains with Elastic Beanstalk include unreliable
deployments, lack of transparency and documentation around stack and
application upgrades, and an overall lack of clear documentation.
Unreliable Deployment
We do a lot of deployments — we have a continuous integration setup
via Codeship, so with every commit (or merged pull request), if tests
pass, we deploy a new version. We practice small, incremental changes
and strive to be as nimble as possible. Some days, we might deploy 10
times.
Over the last year or so of using Elastic Beanstalk, our deploys have
failed five or six times. When the failure happens, we get no indication
why, and further deployments will fail as well. On the positive side, this
didn’t result in downtime for us. We simply couldn’t deploy, and if we
tried again it would fail.
Each time, we needed to troubleshoot and fix it on our own. We found and tried multiple solutions, such as terminating the instance that had the deployment issue and letting Elastic Beanstalk recover. Sometimes, we could SSH into the stuck machine and kill a process that was part of the eb deploy, and the machine would recover. But overall, we didn't know what failed, and it's never a good thing to not be sure that your machine is in a good state.

Considering we have done over 1,000 deployments, this isn't a high failure rate. It never hit us at a critical time, but what if this happened
when we were trying to do a hotfix for a performance issue that was
crippling our site? Or, what if we had larger sites with more than two or
three front-end servers? Would this increase our deployment failure
rate? For the two applications we have done, we decided that the risk of
this happening was small and that it didn’t warrant switching to a new
service. For some applications this would not be an acceptable risk.
Deployment Speed
Deployments would take five minutes at least, and sometimes stretch to
15, for a site with just two front-ends. With more servers, deployments
could take even longer. This might not seem like much, but we have
set up other Rails environments where deployment can be done in one or
two minutes. And this can be critical if you are trying to be responsive in
real-time.
Attempts have been made to improve the Elastic Beanstalk deployment
process, and a good summary to start with is this post from HE:labs. We
may try some things from there in the future.
Stack Upgrades
Elastic Beanstalk comes out with new stack versions all the time — but
we have zero information on what has changed. No release notes, no
blog post, not even a forum post. Sometimes, it’s obvious — the version
of Ruby or Puma will change. But other times, it’s just an upgrade.
We’ve done several upgrades, and sometimes it goes smoothly, and
sometimes it takes a week.
Old Application Versions
Another thing we learned is to occasionally delete old application
versions. With every deploy, Elastic Beanstalk archives the old
application version in an S3 bucket. However, if there are 500 old
versions, further deploys fail. You can delete them through the Elastic Beanstalk UI, but this caught us off guard multiple times. Although this
seems like a small problem, I really don’t like it when deployments fail.
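One way to prune old versions, sketched with the AWS CLI (the application name and version label are hypothetical placeholders):

aws elasticbeanstalk delete-application-version --application-name my-app --version-label v-20240101-1 --delete-source-bundle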
All of these problems are an indication of Elastic Beanstalk’s general
lack of transparency. We had to figure out a lot of these issues on our
own and through blog posts and internet searches. On the plus side, we
had complete transparency into EC2 instances, database, and other
services created, so we were free to learn on our own. And while stack
upgrades and failed deployments are the most clear moments of pain, in
general they were indicators of the types of things that you have to learn
on your own.
Summary
Elastic Beanstalk helps us to easily deploy updates to our Rails
application while also leveraging Amazon’s powerful infrastructure.
Enhancing our deployment process with containers — like Docker —
will add even more versatility. Thanks to the fine-grain control offered
by Elastic Beanstalk we get to choose technologies that work best for us.
Ultimately, we found the most helpful thing about Elastic Beanstalk to
be that its automation features let us easily deploy updates to our Rails
application. While it’s certainly not a perfect tool, if you’re looking to
reduce system operations and just focus on what you’re developing,
Elastic Beanstalk is a solid choice.

Adding a custom domain for the AWS Elastic
Beanstalk application using Route 53.

The main objective of this article is to deploy a simple web application to the AWS cloud platform. Our plan is to deploy the application to Elastic Beanstalk.
Prerequisites to Proceed Further
1. Valid node application

 We will be uploading a Node application to Elastic Beanstalk in this demo.

 Other deployment platforms are also available with Elastic Beanstalk, but this guide is explicitly focused on a Node application.
If you don't have a Node application available, you can access the following GitHub project: https://github.com/Venn1991/node-typescript-boilerplate-mongoose-Public

2. Valid AWS account


Of course, you need to have a valid AWS account available
Deploying Applications to Elastic Beanstalk:
Let’s first look at the application we need to deploy. We have a simple
Node project that renders a sample user page. Given below is
the package.json file for the sample project.

{
"name": "node-typescript-boilerplate",
"version": "1.0.0",
"description": "",
"main": "src/index.ts",
"scripts": {
"watch": "tsc -w",
"dev": "nodemon dist/index.js",
"start": "node dist/index.js",
"build": "tsc",
"build:lib": "tsc"
},
"keywords": [],
"author": "",
"license": "ISC",
"dependencies": {
"@types/jsonwebtoken": "^8.5.8",
"aws-sdk": "^2.1167.0",
"bcrypt": "^5.0.1",
"chalk": "^4.1.2",
"dotenv": "^16.0.1",
"express": "^4.18.1",
"express-validator": "^6.14.2",
"joi": "^17.6.0",
"jsonwebtoken": "^8.5.1",
"lodash": "^4.17.21",
"moment": "^2.29.3",
"mongoose": "^6.4.4",
"multer": "^1.4.5-lts.1",
"multer-s3": "^3.0.1",
"nodemailer": "^6.7.5",
"ts-node": "^10.8.2"
},
"devDependencies": {
"@types/aws-sdk": "^2.7.0",
"@types/bcrypt": "^5.0.0",
"@types/express": "^4.17.13",
"@types/lodash": "^4.14.182",
"@types/multer-s3": "^3.0.0",
"typescript": "^4.7.4"
}
}
In the package file specified above, we need to ensure that the scripts tag
contains a start script. This script will be used to bootstrap the
application on the Elastic Beanstalk server. In the sample code, we have
configured the start script as node dist/index.js.
Our app is ready, so let's create the Elastic Beanstalk app.

3. How to Create the Elastic Beanstalk App


Step 1: Configure Your Environment
 The first step is to go to the AWS Management Console and select
Elastic Beanstalk from the Services menu. Press the ‘Create
Application’ button.
 Select the Web server environment and give a name to your app.

Getting started with AWS Elastic Beanstalk

Choose Managed platform in "Platform type", and Node.js in "Platform", and leave the rest as it is.
Then choose Upload your code in the "Application code" section and
upload the zip file.

Then set the version label to 1 and choose Single instance in the
"Presets" section and click Next.
Note: Prefer High availability for the production environment.

Step 2: Configure service access
In this section, it is necessary to set up IAM roles. We must create two
IAM roles, one for Elastic Beanstalk and one for EC2
For the service role, select Create and use new service role. It'll
automatically create and provide the required permissions
In order to SSH into your EC2 instance via a terminal, create a key pair and select it. Skip this step if you do not wish to log onto EC2.
Create an IAM role with the following permissions and add the role to
the ‘EC2 instance profile’ and proceed next.
 AWSElasticBeanstalkWebTier
 AWSElasticBeanstalkWorkerTier
 AWSElasticBeanstalkMulticontainerDocker

Step 3: Set up networking, database, and tags
I'm going to skip this step because I'm using MongoDB (via Mongoose), so I don't need to do this step.
Step 4: Configure instance traffic and scaling
It’s not necessary to make any changes here unless you need them. If
you’re creating this sample app, leave the fields with their default values.
An Amazon Linux machine will be created by Elastic Beanstalk by
default.

Step 5: Configure updates, monitoring, and logging
Choose Basic in "Health reporting" and uncheck Managed updates
activation.

Add your environment variables and click Next.

In the end, examine all your configurations and proceed with the next
step.

Now you can see why I spent hours on this process in the first place.
Whenever I made a mistake, I had to wait about 10 to 15 minutes to
check the result and redo all the steps above if anything went wrong.
Elastic Beanstalk will definitely test your patience, so be calm and relaxed.
When everything is finished, the health will turn green and a domain
URL will be generated.

The following page will appear when you open the URL if you used my
example repo.

That's all! We have successfully deployed our application on AWS Elastic Beanstalk. Our pipeline enables us to make changes to our application continuously and return to old versions when necessary.
Don’t hesitate to explore the dashboard of your newly deployed
application.

Add your domain to Route 53


Go to Route 53, where you can either buy a new domain or register a domain name that has already been acquired from an external provider.

Enter your domain name in the “Domain Name” field. The name of the
custom domain you want to add (e.g., example.com) should be given
here.

Once the hosted zone is established, you will be directed to the records
management page. DNS records need to be added to point to the
resources you want to associate with your custom domain. A records (for
IPv4 addresses), AAAA records (for IPv6 addresses), CNAME records
(for aliases), MX records (for mail servers), etc. are all common record
types.

3. Add Name Servers (NS) at your domain provider.
To point to the set of nameservers on AWS, custom NS records will be
added to GoDaddy next.
Click on My Products on the top toolbar. The DNS button on the
justingerber.com domain should be clicked next.

Depending on your requirements, you can choose between GoDaddy’s
default nameservers or custom nameservers. Here are the two options:
 Default Nameservers: If you’re using GoDaddy’s default
nameservers, you will typically see an option to choose “GoDaddy
Nameservers” or “Default Nameservers.” Select this option if you
want to use GoDaddy’s own nameservers for your domain.
 Custom Nameservers: If you have your own nameservers (provided
by a hosting provider, for example), you’ll want to choose the option
to enter custom nameservers. Enter the nameserver addresses
provided by your hosting provider.

Save your changes after entering the nameserver information. This might involve clicking a 'Save' button or a similar action. It's possible
that your changes won’t take effect immediately and may take some
time to propagate across the internet.
4. Create a Certificate Manager for SSL/TLS
The AWS Certificate Manager is a service that allows for the provision,
management, and deployment of public and private (SSL/TLS)
certificates in AWS services.

Fill out the necessary information in the Certificate Manager form. Make sure to use '*.domainName.com' when adding a name.
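If you prefer the CLI, the same request can be sketched as follows (the domain is a placeholder; run it in the Region of your Elastic Beanstalk load balancer):

aws acm request-certificate --domain-name "*.example.com" --validation-method DNS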

After creating the certification, you will have a new screen that contains
the necessary details related to your domain and certification. Click on
Create records in Route 53, which adds the DNS validation records needed to issue the SSL certificate for your domain.

Press the Create Records button. This adds a CNAME record to Route 53.

Return to the Certificate Manager service by navigating back.


Wait until the status is issued. This can take several minutes. Please do
not proceed until the status is Issued.

Create a custom domain record in Route 53 and point it at the Elastic Beanstalk environment.
5. Create a type A record in Route 53 in your custom domain's hosted zone.
Create a record by clicking on the “Create record” button.

Please provide the necessary information. I am using "api" as the record name. Make sure that A is selected in the record type drop-down list.

 Make sure the alias switch is activated.
 In the first drop-down list under Route traffic, select Alias to Elastic Beanstalk environment.
 In the second drop-down list under Route traffic, select the Region, for example Asia Pacific (Mumbai) [ap-south-1]; you may select any Region according to where your environment runs.
 Ensure that the third drop-down list under Route traffic is set to your Elastic Beanstalk environment.
 Press the Create records button.

Make sure that the "A" and "CNAME" records are present in Route 53 for the weather application.

6. Adding SSL in your Elastic Beanstalk.
Under the Environment Name column, click the Weather-test-app-dev
app that was added.

Afterwards, select the configuration option on the left menu.

Click the edit button under the Instance traffic and scaling. Continue
scrolling until you come across Listeners.

Click on the configuration option on the left menu. Click the edit button
located in the load balancer section. Click on the button that says Add
listener.

Set the Listener port to 443.


Set the Listener protocol to HTTPS.
Set the Instance port to 80.
Set the Instance protocol to HTTP.
Set the SSL certificate to the one created in step 4.

Click the Add button.
Important: Once this has been added, the changes have not been saved
yet.
Scroll to the bottom of the page and click the Apply button.

It will take some time for the changes to be reflected on your custom domain.

That's all! We succeeded in adding a custom domain to our AWS Elastic Beanstalk application using the Route 53 service. Don't hesitate to explore the
dashboard of your newly configured custom domain.

Using CloudWatch for Resource Monitoring, Create
CloudWatch Alarms and Dashboards

Introduction:
What’s Amazon CloudWatch?
Amazon CloudWatch is an AWS service for monitoring and managing
resources in the cloud. It ensures the reliability, availability, and
performance of AWS applications and infrastructure.

Key features of Amazon CloudWatch:


 Metrics and Alarms: Collect and monitor metrics, set alarms for
predefined thresholds.
 Dashboards: Create customized dashboards for a centralized view
of performance and health.
 Logs: Centralize logs, supporting aggregation, searching,
and filtering for efficient log management.
 Events: Trigger automated actions in response to changes in AWS
environment.
 Insights: Query log data interactively with CloudWatch Insights.
 Synthetics: Create canaries to monitor application availability and
latency.
 Container Insights: Specialized monitoring for containerized
applications.

Architecture Diagram:

Task Steps:
Step 1:
Sign in to AWS Management Console
On the AWS sign-in page, enter your credentials to log in to your AWS account and click on the Sign in button.
Once signed in to the AWS Management Console, set the default AWS Region to US East (N. Virginia), us-east-1.
Step 2:
Launching an EC2 Instance
In this step, we are going to launch an EC2 Instance that will be used for
checking various features in CloudWatch.
Make sure you are in the N.Virginia Region.
Navigate to EC2 by clicking on the Services menu in the top, then click
on EC2 in the Compute section.

3. Navigate to Instances from the left side menu and click on Launch
instances button.

4. Name : Enter MyEC2Server

5. For Amazon Machine Image (AMI): Select Amazon Linux and then select Amazon Linux 2 AMI from the drop-down.
Note: if there are two AMIs present for Amazon Linux 2, choose either of them.

6. For Instance Type: Select t2.micro

7. For Key pair: Select Create a new key pair Button
Key pair name: MyEC2Key
Key pair type: RSA
Private key file format: .pem

8. Select Create key pair Button.

9. In Network Settings Click on Edit button:


Auto-assign public IP: Enable
Select Create new Security group
Security group name : Enter MyEC2Server_SG
Description : Enter Security Group to allow traffic to EC2

To add SSH :
Choose Type: Select SSH
Source: Select Anywhere

10. Keep the rest of the settings as default and click on the Launch Instance button.
11. Select View all Instances to view the instance you created.
12. Launch Status: Your instance is now launching. Click on the instance
ID and wait for complete initialization of the instance (until the status
changes to running).

Note: Select the instance and Copy the Instance-ID and save it for later,
we need to search the metrics in CloudWatch based on this.

Step 3 :
SSH into the EC2 Instance and install the necessary software
Follow the instructions below to SSH into your EC2 instance.
Once the instance is launched, select the EC2 Instance Connect option and click on the Connect button. (Keep everything else as default.)

A new tab will open in the browser where you can execute the CLI
Commands.

2. Once you are logged into the EC2 instance, switch to root user.
sudo su
3. Update :
yum update -y

4. Stress Tool: The Amazon Linux 2 AMI does not have the stress tool installed by default, so we will need to install some packages:
sudo amazon-linux-extras install epel -y
yum install stress -y
5. Stress tool will be used for simulating EC2 metrics. Once we create
the CloudWatch Alarm, we shall come back to SSH and
trigger CPUUtilization using it.
Step 4:
Create SNS Topic
In this step, we are going to create an SNS topic.
Make sure you are in the N.Virginia Region.
Navigate to Simple Notification Service by clicking on the Services
menu available under the Application Integration section.

3. Click on Topics in the left panel and then click on Create topic button.

4. Under Details:
Type: Select Standard
Name: Enter MyServerMonitor

Display name: Enter MyServerMonitor

5. Leave other options as default and click on Create topic button. A SNS
topic will be created.

Step 5:
Subscribe to an SNS Topic
Once SNS topic is created, click on SNS topic MyServerMonitor.
Click on Create subscription button.

3. Under Details:
Protocol : Select Email
Endpoint : Enter your email address

Note: Make sure you give a valid email address, as this is where your notifications will be delivered.
4. You will receive a subscription confirmation to your email address

5. Click on Confirm subscription.

6. Your email address is now subscribed to the SNS topic MyServerMonitor.
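The same topic and subscription can be sketched with the AWS CLI (the account ID and email address are hypothetical placeholders):

aws sns create-topic --name MyServerMonitor
aws sns subscribe --topic-arn arn:aws:sns:us-east-1:123456789012:MyServerMonitor --protocol email --notification-endpoint you@example.com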

Step 6:
Using CloudWatch to Check EC2 CPU Utilization Metrics in
CloudWatch Metrics
Navigate to CloudWatch by clicking on the Services menu available
under the Management & Governance section.

2. Click on All metrics under Metrics in the Left Panel.
3. You should be able to see EC2 under All metrics. If EC2 is not visible, please wait for 5–10 minutes; CloudWatch usually takes around 5–10 minutes after the creation of an EC2 instance to start fetching metric details.

4. Click on EC2. Select Per-Instance Metrics.

5. Here you can see various metrics. Select the CPUUtilization metric to
see the graph.

6. Now at the top of the screen, you can see the CPU Utilization graph
(which is at zero since we have not stressed the CPU yet).

Step 7:
Create CloudWatch Alarm
CloudWatch alarms are used to watch a single CloudWatch metric or the
result of a math expression based on CloudWatch metrics.
Click on In alarms under Alarms in the left panel of the CloudWatch
dashboard.

2. Click on Create alarm available on the top right corner.


3. In the Specify metric and conditions page:

Click on Select metric. It will open the Select Metrics page.

Scroll down and Select EC2.

Select Per-Instance Metrics

Enter your EC2 Instance-ID in the search bar to get metrics for
MyEC2Server
Choose the CPU Utilization metric.
Click on Select metric button.

4. Now, configure the alarm with the following details:
Under Metrics
Period: Select 1 Minute

Under Conditions
Threshold type: Choose Static
Whenever CPUUtilization is…: Choose Greater
than…: Enter 30
Leave other values as default and click on the Next button.

5. In Configure actions page:
Under Notification
Alarm state trigger: Choose In Alarm
Select an SNS topic: Choose Select an existing SNS topic
Send a notification to… : Choose MyServerMonitor SNS topic which
was created earlier.

Leave other fields as default. Click on Next button.


6. In the Add a description page, (under Name and Description):
Name: Enter the Name MyServerCPUUtilizationAlarm
Click on Next button.

7. A preview of the Alarm will be shown. Scroll down and click
on Create alarm button.
8. A new CloudWatch Alarm is now created.

Whenever the CPU utilization goes above 30% for more than 1 minute, an SNS notification will be triggered and you will receive an email.
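An equivalent alarm can be sketched with the AWS CLI (the instance ID and SNS topic ARN are hypothetical placeholders):

aws cloudwatch put-metric-alarm --alarm-name MyServerCPUUtilizationAlarm --namespace AWS/EC2 --metric-name CPUUtilization --dimensions Name=InstanceId,Value=i-0123456789abcdef0 --statistic Average --period 60 --evaluation-periods 1 --threshold 30 --comparison-operator GreaterThanThreshold --alarm-actions arn:aws:sns:us-east-1:123456789012:MyServerMonitor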

Step 8:
Testing CloudWatch Alarm by Stressing CPU Utilization
SSH back into the EC2 instance — MyEC2Server.
The stress tool has already been installed. Let’s run a command to
increase the CPU Utilization manually.
sudo stress --cpu 10 -v --timeout 400s

3. This command stresses the CPU with the processes created by the stress tool (which we triggered manually). It will run for 400 seconds (6 minutes and 40 seconds), and CPU utilization should remain very near 100% for that amount of time.

4. Open another Terminal on your local machine and SSH back in EC2
instance — MyEC2Server.
5. Run this command to see the CPU utilization if you are a MAC or
Linux User. For Windows User, you can navigate to Task manager.
top

6. You can now see that %Cpu(s) is 100. By running this stress
command, we have manually increased the CPU utilization of the EC2
Instance.
7. After 400 Seconds, the %Cpu will reduce back to 0.

Step 9 :
Checking For an Email from the SNS Topic
Navigate to your mailbox and refresh it. You should see a new email
notification for MyServerCPUUtilizationAlarm.

2. We can see that mail we received contains details about our


CloudWatch Alarm,(name of the alarm, when it was triggered, etc.).

Step 10:
Checking the CloudWatch Alarm Graph
Navigate back to CloudWatch page, Click on Alarms.
Click on MyServerCPUUtilizationAlarm.
On the Graph, you can see places where CPUUtilization has gone above
the 30% threshold.

4. We can trigger CPUUtilization multiple times to see the spike on the
graph.
5. You have successfully triggered a CloudWatch alarm for CPUUtilization.

Step 11:
Create a CloudWatch Dashboard
We can create a simple Cloudwatch dashboard to see
the CPUUtilization and various other metric widgets.
Click on Dashboard in the left panel of the CloudWatch page.
Click on Create dashboard button.

Dashboard name: Enter MyEC2ServerDashboard

Click on Create dashboard


Add widget: Select Line Graph.

Click on Next button.


Select Metrics. Click on Next button.
On the next page, Choose EC2 under the Metrics tab. Choose Per-
Instance Metrics.
In the search bar, enter your EC2 Instance ID. Select CPUUtilization.

Click on Create Widget button.

3. Depending on how many times you triggered the stress command, you
will see different spikes in the timeline.
4. Now click on the Save button.

5. You can also add multiple Widgets to the same Dashboard by clicking
on Add widget button.

Exploring AWS CloudTrail: Auditing and Monitoring
AWS API Activity

AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. It provides
event history of your AWS account activity, including actions taken
through the AWS Management Console, AWS SDKs, command line
tools, and other AWS services.

What is CloudTrail?
CloudTrail continuously monitors and logs account activity across all
AWS services, including actions taken by a user, role, or AWS service.
The recorded information includes the identity of the API caller, the time
of the API call, the source IP address of the API caller, the request
parameters, and the response elements returned by the AWS service.

Why Use CloudTrail?

Here are some key reasons to use CloudTrail:

 Audit Compliance: CloudTrail logs provide detailed records of all API calls, which can be used to audit compliance with regulatory standards like HIPAA and PCI.

 Security Analysis: The API call logs can be analyzed to detect anomalous activity and unauthorized access and to determine security issues.

 Operational Issues: The activity history can help troubleshoot operational issues by pinpointing when an issue began and what actions were taken.

 Resource Changes: You can identify what changes were made to AWS resources by viewing CloudTrail events.

CloudTrail Log Files


CloudTrail log files contain the history of API calls made on your
account. These log files are stored in Amazon S3 buckets that you
specify. You can define S3 buckets per region or use the same bucket for
all regions.
The log files capture API activity from all Regions and are delivered
every 5 minutes. You can easily search and analyze the logs using
Amazon Athena, Amazon Elasticsearch, and other tools.

CloudTrail Events
CloudTrail categorizes events into two types:

 Management events: Provide information about management operations that are performed on resources in your AWS account. These include operations like creating, modifying, and deleting resources.

 Data events: Provide information about resource operations performed on or in a resource. These include operations like Amazon S3 object-level API activity.
You can choose to log both management and data events or just
management events. Data events allow more granular visibility into
resource access.

Enabling CloudTrail
Enabling CloudTrail is simple and can be done in a few
steps:
 Sign into the AWS Management Console and open the CloudTrail
console.
 Get started by creating a new trail and specify a name.
 Choose whether to log management and/or data events.
 Specify an existing S3 bucket or create a new one where logs will be
stored.
 Click Create to finish enabling CloudTrail.
 Once enabled, CloudTrail will begin recording events and delivering
log files to the designated S3 bucket. You can customize trails further
by adding tags, configuring log file validation, logging to CloudWatch
Logs, and more.

Use Cases
Here are some common use cases for CloudTrail:
 User Activity Monitoring: Review which users and accounts are
performing actions across services.

 Service Usage Optimization: Analyze usage patterns to identify opportunities to reduce costs.

 Security Forensics: Investigate unusual activity when a security incident occurs by reviewing relevant events.

 Regulatory Compliance: Meet compliance requirements that mandate detailed activity logging and audit trails.
CloudTrail provides a simple way to get visibility into account activity
by recording API calls made across AWS. The event history and logs can
be used for auditing, security analysis, troubleshooting, and more.
Businesses of all sizes can benefit from enabling CloudTrail to gain
insight into how their AWS resources are being accessed and modified.
Tutorial
AWS CloudTrail is a service that records AWS API calls for your
account and delivers log files to you. The recorded information includes
the identity of the API caller, the time of the API call, the source IP
address of the API caller, the request parameters, and the response
elements returned by the AWS service.
In this tutorial, we will walk through how to enable CloudTrail, view
and analyze the log files, and leverage CloudTrail logs for auditing and
security.

Prerequisites
Before starting, you should have:
 An AWS account
 Basic understanding of AWS services
 An S3 bucket to store the CloudTrail logs

Enabling CloudTrail
Let’s start by enabling CloudTrail across all Regions:

 Go to the CloudTrail console in the AWS Management Console.


 Click “Trails” in the left sidebar and then “Create trail”.
 Enter a name for the trail such as “CloudTrail-AllRegions”.

 Under Storage location, create or select an existing S3 bucket.
 For log file encryption, select AWS KMS to encrypt the logs.
 Click “Create” to enable the trail.
 CloudTrail will now begin recording events and sending log files to
the designated S3 bucket.

Viewing CloudTrail Log Files


The log files can be viewed in the S3 bucket or analyzed using Athena,
Elasticsearch, or other tools. Let’s take a look at the logs:

 Go to the S3 console and open the bucket storing the CloudTrail logs.
 Open one of the log files and inspect the JSON content.
 You will see API call details like source IP, user agent, resource
affected, and parameters.
The logs provide a comprehensive audit trail of all API activity across
services.

Using CloudTrail Insights


CloudTrail Insights detects unusual activity by continuously analyzing
event patterns. Let’s enable it:

 From the CloudTrail console, go to “Trails” and select the trail.


 Under “Insights Events”, enable insights.
 In “Insights summary”, you can see detected anomalies.
 Click on events to see the anomalous activity details.
 Insights makes it easy to identify potential security issues.
In this tutorial, we enabled CloudTrail across all Regions, viewed the
generated log files, and enabled CloudTrail Insights. The event history
and anomaly detection allow for auditing, operational analysis, security
monitoring, and more. Be sure to leverage CloudTrail logs to gain
visibility into your AWS account activity.

Common AWS CLI Commands for CloudTrail

Here are some common AWS CLI commands for working with AWS
CloudTrail:
Create CloudTrail trail

Describe CloudTrail trail

Start CloudTrail logging

Stop CloudTrail logging

List CloudTrail events

Get CloudTrail log files

Delete CloudTrail trail
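The corresponding commands, consolidated here as a sketch (trail and bucket names are hypothetical placeholders):

# Create CloudTrail trail
aws cloudtrail create-trail --name my-trail --s3-bucket-name my-cloudtrail-logs --is-multi-region-trail

# Describe CloudTrail trail
aws cloudtrail describe-trails --trail-name-list my-trail

# Start CloudTrail logging
aws cloudtrail start-logging --name my-trail

# Stop CloudTrail logging
aws cloudtrail stop-logging --name my-trail

# List recent CloudTrail events
aws cloudtrail lookup-events --max-results 10

# Get CloudTrail log files delivered to the trail's S3 bucket
aws s3 cp s3://my-cloudtrail-logs/AWSLogs/ ./cloudtrail-logs --recursive

# Delete CloudTrail trail
aws cloudtrail delete-trail --name my-trail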

Additional CloudTrail CLI commands


There are many additional CloudTrail CLI commands available:

 update-trail — Updates settings for a trail


 list-tags — Lists tags for a trail
 add-tags — Adds tags to a trail

 remove-tags — Removes tags from a trail
 list-public-keys — Lists public keys for log file validation
 get-trail-status — Returns status of CloudTrail logging
 list-trails — Lists trails in the account

Final Words
AWS CloudTrail provides a simple yet powerful way to gain visibility
into activity across your AWS account. By recording API calls made to
various AWS services, CloudTrail delivers detailed audit logs that can be
analyzed for security, compliance, and operational purposes. This tutorial
guided you through enabling CloudTrail across all Regions, inspecting
the generated log files, and leveraging CloudTrail Insights to detect
unusual activity. With CloudTrail activated, you now have
comprehensive visibility into changes, user activity, and resource access
within your AWS environment. Be sure to consult the CloudTrail logs
regularly for auditing, monitoring AWS usage, troubleshooting issues,
and investigating security incidents. We encourage you to explore the
other capabilities of CloudTrail such as log file encryption, log
validation, data event logging, and integrating logs with other AWS
services. CloudTrail is a key component of the AWS shared
responsibility model, enabling you to monitor the activity within your
account and respond appropriately.

AWS Cloud Trail


Moving your complex resources and workloads to the cloud can make it
challenging for your organization to analyze and understand everything
in your AWS environment. AWS CloudTrail is a management service
provided by AWS that enables governance, compliance, operational auditing, and risk auditing of your AWS Account. AWS CloudTrail
provides a comprehensive record history so you can easily see who
made changes, where they made the changes, and when the changes
were made. AWS audit logs provide a wealth of information on every activity within your AWS environments.
With AWS CloudTrail, you can search and track all account activities to
monitor user changes, compliance, error rates, and risks.
The capabilities of CloudTrail are essential to simplifying your AWS
environment troubleshooting and letting you identify areas that need
improvements.

In this tutorial, we'll explore using AWS CloudTrail to monitor every activity and track user changes on our AWS account.
Features of CloudTrail
 Multi-Regional: AWS CloudTrail allows the user to make trails
from any part of the world, and you can enable this functionality from
the actions tab.

 Event History: Event history is a tab in AWS CloudTrail that lets the user see what's happening in CloudTrail and in all the services (S3, Lambda, DynamoDB) integrated with CloudTrail.

 File Encryption: File encryption is done by AWS KMS, the key management service that allows you to encrypt the logs created from your environment to protect your log files.

 File Integrity: File integrity validation checks whether any of the log files have been tampered with or corrupted. If there is any form of corruption in a log file, its integrity check will fail.

Getting Started
In your AWS Management Console, search and click on AWS
CloudTrail.

 Create a New Trail by clicking on Create Trail.

 Choose your Trail attributes. Enter your Trail name and storage
location (select an existing S3 bucket or create a new S3 bucket).
Enable log file encryption and log file validation. This ensures your log files are encrypted and their integrity can be verified.

 When you’re done configuring your Trail attributes, click on Next.
Next, choose your log events. In AWS CloudTrail, there are three types of events: Management events, Data events, and Insights events.
 Management events are free and can be viewed in the event history
tab for 90 days. Data events are not free to the user and cannot be
viewed in the event history tab. Insights events let you identify
unusual activity, errors, or user behavior in your account.

Only Management events are free for your workloads. Data and Insights
events will incur costs. In this tutorial, we’ll be using Management
Events.

 When you’re done configuring log events, click on Next, you’ll see
the overview and general details of your configuration, and click
on Create Trail.
 In your Trails dashboard, you’ll see the Trail you just created.

 Integrate other AWS resources with your trail to see how it works and
see different log events. For example, I'll upload a new file into my S3 bucket. Once I'm done uploading the file, I'll automatically see the events in CloudTrail.

 In your CloudTrail event history, you’ll see all your events and logs
from your S3 bucket.

 You’ll see your event records and referenced resources when you
click on them.

 You can also filter your event history based on AWS access key,
Event ID, Event Name, Event Source, Resource name, and user type.

 You'll see the PUT event under Event name for the S3 bucket we updated earlier.

 In your AWS S3 storage bucket, you'll see your CloudTrail log events in the AWSLogs folder.

 When you click on Cloud Trail, you can see the logs from each AWS
Region.

Conclusion
You can see how fast it is to enable and configure AWS CloudTrail on
your AWS resources and view log events in your Event History
dashboard. CloudTrail is a service that has the primary function to record
and track all AWS API requests made. These API calls can be
programmatic requests initiated by a user using an SDK, from the AWS
CLI, or within the AWS management console. With our Open-Source
workflows, you can automatically send an API request with our ops cli to
automatically enable logs and events into your AWS resources.

Route 53
We dive into Route 53, Amazon's highly scalable and reliable Domain
to manage your domain names and direct internet traffic efficiently.
Let’s explore the key concepts and functionalities of Route 53.

Introduction to Route 53 DNS service


Route 53 is a scalable and highly available Domain Name System (DNS)
web service offered by Amazon Web Services (AWS). It is named after
the TCP/IP port 53, where DNS requests are addressed. Route 53
effectively translates human-readable domain names (like example.com)
into IP addresses (like 192.0.2.1) that computers use to identify each
other on the internet.
Key Features of Route 53:

 Scalability: Route 53 is designed to handle large volumes of DNS queries without any degradation in performance. It can scale automatically to manage changes in traffic patterns and query loads.

 High Availability: Route 53 is built on the same infrastructure that powers other AWS services. It is distributed across multiple geographically diverse data centers, ensuring high availability and reliability of DNS resolution.
 Global Coverage: Route 53 has a global network of DNS servers
strategically located around the world. This ensures fast and reliable
DNS resolution for users accessing your applications from different
geographic regions.
 Integration with AWS Services: Route 53 seamlessly integrates with
other AWS services such as Elastic Load Balancing (ELB), Amazon
S3, Amazon EC2, and more. This allows you to easily map domain
names to your AWS resources and manage traffic routing efficiently.
 Advanced Routing Policies: Route 53 supports various routing
policies like simple routing, weighted routing, latency-based routing,
geolocation routing, and failover routing. These policies enable you
to implement sophisticated traffic management strategies based on
your specific requirements.
 Health Checks: Route 53 provides health checks to monitor the
health and availability of your resources. You can configure health
checks for endpoints like web servers, load balancers, and more.
Route 53 automatically routes traffic away from unhealthy endpoints,
helping you maintain high availability and reliability.

 DNS Failover: Route 53 offers DNS failover functionality, which automatically redirects traffic from a failed or unhealthy resource to a healthy one. This helps minimize downtime and ensures continuous availability of your applications.

Use Cases for Route 53:


 Hosting Websites: Route 53 can be used to host your website’s DNS
records, including mapping domain names to web servers and
configuring subdomains.
 Load Balancing: Route 53 works seamlessly with Elastic Load Balancers (ELB) to distribute incoming traffic across multiple EC2 instances or containers, ensuring optimal performance and fault tolerance.

 Disaster Recovery: Route 53's DNS failover feature can be used to implement disaster recovery strategies by automatically redirecting traffic to backup resources in case of primary resource failure.
 Global Applications: Route 53’s global coverage and latency-based
routing enable you to build and deploy applications that deliver low-
latency experiences to users worldwide.
 Hybrid Cloud Environments: Route 53 can be integrated with on-
premises infrastructure and hybrid cloud environments, allowing you
to manage DNS for both cloud-based and traditional resources from a
single interface.

Configuring DNS records and health checks:

Let's explore how to configure DNS records and health checks in Route 53.

 Configuring DNS Records:
DNS records in Route 53 define how domain names are mapped to
resources such as EC2 instances, S3 buckets, load balancers, and other
AWS services. Here are some common DNS record types and their
purposes:

 A Records (Address Record): Maps a domain name to the IPv4 address of the server hosting the domain. This is commonly used for pointing domain names to web servers or other infrastructure.

 AAAA Records (IPv6 Address Record): Similar to A records but used for mapping domain names to IPv6 addresses.

 CNAME Records (Canonical Name Record): Points a domain or subdomain to another domain's canonical name. This is often used for creating aliases for domains.

 Alias Records: Route 53-specific records that function similarly to CNAME records but with some additional benefits, such as support for zone apex mapping and automatic updating of IP addresses.

 MX Records (Mail Exchange Record): Specifies the mail exchange servers for the domain, allowing you to receive email for your domain.

Here's how you can configure DNS records in Route 53:

 Using the AWS Management Console: Navigate to the Route 53 console, select the hosted zone for your domain, and then create or edit DNS records using the interface provided.

 Using the AWS CLI: You can use the AWS CLI to manage Route 53 DNS records programmatically. Commands like 'aws route53 change-resource-record-sets' enable you to add, update, or delete DNS records in your hosted zones.

 Using AWS SDKs: AWS SDKs for various programming languages provide APIs for interacting with Route 53 programmatically, allowing you to automate DNS management tasks in your applications.

Configuring Health Checks:

Health checks in Route 53 allow you to monitor the health and availability of your resources, such as web servers, load balancers, and endpoints. Route 53 periodically sends health check requests to your resources and evaluates their responses to determine their health status. Here's how you can configure health checks in Route 53:
 Define Health Check Settings: Specify the endpoint or resource you want to monitor, along with the protocol (HTTP, HTTPS, or TCP), port, and other relevant settings.

 Set Thresholds and Intervals: Configure the frequency and thresholds for health checks, including the number of consecutive failed checks required to consider a resource unhealthy, and the interval between checks.

 Configure Health Checkers: Choose the regions from which Route 53 health checkers will send requests to your resources. Distributing health checkers across multiple regions helps ensure accurate monitoring and failover capabilities.

 Associate Health Checks with DNS Records: Associate health checks with the DNS records that route traffic to your resources. Route 53 automatically routes traffic away from unhealthy resources based on the results of health checks.

By configuring health checks in Route 53, you can ensure the high availability and reliability of your applications by automatically redirecting traffic away from unhealthy resources and minimizing downtime.

Implementing routing policies and latency-based routing:

Choosing the right routing policy allows you to optimize traffic distribution and improve the performance of your applications. Let's delve into the details.

Routing Policies in Route 53:

Route 53 supports several routing policies, each designed to meet specific requirements for traffic management and failover scenarios:

1. Simple Routing Policy:
 This is the most basic routing policy where you associate a single
DNS record with a single resource. When a DNS query is received,
Route 53 responds with the IP address associated with the DNS
record.
 Useful for directing traffic to a single resource, such as a web server
or a load balancer.

2. Weighted Routing Policy:

 With weighted routing, you can distribute traffic across multiple resources based on assigned weights.
 For example, you might allocate 70% of traffic to one resource and 30% to another to perform A/B testing or gradually shift traffic during deployments.

3. Latency-Based Routing Policy:

 Latency-based routing directs traffic to the resource with the lowest network latency based on the user's geographical location.
 Route 53 measures latency from multiple locations worldwide and directs traffic to the resource that provides the best performance for each user.
 Ideal for global applications where minimizing latency is crucial for user experience.

4. Failover Routing Policy:

 Failover routing is used for creating active-passive failover configurations. You designate one resource as primary and another as standby.
 Route 53 automatically redirects traffic to the standby resource if the primary resource becomes unavailable.
 Commonly used for disaster recovery scenarios.

5. Geolocation Routing Policy:
 Geolocation routing allows you to route traffic based on the
geographic location of the user.
 You can define specific routing policies for different regions or
countries, ensuring users are directed to the closest or most
appropriate resources.

Latency-Based Routing:
Latency-based routing is particularly powerful for optimizing the
performance of globally distributed applications. Here’s how it works:

1. Route 53 Health Checks:
 Route 53 continually monitors the health and performance of your resources using health checks.
 This ensures that only healthy and responsive resources are considered when calculating latency.

2. Latency Measurements:
 Route 53 measures the latency between end users and your resources from multiple AWS regions.
 It uses this information to determine the optimal resource to which traffic should be directed based on the lowest latency.

3. Traffic Distribution:
 When Route 53 receives a DNS query, it evaluates the latency to each resource and directs the query to the resource with the lowest latency for that particular user.
 This ensures that users are automatically routed to the resource that offers the best performance from their location.

Implementation:
To implement latency-based routing in Route 53:
1. Create Resource Records:
Define the DNS records for your resources (e.g., EC2 instances, ELB
endpoints) in your Route 53 hosted zone.

2. Enable Latency-Based Routing:
 In the Route 53 console, create a new record set and select "Latency" as the routing policy.
 Specify the regions where your resources are located and associate each region with the corresponding DNS records.

3. Health Checks and Monitoring:
 Ensure that health checks are configured for your resources to maintain high availability and reliability.
 Monitor latency and resource health through the Route 53 console or CloudWatch metrics to identify any performance issues.
Below is an example of a code snippet demonstrating how to interact with AWS Route 53 programmatically using the Python SDK (boto3), covering creating DNS records, configuring a health check, and implementing latency-based routing.
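A hedged sketch of such a snippet is shown below; the hosted zone ID, domain names, IP addresses, and regions are placeholders you would replace with your own values.

import boto3

route53 = boto3.client("route53")
HOSTED_ZONE_ID = "Z1234567890ABC"   # placeholder hosted zone ID

# 1. Create a health check against a hypothetical endpoint.
health_check = route53.create_health_check(
    CallerReference="example-hc-001",          # must be unique per request
    HealthCheckConfig={
        "Type": "HTTP",
        "FullyQualifiedDomainName": "app.example.com",
        "Port": 80,
        "ResourcePath": "/health",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)
health_check_id = health_check["HealthCheck"]["Id"]

# 2. Create two latency-based A records, one per region; the first is
#    associated with the health check. Route 53 answers each query with
#    the lowest-latency healthy record.
route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "us-east-1",
                    "Region": "us-east-1",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "192.0.2.10"}],
                    "HealthCheckId": health_check_id,
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "eu-west-1",
                    "Region": "eu-west-1",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "198.51.100.20"}],
                },
            },
        ]
    },
)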

Conclusion:
Route 53 is a powerful tool for managing DNS and routing traffic effectively within AWS and beyond. Understanding its features and configurations is essential for building scalable and reliable web applications.
CloudFront in AWS

CloudFront
1. Content Delivery Network (CDN): CloudFront is a CDN service provided by AWS. CDNs help deliver content (like web pages, images, videos) to users globally with low latency by caching content at edge locations.

2. Edge Locations: These are server clusters located in various parts of the world. CloudFront uses these edge locations to cache and deliver content to users with lower latency.

3. Origin Server: This is where your original, un-cached content is stored. It could be an Amazon S3 bucket, an EC2 instance, or even an on-premises server.

4. Distribution: A CloudFront distribution is the configuration specifying the origin server, the edge locations for caching, and other settings. You can create two types of distributions: Web and RTMP.

5. Web Distribution: Used for distributing websites, including static and dynamic content.

6. RTMP Distribution: Designed for streaming media files using Adobe Media Server and the Real-Time Messaging Protocol (RTMP).

7. Cache Behavior: Defines how CloudFront handles requests and responses between users and the origin.
8. Security: CloudFront provides several security features, including the ability to restrict access to content, use SSL/TLS for secure connections, and integrate with AWS WAF for additional security.

9. Logging and Monitoring: CloudFront provides logs and real-time monitoring through Amazon CloudWatch, helping you track user requests and system performance.

10. Cost Management: Pricing is based on data transfer out of CloudFront to end users, requests, and data transfer between AWS Regions. Utilizing features like caching and compression can help manage costs.
Uses of CloudFront
1. Create a Distribution: Set up a new CloudFront distribution in the AWS Management Console.

2. Configure Origins: Specify the origin server from which CloudFront fetches the content.

3. Configure Behavior: Define cache behaviors, including how CloudFront handles various types of content.

4. Set Security Measures: Implement security features like SSL/TLS and access control.

5. Configure DNS: Map your domain to the CloudFront distribution using a CNAME or alias record.

6. Testing and Optimization: Test your distribution to ensure proper functionality and consider optimizing settings based on performance requirements.
CloudFront is a powerful tool for optimizing content delivery and
enhancing the performance of your web applications globally.
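As an illustration of steps 1 and 2 above, here is a minimal, hedged boto3 sketch of creating a web distribution; the bucket domain name is a placeholder, and a production set-up would normally add an origin access control, an ACM certificate, alternate domain names, and logging.

import time
import boto3

cloudfront = boto3.client("cloudfront")

response = cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),   # any unique string per request
        "Comment": "Demo distribution for a static S3 website",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": "my-s3-origin",
                    "DomainName": "my-bucket.s3.amazonaws.com",  # placeholder bucket
                    "S3OriginConfig": {"OriginAccessIdentity": ""},
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "my-s3-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            "ForwardedValues": {"QueryString": False, "Cookies": {"Forward": "none"}},
            "MinTTL": 0,
        },
    }
)

print("Distribution domain:", response["Distribution"]["DomainName"])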

AWS ACM
Introduction:
In the rapidly evolving landscape of web security, securing your website
with SSL/TLS certificates has become paramount. Amazon Web
Services (AWS) provides a robust solution for certificate management
through AWS Certificate Manager (ACM). In this blog post, we’ll delve
into the key features of AWS ACM, its benefits, and how it simplifies
the process of obtaining, managing, and deploying SSL/TLS certificates.
Additionally, we’ll explore the concepts of Public and Private Certificate
Authorities (CAs) and how they contribute to the security ecosystem.

Understanding AWS ACM:


AWS Certificate Manager (ACM) is a fully managed service that makes
it seamless to provision, manage, and deploy SSL/TLS certificates for
your applications on AWS. ACM takes the complexity out of certificate
management by automating the process of certificate renewal,
validation, and deployment, allowing you to focus on building and
scaling your applications.

Key Features of AWS ACM:


 Automatic Certificate Renewal: ACM automates the renewal
process, ensuring that your certificates are always up-to-date and
eliminating the risk of expiration-related disruptions. This feature is
particularly beneficial for organizations managing a large number of
certificates.

 Integrated with AWS Services: ACM seamlessly integrates
with other AWS services, such as Elastic Load Balancer (ELB),
CloudFront, and API Gateway. This integration simplifies the process
of associating certificates with these services, reducing the time and
effort required for deployment.

 Global Coverage: ACM supports global deployments with certificates that can be used in multiple AWS Regions. This is especially useful for businesses with a global presence, ensuring consistent security across different geographic locations.

 Certificate Validation: ACM handles the validation of domain ownership automatically. This streamlines the process of obtaining certificates, saving users from the hassle of manual validation steps.
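As a short illustration (not taken from the ACM console walkthrough), requesting a public certificate with DNS validation can also be done programmatically; a minimal boto3 sketch with a placeholder domain:

import boto3

acm = boto3.client("acm", region_name="us-east-1")  # us-east-1 is required for CloudFront

response = acm.request_certificate(
    DomainName="example.com",                       # placeholder domain
    ValidationMethod="DNS",
    SubjectAlternativeNames=["www.example.com"],
)
print("Certificate ARN:", response["CertificateArn"])

# describe_certificate exposes the CNAME record ACM expects for DNS validation.
details = acm.describe_certificate(CertificateArn=response["CertificateArn"])
print(details["Certificate"]["DomainValidationOptions"])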

Public and Private Certificate Authorities (CAs):


Public Certificate Authorities (CAs): Public CAs, such as Let’s
Encrypt or Sectigo, are entities that issue SSL/TLS certificates to the
public. These certificates are widely recognized by browsers, making
them suitable for securing websites that need to establish trust with a
broad audience.

Private Certificate Authorities (CAs): Private CAs, on the other
hand, are used within a specific organization or network. They are ideal
for internal communication where the trust is established within a closed
environment. AWS ACM supports private CAs, allowing organizations
to manage their internal certificates securely.

Benefits of Using AWS ACM:


 Simplified Management: ACM simplifies the traditionally
complex process of certificate management. With a few clicks in the
AWS Management Console or through API calls, you can request,
renew, and deploy certificates effortlessly.

 Enhanced Security: SSL/TLS certificates play a crucial role in securing data in transit. ACM ensures that your certificates are always valid and up to date, reducing the risk of security breaches due to expired certificates.

 Cost-Efficiency: As a fully managed service, ACM eliminates the need for manual intervention in certificate management, saving time and reducing operational costs. Moreover, public certificates issued through ACM are offered at no additional cost, making it a cost-effective solution.

Conclusion:
AWS ACM emerges as a powerful tool in the realm of certificate
management, offering a seamless and secure experience for users. By
automating the certificate lifecycle, integrating with various AWS
services, and providing global coverage, ACM empowers businesses to
prioritize application development while ensuring robust security.
Embrace the simplicity and efficiency of AWS ACM, whether you’re
utilizing public or private CAs, to fortify your web applications with the
strength of SSL/TLS encryption.

Streamlining Mobile App Development with AWS
Amplify Console

In today's fast-paced digital landscape, mobile app development can be
a daunting task. However, with the right tools and strategies, it can
become a seamless and efficient process. This is where AWS Amplify
Console comes into play. Combining the power of AWS Amplify and
AWS Mobile Hub, the Amplify Console offers a comprehensive
solution for building, testing, and deploying mobile apps.
Benefits of using AWS Amplify Console for mobile app
development
With AWS Amplify Console, developers can streamline the entire app
development lifecycle. From code changes to continuous deployment
and hosting, everything is managed in one place. This not only saves
time and effort but also ensures a smooth and consistent user experience.
One of the key benefits of using AWS Amplify Console is its integration
with AWS Amplify. Amplify is a set of tools and services that simplifies
the process of building scalable and secure mobile applications. By
leveraging Amplify Console, developers can easily connect their app to
the cloud, set up authentication and authorization, and access other AWS
services such as databases and storage.

Another advantage of using AWS Amplify Console is its scalability and
flexibility. With its automatic branch deployments feature, developers
can easily create new branches for different features or bug fixes and
have them automatically deployed to separate environments. This allows
for easy experimentation and iteration, ensuring that the app
development process remains agile and efficient.
Furthermore, AWS Amplify Console provides a simple and intuitive
user interface that makes it easy for developers to manage their app’s
deployment and hosting. With just a few clicks, developers can
configure their app’s settings, set up custom domains, and monitor the
deployment process. This eliminates the need for complex manual
configurations and reduces the risk of human error.
Key features of AWS Amplify Console
AWS Amplify Console is packed with powerful features that make it an
essential tool for mobile app development. Here are some of its key
features:
 Continuous deployment: AWS Amplify Console allows
developers to set up automated deployments for their app. Whenever
changes are pushed to the repository, Amplify Console automatically
builds and deploys the updated app, ensuring a smooth deployment
process.
 Environment variables: With Amplify Console, developers can
easily manage environment variables for different stages of their
app’s development. This allows for easy configuration of variables
such as API endpoints, database credentials, and third-party
integrations.
 Branch deployments: Amplify Console enables developers to
create separate branches for different features or bug fixes. Each
branch can have its own environment and deployment settings,
allowing for easy testing and experimentation.

 Custom domains: Developers can easily set up custom domains
for their app with Amplify Console. This gives the app a professional
and branded look, enhancing user trust and engagement.
 Automatic SSL certificates: Amplify Console automatically
provisions and manages SSL certificates for custom domains,
ensuring secure communication between the app and its users.
Setting up AWS Amplify Console for mobile app
development
Getting started with AWS Amplify Console is quick and easy. Here’s a
step-by-step guide to setting it up for your mobile app development:
 Create an AWS account: If you don’t already have one, sign up
for an AWS account at aws.amazon.com. This will give you access to
all the AWS services, including Amplify Console.
 Install the Amplify CLI: The Amplify CLI is a command-line
tool that helps you create and manage your app’s backend resources.
Install it by running the following command in your terminal: npm
install -g @aws-amplify/cli.
 Initialize your app: Navigate to your app’s root directory and run
the command amplify init. This will initialize your app with Amplify
and create a new Amplify environment.
 Connect your app to the cloud: Once your app is initialized,
you can start connecting it to the cloud. Use the Amplify CLI
commands to add backend services such as authentication, storage,
and databases.
 Configure Amplify Console: After setting up the backend, run
the command amplify console to open the Amplify Console in your
browser. Here, you can configure your app's deployment settings,
custom domains, and environment variables.
 Deploy your app: Finally, use the Amplify CLI command amplify
push to deploy your app to the Amplify Console. This will build your

app, create the necessary resources, and deploy it to the specified
environment.
Integrating AWS Amplify Console with your mobile app
development workflow
AWS Amplify Console seamlessly integrates with popular development
workflows, making it easy for developers to incorporate it into their
existing processes. Here are a few ways you can integrate Amplify
Console with your mobile app development workflow:
 Version control integration: Amplify Console supports
integration with popular version control systems like GitHub, GitLab,
and Bitbucket. This allows you to automatically build and deploy
your app whenever changes are pushed to your repository.
 Build hooks: Amplify Console provides build hooks that can be
used to trigger custom build scripts or external services. This enables
you to incorporate additional build steps or automated testing into
your app’s deployment pipeline.
 Webhooks: Amplify Console can also send webhooks to external
services, enabling you to trigger custom actions or notifications based
on the app’s deployment status. This can be useful for sending
notifications to team members or integrating with other tools in your
development workflow.
 API integration: Amplify Console provides a RESTful API that
allows you to programmatically manage your app’s deployments and
settings. This enables you to automate certain tasks or integrate
Amplify Console with other tools in your development workflow.
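As a small illustration of that programmatic access, the boto3 Amplify client exposes the same operations; the app ID and branch name below are placeholders.

import boto3

amplify = boto3.client("amplify")

# List the Amplify apps in the current region.
for app in amplify.list_apps()["apps"]:
    print(app["appId"], app["name"], app["defaultDomain"])

# Trigger a new release for a branch of a Git-connected app.
# (For drag-and-drop apps, deployments are created with create_deployment
# and start_deployment instead.)
amplify.start_job(
    appId="d1a2b3c4example",      # placeholder app ID
    branchName="main",            # placeholder branch
    jobType="RELEASE",
)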
Streamlining the deployment process with AWS Amplify
Console
One of the biggest challenges in mobile app development is the
deployment process. Traditional deployment methods often involve

manual configurations, complex build scripts, and potential human
errors. However, with AWS Amplify Console, deploying your app
becomes a breeze.
Amplify Console simplifies the deployment process by automating key
tasks and providing an intuitive user interface. Here’s how it streamlines
the deployment process:
 Continuous deployment: With Amplify Console, every code
change triggers an automated deployment. This means that as soon as
you push changes to your repository, Amplify Console automatically
builds and deploys the updated app. This eliminates the need for
manual deployments and reduces the risk of human error.
 Automatic branch deployments: Amplify Console allows you
to create separate branches for different features or bug fixes. Each
branch can have its own environment and deployment settings. This
enables you to test and iterate on new features without affecting the
main production environment.
 Preview deployments: Amplify Console provides a preview URL
for each deployment, allowing you to easily preview and test your
app before making it live. This is particularly useful for testing new
features or bug fixes in a controlled environment.
 Rollback feature: In case of any issues or bugs in a deployment,
Amplify Console allows you to easily rollback to a previous version
with just a few clicks. This ensures that you can quickly revert to a
stable version of your app without any downtime.
Optimizing mobile app performance with AWS Amplify
Console
Performance is a critical aspect of mobile app development. Users
expect apps to be fast, responsive, and reliable. AWS Amplify Console
provides several features and optimizations that can help you optimize
your app’s performance:

 Content delivery network (CDN): Amplify Console
automatically deploys your app to a global CDN, ensuring that your
app’s static assets are served from the closest edge location. This
reduces latency and improves the app’s overall performance.
 Automatic asset optimization: Amplify Console automatically
optimizes your app’s static assets, including images, CSS, and
JavaScript files. This reduces the file size of these assets, resulting in
faster load times and better user experience.
 GZIP compression: Amplify Console automatically enables GZIP
compression for your app’s assets, reducing the size of transferred
data and improving network performance.
 Cache control: Amplify Console allows you to configure cache
control headers for your app’s assets. This enables you to control how
long assets are cached by the user’s browser, reducing the number of
requests made to the server and improving performance.
Monitoring and troubleshooting mobile app development with AWS
Amplify Console
Monitoring and troubleshooting are essential aspects of mobile app
development. AWS Amplify Console provides several tools and features
that help you monitor and troubleshoot your app’s development process:
1. Deployment logs: Amplify Console provides detailed deployment
logs that allow you to track the progress of your app’s deployment.
These logs include information about build times, errors, and
warnings, enabling you to quickly identify and fix any issues.
2. Real-time metrics: Amplify Console provides real-time metrics for
your app’s deployments, including build times, deployment durations,
and success rates. These metrics help you monitor the performance of
your app’s deployment process and identify any bottlenecks or issues.
3. Alerts and notifications: Amplify Console allows you to set up alerts
and notifications for your app’s deployments. You can configure
alerts based on criteria such as deployment failures, long build times,

or high error rates. This enables you to proactively monitor your
app’s development process and take immediate action when
necessary.
4. Integration with AWS CloudWatch: Amplify Console integrates
seamlessly with AWS CloudWatch, allowing you to collect and
analyze logs, metrics, and events from your app’s deployments. This
provides deeper insights into your app’s performance and helps you
troubleshoot any issues.
Case studies: Success stories of mobile app development with AWS
Amplify Console
AWS Amplify Console has been used by numerous organizations to
streamline their mobile app development process. Here are a couple of
success stories:
1. Company A: Company A, a fast-growing startup, used AWS
Amplify Console to build and deploy their mobile app. By leveraging
Amplify Console’s continuous deployment and automatic branch
deployments features, they were able to rapidly iterate on new
features and bug fixes. This allowed them to launch their app in
record time and achieve a high level of user satisfaction.
2. Company B: Company B, a large enterprise, used AWS Amplify
Console to simplify their complex mobile app development
workflow. With Amplify Console’s environment variables and
integration with version control systems, they were able to automate
their deployment process and reduce the risk of human error. This
resulted in significant time and cost savings for the company.
These success stories highlight the effectiveness of AWS Amplify
Console in streamlining mobile app development and enabling
organizations to deliver high-quality apps in a timely manner.
Conclusion: Streamlining mobile app development with AWS Amplify
Console

In conclusion, AWS Amplify Console is revolutionizing the way mobile
apps are developed. Its powerful features and seamless integration with
other AWS services make it a must-have tool for any app developer.
With Amplify Console, developers can streamline the entire app
development lifecycle, from code changes to continuous deployment and
hosting. Its scalability and flexibility enable easy adaptation and
iteration of apps, making it ideal for both small startups and large
enterprises.
Furthermore, Amplify Console’s optimization and monitoring features
help developers optimize their app’s performance and troubleshoot any
issues. With real-time metrics,
detailed deployment logs, and integration with AWS CloudWatch,
developers can ensure that their app is performing at its best.
So why not give AWS Amplify Console a try and experience the
convenience and efficiency it brings to your mobile app development
process? Streamline your workflow, deliver high-quality apps, and stay
ahead in the fast-paced digital landscape.

AWS Lambda
Serverless Architecture:
The advancement of technology has generated new needs. The
increasing demand, load and costs have accelerated the development of
new methods. In addition, the development of cloud technology and
innovations have brought new services and concepts into our lives. One
of these concepts is serverless architecture.
While developing, our primary goal is to create a structure that will
solve a problem. However, in doing so, we are also forced to consider
other things. We have to think about the server configuration where the
application will run, as well as authorization, load balancing, and many
other aspects. Serverless architecture (another term used in place of
“serverless” is “Functions as a Service”) is a design approach that
enables you to build and run applications and services without the need
to manage the infrastructure.
Serverless architecture is not a way of assuming that servers are no
longer required or that applications will not run on servers. Instead, it is
a pattern or approach that helps us think less about servers in the context
of software development and management. This approach allows us to
eliminate the need to worry about issues related to scaling, load
balancing, server configurations, error management, deployment, and
runtime. With serverless architecture, we are essentially outsourcing one
of the most challenging aspects of running a software in production,
which is managing operational tasks.
Every technology has its own drawbacks, and serverless is no exception.
Here are the main situations in which it is generally not recommended to
use serverless architecture:

 A "cold start" is a phenomenon that can occur when a serverless
platform is required to initiate internal resources in order to handle a
function request. This process can take some time and may result in
slower performance for the initial request. To avoid this issue, it is

possible to keep the function in an active state by sending periodic
requests to it. This helps ensure that the necessary resources are
already initialized and ready to handle incoming requests efficiently.
 Long-running workloads may be more expensive to run on serverless
platforms compared to using a dedicated server, which can be more
efficient in these cases. When deciding between these options, it is
crucial to carefully consider the specific needs and requirements of
the workload.
 Testing and debugging code in a serverless computing environment
can be challenging due to the nature of these cloud systems and the
lack of back-end visibility for developers.
 As a serverless application that relies on external vendors for back-
end services, it is natural to have a certain level of reliance on those
vendors. However, if you decide to switch vendors at any point, it can
be challenging to reconfigure your serverless architecture to
accommodate the new vendor’s features and workflows.
 Due to time limitations imposed by the vendor (for example, AWS
allows up to 15 minutes), it is not possible to perform long-running
tasks.

Figure 1 - Reference Architecture: Serverless with Microservices

What benefits does serverless provide?
 Serverless computing runs on servers managed by cloud service
providers, eliminating the need for users to manage the underlying
infrastructure themselves. This allows organizations to focus on

developing and deploying their applications without worrying about
server management.
 Serverless architecture allows for applications to be scaled
automatically. This means that as demand for the application
increases, the necessary resources will be automatically allocated to
meet that demand, without the need for manual intervention. This can
provide a high degree of flexibility and scalability for organizations
using serverless architectures for their applications.
 Serverless architectures enable the creation of development
environments that are easier to set up, which can lead to faster
delivery and more rapid deployment of applications.
 When using serverless services, you only pay for the specific
instances or invocations you use rather than being charged for idle
servers or virtual machines that you may not be utilizing.

Use Cases
 Serverless computing is well suited for tasks that are triggered by an
event. If you have an event that needs to be run based on some
trigger, serverless architecture can be an effective solution. An
example of this is when a user signs up for a service on a website and
a welcome email is automatically sent in response.
 Serverless computing allows for the creation of RESTful APIs that
can be easily scaled as needed.
Serverless computing is a relatively new technology; it has advantages
and disadvantages. However, it is not a suitable solution for every
situation, and it is important to carefully consider all infrastructure
requirements before deciding to use it as your execution model. If you
currently host small functions on your own servers or virtual servers, it
may be beneficial for you to consider the benefits of using a serverless
computing solution.
There are a variety of platforms that offer a range of services for
serverless architecture. One such platform is Amazon Web Services,
which offers a number of serverless services. AWS provides AWS

Lambda, AWS Fargate for computing; Amazon EventBridge, AWS Step
Functions, Amazon SQS, Amazon SNS, Amazon API Gateway, AWS
AppSync for application integration; and Amazon S3, Amazon EFS,
Amazon DynamoDB, Amazon RDS Proxy, Amazon Aurora Serverless,
Amazon Redshift Serverless, Amazon Neptune serverless for data store.
I will now provide an explanation of one of the most widely utilized and
practical services among these options, which is AWS Lambda.

AWS Lambda
AWS Lambda is an event-driven cloud service from Amazon Web
Services (AWS) that enables users to execute their own code, known as
“functions,” without the need to worry about the underlying
infrastructure. These functions can be written in various programming
languages and runtimes supported by AWS Lambda and be uploaded to
the service for execution.
AWS Lambda automatically manages the scaling and allocation of
resources for these functions, providing a convenient and efficient way to
run code in the cloud.
AWS Lambda functions can be used to perform a wide range of
computing tasks, such as serving web pages, processing streams of data,
calling APIs, and integrating with other AWS services. These functions
are designed to be flexible and can be used for a variety of purposes,
making them a powerful tool for cloud computing.

What is the process behind AWS Lambda’s functionality?


Lambda functions are run in their own isolated containers. When a new
function is created, it is packaged into a container by Lambda and then
run on a cluster of machines managed by AWS, which can serve
multiple tenants. Before the functions begin execution, the required
RAM and CPU capacity is allocated to each function’s container. Once
the functions have completed execution, the RAM allocated at the start
is multiplied by the duration of the function’s execution. Customers are

charged based on the amount of allocated memory and the run time
required for the function to complete.
AWS manages the entire infrastructure layer of AWS Lambda, so
customers do not have visibility into how the system operates. However,
this also means that customers do not need to worry about tasks, such as
updating the underlying machines or managing network contention, as
these responsibilities are handled by AWS.

Figure 2 - Reference Architecture: Image File Processing

In this reference architecture, we can use AWS Lambda to create thumbnails automatically. Lambda will be triggered by S3 bucket events, and will then generate the thumbnail.

What are the capabilities of Lambda?


AWS Lambda provides native support for Java, Go, PowerShell,
Node.js, C#, Python, and Ruby, and offers a runtime API that enables
the use of additional programming languages to write functions.
We can create web applications, mobile back-ends, and IoT back-ends
by combining Lambda with other serverless components. AWS Lambda
can be utilized to perform data transformation tasks, such as validation,
filtering, sorting, or other processes, for every data change in a
DynamoDB table, and load the transformed data to another data store.
Scalable APIs: when building APIs using AWS Lambda, each execution of a Lambda function can serve a single HTTP request. The API's different components can be routed to different Lambda functions through Amazon API Gateway. AWS Lambda automatically scales individual functions based on demand, enabling different parts of the API to scale differently according to current usage levels. This enables cost-effective and flexible API set-ups.
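To make this concrete, here is a minimal sketch (not from the book's later examples) of a Python handler behind an API Gateway proxy integration, where one HTTP request maps to one invocation and the handler returns a status code and a JSON body.

import json

def lambda_handler(event, context):
    # API Gateway (proxy integration) passes the HTTP request in `event`.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }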

AWS Lambda has a few restrictions:
 A Lambda function will end execution after 15 minutes. It is not
possible to alter this limit. If your function typically takes more
than 15 minutes to run, AWS Lambda may not be a suitable option
for your task.
 The amount of memory that can be allocated to a Lambda function ranges from 128 MB to 10,240 MB.
 The size of the zipped Lambda deployment package should not exceed 50 MB, and the unzipped version should not be larger than 250 MB.
 By default, concurrent executions for all AWS Lambda functions within a single AWS account are limited to 1,000.

Serverless Web App Development Made Easy: A Complete Guide with AWS Amplify, DynamoDB, Lambda and API Gateway
Get ready to dive into the world of serverless web application
development on AWS. In this series, we’ll guide you through the
process of creating a dynamic web app that calculates the area of a
rectangle based on user-provided length and width values. We’ll
leverage the power of AWS Amplify for web hosting, AWS Lambda
functions for real-time calculations, DynamoDB for storing and
retrieving results, and API Gateway for seamless communication. By the
end of this journey, you’ll have the skills to build a responsive and
scalable solution that showcases the true potential of serverless
architecture. Let’s embark on this development adventure together!
Prerequisites
 Have an AWS account. If you don’t have one, sign up here and enjoy
the benefits of the Free-Tier Account
 Access to the project files: Amplify Web-app

Creating the Front-end


 Use the index.html file from the project files. Or simply open a text
editor and copy the following code into an index.html file. Note the
part with “YOUR API URL” as we will be filling this part with the
API URL later

<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<title>Rectangle</title>
<!-- Styling for the client UI -->
<style>
h1 {
color: #FFFFFF;
font-family: system-ui;
margin-left: 20px;
}
body {
background-color: #222629;
}

label {
color: #86C232;
font-family: system-ui;
font-size: 20px;
margin-left: 20px;
margin-top: 20px;
}
button {
background-color: #86C232;
border-color: #86C232;
color: #FFFFFF;
font-family: system-ui;
font-size: 20px;
font-weight: bold;
margin-left: 30px;
margin-top: 20px;
width: 140px;
}
input {
color: #222629;
font-family: system-ui;
font-size: 20px;
margin-left: 10px;
margin-top: 20px;
width: 100px;
}
</style>
<script>
// callAPI function that takes the length and width numbers as parameters
var callAPI = (length,width)=>{
// instantiate a headers object
var myHeaders = new Headers();
// add content type header to object
myHeaders.append("Content-Type", "application/json");
// using built-in JSON utility package, turn object to string and store in a variable
var raw = JSON.stringify({"length":length,"width":width});
// create a JSON object with parameters for API call and store in a variable
var requestOptions = {
method: 'POST',
headers: myHeaders,
body: raw,
redirect: 'follow'
};
// make API call with parameters and use promises to get response
fetch("YOUR API URL", requestOptions)
.then(response => response.text())
.then(result => alert(JSON.parse(result).body))
.catch(error => console.log('error', error));
}

</script>
</head>
<body>
<h1>AREA OF A RECTANGLE!</h1>
<form>
<label>Length:</label>
<input type="text" id="length">
<label>Width:</label>
<input type="text" id="width">
<!-- set button onClick method to call function we defined passing
input values as parameters -->
<button type="button"
onclick="callAPI(document.getElementById('length').value,document.getElementById('width').value)">CALCULATE</button>
</form>
</body>
</html>

2. The file should look like this when opened in a browser. It provides fields to input the length and width of a rectangle and a 'Calculate' button.

Hosting the App on AWS Amplify

 On your AWS console search box, search for 'Amplify' and click on the first option that appears
2. Click on 'GET STARTED'

3. Select 'Get Started' on the Amplify Hosting side

4. Select the source for your app files. They can be in a remote repository or local. We will use 'Deploy without Git provider' since our files are local. We also need to use a compressed folder with our files. Click on 'Continue'

5. Give the app a name and an environment name, choose the method as 'Drag and drop' and select the index.zip file (zip all the app files; in this case, it is only the index.html file). Click on 'Save and deploy'

6. Once the deployment is complete, click on the Domain to access your app
7. The app opens in the browser. (You might need to refresh the deployment page on Amplify.)

Creating a Lambda Function to do the Math

1. On the AWS console search bar, type 'Lambda' and select the Lambda service

2. Click on 'Create function'

3. Give the function a name and choose the runtime (the latest Python version), then scroll down and click on 'Create function'
4. Copy the following Lambda function onto your lambda_function.py
file. Please note the DynamoDB name. We will be using this name later
as we create the DB.

# import the JSON utility package
import json

# import the AWS SDK (for Python the package name is boto3)
import boto3

# import two helpers for dates and date formatting
from time import gmtime, strftime

# create a DynamoDB object using the AWS SDK
dynamodb = boto3.resource('dynamodb')

# use the DynamoDB object to select our table
table = dynamodb.Table('AreaDatabase')

# store the current time in a human-readable format in a variable
now = strftime("%a, %d %b %Y %H:%M:%S +0000", gmtime())

# define the handler function that the Lambda service will use as an entry point
def lambda_handler(event, context):
    # extract the two numbers from the Lambda service's event object
    Area = int(event['length']) * int(event['width'])

    # write the result and time to the DynamoDB table and save the response in a variable
    response = table.put_item(
        Item={
            'ID': str(Area),
            'LatestGreetingTime': now
        })

    # return a properly formatted JSON object
    return {
        'statusCode': 200,
        'body': json.dumps('Your result is ' + str(Area))
    }

5. Click on 'Deploy'

Create an API Gateway

1. On the AWS services search box, enter 'API' and select 'API Gateway' when it appears

2. In the list for 'Choose an API type', select 'Build' for 'REST API'
3. Choose the ‘REST’ protocol for the API, select ‘New API’ under
‘Create new API’ and give the API a name, then click on ‘Create API’

4. On the page that appears, select 'Resources' on the left panel. On the 'Actions' drop-down, select 'Create method'. Select 'POST' on the drop-down that appears, then click on the ✔. Select 'Lambda Function' as the Integration type and type the name of the Lambda function in the 'Lambda Function' box. Click on 'Save'
5. On the dialog box that appears to Add Permission to Lambda
Function, click ‘OK’

6. Select 'POST'. On the 'Actions' drop-down, click on 'Enable CORS', then click on 'Enable CORS and replace existing CORS headers' at the bottom right

7. On the 'Confirm method changes' box that appears, click on 'Yes, replace existing values'

8. Once all the checks are complete, click on 'Actions', then 'Deploy API'
9. Give the ‘Stage name’, then click ‘Deploy’

10. The Invoke URL is what you replace "YOUR API URL" with in the index.html file. Insert the URL, regenerate index.zip and re-upload it to Amplify

Invoke URL: https://yuavndwnn4.execute-api.us-east-1.amazonaws.com/dev

Setting up a Database on DynamoDB to store results

1. On the services search box, search for 'DynamoDB' and select the DynamoDB service

2. Click on 'Create table'

3. Give the table a name; for 'Partition key', input 'ID'. Leave the rest as default, scroll to the bottom and click on 'Create table'

4. Select the table name. Under the Overview tab, expand 'Additional info', then take note of the ARN. (A boto3 sketch of the same table creation appears below.)
arn:aws:dynamodb:us-east-1:494225556983:table/Area_table
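As an aside, the same table could also be created programmatically. A hedged boto3 sketch follows; note that the table name must match whatever name your Lambda function references (the walkthrough shows 'AreaDatabase' in the function code and 'Area_table' in the ARN, so pick one name and use it consistently).

import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="Area_table",                       # must match the name used by the Lambda function
    AttributeDefinitions=[{"AttributeName": "ID", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "ID", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",                # on-demand capacity, no throughput planning
)

# Wait until the table is ACTIVE, then print its ARN.
waiter = dynamodb.get_waiter("table_exists")
waiter.wait(TableName="Area_table")
print(dynamodb.describe_table(TableName="Area_table")["Table"]["TableArn"])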

5. Let's add permissions to our Lambda function to access DynamoDB. On the Lambda function window, select the 'Configuration' tab, then 'Permissions' on the left side panel and select the Role name.

6. A new tab opens in IAM and we can add permissions to the role. Click on 'Add permissions', then 'Create inline policy'
7. Select the JSON Tab and copy the following policy. Replace “YOUR-
TABLE-ARN” with the ARN of your table that we copied in step 4,
then click ‘Next’ at the bottom

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "dynamodb:PutItem",
        "dynamodb:DeleteItem",
        "dynamodb:GetItem",
        "dynamodb:Scan",
        "dynamodb:Query",
        "dynamodb:UpdateItem"
      ],
      "Resource": "YOUR-TABLE-ARN"
    }
  ]
}

8. On the 'Review and create' page, give the policy a name, then click on 'Create policy' at the bottom of the page

9. Now the Lambda function has permissions to write to the DB

Testing
1. Now that we are done, let's see what we have. Open the AWS Amplify domain. It should open our app.

2. Input values for the Length and Width and click on "Calculate". The solution should pop up on the screen. (Returned in the browser through API Gateway)
3. Yaaaay!!!!! And we are successful
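If you prefer to verify the backend without the browser, the following sketch POSTs directly to the API Gateway invoke URL using only the Python standard library; replace the URL with your own stage's invoke URL.

import json
import urllib.request

API_URL = "https://yuavndwnn4.execute-api.us-east-1.amazonaws.com/dev"  # your invoke URL

payload = json.dumps({"length": "5", "width": "4"}).encode("utf-8")
request = urllib.request.Request(
    API_URL,
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

with urllib.request.urlopen(request) as response:
    body = json.loads(response.read().decode("utf-8"))
    print(body["body"])   # prints the result message returned by the Lambda function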

Delete your Resources

Remember to delete your resources to avoid unnecessary charges:
 Delete the Amplify app
 Delete the DynamoDB table
 Delete the Lambda function
 Delete the API Gateway

Conclusion
In this comprehensive guide, we’ve embarked on an exciting journey
into the realm of serverless web application development on AWS.
We’ve built a dynamic web app that calculates the area of a rectangle
based on user-provided length and width values. Leveraging the power
of AWS Amplify for web hosting, AWS Lambda functions for real-time
calculations, DynamoDB for result storage, and API Gateway for
seamless communication, we’ve demonstrated the incredible potential of
serverless architecture.
Serverless EC2 Instance Scheduler for Company
Working Hours
Scenario:
In some companies, there is no need to run their EC2 instances 24/7; they
require instances to operate during specific time periods, such as
company working hours, from 8:00 AM in the morning to 5:00 PM in the
evening. To address this scenario, I will implement two Lambda
functions responsible for starting and stopping instances. These Lambda

functions will be triggered by two CloudWatch Events in the morning
and evening. This solution is fully serverless.

Steps:
Step 1: Creating the Instance
 Navigate to the EC2 console.
 Follow the outlined steps below.

Step 2: Creating the Policy
 Navigate to the IAM console.
 Click on "Policies" and then click on "Create policy".
 Select the service as EC2.
 Select the actions DescribeInstances and StartInstances.
5. Now we have created a policy for starting instances. We also need to create a policy for stopping the instances. This is because we are going to create two Lambda functions: one for starting and one for stopping the instances. Each function will have its own role, and we will attach these two policies to their respective roles.
6. Now we are going to repeat the same steps to create the stopping policy as well.
7. Everything is the same, except the actions, because we are going to stop the instance.
8. The actions are DescribeInstances and StopInstances.
9. Keep your policy name as "stop-ec2-instance".

Step 3: Creating the Lambda Functions
 Navigate to the Lambda console.
 Follow the outlined steps below.

Now again, go to the Lambda console and then test the code.
1. Now we have created the Lambda function for starting the instance.
2. We have to repeat the same steps again to create a Lambda function for stopping the instance; keep your Lambda function name as "Stop-EC2-demo".
3. The only changes we have to make are to replace the default code with the 'stop-ec2-instance.py' code (a sketch of which is shown below) and attach the policy we created for stopping instances to the role of this Lambda function.
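The 'stop-ec2-instance.py' code itself appears only in the screenshots; a minimal sketch of what such a handler might look like, assuming the instances to manage carry a hypothetical Schedule=office-hours tag:

import boto3

ec2 = boto3.client("ec2")

def lambda_handler(event, context):
    # Find running instances that carry the scheduling tag (tag name is an assumption).
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Schedule", "Values": ["office-hours"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]

    instance_ids = [
        instance["InstanceId"]
        for reservation in reservations
        for instance in reservation["Instances"]
    ]

    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)   # the start function would call start_instances()

    return {"stopped": instance_ids}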

 As demonstrated above, when I test my Python code, it runs successfully and stops the instance.

 Now, we are ready to proceed and create schedules for these functions.

Step 5: Creating the Schedules Using CloudWatch
 Navigate to the CloudWatch console.
 Follow the outlined steps below.

Note: Keep your rule name as "start-ec2-rule". (I mistakenly named mine 'role' in the screenshots; please do not name it 'role'.)
 We have now created a schedule for starting the instance every day at 8:00 AM.
 Next, we need to create a schedule for stopping instances.
 To create the schedule for stopping instances, follow the same steps as for the starting schedule with a few changes; keep your rule name as "stop-ec2-rule".
1. The changes include modifying the scheduled time and selecting the appropriate function.
2. We need to change the schedule time to 17:00 because the rule should trigger the stop function at 17:00 IST (5:00 PM).
3. We have to change the target function to Stop-EC2-demo.

Now, we have successfully created two schedules: one to start the instance every day at 8:00 AM and the other to stop the instance every day at 5:00 PM. The same schedules could also be created programmatically, as sketched below.
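A hedged boto3 sketch of creating the same two schedules without the console; the cron expressions are in UTC (shifted for IST), the Lambda ARNs are placeholders, and each function additionally needs a lambda:InvokeFunction permission for events.amazonaws.com.

import boto3

events = boto3.client("events")

START_LAMBDA_ARN = "arn:aws:lambda:us-east-1:111122223333:function:Start-EC2-demo"  # placeholder
STOP_LAMBDA_ARN = "arn:aws:lambda:us-east-1:111122223333:function:Stop-EC2-demo"    # placeholder

# 8:00 AM IST is 02:30 UTC; 5:00 PM IST is 11:30 UTC.
events.put_rule(Name="start-ec2-rule", ScheduleExpression="cron(30 2 * * ? *)")
events.put_targets(Rule="start-ec2-rule", Targets=[{"Id": "1", "Arn": START_LAMBDA_ARN}])

events.put_rule(Name="stop-ec2-rule", ScheduleExpression="cron(30 11 * * ? *)")
events.put_targets(Rule="stop-ec2-rule", Targets=[{"Id": "1", "Arn": STOP_LAMBDA_ARN}])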

Deploy Your First Web App on AWS with AWS Amplify, Lambda, DynamoDB and API Gateway
This guide is designed for beginners or developers with some cloud
experience who want to learn the fundamentals of building web
applications on the AWS cloud platform. We’ll walk you through
deploying a basic contact management system, introducing you to key
AWS services along the way.
In this project, as you can guess from the title, we will use AWS, which
stands for Amazon Web Services; an excellent cloud platform with
endless services for so many various use cases from training machine
learning models to hosting websites and applications.
Cloud computing provides on-demand access to computing resources
like servers, storage, and databases.

Serverless functions are a type of cloud computing service that allows you to run code without managing servers.
By the end of this tutorial, you’ll be able to:
 Deploy a static website to AWS Amplify.
 Create a serverless function using AWS Lambda.

 Build a REST API with API Gateway.
 Store data in a NoSQL database using DynamoDB.
 Manage permissions with IAM policies.
 Integrate your frontend code with the backend services.
I recommend you follow the tutorial one time and then try it by yourself
the second time. And before we begin, ensure you have an AWS
account. Sign up for a free tier account if you haven’t already.
Now let’s get started!

Step 1: Deploy the frontend code on AWS Amplify


 In this step, we will learn how to deploy static resources for our web
application using the AWS Amplify console.
 Basic web development knowledge will be helpful for this part. We
will create our HTML file with the CSS (style) and Javascript code
(functionality) embedded in it. I have left comments throughout to
explain what each part does.
Here is the code snippet of the page:
<!DOCTYPE html>
<html lang="en" dir="ltr">
<head>
<meta charset="UTF-8">
<title>Contact Management System</title>
<style>
body {
background-color: #202b3c;
display: flex; /* Centering the form */
justify-content: center; /* Centering the form */
align-items: center; /* Centering the form */
height: 100vh; /* Centering the form */
margin: 0; /* Removing default margin */
}
form {
display: flex;

flex-direction: column; /* Aligning form elements vertically */
align-items: center; /* Centering form elements horizontally */
background-color: #fff; /* Adding a white background to the form */
padding: 20px; /* Adding padding to the form */
border-radius: 8px; /* Adding border radius to the form */
}
label, button {
color: #FF9900;
font-family: Arial, Helvetica, sans-serif;
font-size: 20px;
margin: 10px 0; /* Adding margin between elements */
}
input {
color: #232F3E;
font-family: Arial, Helvetica, sans-serif;
font-size: 20px;
margin: 10px 0; /* Adding margin between elements */
width: 250px; /* Setting input width */
padding: 5px; /* Adding padding to input */
}
button {
background-color: #FF9900; /* Adding background color to button */
color: #fff; /* Changing button text color */
border: none; /* Removing button border */
padding: 10px 20px; /* Adding padding to button */
cursor: pointer; /* Changing cursor to pointer on hover */
}
h1{
color: #202b3c;
font-family: Arial, Helvetica, sans-serif;
}
</style>
<script>
// Define the function to call the API with the provided first name, last name, and phone number
let callAPI = (fname, lname, pnumber)=>{
// Create a new Headers object and set the 'Content-Type' to 'application/json'
let myHeaders = new Headers();

myHeaders.append("Content-Type", "application/json");

// Create the JSON string from the input values


let raw = JSON.stringify({"firstname": fname, "lastname": lname,
"phone_number": pnumber});

// Define the request options including method, headers, body, and redirect behavior
let requestOptions = {
method: 'POST', // Method type
headers: myHeaders, // Headers for the request
body: raw, // The body of the request containing the JSON string
redirect: 'follow' // Automatically follow redirects
};

// Use the fetch API to send the request to the specified URL
fetch("https://uvibtoen42.execute-api.us-east-1.amazonaws.com/web-app-stage", requestOptions) // Replace this URL with your actual API endpoint
.then(response => response.text()) // Parse the response as text
.then(result => alert(JSON.parse(result).message)) // Parse the result as JSON and alert the message
.catch(error => console.log('error', error)); // Log any errors to the console
}
</script>
</head>
<body>
<form>
<h1>Contact Management System</h1>
<label>First Name :</label>
<input type="text" id="fName">
<label>Last Name :</label>
<input type="text" id="lName">
<label>Phone Number :</label>
<input type="text" id="pNumber">
<button type="button"
onclick="callAPI(document.getElementById('fName').value,
document.getElementById('lName').value,
document.getElementById('pNumber').value)">Submit</button>
<!-- Button to submit user input without reloading the page -->

<!-- When clicked, it calls the callAPI function with values from the
input fields -->
</form>
</body>
</html>

There are multiple ways to upload our code to the Amplify console. For example, I like using Git and GitHub. To keep this article simple, I will show you how to do it directly with the drag-and-drop method in Amplify. To do this, we first have to compress our HTML file into a zip archive.
Now, make sure you're in the region closest to where you live; you can see the region name at the top right of the page, next to the account name. Then let's go to the AWS Amplify console. It will look something like this:

When we click “Get Started,” it will take us to the following screen (we
will go with Amplify Hosting on this screen):

You will start a manual deployment. Give your app a name, I’ll call it
“Contact Management System”, and ignore the environment name.
Then, drop the compressed index file and click Save and Deploy.

Amplify will deploy the code, and return a domain URL where we can
access the website.

Click on the link and you should see this:

Step 2: Create an AWS Lambda Serverless function
 We will create a serverless function using the AWS Lambda service in
this step. A Lambda function is a serverless function that executes
code in response to events. You don’t need to manage servers or
worry about scaling, making it a cost-effective solution for simple
tasks. To give you some idea, a good real-life analogy for serverless computing is a vending machine: it sends the request to the cloud and processes the job only when somebody starts using the machine.
 Let’s go to the Lambda service inside the AWS console. By the way,
make sure you are creating the function in the same region in which
you deployed the web application code in Amplify.
 Time to create a function. Give it a name (I'll call it "my-web-app-function"), and for the runtime I've chosen Python 3.12, but feel free to choose a language and version that you are more comfortable and familiar with.

After our lambda function is created, scroll down and you will see the
following screen:

Now, let's edit the Lambda function. Here is a function that extracts the first and last names from the event JSON input and returns a response dictionary; the body key stores a JSON-encoded greeting string. After editing, click Deploy to save my-web-app-function, and then click Test to create a test event.
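Since the code itself only appears as a screenshot, here is a minimal Python sketch of what such a handler might look like, assuming the same firstname and lastname keys the frontend sends in this project:

import json

def lambda_handler(event, context):
    # Extract the first and last names from the incoming event JSON
    name = event['firstname'] + ' ' + event['lastname']
    # Return a response dictionary; the body key holds a JSON-encoded greeting
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda, ' + name)
    }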

To configure a test event, give the event a name like “MyEventTest”,
modify the Event JSON attributes and save it.

Now click on the big blue test button so we can test the Lambda
function.

The execution result has the following elements:

 Test Event Name
 Response
 Function Logs
 Request ID
Step 3: Create Rest API with API Gateway
Now let’s go ahead and deploy our Lambda function to the Web
Application. We will use Amazon API Gateway to create a REST API
that will let us make requests from the web browser. API Gateway acts
as a bridge between your backend services (like Lambda functions) and
your frontend application. It allows you to create APIs that expose
functionality to your web app.
REST: Representational State Transfer.
API: Application Programming Interface.
Go to the Amazon API Gateway to create a new REST API.

On the API creation page, we have to give it a name, for example "Web App API", and choose the protocol type and endpoint type for the REST API (select Edge-optimized).

Now we have to create a POST method so click on Create method.

In the Create method page, select the method type as POST, the
integration type should be Lambda function, ensure the Region is the
same Region you’ve used to create the lambda function and select the
Lambda function we just created. Finish by clicking on Create method at
the bottom of the page.

Now we need to enable CORS, so select the / and then click enable
CORS

In the CORS settings, just tick the POST box and leave everything else
as default, then click save.

After enabling CORS headers, click on the orange Deploy API button.

A window will pop up, under stage select new stage and give the stage a
name, for example “web-app-stage”, then click deploy.

When you view the stage, there will be a URL named Invoke URL.
Make sure to copy that URL; we will use it to invoke our lambda
function in the final step of this project.
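If you want to sanity-check the invoke URL before wiring up the frontend, one quick way is to POST to it from Python. This is just a minimal sketch; the placeholder URL and the field names follow this project's conventions:

import json
import urllib.request

# Replace with your own API Gateway invoke URL
url = "https://YOUR-API-ID.execute-api.us-east-1.amazonaws.com/web-app-stage"
payload = json.dumps({"firstname": "Jane", "lastname": "Doe", "phone_number": "5551234"}).encode("utf-8")

request = urllib.request.Request(url, data=payload,
                                 headers={"Content-Type": "application/json"}, method="POST")
with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))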

Step 4: Create a DynamoDB table


In this step, we will create a data table in Amazon DynamoDB, another
AWS service. DynamoDB is a NoSQL database service that stores data
in key-value pairs. It’s highly scalable and flexible, making it suitable for
various applications. Click on the orange create table button.

Now we have to fill out some information about our table: give it the name "contact-management-system-table" and set the partition key to ID. Leave the rest as default and click Create table.
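If you prefer to script this step instead of using the console, a minimal boto3 sketch that creates an equivalent table might look like this (the on-demand billing mode is an assumption; the table and key names match the ones used above):

import boto3

dynamodb = boto3.client('dynamodb')

# Create a table with a single string partition key named ID
dynamodb.create_table(
    TableName='contact-management-system-table',
    AttributeDefinitions=[{'AttributeName': 'ID', 'AttributeType': 'S'}],
    KeySchema=[{'AttributeName': 'ID', 'KeyType': 'HASH'}],
    BillingMode='PAY_PER_REQUEST'  # on-demand capacity, so no throughput settings are needed
)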

Once the table is successfully created, click on it and a new window with the details of the table will open up. Expand the Additional info section and copy the Amazon Resource Name (ARN). We will use the ARN in the next step when creating IAM access policies.

Step 5: Set up IAM Policies and Permissions


 AWS IAM is one of the most basic and important things to set up, yet a lot of people neglect it. For improved security, it is always recommended to follow a least-privilege access model, which means not giving a user more access than needed. For example, even for this simple web application project, we have already worked with multiple AWS services: Amplify, Lambda, DynamoDB, and API Gateway. It's essential to understand how they communicate with each other and what kind of information they share.
 Now back to our project: we have to define an IAM policy that gives our Lambda function access to write and update data in the DynamoDB table.
 So go back to the AWS Lambda console, and click on the lambda
function we just created. Then go to the configuration tab, and on the
left menu click on Permissions. Under Execution role, you will see a
Role name.

Click on the link, which will take us to the permissions configuration settings of this IAM role. Now click on Add permissions, then Create inline policy.

Then click on JSON, delete what's in the Policy editor, and paste the following.

{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"dynamodb:PutItem",
"dynamodb:DeleteItem",
"dynamodb:GetItem",
"dynamodb:Scan",
"dynamodb:Query",
"dynamodb:UpdateItem"
],
"Resource": "YOUR-DB-TABLE-ARN"
}
]
}

Make sure to substitute "YOUR-DB-TABLE-ARN" with your real DynamoDB table ARN. Click Next, give the policy a name, like "lambda-dynamodb", and then click Create policy. This policy will allow our Lambda function to read, edit, delete, and update items in the DynamoDB data table.

Now close this window, go back to the Lambda function's Code tab, and update the Lambda function's Python code with the following.

import json
import boto3
from time import gmtime, strftime

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('contact-management-system-table')
now = strftime("%a, %d %b %Y %H:%M:%S +0000", gmtime())

def lambda_handler(event, context):
    # Build the item ID from the submitted first name, last name and phone number
    name = event['firstname'] + ' ' + event['lastname'] + ' ' + event['phone_number']
    # Write the item to the DynamoDB table along with the current time
    response = table.put_item(
        Item={
            'ID': name,
            'LatestGreetingTime': now
        })

    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda, ' + name)
    }

The response is returned in the format API Gateway expects (a statusCode and a JSON body). After making the changes, make sure to deploy the code. Once the deployment has finished, we can test the program by clicking on the blue Test button.

We can also check the results in the DynamoDB table. When we run the function, it updates the data in our table. So go to AWS DynamoDB, click on Explore items in the left nav bar, and click on your table. Here is the item written by the Lambda function:

Step 6: Update frontend code with Rest API


 Congrats on making it this far!
 In this final step, we will see everything we just built in action. We
will update the front-end to be able to invoke the REST API with the
help of our lambda function and receive data.
 First, go back to your index.html in your code editor. Find the fetch call inside the callAPI function and replace its URL with the invoke URL you copied from the API Gateway service under your REST API details. Once you've done that, save and compress the file again, like we did in step 1, and upload it to Amplify again using the console.

Click on the new link you got and let’s test it.

Our data table receives the POST request with the entered data. When the Submit button is clicked, the JavaScript callAPI function sends the form values to the REST API in JSON format, and API Gateway invokes the Lambda function, which writes the item to the table. You can find these steps inside the callAPI function.
You can find the items returned to my data table below:

Conclusion
You have created a simple web application using the AWS cloud platform. Cloud computing is growing rapidly and becoming an ever larger part of how new software and technologies are developed.
If you feel up for a challenge, next you could:
 Enhance the frontend design
 Add user authentication and authorization
 Set up monitoring and analytics dashboards
 Implement CI/CD pipelines to automate the build, test, and
deployment processes of your web application using services like
AWS CodePipeline, AWS CodeBuild, and AWS CodeDeploy.

How to Start and Stop an AWS EC2 Instance Automatically?

This text explains step by step how to automatically start and stop an EC2 instance in AWS using an AWS Lambda function and Amazon EventBridge.

We may not need the servers (EC2 instances) in AWS to run continuously. Running them only when needed and shutting them down when the work is completed prevents wasted resources and saves our budget.
This can be managed manually for servers that are used at irregular intervals or are not tied to a specific schedule. However, for servers that need to start and stop on a fixed schedule, we can automate the process using an AWS Lambda function and Amazon EventBridge. I describe the steps below.

Create AWS Lambda Function:


We go to the AWS Lambda service and click on the "Create function" button. Then we give the function a name and choose the language in which the function will be written, for example Python 3.8. In our example, the Lambda function is written in Python 3.8 and uses the AWS SDK for Python (the Boto3 library). We create the basic skeleton of the function by clicking on the "Create function" button.

AWS Create Function

We go inside the created function. We delete the default code under the
Code tab and paste the following code.

This script takes the instance_id as a parameter. If the instance_id is incorrect or missing, the warning 'instance_id parameter is missing' is returned. If the instance_id is correct, the process continues. An EC2 client is created and the status of the instance is checked. If the instance is already running, no action is taken and the warning "EC2 instance is already running" is returned. If the instance is not running, it is started.
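Since the function itself only appears as a screenshot, here is a minimal sketch consistent with the behaviour described above. The instance_id event key is an assumption; use the same name in the test event and the EventBridge payload.

import boto3

def lambda_handler(event, context):
    # The instance id is passed in as a parameter of the event
    instance_id = event.get('instance_id')
    if not instance_id:
        return {'statusCode': 400, 'body': 'instance_id parameter is missing'}

    # Create an EC2 client and check the current state of the instance
    ec2 = boto3.client('ec2')
    reservations = ec2.describe_instances(InstanceIds=[instance_id])['Reservations']
    state = reservations[0]['Instances'][0]['State']['Name']

    if state == 'running':
        return {'statusCode': 200, 'body': 'EC2 instance is already running'}

    # Start the instance if it is not running
    ec2.start_instances(InstanceIds=[instance_id])
    return {'statusCode': 200, 'body': 'Starting EC2 instance ' + instance_id}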
After pasting the code, we save it by pressing the "Deploy" button.

AWS Lambda Function Code

After deploying the code, we go to the "General configuration" menu under the "Configuration" tab and click the "Edit" button to change the default settings.

AWS Lambda Function Configuration

Under Basic settings, a description can optionally be added. Then we specify the amount of memory and ephemeral storage required; in this example, the minimum values are enough.

The timeout specifies the maximum running time of the function, measured from its start to its completion. When the timeout is exceeded, execution is stopped and the result is returned. In other words, this parameter determines how long the function has to finish its work.

Since the server can take a while to start, we should set this timeout to at least 5 minutes. The other properties are left at their defaults and saved.

AWS Lambda Function Basic Configuration

After Basic Settings, we go to the “Permission” menu under the
“Configuration” tab and go to the role settings by clicking the role that
will run the function.

AWS Lambda Function Permission Configuration

Attach Policy
The default policy does not authorise actions on EC2 instances. To provide this, we select the "Attach policies" option under the Add permissions button.

AWS Lambda Function Role Configuration

We find and select the AmazonEC2FullAccess policy and click the "Add permissions" button.

AWS Lambda Function Role Attach Policy

With the addition of the new policy, the permissions of the role will change.

AWS Lambda Function Role

Now Test Time


We need to test our lambda function before scheduling it. For this we go
back to the lambda function and click on the Test tab. Here we give a
name to the event and write the instance id we want to start in the “Event
JSON” code section.

You can find the Instance ID in the instance section under the EC2
dashboard.

AWS EC2 Instance

After typing the correct instance id to the instance_Id variable in the test
section, save it and start the test process by clicking the test button.

Our test was successful. Now we make sure by checking the status on
the ec2 dashboard.

AWS EC2 Instance

In this way, we have seen that our Lambda function works successfully.

Create Schedule
We will trigger the Lambda function that we have built by creating a schedule in Amazon EventBridge. For this, we go to the Amazon EventBridge service and click on the "Create schedule" button.

Amazon EventBridge

In the page that appears, we give a name to “Schedule name” and select
“Recurring schedule” and “Cron-based schedule” options.

Amazon EventBridge Schedule

For example, if we want our server to start at 09.00 every day, we fill in
the blanks as follows, answer the “Flexible time window” question as
off and click next.
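For reference, EventBridge Scheduler cron expressions use six fields (minutes, hours, day of month, month, day of week, year), so a daily 09:00 trigger would look something like cron(0 9 * * ? *). Keep in mind that the schedule's time zone setting determines whether 09:00 means UTC or your local time.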

Amazon EventBridge Cron-based Schedule

On the following page, we select the AWS Lambda function, which is the target API.

Amazon EventBridge Schedule with AWS Lambda

Then we select the Lambda Function that we have created and write the
instance id as a parameter in the Payload section, just as we wrote in the
test section of the Lambda Function, and click the next button.

Amazon EventBridge Schedule with AWS Lambda

On the following page, select NONE for the “Action after schedule
completion” question and disable the “Retry policy”.

Amazon EventBridge Schedule with AWS Lambda

Then select "Create new role for this schedule" for the "Execution role" question and click next.

Amazon event Bridge Schedule with AWS Lambda

In the window that appears, we check the details again, click on the "Create schedule" button and complete the process.

Amazon event Bridge Schedule with AWS Lambda

We can now see our schedule under Amazon EventBridge.

Amazon Event Bridge Schedule

Automatically Stop AWS EC2 Instance


We can also stop the server automatically, just like we started it. I won't go through the steps to stop the server automatically, because they are mostly the same as the starting process. The only difference is the code written inside the Lambda function, which I share below.
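Again, the original code is shown only as a screenshot; here is a minimal sketch of a stop function mirroring the start function above (same assumed instance_id event key):

import boto3

def lambda_handler(event, context):
    # The instance id is passed in as a parameter of the event
    instance_id = event.get('instance_id')
    if not instance_id:
        return {'statusCode': 400, 'body': 'instance_id parameter is missing'}

    # Create an EC2 client and check the current state of the instance
    ec2 = boto3.client('ec2')
    reservations = ec2.describe_instances(InstanceIds=[instance_id])['Reservations']
    state = reservations[0]['Instances'][0]['State']['Name']

    if state in ('stopped', 'stopping'):
        return {'statusCode': 200, 'body': 'EC2 instance is already stopped'}

    # Stop the instance if it is currently running
    ec2.stop_instances(InstanceIds=[instance_id])
    return {'statusCode': 200, 'body': 'Stopping EC2 instance ' + instance_id}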

AWS SNS — Simple Notification Service
What if you want to send one message to many receivers? One option is a direct integration: our create service needs to send an email, talk to the XYZ service, talk to the shipping service, and maybe talk to another SQS queue. We could integrate all these things together directly, but it would be quite difficult to maintain.
The other approach is to do something called Pub / Sub, or publish-
subscribe.
And so, our create service publishes data to our SNS topic, and our SNS
topic has many subscribers and all these subscribers get that data in real
time as a notification. So, it could be an email notification, text message,
shipping service, SQS queue. You’re basically able to send your
message once to an SNS topic and have many services receive it.

Basic Introduction:
An Amazon SNS topic is a logical access point which acts as a
communication channel. A topic lets you group multiple endpoints (such
as AWS Lambda, Amazon SQS, HTTP/S, or an email address). To
broadcast the messages of a message-producer system (for example, an
e-commerce website) working with multiple other services that require
its messages (for example, checkout and fulfillment systems), you can
create a topic for your producer system. The first and most common
Amazon SNS task is creating a topic

Each subscriber to the topic will get all the messages, and there is now a feature to filter messages per subscription. SNS supports up to 10,000,000 subscriptions per topic and a limit of 100,000 topics.
Subscribers can be:
 SQS
 HTTP/HTTPS (with delivery retries — how many times).
 Lambda
 Emails
 SMS messages
 Mobile Notification

Our SNS integrates with a lot of Amazon Services in AWS.


 With Amazon S3, we use it on bucket events.
 For our ASG Auto-Scaling Notifications.
 In CloudWatch for Alarms.
 In CloudFormation for state changes.

SNS + SQS : Fan Out


The “fanout” scenario is when an Amazon SNS message is sent to a topic
and then replicated and pushed to multiple Amazon SQS queues, HTTP
endpoints, or email addresses. This allows for parallel asynchronous
processing. For example, you could develop an application that sends an
Amazon SNS message to a topic whenever an order is placed for a
product. Then, the Amazon SQS queues that are subscribed to that topic
would receive identical notifications for the new order. The Amazon EC2

server instance attached to one of the queues could handle the processing
or fulfillment of the order while the other server instance could be
attached to a data warehouse for analysis of all orders received.
This pattern is fully decoupled and there is no data loss, and it gives us the ability to add more receivers of the data later.

Fanout
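If you want to wire the fanout up programmatically rather than in the console, a minimal boto3 sketch might look like the following (the topic and queue names are just examples, and note that in practice the queue also needs an access policy that allows the SNS topic to deliver to it):

import boto3

sns = boto3.client('sns')
sqs = boto3.client('sqs')

# Create the topic and a queue that will receive the fanned-out messages
topic_arn = sns.create_topic(Name='MyTestTopic')['TopicArn']
queue_url = sqs.create_queue(QueueName='order-events')['QueueUrl']
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=['QueueArn'])['Attributes']['QueueArn']

# Subscribe the queue to the topic; every message published to the topic is
# then replicated to this queue (and to any other subscriptions)
sns.subscribe(TopicArn=topic_arn, Protocol='sqs', Endpoint=queue_arn)

# Publish once; all subscribers receive the message
sns.publish(TopicArn=topic_arn, Message='New order placed')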

Let's do a hands-on. We will go to the SNS console and give the topic a name; I'm using "MyTestTopic". Then click on Next Step.

We're not going to apply any custom changes, so we will keep the defaults and click on Create Topic.

Now, as we see in the picture below, there is no subscription on the topic, so we will create one. Click on Create Subscription.

Now we will choose the protocol; there are many protocols available, like HTTP/HTTPS, Email, Lambda, SQS, SMS, etc. I'll choose Email, enter the email address, and click Create subscription.

Now you will see that the status of the subscription is Pending confirmation. Go to your email, open the message received from Amazon and confirm it.

Now you will see the status change to Confirmed, as I have confirmed the subscription from my email.

So, this is my console; you can add more subscriptions. I have added 2 subscriptions. Let's publish a message. Click on the Publish message button at the top right-hand side.

I’m publishing my message, gave the details.

Now go to the SQS queue and click on View/Delete Messages under Actions. (Make sure you subscribe the queue to the SNS topic first, via Actions and then Subscribe to Amazon SNS topic, before publishing the message from SNS.)

Now you can see your message in the queue as well as in your email.

AWS SQS: Simple Queue Service

What is SQS?
 SQS stands for Simple Queue Service.
 SQS was the first service available in AWS.
 Amazon SQS is a web service that gives you access to a message queue that can be used to store messages while they wait for a computer to process them.
 Amazon SQS is a distributed queue system that enables web service applications to quickly and reliably queue messages that one component in the application generates to be consumed by another component. A queue is a temporary repository for messages that are awaiting processing.
 With the help of SQS, you can send, store, and receive messages between software components at any volume without losing messages.
 Using Amazon SQS, you can decouple the components of an application so that they can run independently, easing message management between components.
 Any component of a distributed application can store messages in the queue.
 Messages can contain up to 256 KB of text in any format, such as JSON, XML, etc.
Amazon SQS can be described as a commoditization of the messaging service. Well-known examples of messaging technologies include IBM WebSphere MQ and Microsoft Message Queuing. Unlike these technologies, users do not need to maintain their own servers; Amazon does it for them and sells the SQS service at a per-use rate.

Authentication:-
Amazon SQS provides authentication procedures to allow for secure handling of data. Amazon uses its Amazon Web Services (AWS) identification to do this, requiring users to have an AWS-enabled account with amazon.com. AWS assigns a pair of related identifiers, your AWS access keys, to an AWS-enabled account to perform identification.
AWS uses the access key ID provided in a service request to look up the account's secret access key and then calculates a digital signature with that key. If the signatures match, the user is considered authentic; if not, the authentication fails and the request is not processed.
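With boto3 these credentials are normally resolved automatically (from environment variables, the AWS CLI configuration, or an attached IAM role), but as a minimal sketch they can also be supplied explicitly; the values below are placeholders:

import boto3

# Explicit credentials are rarely needed in practice; boto3 usually finds them
# in the environment, in ~/.aws/credentials, or via an IAM role.
sqs = boto3.client(
    'sqs',
    region_name='us-east-1',
    aws_access_key_id='YOUR-ACCESS-KEY-ID',          # placeholder
    aws_secret_access_key='YOUR-SECRET-ACCESS-KEY'   # placeholder
)
print(sqs.list_queues().get('QueueUrls', []))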

Message delivery:-
Amazon SQS guarantees at-least-once delivery. Messages are stored on multiple servers for redundancy and to ensure availability. If a message is delivered while a server is unavailable, it may not be removed from that server's queue and may be resent.
The service supports an unlimited number of queues and unlimited message traffic.
You can get started with Amazon SQS for free: the AWS free tier includes 1 million Amazon SQS requests each month, and some applications may be able to operate entirely within this free tier limit.

How does SQS work?

SQS provides an API endpoint to submit messages and another endpoint to read messages from a queue. Each message can only be retrieved once, and you can have many clients submitting messages to and reading messages from a queue at the same time.
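A minimal boto3 sketch of that submit/read cycle might look like this (the queue name is just an example):

import boto3

sqs = boto3.client('sqs')
queue_url = sqs.create_queue(QueueName='demo-queue')['QueueUrl']

# Submit a message to the queue
sqs.send_message(QueueUrl=queue_url, MessageBody='hello from a producer')

# Read up to one message back; it stays hidden from other consumers
# until the visibility timeout expires or it is deleted
messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1).get('Messages', [])
for message in messages:
    print(message['Body'])
    # Delete the message once processed so it is not delivered again
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message['ReceiptHandle'])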

Conclusion:-
 SQS is pull-based, not push-based.
 Messages are up to 256 KB in size.
 Messages are kept in a queue from 1 minute to 14 days.
 The default retention period is 4 days.
 It guarantees that your messages will be processed at least once.
So, that's a short overview of Amazon SQS.

Sending a message from AWS SQS and SNS via Lambda Function with API Gateway

We will be sending a message to an AWS SQS queue via a Lambda function with an API Gateway trigger.

Scenario:
A company wants to create a system that can track customer orders and
send notifications when orders have been shipped. They want to use
AWS services to build the system. They have decided to use SQS,

470
Lambda, and Python for the project.
1) Create a Standard SQS Queue using Python.
2) Create a Lambda function in the console with a Python 3.7 or higher
runtime
3) Modify the Lambda to send a message to the SQS queue. Your
message should contain either the current time or a random number. You
can use the built-in test function for testing.
4) Create an API gateway HTTP API type trigger.
5) Test the trigger to verify the message was sent.

Prerequisites:
 AWS CLI and Boto3 installed
 AWS account with IAM user access, NOT root user
 Basic AWS command line knowledge
 Basic Python programming language
 Basic knowledge of AWS Interactive Development Environment
(IDE)

What is AWS SQS?


Amazon Simple Queue Service (Amazon SQS) is a fully managed
message queuing service provided by Amazon Web Services (AWS). It
is a reliable, highly scalable, and flexible way to decouple and
asynchronously process workloads or messages across distributed
applications and services. The service allows messages to be sent
between software components and systems in a reliable and fault-
tolerant way.

What is AWS Lambda?


AWS Lambda is a serverless computing service provided by Amazon
Web Services (AWS) that allows you to run your code in response to
events and automatically manage the underlying compute resources.
With AWS Lambda, you can write and run code without having to

provision, manage, or scale servers. It is highly scalable, cost-effective,
and reliable, and integrates seamlessly with other AWS services to
enable building of scalable and highly available distributed systems and
microservices architectures.

What is AWS API Gateway?


Amazon API Gateway is a service provided by Amazon Web Services
(AWS) that lets you create APIs for your backend services or
applications without having to manage the infrastructure. With Amazon
API Gateway, you can create APIs that act as front doors for
applications to access data or functionality from your backend services,
such as AWS Lambda, Amazon EC2 instances, and HTTP/HTTPS
endpoints.

1. Create SQS Queue using Python


To create the Python script, the reference is the Boto3 documentation, which describes the code required for creating an SQS queue.
We need to create a new file before we get started! Select File > New From Template > Python File > Save As… and give it a name. Make sure you KEEP the extension .py.

#!/usr/bin/env python3.7

# Create a Standard SQS Queue using Python boto3
import boto3

# Get the service resource
sqs = boto3.resource('sqs')

# Create the SQS queue
queue = sqs.create_queue(QueueName='Week15Project-sqs-queue')

# Print the queue URL
print(queue.url)

Alright, let's run the code in the AWS Cloud9 IDE. Great! We see the SQS queue URL displayed. Let's copy and save this URL; we will need it later.

Ok, let’s double check if our SQS queue is created in AWS console.

Awesome, we see that Week15Project-sqs-queue has been created!

2. Create Lambda Function


Our Lambda function will send a message to our newly created SQS queue when triggered.
Let's create a Lambda function in the AWS console > Create Function > Author from scratch > Function Name: LambdaSQS > Runtime: Python 3.8 > Architecture: x86_64 > Execution role: Create a new role with basic Lambda permissions > Create Function

We see our Lambda function has been created. Let’s update the
permissions for the role and attach to the Lambda function.

Under Configuration > Permissions > Add Permissions > Click on LambdaSQS-role-hpsvnw36

After clicking on the LambdaSQS-role link, it will take you to the IAM page; click Add permissions > Attach policies.

Search SQS > Check AmazonSQSFullAccess > Add permission

Great! Now our Lambda function has the basic Lambda permissions and the full SQS access permission assigned to its attached role.

Next, we need to add our SQS queue as a destination of the Lambda function.
Go back to Lambda > Functions > LambdaSQS > Add destination

Add destination > Source: Asynchronous invocation (is a feature
provided by AWS Lambda that allows you to invoke a Lambda function
without waiting for the function to complete its execution) > Condition:
On success > Destination type: SQS_queue > Destination:
Week15Project-sqs-queue

Great we’ve completed the destination configuration.

3. Edit the Lambda function to send a message to the SQS Queue

Below is the reference to the Boto3 documentation for the code to send a message to the SQS queue. We only need to supply the 'QueueUrl' and 'MessageBody' parameters.

send_message — Boto3 1.26.115 documentation (amazonaws.com)
Go back in Lambda, use the Github gist Lambda SQS python script code
and paste it in the lambda function code source, then click Deploy.
Lambda SQS Python Script Code (github.com)

import json
import boto3
from datetime import datetime
import dateutil.tz

# Define the lambda handler
def lambda_handler(event, context):

    sqs = boto3.client('sqs')

    # Get the current time in US Eastern
    est = dateutil.tz.gettz('US/Eastern')
    current_time = datetime.now(est)
    time = current_time.strftime("%I:%M:%S %p")

    # Send the message to the SQS queue
    response = sqs.send_message(
        QueueUrl = 'YOUR SQS URL',
        MessageBody = json.dumps(f"US Eastern current time: {time}")
    )
    return {
        'statusCode': 200,
        'body': json.dumps('Message in a bottle sent to SQSqueue :D')
    }

We have successfully deployed the code; let's test it by clicking Test.

The Configure test event page will show up next. Select Test event action: Create new event > Event name: testSQSLambda > Event sharing settings: Private > Template: apigateway-aws-proxy > Save

Configure test event dialog (event name: testSQSLambda, sharing: Private, template: apigateway-aws-proxy with its sample Event JSON)
We can now click Test again and see if the Test Event was successful.

Awesome! We see statusCode: 200 and the 'body' shows our message along with the timestamp. This means our code is running correctly!

4. Create an API gateway HTTP API type trigger


In Lambda overview, click +Add trigger

Trigger configuration > Select API Gateway > Intent: Create a new API > API type: HTTP API > Security: Open > Click Add

In Configuration tab > Triggers > Click on the newly created API
Gateway link.

We see "Message in a bottle sent to SQSqueue" displayed when we open the URL!

Let’s go back to Amazon SQS Queue > Click on Send and receive
message

Click on Poll for message to see if the Lambda function sent a message
to the queue.

Click on the link to see what the latest message is:

Current Time! Awesome we completed our objectives! We are able to
send a message to the SQS queue by triggering our Lambda function
with API gateway!

ADVANCED:

1) Create a SNS topic using Python.


2) Use an email to subscribe to the SNS topic.
3) Create a Lambda function with a Python 3.7 or higher runtime
4) Modify the Lambda to trigger when new messages arrive in the SQS
queue you created earlier.
5) The Lambda should publish the SQS message that triggered it to the
SNS topic.
6) Validate the entire architecture works by triggering the API you
created earlier. You should receive the notification from the SNS
subscription.
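The console walkthrough below covers the subscription step; for items 1 and 2, a minimal boto3 sketch could look like this (the topic name and email address are placeholders):

import boto3

sns = boto3.client('sns')

# 1) Create the SNS topic
topic_arn = sns.create_topic(Name='Week15Project-sns-topic')['TopicArn']
print(topic_arn)

# 2) Subscribe an email address to the topic; the recipient must click the
#    confirmation link that SNS emails them before messages are delivered
sns.subscribe(TopicArn=topic_arn, Protocol='email', Endpoint='you@example.com')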

In Amazon SNS console, create subscription tab.

Topic ARN: Week15project-sns-queue > Protocol: Email > Endpoint: your email address > Create Subscription

Subscription successfully created

Check your email: and confirm your subscription

Make sure to remember to delete the Lambda function, SQS queue, and API Gateway to prevent any ongoing AWS charges.

Amazon Simple Email Service (SES)

What Is Amazon SES?


Welcome to the Amazon Simple Email Service (Amazon SES)
Developer Guide. Amazon SES is an email platform that provides an
easy, cost-effective way for you to send and receive email using your
own email addresses and domains.
Amazon Simple Email Service (Amazon SES) is a cloud-based email sending service designed to help digital marketers and application developers send marketing, notification, and transactional emails. You use its SMTP interface or one of the AWS SDKs to integrate Amazon SES directly into your existing applications.

Why use Amazon SES?


Amazon SES works alongside other AWS services. You can send email from Amazon EC2 by using an AWS SDK, by using the Amazon SES SMTP interface, or by making calls directly to the Amazon SES API. You can also use AWS Elastic Beanstalk to create an email-enabled application, such as a program that uses Amazon SES to send a report to customers.

Amazon SES and other AWS services:


Amazon SES integrates seamlessly with other AWS products. For

example, you can:

 Add email-sending capabilities to any application. If your application runs in Amazon Elastic Compute Cloud (Amazon EC2), you can use
Amazon SES to send 62,000 emails every month at no additional
charge. You can send email from Amazon EC2 by using an AWS
SDK, by using the Amazon SES SMTP interface or by making calls
directly to the Amazon SES API.
 Use AWS Elastic Beanstalk to create an email-enabled application
such as a program that uses Amazon SES to send a newsletter to
customers.
 Set up Amazon Simple Notification Service (Amazon SNS) to notify
you of your emails that bounced, produced a complaint, or were
successfully delivered to the recipient’s mail server. When you use
Amazon SES to receive emails, your email content can be published
to Amazon SNS topics.
 Use the AWS Management Console to set up Easy DKIM, which is a
way to authenticate your emails. Although you can use Easy DKIM
with any DNS provider, it is especially easy to set up when you
manage your domain with Route 53.
 Control user access to your email sending by using AWS Identity and
Access Management (IAM).
 Store emails you receive in Amazon Simple Storage Service (Amazon
S3).
 Take action on your received emails by triggering AWS Lambda
functions.
 Use AWS Key Management Service (AWS KMS) to optionally
encrypt the mail you receive in your Amazon S3 bucket.
 Use AWS CloudTrail to log Amazon SES API calls that you make
using the console or the Amazon SES API.
 Publish your email sending events to Amazon CloudWatch or
Amazon Kinesis Data Firehose. If you publish your email sending
events to Kinesis Data Firehose, you can access them in Amazon
Redshift, Amazon Elasticsearch Service, or Amazon S3.

Amazon SES Quick Start:
This procedure leads you through the steps to sign up for AWS, verify your email address, send your first email, think about how you'll handle bounces and complaints, and move out of the Amazon Simple Email Service (Amazon SES) sandbox.

Use this procedure if you:


 Are just experimenting with Amazon SES.
 Want to send some test emails without doing any programming.
 Want to get set up in as few steps as possible.

Step 1: Sign up for AWS


 Before you can use Amazon SES, you need to sign up for AWS.
When you sign up for AWS, your account is automatically signed up
for all AWS services.

Step 2: Verify your email address


 Before you can send email from your email address through Amazon
SES, you need to show Amazon SES that you own the email address
by verifying it.

Step 3: Send your first email


 You can send an email simply by using the Amazon SES console. As
a new user, your account is in a test environment called the sandbox,
so you can only send email to and from email addresses that you have
verified.

Step 4: Consider how you will handle bounces and complaints


Before the next step, you need to think about how you will handle
bounces and complaints. If you are sending to a small number of
recipients, your process can be as simple as examining the bounce and
complaint feedback that you receive by email, and then removing those

recipients from your mailing list.

Step 5: Move out of the Amazon SES sandbox


 To be able to send emails to unverified email addresses and to raise
the number of emails you can send per day and how fast you can send
them, your account needs to be moved out of the sandbox. This
process involves opening an SES Sending Limits Increase case in
Support Center.
Next steps:

 After you send a few test emails to yourself, use the Amazon SES
mailbox simulator for further testing because emails to the mailbox
simulator do not count towards your sending quota or your bounce
and complaint rates. For more information on the mailbox simulator,
see Testing Email Sending in Amazon SES .
 Monitor your sending activity, such as the number of emails that you
have sent and the number that have bounced or received complaints.
For more information, see Monitoring Your Amazon SES Sending
Activity.
 Verify entire domains so that you can send email from any email
address in your domain without verifying addresses individually. For
more information, see Verifying Domains in Amazon SES.
 Increase the chance that your emails will be delivered to your
recipients’ inboxes instead of junk boxes by authenticating your
emails. For more information, see Authenticating Your Email in
Amazon SES .

Triggering an S3 bucket to send mail via AWS SES by integrating Lambda

Introduction:
Sending emails is a common requirement for many applications, whether
it’s sending notifications, newsletters, or transactional emails. In this
blog post, we’ll explore how to leverage the power of Amazon Simple
Email Service (SES) and AWS Lambda to automate the process of
sending emails when new objects are uploaded to an Amazon S3 bucket.
This powerful combination allows you to build scalable and efficient
email notification systems.

Setting up S3
Here is how you can create a bucket in S3; the only changes are the name and the ACL, and the rest is left as default.

creating bucket in S3

enable ACL

Here our bucket is created; simple.
Setting up Lambda
Below is how you can create a Lambda function to host our Python code.

Triggering S3 and integrating Lambda with the S3 object event
Here is how, step by step, you can set up an S3 event trigger that fires when something is uploaded to our bucket.

Here you have to connect the Lambda function that you want your bucket to trigger when something is put into it.

Refresh the Lambda page and you'll see this, which shows our integration with S3 is done.
Giving permissions
Our Lambda function needs permissions to perform these activities, so we will navigate to the permissions section of our Lambda role and attach the two policies shown below, since our work involves these two services. These permissions mean that our Lambda is allowed to do anything with S3 and SES, i.e. full access.

We’ll set up our lambda to perform actions

import json
import boto3

s3_client = boto3.client('s3')

def lambda_handler(event, context):

    # Get the object; here we know our file name, for an unknown file we can
    # use a prefix in the S3 trigger event
    a = s3_client.get_object(Bucket='testqejifiui30', Key='email_test')
    a = a['Body'].read()

    object_data = a.decode('utf-8')        # decode the raw bytes into plain text
    object_data = json.loads(object_data)  # the file is JSON, so parse it to extract the information

    ses_client = boto3.client('ses', region_name='ap-south-1')  # Replace with your desired region

    response = ses_client.send_email(
        Source='shubhangorei@gmail.com',  # Replace with the sender's email address
        Destination={
            'ToAddresses': object_data['emails']  # Replace with the recipient's email addresses
        },
        Message={
            'Subject': {
                'Data': 'this is my subject',  # Replace with the email subject
            },
            'Body': {
                'Text': {
                    'Data': 'body is good',  # Replace with the email body
                }
            }
        }
    )

    return {
        'statusCode': 200,
        'body': json.dumps('mail send')
    }

The Lambda function retrieves the object from S3, reads its content,
converts it to a JSON format, and extracts the email addresses. Then, it
uses the Boto3 library to interact with Amazon SES and sends the email
to the recipients specified in the JSON file.
Make sure to replace the following placeholders in the code with your
own values:

 Replace 'shubhangorei@gmail.com' with the sender's email address used to set up SES; below we'll see how it is verified.
 Replace 'this is my subject' with the email subject.
 Replace 'body is good' with the email body.
 Adjust the region name ('ap-south-1') and bucket name
('testqejifiui30') according to your specific configuration.

Additionally, ensure that the appropriate IAM role is assigned to the
Lambda function to grant it the necessary permissions for accessing S3
and sending emails using Amazon SES.
Once you have customized the code and deployed the Lambda function,
it will be triggered whenever a new object is uploaded to the specified
S3 bucket, and it will send an email to the recipients specified in the
JSON file.
Please note that the above code assumes that the email addresses are
provided in the JSON file under the key 'emails'. Adjust the code
accordingly if your JSON file has a different structure.

Setting up SES
Here my account is in the sandbox (test) phase, as all new accounts are. If you want to move to the production phase, where you can send mail to any address, you can request it in the SES console.
Here I have verified the email address which I'm going to use to send mail; otherwise an error will show up.
It’s important to note that when using Amazon SES in the sandbox
mode, you can only send emails to verified email addresses. This means
that the email addresses specified in the JSON file need to be verified in
the Amazon SES console. If they are not verified, the emails will not be
sent.

This is the file which I'm going to upload to S3; it's in JSON format:

{
"emails": ["vmrreddy913@gmail.com"]
}

Now upload this file to S3.

This part of our code does the actual sending; you can customize it according to your needs, and the code is self-explanatory.

ses_client = boto3.client('ses', region_name='ap-south-1')  # Replace with your desired region

response = ses_client.send_email(
    Source='vmrreddy913@gmail.com',  # Replace with the sender's email address
    Destination={
        'ToAddresses': object_data['emails']  # Replace with the recipient's email addresses
    },
    Message={
        'Subject': {
            'Data': 'this is my subject',  # Replace with the email subject
        },
        'Body': {
            'Text': {
                'Data': 'body is good',  # Replace with the email body
            }
        }
    }
)

Now upload the file to the bucket you created.

Creating a Highly Available 3-Tier Architecture for
Web Applications in AWS

AWS provides a wide range of resources for developing and managing cloud applications, which can be customized to construct highly dependable and resilient cloud infrastructures. Suppose you are tasked with developing a highly available three-tier architecture for your organization's new web application. This tutorial is long but comprehensive; you may want to bookmark this guide for future reference on creating the web, application, and data tiers.

What is a 3-Tier Architecture?
A three-tier architecture comprises three layers, namely the presentation
tier, the application tier, and the data tier. The presentation tier serves as
the front-end, hosting the user interface, such as the website that users or
clients interact with. The application tier, commonly referred to as the
back-end, processes the data. Finally, the data tier is responsible for data
storage and management.

Benefits of a 3 Tier Architecture:


Scalability: Each tier can scale independently, allowing organizations
to optimize their resources and minimize costs.
Reliability: Each tier can be replicated across multiple servers,
improving application availability and reliability.
Performance: By dividing the application into separate layers, 3-tier
architecture reduces network traffic and enhances application
performance.
Security: Each tier can have its own security group, allowing different
organizations to implement customized security measures for each layer.
Reduced development time: Different teams can work on different
tiers simultaneously, resulting in faster deployments.
Flexibility: Each tier can be developed using different technologies
and programming languages, enabling organizations to leverage the best
tools for each layer.
Part 1:
Creating a VPC and Subnets

Using the architecture diagram as a reference, we will need to start by
creating a new VPC with 2 public subnets and 4 private subnets.
Log into the AWS management console and click the Create VPC
button.

We are going to create a VPC with multiple public and private subnets,
availability zones, and more, so let’s choose “VPC and more.”
Name your VPC. I am using the auto-assigned IPV4-CIDR block of
“10.0.0.0/16.” Choose these settings:
no IPV6
default Tenancy
2 Availability Zones
2 public subnets
4 private subnets

Next, for Nat gateway choose “in 1 AZ,” none for VPC endpoints, and
leave the Enable DNS hostnames and Enable DNS resolutions boxes
checked.
Before we create the VPC expand the customize AZ’s and customize
subnets CIDR blocks tabs.

Click the “Create VPC” button.
The diagram below highlights the route that your new VPC will take.

You will then be shown a workflow chart that shows your resources
being created.

View your new VPC after everything is created.

Next click on the Subnets tab in the VPC console. Select one of the new
subnets that was created, then under the “Actions” tab, expand the down
arrow and select “Edit subnet settings.”

Check “Enable auto-assign IPv4 address” and “Save.” We need to do
this for all 6 new subnets that were created.
Update Web Tier Public Route Table:
We need to navigate to the Route tables tab under the VPC dashboard to make sure that the route table that was automatically created is associated with the correct subnets. Below I highlighted in green the correct public subnets that are already associated. If you have none, or you only created a stand-alone VPC, you will have to click on "Edit subnet associations" and select the ones needed.

Part 2: Creating a Web Server Tier
Next, we will create our first tier that represents our front-end user
interface (web interface). We will create an auto scaling group of EC2
instances that will host a custom webpage for us. Start by heading to the EC2 dashboard; there we are going to launch an EC2 instance.

Name your instance and select an AMI. Please use Amazon Linux 2; I used the newer version and was not able to SSH into it.

Select the key pair that you will use, and make sure to select your new
VPC and the correct subnet. Auto-assign IP should be enabled.

Create a new security group. For inbound security group rules, add rules for SSH, HTTP, and HTTPS from anywhere. This is not standard or safe practice, but for this demonstration it is fine.

Leave the configuration storage settings alone. In the Advanced details,
head all the way to the bottom. We are going to use a script to launch an
Apache web server when the instance starts.

Launch your new instance!
Once your instance is up and running, copy the public IP address and
paste it into a web server.

I am having an issue getting mine to say "My Company Website"; I will solve this at a later date. All that really matters is that the instance has Apache installed and we can reach it. Next, I will SSH into the instance to make sure that works as well.
For this project to work we need to create an auto scaling group attached to our EC2 configuration. This will increase our reliability and availability. Before we can create the auto scaling group, we need to define a launch template; this template outlines what resources are going to be allocated when the auto scaling group launches on-demand instances. Under the EC2 dashboard, select Launch Templates, and click the "Create launch template" button.

Name your template, and check the box to “provide guidance.”

Use the same AMI as our recently launched instance, choose the t2.micro instance type, and select your key pair.

For the firewall, use “Select existing security group,” and make sure the
security group (SG) that we created for the web tier is selected. Under
Advanced network configuration, enable Auto-assign public IP.

We are going to leave the storage options alone for now. Click on the
Advanced details tab, scroll down, and enter the same script as we did
earlier for our EC2 instance.

Click the “Create launch template” button.

Navigate to the autoscaling tab at the bottom of the EC2 dashboard.


Click “Create auto scaling group.” The launch template that we just
finished creating is the template that our auto scaling group will use to
launch new EC2 instances when scaling up.
Name your auto scaling group (ASG), choose the launch template that
you created, then click the Next button.

Under Network, make sure to select the VPC that you created earlier,
then also under availability zones and subnets select the public subnets
that were created; yours may differ.
Click the Next button.

Now we are given the option to attach a load balancer to our ASG. A load balancer will distribute the load from incoming traffic across multiple servers. This helps with availability and performance.

Select “Attach to a new load balancer” and “Application load balancer,”
name your load balancer, then select “Internet facing” as this is for our
web tier.

Your VPC and the two public subnets should already be selected.
Under the “Listeners and routing” section, select “Create a target
group,” which should be on port 80 for HTTP traffic.

Leave No VPC Lattice service checked.
Click to turn on Elastic Load Balancing health checks.

Check “enable group metrics collection within CloudWatch.”

Next, we configure the group size and scaling policy for our ASG. For
reliability and performance, enter 2 for desired capacity and minimum
capacity. For maximum capacity, enter 3.

In scaling policies, select “Target tracking scaling policy.” The metric


type should be set to “average CPU utilization” with a target value of 50.
Click the Next button.

On the next screen we could add notifications through SNS topics, but I
skipped this for now. Click the Next button.

The next screen allows you to add tags which can help to search, filter,
and track your ASG’s across AWS. For now, just click the Next button.

Review your settings on the next page, and at the bottom click the "Create auto scaling group" button.
You should see a lovely green banner declaring your success here. After
the ASG is finished updating capacity, navigate to your EC2 dashboard
to confirm that your new instance has been created.
Note: In my previous examples my names were not accepted for the auto
scaling group, make sure you follow the naming conventions.

As you can see, the ASG is doing its job.

Before we move on, it is a good idea to take a moment and connect to the instances that were created; as you can see: success!

Part 3: Creating an Application Tier


Next up we are going to create the back-end of our 3-tier architecture.
We could create an EC2 instance first, but for this portion I will start by
heading to the Launch Templates tab under the EC2 dashboard.

Name your new template, and select the Guidance tab again.

Select "Browse more AMIs", then Amazon Linux 2 for your AMI. Select t2.micro for the instance type, and also select your key pair.

Under the network settings, we want to limit access to the application tier for security purposes. You don't want any public access to the application layer or the data tier behind it. We will create a new security group. Select our VPC; I realize now that for this part I could have chosen a better name!
Name your new SG and select the VPC that we created when we started.

We will create 3 security group rules, starting with SSH; use My IP as the source. For the second rule, use Custom TCP; the source here is the security group from our web tier (tier 1). For the third rule, select All ICMP - IPv4 and set the source as Anywhere. This will allow us to ping our application tier from the public internet to test whether traffic is routed properly.

This is my updated screenshot
We will again leave the storage volumes alone, and head to the bottom
of Advanced details to enter our script. And then click next.

After I had created the launch template, I realized I had made a couple of mistakes. Instead of modifying the template I decided to start from scratch, but upon further reflection, in the future I would just modify the template as this would save time, and we all know time is money. The whole reason for these exercises is to learn, right?

I went back to the security group and updated my inbound rules.

Once this was fixed, I went back and recreated the application layer
template using the ApplicationTierSG1 that I just altered. I just double
checked the security group rules and used the same settings for
everything else above.
Application Tier Auto Scaling Group:
Ok, now we are ready to create our auto scaling group for the
application layer. Under the EC2 dashboard go to create an auto scaling
group.

Name your new ASG and select the proper launch template, then click
the Next button.
Choose the correct VPC and 2 private subnets, then click the Next
button.

We are again given the option to attach a load balancer, and we want to
do this. Select an application load balancer, name it, and set it as an
internal load balancer. Double check that the VPC and subnets are
correct. Mine are.

Under “Listeners and routing” create a new target group, and use port 80
once again.

Below I have again chosen to turn on health checks and enable group
metrics within CloudWatch.

On the next screen set your desired capacity, minimum capacity, and
maximum capacity again.

Then I have chosen to select target tracking with a CPU utilization of 50%.

Click the Next button, add notifications if you want or need and then
tags. Review your new ASG settings and create it.
As you can see below, my new application layer ASG is updating the
capacity.

Once the new EC2 instances are created and running, we will try to SSH into them. If we set everything up correctly, we should not be able to. When I tried to SSH into the running application tier EC2 instance, the connection timed out; this is exactly what we want here.

I also tried to connect using EC2 connect. This failed as well.


We still need to check if our tier-1 servers interact with our tier-2
servers. To test this, you will need to log into your tier-1 EC2 instances
via SSH and run a ping command to a private IP address of our tier-2
servers. Below you can see a successful ping.

After waiting for my new ASG to update its capacity, it only started 1
instance. I must have clicked back and reset my capacity selections to
the default 1 instance. Below I updated the capacity that I wanted.
I apologize for the changes in settings here; I completed this project over a long period of time and on multiple computers.

Update the application (Tier-2) Route table:


Head back to the VPC dashboard, select Route tables, and select one of
the route tables that was automatically created when we created our
VPC. I only have 1 subnet associated with this table so click on Edit
subnet associations.

Add another subnet that is private.

Part 4: Creating a Database Tier

Almost there! We have created 2 out of the 3 tiers and tested both successfully.
We are now going to build our database; AWS offers several types of
databases but for this exercise we are going to use a MySQL RDS
database.

Create a DB Subnet Group:
We will begin by creating a DB subnet group. Navigate to the RDS console, and on the left side menu click "Subnet groups" and then the orange "Create DB subnet group" button.

For the next part we need to know the availability zones for the last two
subnets that were automatically created. Head back to the VPC console.
Under Subnets, find the last 2 subnets that you have; make sure not to
select the private subnets that you already used in tier 2.

Back at the RDS console, select the availability zones that you are going
to use.

Next up we need to select the proper subnet; the drop down menu only
lists the subnet ID. Below is another screenshot of my subnets; the
second column is the ID.

Back in the RDS console, click Create database. Select the MySQL DB engine.

Next you can choose a multi-AZ deployment with 3 database instances: one primary instance with two read-only standby instances. This makes for a very reliable system, but we do not need it at this time.

There are also availability and durability options; however, with the
free-tier, none are available. We do not need them either.
Under Settings, name your DB and create a master username and password. This username and password should be different from your AWS account user login, as they are specific to the database you are creating.

You will need your username and password. Make sure to store them in
a secure place!
Under Instance configuration, the burstable classes option is pre-selected because it’s the only one available for the free tier. I left my instance type as a db.t2.micro. You can add storage as needed; I left mine on the default settings.

We are going to set up our network manually so choose not to connect to
an EC2 resource. Select the proper VPC; the subnet group that you
created earlier should be listed as default. Select Create new VPC
security group (firewall).

In Database authentication I left the default checked.

Click the “Create database” button.

Update the Database Tier Security Group:
Navigate over to the VPC console, select Security groups on the left side menu, and find the database tier security group you just created. Select it, then edit its inbound rules; by default the database SG has an inbound rule allowing MySQL/Aurora traffic on port 3306 from your IP address. Delete this rule.

Create a new rule for MySQL/Aurora on port 3306: for the Source, select Custom and add the security group for your application layer (the tier-2 SG).
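The equivalent CLI call looks roughly like this; both security group IDs are placeholders, with the source group being your tier-2 SG.

# Allow MySQL traffic on port 3306 only from the application tier security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0db111111111111aa \
  --protocol tcp \
  --port 3306 \
  --source-group sg-0app22222222222bb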

Update Tier 3 Private Route Tables:


It’s the home stretch, people. I hope you’re still here!
In the last step for our database tier, we need to make sure that the route table we associate with our database’s private subnets has both subnets listed in subnet associations. If not, add the other subnet, and save.

Our three-tier architecture is done! We have already tested our web and application layers, but we are going to go a step further here.
Part 5: Testing
We can’t SSH directly to the database tier, but we can use SSH agent forwarding to achieve this.
You need to add your access key pair file to your keychain. To do this,
first make sure you are in your local host (use the command exit to get
out of any EC2 instance you’re connected to). Then use the following
command:
ssh-add -K <keypair.pem>

Now that your key pair file is added to your keychain, the SSH agent
will scan through all of the keys associated with the keychain and find
your matching key.
Now reconnect to the web tier EC2; however, this time use -A to specify
you want to use the SSH agent.
ssh -A ec2-user@<public-ip-address>

Once you are logged back into your tier-1 EC2, use the following
command to check if the SSH agent forwarded the private key.
ssh-add -l

Our key pair has been forwarded to our public instance. Copy your tier-2 application layer private IP address and paste it into the next command.
ssh -A ec2-user@<private-ip-address>

We have now SSH’ed from the public tier-1 web instance into the private tier-2 application instance!
Testing Connectivity to the Database Tier
There are a few ways you can connect to your RDS database from your
application tier. One way is to install MySQL on your private tier 2
instance to access your database. We are going to utilize this method.
While logged into your application tier instance, use this command:
sudo dnf install mariadb105-server

This command installs the MariaDB package, which provides a MySQL-compatible client. Once installed, you should be able to use the following
command to log into your RDS MySQL database. You will need your
RDS endpoint, user name, and password. To find your RDS database
endpoint, navigate to the database you created and find the endpoint
under Connectivity & Security.
mysql -h <rds-database-endpoint> -P 3306 -u <username> -p
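Once the connection works, a quick non-interactive sanity check like the one below confirms you can actually query the database; the endpoint and username are the same placeholders as above.

# List the databases on the RDS instance without opening an interactive prompt
mysql -h <rds-database-endpoint> -P 3306 -u <username> -p -e "SHOW DATABASES;"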

We have now successfully connected to our MySQL database from the application tier. We have connectivity with all of our tiers!

How to Deploy a Static Website with AWS Using S3,
Route 53, CloudFront, and ACM

Launching your own website is exciting, but figuring out how to get it
online can be a bit overwhelming, especially if you’re new to the
process.
If you’ve created a website using, for example, HTML, CSS, and JavaScript, and want to get it up and running on the internet, AWS (Amazon Web Services) can help.
In this beginner-friendly guide, we’ll walk through the steps together.
We’ll use AWS services like S3 (for storage), Route 53 (for managing
your domain name), CloudFront (for delivering your content quickly),
and ACM (for keeping your site secure). Additionally, we’ll explore a
straightforward process to purchase a domain for your website,
making it incredibly easy to have your very own web address.
By the end, you’ll have a clear understanding of how to host your
website on AWS and make it accessible to anyone on the web!

Is this process free? If not, how much does it cost?

Before we delve into the process, let’s understand the potential costs involved in using these services:

 Buy a Domain: There’s a multitude of places to purchase domains, and the pricing varies based on several factors. Different extensions (like .com, .org, .net) and the duration you opt for (you usually buy a domain for a specific period) influence the costs. Generally, buying a domain with a common extension might range from $5-10/year.

 S3: If you’re within the AWS Free Tier, hosting your website files on S3 should incur no cost. Outside of the Free Tier, expenses are typically minimal, often just a few cents.

 Route 53: Maintaining a hosted zone adds around 50 cents monthly, while Route 53 queries cost approximately 40 cents per million queries.

 AWS Certificate Manager: Securing your site with a TLS/SSL certificate through ACM is free of charge.

 CloudFront: Operating within the Free Tier carries no cost. Beyond that, expenses will depend on website traffic, but for most small-scale usage, it often amounts to just a few cents. Refer to the detailed pricing page for more specifics.

If you plan to delete everything after the tutorial, consider setting up an AWS Budget to monitor and limit your spending. This ensures you’re alerted before exceeding your set limit, preventing any unexpected bills.

Taking all this into account, let’s dive into deploying our
static website using AWS!
Buy a Domain

Before diving into the intricacies of using AWS services to host your
website, let’s first explore the fundamental step of acquiring a domain.
Your domain is like your home’s address on the internet; it’s how people
will find and remember your website. This step is crucial as it sets the
foundation for your online presence, giving your website its unique
identity in the vast landscape of the web.
There are various platforms where you can purchase a domain, such as:
 Hostinger
 Namecheap
 GoDaddy
Each of them offers a simple process to acquire your unique web address. For our example, we’ll navigate through Hostinger, but the process remains fairly consistent across these platforms. Once on their website, you’ll notice how straightforward it is to secure a domain:

 Simply input your desired name and check its availability.

 If it’s free, select your preferred domain extension (options like .com, .es, .dev, among others).

 Choose the duration, whether it’s for a year, two, or more.

AWS — S3

1. Create an AWS account


To create an account you just have to go to https://aws.amazon.com/

2. Search for S3
Navigate to the “Services” section and in the search bar type “S3” and
press enter. Amazon S3 will appear as the first option in the list of
services.

3. Create a S3 Bucket
AWS S3 is perfect for storing files affordably. When your website
consists of client-side code only, you can set up S3 for hosting a static
website easily.

 Click on the “Create bucket” button.


 Bucket Name: THIS NAME SHOULD MATCH THE DOMAIN NAME EXACTLY. So if you bought a domain called, for example, “anaquirosa.com”, the bucket name should be “anaquirosa.com”.
 AWS Region: Decide on the region where you want your bucket to
reside. From the dropdown menu, select your desired region. It’s
important to choose a region that aligns with your specific
requirements, such as data sovereignty or proximity to your target
audience.
 Once you have chosen the bucket name and region, proceed with the
creation of your bucket. This will establish a dedicated storage space
for your objects within the selected region, allowing you to store and
manage your data effectively using Amazon S3.
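If you are comfortable with the AWS CLI, the same bucket can be created with a command roughly like the following; the bucket name and region are placeholders, so adjust them to your own domain and chosen region.

# Create the bucket; the name must exactly match your domain
aws s3api create-bucket \
  --bucket anaquirosa.com \
  --region eu-west-1 \
  --create-bucket-configuration LocationConstraint=eu-west-1
# (For us-east-1, omit the --create-bucket-configuration option.)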

In my case, the bucket name already exists because I created it before,
but you shouldn’t have any issues.
As you scroll down you have to uncheck the option for Block all public
access. Typically, it’s not advised to do so, as indicated by the warning
prompt you’ll receive upon disabling it. However, since you’re crafting a
website that you want to be accessible worldwide, turning this off is
suitable for your purpose.

 Proceed by utilising the default settings for the remaining bucket
configurations.
 Then proceed to click on “Create bucket”
By following these steps, you’ll create an S3 bucket that is accessible to
the public, allowing you to host and share your website’s content
effectively.

4. Upload the content to the new Bucket


To upload your website content to the newly created bucket, follow these steps:

 Click on the name of your bucket in the S3 management console.

 Look for the “Upload” button and click on it. This will open the upload interface.
 Choose whether you need to upload a folder or individual files. If you
have folders containing your website content, click on “Add
folder.” Otherwise, if you only have specific files, click on “Add
files.”
 Browse and select all the necessary files and folders from your local
machine that make up your website, including files like index.html,
CSS files, and images.
 After selecting all the relevant files, click on the “Upload” button
located at the bottom right corner.
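For larger sites, uploading from the CLI can be quicker than the console. A minimal sketch, assuming your site lives in a local folder called website/ and the bucket name matches your domain:

# Recursively upload the local site folder to the bucket
aws s3 sync ./website s3://anaquirosa.com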
5. Enable Web Hosting

 Access the Properties menu of your bucket.


 Scroll down to the bottom and click on the “Edit” button. This will
allow you to modify the properties of your bucket.
 Look for the option to enable static website hosting and click on it.
 Enter the name of the index file for your project, such
as “index.html.” This file will be loaded as the default page for your
website.
 If you have an error file, you can also specify its name under
the “Error document” option. This file will be displayed if any errors
occur while accessing your website.

After entering the necessary information, scroll down to the end of the
page and click “Save changes” located at the bottom right corner.
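The same configuration can also be applied from the CLI, roughly as follows; the bucket name and error page name are placeholders.

# Turn on static website hosting with index and error documents
aws s3 website s3://anaquirosa.com/ \
  --index-document index.html \
  --error-document error.html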

6. Create bucket policies


 Navigate to the Permissions section in your bucket menu.
 Click on the “Edit” button located under the Bucket Policy section.
 Copy and paste the following JSON into the policy editor, ensuring to
replace “your-bucket-name” with the actual name of your S3 bucket.
You can find the Bucket ARN above the editor.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}

 Remember not to remove the “/*” after the bucket name in the
JSON policy. This ensures that all objects within the bucket are
selected to be made public.

 Finally, click on the “Save changes” option at the end to apply the
bucket policy.
It is important to create a bucket policy granting public access to your files; otherwise, accessing the content of your website would not be possible. Instead, you would encounter a “403 Forbidden” message.
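If you saved the JSON above to a local file, the policy can also be applied from the CLI, roughly like this; the file name and bucket name are placeholders.

# Apply the public-read policy saved in policy.json to the bucket
aws s3api put-bucket-policy \
  --bucket anaquirosa.com \
  --policy file://policy.json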

7. Test that your website works


 Navigate to the Properties tab of the bucket.
 Scroll down to the bottom of the page to Static website hosting and click on the link.

 If everything went well, you should see your website displayed in the browser.

Congratulations on successfully hosting your website in S3 with public access! However, to elevate its professionalism, it would be even more impressive if it were linked to a custom domain. We’ve previously purchased a domain, and now we’ll utilise it to enhance the final result.

Add your Domain Name


1. Create a Hosted Zone
A public hosted zone, the one you’ll be dealing with, manages internet
traffic. On the other hand, a private hosted zone governs traffic within an
AWS Virtual Private Cloud (VPC).

 Navigate to the “Services” section and in the search bar type “Route
53” and press enter. Route 53 will appear as the first option in the list
of services.

 Click on “Hosted zones” and then on “Create hosted zone”.
 Domain name: Enter the domain you purchased from the third-party
provider, select “Public hosted zone”, then click “Create hosted
zone”.

After creating your public hosted zone, you will find 4 listed name
servers.

Take a note of these for later!
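You can also pull the name servers from the CLI once you know the hosted zone ID; the ID below is a placeholder for your own.

# List the four name servers assigned to the hosted zone
aws route53 get-hosted-zone \
  --id Z0123456789ABCDEFGHIJ \
  --query 'DelegationSet.NameServers'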

2. DNS for your current domain provider


Go to the DNS settings for your current domain provider (in my case it is Hostinger). Find your name server settings and replace them with the name servers from Route 53!
If you are doing it with Hostinger, follow these steps:

 Click on Manage.

 Go to “DNS / Nameservers” and then to “Change Nameservers”.

 Copy the name servers you found in the previous step in Route 53, paste them one by one, and then click on “Save”.

3. Create a Record to point to the S3
Having set up a public hosted zone, the next step is creating a record
dictating how traffic should be directed when visitors access your
domain name. For that follow these steps:
Enter your “Hosted zone” and click on “Create record”

 Record name: Leave it blank.


 Record type: A — Routes traffic to an IPv4 address and some AWS
resources.
 Alias: Switch this on. Enabling an alias allows you to direct traffic to
various AWS resources such as S3, CloudFront, Elastic Beanstalk,
and more.

About Route traffic:


1. Alias to S3 website endpoint.
2. Choose your region.
3. This menu should auto-fill with your S3 website.

If nothing appears here, it might be because your bucket is not named
the same as your domain. In that case, you’ll have to recreate the bucket
with your domain’s exact name.
And finally:
Routing policy: Simple routing.
Evaluate target health: YES.
Click on “Create records”.
Your changes may take up to 60 seconds to become active. Once the Status switches from PENDING to INSYNC, you’re all set to test out your modifications.
Let’s run a test! If everything went well, entering your domain name into a browser (like anaquirosa.com) should lead Route 53 to direct you to the S3 website. This means you should see your website!!
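If you want to verify the DNS side from a terminal rather than a browser, a quick lookup should now resolve your domain through the new alias record; the domain shown is the author’s example, so use your own.

# Check that the domain resolves via the Route 53 alias record
dig anaquirosa.com +short
nslookup anaquirosa.com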

Establishing a Secure Connection


Now the final step involves establishing a secure connection (HTTPS)
with a TLS/SSL certificate. This will help eliminate the “Not
secure” message from your browser. This is very important because a
secure connection will reassure visitors that they haven’t landed on a
suspicious or unreliable website.

“Not secure” alert

For that we are going to use AWS Certificate Manager.


Generate a public TLS/SSL Certificate
 Navigate to the “Services” section and in the search bar
type “Certificate Manager” and press enter.

 Make sure to change your region to us-east-1 (N. Virginia) for this section. Creating a certificate in any other region will render it unusable with CloudFront, where you will ultimately need it.

 Click on “Request a certificate”

 Select “Request a public certificate” and click on NEXT.


 Fully qualified domain name: Enter your domain
 Leave the rest as default and then click “Request”.
The request was successful, but the certificate will remain in “pending validation” status until you validate it through DNS. To do that, click on “View certificate”.

Before the certificate can be issued, Amazon requires confirmation of
your domain ownership and your ability to modify DNS settings (within
Route 53).

For that we need to :


 Click on “Create records in Route 53”
 Select your domain and click on “Create records”
If the record creation was completed without any issues, you’ll be
notified with a success message
Now you have to:

 Navigate to “Route 53” again and go to your “hosted zone”


 You’ll notice a fresh CNAME record generated by Certificate
Manager among the listings.
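DNS validation can take a few minutes. If you would rather watch the status from the CLI than keep refreshing the console, something like this works; the certificate ARN is a placeholder, and remember the us-east-1 region.

# Check whether the certificate has moved from PENDING_VALIDATION to ISSUED
aws acm describe-certificate \
  --certificate-arn arn:aws:acm:us-east-1:123456789012:certificate/example-id \
  --region us-east-1 \
  --query 'Certificate.Status'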

Congrats! You have a TLS/SSL certificate
Create a CloudFront Distribution
Your website’s files are sitting in S3, but here’s the catch: certificates
don’t work directly with S3 buckets. What you’ll need is
a CloudFront distribution to link to that S3 bucket. Then, you apply
the certificate to this CloudFront setup.
CloudFront is Amazon’s content delivery network (CDN) that speeds up
content delivery worldwide by storing it closer to users. It’s fantastic for
videos and images, making them load faster. If your website is basic or
has small files, you might not notice a huge difference in speed. But
using CloudFront is essential to apply the TLS/SSL certificate you made
earlier.

 Navigate to CloudFront.

 Click on “Create distribution”.


 Origin domain: Click on the field and your S3 bucket containing your website files should appear in the list.

 Click on “Use website endpoint” and AWS will update the
endpoint for you.
 Scroll down until the section “Default cache behaviour”
 Viewer protocol policy: Redirect HTTP to HTTPS

Scroll down to “Web Application Firewall (WAF)”

 Select “Do not enable security protections”


Scroll down to “Settings”
 Alternate domain name (CNAME): Add item and enter your
domain name (in my case “anaquirosa.com”)

 Custom SSL certificate: Choose the certificate you created earlier by clicking on it. Remember, if you created it in a region other than us-east-1 (N. Virginia), it won’t appear in this list.

 Default root object: Type “index.html” (your default homepage) and then click “Create distribution”.
It can take a few minutes for the CloudFront distribution to finish deploying, even if it initially shows “Successfully created” at the top of the page. You will know it’s finished when the “Last modified” value displays a date and time.
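You can also poll the distribution from the CLI; its status switches from “InProgress” to “Deployed” when it is ready. The distribution ID below is a placeholder.

# Check whether the distribution has finished deploying
aws cloudfront get-distribution \
  --id E1234567890ABC \
  --query 'Distribution.Status'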

To confirm that everything is functioning correctly with CloudFront and the TLS/SSL certificate:
 Copy the Distribution domain name.

 Open a new browser tab and paste that address into the navigation bar.
If everything went well, you should now notice the padlock icon in your browser (or something similar, depending on the browser), signaling that you’re securely connected via the certificate configured in Certificate Manager.

Update Route 53 to direct to the CloudFront Distribution.


Right now, Route 53 sends traffic to the S3 bucket. We want it to send
traffic to the CloudFront distribution, which then leads to S3. For that:

 Navigate to “Route 53” > “Hosted zones”


 Select the A record and then click “Edit record”

 Route traffic to: Alias to Cloudfront distribution.
 Choose Region: This option is selected for you and grayed out.
 Choose your distribution (it should automatically populate in the
third dropdown)
 Click “Save”.

You did it!!
If everything worked, you should be able to navigate to your domain
name and have it load your website on a secure connection!!
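As a final check from the terminal, you can confirm that the domain serves over HTTPS and that CloudFront is sitting in front of it; the domain shown is the author’s example, so substitute your own.

# Fetch only the response headers over HTTPS
curl -I https://anaquirosa.com
# Look for a 200 status and CloudFront-related headers such as "via" and "x-cache"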
