Unit 2 Notes
DIGITAL NOTES
ON
CLOUD COMPUTING
III B.TECH – II SEM
Prepared by SIET
CLOUD COMPUTING NOTES
UNIT II
Cloud computing fundamentals: Motivation for Cloud Computing, The Need for Cloud
Computing, Definition of Cloud computing, Principles of Cloud computing, Five Essential
Characteristics, Four Cloud Deployment Models, on demand services like Elastic resource pooling
using Amazon Elastic Compute Cloud (EC2) as example, Rapid elasticity using Amazon EBS,
Amazon EFS, Amazon S3, Amazon LEX, Amazon Lambda, overview of Docker CLI commands, and
cloud deployment using Docker.
• On the other hand, it is easy and handy to get the required computing power and resources from some
provider (or supplier) as and when needed, and to pay only for that usage. This calls for only a
reasonable operational spend, compared to the huge investment needed to buy the entire computing
infrastructure. This trade-off can be viewed as capital expenditure versus operational expenditure.
• One can hire the computing infrastructure only for the time it is required, paying the correspondingly
smaller sum, and remain free of it for the rest of the time.
• Cloud computing is a mechanism of hiring or obtaining the services of computing power or
infrastructure, at an organizational or individual level, to the extent required, and paying only for the
consumed services.
Example: electricity in our homes or offices.
• An added benefit of this computing model is that even if we lose our laptop, or our personal
computer or desktop system gets damaged in some crisis, our data and files stay safe and secure,
because they are not on our local machine but remotely located at the provider's machines.
• It is a solution growing fast in popularity, especially for storage, among individuals and small
and medium-sized enterprises (SMEs).
• Thus, cloud computing comes into focus as a much-needed subscription-based or pay-per-use service
model for offering computing to end users or customers over the Internet, thereby extending IT's
existing capabilities.
• The main reasons for the need and use of cloud computing are convenience and reliability.
• In the past, if we wanted to bring a file, we would have to save it to a Universal Serial Bus (USB)
flash drive, external hard drive, or compact disc (CD) and bring that device to a different place.
• Instead, saving a file to the cloud (e.g., using the cloud application Dropbox) ensures that we will be
able to access it from any computer that has an Internet connection. The cloud also makes it much
easier to share a file with friends, making it possible to collaborate over the web.
• While using the cloud, losing our data/files is much less likely. However, just like anything online,
there is always a risk that someone may try to gain access to our personal data; it is therefore
important to choose a strong password for access control and to pay attention to any privacy
settings of the cloud service that we are using.
The formal definition of cloud computing comes from the National Institute of Standards and
Technology (NIST): “Cloud computing is a model for enabling ubiquitous, convenient, on-demand
network access to a shared pool of configurable computing resources (e.g., networks, servers, storage,
applications, and services) that can be rapidly provisioned and released with minimal management effort
or service provider interaction.”
Cloud computing has five essential characteristics, which are shown in Figure 2.2. Readers can note the
word essential, which means that if any of these characteristics is missing, then it is not cloud
computing:
1. On-demand self-service: A consumer can unilaterally provision computing capabilities, such as server
time and network storage, as needed, automatically, without requiring human interaction with each
service provider.
2. Broad network access: Capabilities are available over the network and accessed through standard
mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones,
laptops, and workstations).
3. Resource pooling: The provider's computing resources are pooled to serve multiple consumers using a
multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned
according to consumer demand.
4. Rapid elasticity: Capabilities can be rapidly and elastically provisioned, in some cases automatically,
to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available
for provisioning often appear to be unlimited and can be purchased in any quantity at any time.
5. Measured service: Cloud systems automatically control and optimize resource use by leveraging a
metering capability at some level of abstraction appropriate to the type of service (e.g., storage,
processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and
reported, providing transparency for both the provider and consumer of the utilized service.
Deployment models are also called types of models. These deployment models describe the ways in
which cloud services can be deployed or made available to customers, depending on the
organizational structure and the provisioning location. One can understand it in this manner too: cloud
(Internet)-based computing resources, that is, the locations where data and services are acquired and
provisioned to customers, can take various forms. Four deployment models are usually
distinguished, namely,
(1) Public
(2) Private
(3) Community, and
(4) Hybrid cloud service usage
(1) Public Cloud
Public clouds are managed by third parties that provide cloud services over the Internet to the
public; these services are offered on pay-as-you-go billing models.
They offer solutions for minimizing IT infrastructure costs and are a good option for handling
peak loads on the local infrastructure.
Public clouds are the go-to option for small enterprises, which can start their businesses without
large upfront investments by completely relying on public infrastructure for their IT needs.
The fundamental characteristic of public clouds is multitenancy. A public cloud is meant to serve
multiple users, not a single customer. Each user requires a virtual computing environment that is
separated, and most likely isolated, from other users.
(2) Private Cloud
Private clouds are distributed systems that work on private infrastructure and provide users with
dynamic provisioning of computing resources. Instead of a pay-as-you-go model, private clouds may
use other schemes that meter usage of the cloud and proportionally bill the different departments or
sections of an enterprise. Private cloud providers include HP Data Centers, Ubuntu, Elastic Private
Cloud, Microsoft, etc.
Private Cloud
Advantages of using a private cloud are as follows:
1. Customer information protection: In the private cloud, security concerns are fewer, since customer
data and other sensitive information do not flow out of the private infrastructure.
2. Infrastructure ensuring SLAs: Private cloud provides specific operations such as appropriate
clustering, data replication, system monitoring, and maintenance, disaster recovery, and other uptime
services.
3. Compliance with standard procedures and operations: Specific procedures have to be put in
place when deploying and executing applications according to third-party compliance standards.
This is not possible in the case of the public cloud.
(3) Community Cloud
Community clouds are distributed systems created by integrating the services of different clouds to
address the specific needs of an industry, a community, or a business sector. But sharing responsibilities
among the organizations is difficult.
In the community cloud, the infrastructure is shared between organizations that have shared concerns or
tasks. An organization or a third party may manage the cloud.
Community Cloud
Thanks to community clouds, we may share cloud resources, infrastructure, and other capabilities
between different enterprises.
Disadvantages of using Community cloud are:
1. Not all businesses should choose community cloud.
2. Adoption of data across the community happens only gradually.
3. It's challenging for corporations to share duties.
(4) Hybrid Cloud
A hybrid cloud combines private and public cloud infrastructure, allowing data and applications to
move between the two environments.
3. Security: The most important thing is security. A hybrid cloud is considered safe and secure because
it works on a distributed system network.
Disadvantages of using a Hybrid cloud are:
1. It’s possible that businesses lack the internal knowledge necessary to create such a hybrid
environment. Managing security may also be more challenging. Different access levels and security
considerations may apply in each environment.
2. Managing a hybrid cloud may be more difficult. With all of the alternatives and choices available
today, not to mention the new PaaS components and technologies that will be released every day
going forward, public cloud and migration to public cloud are already complicated enough. It could
just feel like a step too far to include hybrid.
Cloud computing is not a single piece of technology like a microchip or a cellphone. It is a system
primarily composed of three services:
(1) Infrastructure as a Service (IaaS):
• IaaS is also known as Hardware as a Service (HaaS). It is a computing infrastructure managed over
the internet. The main advantage of using IaaS is that it helps users to avoid the cost and
complexity of purchasing and managing the physical servers.
Characteristics of IaaS
Companies providing IaaS are DigitalOcean, Linode, Amazon Web Services (AWS), Microsoft Azure,
and Google Compute Engine (GCE).
Example: AWS provides full control of virtualized hardware, memory, and storage. Servers, firewalls, and
routers are provided, and a network topology can be configured by the tenant.
(2) Platform as a Service (PaaS):
• A PaaS cloud computing platform is created for the programmer to develop, test, run, and manage
applications.
Characteristics of PaaS
• Builds on virtualization technology, so resources can easily be scaled up or down as per the
organization's need.
Companies offering PaaS are AWS Elastic Beanstalk, Windows Azure, Heroku, Force.com, Google App
Engine, Apache Stratos, Magento Commerce Cloud, and OpenShift.
Example of PaaS: The World Wide Web (WWW) can be considered the operating system
for all our Internet-based applications. However, one has to understand that we will always need a local
operating system on our computer to access web-based applications.
The basic meaning of the term platform is that it is the support on which applications run or give results to
the users. For example, Microsoft Windows is a platform. But a platform does not have to be an operating
system; Java is a platform even though it is not an operating system. Through cloud computing, the web is
becoming a platform. With trends (applications) such as Office 2.0, more and more applications that were
originally available on desktop computers are now being converted into web (cloud) applications. Word
processors like Buzzword and office suites like Google Docs are now available in the cloud, like their
desktop counterparts. All these trends in providing applications via the cloud are turning cloud computing
into a platform, or making it act as a platform.
(3) Software as a Service (SaaS):
• SaaS is also known as "on-demand software". It is software in which the applications are hosted
by a cloud service provider. Users can access these applications with the help of an Internet
connection and a web browser.
Characteristics of SaaS
• Users are not responsible for hardware and software updates. Updates are applied automatically
Companies providing SaaS are BigCommerce, Google Apps, Salesforce, Dropbox, ZenDesk, Cisco
WebEx, Slack, and GoToMeeting.
Example of SaaS: The simplest thing that any computer does is allow us to store and retrieve information.
We can store our family photographs, our favorite songs, or even movies on it, and this is also the most
basic service offered by cloud computing. Let us look at the example of a popular application called Flickr
to illustrate the meaning of this section.
While Flickr started with an emphasis on sharing photos and images, it has emerged as a great place to
store those images. In many ways, it is superior to storing the images on your computer:
1. First, Flickr allows us to easily access our images no matter where we are or what type of device we are
using. While we might upload the photos of our vacation from our home computer, later, we can
easily access them from our laptop at the office.
2. Second, Flickr lets us share the images. There is no need to burn them to a CD or save them on a flash
drive. We can just send someone our Flickr address to share these photos or images.
3. Third, Flickr provides data security. By uploading the images to Flickr, we are providing ourselves with
data security by creating a backup on the web. And, while it is always best to keep a local copy as well,
either on a computer, a CD, or a flash drive, the truth is that we are far more likely to lose the images
we store locally than Flickr is to lose them.
AMAZON EC2
• EC2, short for Elastic Compute Cloud, is a cloud computing service offered by the Cloud Service
Provider AWS.
• Amazon EC2 (Elastic Compute Cloud) is a web service that provides scalable virtual servers in the
cloud.
• It allows you to run applications, host websites, and process data with flexibility. You can choose
from various instance types optimized for compute, memory, storage, or GPU needs.
• Instances are launched within a VPC (Virtual Private Cloud) and can use Elastic IPs for static public
IP addresses.
• Security is managed using security groups, key pairs, and IAM roles.
• EC2 supports on-demand, reserved, and spot pricing, making it cost-effective. Persistent storage is
provided via Elastic Block Store (EBS). Auto Scaling and Elastic Load Balancing ensure
performance during traffic spikes.
• Monitoring tools like CloudWatch help track performance and usage. EC2's versatility makes it
suitable for hosting, analytics, machine learning, and development tasks.
• Amazon EC2 can be used to launch as many or as few virtual servers as you need, configure security
and networking, and manage storage. You can add capacity (scale up) to handle compute-heavy tasks,
such as monthly or yearly processes, or spikes in website traffic. When usage decreases, you can
reduce capacity (scale down) again.
Different Amazon EC2 instance types are designed for certain activities. Consider the unique requirements
of your workloads and applications when choosing an instance type. This might include needs for
computing, memory, or storage. The AWS EC2 instance types are as follows:
General Purpose Instances
Compute Optimized Instances
Memory-Optimized Instances
Storage Optimized Instances
Accelerated Computing Instances
1. General Purpose Instances
Provide balanced resources for a wide range of workloads. Suitable for web servers,
development environments, and small databases. Examples: T3, M5 instances.
2. Compute Optimized Instances
Provide high-performance processors for compute-intensive applications. Ideal for high-performance
web servers, scientific modeling, and batch processing. Examples: C5, C6g instances.
3. Memory-Optimized Instances
Provide high memory-to-CPU ratios for large data sets. Perfect for in-memory databases, real-time
big data analytics, and high-performance computing (HPC). Examples: R5, X1e instances.
4. Storage Optimized Instances
Provide instances optimized for high, sequential read and write access to large data
sets. Best for data warehousing, Hadoop, and distributed file systems. Examples: I3, D2 instances.
5. Accelerated Computing Instances
Provide hardware accelerators or co-processors for graphics processing and parallel
computations. Ideal for machine learning, gaming, and 3D rendering. Examples: P3, G4 instances.
Amazon EC2 Step-by-step Procedure:
1. Open "EC2" in Services.
2. Launch an instance from the EC2 dashboard.
3. Provide a name for the instance.
4. Select the operating system image, also known as an AMI (e.g., "Ubuntu").
5. Click on "Create a key pair".
6. Give the key pair a name.
7. A key file will be downloaded. Save it to connect later with SSH or PuTTYgen.
8. In network settings, create a security group and enable all three: SSH, HTTP, HTTPS.
9. Configure storage to 15 GB (the storage size is our choice).
10. Click on "Launch instance".
11. The instance is initiated successfully.
12. Go back to the instances dashboard, where the instance is initialized and running; right-click on
the instance and click on "Connect".
13. On the "Connect to instance" page, click on Connect.
14. EC2 Instance Connect opens a console.
15. Type sudo apt update
16. Type sudo apt install nginx
17. When asked "Do you want to continue?", type yes.
18. Copy and paste the public IP address into a browser; it opens the nginx welcome page.
If nginx does not open:
19. The site can't be reached, so we need to change the security settings.
20. Go back to the EC2 instance dashboard; at the bottom there is a Security tab.
21. Click on the Security tab and select the security group link.
22. The launch wizard group is shown; edit the inbound rules.
23. Add the rules: set the rule to "All traffic" and the source to "Anywhere-IPv4", and save the rules.
24. After saving the changes, connect again and the console page will open.
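The same launch can also be scripted. Below is a minimal sketch using the AWS SDK for Python (boto3); the region, AMI ID, key pair name, and security group ID are placeholders you would substitute with your own values.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

    # Launch one Ubuntu instance (the AMI ID is a placeholder; look up the
    # current Ubuntu AMI for your region)
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",            # hypothetical AMI ID
        InstanceType="t2.micro",
        KeyName="my-key-pair",                      # key pair from step 5
        SecurityGroupIds=["sg-0123456789abcdef0"],  # group allowing SSH/HTTP/HTTPS
        MinCount=1,
        MaxCount=1,
    )
    print("Launched instance:", response["Instances"][0]["InstanceId"])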
Elastic Block Storage (EBS): From the aforementioned list, EBS is a block-type, durable and persistent
storage that can be attached to EC2 instances for additional storage. Unlike EC2 instance store volumes,
which are suitable for holding temporary data, EBS volumes are highly suitable for essential and long-term
data. EBS volumes are specific to availability zones and can only be attached to instances within the
same availability zone.
EBS can be created from the EC2 dashboard in the console, as well as in Step 4 of the EC2 launch. Just
note that when creating EBS with EC2, the EBS volumes are created in the same availability zone as the
EC2 instance; however, when provisioned independently, users can choose the AZ in which the EBS
volume is required.
Features of EBS:
• Scalability: EBS volume sizes and features can be scaled as per the needs of the system. This can be
done in two ways:
• Take a snapshot of the volume and create a new volume from the snapshot with the new, updated
features.
• Update the existing EBS volume from the console.
• Backup: Users can create snapshots of EBS volumes that act as backups.
• Snapshot can be created manually at any point in time or can be scheduled.
• Snapshots are stored on AWS S3 and are charged according to the S3 storage charges.
• Snapshots are incremental in nature.
• New volumes across regions can be created from snapshots.
• Encryption: Encryption can be a basic requirement when it comes to storage, for example due to
government or regulatory compliance. EBS offers an AWS-managed encryption feature.
• Users can enable encryption when creating EBS volumes by clicking on a checkbox.
• Encryption keys are managed by the Key Management Service (KMS) provided by AWS.
• Encrypted volumes can only be attached to selected instance types.
• Encryption uses the AES-256 algorithm.
• Snapshots from encrypted volumes are encrypted, and similarly, volumes created from
snapshots are encrypted.
• Charges: Unlike AWS S3, where you are charged for the storage you consume, AWS charges you
for the storage you provision. For example, if you use 1 GB of storage in a 5 GB volume, you are
still charged for the full 5 GB EBS volume.
• EBS charges vary from region to region. EBS volumes are independent of the EC2 instance they
are attached to. The data in an EBS volume will remain unchanged even if the instance is rebooted
or terminated.
RAPID ELASTICITY
• Rapid elasticity is the ability to dynamically scale the services provided directly to customers' need
for space and other services. It is one of the five fundamental aspects of cloud computing.
Horizontal Scalability: Adding or removing nodes, servers, or instances to or from a pool, such as a
cluster or a farm. Most implementations of scalability use the horizontal method, as it is the easiest to
implement, especially in the current web-based world we live in.
Vertical Scalability: Adding or removing resources to an existing node, server, or instance to increase
its capacity. Vertical scaling is less dynamic because it requires reboots of systems, and sometimes
adding physical components to servers. Example: in EC2, changing a t2.micro to a t2.medium or
t2.large.
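As a sketch, vertical scaling can be automated with boto3 (the instance ID is a placeholder); the instance must be stopped before its type can be changed:

    import boto3

    ec2 = boto3.client("ec2")
    instance_id = "i-0123456789abcdef0"  # placeholder

    # Vertical scaling: stop the instance, change its type, start it again
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])
    ec2.modify_instance_attribute(InstanceId=instance_id,
                                  InstanceType={"Value": "t2.medium"})
    ec2.start_instances(InstanceIds=[instance_id])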
Suppose 10 servers are needed for a three-month project. A cloud provider can provision these servers
within minutes, and the company pays only a small monthly fee.
Compare this to before cloud computing became available. Say a customer comes to us with the same
opportunity, and we have to move quickly to fulfil it. We would have to buy 10 more servers at a huge
capital cost.
When the project is complete at the end of three months, we would be left with servers we no longer
need. That is not economical, which could mean we have to forgo the opportunity.
Because cloud services are much more cost-efficient, we are more likely to take this opportunity, giving
us an advantage over our competitors.
Let's say we are an eCommerce store. We are probably going to get more seasonal demand around
Christmas time. Using cloud computing, we can automatically spin up new servers as demand grows.
New buyers will register new accounts, which puts a lot of load on our servers for the campaign's
duration compared to most times of the year.
Existing customers will also revisit abandoned carts and old wish lists, or try to redeem accumulated
points. An elastic system monitors the load on the CPU, memory, bandwidth of the server, etc. When
the load reaches a certain upper threshold, we can automatically add new servers to the pool to help
meet demand. When demand drops again, we may have another, lower limit below which we
automatically shut down servers. In this way we can automatically scale our resources in and out to
meet current demand.
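A minimal sketch of such a threshold rule with boto3, assuming an existing Auto Scaling group named web-asg; the target-tracking policy below asks AWS to keep the group's average CPU near 70%, adding or removing instances as the load crosses that target:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Keep average CPU of the group near 70%; AWS scales the group
    # out and in automatically around this target.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",   # assumed existing group
        PolicyName="cpu-target-70",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 70.0,
        },
    )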
For streaming services, Netflix is probably the best example to use here. When the streaming service
released all 13 episodes of House of Cards' second season, viewership jumped to 16% of Netflix's
subscribers, compared to just 2% for the first season's premiere weekend.
Those subscribers streamed one of those episodes within seven to ten hours that Friday. At the time
(February 2014), Netflix had over 50 million subscribers, so a 16% jump in viewership means that over
8 million subscribers streamed a portion of the show within a single day.
Netflix engineers have repeatedly stated that they take advantage of the Elastic Cloud services by AWS
to serve multiple such server requests within a short period and with zero downtime.
(1) SSD:
• Ratio of 3 IOPS/GB, with up to 10,000 IOPS.
• Both volume types have low latency.
• These volumes are for IOPS-intensive and throughput-intensive workloads that require extremely
low latency, or for mission-critical applications.
• Designed for I/O-intensive applications such as large relational or NoSQL databases.
• Price: $0.125/GB/month.
(2) HDD:
(a) Throughput Optimized HDD (st1)
• st1 is backed by hard disk drives and is ideal for frequently accessed, throughput-intensive workloads.
• It cannot be a boot volume; it can provide up to 500 IOPS per volume.
• Price: $0.045/GB/month.
(b) Cold HDD (sc1)
• Lowest-cost HDD volume, designed for less frequently accessed workloads.
(c) Magnetic (standard)
• Lowest cost per GB of all EBS volume types that is bootable.
Step-by-step procedure for creating a volume using EBS in the same zone:
Create an empty EBS volume and attach it to a running instance in the same availability zone as the
EC2 instance.
Create an EBS volume from a snapshot and attach it to a running instance, in the same availability
zone or from another zone.
(1) To create an EBS volume using the console in the same availability zone
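Since the console screens are not reproduced here, the same steps can be sketched with boto3 (IDs are placeholders; note that the volume and the target instance must be in the same AZ):

    import boto3

    ec2 = boto3.client("ec2")

    # Create a 10 GiB volume in the same AZ as the target instance
    volume = ec2.create_volume(AvailabilityZone="us-east-1a",
                               Size=10, VolumeType="gp2")
    ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

    # Attach it to a running instance as an additional block device
    ec2.attach_volume(VolumeId=volume["VolumeId"],
                      InstanceId="i-0123456789abcdef0",  # placeholder
                      Device="/dev/sdf")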
EBS snapshot:
• EBS snapshots are point-in-time images/copies of your EBS volume.
• Any data written to the volume after the snapshot process is initiated will not be included in the
resulting snapshot (but will be included in future, incremental updates).
• Per AWS account, up to 5,000 EBS volumes can be created.
• EBS snapshots are stored on S3; however, you cannot access them directly, you can only access them
through EC2 APIs.
• While EBS volumes are AZ-specific, snapshots are Region-specific.
• To migrate an EBS volume from one AZ to another, create a snapshot (Region-specific) and create an
EBS volume from that snapshot in the intended AZ.
• From a snapshot, you can create an EBS volume of the same or larger size than the original volume
from which the snapshot was initially created.
• You can take a snapshot of a non-root EBS volume while the volume is in use on a running EC2
instance.
• This means you can still access the volume while the snapshot is being processed.
• However, the snapshot will only include data that is already written to the volume.
• The snapshot is created immediately, but it may stay in pending status until the full snapshot is
completed. This may take a few hours, especially for a first-time snapshot of a volume.
• During the period when the snapshot status is pending, you can still access the volume (non-root),
but I/O might be slower because of the snapshot activity.
• While in the pending state, an in-progress snapshot will not include data from ongoing reads and
writes to the volume.
• To take a complete snapshot of your non-root EBS volume, stop or unmount the volume.
• To create a snapshot of a root EBS volume, you must stop the instance first and then take the snapshot.
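A boto3 sketch of snapshotting a volume and then restoring it in another AZ (IDs are placeholders):

    import boto3

    ec2 = boto3.client("ec2")

    # Point-in-time snapshot of an existing volume
    snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0",
                               Description="backup before migration")
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

    # Recreate the volume from the snapshot in a different AZ
    ec2.create_volume(SnapshotId=snap["SnapshotId"],
                      AvailabilityZone="us-east-1b")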
Advantages
Rapid elasticity in cloud computing provides an array of advantages to businesses hoping to scale their
resources.
High availability and reliability. With rapid elasticity, you can enjoy a remarkably consistent,
predictable experience. Cloud providers take care of scaling behind the scenes, keeping the system
running smoothly and fast.
Growth-supporting. You can more easily adopt a growth-oriented mindset with rapid elasticity.
With elastic cloud computing, your IT infrastructure becomes more agile and nimble, as well as more
prepared to acquire new users and customers.
Automation capability. Rapid elasticity in cloud computing uses increased automation in your IT
environment, which has many benefits. For example, you can free up your IT staff to focus on core
business functionality rather than scalability.
Cost-effective. Cloud providers offer resources on a pay-per-use basis, so you only pay for what you
actually use. Adding new infrastructure components to prepare for growth becomes convenient with
a pay-as-you-expand model.
Disadvantages
Though rapid elasticity in cloud computing provides a multitude of benefits, it also introduces a few
complexities you should keep in mind.
Learning curve. Rapid elasticity takes some time and effort to fully comprehend and therefore
benefit from. Your staff may need to familiarize themselves with new programming languages, cloud
platforms, automation tools, etc.
Security. Since elastic systems only run for a short period, you must rethink how you handle user
authentication, incident response, root cause analysis, and forensics when you are dealing with
security. Luckily, experts like Synopsys provide accessible and reliable cloud security solutions to
simplify this process.
Cloud lock-in. Rapid elasticity is a big selling point for public cloud providers, but vendors can lock
you into their service. Do your research before settling on a public cloud provider to ensure you fully
understand its offerings and your contract.
AMAZON EFS
Amazon Elastic File System (EFS) is a scalable, fully managed, shared file system that can be mounted
by multiple EC2 instances at the same time. Use cases of EFS include:
1. Secure file sharing: You can share your files in a secure manner, faster and easier, while ensuring
consistency across the system.
2. Web hosting: Well suited for web hosting, where multiple web servers can access the file system and
store data; EFS also scales as the incoming data increases.
3. Modernize application development: You can share data from AWS resources like ECS, EKS,
and any serverless web application in an efficient manner and without much management overhead.
4. Machine Learning and AI workloads: EFS is well suited for large-data AI applications where multiple
instances and containers access the same data, improving collaboration and reducing data
duplication.
Note: Select a Linux OS while doing this EFS exercise, use the same security group and key pair for both
instances, and select a subnet ID for the second instance different from the first instance's when creating it.
(1) Creating the first instance; note down the availability zone and security group
• Create the first instance by launching an instance.
• Name the first instance to be launched efs1.
• Select the operating system as Amazon Linux.
• Create a key pair and name it nfs.
• Create the key pair.
• Configure the network settings.
• Provide all the permissions as shown by checking the boxes.
• Configure the storage from 8 GB to 10 GB.
• Now launch the instance.
• Instance 1, named efs1, is successfully launched.
(2) Creating the second instance with the same security group and key pair, and a different availability zone
• Go back to the EC2 dashboard to create another instance.
• Check the availability zone and security group in another tab: for the second instance, the
availability zone should be different and the security group the same.
• Repeat the same process and create another instance, named efs2, the same as the previous instance,
using the same key and security group for instance 2.
• Check the security group in another tab so as to apply the same one to the instance 2 that we are
creating.
• Change the security group to the same launch wizard group as used in instance 1.
• Now edit the same for the second instance.
• Select the existing security group for instance 2.
• Launch instance 2.
• Instance 2 is also successfully launched.
• The availability zones are different.
(3) Adding a new security group rule with NFS
• The security groups are the same, so a rule added once is reflected for both instances.
• Select the first instance, efs1.
• Click on the Security tab in the dashboard of instance efs1, then click on the security group.
• Now select the inbound rules and edit them; in the edit inbound rules screen, click on "Add rule".
• Add the rule: select NFS, with "Anywhere-IPv4" as the source.
• Save the changes; the changes are saved successfully.
(4) Creating the EFS service
• Go back to the dashboard and search for EFS, then click on "Create file system".
• Provide a name for the file system, e.g., efsdemo; let the VPC be the default and the storage class be
Standard, and click on Customize.
• Click on Next and remove all the previously provided security groups under Network access.
• Apply the same security group as the EC2 instances' security group.
• Click on Next, then click on Create.
• The EFS is created successfully.
(5) Mounting the EFS on the instances from the console
• Go back to the instances.
• Now go to efs1 (instance 1), right-click the instance, and connect to the instance.
• On the "Connect to instance" page, click on Connect to establish the connection.
• The connection is established.
• After the connection is established, the same step must be repeated for the second instance, efs2, and
its connection established.
• After both connections are established, type all the commands below in the two instance consoles:
1) sudo su
2) mkdir efs
3) yum install -y amazon-efs-utils
• Go back to the AWS console; in Services, go back to the EFS service and select the created EFS,
i.e., efsdemo.
• Click on Attach to mount the EFS. We are mounting via DNS: copy the mount command shown
(of the form sudo mount -t efs <file-system-id>:/ efs) and paste it into both consoles.
(6) Creating EFS directory and files
• Type the commands in two consoles
- ls
- cd efs
• Now create a file in one of the EC2 instances such that it is reflected in the other instance as well.
For example, a file created in instance 1 must be reflected in instance 2.
• Type the commands in one console:
- touch file1 file2
- ls
• touch file1 file2 creates the files, and ls lists the created files.
• In the other instance (the instance where the touch command was not used), type ls. It shows the
created files file1 and file2.
• In that other instance, to remove a file, type the commands:
- rm file1
- ls
Check with the same ls command in the instance where we created file1 and file2: after the removal of
file1, it shows only file2. In this way, EFS can be shared among EC2 instances within a region.
AMAZON S3
Amazon Simple Storage Service (S3) is storage for the Internet. It is designed for large-capacity,
low-cost storage provision across multiple geographical regions.
Amazon S3 provides object storage; each object has its own unique identifier or key, for access
through web requests from any location.
Unlike EBS or EFS, S3 is not limited to EC2.
Files stored and protected in an S3 bucket can be accessed, directly or programmatically, by
customers of industries of all sizes.
To store your data in Amazon S3, you first create a bucket and specify a bucket name and AWS
Region. Then, you upload your data to that bucket as objects in Amazon S3. Each object has
a key (or key name), which is the unique identifier for the object within the bucket.
S3 provides features that you can configure to support your specific use case. For example, you can
use S3 Versioning to keep multiple versions of an object in the same bucket, which allows you to
restore objects that are accidentally deleted or overwritten.
Buckets and the objects in them are private and can be accessed only if you explicitly grant access
permissions. You can use bucket policies, AWS Identity and Access Management (IAM) policies,
access control lists (ACLs), and S3 Access Points to manage access.
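A minimal boto3 sketch of this bucket/object workflow (the bucket name is a placeholder and must be globally unique):

    import boto3

    s3 = boto3.client("s3", region_name="us-west-2")

    # Create a bucket in a specific Region
    s3.create_bucket(Bucket="my-example-bucket-12345",
                     CreateBucketConfiguration={"LocationConstraint": "us-west-2"})

    # Upload an object; the key uniquely identifies it within the bucket
    s3.put_object(Bucket="my-example-bucket-12345",
                  Key="notes/hello.txt",
                  Body=b"hello from the cloud")

    # Retrieve the object again using the same key
    obj = s3.get_object(Bucket="my-example-bucket-12345", Key="notes/hello.txt")
    print(obj["Body"].read())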
S3 Architecture
S3 Objects
The size of an object stored in an S3 bucket can range from 0 bytes to 5 TB.
• Each object is stored and retrieved via a unique key (ID or name).
• An object in AWS S3 is uniquely identified and addressed through:
- Service endpoint
- Bucket name
- Object key (name)
- Optionally, an object version
• Objects stored in an S3 bucket in a Region never leave that Region unless you specifically move
them to another Region or use Cross-Region Replication (CRR).
• A bucket owner can grant cross-account permissions to another AWS account to upload objects.
• We can grant S3 bucket/object permissions to:
- Individual users
- AWS accounts
- Everyone (make the resource public)
- All authenticated users
S3 Bucket Versioning
• Bucket versioning is an S3 bucket sub-resource used to protect against accidental object/data
deletion or overwrites.
• Versioning can also be used for data retention and archival.
• Once you enable versioning on a bucket, it cannot be disabled; however, it can be suspended.
• When enabled, bucket versioning protects existing and new objects and maintains their versions
as they are updated.
• Updating objects refers to PUT, POST, COPY, and DELETE actions on objects.
• When versioning is enabled and you try to delete an object, a delete marker is placed on the object.
• We can still view the object and the delete marker.
• If you reconsider deleting the object, you can delete the "delete marker" and the object will be
available again.
• We will be charged S3 storage costs for all object versions stored.
• We can use versioning with S3 lifecycle policies to delete older versions, or move them to a
cheaper S3 storage class (or Glacier).
• Bucket versioning states:
- Enabled
- Suspended
• Versioning applies to all objects in a bucket; it is not partially applied.
• Objects existing before versioning is enabled will have a version ID of "null".
• If you have a bucket that is already versioned and then suspend versioning, existing objects and their
versions remain as they are.
• However, they will not be versioned further by future updates while the bucket versioning
is suspended.
• New objects uploaded after suspension will have a version ID of "null".
• If the same key (name) is used to store another object, it will overwrite the existing one.
• An object deletion in a suspended-versioning bucket will only delete the object version with ID "null".
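The behaviour above can be exercised with boto3 (bucket and key are placeholders from the earlier sketch):

    import boto3

    s3 = boto3.client("s3")
    bucket = "my-example-bucket-12345"  # placeholder

    # Enable versioning (it can later be suspended, but never disabled)
    s3.put_bucket_versioning(Bucket=bucket,
                             VersioningConfiguration={"Status": "Enabled"})

    # With versioning on, a delete only places a delete marker on the object
    s3.delete_object(Bucket=bucket, Key="notes/hello.txt")

    # All versions and delete markers remain listable (and billable)
    versions = s3.list_object_versions(Bucket=bucket)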
S3 Cross-Region Replication
• Replication enables automatic, asynchronous copying of objects across Amazon S3 buckets.
• Buckets that are configured for object replication can be owned by the same AWS account or by
different accounts.
• You can replicate objects to a single destination bucket or to multiple destination buckets.
• The destination buckets can be in different AWS Regions or within the same Region as the source
bucket.
• To automatically replicate new objects as they are written to the bucket, use live replication, such as
Cross-Region Replication (CRR).
• To enable CRR, you add a replication configuration to your source bucket that specifies:
- The destination bucket or buckets where you want Amazon S3 to replicate objects
- An AWS Identity and Access Management (IAM) role that Amazon S3 can assume to replicate
objects on your behalf
Use Cases:
• Compliance – store data hundreds of miles apart.
• Lower latency – distribute data to regional customers.
• Security – create remote replicas managed by separate AWS accounts.
• CRR only replicates new PUTs: once S3 is configured, all new updates to the source bucket are
replicated.
• Versioning is required.
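A sketch of such a replication configuration via boto3; versioning must already be enabled on both buckets, and the bucket names and IAM role ARN are placeholders:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_replication(
        Bucket="source-bucket",  # placeholder source (versioning enabled)
        ReplicationConfiguration={
            "Role": "arn:aws:iam::123456789012:role/replication-role",  # placeholder
            "Rules": [{
                "ID": "replicate-all",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},  # empty filter: replicate the whole bucket
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::destination-bucket"},
            }],
        },
    )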
Cost-effective storage: S3 storage can be scaled to meet your growing data stores, and you'll never
have to make an investment upfront.
Backup and restoration: Secure, robust backup and restoration solutions are easy to build when you
combine S3 with other AWS offerings, including EBS, EFS, or S3 Glacier. These offerings enhance
your on-premises capabilities, while other offerings can help you meet compliance, recovery time,
and recovery point objectives.
Reliable disaster recovery: S3 storage, S3 Cross-Region Replication and additional AWS networking,
computing, and database services make it easy to protect critical applications, data, and IT systems. It
offers nimble recovery from outages, no matter if they are caused by system failures, natural disasters,
or human error.
Methodical archiving: S3 works seamlessly with other AWS offerings to provide methodical archiving
capabilities. S3 Glacier and S3 Glacier Deep Archive enable you to archive data and retire physical
infrastructure. There are three S3 storage classes you can use to retain objects for extended periods of
time at their lowest rates. S3 Lifecycle policies can be created to archive objects at any point within
their lifecycle, or you can upload objects to archival storage classes directly. S3 Object Lock meets
compliance regulations by applying retention dates to objects to prevent their deletion. And unlike a
tape library, S3 Glacier can restore any archived object within minutes.
AMAZON LEX
Amazon Lex is an AWS service for building conversational interfaces for applications using voice and
text. With Amazon Lex, the same conversational engine that powers Amazon Alexa is now available to
any developer, enabling you to build sophisticated, natural language chatbots into your new and existing
applications.
Amazon Lex provides the deep functionality and flexibility of natural language understanding (NLU) and
automatic speech recognition (ASR) so you can build highly engaging user experiences with lifelike,
conversational interactions, and create new categories of products.
1. Create a bot and configure it with one or more intents that you want to support. Configure the bot so
it understands the user's goal (intent), engages in conversation with the user to elicit information, and
fulfills the user's intent.
2. Test the bot. You can use the test window client provided by the Amazon Lex console.
3. Publish a version and create an alias.
4. Deploy the bot. You can deploy the bot on platforms such as mobile applications or messaging
platforms such as Facebook Messenger.
Before you get started, familiarize yourself with the following Amazon Lex core concepts and
terminology:
Bot – A bot performs automated tasks such as ordering a pizza, booking a hotel, ordering flowers, and
so on. An Amazon Lex bot is powered by Automatic Speech Recognition (ASR) and Natural Language
Understanding (NLU) capabilities. Each bot must have a unique name within your account. Amazon
Lex bots can understand user input provided as text or speech and converse in natural language. You
can create Lambda functions and add them as code hooks in your intent configuration to perform user
data validation and fulfillment tasks.
• Intent – An intent represents an action that the user wants to perform. You create a bot to support one
or more related intents. For example, you might create a bot that orders pizza and drinks. For each
intent, you provide the following required information:
• Intent name – A descriptive name for the intent, for example, OrderPizza. Intent names must be
unique within your account.
• Sample utterances – How a user might convey the intent. For example, a user might say "Can I
order a pizza please" or "I want to order a pizza".
How to fulfill the intent – How you want to fulfill the intent after the user provides the necessary
information (for example, place the order with a local pizza shop). We recommend that you create a
Lambda function to fulfill the intent.
For example, the OrderPizza intent requires slots such as pizza size, crust type, and number of pizzas.
In the intent configuration, you add these slots. For each slot, you provide a slot type and a prompt for
Amazon Lex to send to the client to elicit data from the user. A user can reply with a slot value that
includes additional words, such as "large pizza please" or "let's stick with small". Amazon Lex can still
understand the intended slot value.
Slot type – Each slot has a type. You can create your own custom slot types or use built-in slot types.
Each slot type must have a unique name within your account. For example, you might create and use
the following slot types for the OrderPizza intent:
Size – With enumeration values Small, Medium, and Large.
Crust – With enumeration values Thick and Thin.
Amazon Lex also provides built-in slot types. For example, AMAZON.NUMBER is a built-in slot type
that you can use for the number of pizzas ordered.
Intent – An intent represents an action that the user wants to perform.
For example, you might create an intent that orders a pizza, books a hotel, checks a balance, applies for
a loan, handles a payment issue, etc.
Sample utterances – How a user might convey the intent.
For example, a user might say "Can I order a pizza" or "I want to order a pizza".
Slot – A slot is a piece of information that Amazon Lex needs to fulfill an intent. Each slot has a type.
You can create your own custom slot types or use built-in types.
For example, the OrderPizza intent requires slots such as pizza size and crust type.
Slot type – Each slot has a type. You can create your own slot types or use built-in slot types.
For example, you might create and use the following slot types for the OrderPizza intent:
Size – With enumeration values Small, Medium, and Large.
Crust – With enumeration values Thick and Thin.
• Select the Amazon Lex service from the console.
• Select "Create a role with basic Amazon Lex permissions".
• We have different options; here we select "This is only a text-based application".
- Book a Hotel
- Book a {nights} nights stay in {location}; sample: Book a 3 nights stay in Mumbai
• We can create our own slots by creating slot types.
• After giving the slot a name, we choose "Restrict to slot values", which means only the listed values
are accepted.
• Here we give the values for the slot type, and click on "Create slot".
• Here we can see the slot type which we have created; add prompts as well.
• After adding all the required slots, click on "Save intent".
• We get a preview after saving the intent; then build the intent.
• After it is built, go to the Test option; a chat panel opens on the right side.
• Now we can give any one of the utterances; for example, type "Book Hotel" and see the response.
• In response to "Book a Hotel" we get the reply from the bot; in this way we get the responses.
• Go to the intent, then under Confirmation prompts, type your message.
AWS LAMBDA
• AWS Lambda is a compute service that lets you run code without provisioning or managing servers.
• With AWS Lambda, you can run code for virtually any type of application or backend service, all
with zero administration.
• AWS Lambda executes your code only when needed and scales automatically, from a few requests
per day to thousands per second.
• You pay only for the compute time you consume; there is no charge when your code is not running.
• All you need to do is supply your code in the form of one or more Lambda functions to AWS Lambda
in one of the languages that AWS supports (currently Node.js, Java, PowerShell, C#, Ruby, Python,
and Go), and the service runs the code on your behalf.
• Typically, the lifecycle of an AWS Lambda-based application includes authoring code, deploying
code to AWS Lambda, and then monitoring and troubleshooting.
• This is in exchange for flexibility, which means you cannot log in to the compute instances or
customize the operating system or language runtime.
• If you do want to manage your own compute, you can use EC2 or Elastic Beanstalk.
Function – A function is a resource that you can invoke to run your code in AWS Lambda. A function
has code that processes events, and a runtime that passes requests and responses between Lambda and
the function code.
Runtime – The Lambda runtime allows functions in different languages to run in the same base
execution environment. The runtime sits between the Lambda service and your function code, relaying
invocation events, context information, and responses between the two.
Event – A JSON-formatted document that contains data for a function to process.
Event source/trigger – An AWS service, such as Amazon SNS, or a custom service that triggers your
function and executes its logic.
Downstream resource – An AWS service, such as DynamoDB tables or S3 buckets, that your
Lambda function calls once it is triggered.
Concurrency – The number of requests that your function is serving at any given time.
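A minimal Python Lambda function illustrating these concepts: the handler receives the JSON event and a context object, and its return value is the function's response for synchronous invocations.

    import json

    def lambda_handler(event, context):
        # 'event' is the JSON-formatted document delivered by the trigger
        name = event.get("name", "world")
        # The returned value is the function's response
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}!"}),
        }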
• Lambda can run your code in response to HTTP requests using Amazon API Gateway.
• With these capabilities, you can use Lambda to easily build data-processing triggers for AWS services
like Amazon S3 and Amazon DynamoDB, process streaming data stored in Kinesis, or create your
own backend that operates at AWS scale, performance, and security.
Example of S3:
1. The user creates an object in a bucket.
2. Amazon S3 detects the object-created event.
3. Amazon S3 invokes your Lambda function using the permissions provided by the execution role.
4. Amazon S3 knows which Lambda function to invoke based on the event source mapping that is
stored in the bucket notification configuration.
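For this S3 flow, the event delivered to the function carries the bucket and object names; a sketch of a handler that reads them:

    def lambda_handler(event, context):
        # An S3 event contains one or more records describing the change
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            print(f"Object created: s3://{bucket}/{key}")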
IAM ROLE
• This is the role that AWS Lambda assumes when it executes the Lambda function on your behalf.
Services an AWS Lambda function can access:
• AWS services or non-AWS services.
• AWS services running in an AWS VPC (e.g., Redshift, ElastiCache, RDS instances).
• Non-AWS services running on EC2 instances in an AWS VPC.
• AWS Lambda runs your function code securely within a VPC by default.
• However, to enable your Lambda function to access resources inside your private VPC, you must
provide additional VPC-specific configuration information, which includes VPC subnet IDs and
security group IDs.
Different ways to invoke a Lambda function:
• Synchronous invoke (push)
• Asynchronous invoke (event)
• Poll-based invoke (pull-based)
Synchronous invocation is the most straightforward way to invoke your Lambda. In this model, your
function executes immediately when you perform the Lambda Invoke API call with the invocation flag
set to "RequestResponse". You wait for the function to process the event and return a response.
Asynchronous Invocation
• For asynchronous invocation, Lambda places the event in a queue and returns a success response
without additional information.
• Lambda queues the event for processing and returns a response immediately.
• You can configure Lambda to send an invocation record to another service like SQS, SNS, or Lambda.
• Services that invoke Lambda asynchronously include:
• Amazon S3
• Amazon SNS
• AWS CloudFormation
• AWS Config
Poll-Based Invocation
• This invocation model is designed to allow you to integrate with AWS stream- and queue-based
services with no code or server management. Lambda polls the following services on your behalf,
retrieves records, and invokes your function:
Amazon Kinesis
Amazon SQS
Amazon DynamoDB Streams
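The invocation type is chosen per call. A boto3 sketch of the first two models (the function name is a placeholder):

    import json
    import boto3

    client = boto3.client("lambda")

    # Synchronous: wait for the function to run and read its response
    resp = client.invoke(FunctionName="my-function",
                         InvocationType="RequestResponse",
                         Payload=json.dumps({"name": "cloud"}))
    print(resp["Payload"].read())

    # Asynchronous: Lambda queues the event and returns immediately
    client.invoke(FunctionName="my-function",
                  InvocationType="Event",
                  Payload=json.dumps({"name": "cloud"}))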
• Choose the option "Author from scratch", give the function a name, and select the language as
Python 3.7.
• Under Permissions, change the default execution role: select "Use an existing role", then create the
function.
DOCKER
Docker is a platform that packages an application and all its dependencies together in the form of
containers. This containerization aspect ensures that the application works in any environment.
In the diagram, each application runs in a separate container and has its own set of
dependencies and libraries. This makes sure that each application is independent of other applications,
giving developers surety that they can build applications that will not interfere with one another. So a
developer can build a container having different applications installed on it and give it to the QA team.
The QA team would then only need to run the container to replicate the developer's environment.
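Besides the CLI commands below, the Docker daemon can also be driven programmatically. A sketch using the Python Docker SDK, assuming pip install docker and a running Docker daemon; the image name is just an example:

    import docker

    client = docker.from_env()  # connect to the local Docker daemon

    # Pull an image from Docker Hub and start it as a detached container,
    # mapping container port 3000 to host port 3000
    client.images.pull("grafana/grafana")  # example image
    container = client.containers.run("grafana/grafana", detach=True,
                                      ports={"3000/tcp": 3000})
    print(container.id)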
Docker Commands
1. docker --version
This command is used to get the currently installed version of Docker.
2. docker pull
Usage: docker pull <image name>
This command is used to pull images from the Docker repository (hub.docker.com).
3. docker run
Usage: docker run -it -d <image name>
This command is used to create a container from an image
4. docker ps
This command is used to list the running containers
5. docker ps -a
This command is used to show all the running and exited containers
6. docker stop
Usage: docker stop <container id>
This command stops a running container
7. docker kill
Usage: docker kill <container id>
This command kills the container by stopping its execution immediately. The difference
between 'docker kill' and 'docker stop' is that 'docker stop' gives the container time to
shut down gracefully, while 'docker kill' is used when the container is taking too much time
to stop.
8. docker commit
Usage: docker commit <container id> <username/imagename>
This command creates a new image of an edited container on the local system.
9. docker build
Usage: docker build <path to docker file>
This command is used to build an image from a specified docker file.
10. docker login
This command is used to log in to the Docker Hub repository.
11. docker rm
Usage: docker rm <container id>
This command is used to delete a stopped container
Step-by-step procedure to pull an image from Docker to an EC2 instance and access it publicly:
• Create the EC2 instance and connect with the EC2 console.
• In the opened EC2 console, type the following commands to update packages and list images:
- sudo apt update / sudo apt-get update
- sudo docker image ls (this shows the list of images present in our instance)
Note: we don't have images in the instance yet because we haven't pulled an image from Docker.
Note: in a port mapping such as 3000:3000, the first (left) value is the inbound host port, which we
can change, but the right-hand 3000 is the port the Docker container binds to, which we cannot
change; for example, we can give 3008:3000.
• After the run command, the image is shown as downloaded. Now we can access it publicly by
copying the EC2 public IP address shown at the bottom of the console or on the EC2 dashboard.
• Copy the public IP and paste it into a browser together with the inbound port.
• Example: http://3.12.123.4:3000. Note: if it does not open, check the security group settings.