
Cloud Computing Note(UNIT 1-2)

The document provides an extensive overview of cloud computing, defining it as a technology that delivers computing services over the internet, allowing users to access resources remotely without managing physical servers. It discusses the advantages, service models (IaaS, PaaS, SaaS), and deployment models (public, private, hybrid, community) of cloud computing, highlighting its cost efficiency, scalability, and global accessibility. Additionally, it traces the history and evolution of cloud computing from the 1960s to recent advancements, emphasizing its transformative impact on businesses and technology.


UNIT-1

Introduction to Cloud Computing:


What is the cloud: "The cloud" refers to servers that are accessed over the Internet,
and the software and databases that run on those servers. Cloud servers are in data
centres all over the world. By using cloud computing, users and companies do not
have to manage physical servers themselves or run software applications on their own
machines.

The cloud enables users to access the same files and applications from almost any
device, because the computing and storage take place on servers in a data centre
instead of locally on the user's device. Therefore, a user can log into their Instagram
account on a new phone after their old phone breaks and still find their old
account in place, with all their photos, videos, and conversation history. It works
the same way with cloud email providers like Gmail or Microsoft Office 365, and
with cloud storage providers like Dropbox or Google Drive.

For businesses, switching to cloud computing removes some IT costs and overhead:
for instance, they no longer need to update and maintain their own servers, as the
cloud vendor they are using will do that. This especially makes an impact for small
businesses that may not have been able to afford their own internal infrastructure
but can outsource their infrastructure needs affordably via the cloud. The cloud can
also make it easier for companies to operate internationally, because employees and
customers can access the same files and applications from any location.

Cloud computing at a glance

1. Definition: Cloud computing is a technology delivering computing services over
the internet.
2. Resource Access: Users access and utilize computing resources like servers,
storage, and software remotely.
3. On-Demand Services: Offers scalable, flexible, and cost-effective services tailored
to user needs.
4. Internet-Based: All services are provided over the internet, eliminating the need for
physical infrastructure.
5. Payment Model: Users pay for resources consumed, promoting cost efficiency.
6. Service Models: Includes Infrastructure as a Service (IaaS), Platform as a Service
(PaaS), and Software as a Service (SaaS).
7. Shared Environment: Utilizes virtualized and shared infrastructure for increased
accessibility, collaboration, and innovation.

Definition of Cloud Computing:

1. Virtualized Resources: A cloud is a virtualized pool of computing resources,
encompassing servers, storage, and networks, accessible over the internet.
2. On-Demand Availability: Resources are available on demand, allowing users to
provision and utilize computing services as needed.
3. Scalability: Clouds offer seamless scalability, enabling users to easily scale
resources up or down based on workload fluctuations.
4. Resource Sharing: Multiple users can share the same underlying infrastructure,
optimizing utilization and promoting efficiency.
5. Self-Service Model: Users can independently manage and control resources,
accessing computing capabilities without requiring direct intervention from service
providers.
6. Pay-as-You-Go: Cloud services often follow a pay-as-you-go model, where users
pay for actual resource consumption rather than fixed, upfront costs.

The term “Cloud Computing” refers to services provided by the cloud, which is
responsible for the delivery of computing services such as servers, storage, databases,
networking, software, analytics, intelligence, and more, over the Internet.
Cloud computing applies a virtualized platform with elastic resources on demand by
provisioning hardware, software, and data sets dynamically.
Cloud computing provides an alternative to the on-premises data center. With
an on-premises data center, we must manage everything, such as purchasing
and installing hardware, virtualization, installing the operating system and any other
required applications, setting up the network, configuring the firewall, and setting up
storage for data. After doing all the set-up, we become responsible for maintaining it
through its entire lifecycle.

However, if we choose cloud computing, a cloud vendor is responsible for the
hardware purchase and maintenance. The vendor also provides a wide variety of
software and platforms as a service. We can take any required services on rent, and
the cloud computing services are charged based on usage.

The cloud environment provides an easily accessible online portal that makes it handy
for the user to manage the compute, storage, network, and application resources. Some
well-known cloud service providers are AWS, Microsoft Azure, and Google Cloud.

The vision of cloud computing:

1. Global Accessibility: Cloud computing envisions universal access to computing
resources, allowing users to connect and utilize services from anywhere in the world.
2. Scalability: The vision includes seamless scalability, enabling users to easily adjust
resources based on demand, ensuring optimal performance.
3. Cost Efficiency: A key aspect is the economic advantage, where users pay for
actual usage, reducing upfront costs and promoting efficiency.
4. Innovation Acceleration: Cloud computing aims to foster rapid innovation by
providing a platform for developers to create and deploy applications swiftly.
5. Collaboration: The vision emphasizes enhanced collaboration through shared
resources, facilitating teamwork and information exchange on a global scale.

Advantages of cloud computing:


1. Cost: It reduces the huge capital costs of buying hardware and software.
2. Speed: Resources can be accessed in minutes, typically within a few clicks.
3. Scalability: We can increase or decrease the requirement of resources
according to the business requirements.
4. Productivity: While using cloud computing, we put less operational effort.
We do not need to apply patching, as well as no need to maintain hardware
and software. So, in this way, the IT team can be more productive and focus on
achieving business goals.
5. Reliability: Backup and recovery of data are less expensive and extremely
fast for business continuity.
6. Security: Many cloud vendors offer a broad set of policies, technologies, and
controls that strengthen our data security.

Cloud computing shares characteristics with:

1. Client–server model—Client–server computing refers broadly to any
distributed application that distinguishes between service providers
(servers) and service requestors (clients).
2. Grid computing—A form of distributed and parallel computing, whereby
a 'super and virtual computer' is composed of a cluster of networked, loosely
coupled computers acting in concert to perform very large tasks.
3. Fog computing—A distributed computing paradigm that provides data,
compute, storage and application services closer to the client or near-user edge
devices, such as network routers. Furthermore, fog computing handles data at
the network level, on smart devices and on the end-user client side (e.g.,
mobile devices), instead of sending data to a remote location for processing.
4. Mainframe computer—Powerful computers used mainly by large
organizations for critical applications, typically bulk data processing such as
census; industry and consumer statistics; police and secret intelligence
services; enterprise resource planning; and financial transaction processing.
5. Utility computing—The packaging of computing resources, such as
computation and storage, as a metered service similar to a traditional public
utility, such as electricity.
6. Peer-to-peer—A distributed architecture without the need for central
coordination. Participants are both suppliers and consumers of resources (in
contrast to the traditional client–server model).
7. Green computing—The study and practice of environmentally sustainable
computing or IT.
8. Cloud sandbox—A live, isolated computer environment in which a program,
code or file can run without affecting the application in which it runs.
Characteristics of Cloud Computing
1. Agility for organizations.
2. Cost reductions: centralization of infrastructure in locations with lower costs.
3. Device and location independence, which lets users access systems from any
device and any location.
4. Pay-per-use means utilization and efficiency improvements for systems that are
often only 10–20% utilized.
5. Performance is monitored by IT experts, i.e., from the service provider's end.
6. Productivity increases, as multiple users can work on the same data
simultaneously.
7. Time may be saved, as information does not need to be re-entered when fields
are matched.
8. Availability improves with the use of multiple redundant sites.
9. Scalability and elasticity via dynamic ("on-demand") provisioning of
resources on a fine-grained, self-service basis in near real-time, without users
having to engineer for peak loads.
10. Self-service interface.
11. Resources that are abstracted or virtualized.
12. Security can improve due to centralization of data.

The National Institute of Standards and Technology's definition of cloud
computing identifies "five essential characteristics":
1. On-demand self-service.
2. Broad network access.

3. Resource pooling.
4. Rapid elasticity.
5. Measured service.
Basic Concepts
There are certain services and models working behind the scenes that make cloud
computing feasible and accessible to end users.
• Deployment Models
• Service Models
Deployment Models
• Deployment models define the type of access to the cloud, i.e., how the cloud is
located. A cloud can have any of four types of access: Public, Private, Hybrid,
and Community.
Public cloud
• Public cloud (off-site and remote) describes cloud computing where resources are
dynamically provisioned on an on-demand, self-service basis over the Internet, via
web applications/web services, open API, from a third-party provider who bills on
a utility computing basis.
Private cloud
• A private cloud environment is often the first step for a corporation prior to
adopting a public cloud initiative. Corporations have discovered the benefits of
consolidating shared services on virtualized hardware deployed from a primary
datacenter to serve local and remote users.
Hybrid cloud
• A hybrid cloud environment consists of some portion of computing resources on-
site (on premise) and off-site (public cloud). By integrating public cloud services,
users can leverage cloud solutions for specific functions that are too costly to
maintain on-premise such as virtual server disaster recovery, backups and
test/development environments.
Community cloud
• A community cloud is formed when several organizations with similar requirements
share common infrastructure. Costs are spread over fewer users than a public cloud
but more than a single tenant.
Cloud Service Models
Infrastructure-as-a-Service (IaaS): In the Infrastructure-as-a-Service model, the service
provider owns the hardware equipment, such as servers, storage, and networking, and
provides it as a service to clients. The client uses this equipment and pays on a
per-use basis.
• E.g. Amazon Elastic Compute Cloud (EC2) and Simple Storage Service (S3).
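As a rough illustration of the IaaS model, the sketch below rents a small virtual server on demand using AWS's Python SDK (boto3) and releases it when finished. The AMI ID, region, and instance type are placeholders, and configured AWS credentials are assumed; this is a minimal sketch, not a production setup.

# Minimal IaaS sketch: provision a virtual server on demand, then release it.
# The AMI ID, region, and instance type are placeholders; valid AWS
# credentials are assumed to be configured in the environment.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t2.micro",          # small, pay-per-hour instance type
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print("Provisioned instance:", instance_id)

# When the workload is finished, releasing the resource stops the billing.
ec2.terminate_instances(InstanceIds=[instance_id])

Because the hardware is rented rather than owned, the cost stops as soon as the instance is terminated, which is the per-use billing described above.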
Platform-as-a-Service (PaaS): In the Platform-as-a-Service model, the complete set of
resources needed to design, develop, test, deploy, and host an application is provided
as a service, without spending money on purchasing and maintaining servers, storage,
and software.
PaaS is an extension of IaaS. In addition to the fundamental computing resources
supplied by the hardware in an IaaS offering, PaaS models also include the software
and configuration required to create an application.
• E.g. Google App Engine.

Software-as-a-Service (SaaS): In the Software-as-a-Service model, the service provider
delivers software as a service over the Internet, eliminating the need to buy, install,
maintain, upgrade, and license software on local machines.
• E.g. accounting software, CRM, and Google Docs are all popular examples of SaaS.

Benefits
 One can access applications as utilities over the Internet.
 One can manipulate and configure the applications online at any time.
 It does not require installing software to access or manipulate a cloud application.
 Cloud computing offers online development and deployment tools and a
programming runtime environment through the PaaS model.
 Cloud resources are available over the network in a manner that provides
platform-independent access to any type of client. Cloud computing offers on-
demand self-service: the resources can be used without interaction with the
cloud service provider.
 Cloud computing is highly cost-effective because it operates at high efficiency
with optimum utilization. It just requires an Internet connection.
 Cloud computing offers load balancing, which makes it more reliable.
HISTORY AND EVOLUTION
• Cloud computing is one of the most innovative technologies of our time. Following
is a brief history of cloud computing.

• EARLY 1960S:- The computer scientist John McCarthy came up with the concept
of time-sharing, enabling organizations to simultaneously use an expensive
mainframe. This is described as a significant contribution to the development
of the Internet, and a pioneer of cloud computing.
• IN 1969:- The idea of an “Intergalactic Computer Network” or “Galactic
Network” (a computer networking concept similar to today’s Internet) was
introduced by J.C.R. Licklider, who was responsible for enabling the
development of ARPANET (Advanced Research Projects Agency Network). His
vision was for everyone on the globe to be interconnected and able to
access programs and data at any site, from anywhere.
• IN THE 1970S:- Virtualization software made it possible to run more than one
operating system simultaneously in an isolated environment: a completely
different computer (a virtual machine) could run inside another operating
system. This capability was later popularized by products such as VMware.
• IN 1997:- The first known definition of the term “Cloud Computing” seems to
be by Prof. Ramnath Chellappa in Dallas in 1997 – “A computing paradigm
where the boundaries of computing will be determined by economic rationale
rather than technical limits alone.”
• IN 1999:- The arrival of Salesforce.com in 1999 pioneered the concept of
delivering enterprise applications via a simple website. The services firm paved
the way for both specialist and mainstream software firms to deliver applications
over the Internet.
• IN 2002:- Amazon launched its cloud computing web services, known as AWS.
• IN 2003:- The first public release of Xen, which provides a Virtual Machine
Monitor (VMM), also known as a hypervisor: a software system that allows the
execution of multiple virtual guest operating systems simultaneously on a single
machine.
• IN 2006:- In 2006, Amazon expanded its cloud services. First was its Elastic
Compute Cloud (EC2), which allowed people to access computers and run their
own applications on them, all on the cloud. Then it brought out Simple
Storage Service (S3). This introduced the pay-as-you-go model to both users and
the industry as a whole, and it has basically become standard practice now.
• IN 2009:- Google Apps also started to provide cloud computing enterprise
applications. In 2009 Microsoft also introduced Windows Azure (which became
generally available in 2010), and companies like Oracle and HP joined the game.
• IN 2013:- The worldwide public cloud services market totalled £78bn, up 18.5
per cent on 2012, with IaaS (infrastructure-as-a-service) the fastest-growing
market service.
• IN 2014:- In 2014, global business spending for infrastructure and services
related to the cloud was projected to reach an estimated £103.8bn, up 20% from
the amount spent in 2013 (Constellation Research).
• 2016: Serverless computing gained popularity with AWS Lambda and Azure
Functions. AI and machine learning services such as Google AI and AWS SageMaker
became mainstream.
• 2017: Multi-cloud strategies emerged as organizations sought to avoid vendor
lock-in. Edge computing started gaining traction, driven by IoT needs.
• 2018: 5G rollouts began, promising to accelerate edge computing and real-time
applications. The General Data Protection Regulation (GDPR) in the EU impacted
cloud compliance requirements.
• 2019: Hybrid cloud solutions like AWS Outposts and Azure Arc were introduced.
Quantum computing began to integrate with cloud platforms (e.g., IBM Q
Experience).
• 2020: The COVID-19 pandemic accelerated cloud adoption for remote work and
digital transformation. Video conferencing and collaboration tools like Zoom and
Microsoft Teams scaled massively using cloud infrastructure.
• 2021: Sustainability became a focus, with cloud providers pledging carbon
neutrality (e.g., Google, Microsoft). Decentralized cloud models like Filecoin and
IPFS gained attention.
• 2022: AI-powered cloud management tools automated resource allocation and
cost optimization. Industry-specific cloud solutions (e.g., for healthcare and finance)
gained traction.
• 2023: Supercloud architectures emerged, enabling seamless integration across
multiple cloud platforms. The adoption of AI-driven applications and real-time
analytics surged.
• 2024: Quantum computing services expanded, solving niche problems in finance,
logistics, and healthcare. Enhanced edge computing applications supported
autonomous vehicles and AR/VR experiences.
• 2025: Cloud providers fully embraced 5G-enabled edge solutions, bringing low-
latency services to global markets. AI and blockchain integration in cloud systems
enabled secure and transparent data management.
Evolution of Cloud Computing
• Distributed Systems
• Virtualization
• Web 2.0
• Service Oriented Computing
• Utility Oriented Computing

• Distributed System:
A distributed system is a composition of multiple independent systems, all of which
are depicted as a single entity to the users.
Properties:
Heterogeneity
Openness
Scalability
Transparency
Concurrency
Continuous Availability
Independent Failure
Three Milestones of Distributed Systems
• Mainframe Computing
• Cluster Computing
• Grid Computing
Mainframe Computing
• Mainframes, which first came into existence in 1951, are highly powerful and
reliable computing machines.
• They were the first large computational facilities.
• Used by large organizations for bulk data processing tasks and online
transactions.
• Enterprise resource planning (ERP).
• Batch processing is the main application of mainframes.
• Online booking, airline ticket booking, supermarkets and telcos, government services.
Cluster Computing
• Clusters were way cheaper than mainframe systems.
• New nodes could easily be added to the cluster if required.
• Evolved in the 1980s.
• Used in areas such as WebLogic, application servers, databases, etc.
• A cluster is a group of independent computers that work together to
perform the tasks given. Cluster computing is defined as a type of
computing that consists of two or more independent computers, referred to as
nodes, that work together to execute tasks as a single machine.
The goal of cluster computing is to increase the performance, scalability and
simplicity of the system. All the nodes (irrespective of whether they are a parent
node or a child node) act as a single entity to perform the tasks.

Grid Computing
• In the 1990s, the concept of grid computing was introduced.
• Different systems were placed at entirely different geographical locations, and
all of these were connected via the Internet.
• The grid consisted of heterogeneous nodes.
• Cloud computing is often referred to as the "successor of grid computing".
• Used for predictive modelling, automation, simulation, etc.
• Grid computing is defined as a type of computing that constitutes a
network of computers working together to perform tasks that may be difficult
for a single machine to handle. All the computers on that network work under
the same umbrella and are termed a virtual supercomputer.
• The tasks they work on either require high computing power or involve
large data sets. All communication between the computer systems in grid
computing is done on the "data grid". The goal of grid computing is to solve
high-computation problems in less time and improve productivity.

Virtualization
• Creating a virtual layer over the hardware which allows the user to run multiple
instances simultaneously on the hardware.
• It is the base on which major cloud computing services such as Amazon EC2,
VMware vCloud, etc. work.
• Hardware virtualization is still one of the most common types of virtualization.

Web 2.0
• Web 2.0 is the interface through which the cloud computing services interact
with the clients.
• It is because of Web 2.0 that we have interactive and dynamic web pages.
• It also increases flexibility among web pages. Popular examples of web 2.0
include Google Maps, Facebook, Twitter, etc.
• It gained major popularity in 2004.
Service Oriented computing
• A service orientation acts as a reference model for cloud computing.
• It supports low-cost, flexible, and evolvable applications.
• Two important concepts were introduced in this computing model. These
were Quality of Service (QoS) which also includes the SLA (Service Level
Agreement) and Software as a Service (SaaS)
Utility Computing
• Pay-per-use model for compute, storage, and infrastructure services.
• Resources can be used on demand as a metered service.
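As a toy illustration of metered, pay-per-use billing, the small Python sketch below computes a monthly bill from consumed VM hours and stored gigabytes. The rates are hypothetical and do not reflect any provider's actual pricing.

# Illustrative sketch of metered, pay-per-use billing (utility computing).
# The rates below are hypothetical, not any provider's actual pricing.
COMPUTE_RATE_PER_HOUR = 0.0116       # e.g. price of a small VM per hour
STORAGE_RATE_PER_GB_MONTH = 0.023    # e.g. price per GB stored per month

def monthly_bill(vm_hours: float, storage_gb: float) -> float:
    """Bill only for what was actually consumed, like an electricity meter."""
    return vm_hours * COMPUTE_RATE_PER_HOUR + storage_gb * STORAGE_RATE_PER_GB_MONTH

# A VM that ran 200 hours plus 50 GB of stored data for the month:
print(f"Amount due: ${monthly_bill(200, 50):.2f}")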

Advantages
• Easy backup and restore
• Excellent accessibility
• Low maintenance cost
• Mobility
• Huge /Unlimited storage capacity
• Allows pay-per-use mode

Disadvantages
• Internet connectivity is required.
• Vendor lock-in (migration between providers can be difficult).
• Limited control over the underlying infrastructure.
• Security concerns.
Architecture of Cloud Computing
Architecture of cloud computing is the combination of both SOA (Service Oriented
Architecture) and EDA (Event Driven Architecture). Client infrastructure, application,
service, runtime cloud, storage, infrastructure, management and security all these are the
components of cloud computing architecture.
The cloud architecture is divided into 2 parts, i.e.
1. Frontend
2. Backend
• Front End
 The front end is used by the client.
 It contains client-side interfaces and applications that are required to access the
cloud computing platforms.
 The front end includes web browsers (such as Chrome, Firefox, and Internet
Explorer), tablets, and mobile devices.
• Back End
 The back end is used by the service provider.
 It manages all the resources that are required to provide cloud computing services.
 It includes a huge amount of data storage, security mechanism, virtual machines,
deploying models, servers, traffic control mechanisms, etc.

Components of Cloud Computing Architecture


There are the following components of cloud computing architecture –
1. Client Infrastructure
Client Infrastructure is a Front end component. It provides GUI (Graphical User
Interface) to interact with the cloud.
2. Application
The application may be any software or platform that a client wants to access.
Example-
 Microsoft 365 (SaaS): Users access Word, Excel, and other tools via the cloud
without installing software locally.
 Gmail
3. Service
The cloud service component manages which type of service you access, according to
the client's requirements.
 Infrastructure as a Service (IaaS): Provides virtualized computing resources over
the internet, rental of Server, OS, Virtual Machine ,Storage etc (e.g., AWS EC2,
Microsoft Azure).
 Platform as a Service (PaaS): Offers a platform for developers to build and
deploy applications (e.g., Google App Engine, Heroku).
 Software as a Service (SaaS): Delivers fully functional applications to end-users
(e.g., Gmail, Webapp etc).
4. Runtime Cloud
Runtime Cloud provides the execution and runtime environment to the virtual machines.
If we build a virtual machine, it requires a runtime cloud to run.
• Example-AWS Lambda: Runs code in response to events without provisioning
servers.
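To make the serverless runtime idea concrete, the following is a minimal sketch of a Python handler in the shape AWS Lambda expects; the "name" field in the event is a hypothetical example input, and packaging and deployment details are omitted.

# Minimal sketch of a function for a serverless runtime such as AWS Lambda.
# The platform supplies the execution environment; the code only defines the
# handler. The "name" field in the event is a hypothetical example input.
import json

def lambda_handler(event, context):
    # 'event' carries the trigger payload; 'context' carries runtime metadata.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }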
5. Storage
Storage is one of the most important components of cloud computing. It provides a huge
amount of storage capacity in the cloud to store and manage data.
Example: Amazon S3 provides scalable object storage for files, backups, and media.
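A minimal sketch of using cloud object storage through boto3 is shown below; the bucket and file names are placeholders, and configured AWS credentials are assumed.

# Sketch of using cloud object storage (Amazon S3 via boto3): objects are
# stored in buckets and retrieved over the internet. Bucket and file names
# are placeholders.
import boto3

s3 = boto3.client("s3")

# Upload a local file as an object, then download it again.
s3.upload_file("backup.tar.gz", "my-example-bucket", "backups/backup.tar.gz")
s3.download_file("my-example-bucket", "backups/backup.tar.gz", "restored.tar.gz")

# List what is stored under the 'backups/' prefix.
listing = s3.list_objects_v2(Bucket="my-example-bucket", Prefix="backups/")
for obj in listing.get("Contents", []):
    print(obj["Key"], obj["Size"])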
6. Infrastructure
Cloud infrastructure includes hardware and software components such as servers,
storage, network devices, virtualization software, and other storage resources that are
needed to support the cloud computing model.
It provides services at the host level, application level, and network level.
7. Management
Management is used to manage components such as application, service, runtime cloud,
storage, infrastructure, and other security issues in the backend and establish
coordination between them.
8. Security
Security is an in-built back end component of cloud computing. It implements a security
mechanism in the back end.
9. Internet
The Internet is the medium through which the front end and back end interact and
communicate with each other.
Building Cloud Computing Environments:
 The creation of cloud computing environments encompasses both the
development of applications and systems that leverage cloud computing solutions,
and the creation of the frameworks, platforms, and infrastructure that deliver
cloud computing services.
 These activities are divided into:
• Application development
• Infrastructure and system development
• Computing platforms and technologies
Application Development in Cloud Computing
 Applications that leverage cloud computing benefit from its capability to
dynamically scale on demand.
 Web applications are the applications that take the biggest advantage of this
feature; Web 2.0 technologies are widely used for their development.
The web has become the platform for:
a) Enterprise applications (e.g., ERP software)
b) Resource-intensive applications, which are i) data intensive and/or ii) compute
intensive.
 A large number of resources is not required constantly.
Ex- Scientific applications require large amounts of resources, but not constantly, so
we can rent the infrastructure instead of owning it.
 Cloud computing can provide computing as well as storage, and it provides a
solution for on-demand and dynamic scaling across the entire stack of computing by:
a) Providing methods for renting compute power, storage, and networking.
b) Offering runtime environments designed for scalability and dynamic sizing.
c) Providing application services that mimic the behaviour of desktop applications but
that are completely hosted and managed on the provider side.
• Common languages and tools: RapidAPI.com, Node.js, Python, PHP, Ruby,
Java, Amazon cloud APIs, etc.
Infrastructure Development in Cloud Computing
• Distributed Computing
• Virtualization
• Service Orientation
• Web 2.0
Distributed Computing
• Distributed systems are the fundamental model for cloud computing, because
clouds are themselves distributed systems.
• Distribution hides the complexity of the cloud and gives the user a single interface.
Distributed computing is defined as a type of computing where multiple
computer systems work on a single problem. All the computer systems are
linked together, and the problem is divided into sub-problems, each part of which is
solved by a different computer system. The goal of distributed computing is to
increase the performance and efficiency of the system and ensure fault tolerance.
Each processor has its own local memory, and all the processors communicate with
each other over a network.
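The toy Python sketch below mimics this idea on a single machine: a problem is split into sub-problems, worker processes solve them independently, and the partial results are combined. A real distributed system does the same across networked computers rather than local processes.

# Toy illustration of the distributed-computing idea: a large problem is split
# into sub-problems solved in parallel, then combined. Worker processes on one
# machine stand in for the networked computers of a real distributed system.
from multiprocessing import Pool

def partial_sum(chunk):
    """Each 'node' solves its own sub-problem independently."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Split the problem into 4 sub-problems.
    chunks = [data[i::4] for i in range(4)]
    with Pool(processes=4) as pool:
        partials = pool.map(partial_sum, chunks)
    # Combine the partial results into the final answer.
    print("Sum of squares:", sum(partials))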

Virtualization
• Virtualization is the abstraction of virtual hardware or runtime environment.
• Creating a virtual layer over the hardware which allows the user to run multiple
instances simultaneously on the hardware
• It is the base on which major cloud computing services such as Amazon EC2,
VMware vCloud, etc. work.
Service Orientation
• A service orientation acts as a reference model for cloud computing.
• It supports low-cost, flexible, and evolvable applications.
• IaaS allows resources to be added or removed in order to scale up and scale down.
• PaaS embeds code, offering algorithms and rules.
Web 2.0
• The core technology through which we can access all the services.
• Web 2.0 is the interface through which the cloud computing services interact
with the clients.
• It is because of Web 2.0 that we have interactive and dynamic web pages.
• Cloud computing is therefore often described as XaaS: everything as a service.
Computing platforms and technologies in Cloud Computing
• AWS
• Google App Engine
• Microsoft Azure
• Hadoop
• Force.com and salesforce.com
• Manjrasoft Aneka

AWS (Amazon Web Services)

• AWS (launched in 2006) provides a wide range of cloud IaaS services.
• It provides virtual computers, storage, and networking.
• Cloud Computing Services:
• AWS offers a comprehensive suite of on-demand cloud services, including
computing power (Elastic Compute Cloud, EC2), the Simple Storage Service (S3),
databases (RDS, DynamoDB), and machine learning tools.
• EC2 offers customized virtual hardware that can be used as base infrastructure,
including GPU and cluster instances.
• EC2 also provides the capability to save a specific runtime instance as an image
stored in S3.
• S3 stores objects and delivers them on request; objects are organized into buckets
and held in binary form. Users can store objects of any type, from simple files to
entire disk images.
Benefits
• Flexible, scalable, and secure.
• Global Infrastructure: Operates on a global scale with data centers in multiple
regions, ensuring high availability, scalability, and redundancy.
• Pay-as-You-Go Model: Offers a flexible pricing structure based on usage,
allowing cost-effective solutions for startups and enterprises.
• Wide Adoption: Supports diverse industries and use cases, from web hosting and
big data processing to IoT and AI/ML applications.
Google App Engine
• Launched as a scalable runtime environment (generally available since 2011).
• Used for executing web applications.
• Platform as a Service (PaaS): Provides a managed environment for building and
deploying scalable web and mobile applications.
• Automatic Scaling: Dynamically adjusts resources based on application demand,
ensuring performance without manual intervention.
• Supports Multiple Languages: Compatible with popular programming languages
like Python, Java, Node.js, etc.
• Integration with Google Cloud: Seamlessly integrates with other Google Cloud
services like BigQuery, Cloud Storage, and Firebase.
• Developers can use the Software Development Kit (SDK) to develop applications
and migrate them to App Engine.
Microsoft Azure
• It is a cloud OS and platform on which users can develop applications in the cloud.
• Scalable runtime environment for web applications.
• Comprehensive Cloud Platform: Offers a wide range of services, including
virtual machines, AI/ML tools, DevOps, IoT solutions, and databases.
• Hybrid Cloud Capabilities: Provides tools like Azure Arc for managing on-
premises and cloud resources together.
• Enterprise Integration: Strong integration with Microsoft tools like Office 365,
Dynamics, and Active Directory.
• Global Network: Operates in numerous regions worldwide with a focus on
compliance and data security.
• It provides large-scale storage, network caching, content delivery, etc.
• Microsoft Azure provides all three core service models: Infrastructure as a
Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).

Hadoop
• Open-Source Framework: Designed for distributed storage and processing of
large datasets across commodity hardware.
• An implementation of MapReduce.
• MapReduce, developed by Google, consists of Map and Reduce phases: the Map
function transforms and synthesizes the input data, and the Reduce function
aggregates the data.
• Core Components: Includes HDFS (Hadoop Distributed File System) for storage
and MapReduce for processing, enabling efficient big data handling.
• Scalable and Fault-Tolerant: Can handle increasing data volumes by adding
nodes, with built-in fault tolerance.
• Ecosystem: Supports additional tools like Hive (SQL queries), Pig (data
analysis), and Spark (in-memory processing).
• Hadoop was originally sponsored by Yahoo and is now an Apache project.
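To make the Map and Reduce phases concrete, here is a word-count sketch written in the Hadoop Streaming style, with a mapper and a reducer that read from standard input; running it under Hadoop via the streaming jar is assumed and not shown.

# Sketch of the MapReduce idea in the Hadoop Streaming style: the mapper
# emits (key, value) pairs and the reducer aggregates values per key.
# The reducer expects its input sorted by key, which Hadoop's shuffle
# (or a local `sort`) provides.
import sys
from itertools import groupby

def mapper(lines):
    # Map phase: emit ("word", 1) for every word in the input.
    for line in lines:
        for word in line.strip().split():
            print(f"{word}\t1")

def reducer(lines):
    # Reduce phase: aggregate the counts for each word.
    pairs = (line.rstrip("\n").split("\t") for line in lines)
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        print(f"{word}\t{sum(int(count) for _, count in group)}")

if __name__ == "__main__":
    # Choose the phase with a command-line argument: map or reduce.
    (mapper if sys.argv[1] == "map" else reducer)(sys.stdin)

Locally the two phases can be exercised by piping a text file through the script with "map", then sort, then the script with "reduce".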
Force.com and Salesforce.com
• Force.com: A Platform-as-a-Service (PaaS) for building and deploying custom
applications within the Salesforce ecosystem.
• Salesforce.com: A Software-as-a-Service (SaaS) CRM (Customer Relationship
Management) platform, built on the Force.com platform, for sales, customer
service, and marketing automation.
• Salesforce is an American cloud computing company headquartered in San
Francisco, California.
• Customizable: Both platforms allow extensive customization using Apex code
and Lightning components.
• Cloud-Based: Focuses on cloud solutions for managing customer relationships
and business workflows.
Manjrasoft Aneka
• Manjrasoft Aneka is a platform that helps developers build and manage
distributed applications on the cloud.
• Cloud Application Platform: Provides a framework for developing and
deploying cloud applications with resource provisioning.
• Supports Multiple Models: Includes task, thread, and map-reduce models for
different application needs.
• Cross-Platform: Enables integration with private, public, and hybrid cloud
environments .
• Ease of Use: Offers APIs and a graphical user interface for simplifying cloud
application development and management.

• Aneka is a Platform as a Service (PaaS) cloud software that allows users to build
and manage applications that can run on private, public, and hybrid clouds.
Aneka: Aneka is a .NET-based service-oriented resource management and
development platform. Each server in an Aneka deployment (dubbed an Aneka cloud
node) hosts the Aneka container, which provides the base infrastructure consisting
of services for persistence, security (authorization, authentication and auditing), and
communication (message handling and dispatching). Cloud nodes can be physical
servers, virtual machines (XenServer and VMware are supported), or instances rented
from Amazon EC2. The Aneka container can also host any number of optional
services that can be added by developers to augment the capabilities of an Aneka
cloud node, thus providing a single, extensible framework for orchestrating various
application models.
Several programming models are supported, such as task models that enable the
execution of legacy HPC applications, and MapReduce, which enables a variety of
data-mining and search applications. Users request resources via a client to a
reservation services manager on the Aneka master node, which manages all cloud
nodes and contains a scheduling service to distribute requests to cloud nodes.
App Engine: Google App Engine lets you run your Python and Java web applications
on elastic infrastructure supplied by Google. App Engine allows your
applications to scale dynamically as your traffic and data storage requirements
increase or decrease. It gives developers a choice between a Python stack and Java.
The App Engine serving architecture is notable in that it allows real-time auto-
scaling without virtualization for many common types of web applications.
However, such auto-scaling is dependent on the application developer using a
limited subset of the native APIs on each platform, and in some instances you need
to use specific Google APIs such as URLFetch, Datastore, and memcache in
place of certain native API calls. For example, a deployed App Engine application
cannot write to the file system directly (you must use the Google Datastore) or open
a socket or access another host directly (you must use the Google URLFetch service).
A Java application cannot create a new thread either.
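A minimal sketch of an App Engine standard-environment application is shown below, using Flask as the WSGI framework; an accompanying app.yaml declaring the runtime (for example, "runtime: python39") is assumed, and the exact runtime name depends on the App Engine version in use.

# Minimal sketch of a web app for Google App Engine's standard environment.
# App Engine looks for a WSGI object named `app` in main.py; an app.yaml
# declaring the runtime (assumed here, e.g. "runtime: python39") accompanies it.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # App Engine scales instances of this handler up and down automatically.
    return "Hello from App Engine!"

if __name__ == "__main__":
    # Local testing only; in production App Engine serves the `app` object.
    app.run(host="127.0.0.1", port=8080, debug=True)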
Microsoft Azure: Microsoft Azure Cloud Services offers developers a hosted .NET
stack (C#, VB.NET, ASP.NET). In addition, a Java and Ruby SDK for .NET
Services is also available. The Azure system consists of a number of elements. The
Windows Azure Fabric Controller provides auto-scaling and reliability, and it
manages memory resources and load balancing. The .NET Service Bus registers and
connects applications together. The .NET Access Control identity providers include
enterprise directories and Windows LiveID. Finally, the .NET Workflow allows
construction and execution of workflow instances.
Force.com: In conjunction with the Salesforce.com service, the Force.com PaaS
allows developers to create add-on functionality that integrates into the main Salesforce
CRM SaaS application. Force.com offers developers two approaches to create
applications that can be deployed on its SaaS platform: a hosted Apex or
Visualforce application. Apex is a proprietary Java-like language that can be
used to create Salesforce applications. Visualforce is an XML-like syntax for
building UIs in HTML, AJAX, or Flex to overlay the Salesforce-hosted CRM
system. An application store called AppExchange is also provided, which offers a
directory of paid and free applications.
Heroku: Heroku is a platform for instant deployment of Ruby on Rails web
applications. In the Heroku system, servers are invisibly managed by the platform
and are never exposed to users. Applications are automatically dispersed across
different CPU cores and servers, maximizing performance and minimizing contention.
Heroku has an advanced logic layer that can automatically route around failures,
ensuring seamless and uninterrupted service at all times.
UNIT-2
Virtualization is a technique for separating a service from the underlying
physical delivery of that service. It is the process of creating a virtual version of
something, such as computer hardware. It was initially developed during the mainframe
era. It involves using specialized software to create a virtual or software-created
version of a computing resource rather than the actual version of the same resource.
With the help of Virtualization, multiple operating systems and applications can run on
the same machine and its same hardware at the same time, increasing the utilization
and flexibility of hardware.
In other words, one of the main cost-effective, hardware-reducing, and energy-
saving techniques used by cloud providers is Virtualization. Virtualization allows
sharing of a single physical instance of a resource or an application among multiple
customers and organizations at one time. It does this by assigning a logical name to
physical storage and providing a pointer to that physical resource on demand. The term
virtualization is often synonymous with hardware virtualization, which plays a
fundamental role in efficiently delivering Infrastructure-as-a-Service (IaaS) solutions
for cloud computing. Moreover, virtualization technologies provide a virtual
environment for not only executing applications but also for storage, memory, and
networking.
 Host Machine: The machine on which the virtual machine is going to be built is known
as Host Machine.
 Guest Machine: The virtual machine is referred to as a Guest Machine.

Work of Virtualization in Cloud Computing


Virtualization has a prominent impact on cloud computing. In cloud computing,
users store data in the cloud, but with the help of virtualization, users gain the
extra benefit of sharing the infrastructure. Cloud vendors take care of the required
physical resources, but they charge for these services, which affects every user or
organization. Virtualization helps users or organizations maintain the services a
company requires through external (third-party) providers, which helps in reducing
costs to the company. This is the way virtualization works in cloud computing.
 Benefits of Virtualization
 More flexible and efficient allocation of resources.
 Enhanced development productivity.
 It lowers the cost of IT infrastructure.
 Remote access and rapid scalability.
 High availability and disaster recovery.
 Pay-per-use of the IT infrastructure on demand.
 Enables running multiple operating systems.
 Drawbacks of Virtualization
 High Initial Investment: Clouds have a very high initial investment, but it is also true
that they help in reducing companies' costs over time.
 Learning New Infrastructure: As companies shift from servers to the cloud, they
require highly skilled staff who can work with the cloud easily; for this, they
have to hire new staff or provide training to current staff.
 Risk of Data: Hosting data on third-party resources can put the data at risk,
since it has a higher chance of being attacked by a hacker or cracker.

Characteristics of Virtualization
 Increased Security: The ability to control the execution of a guest program in a
completely transparent manner opens new possibilities for delivering a secure,
controlled execution environment. All the operations of the guest programs are
generally performed against the virtual machine, which then translates and applies
them to the host programs.
 Managed Execution: In particular, sharing, aggregation, emulation, and isolation
are the most relevant features.
 Sharing: Virtualization allows the creation of a separate computing environment
within the same host.
 Aggregation: It is possible to share physical resources among several guests, but
virtualization also allows aggregation, which is the opposite process.
 Resource Abstraction
Virtualization abstracts physical hardware resources (CPU, memory, storage, and
network) and presents them as virtual resources.
 Isolation
Each virtual machine operates independently of others, ensuring that issues in one
VM (e.g., crashes or security breaches) do not affect others.
 Scalability
Virtualized environments enable dynamic scaling of resources based on workload
demands.
 Flexibility
Virtualized environments support running different operating systems (e.g.,
Windows, Linux) on the same physical machine.
 High Resource Utilization
Virtualization improves hardware utilization by allowing multiple workloads to
share the same physical resources.
 Portability
Virtual machines and containers can be easily moved between hosts or data
centers, enabling disaster recovery, load balancing, and efficient resource
management.
 Snapshot and Cloning
Virtualized environments support creating snapshots of VMs, which can be used
for backup, testing, or rollback.
 Security
Virtualization includes features like sandboxing, which isolates applications or
systems for enhanced security.
• Cost Efficiency
By consolidating workloads on fewer physical machines, virtualization reduces
the need for physical hardware, lowering capital and operational expenses.
• Simplified Management
Centralized management tools (e.g., VMware vCenter, Microsoft Hyper-V
Manager) provide an interface to monitor, control, and optimize virtualized
resources.
• Fault Tolerance and High Availability
Virtualized environments often include fault-tolerance mechanisms to ensure
continuity during hardware failures.
• Elasticity
Resources can be dynamically adjusted to meet changing demands, making
virtualized environments ideal for cloud computing and on-demand services.
Hypervisor
• A hypervisor is a program used to create, run and manage one or more virtual
machines on a computer.
• A hosted hypervisor such as VirtualBox shares hardware resources through the host OS.
• Each VM gets a separate set of virtual CPU, RAM, storage, etc.
• VMs are fully isolated (independent of the host OS).
• It is software that creates and runs Virtual Machines (VMs).
• The virtualization layer consists of a hypervisor or a Virtual Machine Monitor
(VMM).
• Work of a hypervisor: create, manage, monitor and run VMs.
There are two types of hypervisors:
• Type-1 Hypervisors or Native Hypervisors or Bare Metal
Type-1 Hypervisors or Native Hypervisors run directly on the host hardware and
control the hardware and monitor the guest operating system.
• Type-2 Hypervisors or Hosted Hypervisors
Type-2 Hypervisors or Hosted Hypervisors run on top of a conventional (main or
host) operating system and monitor the guest operating systems.
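As a sketch of how VMs can be managed programmatically through a hypervisor, the snippet below uses the libvirt Python bindings to connect to a local hypervisor and list its virtual machines; the qemu:///system URI assumes a KVM/QEMU host, and other hypervisors such as Xen use their own connection URIs.

# Sketch of querying a hypervisor through the libvirt Python bindings.
# The connection URI assumes a local KVM/QEMU setup ("qemu:///system").
import libvirt

conn = libvirt.open("qemu:///system")
if conn is None:
    raise RuntimeError("Failed to connect to the hypervisor")

# Enumerate every virtual machine (libvirt calls them "domains").
for dom in conn.listAllDomains():
    state, max_mem, mem, vcpus, cpu_time = dom.info()
    print(f"{dom.name()}: state={state}, vCPUs={vcpus}, max memory={max_mem} KiB")

conn.close()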
Examples of Hypervisor

Xen
• Xen is an open-source, Type 1 hypervisor that supports para-virtualization.
• It is widely used in cloud computing platforms, such as Amazon Web Services
(AWS).
• While Xen started with para-virtualization as its primary model, it also supports
full virtualization for running operating systems that are not modified to work
with the hypervisor (like Windows).
• In full virtualization mode, Xen uses a technique called hardware-assisted
virtualization (with Intel VT-x or AMD-V), allowing unmodified guest operating
systems to run in virtual machines.
• Xen supports both para-virtualization (for Linux and other modified OSes) and
full virtualization (for unmodified guest OSes like Windows).
Hardware Layer (Bottom Section)
• This is the physical hardware of the system, which includes:
CPU – The processor that runs instructions.
Disk – Hard drive or SSD for storage.
Network/PCI – Network interface cards (NICs) and PCI devices.
Memory (RAM) – Physical memory for storing running programs.
• The Xen Hypervisor runs directly on this hardware (bare metal) to manage virtual
machines.
Xen Hypervisor Layer (Second Section)
This is the core of the virtualization platform.
 Xen does not have a built-in user interface. Instead, it relies on Dom0 (privileged
VM) to manage the system.
 Without Dom0, Xen cannot function properly.
• The Xen Hypervisor sits between the hardware and virtual machines.
• It controls and manages access to CPU, memory, disk, and network resources.
• It ensures efficient sharing of hardware among multiple virtual machines.
Guest Virtual Machines (Third Section: Dom0 & DomU)
• This layer represents the virtual machines (VMs) running on top of Xen.
• Dom0 (Domain 0) – The Control Virtual Machine
 Dom0 is a special privileged VM that has direct access to hardware.
 It runs a management application to control other VMs.
 It includes device drivers that allow other VMs (DomU) to use hardware.
 It can create, delete, and manage virtual machines.
DomU (Unprivileged Virtual Machines)
These are guest virtual machines created by Dom0.
• DomU relies on virtualized resources provided by Dom0.
• Each DomU runs an operating system (OS) such as Linux or Windows.
• They run applications (APPs) just like normal computers.
• They do not have direct access to hardware; instead, they communicate with
Dom0 for hardware access.
How It Works
• Xen is an open-source type-1 hypervisor used to create and manage VMs.
• It supports paravirtualization (PV) and hardware-assisted virtualization (HVM) to
optimize performance.
• The control domain (Dom0) manages VMs (DomU) and hardware resources.
Where It Works:
• Used in cloud computing (AWS, Oracle Cloud) and enterprise data centers .
• Works on Linux-based servers.
Benefits:
• Lightweight and efficient due to minimal overhead.
• Supports both Windows and Linux VMs.
• High availability and fault tolerance features.
Example:
• A cloud provider like AWS uses Xen to run thousands of VMs.
• A startup wants to host its website on Amazon Web Services (AWS). Instead of
renting an entire physical server, they rent a Virtual Machine (VM) instance
powered by Xen Para-Virtualization.
• Google Cloud Compute Engine (GCE) used para-virtualization before shifting to
full hardware-assisted virtualization.
• Stock trading platforms using Xen para-virtualized VMs for high-speed, low-
latency transactions.

Feature comparison: Xen (Para-Virtualization) vs. VMware (Full Virtualization) vs.
Hyper-V (Hardware-Assisted)
• Virtualization Type: Xen – para-virtualization; VMware – full virtualization;
Hyper-V – hardware-assisted.
• Hypervisor Type: all three are Type-1 (bare metal).
• Guest OS Modification: Xen (PV) – yes, guests must be modified; VMware – no
(unmodified guests); Hyper-V – no (unmodified guests).
• Performance: Xen – high (less CPU overhead); VMware – moderate (more CPU
overhead); Hyper-V – high (uses hardware acceleration).
• Hardware Virtualization Support (Intel VT-x, AMD-V): Xen (PV) – not required;
VMware – yes; Hyper-V – yes.
• Live Migration: supported by all three.
• Best For: cloud computing (e.g., AWS, Citrix) in all three cases.
• Free & Open-Source: Xen – yes; VMware – no (paid license); Hyper-V – yes
(with Windows Server).

VMware
• VMware is a leading provider of virtualization and cloud computing technologies.
• VMware specializes in virtualization, which allows multiple operating systems
and applications to run on a single physical machine.
• VMware is built upon the principle of full virtualization, which involves
duplicating the underlying hardware and presenting it to the guest OS. The guest
OS operates without any awareness of this abstraction layer and requires no
modifications.
• Full virtualization is a virtualization technique that allows multiple virtual
machines (VMs) to run on a single physical host without modifications to the
guest operating systems. In a fully virtualized environment, each virtual machine
operates as if it has its own dedicated physical hardware, even though it shares
resources with other VMs on the same host.
• VMware provides a range of virtualization and cloud management solutions that
allow organizations to create and manage virtualized IT environments. VMware’s
most notable product is VMware vSphere, which includes the ESXi hypervisor,
vCenter Server for centralized management, and various other components for
virtual infrastructure management.

VMware vSphere
• A comprehensive server virtualization platform with a hypervisor (ESXi) and
management tools (vCenter Server).
• VMware ESXi – A lightweight, bare-metal hypervisor that allows multiple virtual
machines (VMs) to run on a single physical server.
• VMware vCenter Server – A centralized management tool for controlling multiple
ESXi hosts.

Key VMware Products


• VMware vSphere: vSphere includes the ESXi hypervisor (which allows multiple
virtual machines to run on a physical server) and vCenter Server (for centralized
management of VMs).
• VMware Workstation: A desktop virtualization product for developers and IT
professionals to run multiple OS environments on a single physical machine.
• VMware vCloud: A cloud computing service that enables the deployment of
virtualized resources for building private, public, or hybrid cloud infrastructures.
• VMware NSX: A network virtualization and security platform that abstracts
networking and security services from the underlying hardware.
• VMware Horizon: A virtual desktop infrastructure (VDI) solution that allows for
virtual desktops and apps to be delivered to end users securely, regardless of their
location.
• VMware vSAN: A software-defined storage solution that integrates with VMware
vSphere, providing a virtualized storage layer across servers in a cluster.
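As a hedged sketch of automating vSphere from Python, the snippet below follows the well-known pyVmomi "list all VMs" sample pattern to connect to a vCenter Server (or ESXi host) and print each VM's name and power state; the host name and credentials are placeholders, and certificate verification is skipped only to keep the example short.

# Sketch of talking to VMware vSphere (vCenter/ESXi) from Python with pyVmomi,
# following its community "list all VMs" sample pattern. Host and credentials
# are placeholders; skipping certificate checks is for brevity only.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=context)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
for vm in view.view:
    summary = vm.summary
    print(summary.config.name, summary.runtime.powerState)

view.Destroy()
Disconnect(si)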

Benefits of VMware:
• Efficiency: Allows multiple virtual servers to run on a single physical machine,
reducing hardware costs and increasing server utilization.
• Flexibility: Supports the creation of virtual environments for testing,
development, and production.
• Disaster Recovery: Virtualization allows for easier backup and replication of
virtual machines, improving disaster recovery options.
• Scalability: VMware’s infrastructure can scale easily as the organization grows,
enabling the addition of more virtual machines without a significant increase in
hardware.
• Automation: VMware tools and products automate many IT processes, reducing
administrative overhead.

Hyper-V
• Hyper-V is a type-1 (bare-metal) hypervisor developed by Microsoft. It allows
multiple operating systems (Windows, Linux, etc.) to run as virtual machines
(VMs) on a single physical machine.
• The hypervisor sits between the hardware and the guest OS, managing resources
like CPU, memory, and storage.
Hardware (x86)
• The bottom-most layer represents the physical hardware, including the processor
(CPU) and memory (RAM).
• Processor (CPU) – Handles computation and execution of instructions. Hyper-V
utilizes Intel VT-x or AMD-V extensions for hardware-assisted virtualization.
• Memory (RAM) – Allocated dynamically or statically to VMs.
• Storage & Networking – Includes hard drives, SSDs, and network adapters used
by VMs.

Hypervisor (Ring -1)


• The Hypervisor runs directly on the hardware (bare-metal) and manages virtual
machines (VMs).
• It provides essential services like:
– Hypercalls: A communication mechanism that allows guest operating
systems (OS) to request services from the hypervisor.
– Model-Specific Registers (MSRs): Special registers used for system
management and performance monitoring.
– APIC (Advanced Programmable Interrupt Controller): Manages interrupt
handling for virtualized environments.
– Scheduler: Allocates CPU time among VMs to ensure fair resource
distribution.
– Address & Partition Management: Ensures each VM has isolated memory
and resources.
Root / Parent Partition (Host OS)
• This is the main partition running Windows (Hyper-V host OS).
• It manages:
– Virtual Machine Management Service (VMMS): Controls and manages
virtual machines.
– Windows Management Instrumentation (WMI): Enables automation and
remote management of Hyper-V.
– I/O Stack & Drivers: Handles disk, network, and other hardware
interactions for VMs.
– VMBus: A high-speed internal communication channel between the parent
and child partitions, improving performance by avoiding emulation.
– WinHv (Windows Hypervisor Interface): Allows Windows guest OS to
communicate efficiently with the hypervisor.

Child Partitions (Guest VMs)


• These are the virtual machines running inside Hyper-V.
• Enlightened Child Partition (Hypervisor-aware guests)
– These VMs (Windows and Linux) are designed to work efficiently with
Hyper-V.
– They use Virtual Service Clients (VSCs) to communicate with the parent
partition for optimized performance.
– VMBus allows direct and faster access to storage, networking, and other
I/O operations.Windows guests use WinHv, and Linux guests use
LinuxHv to communicate with the hypervisor.
Unenlightened Child Partition (Hypervisor-unaware guests)
• These VMs are unaware that they are running in a virtualized environment. They
rely on emulated devices, which are slower than the VMBus approach. Hyper-V
provides device emulation to ensure compatibility.
Where it works:
• Runs on Windows Server and Windows 10/11 Pro/Enterprise editions.
• Used in enterprise data centers, cloud environments, and personal computing.
Benefits:
• Better resource utilization by running multiple OS instances .
• Isolation between VMs, making the system more secure.
• Live migration allows moving VMs between servers without downtime.
