Cloud Computing Notes (UNIT 1-2)
The cloud enables users to access the same files and applications from almost any
device, because the computing and storage take place on servers in a data centre
instead of locally on the user's device. Therefore, a user can log into their Instagram
account on a new phone after their old phone breaks and still find their old
account in place, with all their photos, videos, and conversation history. It works
the same way with cloud email providers like Gmail or Microsoft Office 365, and
with cloud storage providers like Dropbox or Google Drive.
2. Resource Access: Users access and utilize computing resources like servers,
storage, and software remotely.
3. On-Demand Services: Offers scalable, flexible, and cost-effective services tailored
to user needs.
4. Internet-Based: All services are provided over the Internet, eliminating the need for
users to own physical infrastructure.
5. Payment Model: Users pay for resources consumed, promoting cost efficiency.
6. Resource Sharing: Multiple users can share the same underlying infrastructure,
optimizing utilization and promoting efficiency.
The term “Cloud Computing” refers to the delivery of computing services such as
servers, storage, databases, networking, software, analytics, intelligence, and more
over the cloud (Internet). Cloud computing applies a virtualized platform with elastic
resources, dynamically provisioning hardware, software, and data sets on demand.
Cloud computing provides an alternative to the on-premises data center. With
an on-premises data center, we must manage everything ourselves: purchasing
and installing hardware, virtualization, installing the operating system and any other
required applications, setting up the network, configuring the firewall, and setting up
storage for data. After doing all the set-up, we remain responsible for maintaining it
through its entire lifecycle.
The cloud environment provides an easily accessible online portal that makes it handy
for the user to manage compute, storage, network, and application resources. Examples
of cloud service providers include AWS, Microsoft Azure, and Google Cloud Platform.
2. Scalability: The vision includes seamless scalability, enabling users to easily adjust
resources based on demand, ensuring optimal performance.
3. Cost Efficiency: A key aspect is the economic advantage, where users pay for
actual usage, reducing upfront costs and promoting efficiency.
4. Innovation Acceleration: Cloud computing aims to foster rapid innovation by
providing a platform for developers to create and deploy applications swiftly.
Essential Characteristics of Cloud Computing (NIST):
1. On-demand self-service.
2. Broad network access.
3. Resource pooling.
4. Rapid elasticity.
5. Measured service.
Basic Concepts
There are certain services and models working behind the scenes that make cloud
computing feasible and accessible to end users.
• Deployment Models
• Service Models
Deployment Models
• Deployment models define the type of access to the cloud, i.e., how the cloud is
located. A cloud can have any of four types of access: Public, Private, Hybrid,
and Community.
Public cloud
• Public cloud (off-site and remote) describes cloud computing where resources are
dynamically provisioned on an on-demand, self-service basis over the Internet, via
web applications/web services or open APIs, from a third-party provider who bills
on a utility computing basis.
Private cloud
• A private cloud environment is often the first step for a corporation prior to
adopting a public cloud initiative. Corporations have discovered the benefits of
consolidating shared services on virtualized hardware deployed from a primary
datacenter to serve local and remote users.
Hybrid cloud
• A hybrid cloud environment consists of some portion of computing resources on-
site (on-premises) and off-site (public cloud). By integrating public cloud services,
users can leverage cloud solutions for specific functions that are too costly to
maintain on-premises, such as virtual server disaster recovery, backups, and
test/development environments.
Community cloud
• A community cloud is formed when several organizations with similar requirements
share common infrastructure. Costs are spread over fewer users than a public cloud
but more than a single tenant.
Cloud Service Models
Infrastructure-as-a-Service (IaaS): In the Infrastructure-as-a-Service model, the service
provider owns the hardware equipment, such as servers, storage, and network, and
provides it as a service to clients. The client uses this equipment and pays on a per-use
basis (see the sketch below).
• E.g. Amazon Elastic Compute Cloud (EC2) and Simple Storage Service (S3).
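To make the IaaS idea concrete, here is a minimal Python sketch using boto3 (the AWS
SDK). It is an illustration only: the AMI ID, bucket name, and file names are
placeholders, and real use requires configured AWS credentials.

    import boto3

    # IaaS compute: the provider owns the hardware; we rent a virtual
    # server (an EC2 instance) and pay per use.
    ec2 = boto3.client("ec2", region_name="us-east-1")
    response = ec2.run_instances(
        ImageId="ami-12345678",   # placeholder machine image ID
        InstanceType="t2.micro",  # small pay-per-use instance size
        MinCount=1,
        MaxCount=1,
    )
    print("Launched:", response["Instances"][0]["InstanceId"])

    # IaaS storage: S3 bills only for the storage actually consumed.
    s3 = boto3.client("s3")
    s3.upload_file("report.csv", "my-example-bucket", "backups/report.csv")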
Platform-as-a-Service (PaaS): In the Platform-as-a-Service model, the complete
resources needed to design, develop, test, deploy, and host an application are
provided as services, without spending money on purchasing and maintaining
servers, storage, and software.
PaaS is an extension of IaaS. In addition to the fundamental computing resources
supplied by the hardware in an IaaS offering, PaaS models also include the software
and configuration required to create applications.
• E.g. Google App Engine.
Benefits
• One can access applications as utilities, over the Internet.
• One can manipulate and configure the applications online at any time.
• It does not require installing software to access or manipulate cloud applications.
• Cloud computing offers online development and deployment tools and a
programming runtime environment through the PaaS model.
• Cloud resources are available over the network in a manner that provides
platform-independent access to any type of client. Cloud computing offers on-
demand self-service: the resources can be used without interaction with the
cloud service provider.
• Cloud computing is highly cost-effective because it operates at high efficiency
with optimum utilization. It just requires an Internet connection.
• Cloud computing offers load balancing, which makes it more reliable.
HISTORY AND EVOLUTION
• Cloud computing is one of the most innovative technologies of our time. Following
is a brief history of cloud computing.
• EARLY 1960S:- The computer scientist John McCarthy came up with the concept
of time-sharing, enabling organizations to simultaneously use an expensive
mainframe. This is described as a significant contribution to the
development of the Internet, and a pioneer of cloud computing.
• IN 1969:- The idea of an “Intergalactic Computer Network” or “Galactic
Network” (a computer networking concept similar to today’s Internet) was
introduced by J.C.R. Licklider, who was responsible for enabling the
development of ARPANET (Advanced Research Projects Agency Network). His
vision was for everyone on the globe to be interconnected and able to access
programs and data at any site, from anywhere.
• IN 1970:- Virtualization made it possible to run more than one operating system
simultaneously in an isolated environment: a completely different computer (a
virtual machine) could run inside another operating system. This capability was
later commercialized by virtualization software such as VMware.
• IN 1997:- The first known definition of the term “Cloud Computing” seems to
be by Prof. Ramnath Chellappa in Dallas in 1997 – “A computing paradigm
where the boundaries of computing will be determined by economic rationale
rather than technical limits alone.”
• IN 1999:- The arrival of Salesforce.com in 1999 pioneered the concept of
delivering enterprise applications via a simple website. The services firm paved
the way for both specialist and mainstream software firms to deliver applications
over the Internet.
• In 2002:- Amazon launched its cloud computing web service known as AWS.
• IN 2003:- The first public release of Xen, which creates a Virtual Machine
Monitor (VMM) also known as a hypervisor, a software system that allows the
execution of multiple virtual guest operating systems simultaneously on a single
machine.
• IN 2006:- In 2006, Amazon expanded its cloud services. First was its Elastic
Compute Cloud (EC2), which allowed people to access computers and run their
own applications on them, all on the cloud. Then they brought out Simple
Storage Service (S3). This introduced the pay-as-you-go model to both users and
the industry as a whole, and it has basically become standard practice now.
• IN 2009:- Google Apps also started to provide cloud computing enterprise
applications. In 2009 Microsoft also launched Windows Azure, and companies
like Oracle and HP joined the game.
• IN 2013:-The Worldwide Public Cloud Services Market totalled £78bn, up 18.5
per cent on 2012, with IaaS (infrastructure-as-a-service) the fastest growing
market service.
• IN 2014:- In 2014, global business spending for infrastructure and services
related to the cloud was projected to reach an estimated £103.8bn, up 20% from
the amount spent in 2013 (Constellation Research).
• 2016: Serverless computing gained popularity with AWS Lambda and Azure Functions.
AI and machine learning services like Google AI and AWS SageMaker became mainstream.
• 2017: Multi-cloud strategies emerged as organizations sought to avoid vendor lock-in.
Edge computing started gaining traction, driven by IoT needs.
• 2018: 5G rollouts began, promising to accelerate edge computing and real-time applications.
The General Data Protection Regulation (GDPR) in the EU impacted cloud compliance requirements.
• 2019: Hybrid cloud solutions like AWS Outposts and Azure Arc were introduced.
Quantum computing began to integrate with cloud platforms (e.g., IBM Q Experience).
• 2020: The COVID-19 pandemic accelerated cloud adoption for remote work and digital transformation.
Video conferencing and collaboration tools like Zoom and Microsoft Teams scaled massively using cloud infrastructure.
• 2021: Sustainability became a focus, with cloud providers pledging carbon neutrality (e.g., Google, Microsoft).
Decentralized cloud models like Filecoin and IPFS gained attention.
• 2022: AI-powered cloud management tools automated resource allocation and cost optimization.
Industry-specific cloud solutions (e.g., for healthcare and finance) gained traction.
• 2023: Supercloud architectures emerged, enabling seamless integration across multiple cloud platforms.
The adoption of AI-driven applications and real-time analytics surged.
• 2024: Quantum computing services expanded, solving niche problems in finance, logistics, and healthcare.
Enhanced edge computing applications supported autonomous vehicles and AR/VR experiences.
• 2025: Cloud providers fully embraced 5G-enabled edge solutions, bringing low-latency services to global markets.
AI and blockchain integration in cloud systems enabled secure and transparent data management.
Evolution of Cloud Computing
• Distributed Systems
• Virtualization
• Web 2.0
• Service Oriented Computing
• Utility Oriented Computing
• Distributed System:
A distributed system is a composition of multiple independent systems, all of which are
presented to users as a single entity.
Properties:
Heterogeneity
Openness
Scalability
Transparency
Concurrency
Continuous Availability
Independent Failure
Three Milestones of Distributed Systems
• Mainframe Computing
• Cluster Computing
• Grid Computing
Mainframe Computing
• Mainframes, which first came into existence in 1951, are highly powerful and
reliable computing machines.
• They were the first large computational facilities.
• Used by large organizations for bulk data-processing tasks such as online
transactions and enterprise resource planning (ERP).
• Batch processing is the main application of mainframes.
• Examples: online booking, airline ticket booking, supermarkets and telcos,
government services.
Cluster Computing
• These were way cheaper than mainframe systems.
• New nodes could easily be added to the cluster if required.
• Evolved in the 1980s.
• Used in areas such as WebLogic application servers, databases, etc.
• A cluster is a group of independent computers that work together to
perform given tasks. Cluster computing is defined as a type of
computing that consists of two or more independent computers, referred to as
nodes, that work together to execute tasks as a single machine.
The goal of cluster computing is to increase the performance, scalability, and
simplicity of the system. All the nodes, irrespective of whether they are a parent
node or a child node, act as a single entity to perform the tasks; a single-machine
sketch of this divide-the-work idea follows.
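Real clusters distribute work across separate machines, but the divide-and-aggregate
idea can be sketched on one machine with Python worker processes standing in for
nodes (a simplified analogy, not a cluster framework):

    from concurrent.futures import ProcessPoolExecutor

    def partial_sum(chunk):
        # Each "node" (here a local worker process) handles part of the task.
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        # Split the work into chunks, one per worker, as a cluster scheduler would.
        chunks = [data[i::4] for i in range(4)]
        with ProcessPoolExecutor(max_workers=4) as pool:
            total = sum(pool.map(partial_sum, chunks))
        print("Sum of squares:", total)

The caller sees a single answer, just as cluster users see a single machine.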
Grid Computing
• In the 1990s, the concept of grid computing was introduced.
• Different systems were placed at entirely different geographical locations, and
these were all connected via the Internet.
• The grid consisted of heterogeneous nodes.
• Cloud computing is often referred to as the “successor of grid computing”.
• Used for predictive modeling, automation, simulation, etc.
• Grid computing is defined as a type of computing that constitutes a
network of computers working together to perform tasks that may be difficult
for a single machine to handle. All the computers on that network work under
the same umbrella and are termed a virtual supercomputer.
• The tasks they work on either demand high computing power or involve
large data sets. All communication between the computer systems in grid
computing is done on the “data grid”. The goal of grid computing is to solve
highly computational problems in less time and improve productivity.
Virtualization
• Virtualization creates a virtual layer over the hardware which allows the user to
run multiple instances simultaneously on the same hardware.
• It is the base on which major cloud computing services such as Amazon EC2,
VMware vCloud, etc. work.
• Hardware virtualization is still one of the most common types of virtualization.
Web 2.0
• Web 2.0 is the interface through which the cloud computing services interact
with the clients.
• It is because of Web 2.0 that we have interactive and dynamic web pages.
• It also increases flexibility among web pages. Popular examples of web 2.0
include Google Maps, Facebook, Twitter, etc.
• It gained major popularity in 2004.
Service Oriented computing
• A service orientation acts as a reference model for cloud computing.
• It supports low-cost, flexible, and evolvable applications.
• Two important concepts were introduced in this computing model: Quality of
Service (QoS), which also includes the SLA (Service Level Agreement), and
Software as a Service (SaaS).
Utility Computing
• Pay-per-use model for compute, storage, and infrastructure services.
• Resources can be used on demand as a metered service, as the sketch below illustrates.
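As a toy illustration of metered, pay-per-use billing, the Python sketch below totals a
monthly bill from usage records. The rates and resource names are invented for the
example and do not reflect any real provider's pricing.

    # Hypothetical unit prices, invented for illustration.
    RATES = {
        "compute_hours": 0.05,  # $ per instance-hour
        "storage_gb":    0.02,  # $ per GB-month
        "egress_gb":     0.09,  # $ per GB transferred out
    }

    def monthly_bill(usage):
        # Metered service: the bill is simply usage times unit price.
        return sum(RATES[resource] * amount for resource, amount in usage.items())

    usage = {"compute_hours": 720, "storage_gb": 100, "egress_gb": 50}
    print(f"Bill: ${monthly_bill(usage):.2f}")  # 36.00 + 2.00 + 4.50 = $42.50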
Advantages
• Easy backup and restore
• Excellent accessibility
• Low maintenance cost
• Mobility
• Huge /Unlimited storage capacity
• Allows pay-per-use mode
Disadvantages
• Internet Connectivity
• Vendor lock-in (migration between providers can be difficult)
• Limited Control
• Security
Architecture of Cloud Computing
Architecture of cloud computing is the combination of both SOA (Service Oriented
Architecture) and EDA (Event Driven Architecture). Client infrastructure, application,
service, runtime cloud, storage, infrastructure, management and security all these are the
components of cloud computing architecture.
The cloud architecture is divided into 2 parts, i.e.
1. Frontend
2. Backend
• Front End
The front end is used by the client.
It contains client-side interfaces and applications that are required to access the
cloud computing platforms.
The front end includes web browsers (such as Chrome, Firefox, Internet Explorer,
etc.), tablets, and mobile devices.
• Back End
The back end is used by the service provider.
It manages all the resources that are required to provide cloud computing services.
It includes huge amounts of data storage, security mechanisms, virtual machines,
deployment models, servers, traffic control mechanisms, etc.
Virtualization
• Virtualization is the abstraction of hardware or of a runtime environment.
• It creates a virtual layer over the hardware which allows the user to run multiple
instances simultaneously on the same hardware.
• It is the base on which major cloud computing services such as Amazon EC2,
VMware vCloud, etc. work.
Service Orientation
• A service orientation acts as a reference model for cloud computing.
• It supports low-cost, flexible, and evolvable applications.
• IaaS provides the ability to add resources as well as remove them, to scale up
and scale down.
• PaaS embeds code, offering algorithms and rules.
Web 2.0
• Core technology through which we can access all the services.
• Web 2.0 is the interface through which the cloud computing services interact
with the clients.
• It is because of Web 2.0 that we have interactive and dynamic web pages.
• Cloud computing is therefore called XaaS: everything as a service.
Computing platforms and technologies in Cloud Computing
• AWS
• Google App Engine
• Microsoft Azure
• Hadoop
• Force.com and salesforce.com
• Manjrasoft Aneka
Hadoop
• Open-Source Framework: Designed for distributed storage and processing of
large datasets across clusters of commodity hardware.
• An implementation of MapReduce.
• MapReduce, developed by Google, consists of Map and Reduce functions.
• The Map function transforms and synthesizes the input data, and the Reduce
function aggregates the data (see the sketch after this list).
• Core Components: Includes HDFS (Hadoop Distributed File System) for storage
and MapReduce for processing, for efficient big-data handling.
• Scalable and Fault-Tolerant: Can handle increasing data volumes by adding
nodes, with built-in fault tolerance.
• Ecosystem: Supports additional tools like Hive (SQL queries), Pig (data
analysis), and Spark (in-memory processing).
• Hadoop is sponsored by Yahoo.
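The sketch below shows the MapReduce pattern itself as plain, single-process Python
(word count, the classic example). In real Hadoop the framework runs many mappers and
reducers across cluster nodes, performs the shuffle between them, and reads input
from HDFS.

    from collections import defaultdict

    # Map: turn each input record into (key, value) pairs.
    def map_fn(line):
        for word in line.split():
            yield (word.lower(), 1)

    # Reduce: aggregate all values that share a key.
    def reduce_fn(word, counts):
        return (word, sum(counts))

    lines = ["the cloud stores data", "the cloud scales"]

    # Shuffle: group intermediate pairs by key (Hadoop does this across nodes).
    groups = defaultdict(list)
    for line in lines:
        for word, count in map_fn(line):
            groups[word].append(count)

    results = [reduce_fn(word, counts) for word, counts in groups.items()]
    print(results)  # [('the', 2), ('cloud', 2), ('stores', 1), ('data', 1), ('scales', 1)]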
Force.com and salesforce.com
• Force.com: A platform-as-a-service (PaaS) for building and deploying custom
applications within the Salesforce ecosystem.
• Salesforce.com: A software-as-a-service (SaaS) CRM (Customer Relationship
Management) platform, built on the Force.com platform, for sales, customer
service, and marketing automation.
• It is an American cloud computing company headquartered in San Francisco,
California.
• Customizable: Both platforms allow extensive customization using Apex code
and Lightning components.
• Cloud-Based: Focuses on cloud solutions for managing customer relationships
and business workflows.
Manjrasoft Aneka
• Manjrasoft Aneka is a platform that helps developers build and manage
distributed applications on the cloud.
• Cloud Application Platform: Provides a framework for developing and
deploying cloud applications with resource provisioning.
• Supports Multiple Models: Includes task, thread, and map-reduce models for
different application needs.
• Cross-Platform: Enables integration with private, public, and hybrid cloud
environments.
• Ease of Use: Offers APIs and a graphical user interface for simplifying cloud
application development and management.
• Aneka is a Platform as a Service (PaaS) cloud software that allows users to build
and manage applications that can run on private, public, and hybrid clouds.
Aneka: Aneka is a .NET-based service-oriented resource management and
development platform. Each server in an Aneka deployment (dubbed an Aneka cloud
node) hosts the Aneka container, which provides the base infrastructure, consisting
of services for persistence, security (authorization, authentication, and auditing), and
communication (message handling and dispatching). Cloud nodes can be
physical servers, virtual machines (XenServer and VMware are supported), or
instances rented from Amazon EC2. The Aneka container can also host any number of
optional services that can be added by developers to augment the capabilities of an
Aneka cloud node, thus providing a single, extensible framework for orchestrating
various application models.
Several programming models are supported, such as task models that enable
execution of legacy HPC applications, and MapReduce, which enables a variety of
data-mining and search applications. Users request resources via a client to a
reservation services manager of the Aneka master node, which manages all cloud
nodes and contains a scheduling service to distribute requests to cloud nodes.
App Engine: Google App Engine lets you run your Python and Java web applications
on elastic infrastructure supplied by Google. App Engine allows your
applications to scale dynamically as your traffic and data storage requirements
increase or decrease. It gives developers a choice between a Python stack and Java.
The App Engine serving architecture is notable in that it allows real-time auto-
scaling without virtualization for many common types of web applications.
However, such auto-scaling is dependent on the application developer using a
limited subset of the native APIs on each platform, and in some instances you need
to use specific Google APIs such as URLFetch, Datastore, and memcache in
place of certain native API calls. For example, a deployed App Engine application
cannot write to the file system directly (you must use the Google Datastore) or open
a socket or access another host directly (you must use the Google URLFetch service). A
Java application cannot create a new Thread either.
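As a concrete sketch, a minimal Python web application of the kind App Engine hosts
could look like the following (using Flask, one common choice on the modern Python
runtime; the route and messages are illustrative). Deployment uses an app.yaml
descriptor and the gcloud CLI rather than any server management.

    # main.py - minimal web app of the kind App Engine can host and auto-scale.
    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def home():
        # App Engine routes HTTP requests here and adds or removes
        # instances automatically as traffic rises and falls.
        return "Hello from the cloud!"

    if __name__ == "__main__":
        # Local testing only; in production App Engine runs the app itself.
        app.run(port=8080)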
Microsoft Azure: Microsoft Azure Cloud Services offers developers a hosted .NET
stack (C#, VB.NET, ASP.NET). In addition, a Java and Ruby SDK for .NET
Services is also available. The Azure system consists of a number of elements. The
Windows Azure Fabric Controller provides auto-scaling and reliability, and it
manages memory resources and load balancing. The .NET Service Bus registers and
connects applications together. The .NET Access Control identity providers include
enterprise directories and Windows LiveID. Finally, the .NET Workflow allows
construction and execution of workflow instances.
Force.com: In conjunction with the Salesforce.com service, the Force.com PaaS
allows developers to create add-on functionality that integrates into the main Salesforce
CRM SaaS application. Force.com offers developers two approaches to create
applications that can be deployed on its SaaS platform: a hosted Apex or
Visualforce application. Apex is a proprietary Java-like language that can be
used to create Salesforce applications. Visualforce is an XML-like syntax for
building UIs in HTML, AJAX, or Flex to overlay over the Salesforce-hosted CRM
system. An application store called AppExchange is also provided, which offers
both paid and free applications.
Heroku: Heroku is a platform for instant deployment of Ruby on Rails web
applications. In the Heroku system, servers are invisibly managed by the platform
and are never exposed to users. Applications are automatically dispersed across
different CPU cores and servers, maximizing performance and minimizing
contention. Heroku has an advanced logic layer that can automatically route around
failures, ensuring seamless and uninterrupted service at all times.
UNIT-2
Virtualization is a technique for separating a service from the underlying
physical delivery of that service. It is the process of creating a virtual version of
something, such as computer hardware. It was initially developed during the mainframe
era. It involves using specialized software to create a virtual, software-created
version of a computing resource rather than the actual version of the same resource.
With the help of virtualization, multiple operating systems and applications can run on
the same machine and the same hardware at the same time, increasing the utilization
and flexibility of the hardware.
In other words, one of the main cost-effective, hardware-reducing, and energy-
saving techniques used by cloud providers is Virtualization. Virtualization allows
sharing of a single physical instance of a resource or an application among multiple
customers and organizations at one time. It does this by assigning a logical name to
physical storage and providing a pointer to that physical resource on demand. The term
virtualization is often synonymous with hardware virtualization, which plays a
fundamental role in efficiently delivering Infrastructure-as-a-Service (IaaS) solutions
for cloud computing. Moreover, virtualization technologies provide a virtual
environment for not only executing applications but also for storage, memory, and
networking.
Host Machine: The machine on which the virtual machine is going to be built is known
as Host Machine.
Guest Machine: The virtual machine is referred to as a Guest Machine.
Characteristics of Virtualization
Increased Security: The ability to control the execution of a guest program in a
completely transparent manner opens new possibilities for delivering a secure,
controlled execution environment. All the operations of the guest programs are
generally performed against the virtual machine, which then translates and applies
them to the host programs.
Managed Execution: In particular, sharing, aggregation, emulation, and isolation
are the most relevant features.
Sharing: Virtualization allows the creation of a separate computing environment
within the same host.
Aggregation: It is possible to share physical resources among several guests, but
virtualization also allows aggregation, the opposite process: multiple hosts can be
grouped together and represented as a single virtual host.
• Resource Abstraction
Virtualization abstracts physical hardware resources (CPU, memory, storage, and
network) and presents them as virtual resources.
• Isolation
Each virtual machine operates independently of others, ensuring that issues in one
VM (e.g., crashes or security breaches) do not affect others.
• Scalability
Virtualized environments enable dynamic scaling of resources based on workload
demands.
• Flexibility
Virtualized environments support running different operating systems (e.g.,
Windows, Linux) on the same physical machine.
• High Resource Utilization
Virtualization improves hardware utilization by allowing multiple workloads to
share the same physical resources.
• Portability
Virtual machines and containers can be easily moved between hosts or data
centers, enabling disaster recovery, load balancing, and efficient resource
management.
• Snapshot and Cloning
Virtualized environments support creating snapshots of VMs, which can be used
for backup, testing, or rollback.
• Security
Virtualization includes features like sandboxing, which isolates applications or
systems for enhanced security.
• Cost Efficiency
By consolidating workloads on fewer physical machines, virtualization reduces
the need for physical hardware, lowering capital and operational expenses.
• Simplified Management
Centralized management tools (e.g., VMware vCenter, Microsoft Hyper-V
Manager) provide an interface to monitor, control, and optimize virtualized
resources.
• Fault Tolerance and High Availability
Virtualized environments often include fault-tolerance mechanisms to ensure
continuity during hardware failures.
• Elasticity
Resources can be dynamically adjusted to meet changing demands, making
virtualized environments ideal for cloud computing and on-demand services.
Hypervisor
• A hypervisor is a program used to create, run, and manage one or more virtual
machines on a computer.
• Tools such as VirtualBox share hardware resources from the host OS.
• Each VM gets its own set of virtual CPU, RAM, storage, etc.
• VMs are fully isolated (independent of the host OS).
• It is software that creates and runs virtual machines (VMs).
• The virtualization layer consists of a hypervisor or a Virtual Machine Monitor
(VMM).
• Work of a hypervisor: create, manage, monitor, and run VMs.
There are two types of hypervisors
• Type-1 Hypervisors or Native Hypervisors or Bare Metal
Type-1 Hypervisors or Native Hypervisors run directly on the host hardware and
control the hardware and monitor the guest operating systems.
• Type-2 Hypervisors or Hosted Hypervisors
Type-2 Hypervisors or Hosted Hypervisors run on top of a conventional (main or
host) operating system and monitor the guest operating systems. The sketch below
shows how management software can query a hypervisor.
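To show how management software talks to a hypervisor, the sketch below uses the
libvirt Python bindings to list the virtual machines on a host. It assumes a Linux
host with KVM/QEMU and the libvirt-python package installed; the connection URI
varies by setup.

    import libvirt  # Python bindings for the libvirt virtualization API

    # Connect to the local hypervisor (QEMU/KVM in this example).
    conn = libvirt.open("qemu:///system")

    # Each libvirt "domain" is one virtual machine managed by the hypervisor.
    for dom in conn.listAllDomains():
        state = "running" if dom.isActive() else "stopped"
        print(f"VM: {dom.name()} ({state})")

    conn.close()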
Examples of Hypervisor
Xen
• Xen is an open-source, Type 1 hypervisor that supports para-virtualization.
• It is widely used in cloud computing platforms, such as Amazon Web Services
(AWS).
• While Xen started with para-virtualization as its primary model, it also supports
full virtualization for running operating systems that are not modified to work
with the hypervisor (like Windows).
• In full virtualization mode, Xen uses a technique called hardware-assisted
virtualization (with Intel VT-x or AMD-V), allowing unmodified guest operating
systems to run in virtual machines.
• Xen supports both para-virtualization (for Linux and other modified OSes) and
full virtualization (for unmodified guest OSes like Windows).
Hardware Layer (Bottom Section)
• This is the physical hardware of the system, which includes:
CPU – The processor that runs instructions.
Disk – Hard drive or SSD for storage.
Network/PCI – Network interface cards (NICs) and PCI devices.
Memory (RAM) – Physical memory for storing running programs.
• The Xen Hypervisor runs directly on this hardware (bare metal) to manage virtual
machines.
Xen Hypervisor Layer (Second Section)
• This is the core of the virtualization platform.
• Xen does not have a built-in user interface. Instead, it relies on Dom0 (the
privileged VM) to manage the system. Without Dom0, Xen cannot function properly.
• The Xen Hypervisor sits between the hardware and virtual machines.
• It controls and manages access to CPU, memory, disk, and network resources.
• It ensures efficient sharing of hardware among multiple virtual machines.
Guest Virtual Machines (Third Section: Dom0 & DomU)
• This layer represents the virtual machines (VMs) running on top of Xen.
• Dom0 (Domain 0) – The Control Virtual Machine
Dom0 is a special privileged VM that has direct access to hardware.
It runs a management application to control other VMs.
It includes device drivers that allow other VMs (DomU) to use hardware.
It can create, delete, and manage virtual machines.
DomU (Unprivileged Virtual Machines)
These are guest virtual machines created by Dom0.
• DomU relies on virtualized resources provided by Dom0.
• Each DomU runs an operating system (OS) such as Linux or Windows.
• They run applications (APPs) just like normal computers.
• They do not have direct access to hardware; instead, they communicate with
Dom0 for hardware access.
How It Works
• Xen is an open-source type-1 hypervisor used to create and manage VMs.
• It supports paravirtualization (PV) and hardware-assisted virtualization (HVM) to
optimize performance.
• The control domain (Dom0) manages VMs (DomU) and hardware resources.
Where It Works:
• Used in cloud computing (AWS, Oracle Cloud) and enterprise data centers.
• Works on Linux-based servers.
Benefits:
• Lightweight and efficient due to minimal overhead.
• Supports both Windows and Linux VMs.
• High availability and fault tolerance features.
Example:
• A cloud provider like AWS uses Xen to run thousands of VMs.
• A startup wants to host its website on Amazon Web Services (AWS). Instead of
renting an entire physical server, they rent a Virtual Machine (VM) instance
powered by Xen Para-Virtualization.
• Google Cloud Compute Engine (GCE) used para-virtualization before shifting to
full hardware-assisted virtualization.
• Stock trading platforms using Xen para-virtualized VMs for high-speed, low-
latency transactions.
Virtualization Type:  Para Virtualization | Full Virtualization | Hardware Assisted
Hypervisor Type:      Type-1 (Bare Metal) | Type-1 (Bare Metal) | Type-1 (Bare Metal)
VMware
• VMware is a leading provider of virtualization and cloud computing technologies.
• VMware specializes in virtualization, which allows multiple operating systems
and applications to run on a single physical machine.
• VMware is built upon the principle of full virtualization, which involves
duplicating the underlying hardware and presenting it to the guest OS. The guest
OS operates without any awareness of this abstraction layer and requires no
modifications.
• Full virtualization is a virtualization technique that allows multiple virtual
machines (VMs) to run on a single physical host without modifications to the
guest operating systems. In a fully virtualized environment, each virtual machine
operates as if it has its own dedicated physical hardware, even though it shares
resources with other VMs on the same host.
• VMware provides a range of virtualization and cloud management solutions that
allow organizations to create and manage virtualized IT environments. VMware’s
most notable product is VMware vSphere, which includes the ESXi hypervisor,
vCenter Server for centralized management, and various other components for
virtual infrastructure management.
VMware vSphere
• A comprehensive server virtualization platform with a hypervisor (ESXi) and
management tools (vCenter Server).
• VMware ESXi – A lightweight, bare-metal hypervisor that allows multiple virtual
machines (VMs) to run on a single physical server.
• VMware vCenter Server – A centralized management tool for controlling multiple
ESXi hosts.
Benefits of VMware:
• Efficiency: Allows multiple virtual servers to run on a single physical machine,
reducing hardware costs and increasing server utilization.
• Flexibility: Supports the creation of virtual environments for testing,
development, and production.
• Disaster Recovery: Virtualization allows for easier backup and replication of
virtual machines, improving disaster recovery options.
• Scalability: VMware’s infrastructure can scale easily as the organization grows,
enabling the addition of more virtual machines without a significant increase in
hardware.
• Automation: VMware tools and products automate many IT processes, reducing
administrative overhead.
Hyper-V
• Hyper-V is a type-1 (bare-metal) hypervisor developed by Microsoft. It allows
multiple operating systems (Windows, Linux, etc.) to run as virtual machines
(VMs) on a single physical machine.
• The hypervisor sits between the hardware and the guest OS, managing resources
like CPU, memory, and storage.
Hardware (x86)
• The bottom-most layer represents the physical hardware, including the Processor
(CPU) and Memory (RAM).
• Hyper-V utilizes Intel VT-x or AMD-V extensions for hardware-assisted
virtualization.
• Processor (CPU) – Handles computation and execution of instructions.
• Memory (RAM) – Allocated dynamically or statically to VMs.
• Storage & Networking – Includes hard drives, SSDs, and network adapters used
by VMs.