VIRTUALIZATION
UNIT I -
INTRODUCTION TO VIRTUALIZATION
S. Duraimurugan,
Asso. Prof./CSE PERI Institute of Technology
UNIT I
INTRODUCTION TO VIRTUALIZATION
S.No. Topic
1. Virtualization and cloud computing
2. Need of virtualization
4. Limitations
6. Full virtualization
7. Partial virtualization
8. Para virtualization
9. Types of hypervisors
CCS372 VIRTUALIZATION
UNIT I INTRODUCTION TO VIRTUALIZATION
Virtualization and cloud computing - Need of virtualization – cost, administration, fast deployment,
reduce infrastructure cost – limitations- Types of hardware virtualization: Full virtualization – partial
virtualization - Paravirtualization-Types of Hypervisors.
- Virtualization plays an important role in cloud computing since it allows for the appropriate
degree of customization, security, isolation, and manageability that are fundamental for
delivering IT services on demand.
- Virtualization technologies are primarily used to offer configurable computing environments and
storage. Network virtualization is less popular and, in most cases, is a complementary feature,
which is naturally needed in building virtual computing systems.
- Particularly important is the role of virtual computing environment and execution virtualization
techniques.
- Among these, hardware and programming language virtualization are the techniques adopted in
cloud computing systems.
- Hardware virtualization is an enabling factor for solutions in the Infrastructure-as-a-Service
(IaaS) market segment, while programming language virtualization is a technology leveraged in
Platform-as-a-Service (PaaS) offerings.
- In both cases, the capability of offering a customizable and sandboxed environment constituted
an attractive business opportunity for companies featuring a large computing infrastructure that
was able to sustain and process huge workloads.
- Moreover, virtualization also allows isolation and a finer control, thus simplifying the leasing of
services and their accountability on the vendor side.
- Besides being an enabler for computation on demand, virtualization also gives the opportunity to
design more efficient computing systems by means of consolidation, which is performed
transparently to cloud computing service users.
- Since virtualization allows us to create isolated and controllable environments, it is possible to
serve these environments with the same resource without them interfering with each other.
- If the underlying resources are capable enough, there will be no evidence of such sharing.
- This opportunity is particularly attractive when resources are underutilized, because it allows
reducing the number of active resources by aggregating virtual machines over a smaller number
of resources that become fully utilized. This practice is also known as server consolidation,
while the movement of virtual machine instances is called virtual machine migration (see
Figure 3.10).
- Because virtual machine instances are controllable environments, consolidation can be applied
with minimal impact, either by temporarily stopping a VM's execution and moving its data to the
new resources or by exercising finer control and moving the instance while it is running.
- This second technique is known as live migration and is in general more complex to implement,
but more efficient, since there is no disruption to the activity of the virtual machine instance.
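Live migration is commonly implemented with a pre-copy strategy: copy all memory pages while the VM keeps running, then repeatedly re-send the pages the guest dirtied in the meantime, and pause the VM only for a final short copy. A simplified model of that loop, with invented page counts and dirty rates:

```python
def precopy_rounds(total_pages, dirty_per_round, stop_threshold, max_rounds=30):
    """Model the pre-copy phase of live migration.

    Returns (rounds, pages_left), where pages_left is what remains to
    transfer during the brief stop-and-copy phase at the end.
    """
    to_copy = total_pages
    rounds = 0
    while to_copy > stop_threshold and rounds < max_rounds:
        rounds += 1
        # copy everything outstanding; meanwhile the running guest
        # dirties some pages, which must be sent again next round
        to_copy = min(dirty_per_round, total_pages)
    return rounds, to_copy

rounds, remaining = precopy_rounds(total_pages=100_000,
                                   dirty_per_round=500,
                                   stop_threshold=1_000)
# the VM is finally paused with only `remaining` pages left to copy,
# so the downtime is a tiny fraction of a full stop-and-copy move
```

If the guest dirties pages faster than the threshold allows, the loop gives up after `max_rounds` and falls back to stop-and-copy, which mirrors how real migrations bound their iteration count.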
- Server consolidation and virtual machine migration are principally used in the case of hardware
virtualization, even though they are also technically possible in the case of programming
language virtualization.
- Finally, cloud computing revamps the concept of desktop virtualization, initially introduced in
the mainframe era. The ability to recreate the entire computing stack—from infrastructure to
application services—on demand opens the path to having a complete virtual computer hosted on
the infrastructure of the provider and accessed by a thin client over a capable Internet connection.
1. Cloud Computing :
Cloud computing is a client-server computing architecture. In cloud computing, resources are used in a
centralized pattern, and cloud computing is a highly accessible service. Cloud computing is a
pay-per-use business model: users pay for what they use.
2. Virtualization:
Virtualization is the foundation of cloud computing. It is the technology that enables a continuous
supply of resources to be generated from a single physical device or system. The role of the
hypervisor is essential here: it is directly associated with the hardware and creates several virtual
machines from it. These virtual machines operate distinctly and independently and do not interfere
with one another. Virtualization also helps in disaster recovery, since workloads are not tied to a
single peripheral device or a single piece of dedicated hardware.
1. ENHANCED PERFORMANCE:
Currently, the end-user system (i.e., the PC) is sufficiently powerful to fulfill all the basic computation
requirements of the user, with various additional capabilities that are rarely used. Most of
these systems have sufficient resources to host a virtual machine manager and run a
virtual machine with acceptable performance.
2. UNDERUTILIZED HARDWARE AND SOFTWARE RESOURCES:
The limited use of these resources leads to under-utilization of hardware and software. Because users'
PCs are capable enough to fulfill their regular computational needs, many of these computers sit idle
much of the time, even though they could be used 24/7 continuously without any interruption. The
efficiency of the IT infrastructure could be increased by using these resources after hours for other
purposes. Such an environment is attainable with the help of virtualization.
3. SHORTAGE OF SPACE:
The regular requirement for additional capacity, whether memory storage or compute power, makes data
centers grow rapidly. Companies like Google, Microsoft, and Amazon expand their infrastructure by
building data centers as their needs dictate. Most enterprises, however, cannot afford to build another
data center to accommodate additional resource capacity. This has led to the diffusion of a technique
known as server consolidation.
4. ECO-FRIENDLY INITIATIVES:
At this time, corporations are actively seeking methods to minimize the expenditure on power
consumed by their systems. Data centers are major power consumers: maintaining data center
operations needs a continuous power supply, and a good amount of energy is also required to
keep the systems cool enough to function well. Server consolidation therefore cuts both power
consumption and cooling load by reducing the number of servers. Virtualization provides a
sophisticated method of server consolidation.
5. ADMINISTRATIVE COSTS:
Furthermore, the rising demand for surplus capacity, which translates into more servers in a data center,
is responsible for a significant increase in administrative costs. Common system administration tasks
include hardware monitoring, server setup and updates, replacement of defective hardware, monitoring
of server resources, and backups. These are personnel-intensive operations, and administrative costs
grow with the number of servers. Virtualization decreases the number of servers required for a
given workload and hence reduces the cost of administrative staff.
Why Virtualize?
Virtualization can help companies maximize the value of IT investments, decreasing the server hardware
footprint, energy consumption, and cost and complexity of managing IT systems while increasing the
flexibility of the overall environment.
Cost
Depending on your solution, you can have a cost-free datacenter. You do have to shell out the money for
the physical server itself, but there are options for free virtualization software and free operating systems.
Microsoft’s Virtual Server and VMware Server are free to download and install. If you use a licensed
operating system, of course that will cost money. For instance, if you wanted five instances of Windows
Server on that physical server, then you’re going to have to pay for the licenses. That said, if you were to
use a free version of Linux for the host and operating system, then all you’ve had to pay for is the physical
server.
Naturally, there is an element of “you get what you pay for.” There’s a reason most organizations have
paid to install an OS on their systems. When you install a free OS, there is often a higher total cost of operation,
because it can be more labor intensive to manage the OS and apply patches.
Administration
Having all your servers in one place reduces your administrative burden. According to VMware, you can
reduce your administrative burden from 1:10 to 1:30. What this means is that you can save time in your
daily server administration or add more servers by having a virtualized environment. The following factors
ease your administrative burdens:
• A centralized console allows quicker access to servers.
• CDs and DVDs can be quickly mounted using ISO files.
• New servers can be quickly deployed.
• New virtual servers can be deployed more inexpensively than physical servers.
• RAM and disk space can be quickly allocated to virtual servers.
• Virtual servers can be moved from one server to another.
Fast Deployment
Because every virtual guest server is just a file on a disk, it’s easy to copy (or clone) a system to create a
new one. To copy an existing server, just copy the entire directory of the current virtual server.
This can be used in the event the physical server fails, or if you want to test out a new application to
ensure that it will work and play well with the other tools on your network.
Virtualization software allows you to make clones of your work environment for these endeavors. Also,
not everyone in your organization is going to be doing the same tasks. As such, you may want different
work environments for different users. Virtualization allows you to do this.
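Because a guest is just a directory of files, cloning can be sketched with ordinary file operations. The helper name and paths below are hypothetical, and the exact directory layout depends on your virtualization product:

```python
import shutil
from pathlib import Path

def clone_vm(vm_dir: str, clone_dir: str) -> Path:
    """Clone a virtual guest by copying its entire directory.

    The guest should be powered off first so its disk images and
    configuration files are in a consistent state.
    """
    src, dst = Path(vm_dir), Path(clone_dir)
    if dst.exists():
        raise FileExistsError(f"{dst} already exists; refusing to overwrite")
    shutil.copytree(src, dst)   # copies disk images, config, snapshots
    return dst

# Hypothetical paths; adjust to your product's VM storage layout
# clone_vm("/vms/web-server", "/vms/web-server-test")
```

The clone can then be registered with the virtualization software and booted as an independent test copy of the original server.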
Reduced Infrastructure Costs
We already talked about how you can cut costs by using free servers and clients, like Linux, as well as
free distributions of Windows Virtual Server, Hyper-V, or VMware. But there are also reduced costs
across your organization. If you reduce the number of physical servers you use, then you save money on
hardware, cooling, and electricity. You also reduce the number of network ports, console video ports,
mouse ports, and rack space.
Some of the savings you realize include
• Increased hardware utilization by as much as 70 percent
• Decreased hardware and software capital costs by as much as 40 percent
• Decreased operating costs by as much as 70 percent
Limitations
Sure, we've patted virtualization on the back and described how it can be helpful, but there are times
when it is not ideal. For instance, graphics-intensive applications are not well suited for today's
virtual environment. Virtualized video hardware cannot meet the requirements of a high-performance
graphics adapter, so gaming, CAD, and software requiring three-dimensional graphics are not ideal
for a virtualized environment.
Databases and business intelligence software are also poor matches for virtualization, simply
because they require a lot more memory and processor power than current virtualized servers can
provide. Databases can be successful, if small enough, but they will scale poorly.
Further, server applications that require access to hardware like PCI cards and USB devices are
difficult to virtualize. Also server virtualization doesn’t typically play well with proprietary
hardware, so applications that need the use of more than the Ethernet jack are not typically going
to work.
Security
When it comes to security, the same risks that exist for a physical server exist for a virtualized server.
There is a misconception that virtual servers are somehow immune to these problems or that the host
server acts as sort of a bodyguard, but that’s not the case. Virtual machines need to have the same
networking concerns dealt with and the same virus concerns addressed as a physical machine. You also
need to protect against spyware and malware. When configuring your servers, let your host server do just
its job: don't add any extra applications it doesn't need. It will be safer and perform its duties better
when it has just one thing to do.
In fact, security is extra important on a virtualized server, because a compromised virtualization host
can potentially lead to the failure of the other virtualized machines on the same physical server.
It’s ideal to separate the virtualization host and virtualized machines and be extra cautious when setting
up perimeter security to further protect host servers. It’s also ideal to have strong, highly guarded
passwords and run with the smallest number of privileges.
Host Machine: The machine on which the virtual machine is going to be built is known as Host
Machine.
Guest Machine: The virtual machine is referred to as a Guest Machine.
Work of Virtualization in Cloud Computing
- Virtualization has a prominent impact on cloud computing. In cloud computing, users store
data in the cloud, but with the help of virtualization they gain the extra benefit of sharing the
infrastructure.
- Cloud vendors take care of the required physical resources, but they charge a substantial
amount for these services, which impacts every user and organization.
- Virtualization lets users and organizations have the services a company requires maintained by
external (third-party) providers, which helps reduce costs for the company. This is the way
virtualization works in cloud computing.
Benefits of Virtualization
More flexible and efficient allocation of resources.
Enhance development productivity.
It lowers the cost of IT infrastructure.
Remote access and rapid scalability.
High availability and disaster recovery.
Pay-per-use of the IT infrastructure on demand.
Enables running multiple operating systems.
Drawback of Virtualization
High Initial Investment: Clouds require a very high initial investment, although it is also true that
they help reduce companies' costs over time.
Learning New Infrastructure: As companies shift from in-house servers to the cloud, they need
highly skilled staff who can work with the cloud easily; this means hiring new staff or providing
training to current staff.
Risk of Data: Hosting data on third-party resources can put the data at risk, with a greater chance
of it being attacked by a hacker or cracker.
Characteristics of Virtualization
Increased Security: The ability to control the execution of a guest program in a completely
transparent manner opens new possibilities for delivering a secure, controlled execution
environment. All the operations of the guest programs are generally performed against the virtual
machine, which then translates and applies them to the host programs.
Managed Execution: In particular, sharing, aggregation, emulation, and isolation are the most
relevant features.
Sharing: Virtualization allows the creation of a separate computing environment within the same
host.
Aggregation: Not only is it possible to share physical resources among several guests, but
virtualization also allows aggregation, the opposite process: a group of separate hosts can be tied
together and represented to guests as a single virtual host.
Types of Virtualization
1. Application Virtualization
2. Network Virtualization
3. Desktop Virtualization
4. Storage Virtualization
5. Server Virtualization
6. Data virtualization
1. Application Virtualization:
- Application virtualization helps a user gain remote access to an application from a server.
- The server stores all personal information and other characteristics of the application, yet the
application can still be run on a local workstation through the internet.
- An example would be a user who needs to run two different versions of the same software.
Technologies that use application virtualization include hosted applications and packaged
applications.
2. Network Virtualization:
- Network virtualization is the ability to run multiple virtual networks, each with a separate control
plane and data plane, co-existing on top of one physical network.
- Each virtual network can be managed by individual parties that are potentially confidential to
each other.
- Network virtualization provides a facility to create and provision virtual networks, logical
switches, routers, firewalls, load balancers, Virtual Private Networks (VPNs), and workload
security within days or even weeks.
3. Desktop Virtualization:
- Desktop virtualization allows the users’ OS to be remotely stored on a server in the data center.
- It allows users to access their desktops virtually, from any location, using a different machine.
- Users who want specific operating systems other than Windows Server will need to have a virtual
desktop.
- The main benefits of desktop virtualization are user mobility, portability, and easy management
of software installation, updates, and patches.
4. Storage Virtualization:
- Storage virtualization is an array of servers that are managed by a virtual storage system.
- The servers aren’t aware of exactly where their data is stored and instead function more like
worker bees in a hive.
- It allows storage from multiple sources to be managed and utilized as a single repository.
Storage virtualization software maintains smooth operations, consistent performance, and a
continuous suite of advanced functions despite changes, breakdowns, and differences in the
underlying equipment.
5. Server Virtualization:
- This is a kind of virtualization in which server resources are masked. The central (physical)
server is divided into multiple virtual servers by changing the identity numbers and processors,
so each virtual server can run its own operating system in an isolated manner.
- Each sub-server knows the identity of the central server. This increases performance and
reduces operating cost by redeploying main server resources into sub-server resources.
- It’s beneficial in virtual migration, reducing energy consumption, reducing infrastructural costs,
etc.
6. Data Virtualization:
- This is the kind of virtualization in which data is collected from various sources and managed
in a single place, without exposing technical details such as how the data is collected, stored,
and formatted. The data is then arranged logically so that its virtual view can be accessed
remotely by interested people, stakeholders, and users through various cloud services.
- Many big companies provide such services, for example Oracle, IBM, AtScale, and CData.
Uses of Virtualization
Data-integration
Business-integration
Service-oriented architecture data-services
Searching organizational data
VIRTUALIZATION
What is virtualization?
- Virtualization is the creation of a virtual (rather than physical) version of something, such as an
operating system, a server, a storage device or network resources.
- It hides the physical characteristics of a resource from users, instead showing another abstract
resource.
- Virtualization is a technique, which allows to share single physical instance of an application
or resource among multiple organizations or tenants (customers).
- It does so by assigning a logical name to a physical resource and providing a pointer to that
physical resource when demanded.
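The "logical name plus pointer" indirection just described can be sketched as a simple lookup table; the class, tenant names, and device paths are invented for illustration:

```python
class ResourceMapper:
    """Map logical resource names to physical resources on demand.

    A toy model of the indirection virtualization introduces: tenants
    see only logical names; the mapper resolves them to physical
    resources when asked, and the binding can change transparently.
    """
    def __init__(self):
        self._table = {}

    def bind(self, logical_name, physical_resource):
        self._table[logical_name] = physical_resource

    def resolve(self, logical_name):
        # hand back a pointer to the physical resource on demand
        return self._table[logical_name]

mapper = ResourceMapper()
mapper.bind("tenant-a:/data", "/dev/sdb1")     # hypothetical device names
mapper.bind("tenant-b:/data", "/dev/sdb2")
print(mapper.resolve("tenant-a:/data"))        # -> /dev/sdb1
```

Because tenants only ever see the logical name, the physical resource behind it can be replaced (e.g. during a migration) without the tenant noticing, which is the essence of the sharing described above.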
Virtual Machine:
What is Virtual Machine (VM)?
VM is a software implementation of a machine (i.e. a computer) that executes programs like a real machine.
Terminology:
Host (target)
o The primary environment; it is the target of the virtualization.
Guest (source)
o The virtualized environment; it is the source of the virtualization.
Virtualization constructs an isomorphism that maps a virtual guest system to a real host.
This isomorphism, illustrated in Figure 1.2, maps the guest state to the host state (function V in
Figure 1.2): for each sequence of operations e that modifies the state in the guest (the function e
changes state Si to state Sj), there is a corresponding sequence of operations e' in the host that
performs an equivalent modification to the host's state (e' changes Si' to Sj').
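The isomorphism can be made concrete with a toy state machine in which the mapping V commutes with the guest operation e and the host operation e'; all states and operations here are invented:

```python
# Guest state: an integer counter. Host state: the same counter offset
# by a base value, standing in for the host's own representation.
BASE = 1000

def V(guest_state):          # maps guest state to host state
    return guest_state + BASE

def e(guest_state):          # guest operation: Si -> Sj
    return guest_state + 1

def e_prime(host_state):     # equivalent host operation: Si' -> Sj'
    return host_state + 1

Si = 41
Sj = e(Si)                   # the guest moves from Si to Sj
# The isomorphism requires that mapping then operating equals
# operating then mapping: V(e(Si)) == e_prime(V(Si))
assert V(Sj) == e_prime(V(Si))
```

Real virtualization replaces the integer with the full machine state (registers, memory, devices) and e with instruction sequences, but the commuting-diagram requirement is exactly this one.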
- Creating a virtual machine over an existing operating system and hardware is referred to as
hardware virtualization.
- Virtual Machines provide an environment that is logically separated from the underlying
hardware.
- The machine on which the virtual machine is created is known as host machine and
virtual machine is referred as a guest machine.
- This virtual machine is managed by a software or firmware, which is known as hypervisor.
In Figure 1.12, the host machine is equipped with the physical hardware, as shown at the bottom
of the figure. An example is an x86-architecture desktop running its installed Windows OS, as
shown in part (a) of the figure.
The VM can be provisioned for any hardware system.
The VM is built with virtual resources managed by a guest OS to run a specific application.
Between the VMs and the host platform, one needs to deploy a middleware layer called a
Virtual Machine Monitor (VMM). Figure 1.12(b) shows a native VM installed with the use of
a VMM called a hypervisor in privileged mode. For example, the hardware has the x86
architecture and runs the Windows system.
The guest OS could be a Linux system and the hypervisor is the XEN system developed at
Cambridge University.
This hypervisor approach is also called Bare-Metal VM, because the hypervisor handles the bare
hardware (CPU, memory, and I/O) directly.
Another architecture is the host VM shown in Figure 1.12(c). Here the VMM runs in non-
privileged mode.
The host OS need not be modified. The VM can also be implemented with a dual mode, as
shown in Figure 1.12(d).
Part of the VMM runs at the user level and another part runs at the supervisor level. In this case,
the host OS may have to be modified to some extent.
Multiple VMs can be ported to a given hardware system to support the virtualization process.
The VM approach offers hardware independence of the OS and applications. The user
application running on its dedicated OS could be bundled together as a virtual appliance that can
be ported to any hardware platform.
The VM could run on an OS different from that of the host computer.
– The hypervisor supports hardware-level virtualization (see Figure 3.1(b)) on bare metal devices
like CPU, memory, disk and network interfaces.
– The hypervisor software sits directly between the physical hardware and its OS.
– This virtualization layer is referred to as either the VMM or the hypervisor.
– The hypervisor provides hypercalls for the guest OSes and applications.
– Depending on the functionality, a hypervisor can assume a micro-kernel architecture like the
Microsoft Hyper-V. Or it can assume a monolithic hypervisor architecture like the VMware ESX
for server virtualization (A Monolithic Type 1 Hypervisor hosts its drivers on the Hypervisor
itself).
– A micro-kernel hypervisor includes only the basic and unchanging functions (such as physical
memory management and processor scheduling).
– The device drivers and other changeable components are outside the hypervisor.
– A monolithic hypervisor implements all the aforementioned functions, including those of the
device drivers.
– Therefore, the size of the hypervisor code of a micro-kernel hypervisor is smaller than that of a
monolithic hypervisor.
– Essentially, a hypervisor must be able to convert physical devices into virtual resources dedicated
for the deployed VM to use.
– However, not all guest OSes are created equal, and one in particular controls the others.
– The guest OS, which has control ability, is called Domain 0, and the others are called Domain U.
– With full virtualization, noncritical instructions run on the hardware directly while critical
instructions are discovered and replaced with traps into the VMM to be emulated by software.
– Both the hypervisor and VMM approaches are considered full virtualization.
– Why are only critical instructions trapped into the VMM? This is because binary translation can
incur a large performance overhead.
– Noncritical instructions do not control hardware or threaten the security of the system, but critical
instructions do.
– Therefore, running noncritical instructions on hardware not only can promote efficiency, but also
can ensure system security.
– This approach was implemented by VMware and many other software companies. As shown in
Figure 3.6, VMware puts the VMM at Ring 0 and the guest OS at Ring 1.
– The VMM scans the instruction stream and identifies the privileged, control- and behaviour-
sensitive instructions.
– When these instructions are identified, they are trapped into the VMM, which emulates the
behaviour of these instructions. The method used in this emulation is called binary translation.
– Therefore, full virtualization combines binary translation and direct execution.
– The guest OS is completely decoupled from the underlying hardware. Consequently, the guest
OS is unaware that it is being virtualized.
– The performance of full virtualization may not be ideal, because it involves binary translation,
which is rather time-consuming.
– In particular, the full virtualization of I/O-intensive applications is a really big challenge.
– Binary translation employs a code cache to store translated hot instructions to improve
performance, but it increases the cost of memory usage.
– At the time of this writing, the performance of full virtualization on the x86 architecture is typically
80 percent to 97 percent that of the host machine.
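The scan-and-trap behaviour described above can be sketched as a loop over an instruction stream: noncritical instructions run directly on the hardware, critical ones trap into the VMM and are emulated, and a code cache avoids retranslating hot instructions. The opcode names below are invented stand-ins, not a real ISA:

```python
CRITICAL = {"IN", "OUT", "HLT", "LGDT"}   # invented 'critical' opcodes
code_cache = {}                            # opcode -> translated handler

def translate(op):
    """'Binary-translate' a critical instruction into a safe emulation."""
    def emulated():
        return f"VMM emulated {op}"
    return emulated

def execute(stream):
    results = []
    for op in stream:
        if op in CRITICAL:                 # trap into the VMM
            if op not in code_cache:       # translate once, then reuse
                code_cache[op] = translate(op)
            results.append(code_cache[op]())
        else:                              # noncritical: run directly
            results.append(f"hardware ran {op}")
    return results

out = execute(["ADD", "OUT", "MOV", "OUT"])
# the second OUT hits the code cache instead of being retranslated
```

Note how only the critical instructions pay the translation cost, which is why trapping everything would be far slower than this selective approach.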
Host-Based Virtualization
– An alternative VM architecture is to install a virtualization layer on top of the host OS. This host OS
is still responsible for managing the hardware.
– The guest OSes are installed and run on top of the virtualization layer.
– Dedicated applications may run on the VMs. Certainly, some other applications can also run with
the host OS directly.
– This host-based architecture has some distinct advantages, as enumerated next. First, the user can
install this VM architecture without modifying the host OS.
– The virtualizing software can rely on the host OS to provide device drivers and other low-level
services.
– This will simplify the VM design and ease its deployment. Second, the host-based approach appeals
to many host machine configurations.
– Compared to the hypervisor/VMM architecture, the performance of the host-based architecture may
also be low. When an application requests hardware access, it involves four layers of mapping
which downgrades performance significantly.
– When the ISA of a guest OS is different from the ISA of the underlying hardware, binary translation
must be adopted.
– Although the host-based architecture has flexibility, the performance is too low to be useful in
practice.
– Compared with full virtualization, para-virtualization is relatively easy and more practical.
– The main problem in full virtualization is its low performance in binary translation.
– To speed up binary translation is difficult.
– Therefore, many virtualization products employ the para-virtualization architecture.
– The popular Xen, KVM, and VMware ESX are good examples.
The VMM layer virtualizes the physical hardware resources such as CPU, memory, network and disk
controllers, and human interface devices. Every VM has its own set of virtual hardware resources. The
resource manager allocates CPU, memory, disk, and network bandwidth and maps them to the virtual
hardware resource set of each VM created.
Hardware interface components are the device drivers and the VMware ESX Server File System. The
service console is responsible for booting the system, initiating the execution of the VMM and resource
manager, and relinquishing control to those layers. It also facilitates the process for system administrators.
Partial virtualization. Partial virtualization provides a partial emulation of the underlying hardware,
thus not allowing the complete execution of the guest operating system in complete isolation.
Partial virtualization allows many applications to run transparently, but not all the features of the
operating system can be supported, as happens with full virtualization.
An example of partial virtualization is address space virtualization used in time-sharing systems;
this allows multiple applications and users to run concurrently in a separate memory space, but they
still share the same hardware resources (disk, processor, and network).
Historically, partial virtualization has been an important milestone for achieving full virtualization,
and it was implemented on the experimental IBM M44/44X. Address space virtualization is a
common feature of contemporary operating systems.
HYPERVISORS
- A hypervisor is also known as a virtual machine monitor, virtual machine manager, or VMM.
- The hypervisor is firmware or a low-level program that allows us to build and run virtual
machines (VMs).
- Each virtual machine runs independently of the other virtual machines on the same box with
different operating systems that are isolated from each other.
- There are two types of hypervisor: Type I and Type II (see Figure 3.7).
Type I hypervisors run directly on top of the hardware.
o Therefore, they take the place of the operating systems and interact directly with the ISA
interface exposed by the underlying hardware, and they emulate this interface in order to
allow the management of guest operating systems.
o This type of hypervisor is also called a native or bare metal hypervisor since it runs natively
on hardware.
Type II hypervisors require the support of an operating system to provide virtualization services.
o This means that they are programs managed by the operating system, which interact with it
through the ABI and emulate the ISA of virtual hardware for guest operating systems.
o This type of hypervisor is also called a hosted virtual machine since it is hosted within an
operating system.
The design and architecture of a virtual machine manager, together with the underlying hardware
design of the host machine, determine the full realization of hardware virtualization, where a guest
operating system can be transparently executed on top of a VMM as though it were run on the
underlying hardware.
The criteria that need to be met by a virtual machine manager to efficiently support virtualization were
established by Goldberg and Popek in 1974. Three properties have to be satisfied:
• Equivalence. A guest running under the control of a virtual machine manager should exhibit the same
behaviour as when it is executed directly on the physical host.
• Resource control. The virtual machine manager should be in complete control of virtualized
resources.
• Efficiency. A statistically dominant fraction of the machine instructions should be executed
without intervention from the virtual machine manager.
The major factor that determines whether these properties are satisfied is represented by the layout of the
ISA of the host running a virtual machine manager.
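Popek and Goldberg's classic theorem makes this ISA dependence precise: an efficient trap-and-emulate VMM satisfying the three properties can be constructed if every sensitive instruction is also privileged, i.e. traps when executed in user mode. A toy check over invented instruction sets:

```python
def classically_virtualizable(sensitive, privileged):
    """Popek-Goldberg condition: an ISA supports efficient trap-and-
    emulate virtualization if every sensitive instruction is also
    privileged, so it traps to the VMM when run in user mode."""
    return set(sensitive) <= set(privileged)

# A hypothetical mini-ISA in which all sensitive instructions trap
isa_ok = classically_virtualizable(
    sensitive={"LGDT", "IN", "OUT"},
    privileged={"LGDT", "IN", "OUT", "HLT"},
)
# Classic (pre-2005) x86 famously violates the condition: POPF is
# sensitive but executes silently in user mode instead of trapping
isa_x86 = classically_virtualizable(
    sensitive={"LGDT", "POPF"},
    privileged={"LGDT"},
)
# isa_ok is True; isa_x86 is False
```

This failure on classic x86 is precisely why the binary translation and paravirtualization techniques discussed earlier were needed before hardware-assisted virtualization arrived.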
Benefits of hypervisors
o Speed: Hypervisors allow virtual machines to be created instantly, unlike bare-metal servers. This
makes provisioning resources for complex workloads much simpler.
o Efficiency: Hypervisors that run multiple virtual machines on the resources of a single physical
machine often allow for more effective use of a single physical server.
o Flexibility: Since the hypervisor separates the OS from the underlying hardware, the software
no longer relies on particular hardware devices or drivers; bare-metal hypervisors enable operating
systems and their related applications to operate on a variety of hardware types.
o Portability: Thanks to hypervisors, multiple operating systems can run on the same physical server
(the host machine). The hypervisor's virtual machines are portable because they are independent of
the physical computer.