UNIT 4 Virtualization
Virtualization abstracts hardware so that common resources can be shared among multiple workloads. A variety of workloads can be co-located on shared virtualized hardware while maintaining complete isolation, migrating freely across infrastructures, and scaling as required.
Cloud virtualization turns server operating systems and storage devices into virtual platforms. This enables several users to share a single physical resource instance or application, with each user presented with its own virtual machine. Cloud virtualization also transforms the scalability, economics, and efficiency of traditional computing.
Virtualization is one of the key enabling techniques of cloud computing. One of its key features is that it allows multiple customers and companies to share applications.
A virtualized environment can deliver cloud-based services and applications, and it may be either public or private. Through virtualization, the customer can maximize resource usage and reduce the number of physical systems needed.
Recently, due to the confluence of several factors, interest in virtualization technology has grown:
1. Increased performance and computing capacity:
A single corporate data center is, in most instances, unable to compete in terms of security, performance, speed, and cost-effectiveness with the network of data centers operated by a service provider. Since the majority of services are available on demand, users can obtain large amounts of computing resources in a short period of time, with tremendous ease and flexibility and without any costly investment.
In turn, cloud services offer the ability to free up memory and computing power on individual computers by hosting platforms, software, and databases remotely. The obvious result is a significant performance improvement.
2. Underutilized hardware and software resources:
Hardware and software underutilization results from this increased computing power and capacity combined with limited or sporadic resource usage. Computer systems have become so powerful that in many instances only a fraction of their capacity is used by an application or the system. Furthermore, looking at a company's IT infrastructure, many computer systems are only partly utilized even though they could provide services 24/7/365 without interruption. For instance, desktop PCs used mainly by administrative personnel for office automation tasks are active only during working hours. The efficiency of the IT infrastructure can be enhanced by using these idle resources for other purposes. Providing such a service transparently requires a completely separate environment, which can be achieved via virtualization.
3. Lack of space.
Data centers are continuously expanding to meet the need for extra infrastructure, be it storage or computing power. Organizations such as Google and Microsoft expand their infrastructure by constructing data centers as large as football fields, containing thousands of nodes. While this is feasible for the big IT players, most companies are unable to build an additional data center to accommodate extra resource capacity. This situation, together with underutilized hardware resources, drove the diffusion of server consolidation, a technique for which virtualization is fundamental.
4. Greening initiatives
Virtualization is a core technology for deploying cloud-based infrastructure, allowing multiple operating system images to run simultaneously on a single physical server. As a consolidation enabler, server virtualization reduces the overall number of physical servers, with inherent green benefits. From the perspective of resource efficiency, fewer physical servers are required for the same workloads, which reduces the space a data center occupies and its eventual e-waste footprint. From an energy-efficiency point of view, a data center with less physical equipment consumes less electricity. Cooling is a major requirement in data centers and accounts for a large share of power consumption. Through free cooling methods, such as the use of outside air and water instead of air conditioning, data centers can reduce their cooling costs. Data center managers can also save on electricity costs with solar panels, temperature controls, and wind energy.
5. Rise of administrative costs:
Power consumption and cooling costs are increasing, as are the costs of IT devices themselves. In addition, the growing demand for extra capacity, which translates into more servers in a data center, significantly increases administrative costs. Computers, especially servers, do not work entirely on their own but require the care and attention of system administrators. Hardware monitoring, replacement of faulty equipment, server installation and updates, monitoring of server resources, and backups are common system administration tasks. These operations are time consuming, and the more servers there are to handle, the higher the administrative expenses. Since administrative expenses grow with server count, virtualization can help by reducing the number of servers required for a given workload and thereby reducing the cost of administrative staff.
A virtualized environment consists of three major components: the guest, the host, and the virtualization layer.
1. Guest:
The guest denotes the system component that interacts with the virtualization layer rather than directly with the host machine. Guests are usually presented with one or more virtual disks and a VM definition file. Virtual machines are centrally managed by a host application that sees and handles each virtual machine as a separate application.
2. Host:
The host is the original environment in which the guest is managed. Each guest uses the shared resources that the host makes available to it. The host OS manages the physical resources and provides device support.
3. Virtualization Layer:
The virtualization layer is responsible for recreating the same or a different environment in which the guest operates. It is an extra layer of abstraction between the hardware and the applications that run on it, spanning compute, network, and storage. Without it, a machine usually runs a single operating system, which is very inflexible compared with virtualization.
Virtualized environments offer a number of characteristic features:
1. Increased Security –
The ability to govern the execution of a guest program in a fully transparent manner creates new opportunities for providing a safe, controlled execution environment. All the operations of guest programs are generally performed against the virtual machine, which translates and applies them to the host programs.
2. Managed Execution –
Virtualization supports managed execution of guest programs. In particular, the most important features are sharing, aggregation, emulation, and isolation.
3. Sharing –
Virtualization makes it possible to create separate computing environments within the same host. This sharing reduces the number of active servers and limits energy consumption.
4. Aggregation –
Not only can a physical resource be shared among several guests; virtualization also enables aggregation. A group of individual hosts can be linked together and represented as a single virtual host. This functionality is implemented by cluster management software, which harnesses the physical resources of a group of machines and presents them as a uniform whole.
5. Emulation –
Guest programs are executed within an environment created by the virtualization layer, which is essentially a program. An entirely different environment from the host's can therefore be emulated, so that guest programs requiring features not present in the physical host can still be run.
6. Isolation –
Virtualization provides guests, whether operating systems, applications, or other entities, with an entirely separate environment in which to execute. The guest program operates through an abstraction layer that mediates access to the underlying resources. The virtual machine can therefore filter the guest's activities and prevent dangerous operations against the host.
7. Portability –
The concept of portability applies in different ways depending on the specific type of virtualization. In the case of hardware virtualization, the guest is packaged in a virtual image that, in many instances, can be safely moved to and executed on different virtual machines.
The function of the software layer for virtualization is to virtualize the physical hardware of a host machine into virtual resources to be used by the VMs.
Common virtualization layers include the instruction set architecture (ISA) level, hardware level, operating system level, library support level, and application level.
Instruction set emulation requires binary translation and optimization. A virtual instruction set architecture (V-ISA) thus requires adding a processor-specific software translation layer to the compiler.
5. User-Application Level
Virtualization at the application level virtualizes an application as a VM.
On a traditional OS, an application often runs as a process. Therefore, application-level virtualization is also known as process-level virtualization.
The most popular approach is to deploy high-level language (HLL) VMs. In this scenario, the virtualization layer sits as an application program on top of the operating system.
The layer exports an abstraction of a VM that can run programs written and compiled for a particular abstract machine definition.
Any program written in the HLL and compiled for this VM will be able to run on it. The Microsoft .NET CLR and the Java Virtual Machine (JVM) are two good examples of this class of VM.
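To make the idea concrete, here is a minimal sketch of a stack-based abstract machine, written in Python purely for illustration; the opcodes are invented and are not the JVM or CLR instruction set. Any program compiled to this abstract machine definition runs wherever the interpreter runs:

    # A toy process-level VM: programs are compiled to instructions for an
    # abstract stack machine, and any host with this interpreter can run them.
    def run(program):
        """Execute a list of (opcode, operand) pairs on a virtual stack machine."""
        stack = []
        for op, arg in program:
            if op == "PUSH":        # push a constant onto the stack
                stack.append(arg)
            elif op == "ADD":       # pop two values, push their sum
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == "MUL":       # pop two values, push their product
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
            elif op == "PRINT":     # pop and print the top of the stack
                print(stack.pop())
        return stack

    # "(2 + 3) * 4" compiled for the abstract machine defined above
    run([("PUSH", 2), ("PUSH", 3), ("ADD", None),
         ("PUSH", 4), ("MUL", None), ("PRINT", None)])  # prints 20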
Hypervisors
A hypervisor is the key piece of software that enables virtualization. It abstracts the guest machines, and the operating systems they run, from the actual hardware.
Hypervisors create a virtualized layer of CPU/processor, RAM, and other physical resources that separates the virtual devices you create from the physical machine.
The machine on which we install the hypervisor is called the host machine, in contrast with the virtual guest machines that run on top of it. Hypervisors emulate the resources made available to the guest machines. Whatever operating system you boot on a virtual machine, it believes that real physical hardware is available. From the viewpoint of the VM, there is no difference between the physical and the virtual environment. Guest machines do not know that they were created by a hypervisor or that they share the available computing power. VMs run simultaneously on the hardware that powers them, and they are therefore fully dependent on its stable operation. There are two types of hypervisors:
Type 1 Hypervisor (also called bare metal or native)
Type 2 Hypervisor (also known as hosted hypervisors)
Type 1 hypervisors are a very basic OS themselves, on which virtual machines are operated. The physical machine running the hypervisor serves only for server virtualization; you can't use it for anything else. Type 1 hypervisors are mostly found in enterprise environments.
Type 2 hypervisors run within a physical host operating system; examples include VMware Player and Parallels Desktop. From bottom to top, a Type 2 stack consists of:
A physical machine.
A host operating system installed on it (Windows, Linux, macOS).
The Type 2 hypervisor software installed on that operating system.
The running instances of virtual guest machines.
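Both hypervisor types benefit from hardware virtualization extensions. As a quick check, this Python sketch (Linux-only; the function name is invented for illustration) inspects /proc/cpuinfo for the CPU flags that advertise those extensions:

    # Look for hardware virtualization flags in /proc/cpuinfo (Linux only):
    # "vmx" indicates Intel VT-x, "svm" indicates AMD-V.
    def virtualization_support(cpuinfo_path="/proc/cpuinfo"):
        try:
            with open(cpuinfo_path) as f:
                for line in f:
                    if line.startswith("flags"):
                        flags = line.split(":", 1)[1].split()
                        if "vmx" in flags:
                            return "Intel VT-x available"
                        if "svm" in flags:
                            return "AMD-V available"
                        return "no hardware virtualization flags found"
        except FileNotFoundError:
            pass  # not a Linux host, or /proc is unavailable
        return "unknown"

    print(virtualization_support())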
Xen is a well-known example of a Type 1 hypervisor. The core components of a Xen system are the hypervisor, kernel, and applications.
The guest OS that has control ability is called Domain 0; the others are called guest domains (Domain U).
Domain 0 is designed to access hardware directly and manage devices.
Other guest domains are not permitted to access the hardware directly; they may do so only when Domain 0 grants permission.
The overall control ability resides in Domain 0 alone.
This dependency on Domain 0 is also a security issue: if Domain 0 is hacked or lost for any reason, all guest domains are lost with it and security is compromised.
When critical instructions that cannot be safely virtualized are identified, they are trapped into the VMM, which emulates their behavior in software. The method used in this emulation is called binary translation.
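The trap-and-emulate cycle can be sketched as follows; the instruction names and the handler are invented for illustration, and a real VMM rewrites binary code rather than interpreting named opcodes:

    # Toy trap-and-emulate loop: ordinary instructions run "directly",
    # while sensitive ones are trapped into the VMM and emulated in software.
    SENSITIVE = {"HLT", "OUT"}  # invented instruction names

    def vmm_emulate(instr, state):
        """VMM handler: reproduce the effect of a sensitive instruction safely."""
        print(f"VMM traps {instr} and emulates it")
        state["emulated"] = state.get("emulated", 0) + 1

    def run_guest(instructions):
        state = {}
        for instr in instructions:
            if instr in SENSITIVE:
                vmm_emulate(instr, state)  # trapped into the VMM
            else:
                pass                       # would run directly on the CPU
        return state

    run_guest(["ADD", "MOV", "OUT", "ADD", "HLT"])  # two instructions trapped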
CPU Virtualization
CPU virtualization is not the same thing as emulation. ESXi does not use emulation to
run virtual CPUs. With emulation, all operations are run in software by an emulator. A
software emulator allows programs to run on a computer system other than the one
for which they were originally written. The emulator does this by emulating, or
reproducing, the original computer’s behavior by accepting the same data or inputs
and achieving the same results. Emulation provides portability and runs software
designed for one platform across several platforms.
When CPU resources are overcommitted, the ESXi host time-slices the physical
processors across all virtual machines so each virtual machine runs as if it has its
specified number of virtual processors. When an ESXi host runs multiple virtual
machines, it allocates to each virtual machine a share of the physical resources. With
the default resource allocation settings, all virtual machines associated with the same
host receive an equal share of CPU per virtual CPU. This means that a single-processor virtual machine is assigned only half the resources of a dual-processor virtual machine.
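The default per-vCPU allocation described above can be illustrated with a small calculation; this is an assumption-laden sketch, not ESXi's actual scheduler:

    # Toy model of default proportional CPU shares: each virtual CPU gets an
    # equal share, so a 1-vCPU VM receives half the CPU of a 2-vCPU VM.
    def cpu_shares(vms):
        """vms maps VM name -> number of vCPUs; returns fractional shares."""
        total_vcpus = sum(vms.values())
        return {name: vcpus / total_vcpus for name, vcpus in vms.items()}

    print(cpu_shares({"single": 1, "dual": 2}))
    # {'single': 0.33..., 'dual': 0.66...}: the single-processor VM gets
    # half the resources of the dual-processor VM, as stated above.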
With software-based CPU virtualization, the guest application code runs directly on
the processor, while the guest privileged code is translated and the translated code
runs on the processor.
The translated code is slightly larger and usually runs more slowly than the native
version. As a result, guest applications with a small privileged code component run at speeds very close to native. Applications with a significant privileged code component, such as system calls, traps, or page table updates, can run slower in the virtualized environment.
When you use hardware assistance for virtualization, there is no need to translate the
code. As a result, system calls or trap-intensive workloads run very close to native
speed. Some workloads, such as those involving updates to page tables, lead to a large
number of exits from guest mode to root mode. Depending on the number of such
exits and total time spent in exits, hardware-assisted CPU virtualization can speed up
execution significantly.
The VMkernel manages all physical RAM on the host. The VMkernel dedicates part
of this managed physical RAM for its own use. The rest is available for use by virtual
machines.
The virtual and physical memory space is divided into blocks called pages. When
physical memory is full, the data for virtual pages that are not present in physical
memory are stored on disk. Depending on processor architecture, pages are
typically 4 KB or 2 MB.
Memory Overcommitment
Memory is overcommitted when the combined configured memory of the running virtual machines exceeds the physical memory available on the host. For each running virtual machine, the system reserves physical RAM for the virtual machine's reservation (if any) and for its virtualization overhead.
Memory Sharing
Memory sharing is a proprietary ESXi technique that can help achieve greater
memory density on a host.
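The intuition is that many pages hold identical content (zeroed pages, common OS code), so storing each distinct page once and reference-counting it saves RAM. Here is a toy sketch of content-based sharing; ESXi's actual mechanism (transparent page sharing) hashes page contents, which this deliberately omits:

    # Toy content-based page sharing: identical page contents are stored
    # once and reference-counted.
    def share_pages(vm_pages):
        """vm_pages maps VM name -> list of page contents."""
        store = {}  # distinct content -> reference count
        for pages in vm_pages.values():
            for page in pages:
                store[page] = store.get(page, 0) + 1
        return store

    store = share_pages({"vm1": ["zeros", "kernel"], "vm2": ["zeros", "app"]})
    print(len(store), "physical pages back", sum(store.values()), "guest pages")
    # 3 physical pages back 4 guest pages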
Memory Virtualization
Because of the extra level of memory mapping introduced by virtualization, ESXi can
effectively manage memory across all virtual machines.
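That extra level of mapping can be pictured as two translation tables chained together: the guest OS maps guest virtual pages to guest "physical" pages, and the hypervisor maps those onto machine pages. A simplified sketch with invented page numbers:

    # Two-level translation: guest virtual page -> guest "physical" page
    # (maintained by the guest OS) -> machine page (maintained by the hypervisor).
    guest_page_table   = {0: 7, 1: 3}   # guest virtual  -> guest physical
    hypervisor_mapping = {7: 42, 3: 9}  # guest physical -> host machine page

    def translate(guest_virtual_page):
        guest_physical = guest_page_table[guest_virtual_page]
        return hypervisor_mapping[guest_physical]

    print(translate(0))  # guest virtual page 0 lands in machine page 42

Real hypervisors avoid performing this double lookup in software on every access, either by maintaining shadow page tables or by relying on hardware support such as Intel EPT.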
I/O Virtualization
I/O virtualization involves managing the routing of I/O requests between virtual
devices and the shared physical hardware. At the time of this writing, there are three ways to implement I/O virtualization: full device emulation, para-virtualization, and direct I/O. Full device emulation is the first approach for I/O
virtualization. Generally, this approach emulates well-known, real-world devices.
All the functions of a device or bus infrastructure, such as device enumeration,
identification, interrupts, and DMA, are replicated in software. This software is
located in the VMM and acts as a virtual device. The I/O access requests of the
guest OS are trapped in the VMM which interacts with the I/O devices. The full
device emulation approach is shown in Figure 3.14.
A single hardware device can be shared by multiple VMs that run concurrently.
However, software emulation runs much slower than the hardware it emulates
[10,15]. The para-virtualization method of I/O virtualization is typically used in
Xen. It is also known as the split driver model consisting of a frontend driver and
a backend driver. The frontend driver is running in Domain U and the backend driver is running in Domain 0. They interact with each other via a block of shared
memory. The frontend driver manages the I/O requests of the guest OSes and the
backend driver is responsible for managing the real I/O devices and multiplexing
the I/O data of different VMs. Although para-I/O-virtualization achieves better
device performance than full device emulation, it comes with a higher CPU
overhead.
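The split driver model can be pictured as a producer/consumer queue in shared memory. The sketch below is a deliberately simplified analogue in Python; a deque stands in for Xen's fixed-size I/O rings and grant tables:

    from collections import deque

    # Toy "shared memory" ring between a frontend driver (in Domain U)
    # and a backend driver (in Domain 0).
    shared_ring = deque()

    def frontend_submit(request):
        """Frontend driver: queue a guest I/O request into shared memory."""
        shared_ring.append(request)

    def backend_service():
        """Backend driver: drain requests and perform them on the real device."""
        while shared_ring:
            req = shared_ring.popleft()
            print(f"Domain 0 backend performs {req} on the physical device")

    frontend_submit(("read", "block", 128))   # guest OS issues a block read
    frontend_submit(("write", "block", 256))  # and a block write
    backend_service()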
Direct I/O virtualization lets the VM access devices directly. It can achieve
close-to-native performance without high CPU costs. However, current direct I/O
virtualization implementations focus on networking for mainframes. There are a
lot of challenges for commodity hardware devices. For example, when a physical device is reclaimed (required by workload migration) for later reassignment, it may have been set to an arbitrary state (e.g., DMA to some arbitrary memory locations) that can make it function incorrectly or even crash the whole system. Since
software-based I/O virtualization requires a very high overhead of device
emulation, hardware-assisted I/O virtualization is critical. Intel VT-d supports the
remapping of I/O DMA transfers and device-generated interrupts. The
architecture of VT-d provides the flexibility to support multiple usage models that
may run unmodified, special-purpose, or “virtualization-aware” guest OSes.
Virtual clusters are built with VMs installed at distributed servers from one or more physical clusters. The VMs in a virtual cluster are interconnected logically by a virtual network across several physical networks. Figure 3.18 illustrates the concepts of virtual clusters and physical clusters. Each virtual cluster is formed with physical machines or a VM hosted by multiple physical clusters. The virtual cluster boundaries are shown as distinct boundaries.
The provisioning of VMs to a virtual cluster is done dynamically and exhibits the following interesting properties:
• The virtual cluster nodes can be either physical or virtual machines. Multiple
VMs running with different OSes can be deployed on the same physical node.
• A VM runs with a guest OS, which is often different from the host OS that manages the resources of the physical machine on which the VM is implemented.
• The size (number of nodes) of a virtual cluster can grow or shrink dynamically,
similar to the way an overlay network varies in size in a peer-to-peer (P2P)
network.
• The failure of any physical nodes may disable some VMs installed on the
failing nodes. But the failure of VMs will not pull down the host system.
Since system virtualization has been widely used, it is necessary to effectively
manage VMs running on a mass of physical computing nodes (also called virtual
clusters) and consequently build a high-performance virtualized computing
environment. This involves virtual cluster deployment, monitoring and
management over large-scale clusters, as well as resource scheduling, load
balancing, server consolidation, fault tolerance, and other techniques. The
different node colors in Figure 3.18 refer to different virtual clusters. In a virtual
cluster system, it is quite important to store the large number of VM images
efficiently.
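The dynamic grow/shrink property can be illustrated with a toy cluster abstraction; the class and method names are invented for illustration, and a real system would drive a cluster manager's API instead:

    # Toy virtual cluster whose membership grows and shrinks dynamically;
    # nodes may be physical machines or VMs.
    class VirtualCluster:
        def __init__(self, name):
            self.name = name
            self.nodes = []  # list of (node, kind) pairs

        def add_node(self, node, virtual=True):
            self.nodes.append((node, "VM" if virtual else "physical"))

        def remove_node(self, node):
            self.nodes = [n for n in self.nodes if n[0] != node]

        def size(self):
            return len(self.nodes)

    cluster = VirtualCluster("analytics")
    cluster.add_node("host-01", virtual=False)  # a physical node
    cluster.add_node("vm-a")                    # VMs join as demand grows...
    cluster.add_node("vm-b")
    cluster.remove_node("vm-b")                 # ...and leave as it shrinks
    print(cluster.size())                       # 2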
Desktop Virtualization
Desktop virtualization separates the user's desktop environment from the physical device, hosting it centrally and delivering it to end users over the network.
Server Virtualization
Server virtualization separates the operating system from the computer hardware
and allows the VM to be treated as a file. This provides for easy management
and facilitates redundancy, high availability and disaster recovery.
Server virtualization gave birth to a new term referred to as elasticity. This gives
us the ability to adjust our hardware resources to the current workload on the fly.
When workload requirements are low, servers can be decommissioned. When
workloads are high, servers are turned on. This along with server consolidation
can save money on electricity and cooling.
Elasticity also allows companies to expand their data center resources on demand without buying any additional hardware. Services like Amazon Web Services and Microsoft Azure can provide resources as needed in a pay-as-you-go model, so you never have a shortage of resources and never pay for equipment that sits underutilized.
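A minimal illustration of an elasticity rule: the sketch below computes how many servers to keep powered on for a given workload; the capacity figure and the ceiling rule are invented for the example:

    # Toy elasticity rule: commission enough servers to carry the current
    # load, decommission the rest. Capacity per server is arbitrary.
    def servers_needed(current_load, capacity_per_server=100):
        """Return how many servers to keep powered on."""
        return max(1, -(-current_load // capacity_per_server))  # ceiling division

    for load in (50, 250, 900):
        print(load, "->", servers_needed(load), "server(s)")
    # 50 -> 1, 250 -> 3, 900 -> 9: servers are turned on as the workload
    # rises and decommissioned as it falls.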
Network Virtualization
Network virtualization combines hardware and software network resources into a single software-based virtual network that can be managed independently of the underlying physical topology.
All of these virtualization solutions offer many benefits. However, we often don't have the budget to adopt all of them, so how do we choose which to implement first? One thing to consider when deciding which virtualization solution to deploy is to adopt the one that provides the maximum benefit. Another thing to consider is ease of implementation. This may be easier said than done, since the two criteria often pull in different directions. Still, solutions that are easy to implement can often provide tremendous benefits immediately, giving a quick ROI.