
Unit 2

Virtual Machines and Virtualization of Clusters and Data Centres: Implementation Levels of
Virtualization, Virtualization Structures/Tools and mechanisms, Virtualization of CPU, Memory
and I/O Devices, Virtual Clusters and Resource Management, Virtualization for Data Centre
Automation.

What is Virtual Machine?

 A virtual machine (VM) is a digital version of a physical computer system. A virtual machine can run programs and operating systems, store data, connect to networks, and perform other computing functions, and it requires maintenance such as updates and system monitoring.

 A VM is a virtualized instance of a computer that can perform almost all of the same functions
as a computer, including running applications and operating systems.

 Virtual machines run on a physical machine and access computing resources of that physical
machine with the help of software called a hypervisor.

 A virtual machine is a software-based computer that exists within the operating system of another computer. In simpler terms, it is a virtualization of an actual computer, except that it exists on another system.
 So with VM, multiple OS environments can exist simultaneously on the same machine

 And one or more virtual “guest” machines can run on a single physical “host” machine

 The purpose of a VM is to enhance resource sharing by many users and improve computer
performance in terms of resource utilization and application flexibility

What is Virtualization?

 Virtualization can be defined as a process that enables the creation of a virtual version of a
desktop, operating system, network resources, or server. Virtualization plays a key and
dominant role in cloud computing.
 It is also defined as a creation of a virtual version of a server, a desktop, a storage device, an
operating system, or network resources. It is essentially a technique or method that allows the
sharing of a single physical instance of a resource or an application amongst multiple
organizations or customers.
 The machine on which the virtual machine is built is called the Host Machine and the virtual
machine is known as the guest machine.
 In cloud computing, this virtualization facilitates the creation of virtual machines and ensures
the smooth functioning of multiple operating systems. It also helps create a virtual ecosystem
for server operating systems and multiple storage devices.
 Actually, the idea of VMs dates back to the 1960s

 Hardware resources (CPU, memory, I/O devices, etc.) or software resources (operating system
and software libraries) can be virtualized in various functional layers.

 Virtualization refers to a technology used to create multiple simulated environments from a single physical hardware system.
 The idea is to separate the hardware from the software, which yields better system efficiency.

Implementation levels of Virtualization:

The following figure shows the computer before and after virtualization

 A traditional computer runs with a host OS specially designed for its hardware architecture. After virtualization, however, different user applications managed by their own OSs can run on the same hardware, independent of the host OS.
 This is done by adding an additional layer between physical hardware and host OS.
 This virtualization layer is known as Hypervisor or VMM (Virtual Machine Monitor)
 The main function of this virtualization layer is to virtualize the physical hardware of host
machine into virtual resources to be used by VMs.
 The hypervisor creates an abstraction of VMs by placing virtualization layer at various
operational levels of computer system.
 These levels include the instruction set architecture (ISA) level, hardware level, OS level, library level, and application level.

Instruction set architecture (ISA) level: This level defines the way in which a microprocessor is programmed at the machine level. At this level, virtualization is performed by emulating a given ISA by the ISA of the host machine. For example, MIPS binary code can run on an x86-based host machine with the help of ISA emulation. Instruction set emulation requires binary translation and optimization. A virtual instruction set architecture (V-ISA) thus requires adding a processor-specific software translation layer to the compiler.
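To make the idea of ISA-level emulation concrete, the following is a minimal, illustrative Python sketch (not from the original notes) of an interpreter that executes a tiny MIPS-like guest instruction stream on the host by translating each guest instruction into host operations. The instruction format and opcodes are assumptions; real emulators such as QEMU add dynamic binary translation with caching and optimization.

```python
# Minimal sketch of ISA-level emulation: a tiny MIPS-like guest program
# is interpreted instruction by instruction on the host machine.
# Hypothetical instruction format: (opcode, dest, src1, src2/immediate).

guest_program = [
    ("li",  "r1", 5,    None),   # load immediate 5 into r1
    ("li",  "r2", 7,    None),   # load immediate 7 into r2
    ("add", "r3", "r1", "r2"),   # r3 = r1 + r2
    ("sw",  "r3", 0x100, None),  # store r3 at guest memory address 0x100
]

registers = {f"r{i}": 0 for i in range(32)}   # emulated guest register file
memory = {}                                   # emulated guest memory

def emulate(program):
    """Fetch-decode-emulate loop: each guest instruction is carried out
    with host operations instead of running natively."""
    for op, a, b, c in program:
        if op == "li":
            registers[a] = b
        elif op == "add":
            registers[a] = registers[b] + registers[c]
        elif op == "sw":
            memory[b] = registers[a]
        else:
            raise NotImplementedError(f"unhandled guest opcode: {op}")

emulate(guest_program)
print(registers["r3"], memory[0x100])   # -> 12 12
```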

Hardware abstraction layer (HAL) level: This approach generates a virtual hardware environment for a VM. The idea is to virtualize the resources of a computer, such as its processors, memory, and I/O devices, on top of the bare hardware. The goal of this level is to enhance hardware utilization by enabling concurrent system usage among multiple users. This is done by creating a virtual hardware environment for the actual machine and managing the hardware through virtualization. The Xen hypervisor, for example, has been applied to virtualize x86-based machines to run Linux or other guest OS applications.

OS level: This refers to an abstraction layer between the traditional OS and user applications. OS-level virtualization creates isolated containers on a single physical server, and the OS instances utilize the hardware and software in data centres. The containers behave like real servers. Using this level of virtualization, a virtual platform can be created to assign hardware resources to different users who do not trust each other.

Library support level: Virtualization at the library level can be done simply by managing the APIs associated with application systems. Most applications use APIs exported by user-level libraries
rather than using lengthy system calls by the OS. Since most systems provide well-documented
APIs, such an interface becomes another level for virtualization. Virtualization with library
interfaces is possible by controlling the communication link between applications and the rest of a
system.

User-Application level: Virtualization at this level virtualizes an application as a VM. The process involves wrapping the application in a layer that is isolated from the host OS and other applications. It is also known as process-level virtualization because the OS considers each application to be a process. The most popular approach is to deploy high-level language (HLL) VMs. Any program written in the HLL and compiled for this VM will be able to run on it. The JVM and the Microsoft .NET CLR are two good examples of this class of VM.

VMM Design requirements

 A virtualization system partitions a single physical system into multiple VMs.
 To create and deploy VMs and services on physical servers, the VMM is the solution for the virtualization environment.
 The software that provides virtualization is often called the VMM or hypervisor.
 Therefore, the VMM manages the hardware resources of the host computing system.
There are three requirements for a VMM
1. VMM should provide an environment for programs which is essentially identical to the
original machine.
2. Programs run in this environment should show, at worst, only minor decreases in speed.
3. VMM should be in complete control of the system resources
The complete control of computing resources by the VMM includes the following aspects:
1. The VMM is responsible for allocating hardware resources for programs.

2. It is not possible for a program to access any resource not explicitly allocated to it.
3. It is possible under certain circumstances for a VMM to regain control of resources already
allocated.

Virtualization Support at the OS level:

 Virtualization also needs to be done at the OS level, which is referred to as OS-level virtualization


 With the technique of virtualized OS, nothing is required to be pre-installed or permanently
loaded on the local disk.
 However, everything runs from the network using virtual disk that is remotely stored on a
server like Storage Area Network (SAN)
 With the help of OS virtualization, the OS kernel allows more than one isolated user-space instance to exist; these instances are called containers.
 In other words, the OS kernel runs a single operating system and replicates that operating system's functionality in each of the isolated partitions.
 However this virtualization in cloud computing has two challenges.
 The first is the ability to use a variable number of physical machines and VM instances
depending on the needs of a problem. For example, a task may need only a single CPU during
some phases of execution but may need hundreds of CPUs at other times
 The second challenge concerns the slow operation of instantiating new VMs

Why OS level Virtualization:

 In a cloud computing environment, perhaps thousands of VMs need to be initialized simultaneously.
 Moreover, full virtualization at the hardware level has the disadvantages of slow performance and low density, and para-virtualization needs to modify the guest OS.
 To reduce the performance overhead of hardware-level virtualization, even hardware modification is needed.
 OS-level virtualization provides a feasible solution for these hardware-level virtualization
issues.
 Operating system virtualization inserts a virtualization layer inside an operating system to
partition a machine’s physical resources.
 It enables multiple isolated VMs within a single operating system kernel. This kind of VM is
often called a virtual execution environment (VE).
 From the user’s point of view, VEs look like real servers. This means a VE has its own set of
processes, file system, user accounts, network interfaces with IP addresses, routing tables,
firewall rules, and other personal settings
Advantages of OS Extensions:

 All OS-level VMs on the same physical machine share a single operating system kernel.

 The virtualization layer can be designed in a way that allows processes in VMs to access as
many resources of the host machine as possible, but never to modify them

Dis-advantages of OS Extensions:

 The main disadvantage of OS extensions is that all the VMs at operating system level on a
single container must have the same kind of guest operating system.

 For example, a Windows distribution such as Windows XP cannot run on a Linux-based
container. However, users of cloud computing have various preferences. Some prefer
Windows and others prefer Linux or other operating systems. Therefore, there is a challenge
for OS-level virtualization in such cases.
Virtualization on Linux or Windows Platforms

 OS-level virtualization systems are Linux-based. However, virtualization support on the Windows-based platform is still in the research stage.
 Two OS tools (Linux vServer and OpenVZ) support Linux platforms to run other platform-
based applications through virtualization.
 The third tool, FVM (Feather-weight Virtual Machine), is an attempt specifically developed
for virtualization on the Windows NT platform.
 OpenVZ is an OS-level tool designed to support Linux platforms to create virtual
environments for running VMs under different guest OS.
 OpenVZ is an open source container-based virtualization solution built on Linux.
 Library-level virtualization is also known as user-level Application Binary Interface (ABI)
or API emulation.
 This type of virtualization can create execution environments for running programs on a
platform rather than creating a VM to run the entire operating system.
2.2 Virtualization Structures/Tools and Mechanisms
 Before virtualization, the OS manages the entire hardware. After virtualization, however, a virtualization layer is inserted between the hardware and the OS; this layer is responsible for converting portions of the real hardware into virtual hardware.
 There are different classes of VM architectures based on the position of the virtualization
layers, namely Hypervisor architecture, Para virtualization and Host-based virtualization.

Hypervisor and Xen architecture:

 The hypervisor supports hardware level virtualization on bare metal devices like CPU,
memory, disk and network interfaces
 This hypervisor software sits directly between the physical hardware and its OS and is able to convert physical devices into virtual resources dedicated for the deployed VMs to use.
 This virtualization layer is referred to as the VMM or the hypervisor.
 Depending on functionality, this hypervisor can be a micro-kernel architecture like
Microsoft Hyper-V or it can be a monolithic hypervisor architecture like VMware ESX for
server virtualization.
 A micro-kernel hypervisor includes only the basic and unchanging functions; device drivers and other changeable components stay outside the hypervisor. A monolithic hypervisor, in contrast, implements all such functions, including device drivers.
 Therefore, the size of a micro-kernel hypervisor is smaller than that of a monolithic hypervisor.
 Top hypervisor tools are Microsoft Hyper-V, VMware, KVM, Oracle’s VirtualBox

The Xen architecture:

 Xen is an open source micro-kernel hypervisor developed by Cambridge University. It does not include device drivers.

 It just provides a mechanism by which a guest OS can have direct access to the physical
devices. As a result, size of Xen hypervisor is kept rather small.
 The core components of Xen system are hypervisor, kernel and applications.
 Like other virtualization systems, many guest OSs can run on top of the hypervisor
 However, not all the guest OSs created are equal, and one in particular controls the others.
 The guest OS which has this control ability is called Domain 0, and the others are called Domain U.
 Hence Domain 0 is the privileged OS of Xen; it is designed to access hardware directly and manage devices. Therefore, one of the responsibilities of Domain 0 is to allocate and map hardware resources for the guest domains (Domain U), as the small listing sketch below illustrates.
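As an illustration only (not part of the original notes), the sketch below uses the libvirt Python bindings to connect to a Xen host from Domain 0 and list the guest domains it manages. It assumes the libvirt-python package is installed and that the local Xen/libxl hypervisor is reachable at the "xen:///system" URI; both are assumptions about the environment.

```python
# Hedged sketch: querying a Xen host through libvirt from Domain 0.
# Assumes libvirt-python and a reachable Xen/libxl hypervisor.
import libvirt

conn = libvirt.open("xen:///system")       # connect to the local Xen hypervisor
try:
    for dom in conn.listAllDomains(0):     # Domain 0 plus any Domain U guests
        state, _ = dom.state()
        print(f"id={dom.ID():>3}  name={dom.name():<20}  active={dom.isActive()}")
finally:
    conn.close()
```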

Binary Translation with Full Virtualization:

 Hardware virtualization provides the architectural support to build a virtual machine monitor (VMM) that can run a guest OS in isolation.
 Depending on the implementation technologies, hardware virtualization can be classified into two categories: full virtualization and host-based virtualization.

Full virtualization:

 This type of virtualization runs the guest OS on top of the VM without modifying it.
 So, it does not need to modify the host OS. The guest OSs and their applications consist of noncritical and critical instructions.
 With full virtualization, noncritical instructions run directly on the hardware while critical instructions are trapped and emulated by the VMM software.
 This method provides isolated and secured virtualization allowing different systems to exist
on the same platform.
 The key idea of this type of virtualization is to combine binary translation and direct execution so that critical (potentially harmful) instructions are never executed directly on the host.
 These architectures offer 4 levels of privilege known as Ring 0,1,2,3 to OS and applications
to manage access to the computer hardware.
 The user applications are at Ring 3 and OS which needs to have direct access to the memory
and hardware is at Ring 0.

Binary Translation of Guest OS requests using VMM:

 This approach is shown by the following figure

 In this approach, VMware puts the VMM at Ring 0 and the guest OS at Ring 1.
 The VMM scans the instruction stream and identifies the privileged, control-sensitive, and behaviour-sensitive instructions, and emulates the behaviour of these instructions.
 The method used in this emulation is called binary translation.
 Hence the combination of binary translation and direct execution provides full virtualization, as the guest OS is decoupled from the underlying hardware by the virtualization layer.
 The guest OS is not aware that it is being virtualized and requires no modification.
 The VMM translates the guest OS instructions, while user-level instructions run unmodified at native speed; a toy sketch of this scan-and-emulate idea follows.
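The following toy Python sketch is an illustration only, not VMware's actual implementation: the "VMM" walks a guest instruction stream, lets unprivileged instructions run directly, and routes privileged or sensitive ones through an emulation routine. The instruction names and the privileged set are assumptions.

```python
# Toy sketch of the scan-and-emulate idea behind binary translation.
# Instruction names and the privileged set are illustrative assumptions.

PRIVILEGED = {"cli", "sti", "mov_to_cr3", "hlt"}   # sensitive/privileged ops

def run_directly(instr):
    print(f"direct execution at native speed: {instr}")

def vmm_emulate(instr):
    # The VMM emulates the effect of the instruction on the virtual CPU
    # instead of letting the guest touch real hardware state.
    print(f"VMM emulates privileged instruction: {instr}")

def execute_guest_stream(instructions):
    for instr in instructions:
        if instr in PRIVILEGED:
            vmm_emulate(instr)        # translated / emulated path
        else:
            run_directly(instr)       # unprivileged user-level instruction

execute_guest_stream(["add", "mov", "cli", "load", "mov_to_cr3", "store"])
```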

Advantages of full virtualization:


1. This type of virtualization provides the best isolation and security for virtual machines.
2. Multiple, truly isolated guest OSs can run simultaneously on the same hardware.
3. It is the only option that requires no hardware assist or OS assist to virtualize sensitive and privileged instructions.

Disadvantages:

1. Full virtualization is usually a bit slower because of all the emulation involved.

2. The hypervisor does not contain the device drivers, and it might be difficult for new device drivers to be installed by users.

Host-based Virtualization:

 An alternative VM architecture is to install a virtualization layer on top of the host OS.


 This host OS is still responsible for managing the hardware. The guest OSs are installed and
run on top of the virtualization layer. This approach has two advantages.

 First, the user can install this VM architecture without modifying the host OS. The
virtualizing software can rely on the host OS to provide device drivers and other low-level
services. This will simplify the VM design and ease its deployment.
 Second, the host-based approach appeals to many host machine configurations. Compared to
the hypervisor/VMM architecture, the performance of the host-based architecture may also
be low.
 When an application requests hardware access, it involves four layers of mapping which
downgrades performance significantly.
 When the ISA of a guest OS is different from the ISA of the underlying hardware, binary
translation must be adopted. Although the host-based architecture has flexibility, the
performance is too low to be useful in practice.

Advantages of Host-based virtualization

1. The user can install this VM architecture without modifying the host OS, so the VMM can rely on the host OS to provide device drivers and other low-level services.
2. This approach appeals to many host machine configurations.

Para-Virtualization with Compiler Support:

 Para-virtualization needs to modify the guest OS. A para-virtualized VM provides special APIs requiring substantial OS modifications in user applications.
 The virtualization layer can be inserted at different positions in a machine software stack.
 The guest operating systems are para-virtualized. They are assisted by an intelligent
compiler to replace the nonvirtualizable OS instructions by hypercalls.
 The guest OS kernel is modified to replace privileged and sensitive instructions with hypercalls to the hypervisor or VMM, since the guest OS may not be able to run them itself; they are implemented by the hypervisor.
 Therefore para virtualization replaces non-virtualizable instructions with hypercalls that
communicate directly with hypervisor or VMM
 When guest OS kernel is modified for virtualization, it can no longer run on hardware
directly.
 Unlike the full virtualization architecture, which intercepts and emulates privileged and sensitive instructions at runtime, para-virtualization handles these instructions at compile time.
 The traditional x86 processor offers four instruction execution rings: Rings 0, 1, 2, and 3.
 The OS is responsible for managing the hardware and the privileged instructions to execute
at Ring 0, while user-level applications run at Ring 3
 However, para-virtualization attempts to reduce the virtualization overhead, and thus
improve performance by modifying only the guest OS kernel.
 Unlike full virtualization, in para-virtualization the guest OS is aware that it is being virtualized.
 The following figures illustrate the concept of para-virtualized VM architecture.
 The lower the ring number, the higher the privilege of the instructions being executed.
 Compared with full virtualization, para-virtualization is relatively easy and more practical.
 The main problem with full virtualization is its low performance due to binary translation, and it is difficult to speed up binary translation.
 Therefore, many virtualization products employ the para-virtualization architecture.
 The popular Xen, KVM, and VMware ESX are good examples of this type; a minimal hypercall sketch follows the figure below.

a) Para-virtualized VM
b) Para-virtualized guest OS assisted by an intelligent compiler to replace nonvirtualizable OS instructions with hypercalls.
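To illustrate the hypercall idea in the figure above, here is a small conceptual Python sketch (an assumption-laden illustration, not the Xen API): the modified guest kernel replaces a privileged page-table update with an explicit call into the hypervisor, which validates and applies it on the guest's behalf.

```python
# Conceptual sketch of para-virtualization: the guest kernel issues
# hypercalls instead of executing privileged instructions itself.
# All names here are illustrative assumptions, not a real hypervisor API.

class Hypervisor:
    def __init__(self):
        self.machine_page_tables = {}          # state only the VMM may touch

    def hypercall(self, name, **args):
        if name == "update_page_table":
            # The hypervisor validates and applies the change on behalf
            # of the guest, keeping full control of the real hardware.
            self.machine_page_tables[args["virtual_page"]] = args["machine_frame"]
            return 0
        raise ValueError(f"unknown hypercall: {name}")

class ParaVirtGuestKernel:
    """A guest kernel whose privileged operations were replaced
    (e.g. by an intelligent compiler) with hypercalls."""
    def __init__(self, hypervisor):
        self.hv = hypervisor

    def map_page(self, virtual_page, machine_frame):
        # Instead of writing page tables directly (not allowed at Ring 1),
        # the modified kernel asks the hypervisor to do it.
        return self.hv.hypercall("update_page_table",
                                 virtual_page=virtual_page,
                                 machine_frame=machine_frame)

hv = Hypervisor()
guest = ParaVirtGuestKernel(hv)
guest.map_page(virtual_page=0x1000, machine_frame=0x9F000)
print(hv.machine_page_tables)   # {4096: 651264}
```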

KVM (Kernel-based VM)

 KVM is a hardware-assisted para-virtualization tool which improves performance and supports unmodified guest OSs such as Windows, Linux, and Solaris.
 It is a Linux para-virtualization system included as part of the Linux 2.6.20 kernel.
 Memory management and scheduling activities are carried out by the existing Linux kernel; the rest is handled by KVM, which makes it simpler than a hypervisor that controls the entire machine. A hedged launch example follows.
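As a hedged, illustrative example (not part of the notes), the sketch below starts an unmodified guest under KVM by invoking QEMU from Python. It assumes qemu-system-x86_64 is installed, /dev/kvm is available, and that guest-disk.img exists; the disk image name and resource sizes are assumptions.

```python
# Hedged sketch: starting an unmodified guest OS under KVM via QEMU.
# Assumes qemu-system-x86_64, a KVM-enabled kernel, and an existing
# disk image named guest-disk.img (an assumption for illustration).
import subprocess

qemu_cmd = [
    "qemu-system-x86_64",
    "-enable-kvm",          # use KVM hardware-assisted virtualization
    "-m", "1024",           # 1 GB of guest memory
    "-smp", "2",            # 2 virtual CPUs
    "-hda", "guest-disk.img",
]

# subprocess.run blocks until the guest powers off; use Popen instead
# to keep the guest running in the background.
subprocess.run(qemu_cmd, check=True)
```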

VMware ESX server:

 ESX is a VMM, or hypervisor, for bare-metal x86 symmetric multiprocessing (SMP) servers.
 It accesses hardware resources such as I/O directly and has complete resource management
control.
 An ESX-enabled server consists of four components: a virtualization layer, a resource manager, hardware interface components, and a service console, as shown in the figure.
 The virtualization layer virtualizes the physical hardware resources such as CPU, memory,
network and disk controllers, and human interface devices. Every VM has its own set of
virtual hardware resources
 The resource manager allocates CPU, memory, disk, and network bandwidth and maps them to the virtual hardware resource set of each VM created.
 Hardware interface components are the device drivers and VMware ESX server file system.

 The service console is responsible for booting the system.
 To improve performance, the ESX server employs para-virtualization architecture in which
the VM kernel interacts directly with the hardware without involving host OS.

Advantages of para-virtualization:

 As the guest OS can communicate directly with the hypervisor, this is an efficient form of virtualization.
 It also allows users to make use of new or modified device drivers.
Disadvantages:

 Para-virtualization requires the guest OS to be modified in order to interact with the para-virtualization interfaces.
 It can cause significant support and maintainability issues in production environments.
2.3 Virtualization of CPU, Memory and I/O devices.

2.3.1 Hardware support for virtualization

 To support virtualization, processors can employ a special running mode and instructions, known as hardware-assisted virtualization. The VMM and guest OS then run in different modes.
 The components to consider when selecting virtualization hardware include, CPU, Memory
and Network I/O devices.
 These are all critical for workload consolidation. The issues with the CPU pertain to either the clock speed or the number of cores the CPU holds.
 Hardware virtualization allows several OSs to run on a single machine. This is done by means of specific software called the Virtual Machine Monitor/Manager (VMM).
 In hardware virtualization there are two parties: the host machine and the guest machine.
 The software that creates a VM on host hardware is called hypervisor or VMM.
 Modern OSs and processors permit multiple processes to run simultaneously.
 If there is no protection mechanism in a processor, all instructions from different processes
will access the hardware directly and cause system crash,
 Therefore, all processors have at least two modes, user mode and supervisor mode to ensure
controlled access of critical hardware.
 Instructions running in supervisor mode are called privileged instructions. Other instructions
are unprivileged instructions.
 In a virtualized environment it is more difficult to make OSs and applications run correctly
because there are more layers in the machine stack.
 The following figure shows hardware support for virtualization in the Intel x86 processor.

 For processor virtualization, Intel offers the VT-x or VT-i technique. VT-x adds a privileged
mode and some instructions to processors. This enhancement traps all sensitive instructions
in the VMM automatically.
 For memory virtualization, Intel offers the EPT, which translates the virtual address to the
machine’s physical addresses to improve performance.
 For I/O virtualization, Intel implements VT-d or VT-c to support this.

2.3.2 CPU Virtualization

 A VM is a duplicate of an existing computer system in which the majority of the VM instructions are executed on the host processor in native mode. Thus, unprivileged instructions of VMs run directly on the host machine for higher efficiency.
 The critical instructions are divided into three categories:
1. Privileged instructions
2. Control-sensitive instructions
3. Behaviour-sensitive instructions
 Privileged instructions execute in a privileged mode and will be trapped if executed outside
this mode
 Control-sensitive instructions attempt to change the configuration of resources used.
 Behaviour-sensitive instructions have different behaviours depending on the configuration
of resources, including the load and store operations over the virtual memory.
 A CPU architecture is virtualizable if it supports the ability to run the VM's privileged and unprivileged instructions in the CPU's user mode while the VMM runs in supervisor mode.

2.3.2.1 Hardware Assisted CPU virtualization

 This technique attempts to simplify virtualization because full or para-virtualization is complicated.
 Intel and AMD add an additional privilege level, called Ring -1, to x86 processors.
 Therefore the OS can still run at Ring 0 and the hypervisor can run at Ring -1.
 All the privileged and sensitive instructions are trapped in the hypervisor automatically.
 So this technique removes the difficulty of implementing binary translation of full
virtualization and also allows OS to run in VMs without modifications.
 The following figure shows Intel Hardware-assisted CPU virtualization

 Intel’s VT-x technology is an example of hardware-assisted virtualization; VT-x is one of the two versions of Intel’s virtualization technology used for x86 processors.
 Intel calls this privilege level of the x86 processor the VMX Root Mode.
 In order to control start and stop of a VM and allocate a memory page to maintain the CPU
state for VMs, a set of additional instructions is added.
 Xen, VMware, and the Microsoft Virtual PC all implement their hypervisor by using this
VT-x technology.
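As a small practical aside (not from the source notes), on Linux one can check whether the processor advertises these hardware virtualization extensions by looking for the vmx (Intel VT-x) or svm (AMD-V) feature flags in /proc/cpuinfo, as in the sketch below.

```python
# Sketch: detect hardware-assisted virtualization support on a Linux host
# by scanning the CPU feature flags in /proc/cpuinfo.
def hw_virt_support():
    with open("/proc/cpuinfo") as f:
        flags = f.read()
    if " vmx" in flags:
        return "Intel VT-x"
    if " svm" in flags:
        return "AMD-V"
    return None

print(hw_virt_support() or "no hardware virtualization extensions found")
```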

2.3.3 Memory Virtualization:

 In a traditional execution environment, the OS maintains a mapping of virtual memory to machine memory using page tables; this is a one-stage mapping from virtual memory to machine memory.
 In a virtual execution environment, however, each page table of a guest OS has a separate page table in the VMM corresponding to it; the VMM page table is called the shadow page table.
 All modern x86 CPUs include a memory management unit (MMU) and a translation lookaside buffer (TLB) to optimize virtual memory performance. The MMU handles virtual-to-physical translations as defined by the OS.
 In a virtual execution environment, virtual memory virtualization involves sharing the physical system memory (RAM) and dynamically allocating it to the physical memory of the VMs.
 This means that a two-stage mapping process should be maintained by the guest OS and the VMM, respectively: virtual memory to physical memory, and physical memory to machine memory.
 VMware uses shadow page table to perform this two stage mapping process.
 Processors use TLB to map virtual memory directly to machine memory to avoid the two
levels of translation on every access.
 The following figure shows the two-level mapping procedure

 The guest OS continues to control the mapping of virtual addresses to the physical memory addresses of its VMs; however, the guest OS cannot directly access the actual machine memory.
 The VMM is responsible for mapping the guest physical memory to the actual machine memory, as the small sketch below illustrates.
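A minimal Python sketch of the two-stage mapping described above (purely illustrative; the page numbers are made up): the guest OS maps virtual pages to guest-physical pages, the VMM maps guest-physical pages to machine frames, and a shadow page table composes the two so the hardware can translate in one step.

```python
# Sketch of two-stage memory mapping with a shadow page table.
# All page/frame numbers are illustrative assumptions.

guest_page_table = {0x01: 0x10, 0x02: 0x11}      # guest virtual -> guest physical
vmm_p2m_table    = {0x10: 0xA0, 0x11: 0xA7}      # guest physical -> machine frame

def build_shadow_page_table(guest_pt, p2m):
    """The VMM composes both mappings so hardware can go from guest
    virtual pages straight to machine frames (one-level lookup)."""
    return {vpage: p2m[gpage] for vpage, gpage in guest_pt.items()}

shadow_pt = build_shadow_page_table(guest_page_table, vmm_p2m_table)
print(shadow_pt)          # {1: 160, 2: 167}
```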

2.3.4 I/O Virtualization:

 Through I/O virtualization, a single hardware device can be shared by multiple VMs that run concurrently.
 I/O virtualization involves managing the routing of I/O requests between virtual devices and the shared physical hardware.

 There are three ways to implement I/O virtualization: full device emulation, para-virtualization, and direct I/O.
 Full device emulation is the first approach to I/O virtualization. Generally, this approach emulates well-known, real-world devices.
 All the functions of a device or bus infrastructure, such as device enumeration,
identification, interrupts and DMA, are replicated in software. This software is located in
the VMM and acts as a virtual device.
 The para-virtualization method of I/O virtualization is used in Xen. It is also known as the split driver model, consisting of a frontend driver and a backend driver (a queue-based sketch of this model appears at the end of this subsection).
 The frontend driver manages the I/O requests of the guest OSs running in Domain U, while the backend driver, running in Domain 0, is responsible for managing the real I/O devices and multiplexing the I/O data of the different VMs.
 Direct I/O virtualization allows the VM to access devices directly. It can achieve close to
native performance without high CPU costs.
 The following figure shows the device emulation for I/O virtualization.
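The queue-based Python sketch below is an illustration under assumed names, not Xen's actual ring protocol: a frontend in a guest enqueues I/O requests onto a shared channel, and a backend in Domain 0 dequeues them, performs the "real" I/O, and multiplexes requests coming from several VMs.

```python
# Conceptual sketch of para-virtualized (split-driver) I/O.
# A shared queue stands in for the Xen I/O ring; names are assumptions.
from queue import Queue

io_ring = Queue()                      # shared channel between front/back ends

def frontend_driver(vm_name, request):
    """Runs in a Domain U guest: forwards the I/O request instead of
    touching the physical device."""
    io_ring.put((vm_name, request))

def backend_driver():
    """Runs in Domain 0: owns the real device and multiplexes requests
    coming from different VMs."""
    while not io_ring.empty():
        vm_name, request = io_ring.get()
        print(f"Domain 0 performs {request!r} on the physical device for {vm_name}")

frontend_driver("vm1", "read block 42")
frontend_driver("vm2", "write block 7")
backend_driver()
```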

2.3.5 Virtualization in Muti-core Processors:

 Virtualizing a multi-core processor is relatively more complicated than virtualizing a uni-core processor.
 Multi-core processors are claimed to have higher performance by integrating multiple processor
cores in a single chip.
 However, multi-core virtualization has raised some new challenges for computer architects, compiler constructors, system designers, and application programmers.
 There are mainly two difficulties: application programs must be parallelized to use all cores fully, and software must explicitly assign tasks to the cores, which is a very complex problem.

Physical Vs Virtual Processor cores:

 A physical processor core is a physical unit of the CPU, while a virtual processor core (also called a VCPU or virtual processor) is a logical unit of processing that is assigned to a VM.
 The multi-core virtualization method allows hardware designers to obtain an abstraction of the low-level details of the processor cores. This virtualization of multi-core processors alleviates the burden and inefficiency of managing hardware resources by software. It is illustrated in the following figure

 This method exposes four VCPUs to the software even though only three cores are actually present; a small scheduling sketch follows.
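A tiny illustrative sketch (assumed names and counts) of how a scheduler might multiplex four exposed VCPUs onto three physical cores in round-robin fashion:

```python
# Sketch: time-multiplexing four VCPUs onto three physical cores.
from itertools import cycle

vcpus = ["vcpu0", "vcpu1", "vcpu2", "vcpu3"]
cores = ["core0", "core1", "core2"]

core_cycle = cycle(cores)
schedule = {vcpu: next(core_cycle) for vcpu in vcpus}
print(schedule)   # vcpu3 shares core0 with vcpu0 in this round
```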

Virtual Hierarchy:

 Instead of supporting time-sharing jobs on one or a few cores, we can use the cores in a space-sharing manner, where single-threaded or multithreaded jobs are assigned to separate groups of cores for long time intervals.
 Virtual hierarchies can be created to overlay a coherence and caching hierarchy onto a physical
processor.
 Unlike a fixed physical hierarchy, a virtual hierarchy is a cache hierarchy that can adapt to fit the
workload or mix of workloads.
 The first level of hierarchy locates data blocks close to the cores needing them for faster access,
establishes a shared- cache domain, and establishes a point of coherence for faster
communication.

2.4 Virtual clusters and resource management


Cluster: A cluster is a group of servers and other resources that act like a single system and enable high availability and, in some cases, load balancing and parallel processing.

2.4.1 Physical Vs Virtual clusters:

 A cluster is a group of computers put together. A physical cluster is a collection of servers (physical machines) interconnected by a physical network such as a LAN.
 Virtual clusters are built with VMs installed at distributed servers from one or more physical
clusters. So VMs in a virtual cluster are interconnected logically by a virtual network across
several physical clusters.
 Each virtual cluster is formed with physical machines or VMs hosted by multiple physical
clusters.
 In a virtual cluster, virtual machines are grouped and configured for high performance
computing or parallel computing
 When a virtual cluster is created, different cluster features can be used, such as failover, load balancing, and live migration of virtual machines across physical hosts.

Virtual cluster properties:

The provisioning of VMs to a virtual cluster is done dynamically to have the following interesting
properties:

 The virtual cluster nodes can be either physical or virtual machines. Multiple VMs running with different OSs can be deployed on the same physical node.
 A VM runs with a guest OS, which is often different from the host OS that manages the resources of the physical machine where the VM is implemented.

 The purpose of using VMs is to consolidate multiple functionalities on the same server. This
will greatly enhance server utilization and application flexibility.
 VMs can be replicated on multiple servers for the purpose of promoting distributed parallelism, fault tolerance, and disaster recovery.
 The size of virtual cluster can grow or shrink dynamically similar to the way the overlay
network varies in size in P2P network.
 The failure of any physical node may disable some VMs installed on the failing nodes. But the
failure of VMs will not pull down the host system.

2.4.1.1 Fast Deployment and Effective Scheduling:

 The system should have the capability of fast deployment.


 Deployment here means two things:
1) To construct and distribute software stacks to the physical nodes inside the cluster as fast as possible, and
2) To quickly switch the runtime environment from one user’s virtual cluster to another user’s virtual cluster.
 If one user finishes their work, the virtual cluster should shut down or suspend quickly to save resources so that other VMs can run for other users. The advantage of this is load balancing of applications in a virtual cluster.
 There are four steps to deploy a group of VMs onto a target cluster (sketched after the list):
1. Preparing the disk image
2. Configuring the VMs
3. Choosing the destination nodes and
4. Executing the deployment command on every host
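The four steps above could be orchestrated roughly as in the following sketch; it is illustrative only, and the helper functions, image name, and host names are assumptions rather than a real provisioning tool.

```python
# Illustrative sketch of deploying a group of VMs onto a target cluster,
# following the four steps listed above. Helper functions are placeholders.

def prepare_disk_image(template):
    return f"{template}.qcow2"                 # step 1: prepare the disk image

def configure_vm(name, image, cpus=2, mem_mb=2048):
    return {"name": name, "image": image,      # step 2: configure the VM
            "cpus": cpus, "mem_mb": mem_mb}

def choose_destination(hosts, index):
    return hosts[index % len(hosts)]           # step 3: pick a destination node

def deploy(vm, host):
    # step 4: execute the deployment command on the chosen host
    print(f"deploying {vm['name']} ({vm['cpus']} vCPU, {vm['mem_mb']} MB) on {host}")

hosts = ["node1", "node2", "node3"]            # assumed cluster nodes
image = prepare_disk_image("ubuntu-template")
for i in range(4):
    vm = configure_vm(f"vm{i}", image)
    deploy(vm, choose_destination(hosts, i))
```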

2.4.1.2 High performance Virtual Storage:

 It is also important to manage the disk space occupied by software packages.


 Some storage architecture designs can be applied to reduce duplicated blocks in a distributed file system of virtual clusters.
 Hash values are used to compare the contents of data blocks (a small deduplication sketch follows this list).
 Every VM is configured with a name, a disk image, network settings, and allocated CPU and memory. One needs to record each VM configuration into a file.
 However, this method is inefficient when managing a large group of VMs. VMs with the same configuration could use pre-edited profiles to simplify the process, i.e., the system configures VMs according to the chosen profile.
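As a minimal illustration of the hash-based comparison mentioned above (not the notes' own design), the sketch below uses SHA-256 digests to detect duplicate disk-image blocks so that each unique block is stored only once; the block contents are assumptions.

```python
# Sketch: deduplicating disk-image blocks by content hash.
import hashlib

def dedup_blocks(blocks):
    """Store each unique block once, keyed by its SHA-256 digest."""
    store = {}
    layout = []                       # per-block references into the store
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)
        layout.append(digest)
    return store, layout

blocks = [b"kernel...", b"libs...", b"kernel...", b"data..."]   # assumed blocks
store, layout = dedup_blocks(blocks)
print(len(blocks), "blocks referenced,", len(store), "unique blocks stored")  # 4, 3
```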

There are three critical design issues for virtual clusters: live migration of VMs; migration of memory, file, and network resources; and dynamic deployment of VMs.

2.4.2 Live VM Migration Steps and Performance Effects:

 Live migration refers to the process of moving a running virtual machine or application between different physical machines without disconnecting the client or application.
 The memory, storage, and network connectivity of the virtual machine are transferred from the original host machine to the destination.
 The live migration of VMs allows the workload of one node to be transferred to another node. However, it does not guarantee that VMs can migrate randomly among themselves.

Live migration allows us to:

 Automatically optimize virtual machines within resource pools.


 Perform hardware maintenance without scheduling downtime or disrupting business operations.
 When a VM fails, its role could be replaced by another VM on a different node, as long as they
both run with the same guest OS.
 VMs can be live-migrated from one physical machine to another, in case of failure, one VM can
be replaced by another VM.
 The potential drawback is that a VM must stop playing its role if its residing host node fails. However, this problem can be mitigated with VM live migration, as in the hedged sketch below.
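As a hedged example only (assuming the libvirt Python bindings, two QEMU/KVM hosts, and the VM/host names shown, which are all assumptions), live migration can be triggered as in this sketch; the required prerequisites (shared storage or block migration, network setup, matching versions) depend on the deployment.

```python
# Hedged sketch: live-migrating a running VM between two KVM hosts with libvirt.
# Host URIs and the domain name "web-vm" are assumptions for illustration.
import libvirt

src = libvirt.open("qemu:///system")                       # source hypervisor
dst = libvirt.open("qemu+ssh://host2.example.com/system")  # destination hypervisor

dom = src.lookupByName("web-vm")
flags = libvirt.VIR_MIGRATE_LIVE
dom.migrate(dst, flags, None, None, 0)   # memory/state move while the VM keeps running

dst.close()
src.close()
```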

2.4.3 Migration of Memory, Files and Network Resources:

 This is also one of the important aspects of VM migration.
 Moving the memory instance of a VM from one physical host to another can be approached in a number of ways.
 Memory migration can be in a range of hundreds of megabytes to a few gigabytes in a typical
system today, and it needs to be done in an efficient manner.
 The Internet Suspend-Resume (ISR) technique exploits temporal locality, as memory states are likely to have considerable overlap between the suspended and resumed instances of a VM.
 To exploit temporal locality, each file in the file system is represented as a tree of small subfiles. A copy of this tree exists in both the suspended and resumed VM instances.

File System Migration:

 To support VM migration, a system must provide each VM with a consistent, location-independent view of the file system that is available on all hosts.
 A simple way to achieve this is to provide each VM with its own virtual disk. However due to
current trend of high capacity disk, migration of contents of entire disk over a network is not a
viable solution.
 So another way is to have a global file system across all machines where a VM could be
located. This can remove the need to copy files from one machine to another because all files
are network accessible.

Network Migration:

 It involves moving data and programs from one network to another as an upgrade or add-on to
a network system.

 The process of migration makes it possible to set up migrated files on a new network or to
blend two independent networks together.
 The need for network migration may result from security issues, corporate restructuring,
increased storage needs, and many others.
 A migrating VM should maintain all open network connections without relying on forwarding
mechanisms on the original host.
 Each VM must be assigned a virtual IP address that is known to other entities. This address should be distinct from the IP address of the host machine where the VM is currently located.
 Each VM can also have its own distinct virtual MAC address. The VMM maintains a mapping of virtual IP and MAC addresses to their corresponding VMs, as the small sketch below illustrates.
 So a migrating VM includes all the protocol states and carries its IP address with it.
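A conceptual Python sketch of the mapping the VMM keeps so that a VM's virtual IP and MAC addresses stay with it when it migrates; the addresses and host names are illustrative assumptions.

```python
# Sketch: the VMM's mapping of virtual IP/MAC addresses to VMs.
# Addresses and host names are illustrative assumptions.

vm_network_map = {
    "vm1": {"virtual_ip": "10.0.0.11", "virtual_mac": "52:54:00:aa:bb:01", "host": "hostA"},
    "vm2": {"virtual_ip": "10.0.0.12", "virtual_mac": "52:54:00:aa:bb:02", "host": "hostA"},
}

def migrate_vm(vm_name, new_host):
    """The VM carries its virtual IP and MAC with it; only the hosting
    physical machine recorded by the VMM changes."""
    vm_network_map[vm_name]["host"] = new_host

migrate_vm("vm1", "hostB")
print(vm_network_map["vm1"])   # same IP/MAC, now hosted on hostB
```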

2.4.4 Dynamic deployment of Virtual Clusters:

 Lightweight Directory Access Protocol (LDAP) is a set of open protocols used to access and
modify centrally stored information over a network.
 Dynamic Host Configuration Protocol (DHCP) is a protocol that provides quick, automatic,
and central management for the distribution of IP addresses within a network.

2.5 Virtualization for Data Centre Automation:


 Data centre automation means that huge volumes of hardware, software, and database resources can be allocated dynamically to millions of Internet users simultaneously with guaranteed QoS and cost effectiveness.
 Google, Yahoo, Amazon, Microsoft, Hp, Apple and IBM companies have invested billions of
dollars in data centre construction and automation.
 Virtualization of data centre highlights high availability (HA), backup services, workload
balancing and further increases in client bases.

2.5.1 Server Consolidation in Data Centres:

 In data centres, a large number of heterogeneous workloads can run on servers at various times.
These workloads can be roughly divided into 2 categories.
1. Chatty workloads and
2. Non interactive workloads

1. Chatty workload: It may burst at some point and return to a silent state at some other point. A web video service is an example, whereby many people use it at night and few people use it during the day. Another example is the workload on a university result server.

2. Non-interactive workload: These workloads do not require people's effort to make progress after they are submitted. High-performance computing is a typical example. At various stages, the resource requirements of these workloads are dramatically different. To ensure a workload is always satisfied at all demand levels, the workload is statically allocated enough resources so that peak demand is satisfied.

Need & Advantages of Server Consolidation:

 It is common that most servers in data centres are underutilized. A large amount of hardware,
space, power and management cost of these servers is wasted.
 Server consolidation is an approach to improve the low utilization ratio of hardware resources by reducing the number of physical servers.
 Among several server consolidation techniques, such as centralized and physical consolidation, virtualization-based server consolidation is the most powerful. Data centres need to optimize their resource management.
 Consolidation enhances hardware utilization. Many underutilized servers are consolidated into fewer servers to enhance resource utilization. Consolidation also facilitates backup services and disaster recovery.
 This approach enables more agile provisioning and deployment of resources. In a virtual
environment, the images of guest OSs and their applications are readily cloned and reused.
 The total cost of ownership is reduced. In this sense, server virtualization defers purchases of new servers and leads to a smaller data centre footprint, lower maintenance costs, and lower power, cooling, and cabling requirements.
 This approach improves availability and business continuity. The crash of a guest OS has no effect on the host OS or any other guest OS. It becomes easier to transfer a VM from one server to another because virtual servers are unaware of the underlying hardware; a small consolidation sketch follows.
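To make the consolidation idea concrete, here is a small, illustrative first-fit sketch (the CPU demands and server capacity are assumptions) that packs underutilized VMs onto as few physical servers as possible:

```python
# Sketch: first-fit server consolidation by CPU demand (illustrative numbers).

def consolidate(vm_loads, server_capacity):
    """Place each VM on the first server with enough spare capacity,
    opening a new server only when necessary."""
    servers = []                                  # remaining capacity per server
    placement = {}
    for vm, load in sorted(vm_loads.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(servers):
            if load <= free:
                servers[i] -= load
                placement[vm] = i
                break
        else:
            servers.append(server_capacity - load)
            placement[vm] = len(servers) - 1
    return placement, len(servers)

vm_loads = {"vm1": 30, "vm2": 20, "vm3": 50, "vm4": 10, "vm5": 40}   # % CPU
placement, used = consolidate(vm_loads, server_capacity=100)
print(placement, f"-> {used} physical servers instead of {len(vm_loads)}")
```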

2.5.2 Virtual Storage Management

 In system virtualization, virtual storage includes the storage managed by VMMs and guest
OSs. Generally the data stored in this environment can be classified into two categories,
1. VM Images and
2. Application Data
 The VM images are special to the virtual environment
 The application data includes all other data which is same as the data in traditional OS
environment.
 The most important aspects of system virtualization are encapsulation and isolation
 Traditional OSs and applications running on them can be encapsulated in VMs. Only one OS
runs in virtualization while many applications run in the OS. System virtualization allows
multiple VMs to run on a physical machine and VMs are completely isolated.
 To achieve encapsulation and isolation, both system software and the hardware platform, such as the CPU and chipset, are rapidly updated. However, storage is lagging behind; the storage systems have become the main bottleneck of VM deployment.

2.5.3 Cloud OS for Virtualized Data Centres:

 Data centres must be virtualized to serve as cloud providers.


 The table summarizes four virtual infrastructure (VI) managers and OSs

 These VI managers and OSs are specially tailored for virtualizing data centres which own a large number of servers in clusters.
 Nimbus, Eucalyptus, and OpenNebula are all open source software available to the general public. Only vSphere 4 is a proprietary OS for cloud resource virtualization and management over data centres.

2.5.4 Trust Management in Virtualized Data Centres:

 A VMM changes the computer architecture. It provides a layer of software between OS and
system hardware to create one or more VMs on a single physical platform.
 The VMM can provide secure isolation, and a VM accesses hardware resources through the control of the VMM, so the VMM is the base of the security of a virtual system. Normally, one VM is designated as the management VM and has privileges such as creating, suspending, resuming, or deleting other VMs.
 Once a hacker successfully enters the VMM or management VM, the whole system is in
danger.

2.5.4.1 VM based Intrusion detection:

 Intrusions are unauthorised access to certain computer systems from local or network users.
 Intrusion detection is used to recognize the unauthorised access.
 An Intrusion Detection System (IDS) is built on OSs and is based on the characteristics of
intrusion actions
 A typical IDS can be classified as a host based IDS (HIDS) or a network based IDS (NIDS),
depending on data source.
 A HIDS can be implemented on the monitored system. When the monitored system is attacked by hackers, the HIDS also faces the risk of being compromised. A NIDS is based on the flow of network traffic, which cannot detect fake actions.

 The VMM monitors and audits access requests for hardware and system software. This can avoid fake actions and possesses the merits of a HIDS.
 There are two methods for implementing a VM based IDS
1) The IDS is an independent process in each VM or a high privileged VM or the VMM
2) The IDS is integrated into the VMM and has the same privilege to access the hardware as the VMM

 The VM-based IDS contains a policy engine and a policy module. The policy framework can monitor events in the different guest VMs
