- This virtualization layer creates virtual resources from the physical hardware for exclusive use by
virtual machines (VMs), allowing them to function with their own guest OS.
Virtualization can be implemented at various operational levels (as shown in Figure 3.2), including:
- Instruction set architecture (ISA) level
- Hardware level
- Operating system level
- Library support level
- Application level
FIGURE 3.2: Virtualization ranging from hardware to applications in five abstraction levels
- To enhance efficiency, dynamic binary translation is preferred over instruction-by-instruction interpretation: it translates basic blocks of dynamic source instructions into target instructions at run time.
- Further optimization can use program traces or super blocks to increase translation efficiency.
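A minimal sketch of the basic-block translation-cache idea described above, in Python (the instruction strings and the translate_block mapping are illustrative stand-ins, not a real binary translator):

```python
# Toy dynamic binary translator: source basic blocks are translated on
# first execution and cached, so hot blocks pay the translation cost once.

translation_cache: dict[int, list[str]] = {}  # block address -> target code

def translate_block(source_block: list[str]) -> list[str]:
    # Stand-in for real translation: map each source instruction to a
    # hypothetical target instruction one-for-one.
    return [f"target({ins})" for ins in source_block]

def execute(block_addr: int, fetch_block) -> list[str]:
    if block_addr not in translation_cache:       # miss: translate once
        translation_cache[block_addr] = translate_block(fetch_block(block_addr))
    return translation_cache[block_addr]          # hit: reuse the translation

# Hypothetical usage: the block is translated only on its first execution.
program = {0x400000: ["mov r1, r2", "add r1, #4", "jmp 0x400010"]}
print(execute(0x400000, program.get))
```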
- Instruction set emulation necessitates binary translation and optimization, resulting in the creation of a
virtual instruction set architecture (V-ISA) that includes a processor-specific software translation layer
within the compiler.
- vCUDA is another tool that allows virtual machine applications to utilize GPU hardware acceleration.
User-Application Level
- Application-level virtualization virtualizes an application as a virtual machine (VM); it is often referred to as process-level virtualization.
- It typically involves deploying high-level language (HLL) VMs, which act as an abstraction layer
running on the operating system.
- Programs written in HLL and compiled for these VMs can operate within this environment, with the
Microsoft .NET CLR and Java Virtual Machine (JVM) as notable examples.
- Other forms of application-level virtualization include application isolation, sandboxing, and
streaming, which encapsulate applications in a layer separate from the host OS and other applications.
- This isolation facilitates easier distribution and removal of applications from user workstations.
- An example is the LANDesk application virtualization platform, which allows software to run as self-
contained, executable files without installation or system modifications.
The hardware and OS support levels yield the highest performance, but the hardware and application levels are also the most expensive to implement. User isolation is the most difficult to achieve, while ISA-level implementation offers the best application flexibility.
- Programs executed under a VMM should function similarly to those run on the original machine, with
exceptions allowed for:
- Variations in resource availability when multiple virtual machines (VMs) share the same hardware.
- Timing differences arising from the layer of software and the existence of other VMs.
- The demand for resource efficiency is vital; a VMM must leverage VMs effectively to avoid being less
preferable than physical machines.
- Traditional emulators and software interpreters, while flexible, are too slow to be efficient. A VMM must therefore execute a statistically dominant subset of the virtual processor's instructions directly on the real processor, without software intervention, to achieve acceptable efficiency.
Table 3.2 compares four hypervisors and VMMs that are in use today.
- A Virtual Machine Monitor (VMM) has comprehensive control over resources, which includes:
1. Allocating hardware resources to programs.
2. Preventing programs from accessing resources that have not been explicitly allocated to them.
3. Regaining control of resources when necessary.
- Not all processors meet the requirements for effective VMM implementation, as it is closely tied to
processor architectures.
- Difficulties arise with certain processors, such as x86, including the inability to trap on some privileged
instructions, making VMM implementation challenging without hardware modifications.
- Hardware-assisted virtualization is needed for processors that do not inherently support the necessary
features for VMM functionality.
FIGURE 3.3: The OpenVZ virtualization layer inside the host OS, which provides some OS images to
create VMs quickly
Advantages of OS Extensions
- Compared with hardware-level VMs, OS-level VMs have minimal startup/shutdown costs, low resource requirements, and high scalability.
- An OS-level VM and its host environment can synchronize state changes when necessary.
Disadvantages of OS Extensions
- OS extensions require all VMs at the operating system level within a container to share the same guest
OS family.
- Different OS-level VMs can use various distributions, but they must belong to the same OS family
(e.g., Windows cannot run on a Linux container).
- This limitation poses a challenge for cloud computing users who have diverse preferences for operating
systems.
Figure 3.3 illustrates the concept of OS-level virtualization.
- The virtualization layer operates within the OS, allowing multiple virtual machines (VMs) to access hardware resources.
- OS-level virtualization involves creating isolated execution environments using a single OS kernel.
- Access requests from VMs are redirected to their designated resource partitions on the physical
machine.
- The chroot command in UNIX facilitates the creation of virtual root directories for various VMs.
- Two methods exist for implementing virtual root directories:
- Duplicating common resources for each VM, which incurs high costs and overhead.
- Sharing resources with the host and generating private copies on demand.
- The first method undermines the advantages of OS-level virtualization relative to hardware-assisted
virtualization, making OS-level virtualization a less preferred option.
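A minimal sketch of chroot-based isolation in Python (Unix-only and requires root privileges; the /vz/private path is a hypothetical example):

```python
import os

def enter_virtual_root(new_root: str) -> None:
    """Confine the calling process to new_root, as chroot-based OS-level
    virtualization does: afterward, '/' for this process resolves to
    new_root, so the VM sees only its own private directory tree."""
    os.chroot(new_root)  # redirect the process's root directory
    os.chdir("/")        # ensure the working directory is inside the jail

# Hypothetical usage: each OS-level VM gets its own root directory.
# enter_virtual_root("/vz/private/101")
```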
Two OS tools (Linux vServer and OpenVZ) support Linux platforms to run other platform-based
applications through virtualization. These two OS-level tools are illustrated in Example 3.1.
The third tool, FVM, is an attempt specifically developed for virtualization on the Windows NT
platform.
Each VPS has its own files, users and groups, process tree, virtual network, virtual devices, and IPC
through semaphores and messages.
The resource management subsystem of OpenVZ consists of three components: two-level disk
allocation, a two-level CPU scheduler, and a resource controller.
The amount of disk space a VM can use is set by the OpenVZ server administrator; this is the first level of disk allocation. Each VM acts as a standard Linux system, so the VM administrator is responsible for allocating disk space to each user and group; this is the second-level disk quota.
The first-level CPU scheduler of OpenVZ decides which VM to give the time slice to, taking into
account the virtual CPU priority and limit settings.
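A minimal sketch of such a first-level, priority-and-limit-aware choice in Python (the policy and field names are assumptions for illustration, not OpenVZ code):

```python
from dataclasses import dataclass

@dataclass
class Vm:
    name: str
    priority: int      # higher value = more entitled to the time slice
    used_share: float  # fraction of CPU consumed in this period
    limit: float       # maximum allowed fraction of CPU

def pick_vm(vms: list[Vm]) -> Vm | None:
    # Skip VMs that have hit their CPU limit, then prefer high priority;
    # the winner's own (second-level) scheduler picks a process, as in Linux.
    eligible = [vm for vm in vms if vm.used_share < vm.limit]
    return max(eligible, key=lambda vm: vm.priority, default=None)

vms = [Vm("vps101", 2, 0.40, 0.50), Vm("vps102", 5, 0.55, 0.50)]
print(pick_vm(vms).name)  # vps102 is over its limit, so vps101 runs
```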
The second-level CPU scheduler is the same as that of Linux. OpenVZ has a set of about 20 parameters
which are carefully chosen to cover all aspects of VM operation. Therefore, the resources that a VM can
use are well controlled. OpenVZ also supports checkpointing and live migration.
The complete state of a VM can quickly be saved to a disk file.
This file can then be transferred to another physical machine and the VM can be restored there. It only
takes a few seconds to complete the whole process. However, there is still a delay in processing because
the established network connections are also migrated.
- Visual MainWin delivers a compiler support system for developing Windows applications with Visual
Studio, facilitating their operation on select UNIX hosts.
The vCUDA is explained in Example 3.2 with a graphical illustration in Figure 3.4.
Session 9 questions:
1. What is the primary benefit of virtualization technology in cloud computing?
2. What are the three main approaches to I/O virtualization?
3. What is the main disadvantage of full device emulation in I/O virtualization?
4. What technology supports hardware-assisted I/O virtualization, and what does it do?
5. What are the two main difficulties in virtualizing multi-core processors?
- The Xen hypervisor does not include device drivers natively; it provides a mechanism by which a guest OS can access the physical devices directly, which keeps the hypervisor small.
- It creates a virtual environment between hardware and OS, with commercial versions being developed
by vendors such as Citrix and Oracle.
- Core components of a Xen system include the hypervisor, kernel, and applications.
- Multiple guest operating systems can run on the hypervisor, with one, known as Domain 0, having
control over others (Domain U).
- Domain 0 is a privileged OS that manages hardware access and resource allocation for guest domains; it is loaded first when Xen boots, without any file system drivers being available.
- Security concerns arise with Domain 0; if compromised, the entire system could be vulnerable.
- Domain 0 allows for flexible management of virtual machines (VMs), including creation, modification,
and migration, but also introduces security challenges.
- Traditional machine states resemble a straight line, while VM states resemble a tree, permitting
multiple instances and the ability to roll back to previous states for error correction or system image
distribution.
- Host-based virtualization installs a layer of virtualization software between the host OS and the guest operating systems.
- These two classes, full virtualization and host-based virtualization, yield different architectures for virtual machines.
Full Virtualization
- Full virtualization allows noncritical instructions to run directly on the hardware.
- Only critical instructions are trapped into the Virtual Machine Monitor (VMM) and emulated in software, because binary-translating every instruction would incur a large performance overhead.
- Noncritical instructions do not pose a security risk, allowing them to run efficiently on hardware.
- This approach balances efficiency with system security by ensuring only critical instructions are
managed by the VMM.
- The VMM scans instruction streams to identify privileged and behavior-sensitive instructions, which
are then trapped and emulated using binary translation.
- Full virtualization is achieved through a combination of binary translation and direct execution,
allowing the guest OS to be decoupled from the hardware.
- The guest OS remains unaware of its virtualization status.
- Performance may suffer due to time-consuming binary translation, especially with I/O-intensive
applications.
- A code cache is used in binary translation to store frequently used translated instructions, enhancing
performance but increasing memory usage.
- Typically, full virtualization performance on the x86 architecture is between 80% and 97% of that of
the host machine.
Session 10 questions:
1. Name the various classes of VM architecture.
2. What is the significance of a hypervisor?
3. What does Xen stand for?
4. Name the two categories of hardware virtualization.
5. What is full virtualization?
Host-Based Virtualization
- The alternative VM architecture involves a virtualization layer installed on the host OS, which manages
hardware.
- Guest OSes run on top of this virtualization layer, allowing dedicated applications to operate within
VMs while some applications may run directly on the host OS.
- Advantages of this host-based architecture include:
- No modifications required to the host OS for installation.
- Utilization of the host OS for device drivers and low-level services, simplifying VM design and
deployment.
- This approach suits various host machine configurations while potentially suffering from lower
performance compared to hypervisor/VMM architecture.
- Hardware access requests entail four layers of mapping, which significantly degrade performance.
- The need for binary translation arises when the guest OS's ISA differs from the underlying hardware.
- Despite its flexibility, the performance limitations render the host-based architecture less practical for
intensive applications.
The guest operating systems are para-virtualized. They are assisted by an intelligent compiler to replace
the nonvirtualizable OS instructions by hypercalls as illustrated in Figure 3.8.
Para-Virtualization Architecture
The traditional x86 processor offers four instruction execution rings: Rings 0, 1, 2, and 3. The lower the ring number, the higher the privilege of the instructions executed there. The OS is responsible for managing the hardware and executes privileged instructions at Ring 0, while user-level applications run at Ring 3.
The best example of para-virtualization is the KVM.
When the x86 processor is virtualized, a virtualization layer is inserted between the hardware and
the OS.
According to the x86 ring definition, the virtualization layer should also be installed at Ring 0.
Different instructions at Ring 0 may cause some problems.
Figure 3.8 shows that para-virtualization replaces nonvirtualizable instructions with hypercalls that communicate directly with the hypervisor or VMM.
However, when the guest OS kernel is modified for virtualization, it can no longer run on the hardware
directly.
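A toy model of this hypercall substitution in Python (class and method names are illustrative, not real Xen interfaces):

```python
class Hypervisor:
    def hypercall_set_cr3(self, vm_id: int, page_table_root: int) -> None:
        # The hypervisor validates and performs the privileged operation
        # on behalf of the guest, which no longer runs at Ring 0.
        print(f"VM {vm_id}: page-table root set to {page_table_root:#x}")

class ParavirtGuestKernel:
    def __init__(self, vm_id: int, hv: Hypervisor):
        self.vm_id, self.hv = vm_id, hv

    def switch_address_space(self, page_table_root: int) -> None:
        # Instead of executing a nonvirtualizable instruction (e.g., a MOV
        # to CR3) directly, the modified guest kernel issues a hypercall.
        self.hv.hypercall_set_cr3(self.vm_id, page_table_root)

ParavirtGuestKernel(0, Hypervisor()).switch_address_space(0x1000)
```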
- Para-virtualization reduces overhead but introduces compatibility and portability issues due to the need to
support unmodified operating systems.
- Maintaining para-virtualized operating systems is costly as it often requires significant kernel modifications.
- Performance benefits of para-virtualization fluctuate based on workload.
- Compared to full virtualization, para-virtualization is more practical and easier to implement.
- Full virtualization faces challenges with low performance in binary translation, making it difficult to
enhance.
- Popular virtualization products utilizing para-virtualization include Xen, KVM, and VMware ESX.
Handouts for Session 12: VIRTUALIZATION OF CPU, MEMORY, AND I/O DEVICES
CPU Virtualization
- A virtual machine (VM) simulates a computer system, efficiently executing unprivileged instructions
directly on the host processor while requiring special handling for privileged, control-sensitive, and
behavior-sensitive instructions.
- A CPU architecture is considered virtualizable if it allows both privileged and unprivileged instructions
to run in user mode, with the virtual machine monitor (VMM) operating in supervisor mode to mediate
hardware access.
- RISC architectures support virtualization effectively, as all sensitive instructions are privileged;
however, x86 architectures pose challenges because some sensitive instructions cannot be trapped by
the VMM.
- On UNIX-like systems, system calls generate interrupts that transfer control to the OS kernel;
paravirtualization (e.g., in Xen) prompts interrupts in both the guest OS and hypervisor, allowing
unmodified applications to operate, albeit with a minor performance overhead.
Memory Virtualization
- Virtual memory virtualization mimics traditional OS virtual memory support through page tables.
- In a standard OS, a single mapping stage occurs, using a memory management unit (MMU) and a
translation lookaside buffer (TLB) for efficiency.
- In a virtualized setting, system memory is shared across virtual machines, necessitating a two-stage
mapping process.
- The guest operating system maps virtual memory to physical memory, while the virtual machine
monitor (VMM) maps physical memory to the actual machine memory.
- MMU virtualization provides transparency to the guest OS, enabling virtual-to-physical address
mapping without direct machine memory access.
- The VMM oversees the final mapping to ensure system control and stability.
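A minimal sketch of this two-stage mapping in Python (page size and table contents are assumed for illustration; a real MMU walks multi-level tables in hardware):

```python
PAGE = 4096  # assumed 4 KiB pages

guest_page_table = {0: 0x42}      # guest virtual page -> guest physical page
vmm_page_table = {0x42: 0x913}    # guest physical page -> machine page

def translate(guest_vaddr: int) -> int:
    vpn, offset = divmod(guest_vaddr, PAGE)
    gppn = guest_page_table[vpn]  # stage 1: guest OS mapping
    mpn = vmm_page_table[gppn]    # stage 2: VMM mapping
    return mpn * PAGE + offset

print(hex(translate(0x123)))  # machine address for guest virtual 0x123
```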
- Each guest OS has a corresponding shadow page table in the Virtual Machine Monitor (VMM) for
managing virtual memory.
- Nested page tables introduce another level of indirection in the virtual memory addressing process.
- The Memory Management Unit (MMU) translates virtual to physical addresses based on the OS,
followed by a translation to machine addresses via hypervisor-defined page tables.
- Modern OS architectures often lead to an abundance of shadow page tables, resulting in significant
performance overhead and memory costs.
- VMware utilizes shadow page tables for the translation of virtual memory to machine memory.
- To optimize memory access, processors implement Translation Lookaside Buffer (TLB) hardware,
allowing for direct virtual-to-machine memory mapping.
- Any changes made by the guest OS to memory mappings trigger updates in the shadow page tables by
the VMM for efficient lookup.
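A toy illustration of the shadow-page-table idea in Python (table contents assumed): the VMM composes the guest's virtual-to-physical map with its own physical-to-machine map, so the hardware can translate virtual to machine addresses in one lookup:

```python
guest_pt = {0: 0x42, 1: 0x43}        # guest VPN -> guest PPN (assumed)
vmm_pt = {0x42: 0x913, 0x43: 0x914}  # guest PPN -> machine PN (assumed)

# The shadow page table caches the composed virtual -> machine mapping.
shadow_pt = {vpn: vmm_pt[gppn] for vpn, gppn in guest_pt.items()}

# When the guest OS edits guest_pt, the VMM traps the change and refreshes
# the affected shadow_pt entries so lookups stay consistent.
print(shadow_pt)  # {0: 0x913, 1: 0x914}
```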
- The AMD Barcelona processor, since 2007, has been equipped with hardware-assisted memory
virtualization, employing nested paging technology for enhanced address translation in virtual
environments.
- Virtual address translation involves accessing the L4 page table via Guest CR3, requiring a conversion
to the host physical address (HPA) using EPT.
- The EPT TLB is checked for existing translations; if unavailable, the CPU consults the EPT. If the
translation is still not found, an EPT violation exception is triggered.
- Translating each Guest Virtual Address (GVA) requires accessing the EPT multiple times, leading to
a maximum of 20 memory accesses in the worst case, which remains inefficient.
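One consistent way to arrive at the worst-case figure of 20, assuming a 4-level guest page table and a 4-level EPT: each of the four guest page-table levels needs four EPT accesses to translate its guest-physical pointer, plus one access to read the guest table entry itself:

\[
4 \times (4 + 1) = 20 \ \text{memory accesses.}
\]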
- To address these inefficiencies, Intel has increased the EPT TLB size to reduce the frequency of
memory accesses required for translation.
I/O Virtualization
I/O virtualization involves managing the routing of I/O requests between virtual devices and
the shared physical hardware. At the time of this writing, there are three ways to implement I/O
virtualization:
1) full device emulation
2) para-virtualization
3) direct I/O.
Full device emulation is the first approach for I/O virtualization. Generally, this approach emulates
well-known, real-world devices.
- All the functions of a device or bus infrastructure, such as device enumeration, identification, interrupts, and DMA, are replicated in software.
- This software is located in the VMM and acts as a virtual device. The I/O access requests of the guest OS are trapped in the VMM, which interacts with the I/O devices. The full device emulation approach is shown in Figure 3.14.
Direct I/O virtualization allows VMs to access devices directly, leading to near-native performance with
lower CPU costs, primarily focused on networking for mainframes.
- Challenges exist for commodity hardware, especially concerning device states during workload migration
that may lead to system instability.
- Hardware-assisted I/O virtualization is essential due to the high overhead of software-based methods.
- Intel VT-d facilitates the remapping of I/O DMA transfers and device-generated interrupts, supporting
various guest OS models.
- Self-virtualized I/O (SV-IO) optimizes I/O virtualization by utilizing multicore processors:
- Encapsulates all virtualization tasks associated with an I/O device.
- Provides virtual devices and access APIs for VMs, along with management APIs for the VMM.
- Defines a unique virtual interface (VIF) for each virtualized I/O device, enabling structured communication
through message queues for incoming and outgoing messages.
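A toy sketch of an SV-IO-style virtual interface in Python (names and message format are illustrative assumptions, not a real SV-IO implementation):

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class VirtualInterface:
    """One VIF per virtualized I/O device: a pair of message queues."""
    vif_id: int
    outgoing: deque = field(default_factory=deque)  # VM -> device messages
    incoming: deque = field(default_factory=deque)  # device -> VM messages

    def send(self, msg: bytes) -> None:
        self.outgoing.append(msg)  # VM-side access API

    def receive(self) -> bytes | None:
        return self.incoming.popleft() if self.incoming else None

vif = VirtualInterface(vif_id=1)
vif.send(b"net-tx: packet 0")  # queued for the device side to consume
```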
Session 12 questions:
1. What is Memory virtualization?
2. What is the advantage of CPU virtualization?
3. What is full device emulation?
4. What is para-virtualization?
5. What is direct I/O?
Handouts for Session 13: VIRTUALIZATION OF CPU, MEMORY, AND I/O DEVICES (Contd.)
- This dynamic heterogeneity arises from issues related to less reliable transistors and increased
complexity in transistor usage.
Virtual Hierarchy
- Current many-core CMPs utilize a physical hierarchy with multiple static cache levels for allocation
and mapping. Virtual Hierarchy refers to a dynamic or software-based organization of resources,
such as memory or cache, that adapts to workloads rather than being fixed in hardware.
A virtual cache hierarchy adapts to diverse workloads, enhancing performance.
The first level of the virtual hierarchy, shown in Figure 3.17(a), stores data blocks close to the cores that need them for faster access.
● It uses a shared-cache and coherence point to improve efficiency.
● If data is missing at the first level, it checks L2 cache.
Workloads are split into three groups:
● Database (VM0, VM3)
● Web server (VM1, VM2)
● Middleware (VM4–VM7)
Each workload runs in a separate virtual machine, but can also share space within one OS.
This requires careful mapping of virtual pages to physical memory by operating systems or hypervisors.
1. First level – Each VM runs separately to reduce delays and avoid performance issues. Cache
and interconnects are mostly isolated.
2. Second level – Features shared memory, allowing flexible resource allocation without
clearing caches.
This setup preserves existing software, enables page sharing, and supports multiprogramming and
server consolidation.
FIGURE 3.17: CMP server consolidation by space-sharing of VMs into many cores forming multiple
virtual clusters to execute various workloads.
Session 13 questions:
1. What is multicore virtualization?
2. What are the problems in virtualization of multicore systems?
3. How can the problems of multicore virtualization be addressed?
4. What is virtual hierarchy?
5. In which type of virtual hierarchy data blocks are stored close to the core?
Traditional VMs need manual setup, which becomes inefficient as more VMs join.
Amazon EC2 offers cloud-based elastic computing, simplifying VM management.
Platforms like XenServer and VMware ESX support bridging mode, allowing VMs to act as individual
network hosts.
Bridging mode enables seamless VM communication through a virtual network card, with automatic
network setup.
Virtual Clusters
Virtual clusters are built with VMs installed at distributed servers from one or more physical clusters.
The VMs in a virtual cluster are interconnected logically by a virtual network across several
physical networks.
Figure 3.18 illustrates virtual clusters and physical clusters.
FIGURE 3.18: A cloud platform with four virtual clusters over three physical clusters shaded differently.
Figure 3.19 shows the concept of a virtual cluster based on application partitioning or customization.
Different colors in the figure represent nodes in different virtual clusters. Efficient storage of VM images
is important due to their large numbers. Common software (OS, libraries) can be preinstalled as
templates (Template VMs). Users can build custom software stacks by copying from template VMs and
adding their own libraries and applications.
Four virtual clusters exist over physical clusters.
Physical machines (hosts) run virtual machines (guests), which may have different operating systems.
VMs can be installed remotely or replicated across multiple servers in the same or different physical
clusters. Virtual clusters are flexible as VM nodes can be added, removed, or migrated dynamically.
Green computing is important, but past efforts have focused only on single workstations, not entire
clusters. Existing energy-saving techniques work only for specific setups and applications.
Live VM migration helps transfer workloads but can reduce cluster efficiency and affect performance
(QoS). The challenge is to develop migration strategies that balance energy efficiency and performance.
-Virtualization enables load balancing by analyzing system load and user activity.
-Automatic scaling (up/down) improves resource usage and reduces response times.
Efficient VM-to-node mapping is crucial as live migration helps balance cluster loads dynamically.
Templates (preinstalled OS with optional apps) simplify setup; COW (Copy on Write) reduces disk
space use and speeds up deployment.
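A toy copy-on-write overlay in Python (the block-map representation is an illustrative assumption): reads fall through to the shared template image, and a private copy of a block is made only on write:

```python
class CowDisk:
    def __init__(self, template: dict[int, bytes]):
        self.template = template           # shared, read-only base image
        self.delta: dict[int, bytes] = {}  # per-VM private blocks

    def read(self, block: int) -> bytes:
        return self.delta.get(block, self.template.get(block, b"\x00"))

    def write(self, block: int, data: bytes) -> None:
        self.delta[block] = data  # copy-on-write: only the delta is stored

base = {0: b"kernel", 1: b"libs"}      # preinstalled template VM image
vm1, vm2 = CowDisk(base), CowDisk(base)
vm1.write(1, b"custom-libs")           # vm2 still sees the shared template
print(vm2.read(1))                     # b'libs'
```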
Each VM needs a unique setup (name, disk, network, CPU, memory), stored in configuration files.
Large groups of VMs benefit from predefined profiles to simplify setup.
VMs share common settings, while unique values (UUID, IP, VM name) are generated automatically.
Choosing the right destination host is key for balancing workloads and meeting VM requirements.
VM Migration: The VM state file is copied from storage to the host machine.
Host-Based Manager: The cluster manager runs on the host system, supervising guest VMs and
restarting them if they fail (e.g., VMware HA system).
Independent Manager: A separate manager operates on both host and guest systems, but this adds
complexity.
Integrated Manager: A unified system manages both host and guest resources, distinguishing between
physical and virtual resources.
Virtual Clustering & Migration: VMs can be migrated between physical machines to handle failures.
Used in cloud computing, grids, and HPC for dynamic resource allocation.
Balances downtime, network bandwidth, and migration time to minimize disruptions.
VMs can be Inactive, Active, Paused, or Suspended.
Figure 3.21 shows the effect of live migration of a VM from one host to another on the data transmission rate (Mbit/second). Before migration, data throughput was 870 Mbit/sec. The first precopy round took 63 seconds, reducing throughput to 765 Mbit/sec. Additional copying lasted 9.8 seconds, further lowering the rate to 694 Mbit/sec. Downtime was only 165 ms before the VM was restored on the destination host.
Memory Migration
- Moving memory instances from one physical host to another is a crucial aspect of VM migration, and it can be approached in many ways.
- Traditional techniques share common paradigms influenced by the application/workloads of the guest
OS.
- Memory migration typically ranges from hundreds of megabytes to a few gigabytes and must be
efficient.
- The Internet Suspend-Resume (ISR) technique leverages temporal locality, where memory states have
considerable overlap between VM suspension and resumption.
- Temporal locality indicates that memory states differ only based on work done since the last
suspension.
- File systems are represented as trees of small subfiles, with copies in both suspended and resumed VM
instances.
- This tree-based representation allows for caching and transmission of only changed files, optimizing
the process.
- ISR is suitable when live machine migration is unnecessary, though it results in higher downtime
compared to other techniques.
Network Migration
- A migrating virtual machine (VM) must retain all open network connections without depending on
original host forwarding or mobility support.
- Each VM is assigned a unique virtual IP address known to other systems, which may differ from the
host machine's IP address.
- VMs can also possess distinct virtual MAC addresses.
- The Virtual Machine Monitor (VMM) keeps a record of the mapping between virtual IP and MAC
addresses and their respective VMs.
- Generally, a migrating VM carries all protocol states along with its IP address.
- When migrating between closely connected machines on a switched LAN, an unsolicited ARP reply
from the migrating host advertises the new IP location, updating peer communication paths.
- Although some packets may be lost during this transition, no significant issues arise.
- On a switched network, the migrating operating system can retain its original Ethernet MAC address
and depend on the network switch to recognize its relocation to a different port.
- Live migration allows for the transfer of a virtual machine (VM) between physical nodes without
disruption to its operating system or applications.
- This feature is increasingly adopted in enterprise settings for efficient online maintenance, load
balancing, and proactive fault tolerance.
- Advantages include server consolidation, performance isolation, and enhanced management
capabilities.
- Various implementations exist that cater to diverse functionalities related to live migration.
- Traditional migration involves suspending VMs during transport and resuming afterward.
- The precopy mechanism enables live migration while keeping the VM operational and its applications
running.
- Live migration is an essential feature of virtualization technologies, focusing on VM migration within
cluster environments utilizing network-accessible storage systems (e.g., SAN, NAS).
- The migration primarily involves transferring only the memory and CPU state from the source node to
the target node.
- The precopy approach is the main technique, where all memory pages are initially transferred, followed
by only the modified pages in iterative rounds.
- Minimal service downtime is anticipated due to the use of iterative copy operations, with VM
suspension occurring when the application's writable working set is small.
- During the precopy phase, performance can degrade significantly as the migration daemon uses
network bandwidth to transfer dirty pages.
- An adaptive rate limiting approach is implemented to alleviate performance issues, although this may
increase total migration time by nearly tenfold.
- There is a need to establish a maximum number of iterations, as not all applications' dirty pages may
converge to a small writable working set within multiple rounds.
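A simplified model of the precopy loop in Python (the round limit, stop threshold, and callback interfaces are assumptions for illustration, not a real hypervisor API):

```python
def precopy_migrate(all_pages, get_dirty_pages, send,
                    max_rounds=29, stop_threshold=50):
    """Iteratively copy memory while the VM keeps running."""
    send(all_pages)                       # round 0: transfer every page
    for _ in range(max_rounds):           # bounded number of iterations
        dirty = get_dirty_pages()         # pages written since last round
        if len(dirty) <= stop_threshold:  # writable working set is small
            break
        send(dirty)
    # Stop-and-copy phase: suspend the VM, send the remaining dirty pages
    # together with the CPU state, then resume on the destination host.
    send(get_dirty_pages())
```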
- The precopy approach in VM migration faces challenges due to the large amount of transferred data.
- A new method, CR/TR-Motion, proposes transferring execution trace files in iterations instead of dirty
pages, reducing total migration time and downtime.
- This method's effectiveness is contingent upon the log replay rate exceeding the log growth rate,
impacting its application in clustered environments.
- An alternative strategy known as postcopy involves transferring all memory pages once, which
minimizes baseline migration time but increases downtime due to page fetching latency.
- The availability of abundant CPU resources in multicore machines allows for the utilization of memory
compression techniques, significantly reducing the amount of data transferred.
- Memory compression algorithms typically incur minimal overhead and support rapid and
straightforward decompression without additional memory requirements.
Session 14 questions:
1. What is virtual cluster?
2. What is the benefit of Live VM migration?
3. What is memory migration?
4. What is file migration?
5. What is network migration?
- Virtual machines (VMs) increase resource management complexity, posing challenges in improving
resource utilization and ensuring QoS in data centers.
- Key side effects of server virtualization include:
- Consolidation: Enhances hardware utilization by merging underused servers, facilitating backup and
disaster recovery.
- Agility: Enables quick provisioning and deployment through easy cloning and reuse of guest OS
images and applications.
- Cost Reduction: Lowers the total cost of ownership by deferring server purchases, reducing data
center footprint, and cutting maintenance and energy costs.
- Availability: Improves business continuity as the failure of a guest OS does not affect the host or
other guests, enabling easier VM migration across servers.
- To automate data-center operations, factors to consider include resource scheduling, architectural
support, power management, and performance analytics.
- Efficient, fine-grained scheduling is crucial in virtualized data centers, with potential for
implementation at VM, server, and data-center levels, though current techniques often focus on one or
two levels due to complexity.
- Dynamic CPU allocation is informed by VM utilization and application-level QoS metrics.
- Resource management techniques include adjusting both CPU and memory as well as managing
resource overhead in response to workload changes.
- A two-level resource management system features a local controller at the VM level and a global
controller at the server level for autonomic resource allocation.
- Cutting-edge technologies like multicore processing and virtualization can beneficially interact.
- However, the potential of chip multiprocessors (CMPs) is not yet fully exploited, particularly in their memory systems.
- Potential improvements include designing a virtual hierarchy on CMPs and implementing protocols to
minimize memory access time while supporting inter-VM sharing and reassignment.
- A VM-aware power budgeting scheme with integrated managers is proposed for enhanced power
management, taking into account heterogeneity challenges.
- The trade-off between power savings and data-center performance must be considered in power
budgeting policies.
- Storage management for virtual machines (VMs) is often cumbersome, complicating operations like
remapping volumes and checkpointing disks.
- Data centers housing thousands of VMs face issues with VM image oversaturation.
- Researchers focus on simplifying management, improving performance, and minimizing storage usage
for VM images.
- Parallax is a distributed storage system specifically tailored for virtualization environments.
- Content Addressable Storage (CAS) addresses the reduction of total VM image sizes, benefiting VM-
based systems in data centers.
- Traditional storage techniques overlook virtualization characteristics, prompting Parallax to develop
an architecture that integrates storage management into a network of storage VMs sharing the same
physical hosts as the VMs they serve.
Figure 3.26 provides an overview of the Parallax system architecture. It supports all popular system
virtualization techniques, such as paravirtualization and full virtualization. For each physical machine,
Parallax customizes a special storage appliance VM. The storage appliance VM acts as a block
virtualization layer between individual VMs and the physical storage device. It provides a virtual disk
for each VM on the same physical machine.
FIGURE 3.26: Parallax is a set of per-host storage appliances that share access to a common block device and present virtual disks to client VMs.
- VI managers and operating systems (OSes) are designed for virtualizing data centers with numerous
server clusters.
- Nimbus, Eucalyptus, and OpenNebula are open-source solutions, while vSphere 4 is a proprietary OS
for cloud resource management.
- VI managers create virtual machines (VMs) and aggregate them into elastic virtual clusters.
- Nimbus and Eucalyptus primarily support virtual networks; OpenNebula offers features for dynamic
resource provisioning and advance reservations.
- All mentioned VI managers utilize Xen and KVM for virtualization; vSphere 4 utilizes VMware's ESX
and ESXi hypervisors.
- vSphere 4 uniquely supports virtual storage alongside virtual networking and data protection.
- The system operates through an interface layer known as vCenter, focusing on data center resource
management for private cloud construction.
- VMware positions vSphere 4 as the first cloud OS prioritizing availability, security, and scalability in
cloud services.
- vSphere 4 comprises two main software suites: infrastructure services and application services. The infrastructure services include three component packages: vCompute (supported by the ESX, ESXi, and DRS virtualization libraries), vStorage (supported by VMFS and thin provisioning libraries), and vNetwork (offering distributed switching and networking functions).
Session 15 questions:
1. What is the need of server consolidation?
2. What is virtual storage management?
3. Name the different types of data in virtual storage.
4. What are the challenges involved in virtual storage?
5. List the virtualizations supported by Parallax system architecture.
- Virtualization-based intrusion detection enables isolation of guest virtual machines (VMs) on a shared
hardware platform.
- Even if some VMs are compromised, others remain unaffected, similar to NIDS functionality.
- A Virtual Machine Monitor (VMM) oversees access requests for hardware and software, preventing
fake actions, akin to HIDS advantages.
- VM-based IDS can be implemented as either an independent process in each VM or integrated within
the VMM, possessing equivalent hardware access privileges.
- It is recommended that the IDS operates on the VMM as a high-privileged VM.
Figure 3.29 illustrates the concept.
FIGURE 3.29: The architecture of livewire for intrusion detection using a dedicated VM.
- The VM-based Intrusion Detection System (IDS) features a policy engine and module to monitor
events across guest VMs using an operating system interface library and PTrace for security policy
tracing.
- Predicting and preventing all intrusions is challenging; thus, post-intrusion analysis is vital.
- Most systems currently rely on logs to analyze attack actions, but ensuring their integrity is difficult,
particularly if the operating system is compromised.
- The IDS logging service is based on the operating system kernel, which should remain secure even
when the OS is attacked.
- Honeypots and honeynets are also commonly used in intrusion detection, designed to lure attackers
and provide a decoy system view.
- They facilitate the analysis of attack actions and the development of secure IDS solutions.
Session 16 questions:
1. What is IDS?
2. Name two types of IDS.
3. What is the need of virtualization-based IDS?
4. Where does the IDS logging service run?
5. What is honeypot?
Question Bank
1. Explain the characteristics of virtualized environments.
2. List and explain the features of VMM.
3. List and explain levels of virtualization with a neat diagram.
4. Explain application level virtualization.
5. Explain hardware level virtualization.
6. Explain OS-level virtualization.
7. Describe the role of a hypervisor or Virtual Machine Monitor (VMM) in the virtualization process.
8. Explain para-virtualization.
9. Write a note on CPU virtualization
10. Write a note on Memory virtualization
11. Describe dynamic binary translation. Why is it preferred over code interpretation for ISA-level
virtualization?
12. What are the drawbacks of hardware-level virtualization? Explain how OS-level virtualization seeks to address these problems.
13. List the various classes of VM architectures and explain each.
14. With a neat figure, explain the Xen architecture.
15. Compare the relative merits of implementing virtualization at various levels.
16. Explain kernel-based virtual machines and para-virtualization.