
UNIT 4 : Virtualization

Virtualization abstracts the underlying hardware so that common resources can be shared among multiple workloads. A variety of workloads can be co-located on shared virtualized hardware while maintaining complete isolation, migrating freely across infrastructures and scaling when required.

Cloud virtualization turns server operating systems and storage devices into virtual platforms. This enables several users to share a single physical resource instance or application by presenting each of them with their own virtual machine. Cloud virtualization also transforms how work is administered, improving the scalability, economics and efficiency of traditional computing.

Virtualization is a key enabling technique of cloud computing. One of its key features is that it allows multiple customers and companies to share applications and infrastructure.

A virtualized environment can deliver cloud-based services and applications, and it may be either public or private. Through virtualization, the customer can maximize resource usage and reduce the number of physical systems needed.

Recently, due to the confluence of several factors, interest in virtualization technology has grown:
1. Increased performance and computing capacity:

A single corporate data center is, in most instances, unable to compete in terms of security, performance, speed and cost-effectiveness with the network of data centers operated by a service provider. Since the majority of services are available on demand, users can obtain large amounts of computing resources in a short period of time, with tremendous ease and flexibility and without any costly up-front investment.
In turn, cloud services offer the ability to free up memory and computing power on individual computers through remote hosting of platforms, software and databases. The obvious result is a significant performance improvement.

2. Underutilized hardware and software resources.

Underutilization of hardware and software results from increased computing power combined with constrained or infrequent resource usage. Computer systems have become so powerful today that in many instances an application, or even the whole system, uses only a fraction of their capacity. Furthermore, looking across a company's IT infrastructure, numerous computer systems are only partly utilized even though they could provide services 24/7/365 without interruption. For instance, desktop PCs used by administrative personnel mainly for office automation tasks are busy only during working hours. The efficiency of the IT infrastructure can be enhanced by using these resources for other purposes. Providing such a service transparently requires a completely separate environment, which can be achieved via virtualization.

3. Lack of space.

Data centers are continuously expanding with the need for extra infrastructure, be it storage or computing power. Organizations like Google and Microsoft expand their infrastructure by constructing data centers as large as football grounds, containing thousands of nodes. While this is feasible for the big IT players, most companies are unable to build an additional data center to accommodate extra resource capacity. This situation, together with underutilized hardware resources, led to the diffusion of server consolidation, a technique for which virtualization is fundamental.
4. Greening initiatives
Virtualization is a core technology for deploying cloud-based infrastructure, allowing multiple operating system images to run simultaneously on a single physical server. As a consolidation enabler, server virtualization reduces the overall number of physical servers, with inherent green benefits.
From the perspective of resource efficiency, fewer machines are required, which reduces the space needed in a data center and the eventual footprint of e-waste. From an energy-efficiency point of view, a data center with less physical equipment consumes less electricity. Cooling is a major requirement in data centers and a major contributor to power consumption. Through free cooling methods, such as the use of outside air and water instead of air conditioning, data centers can reduce their cooling costs. Data center managers can save further on electricity costs with solar panels, temperature controls and wind energy.

5. Rise of administrative costs.

Power consumption and cooling costs are increasing, as are the costs of IT devices. In addition, the demand for extra capacity translates into more servers in a data center, which increases administrative costs significantly. Computers, and especially servers, do not work entirely on their own; they require a system administrator's care and attention. Hardware monitoring, replacement of faulty equipment, server installation and updates, monitoring of server resources, and backups are common system administration tasks. These operations are time-consuming, and the more servers there are to handle, the higher the administrative expenses. Virtualization can contribute to reducing the number of servers required for a given workload, thereby reducing administrative staff costs.

Major Components of Virtualization Environment


Virtualization is the way to create a 'virtual version' of a physical machine. Virtualization is achieved by using a virtual machine monitor, which enables several virtual machines to operate on a single physical device. Virtual machines can easily be moved from one piece of hardware to another without any observable change. Virtualization is widely used in cloud computing: it allows multiple operating systems and applications to run on the same hardware components, each isolated from the others.
Figure: 3.1 Reference Model of Virtualization.

In a virtualized environment there are three main components:

1. GUEST:
The guest denotes the system component that interacts with the virtualization layer rather than with the host machine. Guests are usually presented as one or more virtual disks and a VM definition file. Virtual machines are centrally managed by a host application that sees and manages each virtual machine as a separate application.

2. Hosts:
The host is the original environment in which the guest runs and is managed. Each guest uses a share of the common resources that the host provides. The host OS manages the physical resources and provides device support.

3. Virtualization Layer
The virtualization layer recreates the same or a different environment in which the guest operates. It is an extra layer of abstraction between the hardware and the applications, spanning compute, network and storage. Without it, a machine typically runs a single operating system, which is very inflexible compared with virtualization.

3.2.1 Characteristics of Virtualization

1. Increased Security –

The ability to govern the execution of a guest program in a fully transparent manner creates new opportunities for providing a safe, controlled execution environment. Guest operations are usually performed against the virtual machine, which translates and applies them to the host.

2. Managed Execution –

The most important features in this respect are sharing, aggregation, emulation and isolation.

3. Sharing –
Virtualization makes it possible to create separate computing environments within the same host. This sharing reduces the number of active servers and the energy consumption.

4. Aggregation –
A physical resource can not only be shared between several guests; virtualization also enables aggregation. A group of individual hosts can be linked together and represented as a single virtual host. This functionality is implemented by cluster management software, which harnesses and represents the physical resources of a uniform group of machines.

5. Emulation –
Guest programs are executed within an environment controlled by the virtualization layer, which is essentially a program. An entirely different environment from the host's can also be emulated, so that guest programs requiring features not present in the physical host can still be run.

6. Isolation –
Virtualization provides guests, whether operating systems, applications or other entities, with an entirely separate environment in which they execute. The guest program operates through an abstraction layer that mediates access to the underlying resources. The virtual machine can filter the guest's activity and prevent dangerous operations against the host.

7. Portability –
The concept of portability applies in different ways depending on the specific type of virtualization.
In the case of hardware virtualization, the guest is packaged into a virtual image that in most cases can be moved and executed safely on different virtual machine hosts.

Levels of Virtualization Implementation


Virtualization is a computer architecture technology by which multiple virtual machines (VMs) are multiplexed on the same hardware machine.
After virtualization, different user applications, each managed by its own operating system (guest OS), can run on the same hardware independently of the host OS.
This is done by adding an additional layer of software, called the virtualization layer. This virtualization layer is known as the hypervisor or virtual machine monitor (VMM).
The function of the virtualization software layer is to virtualize the physical hardware of the host machine into virtual resources to be used by the VMs.
Common virtualization levels include the instruction set architecture (ISA) level, hardware level, operating system level, library support level, and application level.

1. Instruction Set Architecture Level


 At the ISA level, virtualization is performed by emulating a given ISA by the ISA of the host
machine. For example, MIPS binary code can run on an x86-based host machine with the help
of ISA emulation. With this approach, it is possible to run a large amount of legacy binary code
written for various processors on any given new hardware host machine.
 Instruction set emulation leads to virtual ISAs created on any hardware machine. The basic
emulation method is through code interpretation. An interpreter program interprets the source
instructions to target instructions one by one. One source instruction may require tens or
hundreds of native target instructions to perform its function. Obviously, this process is
relatively slow. For better performance, dynamic binary translation is desired.
 This approach translates basic blocks of dynamic source instructions to target instructions. The
basic blocks can also be extended to program traces or super blocks to increase translation
efficiency.

 Instruction set emulation requires binary translation and optimization. A virtual instruction set
architecture (V-ISA) thus requires adding a processor-specific software translation layer to the
compiler.
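To make the interpretation approach concrete, here is a toy sketch in Python of a one-instruction-at-a-time interpreter for a hypothetical mini-ISA (the opcodes and register names are invented for illustration). Each source instruction costs many host operations, which is why translating whole basic blocks performs better:

```python
# Minimal sketch of interpretation-based ISA emulation.
# The "source ISA" here is a hypothetical three-instruction machine;
# each source instruction is decoded and carried out one by one by
# host (Python) code, which is why pure interpretation is slow.

def interpret(program, regs):
    """Execute a list of (opcode, operands) tuples on a register file."""
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "li":        # load immediate: li rd, imm
            rd, imm = args
            regs[rd] = imm
        elif op == "add":     # add rd, rs1, rs2
            rd, rs1, rs2 = args
            regs[rd] = regs[rs1] + regs[rs2]
        elif op == "beq":     # branch if equal: beq rs1, rs2, target
            rs1, rs2, target = args
            if regs[rs1] == regs[rs2]:
                pc = target
                continue
        pc += 1
    return regs

regs = interpret([("li", "r1", 2), ("li", "r2", 3), ("add", "r3", "r1", "r2")],
                 {"r1": 0, "r2": 0, "r3": 0})
print(regs["r3"])  # 5
```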

2. Hardware Abstraction Level


Hardware-level virtualization is performed right on top of the bare hardware.
This approach generates a virtual hardware environment for a VM.
The process manages the underlying hardware through virtualization. The idea is to virtualize a
computer’s resources, such as its processors, memory, and I/O devices.
The intention is to improve the hardware utilization rate for multiple users running concurrently. The
idea was implemented in the IBM VM/370 in the early 1970s.
More recently, the Xen hypervisor has been applied to virtualize x86-based machines to run
Linux or other guest OS applications.

3. Operating System Level


This refers to an abstraction layer between the traditional OS and user applications.
OS-level virtualization creates isolated containers on a single physical server, with the OS
instances utilizing the hardware and software of the data center.
The containers behave like real servers.
OS-level virtualization is commonly used in creating virtual hosting environments to allocate
hardware resources among a large number of mutually distrusting users.
It is also used, to a lesser extent, in consolidating server hardware by moving services on
separate hosts into containers or VMs on one server.
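As a minimal sketch of OS-level isolation on Linux, the util-linux `unshare` tool (assumed installed, and run with root privileges) can place a process in its own PID namespace, so it behaves like a tiny container sharing the host kernel:

```python
# Minimal sketch of OS-level virtualization on Linux, assuming the
# util-linux "unshare" tool is available and the script runs as root.
# The child gets its own PID namespace and a private /proc, so it
# behaves like a small "container" sharing the host kernel.
import subprocess

subprocess.run([
    "unshare",
    "--fork",         # fork before running the command
    "--pid",          # new PID namespace: child sees itself as PID 1
    "--mount-proc",   # mount a fresh /proc matching the new namespace
    "ps", "ax",       # inside the namespace only a couple of PIDs show up
])
```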

4. Library Support Level


Most applications use APIs exported by user-level libraries rather than using lengthy system
calls to the OS.
Since most systems provide well-documented APIs, such an interface becomes another
candidate for virtualization.
Virtualization with library interfaces is possible by controlling the communication link
between applications and the rest of a system through API hooks.
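A toy sketch of such an API hook in Python (the virtual clock is an invented example, not a real virtualization product): the call is intercepted and redirected without the application being aware.

```python
# Minimal sketch of library-level virtualization through API hooks:
# calls to a library function are intercepted and redirected without
# the application being aware. Here time.time() is wrapped so the
# "guest" sees a virtual clock that starts at zero.
import time

_real_time = time.time
_epoch = _real_time()

def _virtual_time():
    # Hook: report seconds since the "VM" started, not the host clock.
    return _real_time() - _epoch

time.time = _virtual_time  # install the hook on the communication link

print(time.time())  # a small number near 0.0, not the host wall clock
```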

5. User-Application Level
Virtualization at the application level virtualizes an application as a VM.
On a traditional OS, an application often runs as a process. Therefore, application-level
virtualization is also known as process-level virtualization.
The most popular approach is to deploy high-level language (HLL) VMs. In this scenario, the
virtualization layer sits as an application program on top of the operating system.
The layer exports an abstraction of a VM that can run programs written and compiled to a
particular abstract machine definition.
Any program written in the HLL and compiled for this VM will be able to run on it. The
Microsoft .NET CLR and Java Virtual Machine (JVM) are two good examples of this class of
VM.

VMM Design Requirements and Providers


Hardware-level virtualization inserts a layer between the real hardware and traditional operating systems. This layer is commonly called the virtual machine monitor (VMM).
There are three requirements for a VMM:
First, a VMM should provide an environment for programs which is essentially identical to the original machine.
Second, programs running in this environment should show, at worst, only minor decreases in speed.
Third, a VMM should be in complete control of the system resources. Complete control includes the following aspects:
(1) The VMM is responsible for allocating hardware resources for programs;
(2) it is not possible for a program to access any resource not explicitly allocated to it; and
(3) it is possible under certain circumstances for a VMM to regain control of resources already allocated.

Hypervisors
A hypervisor is a key piece of software that enables virtualization. It abstracts the guest machines and the operating systems they run from the actual hardware.
Hypervisors create a virtualization layer that separates the CPU/processor, RAM and other physical resources from the virtual devices created on top of them.
The machine on which we install the hypervisor is called the host machine, in contrast with the virtual guest machines that run on top of it. Hypervisors emulate the resources available for guest machines to use. Whatever operating system boots on a virtual machine believes that real physical hardware is at its disposal; from the viewpoint of the VM, there is no difference between the physical and the virtual environment. Guest machines do not know that they were created by a hypervisor or that they share the available computing power. The VMs run simultaneously on the hardware that powers them, and they are therefore fully dependent on its stable operation. There are two types of hypervisor:
 Type 1 Hypervisor (also called bare metal or native)
 Type 2 Hypervisor (also known as hosted hypervisors)

3.6.1 Type 1 Hypervisor

A bare-metal (Type 1) hypervisor is a software layer that is installed directly on top of a physical server and its underlying hardware. Examples of Type 1 hypervisors include VMware ESXi, XenServer, Microsoft Hyper-V and KVM.

There is no intermediate software or operating system, hence the name bare-metal hypervisor. Because a Type 1 hypervisor does not run inside Windows or any other operating system, it provides excellent performance and stability.

Type 1 hypervisors are a very basic OS themselves, on top of which virtual machines are operated. The physical machine the hypervisor runs on serves virtualization purposes only; you cannot use it for anything else. Type 1 hypervisors are mostly found in enterprise environments.

FIGURE 3.7 Type 1 Hypervisor


3.6.2 Type 2 Hypervisor

This type of hypervisor runs within a physical host operating system. Examples of Type 2 hypervisors include VMware Player and Parallels Desktop.

That is why we call Type 2 hypervisors hosted hypervisors. In contrast to Type 1 hypervisors, which run directly on the hardware, hosted hypervisors have one underlying layer of software. Here the stack consists of:

 A physical machine.
 An operating system installed on the hardware (Windows, Linux, macOS).
 The Type 2 hypervisor software within that operating system.
 The actual instances of virtual guest machines.

FIGURE 3.8 Type 2 Hypervisor
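Whichever type is used, management tools can enumerate guests through a common API. Below is a minimal sketch using the libvirt Python bindings (assuming the libvirt-python package and a local KVM/QEMU hypervisor reachable at the qemu:///system URI; other drivers use other URIs):

```python
# Minimal sketch: enumerate guests on a local hypervisor via libvirt.
# Assumes the libvirt-python package and a running libvirtd with a
# KVM/QEMU driver (connection URI "qemu:///system").
import libvirt

conn = libvirt.open("qemu:///system")   # connect to the hypervisor
for dom in conn.listAllDomains():       # every defined guest domain
    state, _ = dom.state()
    running = state == libvirt.VIR_DOMAIN_RUNNING
    print(f"{dom.name():20s} running={running}")
conn.close()
```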


Virtualization Structure / Tools & Mechanism :-
1. Xen Hypervisor :-
 Xen is an open source hypervisor program developed by Cambridge University.
 Xen is a microkernel hypervisor

 The core components of a Xen system are the hypervisor, kernel, and applications

 The guest OS, which has control ability, is called Domain 0, and the others are called
guest domains.
 Domain 0 is designed to access hardware directly and manage devices.
 Other guest domains are not permitted to access the hardware directly; they can do so
only after Domain 0 gives permission.
 The overall control ability lies only in Domain 0.
 There is also a security issue due to the dependency on Domain 0: if Domain 0 is hacked
or lost for any reason, then all guest domains are lost as well and security is compromised.
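As a sketch of how Domain 0 shows up alongside guest domains, the libvirt Python bindings can connect to a Xen host (assuming libvirt's Xen driver is available); Domain-0 is reported as just another domain, with domain ID 0:

```python
# Minimal sketch: list Xen domains, assuming libvirt's Xen driver.
# Domain-0 (the privileged control domain) always has domain ID 0;
# the remaining entries are unprivileged guest domains.
import libvirt

conn = libvirt.open("xen:///system")
for dom in conn.listAllDomains():
    role = "Domain-0 (control)" if dom.ID() == 0 else "guest domain"
    print(f"id={dom.ID():>3}  {dom.name():20s} {role}")
conn.close()
```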

Full virtualization / VMWare :-


VMware's technology is based on the key concept of full virtualization. VMware implements full
virtualization either in the desktop environment, with the help of a Type 2 hypervisor, or in the server
environment, through a Type 1 hypervisor. In both cases, full virtualization is possible through direct
execution of non-sensitive instructions and binary translation of sensitive instructions or hardware
traps, thus enabling the virtualization of architectures like x86.
With full virtualization, noncritical instructions run directly on the hardware while critical instructions
are discovered and replaced with traps into the VMM, to be emulated by software.
VMware puts the VMM at Ring 0 and the guest OS at Ring 1.
The VMM scans the instruction stream and identifies the privileged, control- and behavior-
sensitive instructions.

When these instructions are identified, they are trapped into the VMM, which emulates the
behavior of these instructions. The method used in this emulation is called binary translation.

Therefore, full virtualization combines binary translation and direct execution.
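A toy sketch of this split (a hypothetical two-instruction "ISA", not real x86 binary translation): non-sensitive instructions execute directly, while sensitive ones trap into an emulation routine in the VMM.

```python
# Toy sketch of full virtualization's trap-and-emulate split.
# "add" is a non-sensitive instruction and runs directly; "out"
# (device I/O) is sensitive, so it traps into the VMM, which
# emulates its behavior in software.
SENSITIVE = {"out"}

def vmm_emulate(instr, args, vm):
    # The VMM emulates the effect of the privileged instruction.
    if instr == "out":
        port, value = args
        vm["virtual_devices"].setdefault(port, []).append(value)

def run(program, vm):
    for instr, *args in program:
        if instr in SENSITIVE:
            vmm_emulate(instr, args, vm)   # trap into the VMM
        elif instr == "add":               # direct execution path
            rd, rs1, rs2 = args
            vm["regs"][rd] = vm["regs"][rs1] + vm["regs"][rs2]

vm = {"regs": {"r1": 1, "r2": 2, "r3": 0}, "virtual_devices": {}}
run([("add", "r3", "r1", "r2"), ("out", 0x3F8, "A")], vm)
print(vm["regs"]["r3"], vm["virtual_devices"])  # 3 {1016: ['A']}
```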


KVM (Kernel-Based VM) :-
KVM is a Linux virtualization system that has been part of the Linux kernel since
version 2.6.20. Memory management and scheduling activities are carried out by the
existing Linux kernel; KVM does the rest, which makes it simpler than a
hypervisor that controls the entire machine. KVM is a hardware-assisted
virtualization tool, which improves performance and supports unmodified guest
OSes such as Windows, Linux, Solaris, and other UNIX variants.
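A small sketch of how user space can check for KVM before launching guests: the kernel module exposes the /dev/kvm device node, which tools such as QEMU open to create VMs (the path and check are Linux-specific assumptions).

```python
# Minimal sketch: detect whether KVM is usable on this Linux host.
# The KVM kernel module exposes /dev/kvm; userspace (e.g. QEMU)
# opens it to create and run hardware-accelerated guests.
import os

def kvm_available() -> bool:
    return os.path.exists("/dev/kvm") and os.access("/dev/kvm", os.R_OK | os.W_OK)

if kvm_available():
    print("KVM ready: guests can use hardware-assisted virtualization")
else:
    print("No /dev/kvm: load the kvm module or check CPU virtualization support")
```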

VIRTUALIZATION OF CPU, MEMORY, AND I/O DEVICES

CPU virtualization:-

CPU virtualization emphasizes performance and runs directly on the processor
whenever possible. The underlying physical resources are used whenever possible
and the virtualization layer runs instructions only as needed to make virtual machines
operate as if they were running directly on a physical machine.

CPU virtualization is not the same thing as emulation. ESXi does not use emulation to
run virtual CPUs. With emulation, all operations are run in software by an emulator. A
software emulator allows programs to run on a computer system other than the one
for which they were originally written. The emulator does this by emulating, or
reproducing, the original computer’s behavior by accepting the same data or inputs
and achieving the same results. Emulation provides portability and runs software
designed for one platform across several platforms.

When CPU resources are overcommitted, the ESXi host time-slices the physical
processors across all virtual machines so each virtual machine runs as if it has its
specified number of virtual processors. When an ESXi host runs multiple virtual
machines, it allocates to each virtual machine a share of the physical resources. With
the default resource allocation settings, all virtual machines associated with the same
host receive an equal share of CPU per virtual CPU. This means that a single-processor
virtual machine is assigned only half of the resources of a dual-processor
virtual machine.

 Software-Based CPU Virtualization

With software-based CPU virtualization, the guest application code runs directly on
the processor, while the guest privileged code is translated and the translated code
runs on the processor.

 The translated code is slightly larger and usually runs more slowly than the native
version. As a result, guest applications with a small privileged code component run
at speeds very close to native. Applications with a significant privileged code
component, such as system calls, traps, or page table updates, can run more slowly
in the virtualized environment.

 Hardware-Assisted CPU Virtualization

Certain processors provide hardware assistance for CPU virtualization.

 When using this assistance, the guest can use a separate mode of execution called
guest mode. The guest code, whether application code or privileged code, runs in the
guest mode. On certain events, the processor exits out of guest mode and enters root
mode. The hypervisor executes in the root mode, determines the reason for the exit,
takes any required actions, and restarts the guest in guest mode.

 When you use hardware assistance for virtualization, there is no need to translate the
code. As a result, system calls or trap-intensive workloads run very close to native
speed. Some workloads, such as those involving updates to page tables, lead to a large
number of exits from guest mode to root mode. Depending on the number of such
exits and total time spent in exits, hardware-assisted CPU virtualization can speed up
execution significantly.
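On Linux, the presence of these hardware extensions can be read from /proc/cpuinfo: the vmx flag indicates Intel VT-x and svm indicates AMD-V. A minimal sketch:

```python
# Minimal sketch: check CPU flags for hardware virtualization support
# on Linux. "vmx" = Intel VT-x, "svm" = AMD-V; these enable the
# guest-mode/root-mode split described above.
def hw_virt_support(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                if "vmx" in flags:
                    return "Intel VT-x"
                if "svm" in flags:
                    return "AMD-V"
    return None

print(hw_virt_support() or "no hardware-assisted CPU virtualization")
```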

 Virtualization and Processor-Specific Behavior


Although VMware software virtualizes the CPU, the virtual machine detects the
specific model of the processor on which it is running.

 Performance Implications of CPU Virtualization


CPU virtualization adds varying amounts of overhead depending on the workload and
the type of virtualization used.
Memory Virtualization

The VMkernel manages all physical RAM on the host. The VMkernel dedicates part
of this managed physical RAM for its own use. The rest is available for use by virtual
machines.

The virtual and physical memory space is divided into blocks called pages. When
physical memory is full, the data for virtual pages that are not present in physical
memory are stored on disk. Depending on processor architecture, pages are
typically 4 KB or 2 MB.
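A toy sketch of the extra level of mapping this introduces (hypothetical page numbers, 4 KB pages): the guest OS maps virtual pages to guest-physical pages, and the hypervisor maps those to machine pages.

```python
# Toy sketch of two-level memory mapping under virtualization.
# Guest OS: guest-virtual page -> guest-physical page.
# Hypervisor: guest-physical page -> host machine page.
PAGE = 4096  # 4 KB pages

guest_page_table = {0: 7, 1: 3}      # guest-virtual -> guest-physical
host_page_table = {7: 42, 3: 19}     # guest-physical -> machine page

def translate(guest_vaddr):
    vpn, offset = divmod(guest_vaddr, PAGE)
    gpn = guest_page_table[vpn]      # first level: guest OS mapping
    mpn = host_page_table[gpn]       # second level: hypervisor mapping
    return mpn * PAGE + offset

print(hex(translate(0x1234)))        # lands in machine page 19
```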

 Virtual Machine Memory


Each virtual machine consumes memory based on its configured size, plus additional
overhead memory for virtualization.

 Memory Overcommitment
For each running virtual machine, the system reserves physical RAM for the virtual
machine’s reservation (if any) and for its virtualization overhead.

 Memory Sharing
Memory sharing is a proprietary ESXi technique that can help achieve greater
memory density on a host.
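A toy sketch of the idea behind such page sharing (simplified; the real ESXi mechanism hashes candidate pages and compares their contents before remapping): identical pages across VMs are detected by content hash and stored once.

```python
# Toy sketch of content-based page sharing across VMs: pages with
# identical content are detected by hashing and stored only once,
# raising memory density on the host.
import hashlib

store = {}            # hash -> single shared copy of the page
mappings = {}         # (vm, page_number) -> hash

def back_page(vm, page_number, content: bytes):
    digest = hashlib.sha256(content).hexdigest()
    store.setdefault(digest, content)   # store the page contents once
    mappings[(vm, page_number)] = digest

back_page("vm1", 0, b"\x00" * 4096)     # zero page from VM 1
back_page("vm2", 5, b"\x00" * 4096)     # identical zero page from VM 2
print(len(store))                        # 1 physical copy backs both pages
```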

 Memory Virtualization
Because of the extra level of memory mapping introduced by virtualization, ESXi can
effectively manage memory across all virtual machines.

 Support for Large Page Sizes


ESXi provides limited support for large page sizes.

I/O Virtualization

I/O virtualization involves managing the routing of I/O requests between virtual
devices and the shared physical hardware. At the time of this writing, there are
three ways to implement I/O virtualization: full device emulation, para-virtualization,
and direct I/O. Full device emulation is the first approach for I/O
virtualization. Generally, this approach emulates well-known, real-world devices.
All the functions of a device or bus infrastructure, such as device enumeration,
identification, interrupts, and DMA, are replicated in software. This software is
located in the VMM and acts as a virtual device. The I/O access requests of the
guest OS are trapped in the VMM which interacts with the I/O devices. The full
device emulation approach is shown in Figure 3.14.

A single hardware device can be shared by multiple VMs that run concurrently.
However, software emulation runs much slower than the hardware it emulates
[10,15]. The para-virtualization method of I/O virtualization is typically used in
Xen. It is also known as the split driver model consisting of a frontend driver and
a backend driver. The frontend driver is running in Domain U and the backend
driver is running in Domain 0. They interact with each other via a block of shared
memory. The frontend driver manages the I/O requests of the guest OSes and the
backend driver is responsible for managing the real I/O devices and multiplexing
the I/O data of different VMs. Although para-I/O-virtualization achieves better
device performance than full device emulation, it comes with a higher CPU
overhead.
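A toy sketch of the split-driver idea (Python threads and a queue standing in for Xen's shared-memory ring between Domain U and Domain 0):

```python
# Toy sketch of Xen's split driver model: a frontend driver in the
# guest (Domain U) places I/O requests into shared memory, and a
# backend driver in Domain 0 services them against the real device.
# A queue stands in for the shared-memory ring.
import queue, threading

ring = queue.Queue()                 # stands in for shared memory

def frontend():
    # Guest OS issues I/O requests through the frontend driver.
    for block in (17, 42):
        ring.put(("read", block))
    ring.put(None)                   # end-of-requests marker

def backend():
    # Domain 0 multiplexes requests from guests onto the real device.
    while (req := ring.get()) is not None:
        op, block = req
        print(f"backend: {op} block {block} on physical disk")

t = threading.Thread(target=backend)
t.start()
frontend()
t.join()
```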

Direct I/O virtualization lets the VM access devices directly. It can achieve
close-to-native performance without high CPU costs. However, current direct I/O
virtualization implementations focus on networking for mainframes. There are a
lot of challenges for commodity hardware devices. For example, when a physical
device is reclaimed (required by workload migration) for later reassignment, it
may have been set to an arbitrary state (e.g., DMA to some arbitrary memory
locations) that can function incorrectly or even crash the whole system. Since
software-based I/O virtualization requires a very high overhead of device
emulation, hardware-assisted I/O virtualization is critical. Intel VT-d supports the
remapping of I/O DMA transfers and device-generated interrupts. The
architecture of VT-d provides the flexibility to support multiple usage models that
may run unmodified, special-purpose, or “virtualization-aware” guest OSes.

VIRTUAL CLUSTERS AND RESOURCE MANAGEMENT

A physical cluster is a collection of servers (physical machines) interconnected
by a physical network such as a LAN. In Chapter 2, we studied various clustering
techniques on physical machines. Here, we introduce virtual clusters and study
their properties as well as explore their potential applications. In this section, we
will study three critical design issues of virtual clusters: live migration of
VMs, memory and file migrations, and dynamic deployment of virtual
clusters.

When a traditional VM is initialized, the administrator needs to manually write
configuration information or specify the configuration sources. When more VMs
join a network, an inefficient configuration always causes problems with
overloading or underutilization. Amazon’s Elastic Compute Cloud (EC2) is a
good example of a web service that provides elastic computing power in a cloud.
EC2 permits customers to create VMs and to manage user accounts over the time
of their use. Most virtualization platforms, including XenServer and VMware
ESX Server, support a bridging mode which allows all domains to appear on the
network as individual hosts. By using this mode, VMs can communicate with one
another freely through the virtual network interface card and configure the
network automatically.

1. Physical versus Virtual Clusters

Virtual clusters are built with VMs installed at distributed servers from one or
more physical clusters. The VMs in a virtual cluster are interconnected logically
by a virtual network across several physical networks. Figure 3.18 illustrates the
concepts of virtual clusters and physical clusters. Each virtual cluster is formed
with physical machines or a VM hosted by multiple physical clusters. The virtual
cluster boundaries are shown as distinct boundaries.
The provisioning of VMs to a virtual cluster is done dynamically to have the
following interesting properties:

• The virtual cluster nodes can be either physical or virtual machines. Multiple
VMs running with different OSes can be deployed on the same physical node.

• A VM runs with a guest OS, which is often different from the host OS, that
manages the resources in the physical machine, where the VM is implemented.

• The purpose of using VMs is to consolidate multiple functionalities on the same
server. This will greatly enhance server utilization and application flexibility.

• VMs can be colonized (replicated) in multiple servers for the purpose of
promoting distributed parallelism, fault tolerance, and disaster recovery.

• The size (number of nodes) of a virtual cluster can grow or shrink dynamically,
similar to the way an overlay network varies in size in a peer-to-peer (P2P)
network.

• The failure of any physical nodes may disable some VMs installed on the
failing nodes. But the failure of VMs will not pull down the host system.
Since system virtualization has been widely used, it is necessary to effectively
manage VMs running on a mass of physical computing nodes (also called virtual
clusters) and consequently build a high-performance virtualized computing
environment. This involves virtual cluster deployment, monitoring and
management over large-scale clusters, as well as resource scheduling, load
balancing, server consolidation, fault tolerance, and other techniques. The
different node colors in Figure 3.18 refer to different virtual clusters. In a virtual
cluster system, it is quite important to store the large number of VM images
efficiently.
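As a sketch of one of the design issues named above, live migration of a VM between two hosts can be driven through the libvirt Python bindings (assuming shared storage, libvirtd reachable on both hosts, and placeholder host/domain names):

```python
# Minimal sketch: live-migrate a running guest between two KVM hosts
# via libvirt. Assumes shared storage, reachable libvirtd on both
# hosts, and a running domain named "web01" (placeholder names).
import libvirt

src = libvirt.open("qemu:///system")                   # source host
dst = libvirt.open("qemu+ssh://host2/system")          # destination host

dom = src.lookupByName("web01")
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)  # live migration

print("domains now on destination:",
      [d.name() for d in dst.listAllDomains()])
src.close(); dst.close()
```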
Desktop Virtualization

Desktop Virtualization should be used if application virtualization can’t deliver
the required applications and desktops. Application virtualization using the
hosted model (XenApp or RDS) is preferred since you can get more users per
server. Users that want specific operating systems other than Windows Server
will need to have a virtual desktop. Some of the common benefits of desktop and
application virtualization are user mobility, easy management of software
installation, updates and patches.

Server Virtualization

Server virtualization separates the operating system from the computer hardware
and allows the VM to be treated as a file. This provides for easy management
and facilitates redundancy, high availability and disaster recovery.

Server virtualization gave birth to a new term referred to as elasticity. This gives
us the ability to adjust our hardware resources to the current workload on the fly.
When workload requirements are low, servers can be decommissioned. When
workloads are high, servers are turned on. This along with server consolidation
can save money on electricity and cooling.

Elasticity also allows companies to expand their data center resources on demand
without buying any additional hardware. Services like Amazon Web Services
and Microsoft Azure can provide resources as needed in a pay-as-you-go model,
allowing you never to have a shortage of resources and never to pay for
equipment that sits underutilized.

Network Virtualization

Network virtualization was developed by using the same concepts as server
virtualization. Software Defined Networking uses virtual switches, routers,
firewalls and load balancers. This allows IT staff to provision networks without
disruption to the physical network while running traffic over the physical
network. This allows VMs to retain their security properties when moved from
one host server to another that may be located on a different network. Server
managers have the ability to configure virtual switches, routers, firewalls, load
balancers, etc. without having to bother the network administrator.

All of these virtualization solutions offer many benefits. However, we often don’t
have the budgets to accompany all of them, so how do we choose which to
implement first? One thing to consider when deciding which virtualization
solution to deploy is to adopt the solution that provides the maximum benefits.
Another thing to consider is the ease of implementation. This may be easier said
than done, since the two considerations can conflict. However, oftentimes solutions that
are easy to implement can provide tremendous benefits immediately, which will
give a quick ROI.

This will vary from company to company on a case-by-case basis. While
network virtualization was developed last, large organizations may derive many
more benefits than a smaller organization, so that would shuffle the order of priority.
That being said, larger organizations probably have already deployed all other
virtualization strategies in some form. For new companies that are large
and planning their initial network rollout, server and network virtualization will
probably be the first priority since these will be the building blocks of the entire
network along with the physical network.

Virtualization of Data Centres :-

Data center virtualization is the process of designing, developing and
deploying a data center on virtualization and cloud computing technologies.

It primarily enables virtualizing physical servers in a data center facility
along with storage, networking and other infrastructure devices and
equipment. Data center virtualization usually produces a virtualized,
cloud-based and colocated virtual/cloud data center.

Data center virtualization encompasses a broad range of tools, technologies and
processes that enable a data center to operate and provide services on top of a
virtualization layer/technology. Using data center virtualization, an existing or a
standard data center facility can be used to provide/host multiple virtualized
data centers on the same physical infrastructure, which can simultaneously be
used by separate applications and/or organizations. This not only helps in
optimal IT infrastructure/resource utilization, but also in reducing data center
capital and operational costs.
