CC 1


3.1 Origins and Influences


A Brief History
The idea of computing in a "cloud" traces back to the origins of utility computing, a
concept that computer scientist John McCarthy publicly proposed in 1961:
"f computers of the kind I have adocated become the computers of the future, then com-
puting may someday be organized as a public utility just as the telephone system is a public
utility.. The computer utility could become the basis of a new and important industry"
In 1969, Leonard Kleinrock, a chief scientist of the Advanced Research Projects Agency
Network or ARPANET project that seeded the Internet, stated:
"As of now, computer netuorks are still in their infancy, but as they grow up and become
sophisticated, ue avill robabły se the spread of 'computer utilitis' ..
The general public has been leveraging forms of Internet-based computer utilities since
the mid-1990s through various incarnations of search engines (Yahoo!, Google), e-mail
services (Hotmail, Gmail), open publishing platforms (MySpace, Facebook, YouTube),
and other types of social media (Twitter, LinkedIn). Though consumer-centric, these
services popularized and validated core concepts that form the basis of modern-day
cloud computing.
In the late 1990s, Salesforce.com pioneered the notion of bringing remotely provisioned
services into the enterprise. In 2002, Amazon.com launched the Amazon Web Services
(AWS) platform, a suite of enterprise-oriented services that provide remotely provisioned
storage, computing resources, and business functionality.
Definitions
A Gartner report listing cloud computing at the top of its strategic technology areas
further reaffirmed its prominence as an industry trend by announcing its formal
definition as:
"...a style of computing in which scalable and elastic IT-enabled capabilities are delivered
as a service to external customers using Internet technologies."
This is a slight revision of Gartner's original definition from 2008, in which "massively
scalable" was used instead of "scalable and elastic." This acknowledges the importance
of scalability in relation to the ability to scale vertically and not just to enormous
proportions.
Forrester Research provided its own definition of cloud computing as:
"...a standardized IT capability (services, software, or infrastructure) delivered via Internet
technologies in a pay-per-use, self-service way."
The definition that received industry-wide acceptance was composed by the National
Institute of Standards and Technology (NIST). NIST published its original definition
back in 2009, followed by a revised version, after further review and industry input, that
was published in September of 2011.

Business Drivers
Before delving into the layers of technologies that underlie clouds, the motivations that
led to their creation by industry leaders must first be understood. Several of the primary
business drivers that fostered modern cloud-based technology are presented in this
section.

The origins and inspirations of many of the characteristics, models, and mechanisms
covered throughout subsequent chapters can be traced back to the upcoming business
drivers. It is important to note that these influences shaped clouds and the overall cloud
computing market from both ends. They have motivated organizations to adopt cloud
computing in support of their business automation requirements. They have correspondingly
motivated other organizations to become providers of cloud environments and cloud
technology vendors, in order to create and meet the demand to fulfill consumer needs.
Capacity Planning
Capacity planning is the process of determining and fulfilling future demands of an
organization's IT resources, products, and services. Within this context, capacity represents
the maximum amount of work that an IT resource is capable of delivering in
a given period of time. A discrepancy between the capacity of an IT resource and its
demand can result in a system becoming either inefficient (over-provisioning) or unable
to fulfill user needs (under-provisioning). Capacity planning is focused on minimizing
this discrepancy to achieve predictable efficiency and performance.
Different capacity planning strategies exist:
• Lead Strategy - adding capacity to an IT resource in anticipation of demand
• Lag Strategy - adding capacity when the IT resource reaches its full capacity
• Match Strategy - adding IT resource capacity in small increments, as demand
increases
Planning for capacity can be challenging because it requires estimating usage load
fluctuations. There is a constant need to balance peak usage requirements without
unnecessary over-expenditure on infrastructure. An example is outfitting IT infrastructure
to accommodate maximum usage loads, which can impose unreasonable financial
investments. In such cases, moderating investments can result in under-provisioning,
leading to transaction losses and other usage limitations from lowered usage thresholds.
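
To make the trade-offs between the three strategies concrete, the following sketch simulates each one against the same fluctuating demand curve. It is a minimal illustration only: the demand values, starting capacity, and increment sizes are all invented for the example.

demand = [40, 55, 70, 90, 120, 110, 95, 130]  # hypothetical units of work per period

def provision(strategy, demand, start=50, step=25):
    """Return the capacity held in each period under a given strategy."""
    capacity, history = start, []
    for i, load in enumerate(demand):
        if strategy == "lead" and i + 1 < len(demand):
            while capacity < demand[i + 1]:   # add capacity in anticipation of demand
                capacity += step
        elif strategy == "match":
            while capacity < load:            # add capacity in small increments
                capacity += step // 2
        history.append(capacity)
        if strategy == "lag" and load >= capacity:
            capacity += step                  # react only once fully utilized
    return history

for strategy in ("lead", "lag", "match"):
    history = provision(strategy, demand)
    over = sum(max(c - d, 0) for c, d in zip(history, demand))
    under = sum(max(d - c, 0) for d, c in zip(demand, history))
    print(f"{strategy:>5}: over-provisioned={over}, under-provisioned={under}")

Run against this particular demand curve, the lead strategy accumulates the most over-provisioning, while the lag strategy is the only one that under-provisions, illustrating the transaction losses described above.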
Cost Reduction
A direct alignment between IT costs and business performance can be difficult to maintain.
The growth of IT environments often corresponds to the assessment of their maximum
usage requirements, which can make the support of new and expanded business
automations an ever-increasing investment. Much of this required investment is funneled
into infrastructure expansion, because the usage potential of a given automation
solution will always be limited by the processing power of its underlying infrastructure.
Two costs need to be accounted for: the cost of acquiring new infrastructure, and the
cost of its ongoing ownership. Operational overhead represents a considerable share of
IT budgets, often exceeding up-front investment costs.
Common forms of infrastructure-related operating overhead include the following:
• technical personnel required to keep the environment operational
• upgrades and patches that introduce additional testing and deployment cycles
• utility bills and capital expense investments for power and cooling
• security and access control measures that need to be maintained and enforced to
protect infrastructure resources
• administrative and accounts staff that may be required to keep track of licenses
and support arrangements


Technology Innovations
Established technologies are often used as inspiration and, at times, the actual foundations
upon which new technology innovations are derived and built. This section
briefly describes the pre-existing technologies considered to be the primary influences
on cloud computing.
Clustering
A cluster is a group of independent IT resources that are interconnected and work as
a single system. System failure rates are reduced while availability and reliability are
increased, since redundancy and failover features are inherent to the cluster.
A general prerequisite of hardware clustering is that its component systems have
reasonably identical hardware and operating systems to provide similar performance
levels when one failed component is to be replaced by another. Component devices that
form a cluster are kept in synchronization through dedicated, high-speed communication
links.
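
The failover behavior described above can be sketched in a few lines. This is a toy illustration only: the node names and the simulated failure are invented, and a real cluster would probe heartbeats over the dedicated communication links rather than read a flag.

class Node:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def heartbeat(self):
        return self.healthy              # a real cluster probes over the network

def monitor(nodes, active):
    """Promote a healthy standby if the active node stops responding."""
    if not active.heartbeat():
        standby = next(n for n in nodes if n.heartbeat())
        print(f"failover: {active.name} -> {standby.name}")
        return standby                   # workload resumes on the standby
    return active

nodes = [Node("node-a"), Node("node-b")]
active = nodes[0]
active.healthy = False                   # simulate a hardware failure
active = monitor(nodes, active)
print("active node:", active.name)       # node-b

Because the standby has reasonably identical hardware and an identical operating system, the takeover is transparent to users of the cluster.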
Grid Computing
A computing grid (or "computational grid") provides a platform in which computing
resources are organized into one or more logical pools. These pools are collectively
coordinated to provide a high-performance distributed grid, sometimes referred to as a
"super virtual computer." Grid computing differs from clustering in that grid systems
are much more loosely coupled and distributed. As a result, grid computing systems
can involve computing resources that are heterogeneous and geographically dispersed,
which is generally not possible with cluster computing-based systems.
Grid computing has been an ongoing research area in computing science since the
early 1990s. The technological advancements achieved by grid computing projects have
influenced various aspects of cloud computing platforms and mechanisms, specifically
in relation to common feature-sets such as networked access, resource pooling, and
scalability and resiliency. These types of features can be established by both grid
computing and cloud computing, in their own distinctive approaches.
3.2 Basic Concepts and Terminology
This section establishes a set of basic terms that represent the fundamental concepts
and aspects pertaining to the notion of a cloud and its most primitive artifacts.
Cloud
A cloud refers to a distinct IT environment that is designed for the purpose of remotely
provisioning scalable and measured IT resources. The term originated as a metaphor
for the Internet, which is, in essence, a network of networks providing remote access to
a set of decentralized IT resources. Prior to cloud computing becoming its own formalized
IT industry segment, the symbol of a cloud was commonly used to represent the
Internet in a variety of specifications and mainstream documentation of Web-based
architectures. This same symbol is now used to specifically represent the boundary of
a cloud environment, as shown in Figure 3.1.

Virtualization
Virtualization represents a technology platform used for the creation of virtual instances
of IT resources. A layer of virtualization software allows physical IT resources to provide
multiple virtual images of themselves so that their underlying processing capabilities
can be shared by multiple users.
Prior to the advent of virtualization technologies, software was limited to residing on
and being coupled with static hardware environments. The virtualization process severs
this software-hardware dependency, as hardware requirements can be simulated by
emulation software running in virtualized environments.
Established virtualization technologies can be traced to several cloud characteristics
and cloud computing mechanisms, having inspired many of their core features. As
cloud computing evolved, a generation of modern virtualization technologies emerged
to overcome the performance, reliability, and scalability limitations of traditional
virtualization platforms.

Technology Innovations vs. Enabling Technologies


It is essential to highlight several other areas of technology that continue to contribute
to modern-day cloud-based platforms. These are distinguished as cloud-enabling technologies,
the following of which are covered in Chapter 5:
• Broadband Networks and Internet Architecture
• Data Center Technology
• (Modern) Virtualization Technology
• Web Technology
• Multitenant Technology
• Service Technology
Each of these cloud-enabling technologies existed in some form prior to the formal
advent of cloud computing. Some were refined further, and on occasion even redefined,
as a result of the subsequent evolution of cloud computing.

Technology architectures and various interaction scenarios involving IT resources are
illustrated in diagrams like the one shown in Figure 3.3. It is important to note the
following points when studying and working with these diagrams:
• The IT resources shown within the boundary of a given cloud symbol usually do
not represent all of the available IT resources hosted by that cloud. Subsets of IT
resources are generally highlighted to demonstrate a particular topic.
• Focusing on the relevant aspects of a topic requires many of these diagrams to
intentionally provide abstracted views of the underlying technology architectures.
This means that only a portion of the actual technical details are shown.
Furthermore, some diagrams will display IT resources outside of the cloud symbol.
This convention is used to indicate IT resources that are not cloud-based.
On-Premise
As a distinct and remotely accessible environment, a cloud represents an option for the
deployment of IT resources. An IT resource that is hosted in a conventional IT enterprise
within an organizational boundary (that does not specifically represent a cloud) is
considered to be located on the premises of the IT enterprise.
In other words, the term "on-premise" is another way of stating "on the premises of a
controlled IT environment that is not cloud-based." This term is used to qualify an IT
resource as an alternative to "cloud-based." An IT resource that is on-premise cannot be
cloud-based, and vice versa.
Note the following key points:
• An on-premise IT resource can access and interact with a cloud-based IT resource.
• An on-premise IT resource can be moved to a cloud, thereby changing it to a
cloud-based IT resource.
• Redundant deployments of an IT resource can exist in both on-premise and
cloud-based environments.
If the distinction between on-premise and cloud-based IT resources is confusing in
relation to private clouds (described in the Cloud Deployment Models section of Chapter 4),
then an alternative qualifier can be used.
Cloud Consumers and Cloud Providers
The party that provides cloud-based IT resources is the cloud provider. The party that
uses cloud-based IT resources is the cloud consumer. These terms represent roles usually
assumed by organizations in relation to clouds and corresponding cloud provisioning
contracts. These roles are formally defined in Chapter 4, as part of the Roles and
Boundaries section.

Federated cloud/intercloud
Cloud Federation, also known as Federated Cloud, is the deployment and management of several
external and internal cloud computing services to match business needs. It is a multinational cloud
system that integrates private, community, and public clouds into scalable computing platforms. A
federated cloud is created by connecting the cloud environments of different cloud providers using a
common standard.

The architecture of Federated Cloud consists of three basic components.
1. Cloud Exchange
The Cloud Exchange acts as a mediator between the cloud coordinator and the cloud broker. The
demands of the cloud broker are mapped by the cloud exchange to the available services provided by
the cloud coordinator. The cloud exchange keeps a record of the current cost, demand patterns, and
available cloud providers, and this information is periodically updated by the cloud coordinator.

2. Cloud Coordinator
The cloud coordinator assigns the resources of the cloud to the remote users based on the quality of
service they demand and the credits they have in the cloud bank. The cloud enterprises and their
membership are managed by the cloud controller.
3. Cloud Broker
The cloud broker interacts with the cloud coordinator and analyzes the Service Level Agreement and
the resources offered by several cloud providers in the cloud exchange. The cloud broker finalizes the
most suitable deal for its client.
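
The interaction among the three components can be sketched roughly as follows. The class names, service offers, and the lowest-price matching rule are illustrative assumptions for this example, not part of any federation standard.

class CloudCoordinator:
    """Publishes one provider's available services and current prices."""
    def __init__(self, provider, offers):
        self.provider = provider
        self.offers = offers                 # e.g. {"vm.small": 0.05}

class CloudExchange:
    """Mediates between coordinators and brokers, tracking offers."""
    def __init__(self):
        self.coordinators = []

    def register(self, coordinator):         # periodically refreshed in practice
        self.coordinators.append(coordinator)

    def match(self, service):
        candidates = [(c.offers[service], c.provider)
                      for c in self.coordinators if service in c.offers]
        return min(candidates) if candidates else None

class CloudBroker:
    """Finds the most suitable deal for its client via the exchange."""
    def __init__(self, exchange):
        self.exchange = exchange

    def best_deal(self, service):
        return self.exchange.match(service)

exchange = CloudExchange()
exchange.register(CloudCoordinator("provider-a", {"vm.small": 0.05}))
exchange.register(CloudCoordinator("provider-b", {"vm.small": 0.04}))
print(CloudBroker(exchange).best_deal("vm.small"))   # (0.04, 'provider-b')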

Hybrid cloud:
A hybrid cloud is a heterogeneous distributed system formed by combining facilities of the public cloud
and private cloud. For this reason, they are also called heterogeneous clouds.
A major drawback of private deployments is the inability to scale on-demand and efficiently address peak
loads. Here public clouds are needed. Hence, a hybrid cloud takes advantage of both public and private
clouds.

Community cloud:
Community clouds are distributed systems created by integrating the services of different clouds to
address the specific needs of an industry, a community, or a business sector. However, sharing
responsibilities among the organizations is difficult.
In the community cloud, the infrastructure is shared between organizations that have shared concerns
or tasks. An organization or a third party may manage the cloud.

Multicloud
Multicloud is the use of multiple cloud computing services from different providers, which allows
organizations to use the best-suited services for their specific needs and avoid vendor lock-in.
This allows organizations to take advantage of the different features and capabilities offered by
different cloud providers.
Advantages of using multi-cloud:
1. Flexibility: Using multiple cloud providers allows organizations to choose the best-suited services
for their specific needs, and avoid vendor lock-in.
2. Cost-effectiveness: Organizations can take advantage of the cost savings and pricing benefits
offered by different cloud providers for different services (see the sketch below).
3. Improved performance: By distributing workloads across multiple cloud providers, organizations
can improve performance.

3.1.1.1 Instruction Set Architecture Level


At the ISA level, virtualization is performed by emulating a given ISA by the ISA of the host
machine. For example, MIPS binary code can run on an x86-based host machine with the help of
ISA emulation. With this approach, it is possible to run a large amount of legacy binary code written
for various processors on any given new hardware host machine. Instruction set emulation leads
to virtual ISAs created on any hardware machine.
The basic emulation method is through code interpretation. An interpreter program interprets the
source instructions to target instructions one by one. One source instruction may require tens or
hundreds of native target instructions to perform its function. Obviously, this process is relatively
slow. For better performance, dynamic binary translation is desired. This approach translates basic
blocks of dynamic source instructions to target instructions. The basic blocks can also be extended
to program traces or super blocks to increase translation efficiency. Instruction set emulation
requires binary translation and optimization. A virtual instruction set architecture (V-ISA) thus
requires adding a processor-specific software translation layer to the compiler.
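
The interpretation method can be shown with a toy example. The two-instruction "source ISA" below is invented for the sketch; a real emulator decodes actual binary encodings, and a dynamic binary translator would instead translate whole basic blocks once and cache the result.

def interpret(program):
    """Emulate an invented mini-ISA one source instruction at a time."""
    regs = {"r0": 0, "r1": 0}
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "li":                 # load immediate: li rd, imm
            regs[args[0]] = args[1]
        elif op == "add":              # add rd, rs: rd = rd + rs
            regs[args[0]] += regs[args[1]]
        elif op == "halt":
            break
        pc += 1                        # real ISAs also need branches, memory, ...
    return regs

code = [("li", "r0", 2), ("li", "r1", 40), ("add", "r0", "r1"), ("halt",)]
print(interpret(code))                 # {'r0': 42, 'r1': 40}

Each source instruction costs many host operations (decode, dispatch, state update), which is why pure interpretation is relatively slow.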
3.1.1.2 Hardware Abstraction Level
Hardware-level virtualization is performed right on top of the bare hardware. On the one hand, this
approach generates a virtual hardware environment for a VM. On the other hand, the process manages
the underlying hardware through virtualization. The idea is to virtualize a computer's resources, such as
its processors, memory, and I/O devices. The intention is to upgrade the hardware utilization rate by
multiple users concurrently. The idea was implemented in the IBM VM/370 in the 1960s. More
recently, the Xen hypervisor has been applied to virtualize x86-based machines to run Linux or other
guest OS applications. We will discuss hardware virtualization approaches in more detail in Section 3.3.
3.1.1.3 Operating System Level
This refers to an abstraction layer between the traditional OS and user applications. OS-level
virtualization creates isolated containers on a single physical server and the OS instances to utilize the
hardware and software in data centers. The containers behave like real servers. OS-level virtualization
is commonly used in creating virtual hosting environments to allocate hardware resources among a
large number of mutually distrusting users. It is also used, to a lesser extent, in consolidating server
hardware by moving services on separate hosts into containers or VMs on one server. OS-level
virtualization is depicted in Section 3.1.3.
3.1.1.4 Library Support Level
Most applications use APIs exported by user-level libraries rather than using lengthy system calls
by the OS. Since most systems provide well-documented APIs, such an interface becomes another
candidate for virtualization. Virtualization with library interfaces is possible by controlling the
communication link between applications and the rest of a system through API hooks. The software
tool WINE has implemented this approach to support Windows applications on top of UNIX hosts.
Another example is vCUDA, which allows applications executing within VMs to leverage GPU
hardware acceleration. This approach is detailed in Section 3.1.4.
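
The API-hook idea can be shown in miniature: a call into the "guest" library is intercepted and redirected to a host-backed implementation, loosely analogous to how WINE maps Windows API calls onto UNIX facilities. Every name in this sketch is invented.

def host_write(path, data):
    """Host-side facility that actually performs the work."""
    print(f"host wrote {len(data)} bytes to {path}")

class GuestFileAPI:
    """The interface that applications are programmed against."""
    def write_file(self, path, data):
        raise NotImplementedError("no native implementation on this host")

# The hook: replace the guest entry point with a host-backed one.
GuestFileAPI.write_file = lambda self, path, data: host_write(path, data)

GuestFileAPI().write_file("C:/temp/out.txt", b"hello")   # served by the host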
3.1.1.5 User-Application Level
Virtualization at the application level virtualizes an application as a VM. On a traditional OS, an
application often runs as a process. Therefore, application-level virtualization is also known as
process-level virtualization. The most popular approach is to deploy high level language (HLL)
VMs. In this scenario, the virtualization layer sits as an application program on top of the operating
system, and the layer exports an abstraction of a VM that can run programs written and compiled
to a particular abstract machine definition. Any program written in the HLL and compiled for this
VM will be able to run on it. The Microsoft .NET CLR and the Java Virtual Machine (JVM) are two
good examples of this class of VM.
Other forms of application-level virtualization are known as application isolation, application
sandboxing, or application streaming. The process involves wrapping the application in a layer that
is isolated from the host OS and other applications. The result is an application that is much easier
to distribute and remove from user workstations. An example is the LANDesk application
virtualization platform, which deploys software applications as self-contained, executable files in an
isolated environment without requiring installation, system modifications, or elevated security privileges.
3.1.1.6 Relative Merits of Different Approaches
Table 3.1 compares the relative merits of implementing virtualization at various levels. The column
headings correspond to four technical merits. "Higher Performance" and "Application Flexibility"
are self-explanatory. "Implementation Complexity" implies the cost to implement that particular
virtualization level. "Application Isolation" refers to the effort required to isolate resources committed
to different VMs. Each row corresponds to a particular level of virtualization. The number of X's in
the table cells reflects the advantage points of each implementation level. Five X's implies the best
case and one X implies the worst case. Overall, hardware and OS support will yield the highest
performance. However, the hardware and application levels are also the most expensive to implement.
User isolation is the most difficult to achieve. ISA implementation offers the best application flexibility.

3.2 VIRTUALIZATION STRUCTURES/TOOLS AND MECHANISMS


In general, there are three typical classes of VM architecture. Figure 3.1 showed the architectures of
a machine before and after virtualization. Before virtualization, the operating system manages the
hardware. After virtualization, a virtualization layer is inserted between the hardware and the
operating system. In such a case, the virtualization layer is responsible for converting portions of the
real hardware into virtual hardware. Therefore, different operating systems such as Linux and
Windows can run on the same physical machine, simultaneously. Depending on the position of the
virtualization layer, there are several classes of VM architectures, namely the hypervisor architecture,
paravirtualization, and host-based virtualization. The hypervisor is also known as the VMM (Virtual
Machine Monitor). They both perform the same virtualization operations.
3.2.1 Hypervisor and Xen Architecture
The hypervisor supports hardware-level virtualization (see Figure 3.1(b)) on bare metal devices like
CPU, memory, disk and network interfaces. The hypervisor software sits directly between the
physical hardware and its OS. This virtualization layer is referred to as either the VMM or the
hypervisor. The hypervisor provides hypercalls for the guest OSes and applications. Depending on
the functionality, a hypervisor can assume a micro-kernel architecture like the Microsoft Hyper-V. Or
it can assume a monolithic hypervisor architecture like the VMware ESX for server virtualization.
A micro-kernel hypervisor includes only the basic and unchanging functions (such as physical
memory management and processor scheduling). The device drivers and other changeable components
are outside the hypervisor. A monolithic hypervisor implements all the aforementioned functions,
including those of the device drivers. Therefore, the size of the hypervisor code of a micro-kernel
hypervisor is smaller than that of a monolithic hypervisor. Essentially, a hypervisor must be able to
convert physical devices into virtual resources dedicated for the deployed VM to use.
3.2.1.1 The Xen Architecture
Xen is an open source hypervisor program developed by Cambridge University. Xen is a micro-kernel
hypervisor, which separates the policy from the mechanism. The Xen hypervisor implements
all the mechanisms, leaving the policy to be handled by Domain 0, as shown in Figure 3.5. Xen
does not include any device drivers natively [7]. It just provides a mechanism by which a guest OS
can have direct access to the physical devices. As a result, the size of the Xen hypervisor is kept
rather small. Xen provides a virtual environment located between the hardware and the OS.
A number of vendors are in the process of developing commercial Xen hypervisors, among them
are Citrix XenServer [62] and Oracle VM [42].
The core components of a Xen system are the hypervisor, kernel, and applications. The organization
of the three components is important. Like other virtualization systems, many guest OSes
can run on top of the hypervisor. However, not all guest OSes are created equal, and one in
particular controls the others. The guest OS, which has control ability, is called Domain 0, and the
others are called Domain U.

3.3.2 CPU Virtualization


A VM is a duplicate of an existing computer system in which a majority of the VM instructions are
executed on the host processor in native mode. Thus, unprivileged instructions of VMs run directly
on the host machine for higher efficiency. Other critical instructions should be handled carefully for
correctness and stability. The critical instructions are divided into three categories: privileged
instructions, control-sensitive instructions, and behavior-sensitive instructions. Privileged instructions
execute in a privileged mode and will be trapped if executed outside this mode. Control-sensitive
instructions attempt to change the configuration of resources used. Behavior-sensitive instructions
have different behaviors depending on the configuration of resources, including the load and store
operations over the virtual memory.
A CPU architecture is virtualizable if it supports the ability to run the VM's privileged and
unprivileged instructions in the CPU's user mode while the VMM runs in supervisor mode. When
the privileged instructions, including control- and behavior-sensitive instructions, of a VM are
executed, they are trapped in the VMM. In this case, the VMM acts as a unified mediator for
hardware access, because all control- and behavior-sensitive instructions are privileged instructions.
On the contrary, x86 CPU architectures are not primarily designed to support virtualization. This is
because about 10 sensitive instructions, such as SGDT and SMSW, are not privileged instructions.
When these instructions execute in virtualization, they cannot be trapped in the VMM.
On a native UNIX-like system, a system call triggers the 80h interrupt and passes control to the
OS kernel. The interrupt handler in the kernel is then invoked to process the system call. On a
paravirtualization system such as Xen, a system call in the guest OS first triggers the 80h interrupt
normally. Almost at the same time, control is passed on to the hypervisor as well. When the
hypervisor completes its task for the guest OS system call, it passes control back to the guest OS
kernel. Certainly, the guest OS kernel may also invoke the hypercall while it's running. Although
paravirtualization of a CPU lets unmodified applications run in the VM, it causes a small
performance penalty.
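
The trap-and-emulate behavior described in this section can be sketched schematically. The instruction names, the set of privileged operations, and the VMM interface below are invented to mirror the prose; a real VMM traps through hardware protection modes, not a software dispatch loop.

PRIVILEGED = {"load_cr3", "hlt"}          # invented stand-ins for sensitive ops

class VMM:
    def __init__(self):
        self.virtual_cr3 = None           # per-VM virtual hardware state

    def emulate(self, instr, operand):
        """Apply a trapped instruction to virtual, not physical, state."""
        if instr == "load_cr3":
            self.virtual_cr3 = operand
        elif instr == "hlt":
            print("VMM: guest halted, scheduling another VM")

def run_guest(instructions, vmm):
    for instr, operand in instructions:
        if instr in PRIVILEGED:
            vmm.emulate(instr, operand)   # trap into the supervisor-mode VMM
        else:
            pass                          # unprivileged: runs natively on the CPU

run_guest([("add", None), ("load_cr3", 0x1000), ("hlt", None)], VMM())

The x86 difficulty described above is precisely that instructions such as SGDT and SMSW fall into the "runs natively" branch even though they belong in the trapped set.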

3.3.2.1 Hardware-Assisted CPU Virtualization


This technique attempts to simplify virtualization because full or paravirtualization is complicated.
Intel and AMD add an additional mode called privilege mode level (some people call it Ring -1) to
x86 processors. Therefore, operating systems can still run at Ring 0 and the hypervisor can run at
Ring -1. This technique removes the difficulty of implementing binary translation of full virtualization.
It also lets the operating system run in VMs without modification.
Example 3.5 Intel Hardware-Assisted CPU Virtualization
Although x86 processors are not virtualizable primarily, great effort is taken to virtualize them. They
remain widely used, in comparison with RISC processors, because the bulk of x86-based legacy
systems cannot be discarded easily. Virtualization of x86 processors is detailed in the following
sections. Intel's VT-x technology is an example of hardware-assisted virtualization, as shown in
Figure 3.11. Intel calls the privilege level of x86 processors the VMX Root Mode. In order to control
the start and stop of a VM and allocate a memory page to maintain the CPU state for each VM, a set
of additional instructions is added.
3.3.3 Memory Virtualization
Virtual memory virtualization is similar to the virtual memory support provided by modern operating
systems. In a traditional execution environment, the operating system maintains mappings of virtual
memory to machine memory using page tables, which is a one-stage mapping from virtual memory
to machine memory. All modern x86 CPUs include a memory management unit (MMU) and a
translation lookaside buffer (TLB) to optimize virtual memory performance. However, in a
virtual execution environment, virtual memory virtualization involves sharing the physical system
memory in RAM and dynamically allocating it to the physical memory of the VMs.
That means a two-stage mapping process should be maintained by the guest OS and the VMM,
respectively: virtual memory to physical memory, and physical memory to machine memory.
Furthermore, MMU virtualization should be supported, which is transparent to the guest OS.
The guest OS continues to control the mapping of virtual addresses to the physical memory
addresses of VMs. But the guest OS cannot directly access the actual machine memory. The VMM
is responsible for mapping the guest physical memory to the actual machine memory. Figure 3.12
shows the two-level memory mapping procedure.
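
A small sketch of this two-stage mapping, with made-up page numbers: the guest page table implements stage one (virtual memory to guest physical memory) and the VMM's table implements stage two (guest physical memory to machine memory).

# Stage 1 (guest OS):  guest-virtual page  -> guest-physical page
# Stage 2 (VMM):       guest-physical page -> machine (host) page
guest_page_table = {0x0: 0x2, 0x1: 0x5}   # maintained by the guest OS
vmm_page_table   = {0x2: 0x7, 0x5: 0x3}   # maintained by the VMM

def translate(guest_virtual_page):
    guest_physical = guest_page_table[guest_virtual_page]   # stage 1
    return vmm_page_table[guest_physical]                   # stage 2

print(hex(translate(0x0)))   # 0x7, the only address the real RAM ever sees

In practice the VMM collapses the two stages (for example, with shadow page tables or hardware-supported nested paging) so the MMU can still translate addresses in a single step.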

VIRTUAL CLUSTERS AND RESOURCE MANAGEMENT


A physical cluster is a collection of servers (physical machines) interconnected by a physical network
such as a LAN. In Chapter 2, we studied various clustering techniques on physical machines. Here,
we introduce virtual clusters and study their properties as well as explore their potential applications.
In this section, we will study three critical design issues of virtual clusters: live migration of VMs,
memory and file migrations, and dynamic deployment of virtual clusters. When a traditional VM is
initialized, the administrator needs to manually write configuration information or specify the
configuration sources. When more VMs join a network, an inefficient configuration always causes
problems with load balancing or monitoring.
3.4.1 Physical versus Virtual Clusters
Amazon's Elastic Compute Cloud (EC2) is a good example of a web service that provides elastic
computing power in a cloud. EC2 permits customers to create VMs and to manage user accounts over
the time of their use. Most virtualization platforms, including XenServer and VMware ESX Server,
support a bridging mode which allows all domains to appear on the network as individual hosts. By
using this mode, VMs can communicate with one another freely through the virtual network interface
card and configure the network automatically.
Virtual clusters are built with VMs installed at distributed servers from one or more physical
clusters. The VMs in a virtual cluster are interconnected logically by a virtual network across several
physical networks. Figure 3.18 illustrates the concepts of virtual clusters and physical clusters. Each
virtual cluster is formed with physical machines or a VM hosted by multiple physical clusters. The
virtual cluster boundaries are shown as distinct boundaries.
The provisioning of VMs to a virtual cluster is done dynamically and has the following interesting
properties:
• The virtual cluster nodes can be either physical or virtual machines. Multiple VMs running with
different OSes can be deployed on the same physical node.
• A VM runs with a guest OS, which is often different from the host OS, that manages the
resources in the physical machine where the VM is implemented.
• The purpose of using VMs is to consolidate multiple functionalities on the same server. This
will greatly enhance server utilization and application flexibility.
