
UNIT 4 CLOUD ENABLING TECHNOLOGIES

Service Oriented Architecture – Web Services – Basics of Virtualization – Emulation – Types of Virtualization – Implementation levels of Virtualization – Virtualization structures – Tools & Mechanism – Virtualization of CPU, Memory & I/O Devices – Desktop Virtualization – Server Virtualization – Google App Engine – Amazon AWS – Federation in the cloud.

Cloud-enabling technology

Cloud-enabling technologies are the technologies that make it possible to deliver computing resources to customers over the internet. Cloud-computing technologies are proliferating across various sectors, such as energy and power, oil and gas, building and construction, transport, and communication.

4.1 SERVICE ORIENTED ARCHITECTURE

Service-Oriented Architecture (SOA) allows organizations to access on-demand, cloud-based computing solutions as business needs change. It can work with or without cloud computing. The advantages of using SOA are that it is easy to maintain, platform independent, and highly scalable.

Service Provider and Service consumer are the two major roles within SOA.
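The two roles can be illustrated with a minimal sketch, assuming nothing beyond plain Python: a provider publishes a service under a logical name in a registry, and a consumer looks it up and invokes it without knowing the implementation. The names (`ServiceRegistry`, `greet`) are invented for this example, not part of any real SOA stack.

```python
# Toy sketch of the two SOA roles. A provider registers a service under a
# logical name; a consumer discovers and invokes it by that name only.
class ServiceRegistry:
    def __init__(self):
        self._services = {}

    def publish(self, name, func):
        """Provider role: offer a capability as a named service."""
        self._services[name] = func

    def lookup(self, name):
        """Consumer role: discover a service by its logical name."""
        return self._services[name]

# Provider side: publish a service.
registry = ServiceRegistry()
registry.publish("greet", lambda who: f"Hello, {who}")

# Consumer side: bind to the service by name, not by implementation.
greet = registry.lookup("greet")
print(greet("cloud client"))   # Hello, cloud client
```

Because the consumer depends only on the service name and its interface, the provider can replace the implementation without affecting consumers, which is the loose coupling SOA aims for.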

Applications of Service-Oriented Architecture

There are the following applications of Service-Oriented Architecture -

o It is used in the healthcare industry.
o It is used to create many mobile applications and games.
o In the air force, SOA infrastructure is used to deploy situational awareness systems.

The service-oriented architecture is shown below:

SOA (Service Oriented Architecture) is built on computer engineering approaches that offer an architectural advancement towards enterprise systems. It describes a standard method for requesting services from distributed components and then managing the results. The primary focus of this service-oriented approach is on the characteristics of the service interface and predictable service behavior. Web Services are a set of industry standards collectively labeled as one. SOA provides a translation and management layer within the cloud architecture that removes the barrier for cloud clients obtaining desired services. Multiple networking and messaging protocols can be written using SOA's clients and components, which can communicate with each other. SOA provides access to reusable Web services over a TCP/IP network, which makes it an important topic for cloud computing going forward.

Benefits of SOA
From both an engineering and an enterprise point of view, SOA provides several benefits:

 Language-Neutral Integration: Regardless of the development language used, the system offers and invokes services through a common mechanism. Programming-language neutrality is one of the key benefits of SOA's integration approach.
 Component Reuse: Once an organization has built an application component and offered it as a service, the rest of the organization can utilize that service.
 Organizational Agility: SOA defines building blocks of capabilities provided by software; it offers services that meet organizational requirements and can be recombined and integrated rapidly.
 Leveraging Existing Systems: One of the major uses of SOA is to classify elements or functions of existing applications and make them available to the organization or enterprise.

Risks and Considerations of SOA

 Dependence on the network
 Provider cost
 Enterprise standards
 Agility

SOA Architecture

SOA architecture is viewed as five horizontal layers, described below:

 Consumer Interface Layer: GUI-based applications through which end users access the services.
 Business Process Layer: Business use cases expressed in terms of applications.
 Services Layer: The services that make up the whole-enterprise service inventory.
 Service Component Layer: The components used to build the services, such as functional and technical libraries.
 Operational Systems Layer: Contains the data model.

SOA Governance

It is important to differentiate between IT governance and SOA governance. IT governance focuses on managing IT assets and processes, whereas SOA governance focuses on managing business services. Furthermore, in a service-oriented organization, everything should be characterized as a service. The cost of governance becomes clear when we consider the amount of risk it eliminates: a good understanding of services, organizational data, and processes makes it possible to choose sound approaches and policies for monitoring and to gauge their performance impact.

SOA Architecture and Protocols

Here lies the protocol stack of SOA, showing each protocol along with the relationships among them. These components are often programmed to comply with SCA (Service Component Architecture), a specification that has broad but not universal industry support. These components are written in BPEL (Business Process Execution Language), Java, C#, XML, etc., and the approach can also apply to C++, FORTRAN, or modern multi-purpose languages such as Python, PHP, or Ruby. With this, SOA has extended the life of many long-standing applications.
Security in SOA

With the vast use of cloud technology and its on-demand applications, there is a need for well-defined security policies and access control. As these issues are addressed, the success of SOA architecture will increase. Actions can be taken to ensure security and lessen the risks when dealing with an SOE (Service Oriented Environment). We can make policies that influence the patterns of development and the way services are used. Moreover, the system must be set up to exploit the advantages of the public cloud with resilience. Users must include safety practices and carefully evaluate contract clauses in these respects.

Elements of SOA

The different elements of SOA and their subparts are shown in the diagrammatic figure (Figure - Elements of SOA).

Though SOA achieved considerable success in the past, the introduction of cloud technology renewed the value of SOA.

4.2 WEB SERVICES

Cloud computing is a style of computing in which virtualised and standard resources, software
and data are provided as a service over the Internet.

Consumers and businesses can use the cloud to store data and applications and can interact with
the Cloud using mobiles, desktop computers, laptops etc. via the Internet from anywhere and at
any time.

The technology of Cloud computing entails the convergence of Grid and cluster computing,
virtualisation, Web services and Service Oriented Architecture (SOA) - it offers the potential to
set IT free from the costs and complexity of its typical physical infrastructure, allowing concepts
such as Utility Computing to become meaningful.

Key players include: IBM, HP, Google, Microsoft, Amazon Web Services, Salesforce.com,
NetSuite, VMware.

Benefits of Cloud Computing:

 predictable any time, anywhere access to IT resources
 flexible scaling of resources (resource optimisation)
 rapid, request-driven provisioning
 lower total cost of operations
Risks and Challenges of Cloud computing include:
 security of data and data protection
 data privacy
 legal issues
 disaster recovery
 failure management and fault tolerance
 IT integration management issues
 business regulatory requirements
 SLA (service level agreement) management
Web services refers to software that provides a standardized way of integrating Web-based
applications using the XML, SOAP, WSDL and UDDI open standards over the Internet.
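A SOAP message is just a standardized XML envelope. The following sketch builds a minimal SOAP 1.1 request using only the Python standard library; the operation name (`GetQuote`), its parameter, and the service namespace are hypothetical, invented for illustration.

```python
# Build a minimal SOAP 1.1 envelope with the standard library.
# The SOAP envelope namespace is the real SOAP 1.1 URI; the service
# namespace and operation are made up for this example.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_soap_request(operation, params, ns="http://example.com/stock"):
    ET.register_namespace("soap", SOAP_NS)
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{ns}}}{operation}")       # the operation being invoked
    for key, value in params.items():
        ET.SubElement(op, f"{{{ns}}}{key}").text = str(value)
    return ET.tostring(envelope, encoding="unicode")

xml_doc = build_soap_request("GetQuote", {"symbol": "ACME"})
print(xml_doc)
```

In a real web service, this XML would be POSTed over HTTP to an endpoint described by a WSDL document, and the response would arrive as a similar envelope; UDDI was the standard for discovering such endpoints.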
4.3 BASICS OF VIRTUALIZATION

Virtualization is a technique which allows sharing a single physical instance of an application or resource among multiple organizations or tenants (customers). It does so by assigning a logical name to a physical resource and providing a pointer to that physical resource on demand.

Virtualization Concept

Creating a virtual machine over existing operating system and hardware is referred to as hardware virtualization. Virtual machines provide an environment that is logically separated from the underlying hardware.
The machine on which the virtual machine is created is known as the host machine, and the virtual machine is referred to as the guest machine. The virtual machine is managed by software or firmware known as a hypervisor.

Hypervisor

The hypervisor is a firmware or low-level program that acts as a Virtual Machine Manager.
There are two types of hypervisor:
A Type 1 hypervisor executes on the bare system. LynxSecure, RTS Hypervisor, Oracle VM, Sun xVM Server, and VirtualLogic VLX are examples of Type 1 hypervisors. The following diagram shows the Type 1 hypervisor.

The Type 1 hypervisor does not have any host operating system because it is installed directly on the bare system.
A Type 2 hypervisor is a software interface that emulates the devices with which a system normally interacts, and runs on top of a host operating system. VMware Fusion, Virtual Server 2005 R2, Windows Virtual PC, and VMware Workstation are examples of Type 2 hypervisors. The following diagram shows the Type 2 hypervisor.
Types of Hardware Virtualization

Here are the three types of hardware virtualization:

 Full Virtualization
 Emulation Virtualization
 Paravirtualization

Full Virtualization

In full virtualization, the underlying hardware is completely simulated. Guest software does
not require any modification to run.

Emulation Virtualization

In emulation, the virtual machine simulates the hardware and hence becomes independent of it. The guest operating system does not require modification.

Paravirtualization

In paravirtualization, the hardware is not simulated. Instead, the guest software runs in its own isolated domain.

VMware vSphere is a highly developed infrastructure that offers a management framework for virtualization. It virtualizes the system, storage, and networking hardware.
4.4 EMULATION
In emulation virtualization, the virtual machine simulates the hardware and so becomes independent of it. The guest operating system does not require any modification. In this form of virtualization, the computer hardware provides architectural support for building and managing a fully virtualized VM.

Emulation Cloud is an open application development environment that helps customers and
third-party developers create, test, and fine-tune customized applications in a completely
virtual environment.
With the web-scale traffic demands of fast-growing cloud-based services, content
distribution, and new IT services emerging from virtualization, requirements for flexibility
and programmability are growing like never before.

There are four key benefits of Emulation Cloud:

 It can accelerate DevOps and web-scale IT integration by enabling customers and partners to create, test, and fine-tune applications and scripts using a cloud-based solution rather than expensive and labor-intensive IT resources.
 It provides access to full API definitions and descriptions and enables users to tap
into the expertise of Ciena’s team for questions regarding APIs and code.
 Users can schedule and access virtual lab time to develop unique operational tools,
without IT infrastructure investment.
 It encourages innovation through experimentation and testing—all from the safety of
a virtual cloud environment.

Ciena offers a rich toolset in the Emulation Cloud so developers and IT teams can simplify integration activities.

4.5 TYPES OF VIRTUALIZATION


Virtualization plays a very important role in cloud computing technology. Normally in cloud computing, users share the data present in the cloud, such as applications, but with the help of virtualization users also share the infrastructure.

Types of Virtualization:

1. Hardware Virtualization.
2. Operating system Virtualization.
3. Server Virtualization.
4. Storage Virtualization.

1) Hardware Virtualization:

When the virtual machine software or virtual machine manager (VMM) is installed directly on the hardware system, this is known as hardware virtualization.

The main job of the hypervisor is to control and monitor the processor, memory, and other hardware resources.

After virtualization of the hardware system, we can install different operating systems on it and run different applications on those OSes.

Usage:

Hardware virtualization is mainly done for server platforms, because controlling virtual machines is much easier than controlling a physical server.

2) Operating System Virtualization:

When the virtual machine software or virtual machine manager (VMM) is installed on the host operating system instead of directly on the hardware system, this is known as operating system virtualization.

Usage:

Operating system virtualization is mainly used for testing applications on different OS platforms.

3) Server Virtualization:

When the virtual machine software or virtual machine manager (VMM) is installed directly on the server system, this is known as server virtualization.

Usage:

Server virtualization is done because a single physical server can be divided into multiple servers on demand and for load balancing.

4) Storage Virtualization:
Storage virtualization is the process of grouping the physical storage from multiple network
storage devices so that it looks like a single storage device.

Storage virtualization is also implemented by using software applications.

Usage:

Storage virtualization is mainly done for back-up and recovery purposes.
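The pooling idea behind storage virtualization can be sketched in a few lines: several physical "devices" are presented as one logical volume, and the abstraction's core job is mapping a logical offset to the right physical device. Device names and capacities here are invented for illustration.

```python
# Toy model of storage virtualization: multiple backing devices are
# exposed as a single logical volume with one combined capacity.
class LogicalVolume:
    def __init__(self, devices):
        self.devices = devices            # list of (name, capacity_mb)

    @property
    def capacity_mb(self):
        return sum(cap for _, cap in self.devices)

    def locate(self, offset_mb):
        """Map a logical offset to (device, local offset): the heart of the abstraction."""
        for name, cap in self.devices:
            if offset_mb < cap:
                return name, offset_mb
            offset_mb -= cap
        raise ValueError("offset beyond pool capacity")

pool = LogicalVolume([("disk-a", 500), ("disk-b", 1000), ("nas-1", 2000)])
print(pool.capacity_mb)      # 3500: three devices appear as one volume
print(pool.locate(1200))     # ('disk-b', 700): logical MB 1200 lives on the second disk
```

Real storage virtualization software adds striping, mirroring, and snapshotting on top of exactly this kind of mapping, which is why it is so useful for backup and recovery.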

4.6 IMPLEMENTATION LEVELS OF VIRTUALIZATION

The Five Levels of Implementing Virtualization

1. Instruction Set Architecture Level (ISA)
2. Hardware Abstraction Level (HAL)
3. Operating System Level
4. Library Level
5. Application Level

Virtualization has been present since the 1960s, when it was introduced by IBM. Yet it has only recently gained the expected traction, owing to the influx of cloud-based systems.

Virtualization, to explain in brief, is the capability to run multiple instances of computer systems
on the same hardware. The way hardware is being used can vary based on the configuration of
the virtual machine.

The best example of this is your own desktop PC or laptop. You might be running Windows on your system, but with virtualization you can also run macOS or Ubuntu Linux on it.

Now, there are various levels of virtualization to consider. Let's have a look at them.

The Five Levels of Implementing Virtualization

Virtualization is not that easy to implement. A computer runs an OS that is configured to its particular hardware, and running a different OS on the same hardware is not exactly feasible.

To tackle this, there exists the hypervisor. The hypervisor acts as a bridge between the virtual OS and the hardware to enable the smooth functioning of the instance.

There are five levels of virtualizations available that are most commonly used in the industry.
These are as follows:

Instruction Set Architecture Level (ISA)

In ISA-level virtualization, virtualization works through ISA emulation. This is helpful for running heaps of legacy code that was originally written for a different hardware configuration. Such code can be run on the virtual machine through ISA emulation.

A binary code that might need additional layers to run can now run on an x86 machine or with
some tweaking, even on x64 machines. ISA helps make this a hardware-agnostic virtual
machine.

The basic emulation, though, requires an interpreter. This interpreter interprets the source instructions and converts them to a hardware-readable format for processing.
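The interpretation loop can be sketched with a toy two-register ISA, invented purely for this example: each "foreign" instruction is decoded and mapped onto host operations, which is how ISA emulation runs code written for different hardware.

```python
# Toy interpreter for a made-up two-register ISA, illustrating how
# ISA-level emulation decodes each guest instruction and carries it
# out with host operations.
def run(program):
    regs = {"r0": 0, "r1": 0}
    for op, *args in program:
        if op == "LOAD":              # LOAD reg, immediate
            regs[args[0]] = args[1]
        elif op == "ADD":             # ADD dst, src  (dst += src)
            regs[args[0]] += regs[args[1]]
        elif op == "HALT":
            break
        else:
            raise ValueError(f"unknown instruction: {op}")
    return regs

result = run([("LOAD", "r0", 40), ("LOAD", "r1", 2), ("ADD", "r0", "r1"), ("HALT",)])
print(result["r0"])   # 42
```

Pure interpretation like this is slow, which is why production emulators (such as QEMU) add dynamic binary translation: hot guest code is translated into host instructions once and then run natively.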

Hardware Abstraction Level (HAL)

As the name suggests, this level performs virtualization at the hardware level. It uses a bare-metal hypervisor for its functioning.

This level helps form the virtual machine and manages the hardware through virtualization.

It enables virtualization of each hardware component such as I/O devices, processors, memory,
etc.

This way multiple users can use the same hardware with numerous instances of virtualization at
the same time.

IBM first implemented this approach in its VM family of systems, whose lineage dates back to the 1960s and culminated in the VM/370. This level is well suited to cloud-based infrastructure.

Thus, it is no surprise that currently, Xen hypervisors are using HAL to run Linux and other OS
on x86 based machines.

Operating System Level

At the operating system level, the virtualization model creates an abstract layer between the
applications and the OS.

It works like isolated containers on the physical server, sharing the operating system while utilizing its hardware and software. Each of these containers functions like a server.

When the number of users is high, and no one is willing to share hardware, this level of
virtualization comes in handy.

Here, every user gets their own virtual environment with dedicated virtual hardware resources.
This way, no conflicts arise.

Library Level

OS system calls are lengthy and cumbersome, which is why applications opt for APIs from user-level libraries.

Most of the APIs provided by systems are rather well documented; hence, library-level virtualization is preferred in such scenarios.
Library interfacing virtualization is made possible by API hooks. These API hooks control the
communication link from the system to the applications.

Some tools available today, such as vCUDA and WINE, have successfully demonstrated this technique.
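The hook idea can be demonstrated in miniature: replace a library function with a wrapper that observes (or could translate) each call before forwarding it to the real implementation. This Python monkey-patch is only an analogy for the binary-level API hooks tools like WINE and vCUDA use; the function chosen (`math.sqrt`) is arbitrary.

```python
# Sketch of library-level API hooking: intercept calls to a library
# function, record them, then delegate to the original implementation.
import math

calls = []                      # what the hook has observed
_original_sqrt = math.sqrt      # keep a handle to the real function

def hooked_sqrt(x):
    calls.append(x)             # the hook sees every call and its arguments
    return _original_sqrt(x)    # ...then forwards to the real library code

math.sqrt = hooked_sqrt         # install the hook over the library API

print(math.sqrt(16.0))          # 4.0: callers see unchanged behavior
print(calls)                    # [16.0]: but the call was intercepted
```

A real library-virtualization layer does the same interception at the native-library boundary, translating one API's calls into another's (e.g., Windows API calls into POSIX calls) instead of merely logging them.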

Application Level

Application-level virtualization comes in handy when you wish to virtualize only an application. It does not virtualize an entire platform or environment.

On an operating system, applications work as one process. Hence it is also known as process-
level virtualization.

It is generally useful when running virtual machines with high-level languages. Here, the virtualization layer sits as an application program on top of the operating system, and the application to be virtualized runs on top of the virtualization layer.

Programs written in high-level languages and compiled for an application-level virtual machine
can run fluently here.

4.7 VIRTUALIZATION STRUCTURES TOOLS & MECHANISM


In general, there are three typical classes of VM architecture. Figure 3.1 showed the architectures of a machine before and after virtualization. Before virtualization, the operating system manages the hardware. After virtualization, a virtualization layer is inserted between the hardware and the operating system. In such a case, the virtualization layer is responsible for converting portions of the real hardware into virtual hardware. Therefore, different operating systems such as Linux and Windows can run on the same physical machine simultaneously. Depending on the position of the virtualization layer, there are several classes of VM architectures, namely the hypervisor architecture, para-virtualization, and host-based virtualization. The hypervisor is also known as the VMM (Virtual Machine Monitor). They both perform the same virtualization operations.
1. Hypervisor and Xen Architecture
The hypervisor supports hardware-level virtualization (see Figure 3.1(b)) on bare metal devices like CPU, memory, disk and network interfaces. The hypervisor software sits directly between the physical hardware and its OS. This virtualization layer is referred to as either the VMM or the hypervisor. The hypervisor provides hypercalls for the guest OSes and applications. Depending on the functionality, a hypervisor can assume a micro-kernel architecture like the Microsoft Hyper-V, or it can assume a monolithic hypervisor architecture like the VMware ESX for server virtualization.

A micro-kernel hypervisor includes only the basic and unchanging functions (such as physical memory management and processor scheduling). The device drivers and other changeable components are outside the hypervisor. A monolithic hypervisor implements all the aforementioned functions, including those of the device drivers. Therefore, the size of the hypervisor code of a micro-kernel hypervisor is smaller than that of a monolithic hypervisor.
Essentially, a hypervisor must be able to convert physical devices into virtual resources
dedicated for the deployed VM to use.
1.1 The Xen Architecture
Xen is an open source hypervisor program developed by Cambridge University. Xen is a micro-kernel hypervisor, which separates the policy from the mechanism. The Xen hypervisor
implements all the mechanisms, leaving the policy to be handled by Domain 0, as shown in
Figure 3.5. Xen does not include any device drivers natively [7]. It just provides a mechanism by
which a guest OS can have direct access to the physical devices. As a result, the size of the Xen
hypervisor is kept rather small. Xen provides a virtual environment located between the
hardware and the OS. A number of vendors are in the process of developing commercial Xen
hypervisors, among them are Citrix XenServer [62] and Oracle VM [42].

The core components of a Xen system are the hypervisor, kernel, and applications. The organization of the three components is important. Like other virtualization systems, many guest OSes
can run on top of the hypervisor. However, not all guest OSes are created equal, and one in

particular controls the others. The guest OS, which has control ability, is called Domain 0, and
the others are called Domain U. Domain 0 is a privileged guest OS of Xen. It is first loaded when
Xen boots without any file system drivers being available. Domain 0 is designed to access
hardware directly and manage devices. Therefore, one of the responsibilities of Domain 0 is to
allocate and map hardware resources for the guest domains (the Domain U domains).
2. Binary Translation with Full Virtualization
Depending on implementation technologies, hardware virtualization can be classified into two categories: full virtualization and host-based virtualization. Full virtualization does not need to modify the host OS. It relies on binary translation to trap and to virtualize the execution of certain sensitive, nonvirtualizable instructions. The guest OSes and their applications consist of noncritical and critical instructions. In a host-based system, both a host OS and a guest OS are used. A virtualization software layer is built between the host OS and guest OS. These two classes of VM architecture are introduced next.
2.1 Full Virtualization
With full virtualization, noncritical instructions run on the hardware directly while critical
instructions are discovered and replaced with traps into the VMM to be emulated by software.
Both the hypervisor and VMM approaches are considered full virtualization. Why are only
critical instructions trapped into the VMM? This is because binary translation can incur a large
performance overhead. Noncritical instructions do not control hardware or threaten the security
of the system, but critical instructions do. Therefore, running noncritical instructions on
hardware not only can promote efficiency, but also can ensure system security.
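This trap-and-emulate split can be modeled in a few lines. The sketch below uses an invented instruction set: noncritical instructions "run directly", while instructions in a critical set are trapped for software emulation by the VMM, mirroring the division the paragraph describes.

```python
# Toy model of full virtualization's trap-and-emulate behavior.
# The instruction mnemonics and the CRITICAL set are invented for
# illustration; real VMMs classify actual CPU instructions this way.
CRITICAL = {"OUT", "HLT", "LGDT"}   # hypothetical privileged/sensitive instructions

def execute(stream):
    log = []
    for instr in stream:
        if instr in CRITICAL:
            log.append(f"trap:{instr}")    # VMM intercepts and emulates in software
        else:
            log.append(f"direct:{instr}")  # runs on the host CPU at native speed
    return log

print(execute(["MOV", "ADD", "OUT", "MOV", "HLT"]))
# ['direct:MOV', 'direct:ADD', 'trap:OUT', 'direct:MOV', 'trap:HLT']
```

Since typical workloads are dominated by noncritical instructions, most execution stays on the direct path, which is why full virtualization can approach native performance despite the cost of each trap.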

2.2 Binary Translation of Guest OS Requests Using a VMM


This approach was implemented by VMware and many other software companies. As shown in
Figure 3.6, VMware puts the VMM at Ring 0 and the guest OS at Ring 1. The VMM scans the
instruction stream and identifies the privileged, control- and behavior-sensitive instructions.
When these instructions are identified, they are trapped into the VMM, which emulates the
behavior of these instructions.
The performance of full virtualization may not be ideal, because it involves binary translation, which is rather time-consuming. At the time of this writing, the performance of full virtualization on the x86 architecture is typically 80 percent to 97 percent that of the host machine.
2.3 Host-Based Virtualization
An alternative VM architecture is to install a virtualization layer on top of the host OS. This host
OS is still responsible for managing the hardware. The guest OSes are installed and run on top of
the virtualization layer. Dedicated applications may run on the VMs. Certainly, some other applications can also run with the host OS directly. This host-based architecture has some distinct advantages, as enumerated next. First, the user can install this VM architecture without modifying the host OS. The virtualizing software can rely on the host OS to provide device drivers and other low-level services. This will simplify the VM design and ease its deployment.
Second, the host-based approach appeals to many host machine configurations. Compared to the
hypervisor/VMM architecture, the performance of the host-based architecture may also be low.
3. Para-Virtualization with Compiler Support
Para-virtualization needs to modify the guest operating systems. A para-virtualized VM
provides special APIs requiring substantial OS modifications in user applications. Performance
degradation is a critical issue of a virtualized system. No one wants to use a VM if it is much
slower than using a physical machine. The virtualization layer can be inserted at different positions in a machine software stack. However, para-virtualization attempts to reduce the virtualization overhead, and thus improve performance by modifying only the guest OS kernel.
Figure 3.7 illustrates the concept of a paravirtualized VM architecture. The guest operating
systems are para-virtualized. The OS is responsible for managing the hardware and the
privileged instructions to execute at Ring 0, while user-level applications run at Ring 3. The best
example of para-virtualization is the KVM to be described below.
3.1 Para-Virtualization Architecture
When the x86 processor is virtualized, a virtualization layer is inserted between the hardware and
the OS. According to the x86 ring definition, the virtualization layer should also be installed at
Ring 0. Different instructions at Ring 0 may cause some problems. In Figure 3.8, we show that
para-virtualization replaces nonvirtualizable instructions with hypercalls that communicate
directly with the hypervisor or VMM. However, when the guest OS kernel is modified for
virtualization, it can no longer run on the hardware directly.
Although para-virtualization reduces the overhead, it has incurred other problems. First, its
compatibility and portability may be in doubt, because it must support the unmodified OS as
well. Second, the cost of maintaining para-virtualized OSes is high, because they may require
deep OS kernel modifications. Finally, the performance advantage of para-virtualization varies
greatly due to workload variations. Compared with full virtualization, para-virtualization is
relatively easy and more practical. The main problem in full virtualization is its low performance
in binary translation. To speed up binary translation is difficult. Therefore, many virtualization
products employ the para-virtualization architecture. The popular Xen, KVM, and VMware ESX
are good examples.
3.2 KVM (Kernel-Based VM)
This is a Linux para-virtualization system—a part of the Linux version 2.6.20 kernel. Memory
management and scheduling activities are carried out by the existing Linux kernel. The KVM
does the rest, which makes it simpler than the hypervisor that controls the entire machine. KVM
is a hardware-assisted para-virtualization tool, which improves performance and supports
unmodified guest OSes such as Windows, Linux, Solaris, and other UNIX variants.
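On Linux, KVM's presence is visible to user space as the /dev/kvm device node, which exists when the kvm kernel module is loaded and the CPU's virtualization extensions (Intel VT-x or AMD-V) are enabled. A minimal check, assuming only that device path:

```python
# Check for KVM availability on a Linux host: the kvm kernel module
# exposes the /dev/kvm device node when hardware virtualization
# (Intel VT-x / AMD-V) is enabled and the module is loaded.
import os

def kvm_available(dev_path="/dev/kvm"):
    return os.path.exists(dev_path)

result = kvm_available()
print(result)   # depends on the host: True only when /dev/kvm exists
```

Management stacks such as libvirt perform essentially this probe (plus a permissions check on the device) before offering KVM-accelerated guests.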
3.3 Para-Virtualization with Compiler Support
Unlike the full virtualization architecture which intercepts and emulates privileged and sensitive
instructions at runtime, para-virtualization handles these instructions at compile time. The guest OS kernel is modified to replace the privileged and sensitive instructions with hypercalls to the hypervisor or VMM. Xen assumes such a para-virtualization architecture.
The guest OS running in a guest domain may run at Ring 1 instead of at Ring 0. This implies
that the guest OS may not be able to execute some privileged and sensitive instructions. The
privileged instructions are implemented by hypercalls to the hypervisor. After replacing the
instructions with hypercalls, the modified guest OS emulates the behavior of the original guest
OS. On a UNIX system, a system call involves an interrupt or service routine. The hypercalls apply a dedicated service routine in Xen.
Example 3.3 VMware ESX Server for Para-Virtualization
VMware pioneered the software market for virtualization. The company has developed
virtualization tools for desktop systems and servers as well as virtual infrastructure for large data
centers. ESX is a VMM or a hypervisor for bare-metal x86 symmetric multiprocessing (SMP)
servers. It accesses hardware resources such as I/O directly and has complete resource
management control. An ESX-enabled server consists of four components: a virtualization layer,
a resource manager, hardware interface components, and a service console, as shown in Figure
3.9. To improve performance, the ESX server employs a para-virtualization architecture in which
the VM kernel interacts directly with the hardware without involving the host OS.
The VMM layer virtualizes the physical hardware resources such as CPU, memory, network and
disk controllers, and human interface devices. Every VM has its own set of virtual hardware
resources. The resource manager allocates CPU, memory, disk, and network bandwidth and maps
them to the virtual hardware resource set of each VM created. Hardware interface components
are the device drivers and the
VMware ESX Server File System. The service console is responsible for booting the system,
initiating the execution of the VMM and resource manager, and relinquishing control to those
layers. It also facilitates the process for system administrators.

4.8 VIRTUALIZATION OF CPU, MEMORY & I/O DEVICES

To support virtualization, processors such as the x86 employ a special running mode and
instructions, known as hardware-assisted virtualization. In this way, the VMM and guest OS run
in different modes and all sensitive instructions of the guest OS and its applications are trapped
in the VMM. To save processor states, mode switching is completed by hardware. For the x86
architecture, Intel and AMD have proprietary technologies for hardware-assisted virtualization.

1. Hardware Support for Virtualization


Modern operating systems and processors permit multiple processes to run simultaneously. If
there is no protection mechanism in a processor, all instructions from different processes will
access the hardware directly and cause a system crash. Therefore, all processors have at least two
modes, user mode and supervisor mode, to ensure controlled access of critical hardware.
Instructions running in supervisor mode are called privileged instructions. Other instructions are
unprivileged instructions. In a virtualized environment, it is more difficult to make OSes and
applications run correctly because there are more layers in the machine stack. Example 3.4
discusses Intel’s hardware support approach.
At the time of this writing, many hardware virtualization products were available. The
VMware Workstation is a VM software suite for x86 and x86-64 computers. This software suite
allows users to set up multiple x86 and x86-64 virtual computers and to use one or more of these
VMs simultaneously with the host operating system. The VMware Workstation assumes the
host-based virtualization. Xen is a hypervisor for use in IA-32, x86-64, Itanium, and PowerPC
970 hosts. Actually, Xen modifies Linux as the lowest and most privileged layer, or a hypervisor.
One or more guest OS can run on top of the hypervisor. KVM (Kernel-based Virtual
Machine) is a Linux kernel virtualization infrastructure. KVM can support hardware-assisted
virtualization and paravirtualization by using the Intel VT-x or AMD-V and VirtIO framework,
respectively. The VirtIO framework includes a paravirtual Ethernet card, a disk I/O controller, a
balloon device for adjusting guest memory usage, and a VGA graphics interface using VMware
drivers.
2. CPU Virtualization
A VM is a duplicate of an existing computer system in which a majority of the VM instructions
are executed on the host processor in native mode. Thus, unprivileged instructions of VMs run
directly on the host machine for higher efficiency. Other critical instructions should be handled
carefully for correctness and stability. The critical instructions are divided into three
categories: privileged instructions, control-sensitive instructions, and behavior-sensitive
instructions. Privileged instructions execute in a privileged mode and will be trapped if executed
outside this mode. Control-sensitive instructions attempt to change the configuration of resources
used. Behavior-sensitive instructions have different behaviors depending on the configuration of
resources, including the load and store operations over the virtual memory.
A CPU architecture is virtualizable if it supports the ability to run the VM’s privileged and
unprivileged instructions in the CPU’s user mode while the VMM runs in supervisor mode.
When the privileged instructions, including control- and behavior-sensitive instructions, of a VM
are executed, they are trapped in the VMM. In this case, the VMM acts as a unified mediator for
hardware access from different VMs to guarantee the correctness and stability of the whole
system. However, not all CPU architectures are virtualizable. RISC CPU architectures can be
naturally virtualized because all control- and behavior-sensitive instructions are privileged
instructions. On the contrary, x86 CPU architectures are not primarily designed to support
virtualization. This is because about 10 sensitive instructions, such as SGDT and SMSW, are not
privileged instructions. When these instructions execute in virtualization, they cannot be trapped
in the VMM.
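The trap-and-emulate behavior described above can be illustrated with a toy simulation. The instruction names and the VMM's dispatch logic below are purely illustrative stand-ins, not real x86 semantics:

```python
# Toy trap-and-emulate sketch (illustrative instruction names, not real x86).
# Unprivileged instructions run "natively"; sensitive ones trap to the VMM,
# which emulates them against the VM's private state.

SENSITIVE = {"load_cr3", "hlt", "out"}   # stand-ins for privileged/sensitive ops

class VMM:
    def __init__(self):
        self.vm_state = {"cr3": None, "halted": False}

    def trap(self, instr, operand=None):
        # Emulate the sensitive instruction against per-VM state,
        # never touching real hardware directly.
        if instr == "load_cr3":
            self.vm_state["cr3"] = operand
        elif instr == "hlt":
            self.vm_state["halted"] = True
        return f"emulated {instr}"

def execute(vmm, instr, operand=None):
    if instr in SENSITIVE:
        return vmm.trap(instr, operand)     # control transfers to the VMM
    return f"native {instr}"                # runs directly on the host CPU

vmm = VMM()
print(execute(vmm, "add"))                  # native add
print(execute(vmm, "load_cr3", 0x1000))     # emulated load_cr3
print(vmm.vm_state["cr3"])                  # 4096
```

The x86 problem mentioned above is precisely that instructions such as SGDT behave like the "native" path here even though they are sensitive, so the VMM never gets the trap.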
On a native UNIX-like system, a system call triggers the 80h interrupt and passes control to
the OS kernel. The interrupt handler in the kernel is then invoked to process the system call. On a
para-virtualization system such as Xen, a system call in the guest OS first triggers
the 80h interrupt normally. Almost at the same time, the 82h interrupt in the hypervisor is
triggered. In this way, control is passed on to the hypervisor as well. When the hypervisor
completes its task for the guest OS system call, it passes control back to the guest OS kernel.
Certainly, the guest OS kernel may also invoke the hypercall while it’s running. Although
paravirtualization of a CPU lets unmodified applications run in the VM, it causes a small
performance penalty.
2.1 Hardware-Assisted CPU Virtualization
This technique attempts to simplify virtualization because full or paravirtualization is
complicated. Intel and AMD add an additional privilege mode level (often called Ring -1) to x86
processors. Therefore, operating systems can still run at Ring 0 and the hypervisor can run at
Ring -1. All the privileged and sensitive instructions are trapped in the
hypervisor automatically. This technique removes the difficulty of implementing binary
translation of full virtualization. It also lets the operating system run in VMs without
modification.
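On Linux, one can check whether the host CPU advertises these hardware-assist extensions by looking for the `vmx` (Intel VT-x) or `svm` (AMD-V) flags in `/proc/cpuinfo`. A minimal stdlib sketch, assuming a Linux host:

```python
# Check /proc/cpuinfo for hardware-assisted virtualization flags
# (vmx = Intel VT-x, svm = AMD-V). Linux-only; prints one status line.
def hw_virt_support(cpuinfo_path="/proc/cpuinfo"):
    try:
        with open(cpuinfo_path) as f:
            text = f.read()
    except OSError:
        return "unknown (no /proc/cpuinfo)"
    flags = set()
    for line in text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    if "vmx" in flags:
        return "Intel VT-x supported"
    if "svm" in flags:
        return "AMD-V supported"
    return "no hardware-assisted virtualization flags found"

print(hw_virt_support())
```

Note that the flag only says the processor has the feature; it may still be disabled in firmware, which is why tools such as KVM also check for the `/dev/kvm` device.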
3. Memory Virtualization
Virtual memory virtualization is similar to the virtual memory support provided by modern
operating systems. In a traditional execution environment, the operating system maintains
mappings of virtual memory to machine memory using page tables, which is a one-stage
mapping from virtual memory to machine memory. All modern x86 CPUs include a memory
management unit (MMU) and a translation lookaside buffer (TLB) to optimize virtual memory
performance. However, in a virtual execution environment, virtual memory virtualization
involves sharing the physical system memory in RAM and dynamically allocating it to
the physical memory of the VMs.
That means a two-stage mapping process should be maintained by the guest OS and the VMM,
respectively: virtual memory to physical memory and physical memory to machine memory.
Furthermore, MMU virtualization should be supported, which is transparent to the guest OS. The
guest OS continues to control the mapping of virtual addresses to the physical memory addresses
of VMs. But the guest OS cannot directly access the actual machine memory. The VMM is
responsible for mapping the guest physical memory to the actual machine memory. Figure 3.12
shows the two-level memory mapping procedure.
Since each page table of the guest OSes has a separate page table in the VMM corresponding
to it, the VMM page table is called the shadow page table. Nested page tables add another layer
of indirection to virtual memory. The MMU already handles virtual-to-physical translations as
defined by the OS. Then the physical memory addresses are translated to machine addresses
using another set of page tables defined by the hypervisor. Since modern operating systems
maintain a set of page tables for every process, the shadow page tables will get flooded.
Consequently, the performance overhead and cost of memory will be very high.
VMware uses shadow page tables to perform virtual-memory-to-machine-memory address
translation. Processors use TLB hardware to map the virtual memory directly to the machine
memory to avoid the two levels of translation on every access. When the guest OS changes the
virtual memory to a physical memory mapping, the VMM updates the shadow page tables to
enable a direct lookup. The AMD Barcelona processor has featured hardware-assisted memory
virtualization since 2007. It provides hardware assistance to the two-stage address translation in a
virtual execution environment by using a technology called nested paging.
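The two-stage mapping can be made concrete with toy page tables. All table contents below are hypothetical; the point is that a shadow page table collapses the guest and VMM stages into the single virtual-to-machine mapping that the TLB can cache:

```python
PAGE = 4096  # 4 KiB pages

# Stage 1: guest OS page table (guest virtual page -> guest "physical" page)
guest_pt   = {0: 5, 1: 7}
# Stage 2: VMM page table (guest physical page -> machine page)
machine_pt = {5: 42, 7: 13}

def translate(gva):
    """Guest virtual address -> machine address via both stages."""
    vpn, off = divmod(gva, PAGE)
    gpn = guest_pt[vpn]        # stage 1: maintained by the guest OS
    mpn = machine_pt[gpn]      # stage 2: maintained by the VMM
    return mpn * PAGE + off

# The shadow page table collapses the two stages into one direct lookup,
# which is what enables virtual-to-machine translation in a single step.
shadow_pt = {vpn: machine_pt[gpn] for vpn, gpn in guest_pt.items()}

assert translate(0x1008) == shadow_pt[1] * PAGE + 0x8
print(hex(translate(0x1008)))  # -> 0xd008
```

When the guest OS changes an entry in `guest_pt`, the VMM must rebuild the affected `shadow_pt` entries, which is the maintenance overhead that hardware nested paging removes.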
4. I/O Virtualization
I/O virtualization involves managing the routing of I/O requests between virtual devices and the
shared physical hardware. At the time of this writing, there are three ways to implement I/O
virtualization: full device emulation, para-virtualization, and direct I/O. Full device emulation is
the first approach for I/O virtualization. Generally, this approach emulates well-known, real-
world devices.
All the functions of a device or bus infrastructure, such as device enumeration, identification,
interrupts, and DMA, are replicated in software. This software is located in the VMM and acts as
a virtual device. The I/O access requests of the guest OS are trapped in the VMM which interacts
with the I/O devices. The full device emulation approach is shown in Figure 3.14.
A single hardware device can be shared by multiple VMs that run concurrently. However,
software emulation runs much slower than the hardware it emulates [10,15]. The para-
virtualization method of I/O virtualization is typically used in Xen. It is also known as the split
driver model consisting of a frontend driver and a backend driver. The frontend driver is running
in Domain U and the backend driver is running in Domain 0. They interact with each other via a
block of shared memory. The frontend driver manages the I/O requests of the guest OSes and the
backend driver is responsible for managing the real I/O devices and multiplexing the I/O data of
different VMs. Although para-I/O-virtualization achieves better device performance than full
device emulation, it comes with a higher CPU overhead.
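The split driver model can be sketched with two queues standing in for the shared-memory ring between Domain U's frontend and Domain 0's backend. All class and field names here are hypothetical, and real Xen rings use grant tables rather than Python queues:

```python
from collections import deque

class SharedRing:
    """Stand-in for the shared memory block between frontend and backend."""
    def __init__(self):
        self.requests = deque()   # frontend -> backend
        self.responses = deque()  # backend -> frontend

class FrontendDriver:
    """Runs in Domain U: forwards the guest's I/O requests."""
    def __init__(self, ring, vm_id):
        self.ring, self.vm_id = ring, vm_id

    def read_block(self, block_no):
        self.ring.requests.append((self.vm_id, "read", block_no))

class BackendDriver:
    """Runs in Domain 0: multiplexes requests onto the real device."""
    def __init__(self, ring, disk):
        self.ring, self.disk = ring, disk

    def service(self):
        while self.ring.requests:
            vm_id, op, block_no = self.ring.requests.popleft()
            if op == "read":
                data = self.disk.get(block_no, b"\x00")
                self.ring.responses.append((vm_id, data))

ring = SharedRing()
fe = FrontendDriver(ring, vm_id=1)
be = BackendDriver(ring, disk={7: b"hello"})
fe.read_block(7)
be.service()
print(ring.responses.popleft())  # -> (1, b'hello')
```

Tagging each request with a VM identifier is what lets a single backend multiplex the I/O data of different VMs onto one physical device.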
Direct I/O virtualization lets the VM access devices directly. It can achieve close-to-native
performance without high CPU costs. However, current direct I/O virtualization implementations
focus on networking for mainframes. There are a lot of challenges for commodity hardware
devices. For example, when a physical device is reclaimed (required by workload migration) for
later reassignment, it may have been set to an arbitrary state (e.g., DMA to some arbitrary
memory locations) that can function incorrectly or even crash the whole system. Since software-
based I/O virtualization requires a very high overhead of device emulation, hardware-assisted
I/O virtualization is critical. Intel VT-d supports the remapping of I/O DMA transfers and
device-generated interrupts. The architecture of VT-d provides the flexibility to support multiple
usage models that may run unmodified, special-purpose, or “virtualization-aware” guest OSes.
Another way to help I/O virtualization is via self-virtualized I/O (SV-IO) [47]. The key idea
of SV-IO is to harness the rich resources of a multicore processor. All tasks associated with
virtualizing an I/O device are encapsulated in SV-IO. It provides virtual devices and an
associated access API to VMs and a management API to the VMM. SV-IO defines one virtual
interface (VIF) for every kind of virtualized I/O device, such as virtual network interfaces,
virtual block devices (disk), virtual camera devices, and others. The guest OS interacts with the
VIFs via VIF device drivers. Each VIF consists of two message queues. One is for outgoing
messages to the devices and the other is for incoming messages from the devices. In addition,
each VIF has a unique ID for identifying it in SV-IO.
5. Virtualization in Multi-Core Processors
Virtualizing a multi-core processor is relatively more complicated than virtualizing a uni-core
processor. Though multicore processors are claimed to have higher performance by integrating
multiple processor cores in a single chip, multicore virtualization has raised some new
challenges to computer architects, compiler constructors, system designers, and application
programmers. There are mainly two difficulties: Application programs must be parallelized to
use all cores fully, and software must explicitly assign tasks to the cores, which is a very
complex problem.
Concerning the first challenge, new programming models, languages, and libraries are needed
to make parallel programming easier. The second challenge has spawned research involving
scheduling algorithms and resource management policies. Yet these efforts cannot balance well
among performance, complexity, and other issues. What is worse, as technology scales, a new
challenge called dynamic heterogeneity is emerging to mix the fat CPU core and thin GPU cores
on the same chip, which further complicates the multi-core or many-core resource management.
The dynamic heterogeneity of hardware infrastructure mainly comes from less reliable
transistors and increased complexity in using the transistors [33,66].
5.1 Physical versus Virtual Processor Cores
Wells, et al. [74] proposed a multicore virtualization method to allow hardware designers to get
an abstraction of the low-level details of the processor cores. This technique alleviates the burden
and inefficiency of managing hardware resources by software. It is located under the ISA and
remains unmodified by the operating system or VMM (hypervisor). Figure 3.16 illustrates the
technique of a software-visible VCPU moving from one core to another and temporarily
suspending execution of a VCPU when there are no appropriate cores on which it can run.
5.2 Virtual Hierarchy
The emerging many-core chip multiprocessors (CMPs) provide a new computing landscape.
Instead of supporting time-sharing jobs on one or a few cores, we can use the abundant cores in a
space-sharing manner, where single-threaded or multithreaded jobs are simultaneously assigned to
separate groups of cores for long time intervals. This idea was originally suggested by Marty and
Hill [39]. To optimize for space-shared workloads, they propose using virtual hierarchies to
overlay a coherence and caching hierarchy onto a physical processor. Unlike a fixed physical
hierarchy, a virtual hierarchy can adapt to fit how the work is space shared for improved
performance and performance isolation.
Today’s many-core CMPs use a physical hierarchy of two or more cache levels that statically
determine the cache allocation and mapping. A virtual hierarchy is a cache hierarchy that can
adapt to fit the workload or mix of workloads [39]. The hierarchy’s first level locates data blocks
close to the cores needing them for faster access, establishes a shared-cache domain, and
establishes a point of coherence for faster communication. When a miss leaves a tile, it first
attempts to locate the block (or sharers) within the first level. The first level can also provide
isolation between independent workloads. A miss at the L1 cache can invoke the L2 access.
The idea is illustrated in Figure 3.17(a). Space sharing is applied to assign three workloads to
three clusters of virtual cores: namely VM0 and VM3 for database workload, VM1 and VM2 for
web server workload, and VM4–VM7 for middleware workload. The basic assumption is that
each workload runs in its own VM. However, space sharing applies equally within a single
operating system. Statically distributing the directory among tiles can do much better, provided
operating systems or hypervisors carefully map virtual pages to physical frames. Marty and Hill
suggested a two-level virtual coherence and caching hierarchy that harmonizes with the
assignment of tiles to the virtual clusters of VMs.

Figure 3.17(b) illustrates a logical view of such a virtual cluster hierarchy in two levels. Each
VM operates in an isolated fashion at the first level. This will minimize both miss access time and
performance interference with other workloads or VMs. Moreover, the shared resources of cache
capacity, interconnect links, and miss handling are mostly isolated between VMs. The second
level maintains a globally shared memory. This facilitates dynamically repartitioning resources
without costly cache flushes. Furthermore, maintaining globally shared memory minimizes
changes to existing system software and allows virtualization features such as content-based
page sharing. A virtual hierarchy adapts to space-shared workloads like multiprogramming and
server consolidation. Figure 3.17 shows a case study focused on consolidated server workloads
in a tiled architecture. This many-core mapping scheme can also optimize for space-shared
multiprogrammed workloads in a single-OS environment.
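The two-level lookup can be sketched as a toy simulation: a miss first searches only the tiles of the VM's own cluster (level 1), then falls back to the globally shared second level. The cluster assignments and cached blocks below are hypothetical:

```python
# Toy two-level virtual hierarchy: level 1 is confined to the VM's own
# cluster of tiles (fast hits, isolation); level 2 is globally shared.

clusters = {                     # VM -> tiles assigned to its cluster
    "VM_db":  {"t0", "t1"},
    "VM_web": {"t2", "t3"},
}
tile_cache = {"t0": {"A": 1}, "t2": {"B": 2}}   # per-tile cached blocks
global_memory = {"A": 1, "B": 2, "C": 3}        # backs all clusters

def access(vm, block):
    # Level 1: search only this VM's cluster, preserving isolation.
    for tile in clusters[vm]:
        if block in tile_cache.get(tile, {}):
            return tile_cache[tile][block], "level-1 hit"
    # Level 2: fall back to globally shared memory.
    return global_memory[block], "level-2 (global)"

print(access("VM_db", "A"))   # block cached inside the db cluster
print(access("VM_db", "B"))   # cached only in the web cluster -> global level
```

Note how `VM_db` cannot hit on block "B" even though a tile in `VM_web`'s cluster caches it; the miss goes to the shared second level instead, which is the isolation property described above.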

4.9 DESKTOP VIRTUALIZATION

Desktop virtualization is a technology that lets users simulate a workstation load to access a
desktop from a connected device, remotely or locally. This separates the desktop environment
and its applications from the physical client device used to access it. Desktop virtualization is a
key element of digital workspaces and depends on application virtualization.
How does desktop virtualization work?
Desktop virtualization can be achieved in a variety of ways, but the most important two types of
desktop virtualization are based on whether the operating system instance is local or remote.
Local Desktop Virtualization
Local desktop virtualization means the operating system runs on a client device using
hardware virtualization, and all processing and workloads occur on local hardware. This
type of desktop virtualization works well when users do not need a continuous network
connection and can meet application computing requirements with local system resources.
However, because this requires processing to be done locally you cannot use local desktop
virtualization to share VMs or resources across a network to thin clients or mobile devices.
Remote Desktop Virtualization
Remote desktop virtualization is a common use of virtualization that operates in a
client/server computing environment. This allows users to run operating systems and
applications from a server inside a data center while all user interactions take place on a
client device. This client device could be a laptop, thin client device, or a smartphone. The
result is IT departments have more centralized control over applications and desktops, and
can maximize the organization’s investment in IT hardware through remote access to
shared computing resources.
What is virtual desktop infrastructure?
A popular type of desktop virtualization is virtual desktop infrastructure (VDI). VDI is a variant
of the client-server model of desktop virtualization which uses host-based VMs to deliver
persistent and nonpersistent virtual desktops to all kinds of connected devices. With a persistent
virtual desktop, each user has a unique desktop image that they can customize with apps and
data, knowing it will be saved for future use. A nonpersistent virtual desktop infrastructure
allows users to access a virtual desktop from an identical pool when they need it; once the user
logs out of a nonpersistent VDI, it reverts to its unaltered state. Some of the advantages of virtual
desktop infrastructure are improved security and centralized desktop management across an
organization.
What are the benefits of desktop virtualization?
1. Resource Management:
Desktop virtualization helps IT departments get the most out of their hardware investments by
consolidating most of their computing in a data center. Desktop virtualization then allows
organizations to issue lower-cost computers and devices to end users because most of the
intensive computing work takes place in the data center. By minimizing how much computing is
needed at the endpoint devices for end users, IT departments can save money by buying less
costly machines.
2. Remote work:
Desktop virtualization helps IT admins support remote workers by giving IT central control over
how desktops are virtually deployed across an organization’s devices. Rather than manually
setting up a new desktop for each user, desktop virtualization allows IT to simply deploy a
ready-to-go virtual desktop to that user’s device. Now the user can interact with the operating
system and applications on that desktop from any location and the employee experience will be
the same as if they were working locally. Once the user is finished using this virtual desktop,
they can log off and return that desktop image to the shared pool.
3. Security:
Desktop virtualization software provides IT admins centralized security control over which users
can access which data and which applications. If a user’s permissions change because they leave
the company, desktop virtualization makes it easy for IT to quickly remove that user’s access to
their persistent virtual desktop and all its data—instead of having to manually uninstall
everything from that user’s devices. And because all company data lives inside the data center
rather than on each machine, a lost or stolen device does not pose the same data risk. If someone
steals a laptop that uses desktop virtualization, there is no company data on the actual machine and
hence less risk of a breach.

4.10 SERVER VIRTUALIZATION


Server virtualization is the division of a physical server into several virtual servers, done mainly to
improve the utilization of server resources. In other words, it is the masking of server resources,
including the number and identity of processors, physical servers, and the operating system, from
server users. This division of one physical server into multiple isolated virtual servers is
done by a server administrator using software. The virtual environments are sometimes called
virtual private servers.

In this process, the server resources are kept hidden from the user. This partitioning of a physical
server into several virtual environments results in the dedication of one server to performing a single
application or task.

Usage of Server Virtualization

This technique is mainly used in web servers, where it reduces the cost of web-hosting services.
Instead of having a separate system for each web server, multiple virtual servers can run on the
same system/computer.
The primary uses of server virtualization are:

 To centralize the server administration


 Improve the availability of server
 Helps in disaster recovery
 Ease in development & testing
 Make efficient use of server resources.

Approaches To Virtualization:

For Server Virtualization, there are three popular approaches.


These are:

 Virtual Machine model


 Para-virtual Machine model
 Operating System (OS) layer Virtualization

Server virtualization can be viewed as part of the overall virtualization trend in IT companies
that includes network virtualization, storage virtualization, and workload management. This trend
brings development in autonomic computing. Server virtualization can also be used to eliminate
server sprawl (a situation in which many under-utilized servers take up more space or consume
more resources than can be justified by their workload) and to use server resources efficiently.

1. Virtual Machine model: This model is based on the host-guest paradigm, where each guest runs
on a virtual replica of the hardware layer. This technique of virtualization allows the guest OS to
run without modification. However, it requires real computing resources from the host, so a
hypervisor or VMM is required to coordinate instructions to the CPU.
2. Para-Virtual Machine model: This model is also based on the host-guest paradigm and uses a
virtual machine monitor. In this model, the VMM modifies the guest operating system's code, a
process called 'porting'. Like the virtual machine model, the para-virtual machine model is
capable of executing multiple operating systems. The para-virtual model is used by both Xen and
UML.
3. Operating System Layer Virtualization: Virtualization at the OS level works differently and is
not based on the host-guest paradigm. In this model, the host runs a single operating system
kernel as its core and exports its functionality to each of the guests.
The guests must use the same operating system as the host. This distributed architecture
eliminates system calls between layers and hence reduces CPU overhead.
Each partition must also remain strictly isolated from its neighbors, so that any
failure or security breach of one partition cannot affect the other partitions.

Advantages of Server Virtualization

 Cost Reduction: Server virtualization reduces cost because less hardware is required.
 Independent Restart: Each server can be rebooted independently and that reboot won't
affect the working of other virtual servers.

4.11 GOOGLE APP ENGINE

Google App Engine (GAE) is a service for developing and hosting Web applications in Google's
data centers, belonging to the platform as a service (PaaS) category of cloud computing. Web
applications hosted on GAE are sandboxed and run across multiple servers for redundancy,
allowing resources to scale according to the traffic requirements of the moment. App
Engine automatically allocates additional resources to the servers to accommodate increased
load.
Google App Engine is Google's platform as a service offering that allows developers and
businesses to build and run applications using Google's advanced infrastructure. These
applications are required to be written in one of a few supported languages, namely: Java,
Python, PHP and Go. It also requires the use of Google query language and that the database
used is Google Big Table. Applications must abide by these standards, so applications either
must be developed with GAE in mind or else modified to meet the requirements.

GAE is a platform, so it provides all of the required elements to run and host Web applications,
be it on mobile or Web. Without this all-in-one feature, developers would have to source their own
servers, database software and the APIs that would make all of them work properly together, not
to mention the entire configuration that must be done. GAE takes this burden off the developers
so they can concentrate on the app front end and functionality, driving better user experience.

Advantages of GAE include:

 Readily available servers with no configuration requirement


 Scaling all the way down to "free" when resource usage is minimal
 Automated cloud computing tools
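A minimal App Engine style deployment pairs a small web application with an `app.yaml` descriptor. The sketch below uses only the Python standard library: the modern Python standard environment can serve a plain WSGI callable, conventionally named `app` in `main.py`. The runtime label and descriptor shown in the comment are assumptions to verify against current GAE documentation.

```python
# main.py -- a minimal WSGI application. GAE's Python standard environment
# can serve a WSGI callable named `app` (assumed convention; check GAE docs).
def app(environ, start_response):
    body = b"Hello from App Engine"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# app.yaml (deployment descriptor, shown as a comment for reference):
#   runtime: python312      # assumed runtime label; verify before deploying
#   handlers:
#   - url: /.*
#     script: auto
```

Deploying is then a single `gcloud app deploy`; GAE provisions the servers, routing, and scaling described above without any server configuration by the developer.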

4.12 AMAZON AWS

In 2006, Amazon Web Services (AWS) began offering IT infrastructure services to businesses in
the form of web services -- now commonly known as cloud computing. One of the key benefits
of cloud computing is the opportunity to replace up-front capital infrastructure expenses with
low variable costs that scale with your business. With the Cloud, businesses no longer need to
plan for and procure servers and other IT infrastructure weeks or months in advance. Instead,
they can instantly spin up hundreds or thousands of servers in minutes and deliver results faster.

Today, Amazon Web Services provides a highly reliable, scalable, low-cost infrastructure
platform in the cloud that powers hundreds of thousands of businesses in 190 countries around
the world. With data center locations in the U.S., Europe, Brazil, Singapore, Japan, and
Australia, customers across all industries are taking advantage of the following benefits:

Low Cost
AWS offers low, pay-as-you-go pricing with no up-front expenses or long-term commitments.
We are able to build and manage a global infrastructure at scale, and pass the cost saving benefits
onto you in the form of lower prices. With the efficiencies of our scale and expertise, we have
been able to lower our prices on 15 different occasions over the past four years. Visit the
Economics Center to learn more.

Agility and Instant Elasticity


AWS provides a massive global cloud infrastructure that allows you to quickly innovate,
experiment and iterate. Instead of waiting weeks or months for hardware, you can instantly
deploy new applications, instantly scale up as your workload grows, and instantly scale down
based on demand. Whether you need one virtual server or thousands, whether you need them for
a few hours or 24/7, you still only pay for what you use.
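The pay-for-what-you-use model amounts to simple metering arithmetic over instance-hours. The hourly rate below is hypothetical, not an actual AWS price:

```python
# Pay-as-you-go metering sketch. RATE is a hypothetical per-instance-hour
# price, not a real AWS rate.
RATE = 0.10  # $/instance-hour (hypothetical)

def monthly_cost(usage):
    """usage: list of (instance_count, hours) bursts during the month."""
    return sum(n * h * RATE for n, h in usage)

# 100 servers for a 4-hour batch job, plus 2 servers running 24/7 for 30 days:
print(round(monthly_cost([(100, 4), (2, 24 * 30)]), 2))  # -> 184.0
```

The example shows the elasticity point above: a burst of 100 servers for four hours costs less than two always-on servers, because billing follows actual usage rather than provisioned capacity.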

Open and Flexible


AWS is a language and operating system agnostic platform. You choose the development
platform or programming model that makes the most sense for your business. You can choose
which services you use, one or several, and choose how you use them. This flexibility allows you
to focus on innovation, not infrastructure.

Secure
AWS is a secure, durable technology platform with industry-recognized certifications and audits:
PCI DSS Level 1, ISO 27001, FISMA Moderate, FedRAMP, HIPAA, and SOC 1 (formerly
referred to as SAS 70 and/or SSAE 16) and SOC 2 audit reports. Our services and data centers
have multiple layers of operational and physical security to ensure the integrity and safety of
your data.

Solutions
The AWS cloud computing platform provides the flexibility to launch your application
regardless of your use case or industry.

Learn more about popular solutions customers are running on AWS:

Application Hosting
Use reliable, on-demand infrastructure to power your applications, from hosted internal
applications to SaaS offerings.

Websites
Satisfy your dynamic web hosting needs with AWS’s scalable infrastructure platform.

Backup and Storage


Store data and build dependable backup solutions using AWS’s inexpensive data storage
services.

Enterprise IT
Host internal- or external-facing IT applications in AWS's secure environment.

Content Delivery
Quickly and easily distribute content to end users worldwide, with low costs and high data
transfer speeds.

Databases
Take advantage of a variety of scalable database solutions, from hosted enterprise database
software or non-relational database solutions.

4.13 FEDERATION IN THE CLOUD

Cloud Federation, also known as Federated Cloud, is the deployment and management of
several external and internal cloud computing services to match business needs. It is a multi-
national cloud system that integrates private, community, and public clouds into scalable
computing platforms. Federated cloud is created by connecting the cloud environment of
different cloud providers using a common standard.

Federated Cloud

The architecture of Federated Cloud:

The architecture of Federated Cloud consists of three basic components:


1. Cloud Exchange
The Cloud Exchange acts as a mediator between cloud coordinator and cloud broker. The
demands of the cloud broker are mapped by the cloud exchange to the available services
provided by the cloud coordinator. The cloud exchange keeps a record of the current cost,
demand patterns, and available cloud providers, and this information is
periodically updated by the cloud coordinator.
2. Cloud Coordinator
The cloud coordinator assigns the resources of the cloud to the remote users based on the
quality of service they demand and the credits they have in the cloud bank. The cloud
enterprises and their membership are managed by the cloud controller.
3. Cloud Broker
The cloud broker interacts with the cloud coordinator, analyzes the Service-level agreement
and the resources offered by several cloud providers in cloud exchange. Cloud broker finalizes
the most suitable deal for their client.
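The interaction of these three components can be sketched as: providers publish offers to the exchange, and the broker selects the cheapest offer satisfying the client's service-level requirements. All provider names, prices, and SLA fields below are hypothetical:

```python
# Federated-cloud brokering sketch: coordinators publish offers to the
# exchange; the broker picks the cheapest offer meeting the client's SLA.
# Provider names, prices, and SLA fields are hypothetical.

class CloudExchange:
    def __init__(self):
        self.offers = []                 # published by cloud coordinators

    def publish(self, provider, price_per_hour, uptime):
        self.offers.append({"provider": provider,
                            "price": price_per_hour,
                            "uptime": uptime})

class CloudBroker:
    def __init__(self, exchange):
        self.exchange = exchange

    def best_deal(self, min_uptime):
        ok = [o for o in self.exchange.offers if o["uptime"] >= min_uptime]
        return min(ok, key=lambda o: o["price"]) if ok else None

ex = CloudExchange()
ex.publish("provider-A", 0.12, uptime=99.9)
ex.publish("provider-B", 0.09, uptime=99.0)
ex.publish("provider-C", 0.15, uptime=99.99)

broker = CloudBroker(ex)
print(broker.best_deal(min_uptime=99.9)["provider"])  # -> provider-A
```

Provider B is cheapest overall but fails the 99.9% uptime requirement, so the broker finalizes the deal with provider A, mirroring how the broker weighs both SLA and price in a real federation.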

Properties of Federated Cloud:

1. In the federated cloud, the users can interact with the architecture either centrally or in a
decentralized manner. In centralized interaction, the user interacts with a broker to mediate
between them and the organization. Decentralized interaction permits the user to interact
directly with the clouds in the federation.
2. Federated cloud can be practiced in various niches, both commercial and non-
commercial.
3. The visibility of a federated cloud assists the user to interpret the organization of
several clouds in the federated environment.
4. Federated cloud can be monitored in two ways. MaaS (Monitoring as a Service)
provides information that aids in tracking contracted services to the user. Global monitoring
aids in maintaining the federated cloud.
5. The providers who participate in the federation publish their offers to a central entity.
The user interacts with this central entity to verify the prices and propose an offer.
6. The marketing objects like infrastructure, software, and platform have to pass through
federation when consumed in the federated cloud.

Federated Cloud Architecture

Benefits of Federated Cloud:

1. It minimizes the consumption of energy.


2. It increases reliability.
3. It minimizes the time and cost of providers due to dynamic scalability.
4. It connects various cloud service providers globally. The providers may buy and sell
services on demand.
5. It provides easy scaling up of resources.

Challenges in Federated Cloud:

1. In cloud federation, it is common to have more than one provider for processing the
incoming demands. In such cases, there must be a scheme needed to distribute the incoming
demands equally among the cloud service providers.
2. The increasing requests in cloud federation have resulted in more heterogeneous
infrastructure, making interoperability an area of concern. It becomes a challenge for cloud
users to select relevant cloud service providers and therefore, it ties them to a particular
cloud service provider.
3. A federated cloud means constructing a seamless cloud environment that can interact
with people, different devices, several application interfaces, and other entities.

Federated Cloud technologies:

The technologies that aid the cloud federation and cloud services are:
1. OpenNebula
It is a cloud computing platform for managing heterogeneous distributed data center
infrastructures. It supports interoperability, leveraging existing information
technology assets, protecting existing investments, and exposing application programming
interfaces (APIs).
2. Aneka coordinator
The Aneka coordinator is a proposition of the Aneka services and Aneka peer components
(network architectures) which give the cloud ability and performance to interact with other
cloud services.
3. Eucalyptus
Eucalyptus pools computational, storage, and network resources that can be
scaled up or down as application workloads change.
It is an open-source framework that provides storage, network, and other
computational resources for building a cloud environment.
