Unit IV Cloud Enabling Technologies
Cloud-enabling technology
Cloud-enabling technology is the use of computing resources that are delivered to customers
with the help of the internet. Cloud-computing technologies are proliferating across various
sectors, such as energy and power, oil and gas, buildings and construction, transport,
communication, etc.
SOA (Service-Oriented Architecture) is built on computer engineering approaches that offer an
architectural advancement toward enterprise systems. It describes a standard method for
requesting services from distributed components and then managing the results. The service
provider and the service consumer are the two major roles within SOA. The primary focus of
this service-oriented approach is on the characteristics of the service interface and on
predictable service behavior. "Web services" refers to a set of industry standards collectively
labeled as one. SOA provides a translation and management layer within the cloud architecture
that removes the barrier preventing cloud clients from obtaining desired services. Clients and
components written with SOA can communicate with each other over multiple networking and
messaging protocols. SOA provides access to reusable Web services over a TCP/IP network,
which makes it an important topic for cloud computing going forward.
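To make the idea concrete, here is a minimal sketch (in Python, using the requests library) of a consumer invoking a reusable web service over TCP/IP. The endpoint URL and payload shape are hypothetical, not part of any real SOA standard:

# A minimal sketch of language-neutral service invocation over HTTP,
# assuming a hypothetical "quote" service at services.example.com. Any
# client language could issue the same request, which is the point of SOA.
import requests

def invoke_service(symbol: str) -> dict:
    # The consumer depends only on the service interface (URL + payload
    # shape), not on the language the provider used to implement it.
    response = requests.post(
        "https://services.example.com/quote",  # hypothetical endpoint
        json={"symbol": symbol},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(invoke_service("ACME"))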
Benefits of SOA
From both an engineering and an enterprise point of view, SOA offers several benefits. These are:
Language-Neutral Integration: Regardless of the development language used, the system
can offer and invoke services through a common mechanism. Programming-language
neutrality is one of the key benefits of SOA's integration approach.
Component Reuse: Once an organization has built an application component and offered it
as a service, the rest of the organization can reuse that service.
Organizational Agility: SOA defines building blocks of capabilities provided by software,
offering services that meet organizational requirements; these blocks can be
recombined and integrated rapidly.
Leveraging Existing Systems: One of the major uses of SOA is to classify
elements or functions of existing applications and make them available to the organization or
enterprise.
SOA Architecture
SOA architecture is viewed as five horizontal layers. These are described below:
Consumer Interface Layer: GUI-based apps for end users accessing the applications.
Business Process Layer: business use cases in terms of the application.
Services Layer: services defined for the whole enterprise, held in the service inventory.
Service Component Layer: the components used to build the services, such as functional and
technical libraries.
Operational Systems Layer: contains the data model.
SOA Governance
The SOA protocol stack shows each protocol and the relationships among them. SOA
components are often programmed to comply with SCA (Service Component Architecture), a
specification that has broad but not universal industry support. These components can be
written in BPEL (Business Process Execution Language), Java, C#, XML, etc., and the approach
also applies to C++, FORTRAN, or modern multi-purpose languages such as Python, PHP, or
Ruby. In this way, SOA has extended the life of many long-standing applications.
Security in SOA
With the vast use of cloud technology and its on-demand applications, there is a need for well-
defined security policies and access control. As these issues are addressed, the success of the
SOA architecture will grow. Actions can be taken to ensure security and lessen the risks when
dealing with an SOE (Service-Oriented Environment): policies can be defined that influence the
patterns of development and the way services are used. Moreover, the system must be set up to
exploit the advantages of the public cloud with resilience. Users must adopt safe practices and
carefully evaluate contract clauses in these respects.
Elements of SOA
Figure: Elements of SOA (a diagram showing the different elements of SOA and their subparts).
Though SOA achieved considerable success in the past, the introduction of cloud technology
has renewed its value.
Cloud computing is a style of computing in which virtualised and standard resources, software
and data are provided as a service over the Internet.
Consumers and businesses can use the cloud to store data and applications and can interact with
the Cloud using mobiles, desktop computers, laptops etc. via the Internet from anywhere and at
any time.
The technology of Cloud computing entails the convergence of Grid and cluster computing,
virtualisation, Web services and Service Oriented Architecture (SOA) - it offers the potential to
set IT free from the costs and complexity of its typical physical infrastructure, allowing concepts
such as Utility Computing to become meaningful.
Key players include: IBM, HP, Google, Microsoft, Amazon Web Services, Salesforce.com,
NetSuite, VMware.
Virtualization Concept
Creating a virtual machine on top of an existing operating system and hardware is referred to as
hardware virtualization. Virtual machines provide an environment that is logically separated
from the underlying hardware.
The machine on which the virtual machine is created is known as the host machine, and the
virtual machine is referred to as the guest machine. The virtual machine is managed by software
or firmware known as a hypervisor.
Hypervisor
The hypervisor is a firmware or low-level program that acts as a Virtual Machine Manager.
There are two types of hypervisor:
A Type 1 hypervisor executes on a bare system. LynxSecure, RTS Hypervisor, Oracle VM, Sun
xVM Server, and VirtualLogic VLX are examples of Type 1 hypervisors.
A Type 1 hypervisor does not have a host operating system because it is installed directly on
the bare system.
A Type 2 hypervisor is a software interface that emulates the devices with which a system
normally interacts. KVM, Microsoft Hyper-V, VMware Fusion, Virtual Server
2005 R2, Windows Virtual PC, and VMware Workstation 6.0 are examples of Type 2
hypervisors.
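As an illustration, the sketch below queries a hypervisor programmatically. It assumes a Linux host running KVM/QEMU with the libvirt-python bindings installed; on other setups the connection URI would differ:

# A minimal sketch of listing guest machines on a hypervisor via libvirt.
import libvirt

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
try:
    for dom in conn.listAllDomains():
        state, _ = dom.state()
        running = (state == libvirt.VIR_DOMAIN_RUNNING)
        print(f"guest={dom.name()} running={running}")
finally:
    conn.close()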
Types of Hardware Virtualization
Full Virtualization
Emulation Virtualization
Paravirtualization
Full Virtualization
In full virtualization, the underlying hardware is completely simulated. Guest software does
not require any modification to run.
Emulation Virtualization
In Emulation, the virtual machine simulates the hardware and hence becomes independent of it.
In this, the guest operating system does not require modification.
Paravirtualization
In paravirtualization, the hardware is not simulated; instead, the guest software runs in its own
isolated domain.
Emulation Cloud is an open application development environment that helps customers and
third-party developers create, test, and fine-tune customized applications in a completely
virtual environment.
With the web-scale traffic demands of fast-growing cloud-based services, content
distribution, and new IT services emerging from virtualization, requirements for flexibility
and programmability are growing like never before.
Ciena offers a rich toolset in the Emulation Cloud so developers and IT teams can
simplify integration activities.
Types of Virtualization:
1. Hardware Virtualization.
2. Operating system Virtualization.
3. Server Virtualization.
4. Storage Virtualization.
1) Hardware Virtualization:
When the virtual machine software or virtual machine manager (VMM) is directly installed on
the hardware system, it is known as hardware virtualization.
The main job of the hypervisor is to control and monitor the processor, memory, and other
hardware resources.
After virtualization of the hardware system, we can install different operating systems on it and
run different applications on those OSes.
Usage:
Hardware virtualization is mainly done for the server platforms, because controlling virtual
machines is much easier than controlling a physical server.
2) Operating system Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed on the host
operating system instead of directly on the hardware system, it is known as operating system
virtualization.
Usage:
Operating system virtualization is mainly used for testing applications on different OS
platforms.
3) Server Virtualization:
When the virtual machine software or virtual machine manager (VMM) is directly installed on
the server system, it is known as server virtualization.
Usage:
Server virtualization is done because a single physical server can be divided into multiple servers
on the demand basis and for balancing the load.
4) Storage Virtualization:
Storage virtualization is the process of grouping the physical storage from multiple network
storage devices so that it looks like a single storage device.
Usage:
Storage virtualization is mainly used for back-up and recovery purposes.
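The mapping idea behind storage virtualization can be sketched in a few lines of Python. The device names and capacities below are purely illustrative:

# A toy sketch of the storage-virtualization idea: several physical
# devices are grouped behind one logical address space.
class StoragePool:
    def __init__(self, devices):
        # devices: list of (name, capacity_in_blocks)
        self.devices = devices

    def locate(self, logical_block: int):
        """Map a logical block number to (device, physical block)."""
        offset = logical_block
        for name, capacity in self.devices:
            if offset < capacity:
                return name, offset
            offset -= capacity
        raise ValueError("logical block out of range")

pool = StoragePool([("disk-a", 1000), ("disk-b", 2000)])
print(pool.locate(1500))   # -> ('disk-b', 500): one address space, two disks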
Virtualization has been around since the 1960s, when it was introduced by IBM. Yet it has only
recently gained the expected traction, owing to the influx of cloud-based systems.
Virtualization, to explain in brief, is the capability to run multiple instances of computer systems
on the same hardware. The way hardware is being used can vary based on the configuration of
the virtual machine.
The best example of this is your own desktop PC or laptop. You might be running Windows on
your system, but with virtualization, now you can also run Macintosh or Linux Ubuntu on it.
Now, there are various levels of virtualization that we are going to see. Let's have a look
at them.
Virtualization is not easy to implement. A computer runs an OS that is configured for its
particular hardware, so running a different OS on the same hardware is not directly feasible.
To tackle this, there exists a hypervisor. What the hypervisor does is act as a bridge between the
virtual OS and the hardware to enable the smooth functioning of the instance.
There are five levels of virtualizations available that are most commonly used in the industry.
These are as follows:
Instruction Set Architecture (ISA) Level
At the ISA level, virtualization works through ISA emulation. This is helpful for running heaps
of legacy code that was originally written for a different hardware configuration.
This code can be run on a virtual machine through the ISA emulator.
Binary code that might need additional layers to run can now run on an x86 machine or, with
some tweaking, even on x64 machines. ISA emulation helps make this a hardware-agnostic
virtual machine.
The basic emulation, though, requires an interpreter, which interprets the source code and
converts it into a hardware-readable format for processing.
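A toy sketch of such an interpreter, using an invented two-register instruction set, might look like this:

# A toy interpreter for a made-up two-register ISA, illustrating how
# ISA-level emulation dispatches each source instruction in software.
def run(program):
    regs = {"r0": 0, "r1": 0}
    for op, *args in program:
        if op == "load":        # load <reg> <immediate>
            regs[args[0]] = args[1]
        elif op == "add":       # add <dst> <src>
            regs[args[0]] += regs[args[1]]
        elif op == "print":
            print(regs[args[0]])
        else:
            raise ValueError(f"unknown opcode: {op}")
    return regs

run([("load", "r0", 2), ("load", "r1", 40), ("add", "r0", "r1"), ("print", "r0")])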
Hardware Abstraction Level (HAL)
As the name suggests, this level performs virtualization at the hardware level. It uses a bare-
metal hypervisor for its functioning.
This level helps form the virtual machine and manages the hardware through virtualization.
It enables virtualization of each hardware component such as I/O devices, processors, memory,
etc.
This way multiple users can use the same hardware with numerous instances of virtualization at
the same time.
IBM first implemented this in its VM/370 system, building on work dating back to the 1960s. It
is well suited to cloud-based infrastructure.
Thus, it is no surprise that Xen hypervisors currently use HAL to run Linux and other OSes
on x86-based machines.
Operating System Level
At the operating system level, the virtualization model creates an abstraction layer between the
applications and the OS.
It is like an isolated container on the physical server and operating system that utilizes hardware
and software. Each of these containers functions like a server.
When the number of users is high, and no one is willing to share hardware, this level of
virtualization comes in handy.
Here, every user gets their own virtual environment with dedicated virtual hardware resources.
This way, no conflicts arise.
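As a concrete example, the sketch below (assuming the Docker CLI is installed on the host) starts one such container. Note that it reports the host's kernel, since OS-level virtualization shares a single kernel across all containers:

# A minimal sketch of OS-level virtualization in practice: each container
# gets an isolated view of the filesystem, processes, and network.
import subprocess

result = subprocess.run(
    ["docker", "run", "--rm", "alpine", "uname", "-a"],
    capture_output=True, text=True, check=True,
)
# The container reports the *host* kernel version, because OS-level
# virtualization shares one kernel across all containers.
print(result.stdout)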
Library Level
OS system calls are lengthy and cumbersome, which is why applications opt for APIs from user-
level libraries.
Most of the APIs provided by systems are rather well documented. Hence, library level
virtualization is preferred in such scenarios.
Library interfacing virtualization is made possible by API hooks. These API hooks control the
communication link from the system to the applications.
Some tools available today, such as vCUDA and WINE, have successfully demonstrated this
technique.
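The hooking idea can be sketched in Python by swapping a library function for a wrapper. This is only a toy illustration of what tools like WINE do at the library-interface level:

# A toy sketch of library-level virtualization via an API hook: the
# original function is saved and replaced by a wrapper that controls the
# communication between the application and the underlying call.
import time

_original_time = time.time

def hooked_time():
    # The hook could redirect, translate, or log the call.
    print("time() intercepted")
    return _original_time()

time.time = hooked_time     # install the hook
print(time.time())          # application code now goes through the wrapper
time.time = _original_time  # uninstall the hook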
Application Level
Application-level virtualization comes in handy when you wish to virtualize only an application.
It does not virtualize an entire platform or environment.
On an operating system, applications work as one process. Hence it is also known as process-
level virtualization.
It is generally useful when running virtual machines with high-level languages. Here, the
virtualization layer sits as an application program on top of the operating system, and the
application program runs on top of the virtualization layer.
Programs written in high-level languages and compiled for an application-level virtual machine
can run fluently here.
A micro-kernel hypervisor includes only the basic and unchanging functions (such as physical
memory management and processor scheduling). The device drivers and other changeable
components are outside the hypervisor. A monolithic hypervisor implements all the
aforementioned functions, including those of the device drivers. Therefore, the size of the
hypervisor code of a micro-kernel hypervisor is smaller than that of a monolithic hypervisor.
Essentially, a hypervisor must be able to convert physical devices into virtual resources
dedicated for the deployed VM to use.
1.1 The Xen Architecture
Xen is an open source hypervisor program developed by Cambridge University. Xen is a micro-
kernel hypervisor, which separates the policy from the mechanism. The Xen hypervisor
implements all the mechanisms, leaving the policy to be handled by Domain 0, as shown in
Figure 3.5. Xen does not include any device drivers natively [7]. It just provides a mechanism by
which a guest OS can have direct access to the physical devices. As a result, the size of the Xen
hypervisor is kept rather small. Xen provides a virtual environment located between the
hardware and the OS. A number of vendors are in the process of developing commercial Xen
hypervisors, among them are Citrix XenServer [62] and Oracle VM [42].
The core components of a Xen system are the hypervisor, kernel, and applications. The
organization of the three components is important. Like other virtualization systems, many guest OSes
can run on top of the hypervisor. However, not all guest OSes are created equal, and one in
particular controls the others. The guest OS, which has control ability, is called Domain 0, and
the others are called Domain U. Domain 0 is a privileged guest OS of Xen. It is first loaded when
Xen boots without any file system drivers being available. Domain 0 is designed to access
hardware directly and manage devices. Therefore, one of the responsibilities of Domain 0 is to
allocate and map hardware resources for the guest domains (the Domain U domains).
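As an illustrative sketch (assuming a Xen host where the standard xl toolstack is available, run with root privileges), Domain 0 can enumerate the guest domains like this:

# A minimal sketch of inspecting Xen domains from Domain 0 via `xl list`.
import subprocess

out = subprocess.run(["xl", "list"], capture_output=True, text=True, check=True)
for line in out.stdout.splitlines()[1:]:   # skip the header row
    name = line.split()[0]
    print("domain:", name)   # Domain-0 appears first, then the Domain U guests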
2. Binary Translation with Full Virtualization
Depending on implementation technologies, hardware virtualization can be classified into two
categories: full virtualization and host-based virtualization. Full virtualization does not need to
modify the guest OS. It relies on binary translation to trap and to virtualize the execution of
certain sensitive, nonvirtualizable instructions. The guest OSes and their applications consist of
noncritical and critical instructions. In a host-based system, both a host OS and a guest OS are
used. A virtualization software layer is built between the host OS and guest OS. These two
classes of VM architecture are introduced next.
2.1 Full Virtualization
With full virtualization, noncritical instructions run on the hardware directly while critical
instructions are discovered and replaced with traps into the VMM to be emulated by software.
Both the hypervisor and VMM approaches are considered full virtualization. Why are only
critical instructions trapped into the VMM? This is because binary translation can incur a large
performance overhead. Noncritical instructions do not control hardware or threaten the security
of the system, but critical instructions do. Therefore, running noncritical instructions on
hardware not only can promote efficiency, but also can ensure system security.
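The trap-and-emulate division can be sketched with a toy, invented instruction stream: noncritical instructions take the fast path, while critical ones trap into a VMM routine. This is a conceptual illustration only, not how a real VMM is written:

# A toy sketch of the trap-and-emulate idea behind full virtualization.
CRITICAL = {"out", "hlt"}          # instructions that touch hardware state

def vmm_emulate(instr):
    print(f"VMM: emulating critical instruction '{instr}'")

def execute(instr):
    print(f"CPU: executing '{instr}' directly")

for instr in ["add", "mov", "out", "add", "hlt"]:
    if instr in CRITICAL:
        vmm_emulate(instr)         # trap into the VMM (slow path)
    else:
        execute(instr)             # native speed (fast path)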
To support virtualization, processors such as the x86 employ a special running mode and
instructions, known as hardware-assisted virtualization. In this way, the VMM and guest OS run
in different modes and all sensitive instructions of the guest OS and its applications are trapped
in the VMM. To save processor states, mode switching is completed by hardware. For the x86
architecture, Intel and AMD have proprietary technologies for hardware-assisted virtualization.
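On Linux, one quick host-specific way to check for these extensions is to look for the vmx (Intel VT-x) or svm (AMD-V) CPU flags; a minimal sketch:

# A minimal sketch of detecting hardware-assisted virtualization on Linux
# by scanning the CPU flags exposed in /proc/cpuinfo.
with open("/proc/cpuinfo") as f:
    flags = set()
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())

if "vmx" in flags:
    print("Intel VT-x available")
elif "svm" in flags:
    print("AMD-V available")
else:
    print("no hardware-assisted virtualization detected")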
Figure 3.17(b) illustrates a logical view of such a virtual cluster hierarchy in two levels. Each
VM operates in an isolated fashion at the first level. This will minimize both miss access time
and performance interference with other workloads or VMs. Moreover, the shared resources of
cache capacity, interconnect links, and miss handling are mostly isolated between VMs. The second
level maintains a globally shared memory. This facilitates dynamically repartitioning resources
without costly cache flushes. Furthermore, maintaining globally shared memory minimizes
changes to existing system software and allows virtualization features such as content-based
page sharing. A virtual hierarchy adapts to space-shared workloads like multiprogramming and
server consolidation. Figure 3.17 shows a case study focused on consolidated server workloads
in a tiled architecture. This many-core mapping scheme can also optimize for space-shared
multiprogrammed workloads in a single-OS environment.
Desktop virtualization is technology that lets users simulate a workstation load to access a
desktop from a connected device remotely or locally. This separates the desktop environment
and its applications from the physical client device used to access it. Desktop virtualization is a
key element of digital workspaces and depends on application virtualization.
How does desktop virtualization work?
Desktop virtualization can be achieved in a variety of ways, but the most important two types of
desktop virtualization are based on whether the operating system instance is local or remote.
Local Desktop Virtualization
Local desktop virtualization means the operating system runs on a client device using
hardware virtualization, and all processing and workloads occur on local hardware. This
type of desktop virtualization works well when users do not need a continuous network
connection and can meet application computing requirements with local system resources.
However, because this requires processing to be done locally, you cannot use local desktop
virtualization to share VMs or resources across a network with thin clients or mobile devices.
Remote Desktop Virtualization
Remote desktop virtualization is a common use of virtualization that operates in a
client/server computing environment. This allows users to run operating systems and
applications from a server inside a data center while all user interactions take place on a
client device. This client device could be a laptop, thin client device, or a smartphone. The
result is IT departments have more centralized control over applications and desktops, and
can maximize the organization’s investment in IT hardware through remote access to
shared computing resources.
What is virtual desktop infrastructure?
A popular type of desktop virtualization is virtual desktop infrastructure (VDI). VDI is a variant
of the client-server model of desktop virtualization which uses host-based VMs to deliver
persistent and nonpersistent virtual desktops to all kinds of connected devices. With a persistent
virtual desktop, each user has a unique desktop image that they can customize with apps and
data, knowing it will be saved for future use. A nonpersistent virtual desktop infrastructure
allows users to access a virtual desktop from an identical pool when they need it; once the user
logs out of a nonpersistent VDI, it reverts to its unaltered state. Some of the advantages of virtual
desktop infrastructure are improved security and centralized desktop management across an
organization.
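A toy sketch of the persistent vs. nonpersistent assignment logic described above (pool and image names are invented):

# A toy sketch of persistent vs. nonpersistent VDI assignment.
class VDIPool:
    def __init__(self, images):
        self.free = list(images)      # identical, nonpersistent images
        self.persistent = {}          # user -> customized image

    def checkout(self, user, persistent=False):
        if persistent:
            # A persistent desktop is unique to the user and is kept.
            return self.persistent.setdefault(user, f"desktop-{user}")
        return self.free.pop()        # any identical image will do

    def release(self, image):
        # A nonpersistent desktop reverts to its unaltered state on logout.
        self.free.append(image)

pool = VDIPool(["img-1", "img-2"])
d = pool.checkout("alice")            # nonpersistent desktop
pool.release(d)                       # reverts to the shared pool
print(pool.checkout("bob", persistent=True))  # saved for future sessions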
What are the benefits of desktop virtualization?
1. Resource Management:
Desktop virtualization helps IT departments get the most out of their hardware investments by
consolidating most of their computing in a data center. Desktop virtualization then allows
organizations to issue lower-cost computers and devices to end users because most of the
intensive computing work takes place in the data center. By minimizing how much computing is
needed at the endpoint devices for end users, IT departments can save money by buying less
costly machines.
2. Remote work:
Desktop virtualization helps IT admins support remote workers by giving IT central control over
how desktops are virtually deployed across an organization’s devices. Rather than manually
setting up a new desktop for each user, desktop virtualization allows IT to simply deploy a
ready-to-go virtual desktop to that user’s device. Now the user can interact with the operating
system and applications on that desktop from any location and the employee experience will be
the same as if they were working locally. Once the user is finished using this virtual desktop,
they can log off and return that desktop image to the shared pool.
3. Security:
Desktop virtualization software provides IT admins centralized security control over which users
can access which data and which applications. If a user’s permissions change because they leave
the company, desktop virtualization makes it easy for IT to quickly remove that user’s access to
their persistent virtual desktop and all its data—instead of having to manually uninstall
everything from that user’s devices. And because all company data lives inside the data center
rather than on each machine, a lost or stolen device does not pose the same data risk. If someone
steals a laptop that uses desktop virtualization, there is no company data on the actual machine
and hence less risk of a breach.
Server Virtualization
In this process, the server resources are kept hidden from the user. The partitioning of a physical
server into several virtual environments results in the dedication of one server to performing a
single application or task.
This technique is mainly used in web servers, as it reduces the cost of web-hosting services.
Instead of having a separate system for each web server, multiple virtual servers can run on the
same system/computer.
The primary uses of server virtualization are described at the end of this section.
Approaches To Virtualization:
Server virtualization can be viewed as part of an overall virtualization trend in IT companies
that includes network virtualization, storage virtualization, and workload management. This
trend contributes to the development of autonomic computing. Server virtualization can also be
used to eliminate server sprawl (a situation in which many under-utilized servers take up more
space or consume more resources than their workload can justify) and to use server resources
efficiently.
1. Virtual Machine model: based on the host-guest paradigm, where each guest runs on a
virtual replica of the hardware layer. This technique of virtualization allows the guest OS to run
without modification. However, it requires real computing resources from the host, so a
hypervisor or VMM is required to coordinate instructions to the CPU.
2. Para-Virtual Machine model: also based on the host-guest paradigm, and it uses a virtual
machine monitor too. In this model the VMM modifies the guest operating system's code, which
is called 'porting'. Like the virtual machine model, the para-virtual machine model is capable of
executing multiple operating systems. The para-virtual model is used by both Xen and UML.
3. Operating System Layer Virtualization: virtualization at the OS level functions in a
different way and is not based on the host-guest paradigm. In this model the host runs a single
operating system kernel as its core and exports that kernel's functionality to each of the guests.
The guests must use the same operating system as the host. This distributed architecture
eliminates system calls between layers and hence reduces the overhead of CPU usage. Each
partition must also remain strictly isolated from its neighbors, so that a failure or security breach
in one partition cannot affect the others.
The primary uses of server virtualization are:
Cost Reduction: Server virtualization reduces cost because less hardware is required.
Independent Restart: Each server can be rebooted independently, and that reboot won't
affect the working of the other virtual servers.
Google App Engine (GAE) is a service for developing and hosting Web applications in Google's
data centers, belonging to the platform as a service (PaaS) category of cloud computing. Web
applications hosted on GAE are sandboxed and run across multiple servers, providing
redundancy and allowing resources to scale according to the traffic requirements of the moment.
App Engine automatically allocates additional resources to the servers to accommodate
increased load.
Google App Engine is Google's platform as a service offering that allows developers and
businesses to build and run applications using Google's advanced infrastructure. These
applications are required to be written in one of a few supported languages, namely Java,
Python, PHP, and Go. It also requires the use of the Google Query Language (GQL), and the
datastore used is Google Bigtable. Applications must abide by these standards, so they must
either be developed with GAE in mind or modified to meet the requirements.
GAE is a platform, so it provides all of the required elements to run and host Web applications,
be it on mobile or the Web. Without this all-in-one feature, developers would have to source
their own servers, database software, and the APIs that make them all work properly together,
not to mention the entire configuration that must be done. GAE takes this burden off the
developers so they can concentrate on the app's front end and functionality, driving a better user
experience.
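As an illustrative sketch (assuming the Python runtime of the GAE standard environment and the Flask micro-framework), a minimal App Engine app is just a small web application deployed alongside an app.yaml file that names a supported runtime; exact details vary by runtime version:

# main.py: a minimal sketch of a GAE standard-environment app using Flask.
# An accompanying app.yaml (naming a supported Python runtime) tells App
# Engine how to run it; GAE then scales instances with traffic.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # App Engine routes HTTP requests to this sandboxed handler.
    return "Hello from App Engine!"

if __name__ == "__main__":
    # Local testing only; in production App Engine serves the app itself.
    app.run(host="127.0.0.1", port=8080)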
In 2006, Amazon Web Services (AWS) began offering IT infrastructure services to businesses in
the form of web services -- now commonly known as cloud computing. One of the key benefits
of cloud computing is the opportunity to replace up-front capital infrastructure expenses with
low variable costs that scale with your business. With the Cloud, businesses no longer need to
plan for and procure servers and other IT infrastructure weeks or months in advance. Instead,
they can instantly spin up hundreds or thousands of servers in minutes and deliver results faster.
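As a hedged sketch of that "minutes, not months" workflow, the AWS SDK for Python (boto3) can launch an instance. The AMI ID below is a placeholder, and credentials and region must already be configured:

# A minimal sketch of launching one EC2 instance with boto3.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",        # placeholder AMI ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])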
Today, Amazon Web Services provides a highly reliable, scalable, low-cost infrastructure
platform in the cloud that powers hundreds of thousands of businesses in 190 countries around
the world. With data center locations in the U.S., Europe, Brazil, Singapore, Japan, and
Australia, customers across all industries are taking advantage of the following benefits:
Low Cost
AWS offers low, pay-as-you-go pricing with no up-front expenses or long-term commitments.
We are able to build and manage a global infrastructure at scale, and pass the cost saving benefits
onto you in the form of lower prices. With the efficiencies of our scale and expertise, we have
been able to lower our prices on 15 different occasions over the past four years. Visit the
Economics Center to learn more.
Secure
AWS is a secure, durable technology platform with industry-recognized certifications and audits:
PCI DSS Level 1, ISO 27001, FISMA Moderate, FedRAMP, HIPAA, and SOC 1 (formerly
referred to as SAS 70 and/or SSAE 16) and SOC 2 audit reports. Our services and data centers
have multiple layers of operational and physical security to ensure the integrity and safety of
your data.
Solutions
The AWS cloud computing platform provides the flexibility to launch your application
regardless of your use case or industry.
Application Hosting
Use reliable, on-demand infrastructure to power your applications, from hosted internal
applications to SaaS offerings.
Websites
Satisfy your dynamic web hosting needs with AWS’s scalable infrastructure platform.
Enterprise IT
Host internal- or external-facing IT applications in AWS's secure environment.
Content Delivery
Quickly and easily distribute content to end users worldwide, with low costs and high data
transfer speeds.
Databases
Take advantage of a variety of scalable database solutions, from hosted enterprise database
software to non-relational database solutions.
Cloud Federation, also known as Federated Cloud, is the deployment and management of
several external and internal cloud computing services to match business needs. It is a multi-
cloud system that integrates private, community, and public clouds into scalable computing
platforms. A federated cloud is created by connecting the cloud environments of different cloud
providers using a common standard.
Federated Cloud
1. In the federated cloud, the users can interact with the architecture either centrally or in a
decentralized manner. In centralized interaction, the user interacts with a broker to mediate
between them and the organization. Decentralized interaction permits the user to interact
directly with the clouds in the federation.
2. Federated cloud can be practiced in various niches, both commercial and non-
commercial.
3. The visibility of a federated cloud assists the user in interpreting the organization of the
several clouds in the federated environment.
4. Federated cloud can be monitored in two ways. MaaS (Monitoring as a Service)
provides information that aids in tracking contracted services to the user. Global monitoring
aids in maintaining the federated cloud.
5. The providers who participate in the federation publish their offers to a central entity.
The user interacts with this central entity to verify the prices and propose an offer (a toy
broker sketch of this interaction appears after the challenges below).
6. The marketed objects, like infrastructure, software, and platform, have to pass through
the federation when they are consumed in the federated cloud.
Challenges of Federated Cloud:
1. In cloud federation, it is common to have more than one provider for processing the
incoming demands. In such cases, there must be a scheme to distribute the incoming
demands equally among the cloud service providers.
2. The increasing requests in cloud federation have resulted in a more heterogeneous
infrastructure, making interoperability an area of concern. It becomes a challenge for cloud
users to select relevant cloud service providers, and this can tie them to a particular cloud
service provider.
3. A federated cloud means constructing a seamless cloud environment that can interact
with people, different devices, several application interfaces, and other entities.
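To make the broker-based (centralized) interaction concrete, here is a toy Python sketch in which providers publish offers to a central entity and the broker selects among them. All names and prices are invented:

# A toy sketch of centralized broker-based interaction in a federated cloud.
offers = {}   # the central entity's registry of published offers

def publish(provider, price_per_hour):
    # Each participating provider publishes its offer to the central entity.
    offers[provider] = price_per_hour

def broker_select():
    # The broker mediates between the user and the federation, here simply
    # choosing the cheapest offer.
    return min(offers, key=offers.get)

publish("cloud-a", 0.12)
publish("cloud-b", 0.09)
publish("cloud-c", 0.15)
print("broker selected:", broker_select())   # -> cloud-b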
The technologies that aid the cloud federation and cloud services are:
1. OpenNebula
It is a cloud computing platform for managing heterogeneous, distributed data center
infrastructures. Its emphasis on interoperability lets it leverage existing information technology
assets, protecting prior investments, through its application programming interfaces (APIs).
2. Aneka coordinator
The Aneka coordinator is a composition of the Aneka services and Aneka peer components
(network architectures) that gives the cloud the capability and performance to interact with
other cloud services.
3. Eucalyptus
Eucalyptus provides pooling of computational, storage, and network resources that can be
scaled up or down as application workloads change. It is an open-source framework that
combines storage, network, and other computational resources to provide a cloud environment.