CCL Lab Manual-1
Uploaded by Babita Bhagat

Mahatma Education Society's

PILLAI HOC COLLEGE OF ENGINEERING AND TECHNOLOGY,


RASAYANI

Department of Computer Engineering

Subject: Cloud Computing Lab Sem: VIII

List of Experiments

1. Study of NIST model of cloud computing
2. Virtualization in cloud
3. To study and implement IaaS using OpenStack
4. To study and implement Storage as a Service
5. Understand security of web server and data directory
6. Case study on Fog computing
7. To simulate identity management in private cloud
8. Write an IAM policy by using client libraries
9. To understand on-demand application delivery and virtual desktop infrastructure
10. To study containerization using Docker
11. Miniproject

Practical In-Charge: Ms. Archana Augustine, Ms. Snehal Chitale
HOD: Ms. Rohini Bhosale
EXPERIMENT NO.01
Name of the Exp.: Study of NIST model of Cloud Computing.

Aim : Study of NIST model of Cloud Computing

Pre-Requisite: Basic knowledge of cloud computing

The objective of this module is to provide students an overview of Cloud
Computing, its architecture, and the different types of cloud computing.

Scope:

Cloud computing and architecture; types of cloud computing.

THEORY:

What is Cloud?

The term "Cloud" refers to a network or the Internet. In other words, the cloud
is something that is present at a remote location. The cloud can provide
services over public and private networks, i.e., WAN, LAN or VPN.

Applications such as e-mail, web conferencing, and customer relationship
management (CRM) execute on the cloud.

What is Cloud Computing?

Cloud Computing refers to manipulating, configuring, and accessing hardware
and software resources remotely. It offers online data storage, infrastructure,
and applications. Cloud computing offers platform independence, as the software
is not required to be installed locally on the PC.
Deployment Models

Deployment models define the type of access to the cloud, i.e., how the cloud
is located. A cloud can have any of four types of access: public, private,
hybrid, and community.
Public Cloud

The public cloud allows systems and services to be easily accessible to the
general public. A public cloud may be less secure because of its openness. Public
clouds are owned and operated by third parties; they deliver superior economies of
scale to customers, as the infrastructure costs are spread among a mix of users,
giving each individual client an attractive low-cost, "pay-as-you-go" model. All
customers share the same infrastructure pool with limited configuration, security
protections, and availability variances. These are managed and supported by the
cloud provider. One advantage of a public cloud is that it may be larger than an
enterprise's cloud, thus providing the ability to scale seamlessly, on demand.

Private Cloud

The private cloud allows systems and services to be accessible within an
organization. It is more secure because of its private nature.
● On-premise Private Cloud: On-premise private clouds, also known as
internal clouds, are hosted within one's own data centre. This model provides
a more standardized process and protection, but is limited in aspects of size
and scalability.
● Externally hosted Private Cloud: This type of private cloud is hosted
externally with a cloud provider, where the provider facilitates an
exclusive cloud environment with a full guarantee of privacy.

Community Cloud

The community cloud allows systems and services to be accessible by a group of


organizations. The costs are spread over fewer users than a public cloud (but more than a
private cloud), so only some of the cost savings potential of cloud computing are realized.

Hybrid Cloud

The hybrid cloud is a mixture of public and private cloud, in which the critical
activities are performed using private cloud while the non-critical activities are
performed using public cloud.
Hybrid Clouds combine both public and private cloud models. With a Hybrid
Cloud, service providers can utilize 3rd party Cloud Providers in a full or partial
manner thus increasing the flexibility of computing. The Hybrid cloud
environment is capable of providing on-demand, externally provisioned scale.

Cloud Computing Models


Cloud Providers offer services that can be grouped into three categories.

1. Software as a Service (SaaS): In this model, a complete application is
offered to the customer as a service on demand. A single instance of the
service runs on the cloud and multiple end users are serviced. On the
customer's side, there is no need for upfront investment in servers or software
licenses, while for the provider, the costs are lowered, since only a single
application needs to be hosted and maintained. Today SaaS is offered by
companies such as Google, Salesforce, Microsoft, Zoho, etc.

2. Platform as a Service (PaaS): Here, a layer of software or a development
environment is encapsulated and offered as a service, upon which other higher
levels of service can be built. The customer has the freedom to build his own
applications, which run on the provider's infrastructure. To meet manageability
and scalability requirements of the applications, PaaS providers offer a
predefined combination of OS and application servers, such as the LAMP platform
(Linux, Apache, MySQL and PHP), restricted J2EE, Ruby, etc. Google's App
Engine, Force.com, etc. are some of the popular PaaS examples.

3. Infrastructure as a Service (IaaS): IaaS provides basic storage and
computing capabilities as standardized services over the network. Servers,
storage systems, networking equipment, data centre space, etc. are pooled and
made available to handle workloads. The customer would typically deploy
his own software on the infrastructure. Some common examples are Amazon,
GoGrid, 3Tera, etc.
Benefits of cloud:
● Lower upfront infrastructure cost (pay-as-you-go)
● On-demand scalability and elasticity
● Access to applications and data from anywhere over the Internet
● Reduced maintenance, since the provider manages the infrastructure

Conclusion :
In this way we are able to study Cloud Computing with their deployment models & service
models.
EXPERIMENT NO. 02

Name of the Exp. : Virtualization in Cloud

Aim: Virtualization in Cloud


Pre-Requisite: VMware, Oracle VirtualBox, virtualization

Concept: Virtualization

Objective: In this module students will learn virtualization basics, objectives
of virtualization, and benefits of virtualization in the cloud.
Scope: Creating and running virtual machines on open source OS.
Technology: VirtualBox, XenServer 7.0

Theory:

What is Virtualization?
Virtualization allows multiple operating system instances to run concurrently on a
single computer; it is a means of separating hardware from a single operating system.
Each "guest" OS is managed by a Virtual Machine Monitor (VMM), also known as a
hypervisor. Because the virtualization system sits between the guest and the hardware, it
can control the guests' use of CPU, memory, and storage, even allowing a guest OS to
migrate from one machine to another.
Virtualization is a hardware-reducing, cost-saving and energy-saving technology
that is rapidly transforming the IT landscape and fundamentally changing the
way that people compute.

Before Virtualization:
● Single OS image per machine
● Software and hardware tightly coupled
● Running multiple applications on same machine often creates conflict
● Inflexible and costly infrastructure

After Virtualization:
● Hardware-independence of operating system and applications
● Virtual machines can be provisioned to any system
● Can manage OS and application as a single unit by encapsulating them into virtual
Machines
[Figure: two virtual machines (VM1, VM2) running on a hypervisor, which runs on the hardware]

Virtualization Software
● VMware
o Server
o Player
o Workstation
o ESX Server
● Qemu
● Xen
● Microsoft Virtual PC/Server
o Connectix

Installation and configuration of Hosted Virtualization


Step 1: Download Oracle VM VirtualBox from https://www.virtualbox.org/wiki/Downloads

Step 2: Install it on Windows. Once the installation is done, open it.

Step 3: Create a virtual machine by clicking on New.

Step 4: Specify RAM size, HDD size, and network configuration, and finish the wizard.

Step 5: To select the installation media, click on Start and browse for the ISO file.

Step 6: Complete the installation and use it.

Step 7: To connect the OS to the network, change the network mode to Bridged Adapter.
Download the ISO for XenServer.

Create a New > Virtual Machine > Guest Operating System "VMware ESX – VMware ESXi 5".
Make sure you enable "Virtualize Intel VT-x/EPT or AMD-V/RVI".

Mount the ISO into your new virtual machine and start it to get into the XenServer boot loader.
Pick your keyboard layout and click OK.

Accept the EULA and click OK.

Make sure "Local Media" is highlighted.

If you have any Supplemental Packs, click Yes; if not, click No.

Highlight "Skip verification".

Provide a password for your console. Make sure you remember it, because you
will need it to connect to your XenServer when you install XenCenter.

From here, configure your IP address or let DHCP assign one for you.
Give your XenServer a host name and assign a DNS server, if you don't want to
let your DHCP configure your DNS for you.

Configure your time zone and continue.

Click on "Install XenServer".


Once the installation is completed, please remove the ISO or the CD and reboot your machine or Virtual
Machine.

Conclusion:

In this way we studied virtualization in the cloud and created virtual machines on an
open source OS.
Experiment No. 03

Aim: Study and implementation of Infrastructure as a Service.

Theory:

Though service-oriented architecture advocates "everything as a service" (with the acronyms
EaaS or XaaS, or simply *aaS), cloud-computing providers offer their "services" according to
different models, which happen to form a stack: infrastructure-, platform- and
software-as-a-service.

Infrastructure as a service (IaaS)

In the most basic cloud-service model, and according to the IETF (Internet Engineering Task
Force), providers of IaaS offer computers (physical or, more often, virtual machines) and
other resources. IaaS refers to online services that abstract the user from the details of
infrastructure like physical computing resources, location, data partitioning, scaling,
security, backup, etc. A hypervisor, such as Xen, Oracle VirtualBox, KVM, VMware ESX/ESXi,
or Hyper-V, runs the virtual machines as guests. Pools of hypervisors within the cloud
operational system can support large numbers of virtual machines and the ability to scale
services up and down according to customers' varying requirements. IaaS clouds often offer
additional resources such as a virtual-machine disk-image library, raw block storage, file or
object storage, firewalls, load balancers, IP addresses, virtual local area networks (VLANs),
and software bundles. IaaS-cloud providers supply these resources on demand from their large
pools of equipment installed in data centres. For wide-area connectivity, customers can use
either the Internet or carrier clouds (dedicated virtual private networks).

To deploy their applications, cloud users install operating-system images and their application
software on the cloud infrastructure. In this model, the cloud user patches and maintains the
operating systems and the application software. Cloud providers typically bill IaaS services on a
utility computing basis: cost reflects the amount of resources allocated and consumed.
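As a worked illustration of this utility billing model, a monthly bill is simply each metered resource multiplied by its rate. The sketch below uses entirely hypothetical rates, not any real provider's pricing:

```python
# Illustrative utility-computing bill: cost reflects resources consumed.
# All rates below are hypothetical, not real provider pricing.
RATES = {
    "vcpu_hours": 0.02,         # $ per vCPU-hour
    "gb_ram_hours": 0.005,      # $ per GB-of-RAM-hour
    "gb_storage_months": 0.10,  # $ per GB-month of block storage
}

def monthly_bill(vcpu_hours, gb_ram_hours, gb_storage_months):
    """Sum each metered resource multiplied by its rate."""
    usage = {
        "vcpu_hours": vcpu_hours,
        "gb_ram_hours": gb_ram_hours,
        "gb_storage_months": gb_storage_months,
    }
    return sum(RATES[k] * v for k, v in usage.items())

# A 2-vCPU, 4 GB RAM VM running 720 hours with a 50 GB volume:
bill = monthly_bill(2 * 720, 4 * 720, 50)   # 28.8 + 14.4 + 5.0
```

The key point is that an idle, deallocated resource costs nothing: the bill tracks consumption, not capacity owned.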
OpenStack

OpenStack is a free and open source cloud computing software platform that is widely
used in the deployment of Infrastructure-as-a-Service (IaaS) solutions. The core
technology within OpenStack comprises a set of interrelated projects that control the
overall layers of processing, storage and networking resources through a data centre
that is managed by users through a web-based dashboard, command-line tools, or the
RESTful API.
Currently, OpenStack is maintained by the OpenStack Foundation, a non-profit
corporate organisation established in September 2012 to promote OpenStack software as
well as its community. Many corporate giants have joined the project, including
GoDaddy, Hewlett Packard, IBM, Intel, Mellanox, Mirantis, NEC, NetApp, Nexenta,
Oracle, Red Hat, SUSE Linux, VMware, Arista Networks, AT&T, AMD, Avaya,
Canonical, Cisco, Dell, EMC, Ericsson, Yahoo!, etc.

Deployment of OpenStack using DevStack

DevStack is used to quickly create an OpenStack development environment. It is also used to


demonstrate the starting and running of OpenStack services, and provide examples of using
them from the command line. DevStack has evolved to support a large number of configuration
options and alternative platforms and support services. It can be considered as the set of scripts
which install all the essential OpenStack services in the computer without any additional
software or configuration. To implement DevStack, first download all the essential packages,
pull in the OpenStack code from various OpenStack projects, and set everything for the
deployment.

To install OpenStack using DevStack, any Linux-based distribution with 2GB RAM can
be used to start the implementation of IaaS.

Here are the steps that need to be followed for the installation.

1. Install Git
$ sudo apt-get install git
2. Clone the DevStack repository and change the directory. The code will set up the cloud
infrastructure.

$ git clone http://github.com/openstack-dev/devstack

$ cd devstack/

/devstack$ ls

3. Execute the stack.sh script:


/devstack$ ./stack.sh

Here, the MySQL database password is entered. There's no need to worry about installing
MySQL separately on this system. We just specify a password, and this script will install
MySQL and use this password there.

▪ Horizon is now available at http://1.1.1.1/


▪ Keystone is serving at http://1.1.1.1:5000/v2.0/
▪ Examples on using the novaclient command line are in exercise.sh
▪ The default users are: admin and demo
▪ The password: nova
▪ This is your host IP: 1.1.1.1

After all these steps, the machine becomes the cloud service providing platform. Here, 1.1.1.1 is
the IP of my first network interface.

We can type the host IP provided by the script into a browser, in order to access the dashboard
'Horizon'. We can log in with the username 'admin' or 'demo' and the password 'admin'.

You can view all the process logs inside the screen, by typing the following command:

$ screen -x

Executing the following will kill all the services, but it should be noted that it will not delete any
of the code.

To bring down all the services manually, type:

$ sudo killall screen

localrc configurations

localrc is the file in which all the local configurations (local machine parameters) are
maintained. After the first successful stack.sh run, you will see that a localrc file gets created
with the configuration values you specified while running that script.
The following fields are specified in the localrc file:

DATABASE_PASSWORD

RABBIT_PASSWORD

SERVICE_TOKEN

SERVICE_PASSWORD

ADMIN_PASSWORD
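Putting the fields above together, a minimal localrc might look like the following (all values are placeholders to be replaced with your own secrets):

```shell
# Minimal sample localrc for DevStack; every value below is a placeholder.
DATABASE_PASSWORD=devstack-db-pass
RABBIT_PASSWORD=devstack-rabbit-pass
SERVICE_TOKEN=devstack-service-token
SERVICE_PASSWORD=devstack-service-pass
ADMIN_PASSWORD=admin
```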

If we specify the option OFFLINE=True in the localrc file inside the DevStack directory
and then run stack.sh, it will not check any parameter over the Internet. It will
set up DevStack using all the packages and code residing in the local system. In the phase of
code development, there is a need to commit the local changes in the /opt/stack/nova repository
before restacking (re-running stack.sh) with the RECLONE=yes option. Otherwise, the changes
will not be committed.

To use more than one interface, there is a need to specify which one to use for the external IP
using this configuration:

HOST_IP=xxx.xxx.xxx.xxx

Cinder on DevStack

Cinder is a block storage service for OpenStack that is designed to allow the use of a reference
implementation (LVM) to present storage resources to end users that can be consumed by the
OpenStack Compute project (Nova). Cinder is used to virtualise pools of block storage
devices. It provides end users with a self-service API to request and use the resources,
without requiring any specific knowledge of the location and configuration of the storage
where it is actually deployed.

All the Cinder operations can be performed via any of the following:

1. CLI (Cinder's python-cinderclient command-line module)

2. GUI (OpenStack's GUI project, Horizon)

3. Direct calls to the Cinder APIs.

Creation and deletion of volumes:

▪ To create a 1 GB Cinder volume with no name, run the following command:

$ cinder create 1

▪ To see more information about the command, just type cinder help <command>
$ cinder help create
▪ To create a Cinder volume of size 1 GB with a name, use cinder create --display-name:

$ cinder create --display-name myvolume 1

▪ To list all the Cinder volumes, use cinder list:

$ cinder list

ID   Status     Display Name  Size  Volume Type  Bootable  Attached To
id1  Available  myvolume      1     None         False
id2  Available  None          1     None         False

▪ To delete the volume without a name, use the cinder delete <volume_id> command.
If we execute cinder list quickly enough, the volume's status can be seen changing
to 'deleting', and after some time, the volume will be deleted:

$ cinder delete id2

$ cinder list

ID   Status     Display Name  Size  Volume Type  Bootable  Attached To
id1  Available  myvolume      1     None         False
id2  Deleting   None          1     None         False

▪ Volume snapshots can be created as follows:

$ cinder snapshot-create id2


▪ All the snapshots can be listed as follows:

$ cinder snapshot-list

ID            Volume ID  Status     Display Name  Size
snapshot-id1  id2        Available  None          1
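The tabular output of cinder list can also be consumed from scripts. The sketch below is a minimal parser, under the assumption that the CLI prints its default bordered ASCII table (the sample rows mirror the listing above):

```python
def parse_cinder_table(text):
    """Parse the +----+ bordered table printed by `cinder list`
    into a list of row dicts keyed by column header."""
    # Keep only header and data rows; border rows start with "+".
    lines = [ln for ln in text.strip().splitlines() if ln.startswith("|")]
    rows = [[cell.strip() for cell in ln.strip("|").split("|")]
            for ln in lines]
    header, data = rows[0], rows[1:]
    return [dict(zip(header, row)) for row in data]

# Sample output in the assumed bordered format:
sample = """
+-----+-----------+--------------+------+
| ID  | Status    | Display Name | Size |
+-----+-----------+--------------+------+
| id1 | Available | myvolume     | 1    |
| id2 | Available | None         | 1    |
+-----+-----------+--------------+------+
"""
volumes = parse_cinder_table(sample)
```

Each entry in `volumes` is then a dict such as `{"ID": "id1", "Status": "Available", ...}`, convenient for filtering volumes by status before scripting deletes or snapshots.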

There are many functions and features available with OpenStack related to cloud deployment.
Depending upon the type of implementation, including load balancing, energy optimisation,
security and others, the OpenStack cloud computing framework can be explored in depth.

Conclusion:

In this way we studied Infrastructure as a Service and installed OpenStack.

Questions:

1. What is infrastructure as a service example?


2. How does infrastructure as a service work?
3. What are the essential things that must be followed before going to cloud computing
platform?
4. How many types of virtual private server instances are partitioned in an IaaS stack?
5. Why is infrastructure a service?
Experiment No. 04

Aim: To study and implement Storage as a Service

Theory:

Collaborating on Word Processing:


You use your word processor, most likely some version of Microsoft Word, to write memos,
letters, thank-you notes, fax cover sheets, reports, newsletters, you name it. The word
processor is an essential part of our computing lives. A number of web-based replacements
for Microsoft's venerable Word program are available. All of these programs let you write
your letters, memos and reports from any computer, no installed software necessary, as long
as that computer has a connection to the Internet. And every document you create is housed
on the web, so you don't have to worry about taking your work with you. It's cloud computing
at its most useful, and it's here today.

Exploring Web-Based Word Processors:


There are a half-dozen or so really good web-based word processing applications, led by the
ever-popular Google Docs. We'll start our look at these applications with Google's application
and work through the rest in alphabetical order.

Google Docs:
Google Docs (docs.google.com) is the most popular web-based word processor available today.
Docs is actually a suite of applications that also includes Google Spreadsheets and Google
Presentations; the Docs part of the suite is the actual word processing application. Like all
things Google, the Google Docs interface is clean and, most important, it works well without
imposing a steep learning curve. Basic formatting is easy enough to do, storage space for your
documents is generous, and sharing, collaboration and version control are a snap. When you log
in to Google Docs with your Google account, you see the home page for all the Docs
applications (word processing, spreadsheets, and presentations); all your previously
created documents are listed on this page. The leftmost pane helps you organize your
documents. You can store files in folders, view documents by type (word processing document
or spreadsheet), and display documents shared with specific people.

Collaborating on Spreadsheets :
If the word processor is the most-used office application, the spreadsheet is the second
most-important app. Office users and home users alike use spreadsheets to prepare budgets,
create expense reports, perform "what if" analyses, and otherwise crunch their numbers. And
thus we come to those spreadsheets in the cloud, the web-based spreadsheets that let you
share your numbers with other users via the Internet. All the advantages of web-based word
processors apply to web-based spreadsheets: group collaboration, anywhere/anytime access,
portability, and so on.
Exploring Web-Based Spreadsheets:
Several web-based spreadsheet applications are worthy competitors to Microsoft Excel. Chief
among these is Google Spreadsheets, which we'll discuss first, but there are many other apps
that also warrant your attention. If you're at all interested in moving your number crunching
and financial analysis into the cloud, these web-based applications are worth checking out.

Google Spreadsheets
Google Spreadsheets was Google's first application in the cloud office suite first known as
Google Docs & Spreadsheets and now just known as Google Docs. As befits its longevity,
Google Spreadsheets is Google's most sophisticated web-based application. You access your
existing spreadsheets, and create new ones, from the main Google Docs page (docs.google.com).
To create a new spreadsheet, click the New button and select Spreadsheet; the new spreadsheet
opens in a new window and you can edit it.

Collaborating on Presentations:
One of the last components of the traditional office suite to move into the cloud is the
presentation application. Microsoft PowerPoint has ruled the desktop forever, and it's proven
difficult to offer competitive functionality in a web-based application; if nothing else, slides
with large graphics are slow to upload and download in an efficient manner. That said, there is
a new crop of web-based presentation applications that aim to give PowerPoint a run for its
money. The big players, as might be expected, are Google and Zoho, but there are several other
applications that are worth considering if you need to take your presentations with you on the
road, or collaborate with users in other locations.

Google Presentations:
If there's a leader in the online presentations market, it's probably Google Presentations,
simply because of Google's dominant position with other web-based office apps. Google
Presentations is the latest addition to the Google Docs suite of apps, joining the Google Docs
word processor and the Google Spreadsheets spreadsheet application. Users can create new
presentations and open existing ones from the main Google Docs page (docs.google.com). Open a
presentation by clicking its title or icon. Create a new presentation by selecting New, then
Presentation. Your presentation now opens in a new window on your desktop. What you do get is
the ability to add title, text, and blank slides; a PowerPoint-like slide sorter pane; a
selection of predesigned themes; the ability to publish your file to the web or export it as a
PowerPoint PPT or Adobe PDF file; and quick and easy sharing and collaboration, the same as
with Google's other web-based apps.

Collaborating on Databases:
A database does many of the same things that a spreadsheet does, but in a different and often
more efficient manner. In fact, many small businesses use spreadsheets for database-like
functions. A local database is one in which all the data is stored on an individual computer. A
networked database is one in which the data is stored on a computer or server connected to a
network, and accessible by all computers connected to that network. Finally, an online or
web-based database stores data on a cloud of servers somewhere on the Internet, which is
accessible by any authorized user with an Internet connection. The primary advantage of a
web-based database is that data can easily be shared with a large number of other users, no
matter where they may be located. When your employee database is in the cloud, for example,
staff in any office can access it over the Internet.

Exploring Web-Based Databases:


In the desktop computing world, the leading database program today is Microsoft Access. (This
wasn't always the case; dBase used to rule the database roost, but things change over time.) In
larger enterprises, you're likely to encounter more sophisticated software from Microsoft,
Oracle, and other companies. Interestingly, none of the major database software developers
currently provide web-based database applications. Instead, you have to turn to a handful of
start-up companies (and one big established name) for your online database needs.

Cebase
Cebase (www.cebase.com) lets you create new database applications with a few clicks of your
mouse; all you have to do is fill in a few forms and make a few choices from some pull-down
lists. Data entry is via web forms, and your data is then displayed in a spreadsheet-like
layout. You can then sort, filter, and group your data as you like. Sharing is accomplished by
clicking the Share link at the top of any data page. You invite users to share your database
via email, and then adjust their permissions after they've accepted your invitation.

Result:
SNAPSHOTS

Step 1: Sign into the Google Drive website with your Google account.
If you don't have a Google account, you can create one for free. Google Drive will allow
you to store your files in the cloud, as well as create documents and forms through the
Google Drive web interface.
Step 2: Add files to your drive.
There are two ways to add files to your drive. You can create Google Drive documents,
or you can upload files from your computer. To create a new file, click the CREATE
button. To upload a file, click the "Up Arrow" button next to the CREATE button.

Step 3: Change the way your files are displayed.


You can choose to display files by large icons (Grid) or as a list (List). The List mode
will show you at a glance the owner of the document and when it was last modified. The
Grid mode will show each file as a preview of its first page. You can change the mode
by clicking the buttons next to the gear icon in the upper right corner of the page.
[Screenshot: List mode]
Step 4: Use the navigation bar on the left side to browse your files.
"My Drive" is where all of your uploaded files and folders are stored. "Shared with Me"
contains documents and files that have been shared with you by other Drive users. "Starred"
files are files that you have marked as important, and "Recent" files are the ones you
have most recently edited.
• You can drag and drop files and folders around your Drive to organize them as you see
fit.
• Click the folder icon with a "+" sign to create a new folder in your Drive. You can
create folders inside of other folders to organize your files.
Step 5: Search for files.
You can search through your Google Drive documents and folders using the search bar
at the top of your page. Google Drive will search through titles, content, and owners. If a
file is found with the exact term in the title, it will appear under the search bar as you
type so that you can quickly select it.

Step 1: Click the NEW button.


A menu will appear that allows you to choose what type of document you want to
create. You have several options by default, and more can be added by clicking the
"More" link at the bottom of the menu.

Step 2: Create a new file.


Once you've selected your document type, you will be taken to your blank document. If
you chose Google Docs/Sheets/Slides, you will be greeted by a wizard that will help
you configure the feel of your document.
Step 3: Name the file.
At the top of the page, click the italic gray text that says "Untitled <file type>". When
you click it, the "Rename document" window will appear, allowing you to change the
name of your file.

Step 4: Edit your document.


Begin writing your document as you would in its commercial equivalent. You will
most likely find that Google Drive has most of the basic features, but advanced features
you may be used to are not available.
Your document saves automatically as you work on it.
Step 5: Export and convert the file.
If you want to make your file compatible with similar programs, click File and place
your cursor over "Download As". A menu will appear with the available formats.
Choose the format that best suits your needs. You will be asked to name the file and
select a download location. When the file is downloaded, it will be in the format you
chose.

Step 6: Share your document.


Click File and select Share, or click the blue Share button in the upper right corner to
open the Sharing settings. You can specify who can see the file as well as who can edit
it.
Other Capabilities
1. Edit photos
2. Listen to music
3. Make drawings
4. Merge PDFs

Conclusion:
Google Docs provides an efficient way to store data. It fits well into Storage as a Service
(STaaS). It has varied options to create documents, presentations and spreadsheets. It saves
documents automatically every few seconds, and they can be shared anywhere on the Internet at
the click of a button.

Questions:

1. What kind of data can be stored in Cloud Storage?


2. Who owns the cloud storage?
3. Is cloud a storage device?
4. Where is Internet information stored?
5. Where is cloud storage stored?
Experiment No. 05
Aim:
Understand Security of Web Server and data directory

Prerequisite:
Knowledge of Access Control, Authentication and Authorization

Objective:
The objective of this experiment is to provide students an overview of the security issues of
the cloud and how to manage various user groups over the cloud.

Outcome:
After successful completion of this experiment, the student will be able to:
∙ Analyze security issues on the cloud

Theory:

∙ AWS Identity and Access Management (IAM)


IAM refers to a framework of policies and technologies for ensuring that the people in an
organization have the appropriate access to technology resources.

OR
AWS Identity and Access Management (IAM) is a web service that helps you securely control
access to AWS resources. You use IAM to control who is authenticated (signed in) and
authorized to use resources.
Note: By default, only one account (the root account) is created. If 200 employees across HR,
Marketing, Finance and Development all shared that one account, there would be no way to
separate their access; with IAM, an individual identity can be created for each of the 200
employees.

∙ IAM user limit: 500 users per root (AWS) account

∙ 300 groups per AWS account (e.g., HR, Development, etc.)
∙ 1000 roles per AWS account

✔ When you first create an AWS account, you begin with a single sign-in identity that has complete
access to all AWS services and resources in the account.

✔ This identity is called the AWS account root user and is accessed by signing in with the email address
and password that you used to create the account.
✔ We strongly recommend that you do not use the root user for your everyday tasks, even the
administrative ones. Instead, adhere to the best practice of using the root user only to create your
first IAM user. Then securely lock away the root user credentials and use them to perform only a few
account and service management tasks.
IAM gives you the following features:
1. Shared access to your AWS account:
You can grant other people permission to administer and use resources in your AWS account without
having to share your password or access keys (by creating a separate username and password for each person).

2. Granular permissions (read-only, read-write, etc.)
You can grant different permissions to different people for different resources. For example, you might
allow some users complete access to Amazon Elastic Compute Cloud (Amazon EC2), Amazon Simple
Storage Service (Amazon S3), Amazon DynamoDB, Amazon Redshift, and other AWS services.
For other users, you can allow read-only access to just some S3 buckets, or permission to administer just
some EC2 instances, or to access your billing information but nothing else.
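For instance, a minimal identity-based policy granting read-only access to a single S3 bucket might look like the following (the bucket name example-bucket is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
```

Attaching this policy to a user or group allows only these two S3 actions on that one bucket and nothing else.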

3. Secure access to AWS resources for applications that run on Amazon EC2
You can use IAM features to securely provide credentials for applications that run on EC2 instances.
These credentials provide permissions for your application to access other AWS resources.
Examples include S3 buckets and DynamoDB tables.

4. Multi-factor authentication (MFA)


You can add two-factor authentication to your account and to individual users for extra security.
With MFA you or your users must provide not only a password or access key to work with your account,
but also a code from a specially configured device.
5. Identity federation
You can allow users who already have passwords elsewhere (for example, in your corporate network, or
with an internet identity provider such as Facebook or Google) to get temporary access to your AWS
account. A trust is established between the external identity provider and AWS.

6. Identity information for assurance


If you use AWS CloudTrail, you receive log records that include information about those who made
requests for resources in your account. That information is based on IAM identities.

7. PCI DSS Compliance


IAM supports the processing, storage, and transmission of credit card data by a merchant or service
provider, and has been validated as being compliant with Payment Card Industry (PCI) Data Security
Standard (DSS). For more information about PCI DSS, including how to request a copy of the AWS PCI
Compliance Package, see PCI DSS Level 1.

8. Integrated with many AWS services


For a list of AWS services that work with IAM, see AWS Services That Work with IAM (p. 502).

9. Eventually Consistent
IAM, like many other AWS services, is eventually consistent (IAM data is replicated across multiple
zones). IAM achieves high availability by replicating data across multiple servers within Amazon's data
centres around the world.
If a request to change some data is successful, the change is committed and safely stored. However, the
change must be replicated across IAM, which can take some time. Such changes include creating or
updating users, groups, roles, or policies. We recommend that you do not include such IAM changes in
the critical, high-availability code paths of your application. Instead, make IAM changes in a separate
initialization or setup routine that you run less frequently. Also, be sure to verify that the changes have
been
propagated before production workflows depend on them. For more information, see Changes that I
make are not always immediately visible (p. 466).

10.Free to use
AWS Identity and Access Management (IAM) and AWS Security Token Service (AWS STS) are features
of your AWS account offered at no additional charge. You are charged only when you access other AWS
services using your IAM users or AWS STS temporary security credentials. For information about the
pricing of other AWS products, see the Amazon Web Services pricing page.

∙ Components of IAM

1. Users

2. Groups
3. Roles

4. Policies

I.e., roles are assigned to applications; users are assigned to people.
(Suppose you have created an EC2 instance, and inside that instance you have hosted a website that
accesses the S3 service. The application has to interact with S3, so you must give that web application
permission to access S3; to grant this permission, you need to create a role.)
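When a role is meant to be assumed by an EC2-hosted application (as in the website example above), the role carries a trust policy naming the EC2 service as the trusted principal. This is the standard EC2 trust relationship, shown here only as an illustration:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```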

∙ Steps For demonstrating AWS IAM:


1. Log in to the root account.

2. Go to the dashboard and click on IAM.

3. Create a new IAM user and grant rights. Copy the sign-in URL so that the IAM user can log in
through it (the username and password must be remembered).

4. Log in as the IAM user (using the provided URL), create a group (grant rights), and then add the user.

5. Log in to the root account, go to the dashboard, create policies as per requirement, and attach the
policies to applications.
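The console steps above can also be sketched with the AWS CLI. The user, group, and policy names below are hypothetical, and each command is prefixed with echo so the sketch can be previewed without live AWS credentials; remove the echo to execute against a real account:

```shell
# Hypothetical names; remove 'echo' to run for real (requires configured AWS credentials).
echo aws iam create-user --user-name dev-user-1
echo aws iam create-group --group-name Development
echo aws iam add-user-to-group --user-name dev-user-1 --group-name Development
echo aws iam attach-group-policy --group-name Development \
     --policy-arn arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess
```

Real runs would also need a login profile (aws iam create-login-profile) so the new user can sign in to the console.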
∙ Accessing IAM:
You can work with AWS Identity and Access Management in any of the following ways.

1. AWS Management Console


The console is a browser-based interface to manage IAM and AWS resources. For more information
about accessing IAM through the console, see The IAM Console and Sign-in Page (p. 55). For a tutorial
that guides you through using the console, see Creating Your First IAM Admin User and Group (p. 17).

2. AWS Command Line Tools


You can use the AWS command line tools to issue commands at your system's command line to perform
IAM and AWS tasks. Using the command line can be faster and more convenient than the console. The
command line tools are also useful if you want to build scripts that perform AWS tasks.
AWS provides two sets of command line tools: the AWS Command Line Interface (AWS CLI) and the
AWS Tools for Windows PowerShell. For information about installing and using the AWS CLI, see the
AWS Command Line Interface User Guide. For information about installing and using the Tools for
Windows PowerShell, see the AWS Tools for Windows PowerShell User Guide.

3. AWS SDKs
AWS provides SDKs (software development kits) that consist of libraries and sample code for various
programming languages and platforms (Java, Python, Ruby, .NET, iOS, Android, etc.). The SDKs
provide a convenient way to create programmatic access to IAM and AWS. For example, the SDKs take
care of tasks such as cryptographically signing requests, managing errors, and retrying requests
automatically. For information about the AWS SDKs, including how to download and install them, see
the Tools for Amazon Web Services page.

4. IAM HTTPS API


You can access IAM and AWS programmatically by using the IAM HTTPS API, which lets you issue
HTTPS requests directly to the service. When you use the HTTPS API, you must include code to
digitally sign requests using your credentials. For more information, see Calling the API by Making
HTTP Query Requests (p. 1239) and the IAM API Reference.
Conclusion:
In this way, we are able to understand the security of a web server and data directory.
Questions:
Q.1 Explain User Management in cloud computing Detail?
Q.2 Add snapshots of creating an IAM user and user groups (using the AWS IAM service).
Q.3 Explain various parameters used to measure the security of a web server.

Experiment No. 06
Aim: Case study on Fog computing

Prerequisite:
Knowledge of Access Control, Authentication and Authorization

Theory:
Fog Computing is a term coined by Cisco that refers to extending cloud computing to the edge of an
enterprise's network; thus, it is also known as Edge Computing or Fogging. It facilitates the operation of
computing, storage, and networking services between end devices and cloud computing data centers.

1. The devices comprising the fog infrastructure are known as fog nodes.
2. In fog computing, all the storage capabilities, computation capabilities, data along with the
applications are placed between the cloud and the physical host.
3. All these functionalities are placed more towards the host. This makes processing faster as it is
done almost at the place where data is created.
4. It improves the efficiency of the system and is also used to ensure increased security.

History of fog computing


The term fog computing was coined by Cisco in January 2014. Just as fog refers to clouds that are close
to the ground, fog computing refers to nodes that sit between the host and the cloud, close to the host. It
was intended to bring the computational capabilities of the system close to the host machine. After the
idea gained some popularity, IBM coined a similar term, "Edge Computing", in 2015.
When to use fog computing?
Fog Computing can be used in the following scenarios:

1. It is used when only selected data is required to send to the cloud. This selected data is chosen
for long-term storage and is less frequently accessed by the host.
2. It is used when the data should be analyzed within a fraction of a second, i.e., latency should be
low.
3. It is used whenever a large number of services need to be provided over a large area at different
geographical locations.
4. Devices that are subjected to rigorous computations and processing must use fog computing.
5. Real-world examples where fog computing is used include IoT devices (e.g., the Car-to-Car
Consortium in Europe), devices with sensors, and cameras (IIoT, the Industrial Internet of Things).

Advantages of fog computing


● This approach reduces the amount of data that needs to be sent to the cloud.
● Since the distance to be traveled by the data is reduced, it results in saving network bandwidth.
● Reduces the response time of the system.
● It improves the overall security of the system as the data resides close to the host.
● It provides better privacy as industries can perform analysis on their data locally.

Disadvantages of fog computing

● Congestion may occur between the host and the fog node due to increased traffic (heavy data
flow).
● Power consumption increases when another layer is placed between the host and the cloud.
● Scheduling tasks between host and fog nodes along with fog nodes and the cloud is difficult.
● Data management becomes tedious because, in addition to storing and computing data, the
transmission of data involves encryption and decryption, which in turn adds overhead.

Applications of fog computing


● It can be used to monitor and analyze patients' conditions. In case of emergency, doctors can
be alerted.
● It can be used for real-time rail monitoring as for high-speed trains we want as little latency as
possible.
● It can be used for gas and oil pipeline optimization. Pipelines generate a huge amount of data, and
it is inefficient to store all of it in the cloud for analysis.

Experiment No 07
Aim: To simulate identity management in private cloud.

Theory:

What is Cloud Identity Management?

Cloud identity management is much more than a simple web-app SSO solution. This
approach is the modern adaptation of traditional, on-prem legacy solutions such as
Microsoft Active Directory (AD) and Lightweight Directory Access Protocol (LDAP), along
with their add-ons for web application single sign-on, multi-factor authentication, privileged
access management, identity governance and administration, and more.

The modern adaptation of the directory service is optimized to be used across any device, on
any operating system, with any on-prem or web-based application, and with any cloud,
on-prem, or remote resource. Modern cloud IAM solutions also focus on being multi-protocol
so that virtually any IT resource can connect in its "native" authentication language.

Manage access to projects, folders, and organizations

This section describes how to grant, change, and revoke access to projects, folders, and
organizations. To learn how to manage access to other resources, see the following guides:

∙ Manage access to service accounts

∙ Manage access to other resources

In Identity and Access Management (IAM), access is managed through IAM policies. An
IAM policy is attached to a Google Cloud resource. Each policy contains a collection of role
bindings that associate one or more principals, such as users or service accounts, with an
IAM role. These role bindings grant the specified roles to the principals, both on the
resource that the policy is attached to and on all of that
Cloud Computing Lab

resource's descendants. For more information about IAM policies, see Understanding
policies.

You can manage access to projects, folders, and organizations with the Google Cloud
Console, the Google Cloud CLI, the REST API, or the Resource Manager client libraries.

Before you begin


∙ Enable the Resource Manager API.


Required roles

To get the permissions that you need to manage access to a project, folder, or organization,
ask your administrator to grant you the following IAM roles on the resource that you want to
manage access for (project, folder, or organization):

∙ To manage access to a project: Project IAM Admin (roles/resourcemanager.projectIamAdmin)

∙ To manage access to a folder: Folder Admin (roles/resourcemanager.folderAdmin)


∙ To manage access to projects, folders, and organizations: Organization Admin
(roles/resourcemanager.organizationAdmin)

∙ To manage access to almost all Google Cloud resources: Security Admin (roles/iam.securityAdmin)

For more information about granting roles, see Manage access.

These predefined roles contain the permissions required to manage access to a project,
folder, or organization.

View current access

You can view who has access to your project, folder, or organization using the Cloud
Console, the gcloud CLI, the REST API, or the Resource Manager client libraries.
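For example, with the gcloud CLI the current policy of a project can be printed as follows. The project ID my-project is a placeholder, and the command is echoed so it can be previewed without an authenticated gcloud session; remove the echo to run it:

```shell
# Hypothetical project ID; remove 'echo' to execute (requires gcloud authentication).
echo gcloud projects get-iam-policy my-project --format=json
```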

import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.services.cloudresourcemanager.v3.CloudResourceManager;
import com.google.api.services.cloudresourcemanager.v3.model.GetIamPolicyRequest;
import com.google.api.services.cloudresourcemanager.v3.model.Policy;
import com.google.api.services.iam.v1.IamScopes;
import com.google.auth.http.HttpCredentialsAdapter;
import com.google.auth.oauth2.GoogleCredentials;
import java.io.IOException;
import java.security.GeneralSecurityException;
import java.util.Collections;

public class GetPolicy {

// Gets a project's policy.


public static Policy getPolicy(String projectId) {
// projectId = "my-project-id"

Policy policy = null;

CloudResourceManager service = null;


try {
service = createCloudResourceManagerService();
} catch (IOException | GeneralSecurityException e) {
System.out.println("Unable to initialize service: \n" + e.toString());
return policy;
}

try {
GetIamPolicyRequest request = new GetIamPolicyRequest();
policy = service.projects().getIamPolicy(projectId, request).execute();
System.out.println("Policy retrieved: " + policy.toString());
return policy;
} catch (IOException e) {
System.out.println("Unable to get policy: \n" + e.toString());
return policy;
}
}

public static CloudResourceManager createCloudResourceManagerService()
    throws IOException, GeneralSecurityException {
// Use the Application Default Credentials strategy for authentication. For more info, see:
// https://cloud.google.com/docs/authentication/production#finding_credentials_automatically
GoogleCredentials credential =
GoogleCredentials.getApplicationDefault()
.createScoped(Collections.singleton(IamScopes.CLOUD_PLATFORM));

CloudResourceManager service =
new CloudResourceManager.Builder(
GoogleNetHttpTransport.newTrustedTransport(),
JacksonFactory.getDefaultInstance(),
new HttpCredentialsAdapter(credential))
.setApplicationName("service-accounts")
.build();
return service;
}
}

Grant or revoke a single role

You can use the Cloud Console and the gcloud CLI to quickly grant or revoke a single role
for a single principal, without editing the resource's IAM policy directly. Common types of
principals include Google accounts, service accounts, Google groups, and domains. For a list
of all principal types, see Concepts related to identity.

If you need help identifying the most appropriate predefined role, see Choose
predefined roles.

Grant a single role

To grant a single role to a principal, do the following:
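The equivalent gcloud CLI commands can be sketched as follows. The project ID, member, and role are placeholders, and the commands are echoed so they can be previewed without credentials; remove the echo to run them:

```shell
# Grant a single role to one principal (hypothetical values).
echo gcloud projects add-iam-policy-binding my-project \
     --member=user:tanya@example.com --role=roles/logging.logWriter
# Revoke the same role.
echo gcloud projects remove-iam-policy-binding my-project \
     --member=user:tanya@example.com --role=roles/logging.logWriter
```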


Experiment No. 08

Aim: Write an IAM policy by using client libraries.

Theory:

This experiment shows how to get started with the IAM methods of the Resource Manager API in your
favourite programming language.

For step-by-step guidance on this task directly in Cloud Console, click:

Getting started – Google Cloud Platform

Create a Google Cloud project

For this we need a new Google Cloud project.

Warning: If we use an existing project, then completing this will enable some users to
temporarily access resources in that project.
1. If we're new to Google Cloud, create an account to evaluate how our products perform in
real-world scenarios. New customers also get $300 in free credits to run, test, and deploy
workloads.

2. In the Google Cloud Console, on the project selector page, click Create project to begin
creating a new Google Cloud project.

Go to Project Selector
3. Enable the Resource Manager API.

4. Create a service account:


a. In the Cloud Console, go to the Create service account page.
Go to Create Service Account
b. Select your Project.
c. In the Service account name field, enter a name. The Cloud Console fills in
the Service account ID field based on this name.

d. In the Service account description field, enter a description. For example,


Service account for quickstart.

e. Click Create and continue.

f. To provide access to your project, grant the following role(s) to your service
account: Project IAM Admin.

In the Select a role list, select a role.

For additional roles, click Add another role and add each additional role.

g. Click Continue.

h. Click Done to finish creating the service account.

Do not close your browser window. You will use it in the next step.

5. Create a service account key:

a. In the Cloud Console, click the email address for the service account that you
created.
b. Click Keys.
c. Click Add key, then click Create new key.
d. Click Create. A JSON key file is downloaded to your computer.
e. Click Close.
6. Set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the path of the
JSON file that contains your service account key. This variable only applies to your current
shell session, so if you open a new session, set the variable again.

For Windows:

For PowerShell:

$env:GOOGLE_APPLICATION_CREDENTIALS="KEY_PATH"

Replace KEY_PATH with the path of the JSON file that contains your service account key.

For example:
$env:GOOGLE_APPLICATION_CREDENTIALS="C:\Users\username\Downloads\service-account-file.json"

For command prompt:

set GOOGLE_APPLICATION_CREDENTIALS=KEY_PATH

Replace KEY_PATH with the path of the JSON file that contains your service account key.

Install the client library


Read, modify, and write an IAM policy
The code snippet in this quickstart does the following:

∙ Initializes the Resource Manager service, which manages Google Cloud projects.
∙ Reads the IAM policy for your project.
∙ Modifies the IAM policy by granting the Log Writer role (roles/logging.logWriter) to your
Google Account.
∙ Writes the updated IAM policy.

∙ Prints all the principals that have the Log Writer role (roles/logging.logWriter) at the
project level.
∙ Revokes the Log Writer role.

Replace the following values before running the code snippet:

∙ your-project: The ID of your project.


∙ your-member: The email address for your Google Account, with the prefix user:. For
example, user:tanya@example.com.

To install and use the client library for Resource Manager, see Resource Manager client
libraries.
import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.services.cloudresourcemanager.v3.CloudResourceManager;
import com.google.api.services.cloudresourcemanager.v3.model.Binding;
import
com.google.api.services.cloudresourcemanager.v3.model.GetIamPolicyRequest;
import com.google.api.services.cloudresourcemanager.v3.model.Policy;
import
com.google.api.services.cloudresourcemanager.v3.model.SetIamPolicyRequest;
import com.google.api.services.iam.v1.IamScopes;
import com.google.auth.http.HttpCredentialsAdapter;
import com.google.auth.oauth2.GoogleCredentials;
import java.io.IOException;
import java.security.GeneralSecurityException;
import java.util.Collections;
import java.util.List;

public class Quickstart {

public static void main(String[] args) {


// TODO: Replace with your project ID in the form "projects/your-project-id".
String projectId = "your-project";
// TODO: Replace with the ID of your member in the form "user:member@example.com".
String member = "your-member";
// The role to be granted.
String role = "roles/logging.logWriter";

// Initializes the Cloud Resource Manager service.


CloudResourceManager crmService = null;
try {
crmService = initializeService();
} catch (IOException | GeneralSecurityException e) {
System.out.println("Unable to initialize service: \n" + e.getMessage() + e.getStackTrace());
}

// Grants your member the "Log writer" role for your project.
addBinding(crmService, projectId, member, role);

// Get the project's policy and print all members with the "Log Writer" role
Policy policy = getPolicy(crmService, projectId);
Binding binding = null;
List<Binding> bindings = policy.getBindings();
for (Binding b : bindings) {
if (b.getRole().equals(role)) {
binding = b;
break;
}
}
System.out.println("Role: " + binding.getRole());
System.out.print("Members: ");
for (String m : binding.getMembers()) {
System.out.print("[" + m + "] ");
}
System.out.println();

// Removes member from the "Log writer" role.


removeMember(crmService, projectId, member, role);
}
public static CloudResourceManager initializeService()
throws IOException, GeneralSecurityException {
// Use the Application Default Credentials strategy for authentication. For more info, see:
// https://cloud.google.com/docs/authentication/production#finding_credentials_automatically
GoogleCredentials credential =
GoogleCredentials.getApplicationDefault()
.createScoped(Collections.singleton(IamScopes.CLOUD_PLATFORM));

// Creates the Cloud Resource Manager service object.


CloudResourceManager service =
new CloudResourceManager.Builder(
GoogleNetHttpTransport.newTrustedTransport(),
JacksonFactory.getDefaultInstance(),
new HttpCredentialsAdapter(credential))
.setApplicationName("iam-quickstart")
.build();
return service;
}

public static void addBinding(
    CloudResourceManager crmService, String projectId, String member, String role) {

// Gets the project's policy.


Policy policy = getPolicy(crmService, projectId);

// Finds binding in policy, if it exists


Binding binding = null;
for (Binding b : policy.getBindings()) {
if (b.getRole().equals(role)) {
binding = b;
break;
}
}

if (binding != null) {
// If binding already exists, adds member to binding.
binding.getMembers().add(member);
} else {
// If binding does not exist, adds binding to policy.
binding = new Binding();
binding.setRole(role);
binding.setMembers(Collections.singletonList(member));
policy.getBindings().add(binding);
}

// Sets the updated policy


setPolicy(crmService, projectId, policy);
}

public static void removeMember(
    CloudResourceManager crmService, String projectId, String member, String role) {
  // Gets the project's policy.
Policy policy = getPolicy(crmService, projectId);

// Removes the member from the role.


Binding binding = null;
for (Binding b : policy.getBindings()) {
if (b.getRole().equals(role)) {
binding = b;

break;
}
}
if (binding.getMembers().contains(member)) {
binding.getMembers().remove(member);
if (binding.getMembers().isEmpty()) {
policy.getBindings().remove(binding);
}
}

// Sets the updated policy.


setPolicy(crmService, projectId, policy);
}

public static Policy getPolicy(CloudResourceManager crmService, String projectId) {
  // Gets the project's policy by calling the
// Cloud Resource Manager Projects API.
Policy policy = null;
try {
GetIamPolicyRequest request = new GetIamPolicyRequest();
policy = crmService.projects().getIamPolicy(projectId, request).execute();
} catch (IOException e) {
System.out.println("Unable to get policy: \n" + e.getMessage() + e.getStackTrace());
}
return policy;
}

private static void setPolicy(CloudResourceManager crmService, String projectId, Policy policy) {
  // Sets the project's policy by calling the
// Cloud Resource Manager Projects API.
try {
SetIamPolicyRequest request = new SetIamPolicyRequest();
request.setPolicy(policy);
crmService.projects().setIamPolicy(projectId, request).execute();
} catch (IOException e) {
System.out.println("Unable to set policy: \n" + e.getMessage() + e.getStackTrace());
}
}
}
Experiment No 09
Aim: To understand on demand application delivery and virtual desktop infrastructure.

Theory:

What is VDI (Virtual desktop infrastructure)?

Virtual desktop infrastructure (VDI) is a technology that refers to the use of virtual
machines to provide and manage virtual desktops. VDI hosts desktop environments on a
centralized server and deploys them to end-users on request.

How does VDI work?

In VDI, a hypervisor segments servers into virtual machines that in turn host virtual
desktops, which users access remotely from their devices. Users can access these virtual
desktops from any device or location, and all processing is done on the host server. Users
connect to their desktop instances through a connection broker, which is a software-based
gateway that acts as an intermediary between the user and the server.

VDI can be either persistent or nonpersistent. Each type offers different benefits:

∙ With persistent VDI, a user connects to the same desktop each time, and users are
able to personalize the desktop for their needs since changes are saved even after
the connection is reset. In other words, desktops in a persistent VDI environment
act exactly like a personal physical desktop.
∙ In contrast, nonpersistent VDI, where users connect to generic desktops and no
changes are saved, is usually simpler and cheaper, since there is no need to
maintain customized desktops between sessions. Nonpersistent VDI is often
used in organizations with a lot of task workers, or employees who perform a
limited set of repetitive tasks and don't need a customized desktop.

Why VDI?
VDI offers a number of advantages, such as user mobility, ease of access, flexibility and
greater security. In the past, its high-performance requirements made it costly and
challenging to deploy on legacy systems, which posed a barrier for many businesses.
However, the rise in enterprise adoption of hyperconverged infrastructure (HCI) offers a
solution that provides scalability and high performance at a lower cost.

What are the benefits of VDI?

Although VDI's complexity means that it isn't necessarily the right choice for every
organization, it offers a number of benefits for organizations that do use it. Some of these
benefits include:

∙ Remote access: VDI users can connect to their virtual desktop from any location
or device, making it easy for employees to access all their files and applications
and work remotely from anywhere in the world.
∙ Cost savings: Since processing is done on the server, the hardware requirements
for end devices are much lower. Users can access their virtual desktops from
older devices, thin clients, or even tablets, reducing the need for IT to purchase
new and expensive hardware.
∙ Security: In a VDI environment, data lives on the server rather than the end client
device. This serves to protect data if an endpoint device is ever stolen or
compromised.
∙ Centralized management: VDI's centralized format allows IT to easily patch,
update, or configure all the virtual desktops in a system.
What is VDI used for?

Although VDI can be used in all sorts of environments, there are a number of use cases that
are uniquely suited for VDI, including:

∙ Remote work: Since VDI makes virtual desktops easy to deploy and update from
a centralized location, an increasing number of companies are implementing it
for remote workers.
∙ Bring your own device (BYOD): VDI is an ideal solution for environments that
allow or require employees to use their own devices.

Since processing is done on a centralized server, VDI allows the use of a wider
range of devices. It also offers better security, since data lives on the server and
is not retained on the end client device.
∙ Task or shift work: Nonpersistent VDI is particularly well suited to organizations
such as call centers that have a large number of employees who use the same
software to perform limited tasks.

What is the difference between VDI and desktop virtualization?

Desktop virtualization is a generic term for any technology that separates a desktop
environment from the hardware used to access it. VDI is a type of desktop virtualization, but
desktop virtualization can also be implemented in different ways, such as remote desktop
services (RDS), where users connect to a shared desktop that runs on a remote server.
What is the difference between VDI and virtual machines (VMs)?

Virtual machines are the technology that powers VDI. VMs are software "machines" created by
partitioning a physical server into multiple virtual servers through the use of a hypervisor. (This
process is also known as server virtualization.) Virtual machines can be used for a number of
applications, one of which is running a virtual desktop in a VDI environment.

How to implement VDI?

When planning for VDI deployment, larger enterprises should consider implementing it in
an HCI environment, as HCI's scalability and high performance are a natural fit for VDI's
resource needs. On the other hand, implementing HCI for VDI is probably not necessary
(and would be overly expensive) for organizations that require fewer than 100 virtual
desktops.

In addition to infrastructure considerations, there are a number of best practices to follow


when implementing VDI:

∙ Prepare your network: Since VDI performance is so closely linked to network
performance, it's important to know peak usage times and anticipate demand
spikes to ensure sufficient network capacity.

∙ Avoid underprovisioning: Perform capacity planning in advance using a
performance monitoring tool to understand the resources each virtual desktop
consumes and to make sure you know your overall resource consumption needs.

∙ Understand your end-users' needs: Do your users need to be able to customize their desktops, or
are they task workers who can work from a generic desktop? (In other words, is your
organization better suited to a persistent or nonpersistent VDI setup?) What are your users'
performance requirements? You'll need to provision your setup differently for users who use
graphics-intensive applications versus those who just need access to the internet or to one or two
simple applications.

∙ Perform a pilot test: Most virtualization providers offer testing tools that you can use to run a
test VDI deployment beforehand; it's important to do so to make sure you've provisioned your
resources correctly.
Experiment No 10

Aim: To study containerization using Docker.

Theory:

Software as a service (SaaS) is a software delivery model where both the software and the
associated data are centrally hosted on the cloud. In this model, application functionality is
delivered through a subscription over the internet. But SaaS solutions are constantly
evolving.

Research and product development teams are always adding layers, features, tools, and
plug-ins. SaaS is cheap, smart, sexy, and constantly on the edge. All these points make a SaaS
solution a serious option for running a business. According to a study conducted by North
Bridge Venture Partners, "45% of businesses say they already, or plan to, run their company
from the cloud - showing how integral the cloud is to business".

The evolution of traditional products toward SaaS can be approached in different ways. (We
use the term "traditional" to identify products that are not cloud-native.) The easiest
approach is porting the product on cloud, which might be a good step forward if you don‘t
want to risk starting a migration to a cloud-native product but you want the typical
advantages of moving to cloud (for example, IT delegation, no infrastructure and
maintenance costs, and higher security). This cloudification process is basically a so-called
"lift-and-shift" migration: the product is ported "as is" on an infrastructure as a service (IaaS)
cloud provider.

The main objective of this article is to show a further evolution of the basic cloudification
process: to leverage containerization to address the previously described questions and
achieve the benefits of a pure, cloud-native product. Specifically, this article discusses an
example of the approach that our team used to move the IBM Control Desk product to a
microservices pattern using Docker container technology, without the need to redesign the
product or touch the code.
Cloud Computing Lab

The application was split into its basic components and deployed on different WebSphere
Liberty containers to achieve a more manageable provisioning pattern - both in
time-to-market and in overall IT operations activities.

Example: IBM Control Desk existing solution

IBM Control Desk provides IT service management to simplify support of users and
infrastructures. It was built on the Tivoli Product Automation Engine component embedded
in the IBM Maximo Asset Management product.

The standard architecture consists of the following parts:

∙ A Java Enterprise application (UI and back end).
∙ A database.
∙ A Node.js application (a service portal UI).
∙ A web server (a load balancer).

As the Java runtime environment, WebSphere Application Server for Network Deployment was the typical choice, while the database manager could be Oracle, DB2, or Microsoft SQL Server. The most common web server option was IBM HTTP Server, especially when working with WebSphere Application Server for Network Deployment, as shown in the following diagram:

The Maximo Asset Management deployment guide, which also included best practices, explained how to split the all-in-one application into four different applications: Maximo User Interface, Maximo Cron, Maximo Report, and the Integration Framework. The effort of achieving this pattern fell entirely on the local IT team, with no out-of-the-box procedure to support IT engineers. Maximo Asset Management 7.6.1 introduced so-called "Liberty support" by further splitting the applications and providing a suite of build scripts that builds and bundles only the modules needed by each application role.

IBM Control Desk 7.6.1 was built on top of Maximo Asset Management 7.6.1 and inherited
the Liberty support that is used for achieving microservice decomposition.

Our deployment path to containerize the application

The deployment path our team used illustrates how to "SaaSify" an application. Our team's process included the following tasks:

∙ Install IBM Control Desk 7.6.1 on the administrative workstation node.
∙ Deploy the IBM Control Desk database on a DB2 node.
∙ Build a Docker image for the IBM Control Desk and Service Portal.
∙ Build a Docker image for the JMS server.
∙ Create the network for allowing direct communication among containers.
∙ Run one container for each Docker image we built.
∙ Configure an IBM HTTP Server for correctly routing traffic to the containers.

First step: Installing and deploying IBM Control Desk 7.6.1

For the purpose of this article, we decided to use the same node as the administrative
workstation and the DB2 node. We used a Red Hat Enterprise Linux Server 7 based virtual
machine with two CPUs and 4 GB RAM, installed with the IBM Control Desk product and
deployed on the MAXDB76 database.

The IBM Control Desk installation directory (which contains the application build code) and the Service Portal installation directory are shared through the network file system (NFS) with the Docker engine. Therefore, the applications are available for the build on both nodes.
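For reference, an NFS share of this kind can be sketched as follows. The paths follow the install directory used later in this article, but the host names and export options are illustrative assumptions, not taken from our environment:

```shell
# /etc/exports on the administrative workstation (host name is illustrative):
/opt/IBM/SMP   docker-engine.example.com(ro,sync)

# On the Docker engine node, mount the exported directory (illustrative):
# mount -t nfs admin-ws.example.com:/opt/IBM/SMP /opt/IBM/SMP
```

After `exportfs -ra` on the workstation, the build code appears under the same path on both nodes.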

Second step: Building the Docker images

We decided to strictly follow what is stated in the Maximo Asset Management tech note, so we produced five different applications:

∙ UI
∙ Cron
∙ Maximo Enterprise Adapter (MEA)
∙ API
∙ Report (Business Intelligence and Reporting Tools - BIRT - Report Only Server or BROS)

Then, we added two applications: a service portal, and the JMS server, which receives, stores, and forwards messages wherever the JMS protocol is used.

∙ Service Portal (SP)
∙ JMS Server (JMS)

The following illustration shows a pictorial representation of the architecture:


We built the applications by following the instructions in Maximo Asset Management 7.6.1 WebSphere Liberty Support, which produce a series of web archive (WAR) files.

For example, for the UI application we ran the following command on the administrative
workstation:

cd /opt/IBM/SMP/maximo/deployment/was-liberty-default
./buildmaximoui-war.sh && ./buildmaximo-xwar.sh

The deployment/maximo-ui/maximo-ui-server directory had the following structure:

deployment/maximo-ui/maximo-ui-server/
├── apps
│ ├── maximoui.war
│ └── maximo-x.war
├── Dockerfile
├── jvm.options
└── server.xml

The server.xml file was the server descriptor, jvm.options contained the system properties to set
at the JVM startup level, Dockerfile was the file used for building the image, and apps
contained the build artifacts:

-rw-r--r-- 1 root root 1157149383 Mar 20 09:57 maximoui.war
-rw-r--r-- 1 root root   70932873 Mar 20 10:01 maximo-x.war

From the Dockerfile path on the administration workstation, we built the Docker image by
running the following command:

docker build . -t icd/ui:7.6.1.0



We did the same for the other Liberty applications, so that we had the following images:

icd/ui:7.6.1.0
icd/cron:7.6.1.0
icd/mea:7.6.1.0
icd/bros:7.6.1.0
icd/api:7.6.1.0

The JMS server did not come by default with Maximo Liberty support; we needed to create it from scratch. Our procedure was based on the WebSphere Application Server Liberty documentation. (You can see the example server.xml file in the Technical procedure section.)

We built the Service Portal Docker image from the Node.js image. For the Service Portal
application, we copied the full application tree, the certificate, and the private key exported
by the web server to allow communication between the two components. Eventually, we
obtained the following images:

icd/ui:7.6.1.0
icd/cron:7.6.1.0
icd/mea:7.6.1.0
icd/bros:7.6.1.0
icd/api:7.6.1.0
icd/jms:1.0.0.0
icd/sp:7.6.1.0
Third step: Deploying the containers

For the Docker engine, we chose an Ubuntu 18.04 machine with four CPUs and 32 GB
RAM, a typical size for standard SaaS architecture.

After we had our images, we started deploying containers from them. We first deployed with
one container per image. Then we carried out a scalability test with two UI containers, as
discussed in the results section.

We created a Docker network called ICDNet, and we added each running container to it,
which allowed easy communication between all the containers.
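These two steps can be sketched with plain docker commands. The container names and port mappings follow the docker ps listing in this section; the DRYRUN variable makes the script print the commands instead of executing them, so clear it to run on a real Docker host:

```shell
# Sketch of network creation and container start-up (not the full set of
# containers). DRYRUN=echo prints each command; clear it to execute.
DRYRUN=echo
$DRYRUN docker network create --driver bridge ICDNet
$DRYRUN docker run -d --name UI  --network ICDNet -p 9080:9080 -p 9443:9443 icd/ui:7.6.1.0
$DRYRUN docker run -d --name API --network ICDNet -p 9081:9080 -p 9444:9443 icd/api:7.6.1.0
$DRYRUN docker run -d --name JMS --network ICDNet -p 9011:9011 icd/jms:1.0.0.0
```

Containers attached to the same user-defined bridge network can reach each other by container name, which is what makes the direct communication between components work.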

In the end, our suitably formatted docker ps command looked like the following example:

NAME SP
IMAGE icd/sp:7.6.1.0
PORTS 0.0.0.0:3000->3000/tcp

NAME CRON
IMAGE icd/cron:7.6.1.0
PORTS 9080/tcp, 9443/tcp

NAME UI
IMAGE icd/ui:7.6.1.0
PORTS 0.0.0.0:9080->9080/tcp, 0.0.0.0:9443->9443/tcp

NAME API
IMAGE icd/api:7.6.1.0
PORTS 0.0.0.0:9081->9080/tcp, 0.0.0.0:9444->9443/tcp

NAME MEA
IMAGE icd/mea:7.6.1.0
PORTS 0.0.0.0:9084->9080/tcp, 0.0.0.0:9447->9443/tcp

NAME JMS
IMAGE icd/jms:1.0.0.0
PORTS 9080/tcp, 0.0.0.0:9011->9011/tcp, 9443/tcp

NAME BROS
IMAGE icd/bros:7.6.1.0
PORTS 0.0.0.0:9085->9080/tcp, 0.0.0.0:9448->9443/tcp

All resources (containers, networks, and volumes) are created with the Docker Compose tool (the docker-compose.yml file is included in the technical procedure section). The YAML file adds parameters to the run command for each container, for example the database host and the environment variables needed to configure each container correctly.
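With the Compose file in place, the whole stack can be brought up and torn down with single commands. As before, this is a sketch: DRYRUN=echo prints the commands, and clearing it executes them on the Docker engine node:

```shell
# One-shot bring-up of all services defined in docker-compose.yml.
DRYRUN=echo   # clear to actually execute on the Docker engine node
$DRYRUN docker-compose up -d   # create the network, volumes, and containers
$DRYRUN docker-compose ps      # verify the running services
$DRYRUN docker-compose down    # stop and remove everything
```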

Our technical procedure

The following example shows the Dockerfile for the Liberty-based image:

FROM websphere-liberty
USER root

# Copy the applications
COPY --chown=default:root apps /opt/ibm/wlp/usr/servers/defaultServer/apps

# Copy the server.xml and JVM options
COPY server.xml /opt/ibm/wlp/usr/servers/defaultServer/
COPY jvm.options /opt/ibm/wlp/usr/servers/defaultServer/

# install the additional utilities listed in the server.xml
RUN ["/opt/ibm/wlp/bin/installUtility","install","defaultServer"]

The following example shows the server.xml file for the JMS Docker image:
<server description="new server">
<featureManager>
<feature>servlet-3.1</feature>
<feature>wasJmsClient-2.0</feature>
<feature>wasJmsServer-1.0</feature>
<feature>jmsMdb-3.2</feature>
</featureManager>

<!-- To allow access to this server from a remote client host="*" has been added to the following element
-->
<wasJmsEndpoint id="InboundJmsEndpoint" host="*"
wasJmsPort="9011" wasJmsSSLPort="9100"/>

<!-- A messaging engine is a component, running inside a server, that manages messaging resources.
Applications are connected to a messaging
Cloud Computing Lab

engine when they send and receive messages. When wasJmsServer-1.0 feature is added in server.xml by
default a messaging engine runtime is initialized which contains a default queue (Default.Queue) and a default
topic space(Default.Topic.Space).
If the user wants to create a new queue or topic space, then the
messagingEngine element must be defined in server.xml -->
<messagingEngine>
<queue id="jms/maximo/int/queues/sqin" sendAllowed="true" receiveAllowed="true" maintainStrictOrder="false"/>
<queue id="jms/maximo/int/queues/sqout" sendAllowed="true" receiveAllowed="true" maintainStrictOrder="false"/>
<queue id="jms/maximo/int/queues/cqin" sendAllowed="true" receiveAllowed="true" maintainStrictOrder="false"/>
<queue id="jms/maximo/int/queues/notf" sendAllowed="true" receiveAllowed="true" maintainStrictOrder="false"/>
<queue id="jms/maximo/int/queues/weather" sendAllowed="true" receiveAllowed="true" maintainStrictOrder="false"/>
</messagingEngine>
</server>

The following example shows the Dockerfile for the JMS Server image:

FROM websphere-liberty

COPY files/server.xml /opt/ibm/wlp/usr/servers/defaultServer/

RUN ["/opt/ibm/wlp/bin/installUtility","install","defaultServer"]

The following example shows the Dockerfile for the Service Portal image:

FROM aricenteam/aricentrepo:nodejs
USER root

# copy the serviceportal tree into the /opt/ibm/ng directory
RUN mkdir -p /opt/ibm/ng
COPY ng /opt/ibm/ng/

# copy certificate and key files
COPY server.crt /opt/ibm/ng
COPY server.key /opt/ibm/ng

EXPOSE 3000
WORKDIR /opt/ibm/ng
CMD ["node", "app.js"]

The following example shows the docker-compose.yml file:

version: '3.7'

services:
ui:
image: icd:ui
ports:
- "9443"
environment:
JVM_ARGS: "-Dmxe.name=MAXIMO_UI"
networks:
- icd_net
volumes:
- doclinks:/DOCLINKS
- search:/SEARCH
api:
image: icd:api
ports:
- "9443"
environment:
JVM_ARGS: "-Dmxe.name=MAXIMO_API"
networks:
- icd_net
volumes:
- doclinks:/DOCLINKS
- search:/SEARCH
cron:
image: icd:cron
ports:
- "9443"
environment:
JVM_ARGS: "-Dmxe.name=MAXIMO_CRON"
networks:
- icd_net
volumes:
- doclinks:/DOCLINKS
- search:/SEARCH
mea:
image: icd:mea
ports:
- "9443"
environment:
JVM_ARGS: "-Dmxe.name=MAXIMO_MEA"
networks:
- icd_net
volumes:
- doclinks:/DOCLINKS
- search:/SEARCH
jms:
image: icd:jms_server
networks:
- icd_net

networks:
icd_net:
driver: bridge

volumes:
doclinks:
search:

After we had our Control Desk instance up and running, we took some measurements with the Rational Performance Tester tool to compare the performance of a classic instance and the container-based one. We ran two different tests, with workloads of 20 and 50 users, and monitored the CPU and memory of the virtual machines with an nmon script.

As shown in the following image, on average we saw larger memory consumption by the classic instance (deployed on WebSphere Application Server for Network Deployment), due mainly to the number of running Java processes; the classic instance also includes the deployment manager and node agent. On the CPU side, the behaviors overlapped.

The following table shows the average, minimum, and maximum values for the page response time (PRT) as a function of time. The Docker case performed slightly better, with an average response time roughly 10% lower than in the classic case.

For the 50-user scenario, we also performed a scalability test by adding another UI container to the instance to see whether the workload was balanced well.

The following screen capture shows the page response time (PRT) as a function of time for the cases of one and two UI containers. The results confirmed what was expected: performance with two containers improved by a factor of 2, so we concluded that the instance scales with a quasi-ideal trend.
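When the UI service is managed by Compose, this two-container test can be reproduced with the --scale flag, as sketched below. Note that scaled services must not pin host ports, which is why the ui service in our docker-compose.yml publishes only the container port "9443":

```shell
# Sketch of the scalability test: start a second UI container behind the
# same network. DRYRUN=echo prints the command; clear it to execute.
DRYRUN=echo
$DRYRUN docker-compose up -d --scale ui=2
```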
Conclusion: Hence, we have studied containerization using Docker by leveraging the native WebSphere Liberty support of the IBM Control Desk product.
