CCL Lab Manual-1
Sr. No.    List of Experiments    Page No.
Scope:
THEORY:
What is Cloud?
The term "Cloud" refers to a network or the Internet. In other words, we can say that
a cloud is something that is present at a remote location. A cloud can provide
services over public and private networks, i.e., WAN, LAN, or VPN.
Deployment models define the type of access to the cloud, i.e., how the cloud is
located? Cloud can have any of the four types of access: Public, Private, Hybrid,
and Community.
Public Cloud
The public cloud allows systems and services to be easily accessible to the
general public. A public cloud may be less secure because of its openness. Public
clouds are owned and operated by third parties; they deliver superior economies of
scale to customers, as the infrastructure costs are spread among a mix of users,
giving each individual client an attractive low-cost, "pay-as-you-go" model. All
customers share the same infrastructure pool with limited configuration, security
protections, and availability variances. These are managed and supported by the
cloud provider. One advantage of a public cloud is that it may be larger than an
enterprise's cloud, thus providing the ability to scale seamlessly, on demand.
Private Cloud
The private cloud allows systems and services to be accessible within an
organization. It offers increased security because of its private nature.
Community Cloud
The community cloud allows systems and services to be accessible by a group of
organizations.
Hybrid Cloud
The hybrid cloud is a mixture of public and private cloud, in which the critical
activities are performed using private cloud while the non-critical activities are
performed using public cloud.
Hybrid Clouds combine both public and private cloud models. With a Hybrid
Cloud, service providers can utilize 3rd party Cloud Providers in a full or partial
manner thus increasing the flexibility of computing. The Hybrid cloud
environment is capable of providing on-demand, externally provisioned scale.
Conclusion:
In this way, we are able to study cloud computing along with its deployment models and
service models.
EXPERIMENT NO. 02
Concept: Virtualization
Theory:
What is Virtualization?
Virtualization allows multiple operating system instances to run concurrently on a
single computer; it is a means of separating hardware from a single operating system.
Each "guest" OS is managed by a Virtual Machine Monitor (VMM), also known as a
hypervisor. Because the virtualization system sits between the guest and the hardware, it
can control the guests' use of CPU, memory, and storage, even allowing a guest OS to
migrate from one machine to another.
Virtualization is a hardware-reducing, cost-saving, and energy-saving technology
that is rapidly transforming the IT landscape and fundamentally changing the way
that people compute.
Before Virtualization:
● Single OS image per machine
● Software and hardware tightly coupled
● Running multiple applications on same machine often creates conflict
● Inflexible and costly infrastructure
After Virtualization:
● Hardware-independence of operating system and applications
● Virtual machines can be provisioned to any system
● Can manage OS and application as a single unit by encapsulating them into virtual
Machines
(Figure: VM1 and VM2 run on a hypervisor, which in turn runs on the hardware.)
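The layering shown in the figure, with the hypervisor mediating between guests and the hardware, can be sketched as a toy model in Python (illustrative only; no real hypervisor exposes this interface, and the names and limits are invented):

```python
class ToyHypervisor:
    """Toy model: tracks CPU and memory handed out to guest VMs."""

    def __init__(self, cpus, mem_gb):
        self.free_cpus = cpus
        self.free_mem = mem_gb
        self.guests = {}

    def start_guest(self, name, cpus, mem_gb):
        # The hypervisor sits between the guests and the hardware, so it can
        # refuse an allocation that exceeds the remaining physical resources.
        if cpus > self.free_cpus or mem_gb > self.free_mem:
            raise RuntimeError("insufficient physical resources")
        self.free_cpus -= cpus
        self.free_mem -= mem_gb
        self.guests[name] = (cpus, mem_gb)

hv = ToyHypervisor(cpus=8, mem_gb=16)
hv.start_guest("VM1", cpus=2, mem_gb=4)
hv.start_guest("VM2", cpus=2, mem_gb=4)
print(hv.free_cpus, hv.free_mem)  # 4 8
```

The point of the sketch is only the mediation: each guest sees its allocation, while the hypervisor tracks the shared physical pool.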
Virtualization Software
● VMware
o Server
o Player
o Workstation
o ESX Server
● QEMU
● Xen
● Microsoft Virtual PC/Server
o Connectix
Step 4: Specify RAM size, HDD size, and network configuration, and finish the wizard.
Step 5: To select the media for installation, click on Start and browse for the ISO file.
Step 6: Complete the installation and use it.
Step 7: To connect the OS to the network, change the network mode to Bridged Adapter.
Download the ISO for XenServer.
Create a New > Virtual Machine > Guest Operating System "VMware ESX – VMware ESXi 5".
Make sure you enable "Virtualize Intel VT-x/EPT or AMD-V/RVI".
Mount the ISO into your new virtual machine and start the virtual machine to get into the
XenServer boot loader.
Pick your keyboard layout.
Click on OK.
Accept the EULA.
Click OK.
Click OK.
From here, configure your IP address or let DHCP assign one for you.
Give your XenServer a host name and assign a DNS server, if you don't want to let your
DHCP server configure the DNS for you.
Conclusion:
In this way, we are able to study virtualization in the cloud, and we created virtual
machines on an open source OS.
Experiment No. 03
Theory:
In the most basic cloud-service model, and according to the IETF (Internet Engineering Task
Force), providers of IaaS offer computers (physical or, more often, virtual machines) and
other resources. IaaS refers to online services that abstract the user from the details of the
infrastructure, such as physical computing resources, location, data partitioning, scaling,
security, and backup. A hypervisor, such as Xen, Oracle VirtualBox, KVM, VMware
ESX/ESXi, or Hyper-V, runs the virtual machines as guests. Pools of hypervisors within the
cloud operational system can support large numbers of virtual machines and the ability to
scale services up and down according to customers' varying requirements. IaaS clouds often
offer additional resources such as a virtual-machine disk-image library, raw block storage,
file or object storage, firewalls, load balancers, IP addresses, virtual local area networks
(VLANs), and software bundles. IaaS-cloud providers supply these resources on demand
from their large pools of equipment installed in data centres. For wide-area connectivity,
customers can use either the Internet or carrier clouds (dedicated virtual private networks).
To deploy their applications, cloud users install operating-system images and their application
software on the cloud infrastructure. In this model, the cloud user patches and maintains the
operating systems and the application software. Cloud providers typically bill IaaS services on a
utility computing basis: cost reflects the amount of resources allocated and consumed.
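The utility-billing model can be illustrated with a small calculation; the rates and resource names below are invented for this sketch, not any provider's actual prices:

```python
# Hypothetical hourly rates per unit of each resource.
RATES = {"vcpu_hours": 0.02, "ram_gb_hours": 0.005, "storage_gb_hours": 0.0001}

def iaas_bill(usage):
    """Cost reflects the amount of resources allocated and consumed."""
    return sum(RATES[res] * amount for res, amount in usage.items())

# A VM with 2 vCPUs, 4 GB RAM, and 50 GB of disk, run for 100 hours.
usage = {"vcpu_hours": 200, "ram_gb_hours": 400, "storage_gb_hours": 5000}
print(round(iaas_bill(usage), 2))  # 6.5
```

The customer pays only for what was allocated for the hours it was allocated, which is what distinguishes utility billing from buying the hardware outright.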
Openstack
To install OpenStack using DevStack, any Linux-based distribution with 2GB RAM can
be used to start the implementation of IaaS.
Here are the steps that need to be followed for the installation.
1. Install Git:
$ sudo apt-get install git
2. Clone the DevStack repository and change into the directory. This code will set up the
cloud infrastructure:
$ git clone https://opendev.org/openstack/devstack
$ cd devstack/
~/devstack$ ls
3. Run the setup script:
$ ./stack.sh
Here, the MySQL database password is entered. There's no need to worry about installing
MySQL separately on this system. We have to specify a password, and this script will install
MySQL and use this password there.
After all these steps, the machine becomes the cloud-service-providing platform. Here, 1.1.1.1 is
the IP of my first network interface.
We can type the host IP provided by the script into a browser in order to access the dashboard,
'Horizon'. We can log in with the username 'admin' or 'demo' and the password 'admin'.
You can view all the process logs inside screen by typing the following command:
$ screen -x
Executing the following will kill all the services, but it should be noted that it will not delete any
of the code:
$ sudo killall screen
localrc configurations
localrc is the file in which all the local configurations (local machine parameters) are
maintained. After the first successful stack.sh run, you will see that a localrc file gets created
with the configuration values you specified while running that script.
The following fields are specified in the localrc file:
DATABASE_PASSWORD
RABBIT_PASSWORD
SERVICE_TOKEN
SERVICE_PASSWORD
ADMIN_PASSWORD
If we specify the option OFFLINE=True in the localrc file inside the DevStack directory,
and then run stack.sh, it will not check any parameter over the Internet; it will set up
DevStack using only the packages and code residing in the local system. During code
development, there is a need to commit the local changes in the /opt/stack/nova repository
before restacking (re-running stack.sh) with the RECLONE=yes option; otherwise, the
local changes will be lost when the repository is re-cloned.
To use more than one interface, there is a need to specify which one to use for the external IP
using this configuration:
HOST_IP=xxx.xxx.xxx.xxx
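Putting the fields above together, a minimal localrc might look like the following sketch (every value is a placeholder; substitute your own passwords and host IP):

```
DATABASE_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_TOKEN=secret
SERVICE_PASSWORD=secret
ADMIN_PASSWORD=secret
HOST_IP=192.168.1.10
# Optional: build only from packages and code already on the local system
OFFLINE=True
```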
Cinder on DevStack
Cinder is a block storage service for OpenStack that is designed to allow the use of a reference
implementation (LVM) to present storage resources to end users, which can be consumed by the
OpenStack Compute project (Nova). Cinder is used to virtualise pools of block storage
devices. It provides end users with a self-service API to request and use the resources, without
requiring any specific complex knowledge of the location and configuration of the storage
where it is actually deployed.
All the Cinder operations can be performed via the cinder command-line client. For example,
to create a volume of size 1 GB:
$ cinder create 1
▪ To see more information about a command, just type cinder help <command>:
$ cinder help create
▪ To create a Cinder volume of size 1 GB with a name, use cinder create --display-name:
$ cinder create 1 --display-name myvolume
The output of cinder list then shows:
ID  | Status    | Display Name | Size | Volume Type | Bootable | Attached To
id1 | available | myvolume     | 1    | None        | false    |
id2 | available | None         | 1    | None        | false    |
▪ To delete the first volume (the one without a name), use the cinder delete <volume_id>
command. If we execute cinder list quickly enough, the status of the volume changing to
'deleting' can be seen, and after some time the volume will be deleted.
▪ To list the snapshots:
$ cinder snapshot-list
ID  | Volume ID | Status    | Display Name | Size
id1 | id2       | available | None         | 1
There are many functions and features available in OpenStack related to cloud deployment.
Depending upon the type of implementation (including load balancing, energy optimisation,
security, and others), the cloud computing framework OpenStack can be explored in depth.
Conclusion:
Questions:
Theory:
Google Docs:
Google Docs (docs.google.com) is the most popular web-based word processor available today.
Docs is actually a suite of applications that also includes Google Spreadsheets and Google
Presentations; the Docs part of the suite is the actual word processing application. Like all
things Google, the Google Docs interface is clean and, most important, it works well without
imposing a steep learning curve. Basic formatting is easy enough to do, storage space for your
documents is generous, and sharing, collaboration, and version control are a snap. When you
log in to Google Docs with your Google account, you see the home page for all the Docs
applications (word processing, spreadsheets, and presentations); all your previously created
documents are listed on this page. The leftmost pane helps you organize your documents. You
can store files in folders, view documents by type (word processing document or spreadsheet),
and display documents shared with specific people.
Collaborating on Spreadsheets:
If the word processor is the most-used office application, the spreadsheet is the second most
important. Office users and home users alike use spreadsheets to prepare budgets, create
expense reports, perform "what if" analyses, and otherwise crunch their numbers. And thus we
come to spreadsheets in the cloud: the web-based spreadsheets that let you share your
numbers with other users via the Internet. All the advantages of web-based word processors
apply to web-based spreadsheets: group collaboration, anywhere/anytime access, portability,
and so on.
Exploring Web-Based Spreadsheets:
Several web-based spreadsheet applications are worthy competitors to Microsoft Excel. Chief
among these is Google Spreadsheets, which we'll discuss first, but there are many other apps
that also warrant your attention. If you're at all interested in moving your number crunching and
financial analysis into the cloud, these web-based applications are worth checking out.
Google Spreadsheets
Google Spreadsheets was Google's first application in the cloud office suite, first known as
Google Docs & Spreadsheets and now just known as Google Docs. As befits its longevity,
Google Spreadsheets is Google's most sophisticated web-based application. You access your
existing spreadsheets, and create new ones, from the main Google Docs page (docs.google.com).
To create a new spreadsheet, click the New button and select Spreadsheet; the new spreadsheet
opens in a new window and you can edit it.
Collaborating on Presentations:
One of the last components of the traditional office suite to move into the cloud is the
presentation application. Microsoft PowerPoint has ruled the desktop forever, and it has proven
difficult to offer competitive functionality in a web-based application; if nothing else, slides
with large graphics are slow to upload and download in an efficient manner. That said, there is a
new crop of web-based presentation applications that aim to give PowerPoint a run for its
money. The big players, as might be expected, are Google and Zoho, but there are several other
applications that are worth considering if you need to take your presentations with you on the
road, or collaborate with users in other locations.
Google Presentations:
If there's a leader in the online presentations market, it's probably Google Presentations, simply
because of Google's dominant position with other web-based office apps. Google Presentations
is the latest addition to the Google Docs suite of apps, joining the Google Docs word processor
and the Google Spreadsheets spreadsheet application. Users can create new presentations and
open existing ones from the main Google Docs page (docs.google.com). Open a presentation by
clicking its title or icon. Create a new presentation by selecting New, then Presentation. Your
presentation now opens in a new window on your desktop. What you do get is the ability to add
title, text, and blank slides; a PowerPoint-like slide sorter pane; a selection of predesigned
themes; the ability to publish your file to the web or export it as a PowerPoint PPT or Adobe
PDF file; and quick and easy sharing and collaboration, the same as with Google's other
web-based apps.
Collaborating on Databases:
A database does many of the same things that a spreadsheet does, but in a different and often
more efficient manner. In fact, many small businesses use spreadsheets for database-like
functions. A local database is one in which all the data is stored on an individual computer. A
networked database is one in which the data is stored on a computer or server connected to a
network, and is accessible by all computers connected to that network. Finally, an online or
web-based database stores data on a cloud of servers somewhere on the Internet, which is
accessible by any authorized user with an Internet connection. The primary advantage of a
web-based database is that data can easily be shared with a large number of other users, no
matter where they may be located. When your employee database is in the cloud, for example,
it can be accessed and updated from any office or location.
Cebase
Cebase (www.cebase.com) lets you create new database applications with a few clicks of your
mouse; all you have to do is fill in a few forms and make a few choices from some pull-down
lists. Data entry is via web forms, and then your data is displayed in a spreadsheet-like layout.
You can then sort, filter, and group your data as you like. Sharing is accomplished by clicking
the Share link at the top of any data page. You invite users to share your database via email, and
then adjust their permissions after they've accepted your invitation.
Result:
SNAPSHOTS
Step 1: Sign in to the Google Drive website with your Google account.
If you don't have a Google account, you can create one for free. Google Drive will allow
you to store your files in the cloud, as well as create documents and forms through the
Google Drive web interface.
Step 2: Add files to your drive.
There are two ways to add files to your drive. You can create Google Drive documents,
or you can upload files from your computer. To create a new file, click the CREATE
button. To upload a file, click the "Up Arrow" button next to the CREATE button.
Conclusion:
Google Docs provides an efficient way to store data. It fits well into the Software as a Service
(SaaS) model. It has varied options to create documents, presentations, and spreadsheets. It
saves documents automatically every few seconds, and they can be shared anywhere on the
Internet at the click of a button.
Questions:
Prerequisite:
Knowledge of Access Control, Authentication and Authorization
Objective:
The objective of this experiment is to provide students an overview of the security issues of the
cloud and how to manage various user groups over the cloud.
Outcome:
After successful completion of this experiment, the student will be able to:
∙ Analyze security issues on the cloud
Theory:
AWS Identity and Access Management (IAM) is a web service that helps you securely control
access to AWS resources. You use IAM to control who is authenticated (signed in) and
authorized (has permissions) to use resources.
Note: Only one AWS account is created; if 200 employees across departments (HR, Marketing,
Finance, Development) use the same account, IAM is needed so that an individual identity can
be created for each employee instead of creating 200 separate accounts.
✔ When you first create an AWS account, you begin with a single sign-in identity that has complete
access to all AWS services and resources in the account.
✔ This identity is called the AWS account root user and is accessed by signing in with the email address
and password that you used to create the account.
✔ We strongly recommend that you do not use the root user for your everyday tasks, even the
administrative ones. Instead, adhere to the best practice of using the root user only to create your
first IAM user. Then securely lock away the root user credentials and use them to perform only a few
account and service management tasks.
IAM gives you the following features:
1. Shared access to your AWS account:
You can grant other people permission to administer and use resources in your AWS account without
having to share your password or access key. (by creating user name and password)
2. Granular permissions (read-only, read-write, etc.):
You can grant different permissions to different people for different resources. For example,
you might allow some users complete access to Amazon Elastic Compute Cloud (Amazon
EC2), Amazon Simple Storage Service (Amazon S3), Amazon DynamoDB, Amazon Redshift,
and other AWS services.
For other users, you can allow read-only access to just some S3 buckets, or permission to
administer just some EC2 instances, or to access your billing information but nothing else.
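Granular permissions like these are expressed as JSON policy documents. As a sketch, a read-only policy for a single S3 bucket might look like this (the bucket name example-bucket is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
```

Attaching this policy to a user or group allows listing the bucket and reading its objects, and nothing else.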
3. Secure access to AWS resources for applications that run on Amazon EC2:
You can use IAM features to securely provide credentials for applications that run on EC2
instances. These credentials provide permissions for your application to access other AWS
resources. Examples include S3 buckets and DynamoDB tables.
9. Eventually consistent
IAM, like many other AWS services, is eventually consistent (IAM data is replicated across
multiple zones). IAM achieves high availability by replicating data across multiple servers
within Amazon's data centres around the world.
If a request to change some data is successful, the change is committed and safely stored. However, the
change must be replicated across IAM, which can take some time. Such changes include creating or
updating users, groups, roles, or policies. We recommend that you do not include such IAM changes in
the critical, high-availability code paths of your application. Instead, make IAM changes in a separate
initialization or setup routine that you run less frequently. Also, be sure to verify that the changes have
been
propagated before production workflows depend on them. For more information, see Changes that I
make are not always immediately visible (p. 466).
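The advice above (verify that changes have propagated before production workflows depend on them) can be sketched as a small polling helper. This is a generic illustration, not an AWS API; the check function stands in for whatever read-back call your application makes:

```python
import time

def wait_until(check, timeout=30.0, interval=1.0):
    """Poll check() until it returns True or the timeout expires.

    Returns True if the change was observed in time, False otherwise.
    Run this in a setup routine, not in a high-availability code path.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# Example with a stand-in check that succeeds on the third attempt.
attempts = {"n": 0}
def change_visible():
    attempts["n"] += 1
    return attempts["n"] >= 3

print(wait_until(change_visible, timeout=5.0, interval=0.01))  # True
```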
10. Free to use
AWS Identity and Access Management (IAM) and AWS Security Token Service (AWS STS) are features
of your AWS account offered at no additional charge. You are charged only when you access other AWS
services using your IAM users or AWS STS temporary security credentials. For information about the
pricing of other AWS products, see the Amazon Web Services pricing page.
∙ Components of IAM
1. Users
2. Groups
3. Roles
4. Policies
That is, roles are assigned to applications, while users are assigned to people.
(Suppose you have created an EC2 instance, and inside that instance you have hosted a website
that accesses S3 services; i.e., the application has to interact with the S3 service. To give that
web application permission to access the S3 service, you need to create a role.)
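The four components above can be related in a toy Python model (purely illustrative; real IAM evaluates JSON policy documents, and the policy, group, and user names here are invented):

```python
# Toy model: policies grant actions; they attach to users, groups, or roles.
policies = {
    "S3ReadOnly": {"s3:GetObject", "s3:ListBucket"},
    "EC2Admin": {"ec2:*"},
}
groups = {"Finance": ["S3ReadOnly"]}            # group -> attached policies
users = {"alice": {"groups": ["Finance"], "policies": []}}
roles = {"WebAppRole": ["S3ReadOnly"]}           # assumed by applications

def user_actions(name):
    """Union of actions from a user's own and group-inherited policies."""
    u = users[name]
    attached = list(u["policies"])
    for g in u["groups"]:
        attached += groups[g]
    actions = set()
    for p in attached:
        actions |= policies[p]
    return actions

print("s3:GetObject" in user_actions("alice"))  # True
print("ec2:*" in user_actions("alice"))         # False
```

The sketch shows why groups exist: alice never has a policy attached directly, yet she inherits S3 read access through the Finance group.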
1. Log in to the AWS Management Console using the root account.
2. Go to the dashboard and click on IAM.
3. Create a new IAM user and give it rights. Copy the URL, so that the IAM user can log in
using that URL; the username and password need to be remembered.
4. Log in as the IAM user (using the provided URL), create a group (give it rights), and then
add the user.
5. Log in to the root account, go to the dashboard, create policies as per the requirement, and
attach the policies to applications.
∙ Accessing IAM:
You can work with AWS Identity and Access Management in any of the following ways:
1. AWS Management Console
2. AWS Command Line Tools
3. AWS SDKs
AWS provides SDKs (software development kits) that consist of libraries and sample code for various
programming languages and platforms (Java, Python, Ruby, .NET, iOS, Android, etc.). The SDKs
provide a convenient way to create programmatic access to IAM and AWS. For example, the SDKs take
care of tasks such as cryptographically signing requests, managing errors, and retrying requests
automatically. For information about the AWS SDKs, including how to download and install them, see
the Tools for Amazon Web Services page.
Experiment No. 06
Aim: Case study on Fog computing
Prerequisite:
Knowledge of Access Control, Authentication, and Authorization
Theory:
Fog Computing is a term coined by Cisco that refers to extending cloud computing to the edge
of the enterprise's network. Thus, it is also known as Edge Computing or Fogging. It facilitates
the operation of computing, storage, and networking services between end devices and cloud
computing data centers.
1. The devices comprising the fog infrastructure are known as fog nodes.
2. In fog computing, all the storage capabilities, computation capabilities, data along with the
applications are placed between the cloud and the physical host.
3. All these functionalities are placed more towards the host. This makes processing faster as it is
done almost at the place where data is created.
4. It improves the efficiency of the system and is also used to ensure increased security.
When to use fog computing:
1. It is used when only selected data is required to be sent to the cloud. This selected data is
chosen for long-term storage and is less frequently accessed by the host.
2. It is used when the data should be analyzed within a fraction of seconds i.e Latency should be
low.
3. It is used whenever a large number of services need to be provided over a large area at different
geographical locations.
4. Devices that are subjected to rigorous computations and processings must use fog computing.
5. Real-world examples where fog computing is used are in IoT devices (eg. Car-to-Car
Consortium, Europe), Devices with Sensors, Cameras (IIoT-Industrial Internet of Things), etc.
Disadvantages of fog computing:
● Congestion may occur between the host and the fog node due to increased traffic (heavy
data flow).
● Power consumption increases when another layer is placed between the host and the cloud.
● Scheduling tasks between the host and fog nodes, and between fog nodes and the cloud, is
difficult.
● Data management becomes tedious, as along with storing and computing data, the
transmission of data involves encryption and decryption too, which in turn adds overhead.
Experiment No 07
Aim: To simulate identity management in private cloud.
Theory:
Cloud identity management is a lot more than just a simple web app SSO solution. This
approach is the modern adaptation of traditional, on-prem, legacy solutions like
Microsoft Active Directory (AD) and Lightweight Directory Access Protocol (LDAP), along
with their add-ons for web application single sign-on, multi-factor authentication, privileged
access management, identity governance and administration, and more.
The modern adaptation of the directory service is optimized to be used across any device, on
any operating system, with any on-prem or web-based application, and with any cloud,
on-prem, or remote resource. Modern cloud IAM solutions are also focused on being
multi-protocol, to enable virtually any IT resource to connect in its 'native' authentication
language.
This section describes how to grant, change, and revoke access to projects, folders, and
organizations. To learn how to manage access to other resources, see the following guides:
In Identity and Access Management (IAM), access is managed through IAM policies. An
IAM policy is attached to a Google Cloud resource. Each policy contains a collection of role
bindings that associate one or more principals, such as users or service accounts, with an
IAM role. These role bindings grant the specified roles to the principals, both on the
resource that the policy is attached to and on all of that resource's descendants. For more
information about IAM policies, see Understanding policies.
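The policy structure described above, a collection of role bindings attached to a resource, can be sketched with plain data. The role names are real IAM roles, but the member addresses are placeholders:

```python
# An IAM policy is a list of role bindings attached to a resource.
policy = {
    "bindings": [
        {"role": "roles/viewer", "members": ["user:alice@example.com"]},
        {"role": "roles/editor", "members": ["serviceAccount:app@example.com"]},
    ]
}

def members_with_role(policy, role):
    """Return every principal bound to the given role in this policy."""
    for binding in policy["bindings"]:
        if binding["role"] == role:
            return binding["members"]
    return []

print(members_with_role(policy, "roles/viewer"))  # ['user:alice@example.com']
```

Granting access means adding a member to the binding for a role (or adding a new binding); revoking access means removing the member again, which is exactly what the Java samples in this experiment do through the Resource Manager API.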
You can manage access to projects, folders, and organizations with the Google Cloud
Console, the Google Cloud CLI, the REST API, or the Resource Manager client libraries.
Required roles
To get the permissions that you need to manage access to a project, folder, or organization,
ask your administrator to grant you the following IAM roles on the resource that you want to
manage access for (project, folder, or organization):
These predefined roles contain the permissions required to manage access to a project,
folder, or organization. To see the exact permissions that are required, expand the Required
permissions section:
You can view who has access to your project, folder, or organization using the Cloud
Console, the gcloud CLI, the REST API, or the Resource Manager client libraries.
import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.services.cloudresourcemanager.v3.CloudResourceManager;
import com.google.api.services.cloudresourcemanager.v3.model.GetIamPolicyRequest;
import com.google.api.services.cloudresourcemanager.v3.model.Policy;
import com.google.api.services.iam.v1.IamScopes;
import com.google.auth.http.HttpCredentialsAdapter;
import com.google.auth.oauth2.GoogleCredentials;
import java.io.IOException;
import java.security.GeneralSecurityException;
import java.util.Collections;
// Reads the IAM policy of the given project.
public static Policy getPolicy(CloudResourceManager service, String projectId) {
  Policy policy = null;
  try {
    GetIamPolicyRequest request = new GetIamPolicyRequest();
    policy = service.projects().getIamPolicy(projectId, request).execute();
    System.out.println("Policy retrieved: " + policy.toString());
    return policy;
  } catch (IOException e) {
    System.out.println("Unable to get policy: \n" + e.toString());
    return policy;
  }
}

// Initializes the Cloud Resource Manager service using application-default credentials.
public static CloudResourceManager createCloudResourceManagerService()
    throws IOException, GeneralSecurityException {
  GoogleCredentials credential =
      GoogleCredentials.getApplicationDefault()
          .createScoped(Collections.singletonList(IamScopes.CLOUD_PLATFORM));
  CloudResourceManager service =
      new CloudResourceManager.Builder(
              GoogleNetHttpTransport.newTrustedTransport(),
              JacksonFactory.getDefaultInstance(),
              new HttpCredentialsAdapter(credential))
          .setApplicationName("service-accounts")
          .build();
  return service;
}
You can use the Cloud Console and the gcloud CLI to quickly grant or revoke a single role
for a single principal, without editing the resource's IAM policy directly. Common types of
principals include Google accounts, service accounts, Google groups, and domains. For a list
of all principal types, see Concepts related to identity.
If you need help identifying the most appropriate predefined role, see Choose
predefined roles.
Theory:
This section shows how to get started with the IAM methods from the Resource Manager
API in our favourite programming language.
Warning: If you use an existing project, then completing these steps will enable some users to
temporarily access resources in that project.
1. If you're new to Google Cloud, create an account to evaluate how Google Cloud products
perform in real-world scenarios. New customers also get $300 in free credits to run, test, and
deploy workloads.
2. In the Google Cloud Console, on the project selector page, click Create project to begin
creating a new Google Cloud project.
Go to Project Selector
3. Enable the Resource Manager API.
Enable the API
f. To provide access to your project, grant the following role to your service
account: Project IAM Admin.
For additional roles, click Add another role and add each additional role.
g. Click Continue.
Do not close your browser window. You will use it in the next step.
5. Create a key for the service account:
a. In the Cloud Console, click the email address for the service account that you
created.
b. Click Keys.
c. Click Add key, then click Create new key.
d. Click Create. A JSON key file is downloaded to your computer.
e. Click Close.
6. Set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the path of the
JSON file that contains your service account key. This variable only applies to your current
shell session, so if you open a new session, set the variable again.
For PowerShell:
$env:GOOGLE_APPLICATION_CREDENTIALS="KEY_PATH"
Replace KEY_PATH with the path of the JSON file that contains your service account key.
For example:
$env:GOOGLE_APPLICATION_CREDENTIALS="C:\Users\username\Downloads\service-account-file.json"
For the Windows command prompt:
set GOOGLE_APPLICATION_CREDENTIALS=KEY_PATH
Replace KEY_PATH with the path of the JSON file that contains your service account key.
∙ Initializes the Resource Manager service, which manages Google Cloud projects.
∙ Reads the IAM policy for your project.
∙ Modifies the IAM policy by granting the Log Writer role (roles/logging.logWriter) to your
Google Account.
∙ Writes the updated IAM policy.
∙ Prints all the principals that have the Log Writer role (roles/logging.logWriter) at the
project level.
∙ Revokes the Log Writer role.
To install and use the client library for Resource Manager, see Resource Manager client
libraries.
import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
import com.google.api.client.json.jackson2.JacksonFactory;
import com.google.api.services.cloudresourcemanager.v3.CloudResourceManager;
import com.google.api.services.cloudresourcemanager.v3.model.Binding;
import com.google.api.services.cloudresourcemanager.v3.model.GetIamPolicyRequest;
import com.google.api.services.cloudresourcemanager.v3.model.Policy;
import com.google.api.services.cloudresourcemanager.v3.model.SetIamPolicyRequest;
import com.google.api.services.iam.v1.IamScopes;
import com.google.auth.http.HttpCredentialsAdapter;
import com.google.auth.oauth2.GoogleCredentials;
import java.io.IOException;
import java.security.GeneralSecurityException;
import java.util.Collections;
import java.util.List;
// Grants your member the "Log Writer" role for your project.
public static void addBinding(
    CloudResourceManager crmService, String projectId, String member, String role)
    throws IOException {
  Policy policy = getPolicy(crmService, projectId);
  // Finds the binding for the role, if one already exists.
  Binding binding = null;
  List<Binding> bindings = policy.getBindings();
  for (Binding b : bindings) {
    if (b.getRole().equals(role)) {
      binding = b;
      break;
    }
  }
  if (binding != null) {
    // If the binding already exists, adds the member to the binding.
    binding.getMembers().add(member);
  } else {
    // If the binding does not exist, adds a new binding to the policy.
    binding = new Binding();
    binding.setRole(role);
    binding.setMembers(Collections.singletonList(member));
    policy.getBindings().add(binding);
  }
  // Writes the updated policy back to the project.
  crmService
      .projects()
      .setIamPolicy(projectId, new SetIamPolicyRequest().setPolicy(policy))
      .execute();
}

// Gets the project's policy and prints all members with the given role.
public static void printMembers(
    CloudResourceManager crmService, String projectId, String role) {
  Policy policy = getPolicy(crmService, projectId);
  Binding binding = null;
  for (Binding b : policy.getBindings()) {
    if (b.getRole().equals(role)) {
      binding = b;
      break;
    }
  }
  if (binding == null) {
    System.out.println("No binding found for role: " + role);
    return;
  }
  System.out.println("Role: " + binding.getRole());
  System.out.print("Members: ");
  for (String m : binding.getMembers()) {
    System.out.print("[" + m + "] ");
  }
  System.out.println();
}

// Removes the member from the role; removes the binding if it has no members left.
public static void removeMember(
    CloudResourceManager crmService, String projectId, String member, String role)
    throws IOException {
  Policy policy = getPolicy(crmService, projectId);
  for (Binding binding : policy.getBindings()) {
    if (binding.getRole().equals(role) && binding.getMembers().contains(member)) {
      binding.getMembers().remove(member);
      if (binding.getMembers().isEmpty()) {
        policy.getBindings().remove(binding);
      }
      break;
    }
  }
  crmService
      .projects()
      .setIamPolicy(projectId, new SetIamPolicyRequest().setPolicy(policy))
      .execute();
}
Theory:
Virtual desktop infrastructure (VDI) is a technology that refers to the use of virtual
machines to provide and manage virtual desktops. VDI hosts desktop environments on a
centralized server and deploys them to end-users on request.
In VDI, a hypervisor segments servers into virtual machines that in turn host virtual
desktops, which users access remotely from their devices. Users can access these virtual
desktops from any device or location, and all processing is done on the host server. Users
connect to their desktop instances through a connection broker, which is a software-based
gateway that acts as an intermediary between the user and the server.
VDI can be either persistent or nonpersistent. Each type offers different benefits:
∙ With persistent VDI, a user connects to the same desktop each time, and users are
able to personalize the desktop for their needs since changes are saved even after
the connection is reset. In other words, desktops in a persistent VDI environment
act exactly like a personal physical desktop.
∙ In contrast, nonpersistent VDI, where users connect to generic desktops and no
changes are saved, is usually simpler and cheaper, since there is no need to
maintain customized desktops between sessions. Nonpersistent VDI is often
used in organizations with many task workers, that is, employees who perform a
limited set of repetitive tasks and don't need a customized desktop.
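The contrast between the two modes, and the role of the connection broker that sits between users and desktops, can be sketched as a toy broker. All class and method names here are hypothetical, for illustration only; a real broker also handles authentication, session protocols, and load balancing.

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Toy VDI connection broker. Persistent mode pins each user to the same
// desktop across sessions; nonpersistent mode hands out generic desktops
// from a shared pool and reclaims them, unchanged, at disconnect.
public class ConnectionBroker {
    private final boolean persistent;
    private final Map<String, String> assignments = new HashMap<>(); // user -> desktop
    private final Queue<String> pool = new ArrayDeque<>();           // idle desktops

    public ConnectionBroker(boolean persistent, String... desktops) {
        this.persistent = persistent;
        for (String d : desktops) pool.add(d);
    }

    public String connect(String user) {
        if (persistent && assignments.containsKey(user)) {
            return assignments.get(user);   // same desktop every time
        }
        String desktop = pool.remove();     // take any idle desktop
        assignments.put(user, desktop);
        return desktop;
    }

    public void disconnect(String user) {
        String desktop = assignments.get(user);
        if (!persistent) {
            assignments.remove(user);       // session changes are discarded
            pool.add(desktop);              // desktop returns to the pool
        }
        // Persistent mode keeps the mapping, so the user reconnects to it later.
    }

    public static void main(String[] args) {
        ConnectionBroker broker = new ConnectionBroker(true, "vm-1", "vm-2");
        String first = broker.connect("alice");
        broker.disconnect("alice");
        // Persistent VDI: alice gets the same desktop back after reconnecting.
        System.out.println(first.equals(broker.connect("alice"))); // prints "true"
    }
}
```

The only state the broker needs is the user-to-desktop map; dropping that map at disconnect is precisely what makes a deployment nonpersistent.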
Why VDI?
VDI offers a number of advantages, such as user mobility, ease of access, flexibility and
greater security. In the past, its high-performance requirements made it costly and
challenging to deploy on legacy systems, which posed a barrier for many businesses.
However, the rise in enterprise adoption of hyperconverged infrastructure (HCI) offers a
solution that provides scalability and high performance at a lower cost.
Although VDI's complexity means that it isn't necessarily the right choice for every
organization, it offers a number of benefits for organizations that do use it. Some of these
benefits include:
∙ Remote access: VDI users can connect to their virtual desktop from any location
or device, making it easy for employees to access all their files and applications
and work remotely from anywhere in the world.
∙ Cost savings: Since processing is done on the server, the hardware requirements
for end devices are much lower. Users can access their virtual desktops from
older devices, thin clients, or even tablets, reducing the need for IT to purchase
new and expensive hardware.
∙ Security: In a VDI environment, data lives on the server rather than the end client
device. This serves to protect data if an endpoint device is ever stolen or
compromised.
∙ Centralized management: VDI's centralized format allows IT to easily patch,
update or configure all the virtual desktops in a system.
What is VDI used for?
Although VDI can be used in all sorts of environments, there are a number of use cases that
are uniquely suited for VDI, including:
∙ Remote work: Since VDI makes virtual desktops easy to deploy and update from
a centralized location, an increasing number of companies are implementing it
for remote workers.
∙ Bring your own device (BYOD): VDI is an ideal solution for environments that
allow or require employees to use their own devices. Since processing is done
on a centralized server, VDI allows the use of a wider range of devices. It also
offers better security, since data lives on the server and is not retained on the
end client device.
∙ Task or shift work: Nonpersistent VDI is particularly well suited to organizations
such as call centers that have a large number of employees who use the same
software to perform limited tasks.
Desktop virtualization is a generic term for any technology that separates a desktop
environment from the hardware used to access it. VDI is a type of desktop virtualization, but
desktop virtualization can also be implemented in different ways, such as remote desktop
services (RDS), where users connect to a shared desktop that runs on a remote server.
What is the difference between VDI and virtual machines (VMs)? Virtual machines are
the technology that powers VDI. VMs are software "machines" created by partitioning a
physical server into multiple virtual servers through the use of a hypervisor. (This process is
also known as server virtualization.) Virtual machines can be used for a number of
applications, one of which is running a virtual desktop in a VDI environment.
How to implement VDI?
When planning for VDI deployment, larger enterprises should consider implementing it in
an HCI environment, as HCI‘s scalability and high performance are a natural fit for VDI‘s
resource needs. On the other hand, implementing HCI for VDI is probably not necessary
(and would be overly expensive) for organizations that require less than 100 virtual
desktops.
∙ Understand your end-users' needs: Do your users need to be able to customize their desktops,
or are they task workers who can work from a generic desktop? (In other words, is your
organization better suited to a persistent or nonpersistent VDI setup?) What are your users'
performance requirements? You'll need to provision your setup differently for users who use
graphics-intensive applications versus those who just need access to the internet or to one or two
simple applications.
∙ Perform a pilot test: Most virtualization providers offer testing tools that you can use to run a
test VDI deployment beforehand; it's important to do so to make sure you've provisioned your
resources correctly.
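Provisioning decisions like these ultimately reduce to capacity arithmetic. The sketch below shows the back-of-the-envelope calculation; every per-desktop and per-server figure in it is a hypothetical assumption for illustration, not vendor guidance.

```java
// Back-of-the-envelope VDI sizing: how many host servers are needed for a
// given user count? All input figures are hypothetical assumptions.
public class VdiSizing {
    public static int serversNeeded(int users, double gbPerDesktop,
                                    double gbPerServer, double headroom) {
        // Reserve a fraction of each host's memory for the hypervisor and spikes.
        double usableGb = gbPerServer * (1.0 - headroom);
        int desktopsPerServer = (int) Math.floor(usableGb / gbPerDesktop);
        // Round up: a partially filled server is still a whole server.
        return (int) Math.ceil((double) users / desktopsPerServer);
    }

    public static void main(String[] args) {
        // 300 task workers at 2 GB each on 128 GB hosts with 20% headroom:
        // 102.4 usable GB -> 51 desktops per server -> 6 servers.
        System.out.println(serversNeeded(300, 2.0, 128.0, 0.2)); // prints 6
    }
}
```

The same shape of calculation applies to CPU and storage; whichever resource yields the largest server count is the binding constraint.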
EXPERIMENT NO. 10
Theory:
Software as a service (SaaS) is a software delivery model where both the software and the
associated data are centrally hosted on the cloud. In this model, application functionality is
delivered through a subscription over the internet. But SaaS solutions are constantly
evolving.
Research and product development teams are always adding layers, features, tools, and plug-
ins. SaaS is inexpensive, flexible, and constantly evolving, all of which make a SaaS
solution a serious option for running a business. According to a study conducted by North
Bridge Venture Partners, "45% of businesses say they already, or plan to, run their company
from the cloud - showing how integral the cloud is to business".
The evolution of traditional products toward SaaS can be approached in different ways. (We
use the term "traditional" to identify products that are not cloud-native.) The easiest
approach is porting the product to the cloud, which might be a good step forward if you don't
want to risk starting a migration to a cloud-native product but you want the typical
advantages of moving to the cloud (for example, IT delegation, no infrastructure and
maintenance costs, and higher security). This cloudification process is basically a so-called
"lift-and-shift" migration: the product is ported "as is" to an infrastructure as a service (IaaS)
cloud provider.
The main objective of this article is to show a further evolution of the basic cloudification
process: to leverage containerization to address the previously described questions and
achieve the benefits of a pure, cloud-native product. Specifically, this article discusses an
example of the approach that our team used to move the IBM Control Desk product to a
microservices pattern using Docker container technology, without the need to redesign the
product or touch the code.
The application was split into its basic components and deployed on different WebSphere
Liberty containers to achieve a more manageable provisioning pattern - both in
time-to-market and in overall IT operations activities.
IBM Control Desk provides IT service management to simplify support of users and
infrastructures. It was built on the Tivoli Product Automation Engine component embedded
in the IBM Maximo Asset Management product.
The Maximo Asset Management deployment guide, which also included best practices,
explained how to split the all-in-one application into four different applications: Maximo
User Interface, Maximo Cron, Maximo Report, and the Integration Framework. The cost in
terms of effort for achieving this pattern was entirely owned by the local IT team, without
any default procedure that supported IT engineers. Maximo Asset Management 7.6.1
included so-called "Liberty support" by further splitting the applications and providing a
suite of build scripts that builds and bundles only the modules needed by the application
role.
IBM Control Desk 7.6.1 was built on top of Maximo Asset Management 7.6.1 and inherited
the Liberty support that is used for achieving microservice decomposition.
The deployment path our team used illustrates how to "SaaSify" an application.
For the purpose of this article, we decided to use the same node as the administrative
workstation and the DB2 node. We used a Red Hat Enterprise Linux Server 7 based virtual
machine with two CPUs and 4 GB RAM, installed with the IBM Control Desk product and
deployed on the MAXDB76 database.
The IBM Control Desk installation directory (which contains the application build code) and
the Service Portal installation directory are shared through the network file system (NFS)
with the Docker engine. Therefore, the applications are available for the build on both
nodes.
We decided to strictly follow what is stated in the Maximo Asset Management tech note, so
we produced five different applications:
∙ UI
∙ Cron
∙ Maximo Enterprise Adapter (MEA)
∙ API
∙ Report (Business Intelligence and Reporting Tools - BIRT - Report Only Server or BROS)
Then, we added two applications: a service portal and the JMS server, which
receives, stores, and forwards messages wherever the JMS protocol is used.
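The store-and-forward role of a JMS server can be illustrated with a toy in-memory broker. This is a deliberately simplified sketch: a real JMS provider adds persistence, transactions, acknowledgements, and a wire protocol on top of this basic pattern.

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Toy store-and-forward broker: producers enqueue messages on a named
// queue; consumers drain them later, in FIFO order.
public class TinyBroker {
    private final Map<String, Queue<String>> queues = new HashMap<>();

    // Store: append the message to the named queue, creating it on demand.
    public void send(String queueName, String message) {
        queues.computeIfAbsent(queueName, k -> new ArrayDeque<>()).add(message);
    }

    // Forward: hand the oldest message to the consumer, or null if empty.
    public String receive(String queueName) {
        Queue<String> q = queues.get(queueName);
        return (q == null || q.isEmpty()) ? null : q.remove();
    }

    public static void main(String[] args) {
        TinyBroker broker = new TinyBroker();
        broker.send("sqin", "order-1"); // queue name echoes the sqin queue
        broker.send("sqin", "order-2"); // defined in the server.xml example
        System.out.println(broker.receive("sqin")); // prints "order-1"
    }
}
```

Because the broker holds messages until a consumer asks for them, the sending and receiving containers never need to be up at the same time, which is why the JMS server is deployed as its own container.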
For example, for the UI application we ran the following command on the administrative
workstation:
cd /opt/IBM/SMP/maximo/deployment/was-liberty-default
./buildmaximoui-war.sh && ./buildmaximo-xwar.sh
deployment/maximo-ui/maximo-ui-server/
├── apps
│ ├── maximoui.war
│ └── maximo-x.war
├── Dockerfile
├── jvm.options
└── server.xml
The server.xml file was the server descriptor, jvm.options contained the system properties to set
at the JVM startup level, Dockerfile was the file used for building the image, and apps
contained the build artifacts:
-rw-r--r-- 1 root root 1157149383 Mar 20 09:57 maximoui.war
-rw-r--r-- 1 root root 70932873 Mar 20 10:01 maximo-x.war
From the Dockerfile path on the administration workstation, we built the Docker image by
running docker build, tagging the image with the product version (for example,
docker build -t icd/ui:7.6.1.0 .).
We did the same for the other Liberty applications, so that we had the following images:
icd/ui:7.6.1.0
icd/cron:7.6.1.0
icd/mea:7.6.1.0
icd/bros:7.6.1.0
icd/api:7.6.1.0
The JMS server did not come by default with Maximo Liberty support. We needed to create
it from scratch. Our procedure was based on the WebSphere Application
Server Liberty documentation. (You can see the example server.xml file in the Technical
procedure section.)
We built the Service Portal Docker image from the Node.js image. For the Service Portal
application, we copied the full application tree, the certificate, and the private key exported
by the web server to allow communication between the two components. Eventually, we
obtained the following images:
icd/ui:7.6.1.0
icd/cron:7.6.1.0
icd/mea:7.6.1.0
icd/bros:7.6.1.0
icd/api:7.6.1.0
icd/jms:1.0.0.0
icd/sp:7.6.1.0
Third step: Deploying the containers
For the Docker engine, we chose an Ubuntu 18.04 machine with four CPUs and 32 GB
RAM, a typical size for standard SaaS architecture.
After we had our images, we started deploying containers from them. We first deployed with
one container per image. Then we carried out a scalability test with two UI containers, as
discussed in the results section.
We created a Docker network called ICDNet, and we added each running container to it,
which allowed easy communication between all the containers.
In the end, the suitably formatted output of our docker ps command looked like the following example:
NAME SP
IMAGE icd/sp:7.6.1.0
PORTS 0.0.0.0:3000->3000/tcp
NAME CRON
IMAGE icd/cron:7.6.1.0
PORTS 9080/tcp, 9443/tcp
NAME UI
IMAGE icd/ui:7.6.1.0
PORTS 0.0.0.0:9080->9080/tcp, 0.0.0.0:9443->9443/tcp
NAME API
IMAGE icd/api:7.6.1.0
NAME MEA
IMAGE icd/mea:7.6.1.0
PORTS 0.0.0.0:9084->9080/tcp, 0.0.0.0:9447->9443/tcp
NAME JMS
IMAGE icd/jms:1.0.0.0
PORTS 9080/tcp, 0.0.0.0:9011->9011/tcp, 9443/tcp
NAME BROS
IMAGE icd/bros:7.6.1.0
PORTS 0.0.0.0:9085->9080/tcp, 0.0.0.0:9448->9443/tcp
All resources (containers, networks, and volumes) were created with the Docker Compose
tool (the docker-compose.yml file is included in the Technical procedure section). The
YAML file adds parameters to the run command for each container, for example, the database
host and some environment variables for configuring the containers correctly.
The following example shows the Dockerfile for the Liberty-based image:
FROM websphere-liberty
USER root
The following excerpt shows the server.xml file for the JMS server:
<server>
<!-- To allow access to this server from a remote client, host="*" has been added to the following element -->
<wasJmsEndpoint id="InboundJmsEndpoint" host="*"
  wasJmsPort="9011" wasJmsSSLPort="9100"/>
<!-- A messaging engine is a component, running inside a server, that manages messaging resources.
Applications are connected to a messaging engine when they send and receive messages. When the
wasJmsServer-1.0 feature is added in server.xml, by default a messaging engine runtime is initialized,
which contains a default queue (Default.Queue) and a default topic space (Default.Topic.Space).
If the user wants to create a new queue or topic space, then the messagingEngine element must be
defined in server.xml -->
<messagingEngine>
  <queue id="jms/maximo/int/queues/sqin" sendAllowed="true" receiveAllowed="true"
    maintainStrictOrder="false"/>
  <queue id="jms/maximo/int/queues/sqout" sendAllowed="true" receiveAllowed="true"
    maintainStrictOrder="false"/>
  <queue id="jms/maximo/int/queues/weather" sendAllowed="true" receiveAllowed="true"
    maintainStrictOrder="false"/>
</messagingEngine>
</server>
The following example shows the Dockerfile for the JMS Server image:
FROM websphere-liberty
The following example shows the Dockerfile for the Service Portal image:
FROM aricenteam/aricentrepo:nodejs
USER root
# copy the serviceportal tree into the /opt/ibm/ng directory
RUN mkdir -p /opt/ibm/ng
COPY ng /opt/ibm/ng/
EXPOSE 3000
WORKDIR /opt/ibm/ng
CMD ["node", "app.js"]
docker-compose.yml file:
version: '3.7'
services:
  ui:
    image: icd:ui
    ports:
      - "9443"
    environment:
      JVM_ARGS: "-Dmxe.name=MAXIMO_UI"
    networks:
      - icd_net
    volumes:
      - doclinks:/DOCLINKS
      - search:/SEARCH
  api:
    image: icd:api
    ports:
      - "9443"
    environment:
      JVM_ARGS: "-Dmxe.name=MAXIMO_API"
    networks:
      - icd_net
    volumes:
      - doclinks:/DOCLINKS
      - search:/SEARCH
  cron:
    image: icd:cron
    ports:
      - "9443"
    environment:
      JVM_ARGS: "-Dmxe.name=MAXIMO_CRON"
    networks:
      - icd_net
    volumes:
      - doclinks:/DOCLINKS
      - search:/SEARCH
  mea:
    image: icd:mea
    ports:
      - "9443"
    environment:
      JVM_ARGS: "-Dmxe.name=MAXIMO_MEA"
    networks:
      - icd_net
    volumes:
      - doclinks:/DOCLINKS
      - search:/SEARCH
  jms:
    image: icd:jms_server
    networks:
      - icd_net
networks:
  icd_net:
    driver: bridge
volumes:
  doclinks:
  search:
After we had our Control Desk instance up and running, we took some measurements with
the Rational Performance Tester tool to compare the performance of a classic instance and
the container-based one. We ran two different tests with workloads of 20 and 50 users, and
monitored the CPU and memory of the virtual machines with the nmon script.
As shown in the following image, on average we saw a larger memory consumption by the
classic instance (deployed on WebSphere Application Server Network Deployment), which
was due mainly to the number of running Java processes; it also included the deployment
manager and the node agent. On the CPU side, the behaviors overlap.
The following table shows the average, minimum, and maximum values for the page
response time (PRT) as a function of time. It seemed that the Docker case performed slightly
better, with an average response time about 0.1 times that of the classic case.
For the 50-user scenario, we also performed a scalability test by adding another UI
container to the instance to see whether the workload was balanced well.
The following screen capture shows the page response time (PRT) as a function of time for
one and two UI containers. The results confirmed what was expected: the performance
with two containers increased by a factor of 2, so we concluded that the instance scales
with a quasi-ideal trend.
Conclusion: Hence, we studied how to use container technology to leverage the native
support for the WebSphere Liberty profile in the IBM Control Desk product.