Cloud Lab

S.NO. EXPERIMENT TITLE
1. Install Virtualbox / VMware Workstation with different flavours of Linux or Windows OS on top of Windows 8 and above.
2. Install a C compiler in the virtual machine created using Virtualbox and execute simple programs.
3. Install Google App Engine. Create hello world app and other simple web applications using Python/Java.
4. Use GAE launcher to launch the web applications.
5. Simulate a cloud scenario using CloudSim and run a scheduling algorithm that is not present in CloudSim.
6. Find a procedure to transfer the files from one virtual machine to another virtual machine.
7. Install Hadoop single node cluster and run simple applications like wordcount.
EX.NO:1
DATE:
Install Virtualbox / VMware / an equivalent open source Workstation with different flavours of Linux or Windows OS on top of Windows 8 and above.
AIM:
To install Virtualbox / VMware Workstation with different flavours of Linux or Windows OS on top of Windows 8 and above.
PROCEDURE:
RESULT:
Thus Virtualbox / VMware Workstation was installed successfully with different flavours of Linux / Windows OS on top of Windows 8.
EX.NO:2
DATE:
Install a C compiler in the virtual machine created using Virtualbox and execute simple programs.
Aim:
To install a C compiler in the virtual machine created using Virtualbox and execute simple programs.
PROCEDURE:
RESULT:
Thus the simple C programs were executed successfully in the virtual machine.
EX.NO:3
DATE:
Install Google App Engine. Create hello world app and other
simple web applications using python/java.
Aim:
To install Google App Engine and create a hello world app and other simple web applications using Python/Java.
Procedure:
Figure – Deselect the “Google Web ToolKit“, and link your GAE Java SDK via the
“configure SDK” link.
Click Finish; the Google Plugin for Eclipse will generate a sample project automatically.
3. Hello World
Review the generated project directory.
HelloWorld/
src/
...Java source code...
META-INF/
...other configuration...
war/
...JSPs, images, data files...
WEB-INF/
...app configuration...
lib/
...JARs for libraries...
classes/
...compiled classes...
The extra file here is appengine-web.xml; Google App Engine needs this file to run and deploy the application. Its full contents are shown in the deployment step below.
4. Run it locally
Right-click on the project and run it as a "Web Application".
Eclipse console:
//...
INFO: The server is running at http://localhost:8888/
30 Mac 2012 11:13:01 PM com.google.appengine.tools.development.DevAppServerImpl start
INFO: The admin console is running at http://localhost:8888/_ah/admin
Access the URL http://localhost:8888/ to see the output, and also the hello world servlet at http://localhost:8888/helloworld.
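For reference, the hello world servlet in the generated sample project looks roughly like the sketch below (the class name HelloWorldServlet and the greeting text are illustrative; the code actually generated by the plugin may differ slightly). It is mapped to the /helloworld URL pattern in war/WEB-INF/web.xml.

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Minimal App Engine servlet: responds to GET requests with a plain-text greeting.
public class HelloWorldServlet extends HttpServlet {
    @Override
    public void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setContentType("text/plain");
        resp.getWriter().println("Hello, world");
    }
}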
File : appengine-web.xml
<?xml version="1.0" encoding="utf-8"?>
<appengine-web-app xmlns="http://appengine.google.com/ns/1.0">
<application>mkyong123</application>
<version>1</version>
</appengine-web-app>
5. Deploy to Google App Engine
To deploy, see the following steps:
Figure 1.2 – Sign in with your Google account and click on the Deploy button.
Figure 1.3 – If everything is fine, the hello world web application will be deployed to this
URL – http://mkyong123.appspot.com/
Result:
Thus the hello world app and other simple web applications were created successfully using Google App Engine.
EX.NO:4
DATE:
Simulate a cloud scenario using CloudSim and run a scheduling algorithm that is not present in CloudSim.
Aim:
To Simulate a cloud scenario using CloudSim and run a scheduling algorithm
that is not present in CloudSim.
Steps:
How to use CloudSim in Eclipse
CloudSim is written in Java. The knowledge you need to use CloudSim is basic Java programming and some basics of cloud computing. Knowledge of a programming IDE such as Eclipse or NetBeans is also helpful. CloudSim is a library and hence does not have to be installed: you simply unpack the downloaded package in any directory, add it to the Java class path, and it is ready to be used. Please verify that Java is available on your system.
To use CloudSim in Eclipse:
1. Download CloudSim installable files
from https://code.google.com/p/cloudsim/downloads/list and unzip
2. Open Eclipse
3. Create a new Java Project: File -> New
4. Import an unpacked CloudSim project into the new Java Project
The first step is to initialise the CloudSim package by initialising the CloudSim library, as follows:
CloudSim.init(num_user, calendar, trace_flag);
5. Data centres are the resource providers in CloudSim; hence, creation of data centres is the second step. To create a Datacenter, you need a DatacenterCharacteristics object that stores the properties of a data centre such as architecture, OS, list of machines, allocation policy (time-shared or space-shared), time zone and price:
Datacenter datacenter9883 = new Datacenter(name, characteristics, new VmAllocationPolicySimple(hostList), storageList, schedulingInterval);
6. The third step is to create a broker:
DatacenterBroker broker = createBroker();
7. The fourth step is to create one virtual machine, specifying the unique ID of the VM, the ID of the VM's owner (the broker), the MIPS rating, the number of PEs (CPUs), the amount of RAM, bandwidth and storage, the virtual machine monitor (VMM), and the CloudletScheduler policy for its cloudlets:
Vm vm = new Vm(vmid, brokerId, mips, pesNumber, ram, bw, size, vmm, new CloudletSchedulerTimeShared());
8. Submit the VM list to the broker:
broker.submitVmList(vmlist);
9. Create a cloudlet with its length, file size, output size, and utilisation models:
Cloudlet cloudlet = new Cloudlet(id, length, pesNumber, fileSize, outputSize, utilizationModel, utilizationModel, utilizationModel);
10. Submit the cloudlet list to the broker:
broker.submitCloudletList(cloudletList);
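To meet the aim of running a scheduling algorithm that is not shipped with CloudSim, one simple approach is to decide the cloudlet-to-VM mapping yourself before the simulation starts. The sketch below is a minimal illustration against the CloudSim 3.0 API (the class SjfBinding and the method bindCloudletsSjf are our own names, not part of CloudSim): it sorts the cloudlets shortest-job-first and binds them to the VMs in round-robin order through DatacenterBroker.bindCloudletToVm().

import java.util.Comparator;
import java.util.List;

import org.cloudbus.cloudsim.Cloudlet;
import org.cloudbus.cloudsim.DatacenterBroker;
import org.cloudbus.cloudsim.Vm;

public class SjfBinding {
    // Sort cloudlets by length (shortest job first) and bind them
    // to the available VMs in round-robin order.
    public static void bindCloudletsSjf(DatacenterBroker broker,
            List<Cloudlet> cloudletList, List<Vm> vmList) {
        cloudletList.sort(Comparator.comparingLong(Cloudlet::getCloudletLength));
        int vmIndex = 0;
        for (Cloudlet cl : cloudletList) {
            broker.bindCloudletToVm(cl.getCloudletId(), vmList.get(vmIndex).getId());
            vmIndex = (vmIndex + 1) % vmList.size();
        }
    }
}

Call bindCloudletsSjf(broker, cloudletList, vmlist) after broker.submitCloudletList(cloudletList) and before CloudSim.startSimulation(); the broker will then dispatch the cloudlets to the VMs you chose instead of using its default mapping.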
Sample Output from the Existing Example:
Starting CloudSimExample1...
Initialising...
Starting CloudSim version 3.0
Datacenter_0 is starting...
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>null
Broker is starting...
Entities started.
0.0: Broker: Cloud Resource List received with 1 resource(s)
0.0: Broker: Trying to Create VM #0 in Datacenter_0
0.1: Broker: VM #0 has been created in Datacenter #2, Host #0
0.1: Broker: Sending cloudlet 0 to VM #0
400.1: Broker: Cloudlet 0 received
400.1: Broker: All Cloudlets executed. Finishing...
400.1: Broker: Destroying VM #0
Broker is shutting down...
Simulation: No more future events
CloudInformationService: Notify all CloudSim entities for shutting down.
Datacenter_0 is shutting down...
Broker is shutting down...
Simulation completed.
Simulation completed.
CloudSimExample1 finished!
RESULT:
Thus a cloud scenario was simulated using CloudSim and a scheduling algorithm not present in CloudSim was executed successfully.
EX.NO:5
DATE:
Use GAE launcher to launch the web applications.
Aim:
To Use GAE launcher to launch the web applications.
Steps:
Making your First Application
Once you have selected your application, press Run. After a few moments your application will start and the launcher will show a little green icon next to your application. Then press Browse to open a browser pointing at your application, which is running at http://localhost:8080/
Just for fun, edit index.py to change the name "Chuck" to your own name and press Refresh in the browser to verify your updates.
You can watch the internal log of the actions that the web server performs as you interact with your application in the browser. Select your application in the Launcher and press the Logs button to bring up a log window.
Each time you press Refresh in your browser, you can see it retrieving the output with a GET request.
With two files to edit, there are two general categories of errors that you may encounter. If you make a mistake in the app.yaml file, App Engine will not start and your launcher will show a yellow icon near your application.
To get more detail on what is going wrong, take a look at the log for the application.
In this instance, the mistake is mis-indenting the last line in the app.yaml (line 8).
If you make a syntax error in the index.py file, a Python traceback error will appear in your browser.
The error you need to see is likely to be in the last few lines of the output; in this case the mistake was a Python syntax error on line one of our one-line application.
Reference: http://en.wikipedia.org/wiki/Stack_trace
When you make a mistake in the app.yaml file, you must fix the mistake and attempt to start the application again.
If you make a mistake in a file like index.py, you can simply fix the file and press Refresh in your browser; there is no need to restart the server.
Result:
Thus the GAE web applications were launched successfully using the GAE launcher.
EX.NO:6
DATE:
Find a procedure to transfer the files from one virtual machine to another virtual machine.
Aim:
To Find a procedure to transfer the files from one virtual machine
to another virtual machine.
Steps:
1. You can copy a few (or more) lines with the copy & paste mechanism. For this you need to share the clipboard between the host OS and the guest OS, by installing Guest Additions on both virtual machines (probably setting the clipboard to bidirectional and restarting them). You copy from one guest OS into the clipboard that is shared with the host OS, then you paste from the host OS into the second guest OS.
2. You can enable drag and drop too with the same method (click on the machine, Settings, General, Advanced, Drag and Drop: set to Bidirectional).
3. You can have common Shared Folders on both virtual machines and use one of the shared directories as a buffer for copying.
Installing Guest Additions also gives you the possibility to set up Shared Folders. As soon as you put a file in a shared folder from the host OS or from a guest OS, it is immediately visible to the other. (Keep in mind that some problems can arise with the date/time of the files when the clock settings differ between the virtual machines.)
If you use the same folder shared on more machines, you can exchange files directly by copying them into this folder.
4. You can use the usual methods for copying files between two different computers with a client-server application (e.g. scp with sshd active for Linux, or WinSCP; you can get some info about SSH servers, for example, here).
You need an active server (sshd) on the receiving machine and a client on the sending machine. Of course you need to have the authorization set up (via password or, better, via an automatic authentication method).
Note: many Linux/Ubuntu distributions install sshd by default; you can see whether it is running with pgrep sshd from a shell, and you can install it with sudo apt-get install openssh-server.
5. You can mount part of the file system of one virtual machine via NFS or SSHFS on the other, or you can share files and directories with Samba.
You may find the article "Sharing files between guest and host without VirtualBox shared folders", with detailed step-by-step instructions, interesting.
You should remember that you are dealing with a small network of machines with different operating systems, and in particular:
Each virtual machine has its own operating system running on it and acts as a physical machine.
Each virtual machine is an instance of a program owned by a user in the hosting operating system, and is subject to the restrictions of that user in the hosting OS.
E.g. let us say that Hastur and Meow are users of the hosting machine, but they do not allow each other to see their directories (no read/write/execute authorization). When each of them runs a virtual machine, for the hosting OS those virtual machines are two normal programs owned by Hastur and Meow, and they cannot see the private directory of the other user. This is a restriction due to the hosting OS. It is easy to overcome: it is enough to give read/write/execute authorization on a directory, or to choose a different directory in which both users can read/write/execute.
Windows favours the mouse and Linux the keyboard. :-) In other words, I suggest enabling Drag & Drop and Shared Folders to be comfortable with the Windows machines, and using SSH to be comfortable with Linux.
When you need to be fast with Linux, you will feel the need for ssh-keygen: generate SSH keys once and you can copy files to/from a remote machine without typing a password any more. This way bash auto-completion works remotely too!
PROCEDURE:
Steps:
1. Open Browser, type localhost:9869
2. Login using username: oneadmin, password: opennebula
3. Then follow the steps to migrate VMs
a. Click on infrastructure
b. Select clusters and enter the cluster name
c. Then select host tab, and select all host
d. Then select Vnets tab, and select all vnet
e. Then select datastores tab, and select all datastores
f. And then choose host under infrastructure tab
g. Click on + symbol to add new host, name the host then click on create.
4. On Instances, select the VMs to migrate, then follow these steps:
a. Click on the 8th icon; a drop-down list is displayed.
b. Select Migrate from the list; a popup window is displayed.
c. In that window, select the target host to migrate to, then click on Migrate.
Before migration
Host:SACET
Host:one-sandbox
After Migration:
Host:one-sandbox
Host:SACET
APPLICATIONS:
Easily migrate your virtual machine from one PC to another.
Result:
Thus the file transfer between virtual machines was completed successfully.
EX.NO:7
DATE:
Install Hadoop single node cluster and run simple applications like wordcount.
Aim:
To Install Hadoop single node cluster and run simple applications like wordcount.
Steps:
Install Hadoop
Step 1: Download the Java 8 package and save the file in your home directory. Then download the Hadoop 2.7.3 package:
Command: wget https://archive.apache.org/dist/hadoop/core/hadoop-2.7.3/hadoop-2.7.3.tar.gz
Command: vi .bashrc
To apply all these changes to the current terminal, execute the command source .bashrc.
Command: cd hadoop-2.7.3/etc/hadoop/
Command: ls
All the Hadoop configuration files are located in the hadoop-2.7.3/etc/hadoop directory, as you can see in the snapshot below:
Fig: Hadoop Installation – Hadoop Configuration Files
Step 7: Open core-site.xml and edit the property mentioned below inside the configuration tag:
Command: vi core-site.xml
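The exact property shown in the original snapshot is not reproduced here; a typical single-node value, assuming the NameNode is to run on localhost at port 9000, is:

<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>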
Step 8: Open hdfs-site.xml and edit the property mentioned below inside the configuration tag. hdfs-site.xml contains the configuration settings of the HDFS daemons (i.e. NameNode, DataNode, Secondary NameNode). It also includes the replication factor and block size of HDFS.
Command: vi hdfs-site.xml
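The property from the original snapshot is not reproduced here; on a single node cluster the replication factor is typically set to 1, for example:

<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>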
Step 9: Edit mapred-site.xml. In some cases the mapred-site.xml file is not available, so we have to create it from the mapred-site.xml.template file.
Command: vi mapred-site.xml
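The property from the original snapshot is not reproduced here; a typical value tells MapReduce to run on top of YARN, for example:

<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>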
Step 10: Open yarn-site.xml and edit the property mentioned below inside the configuration tag:
Command: vi yarn-site.xml
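The property from the original snapshot is not reproduced here; a typical single-node setting enables the MapReduce shuffle auxiliary service, for example:

<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>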
Step 11: Edit hadoop-env.sh. It contains the environment variables that are used in the scripts to run Hadoop, such as the Java home path.
Command: vi hadoop-env.sh
Step 12: Go to the Hadoop home directory and format the NameNode.
Command: cd
Command: cd hadoop-2.7.3
Command: bin/hadoop namenode -format
This formats the HDFS via the NameNode. This command is executed only the first time. Formatting the file system means initializing the directory specified by the dfs.name.dir variable.
Never format an up and running Hadoop file system; you will lose all the data stored in HDFS.
Step 13: Once the NameNode is formatted, go to the hadoop-2.7.3/sbin directory and start all the daemons (for example with ./start-all.sh, or start-dfs.sh followed by start-yarn.sh).
The NameNode is the centerpiece of an HDFS file system. It keeps the directory tree of all files stored in HDFS and tracks where the file data is kept across the cluster.
The ResourceManager is the master that arbitrates all the available cluster resources and thus helps in managing the distributed applications running on the YARN system. Its job is to manage the NodeManagers and each application's ApplicationMaster.
The NodeManager in each machine is the framework agent responsible for managing containers, monitoring their resource usage and reporting the same to the ResourceManager.
The JobHistoryServer is responsible for servicing all job-history-related requests from clients.
Step 14: To check that all the Hadoop services are up and running, run the below
command.
Command: jps
Fig: Hadoop Installation – Checking Daemons
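To run a wordcount application as the aim requires, the example jar that ships with Hadoop can be used directly, e.g. hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount <input> <output>. For reference, a minimal WordCount written against the Hadoop 2.x MapReduce API is sketched below (it follows the standard Hadoop example; class and variable names are illustrative):

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Mapper: emits (word, 1) for every token in the input line.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    // Reducer: sums the counts emitted for each word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Compile it against the Hadoop client libraries, package it as a jar, copy an input text file into HDFS, and run it with hadoop jar <jar> WordCount <input dir> <output dir>; the word counts appear in the output directory.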
Result:
Thus the Hadoop single node cluster was installed and simple applications were executed successfully.
EX.NO:8
DATE:
Install Docker and execute a simple application in a container.
Aim:
To install Docker and execute a simple Python application inside a Docker container.
Procedure:
Create the following two files:
A ‘main.py’ file (the Python file that will contain the code to be executed).
A ‘Dockerfile’ (the file that will contain the necessary instructions to create the environment).
Normally you should have this folder structure:
.
├── Dockerfile
└── main.py
0 directories, 2 files
3. Edit the Python file
You can add the following code to the ‘main.py’ file:
#!/usr/bin/env python3
print("Docker is magic!")
Once you see “Docker is magic!” displayed in your terminal, you will know that your Docker setup is working.
4. Edit the Dockerfile
FROM python:latest
# In order to launch our python code, we must import it into our image.
# We use the keyword 'COPY' to do that.
# The first parameter 'main.py' is the name of the file on the host.
# The second parameter '/' is the path where to put the file on the image.
# Here we put the file at the image root folder.
COPY main.py /
# We need to define the command to launch when we are going to run the image.
# We use the keyword 'CMD' to do that.
# The following command will execute "python ./main.py".
CMD [ "python", "./main.py" ]
Build the image with the command docker build -t python-test . (run it from the folder that contains the Dockerfile). The ‘-t’ option allows you to define the name of your image; in our case we have chosen ‘python-test’, but you can put what you want.
Then run the image: you need to put the name of your image after ‘docker run’, i.e. docker run python-test.
There you go, that’s it. You should normally see “Docker is magic!” displayed in your terminal.
Code is available
If you want to retrieve the complete code to discover it easily or to
execute it, I have put it at your disposal on my GitHub.
Result:
Thus Docker was installed successfully.
EX.NO:9
DATE:
Run a container
Aim:
To write a procedure to run a container from docker.
Procedure:
Building the image may take some time. After your image is built, you can view it in the Images tab in Docker Desktop.
Result:
Thus the container was run successfully from Docker.
Viva Questions:
11. Define OpenNebula.
OpenNebula is an open source management tool that helps virtualized data centers oversee private clouds, public clouds and hybrid clouds. OpenNebula is vendor neutral, as well as platform- and API-agnostic. It can use KVM, Xen or VMware hypervisors.
12. Define Eclipse.
Eclipse is an integrated development environment (IDE) used in
computer programming, and is the most widely used Java IDE. It
contains a base workspace and an extensible plug-in system for
customizing the environment.
13. Define NetBeans.
NetBeans is an open-source integrated development environment (IDE)
for developing with Java, PHP, C++, and other programming languages.
NetBeans is also referred to as a platform of modular components used
for developing Java desktop applications.
19. Define IaaS.
The IaaS layer offers the storage and infrastructure resources needed to deliver cloud services. It comprises only the infrastructure or physical resources. Top IaaS cloud computing companies: Amazon (EC2), Rackspace, GoGrid, Microsoft, Terremark and Google.
20. Define PaaS.
PaaS provides the combination of both infrastructure and application platform. Hence, organisations using PaaS do not have to worry about infrastructure or services. Top PaaS cloud computing companies: Salesforce.com, Google, Concur Technologies, Ariba, Unisys and Cisco.
21. Define SaaS.
In the SaaS layer, the cloud service provider hosts the software on its own servers. It can be defined as a model in which applications and software are hosted on a server and made available to customers over a network. Top SaaS cloud computing companies: Amazon Web Services, AppScale, CA Technologies, Engine Yard, Salesforce and Windows Azure.