Cloud Lab
Cloud Computing CSS335

TABLE OF CONTENTS

S.NO.  DATE  EXPERIMENT TITLE                                                MARKS (10)  SIGN

1. Install VirtualBox / VMware Workstation with different flavours of Linux or Windows OS on top of Windows 8 and above.
2. Install a C compiler in the virtual machine created using VirtualBox and execute simple programs.
3. Install Google App Engine. Create a hello world app and other simple web applications using Python/Java.
4. Use the GAE launcher to launch the web applications.
5. Simulate a cloud scenario using CloudSim and run a scheduling algorithm that is not present in CloudSim.
6. Find a procedure to transfer the files from one virtual machine to another virtual machine.
7. Install a Hadoop single node cluster and run simple applications like wordcount.
8. Creating and executing your first container using Docker.
9. Run a container from Docker Hub.

EX.NO:1

DATE:
Install VirtualBox / VMware / an equivalent open-source workstation with
different flavours of Linux or Windows OS on top of Windows 8 and above.

AIM:
To install VirtualBox / VMware Workstation with different flavours of Linux
or Windows OS on top of Windows 8 and above.

PROCEDURE:

Steps to install Virtual Box:


1. Download the VirtualBox installer (.exe), run it, and click the Next button.
2. Click the Next button.
3. Click the Next button.
4. Click the Yes button.
5. Click the Install button.
6. When the installation completes, the VirtualBox icon appears on the desktop.

Steps to import the OpenNebula sandbox:

1. Open VirtualBox
2. File → Import Appliance
3. Browse to the OpenNebula-Sandbox-5.0.ova file
4. Then go to Settings, select USB and choose USB 1.1
5. Then start the OpenNebula VM
6. Log in using username: root, password: opennebula

Steps to create a virtual machine through OpenNebula:
1. Open a browser and type localhost:9869
2. Log in using username: oneadmin, password: opennebula
3. Click on Instances, select VMs, then follow these steps to create a virtual machine:
   a. Expand the + symbol
   b. Select user oneadmin
   c. Then enter the VM name, number of instances and CPU
   d. Then click on the Create button
   e. Repeat steps (c) and (d) to create more than one VM
APPLICATIONS:
There are various applications of cloud computing in today's networked world. Many
search engines and social websites use the concept of cloud computing, such as
amazon.com, hotmail.com, facebook.com, linkedin.com, etc. The advantages of
cloud computing with respect to scalability include reduced risk, low-cost testing,
the ability to segment the customer base, and auto-scaling based on application load.

RESULT:

Thus virtual machines of different configurations were created and run successfully.


EX.NO:2
DATE:
Install a C compiler in the virtual machine created using
virtual box and execute Simple Programs.

Aim:
To install a C compiler in the virtual machine created using VirtualBox and
execute simple programs.

PROCEDURE:

Steps to import .ova file:


1. Open VirtualBox
2. File → Import Appliance
3. Browse to the ubuntu_gt6.ova file
4. Then go to Settings, select USB and choose USB 1.1
5. Then start the ubuntu_gt6 VM
6. Log in using username: dinesh, password: 99425
Steps to run a C program:

1. Open the terminal.
2. Type cd /opt/axis2/axis2-1.7.3/bin and press Enter.
3. Type gedit hello.c to open the editor.
4. Type the C program and save the file.
5. Compile it with gcc hello.c.
6. Run it with ./a.out and view the output.
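The steps above can also be scripted end to end for a quick self-check. The following sketch is illustrative (not part of the original record): it writes a hello.c file, compiles it with gcc and runs the binary, assuming gcc is installed in the guest VM.

```python
import shutil
import subprocess
import tempfile
from pathlib import Path

# The same kind of hello-world program the lab asks you to type into gedit.
HELLO_C = """\
#include <stdio.h>

int main(void) {
    printf("Hello, cloud!\\n");
    return 0;
}
"""

def compile_and_run(source: str) -> str:
    """Write the C source, compile it with gcc, run the binary, return stdout."""
    with tempfile.TemporaryDirectory() as tmp:
        src = Path(tmp) / "hello.c"
        src.write_text(source)
        subprocess.run(["gcc", str(src), "-o", f"{tmp}/a.out"], check=True)
        return subprocess.run([f"{tmp}/a.out"],
                              capture_output=True, text=True).stdout

if shutil.which("gcc"):        # only attempt this when a C compiler is present
    print(compile_and_run(HELLO_C), end="")
```

Inside the VM, the manual steps (gedit, gcc, ./a.out) are equivalent; the script just automates them.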


APPLICATIONS:
C programs can be compiled and executed inside a virtual machine in exactly the
same way as on a physical machine, which is the basis for running programs in a
grid or cloud environment.

RESULT:
Thus the simple C programs were executed successfully.
EX.NO:3
DATE:
Install Google App Engine. Create hello world app and other
simple web applications using python/java.

Aim:
To install Google App Engine, and to create a hello world app and other simple
web applications using Python/Java.
Procedure:

1. Install Google Plugin for Eclipse


Read this guide on how to install the Google Plugin for Eclipse. If you installed the
Google App Engine Java SDK together with the Google Plugin for Eclipse, go to step 2.
Otherwise, get the Google App Engine Java SDK and extract it.

2. Create New Web Application Project


In Eclipse toolbar, click on the Google icon, and select “New Web Application
Project…”

Figure – New Web Application Project

Figure – Deselect "Google Web Toolkit", and link your GAE Java SDK via the
"Configure SDK" link.
Click Finish; the Google Plugin for Eclipse will generate a sample project automatically.
3. Hello World
Review the generated project directory.

Nothing special, a standard Java web project structure.

HelloWorld/
  src/
    ...Java source code...
    META-INF/
      ...other configuration...
  war/
    ...JSPs, images, data files...
    WEB-INF/
      ...app configuration...
      lib/
        ...JARs for libraries...
      classes/
        ...compiled classes...
The extra file is "appengine-web.xml"; Google App Engine needs it to run and
deploy the application.

File : appengine-web.xml

<?xml version="1.0" encoding="utf-8"?>
<appengine-web-app xmlns="http://appengine.google.com/ns/1.0">
	<application></application>
	<version>1</version>

	<!-- Configure java.util.logging -->
	<system-properties>
		<property name="java.util.logging.config.file" value="WEB-INF/logging.properties"/>
	</system-properties>

</appengine-web-app>

4. Run it locally
Right-click on the project and run it as a "Web Application".

Eclipse console:

//...
INFO: The server is running at http://localhost:8888/
30 Mac 2012 11:13:01 PM com.google.appengine.tools.development.DevAppServerImpl start
INFO: The admin console is running at http://localhost:8888/_ah/admin

Access the URL http://localhost:8888/ to see the output,
and also the hello world servlet at http://localhost:8888/helloworld

5. Deploy to Google App Engine


Register an account on https://appengine.google.com/, and create an application ID for
your web application.

In this demonstration, I created an application ID named "mkyong123" and put it
in appengine-web.xml.

File : appengine-web.xml
<?xml version="1.0" encoding="utf-8"?>
<appengine-web-app xmlns="http://appengine.google.com/ns/1.0">
	<application>mkyong123</application>
	<version>1</version>

	<!-- Configure java.util.logging -->
	<system-properties>
		<property name="java.util.logging.config.file" value="WEB-INF/logging.properties"/>
	</system-properties>

</appengine-web-app>
To deploy, follow these steps:

Figure 1.1 – Click on GAE deploy button on the toolbar.

Figure 1.2 – Sign in with your Google account and click on the Deploy button.
Figure 1.3 – If everything is fine, the hello world web application will be deployed to this
URL – http://mkyong123.appspot.com/
Result:
Thus the simple application was created successfully.
EX.NO:4
DATE:
Simulate a cloud scenario using CloudSim and run a scheduling
algorithm that is not present in CloudSim.

Aim:
To Simulate a cloud scenario using CloudSim and run a scheduling algorithm
that is not present in CloudSim.
Steps:
How to use CloudSim in Eclipse
CloudSim is written in Java. The knowledge you need to use CloudSim is basic Java
programming and some basics of cloud computing. Knowledge of programming IDEs
such as Eclipse or NetBeans is also helpful. CloudSim is a library and hence does not
have to be installed: normally, you can unpack the downloaded package in any directory,
add it to the Java class path, and it is ready to be used. Please verify that Java is
available on your system.
To use CloudSim in Eclipse:
1. Download the CloudSim installable files
   from https://code.google.com/p/cloudsim/downloads/list and unzip them.
2. Open Eclipse.
3. Create a new Java Project: File -> New.
4. Import the unpacked CloudSim project into the new Java Project.
   The first step in the code is to initialise the CloudSim package by initialising
   the CloudSim library, as follows:
   CloudSim.init(num_user, calendar, trace_flag);
5. Data centres are the resource providers in CloudSim; hence,
   creation of data centres is the second step. To create a Datacenter, you need
   the DatacenterCharacteristics object that stores the properties of a data
   centre such as architecture, OS, list of machines, allocation policy that
   covers time or space sharing, the time zone and its price:
   Datacenter datacenter9883 = new Datacenter(name, characteristics,
   new VmAllocationPolicySimple(hostList), storageList, 0);
6. The third step is to create a broker:
   DatacenterBroker broker = createBroker();
7. The fourth step is to create one virtual machine, giving the unique ID of the
   VM, the userId of the VM's owner, the MIPS rating, the number of PEs
   (CPUs), the amount of RAM, the amount of bandwidth, the amount of storage,
   the virtual machine monitor, and the CloudletScheduler policy for cloudlets:
   Vm vm = new Vm(vmid, brokerId, mips, pesNumber, ram, bw, size,
   vmm, new CloudletSchedulerTimeShared());
8. Submit the VM list to the broker:
   broker.submitVmList(vmlist);
9. Create a cloudlet with length, file size, output size, and utilisation model:
   Cloudlet cloudlet = new Cloudlet(id, length, pesNumber, fileSize, outputSize,
   utilizationModel, utilizationModel, utilizationModel);
10. Submit the cloudlet list to the broker:
    broker.submitCloudletList(cloudletList);
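Steps 1-10 wire the simulation together; the custom part of the exercise is the scheduling policy itself. As an illustration, here is a Shortest Job First (SJF) cloudlet-to-VM binding sketched in Python, a policy that stock CloudSim does not provide. The names (Vm, Cloudlet, length, mips) mirror CloudSim's fields, but this is only a sketch of the logic: in CloudSim proper you would implement it in Java, for example by extending DatacenterBroker and overriding its cloudlet-submission logic.

```python
# Shortest Job First (SJF) cloudlet-to-VM binding: shortest cloudlets run
# first, each on the VM that becomes free the earliest.
from dataclasses import dataclass

@dataclass
class Vm:
    vm_id: int
    mips: float             # processing speed, as in CloudSim's Vm
    finish_at: float = 0.0  # simulated time at which this VM becomes free

@dataclass
class Cloudlet:
    cloudlet_id: int
    length: float           # million instructions, as in CloudSim's Cloudlet

def sjf_bind(cloudlets, vms):
    """Assign the shortest cloudlets first, each to the earliest-free VM."""
    schedule = []
    for c in sorted(cloudlets, key=lambda c: c.length):  # SJF order
        vm = min(vms, key=lambda v: v.finish_at)         # earliest-free VM
        start = vm.finish_at
        vm.finish_at = start + c.length / vm.mips        # exec time = length/mips
        schedule.append((c.cloudlet_id, vm.vm_id, start, vm.finish_at))
    return schedule

vms = [Vm(0, 1000), Vm(1, 1000)]
cloudlets = [Cloudlet(0, 40000), Cloudlet(1, 10000), Cloudlet(2, 20000)]
for cid, vid, start, end in sjf_bind(cloudlets, vms):
    print(f"Cloudlet {cid} -> VM {vid}: {start:.1f}..{end:.1f}")
```

Running the sketch prints each binding with its start and finish time, which is the behaviour a custom Java broker would reproduce inside the simulation.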
Sample Output from the Existing Example:

Starting CloudSimExample1...
Initialising...
Starting CloudSim version 3.0
Datacenter_0 is starting...
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>null
Broker is starting...
Entities started.
0.0 : Broker: Cloud Resource List received with 1 resource(s)
0.0 : Broker: Trying to Create VM #0 in Datacenter_0
0.1 : Broker: VM #0 has been created in Datacenter #2, Host #0
0.1 : Broker: Sending cloudlet 0 to VM #0
400.1 : Broker: Cloudlet 0 received
400.1 : Broker: All Cloudlets executed. Finishing...
400.1 : Broker: Destroying VM #0
Broker is shutting down...
Simulation: No more future events
CloudInformationService: Notify all CloudSim entities for shutting down.
Datacenter_0 is shutting down...
Broker is shutting down...
Simulation completed.
Simulation completed.

========== OUTPUT ==========

Cloudlet ID  STATUS   Data center ID  VM ID  Time  Start Time  Finish Time
    0        SUCCESS        2           0    400      0.1        400.1

*****Datacenter: Datacenter_0*****
User id    Debt
3          35.6

CloudSimExample1 finished!

RESULT:

The simulation was successfully executed.


EX.NO:5
DATE:
Use the GAE launcher to launch the web applications.

Aim:
To use the GAE launcher to launch the web applications.

Steps:

Making your First Application

Now you need to create a simple application. We could use the
"+" option to have the launcher make us an application – but
instead we will do it by hand to get a better sense of what is going
on. Make a folder for your Google App Engine applications. I am
going to make the folder on my Desktop called "apps" – the path
to this folder is:
C:\Documents and Settings\csev\Desktop\apps
Then make a sub-folder within apps called "ae-01-trivial" – the path
to this folder would be:
C:\Documents and Settings\csev\Desktop\apps\ae-01-trivial
Using a text editor such as JEdit (www.jedit.org), create a file called app.yaml in the
ae-01-trivial folder with the following contents:
application: ae-01-trivial
version: 1
runtime: python
api_version: 1

handlers:
- url: /.*
  script: index.py
Note: Please do not copy and paste these lines into your text editor – you might end
up with strange characters – simply type them into your editor.
Then create a file in the ae-01-trivial folder called index.py with three lines in it:
print 'Content-Type: text/plain'
print ' '
print 'Hello there Chuck'
Then start the GoogleAppEngineLauncher program that can
be found under Applications. Use the File -> Add Existing
Application command, navigate into the apps directory and
select the ae-01-trivial folder. Once you have added the
application, select it so that you can control the application using the launcher.

Once you have selected your application, press Run. After a few
moments your application will start and the launcher will show a little
green icon next to your application. Then press Browse to open a
browser pointing at your application, which is running at
http://localhost:8080/

Paste http://localhost:8080 into your browser and you should
see your application as follows:

Just for fun, edit index.py to change the name "Chuck" to your own
name and press Refresh in the browser to verify your update.

Watching the Log

You can watch the internal log of the actions that the web server is
performing when you are interacting with your application in the
browser. Select your application in the Launcher and press the Logs
button to bring up a log window.
Each time you press Refresh in your browser, you can see it retrieving
the output with a GET request.

Dealing With Errors

With two files to edit, there are two general categories of errors
that you may encounter. If you make a mistake in the app.yaml file, the
App Engine will not start and your launcher will show a yellow icon
near your application.

To get more detail on what is going wrong, take a look at the log for the application.
In this instance, the mistake is mis-indenting the last line in the app.yaml (line 8).
If you make a syntax error in the index.py file, a Python traceback error will
appear in your browser.

The error you need to see is likely to be in the last few lines of the output
– in this case I made a Python syntax error on line one of our one-line
application.
Reference: http://en.wikipedia.org/wiki/Stack_trace
When you make a mistake in the app.yaml file, you must fix
the mistake and attempt to start the application again.
If you make a mistake in a file like index.py, you can simply fix
the file and press Refresh in your browser – there is no need to restart
the server.

Shutting Down the Server


To shut down the server, use the Launcher, select your application and press the
Stop button.

Result:
Thus the GAE web applications were created and launched successfully.
EX.NO:6
DATE:

Find a procedure to transfer the files from one virtual machine to


another virtual machine.

Aim:
To Find a procedure to transfer the files from one virtual machine
to another virtual machine.
Steps:
1. You can copy a few (or more) lines with the copy & paste mechanism.
   For this you need to share the clipboard between the host OS and the guest
   OS, installing Guest Additions on both the virtual machines
   (probably setting it to bidirectional and restarting them). You copy from the
   guest OS into the clipboard that is shared with the host OS.
   Then you paste from the host OS into the second guest OS.
   You can enable drag and drop too with the same method (click on
   the machine: Settings, General, Advanced, Drag and Drop: set to
   Bidirectional).
2. You can have common Shared Folders on both virtual
   machines and use one of the shared directories as a buffer for the
   copy.
   Installing Guest Additions also gives you the possibility to set Shared
   Folders. As soon as you put a file in a shared folder from the host OS or
   from a guest OS, it is immediately visible to the other. (Keep in mind
   that some problems can arise with the date/time of the files when there
   are different clock settings on the different virtual machines.)
   If you use the same folder shared on several machines you can
   exchange files directly by copying them into this folder.
3. You can use the usual methods to copy files between two different
   computers with a client-server application (e.g. scp with sshd active
   for Linux, WinSCP... you can get some info about SSH servers e.g.
   here).
   You need an active server (sshd) on the receiving machine and a
   client on the sending machine. Of course you need to have the
   authorization set up (via password or, better, via an automatic
   authentication method).
   Note: many Linux/Ubuntu distributions install sshd by default;
   you can see if it is running with pgrep sshd from a shell. You can
   install it with sudo apt-get install openssh-server.
4. You can mount part of the file system of a virtual machine via
   NFS or SSHFS on the other, or you can share files and directories
   with Samba.
   You may find interesting the article "Sharing files between
   guest and host without VirtualBox shared folders", which gives
   detailed step-by-step instructions.
You should remember that you are dealing with a little network of
machines with different operating systems, and in particular:
- Each virtual machine has its own operating system running
  on it and acts like a physical machine.
- Each virtual machine is an instance of a program owned by a user
  on the host operating system and is subject to the restrictions
  of that user in the host OS.
  E.g. let us say that Hastur and Meow are users of the host
  machine, but they do not allow each other to see their directories
  (no read/write/execute authorization). When each of them runs a
  virtual machine, to the host OS those virtual machines are two
  normal programs owned by Hastur and Meow and cannot see the
  private directory of the other user. This is a restriction due to the
  host OS. It is easy to overcome: it is enough to grant
  read/write/execute authorization on a directory, or to choose a
  different directory in which both users can read/write/execute.
- Windows likes the mouse and Linux the fingers. :-)
  I mean, I suggest you enable Drag & Drop to be comfortable with
  Windows machines, and use Shared Folders or ssh to be comfortable with
  Linux.
When you need to be fast with Linux you will feel the need of ssh-keygen:
generate SSH keys once so you can copy files to/from a remote machine without
typing the password any more. In this way bash auto-completion works
remotely too!
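Method 2 (a common shared folder) reduces to a copy-then-verify sequence. The sketch below is illustrative only: temporary directories stand in for the two guests and for the shared folder, which in a real VirtualBox setup would be the folder mounted in both VMs via Guest Additions.

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256(path: Path) -> str:
    """Checksum used to verify the file survived the transfer intact."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def transfer_via_shared_folder(src: Path, shared: Path, dst_dir: Path) -> Path:
    """VM1 drops the file into the shared folder; VM2 picks it up from there."""
    staged = shared / src.name
    shutil.copy2(src, staged)       # VM1: copy into the shared folder
    dst = dst_dir / src.name
    shutil.copy2(staged, dst)       # VM2: copy out of the shared folder
    return dst

# Demonstration with temporary directories standing in for the two VMs
# and the shared folder mounted in both guests.
with tempfile.TemporaryDirectory() as tmp:
    vm1, shared, vm2 = Path(tmp, "vm1"), Path(tmp, "shared"), Path(tmp, "vm2")
    for d in (vm1, shared, vm2):
        d.mkdir()
    src = vm1 / "data.txt"
    src.write_text("hello from VM1\n")
    dst = transfer_via_shared_folder(src, shared, vm2)
    print("transfer ok:", sha256(src) == sha256(dst))
```

The same copy-then-verify pattern applies to method 3: replace the shared-folder copies with an scp invocation and compare checksums on both ends.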

PROCEDURE:
Steps:
1. Open a browser and type localhost:9869
2. Log in using username: oneadmin, password: opennebula
3. Then follow these steps to migrate VMs:
   a. Click on Infrastructure
   b. Select Clusters and enter the cluster name
   c. Then select the Hosts tab, and select all hosts
   d. Then select the VNets tab, and select all vnets
   e. Then select the Datastores tab, and select all datastores
   f. And then choose Hosts under the Infrastructure tab
   g. Click on the + symbol to add a new host, name the host, then click on Create
4. Click on Instances, select the VM to migrate, then follow these steps:
   a. Click on the 8th icon; a drop-down list is displayed
   b. Select Migrate in that list; a popup window is displayed
   c. In that window select the target host to migrate to, then click on Migrate

Before migration

Host:SACET

Host:one-sandbox
After Migration:
Host:one-sandbox

Host:SACET
APPLICATIONS:
Easily migrate your virtual machines from one PC to another.

Result:
Thus file transfer between virtual machines was completed successfully.
EX.NO:7
DATE:
Install Hadoop single node cluster and run simple applications like wordcount.
Aim:
To install a Hadoop single node cluster and run simple applications like wordcount.

Steps:

Install Hadoop

Step 1: Download the Java 8 package and save the file in your home directory.

Step 2: Extract the Java tar file.

Command: tar -xvf jdk-8u101-linux-i586.tar.gz

Fig: Hadoop Installation – Extracting Java Files

Step 3: Download the Hadoop 2.7.3 package.

Command: wget https://archive.apache.org/dist/hadoop/core/hadoop-2.7.3/hadoop-2.7.3.tar.gz

Fig: Hadoop Installation – Downloading Hadoop


Step 4: Extract the Hadoop tar file.

Command: tar -xvf hadoop-2.7.3.tar.gz

Fig: Hadoop Installation – Extracting Hadoop Files

Step 5: Add the Hadoop and Java paths in the bash file (.bashrc).
Open the .bashrc file and add the Hadoop and Java paths as shown below.

Command: vi .bashrc

Fig: Hadoop Installation – Setting Environment Variable

Then, save the bash file and close it.

To apply all these changes to the current terminal, execute the source command.

Command: source .bashrc

Fig: Hadoop Installation – Refreshing environment variables


To make sure that Java and Hadoop have been properly installed on your
system and can be accessed through the terminal, execute the java -version
and hadoop version commands.

Command: java -version

Fig: Hadoop Installation – Checking Java Version

Command: hadoop version

Fig: Hadoop Installation – Checking Hadoop Version

Step 6: Edit the Hadoop configuration files.

Command: cd hadoop-2.7.3/etc/hadoop/
Command: ls

All the Hadoop configuration files are located in the hadoop-2.7.3/etc/hadoop
directory, as you can see in the snapshot below:

Fig: Hadoop Installation – Hadoop Configuration Files

Step 7: Open core-site.xml and edit the property mentioned below inside the
configuration tag:

core-site.xml informs the Hadoop daemon where the NameNode runs in the cluster. It
contains configuration settings of the Hadoop core, such as I/O settings that are
common to HDFS and MapReduce.

Command: vi core-site.xml

Fig: Hadoop Installation – Configuring core-site.xml


<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

Step 8: Edit hdfs-site.xml and edit the property mentioned below inside the
configuration tag:
hdfs-site.xml contains configuration settings of the HDFS daemons (i.e. NameNode,
DataNode, Secondary NameNode). It also includes the replication factor and
block size of HDFS.

Command: vi hdfs-site.xml

Fig: Hadoop Installation – Configuring hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.permission</name>
    <value>false</value>
  </property>
</configuration>

Step 9: Edit the mapred-site.xml file and edit the property mentioned below
inside the configuration tag:

mapred-site.xml contains configuration settings of a MapReduce application, like
the number of JVMs that can run in parallel, the size of the mapper and the reducer
process, the CPU cores available for a process, etc.

In some cases, the mapred-site.xml file is not available. So, we have to create the
mapred-site.xml file using the mapred-site.xml template.

Command: cp mapred-site.xml.template mapred-site.xml

Command: vi mapred-site.xml

Fig: Hadoop Installation – Configuring mapred-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

Step 10: Edit yarn-site.xml and edit the property mentioned below inside the
configuration tag:

yarn-site.xml contains configuration settings of the ResourceManager and
NodeManager, like the application memory management size, the operations needed
on the program & algorithm, etc.

Command: vi yarn-site.xml

Fig: Hadoop Installation – Configuring yarn-site.xml

<?xml version="1.0"?>
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>
Step 11: Edit hadoop-env.sh and add the Java path as mentioned below:

hadoop-env.sh contains the environment variables that are used in the scripts to
run Hadoop, like the Java home path, etc.

Command: vi hadoop-env.sh

Fig: Hadoop Installation – Configuring hadoop-env.sh

Step 12: Go to Hadoop home directory and format the NameNode.

Command: cd

Command: cd hadoop-2.7.3

Command: bin/hadoop namenode -format

Fig: Hadoop Installation – Formatting NameNode

This formats the HDFS via the NameNode. This command is only executed the
first time. Formatting the file system means initializing the directory
specified by the dfs.name.dir variable.

Never format an up-and-running Hadoop filesystem: you will lose all the data
stored in the HDFS.
Step 13: Once the NameNode is formatted, go to the hadoop-2.7.3/sbin directory
and start all the daemons.

Command: cd hadoop-2.7.3/sbin

Either you can start all daemons with a single command or do it individually.

Command: ./start-all.sh

The above command is a combination of start-dfs.sh, start-yarn.sh and
mr-jobhistory-daemon.sh

Or you can run all the services individually as below:

Start NameNode:

The NameNode is the centerpiece of an HDFS file system. It keeps the directory
tree of all files stored in the HDFS and tracks all the files stored across the cluster.

Command: ./hadoop-daemon.sh start namenode

Fig: Hadoop Installation – Starting NameNode


Start DataNode:
On startup, a DataNode connects to the NameNode and responds to
requests from the NameNode for different operations.

Command: ./hadoop-daemon.sh start datanode

Fig: Hadoop Installation – Starting DataNode

Start ResourceManager:

The ResourceManager is the master that arbitrates all the available cluster resources
and thus helps in managing the distributed applications running on the YARN
system. Its work is to manage each NodeManager and each application's
ApplicationMaster.

Command: ./yarn-daemon.sh start resourcemanager

Fig: Hadoop Installation – Starting ResourceManager

Start NodeManager:

The NodeManager in each machine is the framework agent which is responsible
for managing containers, monitoring their resource usage and reporting the same to
the ResourceManager.

Command: ./yarn-daemon.sh start nodemanager

Fig: Hadoop Installation – Starting NodeManager

Start JobHistory Server:

The JobHistory Server is responsible for servicing all job-history-related requests
from clients.

Command: ./mr-jobhistory-daemon.sh start historyserver

Step 14: To check that all the Hadoop services are up and running, run the below
command.
Command: jps
Fig: Hadoop Installation – Checking Daemons

Step 15: Now open a browser and go to localhost:50070/dfshealth.html to
check the NameNode interface.

Fig: Hadoop Installation – Starting WebUI

Congratulations, you have successfully installed a single-node Hadoop cluster.
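With the cluster up, the wordcount application from the aim can be run with Hadoop Streaming, which accepts plain Python scripts as mapper and reducer. The sketch below shows the two stages and checks them locally; the jar path and HDFS paths in the comment are illustrative and depend on your installation.

```python
# Wordcount as a Hadoop Streaming job: the mapper emits (word, 1) pairs and
# the reducer sums the counts per word (Hadoop sorts by key between stages).
# On the single-node cluster built above it would be launched roughly as:
#   bin/hdfs dfs -put input.txt /input
#   bin/hadoop jar share/hadoop/tools/lib/hadoop-streaming-2.7.3.jar \
#       -input /input -output /out -mapper mapper.py -reducer reducer.py
# (paths are illustrative). The same logic also runs locally for a quick check:
from itertools import groupby

def mapper(lines):
    """Emit (word, 1) for every word, like mapper.py printing 'word<TAB>1'."""
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def reducer(pairs):
    """Sum the counts for each key; assumes pairs arrive sorted by key."""
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)

text = ["Hello cloud", "hello Hadoop cloud"]
# 'sorted' stands in for Hadoop's shuffle-and-sort phase between the stages.
for word, count in reducer(sorted(mapper(text))):
    print(word, count)
```

Split into two scripts reading stdin and writing stdout, the same mapper and reducer run unchanged under Hadoop Streaming on the cluster.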

Result:
Thus the Hadoop single-node cluster was installed and simple applications like
wordcount were executed successfully.
EX.NO:8
DATE:

Install Docker on your machine


Aim:
To install Docker on an Ubuntu system.

Procedure:

First, update your packages:

$ sudo apt update

Next, install docker with apt-get:

$ sudo apt install docker.io

Finally, verify that Docker is installed correctly:

$ sudo docker run hello-world

2. Create your project


In order to create your first Docker application, I invite you to create a
folder on your computer. It must contain the following two files:

A ‘main.py’ file (python file that will contain the code to be executed).
A ‘Dockerfile’ file (Docker file that will contain the necessary
instructions to create the environment).
Normally you should have this folder architecture:

.
├── Dockerfile
└── main.py

0 directories, 2 files
3. Edit the Python file
Add the following code to the 'main.py' file:

#!/usr/bin/env python3

print("Docker is magic!")

This program does nothing fancy, but once you see "Docker is magic!" displayed
in your terminal you will know that your Docker is working.

4. Edit the Dockerfile

Our goal here is to launch Python code.

To do this, our Docker image must contain all the dependencies necessary to
launch Python. A Linux (Ubuntu) image with Python installed on it should be
enough. The first step to take when you create a Dockerfile is to visit
the Docker Hub website. This site contains many pre-built images to
save you time (for example: all the images for Linux distributions or code languages).

FROM python:latest

# In order to launch our python code, we must import it into our image.
# We use the keyword 'COPY' to do that.
# The first parameter 'main.py' is the name of the file on the host.
# The second parameter '/' is the path where to put the file on the image.
# Here we put the file at the image root folder.
COPY main.py /

# We need to define the command to launch when we are going to run the image.
# We use the keyword 'CMD' to do that.
# The following command will execute "python ./main.py".
CMD [ "python", "./main.py" ]

5. Create the Docker image

Once your code is ready and the Dockerfile is written, all you have to do
is create your image to contain your application.

$ docker build -t python-test .

The '-t' option allows you to define the name of your image. In our case
we have chosen 'python-test', but you can put what you want.

6. Run the Docker image

Once the image is created, your code is ready to be launched.

$ docker run python-test

You need to put the name of your image after 'docker run'.

There you go, that's it. You should normally see "Docker is magic!"
displayed in your terminal.


Useful commands for Docker


List your images.
$ docker image ls
Delete a specific image.

$ docker image rm [image name]

Delete all existing images.

$ docker image rm $(docker images -a -q)

List all existing containers (running and not running).


$ docker ps -a

Stop a specific container.

$ docker stop [container name]

Stop all running containers.

$ docker stop $(docker ps -a -q)

Delete a specific container (only if stopped).

$ docker rm [container name]

Delete all containers (only if stopped).

$ docker rm $(docker ps -a -q)

Display logs of a container.

$ docker logs [container name]

Result:
Thus the Docker was installed successfully.
EX.NO:9
DATE:

Run a container

Aim:
To write a procedure to run a container from Docker Hub.

Procedure:

Step 1: Get the sample application

If you have git, you can clone the repository for the sample application.
Otherwise, you can download the sample application archive.

Step 2: Explore the Dockerfile


To run your code in a container, the most fundamental thing you need is a
Dockerfile. A Dockerfile describes what goes into a container. Open the
sample application in your IDE and then open the Dockerfile to explore
its contents. Note that this project already has a Dockerfile, but for your
own projects you need to create a Dockerfile. A Dockerfile is simply a
text file named Dockerfile with no file extension.

Step 3: Build your first image


An image is like a static version of a container. You always need an
image to run a container. Once you have a Dockerfile in your repository,
run the following docker build command in the project folder to create an
image.

$ docker build -t welcome-to-docker .

Building the image may take some time. After your image is built, you
can view your image in the Images tab in Docker Desktop.

Step 4: Run your container


To run your image as a container, go to the Images tab, and then
select Run in the Actions column of your image. When the Optional
settings appear, specify the Host port number 8089 and then select Run.
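The same run can also be started from a terminal instead of Docker Desktop. This sketch only builds the docker run argument list: the image name follows the build step above, host port 8089 comes from the optional settings, and the container-internal port 3000 is an assumption here; check which port the image you built actually exposes.

```python
def docker_run_args(image, host_port, container_port):
    """argv for `docker run`, publishing host_port onto the container's port."""
    return ["docker", "run", "-d", "-p", f"{host_port}:{container_port}", image]

# Host port 8089 is taken from the step above; 3000 is an assumed internal port.
args = docker_run_args("welcome-to-docker", 8089, 3000)
print(" ".join(args))
# To actually start the container (requires the Docker daemon to be running):
# import subprocess; subprocess.run(args, check=True)
```

The `-d` flag detaches the container and `-p` publishes the port, matching what Docker Desktop does when you fill in the Host port field.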
Result:

Thus the container from Docker hub was executed successfully.


VIVA QUESTIONS AND ANSWERS
1.Define Cloud Computing with example.
Cloud computing is a model for enabling convenient, on-demand network
access to a shared pool of configurable computing resources (e.g.,
networks, servers, storage, applications, and services) that can be rapidly
provisioned and released with minimal management effort or service
provider interaction.

2.What is the working principle of Cloud Computing?


The cloud is a collection of computers and servers that are publicly
accessible via the Internet. This hardware is typically owned and operated
by a third party on a consolidated basis in one or more data center
locations. The machines can run any combination of operating systems.

3.What are the advantages and disadvantages of Cloud Computing?


Advantages
Lower-Cost Computers for Users
Improved Performance
Lower IT Infrastructure Costs
Fewer Maintenance Issues
Lower Software Costs
Instant Software Updates
Increased Computing Power
Unlimited Storage Capacity
Increased Data Safety
Improved Compatibility Between Operating Systems
Improved Document Format Compatibility
Easier Group Collaboration
Universal Access to Documents
Latest Version Availability
Removes the Tether to Specific Devices
Disadvantages
Requires a Constant Internet Connection
Doesn’t Work Well with Low-Speed Connections
Can Be Slow
Features Might Be Limited
Stored Data Might Not Be Secure
If the Cloud Loses Your Data, It May Be Unrecoverable

4.What is distributed system?


A distributed system is a software system in which components located
on networked computers communicate and coordinate their actions by
passing messages. The components interact with each other in order to
achieve a common goal.
Three significant characteristics of distributed systems are:
Concurrency of components
Lack of a global clock
Independent failure of components

5.What is grid computing?


Grid Computing enables virtual organizations to share geographically
distributed resources as they pursue common goals, assuming the absence
of central location, central control, omniscience, and an existing trust
relationship.

6.Which business areas need Grid computing?


Life Sciences
Financial services
Higher Education
Engineering Services
Government
Collaborative games

7.List out the Grid Applications:


Application partitioning that involves breaking the problem into
discrete pieces
Discovery and scheduling of tasks and workflow
Data communications: distributing the problem data where and when it
is required
Provisioning and distributing application codes to specific system
nodes
Autonomic features such as self-configuration, self-optimization,
self-recovery and self-management

8.List some grid computing toolkits and frameworks?


Globus Toolkit
Globus Resource Allocation Manager (GRAM)
Grid Security Infrastructure (GSI)
Information Services
Legion, Condor and Condor-G
NIMROD, UNICORE, NMI

9.What are Desktop Grids?


These are grids that leverage the compute resources of desktop computers.
Because of the true (but unfortunate) ubiquity of Microsoft® Windows®
operating system in corporations, desktop grids are assumed to apply to
the Windows environment. The Mac OS™ environment is supported by a
limited number of vendors.

10.What are Server Grids?


Some corporations, while adopting Grid Computing , keep it limited to
server resources that are within the purview of the IT department. Special
servers, in some cases, are bought solely for the purpose of creating an
internal “utility grid” with resources made available to various
departments. No desktops are included in server grids. These usually run
some flavor of the Unix/Linux operating system.

11.Define Opennebula.
OpenNebula is an open source management tool that helps virtualized
data centers oversee private clouds, public clouds and hybrid clouds.
OpenNebula is vendor neutral, as well as platform- and API-agnostic. It
can use KVM, Xen or VMware hypervisors.

12.Define Eclipse.
Eclipse is an integrated development environment (IDE) used in
computer programming, and is the most widely used Java IDE. It
contains a base workspace and an extensible plug-in system for
customizing the environment.

13.Define Netbeans.
NetBeans is an open-source integrated development environment (IDE)
for developing with Java, PHP, C++, and other programming languages.
NetBeans is also referred to as a platform of modular components used
for developing Java desktop applications.

14.Define Apache Tomcat.


Apache Tomcat (or Jakarta Tomcat or simply Tomcat) is an open source
servlet container developed by the Apache Software Foundation (ASF).
Tomcat implements the Java Servlet and the JavaServer Pages (JSP)
specifications from Sun Microsystems, and provides a "pure Java" HTTP
web server environment for Java code to run."

15.What is private cloud?


The private cloud is built within the domain of an intranet owned by a
single organization.
Therefore, they are client owned and managed. Their access is limited to
the owning clients and their partners. Their deployment was not meant to
sell capacity over the Internet through publicly accessible interfaces.
Private clouds give local users a flexible and agile private infrastructure
to run service workloads within their administrative domains.

16.What is public cloud?


A public cloud is built over the Internet, which can be accessed by any
user who has paid for the service. Public clouds are owned by service
providers. They are accessed by subscription. Many companies have built
public clouds, namely Google App Engine, Amazon AWS, Microsoft
Azure, IBM Blue Cloud, and Salesforce Force.com. These are
commercial providers that offer a publicly accessible remote interface for
creating and managing VM instances within their proprietary
infrastructure.

17. What is hybrid cloud?


A hybrid cloud is built with both public and private clouds. Private
clouds can also support a hybrid cloud model by supplementing local
infrastructure with computing capacity from an external public cloud. For
example, the Research Compute Cloud (RC2) is a private cloud built by
IBM.

18.What is a Community Cloud ?


A community cloud in computing is a collaborative effort in which
infrastructure is shared between several organizations from a specific
community with common concerns (security, compliance, jurisdiction,
etc.), whether managed internally or by a third-party and hosted internally
or externally. This is controlled and used by a group of organizations that
have a shared interest. The costs are spread over fewer users than a public
cloud (but more than a private cloud).

19.Define IaaS?
The IaaS layer offers storage and infrastructure resources that is needed
to deliver the Cloud services. It only comprises of the infrastructure or
physical resource. Top IaaS Cloud Computing Companies: Amazon
(EC2), Rackspace, GoGrid, Microsoft, Terremark and Google.

20.Define PaaS?
PaaS provides the combination of both infrastructure and application
platform. Hence, organisations using PaaS do not have to worry about the
infrastructure or the underlying services. Top PaaS Cloud Computing
Companies: Salesforce.com, Google, Concur Technologies, Ariba,
Unisys and Cisco.

21.Define SaaS?
In the SaaS layer, the Cloud service provider hosts the software on
their servers. It can be defined as a model in which applications and
software are hosted on a server and made available to customers over a
network. Top SaaS Cloud Computing Companies: Amazon Web
Services, AppScale, CA Technologies, Engine Yard, Salesforce and
Windows Azure.

22.What is meant by virtualization?


Virtualization is a computer architecture technology by which multiple
virtual machines (VMs) are multiplexed in the same hardware machine.
The idea of VMs can be dated back to the 1960s. The purpose of a VM is
to enhance resource sharing by many users and improve computer
performance in terms of resource utilization and application flexibility.

23.What are the implementation levels of virtualization?


The virtualization types are the following:
1. OS-level virtualization
2. ISA-level virtualization
3. User-application-level virtualization
4. Hardware-level virtualization
5. Library-level virtualization

24.List the requirements of VMM?


There are three requirements for a VMM. First, a VMM should provide
an environment for programs which is essentially identical to the original
machine. Second, programs run in this environment should show, at
worst, only minor decreases in speed. Third, a VMM should be in
complete control of the system resources.

25.Explain Host OS and Guest OS?


A comparison of the differences between a host system, a guest system,
and a virtual machine within a virtual infrastructure. A host system (host
operating system) would be the primary & first installed operating system.
If you are using a bare metal Virtualization platform like Hyper-V or
ESX, there really isn’t a host operating system besides the Hypervisor. If
you are using a Type-2 Hypervisor like VMware Server or Virtual Server,
the host operating system is whatever operating system those applications
are installed into. A guest system (guest operating system) is a virtual
guest or virtual machine (VM) that is installed under the host operating
system. The guests are the VMs that you run in your virtualization
platform.

26.Write the steps for live VM migration?


The five steps for live VM migration is
Stage 0: Pre-Migration
Active VM on Host A
Alternate physical host may be preselected for migration
Block devices mirrored and free resources maintained
Stage 1: Reservation
Initialize a container on the target host
Stage 2: Iterative pre-copy
Enable shadow paging
Copy dirty pages in successive rounds.
Stage 3: Stop and copy
Suspend VM on host A
Generate ARP to redirect traffic to Host B
Synchronize all remaining VM state to Host B
Stage 4: Commitment
VM state on Host A is released
Stage 5: Activation
VM starts on Host B
Connects to local devices
Resumes normal operation

27.Define Globus Toolkit: Grid Computing Middleware


Globus is open source grid software that addresses the most
challenging problems in distributed resource sharing. The Globus
Toolkit includes software services and libraries for distributed security,
resource management, monitoring and discovery, and data management.

28.Define Blocks in HDFS


A disk has a block size, which is the minimum amount of data that it
can read or write. Filesystems for a single disk build on this by dealing
with data in blocks, which are an integral multiple of the disk block size.
Filesystem blocks are typically a few kilobytes in size, while disk blocks
are normally 512 bytes. This is generally transparent to the filesystem
user who is simply reading or writing a file—of whatever length.
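Since HDFS stores a file as a sequence of fixed-size blocks (128 MB by default in recent Hadoop versions), the number of blocks a file occupies is the ceiling of its size divided by the block size. A small shell sketch of that arithmetic:

```shell
# Number of HDFS blocks for a 1000 MB file with a 128 MB block size.
FILE_MB=1000
BLOCK_MB=128
BLOCKS=$(( (FILE_MB + BLOCK_MB - 1) / BLOCK_MB ))   # integer ceiling division
echo "$BLOCKS"   # prints 8
```

Note that, unlike a regular filesystem, HDFS does not pad the last block: a file smaller than one block consumes only its actual size on disk.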
