
How to Install Hadoop on Ubuntu 18.04 or 20.04 (https://phoenixnap.com/kb/install-hadoop-ubuntu)

May 11, 2020

APACHE BIG DATA UBUNTU

Introduction

Every major industry is implementing Apache Hadoop as the standard framework for
processing and storing big data. Hadoop is designed to be deployed across a network
of hundreds or even thousands of dedicated servers. All these machines work together
to deal with the massive volume and variety of incoming datasets.

Deploying Hadoop services on a single node is a great way to get yourself acquainted
with basic Hadoop commands and concepts.

This easy-to-follow guide helps you install Hadoop on Ubuntu 18.04 or Ubuntu 20.04.


Prerequisites

• Access to a terminal window/command line


• Sudo or root privileges on local/remote machines

Install OpenJDK on Ubuntu


The Hadoop framework is written in Java, and its services require a compatible Java
Runtime Environment (JRE) and Java Development Kit (JDK). Use the following
command to update your system before initiating a new installation:

sudo apt update

At the moment, Apache Hadoop 3.x fully supports Java 8. The OpenJDK 8 package in
Ubuntu contains both the runtime environment and development kit.

Type the following command in your terminal to install OpenJDK 8:

sudo apt install openjdk-8-jdk -y

The OpenJDK or Oracle Java version can affect how elements of a Hadoop ecosystem interact. To install a specific Java version, check out our detailed guide on how to install Java on Ubuntu.

Once the installation process is complete, verify the current Java version:

java -version; javac -version

The output informs you which Java edition is in use.

Set Up a Non-Root User for Hadoop Environment

It is advisable to create a non-root user, specifically for the Hadoop environment. A distinct user improves security and helps you manage your cluster more efficiently. To ensure the smooth functioning of Hadoop services, the user should have the ability to establish a passwordless SSH connection with the localhost.

Install OpenSSH on Ubuntu


Install the OpenSSH server and client using the following command:

sudo apt install openssh-server openssh-client -y

In the example below, the output confirms that the latest version is already installed.

If you have installed OpenSSH for the first time, use this opportunity to implement these vital SSH security recommendations.

Create Hadoop User


Utilize the adduser command to create a new Hadoop user:

sudo adduser hdoop

The username, in this example, is hdoop. You are free to use any username and password you see fit. Switch to the newly created user and enter the corresponding password:

su - hdoop

The user now needs to be able to SSH to the localhost without being prompted for a
password.

Enable Passwordless SSH for Hadoop User


Generate an SSH key pair and define the location where it is to be stored:

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa

The system proceeds to generate and save the SSH key pair.
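The key-generation step can be rehearsed safely before touching your real keys. This sketch (assuming the openssh-client tools are installed) mirrors the flags used above in a throwaway directory: -P '' sets an empty passphrase and -f names the output file.

```shell
# Generate a throwaway RSA key pair non-interactively, the same way the
# tutorial does for ~/.ssh/id_rsa. The temporary directory keeps this
# rehearsal separate from your real keys.
keydir=$(mktemp -d)
ssh-keygen -t rsa -P '' -f "$keydir/id_rsa" -q
ls "$keydir"
```

On the real system, the private and public key land in ~/.ssh instead.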

Use the cat command to store the public key as authorized_keys in the ssh directory:

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

Set the permissions for your user with the chmod command:

chmod 0600 ~/.ssh/authorized_keys
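If you want to confirm the permission bits took effect, stat can print them back in octal. This self-contained sketch applies the same 0600 mode to a scratch file:

```shell
# Apply the tutorial's 0600 mode to a scratch file and read the octal
# permissions back with stat.
tmp=$(mktemp)
chmod 0600 "$tmp"
perm=$(stat -c %a "$tmp")
echo "$perm"
rm -f "$tmp"
```

On the real file, `stat -c %a ~/.ssh/authorized_keys` should print 600.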

The new user is now able to SSH without needing to enter a password every time. Verify everything is set up correctly by using the hdoop user to SSH to localhost:

ssh localhost

After an initial prompt, the Hadoop user is now able to establish an SSH connection to
the localhost seamlessly.

Download and Install Hadoop on Ubuntu

Visit the official Apache Hadoop project page, and select the version of Hadoop you want to implement.

The steps outlined in this tutorial use the Binary download for Hadoop Version 3.2.1.

Select your preferred option, and you are presented with a mirror link that allows you to download the Hadoop tar package.


Note: It is sound practice to verify Hadoop downloads originating from mirror sites. The instructions for using GPG or SHA-512 for verification are provided on the official download page.
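The SHA-512 workflow mentioned in the note boils down to running sha512sum -c against the published checksum file. This sketch demonstrates the round-trip with a stand-in file, since the real checksum file comes from the Apache download page:

```shell
# Demonstrate the checksum round-trip with a stand-in file. For the real
# download, fetch hadoop-3.2.1.tar.gz.sha512 from the Apache site and run
# sha512sum -c against it in the same directory as the tarball.
workdir=$(mktemp -d)
cd "$workdir"
printf 'demo payload' > demo.tar.gz
sha512sum demo.tar.gz > demo.tar.gz.sha512
result=$(sha512sum -c demo.tar.gz.sha512)
echo "$result"
```

A mismatch makes sha512sum -c report FAILED and exit non-zero, which is your cue to re-download.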

Use the provided mirror link and download the Hadoop package with the wget
command:

wget https://downloads.apache.org/hadoop/common/hadoop-3.2.1/hadoop-3.2.1.tar.gz

Once the download is complete, extract the files to initiate the Hadoop installation:

tar xzf hadoop-3.2.1.tar.gz

The Hadoop binary files are now located within the hadoop-3.2.1 directory.

Single Node Hadoop Deployment (Pseudo-Distributed Mode)
Hadoop excels when deployed in a fully distributed mode on a large cluster of networked servers. However, if you are new to Hadoop and want to explore basic commands or test applications, you can configure Hadoop on a single node.

This setup, also called pseudo-distributed mode, allows each Hadoop daemon to run as a single Java process. A Hadoop environment is configured by editing a set of configuration files:

• bashrc
• hadoop-env.sh
• core-site.xml
• hdfs-site.xml
• mapred-site.xml
• yarn-site.xml

Configure Hadoop Environment Variables (bashrc)

Edit the .bashrc shell configuration file using a text editor of your choice (we will be using nano):

sudo nano .bashrc

Define the Hadoop environment variables by adding the following content to the end of the file:

#Hadoop Related Options


export HADOOP_HOME=/home/hdoop/hadoop-3.2.1
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin

export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"

Once you add the variables, save and exit the .bashrc file.

It is vital to apply the changes to the current running environment by using the following command:

source ~/.bashrc
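To confirm the new variables are active, you can inspect the amended PATH. The sketch below re-creates the two PATH entries from the .bashrc snippet (using the example install location under the hdoop home) and filters them out of the colon-separated list; on your machine, simply running `hadoop version` after sourcing .bashrc is an equally good check:

```shell
# Re-create the PATH additions from .bashrc and verify the Hadoop bin
# directory is now on the lookup path.
export HADOOP_HOME="$HOME/hadoop-3.2.1"
export PATH="$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin"
hadoop_on_path=$(echo "$PATH" | tr ':' '\n' | grep 'hadoop-3.2.1/bin')
echo "$hadoop_on_path"
```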

Edit hadoop-env.sh File

The hadoop-env.sh file serves as a master file to configure YARN, HDFS, MapReduce, and Hadoop-related project settings.

When setting up a single node Hadoop cluster, you need to define which Java implementation is to be utilized. Use the previously created $HADOOP_HOME variable to access the hadoop-env.sh file:

sudo nano $HADOOP_HOME/etc/hadoop/hadoop-env.sh

Uncomment the $JAVA_HOME variable (i.e., remove the # sign) and add the full path to
the OpenJDK installation on your system. If you have installed the same version as

presented in the first part of this tutorial, add the following line:
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64

The path needs to match the location of the Java installation on your system.

If you need help to locate the correct Java path, run the following command in your
terminal window:

which javac

The resulting output provides the path to the Java binary directory.

Use the provided path to find the OpenJDK directory with the following command:

readlink -f /usr/bin/javac

The section of the path just before /bin/javac needs to be assigned to the $JAVA_HOME variable.
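The same path surgery can be done with shell parameter expansion. This sketch strips the trailing /bin/javac from a sample readlink -f result (the path shown earlier for the OpenJDK 8 package); substitute the output from your own system:

```shell
# Strip the /bin/javac suffix from the resolved javac path to obtain the
# value for JAVA_HOME. The sample path assumes the OpenJDK 8 package.
javac_path="/usr/lib/jvm/java-8-openjdk-amd64/bin/javac"
java_home=${javac_path%/bin/javac}
echo "$java_home"   # prints /usr/lib/jvm/java-8-openjdk-amd64
```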

Edit core-site.xml File

The core-site.xml file defines HDFS and Hadoop core properties.

To set up Hadoop in a pseudo-distributed mode, you need to specify the URL for your NameNode, and the temporary directory Hadoop uses for the map and reduce process.

Open the core-site.xml file in a text editor:

sudo nano $HADOOP_HOME/etc/hadoop/core-site.xml

Add the following configuration to override the default values for the temporary directory and add your HDFS URL to replace the default local file system setting:

<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hdoop/tmpdata</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://127.0.0.1:9000</value>
</property>
</configuration>

This example uses values specific to the local system. You should use values that match your system's requirements. The data needs to be consistent throughout the configuration process.


Do not forget to create a Linux directory in the location you specified for your temporary data.

Edit hdfs-site.xml File

The properties in the hdfs-site.xml file govern the location for storing node metadata, the fsimage file, and the edit log file. Configure the file by defining the NameNode and DataNode storage directories.

Additionally, the default dfs.replication value of 3 needs to be changed to 1 to match the single node setup.

Use the following command to open the hdfs-site.xml file for editing:

sudo nano $HADOOP_HOME/etc/hadoop/hdfs-site.xml

Add the following configuration to the file and, if needed, adjust the NameNode and DataNode directories to your custom locations:

<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>/home/hdoop/dfsdata/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/home/hdoop/dfsdata/datanode</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>

If necessary, create the specific directories you defined for the NameNode and DataNode storage values.
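Creating the tmpdata directory from core-site.xml and both dfsdata directories can be done in one mkdir -p call. In this sketch, the base path defaults to the current user's home, matching the /home/hdoop example paths used above; the HADOOP_DATA_BASE variable is purely a hypothetical override, not part of Hadoop itself:

```shell
# Create the directories referenced in core-site.xml and hdfs-site.xml
# before formatting the NameNode. HADOOP_DATA_BASE is a hypothetical
# knob; unset, the paths land under $HOME as in the tutorial.
base="${HADOOP_DATA_BASE:-$HOME}"
mkdir -p "$base/tmpdata" "$base/dfsdata/namenode" "$base/dfsdata/datanode"
ls "$base/dfsdata"
```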

Edit mapred-site.xml File

Use the following command to access the mapred-site.xml file and define MapReduce values:

sudo nano $HADOOP_HOME/etc/hadoop/mapred-site.xml

Add the following configuration to change the default MapReduce framework name value to yarn:

<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
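To read a value back out of one of these *-site.xml files without opening an editor, a grep-and-sed one-liner is enough for the flat layout used here. The sketch runs against a stand-in copy of the file so it is self-contained; point the path at $HADOOP_HOME/etc/hadoop/mapred-site.xml on a real install:

```shell
# Extract the <value> that follows a given <name> in a Hadoop-style
# flat XML file. A stand-in file is created so the sketch is self-contained.
workdir=$(mktemp -d)
cat > "$workdir/mapred-site.xml" <<'EOF'
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
EOF
framework=$(grep -A1 '<name>mapreduce.framework.name</name>' "$workdir/mapred-site.xml" \
  | sed -n 's:.*<value>\(.*\)</value>.*:\1:p')
echo "$framework"   # prints yarn
```

For anything beyond this simple flat layout, a real XML tool such as xmllint is the safer choice.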

Edit yarn-site.xml File

The yarn-site.xml file is used to define settings relevant to YARN. It contains configurations for the Node Manager, Resource Manager, Containers, and Application Master.

Open the yarn-site.xml file in a text editor:

sudo nano $HADOOP_HOME/etc/hadoop/yarn-site.xml

Append the following configuration to the file:

<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>127.0.0.1</value>
</property>
<property>
<name>yarn.acl.enable</name>
<value>0</value>
</property>
<property>
<name>yarn.nodemanager.env-whitelist</name>
<value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
</property>
</configuration>

Format HDFS NameNode

It is important to format the NameNode before starting Hadoop services for the first time:

hdfs namenode -format

The shutdown notification signifies the end of the NameNode format process.


Start Hadoop Cluster


Navigate to the hadoop-3.2.1/sbin directory and execute the following commands to
start the NameNode and DataNode:

./start-dfs.sh

The system takes a few moments to initiate the necessary nodes.

Once the namenode, datanodes, and secondary namenode are up and running, start
the YARN resource and nodemanagers by typing:

./start-yarn.sh

As with the previous command, the output informs you that the processes are starting.

Type this simple command to check if all the daemons are active and running as Java
processes:

jps

If everything is working as intended, the resulting list of running Java processes contains all the HDFS and YARN daemons.
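A quick way to confirm nothing is missing from the jps list is to check for each daemon name programmatically. This sketch filters a sample of jps output; on a live cluster, replace the sample string with the real output of `jps` (the PID numbers below are placeholders for illustration):

```shell
# Count the expected Hadoop daemons in (sample) jps output. The PIDs
# are placeholders; capture real output with: sample=$(jps)
sample="12345 NameNode
12346 DataNode
12347 SecondaryNameNode
12348 ResourceManager
12349 NodeManager
12350 Jps"
running=0
for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
  if echo "$sample" | grep -q " $d$"; then
    echo "$d running"
    running=$((running + 1))
  fi
done
echo "$running of 5 daemons found"
```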

Access Hadoop UI from Browser


Use your preferred browser and navigate to your localhost URL or IP. The default port
number 9870 gives you access to the Hadoop NameNode UI:

http://localhost:9870

The NameNode user interface provides a comprehensive overview of the entire cluster.

The default port 9864 is used to access individual DataNodes directly from your
browser:

http://localhost:9864


The YARN Resource Manager is accessible on port 8088:

http://localhost:8088

The Resource Manager is an invaluable tool that allows you to monitor all running processes in your Hadoop cluster.

Vladimir Kaplarevic is a resident Tech Writer at phoenixNAP. He has more than 7 years of experience in implementing e-commerce and online payment solutions with various global IT services providers. His articles aim to instill a passion for innovative technologies in others by providing practical advice and using an engaging writing style.

Conclusion

You have successfully installed Hadoop on Ubuntu and deployed it in a pseudo-distributed mode. A single node Hadoop deployment is an excellent starting point to explore basic HDFS commands and acquire the experience you need to design a fully distributed Hadoop cluster.
