
UNIT – 3 SHELL BASICS

MEANING AND PURPOSE OF SHELL


Shell Definition: The shell is a command interpreter in an operating system such as GNU/Linux
or UNIX. It is a program that runs other programs: it reads a command, runs the corresponding
program, and displays the result on the screen, the common output device, when the program has
finished. That is why it is known as a "command interpreter". The shell is not just a command
interpreter, however; it is also a programming language with the complete constructs of
a programming language, such as functions, variables, loops, conditional execution, and many
others.

Purpose of the Shell: There are three main uses for the shell:
1. Interactive use
2. Customization of your UNIX session
3. Programming
1. Interactive Use: When the shell is used interactively, the system waits for you to type a
command at the UNIX prompt. Your commands can include special symbols that let you
abbreviate filenames or redirect input and output.

2. Customization of Your UNIX Session: A UNIX shell defines variables to control the
behavior of your UNIX session. Setting these variables will tell the system, for example, which
directory to use as your home directory, or the file in which to store your mail. Some variables
are preset by the system; you can define others in start-up files that are read when you log in.
Start-up files can also contain UNIX commands or special shell commands. These will be
executed every time you log in.

3. Programming: UNIX shells provide a set of special (or built-in) commands that can be used
to create programs called shell scripts. In fact, many built-in commands can be used interactively
like UNIX commands, and UNIX commands are frequently used in shell scripts. Scripts are
useful for executing a series of individual commands.

TYPES OF SHELL
1. The Bourne Shell (sh): Developed at AT&T Bell Labs by Steve Bourne, the Bourne shell is
regarded as the first UNIX shell ever. It is denoted as sh. It gained popularity due to its compact
nature and high speed of operation.

This is what made it the default shell for Solaris OS. It is also used as the default shell for all
Solaris system administration scripts.
However, the Bourne shell has some major drawbacks.
 It doesn't have built-in functionality to handle logical and arithmetic operations.
 Also, unlike most other types of shells in Linux, the Bourne shell cannot recall
previously used commands.
 It also lacks comprehensive features for proper interactive use.
2. The GNU Bourne-Again Shell (bash): More popularly known as the Bash shell, the GNU
Bourne-Again shell was designed to be compatible with the Bourne shell. It incorporates useful
features from different types of shells in Linux such as Korn shell and C shell.

It allows us to automatically recall previously used commands and edit them with the help of the
arrow keys, unlike the Bourne shell. The complete path-name for the GNU Bourne-Again shell is
/bin/bash. By default, it uses the prompt bash-VersionNumber# for the root user and
bash-VersionNumber$ for non-root users.

3. The C Shell (csh): The C shell was created at the University of California by Bill Joy. It is
denoted as csh. It was developed to include useful programming features like in-built support for
arithmetic operations and syntax similar to the C programming language.

Further, it incorporated command history which was missing in different types of shells in Linux
like the Bourne shell. Another prominent feature of a C shell is “aliases”. The complete path-
name for the C shell is /bin/csh. By default, it uses the prompt hostname# for the root user
and hostname% for the non-root users.

4. The Korn Shell (ksh): The Korn shell was developed at AT&T Bell Labs by David Korn, to
improve the Bourne shell. It is denoted as ksh. The Korn shell is essentially a superset of the
Bourne shell.

The Korn shell runs scripts made for the Bourne shell, while offering string, array and function
manipulation similar to the C programming language. It also incorporates interactive features
that originated in the C shell, such as aliases, command history and job control. Further, it is
faster than most different types of shells in Linux, including the C shell.

5. The Z Shell (zsh): The Z shell or zsh is an extension of the Bourne shell (sh) with a large
number of improvements and customization options. If you want a modern shell that has all of
these features and much more, the zsh shell is what you're looking for.

STANDARD INPUT AND STANDARD OUTPUT


When a command begins running, it usually expects that three files are already open: standard
input, standard output, and standard error (sometimes called error output or diagnostic output). A
number, called a file descriptor, is associated with each of these files, as follows:
File descriptor 0 Standard input
File descriptor 1 Standard output
File descriptor 2 Standard error (diagnostic) output
When you enter a command, if no file name is given, your keyboard is the standard input,
sometimes denoted as stdin. When a command finishes, the results are displayed on your screen.
Your screen is the standard output, sometimes denoted as stdout. By default, commands take
input from the standard input and send the results to standard output. Standard error, sometimes
denoted as stderr, is where error messages go. By default, this is your screen.
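As an illustration of the three file descriptors, consider the hedged sketch below, which uses the redirection operators explained in the next section; the command name (./myscript) and the file names (data.txt, out.txt, err.txt) are hypothetical placeholders for any program that reads stdin and writes to stdout and stderr.
$ ./myscript < data.txt > out.txt 2> err.txt
Here descriptor 0 (stdin) is read from data.txt instead of the keyboard, descriptor 1 (stdout) goes into out.txt, and descriptor 2 (stderr) goes into err.txt instead of the screen.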
REDIRECTION
Redirection can be defined as changing where a command reads its input from and where it
sends its output. You can redirect the input and output of a command. For redirection, meta
characters are used. Redirection can be into a file (the shell metacharacters are the angle
brackets '<' and '>') or into a program (the shell metacharacter is the pipe symbol '|').

Standard Streams In I/O Redirection: The bash shell has three standard streams in I/O
redirection:
 standard input (stdin) : The stdin stream is numbered 0. The bash shell takes
input from stdin. By default, the keyboard is used as input.
 standard output (stdout) : The stdout stream is numbered 1. The bash shell
sends output to stdout. Output goes to the display.
 standard error (stderr) : The stderr stream is numbered 2. The bash shell
sends error messages to stderr. Error messages go to the display.

Redirection into a File: Each stream uses redirection operators. A single angle bracket '>' or a
double angle bracket '>>' can be used to redirect standard output. If the target file doesn't exist, a
new file with that name will be created. Commands with a single angle bracket '>' overwrite
existing file content.
 > : standard output
 < : standard input
 2> : standard error
Note: Writing '1>' is the same as '>', and '0<' is the same as '<', but for stderr you have to write '2>'.
Syntax: cat > <fileName>

Append: Commands with a double angle bracket '>>' do not overwrite the existing file content;
they append to it.
 >> : standard output
 << : standard input (here document)
 2>> : standard error
Syntax: cat >> <fileName>
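For example, assuming a hypothetical file notes.txt, the commands below contrast overwriting, appending, and redirecting errors:
$ echo "first line" > notes.txt (creates notes.txt, or overwrites it if it already exists)
$ echo "second line" >> notes.txt (appends to notes.txt, keeping the first line)
$ cat missing.txt 2> errors.txt (the error message about the missing file goes into errors.txt, not to the screen)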

PIPES
A pipe is used to combine two or more commands: the output of one command acts
as input to the next command, and that command's output may act as input to the next
command, and so on. It can also be visualized as a temporary connection between two or more
commands/programs/processes. The command line programs that do the further processing
are referred to as filters.

This direct connection between commands/ programs/ processes allows them to operate
simultaneously and permits data to be transferred between them continuously rather than
having to pass it through temporary text files or through the display screen.
Pipes are unidirectional, i.e., data flows from left to right through the pipeline.
Syntax:
command_1 | command_2 | command_3 | .... | command_N
Example:
1. List all files and directories and give the output as input to the more command.
Syntax: $ ls -l | more

2. Use sort and uniq command to sort a file and print unique values.
Syntax: $ sort record.txt | uniq
This will sort the given file and print the unique values only.

3. Use head and tail to print lines in a particular range in a file.


Syntax: $ cat sample2.txt | head -7 | tail -5
This command selects the first 7 lines with the head -7 command; these become the input to the tail -5
command, which finally prints the last 5 of those 7 lines.

FILTERS
Linux filter commands accept input data from stdin (standard input) and produce output
on stdout (standard output). They transform plain-text data in a meaningful way and can be used
with pipes to perform more complex operations.

These filters are very small programs that are designed for a specific function and can be used
as building blocks. Some of the most commonly used filters are explained below:
1. Linux Cat Filters: When the cat command is used inside pipes, it does nothing except move
stdin to stdout.
Syntax:
cat <fileName> | cat or tac | cat or tac |. . .

2. Linux Cut Filters: The Linux cut command is useful for selecting a specific column of a file. It is
used to cut a specific section by byte position, character, or field and write it to standard
output.

To cut a specific section, it is necessary to specify the delimiter. A delimiter decides how the
sections are separated in a text file. Delimiters can be a space (' '), a hyphen (-), a slash (/), or
anything else; the delimiter is given with the '-d' option. After the '-f' option, the column number is mentioned.
Syntax:
cut OPTION... [FILE]...

3. Grep Command: The 'grep' command stands for "global regular expression print". The grep
command filters the content of a file, which makes searching easy.
(i) grep with pipe: The 'grep' command is generally used with a pipe (|).

Syntax:
command | grep <searchWord>
(ii) grep without pipe: It can be used without pipe also.
Syntax:
grep <searchWord> <file name>
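For example, assuming a hypothetical log file app.log, the same search can be written with or without a pipe:
$ cat app.log | grep error (grep with pipe)
$ grep error app.log (grep without pipe)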
4. Linux comm: The 'comm' command compares two sorted files or streams. By default, 'comm'
always displays three columns: the first column shows lines found only in the first file, the
second column shows lines found only in the second file, and the third column shows lines
common to both files. Both files have to be in sorted order for the 'comm' command to work correctly.
Syntax:
comm <file1> <file2>
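For example, assuming two hypothetical sorted files list1.txt and list2.txt:
$ comm list1.txt list2.txt
Lines found only in list1.txt appear in the first column, lines found only in list2.txt in the second column, and lines present in both files in the third column.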

5. Sed Command: The Linux 'sed' command stands for stream editor. It is used to edit streams (files)
using regular expressions. This editing is not permanent: the edited result only appears in the
display, while the actual file content remains the same.
Syntax:
sed [OPTION]... {script-only-if-no-other-script} [input-file]...
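For example, assuming a hypothetical file sample.txt, the substitution script 's/old/new/g' replaces every occurrence of the word old with new in the displayed output, while the file itself stays unchanged:
$ sed 's/old/new/g' sample.txt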

6. Linux tee Command: The Linux tee command is quite similar to the 'cat' command, with one
difference: it copies stdin to stdout and also writes it into a file. It is one of the most used
commands with other commands through piping. It allows us to write whatever is provided on
standard input to standard output and to a file at the same time.
Syntax:
tee <options> <file name>
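For example, the directory listing below is shown on the screen and, at the same time, written to a file (listing.txt is only an illustrative name):
$ ls -l | tee listing.txt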

7. Linux tr: The command 'tr' stands for 'translate'. It is used to translate characters, for example
from lowercase to uppercase and vice versa, or newlines into spaces.
Syntax:
command | tr <'old'> <'new'>
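For example, to translate the contents of a hypothetical file sample.txt from lowercase to uppercase:
$ cat sample.txt | tr 'a-z' 'A-Z'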

8. Linux uniq Command: The Linux uniq command is used to remove repeated lines from a
file. It can also be used to display a count of occurrences, show only repeated lines, ignore characters,
and compare specific fields. It is one of the most frequently used commands in the Linux system.
It is often used with the sort command because uniq only compares adjacent lines. It discards the
repeated identical lines and writes the output.
Syntax:
uniq [OPTION]... [INPUT [OUTPUT]]
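For example, assuming a hypothetical file record.txt that contains repeated lines, sorting first makes the duplicates adjacent so that uniq can remove or count them:
$ sort record.txt | uniq (print each distinct line once)
$ sort record.txt | uniq -c (print each distinct line with a count of its occurrences)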

9. Linux wc Command: The Linux wc command helps in counting the lines, words, and characters
in a file and displays these counts. Mostly, it is used with pipes for counting operations.
Syntax:
wc [OPTION]... [FILE]...
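For example, to count how many entries are in the current directory, the output of ls can be piped into wc:
$ ls | wc -l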
10. Linux od: The term 'od' stands for octal dump. It displays the content of a file in different
human-readable formats such as octal, hexadecimal, and ASCII characters.
Syntax:
od -b <fileName> (display files in octal format)
od -t x1 <fileName> (display files in hexadecimal bytes format)
od -c <fileName> (display files in ASCII (backslashed) character format)
11. Linux sort: The 'sort' command sorts the file content in alphabetical order.
Syntax:
sort <fileName>
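For example, assuming hypothetical files record.txt and numbers.txt:
$ sort record.txt (sort lines alphabetically)
$ sort -r record.txt (sort lines in reverse order)
$ sort -n numbers.txt (sort lines numerically)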

12. Linux gzip: Gzip (GNU zip) is a compression tool, which is used to reduce the file size. By
default the original file is replaced by the compressed file ending with the extension .gz. To
decompress a file you can use the gunzip command, and your original file will be restored.
Syntax:
gzip <file1> <file2> <file3>. . .
gunzip <file1> <file2> <file3>. . .
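For example, assuming a hypothetical file archive.log:
$ gzip archive.log (replaces archive.log with the compressed archive.log.gz)
$ gunzip archive.log.gz (restores the original archive.log)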

COMMAND SEPARATION & GROUPING


Split Command: The split command in Linux is used to split large files into smaller files. It
splits a file into 1000 lines per file (by default) and allows users to change the number
of lines as per requirement.

The names of the output files are PREFIXaa, PREFIXab, PREFIXac, and so on. By default the
PREFIX of the file names is x and the default size of each split file is 1000 lines per file; both
parameters can be changed with ease. It is generally used with log and archive files, as
they are very large and have a lot of lines, so the split command is used to break them into
small files for analysis.
Syntax:
split [options] name_of_file prefix_for_new_files

1. Split a file into smaller files. Assume a file named index.txt. Use the split
command below to break it into pieces.
Syntax:
split index.txt
2. Split file based on number of lines.
Syntax:
split -l 4 index.txt split_file
3. Split command with verbose option. We can also run the split command in verbose mode by
using '--verbose'. It will give a diagnostic message each time a new split file is created.
Syntax:
split index.txt -l 4 --verbose
4. Split by file size using the '-b' option.
Syntax:
split -b 16 index.txt index
5. Change the suffix length. By default, the suffix length is 2. We can also change it using the '-a'
option.
Syntax:
split -l 4 -a 4 index.txt
6. Split files created with a numeric suffix. In general, the output has the format x**, where ** are
alphabetic characters. We can change the suffix of the split files to numeric by using the '-d' option.
Syntax:
split -l 4 -d index.txt
7. Create n chunk output files. If we want to split a file into three chunk output files, we use
the '-n' option with the split command, which limits the number of split output files.
Syntax:
split -n 3 index.txt
8. Split a file with a customized prefix. With this command, we can create split output files with
a customized prefix. For example, if we want to create split output files whose names start with
split_index_, execute the following command.
Syntax:
split -l 4 index.txt split_index_
9. Avoid zero-sized split files. There are situations when we split a small file into a large
number of chunk files, and this may lead to zero-size split output files. They do not add any
value, so to avoid them we use the '-e' option.
Syntax:
split -l 4 -e index.txt
10. Split the file into two files of equal length. To split a file equally into two files, we use the
'-n' option. By specifying '-n 2' the file is split equally into two files.
Syntax:
split -n 2 index.txt
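As a combined illustration, assume index.txt contains 12 lines. The commands below split it into 4-line pieces and then list the resulting files; the names follow the default prefix x and two-letter suffix described above:
$ split -l 4 index.txt
$ ls x*
xaa xab xac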

Group Command: In Linux, there can be multiple users (those who use/operate the system),
and groups are collections of users. Groups make it easy to manage users with
the same security and access privileges. A user can be part of several different groups.
Important Points:
 The groups command prints the names of the primary and any supplementary groups for
each given username, or for the current process if no names are given.
 If more than one name is given, the name of each user is printed before the list of that
user's groups, and the username is separated from the group list by a colon.
Syntax:
groups [username]...

Example 1: Provided with a user name


$groups demon

Example 2: If no username is passed, this will display the group membership for the current
user
$groups

DIRECTORY STACK MANIPULATION


The directory stack is a list of directories you have previously navigated to. The contents of the
directory stack can be seen using the dirs command. Directories are added to the stack when
changing to a directory using the pushd command and removed with the popd command.
The current working directory is always on the top of the directory stack. The current working
directory is the directory (folder) in which the user is currently working. Each time you
interact with the command line, you are working within a directory. The following three commands
are used in this example:
 dirs: Display the directory stack
 pushd: Push directory into the stack
 popd: Pop directory from the stack and cd to it
Let us first create some temporary directories and push them to the directory stack as shown
below.

# mkdir /tmp/dir1
# mkdir /tmp/dir2
# mkdir /tmp/dir3
# mkdir /tmp/dir4

# cd /tmp/dir1
# pushd .

# cd /tmp/dir2
# pushd .

# cd /tmp/dir3
# pushd .

# cd /tmp/dir4
# pushd .

# dirs
/tmp/dir4 /tmp/dir4 /tmp/dir3 /tmp/dir2 /tmp/dir1

At this stage, the directory stack contains the following directories:

/tmp/dir4
/tmp/dir3
/tmp/dir2
/tmp/dir1
The last directory that was pushed to the stack will be at the top. When you perform popd, it will
cd to the top directory entry in the stack and remove it from the stack. As shown above, the last
directory that was pushed into the stack is /tmp/dir4. So, when we do a popd, it will cd to the
/tmp/dir4 and remove it from the directory stack as shown below.

# popd
# pwd
/tmp/dir4
[Note: After the above popd, directory Stack Contains:
/tmp/dir3
/tmp/dir2
/tmp/dir1]

# popd
# pwd
/tmp/dir3

[Note: After the above popd, directory Stack Contains:


/tmp/dir2
/tmp/dir1]

# popd
# pwd
/tmp/dir2

[Note: After the above popd, directory Stack Contains: /tmp/dir1]

# popd
# pwd
/tmp/dir1

[Note: After the above popd, directory Stack is empty!]

# popd
-bash: popd: directory stack empty

PROCESSES
When a program/command is executed, the system provides a special instance for the
process. This instance consists of all the services/resources that may be utilized by the process
under execution.
 Whenever a command is issued in Unix/Linux, it creates/starts a new process. For
example, when pwd, which is used to list the current directory location the user
is in, is issued, a process starts.
 Unix/Linux keeps an account of processes through an ID number (typically 5 digits);
this number is called the process ID or PID. Each process in the system has a unique PID.
 Used-up PIDs can be used again for a newer process once all the possible
numbers have been used.
 At any point in time, no two processes with the same PID exist in the system, because it
is the PID that Unix uses to track each process.
A process can be run in two ways:
Method 1: Foreground Process: Every process, when started, runs in the foreground by default,
receives input from the keyboard, and sends output to the screen. For example, when issuing the
pwd command:
$ pwd

Method 2: Background Process: It runs in the background without keyboard input and waits
until keyboard input is required. Thus, other processes can run in parallel with the process
running in the background, since they do not have to wait for the previous process to be
completed.
$ pwd &
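As a minimal sketch of working with a background process (assuming a long-running command such as sleep; the job number and PID shown are illustrative), the job is started with '&', listed with the jobs command, and brought back to the foreground with fg:
$ sleep 60 &
[1] 2345
$ jobs
[1]+ Running sleep 60 &
$ fg %1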

Tracking ongoing processes


ps (Process status) can be used to see/list all the running processes.
$ ps

PID TTY TIME CMD


19 pts/1 00:00:00 sh
24 pts/1 00:00:00 ps

For more information -f (full) can be used along with ps


$ ps -f
UID PID PPID C STIME TTY TIME CMD
52471 19 1 0 07:20 pts/1 00:00:00 sh
52471 25 19 0 08:04 pts/1 00:00:00 ps -f

For information about a single process, ps is used along with the process ID


$ ps 19
PID TTY TIME CMD
19 pts/1 00:00:00 sh

The fields displayed by ps are described as follows:


 UID: User ID that this process belongs to (the person running it)
 PID: Process ID
 PPID: Parent process ID (the ID of the process that started it)
 C: CPU utilization of process
 STIME: Process start time
 TTY: Terminal type associated with the process
 TIME: CPU time taken by the process
 CMD: The command that started this process
