
UNIX AND SHELL PROGRAMMING

BCA 3rd Year

Debashis Chowdhury
Netaji Subhash Engineering College

Topics

1. Introduction to UNIX
   1.1 Introduction to UNIX: Overview and History
   1.2 History of UNIX
   1.3 UNIX Architecture
   1.4 UNIX System Calls and POSIX Standards
   1.5 Basic UNIX Commands (Internal and External)
   1.6 UNIX Utilities: Basic System Interaction Commands
   1.7 Review Questions and Answers

2. UNIX File System
   2.1 UNIX File System: Structure and File Types
   2.2 File and Directory Management Commands
   2.3 Review Questions and Answers

3. Ordinary File Handling
   3.1 Ordinary File Handling
   3.2 Review Questions and Answers

4. File Attributes
   4.1 Basic File Attributes
   4.2 Hard and Soft Links
   4.3 Review Questions and Answers

5. Introduction to Shell
   5.1 Introduction to Shell and Shell Types
   5.2 Pattern Matching and Regular Expressions
   5.3 Input/Output Redirection and Pipes
   5.4 Review Questions and Answers

6. Process
   6.1 Process Management
   6.2 Review Questions and Answers

7. Customization
   7.1 Customizing the Environment
   7.2 Review Questions and Answers

8. Filters
   8.1 Introduction to Filters
   8.2 Review Questions and Answers

9. Shell Script
   9.1 Introduction to Shell Scripting
   9.3 Review Questions and Answers

10. System Administration
    10.1 Introduction to System Administration
    10.2 Review Questions and Answers


Chapter 01
Introduction to UNIX


1.1 Introduction to UNIX: Overview and History

Introduction
UNIX is a multi-user, multitasking operating system designed for efficiency, stability, and portability.
Developed in 1969 at AT&T Bell Labs by Ken Thompson, Dennis Ritchie, and others, UNIX has
become a foundation for many modern operating systems, including Linux, macOS, and BSD. It is
widely used in servers, workstations, embedded systems, and cloud computing due to its robust
security and performance.

Key Features of UNIX


1. Multi-User: Multiple users can work on the system simultaneously.
2. Multitasking: Supports multiple processes running concurrently.
3. Portability: Can run on different hardware architectures with minimal modification.
4. Security: Implements strong user authentication and file permission controls.
5. Hierarchical File System: Organizes files in a tree-like directory structure.
6. Shell and Command Line Interface (CLI): Provides a powerful text-based interface for users to
execute commands.
7. Inter-Process Communication (IPC): Supports communication between processes through
pipes, signals, and shared memory.
8. Networking Capabilities: Designed with built-in support for networking protocols.

Detailed Explanation of Key Features of UNIX


UNIX is known for its powerful architecture and design, making it a widely used operating system in
enterprise, academic, and personal computing environments. Below is a detailed explanation of its key
features:

1. Multi-User System
UNIX is a multi-user operating system, meaning multiple users can access the system and use
resources simultaneously without interfering with each other. It manages user access through
authentication mechanisms and file permissions.
How It Works:
• Each user has a unique user ID (UID) and belongs to a group with a group ID (GID).
• UNIX provides access control mechanisms to ensure that users only access their permitted
files and processes.
• Remote login (SSH, Telnet) allows users to connect to a UNIX system over a network.
Multiple users can log in to a UNIX server via SSH and run their programs independently.
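The user and group identities described above can be inspected directly from the prompt. A small illustrative session (assuming a POSIX shell and standard utilities; output values vary by system):

```shell
# Show the numeric user ID (UID) and primary group of the current user.
id -u        # numeric UID of the logged-in user
id -gn       # name of the user's primary group

# List users currently logged in to this machine (may print nothing
# in a container or other non-interactive environment).
who
```

Because every process carries the UID of the user who started it, the kernel can apply the file-permission checks described here on each access.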

2. Multitasking
UNIX allows users to run multiple processes simultaneously. It achieves this through time-sharing and
process scheduling.
How It Works:
• The kernel switches between tasks using a scheduling algorithm.
• Each process is assigned a Process ID (PID) and managed by the operating system.
• Users can run tasks in the foreground or background.
A user can compile a program while simultaneously browsing files and running a database server.
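Foreground and background execution can be demonstrated in a few lines; this is a minimal sketch using `sleep` as a stand-in for any long-running job:

```shell
# Start a long-running command in the background; the shell returns
# immediately, and $! holds the PID of the background process.
sleep 1 &
bgpid=$!
echo "background PID: $bgpid"

# The shell is free for other work while the job runs; wait blocks
# until the background job finishes and collects its exit status.
wait "$bgpid"
echo "background job finished"
```

Interactively, `jobs`, `fg`, and `bg` move such jobs between the foreground and background.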

3. Portability
UNIX is a portable operating system, meaning it can run on different hardware architectures with
minimal modification. This is because UNIX is primarily written in the C programming language, which
allows easy adaptation to various platforms.
How It Works:
• UNIX source code can be compiled on different machines with minimal changes.
• The use of system calls and a hardware abstraction layer makes UNIX adaptable.
UNIX can run on mainframes, servers, desktops, and embedded systems without major changes.

4. Hierarchical File System


UNIX organizes files in a hierarchical directory structure, making file management efficient and
scalable.
How It Works:
• The root (/) directory is the topmost directory.
• Files and directories are arranged in a tree-like structure.
• Supports absolute and relative paths.
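Absolute and relative paths can be compared with a short session; the directory names below are illustrative:

```shell
# Build a small tree under a temporary directory.
tmp=$(mktemp -d)
mkdir -p "$tmp/projects/reports"

# Absolute path: names every directory from the root (/) down.
cd "$tmp/projects/reports"
pwd

# Relative path: resolved against the current directory.
cd ../..        # up two levels, back to the top of the tree
pwd

cd / && rm -r "$tmp"   # clean up
```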

5. Shell and Command-Line Interface (CLI)


UNIX provides a command-line interface (CLI) where users interact with the system using commands.
The CLI is controlled by a shell, which interprets user commands.
How It Works:
• The shell is a command interpreter (e.g., Bash, KornShell, C Shell).
• Users can write shell scripts to automate tasks.

6. Security and User Permissions
UNIX has a strong security model with user authentication, file permissions, and encryption to
protect system integrity.
How It Works:
• Each file has three levels of permissions: Owner (u), Group (g), Others (o)
• Permission types: Read (r), Write (w), Execute (x)
• File permissions are managed using the chmod command.
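The owner/group/others permission model can be exercised with `chmod`; a minimal sketch (the filename is a placeholder):

```shell
# Create a file and restrict it: owner read/write (6),
# group read (4), others nothing (0).
touch secret.txt
chmod 640 secret.txt

# The first column of ls -l shows the file type and the three
# permission triplets: u (owner), g (group), o (others).
ls -l secret.txt | cut -c1-10     # -rw-r-----

rm secret.txt
```

Here `640` is octal shorthand: read = 4, write = 2, execute = 1, summed per triplet.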

7. Process Management
Every task in UNIX runs as a process, which is managed by the system through process scheduling.
How It Works:
• Each process has a unique Process ID (PID).
• The ps command shows running processes.
• kill can terminate a process.
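The PID, `ps`, and `kill` interplay described above can be sketched as follows, again using `sleep` as a harmless stand-in process:

```shell
# Start a background process and note its PID.
sleep 30 &
pid=$!

# ps -p reports on that specific PID while it is alive.
ps -p "$pid"

# kill sends SIGTERM by default; kill -0 sends no signal and merely
# tests whether the process still exists.
kill "$pid"
wait "$pid" 2>/dev/null || true    # reap it; ignore the "terminated" status
! kill -0 "$pid" 2>/dev/null && echo "process $pid is gone"
```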

8. Networking and Communication


UNIX was designed with built-in networking capabilities, supporting protocols like TCP/IP for remote
communication.
How It Works:
• UNIX supports remote access, file transfer, and email services.
• Users can connect using SSH, FTP, and Telnet.

9. Pipelining and Redirection


UNIX allows users to redirect input/output and combine multiple commands using pipes (|).
How It Works:
• Redirection (>, <, >>) is used to send output to files.
• Pipes (|) pass the output of one command as input to another.
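A short session showing both mechanisms together (the filenames are placeholders):

```shell
# > writes (truncating), >> appends, < reads input from a file.
printf 'apple\nbanana\napple\n' > fruits.txt
sort < fruits.txt >> sorted.txt

# A pipe connects one command's stdout to the next command's stdin.
sort fruits.txt | uniq -c        # counts each distinct line
grep -c apple fruits.txt         # prints 2

rm fruits.txt sorted.txt
```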

10. Device Independence


UNIX treats hardware devices as files, allowing programs to interact with them in a uniform manner.
How It Works:
• Devices are represented as files in /dev/ (e.g., /dev/sda for hard drives).
• Allows reading and writing to devices like regular files.
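Because devices present the ordinary file interface, the usual file tools work on them unchanged. Two standard pseudo-devices make this easy to see:

```shell
# /dev/null discards everything written to it.
echo "discarded" > /dev/null

# /dev/zero is an endless stream of zero bytes; read it like a file.
head -c 8 /dev/zero | wc -c    # counts 8 bytes

# Device nodes appear in directory listings like any other file.
ls -l /dev/null
```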


1.2 History of UNIX

UNIX’s key features, such as multi-user support, multitasking, security, portability, and process
management, make it a powerful and versatile operating system. Its efficient design has influenced many
modern operating systems like Linux and macOS, ensuring its continued relevance in computing.

History of UNIX
The history of UNIX dates back to the late 1960s when researchers at AT&T’s Bell Labs started working
on an operating system that would be simple, portable, and efficient. Initially, UNIX emerged as a
response to the failure of the Multics project, which was a joint venture between MIT, Bell Labs, and
General Electric. In 1969, Ken Thompson and Dennis Ritchie at Bell Labs developed the first version
of UNIX on a PDP-7 minicomputer.
In 1973, UNIX was rewritten in the C programming language, making it portable and easier to adapt to
different hardware systems. This innovation led to UNIX being widely adopted in universities, research
institutions, and later, commercial industries. The University of California, Berkeley (UCB) developed
Berkeley Software Distribution (BSD UNIX), introducing networking capabilities and system
enhancements that became fundamental to the modern internet.
During the 1980s, UNIX became commercialized, leading to two major versions: System V (by AT&T)
and BSD UNIX. Several technology companies, including IBM, HP, and Sun Microsystems, developed
their own UNIX versions, leading to fragmentation known as the "UNIX Wars." In response, the POSIX
standard (1988) was introduced to unify UNIX implementations.
The 1990s saw the emergence of Linux (1991) by Linus Torvalds, an open-source UNIX-like operating
system that became a strong alternative to proprietary UNIX. Additionally, BSD UNIX evolved into
FreeBSD, OpenBSD, and NetBSD. By the 2000s, UNIX-based systems like macOS and Linux
distributions dominated enterprise and cloud computing.
Today, UNIX remains influential in server environments, cloud computing, and embedded systems.
Proprietary UNIX versions like IBM AIX, HP-UX, and Oracle Solaris are still used in enterprise
applications, while Linux and BSD variants continue to power modern computing infrastructures.

Highlighted Key Points


• 1960s: UNIX originated as a response to the failure of the Multics project.
• 1969: Ken Thompson and Dennis Ritchie developed the first UNIX version on PDP-7 at Bell Labs.
• 1973: UNIX was rewritten in C, making it portable and facilitating widespread adoption.
• 1975: BSD UNIX emerged from the University of California, Berkeley, adding advanced
features.
• 1983: AT&T introduced UNIX System V, leading to fragmentation with BSD UNIX.
• 1988: POSIX standard was developed to unify UNIX systems.
• 1991: Linux was created as an open-source UNIX alternative by Linus Torvalds.
• 1990s - 2000s: UNIX systems like Solaris, AIX, HP-UX, and macOS gained popularity.

• Present: UNIX principles continue to influence modern OS development, cloud computing, and
mobile devices.

History of UNIX - Summary Table

UNIX has had a profound impact on the computing world, shaping modern operating systems,
networking, and open-source software development. From its origins at Bell Labs to the emergence of
Linux and BSD, UNIX continues to be a foundation for enterprise servers, cloud platforms, and
embedded systems.

Linux Operating System: A Detailed Description


Linux is a free, open-source, Unix-like operating system based on the Linux kernel, which was first
developed by Linus Torvalds in 1991. It is widely used in servers, desktops, embedded systems,
mobile devices (Android), and cloud computing. Due to its stability, security, and flexibility, Linux
has become one of the most popular operating systems in the world.
Key Features of Linux
1. Open-Source – Linux is distributed under the GNU General Public License (GPL), allowing
anyone to modify and distribute it freely.
2. Multi-User – Multiple users can access and use the system simultaneously.
3. Multitasking – Supports running multiple processes at the same time.
4. Portability – Runs on various hardware platforms, from embedded devices to
supercomputers.
5. Security – Built-in security features like user authentication, file permissions, and firewalls.

6. Shell Interface – Provides command-line interfaces (Bash, Zsh, etc.) for advanced system
control.
7. File System Support – Supports multiple file systems like ext4, XFS, Btrfs, FAT, NTFS.
8. Networking Capabilities – Includes built-in networking tools for client-server communication
and remote management.
9. Customizability – Users can modify the system using various desktop environments
(GNOME, KDE, XFCE) and window managers.
10. Package Management – Uses package managers like APT (Debian-based), YUM/DNF (Red
Hat-based), and Pacman (Arch Linux) for software installation.

History of Linux
• 1991: Linus Torvalds developed the Linux kernel as a personal project.
• 1992: Linux was released under the GNU GPL, allowing open-source development.
• 1993-1994: First Linux distributions (Slackware, Debian, and Red Hat) were created.
• 2000s: Linux gained popularity in enterprise servers, web hosting, and cloud computing.
• 2008: Google introduced Android, based on the Linux kernel, dominating the mobile OS market.
• Present: Linux powers supercomputers, IoT devices, servers, cloud platforms (AWS,
Google Cloud, Azure), and cybersecurity.

Popular Linux Distributions (Distros)


• Debian-based: Ubuntu, Kali Linux, Linux Mint
• Red Hat-based: Red Hat Enterprise Linux (RHEL), Fedora, CentOS
• Arch-based: Arch Linux, Manjaro
• Others: OpenSUSE, Slackware, Alpine Linux

Linux is a powerful, secure, and highly customizable operating system that has revolutionized computing.
From enterprise servers to mobile devices and embedded systems, Linux continues to dominate due to
its open-source nature, stability, and adaptability.


1.3 UNIX Architecture: Kernel, Shell, Files, and Processes

UNIX follows a layered architecture, ensuring efficiency, modularity, and portability. It consists of
multiple layers, each responsible for handling specific functionalities. The core of UNIX is the kernel,
which interacts directly with hardware, while users interact with the system through commands and
applications.

Layers of UNIX Architecture


1. Hardware Layer
• This is the lowest layer of UNIX architecture.
• It consists of physical components such as the CPU, memory, disk storage, and input/output
devices.
• The UNIX Kernel interacts directly with this layer to manage system resources.

Architecture of UNIX

2. Kernel (Core of UNIX)


• The kernel is the heart of UNIX, managing all system resources.
• It is responsible for process management, memory management, device management, and file
system handling.
• The kernel provides an interface between the hardware and system utilities/applications.
• It operates in the privileged mode (Kernel Mode), meaning it has full control over the system.

Functions of the Kernel:

• Process Management: Handles process creation, scheduling, and termination using techniques
like multiprogramming and time-sharing.

• Memory Management: Allocates and deallocates memory for processes, implements virtual
memory and paging.

• File System Management: Manages file operations, directories, permissions, and storage.

• Device Management: Controls hardware devices using device drivers.

• Inter-process Communication (IPC): Allows communication between processes using signals,


pipes, message queues, and shared memory.

• Security and Access Control: Implements user authentication, file permissions, and encryption.

3. Shell (Command Interpreter Layer)


• The shell is the interface between users and the UNIX kernel.
• It interprets user commands, executes them, and returns the results.
• Acts as both a command-line interpreter and scripting language.
• Users can interact with the shell through terminals or command-line interfaces (CLI).
Types of UNIX Shells:
1. Bourne Shell (sh) – Default UNIX shell, efficient for scripting.
2. Bash (Bourne Again Shell) – Enhanced version of Bourne Shell, widely used in Linux.
3. C Shell (csh) – Uses C-like syntax, good for interactive use.
4. Korn Shell (ksh) – Combines features of Bourne and C Shell.
5. Z Shell (zsh) – Advanced shell with customization and scripting enhancements.

4. Utilities and System Programs


• UNIX provides built-in utilities and system programs for user interaction.
• Includes file management tools, compilers, text editors, network utilities, and system monitoring
tools.
Examples of UNIX Utilities:
• File Operations: ls, cp, mv, rm, mkdir, rmdir
• Process Management: ps, top, kill, nice, bg, fg
• User Management: who, whoami, passwd, groups, id
• Networking Commands: ping, scp, ssh, netstat, ftp, wget
• Text Processing: grep, awk, sed, cut, sort, uniq
• Programming Utilities: gcc (C compiler), make, gdb (debugger)

5. User Applications (Top Layer)
• Includes all the software applications used by end-users.
• Examples include web browsers, text editors, database management systems, and graphical
user interfaces (GUI).
Examples of UNIX Applications:
• Text Editors: vi, nano, emacs
• Development Tools: gcc, python, perl
• Networking Tools: ssh, ftp, scp
• System Monitoring: htop, iotop, vmstat
• Graphical Interfaces: X Window System (X11), GNOME, KDE

The UNIX architecture is designed with modularity, security, and flexibility. The kernel manages
system resources, while the shell provides an interface for user commands. Various utilities and
applications allow users to perform tasks efficiently. This layered structure makes UNIX a powerful,
stable, and scalable operating system, widely used in servers, cloud computing, and enterprise
environments.

Kernel and Shell in UNIX Architecture


The Kernel and Shell are the two most essential components of the UNIX operating system. Together,
they provide an efficient environment for users to interact with the system while ensuring resource
management and process execution.

Kernel in UNIX
The Kernel is the core component of UNIX, responsible for directly interacting with the hardware and
managing system resources such as CPU, memory, file system, and input/output devices. It acts as a
bridge between hardware and software, ensuring efficient execution of processes.

Functions of Kernel:
1. Process Management:
o Creates, schedules, and terminates processes.
o Implements multitasking and time-sharing.
o Uses process scheduling algorithms to allocate CPU time efficiently.
o Commands: ps, kill, nice, jobs
2. Memory Management
o Allocates and deallocates memory dynamically.

o Implements virtual memory and paging to optimize performance.
o Prevents memory leaks and unauthorized access.
o Commands: free, vmstat, top
3. File System Management
o Manages the creation, deletion, and access of files.
o Implements file permissions and directory structures.
o Supports multiple file systems (ext4, NTFS, FAT32, etc.).
o Commands: ls, mkdir, rm, chmod
4. Device Management
o Controls hardware devices (printers, disks, network adapters).
o Uses device drivers to communicate with hardware.
o Provides a unified interface for applications to access devices.
o Commands: lsblk, df, mount, umount
5. Interprocess Communication (IPC)
o Facilitates communication between processes using signals, pipes, message queues,
shared memory, and semaphores.
o Enables synchronization between processes to avoid conflicts.
o Commands: ipcs, ipcrm, kill, signal
6. Security and Access Control
o Enforces user authentication, file permissions, and encryption.
o Implements access control lists (ACLs) and secure user management.
o Commands: chmod, chown, passwd, su, sudo

Shell in UNIX
The Shell is a command-line interpreter that acts as an interface between the user and the kernel. It
takes user commands, processes them, and passes them to the kernel for execution.
Functions of Shell:
1. Command Execution: Accepts user input, interprets commands, and executes them. Provides
command-line and scripting capabilities.
Commands: ls, cd, cp, rm, echo
2. Shell Scripting: Supports scripting for automating repetitive tasks. Includes variables, loops,
conditions, and functions.
Commands: sh script.sh, bash script.sh
3. Input/Output Redirection: Redirects input/output to files or other commands. Operators: >, <,
>>, |
Example: ls > file.txt (writes output to file.txt)
4. Pipelines and Filters: Allows chaining multiple commands to process data efficiently.
Example: ls -l | grep "file"
5. Environment Management: Manages environment variables (PATH, HOME, USER). Allows
users to customize shell behaviour.
Commands: export, env, set, unset
6. User Interaction and Process Control: Provides an interface for user login and interactive
command execution. Allows background and foreground process management.
Commands: jobs, fg, bg, kill
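The environment-management function listed above is worth a concrete sketch: plain variable assignments stay local to the current shell, while `export` places a variable in the environment inherited by child processes (variable names here are illustrative):

```shell
# A plain assignment is visible only inside this shell.
LOCAL_VAR=hidden

# export puts the variable into the environment for child processes.
export SHARED_VAR=visible

# A child shell inherits only the exported variable.
sh -c 'echo "shared=$SHARED_VAR local=$LOCAL_VAR"'   # shared=visible local=
```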

Types of UNIX Shells:


1. Bourne Shell (sh) – Default UNIX shell, efficient for scripting.
2. Bash (Bourne Again Shell) – Most widely used, enhances Bourne Shell.
3. C Shell (csh) – Uses C-like syntax, supports scripting.
4. Korn Shell (ksh) – Advanced features for programming and scripting.
5. Z Shell (zsh) – Highly customizable, modern enhancements.

Types of UNIX Shells


In UNIX, a shell is a command-line interpreter that lets users interact with the operating system by typing
commands. Over time, several types of shells have been developed, each offering different features.

1. Bourne Shell (sh)


The Bourne Shell, developed by Stephen Bourne at AT&T Bell Labs, is one of the earliest UNIX shells
and served as the default shell in many UNIX systems. It is widely known for its simplicity, speed, and
effectiveness in scripting. Although it lacks modern interactive features like command history or auto-
completion, it forms the foundation of many other shells. Its clean scripting syntax and reliability make
it suitable for system-level programming and automation tasks.
Key Features:
• Simple and fast
• Excellent for scripting
• Available on all UNIX systems
• Limited interactive capabilities
• File extension for scripts: .sh
2. Bash (Bourne Again Shell)
Bash is a free and enhanced version of the Bourne Shell, developed by the GNU Project. It is the
default shell in most Linux distributions today. Bash adds modern features like command history,
tab auto-completion, arithmetic operations, and improved scripting capabilities. It is widely used
for both interactive command-line use and writing shell scripts.
Key Features:
• Backward-compatible with sh
• Command history and editing
• Auto-completion of commands and file names
• Advanced scripting features (arithmetic, arrays, loops)
• File extension for scripts: .sh (same as Bourne)
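A few of Bash's additions over the Bourne Shell (indexed arrays, arithmetic expansion, C-style loops) can be sketched in a short script; it is written to a file and run explicitly with `bash`, since a plain `sh` may not support these features:

```shell
# Write a small bash script exercising arrays and arithmetic, then run it.
cat > features.bash <<'EOF'
files=(alpha beta gamma)          # an indexed array
count=${#files[@]}                # number of elements
echo "count=$count sum=$(( count * 10 ))"
EOF

bash features.bash    # count=3 sum=30
rm features.bash
```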

3. C Shell (csh)
The C Shell, developed by Bill Joy at the University of California, Berkeley, is known for its C-like
syntax, making it easier for C programmers to adapt. It introduced several innovations like command
history, aliases, and job control. However, its scripting capabilities are more limited and can behave
differently compared to Bourne-compatible shells.
Key Features:
• Syntax similar to C programming
• Command history and alias support
• Job control (running background tasks)
• Interactive use over scripting
• File extension for scripts: .csh

4. Korn Shell (ksh)


The Korn Shell, created by David Korn, blends the programming features of the C Shell with the
scripting power of the Bourne Shell. It supports advanced features like arrays, functions, floating-
point arithmetic, and pattern matching. It is POSIX-compliant, making it suitable for enterprise-grade
scripting and application automation.
Key Features:
• Combines sh and csh features
• Supports arrays and arithmetic
• Fast execution and robust scripting
• Suitable for enterprise systems
• File extension for scripts: .ksh

5. Z Shell (zsh)
The Z Shell is a modern and powerful shell that incorporates features from bash, ksh, and csh. It is
highly customizable and supports plugins, themes, and frameworks like Oh My Zsh, making it popular
among developers and power users. Features like auto-correction, improved tab completion, and
spelling correction make it extremely user-friendly.
Key Features:
• Combines the best features of other shells
• Plugin and theme support
• Auto-correction and enhanced completion
• Highly customizable and developer-friendly
• File extension for scripts: .zsh

Summary Table

Shell Type    Common Name  Key Features                          Script Extension
Bourne Shell  sh           Lightweight, scripting                .sh
Bash          bash         Enhanced Bourne, history, completion  .sh
C Shell       csh          C-like syntax, history, aliases       .csh
Korn Shell    ksh          Advanced scripting, arrays            .ksh
Z Shell       zsh          Customization, plugins, themes        .zsh

Comparison of Kernel and Shell

Feature         Kernel                                    Shell
Role            Core of UNIX; manages hardware and        Command interpreter; user
                system resources.                         interface.
Function        Manages processes, memory, devices,       Accepts user commands, interprets,
                file system, security.                    and executes them.
Interaction     Directly interacts with hardware.         Acts as an interface between the
                                                          user and the kernel.
Execution Mode  Runs in privileged mode (Kernel Mode).    Runs in user mode.
Examples        Linux Kernel, XNU (macOS), Minix Kernel.  Bash, sh, csh, ksh, zsh.


1.4 UNIX System Calls and POSIX Standards

UNIX System Call


A system call is an interface provided by the operating system that allows a user-space application to
request services from the kernel. These services include file handling, process management, memory
management, and inter-process communication.
In UNIX, system calls act as a bridge between user programs and the kernel, allowing applications to
interact with hardware and system resources securely.

Why are System Calls Needed?


1. Access to Hardware Resources – User programs cannot directly access hardware (CPU,
memory, disk). System calls allow controlled access.
2. Process Control – Creating, executing, and terminating processes.
3. File and Directory Management – Reading, writing, opening, and closing files.
4. Interprocess Communication (IPC) – Allows processes to communicate using pipes, signals,
and shared memory.
5. Security and Protection – System calls operate in kernel mode, ensuring only authorized
access to resources.

How System Calls Work in UNIX


• A user program issues a system call, like open(), to open a file.
• The request is passed to the kernel through a software interrupt.
• The kernel executes the requested service (e.g., opening the file).
• The result (success or error) is returned to the user program.

Types of UNIX System Calls


Category                          Purpose                               Examples
Process Control                   Create, terminate, manage processes.  fork(), exec(), exit(), wait(), kill()
File Management                   Open, close, read, write files.       open(), read(), write(), close(), unlink()
Device Management                 Interact with hardware devices.       ioctl(), read(), write()
Memory Management                 Allocate and manage memory.           brk(), sbrk(), mmap()
Interprocess Communication (IPC)  Allow processes to communicate.       pipe(), shmget(), msgget(), semop()

POSIX (Portable Operating System Interface) Standards


POSIX (Portable Operating System Interface) is a set of standardized operating system interfaces
specified by IEEE and recognized by ISO. It ensures compatibility between different UNIX-based
operating systems and helps maintain portability for software applications.
• Full Form: Portable Operating System Interface
• Developed by: IEEE (Institute of Electrical and Electronics Engineers)
• Standard Number: IEEE 1003
• ISO Standard: ISO/IEC 9945
• Primary Goal: To enable software compatibility across UNIX-like operating systems.

Purpose of POSIX
POSIX was developed to solve the issue of fragmentation among UNIX-based operating systems.
Different UNIX vendors had their own implementations, leading to portability challenges for applications.
POSIX standardizes APIs (Application Programming Interfaces), shell commands, utilities, and system
calls to provide a uniform interface.

POSIX Components and Standards


POSIX defines several interfaces that form its core structure:

POSIX Standard  Description
POSIX.1         Defines system call interfaces (file operations, processes, signals).
POSIX.2         Defines shell and command-line utilities.
POSIX.4         Defines real-time extensions for UNIX (multithreading, priorities).

Key Features of POSIX


1. System Call Standardization: Defines consistent system calls for UNIX-based OSes.
2. File System Standardization: Specifies file and directory handling APIs.
3. Process Control and Signals: Defines how processes are created, managed, and terminated.
4. Threading Support: Standardizes multithreading through Pthreads.
5. Interprocess Communication (IPC): Includes shared memory, message queues, and
semaphores.
6. Real-Time Features: Includes scheduling and I/O operations for real-time systems.

7. Shell and Utility Commands: Ensures a common scripting environment across UNIX-like
systems.

POSIX Compliance and Implementations


Several operating systems and environments comply with POSIX standards:
1. Fully POSIX-Compliant Systems:

o IBM AIX
o HP-UX
o Solaris (earlier versions)
o QNX (Real-time OS)

2. Partially POSIX-Compliant Systems:

o Linux (GNU/Linux follows many POSIX standards but has deviations)


o macOS (formerly Mac OS X, which is UNIX-certified)
o FreeBSD, NetBSD, OpenBSD
o Windows (through compatibility layers like Cygwin and Windows Subsystem for Linux -
WSL)

Advantages of POSIX
• Portability: Applications written for one POSIX-compliant OS can run on another with minimal
changes.
• Interoperability: Encourages a consistent development environment across different UNIX-like
systems.
• Code Reusability: Reduces redundancy in software development.
• Standardized Programming Interface: Simplifies the development of cross-platform
applications.

POSIX is an essential standard for ensuring compatibility and portability among UNIX-like operating
systems. By defining system calls, shell utilities, and APIs, POSIX enables developers to write software
that can run across different environments with minimal modifications.


1.5 Basic UNIX Commands (Internal and External)

Basic UNIX Commands (Internal and External)


UNIX commands are categorized into internal (built-in) and external commands based on how they
are executed by the shell.
• Internal Commands: These are built into the shell and do not require a separate executable
file. They are executed directly by the shell itself.
• External Commands: These are separate executable programs stored in the filesystem and
are invoked by the shell.
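The distinction is easy to check from the prompt: the `type` builtin reports how the shell will resolve a name (exact wording and paths vary between shells and systems):

```shell
# 'type' tells you whether a name is resolved inside the shell
# or found as a program on disk.
type cd       # reports a shell builtin
type ls       # reports a path such as /bin/ls (location varies)

# External commands are found by searching the directories in $PATH;
# command -v prints the resolved location.
command -v ls
```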

1. Internal Commands
Internal commands are executed within the shell itself. They are faster since they do not require disk
access.
Common Internal Commands

Command Description Example

echo Displays a message or output of a variable echo "Hello, UNIX"

pwd Displays the present working directory pwd

cd Changes the current directory cd /home/user

set Sets or displays shell variables set -o vi

unset Unsets a shell variable unset VAR_NAME


export Exports a variable to the environment export PATH=/usr/local/bin:$PATH

alias Creates a shortcut for a command alias ll='ls -l'

unalias Removes an alias unalias ll

history Displays command history history

type Identifies if a command is built-in or external type ls

help Provides help for shell built-ins help cd

exit Closes the shell session exit

logout Logs out from the shell logout

umask Sets default permissions for new files umask 022
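The built-ins above can be combined in a short session. The sketch below assumes a POSIX-style shell; the directory, variable name, and alias are illustrative only:

```shell
# Every command here is a shell built-in: no external program
# is started, so no disk access is needed.
cd /tmp                     # change the current directory
pwd                         # print the present working directory
MSG="Hello, UNIX"
echo "$MSG"                 # print a variable's value
umask 022                   # new files: 644, new directories: 755
export PATH="$PATH"         # export PATH to the environment
alias ll='ls -l'            # define a shortcut for a long listing
unalias ll                  # remove the shortcut again
```

Because no process is created, sequences like this run faster than the equivalent external commands would.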

2. External Commands
External commands are stored in specific directories (/bin, /usr/bin, /sbin, etc.) and executed when
called.

Common External Commands

Command Description Example


ls Lists files and directories ls -l

cat Displays file contents cat file.txt

tac Displays a file in reverse order tac file.txt

cp Copies files or directories cp source.txt destination.txt

mv Moves/renames files mv oldname.txt newname.txt

rm Removes files or directories rm file.txt

mkdir Creates directories mkdir new_folder

rmdir Removes empty directories rmdir old_folder

touch Creates an empty file or updates timestamp touch newfile.txt

find Searches for files in a directory hierarchy find /home -name "*.txt"

grep Searches text in files grep "error" logfile.txt

diff Compares two files line by line diff file1.txt file2.txt

cmp Compares two files byte by byte cmp file1 file2

wc Counts lines, words, and characters in a file wc file.txt

cut Extracts sections from a file cut -d: -f1 /etc/passwd

paste Merges lines from multiple files paste file1 file2

sort Sorts text files sort names.txt

uniq Removes duplicate lines from sorted data uniq sorted.txt

head Displays the first few lines of a file head -5 file.txt

tail Displays the last few lines of a file tail -5 file.txt

tar Archives files into a tarball tar -cvf archive.tar file1 file2

gzip Compresses files using gzip gzip file.txt

gunzip Decompresses gzip files gunzip file.txt.gz

zip Compresses files into a .zip archive zip archive.zip file1 file2

unzip Extracts files from a .zip archive unzip archive.zip

ps Displays process information ps aux

kill Sends a termination signal to a process by its PID kill 1234

top Displays running processes dynamically top

df Shows disk usage information df -h

du Displays disk usage of a directory du -sh /home/user

chmod Changes file permissions chmod 755 script.sh

chown Changes file ownership chown user:group file.txt

ln Creates symbolic or hard links ln -s target link_name

ping Checks network connectivity to a host ping google.com

wget Downloads files from the internet wget http://example.com/file.txt

scp Securely copies files over SSH scp file.txt user@host:/path/

ftp Transfers files using FTP ftp hostname

3. Difference Between Internal and External Commands

Feature Internal Commands External Commands

Execution Executed by the shell itself Requires a separate binary file

Speed Faster (no disk access) Slightly slower (requires disk access)

Storage Location Built into the shell Located in /bin, /usr/bin, etc.

Examples cd, echo, pwd ls, cp, rm, grep

4. Checking Command Type


To determine whether a command is internal or external, use the following:
• type command_name → Tells whether the command is built-in or external
Example: type ls
• which command_name → Shows the path of an external command
Example: which ls
• command -v command_name → Similar to which, but works in all shells

Understanding UNIX commands is essential for interacting with the system effectively. Internal
commands are built into the shell, while external commands exist as separate binaries. Mastering these
commands improves system navigation, file management, and overall efficiency in UNIX-based
environments.
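As a sketch, the checks above can be scripted to classify a whole list of command names at once. The list of commands tested here is arbitrary:

```shell
# Classify commands as built-in or external.
# 'command -v' prints an absolute path for an external command
# and just the bare name for a shell built-in.
for cmd in cd echo pwd ls grep; do
    case "$(command -v "$cmd")" in
        /*) printf "%-6s external\n" "$cmd" ;;
        *)  printf "%-6s built-in\n" "$cmd" ;;
    esac
done
```

On a typical system this reports cd, echo, and pwd as built-ins and ls and grep as external binaries (echo and pwd also exist as external programs, but the shell's built-in versions take precedence).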


1.6 UNIX Utilities: Basic System Interaction Commands

UNIX provides a set of basic utilities for interacting with the system effectively. These utilities report essential system information, perform calculations, and allow users to display or manipulate text.

cal (Calendar)
The cal command displays a calendar for a specific month or year. By default, it shows the current month.

Syntax:
cal [month] [year]

Examples:

Display a calendar in Monday-first format:


cal -m

Command Output
cal Displays the current month’s calendar
cal 3 2025 Displays the calendar for March 2025
cal -y Shows the entire current year’s calendar
cal -m Displays the calendar with Monday as the first day of the week

date (Display System Date and Time)


The date command displays or sets the system date and time.

Syntax:
date [options] [+format]


Examples:

Command Output
date Displays the current system date and time
date '+%Y-%m-%d' Shows date in YYYY-MM-DD format
date '+%H:%M:%S' Displays the current time in HH:MM:SS format
date '+%A, %B %d, %Y' Prints the full day name, month, and year (e.g., "Thursday, March 13, 2025")
date -s "2025-03-13 10:30:00" Sets the system date and time (root privileges required)

• Display the current date and time:
date

Output:
Thu Mar 13 14:30:15 IST 2025

• Display the date in YYYY-MM-DD format:
date +"%Y-%m-%d"

Output:
2025-03-13

• Display only the current year:
date +"%Y"

Output:
2025

• Display time in HH:MM:SS format:
date +"%H:%M:%S"

Output:
14:30:15

• Set a new system date (requires admin privileges):


sudo date MMDDhhmmYYYY

Example:
sudo date 031314302025

This sets the date to March 13, 2025, at 14:30.
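In scripts, date output is usually captured with command substitution. A minimal sketch building a timestamped file name (the backup_ naming pattern is just an illustration, not a convention):

```shell
# Capture the current date and embed it in a file name.
today=$(date +%Y-%m-%d)          # e.g. 2025-03-13
logfile="backup_${today}.log"
echo "Log file for $today: $logfile"
```

This pattern is common in cron jobs and backup scripts, where each run must produce a uniquely named file.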

echo (Print Text or Variables)


The echo command prints text, variable values, and special characters.

Syntax:
echo [options] [string]

Examples:
Command Output

echo "Hello, UNIX!" Prints "Hello, UNIX!"
echo $HOME Displays the current user’s home directory path
echo "Today is $(date)" Prints today’s date using command substitution
echo -e "Line1\nLine2" Uses -e to enable escape sequences (newline \n)
echo -e "Column1\tColumn2" Uses \t for tab spacing

What is an Escape Sequence?
An escape sequence is a series of characters used to represent special characters that cannot be typed directly
or have special meaning in strings or terminal output.

• In UNIX and shell environments, escape sequences are used with echo, printf, shell scripts, and within
programs to control formatting or insert special characters.

Common Escape Sequences

Escape Sequence Meaning Example Output


\n New line Hello\nWorld →

Hello
World
\t Horizontal tab Hello\tWorld → Hello World

\b Backspace Hello\bWorld → HellWorld

\r Carriage return Hello\rWorld → World

\\ Backslash \\ → \

\' Single quote \' → '

\" Double quote \" → "

\a Alert (bell sound) Produces a beep sound

\v Vertical tab Inserts vertical tab

\f Form feed Advances to a new page (not visible in most terminals)

Using Escape Sequences in Shell (with echo and printf) - echo with -e option (enables interpretation of
escapes)

• echo -e "Hello\nWorld"

Output:
Hello
World

• echo -e "Tabbed\ttext"

Output:
Tabbed text

printf (better control than echo)

• printf "Name:\t%s\nAge:\t%d\n" "Alice" 25

Output:
Name: Alice
Age: 25

printf Command in UNIX: An Alternative to echo


The printf command in UNIX is a powerful alternative to echo for displaying formatted output. Unlike echo,
which simply prints text to the terminal, printf allows precise control over formatting, similar to the printf()
function in C.

Syntax
printf "FORMAT_STRING" ARGUMENTS

• FORMAT_STRING defines how the output should be formatted.

• ARGUMENTS are the values to be inserted into the format string.

Differences Between printf and echo


Feature echo printf

Formatting Minimal Advanced (like C printf)

Variable Expansion Yes Yes

Escape Sequences Not always interpreted Always interpreted

Multi-line Output Simple Requires explicit \n

Examples
Printing a Simple String
printf "Hello, World!\n"

Output:
Hello, World!

(\n is needed to add a new line, unlike echo which adds a newline automatically.)

Using Variables
name="Alice"
printf "Hello, %s!\n" "$name"

Output:
Hello, Alice!

Printing Multiple Values


printf "Name: %s, Age: %d\n" "Bob" 25

Output:

Name: Bob, Age: 25

Format Specifiers
Specifier Meaning
%s String
%d Integer
%f Floating point
%c Character
%x Hexadecimal
%o Octal

Example: Formatting Numbers


printf "Decimal: %d, Hex: %x, Octal: %o\n" 42 42 42

Output:
Decimal: 42, Hex: 2a, Octal: 52

Floating-Point Precision
printf "Price: %.2f\n" 99.456

Output:
Price: 99.46


Escape Sequences
Escape Sequence Meaning

\n New line

\t Tab

\r Carriage return

\\ Backslash

\" Double quote

Example: Adding Tabs and Newlines


printf 'Item\tPrice\nApple\t$1.00\nOrange\t$0.80\n'

(Single quotes are used here so the shell does not expand $1 and $0 as positional parameters before printf sees the string.)

Output:
Item Price
Apple $1.00
Orange $0.80

Padding and Alignment

Right-Align Numbers
printf "%5d\n" 42

Output:
   42

(Spaces are added to make the field width 5.)

Left-Align Strings
printf "%-10s %-10s\n" "Name" "Score"
printf "%-10s %-10d\n" "Alice" 90
printf "%-10s %-10d\n" "Bob" 85

Output:
Name Score
Alice 90
Bob 85
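The same width specifiers can drive a loop to print a whole aligned table. In the sketch below, the item names, prices, and the colon-separated record format are made up for illustration:

```shell
# Print an aligned two-column report with printf.
printf "%-10s %8s\n" "ITEM" "PRICE"
for entry in "Apple:1.00" "Orange:0.80" "Banana:0.50"; do
    name=${entry%%:*}        # text before the colon
    price=${entry#*:}        # text after the colon
    printf "%-10s %8.2f\n" "$name" "$price"
done
```

%-10s left-aligns each name in a 10-character field, and %8.2f right-aligns each price to two decimal places, so the columns line up regardless of the input lengths.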

bc (Basic Calculator) in UNIX/Linux

The bc command in UNIX/Linux is an interactive, command-line calculator that provides basic and advanced
mathematical operations. It is useful for arithmetic calculations, handling floating-point numbers, and performing
scripting-based computations.

Features of bc

• Supports integer and floating-point arithmetic.

• Allows user-defined variables and functions.

• Supports arithmetic, logical, and bitwise operations.

• Provides scale control for floating-point precision.

• Can be used in interactive mode or through piping.

Basic Syntax
bc [options]

It can be run interactively or in a pipeline with an input expression.

Modes of Using bc

1. Interactive Mode: Simply type bc in the terminal


bc

Then enter expressions like:

5+3

To exit, type: quit

2. Using Piping (echo with bc): Execute calculations directly from the command line:
echo "5+3" | bc

Output: 8

3. Using bc with a File: Store expressions in a file (calc.txt) and run them:

10 * 5

20 / 4

Execute:
bc < calc.txt

Arithmetic Operations

Operator Description Example Output

+ Addition 5+3 8

- Subtraction 10-4 6

* Multiplication 6*7 42

/ Division (Integer) 10/3 3

% Modulus (Remainder) 10%3 1

Floating-Point Arithmetic

By default, bc performs integer division. To enable decimal precision, use scale:


echo "scale=2; 10/3" | bc

Output: 3.33
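The same scale idea works from inside a script. A minimal sketch computing a floating-point average with bc (the numbers and variable names are arbitrary):

```shell
# Average a list of numbers to two decimal places using bc.
sum=0
count=0
for n in 10 7 4; do
    sum=$(echo "$sum + $n" | bc)    # accumulate with bc
    count=$((count + 1))            # integer count: shell arithmetic
done
echo "scale=2; $sum / $count" | bc  # 21 / 3 printed with scale=2
```

Shell arithmetic ($((...))) handles only integers, so bc does the division; setting scale=2 just before the division gives the two-decimal result.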

Advanced Mathematical Functions

bc supports built-in functions when invoked with the -l (math library) option:
bc -l

Function Description Example Output

sqrt(x) Square root sqrt(25) 5

s(x) Sine (radians) s(0) 0

c(x) Cosine (radians) c(0) 1

l(x) Natural log l(2.71828) 1

e(x) Exponential e(1) 2.71828

Example:
echo "scale=4; sqrt(49)" | bc -l

Output: 7.0000

Variables in bc: Assign values to variables:


echo "x=10; y=5; x*y" | bc

Output: 50

Conditional and Logical Operations


Example (relational):
echo "5 > 3" | bc

Output: 1 (true)

Example (logical):
echo "(5 > 3) && (4 < 6)" | bc

Output: 1 (true)

Loops and Conditional Statements


If-Else in bc
echo "if (10 > 5) print \"Yes\"" | bc

Output: Yes

For Loop in bc
echo "for (i=1; i<=5; i++) print i" | bc

Output: 12345

Base Conversion Using bc


The bc command (Basic Calculator) in UNIX is a powerful command-line calculator that supports arbitrary
precision arithmetic and base conversions (binary, octal, decimal, hexadecimal, etc.).

Syntax for base conversion:
ibase=<input base>   (the base in which input numbers are read)
obase=<output base>  (the base in which results are printed)

Important Note: Set obase before ibase, or leave obase at its default of 10. Once ibase changes, every subsequent number, including an obase assignment, is interpreted in the new input base. For example, after ibase=2 the assignment obase=10 actually sets obase to 2, because "10" is read as binary.

Examples

1. Decimal to Binary
$ bc
obase=2
25

Output:
11001

2. Binary to Decimal
$ bc
ibase=2
11001

Output:
25

(obase defaults to 10, so only ibase needs to be set.)

3. Decimal to Hexadecimal
$ bc
obase=16
255

Output:
FF

4. Hexadecimal to Decimal
$ bc
ibase=16
FF

Output:
255

5. Octal to Decimal
$ bc
ibase=8
77

Output:
63

6. Decimal to Octal
$ bc
obase=8
63

Output:
77

Examples

1. Convert Binary to Decimal


echo "obase=10; ibase=2; 1011" | bc

Output:
11

(obase is assigned first; if ibase=2 came first, "10" would be read as binary and obase would become 2.)

2. Convert Decimal to Hexadecimal


echo "ibase=10; obase=16; 255" | bc

Output:
FF

3. Convert Hexadecimal to Binary


echo "ibase=16; obase=2; A3" | bc

Output:
10100011

4. Convert Octal to Decimal


echo "obase=10; ibase=8; 17" | bc

Output:
15

The bc command is a powerful calculator that can handle complex arithmetic, logical operations, and scripting. It
is widely used for calculations in shell scripting and system administration tasks.
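The ibase/obase rules can be wrapped in a small helper function for scripts. The function name to_base is made up for this sketch, and decimal input is assumed:

```shell
# to_base OBASE NUMBER: print a decimal NUMBER in base OBASE.
# obase is assigned first, while the input base is still the
# default 10, so the base argument is read as decimal.
to_base() {
    echo "obase=$1; $2" | bc
}

to_base 2 25      # 11001
to_base 16 255    # FF
to_base 8 63      # 77
```

Because obase is set before any input-base change, the helper avoids the ordering pitfall described above.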

Recording Your UNIX Session Using script Command


The script command in UNIX is used to record all terminal activities into a file. It captures everything displayed on
the screen, including commands and their output, making it useful for logging sessions, debugging, and
documentation.

Syntax
script [options] [filename]

• If filename is provided, the session is saved to that file.

• If no filename is given, the output is saved to typescript (default file).

Start Recording a Session

script session.log    # starts recording; everything is saved to session.log

Output:
Script started, file is session.log

Example:
$ script my_terminal.log
Script started, file is my_terminal.log
$ ls
$ echo "Hello, World!"
$ whoami
$ exit

Script done, file is my_terminal.log

• Everything executed will be saved in my_terminal.log.

Stop Recording: To stop recording, use:


exit
or press Ctrl+D.
Script done, file is session.log

View the Recorded Session: To view the recorded session:
cat session.log

passwd Command in UNIX


The passwd command is used to change the password of a user account in UNIX and Linux systems.

Purpose of passwd

• Allows users to change their own passwords.

• Allows the root (administrator) to set or change passwords for any user.

• Manages password policies such as expiry, locking, or deletion.

Basic Syntax
passwd [options] [username]

• If no username is provided, it changes the password of the current user.

• Only the root user can change passwords for other users.

How to Use passwd

1. Change Your Own Password


passwd

• You'll be prompted to enter your current password, then a new password twice.

Example interaction:
Changing password for user alice.
Current password:
New password:
Retype new password:

2. Root Changing Another User’s Password


sudo passwd john

• No need for current password.

• Useful for resetting or setting passwords for other users.

Options with passwd

Option Description

-d Deletes the password (user can log in without a password)

-l Locks the user account (disables login)

-u Unlocks the user account

-e Forces the user to change password at next login

-n DAYS Minimum number of days between password changes

-x DAYS Maximum number of days before password must be changed

-w DAYS Warn user before password expires

-i DAYS Days after password expires before account is disabled

Examples

• Lock a User Account


sudo passwd -l bob

• Unlock a User Account


sudo passwd -u bob

• Remove Password (no password login)


sudo passwd -d guest

• Force Password Expiry


sudo passwd -e student

who (List Logged-in Users)


The who command displays the list of users currently logged into the system.

Syntax:
who [options]

Examples:

Command Output
who Displays a list of logged-in users
who -u Shows login time and idle time of users
who -b Displays the last system boot time
who -q Shows the number of users logged in


uname (System Information)


The uname command displays system information such as the OS name, kernel version, and hardware details.

Syntax:
uname [options]

Examples:

Command Output
uname Displays the operating system name
uname -a Shows detailed system information

uname -r Prints the kernel version

uname -m Displays the system architecture (e.g., x86_64)

uname -n Prints the system hostname
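Portable scripts often branch on uname output rather than assuming a particular system. A sketch (the messages printed are illustrative):

```shell
# Detect the operating system and hardware architecture.
os=$(uname -s)       # e.g. Linux, Darwin, FreeBSD
arch=$(uname -m)     # e.g. x86_64, aarch64
case "$os" in
    Linux)  echo "Linux system on $arch" ;;
    Darwin) echo "macOS system on $arch" ;;
    *)      echo "Other UNIX-like system: $os" ;;
esac
```

Installers and build scripts use this pattern to pick the right binaries or compiler flags for the host platform.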

tty (Terminal Device Name)


The tty command shows the name of the terminal (TTY) that the user is connected to.

Syntax:
tty

Examples:

Command Output
tty Displays the terminal device name (e.g., /dev/pts/0)

These UNIX utilities are essential for interacting with the system. They help users manage time, display messages,
perform calculations, and retrieve system details. Mastering these commands enhances efficiency in system
administration and scripting.



Review Questions
Short Answer Type Questions
1. What is UNIX?
UNIX is a powerful, multiuser, multitasking operating system originally developed in the 1970s
that supports portability, flexibility, and security.
2. What are the main components of UNIX architecture?
The main components are the Kernel, which interacts with hardware, and the Shell, which
interacts with the user.
3. What is the UNIX Kernel?
The Kernel is the core of the UNIX system, managing system resources, memory, process
scheduling, and hardware communication.
4. What is the UNIX Shell?
The Shell is a command-line interpreter that reads and executes user commands, acting as an
interface between the user and Kernel.
5. What are UNIX files?
In UNIX, everything is treated as a file, including hardware devices, directories, and even
running processes.
6. What is a UNIX process?
A process in UNIX is an instance of a running program, managed by the operating system
through process IDs.
7. What is a system call in UNIX?
A system call is a programmatic way in which a process requests a service from the Kernel, like
reading files or creating processes.
8. List any two features of UNIX.
UNIX is known for its portability and multitasking capabilities. It supports multiple users and has
powerful networking features.
9. What is POSIX?
POSIX stands for Portable Operating System Interface; it is a set of standards to maintain
compatibility across UNIX systems.
10. What does "single-user specification" mean in UNIX?
Single-user specification refers to running the system in a maintenance mode where only the
root user can access the system.
11. Differentiate between internal and external commands.
Internal commands are built into the Shell, while external commands exist as separate
executable files in the system.
12. What is the use of the cal command in UNIX?
The cal command is used to display a calendar of a specific month or year on the terminal
screen.
13. What does the date command do in UNIX?
The date command displays or sets the current system date and time in UNIX.

14. How is the echo command useful in UNIX?
The echo command prints text or variable values to the screen, useful in scripts and command-
line outputs.
15. What is bc in UNIX?
bc is an arbitrary precision calculator language that supports mathematical calculations through
the command line.
16. What does the passwd command do?
The passwd command allows a user to change their login password securely by verifying the
current password.
17. What is the use of the who command?
The who command displays a list of users currently logged into the UNIX system along with
login details.
18. What information does the uname command provide?
The uname command displays system information such as the operating system name, version,
hardware type, and network node hostname.
19. What is the purpose of the tty command in UNIX?
The tty command prints the file name of the terminal connected to the standard input, useful for
debugging terminal sessions.
20. Why is UNIX considered secure?
UNIX is considered secure due to its file permission structure, user authentication, and support
for secure communication protocols.

Descriptive Type Questions


1. Explain the architecture of UNIX operating system.
UNIX architecture comprises mainly two components: the Kernel and the Shell. The Kernel
manages hardware resources, handles memory, processes, and file systems. The Shell serves
as a command interpreter, accepting user inputs and translating them into commands for the
Kernel. This separation provides flexibility and modularity.
2. Describe the features of UNIX operating system.
UNIX supports multitasking and multi-user functionality, allowing multiple users to work
simultaneously. It is portable, meaning it can run on various hardware platforms. UNIX supports
powerful networking capabilities, hierarchical file systems, scripting, and comes with a wide
range of useful utilities and development tools.
3. Differentiate between internal and external commands in UNIX with examples.
Internal commands are built into the Shell and executed directly, like cd, echo, and pwd.
External commands are stored as executable files in directories like /bin or /usr/bin, such as ls,
cp, and grep. Internal commands load faster since they don’t require creating a new process.
4. What are system calls in UNIX? Explain with examples.
System calls are interfaces between user programs and the Kernel, allowing access to
hardware and system resources. Examples include fork() to create a new process, read() and
write() for file I/O, and exec() to execute programs. These calls are essential for process control
and file management.
5. What is POSIX and why is it important?
POSIX (Portable Operating System Interface) is a set of IEEE standards that ensure

compatibility between UNIX-like operating systems. It defines APIs, shell command-line
interfaces, and utility interfaces. POSIX compliance allows software developed on one UNIX
system to be easily ported to others, enhancing interoperability.
6. Explain the use of common UNIX utilities with examples.
Common UNIX utilities include cal (calendar display), date (system date/time), echo (display
messages), bc (calculator), passwd (change password), and who (user login info). For example,
date shows the current date/time and bc can be used to perform advanced calculations
interactively.
7. Discuss the Shell in UNIX and its types.
The Shell is a command interpreter that allows users to interact with the UNIX system. Common
types include the Bourne Shell (sh), C Shell (csh), Korn Shell (ksh), and Bourne Again Shell
(bash). Each Shell has unique scripting syntax and built-in features for user interaction.
8. Explain how files and processes are handled in UNIX.
In UNIX, files are central and everything (devices, data, and processes) is treated as a file.
Processes are created and managed using system calls like fork() and exec(). Each process is
assigned a unique process ID, and the system maintains a process table to track them.
9. What are the security features available in UNIX?
UNIX uses file permission systems (read, write, execute) for user, group, and others. It supports
strong authentication using encrypted passwords. Other security measures include process
isolation, root privilege separation, and secure networking tools like SSH and SFTP to prevent
unauthorized access.
10. Describe the function of the uname and tty commands.
The uname command provides system information like OS name, kernel version, and machine
hardware name. It is useful for scripting and diagnostics. The tty command shows the file name
of the terminal device connected to standard input, helping users identify their terminal session
or debug issues.


Chapter 02
UNIX File System


2.1 UNIX File System: Structure and File Types

The UNIX file system is a hierarchical structure that organizes and manages files and directories
efficiently. It follows a tree-like structure, starting from the root (/) directory. UNIX treats everything as
a file, including directories, devices, and even processes.

UNIX File System Structure


The UNIX file system follows a standard directory hierarchy, commonly referred to as the Filesystem
Hierarchy Standard (FHS). The root (/) directory is the topmost level, with subdirectories organized
based on functionality.

Hierarchical Structure

Explanation of Major Directories


Directory Description

/ (Root) The top-level directory of the UNIX file system. All other directories stem from here.

/bin Stores essential system binaries (executables) like ls, cp, mv, cat, etc.

/boot Contains bootloader files and kernel-related files required to boot the system.

/dev Stores device files (e.g., /dev/sda for disks, /dev/null).

/etc Stores system configuration files (/etc/passwd, /etc/fstab).

/home Contains user-specific directories (/home/user1, /home/user2).

/lib Stores shared libraries required by system programs.

/media Mount point for removable media (CD/DVD, USB drives).

/mnt Temporary mount point for external file systems.

/opt Stores optional third-party applications.

/proc Virtual file system that provides system and process information.

/root Home directory of the root user (administrator).

/sbin Stores system administration commands (shutdown, fsck).

/srv Data for system services like web and FTP servers.

/tmp Stores temporary files (cleared on reboot).

/usr Contains user programs, libraries, and documentation.

/var Stores variable data like logs, caches, and mail.

File Types in UNIX


In UNIX, everything is treated as a file, including hardware devices and processes. The file system supports several
types of files, each serving a specific purpose. These file types help UNIX manage data, communication, and
devices effectively. Files can be categorized into different types:

• Regular files are the most common and fundamental type of files in the UNIX operating system. These files
contain user data such as text, images, programs, or binary data.

They can be:

• Text files (e.g., .txt, .c, .sh)

• Executable files (e.g., compiled programs like a.out)

• Binary files (e.g., images, audio, or compiled code)

Regular files do not include directories or special device files. When you run the ls -l command, a regular file
is indicated by a dash (-) at the beginning of the permissions string.

For example:
-rw-r--r-- 1 user group 1234 Apr 7 10:00 notes.txt

Here, -rw-r--r-- shows that it is a regular file with read and write permissions for the owner, and read-only for
others.

Regular files are used to store and manage content that users create or interact with daily, such as documents,
scripts, source code, or media.


• Directory files in UNIX are special files that serve as containers or folders for other files and directories.
Rather than storing user data like regular files, a directory file maintains metadata—essentially a list of
filenames and pointers (inodes) to their actual data locations on the disk.

In UNIX, the file system is hierarchical, and directories help organize this structure. Each directory file
contains entries for:

• The current directory (.),

• The parent directory (..), and

• The list of all files and subdirectories it holds.

When you list files using ls -l, a directory is identified by a d at the beginning of the permissions field:
drwxr-xr-x 2 user group 4096 Apr 7 10:00 documents

Here, d means it’s a directory, and the permissions indicate who can read (r), write (w), or execute (x) within
it. The execute permission on a directory means the ability to enter it (with cd) or traverse it to access files.

Directories are essential for organizing files, managing access, and enabling efficient file navigation and
system maintenance.

• Device files (also known as special files) in UNIX are used to represent and interact with hardware devices like hard drives, printers, terminals, and USB drives, treating them as if they were regular files.

They act as interfaces between software and hardware, allowing users and programs to perform read/write
operations on devices using standard file operations.

Types of Device Files:

1. Character Device Files (c)

o Handle data character by character (byte-stream).

o Suitable for devices like keyboards, mice, and serial ports.

o Example: /dev/tty, /dev/console

2. Block Device Files (b)

o Handle data in fixed-size blocks.

o Suitable for storage devices like hard disks, SSDs, USB drives.

o Example: /dev/sda, /dev/sr0

How to Identify Device Files

Use ls -l in the /dev/ directory. Example output:


brw-rw---- 1 root disk 8, 0 Apr 7 10:00 /dev/sda
crw-rw---- 1 root tty 4, 0 Apr 7 10:00 /dev/tty0

• The first letter (b or c) indicates the device type.

• The numbers (8, 0 or 4, 0) are major and minor device numbers used by the kernel to identify the
device driver and the specific device.
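The test command can check these file types directly from a script. The sketch below assumes a typical Linux /dev layout; /dev/sda may not exist on every system:

```shell
# -c tests for a character device, -b for a block device.
if [ -c /dev/null ]; then
    echo "/dev/null is a character device"
fi
if [ -b /dev/sda ]; then
    echo "/dev/sda is a block device"
else
    echo "/dev/sda is not present on this system"
fi
```

These tests mirror the b and c prefixes shown by ls -l, and are useful for guarding scripts that must not write to a missing or wrong device.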

• A symbolic link (also known as a soft link) in UNIX is a special type of file that acts as a pointer or shortcut to
another file or directory. It stores the path to the target file rather than its actual data.

Think of it like a Windows shortcut: clicking it redirects you to the original file.

Key Features of Symbolic Links:

• Created using the ln -s command.

• Can link to files or directories.

• If the original file is deleted, the symbolic link becomes broken (also called a dangling link).

• Identified with a l (lowercase L) in the file listing (ls -l):


lrwxrwxrwx 1 user group 12 Apr 7 10:00 link.txt -> original.txt

Here, link.txt is a soft link pointing to original.txt.
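A complete soft-link experiment can be run in a throwaway directory. The file names below are illustrative:

```shell
cd "$(mktemp -d)"                 # work in a scratch directory
echo "data" > original.txt
ln -s original.txt link.txt       # create the symbolic link
readlink link.txt                 # prints the target: original.txt
cat link.txt                      # follows the link: prints data
rm original.txt                   # delete the target
cat link.txt 2>/dev/null \
    || echo "link.txt is now a dangling link"
```

After the target is removed, the link file still exists but following it fails, which is exactly the broken (dangling) link case described above.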

• A hard link in UNIX is a directory entry that directly points to the same inode (i.e., the actual data on disk) as
another file. It creates an exact replica of the original file's reference in the file system — meaning both
filenames refer to the same data blocks.

Unlike a symbolic (soft) link, a hard link doesn’t point to a filename or path — it points to the file's inode. So
even if the original file is deleted, the hard link still retains the content.

Key Characteristics of Hard Links:

• Created using the ln command (without -s).

• Both the original and the hard link share the same inode number.

• Changes made to one file are reflected in the other.

• You can’t create hard links to directories (to avoid loops).

• You can’t create hard links across different file systems.

File Listing Example:


ls -li

Output:
123456 -rw-r--r-- 2 user group 50 Apr 7 10:00 original.txt
123456 -rw-r--r-- 2 user group 50 Apr 7 10:00 duplicate.txt

Both files have the same inode number (123456) and link count 2, meaning they are hard links to the
same data.
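The same experiment for hard links, again in a scratch directory with illustrative file names:

```shell
cd "$(mktemp -d)"
echo "hello" > original.txt
ln original.txt duplicate.txt     # hard link: note, no -s option
ls -li original.txt duplicate.txt # same inode number, link count 2
rm original.txt                   # remove one of the two names
cat duplicate.txt                 # the data survives: prints hello
```

Because both names point at the same inode, deleting one name only decrements the link count; the data is freed only when the last name is removed.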

• FIFO stands for First-In-First-Out and is a special type of named pipe in UNIX used for inter-process
communication (IPC). It allows two unrelated processes to communicate by reading and writing to a shared
file-like interface.
Unlike unnamed (anonymous) pipes, which only work between parent and child processes and exist
temporarily in memory, named pipes (FIFOs) exist permanently in the filesystem and can be accessed like a
regular file using a name.

Example:
mkfifo mypipe

To use the pipe:


echo "Hello" > mypipe &
cat < mypipe

• Socket files in UNIX are special types of files used for inter-process communication (IPC), particularly for
processes that communicate either on the same machine or over a network. They allow bidirectional
communication, unlike pipes or FIFOs which are typically one-way.

A UNIX socket file exists in the file system and serves as an endpoint for communication between processes.

Example:
ls -l /var/run/docker.sock

Output:
srwxr-xr-x 1 root root 0 Mar 13 14:30 /var/run/docker.sock

Here, s indicates a socket file.

File Naming Convention

UNIX provides a flexible but case-sensitive file naming convention. Here are the key rules and best practices:

• Characters Allowed: File and directory names can generally include alphanumeric characters (a-z, A-Z, 0-
9), as well as special characters like underscore (_), hyphen (-), and dot (.).

• Case Sensitivity: UNIX file systems are case-sensitive. This means that myfile, MyFile, and MYFILE are
treated as three distinct files. This is a crucial point to remember to avoid confusion.

• Maximum Length: While specific limits can vary between different UNIX-like systems and file system
types, there's usually a maximum length for file and directory names (e.g., 255 characters). It's good
practice to keep names reasonably short and descriptive.

• Reserved Characters (Avoid): Certain characters have special meanings in the shell and should generally
be avoided in file names to prevent unexpected behavior or the need for quoting/escaping. These
include:

o Space ( )

o Tab

o Newline

o Pipe (|)

o Ampersand (&)

o Semicolon (;)

o Angle brackets (<, >)

o Asterisk (*)

o Question mark (?)

o Backslash (\)

o Dollar sign ($)

o Exclamation mark (!)

o Backquotes (`)

o Double quotes (")

o Single quotes (')

• Hidden Files: File or directory names that begin with a dot (.) are treated as hidden files. By default,
standard listing commands like ls do not display these files. You need to use the -a option with ls
(commonly combined as ls -la) to view them. These are often used for configuration files and directories
specific to applications or the user's environment.

• Best Practices:

o Use descriptive names that reflect the content of the file.

o Be consistent with your naming conventions.

o Avoid spaces; use underscores or hyphens instead for readability (e.g., report_2023.txt or user-
manual.pdf).

o Stick to lowercase for general files to avoid case-related errors, especially when working across
different systems.

o Use extensions (e.g., .txt, .jpg, .py) to indicate the file type.

Parent – Child Relationship

The UNIX file system is organized as a hierarchical tree structure, starting from a single root directory (/). This
structure naturally creates parent-child relationships between directories and files:

• Parent Directory: A directory that contains other directories or files is considered the parent of those
contained items. Every directory (except the root directory) has a parent directory.

• Child Directory/File: A directory or file located directly within another directory is considered a child of
that directory.

Think of it like a family tree. The root directory (/) is like the ultimate ancestor. Directories directly under / are its
children. If a directory /home contains a subdirectory /home/user1, then /home is the parent of user1, and
user1 is a child of /home. Similarly, if /home/user1 contains a file document.txt, then /home/user1 is the parent
of document.txt, and document.txt is a child of /home/user1.


HOME Variable

The HOME environment variable stores the absolute pathname of the user's home directory. When a user
logs in, this variable is automatically set by the system.

o User's Personal Space: The home directory is the default working directory for a user upon
login. It's intended as the user's personal space to store their files, configuration files, and
personal directories.

o Convenient Path Reference: Many commands and applications use the HOME variable as a
shortcut to refer to the user's home directory without needing to know the full path. For
example, cd without any arguments will typically take you to your home directory (which the
system knows from the HOME variable). Similarly, paths like ~/.bashrc or ~/Documents use the
tilde (~), which the shell expands to the value of the HOME variable.

o Configuration Files: Many application-specific configuration files are stored in hidden directories
or directly within the user's home directory (often starting with a dot, like .config, .ssh, etc.).

• Accessing the HOME Variable: You can see the value of the HOME variable using the echo command in
the shell:
echo $HOME

The output will be the absolute path to your home directory (e.g., /home/your_username on many Linux
systems).
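A short session illustrating these points (the exact path shown will differ per user):

```shell
# HOME holds the absolute path of the home directory
echo "$HOME"

# The shell expands a bare tilde to the same value
echo ~

# cd with no argument returns to the home directory
cd /tmp
cd
pwd
```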

inode Number

Every file and directory in a UNIX file system is associated with a unique identifier called an inode (index node)
number.

• Definition: An inode is a data structure on disk that stores metadata about a file or directory. This
metadata includes:

o File type (regular file, directory, symbolic link, etc.)

o Permissions (read, write, execute for owner, group, others)

o Ownership (user ID and group ID)

o Size of the file

o Timestamps (access, modification, change)

o Pointers to the data blocks on disk where the actual file content is stored.

• Significance:

o Unique Identification: The inode number uniquely identifies a file or directory within a specific
file system. Two distinct files on the same file system never share an inode number (hard links are
multiple names for one file, not separate files).

o Name Independence: The inode number is independent of the file's name. A single file can have
multiple names (through hard links), but all those names will point to the same inode.

o File System Specific: Inode numbers are unique within a particular file system. Different file
systems on the same machine can have files with the same inode number.

o How to View: You can see the inode number of a file or directory using the -i option with the ls
command:
ls -i myfile.txt

The output will show the inode number followed by the filename.

Absolute Pathname

An absolute pathname (also known as a full pathname) specifies the exact location of a file or directory starting
from the root directory (/).

• Characteristics:

o Always begins with a forward slash (/), indicating the root directory.

o Provides an unambiguous and unique path to a specific file or directory, regardless of the
current working directory.

o Traces the path down the directory hierarchy from the root to the target file or directory,
separating each directory level with a forward slash.

• Example: If you have a file named report.txt located in the directory /home/user1/documents, its
absolute pathname is /home/user1/documents/report.txt.

Relative Pathname

A relative pathname specifies the location of a file or directory relative to the current working directory.

• Characteristics:

o Does not begin with a forward slash (/).

o Its interpretation depends on the user's current working directory.

o Uses the special directory entries . (dot) and .. (dotdot) to navigate the directory hierarchy
relative to the current position.

• Example: If your current working directory is /home/user1, and you want to access the file report.txt
located in /home/user1/documents, the relative pathname would be documents/report.txt.

• Navigating Up: If your current working directory is /home/user1/documents, and you want to refer to
the directory /home/user1 (the parent directory), you would use the relative pathname .. (dotdot). To refer
to a file config.txt in /home/user1, the relative path would be ../config.txt.
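These examples can be tried with a small throwaway directory tree (the names demo, user1, and the files are hypothetical):

```shell
# Build a sample hierarchy
mkdir -p demo/user1/documents
echo "report" > demo/user1/documents/report.txt
echo "config" > demo/user1/config.txt

# From user1, a relative path descends into the child directory
cd demo/user1
cat documents/report.txt

# From documents, .. climbs back to the parent (user1)
cd documents
cat ../config.txt

# Three levels up returns to the starting directory
cd ../../..
```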

Significance of dot (.) and dotdot (..)

The special directory entries . (dot) and .. (dotdot) are fundamental for navigating the file system using relative
pathnames:

• . (Dot): Represents the current working directory.


o It's a shorthand way to refer to the directory you are currently in.

o For example, ./my_script.sh explicitly tells the shell to execute the my_script.sh file located in
the current directory. This is important for security reasons, as the system's PATH environment
variable might not include the current directory.

o Commands like ls . will list the contents of the current directory.

• .. (Dotdot): Represents the parent directory of the current working directory.

o It allows you to move one level up in the directory hierarchy.

o For example, if you are in /home/user1/documents, the command cd .. will take you to
/home/user1.

o ls .. will list the contents of the parent directory.

UNIX File Permissions

Each file and directory in UNIX has associated permissions for different users.

Example:
ls -l

Output:
-rw-r--r-- 1 user user 1234 Mar 13 14:30 myfile.txt

Permission Breakdown

Position Meaning
-rw-r--r-- File Type & Permissions

- Regular file (d for directory, l for link)


rw- Owner: Read & Write
r-- Group: Read-only
r-- Others: Read-only

Changing permissions:
chmod 755 myfile.txt

Changing ownership:
chown user:group myfile.txt

Summary

Concept Description

Filesystem Hierarchy Organized under / root with directories like /bin, /etc, /home.

Regular Files Text, binary, and executable files.

Directory Files Containers for files and subdirectories.

Device Files Represent hardware devices (e.g., /dev/sda).

Symbolic Links Shortcut pointing to another file.

Hard Links Duplicate reference to an existing file.

Named Pipes (FIFO) Special file for interprocess communication.

Sockets Used for network communication between processes.

Permissions Controlled using chmod and chown.

Conclusion

Understanding the UNIX file system structure and file types is crucial for managing files, processes, and system
resources efficiently. By leveraging this knowledge, users can navigate and control the UNIX environment
effectively.


2.2 File and Directory Management Commands

Managing files and directories is a crucial skill in UNIX, allowing users to navigate the filesystem,
create, modify, and delete files, and organize directories efficiently. Below is a detailed
explanation of essential UNIX commands used for file and directory management.

1. Navigating the File System


Navigating the file system in UNIX is essential for working with files, directories, and executing
commands effectively. UNIX uses a hierarchical directory structure, with the root directory
/ at the top.
• pwd (Print Working Directory): The pwd command in UNIX is used to display the full
path of the current working directory — that is, the directory you are currently in within
the file system hierarchy.
Displays the absolute path of the current directory.
pwd
Output:
/home/user
• ls (List Files and Directories): The ls command in UNIX is used to display the
contents of a directory. It shows the names of files and subdirectories in the current
or specified directory.
Lists files and directories in the current directory.
Usage:
ls

• Common options:
▪ ls -l → Detailed view (permissions, owner, size, modification time).
▪ ls -a → Includes hidden files (starting with .).
▪ ls -lh → Displays human-readable file sizes.
▪ ls -R → Recursively lists subdirectories.
▪ ls -lt → Sorts files by modification time.

ls -lh

Output:
-rw-r--r-- 1 user user 2.0K Mar 10 12:34 file1.txt
drwxr-xr-x 2 user user 4.0K Mar 9 16:10 myfolder

2. Directory Management Commands
Directory management is a key part of working with the UNIX file system. These commands
help you create, remove, navigate, and manage directories efficiently.
• mkdir (Make Directory): The mkdir command in UNIX is used to create new directories
(folders) in the file system.
Syntax:
mkdir [options] directory_name

Example:
mkdir myfolder

• Create multiple directories:


mkdir dir1 dir2 dir3

• Create a nested directory structure:


mkdir -p parent/child/grandchild
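The difference -p makes can be seen directly; without it, mkdir refuses to create a path whose intermediate directories are missing:

```shell
# -p creates every missing component; repeating the command is not an error
mkdir -p parent/child/grandchild
mkdir -p parent/child/grandchild   # succeeds silently

# Without -p, mkdir fails when the intermediate directory does not exist
mkdir parent2/child2 2>/dev/null || echo "mkdir failed: parent2 does not exist"
```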

• rmdir (Remove Empty Directory): The rmdir command in UNIX is used to delete empty
directories from the file system.
Syntax:
rmdir [options] directory_name

Example:
rmdir myfolder

• To remove multiple empty directories:


rmdir dir1 dir2 dir3

• rm (Remove Directory and Its Contents): The rm command in UNIX is used to delete files or
directories. When used with the -r option, it can remove entire directories along with their
contents, including subdirectories and files.
Syntax:
rm [options] file_or_directory_name

Examples:
• Remove a file:
rm file.txt

• Remove a directory and all its contents recursively:
rm -r myfolder

• Remove a directory without asking for confirmation:


rm -rf myfolder

Common Options:
Option Description
-r Recursively delete directories and subdirectories
-f Force deletion without prompting
-i Prompt before each deletion
-v Verbose – shows what is being deleted

3. File Management Commands: File management commands in UNIX allow users to create, view,
modify, move, copy, and delete files. These commands are essential for day-to-day tasks in a UNIX
environment.

• touch (Create Empty File or Update Timestamp): Creates new empty files or updates the last
modified time of an existing file.
Syntax:
touch filename

Example:
touch project.txt

• cp (Copy Files and Directories)


The cp command in UNIX is used to copy files or directories from one location to another. It creates a
duplicate of the specified file or folder.

Syntax:
cp [options] source destination

• source: the file or directory to copy


• destination: the location or name of the copy

Examples:
• Copy a file:
cp file.txt backup.txt

• Copy a file to another directory:


cp file.txt /home/user/Documents/

• Copy a directory and its contents recursively:


cp -r myfolder/ newfolder/

Commonly Used Options:

Option Description
-r Recursively copy directories
-i Prompt before overwrite
-f Force overwrite without prompting
-u Copy only if source is newer than destination
-v Verbose – show what is being copied

• Copies files or directories.


cp file1.txt file2.txt # Copy a file
cp -r dir1 dir2 # Copy a directory

• To copy multiple files to a directory:


cp file1.txt file2.txt /home/user/docs/

mv (Move or Rename a File/Directory)


• Moves or renames a file.
mv oldname.txt newname.txt # Rename a file

mv myfile.txt /home/user/docs/ # Move file to another directory

• To rename a directory:
mv olddir newdir

rm (Remove Files and Directories)


• Deletes files and directories.
rm myfile.txt # Delete a file
rm -r mydir/ # Delete a directory and its contents

• To force delete without confirmation:


rm -rf mydir

Warning: rm -rf / can delete the entire system.

4. Viewing and Searching Files
cat (View File Content)
• Displays file content.
cat file.txt

less / more (View Large Files Page by Page)


• Scroll through files one screen at a time.
less file.txt

Press q to exit.

head / tail (View First or Last Lines of a File)


head -5 file.txt # Show first 5 lines
tail -5 file.txt # Show last 5 lines
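head and tail can also be combined in a pipeline to extract a slice from the middle of a file. Assuming a sample file of numbered lines:

```shell
# Create a sample file containing the numbers 1 to 20, one per line
seq 1 20 > sample.txt

head -5 sample.txt              # first 5 lines (1-5)
tail -5 sample.txt              # last 5 lines (16-20)

# First 10 lines piped into tail -3 yields lines 8, 9 and 10
head -10 sample.txt | tail -3
```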

find (Search for Files and Directories)


• Searches for files and directories based on criteria.
find /home/user -name "file.txt"
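find supports more criteria than just -name. A sketch with a hypothetical directory tree (proj, src, docs are made-up names):

```shell
# Prepare a small tree to search
mkdir -p proj/src proj/docs
touch proj/src/main.c proj/docs/readme.txt

# Match by name pattern (quote the pattern so the shell does not expand it)
find proj -name "*.txt"

# Restrict by type: f = regular file, d = directory
find proj -type d

# Run a command on each match; {} stands for the matched path
find proj -name "*.c" -exec wc -c {} \;
```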

grep (Search Inside Files)


• Finds text patterns inside files.
grep "keyword" file.txt

5. Changing Directory and Permissions


cd (Change Directory)
• Moves between directories.
cd /home/user/docs/

• Move back one directory:


cd ..

• Move to the home directory:


cd ~

chmod (Change File Permissions)


• Modifies file permissions.
chmod 755 myscript.sh

o 7 → Read, write, execute for the owner


o 5 → Read, execute for the group

o 5 → Read, execute for others
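The octal digits above can also be written symbolically, which is often easier to read. A sketch (myscript.sh is a placeholder name):

```shell
touch myscript.sh

# Octal form: 7 = rwx (owner), 5 = r-x (group), 5 = r-x (others)
chmod 755 myscript.sh
ls -l myscript.sh

# Symbolic form: u = user/owner, g = group, o = others, a = all
chmod u+x,go-w myscript.sh   # ensure owner execute, deny group/others write
chmod a-x myscript.sh        # remove execute for everyone
ls -l myscript.sh            # now -rw-r--r--
```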
chown (Change File Ownership)
• Changes file ownership.
chown user:group file.txt

6. Archiving and Compressing Files


tar (Archive Files and Directories)
• Creates an archive of files.
tar -cvf archive.tar myfolder/

• Extracting a tar file:


tar -xvf archive.tar

gzip (Compress Files)


• Compresses files using gzip.
gzip file.txt

• To decompress:
gunzip file.txt.gz
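In practice tar and gzip are usually combined: tar's -z option pipes the archive through gzip in a single step. A sketch with made-up file names:

```shell
# Prepare sample data
mkdir -p myfolder
echo "data" > myfolder/a.txt

# Create a gzip-compressed archive in one command (-c create, -z gzip, -f name)
tar -czf archive.tar.gz myfolder/

# List the contents without extracting (-t)
tar -tzf archive.tar.gz

# Extract the archive back (-x)
tar -xzf archive.tar.gz
```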

Conclusion
Mastering file and directory management commands in UNIX helps users efficiently navigate and
manipulate files and directories. Understanding these commands improves productivity and control
over the system.



Review Questions
Short Answer Type Questions
1. What is a file system in UNIX?
A file system in UNIX is a method for storing, organizing, and managing files and directories in a
hierarchical tree structure.
2. List different types of files in UNIX.
UNIX supports regular files, directories, symbolic links, character device files, block device files,
named pipes, and sockets.
3. What is the significance of the / directory in UNIX?
The / directory, known as the root directory, is the top-level directory from which all other
directories branch.
4. What are valid characters in UNIX file names?
File names may include letters, numbers, dots (.), underscores (_), and hyphens (-), but not
slashes (/) or null characters.
5. Explain parent-child relationship in UNIX directories.
Each directory (except root) has a parent directory. A subdirectory created inside another is
considered its child.
6. What is the HOME variable in UNIX?
The HOME variable stores the absolute path of the user’s home directory, providing a default
location for login and file operations.
7. Define an inode number.
An inode number uniquely identifies a file or directory and stores metadata such as
permissions, owner, and location on disk.
8. What is an absolute pathname?
An absolute pathname starts from the root / and specifies the full path to a file or directory,
independent of current location.
9. What is a relative pathname in UNIX?
A relative pathname indicates the location of a file or directory with respect to the current
working directory.
10. What does the dot (.) signify in UNIX?
The single dot (.) refers to the current directory and is used in relative paths and execution of
files in the same directory.
11. What does double dot (..) represent in UNIX?
The double dot (..) represents the parent directory, useful for moving up in the file hierarchy or
referencing parent files.
12. How can you display the current working directory in UNIX?
The pwd (print working directory) command displays the full absolute path of the current
working directory.
13. What is the use of cd command in UNIX?
The cd command is used to change the current working directory to the specified path or to the
user’s home directory.
14. How do you create a new directory in UNIX?
The mkdir command is used to create one or more new directories in the specified location.
15. How do you remove an empty directory in UNIX?
The rmdir command deletes an empty directory. It will fail if the directory contains files or
subdirectories.
16. What does the ls command do in UNIX?
The ls command lists the contents of a directory, showing files and subdirectories optionally with
detailed information using flags.
17. What is the purpose of the /bin directory in UNIX?
The /bin directory contains essential binary executables like ls, cp, and mkdir, needed for
booting and basic operation.
18. What is stored in the /etc directory?
The /etc directory contains system-wide configuration files and shell scripts used during system
startup and service management.
19. What does the /home directory represent in UNIX?
The /home directory contains the personal directories of all non-root users, storing their files and
personal configurations.
20. What is /dev used for in UNIX file system?
The /dev directory holds special files representing devices, such as hard drives, terminals, and
USB devices, for hardware interaction.

Descriptive Type Questions


1. Explain the structure of the UNIX file system.
The UNIX file system is organized in a hierarchical tree structure starting from the root /
directory. All files and directories branch from this root. Each user and system component has a
place in this hierarchy. It supports different file types and uses inodes for tracking file metadata,
ensuring efficient file management and access.
2. Describe the different types of files found in UNIX.
UNIX supports several file types including regular files (text or binary), directories, symbolic
links (shortcuts to other files), character and block device files (for hardware access), named
pipes (for inter-process communication), and sockets (for network communication). Each type
plays a specific role in the system's functionality.
3. What is the importance of inodes in the UNIX file system?
Inodes are critical components of the UNIX file system. Each inode contains metadata about a
file or directory—such as ownership, permissions, size, and data block pointers—excluding the
filename. The filename is stored in directory entries, which map names to inode numbers,
enabling efficient file handling.
4. Compare and contrast absolute and relative pathnames.
An absolute pathname begins from the root directory / and gives the complete path to a file or
directory.
A relative pathname, in contrast, starts from the current working directory. While absolute paths
are consistent regardless of location, relative paths are shorter and flexible within scripts.
5. Explain the role of dot (.) and dotdot (..) in navigation.
In UNIX, the dot . refers to the current directory, and dotdot .. refers to the parent directory. They
are used in relative paths to navigate within the file hierarchy. For example, cd .. moves up one
directory level, and ./file.sh runs a script in the current folder.
6. Write the purpose and usage of basic directory commands like pwd, cd, mkdir, and rmdir.
pwd displays the current directory's absolute path.
cd changes the current directory.
mkdir creates new directories,
while rmdir removes empty directories.
These commands are essential for managing directory navigation and organization in the UNIX
environment.
7. What is the significance of the /bin, /sbin, /usr/bin, and /usr/sbin directories?
These directories hold binary executables. /bin and /sbin contain essential commands for basic
system operation and administration. /usr/bin and /usr/sbin store non-essential binaries and
system admin tools, typically used after the system has booted successfully.
8. Briefly describe the function of /etc, /dev, /lib, and /usr/lib directories.
/etc holds configuration files. /dev contains device files for hardware. /lib stores shared libraries
essential for system binaries. /usr/lib holds additional libraries used by applications. These
directories support core and user-level system functionality.
9. What are the contents and usage of /usr/include, /usr/share/man, /var, and /tmp?
/usr/include contains header files for C programs. /usr/share/man stores manual pages for UNIX
commands. /var holds variable data like logs and mail spools. /tmp is used for temporary file
storage, often cleared on reboot, allowing programs to store temporary data.
10. Describe how HOME variable and cd command simplify user navigation.
The HOME variable stores the path to a user's home directory. Using cd without arguments, or
cd ~, brings the user back to their home directory. This functionality makes navigation more
efficient by eliminating the need to type the full home directory path repeatedly.


Chapter 03
Ordinary File Handling


3.1 Ordinary File Handling

File handling is a fundamental aspect of working with the UNIX operating system. In UNIX, everything is treated
as a file — whether it is a text file, a directory, a device, or a program. Among the different types of files, ordinary
files are the most commonly used. These are regular files that store data in a structured or unstructured format,
such as plain text, source code, binary executables, or logs.

Ordinary file handling refers to the set of operations that users and programs perform on regular files, including
creating, reading, writing, modifying, copying, renaming, and deleting files. UNIX provides powerful command-
line utilities (like cat, cp, mv, rm, more, less, touch, etc.) and system calls (such as open(), read(), write(), and
close()) to perform these operations.

Performing Basic File Operations in UNIX


File handling is a crucial aspect of UNIX systems, as all data is stored in files. The UNIX operating system
provides several command-line utilities to manipulate files efficiently. These basic file operations include
creating, copying, deleting, moving, and comparing files.

Creating and Viewing Files Using cat


The cat (concatenate) command is one of the most frequently used commands for handling text files. It
can be used to:
• Create a new file.
• Display the contents of a file.
• Append content to an existing file.
• Combine multiple files.
Creating a File:
cat > filename

• This command allows the user to create a new file named filename.
• The system enters input mode, and the user can type the content of the file.
• To save the file, press Ctrl + D.
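In a script, where interactive typing is not possible, a here-document achieves the same effect as cat > filename; the lines between the EOF markers become the file's content (notes.txt is a placeholder name):

```shell
# Create a file non-interactively; input ends at the closing EOF marker
cat > notes.txt << 'EOF'
first line
second line
EOF

cat notes.txt
```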
Displaying the Contents of a File:
cat filename

• This prints the contents of the file on the screen.


Concatenating Multiple Files:
cat file1 file2 > mergedfile

• This combines file1 and file2 into a new file named mergedfile.

Appending Content to a File:
cat >> filename

• This allows the user to add content to an existing file.

Copying Files Using cp


The cp (copy) command is used to duplicate files or directories.
Syntax:
cp source_file destination_file

Example:
cp file1 file2

• This copies the contents of file1 into file2. If file2 exists, it will be overwritten.
Copying a Directory:
cp -r dir1 dir2

• The -r option (recursive) ensures that all files inside dir1 are copied to dir2.
Preserving File Attributes:
cp -p file1 file2

• This preserves file permissions, ownership, and timestamps.

Removing Files Using rm


The rm (remove) command is used to delete files or directories.
Syntax:
rm filename

Example:
rm file1

• This deletes file1.


Removing Multiple Files:
rm file1 file2 file3

Removing a Directory and Its Contents:


rm -r directory_name

• The -r (recursive) option ensures that all files and subdirectories inside the directory are
removed.
Forcing Deletion Without Confirmation:

rm -rf directory_name

• The -f (force) option ensures that no prompts appear before deletion.

Moving and Renaming Files Using mv


The mv (move) command is used to move or rename files.
Syntax:
mv source destination

Renaming a File:
mv oldname newname

• This renames oldname to newname.


Moving a File to Another Directory:
mv file1 /home/user/Documents/

• This moves file1 to the specified directory.


Moving Multiple Files:
mv file1 file2 directory_name/

• This moves file1 and file2 into directory_name.

Counting Words, Lines, and Characters Using wc


The wc (word count) command is used to count the number of lines, words, and characters in a file.
Syntax:
wc filename

Example:
wc file1

• The output consists of three numbers:


o The first number represents the line count.
o The second represents the word count.
o The third represents the character count.
Counting Only Lines:
wc -l filename

Counting Only Words:


wc -w filename

Counting Only Characters:
wc -c filename

Note: strictly speaking, -c counts bytes; use wc -m to count characters in multibyte locales.
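Because wc also reads standard input, it combines naturally with other commands in a pipeline. A small sketch (words.txt is a made-up file):

```shell
# Two lines, three words
printf 'one two\nthree\n' > words.txt

wc -l words.txt    # line count
wc -w words.txt    # word count

# Counting via a pipe: how many entries does ls print?
ls | wc -l
```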

Paging Output (more)


The more command is used to view the content of a file one page at a time. It is especially useful
when dealing with large files where the content cannot fit on a single screen.
Syntax:
more filename

Features:
• Press Enter to move one line forward.
• Press Space to move one screen/page forward.
• Press q to quit.
Example:
more /etc/passwd

This command displays the contents of the /etc/passwd file one page at a time.

Printing a File (lp)


The lp command is used to send a file to the printer for printing. It is part of the CUPS (Common UNIX
Printing System).
Syntax:
lp [options] filename

Common Options:
• -n number: Print multiple copies.
• -d printername: Specify a particular printer.
Example:
lp document.txt       # sends document.txt to the default printer

lp -n 2 report.txt    # prints two copies of report.txt

Knowing File Type (file)


The file command is used to determine the type of a file — whether it's a text file, binary file,
directory, script, or something else.
Syntax:
file filename

Example:

• file myscript.sh

Output might be:


myscript.sh: Bourne-Again shell script, ASCII text executable

• file image.png

Output:
image.png: PNG image data, 800 x 600, 8-bit/color RGB

Comparing Files (cmp)


The cmp command is used to compare two files byte by byte and reports the first mismatch (if any).
Syntax:
cmp file1 file2

Example:
cmp file1.txt file2.txt

If both files are identical: no output

If there is a difference:
file1.txt file2.txt differ: byte 10, line 2

Best for comparing binary files or checking if two files are exactly the same.
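Because cmp sets its exit status (0 when the files match, 1 when they differ), the -s (silent) option makes it convenient inside shell scripts. A sketch with throwaway files:

```shell
printf 'abc\n' > a.txt
printf 'abc\n' > b.txt
printf 'abd\n' > c.txt

# -s suppresses output; only the exit status reports the result
if cmp -s a.txt b.txt; then
    echo "a.txt and b.txt are identical"
fi

if ! cmp -s a.txt c.txt; then
    echo "a.txt and c.txt differ"
fi
```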

Finding Common Between Two Files (comm)


The comm command is used to compare two sorted files line by line and display:
1. Lines only in the first file,
2. Lines only in the second file,
3. Lines common to both.
Syntax:
comm file1 file2

Note: Both files must be sorted beforehand.


Example:
sort file1.txt -o file1.txt
sort file2.txt -o file2.txt
comm file1.txt file2.txt

Sample Output:
apple
        banana
                cherry

This output means:

• apple (column 1, no indentation) appears only in file1.txt
• banana (column 2, indented by one tab) appears only in file2.txt
• cherry (column 3, indented by two tabs) is common to both files
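comm can also suppress individual columns with -1, -2 and -3, which is often easier to use in scripts than parsing the indented three-column output:

```shell
# Two pre-sorted sample files
printf 'apple\ncherry\n' > f1.txt
printf 'banana\ncherry\n' > f2.txt

comm f1.txt f2.txt        # full three-column output

# Suppress columns 1 and 2: show only lines common to both files
comm -12 f1.txt f2.txt

# Suppress columns 2 and 3: show only lines unique to the first file
comm -23 f1.txt f2.txt
```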

Displaying File Differences (diff)


The diff command shows the line-by-line differences between two text files. It is useful for
programmers to compare source code, configuration files, etc.
Syntax:
diff file1 file2

Common Symbols in Output:


• a – Add
• d – Delete
• c – Change
Example:

Suppose file1.txt contains:

Hello
Welcome
Thank you

and file2.txt contains:

Hello
Welcome back
Thank you

Then:
diff file1.txt file2.txt

Output:
2c2
< Welcome
---
> Welcome back

Explanation:
• Line 2 in file1.txt ("Welcome") was changed (c) to "Welcome back" in file2.txt.
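The same comparison can be shown in the -u (unified) format, which is what patch and most version-control tools expect; removed lines are prefixed with - and added lines with +:

```shell
# Recreate the two sample files from the example above
printf 'Hello\nWelcome\nThank you\n' > file1.txt
printf 'Hello\nWelcome back\nThank you\n' > file2.txt

# Unified diff: context lines start with a space, changes with - or +
diff -u file1.txt file2.txt
```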

Creating Archive File (tar)


The tar command is used to create, maintain, modify, or extract archive files, usually with .tar
extension. It combines multiple files into a single archive.
Syntax:
tar -cvf archive_name.tar file1 file2 ...

Options:
• -c : Create an archive
• -v : Verbose (show progress)
• -f : Filename of archive
Example:
tar -cvf myarchive.tar file1.txt file2.txt

This creates myarchive.tar containing file1.txt and file2.txt.

Compress File (gzip)


The gzip command is used to compress a file, reducing its size. It replaces the original file with a .gz file.
Syntax:
gzip filename

Example:
gzip data.txt

This compresses data.txt and creates data.txt.gz.


Uncompress File (gunzip)


The gunzip command is used to decompress .gz files created by gzip.
Syntax:
gunzip filename.gz

Example:
gunzip data.txt.gz

This restores the original data.txt.

Archive File (zip)


The zip command is used to create a compressed ZIP archive, combining one or more files into a .zip
file.
Syntax:
zip archive_name.zip file1 file2 ...

Example:
zip myzipfile.zip file1.txt file2.txt

This creates myzipfile.zip containing file1.txt and file2.txt.

Extract Compressed File (unzip)


The unzip command is used to extract the contents of a ZIP file.
Syntax:
unzip archive_name.zip

Example:
unzip myzipfile.zip

This extracts the contents of myzipfile.zip in the current directory.

Conclusion
File handling in UNIX is a foundational skill for both users and system administrators, enabling efficient
interaction with the file system. The set of commands discussed — including creating and displaying files
(cat), copying (cp), deleting (rm), renaming/moving (mv), and paging output (more) — form the basic
operations for managing file content and organization.

For output management and documentation, commands like lp (printing) and file (file type identification)
are essential. Analyzing file content becomes easier with tools like wc for counting lines, words, and
characters, and comparison tools such as cmp, comm, and diff, which help in identifying similarities and
differences between files.
To manage file storage efficiently, UNIX provides powerful archiving and compression tools. Commands
like tar, gzip, and zip are used to reduce disk usage and simplify file transfers, while gunzip and unzip help
restore the original content when needed.



Review Question
Short Answer Type Questions
1. What is the primary purpose of ordinary file handling in UNIX?
Ans: Ordinary file handling in UNIX provides the fundamental mechanisms for interacting with
regular data files, enabling users and applications to create, read, write, modify, and manage
persistent data stored on the system's storage devices.
2. Explain the basic function of the cat command.
Ans: The cat command is primarily used to display the contents of one or more files to the
standard output. For example, cat myfile.txt will display the text within the 'myfile.txt' file on your
terminal. It can also concatenate files.
3. How do you create an empty file using the cat command?
Ans: You can create an empty file using cat > newfile.txt and pressing Ctrl+D immediately. The > operator redirects cat's standard output to a new file named 'newfile.txt'; since no input is typed, the file is created empty. If the file exists, its contents will be overwritten.
4. What is the function of the cp command? Provide an example.
Ans: The cp command is used to copy files or directories. For instance, cp oldfile.txt newfile.txt
creates a duplicate of 'oldfile.txt' named 'newfile.txt' in the current directory.
5. How can you recursively copy a directory and its contents using cp?
Ans: To recursively copy a directory, including all its subdirectories and files, you use the -r (or --recursive) option with the cp command. For example, cp -r sourcedir destinationdir will copy the entire 'sourcedir' to 'destinationdir'.

6. What is the purpose of the rm command? Exercise caution while using it.
Ans: The rm command is used to remove (delete) files and directories. Be extremely cautious, as
deleted files are often irrecoverable. For example, rm unwanted.txt will permanently delete the
'unwanted.txt' file.
7. Explain how to remove a directory and its contents using rm?
Ans: To remove a directory and all its contents, including subdirectories and files, you need to use
the -r (recursive) and -f (force) options with the rm command: rm -rf directoryname. Use this
command with extreme caution as it bypasses prompts.
8. What are the two primary functions of the mv command?
Ans: The mv command serves two main purposes: renaming files or directories and moving files
or directories to a different location within the file system. For example, mv oldname.txt
newname.txt renames the file, and mv file.txt /home/user/documents/ moves it to the specified
directory.
9. How does the more command help in viewing file content?
Ans: The more command displays the content of a file page by page, allowing users to read
through large files without the entire content scrolling past quickly. You can navigate forward by
pressing the spacebar and quit by pressing 'q'. Example: more largefile.log.
10. What is the basic function of the lp command?
Ans: The lp command is used to send files to the printer for printing. For example, lp
document.pdf would typically send the 'document.pdf' file to the default printer configured on
the system.
11. How can you determine the type of a file in UNIX?
Ans: The file command is used to determine the file type based on its content rather than its
extension. For example, file mydocument might output something like "mydocument: ASCII text"
or "mydocument: ELF 64-bit LSB executable, x86-64".
12. Explain the function of the wc command with its basic options.
Ans: The wc (word count) command counts the number of lines, words, and characters in a file.
Common options include -l (lines), -w (words), and -c (characters). For example, wc -l myfile.txt
will only display the number of lines in 'myfile.txt'.
13. What does the cmp command do? How does it indicate differences?
Ans: The cmp command compares two files byte by byte. If differences are found, it reports the
byte and line number where the first difference occurs. If no differences are found, it displays
nothing. Example: cmp file1.txt file2.txt.
14. What is the purpose of the comm command? What assumption does it make about the input
files?
Ans: The comm command compares two sorted files line by line and outputs three columns: lines
unique to the first file, lines unique to the second file, and lines common to both files. It assumes
that the input files are sorted. Example: comm sorted_file1.txt sorted_file2.txt.
15. How does the diff command differ from the cmp command?
Ans: The diff command also compares two files but, unlike cmp, it provides a line-by-line report
of the differences, indicating what needs to be changed in the first file to make it identical to the
second file. Example: diff original.txt modified.txt.
16. What is the primary use case for the tar command?
Ans: The tar (tape archive) command is primarily used for creating archive files, often referred to
as "tarballs," which bundle multiple files and directories into a single file for easier storage,
backup, or distribution. It doesn't inherently compress the data. Example: tar -cvf myarchive.tar
documents/ images/.
17. Explain the function of the gzip command.
Ans: The gzip command is used to compress files, typically reducing their size. It replaces the
original file with a compressed version having a .gz extension. Example: gzip myfile.txt will create
myfile.txt.gz and remove myfile.txt.
18. How do you decompress a file compressed with gzip?
Ans: To decompress a .gz file, you use the gunzip command. For example, gunzip myfile.txt.gz will
decompress it back to myfile.txt. You can also use gzip -d myfile.txt.gz.
19. What is the purpose of the zip command? How does it differ from tar?
Ans: The zip command is used to create archive files in the .zip format, and it also includes built-in compression. Unlike tar, which primarily archives, zip both archives and compresses files.
Example: zip myarchive.zip documents/ images/.

20. How do you extract files from a .zip archive?
Ans: To extract the contents of a .zip archive, you use the unzip command followed by the name
of the zip file. For example, unzip myarchive.zip will extract the files and directories contained
within 'myarchive.zip' into the current directory.
Descriptive Type Questions
1. Describe the process of creating a backup of a directory containing multiple files and
subdirectories using standard UNIX commands. Include the commands used and explain their
options.
Ans: Creating a backup of a directory involves archiving its contents into a single file. The tar
command is the standard tool for this. To archive a directory named mydata including all its files
and subdirectories, you would use the command: tar -cvf backup.tar mydata/. Here, -c tells tar to
create an archive, -v (verbose) lists the files being processed, and -f backup.tar specifies the name
of the archive file to be created. To also compress this archive to save space, you can pipe the
output of tar to gzip: tar -cvf - mydata/ | gzip > backup.tar.gz. To restore this backup, you would
use gunzip -c backup.tar.gz | tar -xvf -, where -x extracts the files.
2. Explain the differences between the cmp, comm, and diff commands in UNIX. Provide a
scenario where each command would be most useful.
Ans: The cmp, comm, and diff commands are all used for comparing files, but they operate
differently.
o cmp performs a byte-by-byte comparison and is most useful when you need to know if
two binary files are identical or to quickly pinpoint the first differing byte in text files. For
example, verifying if a downloaded file is corrupted.
o comm compares two sorted files line by line, identifying unique and common lines, making
it ideal for finding overlaps or differences in sorted lists, such as comparing two sets of
alphabetically ordered configuration entries.
o diff provides a detailed, line-by-line report of the changes needed to make two text files
identical. This is invaluable for tracking modifications in source code or document versions.

3. Discuss the implications and potential risks associated with using the rm -rf command. Provide
a scenario where it might be necessary and explain the precautions one should take.
Ans: The rm -rf command is a powerful but dangerous tool in UNIX. The -r option makes it
recursive, deleting directories and their contents, while -f forces the deletion without prompting
for confirmation. The primary risk is accidental and irreversible data loss if the command is
executed on the wrong directory or with a typo.
A scenario where it might be considered necessary is when you need to completely remove a
complex directory structure that contains numerous files and subdirectories, and you are
absolutely certain of the target. Precautions are paramount: double-check the target path,
consider using interactive mode (rm -ri) first to review what will be deleted, and ensure you have
recent backups of important data.

4. Describe the process of creating a compressed archive of a directory using the zip command
and then extracting its contents to a new directory.
Ans: To create a compressed archive of a directory named myproject using the zip command,
you would use: zip -r project_backup.zip myproject/.

The -r option ensures that the command recursively includes all files and subdirectories within
myproject. project_backup.zip is the name of the resulting compressed archive.
To extract the contents of this zip file to a new directory, first create the directory: mkdir
extracted_project. Then, use the unzip command with the -d option to specify the destination
directory: unzip project_backup.zip -d extracted_project/. This will create
the myproject directory (if it doesn't exist within the zip) inside extracted_project and extract all
the archived files and subdirectories there.
5. Explain how you can use the cat command in conjunction with redirection (> and >>) to create
and append content to files.
Ans: The cat command, when used with redirection operators, allows you to create and modify
files. The > operator redirects the standard output to a file, overwriting the file if it exists or
creating it if it doesn't.
For example, cat > new_config.txt will wait for you to type input, and upon pressing
Ctrl+D, that input will be saved to 'new_config.txt', overwriting any previous content.
The >> operator appends the standard output to an existing file. If the file doesn't exist, it will be
created. For instance, echo "Additional setting" >> existing_config.txt will add the line "Additional
setting" to the end of 'existing_config.txt' without deleting its original content.
6. Discuss the advantages of using archive and compression tools like tar and gzip (or zip) when
managing files in UNIX.
Ans: Archive and compression tools offer several advantages for file management in UNIX.
Archiving with tar bundles multiple files and directories into a single file, simplifying tasks like
backup, distribution, and organization. It maintains directory structures and file permissions.
Compression tools like gzip and the built-in compression of zip reduce the storage space required
for files, which is crucial for efficient disk usage and faster data transfer over networks. Combining
archiving and compression (e.g., using tar with gzip to create .tar.gz files) provides the benefits of
both: a single manageable file that also consumes less disk space, making backups and transfers
more efficient.
7. Explain how you can use the file command to identify different types of files and why this can
be important in a UNIX environment.
Ans: The file command analyzes the content of a file to determine its type, rather than relying
solely on the filename extension. For example, a file named document.txt might actually be a PDF
document, and file document.txt would reveal this. This is important in UNIX because the
operating system heavily relies on the actual content of a file to determine how to handle it.
Executable files, text files, archives, and various data formats are identified by their internal
structure. Knowing the true file type is crucial for executing programs correctly, choosing the right
application to open a file, and ensuring data integrity.

8. Describe a scenario where you would use the wc command with different options to analyze a
log file.
Ans: Imagine you have a web server log file named access.log. To get a quick overview of the
activity, you might use wc -l access.log to count the number of lines, which often corresponds to
the number of requests. If you want to analyze the volume of data transferred (assuming each
entry has a size in bytes), you might use wc -c access.log to get the total number of characters
(bytes). To see the number of individual entries or requests (assuming each line contains multiple
words), wc -w access.log would give you the total word count. Combining these, wc -l -w -c
access.log provides a comprehensive summary of the log file's size and content.
9. Explain how the more command is useful for viewing large files compared to cat. What are
some navigation options within more?
Ans: The more command is significantly more efficient for viewing large files compared to cat
because it displays the content page by page, preventing the entire file from scrolling rapidly off
the screen. cat simply outputs the entire file to the standard output, which can be overwhelming
and resource-intensive for large files. Within more, you can navigate using several options:
pressing the spacebar advances to the next page, the Enter key moves forward one line, 'q' quits
the viewer, '/' followed by a pattern allows you to search for that pattern within the file, and 'h'
displays a help screen with more navigation commands.
10. Discuss the importance of understanding ordinary file handling commands in UNIX for system
administration and software development.
Ans: A solid understanding of ordinary file handling commands is fundamental for both system
administrators and software developers in a UNIX environment. For system administrators, these
commands are essential for daily tasks such as managing user data, creating backups, deploying
applications, analyzing logs, and system maintenance. They need to be proficient in creating,
modifying, moving, and archiving files, as well as understanding file permissions and types. For
software developers, these commands are crucial for interacting with the file system, reading and
writing configuration files, processing data files, and often form the basis of scripts used for
automation, build processes, and deployment. Without a strong grasp of these basic commands,
both roles would be significantly hindered in their ability to effectively manage and develop
within the UNIX ecosystem.


Chapter 04
File attributes


4.1 Basic File Attributes

In UNIX, every file and directory is associated with a set of attributes that define its properties and control how it
can be accessed or manipulated. These attributes provide essential information such as file type, permissions,
ownership, size, timestamps, and the number of links. Understanding file attributes is crucial for effective file
management and ensuring system security.

File attributes determine who can read, write, or execute a file, and which user or group owns it. System
administrators and users frequently use commands like ls -l and stat to view these attributes. For example,
permissions prevent unauthorized users from modifying important files, and timestamps help track when files
were last modified or accessed.

ls Command
The ls command is one of the most commonly used commands in UNIX and Linux. It is used to list the contents
of a directory, showing files and subdirectories.

The primary command for viewing file and directory attributes is ls. Using different options with ls reveals
various details about these entities. The most common and informative option is -l (lowercase 'L'), which
provides a long listing format.

When you run ls -l, you'll see a line for each file or directory, containing several attributes:

• File Type and Permissions: The first field (e.g., -rw-r--r-- or drwxr-xr-x) indicates the file type (e.g., - for
regular file, d for directory, l for symbolic link) and the read, write, and execute permissions for the
owner, group, and others.

• Number of Hard Links: The second field shows the number of hard links to the file. For a directory, it
indicates the number of subdirectories plus two (for '.' and '..').

• Owner: The third field displays the username of the file's owner.

• Group: The fourth field shows the group name associated with the file.

• Size: The fifth field indicates the size of the file in bytes. For directories, this is the size of the directory
metadata, not the total size of its contents.

• Last Modification Time: The sixth and seventh fields show the date and time when the file's content was
last modified.

• Filename: The last field is the name of the file or directory.

Other useful options with ls include:

• -a: Shows all files, including hidden files (those starting with a '.').

• -h: Displays file sizes in a human-readable format (e.g., 1K, 234M, 2G).
• -t: Sorts the listing by modification time (newest first).

• -r: Reverses the order of the listing.

These attributes provide essential information about the ownership, permissions, size, and modification history
of files and directories, which are crucial for managing and understanding the UNIX file system.

Managing File Security Using Permissions in UNIX


File security in UNIX is managed through file attributes such as permissions, ownership, and umask.
These attributes control access to files and directories, ensuring that only authorized users can read,
modify, or execute them.

Understanding File Permissions


Each file and directory in UNIX has specific permissions that determine which users can perform
actions on them. The three types of permissions are:

Symbol Permission Numeric Value Description

r Read 4 Allows reading the file or listing the directory contents

w Write 2 Allows modifying or deleting the file or directory

x Execute 1 Allows running the file as a program or accessing a directory

Each file has three sets of permissions:


1. Owner (User) – The person who created the file.
2. Group – A collection of users who share file access.
3. Others (World) – Everyone else on the system.
Checking File Permissions
To view file permissions, use the ls -l command:
ls -l filename

Example Output:
-rwxr--r-- 1 user group 1234 Mar 18 12:00 myfile.txt

Breaking Down the Output:


-rwxr--r-- 1 user group 1234 Mar 18 12:00 myfile.txt

• -rwxr--r-- → Permissions
o - → File type (- for file, d for directory)
o rwx → Owner (User) has read, write, and execute permissions
o r-- → Group has only read permission

o r-- → Others have only read permission
• 1 → Hard link count
• user → Owner of the file
• group → Group associated with the file
• 1234 → File size in bytes
• Mar 18 12:00 → Last modified date
• myfile.txt → File name

Changing File Permissions Using chmod


The chmod (change mode) command modifies file permissions. It can be used in symbolic mode or
numeric mode.

Using Symbolic Mode or Relative Permission


chmod [who][operator][permission] filename

• [who]: u (user), g (group), o (others), a (all)


• [operator]: + (add), - (remove), = (set exact permission)
• [permission]: r, w, x
Examples:
1. Grant execute permission to the user:
chmod u+x myfile.txt

2. Remove write permission from the group:


chmod g-w myfile.txt

3. Give read and write permissions to all:


chmod a+rw myfile.txt

Using Numeric Mode or Absolute Permission

Numeric values represent permission sets.


Permission Value
r (Read) 4
w (Write) 2
x (Execute) 1
Each permission set (User, Group, Others) is summed to get a three-digit number.
Examples:

1. Give full permissions to the user, read and execute to the group, and only execute to
others:
chmod 751 myfile.txt

o 7 → Owner (4+2+1 → Read, Write, Execute)


o 5 → Group (4+0+1 → Read, Execute)
o 1 → Others (0+0+1 → Execute)
2. Give read and write permissions to everyone:
chmod 666 myfile.txt

o 6 → Owner (4+2 → Read, Write)
o 6 → Group (4+2 → Read, Write)
o 6 → Others (4+2 → Read, Write)
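The resulting mode can be read back in octal with stat; the -c '%a' format used below is the GNU coreutils form (BSD stat uses -f '%Lp' instead), and the file name is illustrative:

```shell
# Illustrative check: set an absolute mode, then read it back
cd "$(mktemp -d)"
touch myfile.txt
chmod 751 myfile.txt
ls -l myfile.txt                 # permission string: -rwxr-x--x
stat -c '%a' myfile.txt          # prints: 751
chmod 666 myfile.txt
stat -c '%a' myfile.txt          # prints: 666
```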

Changing File Ownership Using chown


The chown (change owner) command is used to change the owner or group of a file.
chown new_owner filename

• Example:
chown alice myfile.txt

o This changes the owner of myfile.txt to alice.


To change both the owner and group:
chown alice:developers myfile.txt

• The owner becomes alice, and the group is set to developers.


To change only the group:
chown :staff myfile.txt

Changing Group Ownership Using chgrp


The chgrp (change group) command modifies the group of a file.
Syntax:
chgrp newgroup filename

• Example:
chgrp staff myfile.txt

o This assigns staff as the new group.

Default Permissions Using umask


The umask (user mask) command controls the default file permissions for newly created files and
directories.
Understanding umask Values
When a file is created, it gets default permissions:
• Files: 666 (rw-rw-rw-) by default.
• Directories: 777 (rwxrwxrwx) by default.
The umask value subtracts permissions from the default.
Viewing the Current umask:
umask

Setting a New umask:


umask 022

• This masks out 022 (----w--w-, i.e., the write bits for group and others) from the default permissions:


o New file permissions: 644 (rw-r--r--)
o New directory permissions: 755 (rwxr-xr-x)
Common umask Values:

umask File Permissions Directory Permissions

022 644 (rw-r--r--) 755 (rwxr-xr-x)

027 640 (rw-r-----) 750 (rwxr-x---)

077 600 (rw-------) 700 (rwx------)

To permanently change umask, add it to the shell profile (~/.bashrc or ~/.profile).
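The subtraction can be observed directly in a shell session; the example below assumes GNU stat and uses illustrative names:

```shell
# Illustrative session: umask 022 masks the write bits for group/others
cd "$(mktemp -d)"
umask 022
touch newfile                    # file default 666 minus 022 -> 644
mkdir newdir                     # directory default 777 minus 022 -> 755
stat -c '%a %n' newfile newdir   # prints: 644 newfile, then 755 newdir
```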

Special Permissions
Beyond standard permissions, UNIX provides special permission bits:
1. Set User ID (SUID)
• Allows a program to run as the file owner, not the user executing it.
• Set using:
chmod u+s filename

o Example: The passwd command uses SUID to modify system files.
2. Set Group ID (SGID)
• Ensures files created in a directory inherit the group ownership.
• Set using:
chmod g+s directory

3. Sticky Bit
• Prevents deletion of files in a directory by users other than the owner.
• Commonly used in /tmp:
chmod +t /tmp
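These bits show up as a leading fourth digit in the octal mode (4 for SUID, 2 for SGID, 1 for sticky). A sketch using scratch files (names illustrative, GNU stat assumed):

```shell
# Illustrative session: special bits appear as a leading octal digit
cd "$(mktemp -d)"
touch prog   && chmod 644 prog
chmod u+s prog                   # SUID
stat -c '%a' prog                # prints: 4644
mkdir shared && chmod 755 shared
chmod g+s shared                 # SGID on a directory
mkdir public && chmod 755 public
chmod +t public                  # sticky bit
stat -c '%a' public              # prints: 1755
ls -ld public                    # permission string ends in t: drwxr-xr-t
```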

Conclusion
Managing file security in UNIX is essential for protecting data and ensuring only authorized access.
Understanding permissions (chmod), ownership (chown, chgrp), and default permissions (umask)
helps in securing files and directories efficiently. Advanced techniques like SUID, SGID, and Sticky Bit
provide further control over access rights.


4.2 Hard and Soft Links, Inodes

Understanding Linking Mechanisms and Inodes in UNIX


Linking mechanisms in UNIX provide multiple ways to reference a file without duplicating its data. These
include hard links and soft (symbolic) links, both of which rely on inodes to manage file metadata.

1. Understanding Inodes
An inode (Index Node) is a data structure used by the UNIX file system to store metadata about a file.
Each file has a unique inode number that acts as an identifier.
1.1 Information Stored in an Inode
An inode contains:
• File type (regular file, directory, symbolic link, etc.)
• Permissions (read, write, execute)
• Owner (User ID - UID)
• Group (Group ID - GID)
• File size
• Timestamps (creation, modification, access)
• Number of hard links
• Pointers to data blocks (location of file content on disk)

1.2 Checking Inode Information


To display the inode number of a file, use:
ls -i filename

Example:
ls -i myfile.txt

123456 myfile.txt
This output shows that myfile.txt has inode number 123456.
To view detailed inode information:
stat myfile.txt

2. Hard Links
A hard link is another name for a file that shares the same inode number. It points directly to the
original file's inode and data blocks.

2.1 Features of Hard Links
• Same inode number: Hard links share the same inode as the original file.
• Data is shared: Modifying one link modifies all links.
• Deletion doesn’t remove data: The file remains accessible until all hard links are deleted.
• Only works within the same filesystem: Hard links cannot span different file systems or
partitions.
• Cannot link to directories: To avoid creating loops in the directory tree, hard links to directories are not allowed.
2.2 Creating a Hard Link
Use the ln command:
ln original_file hard_link

Example:
ln myfile.txt myfile_hardlink.txt

Now, myfile.txt and myfile_hardlink.txt share the same inode.


2.3 Verifying Hard Links
Check the inode number:
ls -li myfile.txt myfile_hardlink.txt

Example output:
123456 -rw-r--r-- 2 user group 1024 Mar 18 12:00 myfile.txt

123456 -rw-r--r-- 2 user group 1024 Mar 18 12:00 myfile_hardlink.txt

Both files have the same inode number (123456), confirming they are hard links.
2.4 Deleting a Hard Link
If one file is deleted:
rm myfile.txt

The data is still accessible via myfile_hardlink.txt. Only when all hard links are removed does the data
get erased.
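The hard-link count (second field of ls -l, or stat -c '%h' on GNU systems) tracks this behavior; a short session with illustrative names:

```shell
# Illustrative session: the link count rises and falls with hard links
cd "$(mktemp -d)"
echo "hello" > myfile.txt
ln myfile.txt myfile_hardlink.txt
stat -c '%h' myfile.txt          # prints: 2  (two names, one inode)
rm myfile.txt
stat -c '%h' myfile_hardlink.txt # prints: 1  (data still reachable)
cat myfile_hardlink.txt          # prints: hello
```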

3. Soft (Symbolic) Links


A soft link (or symbolic link) is a special file that points to the original file’s path, not its inode.
3.1 Features of Soft Links
• Different inode number: A soft link has a unique inode.
• Points to the file path: If the original file is moved or deleted, the soft link becomes broken.

• Can span filesystems: Soft links can point to files on different partitions or even remote
systems.
• Can link to directories: Unlike hard links, soft links can reference directories.
3.2 Creating a Soft Link
Use the ln -s command:
ln -s original_file soft_link

Example:
ln -s myfile.txt myfile_symlink.txt

Now, myfile_symlink.txt points to myfile.txt.


3.3 Verifying Soft Links
Check with ls -l:
ls -li myfile.txt myfile_symlink.txt

Example output:
123456 -rw-r--r-- 1 user group 1024 Mar 18 12:00 myfile.txt

234567 lrwxrwxrwx 1 user group 10 Mar 18 12:05 myfile_symlink.txt -> myfile.txt

• The inode number is different for the soft link (234567).


• The "l" at the start of lrwxrwxrwx indicates it’s a symbolic link.

• myfile_symlink.txt -> myfile.txt shows the link points to myfile.txt.


3.4 Deleting a Soft Link
If the original file is deleted, the soft link becomes broken:
rm myfile.txt

Now, myfile_symlink.txt still exists but is a dangling link. Trying to read through it fails:
cat: myfile_symlink.txt: No such file or directory
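A dangling link can be detected without trying to open it: the shell's -L test sees the link file itself, while -e follows it to the (now missing) target. File names below are illustrative:

```shell
# Illustrative session: detecting a dangling symbolic link
cd "$(mktemp -d)"
echo "data" > target.txt
ln -s target.txt link.txt
rm target.txt                    # link.txt now dangles
[ -L link.txt ] && echo "the link file itself still exists"
[ ! -e link.txt ] && echo "but following it finds nothing"
```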

4. Differences Between Hard Links and Soft Links


Feature Hard Link Soft Link

Inode Number Same as original file Different from original file

Points to The inode (actual file data) The filename (path)

Effect if Original File is Deleted File remains intact Link becomes broken

Cross Filesystems No Yes


Works with Directories No Yes

Storage Space Shares existing file’s storage Requires additional space for the link

5. Practical Applications of Links


• Hard links are useful for:
o Maintaining multiple references to a file without duplication.
o Ensuring file availability even if one name is deleted.
• Soft links are useful for:
o Creating shortcuts to frequently accessed files or directories.
o Linking files across different filesystems.
o Version control (pointing to different versions of a file).
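The version-control idiom from the list above can be sketched as follows (directory names are illustrative; -f replaces an existing link and -n treats the link itself as a file rather than descending into it):

```shell
# Illustrative session: a "current" symlink selecting the active version
cd "$(mktemp -d)"
mkdir app-1.0 app-2.0
ln -s app-2.0 current            # current -> app-2.0
readlink current                 # prints: app-2.0
ln -sfn app-1.0 current          # repoint the link in place
readlink current                 # prints: app-1.0
```

Scripts and users can always refer to "current" while the administrator switches versions behind the scenes.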

6. Listing of Modification and Access Time in UNIX


In UNIX, each file and directory maintains a set of timestamps that record when certain actions were
performed. These are essential for file system management, backup operations, and auditing. The most
commonly accessed timestamps are:
Types of Timestamps

• Access Time (atime)


o Records the last time the file was read or accessed.
o Changes when a file is viewed or opened (e.g., using cat, less).

• Modification Time (mtime)


o Records the last time the file's contents were modified.
o Changes when the file’s actual data is altered (e.g., using an editor or echo >>).

• Change Time (ctime)


o Records the last time file metadata was changed, such as permissions or ownership.
o It does not refer to content modification, but attribute changes.
Commands to View Timestamps
Using ls -l

• Shows the modification time (mtime) only.


ls -l file.txt

Output:
-rw-r--r-- 1 user user 2048 May 19 11:00 file.txt

Using stat

• Displays all three timestamps: atime, mtime, and ctime.


stat file.txt

Sample output:
File: file.txt
Size: 2048 Blocks: 8 IO Block: 4096 regular file
Access: 2025-05-19 10:00:00
Modify: 2025-05-18 21:30:00
Change: 2025-05-18 21:35:00

7. Timestamp Changing with touch Command


The touch command is used to:
• Create empty files
• Update the access and modification timestamps of a file
Syntax:
touch [options] filename

Examples:
a. Create a new empty file:
touch newfile.txt

b. Update the timestamp of an existing file:


touch existing.txt -- This updates atime and mtime to the current time.

c. Change to a specific date and time:


touch -t 202405182359 file.txt -- Sets time to May 18, 2024, 23:59.

d. Set the timestamp same as another file:


touch -r ref.txt target.txt -- This sets the timestamp of target.txt to match
ref.txt.
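Whether a timestamp really changed can be checked with stat (GNU format shown) or the shell's -nt comparison; file names are illustrative:

```shell
# Illustrative session: verifying timestamp changes made by touch
cd "$(mktemp -d)"
touch -t 202405182359 file.txt
stat -c '%y' file.txt            # modify time starts: 2024-05-18 23:59
touch ref.txt                    # ref.txt gets the current time
touch -r ref.txt file.txt        # copy ref.txt's timestamps onto file.txt
[ file.txt -nt ref.txt ] || echo "file.txt is not newer than ref.txt"
```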

8. File Locating with find Command


The find command is used to search for files and directories in a directory hierarchy based on various
conditions like name, size, time, permission, etc.
Basic Syntax:
find [starting_path] [expression]

Common Examples:
a. Find all .txt files in current directory:
find . -name "*.txt"

b. Find files modified in the last 1 day:


find . -mtime -1

c. Find files accessed more than 10 days ago:


find . -atime +10

d. Find empty files:


find . -type f -empty

e. Find files and execute a command (e.g., delete):


find . -name "*.log" -exec rm {} \;

f. Find files by permission (e.g., 777):


find /home/user -perm 0777
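find expressions combine with an implicit AND, and -o provides OR; the illustrative session below mixes type, name, and grouping tests:

```shell
# Illustrative session: combining find tests (implicit AND, -o for OR)
cd "$(mktemp -d)"
touch notes.txt report.txt image.png
mkdir logs && touch logs/app.log
find . -type f -name "*.txt"                           # the two .txt files
find . -type f \( -name "*.txt" -o -name "*.log" \)    # .txt or .log files
find . -type f -name "*.txt" -mtime -1                 # .txt modified today
```

The escaped parentheses group the OR so it binds before the implicit AND with -type f.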

Conclusion
Understanding inodes and linking mechanisms is essential for efficient file management in UNIX.
• Hard links create another reference to a file using the same inode.
• Soft links create a new file that points to the original file's path.
Each has unique advantages and use cases, making them powerful tools in UNIX file
management.
• The stat and ls -l commands are useful to inspect file access, modification, and metadata
change times.
• The touch command is versatile for creating empty files and modifying file timestamps.
• The find command is a powerful utility to search and manage files efficiently based on various
criteria such as name, type, access/modification time, and permissions.


Review Questions
Short Answer Type Questions

1. What are file attributes in Unix?


Ans: File attributes in Unix are metadata that describe and control access to a file, including permissions,
ownership, size, and timestamps, but excluding the actual file content.

2. Name three file attributes in Unix.


Ans: Three file attributes in Unix are: permissions (read, write, execute), owner (user ID), and group
(group ID).

3. What is the purpose of file permissions?


Ans: File permissions control who can access a file and what actions they can perform (read, write,
execute), ensuring system security and data protection.

4. What are the three categories of users in Unix file permissions?


Ans: The three categories of users are: user (owner), group, and others.

5. What are the three basic file permissions in Unix?


Ans: The three basic file permissions are: read (r), write (w), and execute (x).

6. What does the 'r' permission signify?


Ans: The 'r' permission signifies read permission, allowing a user to view the contents of a file or list the
files in a directory.

7. What does the 'w' permission signify?


Ans: The 'w' permission signifies write permission, allowing a user to modify the contents of a file or
create/delete files within a directory.

8. What does the 'x' permission signify?


Ans: The 'x' permission signifies execute permission, allowing a user to run a file as a program or enter a
directory (using cd).

9. What is the chmod command used for?


Ans: The chmod command is used to change the permissions of a file or directory in Unix systems. For
example: chmod 755 myfile.txt.

10. Explain absolute permissions.


Ans: Absolute permissions use octal numbers (0-7) to represent file permissions, where each digit
specifies the permissions for user, group, and others (e.g., 755).

11. Explain relative permissions.


Ans: Relative permissions use symbolic characters (+, -, =) and letters (r, w, x, u, g, o) to add, remove, or
set specific permissions relative to the existing permissions.

12. What is the chown command used for?


Ans: The `chown` command is used to change the owner of a file or directory. For example: `chown
user1 myfile.txt`.

13. What is the chgrp command used for?


Ans: The `chgrp` command is used to change the group ownership of a file or directory. For example:
`chgrp group1 myfile.txt`.

14. What is a Unix inode?
Ans: An inode (index node) is a data structure in Unix that stores all the attributes of a file or directory
(permissions, ownership, size, etc.), except for its name and actual data.

15. What is a hard link?


Ans: A hard link is a directory entry that points to the same inode as another file. It acts as an additional
name for the same underlying data.

16. What is a soft link (symbolic link)?


Ans: A soft link (or symbolic link) is a special type of file that contains a pointer to another file or
directory. It's a shortcut to the original file.

17. How does file attribute significance differ for directories?


Ans: For directories, read permission allows listing contents, write allows creating/deleting files, and
execute allows entering the directory (using cd).

18. What are the default permissions for a new file?


Ans: The default permissions for a new file are typically 666 (rw-rw-rw-), which is then modified by the
umask.

19. What are the default permissions for a new directory?


Ans: The default permissions for a new directory are typically 777 (rwxrwxrwx), which is then modified
by the umask.

20. What is umask?


Ans: `umask` is a command and a setting that determines the default permissions for newly created files
and directories by masking (removing) bits from the default permissions.

21. What command lists modification and access time?


Ans: The ls -l command shows the last modification time, and ls -lu shows the last access time. The stat
command provides more detailed time information.

22. What is the touch command used for?


Ans: The touch command is used to change file timestamps (access and modification times) or to create
an empty file.

23. What is the find command used for?


Ans: The find command is used to locate files and directories based on various criteria, such as name,
size, type, and permissions.

24. What is the significance of the sticky bit?


Ans: The sticky bit, when set on a directory, restricts file deletion within that directory to the file's
owner, the directory's owner, or the superuser, enhancing security in shared directories like /tmp.

25. What is the setuid attribute?


Ans: The setuid (Set User ID) attribute, when set on an executable file, allows the file to be executed
with the privileges of the file's owner, not the user who runs it.

26. What is the setgid attribute?


Ans: The setgid (Set Group ID) attribute, when set on an executable file, causes it to run with the privileges of the file's group; when set on a directory, new files created inside it inherit the directory's group.

27. What is the output of ls -l command?

Ans: The ls -l command displays a long listing of files and directories, including permissions, number of
links, owner, group, size, modification time, and name.

28. What is the meaning of the first character in the output of ls -l?
Ans: The first character in the ls -l output indicates the file type: "-" for regular file, "d" for directory, "l"
for symbolic link, "c" for character special file, and "b" for block special file.

29. How do you represent "no permissions" in octal?


Ans: "No permissions" in octal is represented by 0 (zero).

30. How do you represent "all permissions" in octal?


Ans: "All permissions" (read, write, and execute for user, group, and others) in octal is represented by
777.

Descriptive Type Questions

1. Explain file attributes in Unix, listing and briefly describing at least five common attributes.
Ans: In Unix, file attributes are metadata that describe and control access to a file. They are stored in the
file's inode and provide essential information beyond the file's content. Here are five common
attributes:

o Permissions: These determine who can access the file and how (read, write, execute).
Represented as three sets of three characters (e.g., rwxr-xr--).

o Owner: The user ID of the file's owner, who typically has the most control over the file. The
chown command is used to change ownership.

o Group: The group ID associated with the file, allowing group members to share access based on
group permissions. The chgrp command changes group ownership.

o Size: The amount of data the file contains, usually measured in bytes. The ls -l command displays
the file size.

o Timestamps: Include the last access time (when the file was last read), last modification time
(when the file's content was changed), and last status change time (when the inode was last
modified). The stat and touch commands are relevant here.

2. Describe file ownership and its significance in Unix.


Ans: File ownership in Unix refers to the user who created the file or to whom ownership has been
assigned using the chown command. Every file has an owner, and this ownership is significant because:

o Access Control: The owner has primary control over the file's permissions, determining who else
can read, write, or execute it.

o Privileges: The owner can modify the file's attributes, including permissions and group
ownership.

o Accountability: Ownership provides a way to track which user is responsible for a particular file,
aiding in system administration and security.

For example, if user alice owns a file data.txt, she can use chmod to restrict access to only herself,
ensuring her data's privacy.

3. Explain file permissions in Unix, including read, write, and execute permissions, and how they apply to
both files and directories.
Ans: File permissions in Unix control access to files and directories, using three basic permissions:

o Read (r):

▪ For files: Allows a user to view the file's contents (e.g., using cat).

▪ For directories: Allows a user to list the files and subdirectories within it (e.g., using ls).

o Write (w):

▪ For files: Allows a user to modify the file's contents (e.g., using a text editor).

▪ For directories: Allows a user to create new files or delete existing ones within the
directory.

o Execute (x):

▪ For files: Allows a user to run the file as a program or script.

▪ For directories: Allows a user to enter the directory using the cd command, granting
access to its contents and subdirectories.

Permissions are applied to three categories: user (owner), group, and others. For instance, chmod 755
mydir gives the owner full permissions, while the group and others can only read and execute the
directory.

4. Describe the process of changing file permissions in Unix, explaining both relative and absolute
permission methods with examples.
Ans: The chmod command changes file permissions.

o Relative Permissions: Use symbolic operators (+, -, =) and letters (r, w, x, u, g, o).

▪ chmod u+x script.sh: Adds execute permission for the owner.

▪ chmod g-w data.txt: Removes write permission for the group.

▪ chmod o=r myfile.txt: Sets others' permissions to read-only.

o Absolute Permissions: Use octal numbers. Each digit represents permissions for user, group, and
others.

▪ chmod 755 script.sh: Owner: rwx (7), Group: r-x (5), Others: r-x (5).

▪ chmod 644 data.txt: Owner: rw- (6), Group: r-- (4), Others: r-- (4).

▪ chmod 777 /var/www/html: All have full permissions.

5. Explain how to change file ownership and group ownership in Unix, providing examples using the
chown and chgrp commands.
Ans: Changing File Ownership (chown): The chown command changes the user who owns a file. You
must be the superuser (root) or the current owner of the file to change its ownership.

▪ chown newuser myfile.txt: Changes the owner of myfile.txt to newuser.

▪ chown root:staff /var/log/messages: Changes the owner to root and the group to staff.

▪ chown -R user1:group1 /home/user1/: Changes the owner and group of all files and
directories recursively within /home/user1/.

Changing Group Ownership (chgrp): The chgrp command changes the group associated with a file. You
must be the superuser or a member of the new group and the file's owner.

▪ chgrp developers myfile.txt: Changes the group of myfile.txt to developers.

▪ chgrp -R www-data /var/www/html/: Changes the group of all files and directories
recursively in /var/www/html/ to www-data.

6. Describe the Unix file system and the role of inodes in organizing files.
Ans: The Unix file system is a hierarchical structure, resembling an inverted tree, with the root directory
(/) at the top. It organizes files and directories in a structured manner, allowing for efficient storage and
retrieval.

Inodes (Index Nodes):

o Each file and directory has a unique inode.

o An inode is a data structure that stores all the metadata about a file, except its name and data.
This metadata includes:

▪ Permissions

▪ Ownership (user and group IDs)

▪ Size

▪ Timestamps (access, modification, change)

▪ File type

▪ Number of hard links

o Directories contain entries that map file names to inode numbers. When you access a file by its
name, the system looks up the inode number in the directory, then retrieves the file's attributes
from the inode.

o This separation of metadata (inode) from the filename allows for features like hard links, where
multiple names can point to the same inode (and thus, the same data).

7. Explain the concepts of hard links and soft links (symbolic links) in Unix, highlighting their differences
with examples.
Ans: Both hard and soft links create additional ways to access a file, but they function differently:

o Hard Links:

▪ A hard link is a directory entry that points directly to the same inode as the original file.
It's like giving the same file another name.

▪ If you change the contents of the file through one hard link, the changes are visible
through all other hard links.

▪ Hard links share the same inode, so they have the same attributes (permissions, owner,
etc.).

▪ You cannot create a hard link to a directory (in most Unix-like systems) or to a file on a
different file system.

Example:

▪ ln original.txt hardlink.txt # Create a hard link

▪ echo "Hello" >> original.txt

▪ cat hardlink.txt # Shows "Hello"

o Soft Links (Symbolic Links):

▪ A soft link is a special file that contains a pointer to the name of another file or directory.
It's more like a shortcut.

▪ If the original file is deleted, the soft link becomes broken (dangling), as it points to a
name that no longer exists.

▪ Soft links can point to directories and files on different file systems.

▪ Soft links have their own inode, which stores the path to the original file.

Example:

▪ ln -s original.txt softlink.txt # Create a soft link

▪ echo "World" >> original.txt

▪ cat softlink.txt # Shows "World"

▪ rm original.txt

▪ cat softlink.txt # Shows "No such file or directory"

▪ ls -l shows soft links with an "l" at the beginning of the permissions and displays the link
pointing to the original file.
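The inode sharing described above can be observed directly with ls -i, which prints each file's inode number (a minimal sketch; the file names are illustrative):

```shell
touch original.txt
ln original.txt hardlink.txt      # hard link: shares the inode
ln -s original.txt softlink.txt   # soft link: gets its own inode

ls -li original.txt hardlink.txt softlink.txt
# original.txt and hardlink.txt show the same inode number
# (and a link count of 2); softlink.txt shows a different inode

rm original.txt hardlink.txt softlink.txt   # clean up
```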

8. Describe the significance of file attributes for directories in Unix.


Ans: File attributes have specific meanings for directories in Unix, controlling how users can interact with
the directory's contents:

o Read (r): Allows users to list the files and subdirectories within the directory using commands
like ls. Without read permission, you cannot see what's inside the directory.

o Write (w): Allows users to create new files or delete existing files within the directory. Note that
having write permission on a directory does not mean you can modify the contents of the files
within that directory, unless you also have write permission on those individual files.

o Execute (x): Allows users to enter the directory using the cd command. This is essential for
accessing files and subdirectories within the directory, even if you have read permission. Execute
permission on a directory is often referred to as "traverse" permission.

For example, if a directory has permissions rwxr-x---, the owner can list, create/delete, and enter it; the
group can list and enter it; and others cannot access it at all. Without execute permission on a directory,
you cannot cd into it, regardless of read permissions.

9. Explain the default permissions of a file and directory in Unix and how the umask command affects
them, with examples.
Ans: When a new file or directory is created in Unix, the system assigns default permissions. The umask
setting modifies these defaults.

o Default Permissions:

▪ Files: 666 (rw-rw-rw-)

▪ Directories: 777 (rwxrwxrwx)

o umask:

▪ umask sets the permissions that are removed from the default permissions. It's a mask
of bits that are turned off.

▪ The umask is typically set in startup files like .bashrc or /etc/profile.

▪ The umask value is also represented in octal.

Example:

▪ If umask is set to 022, it means:

▪ 0: No permissions are removed from the owner.

▪ 2: Write permission is removed from the group.

▪ 2: Write permission is removed from others.

▪ Default file permission (666) - umask (022) = Actual file permission (644, rw-r--r--)

▪ Default directory permission (777) - umask (022) = Actual directory permission (755,
rwxr-xr-x)

Commands:

▪ umask 022

▪ touch newfile.txt

▪ mkdir newdir

▪ ls -l newfile.txt newdir # Shows permissions 644 and 755

10. Describe how to list and change file modification and access times in Unix, including the use of the ls,
stat, and touch commands.
Ans: Unix maintains several timestamps for files:

o Access Time (atime): Time the file was last read.

o Modification Time (mtime): Time the file's content was last changed.

o Change Time (ctime): Time the file's metadata (permissions, ownership) was last changed.

o Listing Times:

▪ ls -l: Shows modification time (mtime).

ls -l myfile.txt
-rw-r--r-- 1 user group 1024 2024-07-24 10:00 myfile.txt
# 2024-07-24 10:00 is mtime

▪ ls -lu: Shows access time (atime).


ls -lu myfile.txt
-rw-r--r-- 1 user group 1024 2024-07-24 10:15 myfile.txt
# 2024-07-24 10:15 is atime

▪ stat: Provides detailed information, including all three timestamps.


stat myfile.txt
  File: myfile.txt
  Size: 1024         Blocks: 8         IO Block: 4096   regular file
Device: 803h/2051d   Inode: 12345678   Links: 1
Access: (0644/-rw-r--r--)  Uid: ( 1000/ user)  Gid: ( 100/ group)
Access time: 2024-07-24 10:15:00.123456789 +0000
Modify time: 2024-07-24 10:00:00.987654321 +0000
Change time: 2024-07-24 10:20:00.456789123 +0000

o Changing Times (touch):

▪ touch filename: Updates both atime and mtime to the current time. If the file doesn't exist,
it creates an empty file.
touch myfile.txt # Updates atime and mtime

▪ touch -a filename: Changes only the access time (atime).


touch -a myfile.txt # Updates atime

▪ touch -m filename: Changes only the modification time (mtime).


touch -m myfile.txt # Updates mtime

▪ touch -t YYYYMMDDHHMM filename: Sets both atime and mtime to a specific time.
touch -t 202407241200 myfile.txt # Sets time to July 24, 2024, 12:00 PM
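The timestamp options can be verified together: set a specific time with touch -t, then read it back. A small sketch assuming GNU coreutils (option spellings differ slightly on BSD/macOS; demo.txt is an illustrative file name):

```shell
touch -t 202407241200 demo.txt   # set mtime/atime to July 24, 2024, 12:00
date -r demo.txt                 # prints demo.txt's modification time
stat demo.txt | grep "Modify"    # the same mtime as reported by stat
rm demo.txt                      # clean up
```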


Chapter 05
Shell


5.1 Introduction to Shell and Shell Types

Introduction to Shell and Shell Types in UNIX


1. What is a Shell?

A shell is a command-line interpreter that allows users to interact with the operating system. It provides an
interface between the user and the system's kernel, enabling users to execute commands, run scripts, and
automate tasks.

1.1 Functions of a Shell

• Command Execution: Accepts user commands and executes them.

• Shell Scripting: Supports automation using shell scripts.

• File and Process Management: Allows users to navigate files, execute processes, and manage resources.

• I/O Redirection and Pipes: Controls input and output streams.

• Environment Management: Manages user variables, system paths, and configurations.

2. Types of Shells in UNIX

UNIX offers multiple shell types, each with unique features. These shells can be categorized into Bourne Shell
Family and C Shell Family.

2.1 Bourne Shell Family (Sh-Based Shells)

Shells in this family are descendants of the original Bourne Shell (sh), known for their scripting capabilities.
• Bourne Shell (sh): Original UNIX shell, known for scripting but lacks user-friendly features.

• Bash (Bourne Again Shell): Default shell in Linux; an enhanced version of sh that supports command history, auto-completion, and scripting improvements.

• Korn Shell (ksh): Developed by David Korn; adds scripting enhancements and performance improvements.

• Z Shell (zsh): Combines features of bash and ksh; supports advanced auto-completion, themes, and plugins.

2.2 C Shell Family (C-Based Shells)

These shells have a C-like syntax, making them preferred by programmers.


• C Shell (csh): Introduced by BSD; supports command aliasing, job control, and a C-like syntax.

• TENEX C Shell (tcsh): Enhanced version of csh; includes command-line editing, history, and better scripting capabilities.

3. Key Differences Between Shell Types


Feature              Bourne (sh)   Bash          Korn (ksh)    Z Shell (zsh)    C Shell (csh)  TENEX (tcsh)
Default in UNIX      Yes           No            No            No               No             No
Command History      No            Yes           Yes           Yes              No             Yes
Auto-Completion      No            Yes           Yes           Yes (Advanced)   No             Yes
Aliases              No            Yes           Yes           Yes              Yes            Yes
Scripting Features   Basic         Advanced      Advanced      Advanced         Limited        Advanced
Syntax               Bourne-based  Bourne-based  Bourne-based  Bourne-based     C-like         C-like
Performance          Basic         Improved      Optimized     Fast             Moderate       Fast

4. Choosing the Right Shell

• For General Users: bash (default in Linux) or zsh (used in macOS).

• For Programmers: csh or tcsh (C-like syntax).

• For Advanced Users: ksh (powerful scripting and performance).

• For Customization Enthusiasts: zsh (themes, plugins, auto-correction).
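Whichever shell you settle on, a few standard commands show what is installed and what you are currently running (the paths shown in comments are typical, not guaranteed):

```shell
echo $SHELL         # login shell recorded for your account, e.g. /bin/bash
ps -p $$            # the shell process you are actually running right now
cat /etc/shells     # list of shells installed on this system
# chsh -s /bin/zsh  # would change your login shell (the path must be
                    # listed in /etc/shells); uncomment to use
```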

Conclusion

A shell is a crucial part of the UNIX/Linux operating system, enabling users to execute commands and automate
tasks. Different shell types offer unique features, with bash being the most commonly used, while zsh and ksh
provide additional enhancements. Understanding shell differences helps in selecting the right shell for specific
needs.


5.2 Pattern Matching and Regular Expressions

Pattern matching is a technique used in text processing to search for specific sequences of characters (patterns)
in a file or input stream. In UNIX, pattern matching is commonly performed using regular expressions with tools
like grep and egrep.

Regular Expressions (RegEx) in UNIX

A Regular Expression (RegEx) is a pattern that defines a search criteria. It is used for string matching and text
filtering in UNIX commands.

Types of Regular Expressions

1. Basic Regular Expressions (BRE) – Used with grep.

2. Extended Regular Expressions (ERE) – Used with egrep.

The grep Command

grep (Global Regular Expression Print) is used to search for a specific pattern in files or input streams.

Basic Syntax:
grep [OPTIONS] "pattern" filename

Commonly Used Options in grep


Option Description
-i Case-insensitive search
-v Invert match (display lines that do not match)
-c Count the number of matching lines
-n Show line numbers along with matching lines
-l Display only filenames with matches
-w Match whole words
-o Print only the matched part of the line

Examples of grep

1. Search for a word in a file


grep "error" logfile.txt

Searches for the word "error" in logfile.txt.

2. Case-insensitive search
grep -i "error" logfile.txt

Finds "error", "Error", "ERROR", etc.

3. Display lines that do NOT contain the word


grep -v "warning" logfile.txt

Shows lines that do not contain "warning".

4. Count occurrences of a pattern
grep -c "failed" logfile.txt

Counts how many times "failed" appears in the file.

5. Show line numbers with matches


grep -n "error" logfile.txt

Displays lines containing "error" along with their line numbers.
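The options above can be combined on one sample file. A self-contained sketch (/tmp/demo.log and its contents are made up for illustration):

```shell
# Create a small sample log
cat > /tmp/demo.log << 'EOF'
Error: disk full
warning: low memory
login failed for user bob
error: disk full again
EOF

grep -i "error" /tmp/demo.log    # case-insensitive: matches lines 1 and 4
grep -c "failed" /tmp/demo.log   # prints 1
grep -vn "error" /tmp/demo.log   # numbered lines NOT containing "error"
                                 # (line 1 matches -v too: "Error" != "error")
rm /tmp/demo.log                 # clean up
```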

The egrep Command (Extended grep)

egrep (Extended grep) supports Extended Regular Expressions (ERE), which include advanced patterns. It is
functionally similar to grep -E.

Syntax of egrep
egrep [OPTIONS] "pattern" filename

Additional Features in egrep

Symbol  Meaning
|       Matches either the pattern on its left or its right (OR operator)
+       Matches one or more occurrences of the preceding character
?       Matches zero or one occurrence of the preceding character
{n,m}   Matches between n and m occurrences

Examples of egrep

1. Search for multiple words using | (OR operator)


egrep "error|failed|warning" logfile.txt

Finds lines containing "error", "failed", or "warning".

2. Match words ending with .log


egrep "\.log$" filenames.txt

Finds lines ending with .log.

3. Match words starting with "user"


egrep "^user" data.txt

Finds lines that start with "user".

4. Match repeated occurrences using +


egrep "go+d" words.txt

Matches "god", "good", "goood", etc.
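These ERE operators can be tried on a small word list (the file name is illustrative; on modern GNU systems grep -E is the preferred spelling of egrep, which may print a deprecation warning):

```shell
printf 'gd\ngod\ngood\ncolor\ncolour\n' > /tmp/words.txt

egrep "go+d" /tmp/words.txt      # god, good (one or more o's, so not gd)
egrep "colou?r" /tmp/words.txt   # color and colour
egrep "go{2}d" /tmp/words.txt    # exactly two o's: good only
rm /tmp/words.txt                # clean up
```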

Using grep and egrep for Advanced Pattern Matching

1. Anchors (^ and $)
• ^pattern → Match at the beginning of a line.

• pattern$ → Match at the end of a line.

Example:
grep "^Hello" file.txt # Lines starting with "Hello"
grep "end$" file.txt # Lines ending with "end"

2. Wildcards and Character Classes

Symbol Meaning
. Matches any single character
[abc] Matches any one of a, b, or c
[^abc] Matches any character except a, b, or c
[a-z] Matches any lowercase letter
[0-9] Matches any digit

Example:
egrep "gr.y" file.txt # Matches "gray", "grey", etc.
egrep "[A-Z]" file.txt # Matches any uppercase letter

3. Quantifiers (*, +, ?, {n,m})

Symbol Meaning
* Matches 0 or more times
+ Matches 1 or more times
? Matches 0 or 1 time
{n} Matches exactly n times
{n,} Matches n or more times
{n,m} Matches between n and m times

Example:
egrep "go*d" file.txt # Matches "gd", "god", "good", "goood"
egrep "go+d" file.txt # Matches "god", "good", "goood" but not "gd"
egrep "colou?r" file.txt # Matches "color" and "colour"

Conclusion

Pattern matching using grep and egrep is essential for text processing in UNIX. While grep supports basic regular
expressions, egrep provides extended pattern matching capabilities. These tools help in searching, filtering, and
extracting information efficiently from text files.


5.3 Input/Output Redirection and Pipes

In Unix/Linux systems, redirection is a powerful feature that allows users to change the standard input and output
sources of commands. By default, programs read input from the keyboard (standard input) and display output on
the screen (standard output). However, with redirection, you can reroute input from files and send output to files
or other devices instead of the screen. This enables users to save command results, read data from files, manage
error messages, and automate tasks more efficiently. Redirection is an essential tool for shell scripting,
programming, and system administration.

Redirection:
Redirection is the process of changing the default input or output source of a command in a Unix/Linux shell. It
allows a command to read input from a file instead of the keyboard (standard input), or write output to a file
instead of displaying it on the screen (standard output). Redirection also includes managing error messages by
sending them to files or combining them with standard output. It is commonly used to automate tasks, store
results, and streamline command-line workflows.

About three standard files/streams


In computing, a stream is a continuous flow of data that can be read from or written to. It acts as a channel
between a program and a data source or destination, such as a keyboard, screen, or file. In Unix/Linux systems,
streams are used to handle input and output operations. The most common types of streams are:

• Standard Input (stdin): Input taken from the keyboard by default (file descriptor 0).

• Standard Output (stdout): Output displayed on the screen by default (file descriptor 1).

• Standard Error (stderr): Error messages displayed on the screen (file descriptor 2).

Streams allow programs to process data efficiently, whether it's coming from the user, another program, or a file.
These streams are automatically opened when a program starts and are identified by file descriptors. A file
descriptor is a non-negative integer that uniquely identifies an open file or input/output resource used by a
process. It acts as a handle for accessing files, devices, sockets, or streams.

1. Standard Input(stdin):
Standard Input, commonly referred to as stdin, is one of the three default data streams used in Unix/Linux
systems for handling input and output.

Standard Input (stdin) is the default stream through which a program receives input data. It is identified by
the file descriptor 0. By default, input is taken from the keyboard unless redirected from a file or another
source.
o By default, stdin is connected to your keyboard. When a command expects input (like read, or when
you run sort without specifying a file), it's typically reading from what you type.

o However, stdin can be redirected using the < operator to read data from a file instead. This allows you
to feed pre-prepared data into a command as if you were typing it.
o Stdin is a read-only stream for the command. It can only receive data.

Example:
cat # The 'cat' command, without arguments, reads from stdin
This is what I'm typing.
^D # Press Ctrl+D to signal the end of input
This is what I'm typing.

In this example, the cat command reads the lines you typed on the keyboard (stdin) and then echoes
them back to the terminal (stdout). The ^D signifies the "end of file" marker for stdin, telling cat that
there's no more input.

sort < names.txt


In this case, the sort command doesn't wait for you to type names. Instead, it reads the list of names
directly from the names.txt file as its standard input.

By default, when you run a command that expects input and you don't specify a source, it waits for you to
type something on your keyboard and press Enter. Each line you enter becomes part of the standard input for
that command.

2. Standard Output(stdout):
Standard Output, commonly called stdout, is one of the three primary data streams in Unix/Linux systems
used for communication between a program and the outside world.

Standard Output (stdout) is the default stream where a program writes its regular output data. It is identified
by the file descriptor 1. By default, stdout sends output to the terminal (screen).
o By default, stdout is connected to your terminal screen. When a command successfully completes its
task and produces output (like the listing from ls or the sorted output from sort), it's usually sent to
stdout, which then displays it on your screen.
o Stdout can be redirected using the > (overwrite) or >> (append) operators to save the output to a file
instead of displaying it on the screen.
o Stdout is a write-only stream for the command. It can only send data.

Example:
ls -l
total 4
-rw-r--r-- 1 user user 12 May 13 02:35 unsorted_list.txt
-rw-r--r-- 1 user user 25 May 13 02:37 sorted_list.txt
The ls -l command lists files and directories in a long format. Its standard output is directly displayed on
your terminal.

ls -l > file_listing.txt
You won't see any output on your terminal. Instead, the standard output of the ls -l command is redirected
to a file named file_listing.txt. If file_listing.txt didn't exist, it would be created. If it did exist, its contents
would be overwritten.

echo "Adding another entry" >> file_listing.txt


This command appends the text "Adding another entry" to the end of the file_listing.txt file. If the file
didn't exist, it would be created.

3. Standard Error(stderr):
Standard Error, abbreviated as stderr, is one of the three default data streams in Unix/Linux systems, used
specifically for handling error messages and diagnostics.

Standard Error (stderr) is the default stream where a program writes its error messages. It is identified by the
file descriptor 2. By default, stderr sends output to the terminal (screen), just like stdout, but it operates
independently.
o By default, stderr is also connected to your terminal screen. This ensures that even if the regular output
is redirected to a file, you'll still see any error messages on your screen, alerting you to potential
problems.
o Stderr can be redirected using the 2> (overwrite) or 2>> (append) operators to save error messages to
a separate file. This is crucial for debugging and monitoring automated processes.
o Like stdout, stderr is a write-only stream for the command.

Example:
cat non_existent_file.txt
cat: non_existent_file.txt: No such file or directory

The error message "cat: non_existent_file.txt: No such file or directory" is sent to standard error, which is displayed on your terminal.

cat non_existent_file.txt 2> error_log.txt

You won't see the error message on your terminal. Instead, it is redirected to a file named error_log.txt. The content of error_log.txt will be:

cat: non_existent_file.txt: No such file or directory

Redirecting both standard output and standard error (&> or 2>&1)


• Using &> (often in Bash and other modern shells):

cat existing_file.txt non_existent_file.txt &> combined_log.txt

This will put both the normal output from existing_file.txt and the error message about
non_existent_file.txt into the combined_log.txt file.

• Using 2>&1 (more traditional and portable):

cat existing_file.txt non_existent_file.txt > output_and_errors.log 2>&1

Here, > redirects stdout to output_and_errors.log, and then 2>&1 redirects stderr to the
same location where stdout is currently going (which is now the file). The order is important here.
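A small experiment makes the ordering concrete (the directory and file names below are made up for illustration):

```shell
mkdir -p /tmp/redir_demo && cd /tmp/redir_demo
touch real.txt

# stdout first, then stderr to the same place: both land in the file
ls real.txt missing.txt > both.log 2>&1
cat both.log   # contains "real.txt" AND the error about missing.txt

# Reversed order: stderr is duplicated to the terminal *before* stdout
# moves to the file, so the error still appears on screen and only
# the normal listing is captured
ls real.txt missing.txt 2>&1 > listing.log
```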

Here are some other input redirection operators:


• << (Here Document): This allows you to provide input directly within your script or command. The input continues until a specified delimiter is encountered.

command << DELIMITER
This is some input.
It can span multiple lines.
DELIMITER
The command will receive "This is some input.\nIt can span multiple lines.\n" as its stdin. The
delimiter (DELIMITER in this example) can be any string you choose.

• <<< (Here String): This allows you to provide a single line of input to a command.

command <<< "This is a single line of input."

This is equivalent to echo "This is a single line of input." | command, but without the explicit pipe.
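A runnable sketch of both forms (bash is assumed, since <<< is a bash feature), capturing what each delivers on standard input:

```shell
# here document: multi-line stdin until the delimiter is reached
line_count=$(wc -l << EOF
This is some input.
It can span multiple lines.
EOF
)

# here string: a single line of stdin
shouted=$(tr 'a-z' 'A-Z' <<< "hello")

echo "lines=$line_count shouted=$shouted"
```

Here `wc -l` sees two lines from the here document, and `tr` uppercases the single here-string line.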

Why are these three streams important?

• Separation of Concerns: Keeping normal output and error messages separate makes it easier to process
the intended results of a command without being cluttered by error messages.

• Automation and Scripting: In scripts, you might want to save the regular output of a command to a file
for further processing while logging any errors to a separate file for debugging.

• Piping: When you use pipes (|) to connect the output of one command to the input of another, you
typically only want to pipe the standard output, not the error messages. Errors should usually be displayed
or logged separately.

Escaping, Quoting, and Command Substitution in Shell Scripting

These are essential concepts in shell scripting that help you manage special characters and execute commands
within other commands.

Escaping: Escaping is a way to remove the special meaning of a character and treat it literally. The backslash \ is
used as the escape character.

o To include special characters in a string without the shell interpreting them.

o To give special meaning to certain character combinations.

Examples:

o echo "This is a \"quoted\" string": The \" escapes the double quote, so it's printed as part of the
string.

o echo -e "Newline\ncharacter": with the -e option, \n is interpreted as a newline character (without -e, bash's echo prints the \n literally).

o echo "The variable is \$VAR": \$ prevents the shell from expanding the variable VAR.

o echo "Path with backslash: C:\\Windows\\System32": inside double quotes, \\ represents a single literal backslash.

Quoting: Quoting is a way to group characters and control how the shell interprets them.

Types:

o Single Quotes (' '): Strong quoting.

o Double Quotes (" "): Weak quoting.

Purpose:

o To prevent word splitting.

o To prevent or allow variable expansion.

o To control the interpretation of special characters.

Examples:

echo 'This is a literal string with $VAR': The variable $VAR is not expanded; it is taken literally.

echo "This string expands $VAR": The variable $VAR is expanded to its value.

echo Files in current directory: *: Unquoted, the * expands to the list of files in the current directory.

echo 'Files in current directory: *': The * is treated as a literal asterisk, because quoting (single or double) suppresses filename expansion.

myvar="hello world"

echo "The value of myvar is $myvar"   # output: The value of myvar is hello world

echo 'The value of myvar is $myvar'   # output: The value of myvar is $myvar
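The quoting rules above can be verified directly. This sketch also demonstrates word splitting, the other behaviour that double quotes control (variable names are arbitrary):

```shell
greeting="hello world"

single=$(echo 'Value: $greeting')    # single quotes: $ kept literal
double=$(echo "Value: $greeting")    # double quotes: variable expanded

set -- $greeting      # unquoted: split into two words
unquoted_words=$#
set -- "$greeting"    # quoted: kept as one word
quoted_words=$#
```

The unquoted expansion yields two positional parameters ("hello" and "world"); the quoted one yields a single parameter.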

Command Substitution: Command substitution allows you to execute a command and insert its output into
another command or string.

Syntax:

o Backticks: `command` (older syntax, less preferred)

o Dollar sign and parentheses: $(command) (newer, preferred syntax)

Purpose: To use the output of one command as an argument to another command, or to assign it to a variable.

Examples:

o DATE=$(date): The output of the date command is assigned to the variable DATE.

o echo "Current date and time: $(date)": The output of date is inserted into the string.

o ls -l $(which ls): The which ls command finds the path to the ls executable, and that path is used as an argument to ls -l.

o touch "backup_$(date +%F).txt": creates a file whose name embeds the current date.

PIPE

In Unix/Linux systems, a pipe is a command-line feature that allows the output of one command to be passed
directly as input to another command. Represented by the symbol |, pipes enable users to chain multiple
commands together, creating a seamless flow of data between them. This mechanism helps avoid the need for
temporary files, making command execution more efficient and script-friendly. Pipes are especially useful for
filtering, transforming, and analyzing data using standard utilities like grep, sort, awk, uniq, and wc. By
combining simple tools with pipes, users can solve complex tasks in a clean, modular way.

• A pipe (|) is a shell operator that connects the standard output (stdout) of one command to the standard
input (stdin) of another. It is used to pass data between commands without creating intermediate files.

Pipes (|) allow you to connect the standard output of one command directly to the standard input of another
command. This creates a powerful way to chain commands together to perform complex operations in a
sequential manner.

Syntax:

command1 | command2 | command3 | ...

The output of command1 becomes the input of command2, the output of command2 becomes the
input of command3, and so on. The data flows from left to right through the pipeline.

Example:

Let's say you want to find all entries in the current directory whose names contain the word "report". You can achieve this by
combining the ls command with the grep command using a pipe:

1. ls -l: Lists files and directories in a long format (output goes to stdout).

2. grep "report": Reads input from stdin and prints lines containing "report" to stdout.

By piping them together:

ls -l | grep "report"

The output of ls -l (the list of files) is sent as the input to grep "report". The grep command then filters
this input and only prints the lines that contain the word "report". The final output is displayed on your
terminal.

Another example:

Let's say you want to count the number of lines in the names.txt file:

1. cat names.txt: Displays the content of names.txt (output to stdout).

2. wc -l: Reads input from stdin and counts the number of lines (output to stdout).

Piping them together:

cat names.txt | wc -l

The content of names.txt is piped to wc -l, which then counts the lines and prints the count to your terminal.
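A slightly longer, self-contained pipeline sketch (the fruit list is invented data): it finds how many distinct items there are and which one occurs most often.

```shell
fruits='apple
banana
apple
cherry
apple'

# distinct items: sort -u removes duplicates, wc -l counts what remains
distinct=$(printf '%s\n' "$fruits" | sort -u | wc -l)

# most frequent item: count duplicates, sort by count descending, keep the top name
top=$(printf '%s\n' "$fruits" | sort | uniq -c | sort -rn | head -n 1 | awk '{print $2}')
```

Each stage does one small job; the pipe hands its output to the next stage, so no temporary files are needed.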

Key Advantages of Using Pipes:

• Efficiency: Pipes avoid the need to create intermediate files to store the output of one command before
using it as input for another. The data flows directly in memory.

• Flexibility: You can combine simple, specialized commands to perform complex tasks that no single
command can achieve.

• Readability: Well-constructed pipelines can be more readable and easier to understand than long,
complex commands with multiple options.

Efficient Use of Redirection and Pipes

To use redirection and pipes efficiently, consider the following:

• Understand the Input and Output of Commands: Before using redirection or pipes, make sure you
understand what a command expects as input and what it produces as output (both stdout and stderr).
Consult the command's manual page (man command) for details.

• Filter and Process Data Incrementally: Pipes allow you to process data step by step. Use commands like
grep, sed, awk, sort, uniq, etc., in your pipelines to filter, transform, and analyze data effectively.

• Handle Errors: Be mindful of standard error. If a command in a pipeline fails, the error messages will
usually still go to your terminal. If you need to capture or redirect errors from a pipeline, you might need
to redirect stderr explicitly at the end of the pipeline or use tools designed for more complex error
handling.

• Keep Pipelines Concise and Readable: While you can create very long pipelines, it's often better to break
down complex tasks into smaller, more manageable steps. Use temporary variables or intermediate files
if a pipeline becomes too convoluted.

• Use Redirection for Saving and Organizing Output: Use output redirection to save the results of
commands to files for later analysis, reporting, or configuration.

• Combine Redirection and Pipes: You can use both redirection and pipes in the same command. For
example, you can pipe the output of several commands to a file:

command1 | command2 > final_output.txt


Or you can take input from a file and pipe it to a command:

command < input_file | command2
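A concrete sketch combining input redirection, a pipe, and output redirection on one command line (the file names are arbitrary):

```shell
tmp=$(mktemp -d)
printf 'cherry\napple\nbanana\n' > "$tmp/input.txt"

# read from a file, transform through a pipeline, save the result
sort < "$tmp/input.txt" | head -n 2 > "$tmp/first_two.txt"
```

After this runs, first_two.txt holds the first two lines of the sorted input ("apple" and "banana").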

In summary, I/O redirection and pipes are powerful tools that allow you to control the flow of data in your
command-line environment. By understanding how they work and using them effectively, you can significantly
enhance your productivity and automate various tasks with ease.

Understanding Shell Variables


In Unix/Linux, shell variables are fundamental to how the shell (like Bash) stores and manipulates data. They're
essentially named storage locations that hold values, which can be strings, numbers, or other data.

Key Concepts

• Variable Assignment: You assign a value to a variable using the following syntax:

variable_name=value #There should be no spaces around the equals sign (=).

• Variable Naming:

o Variable names can contain letters (a-z, A-Z), numbers (0-9), and underscores (_).

o They must start with a letter or an underscore.

o By convention, shell variables are often written in uppercase (e.g., MY_VAR, HOME). This helps
distinguish them from shell commands and scripts.

• Accessing Variable Values: To retrieve the value stored in a variable, you precede the variable name with
a dollar sign ($):

echo $variable_name

echo ${variable_name}   # alternative form; the braces are needed in complex cases such as ${variable_name}_suffix

For example:

NAME="John Doe"

echo "Hello, $NAME" # Output: Hello, John Doe

Types of Variables

Shell variables can be broadly categorized into three types:

1. Local Variables: These variables are defined within the current shell instance. They are only accessible
within the script or the current shell session where they are defined. Example:

my_var="This is local"

2. Environment Variables: These variables are available to the current shell and any child processes that it
creates. They are used to store information about the system, the user, and the shell environment. You
can make a local variable an environment variable using the export command:

export my_var

Examples of common environment variables:

▪ HOME: The user's home directory.

▪ PATH: A list of directories where the shell searches for executable programs.

▪ USER: The username of the current user.

▪ SHELL: The path to the user's default shell.

3. Shell Variables (Special Variables): These are special variables that are set by the shell itself. They provide
information about the shell's state and behaviour. Examples:

o $0: The name of the current script.

o $1, $2, ... $n: The arguments passed to a script.

o $#: The number of arguments passed to a script.

o $?: The exit status of the last executed command.

o $$: The process ID of the current shell.

Examples

#!/bin/bash
# Local variable
message="Hello, world!"
echo $message

# Environment variable
export GREETING="Welcome"
echo $GREETING

# Shell variables
echo "Script name: $0"
echo "Number of arguments: $#"
echo "First argument: $1"
echo "Exit status of previous command: $?"

# Accessing an Environment Variable

echo "Your home directory is: $HOME"
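One behaviour worth verifying for yourself: a plain assignment stays local to the current shell, while an exported variable is inherited by child processes. A minimal sketch (the variable names are arbitrary):

```shell
LOCAL_ONLY="parent only"          # not exported: invisible to child processes
export SHARED="seen by children"  # exported: copied into each child's environment

# sh -c starts a child process; single quotes make the *child* expand the variable
from_child_shared=$(sh -c 'printf %s "$SHARED"')
from_child_local=$(sh -c 'printf %s "$LOCAL_ONLY"')
```

The child sees the exported SHARED but gets an empty string for LOCAL_ONLY.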

Key Points

• Variables are a powerful tool for storing and manipulating data in shell scripts.

• Understanding the difference between local, environment, and shell variables is crucial for writing correct
and portable scripts.

• Environment variables play a vital role in configuring the behavior of programs and the shell itself.



Review Questions
Short Answer Type Questions
1. What is a shell in Unix?
Ans: A shell is a command-line interpreter that provides an interface between the user and the
operating system, allowing users to execute commands and manage system operations. It interprets user
input and translates it into instructions for the kernel.

2. Name three common types of shells in Unix/Linux.


Ans: Three common types of shells are the Bourne Shell (sh), the Bourne Again SHell (bash), and the C
Shell (csh). Bash is the most widely used shell in Linux distributions.

3. What is the primary function of the shell's interpretive cycle?


Ans: The primary function of the shell's interpretive cycle is to read user commands, process them
(including expanding special characters), execute the commands, and then wait for the next command.

4. What are metacharacters in the context of a shell?


Ans: Metacharacters are special symbols that have a specific meaning to the shell, such as *, ?, >, <, and
|. They are used for pattern matching, redirection, and other shell operations.

5. What does the * wildcard do in shell pattern matching?


Ans: The * wildcard matches any sequence of zero or more characters in a filename or pathname. For
example, ls a* lists all files starting with "a".

6. What is the purpose of the ? wildcard?


Ans: The ? wildcard matches any single character in a filename or pathname. For instance, ls file?.txt
would match file1.txt, file2.txt, etc.

7. Explain the concept of "escaping" in shell commands.


Ans: Escaping is the process of preventing the shell from interpreting a metacharacter by preceding it
with a backslash (\). For example, ls \* treats the asterisk as a literal character.

8. What is "quoting" in the shell, and what are its types?


Ans: Quoting is a mechanism to suppress the special meaning of metacharacters. Types include single
quotes (') which prevent all interpretation, and double quotes (") which allow some interpretation (like
variable expansion).

9. What is the difference between > and >> redirection operators?


Ans: > redirects the output of a command to a file, overwriting the file if it exists. >> appends the output
to the end of the file, preserving existing content.

10. What are the three standard data streams in Unix?


Ans: The three standard data streams are standard input (stdin), standard output (stdout), and standard
error (stderr).

11. What does standard input (stdin) typically refer to?


Ans: Standard input typically refers to the keyboard, but it can also be redirected from a file or another
command.

12. What is the default destination for standard output (stdout)?


Ans: The default destination for standard output is the terminal screen.

13. Where does standard error (stderr) usually go?

Ans: Standard error usually goes to the terminal screen, displaying error messages.

14. What is the purpose of /dev/null?


Ans: /dev/null is a special file that discards any data written to it. It's used to suppress unwanted output
from commands.

15. What is /dev/tty used for?


Ans: /dev/tty is a special file that refers to the user's terminal. It can be used to ensure that output is
always displayed on the user's terminal, regardless of redirection.

16. What is a "pipe" in the Unix shell?


Ans: A pipe (|) is a mechanism to redirect the output of one command to the input of another
command, creating a chain of commands.

17. What does the tee command do?


Ans: The tee command reads from standard input and writes to both standard output and one or more
files, allowing you to see the output while also saving it.

18. Explain "command substitution."


Ans: Command substitution allows the output of a command to be used as an argument to another
command, using backticks (`command`) or the $(command) syntax.

19. What are shell variables?


Ans: Shell variables are named storage locations that hold data, such as strings or numbers, that can be
used within shell scripts or commands.

20. Give an example of setting a shell variable.


Ans: To set a shell variable named MYVAR to the value "Hello", you would use the command:
MYVAR="Hello".

Descriptive Type Questions with Answers


1. Describe the shell's interpretive cycle in detail.
Ans: The shell's interpretive cycle is the sequence of steps it follows to process user commands. First, the
shell displays a prompt and waits for user input. Once a command is entered, the shell scans the command
line, performing expansions (like wildcard expansion and variable substitution) and resolving any
metacharacters. Then, the shell forks a new process and executes the command within that process. The
shell waits for the command to finish. Finally, the shell displays the prompt again and awaits the next
command, repeating the cycle.

For example, if you type ls -l *.txt, the shell first displays the prompt (e.g., $). It then expands *.txt to a list
of matching files (e.g., file1.txt file2.txt). The shell then executes ls -l file1.txt file2.txt, shows the output,
and returns to the prompt.

2. Explain the differences between the Bourne shell (sh), the Bourne Again SHell (bash), and the C Shell
(csh).
Ans: The Bourne shell (sh) was one of the earliest Unix shells, known for its simplicity and efficiency. It
lacks some interactive features. The Bourne Again SHell (bash) is an enhanced version of sh, widely used
in Linux, offering more features like command-line editing, history, and scripting capabilities. The C Shell
(csh) was developed at Berkeley and provides features like aliases and job control, with a syntax
resembling the C programming language. Bash is generally preferred for its compatibility and rich feature
set.

3. How does pattern matching with wildcards simplify file manipulation in the shell? Provide examples.

Ans: Pattern matching with wildcards allows users to specify groups of files without typing each filename
individually, greatly simplifying file manipulation.

For example, to delete all files ending with .log, you can use rm *.log. The * matches any sequence of
characters before .log. To list all single-character files starting with 'a', use ls a?; the ? matches any single
character. This makes working with multiple files much more efficient.

4. Describe the importance of escaping and quoting in shell programming, and illustrate with examples.
Ans: Escaping and quoting are crucial for controlling how the shell interprets special characters. Escaping
(using \) treats the next character literally, preventing its special meaning. Quoting (using single or double
quotes) suppresses or partially suppresses the special meaning of characters within the quotes.

For example, to search for the string $100 in a file, grep \$100 myfile.txt escapes the $ to prevent variable
substitution. To prevent any interpretation, use single quotes: echo '$PATH' prints $PATH literally. Double
quotes allow variable expansion: echo "The value of PATH is $PATH".

5. Explain the concept of redirection in the shell, including standard input, standard output, and
standard error.
Ans: Redirection allows you to change the default sources and destinations of data streams. Standard
input (stdin) is the source of data for a command (usually the keyboard). Standard output (stdout) is where
a command's normal output is sent (usually the terminal). Standard error (stderr) is where a command's
error messages are sent (also usually the terminal).

For example, ls > filelist.txt redirects the stdout of ls to filelist.txt, creating a file with the directory listing.
grep "error" logfile.txt 2> errorlog.txt redirects stderr to errorlog.txt, capturing errors. cat < input.txt reads
stdin from input.txt.

6. Discuss the uses and significance of /dev/null and /dev/tty.


Ans: /dev/null is a null device that discards any data written to it. It is useful for suppressing unwanted
output from commands, such as error messages or verbose output. For example, command > /dev/null
silences the command's standard output.

/dev/tty represents the user's terminal. It's used to ensure that output is displayed directly on the user's
terminal, even if standard output has been redirected. For example, a script might use echo "Enter input:"
> /dev/tty to prompt the user for input, regardless of where stdout is redirected.

7. Explain how pipes (|) and the tee command can be used to create powerful command chains in Unix.
Ans: Pipes connect the stdout of one command to the stdin of another, allowing you to process data
through a series of commands. The tee command allows you to split the output, sending it to both the
terminal and a file.

For example, ls -l | grep "^d" | wc -l lists directories, filters them with grep, and counts the number of lines
(directories). command | tee output.txt displays the output on the terminal and saves it to output.txt.

8. Describe the concept of command substitution with examples.


Ans: Command substitution allows the output of a command to be used as an argument to another
command. This is done using backticks (`command`) or the $(command) syntax.

For example, echo "The current date is $(date)" substitutes the output of the date command into the
string. Another example: files=$(ls *.txt); echo "Text files: $files" first stores the list of text files in the files
variable, then prints the variable.

9. Explain the importance and usage of shell variables in shell scripting.

Ans: Shell variables store data that can be used within shell scripts. They allow you to store and manipulate
strings, numbers, and other values, making scripts more flexible and reusable. Variables can be used to
store user input, command output, or configuration settings.

For example, a script might use read -p "Enter your name: " name to get user input and store it in the
name variable. Then, it could use echo "Hello, $name!" to greet the user. Variables are essential for
creating dynamic and interactive scripts.

10. Write a short shell script that demonstrates several concepts discussed, including variables,
redirection, and pipes.

#!/bin/bash

# Set a variable
LOG_FILE="script_log.txt"

# Get the current date and time
DATE=$(date)

# Redirect output to a log file
echo "Script started at: $DATE" > "$LOG_FILE"

# Get user input
read -p "Enter a directory to list: " DIRECTORY

# Check if the directory exists
if [ -d "$DIRECTORY" ]; then
    # List the regular files in the directory and append to the log file
    ls -l "$DIRECTORY" | grep "^-" >> "$LOG_FILE"
    echo "Files in $DIRECTORY (excluding directories):"
    ls -l "$DIRECTORY" | grep "^-"   # also display on screen
else
    # Record the error in the log file and send it to standard error
    echo "Error: $DIRECTORY is not a valid directory" >> "$LOG_FILE"
    echo "Error: $DIRECTORY is not a valid directory" >&2   # display on screen
    exit 1
fi

# Append a message to the log file
echo "Script finished." >> "$LOG_FILE"


Chapter 06
Process


6.1 Process Management

Basic Idea About UNIX Processes

A process in UNIX is an instance of a computer program that is being executed. Think of a program as a blueprint
and a process as the actual house built from that blueprint, with people (data) moving in and out and activities
(instructions) happening.

Key concepts:

• Program vs. Process: A program is a static file containing instructions. A process is a dynamic entity,
actively running those instructions.

• Address Space: Each process gets its own private virtual address space. This isolated memory region
prevents one process from interfering with the memory of another, contributing to system stability. This
address space typically includes sections for the program code (text), data (initialized and uninitialized),
stack (for function calls and local variables), and heap (for dynamic memory allocation).

• Resources: When a process is created, the operating system allocates various resources to it, such as
memory, file descriptors (for accessing files and network connections), CPU time, and more.

• Process Control Block (PCB): The operating system maintains a data structure for each active process,
called the Process Control Block. This PCB stores crucial information about the process, including its
current state, priority, memory allocation, open files, and the program counter (indicating the next
instruction to be executed).

Displaying Process Attributes (ps)

The ps (process status) command is your window into the currently running processes on the system. It provides
a snapshot of these processes and their attributes.
ps

This typically shows information about the processes associated with the current terminal.

Commonly used options to display more attributes:

• ps aux: This is a very common combination.

o a: Displays information about processes of all users.

o u: Displays detailed information about each process, including the user who owns it.

o x: Displays information about processes without a controlling terminal (often background


daemons).

• ps -ef: Another widely used option for a more comprehensive view.

o -e: Selects all processes.

o -f: Provides a full listing, showing more details like the parent process ID (PPID).

Key attributes displayed by ps often include:

• PID (Process ID): A unique numerical identifier assigned to each process by the kernel.

• USER: The username of the owner of the process.

• CPU (%CPU): The percentage of the processor time that the process is currently using.

• MEM (%MEM): The percentage of the system's physical memory that the process is using.

• VSZ (Virtual Size): The total amount of virtual memory used by the process.

• RSS (Resident Set Size): The amount of physical RAM occupied by the process.

• TTY: The controlling terminal associated with the process (if any). A ? usually indicates no controlling
terminal.

• STAT (State): A code indicating the current state of the process (more on this later).

• START: The time the process was started.

• TIME: The total amount of CPU time the process has accumulated.

• COMMAND: The command that was executed to start the process.

You can also use options to filter the output based on specific criteria, like user (-u), process ID (-p), or command
name (-C). For example:
ps -u your_username
ps -p 1234
ps -C firefox

Displaying System Processes

UNIX systems run numerous background processes essential for their operation. These are often called daemons.
They perform tasks like managing network connections, handling printing, scheduling jobs, and more.

The ps aux or ps -ef commands, as mentioned earlier, are crucial for displaying these system processes. You'll
often see users like root, systemd, syslog, etc., associated with these processes. These processes typically don't
have a controlling terminal (their TTY will be ?).

Tools like top and htop provide a dynamic, real-time view of system processes and resource utilization, making it
easier to monitor system activity.

Process Creation Cycle

The creation of a new process in UNIX typically involves the following steps:

1. Forking: A running process (the parent process) makes a copy of itself using the fork() system call. This
creates a new process (the child process) that is almost an exact duplicate of the parent, including its
memory space, open files, and current execution point. The key difference is that the parent process
receives the PID of the newly created child, while the child process receives a return value of 0 from the
fork() call.

2. Execution (Optional): After forking, the child process often (but not always) needs to execute a different
program than the parent. This is achieved using the exec() family of system calls (e.g., execve, execlp).
An exec() call replaces the child process's memory space with the code and data of the new program.
The process ID remains the same.

3. Waiting (Optional): The parent process might need to wait for the child process to complete its execution.
This is done using the wait() or waitpid() system calls. These calls allow the parent to be notified
when a child process terminates and to retrieve its exit status.

This fork-exec model is fundamental to how new programs are started in UNIX. For instance, when you type a
command in your shell and press Enter, the shell forks a child process, and then the child process uses exec() to
run the command you entered.
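The same cycle can be observed from the shell itself: & forks a child (which then execs the requested program), $! reports the child's PID, and wait collects its exit status. A minimal sketch:

```shell
sh -c 'exit 7' &   # fork: the shell clones itself; exec: the child runs sh
child=$!           # PID of the forked child process
wait "$child"      # parent blocks until the child terminates
status=$?          # exit status retrieved by wait
echo "child $child exited with status $status"
```

Here the child deliberately exits with status 7, and the parent recovers that value through wait.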

Shell Creation Steps (init -> getty -> login -> shell)

The process of getting a login shell when you boot up a UNIX-like system involves a sequence of process creations:

1. init (PID 1): The init process is the very first process started by the kernel during the boot sequence. It's
the ancestor of all other processes on the system. Its primary role is to initialize the system and start other
essential services.

2. getty (or agetty, mingetty): init starts one or more getty (or similar) processes. These processes are
responsible for managing terminal lines (both physical and virtual). A getty process:

o Waits for a connection on a terminal line.

o Detects the connection.

o Configures the terminal settings.

o Displays a login prompt (e.g., "login: ").

3. login: When you enter your username at the login prompt, the getty process executes the login program.
The login program:

o Prompts you for your password.

o Authenticates your username and password against the system's user database (e.g.,
/etc/passwd, /etc/shadow).

o If authentication is successful, it sets up the user's environment (e.g., setting environment variables, changing the current directory to the home directory).

o Finally, it executes the user's default shell (usually specified in /etc/passwd, like bash, zsh, sh).

4. Shell (e.g., bash, zsh): The shell is a command-line interpreter. It:

o Displays a prompt to the user.

o Reads commands entered by the user.

o Parses the commands.

o Executes the commands (often by forking and execing other programs).

o Waits for the commands to complete.

o Repeats the process.

This sequence ensures that you get an interactive shell session after the system boots up and you log in.

Process State

The STAT column in the output of ps indicates the current state of a process. Here are some common process
states:

• R (Running): The process is currently running on the CPU or is ready to run and waiting to be assigned to
a CPU.

• S (Sleeping): The process is waiting for an event to occur, such as the completion of an I/O operation or a
signal. It's in a wait queue and not consuming CPU time.

• D (Disk Sleep) or Uninterruptible Sleep: The process is waiting for a disk I/O operation to complete. This
state is often uninterruptible, meaning it cannot be woken up by signals.

• T (Stopped): The process has been stopped, usually by a signal (e.g., SIGSTOP or SIGTSTP) from the user
or the system. It can be resumed later with a SIGCONT signal.

• Z (Zombie): The process has terminated, but its entry in the process table still exists. This happens
because the parent process has not yet acknowledged the child's termination (by calling wait() or
waitpid()). Zombie processes consume minimal resources but should eventually be reaped by their
parent to free up their PID.

• I (Idle): (Less common in standard ps output, often seen in kernel-level process listings) The process is
idle or waiting for a more significant event.

• +: A trailing + in the STAT column often indicates that the process is in the foreground process group of
its controlling terminal.

• <: A preceding < indicates a high-priority process (nice value < 0).

• N: A preceding N indicates a low-priority process (nice value > 0).

• s: Indicates that the process is a session leader.

• l: Indicates that the process is multi-threaded.

Zombie State

As mentioned above, a zombie process (also known as a defunct process) is a process that has completed its
execution but whose entry in the process table has not yet been removed by its parent process.

When a child process terminates, the kernel sends a SIGCHLD signal to its parent. The parent process is then
expected to call a wait() or waitpid() system call to retrieve the child's exit status and clean up its entry in the
process table.

If the parent process doesn't call wait() (perhaps it has terminated itself or is poorly written), the zombie process
will remain in the system until the parent process terminates or the system is rebooted. While zombies don't
consume much in terms of CPU or memory, a large number of zombie processes can exhaust the system's process
ID limit.

You can identify zombie processes in the output of ps by the Z state in the STAT column. The <defunct> tag might
also appear in the COMMAND column.
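As a minimal sketch (assuming a procps-style ps as found on Linux), you can list any zombie processes together with their parent PIDs, which tells you which parent has failed to call wait():

```shell
# Keep the header line plus any rows whose STAT column begins with Z.
# PPID identifies the parent that should be reaping the zombie.
ps -eo pid,ppid,stat,comm | awk 'NR == 1 || $3 ~ /^Z/'
```

On a healthy system this usually prints only the header line; any additional rows are defunct processes awaiting reaping.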

Netaji Subhash Engineering College


125
Background Jobs (& operator, nohup command)

In a UNIX shell, you can run commands in the background so that they don't tie up your terminal. To do this, you
append an ampersand (&) to the end of the command:
long_running_command &

When you run a command in the background:

• The shell immediately returns to the prompt, allowing you to continue working.

• The background process continues to run independently.

• Output from the background process might still appear on your terminal.
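The points above can be seen in a short session. This sketch uses sleep as a stand-in for any long-running command; $! is the shell variable holding the PID of the most recent background job:

```shell
sleep 1 &                                   # start a job in the background
echo "shell is free; background PID is $!"  # the prompt returns immediately
wait $!                                     # block until that job finishes
echo "background job finished"
```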

However, there's a potential issue: if you close your terminal, any background jobs started from that terminal
will typically receive a SIGHUP (hang up) signal, which usually causes them to terminate.

The nohup command is used to prevent this:


nohup long_running_command &

When you run a command with nohup:

• It makes the command immune to the SIGHUP signal.

• If the command produces any output that would normally go to the terminal, nohup redirects it to a file
named nohup.out in the current directory (or the user's home directory if the current directory is not
writable).

• You still use & to run it in the background.

nohup is particularly useful for long-running tasks that you want to continue even after you log out.
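A small sketch of the pattern (the command string and log path are illustrative). Note that nohup only creates nohup.out when output would otherwise go to the terminal, so it is common practice to redirect output explicitly, as done here:

```shell
# Launch a task immune to SIGHUP, capturing its output explicitly.
logdir=$(mktemp -d)                      # scratch directory for this demo
nohup sh -c 'echo started; sleep 1; echo finished' > "$logdir/task.log" 2>&1 &
wait $!                                  # in real use you would log out instead
cat "$logdir/task.log"                   # prints: started / finished
```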

Reduce Priority (nice)

The nice command allows you to adjust the priority of a process. In UNIX, each process has a nice value, which
ranges from -20 (highest priority) to 19 (lowest priority). The default nice value is typically 0.

A lower nice value gives the process a higher scheduling priority, meaning it gets more CPU time relative to other
processes. Conversely, a higher nice value gives the process a lower priority, making it "nicer" to other processes
by consuming less CPU time.

To start a command with a specific nice value:


nice -n 10 command_to_run    # Start with a nice value of 10 (lower priority)
nice -n 15 another_command   # Start with a nice value of 15

Many implementations also accept the historical BSD-style form: nice -5 command is equivalent to nice -n 5 command.

To change the nice value of an already running process, you can use the renice command. You need the process
ID (PID) of the process you want to renice:
renice 12 -p 1234 # Set the nice value of process with PID 1234 to 12

Only the superuser (root) can decrease the nice value (increase priority) of a process. Regular users can only
increase the nice value (decrease priority) of their own processes.
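You can observe nice values directly, because nice run with no operands prints the current niceness (GNU coreutils behavior; assumed here):

```shell
nice                                # prints the current nice value (typically 0)
nice -n 10 nice                     # the child runs 10 higher, so it prints base + 10
nice -n 10 sh -c 'nice -n 5 nice'   # increments accumulate: base + 15
```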

Using Signals to Kill Process

Signals are asynchronous notifications sent to a process by the operating system or other processes to indicate
that a particular event has occurred. Signals can be used to control the behavior of processes, including
terminating them.

The kill command is used to send signals to processes. The basic syntax is:
kill [options] <PID> [...]

The most commonly used signal is SIGTERM (signal number 15), which politely asks a process to terminate. Most
well-behaved processes will catch this signal and perform any necessary cleanup before exiting.
kill 1234 # Sends SIGTERM to process with PID 1234

If a process doesn't terminate after receiving SIGTERM, you can use the SIGKILL signal (signal number 9), which
forcefully terminates the process immediately. This signal cannot be caught or ignored by the process.
kill -9 1234
kill -KILL 1234

Caution: Using SIGKILL should be a last resort, as it doesn't give the process a chance to clean up, which can
potentially lead to data loss or system inconsistencies.

Other useful signals include:

• SIGHUP (1): Hangup signal, often sent when the controlling terminal is closed.

• SIGINT (2): Interrupt signal, usually generated by pressing Ctrl+C.

• SIGTSTP (20): Terminal stop signal, usually generated by pressing Ctrl+Z, which suspends the process.

• SIGCONT (18): Continue signal, used to resume a stopped process.

You can list available signals using kill -l.
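A minimal sketch of why SIGTERM is preferable to SIGKILL: a process can trap SIGTERM and clean up before exiting, as this child does (the commands are illustrative; any long-running job works the same way):

```shell
# The child backgrounds a worker, then traps SIGTERM to kill it and exit cleanly.
sh -c 'sleep 30 & trap "kill $!; echo cleaning up; exit 0" TERM; wait' &
pid=$!
sleep 1            # give the child a moment to install its trap
kill "$pid"        # kill sends SIGTERM (15) by default
wait "$pid"        # the child runs its trap and prints: cleaning up
```

Had we used kill -9 "$pid" instead, the trap would never run and no cleanup would happen.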

Sending Job to Background (bg) and Foreground (fg)

When you suspend a running job in the shell (e.g., using Ctrl+Z, which sends SIGTSTP), it stops executing and is
placed in the background. You can then use the bg command to resume a suspended job in the background:
bg %job_id

Here, %job_id is a job identifier (e.g., %1). If you have only one suspended job, you can simply type bg with no argument. You can see the job IDs using the jobs command.

To bring a background job back to the foreground, where it will have control of your terminal, you use the fg
command:
fg %job_id

Again, if there's only one background job, you can often just use fg.

Listing Jobs (jobs)

The jobs command displays the status of background jobs associated with the current shell session. The output
typically includes:

• Job ID: A number (e.g., [1], [2]) used to refer to the job.

• Status: The current state of the job (e.g., Running, Stopped, Done).

• Command: The command that was executed.

Example output:
[1] + Running long_running_command &
[2] - Stopped another_command

In this example, long_running_command is running in the background (job ID 1), and another_command is
stopped (job ID 2). The + indicates the current default job (often the most recently stopped or started
background job), and - indicates the previous default job.

Suspend Job

As mentioned earlier, you can suspend a foreground job by pressing Ctrl+Z. This sends the SIGTSTP signal to the
process, causing it to stop execution and return control to the shell. The job then becomes a stopped background
job, which you can later resume with bg or bring to the foreground with fg.

Kill a Job

You can kill a background job using the kill command along with the job ID. Remember to prefix the job ID with a
percent sign (%):
kill %1 # Sends SIGTERM to job with ID 1
kill -9 %2 # Sends SIGKILL to job with ID 2
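Putting jobs and %-style kill together (a sketch; interactive shells have job control enabled by default, while a script must turn it on with set -m):

```shell
set -m                  # enable job control (on by default in interactive shells)
sleep 60 &
sleep 90 &
jobs                    # lists both background jobs with their job IDs
kill %1 %2              # terminate them by job ID instead of PID
```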

Execute at Specified Time (at and batch)

The at and batch commands allow you to schedule commands to be executed at a later time.

at: The at command lets you specify an exact time and date for the command to be executed.
at 8:00 tomorrow

This will prompt you to enter the command(s) you want to run at 8:00 AM the next day. Press Ctrl+D to indicate
the end of the commands.

You can specify various time formats:

• now + 5 minutes

• 10:30 PM

• noon

• teatime (usually 4 PM)

• May 25

• next Monday

To view your pending at jobs, use atq. To delete a job, use atrm <job_number>, where <job_number> is the
number listed by atq.

batch: The batch command allows you to schedule jobs to be executed when the system load average falls below
a certain threshold. This is useful for less urgent tasks that you want to run when the system is less busy.

To submit a job for batch execution, you can pipe the command line (as text) to the batch command:
echo "./long_analysis_script.sh" | batch

Or you can invoke batch interactively and type the commands, ending with Ctrl+D:

batch
./another_script.sh
<Ctrl+D>

The system will then execute these commands when the load average is sufficiently low. You can view pending
batch jobs with atq (they might be listed with a different queue identifier) and remove them with atrm.

These commands (at and batch) are powerful tools for automating tasks and managing system resources
efficiently.



Review Questions
Short Answer Type Questions

1. What is a UNIX process, and how does it differ from a program?


Answer: A UNIX process is a dynamic instance of a computer program being executed. Unlike a static
program file containing instructions, a process is an active entity with its own memory space, resources,
and a program counter indicating the next instruction to be executed.

2. Name three key attributes commonly displayed by the ps command.


Answer: The ps command typically displays the PID (Process ID), a unique identifier; the USER who owns
the process; and the COMMAND that was executed to start the process, along with other attributes like
CPU and memory usage.
3. How can you display all processes running on a UNIX system, regardless of the user or terminal?
Answer: You can use the command ps aux or ps -ef. The aux options show all users' processes with more
detail, while -ef provides a full listing including parent process IDs.

4. What is the fundamental system call used by a parent process to create a new child process in UNIX?
Briefly explain its outcome.
Answer: The fork() system call is used. Upon successful execution, it creates a new process (the child)
that is a nearly identical copy of the parent. The parent receives the child's PID, while the child receives a
return value of 0.

5. After a fork() call, what system call is often used by the child process to execute a different program?
Answer: The exec() family of system calls (e.g., execve, execlp) is used. These calls replace the child
process's memory space with the code and data of the new program, allowing it to run a different
executable.

6. Outline the primary responsibilities of the init process (PID 1) during system startup.
Answer: The init process is the first process started by the kernel. Its main responsibilities include
initializing the system environment, starting essential system services and daemons, and managing
terminal logins via getty processes.

7. What is the role of the getty process in the shell creation sequence?
Answer: The getty process listens for connections on terminal lines, configures the terminal, and
presents the initial login prompt to the user, acting as an intermediary between the terminal and the
login program.

8. Describe the 'Sleeping' (S) process state in UNIX.


Answer: A process in the 'Sleeping' (S) state is waiting for an event to occur, such as the completion of an
I/O operation or the receipt of a signal. It is not actively using the CPU and is waiting in a wait queue.

9. Explain the 'Zombie' (Z) process state and its implications.


Answer: A 'Zombie' (Z) process has terminated its execution, but its entry remains in the process table
because the parent process hasn't yet acknowledged its termination using a wait() system call. A large
number of zombies can exhaust process IDs.

10. How do you run a command in the background using the shell, and what is a potential drawback?
Answer: You append an ampersand (&) to the end of the command (e.g., long_command &). A potential

drawback is that if you close your terminal, the background job might receive a SIGHUP signal and
terminate.

11. What is the purpose of the nohup command when running background jobs? Provide an example.
Answer: The nohup command prevents a background job from being terminated by a SIGHUP signal
when the user logs out or closes the terminal. Example: nohup ./long_script.sh & will run long_script.sh
in the background, immune to hangups.

12. What does the nice command do, and what is the range of nice values?
Answer: The nice command adjusts the priority of a process by altering its nice value. The nice value
ranges from -20 (highest priority) to 19 (lowest priority), with a default of 0. Higher values mean the
process is "nicer" to others.

13. How can you reduce the priority of an already running process with PID 5678?
Answer: You can use the renice command: renice 10 -p 5678. This will set the nice value of the process
with PID 5678 to 10, reducing its priority.

14. What is a signal in UNIX, and what command is used to send signals to processes?
Answer: A signal is an asynchronous notification sent to a process to indicate an event. The kill
command is used to send signals to processes, identified by their PID.

15. What is the difference between the SIGTERM and SIGKILL signals?
Answer: SIGTERM (signal 15) politely requests a process to terminate, allowing it to perform cleanup.
SIGKILL (signal 9) forcefully terminates a process immediately without giving it a chance to clean up.

16. How can you send a running foreground job to the background without terminating it?
Answer: You can press Ctrl+Z while the job is in the foreground. This sends a SIGTSTP signal, suspending
the job and placing it in the background (stopped). You can then use bg to resume it in the background.

17. What command is used to bring a background job with job ID 2 back to the foreground?
Answer: The fg command is used. To bring job ID 2 to the foreground, you would type fg %2.

18. How can you list all the background jobs associated with your current shell session?
Answer: The jobs command displays a list of the active background jobs, showing their job IDs, status
(e.g., Running, Stopped, Done), and the command that was executed.

19. What is the purpose of the at command? Provide a simple example to run backup.sh at 6 PM today.
Answer: The at command schedules commands to be executed at a specified time in the future.
Example: echo "./backup.sh" | at 6:00 PM. This will execute the backup.sh script at 6:00 PM on the
current day.

20. How does the batch command differ from the at command in scheduling job execution?
Answer: The batch command schedules jobs to run when the system's load average falls below a certain
threshold, making it suitable for non-time-critical tasks that should run when the system is less busy,
unlike at, which executes at a specific time.

Descriptive Type Questions

1. Explain the process creation cycle in detail, including the roles of fork() and exec() system calls. Provide
a scenario where this model is crucial.
Answer: The process creation cycle typically begins with the fork() system call, where a parent process
duplicates itself to create a child process. The child inherits most of the parent's state. Subsequently, the
child often uses an exec() system call to replace its memory space with a new program. This two-step
process is crucial for command execution in the shell. When you type a command, the shell forks a child,
and then the child uses exec() to run the command, allowing the shell to remain active and handle other
commands.

2. Describe the sequence of steps involved in the shell creation process from system boot-up (init) to a
user receiving a shell prompt. Explain the function of each process in this chain.
Answer: The shell creation begins with init (PID 1), the first process. init starts getty (or a similar
program) on terminal lines. getty listens for connections, configures the terminal, and presents the
"login:" prompt. Upon entering credentials, getty executes the login program, which authenticates the
user. If successful, login sets up the user's environment and finally executes the user's default shell (e.g.,
bash). This chain ensures a secure and managed login process, culminating in an interactive shell for the
user.

3. Elaborate on the different states a UNIX process can be in, providing a brief description of each and
the typical reasons for a process being in that state.
Answer: UNIX processes can be in various states. Running (R) means the process is currently executing
on the CPU or ready to run. Sleeping (S) indicates the process is waiting for an event like I/O completion.
Disk Sleep (D) is similar but specifically waiting for disk I/O and is usually uninterruptible. Stopped (T)
signifies the process has been paused, often by a signal like SIGTSTP. Zombie (Z) means the process has
terminated but its resources haven't been cleaned up by the parent. Understanding these states helps in
diagnosing system performance and process behavior.

4. Discuss the implications of zombie processes on a UNIX system and explain how they arise. What
steps can be taken to mitigate issues caused by them?
Answer: Zombie processes, while consuming minimal resources, retain their PID in the process table. If a
parent process fails to call wait() on its terminated children, these zombies accumulate, potentially
exhausting the system's limit on the number of processes that can be created. They arise due to poor
programming or when parent processes terminate prematurely without reaping their children.
Mitigation involves ensuring parent processes properly handle child termination using wait() or
waitpid(). In cases of orphaned zombies, the init process eventually adopts and reaps them.

5. Explain the use of the & operator and the nohup command for managing background jobs. Highlight
the scenarios where each would be most appropriate and the key differences between them.
Answer: The & operator sends a command to run in the background, allowing the user to continue using
the terminal. However, these jobs are still tied to the terminal session and can be terminated by a
SIGHUP upon logout. Example: find / -name "*.log" &. nohup is used to make a background job immune
to SIGHUP signals, ensuring it continues running even after the user logs out. Output is typically
redirected to nohup.out. Example: nohup ./long_process.sh &. Use & for short background tasks within
a session; use nohup for long-running tasks that need to persist beyond the current session.

6. Describe the nice and renice commands, explaining how they affect process scheduling priority.
Provide an example of when you might use each command and the limitations faced by non-root
users.
Answer: The nice command sets the initial priority (nice value) of a new process. A higher nice value
(closer to 19) lowers the priority, making the process "nicer" to others by consuming less CPU. Example:
nice -n 15 ./cpu_intensive_task. renice modifies the nice value of a running process, requiring the process
ID. Example: renice -5 -p 1234 (attempts to increase priority). Non-root users can only increase the nice
value (lower priority) of their own processes; only root can decrease it (increase priority).

7. Explain the concept of signals in UNIX process management. Describe at least three commonly used
signals and how they can be used to control or terminate processes using the kill command.
Answer: Signals are asynchronous notifications sent to processes to indicate events. SIGINT (2), typically
generated by Ctrl+C, interrupts a foreground process. Example: kill -2 5678. SIGTERM (15) politely
requests a process to terminate, allowing cleanup. Example: kill 9876. SIGKILL (9) forcefully terminates a
process immediately. Example: kill -KILL 4321. Signals are a fundamental mechanism for inter-process
communication and process control.

8. Detail the process of moving a job between the foreground and background in a UNIX shell. Include
the commands used and the signals involved when suspending and resuming jobs.
Answer: A foreground job can be suspended by pressing Ctrl+Z, which sends the SIGTSTP signal,
stopping the process and returning control to the shell. The jobs command lists suspended and
background jobs. To resume a suspended job in the background, use bg %job_id. To bring a background
job to the foreground, use fg %job_id. These commands allow users to manage interactive and long-
running tasks efficiently without needing multiple terminal windows.

9. Compare and contrast the at and batch commands for scheduling future command execution. Discuss
the scenarios where using one might be more advantageous than the other, providing illustrative
examples.
Answer: Both at and batch schedule commands for later execution. at executes commands at a specific
time and date. Example: echo "date > logfile" | at 3 AM tomorrow. This is useful for time-critical tasks.
batch executes commands when the system load average is low. Example: echo "./resource_intensive_task" | batch. This is advantageous for tasks that can wait for system resources to be less utilized, preventing
performance impact during peak times. at is time-driven, while batch is resource-driven.

10. Describe a scenario where you might need to use a combination of backgrounding a process (&),
making it immune to hangups (nohup), and adjusting its priority (nice or renice). Explain the rationale
behind using each of these techniques in your scenario.
Answer: Imagine running a long-term data analysis script (analyze_data.py) on a remote server via SSH.
You anticipate the analysis will take several hours and you might lose your SSH connection. To ensure the
script completes uninterrupted, you would use nohup ./analyze_data.py & to run it in the background
and make it immune to SIGHUP. Furthermore, to prevent this CPU-intensive task from impacting other
users on the shared server, you might reduce its priority by starting it as nohup nice -n 10 ./analyze_data.py &. If you later
notice it's still consuming too many resources, you could use renice 15 -p <PID_of_analyze_data.py> to
further lower its priority. This combination ensures the task runs reliably in the background without
negatively impacting system performance or being terminated by connection loss.


Chapter 07
Customization


7.1 Customizing the Environment

Environment variables are dynamic named values that can affect the way running processes will behave on a
computer. They provide a way to pass configuration information to applications. Think of them as global settings
that programs can look up to determine things like where to find files, what kind of terminal you're using, or
even the prompt you see.

Key aspects of environment variables:

• Name-Value Pairs: Each environment variable has a name and an associated value, like
PATH=/usr/local/bin:/usr/bin:/bin.

• Inheritance: When a process creates a new process (its child), the child inherits a copy of the parent's
environment variables. This allows settings to propagate down the process tree.

• Global Scope (within a session): Once set within a shell session, an environment variable is generally
accessible by any command executed within that session or any child processes it creates.

• Configuration: They are often used to configure application behavior without needing to modify the
application's code directly.

You can interact with environment variables using shell commands:

• printenv or env: To display a list of all currently set environment variables and their values.

• echo $VARIABLE_NAME: To display the value of a specific environment variable (e.g., echo $HOME).

• export VARIABLE_NAME=value: To set or modify an environment variable and mark it for export,
meaning it will be passed on to child processes.

• VARIABLE_NAME=value: To set or modify an environment variable for the current shell. To make it
available to child processes, you'll usually need to export it.

• unset VARIABLE_NAME: To remove an environment variable.
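The difference between a plain assignment and an exported variable shows up in child processes. A minimal sketch (GREETING is an illustrative name; each sh -c is a child process):

```shell
GREETING="hello"                        # shell variable, not yet exported
sh -c 'echo "child sees: [$GREETING]"'  # child sees: []  (not inherited)
export GREETING                         # mark it for export
sh -c 'echo "child sees: [$GREETING]"'  # child sees: [hello]
unset GREETING                          # remove it again
```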

Some Common Environment Variables

Let's look at some frequently encountered environment variables:

1. HOME: Specifies the absolute path to the current user's home directory. This is often where personal
configuration files and data are stored.

o Example: If your username is "john", HOME might be set to /home/john.

o Usage: Many applications and shell commands (like cd without arguments) use $HOME to
quickly navigate to the user's home directory.

2. PATH: A colon-separated list of directories that the shell searches through when you enter a command
without specifying its full path.

o Example: PATH=/usr/local/bin:/usr/bin:/bin:/opt/myprogram/bin

o Usage: When you type ls, the shell looks in each directory listed in PATH in order until it finds an
executable file named ls. This allows you to run commands from different locations without
typing their full path every time.

3. LOGNAME or USER: Stores the username of the current user. LOGNAME is often set by the login
process, while USER might be set by other means. They usually hold the same value.

o Example: If you logged in as "john", both LOGNAME and USER would likely be set to "john".

o Usage: Applications can use these variables to identify the user running them. Your shell prompt
might also include this information.

4. TERM: Specifies the type of terminal or terminal emulator you are using (e.g., xterm, vt100, screen,
tmux).

o Example: TERM=xterm-256color

o Usage: Applications, especially those that manipulate the terminal display (like text editors such
as vim or nano), use $TERM to determine the terminal's capabilities and how to properly format
output and handle input.

5. PWD: Holds the absolute path of the current working directory. This is the directory you are currently
"in" when you use the shell.

o Example: If you are in the "documents" directory within your home directory, PWD might be
/home/john/documents.

o Usage: The shell updates $PWD automatically as you navigate the file system using cd. Many
commands and scripts rely on the current working directory.

6. PS1 (Primary Prompt String): Defines the appearance of your primary shell prompt – the prompt you
see when the shell is ready to accept a command.

o Example: A common PS1 might be \u@\h:\w\$, which could render as
john@myhost:~/documents$. The backslash-escaped characters have special meanings (e.g., \u
for username, \h for hostname, \w for the current working directory).

o Usage: You can customize $PS1 extensively to include information like the username, hostname,
current directory, time, and even colors, making your command line more informative and
visually appealing.

7. PS2 (Secondary Prompt String): Defines the appearance of the secondary shell prompt, which is
displayed when the shell expects more input to complete a command (e.g., after you've started a multi-
line command or an incomplete statement).

o Example: The default PS2 is often >.

o Usage: If you enter a command that requires more input (like a for loop without its done), the
shell will display $PS2 on subsequent lines until the command is complete. You can customize
this as well, though it's less common than customizing $PS1.
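The PATH search described above can be demonstrated end to end. This sketch creates a throwaway script named mytool in a temporary directory (both names are illustrative) and prepends that directory to PATH so the command runs by name alone:

```shell
bindir=$(mktemp -d)                              # private bin directory for the demo
printf '#!/bin/sh\necho hello from mytool\n' > "$bindir/mytool"
chmod +x "$bindir/mytool"
export PATH="$bindir:$PATH"                      # prepend so it is searched first
mytool                                           # found via PATH: hello from mytool
```

Prepending (rather than appending) means your directory is searched before the system ones, which is also why a stray directory early in PATH can shadow standard commands.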

You can view the current values of these variables using echo:
echo $HOME
echo $PATH
echo $LOGNAME
echo $TERM
echo $PWD
echo "$PS1"
echo "$PS2"
You can experiment with changing PS1 to personalize your shell prompt. For example:
export PS1="MyShell> "

Your prompt will now look like MyShell>. Remember that changes made directly in the shell are usually temporary
and will only last for the current session. To make them permanent, you typically need to add the export
commands to your shell's configuration file (e.g., .bashrc for Bash, .zshrc for Zsh).

Aliases: Aliases are shortcuts or alternative names that you can define for commands. They allow you to replace
long or frequently used commands with shorter, more convenient names or to add default options to commands.

Key aspects of aliases:

• Shell Feature: Aliases are a feature of the shell itself (like Bash, Zsh, etc.).

• Temporary (by default): Aliases defined directly in the shell are usually only active for the current
session.

• Customization: They provide a powerful way to tailor your command-line experience.

You can work with aliases using these shell commands:

• alias: Without any arguments, it displays a list of currently defined aliases.

• alias short_name='long command with options': To define a new alias. The alias name should not
contain spaces, and the command it represents should be enclosed in single quotes.

• unalias short_name: To remove a previously defined alias.

Here are some examples of common alias uses:

Shorter names for frequent commands:


alias ll='ls -l'
alias la='ls -a'
alias l='ls -CF'

Now, instead of typing ls -l to get a detailed listing, you can just type ll.

Adding default options to commands:


alias rm='rm -i' # Make rm interactive by default (prompts before deleting)
alias cp='cp -i' # Make cp interactive by default
alias mv='mv -i' # Make mv interactive by default

These aliases add the -i (interactive) option to rm, cp, and mv, which can help prevent accidental file deletions or
overwrites.

Combining multiple commands:
alias update='sudo apt update && sudo apt upgrade' # For Debian/Ubuntu systems

Now, typing update will run both the package list update and the package upgrade commands.

To make aliases persistent across shell sessions, you need to add the alias commands to your shell's
configuration file (e.g., .bashrc for Bash, .zshrc for Zsh). These files are typically executed when you start a new
shell session.

Brief Idea of Command History

The command history is a feature of the shell that keeps a record of the commands you have entered in the
current and previous shell sessions. This allows you to easily recall and reuse commands, saving you typing time
and effort.

Key aspects of command history:

• Storage: The shell typically stores your command history in a file in your home directory (e.g.,
.bash_history for Bash, .zsh_history for Zsh).

• Recall: You can access your command history using various methods:

o Up and Down Arrow Keys: Pressing the up arrow key usually cycles backward through your
history, showing you the previously entered commands. The down arrow key cycles forward.

o history command: Typing history displays a numbered list of your recent commands. You can
often use options with history to control the output (e.g., history 10 to see the last 10
commands).

o Reverse Search (Ctrl+R): In many shells (like Bash and Zsh), pressing Ctrl+R allows you to search
backward through your history for a command containing specific text. You can press Ctrl+R
repeatedly to cycle through matches.

o Bang (!) Commands: The ! character followed by certain specifiers allows you to execute
commands from your history:

▪ !!: Executes the last command.

▪ !n: Executes the command with the number n from the history list.

▪ !-n: Executes the nth command from the end of the history list.

▪ !string: Executes the most recent command that starts with string.

▪ !?string?: Executes the most recent command that contains string.

• Customization: You can often customize aspects of your command history, such as the number of
commands to store, whether to save commands immediately after execution, and whether to ignore
certain commands. These settings are usually configured in your shell's configuration file (e.g., .bashrc,
.zshrc).

The command history is an invaluable tool for productivity on the command line, allowing you to quickly repeat,
modify, and reuse past commands without retyping them from scratch.



Review Questions
Short Answer Type Questions

1. What is an environment variable in Unix, and why is it useful for customization?


Ans: An environment variable in Unix is a dynamic named value that can affect the way running
processes will behave on a computer. They are incredibly useful for customization as they allow users to
tailor their working environment, such as defining preferred directories or command prompts, without
directly modifying system or application code.

2. Explain the purpose of the HOME environment variable.


Ans: The HOME environment variable specifies the absolute path to the current user's home directory.
This is typically where a user's personal files, configuration files, and directories are stored. It's crucial for
customization as many applications and commands, like cd without arguments, rely on it to locate user-
specific settings.

3. What role does the PATH environment variable play in executing commands?
Ans: The PATH environment variable is a colon-separated list of directories that the shell searches
through when you enter a command without specifying its full path. Customizing the PATH allows users
to easily execute their own scripts or installed programs located in non-standard directories without
typing the complete path each time.

4. Describe the information stored in the LOGNAME environment variable.


Ans: The LOGNAME environment variable simply stores the username of the user who is currently
logged into the system. While not directly used for extensive customization, it can be accessed by scripts
or applications that need to identify the current user.

5. What information does the USER environment variable typically hold?


Ans: Similar to LOGNAME, the USER environment variable also holds the username of the currently
logged-in user. In many Unix-like systems, LOGNAME and USER often contain the same value and serve a
similar purpose for identification within the system.

6. Explain the significance of the TERM environment variable.


Ans: The TERM environment variable specifies the type of terminal or terminal emulator being used.
This information is vital for applications, especially those that display text-based interfaces or use special
characters, to adapt their output correctly to the capabilities of the specific terminal.

7. What does the PWD environment variable represent?


Ans: The PWD environment variable stands for "Present Working Directory" and stores the absolute
path of the directory the user is currently working in within the file system. It's dynamically updated as
the user navigates through directories using commands like cd.

8. Describe the function of the PS1 environment variable.


Ans: The PS1 environment variable defines the primary command prompt that is displayed by the shell.
Users can extensively customize PS1 to include information like their username, hostname, current
directory, and special characters, making their command line interface more informative and
personalized.

9. What is the purpose of the PS2 environment variable?


Ans: The PS2 environment variable defines the secondary command prompt, which is displayed when
the shell expects more input to complete a command (e.g., after pressing Enter in the middle of a multi-
line command). It often appears as a simple ">" and can also be customized.

10. What is an alias in Unix, and how does it aid customization?
Ans: An alias in Unix is a shortcut or an alternative name for a command. It allows users to define
shorter, more memorable names for frequently used commands or to include default options with those
commands, significantly enhancing efficiency and personalizing the command-line experience.

11. How can you view the current values of all environment variables in your Unix session?
Ans: You can view the current values of all environment variables in your Unix session by using the
command env or printenv. These commands will display a list of all the defined environment
variables and their corresponding values.

12. Explain how to set a new environment variable temporarily in your current shell session.
Ans: To set an environment variable temporarily in your current shell session, use export
VARIABLE_NAME=value. For example, export MYVAR=hello makes MYVAR available to the current shell and
to any programs it launches for the rest of the session; without export, MYVAR=hello creates a plain shell
variable that child processes do not inherit. Either way, the variable is not persistent across new sessions.

13. How can you make an environment variable persistent across multiple login sessions?
Ans: To make an environment variable persistent, you typically need to add its definition to one of the
shell's configuration files, such as .bashrc, .zshrc, or .profile in your home directory. After adding the line
(e.g., export MYVAR=hello), you need to source the file (e.g., source ~/.bashrc) or log out and log back in
for the changes to take effect.

14. Give an example of how you might use an alias to customize a common command.
Ans: For example, you might create an alias for ls -l that always shows hidden files and human-readable
file sizes by defining alias l='ls -lah'. Now, simply typing l in the terminal will execute ls -lah.

15. How do you list the currently defined aliases in your Unix shell?
Ans: You can list all the currently defined aliases in your Unix shell by simply typing the command alias
and pressing Enter. This will display a list of all active aliases and their corresponding command
substitutions.

16. Explain how to remove a previously defined alias in your current shell session.
Ans: To remove a previously defined alias in your current shell session, you can use the unalias
command followed by the name of the alias you want to remove. For example, unalias l would remove
the alias we defined earlier for ls -lah.

17. What is command history in Unix, and how can you access previously executed commands?
Ans: Command history in Unix is a feature that records the commands you have previously executed in
the shell. You can access this history using the history command, or by using the up and down arrow
keys to navigate through the previously entered commands.

18. How can you re-execute the last command you entered in the Unix shell?
Ans: You can re-execute the last command you entered in the Unix shell using a few shortcuts: !! or by
pressing the up arrow key once and then pressing Enter.

19. How can you execute the command at a specific position in your command history?
Ans: You can execute a command at a specific position in your command history by using the !n syntax,
where n is the line number of the command as displayed by the history command. For example, if
history shows command number 15 as ls -l, you can execute it again by typing !15.

20. Describe a scenario where customizing the PS1 environment variable can be particularly useful.
Ans: Customizing the PS1 environment variable is particularly useful in scenarios where you frequently
work on multiple servers or within different virtual environments. By including the hostname or the
name of the active environment in your prompt, you can quickly and easily identify the system you are

currently interacting with, reducing the risk of executing commands on the wrong machine or
environment.

Descriptive Type Questions

1. Explain in detail how environment variables contribute to the flexibility and customization capabilities
of the Unix operating system. Provide at least two distinct examples of common environment variables
and how their modification can alter the behaviour of commands or applications.
Ans: Environment variables are fundamental to Unix's flexibility, acting as dynamic configuration
parameters that influence how processes execute. They allow users and the system to communicate
configuration information to applications without altering their core code. For instance, modifying the
PATH variable allows users to extend the shell's search directories for executables. If a user has custom
scripts in ~/bin, adding this directory to PATH (export PATH=$PATH:~/bin) enables direct execution of these
scripts by name. Another example is the EDITOR variable. Many utilities, like git or crontab, use this
variable to determine the default text editor. By setting export EDITOR=vim, the user ensures that vim is
launched whenever these tools require text editing, personalizing their workflow.

2. Discuss the importance of the HOME and PATH environment variables in a multi-user Unix environment.
How do these variables ensure that different users have personalized experiences and can execute
commands effectively?
Ans: In a multi-user Unix environment, HOME and PATH are crucial for providing personalized and
functional experiences for each user. The HOME variable uniquely identifies each user's personal
workspace, ensuring that their files and configurations are kept separate and accessible only to them.
When a user logs in, they are automatically placed in their home directory. The PATH variable allows each
user to have their own set of preferred executable locations. System administrators typically set a default
PATH, but individual users can customize it to include directories containing their own scripts or locally
installed software, without affecting other users' ability to execute standard system commands. This
separation ensures both privacy and the ability for individual workflow customization.


Chapter 08
Filter


8.1 Introduction to Filters

In Unix, a filter is a command that processes data from standard input (stdin), performs a specific operation on
that data, and then writes the processed data to standard output (stdout). Filters are fundamental tools in Unix-
like operating systems, designed to work in a pipeline. A pipeline is a sequence of commands where the output
of one command becomes the input of the next command, connected by the pipe symbol (|). This modular
approach allows users to combine simple commands to perform complex data manipulations.

Filters are essential because they promote the Unix philosophy of "do one thing and do it well." By chaining
these small, specialized tools, users can efficiently manipulate text and data.
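As a small illustration of this philosophy, the following sketch (with sample data created inline) chains simple filters — each covered later in this chapter — to count the distinct words in a line of text:

```shell
# Build a tiny sample file, then chain filters with pipes:
# tr puts each word on its own line, sort groups identical words
# together, uniq drops the duplicates, and wc -l counts the rest.
printf 'the quick fox the lazy fox\n' > sample.txt
tr ' ' '\n' < sample.txt | sort | uniq | wc -l   # prints the count, 4
rm -f sample.txt
```

No single command here does anything complicated; the pipeline as a whole performs the analysis.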

Prepare file for printing (pr): The pr command formats files for printing. It adds headers, footers, page numbers,
and can paginate the file, making it suitable for printing on paper. Example:
pr myfile.txt

This command formats myfile.txt and displays it on the terminal.


pr -l 60 -h "My Report" data.txt | lp

This command formats data.txt with 60 lines per page, adds the header "My Report," and sends the output to
the default printer using the lp command.

head: Displaying the Beginning of a File

The head command is used to display the first few lines of a file or standard input. By default, it shows the first
10 lines.

Syntax:
head [OPTION]... [FILE]...

Common Options:

• -n NUM: Display the first NUM lines.

• -c NUM: Display the first NUM bytes.

• -q, --quiet, --silent: Suppress printing of headers when multiple files are specified.

• -v, --verbose: Always print headers giving file names.

Examples:

1. Display the first 10 lines of data.txt:


head data.txt

2. Display the first 5 lines of log.txt:


head -n 5 log.txt

3. Display the first 50 bytes of image.jpg:

head -c 50 image.jpg

4. Display the first 3 lines of multiple files without headers:


head -q -n 3 file1.txt file2.txt

tail: Displaying the End of a File

The tail command is the counterpart to head. It displays the last few lines of a file or standard input. By default,
it shows the last 10 lines.

Syntax:
tail [OPTION]... [FILE]...

Common Options:

• -n NUM: Display the last NUM lines.

• -c NUM: Display the last NUM bytes.

• -f, --follow[={name|descriptor}]: Output appended data as the file grows. This is useful
for monitoring log files in real-time.

• -n +NUM: Display lines starting from line number NUM. (The historical form tail +NUM is
obsolete; modern implementations generally require -n +NUM.)

• -q, --quiet, --silent: Suppress printing of headers when multiple files are specified.

• -v, --verbose: Always print headers giving file names.

Examples:

1. Display the last 10 lines of access.log:


tail access.log

2. Display the last 20 lines of system.log:


tail -n 20 system.log

3. Follow the changes in error.log in real-time:


tail -f error.log

4. Display lines of report.txt starting from the 15th line:


tail -n +15 report.txt
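head and tail are also frequently combined to pull a range out of the middle of a file. A sketch using seq to generate sample input:

```shell
# seq generates 100 numbered lines; head keeps lines 1-15,
# and tail then keeps the last 5 of those, i.e. lines 11-15.
seq 100 > numbers.txt
head -n 15 numbers.txt | tail -n 5
rm -f numbers.txt
```

The same pattern works for any range: lines M through N are `head -n N file | tail -n $((N-M+1))`.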

cut: Extracting Sections from Each Line

The cut command is used to extract specific columns (fields) or characters from each line of a file or standard
input. It's particularly useful for working with delimited data.

Syntax:
cut OPTION... [FILE]...

Common Options:

• -b LIST: Select only these bytes.

• -c LIST: Select only these characters.

• -f LIST: Select only these fields.

• -d DELIM: Use DELIM as the field delimiter instead of the default tab.

• -s, --only-delimited: Do not print lines not containing delimiters (when used with -f).

LIST Format:

LIST can be a single number, a range (e.g., n-m), or a comma-separated list of numbers or ranges.

Examples:

Assume you have a file data.csv with the following content (comma-separated):
Name,Age,City
Alice,30,New York
Bob,25,London
Charlie,35,Paris

1. Extract the names (first field):


cut -d ',' -f 1 data.csv

Output:
Name
Alice
Bob
Charlie
2. Extract the names and cities (first and third fields):
cut -d ',' -f 1,3 data.csv

Output:
Name,City
Alice,New York
Bob,London
Charlie,Paris
3. Extract the age and city (second and third fields):
cut -d ',' -f 2-3 data.csv

Output:
Age,City
30,New York
25,London
35,Paris

4. Extract characters from the 1st to the 5th position of each line in text.txt:
cut -c 1-5 text.txt
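Like other filters, cut also reads standard input, so it is easy to experiment without creating a file. A quick sketch with one colon-separated record:

```shell
# Keep the first and fourth colon-separated fields of one record.
# cut joins the selected fields with the same delimiter on output.
echo 'alice:x:1001:students' | cut -d ':' -f 1,4   # prints alice:students
```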

paste: Merging Lines of Files

The paste command is used to merge corresponding lines of one or more files. By default, it joins the lines with a
tab character.

Syntax:
paste [OPTION]... [FILE]...

Common Options:

• -d DELIM: Use DELIM as the separator instead of a tab.

• -s, --serial: Paste all lines of each file in turn, one after the other, instead of in parallel.

Examples:

Assume you have two files, names.txt:


Alice
Bob
Charlie

and ages.txt:
30
25
35

1. Paste the two files together (default tab delimiter):


paste names.txt ages.txt
Output:
Alice 30
Bob 25
Charlie 35

2. Paste the files with a comma as the delimiter:


paste -d ',' names.txt ages.txt
Output:
Alice,30
Bob,25
Charlie,35
3. Paste all lines of each file serially:
paste -s names.txt ages.txt
Output:
Alice Bob Charlie
30 25 35

sort: Ordering Lines of Text Files

The sort command is used to sort the lines of a text file or standard input. By default, it sorts lexicographically
based on the entire line.

Syntax:
sort [OPTION]... [FILE]...

Common Options:

• -n, --numeric-sort: Sort according to numeric value.

• -r, --reverse: Reverse the result of the sort.

• -k POS1[,POS2]: Sort by a key; start at POS1, optionally end at POS2 (field.character).

• -t SEP, --field-separator=SEP: Use SEP as the field delimiter.

• -u, --unique: With -c, check that only a single instance of each line appears. Without -c, output
only the first of an equal run.

• -f, --ignore-case: Fold lower case to upper case characters.

• -h, --human-numeric-sort: Compare human readable numbers (e.g., 2K, 1G).

Examples:

Assume you have a file unsorted.txt:


banana
apple
cherry
date
elderberry
1. Sort the lines alphabetically (default):
sort unsorted.txt
Output:
apple
banana
cherry
date
elderberry
2. Sort in reverse alphabetical order:
sort -r unsorted.txt
Output:
elderberry
date
cherry
banana
apple
Assume you have a file numbers.txt:
10
2
100
1
25
3. Sort numerically:
sort -n numbers.txt
Output:
1
2
10
25
100

Assume you have a file data.csv (from the cut example):


Name,Age,City
Alice,30,New York
Bob,25,London
Charlie,35,Paris
4. Sort by age (second field, numerically, using comma as delimiter):
sort -t ',' -k 2n data.csv
Output:
Name,Age,City
Bob,25,London
Alice,30,New York
Charlie,35,Paris

Note that the header line appears first: under a numeric sort, a key that does not begin with a
number (here "Age") is treated as 0.
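Because the header line takes part in the sort, a common idiom is to print the header separately and sort only the data rows. A sketch that recreates the same data.csv inline:

```shell
# Recreate data.csv, keep the header line as-is, and numerically
# sort only the remaining rows on the second (Age) field.
printf 'Name,Age,City\nAlice,30,New York\nBob,25,London\nCharlie,35,Paris\n' > data.csv
head -n 1 data.csv                        # the header row
tail -n +2 data.csv | sort -t ',' -k 2n   # data rows, youngest first
rm -f data.csv
```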

uniq: Removing Duplicate Lines

The uniq command filters adjacent matching lines from a file or standard input. It only removes duplicates that
appear consecutively. Therefore, it's often used in conjunction with sort to remove all duplicate lines.

Syntax:
uniq [OPTION]... [INPUT [OUTPUT]]

Common Options:

• -c, --count: Prefix each output line with the count of occurrences.

• -d, --repeated: Only output duplicate lines, one for each group.
• -D, --all-repeated[=METHOD]: Print all duplicate lines. METHOD can be separate (default),
prepend, or append.

• -i, --ignore-case: Ignore differences in case when comparing lines.

• -f NUM, --skip-fields=NUM: Avoid comparing the first NUM fields. Fields are separated by whitespace.

• -s NUM, --skip-chars=NUM: Avoid comparing the first NUM characters.

• -w NUM, --check-chars=NUM: Compare no more than NUM characters in lines.

Examples:

Assume you have a file duplicates.txt:


apple
banana
banana
cherry
apple
apple
date
1. Remove adjacent duplicate lines:
uniq duplicates.txt
Output:
apple
banana
cherry
apple
date

2. Remove all duplicate lines (by sorting first):


sort duplicates.txt | uniq
Output:
apple
banana
cherry
date
3. Count the occurrences of each unique line:
sort duplicates.txt | uniq -c
Output:
2 apple
2 banana
1 cherry
1 date
4. Only show the duplicate lines:
sort duplicates.txt | uniq -d
Output:
apple
banana

Piping Filters Together

The real power of these filters comes when you combine them using the pipe (|) operator. This allows you to
perform complex data transformations in a concise and readable way.

Example:

Suppose you have a log file web_access.log where each line contains information about a web request, including
the IP address as the first field (space-separated). You want to find the top 5 most frequent IP addresses.
cut -d ' ' -f 1 web_access.log | sort | uniq -c | sort -nr | head -n 5

Let's break down this pipeline:

1. cut -d ' ' -f 1 web_access.log: Extracts the first field (IP address) from each line, using space as the
delimiter.

2. sort: Sorts the IP addresses so that identical addresses become adjacent, which uniq requires.

3. uniq -c: Counts the occurrences of each unique IP address. The output will be like: 12 192.168.1.100, 5
10.0.0.5.

4. sort -nr: Sorts the output numerically (-n) in reverse order (-r), based on the counts (which are at the
beginning of each line).

5. head -n 5: Takes the first 5 lines of the sorted output, giving you the top 5 most frequent IP addresses.

This example demonstrates how you can chain these simple filters together to achieve a more complex data
analysis task.
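The pipeline can be tried on fabricated data; the IP addresses below are made up for illustration:

```shell
# Three fake log lines: 10.0.0.5 appears twice, 192.168.1.9 once.
printf '10.0.0.5 GET /\n10.0.0.5 GET /a\n192.168.1.9 GET /\n' > web_access.log
# The most frequent address (10.0.0.5) is listed first, with its count.
cut -d ' ' -f 1 web_access.log | sort | uniq -c | sort -nr | head -n 5
rm -f web_access.log
```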

Manipulating characters using tr

The tr command translates or deletes characters. It can replace one set of characters with another or remove
specific characters from the input. Example:
tr 'a-z' 'A-Z' < myfile.txt

This command translates all lowercase letters in myfile.txt to uppercase. The < redirects the input of tr from the
file.
tr -d '0-9' < data.txt

This command deletes all digits from data.txt.


tr -s ' ' < text.txt

This command squeezes each run of multiple spaces into a single space in text.txt.
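Because tr reads only standard input, it pairs naturally with echo for quick experiments:

```shell
# Uppercase, delete digits, and squeeze repeated spaces, one line each.
echo 'unix 2024  rocks' | tr 'a-z' 'A-Z'   # UNIX 2024  ROCKS
echo 'unix 2024  rocks' | tr -d '0-9'      # unix   rocks
echo 'unix 2024  rocks' | tr -s ' '        # unix 2024 rocks
```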


grep: Searching for Pattern


grep (Global Regular Expression Print) is a powerful command-line utility in Unix-like operating systems used for
searching plain-text data sets for lines that match a regular expression. Its power lies in its ability to use regular
expressions for complex pattern matching, allowing users to find very specific information within files or
command output.

Searching Patterns using grep

The basic syntax of the grep command is:


grep [options] pattern [file...]

• options: These are optional flags that modify grep's behavior (e.g., -i for case-insensitive, -n for line
numbers, -v to invert the match).

• pattern: This is the search pattern, which can be a simple string or a regular expression. It's often best to
enclose the pattern in single quotes to prevent the shell from interpreting special characters.

• file...: This specifies the file(s) to search. If no file is specified, grep reads from standard input.

Basic grep Examples:

Searching for a simple string:


grep "error" /var/log/syslog

This will display all lines in /var/log/syslog that contain the word "error".

Case-insensitive search:
grep -i "Error" /var/log/syslog

This will find "Error", "error", "ERROR", etc.

Inverting the match (lines that do not contain the pattern):


grep -v "info" /var/log/messages

This will display all lines in /var/log/messages that do not contain "info".

Displaying line numbers:


grep -n "warning" app.log

This will show lines containing "warning" in app.log and prefix each matching line with its line number.

Searching in multiple files:


grep "failed" *.log

This will search for "failed" in all files ending with .log in the current directory.

Piping output to grep:


ls -l | grep "myfile"

This pipes the long listing of files to grep, which then filters for lines containing "myfile".

Regular Expressions (Regex) with grep

The real power of grep comes from its ability to use regular expressions. Regular expressions are sequences of
characters that define a search pattern. There are different "flavors" of regular expressions, primarily Basic Regular
Expressions (BRE) and Extended Regular Expressions (ERE).

Basic Regular Expression (BRE)

BRE is the default regular expression syntax used by grep when no specific option is provided (i.e., just grep). In
BRE, some special characters (metacharacters) require a backslash (\) to be interpreted as special. If they are not
escaped, they are treated as literal characters.

Common BRE Metacharacters and their meanings:

• . (Dot): Matches any single character (except newline).

o grep "a.c" file.txt will match "abc", "axc", "a1c", etc.

• * (Asterisk): Matches zero or more occurrences of the preceding character or regular expression.

o grep "a*b" file.txt will match "b", "ab", "aab", "aaab", etc.

• ^ (Caret): Matches the beginning of a line.

o grep "^start" file.txt will match lines that begin with "start".

• $ (Dollar sign): Matches the end of a line.

o grep "end$" file.txt will match lines that end with "end".

• [] (Bracket expressions): Matches any single character within the brackets.

o grep "[aeiou]" file.txt will match lines containing any vowel.

o grep "[0-9]" file.txt will match lines containing any digit.

o grep "[^0-9]" file.txt will match lines not containing any digit. (The ^ inside brackets negates the
set).

• \< and \> (Word boundaries): Match the beginning and end of a word, respectively.

o grep "\<the\>" file.txt will match the whole word "the", not "there" or "other".

• \(...\) (Grouping): Groups regular expressions. In BRE, you need to escape the parentheses.

o grep "\(ab\)*c" file.txt will match "c", "abc", "ababc", etc. (Zero or more occurrences of "ab"
followed by "c").

• \{n\} \{n,\} \{n,m\} (Quantifiers): In BRE, you need to escape the curly braces.

o \{n\}: Matches exactly n occurrences. grep "a\{3\}" file.txt matches "aaa".

o \{n,\}: Matches at least n occurrences. grep "a\{2,\}" file.txt matches "aa", "aaa", etc.

o \{n,m\}: Matches between n and m occurrences (inclusive). grep "a\{1,3\}" file.txt matches "a",
"aa", "aaa".
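These escaped quantifiers can be verified on a few generated lines, a quick sketch:

```shell
# Only lines containing two or more consecutive a's match a\{2,\}.
printf 'a\naa\naaa\nb\n' | grep 'a\{2,\}'   # prints aa and aaa
```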

Extended Regular Expression (ERE) and egrep / grep -E

Extended Regular Expressions (ERE) provide additional features and make some common operations simpler by
not requiring backslashes for certain metacharacters that do require them in BRE.

Historically, egrep was a separate command specifically designed to use ERE. However, modern grep
implementations include the functionality of egrep via the -E option. So, egrep is essentially a deprecated
command that is now an alias or a hard link to grep -E.

Key differences in ERE (compared to BRE):

• ? (Question mark): Matches zero or one occurrence of the preceding character or regular expression.
(Special in ERE, requires \? in BRE)

o grep -E "colou?r" file.txt will match "color" or "colour".

• + (Plus sign): Matches one or more occurrences of the preceding character or regular expression.
(Special in ERE, requires \+ in BRE)

o grep -E "a+b" file.txt will match "ab", "aab", "aaab", etc. (at least one 'a').

• | (Pipe): Acts as an OR operator, matching either the expression before or after the pipe. (Special in ERE,
requires \| in BRE)

o grep -E "cat|dog" file.txt will match lines containing "cat" or "dog".

• ( ) (Grouping): Parentheses are special for grouping. (Special in ERE, requires \(...\) in BRE)

o grep -E "(word1|word2) phrase" file.txt will match "word1 phrase" or "word2 phrase".

• {n} {n,} {n,m} (Quantifiers): Curly braces are special for quantifiers. (Special in ERE, requires \{...\} in
BRE)

o grep -E "a{3}" file.txt matches "aaa".

o grep -E "a{2,}" file.txt matches "aa", "aaa", etc.

o grep -E "a{1,3}" file.txt matches "a", "aa", "aaa".

Using egrep or grep -E:

Both egrep and grep -E enable ERE syntax.

# Using egrep (older syntax)


egrep "error|warning" logs.txt

# Using grep -E (modern and preferred)


grep -E "error|warning" logs.txt
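The alternation operator is easy to see on a few sample lines:

```shell
# ERE alternation: keep only the lines mentioning "error" or "warning".
printf 'error: disk full\ninfo: ok\nwarning: cpu hot\n' | grep -E 'error|warning'
```

With plain grep (BRE), the same search would need the escaped form 'error\|warning'.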



Review Questions
Short Answer Type Questions

1. What does the pr command do in UNIX?


The pr command is used to prepare text files for printing. It adds headers, footers, page breaks, and
divides output into columns.

2. How can you display the first 10 lines of a file using UNIX commands?
The head command is used to display the first 10 lines of a file by default. Syntax: head filename.

3. Which command displays the last few lines of a file, and how is it used?
The tail command shows the last 10 lines of a file by default. Usage: tail filename.

4. What is the use of the cut command in UNIX?


The cut command is used to extract specific columns or fields from a file. Example: cut -d ":" -f1
/etc/passwd.

5. How does the paste command work in UNIX?


The paste command merges lines of files horizontally. It combines content line-by-line from multiple files
side by side.

6. What is the function of the sort command?


The sort command arranges lines in text files in alphabetical or numerical order. Syntax: sort filename.

7. How can you remove repeated lines from a file in UNIX?


Use the uniq command after sorting the file. It filters out adjacent duplicate lines. Example: sort file.txt |
uniq.

8. What is the purpose of the tr command?


The tr command translates or deletes characters. For example, tr 'a-z' 'A-Z' converts lowercase to
uppercase.

9. What does the grep command do?


The grep command searches for a pattern in a file and prints matching lines. Example: grep "root"
/etc/passwd.

10. Explain the use of grep -E.


grep -E enables extended regular expressions. It supports special characters like +, ?, and {} in patterns.

11. How do you use cut to display only the first field of a colon-separated file?
Use: cut -d ":" -f1 filename. This extracts only the first field using ":" as delimiter.

12. What is the difference between head and tail commands?


head displays lines from the beginning of a file, while tail displays lines from the end of a file.

13. How do you view a file in multi-column format before printing?


Use pr -2 filename to display the file in 2-column format with proper formatting for printing.

14. How to sort a file numerically in UNIX?


Use: sort -n filename. The -n option sorts numbers in ascending order.

15. How to find unique lines in a file using uniq?


First sort the file, then apply uniq. Command: sort file.txt | uniq.

16. What does tr -d 'a' do?
This command deletes all occurrences of the character 'a' from the input.

17. What is a Basic Regular Expression (BRE)?


BRE includes simple regex metacharacters like ., *, ^, $, and []. It is the default regex used by grep.

18. What is the difference between grep and egrep?


egrep is equivalent to grep -E, which supports extended regular expressions for complex pattern
matching.

19. How do you merge two files line by line using paste?
Use paste file1 file2 to merge lines from both files side by side with tab as default delimiter.

20. How can you count repeated lines in a file using uniq?
Use uniq -c filename after sorting. It prefixes lines by the number of occurrences.

Descriptive Type Questions

1. Explain the pr command in UNIX with example.


The pr command formats text files for printing. It adds headers, page numbers, and can split output into
columns. For example, pr -2 sample.txt displays the file in 2 columns. It is helpful for preparing print-
ready files by organizing content neatly with spacing and headers.

2. How does the cut command work in UNIX? Give examples.


The cut command extracts sections of lines from a file. It supports character-based and field-based
cutting. For example:

o cut -c1-5 filename displays the first 5 characters of each line.

o cut -d "," -f2 filename.csv displays the second field from a comma-separated file.

3. What is the function of paste in UNIX? Illustrate with example.


The paste command combines lines from two or more files side by side.
Example:
$ cat file1
A
B
$ cat file2
1
2
$ paste file1 file2
A 1
B 2

This merges lines using a tab separator by default.

4. Explain how head and tail commands are used to view file content.
The head command displays the first 10 lines by default; use head -n 5 filename to view 5 lines.
Similarly, tail -n 5 filename shows the last 5 lines. These commands are useful for previewing or
analyzing large log files.

5. Describe the usage of sort with multiple options.


sort can be used with several options:

o sort filename sorts alphabetically.

o sort -n filename sorts numerically.

o sort -r filename sorts in reverse order.

o sort -k 2 filename sorts based on the second field.


It is useful in processing structured data like logs or reports.

6. Explain uniq command with practical examples.


The uniq command filters adjacent repeated lines. Use it after sort for accurate results.
Example:
sort names.txt | uniq

To display duplicates with counts:


sort names.txt | uniq -c

This helps in identifying repeated entries in logs or records.

7. What is the tr command? Show usage examples.


tr is a translate or delete character utility. It works with stdin.
Example 1: Convert lowercase to uppercase
echo "unix" | tr 'a-z' 'A-Z'

Output: UNIX
Example 2: Delete spaces
echo "a b c" | tr -d ' '

Output: abc

8. How does grep work and what are its options?


grep searches lines matching a pattern.
Example: grep "error" logfile.txt
Options:

o -i: ignore case

o -v: invert match

o -n: show line numbers

o -r: recursive search


It is widely used for pattern search in files or outputs.

9. Differentiate between Basic and Extended Regular Expressions.
BRE supports simple patterns like . (any character), ^ (line start), $ (line end).
ERE (used with egrep or grep -E) supports additional operators:

o +: one or more

o ?: zero or one

o {n}: exact matches


ERE is more powerful and preferred for complex searches.

10. Explain how to use grep -E or egrep with pattern examples.


grep -E or egrep enables extended regex features.
Example:
egrep "colou?r" text.txt

Matches both "color" and "colour".


egrep "file[0-9]+\.txt"

Matches filenames like file1.txt, file22.txt, etc. It is useful for advanced pattern matching in
scripts and filters.


Chapter 09
Shell scripts


9.1 Introduction to Shell Scripting

A shell script is a text file containing a series of commands intended for execution by the Unix/Linux shell (like
Bash, sh, etc.). Shell scripting allows users to automate repetitive tasks, execute commands in sequence, create
control structures (like loops and conditionals), and build complex programs using simple scripting techniques.

Shell scripts are widely used for system administration, automation, monitoring, and batch processing. Since
they are interpreted rather than compiled, shell scripts are easy to write and modify.

To create a shell script:

1. Use a text editor (like vi, nano, gedit, etc.)

2. Start the script with the shebang (#!/bin/bash) to specify the shell.

3. Make the script executable using: chmod +x scriptname.sh

4. Run it with: ./scriptname.sh
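Putting the four steps together, a minimal first script might look like this (greet.sh is an arbitrary example name; here the file is created from the command line with a here-document rather than an editor):

```shell
# Steps 1-2: create the script file, starting with the shebang line.
cat > greet.sh <<'EOF'
#!/bin/bash
echo "Hello, $USER! This script lives in $(pwd)."
EOF
# Step 3: make it executable.
chmod +x greet.sh
# Step 4: run it.
./greet.sh
rm -f greet.sh
```

In practice you would write the file in an editor such as nano or vi, introduced next.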

Introduction to nano Editor

nano is a simple, user-friendly command-line text editor for Unix and Linux systems. It’s often used by beginners
because it’s easy to learn and doesn’t require advanced knowledge of keyboard shortcuts like vi or vim. nano is
especially useful for editing configuration files and writing shell scripts.

How to Open nano


nano filename

• If filename exists, it opens the file for editing.

• If filename doesn’t exist, it creates a new file.

Example:
nano myscript.sh

Basic Navigation and Editing in nano

Once inside nano, you can:

• Move the cursor using arrow keys.

• Type and edit text as you would in a standard text editor.

Common Keyboard Shortcuts in nano

Here are some essential nano commands (on nano's on-screen help line the Ctrl key is shown as ^):
Shortcut Description

Ctrl + O Write (save) the file ("O" for output)

Ctrl + X Exit the editor

Ctrl + K Cut current line

Ctrl + U Paste the last cut text

Ctrl + W Search for text

Ctrl + \ Replace text

Ctrl + G Show help menu

Ctrl + C Show cursor position

Note: All commands are listed at the bottom of the nano screen while editing.

Creating and Saving a File Using nano

1. Open the editor:


nano hello.sh

2. Type your shell script:


#!/bin/bash
echo "Hello from Nano!"

3. Save the file:

o Press Ctrl + O, then hit Enter to confirm.

o Press Ctrl + X to exit.

4. Make it executable:
chmod +x hello.sh

5. Run the script:


./hello.sh

Searching and Replacing Text in nano

• Search: Press Ctrl + W, type the keyword, press Enter.

• Replace: Press Ctrl + \, enter the word to find, press Enter, enter the replacement word.

Advantages of nano

• Easy to use and beginner-friendly.

• No need to learn complex modes like vim.

• Lightweight and available on most Unix/Linux systems.

• Helpful shortcut guide at the bottom of the screen.

Disadvantages of nano

• Limited functionality compared to advanced editors like vim or emacs.

• No syntax highlighting unless configured.

• Slower for large files or programming tasks.

Introduction to vi Editor

The vi editor (short for visual editor) is a powerful text editor available by default on almost every Unix and Linux
system. It’s lightweight, fast, and versatile, making it a favorite among system administrators and programmers.
Though it has a steeper learning curve than nano, its efficiency and features make it a valuable tool once mastered.

The improved version of vi is called vim (Vi IMproved), which offers additional features like syntax highlighting,
undo levels, and plugin support.

Modes in vi Editor

One of the key concepts in vi is its mode-based operation:

Mode Purpose

Normal mode Default mode for navigation and commands

Insert mode Used for text input (editing or typing)

Command mode Used for saving, exiting, and executing commands

Visual mode Used to select blocks of text (advanced usage)

Opening vi
vi filename

• Opens filename if it exists.

• Creates a new file if it doesn’t.

Switching Between Modes

• Press i to enter Insert mode (start editing).

• Press Esc to return to Normal mode.

• Use : from Normal mode to enter Command mode.

Basic vi Commands

Insert Mode Commands (Text Entry)


Command Description

i Insert before cursor

I Insert at beginning of line

a Append after cursor

A Append at end of line

o Open a new line below

O Open a new line above

Navigation in Normal Mode


Command Action

h Move left

l Move right

j Move down

k Move up

0 Start of line

$ End of line

G Go to end of file

gg Go to beginning of file

nG Go to line number n

Saving and Exiting (: Commands)


Command Description

:w Save (write) the file


:q Quit

:wq Save and quit

:q! Quit without saving

:x Save and quit (like :wq)

:set nu Show line numbers

Editing and Deleting
Command Action

x Delete character under cursor

dd Delete (cut) current line

yy Copy (yank) current line

p Paste after cursor/line

u Undo last action

r Replace a single character

Example Workflow in vi

1. Open file:

vi demo.sh

2. Press i to enter insert mode.

3. Type the script:


#!/bin/bash
echo "Hello from vi editor"

4. Press Esc to go back to normal mode.

5. Save and exit:


:wq

Advantages of vi

• Available on all Unix/Linux systems by default.

• Very efficient once learned.

• Ideal for power users and developers.

• Supports macros, regular expressions, scripting, and plugins (via vim).

Disadvantages

• Steep learning curve for beginners.

• Requires memorization of commands.

• Not intuitive for casual users.

1. Simple Shell Scripts

A simple shell script is just a text file containing one or more commands. When executed, the shell reads and
runs each command in sequence.

Example:

Let's create a script named hello.sh that prints a greeting and the current date.

1. Create the file:


nano hello.sh

2. Add the content:


#!/bin/bash
# This is a simple shell script
echo "Hello, Shell Scripting!"
echo "Today's date is: $(date)"

• #!/bin/bash: This is called a "shebang" and tells the system which interpreter to use for executing the
script (in this case, Bash).
• # This is a simple shell script: Lines starting with # are comments and are ignored by the shell.
• echo: This command is used to print text to the console.
• $(date): This is command substitution. The date command is executed, and its output replaces $(date).

3. Make the script executable:

chmod +x hello.sh #chmod +x: Grants execute permissions to the file.

4. Run the script:


./hello.sh

Output:

Hello, Shell Scripting!

Today's date is: Wed May 21 01:29:39 AM IST 2025

2. Interactive Shell Scripts

Interactive shell scripts prompt the user for input during execution, making them more dynamic and user-friendly.

Example:

Let's create a script named user_info.sh that asks for the user's name and age.

1. Create the file:


nano user_info.sh

2. Add the content:
#!/bin/bash
echo "Please enter your name:"
read name
echo "Please enter your age:"
read age
echo "Hello, $name! You are $age years old."

o read name: The read command waits for user input and stores it in the variable name.

o $name: This is how you access the value of a variable.

3. Make the script executable:


chmod +x user_info.sh

4. Run the script:


./user_info.sh

Output (during execution):


Please enter your name:
John Doe
Please enter your age:
30
Hello, John Doe! You are 30 years old.

3. Using Command Line Arguments (Positional Parameters)

Shell scripts can accept arguments directly from the command line when they are executed. These arguments
are accessed using special variables called "positional parameters."

• $0: The name of the script itself.

• $1, $2, $3, ...: The first, second, third, and subsequent arguments.

• $#: The number of arguments passed to the script.

• $*: All arguments as a single string.

• $@: All arguments as separate strings (useful when iterating).

Example:

Let's create a script named greet_user.sh that takes a name and a greeting message as arguments.

1. Create the file:


nano greet_user.sh

2. Add the content:

#!/bin/bash
echo "Script name: $0"
echo "Number of arguments: $#"
echo "First argument (name): $1"
echo "Second argument (message): $2"
echo "$2, $1!"
echo "All arguments as a single string: $*"
echo "All arguments as separate strings:"
for arg in "$@"; do
echo "- $arg"
done
3. Make the script executable:
chmod +x greet_user.sh

4. Run the script with arguments:


./greet_user.sh Alice "Good morning"

Output:
Script name: ./greet_user.sh
Number of arguments: 2
First argument (name): Alice
Second argument (message): Good morning
Good morning, Alice!
All arguments as a single string: Alice Good morning
All arguments as separate strings:
- Alice
- Good morning

4. Logical Operators (&&, ||)

Logical operators allow you to combine commands based on their exit status (0 for success, non-zero for failure).

• && (Logical AND): The second command executes only if the first command succeeds.

• || (Logical OR): The second command executes only if the first command fails.

Example:

Let's create a script named check_file.sh to demonstrate these operators.

1. Create the file:


nano check_file.sh

2. Add the content:

#!/bin/bash
FILE="non_existent_file.txt"
EXISTING_FILE="hello.sh"

# Example of &&
# This will try to create a directory and then list its content
echo "--- Using && ---"
mkdir my_new_dir && ls my_new_dir

# This will fail to list non_existent_dir and then not proceed with echo
echo "--- Failed && ---"
ls non_existent_dir && echo "This won't be printed"

# Example of ||
# This will try to remove a file; if it fails (e.g., file doesn't exist), it will echo a message
echo "--- Using || ---"
rm $FILE || echo "$FILE does not exist or could not be removed."

# This will successfully remove $EXISTING_FILE (hello.sh), so the echo after || won't run
echo "--- Successful || ---"
rm $EXISTING_FILE || echo "$EXISTING_FILE was not removed."

o rm non_existent_file.txt will fail, triggering the echo statement.

o rm hello.sh will likely succeed (if hello.sh exists), so the echo statement after || won't run.

3. Make the script executable:


chmod +x check_file.sh

4. Run the script:
./check_file.sh

Output (will vary based on file existence, but general idea):


--- Using && ---
# (Output of mkdir and ls if successful)
--- Failed && ---
ls: cannot access 'non_existent_dir': No such file or directory
--- Using || ---
rm: cannot remove 'non_existent_file.txt': No such file or directory
non_existent_file.txt does not exist or could not be removed.
--- Successful || ---
# (No output for this section if hello.sh was successfully removed)

5. Condition Checking (if, case)

Conditional statements allow your script to make decisions and execute different blocks of code based on certain
conditions.

a. if statement

The if statement evaluates a condition and executes a block of code if the condition is true.

Syntax:
if condition; then
# code to execute if condition is true
elif another_condition; then
# code to execute if another_condition is true
else
# code to execute if no condition is true
fi

Conditions:

Conditions are often tested using the test command or square brackets [].

Common test operators:

• File operators:

o -e file: True if file exists.

o -f file: True if file exists and is a regular file.

o -d file: True if file exists and is a directory.


o -r file: True if file exists and is readable.

o -w file: True if file exists and is writable.

o -x file: True if file exists and is executable.

• String operators:

o str1 = str2: True if strings are equal.

o str1 != str2: True if strings are not equal.

o -z string: True if string is empty.

o -n string: True if string is not empty.

• Arithmetic operators:

o num1 -eq num2: True if numbers are equal.

o num1 -ne num2: True if numbers are not equal.

o num1 -gt num2: True if num1 is greater than num2.

o num1 -ge num2: True if num1 is greater than or equal to num2.

o num1 -lt num2: True if num1 is less than num2.

o num1 -le num2: True if num1 is less than or equal to num2.
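The string and arithmetic operators above can be exercised in a short sketch (the variable names and values here are illustrative):

```shell
#!/bin/bash
name="Alice"
count=7

# String tests
if [ "$name" = "Alice" ]; then echo "name matches"; fi
if [ -n "$name" ]; then echo "name is non-empty"; fi
if [ -z "" ]; then echo "empty string detected"; fi

# Arithmetic tests
if [ "$count" -gt 5 ]; then echo "count is greater than 5"; fi
if [ "$count" -ne 10 ]; then echo "count is not 10"; fi
```

Each test that evaluates to true prints one line; here all five conditions hold.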

Example (if):

Let's create a script named file_checker.sh that checks if a file exists and is a directory.

1. Create the file:


nano file_checker.sh

2. Add the content:

#!/bin/bash
read -p "Enter a file or directory path: " path

if [ -e "$path" ]; then
echo "'$path' exists."
if [ -f "$path" ]; then
echo "'$path' is a regular file."
elif [ -d "$path" ]; then
echo "'$path' is a directory."
else
echo "'$path' is neither a regular file nor a directory."
fi
else
echo "'$path' does not exist."
fi

o [ -e "$path" ]: This is equivalent to test -e "$path". The spaces around the brackets and
operators are crucial.

o "$path": Quoting variables is important to prevent word splitting and globbing issues, especially
when paths contain spaces.

3. Make the script executable:


chmod +x file_checker.sh

4. Run the script:


./file_checker.sh

Output (example):
Enter a file or directory path: /etc/passwd
'/etc/passwd' exists.
'/etc/passwd' is a regular file.
Enter a file or directory path: /home
'/home' exists.
'/home' is a directory.

b. case statement

The case statement is useful for handling multiple possible values of a single variable or expression. It's often
cleaner than a long series of if-elif-else statements.

Syntax:
case expression in
pattern1)
# code for pattern1
;;
pattern2)
# code for pattern2
;;
*)
# default code (if no other pattern matches)
;;
esac

Example (case):

Let's create a script named menu.sh that presents a simple menu to the user.

1. Create the file:


nano menu.sh

2. Add the content:
#!/bin/bash
echo "Select an option:"
echo "1. Show current date and time"
echo "2. List files in current directory"
echo "3. Check disk space"
echo "4. Exit"
read -p "Enter your choice (1-4): " choice

case $choice in
1)
echo "Current date and time: $(date)"
;;
2)
echo "Files in current directory:"
ls -l
;;
3)
echo "Disk space usage:"
df -h
;;
4)
echo "Exiting..."
exit 0
;;
*)
echo "Invalid choice. Please enter a number between 1 and 4."
;;
esac

o ;;: Marks the end of a case block.

o *): The default case, matching any value not covered by previous patterns.

3. Make the script executable:


chmod +x menu.sh

4. Run the script:


./menu.sh

Output (example with choice 2):


Select an option:
1. Show current date and time
2. List files in current directory
3. Check disk space
4. Exit
Enter your choice (1-4): 2
Files in current directory:
total 24

-rwxr-xr-x 1 user user 128 May 21 01:29 check_file.sh
-rwxr-xr-x 1 user user 195 May 21 01:29 file_checker.sh
-rwxr-xr-x 1 user user 140 May 21 01:29 greet_user.sh
-rwxr-xr-x 1 user user 96 May 21 01:29 hello.sh
-rwxr-xr-x 1 user user 403 May 21 01:29 menu.sh
-rwxr-xr-x 1 user user 133 May 21 01:29 user_info.sh

6. Expression Evaluation (test, [], [[]])

As seen in the if section, test and [] are used for evaluating conditions. The [[]] (double square brackets) provide
more advanced features and are specific to Bash (not POSIX shell).

• test command: A standalone command for evaluating expressions. Example:


test -f myfile.txt

• [] (single square brackets): A synonym for the test command. Example:


[ -f myfile.txt ]

Important: Spaces are required around the brackets and operators.

• [[]] (double square brackets): Enhanced conditional expression for Bash.

o Supports regular expressions (=~).

o Allows C-style logical operators (&&, ||) within the brackets, simplifying complex conditions.

o No need for quotes around variables within [[]] unless they contain shell special characters.

o Example: [[ "$name" == "Alice" && $age -gt 18 ]]

Example:

Let's create a script named expression_eval.sh to illustrate the differences.

1. Create the file:


nano expression_eval.sh

2. Add the content:

#!/bin/bash
name="Alice"
age=25
filename="my_document.txt"

echo "--- Using [] ---"


if [ "$name" = "Alice" -a "$age" -gt 20 ]; then # -a for AND
echo "Alice is over 20."
fi

echo "--- Using [[]] ---"

if [[ "$name" == "Alice" && $age -gt 20 ]]; then # && for AND
echo "Alice is over 20 (using double brackets)."
fi

echo "--- Using regular expression with [[]] ---"


if [[ "$filename" =~ \.txt$ ]]; then
echo "$filename is a text file."
fi

echo "--- Checking exit status of commands ---"


grep "root" /etc/passwd
if [ $? -eq 0 ]; then
echo "User 'root' found in /etc/passwd."
else
echo "User 'root' not found in /etc/passwd."
fi

• $?: This special variable holds the exit status of the last executed command.

3. Make the script executable:


chmod +x expression_eval.sh

4. Run the script:


./expression_eval.sh

Output:
--- Using [] ---
Alice is over 20.
--- Using [[]] ---
Alice is over 20 (using double brackets).
--- Using regular expression with [[]] ---
my_document.txt is a text file.
--- Checking exit status of commands ---
root:x:0:0:root:/root:/bin/bash
User 'root' found in /etc/passwd.

7. Computation (expr, $((...)))

Shell scripts primarily treat values as strings. For arithmetic operations, you need specific tools.

a. expr command

The expr command evaluates an expression and prints the result to standard output. It's an older method and
has some limitations.

Syntax: expr expression

Example (expr):

1. Create the file:


nano basic_math.sh
2. Add the content:

#!/bin/bash
num1=10
num2=5

sum=$(expr $num1 + $num2)


echo "Sum: $sum"

difference=$(expr $num1 - $num2)


echo "Difference: $difference"

product=$(expr $num1 \* $num2) # Note the backslash to escape *


echo "Product: $product"

quotient=$(expr $num1 / $num2)


echo "Quotient: $quotient"

o Important: Spaces are required around operators when using expr.

o * needs to be escaped (\*) because it's a shell wildcard character.

b. $(()) (Arithmetic Expansion)

This is the preferred and more modern way to perform arithmetic in Bash. It allows C-style arithmetic
expressions directly within the shell.

Syntax: variable=$((expression))

Example ($(())):

Continuing from basic_math.sh:

1. Modify the file or create a new one:


#!/bin/bash
num1=10
num2=5

sum=$((num1 + num2))
echo "Sum (using \$()): $sum"

difference=$((num1 - num2))
echo "Difference (using \$()): $difference"

product=$((num1 * num2)) # No need to escape *


echo "Product (using \$()): $product"

quotient=$((num1 / num2))
echo "Quotient (using \$()): $quotient"

remainder=$((num1 % num2))
echo "Remainder (using \$()): $remainder"

increment=$((num1++)) # Post-increment
echo "Incremented num1 (post): $increment, num1 is now: $num1"

decrement=$((--num2)) # Pre-decrement
echo "Decremented num2 (pre): $decrement, num2 is now: $num2"

2. Make the script executable:


chmod +x basic_math.sh

3. Run the script:


./basic_math.sh

Output:
Sum: 15
Difference: 5
Product: 50
Quotient: 2
Sum (using $()): 15
Difference (using $()): 5
Product (using $()): 50
Quotient (using $()): 2
Remainder (using $()): 0
Incremented num1 (post): 10, num1 is now: 11
Decremented num2 (pre): 4, num2 is now: 4

8. Using expr for Strings

While expr is primarily for arithmetic, it can also perform some basic string operations like finding length,
extracting substrings, and finding the index of a substring. However, for more advanced string manipulation, Bash's
built-in string manipulation features are generally preferred.

Example (expr for strings):

1. Create the file:


nano string_ops.sh

2. Add the content:


#!/bin/bash
my_string="Hello World"

# String length
length=$(expr length "$my_string")
echo "Length of '$my_string': $length"

# Substring extraction (start_position, length)


substring=$(expr substr "$my_string" 1 5)
echo "Substring (1-5): $substring"

# Find the position of the first character from the set that occurs
# in the string (returns 0 if none is found)
index=$(expr index "$my_string" "oW")
echo "Index of 'oW': $index" # Output: 5 (o is at 5th position)

# Match (regular expression matching)


match=$(expr "$my_string" : '\(Hello\)')
echo "Match 'Hello': $match" # Output: Hello

nomatch=$(expr "$my_string" : '\(Goodbye\)')


echo "Match 'Goodbye': $nomatch" # Output: (empty or 0)

3. Make the script executable:


chmod +x string_ops.sh

4. Run the script:


./string_ops.sh

Output:
Length of 'Hello World': 11
Substring (1-5): Hello
Index of 'oW': 5
Match 'Hello': Hello
Match 'Goodbye':

Note: For modern Bash scripting, use built-in string manipulations like ${#string} for length, ${string:offset:length}
for substring, and [[ string =~ regex ]] for regular expression matching, as they are generally more efficient and
easier to read.
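A minimal sketch of those Bash built-ins (the variable names are illustrative):

```shell
#!/bin/bash
my_string="Hello World"

# Length with ${#var}
echo "Length: ${#my_string}"               # 11

# Substring with ${var:offset:length} (offset is 0-based)
echo "First word: ${my_string:0:5}"        # Hello

# Regular expression match with [[ =~ ]]
if [[ "$my_string" =~ ^Hello ]]; then
    echo "Starts with Hello"
fi

# Shortest-suffix removal with ${var%pattern}
filename="report.txt"
echo "Without extension: ${filename%.txt}" # report
```

Unlike expr, these expansions run inside the shell itself, so no extra process is spawned.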

9. Loops (while, for)

Loops allow you to repeatedly execute a block of code.

a. while loop

The while loop repeatedly executes commands as long as a condition remains true.

Syntax:
while condition; do
# code to execute
done

Example (while):

Let's create a script named countdown.sh that counts down from 5.

1. Create the file:


nano countdown.sh

2. Add the content:

#!/bin/bash
count=5
while [ $count -gt 0 ]; do
echo "Countdown: $count"
count=$((count - 1)) # Decrement count
sleep 1 # Wait for 1 second
done
echo "Blast off!"

3. Make the script executable:


chmod +x countdown.sh

4. Run the script:

./countdown.sh

Output (with 1-second delay between lines):


Countdown: 5
Countdown: 4
Countdown: 3
Countdown: 2
Countdown: 1
Blast off!

b. for loop

The for loop iterates over a list of items or a range of numbers.

Syntax (list iteration):


for variable in item1 item2 item3 ...; do
# code to execute for each item
done

Syntax (C-style numeric iteration - Bash specific):


for (( initialization; condition; increment/decrement )); do
# code to execute
done

Example (for - list iteration):

Let's create a script named fruits.sh that iterates through a list of fruits.

1. Create the file:


nano fruits.sh

2. Add the content:

#!/bin/bash
fruits="Apple Banana Cherry Date"
for fruit in $fruits; do
echo "I like $fruit."
done

echo "--- Iterating over files ---"


for file in *.sh; do
echo "Found shell script: $file"
done

Example (for - C-style numeric iteration):

Continuing from fruits.sh or a new file:

1. Add the content:

#!/bin/bash
echo "--- Numeric loop (1 to 5) ---"
for (( i=1; i<=5; i++ )); do
echo "Number: $i"
done

2. Make the script executable:


chmod +x fruits.sh

3. Run the script:


./fruits.sh

Output:

I like Apple.
I like Banana.
I like Cherry.
I like Date.
--- Iterating over files ---
Found shell script: basic_math.sh
Found shell script: check_file.sh
Found shell script: countdown.sh
Found shell script: expression_eval.sh

Found shell script: file_checker.sh
Found shell script: fruits.sh
Found shell script: greet_user.sh
Found shell script: hello.sh
Found shell script: menu.sh
Found shell script: string_ops.sh
Found shell script: user_info.sh
--- Numeric loop (1 to 5) ---
Number: 1
Number: 2
Number: 3
Number: 4
Number: 5

10. Use of Positional Parameters

We briefly touched on positional parameters ($1, $2, etc.) in section 3. They are fundamental for making scripts
flexible and reusable. They allow your script to receive inputs directly when it's run from the command line.

Recall:

• $0: The name of the script itself.

• $1, $2, $3, ...: The first, second, third, and subsequent arguments.

• $#: The number of arguments passed to the script.

• $*: All arguments as a single string.

• $@: All arguments as separate strings.
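The difference between $* and $@ only matters when they are quoted: "$*" joins all arguments into a single word, while "$@" preserves each argument. A small sketch (the function name is illustrative):

```shell
#!/bin/bash

# Print how many arguments the function received, then each one
show_args() {
    echo "Received $# argument(s)"
    for arg in "$@"; do
        echo "  -> $arg"
    done
}

set -- "one two" three    # Simulate two positional parameters

show_args "$*"            # Received 1 argument(s): "one two three"
show_args "$@"            # Received 2 argument(s): "one two", then "three"
```

This is why loops over arguments should almost always use "$@": it keeps filenames containing spaces intact.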

Example (revisiting and expanding):

Let's create a script backup_files.sh that takes multiple files as arguments and backs them up to a specified
directory.

1. Create the file:


nano backup_files.sh

2. Add the content:


#!/bin/bash
# This script backs up specified files to a target directory.

if [ "$#" -lt 2 ]; then


echo "Usage: $0 <target_directory> <file1> [file2...]"
exit 1
fi

TARGET_DIR="$1"
shift # Drop $1; the old $2 becomes the new $1, old $3 becomes $2, and so on

if [ ! -d "$TARGET_DIR" ]; then
echo "Error: Target directory '$TARGET_DIR' does not exist."
exit 1
fi

echo "Backing up files to: $TARGET_DIR"


for file in "$@"; do
if [ -f "$file" ]; then
cp "$file" "$TARGET_DIR/"
echo "Backed up '$file'"
else
echo "Warning: '$file' is not a regular file or does not exist. Skipping."
fi
done

echo "Backup complete."

• shift: This command shifts the positional parameters to the left. After shift, $1 will contain the value that
was previously in $2, $2 will contain the value from $3, and so on. This is very useful when you want to
process the first argument differently (e.g., as a directory) and then iterate over the remaining
arguments (files).
• "$@": When used in a for loop, "$@" expands to separate arguments, which is crucial for handling
filenames with spaces correctly.
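shift can also drive a loop that consumes arguments one at a time, which works even when more than nine arguments are passed; a minimal sketch using set -- to simulate command-line arguments:

```shell
#!/bin/bash
set -- red green blue     # Simulate three command-line arguments

while [ "$#" -gt 0 ]; do
    echo "Processing: $1 ($# argument(s) left)"
    shift                 # Discard $1; the old $2 becomes the new $1
done
```

Each iteration handles $1 and then shifts, so the loop ends when $# reaches zero.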

3. Make the script executable:


chmod +x backup_files.sh

4. Prepare some files and a target directory:


mkdir my_backups
touch file_a.txt file_b.log "file with spaces.txt"

5. Run the script:


./backup_files.sh my_backups file_a.txt file_b.log "file with spaces.txt" non_existent.txt
Output:
Backing up files to: my_backups
Backed up 'file_a.txt'
Backed up 'file_b.log'
Backed up 'file with spaces.txt'
Warning: 'non_existent.txt' is not a regular file or does not exist. Skipping.
Backup complete.

You can then verify the files in my_backups directory:

ls my_backups

Output:
file_a.txt file_b.log file with spaces.txt

Sample Shell Script:

1. Write a shell script to rename all *.c files in your directory to *.cpp files.

#!/bin/bash
# Rename all .c files to .cpp in current directory

for file in *.c


do
if [ -f "$file" ]; then
mv "$file" "${file%.c}.cpp"
fi
done

echo "All .c files have been renamed to .cpp"
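The renaming above relies on the ${var%pattern} expansion, which removes the shortest suffix matching the pattern; a quick sketch:

```shell
#!/bin/bash
file="main.c"

echo "${file%.c}"         # main      (suffix .c stripped)
echo "${file%.c}.cpp"     # main.cpp  (new extension appended)

# % removes the shortest matching suffix, %% the longest
path="archive.tar.gz"
echo "${path%.*}"         # archive.tar
echo "${path%%.*}"        # archive
```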

2. Write a shell script to find the reverse of a string.


#!/bin/bash
# Reverse a string entered by user

echo -n "Enter a string: "


read str

# Use rev command


rev_str=$(echo "$str" | rev)
echo "Reversed string: $rev_str"


3. Write a shell script to check whether a given number is a palindrome or not.


#!/bin/bash
# Check whether the entered number is a palindrome

echo -n "Enter a number: "


read num
rev=$(echo "$num" | rev)

if [ "$num" -eq "$rev" ]; then


echo "$num is a palindrome"
else
echo "$num is not a palindrome"
fi

4. List only the names of the C files in your current directory that use the header file "math.h".
#!/bin/bash
# List C files using "math.h" in the current directory

for file in *.c


do
if grep -q "#include *<math.h>" "$file"; then
echo "$file"
fi
done


5. Write a shell script to find the sum of the following series:


S = 1/1! + 2/2! + 3/3! + ... + n/n! (up to the nth term)

#!/bin/bash

# Function to calculate factorial


factorial() {
fact=1
for (( i=1; i<=$1; i++ ))
do
fact=$((fact * i))
done
echo $fact
}

# Read number of terms


echo -n "Enter the value of n: "
read n

sum=0

# Loop to calculate sum of series


for (( i=1; i<=n; i++ ))
do
fact=$(factorial $i)
term=$(echo "scale=6; $i / $fact" | bc)
sum=$(echo "scale=6; $sum + $term" | bc)
done

echo "Sum of the series is: $sum"

6. Display all the arguments, along with their argument numbers, that are provided on the command line to the shell script. (Note: the number of arguments may be greater than 9.)

#!/bin/bash

# Loop through all arguments


index=1
for arg in "$@"
do
echo "Argument $index: $arg"
index=$((index + 1))
done

7. Write a shell script to print a given number in reverse order. (e.g., if the number is 123, it must be printed as 321.)
#!/bin/bash
# Reverse the digits of a number

echo -n "Enter a number: "


read num

rev=0
temp=$num

while [ $temp -gt 0 ]


do
digit=$((temp % 10))
rev=$((rev * 10 + digit))
temp=$((temp / 10))
done

echo "Reverse of $num is $rev"

8. Write a shell script to count the number of executable files and the number of subdirectories that exist in a directory whose name is given as a command line argument to the script.
#!/bin/bash

# Check if directory is provided


if [ $# -ne 1 ]; then
echo "Usage: $0 <directory>"
exit 1
fi

dir=$1

# Validate directory existence


if [ ! -d "$dir" ]; then
echo "Error: '$dir' is not a valid directory."
exit 1
fi

# Count executable files (non-recursive)


exec_count=$(find "$dir" -maxdepth 1 -type f -executable | wc -l)

# Count subdirectories (excluding current directory)


dir_count=$(find "$dir" -mindepth 1 -maxdepth 1 -type d | wc -l)

# Output results
echo "In directory: $dir"
echo "Executable files: $exec_count"
echo "Subdirectories: $dir_count"

9. Write a shell script to delete all non-executable files of size 0 bytes from a directory whose name is given as a command line argument to the script.

#!/bin/bash

# Check if directory name is given


if [ $# -ne 1 ]; then
echo "Usage: $0 <directory>"
exit 1
fi

dir=$1

# Verify it's a directory


if [ ! -d "$dir" ]; then
echo "Error: '$dir' is not a directory."
exit 1
fi

# Find and delete 0-byte non-executable files (non-recursive)


find "$dir" -maxdepth 1 -type f ! -executable -size 0c -print -delete

echo "Deleted all 0-byte non-executable files from $dir"

10. Write a shell script for handling a simple telephone directory.
Main Menu
a. Add Record
b. Delete Record
c. Search Record
d. Exit.....
Enter your choice:...

#!/bin/bash

filename="telephone_directory.txt"

# Ensure the file exists


touch $filename

while true
do
echo ""
echo "===== Telephone Directory ====="
echo "a. Add Record"
echo "b. Delete Record"
echo "c. Search Record"
echo "d. Exit"
echo "==============================="
echo -n "Enter your choice: "
read choice

case $choice in
a|A)
echo -n "Enter Name: "
read name
echo -n "Enter Phone Number: "
read phone

echo "$name : $phone" >> $filename
echo "Record added successfully."
;;

b|B)
echo -n "Enter name to delete: "
read del_name
grep -v "^$del_name :" $filename > temp.txt && mv temp.txt $filename
echo "Record deleted (if it existed)."
;;

c|C)
echo -n "Enter name to search: "
read search_name
grep "^$search_name :" $filename
if [ $? -ne 0 ]; then
echo "Record not found."
fi
;;

d|D)
echo "Exiting..."
break
;;

*)
echo "Invalid choice. Please select a, b, c, or d."
;;
esac
done

11. A given text file (stud.dat) contains the following information about student names and marks.
Ramen 20 30 60
Haren 40 50 45
Jadu 35 60 90
Now write a shell script to print the total marks along with the name for each record.

Content of stud.dat file


Ramen 20 30 60
Haren 40 50 45
Jadu 35 60 90

#!/bin/bash

# Check if file exists


if [ ! -f stud.dat ]; then
echo "Error: stud.dat file not found!"
exit 1
fi

# Read each line of the file


while read name m1 m2 m3
do
total=$((m1 + m2 + m3))
echo "$name : Total Marks = $total"
done < stud.dat

12. Write a shell script to count the number of executable files and the number of sub-directories that exist in a directory whose name is given as a command line argument to the script.

#!/bin/bash

if [ $# -ne 1 ]; then
echo "Usage: $0 directory"
exit 1
fi

dir=$1

if [ ! -d "$dir" ]; then
echo "Error: $dir is not a directory"
exit 1
fi

# Count executable files (regular files with executable permission)


exec_count=$(find "$dir" -maxdepth 1 -type f -executable | wc -l)

# Count sub-directories (excluding . and ..)


subdir_count=$(find "$dir" -maxdepth 1 -mindepth 1 -type d | wc -l)

echo "Number of executable files: $exec_count"


echo "Number of sub-directories: $subdir_count"

13. Write a shell script to generate prime numbers between 100 and 200.

#!/bin/bash

is_prime() {
local num=$1
if [ $num -le 1 ]; then
return 1
fi
for ((i=2; i*i<=num; i++)); do
if (( num % i == 0 )); then
return 1
fi
done
return 0
}

for ((n=100; n<=200; n++)); do
if is_prime $n; then
echo $n
fi
done

14. Write a shell script to print the sum of all prime numbers between 1 to n.

#!/bin/bash

if [ $# -ne 1 ]; then
echo "Usage: $0 n"
exit 1
fi

n=$1

is_prime() {
local num=$1
if [ $num -le 1 ]; then
return 1
fi
for ((i=2; i*i<=num; i++)); do
if (( num % i == 0 )); then
return 1
fi
done
return 0
}

sum=0
for ((i=2; i<=n; i++)); do
if is_prime $i; then
((sum += i))
fi
done

echo "Sum of prime numbers between 1 and $n is: $sum"

15. Write a shell script to generate Fibonacci number up to n.
#!/bin/bash

if [ $# -ne 1 ]; then
    echo "Usage: $0 n"
    exit 1
fi

n=$1

a=0
b=1

echo "Fibonacci numbers up to $n:"

while [ $a -le $n ]; do
    echo -n "$a "
    fn=$((a + b))
    a=$b
    b=$fn
done

echo

16. Write a shell script for checking a given year is a leap year or not.
#!/bin/bash

if [ $# -ne 1 ]; then
    echo "Usage: $0 year"
    exit 1
fi

year=$1

if (( year % 400 == 0 )); then
    echo "$year is a leap year."
elif (( year % 100 == 0 )); then
    echo "$year is not a leap year."
elif (( year % 4 == 0 )); then
    echo "$year is a leap year."
else
    echo "$year is not a leap year."
fi

17. Write a shell script to delete all files in root and its subdirectories having extension ‘tmp’, which
have not been created or referred to in the last 15 days.

#!/bin/bash

# Directory to start from, passed as an argument
if [ $# -ne 1 ]; then
    echo "Usage: $0 <directory_path>"
    exit 1
fi

dir=$1

# Safety check: make sure directory exists
if [ ! -d "$dir" ]; then
    echo "Error: $dir is not a valid directory"
    exit 1
fi

# Find and delete .tmp files not accessed or modified in the last 15 days
find "$dir" -type f -name "*.tmp" -atime +15 -mtime +15 -print -delete

echo "Deleted all .tmp files in $dir not accessed or modified in the last 15 days."

18. Write a shell script that accepts filenames as arguments. For every filename, it should check
whether it exists in the current directory and then convert its name to uppercase, but only if a file with
the new name doesn't exist.
#!/bin/bash

# Check if at least one argument is passed
if [ $# -eq 0 ]; then
    echo "Usage: $0 filename1 [filename2 ...]"
    exit 1
fi

for file in "$@"; do
    if [ -f "$file" ]; then
        # Get uppercase name
        uppername=$(echo "$file" | tr '[:lower:]' '[:upper:]')

        if [ "$file" == "$uppername" ]; then
            echo "File '$file' is already in uppercase. Skipping."
        elif [ -e "$uppername" ]; then
            echo "Cannot rename '$file' to '$uppername': file already exists."
        else
            mv "$file" "$uppername"
            echo "Renamed '$file' to '$uppername'"
        fi
    else
        echo "File '$file' does not exist in the current directory."
    fi
done



Review Questions
Short Answer Type Questions

1. What is a shell script in UNIX/Linux?


A shell script is a text file containing a series of commands that are executed by the shell interpreter to
automate tasks.

2. How do you create and run a simple shell script?


Create the file with .sh extension, use chmod +x filename.sh to make it executable, and run it using
./filename.sh.

3. What is an interactive shell script?


An interactive shell script prompts the user for input during execution using commands like read, making
it dynamic.

4. What are command line arguments in shell scripting?


Command line arguments are inputs passed when executing a script. $1, $2, etc., refer to the first,
second arguments respectively.

5. Explain the use of && operator in shell scripting.


The && logical operator allows execution of the second command only if the first command is successful
(exit status 0).

6. Explain the use of || operator in shell scripting.


The || operator executes the second command only if the first command fails (non-zero exit status).

7. What is the purpose of the if statement in shell scripts?


The if statement is used for conditional execution of commands based on whether a condition is true.

8. How is the case statement used in shell scripts?


The case statement allows multi-way branching based on matching patterns and is used as an alternative
to multiple if statements.

9. What is the test command in shell scripts?


The test command evaluates expressions such as string comparison, file checks, and arithmetic
operations, returning true or false.

10. What is the purpose of [] in shell scripting?


Square brackets are used as a synonym for the test command, e.g., [ $a -eq $b ].

11. How is expr used for arithmetic in shell scripts?


expr is used for evaluating expressions like expr 5 + 2, which returns 7. Spaces around operators are
necessary.
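The spacing requirement is easy to verify: with spaces, expr receives three separate arguments and evaluates them; without spaces, it receives a single string and simply prints it back unevaluated.

```shell
expr 5 + 2     # three arguments: evaluated, prints 7
expr 5+2       # one argument: not evaluated, prints the literal string 5+2
```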

12. Can expr be used for string operations? Give an example.


Yes, expr can be used to find string length: expr length "hello" returns 5.

13. What is the purpose of the while loop in shell scripts?


The while loop executes a block of code repeatedly as long as the condition remains true.

14. How does the for loop work in shell scripting?


The for loop iterates over a list of items, executing a block of code for each item.

15. What are positional parameters in shell scripts?
Positional parameters like $1, $2, etc., represent arguments passed to the script from the command line.

16. What does $0 represent in a shell script?


$0 contains the name of the shell script being executed.

17. How do you read user input in a shell script?


Use the read command: read name takes input and stores it in the variable name.

18. Explain the use of double square brackets [[ ]] in condition checking.


[[ ]] offers more advanced syntax for conditions, supports regex, and doesn't require escaping certain
operators.
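A small illustration (bash-specific; the =~ operator matches the left side against an extended regular expression, and == performs a glob-style pattern match):

```shell
#!/bin/bash
value="42"

# Regex match inside [[ ]]: no escaping needed for the pattern characters
if [[ $value =~ ^[0-9]+$ ]]; then
    echo "$value is numeric"
fi

# Glob-style pattern match also works inside [[ ]]
if [[ $value == 4* ]]; then
    echo "$value starts with 4"
fi
```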

19. What is the purpose of using shift in shell scripting?


The shift command shifts all positional parameters left by one, useful in processing arguments in a loop.

20. What does "$@" and "$*" mean in shell scripting?


"$@" treats each quoted parameter as a separate word, while "$*" treats all parameters as a single
word.
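A short demonstration of the difference, using an illustrative function (the name demo is arbitrary) called with two arguments, one of which contains a space:

```shell
#!/bin/bash
demo() {
    for arg in "$@"; do echo "AT  :$arg"; done   # one iteration per argument
    for arg in "$*"; do echo "STAR:$arg"; done   # single iteration, arguments joined
}
demo "a b" c
```

Here "$@" preserves "a b" and c as two separate words, while "$*" joins them into the single word "a b c".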

Descriptive Type Questions

1. Explain how to write a simple shell script and execute it with an example.
A simple shell script includes commands written in a text file. Example:
#!/bin/bash
echo "Hello, World!"

Save it as hello.sh, make it executable using chmod +x hello.sh, and run it using ./hello.sh. It will display
"Hello, World!".

2. How does an interactive shell script work? Illustrate with an example.


An interactive script takes user input using the read command:
#!/bin/bash
echo "Enter your name:"
read name
echo "Hello, $name!"

When run, the script pauses for input and responds with a greeting. It's useful for dynamic user-driven
processes.

3. What are command line arguments in shell scripting? How are they used?
Arguments passed when executing the script can be accessed using $1, $2, etc. Example:
#!/bin/bash
echo "First argument: $1"
echo "Second argument: $2"

Running ./script.sh Hello World will output Hello and World respectively.

4. Explain the use of logical operators && and || with examples.
These operators control command execution based on success or failure:
mkdir test_dir && echo "Directory created"
cd no_dir || echo "Directory not found"

Here, the first echo runs only if mkdir succeeds, and the second echo runs only if cd fails.

5. Describe how if and case statements are used in shell scripting.


The if statement checks a condition:
if [ $1 -gt 10 ]; then
    echo "Greater than 10"
fi

The case statement is used for multiple conditions:


case $1 in
    start) echo "Starting...";;
    stop) echo "Stopping...";;
    *) echo "Unknown command";;
esac

6. Explain the use of test and [] for expression evaluation with examples.
test or [] evaluates conditions:
if test $a -eq $b

or
if [ $a -eq $b ]

Both check if a equals b. It is used for comparisons, file existence, string evaluation, etc.

7. Demonstrate how to perform arithmetic and string operations using expr.


Arithmetic:
a=10
b=20
sum=`expr $a + $b`
echo $sum

String:
name="ChatGPT"
len=`expr length "$name"`
echo $len

It calculates string length and evaluates numeric expressions.

8. Describe the usage of while and for loops in shell scripts with examples.
while loop:
count=1
while [ $count -le 5 ]; do
    echo $count
    count=`expr $count + 1`
done

for loop:
for i in 1 2 3; do
    echo $i
done

Both are used to repeat tasks.

9. Explain positional parameters in shell scripting and how shift helps in processing them.
Positional parameters ($1, $2, etc.) hold script arguments. shift helps iterate:
while [ "$1" != "" ]; do
    echo "Arg: $1"
    shift
done

This loop prints all arguments one by one and moves them using shift.

10. How can you write a script to calculate the factorial of a number using a while loop?
#!/bin/bash
echo "Enter a number:"
read num
fact=1
while [ $num -gt 1 ]; do
    fact=`expr $fact \* $num`
    num=`expr $num - 1`
done
echo "Factorial is $fact"

This script calculates factorial by multiplying the number down to 1 using a loop.


Chapter 10
System Administration


10.1 Introduction to System Administration

System administration is a crucial field within information technology that involves the management,
maintenance, and operation of computer systems and networks. System administrators, often referred to as
sysadmins, are the backbone of any organization's IT infrastructure, ensuring that hardware and software run
smoothly, securely, and efficiently. Their responsibilities span a wide array of tasks, from installing and configuring
software to troubleshooting complex issues, managing user access, and implementing security measures. In
essence, sysadmins are problem-solvers who keep the digital gears turning, making sure that employees have the
resources they need to perform their jobs effectively.

This document will focus on system administration within a UNIX-like environment. UNIX, with its robust multi-
user and multitasking capabilities, forms the foundation for many mission-critical systems and servers worldwide.
Understanding its principles and command-line tools is fundamental for any aspiring or practicing system
administrator.

Essential Duties of a UNIX System Administrator

UNIX system administrators have a multifaceted role, encompassing a wide range of responsibilities that ensure
the optimal performance, security, and availability of UNIX-based systems. Here are some of their essential
duties:

• System Installation and Configuration:

o Installing and configuring the UNIX operating system (e.g., Linux distributions like Ubuntu, Red
Hat, CentOS; or commercial UNIX variants like Solaris, AIX, HP-UX).

o Setting up network services (DNS, DHCP, NFS, Samba, web servers, email servers).

o Configuring hardware components and device drivers.

• System Monitoring and Performance Tuning:

o Regularly monitoring system health (CPU usage, memory, disk I/O, network traffic).

o Identifying and resolving performance bottlenecks.

o Optimizing system parameters for improved efficiency.

o Using tools like top, vmstat, iostat, netstat.

• Security Management:

o Implementing and enforcing security policies.

o Managing user permissions and access controls.

o Configuring firewalls and intrusion detection systems.

o Applying security patches and updates.

o Regularly reviewing system logs for suspicious activity.

• Backup and Recovery:

o Implementing and managing backup strategies to ensure data integrity and availability.

o Performing regular backups of critical system files and user data.


o Developing and testing disaster recovery plans.

o Using tools like tar, rsync, dump, restore.

• User Account Management:

o Creating, modifying, and deleting user accounts.

o Managing user passwords, home directories, and group memberships.

o Implementing disk quotas.

o This topic will be elaborated further in a dedicated section.

• Software and Patch Management:

o Installing, upgrading, and patching operating system components and applications.

o Ensuring all software is up-to-date to address security vulnerabilities and bugs.

o Using package managers like apt, yum, dnf.

• Troubleshooting and Problem Resolution:

o Diagnosing and resolving system and application issues.

o Analyzing log files to identify root causes of problems.

o Working with vendors for hardware or software support.

• Documentation:

o Maintaining comprehensive documentation of system configurations, procedures, and troubleshooting steps.

• Scripting and Automation:

o Writing shell scripts (Bash, Python, Perl) to automate repetitive tasks and streamline
administrative processes.

System Startup and Shutdown in UNIX

Understanding the startup and shutdown procedures is critical for a UNIX system administrator. Improper
shutdown can lead to data corruption or system instability.

System Startup (Boot Process)

The UNIX boot process involves several stages, transitioning from low-level hardware initialization to the full
operation of the operating system. While the exact steps can vary slightly between different UNIX-like systems
(especially between SysVinit and systemd), the general flow is as follows:

1. BIOS/UEFI Initialization: The firmware (BIOS or UEFI) initializes hardware components, performs a
Power-On Self-Test (POST), and then locates the bootloader.

2. Bootloader Execution: The bootloader (e.g., GRUB, LILO) loads the kernel into memory.

3. Kernel Initialization: The kernel takes control, initializes devices, mounts the root filesystem, and starts
the init process (or systemd).

4. Init/Systemd:

o SysVinit (Older systems): The init process reads /etc/inittab and executes scripts in directories
like /etc/rc.d/init.d or /etc/rc.d/rcX.d (where X is the runlevel) to start essential services.

o systemd (Modern systems): systemd is the primary init system in most modern Linux
distributions. It uses "units" (e.g., service units, target units) to manage the startup of services in
parallel, leading to faster boot times. It moves through different "targets" (similar to runlevels)
to reach a desired state.

Common UNIX Commands related to Startup:

• dmesg: Displays the kernel ring buffer messages, which contain information about hardware detection and
initialization during boot.
dmesg | less

• journalctl: (systemd-based systems) Used to query and display messages from the systemd journal, including
boot messages.
journalctl -b # Show messages from the current boot
journalctl -b -1 # Show messages from the previous boot

• systemctl status: (systemd-based systems) Shows the status of services.


systemctl status sshd # Check the status of the SSH daemon

• chkconfig: (SysVinit-based systems, or for compatibility on some systemd systems) Used to manage services
that start at different runlevels.
chkconfig --list # List all services and their runlevel status
chkconfig httpd on # Enable httpd to start at default runlevels

System Shutdown

Proper shutdown procedures ensure that all processes are gracefully terminated, filesystems are unmounted
cleanly, and no data is lost.

Common UNIX Commands for Shutdown:

• shutdown: The most common and recommended command for shutting down the system. It allows you to
specify a time for shutdown and send a warning message to users.
shutdown -h now    # Halt (power off) the system immediately
shutdown -r now    # Reboot the system immediately
shutdown -h +10 "System going down for maintenance in 10 minutes"   # Halt in 10 minutes
shutdown -c        # Cancel a pending shutdown
▪ -h: Halt the system after shutdown.
▪ -r: Reboot the system after shutdown.
▪ now: Shuts down immediately.
▪ +minutes: Shuts down after the specified number of minutes.

• reboot: A simpler command to reboot the system immediately.


reboot

• halt: A simpler command to halt the system immediately.


halt

Note: On modern systems, reboot and halt often internally call shutdown.

• init 0: (SysVinit-based systems) Changes the system to runlevel 0 (halt).


init 0

• init 6: (SysVinit-based systems) Changes the system to runlevel 6 (reboot).


init 6

• systemctl poweroff: (systemd-based systems) Powers off the system.


systemctl poweroff

• systemctl reboot: (systemd-based systems) Reboots the system.


systemctl reboot

Important Considerations during Shutdown:

• Notify Users: Always notify users well in advance of a scheduled shutdown, especially on multi-user
systems.

• Save Work: Encourage users to save their work before the shutdown.

• Graceful Termination: The shutdown command sends signals to running processes to allow them to
terminate gracefully and save any open files.

Brief Idea About User Account Management

User account management is a fundamental aspect of UNIX system administration. It involves creating,
modifying, and deleting user accounts, as well as managing their access rights, resources, and security.

Key Components of a User Account:

1. Username:

o A unique identifier for the user.

o Typically lowercase letters, numbers, and underscores.

o Used for logging in and identifying the user within the system.

2. Password:

o Authenticates the user during login.

o Stored in an encrypted (hashed) format in /etc/shadow (for security reasons) or /etc/passwd (less secure, older systems).

o Should be strong, unique, and changed regularly.

3. User ID (UID):

o A unique numerical identifier for the user.

o 0 is reserved for the root user.

o Typically, UIDs below 500 or 1000 are reserved for system accounts.

o Stored in /etc/passwd.

4. Group ID (GID):

o Every user belongs to at least one primary group.

o The primary GID is stored in /etc/passwd.

o Users can also be members of supplementary groups.

o Groups simplify permission management.

5. Home Directory:

o The default directory where a user lands after logging in.

o Typically /home/username.

o Used to store personal files, configurations, and data.

6. Login Shell:

o The command-line interpreter that starts when the user logs in.

o Common shells include Bash (/bin/bash), Zsh (/bin/zsh), Korn Shell (/bin/ksh), C Shell (/bin/csh).

o Configured in /etc/passwd.

7. Disk Quota:

o A mechanism to limit the amount of disk space a user or group can consume.

o Helps prevent a single user from filling up the entire filesystem.

8. Terminal:

o The interface through which a user interacts with the system (e.g., a physical console, an SSH
client, a terminal emulator).

UNIX Commands for User Account Management:

1. Managing Users:

• useradd (or adduser on some systems): Creates a new user account.


useradd -m newuser                                     # Create user 'newuser' and its home directory
useradd -m -s /bin/bash -g sales -G projects newuser2  # Create user with specific shell and groups
o -m: Create the user's home directory if it doesn't exist.
o -s: Specify the login shell.
o -g: Specify the primary group.
o -G: Specify supplementary groups.

• passwd: Sets or changes a user's password.
passwd newuser                                 # Set password for 'newuser' (will prompt for password)
echo "mynewpassword" | passwd --stdin newuser  # Set password non-interactively (use with caution!)

• usermod: Modifies an existing user account.


usermod -l newloginname oldusername  # Change username
usermod -d /new/home/dir username    # Change home directory
usermod -s /bin/zsh username         # Change login shell
usermod -aG sudo username            # Add user to a supplementary group (sudo)
usermod -L username                  # Lock user account (disable login)
usermod -U username                  # Unlock user account
o -l: New login name.
o -d: New home directory.
o -s: New shell.
o -aG: Append user to supplementary group.
o -L: Lock account.
o -U: Unlock account.

• userdel (or deluser on some systems): Deletes a user account.


userdel username     # Delete user account (leaves home directory)
userdel -r username  # Delete user account and its home directory
o -r: Remove the home directory and mail spool.

2. Managing Groups:

• groupadd: Creates a new group.


groupadd mygroup

• groupdel: Deletes a group.


groupdel mygroup

• groupmod: Modifies an existing group.


groupmod -n newgroupname oldgroupname  # Change group name
o -n: New group name.

3. Viewing User and Group Information:

• /etc/passwd: Contains basic user account information (username, password placeholder x, UID, GID,
GECOS field, home directory, shell).
cat /etc/passwd | grep newuser

• /etc/shadow: Contains encrypted user passwords and password expiration information. Requires root
privileges to view.
sudo cat /etc/shadow | grep newuser

• /etc/group: Contains group information (group name, password placeholder x, GID, list of members).
cat /etc/group | grep mygroup

• id: Displays the UID, GID, and supplementary groups of the current user or a specified user.
id # Show current user's IDs
id username # Show specified user's IDs

• groups: Displays the groups a user belongs to.


groups # Show current user's groups
groups username # Show specified user's groups

4. Disk Quotas:

• quota: Displays disk usage and quotas for a user or group.


quota -s # Show summary of quotas
quota -u username # Show user's quota

• edquota: Edits user or group quotas. Requires root privileges.


sudo edquota -u username # Edit user's quota interactively

• quotacheck: Scans a filesystem for disk usage and creates/updates quota files.
sudo quotacheck -cum /dev/sda1 # Check user quotas on /dev/sda1

• quotaon / quotaoff: Enables/disables disk quotas on a filesystem.


sudo quotaon /dev/sda1 # Enable quotas on /dev/sda1
sudo quotaoff /dev/sda1 # Disable quotas on /dev/sda1

Important Notes for User Account Management:

• Security Best Practices: Always enforce strong password policies, regularly audit user accounts, and
promptly disable accounts of former employees.

• Least Privilege: Grant users only the necessary permissions to perform their tasks. Avoid giving root
access unnecessarily.

• Documentation: Maintain clear records of user accounts, their roles, and any special permissions.

By diligently performing these duties and understanding the intricacies of UNIX commands, a system
administrator ensures a stable, secure, and efficient computing environment for their organization.



Review Questions
Short Answer Type Questions

1. What is the primary role of a UNIX system administrator?


Ans: The primary role of a UNIX system administrator is to ensure the stable, secure, and efficient
operation of UNIX-based computer systems. This involves managing hardware, software, network
services, and user accounts.

2. Name two key responsibilities of a UNIX system administrator regarding system security.
Ans: Two key responsibilities include implementing and maintaining robust security measures such as
configuring firewalls, managing access controls, and regularly patching vulnerabilities to protect against
unauthorized access and cyber threats.

3. How does a UNIX system administrator contribute to system performance?


Ans: A UNIX system administrator contributes to system performance by monitoring resource
utilization, optimizing system configurations, managing disk space, and troubleshooting bottlenecks to
ensure applications run smoothly and efficiently.

4. What is the significance of backups in UNIX system administration?


Ans: Backups are crucial in UNIX system administration for disaster recovery. They ensure that critical
data can be restored in case of hardware failure, accidental deletion, or data corruption, minimizing
downtime and data loss.

5. Briefly explain the purpose of the /etc/passwd file.


Ans: The /etc/passwd file stores essential information about user accounts on a UNIX system, including
usernames, user IDs (UIDs), group IDs (GIDs), home directories, and the default shell.

6. What is the function of the /etc/shadow file?


Ans: The /etc/shadow file securely stores encrypted user passwords and password aging information. It
separates password hashes from the publicly readable /etc/passwd file for enhanced security.

7. How is a user's home directory typically determined in UNIX?


Ans: A user's home directory is typically determined by an entry in the /etc/passwd file and is the
default working directory where the user lands after logging in, usually /home/username.

8. What is a user's Group ID (GID) in UNIX?


Ans: A user's Group ID (GID) is a numerical identifier that links a user to their primary group. It
determines the user's permissions and access rights to files and directories owned by that group.

9. What is the purpose of a login shell for a user?


Ans: A login shell is the command-line interpreter that a user interacts with after logging into a UNIX
system. It executes commands and provides the user interface for system interaction.

10. Briefly explain the concept of disk quota.


Ans: Disk quota is a system administration feature that limits the amount of disk space or the number
of files a user or group can consume on a file system, preventing any single user from monopolizing
resources.

11. What is the command to create a new user account in UNIX?


Ans: The useradd command is used to create a new user account in UNIX. For example, sudo useradd -m newuser creates a new user newuser and their home directory.

12. How would you assign an initial password to a new user?
Ans: To assign an initial password to a new user, the passwd command is used. For instance, sudo
passwd newuser prompts the administrator to set a password for newuser.

13. What command is used to modify an existing user account?


Ans: The usermod command is used to modify an existing user account. For example, sudo usermod -d
/new/home/dir username changes a user's home directory.

14. How do you delete a user account in UNIX?


Ans: To delete a user account, the userdel command is used. Using sudo userdel -r username will
remove the user and their home directory.

15. What is the significance of runlevels during system startup?


Ans: Runlevels define the state of a UNIX system, determining which services are started or stopped.
Different runlevels provide specific operating environments, such as multi-user mode or single-user
mode.

16. Name the first process initiated during UNIX system startup.
Ans: The init (or systemd in modern systems) process is the first process initiated during UNIX system
startup. It is responsible for starting all other system processes.

17. What is the purpose of the /etc/rc.d directory (or similar) during startup?
Ans: The /etc/rc.d directory (or /etc/init.d or /etc/systemd/system in modern systems) contains scripts
that control the startup and shutdown of various system services and daemons.

18. Which command is used to gracefully shut down a UNIX system?


Ans: The shutdown command is used to gracefully shut down a UNIX system. For example, sudo
shutdown -h now immediately halts the system.

19. What is the difference between reboot and halt commands?


Ans: The reboot command restarts the system, while the halt command stops the system's operation
entirely without restarting it, requiring a manual power-on.

20. Why is it important to shut down a UNIX system gracefully rather than just cutting power?
Ans: Graceful shutdown allows the system to properly unmount file systems, stop services, and save any
pending data to disk, preventing data corruption and ensuring system integrity.

Descriptive Type Questions

1. Describe the essential duties of a UNIX system administrator in detail, covering at least four key areas.
Ans: A UNIX system administrator's duties are multifaceted. Firstly, system installation and configuration
involve installing the OS, configuring hardware, and setting up network services. Secondly, user and group
management includes creating, modifying, and deleting user accounts, assigning permissions, and
managing group memberships. Thirdly, system monitoring and performance tuning require constant
vigilance over resource utilization, identifying bottlenecks, and optimizing configurations to ensure
smooth operation. Finally, security management is paramount, involving implementing firewalls,
intrusion detection systems, regularly patching vulnerabilities, and enforcing access controls to protect
sensitive data and prevent unauthorized access.

2. Explain the typical sequence of events during a UNIX system startup, from the power-on to a fully
operational system.
Ans: During a UNIX system startup, the process begins with the BIOS/UEFI performing a Power-On Self-
Test (POST) and then locating the boot loader (e.g., GRUB). The boot loader loads the kernel into memory,
which initializes hardware and mounts the root file system. Next, the kernel starts the init process (or
systemd in modern systems), which is the parent of all other processes. init then reads its configuration
files (e.g., /etc/inittab or systemd unit files) to determine the desired runlevel and starts various system
services and daemons, eventually bringing the system to a fully operational state.

3. Detail the process of gracefully shutting down a UNIX system using the shutdown command, including
common options and their effects.
Ans: Gracefully shutting down a UNIX system using the shutdown command is crucial to prevent data
loss. The general syntax is shutdown [OPTIONS] TIME [MESSAGE].

sudo shutdown -h now "System going down for maintenance."

Explanation: This command immediately halts the system (-h now) and sends a message to all
logged-in users.

Common Options:

▪ -h: Halts the system after shutdown.

▪ -r: Reboots the system after shutdown.

▪ now: Shuts down immediately.

▪ +minutes: Shuts down after a specified number of minutes.

▪ HH:MM: Shuts down at a specific time.

The shutdown command notifies users, prevents new logins, sends TERM signals to running
processes to allow them to terminate gracefully, and eventually unmounts file systems before
halting or rebooting. This ensures data integrity.

4. Discuss the importance of user account management for system security and efficient operation. What
are the potential risks of poor user account practices?
Ans: User account management is fundamental for both system security and efficient operation. Proper
management ensures that users only have the necessary privileges, adhering to the principle of least
privilege, thereby minimizing the attack surface. It involves setting strong password policies, regularly
reviewing user permissions, and promptly disabling accounts of departed employees. Poor user account
practices, such as weak passwords, excessive permissions, or unmanaged dormant accounts, pose
significant risks. These can lead to unauthorized access, data breaches, privilege escalation, and system
compromise, making the system vulnerable to malicious activities.

5. Explain the various components of a user account entry in /etc/passwd and their significance for user
identification and system interaction.
Ans: A typical user account entry in /etc/passwd consists of seven colon-separated fields, each holding
vital information:

• Username: The login name for the user.

• Password (x): Historically, this field held the encrypted password, but now it's usually 'x'
indicating the actual password hash is in /etc/shadow.
• User ID (UID): A unique numerical identifier for the user. UIDs below 500 (or 1000 on some
systems) are typically reserved for system accounts.
• Group ID (GID): The numerical ID of the user's primary group.
• GECOS field: Contains general information about the user, such as full name, office location, and
phone number.
• Home Directory: The absolute path to the user's home directory, where their personal files and
configurations are stored.
• Login Shell: The path to the default command-line interpreter (e.g., /bin/bash, /bin/sh) that the
user will use after logging in. These fields collectively define a user's identity, permissions, and
initial environment on the system.
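The colon-separated layout described above can be seen by splitting a sample entry into its seven fields. The entry below is hypothetical, constructed for illustration rather than taken from a real system:

```shell
#!/bin/bash
# Hypothetical /etc/passwd line: username:password:UID:GID:GECOS:home:shell
entry='newuser:x:1001:1001:New User,Room 101:/home/newuser:/bin/bash'

# Split on ':' into the seven fields
IFS=':' read -r user passwd uid gid gecos home shell <<< "$entry"

echo "username=$user uid=$uid gid=$gid"
echo "home=$home shell=$shell"
```

The same splitting technique works against a real entry, e.g. by reading the output of grep '^newuser:' /etc/passwd.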

6. Describe how to create a new user account in UNIX, including setting up their home directory, primary
group, and initial password. Provide command examples.
Ans: Creating a new user account involves several steps using the useradd and passwd commands.

o Creating the user and home directory:


sudo useradd -m newuser

Explanation: The useradd command creates a new user named newuser. The -m option ensures
that a home directory (/home/newuser by default) is created for the user. On most Linux
systems this command also adds entries for newuser in /etc/passwd and /etc/shadow and
creates a primary group with the same name.

o Setting the initial password:


sudo passwd newuser

Explanation: After creating the user, the passwd command is used to set the initial password for
newuser. The system will prompt the administrator to enter the new password twice for
confirmation. This password hash is then stored securely in /etc/shadow.

Example Output:
$ sudo useradd -m newuser
$ sudo passwd newuser
New password:
Retype new password:
passwd: password updated successfully

This process ensures a new user can log in with a defined home directory and secure credentials.
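Once an account has been created, its entry can be checked without reading /etc/passwd by hand, using id and getent. A small sketch — root is queried here only because it exists on every system; in practice you would substitute the newly created account name (e.g. newuser):

```shell
# Verify that an account exists and inspect its UID and login shell.
# 'root' is used because it is guaranteed to exist; substitute the
# newly created account (e.g. newuser) in practice.
user=root

if id "$user" >/dev/null 2>&1; then
    echo "$user exists with UID $(id -u "$user")"
    getent passwd "$user" | cut -d: -f7   # login shell (7th field)
else
    echo "$user does not exist"
fi
```

For root this prints "root exists with UID 0" followed by root's login shell.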

7. Explain the concept of user groups in UNIX and their role in managing file permissions and access
control. Provide an example of adding a user to an existing group.
Ans: User groups in UNIX are collections of users that share common access permissions to files and
directories. They streamline access control by allowing administrators to grant permissions to a group
rather than to individual users. This simplifies management, especially in environments with many users.

o Example of adding a user to an existing group: Let's say we have an existing group named
developers and we want to add newuser to it.
sudo usermod -aG developers newuser

Explanation: The usermod command is used to modify an existing user account. The -a option
means "append" (to the existing groups), and -G specifies the supplementary group(s) to which
the user should be added. In this case, newuser is added to the developers group.

o Verification: You can verify the user's group memberships with the groups command: groups
newuser (output might be newuser : newuser developers). By setting read/write/execute
permissions on a directory or file for the developers group, an administrator lets every
member of that group inherit those permissions without individual assignments.

8. How do disk quotas work in UNIX, and why are they important for system resource management?
Describe how to set a disk quota for a user.
Ans: Disk quotas are a vital feature in UNIX for managing disk space consumption. They allow
system administrators to limit the amount of disk space or the number of files (inodes) a user
or group can use on a specific file system. This prevents any single user from monopolizing disk
resources, which could otherwise lead to a "disk full" condition that degrades system stability
and performance for all users.

o Importance: Disk quotas ensure fair resource allocation, prevent abuse, and help maintain system
health by proactively managing storage.

o Setting a Disk Quota (Conceptual Steps, as exact commands vary by distribution/filesystem):

▪ Enable quota support: Ensure the file system is mounted with quota options (e.g., usrquota,
grpquota in /etc/fstab).
▪ Remount the filesystem: Remount it (e.g. mount -o remount /path/to/filesystem) so the quota options take effect.
▪ Create quota files: Run quotacheck -cum /path/to/filesystem to create initial quota files.
▪ Set user quota: Use the edquota command.
sudo edquota -u username

Explanation: This command opens a text editor (usually vi) with the quota settings for
username. You'll see lines for blocks (disk space) and inodes (number of files). You can set
soft and hard limits for both.

▪ Soft Limit: A warning threshold.

▪ Hard Limit: An absolute limit that cannot be exceeded.

Example edquota editor content:

Disk quotas for user username (uid 1001):
  Filesystem    blocks    soft    hard    inodes    soft    hard
  /dev/sda1          0    500M      1G         0    1000    2000

This example sets a soft limit of 500MB and a hard limit of 1GB on disk blocks, and soft/hard
limits of 1000/2000 on inodes, on /dev/sda1. (In a real edquota session, block limits appear as
numeric 1 KB block counts rather than suffixed sizes.)

▪ Activate quotas: Run quotaon /path/to/filesystem.

▪ Regular monitoring with quota -s is also important.
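For scripted administration, the interactive edquota step can be replaced by setquota, which takes the limits on the command line. A sketch only, assuming a hypothetical account username and a filesystem on /dev/sda1 mounted with usrquota; these commands require root and a quota-enabled filesystem, so they are shown for illustration rather than direct execution:

```shell
# /etc/fstab line enabling user and group quotas (illustrative):
#   /dev/sda1  /home  ext4  defaults,usrquota,grpquota  0  2

# Non-interactive equivalent of the edquota session above.
# Block limits are given in 1 KB blocks (512000 ~= 500MB, 1048576 ~= 1GB):
sudo setquota -u username 512000 1048576 1000 2000 /dev/sda1

# Report current usage and limits for all users in human-readable form:
sudo repquota -s /dev/sda1
```

setquota applies the same soft/hard block and inode limits as the edquota example, but in a form suitable for provisioning scripts.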


9. Compare and contrast the useradd, usermod, and userdel commands in UNIX for user account
management. Provide an example for each command's primary function.
Ans: These three commands are fundamental for managing user accounts in UNIX, each serving a
distinct purpose:

o useradd (Create User): Used to create new user accounts on the system. It adds an entry to
/etc/passwd and /etc/shadow, creates a home directory (with -m), and sets up other initial user
attributes.
sudo useradd -m -s /bin/bash -g developers john_doe

Explanation: Creates a new user john_doe, creates their home directory (-m), sets their
default shell to /bin/bash, and assigns their primary group to developers.

o usermod (Modify User): Used to modify existing user account properties. This can include
changing the username, home directory, shell, primary group, or adding/removing from
supplementary groups.
sudo usermod -d /home/johndoe/new_home -s /bin/sh john_doe

Explanation: Changes john_doe's home directory to /home/johndoe/new_home and their
default shell to /bin/sh.

o userdel (Delete User): Used to delete user accounts from the system. It removes the user's
entry from /etc/passwd and /etc/shadow. With the -r option, it also removes the user's home
directory and mail spool.
sudo userdel -r john_doe

Explanation: Deletes the user account john_doe and recursively removes their home
directory and mail spool. In summary, useradd initializes, usermod updates, and userdel
removes user accounts, forming a complete lifecycle management suite.
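A lifecycle script usually guards these commands with an existence check so that useradd is not run for an account that already exists, and usermod/userdel are not run for one that doesn't. A sketch — john_doe is the hypothetical account from above, and the privileged commands are left commented since they require root:

```shell
# Guard pattern for the user-account lifecycle: test for the account
# before choosing between useradd and usermod/userdel.
user=john_doe   # hypothetical account name

if id "$user" >/dev/null 2>&1; then
    echo "$user exists; usermod or userdel may be applied"
    # sudo usermod -s /bin/sh "$user"
    # sudo userdel -r "$user"
else
    echo "$user absent; useradd may be applied"
    # sudo useradd -m -s /bin/bash "$user"
fi
```

id exits with status 0 when the account exists and non-zero otherwise, which makes it a convenient, portable existence test.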

10. Discuss the process of system startup in a modern UNIX/Linux system using systemd. How does it
differ from older init-based systems, and what are its advantages?
Ans: In modern UNIX/Linux systems, systemd has largely replaced the traditional init process as the
primary system and service manager.

o systemd Startup Process:

▪ Kernel Initialization: Similar to init, the kernel is loaded and initialized.


▪ systemd PID 1: The kernel hands control to systemd, which becomes the first process
(PID 1).
▪ Target Units: systemd doesn't use runlevels directly but employs "target units" (e.g.,
multi-user.target, graphical.target) that define the desired system state.
▪ Parallel Service Startup: systemd reads .service unit files (and other unit types) to
determine which services to start. It can start multiple services in parallel, significantly
speeding up boot time. Dependencies between services are explicitly defined in unit
files, ensuring services start in the correct order.
▪ Socket Activation: Services can be started only when their corresponding socket is
accessed, further optimizing resource usage.

o Differences from init:

▪ Parallelization: systemd starts services in parallel, while init typically starts them
sequentially based on runlevel scripts.

▪ Unit Files vs. Shell Scripts: systemd uses declarative unit files for service configuration,
which are more structured and less error-prone than the imperative shell scripts used by
init in /etc/rc.d or /etc/init.d.

▪ On-demand Activation: systemd supports socket and D-Bus activation, allowing services
to start only when needed, unlike init which starts all services at boot.

▪ Logging: systemd has integrated logging (journald).

o Advantages:

▪ Faster Boot Times: Due to parallelization and on-demand activation.

▪ Improved Dependency Management: Unit files clearly define service dependencies.

▪ Enhanced Reliability: Better handling of service failures and restart capabilities.

▪ Centralized Management: Provides a unified way to manage services, mounts, sockets, and other system resources.

▪ Modern Features: Supports cgroups for resource control and system state snapshots.
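The declarative unit files mentioned above look like the following. This is a minimal sketch of a hypothetical service; myapp, its path, and the file name are made up for illustration:

```ini
# /etc/systemd/system/myapp.service (hypothetical example)
[Unit]
Description=My example application
After=network.target            ; ordering dependency, declared rather than scripted

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure              ; systemd restarts the service if it crashes

[Install]
WantedBy=multi-user.target      ; pulled in when the multi-user target is reached
```

Such a unit would typically be enabled and started with `sudo systemctl enable --now myapp.service`; contrast this short declarative file with the long imperative shell scripts of /etc/init.d.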


END
