
Cte 314 - Os LN

An operating system (OS) is system software that manages computer resources and acts as an interface between hardware and software, allowing multiple programs to run simultaneously. The evolution of OS has progressed from no OS in the 1940s to modern systems that support multitasking, memory management, and real-time processing, with notable examples including UNIX, Windows, and mobile OS like iOS and Android. Operating systems are crucial for resource allocation, security, and user interaction, while also facing challenges such as data recovery and vulnerability to threats.


CTE 314 OPERATING SYSTEMS II UNIT 3

CHAPTER ONE
OPERATING SYSTEMS

1.1 Operating System (OS)

An operating system (OS) is system software that manages all the resources of the
computing device.
It is a program that manages a computer’s resources, especially the allocation of those
resources among other programs. Typical resources include the central processing unit
(CPU), computer memory, file storage, input/output (I/O) devices, and network
connections.

• Acts as an interface between the software and different parts of the computer or
the computer hardware.
• Manages the overall resources and operations of the computer.
• Controls and monitors the execution of all other programs that reside in the
computer, which also includes application programs and other system software
of the computer.
• Examples of Operating Systems are Windows, Linux, macOS, Android, iOS, etc.

Management tasks include scheduling resource use to avoid conflicts and interference
between programs. Unlike most programs, which complete a task and terminate, an
operating system runs indefinitely and terminates only when the computer is turned off.

Modern multiprocessing operating systems allow many processes to be active, where
each process is a “thread” of computation being used to execute a program. One form
of multiprocessing is called time-sharing, which lets many users share computer access
by rapidly switching between them. Time-sharing must guard against interference
between users’ programs, and most systems use virtual memory, in which the memory,
or “address space,” used by a program may reside in secondary memory (such as on a
magnetic hard disk drive) when not in immediate use, to be swapped back to occupy the
faster main computer memory on demand. This virtual memory both increases the
address space available to a program and helps to prevent programs from interfering
with each other, but it requires careful control by the operating system and a set of
allocation tables to keep track of memory use. Perhaps the most delicate and critical
task for a modern operating system is allocation of the CPU; each process is allowed to
use the CPU for a limited time, which may be a fraction of a second, and then must give
up control and become suspended until its next turn. Switching between processes must
itself use the CPU while protecting all data of the processes.
The first digital computers had no operating systems. They ran one program at a time,
which had command of all system resources, and a human operator would provide any
special resources needed. The first operating systems were developed in the mid-1950s.
These were small “supervisor programs” that provided basic I/O operations (such as
controlling punch card readers and printers) and kept accounts of CPU usage for billing.
Supervisor programs also provided multiprogramming capabilities to enable several
programs to run at once. This was particularly important so that these early multimillion-
dollar machines would not be idle during slow I/O operations.

Computers acquired more powerful operating systems in the 1960s with the emergence
of time-sharing, which required a system to manage multiple users sharing CPU time and
terminals. Two early time-sharing systems were CTSS (Compatible Time Sharing
System), developed at the Massachusetts Institute of Technology, and the Dartmouth
College Basic System, developed at Dartmouth College. Other multiprogrammed
systems included Atlas, at the University of Manchester, England, and IBM’s OS/360,
probably the most complex software package of the 1960s. After 1972 the Multics
system for General Electric Co.’s GE 645 computer (and later for Honeywell Inc.’s
computers) became the most sophisticated system, with most of the multiprogramming
and time-sharing capabilities that later became standard.
The minicomputers of the 1970s had limited memory and required smaller operating
systems. The most important operating system of that period was UNIX, developed by
AT&T for large minicomputers as a simpler alternative to Multics. It became widely used
in the 1980s, in part because it was free to universities and in part because it was
designed with a set of tools that were powerful in the hands of skilled programmers. More
recently, Linux, an open-source version of UNIX developed in part by a group led by
Finnish computer science student Linus Torvalds and in part by a group led by American
computer programmer Richard Stallman, has become popular on personal computers
as well as on larger computers.
In addition to such general-purpose systems, special-purpose operating systems run on
small computers that control assembly lines, aircraft, and even home appliances. They
are real-time systems, designed to provide rapid response to sensors and to use their
inputs to control machinery. Operating systems have also been developed for mobile
devices such as smartphones and tablets. Apple Inc.’s iOS, which runs on iPhones and
iPads, and Google Inc.’s Android are two prominent mobile operating systems.
From the standpoint of a user or an application program, an operating system provides
services. Some of these are simple user commands like “dir”—show the files on a disk—
while others are low-level “system calls” that a graphics program might use to display an
image. In either case the operating system provides appropriate access to its objects,
the tables of disk locations in one case and the routines to transfer data to the screen in
the other. Some of its routines, those that manage the CPU and memory, are generally
accessible only to other portions of the operating system.
Contemporary operating systems for personal computers commonly provide a graphical
user interface (GUI). The GUI may be an intrinsic part of the system, as in the older
versions of Apple’s Mac OS and Microsoft Corporation’s Windows OS; in others it is a set
of programs that depend on an underlying system, as in the X Window system for UNIX
and Apple’s Mac OS X.
Operating systems also provide network services and file-sharing capabilities—even the
ability to share resources between systems of different types, such as Windows and
UNIX. Such sharing has become feasible through the introduction of network protocols
(communication rules) such as the Internet’s TCP/IP.

1.2 Evolution of OS

History of Operating System

An operating system is a type of software that acts as an interface between the user and
the hardware. It is responsible for handling various critical functions of the computer and
for utilizing resources efficiently, which is why the operating system is also known as a
resource manager. The operating system also acts like a government: just as the
government has authority over everything in its domain, the operating system has
authority over all resources. Tasks handled by the OS include file management, task
management, garbage management, memory management, process management, disk
management, I/O management, and peripherals management.

Generations of Operating Systems


• 1940s-1950s: Early Beginnings
o Computers operated without operating systems (OS).
o Programs were manually loaded and run, one at a time.

o The first operating system, GM-NAA I/O (1956), was a batch processing
system that automated job handling.
• 1960s: Multiprogramming and Timesharing
o Introduction of multiprogramming to utilize CPU efficiently.
o Timesharing systems, like CTSS (1961) and Multics (1969), allowed
multiple users to interact with a single system.
• 1970s: Unix and Personal Computers
o Unix (1971) revolutionized OS design with simplicity, portability, and
multitasking.
o Personal computers emerged, leading to simpler OSs like CP/M (1974) and
PC-DOS (1981).
• 1980s: GUI and Networking
o Graphical User Interfaces (GUIs) gained popularity with systems like Apple
Macintosh (1984) and Microsoft Windows (1985).
o Networking features, like TCP/IP in Unix, became essential.
• 1990s: Linux and Advanced GUIs
o Linux (1991) introduced open-source development.
o Windows and Mac OS refined GUIs and gained widespread adoption.
• 2000s-Present: Mobility and Cloud
o Mobile OSs like iOS (2007) and Android (2008) dominate.
o Cloud-based and virtualization technologies reshape computing, with OSs
like Windows Server and Linux driving innovation.
• AI Integration – (Ongoing)
Over time, artificial intelligence has come into the picture. Operating systems now
integrate AI features such as Siri, Google Assistant, and Alexa, and have become
more powerful and efficient in many ways. Combined with the OS, these AI features
create entirely new capabilities such as voice commands, predictive text, and
personalized recommendations.
Note: The generations above show how the OS evolved over time by adding new
features. This does not mean that only new-generation operating systems are in use and
older ones are not; depending on the need, all of these systems are still used in the
software industry.
Operating systems have evolved from basic program execution to complex ecosystems
supporting diverse devices and users.

Function of Operating System

• Memory management
• Process management
• File management
• Device Management
• Deadlock Prevention
• Input/Output device management
History According to Types of Operating Systems
Operating systems have evolved over the years, going through several changes before
reaching their current form.

1. No OS – (up to the 1940s)
Before the 1940s there were no operating systems. Lacking an OS, users had to type the
instructions for each task manually in machine language (a 0-and-1 based language).
At that time it was very hard for users to carry out even a simple task; it was very
time-consuming and not user-friendly, because machine language required a deep level
of understanding that not everyone had.

2. Batch Processing Systems – (1950s)

With the growth of time, batch processing systems came into the market. Users now had
the facility to write their programs on punch cards and hand them to the computer
operator. The operator made up batches of similar types of jobs and served each batch
(group of jobs) to the CPU one by one. The CPU executed the jobs of one batch and then
jumped to the jobs of the next batch in a sequential manner.

3. Multiprogramming Systems – (1960s and 1970s)

Multiprogramming was the operating system where the real revolution began. It gave
users the facility to load multiple programs into memory, with a specific portion of
memory allocated to each program. When one program is waiting for an I/O operation
(which takes a long time), the OS permits the CPU to switch to another program (the first
in the ready queue), so that execution continues without the CPU sitting idle.

4. Personal Computer Systems – (1970s)


Unix (1971) revolutionized OS design with simplicity, portability, and multitasking.
Personal computers emerged, leading to simpler OSs like CP/M (1974) and PC-DOS
(1981).

5. Introduction of GUI – (1980s)

With the growth of time, Graphical User Interfaces (GUIs) arrived. For the first time the OS
became user-friendly and changed the way people interact with computers. A GUI gives
the computer system visual elements, which make the user’s interaction with the
computer more comfortable: users can simply click on visual elements such as icons,
menus, and windows (as in Microsoft Windows) rather than typing commands.

6. Networked Systems – (1990s)

During the 1980s and 1990s, the popularity of computer networks reached its peak, and
a special type of operating system was needed to manage network communication.
Operating systems such as Novell NetWare and Windows NT were developed for this
purpose, giving users the facility to work in a collaborative environment and making file
sharing and remote access very easy.

7. Mobile Operating Systems – (2000s)


The invention of smartphones created a big revolution in the software industry. To handle
the operation of smartphones, a special type of operating system was developed,
including iOS and Android. These operating systems have been optimized over time and
have become more and more powerful.

8. AI Integration – (2010s to ongoing)


With the growth of time, artificial intelligence came into the picture. Operating systems
integrate AI features such as Siri, Google Assistant, and Alexa, and have become more
powerful and efficient in many ways. Combined with the OS, these AI features create
entirely new capabilities such as voice commands, predictive text, and personalized
recommendations.

Advantages of Operating System


• The operating system manages external and internal devices, for example printers,
scanners, and others.
• Operating System provides interfaces and drivers for proper communication
between system and hardware devices.
• Allows multiple applications to run simultaneously.
• Manages the execution of processes, ensuring that the system remains
responsive.
• Organizes and manages files on storage devices.
• Operating system allocates resources to various applications and ensures their
efficient utilization.

Disadvantages of Operating System


• If an error occurs in your operating system, there is a chance that your data
cannot be recovered, so always keep a backup of your data.
• Threats and viruses can attack the operating system at any time, making it
challenging for the OS to keep the system protected from these dangers.
• Learning a new operating system can be time-consuming and challenging,
especially for those used to a particular operating system; for example,
switching from Windows to Linux is difficult.
• Keeping an operating system up to date requires regular maintenance, which can
be time-consuming.

• Operating systems consume system resources, including CPU, memory, and
storage, which can affect the performance of other applications.

1.3 Characteristics of Modern OS

Characteristics of Operating System


The detailed characteristics of an operating system are:

• Multitasking: An OS allows multiple tasks to run simultaneously, enabling users
to work on different applications at the same time. It manages task switching by
allocating CPU time to various processes.

• Multiprogramming: The OS keeps several programs in memory simultaneously,
so the CPU stays busy doing useful work rather than waiting for the next action.

• Concurrency: The operating system enables several processes to run
concurrently, allowing some tasks to proceed at the same time without one
process interfering with another.

• Memory Management: The OS controls the system’s memory, assigns memory
space to processes, efficiently frees it when it is no longer needed, and ensures
each application is given adequate memory to function well.

• Device Management: The OS manages device communication through device
drivers, allowing software to interact with hardware components like printers,
hard drives, and keyboards.

• Portability: Many OSs are designed to run on different hardware platforms with
minimal modifications, making them portable across various systems.

• Real-Time Processing: Some operating systems are designed for real-time
applications, where processes must be completed within a specific time frame.
They are commonly used in embedded systems and industrial applications.

• Virtualization Support: Modern operating systems support virtualization,
allowing multiple operating systems to run concurrently on the same hardware,
improving resource utilization and flexibility.

Importance of Operating Systems in Modern Computing

Operating Systems play a critical role in various aspects of computing:
• The operating system distributes resources to programs, including memory,
CPUs, and storage devices. This helps to ensure their smooth functioning.
• It manages the execution of software programs. It prioritizes tasks and ensures
applications do not conflict with one another.
• Operating systems come with security measures. These include user
authentication and access control. They safeguard the system from unwanted
access and harmful applications.
• It controls peripheral devices such as printers, scanners, and network adapters.
These devices allow communication and data transmission.
• The operating system provides a user interface (UI) via which users may interact
with their computers.
• The UI allows users to launch apps, manage files and directories, and access
system services.

1.4 Concept Of OS
The concept of OS deals with the following: Processes, Files, System calls, Shell, Kernel,
etc.

A Process is an instance of a program running in a computer. It is close in meaning to
task, a term used in some operating systems. In UNIX and some other operating systems,
a process is started when a program is initiated (either by a user entering a shell
command or by another program).
There are five process states in an operating system. A process starts in the “New” state,
moves to “Ready” when it is ready to execute, then to “Running” when it gets CPU time.
It may move to “Waiting” if it needs to wait for an event such as I/O, and finally to
“Terminated” when it finishes.
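The five-state model just described can be sketched as a small transition table. This is an illustrative sketch only; the state names follow the paragraph above, not any particular kernel:

```python
# Illustrative sketch of the five-state process model (not from any real kernel).
# Each state maps to the set of states the OS may move the process into next.
TRANSITIONS = {
    "New": {"Ready"},                               # admitted by the scheduler
    "Ready": {"Running"},                           # dispatched to the CPU
    "Running": {"Ready", "Waiting", "Terminated"},  # preempted, blocks, or exits
    "Waiting": {"Ready"},                           # awaited event (e.g. I/O) completes
    "Terminated": set(),                            # final state
}

def move(state, new_state):
    """Validate a state transition; raise if the OS would never allow it."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

# A typical process lifetime, including one wait for I/O:
s = "New"
for nxt in ["Ready", "Running", "Waiting", "Ready", "Running", "Terminated"]:
    s = move(s, nxt)
print(s)  # Terminated
```

Note that there is no direct New → Running or Ready → Waiting edge: a process only waits because something it did while Running blocked.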

A File System is a structure used by an operating system to organize and manage files on
a storage device such as a hard drive, solid-state drive (SSD), or USB flash drive. It defines
how data is stored, accessed, and organized on the storage device.

A computer file is defined as a medium for saving and managing data in the computer
system. The data stored in the computer system is entirely digital, and various types of
files help us to store different kinds of data.

File systems are a crucial part of any operating system, providing a structured way to
store, organize, and manage data on storage devices such as hard drives, SSDs, and USB
drives. Essentially, a file system acts as a bridge between the operating system and the
physical storage hardware, allowing users and applications to create, read, update, and
delete files in an organized and efficient manner.

What is a File System?


A file system is a method an operating system uses to store, organize, and manage files
and directories on a storage device. Some common types of file systems include:
• FAT (File Allocation Table): An older file system used by older versions of
Windows and other operating systems.
• NTFS (New Technology File System): A modern file system used by Windows. It
supports features such as file and folder permissions, compression, and
encryption.
• ext (Extended File System): A file system commonly used on Linux and Unix-
based operating systems.
• HFS (Hierarchical File System): A file system used by macOS.
• APFS (Apple File System): A new file system introduced by Apple for their Macs
and iOS devices.

A file is a collection of related information recorded on secondary storage; equivalently,
a file is a collection of logically related entities. From the user’s perspective, a file is the
smallest allotment of logical secondary storage.
The name of a file is divided into two parts:
• Name
• Extension, separated by a period.
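For example, the name/extension split can be seen with Python's standard os.path.splitext helper (shown purely as an illustration; the file name "report.txt" is made up):

```python
import os.path

# "report.txt" has two parts: the name and the extension,
# separated by a period. splitext performs exactly this split.
name, extension = os.path.splitext("report.txt")
print(name)       # report
print(extension)  # .txt
```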

A System Call is an interface between a program running in user space and the operating
system (OS). Application programs use system calls to request services and
functionalities from the OS's kernel. This mechanism allows the program to call for a
service, like reading from a file, without accessing system resources directly.

A system call is a programmatic way in which a computer program requests a service
from the kernel of the operating system it is executed on. In other words, a system call is
the way for programs to interact with the operating system: a program makes a system
call whenever it requires a service from the operating system’s kernel.

System calls provide the services of the operating system to user programs via the
Application Program Interface (API). They form an interface between a process and the
operating system, allowing user-level processes to request services of the operating
system. System calls are the only entry points into the kernel; all programs needing
resources must use system calls.

When a program invokes a system call, the execution context switches from user to
kernel mode, allowing the system to access hardware and perform the required
operations safely. After the operation is completed, the control returns to user mode,
and the program continues its execution.
This layered approach facilitated by system calls:
• Ensures that hardware resources are isolated from user space processes.
• Prevents direct access to the kernel or hardware memory.
• Allows application code to run across different hardware architectures.
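As a concrete illustration of this boundary, Python's os module exposes thin wrappers around POSIX system calls; each call below crosses from user mode into the kernel and returns the kernel's result (a sketch, assuming a Unix-like system):

```python
import os

# User code never touches the hardware directly; it asks the kernel.
# Each wrapper below issues a real system call and returns its result.
pid = os.getpid()         # getpid() system call: ask the kernel for our process ID
n = os.write(1, b"hi\n")  # write() system call to file descriptor 1 (stdout)
print(pid > 0, n)         # the kernel reports 3 bytes written
```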

What Is the Purpose of the System Call?


System calls serve several important functions, which include:
• User-Kernel Boundary. System calls serve as the authorized gateway for user
programs when requesting services from the kernel. They ensure that user
programs cannot arbitrarily access kernel functions or critical system resources.
• Resource Management. User programs can request and manage vital resources
like CPU time, memory, and file storage via system calls. The OS oversees the
process and guarantees that it is completed in an organized manner.
• Streamlined Development. System calls abstract the complexities of hardware.
This allows developers to perform operations like reading and writing to a file or
managing network data without needing to write hardware-specific code.
• Security and Access Control. System calls implement checks to ensure that
requests made by user programs are valid and that the programs have the
necessary permissions to perform the requested operations.
• Inter-Process Communication (IPC). System calls provide the mechanisms for
processes to communicate with each other. They offer features like pipes,
message queues, and shared memory to facilitate this inter-process
communication.
• Network Operations. System calls provide the framework for network
communications between programs. Developers can devote their attention to
building their application's logic instead of focusing on low-level network
programming.

How Do System Calls Work?


This high-level overview explains how system calls work:

1. System Call Request. The application requests a system call by invoking its
corresponding function. For instance, the program might use the read() function to read
data from a file.
2. Context Switch to Kernel Space. A software interrupt or special instruction is used to
trigger a context switch and transition from the user mode to the kernel mode.
3. System Call Identified. The system uses an index into the system call table to identify
the call and dispatch to the corresponding kernel function.
4. Kernel Function Executed. The kernel function corresponding to the system call is
executed. For example, reading data from a file.

5. System Prepares Return Values. After the kernel function completes its operation,
any return values or results are prepared for the user application.
6. Context Switch to User Space. The execution context is switched back from kernel
mode to user mode.
7. Resume Application. The application resumes its execution from where it left off, now
with the results or effects of the system call.
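The seven steps can be traced with the read() example from step 1, using Python's os wrappers around the raw open()/read()/close() calls (a sketch; kernel-side steps 2 to 6 are invisible to user code except through the returned values, and the file path here is a scratch file created for the demonstration):

```python
import os
import tempfile

# Prepare a scratch file holding six bytes.
path = os.path.join(tempfile.mkdtemp(), "steps.txt")
with open(path, "wb") as f:
    f.write(b"abcdef")

fd = os.open(path, os.O_RDONLY)
chunk = os.read(fd, 4)  # step 1: request; steps 2-6 happen inside the kernel,
rest = os.read(fd, 4)   # step 7: we resumed with the result, and can call again
os.close(fd)
print(chunk, rest)      # b'abcd' b'ef' -- the kernel prepared these return values
```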

What Are the Features of System Calls?


The following features are indicative of system calls:
• Security. System calls ensure that user-space applications cannot harm the
system or interfere with other processes.
• Abstraction. System calls hide hardware details: programs do not need to know
the specifics of network hardware configurations to send data over the internet,
or of disk operations to read a file, as the OS handles these tasks.
• Access Control. System calls enforce security measures by checking whether a
program has the appropriate permissions to access resources.
• Consistency. Interactions between the OS and program remain consistent,
regardless of the underlying hardware configuration. The same program can run
on different hardware if the operating system supports it.
• Synchronous Operation. Many system calls operate synchronously, blocking the
calling process until the operation is complete. However, there are also
asynchronous system calls that allow processes to continue execution without
waiting.
• Process Control. System calls facilitate stable process management and
multitasking through process creation, termination, scheduling, and
synchronization mechanisms.
• File Management. System calls support file operations such as reading, writing,
opening, and closing files.

• Device Management. System calls enable processes to request device access,
perform read or write operations on these devices, and release them afterward.
• Resource Management. System calls help allocate and deallocate resources
like memory, CPU time, and I/O devices.
• Maintenance. System calls are used to obtain or configure system information,
such as the date and time or process status.
• Communication. System calls allow processes to communicate with each other
and synchronize their actions.
• Error Handling. When a system call cannot be completed, it returns an error code
indicating the problem with the requested service. The calling program must
check for these errors and handle them appropriately.
• Interface: System calls provide a well-defined interface between user programs
and the operating system. Programs make requests by calling specific functions,
and the operating system responds by executing the requested service and
returning a result.
• Protection: System calls are used to access privileged operations that are not
available to normal user programs. The operating system uses this privilege to
protect the system from malicious or unauthorized access.
• Kernel Mode: When a system call is made, the program is temporarily switched
from user mode to kernel mode. In kernel mode, the program has access to all
system resources, including hardware, memory, and other processes.
• Context Switching: A system call requires a context switch, which involves
saving the state of the current process and switching to the kernel mode to
execute the requested service. This can introduce overhead, which can impact
system performance.
• Synchronization: System calls can be used to synchronize access to shared
resources, such as files or network connections. The operating system provides
synchronization mechanisms, such as locks or semaphores, to ensure that
multiple programs can access these resources safely.
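The error-handling feature above can be observed directly: when a system call fails, the kernel's error code surfaces in Python as an OSError with its errno attribute set (a sketch assuming a standard POSIX errno mapping and a path that does not exist):

```python
import errno
import os

# Ask the kernel to open a path that does not exist. The open() system
# call fails and returns the ENOENT error code, which Python raises as
# an OSError for the program to check and handle.
try:
    os.open("/no/such/path/at-all", os.O_RDONLY)
    code = None
except OSError as e:
    code = e.errno

print(code == errno.ENOENT)  # True: "No such file or directory"
```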

Types of System Calls


The following list categorizes system calls based on their functionalities:
1. Process Control
System calls play an essential role in controlling system processes. They enable you to:
• Create new processes or terminate existing ones.
• Load and execute programs within a process's space.
• Schedule processes and set execution attributes, such as priority.
• Wait for a process to complete or signal upon its completion.
2. File Management
System calls support a wide array of file operations, such as:
• Reading from or writing to files.
• Opening and closing files.
• Deleting or modifying file attributes.
• Moving or renaming files.
3. Device Management
System calls can be used to facilitate device management by:
• Requesting device access and releasing it after use.
• Setting device attributes or parameters.
• Reading from or writing to devices.
• Mapping logical device names to physical devices.
4. Information Maintenance
This type of system call enables processes to:
• Retrieve or modify various system attributes.
• Set the system date and time.
• Query system performance metrics.
5. Communication
The communication call type facilitates:

• Sending or receiving messages between processes.
• Synchronizing actions between user processes.
• Establishing shared memory regions for inter-process communication.
• Networking via sockets.
6. Security and Access Control
System calls contribute to security and access control by:
• Determining which processes or users get access to specific resources and who
can read, write, and execute resources.
• Facilitating user authentication procedures.
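A minimal sketch of the process-control category, using Python's os wrappers for fork(), _exit(), and waitpid() (assumes a Unix-like system; the exit code 7 is arbitrary):

```python
import os

# fork() clones the calling process; the child terminates via _exit(),
# and the parent wait()s to collect the child's exit status.
pid = os.fork()
if pid == 0:                        # child branch: pid is 0 here
    os._exit(7)                     # terminate immediately with exit code 7

child, status = os.waitpid(pid, 0)  # parent blocks until the child terminates
code = os.WEXITSTATUS(status)       # decode the exit code from the status word
print(child == pid, code)           # True 7
```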

Examples of System Calls


The table below lists common Unix and Windows system calls and their descriptions.
Note: System call behaviour, parameters, and return values might differ depending on
the OS and version. Consult the OS's manual or documentation for detailed information.

| Unix system call | Description | Windows API call | Description |
| --- | --- | --- | --- |
| **Process control** | | | |
| fork() | Create a new process. | CreateProcess() | Create a new process. |
| exit() | Terminate the current process. | ExitProcess() | Terminate the current process. |
| wait() | Make a process wait until its child processes terminate. | WaitForSingleObject() | Wait for a process or thread to terminate. |
| exec() | Execute a new program in a process. | CreateProcess() or ShellExecute() | Execute a new program in a new process. |
| getpid() | Get the unique process ID. | GetCurrentProcessId() | Get the unique process ID. |
| **File management** | | | |
| open() | Open a file (or device). | CreateFile() | Open or create a file or device. |
| close() | Close an open file (or device). | CloseHandle() | Close an open object handle. |
| read() | Read from a file (or device). | ReadFile() | Read data from a file or input device. |
| write() | Write to a file (or device). | WriteFile() | Write data to a file or output device. |
| lseek() | Change the read/write location in a file. | SetFilePointer() | Set the position of the file pointer. |
| unlink() | Delete a file. | DeleteFile() | Delete an existing file. |
| rename() | Rename a file. | MoveFile() | Move or rename a file. |
| **Directory management** | | | |
| mkdir() | Create a new directory. | CreateDirectory() | Create a new directory. |
| rmdir() | Remove a directory. | RemoveDirectory() | Remove an existing directory. |
| chdir() | Change the current directory. | SetCurrentDirectory() | Change the current directory. |
| stat() | Get file status. | GetFileAttributesEx() | Get extended file attributes. |
| fstat() | Get the status of an open file. | GetFileInformationByHandle() | Get file information using a file handle. |
| link() | Create a hard link to a file. | CreateHardLink() | Create a hard link to an existing file. |
| symlink() | Create a symbolic link to a file. | CreateSymbolicLink() | Create a symbolic link. |
| **Device and memory management** | | | |
| brk() or sbrk() | Increase/decrease the program's data space. | VirtualAlloc() or VirtualFree() | Reserve, commit, or free a region of memory. |
| mmap() | Map files or devices into memory. | MapViewOfFile() | Map a file into the application's address space. |
| **Information maintenance** | | | |
| time() | Get the current time. | GetSystemTime() | Get the current system time. |
| alarm() | Set an alarm clock for the delivery of a signal. | SetWaitableTimer() | Set a timer object. |
| getuid() | Get the user ID. | GetUserName() or LookupAccountName() | Get the username or ID. |
| getgid() | Get the group ID. | GetTokenInformation() | Get the group information of a security token. |
| **Communication** | | | |
| socket() | Create a new socket. | socket() | Create a new socket. |
| bind() | Bind a socket to a network address. | bind() | Bind a socket to a network address. |
| listen() | Listen for connections on a socket. | listen() | Listen for connections on a socket. |
| accept() | Accept a new connection on a socket. | accept() | Accept a new connection on a socket. |
| connect() | Initiate a connection on a socket. | connect() | Initiate a connection on a socket. |
| send() or recv() | Send and receive data on a socket. | send() or recv() | Send and receive data on a socket. |
| **Security and access control** | | | |
| chmod() or umask() | Change the permissions/mode of a file. | SetFileAttributes() or SetSecurityInfo() | Change the file attributes or security information. |
| chown() | Change the owner and group of a file. | SetSecurityInfo() | Set the security information. |
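The file-management calls in the table can be exercised directly from Python, whose os module exposes thin wrappers over the corresponding Unix system calls. This sketch is illustrative only; the file name is arbitrary.

```python
import os
import tempfile

# Walk through open(), write(), lseek(), read(), close() and unlink()
# on a scratch file.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")

fd = os.open(path, os.O_CREAT | os.O_RDWR)  # open(): create and open the file
os.write(fd, b"hello, system calls")        # write(): write bytes to it
os.lseek(fd, 0, os.SEEK_SET)                # lseek(): move back to the start
data = os.read(fd, 64)                      # read(): read the bytes back
os.close(fd)                                # close(): release the descriptor
os.unlink(path)                             # unlink(): delete the file

print(data)  # b'hello, system calls'
```

On Linux, running this script under `strace` would show each of these Python calls issuing the matching kernel system call.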

What Are the Rules for Passing Parameters to the System Call?
When a user-space program invokes a system call, it typically needs to pass additional
parameters to specify the request. System performance depends on how efficiently
these parameters are passed between user and kernel space.
The method for passing parameters depends on the system architecture, but some
general rules apply:
• Limited Number of Parameters. System calls are often designed to accept a
limited number of parameters. This rule is intended to streamline the interface
and compel users to utilize data structures or memory blocks.
• Leveraging CPU Registers. CPU registers are the fastest accessible memory
locations. The number of CPU registers is limited, which restricts the number of
call parameters that can be passed. Use CPU registers when passing a small
number of system call parameters.

• Using Pointers for Data Aggregation. Instead of passing many parameters or
large data sizes, use pointer variables to point to memory blocks or structures
(containing all the parameters). The kernel uses the pointer to access this
memory block and retrieve the parameters.
• Data Integrity and Security Checks. The kernel must validate any pointers
passed from user space. It checks that these pointers only target areas the user
program can access. It also double-checks all data coming from user programs
before using it.
• Stack-based Parameter Handling. Some systems push parameters onto a stack
and allow the kernel to remove them for processing. This method is less common
than using CPU registers and pointers as it is more challenging to implement and
manage.
• Data Isolation Through Copying. The kernel often copies data from user space
to kernel space (and vice versa) to protect the system from erroneous or harmful
data. Data passed between user space and kernel space should not be shared
directly.
• Return Values and Error Handling. The system call returns a value, typically a
simple success/error code. In case of an error, always seek out additional
information about the error. Error responses are often stored in specific locations,
like the errno variable in Linux.
Note: The rules and methods above vary based on the architecture (x86, ARM, MIPS, etc.)
and the specifics of the operating system. Always refer to the OS documentation or
source code for precise information.
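The "return values and error handling" rule above can be seen from user space: when a system call fails, the kernel reports an error code, which C programs read from errno and which Python surfaces as an OSError. A small sketch, assuming the path below does not exist on the machine running it:

```python
import errno
import os

# Trigger a failing open() and inspect the error code the kernel returned.
try:
    os.open("/no/such/directory/file.txt", os.O_RDONLY)  # open() will fail
    err = 0
except OSError as e:
    err = e.errno                # the kernel's error code

print(errno.errorcode[err])      # 'ENOENT' (no such file or directory)
```

The symbolic names in the errno module map one-to-one onto the error codes defined by the operating system, which is why portable programs test against names like ENOENT rather than raw numbers.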
Conclusion

This section explained how system calls work and why they are essential for seamless operation, security, and the management of hardware and software resources.

A Shell program is software that provides users with an interface for accessing services
in the kernel. The kernel manages the operating system's (OS) core services. It's a highly
protected and controlled space that limits access to the system's resources. A shell
provides an intermediary connection point between the user and the kernel. It runs in the
system's user space and executes the commands issued by the user, whether through a
keyboard, mouse, trackpad or other device.

On some platforms, the shell is called a command interpreter because it interprets the
commands the user issues. The shell then translates those commands into system calls
in the kernel. Each system call sends a request to the kernel to perform a specific task.
For example, the shell might request the kernel to delete a file, create a directory, change
an OS configuration, connect to a network, run a shell script or carry out a variety of other
operations.

How does a shell program work?


A shell program is either a command-line interface (CLI) or a graphical user interface
(GUI). Some sources consider only CLI programs to be actual shells, or they conflate
command terminals with the shells themselves. However, a shell can be either a CLI or
GUI, and it isn't a terminal. A terminal merely provides a command prompt for working
with a shell. For example, the default terminal in macOS is named Terminal, and the
default terminal in Windows is called Command Prompt. Neither is considered a shell.
Most OSes provide a default CLI shell, although they typically support other shells as
well. When working in a terminal, a user can issue commands directly to a shell by
entering the commands at the command prompt. For example, Figure 1 shows a
Terminal window in macOS. In this case, Z shell (zsh) is the active shell, which is now the
default shell in macOS. Before that, bash (Bourne Again Shell) was the default shell.
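What a shell's core loop does, namely reading a command and translating it into system calls, can be sketched in a few lines. This toy interpreter is illustrative only and assumes a Unix-like system (it uses fork()); real shells such as bash or zsh add pipelines, job control and scripting on top of exactly this mechanism.

```python
import os
import shlex

def run_command(line):
    """Parse one command line and run it via fork(), exec() and wait()."""
    args = shlex.split(line)         # tokenize the line like a shell would
    pid = os.fork()                  # fork(): create a child process
    if pid == 0:                     # child: replace itself with the program
        try:
            os.execvp(args[0], args) # exec(): run the requested program
        except OSError:
            os._exit(127)            # conventional "command not found" status
    _, status = os.waitpid(pid, 0)   # wait(): parent waits for the child
    return os.waitstatus_to_exitcode(status)

# Example: run_command("ls -l") would list the current directory and
# return the program's exit status.
```

Each iteration of a real shell's read-eval loop ends in this same fork/exec/wait sequence, which is why the shell is described as translating user commands into system calls.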

Kernel

A kernel is the core part of an operating system. It acts as a bridge between software
applications and the hardware of a computer. The kernel manages system resources,
such as the CPU, memory, and devices, ensuring everything works together smoothly
and efficiently. It handles tasks like running programs, accessing files, and connecting to
devices like printers and keyboards.

Difference between Shell and Kernel

In computing, the operating system (OS) serves as the fundamental layer that bridges the
gap between computer hardware and the user. Two critical components of an operating
system are the kernel and the shell. Understanding the relationship between these two
components is fundamental to grasping how operating systems function and how users
interact with their computers.

What is a Shell?
The shell is a command-line interface that allows the user to enter commands to interact
with the operating system. It acts as an intermediary between the user and the kernel,
interpreting commands entered by the user and translating them into instructions that
the kernel can execute. The shell also provides various features like command history,
tab completion, and scripting capabilities to make it easier for the user to work with the
system.
Advantages
• Efficient Command Execution
• Scripting capability
Disadvantages
• Limited Visualization
• Steep Learning Curve

What is a Kernel?
The kernel is the core component of the operating system that manages system resources and provides services to other programs running on the system. It acts as a bridge between the user and the resources of the system by accessing various computer resources like the CPU, I/O devices and other resources. It is responsible for tasks such as memory management, process scheduling, and device drivers. The kernel operates at a lower level than the shell and interacts directly with the hardware of the computer.
Advantages
• Efficient Resource Management
• Process Management
• Hardware Abstraction
Disadvantages
• Limited Flexibility
• Dependency on Hardware

Difference Between Shell and Kernel

| Shell | Kernel |
| --- | --- |
| Allows the user to communicate with the kernel. | Controls all the tasks of the system. |
| It is the interface between the kernel and the user. | It is the core of the operating system. |
| It is a command-line interpreter (CLI). | It is a low-level program interfacing with the hardware (CPU, RAM, disks) on top of which applications run. |
| Its types are Bourne shell, C shell, Korn shell, etc. | Its types are monolithic kernel, microkernel, hybrid kernel, etc. |
| It carries out commands on a group of files by specifying a pattern to match. | It performs memory management. |
| Shell commands like ls, mkdir and many more can be used to request a specific operation from the OS. | It performs process management. |
| It is the outer layer of the OS. | It is the inner layer of the OS. |
| It interacts with the user and interprets commands into machine-understandable language. | It interacts directly with the hardware, accepting machine-understandable instructions from the shell. |
| Interprets and translates user commands. | Provides services to other programs running on the system. |
| Acts as an intermediary between the user and the kernel. | Operates at a lower level than the shell and interacts with hardware. |
| Provides features like command history, tab completion, and scripting capabilities. | Responsible for tasks such as memory management, process scheduling, and device drivers. |
| Executes commands and programs. | Enables users and applications to interact with hardware resources. |

Conclusion
The kernel operates at the core of the system, managing hardware resources and
ensuring the smooth execution of processes, while the shell acts as an interface
between the user and the system, allowing commands to be issued and executed.

1.5 Architecture of OS
An operating system allows the user application programs to interact with the system
hardware. Since the operating system is such a complex structure, its architecture plays
an important role in its usage. Each component of the Operating System Architecture
should be well defined with clear inputs, outputs and functions.

Important Terms

In operating system architecture, two major terms define the main components of an operating system.

• Kernel − The kernel is the central component of an operating system architecture in most implementations. It is responsible for all major operations and interaction with the hardware: it manages memory, the processor and input/output devices, and provides an interface for application programs to interact with hardware components.

• Shell − The shell is the interface of an operating system. It can be a command-line interface or a graphical user interface. The user interacts with the operating system through the shell, and application programs can also use the shell interface to interact with the underlying operating system.

• System Software − System software comprises the programs which interact with the kernel and provide interfaces for security management, memory management and other low-level activities.

• Application Programs − Application programs are those through which a user interacts with the operating system, for example a word processor to create a document and save it on the file system, or a notepad application to create notes.

Popular Architectures

Following are various popular implementations of Operating System architectures.

a. Simple Architecture

b. Monolith Architecture

c. Micro-Kernel Architecture

d. Exo-Kernel Architecture

e. Layered Architecture

f. Modular Architecture

g. Virtual Machine Architecture

a. Simple Architecture

There are many operating systems that have a rather simple structure. These started as small systems and rapidly expanded far beyond their original scope. A common example of this is MS-DOS (Microsoft Disk Operating System), which was designed for a small niche of users; there was no indication that it would become so popular.

A simple yet powerful architecture, as in MS-DOS, gives greater control over the computer system and its various applications. The simple architecture allows programmers to hide information as required and implement internal routines as they see fit without changing the outer specifications.

Advantages

Following are advantages of a simple operating system architecture.

• Easy Development - A simple operating system has very few interfaces, so development is easy, especially when only limited functionality is to be delivered.

• Better Performance - Such a system, having few layers and interacting directly with the hardware, can provide better performance than other types of operating systems.

Disadvantages

Following are disadvantages of a simple operating system architecture.

• Frequent System Failures - Being poorly structured, such a system is not robust. If one program fails, the entire operating system crashes, so system failures are quite frequent in simple operating systems.

• Poor Maintainability - As all layers of the operating system are tightly coupled, a change in one layer can heavily impact other layers, making the code unmanageable over time.

b. Monolith Architecture

In a monolithic architecture, a central piece of code called the kernel is responsible for all major operations of the operating system, including file management, memory management, device management and so on. The kernel is the main component of the operating system and provides all operating system services to application programs and system programs.

The kernel has access to all the resources and acts as an interface between application programs and the underlying hardware. A monolithic kernel architecture promotes the timesharing and multiprogramming models and was used in old banking systems.

Advantages

Following are advantages of a monolith operating system architecture.

• Easy Development - As the kernel is the only layer to develop, holding all major functionalities, it is easier to design and develop.

• Performance - As the kernel is responsible for memory management and other operations and has direct access to the hardware, it performs better.

Disadvantages

Following are disadvantages of a monolith operating system architecture.

• Crash Prone - As the kernel is responsible for all functions, if one function fails the entire operating system fails.

• Difficult to Enhance - It is very difficult to add a new service without impacting other services of a monolithic operating system.
c. Micro-Kernel Architecture

Where the monolithic architecture has a single kernel, a micro-kernel architecture has multiple small kernels, each specialized in a particular service. Each microkernel is developed independently of the others, which makes the system more stable: if one kernel fails, the operating system keeps working using the functionality of the other kernels.

Advantages

Following are advantages of a microkernel operating system architecture.

• Reliable and Stable - As multiple kernels work simultaneously, the chance of the operating system failing is very low. If one functionality goes down, the operating system can still provide the other functionalities using the stable kernels.

• Maintainability - The kernels being small, the code size stays maintainable. One can enhance a microkernel code base without impacting the other microkernels.

Disadvantages

Following are disadvantages of a microkernel operating system architecture.

• Complex to Design - A microkernel-based architecture is difficult to design.

• Performance Degradation - Multi-kernel, multi-module communication may hamper performance compared to a monolithic architecture.

e. Layered Architecture

One way to achieve modularity in the operating system is the layered approach. In this approach, the bottom layer is the hardware and the topmost layer is the user interface. Each upper layer is built on the layer below it, and every layer hides some structures, operations, etc. from the layers above it.

One problem with the layered architecture is that each layer needs to be carefully defined. This is necessary because the upper layers can only use the functionalities of the layers below them.

Advantages

Following are advantages of a layered operating system architecture.

• Highly Customizable - Being layered, each layer's implementation can be customized easily. New functionality can be added without impacting the other layers.

• Verifiable - Being modular, each layer can be verified and debugged easily.

Disadvantages

Following are disadvantages of a layered operating system architecture.

• Less Performant - A layered operating system performs worse than a simple structured operating system.

• Complex Designing - Each layer must be planned carefully, as each layer communicates only with the layer below it, and a good design process is required to create a layered operating system.

f. Modular Architecture

A modular architecture operating system works on a principle similar to the monolith, but with a better design. A central kernel is responsible for all major operations of the operating system. This kernel has a set of core functionality, and other services are loaded as modules dynamically, either at boot time or at runtime. Sun Solaris is one example of a modular structured operating system.

Advantages

Following are advantages of a modular operating system architecture.

• Highly Customizable - Being modular, each module's implementation can be customized easily. New functionality can be added without impacting the other modules.

• Verifiable - Being modular, each module can be verified and debugged easily.

Disadvantages

Following are disadvantages of a modular operating system architecture.

• Less Performant - A modular architecture operating system performs worse than a simple structured operating system.

• Complex Designing - Each module must be planned carefully, as each module communicates with the kernel, and a communication API must be devised to facilitate this communication.

1.6 Describe mode of operations of OS


Computers separate the OS into two modes for resource allocation and security
purposes.
The distinction protects a computer system's basic functionality and ensures stability.
While the computer is operating, it separates more abstract functions from those that
involve the computer's essential components to improve fault tolerance.
An error in one program can adversely affect many processes: it might modify the data of another program or even affect the operating system itself. For example, if a process gets stuck in an infinite loop, the loop could affect the correct operation of other processes. So, to ensure the proper execution of the operating system, there are two modes of operation:

a) User mode and


b) Kernel mode

The computer's CPU switches between user and kernel mode depending on the code
that's running. Certain applications are restricted to user mode, while others operate in
kernel mode. Generally, user applications operate in user mode, whereas basic OS
components function in kernel mode.
The 2024 CrowdStrike outage, which rendered millions of Windows machines
inoperable, was precipitated by security software that malfunctioned while running in
kernel mode.

What is user mode?


User mode is an OS state with restricted access to the computer system's hardware and
resources. User mode has a lower level of privileges than kernel mode and cannot
execute specific commands that have the potential to interfere with the stability of the
system. Applications in user mode can only interact with privileged hardware and
perform privileged operations through a system call, which is transmitted using the OS'
API.

This gives programs in user mode a private section of memory that other applications
cannot access and keeps applications in user mode from altering each other's data. In
this mode, if one application crashes, it doesn't take the entire system down with it
because it runs in isolation from other applications.

User applications, such as word processors, web browsers and video players, run in user
mode. When a user launches one of these applications, the OS creates a process that
gives the application its own private virtual address space in memory.

When the computer system is run by user applications like creating a text document or
using any application program, then the system is in user mode. When the user
application requests for a service from the operating system or an interrupt occurs or
system call, then there will be a transition from user to kernel mode to fulfill the
requests.

Note: To switch from kernel mode to user mode, the mode bit should be 1.

(Figure: the transition between user mode and kernel mode when an interrupt or system call occurs.)
What is kernel mode?
Kernel mode is an OS state with unrestricted access to system resources and hardware.
It is a privileged mode where the OS' core functions are conducted. Kernel mode
enforces isolation processes by handling system calls from user mode. It also has direct
access to peripheral devices.
When the system boots, the hardware starts in kernel mode, and once the operating system is loaded it starts user applications in user mode. To protect the hardware, there are privileged instructions which execute only in kernel mode. If a user attempts to run a privileged instruction in user mode, the instruction is treated as illegal and traps to the OS. Some of the privileged instructions are:
1. Handling interrupts
2. Switching from user mode to kernel mode
3. Input-output management

The kernel serves as the bridge between the OS and hardware.

In kernel mode, there is no separation of virtual address space -- all code in this mode
shares the same virtual address space in memory. This means the CPU can switch

between running programs and reading and writing both kernel memory and user
memory.
Programs that run in kernel mode include the OS itself, process-related code and
some security software. Program data running in this mode is not protected from other
applications. If an application crashes in kernel mode, it can negatively affect the other
applications running in kernel mode. For example, if a driver crashes in kernel mode, it
could potentially corrupt the entire OS.

User mode vs. kernel mode


User and kernel mode are two OS states that work together to ensure the security and
stability of computer systems.

| Characteristic | User mode | Kernel mode |
| --- | --- | --- |
| Definition | Restricted OS mode for running application code | Privileged mode for core OS functions |
| Resource access | Limited access to system resources and hardware | Full access to system resources and hardware |
| Memory access | Cannot access kernel memory directly; code is isolated | Unrestricted access to user and kernel memory; code is not isolated |
| Privilege level | Lower privilege level | Higher privilege level |
| Purpose | Runs nonsystem software, like applications | Manages system resources and enforces restrictions |
| Security and stability | Less critical for operations; fewer consequences for errors | Critical for system operations; larger consequences for errors |

How user mode and kernel mode work together


The CPU contains a register that notes the mode that the CPU is in -- either user mode or
kernel mode. The CPU boots up in kernel mode and then loads and runs the OS.
Eventually -- on a trigger from the user, for example -- the OS loads the instructions for a
program to run and sets up memory for the program to run. Before executing the
instructions, the CPU changes the register to denote that the CPU is in user mode. Then,
the CPU executes the program in user mode, where it has a safe level of restrictions.

How to switch from user mode to kernel mode


User mode applications are generally restricted from critical system resources but need
to access those resources in some contexts. For example, when a program needs to
access a hardware device or update system settings, that program performs a system
call that indicates the specific service it requires from the kernel. System call
instructions have memory protections that make them unmodifiable or readable by user
mode programs. After the system call, the CPU is reset back to user mode.

Which programs run in user mode and kernel mode?
Any program that performs memory management, process management or I/O
management typically runs in kernel mode. Any software in this mode has full access to
the system and thus needs to be trusted. Once running, the code in the kernel or new
code that is inserted in the kernel needs to be trusted so that it doesn't corrupt the core
functions of the computer.

Computer systems have rings of privilege. Kernel mode operates in Ring 0, the most
privileged zone. User mode operates in Ring 3, the least privileged zone. Rings 1 and 2
are sometimes referred to as the supervisor level, depending on system architecture.

System calls establish this trust. Although applications such as word processors are
executed in user mode, they use system calls regularly to enter kernel mode and perform
processes involving peripherals and memory. For example, if the word processor needs
to save a file, it needs to do so through a system call because it needs to write bytes to
the disk. The same goes for typing or moving the cursor -- the program needs to interact
with the hardware in some way and needs some kernel-level access to do so.
Another example of a system call occurs when a program is listening for incoming
network connections. The system call tells the kernel's networking stack to arrange data
structures to prepare it to receive future incoming network packets.
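The listening scenario above can be written out using the communication system calls from the earlier table: socket(), bind(), listen(), accept(), connect(), send() and recv(). This is a self-contained sketch on the loopback interface; binding to port 0 asks the kernel to pick any free port.

```python
import socket

# Server side: create, bind and listen on a socket.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # socket()
srv.bind(("127.0.0.1", 0))          # bind(): attach to an address
srv.listen(1)                       # listen(): kernel queues connections
port = srv.getsockname()[1]

# Client side: connect to the listener and exchange a few bytes.
cli = socket.create_connection(("127.0.0.1", port))      # connect()
conn, _ = srv.accept()              # accept(): take the queued connection
cli.sendall(b"ping")                # send()
data = conn.recv(4)                 # recv()

for s in (conn, cli, srv):
    s.close()                       # close(): release the sockets
print(data)  # b'ping'
```

Each of these Python socket methods enters kernel mode through a system call: the kernel owns the network stack, so user-mode programs can only reach it this way.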
The above are examples of programs that execute in user mode but use system calls to
access kernel mode. Another example of software that requires access to the kernel is
third-party security software. One notable example of this is CrowdStrike's Falcon
sensor, which regularly publishes content updates to the kernel to help the software

detect new threats. The sensor validates the content, which allows it, theoretically, to do
its job safely in the kernel.
However, because of a bug in the content validator, a content update passed through
with problematic data. This caused the CrowdStrike software to crash. Because the
software -- at least part of it -- resided in the kernel, the Windows machines that received
the update completely crashed as well. This example speaks to the importance of only
running trusted processes in the kernel.

Need for Dual Mode Operations:


Certain types of tasks do not require any hardware support, which is why certain processes are kept hidden from the user. These tasks can be dealt with separately using the dual mode of the operating system.
Kernel-level programs perform all the bottom-level functions of the operating system, like memory management and process management; for this purpose the operating system needs to function in dual mode. Dual mode is necessary to restrict each user's access to only the tasks meant for their use in an operating system.
Basically, whenever the operating system runs user applications, it is held in user mode. When the user requests some hardware service, a transition from user mode to kernel mode occurs, which is done by changing the mode bit from 1 to 0. The mode bit is changed back to 1 to return to user mode.

Advantages:
1. Protection: Dual-mode operation provides a layer of protection between user
programs and the operating system. In user mode, programs are restricted from
accessing privileged resources, such as hardware devices or sensitive system
data. In kernel mode, the operating system has full access to these resources,
allowing it to protect the system from malicious or unauthorized access.
2. Stability: Dual-mode operation helps to ensure system stability by preventing
user programs from interfering with system-level operations. By restricting
access to privileged resources in user mode, the operating system can prevent
programs from accidentally or maliciously causing system crashes or other
errors.
3. Flexibility: Dual-mode operation allows the operating system to support a wide
range of applications and hardware devices. By providing a well-defined interface
between user programs and the operating system, it is easier to develop and
deploy new applications and hardware.
4. Debugging: Dual-mode operation makes it easier to debug and diagnose
problems with the operating system and applications. By switching between user
mode and kernel mode, developers can identify and fix issues more quickly and
easily.
5. Security: Dual-mode operation enhances system security by preventing
unauthorized access to critical system resources. User programs running in user

mode cannot modify system data or perform privileged operations, reducing the
risk of malware attacks or other security threats.
6. Efficiency: Dual-mode operation can improve system performance by reducing
overhead associated with system-level operations. By allowing user programs to
access resources directly in user mode, the operating system can avoid
unnecessary context switches and other performance penalties.
7. Compatibility: Dual-mode operation ensures backward compatibility with
legacy applications and hardware devices. By providing a standard interface for
user programs to interact with the operating system, it is easier to maintain
compatibility with older software and hardware.
8. Isolation: Dual-mode operation provides isolation between user programs,
preventing one program from interfering with another. By running each program in
its own protected memory space, the operating system can prevent programs
from accessing each other’s data or causing conflicts.
9. Reliability: Dual-mode operation enhances system reliability by preventing
crashes and other errors caused by user programs. By restricting access to
critical system resources, the operating system can ensure that system-level
operations are performed correctly and reliably.

Disadvantages:
1. Performance: Dual-mode operation can introduce overhead and reduce system
performance. Switching between user mode and kernel mode requires a context
switch, which can be time-consuming and can impact system performance.
2. Complexity: Dual-mode operation can increase system complexity and make it
more difficult to develop and maintain operating systems. The need to support
both user mode and kernel mode can make it more challenging to design and
implement system features and to ensure system stability.
3. Security: Dual-mode operation can introduce security vulnerabilities. Malicious
programs may be able to exploit vulnerabilities in the operating system to gain
access to privileged resources or to execute malicious code.
4. Reliability: Dual-mode operation can introduce reliability issues as it is difficult
to test and verify the correct operation of both user mode and kernel mode. Bugs
or errors in either mode can lead to system crashes, data corruption, or other
reliability issues.
5. Compatibility: Dual-mode operation can create compatibility issues as different
operating systems may implement different interfaces or policies for user mode
and kernel mode. This can make it difficult to develop applications that are
compatible with multiple operating systems or to migrate applications between
different systems.
6. Development complexity: Dual-mode operation requires a higher level of
technical expertise and development skills to design and implement the

operating system. This can increase the development time and cost for creating
new operating systems or updating existing ones.
7. Maintenance complexity: Dual-mode operation can make maintenance and
support more complex due to the need to ensure compatibility and security
across both user mode and kernel mode. This can increase the cost and time
required for system updates, patches, and upgrades.
Note: The mode bit is 0 in kernel mode and 1 in user mode, so switching from user mode to kernel mode sets the mode bit to 0.
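As an illustration, the mode-bit convention can be sketched with a toy simulation. This is an illustrative model only: in a real system the mode bit lives in hardware and the switch happens when a trap or system call occurs.

```python
# Toy illustration of dual-mode operation (not real hardware behavior).
# Convention from the note above: mode bit 0 = kernel mode, 1 = user mode.

KERNEL_MODE = 0
USER_MODE = 1

class CPU:
    def __init__(self):
        self.mode = USER_MODE  # user programs start in user mode

    def syscall(self, privileged_op):
        """A system call traps to the kernel: set the mode bit to 0,
        run the privileged operation, then return to user mode."""
        self.mode = KERNEL_MODE
        try:
            return privileged_op()
        finally:
            self.mode = USER_MODE  # restore user mode on return

cpu = CPU()
result = cpu.syscall(lambda: "io-complete")  # runs with mode bit 0
```

The `try/finally` mirrors the guarantee that control returns to user mode after the privileged operation, even if it raises an error.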

1.7 Resource Management in Operating System


Resource management in an operating system is the process of efficiently managing all resources, such as the CPU, memory, input/output devices, and other hardware, among the various programs and processes running on the computer. Resource management is important because a computer's resources are limited, while multiple processes or users may require access to the same resources (CPU, memory, etc.) at the same time. The operating system has to ensure that all processes get the resources they need to execute, without problems such as deadlock.
Here are some terminologies related to resource management in an OS:
• Resource Allocation: The process of assigning the available resources to processes in the operating system. This can be done dynamically or statically.
• Resource: Anything that can be assigned to a process, dynamically or statically, in the operating system. Examples include CPU time, memory, disk space, and network bandwidth.
• Resource Management: How resources are managed efficiently among different processes.
• Process: Any program or application being executed in the operating system, with its own memory space, execution state, and set of system resources.
• Scheduling: The process of determining which of multiple competing processes should be allocated a particular resource at a given time.
• Deadlock: A situation in which two or more processes each hold a resource while waiting for a resource held by another, so no resource is freed and no process can complete its execution.
• Semaphore: A tool used to prevent race conditions. A semaphore is an integer variable that is used in a mutually exclusive manner by concurrent cooperating processes in order to achieve synchronization.
• Mutual Exclusion: The technique of preventing multiple processes from accessing the same resource simultaneously.

• Memory Management: The method used by operating systems to manage operations between main memory and disk during process execution.

Features or characteristics of resource management in an operating system:


• Resource scheduling: The OS allocates available resources to processes. It decides the sequence in which processes get access to the CPU, memory, and other resources at any given time.
• Resource monitoring: The operating system monitors which resources are used by which process, and takes action if any process holds too many resources at once, for example when this could lead to deadlock.
• Resource protection: The OS protects the system from unauthorized access by users or other processes.
• Resource sharing: The operating system permits many processes to share resources such as memory and I/O devices. It guarantees that common resources are utilized in a fair and productive way.
• Deadlock prevention: The OS prevents deadlock and ensures that no process holds resources indefinitely. For this, it uses techniques such as resource preemption.
• Resource accounting: The operating system tracks the use of resources by different processes for allocation and statistical purposes.
• Performance optimization: The OS optimizes resource distribution in order to increase system performance. Techniques such as load balancing and memory management help ensure efficient resource distribution.

Figure: diagrammatic representation of resource management.

1.8 Design philosophy of OS

Design and Implementation in Operating System
The design of an operating system is a broad and complex topic that touches on many
aspects of computer science.

Design Goals:
Design goals are the objectives of the operating system. They must be met to fulfill design
requirements and they can be used to evaluate the design. These goals may not always
be technical, but they often have a direct impact on how users perceive their experience
with an operating system. While designers need to identify all design goals and prioritize them, they also need to ensure that these goals are compatible with each other, as well as with user expectations and expert advice.
Designers also need to identify all possible ways in which their designs could conflict
with other parts of their systems—and then prioritize those potential conflicts based on
cost-benefit analysis (CBA). This process allows for better decision-making about what
features make sense for inclusion into final products versus those which would require
extensive rework later down the road. It’s also important to note that CBA is not just
about financial costs; it can also include other factors like user experience, time to
market, and the impact on other systems.
The process of identifying design goals, conflicts, and priorities is often referred to as
“goal-driven design.” The goal of this approach is to ensure that each design decision is
made with the best interest of users and other stakeholders in mind.

An operating system is a construct that allows user application programs to interact with the system hardware. The operating system by itself does not perform useful work, but it provides an environment in which different applications and programs can do useful work.
There are many problems that can occur while designing and implementing an operating
system. These are covered in operating system design and implementation.

The design and implementation of an operating system is a complex process that
involves many different disciplines. The goal is to provide users with a reliable, efficient,
and convenient computing environment, so as to make their work more efficient.

Operating System Design Goals


It is quite complicated to define all the goals and specifications of an operating system while designing it. The design changes depending on the type of operating system, i.e. whether it is a batch system, time-shared system, single-user system, multi-user system, distributed system, etc.
There are basically two types of goals while designing an operating system. These are −

User Goals
The operating system should be convenient, easy to use, reliable, safe and fast according
to the users. However, these specifications are not very useful as there is no set method
to achieve these goals.

System Goals
The operating system should be easy to design, implement and maintain. These are
specifications required by those who create, maintain and operate the operating system.
But there is no specific method to achieve these goals either.

Mechanisms and Policies:

An operating system is a set of software components that manage a computer’s
resources and provide overall system management.
Mechanisms and policies are the two main components of an operating system.
Mechanisms handle low-level functions such as scheduling, memory management,
and interrupt handling; policies handle higher-level functions such as resource
management, security, and reliability. A well-designed OS should provide both
mechanisms and policies for each component in order for it to be successful at its task:
Mechanisms should ensure that applications have access to appropriate hardware resources. They should also make sure that applications don't interfere with each other's use of these resources (for example, through mutual exclusion).
Policies determine how processes interact with one another when they run simultaneously on multiple CPUs within a single machine: what processor affinity should apply during multitasking operations? Should all processes be allowed access simultaneously, or only those belonging to a specific group?
These are just some of the many questions that policies must answer. The OS is
responsible for enforcing these mechanisms and policies, as well as handling
exceptions when they occur. The operating system also provides a number of services to
applications, such as file access and networking capabilities.
The operating system is also responsible for making sure that all of these tasks are done
efficiently and in a timely manner. The OS provides applications with access to the
underlying hardware resources and ensures that they’re properly utilized by the
application. It also handles any exceptions that occur during execution so that they don’t
cause the entire system to crash.
There is no specific way to design an operating system as it is a highly creative task.
However, there are general software principles that are applicable to all operating
systems.
A subtle difference between mechanism and policy is that mechanism shows how to do
something and policy shows what to do. Policies may change over time and this would
lead to changes in mechanism. So, it is better to have a general mechanism that would
require few changes even when a policy change occurs.
For example, if the mechanism and policy are independent, then few changes to the mechanism are required when the policy changes. If a policy favours I/O-intensive processes over CPU-intensive processes, a policy change toward preferring CPU-intensive processes will not change the mechanism.
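The mechanism/policy separation can be illustrated with a toy dispatcher: the dispatch loop (the mechanism, "how") stays fixed, while the choice of which process runs next (the policy, "what") is a pluggable function. The names here are illustrative:

```python
# Mechanism vs. policy sketch: the dispatch loop (mechanism) is fixed;
# the choice of which process runs next (policy) is a swappable function.

def dispatch(ready_queue, policy):
    """Mechanism: repeatedly pick a process with `policy` and 'run' it."""
    order = []
    queue = list(ready_queue)    # work on a copy of the ready queue
    while queue:
        nxt = policy(queue)      # the policy decides *what* to do
        queue.remove(nxt)        # the mechanism knows *how* to do it
        order.append(nxt["name"])
    return order

fifo_policy = lambda q: q[0]                                  # first come, first served
priority_policy = lambda q: min(q, key=lambda p: p["prio"])   # lowest number = highest priority

procs = [{"name": "A", "prio": 2}, {"name": "B", "prio": 1}, {"name": "C", "prio": 3}]
dispatch(procs, fifo_policy)       # -> ['A', 'B', 'C']
dispatch(procs, priority_policy)   # -> ['B', 'A', 'C']
```

Swapping `fifo_policy` for `priority_policy` changes the outcome without touching `dispatch` at all, which is exactly the point made above: a policy change need not change the mechanism.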

Implementation:
Implementation is the process of writing the operating system's source code, usually in a high-level programming language, and translating it into executable object code. The purpose of an operating system is to provide services to users while they run applications on their computers.

The operating system needs to be implemented after it is designed. Earlier they were
written in assembly language but now higher-level languages are used. The first system
not written in assembly language was the Master Control Program (MCP) for Burroughs
Computers.

The main function of an operating system is to control the execution of programs. It also
provides services such as memory management, interrupt handling, and file system
access facilities so that programs can be better utilized by users or other devices
attached to the system.
An operating system is a program or software that controls the computer’s hardware and
resources. It acts as an intermediary between applications, users, and the computer’s
hardware. It manages the activities of all programs running on a computer without any
user intervention.
The operating system performs many functions such as managing the computer’s
memory, enforcing security policies, and controlling peripheral devices. It also provides
a user interface that allows users to interact with their computers.
The operating system is typically stored in ROM or flash memory so it can be run when
the computer is turned on. The first operating systems were designed to control
mainframe computers. They were very large and complex, consisting of millions of lines
of code and requiring several people to develop them.
Today, operating systems are much smaller and easier to use. They have been designed
to be modular so they can be customized by users or developers.
There are many different types of operating systems:
1. Graphical user interfaces (GUIs) like Microsoft Windows and Mac OS.
2. Command line interfaces like Linux or UNIX
3. Real-time operating systems that control industrial and scientific equipment
4. Embedded operating systems are designed to run on a single computer system
without needing an external display or keyboard.

Advantages of Higher Level Language


There are multiple advantages to implementing an operating system in a higher-level language: the code can be written faster, it is more compact, and it is easier to debug and understand. Also, the operating system can be moved (ported) from one hardware platform to another far more easily if it is written in a high-level language.

Disadvantages of Higher Level Language


Using a high-level language to implement an operating system leads to some loss in speed and an increase in storage requirements. However, in modern systems only a small amount of code needs the highest performance, such as the CPU scheduler and the memory manager, and the bottleneck routines in the system can be replaced by assembly-language equivalents if required.

1.9 Types of Operating Systems
Operating Systems can be categorized according to different criteria like whether an
operating system is for mobile devices (examples Android and iOS) or desktop (examples
Windows and Linux). We are going to classify based on functionalities an operating
system provides.

1. Batch Operating System


This type of operating system does not interact with the computer directly. An operator takes jobs having the same requirements and groups them into batches; it is the operator's responsibility to sort jobs with similar needs. A batch operating system is designed to manage and execute a large number of jobs efficiently by processing them in groups.

Advantages of Batch Operating System


• Multiple users can share a batch system.
• The idle time of a batch system is very small.
• It is easy to manage large, repetitive workloads in batch systems.
Disadvantages of Batch Operating System
• The CPU is not used efficiently: when the current process is doing I/O, the CPU is idle and could be utilized by other waiting processes.
• Other jobs may have to wait for an unknown time if any job fails.
• In a batch operating system, average response time increases because processes are executed one by one.
Examples of Batch Operating Systems: Payroll Systems, Bank Statements, etc.
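The operator's task of grouping similar jobs can be sketched as follows; the job names and the "needs" field are made up for illustration:

```python
# Sketch of batch formation: group submitted jobs by their resource
# requirement so jobs with the same needs run together as one batch.

from collections import defaultdict

def form_batches(jobs):
    batches = defaultdict(list)
    for job in jobs:
        batches[job["needs"]].append(job["name"])  # same needs -> same batch
    return dict(batches)

jobs = [
    {"name": "payroll", "needs": "printer"},
    {"name": "billing", "needs": "printer"},
    {"name": "backup",  "needs": "tape"},
]
form_batches(jobs)   # -> {'printer': ['payroll', 'billing'], 'tape': ['backup']}
```

Each batch can then be submitted to the machine as one unit, which is what lets a batch system process large numbers of jobs efficiently.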
2. Multi-Programming Operating System

In a multiprogramming operating system, more than one program is present in main memory, and any one of them can be in execution at a given time. This is done mainly for better utilization of resources.

Advantages of Multi-Programming Operating System


• CPU is better utilized and overall performance of the system improves.
• It helps in reducing the response time.
Multi-Tasking/Time-sharing Operating systems
It is a type of Multiprogramming system with every process running in round robin
manner. Each task is given some time to execute so that all the tasks work smoothly.
Each user gets the time of the CPU as they use a single system. These systems are also
known as Multitasking Systems. The task can be from a single user or different users also.
The time that each task gets to execute is called quantum. After this time interval is over
OS switches over to the next task.
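The quantum-based switching described above can be sketched with a small round-robin simulation (task names and time units are arbitrary):

```python
# Round-robin sketch: each task runs for at most one quantum, then the
# OS switches to the next task in the queue.

from collections import deque

def round_robin(tasks, quantum):
    """tasks: list of (name, remaining_time). Returns the execution order
    as (name, time_slice) pairs."""
    queue = deque(tasks)
    timeline = []
    while queue:
        name, remaining = queue.popleft()
        slice_ = min(quantum, remaining)
        timeline.append((name, slice_))        # the task runs for this slice
        remaining -= slice_
        if remaining > 0:
            queue.append((name, remaining))    # not finished: back of the queue
    return timeline

round_robin([("T1", 5), ("T2", 3)], quantum=2)
# -> [('T1', 2), ('T2', 2), ('T1', 2), ('T2', 1), ('T1', 1)]
```

Every waiting task gets a turn within one full pass of the queue, which is why response time in time-sharing systems stays low even with many active tasks.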

Advantages of Time-Sharing OS
• Each task gets an equal opportunity.
• Fewer chances of duplication of software.
• CPU idle time can be reduced.
• Resource Sharing: Time-sharing systems allow multiple users to share hardware
resources such as the CPU, memory, and peripherals, reducing the cost of
hardware and increasing efficiency.
• Improved Productivity: Time-sharing allows users to work concurrently, thereby
reducing the waiting time for their turn to use the computer. This increased
productivity translates to more work getting done in less time.
• Improved User Experience: Time-sharing provides an interactive environment that
allows users to communicate with the computer in real time, providing a better
user experience than batch processing.
Disadvantages of Time-Sharing OS
• Reliability problem.
• One must have to take care of the security and integrity of user programs and
data.
• Data communication problem.
• High Overhead: Time-sharing systems have a higher overhead than other
operating systems due to the need for scheduling, context switching, and other
overheads that come with supporting multiple users.
• Complexity: Time-sharing systems are complex and require advanced software to
manage multiple users simultaneously. This complexity increases the chance of
bugs and errors.
• Security Risks: With multiple users sharing resources, the risk of security breaches increases. Time-sharing systems require careful management of user access, authentication, and authorization to ensure the security of data and software.

Examples of Time-Sharing OS with explanation


• IBM VM/CMS : IBM VM/CMS is a time-sharing operating system that was first
introduced in 1972. It is still in use today, providing a virtual machine environment
that allows multiple users to run their own instances of operating systems and
applications.
• TSO (Time Sharing Option) : TSO is a time-sharing operating system that was first
introduced in the 1960s by IBM for the IBM System/360 mainframe computer. It
allowed multiple users to access the same computer simultaneously, running
their own applications.
• Windows Terminal Services : Windows Terminal Services is a time-sharing
operating system that allows multiple users to access a Windows server
remotely. Users can run their own applications and access shared resources,
such as printers and network storage, in real-time.

3. Multi-Processing Operating System


A multiprocessing operating system is a type of operating system in which more than one CPU is used for the execution of processes. It improves the throughput of the system.

Advantages of Multi-Processing Operating System


• It increases the throughput of the system, as processes can run in parallel.
• Since there are several processors, if one processor fails, execution can continue on another processor.
4. Multi User Operating Systems
These systems allow multiple users to be active at the same time. They can be either multiprocessor systems or single-processor systems with interleaving.

5. Distributed Operating System
These types of operating systems are a recent advancement in the world of computer technology and are being widely accepted all over the world, at a great pace. Various autonomous, interconnected computers communicate with each other over a shared communication network. The independent systems possess their own memory units and CPUs, and are referred to as loosely coupled or distributed systems. The processors in these systems may differ in size and function. The major benefit of working with this type of operating system is that a user can access files or software that are not actually present on his own system but on some other system connected to the network, i.e., remote access is enabled among the devices connected to that network.

Advantages of Distributed Operating System


• Failure of one will not affect the other network communication, as all systems are
independent of each other.
• Electronic mail increases the data exchange speed.
• Since resources are being shared, computation is highly fast and durable.
• Load on host computer reduces.
• These systems are easily scalable as many systems can be easily added to the
network.
• Delay in data processing reduces.
Disadvantages of Distributed Operating System
• Failure of the main network will stop the entire communication.
• The languages used to establish distributed systems are not yet well defined.
• These types of systems are not readily available, as they are very expensive. Moreover, the underlying software is highly complex and not yet well understood.
Examples of Distributed Operating Systems are LOCUS, etc.
Issues With Distributed Operating Systems
• Networking causes delays in the transfer of data between nodes of a distributed
system. Such delays may lead to an inconsistent view of data located in different
nodes, and make it difficult to know the chronological order in which events
occurred in the system.
• Control functions like scheduling, resource allocation, and deadlock detection
have to be performed in several nodes to achieve computation speedup and
provide reliable operation when computers or networking components fail.
• Messages exchanged by processes present in different nodes may travel over
public networks and pass through computer systems that are not controlled by
the distributed operating system. An intruder may exploit this feature to tamper
with messages, or create fake messages to fool the authentication procedure and
masquerade as a user of the system.
6. Network Operating System
These systems run on a server and provide the capability to manage data, users, groups,
security, applications, and other networking functions. These types of operating systems
allow shared access to files, printers, security, applications, and other networking
functions over a small private network. One more important aspect of Network Operating
Systems is that all the users are well aware of the underlying configuration, of all other
users within the network, their individual connections, etc. and that’s why these
computers are popularly known as tightly coupled systems .

Figure: clients (Client 1 to Client 4) connected to a central file server in a network operating system.

Advantages of Network Operating System
• Highly stable centralized servers.
• Security concerns are handled through servers.
• New technologies and hardware up-gradation are easily integrated into the
system.
• Server access is possible remotely from different locations and types of systems.
Disadvantages of Network Operating System
• Servers are costly.
• User has to depend on a central location for most operations.
• Maintenance and updates are required regularly.
Examples of Network Operating Systems are Microsoft Windows Server 2003,
Microsoft Windows Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, BSD, etc.

7. Real-Time Operating System


These types of OSs serve real-time systems. The time interval required to process and
respond to inputs is very small. This time interval is called response time. Real-time
systems are used when there are time requirements that are very strict like missile
systems, air traffic control systems, robots, etc.
Types of Real-Time Operating Systems
• Hard Real-Time Systems: Hard Real-Time OSs are meant for applications where
time constraints are very strict and even the shortest possible delay is not
acceptable. These systems are built for saving life like automatic parachutes or
airbags which are required to be readily available in case of an accident. Virtual
memory is rarely found in these systems.
• Soft Real-Time Systems: These OSs are for applications where time-constraint
is less strict.

Advantages of RTOS
• Maximum Consumption: Maximum utilization of devices and systems, thus
more output from all the resources.

• Task Shifting: The time assigned for shifting tasks in these systems is very less.
For example, in older systems, it takes about 10 microseconds in shifting from
one task to another, and in the latest systems, it takes 3 microseconds.
• Focus on Application: Focus on running applications and less importance on
applications that are in the queue.
• Real-time operating system in the embedded system: Since the size of
programs is small, RTOS can also be used in embedded systems like in transport
and others.
• Error Free: These systems are designed to be as error-free as possible.
• Memory Allocation: Memory allocation is best managed in these types of
systems.
Disadvantages of RTOS
• Limited Tasks: Very few tasks run at the same time, and concentration is kept on a few applications to avoid errors.
• Heavy use of system resources: These systems can consume significant system resources, which are also expensive.
• Complex Algorithms: The algorithms are very complex and difficult for the designer to write.
• Device drivers and interrupt signals: An RTOS needs specific device drivers and interrupt signals so that it can respond to interrupts as early as possible.
• Thread Priority: Setting thread priorities is difficult, as these systems are less prone to switching tasks.
Examples of Real-Time Operating Systems are Scientific experiments, medical
imaging systems, industrial control systems, weapon systems, robots, air traffic control
systems, etc.
8. Mobile Operating Systems
These operating systems are mainly for mobile devices. Examples of such operating
systems are Android and iOS.
Conclusion
Operating systems come in various types, each suited to specific needs: managing large batches of jobs, enabling multiple users to work simultaneously, coordinating networked computers, or ensuring timely execution in critical systems. Understanding these types helps in choosing the right operating system for the right job, ensuring efficiency and effectiveness.

QUESTION: How has the development of computer hardware been impacted by the
evolution of operating systems?

ANSWER: The design and advancement of computer hardware have been significantly influenced by the development of operating systems. Over time, hardware producers added new features and capabilities to their products as operating systems improved, in order to better support the functionality offered by the operating systems. For instance, the development of memory management units (MMUs) in hardware, which handle memory addressing and protection, followed the introduction of virtual memory in operating systems. Similarly, the demand for multitasking and multiprocessing support in operating systems prompted the creation of more powerful and efficient processors and other hardware components.

QUESTION: How has the development of distributed systems impacted how operating systems have changed over time?
ANSWER: Operating systems have been significantly impacted by the rise of distributed systems, such as client-server architectures and cloud computing. To support network communication, distributed file systems, and resource sharing across multiple machines, operating systems had to evolve. Distributed operating systems also developed to offer scalability, fault tolerance, and coordination in distributed environments. These modifications improved the ability to manage resources across interconnected systems by advancing networking protocols, remote procedure calls, and distributed file systems.

QUESTION: What are the examples of operating system?

ANSWER: There are many popular operating systems, for example Apple macOS, Microsoft Windows, Google's Android OS, the Linux operating system, and Apple iOS.

QUESTION: What Is Deadlock In Operating System?

ANSWER: Deadlock is a situation in an operating system where a set of processes is blocked because each process is holding one resource while waiting for another resource it needs to complete execution, but that resource is allocated to another process in the set.
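The circular wait behind this definition can be checked on a wait-for graph (processes as nodes, "is waiting for" as edges); a cycle in the graph means deadlock. A minimal sketch:

```python
# Deadlock sketch: detect a cycle in a wait-for graph. An edge P -> Q
# means "process P is waiting for a resource held by process Q".

def has_deadlock(wait_for):
    visited, on_stack = set(), set()

    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in wait_for.get(node, []):
            if nxt in on_stack:          # back edge: circular wait found
                return True
            if nxt not in visited and dfs(nxt):
                return True
        on_stack.discard(node)
        return False

    return any(dfs(p) for p in wait_for if p not in visited)

# P1 waits on P2 and P2 waits on P1: the classic two-process deadlock.
has_deadlock({"P1": ["P2"], "P2": ["P1"]})   # -> True
has_deadlock({"P1": ["P2"], "P2": []})       # -> False
```

Real operating systems that perform deadlock detection use essentially this kind of graph analysis, though on resource-allocation graphs that also include resource nodes.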

QUESTION: What is the mother of operating system?

ANSWER: UNIX is the mother of operating systems. UNIX is an operating system that is truly the base of many operating systems such as Ubuntu and Solaris, and of the POSIX standards.

QUESTION: What is the purpose of dual-mode operation?

ANSWER: Dual-mode operation is designed to provide a layer of protection and stability


to computer systems by separating user programs and the operating system into two
modes: user mode and kernel mode. User mode restricts access to privileged resources,
while kernel mode has full access to these resources.

QUESTION: How does dual-mode operation improve system performance?

ANSWER: By allowing user programs to access resources directly in user mode, the
operating system can avoid unnecessary context switches and other performance
penalties, which can improve system performance.

QUESTION: What are some common privileged instructions in kernel mode?

ANSWER: Some common privileged instructions in kernel mode include handling


interrupts, switching from user mode to kernel mode, and input-output management.

QUESTION: How can dual-mode operation enhance system security?

ANSWER: The dual-mode operation can enhance system security by preventing


unauthorized access to critical system resources. User programs running in user mode
cannot modify system data or perform privileged operations, reducing the risk of malware
attacks or other security threats.

QUESTION: What are some potential drawbacks of dual-mode operation?

ANSWER: Some potential drawbacks of dual-mode operation include increased


complexity and development/maintenance costs, security vulnerabilities, and potential
performance issues due to context switching. However, these drawbacks are generally
outweighed by the benefits of dual-mode operation in terms of system stability, security,
and flexibility.

What is data communication? State THREE benefits of computer networking.

Data communication is the transfer of data over a transmission medium between two or more devices, systems, or places.

Importance of computer networking:

• Provides best way of business communication.

• Streamline communication.

• Cost-effective resource sharing.

• Improving storage efficiency and volume.

• Cut costs on software.

• Cut costs on hardware.

• Utilizes Centralized Database.

• Increase in efficiency.

• Optimize convenience and flexibility.

• Allows File sharing.

• sharing of peripherals and internet access.

• Network gaming.

• Voice over IP (VoIP)

• Media Center Server.

• Centralize network administration, meaning less IT support.

• Flexibility.

• Allowing information sharing.

• Supporting distributed processing.

• User communication.

• Overcoming geographic separation.

Explain USART and give TWO differences between UART and USART
A USART (Universal Synchronous/Asynchronous Receiver-Transmitter) is a serial communication peripheral that can transmit and receive data either synchronously (using a shared clock line) or asynchronously. The most common use of the USART in asynchronous mode is to communicate with a PC serial port using the RS-232 protocol.
Differences between UART and USART:

Feature         UART                                    USART
Mode            Full duplex                             Half duplex
Signal count    2 (Tx and Rx)                           4 (Tx, Rx, XCK (clock), and XDIR (direction))
Data rate       Specified between transmitter           Defined by the clock pulse stream
                and receiver                            on the XCK pin

What is Distributed Processing? How does a DNS works?


Distributed processing refers to a model in which different parts of a data processing task are executed simultaneously across multiple computing resources, usually in a networked environment.
Distributed processing also means that a specific task can be broken up into functions,
and the functions are dispersed across two or more interconnected processors.
How DNS works
The basic process of a DNS resolution follows these steps:
1. The user enters a web address or domain name into a browser.
2. The browser sends a message, called a recursive DNS query, to the network to
find out which IP or network address the domain corresponds to.
3. The query goes to a recursive DNS server, which is also called a recursive resolver,
and is usually managed by the internet service provider (ISP). If the recursive
resolver has the address, it will return the address to the user, and the webpage
will load.
4. If the recursive DNS server does not have an answer, it will query a series of other
servers in the following order: DNS root name servers, top-level domain (TLD)
name servers and authoritative name servers.
5. The three server types work together and continue redirecting until they retrieve a
DNS record that contains the queried IP address. It sends this information to the
recursive DNS server, and the webpage the user is looking for loads. DNS root
name servers and TLD servers primarily redirect queries and rarely provide the
resolution themselves.
6. The recursive server stores, or caches, the A record for the domain name, which
contains the IP address. The next time it receives a request for that domain name,
it can respond directly to the user instead of querying other servers.
7. If the query reaches the authoritative server and it cannot find the information, it
returns an error message.
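Application code rarely walks these steps itself; it calls the system resolver, which does. A minimal C sketch (assuming a POSIX system) using the standard getaddrinfo() call; it resolves "localhost", so no network access is needed and the result is normally 127.0.0.1:

```c
#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void) {
    struct addrinfo hints, *res, *p;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_INET;        /* IPv4 only, to keep the output simple */
    hints.ai_socktype = SOCK_STREAM;

    /* getaddrinfo() performs the whole resolution chain described above */
    int rc = getaddrinfo("localhost", NULL, &hints, &res);
    if (rc != 0) {
        fprintf(stderr, "lookup failed: %s\n", gai_strerror(rc));
        return 1;
    }
    for (p = res; p != NULL; p = p->ai_next) {
        char ip[INET_ADDRSTRLEN];
        struct sockaddr_in *addr = (struct sockaddr_in *)p->ai_addr;
        inet_ntop(AF_INET, &addr->sin_addr, ip, sizeof ip);
        printf("%s\n", ip);           /* print each resolved IPv4 address */
    }
    freeaddrinfo(res);
    return 0;
}
```

Caching (step 6) happens inside the recursive resolver and, on many systems, in a local resolver cache; the program above sees only the final answer.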
CHAPTER TWO
STRUCTURE, FUNCTIONS, AND PHILOSOPHY OF OPERATING SYSTEMS

2.1 Process Management


Process Management
It is an important part of the operating system. It allows you to control the way your
computer runs by managing the currently active processes. This includes ending
processes that are no longer needed, setting process priorities, and more. You can do
this on your own computer as well.
There are a few ways to manage your processes. One is through a task manager, which
lets you see all of the processes currently running on your computer along with their
current status and CPU/memory usage. You can end any process that you no longer
need, set a process priority, or start or stop a service.

2.2 Process Description

Process Control Block (PCB)

The process control block is a data structure used by an operating system to store
information about a process. This includes the process state, program counter, CPU
scheduling information, memory management information, accounting information,
and IO status information. The operating system uses the process control block to keep
track of all the processes in the system.

The main attributes stored in the PCB are:

• Process State – the current state of the process (ready, running, waiting, terminated).

• Process privileges – required for allowing/disallowing access to system resources.

• Process ID – a unique identification for every process in the operating system.

• Program Counter – contains the address of the next instruction to be executed in the process.

• CPU registers – the registers used by the process; they may include general-purpose registers, index registers, accumulators, stack pointers, etc.

• Memory management information – memory information such as the segment table, memory limits, and page table.

• Accounting information – the amount of CPU used for process execution, execution time, time limits, etc.

• IO status information – the list of I/O devices allocated to the process (there may be many input/output devices attached to the process).

Process Operations
Example of some operations:

1. Process creation
The first step is process creation. Process creation could be from a user request (using
fork()), a system call by a running process, or system initialization.
2. Scheduling
If the process is ready to be executed, it waits in the ready queue, and it is now the
job of the scheduler to choose a process from the ready queue and start its execution.

3. Execution
Here, execution of the process means the CPU is assigned to the process. Once the
process has started executing, it can go into a waiting queue or blocked state. Maybe the
process wants to make an I/O request, or some high-priority process comes in.
4. Killing the process
After process execution completes, the operating system terminates the process and
deletes its Process Control Block (PCB).

States of the Process

1. New state
When a process is created, it is a new state. The process is not yet ready to run in this
state and is waiting for the operating system to give it the green light. Long-term
schedulers shift the process from a NEW state to a READY state.
2. Ready state
After creation, the process is ready to be assigned to the processor. The process waits
in the ready queue to be picked up by the short-term scheduler, which selects one
process to move from the READY state to the RUNNING state.
3. Running state
Once the process is ready, it moves on to the running state, where it starts to execute the
instructions that were given to it. The running state is also where the process consumes
most of its CPU time.
4. Waiting state
If the process needs to stop running for some reason, it enters the waiting state. This
could be because:
• it is waiting for some input;
• it is waiting for a resource that is not available yet;
• some high-priority process that needs to be executed has arrived.
The process is then suspended for some time and put in the WAITING state. Meanwhile,
the next process is given a chance to execute.

5. Terminated state
After execution, the process exits to the terminated state, which means its execution is
complete.

Context Switching in an Operating System


Context switching is the process of switching between tasks or contexts. The state of
the currently executing process is saved before moving to the new process; saving state
means copying all live registers to the PCB (Process Control Block).
Context switching can be a resource-intensive operation, requiring CPU time, memory,
and other resources. In addition, context switching can also cause latency and jitter.
Due to these potential issues, minimising the amount of context switching in your
operating system is crucial. By doing so, you can improve performance and stability.

Conclusion
Process management is a critical function of the operating system. By managing
processes, the operating system can ensure that resources are used efficiently and that
the system remains stable. In addition, process management allows the operating
system to control how programs interact with each other.

2.3 Process Scheduling in Operating System


A process is the instance of a computer program in execution.

• Scheduling is important in operating systems with multiprogramming as multiple


processes might be eligible for running at a time.

• One of the key responsibilities of an Operating System (OS) is to decide which


programs will execute on the CPU.

• Process Schedulers are fundamental components of operating systems


responsible for deciding the order in which processes are executed by the CPU.
In simpler terms, they manage how the CPU allocates its time among multiple
tasks or processes that are competing for its attention.

What is Process Scheduling?

Process scheduling is the activity of the process manager that handles the removal of
the running process from the CPU and the selection of another process based on a
particular strategy. Throughout its lifetime, a process moves between
various scheduling queues, such as the ready queue, waiting queue, or devices queue.

Categories of Scheduling

Scheduling falls into one of two categories:

• Non-Preemptive: In this case, a process’s resource cannot be taken before the


process has finished running. When a running process finishes and transitions to
a waiting state, resources are switched.

• Pre-emptive: In this case, the OS can switch a process from running state to
ready state. This switching happens because the CPU may give other processes
priority and substitute the currently active process for the higher priority process.

Types of Process Schedulers

There are three types of process schedulers:

1. Long Term or Job Scheduler

Long Term Scheduler loads a process from disk to main memory for execution and
moves the new process to the 'Ready' state.

• It mainly moves processes from Job Queue to Ready Queue.

• It controls the Degree of Multi-programming, i.e., the number of processes


present in a ready state or in main memory at any point in time.

• It is important that the long-term scheduler make a careful selection of both I/O
and CPU-bound processes. I/O-bound tasks are which use much of their time in
input and output operations while CPU-bound processes are which spend their
time on the CPU. The job scheduler increases efficiency by maintaining a
balance between the two.

• In some systems, the long-term scheduler might not even exist. For example, in
time-sharing systems like Microsoft Windows, there is usually no long-term
scheduler. Instead, every new process is directly added to memory for the short-
term scheduler to handle.

• Slowest among the three (that is why called long term).

2. Short-Term or CPU Scheduler

CPU Scheduler is responsible for selecting one process from the ready state for running
(or assigning CPU to it).

• STS (Short Term Scheduler) must select a new process for the CPU frequently to
avoid starvation.

• The CPU scheduler uses different scheduling algorithms to balance the


allocation of CPU time.

• It picks a process from ready queue.

• Its main objective is to make the best use of CPU.

• It mainly calls dispatcher.

• Fastest among the three (that is why called Short Term).

The dispatcher is responsible for loading the process selected by the Short-term
scheduler on the CPU (Ready to Running State). Context switching is done by the
dispatcher only. A dispatcher does the following work:

• Saving context (process control block) of previously running process if not


finished.

• Switching system mode to user mode.

• Jumping to the proper location in the newly loaded program.

Time taken by dispatcher is called dispatch latency or process context switch time.

3. Medium-Term Scheduler

Medium Term Scheduler (MTS) is responsible for moving a process from memory to disk
(or swapping).

• It reduces the degree of multiprogramming (Number of processes present in


main memory).

• A running process may become suspended if it makes an I/O request. A
suspended process cannot make any progress towards completion. In this
condition, to remove the process from memory and make space for other
processes, the suspended process is moved to secondary storage. This is
called swapping, and the process is said to be swapped out or rolled out.
Swapping may be necessary to improve the process mix (of CPU-bound and
I/O-bound processes).

• When needed, it brings the process back into memory and picks up right where
it left off.

• It is faster than long term and slower than short term.

Some Other Schedulers

• I/O Schedulers: I/O schedulers are in charge of managing the execution of I/O
operations such as reading and writing to discs or networks. They can use
various algorithms to determine the order in which I/O operations are executed,
such as FCFS (First-Come, First-Served) or RR (Round Robin).

• Real-Time Schedulers: In real-time systems, real-time schedulers ensure that


critical tasks are completed within a specified time frame. They can prioritize
and schedule tasks using various algorithms such as EDF (Earliest Deadline
First) or RM (Rate Monotonic).

Comparison Among Schedulers

• Role: the long-term scheduler is a job scheduler; the short-term scheduler is a CPU scheduler; the medium-term scheduler is a process-swapping scheduler.

• Speed: the long-term scheduler is the slowest; the short-term scheduler is the fastest of the three; the medium-term scheduler's speed lies in between.

• Multiprogramming: the long-term scheduler controls the degree of multiprogramming; the short-term scheduler gives less control over it; the medium-term scheduler reduces it.

• Presence: the long-term scheduler is barely present or non-existent in time-sharing systems; the short-term scheduler is a minimal requirement of any time-sharing system; the medium-term scheduler is a component of time-sharing systems.

• Function: the long-term scheduler selects processes and loads them into memory; the short-term scheduler selects those processes which are ready to execute; the medium-term scheduler can re-introduce a process into memory so that its execution can be continued.

Context Switching

In order for a process execution to be continued from the same point at a later time,
context switching is a mechanism to store and restore the state or context of a CPU in
the Process Control block. A context switcher makes it possible for multiple processes
to share a single CPU using this method. A multitasking operating system must include
context switching among its features.

The state of the currently running process is saved into the process control block when
the scheduler switches the CPU from executing one process to another. The state used
to set up the registers, program counter, etc. for the process that will run next is then
loaded from its own PCB. After that, the second process can start executing.

Context Switching

In order for a process execution to be continued from the same point at a later time,
context switching is a mechanism to store and restore the state or context of a CPU in
the Process Control block. A context switcher makes it possible for multiple processes

61
to share a single CPU using this method. A multitasking operating system must include
context switching among its features.

• Program Counter

• Scheduling information

• The base and limit register value

• Currently used register

• Changed State

• I/O State information

• Accounting information

Conclusion

Process schedulers are essential parts of an operating system that manage how the
CPU handles multiple tasks or processes. They ensure that processes are executed
efficiently, making the best use of CPU resources and maintaining system
responsiveness. By choosing the right process to run at the right time, schedulers help
optimize overall system performance, improve user experience, and ensure fair access
to CPU resources among competing processes.

CHAPTER THREE
UNDERSTAND INTER-PROCESS COMMUNICATION

3.1 Define process concepts


What is a Process in an Operating System?
A process is essentially running software. The execution of any process must occur in a
specific order. A process refers to an entity that helps in representing the fundamental
unit of work that must be implemented in any system.
In other words, we write the computer programs in the form of a text file, thus when we
run them, these turn into processes that complete all of the duties specified in the
program.
A program can be segregated into four pieces when put into memory to become a
process: stack, heap, text, and data. The diagram below depicts a simplified
representation of a process in the main memory.

Components of a Process
It is divided into the following four sections:

Stack
Temporary data like method or function parameters, return address, and local variables
are stored in the process stack.

Heap
This is the memory that is dynamically allocated to a process during its execution.

Text
This comprises the contents present in the processor’s registers as well as the current
activity reflected by the value of the program counter.

Data
The global as well as static variables are included in this section.

Process Life Cycle


When a process runs, it goes through many states. Distinct operating systems have
different stages, and the names of these states are not standardised. In general, a
process can be in one of the five states listed below at any given time.

Start
When a process is started/created first, it is in this state.

Ready
Here, the process is waiting for a processor to be assigned to it. Ready processes are
waiting for the operating system to assign them a processor so that they can run. The
process may enter this state after starting or while running, but the scheduler may
interrupt it to assign the CPU to another process.

Running
When the OS scheduler assigns a processor to a process, the process state gets set to
running, and the processor executes the process instructions.

Waiting
If a process needs to wait for any resource, such as for user input or for a file to become
available, it enters the waiting state.

Terminated or Exit
The process is relocated to the terminated state, where it waits for removal from the main
memory once it has completed its execution or been terminated by the operating system.

Process Control Block (PCB)


Every process has a process control block, which is a data structure managed by the
operating system. An integer process ID (or PID) is used to identify the PCB. As shown
below, PCB stores all of the information required to maintain track of a process.

Process state
The process’s present state, such as ready, waiting, running, or terminated.

Process privileges
This is required in order to grant or deny access to system resources.

Process ID
Each process in the OS has its own unique identifier.

Pointer
It refers to a pointer that points to the parent process.

Program counter
The program counter refers to a pointer that points to the address of the process’s next
instruction.

CPU registers
Processes must be stored in various CPU registers for execution in the running state.

CPU scheduling information


Process priority and additional scheduling information are required for the process to be
scheduled.

Memory management information


This includes information from the page table, memory limitations, and segment table,
all of which are dependent on the amount of memory used by the OS.

Accounting information
This comprises CPU use for process execution, time constraints, and execution ID,
among other things.

IO status information
This section includes a list of the process’s I/O devices.
The PCB architecture is fully dependent on the operating system, and different operating
systems may include different information. A simplified diagram of a PCB is shown
below.

The PCB is kept for the duration of a process and then removed once the process is
finished.
The Different Process States
The operating system’s processes can be in one of the following states:
• NEW – The creation of the process.

• READY – The waiting for the process that is to be assigned to any processor.
• RUNNING – Execution of the instructions.
• WAITING – The waiting of the process for some event that is about to occur (like
an I/O completion, a signal reception, etc.).
• TERMINATED – A process has completed execution.

Process vs Program
A program is a piece of code that can be as simple as a single line or as complex as
millions of lines. A computer program is usually developed in a programming language
by a programmer. The process, on the other hand, is essentially a representation of the
computer program that is now running. It has a comparatively shorter lifetime.
Here is a basic program created in the C programming language as an example:

#include <stdio.h>

int main() {
    printf("Hi, Subhadip!\n");
    return 0;
}

A computer program refers to a set of instructions that, when executed by a computer,


perform a certain purpose. We can deduce that a process refers to a dynamic instance
of a computer program when we compare a program to a process. An algorithm is an
element of a computer program that performs a certain task. A software package is a
collection of computer programs, libraries, and related data.

Process Scheduling
When there are several or more runnable processes, the operating system chooses
which one to run first; this is known as process scheduling.
A scheduler is a program that uses a scheduling algorithm to make choices. The following
are characteristics of a good scheduling algorithm:

• For users, response time should be kept to a bare minimum.
• The total number of jobs processed every hour should be as high as possible,
implying that a good scheduling system should provide the highest possible
throughput.
• The CPU should be used to its full potential.
• Each process should be given an equal amount of CPU time.

3.2 Process creation and process termination (wait, signal, semaphore and deadlock);
IPC techniques

Process Creation
As discussed above, processes in most of the operating systems (both Windows and
Linux) form hierarchy. So a new process is always created by a parent process. The
process that creates the new one is called the parent process, and the newly created
process is called the child process. A process can create multiple new processes while
it’s running by using system calls to create them.

1. When a new process is created, the operating system assigns a unique Process
Identifier (PID) to it and inserts a new entry in the primary process table.
2. Then required memory space for all the elements of the process such as program,
data, and stack is allocated including space for its Process Control Block (PCB).
3. Next, the various values in the PCB are initialized:
a. The process identification part is filled with the PID assigned in step (1) and also
its parent's PID.
b. The processor register values are mostly filled with zeroes, except for the stack
pointer and program counter. The stack pointer is filled with the address of the
stack allocated in step (2), and the program counter is filled with the address
of the program's entry point.
c. The process state information is set to 'New'.
d. Priority is lowest by default, but the user can specify any priority during
creation.
4. Then the operating system links this process to the scheduling queue, and the
process state is changed from 'New' to 'Ready'. Now the process competes for the
CPU.
5. Additionally, the operating system creates some other data structures such as
log files or accounting files to keep track of process activity.

Understanding System Calls for Process Creation in Windows Operating System:
In Windows, the system call used for process creation is CreateProcess(). This function
is responsible for creating a new process, initializing its memory, and loading the
specified program into the process’s address space.
• CreateProcess() in Windows combines the functionality of both
UNIX’s fork() and exec(). It creates a new process with its own memory space
rather than duplicating the parent process like fork() does. It also allows
specifying which program to run, similar to how exec() works in UNIX.
• When you use CreateProcess(), you need to provide some extra details to handle
any changes between the parent and child processes. These details control
things like the process’s environment, security settings, and how the child
process works with the parent or other processes. It gives you more control and
flexibility compared to the UNIX system.

Process Deletion
Processes terminate themselves when they finish executing their last statement, after
which the operating system uses the exit() system call to delete their context. Then all
the resources held by that process, like physical and virtual memory, I/O buffers, open
files, etc., are taken back by the operating system. A process P can be terminated either
by the operating system or by the parent process of P.

A parent may terminate a process due to one of the following reasons:


1. When the task given to the child is no longer required.
2. When the child has taken more resources than its limit.
3. When the parent of the process is exiting; as a result, all its children are deleted. This
is called cascaded termination.

A process can be terminated/deleted in many ways. Some of the ways are:


1. Normal termination: The process completes its task and calls an exit() system
call. The operating system cleans up the resources used by the process and
removes it from the process table.
2. Abnormal termination/Error exit: A process may terminate abnormally if it
encounters an error or needs to stop immediately. This can happen through
the abort() system call.
3. Termination by parent process: A parent process may terminate a child process
when the child finishes its task. This is done using the kill() system call.
4. Termination by signal: The parent process can also send specific signals
like SIGSTOP to pause the child or SIGKILL to immediately terminate it.

Processes are created using system calls like fork() in UNIX or CreateProcess() in
Windows, which handle the allocation of resources, assignment of unique identifiers,
and initialization of the process control block. Once a process completes its task, it can
terminate through various methods, such as normal termination, abnormal termination,
or termination by the parent process using system calls like exit(), abort(), or kill().

Methods in Inter process Communication

Inter Process communication (IPC) refers to the mechanisms and techniques used by
operating systems to allow different processes to communicate with each other. This
allows running programs concurrently in an Operating System.
The two fundamental models of Inter Process Communication are:
A. Shared Memory
B. Message Passing

Shared Memory
IPC through Shared Memory is a method where multiple processes are given access to
the same region of memory. This shared memory allows the processes to communicate
with each other by reading and writing data directly to that memory area.
Shared Memory in IPC can be visualized as Global variables in a program which are
shared in the entire program but shared memory in IPC goes beyond global variables,
allowing multiple processes to share data through a common memory space, whereas
global variables are restricted to a single process.

Message Passing
IPC through Message Passing is a method where processes communicate by sending
and receiving messages to exchange data. In this method, one process sends a
message, and the other process receives it, allowing them to share information. Message
Passing can be achieved through different methods like Sockets, Message Queues or
Pipes.
Sockets provide an endpoint for communication, allowing processes to send and receive
messages over a network. In this method, one process (the server) opens a socket and
listens for incoming connections, while the other process (the client) connects to the
server and sends data. Sockets can use different communication protocols, such
as TCP (Transmission Control Protocol) for reliable, connection-oriented
communication or UDP (User Datagram Protocol) for faster, connectionless
communication.

Different methods of Inter process Communication (IPC) are as follows:


1. Pipes – A pipe is a unidirectional communication channel used for IPC between
two related processes. One process writes to the pipe, and the other process
reads from it. The types of pipes are anonymous pipes and named pipes (FIFOs).
2. Sockets – Sockets are used for network communication between processes
running on different hosts. They provide a standard interface for communication,
which can be used across different platforms and programming languages.
3. Shared memory – In shared memory IPC, multiple processes are given access to
a common memory space. Processes can read and write data to this memory,
enabling fast communication between them.
4. Semaphores – Semaphores are used for controlling access to shared resources.
They are used to prevent multiple processes from accessing the same resource
simultaneously, which can lead to data corruption.
5. Message Queues – These allow messages to be passed between processes
using either a single queue or several message queues. This is managed by the
system kernel, and the messages are coordinated using an API.

3.3 Describe Inter-process communication (IPC) techniques


Processes need to communicate with each other in many situations, for example, to
count occurrences of a word in text file, output of grep command needs to be given to wc
command, something like grep -o -i <word> <file> | wc -l.

Inter-process communication (IPC) is a mechanism that allows processes to


communicate. It helps processes synchronize their activities, share information, and
avoid conflicts while accessing shared resources.

It is a process that allows different processes of a computer system to share information.


IPC lets different programs run in parallel, share data, and communicate with each other.

Types of Process
Let us first talk about types of types of processes.
• Independent process: An independent process is not affected by the execution
of other processes. Independent processes are processes that do not share any
data or resources with other processes. No inter-process communication
required here.
• Co-operating process: Interact with each other and share data or resources. A
co-operating process can be affected by other executing processes. Inter-
process communication (IPC) is a mechanism that allows processes to
communicate with each other and synchronize their actions. The communication
between these processes can be seen as a method of cooperation between them.

Advantages of IPC
a. Enables processes to communicate with each other and share resources,
leading to increased efficiency and flexibility.
b. Facilitates coordination between multiple processes, leading to better overall
system performance.
c. Allows for the creation of distributed systems that can span multiple
computers or networks.
d. Can be used to implement various synchronization and communication
protocols, such as semaphores, pipes, and sockets.

Disadvantages of IPC
a. Increases system complexity, making it harder to design, implement, and
debug.
b. Can introduce security vulnerabilities, as processes may be able to access or
modify data belonging to other processes.
c. Requires careful management of system resources, such as memory
and CPU time, to ensure that IPC operations do not degrade overall system
performance.
d. Can lead to data inconsistencies if multiple processes try to access or modify
the same data at the same time.
Overall, the advantages of IPC outweigh the disadvantages, as it is a necessary
mechanism for modern operating systems that enables processes to work together and
share resources in a flexible and efficient manner. However, IPC systems must be
designed and implemented carefully in order to avoid potential security vulnerabilities
and performance issues.

3.4 Explain process states, process table.

CHAPTER FOUR
KNOW VARIOUS SCHEDULING TECHNIQUES

4.1 Define CPU Scheduling


Scheduling means arranging or planning the particular time at which certain events will
take place; it may also refer to parts of an event taking place. It also means planning and
prioritising a sequence of activities in a way that the set goal is achieved in time.

What is CPU Scheduling in OS?


CPU scheduling in OS is a method by which one process is allowed to use the CPU while
the other processes are kept on hold or are kept in the waiting state. This hold or waiting
state is implemented due to the unavailability of any of the system resources like I/O etc.
Thus, the purpose of CPU scheduling in OS is to result in an efficient, faster and fairer
system.
As soon as the CPU becomes idle, the operating system selects one of the processes
that are in the ready queue, waiting to be executed. The short-term scheduler also known
as the CPU scheduler carries out the task of selecting a process from among the
processes in memory that are waiting to be executed and allocates the CPU to one of
these.

The working of all multiprogramming operating systems is based on CPU scheduling.

Types of CPU Scheduling

CPU scheduling decisions may take place when a process: (1) switches from the
running to the waiting state, (2) switches from the running to the ready state, (3)
switches from the waiting to the ready state, or (4) terminates. Scheduling that occurs
only under circumstances (1) and (4) is non-preemptive; otherwise it is preemptive.

Explain CPU Scheduling criteria (CPU utilization, Throughput, Turnaround Time, Waiting
Time, Load Average, Response Time).

4.2 List type of scheduling


The types of scheduling in OS are primarily categorized into two main categories:

o Preemptive Scheduling

o Non-Preemptive Scheduling

4.3 CPU scheduling criteria: pre-emptive and non-pre-emptive


Preemptive and Non-Preemptive Scheduling
In operating systems, scheduling is the method by which processes are given access to
the CPU. Efficient scheduling is essential for optimal system performance and user
experience. There are two primary types of CPU scheduling: preemptive and non-
preemptive.
Understanding the differences between preemptive and non-preemptive scheduling
helps in designing and choosing the right scheduling algorithms for various types of
operating systems.

What is Preemptive Scheduling?


The operating system can interrupt or preempt a running process to allocate CPU time to another process, typically based on priority or time-sharing policies. Mainly, a process is switched from the running state to the ready state. Algorithms based on preemptive scheduling include Round Robin (RR), Shortest Remaining Time First (SRTF), Priority (preemptive version), etc.
In the following example, P2 is preempted at time 1 due to the arrival of a higher-priority process (Gantt chart omitted).
Advantages of Preemptive Scheduling
• Because a process cannot monopolize the processor, it is a more reliable method and does not allow a denial-of-service situation.
• Preemption prevents any single ongoing task from blocking the completion of others.
• The average response time is improved; utilizing this method in a multiprogramming environment is particularly advantageous.
• Most modern operating systems (Windows, Linux and macOS) implement preemptive scheduling.
Disadvantages of Preemptive Scheduling
• More complex to implement in operating systems.
• Suspending the running process, changing the context, and dispatching the new incoming process all take extra time.
• Might cause starvation: a low-priority process might be preempted again and again if multiple high-priority processes arrive.
• Causes concurrency problems, as processes can be stopped while they are accessing shared memory (or variables) or resources.
What is Non-Preemptive Scheduling?
In non-preemptive scheduling, a running process cannot be interrupted by the operating
system; it voluntarily relinquishes control of the CPU. In this scheduling, once the
resources (CPU cycles) are allocated to a process, the process holds the CPU till it gets
terminated or reaches a waiting state.
Algorithms based on non-preemptive scheduling include First Come First Serve (FCFS), Shortest Job First (SJF, basically non-preemptive) and Priority (non-preemptive version), etc.

Below is the table and Gantt chart according to the First Come First Serve (FCFS) algorithm (figure omitted); we can notice that every process finishes execution once it gets the CPU.

Advantages of Non-Preemptive Scheduling


• It is easy to implement in an operating system; it was used in Windows 3.11 and the early (classic) Mac OS.
• It has a minimal scheduling burden.
• Less computational resources are used.
Disadvantages of Non-Preemptive Scheduling
• It is open to denial-of-service attack: a malicious process can hold the CPU forever.
• Since round robin time-sharing cannot be implemented, the average response time becomes higher.

Differences Between Preemptive and Non-Preemptive Scheduling

• In preemptive scheduling there is the overhead of switching the process between the ready and running states and of maintaining the ready queue, whereas non-preemptive scheduling has no overhead of switching a process from the running state to the ready state.
• Preemptive scheduling attains flexibility by allowing critical processes to access the CPU as soon as they arrive in the ready queue, no matter what process is currently executing. Non-preemptive scheduling is called rigid because even if a critical process enters the ready queue, the process running on the CPU is not disturbed.
• Preemptive scheduling has to maintain the integrity of shared data, which carries an associated cost; this is not the case with non-preemptive scheduling.

Parameter | Preemptive Scheduling | Non-Preemptive Scheduling
--------- | --------------------- | -------------------------
Basic | Resources (CPU cycles) are allocated to a process for a limited time. | Once resources (CPU cycles) are allocated to a process, the process holds them until it completes its burst time or switches to the waiting state.
Interrupt | A process can be interrupted in between. | A process cannot be interrupted until it terminates itself or its time is up.
Starvation | If a process with high priority frequently arrives in the ready queue, a low-priority process may starve. | If a process with a long burst time is running on the CPU, a later process with less CPU burst time may starve.
Overhead | It has the overhead of scheduling the processes. | It has no scheduling overhead.
Flexibility | Flexible. | Rigid.
Cost | Cost associated. | No cost associated.
Response time | Response time is less. | Response time is high.
Decision making | Decisions are made by the scheduler, based on priority and time-slice allocation. | Decisions are made by the process itself; the OS just follows the process's instructions.
Process control | The OS has greater control over the scheduling of processes. | The OS has less control over the scheduling of processes.
Context-switch overhead | Higher, due to frequent context switching. | Lower, since context switching is less frequent.
Concurrency overhead | More, as a process might be preempted while accessing a shared resource. | Less, as a process is never preempted.
Examples | Round Robin and Shortest Remaining Time First. | First Come First Serve and Shortest Job First.

Conclusion
Preemptive scheduling allows the operating system to interrupt and reassign the CPU to
different processes, making it responsive and efficient for high-priority tasks. Non-
preemptive scheduling lets processes run to completion without interruption,
simplifying the system but potentially causing delays for other tasks. The choice
between these methods depends on the system’s needs for performance and simplicity.

4.4 Describe Scheduling Algorithms


The Different Types of CPU Scheduling Algorithms in OS are: First Come
First Serve (FCFS), Shortest-Job-First (SJF), Longest Job First (LJF), Priority Scheduling,
Round Robin (RR), Shortest Remaining Time First, Longest Remaining Time First and
Highest Response Ratio Next

Computer scientists have developed some algorithms that can decide the order of
execution of processes in a way that shall help achieve maximum utilization of the CPU.
These CPU Scheduling algorithms in operating systems can be classified as follows:
1. First Come First Serve:
Of all the scheduling algorithms it is one of the simplest and easiest to implement. As the name suggests, the First Come First Serve scheduling algorithm allocates the CPU to the process that requests it first. It is implemented using a First In First Out (FIFO) queue and is inherently non-preemptive: once a process gets the CPU, it runs until its burst completes.
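As a small illustration (the process names, arrival times and burst times below are invented, not taken from the text), waiting and turnaround times under FCFS can be computed like this:

```python
# FCFS scheduling sketch: processes are served strictly in arrival order.
# Each tuple is (name, arrival_time, burst_time) -- illustrative values only.
def fcfs(processes):
    processes = sorted(processes, key=lambda p: p[1])  # order by arrival
    clock, results = 0, {}
    for name, arrival, burst in processes:
        start = max(clock, arrival)          # CPU may sit idle until arrival
        finish = start + burst
        results[name] = {
            "waiting": start - arrival,      # time spent in the ready queue
            "turnaround": finish - arrival,  # completion time - arrival time
        }
        clock = finish
    return results

print(fcfs([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 1)]))
# P1 waits 0, P2 waits 4, P3 waits 6
```

Note how the short process P3 waits behind the long P1 that arrived first; this is the convoy effect that motivates SJF.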
2. Shortest Job First (SJF):
Shortest job first (SJF) is a scheduling algorithm that selects the waiting process with the
smallest execution time to be executed next. The method followed by the SJF scheduling
algorithm may or may not be preemptive. SJF reduces the average waiting time of other
waiting processes significantly.
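The non-preemptive variant of SJF can be sketched as follows (again with made-up process data; at each decision point the shortest arrived burst is chosen):

```python
# Non-preemptive SJF sketch: among arrived processes, run the shortest burst.
# Tuples are (name, arrival_time, burst_time) -- illustrative values only.
def sjf(processes):
    pending, clock, waits = list(processes), 0, {}
    while pending:
        ready = [p for p in pending if p[1] <= clock]
        if not ready:                      # nothing has arrived yet: jump ahead
            clock = min(p[1] for p in pending)
            continue
        name, arrival, burst = min(ready, key=lambda p: p[2])
        waits[name] = clock - arrival      # waiting time = start - arrival
        clock += burst                     # run the chosen job to completion
        pending.remove((name, arrival, burst))
    return waits

print(sjf([("P1", 0, 7), ("P2", 1, 4), ("P3", 2, 1)]))
# P1 runs first (only arrival at t=0); then P3 (shortest), then P2
```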
3. Longest Job First (LJF):
The working of the Longest Job First(LJF) scheduling process is the opposite of the
shortest job first (SJF), Here, the process with the largest burst time is processed first.
Longest Job First is one of the non-preemptive algorithms.
4. Priority Scheduling:

The Priority CPU Scheduling Algorithm allocates the CPU to the most important (highest-priority) process first; it may be implemented in either pre-emptive or non-pre-emptive form. If more than one process has the same priority value, the FCFS rule is used to resolve the situation.
5. Round Robin:
In the Round Robin CPU scheduling algorithm the processes are allocated CPU time in a
cyclic order with a fixed time slot for each process. It is said to be a pre-emptive version
of FCFS. With the cyclic allocation of equal CPU time, it works on the principle of time-
sharing.
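The cyclic, fixed-quantum behaviour of Round Robin can be sketched like this (a toy model assuming, for simplicity, that all processes arrive at time 0; names and bursts are illustrative):

```python
from collections import deque

# Round Robin sketch with a fixed time quantum; all processes are assumed
# to arrive at t=0 for simplicity. Names and burst times are illustrative.
def round_robin(bursts, quantum):
    queue = deque(bursts.items())          # (name, remaining_burst) pairs
    clock, finish = 0, {}
    while queue:
        name, remaining = queue.popleft()
        ran = min(quantum, remaining)      # run for at most one quantum
        clock += ran
        if remaining - ran > 0:
            queue.append((name, remaining - ran))  # back of the queue
        else:
            finish[name] = clock           # record completion time
    return finish

print(round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2))
# {'P3': 5, 'P2': 8, 'P1': 9}
```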
6. Shortest Remaining Time First:
The Shortest remaining time first CPU scheduling algorithm is a preemptive version of
the Shortest job first scheduling algorithm. Here the CPU is allocated to the process that
needs the smallest amount of time for its completion. Here the short processes are
handled very fast but the time taking processes keep waiting.
7. Longest Remaining Time First:
The longest remaining time first CPU scheduling algorithm is a preemptive CPU
scheduling algorithm. This algorithm selects those processes first which have the
longest processing time remaining for completion i.e. processes with the largest burst
time are allocated the CPU time first.
8. Highest Response Ratio Next:
The Highest Response Ratio Next is one of the non-preemptive CPU Scheduling
algorithms. It has the distinction of being recognised as one of the most optimal
scheduling algorithms. As the name suggests here the CPU time is allocated on the basis
of the response ratio of all the available processes where it selects the process that has
the highest Response Ratio. The selected process will run till its execution is complete.

Comparison of CPU Scheduling Algorithms

Algorithm | Allocation | Complexity | Average waiting time (AWT) | Preemption | Starvation | Performance
--------- | ---------- | ---------- | -------------------------- | ---------- | ---------- | -----------
FCFS | According to the arrival time of the processes. | Simple and easy to implement. | Large. | No | No | Slow.
SJF | Based on the lowest CPU burst time (BT). | More complex than FCFS. | Smaller than FCFS. | No | Yes | Minimum average waiting time.
SRTF | Same as SJF: allocation is based on the lowest CPU burst time, but it is preemptive. | More complex than FCFS. | Depends on some measures, e.g. arrival time, process size. | Yes | Yes | Preference is given to short jobs.
RR | According to the order in which processes arrive, with a fixed time quantum (TQ). | Complexity depends on the time-quantum size. | Large compared to SJF and priority scheduling. | Yes | No | Each process is given a fairly fixed time.
Priority (pre-emptive) | According to priority; the bigger-priority task executes first. | Less complex. | Smaller than FCFS. | Yes | Yes | Good performance but contains a starvation problem.
Priority (non-preemptive) | According to priority, monitoring new incoming higher-priority jobs. | Less complex than pre-emptive priority. | Smaller than FCFS. | No | Yes | Most beneficial with batch systems.
MLQ | According to the queue in which the process resides (bigger-priority queue first). | More complex than the priority scheduling algorithms. | Smaller than FCFS. | No | Yes | Good performance but contains a starvation problem.
MFLQ | According to the process of the bigger-priority queue. | The most complex, but its complexity depends on the TQ size. | Smaller than all other scheduling types in many cases. | No | No | Good performance.

CPU scheduling algorithms help to schedule the tasks or processes in a way that the
processes are executed in a particular order and the CPU time is utilised to its optimum
level.

4.5 Recognise: Multiprogramming, Multiprocessing, Multitasking, and Multithreading

CHAPTER FIVE
UNDERSTAND INTERRUPT AND MASKING TRAPS

5.1 Define Interrupt


Interrupts play a crucial role in computer devices by allowing the processor to react
quickly to events or requests from external devices or software. However, we are going
to discuss every point about interruption and its various types in detail.

What is an Interrupt?

An interrupt is a signal from a device attached to a computer or from a program within


the computer that requires the operating system to stop and figure out what to do next.
The interrupt is a signal emitted by hardware or software when a process or an event needs immediate attention. It alerts the processor to a high-priority process requiring interruption of the current working process. In I/O devices, one of the bus control lines is dedicated for this purpose and is called the interrupt request line; the routine that services the request is the Interrupt Service Routine (ISR).

Interrupt systems work as follows: while the CPU is executing a program, a process may need an I/O operation; the request is queued and the CPU carries on with other processing. When the input/output (I/O) operation is ready, the I/O device interrupts to signal that its data is available, and the remaining processing continues. Without interrupts, the CPU would have to sit idle until each I/O operation completed; interrupts exist precisely to avoid this CPU waiting time.

When a device raises an interrupt at, say, instruction i, the processor first completes the execution of instruction i. It then loads the Program Counter (PC) with the address of the first instruction of the ISR. Before loading the Program Counter with this address, the address of the interrupted instruction is moved to a temporary location. Therefore, after handling the interrupt, the processor can continue with instruction i+1.

While the processor is handling the interrupts, it must inform the device that its request
has been recognized so that it stops sending the interrupt request signal. Also, saving the
registers so that the interrupted process can be restored in the future, increases the
delay between the time an interrupt is received and the start of the execution of the ISR.
This is called Interrupt Latency.

5.2 Types of Interrupts
Interrupts are classified into two types:
1. Software interrupts or
2. Hardware interrupts.

1. Software Interrupts
A sort of interrupt called a software interrupt is one that is produced by software or a
system as opposed to hardware. Traps and exceptions are other names for software
interruptions. They serve as a signal for the operating system or a system service to carry
out a certain function or respond to an error condition.

A software interrupt occurs when an application program terminates or requests certain


services from the OS. Usually, the processor requests a software interrupt when certain
conditions are met by executing a special instruction. This instruction invokes the
interrupt and functions like a subroutine call.
Software interrupts are commonly used when the system interacts with device drivers or
when a program requests OS services.
In some cases, software interrupts may be triggered unexpectedly by program execution
errors rather than by design. These interrupts are known as exceptions or traps.

The interrupt signal generated from internal devices and software programs need to
access any system call then software interrupts are present.

Generally, software interrupts occur as a result of specific instructions being used or of exceptions during operation. In our system, software interrupts often occur when system calls are made. The fork() system call deliberately generates a software interrupt, whereas division by zero throws an exception that also results in a software interrupt.

A particular instruction known as an “interrupt instruction” is used to create software


interrupts. When the interrupt instruction is used, the processor stops what it is doing
and switches over to a particular interrupt handler code. The interrupt handler routine
completes the required work or handles any errors before handing back control to the
interrupted application.

Software interrupts are divided into two types. They are as follows −

• Normal Interrupts − The interrupts that are caused deliberately by software instructions are called normal interrupts.
• Exception − An exception is an unplanned interruption that occurs while executing a program. For example, if during execution a value is divided by zero, the resulting interruption is called an exception.

2. Hardware interrupt
A hardware interrupt is an electronic signal from an external hardware device that
indicates it needs attention from the OS. One example of this is moving a mouse or
pressing a keyboard key. In these examples of interrupts, the processor must stop to read
the mouse position or keystroke at that instant.
In this type of interrupt, all the devices are connected to the Interrupt Request Line (IRL).
A single request line is used for all the n devices. To request an interrupt, a device closes
its associated switch. When a device requests an interrupt, the value of INTR is the
logical OR of the requests from individual devices.

Typically, a hardware IRQ has a value that associates it with a particular device. This
makes it possible for the processor to determine which device is requesting service by
raising the IRQ, and then provide service accordingly.

There are three types of hardware interrupts:

Maskable interrupts
In a processor, an internal interrupt mask register selectively enables and disables
hardware requests. When the mask bit is set, the interrupt is enabled. When it is clear,
the interrupt is disabled. Signals that are affected by the mask are maskable interrupts.
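The mask-register behaviour described above can be sketched with simple bit operations. This is a toy model, not any real processor's register layout; the IRQ numbers are invented for illustration:

```python
# Sketch of a maskable-interrupt mask register as a bit field.
# IRQ bit positions here are made up for illustration.
TIMER_IRQ, KEYBOARD_IRQ = 0, 1

def enable(mask, irq):
    return mask | (1 << irq)       # set the bit: interrupt enabled

def disable(mask, irq):
    return mask & ~(1 << irq)      # clear the bit: interrupt masked off

def is_enabled(mask, irq):
    return bool(mask & (1 << irq))

mask = 0
mask = enable(mask, TIMER_IRQ)
mask = enable(mask, KEYBOARD_IRQ)
mask = disable(mask, KEYBOARD_IRQ)
print(is_enabled(mask, TIMER_IRQ), is_enabled(mask, KEYBOARD_IRQ))
# True False -- timer interrupts pass, keyboard interrupts are masked
```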

Non-maskable interrupts

In some cases, the interrupt mask cannot be disabled so it does not affect some interrupt
signals. These are non-maskable interrupts and are usually high-priority events that
cannot be ignored.

Spurious interrupts
Also known as a phantom interrupt or ghost interrupt, a spurious interrupt is a type of
hardware interrupt for which no source can be found. These interrupts are difficult to
identify if a system misbehaves. If the ISR does not account for the possibility of such
interrupts, it may result in a system deadlock.

Sequences of Events Involved in Handling an IRQ(Interrupt Request)


• Devices raise an IRQ.
• The processor interrupts the program currently being executed.
• The device is informed that its request has been recognized and the device
deactivates the request signal.
• The requested action is performed.
• An interrupt is enabled and the interrupted program is resumed.

Flowchart of Interrupt Handling Mechanism


The flowchart of the interrupt handling mechanism follows these steps (figure omitted):

Step 1. Any time an interrupt is raised, it may be either an I/O interrupt or a system interrupt.
Step 2. The current state, comprising the registers and the program counter, is stored in order to conserve the state of the process.
Step 3. The current interrupt and its handler are identified through the interrupt vector table in the processor.
Step 4. Control now shifts to the interrupt handler, which is a function located in kernel space.
Step 5. The specific tasks essential to managing the interrupt are performed by the Interrupt Service Routine (ISR).
Step 6. The saved state of the interrupted process is restored so that it can resume from that point.
Step 7. Control then shifts back to the process that was pending, and normal execution continues.

These are the steps by which the ISR handles interrupts:

Step 1 − When an interrupt occurs, assume the processor is executing the i-th instruction, so the program counter points to the next, (i+1)-th, instruction.
Step 2 − When the interrupt occurs, the program counter value is stored on the process stack and the program counter is loaded with the address of the interrupt service routine.
Step 3 − Once the interrupt service routine completes, the address on the process stack is popped and placed back in the program counter.
Step 4 − Execution then resumes from the (i+1)-th instruction.
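The save-and-restore of the program counter in these steps can be modelled in a few lines. This is a toy simulation, not real processor behaviour, and the instruction numbers are illustrative:

```python
# Tiny model of the ISR steps: the return address is pushed on a stack,
# the ISR runs, and the saved address is popped so that execution
# resumes at instruction i+1.
stack, log = [], []
pc = 7                      # about to finish instruction 7 when IRQ arrives

def isr():
    log.append("ISR ran")   # stands in for the real service routine body

def handle_interrupt(next_pc):
    stack.append(next_pc)   # step 2: save return address on the stack
    isr()                   # ISR body executes
    return stack.pop()      # step 3: restore PC from the stack

pc = handle_interrupt(pc + 1)
print(pc, log)              # 8 ['ISR ran'] -- resumes at i+1
```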

Difference Between Hardware Interrupt and Software Interrupt

S/n | Hardware Interrupt | Software Interrupt
--- | ------------------ | ------------------
1. | An interrupt generated from an external device or hardware. | An interrupt generated by any internal system of the computer.
2. | It does not increment the program counter. | It increments the program counter.
3. | Can be invoked by some external device, such as a request to start an I/O or the occurrence of a hardware failure. | Can be invoked with the help of the INT instruction.
4. | It has lower priority than software interrupts. | It has the highest priority among all interrupts.
5. | Triggered by external hardware and considered one of the ways to communicate with outside peripherals. | Triggered by software and considered one of the ways to communicate with the kernel or to trigger system calls, especially during error or exception handling.
6. | It is an asynchronous event. | It is a synchronous event.
7. | Classified into two types: 1. Maskable interrupts. 2. Non-maskable interrupts. | Classified into two types: 1. Normal interrupts. 2. Exceptions.
8. | Keystroke depressions and mouse movements are examples of hardware interrupts. | All system calls are examples of software interrupts.

Benefits of Interrupts
• Real-time responsiveness: Interrupts permit a system to respond promptly to external events or signals, enabling real-time processing.
• Efficient resource usage: Interrupt-driven systems are more efficient than systems that depend on busy-waiting or polling strategies. Instead of continuously checking for an event, interrupts let the processor remain idle until an event occurs, conserving processing power and lowering energy consumption.
• Multitasking and concurrency: Interrupts support multitasking by allowing a processor to handle multiple tasks concurrently.
• Improved system throughput: By handling events asynchronously, interrupts allow a device to overlap computation with I/O operations or other tasks, maximizing system throughput and overall performance.

5.3 Masking Traps


In an operating system, "masking traps" refers to the process of temporarily disabling or
ignoring specific types of traps (exception conditions) that would normally trigger an
interrupt and transfer control to the kernel, essentially preventing the system from
responding to certain trap events until the mask is lifted.

What is a Trap in Operating System?


A trap in an operating system is a software-generated interruption brought on by an error
or exception that happens while a programme is being executed. When a trap occurs, the
CPU switches from user mode to kernel mode and jumps to the trap handler, a
predefined point in the operating system. Traps can happen for a number of reasons,
including division by zero, accessing erroneous memory addresses, carrying out
erroneous instructions, or other unanticipated occurrences that might force the
programme to crash or yield inaccurate results.

Traps can also be purposefully created by the software to ask the operating system for a
particular service, such reading from a file or allocating memory. The operating system's
trap handler is in charge of managing the trap and taking the proper action in accordance
with the trap's cause. For instance, if an unlawful instruction set off the trap, the trap
handler may terminate the programme and notify the user of the error. The trap handler may carry out the requested service and transfer control back to the programme if the
trap is brought on by a request for a particular service.

The use of interrupt vector

Interrupt vectors are addresses that inform the interrupt handler as to where to find the
ISR (interrupt service routine, also called interrupt service procedure). All interrupts are
assigned a number from 0 to 255, with each of these interrupts being associated with a
specific interrupt vector.

“Interrupt vector is used to determine the (starting address of the) interrupt routine of the
interrupting device quickly.”
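The idea of an interrupt vector can be sketched as a lookup table mapping interrupt numbers to their service routines. The vector numbers and handler names below are invented for illustration:

```python
# Sketch of an interrupt vector table: a mapping from interrupt number
# to its interrupt service routine (ISR). Numbers/handlers are invented.
def timer_isr():
    return "timer tick handled"

def keyboard_isr():
    return "keystroke handled"

vector_table = {0: timer_isr, 1: keyboard_isr}   # vector number -> ISR

def dispatch(irq_number):
    isr = vector_table[irq_number]   # vector lookup gives the routine's address
    return isr()                     # "jump" to the service routine

print(dispatch(1))  # keystroke handled
```

The fixed-size table is what makes the lookup fast: the interrupt number indexes directly to the starting address of the right routine.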

Difference between Traps and Interrupts in Operating System

The following table highlights the major differences between Traps and Interrupts:

S/n | Trap | Interrupt
--- | ---- | ---------
1. | The trap is a signal that user software sends to the operating system, telling it to carry out a particular activity right away. | The interrupt is a signal from hardware that tells the CPU that something has to be attended to right away.
2. | The procedure is synchronous. | The process is asynchronous.
3. | Every trap is an interrupt. | Not every interrupt is a trap.
4. | It can only originate from software. | It can originate from hardware devices as well as software.
5. | It is generated by a user programme instruction. | They are produced by hardware.
6. | A software interrupt is another name for it. | It is also known as a hardware interrupt.
7. | The operating system's specialised functionality is carried out, and the trap handler is given control. | It compels the processor to launch a particular interrupt handler programme.

Explain the use of masking in relation to interrupts

Levels Of Interrupt

The interrupt level defines the source of the interrupt and is often referred to as the
interrupt source. The interrupt priority defines which of a set of pending interrupts is
serviced first. There are two types of trigger mechanisms,

a) Level-triggered interrupts and


b) Edge-triggered interrupts

Level-triggered interrupts

This interrupt module generates an interrupt by holding the interrupt signal at a particular
active logic level. The signal gets negated once the processor commands it after the
device has been serviced. The processor samples the interrupt signal during each
instruction cycle and recognizes it when it's inserted during the sampling process. After
servicing a device, the processor may service other devices before exiting the ISR.

Level-triggered interrupts allow multiple devices to share a common signal using wired-
OR connections. However, it can be cumbersome for firmware to deal with them.

Edge-triggered interrupts

This module generates an interrupt only when it detects an asserting edge of the interrupt
source. Here, the device that wants to signal an interrupt creates a pulse which changes
the interrupt source level.

Edge-triggered interrupts provide a more flexible way to handle interrupts. They can be
serviced immediately, regardless of how the interrupt source behaves. Plus, they reduce
the complexity of firmware code and reduce the number of conditions required for the
firmware. The drawback is that if the pulse is too short to be detected, special hardware
may be required to detect and service the interruption.

5.4 Differentiate between S/O interrupt timers, Hardware error and programming
interrupt
A "S/O interrupt timer" refers to a software-based timer that triggers an interrupt when a
specific time period has elapsed, while a "hardware error" is a malfunction within a
physical component of a computer system, and a "programming interrupt" is an interrupt
triggered by an error or specific instruction within a program itself; essentially, S/O timers
are software-controlled, hardware errors are physical issues, and programming
interrupts are triggered by code errors.
Breakdown:
S/O Interrupt Timer:

• Function: Used to schedule events or time-based operations within a program by
setting a timer that, when reached, generates an interrupt signal to the processor,
allowing the program to execute specific code at that time.
• Trigger: Software instruction to start the timer and its programmed duration.
• Example: Implementing a delay function in a program, where the timer is set to
trigger after a specific time interval.
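A software timer of the kind described can be sketched with Python's standard `threading.Timer`: the handler runs once the programmed duration elapses, analogous to the timer's interrupt firing. The 0.1-second delay and handler name are illustrative:

```python
import threading

# Sketch of a software interrupt timer: after the programmed duration
# elapses, the handler runs (analogous to the timer interrupt firing).
done = threading.Event()

def on_timer():
    print("timer expired: handler runs")
    done.set()                       # signal the main flow

t = threading.Timer(0.1, on_timer)   # 0.1 s delay, illustrative value
t.start()                            # "arm" the timer; main flow continues
done.wait()                          # here we just wait for the handler
```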
Hardware Error:
• Function: An unexpected malfunctioning of a physical component like RAM,
CPU, or hard drive, which can potentially disrupt normal system operation.
• Trigger: Physical issues like faulty hardware, extreme temperatures, or power
surges.
• Example: A sudden system crash due to a failing RAM chip.

Programming Interrupt:
• Function: An interrupt triggered by a specific instruction or error condition within
a program, causing the processor to temporarily suspend its current execution
and jump to a dedicated error handling routine.
• Trigger: Invalid memory access, division by zero, or intentional "software
interrupt" instructions.
• Example: A program attempting to access memory that is not allocated, leading
to a "page fault" interrupt.

The key differences are:


• Origin:
S/O interrupt timers are initiated by software, hardware errors occur due to physical
issues, and programming interrupts are caused by code errors.
• Control:
S/O interrupt timers are intentionally triggered by the programmer, while hardware errors
are unpredictable and programming interrupts are triggered by specific code conditions.
• Response:
S/O interrupt timers allow for planned actions at specific times, hardware errors often
require system recovery mechanisms, and programming interrupts usually lead to error
handling routines.
CHAPTER SIX
UNDERSTAND OPERATING SYSTEM KERNEL

6.1 OS Kernel.
What is Kernel?
A kernel is the essential foundation of a computer's operating system (OS). It's the core
that provides basic services for all other parts of the OS. It's the main layer between the OS and underlying computer hardware, and it helps with tasks such as process
and memory management, inter-process communication, file system management,
device control and networking.
During normal system startup, a computer's basic input/output system, or BIOS,
completes a hardware bootstrap or initialization. It then runs a bootloader which loads
the kernel from a storage device -- such as a hard drive -- into a protected memory space.
Once the kernel is loaded into computer memory, the BIOS transfers control to the
kernel. It then loads other OS components to complete the system startup and make
control available to users through a desktop or other user interface.
If the kernel is damaged or can't load successfully, the computer won't be able to start
completely -- if at all. Service will be required to correct hardware damage or to restore
the OS kernel to a working version.

What is the purpose of the kernel?


In broad terms, an OS kernel performs the following three primary jobs:
1. Provides the interfaces needed for users and applications to interact with the
computer.
2. Launches and manages applications.
3. Manages the underlying system hardware devices.
In more granular terms, accomplishing these three kernel functions involves a range of
computer tasks, including the following:
• Loading and managing less-critical OS components, such as device drivers.
• Organizing and managing threads and the various processes spawned by running
applications.
• Scheduling which applications can access and use the kernel and supervising
that use when the scheduled time occurs.
• Deciding which nonprotected user memory space each application process uses.
• Handling conflicts and errors in memory allocation and management.
• Managing and optimizing hardware resources and dependencies, such as central
processing unit (CPU) and cache use, file system operation and network transport
mechanisms.
• Managing and accessing I/O devices such as keyboards, mice, disk drives, USB
ports, network adapters and displays.
• Handling device and application system calls using various mechanisms such as
hardware interrupts or device drivers.
Scheduling and management are central to the kernel's operation. Computer hardware
can only do one thing at a time. However, a computer's OS components and applications
can spawn dozens and even hundreds of processes that the computer must host. It's
impossible for all those processes to use the computer's hardware -- such as a memory
address or CPU instruction pipeline -- at the same time. The kernel is the central manager of these processes. It knows which hardware resources are available and which
processes need them. It then allocates time for each process to use those resources.

6.2 Different Types of Kernel


Types of Kernel
The kernel manages the system’s resources and facilitates communication between
hardware and software components. These kernels are of different types let’s discuss
each type along with its advantages and disadvantages:
1. Monolithic Kernel
It is a type of kernel in which all operating system services operate in kernel space. It has dependencies between system components, and its huge code base makes it complex.
Example:
Unix, Linux, Open VMS, XTS-400 etc.
Advantages
• Efficiency: Monolithic kernels are generally faster than other types of kernels
because they don’t have to switch between user and kernel modes for every
system call, which can cause overhead.
• Tight Integration: Since all the operating system services are running in kernel
space, they can communicate more efficiently with each other, making it easier
to implement complex functionalities and optimizations.
• Simplicity: Monolithic kernels are simpler to design, implement, and debug than
other types of kernels because they have a unified structure that makes it easier
to manage the code.
• Lower latency: Monolithic kernels have lower latency than other types of kernels
because system calls and interrupts can be handled directly by the kernel.
Disadvantages
• Stability Issues: Monolithic kernels can be less stable than other types of kernels
because any bug or security vulnerability in a kernel service can affect the entire
system.
• Security Vulnerabilities: Since all the operating system services are running in
kernel space, any security vulnerability in one of the services can compromise the
entire system.
• Maintenance Difficulties: Monolithic kernels can be more difficult to maintain
than other types of kernels because any change in one of the services can affect
the entire system.
• Limited Modularity: Monolithic kernels are less modular than other types of
kernels because all the operating system services are tightly integrated into the

kernel space. This makes it harder to add or remove functionality without affecting
the entire system.
2. Micro Kernel
A microkernel takes a minimalist approach: only essential services such as virtual
memory and thread scheduling run in kernel space, while the remaining services run
in user space. With fewer services in kernel space it is more stable, and it is often
used in small operating systems.
Example: Mach, L4, AmigaOS, Minix, K42 etc.
Advantages
• Reliability: Microkernel architecture is designed to be more reliable than
monolithic kernels. Since most of the operating system services run outside the
kernel space, any bug or security vulnerability in a service won’t affect the entire
system.
• Flexibility: Microkernel architecture is more flexible than monolithic kernels
because it allows different operating system services to be added or removed
without affecting the entire system.
• Modularity: Microkernel architecture is more modular than monolithic kernels
because each operating system service runs independently of the others. This
makes it easier to maintain and debug the system.
• Portability: Microkernel architecture is more portable than monolithic kernels
because most of the operating system services run outside the kernel space. This
makes it easier to port the operating system to different hardware architectures.

Disadvantages
• Performance: Microkernel architecture can be slower than monolithic kernels
because it requires more context switches between user space and kernel space.
• Complexity: Microkernel architecture can be more complex than monolithic
kernels because it requires more communication and synchronization
mechanisms between the different operating system services.
• Development Difficulty: Developing operating systems based on microkernel
architecture can be more difficult than developing monolithic kernels because it
requires more attention to detail in designing the communication and
synchronization mechanisms between the different services.
• Higher Resource Usage: Microkernel architecture can use more system
resources, such as memory and CPU, than monolithic kernels because it requires
more communication and synchronization mechanisms between the different
operating system services.

3. Hybrid Kernel
A hybrid kernel combines the monolithic kernel and the microkernel: it aims for the
speed and simple design of a monolithic kernel together with the modularity and
stability of a microkernel.

Example:
Windows NT, Netware, BeOS etc.

Advantages
• Performance: Hybrid kernels can offer better performance than microkernels
because they reduce the number of context switches required between user
space and kernel space.
• Reliability: Hybrid kernels can offer better reliability than monolithic kernels
because they isolate drivers and other kernel components in separate protection
domains.
• Flexibility: Hybrid kernels can offer better flexibility than monolithic kernels
because they allow different operating system services to be added or removed
without affecting the entire system.
• Compatibility: Hybrid kernels can be more compatible than microkernels
because they can support a wider range of device drivers.

Disadvantages
• Complexity: Hybrid kernels can be more complex than monolithic kernels
because they include both monolithic and microkernel components, which can
make the design and implementation more difficult.
• Security: Hybrid kernels can be less secure than microkernels because they have
a larger attack surface due to the inclusion of monolithic components.
• Maintenance: Hybrid kernels can be more difficult to maintain than microkernels
because they have a more complex design and implementation.
• Resource Usage: Hybrid kernels can use more system resources than
microkernels because they include both monolithic and microkernel
components.

4. Exo Kernel
An exokernel follows the end-to-end principle: it provides as few hardware
abstractions as possible and allocates physical resources directly to applications.

Example:
Nemesis, ExOS etc.

Advantages
• Flexibility: Exokernels offer the highest level of flexibility, allowing developers to
customize and optimize the operating system for their specific application needs.
• Performance: Exokernels are designed to provide better performance than
traditional kernels because they eliminate unnecessary abstractions and allow
applications to directly access hardware resources.

• Security: Exokernels provide better security than traditional kernels because
they allow for fine-grained control over the allocation of system resources, such
as memory and CPU time.
• Modularity: Exokernels are highly modular, allowing for the easy addition or
removal of operating system services.

Disadvantages
• Complexity: Exokernels can be more complex to develop than traditional kernels
because they require greater attention to detail and careful consideration of
system resource allocation.
• Development Difficulty: Developing applications for exokernels can be more
difficult than for traditional kernels because applications must be written to
directly access hardware resources.
• Limited Support: Exokernels are still an emerging technology and may not have
the same level of support and resources as traditional kernels.
• Debugging Difficulty: Debugging applications and operating system services on
exokernels can be more difficult than on traditional kernels because of the direct
access to hardware resources.

5. Nano Kernel
A nanokernel offers hardware abstraction but almost no system services. Since a
microkernel also provides minimal system services, the terms microkernel and
nanokernel have become nearly analogous.

Example: EROS etc.

Advantages
• Small Size: Nanokernels are designed to be extremely small, providing only the
most essential functions needed to run the system. This can make them more
efficient and faster than other kernel types.
• High Modularity: Nanokernels are highly modular, allowing for the easy addition
or removal of operating system services, making them more flexible and
customizable than traditional monolithic kernels.
• Security: Nanokernels provide better security than traditional kernels because
they have a smaller attack surface and a reduced risk of errors or bugs in the code.
• Portability: Nanokernels are designed to be highly portable, allowing them to run
on a wide range of hardware architectures.

Disadvantages

• Limited Functionality: Nanokernels provide only the most essential functions,
making them unsuitable for more complex applications that require a broader
range of services.
• Complexity: Because nanokernels provide only essential functionality, they can
be more complex to develop and maintain than other kernel types.
• Performance: While nanokernels are designed for efficiency, their minimalist
approach may not be able to provide the same level of performance as other
kernel types in certain situations.
• Compatibility: Because of their minimalist design, nanokernels may not be
compatible with all hardware and software configurations, limiting their practical
use in certain contexts.

Functions of Kernel
The kernel is responsible for various critical functions that ensure the smooth operation
of the computer system. These functions include:
1. Process Management
• Scheduling and execution of processes.
• Context switching between processes.
• Process creation and termination.
2. Memory Management
• Allocation and deallocation of memory space.
• Managing virtual memory.
• Handling memory protection and sharing.
3. Device Management
• Managing input/output devices.
• Providing a unified interface for hardware devices.
• Handling device driver communication.
4. File System Management
• Managing file operations and storage.
• Handling file system mounting and unmounting.
• Providing a file system interface to applications.
5. Resource Management
• Managing system resources (CPU time, disk space, network bandwidth)
• Allocating and deallocating resources as needed
• Monitoring resource usage and enforcing resource limits
6. Security and Access Control
• Enforcing access control policies.
• Managing user permissions and authentication.
• Ensuring system security and integrity.
7. Inter-Process Communication
• Facilitating communication between processes.

• Providing mechanisms like message passing and shared memory.
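A concrete everyday example of inter-process communication is the shell pipeline, where the kernel connects the output of one process to the input of another. A minimal sketch using standard POSIX tools (the FIFO path is a temporary file created only for this illustration):

```shell
# Anonymous pipe: the kernel buffers data between two separate processes.
printf 'hello kernel\n' | tr 'a-z' 'A-Z'    # prints: HELLO KERNEL

# Named pipe (FIFO): the same mechanism, visible as a file system object.
fifo=$(mktemp -u)            # unused temporary path for the FIFO
mkfifo "$fifo"
printf 'ping\n' > "$fifo" &  # writer process blocks until a reader opens the FIFO
read -r msg < "$fifo"        # reader receives the message through the kernel
echo "received: $msg"        # prints: received: ping
rm -f "$fifo"
```

In both cases, neither process accesses the other's memory directly; all data passes through kernel-managed buffers.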

Working of Kernel
• A kernel loads first into memory when an operating system is loaded and remains
in memory until the operating system is shut down again. It is responsible for
various tasks such as disk management, task management, and memory
management.
• The kernel has a process table that keeps track of all active processes.
• The process table contains a per-process region table whose entry points to
entries in the region table.
• The kernel loads an executable file into memory during the 'exec' system call.
• It decides which process should be allocated to the processor to execute and
which process should be kept in the main memory to execute. It basically acts as
an interface between user applications and hardware. The major aim of the kernel
is to manage communication between software i.e. user-level applications and
hardware i.e., CPU and disk memory.
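On a Unix-like system, some of this activity can be observed directly from the shell (a sketch; the /proc pseudo-filesystem is Linux-specific):

```shell
uname -s -r              # kernel name and release, e.g. "Linux 6.1.0"
ps -e | head -5          # a few entries from the kernel's table of active processes
# On Linux, /proc exposes per-process kernel data structures:
if [ -d "/proc/$$" ]; then
    ls "/proc/$$" | head -5   # $$ is this shell's own PID
fi
```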

Objectives of Kernel
• To establish communication between user-level applications and hardware.
• To decide the state of incoming processes.
• To control disk management.
• To control memory management.
• To control task management.

Conclusion
Kernels are the heart of operating systems, managing how hardware and software
communicate and ensuring everything runs smoothly. Different types of kernels—like
monolithic, microkernels, hybrid kernels, and others—offer various ways to balance
performance, flexibility, and ease of maintenance. Understanding these kernel types
helps us appreciate how operating systems work and how they handle the complex tasks
required to keep our computers and devices running efficiently. Each type of kernel has
its own strengths and weaknesses, but all play a crucial role in the world of computing.

QUESTION: What are the main functions of a kernel?


ANSWER: The main functions of a kernel include process management, memory
management, device management, and system calls handling.
QUESTION: What is a monolithic kernel?

ANSWER: A monolithic kernel is a single large process running entirely in a single address
space, containing all core services like process management, memory management, file
systems, and device drivers.
QUESTION: What is a microkernel?
ANSWER: A microkernel is a minimalistic kernel that includes only essential functions
such as inter-process communication and basic memory management, with other
services running in user space.
QUESTION: What is a hybrid kernel?
ANSWER: A hybrid kernel combines aspects of both monolithic and microkernels,
running some services in kernel space and others in user space, to balance performance
and modularity.

6.3 Differences Between OS And Kernel


Comparing the Operating System and the Kernel

S/N  Operating System | Kernel

1.   An operating system is a key component that manages computer software and
     hardware resources. | The kernel is the heart of the OS that translates user
     commands into machine language.

2.   It functions as system software. | It is system software that forms a crucial
     part of the operating system.

3.   One primary function of an operating system is to provide security. | The
     kernel's main function is to manage memory, disk, and tasks.

4.   It serves as an interface between hardware and user. | It serves as an
     interface between the application and hardware.

5.   A computer cannot operate without an operating system. | An operating system
     cannot function without a kernel.

6.   Single-user OS, multiuser OS, multiprocessor OS, real-time OS, and distributed
     OS are types of operating systems. | Monolithic and microkernels are types of
     kernels.

7.   It is the first program to run when the computer boots up. | It is the first
     program to start when the operating system is launched.

6.4 Components of an Operating System
1) Kernel: A kernel is a vital computer program and the core element of an operating
system. It has complete control over the system and translates user requests into
machine instructions.

2) Process Execution: Process execution is the act of running a program as a process:
the operating system loads the program into memory, allocates it the resources it
needs, and schedules it on the CPU until it completes.

3) Interrupt: An interrupt is a signal from a device attached to a computer, or from
a program within the computer, that requires the operating system to stop and
decide what to do next.
For example, while the CPU is executing a program, a pending input/output (I/O)
request can be placed in a queue; the CPU carries on with other processing and is
notified by an interrupt when the I/O operation is ready.

4) memory management: Memory management is the process of managing a


computer's main memory so that the operating system (OS) and other programs
have the memory they need to run.

5) multitasking: Multitasking in an OS enables a user to execute multiple computer
tasks at the same time. The tasks share common processing resources, such as the
CPU and main memory, with the OS rapidly switching the processor among them.

6) networking: Networking in an operating system is the process of connecting


devices to share resources and exchange data. It involves the use of a network
operating system (NOS) to manage network resources, security, and
performance.

7) security: Security refers to providing a protection system for computer system
resources such as the CPU, memory, disk, software programs and, most importantly,
the data/information stored in the computer system. If a computer program is run by
an unauthorized user, he/she may cause severe damage to the computer or the data
stored in it. A computer system must therefore be protected against unauthorized
access, malicious access to system memory, viruses, worms, etc.

8) user interface: A user interface (UI) is a program that allows a user to interact with
an operating system (OS). It's the way a user sees and uses a device to give it
instructions or receive information.

CHAPTER SEVEN
KNOW THE DIFFERENT OPERATING SYSTEM COMMANDS
A shell script is a text file that contains a sequence of commands for a Unix-based
operating system (OS). It's called a shell script because it combines a sequence of
commands in a file that would otherwise have to be typed in one at a time into a single
script. The shell is the OS's command-line interface (CLI) and interpreter for the set of
commands that are used to communicate with the system.

A shell script is usually created to save time for command sequences that a user needs
to use repeatedly. Like other programs, the shell script can contain parameters,
comments and subcommands that the shell must follow. Users simply enter the file
name on a command line to initiate the sequence of commands in the shell script.
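For instance, a short script (saved under a hypothetical name such as backup.sh) can bundle a backup sequence that would otherwise be typed command by command:

```shell
#!/bin/sh
# backup.sh -- archive a directory into a timestamped tarball (illustrative sketch)
src=${1:-demo}                  # directory to back up; defaults to "demo" for this sketch
mkdir -p "$src"                 # make sure the target directory exists
stamp=$(date +%Y%m%d)           # today's date, e.g. 20240816
archive="backup-$stamp.tar.gz"
tar -czf "$archive" "$src"      # create the compressed archive
echo "created $archive"
```

Running `sh backup.sh Documents` would then archive the Documents directory in a single step instead of four separate commands.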

7.1 use of shell in writing commands


The shell is the Linux command line interpreter. It provides an interface between the user
and the kernel and executes programs called commands. For example, if a user enters
ls then the shell executes the ls command.

Shells provide a way for you to communicate with the operating system. This
communication is carried out either interactively (input from the keyboard is acted upon
immediately) or as a shell script. A shell script is a sequence of shell and operating
system commands that is stored in a file.

7.2 commands for navigating OS

Basic commands for navigating operating systems include:

"cd" (change directory) to move between folders,


"ls" (list) to view files and folders within a directory,
"mkdir" (make directory) to create new folders,
"rm" (remove) to delete files,
".." (parent directory) to navigate one level up,
"." (current directory) to stay in the current location and
“pwd” (print working directory) to display complete path to your current directory. It's a
straightforward way to confirm your present location in the file system hierarchy.

These commands can be used by opening your terminal or command-line interface,
typing the command, and pressing Enter. The commands can vary slightly depending on
the specific operating system (such as Windows, Linux or macOS) and the command
prompt used.

Key points about navigation commands:


• "cd":
• "cd directory_name": Navigate to a specific directory named
"directory_name".
• "cd ..": Move one level up in the directory hierarchy.
• "cd ~": Go to the home directory.

• "ls":
• "ls -l": Display a detailed listing of files including permissions, size, and
owner information.
• "ls -a": Show hidden files.

• "mkdir":
• "mkdir new_folder_name": Create a new folder called
"new_folder_name".

• "rm":
• "rm file_name": Delete a file named "file_name".
• "rm -r directory_name": Recursively delete a directory and its contents.
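Put together, a typical navigation session looks like this (the directory names are just examples):

```shell
pwd                       # print the current location, e.g. /home/user
mkdir -p projects/notes   # create a nested directory (-p creates parents as needed)
cd projects/notes         # move into the new directory
pwd                       # now ends in /projects/notes
cd ../..                  # go two levels back up
ls -a projects            # list contents, including hidden entries
```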

File System Navigation Unix Command


Command Description Example
cd Changes the current working directory. cd Documents
ls Lists files and directories in the current ls
directory.
pwd Prints the current working directory. pwd
mkdir Creates a new directory. mkdir new_folder
rmdir Removes an empty directory. rmdir empty_folder
mv        Moves files or directories.                    mv file1.txt Documents/

7.3 commands to manipulate files and directories (mkdir, cp, mv, rm, ln, rmdir, type, etc.)

File Manipulation Unix Command

Command   Description                                                          Example
touch     Creates an empty file or updates the access and modification times.  touch new_file.txt
cp        Copies files or directories.                                         cp file1.txt file2.txt
mv        Moves files or directories.                                          mv file1.txt Documents/
rm        Removes files or directories.                                        rm old_file.txt
chmod     Changes the permissions of a file or directory.                      chmod 644 file.txt
chown     Changes the owner and group of a file or directory.                  chown user:group file.txt
ln        Creates links between files.                                         ln -s target_file symlink
cat       Concatenates files and displays their contents.                      cat file1.txt file2.txt
head      Displays the first few lines of a file.                              head file.txt
tail      Displays the last few lines of a file.                               tail file.txt
more      Displays the contents of a file page by page.                        more file.txt
less      Displays the contents of a file with advanced navigation features.   less file.txt
diff      Compares files line by line.                                         diff file1.txt file2.txt
patch     Applies a diff file to update a target file.                         patch file.txt < changes.diff
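These commands combine naturally; for example (the file names are arbitrary):

```shell
touch notes.txt            # create an empty file
cp notes.txt copy.txt      # duplicate it
mv copy.txt archive.txt    # rename (move) the duplicate
ln -s notes.txt shortcut   # create a symbolic link pointing at notes.txt
chmod 644 notes.txt        # owner may read/write; everyone else read-only
rm archive.txt shortcut    # remove the extra files again
```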

7.4 Other commands for exploring and managing the OS


Process Management Unix Command
Command   Description                                                                       Example
ps        Displays information about active processes, including their status and IDs.      ps aux
top       Displays a dynamic real-time view of system processes and their resource usage.   top
kill      Terminates processes using their process IDs (PIDs).                              kill <pid>
pkill     Sends signals to processes based on name or other attributes.                     pkill -9 firefox
killall   Terminates processes by name.                                                     killall -9 firefox
renice    Changes the priority of running processes.                                        renice -n 10 <pid>
nice      Runs a command with modified scheduling priority.                                 nice -n 10 command
pstree    Displays running processes as a tree.                                             pstree
pgrep     Searches for processes by name or other attributes.                               pgrep firefox
jobs      Lists active jobs and their status in the current shell session.                  jobs
bg        Puts a job in the background.                                                     bg <job_id>
fg        Brings a background job to the foreground.                                        fg <job_id>
nohup     Runs a command immune to hangups, with output to a specified file.                nohup command &
disown    Removes jobs from the shell's job table, allowing them to run independently.      disown <job_id>
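A short session tying several of these together (sleep stands in for any long-running program):

```shell
sleep 30 &        # start a long-running command as a background job
pid=$!            # $! holds the PID of the most recent background process
jobs              # list the shell's active jobs
ps -p "$pid"      # show that process's entry in the process table
kill "$pid"       # send SIGTERM to terminate it
```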

Text Processing Unix Command


Command   Description                                               Example
grep      Searches for patterns in text files.                      grep "error" logfile.txt
sed       Processes and transforms text streams.                    sed 's/old_string/new_string/g' file.txt
awk       Processes and analyzes text files using a pattern         awk '{print $1, $3}' data.csv
          scanning and processing language.
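A small worked example with a throwaway data file (the names and numbers are made up):

```shell
printf 'alice 90\nbob 75\nalice 60\n' > scores.txt
grep 'alice' scores.txt                                # prints the two lines containing "alice"
sed 's/alice/ALICE/' scores.txt                        # substitutes on every line
awk '{ total += $2 } END { print total }' scores.txt   # sums column 2: prints 225
rm scores.txt
```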

Network Communication Unix Command


Command      Description                                                                Example
ping         Tests connectivity with another host using ICMP echo requests.             ping google.com
traceroute   Traces the route that packets take to reach a destination.                 traceroute google.com
nslookup     Queries DNS servers for domain name resolution and IP address              nslookup google.com
             information.
dig          Performs DNS queries, providing detailed information about DNS records.    dig google.com
host         Performs DNS lookups, displaying domain name to IP address resolution.     host google.com
whois        Retrieves information about domain registration and ownership.             whois google.com
ssh          Provides secure remote access to a system.                                 ssh username@hostname
scp          Securely copies files between hosts over a network.                        scp file.txt username@hostname:/path/
ftp          Transfers files between hosts using the File Transfer Protocol (FTP).      ftp hostname
telnet       Establishes interactive text-based communication with a remote host.       telnet hostname
netstat      Displays network connections, routing tables, interface statistics,        netstat -tuln
             masquerade connections, and multicast memberships.
ifconfig     Displays or configures network interfaces and their settings.              ifconfig
iwconfig     Configures wireless network interfaces.                                    iwconfig wlan0
route        Displays or modifies the IP routing table.                                 route -n
arp          Displays or modifies the Address Resolution Protocol (ARP) cache.          arp -a
ss           Displays socket statistics.                                                ss -tuln
hostname     Displays or sets the system's hostname.                                    hostname
mtr          Combines the functionality of ping and traceroute, providing detailed      mtr google.com
             network diagnostic information.

System Administration Unix Command


Command      Description                                                                Example
df           Displays disk space usage.                                                 df -h
du           Displays disk usage of files and directories.                              du -sh /path/to/directory
crontab -e   Manages cron jobs, which are scheduled tasks that run at predefined        crontab -e
             times or intervals.
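For example, checking how much space a directory and its filesystem use (the directory here is created just for the demonstration):

```shell
df -h .                             # free space on the filesystem holding "."
mkdir -p demo_dir
echo "some data" > demo_dir/file.txt
du -sh demo_dir                     # total size of demo_dir, human-readable
rm -rf demo_dir
```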

Text Editors in Unix


Text Editor   Description and usage

Vi / Vim      Vi (and its extended version, Vim) is a highly configurable, powerful,
              and feature-rich text editor with separate modes for command-line
              operations and text editing.
              Open a file: vim filename. Save and exit: press Esc, then type :wq and
              press Enter.

Emacs         Emacs is a versatile text editor with extensive customization
              capabilities and support for various programming languages.
              Open a file: emacs filename. Save: press Ctrl + X, then Ctrl + S.
              Exit: press Ctrl + X, then Ctrl + C.

Nano          Nano is a simple and user-friendly text editor designed for ease of use
              and accessibility.
              Open a file: nano filename. Save: press Ctrl + O. Exit: press Ctrl + X.

Ed            Ed is a standard Unix text editor that operates in line-oriented mode,
              making it suitable for batch processing and automation tasks.
              Open a file: ed filename. Quit: type q and press Enter.

Jed           Jed is a lightweight yet powerful text editor that provides an intuitive
              interface and support for various programming languages.
              Open a file: jed filename. Save and exit: press Alt + X, then type exit
              and press Enter.

Conclusion
In conclusion, Unix commands serve as a fundamental toolkit for navigating and
managing the Unix operating system, which has evolved from its inception in the 1960s
to become one of the most widely used OS platforms across various domains including
personal computing, servers, and mobile devices. From its origins at Bell Labs with
developers Dennis M. Ritchie and Ken Thompson to the birth of the C programming
language and the subsequent emergence of Unix-like systems such as Linux, the Unix
ecosystem has significantly shaped the computing landscape. Understanding basic Unix
commands is essential for users to efficiently manipulate files, manage processes,
configure networks, and perform system administration tasks, thereby empowering
them to leverage the full potential of Unix-based systems for diverse computing needs.

Reference
• GeeksforGeeks, "Difference Between Shell and Kernel",
  https://www.geeksforgeeks.org/difference-between-shell-and-kernel/
• Ben Lutkevich, "User mode vs. kernel mode: OSes explained", TechTarget,
  16 Aug 2024.
• GeeksforGeeks, "Resource Management in Operating System", last updated
  25 Jun 2023.
• Anshuman Singh, "Process Management in Operating System", 2 May 2024.
