Cte 314 - Os LN
CHAPTER ONE
OPERATING SYSTEMS
An operating system (OS) is system software that manages all the resources of the
computing device.
It is a program that manages a computer’s resources, especially the allocation of those
resources among other programs. Typical resources include the central processing unit
(CPU), computer memory, file storage, input/output (I/O) devices, and network
connections.
• Acts as an interface between the software and different parts of the computer or
the computer hardware.
• Manages the overall resources and operations of the computer.
• Controls and monitors the execution of all other programs that reside in the
computer, which also includes application programs and other system software
of the computer.
• Examples of Operating Systems are Windows, Linux, macOS, Android, iOS, etc.
Management tasks include scheduling resource use to avoid conflicts and interference
between programs. Unlike most programs, which complete a task and terminate, an
operating system runs indefinitely and terminates only when the computer is turned off.
Modern multiprocessing operating systems allow many processes to be active, where
each process is a “thread” of computation being used to execute a program. One form
of multiprocessing is called time-sharing, which lets many users share computer access
by rapidly switching between them. Time-sharing must guard against interference
between users’ programs, and most systems use virtual memory, in which the memory,
or “address space,” used by a program may reside in secondary memory (such as on a
magnetic hard disk drive) when not in immediate use, to be swapped back to occupy the
faster main computer memory on demand. This virtual memory both increases the
address space available to a program and helps to prevent programs from interfering
with each other, but it requires careful control by the operating system and a set of
allocation tables to keep track of memory use. Perhaps the most delicate and critical
task for a modern operating system is allocation of the CPU; each process is allowed to
use the CPU for a limited time, which may be a fraction of a second, and then must give
up control and become suspended until its next turn. Switching between processes must
itself use the CPU while protecting all data of the processes.
The first digital computers had no operating systems. They ran one program at a time,
which had command of all system resources, and a human operator would provide any
special resources needed. The first operating systems were developed in the mid-1950s.
These were small “supervisor programs” that provided basic I/O operations (such as
controlling punch card readers and printers) and kept accounts of CPU usage for billing.
Supervisor programs also provided multiprogramming capabilities to enable several
programs to run at once. This was particularly important so that these early multimillion-
dollar machines would not be idle during slow I/O operations.
Computers acquired more powerful operating systems in the 1960s with the emergence
of time-sharing, which required a system to manage multiple users sharing CPU time and
terminals. Two early time-sharing systems were CTSS (Compatible Time Sharing
System), developed at the Massachusetts Institute of Technology, and the Dartmouth
College Basic System. Other multiprogrammed
systems included Atlas, at the University of Manchester, England, and IBM’s OS/360,
probably the most complex software package of the 1960s. After 1972 the Multics
system for General Electric Co.’s GE 645 computer (and later for Honeywell Inc.’s
computers) became the most sophisticated system, with most of the multiprogramming
and time-sharing capabilities that later became standard.
The minicomputers of the 1970s had limited memory and required smaller operating
systems. The most important operating system of that period was UNIX, developed by
AT&T for large minicomputers as a simpler alternative to Multics. It became widely used
in the 1980s, in part because it was free to universities and in part because it was
designed with a set of tools that were powerful in the hands of skilled programmers. More
recently, Linux, an open-source version of UNIX developed in part by a group led by
Finnish computer science student Linus Torvalds and in part by a group led by American
computer programmer Richard Stallman, has become popular on personal computers
as well as on larger computers.
In addition to such general-purpose systems, special-purpose operating systems run on
small computers that control assembly lines, aircraft, and even home appliances. They
are real-time systems, designed to provide rapid response to sensors and to use their
inputs to control machinery. Operating systems have also been developed for mobile
devices such as smartphones and tablets. Apple Inc.’s iOS, which runs on iPhones and
iPads, and Google Inc.’s Android are two prominent mobile operating systems.
From the standpoint of a user or an application program, an operating system provides
services. Some of these are simple user commands like “dir”—show the files on a disk—
while others are low-level “system calls” that a graphics program might use to display an
image. In either case the operating system provides appropriate access to its objects,
the tables of disk locations in one case and the routines to transfer data to the screen in
the other. Some of its routines, those that manage the CPU and memory, are generally
accessible only to other portions of the operating system.
Contemporary operating systems for personal computers commonly provide a graphical
user interface (GUI). The GUI may be an intrinsic part of the system, as in the older
versions of Apple’s Mac OS and Microsoft Corporation’s Windows OS; in others it is a set
of programs that depend on an underlying system, as in the X Window system for UNIX
and Apple’s Mac OS X.
Operating systems also provide network services and file-sharing capabilities—even the
ability to share resources between systems of different types, such as Windows and
UNIX. Such sharing has become feasible through the introduction of network protocols
(communication rules) such as the Internet’s TCP/IP.
1.2 Evolution of OS
An operating system is a type of software that acts as an interface between the user and
the hardware. It is responsible for handling various critical functions of the computer and
for utilizing resources efficiently, which is why the operating system is also known as a
resource manager. The operating system is sometimes compared to a government: just
as a government has authority over everything in its territory, the operating system has
authority over all the system's resources. Tasks handled by the OS include file
management, task management, garbage management, memory management, process
management, disk management, I/O management, peripherals management, etc.
• 1950s: Batch Processing
o The first operating system was introduced in 1956. It was a batch
processing system, GM-NAA I/O (1956), that automated job handling.
• 1960s: Multiprogramming and Timesharing
o Introduction of multiprogramming to utilize CPU efficiently.
o Timesharing systems, like CTSS (1961) and Multics (1969), allowed
multiple users to interact with a single system.
• 1970s: Unix and Personal Computers
o Unix (1971) revolutionized OS design with simplicity, portability, and
multitasking.
o Personal computers emerged, leading to simpler OSs like CP/M (1974) and
PC-DOS (1981).
• 1980s: GUI and Networking
o Graphical User Interfaces (GUIs) gained popularity with systems like Apple
Macintosh (1984) and Microsoft Windows (1985).
o Networking features, like TCP/IP in Unix, became essential.
• 1990s: Linux and Advanced GUIs
o Linux (1991) introduced open-source development.
o Windows and Mac OS refined GUIs and gained widespread adoption.
• 2000s-Present: Mobility and Cloud
o Mobile OSs like iOS (2007) and Android (2008) dominate.
o Cloud-based and virtualization technologies reshape computing, with OSs
like Windows Server and Linux driving innovation.
• AI Integration – (Ongoing)
With the growth of artificial intelligence over time, operating systems have
integrated AI technologies such as Siri, Google Assistant, and Alexa, becoming
more powerful and efficient in many ways. Combined with the operating
system, these AI features enable entirely new capabilities such as voice
commands, predictive text, and personalized recommendations.
Note: The timeline above shows how the OS evolved over time by gaining new features,
but it does not mean that only new-generation OSs are in use and older OSs are not;
depending on the need, all of these OSs are still used in the software industry.
Operating systems have evolved from basic program execution to complex ecosystems
supporting diverse devices and users. Core responsibilities of a modern OS include:
• Memory management
• Process management
• File management
• Device management
• Deadlock prevention
• Input/output device management
History According to Types of Operating Systems
Operating systems have evolved over the years, going through several changes before
reaching their current form.
1. No OS – (1940s and earlier)
Before the 1940s there was no operating system at all. Computers lacked any OS, so
users had to manually enter the instructions for each task in machine language (a 0/1-
based language). Implementing even a simple task was hard, time-consuming, and far
from user-friendly, because programming in machine language required a depth of
understanding that few people had.
In the 1980s, the popularity of computer networks was at its peak, and a special type of
operating system was needed to manage network communication. Operating systems
like Novell NetWare and Windows NT were developed to manage network
communication, giving users the ability to work in a collaborative environment and
making file sharing and remote access very easy.
• Operating systems consume system resources, including CPU, memory, and
storage, which can affect the performance of other applications.
• Portability: Many OSs are designed to run on different hardware platforms with
minimal modifications, making them portable across various systems.
Operating Systems play a critical role in various aspects of computing:
• The operating system distributes resources to programs, including memory,
CPUs, and storage devices. This helps to ensure their smooth functioning.
• It manages the execution of software programs. It prioritizes tasks and ensures
applications do not conflict with one another.
• Operating systems come with security measures. These include user
authentication and access control. They safeguard the system from unwanted
access and harmful applications.
• It controls peripheral devices such as printers, scanners, and network adapters.
These devices allow communication and data transmission.
• The operating system provides a user interface (UI) via which users may interact
with their computers.
• The UI allows users to launch apps, manage files and directories, and access
system services.
1.4 Concept of OS
The concept of an OS deals with the following: processes, files, system calls, the shell,
the kernel, etc.
A computer file is a medium used for saving and managing data in the computer
system. The data stored in the computer system is entirely in digital format, and various
types of files help us store different kinds of data.
File systems are a crucial part of any operating system, providing a structured way to
store, organize, and manage data on storage devices such as hard drives, SSDs, and USB
drives. Essentially, a file system acts as a bridge between the operating system and the
physical storage hardware, allowing users and applications to create, read, update, and
delete files in an organized and efficient manner.
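To make this concrete, here is a minimal sketch (assuming a POSIX environment; the file name demo.txt is illustrative) of a program using OS file services to create, write, and read back a file:

```c
#include <fcntl.h>   /* open and its O_* flags */
#include <stdio.h>
#include <string.h>
#include <unistd.h>  /* read, write, close */

int main(void) {
    const char *msg = "hello, file system\n";

    /* Ask the OS to create (or truncate) a file and hand back a descriptor. */
    int fd = open("demo.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }
    write(fd, msg, strlen(msg));   /* the file system maps this to storage blocks */
    close(fd);

    /* Read the data back through the same file-system interface. */
    char buf[64];
    fd = open("demo.txt", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }
    ssize_t n = read(fd, buf, sizeof buf - 1);
    if (n > 0) { buf[n] = '\0'; printf("%s", buf); }
    close(fd);
    return 0;
}
```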
A System Call is an interface between a program running in user space and the operating
system (OS). Application programs use system calls to request services and
functionalities from the OS's kernel. This mechanism allows the program to call for a
service, like reading from a file, without accessing system resources directly.
System calls provide the services of the operating system to user programs via the
Application Program Interface (API). They provide an interface between a process and
the operating system, allowing user-level processes to request services of the operating
system. System calls are the only entry points into the kernel; all programs needing
resources must use system calls.
When a program invokes a system call, the execution context switches from user to
kernel mode, allowing the system to access hardware and perform the required
operations safely. After the operation is completed, the control returns to user mode,
and the program continues its execution.
This layered approach facilitated by system calls:
• Ensures that hardware resources are isolated from user space processes.
• Prevents direct access to the kernel or hardware memory.
• Allows application code to run across different hardware architectures.
1. System Call Request. The application requests a system call by invoking its
corresponding function. For instance, the program might use the read() function to read
data from a file.
2. Context Switch to Kernel Space. A software interrupt or special instruction is used to
trigger a context switch and transition from the user mode to the kernel mode.
3. System Call Identified. The system uses an index to identify the system call and
address the corresponding kernel function.
4. Kernel Function Executed. The kernel function corresponding to the system call is
executed. For example, reading data from a file.
5. System Prepares Return Values. After the kernel function completes its operation,
any return values or results are prepared for the user application.
6. Context Switch to User Space. The execution context is switched back from kernel
mode to user mode.
7. Resume Application. The application resumes its execution from where it left off, now
with the results or effects of the system call.
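As an illustration of these steps, the following sketch (assuming Linux with glibc, where the syscall() wrapper and the SYS_write call number exist) makes the same kernel entry twice: once through the ordinary write() library wrapper, and once by invoking the system call directly by number:

```c
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>   /* SYS_write: the system call's index */
#include <unistd.h>        /* write, syscall */

int main(void) {
    const char *msg = "via the write() wrapper\n";
    /* Steps 1-2: the wrapper places the arguments and call number and
       traps into kernel mode. Steps 3-5: the kernel locates and runs
       sys_write, then prepares the return value. Steps 6-7: control
       returns to user mode and execution resumes here. */
    write(STDOUT_FILENO, msg, strlen(msg));

    const char *raw = "via syscall(SYS_write, ...)\n";
    /* The same kernel entry, invoked explicitly by call number. */
    syscall(SYS_write, STDOUT_FILENO, raw, strlen(raw));
    return 0;
}
```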
5. Communication
System calls support inter-process communication by:
• Sending or receiving messages between processes.
• Synchronizing actions between user processes.
• Establishing shared memory regions for inter-process communication.
• Networking via sockets.
6. Security and Access Control
System calls contribute to security and access control by:
• Determining which processes or users get access to specific resources and who
can read, write, and execute resources.
• Facilitating user authentication procedures.
The list below pairs common Linux/UNIX system calls with their closest Windows
equivalents.
File Management
• close() – Close an open file (or device). Windows: CloseHandle() – Close an open
object handle.
• read() – Read from a file (or device). Windows: ReadFile() – Read data from a file or
input device.
• write() – Write to a file (or device). Windows: WriteFile() – Write data to a file or
output device.
• lseek() – Change the read/write location in a file. Windows: SetFilePointer() – Set
the position of the file pointer.
• unlink() – Delete a file. Windows: DeleteFile() – Delete an existing file.
• rename() – Rename a file. Windows: MoveFile() – Move or rename a file.
Directory Management
• mkdir() – Create a new directory. Windows: CreateDirectory() – Create a new
directory.
• rmdir() – Remove a directory. Windows: RemoveDirectory() – Remove an existing
directory.
• chdir() – Change the current directory. Windows: SetCurrentDirectory() – Change
the current directory.
• stat() – Get file status. Windows: GetFileAttributesEx() – Get extended file
attributes.
• fstat() – Get the status of an open file. Windows: GetFileInformationByHandle() –
Get file information using a file handle.
• link() – Create a link to a file. Windows: CreateHardLink() – Create a hard link to an
existing file.
• symlink() – Create a symbolic link to a file. Windows: CreateSymbolicLink() – Create
a symbolic link.
Memory Management
• brk() or sbrk() – Increase/decrease the program's data space. Windows:
VirtualAlloc() or VirtualFree() – Reserve, commit, or free a region of memory.
• mmap() – Map files or devices into memory. Windows: MapViewOfFile() – Map a file
into the application's address space.
Information Maintenance
• time() – Get the current time. Windows: GetSystemTime() – Get the current system
time.
• alarm() – Set an alarm clock for the delivery of a signal. Windows:
SetWaitableTimer() – Set a timer object.
• getuid() – Get the user ID. Windows: GetUserName() or LookupAccountName() –
Get the username or ID.
• getgid() – Get the group ID. Windows: GetTokenInformation() – Get the group
information of a security token.
Communication Calls
• socket() – Create a new socket. Windows: socket() – Create a new socket.
• bind() – Bind a socket to a network address. Windows: bind() – Bind a socket to a
network address.
• listen() – Listen for connections on a socket. Windows: listen() – Listen for
connections on a socket.
• accept() – Accept a new connection on a socket. Windows: accept() – Accept a new
connection on a socket.
• connect() – Initiate a connection on a socket. Windows: connect() – Initiate a
connection on a socket.
• send() or recv() – Send and receive data on a socket. Windows: send() or recv() –
Send and receive data on a socket.
Security and Access Control
• chmod() or umask() – Change the permissions/mode of a file. Windows:
SetFileAttributes() or SetSecurityInfo() – Change the file attributes or security info.
• chown() – Change the owner and group of a file. Windows: SetSecurityInfo() – Set
the security information.
What Are the Rules for Passing Parameters to the System Call?
When a user-space program invokes a system call, it typically needs to pass additional
parameters to specify the request. System performance depends on how efficiently
these parameters are passed between user and kernel space.
The method for passing parameters depends on the system architecture, but some
general rules apply:
• Limited Number of Parameters. System calls are often designed to accept a
limited number of parameters. This rule is intended to streamline the interface
and compel users to utilize data structures or memory blocks.
• Leveraging CPU Registers. CPU registers are the fastest accessible memory
locations. The number of CPU registers is limited, which restricts the number of
call parameters that can be passed. Use CPU registers when passing a small
number of system call parameters.
• Using Pointers for Data Aggregation. Instead of passing many parameters or
large data sizes, use pointer variables to point to memory blocks or structures
(containing all the parameters). The kernel uses the pointer to access this
memory block and retrieve the parameters.
• Data Integrity and Security Checks. The kernel must validate any pointers
passed from user space. It checks that these pointers only target areas the user
program can access. It also double-checks all data coming from user programs
before using it.
• Stack-based Parameter Handling. Some systems push parameters onto a stack
and allow the kernel to remove them for processing. This method is less common
than using CPU registers and pointers as it is more challenging to implement and
manage.
• Data Isolation Through Copying. The kernel often copies data from user space
to kernel space (and vice versa) to protect the system from erroneous or harmful
data. Data passed between user space and kernel space should not be shared
directly.
• Return Values and Error Handling. The system call returns a value, typically a
simple success/error code. In case of an error, always seek out additional
information about the error. Error responses are often stored in specific locations,
like the errno variable in Linux.
Note: The rules and methods above vary based on the architecture (x86, ARM, MIPS, etc.)
and the specifics of the operating system. Always refer to the OS documentation or
source code for precise information.
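As a small illustration of two of these rules (assuming a POSIX system; nanosleep(2) is just one convenient example), the sketch below passes a pointer to an aggregated parameter block into the kernel and checks the returned error status:

```c
#include <stdio.h>
#include <time.h>   /* struct timespec, nanosleep */

int main(void) {
    /* Data aggregation: both parameters live in one structure, and only
       a pointer to it crosses into the kernel (a single register). */
    struct timespec req = { .tv_sec = 1, .tv_nsec = 0 };

    /* The kernel validates the pointer, copies the struct into kernel
       space, performs the sleep, and signals success or failure through
       the return value. */
    if (nanosleep(&req, NULL) != 0) {
        perror("nanosleep");   /* error details arrive through errno */
        return 1;
    }
    printf("slept for one second\n");
    return 0;
}
```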
Conclusion
This section has explained how system calls work and why they are essential for
seamless operation, security, and the management of hardware and software resources.
A Shell program is software that provides users with an interface for accessing services
in the kernel. The kernel manages the operating system's (OS) core services. It's a highly
protected and controlled space that limits access to the system's resources. A shell
provides an intermediary connection point between the user and the kernel. It runs in the
system's user space and executes the commands issued by the user, whether through a
keyboard, mouse, trackpad or other device.
On some platforms, the shell is called a command interpreter because it interprets the
commands the user issues. The shell then translates those commands into system calls
in the kernel. Each system call sends a request to the kernel to perform a specific task.
For example, the shell might request the kernel to delete a file, create a directory, change
an OS configuration, connect to a network, run a shell script or carry out a variety of other
operations.
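The core read-interpret-execute loop of a shell can be sketched in a few lines of C (a toy illustration assuming a POSIX system; the "mini-sh" prompt is invented, and commands are run without arguments to keep the sketch short):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    char line[256];

    for (;;) {
        printf("mini-sh> ");                 /* prompt the user */
        if (!fgets(line, sizeof line, stdin)) break;
        line[strcspn(line, "\n")] = '\0';    /* strip the trailing newline */
        if (strcmp(line, "exit") == 0) break;
        if (line[0] == '\0') continue;

        pid_t pid = fork();                  /* system call: create a new process */
        if (pid == 0) {
            /* Child: ask the kernel to replace this process image with
               the command (no argument handling in this toy version). */
            execlp(line, line, (char *)NULL);
            perror("execlp");                /* only reached if exec failed */
            _exit(1);
        }
        waitpid(pid, NULL, 0);               /* parent: wait for the child */
    }
    return 0;
}
```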
Kernel
A kernel is the core part of an operating system. It acts as a bridge between software
applications and the hardware of a computer. The kernel manages system resources,
such as the CPU, memory, and devices, ensuring everything works together smoothly
and efficiently. It handles tasks like running programs, accessing files, and connecting to
devices like printers and keyboards.
Difference between Shell and Kernel
In computing, the operating system (OS) serves as the fundamental layer that bridges the
gap between computer hardware and the user. Two critical components of an operating
system are the kernel and the shell. Understanding the relationship between these two
components is fundamental to grasping how operating systems function and how users
interact with their computers.
What is a Shell?
The shell is a command-line interface that allows the user to enter commands to interact
with the operating system. It acts as an intermediary between the user and the kernel,
interpreting commands entered by the user and translating them into instructions that
the kernel can execute. The shell also provides various features like command history,
tab completion, and scripting capabilities to make it easier for the user to work with the
system.
Advantages
• Efficient Command Execution
• Scripting capability
Disadvantages
• Limited Visualization
• Steep Learning Curve
What is a Kernel?
The kernel is the core component of the operating system that manages system
resources and provides services to other programs running on the system. It acts as a
bridge between the user and the resources of the system by accessing various computer
resources such as the CPU and I/O devices. It is responsible for tasks such as memory
management, process scheduling, and device drivers. The kernel operates at a lower
level than the shell and interacts directly with the hardware of the computer.
Advantages
• Efficient Resource Management
• Process Management
• Hardware Abstraction
Disadvantages
• Limited Flexibility
• Dependency on Hardware
Shell vs. Kernel
• The shell allows users to communicate with the kernel, whereas the kernel
controls all the tasks of the system.
• Types of shell include the Bourne shell, C shell, and Korn shell; types of kernel
include the monolithic kernel, microkernel, and hybrid kernel.
Conclusion
The kernel operates at the core of the system, managing hardware resources and
ensuring the smooth execution of processes, while the shell acts as an interface
between the user and the system, allowing commands to be issued and executed.
1.5 Architecture of OS
An operating system allows the user application programs to interact with the system
hardware. Since the operating system is such a complex structure, its architecture plays
an important role in its usage. Each component of the Operating System Architecture
should be well defined with clear inputs, outputs and functions.
Important Terms
In operating system architecture, two major terms define the major components of the
operating system.
• System Software − System software comprises the programs that interact with the
kernel and provide an interface for security management, memory management,
and other low-level activities.
Popular Architectures
a. Simple Architecture
b. Monolith Architecture
c. Micro-Kernel Architecture
d. Exo-Kernel Architecture
e. Layered Architecture
f. Modular Architecture
a. Simple Architecture
There are many operating systems that have a rather simple structure. These started as
small systems and rapidly expanded much further than their scope. A common example
of this is MS-DOS (Microsoft Disk Operating System). It was designed simply for a niche
amount for people. There was no indication that it would become so popular.
A simple yet powerful architecture such as that of MS-DOS gives programmers greater
control over the computer system and its various applications. The simple architecture
allows programmers to hide information as required and to implement internal routines
as they see fit without changing the outer specifications.
Advantages
• Better Performance - Such a system, having few layers and interacting directly
with the hardware, can provide better performance than other types of
operating systems.
Disadvantages
• Frequent System Failures - Being poorly structured, such a system is not robust.
If one program fails, the entire operating system crashes. System failures are
therefore quite frequent in simple operating systems.
b. Monolith Architecture
In a monolithic operating system architecture, a central piece of code called the kernel
is responsible for all major operations of the operating system. Such operations include
file management, memory management, device management, and so on. The kernel is
the main component of the operating system, and it provides all of the operating
system's services to application programs and system programs.
The kernel has access to all the system's resources, and it acts as the interface between
application programs and the underlying hardware. A monolithic kernel architecture
supports the timesharing and multiprogramming models and was used in old banking
systems.
Advantages
• Easy Development - As the kernel is the only layer to develop, with all major
functionalities in one place, it is easier to design and develop.
Disadvantages
• Crash Prone - As the kernel is responsible for all functions, if one function fails,
the entire operating system fails.
c. Micro-Kernel Architecture
In a micro-kernel architecture, the kernel is kept as small as possible: it provides only
core services (such as inter-process communication and basic scheduling), while other
operating system services run as separate processes outside the kernel.
Advantages
• Maintainability - Being small-sized kernels, the code size is maintainable. One
can enhance a microkernel code base without impacting other parts of the code
base.
Disadvantages
e. Layered Architecture
One way to achieve modularity in the operating system is the layered approach. In this
approach, the bottom layer is the hardware and the topmost layer is the user interface.
Each upper layer is built on the layer below it, and every layer hides certain structures,
operations, etc. from the layers above it.
One problem with the layered architecture is that each layer needs to be carefully
defined. This is necessary because the upper layers can only use the functionalities of
the layers below them.
Advantages
• Verifiable - Being modular, each layer can be verified and debugged easily.
Disadvantages
f. Modular Architecture
A modular operating system architecture works on a similar principle to a monolithic
one, but with a better design. A central kernel is responsible for all major operations of
the operating system. This kernel has a set of core functionality, and other services are
loaded as modules dynamically into the kernel at boot time or at runtime. Sun's Solaris
OS is one example of a modular operating system.
Advantages
• Verifiable - Being modular, each module can be verified and debugged easily.
Disadvantages
The computer's CPU switches between user and kernel mode depending on the code
that's running. Certain applications are restricted to user mode, while others operate in
kernel mode. Generally, user applications operate in user mode, whereas basic OS
components function in kernel mode.
The 2024 CrowdStrike outage, which rendered millions of Windows machines
inoperable, was precipitated by security software that malfunctioned while running in
kernel mode.
What is user mode?
User mode gives programs a private section of memory that other applications cannot
access and keeps applications in user mode from altering each other's data. In this
mode, if one application crashes, it doesn't take the entire system down with it,
because it runs in isolation from other applications.
User applications, such as word processors, web browsers and video players, run in user
mode. When a user launches one of these applications, the OS creates a process that
gives the application its own private virtual address space in memory.
When the computer system is running user applications, such as creating a text
document, the system is in user mode. When a user application requests a service from
the operating system, or an interrupt or system call occurs, there is a transition from
user to kernel mode to fulfill the request.
Note: To switch from kernel mode to user mode, the mode bit should be set to 1.
What is kernel mode?
Kernel mode is an OS state with unrestricted access to system resources and hardware.
It is a privileged mode in which the OS's core functions are carried out. Kernel mode
enforces process isolation by handling system calls from user mode, and it has direct
access to peripheral devices.
When the system boots, the hardware starts in kernel mode, and when the operating
system is loaded, it starts user applications in user mode. To protect the hardware,
privileged instructions execute only in kernel mode. If a user attempts to run a privileged
instruction in user mode, the hardware treats the instruction as illegal and traps to the
OS. Some of the privileged operations are:
1. Handling interrupts
2. Switching from user mode to kernel mode
3. Input/output management
In kernel mode, there is no separation of virtual address space -- all code in this mode
shares the same virtual address space in memory. This means the CPU can switch
between running programs and reading and writing both kernel memory and user
memory.
Programs that run in kernel mode include the OS itself, process-related code and
some security software. Program data running in this mode is not protected from other
applications. If an application crashes in kernel mode, it can negatively affect the other
applications running in kernel mode. For example, if a driver crashes in kernel mode, it
could potentially corrupt the entire OS.
Which programs run in user mode and kernel mode?
Any program that performs memory management, process management or I/O
management typically runs in kernel mode. Any software in this mode has full access to
the system and thus needs to be trusted. Once running, the code in the kernel or new
code that is inserted in the kernel needs to be trusted so that it doesn't corrupt the core
functions of the computer.
Computer systems have rings of privilege. Kernel mode operates in Ring 0, the most
privileged zone. User mode operates in Ring 3, the least privileged zone. Rings 1 and 2
are sometimes referred to as the supervisor level, depending on system architecture.
System calls establish this trust. Although applications such as word processors are
executed in user mode, they use system calls regularly to enter kernel mode and perform
processes involving peripherals and memory. For example, if the word processor needs
to save a file, it needs to do so through a system call because it needs to write bytes to
the disk. The same goes for typing or moving the cursor -- the program needs to interact
with the hardware in some way and needs some kernel-level access to do so.
Another example of a system call occurs when a program is listening for incoming
network connections. The system call tells the kernel's networking stack to arrange data
structures to prepare it to receive future incoming network packets.
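A sketch of that sequence of system calls, assuming the POSIX sockets API (the port number 8080 is arbitrary), looks like this:

```c
#include <netinet/in.h>   /* sockaddr_in, htons, htonl, INADDR_ANY */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>   /* socket, bind, listen, accept */
#include <unistd.h>       /* write, close */

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);   /* kernel allocates a socket */
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);                /* arbitrary example port */

    if (bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0) { perror("bind"); return 1; }
    if (listen(fd, 8) < 0) { perror("listen"); return 1; }   /* queue incoming connects */

    int client = accept(fd, NULL, NULL);        /* blocks until a client connects */
    if (client >= 0) {
        const char *msg = "hello from a kernel-backed socket\n";
        write(client, msg, strlen(msg));
        close(client);
    }
    close(fd);
    return 0;
}
```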
The above are examples of programs that execute in user mode but use system calls to
access kernel mode. Another example of software that requires access to the kernel is
third-party security software. One notable example of this is CrowdStrike's Falcon
sensor, which regularly publishes content updates to the kernel to help the software
detect new threats. The sensor validates the content, which allows it, theoretically, to do
its job safely in the kernel.
However, because of a bug in the content validator, a content update passed through
with problematic data. This caused the CrowdStrike software to crash. Because the
software -- at least part of it -- resided in the kernel, the Windows machines that received
the update completely crashed as well. This example speaks to the importance of only
running trusted processes in the kernel.
Advantages:
1. Protection: Dual-mode operation provides a layer of protection between user
programs and the operating system. In user mode, programs are restricted from
accessing privileged resources, such as hardware devices or sensitive system
data. In kernel mode, the operating system has full access to these resources,
allowing it to protect the system from malicious or unauthorized access.
2. Stability: Dual-mode operation helps to ensure system stability by preventing
user programs from interfering with system-level operations. By restricting
access to privileged resources in user mode, the operating system can prevent
programs from accidentally or maliciously causing system crashes or other
errors.
3. Flexibility: Dual-mode operation allows the operating system to support a wide
range of applications and hardware devices. By providing a well-defined interface
between user programs and the operating system, it is easier to develop and
deploy new applications and hardware.
4. Debugging: Dual-mode operation makes it easier to debug and diagnose
problems with the operating system and applications. By switching between user
mode and kernel mode, developers can identify and fix issues more quickly and
easily.
5. Security: Dual-mode operation enhances system security by preventing
unauthorized access to critical system resources. User programs running in user
mode cannot modify system data or perform privileged operations, reducing the
risk of malware attacks or other security threats.
6. Efficiency: Dual-mode operation can improve system performance by reducing
overhead associated with system-level operations. By allowing user programs to
access resources directly in user mode, the operating system can avoid
unnecessary context switches and other performance penalties.
7. Compatibility: Dual-mode operation ensures backward compatibility with
legacy applications and hardware devices. By providing a standard interface for
user programs to interact with the operating system, it is easier to maintain
compatibility with older software and hardware.
8. Isolation: Dual-mode operation provides isolation between user programs,
preventing one program from interfering with another. By running each program in
its own protected memory space, the operating system can prevent programs
from accessing each other’s data or causing conflicts.
9. Reliability: Dual-mode operation enhances system reliability by preventing
crashes and other errors caused by user programs. By restricting access to
critical system resources, the operating system can ensure that system-level
operations are performed correctly and reliably.
Disadvantages:
1. Performance: Dual-mode operation can introduce overhead and reduce system
performance. Switching between user mode and kernel mode requires a context
switch, which can be time-consuming and can impact system performance.
2. Complexity: Dual-mode operation can increase system complexity and make it
more difficult to develop and maintain operating systems. The need to support
both user mode and kernel mode can make it more challenging to design and
implement system features and to ensure system stability.
3. Security: Dual-mode operation can introduce security vulnerabilities. Malicious
programs may be able to exploit vulnerabilities in the operating system to gain
access to privileged resources or to execute malicious code.
4. Reliability: Dual-mode operation can introduce reliability issues as it is difficult
to test and verify the correct operation of both user mode and kernel mode. Bugs
or errors in either mode can lead to system crashes, data corruption, or other
reliability issues.
5. Compatibility: Dual-mode operation can create compatibility issues as different
operating systems may implement different interfaces or policies for user mode
and kernel mode. This can make it difficult to develop applications that are
compatible with multiple operating systems or to migrate applications between
different systems.
6. Development complexity: Dual-mode operation requires a higher level of
technical expertise and development skills to design and implement the
operating system. This can increase the development time and cost for creating
new operating systems or updating existing ones.
7. Maintenance complexity: Dual-mode operation can make maintenance and
support more complex due to the need to ensure compatibility and security
across both user mode and kernel mode. This can increase the cost and time
required for system updates, patches, and upgrades.
Note: To switch from user mode to kernel mode, the mode bit should be set to 0.
• Memory Management: Memory management is a method used in the operating
systems to manage operations between main memory and disk during process
execution.
Design and Implementation in Operating System
The design of an operating system is a broad and complex topic that touches on many
aspects of computer science.
Design Goals:
Design goals are the objectives of the operating system. They must be met to fulfill design
requirements, and they can be used to evaluate the design. These goals may not always
be technical, but they often have a direct impact on how users perceive their experience
with an operating system. Designers need to identify all design goals and prioritize them,
and they also need to ensure that these goals are compatible with each other as well as
with user expectations and expert advice.
Designers also need to identify all possible ways in which their designs could conflict
with other parts of their systems—and then prioritize those potential conflicts based on
cost-benefit analysis (CBA). This process allows for better decision-making about what
features make sense for inclusion into final products versus those which would require
extensive rework later down the road. It’s also important to note that CBA is not just
about financial costs; it can also include other factors like user experience, time to
market, and the impact on other systems.
The process of identifying design goals, conflicts, and priorities is often referred to as
“goal-driven design.” The goal of this approach is to ensure that each design decision is
made with the best interest of users and other stakeholders in mind.
An operating system is a construct that allows the user application programs to interact
with the system hardware. Operating system by itself does not provide any function but
it provides an atmosphere in which different applications and programs can do useful
work.
There are many problems that can occur while designing and implementing an operating
system. These are covered in operating system design and implementation.
The design and implementation of an operating system is a complex process that
involves many different disciplines. The goal is to provide users with a reliable, efficient,
and convenient computing environment, so as to make their work more efficient.
User Goals
The operating system should be convenient, easy to use, reliable, safe and fast according
to the users. However, these specifications are not very useful as there is no set method
to achieve these goals.
System Goals
The operating system should be easy to design, implement, and maintain. These are the
specifications required by those who create, maintain, and operate the operating
system. But there is no specific method to achieve these goals either.
An operating system is a set of software components that manage a computer’s
resources and provide overall system management.
Mechanisms and policies are the two main components of an operating system.
Mechanisms handle low-level functions such as scheduling, memory management,
and interrupt handling; policies handle higher-level functions such as resource
management, security, and reliability. A well-designed OS should provide both
mechanisms and policies for each component in order for it to be successful at its task.
Mechanisms should ensure that applications have access to the appropriate hardware
resources. They should also make sure that applications don't interfere with each
other's use of these resources (for example, through mutual exclusion).
Policies determine how processes will interact with one another when they're running
simultaneously on multiple CPUs within a single machine instance: what processor
affinity should apply during multitasking operations? Should all processes be allowed
access simultaneously, or just those belonging to a specific group?
These are just some of the many questions that policies must answer. The OS is
responsible for enforcing these mechanisms and policies, as well as handling
exceptions when they occur. The operating system also provides a number of services to
applications, such as file access and networking capabilities.
The operating system is also responsible for making sure that all of these tasks are done
efficiently and in a timely manner. The OS provides applications with access to the
underlying hardware resources and ensures that they’re properly utilized by the
application. It also handles any exceptions that occur during execution so that they don’t
cause the entire system to crash.
There is no specific way to design an operating system as it is a highly creative task.
However, there are general software principles that are applicable to all operating
systems.
A subtle difference between mechanism and policy is that a mechanism shows how to
do something, while a policy shows what to do. Policies may change over time, and each
policy change could force a change in the mechanism. It is therefore better to have a
general mechanism that requires few changes even when a policy changes.
For example, if the mechanism and policy are independent, then few changes are
required in the mechanism when the policy changes. If a policy favours I/O-intensive
processes over CPU-intensive processes, a policy change to favour CPU-intensive
processes should not require changing the mechanism, as the sketch below illustrates.
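The following sketch illustrates this separation; the names pick_next, favor_io_bound, and favor_cpu_bound are invented for illustration and do not come from any real kernel:

```c
#include <stdio.h>

struct proc { const char *name; int cpu_time; int io_ops; };

/* Policy: decides WHAT "better" means. Swappable without touching the
   mechanism below. */
typedef int (*policy_fn)(const struct proc *a, const struct proc *b);

static int favor_io_bound(const struct proc *a, const struct proc *b) {
    return a->io_ops > b->io_ops;       /* prefer I/O-intensive work */
}
static int favor_cpu_bound(const struct proc *a, const struct proc *b) {
    return a->cpu_time > b->cpu_time;   /* prefer CPU-intensive work */
}

/* Mechanism: decides HOW the ready queue is scanned. It never changes
   when the policy changes. */
static const struct proc *pick_next(const struct proc *q, int n, policy_fn better) {
    const struct proc *best = &q[0];
    for (int i = 1; i < n; i++)
        if (better(&q[i], best))
            best = &q[i];
    return best;
}

int main(void) {
    struct proc ready[] = { {"editor", 2, 90}, {"compiler", 80, 5} };
    printf("I/O policy picks: %s\n", pick_next(ready, 2, favor_io_bound)->name);
    printf("CPU policy picks: %s\n", pick_next(ready, 2, favor_cpu_bound)->name);
    return 0;
}
```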
Implementation:
Implementation is the process of writing the source code in a high-level programming
language and translating it into object code that the machine can execute. The purpose
of an operating system is to provide services to users while they run applications on
their computers.
The operating system needs to be implemented after it is designed. Earlier they were
written in assembly language but now higher-level languages are used. The first system
not written in assembly language was the Master Control Program (MCP) for Burroughs
Computers.
The main function of an operating system is to control the execution of programs. It also
provides services such as memory management, interrupt handling, and file system
access facilities so that programs can be better utilized by users or other devices
attached to the system.
An operating system is a program or software that controls the computer’s hardware and
resources. It acts as an intermediary between applications, users, and the computer’s
hardware. It manages the activities of all programs running on a computer without any
user intervention.
The operating system performs many functions such as managing the computer’s
memory, enforcing security policies, and controlling peripheral devices. It also provides
a user interface that allows users to interact with their computers.
The operating system's startup code is typically stored in ROM or flash memory so it can
run when the computer is turned on. The first operating systems were designed to
control mainframe computers. They were very large and complex, consisting of millions
of lines of code and requiring several people to develop them.
Today, operating systems are much smaller and easier to use. They have been designed
to be modular so they can be customized by users or developers.
There are many different types of operating systems:
1. Graphical user interfaces (GUIs) like Microsoft Windows and Mac OS.
2. Command line interfaces like Linux or UNIX
3. Real-time operating systems that control industrial and scientific equipment
4. Embedded operating systems are designed to run on a single computer system
without needing an external display or keyboard.
1.9 Types of Operating Systems
Operating Systems can be categorized according to different criteria like whether an
operating system is for mobile devices (examples Android and iOS) or desktop (examples
Windows and Linux). We are going to classify based on functionalities an operating
system provides.
In a multiprogramming operating system, more than one program is present in main
memory, and any one of them can be in execution at a given time. This is basically used
for better utilization of resources.
Advantages of Time-Sharing OS
• Each task gets an equal opportunity.
• Fewer chances of duplication of software.
• CPU idle time can be reduced.
• Resource Sharing: Time-sharing systems allow multiple users to share hardware
resources such as the CPU, memory, and peripherals, reducing the cost of
hardware and increasing efficiency.
• Improved Productivity: Time-sharing allows users to work concurrently, thereby
reducing the waiting time for their turn to use the computer. This increased
productivity translates to more work getting done in less time.
• Improved User Experience: Time-sharing provides an interactive environment that
allows users to communicate with the computer in real time, providing a better
user experience than batch processing.
Disadvantages of Time-Sharing OS
• Reliability problem.
• One must have to take care of the security and integrity of user programs and
data.
• Data communication problem.
• High Overhead: Time-sharing systems have a higher overhead than other
operating systems due to the need for scheduling, context switching, and other
overheads that come with supporting multiple users.
• Complexity: Time-sharing systems are complex and require advanced software to
manage multiple users simultaneously. This complexity increases the chance of
bugs and errors.
• Security Risks: With multiple users sharing resources, the risk of security
breaches increases. Time-sharing systems require careful management of user
access, authentication, and authorization to ensure the security of data and
software.
5. Distributed Operating System
These types of operating systems are a recent advancement in the world of computer
technology and are being widely accepted all over the world, at a great pace. Various
autonomous interconnected computers communicate with each other over a shared
communication network. The independent systems possess their own memory units and
CPUs and are referred to as loosely coupled or distributed systems. These systems'
processors differ in size and function. The major benefit of working with this type of
operating system is that a user can always access files or software that are not actually
present on his own system but on some other system connected to the network; i.e.,
remote access is enabled between the devices connected to that network.
[Figure: a network system in which Clients 1-4 connect to a central File Server]
Advantages of Network Operating System
• Highly stable centralized servers.
• Security concerns are handled through servers.
• New technologies and hardware up-gradation are easily integrated into the
system.
• Server access is possible remotely from different locations and types of systems.
Disadvantages of Network Operating System
• Servers are costly.
• User has to depend on a central location for most operations.
• Maintenance and updates are required regularly.
Examples of Network Operating Systems are Microsoft Windows Server 2003,
Microsoft Windows Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, BSD, etc.
Advantages of RTOS
• Maximum Consumption: Maximum utilization of devices and systems, thus
more output from all the resources.
• Task Shifting: The time assigned for shifting tasks in these systems is very less.
For example, in older systems, it takes about 10 microseconds in shifting from
one task to another, and in the latest systems, it takes 3 microseconds.
• Focus on Application: Focus on running applications and less importance on
applications that are in the queue.
• Real-time operating system in the embedded system: Since the size of
programs is small, RTOS can also be used in embedded systems like in transport
and others.
• Error Free: These types of systems are designed to be largely error-free.
• Memory Allocation: Memory allocation is best managed in these types of
systems.
Disadvantages of RTOS
• Limited Tasks: Very few tasks run at the same time, and concentration is kept on
a few applications to avoid errors.
• Heavy use of system resources: Sometimes the system resources are not so
good, and they are expensive as well.
• Complex Algorithms: The algorithms are very complex and difficult for the
designer to write.
• Device drivers and interrupt signals: An RTOS needs specific device drivers and
interrupt signals in order to respond to interrupts as quickly as possible.
• Thread Priority: It is not good to set thread priority, as these systems rarely
switch between tasks.
Examples of real-time operating system applications include scientific experiments,
medical imaging systems, industrial control systems, weapon systems, robots, air traffic
control systems, etc.
8. Mobile Operating Systems
These operating systems are mainly for mobile devices. Examples of such operating
systems are Android and iOS.
Conclusion
Operating systems come in various types, each suited to specific needs: managing large
batches of jobs, enabling multiple users to work simultaneously, coordinating networked
computers, or ensuring timely execution in critical systems. Understanding these types
helps in choosing the right operating system for the right job, ensuring efficiency and
effectiveness.
QUESTION: How has the development of computer hardware been impacted by the
evolution of operating systems?
ANSWER: The design and advancement of computer hardware have been significantly
influenced by the development of operating systems. As operating systems improved
over time, hardware producers added new features and capabilities to their products in
order to better support the functionality offered by the operating systems. For instance,
the development of memory management units (MMUs) in hardware to handle memory
addressing and protection followed the introduction of virtual memory in operating
systems. Similarly, the demand for operating system multitasking and multiprocessing
support prompted the creation of more powerful and efficient processors and other
hardware components.
QUESTION: What are some examples of popular operating systems?
ANSWER: There are many popular operating systems, for example Apple macOS,
Microsoft Windows, Google Android OS, the Linux operating system, and Apple iOS.
QUESTION: Which operating system is regarded as the mother of operating systems?
ANSWER: UNIX is regarded as the mother of operating systems. UNIX is truly the base
of many later operating systems, such as Ubuntu, Solaris, and other POSIX systems.
QUESTION: How does dual-mode operation improve system performance?
ANSWER: By allowing user programs to access resources directly in user mode, the
operating system can avoid unnecessary context switches and other performance
penalties, which can improve system performance.
Benefits of computer networks include:
• Provides the best way of business communication.
• Streamlined communication.
• Increase in efficiency.
• Network gaming.
• Flexibility.
• User communication.
Explain USART and give TWO differences between UART and USART
A USART (Universal Synchronous/Asynchronous Receiver/Transmitter) is a serial
communication peripheral that can operate in either synchronous or asynchronous
mode. The most common use of the USART in asynchronous mode is to communicate
with a PC serial port using the RS-232 protocol.
Differences between UART and USART:
• Signal count: a UART uses 2 signals (Tx and Rx); a USART uses 4 (Tx, Rx, XCK
(clock), and XDIR (direction)).
• Data rate: in a UART, the data rate is specified between transmitter and receiver;
in a USART, it is defined by the clock pulse stream on the XCK pin.
Process management includes ending processes that are no longer needed, setting
process priorities, and more. You can do this on your own computer as well.
There are a few ways to manage your processes. The first is through the use of Task
Manager. This allows you to see all of the processes currently running on your computer
and their current status and CPU/memory usage. You can end any process that you no
longer need, set a process priority, or start or stop a service.
The process control block is a data structure used by an operating system to store
information about a process. This includes the process state, program counter, CPU
scheduling information, memory management information, accounting information,
and IO status information. The operating system uses the process control block to keep
track of all the processes in the system.
Attributes of a process:
• Process State – The current state of the process (ready, running, waiting,
terminated).
• Process privileges – Required for allowing/disallowing access to system
resources.
• Process ID – Unique identification for each process in the operating system.
• Program Counter – The address of the next instruction to be executed in the
process.
• CPU registers – The registers used by the process; they may include general-
purpose registers, index registers, accumulators, stack pointers, etc.
• Memory management information – Memory information such as the segment
table, memory limits, and page table.
• Accounting information – The amount of CPU used for process execution,
execution time, time limits, etc.
• IO status information – The list of I/O devices allocated to the process.
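A simplified sketch of how a PCB might be declared in C is shown below; the field names and sizes are illustrative, not taken from any particular operating system:

```c
#include <stdint.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

/* One PCB per process; the OS keeps these in its process table. */
struct pcb {
    int             pid;             /* unique process ID */
    enum proc_state state;           /* current process state */
    uint64_t        program_counter; /* address of the next instruction */
    uint64_t        registers[16];   /* saved general-purpose registers */
    void           *page_table;      /* memory-management information */
    unsigned long   cpu_time_used;   /* accounting information */
    int             open_files[16];  /* I/O status: allocated descriptors */
    struct pcb     *next;            /* link in a scheduling queue */
};
```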
Process Operations
Examples of some operations:
1. Process creation
The first step is process creation. A process can be created by a user request (using
fork(); see the sketch after this list), by a system call from a running process, or at
system initialization.
2. Scheduling
If the process is ready to be executed, it sits in the ready queue, and it is the job of the
scheduler to choose a process from the ready queue and start its execution.
3. Execution
Here, execution of the process means the CPU is assigned to the process. Once the
process has started executing, it can go into a waiting queue or blocked state. Maybe the
process wants to make an I/O request, or some high-priority process comes in.
4. Killing the process
After process execution completes, the operating system terminates the process and
deletes its process control block (PCB).
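As referenced in step 1 above, here is a minimal sketch of process creation with fork() on a POSIX system:

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();               /* system call: duplicate this process */
    if (pid < 0) {
        perror("fork");               /* process creation failed */
        return 1;
    }
    if (pid == 0) {
        /* The newly created child process runs here. */
        printf("child:  pid %d\n", getpid());
    } else {
        waitpid(pid, NULL, 0);        /* parent waits for the child to finish */
        printf("parent: child %d finished\n", pid);
    }
    return 0;
}
```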
Process States
1. New state
When a process is created, it is in the new state. The process is not yet ready to run in
this state and is waiting for the operating system to give it the green light. The long-term
scheduler shifts the process from the NEW state to the READY state.
2. Ready state
After creation, the process is ready to be assigned to the processor. It waits in the ready
queue to be picked up by the short-term scheduler, which moves one process from the
READY state to the RUNNING state. Schedulers are covered in detail later in this
chapter.
3. Running state
Once the process is ready, it moves on to the running state, where it starts to execute the
instructions that were given to it. The running state is also where the process consumes
most of its CPU time.
4. Waiting state
If the process needs to stop running for some reason, it enters the waiting state. This
could be because:
• It's waiting for some input.
• It's waiting for a resource that's not available yet.
• Some higher-priority process comes in that needs to be executed.
The process is then suspended for some time and put in the WAITING state. Until then,
the next process is given the chance to execute.
5. Terminated state
After execution, the process exits to the terminated state, which means its execution is
complete.
Conclusion
Process management is a critical function of the operating system. By managing
processes, the operating system can ensure that resources are used efficiently and that
the system remains stable. In addition, process management allows the operating
system to control how programs interact with each other. This section has explained process management, the different states of a process, and context switching.
Process scheduling is the activity of the process manager that handles the removal of
the running process from the CPU and the selection of another process based on a
particular strategy. Throughout its lifetime, a process moves between
various scheduling queues, such as the ready queue, the waiting queue, or device queues.
Categories of Scheduling
• Pre-emptive: In this case, the OS can switch a process from the running state to the ready state. This switching happens because the CPU may give other processes priority and substitute the currently active process with a higher-priority process.
• Non-pre-emptive: In this case, the OS does not switch a running process out; once the CPU has been allocated, the process holds it until it terminates or moves to a waiting state.
1. Long-Term Scheduler
The Long-Term Scheduler (job scheduler) loads a process from disk into main memory for execution and admits the new process to the 'Ready State'.
• It is important that the long-term scheduler make a careful selection of both I/O-bound and CPU-bound processes. I/O-bound tasks are those that spend much of their time on input and output operations, while CPU-bound processes are those that spend their time on the CPU. The job scheduler increases efficiency by maintaining a balance between the two.
• In some systems, the long-term scheduler might not even exist. For example, in
time-sharing systems like Microsoft Windows, there is usually no long-term
scheduler. Instead, every new process is directly added to memory for the short-
term scheduler to handle.
2. Short-Term Scheduler
The Short-Term Scheduler (CPU scheduler) is responsible for selecting one process from the ready state for running (or assigning the CPU to it).
• STS (Short Term Scheduler) must select a new process for the CPU frequently to
avoid starvation.
The dispatcher is responsible for loading the process selected by the Short-term
scheduler on the CPU (Ready to Running State). Context switching is done by the
dispatcher only. A dispatcher does the following work: it switches context, switches to user mode, and jumps to the proper location in the user program to restart it.
The time taken by the dispatcher is called dispatch latency or process context-switch time.
3. Medium-Term Scheduler
Medium Term Scheduler (MTS) is responsible for moving a process from memory to disk
(or swapping).
• When needed, it brings the process back into memory, and the process picks up right where it left off.
Some Other Schedulers
• I/O Schedulers: I/O schedulers are in charge of managing the execution of I/O
operations such as reading and writing to discs or networks. They can use
various algorithms to determine the order in which I/O operations are executed,
such as FCFS (First-Come, First-Served) or RR (Round Robin).
Long-Term Scheduler – It is a job scheduler. It controls the degree of multiprogramming.
Short-Term Scheduler – It is a CPU scheduler. It gives less control over how much multiprogramming is done.
Medium-Term Scheduler – It is a process-swapping scheduler. It reduces the degree of multiprogramming.
Context Switching
In order for a process execution to be continued from the same point at a later time,
context switching is a mechanism to store and restore the state or context of a CPU in
the Process Control block. A context switcher makes it possible for multiple processes
to share a single CPU using this method. A multitasking operating system must include
context switching among its features.
The state of the currently running process is saved into its process control block when the scheduler switches the CPU from executing one process to another. The state used to set up the registers, program counter, etc. for the process that will run next is then loaded from that process's own PCB. After that, the second process can start executing.
The state or context saved and restored in the PCB during a context switch includes:
• Program Counter
• Scheduling information
• Changed State
• Accounting information
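As a rough, hypothetical illustration (not any real kernel's code), the C sketch below models a PCB holding the fields above and the bookkeeping a context switch performs; in a real kernel the register save and restore are done in assembly on the actual hardware.

#include <stdint.h>

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

/* A simplified, hypothetical PCB holding the context listed above. */
typedef struct pcb {
    int          pid;                /* process ID */
    proc_state_t state;              /* current scheduling state */
    uint64_t     program_counter;    /* address of the next instruction */
    uint64_t     registers[16];      /* saved general-purpose registers */
    uint64_t     cpu_time_used;      /* accounting information */
} pcb_t;

/* Conceptual context switch: save the outgoing process's context into its
   PCB and load the incoming process's context from its PCB. */
void context_switch(pcb_t *out, pcb_t *in) {
    /* ...save the hardware registers and program counter into
       out->registers and out->program_counter (assembly on real systems)... */
    out->state = READY;              /* outgoing process rejoins the ready queue */

    /* ...restore in->registers and in->program_counter onto the hardware... */
    in->state = RUNNING;             /* incoming process now owns the CPU */
}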
Conclusion
Process schedulers are essential parts of an operating system that manage how the
CPU handles multiple tasks or processes. They ensure that processes are executed
efficiently, making the best use of CPU resources and maintaining system
responsiveness. By choosing the right process to run at the right time, schedulers help
optimize overall system performance, improve user experience, and ensure fair access
to CPU resources among competing processes.
CHAPTER THREE
UNDERSTAND INTER-PROCESS COMMUNICATION
Components of a Process
A process is divided into the following four sections:
Stack
Temporary data like method or function parameters, return address, and local variables
are stored in the process stack.
Heap
This is the memory that is dynamically allocated to a process during its execution.
Text
This comprises the compiled program code, as well as the current activity reflected by the value of the program counter and the contents of the processor's registers.
Data
The global as well as static variables are included in this section.
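The short C program below makes these four sections concrete: the compiled code of main() lives in the text section, the global variable in the data section, the local variable on the stack, and the malloc'd buffer on the heap. The printed addresses will differ from run to run.

#include <stdio.h>
#include <stdlib.h>

int global_counter = 0;                     /* data section: global/static variables */

int main(void) {                            /* the compiled code lives in the text section */
    int local = 42;                         /* stack: locals, parameters, return addresses */
    int *buf = malloc(100 * sizeof(int));   /* heap: memory allocated at run time */

    printf("data  address: %p\n", (void *)&global_counter);
    printf("stack address: %p\n", (void *)&local);
    printf("heap  address: %p\n", (void *)buf);

    free(buf);
    return 0;
}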
Start
When a process is started/created first, it is in this state.
Ready
Here, the process is waiting for a processor to be assigned to it. Ready processes are waiting for the operating system to assign them a processor so that they can run. A process may enter this state after starting, or while running if the scheduler interrupts it to assign the CPU to another process.
Running
When the OS scheduler assigns a processor to a process, the process state gets set to
running, and the processor executes the process instructions.
Waiting
If a process needs to wait for any resource, such as for user input or for a file to become
available, it enters the waiting state.
Terminated or Exit
The process is relocated to the terminated state, where it waits for removal from the main
memory once it has completed its execution or been terminated by the operating system.
Process state
The process's present state, such as whether it is ready, waiting, running, or terminated.
Process privileges
This is required in order to grant or deny access to system resources.
Process ID
Each process in the OS has its own unique identifier.
Pointer
It refers to a pointer that points to the parent process.
Program counter
The program counter refers to a pointer that points to the address of the process’s next
instruction.
CPU registers
The CPU registers in which the process's context must be saved for execution in the running state.
Accounting information
This comprises CPU use for process execution, time constraints, and execution ID,
among other things.
IO status information
This section includes a list of the process’s I/O devices.
The PCB architecture is fully dependent on the operating system, and different operating systems may include different information.
The PCB is kept for the lifetime of a process and is deleted once the process is finished.
The Different Process States
The operating system’s processes can be in one of the following states:
• NEW – The process is being created.
• READY – The process is waiting to be assigned to a processor.
• RUNNING – The process's instructions are being executed.
• WAITING – The process is waiting for some event that is about to occur (like an I/O completion, a signal reception, etc.).
• TERMINATED – The process has completed execution.
Process vs Program
A program is a piece of code that can be as simple as a single line or as complex as
millions of lines. A computer program is usually developed in a programming language
by a programmer. The process, on the other hand, is essentially a representation of the
computer program that is now running. It has a comparatively shorter lifetime.
Here is a basic program created in the C programming language as an example:
#include <stdio.h>

int main() {
    printf("Hi, Subhadip! \n");
    return 0;
}
Process Scheduling
When there are two or more runnable processes, the operating system chooses which one to run first; this is known as process scheduling.
A scheduler is a program that uses a scheduling algorithm to make choices. The following
are characteristics of a good scheduling algorithm:
• For users, response time should be kept to a bare minimum.
• The total number of jobs processed every hour should be as high as possible,
implying that a good scheduling system should provide the highest possible
throughput.
• The CPU should be used to its full potential.
• Each process should be given an equal amount of CPU time.
3.2 Process Creation and Process Termination (wait, signal, semaphores and deadlock); IPC Techniques
Process Creation
As discussed above, processes in most operating systems (both Windows and Linux) form a hierarchy, so a new process is always created by a parent process. The process that creates the new one is called the parent process, and the newly created process is called the child process. A process can create multiple new processes while it is running by using system calls.
1. When a new process is created, the operating system assigns a unique Process
Identifier (PID) to it and inserts a new entry in the primary process table.
2. Then the required memory space for all the elements of the process, such as the program, data, and stack, is allocated, including space for its Process Control Block (PCB).
3. Next, the various values in the PCB are initialized, such as:
1. The process identification part is filled with PID assigned to it in step (1) and also
its parent’s PID.
2. The processor register values are mostly filled with zeroes, except for the stack pointer and program counter. The stack pointer is filled with the address of the stack allocated to it in step (2), and the program counter is filled with the address of its program entry point.
3. The process state information would be set to ‘New’.
4. Priority would be lowest by default, but the user can specify any priority during
creation. Then the operating system will link this process to the scheduling queue
and the process state would be changed from ‘New’ to ‘Ready’. Now the process
is competing for the CPU.
5. Additionally, the operating system will create some other data structures such as
log files or accounting files to keep track of process activity.
Understanding System Calls for Process Creation in Windows Operating System:
In Windows, the system call used for process creation is CreateProcess(). This function
is responsible for creating a new process, initializing its memory, and loading the
specified program into the process’s address space.
• CreateProcess() in Windows combines the functionality of both
UNIX’s fork() and exec(). It creates a new process with its own memory space
rather than duplicating the parent process like fork() does. It also allows
specifying which program to run, similar to how exec() works in UNIX.
• When you use CreateProcess(), you need to provide some extra details to handle
any changes between the parent and child processes. These details control
things like the process’s environment, security settings, and how the child
process works with the parent or other processes. It gives you more control and
flexibility compared to the UNIX system.
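A minimal sketch of calling CreateProcess() is shown below; "child.exe" is a hypothetical program name, and error handling is reduced to a single check.

#include <windows.h>
#include <stdio.h>

int main(void) {
    STARTUPINFOA si;
    PROCESS_INFORMATION pi;
    char cmdline[] = "child.exe";    /* hypothetical program to run */

    ZeroMemory(&si, sizeof(si));
    si.cb = sizeof(si);
    ZeroMemory(&pi, sizeof(pi));

    /* Create the child with default security, an inherited environment,
       and no special creation flags. */
    if (!CreateProcessA(NULL, cmdline, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi)) {
        fprintf(stderr, "CreateProcess failed (%lu)\n", GetLastError());
        return 1;
    }

    WaitForSingleObject(pi.hProcess, INFINITE);  /* wait for the child to exit */
    CloseHandle(pi.hProcess);                    /* release the kernel handles */
    CloseHandle(pi.hThread);
    return 0;
}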
Process Deletion
Processes terminate themselves when they finish executing their last statement, at which point the exit() system call asks the operating system to delete the process's context. Then all the resources held by that process, like physical and virtual memory, I/O buffers, open files, etc., are taken back by the operating system. A process P can be terminated either by the operating system or by the parent process of P.
Processes are created using system calls like fork() in UNIX or CreateProcess() in
Windows, which handle the allocation of resources, assignment of unique identifiers,
and initialization of the process control block. Once a process completes its task, it can
terminate through various methods, such as normal termination, abnormal termination,
or by the parent process using system calls like exit(), abort(), or kill().
Inter-Process Communication (IPC) refers to the mechanisms and techniques used by operating systems to allow different processes to communicate with each other. This supports running programs concurrently in an operating system.
The two fundamental models of Inter Process Communication are:
A. Shared Memory
B. Message Passing
Shared Memory
IPC through Shared Memory is a method where multiple processes are given access to
the same region of memory. This shared memory allows the processes to communicate
with each other by reading and writing data directly to that memory area.
Shared memory in IPC can be visualized as global variables in a program, which are shared across the entire program. However, shared memory in IPC goes beyond global variables: it allows multiple processes to share data through a common memory space, whereas global variables are restricted to a single process.
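For illustration, here is a minimal POSIX shared-memory sketch; the object name "/demo_shm" and the message are arbitrary, and a second process that opens and maps the same name would see the same bytes. On some Linux systems this must be linked with -lrt.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    const size_t size = 4096;

    /* Create (or open) a named shared-memory object and size it. */
    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    if (fd == -1) { perror("shm_open"); return 1; }
    if (ftruncate(fd, size) == -1) { perror("ftruncate"); return 1; }

    /* Map the object into this process's address space. */
    char *region = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) { perror("mmap"); return 1; }

    /* Any process that maps "/demo_shm" sees the bytes written here. */
    strcpy(region, "hello through shared memory");

    munmap(region, size);
    close(fd);
    shm_unlink("/demo_shm");   /* remove the name when no longer needed */
    return 0;
}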
Message Passing
IPC through Message Passing is a method where processes communicate by sending
and receiving messages to exchange data. In this method, one process sends a
message, and the other process receives it, allowing them to share information. Message
Passing can be achieved through different methods like Sockets, Message Queues or
Pipes.
Sockets provide an endpoint for communication, allowing processes to send and receive
messages over a network. In this method, one process (the server) opens a socket and
listens for incoming connections, while the other process (the client) connects to the
server and sends data. Sockets can use different communication protocols, such
as TCP (Transmission Control Protocol) for reliable, connection-oriented
communication or UDP (User Datagram Protocol) for faster, connectionless
communication.
1. Pipes – A pipe is a unidirectional channel through which one process writes a stream of bytes and a related process reads it. Types of Pipes are:
Anonymous Pipes and
Named Pipes (FIFOs)
2. Sockets – Sockets are used for network communication between processes
running on different hosts. They provide a standard interface for communication,
which can be used across different platforms and programming languages.
3. Shared memory – In shared memory IPC, multiple processes are given access to
a common memory space. Processes can read and write data to this memory,
enabling fast communication between them.
4. Semaphores – Semaphores are used for controlling access to shared resources.
They are used to prevent multiple processes from accessing the same resource
simultaneously, which can lead to data corruption.
5. Message Queuing – This allows messages to be passed between processes using either a single queue or several message queues. This is managed by the system kernel; the messages are coordinated using an API.
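As a small illustration of message passing through a pipe, the C sketch below creates an anonymous pipe shared by a parent and its forked child; the message "ping" is arbitrary.

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fds[2];
    char buf[32];

    if (pipe(fds) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                     /* child: reads from the pipe */
        close(fds[1]);                  /* close the unused write end */
        ssize_t n = read(fds[0], buf, sizeof(buf) - 1);
        if (n > 0) { buf[n] = '\0'; printf("child received: %s\n", buf); }
        close(fds[0]);
        _exit(0);
    }
    close(fds[0]);                      /* parent: writes to the pipe */
    write(fds[1], "ping", strlen("ping"));
    close(fds[1]);
    wait(NULL);                         /* reap the child */
    return 0;
}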
Types of Process
Let us first talk about the types of processes.
• Independent process: An independent process is not affected by the execution of other processes. Independent processes do not share any data or resources with other processes. No inter-process communication is required here.
• Co-operating process: Co-operating processes interact with each other and share data or resources, so a co-operating process can be affected by other executing processes. Inter-process communication (IPC) is a mechanism that allows such processes to communicate with each other and synchronize their actions. The communication between these processes can be seen as a method of cooperation between them.
Advantages of IPC
a. Enables processes to communicate with each other and share resources,
leading to increased efficiency and flexibility.
b. Facilitates coordination between multiple processes, leading to better overall
system performance.
c. Allows for the creation of distributed systems that can span multiple
computers or networks.
d. Can be used to implement various synchronization and communication
protocols, such as semaphores, pipes, and sockets.
Disadvantages of IPC
a. Increases system complexity, making it harder to design, implement, and
debug.
b. Can introduce security vulnerabilities, as processes may be able to access or
modify data belonging to other processes.
c. Requires careful management of system resources, such as memory
and CPU time, to ensure that IPC operations do not degrade overall system
performance.
d. Can lead to data inconsistencies if multiple processes try to access or modify the same data at the same time.
Overall, the advantages of IPC outweigh the disadvantages, as it is a necessary mechanism for modern operating systems and enables processes to work together and share resources in a flexible and efficient manner. However, care must be taken to design and implement IPC systems carefully, in order to avoid potential security vulnerabilities and performance issues.
CHAPTER FOUR
KNOW VARIOUS SCHEDULING TECHNIQUES
Explain CPU Scheduling criteria (CPU utilization, Throughput, Turn Around Time, Waiting Time, Load Average, Response Time)
o Preemptive Scheduling
o Non-Preemptive Scheduling
CPU scheduling has a direct impact on system performance and user experience. There are two primary types of CPU scheduling: preemptive and non-preemptive.
Understanding the differences between preemptive and non-preemptive scheduling
helps in designing and choosing the right scheduling algorithms for various types of
operating systems.
What is Preemptive Scheduling?
In preemptive scheduling, the operating system can interrupt a running process and move it from the running state back to the ready state, for example when a higher-priority process arrives in the ready queue or the running process's time slice expires.
Advantages of Preemptive Scheduling
• Because a process may not monopolize the processor, it is a more reliable method and does not allow one process to cause a denial of service.
• Each preemption prevents an ongoing task from blocking the completion of other tasks.
• The average response time is improved. Utilizing this method in a multiprogramming environment is more advantageous.
• Most modern operating systems (Windows, Linux and macOS) implement preemptive scheduling.
Disadvantages of Preemptive Scheduling
• More complex to implement in operating systems.
• Suspending the running process, changing the context, and dispatching the new incoming process all take more time.
• Might cause starvation: a low-priority process might be preempted again and again if multiple high-priority processes arrive.
• Causes concurrency problems, as processes can be stopped while they are accessing shared memory (or variables) or resources.
What is Non-Preemptive Scheduling?
In non-preemptive scheduling, a running process cannot be interrupted by the operating
system; it voluntarily relinquishes control of the CPU. In this scheduling, once the
resources (CPU cycles) are allocated to a process, the process holds the CPU till it gets
terminated or reaches a waiting state.
Algorithms based on non-preemptive scheduling are: First Come First Serve, Shortest Job First (SJF, in its basic non-preemptive form), Priority scheduling (non-preemptive version), etc.
Under the First Come First Serve (FCFS) algorithm, every process runs to completion once it gets the CPU; a small timing computation is sketched below.
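The following C sketch computes the FCFS schedule for three hypothetical processes that all arrive at time 0; each process's waiting time is the sum of the bursts before it, and its turnaround time is waiting time plus its own burst.

#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};               /* hypothetical burst times */
    int n = sizeof(burst) / sizeof(burst[0]);
    int wait = 0, total_wait = 0, total_tat = 0;

    printf("P#  Burst  Waiting  Turnaround\n");
    for (int i = 0; i < n; i++) {
        int tat = wait + burst[i];          /* turnaround = waiting + burst */
        printf("P%d  %5d  %7d  %10d\n", i + 1, burst[i], wait, tat);
        total_wait += wait;
        total_tat  += tat;
        wait += burst[i];                   /* next process starts after this one */
    }
    printf("Average waiting time: %.2f\n", (double)total_wait / n);
    printf("Average turnaround time: %.2f\n", (double)total_tat / n);
    return 0;
}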
• In preemptive scheduling, there is the overhead of switching the process from the ready state to the running state and vice versa, and of maintaining the ready queue, whereas non-preemptive scheduling has no overhead of switching the process from the running state to the ready state.
• Preemptive scheduling attains flexibility by allowing critical processes to access the CPU as soon as they arrive in the ready queue, no matter what process is executing currently. Non-preemptive scheduling is called rigid because even if a critical process enters the ready queue, the process running on the CPU is not disturbed.
• Preemptive scheduling has to maintain the integrity of shared data, which is why it has an associated cost; this is not the case with non-preemptive scheduling.
Parameter – Preemptive Scheduling – Non-Preemptive Scheduling:
Overhead – It has the overhead of scheduling the processes. – It does not have scheduling overhead.
Examples – Examples of preemptive scheduling are Round Robin and Shortest Remaining Time First. – Examples of non-preemptive scheduling are First Come First Serve and Shortest Job First.
Conclusion
Preemptive scheduling allows the operating system to interrupt and reassign the CPU to
different processes, making it responsive and efficient for high-priority tasks. Non-
preemptive scheduling lets processes run to completion without interruption,
simplifying the system but potentially causing delays for other tasks. The choice
between these methods depends on the system’s needs for performance and simplicity.
Computer scientists have developed some algorithms that can decide the order of
execution of processes in a way that shall help achieve maximum utilization of the CPU.
These CPU Scheduling algorithms in operating systems can be classified as follows:
1. First Come First Serve:
Of all the scheduling algorithms it is one of the simplest and easiest to implement. As the
name suggests the First Come First Serve scheduling algorithm means that the process
that requests the CPU first is allocated the CPU first. It is basically implemented using a
First In First Out queue. It supports both non-preemptive and preemptive CPU
scheduling algorithms.
2. Shortest Job First (SJF):
Shortest job first (SJF) is a scheduling algorithm that selects the waiting process with the
smallest execution time to be executed next. The method followed by the SJF scheduling
algorithm may or may not be preemptive. SJF reduces the average waiting time of other
waiting processes significantly.
3. Longest Job First (LJF):
The working of the Longest Job First (LJF) scheduling algorithm is the opposite of Shortest Job First (SJF): here, the process with the largest burst time is processed first. Longest Job First is one of the non-preemptive algorithms.
4. Priority Scheduling:
The Priority CPU scheduling algorithm is one of the pre-emptive methods of CPU scheduling. Here, the most important process must be done first. If there is more than one process with the same priority value, the FCFS algorithm is used to break the tie.
5. Round Robin:
In the Round Robin CPU scheduling algorithm the processes are allocated CPU time in a
cyclic order with a fixed time slot for each process. It is said to be a pre-emptive version
of FCFS. With the cyclic allocation of equal CPU time, it works on the principle of time-
sharing.
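As an illustration of the cyclic allocation, the C sketch below simulates Round Robin with a hypothetical time quantum of 2 units and three processes that all arrive at time 0.

#include <stdio.h>

int main(void) {
    int burst[] = {5, 8, 3};            /* hypothetical remaining burst times */
    int n = 3, quantum = 2, time = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (burst[i] == 0) continue;            /* this process already finished */
            int slice = burst[i] < quantum ? burst[i] : quantum;
            time += slice;                          /* run for one time slice */
            burst[i] -= slice;
            if (burst[i] == 0) {
                printf("P%d finishes at time %d\n", i + 1, time);
                done++;
            }
        }
    }
    return 0;
}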
6. Shortest Remaining Time First:
The Shortest remaining time first CPU scheduling algorithm is a preemptive version of
the Shortest job first scheduling algorithm. Here the CPU is allocated to the process that
needs the smallest amount of time for its completion. Here, short processes are handled very fast, but long-running processes keep waiting.
7. Longest Remaining Time First:
The longest remaining time first CPU scheduling algorithm is a preemptive CPU
scheduling algorithm. This algorithm selects those processes first which have the
longest processing time remaining for completion i.e. processes with the largest burst
time are allocated the CPU time first.
8. Highest Response Ratio Next:
The Highest Response Ratio Next is one of the non-preemptive CPU Scheduling
algorithms. It has the distinction of being recognised as one of the most optimal
scheduling algorithms. As the name suggests here the CPU time is allocated on the basis
of the response ratio of all the available processes where it selects the process that has
the highest Response Ratio. The selected process will run till its execution is complete.
Comparison of scheduling algorithms (allocation – complexity – average waiting time (AWT) – preemption – starvation – performance):
Priority (non-preemptive) – According to the priority, with monitoring of new incoming higher-priority jobs – Less complex than preemptive Priority – Smaller than FCFS – No preemption – Starvation: yes – Most beneficial with batch systems.
MLQ – According to the process that resides in the bigger-priority queue – More complex than the priority scheduling algorithms – Smaller than FCFS – No preemption – Starvation: yes – Good performance, but contains a starvation problem.
MFLQ – According to the process of a bigger priority queue – The most complex, but its complexity depends on the time-quantum (TQ) size – Smaller than all scheduling types in many cases – No preemption – Starvation: no – Good performance in many cases.
CPU scheduling algorithms help to schedule the tasks or processes in a way that the
processes are executed in a particular order and the CPU time is utilised to its optimum
level.
CHAPTER FIVE
UNDERSTAND INTERRUPT AND MASKING TRAPS
What is an Interrupt?
Interrupts allow the CPU to keep processing programs while I/O operations are in progress. When the CPU needs an I/O operation, the request is sent to a queue and the CPU carries on with other processing; later, when the input/output (I/O) operation is ready, the I/O device interrupts the CPU with the available data so that the remaining processing can be done. This is what makes interrupts useful: if interrupts were not present, the CPU would need to sit idle until each I/O operation completed. So, to avoid this CPU waiting time, interrupts come into the picture.
When a device raises an interrupt at, let's say, instruction i, the processor first completes the execution of instruction i. Then it loads the Program Counter (PC) with the address of the first instruction of the ISR. Before loading the Program Counter with this address, the address of the interrupted instruction is moved to a temporary location. Therefore, after handling the interrupt, the processor can continue with instruction i+1.
While the processor is handling the interrupts, it must inform the device that its request
has been recognized so that it stops sending the interrupt request signal. Also, saving the
registers so that the interrupted process can be restored in the future, increases the
delay between the time an interrupt is received and the start of the execution of the ISR.
This is called Interrupt Latency.
5.2 Types of Interrupts
Interrupts are classified into two types:
1. Software interrupts or
2. Hardware interrupts.
1. Software Interrupts
A software interrupt is an interrupt that is produced by software or the system, as opposed to hardware. Software interrupts are also known as traps and exceptions. They serve as a signal for the operating system or a system service to carry out a certain function or respond to an error condition. Software interrupts are also generated when an internal device or a software program needs to access a system call.
Software interrupts also occur when system calls are made. In contrast to the fork() system call, which generates a software interrupt deliberately, division by zero throws an exception that results in a software interrupt.
2. Hardware interrupt
A hardware interrupt is an electronic signal from an external hardware device that
indicates it needs attention from the OS. One example of this is moving a mouse or
pressing a keyboard key. In these examples of interrupts, the processor must stop to read
the mouse position or keystroke at that instant.
In this type of interrupt, all the devices are connected to the Interrupt Request Line (IRL).
A single request line is used for all the n devices. To request an interrupt, a device closes
its associated switch. When a device requests an interrupt, the value of INTR is the
logical OR of the requests from individual devices.
Typically, a hardware IRQ has a value that associates it with a particular device. This
makes it possible for the processor to determine which device is requesting service by
raising the IRQ, and then provide service accordingly.
Maskable interrupts
In a processor, an internal interrupt mask register selectively enables and disables
hardware requests. When the mask bit is set, the interrupt is enabled. When it is clear,
the interrupt is disabled. Signals that are affected by the mask are maskable interrupts.
Non-maskable interrupts
In some cases, the interrupt mask cannot be disabled so it does not affect some interrupt
signals. These are non-maskable interrupts and are usually high-priority events that
cannot be ignored.
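As a user-space analogue of masking, POSIX lets a process block (mask) and unblock signals, which are software interrupts, using sigprocmask(); below is a minimal sketch.

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    sigset_t mask;
    sigemptyset(&mask);
    sigaddset(&mask, SIGINT);                 /* put SIGINT into the mask set */

    sigprocmask(SIG_BLOCK, &mask, NULL);      /* SIGINT is now blocked (masked) */
    printf("SIGINT masked for 5 seconds; Ctrl-C is deferred...\n");
    sleep(5);                                 /* a Ctrl-C here stays pending */

    sigprocmask(SIG_UNBLOCK, &mask, NULL);    /* unmask: a pending SIGINT is delivered now */
    printf("SIGINT unmasked\n");
    return 0;
}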
Spurious interrupts
Also known as a phantom interrupt or ghost interrupt, a spurious interrupt is a type of
hardware interrupt for which no source can be found. These interrupts are difficult to
identify if a system misbehaves. If the ISR does not account for the possibility of such
interrupts, it may result in a system deadlock.
The operating system handles an interrupt in the following steps:
Step 1. Any time that an interrupt is raised, it may either be an I/O interrupt or a system interrupt.
Step 2. The current state comprising registers and the program counter is then stored
in order to conserve the state of the process.
Step 3. The current interrupt and its handler is identified through the interrupt vector
table in the processor.
Step 4. This control now shifts to the interrupt handler, which is a function located in
the kernel space.
Step 5. Specific tasks are performed by Interrupt Service Routine (ISR) which are
essential to manage interrupt.
Step 6. The status from the previous session is retrieved so as to build on the process
from that point.
Step 7. The control is then shifted back to the other process that was pending and the
normal process continues.
The ISR handles an interrupt in the following steps:
Step 1 − When an interrupt occurs, let us assume the processor is executing the i-th instruction, with the program counter pointing to the next instruction, the (i+1)-th.
Step 2 − When the interrupt occurs, the program counter value is stored on the process stack and the program counter is loaded with the address of the interrupt service routine.
Step 3 − Once the interrupt service routine is completed, the address on the process stack is popped and placed back in the program counter.
Step 4 − Execution then resumes at the (i+1)-th instruction.
8. Keystroke depressions and mouse movements are examples of hardware interrupts, while all system calls are examples of software interrupts.
Benefits of Interrupts
• Real-time responsiveness: Interrupts permit a system to respond promptly to outside events or signals, enabling real-time processing.
• Efficient resource usage: Interrupt-driven systems are more efficient than systems that depend on busy-waiting or polling strategies. Instead of continuously checking for the occurrence of an event, interrupts permit the processor to remain idle until an event occurs, conserving processing power and lowering energy consumption.
• Multitasking and concurrency: Interrupts support multitasking by allowing a processor to address multiple tasks concurrently.
• Improved system throughput: By handling events asynchronously, interrupts allow a device to overlap computation with I/O operations or other tasks, maximizing system throughput and overall performance.
Traps can also be purposefully created by the software to ask the operating system for a particular service, such as reading from a file or allocating memory. The operating system's trap handler is in charge of managing the trap and taking the proper action in accordance with the trap's cause. For instance, if an illegal instruction set off the trap, the trap handler may terminate the program and notify the user of the error. If the trap was brought on by a request for a particular service, the trap handler may carry out the requested service and transfer control back to the program.
Interrupt vectors are addresses that inform the interrupt handler as to where to find the
ISR (interrupt service routine, also called interrupt service procedure). All interrupts are
assigned a number from 0 to 255, with each of these interrupts being associated with a
specific interrupt vector.
“Interrupt vector is used to determine the (starting address of the) interrupt routine of the
interrupting device quickly.”
A major difference between traps and interrupts lies in their origin: traps can only be raised by software, whereas interrupts can be caused by devices' hardware as well as by software.
5.3 Explain the use of masking in relation to interrupts
Levels Of Interrupt
The interrupt level defines the source of the interrupt and is often referred to as the
interrupt source. The interrupt priority defines which of a set of pending interrupts is
serviced first. There are two types of trigger mechanisms:
Level-triggered interrupts
This interrupt module generates an interrupt by holding the interrupt signal at a particular
active logic level. The signal gets negated once the processor commands it after the
device has been serviced. The processor samples the interrupt signal during each
instruction cycle and recognizes it when it is asserted during the sampling process. After
servicing a device, the processor may service other devices before exiting the ISR.
Level-triggered interrupts allow multiple devices to share a common signal using wired-
OR connections. However, it can be cumbersome for firmware to deal with them.
Edge-triggered interrupts
This module generates an interrupt only when it detects an asserting edge of the interrupt
source. Here, the device that wants to signal an interrupt creates a pulse which changes
the interrupt source level.
Edge-triggered interrupts provide a more flexible way to handle interrupts. They can be
serviced immediately, regardless of how the interrupt source behaves. Plus, they reduce
the complexity of firmware code and reduce the number of conditions required for the
firmware. The drawback is that if the pulse is too short to be detected, special hardware
may be required to detect and service the interrupt.
5.4 Differentiate between S/O interrupt timers, Hardware error and programming
interrupt
A "S/O interrupt timer" refers to a software-based timer that triggers an interrupt when a
specific time period has elapsed, while a "hardware error" is a malfunction within a
physical component of a computer system, and a "programming interrupt" is an interrupt
triggered by an error or specific instruction within a program itself; essentially, S/O timers
are software-controlled, hardware errors are physical issues, and programming
interrupts are triggered by code errors.
Breakdown:
S/O Interrupt Timer:
• Function: Used to schedule events or time-based operations within a program by
setting a timer that, when reached, generates an interrupt signal to the processor,
allowing the program to execute specific code at that time.
• Trigger: Software instruction to start the timer and its programmed duration.
• Example: Implementing a delay function in a program, where the timer is set to
trigger after a specific time interval.
Hardware Error:
• Function: An unexpected malfunctioning of a physical component like RAM,
CPU, or hard drive, which can potentially disrupt normal system operation.
• Trigger: Physical issues like faulty hardware, extreme temperatures, or power
surges.
• Example: A sudden system crash due to a failing RAM chip.
Programming Interrupt:
• Function: An interrupt triggered by a specific instruction or error condition within
a program, causing the processor to temporarily suspend its current execution
and jump to a dedicated error handling routine.
• Trigger: Invalid memory access, division by zero, or intentional "software
interrupt" instructions.
• Example: A program attempting to access memory that is not allocated, leading
to a "page fault" interrupt.
6.1 OS Kernel.
What is Kernel?
A kernel is the essential foundation of a computer's operating system (OS). It's the core
that provides basic services for all other parts of the OS. It's the main layer between the
OS and underlying computer hardware, and it helps with tasks such as process
and memory management, inter-process communication, file system management,
device control and networking.
During normal system startup, a computer's basic input/output system, or BIOS,
completes a hardware bootstrap or initialization. It then runs a bootloader which loads
the kernel from a storage device -- such as a hard drive -- into a protected memory space.
Once the kernel is loaded into computer memory, the BIOS transfers control to the
kernel. It then loads other OS components to complete the system startup and make
control available to users through a desktop or other user interface.
If the kernel is damaged or can't load successfully, the computer won't be able to start
completely -- if at all. Service will be required to correct hardware damage or to restore
the OS kernel to a working version.
The kernel keeps track of all of these processes. It knows which hardware resources are available and which processes need them. It then allocates time for each process to use those resources.
1. Monolithic Kernel
In a monolithic kernel, all operating system services run together in kernel space. This makes it harder to add or remove functionality without affecting the entire system.
Example: Unix, Linux etc.
2. Micro Kernel
It is a kernel type with a minimalist approach: the kernel itself provides only core services such as virtual memory and thread scheduling. It is more stable with fewer services in kernel space, and it puts the rest in user space. It is used in small OSes.
Example: Mach, L4, AmigaOS, Minix, K42 etc.
Advantages
• Reliability: Microkernel architecture is designed to be more reliable than
monolithic kernels. Since most of the operating system services run outside the
kernel space, any bug or security vulnerability in a service won’t affect the entire
system.
• Flexibility: Microkernel architecture is more flexible than monolithic kernels
because it allows different operating system services to be added or removed
without affecting the entire system.
• Modularity: Microkernel architecture is more modular than monolithic kernels
because each operating system service runs independently of the others. This
makes it easier to maintain and debug the system.
• Portability: Microkernel architecture is more portable than monolithic kernels
because most of the operating system services run outside the kernel space. This
makes it easier to port the operating system to different hardware architectures.
Disadvantages
• Performance: Microkernel architecture can be slower than monolithic kernels
because it requires more context switches between user space and kernel space.
• Complexity: Microkernel architecture can be more complex than monolithic
kernels because it requires more communication and synchronization
mechanisms between the different operating system services.
• Development Difficulty: Developing operating systems based on microkernel
architecture can be more difficult than developing monolithic kernels because it
requires more attention to detail in designing the communication and
synchronization mechanisms between the different services.
• Higher Resource Usage: Microkernel architecture can use more system
resources, such as memory and CPU, than monolithic kernels because it requires
more communication and synchronization mechanisms between the different
operating system services.
3. Hybrid Kernel
It is the combination of both the monolithic kernel and the microkernel. It has the speed and design of a monolithic kernel and the modularity and stability of a microkernel.
Example:
Windows NT, Netware, BeOS etc.
Advantages
• Performance: Hybrid kernels can offer better performance than microkernels
because they reduce the number of context switches required between user
space and kernel space.
• Reliability: Hybrid kernels can offer better reliability than monolithic kernels
because they isolate drivers and other kernel components in separate protection
domains.
• Flexibility: Hybrid kernels can offer better flexibility than monolithic kernels
because they allow different operating system services to be added or removed
without affecting the entire system.
• Compatibility: Hybrid kernels can be more compatible than microkernels
because they can support a wider range of device drivers.
Disadvantages
• Complexity: Hybrid kernels can be more complex than monolithic kernels
because they include both monolithic and microkernel components, which can
make the design and implementation more difficult.
• Security: Hybrid kernels can be less secure than microkernels because they have
a larger attack surface due to the inclusion of monolithic components.
• Maintenance: Hybrid kernels can be more difficult to maintain than microkernels
because they have a more complex design and implementation.
• Resource Usage: Hybrid kernels can use more system resources than
microkernels because they include both monolithic and microkernel
components.
4. Exo Kernel
It is the type of kernel which follows the end-to-end principle. It has as few hardware abstractions as possible. It allocates physical resources directly to applications.
Example:
Nemesis, ExOS etc.
Advantages
• Flexibility: Exokernels offer the highest level of flexibility, allowing developers to
customize and optimize the operating system for their specific application needs.
• Performance: Exokernels are designed to provide better performance than
traditional kernels because they eliminate unnecessary abstractions and allow
applications to directly access hardware resources.
• Security: Exokernels provide better security than traditional kernels because
they allow for fine-grained control over the allocation of system resources, such
as memory and CPU time.
• Modularity: Exokernels are highly modular, allowing for the easy addition or
removal of operating system services.
Disadvantages
• Complexity: Exokernels can be more complex to develop than traditional kernels
because they require greater attention to detail and careful consideration of
system resource allocation.
• Development Difficulty: Developing applications for exokernels can be more
difficult than for traditional kernels because applications must be written to
directly access hardware resources.
• Limited Support: Exokernels are still an emerging technology and may not have
the same level of support and resources as traditional kernels.
• Debugging Difficulty: Debugging applications and operating system services on
exokernels can be more difficult than on traditional kernels because of the direct
access to hardware resources.
5. Nano Kernel
It is the type of kernel that offers hardware abstraction but without system services. The microkernel also does not have system services, so the terms Micro Kernel and Nano Kernel have become analogous.
Advantages
• Small Size: Nanokernels are designed to be extremely small, providing only the
most essential functions needed to run the system. This can make them more
efficient and faster than other kernel types.
• High Modularity: Nanokernels are highly modular, allowing for the easy addition
or removal of operating system services, making them more flexible and
customizable than traditional monolithic kernels.
• Security: Nanokernels provide better security than traditional kernels because
they have a smaller attack surface and a reduced risk of errors or bugs in the code.
• Portability: Nanokernels are designed to be highly portable, allowing them to run
on a wide range of hardware architectures.
Disadvantages
• Limited Functionality: Nanokernels provide only the most essential functions,
making them unsuitable for more complex applications that require a broader
range of services.
• Complexity: Because nanokernels provide only essential functionality, they can
be more complex to develop and maintain than other kernel types.
• Performance: While nanokernels are designed for efficiency, their minimalist
approach may not be able to provide the same level of performance as other
kernel types in certain situations.
• Compatibility: Because of their minimalist design, nanokernels may not be
compatible with all hardware and software configurations, limiting their practical
use in certain contexts.
Functions of Kernel
The kernel is responsible for various critical functions that ensure the smooth operation
of the computer system. These functions include:
1. Process Management
• Scheduling and execution of processes.
• Context switching between processes.
• Process creation and termination.
2. Memory Management
• Allocation and deallocation of memory space.
• Managing virtual memory.
• Handling memory protection and sharing.
3. Device Management
• Managing input/output devices.
• Providing a unified interface for hardware devices.
• Handling device driver communication.
4. File System Management
• Managing file operations and storage.
• Handling file system mounting and unmounting.
• Providing a file system interface to applications.
5. Resource Management
• Managing system resources (CPU time, disk space, network bandwidth)
• Allocating and deallocating resources as needed
• Monitoring resource usage and enforcing resource limits
6. Security and Access Control
• Enforcing access control policies.
• Managing user permissions and authentication.
• Ensuring system security and integrity.
7. Inter-Process Communication
• Facilitating communication between processes.
• Providing mechanisms like message passing and shared memory.
Working of Kernel
• A kernel loads first into memory when an operating system is loaded and remains in memory until the operating system is shut down again. It is responsible for various tasks such as disk management, task management, and memory management.
• The kernel has a process table that keeps track of all active processes.
• The process table contains a per-process region table whose entry points to
entries in the region table.
• The kernel loads an executable file into memory during the 'exec' system call.
• It decides which process should be allocated to the processor to execute and
which process should be kept in the main memory to execute. It basically acts as
an interface between user applications and hardware. The major aim of the kernel
is to manage communication between software i.e. user-level applications and
hardware i.e., CPU and disk memory.
Objectives of Kernel
• To establish communication between user-level applications and hardware.
• To decide the state of incoming processes.
• To control disk management.
• To control memory management.
• To control task management.
Conclusion
Kernels are the heart of operating systems, managing how hardware and software
communicate and ensuring everything runs smoothly. Different types of kernels—like
monolithic, microkernels, hybrid kernels, and others—offer various ways to balance
performance, flexibility, and ease of maintenance. Understanding these kernel types
helps us appreciate how operating systems work and how they handle the complex tasks
required to keep our computers and devices running efficiently. Each type of kernel has
its own strengths and weaknesses, but all play a crucial role in the world of computing.
QUESTION: What is a monolithic kernel?
ANSWER: A monolithic kernel is a single large process running entirely in a single address space, containing all core services like process management, memory management, file systems, and device drivers.
QUESTION: What is a microkernel?
ANSWER: A microkernel is a minimalistic kernel that includes only essential functions
such as inter-process communication and basic memory management, with other
services running in user space.
QUESTION: What is a hybrid kernel?
ANSWER: A hybrid kernel combines aspects of both monolithic and microkernels,
running some services in kernel space and others in user space, to balance performance
and modularity.
1. An operating system is a key component that manages computer software and hardware resources. The kernel is the heart of the OS that translates user commands into machine language.
6. Single-user OS, multiuser OS, multiprocessor OS, real-time OS, and distributed OS are types of operating systems. Monolithic and micro kernels are types of kernels.
7. The operating system is the first program to run when the computer boots up. The kernel is the first program to start when the operating system is launched.
6.4 Components of the OS System
1) Kernel: A kernel is a vital computer program and the core element of an operating
system. It has complete control over the system, as it converts user requests into
machine language.
8) User interface: A user interface (UI) is a program that allows a user to interact with an operating system (OS). It's the way a user sees and uses a device to give it instructions or receive information.
CHAPTER SEVEN
KNOW THE DIFFERENT OPERATING SYSTEM COMMANDS
A shell script is a text file that contains a sequence of commands for a Unix-based
operating system (OS). It's called a shell script because it combines a sequence of
commands in a file that would otherwise have to be typed in one at a time into a single
script. The shell is the OS's command-line interface (CLI) and interpreter for the set of
commands that are used to communicate with the system.
A shell script is usually created to save time for command sequences that a user needs
to use repeatedly. Like other programs, the shell script can contain parameters,
comments and subcommands that the shell must follow. Users simply enter the file
name on a command line to initiate the sequence of commands in the shell script.
Shells provide a way for you to communicate with the operating system. This
communication is carried out either interactively (input from the keyboard is acted upon
immediately) or as a shell script. A shell script is a sequence of shell and operating
system commands that is stored in a file.
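For example, the short, hypothetical script below bundles a repetitive backup task into one file; after saving it as backup.sh and running chmod +x backup.sh, typing ./backup.sh executes all of its commands in sequence.

#!/bin/sh
# backup.sh - copy all .txt files in the current directory into ./backup
mkdir -p backup                      # create the backup folder if it does not exist
for f in *.txt; do
    [ -f "$f" ] || continue          # skip if no .txt files match
    cp "$f" backup/                  # copy each text file
done
echo "Backup complete: $(ls backup | wc -l) file(s) in ./backup"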
These commands can simply be used by opening your terminal or command line
interface, typing the command, and pressing Enter. These commands can vary slightly depending on the specific operating system (like Windows, Linux, macOS) and the command prompt used.
• "ls":
• "ls -l": Display a detailed listing of files including permissions, size, and
owner information.
• "ls -a": Show hidden files.
• "mkdir":
• "mkdir new_folder_name": Create a new folder called
"new_folder_name".
• "rm":
• "rm file_name": Delete a file named "file_name".
• "rm -r directory_name": Recursively delete a directory and its contents.
7.4 Commands to manipulate files and directories (mkdir, cp, mv, rm, ln, rmdir, type, etc.)
touch – Creates an empty file or updates the access and modification times. Example: touch new_file.txt
cp – Copies files or directories. Example: cp file1.txt file2.txt
mv – Moves files or directories. Example: mv file1.txt Documents
rm – Removes files or directories. Example: rm old_file.txt
chmod – Changes the permissions of a file or directory. Example: chmod 644 file.txt
chown – Changes the owner and group of a file or directory. Example: chown user:group file.txt
ln – Creates links between files. Example: ln -s target_file symlink
cat – Concatenates files and displays their contents. Example: cat file1.txt file2.txt
head – Displays the first few lines of a file. Example: head file.txt
tail – Displays the last few lines of a file. Example: tail file.txt
more – Displays the contents of a file page by page. Example: more file.txt
less – Displays the contents of a file with advanced navigation features. Example: less file.txt
diff – Compares files line by line. Example: diff file1.txt file2.txt
patch – Applies a diff file to update a target file. Example: patch file.txt < changes.diff
jobs – Lists active jobs and their status in the current shell session. Example: jobs
bg – Puts a job in the background. Example: bg <job_id>
fg – Brings a background job to the foreground. Example: fg <job_id>
nohup – Runs a command immune to hangups, with output to a specified file. Example: nohup command &
disown – Removes jobs from the shell's job table, allowing them to run independently. Example: disown <job_id>
telnet – Establishes interactive text-based communication with a remote host. Example: telnet hostname
netstat – Displays network connections, routing tables, interface statistics, masquerade connections, and multicast memberships. Example: netstat -tuln
ifconfig – Displays or configures network interfaces and their settings. Example: ifconfig
iwconfig – Configures wireless network interfaces. Example: iwconfig wlan0
route – Displays or modifies the IP routing table. Example: route -n
arp – Displays or modifies the Address Resolution Protocol (ARP) cache. Example: arp -a
ss – Displays socket statistics. Example: ss -tuln
hostname – Displays or sets the system's hostname. Example: hostname
mtr – Combines the functionality of ping and traceroute, providing detailed network diagnostic information. Example: mtr google.com
Emacs – A versatile text editor with extensive customization capabilities and support for various programming languages. Open a file: emacs filename. Save: press Ctrl + X, then Ctrl + S. Exit: press Ctrl + X, then Ctrl + C.
Nano – A simple and user-friendly text editor designed for ease of use and accessibility. Open a file: nano filename. Save and exit: press Ctrl + O, then Ctrl + X.
Ed – A standard Unix text editor that operates in line-oriented mode, making it suitable for batch processing and automation tasks. Open a file: ed filename. Exit: type q and press Enter.
Jed – A lightweight yet powerful text editor that provides an intuitive interface and support for various programming languages. Open a file: jed filename. Save and exit: press Alt + X, then type exit and press Enter.
Conclusion
In conclusion, Unix commands serve as a fundamental toolkit for navigating and
managing the Unix operating system, which has evolved from its inception in the 1960s
to become one of the most widely used OS platforms across various domains including
personal computing, servers, and mobile devices. From its origins at Bell Labs with
developers Dennis M. Ritchie and Ken Thompson to the birth of the C programming
language and the subsequent emergence of Unix-like systems such as Linux, the Unix
ecosystem has significantly shaped the computing landscape. Understanding basic Unix
commands is essential for users to efficiently manipulate files, manage processes,
configure networks, and perform system administration tasks, thereby empowering
them to leverage the full potential of Unix-based systems for diverse computing needs.