OS MATERIAL R23
OPERATING SYSTEMS
Course Objectives: The main objectives of the course are to make the student:
Understand the basic concepts and principles of operating systems, including process management, memory management, file systems, and protection.
Make use of process scheduling algorithms and synchronization techniques to achieve better performance of a computer system.
Illustrate different conditions for deadlock and their possible solutions.
Text Books:
1. Operating System Concepts, Silberschatz A, Galvin P B, Gagne G, 10th Edition, Wiley, 2018.
2. Modern Operating Systems, Tanenbaum A S, 4th Edition, Pearson, 2016.
Reference Books:
1. Operating Systems -Internals and Design Principles, Stallings W, 9th edition, Pearson, 2018
2. Operating Systems: A Concept Based Approach, D.M Dhamdhere, 3rd Edition, McGraw-Hill, 2013.
Online Learning Resources:
1. https://nptel.ac.in/courses/106/106/106106144/
2. http://peterindia.net/OperatingSystems.html
ASSISTANT PROFESSOR OF CSE CH.SUNEETHA
R23 II-II(CSE) OPERATING SYSTEM
UNIT-I
Operating Systems Overview: Introduction, Operating system functions, Operating systems
operations, Computing environments, Free and Open-Source Operating Systems.
System Structures: Operating System Services, User and Operating-System Interface, system
calls, Types of System Calls, system programs, Operating system Design and Implementation,
Operating system structure, Building and Booting an Operating System, Operating system debugging.
Introduction to Operating System:
An operating system (OS) is essential computer software that manages and coordinates hardware and software
resources. It acts as an interface between the user and the computer's hardware, enabling users to interact with the
device and run applications.
Resource Management: The OS manages and allocates resources like CPU, memory, storage, and input/output
devices.
File Management: It allows users to store, organize, and access files and folders.
Process Management: The OS handles the execution of programs, ensuring efficient resource utilization.
Interface: It provides a user interface (UI) that allows users to interact with the computer through visual elements or
commands.
Device Management: It controls and manages communication with various hardware components, including printers,
keyboards, and more.
Networking: It enables communication with other devices and networks.
Examples:
Desktop OS: Windows, macOS, Linux.
Mobile OS: Android, iOS.
Other: Embedded systems (e.g., in cars), real-time OS (used in critical applications).
What are the functions of an OS?
An operating system (OS) manages computer hardware and software resources, providing a
platform for running applications and interacting with the user.
Its primary functions include process management, memory management, file management,
and device management.
Additionally, an OS handles security, networking, user interface, and resource allocation.
Different functions of an OS:
1. Process Management:
Manages the execution of multiple programs (processes) concurrently.
Allocates resources (like CPU time) to each process.
Ensures processes can interact with each other.
2. Memory Management:
Allocates memory space to different programs.
Optimizes memory usage to avoid fragmentation and maximize efficiency.
Handles memory allocation and deallocation.
3. File Management:
Organizes and stores files and directories on secondary storage devices.
Provides functionalities like creating, deleting, copying, and renaming files.
Implements file system structures for efficient storage and retrieval.
4. Device Management:
Manages interactions between the computer and its peripherals (printers, scanners, etc.).
Provides drivers for different hardware devices.
Enables communication between the OS and hardware devices.
5. Security:
Both free software and open source software are often distributed under open licenses, allowing users to use,
modify, and distribute the software without restrictions.
Difference between Free Software (FS) and Open Source Software (OSS):
1. View on software's role in life − FS: software is an important part of people's lives. OSS: software is just software; there are no ethics associated directly with it.
2. Ethics and freedom − FS: software freedom translates to social freedom. OSS: ethics are to be associated with the people, not with the software.
3. Value of freedom − FS: freedom is a value that is more important than any economical advantage. OSS: freedom is not an absolute concept; freedom should be allowed, not imposed.
Multiple processes communicate with one another through communication lines in the
network.
The OS handles routing and connection strategies, and the problems of contention and
security.
Following are the major activities of an operating system with respect to communication −
Two processes often require data to be transferred between them
Both the processes can be on one computer or on different computers, but are
connected through a computer network.
Communication may be implemented by two methods, either by Shared Memory or by
Message Passing.
5.Error handling:
Errors can occur anytime and anywhere.
An error may occur in CPU, in I/O devices or in the memory hardware.
Following are the major activities of an operating system with respect to error handling −
The OS constantly checks for possible errors.
The OS takes an appropriate action to ensure correct and consistent computing.
6.Resource Management:
In case of multi-user or multi-tasking environment, resources such as main memory, CPU
cycles and files storage are to be allocated to each user or job.
Following are the major activities of an operating system with respect to resource
management −
The OS manages all kinds of resources using schedulers.
CPU scheduling algorithms are used for better utilization of CPU.
7.Protection:
Considering a computer system having multiple users and concurrent execution of multiple
processes, the various processes must be protected from each other's activities.
Protection refers to a mechanism or a way to control the access of programs, processes, or
users to the resources defined by a computer system
Following are the major activities of an operating system with respect to protection −
The OS ensures that all access to system resources is controlled.
The OS ensures that external I/O devices are protected from invalid access attempts.
The OS provides authentication features for each user by means of passwords.
User Interface and Operating-System Interface:
The user and the operating system are connected to each other through an interface; the interface is what links the user and the OS.
In computers there are different types of interface that can be used to connect users to the machine, and this connection is responsible for data transfer.
Not all of these interfaces are always in use, but each can be used whenever it is needed, so different types of tasks can be performed with the help of different interfaces.
The interface is always connected to the OS, so a command given by the user is carried out directly by the OS. A large number of operations can be performed through the command line interface: multiple commands can be entered, but only one is executed at a time.
The command line interface is important because all the basic operations in the computer are performed with the help of the OS, which is also responsible for memory management.
Using it, memory can be divided and put to use.
Command Line Interface advantages:
Controls the OS or application.
Faster management.
Ability to store scripts, which helps in automating regular tasks.
Troubleshooting network connection issues.
Command Line Interface disadvantages:
A steeper learning curve, associated with memorizing commands and a complex syntax.
Different commands are used in different shells.
Graphical user interface:
The graphical user interface (GUI) is used for tasks such as playing games and watching videos; these are done through the GUI because such applications require graphics.
The GUI is a necessary interface because only through it can the user clearly see pictures and play videos.
So we need a GUI for computers, and this is provided with the help of the operating system.
When a task is performed on the computer, the OS checks the task and determines which interface is needed for it.
So, we need a GUI in the OS.
The basic components of GUIs are −
Start menu with program groups
Taskbar showing running programs
Desktop screen
Different icons and shortcuts.
Choice of interface:
The choice of interface means that the OS examines a task and determines which interface is most suitable for it, so that the task can be performed in the minimum possible time and the output shown on the screen.
The interface selected in this way is called the choice of interface, and this selection is done with the help of the OS.
Introduction of System Call:
A system call is a programmatic way in which a computer program requests a service from the
kernel of the operating system on which it is executed.
A system call is a way for programs to interact with the operating system.
A computer program makes a system call when it requests a service from the operating system’s kernel.
System call provides the services of the operating system to the user programs via the
Application Program Interface(API).
System calls are the only entry points into the kernel system and are executed in kernel mode.
1. File System Operations: These system calls are made while working with files in the OS, covering file-manipulation operations such as creation and deletion.
open(): Opens a file for reading or writing. The file could be of any type, such as a text file or an audio file.
read(): Reads data from a file. Just after a file is opened through the open() system call, if some process wants to read data from it, it makes a read() system call.
write(): Writes data to a file. Whenever the user makes any kind of modification in a file and saves it, this call is made.
close(): Closes a previously opened file.
seek(): Moves the file pointer within a file. This call is typically made when the user tries to read data from a specific position in a file. For example, to read from line 47, the file pointer moves from line 1 (or wherever it was previously) to line 47.
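The file-manipulation calls above can be sketched with Python's os module, which exposes thin wrappers over the underlying open/read/write/seek/close system calls (the filename demo.txt is just an illustrative choice):

```python
import os

# open(): create/open a file for writing
fd = os.open("demo.txt", os.O_CREAT | os.O_WRONLY | os.O_TRUNC)
os.write(fd, b"hello, system calls")   # write(): put data into the file
os.close(fd)                           # close(): release the descriptor

fd = os.open("demo.txt", os.O_RDONLY)  # open(): reopen for reading
os.lseek(fd, 7, os.SEEK_SET)           # seek(): move the file pointer to byte 7
data = os.read(fd, 100)                # read(): read from that position onward
os.close(fd)

print(data)  # b'system calls'
```

Each call here maps one-to-one onto a system call entering the kernel, which is why the descriptor fd (an integer) must be passed back explicitly on every operation.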
2. Process Control:
These types of system calls deal with process creation, process termination, process allocation, deallocation, etc.
Basically, they manage all the processes that are part of the OS.
fork(): Creates a new process (child) by duplicating the current process (parent).
This call is made when a process makes a copy of itself; the parent process may be halted temporarily until the child process finishes its execution.
exec(): Loads and runs a new program in the current process, replacing the current process image with a new one. All the data such as the stack, registers, and heap memory is replaced by the new program; this is known as an overlay. For example, when you execute Java byte code using the command java "filename", an exec() call is made in the background to run the JVM, which executes the file.
wait(): The primary purpose of this call is to ensure that the parent process doesn't proceed
further with its execution until all its child processes have finished their execution. This call is
made when one or more child processes are forked.
exit(): It simply terminates the current process.
kill(): This call sends a signal to a specific process and has various purposes, including requesting it to quit voluntarily, forcing it to quit, or asking it to reload its configuration.
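A minimal sketch of the fork/wait/exit pattern described above, using Python's POSIX wrappers (this only runs on POSIX systems, where os.fork is available):

```python
import os

# fork(): duplicate the calling process; returns 0 in the child,
# and the child's PID in the parent.
pid = os.fork()

if pid == 0:
    # Child process: do its work, then exit() with status 7.
    os._exit(7)
else:
    # wait(): the parent blocks until the child finishes.
    _, status = os.waitpid(pid, 0)
    print(os.WEXITSTATUS(status))  # 7
```

Note that the parent is not halted by fork() itself; it is the explicit waitpid() call that makes it wait for the child's exit status.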
3. Memory Management:
These types of system calls deal with memory allocation, deallocation, and dynamically changing the size of the memory allocated to a process. In short, the overall management of memory is done by making these system calls.
brk(): Changes the data segment size for a process in heap memory. It takes an address as an argument to define the end of the heap, explicitly setting the heap's size.
sbrk(): This call is also for memory management in heap, it also takes an argument as an integer
(+ve or -ve) specifying whether to increase or decrease the size respectively.
mmap(): Memory Map − It maps a file or device into main memory, and further into a process's address space, for performing operations. Any changes made to the contents of the file through the mapping are reflected in the actual file.
munmap(): Unmaps a memory-mapped file from a process's address space and out of main memory.
mlock() and munlock(): Memory locking defines a mechanism through which certain pages stay in memory and are not swapped out to the swap space on disk. This can be done to avoid page faults. Memory unlocking is the opposite: it releases a lock previously acquired on pages.
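The mmap()/munmap() behaviour can be sketched with Python's mmap module (the filename mapped.bin is illustrative); note how a write through the mapping ends up in the file itself:

```python
import mmap
import os

# Create a 16-byte file to map.
fd = os.open("mapped.bin", os.O_CREAT | os.O_RDWR | os.O_TRUNC)
os.write(fd, b"x" * 16)

# mmap(): map the file into the process's address space.
with mmap.mmap(fd, 16) as mem:
    mem[0:5] = b"HELLO"        # modify the file through memory
# Leaving the with-block unmaps the region (munmap()) for us.
os.close(fd)

# The change made through the mapping is reflected in the actual file.
with open("mapped.bin", "rb") as f:
    print(f.read(5))           # b'HELLO'
```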
4. Interprocess Communication (IPC)
When two or more processes need to communicate, various IPC mechanisms are used by the OS, which involve making numerous system calls. Some of them are:
pipe(): Creates a unidirectional communication channel between processes. For example, a parent process may communicate with its child process through a pipe, with the parent acting as the input source of the child.
socket(): Creates a network socket for communication. Processes in the same or other networks can communicate through this socket, provided they have the necessary network permissions.
shmget(): Short for 'shared memory get'. It allows one or more processes to share a portion of memory and achieve interprocess communication.
semget(): Short for 'semaphore get'. This call typically manages the coordination of multiple processes while accessing a shared resource, that is, the critical section.
msgget(): Short for 'message get'. One of the fundamental IPC concepts is the 'message queue', a queue data structure in memory through which various processes communicate with each other. The message queue is allocated through this call, giving processes a structured way to exchange data.
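The pipe() call above can be demonstrated with a small POSIX-only sketch, where a child writes into the pipe and the parent reads from it:

```python
import os

# pipe(): create a unidirectional channel (read end r, write end w).
r, w = os.pipe()

pid = os.fork()                  # POSIX only
if pid == 0:
    os.close(r)                  # child only writes
    os.write(w, b"hi parent")
    os._exit(0)
else:
    os.close(w)                  # parent only reads
    msg = os.read(r, 64)
    os.waitpid(pid, 0)
    print(msg)                   # b'hi parent'
```

Closing the unused end in each process is what makes the channel cleanly unidirectional: the parent's read() returns end-of-file once the child's write end is gone.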
5. Device Management
The device management system calls are used to interact with the various peripheral devices attached to the PC, and even for the management of the current device.
SetConsoleMode(): This call is made to set the mode of the console (input or output). It allows a process to control various console modes. On Windows, it is used to control the behaviour of the command line.
WriteConsole(): Allows us to write data to the console screen.
ReadConsole(): Allows us to read data from the console (given the appropriate arguments).
open(): This call is made whenever a device or a file is opened. A unique file descriptor is
created to maintain the control access to the opened file or device.
close(): This call is made when the system or the user closes the file or device.
Importance of System Calls:
Efficient Resource Management: System Calls help your computer manage its resources
efficiently. They allocate and manage memory so programs run smoothly without using up too many
resources. This is important for multitasking and overall performance.
Security and Isolation: System Calls ensure that one program cannot interfere with or access the
memory of another program. This enhances the security and stability of your device.
Multitasking Capabilities: System Calls support multitasking, allowing multiple programs to run
simultaneously. This improves productivity and makes it easy to switch between applications.
Enhanced Control: System Calls provide a high level of control over your device’s operations.
They allow you to start and stop processes, manage files, and perform various system-related tasks.
Input/Output (I/O) Operations: System Calls enable communication with input and output
devices, such as your keyboard, mouse, and screen. They ensure that these devices work effectively.
Networking and Communication: System Calls facilitate networking and communication
between different applications. They make it easy to transfer data over networks, browse the web,
send emails, and connect online.
What is a System Program:
In an operating system, a user can use different types of system programs, which support the performance of all the application software on the computer.
System programs support the development and execution of programs, and they operate with the help of system calls, since system calls underlie the different system programs used for different tasks.
File management − These programs create, delete, copy, rename, print, and generally manipulate files and directories.
Status information − These programs report information about input/output, processes, storage, and CPU utilization, including how much time and memory a task requires.
Programming language support − Compilers, assemblers, and interpreters are programming-language support provided with the operating system for particular purposes on the computer.
Program loading and execution − After a program is entered and loaded, it must be executed and its output produced; this task is also performed by system programs with the help of system calls.
Communication − These services allow a number of devices to communicate with each other, over wired or wireless connections; communication is necessary for the operating system.
Background services − These run without direct user interaction; examples include services that change the background of your window and services that scan for and detect viruses on the computer.
Purpose of using system programs
System programs communicate with and coordinate the activities and functions of the hardware and software of a system, and also control the operations of the hardware.
An operating system is one example of system software.
The operating system controls the computer hardware and acts as an interface between the hardware and the application software.
Types of System programs
The types of system programs are as follows −
Utility program
It manages, maintains and controls various computer resources. Utility programs are comparatively technical and are
targeted for the users with solid technical knowledge.
Few examples of utility programs are: antivirus software, backup software and disk tools.
Device drivers
It controls the particular device connected to a computer system. Device drivers basically act as a translator between the
operating system and device connected to the system.
Example − printer driver, scanner driver, storage device driver etc.
Directory reporting tools
These tools are required in an operating system to facilitate navigation through the computer
system.
Example − dir, ls, Windows Explorer etc.
Operating System Design and Implementation
An operating system is a construct that allows user application programs to interact with the system hardware.
The operating system by itself does not provide any function, but it provides an environment in which different applications and
programs can do useful work.
There are many problems that can occur while designing and implementing an operating system. These are covered in operating
system design and implementation.
For example, if the mechanism and policy are independent, then few changes are required in the mechanism when the policy changes. If a policy favours I/O-intensive processes over CPU-intensive processes, a policy change to prefer CPU-intensive processes will not change the mechanism.
Operating System Implementation:
The operating system needs to be implemented after it is designed. Earlier they were written in assembly language but now higher level
languages are used. The first system not written in assembly language was the Master Control Program (MCP) for Burroughs Computers.
Exo-Kernel Structure
Exokernel is an operating system developed at MIT to provide application-level management
of hardware resources.
By separating resource management from protection, the exokernel architecture aims to
enable application-specific customization.
Due to its limited operability, exokernel size typically tends to be minimal.
The OS will always have an impact on the functionality, performance, and scope of the apps
that are developed on it because it sits in between the software and the hardware.
The exokernel operating system makes an attempt to address this problem by rejecting the
notion that an operating system must provide abstractions upon which to base applications.
The objective is to impose as few abstractions as possible on developers while still giving them freedom.
Advantages of Exo-Kernel:
Highly customizable − being virtual, functionality is easily accessible and can be customized on a need basis.
Secure − being virtual, with no direct hardware access, such systems are highly secured.
Disadvantages:
Lower performance − a virtual structured operating system performs less well than a modular structured operating system.
Complex design − each virtual component of the machine must be planned carefully, as each component has to abstract the underlying hardware.
Layered Structure:
An OS can be broken into pieces and retain much more control over the system. In this
structure, the OS is broken into a number of layers (levels).
The bottom layer (layer 0) is the hardware, and the topmost layer (layer N) is the user
interface.
These layers are so designed that each layer uses the functions of the lower-level layers.
This simplifies debugging: if the lower-level layers have already been debugged and an error occurs
during debugging, the error must be in the layer currently being debugged, as the lower-level layers have
already been verified.
The main disadvantage of this structure is that at each layer, the data needs to be modified
and passed on which adds overhead to the system. Moreover, careful planning of the layers is
necessary, as a layer can use only lower-level layers.
UNIX is an example of this structure.
Advantages :
Layering makes it easier to enhance the operating system, as the implementation of a layer
can be changed easily without affecting the other layers.
It is very easy to perform debugging and system verification.
Disadvantages:
In this structure, the application's performance is degraded compared to the simple structure.
It requires careful planning to design the layers, as a higher layer may use the
functionalities of only the lower layers.
Testing and Debugging:Rigorous testing and debugging are crucial to ensure the OS functions
correctly and reliably.
Booting an Operating System:
Power-on Self-Test (POST):When a computer is powered on, the BIOS or UEFI firmware runs a diagnostic
routine to check hardware components.
Boot Device Selection:The BIOS/UEFI determines the order in which boot devices (like hard drives or USB
drives) will be checked for bootable software.
Bootloader:A special program, often stored in ROM, loads the operating system kernel into memory.
Kernel Initialization:The kernel initializes the hardware and prepares the system for running applications.
System Startup:Once the kernel is loaded and initialized, the OS starts up, and the user can interact with the
system.
Connecting Building and Booting:
The boot process is essentially how the built operating system is loaded and made functional on the computer.
The bootloader acts as a bridge between the hardware and the operating system, allowing the OS to load and run.
The built OS provides the software that manages the hardware and allows applications to run, and the booting process
makes it available for use.
Debugging :
In an operating system (OS), debugging is the process of identifying, isolating, and resolving errors or
bugs within the OS code or related components. It's a critical task in software development to ensure
the OS functions correctly, reliably, and efficiently. This process involves using various tools and
techniques to trace the source of an issue, understand how it manifests, and then correct the
underlying problem.
Here's a more detailed look:
Identifying Errors:
Debugging begins with identifying the presence of an error, which could be a crash, unexpected
behavior, or performance issue.
Isolating the Cause:
Once an error is identified, the focus shifts to pinpointing the specific part of the OS code or a
related component where the issue originates.
Using Debugging Tools:
Various tools are employed to assist in this process. These include debuggers, log files, memory
dump analysis, and static/dynamic analysis tools.
UNIT - II
Processes: Process Concept, Process scheduling, Operations on processes, Inter-process
communication.
Threads and Concurrency: Multithreading models, Thread libraries, Threading issues.
CPU Scheduling: Basic concepts, Scheduling criteria, Scheduling algorithms, Multiple
processor scheduling.
PROCESS CONCEPT:
A process is a program in execution. It includes the program code, the current activity (program counter and registers), the stack, the data section, and the heap.
Process State :
As a process executes, it changes states
new: The process is being created
running: Instructions are being executed
waiting: The process is waiting for some event to occur
ready: The process is waiting to be assigned to a processor
terminated: The process has finished execution.
Diagram of Process State Or Lifecycle of a Process:
If that process gets suspended, the contents of the registers are saved on a stack and the
pointer to the particular stack frame is stored in the PCB.
By this technique, the hardware state can be restored so that the process can be scheduled to
run again.
A Process Control Block is a data structure maintained by the Operating System for every
process.
The PCB is identified by an integer process ID (PID).
A PCB keeps all the information needed to keep track of a process, including the process state, process ID, program counter, CPU registers, CPU-scheduling information, memory-management information, accounting information, and I/O status information.
Mutual Exclusion
Mutual exclusion requires that only one process thread can enter the critical section at a time. This is
useful for synchronization and also prevents race conditions.
Barrier
A barrier does not allow individual processes to proceed until all the processes reach it. Many parallel languages
and collective routines impose barriers.
Spinlock
This is a type of lock. The processes trying to acquire this lock wait in a loop while checking if the lock is
available or not. This is known as busy waiting because the process is not doing any useful operation even
though it is active.
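The barrier primitive described above can be demonstrated with Python's threading.Barrier: no thread passes the barrier until all of them have reached it, so every "before" event is recorded before any "after" event.

```python
import threading

events = []                       # shared log of what each worker did
barrier = threading.Barrier(3)    # releases only when 3 threads arrive

def worker(name):
    events.append((name, "before"))
    barrier.wait()                # block here until ALL workers arrive
    events.append((name, "after"))

threads = [threading.Thread(target=worker, args=(f"w{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every "before" event precedes every "after" event.
befores = [i for i, (_, ph) in enumerate(events) if ph == "before"]
afters = [i for i, (_, ph) in enumerate(events) if ph == "after"]
print(max(befores) < min(afters))  # True
```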
Approaches to Inter process Communication:
The different approaches to implement inter process communication are given as follows −
Pipe
A pipe is a data channel that is unidirectional. Two pipes can be used to create a two-way data
channel between two processes. This uses standard input and output methods. Pipes are used in all
POSIX systems as well as Windows operating systems.
Socket
The socket is the endpoint for sending or receiving data in a network. This is true for data sent
between processes on the same computer or data sent between different computers on the same
network. Most of the operating systems use sockets for interprocess communication.
File
A file is a data record that may be stored on a disk or acquired on demand by a file server. Multiple
processes can access a file as required. All operating systems use files for data storage.
Signal
Signals are useful in inter process communication in a limited way. They are system messages that are
sent from one process to another. Normally, signals are not used to transfer data but are used for
remote commands between processes.
Shared Memory
Shared memory is the memory that can be simultaneously accessed by multiple processes. This is
done so that the processes can communicate with each other. All POSIX systems, as well as Windows
operating systems use shared memory.
Message Queue
Multiple processes can read and write data to the message queue without being connected to each
other. Messages are stored in the queue until their recipient retrieves them. Message queues are quite
useful for inter process communication and are used by most operating systems.
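A message-queue exchange of this kind can be sketched with Python's multiprocessing.Queue (a user-level analogue of an OS message queue): the sender deposits messages without any direct connection to the reader, and messages wait in the queue until retrieved.

```python
from multiprocessing import Process, Queue

def producer(q):
    # The sender just deposits messages; it never connects to the reader.
    q.put("ping")
    q.put("done")

if __name__ == "__main__":
    q = Queue()
    p = Process(target=producer, args=(q,))
    p.start()
    # Messages stay in the queue until the recipient retrieves them.
    first, second = q.get(), q.get()
    p.join()
    print(first, second)  # ping done
```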
A diagram that demonstrates message queue and shared memory methods of inter process communication is as
follows
By dividing an application or a program into multiple sequential threads that run in quasi-parallel, the programming model becomes simpler.
Threads share an address space and all of its data among themselves.
This ability is essential for some specific applications.
Threads are lighter weight than processes and are faster to create and destroy.
Types of Threads:
There are two main types of threads, the User Level Thread and the Kernel Level Thread; let's discuss each one in detail:
User Level Thread (ULT):
User Level Threads are implemented in a user-level library; they are not created using system calls.
Thread switching does not need to call the OS or cause an interrupt to the kernel. The kernel doesn't know about
user-level threads and manages them as if they were single-threaded processes.
Advantages of ULT
Can be implemented on an OS that doesn't support multithreading.
Simple representation, since a thread has only a program counter, register set, and stack space.
Simple to create, since no kernel intervention is needed.
Thread switching is fast, since no OS calls need to be made.
Disadvantages of ULT
Little or no coordination between the threads and the kernel.
If one thread causes a page fault, the entire process blocks.
Kernel Level Thread (KLT)
The kernel knows about and manages the threads. Instead of a thread table in each process, the kernel itself has a master thread
table that keeps track of all the threads in the system. In addition, the kernel maintains the
traditional process table to keep track of processes. The OS kernel provides system calls to create and manage
threads.
Advantages of KLT
Since the kernel has full knowledge of the threads in the system, the scheduler may decide to give more time
to processes having a large number of threads.
Good for applications that frequently block.
Disadvantages of KLT
Slower and less efficient than user level threads, since thread operations require kernel involvement.
Each thread requires a thread control block, which is an overhead.
Thread Library:
A thread library provides the programmer with an application program interface (API) for creating and managing
threads.
Ways of implementing thread library:
There are two primary ways of implementing a thread library, which are as follows −
The first approach is to provide a library entirely in user space with no kernel support. All code and data structures for
the library exist in user space; invoking a function in the library results in a local function call in user space, not a
system call.
The second approach is to implement a kernel level library supported directly by the operating system. In this case the
code and data structures for the library exist in kernel space.
Invoking a function in the application program interface for the library typically results in a system call to the kernel.
The main thread libraries which are used are given below −
POSIX threads − Pthreads, the threads extension of the POSIX standard, may be provided as either a user level or
a kernel level library.
Win32 threads − The Windows thread library is a kernel level library available on Windows systems.
Java threads − The Java thread API allows threads to be created and managed directly in Java programs.
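All three libraries share the same create/start/join pattern. As an illustrative sketch, here is that pattern using Python's threading module as a stand-in for these APIs (the worker function and thread names are hypothetical):

```python
import threading

results = []

def worker(name):
    # Each thread runs this function independently.
    results.append("hello from " + name)

# Create two threads through the library's API, start them,
# and wait for both to finish (join).
threads = [threading.Thread(target=worker, args=("thread-%d" % i,))
           for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # ['hello from thread-0', 'hello from thread-1']
```

The Pthreads equivalents of create and join are pthread_create() and pthread_join().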
Multi-Threading Models:
Multithreading allows the execution of multiple parts of a program at the same time. These parts are known
as threads and are lightweight processes available within the process. Therefore, multithreading leads to
maximum utilization of the CPU by multitasking.
The main models for multithreading are:
1. One to One model
2. Many to One model
3. Many to Many model
One to One Model:
The one to one model maps each of the user threads to a kernel thread.
This means that many threads can run in parallel on multiprocessors and other threads can run when one
thread makes a blocking system call.
A disadvantage of the one to one model is that the creation of a user thread requires a corresponding
kernel thread.
Since a lot of kernel threads burden the system, there is restriction on the number of threads in the
system.
A diagram that demonstrates the one to one model is given as follows −
Many to One Model:
The many to one model maps many user threads to a single kernel thread. This model is quite efficient
because thread management is handled in user space.
A disadvantage of the many to one model is that a thread making a blocking system call blocks the entire
process. Also, multiple threads cannot run in parallel, as only one thread can access the kernel at a time.
A diagram that demonstrates the many to one model is given as follows −
Many to Many Model:
The many to many model multiplexes many user threads onto a smaller or equal number of kernel threads.
There can be as many user threads as required, and their corresponding kernel threads can run in
parallel on a multiprocessor.
A diagram that demonstrates the many to many model is given as follows −
fork() and exec() System Calls:
If a thread calls exec() immediately after fork(), duplicating every thread is unnecessary, since the exec()
system call will overwrite the whole process with the program given in the arguments passed to exec().
In cases like this, a fork() that replicates only the invoking thread will do.
Thread Cancellation
The process of terminating a thread before it has completed is called thread cancellation.
Let's take an example to make sense of it.
Suppose a multithreaded program has several threads concurrently scanning a database for some
information.
Once one of the threads returns with the necessary result, the remaining threads are cancelled.
The thread to be cancelled is called the target thread. Thread cancellation can be done in two ways:
Asynchronous Cancellation: One thread immediately terminates the target thread.
Deferred Cancellation: The target thread periodically checks whether it should terminate, allowing it
an opportunity to cancel itself at a safe point.
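Deferred cancellation can be sketched with a shared flag that each searcher checks at safe points. The following is an illustrative Python sketch of the database-scan scenario above; the record values and function names are hypothetical:

```python
import threading

cancel_requested = threading.Event()   # shared cancellation flag
scanned = []

def scan(records):
    for record in records:
        # Cancellation point: the target thread checks the flag itself,
        # so it is only ever cancelled at a safe moment.
        if cancel_requested.is_set():
            return
        scanned.append(record)
        if record == "target":
            # Found the result: ask the other searchers to stop.
            cancel_requested.set()

scan(["a", "b", "target", "c"])
print(scanned)  # ['a', 'b', 'target'] -- "c" is never scanned
```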
Signal Handling
In single-threaded applications, a signal is simply delivered to the process. In a multithreaded
program, however, the question is to which thread of the program the signal should be delivered.
Depending on the signal, the options are to deliver it to:
Every thread of the process.
The thread to which the signal applies.
A specific thread designated to receive all signals for the process.
THREAD POOL:
Creating a new thread for every request can consume excessive resources, including CPU cycles
and memory: when too many threads are created, the high context-switching overhead can lead to
system slowdowns or crashes.
This is especially problematic when a large number of concurrent requests triggers rapid thread
creation with no pool to reuse existing threads.
A thread pool addresses this by creating a fixed set of threads at start-up; each incoming request is
handed to an idle thread from the pool, and the thread returns to the pool for reuse when it finishes.
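A minimal sketch of the idea, using Python's concurrent.futures.ThreadPoolExecutor; the task function is a placeholder standing in for real request handling:

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(n):
    # Placeholder task standing in for real request handling.
    return n * n

# A fixed pool of 4 worker threads is created once and reused for all
# 10 requests, so a burst of requests never triggers unbounded thread
# creation.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle_request, range(10)))
print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Because the pool size bounds concurrency, context-switching overhead stays fixed no matter how many requests arrive.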
3. Turnaround Time
For a particular process, an important criterion is how long it takes to execute that process. The time elapsed
from the time of submission of a process to the time of completion is known as the turnaround time. Turn-
around time is the sum of times spent waiting to get into memory, waiting in the ready queue, executing in
CPU, and waiting for I/O.
Turn Around Time = Completion Time – Arrival Time.
4. Waiting Time
A scheduling algorithm does not affect the time required to complete the process once it starts
execution. It only affects the waiting time of a process i.e. time spent by a process waiting in
the ready queue.
Waiting Time = Turnaround Time – Burst Time.
5. Response Time
In an interactive system, turnaround time is not the best criterion. A process may produce
some output fairly early and continue computing new results while previous results are being
output to the user. Thus another criterion is the time from the submission of a request until
the first response is produced. This measure is called response time.
Response Time = CPU Allocation Time (when the CPU was allocated for the first time) – Arrival
Time
6. Completion Time
The completion time is the time when the process stops executing, which means that the
process has completed its burst time and is completely executed.
7. Priority
If the operating system assigns priorities to processes, the scheduling mechanism should favor
the higher-priority processes.
8. Predictability
A given process always should run in about the same amount of time under a similar system
load.
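The formulas above can be checked with a small first-come-first-served (FCFS) simulation. This is an illustrative sketch; the process tuples are made-up example data:

```python
def fcfs_metrics(procs):
    """procs: list of (arrival, burst) pairs.
    Returns (completion, turnaround, waiting) per process
    under first-come-first-served scheduling."""
    time, out = 0, []
    for arrival, burst in sorted(procs, key=lambda p: p[0]):
        time = max(time, arrival) + burst    # run the job to completion
        turnaround = time - arrival          # Turnaround = Completion - Arrival
        waiting = turnaround - burst         # Waiting = Turnaround - Burst
        out.append((time, turnaround, waiting))
    return out

print(fcfs_metrics([(0, 4), (1, 3), (2, 1)]))
# [(4, 4, 0), (7, 6, 3), (8, 6, 5)]
```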
In multiple-processor scheduling, a system with many processors that share the same
memory, bus, and input/output devices is referred to as a multiprocessor.
The bus links all of the computer's other parts, including the RAM and I/O devices, to
the processors.
Types of Multiprocessor Scheduling Algorithms:
Operating systems utilize a range of multiprocessor scheduling algorithms. Among the most typical types are −
Round-Robin Scheduling − The round-robin scheduling algorithm allocates a time quantum to each process
and runs processes in round-robin fashion on each processor. Since it ensures that each process
gets an equal share of CPU time, this strategy can be useful in systems where all programs have the
same priority.
Priority Scheduling − Processes are given levels of priority in this method, and those with greater priorities are
scheduled to run first. This technique might be helpful in systems where some jobs, like real-time tasks, call for
a higher priority.
Shortest job first (SJF) scheduling − This algorithm schedules tasks according to their expected execution
time. The shortest job runs first, then the next shortest, and so on. This technique can be helpful in
systems with many short processes, since it shortens the average waiting time.
Fair-share scheduling − In this technique, the number of processors and the priority of each process determine how
much time is allotted to each. As it ensures that each process receives a fair share of processing time, this technique
might be helpful in systems with a mix of long and short processes.
Earliest deadline first (EDF) scheduling − Each process in this algorithm is given a deadline, and the process with
the earliest deadline is the one that will execute first. In systems with real-time activities that have stringent deadlines,
this approach can be helpful.
Scheduling using a multilevel feedback queue (MLFQ) − Using a multilayer feedback queue (MLFQ), processes
are given a range of priority levels and are able to move up or down the priority levels based on their behavior. This
strategy might be useful in systems with a mix of short and long processes.
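As a sketch of how the round-robin quantum works on one processor (assuming, for illustration, that all processes arrive at time 0):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin on one CPU; bursts[i] is process i's CPU burst.
    Returns each process's completion time."""
    remaining = list(bursts)
    ready = deque(range(len(bursts)))   # ready queue of process ids
    time = 0
    done = [0] * len(bursts)
    while ready:
        pid = ready.popleft()
        run = min(quantum, remaining[pid])   # run for at most one quantum
        time += run
        remaining[pid] -= run
        if remaining[pid] > 0:
            ready.append(pid)                # back of the ready queue
        else:
            done[pid] = time
    return done

print(round_robin([5, 3, 1], quantum=2))  # [9, 8, 5]
```

Note how the short job (burst 1) finishes at time 5 instead of waiting behind the longer bursts, which is exactly the responsiveness round-robin buys.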
UNIT – III
Synchronization Tools: The Critical Section Problem, Peterson’s Solution, Mutex Locks,
Semaphores, Monitors, Classic problems of Synchronization.
Deadlocks: system Model, Deadlock characterization, Methods for handling Deadlocks, Deadlock
prevention, Deadlock avoidance, Deadlock detection, Recovery from Deadlock.
Process synchronization is a technique used to coordinate processes that use shared data. There are two types of processes in an
operating system:
1. Independent Process – A process that does not affect, and is not affected by, other processes during its execution is called an
independent process. Example: a process that does not share any variables, databases, files, etc.
2. Cooperating Process – A process that affects, or is affected by, other processes during its execution is called a cooperating process.
Example: processes that share files, variables, databases, etc. are cooperating processes.
Critical section:
A critical section is a part of a program where shared resources like memory or files are accessed by multiple processes or threads. To
avoid issues like data inconsistency or race conditions, synchronization techniques ensure that only one process or thread uses the critical
section at a time.
The critical section contains shared variables or resources that need to be synchronized to maintain the consistency of data variables.
In simple terms, a critical section is a group of instructions/statements or regions of code that need to be executed atomically, such as
accessing a resource (file, input or output port, global data, etc.) In concurrent programming, if one process tries to change the value of
shared data at the same time as another thread tries to read the value (i.e., data race across threads), the result is unpredictable. The
access to such shared variables (shared memory, shared files, shared port, etc.) is to be synchronized.
A few programming languages have built-in support for synchronization. It is critical to understand race conditions when
writing kernel-mode code (a device driver, kernel thread, etc.), since the programmer can directly access and modify kernel data
structures.
Code that uses a critical section is typically structured in four sections:
Entry Section – The part of the process that decides the entry of a particular process into the critical section, out of many competing
processes.
Critical Section – The part in which only one process is allowed to enter and modify the shared variable. This part of the process
ensures that no other process can access the shared resource at the same time.
Exit Section – This section allows a process that is waiting in the entry section to enter the critical section. It also ensures
that a process which has finished executing in the critical section leaves through this exit section.
Remainder Section – The parts of the code other than the entry section, critical section, and exit section are known as the remainder
section.
Solutions to the critical section problem must satisfy these three requirements:
1. Mutual Exclusion – If a process is executing in its critical section, no other process is allowed to execute in its critical
section.
2. Progress – When no process is in the critical section, any process that requests entry can enter without indefinite delay. Only
processes that are not executing in their remainder sections may participate in deciding which process enters next, and this decision
cannot be postponed indefinitely.
3. Bounded Waiting – There must exist an upper bound on the number of times other processes are allowed to enter
their critical sections after a process has made a request to enter its critical section and before that request is granted.
Process synchronization is handled by two approaches:
Types of Solutions to the Critical Section Problem
There are two main types of solutions to the critical section problem:
Software Based
Hardware Based
Software Based
1. Software Approach – In the software approach, a specific algorithm is used to maintain synchronization of the
data. In simple two-process schemes, a temporary variable like turn or a boolean variable flag is used; while the
entry condition holds, the process stays in a busy-waiting state. Such simple schemes do not satisfy all the critical
section requirements. Another software approach, known as Peterson's Solution, is better suited for synchronization.
It uses two variables in the entry section to maintain consistency: a flag (boolean array) and a turn variable (storing
which process may enter). It satisfies all three critical section requirements.
Peterson’s Algorithm:
To handle the problem of Critical Section (CS) Peterson gave an algorithm with a bounded waiting.
• Suppose there are N processes (P1, P2, … PN) and each of them at some point need to enter the Critical Section.
• A flag[] array of size N is maintained which is by default false and whenever a process need to enter the critical section it has to set its flag as
true, i.e. suppose Pi wants to enter so it will set flag[i]=TRUE.
• There is another variable called turn which indicates the process number that may currently enter the CS. A process exiting the
CS changes turn to another number from among the list of ready processes.
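The two-process case of the algorithm can be demonstrated as follows. This sketch relies on CPython's global interpreter lock for the sequentially consistent memory accesses Peterson's algorithm assumes; on real hardware, memory barriers would be required:

```python
import sys
import threading

sys.setswitchinterval(1e-4)   # switch threads often so busy-waits stay short

flag = [False, False]   # flag[i]: process i wants to enter the CS
turn = [0]              # which process must defer
count = [0]             # shared variable protected by the algorithm
N = 500

def worker(i):
    other = 1 - i
    for _ in range(N):
        flag[i] = True          # announce intent to enter
        turn[0] = other         # politely let the other go first
        while flag[other] and turn[0] == other:
            pass                # busy wait in the entry section
        count[0] += 1           # critical section
        flag[i] = False         # exit section

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(count[0])  # 1000 -- no increments are lost
```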
2. Hardware Approach – Hardware synchronization can be done through a lock-and-unlock technique. Locking is done in the entry section, so that
only one process is allowed to enter the critical section; after it completes its execution, the process moves to the exit section, where an unlock
operation is done so that another waiting process can repeat this cycle of execution. This scheme is designed so that all three critical section
conditions are satisfied.
Using Interrupts –
These are easy to implement. When interrupts are disabled, no context switch can occur, so only one process can be inside its critical
section at a time.
Test_and_Set Operation –
Test-and-set is a hardware instruction that reads a boolean lock variable and sets it to true in a single
atomic step; no interrupt can occur partway through. It is mainly used for mutual exclusion; a similar
effect can be achieved with a compare-and-swap instruction. While the lock is on, only the process holding
it may access the critical section; other processes remain in a busy-waiting state. Hence the critical
section requirements are achieved.
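A sketch of a spinlock built on test-and-set. Since Python has no hardware test-and-set instruction, the atomicity is simulated here with an internal lock; on real hardware the same read-and-set step would be a single instruction:

```python
import sys
import threading

sys.setswitchinterval(1e-4)   # keep busy-wait stalls short

class SpinLock:
    def __init__(self):
        self._locked = False
        self._atomic = threading.Lock()  # simulates hardware atomicity only

    def test_and_set(self):
        # Atomically: return the old value and set the flag to True.
        with self._atomic:
            old = self._locked
            self._locked = True
            return old

    def acquire(self):
        while self.test_and_set():   # busy wait while the lock is held
            pass

    def release(self):
        self._locked = False

lock = SpinLock()
count = [0]

def worker():
    for _ in range(500):
        lock.acquire()
        count[0] += 1     # critical section
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(count[0])  # 1000
```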
A solution to a process synchronization problem should meet three important criteria:
• Correctness: Data access synchronization and control synchronization should be performed in accordance
with synchronization requirements of the problem.
• Maximum concurrency: A process should be able to operate freely except when it needs to wait for other
processes to perform synchronization actions.
• No busy waits: To avoid performance degradation, synchronization should be performed through blocking
rather than through busy waits.
Hardware Synchronization Algorithms : Unlock and Lock, Test and Set, Swap
There are three algorithms in the hardware approach of solving Process Synchronization problem:
Test and Set
Swap
Unlock and Lock
Hardware instructions in many operating systems help in the effective solution of critical section
problems.
Mutex Locks:
A mutex (Mutual Exclusion Object) provides a locking mechanism and is distinct from a binary
semaphore. A mutex is mainly used to provide mutual exclusion over a specific portion of code, so
that only one process or thread works with that section at a particular time. A mutex enforces strict
ownership: only the thread that locks the mutex can unlock it. It is used specifically for locking a
resource to ensure that only one thread accesses it at a time. Because of this strict ownership, a mutex
is typically not used for signaling between threads; it is used purely for mutual exclusion, ensuring
that a resource is accessed by only one thread at a time.
A mutex is an object.
Mutex works upon the locking mechanism.
Operations on mutex:
Lock
Unlock
Mutex does not have any subtypes.
A mutex can only be modified by the process that is requesting or releasing a resource.
If the mutex is locked then the process needs to wait in the process queue, and mutex can only
be accessed once the lock is released.
Advantages of Mutex
No race condition arises, as only one process is in the critical section at a time.
Data remains consistent, and it helps in maintaining integrity.
It is a simple locking mechanism that is acquired before entering a critical section and released while
leaving it.
Disadvantages of Mutex
If, after entering the critical section, the thread sleeps or is preempted by a higher-priority
process, no other thread can enter the critical section. This can lead to starvation.
Only when the previous thread leaves the critical section can other processes enter it;
there is no other mechanism to lock or unlock the critical section.
Implementations of mutexes can lead to busy waiting, which wastes CPU cycles.
Using Mutex
The producer-consumer problem: Consider the standard producer-consumer problem. Assume we
have a buffer of 4096 bytes. A producer thread collects data and writes it to the buffer. A
consumer thread processes the collected data from the buffer. The objective is that the two threads
should not operate on the buffer at the same time.
Solution: A mutex provides mutual exclusion: either the producer or the consumer can hold the key (mutex)
and proceed with its work. As long as the producer is filling the buffer, the consumer must
wait, and vice versa. At any point in time, only one thread can work with the entire buffer. The
concept can be generalized using semaphores.
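A minimal sketch of mutex locking with Python's threading.Lock; the shared counter is illustrative, standing in for any shared resource:

```python
import threading

mutex = threading.Lock()
balance = [0]

def deposit(times):
    for _ in range(times):
        with mutex:            # only one thread inside at a time; the
            balance[0] += 1    # locking thread also performs the unlock

threads = [threading.Thread(target=deposit, args=(10000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balance[0])  # 20000 -- no lost updates
```

Without the lock, the read-modify-write of balance could interleave between threads and lose updates.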
What is Semaphore?
A semaphore is an integer.
Semaphore uses signaling mechanism.
Operation on semaphore:
Wait
Signal
Semaphore is of two types:
Counting Semaphore
Binary Semaphore
Semaphores work with two atomic operations (wait, signal) which can modify the semaphore value.
If a process needs a resource and no resource is free, the process needs to perform a wait operation
until the semaphore value is greater than zero.
4. Synchronization: The producer and consumer must be synchronized to ensure that the buffer is
accessed correctly.
Semaphore Solution:
1. Full Semaphore: A semaphore that keeps track of the number of full slots in the buffer.
2. Empty Semaphore: A semaphore that keeps track of the number of empty slots in the buffer.
3. Mutex Semaphore: A semaphore that protects access to the buffer.
Producer Process:
1. Produce Item: Produce an item.
2. Wait for Empty Slot: Wait for an empty slot in the buffer using the empty semaphore.
3. Add Item to Buffer: Add the item to the buffer.
4. Signal Full Slot: Signal that a full slot is available using the full semaphore.
Consumer Process:
1. Wait for Full Slot: Wait for a full slot in the buffer using the full semaphore.
2. Remove Item from Buffer: Remove an item from the buffer.
3. Signal Empty Slot: Signal that an empty slot is available using the empty semaphore.
4. Consume Item: Consume the item.
Pseudocode:
// Producer Process
while (true) {
    item = produce_item();      // Produce item
    wait(empty_semaphore);      // Wait for empty slot
    buffer.add(item);           // Add item to buffer
    signal(full_semaphore);     // Signal full slot
}

// Consumer Process
while (true) {
    wait(full_semaphore);       // Wait for full slot
    item = buffer.remove();     // Remove item from buffer
    signal(empty_semaphore);    // Signal empty slot
    consume_item(item);         // Consume item
}
Benefits of Semaphore Solution:
1. Synchronization: Semaphores ensure that the producer and consumer are synchronized.
2. Mutual Exclusion: Semaphores ensure that only one process can access the buffer at a time.
3. Efficient: Semaphores provide an efficient solution to the producer-consumer problem.
Real-World Applications:
1. Print Queue: The producer-consumer problem can be applied to a print queue, where multiple
processes can add print jobs to the queue.
2. Network Buffer: The producer-consumer problem can be applied to a network buffer, where
multiple processes can send and receive data.
Mutex: If the mutex is locked, the process needs to wait in the process queue, and the mutex can only
be accessed once the lock is released.
Semaphore: If the process needs a resource and no resource is free, the process needs to perform a
wait operation until the semaphore value is greater than zero.
3. Wait or Signal: If the condition has not occurred, the process or thread waits on the condition
variable. If the condition has occurred, the process or thread signals other processes or threads that are
waiting.
4. Release Mutex Lock: The process or thread releases the mutex lock when it is finished accessing
the shared resource.
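The acquire/check/wait/signal/release protocol above can be sketched with Python's threading.Condition, which pairs a condition variable with a mutex (Mesa-style, hence the while-loop recheck); the job-queue scenario is illustrative:

```python
import threading

lock = threading.Lock()
ready = threading.Condition(lock)   # condition variable tied to the mutex
jobs = []
results = []

def consumer():
    with lock:                         # acquire the monitor's mutex
        while not jobs:                # Mesa semantics: recheck after waking
            ready.wait()               # releases the lock while blocked
        results.append(jobs.pop())     # the lock is held again here
    # mutex released on leaving the with-block

t = threading.Thread(target=consumer)
t.start()
with lock:
    jobs.append("job")
    ready.notify()                     # signal the waiting thread
t.join()
print(results)  # ['job']
```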
Types of Monitors:
1. Hoare Monitor: When a thread signals a condition variable, the signaling thread is suspended
immediately and the signaled (waiting) thread runs at once, so the condition is guaranteed to still hold.
2. Mesa Monitor: When a thread signals a condition variable, the waiting thread is merely moved to the
ready queue; it runs later and must recheck the condition, since it may no longer hold.
Advantages of Monitors:
1. Synchronization: Monitors provide a way to synchronize access to shared resources.
2. Mutual Exclusion: Monitors ensure that only one process or thread can access a shared resource at
a time.
3. Efficient: Monitors provide an efficient way to manage shared resources.
Disadvantages of Monitors:
1. Complexity: Monitors can be complex to implement and use.
2. Overhead: Monitors can introduce overhead due to the need to acquire and release the mutex lock.
Real-World Applications of Monitors:
1. Database Systems: Monitors can be used to synchronize access to shared data in database systems.
2. Operating Systems: Monitors can be used to synchronize access to shared resources in operating
systems.
3. Embedded Systems: Monitors can be used to synchronize access to shared resources in embedded
systems.
Classical Problems of Synchronization
Semaphores can be used in synchronization problems beyond mutual exclusion.
Below are some of the classical problems depicting the flaws of process synchronization in systems where
cooperating processes are present.
We will discuss the following three problems:
1. Bounded Buffer (Producer-Consumer) Problem
Producer Process:
1. Produce Item: Produce an item.
2. Wait for Empty Slot: Wait for an empty slot in the buffer using the empty semaphore (E). wait(E)
3. Acquire Mutex: Acquire the mutex semaphore (M) to protect access to the buffer. wait(M)
4. Add Item to Buffer: Add the item to the buffer.
5. Signal Full Slot: Signal that a full slot is available using the full semaphore (F). signal(F)
6. Release Mutex: Release the mutex semaphore (M). signal(M)
Consumer Process:
1. Wait for Full Slot: Wait for a full slot in the buffer using the full semaphore (F). wait(F)
2. Acquire Mutex: Acquire the mutex semaphore (M) to protect access to the buffer. wait(M)
3. Remove Item from Buffer: Remove an item from the buffer.
4. Signal Empty Slot: Signal that an empty slot is available using the empty semaphore (E). signal(E)
5. Release Mutex: Release the mutex semaphore (M). signal(M)
6. Consume Item: Consume the item.
Pseudocode:
// Producer Process
while (true) {
    item = produce_item();      // Produce item
    wait(E);                    // Wait for empty slot
    wait(M);                    // Acquire mutex
    buffer.add(item);           // Add item to buffer
    signal(F);                  // Signal full slot
    signal(M);                  // Release mutex
}

// Consumer Process
while (true) {
    wait(F);                    // Wait for full slot
    wait(M);                    // Acquire mutex
    item = buffer.remove();     // Remove item from buffer
    signal(E);                  // Signal empty slot
    signal(M);                  // Release mutex
    consume_item(item);         // Consume item
}
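The wait/signal pseudocode above maps directly onto Python's threading primitives. A runnable sketch, assuming for illustration a bounded buffer of 4 slots and 10 items:

```python
import threading
from collections import deque

buffer = deque()
E = threading.Semaphore(4)   # empty slots (buffer size 4)
F = threading.Semaphore(0)   # full slots
M = threading.Lock()         # mutex protecting the buffer
consumed = []

def producer():
    for item in range(10):
        E.acquire()              # wait(E): claim an empty slot
        with M:                  # wait(M) ... signal(M)
            buffer.append(item)
        F.release()              # signal(F): announce a full slot

def consumer():
    for _ in range(10):
        F.acquire()              # wait(F): wait for a full slot
        with M:
            item = buffer.popleft()
        E.release()              # signal(E): free the slot
        consumed.append(item)    # consume outside the lock

p, c = threading.Thread(target=producer), threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
print(consumed)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

Note the ordering: the producer waits on E before taking M, never while holding it; reversing the two waits can deadlock with the buffer full.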
Deadlocks in OS:
A deadlock is a situation in a computer system where two or more
processes are blocked indefinitely, each waiting for the other to release a
resource. This creates a circular wait condition, where none of the
processes can proceed because they are all waiting for each other.
Deadlock System model:
A deadlock is a situation in a computer system where two or more processes are blocked
indefinitely, each waiting for another to release a resource.
A deadlock occurs when a set of processes is stalled because each process is holding a
resource while waiting to acquire a resource held by another process.
In the diagram below, for example, Process 1 holds Resource 1 and waits for Resource 2,
while Process 2 holds Resource 2 and waits for Resource 1.
Suppose we have two processes, P1 and P2, and two resources, R1 and R2.
In this example, P1 is holding R1 and waiting for R2, which is held by P2. P2 is holding R2 and
waiting for R1, which is held by P1. This is a deadlock situation.
Operations :
In normal operation, a process must request a resource before using it and release it when finished, as
shown below.
Request –
If the request cannot be granted immediately, the process must wait until the required resource(s)
become available. The system provides requests through functions such as open(), malloc(), new(), and request().
Use –
The process makes use of the resource, such as printing to a printer or reading from a file.
Release –
The process relinquishes the resource, allowing it to be used by other processes.
Deadlock Characterization :
Deadlock characterization is the process of identifying the necessary and sufficient conditions for a
deadlock to occur in a computer system. These conditions are often referred to as the "deadlock
characterization".
Necessary Conditions for Deadlock:
The following conditions are necessary for a deadlock to occur:
1. Mutual Exclusion: Two or more processes must be competing for a common resource that
cannot be used simultaneously.
2. Hold and Wait: A process must be holding a resource and waiting for another resource, which is
held by another process.
3. No Preemption: The operating system must not be able to preempt one process and give the
resource to another process.
4. Circular Wait: A set of processes must exist such that each process is waiting for a resource held by
the next process in the set, forming a cycle.
Implications of Deadlock Characterization:
Understanding the deadlock characterization has several implications:
1. Deadlock Prevention: This involves preventing deadlocks from occurring in the first place.
2. Deadlock Detection and Recovery: This involves detecting deadlocks and recovering from
them.
3. Deadlock Avoidance: This involves avoiding deadlocks by carefully allocating resources.
Deadlock methods:
Methods for Handling Deadlocks , Deadlocks can be handled using several methods, which
can be broadly classified into three categories:
1. Deadlock Prevention: This involves preventing deadlocks from occurring in the first place.
2. Deadlock Detection and Recovery: This involves detecting deadlocks and recovering from
them.
3. Deadlock Avoidance: This involves avoiding deadlocks by carefully allocating resources.
Deadlock Prevention :
Deadlock prevention is a technique used to prevent deadlocks from occurring in a computer
system. This is achieved by ensuring that at least one of the necessary conditions for a
deadlock is not met.
Necessary Conditions for Deadlock:
For a deadlock to occur, all four necessary conditions described above must hold simultaneously:
mutual exclusion, hold and wait, no preemption, and circular wait.
Deadlock Prevention Techniques:
To prevent deadlocks, we can use the following techniques:
1. Mutual Exclusion Prevention: Ensure that at least one resource is not shared.
2. Hold and Wait Prevention: Ensure that a process does not hold a resource and wait for another
resource.
3. No Preemption Prevention: Ensure that the operating system can preempt one process and give
the resource to another process.
4. Circular Wait Prevention: Ensure that a process does not wait for a resource that is held by
another process, which is waiting for a resource held by the first process.
5. Resource Ordering: Ensure that resources are always requested in a specific order.
6. Avoid Nested Locks: Avoid acquiring multiple locks simultaneously.
7. Use Lock Timeout: Use a lock timeout to prevent a process from holding a lock indefinitely.
8. Use Lock Queue: Use a lock queue to ensure that processes acquire locks in a specific order.
Examples of Deadlock Prevention:
1. Dining Philosophers Problem: This problem can be solved by ensuring that each philosopher picks
up the chopsticks in a specific order, preventing the circular wait condition.
2. Banker's Algorithm: Strictly speaking a deadlock-avoidance technique, the Banker's algorithm grants
a resource request only if the resulting state is safe, so the system never commits to an allocation that
could lead to deadlock.
Advantages of Deadlock Prevention:
1. Prevents Deadlocks: Deadlock prevention ensures that deadlocks do not occur in the system.
2. Improves System Performance: By preventing deadlocks, the system can run more efficiently
and respond to user requests more quickly.
3. Reduces System Downtime: Deadlock prevention reduces the likelihood of system downtime
due to deadlocks.
Disadvantages of Deadlock Prevention:
1. Resource Underutilization: Deadlock prevention can lead to resource underutilization, as some
resources may be left idle to prevent deadlocks.
2. Increased Complexity: Deadlock prevention can add complexity to the system, as it requires
careful resource allocation and management.
Deadlock Avoidance:
Deadlock avoidance is a technique used to avoid deadlocks in a computer system. This is achieved by
carefully allocating resources and ensuring that the system never enters a deadlock state.
Deadlock Avoidance Techniques:
1. Resource Ordering: Ensure that resources are always requested in a specific order.
2. Avoid Nested Locks: Avoid acquiring multiple locks simultaneously.
3. Use Lock Timeout: Use a lock timeout to prevent a process from holding a lock indefinitely.
4. Use Lock Queue: Use a lock queue to ensure that processes acquire locks in a specific order.
5. Banker's Algorithm: Use the Banker's algorithm to ensure that a process does not hold a resource
and wait for another resource.
Banker's Algorithm:
The Banker's algorithm is a deadlock avoidance technique that grants a resource request only if the resulting state is safe. The algorithm works with the following data:
1. Available Resources: The system maintains a list of available resources.
2. Max Resources: Each process specifies the maximum number of resources it may need.
3. Current Resources: Each process specifies the current number of resources it is holding.
4. Need Resources: Each process specifies the number of resources it needs to complete its task.
5. Safe State: The system checks whether the current state is safe by verifying that there is some order in which every process can obtain its remaining needed resources and run to completion.
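The safety check at the heart of the algorithm can be sketched as follows; the process and resource counts below are illustrative, not tied to any particular system:

```python
def is_safe(available, max_need, allocation):
    """Return True if the state is safe: some order lets every process finish."""
    n = len(max_need)
    # Need = Max - Allocation, per process and resource type.
    need = [[m - a for m, a in zip(max_need[i], allocation[i])] for i in range(n)]
    work = list(available)
    finished = [False] * n
    while True:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(len(work))):
                # Process i can finish; it then releases everything it holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progressed = True
        if not progressed:
            return all(finished)

# Example state: 5 processes, 3 resource types (A, B, C).
available  = [3, 3, 2]
max_need   = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
print(is_safe(available, max_need, allocation))  # True (e.g., order P1,P3,P4,P0,P2)
```

A request is granted only if the state obtained by pretending to grant it still passes this check.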
Starvation –
How do you ensure that a process does not go hungry because its resources are constantly being
preempted? One option is to use a priority system and raise the priority of a process whenever its
resources are preempted. It should eventually gain a high enough priority that it will no longer be
preempted.
UNIT - IV
Memory-Management Strategies: Introduction, Contiguous memory allocation, Paging,
Structure of the Page Table, Swapping.
Virtual Memory Management: Introduction, Demand paging, Copy-on-write, Page replacement,
Allocation of frames, Thrashing
Storage Management: Overview of Mass Storage Structure, HDD Scheduling.
The mapping from logical to physical addresses is done by a hardware device called the Memory Management Unit (MMU). The physical address always remains constant.
External Fragmentation:
It is found in the variable partition scheme.
To overcome the problem of external fragmentation, the compaction technique is used, or non-contiguous memory management techniques are used.
Advantages of Variable Partition Scheme
Partition size = process size.
There is no internal fragmentation (which is the drawback of the fixed partition scheme).
The degree of multiprogramming varies and is directly proportional to the number of processes.
Disadvantages of Variable Partition Scheme
External fragmentation is still present.
2. Non-contiguous memory allocation:
In a Non-Contiguous memory management scheme, the program is divided into different blocks and loaded
at different portions of the memory that need not necessarily be adjacent to one another.
This scheme can be classified depending upon the size of blocks and whether the blocks reside in the main
memory or not.
1. Physical address space: Main memory (physical memory) is divided into blocks of the
same size called frames. The frame size is kept equal to the page size.
2. Logical address space: Logical memory is divided into blocks of the same size called
pages. The page size is defined by the hardware, and a process's pages are stored in
non-contiguous frames of main memory during execution.
What is paging?
Paging is a technique that eliminates the requirements of contiguous allocation of main memory.
In this, the main memory is divided into fixed-size blocks of physical memory called frames.
The size of a frame should be kept the same as that of a page to maximize the main memory and avoid
external fragmentation.
Advantages of paging:
Pages reduce external fragmentation.
Simple to implement.
Memory efficient.
Due to the equal size of frames, swapping becomes very easy.
It allows faster access to data.
What is Segmentation?
Segmentation is a technique that eliminates the requirements of contiguous allocation of main memory.
In this, the main memory is divided into variable-size blocks of physical memory called segments.
With segmented memory allocation, each job is divided into several segments of different sizes, one for each
module.
What is Paging?
Paging is a memory management technique that helps in retrieving processes from
secondary memory in the form of pages. It eliminates the need for contiguous
allocation of memory to processes. In paging, processes are divided into equal parts
called pages, and main memory is also divided into equal parts, each of which is called a
frame.
Each page gets stored in one of the frames of main memory whenever required, so
the size of a frame is equal to the size of a page. Pages of a process can be stored in
non-contiguous locations in main memory.
Suppose there is a process P1 of size 4 MB and there are two holes of size 2 MB each that are
not contiguous. Despite the total available space being 4 MB, it is useless and
cannot be allocated to the process.
Paging helps in allocating memory to a process at different locations in the main memory.
It reduces memory wastage and removes external fragmentation.
The CPU always generates a Logical Address. But, the Physical Address is needed to
access the main memory.
1. Page Number(p) - It is the number of bits required to represent the pages in the
Logical Address Space. It is used as an index in a page table that contains the base
address of a page in the physical memory.
2. Page Offset(d) - It denotes the page size or the number of bits required to represent
a word on a page. It is combined with the Page Number to get the Physical
Address.
2. Frame Offset(d) - It is the page size or the number of bits required to represent a
word in a frame. It is equal to the Page Offset.
Page Table
The Page Table contains the base address of each page inside the Physical Memory. It is
then combined with Page Offset to get the actual address of the required data in the main
memory.
The Page Number is used as the index of the Page Table which contains the base address
which is the Frame Number. Page offset is then used to retrieve the required data from the
main memory.
Example of Paging
If Logical Address = 2 bits, then Logical Address Space = 22 words and vice versa.
If Physical Address = 4 bits, then Physical Address Space = 24 words and vice versa.
Let Page Size = 1 bit, Logical Address = 2 bits, Physical Address= 4 bits.
Frame Size = Page Size = 21 = 2 Words. It means every page will contain 2 words.
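With a 1-bit offset as in this example, translation splits the logical address into page number and offset, looks the page up in the page table, and glues the frame number onto the offset. The page-table entries below are made up for illustration:

```python
PAGE_BITS = 1                      # offset width: page size = 2**1 = 2 words
page_table = {0: 5, 1: 2}          # hypothetical page -> frame mapping

def translate(logical_addr):
    page   = logical_addr >> PAGE_BITS               # upper bits: page number
    offset = logical_addr & ((1 << PAGE_BITS) - 1)   # lower bits: offset within page
    frame  = page_table[page]                        # page-table lookup
    return (frame << PAGE_BITS) | offset             # physical = frame || offset

print(translate(0b10))  # page 1, offset 0 -> frame 2, physical address 0b100 = 4
```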
Advantages of Paging
Memory Management: Paging divides the process's memory into fixed-size pages; swapping moves entire processes to and from disk (swap space) as needed.
Size: Pages are typically smaller than the entire process and usually fixed in size (e.g., 4 KB); swapping involves entire processes, which can be much larger than a single page.
2. Efficient use of RAM: The OS can optimize RAM usage by swapping out pages that are not
currently being used.
3. Improved multitasking: Virtual memory enables smooth multitasking by allowing programs to
run in the background while others are active.
Drawbacks of Virtual Memory:
1. Performance overhead: Swapping pages in and out of RAM can lead to a performance slowdown.
2. Disk usage: Virtual memory relies on disk storage, which can lead to increased disk usage and
wear.
In summary: virtual memory is a crucial feature in modern operating systems that enables efficient
use of RAM, improved multitasking, and allows multiple programs to run simultaneously.
How Virtual Memory Works
Virtual Memory is a technique that is implemented using both hardware and software.
It maps memory addresses used by a program, called virtual addresses, into physical
addresses in computer memory.
All memory references within a process are logical addresses that are dynamically
translated into physical addresses at run time.
This means that a process can be swapped in and out of the main memory such that it
occupies different places in the main memory at different times during the course of
execution.
A process may be broken into a number of pieces and these pieces need not be
contiguously located in the main memory during execution.
The combination of dynamic run-time address translation and the use of a page or
segment table permits this.
If these characteristics are present then, it is not necessary that all the pages or segments
are present in the main memory during execution.
This means that the required pages need to be loaded into memory whenever required.
Virtual memory is implemented using Demand Paging or Demand Segmentation.
Snapshot of a virtual memory management system
Let us assume two processes, P1 and P2, each containing 4 pages. Each page is 1 KB in size.
The frames of main memory hold the various pages of the two processes; for example, the first page of P1
may be stored in the third frame.
The page table of each process is 1 KB in size and therefore fits in one frame.
The page table of each process records, for each of its pages, the frame where that page resides, along with
other control information.
The CPU contains a register which holds the base address of the current process's page table (say, 5 in the case
of P1 and 7 in the case of P2).
This page table base address is added to the page number of the logical address when it comes to
accessing the actual corresponding entry.
Definition: Virtual memory is an abstraction that extends the available memory by using disk storage; physical memory is the actual hardware (RAM) that stores the data and instructions currently being used by the CPU.
Segmentation
Overhead: Using a segment table can increase overhead and reduce performance. Each
segment table entry requires additional memory, and accessing the table to retrieve memory
locations can increase the time needed for memory operations.
Complexity: Segmentation can be more complex to implement and manage than paging. In
particular, managing multiple segments per process can be challenging, and the potential for
segmentation faults can increase as a result.
Thrashing:
Thrashing is a phenomenon in Operating Systems (OS) where the system spends more time
swapping pages in and out of memory (also known as "page faults") than executing actual
processes. This leads to a significant decrease in system performance and productivity.
Causes of Thrashing:
1. Insufficient RAM: When the system has limited RAM, it cannot hold all the required pages,
leading to frequent page faults.
2. Poor Page Replacement Algorithms: Inefficient page replacement algorithms, such as FIFO
(First-In-First-Out), can lead to thrashing.
3. Multiprogramming: Running multiple programs simultaneously can cause thrashing if the
system is unable to handle the increased demand for memory.
Effects of Thrashing:
1. System Slowdown: Thrashing leads to a significant decrease in system performance,
making it slow and unresponsive.
2. Increased Page Faults: The system spends more time handling page faults than executing
processes.
3. Decreased Throughput: Thrashing reduces the overall throughput of the system.
Solutions to Thrashing:
1. Increase RAM: Adding more RAM to the system can help reduce thrashing.
2. Improve Page Replacement Algorithms: Using more efficient page replacement
algorithms, such as LRU (Least Recently Used) or Optimal, can help minimize thrashing.
3. Reduce Multiprogramming: Limiting the number of programs running simultaneously can
help alleviate thrashing.
4. Use Disk Caching: Implementing disk caching can help reduce the number of page faults
and alleviate thrashing.
Copy on Write:
Copy on Write (COW) is a resource management technique. One of its main uses is
in the implementation of the fork system call, in which the virtual memory pages of
the process are shared.
In UNIX-like operating systems, the fork() system call creates a duplicate of the parent
process, called the child process.
The idea behind copy-on-write is that when a parent process creates a child process,
both processes initially share the same pages in memory, and these shared
pages are marked as copy-on-write.
If either process tries to modify a shared page, only then is a copy of that page created,
and the modification is made on the copy by that process, thus not affecting the other
process.
Suppose a process P creates a new process Q and then P modifies page 3. Before the
modification, P and Q share page 3; afterwards, P has its own private copy of that page,
while Q continues to use the original.
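Copy-on-write itself is transparent to the program, but its visible effect, that one process's writes never reach the other, can be observed with fork(). A minimal, Unix-only sketch:

```python
import os

# After fork(), parent and child share pages copy-on-write. A write by the
# child triggers a private copy, so the parent's data is unaffected.
data = [1, 2, 3]

pid = os.fork()
if pid == 0:
    # Child: modifying the list forces a private copy of the shared page.
    data[0] = 99
    os._exit(0 if data[0] == 99 else 1)
else:
    # Parent: wait for the child, then verify its own copy is intact.
    _, status = os.waitpid(pid, 0)
    assert os.WEXITSTATUS(status) == 0
    assert data[0] == 1  # the parent's page was never modified
```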
Frame 1:    1   1   1   4   4   4   5   5   5   1   1   1   1
Frame 2:    -   2   2   2   2   2   2   6   6   6   6   3   3
Frame 3:    -   -   3   3   3   1   1   1   2   2   2   2   7
Page Fault: Yes Yes Yes Yes No  Yes Yes Yes Yes Yes No  Yes Yes
Advantages:
o Simple to implement
o Easy to understand
Disadvantages:
o FIFO can lead to poor overall performance and exhibits Belady's anomaly, in which increasing the number of page frames can increase the number of page faults.
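A small simulation illustrates both FIFO replacement and Belady's anomaly; the reference string below is the classic one used to demonstrate the anomaly:

```python
from collections import deque

def fifo_faults(refs, num_frames):
    """Count page faults under FIFO replacement with num_frames frames."""
    frames = deque()          # oldest resident page at the left
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()   # evict the page loaded earliest
            frames.append(page)
    return faults

# Belady's anomaly: more frames, yet more faults for this reference string.
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9 faults
print(fifo_faults(refs, 4))  # 10 faults
```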
Number of Frames: 3
Advantages
Disadvantages
Example
Number of Frames: 3
Page Reference: 1   2   3   4   2   1   5   6   2   1   2   3   7
Frame 1:        1   1   1   4   4   4   5   5   5   1   1   1   7
Frame 2:        -   2   2   2   2   2   2   6   6   6   6   3   3
Frame 3:        -   -   3   3   3   1   1   1   2   2   2   2   2
Page Fault:     Yes Yes Yes Yes No  Yes Yes Yes Yes Yes No  Yes Yes
Implementation
LRU can be implemented using a stack or a counter. The stack
implementation maintains a stack of page numbers, with the most recently used page
on top. When a page is accessed, it is removed from wherever it is in the stack and pushed
onto the top, so the page at the bottom is always the least recently used.
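As an alternative to the explicit stack, an ordered dictionary gives the same recency order; a minimal sketch:

```python
from collections import OrderedDict

def lru_faults(refs, num_frames):
    """Count page faults under LRU; the OrderedDict keeps recency order."""
    frames = OrderedDict()   # least recently used page at the front
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)        # mark as most recently used
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.popitem(last=False)  # evict the LRU page
            frames[page] = True
    return faults

refs = [1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7]
print(lru_faults(refs, 3))  # 11 faults
```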
Disadvantages
Allocation of Frames:
Frame allocation refers to the process of assigning physical memory frames to a process's virtual
memory pages. Here are the different frame allocation algorithms:
1. First-Fit Algorithm:
Assigns the first available frame that is large enough to hold the process.
2. Best-Fit Algorithm:
Assigns the smallest frame that is large enough to hold the process.
3. Worst-Fit Algorithm:
Assigns the largest available frame.
4. Next-Fit Algorithm:
Assigns the next available frame after the last allocated frame.
5. Buddy System:
Divides memory into power-of-2 sized blocks, and assigns the smallest block that can hold the
process.
6. Slab Allocation:
Divides memory into fixed-size blocks, and assigns a block to a process when needed.
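The first three strategies differ only in which adequate hole they pick; a minimal sketch with made-up hole sizes:

```python
def pick_hole(holes, size, strategy):
    """Return the index of the hole chosen for a request of `size`, or None."""
    # Candidate holes are those large enough, kept in address order.
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    if not candidates:
        return None
    if strategy == "first":
        return candidates[0][1]    # first hole large enough
    if strategy == "best":
        return min(candidates)[1]  # smallest adequate hole
    if strategy == "worst":
        return max(candidates)[1]  # largest hole
    raise ValueError(strategy)

holes = [100, 500, 200, 300, 600]      # hypothetical free holes, in KB
print(pick_hole(holes, 212, "first"))  # 1 (the 500 KB hole)
print(pick_hole(holes, 212, "best"))   # 3 (the 300 KB hole)
print(pick_hole(holes, 212, "worst"))  # 4 (the 600 KB hole)
```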
Frame Allocation Techniques:
1. Fixed Allocation:
Divides physical memory into fixed-size frames.
2. Dynamic Allocation:
Allocates frames of varying sizes to processes.
3. Hybrid Allocation:
Combines fixed and dynamic allocation techniques.
Considerations for Frame Allocation:
1. Internal Fragmentation:
Wasted space within a frame.
2. External Fragmentation:
Wasted space between frames.
3. Frame Size:
Affects internal fragmentation and page table size.
4. Page Table Size:
Affects memory overhead and access time.
5. Allocation Algorithm:
Affects performance, fragmentation, and complexity.
Overview of storage structure:
Each track is further divided into sectors.
The spindle rotates the platters under the control of the disk controller. Some advanced
drives can spin only a particular platter and keep the others still.
The arm assembly holds a read/write head over each platter surface, to read from or write to
a particular surface.
The word cylinder is used to refer to the set of tracks at one arm position across the disk stack.
Transfer rate: This is the rate at which data moves from the disk to the computer.
Random access time: It is the sum of the seek time and rotational latency.
Seek time is the time taken by the arm to move to the required track. Rotational latency is
the time taken for the required sector to rotate under the read/write head.
Transfer time is the time to transfer the data. It depends on the rotating speed of the disk and
the number of bytes to be transferred.
Disk Response Time: Response time is the time a request spends waiting to perform
its I/O operation. The average response time is the mean of the response times of all requests,
and the variance of response time measures how individual requests are serviced relative to the
average. A disk scheduling algorithm that gives a low variance of response time is
better.
Goal of Disk Scheduling Algorithms:
Minimize Seek Time
Maximize Throughput
Minimize Latency
Fairness
Efficiency in Resource Utilization
Disk Scheduling Algorithms:
There are several disk scheduling algorithms. We will discuss each one of them in detail.
FCFS (First Come First Serve)
SSTF (Shortest Seek Time First)
SCAN
C-SCAN
LOOK
C-LOOK
RSS (Random Scheduling)
LIFO (Last-In First-Out)
N-STEP SCAN
F-SCAN
1. FCFS (First Come First Serve)
FCFS is the simplest of all Disk Scheduling Algorithms.
In FCFS, the requests are addressed in the order they arrive in the disk queue.
So, total overhead movement (total distance covered by the disk arm) =
(82-50)+(170-82)+(170-43)+(140-43)+(140-24)+(24-16)+(190-16) =642
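The overhead movement is just the sum of the absolute head jumps, taken in arrival order; a minimal sketch:

```python
def total_seek(head, requests):
    """Total head movement when servicing requests in the given order (FCFS)."""
    movement = 0
    for track in requests:
        movement += abs(track - head)  # distance to the next requested track
        head = track                   # head is now positioned there
    return movement

print(total_seek(50, [82, 170, 43, 140, 24, 16, 190]))  # 642
```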
Advantages of FCFS:
Here are some of the advantages of First Come First Serve.
Every request gets a fair chance
No indefinite postponement
Disadvantages of FCFS:
Here are some of the disadvantages of First Come First Serve.
Does not try to optimize seek time
May not provide the best possible service
2. SSTF (Shortest Seek Time First):
In SSTF (Shortest Seek Time First), requests having the shortest seek time are executed first.
So, the seek time of every request is calculated in advance in the queue and then they are scheduled
according to their calculated seek time.
As a result, the request near the disk arm will get executed first.
SSTF is certainly an improvement over FCFS as it decreases the average response time and increases
the throughput of the system.
Let us understand this with the help of an example.
Example: Suppose the order of request is- (82, 170,43,140,24,16,190)
Therefore, the total overhead movement (total distance covered by the disk arm) is calculated as
= (199-50) + (199-16) = 332.
Advantages of SCAN Algorithm:
Here are some of the advantages of the SCAN Algorithm.
High throughput
Low variance of response time
Average response time
Disadvantages of SCAN Algorithm:
Here are some of the disadvantages of the SCAN Algorithm.
Long waiting time for requests for locations just visited by disk arm
4. C-SCAN:
In the SCAN algorithm, the disk arm again scans the path that has been scanned, after reversing its
direction.
So, it may be possible that too many requests are waiting at the other end or there may be zero or few
requests pending at the scanned area.
These situations are avoided in the CSCAN algorithm in which the disk arm instead of reversing its
direction goes to the other end of the disk and starts servicing the requests from there.
So, the disk arm moves in a circular fashion and this algorithm is also similar to the SCAN algorithm
hence it is known as C-SCAN (Circular SCAN).
Example:
Suppose the requests to be addressed are-82,170,43,140,24,16,190.
And the Read/Write arm is at 50, and it is also given that the disk arm should move “towards the larger value”.
So, the total overhead movement (total distance covered by the disk arm) is calculated as =(199-50) + (199-0)
+ (43-0) = 391
Advantages of C-SCAN Algorithm:
Provides more uniform wait time compared to SCAN.
5. LOOK:
LOOK Algorithm is similar to the SCAN disk scheduling algorithm except for the difference
that the disk arm in spite of going to the end of the disk goes only to the last request to be
serviced in front of the head and then reverses its direction from there only.
Thus it prevents the extra delay which occurred due to unnecessary traversal to the end of the
disk.
Example:
Suppose the requests to be addressed are-82,170,43,140,24,16,190. And the Read/Write arm is at 50,
and it is also given that the disk arm should move “towards the larger value”.
So, the total overhead movement (total distance covered by the disk arm) is calculated as = (190-50) + (190-
16) = 314
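The LOOK service order (go up to the last pending request, then reverse) can be sketched as follows, using the same request queue:

```python
def look_order(head, requests, direction="up"):
    """Service order for the LOOK algorithm: to the last request, then reverse."""
    up = sorted(r for r in requests if r >= head)                  # ascending, above head
    down = sorted((r for r in requests if r < head), reverse=True) # descending, below head
    return up + down if direction == "up" else down + up

order = look_order(50, [82, 170, 43, 140, 24, 16, 190])
print(order)  # [82, 140, 170, 190, 43, 24, 16]
```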
6. C-LOOK:
As LOOK is similar to the SCAN algorithm, in a similar way, C-LOOK is similar to the CSCAN disk
scheduling algorithm.
In CLOOK, the disk arm in spite of going to the end goes only to the last request to be serviced in
front of the head and then from there goes to the other end’s last request.
Thus, it also prevents the extra delay which occurred due to unnecessary traversal to the end of the
disk.
Example:
Suppose the requests to be addressed are-82, 170 ,43,140,24,16,190. And the Read/Write arm is at
50, and it is also given that the disk arm should move “towards the larger value”
So, the total overhead movement (total distance covered by the disk arm) is calculated as = (190-
50) + (190-16) + (43-16) = 341
UNIT - V
File System: File System Interface: File concept, Access methods, Directory Structure.
File system Implementation: File-system structure, File-system Operations, Directory
implementation, Allocation method, Free space management.
File-System Internals: File-System Mounting, Partitions and Mounting, File Sharing.
Protection: Goals of protection, Principles of protection, Protection Rings, Domain of protection,
Access matrix.
FILE CONCEPT:
A file is a named collection of related information that is recorded on secondary storage such
as magnetic disks, magnetic tapes and optical disks.
In general, a file is a sequence of bits, bytes, lines or records whose meaning is defined by the
files creator and user.
File Structure:
A File Structure should be according to a required format that the operating system can understand.
A file has a certain defined structure according to its type.
Direct/Random access:
Random access file organization provides direct access to records.
Each record has its own address in the file, with the help of which it can be directly accessed for
reading or writing.
The records need not be in any sequence within the file, and they need not be in adjacent locations on
the storage medium.
Indexed sequential access:
This mechanism is built on top of sequential access.
An index is created for each file which contains pointers to various blocks.
Index is searched sequentially and its pointer is used to access the file directly.
Space Allocation:
Files are allocated disk spaces by operating system. Operating systems deploy following three main
ways to allocate disk space to files.
Contiguous Allocation
Linked Allocation
Indexed Allocation
Contiguous Allocation:
Each file occupies a contiguous address space on disk.
Assigned disk address is in linear order.
Easy to implement.
External fragmentation is a major issue with this type of allocation technique.
Linked Allocation:
Each file carries a list of links to disk blocks.
Directory contains link / pointer to first block of a file.
No external fragmentation
Effectively used in sequential access file.
Inefficient in case of direct access file.
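Reading a file under linked allocation means chasing pointers from block to block, which is why direct access is inefficient; a minimal sketch using a hypothetical, FAT-style next-block table:

```python
def linked_blocks(first_block, next_block):
    """Follow a file's chain of links from its first block (linked allocation)."""
    blocks = []
    b = first_block
    while b != -1:          # -1 marks the end of the file's chain
        blocks.append(b)
        b = next_block[b]   # each block stores a link to the next one
    return blocks

# Made-up table: next_block[b] is the block that follows b on disk.
next_block = {9: 16, 16: 1, 1: 10, 10: 25, 25: -1}
print(linked_blocks(9, next_block))  # [9, 16, 1, 10, 25]
```

To reach the fifth block, all four preceding links must be followed, whereas contiguous or indexed allocation can compute or look up the block address directly.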
Indexed Allocation:
Provides solutions to problems of contiguous and linked allocation.
An index block is created containing all the pointers to a file's blocks.
Each file has its own index block which stores the addresses of disk space occupied by the file.
Directory contains the addresses of index blocks of files.
Structures of Directory in Operating System:
What is a directory:
A directory can be defined as the listing of the related files on the disk.
The directory may store some or all of the file attributes.
To get the benefit of different file systems on different operating systems, a hard disk can be
divided into a number of partitions of different sizes.
The partitions are also called volumes or minidisks.
Each partition must have at least one directory in which all the files of the partition can be listed.
A directory entry is maintained for each file in the directory, storing all the information related to
that file.
A directory can be viewed as a file which contains the metadata of a bunch of files.
Every Directory supports a number of common operations on the file:
1. File Creation
2. Search for the file
3. File deletion
4. Renaming the file
5. Traversing Files
6. Listing of files
There are different types of directories:
1. Single-Level Directory
2. Two-Level Directory
3. Tree Structure/ Hierarchical Structure
4. Acyclic Graph Structure
5. General-Graph Directory Structure.
The entire system will contain only one directory which is supposed to mention all the files present in
the file system.
The directory contains one entry per each file present on the file system.
There is one master directory which contains separate directories dedicated to each user.
For each user, there is a different directory present at the second level, containing that user's group of files.
The system doesn't let a user enter another user's directory without permission.
In a general-graph directory structure, a cycle can be formed (for example, within a user's directory). While this structure offers more
flexibility, it is also more complicated to implement.
Advantages of General-Graph Directory:
More flexible than other directory structures.
Allows cycles, meaning directories can loop back to each other.
Disadvantages of General-Graph Directory:
More expensive to implement compared to other solutions.
Requires garbage collection to manage and clean up unused files and directories.
Definition of File System:
A file system is a way of organizing and managing files on a storage device, such as a
hard disk or a flash drive.
It provides a logical structure to the physical storage space and allows users and
applications to access and manipulate the files.
A file system typically consists of three components: files, directories, and file
metadata.
Importance of File System:
The importance of a file system lies in its ability to provide a convenient and efficient way of storing
and retrieving files.
Without a file system, users and applications would have to manage data in raw, unstructured formats,
making it difficult to organize and access.
A well designed file system can improve data integrity, reduce data loss, and optimize data storage and
retrieval.
For example, a file 'mail' that starts at block 19 with length = 6 blocks
occupies blocks 19, 20, 21, 22, 23 and 24.
Advantages:
It is very easy to implement.
There is a minimum amount of seek time.
The disk head movement is minimum.
Memory access is faster.
It supports sequential as well as direct access.
Disadvantages
Advantages
Disadvantages
Indexed Allocation − Indexed allocation is a method of storing files in which a separate index is maintained
that contains a list of all the blocks that make up each file. This method allows for quick access to files and helps
prevent fragmentation, but it requires additional overhead to maintain the index.
The choice of file allocation method depends on the specific needs of the system and the type of storage device
being used.
Advantages
Disadvantages
Backup and Restore − Backup and restore involves creating backups of critical files and directories and
restoring them in case of data loss or other issues. Regular backups are essential for ensuring the availability and
integrity of data.
Updating Software − Updating software, including the operating system and file system, is essential for
maintaining system security and compatibility with new hardware and software.
File System Performance:
File system performance refers to the speed and efficiency with which a file system can read, write,
and access data. Here are some factors that can affect file system performance −
File System Type − Different types of file systems have different performance characteristics. For example,
some file systems may be optimized for speed, while others may prioritize data reliability or data security.
File System Size − Larger file systems may require more time to index and search, which can impact
performance. File system fragmentation can also impact performance, as fragmented files may require more
time to read or write.
Hardware Configuration − The hardware configuration of a system, including the type and speed of the
storage device and the amount of memory, can have a significant impact on file system performance.
Network Performance − When files are accessed over a network, the speed and reliability of the network can
impact file system performance.
Application Design − The design of applications that access the file system can also impact performance.
Applications that access large files or read and write files frequently can impact file system performance.
File System Mounting:
Mounting is the process of attaching a file system to a directory in the operating system's file
system hierarchy.
This allows the operating system to access the files and directories on the mounted file
system.
Mounting is a way to integrate a new file system into the existing file system hierarchy.
When a file system is mounted, its root directory becomes a subdirectory of the existing file
system hierarchy.
Why is Mounting Needed?
Mounting is needed to:
1. Access Data: Mounting allows the operating system to access data on a file system.
2. Integrate File Systems: Mounting integrates multiple file systems into a single file system
hierarchy.
3. Provide a Unified View: Mounting provides a unified view of all file systems, making it easier
to manage and access data.
How Does Mounting Work?
The mounting process involves the following steps:
1. Device Identification: The operating system identifies the device that contains the file system to
be mounted.
2. File System Type: The operating system determines the type of file system on the device (e.g.,
ext4, NTFS, etc.).
3. Mount Point: The operating system creates a mount point, which is a directory in the file system
hierarchy where the mounted file system will be accessible.
4. Mounting: The operating system mounts the file system to the mount point, making the files and
directories on the mounted file system accessible.
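The four steps above can be sketched as a toy in-memory mount table; `MountTable`, `mount`, and `resolve` are illustrative names for this sketch, not a real OS API:

```python
# Toy model of the mounting steps: identify the device, note its file
# system type, pick a mount point, and record the attachment in a table.
class MountTable:
    def __init__(self):
        self.mounts = {}  # mount point -> (device, fs_type)

    def mount(self, device, fs_type, mount_point):
        # Steps 1-4: the device and its file system type are attached
        # at the chosen mount point.
        self.mounts[mount_point] = (device, fs_type)

    def resolve(self, path):
        # Find the longest mount point that prefixes the path, i.e.
        # which mounted file system a path actually lives on.
        best = "/"
        for mp in self.mounts:
            if path.startswith(mp) and len(mp) > len(best):
                best = mp
        return best

table = MountTable()
table.mount("/dev/sda1", "ext4", "/")
table.mount("/dev/sdb1", "ntfs", "/mnt/data")
print(table.resolve("/mnt/data/report.txt"))  # -> /mnt/data
print(table.resolve("/home/user/notes.txt"))  # -> /
```

Longest-prefix resolution is why a mounted file system's root appears as a subdirectory of the existing hierarchy.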
File System Mounting Options:
1. Read-Only: The file system is mounted in read-only mode, preventing any modifications to the
file system.
2. Read-Write: The file system is mounted in read-write mode, allowing modifications to the file
system.
3. Noexec: The file system is mounted with the noexec option, preventing any executable files on the
file system from being executed.
4. Nosuid: The file system is mounted with the nosuid option, preventing any setuid or setgid bits on
the file system from being honored.
Mounting Commands:
Common mounting commands include:
1. mount: Mounts a file system.
2. umount: Unmounts a file system.
3. fstab: Configures file system mounting at boot time.
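An `/etc/fstab` entry uses a fixed six-field format (device, mount point, file system type, options, dump flag, fsck pass). A minimal parser as a sketch, run on a sample string rather than a real system file; `parse_fstab` is an illustrative helper name:

```python
# Parse /etc/fstab-style lines: device, mount point, fs type,
# comma-separated options, dump flag, and fsck pass number.
def parse_fstab(text):
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        device, mount_point, fs_type, options, dump, passno = line.split()
        entries.append({
            "device": device,
            "mount_point": mount_point,
            "fs_type": fs_type,
            "options": options.split(","),
            "dump": int(dump),
            "pass": int(passno),
        })
    return entries

sample = """
# device        mount   type  options        dump pass
/dev/sda1       /       ext4  defaults       0    1
/dev/sdb1       /data   ext4  ro,noexec      0    2
"""
for e in parse_fstab(sample):
    print(e["mount_point"], e["options"])
```

Note how the read-only and noexec options discussed above appear simply as comma-separated tokens in the fourth field.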
Partitions and mounting:
Partitions and mounting are two related concepts in operating systems that deal with the
organization and access of data on storage devices.
A partition is a logical division of a storage device, such as a hard drive or solid-state drive
(SSD), into separate areas for storing data.
Each partition can have its own file system, and the operating system can treat each partition
as a separate device.
Types of Partitions:
1. Primary Partition: A primary partition is a partition that can be used to boot an operating
system.
2. Extended Partition: An extended partition is a partition that can be further divided into logical
partitions.
3. Logical Partition: A logical partition is a partition that is created within an extended partition.
Mounting:
Mounting is the process of attaching a file system to a directory in the operating system's file system
hierarchy. This allows the operating system to access the files and directories on the mounted file
system.
Types of Mounting:
1. Local Mount: A local mount is when a file system on a local device is mounted to the file system
hierarchy.
2. Remote Mount: A remote mount is when a file system on a remote device is mounted to the file
system hierarchy.
3. Virtual Mount: A virtual mount is when a file system is mounted to a virtual device.
Mounting Process:
The mounting process follows the same four steps described earlier: the operating system identifies the device, determines its file system type (e.g., ext4, NTFS), creates a mount point in the file system hierarchy, and mounts the file system there, making its files and directories accessible.
Benefits of Partitions and Mounting:
1. Organization: Partitions and mounting help to organize data on storage devices, making it easier to
manage and access.
2. Flexibility: Partitions and mounting provide flexibility in terms of file system choice and
configuration.
3. Security: Partitions and mounting can help to improve security by isolating sensitive data and
limiting access to certain file systems.
TOOLS:
1. fdisk: command-line partitioning utility for Linux/UNIX.
2. Disk Utility: graphical partitioning tool for macOS.
3. Disk Management: graphical partitioning tool for Windows.
Common Commands for Partitions and Mounting:
1. fdisk: A command-line utility for creating and managing partitions.
2. mkfs: A command-line utility for creating file systems on partitions.
3. mount: A command-line utility for mounting file systems to the file system hierarchy.
4. umount: A command-line utility for unmounting file systems from the file system hierarchy.
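These utilities share a conventional argument order. As a sketch, a small helper can build the standard Linux command lines (constructed but deliberately not executed here, since actually running them requires root privileges; device and mount-point names are examples):

```python
# Build the standard Linux commands for formatting and mounting a
# partition. We only construct the argument lists; running them
# (e.g. via subprocess) needs root privileges.
def mkfs_cmd(device, fs_type="ext4"):
    return ["mkfs", "-t", fs_type, device]

def mount_cmd(device, mount_point, options=None):
    cmd = ["mount"]
    if options:
        cmd += ["-o", ",".join(options)]  # e.g. -o ro,noexec
    return cmd + [device, mount_point]

def umount_cmd(mount_point):
    return ["umount", mount_point]

print(" ".join(mkfs_cmd("/dev/sdb1")))
print(" ".join(mount_cmd("/dev/sdb1", "/mnt/data", ["ro", "noexec"])))
print(" ".join(umount_cmd("/mnt/data")))
```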
File Sharing :
File sharing is the process of allowing multiple users or processes to access and share files on
a computer or network.
In an operating system (OS), file sharing is an essential feature that enables collaboration,
communication, and data exchange among users.
Types of File Sharing:
1. Local File Sharing: Sharing files between users on the same computer.
2. Network File Sharing: Sharing files between computers on a network.
3. Remote File Sharing: Sharing files between computers over the internet.
File Sharing Models:
1. Shared Folder Model: A shared folder is created, and users are granted access to it.
2. Peer-to-Peer Model: Users share files directly with each other without a central server.
3. Client-Server Model: A central server manages file sharing, and clients access files from the server.
File Sharing Protocols:
1. SMB (Server Message Block): A protocol for sharing files and printers on a network.
2. NFS (Network File System): A protocol for sharing files on a network.
3. FTP (File Transfer Protocol): A protocol for transferring files over the internet.
File Sharing in Operating Systems:
1. Windows: Windows provides file sharing through the Shared Folders feature and the SMB
protocol.
2. Linux: Linux provides file sharing through the NFS protocol and the Samba software.
3. macOS: macOS provides file sharing through the Shared Folders feature and the SMB protocol.
File Sharing Security:
1. Access Control: Controlling access to shared files and folders through permissions and access
control lists (ACLs).
2. Encryption: Encrypting shared files to protect them from unauthorized access.
3. Authentication: Authenticating users before allowing them to access shared files.
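On POSIX systems, access control for shared files ultimately rests on permission bits (and, where configured, ACLs). A small sketch using Python's standard `os` and `stat` modules, run against a temporary file rather than a real share:

```python
import os
import stat
import tempfile

# Create a temporary file, make it read-only for everyone (mode 0o444),
# and inspect the permission bits the OS consults on each access.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o444)

mode = os.stat(path).st_mode
print("owner write:", bool(mode & stat.S_IWUSR))  # False: read-only
print("others read:", bool(mode & stat.S_IROTH))  # True: world-readable

os.chmod(path, 0o644)  # restore the write bit so the file can be deleted
os.remove(path)
```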
File Sharing Benefits:
1. Collaboration: File sharing enables collaboration among users by allowing them to access and
share files.
2. Convenience: File sharing provides a convenient way to share files without having to physically
transfer them.
3. Productivity: File sharing can improve productivity by enabling users to access and share files
quickly and easily.
Goals of Protection in OS:
The primary goals of protection in an operating system (OS) are to ensure the security, integrity, and
availability of computer resources, including hardware, software, and data.
Main Goals of Protection:
1. Confidentiality: Protect sensitive information from unauthorized access, disclosure, or theft.
2. Integrity: Ensure that data and programs are not modified without authorization.
3. Availability: Ensure that computer resources are accessible and usable when needed.
4. Authenticity: Verify the identity of users, devices, and processes to ensure that only authorized
entities access resources.
5. Accountability: Track and record user activities to ensure that users are held responsible for their
actions.
Additional Goals:
1. Access Control: Regulate access to resources based on user identity, role, or privilege.
2. Data Protection: Protect data from unauthorized access, modification, or destruction.
3. System Integrity: Ensure that the OS and its components are not compromised or modified without
authorization.
4. Resource Allocation: Manage resource allocation to prevent unauthorized access or misuse.
5. Error Detection and Recovery: Detect and recover from errors or security breaches.
Protection Mechanisms:
1. Access Control Lists (ACLs): Define access permissions for users and groups.
2. Encryption: Protect data confidentiality and integrity.
3. Firewalls: Control incoming and outgoing network traffic.
4. Authentication and Authorization: Verify user identities and grant access to resources.
5. Auditing and Logging: Track user activities and system events.
Benefits of Protection:
1. Security: Protect against unauthorized access, malware, and other security threats.
2. Reliability: Ensure system stability and availability.
3. Compliance: Meet regulatory and industry standards for data protection and security.
4. Accountability: Hold users responsible for their actions.
5. Trust: Establish trust among users, organizations, and systems.
Principles of Protection:
The principles of protection in an operating system (OS) are designed to ensure the security, integrity,
and availability of computer resources, including hardware, software, and data.
1. Least Privilege: Grant users and processes only the privileges necessary to perform their tasks.
2. Separation of Privilege: Separate privileges into distinct categories to prevent unauthorized access.
3. Access Control: Regulate access to resources based on user identity, role, or privilege.
4. Authentication: Verify the identity of users, devices, and processes before granting access.
5. Authorization: Grant access to resources based on user identity, role, or privilege.
6. Accountability: Track and record user activities to ensure that users are held responsible for their
actions.
7. Data Protection: Protect data from unauthorized access, modification, or destruction.
8. System Integrity: Ensure that the OS and its components are not compromised or modified without
authorization.
Types of Protection:
1. Memory Protection: Protect memory from unauthorized access or modification.
2. File Protection: Protect files from unauthorized access, modification, or destruction.
3. I/O Protection: Protect input/output (I/O) operations from unauthorized access or modification.
4. CPU Protection: Protect the central processing unit (CPU) from unauthorized access or
modification.
Protection levels:
Protection levels in an operating system (OS) refer to the different levels of access control and
security that can be applied to resources such as files, directories, and devices.
Types of Protection Levels:
1. User Mode: A protection level that allows users to access only their own resources and prevents
them from accessing system resources.
2. Supervisor Mode: A protection level that allows the operating system to access all resources and
perform privileged operations.
3. Kernel Mode: A protection level that allows the operating system kernel to access all resources and
perform privileged operations.
In computer science, these ordered protection domains are referred to as Protection Rings. They help improve fault tolerance and provide computer security. Operating systems provide different levels of access to resources, and the rings are arranged hierarchically from most privileged to least privileged.
Use of Protection Rings: Protection Rings provide a logical space for the levels of permission and execution. Their two important uses are improving fault tolerance and providing computer security.
[Figure: Protection Rings]
Modes of Protection Ring: There are basically two modes, Supervisor Mode and Hypervisor Mode, explained briefly below.
1. Supervisor Mode: Supervisor Mode is an execution mode in some processors that allows execution of all instructions, including privileged instructions. It also gives access to a different address space, to memory-management hardware, and to other peripherals. Usually, the operating system runs in this mode.
2. Hypervisor Mode: Modern CPUs offer x86 virtualization instructions that allow a hypervisor to control "Ring 0" hardware access. To support virtualization, Intel VT and AMD Pacifica introduce a new privilege level below "Ring 0", and both add new machine-code instructions that work only at "Ring −1" and are intended to be used by the hypervisor.
Domain of Protection in OS:
The domain of protection in an operating system (OS) refers to the scope or range of protection
provided to resources such as files, directories, and devices.
Types of Domains:
1. User Domain: A domain that includes all resources owned by a specific user.
2. Group Domain: A domain that includes all resources shared by a group of users.
3. System Domain: A domain that includes all system resources, such as devices and system files.
Domain of Protection:
The domain of protection includes:
1. Subjects: Users, processes, and threads that access resources.
2. Objects: Resources such as files, directories, and devices.
3. Access Rights: Permissions that define what actions can be performed on objects.
Domain Structure:
The protection policies limit the access of each process with respect to its resource handling. A process is bound to use only those resources it requires to complete its task, only for the time it requires them, and only in the mode in which they are required. That is the protected domain of a process.
A computer system has processes and objects, which are treated as abstract data types, and these objects have operations specific to them. A domain element is described as <object, {set of operations on object}>.
Each domain consists of a set of objects and the operations that can be performed on them. A domain can correspond to a single process, a procedure, or a user; if a domain corresponds to a procedure, then changing domain means changing the procedure ID. Objects may share one or more common operations, in which case the domains overlap.
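The domain element <object, {set of operations on object}> maps naturally onto a dictionary of sets; the domain names and objects below are purely illustrative:

```python
# Each protection domain maps objects to the set of operations a
# process executing in that domain may perform on them.
domain_user = {
    "file_A": {"read"},
    "file_B": {"read", "write"},
}
domain_admin = {
    "file_A": {"read", "write"},
    "printer": {"print"},
}

def allowed(domain, obj, op):
    # An operation is permitted only if the domain lists it for the object.
    return op in domain.get(obj, set())

print(allowed(domain_user, "file_A", "write"))   # False
print(allowed(domain_admin, "file_A", "write"))  # True
# The domains overlap: both may read file_A.
print(allowed(domain_user, "file_A", "read") and
      allowed(domain_admin, "file_A", "read"))   # True
```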
The association between a process and a domain may be either static or dynamic:
1. Static – the set of resources available to the process is fixed throughout its lifetime.
2. Changing or dynamic – the process can switch from one domain to another dynamically, creating a new domain in the process if need be.
Security Measures:
Security measures are taken at different levels against malpractice; for example, no unauthorized person should be allowed onto the premises or given access to the systems.
The network used for the transfer of files must be secure at all times. No alien software should be able to extract information from the network during a transfer. Capturing traffic in this way is known as network sniffing, and it can be prevented by using encrypted channels of data transfer. The OS must also be able to resist forceful or even accidental violations.
Common methods of authentication include a username-password combination, a fingerprint, an eye (retina) scan, or user cards to access the system.
Passwords are a convenient way to authenticate, but they are also one of the most common and most vulnerable methods. Cracking passwords is not too hard: weak passwords fall quickly, and even strong passwords can be compromised by sharing access among multiple users or by network sniffing, as mentioned above.
Security Authentication:
To make passwords a strong and formidable authentication source, one-time passwords, encrypted passwords, and cryptography are used as follows.
1. One-Time Passwords –
A one-time password is unique at every instance of login by the user. It is essentially a pair of passwords combined to give the user access: either the system generates a random number and the user provides a complementary one, or the system and the user are each given a random number by an algorithm and, through a common function that the two share, they match the output and thus gain access.
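One way the "common function" scheme above can be sketched is a challenge-response using a shared secret and a keyed hash (HMAC). The key, challenge size, and truncation length here are illustrative choices, not a specific standard:

```python
import hashlib
import hmac
import secrets

SHARED_KEY = b"shared-secret-key"  # known to both the system and the user

def otp_response(key, challenge):
    # Both sides compute HMAC(key, challenge). A fresh random challenge
    # makes every login's password unique, so replaying an old one fails.
    return hmac.new(key, challenge, hashlib.sha256).hexdigest()[:8]

# System side: issue a random challenge for this login attempt.
challenge = secrets.token_bytes(16)
# User side: compute the one-time response with the shared key.
response = otp_response(SHARED_KEY, challenge)
# System side: verify by recomputing; compare_digest resists timing attacks.
ok = hmac.compare_digest(response, otp_response(SHARED_KEY, challenge))
print("access granted:", ok)
```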
2. Encrypted Passwords –
This is also a very secure way to authenticate access. The password data passed over the network for transfer and checking is encrypted, which helps the exchange proceed without interruption or interception.
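In practice, stored passwords are usually salted and hashed rather than reversibly encrypted. A minimal sketch using Python's standard `hashlib.pbkdf2_hmac`; the iteration count is an illustrative choice:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    # A random salt makes identical passwords hash differently,
    # defeating precomputed (rainbow-table) attacks.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    # Recompute with the stored salt and compare in constant time.
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored_digest)

salt, stored = hash_password("s3cret")
print(verify_password("s3cret", salt, stored))  # True
print(verify_password("wrong", salt, stored))   # False
```

Only the salt and digest need to be stored; the plaintext password never is.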
3. Cryptography –
Cryptography is another method of ensuring that data transferred over a network is not available to unauthorized users, allowing data to be transferred with full protection. It protects the data by introducing the concept of a key, which is central to the scheme. When a user sends data, it is encoded on a computer possessing the key, and the receiver must decode the data using the very same key. Thus, even if the data is stolen mid-way, there is a good chance the unauthorized user will not be able to read it.
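The shared-key idea can be illustrated with a toy XOR stream cipher whose keystream is derived from the key with SHA-256. This is a teaching sketch only, not a real cipher:

```python
import hashlib

def keystream(key, length):
    # Derive a repeatable pseudo-random byte stream from the key by
    # hashing the key with an incrementing counter.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(key, data):
    # XOR is its own inverse: the same function encrypts and decrypts,
    # so sender and receiver only need to share the same key.
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key = b"shared-key"
ciphertext = xor_crypt(key, b"transfer this safely")
print(ciphertext != b"transfer this safely")  # True: unreadable in transit
print(xor_crypt(key, ciphertext))             # decrypts with the same key
```

Without the key, an eavesdropper sees only the ciphertext; real systems use vetted ciphers such as AES for this role.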
4. What is a Resource-Allocation Graph?
8. What is Multi-Threading?
17. What are the most common attributes associated with an opened file?
20. When a process creates a new process, what is shared between the parent process and the child process?
39. Define busy waiting. How can busy waiting be overcome using semaphore operations?
42. What are the various attributes associated with an opened file?
44. Define a system call. List any four process-control system calls.