OS MATERIAL R23

R23 II-II(CSE) OPERATING SYSTEM

OPERATING SYSTEMS
Course Objectives: The main objectives of the course are to make the student

 Understand the basic concepts and principles of operating systems, including process
management, memory management, file systems, and protection.
 Make use of process scheduling algorithms and synchronization techniques to achieve
better performance of a computer system.
 Illustrate different conditions for deadlock and their possible solutions.

UNIT-I : Operating Systems Overview: Introduction, Operating system functions, Operating
systems operations, Computing environments, Free and Open-Source Operating Systems.
System Structures: Operating System Services, User and Operating-System Interface, system calls,
Types of System Calls, system programs, Operating system Design and Implementation, Operating
system structure, Building and Booting an Operating System, Operating system debugging.
UNIT - II : Processes: Process Concept, Process scheduling, Operations on processes, Inter-process
communication. Threads and Concurrency: Multithreading models, Thread libraries, Threading
issues. CPU Scheduling: Basic concepts, Scheduling criteria, Scheduling algorithms, Multiple
processor scheduling.
UNIT - III : Synchronization Tools: The Critical Section Problem, Peterson’s Solution, Mutex
Locks, Semaphores, Monitors, Classic problems of Synchronization.
Deadlocks: system Model, Deadlock characterization, Methods for handling Deadlocks, Deadlock
prevention, Deadlock avoidance, Deadlock detection, Recovery from Deadlock.
UNIT - IV: Memory-Management Strategies: Introduction, Contiguous memory allocation,
Paging, Structure of the Page Table, Swapping. Virtual Memory Management: Introduction, Demand
paging, Copy-on-write, Page replacement, Allocation of frames, Thrashing Storage Management:
Overview of Mass Storage Structure, HDD Scheduling.
UNIT - V : File System: File System Interface: File concept, Access methods, Directory Structure;
File system Implementation: File-system structure, File-system Operations, Directory implementation,
Allocation methods, Free space management; File-System Internals: File-System Mounting, Partitions
and Mounting, File Sharing. Protection: Goals of protection, Principles of protection, Protection
Rings, Domain of protection, Access matrix.

Text Books:
1. Operating System Concepts, Silberschatz A, Galvin P B, Gagne G, 10th Edition, Wiley, 2018.
2. Modern Operating Systems, Tanenbaum A S, 4th Edition, Pearson , 2016.
Reference Books:
1. Operating Systems -Internals and Design Principles, Stallings W, 9th edition, Pearson, 2018
2. Operating Systems: A Concept Based Approach, D.M Dhamdhere, 3rd Edition, McGraw- Hill,
2013
Online Learning Resources:
1. https://nptel.ac.in/courses/106/106/106106144/
2. http://peterindia.net/OperatingSystems.html

ASSISTANT PROFESSOR OF CSE CH.SUNEETHA

UNIT-I
Operating Systems Overview: Introduction, Operating system functions, Operating systems
operations, Computing environments, Free and Open-Source Operating Systems.
System Structures: Operating System Services, User and Operating-System Interface, system
calls, Types of System Calls, system programs, Operating system Design and Implementation,
Operating system structure, Building and Booting an Operating System, Operating system debugging.
Introduction to Operating System:
An operating system (OS) is essential computer software that manages and coordinates hardware and software
resources. It acts as an interface between the user and the computer's hardware, enabling users to interact with the
device and run applications.
 Resource Management: The OS manages and allocates resources like CPU, memory, storage, and input/output
devices.
 File Management: It allows users to store, organize, and access files and folders.
 Process Management: The OS handles the execution of programs, ensuring efficient resource utilization.
 Interface: It provides a user interface (UI) that allows users to interact with the computer through visual elements or
commands.
 Device Management: It controls and manages communication with various hardware components, including printers,
keyboards, and more.
 Networking: It enables communication with other devices and networks.
Examples:
 Desktop OS: Windows, macOS, Linux.
 Mobile OS: Android, iOS.
 Other: Embedded systems (e.g., in cars), real-time OS (used in critical applications).
What are the functions of an OS?
 An operating system (OS) manages computer hardware and software resources, providing a
platform for running applications and interacting with the user.
 Its primary functions include process management, memory management, file management,
and device management.
 Additionally, an OS handles security, networking, user interface, and resource allocation.
Different functions of an OS:
1. Process Management:
 Manages the execution of multiple programs (processes) concurrently.
 Allocates resources (like CPU time) to each process.
 Ensures processes can interact with each other.
2. Memory Management:
 Allocates memory space to different programs.
 Optimizes memory usage to avoid fragmentation and maximize efficiency.
 Handles memory allocation and deallocation.
3. File Management:
 Organizes and stores files and directories on secondary storage devices.
 Provides functionalities like creating, deleting, copying, and renaming files.
 Implements file system structures for efficient storage and retrieval.
4. Device Management:
 Manages interactions between the computer and its peripherals (printers, scanners, etc.).
 Provides drivers for different hardware devices.
 Enables communication between the OS and hardware devices.
5. Security:


 Protects the system from unauthorized access.


 Implements user authentication and authorization mechanisms.
 Provides security features like firewalls and data encryption.
6. Networking:
 Manages network connections and protocols.
 Enables sharing of resources (like printers and files) on a network.
7. User Interface:
 Provides a way for users to interact with the computer.
 Includes graphical user interfaces (GUIs) or command-line interfaces (CLIs).
8. Resource Allocation:
 Allocates computer resources (like CPU, memory, storage) to different processes.
 Prioritizes resource allocation based on task needs and system policies.
OS operations:
 An operating system (OS) performs numerous crucial operations to manage and coordinate
hardware and software resources on a computer.
 These operations include booting, managing system resources, managing files, handling
input and output, and executing and providing services for application software.
common OS operations:
1. Booting: The OS manages the computer's startup process, loading the necessary files and
initializing hardware devices.
2. Resource Management: The OS allocates and manages various system resources, including
CPU time, memory, storage devices, and peripheral devices.
3. File Management: The OS provides a hierarchical file system, allowing users to create,
delete, modify, and organize files and directories.
4. Input/Output Handling: The OS manages interaction with input devices (like keyboards and
mice) and output devices (like displays and printers).
5. Process Management: The OS manages the execution of multiple processes (programs)
simultaneously, including scheduling, allocation of resources, and communication between
processes.
6. Memory Management: The OS manages the computer's main memory, allocating space to
different programs and preventing memory conflicts.
7. Device Management: The OS controls access to and manages various hardware devices,
including printers, disk drives, and other peripherals.
8. User Interface: The OS provides a user interface (UI) that allows users to interact with the
computer, such as through graphical user interfaces (GUIs) or command-line interfaces.
9. Security: The OS implements security measures to protect the system from unauthorized
access, data breaches, and other threats.
10. Networking: The OS provides networking capabilities, enabling users to connect to
networks and share resources.
Computing Environments:
 A computer system uses many devices, arranged in different ways to solve many problems.
This constitutes a computing environment where many computers are used to process and
exchange information to handle multiple issues.
 The different types of Computing Environments are −


Personal Computing Environment


 In the personal computing environment, there is a single computer system. All the system processes
are available on the computer and executed there. The different devices that constitute a personal
computing environment are laptops, mobiles, printers, computer systems, scanners etc.
Time Sharing Computing Environment
 The time sharing computing environment allows multiple users to share the system simultaneously.
Each user is provided a time slice and the processor switches rapidly among the users according to
it. Because of this, each user believes that they are the only ones using the system.
Client Server Computing Environment
 In client server computing, the client requests a resource and the server provides that resource.
 A server may serve multiple clients at the same time while a client is in contact with only one server.
 Both the client and server usually communicate via a computer network but sometimes they may
reside in the same system.
Distributed Computing Environment
 A distributed computing environment contains multiple nodes that are physically separate but linked together using the
network.
 All the nodes in this system communicate with each other and handle processes in tandem.
 Each of these nodes contains a small part of the distributed operating system software.
Cloud Computing Environment
 The computing is moved away from individual computer systems to a cloud of computers in cloud computing environment.
 The cloud users only see the service being provided and not the internal details of how the service is provided.
 This is done by pooling all the computer resources and then managing them using a software.
Cluster Computing Environment
 The clustered computing environment is similar to parallel computing environment as they both have multiple CPUs.
 However, a major difference is that clustered systems are created from two or more individual computer systems merged
together, which then work in parallel with each other.
Free Software and Open Source Software:
 Free Software and Open Source Software are two philosophies in software engineering.
 Free Software and Open Source Software both have common goals of collaboration and innovation but
they are distinct in terms of why they are doing it and prioritize different aspects of software
development and distribution.
 Free software and open-source software are two distinct concepts, each with its own strengths and
weaknesses.
 Free software is developed with the goal of promoting freedom and giving users complete control over
the software they use.
 Open-source software is developed with the goal of producing high-quality software that can be used
by anyone, regardless of their technical ability.
 Ultimately, the choice between free software and open-source software depends on the needs of the
user and the specific problem they are trying to solve.


What is Free Software?


 “Free software” means software that respects users’ freedom and community.
 Roughly, it means that the users have the freedom to run, copy, distribute, study, change, and improve
the software.
 The term “free software” is sometimes misunderstood—it has nothing to do with price. It is about
freedom.
Advantages of Free Software
 Cost: Free software is typically free to use, modify, and distribute.
 Freedom: Free software is often accompanied by a set of ethical principles that promote users’ freedom to use,
study, modify, and share the software.
 Collaboration: Free software often encourages collaboration among developers and users, leading to faster
development and better quality software.
 Transparency: Free software is often developed in a transparent way, with the source code and development
process available for public scrutiny.
 Flexibility: Free software can be used on a wide range of platforms and devices.
Disadvantages of Free Software
 Support: While free software does have a community of developers and users, it may not always have the same
level of professional support as commercial software.
 Compatibility: Free software may not always be compatible with other software applications and hardware
devices.
 Security: Because free software is available for everyone to use and modify, it may be easier for malicious
actors to identify and exploit vulnerabilities.
 Complexity: Free software can be more complex and difficult to use than commercial software, especially for
non-technical users.
 Documentation: Free software may not always have the same level of documentation and user guides as
commercial software.
What is Open Source Software?
 Open Source Software is something that you can modify as per your needs, and share with others
without any licensing violation burden.
 When we say Open Source, the source code of the software is available publicly under Open Source
licenses like the GNU GPL, which allow you to edit the source code and distribute it.
 Read these licenses and you will realize that these licenses are created to help us.
1. Coined by the development environments around software produced by open collaboration of software
developers on the internet.
2. Later specified by the Open Source Initiative (OSI).
3. It does not explicitly state ethical values, besides those directly associated with software development.
Advantages of Open Source Software
 Cost: Open source software is typically free to use, modify and distribute.
 Customization: The source code of open source software is available to everyone, allowing users to modify
and customize it to suit their needs.
 Community support: Open source software often has a large community of developers and users who
contribute to its development and provide support.
 Transparency: The source code of open source software is open for everyone to see, making it easier to
identify and fix bugs and vulnerabilities.
 Flexibility: Open source software can be used on a wide range of platforms and devices.
Disadvantages of Open Source Software
 Support: While open source software does have a large community of developers and users, it may not always
have the same level of professional support as commercial software.
 Compatibility: Open source software may not always be compatible with other software applications and
hardware devices.
 Security: Because the source code of open source software is available to everyone, it may be easier for
malicious actors to identify and exploit vulnerabilities.
 Complexity: Open source software can be more complex and difficult to use than commercial software,
especially for non-technical users.
 Documentation: Open source software may not always have the same level of documentation and user guides
as commercial software.
Similarities between Free Software and Open Source Software
 Both free software and open source software have access to the source code, allowing users to modify and
improve the software.
 Both types of software often rely on a community of users and developers to provide support and contribute to
the development of the software.


 Both free software and open source software are often distributed under open licenses, allowing users to use,
modify, and distribute the software without restrictions.
Difference between Free Software and Open Source Software
Origin:
 FS: The term was coined by the Free Software Foundation in the 1980s.
 OSS: The phrase “open source” was coined in the late 1990s, in response to the restrictions of free software.
View on Software’s Role in Life:
 FS: Software is an important part of people’s lives.
 OSS: Software is just software; there are no ethics associated directly with it.
Ethics and Freedom:
 FS: Software freedom translates to social freedom.
 OSS: Ethics are to be associated with the people, not with the software.
Value of Freedom:
 FS: Freedom is a value that is more important than any economical advantage.
 OSS: Freedom is not an absolute concept; freedom should be allowed, not imposed.
Compatibility:
 FS: Every free software is open source.
 OSS: Not every open-source software is free software.
License Issues:
 FS: No such issue exists in free software.
 OSS: There are many different open-source software licenses, and some of them are quite
restrictive, resulting in open-source software that is not free.
Restrictions:
 FS: No restrictions are imposed on free software.
 OSS: Open-source software occasionally imposes some constraints on users.
Examples:
 FS: The Free Software Directory maintains a large database of free software packages. Some of
the best-known examples include the Linux kernel, the BSD and Linux operating systems, the
GNU Compiler Collection and C library, the MySQL relational database, the Apache web server,
and the Sendmail mail transport agent.
 OSS: Prime examples of open-source products are the Apache HTTP Server, the e-commerce
platform Open Source Commerce, the internet browsers Mozilla Firefox and Chromium (the
project where the vast majority of development of the freeware Google Chrome is done), and the
full office suite LibreOffice.

Operating System - Services


 An Operating System provides services to both the users and to the programs.
 It provides programs an environment to execute.
 It provides users the services to execute the programs in a convenient manner.


common services provided by an operating system −


1. Program execution
2. I/O operations
3. File System manipulation
4. Communication
5. Error Detection
6. Resource Allocation
7. Protection
1.Program execution
 Operating systems handle many kinds of activities from user programs to system
programs like printer spooler, name servers, file server, etc.
 Each of these activities is encapsulated as a process.
 A process includes the complete execution context (code to execute, data to manipulate,
registers, OS resources in use).
 Following are the major activities of an operating system with respect to program
management −
 Loads a program into memory.
 Executes the program.
 Handles program's execution.
 Provides a mechanism for process synchronization.
 Provides a mechanism for process communication.
 Provides a mechanism for deadlock handling.
2.I/O Operation:
 An I/O subsystem comprises I/O devices and their corresponding driver software.
 Drivers hide the peculiarities of specific hardware devices from the users.
 An Operating System manages the communication between user and device drivers.
 I/O operation means read or write operation with any file or any specific I/O device.
 Operating system provides the access to the required I/O device when required.
3.File system manipulation:
 A file represents a collection of related information.
 Computers can store files on the disk (secondary storage), for long-term storage purpose.
 Examples of storage media include magnetic tape, magnetic disk and optical disk drives like
CD, DVD. Each of these media has its own properties like speed, capacity, data transfer rate
and data access methods.
 A file system is normally organized into directories for easy navigation and usage.
 These directories may contain files and other directories.
 Following are the major activities of an operating system with respect to file management −
 Program needs to read a file or write a file.
 The operating system gives the permission to the program for operation on file.
 Permission varies from read-only, read-write, denied and so on.
 Operating System provides an interface to the user to create/delete files.
 Operating System provides an interface to the user to create/delete directories.
 Operating System provides an interface to create the backup of file system.
4.Communication:
 In the case of distributed systems, which are a collection of processors that do not share
memory, peripheral devices, or a clock, the operating system manages communication
between all the processes.


 Multiple processes communicate with one another through communication lines in the
network.
 The OS handles routing and connection strategies, and the problems of contention and
security.
 Following are the major activities of an operating system with respect to communication
 Two processes often require data to be transferred between them
 Both the processes can be on one computer or on different computers, but are
connected through a computer network.
 Communication may be implemented by two methods, either by Shared Memory or by
Message Passing.
5.Error handling:
 Errors can occur anytime and anywhere.
 An error may occur in CPU, in I/O devices or in the memory hardware.
 Following are the major activities of an operating system with respect to error handling −
 The OS constantly checks for possible errors.
 The OS takes an appropriate action to ensure correct and consistent computing.
6.Resource Management:
 In the case of a multi-user or multi-tasking environment, resources such as main memory, CPU
cycles and file storage are to be allocated to each user or job.
 Following are the major activities of an operating system with respect to resource
management −
 The OS manages all kinds of resources using schedulers.
 CPU scheduling algorithms are used for better utilization of CPU.
7.Protection:
 Considering a computer system having multiple users and concurrent execution of multiple
processes, the various processes must be protected from each other's activities.
 Protection refers to a mechanism or a way to control the access of programs, processes, or
users to the resources defined by a computer system.
 Following are the major activities of an operating system with respect to protection −
 The OS ensures that all access to system resources is controlled.
 The OS ensures that external I/O devices are protected from invalid access attempts.
 The OS provides authentication features for each user by means of passwords.
User Interface and Operating System Interface :
 The user and the operating system are connected with each other with the help of an
interface, so an interface is used to connect the user and the OS.
 In computers there are different types of interfaces that can be used to connect users to
the computer, and this connection is responsible for data transfer.
 These interfaces are not all necessarily used, but can be used in a computer whenever
needed, so different types of tasks can be performed with the help of different interfaces.

Command line interface:


 The command-line interface is an interface in which the user gives different commands
regarding the input and output, and then a task is performed; commands are used to
create, delete, print, copy, paste, etc.
 All these operations are performed with the help of the command-line interface.


 The interface is always connected to the OS, so a command given by the user is carried
out directly by the OS; a number of operations can be performed with the help of the
command line, and although multiple commands may be entered, they are interpreted
and executed one at a time.
 The command-line interface is important because all the basic operations in the computer
are performed with the help of the OS, which is responsible for memory management.
 By using it we can partition the memory and make use of it.
Command Line Interface advantages :
 Controls the OS or an application.
 Faster management.
 Ability to store scripts, which helps in automating regular tasks.
 Troubleshooting network connection issues.
Command Line Interface disadvantages :
 The steeper learning curve is associated with memorizing commands and a complex syntax.
 Different commands are used in different shells.
Graphical user interface:
 The graphical user interface is used for playing games, watching videos, etc.;
these are done with the help of the GUI because all these applications require graphics.
 The GUI is one of the necessary interfaces because only by using it can the user clearly
see pictures and play videos.
 So we need a GUI for computers, and this can be provided only with the help of an
operating system.
 When a task is performed on the computer, the OS checks the task and determines the
interface that is necessary for it.
 So, we need a GUI in the OS.
The basic components of GUIs are −
 Start menu with program groups
 Taskbar showing running programs
 Desktop screen
 Different icons and shortcuts.
Choice of interface:
 When a particular task can be performed through some interface in the minimum possible
time, with the output shown on the screen, the OS selects that interface for the task.
 The choice of interface means the OS checks the task and finds out which interface is
suitable for that particular task.
 The interface selected in this way is called the choice of interface, and this is done with the
help of the OS.
Introduction of System Call:
 A system call is a programmatic way in which a computer program requests a service from the
kernel of the operating system on which it is executed.
 A system call is a way for programs to interact with the operating system.
 A computer program makes a system call when it requests a service from the operating system’s kernel.
 System call provides the services of the operating system to the user programs via the
Application Program Interface(API).
 System calls are the only entry points into the kernel system and are executed in kernel mode.
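As a small illustration (using Python's os module, whose functions are thin wrappers over the underlying system calls on Unix-like systems), a program can invoke the kernel's write() call directly:

```python
import os

# os.write() issues the kernel's write() system call directly,
# bypassing the language's buffered I/O. Descriptor 1 is standard output.
msg = b"hello from a system call\n"
n = os.write(1, msg)  # returns the number of bytes actually written
```

The call crosses into kernel mode, the kernel performs the output, and control returns to the user program with the byte count.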


Types of System Calls:


 Services provided by an OS are typically related to any kind of operation that a user program can perform
like creation, termination, forking, moving, communication, etc.
 Similar types of operations are grouped into one single system call category.
 System calls are classified into the following categories:

1. File System Operations: These system calls are made while working with files in the OS, for file
manipulation operations such as creation, deletion, reading, writing, etc.
open(): Opens a file for reading or writing. The file could be of any type, such as a text file or an audio file.
read(): Reads data from a file. After a file is opened through the open() system call, a process that wants to
read data from the file makes a read() system call.
write(): Writes data to a file. Whenever the user makes any modification to a file and saves it, this call is made.
close(): Closes a previously opened file.
seek(): Moves the file pointer within a file. This call is typically made when the user tries to read data from a
specific position in a file. For example, to read from line 47, the file pointer moves from line 1 (or wherever it
was previously) to line 47.
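These calls can be sketched with Python's os module, whose functions map directly onto the system calls above (the file name demo.txt is arbitrary):

```python
import os

# open() with flags: write-only, create if absent, truncate to empty.
fd = os.open("demo.txt", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
os.write(fd, b"hello world")        # write() system call
os.close(fd)                        # close() system call

# Reopen, move the file pointer with lseek() (the seek() call above),
# then read() from that position.
fd = os.open("demo.txt", os.O_RDONLY)
os.lseek(fd, 6, os.SEEK_SET)        # jump to byte offset 6
data = os.read(fd, 5)               # reads b"world"
os.close(fd)
```

The file descriptor fd returned by open() identifies the file in every subsequent call until close() releases it.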
2. Process Control:
 These types of system calls deal with process creation, process termination, resource
allocation and deallocation, etc.
 Basically, they manage all the processes that are a part of the OS.

 fork(): Creates a new process (child) by duplicating the current process (parent).
This call is made when a process needs a copy of itself; the parent may then wait,
suspending temporarily until the child process finishes its execution.
 exec(): Loads and runs a new program in the current process, replacing the current process
image. All the data, such as the stack, registers, and heap memory, is replaced by the new
program; this is known as an overlay. For example, when you run Java byte code with the
command java "filename", an exec() call is made in the background to execute the file, and
the JVM is run.


 wait(): The primary purpose of this call is to ensure that the parent process doesn't proceed
further with its execution until all its child processes have finished their execution. This call is
made when one or more child processes are forked.
 exit(): It simply terminates the current process.
 kill(): This call sends a signal to a specific process, and has various purposes, including
requesting it to quit voluntarily, forcing it to quit, or asking it to reload its configuration.
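A minimal sketch of fork(), wait(), and exit() working together, again through Python's os wrappers (Unix-only, since Windows has no fork()):

```python
import os

pid = os.fork()                   # fork(): returns 0 in the child, the child's PID in the parent
if pid == 0:
    # Child process: terminate via the exit() call with status 7
    # so the parent can observe it.
    os._exit(7)
else:
    # Parent: wait() blocks until the child finishes, returning its PID
    # and an encoded exit status.
    child, status = os.waitpid(pid, 0)
    code = os.WEXITSTATUS(status)  # decode the child's exit status
```

Here the parent suspends in waitpid() exactly as the wait() description above says, resuming only after the child has exited.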
3. Memory Management:
These types of system calls deal with memory allocation, deallocation, and dynamically changing the
size of the memory allocated to a process. In short, the overall management of memory is done by
making these system calls.
brk(): Changes the data segment size for a process (the heap). It takes an address as an argument
that defines the new end of the heap, explicitly setting the heap's size.
sbrk(): This call also manages heap memory; it takes an integer argument (positive or negative)
specifying whether to increase or decrease the size, respectively.
mmap(): Memory Map - It basically maps a file or device into main memory and further into a
process's address space for performing operations. And any changes made in the content of a file will
be reflected in the actual file.
munmap(): Unmaps a memory-mapped file from a process's address space and out of main memory.
mlock() and munlock(): A memory lock defines a mechanism through which certain pages stay in
memory and are not swapped out to the swap space on disk. This can be done to avoid page faults.
Memory unlock is the opposite of lock; it releases the lock previously acquired on pages.
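A short sketch of mmap()/munmap() using Python's mmap module (the file name mapped.bin is arbitrary): the file is mapped into the process's address space, modified through memory, and the change is reflected in the file itself, as described above.

```python
import mmap
import os

# Create a file with six placeholder bytes.
fd = os.open("mapped.bin", os.O_RDWR | os.O_CREAT | os.O_TRUNC)
os.write(fd, b"XXXXXX")

# mmap() maps the file into the process's address space.
m = mmap.mmap(fd, 6)
m[0:6] = b"mapped"                   # write through memory, not the file API
m.close()                            # munmap(): unmap and flush changes

# Reading the file shows the change made via memory.
os.lseek(fd, 0, os.SEEK_SET)
contents = os.read(fd, 6)
os.close(fd)
```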
4. Interprocess Communication (IPC)
When two or more processes need to communicate, various IPC mechanisms are used by the
OS, which involve making numerous system calls. Some of them are:
 pipe(): Creates a unidirectional communication channel between processes. For example, a
parent process may communicate to its child process through a pipe making a parent process
as input source of its child process.
 socket(): Creates a network socket for communication. Processes in same or other networks
can communicate through this socket, provided that they have necessary network permissions
granted.
 shmget(): It is short for - 'shared-memory-get'. It allows one or more processes to share a
portion of memory and achieve interprocess communication.
 semget(): It is short for - 'semaphore-get'. This call typically manages the coordination of
multiple processes while accessing a shared resource that is, the critical section.
 msgget(): It is short for - 'message-get'. IPC mechanism has one of the fundamental concept
called - 'message queue' which is a queue data structure inside memory through which various
processes communicate with each other. This message queue is allocated through this call
allowing other processes a structured way of communication for data exchange purpose.
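The pipe() bullet above notes that a pipe is unidirectional and that a parent can act as the input source of its child. A sketch of that pattern, using two pipes for a two-way channel (Python's `os.pipe`/`os.fork` on POSIX; the message contents are arbitrary):

```python
import os

def echo_through_child(message: bytes) -> bytes:
    """Parent sends a message to its child over one pipe; the child echoes it back over a second."""
    to_child_r, to_child_w = os.pipe()     # pipe(): unidirectional, parent -> child
    to_parent_r, to_parent_w = os.pipe()   # second pipe: child -> parent
    pid = os.fork()
    if pid == 0:
        # Child: close the unused ends, read, echo, exit.
        os.close(to_child_w)
        os.close(to_parent_r)
        data = os.read(to_child_r, 1024)
        os.write(to_parent_w, data)
        os._exit(0)
    # Parent: close the unused ends, write, then read the echo.
    os.close(to_child_r)
    os.close(to_parent_w)
    os.write(to_child_w, message)
    os.close(to_child_w)
    reply = os.read(to_parent_r, 1024)
    os.close(to_parent_r)
    os.waitpid(pid, 0)
    return reply
```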

5. Device Management
The device management system calls are used to interact with the various peripheral devices attached to the PC, and even to manage the current device.
 SetConsoleMode(): This call is made to set the mode of the console (input or output). It allows a process to control various console modes. In Windows, it is used to control the behaviour of the command line.
 WriteConsole(): It allows us to write data to the console screen.

 ReadConsole(): It allows us to read input typed at the console.
 open(): This call is made whenever a device or a file is opened. A unique file descriptor is
created to maintain the control access to the opened file or device.
 close(): This call is made when the system or the user closes the file or device.
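The open()/close() pair above can be sketched with raw file descriptors; Python's `os.open` mirrors the POSIX call, and the file path below is a hypothetical throwaway name used only for the demonstration:

```python
import os
import tempfile

def write_then_read(data: bytes) -> bytes:
    """open() yields a file descriptor controlling access; close() releases it."""
    with tempfile.TemporaryDirectory() as d:
        path = os.path.join(d, "demo.txt")                   # hypothetical file
        fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o600)  # open(): new descriptor
        try:
            os.write(fd, data)
        finally:
            os.close(fd)                                     # close(): release it
        fd = os.open(path, os.O_RDONLY)                      # reopen for reading
        try:
            return os.read(fd, len(data))
        finally:
            os.close(fd)
```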
Importance of System Calls:
Efficient Resource Management: System Calls help your computer manage its resources
efficiently. They allocate and manage memory so programs run smoothly without using up too many
resources. This is important for multitasking and overall performance.
Security and Isolation: System Calls ensure that one program cannot interfere with or access the
memory of another program. This enhances the security and stability of your device.
Multitasking Capabilities: System Calls support multitasking, allowing multiple programs to run
simultaneously. This improves productivity and makes it easy to switch between applications.
Enhanced Control: System Calls provide a high level of control over your device’s operations.
They allow you to start and stop processes, manage files, and perform various system-related tasks.
Input/Output (I/O) Operations: System Calls enable communication with input and output
devices, such as your keyboard, mouse, and screen. They ensure that these devices work effectively.
Networking and Communication: System Calls facilitate networking and communication
between different applications. They make it easy to transfer data over networks, browse the web,
send emails, and connect online.
What is a System Program:
 In an operating system, a user is able to use different types of system programs, which provide a convenient environment for developing and running application software.
 System programs support the development and execution of programs; they make use of system calls, and different system programs exist for different tasks.
 File management − These programs create, delete, copy, rename, print, exit and generally manipulate
the files and directory.
 Status information − These programs provide status information about the system, such as the state of input/output devices, storage, CPU utilization time, and how much memory is required to perform a task.
 Programming language support − Compilers, assemblers, and interpreters for common programming languages are provided with the operating system for particular purposes on the computer.
 Program loading and execution − Once a program is compiled, it must be loaded into memory and executed to produce its output; this task is also performed by system programs with the help of system calls.
 Communication − These programs provide the mechanism by which a number of devices, processes, and users communicate with each other, over wired or wireless connections; communication is necessary for the operating system.
 Background services − Different types of services run in the background on the operating system; examples include services that change the background of your window and services that scan for and detect viruses on the computer.
Purpose of using system program
 System programs communicate and coordinate the activities and functions of the hardware and software of a system and also control the operations of the hardware.
 An operating system is one example of system software.
 The operating system controls the computer hardware and acts as an interface between application software and the hardware.
Types of System programs
The types of system programs are as follows −

Utility program
It manages, maintains and controls various computer resources. Utility programs are comparatively technical and are
targeted for the users with solid technical knowledge.
Few examples of utility programs are: antivirus software, backup software and disk tools.
Device drivers
It controls the particular device connected to a computer system. Device drivers basically act as a translator between the
operating system and device connected to the system.
Example − printer driver, scanner driver, storage device driver etc.
Directory reporting tools
These tools provide software to facilitate navigation through the directories and files of the computer system.
Example − dir, ls, Windows Explorer etc.
Operating System Design and Implementation
 An operating system is a construct that allows user application programs to interact with the system hardware.
 An operating system by itself does not perform useful work; rather, it provides an environment in which different applications and programs can do useful work.
 Many problems can occur while designing and implementing an operating system. These are covered in operating system design and implementation.

Operating System Design Goals:


 It is quite complicated to define all the goals and specifications of the operating system while designing it.
 The design changes depending on the type of operating system, i.e., whether it is a batch system, time-shared system, single-user system, multi-user system, distributed system, etc.
 There are basically two types of goals while designing an operating system. These are −
User Goals:
The operating system should be convenient, easy to use, reliable, safe and fast according to the users.
However, these specifications are not very useful as there is no set method to achieve these goals.
System Goals:
 The operating system should be easy to design, implement, and maintain.
 These are specifications required by those who create, maintain, and operate the operating system.
 But there is no specific method to achieve these goals either.
Operating System Mechanisms and Policies:
 There is no specific way to design an operating system as it is a highly creative task. However, there are general software
principles that are applicable to all operating systems.
 A subtle difference between mechanism and policy is that mechanism shows how to do something and policy shows what to do.
Policies may change over time and this would lead to changes in mechanism.
 So, it is better to have a general mechanism that would require few changes even when a policy change occurs.

For example - if the mechanism and policy are independent, then few changes are required in the mechanism when the policy changes. If a policy favours I/O-intensive processes over CPU-intensive processes, then changing the policy to favour CPU-intensive processes will not require changing the mechanism.
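The separation can be sketched in a few lines: the dispatcher below is the mechanism (how to pick a process), while the ordering function passed to it is the policy (what to prefer). Swapping the policy from I/O-intensive-first to CPU-intensive-first requires no change to the mechanism. All names and the `cpu_burst` field are illustrative, not from any real scheduler:

```python
def pick_next(ready_queue, policy):
    """Mechanism: select one process from the ready queue using the given policy."""
    return min(ready_queue, key=policy)

# Two interchangeable policies (illustrative): favour short or long CPU bursts.
io_bound_first = lambda p: p["cpu_burst"]    # short bursts first (I/O-intensive)
cpu_bound_first = lambda p: -p["cpu_burst"]  # long bursts first (CPU-intensive)

ready = [{"pid": 1, "cpu_burst": 2}, {"pid": 2, "cpu_burst": 9}]
```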
Operating System Implementation:
The operating system needs to be implemented after it is designed. Earlier they were written in assembly language but now higher level
languages are used. The first system not written in assembly language was the Master Control Program (MCP) for Burroughs Computers.

Advantages of Higher Level Language:


 There are multiple advantages to implementing an operating system in a higher-level language: the code is written faster, it is more compact, and it is easier to debug and understand.
 Also, the operating system can be easily moved from one hardware to another if it is written in a high level language.

Disadvantages of Higher Level Language:


 Using a high-level language to implement an operating system leads to a loss in speed and an increase in storage requirements. However, in modern systems only a small amount of performance-critical code, such as the CPU scheduler and memory manager, needs such attention.
 Also, the bottleneck routines in the system can be replaced by assembly language equivalents if required.
What is a System Structure for an Operating System?
 A system structure for an operating system is like the blueprint of how an OS is organized and
how its different parts interact with each other.
 Because operating systems have complex structures, we want a structure that is easy to
understand so that we can adapt an operating system to meet our specific needs.
 Similar to how we break down larger problems into smaller, more manageable subproblems, building an operating system in pieces is simpler; each piece is a well-defined component of the system.
 The strategy for integrating different operating system components within the kernel can be thought of as an operating system structure.
 As will be discussed below, various types of structures are used to implement operating
systems.
Types of Operating Systems Structures
Depending on this, we have the following structures in the operating system:
1. Simple Structure
2. Monolithic Structure
3. Micro-Kernel Structure
4. Hybrid-Kernel Structure
5. Exo-Kernel Structure
6. Layered Structure
7. Modular Structure
8. Virtual Machines
Simple Structure:
 Simple structure operating systems do not have well-defined structures and are small, simple,
and limited. The interfaces and levels of functionality are not well separated.
 MS-DOS is an example of such an operating system.
 In MS-DOS, application programs are able to access the basic I/O routines.
 These types of operating systems cause the entire system to crash if one of the user programs
fails.

Advantages of Simple Structure:


 It delivers better application performance because of the few interfaces between the
application program and the hardware.
 It is easy for kernel developers to develop such an operating system.
Disadvantages of Simple Structure:
 The structure is very complicated, as no clear boundaries exist between modules.
 It does not enforce data hiding in the operating system.
Monolithic Structure:
 A monolithic structure is a type of operating system architecture where the entire operating
system is implemented as a single large process in kernel mode.
 Essential operating system services, such as process management, memory management, file
systems, and device drivers, are combined into a single code block.

Advantages of Monolithic Structure:

 Performance of the monolithic structure is fast: since everything runs in a single block, communication between components is quick.
 It is easier to build because all parts are in one code block.
Disadvantages of Monolithic Structure:
 It is hard to maintain, as a small error can affect the entire system.
 There are also some security risks in the monolithic architecture, since every component runs with full kernel privileges.
Micro-Kernel Structure:
 Micro-Kernel structure designs the operating system by removing all non-essential
components from the kernel and implementing them as system and user programs.
 This results in a smaller kernel called the micro-kernel.
 An advantage of this structure is that all new services are added to user space and do not require the kernel to be modified.
 Thus it is more secure and reliable: if a service fails, the rest of the operating system remains untouched.
 Mac OS is an example of this type of OS.

Advantages of Micro-kernel Structure:


 It makes the operating system portable to various platforms.
 As microkernels are small, they can be tested effectively.
Disadvantages of Micro-kernel Structure:
 Increased level of inter module communication degrades system performance.
Hybrid-Kernel Structure:
 The hybrid-kernel structure is a combination of the monolithic-kernel structure and the micro-kernel structure.
 Basically, it combines properties of both to make a more advanced and practical approach.
 It implements the speed and simple design of the monolithic kernel together with the modularity and stability of the micro-kernel structure.
Advantages of Hybrid-Kernel Structure
 It offers good performance, as it combines the advantages of both structures.
 It supports a wide range of hardware and applications.
 It provides better isolation and security by implementing micro-kernel approach.
 It enhances overall system reliability by separating critical functions into micro-kernel for
debugging and maintenance.
Disadvantages of Hybrid-Kernel Structure
 It increases the overall complexity of the system by implementing both structures (monolithic and micro), making the system difficult to understand.
 The layer of communication between the micro-kernel and other components increases time complexity and decreases performance compared to a monolithic kernel.

Exo-Kernel Structure
 Exokernel is an operating system developed at MIT to provide application-level management
of hardware resources.
 By separating resource management from protection, the exokernel architecture aims to
enable application-specific customization.
 Due to its limited operability, exokernel size typically tends to be minimal.
 The OS will always have an impact on the functionality, performance, and scope of the apps
that are developed on it because it sits in between the software and the hardware.
 The exokernel operating system makes an attempt to address this problem by rejecting the
notion that an operating system must provide abstractions upon which to base applications.
 The objective is to impose as few abstractions as possible on developers while still giving them freedom.
Advantages of Exo-Kernel:

 Support for improved application control.


 Separates resource management from protection.
 It improves the performance of the application.
 A more efficient use of hardware resources is made possible by accurate resource allocation
and revocation.
 It is simpler to test and create new operating systems.
 Each user-space program is allowed to use a custom memory management system.
Disadvantages of Exo-Kernel:
 A decline in consistency.
 Exokernel interfaces have a complex architecture.
Modular Structure:
 A modular structure operating system works on a similar principle to a monolithic kernel, but with a better design.
 A central kernel is responsible for all major operations of the operating system.
 This kernel has a set of core functionality, and other services are loaded as modules dynamically into the kernel at boot time or at runtime. Sun Solaris OS is one example of a modular structured operating system.
Advantages
 Highly customizable - Being modular, each module implementation can be customized easily. New functionality can be added without impacting other modules.
 Verifiable - Being modular, each module can be verified and debugged easily.
Disadvantages:
Less performant - A modular structured operating system is less performant than a basic structured operating system.
Complex design - Each module must be planned carefully, as each module communicates with the kernel. A communication API must be devised to facilitate this communication.
Virtual Machine Structure:
In this kind of structure, hardware such as the CPU, memory, and hard disks is abstracted into virtual machines. Users can configure and use these virtual machines through their own execution contexts. A virtual machine takes a good amount of disk space and must be provisioned. Multiple virtual machines can be created on a single physical machine.

Advantages:
 Highly customizable - Being virtual, functionality is easily accessible and can be customized as needed.
 Secure - Being virtual, with no direct hardware access, such systems are highly secure.
Disadvantages
 Lower performance - A virtual structured operating system is less performant than a modular structured operating system.
 Complex design - Each virtual component of the machine must be planned carefully, as each component has to abstract the underlying hardware.
Layered Structure:
 An OS can be broken into pieces and retain much more control over the system. In this
structure, the OS is broken into a number of layers (levels).

 The bottom layer (layer 0) is the hardware, and the topmost layer (layer N) is the user
interface.
 These layers are so designed that each layer uses the functions of the lower-level layers.
 This simplifies the debugging process, if lower-level layers are debugged and an error occurs
during debugging, then the error must be on that layer only, as the lower-level layers have
already been debugged.
 The main disadvantage of this structure is that at each layer, the data needs to be modified
and passed on which adds overhead to the system. Moreover, careful planning of the layers is
necessary, as a layer can use only lower-level layers.
 UNIX is an example of this structure.

Advantages :
 Layering makes it easier to enhance the operating system, as the implementation of a layer
can be changed easily without affecting the other layers.
 It is very easy to perform debugging and system verification.
Disadvantages:
 In this structure, the application’s performance is degraded as compared to simple structure.
 It requires careful planning for designing the layers, as the higher layers use the
functionalities of only the lower layers.

Building and booting an operating system:


 Building an operating system involves creating the software that manages hardware resources
and allows application programs to run.
 Booting, on the other hand, is the process of starting a computer and loading the operating
system into memory, which allows the computer to begin functioning.
 These two processes are distinct but interconnected, with the building process preparing the
software and the booting process bringing it to life.
Building an Operating System:
 Core Components: The operating system is typically built from a kernel, which manages the hardware, and system programs that provide services to applications.
 Development Tools: Various tools and programming languages are used to create an OS, including C, C++, and assembly language.
 Modular Design: Some operating systems are built with modular designs, allowing for flexibility and adaptability.

 System Generation: The process of creating a specific operating system version tailored to a particular hardware configuration or user environment is called system generation.
 Testing and Debugging: Rigorous testing and debugging are crucial to ensure the OS functions correctly and reliably.
Booting an Operating System:
 Power-on Self-Test (POST): When a computer is powered on, the BIOS or UEFI firmware runs a diagnostic routine to check hardware components.
 Boot Device Selection: The BIOS/UEFI determines the order in which boot devices (like hard drives or USB drives) will be checked for bootable software.
 Bootloader: A special program, whose first stage is often stored in ROM, loads the operating system kernel into memory.
 Kernel Initialization: The kernel initializes the hardware and prepares the system for running applications.
 System Startup: Once the kernel is loaded and initialized, the OS starts up, and the user can interact with the system.
Connecting Building and Booting:
 The boot process is essentially how the built operating system is loaded and made functional on the computer.
 The bootloader acts as a bridge between the hardware and the operating system, allowing the OS to load and run.
 The built OS provides the software that manages the hardware and allows applications to run, and the booting process
makes it available for use.

Debugging :
In an operating system (OS), debugging is the process of identifying, isolating, and resolving errors or
bugs within the OS code or related components. It's a critical task in software development to ensure
the OS functions correctly, reliably, and efficiently. This process involves using various tools and
techniques to trace the source of an issue, understand how it manifests, and then correct the
underlying problem.
Here's a more detailed look:
 Identifying Errors:
Debugging begins with identifying the presence of an error, which could be a crash, unexpected
behavior, or performance issue.
 Isolating the Cause:
Once an error is identified, the focus shifts to pinpointing the specific part of the OS code or a
related component where the issue originates.
 Using Debugging Tools:

Various tools are employed to assist in this process. These include debuggers, log files, memory
dump analysis, and static/dynamic analysis tools.

 Resolving the Issue:


After the root cause is identified, the next step involves fixing the bug, which might involve
modifying the code, updating system configurations, or even patching hardware.
 Testing and Verification:
After a fix, the change needs to be thoroughly tested to ensure the error is resolved and that the fix
doesn't introduce new issues.
 Debugging in an OS can be a challenging task due to the complexity of the system, the
potential for interactions between different components, and the need for efficient and reliable
operation.
 It's a crucial step in the software development process to ensure the OS meets its intended
functionality and performance requirements.

UNIT - II
Processes: Process Concept, Process scheduling, Operations on processes, Inter-process
communication.
Threads and Concurrency: Multithreading models, Thread libraries, Threading issues.
CPU Scheduling: Basic concepts, Scheduling criteria, Scheduling algorithms, Multiple
processor scheduling.

PROCESS CONCEPT:

Process State :
As a process executes, it changes states
 new: The process is being created
 running: Instructions are being executed
 waiting: The process is waiting for some event to occur
 ready: The process is waiting to be assigned to a processor
 terminated: The process has finished execution.
Diagram of Process State Or Lifecycle of a Process:

Process Control Block (PCB)


•A process control block (PCB) is a data structure used by computer operating systems to store all the
information about a process.
 When a process is created (initialized or installed), the operating system creates a
corresponding process control block.
 Process control block includes CPU scheduling, I/O resource management, file management
information etc.

 If that process gets suspended, the contents of the registers are saved on a stack and the
pointer to the particular stack frame is stored in the PCB.
 By this technique, the hardware state can be restored so that the process can be scheduled to
run again.
 A Process Control Block is a data structure maintained by the Operating System for every
process.
 The PCB is identified by an integer process ID (PID).
 A PCB keeps all the information needed to keep track of a process as listed below in the table
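The PCB can be sketched as a simple record holding the information described above (PID, process state, program counter, saved registers, and file-management information), with `suspend`/`resume` mimicking how the hardware state is saved and restored. This is a toy illustration; all field names are hypothetical, not from any real kernel:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """A toy Process Control Block; field names are illustrative only."""
    pid: int                                        # integer process ID (PID)
    state: str = "new"                              # new/ready/running/waiting/terminated
    program_counter: int = 0                        # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved CPU registers
    open_files: list = field(default_factory=list)  # file-management information

def suspend(pcb: PCB, current_registers: dict) -> None:
    """On suspension, save the register contents into the PCB."""
    pcb.registers = dict(current_registers)
    pcb.state = "waiting"

def resume(pcb: PCB) -> dict:
    """Restore the saved hardware state so the process can be scheduled again."""
    pcb.state = "running"
    return pcb.registers
```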

TYPES OF PROCESS SCHEDULER:


1. LONG TERM SCHEDULER
2. MEDIUM TERM SCHEDULER
3. SHORT TERM SCHEDULER.

Inter process Communication:


 Inter process communication is the mechanism provided by the operating system that allows processes to
communicate with each other.
 This communication could involve a process letting another process know that some event has occurred or
the transferring of data from one process to another.
A diagram that illustrates inter process communication is as follows −

Synchronization in Inter process Communication:


 Synchronization is a necessary part of inter process communication.
 It is either provided by the inter process control mechanism or handled by the communicating
processes.
 Some of the methods to provide synchronization are as follows −
 Semaphore
A semaphore is a variable that controls the access to a common resource by multiple processes. The
two types of semaphores are binary semaphores and counting semaphores.

 Mutual Exclusion
Mutual exclusion requires that only one process thread can enter the critical section at a time. This is
useful for synchronization and also prevents race conditions.
 Barrier
A barrier does not allow individual processes to proceed until all the processes reach it. Many parallel languages
and collective routines impose barriers.
 Spinlock
This is a type of lock. The processes trying to acquire this lock wait in a loop while checking if the lock is
available or not. This is known as busy waiting because the process is not doing any useful operation even
though it is active.
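A binary semaphore enforcing mutual exclusion can be sketched with Python's `threading` module: only one thread at a time may enter the critical section, so the final count is deterministic. The thread and iteration counts are arbitrary illustration values:

```python
import threading

counter = 0
mutex = threading.Semaphore(1)   # binary semaphore guarding the shared counter

def increment(times: int) -> None:
    """Each pass through the critical section updates the shared counter."""
    global counter
    for _ in range(times):
        mutex.acquire()          # wait (P) operation
        counter += 1             # critical section
        mutex.release()          # signal (V) operation

def run_demo(n_threads: int = 4, times: int = 10_000) -> int:
    global counter
    counter = 0
    workers = [threading.Thread(target=increment, args=(times,))
               for _ in range(n_threads)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return counter
```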
Approaches to Inter process Communication:
The different approaches to implement inter process communication are given as follows −
 Pipe
A pipe is a data channel that is unidirectional. Two pipes can be used to create a two-way data
channel between two processes. This uses standard input and output methods. Pipes are used in all
POSIX systems as well as Windows operating systems.
 Socket
The socket is the endpoint for sending or receiving data in a network. This is true for data sent
between processes on the same computer or data sent between different computers on the same
network. Most of the operating systems use sockets for interprocess communication.
 File
A file is a data record that may be stored on a disk or acquired on demand by a file server. Multiple
processes can access a file as required. All operating systems use files for data storage.
 Signal
Signals are useful in inter process communication in a limited way. They are system messages that are
sent from one process to another. Normally, signals are not used to transfer data but are used for
remote commands between processes.
 Shared Memory
Shared memory is the memory that can be simultaneously accessed by multiple processes. This is
done so that the processes can communicate with each other. All POSIX systems, as well as Windows
operating systems use shared memory.
 Message Queue
Multiple processes can read and write data to the message queue without being connected to each
other. Messages are stored in the queue until their recipient retrieves them. Message queues are quite
useful for inter process communication and are used by most operating systems.
A diagram that demonstrates message queue and shared memory methods of inter process communication is as
follows
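A minimal sketch of the message-queue approach, using Python's `multiprocessing.Queue` as a portable stand-in for an OS message queue (message contents are arbitrary): messages placed on the queue by one process remain there until another process retrieves them, even after the sender has exited.

```python
from multiprocessing import Process, Queue

def producer(q: Queue) -> None:
    """Writer process: deposit three messages on the queue, then exit."""
    for i in range(3):
        q.put(f"msg-{i}")

def consume_all() -> list:
    q = Queue()
    p = Process(target=producer, args=(q,))
    p.start()
    p.join()                      # the producer has exited; its messages stay queued
    return [q.get() for _ in range(3)]
```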

Different Process Operations in Operating System:


The operations on a process occur when the process is in a particular state or when the process makes a transition from one state to another. There are mainly five operations on a process −
 Process Creation
 Process Dispatch
 Process Pre-emption
 Process Blocking
 Process Termination
1. Process Creation:
 The first operation that a process undergoes when it enters a system is process creation.
 It involves formation of a process and is associated with “New” state of the process.
 Process creation occurs due to any of the following events .
 System initialization − When the computer is started, a number of system processes and background
processes are created.
 User request − A user may start executing a program leading to creation of a process.
 Child process system call − A running process may create a child process by process creation system
call.
 Batch system − The batch system may initiate batch jobs.
2. Process Dispatch
 The event or activity in which the state of the process is changed from ready to run.
 It means the operating system puts the process from the ready state into the running state.
 Dispatching is done by the operating system when the resources are free or the process has higher
priority than the ongoing process.
 There are various other cases in which the process in the running state is preempted and the process
in the ready state is dispatched by the operating system.
3. Blocking
 When a process invokes an input-output system call that blocks it, the operating system puts the process in block mode.
 Block mode is basically a mode where the process waits for input-output.
 Hence on the demand of the process itself, the operating system blocks the process and dispatches
another process to the processor.
 Hence, in process-blocking operations, the operating system puts the process in a ‘waiting’ state.
4. Preemption
 When a timeout occurs that means the process hadn’t been terminated in the allotted time interval
and the next process is ready to execute, then the operating system preempts the process.
 This operation is only valid where CPU scheduling supports preemption.
 Basically, this happens in priority scheduling where on the incoming of high priority process the
ongoing process is preempted.
 Hence, in process preemption operation, the operating system puts the process in a ‘ready’ state.
5. Process Termination
 Process termination is the activity of ending the process. In other words, process termination is the
relaxation of computer resources taken by the process for the execution.
 Like creation, in termination also there may be several events that may lead to the process of
termination. Some of them are:
 The process completes its execution fully and it indicates to the OS that it has finished.
 The operating system itself terminates the process due to service errors.
 There may be a problem in hardware that terminates the process.
THREAD:
 A thread is a path of execution composed of a program counter, thread id, stack, and set of registers within the process.
 A process has a single thread of control, with one program counter and one sequence of instructions carried out at any given time.

 By dividing an application or a program into multiple sequential threads that run in quasi-parallel, the programming model becomes simpler.
 Thread has the ability to share an address space and all of its data among themselves.
 This ability is essential for some specific applications.
 Threads are lighter weight than processes, and they are faster to create and destroy than processes.

Types of Threads:
There are two main types of threads, User Level Threads and Kernel Level Threads; let's discuss each one
in detail:
User Level Thread (ULT):
User Level Thread is implemented in the user level library, they are not created using the system calls.
Thread switching does not need to call OS and to cause interrupt to Kernel. Kernel doesn’t know about the
user level thread and manages them as if they were single-threaded processes.
Advantages of ULT
 Can be implemented on an OS that doesn't support multithreading.
 Simple representation, since a thread has only a program counter, register set and stack space.
 Simple to create since no intervention of kernel.
 Thread switching is fast since no OS calls need to be made.
Disadvantages of ULT
 No or less co-ordination among the threads and Kernel.
 If one thread causes a page fault, the entire process blocks.
Kernel Level Thread (KLT)
The kernel knows about and manages the threads. Instead of a thread table in each process, the kernel itself
has a master thread table that keeps track of all the threads in the system. In addition, the kernel maintains the
traditional process table to keep track of the processes. The OS kernel provides system calls to create and manage
threads.
Advantages of KLT
 Since the kernel has full knowledge about the threads in the system, the scheduler may decide to give more time
to processes having a large number of threads.
 Good for applications that frequently block.
Disadvantages of KLT
 Slow and inefficient.
 It requires thread control block so it is an overhead.
Thread Library:
 A thread library provides the programmer with an Application program interface for creating and managing
thread.
Ways of implementing thread library:
There are two primary ways of implementing thread library, which are as follows −
 The first approach is to provide a library entirely in user space with no kernel support. All code and data structures for the
library exist in user space, so invoking a library function results in a local function call in user space and not a system call.
 The second approach is to implement a kernel level library supported directly by the operating system. In this case the
code and data structures for the library exist in kernel space.
 Invoking a function in the application program interface for the library typically results in a system call to the kernel.

The main thread libraries which are used are given below −
 POSIX threads − Pthreads, the threads extension of the POSIX standard, may be provided as either a user level or
a kernel level library.
 WIN 32 thread − The windows thread library is a kernel level library available on windows systems.
 JAVA thread − The JAVA thread API allows threads to be created and managed directly as JAVA programs.
Multi-Threading Models:
 Multithreading allows the execution of multiple parts of a program at the same time.
 These parts are known as threads and are lightweight processes available within the process.
 Therefore, multithreading leads to maximum utilization of the CPU by multitasking.
The main models for multithreading are:
1. One to One model
2. Many to One model
3. Many to Many model
One to One Model:
 The one to one model maps each of the user threads to a kernel thread.
 This means that many threads can run in parallel on multiprocessors and other threads can run when one
thread makes a blocking system call.
 A disadvantage of the one to one model is that the creation of a user thread requires a corresponding
kernel thread.
 Since a lot of kernel threads burden the system, there is restriction on the number of threads in the
system.
Many to One Model:
 The many to one model maps many of the user threads to a single kernel thread.
 This model is quite efficient as the user space manages the thread management.
 A disadvantage of the many to one model is that a thread blocking system call blocks the entire
process. Also, multiple threads cannot run in parallel as only one thread can access the kernel at a time.
Many to Many Model:
 The many to many model maps many of the user threads to an equal or smaller number of kernel threads.
 The number of kernel threads depends on the application or machine.
 The many to many model does not have the disadvantages of the one to one model or the many to one model.
 There can be as many user threads as required, and their corresponding kernel threads can run in
parallel on a multiprocessor.
Threading Issues in OS:
1. System Call
2. Thread Cancellation
3. Signal Handling
4. Thread Pool
5. Thread Specific Data
System Call:
 The threading issue here concerns the fork() and exec() system calls.
 fork() creates a new process that is a duplicate of the calling (parent) process, while the parent
continues to exist.
 exec() does not create a new process; it replaces the entire process image, including all its threads,
with the program specified in its parameters.
 Some UNIX systems therefore have two variants of fork(): one duplicates all threads of the parent
process in the child, and the other duplicates only the thread that invoked fork().
 Which version of fork() to use depends on the application.
 Ordinarily, exec() is called immediately after fork(). In that case, duplicating all the threads of the
parent process in the child is superfluous, because exec() will overwrite the whole process with the
program given in its arguments.
 This means that in such cases a fork() which replicates only the invoking thread will do; if the child
does not call exec() after forking, duplicating all of the parent's threads may be appropriate.
Thread Cancellation
 The process of prematurely aborting an active thread during its run is called ‘thread cancellation’.
 So, let’s take a look at an example to make sense of it.
 Suppose, there is a multithreaded program whose several threads have been given the right to scan a
database for some information.
 The other threads however will get canceled once one of the threads happens to return with the
necessary results.
The target thread is the thread that is to be canceled. Thread cancellation can be done in two ways:
 Asynchronous Cancellation: One thread immediately terminates the target thread.
 Deferred Cancellation: The target thread periodically checks whether it should be canceled,
allowing it to terminate itself in an orderly fashion.
Signal Handling
In a single-threaded program, a signal is simply delivered to the process. In a multithreaded
program, however, the question is to which thread of the process the signal should be delivered.
The signal may be delivered to:
 every thread of the process,
 certain designated threads of the process, or
 the specific thread to which the signal applies.
THREAD POOL:
 Without a pool to reuse existing threads, a program that creates a new thread for every request can
consume excessive resources, including CPU cycles and memory.
 Unbounded thread creation leads to high context-switching overhead and potential system
slowdowns or crashes.
 This is especially problematic when a large number of concurrent requests trigger rapid thread
creation.
 A thread pool avoids this by creating a fixed number of threads at start-up and reusing them to
service incoming requests.
Thread Specific Data:
 Thread-specific data allows each thread to possess a separate copy of data, ensuring isolated data
and independent threads.
 Issues with thread-specific data arise when multiple threads within the same process need to
maintain their own unique data.
 If this is not managed properly, it can lead to accidental data corruption or unexpected behavior
due to concurrent access to shared data without proper synchronization mechanisms.
 Essentially, the challenge is ensuring each thread has its own private data space while still
allowing necessary communication between threads.
What is CPU scheduling?
 CPU Scheduling is the process of allowing one process to use the CPU while another process is
delayed due to the unavailability of a resource such as I/O, thus making full use of the CPU.
 In short, CPU scheduling decides the order and priority in which processes run and allocates CPU
time based on parameters such as CPU usage, throughput, turnaround time, waiting time, and
response time.
 The purpose of CPU scheduling is to make the system more efficient, faster, and fairer.
Criteria of CPU Scheduling
CPU scheduling criteria, such as turnaround time, waiting time, and throughput, are essential metrics used to
evaluate the efficiency of scheduling algorithms.
CPU scheduling has several criteria; some of them are mentioned below.
1. CPU utilization
The main objective of any CPU scheduling algorithm is to keep the CPU as busy as possible. Theoretically,
CPU utilization can range from 0 to 100 percent, but in a real system it varies from 40 to 90 percent depending
on the load upon the system.
2. Throughput
A measure of the work done by the CPU is the number of processes being executed and completed per unit of
time. This is called throughput. The throughput may vary depending on the length or duration of the
processes.
3. Turnaround Time
For a particular process, an important criterion is how long it takes to execute that process. The time elapsed
from the time of submission of a process to the time of completion is known as the turnaround time. Turn-
around time is the sum of times spent waiting to get into memory, waiting in the ready queue, executing in
CPU, and waiting for I/O.
Turn Around Time = Completion Time – Arrival Time.
4. Waiting Time
A scheduling algorithm does not affect the time required to complete the process once it starts
execution. It only affects the waiting time of a process i.e. time spent by a process waiting in
the ready queue.
Waiting Time = Turnaround Time – Burst Time.
5. Response Time
In an interactive system, turn-around time is not the best criterion. A process may produce
some output fairly early and continue computing new results while previous results are being
output to the user. Thus another criterion is the time taken from the submission of a request until
the first response is produced. This measure is called response time.
Response Time = CPU Allocation Time (when the CPU was allocated for the first time) – Arrival
Time
6. Completion Time
The completion time is the time when the process stops executing, which means that the
process has completed its burst time and is completely executed.
7. Priority
If the operating system assigns priorities to processes, the scheduling mechanism should favor
the higher-priority processes.
8. Predictability
A given process always should run in about the same amount of time under a similar system
load.
MULTIPROCESSOR SCHEDULING:
 In multiple-processor scheduling, a system with many processors that share the same
memory, bus, and input/output devices is referred to as a multiprocessor.
 The bus links all of the computer's other parts, including the RAM and I/O devices, to
the processors.
Types of Multiprocessor Scheduling Algorithms:
Operating systems utilize a range of multiprocessor scheduling algorithms. Among the most typical types are −
Round-Robin Scheduling − The round-robin scheduling algorithm allocates a time quantum to each CPU
and configures processes to run in a round-robin fashion on each processor. Since it ensures that each process
gets an equivalent amount of CPU time, this strategy might be useful in systems wherein all programs have the
same priority.
Priority Scheduling − Processes are given levels of priority in this method, and those with greater priorities are
scheduled to run first. This technique might be helpful in systems where some jobs, like real-time tasks, call for
a higher priority.
Scheduling with the shortest job first (SJF) − This algorithm schedules tasks according to how long they should
take to complete. It is planned for the shortest work to run first, then the next smallest job, and so on. This
technique can be helpful in systems with lots of quick processes since it can shorten the typical response time.
Fair-share scheduling − In this technique, the number of processors and the priority of each process determine how
much time is allotted to each. As it ensures that each process receives a fair share of processing time, this technique
might be helpful in systems with a mix of long and short processes.
Earliest deadline first (EDF) scheduling − Each process in this algorithm is given a deadline, and the process with
the earliest deadline is the one that will execute first. In systems with real-time activities that have stringent deadlines,
this approach can be helpful.
Scheduling using a multilevel feedback queue (MLFQ) − In a multilevel feedback queue, processes
are given a range of priority levels and are able to move up or down the priority levels based on their behavior. This
strategy might be useful in systems with a mix of short and long processes.
UNIT – III

Synchronization Tools: The Critical Section Problem, Peterson’s Solution, Mutex Locks,
Semaphores, Monitors, Classic problems of Synchronization.
Deadlocks: system Model, Deadlock characterization, Methods for handling Deadlocks, Deadlock
prevention, Deadlock avoidance, Deadlock detection, Recovery from Deadlock.
 Process Synchronization is used in a computer system to ensure that multiple processes or
threads can run concurrently without interfering with each other.
 The main objective of process synchronization is to ensure that multiple processes access
shared resources without interfering with each other and to prevent the possibility of
inconsistent data due to concurrent access.
 To achieve this, various synchronization techniques such as semaphores, monitors, and
critical sections are used.
 In a multi-process system, synchronization is necessary to ensure data consistency and
integrity, and to avoid the risk of deadlocks and other synchronization problems.
 Process synchronization is an important aspect of modern operating systems, and it plays a
crucial role in ensuring the correct and efficient functioning of multi-process systems.
Process Synchronization is a technique which is used to coordinate the process that use shared Data. There are two types of Processes in an
Operating Systems:-
1. Independent Process – A process that does not affect and is not affected by any other process during its execution is called an
Independent Process. Example: a process that does not share any variable, database, file, etc.
2. Cooperating Process – A process that affects or is affected by other processes during its execution is called a Cooperating Process.
Example: processes that share files, variables, databases, etc. are Cooperating Processes.
Critical section:
A critical section is a part of a program where shared resources like memory or files are accessed by multiple processes or threads. To
avoid issues like data inconsistency or race conditions, synchronization techniques ensure that only one process or thread uses the critical
section at a time.
 The critical section contains shared variables or resources that need to be synchronized to maintain the consistency of data variables.
 In simple terms, a critical section is a group of instructions/statements or regions of code that need to be executed atomically, such as
accessing a resource (file, input or output port, global data, etc.) In concurrent programming, if one process tries to change the value of
shared data at the same time as another thread tries to read the value (i.e., data race across threads), the result is unpredictable. The
access to such shared variables (shared memory, shared files, shared port, etc.) is to be synchronized.
Few programming languages have built-in support for synchronization. It is critical to understand the importance of race conditions while
writing kernel-mode code (a device driver, kernel thread, etc.), since the programmer can directly access and modify kernel data
structures.
Any code in the critical section should follow certain properties. A program with a critical section is structured
into the following parts:
 Entry Section – It is part of the process which decide the entry of a particular process in the Critical Section, out of many other
processes.
 Critical Section – It is the part in which only one process is allowed to enter and modify the shared variable. This part of the process
ensures that no other process can access the shared resource at the same time.
 Exit Section – This section allows the other processes that are waiting in the Entry Section to enter the Critical Section. It also ensures
that a process that has finished execution in the Critical Section is removed through this Exit Section.
 Remainder Section – The other parts of the Code other than Entry Section, Critical Section and Exit Section are known as Remainder
Section.
Critical Section problems must satisfy these three requirements:
1. Mutual Exclusion – It states that no other process is allowed to execute in the critical section if a process is executing in critical
section.
2. Progress – When no process is in the critical section, any process from outside that requests execution can enter the critical
section without indefinite delay. Only processes that have requested entry take part in this decision, and the decision must be made in finite time.
3. Bounded Waiting – An upper bound must exist on the number of times a process enters so that other processes are allowed to enter
their critical sections after a process has made a request to enter its critical section and before that request is granted.
Process Synchronization is handled by two approaches:
Types of Solutions to Critical Section Problem
There are two main types of solutions to the Critical Section Problem:
 Software Based
 Hardware Based
1. Software Approach – In the software approach, a specific algorithm is used to maintain synchronization of the
data. For two processes, a temporary variable (turn) or a boolean variable (flag) is used to store the state; while
the condition is true the process waits, which is known as the Busy Waiting state. Simple approaches of this kind
do not satisfy all the Critical Section requirements. A better software approach, known as Peterson's
Solution, uses two variables in the Entry Section to maintain consistency: a flag (boolean variable) and
a turn variable (storing whose turn it is). It satisfies all three Critical Section requirements.
Peterson’s Algorithm:
To handle the problem of Critical Section (CS) Peterson gave an algorithm with a bounded waiting.
• Suppose there are N processes (P1, P2, … PN) and each of them at some point needs to enter the Critical Section.
• A flag[] array of size N is maintained which is by default false and whenever a process need to enter the critical section it has to set its flag as
true, i.e. suppose Pi wants to enter so it will set flag[i]=TRUE.

36
ASSISTANT PROFESSOR OF CSE CH.SUNEETHA
R23 II-II(CSE) OPERATING SYSTEM

• There is another variable called turn which indicates the process number which is currently to enter into the CS. The process that enters into the
CS while exiting would change the turn to another number from among the list of ready processes.
2. Hardware Approach – Hardware synchronization can be done through the Lock & Unlock technique. Locking is done in the Entry Section, so that
only one process is allowed to enter the Critical Section; after it completes its execution, the process moves to the Exit Section, where the Unlock
operation is done so that another waiting process can enter. This is designed so that all three conditions of the Critical
Section are satisfied.
Using Interrupts –
These are easy to implement. When interrupts are disabled, no context switch can occur, so only one process at a time
executes in its critical section.
Test_and_Set Operation –
This uses a boolean value (True/False) as a hardware synchronization primitive that is atomic in nature, i.e. no
interrupt can interleave with it. It is mainly used in mutual exclusion applications. A similar operation can
be achieved through the Compare and Swap instruction. In this scheme, a variable may be accessed in the Critical
Section only while its lock is held; until then, other processes remain in the Busy Waiting state. Hence the Critical
Section requirements are achieved.
A solution to a process synchronization problem should meet three important criteria:
• Correctness: Data access synchronization and control synchronization should be performed in accordance
with synchronization requirements of the problem.
• Maximum concurrency: A process should be able to operate freely except when it needs to wait for other
processes to perform synchronization actions.
• No busy waits: To avoid performance degradation, synchronization should be performed through blocking
rather than through busy waits
Hardware Synchronization Algorithms : Unlock and Lock, Test and Set, Swap
There are three algorithms in the hardware approach of solving Process Synchronization problem:
Test and Set
Swap
Unlock and Lock
Hardware instructions in many operating systems help in the effective solution of critical section
problems.
Mutex Locks:
A mutex provides a locking mechanism and is different from a binary semaphore. It stands
for Mutual Exclusion Object. A mutex is mainly used to provide mutual exclusion for a specific portion
of code, so that only one process can execute that section of code at a particular
time. A mutex enforces strict ownership: only the thread that locks the mutex can unlock
it. It is specifically used for locking a resource to ensure that only one thread accesses it at a time.
Because of this strict ownership, a mutex is not typically used for signaling between threads; it
is used for mutual exclusion, ensuring that a resource is accessed by only one thread at a time.
 A mutex is an object.
 Mutex works upon the locking mechanism.
 Operations on mutex:
 Lock
 Unlock
 Mutex does not have any subtypes.
 A mutex can only be modified by the process that is requesting or releasing a resource.
 If the mutex is locked then the process needs to wait in the process queue, and mutex can only
be accessed once the lock is released.
Advantages of Mutex
 No race condition arises, as only one process is in the critical section at a time.
 Data remains consistent and it helps in maintaining integrity.
 It is a simple locking mechanism: the lock is acquired before entering a critical section and is
released while leaving it.
Disadvantages of Mutex
 If after entering into the critical section, the thread sleeps or gets preempted by a high-priority
process, no other thread can enter into the critical section. This can lead to starvation.
 When the previous thread leaves the critical section, then only other processes can enter into it,
there is no other mechanism to lock or unlock the critical section.
 Implementation of mutex can lead to busy waiting, which leads to the wastage of the CPU cycle.
Using Mutex
The producer-consumer problem: Consider the standard producer-consumer problem. Assume, we
have a buffer of 4096-byte length. A producer thread collects the data and writes it to the buffer. A
consumer thread processes the collected data from the buffer. The objective is, that both the threads
should not run at the same time.
Solution: A mutex provides mutual exclusion, either producer or consumer can have the key (mutex)
and proceed with their work. As long as the buffer is filled by the producer, the consumer needs to
wait, and vice versa. At any point in time, only one thread can work with the entire buffer. The
concept can be generalized using semaphore.
What is Semaphore?
 A semaphore is an integer.
 Semaphore uses a signaling mechanism.
 Operations on semaphore:
 Wait
 Signal
 Semaphore is of two types:
 Counting Semaphore
 Binary Semaphore
 Semaphore works with two atomic operations (Wait, Signal) which can modify it.
 If a process needs a resource and no resource is free, the process performs a wait operation
until the semaphore value is greater than zero.
A semaphore is a non-negative integer variable that is shared between various threads. Semaphore
works upon signaling mechanism, in this a thread can be signaled by another thread. It provides a
less restrictive control mechanism. Any thread can invoke signal() (also known as release() or up()),
and any other thread can invoke wait() (also known as acquire() or down()). There is no strict
ownership in semaphores, meaning the thread that signals doesn’t necessarily have to be the same
one that waited. Semaphores are often used for coordinating signaling between threads. Semaphore
uses two atomic operations for process synchronization:
 Wait (P)
 Signal (V)
Advantages of Semaphore
 With a counting semaphore, multiple threads can access instances of a resource at the same time.
 With a binary semaphore, only one process accesses the critical section at a time.
 Semaphores are machine-independent, so they can be implemented in the machine-independent
code of a microkernel.
 Flexible resource management.
Disadvantages of Semaphore
 It has priority inversion.
 Semaphore operations (Wait, Signal) must be implemented in the correct manner to avoid
deadlock.
 It leads to a loss of modularity, so semaphores can’t be used for large-scale systems.
 Semaphore is prone to programming error and this can lead to deadlock or violation of mutual
exclusion property.
 Operating System has to track all the calls to wait and signal operations.
Using Semaphore
The producer-consumer problem: Consider the standard producer-consumer problem. Assume, we
have a buffer of 4096-byte length. A producer thread collects the data and writes it to the buffer. A
consumer thread processes the collected data from the buffer. The objective is, both the threads
should not run at the same time.
Solution: A semaphore is a generalized mutex. Instead of a single buffer, we can split the 4 KB buffer
into four 1 KB buffers (identical resources). A semaphore can be associated with these four buffers.
The consumer and producer can work on different buffers at the same time.
Producer-Consumer Problem using Semaphores in OS:
The producer-consumer problem is a classic synchronization problem in operating systems (OS). It
involves two processes: a producer and a consumer, which share a common buffer. Semaphores can
be used to solve this problem.
Problem Statement:
1. Shared Buffer: The producer and consumer share a common buffer to store and retrieve items.
2. Producer: Produces items and adds them to the buffer.
3. Consumer: Consumes items from the buffer.
4. Synchronization: The producer and consumer must be synchronized to ensure that the buffer is
accessed correctly.
Semaphore Solution:
1. Full Semaphore: A semaphore that keeps track of the number of full slots in the buffer.
2. Empty Semaphore: A semaphore that keeps track of the number of empty slots in the buffer.
3. Mutex Semaphore: A semaphore that protects access to the buffer.
Producer Process:
1. Produce Item: Produce an item.
2. Wait for Empty Slot: Wait for an empty slot in the buffer using the empty semaphore.
3. Add Item to Buffer: Add the item to the buffer.
4. Signal Full Slot: Signal that a full slot is available using the full semaphore.
Consumer Process:
1. Wait for Full Slot: Wait for a full slot in the buffer using the full semaphore.
2. Remove Item from Buffer: Remove an item from the buffer.
3. Signal Empty Slot: Signal that an empty slot is available using the empty semaphore.
4. Consume Item: Consume the item.
Pseudocode:
// Producer Process
while (true) {
    // Produce item
    item = produce_item();
    // Wait for an empty slot
    wait(empty_semaphore);
    // Lock the buffer
    wait(mutex_semaphore);
    // Add item to buffer
    buffer.add(item);
    // Unlock the buffer
    signal(mutex_semaphore);
    // Signal a full slot
    signal(full_semaphore);
}

// Consumer Process
while (true) {
    // Wait for a full slot
    wait(full_semaphore);
    // Lock the buffer
    wait(mutex_semaphore);
    // Remove item from buffer
    item = buffer.remove();
    // Unlock the buffer
    signal(mutex_semaphore);
    // Signal an empty slot
    signal(empty_semaphore);
    // Consume item
    consume_item(item);
}
Benefits of Semaphore Solution:
1. Synchronization: Semaphores ensure that the producer and consumer are synchronized.
2. Mutual Exclusion: Semaphores ensure that only one process can access the buffer at a time.
3. Efficient: Semaphores provide an efficient solution to the producer-consumer problem.
Real-World Applications:
1. Print Queue: The producer-consumer problem can be applied to a print queue, where multiple
processes can add print jobs to the queue.
2. Network Buffer: The producer-consumer problem can be applied to a network buffer, where
multiple processes can send and receive data.
3. Database Transactions: The producer-consumer problem can be applied to database transactions,
where multiple processes can access and modify data.
Difference Between Mutex and Semaphore

Mutex:
 A mutex is an object.
 Mutex works upon the locking mechanism.
 Operations on mutex: Lock, Unlock.
 Mutex does not have any subtypes.
 A mutex can only be modified by the process that is requesting or releasing a resource.
 If the mutex is locked, the process waits in the process queue, and the mutex can only be
accessed once the lock is released.

Semaphore:
 A semaphore is an integer.
 Semaphore uses a signaling mechanism.
 Operations on semaphore: Wait, Signal.
 Semaphore is of two types: Counting Semaphore and Binary Semaphore.
 Semaphore works with two atomic operations (Wait, Signal) which can modify it.
 If a process needs a resource and none is free, the process performs a wait operation until the
semaphore value is greater than zero.

Monitors in Operating System:
 A monitor is a high-level synchronization construct that provides a way to synchronize access
to shared resources in an operating system (OS). A monitor is a combination of a mutex
(mutual exclusion) lock and a condition variable.
A monitor is a programming construct that allows only one process or thread to access a shared
resource at a time. It provides a way to synchronize access to shared resources, ensuring that only one
process or thread can access the resource at a time.
Components of a Monitor:
1. Mutex Lock: A mutex lock is used to protect the shared resource from simultaneous access by
multiple processes or threads.
2. Condition Variable: A condition variable is used to signal processes or threads that are waiting for a
specific condition to occur.
How Monitors Work:
1. Acquire Mutex Lock: A process or thread acquires the mutex lock to access the shared resource.
2. Check Condition: The process or thread checks the condition variable to see if the desired condition
has occurred.


3. Wait or Signal: If the condition has not occurred, the process or thread waits on the condition
variable. If the condition has occurred, the process or thread signals other processes or threads that are
waiting.
4. Release Mutex Lock: The process or thread releases the mutex lock when it is finished accessing
the shared resource.
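The four steps above can be sketched as a toy monitor built from Python's threading.Lock and threading.Condition (the CounterMonitor class and its method names are invented for illustration; they assume Mesa-style condition variables, which is what Python provides):

```python
import threading

class CounterMonitor:
    """Toy monitor: a shared counter that must never be driven below zero."""
    def __init__(self):
        self._lock = threading.Lock()                     # the monitor's mutex
        self._nonzero = threading.Condition(self._lock)   # its condition variable
        self._count = 0

    def increment(self):
        with self._lock:                  # 1. acquire mutex
            self._count += 1
            self._nonzero.notify()        # 3. signal a waiting thread
        # 4. mutex released automatically by the `with` block

    def decrement(self):
        with self._lock:                  # 1. acquire mutex
            while self._count == 0:       # 2. check condition (re-check on wake)
                self._nonzero.wait()      # 3. wait releases the mutex, then reacquires
            self._count -= 1
        # 4. mutex released

m = CounterMonitor()
t = threading.Thread(target=m.decrement)  # blocks until an increment arrives
t.start()
m.increment()
t.join()
final = m._count
```

The `while` loop around `wait()` is the hallmark of Mesa semantics: a signal is only a hint, so the waiter must re-check the condition after waking.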
Types of Monitors:
1. Hoare Monitor: when a thread signals a condition, the monitor is handed immediately to a
waiting thread, so the condition is guaranteed to still hold when the waiter resumes.
2. Mesa Monitor: a signal is only a hint; the signaled thread is moved to the ready queue and
must re-check the condition once it reacquires the monitor (hence the loop around wait).
Advantages of Monitors:
1. Synchronization: Monitors provide a way to synchronize access to shared resources.
2. Mutual Exclusion: Monitors ensure that only one process or thread can access a shared resource at
a time.
3. Efficient: Monitors provide an efficient way to manage shared resources.
Disadvantages of Monitors:
1. Complexity: Monitors can be complex to implement and use.
2. Overhead: Monitors can introduce overhead due to the need to acquire and release the mutex lock.
Real-World Applications of Monitors:
1. Database Systems: Monitors can be used to synchronize access to shared data in database systems.
2. Operating Systems: Monitors can be used to synchronize access to shared resources in operating
systems.
3. Embedded Systems: Monitors can be used to synchronize access to shared resources in embedded
systems.
Classical Problems of Synchronization
 Semaphores can be used in synchronization problems besides mutual exclusion.
 Below are some classical problems that illustrate the difficulties of process synchronization in systems with cooperating processes.
We will discuss the following three problems:
1. Bounded Buffer (Producer-Consumer) Problem
2. Dining Philosophers Problem
3. The Readers-Writers Problem

1.Bounded Buffer Problem using Semaphores:
The Bounded Buffer Problem is a classic synchronization problem that can be solved using
semaphores. Here's a solution:
Problem Statement:
1. Shared Buffer: A shared buffer with a finite capacity (e.g., 5 slots).
2. Producer Process: Produces items and adds them to the buffer.
3. Consumer Process: Consumes items from the buffer.
4. Synchronization: The producer and consumer processes must be synchronized to prevent buffer
overflow or underflow.
Semaphore Solution:
1. Empty Semaphore (E): A semaphore that keeps track of the number of empty slots in the buffer.
Initialized to the buffer size (e.g., 5).
2. Full Semaphore (F): A semaphore that keeps track of the number of full slots in the buffer.
Initialized to 0.
3. Mutex Semaphore (M): A semaphore that protects access to the buffer. Initialized to 1.


Producer Process:
1. Produce Item: Produce an item.
2. Wait for Empty Slot: Wait for an empty slot in the buffer using the empty semaphore (E). wait(E)
3. Acquire Mutex: Acquire the mutex semaphore (M) to protect access to the buffer. wait(M)
4. Add Item to Buffer: Add the item to the buffer.
5. Signal Full Slot: Signal that a full slot is available using the full semaphore (F). signal(F)
6. Release Mutex: Release the mutex semaphore (M). signal(M)
Consumer Process:
1. Wait for Full Slot: Wait for a full slot in the buffer using the full semaphore (F). wait(F)
2. Acquire Mutex: Acquire the mutex semaphore (M) to protect access to the buffer. wait(M)
3. Remove Item from Buffer: Remove an item from the buffer.
4. Signal Empty Slot: Signal that an empty slot is available using the empty semaphore (E). signal(E)
5. Release Mutex: Release the mutex semaphore (M). signal(M)
6. Consume Item: Consume the item.
Pseudocode:
// Producer Process
while (true) {
    item = produce_item();   // Produce item
    wait(E);                 // Wait for an empty slot
    wait(M);                 // Acquire mutex
    buffer.add(item);        // Add item to buffer
    signal(F);               // Signal a full slot
    signal(M);               // Release mutex
}

// Consumer Process
while (true) {
    wait(F);                 // Wait for a full slot
    wait(M);                 // Acquire mutex
    item = buffer.remove();  // Remove item from buffer
    signal(E);               // Signal an empty slot
    signal(M);               // Release mutex
    consume_item(item);      // Consume item
}
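The pseudocode above translates almost line-for-line into runnable Python, assuming threading.Semaphore for E and F and a plain Lock for M (the buffer contents, thread counts, and item values are illustrative):

```python
import threading
from collections import deque

CAPACITY = 5
buffer = deque()
E = threading.Semaphore(CAPACITY)   # empty slots, initialized to buffer size
F = threading.Semaphore(0)          # full slots, initialized to 0
M = threading.Lock()                # mutex protecting the buffer

consumed = []

def producer(items):
    for item in items:
        E.acquire()                 # wait(E): wait for an empty slot
        with M:                     # wait(M) ... signal(M)
            buffer.append(item)
        F.release()                 # signal(F): one more full slot

def consumer(n):
    for _ in range(n):
        F.acquire()                 # wait(F): wait for a full slot
        with M:
            item = buffer.popleft()
        E.release()                 # signal(E): one more empty slot
        consumed.append(item)

p = threading.Thread(target=producer, args=(list(range(10)),))
c = threading.Thread(target=consumer, args=(10,))
c.start(); p.start()
p.join(); c.join()
```

Even though ten items flow through a five-slot buffer, the semaphores guarantee the producer never overfills it and the consumer never reads an empty slot.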


Deadlocks in OS:
A deadlock is a situation in a computer system where two or more
processes are blocked indefinitely, each waiting for the other to release a
resource. This creates a circular wait condition, where none of the
processes can proceed because they are all waiting for each other.
Deadlock System model:
 A deadlock is a situation in a computer system where two or more processes are blocked
indefinitely, each waiting for the other to release a resource.
 A deadlock occurs when a set of processes is stalled because each process is holding a
resource and waiting for another process to acquire another resource.
 For example, Process 1 may hold Resource 1 while waiting for Resource 2, and Process 2 may
hold Resource 2 while waiting for Resource 1.


The system model of deadlock consists of the following components:


1. Processes: These are the active entities that request and release resources.
2. Resources: These are the passive entities that are requested and released by processes.
3. Resource Allocation: This is the process of assigning resources to processes.
Deadlock Conditions:
For a deadlock to occur, the following conditions must be met:
1. Mutual Exclusion: Two or more processes must be competing for a common resource that
cannot be used simultaneously.
2. Hold and Wait: A process must be holding a resource and waiting for another resource, which is
held by another process.
3. No Preemption: The operating system must not be able to preempt one process and give the
resource to another process.
4. Circular Wait: A process must be waiting for a resource that is held by another process, which is
waiting for a resource held by the first process.
Deadlock Example:

Suppose we have two processes, P1 and P2, and two resources, R1 and R2.
In this example, P1 is holding R1 and waiting for R2, which is held by P2. P2 is holding R2 and
waiting for R1, which is held by P1. This is a deadlock situation.
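This circular wait can be reproduced safely in Python: each thread holds one lock and then only attempts the other, so the demo terminates instead of hanging. The two barriers and all names here are illustrative scaffolding, not part of any standard deadlock API:

```python
import threading

R1, R2 = threading.Lock(), threading.Lock()
both_hold = threading.Barrier(2)    # both threads hold their first resource
both_tried = threading.Barrier(2)   # both have attempted the second resource
outcome = {}

def proc(first, second, key):
    with first:                     # hold one resource (Hold and Wait)...
        both_hold.wait()            # ...guaranteed: the other thread holds its lock too
        # ...then try to grab the other thread's resource (Circular Wait).
        outcome[key] = second.acquire(blocking=False)
        both_tried.wait()           # keep holding until both attempts are done

t1 = threading.Thread(target=proc, args=(R1, R2, "p1_got_r2"))
t2 = threading.Thread(target=proc, args=(R2, R1, "p2_got_r1"))
t1.start(); t2.start(); t1.join(); t2.join()
# With blocking acquires this would be a true deadlock; the non-blocking
# attempts fail for both threads, exhibiting the circular wait.
```

Both acquisition attempts are guaranteed to fail, because each thread is holding the very lock the other one wants.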
Operations:
In normal operation, a process must request a resource before using it and release it when finished:
Request –
If the request cannot be granted immediately, the process must wait until the required resource(s)
become available. Requests are made through calls such as open(), malloc(), new(), and request().
Use –
The process makes use of the resource, such as printing to a printer or reading from a file.


Release –
The process relinquishes the resource, allowing it to be used by other processes.
Deadlock Characterization :
Deadlock characterization is the process of identifying the necessary and sufficient conditions for a
deadlock to occur in a computer system. These conditions are often referred to as the "deadlock
characterization".
Necessary Conditions for Deadlock:
The following conditions are necessary for a deadlock to occur:
1. Mutual Exclusion: Two or more processes must be competing for a common resource that
cannot be used simultaneously.
2. Hold and Wait: A process must be holding a resource and waiting for another resource, which is
held by another process.
3. No Preemption: The operating system must not be able to preempt one process and give the
resource to another process.
4. Circular Wait: A process must be waiting for a resource that is held by another process, which is
waiting for a resource held by the first process.
Implications of Deadlock Characterization:
Understanding the deadlock characterization theorem has several implications:
1. Deadlock Prevention: This involves preventing deadlocks from occurring in the first place.
2. Deadlock Detection and Recovery: This involves detecting deadlocks and recovering from
them.
3. Deadlock Avoidance: This involves avoiding deadlocks by carefully allocating resources.
Methods for Handling Deadlocks:
 Deadlocks can be handled using several methods, which can be broadly classified into three
categories:
1. Deadlock Prevention: This involves preventing deadlocks from occurring in the first place.
2. Deadlock Detection and Recovery: This involves detecting deadlocks and recovering from
them.
3. Deadlock Avoidance: This involves avoiding deadlocks by carefully allocating resources.
Deadlock Prevention :
 Deadlock prevention is a technique used to prevent deadlocks from occurring in a computer
system. This is achieved by ensuring that at least one of the necessary conditions for a
deadlock is not met.
Necessary Conditions for Deadlock:
For a deadlock to occur, the following conditions must be met:
1. Mutual Exclusion: Two or more processes must be competing for a common resource that
cannot be used simultaneously.
2. Hold and Wait: A process must be holding a resource and waiting for another resource, which is
held by another process.
3. No Preemption: The operating system must not be able to preempt one process and give the
resource to another process.
4. Circular Wait: A process must be waiting for a resource that is held by another process, which is
waiting for a resource held by the first process.
Deadlock Prevention Techniques:
To prevent deadlocks, we can use the following techniques:


1. Mutual Exclusion Prevention: Make resources sharable wherever possible (e.g., read-only files), so they need not be held exclusively.
2. Hold and Wait Prevention: Ensure that a process does not hold a resource and wait for another
resource.
3. No Preemption Prevention: Ensure that the operating system can preempt one process and give
the resource to another process.
4. Circular Wait Prevention: Ensure that a process does not wait for a resource that is held by
another process, which is waiting for a resource held by the first process.
5. Resource Ordering: Ensure that resources are always requested in a specific order.
6. Avoid Nested Locks: Avoid acquiring multiple locks simultaneously.
7. Use Lock Timeout: Use a lock timeout to prevent a process from holding a lock indefinitely.
8. Use Lock Queue: Use a lock queue to ensure that processes acquire locks in a specific order.
Examples of Deadlock Prevention:
1. Dining Philosopher Problem: This problem can be solved by ensuring that each philosopher picks
up the chopsticks in a specific order, preventing the circular wait condition.
2. Banker's Algorithm: This algorithm ensures that a process does not hold a resource and wait for
another resource, preventing the hold and wait condition.
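Resource ordering (technique 5) can be sketched in a few lines: every lock is given a global rank, and a helper acquires locks in rank order no matter what order a process requests them, so no cycle in the wait-for relation can form. The RANK table and helper names are invented for this sketch:

```python
import threading

R1, R2 = threading.Lock(), threading.Lock()
RANK = {id(R1): 1, id(R2): 2}        # global ordering over all resources

def acquire_in_order(*locks):
    """Acquire locks sorted by rank — circular wait becomes impossible."""
    for lock in sorted(locks, key=lambda l: RANK[id(l)]):
        lock.acquire()

def release_all(*locks):
    for lock in locks:
        lock.release()

order_seen = []

def worker(name, a, b):
    acquire_in_order(a, b)           # both workers take R1 before R2
    order_seen.append(name)          # critical section
    release_all(a, b)

t1 = threading.Thread(target=worker, args=("P1", R1, R2))
t2 = threading.Thread(target=worker, args=("P2", R2, R1))  # reversed request order
t1.start(); t2.start(); t1.join(); t2.join()
```

Even though P2 requests the locks in the opposite order, the helper normalizes the acquisition order, so both workers always finish.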
Advantages of Deadlock Prevention:
1. Prevents Deadlocks: Deadlock prevention ensures that deadlocks do not occur in the system.
2. Improves System Performance: By preventing deadlocks, the system can run more efficiently
and respond to user requests more quickly.
3. Reduces System Downtime: Deadlock prevention reduces the likelihood of system downtime
due to deadlocks.
Disadvantages of Deadlock Prevention:
1. Resource Underutilization: Deadlock prevention can lead to resource underutilization, as some
resources may be left idle to prevent deadlocks.
2. Increased Complexity: Deadlock prevention can add complexity to the system, as it requires
careful resource allocation and management.
Deadlock Avoidance:
Deadlock avoidance is a technique used to avoid deadlocks in a computer system. This is achieved by
carefully allocating resources and ensuring that the system never enters a deadlock state.
Deadlock Avoidance Techniques:
1. Resource Ordering: Ensure that resources are always requested in a specific order.
2. Avoid Nested Locks: Avoid acquiring multiple locks simultaneously.
3. Use Lock Timeout: Use a lock timeout to prevent a process from holding a lock indefinitely.
4. Use Lock Queue: Use a lock queue to ensure that processes acquire locks in a specific order.
5. Banker's Algorithm: Use the Banker's algorithm to ensure that a process does not hold a resource
and wait for another resource.
Banker's Algorithm:
The Banker's algorithm is a deadlock avoidance technique that ensures that a process does not hold a
resource and wait for another resource. The algorithm works as follows:
1. Available Resources: The system maintains a list of available resources.
2. Max Resources: Each process specifies the maximum number of resources it may need.
3. Current Resources: Each process specifies the current number of resources it is holding.
4. Need Resources: Each process specifies the number of resources it needs to complete its task.
5. Safe State: The system checks if the current state is safe by ensuring that there are enough
available resources to satisfy the needs of all processes.
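The five steps above amount to the Banker's safety check: repeatedly find a process whose remaining need fits in the available resources, pretend it finishes and returns its allocation, and see whether every process can eventually finish. The matrices below are an illustrative textbook-style instance, not data from this course:

```python
def is_safe(available, max_need, allocation):
    """Banker's safety algorithm: return (is_safe, safe_sequence)."""
    n = len(max_need)                       # number of processes
    m = len(available)                      # number of resource types
    work = list(available)
    need = [[max_need[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    finished = [False] * n
    sequence = []
    while len(sequence) < n:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):          # Pi runs to completion and
                    work[j] += allocation[i][j]   # returns its allocation
                finished[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:
            return False, []                # no process can finish: unsafe state
    return True, sequence

available  = [3, 3, 2]                                      # free instances of A, B, C
max_need   = [[7,5,3],[3,2,2],[9,0,2],[2,2,2],[4,3,3]]      # Max matrix
allocation = [[0,1,0],[2,0,0],[3,0,2],[2,1,1],[0,0,2]]      # Allocation matrix
safe, seq = is_safe(available, max_need, allocation)
```

For this instance the system is in a safe state: every process appears in the returned safe sequence.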


Advantages of Deadlock Avoidance:


1. Avoids Deadlocks: Deadlock avoidance ensures that the system never enters a deadlock state.
2. Improves System Performance: By avoiding deadlocks, the system can run more efficiently and
respond to user requests more quickly.
3. Reduces System Downtime: Deadlock avoidance reduces the likelihood of system downtime due to
deadlocks.
Disadvantages of Deadlock Avoidance:
1. Complexity: Deadlock avoidance can add complexity to the system, as it requires careful resource
allocation and management.
2. Resource Underutilization: Deadlock avoidance can lead to resource underutilization, as some
resources may be left idle to avoid deadlocks.
Deadlock Detection
Deadlock detection is a technique used to detect whether a deadlock has occurred in a computer
system. This is achieved by analyzing the system's state and checking for the presence of a deadlock.
Deadlock Detection Algorithms:
There are several deadlock detection algorithms, including:
1. Wait-for-Graph (WFG) Algorithm: This algorithm constructs a graph of processes and
resources, and checks for cycles in the graph.
2. Banker's Algorithm: This algorithm checks if the system is in a safe state by ensuring that there
are enough available resources to satisfy the needs of all processes.
3. Floyd's Cycle Detection Algorithm: This algorithm detects cycles in a graph by traversing the
graph and checking for revisited nodes.
Wait-for-Graph (WFG) Algorithm:
The WFG algorithm constructs a graph of processes, where:
1. Processes are represented as nodes.
2. An edge from process Pi to process Pj indicates that Pi is waiting for a resource currently
held by Pj (resource nodes are abstracted out of the graph).
The algorithm checks for cycles in the graph by traversing it and checking for revisited nodes.
If a cycle is detected, a deadlock is present.
Banker's Algorithm:
The Banker's algorithm checks if the system is in a safe state by ensuring that there are enough
available resources to satisfy the needs of all processes.
1. Available Resources: The system maintains a list of available resources.
2. Max Resources: Each process specifies the maximum number of resources it may need.
3. Current Resources: Each process specifies the current number of resources it is holding.
4. Need Resources: Each process specifies the number of resources it needs to complete its task.
The algorithm checks if the system is in a safe state by ensuring that there are enough available
resources to satisfy the needs of all processes.
Advantages of Deadlock Detection:
1. Detects Deadlocks: Deadlock detection algorithms can detect whether a deadlock has occurred in
the system.
2. Enables Recovery: Once a deadlock is detected, the system can take recovery actions to resolve the
deadlock.
3. Improves System Reliability: Deadlock detection improves system reliability by detecting and
recovering from deadlocks.


Disadvantages of Deadlock Detection:


1. Complexity: Deadlock detection algorithms can be complex to implement and require significant
system resources.
2. Overhead: Deadlock detection can introduce overhead in the system, potentially impacting
performance.
3. False Positives: Deadlock detection algorithms may produce false positives, indicating a deadlock
when none exists.
Recovery from Deadlocks:
Recovery from deadlocks involves terminating or aborting one or more processes involved in the
deadlock, and then restarting them. The goal is to break the circular wait condition and restore the
system to a safe state.
Methods for Recovery from Deadlocks:
1. Process Termination: Terminate one or more processes involved in the deadlock. This can be done
by aborting the process, killing the process, or restarting the process.
2. Resource Preemption: Preempt one or more resources from a process involved in the deadlock and
allocate it to another process.
3. Rollback Recovery: Roll back the system to a previous safe state by undoing all changes made
since the deadlock occurred.
4. Restart: Restart the system or the affected processes from a safe state.
Algorithms for Recovery from Deadlocks:
1. Abort-and-Restart Algorithm: Abort one or more processes involved in the deadlock and restart
them from the beginning.
2. Preemption Algorithm: Preempt one or more resources from a process involved in the deadlock and
allocate it to another process.
3. Rollback Algorithm: Roll back the system to a previous safe state by undoing all changes made
since the deadlock occurred.
Example of Recovery from Deadlocks:
Suppose we have two processes, P1 and P2, and two resources, R1 and R2.
| Process | Holds | Waits For |
| --- | --- | --- |
| P1 | R1 | R2 |
| P2 | R2 | R1 |
The system is in a deadlock state. To recover from the deadlock, we can terminate process P1, which
will release resources R1 and R2. Process P2 can then acquire resource R1 and continue execution.
Advantages of Recovery from Deadlocks:
1. Restores System to Safe State: Recovery from deadlocks restores the system to a safe state,
allowing normal execution to resume.
2. Minimizes Data Loss: Recovery from deadlocks minimizes data loss by restoring the system to a
previous safe state.
3. Improves System Availability: Recovery from deadlocks improves system availability by quickly
restoring the system to a safe state.


Disadvantages of Recovery from Deadlocks:


1. Complexity: Recovery from deadlocks can be complex, requiring careful analysis of the system
state.
2. Performance Overhead: Recovery from deadlocks can incur performance overhead, as the system
must be rolled back to a previous safe state.
3. Data Consistency: Recovery from deadlocks can lead to data consistency issues, as the system may
be rolled back to a previous state that is inconsistent with the current state.
Deadlock Detection:
If deadlocks cannot be avoided, another approach is to detect them and recover in some way.
Aside from the performance cost of constantly checking for deadlocks, a policy/algorithm for
recovering from deadlocks must be in place, and when processes must be aborted or have their
resources preempted, there is the possibility of lost work.
Recovery From Deadlock:
There are three basic approaches to recovering from a deadlock:
1. Inform the system operator and allow him/her to intervene manually.
2. Terminate one or more of the processes involved in the deadlock.
3. Preempt resources from one or more of the processes involved.
Approach of Recovery From Deadlock :
Here, we will discuss the approach of Recovery From Deadlock as follows.
Approach-1 :
Process Termination :
There are two basic approaches for recovering resources allocated to terminated processes as follows.
Stop all processes that are involved in the deadlock. This does break the deadlock, but at the expense
of terminating more processes than are absolutely necessary.
Processes should be terminated one at a time until the deadlock is broken. This method is more
conservative, but it necessitates performing deadlock detection after each step.
In the latter case, many factors can influence which process is terminated next:
1. The priority of the process.
2. How long the process has been running and how close it is to completion.
3. How many and what kind of resources the process holds (are they simple to preempt and restore?).
4. How many more resources the process needs in order to complete.
5. How many processes will have to be terminated.
6. Whether the process is batch or interactive.
Approach-2 :
Resource Preemption :
When allocating resources to break the deadlock, three critical issues must be addressed:
Selecting a victim –
Many of the decision criteria outlined above apply to determine which resources to preempt from
which processes.
Rollback –
A preempted process should ideally be rolled back to a safe state before the point at which that
resource was originally assigned to the process. Unfortunately, determining such a safe state can be
difficult or impossible, so the only safe rollback is to start from the beginning. (In other words, halt
and restart the process.)


Starvation –
How do we ensure that a process does not starve because its resources are constantly being
preempted? One option is to use a priority system and raise the priority of a process whenever
its resources are preempted; it will eventually gain a high enough priority that it is no
longer preempted.


UNIT - IV
Memory-Management Strategies: Introduction, Contiguous memory allocation, Paging,
Structure of the Page Table, Swapping.
Virtual Memory Management: Introduction, Demand paging, Copy-on-write, Page replacement,
Allocation of frames, Thrashing
Storage Management: Overview of Mass Storage Structure, HDD Scheduling.

What is Memory Management?


 Memory management mostly involves management of main memory.
 In a multiprogramming computer, the Operating System resides in a part of the main
memory, and the rest is used by multiple processes.
 The task of subdividing the memory among different processes is called Memory
Management.
 Memory management is a method in the operating system to manage operations
between main memory and disk during process execution.
 The main aim of memory management is to achieve efficient utilization of memory.

Why Memory Management is Required?


 Allocate and de-allocate memory before and after process execution.
 To keep track of used memory space by processes.
 To minimize fragmentation issues.
 To proper utilization of main memory.
 To maintain data integrity while executing of process.
Logical and Physical Address Space
 Logical Address Space: An address generated by the CPU is known as a “Logical
Address”. It is also known as a Virtual address. Logical address space can be defined as the
size of the process. A logical address can be changed.
 Physical Address Space: An address seen by the memory unit (i.e. the one loaded into the
memory address register of the memory) is commonly known as a “Physical Address”. A
Physical address is also known as a Real address. The set of all physical addresses
corresponding to these logical addresses is known as Physical address space. A physical
address is computed by MMU. The run-time mapping from virtual to physical addresses is


done by a hardware device Memory Management Unit(MMU). The physical address always
remains constant.

Static and Dynamic Loading


Loading a process into the main memory is done by a loader. There are two different types of
loading :
 Static Loading: Static Loading is basically loading the entire program into a fixed address.
It requires more memory space.
 Dynamic Loading: With static loading, the entire program and all data of a process must be in
physical memory for the process to execute, so the size of a process is limited to the size of
physical memory. To gain proper memory utilization, dynamic loading is used. In dynamic loading,
a routine is not loaded until it is called. All routines reside on disk in a relocatable load
format. One advantage of dynamic loading is that a routine that is never used is never loaded;
this is especially useful when large amounts of code are needed to handle infrequently occurring
cases, such as error routines.
Static and Dynamic Linking
To perform a linking task a linker is used. A linker is a program that takes one or more object
files generated by a compiler and combines them into a single executable file.
 Static Linking: In static linking, the linker combines all necessary program modules into a
single executable program. So there is no runtime dependency. Some operating systems
support only static linking, in which system language libraries are treated like any other
object module.
 Dynamic Linking: The basic concept of dynamic linking is similar to dynamic loading.
In dynamic linking, “Stub” is included for each appropriate library routine reference. A stub
is a small piece of code. When the stub is executed, it checks whether the needed routine is
already in memory or not. If not available then the program loads the routine into memory.
Memory Management Techniques
Memory management techniques are methods used by an operating system to efficiently
allocate, utilize, and manage memory resources for processes. These techniques ensure smooth
execution of programs and optimal use of system memory
Different Memory Management techniques are:

Implementation of Contiguous Memory Management Techniques:


Memory Management Techniques are basic techniques that are used in managing the
memory in the operating system.
 Memory Management Techniques are classified broadly into two categories:
 Contiguous


 Non-contiguous

What is Contiguous Memory Management?


 Contiguous memory allocation is a memory allocation strategy.
 As the name implies, we utilize this technique to assign contiguous blocks of memory
to each task.
 Thus, whenever a process asks to access the main memory, we allocate a continuous
segment from the empty region to the process based on its size.
 In this technique, memory is allotted in a continuous way to the processes.
Contiguous Memory Management has two types:
 Fixed(or Static) Partition
 Variable(or Dynamic) Partitioning.
1. Fixed Partition Scheme
 In the fixed partition scheme, memory is divided into fixed number of partitions. Fixed
means number of partitions are fixed in the memory.
 In the fixed partition, in every partition only one process will be accommodated.
 Degree of multi-programming is restricted by number of partitions in the memory.
 Maximum size of the process is restricted by maximum size of the partition.
 Every partition is associated with the limit registers.
 Limit Registers: It has two limit:
 Lower Limit: Starting address of the partition.
 Upper Limit: Ending address of the partition.

Internal Fragmentation: occurs in the fixed partition scheme when a process is smaller than its
partition, so the leftover space inside the partition is wasted. To overcome this problem, the
variable partition scheme is used instead of the fixed partition scheme.
Advantages of Fixed Partitioning memory management schemes:
 Simple to implement.
 Easy to manage and design.
Disadvantages of Fixed Partitioning memory management schemes:
 This scheme suffers from internal fragmentation.
 The number of partitions is specified at the time of system generation.
2. Variable Partition Scheme
 In the variable partition scheme, memory is initially one single contiguous free block.
 Whenever a request from a process arrives, a partition of the requested size is carved out
of the free memory.
 If smaller processes keep arriving, the larger free blocks are split into smaller partitions.
 Memory is divided into partitions according to process size, which varies from process to process.
 One partition is allocated to each active process.


External Fragmentation:
 It is found in variable partition scheme.
 To overcome the problem of external fragmentation, compaction technique is used or
non-contiguous memory management techniques are used.
Advantages of Variable Partition Scheme
 Partition size = process size.
 There is no internal fragmentation (which is the drawback of fixed partition schema).
 Degree of multiprogramming varies and is directly proportional to a number of processes.
Disadvantage Variable Partition Scheme
 External fragmentation is still there.
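External fragmentation under variable partitioning can be seen in a small first-fit allocator sketch over a free-hole list (the hole sizes and the first_fit helper name are illustrative, not a real OS interface):

```python
def first_fit(holes, request):
    """Allocate `request` KB from the first hole large enough.
    holes: list of (start, size) free blocks. Returns (start, new_holes),
    or (None, holes) if no single hole can satisfy the request."""
    for i, (start, size) in enumerate(holes):
        if size >= request:
            leftover = size - request
            new_holes = holes[:i] + holes[i + 1:]
            if leftover:                         # shrink the hole rather than drop it
                new_holes.insert(i, (start + request, leftover))
            return start, new_holes
    return None, holes

holes = [(0, 200), (500, 300)]           # two free blocks, 500 KB free in total
addr1, holes = first_fit(holes, 250)     # skips the 200 KB hole, takes the 300 KB one
addr2, holes = first_fit(holes, 100)     # fits in the first hole
addr3, holes = first_fit(holes, 400)     # fails: free space exists, but scattered
```

The last request fails even though free memory remains, because no single contiguous hole is large enough — exactly the external fragmentation problem that compaction or non-contiguous allocation solves.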
2. Non-contiguous memory allocation:
 In a Non-Contiguous memory management scheme, the program is divided into different blocks and loaded
at different portions of the memory that need not necessarily be adjacent to one another.
 This scheme can be classified depending upon the size of blocks and whether the blocks reside in the main
memory or not.
1. Physical address space: Main memory (physical memory) is divided into blocks of the
same size called frames. The frame size is fixed and equal to the page size.
2. Logical address space: Logical memory is divided into blocks of the same size called
pages. The page size is defined by the hardware, and these pages are stored in
non-contiguous frames of main memory during execution.
What is paging?
 Paging is a technique that eliminates the requirements of contiguous allocation of main memory.

 In this, the main memory is divided into fixed-size blocks of physical memory called frames.

 The size of a frame should be kept the same as that of a page to maximize the main memory and avoid
external fragmentation.

Advantages of paging:
 Pages reduce external fragmentation.
 Simple to implement.
 Memory efficient.
 Due to the equal size of frames, swapping becomes very easy.
 It is used for faster access of data.
What is Segmentation?
 Segmentation is a technique that eliminates the requirements of contiguous allocation of main memory.

 In this, the main memory is divided into variable-size blocks of physical memory called segments.


 It is based on the way the programmer follows to structure their programs.

 With segmented memory allocation, each job is divided into several segments of different sizes, one for each
module.

 Functions, subroutines, stack, array, etc., are examples of such modules.

What is Paging?

Paging is a memory management technique that helps in retrieving processes from
the secondary memory in the form of pages. It eliminates the need for contiguous
allocation of memory to processes. In paging, processes are divided into equal parts
called pages, and main memory is also divided into equal parts, each of which is called a
frame.

Each page is stored in one of the frames of the main memory whenever required, so
the size of a frame is equal to the size of a page. Pages of a process can be stored in
non-contiguous locations in the main memory.

Why do we need Paging?

Suppose there is a process P1 of size 4 MB and there are two holes of size 2 MB each that are
not contiguous. Despite the total available space being equal to 4 MB, it is useless because it
cannot be allocated to the process contiguously.

Paging helps in allocating memory to a process at different locations in the main memory.
It reduces memory wastage and removes external fragmentation.

Conversion of Logical Address into Physical Address

The CPU always generates a Logical Address. But, the Physical Address is needed to
access the main memory.

The Logical Address generated by the CPU has two parts:

1. Page Number(p) - It is the number of bits required to represent the pages in the
Logical Address Space. It is used as an index in a page table that contains the base
address of a page in the physical memory.
2. Page Offset(d) - It denotes the page size or the number of bits required to represent
a word on a page. It is combined with the Page Number to get the Physical
Address.

The Physical Address also consists of two parts:

1. Frame Number(f) - It is the number of bits required to represent a frame in the


Physical Address Space. It is the location of the required page inside the Main
Memory.


2. Frame Offset(d) - It is the page size or the number of bits required to represent a
word in a frame. It is equal to the Page Offset.

Page Table

The Page Table contains the base address of each page inside the Physical Memory. It is
then combined with Page Offset to get the actual address of the required data in the main
memory.

The Page Number is used as the index of the Page Table which contains the base address
which is the Frame Number. Page offset is then used to retrieve the required data from the
main memory.

Example of Paging

If Logical Address = 2 bits, then Logical Address Space = 2^2 = 4 words, and vice versa.

If Physical Address = 4 bits, then Physical Address Space = 2^4 = 16 words, and vice versa.

Now, consider an example:

Let Page Size = 2 words (so the page offset is 1 bit), Logical Address = 2 bits, Physical Address = 4 bits.

Frame Size = Page Size = 2^1 = 2 words. It means every page will contain 2 words.

Number of Pages = Logical Address Space / Page Size = 2^2 / 2^1 = 2

Number of Frames = Physical Address Space / Frame Size = 2^4 / 2^1 = 2^3 = 8
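The page-number/offset split described above can be sketched in a few lines of Python. The page table contents here are hypothetical, chosen only to make the arithmetic concrete.

```python
# Sketch of logical-to-physical address translation in paging.
# Page size = 2 words, as in the example above; the page-table
# contents (page -> frame) below are made up for illustration.

PAGE_SIZE = 2                  # words per page (and per frame)
page_table = {0: 5, 1: 3}      # hypothetical page table: p -> f

def translate(logical_addr):
    p = logical_addr // PAGE_SIZE   # page number (index into page table)
    d = logical_addr % PAGE_SIZE    # page offset within the page
    f = page_table[p]               # frame number from the page table
    return f * PAGE_SIZE + d        # physical address = f * page size + d

# Logical address 3 -> page 1, offset 1 -> frame 3 -> physical address 7
print(translate(3))
```

Logical address 0 (page 0, offset 0) would map to frame 5, i.e. physical address 10.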


Advantages of Paging

 It is one of the easiest Memory Management Algorithms.


 Paging helps in storing a process at non-contiguous locations in the main memory.
 Paging removes the problem of External Fragmentation.
Disadvantages of Paging
 It may cause Internal Fragmentation.
 More Memory is consumed by the Page Tables.
 It faces a longer memory lookup time.
What is Swapping?
 Swapping is the process of temporarily moving a process from the main memory to
secondary memory. Main memory is faster but smaller than secondary memory.
 Because RAM is limited in size, inactive processes are transferred to secondary memory.
The main cost of swapping is transfer time, and the total time is directly proportional to
the amount of memory swapped.


Difference Between Paging and Swapping

Definition:
 Paging: A memory management scheme that eliminates the need for contiguous allocation of physical memory.
 Swapping: The process of moving entire processes between main memory and disk storage to manage memory usage.
Memory Management:
 Paging: Divides the process's memory into fixed-size pages.
 Swapping: Moves entire processes to and from disk (swap space) as needed.
Size:
 Paging: Pages are typically smaller than the entire process; sizes are usually fixed (e.g., 4 KB).
 Swapping: Involves entire processes, which can be much larger than a single page.
Fragmentation:
 Paging: Eliminates external fragmentation; internal fragmentation may occur.
 Swapping: Can lead to significant overhead if many processes are swapped in and out.
Performance:
 Paging: Generally more efficient, since only the needed pages are loaded into memory.
 Swapping: Can cause performance degradation due to the overhead of moving large processes to and from disk.
Access Time:
 Paging: Faster access to memory, as only needed pages are loaded.
 Swapping: Slower access time, since accessing swapped processes requires reading from disk.
Implementation:
 Paging: Uses page tables to keep track of pages and their locations.
 Swapping: Typically involves a swap space on disk and a swapping algorithm to decide which processes to swap.
Use Case:
 Paging: Commonly used in virtual memory systems to manage memory efficiently.
 Swapping: Useful when memory is fully utilized and additional processes need to be executed.

Virtual Memory Management Introduction :


 Virtual memory is a memory management capability in operating systems (OS) that enables a
computer to use more memory than is physically available in its Random Access Memory
(RAM).
 It allows multiple programs to run simultaneously, even if the total memory requirements
exceed the available RAM.
 It allows larger applications to run on systems with less RAM.
 The main objective of virtual memory is to support multiprogramming.
 The main advantage that virtual memory provides is, a running process does not need
to be entirely in memory.
 Programs can be larger than the available physical memory.
 Virtual Memory provides an abstraction of main memory, eliminating concerns about
storage limitations.
 A memory hierarchy, consisting of a computer system’s memory and a disk, enables a
process to operate with only some portions of its address space in RAM to allow more
processes to be in memory.
 A virtual memory is what its name indicates- it is an illusion of a memory that is larger
than the real memory.
 The basis of virtual memory is the non contiguous memory allocation model.
 The size of virtual storage is limited by the addressing scheme of the computer system
and by the amount of secondary memory available, not by the actual number of main
storage locations.

Key Features of Virtual Memory:


1. Combines RAM and disk storage: Virtual memory uses both physical RAM and hard drive
storage to provide a larger address space.
2. Paging: The OS divides the program into fixed-size blocks called pages. These pages are stored in
RAM or on disk.
3. Page replacement algorithms: The OS uses algorithms like FIFO, LRU, or Optimal to decide
which pages to swap out of RAM when it's full.
4. Swapping: When a page is needed, but it's not in RAM, the OS swaps it in from disk storage.
Benefits of Virtual Memory:
1. More programs can run simultaneously: Virtual memory allows multiple programs to run at
the same time, even if the total memory requirements exceed the available RAM.


2. Efficient use of RAM: The OS can optimize RAM usage by swapping out pages that are not
currently being used.
3. Improved multitasking: Virtual memory enables smooth multitasking by allowing programs to
run in the background while others are active.
Drawbacks of Virtual Memory:
1. Performance overhead: Swapping pages in and out of RAM can lead to a performance slowdown.
2. Disk usage: Virtual memory relies on disk storage, which can lead to increased disk usage and
wear.
In summary: virtual memory is a crucial feature in modern operating systems that enables efficient
use of RAM, improved multitasking, and allows multiple programs to run simultaneously.
How Virtual Memory Works
 Virtual Memory is a technique that is implemented using both hardware and software.
 It maps memory addresses used by a program, called virtual addresses, into physical
addresses in computer memory.
 All memory references within a process are logical addresses that are dynamically
translated into physical addresses at run time.
 This means that a process can be swapped in and out of the main memory such that it
occupies different places in the main memory at different times during the course of
execution.
 A process may be broken into a number of pieces and these pieces need not be
continuously located in the main memory during execution.
 The combination of dynamic run-time address translation and the use of a page or
segment table permits this.
 If these characteristics are present then, it is not necessary that all the pages or segments
are present in the main memory during execution.
 This means that the required pages need to be loaded into memory whenever required.
 Virtual memory is implemented using Demand Paging or Demand Segmentation.
Snapshot of a virtual memory management system
 Let us assume 2 processes, P1 and P2, containing 4 pages each. Each page size is 1 KB.

 The main memory contains 8 frames of 1 KB each.

 The OS resides in the first two partitions.

 In the third partition, the 1st page of P1 is stored; the other frames are also shown as filled with different
pages of processes in the main memory.

 The page tables of both processes are 1 KB in size each, and therefore they can fit in one frame each.

 The page tables of both processes contain various information that is also shown in the image.

 The CPU contains a register which holds the base address of the page table: 5 in the case of P1 and 7 in
the case of P2.

 This page table base address is added to the page number of the logical address when it comes to
accessing the actual corresponding entry.

Advantages of Virtual Memory


1. The degree of Multiprogramming will be increased.
2. User can run large application with less real RAM.
3. There is no need to buy more memory RAMs.
Disadvantages of Virtual Memory
1. The system becomes slower since swapping takes time.
2. It takes more time in switching between applications.
3. The user will have the lesser hard disk space for its use.
Virtual Memory vs Physical Memory

Definition:
 Virtual Memory: An abstraction that extends the available memory by using disk storage.
 Physical Memory (RAM): The actual hardware (RAM) that stores data and instructions currently being used by the CPU.
Location:
 Virtual Memory: On the hard drive or SSD.
 Physical Memory: On the computer's motherboard.
Speed:
 Virtual Memory: Slower (due to disk I/O operations).
 Physical Memory: Faster (accessed directly by the CPU).
Capacity:
 Virtual Memory: Larger, limited by disk space.
 Physical Memory: Smaller, limited by the amount of RAM installed.
Cost:
 Virtual Memory: Lower (cost of additional disk storage).
 Physical Memory: Higher (cost of RAM modules).
Data Access:
 Virtual Memory: Indirect (via paging and swapping).
 Physical Memory: Direct (CPU can access data directly).
Volatility:
 Virtual Memory: Non-volatile (data persists on disk).
 Physical Memory: Volatile (data is lost when power is off).

Types of Virtual Memory:


In a computer, virtual memory is managed by the Memory Management Unit (MMU), which is
often built into the CPU. The CPU generates virtual addresses that the MMU translates into
physical addresses.
There are two main types of virtual memory:
 Paging
 Segmentation
Paging
 Paging divides memory into small fixed-size blocks called pages.
 When the computer runs out of RAM, pages that aren’t currently in use are moved to
the hard drive, into an area called a swap file.
 The swap file acts as an extension of RAM.
 When a page is needed again, it is swapped back into RAM, a process known as page
swapping.
 This ensures that the operating system (OS) and applications have enough memory to
run.
Demand Paging: The process of loading a page into memory on demand (whenever a page
fault occurs) is known as demand paging. The steps are as follows:
 If the CPU tries to refer to a page that is currently not available in the main memory, it
generates an interrupt indicating a memory access fault.
 The OS puts the interrupted process in a blocking state. For the execution to proceed the OS
must bring the required page into the memory.
 The OS will search for the required page in the logical address space.
 The required page will be brought from logical address space to physical address space. The
page replacement algorithms are used for the decision-making of replacing the page in
physical address space.
 The page table will be updated accordingly.
 The signal will be sent to the CPU to continue the program execution and it will place the
process back into the ready state.
Hence whenever a page fault occurs these steps are followed by the operating system and the
required page is brought into memory.
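The page-fault steps above can be sketched as a toy simulation. The backing-store contents, the two-frame memory, and the victim choice below are all assumptions made for illustration; a real OS would pick the victim with a page replacement algorithm.

```python
# Toy sketch of demand paging: a page is loaded only when a fault occurs.
# The backing-store contents and the two-frame memory are hypothetical;
# the victim choice is simplistic (first resident page found).

backing_store = {0: "code", 1: "data", 2: "stack", 3: "heap"}
page_table = {p: None for p in backing_store}   # page -> frame (None = not resident)
memory = {}                                     # frame -> page contents
free_frames = [0, 1]                            # only two physical frames

def access(page):
    if page_table[page] is not None:
        return memory[page_table[page]]          # hit: page already resident
    # --- page fault: bring the required page into memory ---
    if free_frames:
        frame = free_frames.pop()
    else:
        victim, frame = next((p, f) for p, f in page_table.items() if f is not None)
        page_table[victim] = None                # evict a resident page
    memory[frame] = backing_store[page]          # page-in from the backing store
    page_table[page] = frame                     # update the page table
    return memory[frame]

print(access(0))   # page fault: loaded on demand
print(access(0))   # hit: already resident
```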


Segmentation in Operating System:


A process is divided into segments. The chunks that a program is divided into, which are not
necessarily all of the same size, are called segments. Segmentation gives the user's view of the
process, which paging does not provide. Here the user's view is mapped to physical memory.
Types of Segmentation in Operating Systems
 Virtual Memory Segmentation: Each process is divided into a number of segments, but
the segmentation is not done all at once. This segmentation may or may not take place at the
run time of the program.
 Simple Segmentation: Each process is divided into a number of segments, all of which are
loaded into memory at run time, though not necessarily contiguously.
There is no simple relationship between logical addresses and physical addresses in
segmentation. A table stores the information about all such segments and is called Segment
Table.

What is Segment Table?


It maps a two-dimensional Logical address into a one-dimensional Physical address. It’s each
table entry has:
 Base Address: It contains the starting physical address where the segments reside in
memory.
 Segment Limit: Also known as segment offset. It specifies the length of the segment.

Segmentation

Translation of a two-dimensional logical address to a one-dimensional physical address.


Translation

The address generated by the CPU is divided into:


 Segment number (s): Number of bits required to represent the segment.
 Segment offset (d): Number of bits required to represent the position of data within a
segment.
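The base/limit lookup described above can be sketched as follows. The segment table contents are hypothetical, chosen only to show the mechanism.

```python
# Sketch of segmented address translation using (base, limit) entries.
# The segment-table contents below are made up for illustration.

segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 1100)}

def translate(s, d):
    base, limit = segment_table[s]
    if d >= limit:                    # offset beyond the segment's length
        raise MemoryError("segmentation fault: offset out of range")
    return base + d                   # physical address = base + offset

print(translate(2, 53))    # segment 2 starts at 4300 -> physical 4353
```

An offset at or past the segment limit raises an error, which is exactly the protection check the hardware performs on every reference.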
Advantages of Segmentation in Operating System
 Reduced Internal Fragmentation : Segmentation can reduce internal fragmentation
compared to fixed-size paging, as segments can be sized according to the actual needs of a
process. However, internal fragmentation can still occur if a segment is allocated more space
than it is actually used.
 Segment Table consumes less space in comparison to Page table in paging.
 As a complete module is loaded all at once, segmentation improves CPU utilization.
 The user’s perception of physical memory is quite similar to segmentation. Users can divide
user programs into modules via segmentation. These modules are nothing more than
separate processes’ codes.
 The user specifies the segment size, whereas, in paging, the hardware determines the page
size.
 Segmentation is a method that can be used to segregate data from security operations.
 Flexibility: Segmentation provides a higher degree of flexibility than paging. Segments can
be of variable size, and processes can be designed to have multiple segments, allowing for
more fine-grained memory allocation.
 Sharing: Segmentation allows for sharing of memory segments between processes. This can
be useful for inter-process communication or for sharing code libraries.
 Protection: Segmentation provides a level of protection between segments, preventing one
process from accessing or modifying another process’s memory segment. This can help
increase the security and stability of the system.
Disadvantages of Segmentation in Operating System
 External Fragmentation : As processes are loaded and removed from memory, the free
memory space is broken into little pieces, causing external fragmentation. This is a notable
difference from paging, where external fragmentation is significantly lesser.
 Overhead is associated with keeping a segment table for each activity.
 Due to the need for two memory accesses, one for the segment table and the other for main
memory, access time to retrieve the instruction increases.
 Fragmentation: As mentioned, segmentation can lead to external fragmentation as memory
becomes divided into smaller segments. This can lead to wasted memory and decreased
performance.


 Overhead: Using a segment table can increase overhead and reduce performance. Each
segment table entry requires additional memory, and accessing the table to retrieve memory
locations can increase the time needed for memory operations.
 Complexity: Segmentation can be more complex to implement and manage than paging. In
particular, managing multiple segments per process can be challenging, and the potential for
segmentation faults can increase as a result.

Thrashing:
Thrashing is a phenomenon in Operating Systems (OS) where the system spends more time
swapping pages in and out of memory (also known as "page faults") than executing actual
processes. This leads to a significant decrease in system performance and productivity.

Causes of Thrashing:
1.Insufficient RAM: When the system has limited RAM, it cannot hold all the required pages,
leading to frequent page faults.
2.Poor Page Replacement Algorithms: Inefficient page replacement algorithms, such as FIFO
(First-In-First-Out), can lead to thrashing.
3.Multiprogramming: Running multiple programs simultaneously can cause thrashing if the
system is unable to handle the increased demand for memory.

Effects of Thrashing:
1. System Slowdown: Thrashing leads to a significant decrease in system performance,
making it slow and unresponsive.
2.Increased Page Faults: The system spends more time handling page faults than executing
processes.
3.Decreased Throughput: Thrashing reduces the overall throughput of the system.
Solutions to Thrashing:
1.Increase RAM: Adding more RAM to the system can help reduce thrashing.
2. Improve Page Replacement Algorithms: Using more efficient page replacement
algorithms, such as LRU (Least Recently Used) or Optimal, can help minimize thrashing.
3.Reduce Multiprogramming: Limiting the number of programs running simultaneously can
help alleviate thrashing.
4.Use Disk Caching: Implementing disk caching can help reduce the number of page faults
and alleviate thrashing.


Copy on Write :
 Copy on Write, or simply COW, is a resource management technique. One of its main uses is
in the implementation of the fork system call, in which it shares the virtual memory (pages)
of the OS.
 In UNIX-like OSes, the fork() system call creates a duplicate process of the parent process,
which is called the child process.
 The idea behind copy-on-write is that when a parent process creates a child process, both
processes initially share the same pages in memory, and these shared pages are marked as
copy-on-write, which means that if either process tries to modify a shared page, only a copy
of that page is created and the modification is done on the copy by that process, thus not
affecting the other process.
 Suppose there is a process P that creates a new process Q, and then process P modifies page 3.
The figures below show what happens before and after process P modifies page 3.
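The sharing-then-copying behaviour can be mimicked in plain Python. This only imitates the idea at the object level; real COW happens at the page-table level inside the kernel, and the page contents below are made up.

```python
# Toy imitation of copy-on-write: parent and child share page objects
# until one side writes; only the written page is then copied.

parent_pages = [bytearray(b"page0"), bytearray(b"page1")]
child_pages = list(parent_pages)            # "fork": share the same pages

assert child_pages[1] is parent_pages[1]    # still shared, nothing copied

# Parent writes page 1: copy that one page first, then modify the copy
parent_pages[1] = bytearray(parent_pages[1])
parent_pages[1][:] = b"PAGE1"

print(bytes(child_pages[1]))    # child still sees the original page
print(bytes(parent_pages[1]))   # parent sees its private, modified copy
```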


PAGE REPLACEMENT ALGORITHMS:


 Page replacement algorithms are techniques used in operating systems to manage
memory efficiently when the physical memory is full.
 When a new page needs to be loaded into physical memory, and there is no free
space, these algorithms determine which existing page to replace.
 If no page frame is free, the virtual memory manager performs a page replacement
operation to replace one of the pages existing in memory with the page whose
reference caused the page fault.
 It is performed as follows: The virtual memory manager uses a page replacement
algorithm to select one of the pages currently in memory for replacement, accesses
the page table entry of the selected page to mark it as “not present” in memory, and
initiates a page-out operation for it if the modified bit of its page table entry indicates
that it is a dirty page.
Common Page Replacement Techniques:
 First In First Out (FIFO)
 Optimal Page replacement
 Least Recently Used (LRU)
 Most Recently Used (MRU)
1.FIFO(FIRST IN FIRST OUT):
 This is the simplest page replacement algorithm.
 In this algorithm, the operating system keeps track of all pages in the memory in a
queue, the oldest page is in the front of the queue.
 When a page needs to be replaced page in the front of the queue is selected for
removal.
 The FIFO algorithm is the simplest page replacement algorithm. It maintains a
queue of pages in memory, with the oldest page at the front of the queue.
 When a page needs to be replaced, the oldest page (the one at the front) is
evicted.
Example
Page Reference Sequence: 1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7
Number of Frames: 3

Reference: 1  2  3  4  2  1  5  6  2  1  2  3  7
Frame 1:   1  1  1  4  4  4  4  6  6  6  6  3  3
Frame 2:   -  2  2  2  2  1  1  1  2  2  2  2  7
Frame 3:   -  -  3  3  3  3  5  5  5  1  1  1  1
Fault?     Y  Y  Y  Y  N  Y  Y  Y  Y  Y  N  Y  Y

Total page faults = 11, hits = 2.

ADVANTAGES:


o Simple to implement
o Easy to understand

Disadvantages:

o It may suffer from Belady's anomaly, in which increasing the number of page frames can increase
the number of page faults, leading to worse overall performance.
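The queue-based behaviour described above can be simulated in a few lines of Python; the fault count for the example reference string works out to 11, and a second reference string demonstrates Belady's anomaly.

```python
from collections import deque

def fifo_faults(refs, n_frames):
    """Simulate FIFO page replacement and return the page-fault count."""
    frames = set()
    queue = deque()              # arrival order; left end = oldest page
    faults = 0
    for page in refs:
        if page in frames:
            continue             # hit: FIFO does not reorder on a hit
        faults += 1
        if len(frames) == n_frames:
            frames.discard(queue.popleft())   # evict the oldest page
        frames.add(page)
        queue.append(page)
    return faults

refs = [1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7]
print(fifo_faults(refs, 3))      # 11 page faults with 3 frames

# Belady's anomaly: more frames, yet more faults
belady = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(belady, 3), fifo_faults(belady, 4))   # 9 vs 10
```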



2.LRU (LEAST RECENTLY USED):


The LRU algorithm replaces the page that has not been used for the longest time. It
approximates the optimal algorithm by assuming that pages used recently will likely be
used again soon.

Advantages

o Efficient and good approximation of the optimal algorithm


o Works well in practice

Disadvantages

o More complex to implement than FIFO


o Requires additional data structures like a stack or counters

Example

Page Reference Sequence: 1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7

Number of Frames: 3

Reference: 1  2  3  4  2  1  5  6  2  1  2  3  7
Frame 1:   1  1  1  4  4  4  5  5  5  1  1  1  7
Frame 2:   -  2  2  2  2  2  2  6  6  6  6  3  3
Frame 3:   -  -  3  3  3  1  1  1  2  2  2  2  2
Fault?     Y  Y  Y  Y  N  Y  Y  Y  Y  Y  N  Y  Y

Total page faults = 11, hits = 2.

Implementation

LRU can be implemented using a stack or counters. The stack implementation
maintains a stack of page numbers, with the most recently used page on top. When a
page is accessed, it is removed from the stack and pushed to the top. The page at the
bottom is the least recently used.
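The stack idea above can be sketched with an ordered dictionary, where insertion order tracks recency and the first entry is always the least recently used page.

```python
from collections import OrderedDict

def lru_faults(refs, n_frames):
    """Simulate LRU page replacement and return the page-fault count."""
    frames = OrderedDict()        # insertion order tracks recency; first = LRU
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)        # refresh recency on a hit
            continue
        faults += 1
        if len(frames) == n_frames:
            frames.popitem(last=False)      # evict the least recently used
        frames[page] = True
    return faults

refs = [1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7]
print(lru_faults(refs, 3))        # 11 page faults with 3 frames
```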


3.MRU (MOST RECENTLY USED):


The MRU algorithm is a type of replacement algorithm used in caching mechanisms, such as page
replacement algorithms. It works by replacing the most recently used page or block with a new one
when the cache is full.
Advantages of MRU Algorithm:
1. Simple Implementation: The MRU algorithm is easy to implement, as it only requires a simple data
structure to keep track of the most recently used page.
2. Fast Execution: The MRU algorithm has fast execution times, as it only requires a constant-time
operation to replace the most recently used page.
3. Good Performance: The MRU algorithm provides good performance in many scenarios, especially
when the cache is large and the page request pattern is random.
Disadvantages of MRU Algorithm:
1. Poor Performance in Certain Scenarios: The MRU algorithm can perform poorly in certain
scenarios, such as when the page request pattern is sequential or when the cache is small.
2. Thrashing: The MRU algorithm can suffer from thrashing, where the same page is repeatedly
replaced and re-added to the cache.
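MRU is the mirror image of LRU: on a fault, the most recently used page is evicted. A minimal sketch, run on the same reference string used in the earlier examples:

```python
def mru_faults(refs, n_frames):
    """Simulate MRU page replacement and return the page-fault count."""
    frames = []                   # last element = most recently used page
    faults = 0
    for page in refs:
        if page in frames:
            frames.remove(page)
            frames.append(page)   # refresh recency on a hit
            continue
        faults += 1
        if len(frames) == n_frames:
            frames.pop()          # evict the most recently used page
        frames.append(page)
    return faults

refs = [1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7]
print(mru_faults(refs, 3))        # 10 page faults with 3 frames
```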


HIT RATIO = 5/17 × 100 ≈ 29%

MISS RATIO = 12/17 × 100 ≈ 71%


4.Optimal Page Replacement Algorithm:


Optimal Page Replacement is one of the page replacement algorithms. In this algorithm, the
page that will not be used for the longest duration of time in the future is replaced.
The idea is simple; for every reference we do the following:
1. If the referred page is already present, increment the hit count.
2. If not present, check whether some page in memory is never referenced in the future. If such a
page exists, replace it with the new page. If no such page exists, find the page that is referenced
farthest in the future and replace it with the new page.
Advantages

o Provides the optimal number of page faults

Disadvantages

o It is not practical for implementation as it requires future knowledge of page references
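Although Optimal is impractical for a real OS, it is easy to simulate offline when the whole reference string is known, which makes it useful as a yardstick for the other algorithms. A sketch, run on the same reference string as the earlier examples:

```python
def optimal_faults(refs, n_frames):
    """Simulate the Optimal (Belady) algorithm; return the page-fault count."""
    frames = []
    faults = 0
    for i, page in enumerate(refs):
        if page in frames:
            continue                          # hit
        faults += 1
        if len(frames) < n_frames:
            frames.append(page)               # free frame available
            continue
        # Evict the page whose next use is farthest in the future
        # (a page never used again is the best possible victim).
        future = refs[i + 1:]
        victim = max(frames, key=lambda p: future.index(p)
                     if p in future else len(future))
        frames[frames.index(victim)] = page
    return faults

refs = [1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7]
print(optimal_faults(refs, 3))    # 8 page faults -- the minimum possible
```

Note that 8 faults is fewer than both FIFO and LRU achieve on this string, as expected of the optimal algorithm.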


Allocation of Frames:
Frame allocation refers to the process of assigning physical memory frames to a process's virtual
memory pages. The following are common allocation strategies:
1. First-Fit Algorithm:
Assigns the first available frame that is large enough to hold the process.
2. Best-Fit Algorithm:
Assigns the smallest frame that is large enough to hold the process.
3. Worst-Fit Algorithm:
Assigns the largest available frame.
4. Next-Fit Algorithm:
Assigns the next available frame after the last allocated frame.
5. Buddy System:
Divides memory into power-of-2 sized blocks, and assigns the smallest block that can hold the
process.
6. Slab Allocation:
Divides memory into fixed-size blocks, and assigns a block to a process when needed.
Frame Allocation Techniques:
1. Fixed Allocation:
Divides physical memory into fixed-size frames.
2. Dynamic Allocation:
Allocates frames of varying sizes to processes.
3. Hybrid Allocation:
Combines fixed and dynamic allocation techniques.
Considerations for Frame Allocation:
1. Internal Fragmentation:
Wasted space within a frame.
2. External Fragmentation:
Wasted space between frames.
3. Frame Size:
Affects internal fragmentation and page table size.
4. Page Table Size:
Affects memory overhead and access time.
5. Allocation Algorithm:
Affects performance, fragmentation, and complexity.
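The first-fit, best-fit, and worst-fit strategies listed above can be compared with a small sketch. The hole sizes below are made up; each function returns the index of the chosen free block, or None if the request cannot be satisfied.

```python
# Sketch of first-fit / best-fit / worst-fit hole selection.
# 'holes' lists free-block sizes; the sizes below are hypothetical.

def first_fit(holes, size):
    return next((i for i, h in enumerate(holes) if h >= size), None)

def best_fit(holes, size):
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None     # smallest adequate hole

def worst_fit(holes, size):
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(fits)[1] if fits else None     # largest adequate hole

holes = [100, 500, 200, 300, 600]
print(first_fit(holes, 212))   # index 1 (500 is the first hole big enough)
print(best_fit(holes, 212))    # index 3 (300 is the tightest fit)
print(worst_fit(holes, 212))   # index 4 (600 is the largest hole)
```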
Overview of storage structure:
 A magnetic disk consists of a stack of platters; each platter surface is divided into
circular tracks.
 Each track is further divided into sectors.
 The spindle revolves the platters and is controlled by the r/w unit of the OS. Some advanced
spindles have the capability to revolve only a particular disk and keep the others intact.
 An arm assembly keeps a pointed r/w head over each disk surface to read from or write to
a particular disk.
 The word cylinder may also be used at times to refer to the set of tracks at one arm
position across the disk stack.

The speed of the disk is measured in two parts:

 Transfer rate: This is the rate at which the data moves from disk to the computer.
 Random access time: It is the sum of the seek time and rotational latency.
Seek time is the time taken by the arm to move to the required track. Rotational latency is
the time taken for the required sector of the track to rotate under the read/write head.
Transfer time is the time to transfer the data. It depends on the rotating speed of the disk and
number of bytes to be transferred.
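A quick worked example of the timing quantities above, with all values assumed for illustration:

```python
# Worked example: Disk Access Time = Seek Time + Rotational Latency
# + Transfer Time. All the numbers below are assumed for illustration.

seek_time = 5.0            # ms: move the arm to the required track
rotational_latency = 4.0   # ms: wait for the sector to rotate under the head
transfer_time = 0.5        # ms: move the bytes to/from the disk

disk_access_time = seek_time + rotational_latency + transfer_time
print(disk_access_time)    # 9.5 ms per request
```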

Disk Scheduling Algorithms:


 Disk scheduling algorithms are crucial in managing how data is read from and
written to a computer’s hard disk.
 These algorithms help determine the order in which disk read and write requests are
processed, significantly impacting the speed and efficiency of data access.
 Common disk scheduling methods include First-Come, First-Served (FCFS),
Shortest Seek Time First (SSTF), SCAN, C-SCAN, LOOK, and C-LOOK.
 By understanding and implementing these algorithms, we can optimize system
performance and ensure faster data retrieval.
 Disk scheduling is a technique operating systems use to manage the order in which
disk I/O (input/output) requests are processed.
 Disk scheduling is also known as I/O Scheduling.
 The main goals of disk scheduling are to optimize the performance of disk
operations, reduce the time it takes to access data and improve overall system
efficiency.


Importance of Disk Scheduling in Operating System:


 Multiple I/O requests may arrive by different processes and only one I/O request can be served at a
time by the disk controller. Thus other I/O requests need to wait in the waiting queue and need to
be scheduled.
 Two or more requests may be far from each other so this can result in greater disk arm movement.
 Hard drives are one of the slowest parts of the computer system and thus need to be accessed in an
efficient manner.
Key Terms Associated with Disk Scheduling:
 Seek Time: Seek time is the time taken to locate the disk arm to a specified track where the data is to be
read or written. So the disk scheduling algorithm that gives a minimum average seek time is better.
 Rotational Latency: Rotational Latency is the time taken by the desired sector of the disk to rotate into
a position so that it can access the read/write heads. So the disk scheduling algorithm that gives minimum
rotational latency is better.
 Transfer Time: Transfer time is the time to transfer the data. It depends on the rotating speed of the disk
and the number of bytes to be transferred.
 Disk Access Time:
Disk Access Time = Seek Time + Rotational Latency + Transfer Time
Total Seek Time = Total head Movement * Seek Time

 Disk Response Time: Response Time is the average time spent by a request waiting to perform
its I/O operation. The average Response time is the response time of all requests. Variance
Response Time is the measure of how individual requests are serviced with respect to average
response time. So the disk scheduling algorithm that gives minimum variance response time is
better.
Goal of Disk Scheduling Algorithms:
 Minimize Seek Time
 Maximize Throughput
 Minimize Latency
 Fairness
 Efficiency in Resource Utilization
Disk Scheduling Algorithms:
There are several Disk Several Algorithms. We will discuss in detail each one of them.
 FCFS (First Come First Serve)
 SSTF (Shortest Seek Time First)
 SCAN
 C-SCAN
 LOOK
 C-LOOK
 RSS (Random Scheduling)
 LIFO (Last-In First-Out)
 N-STEP SCAN
 F-SCAN
1. FCFS (First Come First Serve)
 FCFS is the simplest of all Disk Scheduling Algorithms.
 In FCFS, the requests are addressed in the order they arrive in the disk queue.


 Let us understand this with the help of an example.


Suppose the order of request is- (82,170,43,140,24,16,190)
And current position of Read/Write head is: 50

So, total overhead movement (total distance covered by the disk arm) =
(82-50)+(170-82)+(170-43)+(140-43)+(140-24)+(24-16)+(190-16) =642
Advantages of FCFS:
Here are some of the advantages of First Come First Serve.
 Every request gets a fair chance
 No indefinite postponement
Disadvantages of FCFS:
Here are some of the disadvantages of First Come First Serve.
 Does not try to optimize seek time
 May not provide the best possible service
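The head-movement arithmetic in the FCFS example above can be reproduced with a short function:

```python
def fcfs_seek(requests, head):
    """Total head movement when requests are served in arrival order."""
    total = 0
    for track in requests:
        total += abs(track - head)    # distance to the next request
        head = track
    return total

# The example above: requests (82,170,43,140,24,16,190), head at 50
print(fcfs_seek([82, 170, 43, 140, 24, 16, 190], 50))   # 642
```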
2. SSTF (Shortest Seek Time First):
 In SSTF (Shortest Seek Time First), requests having the shortest seek time are executed first.
 So, the seek time of every request is calculated in advance in the queue and then they are scheduled
according to their calculated seek time.
 As a result, the request near the disk arm will get executed first.
 SSTF is certainly an improvement over FCFS as it decreases the average response time and increases
the throughput of the system.
 Let us understand this with the help of an example.
Example: Suppose the order of requests is (82, 170, 43, 140, 24, 16, 190) and the current
position of the Read/Write head is 50.
Total overhead movement (total distance covered by the disk arm) =
(50-43)+(43-24)+(24-16)+(82-16)+(140-82)+(170-140)+(190-170) = 208
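SSTF can be simulated the same way (an illustrative sketch, not from the original material): at each step the pending request nearest to the current head position is served next.

```python
def sstf_seek(requests, head):
    """Total head movement when the nearest pending request is always served next."""
    pending = list(requests)
    total = 0
    while pending:
        nearest = min(pending, key=lambda t: abs(t - head))  # shortest seek first
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

print(sstf_seek([82, 170, 43, 140, 24, 16, 190], 50))  # 208
```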
Advantages of Shortest Seek Time First:
Here are some of the advantages of Shortest Seek Time First.
 The average response time decreases
 Throughput increases
Disadvantages of Shortest Seek Time First:
Here are some of the disadvantages of Shortest Seek Time First.
 Overhead to calculate seek time in advance
 Can cause starvation for a request if its seek time is higher than that of incoming
requests
 High variance of response time, as SSTF favors only some requests
3. SCAN:
 In the SCAN algorithm the disk arm moves in a particular direction and services the requests coming
in its path and after reaching the end of the disk, it reverses its direction and again services the request
arriving in its path.
 So, this algorithm works as an elevator and is hence also known as an elevator algorithm.
 As a result, the requests at the midrange are serviced more and those arriving behind the disk arm will
have to wait.
Example:
Suppose the requests to be addressed are 82,170,43,140,24,16,190.
And the Read/Write arm is at 50, and it is also given that the disk arm should move “towards the larger value”.
Therefore, the total overhead movement (total distance covered by the disk arm) is calculated as
= (199-50) + (199-16) = 332
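A minimal SCAN simulation (illustrative only; it assumes a 200-track disk, so the head travels to track 199 before reversing, matching the example above):

```python
def scan_seek(requests, head, disk_size=200):
    """SCAN moving toward larger tracks first, travelling to the disk end (track disk_size-1)."""
    total, pos = 0, head
    for t in sorted(t for t in requests if t >= head):   # sweep upward
        total += t - pos
        pos = t
    lower = sorted((t for t in requests if t < head), reverse=True)
    if lower:
        total += (disk_size - 1) - pos                   # continue to the end of the disk
        pos = disk_size - 1
        for t in lower:                                  # reverse and sweep downward
            total += pos - t
            pos = t
    return total

print(scan_seek([82, 170, 43, 140, 24, 16, 190], 50))  # 332
```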
Advantages of SCAN Algorithm:
Here are some of the advantages of the SCAN Algorithm.
 High throughput
 Low variance of response time
 Low average response time
Disadvantages of SCAN Algorithm:
Here are some of the disadvantages of the SCAN Algorithm.
 Long waiting time for requests at locations just visited by the disk arm
4. C-SCAN:
 In the SCAN algorithm, the disk arm again scans the path that has been scanned, after reversing its
direction.
 So, it may be possible that too many requests are waiting at the other end or there may be zero or few
requests pending at the scanned area.
 These situations are avoided in the CSCAN algorithm in which the disk arm instead of reversing its
direction goes to the other end of the disk and starts servicing the requests from there.
 So, the disk arm moves in a circular fashion; since the algorithm is otherwise similar to
SCAN, it is known as C-SCAN (Circular SCAN).
Example:
Suppose the requests to be addressed are 82, 170, 43, 140, 24, 16, 190.
And the Read/Write arm is at 50, and it is also given that the disk arm should move “towards the larger value”.
So, the total overhead movement (total distance covered by the disk arm) is calculated as
= (199-50) + (199-0) + (43-0) = 391
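A C-SCAN sketch (illustrative; again assuming a 200-track disk): after the upward sweep reaches the end, the head jumps back to track 0 and continues in the same direction.

```python
def cscan_seek(requests, head, disk_size=200):
    """C-SCAN: sweep to the disk end, jump back to track 0, continue in the same direction."""
    total, pos = 0, head
    for t in sorted(t for t in requests if t >= head):   # upward sweep
        total += t - pos
        pos = t
    lower = sorted(t for t in requests if t < head)
    if lower:
        total += (disk_size - 1) - pos    # finish the sweep at the last track
        total += disk_size - 1            # jump from the end back to track 0
        pos = 0
        for t in lower:                   # continue the upward sweep from track 0
            total += t - pos
            pos = t
    return total

print(cscan_seek([82, 170, 43, 140, 24, 16, 190], 50))  # 391
```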
Advantages of C-SCAN Algorithm:
Provides more uniform wait time compared to SCAN.
5. LOOK:
 The LOOK algorithm is similar to the SCAN disk scheduling algorithm, except that the disk
arm, instead of going to the end of the disk, goes only to the last request to be serviced in
front of the head and then reverses its direction from there.
 Thus it prevents the extra delay which occurred due to unnecessary traversal to the end of the
disk.
Example:
Suppose the requests to be addressed are-82,170,43,140,24,16,190. And the Read/Write arm is at 50,
and it is also given that the disk arm should move “towards the larger value”.
So, the total overhead movement (total distance covered by the disk arm) is calculated as = (190-50) + (190-16) = 314
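A LOOK sketch (illustrative): the head only travels as far as the last request in each direction, so no disk-size parameter is needed.

```python
def look_seek(requests, head):
    """LOOK: like SCAN, but the head only travels as far as the last request each way."""
    upper = sorted(t for t in requests if t >= head)
    lower = sorted((t for t in requests if t < head), reverse=True)
    total, pos = 0, head
    for t in upper + lower:     # serve upward, then reverse at the last request
        total += abs(t - pos)
        pos = t
    return total

print(look_seek([82, 170, 43, 140, 24, 16, 190], 50))  # 314
```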

6. C-LOOK:
 Just as LOOK is similar to SCAN, C-LOOK is similar to the C-SCAN disk scheduling
algorithm.
 In C-LOOK, the disk arm, instead of going to the end of the disk, goes only to the last request
to be serviced in front of the head and then jumps to the last request at the other end.
 Thus, it also prevents the extra delay which occurred due to unnecessary traversal to the end of the
disk.
Example:
Suppose the requests to be addressed are 82, 170, 43, 140, 24, 16, 190. And the Read/Write arm is at
50, and it is also given that the disk arm should move “towards the larger value”.
So, the total overhead movement (total distance covered by the disk arm) is calculated as
= (190-50) + (190-16) + (43-16) = 341
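A C-LOOK sketch (illustrative): the head serves upward to the highest request, jumps directly to the lowest pending request, then finishes the rest in the same upward direction.

```python
def clook_seek(requests, head):
    """C-LOOK: serve upward to the last request, jump to the lowest request, continue upward."""
    upper = sorted(t for t in requests if t >= head)
    lower = sorted(t for t in requests if t < head)
    total, pos = 0, head
    for t in upper:                  # upward sweep
        total += t - pos
        pos = t
    if lower:
        total += pos - lower[0]      # jump straight to the lowest pending request
        pos = lower[0]
        for t in lower[1:]:          # finish the remaining requests upward
            total += t - pos
            pos = t
    return total

print(clook_seek([82, 170, 43, 140, 24, 16, 190], 50))  # 341
```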
UNIT - V
File System: File System Interface: File concept, Access methods, Directory Structure.
File system Implementation: File-system structure, File-system Operations, Directory
implementation, Allocation method, Free space management.
File-System Internals: File-System Mounting, Partitions and Mounting, File Sharing.
Protection: Goals of protection, Principles of protection, Protection Rings, Domain of protection,
Access matrix.
FILE CONCEPT:
 A file is a named collection of related information that is recorded on secondary storage such
as magnetic disks, magnetic tapes and optical disks.
 In general, a file is a sequence of bits, bytes, lines or records whose meaning is defined by the
files creator and user.

File Structure:
A file structure should follow a required format that the operating system can understand.
 A file has a certain defined structure according to its type.
 A text file is a sequence of characters organized into lines.
 A source file is a sequence of procedures and functions.
 An object file is a sequence of bytes organized into blocks that are understandable by the machine.
 When an operating system defines different file structures, it must also contain the code to
support these file structures. UNIX and MS-DOS support a minimal number of file structures.
File Type:
 File type refers to the ability of the operating system to distinguish different types of file such
as text files source files and binary files etc.
 Many operating systems support many types of files.
 Operating system like MS-DOS and UNIX have the following types of files –
Ordinary files:
 These are the files that contain user information.
 These may have text, databases or executable program.
 The user can apply various operations on such files like add, modify, delete or even remove the entire file.
Directory files:
 These files contain list of file names and other information related to these files.
Special files:
 These files are also known as device files.
 These files represent physical device like disks, terminals, printers, networks, tape drive etc.
 These files are of two types −
 Character special files − data is handled character by character as in case of terminals or printers.
 Block special files − data is handled in blocks as in the case of disks and tapes.
File Access Mechanisms:
File access mechanism refers to the manner in which the records of a file may be accessed.
There are several ways to access files −
 Sequential access
 Direct/Random access
 Indexed sequential access
Sequential access:
 A sequential access is that in which the records are accessed in some sequence, i.e., the
information in the file is processed in order, one record after the other.
 This access method is the most primitive one. Example: Compilers usually access files in this
fashion.

Direct/Random access:
 Random access file organization provides accessing the records directly.
 Each record has its own address on the file, with the help of which it can be directly accessed
for reading or writing.
 The records need not be in any sequence within the file and they need not be in adjacent locations on
the storage medium.
Indexed sequential access:
 This mechanism is built on top of sequential access.
 An index is created for each file which contains pointers to various blocks.
 Index is searched sequentially and its pointer is used to access the file directly.
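As a small sketch of indexed sequential access (the keys, records and block layout below are invented for illustration), the index is scanned to find the right block, and then that block is read directly:

```python
import bisect

# Invented sample data: each data block holds sorted (key, record) pairs,
# and the index records the first key stored in each block.
blocks = [
    [(100, "a"), (150, "b")],
    [(200, "c"), (250, "d")],
    [(300, "e"), (350, "f")],
]
index_keys = [100, 200, 300]   # first key of each block

def lookup(key):
    i = bisect.bisect_right(index_keys, key) - 1   # search the index
    if i < 0:
        return None                                # key precedes every block
    for k, record in blocks[i]:                    # direct access to one block
        if k == key:
            return record
    return None

print(lookup(250))  # "d"
```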
Space Allocation:
Files are allocated disk spaces by operating system. Operating systems deploy following three main
ways to allocate disk space to files.
 Contiguous Allocation
 Linked Allocation
 Indexed Allocation
Contiguous Allocation:
 Each file occupies a contiguous address space on disk.
 Assigned disk address is in linear order.
 Easy to implement.
 External fragmentation is a major issue with this type of allocation technique.
Linked Allocation:
 Each file carries a list of links to disk blocks.
 Directory contains link / pointer to first block of a file.
 No external fragmentation
 Effectively used in sequential access file.
 Inefficient in case of direct access file.
Indexed Allocation:
 Provides solutions to problems of contiguous and linked allocation.
 An index block is created having all pointers to files.
 Each file has its own index block which stores the addresses of disk space occupied by the file.
 Directory contains the addresses of index blocks of files.
Structures of Directory in Operating System:
What is a directory:
 Directory can be defined as the listing of the related files on the disk.
 The directory may store some or the entire file attributes.
 To take advantage of different file systems, a hard disk can be divided into a number of
partitions of different sizes.
 The partitions are also called volumes or mini disks.
 Each partition must have at least one directory in which, all the files of the partition can be listed.
 A directory entry is maintained for each file in the directory which stores all the information related to
that file.
A directory can be viewed as a file which contains the metadata of a bunch of files.
Every Directory supports a number of common operations on the file:
1. File Creation
2. Search for the file
3. File deletion
4. Renaming the file
5. Traversing Files
6. Listing of files
There are different types of directories:
1. Single-Level Directory
2. Two-Level Directory
3. Tree Structure/ Hierarchical Structure
4. Acyclic Graph Structure
5. General-Graph Directory Structure.
1.Single Level Directory:


 The simplest method is to have one big list of all the files on the disk.

 The entire system will contain only one directory which is supposed to mention all the files present in
the file system.

 The directory contains one entry per each file present on the file system.

This type of directory can be used in a simple system.
Advantages:
1. Implementation is very simple.
2. If the sizes of the files are very small then the searching becomes faster.
3. File creation, searching, deletion is very simple since we have only one directory.
Disadvantages:
1. We cannot have two files with the same name.
2. The directory may be very big therefore searching for a file may take so much time.
3. Protection cannot be implemented for multiple users.
4. There are no ways to group same kind of files.
5. Choosing a unique name for every file is complex; since most operating systems limit the number
of characters in a file name, this also limits the number of files in the system.

2.Two Level Directory:


 In two level directory systems, we can create a separate directory for each user.

 There is one master directory which contains separate directories dedicated to each user.

 For each user, there is a different directory present at the second level, containing group of user's file.

 The system doesn't let a user to enter in the other user's directory without permission.

Characteristics of two level directory system:


1. Each file has a path name of the form /user-name/file-name.
2. Different users can have the same file name.
3. Searching becomes more efficient as only one user's list needs to be traversed.
4. The same kind of files cannot be grouped into a single directory for a particular user.
Every operating system maintains a variable such as PWD which contains the present working
directory name (here, the present user's directory) so that searching can be done appropriately.
3.Tree Structured Directory:


 In Tree structured directory system, any directory entry can either be a file or sub directory.
 Tree structured directory system overcomes the drawbacks of two level directory system.
 The similar kind of files can now be grouped in one directory.
 Each user has their own directory and cannot enter another user's directory.
 However, a user has permission to read the root's data but cannot write to or modify it.
 Only administrator of the system has the complete access of root directory.
 Searching is more efficient in this directory structure.
 The concept of current working directory is used.
 A file can be accessed by two types of path, either relative or absolute.
 Absolute path is the path of the file with respect to the root directory of the system while relative path
is the path with respect to the current working directory of the system.
 In tree structured directory systems, the user is given the privilege to create the files as well as
directories.

Permissions on the file and directory:


 A tree structured directory system may consist of various levels therefore there is a set of permissions assigned to
each file and directory.
 The permissions are R W X which are regarding reading, writing and the execution of the files or directory.
 The permissions are assigned to three types of users: owner, group and others.
 There is an identification bit which differentiates between a directory and a file: in a long listing, the initial
character is d for a directory and a hyphen (-) for a regular file.
For example, in a Linux ls -l listing, an initial d shows that the entry is a directory.
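As an illustration (using Python's standard stat module, which is not part of the original notes), a numeric file mode can be rendered in the same d/rwx notation used by ls -l:

```python
import stat

# stat.filemode renders a numeric mode the way `ls -l` does: a leading 'd'
# for a directory or '-' for a regular file, then rwx triplets for the
# owner, group and others.
print(stat.filemode(0o40755))    # drwxr-xr-x  (a directory)
print(stat.filemode(0o100644))   # -rw-r--r--  (a regular file)
```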

4.Acyclic-Graph Structured Directories:


 The tree structured directory system doesn't allow the same file to exist in multiple directories,
therefore file sharing is a major limitation of the tree structured directory system.
 We can provide sharing by making the directory an acyclic graph.
 In this system, two or more directory entry can point to the same file or sub directory.
 That file or sub directory is shared between the two directory entries.
 These kinds of directory graphs can be made using links or aliases.
 We can have multiple paths for a same file.
 Links can either be symbolic (logical) or hard link (physical).
 If a file gets deleted in acyclic graph structured directory system, then
1. In the case of soft link, the file just gets deleted and we are left with a dangling pointer.
2. In the case of hard link, the actual file will be deleted only if all the references to it gets deleted.
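The two deletion cases above can be demonstrated on a POSIX system (a minimal sketch using Python's standard library; the file names are invented): after the original name is removed, a hard link still reaches the data, while a symbolic link is left dangling.

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    original = os.path.join(d, "file.txt")
    with open(original, "w") as f:
        f.write("shared data")

    hard = os.path.join(d, "hard.txt")
    soft = os.path.join(d, "soft.txt")
    os.link(original, hard)      # hard link: a second directory entry for the same inode
    os.symlink(original, soft)   # soft link: a new file that stores the original's path

    os.remove(original)          # delete the original directory entry

    with open(hard) as f:
        print(f.read())              # the data survives via the hard link
    print(os.path.exists(soft))      # False: the symbolic link now dangles
```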
5) General-Graph Directory Structure:


 Unlike the acyclic-graph directory, which avoids loops, the general-graph directory can have cycles,
meaning a directory can contain paths that loop back to the starting point.
 This can make navigating and managing files more complex.

For example, a cycle is formed if a directory under User 2 contains a link back to one of its ancestor
directories. While this structure offers more flexibility, it is also more complicated to implement.
Advantages of General-Graph Directory:
 More flexible than other directory structures.
 Allows cycles, meaning directories can loop back to each other.
Disadvantages of General-Graph Directory:
 More expensive to implement compared to other solutions.
 Requires garbage collection to manage and clean up unused files and directories.
Definition of File System:
 A file system is a way of organizing and managing files on a storage device, such as a
hard disk or a flash drive.

 It provides a logical structure to the physical storage space and allows users and
applications to access and manipulate the files.

 A file system typically consists of three components: files, directories, and file
metadata.
Importance of File System:
 The importance of a file system lies in its ability to provide a convenient and efficient way of storing
and retrieving files.
 Without a file system, users and applications would have to manage data in raw, unstructured formats,
making it difficult to organize and access.
 A well designed file system can improve data integrity, reduce data loss, and optimize data storage and
retrieval.
Types of File Systems


There are several types of file systems, including −
 FAT (File Allocation Table)
 NTFS (New Technology File System)
 HFS+ (Hierarchical File System Plus)
 ext4 (Fourth Extended File System)
 ZFS (Zettabyte File System)
Each type of file system has its own advantages and disadvantages, and the choice of file system often
depends on the specific use case and the operating system being used.
File System Components:
The components of a file system include:
 Files − A file is a unit of data storage that contains information, such as text, images, audio, or video.
 Directories − A directory is a container that stores files and other directories. It provides a way to organize files
into a hierarchical structure.
 File metadata − File metadata includes information about a file, such as its name, size, creation date,
modification date, and access permissions.
 File system operations − File system operations are the actions that can be performed on files and directories,
such as creating, moving, copying, deleting, and renaming.
 Together, these components form the basis of a file system, which provides a logical structure for storing and
accessing data on a storage device such as a hard disk or a flash drive.

File System Hierarchy:


The file system hierarchy is the organization of files and directories in a logical and hierarchical structure. It
provides a way to organize files and directories based on their purpose and location. Here are the main
components of the file system hierarchy −
 Root Directory − The root directory is the top-level directory in the file system hierarchy. It is represented by a
forward slash (/) and contains all other directories and files.
 Subdirectories − Subdirectories are directories that are located within other directories. They provide a way to
organize files into logical groups based on their purpose or location.
 File Paths − File paths are the routes that are used to locate files within the file system hierarchy. They consist
of a series of directory names separated by slashes, leading up to the file itself.
 File System Mounting − File system mounting is the process of making a file system available for use. When
a file system is mounted, its root directory is attached to a directory in the existing file system hierarchy, such as
the root directory.
The file system hierarchy is a fundamental concept in modern operating systems and provides a way to organize
and access files in a logical and efficient manner.
File Allocation Methods:
The allocation methods define how the files are stored in the disk blocks. There are three main
disk space or file allocation methods.
 Contiguous Allocation
 Linked Allocation
 Indexed Allocation
The main idea behind these methods is to provide:
 Efficient disk space utilization.
 Fast access to the file blocks.
All the three methods have their own advantages and disadvantages as discussed below:
1. Contiguous Allocation
Contiguous allocation is a method of storing files in which each file is allocated a contiguous block of storage
space on the storage device. This method allows for quick and efficient access to files, but it can lead to
fragmentation if files are frequently added and deleted.
The directory entry for a file with contiguous allocation contains
 Address of starting block
 Length of the allocated portion.
For example, a file ‘mail’ starting at block 19 with length = 6 blocks
occupies blocks 19, 20, 21, 22, 23 and 24.
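The block arithmetic is easy to express in code; a minimal sketch (the numbers mirror the ‘mail’ example above): under contiguous allocation, a logical block maps to a physical block by simple addition.

```python
def contiguous_block(start, length, logical):
    """Map a logical block number to a physical block under contiguous allocation."""
    if not 0 <= logical < length:
        raise IndexError("logical block outside the file")
    return start + logical          # physical address is simple arithmetic

# A file starting at block 19 with length 6 occupies blocks 19..24.
print([contiguous_block(19, 6, i) for i in range(6)])  # [19, 20, 21, 22, 23, 24]
```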

Advantages:
 It is very easy to implement.
 There is a minimum amount of seek time.
 The disk head movement is minimum.
 Memory access is faster.
 It supports sequential as well as direct access.
Disadvantages

 At the time of creation, the file size must be declared.
 Because the size is fixed at creation, the file cannot grow later.
 Due to its constrained allocation, the disk may fragment internally or externally.
2. Linked Allocation
Linked allocation is a method of storing files in which each file is divided into blocks that are scattered
throughout the storage device. Each block contains a pointer to the next block in the file. This method can help
prevent fragmentation, but it can also lead to slower access times due to the need to follow the links between
blocks.

Advantages

 There is no external fragmentation.


 The directory entry just needs the address of starting block.
 The memory is not needed in contiguous form, it is more flexible than contiguous file allocation.

Disadvantages

 It does not support random access or direct access.


 If a pointer is lost or damaged, the blocks it links to become inaccessible.
 Extra space is required for pointers in the block.
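A minimal sketch of following a linked-allocation chain (the block numbers are invented; -1 marks end of file, in the style of a FAT chain):

```python
# Each entry maps a block to the next block of the same file.
next_block = {9: 16, 16: 1, 1: 10, 10: 25, 25: -1}

def file_blocks(start):
    """Collect a file's blocks by following the pointers from its first block."""
    blocks, b = [], start
    while b != -1:
        blocks.append(b)
        b = next_block[b]   # direct access is slow: reaching block k needs k hops
    return blocks

print(file_blocks(9))  # [9, 16, 1, 10, 25]
```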

3. Indexed Allocation
Indexed allocation is a method of storing files in which a separate index is maintained that contains a list of all
the blocks that make up each file. This method allows for quick access to files and helps prevent fragmentation,
but it requires additional overhead to maintain the index.
 The choice of file allocation method depends on the specific needs of the system and the type of storage device
being used.

Advantages

 It reduces the possibilities of external fragmentation.


 It provides direct access to blocks rather than only sequential access.

Disadvantages

 There is more pointer overhead.
 If we lose the index block, we cannot access the complete file.
 It is wasteful for small files.
 A single index block may not be able to hold all the pointers for some large files.
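A minimal sketch of indexed allocation (block addresses invented): the per-file index block turns any logical block number into one direct lookup, with no chain of pointers to follow.

```python
# The file's index block lists the physical addresses of its data blocks.
index_block = [9, 16, 1, 10, 25]

def physical_block(logical):
    """Resolve a logical block in one step using the index block."""
    return index_block[logical]

print(physical_block(3))  # 10
```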

File System Security:


File system security is the protection of files and directories from unauthorized access, modification, or
destruction. Here are some common methods for file system security −
 Access Control − Access control mechanisms, such as permissions and access control lists (ACLs), restrict who
can access files and directories and what actions they can perform on them.
 Encryption − Encryption is the process of converting data into a coded form to protect its confidentiality.
Encryption can be used to protect individual files or entire file systems.
 Backups − Backups are copies of files and directories that can be used to restore data in case of accidental
deletion, hardware failure, or other types of data loss. Regular backups help protect against data loss due to
security breaches.
 File System Auditing − File system auditing records events that occur on a file system, such as file access,
modification, or deletion. Auditing can help identify security breaches and monitor compliance with security
policies.
 Antivirus Software − Antivirus software detects and removes malware that can compromise the security of a
file system. Antivirus software can also protect against other types of threats, such as phishing and ransomware
attacks.
 File system security is an important aspect of overall system security, and it requires a combination of technical
measures and policies to ensure the protection of sensitive data.
File System Maintenance:
File system maintenance refers to the process of keeping a file system running smoothly and efficiently. Here
are some common tasks involved in file system maintenance −
 Disk Cleaning − Disk cleaning involves removing temporary files, cache files, and other unnecessary files from
the file system. This helps free up disk space and can improve system performance.
 Disk Defragmentation − Disk defragmentation is the process of rearranging the data on a storage device to
optimize performance. Defragmentation helps reduce file fragmentation and can improve system performance.
 Error Checking − Error checking involves scanning the file system for errors, such as bad sectors, corrupted
files, and other issues. Error checking can help identify and resolve issues that could lead to data loss or system
instability.
 Backup and Restore − Backup and restore involves creating backups of critical files and directories and
restoring them in case of data loss or other issues. Regular backups are essential for ensuring the availability and
integrity of data.
 Updating Software − Updating software, including the operating system and file system, is essential for
maintaining system security and compatibility with new hardware and software.
File System Performance:
File system performance refers to the speed and efficiency with which a file system can read, write,
and access data. Here are some factors that can affect file system performance −
 File System Type − Different types of file systems have different performance characteristics. For example,
some file systems may be optimized for speed, while others may prioritize data reliability or data security.
 File System Size − Larger file systems may require more time to index and search, which can impact
performance. File system fragmentation can also impact performance, as fragmented files may require more
time to read or write.
 Hardware Configuration − The hardware configuration of a system, including the type and speed of the
storage device and the amount of memory, can have a significant impact on file system performance.
 Network Performance − When files are accessed over a network, the speed and reliability of the network can
impact file system performance.
 Application Design − The design of applications that access the file system can also impact performance.
Applications that access large files or read and write files frequently can impact file system performance.
File System Mounting:
 Mounting is the process of attaching a file system to a directory in the operating system's file
system hierarchy.
 This allows the operating system to access the files and directories on the mounted file
system.
 Mounting is a way to integrate a new file system into the existing file system hierarchy.
 When a file system is mounted, its root directory becomes a subdirectory of the existing file
system hierarchy.
Why is Mounting Needed?
Mounting is needed to:
1. Access Data: Mounting allows the operating system to access data on a file system.
2. Integrate File Systems: Mounting integrates multiple file systems into a single file system
hierarchy.
3. Provide a Unified View: Mounting provides a unified view of all file systems, making it easier
to manage and access data.
How Does Mounting Work?
The mounting process involves the following steps:
1. Device Identification: The operating system identifies the device that contains the file system to
be mounted.
2. File System Type: The operating system determines the type of file system on the device (e.g.,
ext4, NTFS, etc.).
3. Mount Point: The operating system creates a mount point, which is a directory in the file system
hierarchy where the mounted file system will be accessible.
4. Mounting: The operating system mounts the file system to the mount point, making the files and
directories on the mounted file system accessible.
File System Mounting Options:
1. Read-Only: The file system is mounted in read-only mode, preventing any modifications to the
file system.
2. Read-Write: The file system is mounted in read-write mode, allowing modifications to the file
system.
3. Noexec: The file system is mounted with the noexec option, preventing any executable files on the
file system from being executed.
4. Nosuid: The file system is mounted with the nosuid option, preventing any setuid or setgid bits on
the file system from being honored.
Mounting Commands:
Common mounting commands include:
1. mount: Mounts a file system.
2. umount: Unmounts a file system.
3. fstab: Configures file system mounting at boot time.
Partitions and mounting:
 Partitions and mounting are two related concepts in operating systems that deal with the
organization and access of data on storage devices.
 A partition is a logical division of a storage device, such as a hard drive or solid-state drive
(SSD), into separate areas for storing data.
 Each partition can have its own file system, and the operating system can treat each partition
as a separate device.
Types of Partitions:
1. Primary Partition: A primary partition is a partition that can be used to boot an operating
system.
2. Extended Partition: An extended partition is a partition that can be further divided into logical
partitions.
3. Logical Partition: A logical partition is a partition that is created within an extended partition.
Mounting:
Mounting is the process of attaching a file system to a directory in the operating system's file system
hierarchy. This allows the operating system to access the files and directories on the mounted file
system.
Types of Mounting:
1. Local Mount: A local mount is when a file system on a local device is mounted to the file system
hierarchy.
2. Remote Mount: A remote mount is when a file system on a remote device is mounted to the file
system hierarchy.
3. Virtual Mount: A virtual mount is when a file system is mounted to a virtual device.
Mounting Process:
1. Device Identification: The operating system identifies the device that contains the file system to be
mounted.
2. File System Type: The operating system determines the type of file system on the device (e.g.,
ext4, NTFS, etc.).
3. Mount Point: The operating system creates a mount point, which is a directory in the file system
hierarchy where the mounted file system will be accessible.
4. Mounting: The operating system mounts the file system to the mount point, making the files and
directories on the mounted file system accessible.
Benefits of Partitions and Mounting:
1. Organization: Partitions and mounting help to organize data on storage devices, making it easier to
manage and access.
2. Flexibility: Partitions and mounting provide flexibility in terms of file system choice and
configuration.
3. Security: Partitions and mounting can help to improve security by isolating sensitive data and
limiting access to certain file systems.
TOOLS:
1. fdisk: command-line partitioning tool on Linux/UNIX.
2. Disk Utility: graphical tool on macOS.
3. Disk Management: graphical tool on Windows.
Common Commands for Partitions and Mounting:
1. fdisk: A command-line utility for creating and managing partitions.
2. mkfs: A command-line utility for creating file systems on partitions.
3. mount: A command-line utility for mounting file systems to the file system hierarchy.
4. umount: A command-line utility for unmounting file systems from the file system hierarchy.
File Sharing :
 File sharing is the process of allowing multiple users or processes to access and share files on
a computer or network.
 In an operating system (OS), file sharing is an essential feature that enables collaboration,
communication, and data exchange among users.
Types of File Sharing:
1. Local File Sharing: Sharing files between users on the same computer.
2. Network File Sharing: Sharing files between computers on a network.
3. Remote File Sharing: Sharing files between computers over the internet.
File Sharing Models:
1. Shared Folder Model: A shared folder is created, and users are granted access to it.
2. Peer-to-Peer Model: Users share files directly with each other without a central server.
3. Client-Server Model: A central server manages file sharing, and clients access files from the server.
File Sharing Protocols:
1. SMB (Server Message Block): A protocol for sharing files and printers on a network.
2. NFS (Network File System): A protocol for sharing files on a network.
3. FTP (File Transfer Protocol): A protocol for transferring files over the internet.
File Sharing in Operating Systems:
1. Windows: Windows provides file sharing through the Shared Folders feature and the SMB
protocol.
2. Linux: Linux provides file sharing through the NFS protocol and the Samba software.
3. macOS: macOS provides file sharing through the Shared Folders feature and the SMB protocol.
File Sharing Security:
1. Access Control: Controlling access to shared files and folders through permissions and access
control lists (ACLs).
2. Encryption: Encrypting shared files to protect them from unauthorized access.
3. Authentication: Authenticating users before allowing them to access shared files.
File Sharing Benefits:
1. Collaboration: File sharing enables collaboration among users by allowing them to access and
share files.
2. Convenience: File sharing provides a convenient way to share files without having to physically
transfer them.
3. Productivity: File sharing can improve productivity by enabling users to access and share files
quickly and easily.
Goals of Protection in OS:
The primary goals of protection in an operating system (OS) are to ensure the security, integrity, and
availability of computer resources, including hardware, software, and data.
Main Goals of Protection:
1. Confidentiality: Protect sensitive information from unauthorized access, disclosure, or theft.
2. Integrity: Ensure that data and programs are not modified without authorization.
3. Availability: Ensure that computer resources are accessible and usable when needed.
4. Authenticity: Verify the identity of users, devices, and processes to ensure that only authorized
entities access resources.
5. Accountability: Track and record user activities to ensure that users are held responsible for their
actions.
Additional Goals:
1. Access Control: Regulate access to resources based on user identity, role, or privilege.
2. Data Protection: Protect data from unauthorized access, modification, or destruction.
3. System Integrity: Ensure that the OS and its components are not compromised or modified without
authorization.
4. Resource Allocation: Manage resource allocation to prevent unauthorized access or misuse.
5. Error Detection and Recovery: Detect and recover from errors or security breaches.
Protection Mechanisms:
1. Access Control Lists (ACLs): Define access permissions for users and groups.
2. Encryption: Protect data confidentiality and integrity.
3. Firewalls: Control incoming and outgoing network traffic.
4. Authentication and Authorization: Verify user identities and grant access to resources.
5. Auditing and Logging: Track user activities and system events.
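As a hedged illustration of the first mechanism, an access-control list can be modelled as a mapping from objects to per-user permission sets. The file names, users, and permission strings below are invented for the example.

```python
# Minimal ACL sketch: each file maps users to the set of actions
# they are permitted to perform on it.

acl = {
    "payroll.txt": {"alice": {"read", "write"}, "bob": {"read"}},
    "notes.txt":   {"bob": {"read", "write"}},
}

def is_allowed(user, filename, action):
    # Default-deny: access is granted only if explicitly listed.
    return action in acl.get(filename, {}).get(user, set())

print(is_allowed("bob", "payroll.txt", "read"))   # True
print(is_allowed("bob", "payroll.txt", "write"))  # False
```

The default-deny lookup reflects the confidentiality and integrity goals listed earlier: anything not explicitly granted is refused.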
Benefits of Protection:
1. Security: Protect against unauthorized access, malware, and other security threats.
2. Reliability: Ensure system stability and availability.
3. Compliance: Meet regulatory and industry standards for data protection and security.
4. Accountability: Hold users responsible for their actions.
5. Trust: Establish trust among users, organizations, and systems.
Principles of Protection:
The principles of protection in an operating system (OS) are designed to ensure the security, integrity,
and availability of computer resources, including hardware, software, and data.
1. Least Privilege: Grant users and processes only the privileges necessary to perform their tasks.
2. Separation of Privilege: Separate privileges into distinct categories to prevent unauthorized access.
3. Access Control: Regulate access to resources based on user identity, role, or privilege.
4. Authentication: Verify the identity of users, devices, and processes before granting access.
5. Authorization: Grant access to resources based on user identity, role, or privilege.
6. Accountability: Track and record user activities to ensure that users are held responsible for their
actions.
7. Data Protection: Protect data from unauthorized access, modification, or destruction.
8. System Integrity: Ensure that the OS and its components are not compromised or modified without
authorization.
Types of Protection:
1. Memory Protection: Protect memory from unauthorized access or modification.
2. File Protection: Protect files from unauthorized access, modification, or destruction.
3. I/O Protection: Protect input/output (I/O) operations from unauthorized access or modification.
4. CPU Protection: Protect the central processing unit (CPU) from unauthorized access or
modification.

Protection Mechanisms:
1. Access Control Lists (ACLs): Define access permissions for users and groups.
2. Encryption: Protect data confidentiality and integrity.
3. Firewalls: Control incoming and outgoing network traffic.
4. Authentication and Authorization: Verify user identities and grant access to resources.
5. Auditing and Logging: Track user activities and system events.
Benefits of Protection:
1. Security: Protect against unauthorized access, malware, and other security threats.
2. Reliability: Ensure system stability and availability.
3. Compliance: Meet regulatory and industry standards for data protection and security.
4. Accountability: Hold users responsible for their actions.
5. Trust: Establish trust among users, organizations, and systems.
Protection levels:
Protection levels in an operating system (OS) refer to the different levels of access control and
security that can be applied to resources such as files, directories, and devices.
Types of Protection Levels:
1. User Mode: A protection level that allows users to access only their own resources and prevents
them from accessing system resources.
2. Supervisor Mode: A protection level that allows the operating system to access all resources and execute privileged instructions.
3. Kernel Mode: A protection level at which the operating system kernel runs with full access to all resources; on many architectures, kernel mode and supervisor mode are the same thing.
In computer science, the ordered protection domains are referred to as Protection Rings. These mechanisms improve fault tolerance and provide computer security. Rings are arranged hierarchically from most privileged to least privileged, and each ring defines a level at which resources may be accessed.
Use of Protection Rings: Protection rings provide a logical space for the levels of permission and execution. Two important uses of protection rings are:

1. Improving fault tolerance
2. Providing computer security
Levels of Protection Rings: There are typically four levels, ranging from ring 0 (most privileged) to ring 3 (least privileged). Most operating systems run the kernel or executive at ring 0 and application programs at ring 3. A resource that is accessible at level n is also accessible at levels 0 to n; these privilege levels are the rings.
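The hierarchical rule above — a resource accessible at level n is also accessible at levels 0 to n — can be expressed in a few lines of Python (the ring names are illustrative):

```python
# Sketch of the ring-access rule: lower ring number = more privileged,
# and a resource assigned to ring n is reachable from rings 0..n.

RINGS = {0: "kernel", 1: "drivers", 2: "system services", 3: "applications"}

def can_access(caller_ring, resource_ring):
    # A caller may access any resource at its own ring or at a less
    # privileged (higher-numbered) ring.
    return caller_ring <= resource_ring

print(can_access(0, 3))  # True: the kernel can reach application resources
print(can_access(3, 0))  # False: applications cannot reach ring-0 resources
```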
Protection Rings

Modes of Protection Rings: There are two main modes, Supervisor Mode and Hypervisor Mode, explained briefly below.
1. Supervisor Mode : Supervisor Mode is an execution mode on some processors that allows execution of all instructions, including privileged ones. It also gives access to a different address space, to the memory-management hardware, and to other peripherals. The operating system usually runs in this mode.
2. Hypervisor Mode : Modern x86 CPUs offer virtualization instructions that let a hypervisor control "Ring 0" hardware access. To support virtualization, Intel VT and AMD Pacifica introduce a new privilege level below Ring 0, often called "Ring −1", along with nine new machine-code instructions that work only at Ring −1 and are intended for use by the hypervisor.
Domain of Protection in OS:
The domain of protection in an operating system (OS) refers to the scope or range of protection
provided to resources such as files, directories, and devices.
Types of Domains:
1. User Domain: A domain that includes all resources owned by a specific user.
2. Group Domain: A domain that includes all resources shared by a group of users.
3. System Domain: A domain that includes all system resources, such as devices and system files.
Domain of Protection:
The domain of protection includes:
1. Subjects: Users, processes, and threads that access resources.
2. Objects: Resources such as files, directories, and devices.
3. Access Rights: Permissions that define what actions can be performed on objects.
Domain of Protection :
 Protection policies limit the access each process has to resources. A process may use only the resources it requires to complete its task, only for the time it requires them, and only in the mode in which they are required. That is the protection domain of a process.
 A computer system has processes and objects, which are treated as abstract data types, and
these objects have operations specific to them. A domain element is described as <object,
{set of operations on object}>.
 Each domain consists of a set of objects and the operations that can be performed on them. A domain may correspond to a process, a procedure, or a user; if a domain corresponds to a procedure, then switching domains means changing the procedure ID. When two domains include a common operation on the same object, the domains overlap.
Association between process and domain :
Processes switch from one domain to other when they have the access right to do so. It can be
of two types as follows.
1. Fixed or static –
In a static association, all the access rights a process will ever need are granted at the very beginning. This leaves processes holding many rights they rarely use, so a way of changing the contents of a domain dynamically is preferable.

2. Changing or dynamic –
In a dynamic association, a process can switch from one domain to another at run time, creating a new domain in the process if need be.
Security Measures :
 Security measures at different levels are taken against malpractices, such as no person
should be allowed on the premises or allowed access to the systems.
 The network used for transferring files must be secure at all times. No rogue software should be able to extract information from the network during a transfer; such eavesdropping is known as network sniffing, and it can be prevented by using encrypted channels for data transfer. The OS must also be able to resist forceful or even accidental violations.
 Common ways of authenticating are a username/password combination, a fingerprint or retina scan, or a user (smart) card to access the system.
 Passwords are a good method of authentication, but they are also one of the most common and most vulnerable. Weak passwords are easy to crack, and even strong passwords can be compromised by network sniffing, as mentioned above, or by access being shared among multiple users.
Security Authentication:
To make passwords strong and a formidable authentication source, one time passwords,
encrypted passwords and Cryptography are used as follows.
1. One-Time Passwords –
A one-time password is unique at every login instance. It works as a pair of values combined to grant access: the system generates a random number and the user provides a complementary one, or the system and the user each derive a random number from a shared algorithm and match the output through a common function, and a match grants access.
2. Encrypted Passwords –
Encrypted passwords are another way to authenticate access. The password is transferred and checked over the network in encrypted form, so it can pass through without being read if intercepted.
3. Cryptography –
Cryptography is another method of ensuring that data transferred over a network is not available to unauthorized users, allowing data transfer with full protection. It protects data by introducing the concept of a key: when a user sends data, it is encoded on a computer possessing the key, and the receiver must decode it using the very same key. Thus, even if the data is stolen mid-way, there is still a high chance the unauthorized user cannot read it.

SHORT ANSWER IMPORTANT QUESTIONS

1.List out the services provided by an operating system.

2. List Fields of Process Control Block.

3. What is Virtual Address Space?

4. What is Resource-Allocation-Graph?

5. What are the two ways of accessing disk storage?

6.What is activity stack in android?

7. List out the types of System calls.

8. What is Multi-Threading?

9. What is the Cause of Thrashing?

10. What is Process Synchronization?

11.What is a device driver

12.What is the Dalvik Virtual machine in Android?

13.Define Operating System.

14.What is Process control block?

15.Differentiate between Logical and Physical address space.

16.State the Critical Section problem.

17.What are the most common attributes that are associated with an opened file?

18. What is an inode in LINUX?

19. Draw the Layered structure of Operating system.

20. When a process creates a new process, what is shared between parent process and child
process?

21. List the disadvantages of single contiguous memory allocation.

22. What is Counting semaphore?

23. Write about Master File Directory in two-level directory structure.

24. What are Synchronous and Asynchronous interrupts in LINUX?

25. Relate boot blocks and booting in OS SYS generation.

26. Explain various models of multithreading.

27. Discuss Resource-Request Algorithm with respect to deadlock.

28.Write short notes on File operations and types.

29. List out the various interrupts in LINUX.

30.Explain the importance of Real-Time Embedded systems.

31. Define Cooperating process?

32. What is the environment need in Cooperating processes?

33. What is Critical Section Problem?

34. Write the difference between internal and external fragmentation.

35. Define the Safe, unsafe, and deadlock state spaces.

36. List the components of LINUX?

37. What are Operating-System Services?

38. Identify the situations for Pre-emption of a process.

39. Define Busy Waiting? How to overcome busy waiting using Semaphore operations.

40. What is Deadlock?

41. Write short note on demand paging.

42. What are the various attributes that are associated with an opened file?

43. Define Operating System.

44. Define System Call? List out any four Process Control System Calls.

45. What are the functions of mutex semaphore?

46. What is meant by hit ratio? Explain.

47. Explain how multiprogramming increases the utilization of CPU?

48. Define short-term, medium-term, and long-term scheduling.

49. Distinguish between counting and binary semaphores.

50. Define dead locks with example.

IMPORTANT LONG ANSWER QUESTIONS :


1. Summarize the Classifications and functions of Operating System.

2. Explain the activities of Operating System in connection with Memory and Process management.
3. Draw and explain five state process model.
4.Write the important characteristics of Round Robin Scheduling algorithm.
And demonstrate its performance for the following workload in a system with
time quantum = 2 units.
Consider the set of 6 processes whose arrival time and burst time are given
below
Process Id Arrival time Burst time
P1 5 5
P2 4 6
P3 3 7
P4 1 9
P5 2 2
P6 6 3
Draw a Gantt chart illustrating the execution of these jobs and also Calculate
the average waiting and average turnaround times
5. What are the causes for External and Internal fragmentation? Suggest
solutions to the fragmentation problem.
6. Explain the LRU and Optimal page replacement algorithms.
7.Discuss various approaches for breaking a Deadlock.
8. What is meant by Starvation in Dining philosopher problem? Suggest a
solution to solve this problem using Semaphores.
9. Explain Indexed file allocation method and discuss its advantages and
disadvantages.
10.Briefly discuss various Disk scheduling algorithms
11.With a neat diagram, explain the layered structure of UNIX operating system.
12.What are the advantages and disadvantages of using the same system call interface for
manipulating both files and devices?
13.What is a process? Explain about various fields of Process Control Block.
14. What are the advantages of inter-process communication? How communication takes place in a
shared-memory environment?
15.What is a Critical Section problem? Give the conditions that a solution to the critical section
problem must satisfy.
16.What is Dining Philosophers problem? Discuss the solution to Dining philosopher’s problem using
monitors.
17.What is a Virtual Memory? Discuss the benefits of virtual memory technique.
18.What is Thrashing? What is the cause of Thrashing? How does the system detect Thrashing? What
can the system do to eliminate this problem?
19.What is a deadlock? How deadlocks are detected?
20. Explain the Resource-Allocation-Graph algorithm for deadlock avoidance.
21. Briefly explain about single-level, two-level and Tree-Structured directories.
22. Explain and compare the SCAN and C-SCAN disk scheduling algorithms.
23. Describe the features of a distributed operating system.
24. What is a scheduler? List and describe different types of schedulers.
25. Write in detail about the thread libraries.
26.Present producer-consumer problem. Explain how to solve it.
27.Distinguish between counting and binary semaphores. Show when does the semaphore definition
requires busy waiting. Suggest a solution to overcome this problem.
28.Consider the reference string: 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1 for a memory with
three frames. Trace FIFO, optimal, and LRU page replacement algorithms.
29. Discuss in detail about various page table structures.
30. Explain in detail about deadlock detection techniques.

31. Explain how to recover the system from a deadlock.


32.How to provide protection to a file system? Explain.
33. Write in detail about the on-disk and in-memory structures used to implement a file system.
34.What is a semaphore? List the types of semaphores and Show that, if the wait() and signal()
semaphore operations are not executed atomically, then mutual exclusion may be violated.
35.Discuss the Bounded-Buffer problem.
36. What is a page fault? Explain the steps involved in handling a page fault with a neat sketch.
37. Consider the following page reference string: 1,2,3,4,2,1,5,6,2,1,2,3,7,6,3,2,1,2,3,6 How many
page faults would occur for the optimal page replacement algorithm, assuming three frames and all
frames are initially empty.
38. Write about deadlock conditions and bankers algorithm in detail.
39. Discuss various techniques to recover from the deadlock.
40.Write in detail about file attributes, operations and types and structures.
41. Explain in detail about various ways of accessing disk storage.

103
ASSISTANT PROFESSOR OF CSE CH.SUNEETHA

You might also like

pFad - Phonifier reborn

Pfad - The Proxy pFad of © 2024 Garber Painting. All rights reserved.

Note: This service is not intended for secure transactions such as banking, social media, email, or purchasing. Use at your own risk. We assume no liability whatsoever for broken pages.


Alternative Proxies:

Alternative Proxy

pFad Proxy

pFad v3 Proxy

pFad v4 Proxy