Unit-4 Operating system
Deadlock.
A deadlock is a situation where a set of processes is blocked because each process is
holding a resource and waiting for another resource acquired by some other process.
This section discusses deadlock, its necessary conditions, and the strategies for handling it.
Key concepts include mutual exclusion, resource holding, circular wait, and no
preemption.
Consider the example of two trains approaching each other on a single track: once they
are face to face, neither can move. This is a practical example of deadlock.
P0:          P1:
  wait(A);     wait(B);
  wait(B);     wait(A);
Similarly, assume 200 KB of memory is available for allocation, and the following
sequence of requests occurs:

P0:               P1:
  Request 80 KB;    Request 70 KB;
  Request 60 KB;    Request 80 KB;

After the first requests, 150 KB is allocated; the second requests need a further
140 KB, but only 50 KB remains, so both processes block and deadlock occurs.
Let's explain all four conditions related to deadlock in the context of the scenario with two
processes and two resources:
1. Mutual Exclusion
2. Hold and Wait
3. No Preemption
4. Circular Wait
1. Mutual Exclusion
The mutual exclusion condition requires that at least one resource be held in a non-shareable
mode, which means that only one process can use the resource at any given time. In our
scenario, both Resource 1 and Resource 2 are non-shareable, and only one process can
have exclusive access to each resource at any given time.
2. Hold and Wait
The hold and wait condition specifies that a process must be holding at least one resource
while waiting for additional resources that are currently held by other processes. In our
example, both processes hold one resource while waiting for the other, satisfying the hold
and wait condition.
3. No Preemption
Preemption is the act of taking a resource from a process before it has finished its task.
According to the no preemption condition, resources cannot be taken forcibly from a
process; a process can only release its resources voluntarily after completing its task.
For example, suppose process P1 holds resource R1 and requests R2, which is held by
process P2. P1 cannot preempt R2; it must wait until P2 finishes its execution. P1 may
later restart by requesting both R1 and R2 again.
4. Circular Wait
Circular wait is a condition in which a set of processes are waiting for resources in such a
way that there is a circular chain, with each process in the chain holding a resource that the
next process needs. This is one of the necessary conditions for a deadlock to occur in a
system.
Example: Imagine four processes P1, P2, P3, and P4 and four resources R1, R2, R3,
and R4.
P1 holds R1 and waits for R2 (held by P2), P2 holds R2 and waits for R3 (held by P3),
P3 holds R3 and waits for R4 (held by P4), and P4 holds R4 and waits for R1 (held by
P1). The chain closes on itself, so none of the processes can proceed.
1. Deadlock Prevention
The strategy of deadlock prevention is to design the system in such a way that the possibility of
deadlock is excluded. Indirect methods prevent the occurrence of one of the three necessary
conditions of deadlock (mutual exclusion, no preemption, hold and wait), while the direct
method prevents the occurrence of circular wait. Each condition is considered in turn below.
It is not possible to violate mutual exclusion because some resources, such as the tape drive,
are inherently non-shareable. For other resources, like printers, we can use a technique
called Spooling (Simultaneous Peripheral Operations Online).
In spooling, when multiple processes request the printer, their jobs (instructions of the
processes that require printer access) are added to the queue in the spooler directory. The
printer is allocated to jobs on a First-Come, First-Served (FCFS) basis. In this way, a process
does not have to wait for the printer and can continue its work after adding its job to the
queue.
Eliminate Hold and Wait
Hold and wait is a condition in which a process holds one resource while simultaneously
waiting for another resource that is being held by a different process. The process cannot
continue until it gets all the required resources.
By eliminating wait: The process specifies the resources it requires in advance so that it
does not have to wait for allocation after execution starts.
For Example, Process1 declares in advance that it requires both Resource1 and
Resource2.
By eliminating hold: The process has to release all resources it is currently holding
before making a new request.
For Example: Process1 must release Resource2 and Resource3 before requesting
Resource1.
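The all-or-nothing allocation described above can be sketched as follows. This is an illustrative sketch, not a real OS allocator: the resource names `R1`-`R3` and the `acquire_all` helper are hypothetical, and `threading.Lock` stands in for single-instance resources.

```python
import threading

# Hypothetical single-instance resources, modeled as locks.
locks = {"R1": threading.Lock(), "R2": threading.Lock(), "R3": threading.Lock()}

def acquire_all(names):
    """All-or-nothing allocation: either every requested resource is
    granted, or none is held, so hold-and-wait can never arise."""
    taken = []
    for name in names:
        if locks[name].acquire(blocking=False):
            taken.append(name)
        else:
            for held in taken:          # roll back the partial allocation
                locks[held].release()
            return False
    return True

ok = acquire_all(["R1", "R2"])      # both free: granted
denied = acquire_all(["R2", "R3"])  # R2 busy: denied, and R3 is NOT held
print(ok, denied)  # True False
```

The key point is the rollback loop: on failure the process ends up holding nothing, so it can never sit on one resource while waiting for another.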
Eliminate No Preemption
Preemption is temporarily interrupting an executing task and later resuming it. Two ways to
eliminate No Preemption:
Processes must release resources voluntarily: A process should only give up resources it
holds when it completes its task or no longer needs them.
Avoid partial allocation: Allocate all required resources to a process at once before it
begins execution. If not all resources are available, the process must wait.
Eliminate Circular Wait
To eliminate circular wait, we can impose an ordering on resource allocation: each resource
type is assigned a number, and every process must request resources in increasing order.
This prevents circular chains of waiting processes, as no process can request a resource
ranked lower than one it already holds.
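The resource-ordering idea can be sketched with ranked locks. The `LOCK_RANK` table and `ordered_acquire` helper here are hypothetical names, assuming every lock in the program is registered with a fixed rank.

```python
import threading

# Hypothetical rank table: every lock in the program gets a fixed number.
LOCK_RANK = {}

def ordered_acquire(*locks):
    """Acquire locks in ascending rank order, whatever order the caller
    names them in, so no process can wait for a lower-ranked resource
    while holding a higher-ranked one."""
    ordered = sorted(locks, key=lambda lk: LOCK_RANK[id(lk)])
    for lk in ordered:
        lk.acquire()
    return ordered

r1, r2 = threading.Lock(), threading.Lock()
LOCK_RANK[id(r1)], LOCK_RANK[id(r2)] = 1, 2

# Even a request written "r2 then r1" is granted as r1 then r2:
got = ordered_acquire(r2, r1)
print(got == [r1, r2])  # True
for lk in got:
    lk.release()
```

Because every process climbs the same ranking, a cycle of waits (which would need some process to wait "downhill") cannot form.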
2. Deadlock Avoidance
Deadlock avoidance ensures that a resource request is only granted if it won’t lead to deadlock,
either immediately or in the future. Since the kernel can’t predict future process behavior, it
uses a conservative approach. Each process declares the maximum number of resources it may
need. The kernel allows requests in stages, checking for potential deadlocks before granting
them. A request is granted only if no deadlock is possible; otherwise, it stays pending. This
approach is conservative, as a process may finish without using the maximum resources it
declared.
Banker’s Algorithm
Bankers’ Algorithm is a resource allocation and deadlock avoidance algorithm that tests all
resource requests made by processes. It checks for the safe state, and if granting a request
keeps the system in safe state, the request is allowed. However, if no safe state exists, the
request is denied.
A request can be considered only if it is less than or equal to the resources freely
available in the system; the algorithm then checks whether granting it would leave the
system in a safe state.
Timeouts
To avoid deadlocks caused by indefinite waiting, a timeout mechanism can be used to limit the
amount of time a process can wait for a resource. If the resource is unavailable within the
timeout period, the process can be forced to release its current resources and try again later.
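A minimal sketch of the timeout idea, using a lock as a stand-in for a single-instance resource (the `printer` name and `request_with_timeout` helper are hypothetical):

```python
import threading

printer = threading.Lock()   # a hypothetical single-instance resource

def request_with_timeout(lock, timeout):
    """Wait at most `timeout` seconds for the resource; a False result
    tells the caller to release what it holds and retry later."""
    return lock.acquire(timeout=timeout)

printer.acquire()                              # some other process holds it
granted = request_with_timeout(printer, 0.2)   # we give up after 0.2 s
print(granted)  # False: we timed out instead of waiting forever
```

Instead of blocking indefinitely (and possibly completing a circular wait), the requester gets a definite failure it can react to.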
Example

Total resources in the system:
      A  B  C  D
      6  5  7  6

Available resources:
      A  B  C  D
      3  1  1  2

Currently allocated resources:
      A  B  C  D
P1    1  2  2  1
P2    1  0  3  3
P3    1  2  1  0

Maximum resources each process may need:
      A  B  C  D
P1    3  3  2  2
P2    1  2  3  4
P3    1  3  5  0

Remaining need (Maximum - Allocated):
      A  B  C  D
P1    2  1  0  1
P2    0  2  0  1
P3    0  1  4  0
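The safety check at the heart of the Banker's Algorithm can be run on the example's numbers. This is a sketch of the standard safety algorithm (find a process whose remaining need fits in the available resources, let it finish and release everything, repeat):

```python
def is_safe(available, allocation, need):
    """Banker's safety check: returns (safe?, safe sequence)."""
    work = available[:]
    finished = [False] * len(allocation)
    sequence = []
    while len(sequence) < len(allocation):
        progressed = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Process i can run to completion and release its resources.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                sequence.append(f"P{i + 1}")
                progressed = True
        if not progressed:
            return False, sequence  # the remaining processes are stuck
    return True, sequence

available  = [3, 1, 1, 2]
allocation = [[1, 2, 2, 1], [1, 0, 3, 3], [1, 2, 1, 0]]
maximum    = [[3, 3, 2, 2], [1, 2, 3, 4], [1, 3, 5, 0]]
# Need = Maximum - Allocated, as in the tables above.
need = [[m - a for m, a in zip(mx, al)] for mx, al in zip(maximum, allocation)]

safe, seq = is_safe(available, allocation, need)
print(safe, seq)  # True ['P1', 'P2', 'P3']
```

With the given data, P1's need (2, 1, 0, 1) fits within the available (3, 1, 1, 2); after P1 finishes, P2 and then P3 can also finish, so the state is safe.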
3. Deadlock Detection
If Resources Have a Single Instance
In this case, we can run an algorithm to check for a cycle in the Resource Allocation
Graph. Since every resource has only one instance, the presence of a cycle in the graph is
a sufficient condition for deadlock.
For example, suppose resource 1 and resource 2 each have a single instance and the
resource allocation graph contains the cycle R1 → P1 → R2 → P2 → R1. Deadlock is
confirmed.
When resources have multiple instances, detection of a cycle is necessary but not
sufficient: the system may or may not be in deadlock, depending on the situation.
For systems with multiple instances of resources, algorithms like Banker’s Algorithm can be
adapted to periodically check for deadlocks.
The Wait-For Graph Algorithm is a deadlock detection algorithm used when each resource
has a single instance. The algorithm works by constructing a Wait-For Graph, a directed
graph whose edges record which process is waiting for which other process; a cycle in this
graph indicates a deadlock.
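Cycle detection on such a graph is a standard depth-first search. A sketch, representing the wait-for graph as a dictionary mapping each process to the processes it is waiting on:

```python
def has_cycle(wait_for):
    """DFS cycle detection on a wait-for graph given as
    {process: [processes it waits on]}. A cycle means deadlock."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on current path / done
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, []):
            if color.get(q, WHITE) == GRAY:      # back edge: cycle found
                return True
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in list(wait_for))

# P1 waits on P2 and P2 waits on P1: a cycle, hence deadlock.
print(has_cycle({"P1": ["P2"], "P2": ["P1"]}))   # True
print(has_cycle({"P1": ["P2"], "P2": []}))       # False
```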
4. Deadlock Recovery
Traditional operating systems such as Windows do not deal with deadlock recovery, as it is a
time- and space-consuming process; deadlock recovery is used mainly in real-time operating
systems. Common recovery techniques include:
Killing the Process: Either kill all the processes involved in the deadlock at once, or kill
them one by one, checking for deadlock after each kill and repeating until the system
recovers. Killing processes breaks the circular wait condition.
Process Rollback: Rollback deadlocked processes to a previously saved state where the
deadlock condition did not exist. It requires checkpointing to periodically save the state
of processes.
Resource Preemption: Resources are preempted from the processes involved in the
deadlock and allocated to other processes, so that the system may recover from the
deadlock. In this case, the preempted processes risk starvation.
Advantages of Deadlock Detection and Recovery
Improved System Stability: Deadlocks can cause system-wide stalls, and detecting and
resolving deadlocks can help to improve the stability of the system.
Better Resource Utilization: By detecting and resolving deadlocks, the operating system
can ensure that resources are efficiently utilized and that the system remains responsive
to user requests.
Better System Design: Deadlock detection and recovery algorithms can provide insight
into the behavior of the system and the relationships between processes and resources,
helping to inform and improve the design of the system.
Disadvantages of Deadlock Detection and Recovery
False Positives and Negatives: Deadlock detection algorithms are not perfect and may
produce false positives (reporting deadlocks that do not exist) or false negatives (failing
to detect deadlocks that do exist).
Risk of Data Loss: In some cases, recovery algorithms may require rolling back the state
of one or more processes, leading to data loss or corruption.
5. Deadlock Ignorance
Pretending that the problem does not exist at all ("stick your head in the sand") is called
the Ostrich Algorithm. It is the most widely used approach to deadlock and is typical of
single-user, end-user systems. If a deadlock does occur, the OS can simply be rebooted to
restore normal operation.
Scientists generally argue that the most rigorous way to deal with deadlock is deadlock
prevention. Engineers who build real systems, however, argue that prevention deserves
less attention because deadlocks occur very rarely: system failures, compiler errors,
programming bugs, and hardware crashes that happen every week deserve more attention
than a deadlock that happens once in years. Therefore most engineers do not spend much
effort on eliminating deadlock.
Many operating systems suffer from deadlocks that are never even detected. For example,
the number of processes is limited by the size of the process table; when the table is full,
fork fails, and the reasonable approach is for the new fork to wait and retry when a slot
becomes free. A similar problem arises when opening and closing files: the number of
simultaneously open files is limited by the i-node table, and the same situation occurs when
that table fills up. Swap space is another such limited resource; in fact, almost every table
in the operating system represents a finite resource. It could happen that a collection of n
processes each claims 1/n of the total and then each tries to claim another one. Should we
abort them all?
Most operating systems, including UNIX and Windows, simply ignore deadlock rather than
restrict what processes may do. Deadlock ignorance is common because, by this method,
deadlock is "handled" for free, instead of spending heavily on prevention methods that also
place inconvenient restrictions on processes. We thus have to choose between correctness
and convenience among the different methods of handling deadlock.
Ignoring the possibility of deadlock in operating systems can have both advantages and
disadvantages.
Advantages:
1. Simplicity: Ignoring the possibility of deadlock can make the design and
implementation of the operating system simpler and less complex.
2. Performance: Avoiding deadlock detection and recovery mechanisms can improve
the performance of the system, as these mechanisms can consume significant
system resources.
Disadvantages:
1. System crashes: If a deadlock does occur and the system is not prepared to handle it,
it can cause the entire system to crash, resulting in data loss and other problems.
2. Reduced availability: Deadlocks can cause processes to become blocked, which can
reduce the availability of the system and impact user experience.
In general, the disadvantages of ignoring deadlock outweigh the advantages. It is
important to implement effective deadlock prevention and recovery mechanisms in
operating systems to ensure the stability, reliability, and availability of the system.
2) Unix
UNIX is a multi-user, multitasking operating system invented in the late 1960s at AT&T's
Bell Laboratories. It was intended to be a strong, reliable, and versatile system, and was
initially targeted at servers, workstations, and academic systems.
3) Linux
Linux is a free and open-source operating system based on UNIX standards. It provides a
programming interface as well as user-interface compatibility with UNIX-based systems and
supports a large selection of applications. A Linux system also contains many independently
developed components, resulting in a UNIX-compatible operating system free from
proprietary code.
Windows vs. Unix vs. Linux

Interface:
  Windows: a menu-based operating system.
  Unix: a command-based operating system.
  Linux: a command-based operating system.

Developer:
  Windows: developed by Microsoft; first announced by Bill Gates on November 10, 1983.
  Unix: developed by AT&T Bell Labs, different commercial vendors, and non-profit organizations.
  Linux: open source; a large number of programmers work together online and contribute to its development.

Licensing:
  Windows: proprietary software owned by Microsoft.
  Unix: proprietary; requires a license to use.
  Linux: open-source software; can be used freely without any licensing fees.

GUI:
  Windows: has a graphical user interface, making it simpler to use.
  Unix: initially command-based; later a GUI called Common Desktop Environment was created, and most distributions now ship with Gnome.
  Linux: provides two main GUIs, KDE and Gnome, plus many other options such as LXDE, Xfce, Unity, and Mate.

File systems:
  Windows: File Allocation Table (FAT32) and New Technology File System (NTFS).
  Unix: jfs, gpfs, hfs, hfs+, ufs, xfs, zfs.
  Linux: Ext2, Ext3, Ext4, Jfs, ReiserFS, Xfs, Btrfs, FAT, FAT32, NTFS.

Processing model:
  Windows: supports multithreading.
  Unix: supports multiprocessing.
  Linux: a multi-user, multitasking OS.

Security:
  Windows: less secure compared to UNIX.
  Unix: more secure, as all changes to the system require explicit user permission.
  Linux: generally considered more secure than Windows.

Usage:
  Windows: a user-friendly OS that requires a paid license; preferred for its software compatibility and gaming performance.
  Unix: can only be utilized by its copyright holders.
  Linux: open source and freely accessible to everyone.

Versions:
  Windows: Windows 1.0-3.1, Windows 9x, Windows NT, Windows 10, Windows 11, etc.
  Unix: SunOS, Solaris, SCO UNIX, AIX, HP/UX, ULTRIX, etc.
  Linux: Ubuntu, Debian GNU, Arch Linux, etc.

Case sensitivity:
  Windows: case sensitivity is optional.
  Unix: fully case-sensitive; files differing only in case are separate files.
  Linux: also a fully case-sensitive operating system.
Disk Scheduling Algorithms In OS.
Disk scheduling algorithms are crucial in managing how data is read from and written to a
computer’s hard disk. These algorithms help determine the order in which disk read and write
requests are processed, significantly impacting the speed and efficiency of data access.
Common disk scheduling methods include First-Come, First-Served (FCFS), Shortest Seek Time
First (SSTF), SCAN, C-SCAN, LOOK, and C-LOOK. By understanding and implementing these
algorithms, we can optimize system performance and ensure faster data retrieval.
Disk scheduling is a technique operating systems use to manage the order in which disk
I/O (input/output) requests are processed.
Disk scheduling is also known as I/O Scheduling.
The main goals of disk scheduling are to optimize the performance of disk operations,
reduce the time it takes to access data and improve overall system efficiency.
1. FCFS (First-Come, First-Served)
In FCFS, the requests are addressed in the order in which they arrive in the disk queue.
Advantages of FCFS
Every request gets a fair chance
No indefinite postponement
Disadvantages of FCFS
Does not try to optimize seek time
May not provide the best possible service
2. SSTF (Shortest Seek Time First)
In SSTF, the request closest to the current head position is serviced first. This reduces the
average seek time but can starve requests that are far from the head.
3. SCAN
In the SCAN algorithm, the disk arm moves in a particular direction and services the
requests in its path; after reaching the end of the disk, it reverses its direction and
services the requests arriving in its path on the way back. The algorithm works like an
elevator and is hence also known as the elevator algorithm. As a result, requests in the
middle of the disk are serviced relatively quickly, while requests arriving just behind the
disk arm have to wait.
Example:
SCAN Algorithm
Suppose the requests to be addressed are 82, 170, 43, 140, 24, 16, 190. The Read/Write
arm is at 50, and it is also given that the disk arm should move "towards the larger
value".
Therefore, the total overhead movement (total distance covered by the disk arm) is
calculated as
= (199-50) + (199-16) = 332
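The calculation above can be checked with a small sketch that walks the head along the SCAN path (upward requests, then the last track 199, then the remaining requests on the way down). The `scan_distance` helper is illustrative, assuming a 200-track disk and movement toward larger values:

```python
def scan_distance(requests, head, disk_size=200):
    """SCAN moving toward larger values: service every request on the way
    up, touch the last track (disk_size - 1), then reverse and service
    the rest on the way down. Returns total head movement."""
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    path = up + ([disk_size - 1] if down else []) + down
    total, pos = 0, head
    for track in path:
        total += abs(track - pos)
        pos = track
    return total

moves = scan_distance([82, 170, 43, 140, 24, 16, 190], head=50)
print(moves)  # 332, matching (199 - 50) + (199 - 16)
```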
Advantages of SCAN Algorithm
High throughput
Low variance of response time
Average response time
Disadvantages of SCAN Algorithm
Long waiting time for requests for locations just visited by disk arm
4. C-SCAN
In the SCAN algorithm, the disk arm re-scans the path it has just scanned after reversing
its direction. So it may happen that too many requests are waiting at the other end, while
zero or only a few requests are pending in the area just scanned.
These situations are avoided in the C-SCAN algorithm, in which the disk arm, instead of
reversing its direction, jumps to the other end of the disk and starts servicing requests
from there. The disk arm thus moves in a circular fashion; since the algorithm is otherwise
similar to SCAN, it is known as C-SCAN (Circular SCAN).
Example:
Circular SCAN
Suppose the requests to be addressed are 82, 170, 43, 140, 24, 16, 190. The Read/Write
arm is at 50, and it is also given that the disk arm should move "towards the larger
value".
So, the total overhead movement (total distance covered by the disk arm) is calculated as:
=(199-50) + (199-0) + (43-0) = 391
Advantages of C-SCAN Algorithm
Here are some of the advantages of C-SCAN.
Provides more uniform wait time compared to SCAN.
5. LOOK
The LOOK algorithm is similar to the SCAN disk scheduling algorithm, except that the disk
arm, instead of travelling to the end of the disk, goes only as far as the last request to be
serviced in front of the head, and then reverses its direction from there. This avoids the
extra delay caused by unnecessary traversal to the end of the disk.
Example:
LOOK Algorithm
Suppose the requests to be addressed are 82, 170, 43, 140, 24, 16, 190. The Read/Write
arm is at 50, and it is also given that the disk arm should move "towards the larger
value".
So, the total overhead movement (total distance covered by the disk arm) is calculated as:
= (190-50) + (190-16) = 314
6. C-LOOK
Just as LOOK is the counterpart of SCAN, C-LOOK is the counterpart of the C-SCAN disk
scheduling algorithm. In C-LOOK, the disk arm goes only as far as the last request to be
serviced in front of the head, and then jumps to the last request at the other end instead
of travelling to the end of the disk. This also avoids the extra delay caused by unnecessary
traversal to the end of the disk.
Example:
Suppose the requests to be addressed are 82, 170, 43, 140, 24, 16, 190. The Read/Write
arm is at 50, and it is also given that the disk arm should move "towards the larger
value".
C-LOOK
So, the total overhead movement (total distance covered by the disk arm) is calculated as
= (190-50) + (190-16) + (43-16) = 341
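The C-LOOK movement can be checked the same way, with an illustrative helper that services the upward requests and then jumps to the lowest pending request (the jump itself is counted as head movement, as in the calculation above):

```python
def c_look_distance(requests, head):
    """C-LOOK moving toward larger values: service the requests above the
    head in order, then jump to the lowest pending request and sweep
    upward again; no trip to the physical ends of the disk."""
    up = sorted(r for r in requests if r >= head)
    down = sorted(r for r in requests if r < head)
    path = up + down   # the jump from max(up) to min(down) is counted too
    total, pos = 0, head
    for track in path:
        total += abs(track - pos)
        pos = track
    return total

moves = c_look_distance([82, 170, 43, 140, 24, 16, 190], head=50)
print(moves)  # 341, matching (190 - 50) + (190 - 16) + (43 - 16)
```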
7. RSS (Random Scheduling)
RSS stands for Random Scheduling: requests are picked in random order, as the name
suggests. It fits situations where scheduling involves random attributes such as random
processing times, random due dates, random weights, and stochastic machine breakdowns,
which is why it is usually used for analysis and simulation.
8. LIFO (Last-In First-Out)
In the LIFO (Last In, First Out) algorithm, the newest requests are serviced before older
ones: the job that arrived last is serviced first, then the next newest, and so on.
Advantages of LIFO (Last-In First-Out)
Maximizes locality and resource utilization
Disadvantages of LIFO (Last-In First-Out)
Can seem unfair to other requests; if new requests keep arriving, older requests may
starve.
9. N-STEP SCAN
It is also known as the N-STEP LOOK algorithm. In this, a buffer is created for N requests. All
requests belonging to a buffer will be serviced in one go. Also once the buffer is full no new
requests are kept in this buffer and are sent to another one. Now, when these N requests are
serviced, the time comes for another top N request and this way all get requests to get a
guaranteed service
Advantages of N-STEP SCAN
Here are some of the advantages of the N-Step Algorithm.
It eliminates the starvation of requests completely
10. F-SCAN
This algorithm uses two sub-queues. During a scan, all requests in the first queue are
serviced, while new incoming requests are added to the second queue. New requests are
held until all existing requests in the first queue have been serviced.
Advantages of F-SCAN
Here are some of the advantages of the F-SCAN Algorithm.
F-SCAN along with N-Step-SCAN prevents “arm stickiness” (phenomena in I/O
scheduling where the scheduling algorithm continues to service requests at or near the
current sector and thus prevents any seeking)
Each algorithm is unique in its own way. Overall Performance depends on the number and type
of requests.
Note: Average Rotational latency is generally taken as 1/2(Rotational latency).