Unit-4 Operating system

Deadlock occurs when processes are blocked, each holding a resource and waiting for another, characterized by mutual exclusion, hold and wait, no preemption, and circular wait. Handling deadlocks can be achieved through prevention, avoidance, detection, recovery, or ignorance, with techniques like the Banker’s Algorithm for avoidance and various recovery methods. Understanding and managing deadlocks is crucial for maintaining system stability and resource utilization.


Unit 4

 Deadlock {Whole concept of deadlock}

 Deadlock
A deadlock is a situation where a set of processes is blocked because each process is
holding a resource and waiting for another resource acquired by some other process. In
this unit, we will discuss deadlock, its necessary conditions, and its handling methods in detail.

 Deadlock is a situation in computing where two or more processes are unable to
proceed because each is waiting for the other to release resources.

 The key conditions are mutual exclusion, hold and wait, no preemption, and
circular wait.

Consider an example where two trains are coming toward each other on a single track:
once they are in front of each other, neither train can move. This is a practical
analogy for deadlock.

 How Does Deadlock occur in the Operating System?


Before going into detail about how deadlock occurs in the operating system, let's first
discuss how the operating system uses the resources present. A process in an operating
system uses resources in the following way:
 Requests a resource
 Uses the resource
 Releases the resource
A deadlock situation occurs when two or more processes hold some resources and wait for
resources held by the other(s). For example, suppose Process 1 is holding Resource 1 and
waiting for Resource 2, which is acquired by Process 2, while Process 2 is waiting for
Resource 1.
 Examples of Deadlock
There are several examples of deadlock. Some of them are mentioned below.
1. The system has 2 tape drives. P0 and P1 each hold one tape drive and each needs
another one.
2. Semaphores A and B, each initialized to 1. P0 and P1 can deadlock as follows:
 P0 executes wait(A) and is then preempted.
 P1 executes wait(B).
 Now P0 requests B and P1 requests A; both block, and P0 and P1 enter deadlock.

P0 P1
Wait(A); Wait(B);
Wait(B); Wait(A);
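The wait(A)/wait(B) interleaving above can be reproduced with Python's threading semaphores. This is an illustrative sketch (the thread names and the barrier are ours, not part of the original example): the barrier forces the fatal interleaving, and acquire timeouts let us observe the deadlock instead of hanging forever.

```python
import threading

# Semaphores A and B, each initialized to 1 (binary semaphores).
A = threading.Semaphore(1)
B = threading.Semaphore(1)

# The barrier forces the bad interleaving: each process grabs its first
# semaphore before either one tries for the second.
barrier = threading.Barrier(2)
results = {}

def p0():
    A.acquire()              # wait(A)
    barrier.wait()           # P0 is "preempted" here; P1 now runs wait(B)
    # wait(B) would block forever; a timeout lets us observe the deadlock
    results["P0"] = B.acquire(timeout=0.5)

def p1():
    B.acquire()              # wait(B)
    barrier.wait()
    results["P1"] = A.acquire(timeout=0.5)  # wait(A) also blocks

t0 = threading.Thread(target=p0)
t1 = threading.Thread(target=p1)
t0.start(); t1.start()
t0.join(); t1.join()

print(results)  # {'P0': False, 'P1': False} -- neither second wait() can succeed
```

Both acquisitions time out: each process holds the semaphore the other needs, which is exactly the circular wait in the table above.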

3. Assume 200 KB of memory is available for allocation, and the following sequence
of events occurs.

P0 P1
Request 80KB; Request 70KB;
Request 60KB; Request 80KB;

Deadlock occurs if both processes progress to their second request.

 Deadlock Characterization or Necessary Conditions

Let’s explain all four conditions related to deadlock in the context of the scenario with two
processes and two resources:

 Mutual Exclusion

 Hold and Wait

 No Preemption

 Circular Wait
1. Mutual Exclusion

Mutual Exclusion condition requires that at least one resource be held in a non-shareable
mode, which means that only one process can use the resource at any given time. Both
Resource 1 and Resource 2 are non-shareable in our scenario, and only one process can
have exclusive access to each resource at any given time. As an example:

 Process 1 obtains Resource 1.

 Process 2 acquires Resource 2.

2. Hold and Wait

The hold and wait condition specifies that a process must be holding at least one resource
while waiting to acquire additional resources that are currently held by other processes.
In our example,

 Process 1 has Resource 1 and is awaiting Resource 2.

 Process 2 currently has Resource 2 and is awaiting Resource 1.

 Both processes hold one resource while waiting for the other, satisfying the hold and
wait condition.

3. No Preemption

Preemption is the act of taking a resource away from a process before it has finished its
task. According to the no-preemption condition, resources cannot be forcibly taken from a
process; a process can only release its resources voluntarily after completing its task.

For example, suppose Process P1 holds Resource R1 and requests Resource R2, which is held
by Process P2. P1 cannot preempt R2; it must wait until P2 finishes its execution and
releases R2. P1 may then restart by requesting both R1 and R2.

4. Circular Wait

Circular wait is a condition in which a set of processes are waiting for resources in such a
way that there is a circular chain, with each process in the chain holding a resource that the
next process needs. This is one of the necessary conditions for a deadlock to occur in a
system.

Example: Imagine four processes—P1, P2, P3, and P4—and four resources—R1, R2, R3,
and R4.
 P1 is holding R1 and waiting for R2 (which is held by P2).

 P2 is holding R2 and waiting for R3 (which is held by P3).

 P3 is holding R3 and waiting for R4 (which is held by P4).

 P4 is holding R4 and waiting for R1 (which is held by P1).

 Methods of Handling Deadlocks in Operating System

There are several ways to handle deadlock:

1. Deadlock Prevention
2. Deadlock Avoidance
3. Deadlock Detection
4. Deadlock Recovery
5. Deadlock Ignorance

1. Deadlock Prevention
The strategy of deadlock prevention is to design the system in such a way that the possibility of
deadlock is excluded. Indirect methods prevent the occurrence of one of the first three necessary
conditions of deadlock (mutual exclusion, hold and wait, no preemption); the direct method
prevents the occurrence of circular wait.

For example, the hold-and-wait condition can be prevented by requiring that a process request
all its required resources at one time, blocking the process until all of its requests can be
granted simultaneously. This prevention does not always yield good results because:

 a long waiting time may be required

 allocated resources may sit unused for long periods

 a process may not know all of its required resources in advance

We can prevent a deadlock by eliminating any of the four necessary conditions, as follows.

Eliminate Mutual Exclusion

It is not possible to violate mutual exclusion because some resources, such as the tape drive,
are inherently non-shareable. For other resources, like printers, we can use a technique
called Spooling (Simultaneous Peripheral Operations Online).

In spooling, when multiple processes request the printer, their jobs (instructions of the
processes that require printer access) are added to the queue in the spooler directory. The
printer is allocated to jobs on a First-Come, First-Served (FCFS) basis. In this way, a process
does not have to wait for the printer and can continue its work after adding its job to the
queue.

Eliminate Hold and Wait

Hold and wait is a condition in which a process holds one resource while simultaneously
waiting for another resource that is being held by a different process. The process cannot
continue until it gets all the required resources.

There are two ways to eliminate hold and wait:

 By eliminating wait: The process specifies the resources it requires in advance so that it
does not have to wait for allocation after execution starts.
For Example, Process1 declares in advance that it requires both Resource1 and
Resource2.

 By eliminating hold: The process has to release all resources it is currently holding
before making a new request.
For Example: Process1 must release Resource2 and Resource3 before requesting
Resource1.
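The all-or-nothing request described above can be sketched as a small allocator. This is an illustrative sketch (the allocator, its function names, and the resource names are ours): a process declares its whole resource set up front and either gets everything at once or nothing at all, so it never holds one resource while waiting for another.

```python
import threading

# Hypothetical all-or-nothing allocator guarding a small resource pool.
pool_lock = threading.Lock()
free = {"Resource1": True, "Resource2": True, "Resource3": True}

def request_all(names):
    """Grant the whole set atomically -- eliminating 'wait' while holding."""
    with pool_lock:
        if all(free[n] for n in names):
            for n in names:
                free[n] = False
            return True
        return False          # nothing was allocated, so nothing is held

def release_all(names):
    with pool_lock:
        for n in names:
            free[n] = True

granted = request_all(["Resource1", "Resource2"])   # Process1 declares both up front
denied  = request_all(["Resource2", "Resource3"])   # Resource2 busy -> gets nothing
release_all(["Resource1", "Resource2"])             # Process1 finishes
retry   = request_all(["Resource2", "Resource3"])   # now the full set is free
print(granted, denied, retry)  # True False True
```

The denied request holds nothing while it waits, so hold-and-wait never arises; the cost is the long waits and poor utilization noted earlier.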

Eliminate No Preemption

Preemption means temporarily taking a resource away from a process. To eliminate the
no-preemption condition, the system must be allowed to preempt resources:

 If a process holding some resources requests another resource that cannot be immediately
allocated, all resources it currently holds are preempted (released), and the process is
restarted only when it can regain its old resources together with the new one.

 Alternatively, when a process requests a resource held by a waiting process, the resource
can be preempted from the waiting process and allocated to the requester.

Eliminate Circular Wait

To eliminate circular wait, we can impose a total ordering on resource allocation:

 Assign a unique number to each resource.

 Processes may only request resources in increasing order of their numbers.

This prevents circular chains of waiting processes, as no process can request a
resource numbered lower than one it already holds.
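The ordering rule can be sketched with Python locks. This is an illustrative sketch (the resource numbering and helper names are ours): every process sorts the locks it needs by their assigned number before acquiring, so a circular chain of waiters cannot form.

```python
import threading

# Hypothetical resources, each assigned a unique number.
R1, R2, R3 = threading.Lock(), threading.Lock(), threading.Lock()
ORDER = {id(R1): 1, id(R2): 2, id(R3): 3}

def acquire_in_order(*locks):
    """Acquire locks strictly in increasing resource number.

    Since every process requests resources in the same global order, no
    process can hold a high-numbered resource while waiting for a lower
    one -- so no circular chain of waiters can form.
    """
    for lock in sorted(locks, key=lambda l: ORDER[id(l)]):
        lock.acquire()

def release_all(*locks):
    for lock in locks:
        lock.release()

# Both processes ask for {R1, R2}; regardless of argument order, each
# acquires R1 before R2, so circular wait is impossible.
def worker(name, log):
    acquire_in_order(R2, R1)   # sorted internally to R1 then R2
    log.append(name)
    release_all(R1, R2)

log = []
threads = [threading.Thread(target=worker, args=(n, log)) for n in ("P1", "P2")]
for t in threads: t.start()
for t in threads: t.join()
print(sorted(log))  # ['P1', 'P2'] -- both complete; no deadlock
```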

2. Deadlock Avoidance
Deadlock avoidance ensures that a resource request is only granted if it won’t lead to deadlock,
either immediately or in the future. Since the kernel can’t predict future process behavior, it
uses a conservative approach. Each process declares the maximum number of resources it may
need. The kernel allows requests in stages, checking for potential deadlocks before granting
them. A request is granted only if no deadlock is possible; otherwise, it stays pending. This
approach is conservative, as a process may finish without using the maximum resources it
declared.

Banker’s Algorithm is the technique used for Deadlock Avoidance.

Banker’s Algorithm

Banker's Algorithm is a resource allocation and deadlock avoidance algorithm that tests every
resource request made by a process. It checks whether granting the request leaves the system
in a safe state; if so, the request is allowed, otherwise the request is denied.

Inputs to Banker's Algorithm

 Maximum resource needs of each process.

 Resources currently allocated to each process.

 Free resources currently available in the system.

A request will only be granted under the conditions below:

 The request made by the process is less than or equal to the remaining need of that
process.

 The request made by the process is less than or equal to the freely available resources in
the system.

Timeouts

To avoid deadlocks caused by indefinite waiting, a timeout mechanism can be used to limit the
amount of time a process can wait for a resource. If the resource is unavailable within the
timeout period, the process can be forced to release its current resources and try again later.
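The timeout pattern can be sketched with a lock acquired under a deadline. This is an illustrative sketch (the `printer` resource and function name are ours): on timeout the caller backs off instead of waiting forever, and can retry once the holder has finished.

```python
import threading

printer = threading.Lock()   # a hypothetical single-instance resource

def use_printer_with_timeout(timeout=0.1):
    """Try to get the resource; on timeout, back off instead of waiting forever."""
    if printer.acquire(timeout=timeout):
        try:
            return "printed"
        finally:
            printer.release()
    return "timed out"   # caller should release its other resources and retry later

printer.acquire()                       # some other holder currently has the printer
first = use_printer_with_timeout()      # times out instead of deadlocking
printer.release()                       # the holder finishes
second = use_printer_with_timeout()     # now succeeds
print(first, second)  # timed out printed
```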

Example

Below is an example of a Banker’s algorithm

Total resources in system:

A B C D

6 5 7 6

Available system resources are:

A B C D

3 1 1 2

Processes (currently allocated resources):

A B C D

P1 1 2 2 1

P2 1 0 3 3

P3 1 2 1 0
Maximum resources we have for a process:

A B C D

P1 3 3 2 2

P2 1 2 3 4

P3 1 3 5 0

Need = Maximum Resources Requirement – Currently Allocated Resources

A B C D

P1 2 1 0 1

P2 0 2 0 1

P3 0 1 4 0
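The safety check at the core of the Banker's Algorithm can be run on the example matrices above. This is a minimal sketch (the function name `safe_sequence` is ours): repeatedly pick a process whose remaining need fits in the currently free resources, let it finish, and reclaim its allocation.

```python
# Banker's safety check using the example's numbers (resources A, B, C, D).
available = [3, 1, 1, 2]
allocated = {"P1": [1, 2, 2, 1], "P2": [1, 0, 3, 3], "P3": [1, 2, 1, 0]}
need      = {"P1": [2, 1, 0, 1], "P2": [0, 2, 0, 1], "P3": [0, 1, 4, 0]}

def safe_sequence(available, allocated, need):
    """Return a safe execution order, or None if the state is unsafe."""
    work = list(available)
    finished = set()
    order = []
    while len(finished) < len(need):
        progressed = False
        for p, n in need.items():
            # p can finish if its remaining need fits in what is free now
            if p not in finished and all(n[i] <= work[i] for i in range(len(work))):
                # when p finishes, it releases everything it holds
                work = [work[i] + allocated[p][i] for i in range(len(work))]
                finished.add(p)
                order.append(p)
                progressed = True
        if not progressed:
            return None   # no process can finish -> unsafe state
    return order

print(safe_sequence(available, allocated, need))  # ['P1', 'P2', 'P3']
```

Only P1's need (2,1,0,1) fits the initial availability (3,1,1,2); after P1 releases its allocation, P2 and then P3 can finish, so the state is safe with sequence P1, P2, P3.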

3. Deadlock Detection
o If Resources Have a Single Instance
In this case, we can run an algorithm to check for a cycle in the Resource Allocation
Graph. With single-instance resources, the presence of a cycle in the graph is a
sufficient condition for deadlock.
For example, if Resource 1 and Resource 2 each have a single instance and the graph
contains the cycle R1 → P1 → R2 → P2 → R1, deadlock is confirmed.

o If There are Multiple Instances of Resources

Detection of a cycle is necessary but not sufficient for deadlock in this case: the system
may or may not be in deadlock, depending on the allocation situation.

For systems with multiple instances of resources, an algorithm similar to the Banker's
safety check can be run periodically to detect deadlocks.

o Wait-For Graph Algorithm

The Wait-For Graph Algorithm is a deadlock detection technique for systems in which each
resource has a single instance. The wait-for graph is obtained from the resource allocation
graph by collapsing the resource nodes: a directed edge from process Pi to process Pj means
that Pi is waiting for a resource held by Pj. A cycle in this graph indicates a deadlock.
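Cycle detection on a wait-for graph is a standard depth-first search. This is a minimal sketch (the graphs below are hypothetical examples; the first encodes the earlier circular-wait scenario P1 → P2 → P3 → P4 → P1):

```python
# Wait-for graph: an edge P -> Q means "P is waiting for a resource held by Q".
# A cycle in this graph means deadlock (for single-instance resources).

def has_cycle(graph):
    """Detect a cycle with DFS using three colors: unvisited, in-stack, done."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def dfs(node):
        color[node] = GRAY
        for nxt in graph.get(node, []):
            if color.get(nxt, WHITE) == GRAY:   # back edge -> cycle found
                return True
            if color.get(nxt, WHITE) == WHITE and dfs(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

# The circular-wait example: P1 -> P2 -> P3 -> P4 -> P1 (deadlocked),
# and a second graph whose waits all eventually resolve (no cycle).
deadlocked = {"P1": ["P2"], "P2": ["P3"], "P3": ["P4"], "P4": ["P1"]}
ok         = {"P1": ["P2"], "P2": ["P3"], "P3": [], "P4": ["P2"]}

print(has_cycle(deadlocked), has_cycle(ok))  # True False
```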

4. Deadlock Recovery
Traditional operating systems such as Windows don't deal with deadlock recovery, as it is a
time- and space-consuming process. Real-time operating systems use deadlock recovery.

 Killing a Process: Kill all the processes involved in the deadlock, or kill them one by
one: after killing each process, check for deadlock again and repeat until the system
recovers. Killing processes breaks the circular wait condition.

 Process Rollback: Rollback deadlocked processes to a previously saved state where the
deadlock condition did not exist. It requires checkpointing to periodically save the state
of processes.

 Resource Preemption: Resources are preempted from the processes involved in the
deadlock and allocated to other processes, so that the system may recover from the
deadlock. The preempted processes, however, risk starvation.

 Concurrency Control: Concurrency control mechanisms are used to prevent data


inconsistencies in systems with multiple concurrent processes. These mechanisms
ensure that concurrent processes do not access the same data at the same time, which
can lead to inconsistencies and errors. Deadlocks can occur in concurrent systems when
two or more processes are blocked, waiting for each other to release the resources they
need. This can result in a system-wide stall, where no process can make progress.
Concurrency control mechanisms can help prevent deadlocks by managing access to
shared resources and ensuring that concurrent processes do not interfere with each
other.

Advantages of Deadlock Detection and Recovery

 Improved System Stability: Deadlocks can cause system-wide stalls, and detecting and
resolving deadlocks can help to improve the stability of the system.

 Better Resource Utilization: By detecting and resolving deadlocks, the operating system
can ensure that resources are efficiently utilized and that the system remains responsive
to user requests.

 Better System Design: Deadlock detection and recovery algorithms can provide insight
into the behavior of the system and the relationships between processes and resources,
helping to inform and improve the design of the system.

Disadvantages of Deadlock Detection and Recovery

 Performance Overhead: Deadlock detection and recovery algorithms can introduce a
significant performance overhead, as the system must regularly check for deadlocks
and take appropriate action to resolve them.

 Complexity: Deadlock detection and recovery algorithms can be complex to implement,
especially if they use advanced techniques such as the Resource Allocation Graph or
timestamping.

 False Positives and Negatives: Deadlock detection algorithms are not perfect and may
produce false positives or negatives, indicating the presence of deadlocks when they do
not exist or failing to detect deadlocks that do exist.

 Risk of Data Loss: In some cases, recovery algorithms may require rolling back the state
of one or more processes, leading to data loss or corruption.

5. Deadlock Ignorance
Sticking your head in the sand and pretending there is no problem at all is called the
Ostrich Algorithm. The Ostrich Algorithm is the most widely used technique for ignoring
deadlock, and it is used in most single-user systems. If a deadlock occurs, the OS simply
reboots the system so it can function again. The method chosen for handling deadlock varies
from system to system.

Scientists tend to believe that the most efficient way to deal with deadlock is deadlock
prevention. Engineers who build real systems, however, argue that deadlock prevention
deserves less attention, because deadlocks occur very rarely in practice.

System failures, compiler errors, programming bugs, and hardware crashes that occur once a
week deserve more attention than a deadlock problem that occurs once in years. Therefore,
most engineers do not pay a large price to eliminate deadlocks.

Many operating systems suffer from deadlocks that are not even detected, let alone
automatically broken. For example, the number of processes a system can hold is determined
by the process table. Since the process table has only a finite number of slots, a fork
fails when the table is full; a reasonable approach is for the new fork to wait and try
again when a slot in the process table becomes free.

A similar problem arises when opening and closing files: the maximum number of open files
is restricted by the size of the i-node table, so a similar problem occurs when it fills
up. Swap space is another limited resource; in fact, almost every table that stores data
in the operating system is a finite resource. If a collection of n processes each claimed
1/n of the total and then each tried to claim another one, a deadlock could result.

Most operating systems, including UNIX and Windows, simply ignore the deadlock problem.
Deadlock ignorance is used so frequently because, by this method, deadlock is handled
"for free", rather than by spending much effort on other deadlock prevention methods and
putting inconvenient restrictions on processes. Thus we have to decide between correctness
and convenience among the different methods of dealing with deadlock.

Ignoring the possibility of deadlock in operating systems can have both advantages and
disadvantages.

Advantages:

1. Simplicity: Ignoring the possibility of deadlock can make the design and
implementation of the operating system simpler and less complex.
2. Performance: Avoiding deadlock detection and recovery mechanisms can improve
the performance of the system, as these mechanisms can consume significant
system resources.

Disadvantages:

1. Unpredictability: Ignoring deadlock can lead to unpredictable behavior, making the
system less reliable and stable.

2. System crashes: If a deadlock does occur and the system is not prepared to handle it,
it can cause the entire system to crash, resulting in data loss and other problems.

3. Reduced availability: Deadlocks can cause processes to become blocked, which can
reduce the availability of the system and impact user experience.
In general, the disadvantages of ignoring deadlock outweigh the advantages. It is
important to implement effective deadlock prevention and recovery mechanisms in
operating systems to ensure the stability, reliability, and availability of the system.

 Comparative Study of Windows, Unix, and Linux Systems

1) Windows
Windows is a family of operating systems developed and marketed by Microsoft. The first
version of Windows was launched in 1985. Windows provides a GUI-based system that is
user-friendly even for people with little or no computer knowledge.
Windows is a licensed (proprietary) OS in which the source code is inaccessible. It is
designed for people with no programming knowledge, as well as for business and other
commercial users, and it is very simple and straightforward to use.

2) Unix
UNIX is a multi-user, multitasking operating system invented in the late 1960s at Bell
Laboratories, then operated by AT&T. It was intended to be a strong, reliable, and
flexible system, initially targeted at servers, workstations, and academic systems.

3) Linux
Linux is a free and open-source OS based on UNIX standards. It provides a programming
interface as well as user interfaces compatible with UNIX-based systems and supports a
large selection of applications. A Linux system also contains many independently developed
components, resulting in a UNIX-compatible system free from proprietary code.
The three systems can be compared as follows:

 Interface: Windows is a menu/GUI-based operating system; Unix and Linux are
command-based operating systems.

 Developer: Windows is developed by Microsoft and was first announced by Bill Gates on
November 10, 1983. Unix was developed by AT&T Bell Labs, various commercial vendors, and
non-profit organizations. Linux is open source, and a large number of programmers work
together online and contribute to its development.

 Licensing: Windows is proprietary software owned by Microsoft. Unix is a proprietary
operating system that requires a license to use. Linux is open-source software and can
be used freely without any licensing fees.

 GUI: Windows has a graphical user interface, making it simple to use. Unix was initially
a command-based OS, but a GUI called the Common Desktop Environment was later created,
and most distributions now ship with GNOME. Linux provides two main GUIs, KDE and GNOME,
but there are many other options, for example LXDE, Xfce, Unity, MATE, and so on.

 File systems: Windows uses the File Allocation Table (FAT32) and the New Technology File
System (NTFS). Unix supports jfs, gpfs, hfs, hfs+, ufs, xfs, and zfs. Linux supports
Ext2, Ext3, Ext4, Jfs, ReiserFS, Xfs, Btrfs, FAT, FAT32, and NTFS.

 Multitasking: Windows supports multithreading. Unix supports multiprocessing. Linux is a
multi-user, multitasking OS.

 Security: Windows is less secure compared to UNIX. Unix is more secure, as all changes
to the system require explicit user permission. Linux is generally considered to be more
secure than Windows.

 Usage: Windows is a user-friendly OS that requires a paid license and is preferred for
its software compatibility and gaming performance. Unix can only be utilized under
license from its copyright holders. Linux is an open-source operating system freely
accessible to everyone.

 Versions: Some Windows versions are Windows 1.0–3.1, Windows 9x, Windows NT, Windows 10,
and Windows 11. Some Unix versions are SunOS, Solaris, SCO UNIX, AIX, HP/UX, and ULTRIX.
Some Linux versions are Ubuntu, Debian GNU, Arch Linux, etc.

 Case sensitivity: Windows has case sensitivity as an option. Unix is fully
case-sensitive, so files differing only in case are separate files. Linux is also a
case-sensitive operating system.
 Disk Scheduling Algorithms In OS.

Disk scheduling algorithms are crucial in managing how data is read from and written to a
computer’s hard disk. These algorithms help determine the order in which disk read and write
requests are processed, significantly impacting the speed and efficiency of data access.
Common disk scheduling methods include First-Come, First-Served (FCFS), Shortest Seek Time
First (SSTF), SCAN, C-SCAN, LOOK, and C-LOOK. By understanding and implementing these
algorithms, we can optimize system performance and ensure faster data retrieval.
 Disk scheduling is a technique operating systems use to manage the order in which disk
I/O (input/output) requests are processed.
 Disk scheduling is also known as I/O Scheduling.
 The main goals of disk scheduling are to optimize the performance of disk operations,
reduce the time it takes to access data and improve overall system efficiency.
In this section, we will explore the different types of disk scheduling algorithms and
their functions.

Importance of Disk Scheduling in Operating System


 Multiple I/O requests may arrive by different processes and only one I/O request can be
served at a time by the disk controller. Thus other I/O requests need to wait in the
waiting queue and need to be scheduled.
 Two or more requests may be far from each other so this can result in greater disk arm
movement.
 Hard drives are one of the slowest parts of the computer system and thus need to be
accessed in an efficient manner.

Key Terms Associated with Disk Scheduling


 Seek Time: Seek time is the time taken to move the disk arm to the specified track where
the data is to be read or written. A disk scheduling algorithm that gives a smaller
average seek time is better.
 Rotational Latency: Rotational latency is the time taken by the desired sector of the
disk to rotate into position under the read/write head. A disk scheduling algorithm that
gives lower rotational latency is better.
 Transfer Time: Transfer time is the time to transfer the data. It depends on the
rotation speed of the disk and the number of bytes to be transferred.
 Disk Access Time:
Disk Access Time = Seek Time + Rotational Latency + Transfer Time
Total Seek Time = Total Head Movement * Seek Time per track

Disk Access Time and Disk Response Time


 Disk Response Time: Response time is the time a request spends waiting to perform its
I/O operation. The average response time is the average of the response times of all
requests, and the variance of response time measures how individual requests are
serviced relative to that average. A disk scheduling algorithm that gives a lower
variance of response time is better.

Goal of Disk Scheduling Algorithms


 Minimize Seek Time
 Maximize Throughput
 Minimize Latency
 Fairness
 Efficiency in Resource Utilization

 Disk Scheduling Algorithms


There are several disk scheduling algorithms. We will discuss each of them in detail.
 FCFS (First Come First Serve)
 SSTF (Shortest Seek Time First)
 SCAN
 C-SCAN
 LOOK
 C-LOOK
 RSS (Random Scheduling)
 LIFO (Last-In First-Out)
 N-STEP SCAN
 F-SCAN

1. FCFS (First Come First Serve)


FCFS is the simplest of all disk scheduling algorithms. In FCFS, the requests are
addressed in the order they arrive in the disk queue. Let us understand this with the
help of an example.
Example:
Suppose the order of requests is (82,170,43,140,24,16,190)
and the current position of the read/write head is 50.
Total overhead movement (total distance covered by the disk arm) =
(82-50)+(170-82)+(170-43)+(140-43)+(140-24)+(24-16)+(190-16) = 642

Advantages of FCFS
 Every request gets a fair chance
 No indefinite postponement
Disadvantages of FCFS
 Does not try to optimize seek time
 May not provide the best possible service
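The FCFS total can be computed mechanically: sum the absolute distance between consecutive serviced tracks. A minimal sketch (the function name is ours), run on the example above:

```python
def fcfs_total_movement(requests, head):
    """Total head movement when requests are serviced in arrival order."""
    total = 0
    for track in requests:
        total += abs(track - head)   # distance from the current head position
        head = track                 # the head is now at the serviced track
    return total

print(fcfs_total_movement([82, 170, 43, 140, 24, 16, 190], 50))  # 642
```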

2. SSTF (Shortest Seek Time First)


In SSTF (Shortest Seek Time First), requests having the shortest seek time are executed
first. So, the seek time of every request is calculated in advance in the queue and then
they are scheduled according to their calculated seek time. As a result, the request near
the disk arm will get executed first. SSTF is certainly an improvement over FCFS as it
decreases the average response time and increases the throughput of the system. Let us
understand this with the help of an example.
Example:
Suppose the order of requests is (82,170,43,140,24,16,190)
and the current position of the read/write head is 50.
Total overhead movement (total distance covered by the disk arm) =
(50-43)+(43-24)+(24-16)+(82-16)+(140-82)+(170-140)+(190-170) = 208

Advantages of Shortest Seek Time First


 The average Response Time decreases
 Throughput increases
Disadvantages of Shortest Seek Time First
 Overhead to calculate seek time in advance
 Can cause Starvation for a request if it has a higher seek time as compared to incoming
requests
 The high variance of response time as SSTF favors only some requests
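SSTF can be simulated by repeatedly picking the pending request nearest the current head. A minimal sketch (the function name is ours), reproducing the 208 total from the example:

```python
def sstf_total_movement(requests, head):
    """Service the pending request with the shortest seek from the current head."""
    pending = list(requests)
    total = 0
    while pending:
        nearest = min(pending, key=lambda t: abs(t - head))  # shortest seek next
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

print(sstf_total_movement([82, 170, 43, 140, 24, 16, 190], 50))  # 208
```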

3. SCAN
In the SCAN algorithm the disk arm moves in a particular direction and services the
requests coming in its path and after reaching the end of the disk, it reverses its
direction and again services the request arriving in its path. So, this algorithm works as
an elevator and is hence also known as an elevator algorithm. As a result, the requests
at the midrange are serviced more and those arriving behind the disk arm will have to
wait.
Example:
Suppose the requests to be addressed are 82,170,43,140,24,16,190, the read/write arm is at
50, and the disk arm moves "towards the larger value" (the disk has tracks 0–199).
Therefore, the total overhead movement (total distance covered by the disk arm) is
calculated as:
= (199-50) + (199-16) = 332
Advantages of SCAN Algorithm
 High throughput
 Low variance of response time
 Good average response time
Disadvantages of SCAN Algorithm
 Long waiting time for requests for locations just visited by disk arm
4. C-SCAN
In the SCAN algorithm, the disk arm again scans the path that has been scanned, after reversing
its direction. So, it may be possible that too many requests are waiting at the other end or there
may be zero or few requests pending at the scanned area.
These situations are avoided in the C-SCAN algorithm, in which the disk arm, instead of
reversing its direction, jumps to the other end of the disk and starts servicing requests
from there. The disk arm thus moves in a circular fashion; since the algorithm is
otherwise similar to SCAN, it is known as C-SCAN (Circular SCAN).
Example:
Suppose the requests to be addressed are 82,170,43,140,24,16,190, the read/write arm is at
50, and the disk arm moves "towards the larger value".
The total overhead movement (total distance covered by the disk arm) is calculated as:
= (199-50) + (199-0) + (43-0) = 391
Advantages of C-SCAN Algorithm
Here are some of the advantages of C-SCAN.
 Provides more uniform wait time compared to SCAN.
5. LOOK
The LOOK algorithm is similar to the SCAN disk scheduling algorithm, except that the disk
arm, instead of going all the way to the end of the disk, goes only as far as the last
request to be serviced in its direction of travel and then reverses from there. This
prevents the extra delay caused by unnecessary traversal to the end of the disk.
Example:
Suppose the requests to be addressed are 82,170,43,140,24,16,190, the read/write arm is at
50, and the disk arm moves "towards the larger value".
The total overhead movement (total distance covered by the disk arm) is calculated as:
= (190-50) + (190-16) = 314
6. C-LOOK
As LOOK is similar to SCAN, C-LOOK is similarly related to C-SCAN. In C-LOOK, the disk
arm, instead of going to the end of the disk, goes only as far as the last request in its
direction of travel and then jumps to the last request at the other end. This, too,
prevents the extra delay caused by unnecessary traversal to the end of the disk.
Example:
Suppose the requests to be addressed are 82,170,43,140,24,16,190, the read/write arm is at
50, and the disk arm moves "towards the larger value".
The total overhead movement (total distance covered by the disk arm) is calculated as:
= (190-50) + (190-16) + (43-16) = 341
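The four worked totals (SCAN, C-SCAN, LOOK, C-LOOK) can be reproduced with small helper functions. This is an illustrative sketch assuming tracks 0–199, a head moving toward larger values, and requests on both sides of the head (the function names are ours):

```python
def scan(requests, head, end=199):
    smaller = [r for r in requests if r < head]
    # travel to the physical end of the disk, then back to the lowest request
    return (end - head) + ((end - min(smaller)) if smaller else 0)

def c_scan(requests, head, end=199, start=0):
    smaller = [r for r in requests if r < head]
    # to the end, jump to track 0, then up to the highest remaining request
    return (end - head) + (((end - start) + (max(smaller) - start)) if smaller else 0)

def look(requests, head):
    smaller = [r for r in requests if r < head]
    larger = [r for r in requests if r >= head]
    # only as far as the last request in each direction
    return (max(larger) - head) + ((max(larger) - min(smaller)) if smaller else 0)

def c_look(requests, head):
    smaller = [r for r in requests if r < head]
    larger = [r for r in requests if r >= head]
    # to the highest request, jump to the lowest, then up to the last smaller one
    return (max(larger) - head) + (((max(larger) - min(smaller))
                                    + (max(smaller) - min(smaller))) if smaller else 0)

reqs, head = [82, 170, 43, 140, 24, 16, 190], 50
print(scan(reqs, head), c_scan(reqs, head), look(reqs, head), c_look(reqs, head))
# 332 391 314 341
```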
7. RSS (Random Scheduling)
It stands for Random Scheduling: as the name suggests, requests are serviced in a random
order. It fits situations where scheduling involves random attributes such as random
processing times, random due dates, random weights, and stochastic machine breakdowns,
which is why it is usually used for analysis and simulation.
8. LIFO (Last-In First-Out)
In the LIFO (Last In, First Out) algorithm, the newest jobs are serviced before the
existing ones; that is, the most recently arrived request is serviced first, followed by
the rest in the same (reverse-arrival) order.
Advantages of LIFO (Last-In First-Out)
 Maximizes locality and resource utilization
Disadvantages of LIFO (Last-In First-Out)
 It can seem unfair to other requests: if new requests keep coming in, it causes
starvation of the old and existing ones.
9. N-STEP SCAN
It is also known as the N-STEP LOOK algorithm. A buffer is created for N requests, and all
requests in a buffer are serviced in one go. Once the buffer is full, new requests are not
added to it but are sent to the next buffer. When these N requests have been serviced, the
next batch of N requests is taken up; in this way, every request gets guaranteed service.
Advantages of N-STEP SCAN
Here are some of the advantages of the N-Step Algorithm.
 It eliminates the starvation of requests completely
10. F-SCAN
This algorithm uses two sub-queues. During the scan, all requests in the first queue are serviced
and the new incoming requests are added to the second queue. All new requests are kept on
halt until the existing requests in the first queue are serviced.
Advantages of F-SCAN
Here are some of the advantages of the F-SCAN Algorithm.
 F-SCAN along with N-Step-SCAN prevents “arm stickiness” (phenomena in I/O
scheduling where the scheduling algorithm continues to service requests at or near the
current sector and thus prevents any seeking)
Each algorithm is unique in its own way. Overall Performance depends on the number and type
of requests.
Note: The average rotational latency is generally taken as half the time of one full rotation.
