OS unit-3
PART-1 (Synchronization)
Process Coordination: Process coordination or concurrency
control deals with mutual exclusion and synchronization.
The code for the producer process:
while (true)
{
    /* produce an item in nextProduced */
    while (counter == BUFFER_SIZE)
        ; /* do nothing */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}
The code for the consumer process:
while (true)
{
    while (counter == 0)
        ; /* do nothing */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
    /* consume the item in nextConsumed */
}
Although both the producer and consumer routines are correct separately, they may not
function correctly when executed concurrently. As an illustration, suppose that the value of the
variable counter is currently 5 and that the producer and consumer processes execute the
statements "counter++" and "counter--" concurrently.
We can show that the value of counter may be incorrect as follows. Note that the
statement "counter++" may be implemented in machine language (on a typical machine) as
register1 = counter
register1 = register1 + 1
counter = register1
where register1 is a local CPU register. Similarly, the statement "counter--" is implemented as
follows:
register2 = counter
register2 = register2 - 1
counter = register2
where again register2 is a local CPU register. Even though register1 and register2 may be
the same physical register (an accumulator, say), remember that the contents of this register will
be saved and restored by the interrupt handler.
The concurrent execution of "counter++" and "counter--" is equivalent to a sequential execution
where the lower-level statements presented previously are interleaved in some arbitrary order.
One such interleaving is
register1 = counter           {register1 = 5}
register1 = register1 + 1     {register1 = 6}
register2 = counter           {register2 = 5}
register2 = register2 - 1     {register2 = 4}
counter = register1           {counter = 6}
counter = register2           {counter = 4}
Notice that we have arrived at the incorrect state "counter == 4", indicating that four
buffers are full, when, in fact, five buffers are full. If we reversed the order of the last two
statements, we would arrive at the incorrect state "counter == 6".
We would arrive at this incorrect state because we allowed both processes to manipulate
the variable counter concurrently. A situation like this, where several processes access and
manipulate the same data concurrently and the outcome of the execution depends on the
particular order in which the access takes place, is called a race condition. To guard against the
race condition above, we need to ensure that only one process at a time can be manipulating the
variable counter. To make such a guarantee, we require that the processes be synchronized in
some way.
2. The Critical-Section Problem
Consider a system consisting of n processes {P0, P1, ..., Pn-1}. Each process has a
segment of code, called a critical section, in which the process may be changing common
variables, updating a table, writing a file, and so on. The important feature of the system is that,
when one process is executing in its critical section, no other process is to be allowed to
execute in its critical section. That is, no two processes are executing in their critical
sections at the same time. The critical-section problem is to design a protocol that the
processes can use to cooperate. Each process must request permission to enter its critical
section. The section of code implementing this request is the entry section. The critical section
may be followed by an exit section. The remaining code is the remainder section. The general
structure of a typical process Pi is shown below; the entry section and exit section are the
important segments of code to note.
General structure of a typical process Pi:
do {
    entry section
        critical section
    exit section
        remainder section
} while (TRUE);
Solutions to the Critical-Section Problem
Several approaches exist to handle the critical-section problem:
• Peterson’s Algorithm
– A software solution that ensures mutual exclusion using two variables: a boolean
flag array and a "turn" variable.
• Synchronization Primitives
– Locks: Allow only one thread to access the critical section at a time.
– Semaphores: Use counting mechanisms to control access.
– Monitors: High-level constructs that manage synchronization automatically.
• Hardware-Based Solutions
Test-and-Set (TSL) Instruction: Atomic hardware operations to control access.
Compare-and-Swap (CAS): Ensures mutual exclusion at the processor level.
• Operating System Support
Modern OSes provide process synchronization mechanisms like mutexes, condition
variables, and message passing.
A solution to the critical-section problem must satisfy the following three requirements:
(i) Mutual exclusion: If process Pi is executing in its critical section, then no other processes
can be executing in their critical sections.
(ii) Progress: If no process is executing in its critical section and some processes wish to enter
their critical sections, then only those processes that are not executing in their remainder
sections can participate in the decision on which will enter its critical section next, and this
selection cannot be postponed indefinitely.
(iii) Bounded waiting: There exists a bound, or limit, on the number of times that other
processes are allowed to enter their critical sections after a process has made a request to enter
its critical section and before that request is granted.
3. Peterson’s solution
Peterson's solution is restricted to two processes, Pi and Pj, that share two data items: a
boolean array flag[2] and an integer turn. The structure of process Pi is:
do {
    flag[i] = TRUE;
    turn = j;
    while (flag[j] && turn == j)
        ; /* do nothing */
    /* critical section */
    flag[i] = FALSE;
    /* remainder section */
} while (TRUE);
4. Mutex Locks
In concurrent programming, multiple threads or processes may try to modify a shared resource
simultaneously, leading to inconsistent results.
A Mutex (Mutual Exclusion Lock) allows only one thread to access a critical section at a time,
ensuring consistency.
1. Lock (pthread_mutex_lock): A thread must acquire the lock before entering the critical
section.
2. Critical Section: Only one thread executes this section at a time.
3. Unlock (pthread_mutex_unlock): The thread releases the lock after execution, allowing
others to proceed.
Mutex Variable − A mutex variable is used to represent the lock. It is a data structure that
maintains the state of the lock and allows threads or processes to acquire and release it.
Lock Acquisition − Threads or processes can attempt to acquire the lock by requesting it. If the
lock is available, the requesting thread or process gains ownership of the lock. Otherwise, it
enters a waiting state until the lock becomes available.
Lock Release − Once a thread or process has finished using the shared resource, it releases the
lock, allowing other threads or processes to acquire it.
Types of Mutex Locks
Mutex locks come in a variety of forms that offer varying levels of capabilities and behavior.
Recursive Mutex
A recursive mutex allows the same thread or process to acquire the lock multiple times without
blocking. It keeps track of the number of times it has been acquired and requires the same
number of releases before it is completely unlocked.
Error-Checking Mutex
An error-checking mutex performs additional error checking on lock operations. By detecting
repeated lock acquisition, it guarantees that a thread or process does not re-acquire a mutex
lock it already holds.
Timed Mutex
A timed mutex lets a thread or process try to acquire a lock for a predetermined amount of time.
If the lock does not become available within the allotted time, the acquisition fails and the
thread or process can respond appropriately.
Read-Write Mutex
A read-write lock is a synchronization mechanism that permits several threads or processes to
read the same resource concurrently while enforcing mutual exclusion during write operations;
strictly speaking, it is not a plain mutex lock.
5. Synchronization Hardware
Synchronization hardware provides a hardware-based solution to the critical-section problem.
We explore several solutions using techniques ranging from hardware instructions to
software-based APIs available to application programmers. Hardware features can
make any programming task easier and improve system efficiency. In this section, we present
some simple hardware instructions that are available on many systems and show how they can
be used effectively in solving the critical-section problem.
The critical-section problem could be solved simply in a uniprocessor environment by disabling
interrupts while a shared variable is being modified; we could then be sure that the current
sequence of instructions would be allowed to execute in order
without preemption. Unfortunately, this solution is not as feasible in a multiprocessor
environment. Disabling interrupts on a multiprocessor can be time consuming, as the message is
passed to all the processors. This message passing delays entry into each critical section, and
system efficiency decreases.
We have two types of instructions: 1. TestAndSet() instruction 2. Swap() instruction
The definition of the TestAndSet() instruction:
boolean TestAndSet(boolean *target) {
    boolean rv = *target;
    *target = TRUE;
    return rv;
}
Mutual-exclusion implementation with TestAndSet():
do {
    while (TestAndSet(&lock))
        ; // do nothing
    // critical section
    lock = FALSE;
    // remainder section
} while (TRUE);
The Swap() instruction, in contrast to TestAndSet(), operates on the contents of two words;
like TestAndSet(), it is executed atomically. If the machine supports the Swap() instruction,
then mutual exclusion can be provided as follows. A global Boolean variable lock is declared
and is initialized to false. In addition, each process has a local Boolean variable key. The
structure of process Pi is shown below.
The definition of the Swap() instruction:
void Swap(boolean *a, boolean *b) {
    boolean temp = *a;
    *a = *b;
    *b = temp;
}
Mutual-exclusion implementation with Swap():
do
{
    key = TRUE;
    while (key == TRUE)
        Swap(&lock, &key);
    // critical section
    lock = FALSE;
    // remainder section
} while (TRUE);
6. Semaphores
The various hardware-based solutions to the critical-section problem (using the
TestAndSet() and Swap() instructions) are complicated for application programmers to use. To
overcome this difficulty, we can use a synchronization tool called a semaphore.
A semaphore S is an integer variable that, apart from initialization, is accessed only
through two standard atomic operations: wait() and signal(). The wait() operation was
originally termed P (from the Dutch proberen, "to test"); signal() was originally called V (from
verhogen, "to increment").
Wait (P operation): This operation decrements the semaphore's value. If the value is
positive, it proceeds. If it is zero, the thread is blocked until the value becomes positive.
Signal (V operation): This operation increments the semaphore's value and potentially
unblocks any threads that are waiting on it.
Types of Semaphores
• Binary Semaphore (Mutex):
– It can only take two values: 0 or 1.
– A binary semaphore is often used for mutual exclusion (mutex), where only one
process can access a critical section at a time.
– It behaves like a lock: if a process enters the critical section, the semaphore value
is set to 0, and the next process must wait until the semaphore is set back to 1
(released).
• Counting Semaphore:
– This type can take any non-negative integer value.
– It is used to control access to a resource that has a finite number of instances.
– For example, a counting semaphore could be used to manage a pool of 3 printers:
the semaphore’s value will be initialized to 3. When a process uses a printer, the
semaphore is decremented by 1, and when it is done, the semaphore is
incremented by 1.
Mutual-exclusion implementation with semaphores
do {
wait(mutex);
//criticalsection
signal(mutex);
// remainder section
}while (TRUE);
2. Semaphore Implementation
The big problem with semaphores as described above is the busy loop
in the wait call, which consumes CPU cycles without doing any useful
work. This type of lock is known as a spinlock, because the lock just
sits there and spins while it waits. While this is generally a bad thing, it
does have the advantage of not invoking context switches, and so it is
sometimes used in multi-processing systems when the wait time is
expected to be short - One thread spins on one processor while
another completes its critical section on another processor.
Key to the success of semaphores is that the wait and signal
operations be atomic, that is no other process can execute a wait or
signal on the same semaphore at the same time. On single processors
this can be implemented by disabling interrupts during the execution
of wait and signal; Multiprocessor systems have to use more complex
methods, including the use of spinlocking.
Problems with Semaphores
Deadlock: If semaphores are not used properly (for example, if a process locks a
semaphore but never releases it), it could lead to a deadlock.
Starvation: If a process is always waiting for a resource and never gets a chance to
execute, it might starve.
Complexity: Correctly implementing semaphores in complex systems can lead to
difficult-to-debug issues.
The Readers-Writers Problem: A data set is shared among several concurrent processes.
Multiple readers can access the data simultaneously, but when a writer
accesses the data, it needs exclusive access.
The first reader to enter will acquire rw_mutex and the last
reader to exit will release it; the remaining readers do not
touch rw_mutex.
o Note that the first reader to come along will block on rw_mutex if
there is currently a writer accessing the data, and that all
following readers will only block on mutex for their turn to
increment readcount.
The structure of a reader process
3. The Dining-Philosophers Problem:
Consider five philosophers who spend their lives thinking and eating. The philosophers share a
circular table surrounded by five chairs, each belonging to one philosopher. In the center of the
table is a bowl of rice, and the table is laid with five single chopsticks. When a philosopher
thinks, she does not interact with her colleagues. From time to time, a philosopher gets hungry
and tries to pick up the two chopsticks that are closest to her (the chopsticks that are between her
and her left and right neighbors). A philosopher may pick up only one chopstick at a time;
obviously, she cannot pick up a chopstick that is already in the hand of a neighbor.
In a monitor, the x.signal() operation resumes exactly one suspended process; if no process is
suspended, the signal() operation has no effect. This contrasts with signal() on
semaphores, which always affects the state of the semaphore.
Now suppose that, when the x. signal() operation is invoked by a process P, there is a
suspended process Q associated with condition x. Clearly, if the suspended process Q is
allowed to resume its execution, the signaling process P must wait. Otherwise, both P and Q
would be active simultaneously within the monitor. Note, however, that both processes can
conceptually continue with their execution. Two possibilities exist:
1. Signal and wait: P either waits until Q leaves the monitor or waits for another condition.
2. Signal and continue: Q either waits until P leaves the monitor or waits for another
condition.
2.Dining-Philosophers Solution Using Monitors:
This solution to the dining philosophers uses monitors, and the
restriction that a philosopher may only pick up chopsticks when both
are available. There are also two key data structures in use in this
solution:
monitor DP
{
    enum {THINKING, HUNGRY, EATING} state[5];
    condition self[5];

    void pickup(int i) {
        state[i] = HUNGRY;
        test(i);
        if (state[i] != EATING)
            self[i].wait();
    }

    void putdown(int i) {
        state[i] = THINKING;
        // test left and right neighbors
        test((i + 4) % 5);
        test((i + 1) % 5);
    }

    void test(int i) {
        if ((state[(i + 4) % 5] != EATING) &&
            (state[i] == HUNGRY) &&
            (state[(i + 1) % 5] != EATING)) {
            state[i] = EATING;
            self[i].signal();
        }
    }

    initialization_code() {
        for (int i = 0; i < 5; i++)
            state[i] = THINKING;
    }
}
It is easy to show that this solution ensures that no two neighbors are eating
simultaneously and that no deadlocks will occur. We note, however, that it is possible for a
philosopher to starve to death. We do not present a solution to this problem but rather leave it
as an exercise for you.
3. Implementing a Monitor Using Semaphores:
One possible implementation of a monitor uses a semaphore mutex
to control mutually exclusive access to the monitor, and a
semaphore next on which processes can suspend themselves after
they are already "inside" the monitor (in conjunction with condition
variables, see below). The integer next_count keeps track of how
many processes are waiting in the next queue. Each externally accessible
monitor function F is then implemented as:
wait(mutex);
    ... body of F ...
if (next_count > 0)
    signal(next);
else
    signal(mutex);
PART-2 (Deadlocks)
Deadlocks: A process requests resources; if the resources are not available
at that time, the process enters a wait state. It may happen that waiting
processes will never again change state, because the resources they have
requested are held by other waiting processes. This situation is called a
Deadlock.
9. System Model:
In some cases deadlocks can be understood more clearly through the use of Resource-
Allocation Graphs, having the following properties:
o A set of resource categories, { R1, R2, R3, . . ., RN }, which appear as square
nodes on the graph. Dots inside the resource nodes indicate specific instances of
the resource. ( E.g. two dots might represent two laser printers. )
o A set of processes, { P1, P2, P3, . . ., PN }, which appear as circle nodes on the
graph.
o Request Edges - A set of directed arcs from Pi to Rj, indicating that process Pi
has requested Rj, and is currently waiting for that resource to become available.
o Assignment Edges - A set of directed arcs from Rj to Pi indicating that resource
Rj has been allocated to process Pi, and that Pi is currently holding resource Rj.
o Note that a request edge can be converted into an assignment edge by reversing
the direction of the arc when the request is granted.
o If a resource-allocation graph contains no cycles, then the system is not
deadlocked.
o If a resource-allocation graph does contain cycles and each resource category contains
only a single instance, then a deadlock exists.
o If a resource category contains more than one instance, then the presence of a cycle in the
resource-allocation graph indicates the possibility of a deadlock, but does not guarantee
one.
11. Methods for Handling Deadlocks:
Generally speaking there are three ways of handling deadlocks:
1. Deadlock prevention or avoidance - Do not allow the system to get into a
deadlocked state.
2. Deadlock detection and recovery - Abort a process or preempt some resources
when deadlocks are detected.
3. Ignore the problem altogether - If deadlocks only occur once a year or so, it
may be better to simply let them happen and reboot as necessary than to incur the
constant overhead and system performance penalties associated with deadlock
prevention or detection. This is the approach that both Windows and UNIX take.
In order to avoid deadlocks, the system must have additional information about all
processes. In particular, the system must know what resources a process will or may
request in the future.
Deadlock detection is fairly straightforward, but deadlock recovery requires either
aborting processes or preempting resources, neither of which is an attractive alternative.
If deadlocks are neither prevented nor detected, then when a deadlock occurs the system
will gradually slow down, as more and more processes become stuck waiting for
resources currently held by the deadlock and by other waiting processes. Unfortunately
this slowdown can be indistinguishable from a general system slowdown when a real-time
process has heavy computing needs.
(iii) Eliminating No Preemption: If a process holding some resources requests another
resource that cannot be immediately allocated to it, then the system can preempt the
resources held by the process. The resources are preempted and allocated to other processes
until the original process can be restarted with all its required resources.
Disadvantage: Preempting resources can result in wasted work, as processes may need to
restart after being preempted.
(iv) Eliminating Circular Wait: One approach is to impose a total ordering on the resources
and require that each process request resources only in an increasing order of
enumeration. This means that if a process is holding a resource with a lower number, it
can subsequently request only resources with higher numbers.
12. Deadlock Avoidance:
Deadlock avoidance requires more information about each process, and tends to lead to low
device utilization.
In some algorithms the scheduler only needs to know the maximum number of each
resource that a process might potentially use. In more complex algorithms the scheduler
can also take advantage of the schedule of exactly what resources may be needed in what
order.
When a scheduler sees that starting a process or granting resource requests may lead to
future deadlocks, then that process is just not started or the request is not granted.
A resource allocation state is defined by the number of available and allocated resources,
and the maximum requirements of all processes in the system.
A state is safe if the system can allocate all resources requested by all processes ( up to
their stated maximums ) without entering a deadlock state.
More formally, a state is safe if there exists a safe sequence of processes { P0, P1, P2, ...,
PN } such that all of the resource requests for Pi can be granted using the currently
available resources plus the resources held by all processes Pj where j < i.
If a safe sequence does not exist, then the system is in an unsafe state, which may lead to
deadlock.
If resource categories have only single instances of their resources, then deadlock states
can be detected by cycles in the resource-allocation graphs.
In this case, unsafe states can be recognized and avoided by augmenting the resource-
allocation graph with claim edges, noted by dashed lines, which point from a process to a
resource that it may request in the future.
In order for this technique to work, all claim edges must be added to the graph for any
particular process before that process is allowed to request any resources. ( Alternatively,
processes may only make requests for resources for which they have already established
claim edges, and claim edges cannot be added to any process that is currently holding
resources. )
When a process makes a request, the claim edge Pi->Rj is converted to a request edge.
Similarly when a resource is released, the assignment reverts back to a claim edge.
This approach works by denying requests that would produce cycles in the resource-allocation
graph, taking claim edges into effect.
Consider for example what happens when process P2 requests resource R2: if granting the
request would close a cycle in the augmented graph, the request is denied.
13. Banker's Algorithm:
The Banker's Algorithm is based on the following quantities:
o Maximum Demand: The maximum number of each resource a process may need
during its execution.
o Need: The remaining number of resources a process still needs to complete its
execution. This is calculated as:
Need=Maximum Demand−Allocation
The Banker's Algorithm ensures that resource allocation always results in a safe state, meaning
that all processes can eventually complete without leading to deadlock.The banker's algorithm
relies on several key data structures: (where n is the number of processes and m is the number
of resource categories. )
Available[ m ] indicates how many resources are currently available of each type.
Max[ n ][ m ] indicates the maximum demand of each process of each resource.
Allocation[ n ][ m ] indicates the number of each resource category allocated to each
process.
Need[ n ][ m ] indicates the remaining resources needed of each type for each process,
computed from the Max and Allocation matrices (i.e., Need[i][j] = Max[i][j] - Allocation[i][j]).
Banker’s Algorithm has two Algorithms those are
1) Safety Algorithm
2) Resource-Request Algorithm
(i) Safety Algorithm: The safety algorithm determines whether the system is in a safe state. A
state is safe if there exists a sequence of processes such that each process can finish execution,
even if it requires maximum resources.
In the Banker's Algorithm, the Safety Algorithm is used to determine whether a system is in a
safe state after a resource request is made. A safe state is one where there is a sequence of
processes that can execute to completion without causing a deadlock, given the available
resources.
Steps of the Safety Algorithm:
1. Start with Work = Available and Finish[i] = false for all processes i.
2. Find a process i such that:
Finish[i] = false (the process has not finished yet).
Need[i] <= Work (the process's remaining resource requirements can be met with the
currently available resources).
3. If such a process is found:
– Work = Work + Allocation[i] (simulate the process finishing and releasing its
resources).
– This corresponds to process i finishing up and releasing its resources back into the
work pool. Then loop back to step 2.
4. If Finish[ i ] == true for all i, then the state is a safe state, because a safe sequence has been
found.
Safe or Unsafe
• If all processes have Finish[i] = true, then the system is in a safe state, and there exists a
safe sequence of process executions.
• If any process remains with Finish[i] = false and no such process can be found to meet
the necessary conditions, the system is in an unsafe state, and deadlock is possible.
Example Problem: Consider the following table; find the Need matrix and the safe-state sequence.
Safe State sequence is P0, P2, P3, P4, P1
(ii) Resource-Request Algorithm:
Step 1: Check if the Request is Less Than or Equal to the Need
Condition: The requested resources should not exceed the Need matrix for the requesting
process. Request <= Need
Step 2: Check if the Request is Less Than or Equal to the Available Resources
Condition: The requested resources should not exceed the Available resources in the system.
Request<= Available
Step 3: Pretend to Grant the Request and Run the Safety Algorithm
• Perform the safety algorithm to check if the system can reach a safe state after granting
the request.
– Work = Available (Set the Work vector to be the current available resources).
– Finish[i] = False for all processes (Initially, assume no process is finished).
– Find a Process: Look for a process P[i] where:
• Finish[i] = False
• Need[i] ≤ Work (The process’s remaining need can be satisfied with the
currently available resources).
– If such a process is found:
• Set Work = Work + Allocation[i] (Assume that the process finishes and
releases its resources).
• Set Finish[i] = True (Mark the process as finished).
c) An Illustrative Example:
Consider the following resource allocation for a system with 3 types of resources (A=10,
B=5, C=7) and 5 processes (P0, P1, P2, P3, P4). Suppose process P1 requests 1 instance of A
and 2 instances of C ( Request[ 1 ] = ( 1, 0, 2 ) ). Find the Need matrix and a safe-state sequence.
– Allocation[1] = Allocation[1] + Request
= 2 0 0 + 1 0 2
= 3 0 2
– Need[1] = Need[1] - Request
= 1 2 2 - 1 0 2
= 0 2 0
– Available = Available - Request
= 3 3 2 - 1 0 2
= 2 3 0
Now run the safety algorithm with Work = 2 3 0:
Check P1 process Allocation
Need[i] <= Work
0 2 0 <= 2 3 0
Condition is true, go to step 3
Work = Work + Allocation[i]
= 2 3 0 + 3 0 2
= 5 3 2
Check P3 process Allocation
Need[i] <= Work
0 1 1 <= 5 3 2
Condition is true, go to step 3
Work = Work + Allocation[i]
= 5 3 2 + 2 1 1
= 7 4 3
Check P4 process Allocation
Need[i] <= Work
4 3 1 <= 7 4 3
Condition is true, go to step 3
Work = Work + Allocation[i]
= 7 4 3 + 0 0 2
= 7 4 5
Check P0 process Allocation
Need[i] <= Work
7 4 3 <= 7 4 5
Condition is true, go to step 3
Work = Work + Allocation[i]
= 7 4 5 + 0 1 0
= 7 5 5
Check P2 process Allocation
Need[i] <= Work
6 0 0 <= 7 5 5
Condition is true, go to step 3
Work = Work + Allocation[i]
= 7 5 5 + 3 0 2
= 10 5 7
Safe state sequence is P1, P3, P4, P0, P2
14. Deadlock Detection:
Deadlock detection is a method used in operating systems and distributed computing systems to
identify deadlocks that have already occurred. A deadlock is a situation in which a set of
processes is unable to proceed because each process is waiting for another to release resources,
creating a cycle of dependencies that cannot be resolved.
How Deadlock Detection Works
Resource Allocation Graph (RAG): In deadlock detection, a Resource Allocation Graph
(RAG) is commonly used to represent the system's state. The graph consists of:
Processes (represented by nodes)
Resources (represented by nodes)
Edges representing relationships:
• Request edge from a process to a resource (indicating the process is
waiting for the resource).
• Assignment edge from a resource to a process (indicating the resource is
allocated to the process).
• Cycle Detection: A deadlock occurs when there is a cycle in the Resource Allocation
Graph. A cycle indicates that processes are involved in a circular wait, meaning each
process is holding a resource that another process in the cycle needs, and none of the
processes can proceed.
• To detect deadlocks, the system must check for the presence of such cycles in the RAG.
If a cycle is detected, it implies that the processes in the cycle are deadlocked.
Steps in Deadlock Detection
• Graph Construction:
– Construct a Resource Allocation Graph that reflects the current state of processes
and resources.
• Cycle Detection Algorithm:
– Use an algorithm (like Depth-First Search or BFS) to detect cycles in the graph.
This is the core of deadlock detection.
• Deadlock Identification:
– If a cycle is detected, the processes involved in the cycle are considered
deadlocked.
• Handle Deadlock:
– Once deadlock is detected, the system can take actions like killing one of the
processes involved or forcibly releasing resources.
If each resource category has a single instance, then we can use a variation of the
resource-allocation graph known as a wait-for graph.
A wait-for graph can be constructed from a resource-allocation graph by eliminating the
resources and collapsing the associated edges, as shown in the figure below.
An arc from Pi to Pj in a wait-for graph indicates that process Pi is waiting for a resource
that process Pj is currently holding.
(a) Resource allocation graph. (b) Corresponding wait-for graph
(ii) Several Instances of a Resource Type ( Banker's Algorithm for Deadlock Detection):
The detection algorithm outlined here is essentially the same as the Banker's algorithm,
with two subtle differences:
o In step 1, the Banker's Algorithm sets Finish[ i ] to false for all i. The algorithm
presented here sets Finish[ i ] to false only if Allocation[ i ] is not zero. If the
currently allocated resources for this process are zero, the algorithm sets Finish[ i ] to
true. This is essentially assuming that IF all of the other processes can finish, then
this process can finish also. Furthermore, this algorithm is specifically looking for
which processes are involved in a deadlock situation, and a process that does not
have any resources allocated cannot be involved in a deadlock, and so can be
removed from any further consideration.
o Steps 2 and 3 are unchanged
o In step 4, the basic Banker's Algorithm says that if Finish[ i ] == true for all i, that
there is no deadlock. This algorithm is more specific, by stating that if Finish[ i ]
== false for any process Pi, then that process is specifically involved in the
deadlock which has been detected.
The answer may depend on how frequently deadlocks are expected to occur, as well as
the possible consequences of not catching them immediately. There are two obvious
approaches, each with trade-offs:
1. Do deadlock detection after every resource allocation which cannot be
immediately granted. This has the advantage of detecting the deadlock right away,
while the minimum numbers of processes are involved in the deadlock. The down
side of this approach is the extensive overhead and performance hit caused by
checking for deadlocks so frequently.
2. Do deadlock detection only when there is some clue that a deadlock may have
occurred, such as when CPU utilization reduces to 40% or some other magic
number. The advantage is that deadlock detection is done much less frequently,
but the down side is that it becomes impossible to detect the processes involved in
the original deadlock, and so deadlock recovery can be more complicated and
damaging to more processes.
15. Recovery from Deadlock:
One approach to recovery is to abort one or more processes to break the deadlock. Criteria
for choosing which process to abort include:
– The priority of the process.
– How long the process has run, and how much longer it needs to complete.
– The resources the process has used and still needs.
– How many processes will need to be terminated.
– Whether the process is interactive or batch.
When preempting resources to relieve deadlock, there are three important issues to be
addressed:
1. Selecting a victim - deciding which resources to preempt from which processes.
2. Rollback - the preempted process must be rolled back to some safe state and restarted
from there.
3. Starvation - the same process may be picked as a victim repeatedly, so it must be
guaranteed to make progress eventually.