deadlocks-notes-operating-systems

The document discusses process synchronization, critical section problems, and various solutions such as Peterson's solution, mutex locks, and semaphores. It outlines the requirements for mutual exclusion, progress, and bounded waiting, and describes classic synchronization problems like the bounded buffer, dining philosophers, and readers-writers problems. Additionally, it introduces monitors as a high-level synchronization construct to prevent timing errors in semaphore usage.

PROCESS SYNCHRONIZATION:

Process synchronization is the coordination of cooperating processes that share system resources, so that concurrent access to shared data is controlled and the chance of data inconsistency is minimized.
EXAMPLE:

Consider the producer-consumer problem, in which the two processes share a variable called counter. counter is incremented every time we add a new item to the buffer and decremented every time we remove an item from the buffer.

The code for the producer process is

while (true) {
    /* produce an item in next_produced */
    while (counter == BUFFER_SIZE)
        ; /* do nothing */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}
The code for the consumer process is

while (true) {
    while (counter == 0)
        ; /* do nothing */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
    /* consume the item in next_consumed */
}
Suppose that the value of the variable counter is currently 5 and that the producer and consumer processes concurrently execute the statements "counter++" and "counter--".
Following the execution of these two statements, the value of the variable counter may be 4, 5, or 6!
The only correct result, though, is counter == 5, which is generated correctly if the producer and consumer execute separately.

CRITICAL SECTION PROBLEM:


The critical-section problem is to design a protocol that the processes can use to cooperate.
Each process has a segment of code, called a critical section, in which the process may be
changing common variables, updating a table, writing a file, and so on.
When one process is executing in its critical section, no other process is allowed to execute in its
critical section. That is, no two processes are executing in their critical sections at the same
time.
The general structure of a participating process is as follows: each process must request permission to enter its critical section, and the section of code implementing this request is the entry section. The critical section may be followed by an exit section, and the remaining code is the remainder section.

A solution to the critical-section problem must satisfy the following three requirements:

1. Mutual exclusion. If process Pi is executing in its critical section, then no other process can be executing in its critical section.

2. Progress. If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next.

3. Bounded waiting. There exists a bound, or limit, on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

SOLUTIONS TO CRITICAL SECTION PROBLEM:

PETERSON’S SOLUTION:

Peterson’s solution is restricted to two processes that alternate execution between their
critical sections and remainder sections.

The processes are numbered P0 and P1. For convenience, when presenting Pi, we use
Pj to denote the other process. Peterson’s solution requires the two processes to share
two data items:

int turn;
boolean flag[2];


The variable turn indicates whose turn it is to enter its critical section. That is, if turn
== i, then process Pi is allowed to execute in its critical section.

The flag array is used to indicate if a process is ready to enter its critical section. For
example, if flag[i] is true, this value indicates that Pi is ready to enter its critical
section.

To enter the critical section, process Pi first sets flag[i] to true and then sets turn to the value j, thereby asserting that if the other process wishes to enter the critical section, it can do so.

Similarly, to enter the critical section, process Pj first sets flag[j] to true and then sets turn to the value i.

The solution is correct and thus provides the following.

Mutual exclusion is preserved.

The progress requirement is satisfied.

The bounded-waiting requirement is met.

MUTEX LOCKS:

Mutex locks are used to protect critical regions and thus prevent race conditions

A process must acquire the lock before entering a critical section; it releases the lock when it exits the
critical section.
The acquire() function acquires the lock, and the release() function releases the lock.

A mutex lock has a boolean variable available whose value indicates if the lock is available or not. If the
lock is available, a call to acquire() succeeds, and the lock is then considered unavailable.

A process that attempts to acquire an unavailable lock is blocked until the lock is released.

The definition of acquire() is as follows:

acquire() {
    while (!available)
        ; /* busy wait */
    available = false;
}

The definition of release() is as follows:

release() {
    available = true;
}

The main disadvantage of the implementation given here is that it requires busy waiting.

While a process is in its critical section, any other process that tries to enter its critical section must loop
continuously in the call to acquire().

This type of mutex lock is also called a spinlock because the process ―spins‖ while waiting for the lock
to become available.

Busy waiting wastes CPU cycles that some other process might be able to use productively.

Spinlocks do have an advantage, however, in that no context switch is required when a process must wait on a lock, and a context switch may take considerable time.
SEMAPHORES:

A semaphore S is an integer variable that, apart from initialization, is accessed only through two standard atomic operations: wait() and signal().

The wait() operation was originally termed P and means to test; the signal() operation was originally called V and means to increment.

The definition of wait() is as follows:

wait(S) {
    while (S <= 0)
        ; /* busy wait */
    S--;
}

The definition of signal() is as follows:

signal(S) {
    S++;
}

➔ Operating systems often distinguish between counting and binary semaphores. The value of a
counting semaphore can range over an unrestricted domain.

➔ The value of a binary semaphore can range only between 0 and 1. Thus, binary semaphores
behave similarly to mutex locks.

➔ Counting semaphores can be used to control access to a given resource consisting of a finite
number of instances. In this case the semaphore is initialized to the number of resources
available.

➔ Each process that wishes to use a resource performs a wait() operation on the semaphore
When a process releases a resource, it performs a signal() operation.

➔ When the count for the semaphore goes to 0, all resources are being used. After that,
processes that wish to use a resource will block until the count becomes greater than 0.

SEMAPHORE IMPLEMENTATION:

The main disadvantage of the semaphore definition given here is that it requires busy waiting. While one process is in its critical section, any other process that tries to enter its critical section must loop continuously in the entry code.

The mutual-exclusion implementation with semaphores is given by

do {
    wait(mutex);
        /* critical section */
    signal(mutex);
        /* remainder section */
} while (TRUE);

To overcome the need for busy waiting, we can modify the wait() and signal() operations.

When a process executes the wait() operation and finds that the semaphore value is not positive, it
must wait. However, rather than engaging in busy waiting, the process can block itself.

The block operation places a process into a waiting queue associated with the semaphore. Then
control is transferred to the CPU scheduler, which selects another process to execute.

A process that is blocked, waiting on a semaphore S, should be restarted when some other process
executes a signal() operation.

The process is restarted by a wakeup() operation, which changes the process fromthe waiting state to
the ready state.

To implement semaphores under this definition, we define a semaphore as follows:

typedef struct { int value;

struct process *list;

} semaphore

Each semaphore has an integer value and a list of processes list. When a process must wait on a
semaphore, it is added to the list of processes.

A signal() operation removes one process from the list of waiting processes and awakens that
process.

The wait() semaphore operation can be defined as

wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();
    }
}

The signal() semaphore operation can be defined as

signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}

The block() and wakeup() operations are provided by the operating system as system calls.

DEADLOCKS AND STARVATION:

The implementation of a semaphore with a waiting queue may result in a situation where two or more processes are waiting indefinitely for an event that can be caused only by one of the waiting processes. This situation is called a deadlock.

Example: Consider a system consisting of two processes, P0 and P1, each accessing two semaphores, S and Q, set to the value 1.

P0 executes wait(S) and P1 executes wait(Q).

When P0 executes wait(Q), it must wait until P1 executes signal(Q). Similarly, when P1 executes wait(S), it must wait until P0 executes signal(S). Since these signal() operations cannot be executed, P0 and P1 are deadlocked.

PRIORITY INVERSION:

Assume we have three processes—L, M, and H—whose priorities follow the order L < M < H. The
process H requires resource R, which is currently being accessed by process L.

Process H would wait for L to finish using resource R. Suppose that process M becomes runnable,
thereby preempting process L.

Indirectly, a process with a lower priority (process M) has affected process H, which is waiting for L to release resource R. This problem is known as priority inversion.

Priority-inheritance can solve the problem of priority inversion.

According to this protocol, all processes that are accessing resources needed by a higher-priority
process inherit the higher priority until they are finished with the resources that are requested.
When they are finished, their priorities revert to their original values.

CLASSIC PROBLEMS OF SYNCHRONIZATION:

1. Bounded Buffer problem:

In this problem, the producer process produces the data and the consumer processes consumes the data.
Both of the process share the following data structures:

int n;
semaphore mutex = 1;
semaphore empty = n;
semaphore full = 0;

Assume that the pool consists of n buffers, each capable of holding one item.

The mutex semaphore provides mutual exclusion for accesses to the buffer pool and is initialized to the value 1. The empty and full semaphores count the number of empty and full buffers.

The semaphore empty is initialized to the value n; the semaphore full is initialized to the value 0.

Code for the producer process:

do {
    ...
    /* produce an item in next_produced */
    ...
    wait(empty);
    wait(mutex);
    ...
    /* add next_produced to the buffer */
    ...
    signal(mutex);
    signal(full);
} while (true);

Code for the consumer process:

do {
    wait(full);
    wait(mutex);
    ...
    /* remove an item from buffer to next_consumed */
    ...
    signal(mutex);
    signal(empty);
    ...
    /* consume the item in next_consumed */
} while (true);
THE DINING-PHILOSOPHERS PROBLEM:

Consider five philosophers who spend their lives thinking and eating. The philosophers share a circular
table surrounded by five chairs, each belonging to one philosopher.

In the center of the table is a bowl of rice, and the table is laid with five single chopsticks.

When a philosopher gets hungry she tries to pick up the two chopsticks that are closest to her.

A philosopher may pick up only one chopstick at a time. Obviously, she cannot pick up a chopstick that
is already in the hand of a neighbor.

When a hungry philosopher has both her chopsticks at the same time, she eats without releasing the
chopsticks. When she is finished eating, she puts down both chopsticks.

One simple solution is to represent each chopstick with a semaphore.

A philosopher tries to grab a chopstick by executing a wait() operation on that semaphore. She releases
her chopsticks by executing the signal () operation on the appropriate semaphores.

The shared data is semaphore chopstick[5], where all the elements of chopstick are initialized to 1. The structure of philosopher i is:

do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
    ...
    /* eat for a while */
    ...
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
    ...
    /* think for a while */
    ...
} while (true);

Several possible remedies to the deadlock problem are the following:

Allow at most four philosophers to be sitting simultaneously at the table.

Allow a philosopher to pick up her chopsticks only if both chopsticks are available (to do this, she must
pick them up in a critical section).

Use an asymmetric solution—that is, an odd-numbered philosopher picks up first her left chopstick and
then her right chopstick, whereas an even numbered philosopher picks up her right chopstick and
then her left chopstick

THE READERS WRITERS PROBLEM:

A database is to be shared among several concurrent processes.

Some of these processes may want only to read the database, whereas others may want to update (that is,
to read and write) the database.

We distinguish between these two types of processes by referring to the former as readers and to the
latter as writers.

If two readers access the shared data simultaneously, no adverse effects will result.

However, if a writer and some other process (either a reader or a writer) access the database
simultaneously, problems may occur.

To ensure that these difficulties do not arise, we require that the writers have exclusive access to the
shared database while writing to the database. This synchronization problem is referred to as the
readers–writers problem.

In the solution to the first readers–writers problem, the reader processes share the following data
structures:

semaphore rw_mutex = 1;
semaphore mutex = 1;
int read_count = 0;

The semaphores mutex and rw_mutex are initialized to 1; read_count is initialized to 0. The semaphore rw_mutex is common to both reader and writer processes.

Code for writer process:

do {
    wait(rw_mutex);
    ...
    /* writing is performed */
    ...
    signal(rw_mutex);
} while (true);

The mutex semaphore is used to ensure mutual exclusion when the variable read_count is updated. The read_count variable keeps track of how many processes are currently reading the object.

The semaphore rw_mutex functions as a mutual-exclusion semaphore for the writers.

Code for readers process:

do {
    wait(mutex);
    read_count++;
    if (read_count == 1)
        wait(rw_mutex);
    signal(mutex);
    ...
    /* reading is performed */
    ...
    wait(mutex);
    read_count--;
    if (read_count == 0)
        signal(rw_mutex);
    signal(mutex);
} while (true);

MONITORS:

Although semaphores provide a convenient and effective mechanism for process synchronization, using
them incorrectly can result in timing errors that are difficult to detect.

EXAMPLE: Suppose that a process interchanges the order in which the wait() and signal() operations on the semaphore mutex are executed, resulting in the following execution:

signal(mutex);
...
critical section
...
wait(mutex);
In this situation, several processes may be executing in their critical sections simultaneously, violating
the mutual-exclusion requirement.

Suppose that a process replaces signal(mutex) with wait(mutex). That is, it executes

wait(mutex);

...

critical section

...

wait(mutex);

In this case, a deadlock will occur. To deal with such errors, one fundamental high-level synchronization construct, called the monitor type, is used.

A monitor type is an ADT that includes a set of programmer defined operations that are provided with
mutual exclusion within the monitor.

The monitor type also declares the variables whose values define the state of an instance of that type,
along with the bodies of functions that operate on those variables.

monitor monitor_name
{
    /* shared variable declarations */

    function P1(...) {
        ...
    }

    function P2(...) {
        ...
    }
    ...
    function Pn(...) {
        ...
    }

    initialization_code(...) {
        ...
    }
}

Thus, a function defined within a monitor can access only those variables declared locally within the
monitor and its formal parameters. Similarly, the local variables of a monitor can be accessed by
only the local functions.

The monitor construct ensures that only one process at a time is active within the monitor.

• The monitor construct also provides synchronization mechanisms through the condition construct. A programmer who needs to write a tailor-made synchronization scheme can define one or more variables of type condition:

condition x, y;

• The only operations that can be invoked on a condition variable are wait() and signal().

• The operation x.wait(); means that the process invoking this operation is suspended until
another process invokes x.signal();

• The x.signal() operation resumes exactly one suspended process.

• Now suppose that, when the x.signal() operation is invoked by a process P, there exists a
suspended process associated with condition x.

• Clearly, if the suspended process Q is allowed to resume its execution, the signaling process
P must wait. Otherwise, both P and Q would be active simultaneously within the monitor.

• Note, however, that conceptually both processes can continue with their execution. Two possibilities exist:

• Signal and wait. P either waits until Q leaves the monitor or waits for another condition.

• Signal and continue. Q either waits until P leaves the monitor or waits for another condition.
Dining-Philosophers Solution Using Monitors:


• The solution imposes the restriction that a philosopher may pick up her chopsticks only if
both of them are available.

monitor DiningPhilosophers
{
    enum {THINKING, HUNGRY, EATING} state[5];
    condition self[5];

    void pickup(int i) {
        state[i] = HUNGRY;
        test(i);
        if (state[i] != EATING)
            self[i].wait();
    }

    void putdown(int i) {
        state[i] = THINKING;
        test((i + 4) % 5);
        test((i + 1) % 5);
    }

    void test(int i) {
        if ((state[(i + 4) % 5] != EATING) &&
            (state[i] == HUNGRY) &&
            (state[(i + 1) % 5] != EATING)) {
            state[i] = EATING;
            self[i].signal();
        }
    }

    initialization_code() {
        for (int i = 0; i < 5; i++)
            state[i] = THINKING;
    }
}

To code this solution, we need to distinguish among three states in which we may find a philosopher.
For this purpose, we introduce the following data structure:

enum {THINKING, HUNGRY, EATING} state[5];

Philosopher i can set the variable state[i] = EATING only if her two neighbors are not eating:

(state[(i+4) % 5] != EATING) and (state[(i+1) % 5] != EATING).

Also declare condition self[5]; that allows philosopher i to delay herself when she is hungry but is
unable to obtain the chopsticks she needs.

Each philosopher, before starting to eat, must invoke the operation pickup(). This may result in the
suspension of the philosopher process.

After the successful completion of the operation, the philosopher may eat. Following this, the
philosopher invokes the putdown() operation.

Thus, philosopher i must invoke the operations pickup() and putdown() in the following sequence:

DiningPhilosophers.pickup(i);
...
eat
...
DiningPhilosophers.putdown(i);

This solution ensures that no two neighbors are eating simultaneously and that no deadlocks will
occur.

DEADLOCKS:-

➔ When a process requests a resource that is not available at that time, the process enters a waiting state. A waiting process may never again change state, because the resources it has requested are held by other waiting processes. This situation is called a deadlock.

System Model:-
➔ A system consists of a finite number of resources distributed among a number of competing processes. These resources are partitioned into several types, each consisting of some number of identical instances.
➔ A process must request a resource before using it and must release the resource after using it. It may request as many resources as it needs to carry out its designated task, but the number requested may not exceed the total number of resources available.
➔ A process may utilize a resource in only the following sequence:
1. Request:- If the request cannot be granted immediately, then the requesting process must wait until it can acquire the resource.
2. Use:- The process can operate on the resource.
3. Release:- The process releases the resource after using it.

➔ Deadlock may involve different types of resources.


For eg:- Consider a system with one printer and one tape drive. Suppose process Pi currently holds the printer and process Pj holds the tape drive. If process Pi requests the tape drive and process Pj requests the printer, then a deadlock occurs.
Multithreaded programs are good candidates for deadlock because multiple threads compete for shared resources.

Deadlock Characterization:-

Necessary Conditions:-

A deadlock situation can arise if the following four conditions hold simultaneously in a system:-
1. Mutual Exclusion:-
Only one process must hold the resource at a time. If any other process requests for the resource,
the requesting process must be delayed until the resource has been released.
2. Hold and Wait:-
A process must be holding at least one resource and waiting to acquire
additional resources that are currently being held by the other process.
3. No Preemption:-
Resources can’t be preempted i.e., only the process holding the resources
must release it after the process has completed its task.
4. Circular Wait:-
A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn-1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.
All four conditions must hold for a deadlock to occur.

Resource Allocation Graph:-


➔ Deadlocks are described by using a directed graph called system resource allocation
graph. The graph consists of set of vertices (v) and set of edges (e).
➔ The set of vertices (V) is partitioned into two different types of nodes:
P = {P1, P2, ..., Pn}, the set consisting of all the active processes, and
R = {R1, R2, ..., Rn}, the set consisting of all resource types in the system.
➔ A directed edge from process Pi to resource type Rj, denoted Pi -> Rj, indicates that Pi has requested an instance of resource type Rj and is currently waiting for it. This edge is called a request edge.
➔ A directed edge Rj -> Pi signifies that an instance of resource type Rj has been allocated to process Pi. This is called an assignment edge.

➔ If the graph contains no cycles, then no process in the system is deadlocked. If the graph contains a cycle, then a deadlock may exist.
➔ If each resource type has exactly one instance, then a cycle implies that a deadlock has occurred. If each resource type has several instances, then a cycle does not necessarily imply that a deadlock has occurred.
Methods for Handling Deadlocks:-

There are three ways to deal with the deadlock problem:


• We can use a protocol to prevent deadlocks ensuring that the system will never enter
into the deadlock state.
• We allow a system to enter into deadlock state, detect it and recover from it.
• We ignore the problem and pretend that the deadlock never occur in the system.
This is used by most OS including UNIX.

➔ To ensure that deadlocks never occur, the system can use either a deadlock-prevention or a deadlock-avoidance scheme.
➔ Deadlock prevention is a set of methods for ensuring that at least one of the necessary conditions cannot hold.
➔ Deadlock avoidance requires that the OS be given advance information about which resources a process will request and use during its lifetime.
➔ If a system uses neither deadlock avoidance nor deadlock prevention, then a deadlock situation may occur. In this environment, the system can provide an algorithm that examines the state of the system to determine whether a deadlock has occurred, together with an algorithm to recover from the deadlock.
➔ An undetected deadlock will result in deterioration of system performance.

Deadlock Prevention:-

➔ For a deadlock to occur, each of the four necessary conditions must hold. If at least one of these conditions does not hold, then we can prevent the occurrence of deadlock.
1. Mutual Exclusion:-
This condition holds for non-sharable resources.
Eg:- A printer can be used by only one process at a time.
Mutual exclusion is not required for sharable resources, and thus they cannot be involved in a deadlock. Read-only files are a good example of a sharable resource: a process never needs to wait to access one. In general, however, we cannot prevent deadlocks by denying the mutual-exclusion condition, because some resources are intrinsically non-sharable.
2. Hold and Wait:-
This condition can be eliminated by forcing a process to release all the resources it holds whenever it requests a resource that is not available.
• One protocol that can be used is that each process be allocated all of its resources before it starts execution.
Eg:- consider a process that copies data from a tape drive to a disk file, sorts the file, and then prints the results to a printer. If all the resources are allocated at the beginning, then the tape drive, disk file, and printer are held by the process for its entire run. The main problem with this is low resource utilization: the printer is needed only at the end, yet it is held from the beginning, so no other process can use it.
• Another protocol allows a process to request resources only when it holds none. For example, the process is first allocated the tape drive and disk file, performs the required operation, and releases both; it then requests the disk file and the printer. A problem with both protocols is that starvation is possible.

3. No Preemption:-
To ensure that this condition never occurs the resources must be
preempted. The following protocol is used.
• If a process is holding some resource and request another resource that cannot be
immediately allocated to it, then all the resources currently held by the requesting
process are preempted and added to the list of resources for which other processes
may be waiting. The process will be restarted only when it regains the old
resources and the new resources that it is requesting.
• When a process requests resources, we first check whether they are available. If they are, we allocate them. If not, we check whether they are allocated to some other process that is itself waiting for additional resources. If so, we preempt the desired resources from the waiting process and allocate them to the requesting process. If the resources are neither available nor held by a waiting process, the requesting process must wait.
4. Circular Wait:-
The fourth and final condition for deadlock is the circular-wait condition. One way to ensure that this condition never holds is to impose a total ordering on all resource types and to require that each process requests resources in increasing order of enumeration.
Let R={R1,R2,...,Rn} be the set of resource types. We assign each resource type a unique integer value, which allows us to compare two resources and determine whether one precedes the other in our ordering.
Eg:- we can define a one-to-one function F: R -> N as follows:
F(tape drive)=1
F(disk drive)=5
F(printer)=12
Deadlock can be prevented by using the following protocol:-
• Each process can request resources only in increasing order of enumeration. A process can initially request any number of instances of a resource type, say Ri, and it can request instances of resource type Rj only if F(Rj) > F(Ri).
• Alternatively, whenever a process requests an instance of resource type Rj, it must have released any resources Ri such that F(Ri) >= F(Rj).
If these protocols are used, then the circular-wait condition cannot hold.

Deadlock Avoidance:-
➔ Deadlock-prevention algorithms may lead to low device utilization and reduced system throughput.
➔ Avoiding deadlocks requires additional information about how resources are to be requested. With knowledge of the complete sequence of requests and releases, we can decide for each request whether or not the process should wait.
➔ For each request, the system must consider the resources currently available, the resources currently allocated to each process, and the future requests and releases of each process, in order to decide whether the current request can be satisfied or must wait to avoid a possible future deadlock.
➔ A deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that a circular-wait condition can never exist. The resource-allocation state is defined by the number of available and allocated resources and the maximum demands of the processes.

Safe State:-
➔ A state is safe if there exists at least one order in which all the processes can run to completion without resulting in a deadlock.
➔ A system is in a safe state only if there exists a safe sequence.
➔ A sequence of processes <P1, P2, ..., Pn> is a safe sequence for the current allocation state if, for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all Pj with j < i.
➔ If the resources that Pi needs are not immediately available, then Pi can wait until all Pj have finished and released their resources; Pi can then obtain all of its needed resources to complete its designated task.
➔ A safe state is not a deadlock state.
➔ Whenever a process requests a resource that is currently available, the system must decide whether the resource can be allocated immediately or whether the process must wait. The request is granted only if the allocation leaves the system in a safe state.
➔ In this scheme, a process may have to wait even when the resource it requests is currently available. Thus resource utilization may be lower than it would be without a deadlock-avoidance algorithm.

Resource Allocation Graph Algorithm:-

➔ This algorithm is used only if every resource type has a single instance. In addition to
the request edge and the assignment edge, a new edge called a claim edge is used.
For eg:- A claim edge Pi → Rj indicates that process Pi may request Rj in the future.
A claim edge is represented by a dashed line.
• When process Pi actually requests resource Rj, the claim edge Pi → Rj is
converted to a request edge.
• When resource Rj is released by process Pi, the assignment edge Rj → Pi is
reconverted to the claim edge Pi → Rj.
➔ When a process Pi requests resource Rj, the request is granted only if converting the
request edge Pi → Rj to an assignment edge Rj → Pi does not result in a cycle. A
cycle-detection algorithm is used to check this; if there is no cycle, allocating the
resource to the process leaves the system in a safe state.
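
The check above can be sketched in a few lines. This is an illustrative sketch, not from any library: the graph is stored as a plain adjacency map in which claim, request, and assignment edges are all kept together, since the cycle test treats them alike; the function names are assumptions.

```python
# Sketch: grant a request only if converting the request edge Pi -> Rj
# into the assignment edge Rj -> Pi leaves the graph acyclic.

def has_cycle(graph):
    """Detect a cycle in a directed graph via depth-first search."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def dfs(u):
        color[u] = GRAY
        for v in graph.get(u, []):
            if color.get(v, WHITE) == GRAY:      # back edge: cycle found
                return True
            if color.get(v, WHITE) == WHITE and dfs(v):
                return True
        color[u] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in list(graph))

def grant_request(graph, process, resource):
    """Convert the request edge process -> resource into the assignment
    edge resource -> process only if that leaves the graph acyclic;
    otherwise roll back, and the request must wait."""
    graph.setdefault(process, []).remove(resource)   # drop request edge
    graph.setdefault(resource, []).append(process)   # tentative assignment
    if has_cycle(graph):
        graph[resource].remove(process)              # roll back: must wait
        graph[process].append(resource)
        return False
    return True
```

For example, if R1 is assigned to P1 and P1 requests a free R2, the request is granted; but if granting would close a cycle through another process's claim edge, it is refused and the edges are restored.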

Banker’s Algorithm:-

➔ This algorithm is applicable to systems with multiple instances of each resource type,
but it is less efficient than the resource-allocation-graph algorithm.
➔ When a new process enters the system, it must declare the maximum number of resources
of each type that it may need. This number may not exceed the total number of resources
in the system. The system must then determine whether the allocation of the requested
resources will leave the system in a safe state. If so, the resources are allocated;
otherwise the process must wait until other processes release enough resources.
➔ Several data structures are used to implement the banker’s algorithm. Let n be the
number of processes in the system and m be the number of resource types. We need the
following data structures:-
• Available:- A vector of length m indicating the number of available resources of
each type. If Available[j]=k, then k instances of resource type Rj are available.
• Max:- An n*m matrix defining the maximum demand of each process. If Max[i,j]=k,
then Pi may request at most k instances of resource type Rj.
• Allocation:- An n*m matrix defining the number of resources of each type currently
allocated to each process. If Allocation[i,j]=k, then Pi is currently allocated k
instances of resource type Rj.
• Need:- An n*m matrix indicating the remaining resource need of each process. If
Need[i,j]=k, then Pi may need k more instances of resource type Rj to complete its
task. So Need[i,j] = Max[i,j] - Allocation[i,j]

Safety Algorithm:-
➔ This algorithm is used to find out whether or not a system is in a safe state.
Step 1. Let Work and Finish be vectors of length m and n respectively.
Initialize Work = Available and
Finish[i] = false for i = 1, 2, 3, ..., n
Step 2. Find an i such that both
Finish[i] = false
Need(i) <= Work
If no such i exists, go to step 4.
Step 3. Work = Work + Allocation(i)
Finish[i] = true
Go to step 2.
Step 4. If Finish[i] = true for all i, then the system is in a safe state.
This algorithm may require an order of m*n^2 operations to decide whether a state
is safe.
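
The four steps above translate almost directly into code. A minimal sketch, assuming resources are plain Python lists with one entry per resource type (the name is_safe is illustrative, not from any library):

```python
# Safety algorithm: decide whether the current allocation state is safe
# and, if so, produce one safe sequence of process indices.

def is_safe(available, allocation, need):
    """Return (True, safe_sequence) if the state is safe, else (False, [])."""
    n = len(allocation)                  # number of processes
    work = list(available)               # Step 1: Work = Available
    finish = [False] * n                 #         Finish[i] = false for all i
    sequence = []
    while len(sequence) < n:
        for i in range(n):               # Step 2: find an eligible process
            if not finish[i] and all(
                    need[i][j] <= work[j] for j in range(len(work))):
                # Step 3: Pi can run to completion and release its resources.
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                sequence.append(i)
                break
        else:
            return False, []             # Step 4: some Finish[i] still false
    return True, sequence                # Step 4: Finish[i] = true for all i
```

On the snapshot used in the example that follows, this finds a safe sequence beginning with P1. Note that the exact order returned depends on which eligible process is examined first, so it need not match <P1, P3, P4, P2, P0>; a state may admit several safe sequences.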

Resource Request Algorithm:-


Let Request(i) be the request vector of process Pi. If Request(i)[j]=k, then process Pi wants k
instances of resource type Rj. When a request for resources is made by process Pi, the
following actions are taken:
Step 1. If Request(i) <= Need(i), go to step 2; otherwise raise an error condition,
since the process has exceeded its maximum claim.
Step 2. If Request(i) <= Available, go to step 3; otherwise Pi must wait, since the
resources are not available.
Step 3. Have the system pretend to allocate the requested resources to process Pi
by modifying the state as follows:
Available = Available - Request(i)
Allocation(i) = Allocation(i) + Request(i)
Need(i) = Need(i) - Request(i)
Step 4. If the resulting resource-allocation state is safe, the transaction is completed
and Pi is allocated its resources. If the new state is unsafe, then Pi must wait for
Request(i) and the old resource-allocation state is restored.
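
The four steps can be sketched as follows. This is a self-contained illustration, not a standard API; the helper names (_leq, _is_safe, request_resources) are assumptions, and state is held in plain lists:

```python
# Resource-request algorithm: tentatively allocate, test safety, and
# commit only if the resulting state is safe.

def _leq(a, b):
    return all(x <= y for x, y in zip(a, b))

def _add(a, b):
    return [x + y for x, y in zip(a, b)]

def _sub(a, b):
    return [x - y for x, y in zip(a, b)]

def _is_safe(available, allocation, need):
    work, finish = list(available), [False] * len(allocation)
    while True:
        for i, done in enumerate(finish):
            if not done and _leq(need[i], work):
                work, finish[i] = _add(work, allocation[i]), True
                break
        else:
            return all(finish)

def request_resources(i, request, available, allocation, need):
    """Try to grant Request(i); update the state only if that is safe."""
    if not _leq(request, need[i]):                  # Step 1
        raise ValueError("process exceeded its maximum claim")
    if not _leq(request, available):                # Step 2: Pi must wait
        return False
    # Step 3: pretend to allocate (work on copies).
    new_avail = _sub(available, request)
    new_alloc_i = _add(allocation[i], request)
    new_need_i = _sub(need[i], request)
    trial_alloc = allocation[:i] + [new_alloc_i] + allocation[i + 1:]
    trial_need = need[:i] + [new_need_i] + need[i + 1:]
    if _is_safe(new_avail, trial_alloc, trial_need):  # Step 4: commit
        available[:], allocation[i], need[i] = new_avail, new_alloc_i, new_need_i
        return True
    return False                                    # unsafe: Pi must wait
```

On the snapshot below, P1's request (1,0,2) is granted (the new state is safe), while a subsequent request of (0,2,0) by P0 would be refused because it leads to an unsafe state, even though the resources are available.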

• Example: 5 processes P0 through P4; 3 resource types A (10 instances),
B (5 instances), and C (7 instances).
• Snapshot at time T0:

        Allocation   Max    Available
        A B C        A B C  A B C
P0      0 1 0        7 5 3  3 3 2
P1      2 0 0        3 2 2
P2      3 0 2        9 0 2
P3      2 1 1        2 2 2
P4      0 0 2        4 3 3
• The content of the matrix Need is defined to be Max – Allocation.
Need
ABC
P0 743
P1 122
P2 600
P3 011
P4 431
• The system is in a safe state since the sequence < P1, P3, P4, P2, P0> satisfies safety
criteria.
• Suppose now that P1 requests one additional instance of A and two instances of C,
so Request(1) = (1,0,2). Check that Request(1) <= Available ((1,0,2) <= (3,3,2)) ⟶ true.
Pretending the request is granted yields the new state:

        Allocation   Need   Available
        A B C        A B C  A B C
P0      0 1 0        7 4 3  2 3 0
P1      3 0 2        0 2 0
P2      3 0 2        6 0 0
P3      2 1 1        0 1 1
P4      0 0 2        4 3 1
• Executing the safety algorithm shows that the sequence <P1, P3, P4, P0, P2> satisfies the safety
requirement, so the request can be granted immediately.
• Can request for (3,3,0) by P4 be granted?
• Can request for (0,2,0) by P0 be granted?
Deadlock Detection:-
If a system does not employ either deadlock prevention or a deadlock avoidance
algorithm then a deadlock situation may occur. In this environment the system may
provide
• An algorithm that examines the state of the system to determine whether a
deadlock has occurred.
• An algorithm to recover from the deadlock.

Single Instances of each Resource Type:-


➔ If all resources have only a single instance, then we can define a deadlock-
detection algorithm that uses a variant of the resource-allocation graph called a
wait-for graph. This graph is obtained by removing the resource nodes and
collapsing the appropriate edges.
➔ An edge from Pi to Pj in the wait-for graph implies that Pi is waiting for Pj to
release a resource that Pi needs.
➔ An edge Pi → Pj exists in the wait-for graph if and only if the corresponding
resource-allocation graph contains the edges Pi → Rq and Rq → Pj for some resource Rq.
➔ A deadlock exists in the system if and only if the wait-for graph contains a cycle.
To detect deadlocks, the system needs an algorithm that searches for a cycle in the graph.
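
The construction above can be sketched directly: collapse Pi → Rq and Rq → Pj into Pi → Pj, then look for a cycle. This is an illustrative sketch with assumed data shapes (a dict of pending requests and a dict mapping each single-instance resource to its holder), not a standard API:

```python
# Build the wait-for graph from a single-instance resource-allocation
# graph, then detect deadlock as a cycle in that graph.

def wait_for_graph(requests, assignments):
    """requests: {process: [resources it is waiting for]};
    assignments: {resource: process currently holding it}."""
    wfg = {}
    for p, resources in requests.items():
        for r in resources:
            holder = assignments.get(r)     # Rq -> Pj edge, if any
            if holder is not None and holder != p:
                wfg.setdefault(p, set()).add(holder)   # Pi -> Pj
    return wfg

def has_deadlock(wfg):
    """Deadlock iff the wait-for graph contains a cycle (DFS back edge)."""
    visiting, done = set(), set()

    def dfs(u):
        visiting.add(u)
        for v in wfg.get(u, ()):
            if v in visiting or (v not in done and dfs(v)):
                return True
        visiting.discard(u)
        done.add(u)
        return False

    return any(p not in done and dfs(p) for p in list(wfg))
```

For example, P1 waiting for R1 (held by P2) while P2 waits for R2 (held by P1) produces the cycle P1 → P2 → P1, i.e. a deadlock.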
Several Instances of a Resource Types:-
➔ The wait-for graph is applicable only when every resource type has a single
instance. The following algorithm applies if there are several instances of a
resource type. These data structures are used:-
• Available:-
o A vector of length m indicating the number of available resources of
each type.
• Allocation:-
o An n*m matrix defining the number of resources of each type currently
allocated to each process.
• Request:-
o An n*m matrix indicating the current request of each process. If
Request[i,j]=k, then Pi is requesting k more instances of resource type Rj.

Step 1. Let Work and Finish be vectors of length m and n respectively.
Initialize Work = Available.
For i = 1, 2, ..., n: if Allocation(i) != 0 then Finish[i] = false,
else Finish[i] = true.

Step 2. Find an index i such that both

Finish[i] = false
Request(i) <= Work
If no such i exists, go to step 4.

Step 3. Work = Work + Allocation(i)

Finish[i] = true
Go to step 2.

Step 4. If Finish[i] = false for some i, 1 <= i <= n, then the system is in a
deadlocked state. Moreover, if Finish[i] = false, then process Pi is deadlocked.

This algorithm requires an order of m*n^2 operations to detect whether the system is in
a deadlocked state.
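
The detection algorithm is the safety algorithm with Request in place of Need, plus the optimistic initialization of Finish. A minimal sketch (the name detect_deadlock is illustrative, not from any library):

```python
# Multi-instance deadlock detection: returns the indices of deadlocked
# processes, or an empty list if the system is not deadlocked.

def detect_deadlock(available, allocation, request):
    n = len(allocation)
    work = list(available)                    # Step 1: Work = Available
    # A process holding no resources cannot be part of a deadlock.
    finish = [all(a == 0 for a in allocation[i]) for i in range(n)]
    progressed = True
    while progressed:                         # Steps 2-3
        progressed = False
        for i in range(n):
            if not finish[i] and all(
                    request[i][j] <= work[j] for j in range(len(work))):
                # Optimistically assume Pi gets its request, finishes,
                # and releases everything it holds.
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                progressed = True
    # Step 4: any process with Finish[i] = false is deadlocked.
    return [i for i in range(n) if not finish[i]]
```

On the snapshot in the example that follows, this reports no deadlock; after P2's extra request for one instance of C, it reports P1, P2, P3, and P4 as deadlocked.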
• Example: five processes P0 through P4; three resource types
A (7 instances), B (2 instances), and C (6 instances).
• Snapshot at time T0:

        Allocation   Request  Available
        A B C        A B C    A B C
P0      0 1 0        0 0 0    0 0 0
P1      2 0 0        2 0 2
P2      3 0 3        0 0 0
P3      2 1 1        1 0 0
P4      0 0 2        0 0 2
• Sequence <P0, P2, P3, P1, P4> will result in Finish[i] = true for all i, so the
system is not deadlocked.
• Suppose now that P2 requests an additional instance of type C. The Request matrix
becomes:
Request
     A B C
P0   0 0 0
P1   2 0 2
P2   0 0 1
P3   1 0 0
P4   0 0 2

• State of system?
– We can reclaim the resources held by process P0, but there are then insufficient
resources to fulfill the other processes' requests.
– A deadlock exists, consisting of processes P1, P2, P3, and P4.
Detection-Algorithm Usage
• When, and how often, to invoke the detection algorithm depends on:
– How often is a deadlock likely to occur?
– How many processes will need to be rolled back (one for each disjoint cycle)?
• If the detection algorithm is invoked arbitrarily, there may be many cycles in the
resource graph, and we would not be able to tell which of the many deadlocked
processes “caused” the deadlock.
Recovery from Deadlock: Process Termination
• Abort all deadlocked processes.
• Abort one process at a time until the deadlock cycle is eliminated.
• In which order should we choose to abort?
– Priority of the process.
– How long process has computed, and how much longer to completion.
– Resources the process has used.
– Resources process needs to complete.
– How many processes will need to be terminated.
– Is process interactive or batch?
Recovery from Deadlock: Resource Preemption
• Selecting a victim – select the process to preempt so as to minimize cost.
• Rollback – return the process to some safe state and restart it from that state, or
restart it from the beginning.
• Starvation – the same process may always be picked as the victim; to prevent this,
include the number of times a process has been chosen as a victim in the cost factor.

Example: For the given snapshot of a system,

1. Calculate the content of the Need matrix.
2. Check whether the system is in a safe state.
3. Determine the total number of each type of resource.
Solution:

1. The Content of the need matrix can be calculated by using the formula given below:

Need = Max – Allocation

Need
     A B C
P0   3 2 1
P1   1 1 0
P2   5 0 1
P3   7 3 3
P4   0 0 0


2. Let us now check for the safe state.

Safe sequence:

1. For Process P0, Need = (3, 2, 1) and

Available = (2, 1, 0)
Need <= Available = False

So, the system will move to the next process.

2. For Process P1, Need = (1, 1, 0)

Available = (2, 1, 0)

Need <= Available = True

Request of P1 is granted.

Available = Available +Allocation

= (2, 1, 0) + (2, 1, 2)

= (4, 2, 2) (New Available)

3. For Process P2, Need = (5, 0, 1)

Available = (4, 2, 2)

Need <=Available = False

So, the system will move to the next process.

4. For Process P3, Need = (7, 3, 3)

Available = (4, 2, 2)

Need <=Available = False

So, the system will move to the next process.

5. For Process P4, Need = (0, 0, 0)

Available = (4, 2, 2)

Need <= Available = True

Request of P4 is granted.

Available = Available + Allocation

= (4, 2, 2) + (1, 1, 2)

= (5, 3, 4) now, (New Available)

6. Now again check for Process P2, Need = (5, 0, 1)

Available = (5, 3, 4)

Need <= Available = True


Request of P2 is granted.

Available = Available + Allocation

= (5, 3, 4) + (4, 0, 1)

= (9, 3, 5) now, (New Available)

7. Now again check for Process P3, Need = (7, 3, 3)

Available = (9, 3, 5)

Need <=Available = True

The request for P3 is granted.

Available = Available +Allocation

= (9, 3, 5) + (0, 2, 0) = (9, 5, 5)

8. Now again check for Process P0, Need = (3, 2, 1)

Available = (9, 5, 5)

Need <= Available = True

So, the request will be granted to P0.

Safe sequence: < P1, P4, P2, P3, P0>

The system allocates all the needed resources to each process. So, we can say that the
system is in a safe state.

3. The total amount of resources will be calculated by the following formula:

The total amount of resources = sum of the columns of Allocation + Available

= [8 5 7] + [2 1 0] = [10 6 7]
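
The arithmetic in the walkthrough above can be checked mechanically. A small sketch, using the allocation vectors quoted in the steps (the helper name release is illustrative):

```python
# Verify the Available updates along the safe sequence <P1, P4, P2, P3, P0>
# and the resource totals: release() adds a finished process's allocation
# back to Available.

def release(avail, alloc):
    return [a + b for a, b in zip(avail, alloc)]

avail = [2, 1, 0]                    # initial Available
avail = release(avail, [2, 1, 2])    # P1 finishes
assert avail == [4, 2, 2]
avail = release(avail, [1, 1, 2])    # P4 finishes
assert avail == [5, 3, 4]
avail = release(avail, [4, 0, 1])    # P2 finishes
assert avail == [9, 3, 5]
avail = release(avail, [0, 2, 0])    # P3 finishes
assert avail == [9, 5, 5]

# Total resources = column sums of Allocation + initial Available.
total = [a + b for a, b in zip([8, 5, 7], [2, 1, 0])]
assert total == [10, 6, 7]
```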
