OS unit-3

The document discusses synchronization and deadlocks in concurrent programming, focusing on tools like mutex locks, semaphores, and monitors to manage process coordination and mutual exclusion. It details the critical-section problem, solutions such as Peterson's algorithm, and various types of mutex locks and semaphores. Additionally, it addresses deadlock characterization and methods for handling deadlocks, including prevention, avoidance, and detection strategies.

UNIT-III

Synchronization & Deadlocks


Synchronization Tools: The Critical Section Problem, Peterson’s Solution, Mutex Locks,
Semaphores, Monitors, Classic problems of Synchronization.
Deadlocks: system Model, Deadlock characterization, Methods for handling Deadlocks,
Deadlock prevention, Deadlock avoidance, Deadlock detection, Recovery from Deadlock.

PART-1 (Synchronization)
Process Coordination: Process coordination, or concurrency control, deals with mutual exclusion and synchronization.

Mutual exclusion: ensures that two concurrent activities do not access shared data (a resource) at the same time. The critical region is the set of instructions that only one process may execute at a time.

Synchronization: uses a condition to coordinate the actions of concurrent activities; it is a generalization of mutual exclusion.

More broadly, synchronization is the coordination of multiple processes or threads to ensure orderly execution while avoiding issues like race conditions, deadlocks, or inconsistencies in shared resources.

1. Process Synchronization (OR) Producer and Consumer (OR) Race Condition

Consider a system of cooperating sequential processes or threads, all running asynchronously and possibly sharing data. We illustrated this model with the producer-consumer problem and described how a bounded buffer could be used to enable processes to share memory. That solution allows at most BUFFER_SIZE - 1 items in the buffer at the same time. Suppose we want to modify the algorithm to remedy this deficiency. One possibility is to add an integer variable counter, initialized to 0. counter is incremented every time we add a new item to the buffer and decremented every time we remove one item from the buffer.
The code for the producer process:

while (true)
{
    /* produce an item in nextProduced */
    while (counter == BUFFER_SIZE)
        ;   /* do nothing */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}
The code for the consumer process:
while (true)
{
    while (counter == 0)
        ;   /* do nothing */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
    /* consume the item in nextConsumed */
}
Although both the producer and consumer routines are correct separately, they may not function correctly when executed concurrently. As an illustration, suppose that the value of the variable counter is currently 5 and that the producer and consumer processes execute the statements "counter++" and "counter--" concurrently.
We can show that the value of counter may be incorrect as follows. Note that the statement "counter++" may be implemented in machine language (on a typical machine) as

register1 = counter
register1 = register1 + 1
counter = register1

where register1 is a local CPU register. Similarly, the statement "counter--" is implemented as follows:

register2 = counter
register2 = register2 - 1
counter = register2

where again register2 is a local CPU register. Even though register1 and register2 may be the same physical register (an accumulator, say), remember that the contents of this register will be saved and restored by the interrupt handler.
The concurrent execution of "counter++" and "counter--" is equivalent to a sequential execution where the lower-level statements presented previously are interleaved in some arbitrary order. One such interleaving is

register1 = counter          {register1 = 5}
register1 = register1 + 1    {register1 = 6}
register2 = counter          {register2 = 5}
register2 = register2 - 1    {register2 = 4}
counter = register1          {counter = 6}
counter = register2          {counter = 4}

Notice that we have arrived at the incorrect state "counter == 4", indicating that four buffers are full, when, in fact, five buffers are full. If we reversed the order of the last two statements, we would arrive at the incorrect state "counter == 6".
We would arrive at this incorrect state because we allowed both processes to manipulate
the variable counter concurrently. A situation like this, where several processes access and
manipulate the same data concurrently and the outcome of the execution depends on the
particular order in which the access takes place, is called a race condition. To guard against the
race condition above, we need to ensure that only one process at a time can be manipulating the
variable counter. To make such a guarantee, we require that the processes be synchronized in
some way.
2. The Critical-Section Problem
Consider a system consisting of n processes {P0, P1, ..., Pn-1}. Each process has a segment of code, called a critical section, in which the process may be changing common variables, updating a table, writing a file, and so on. The important feature of the system is that, when one process is executing in its critical section, no other process is to be allowed to execute in its critical section. That is, no two processes are executing in their critical sections at the same time. The critical-section problem is to design a protocol that the processes can use to cooperate. Each process must request permission to enter its critical section. The section of code implementing this request is the entry section. The critical section may be followed by an exit section. The remaining code is the remainder section. The general structure of a typical process Pi is shown below; note the entry section and exit section, which are the important segments of code.
General structure of a typical process Pi:
do {
    /* entry section */
        critical section
    /* exit section */
        remainder section
} while (TRUE);
Solutions to the Critical-Section Problem
Several approaches exist to handle the critical-section problem:
• Peterson’s Algorithm
– A software solution that ensures mutual exclusion using two variables: a boolean
flag array and a "turn" variable.
• Synchronization Primitives
– Locks: Allow only one thread to access the critical section at a time.
– Semaphores: Use counting mechanisms to control access.
– Monitors: High-level constructs that manage synchronization automatically.
• Hardware-Based Solutions
Test-and-Set (TSL) Instruction: Atomic hardware operations to control access.
Compare-and-Swap (CAS): Ensures mutual exclusion at the processor level.
• Operating System Support
Modern OSes provide process synchronization mechanisms like mutexes, condition
variables, and message passing.
A solution to the critical-section problem must satisfy the following three requirements:
(i) Mutual exclusion: If process Pi is executing in its critical section, then no other processes
can be executing in their critical sections.
(ii) Progress: If no process is executing in its critical section and some processes wish to enter

their critical sections, then only those processes that are not executing in their remainder
sections can participate in the decision on which will enter its critical section next, and this
selection cannot be postponed indefinitely.
(iii) Bounded waiting: There exists a bound, or limit, on the number of times that other
processes are allowed to enter their critical sections after a process has made a request to enter
its critical section and before that request is granted.

3. Peterson’s solution

Peterson’s solution is a software-based approach to achieve mutual exclusion in a critical


section problem for two processes. It ensures no two processes enter the critical section
simultaneously and provides progress and bounded waiting properties.
Key Properties of Peterson’s Solution
1. Mutual Exclusion: Only one process can enter the critical section at a time.
2. Progress: If no process is in the critical section, one should be able to enter.
3. Bounded Waiting: A process should not wait indefinitely.
Working of Peterson’s Solution
Peterson’s algorithm uses two shared variables:
 flag[2]: Indicates if a process wants to enter the critical section.
 turn: Indicates whose turn it is to enter the critical section.

Each process follows these steps:


1. Wants to enter: It sets its flag to true and gives the turn to the other process.
2. Waits if the other process is inside: If the other process has the turn and also wants to
enter, it waits.
3. Critical section execution: Once the condition is met, it enters the critical section.
4. Exit section: It sets its flag to false so the other process can proceed.

The structure of process Pi in Peterson's solution:

do {
    flag[i] = TRUE;
    turn = j;
    while (flag[j] && turn == j)
        ;   /* busy wait */
    /* critical section */
    flag[i] = FALSE;
    /* remainder section */
} while (TRUE);

4. Mutex Locks in Synchronization

A Mutex (Mutual Exclusion) lock is a synchronization primitive used to prevent race
conditions in multi-threaded or multi-process environments. It ensures that only one thread
or process can access a shared resource at a time.

In concurrent programming, multiple threads or processes may try to modify a shared resource
simultaneously, leading to inconsistent results.

A Mutex (Mutual Exclusion Lock) allows only one thread to access a critical section at a time,
ensuring consistency.

How Mutex Works

1. Lock (pthread_mutex_lock): A thread must acquire the lock before entering the critical
section.
2. Critical Section: Only one thread executes this section at a time.
3. Unlock (pthread_mutex_unlock): The thread releases the lock after execution, allowing
others to proceed.

Components of Mutex Locks

Below we discuss some of the main components of Mutex Locks.

Mutex Variable − A mutex variable is used to represent the lock. It is a data structure that
maintains the state of the lock and allows threads or processes to acquire and release it.
Lock Acquisition − Threads or processes can attempt to acquire the lock by requesting it. If the
lock is available, the requesting thread or process gains ownership of the lock. Otherwise, it
enters a waiting state until the lock becomes available.
Lock Release − Once a thread or process has finished using the shared resource, it releases the
lock, allowing other threads or processes to acquire it.
Types of Mutex Locks

Mutex locks come in a variety of forms that offer varying levels of capabilities and behavior.

Recursive Mutex
A recursive mutex allows the same thread or process to acquire the lock multiple times without blocking itself. It keeps a count of how many times it has been acquired and requires the same number of releases before it is fully unlocked.

Error-Checking Mutex
An error-checking mutex performs additional error checking on lock operations. By detecting recursive lock acquisition, it guarantees that a thread or process does not try to re-acquire a mutex lock it already holds; such an attempt fails with an error instead of deadlocking.

Timed Mutex
A timed mutex lets a thread or process attempt to acquire the lock for a predetermined amount of time. If the lock does not become available within the allotted time, the acquire operation fails and the thread or process can respond appropriately.

Priority Inheritance Mutex
A priority inheritance mutex (related to the priority ceiling mutex) helps reduce the priority inversion problem. While a low-priority thread or process holds the lock and a higher-priority thread or process is waiting for it, the mutex temporarily raises the priority of the low-priority holder to that of the highest-priority waiter until the lock is released.

Read-Write Mutex
A read-write lock is a synchronization mechanism that permits several threads or processes to read a shared resource concurrently while enforcing mutual exclusion between them during write operations; strictly speaking, it is a generalization of a mutex lock rather than a plain mutex.

5. Synchronization Hardware
This is a hardware-based approach to solving the critical-section problem. We explore several solutions using techniques ranging from hardware instructions to software-based APIs available to application programmers. Hardware features can make any programming task easier and improve system efficiency. In this section, we present some simple hardware instructions that are available on many systems and show how they can be used effectively in solving the critical-section problem.
The critical-section problem could be solved simply in a uniprocessor environment if we could prevent interrupts while a shared variable is being modified: the current sequence of instructions would then be allowed to execute in order without preemption. Unfortunately, this solution is not feasible in a multiprocessor environment. Disabling interrupts on a multiprocessor can be time consuming, as the message must be passed to all the processors. This message passing delays entry into each critical section, and system efficiency decreases.
We have two such instructions: 1. the TestAndSet() instruction and 2. the Swap() instruction.
The definition of the TestAndSet() instruction:

boolean TestAndSet(boolean *target)
{
    boolean rv = *target;
    *target = TRUE;
    return rv;
}

Mutual-exclusion implementation with TestAndSet():

do {
    while (TestAndSet(&lock))
        ;   /* do nothing */
    /* critical section */
    lock = FALSE;
    /* remainder section */
} while (TRUE);

The TestAndSet() instruction is defined above. The important characteristic is that this instruction is executed atomically. Thus, if two TestAndSet() instructions are executed simultaneously (each on a different CPU), they will be executed sequentially in some arbitrary order. If the machine supports the TestAndSet() instruction, then we can implement mutual exclusion by declaring a Boolean variable lock, initialized to false. The structure of process Pi is shown in the code above.
The Swap() instruction, in contrast to the TestAndSet() instruction, operates on the contents of two words; it is defined below. Like the TestAndSet() instruction, it is executed atomically. If the machine supports the Swap() instruction, then mutual exclusion can be provided as follows. A global Boolean variable lock is declared and is initialized to false. In addition, each process has a local Boolean variable key. The structure of process Pi is shown below.
The definition of the Swap() instruction:

void Swap(boolean *a, boolean *b)
{
    boolean temp = *a;
    *a = *b;
    *b = temp;
}

Mutual-exclusion implementation with the Swap() instruction:

do {
    key = TRUE;
    while (key == TRUE)
        Swap(&lock, &key);
    /* critical section */
    lock = FALSE;
    /* remainder section */
} while (TRUE);

6. Semaphores
The various hardware-based solutions to the critical-section problem (using the TestAndSet() and Swap() instructions) are complicated for application programmers to use. To overcome this difficulty, we can use a synchronization tool called a semaphore.
A semaphore S is an integer variable that, apart from initialization, is accessed only through two standard atomic operations: wait() and signal(). The wait() operation was originally termed P (from the Dutch proberen, "to test"); signal() was originally called V (from verhogen, "to increment").

 Wait (P operation): This operation decrements the semaphore's value. If the value is
positive, it proceeds. If it's zero, the thread is blocked until the value becomes positive.

 Signal (V operation): This operation increments the semaphore's value and potentially
unblocks any threads that are waiting on it.

The definition of wait() is as follows:


wait(S) {
    while (S <= 0)
        ;   // no-op
    S--;
}
The definition of signal () is as follows:
signal(S) {
S++;
}
All the modifications to the integer value of the semaphore in the wait ( ) and signal( )
operations must be executed indivisibly. That is, when one process modifies the semaphore
value, no other process can simultaneously modify that same semaphore value.

Types of Semaphores
• Binary Semaphore (Mutex):
– It can only take two values: 0 or 1.
– A binary semaphore is often used for mutual exclusion (mutex), where only one
process can access a critical section at a time.
– It behaves like a lock: if a process enters the critical section, the semaphore value
is set to 0, and the next process must wait until the semaphore is set back to 1
(released).
• Counting Semaphore:
– This type can take any non-negative integer value.
– It is used to control access to a resource that has a finite number of instances.
– For example, a counting semaphore could be used to manage a pool of 3 printers:
the semaphore’s value will be initialized to 3. When a process uses a printer, the
semaphore is decremented by 1, and when it is done, the semaphore is
incremented by 1.

Mutual-exclusion implementation with semaphores
do {
    wait(mutex);
    // critical section
    signal(mutex);
    // remainder section
} while (TRUE);

2. Semaphore Implementation
 The big problem with semaphores as described above is the busy loop
in the wait call, which consumes CPU cycles without doing any useful
work. This type of lock is known as a spinlock, because the lock just
sits there and spins while it waits. While this is generally a bad thing, it
does have the advantage of not invoking context switches, and so it is
sometimes used in multi-processing systems when the wait time is
expected to be short - One thread spins on one processor while
another completes their critical section on another processor.
 Key to the success of semaphores is that the wait and signal
operations be atomic, that is no other process can execute a wait or
signal on the same semaphore at the same time. On single processors
this can be implemented by disabling interrupts during the execution
of wait and signal; Multiprocessor systems have to use more complex
methods, including the use of spinlocking.

Benefits of Semaphores

 Synchronization: Semaphores help in synchronizing processes, ensuring that only one


process has access to a shared resource at a time.
 Deadlock Prevention: By ensuring proper synchronization, semaphores can help avoid
situations where processes are stuck waiting for resources forever.
 Efficient Resource Management: Counting semaphores can be used to manage a pool
of resources, ensuring they are used efficiently and fairly.

Drawbacks and Challenges

 Deadlock: If semaphores are not used properly (for example, if a process locks a
semaphore but never releases it), it could lead to a deadlock.
 Starvation: If a process is always waiting for a resource and never gets a chance to
execute, it might starve.
 Complexity: Correctly implementing semaphores in complex systems can lead to
difficult-to-debug issues.

7. Classic Problems of Synchronization


There are three classic problems of synchronization:
1. Producer-Consumer Problem / Bounded-Buffer Problem
2. Readers-Writers Problem
3. Dining-Philosophers Problem
1. The Bounded-Buffer Problem:

 This is a generalization of the producer-consumer problem wherein


access is controlled to a shared group of buffers of a limited size.
 In this solution, the two counting semaphores "full" and "empty" keep
track of the current number of full and empty buffers respectively
(initialized to 0 and N). The binary semaphore mutex
controls access to the critical section. The producer and consumer
processes are nearly identical: one can think of the producer as
producing full buffers, and the consumer as producing empty buffers.

The structure of the producer process

The structure of the consumer process

2. The Readers-Writers Problem:

 In the readers-writers problem there are some processes ( termed


readers ) who only read the shared data, and never change it, and
there are other processes ( termed writers ) who may change the data
in addition to or instead of reading it. There is no limit to how many

readers can access the data simultaneously, but when a writer
accesses the data, it needs exclusive access.

 There are several variations to the readers-writers problem, most


centered around relative priorities of readers versus writers.

o The first readers-writers problem gives priority to readers. In this


problem, if a reader wants access to the data, and there is not
already a writer accessing it, then access is granted to the
reader. A solution to this problem can lead to starvation of the
writers, as there could always be more readers coming along to
access the data. ( A steady stream of readers will jump ahead of
waiting writers as long as there is currently already another
reader accessing the data, because the writer is forced to wait
until the data is idle, which may never happen if there are
enough readers. )

o The second readers-writers problem gives priority to the writers.


In this problem, when a writer wants access to the data it jumps
to the head of the queue - All waiting readers are blocked, and
the writer gets access to the data as soon as it becomes
available. In this solution the readers may be starved by a steady
stream of writers.

 The following code is an example of the first readers-writers problem,


and involves an important counter and two binary semaphores:

o readcount is used by the reader processes, to count the number


of readers currently accessing the data.
o mutex is a semaphore used only by the readers for controlled
access to readcount.
o rw_mutex is a semaphore used to block and release the writers.
The first reader to access the data will set this lock and the last

reader to exit will release it; The remaining readers do not
touch rw_mutex.
o Note that the first reader to come along will block on rw_mutex if
there is currently a writer accessing the data, and that all
following readers will only block on mutex for their turn to
increment readcount.
The structure of a reader process

The structure of a writer process

 Some hardware implementations provide specific reader-writer locks,
which are accessed using an argument specifying whether access is
requested for reading or writing. The use of reader-writer locks is
beneficial in situations where:
(1) processes can be easily identified as either readers or writers;
(2) there are significantly more readers than writers, making the
additional overhead of the reader-writer lock pay off in terms of
increased concurrency among the readers.

3. The Dining-Philosophers Problem:
Consider five philosophers who spend their lives thinking and eating. The philosophers share a
circular table surrounded by five chairs, each belonging to one philosopher. In the center of the
table is a bowl of rice, and the table is laid with five single chopsticks. When a philosopher
thinks, she does not interact with her colleagues. From time to time, a philosopher gets hungry
and tries to pick up the two chopsticks that are closest to her (the chopsticks that are between her
and her left and right neighbors). A philosopher may pick up only one chopstick at a time.

The situation of the dining philosophers.


One simple solution is to represent each chopstick with a semaphore. A philosopher tries
to grab a chopstick by executing a wait ( ) operation on that semaphore; she releases her
chopsticks by executing the signal( ) operation on the appropriate semaphores. Thus, the shared
data are semaphore chopstick[5];
where all the elements of chopstick are initialized to 1. The structure of philosopher i is
shown in Figure
The structure of philosopher i.
do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
    /* eat */
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
    /* think */
} while (TRUE);
Although this solution guarantees that no two neighbors are eating simultaneously, it
nevertheless must be rejected because it could create a deadlock. Suppose that all five
philosophers become hungry simultaneously and each grabs her left chopstick. All the elements
of chopstick will now be equal to 0. When each philosopher tries to grab her right chopstick, she
will be delayed forever.
We now present possible remedies to the dining-philosophers problem that ensure freedom from
deadlock:
• Allow at most four philosophers to be sitting simultaneously at the table.
• Allow a philosopher to pick up her chopsticks only if both chopsticks are available.
• Use an asymmetric solution; that is, an odd philosopher picks up first her left chopstick and
then her right chopstick, whereas an even philosopher picks up her right chopstick and then her
left chopstick.
Finally, any satisfactory solution to the dining-philosophers problem must guard
against the possibility that one of the philosophers will starve to death. A deadlock-free
solution does not necessarily eliminate the possibility of starvation.
8. Monitors

 Semaphores can be very useful for solving concurrency problems, but


only if programmers use them properly. If even one process fails
to abide by the proper use of semaphores, either accidentally or
deliberately, then the whole system breaks down.
 For this reason a higher-level language construct has been developed,
called monitors.
1.Monitor Usage:
A type, or abstract data type, encapsulates private data with public methods to
operate on that data. A monitor type presents a set of programmer-defined operations that are
provided mutual exclusion within the monitor.
Syntax of a monitor:

monitor monitor_name
{
    /* shared variable declarations */

    function P1 (...) { ... }
    function P2 (...) { ... }
    ...
    function Pn (...) { ... }

    initialization code (...) { ... }
}
Thus, a procedure defined within a monitor can access only those variables declared
locally within the monitor and its formal parameters. Similarly, the local variables of a monitor
can be accessed by only the local procedures.
The monitor construct ensures that only one process at a time can be active within the
monitor. Consequently, the monitor construct, as defined so far, is not sufficiently powerful for
modeling some synchronization schemes. For this purpose, we need to define additional
synchronization mechanisms. These mechanisms are provided by the condition construct. A
programmer who needs to write a tailor- made synchronization scheme can define one or more
variables of type condition:
condition x, y;
The only operations that can be invoked on a condition variable are wait ()and signal(). The
operation
x.wait() ;
means that the process invoking this operation is suspended until another process invokes
x . signal ( ) ;
The x.signal() operation resumes exactly one suspended process. If no process is suspended, then the signal() operation has no effect; that is, the state of x is the same as if the operation had never been executed. Contrast this operation with the signal() operation associated with semaphores, which always affects the state of the semaphore.

Schematic view of a monitor. Monitor with condition variables.

Now suppose that, when the x. signal() operation is invoked by a process P, there is a
suspended process Q associated with condition x. Clearly, if the suspended process Q is
allowed to resume its execution, the signaling process P must wait. Otherwise, both P and Q
would be active simultaneously within the monitor. Note, however, that both processes can
conceptually continue with their execution. Two possibilities exist:
1. Signal and wait: P either waits until Q leaves the monitor or waits for another condition.
2. Signal and continue: Q either waits until P leaves the monitor or waits for another
condition.
2.Dining-Philosophers Solution Using Monitors:
 This solution to the dining philosophers uses monitors, and the
restriction that a philosopher may only pick up chopsticks when both
are available. There are also two key data structures in use in this
solution:

1. enum { THINKING, HUNGRY,EATING } state[ 5 ]; A


philosopher may only set their state to eating when neither of
their adjacent neighbors is eating. ( state[ ( i + 1 ) % 5 ] !=
EATING && state[ ( i + 4 ) % 5 ] != EATING ).

2. condition self[ 5 ]; This condition is used to delay a hungry


philosopher who is unable to acquire chopsticks.

 In the following solution philosophers share a monitor,
DiningPhilosophers, and eat using the following sequence of operations:

1. DiningPhilosophers.pickup() - Acquires the chopsticks, which may
   block the process.
2. eat
3. DiningPhilosophers.putdown() - Releases the chopsticks.
A monitor solution to the dining-philosopher problem:

monitor DP
{
    enum { THINKING, HUNGRY, EATING } state[5];
    condition self[5];

    void pickup(int i) {
        state[i] = HUNGRY;
        test(i);
        if (state[i] != EATING)
            self[i].wait();
    }

    void putdown(int i) {
        state[i] = THINKING;
        // test left and right neighbors
        test((i + 4) % 5);
        test((i + 1) % 5);
    }

    void test(int i) {
        if ((state[(i + 4) % 5] != EATING) &&
            (state[i] == HUNGRY) &&
            (state[(i + 1) % 5] != EATING)) {
            state[i] = EATING;
            self[i].signal();
        }
    }

    initialization_code() {
        for (int i = 0; i < 5; i++)
            state[i] = THINKING;
    }
}
It is easy to show that this solution ensures that no two neighbors are eating
simultaneously and that no deadlocks will occur. We note, however, that it is possible for a
philosopher to starve to death. We do not present a solution to this problem but rather leave it
as an exercise for you.

3. Implementing a Monitor Using Semaphores:
 One possible implementation of a monitor uses a semaphore "mutex"
to control mutual exclusionary access to the monitor, and a counting
semaphore "next" on which processes can suspend themselves after
they are already "inside" the monitor ( in conjunction with condition
variables, see below. ) The integer next_count keeps track of how
many processes are waiting in the next queue. Externally accessible
monitor processes are then implemented as:

 Condition variables can be implemented using semaphores as well. For


a condition x, a semaphore "x_sem" and an integer "x_count" are
introduced, both initialized to zero. The wait and signal methods are
then implemented as follows. ( This approach to the condition
implements the signal-and-wait option described above for ensuring
that only one process at a time is active inside the monitor. )
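The figures referred to above are missing from this copy; the standard scheme can be reconstructed in the wait()/signal() pseudocode used earlier (a sketch of the signal-and-wait implementation, with mutex initialized to 1 and next, x_sem to 0):

```
/* each external monitor procedure F is replaced by: */
wait(mutex);
    ... body of F ...
if (next_count > 0)
    signal(next);      /* resume a process suspended inside the monitor */
else
    signal(mutex);     /* otherwise open the monitor */

/* x.wait() is implemented as: */
x_count++;
if (next_count > 0)
    signal(next);
else
    signal(mutex);
wait(x_sem);
x_count--;

/* x.signal() is implemented as: */
if (x_count > 0) {
    next_count++;
    signal(x_sem);     /* resume one process waiting on x */
    wait(next);        /* the signaler suspends itself (signal-and-wait) */
    next_count--;
}
```

Because the signaler blocks on next until the resumed process leaves the monitor or waits again, at most one process is ever active inside the monitor.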

PART-2 (Deadlocks)
Deadlocks: A process requests resources; if the resources are not available
at that time, the process enters a wait state. It may happen that waiting
processes will never again change state, because the resources they have
requested are held by other waiting processes. This situation is called a
Deadlock.

9. System Model:

 For the purposes of deadlock discussion, a system can be modeled as a collection of


limited resources, which can be partitioned into different categories, to be allocated to a
number of processes, each having different needs.
 Resource categories may include memory, printers, CPUs, open files, tape drives, CD-
ROMS, etc.
 By definition, all the resources within a category are equivalent, and a request of this
category can be equally satisfied by any one of the resources in that category. If this is
not the case ( i.e. if there is some difference between the resources within a category ),
then that category needs to be further divided into separate categories. For example,
"printers" may need to be separated into "laser printers" and "color inkjet printers".
 Some categories may have a single resource.
 In normal operation a process must request a resource before using it, and release it when
it is done, in the following sequence:
1. Request - If the request cannot be immediately granted, then the process must
wait until the resource(s) it needs become available. For example the system calls
open( ), malloc( ), new( ), and request( ).
2. Use - The process uses the resource, e.g. prints to the printer or reads from the
file.
3. Release - The process relinquishes the resource so that it becomes available for
other processes. For example, close( ), free( ), delete( ), and release( ).
 The resources needed by processes are typically managed using a resource allocation
graph (RAG) or through a wait-for graph in which processes and resources are
represented as nodes, and edges represent the relationships between them.
 Processes (P1, P2, P3, …) are represented as nodes in the graph.
 Resources (R1, R2, R3, …) are also represented as nodes, and each type of resource can
have multiple instances.
 Edges in the graph represent either requests or allocations. An edge from a process to a resource represents a request (i.e., the process is waiting for that resource), while an edge from a resource to a process indicates that the resource has been allocated to that process.

10. Deadlock Characterization:


(i) Necessary Conditions:

There are four conditions that are necessary for deadlock to occur:

– Mutual Exclusion: At least one resource must be held in a non-sharable mode; only one process can use such a resource at a time.
– Hold and Wait: A process holding at least one resource is waiting for additional
resources that are currently being held by other processes.
– No Preemption: Resources cannot be preempted, meaning that once a process
has acquired a resource, it cannot be forcibly taken away from it until it releases it
voluntarily.
– Circular Wait: A set of processes are waiting for each other in a circular chain.
For example, process P1 is waiting for a resource held by P2, process P2 is
waiting for a resource held by P3, and process P3 is waiting for a resource held by
P1, forming a cycle.

(ii) Resource-Allocation Graph:

In some cases deadlocks can be understood more clearly through the use of Resource-
Allocation Graphs, having the following properties:

o A set of resource categories, { R1, R2, R3, . . ., RN }, which appear as square
nodes on the graph. Dots inside the resource nodes indicate specific instances of
the resource. ( E.g. two dots might represent two laser printers. )
o A set of processes, { P1, P2, P3, . . ., PN }, which appear as circle nodes on the
graph.
o Request Edges - A set of directed arcs from Pi to Rj, indicating that process Pi
has requested Rj, and is currently waiting for that resource to become available.
o Assignment Edges - A set of directed arcs from Rj to Pi indicating that resource
Rj has been allocated to process Pi, and that Pi is currently holding resource Rj.
o Note that a request edge can be converted into an assignment edge by reversing
the direction of the arc when the request is granted.
o If a resource-allocation graph contains no cycles, then the system is not
deadlocked.

Resource allocation graph

 If a resource-allocation graph does contain cycles and each resource category contains
only a single instance, then a deadlock exists.
 If a resource category contains more than one instance, then the presence of a cycle in the
resource-allocation graph indicates the possibility of a deadlock, but does not guarantee
one.

Resource allocation graph with a deadlock

11. Methods for Handling Deadlocks:
Generally speaking there are three ways of handling deadlocks:
1. Deadlock prevention or avoidance - Do not allow the system to get into a
deadlocked state.
2. Deadlock detection and recovery - Abort a process or preempt some resources
when deadlocks are detected.
3. Ignore the problem altogether - If deadlocks only occur once a year or so, it
may be better to simply let them happen and reboot as necessary than to incur the
constant overhead and performance penalties associated with deadlock prevention
or detection. This is the approach taken by both Windows and UNIX.
 In order to avoid deadlocks, the system must have additional information about all
processes. In particular, the system must know what resources a process will or may
request in the future.
 Deadlock detection is fairly straightforward, but deadlock recovery requires either
aborting processes or preempting resources, neither of which is an attractive alternative.
 If deadlocks are neither prevented nor detected, then when a deadlock occurs the system
will gradually slow down, as more and more processes become stuck waiting for
resources currently held by the deadlock and by other waiting processes. Unfortunately
this slowdown can be indistinguishable from a general system slowdown when a real-
time process has heavy computing needs.

12. Deadlock Prevention:


Deadlocks can be prevented by preventing at least one of the four required conditions:
(i) Eliminating Mutual Exclusion: Shared resources such as read-only files do not lead to
deadlocks.
This is not always feasible because many resources, such as printers or disk drives,
are inherently non-shareable. So, this condition cannot always be eliminated.
(ii) Eliminating Hold and Wait: Require that a process request all the resources it needs at
once, before it begins execution. If it cannot obtain all the required resources, it must release
any resources it has already acquired and retry the request later. This ensures that processes
don’t hold resources while waiting for others.
Disadvantage: This can lead to poor resource utilization, especially if resources are scarce.
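The all-at-once policy can be sketched with a try-all-or-back-off helper. This is an illustrative Python sketch (the helper name and resource names are my own, not from the text): a process either acquires every lock it needs or releases whatever it managed to grab, so it never holds some resources while waiting for others:

```python
import threading

def acquire_all_or_none(locks):
    """Acquire every lock or none of them.

    A process that cannot get all of its resources at once backs off
    instead of holding some and waiting for the rest, which removes
    the hold-and-wait condition. (Illustrative helper, not an API
    from the text.)
    """
    held = []
    for lock in locks:
        if lock.acquire(blocking=False):    # non-blocking attempt
            held.append(lock)
        else:
            for h in held:                  # back off: release everything acquired
                h.release()
            return False                    # caller should retry later
    return True

printer = threading.Lock()
disk = threading.Lock()

got_both = acquire_all_or_none([printer, disk])    # both free: granted
second_try = acquire_all_or_none([printer, disk])  # printer busy: denied, nothing held
```

Note that the failed second attempt leaves no locks half-acquired, which is exactly the property that prevents hold-and-wait.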

(iii) Eliminating No Preemption: If a process holding some resources requests another
resource that cannot be immediately allocated to it, then the system may preempt the
resources the process currently holds. The preempted resources are allocated to other
processes, and the original process is restarted later, when it can obtain all of its required
resources.
Disadvantage: Preempting resources can result in wasted work, as processes may need to
restart after being preempted.

(iv) Eliminating Circular Wait: One approach is to impose a total ordering on the resource
types and require that each process requests resources only in an increasing order of
enumeration. This means that a process holding some resources may only request resources
with higher numbers than any it already holds, so a circular chain of waits can never form.
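The ordering rule can be sketched as follows. This is an illustrative Python sketch (the resource numbering and helper names are hypothetical): both processes name their resources in different orders, but the helper always acquires them in the global enumeration order, so no circular wait can arise:

```python
import threading

# A total ordering on resource types (the numbers are an assumed enumeration):
tape, disk, printer = threading.Lock(), threading.Lock(), threading.Lock()
ORDER = {id(tape): 1, id(disk): 5, id(printer): 12}

def lock_in_order(*locks):
    """Acquire locks in increasing order of their global number, so two
    processes can never end up waiting on each other in a circle."""
    for lock in sorted(locks, key=lambda l: ORDER[id(l)]):
        lock.acquire()

def unlock_all(*locks):
    for lock in locks:
        lock.release()

done = []

def p1():
    lock_in_order(printer, tape)    # requested "backwards" ...
    done.append("p1")
    unlock_all(printer, tape)

def p2():
    lock_in_order(tape, printer)    # ... but both actually take tape first
    done.append("p2")
    unlock_all(tape, printer)

threads = [threading.Thread(target=f) for f in (p1, p2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Without the ordering, p1 taking printer while p2 takes tape could produce exactly the circular wait described above; with it, both threads always run to completion.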

Examples of Related Approaches (strictly, deadlock avoidance):

• Resource Allocation Graph (RAG):
– When a process requests a resource, a directed edge from the process to the resource is added; when a resource is assigned to a process, the edge points from the resource to the process. The system can avoid circular wait by monitoring this graph for cycles before granting requests.
• Banker's Algorithm:
– The Banker's algorithm avoids deadlock by simulating resource allocation for each request and granting it only if the system remains in a safe state. It is often used in systems that allocate resources dynamically. Both techniques are described in detail in the next section.
13. Deadlock Avoidance:

 Deadlock avoidance requires additional information about each process, and tends to lead to lower device utilization.
 In the simplest algorithms the scheduler only needs to know the maximum number of each resource that a process might potentially use. More complex algorithms can also take advantage of knowing exactly which resources will be needed in what order.
 When the scheduler sees that starting a process or granting a resource request may lead to a future deadlock, then that process is simply not started or the request is not granted.
 A resource allocation state is defined by the number of available and allocated resources,
and the maximum requirements of all processes in the system.

(i) Safe State:

 A state is safe if the system can allocate all resources requested by all processes ( up to
their stated maximums ) without entering a deadlock state.
 More formally, a state is safe if there exists a safe sequence of processes { P0, P1, P2, ..., PN } such that the outstanding requests of each Pi can be satisfied using the currently available resources plus the resources held by all processes Pj with j < i.
 If a safe sequence does not exist, then the system is in an unsafe state, which may lead to
deadlock.

Safe, unsafe, and deadlocked state spaces


(ii) Resource-Allocation Graph Algorithm:

 If resource categories have only single instances of their resources, then deadlock states
can be detected by cycles in the resource-allocation graphs.
 In this case, unsafe states can be recognized and avoided by augmenting the resource-
allocation graph with claim edges, noted by dashed lines, which point from a process to a
resource that it may request in the future.
 In order for this technique to work, all claim edges must be added to the graph for any
particular process before that process is allowed to request any resources. ( Alternatively,
processes may only make requests for resources for which they have already established
claim edges, and claim edges cannot be added to any process that is currently holding
resources. )
 When a process makes a request, the claim edge Pi->Rj is converted to a request edge.
Similarly when a resource is released, the assignment reverts back to a claim edge.
 This approach works by denying requests that would produce cycles in the resource-allocation graph, taking claim edges into account.
 Consider for example what happens when process P2 requests resource R2:

Resource allocation graph for deadlock avoidance


 The resulting resource-allocation graph would have a cycle in it, and so the request
cannot be granted.

An unsafe state in a resource allocation graph

(iii) Banker's Algorithm:

o The Banker's Algorithm is a classic deadlock avoidance algorithm used in operating systems to allocate resources to processes safely, ensuring that no deadlock occurs. It is called the "Banker's Algorithm" because it simulates the way a banker might allocate loans (resources) to customers (processes): the bank never hands out resources in such a way that it cannot meet all of its obligations.
o Processes: Represent the entities that request and use resources.
o Resources: The things that processes request, such as memory, CPU time, I/O
devices, etc.
o Available Resources: The number of each resource currently available in the
system.
o Allocation: The amount of each resource currently allocated to each process.

o Maximum Demand: The maximum number of each resource a process may need
during its execution.
o Need: The remaining number of resources a process still needs to complete its
execution. This is calculated as:

Need=Maximum Demand−Allocation

The Banker's Algorithm ensures that resource allocation always leaves the system in a safe state, meaning that all processes can eventually complete without deadlock. It relies on several key data structures (where n is the number of processes and m is the number of resource categories):
Available[ m ] indicates how many resources of each type are currently available.
Max[ n ][ m ] indicates the maximum demand of each process for each resource type.
Allocation[ n ][ m ] indicates the number of each resource type currently allocated to each process.
Need[ n ][ m ] indicates the remaining resources of each type needed by each process, computed as Need[ i ][ j ] = Max[ i ][ j ] - Allocation[ i ][ j ].
The Banker's Algorithm consists of two parts:
1) Safety Algorithm
2) Resource-Request Algorithm
(i) Safety Algorithm: The Safety Algorithm determines whether the system is in a safe state, i.e., whether there exists a sequence in which every process can run to completion even if each one first requests its maximum remaining need, given the currently available resources. It is also run after a resource request to check that the resulting state would still be safe.
Steps of the Safety Algorithm:
1. Start with Work = Available and Finish[i] = false for all processes i.
2. Find a process i such that:
Finish[i] = false (the process has not finished yet).
Need[i] <= Work (the process's remaining resource requirements can be met with the
currently available resources).
3. If such a process is found:

– Work = Work + Allocation[i] (simulate the process finishing and releasing its
resources).

– Finish[i] = true (mark the process as finished).

– This corresponds to process i finishing up and releasing its resources back into the
work pool. Then loop back to step 2.
4. If Finish[ i ] == true for all i, then the state is a safe state, because a safe sequence has
been found.
Safe or Unsafe

• If all processes have Finish[i] = true, then the system is in a safe state, and there exists a
safe sequence of process executions.

• If any process remains with Finish[i] = false and no such process can be found to meet
the necessary conditions, the system is in an unsafe state, and deadlock is possible.
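The steps above can be rendered as a short program. This is an illustrative Python sketch (the function name and data layout are my own); it is run here on the same numbers as the worked example that follows:

```python
def is_safe(available, allocation, need):
    """Banker's safety algorithm: return the safe sequence of process
    indices found by the lowest-index-first search, or None if unsafe."""
    n, m = len(allocation), len(available)
    work = list(available)          # Step 1: Work = Available
    finish = [False] * n            #         Finish[i] = false for all i
    sequence = []
    progress = True
    while progress:
        progress = False
        for i in range(n):          # Step 2: find an unfinished i with Need[i] <= Work
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):  # Step 3: Pi finishes and releases its resources
                    work[j] += allocation[i][j]
                finish[i] = True
                sequence.append(i)
                progress = True
    # Step 4: safe iff every process could finish
    return sequence if all(finish) else None

# Data from the worked example below (5 processes, 4 resource types A B C D):
available  = [1, 5, 2, 0]
allocation = [[0, 0, 1, 2], [1, 0, 0, 0], [1, 3, 5, 4], [0, 6, 3, 2], [0, 0, 1, 4]]
need       = [[0, 0, 0, 0], [0, 7, 5, 0], [1, 0, 0, 2], [0, 0, 2, 0], [0, 6, 4, 2]]

safe_sequence = is_safe(available, allocation, need)
```

With these numbers the search finds the sequence P0, P2, P3, P4, P1, matching the hand calculation below; an unsafe state would return None instead.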

Example Problem: Consider the following snapshot of a system with five processes and four resource types (A, B, C, D). Find the Need matrix and a safe sequence.

Process   Allocation   Max        Available
          A B C D      A B C D    A B C D
P0        0 0 1 2      0 0 1 2    1 5 2 0
P1        1 0 0 0      1 7 5 0
P2        1 3 5 4      2 3 5 6
P3        0 6 3 2      0 6 5 2
P4        0 0 1 4      0 6 5 6

Sol: Total resources = total allocation + Available
= 2 9 10 12 + 1 5 2 0 (adding element-wise)
= 3 14 12 12

Need[i][j] = Max[i][j] - Allocation[i][j]:
P0: 0 0 0 0, P1: 0 7 5 0, P2: 1 0 0 2, P3: 0 0 2 0, P4: 0 6 4 2
Check P0:
Need[0] <= Work : 0 0 0 0 <= 1 5 2 0, condition is true, go to step 3:
Work = Work + Allocation[0] = 1 5 2 0 + 0 0 1 2 = 1 5 3 2
Check P1:
Need[1] <= Work : 0 7 5 0 <= 1 5 3 2, condition is false, so Finish[1] = false and P1 must wait.
Check P2:
Need[2] <= Work : 1 0 0 2 <= 1 5 3 2, condition is true, go to step 3:
Work = 1 5 3 2 + 1 3 5 4 = 2 8 8 6
Check P3:
Need[3] <= Work : 0 0 2 0 <= 2 8 8 6, condition is true, go to step 3:
Work = 2 8 8 6 + 0 6 3 2 = 2 14 11 8
Check P4:
Need[4] <= Work : 0 6 4 2 <= 2 14 11 8, condition is true, go to step 3:
Work = 2 14 11 8 + 0 0 1 4 = 2 14 12 12
Check P1 again:
Need[1] <= Work : 0 7 5 0 <= 2 14 12 12, condition is true, go to step 3:
Work = 2 14 12 12 + 1 0 0 0 = 3 14 12 12
All processes finish, so the state is safe; the safe sequence is P0, P2, P3, P4, P1.

(ii) Resource-Request Algorithm (the Banker's Algorithm proper):

The Resource-Request Algorithm determines whether a request for resources by a process can be granted while keeping the system in a safe state. It checks whether granting the request could lead the system into an unsafe state, and grants it only if all processes can still eventually complete.

Steps of the Resource-Request Algorithm:

Step 1: Check if the Request is Less Than or Equal to the Need

Condition: The requested resources should not exceed the Need matrix for the requesting
process. Request <= Need

 Need[i][j] = Max[i][j] - Allocation[i][j] (this represents how many resources the process still needs).
 If the requested resources are greater than the Need of the process, deny the request (it violates the process's maximum claim).

Step 2: Check if the Request is Less Than or Equal to the Available Resources

Condition: The requested resources should not exceed the Available resources in the system.

Request <= Available

– Available[j] is the number of instances of resource type j currently available in the system.
– If the requested resources exceed Available, deny the request (not enough resources are available; the process must wait).

Step 3: Pretend to Allocate the Requested Resources


If the request is valid, simulate the allocation:
– Available = Available - Request (Reduce the number of available resources by
the requested amount).
– Allocation[i] = Allocation[i] + Request (Allocate the requested resources to the
requesting process).
– Need[i] = Need[i] - Request (Update the remaining needs of the requesting
process).

Step 4: Check for System Safety (Safety Check)

• Perform the safety algorithm to check if the system can reach a safe state after granting
the request.
– Work = Available (Set the Work vector to be the current available resources).
– Finish[i] = False for all processes (Initially, assume no process is finished).
– Find a Process: Look for a process P[i] where:
• Finish[i] = False
• Need[i] ≤ Work (The process’s remaining need can be satisfied with the
currently available resources).
– If such a process is found:
• Set Work = Work + Allocation[i] (Assume that the process finishes and
releases its resources).
• Set Finish[i] = True (Mark the process as finished).

– Repeat the search for such a process until either:
• all processes are finished (Finish[i] = True for all i), or
• no such process can be found (indicating an unsafe state).
– If all processes can finish (i.e., Finish[i] = True for all processes), the system is
in a safe state.

Step 5: Grant or Deny the Request


• If the system is in a safe state after the request is granted (i.e., all processes can
eventually finish):
– Grant the request.
• If the system is not in a safe state (i.e., not all processes can finish):
– Deny the request (the request would lead to an unsafe state, and the system might
enter a deadlock).
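Steps 1-3 can be sketched as follows (an illustrative Python helper, not code from the text); the caller would then run the safety algorithm on the returned state and roll back if it is unsafe. The data used here matches the illustrative example in the next subsection:

```python
def request_resources(pid, request, available, allocation, need):
    """Resource-request check, steps 1-3: validate the request and
    pretend to allocate. Returns the tentative new (Available,
    Allocation[pid], Need[pid]), or None if the process must wait."""
    m = len(available)
    # Step 1: the request must not exceed the process's remaining need
    if any(request[j] > need[pid][j] for j in range(m)):
        raise ValueError("process has exceeded its maximum claim")
    # Step 2: the request must not exceed the available resources
    if any(request[j] > available[j] for j in range(m)):
        return None                           # not enough resources: wait
    # Step 3: pretend to allocate the requested resources
    new_available = [available[j] - request[j] for j in range(m)]
    new_allocation = [allocation[pid][j] + request[j] for j in range(m)]
    new_need = [need[pid][j] - request[j] for j in range(m)]
    return new_available, new_allocation, new_need

# Values from the illustrative example: P1 requests (1, 0, 2)
available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]

state = request_resources(1, [1, 0, 2], available, allocation, need)
```

The pretend-allocation yields Available = (2, 3, 0), Allocation[1] = (3, 0, 2), and Need[1] = (0, 2, 0), which is exactly the state fed to the safety check in the example.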

(iii) An Illustrative Example:

Consider a system with 3 resource types (A = 10, B = 5, C = 7 instances) and 5 processes (P0 ... P4) with the following snapshot. Process P1 requests 1 instance of A and 2 instances of C, i.e., Request[ 1 ] = ( 1, 0, 2 ). Find the Need matrix and a safe sequence.

Process   Allocation   Max      Available
          A B C        A B C    A B C
P0        0 1 0        7 5 3    3 3 2
P1        2 0 0        3 2 2
P2        3 0 2        9 0 2
P3        2 1 1        2 2 2
P4        0 0 2        4 3 3

Sol: Need = Max - Allocation:
P0: 7 4 3, P1: 1 2 2, P2: 6 0 0, P3: 0 1 1, P4: 4 3 1
In particular, P1's Need is 1 2 2.

Steps of the Resource-Request Algorithm:

Step 1: Check if the request is less than or equal to the Need:
Request <= Need : 1 0 2 <= 1 2 2, true.
Step 2: Check if the request is less than or equal to the Available resources:
Request <= Available : 1 0 2 <= 3 3 2, true.
Both Step 1 and Step 2 hold, so go to Step 3.
Step 3: Pretend to allocate the requested resources:
– Available = Available - Request = 3 3 2 - 1 0 2 = 2 3 0
– Allocation[1] = Allocation[1] + Request = 2 0 0 + 1 0 2 = 3 0 2
– Need[1] = Need[1] - Request = 1 2 2 - 1 0 2 = 0 2 0

Step 4: Check for system safety by running the Safety Algorithm on the new state, with Work = Available = 2 3 0:

Check P0:
Need[0] <= Work : 7 4 3 <= 2 3 0, condition is false, so Finish[0] = false and P0 must wait.
Check P1:
Need[1] <= Work : 0 2 0 <= 2 3 0, condition is true, go to step 3:
Work = Work + Allocation[1] = 2 3 0 + 3 0 2 = 5 3 2
Check P2:
Need[2] <= Work : 6 0 0 <= 5 3 2, condition is false, so Finish[2] = false and P2 must wait.
Check P3:
Need[3] <= Work : 0 1 1 <= 5 3 2, condition is true, go to step 3:
Work = 5 3 2 + 2 1 1 = 7 4 3
Check P4:
Need[4] <= Work : 4 3 1 <= 7 4 3, condition is true, go to step 3:
Work = 7 4 3 + 0 0 2 = 7 4 5
Check P0 again:
Need[0] <= Work : 7 4 3 <= 7 4 5, condition is true, go to step 3:
Work = 7 4 5 + 0 1 0 = 7 5 5
Check P2 again:
Need[2] <= Work : 6 0 0 <= 7 5 5, condition is true, go to step 3:
Work = 7 5 5 + 3 0 2 = 10 5 7

All processes finish, so the new state is safe; the safe sequence is P1, P3, P4, P0, P2, and the request can be granted.
14. Deadlock Detection:
Deadlock detection is a method used in operating systems and distributed computing systems to
identify deadlocks that have already occurred. A deadlock is a situation in which a set of
processes is unable to proceed because each process is waiting for another to release resources,
creating a cycle of dependencies that cannot be resolved.
How Deadlock Detection Works
Resource Allocation Graph (RAG): In deadlock detection, a Resource Allocation Graph
(RAG) is commonly used to represent the system's state. The graph consists of:
Processes (represented by nodes)
Resources (represented by nodes)
Edges representing relationships:

• Request edge from a process to a resource (indicating the process is
waiting for the resource).
• Assignment edge from a resource to a process (indicating the resource is
allocated to the process).
• Cycle Detection: A deadlock occurs when there is a cycle in the Resource Allocation
Graph. A cycle indicates that processes are involved in a circular wait, meaning each
process is holding a resource that another process in the cycle needs, and none of the
processes can proceed.
• To detect deadlocks, the system must check for the presence of such cycles in the RAG.
If a cycle is detected, it implies that the processes in the cycle are deadlocked.
Steps in Deadlock Detection
• Graph Construction:
– Construct a Resource Allocation Graph that reflects the current state of processes
and resources.
• Cycle Detection Algorithm:
– Use an algorithm (like Depth-First Search or BFS) to detect cycles in the graph.
This is the core of deadlock detection.
• Deadlock Identification:
– If a cycle is detected, the processes involved in the cycle are considered
deadlocked.
• Handle Deadlock:
– Once deadlock is detected, the system can take actions like killing one of the
processes involved or forcibly releasing resources.

(i) Single Instance of Each Resource Type (Wait-for Graph Algorithm):

 If each resource category has a single instance, then we can use a variation of the
resource-allocation graph known as a wait-for graph.
 A wait-for graph can be constructed from a resource-allocation graph by eliminating the
resources and collapsing the associated edges, as shown in the figure below.
 An arc from Pi to Pj in a wait-for graph indicates that process Pi is waiting for a resource
that process Pj is currently holding.

(a) Resource allocation graph. (b) Corresponding wait-for graph

 As before, cycles in the wait-for graph indicate deadlocks.


 This algorithm must maintain the wait-for graph, and periodically search it for cycles.
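The periodic cycle search can be sketched with a standard depth-first search. This is an illustrative Python version (the dictionary encoding of the wait-for graph is my own assumption):

```python
def has_cycle(wait_for):
    """DFS cycle detection on a wait-for graph given as
    {process: [processes it is waiting on]}. A back edge to a node
    still on the DFS stack (GRAY) means a circular wait, i.e. deadlock."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on stack / done
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, []):
            if color.get(q, WHITE) == GRAY:               # back edge: cycle
                return True
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in wait_for)

# P1 waits on P2, P2 on P3, P3 back on P1: a deadlock
cyclic = has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]})
# Remove the back edge and no cycle (hence no deadlock) remains
acyclic = has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": []})
```

A real detector would rebuild the graph from the current allocation state before each search, and the processes on the detected cycle are the ones reported as deadlocked.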

(ii) Several Instances of a Resource Type ( Banker's Algorithm for Deadlock Detection):

 The detection algorithm outlined here is essentially the same as the Banker's algorithm,
with two subtle differences:
o In step 1, the Banker's Algorithm sets Finish[ i ] to false for all i. The algorithm
presented here sets Finish[ i ] to false only if Allocation[ i ] is not zero. If the
currently allocated resources for this process are zero, the algorithm sets Finish[ i ] to
true. This is essentially assuming that IF all of the other processes can finish, then
this process can finish also. Furthermore, this algorithm is specifically looking for
which processes are involved in a deadlock situation, and a process that does not
have any resources allocated cannot be involved in a deadlock, and so can be
removed from any further consideration.
o Steps 2 and 3 are unchanged
o In step 4, the basic Banker's Algorithm says that if Finish[ i ] == true for all i, that
there is no deadlock. This algorithm is more specific, by stating that if Finish[ i ]
== false for any process Pi, then that process is specifically involved in the
deadlock which has been detected.

(iii) Detection-Algorithm Usage:

 The answer may depend on how frequently deadlocks are expected to occur, as well as the possible consequences of not catching them immediately. There are two obvious approaches, each with trade-offs:
1. Do deadlock detection after every resource allocation which cannot be
immediately granted. This has the advantage of detecting the deadlock right away,
while the minimum numbers of processes are involved in the deadlock. The down
side of this approach is the extensive overhead and performance hit caused by
checking for deadlocks so frequently.
2. Do deadlock detection only when there is some clue that a deadlock may have
occurred, such as when CPU utilization reduces to 40% or some other magic
number. The advantage is that deadlock detection is done much less frequently,
but the down side is that it becomes impossible to detect the processes involved in
the original deadlock, and so deadlock recovery can be more complicated and
damaging to more processes.

15. Recovery From Deadlock:


There are two basic approaches to recovery from deadlock:
1) Process Termination
2) Resource Preemption
(1) Process Termination:
• Two basic approaches, both of which recover resources allocated to terminated processes:
– Terminate all processes involved in the deadlock. This definitely solves the
deadlock, but at the expense of terminating more processes than would be
absolutely necessary.
– Terminate processes one by one until the deadlock is broken. This is more
conservative, but requires doing deadlock detection after each step.
• In the latter case there are many factors that can go into deciding which processes to
terminate next:
– Process priorities.
– How long the process has been running, and how close it is to finishing.
– How many and what type of resources is the process holding.
– How many more resources does the process need to complete.
– How many processes will need to be terminated.
– Whether the process is interactive or batch.

(2) Resource Preemption:

When preempting resources to relieve deadlock, there are three important issues to be
addressed:

– Selecting a victim:- Deciding which resources to preempt from which processes


involves many of the same decision criteria outlined above.
– Rollback:- Ideally one would like to roll back a preempted process to a safe state
prior to the point at which the resource was originally allocated to it.
Unfortunately it can be difficult or impossible to determine what such a safe state
is, so the only safe rollback is to go all the way back to the beginning.
– Starvation:- How do you guarantee that a process won't starve because its
resources are constantly being preempted? One option would be to use a priority
system, and increase the priority of a process every time its resources get
preempted. Eventually it should get a high enough priority that it won't get
preempted any more.
