OPERATING SYSTEMS Module 3
Textbook: Operating Systems Concepts by Silberschatz
Producer consumer problem using shared memory

// shared data
int item, in, out;
in = 0; out = 0;
int BUFFER_SIZE = 4;
This solution allows at most BUFFER_SIZE - 1 items in the buffer at the same time. We can modify the algorithm to remedy this deficiency by adding an integer variable counter, initialized to 0. counter is incremented every time we add a new item to the buffer and is decremented every time we remove one item from the buffer.
// shared data
int counter = 0;
int BUFFER_SIZE = 8;

// producer process
while (true) {
    /* produce an item in next_produced */
    while (counter == BUFFER_SIZE)
        ; /* do nothing */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

// consumer process
while (true) {
    while (counter == 0)
        ; /* do nothing */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
    /* consume the item in next_consumed */
}
counter++ could be implemented in machine language as

register1 = counter
register1 = register1 + 1
counter = register1

counter-- could be implemented as

register2 = counter
register2 = register2 - 1
counter = register2

Suppose the current status is counter = 5, and the producer and consumer execute the statements counter++ and counter-- concurrently. The machine-language statements may be interleaved as follows:

S0: producer executes register1 = counter        {register1 = 5}
S1: producer executes register1 = register1 + 1  {register1 = 6}
    (process switch occurs)
S2: consumer executes register2 = counter        {register2 = 5}
S3: consumer executes register2 = register2 - 1  {register2 = 4}
    (process switch occurs)
S4: producer executes counter = register1        {counter = 6}
    (process switch occurs)
S5: consumer executes counter = register2        {counter = 4}

Notice that we have arrived at the incorrect state counter = 4, even though five buffers are full. If we reversed the order of statements S4 and S5, we would arrive at the incorrect state counter = 6.
A situation where several processes access and manipulate the same data concurrently, and the outcome of the execution depends on the particular order in which the accesses take place, is called a race condition.

The part of the program where the shared memory is accessed is called the critical section or critical region of that process.

When one process is executing in its critical section, no other process is to be allowed to execute in its critical section; that is, no two processes are executing in their critical sections at the same time. If we enforce this, we can avoid race conditions.
Critical section
The critical-section problem is to design a protocol that the processes can use to cooperate.
Each process must request permission to enter its critical section. The section of code
implementing this request is the entry section. The critical section may be followed by an
exit section. The remaining code is the remainder section.
The general structure of a typical process Pi is shown below.
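That structure can be sketched as the familiar loop (the entry- and exit-section bodies are placeholders for whatever concrete protocol is in use):

```
do {
    /* entry section: request permission to enter */

        /* critical section */

    /* exit section */

        /* remainder section */
} while (true);
```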
Critical section
A solution to the critical-section problem must satisfy the following three requirements:
1. Mutual exclusion. If process Pi is executing in its critical section, then no
other processes can be executing in their critical sections.
2. Progress. If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely.
3. Bounded waiting. There exists a bound, or limit, on the number of times
that other processes are allowed to enter their critical sections, after a
process has made a request to enter its critical section and before that
request is granted.
Peterson’s solution
One of the process synchronization mechanism
Peterson's solution is a classic software-based solution to the critical-section problem
Peterson's solution is restricted to two processes that alternate execution between their
critical sections and remainder sections.
The variable turn indicates whose turn it is to enter the critical section
The flag array is used to indicate if a process is ready to enter the critical
section.
flag[i] = true implies that process Pi is ready!
Algorithm for Process Pi
while (true) {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
        ;   /* busy wait */

    /* critical section */

    flag[i] = false;

    /* remainder section */
}
For process P0, i = 0 and j = 1; for process P1, i = 1 and j = 0. Initially, flag[0] and flag[1] are set to false.
A solution to the critical-section problem must satisfy the three requirements: mutual exclusion, progress, and bounded waiting. Peterson's solution can be checked against each by considering the cases: both P0 and P1 willing to enter; P1 in its critical section while P0 wants to enter; and P0 in its critical section while P1 wants to enter.
The hardware-based solution to the critical-section problem, the Test-and-Set Lock, is complicated and generally inaccessible to application programmers. Another software tool used to solve the critical-section problem is the mutex lock.

We use mutex locks to protect critical regions and thereby avoid race conditions: a process must acquire the lock before entering the critical region and release the lock when it exits. The acquire() function acquires the lock, and the release() function releases it.
Mutex locks
A mutex has a boolean variable, available, whose value indicates whether the lock is available or not. If the lock is available, a call to acquire() succeeds, and the lock is then considered unavailable. A process that attempts to acquire an unavailable lock is blocked until the lock becomes available.
acquire() {
    while (!available)
        ; /* busy wait */
    available = false;
}
The definition of release() is as follows:
release() {
available=true;
}
Mutex locks

do {
    acquire lock
        critical section
    release lock
        remainder section
} while (true);

Because a process "spins" while waiting for an unavailable lock, this type of mutex lock is also called a spinlock. Spinlocks are useful when locks are expected to be held for short times.
Process synchronization using semaphore
A semaphore is a synchronization tool that provides more
sophisticated ways (than Mutex locks) for processes to
synchronize their activities.
A semaphore S is an integer variable that, apart from
initialization, is accessed only
through two standard atomic operations: wait () and signal ().
wait () operation was originally termed P;
signal() was originally called V.
Process synchronization using semaphore
The definition of wait() is as follows:

wait(S) {
    while (S <= 0)
        ; /* no-op */
    S--;
}
The definition of signal() is as follows:
signal(S) {
S++;
}
semaphore
All modifications to the integer value of the semaphore in the wait () and
signal() operations must be executed indivisibly.
That is, when one process modifies the semaphore value, no other process can
simultaneously modify that same semaphore value.
In addition, in the case of wait (S), the testing of the integer value of S (S <= 0),
as well as its possible modification (S--), must be executed without interruption
semaphore
Usage
Two types of semaphores
counting semaphore: The value can range over an unrestricted domain.
binary semaphore:The value can range only between 0 and 1.
binary semaphores are known as mutex locks, as they are locks that
provide
mutual exclusion.
Binary semaphore
We can use binary semaphores to deal with the critical-section problem for multiple processes. The n processes share a semaphore, mutex, initialized to 1. Each process Pi is organized as shown below.
The main disadvantage of the semaphore definition given here is that it requires busy
waiting
While a process is in its critical section, any other process that
tries to enter its critical section must loop continuously in the entry code.
Busy waiting wastes CPU cycles that some other process might be able to use
productively.
Semaphore Implementation
To overcome the need for busy waiting, we can modify the definition of
the wait() and signal() semaphore operations.
When a process executes the wait () operation and finds that the semaphore value is not
positive, it must wait.
However, rather than engaging in busy waiting, the process can block itself. The block
operation places a process into a waiting queue associated with the semaphore, and the
state of the process is switched to the waiting state.
Then control is transferred to the CPU scheduler, which selects another process to execute.
Semaphore Implementation
A process that is blocked, waiting on a semaphore S, should be restarted when some other
process executes a signal() operation. The process is restarted by a wakeup () operation, which
changes the process from the waiting state to the ready state. The process is then placed in the
ready queue.
To implement semaphores under this definition, we define a semaphore as a C struct:
typedef struct {
int value;
struct process *list;
} semaphore;
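Under this definition, wait() and signal() can be sketched as follows, assuming block() suspends the invoking process and wakeup(P) resumes process P, both provided by the operating system as basic system calls:

```
wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        /* add this process to S->list */
        block();
    }
}

signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        /* remove a process P from S->list */
        wakeup(P);
    }
}
```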
Semaphore Implementation
In this implementation, semaphore values may be negative, although semaphore values are
never negative under the classical definition of semaphores with busy waiting.
If a semaphore value is negative, its magnitude is the number of processes waiting on that
semaphore.
Each semaphore contains an integer value and a pointer to a list of PCBs. One way to add
and remove processes from the list so as to ensure bounded waiting is to use a FIFO
queue, where the semaphore contains both head and tail pointers to the queue.
Deadlock and starvation

Suppose that processes P0 and P1 share two semaphores, S and Q, each initialized to 1, and that each process needs both. P0 executes wait(S) and then P1 executes wait(Q). When P0 executes wait(Q), it must wait until P1 executes signal(Q). Similarly, when P1 executes wait(S), it must wait until P0 executes signal(S). Since these signal() operations cannot be executed, P0 and P1 are deadlocked.
Deadlock and starvation
Another problem is starvation--a situation in which processes wait indefinitely within the
semaphore. Indefinite blocking may occur if we remove processes from the list associated
with a semaphore in LIFO (last-in, first-out) order.
Priority inversion problem
Suppose there are three processes: L (low priority), M (medium priority), and H (high priority).

L is running in its CS; H also needs to run in its CS; H waits for L to come out of the CS; M interrupts L and starts running; M runs to completion and relinquishes control; L resumes and runs to the end of its CS; H then enters its CS and starts running.
Note that neither L nor H share CS with M.
Here, we can see that running of M has delayed the running of both L and H.
Precisely speaking, H is of higher priority and doesn’t share CS with M; but H had
to wait for M. This is where Priority based scheduling didn’t work as expected
because priorities of M and H got inverted in spite of not sharing any CS. This
problem is called Priority Inversion.
Priority inheritance protocol

According to this protocol, all processes that are accessing resources needed by a higher-priority process inherit the higher priority until they are finished with the resources in question; when they are finished, their priorities revert to their original values.

The Bounded-Buffer Problem

In the semaphore solution to the bounded-buffer problem, we can interpret the code as the producer producing full buffers for the consumer or as the consumer producing empty buffers for the producer.
The Readers-Writers Problem

A data set (db) is shared among several concurrent processes: readers, which only read the data set, and writers, which can update it. The shared data includes a semaphore rw_mutex initialized to 1, a semaphore mutex initialized to 1, and an integer readcount initialized to 0.

Structure of a writer:

do {
    wait(rw_mutex);
    /* writing is performed */
    signal(rw_mutex);
} while (true);

Structure of a reader:

do {
    wait(mutex);           // gain access to readcount
    readcount++;           // increment readcount
    if (readcount == 1)    // if this is the first process to read db
        wait(rw_mutex);    // prevent writer process from accessing db
    signal(mutex);         // allow other processes to access readcount
    /* reading is performed */
    wait(mutex);
    readcount--;
    if (readcount == 0)    // if this is the last process to read db
        signal(rw_mutex);  // allow a writer to access db
    signal(mutex);
} while (true);
OPERATING SYSTEMS Module3_Part6
Deadlocks
Deadlock definition
A set of processes is in a deadlocked state when every process in the set is
waiting for an event that can be caused only by another process in the set.
The events with which we are mainly concerned here are resource acquisition and
release.
The resources may be either physical resources (for example, printers, tape drives, memory
space, and CPU cycles) or logical resources (for example, files, semaphores, and
monitors).
Necessary conditions for deadlock

A deadlock can arise if the following four conditions hold simultaneously in a system:

1. Mutual exclusion. At least one resource must be held in a nonsharable mode; that is, only one process at a time can use the resource.

2. Hold and wait. A process must be holding at least one resource and waiting to acquire additional resources that are currently being held by other processes.
3. No preemption. Resources cannot be preempted; that is, a resource can be released only
voluntarily by the process holding it, after that process has completed its task.
4. Circular wait. A set {P0, P1, ..., Pn} of waiting processes must exist such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, ..., Pn-1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0. That is, there is a circular chain of processes and resources.
Deadlocks can be described more precisely in terms of a directed graph called a system
resource allocation graph.
This graph consists of a set of vertices V and a set of edges E. The set of vertices V is partitioned into two different types of nodes: P = {P1, P2, ..., Pn}, the set consisting of all the active processes, and R = {R1, R2, ..., Rm}, the set consisting of all resource types. A directed edge Pi → Rj is called a request edge; a directed edge Rj → Pi is called an assignment edge.
Example of a process state: process P1 is holding an instance of resource type R2 and is waiting for an instance of resource type R1.
Given the definition of a resource-allocation graph, it can be shown that, if the graph contains
no cycles, then no process in the system is deadlocked.
If the graph does contain a cycle, then a deadlock may exist.
If each resource type has exactly one instance, then a cycle implies that a deadlock has
occurred.
If the cycle involves only a set of resource types, each of which has only a single instance,
then a deadlock has occurred. Each process involved in the cycle is deadlocked. In this
case, a cycle in the graph is both a necessary and a sufficient condition for the existence
of deadlock.
Resource allocation graph
If each resource type has several instances, then a cycle does not necessarily
imply that a deadlock has occurred. In this case, a cycle in the graph is a
necessary but not a sufficient condition for the existence of deadlock.
In the graph, there are two cycles:

P1 → R1 → P2 → R3 → P3 → R2 → P1
P2 → R3 → P3 → R2 → P2

Processes P1, P2, and P3 are deadlocked. Process P2 is waiting for the resource R3, which is held by process P3. Process P3 is waiting for either process P1 or process P2 to release resource R2. In addition, process P1 is waiting for process P2 to release resource R1.
Resource allocation graph
Now consider the resource-allocation graph below (a graph with a cycle but no deadlock). In this example, we also have a cycle:

P1 → R1 → P3 → R2 → P1
However, there is no deadlock. Observe that process P4 may release its instance
of resource type R2. That resource can then be allocated to P3, breaking the cycle.
In summary, if a resource-allocation graph does not have a cycle, then the
system is not in a deadlocked state. If there is a cycle, then the system may or
may not be in a deadlocked state. This observation is important when we deal
with the deadlock problem.
OPERATING SYSTEMS Module3_Part8
Methods of handling the deadlock
For a deadlock to occur, each of the four necessary conditions must hold. By ensuring
that at least one of these conditions cannot hold, we can prevent the occurrence of a
deadlock.
To ensure that the no-preemption condition does not hold, we can use the following protocol: if a process that is holding some resources requests another resource that cannot be immediately allocated to it, then all resources the process is currently holding are preempted. This protocol is often applied to resources whose state can be easily saved and restored later, such as CPU registers and memory space.

One way to ensure that the circular-wait condition never holds is to impose a total ordering of all resource types and to require that each process requests resources in an increasing order of enumeration. To illustrate, we let R = {R1, R2, ..., Rm} be the set of resource types. We assign to each resource type a unique integer number, which allows us to compare two resources and to determine whether one precedes another in our ordering. Formally, we define a one-to-one function F: R → N, where N is the set of natural numbers.
For example, if the set of resource types R includes tape drives, disk drives, and printers,
then the function F might be defined as follows:
F (tape drive) = 1
F (disk drive) = 5
F (printer) = 12
Deadlock prevention
We can now consider the following protocol to prevent deadlocks: Each process can
request resources only in an increasing order of enumeration.
That is, a process can initially request any number of instances of a resource type
-say, Ri. After that, the process can request instances of resource type Rj if
and only if F(Rj) > F(Ri).
For example, using the function defined previously, a process that wants to use the tape
drive and printer at the same time must first request the tape drive and then request the
printer.
Alternatively, we can require that a process requesting an instance of resource type Rj must
have released any resources Ri such that F(Ri) >= F(Rj).
OPERATING SYSTEMS Module3_Part9
Deadlock recovery

Rollback: if we preempt a resource from a process to break a deadlock, the process must be rolled back to some safe state and restarted from that state. The simplest approach is a total rollback: abort the process and then restart it. Although it is more effective to roll back the process only as far as necessary to break the deadlock, this method requires the system to keep more information about the state of all running processes.
Starvation: how do we ensure that starvation will not occur? That is, how can we guarantee that resources will not always be preempted from the same process? We must ensure that a process can be picked as a victim only a (small) finite number of times. The most common solution is to include the number of rollbacks in the cost factor.
OPERATING SYSTEMS Module3_Part10
Deadlock avoidance
With the knowledge of the complete sequence of requests and releases for each process, the
system can decide for each request whether or not the process should wait in order to avoid a
possible future deadlock.
In making this decision for each request, the system must consider the resources currently available, the resources currently allocated to each process, and the future requests and releases of each process.
Deadlock avoidance
The simplest and most useful model requires that each process declare the maximum
number of resources of each type that it may need.
Using this information, it is possible to construct an algorithm that ensures that the system
will never enter a deadlocked state.
This deadlock-avoidance algorithm dynamically examines the resource-allocation state to
ensure that a circular wait condition can never exist.
The resource-allocation state is defined by the number of available and allocated resources
and the maximum demands of the processes.
Safe State
A state is safe if the system can allocate resources to each process (up to its
maximum) in some order and still avoid a deadlock. More formally, a system is in
a safe state only if there exists a safe sequence.
A sequence of processes <P1, P2, ... , Pn> is a safe sequence for the current
allocation state if, for each Pi, the resource requests that Pi can still make can be
satisfied by the currently available resources plus the resources held by all Pj,
with j < i.
If the resources that Pi needs are not immediately available, then Pi can wait until
all Pj have finished. When they have finished, Pi can obtain all of its needed
resources, complete its designated task, return its allocated resources, and
terminate. When Pi terminates, Pi+l can obtain its needed resources, and so on. If
no such sequence exists, then the system state is said to be unsafe.
A safe state is not a deadlocked state. Conversely, a deadlocked state is an unsafe state. Not all unsafe states are deadlocks, however; an unsafe state may lead to a deadlock.

As long as the state is safe, the operating system can avoid unsafe (and deadlocked) states. In an unsafe state, the operating system cannot prevent processes from requesting resources in such a way that a deadlock occurs; the behavior of the processes controls unsafe states.
Example

Consider a system with 12 magnetic tape drives and three processes: P0, P1, and P2. Suppose process P0 requires 10 tape drives in all, process P1 may need as many as 4, and process P2 may need up to 9. Suppose that at time T0, P0 is holding 5 tape drives, P1 is holding 2, and P2 is holding 2, leaving 3 drives free. At time T0 the system is in a safe state: the sequence <P1, P0, P2> satisfies the safety condition.
Given the concept of a safe state, we can define avoidance algorithms that
ensure that the system will never deadlock.
The idea is simply to ensure that the system will always remain in a safe state. Initially, the
system is in a safe state.
Whenever a process requests a resource that is currently available, the system
must decide whether the resource can be allocated immediately or whether
the process must wait.
The request is granted only if the allocation leaves the system in a safe state.
OPERATING SYSTEMS Module3_Part11
Resource-Allocation-Graph Algorithm

If every resource type has only one instance, a variant of the resource-allocation graph can be used for avoidance. In addition to the request and assignment edges, we introduce a claim edge Pi → Rj, indicated by a dashed line, meaning that process Pi may request resource Rj at some time in the future. A request can be granted only if converting the request edge Pi → Rj to an assignment edge Rj → Pi does not result in the formation of a cycle in the graph.
Resource-Request Algorithm
Next, we describe the algorithm for determining whether requests can be safely
granted.
Let Requesti be the request vector for process Pi. If Requesti[j] == k, then process Pi wants k instances of resource type Rj.
When a request for resources is made by process Pi,
the following actions are taken:
1. If Requesti <= Needi, go to step 2. Otherwise, raise an error condition, since the process has exceeded its maximum claim.
2. If Requesti <= Available, go to step 3. Otherwise, Pi must wait, since the resources are not available.
Banker’s Algorithm
3.Have the system pretend to have allocated the requested resources to process Pi by
modifying the state as follows:
Available = Available - Requesti;
Allocationi = Allocationi + Requesti;
Needi = Needi - Requesti;
If the resulting resource-allocation state is safe, the transaction is completed,
and process Pi is allocated its resources.
However, if the new state is unsafe, then Pi must wait for Requesti, and the old resource-allocation state is restored.
An Illustrative Example
To illustrate the use of the banker's algorithm, consider a system with 5 processes
Po through P4 and three resource types A, B, and C.
Resource type A has 10 instances, resource type B has 5 instances, and resource
type C has 7 instances.
Suppose that, at time T0, the following snapshot of the system has been taken:

          Allocation    Max       Available
          A B C         A B C     A B C
    P0    0 1 0         7 5 3     3 3 2
    P1    2 0 0         3 2 2
    P2    3 0 2         9 0 2
    P3    2 1 1         2 2 2
    P4    0 0 2         4 3 3

The content of the matrix Need is defined to be Max - Allocation and is as follows:

          Need
          A B C
    P0    7 4 3
    P1    1 2 2
    P2    6 0 0
    P3    0 1 1
    P4    4 3 1

We claim that the system is currently in a safe state.
Indeed, the sequence <P1, P3, P4, P0, P2> satisfies the safety criteria.
we can define a deadlock detection algorithm that uses a variant of the resource-allocation
graph, called a wait-for graph.
We obtain this graph from the resource-allocation graph by removing the resource nodes
and collapsing the appropriate edges.
More precisely, an edge Pi → Pj in a wait-for graph implies that process Pi is waiting for process Pj to release a resource that Pi needs. An edge Pi → Pj exists in a wait-for graph if and only if the corresponding resource-allocation graph contains two edges Pi → Rq and Rq → Pj for some resource Rq.
Deadlock detection for systems with
Single Instance of Each Resource Type
For example, we present a resource-allocation graph and the corresponding wait-for graph.
As before, a deadlock exists in the system if and only if the wait-for graph contains a cycle.
To detect deadlocks, the system needs to maintain the wait-for graph and periodically
invoke an algorithm that searches for a cycle in the graph.
Deadlock detection for systems with
Several instances of each resource type
The algorithm employs several time-varying data structures that are similar to those used in
the banker's algorithm
Available. A vector of length m indicates the number of available resources of each type.
Allocation. An n x m matrix defines the number of resources of each type currently allocated to each process.
Request. An n x m matrix indicates the current request of each process. If Request[i][j]
equals k, then process Pi is requesting k more instances of resource type Rj.
We treat the rows in the matrices Allocation and Request as vectors and refer to them as Allocationi and Requesti. The detection algorithm described here simply investigates every possible allocation sequence for the processes that remain to be completed.
Deadlock detection for systems with
Several instances of each resource type
Example
To illustrate this algorithm, we consider a system with five processes Po through P4 and
three resource types A, B, and C.
Resource type A has seven instances, resource type B has two instances, and resource type
C has six instances.
Suppose that, at time T0, we have the following resource-allocation state:

          Allocation    Request    Available
          A B C         A B C      A B C
    P0    0 1 0         0 0 0      0 0 0
    P1    2 0 0         2 0 2
    P2    3 0 3         0 0 0
    P3    2 1 1         1 0 0
    P4    0 0 2         0 0 2

We claim that the system is not in a deadlocked state: the sequence <P0, P2, P3, P1, P4> results in Finish[i] == true for all i.

Suppose now that process P2 makes one additional request for an instance of type C. The Request matrix is modified as follows:

          Request
          A B C
    P0    0 0 0
    P1    2 0 2
    P2    0 0 1
    P3    1 0 0
    P4    0 0 2
We claim that the system is now deadlocked. Although we can reclaim the resources held by process P0, the number of available resources is not sufficient to fulfill the requests of the other processes. Thus, a deadlock exists, consisting of processes P1, P2, P3, and P4.