CS204-OS-module 3
PROCESS SYNCHRONISATION
CRITICAL SECTION
Each process has a segment of code, called a critical section in which the process
may be changing common variables, updating a table, writing a file, and so on.
The important feature of the system is that, when one process is executing in its
critical section, no other process is to be allowed to execute in its critical section.
That is, no two processes are executing in their critical sections at the same time.
The critical-section problem is to design a protocol that the processes can use to
cooperate. Each process must request permission to enter its critical section. The
section of code implementing this request is the entry section. The critical section
may be followed by an exit section. The remaining code is the remainder section.
The general structure of a typical process Pi is shown below
do {
      entry section
            critical section
      exit section
            remainder section
} while (TRUE);
1. Mutual exclusion.
If process Pi is executing in its critical section, then no other processes can be
executing in their critical sections.
2. Progress.
If no process is executing in its critical section and some processes wish to enter
their critical sections, then only those processes that are not executing in their
remainder sections can participate in deciding which will enter its critical section
next, and this selection cannot be postponed indefinitely.
3. Bounded waiting.
There exists a bound, or limit, on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.
PETERSON’S SOLUTION
Peterson's solution is restricted to two processes, P0 and P1, that alternate execution between their critical sections and remainder sections. The two processes share two data items:
int turn;
boolean flag[2];
The variable turn indicates whose turn it is to enter the critical section. The flag array indicates whether a process is ready to enter its critical section: flag[i] == TRUE means that process Pi is ready. The structure of process Pi (with Pj the other process) is:
do {
      flag[i] = TRUE;
      turn = j;
      while (flag[j] && turn == j)
            ; /* busy wait */
            critical section
      flag[i] = FALSE;
            remainder section
} while (TRUE);
The eventual value of turn determines which of the two processes is allowed to enter its critical section first.
To prove that this solution is correct, it is required to show that:
Mutual exclusion is preserved.
The progress requirement is satisfied.
The bounded-waiting requirement is met.
To prove property 1, we note that each Pi enters its critical section only if either flag[j] == false or turn == i. Also note that, if both processes could be executing in their critical sections at the same time, then flag[0] == flag[1] == true. These two observations imply that P0 and P1 could not have successfully executed their while statements at about the same time, since the value of turn can be either 0 or 1 but cannot be both. Hence, one of the processes, say Pj, must have successfully executed the while statement, whereas Pi had to execute at least one additional statement ("turn == j"). However, at that time, flag[j] == true and turn == j, and this condition will persist as long as Pj is in its critical section; as a result, mutual exclusion is preserved.
To prove properties 2 and 3, we note that a process Pi can be prevented from entering the critical section only if it is stuck in the while loop with the condition flag[j] == true and turn == j; this loop is the only one possible. If Pj is not ready to enter the critical section, then flag[j] == false, and Pi can enter its critical section. If Pj has set flag[j] to true and is also executing in its while statement, then either turn == i or turn == j. If turn == i, then Pi will enter the critical section. If turn == j, then Pj will enter the critical section. However, once Pj exits its critical section, it will reset flag[j] to false, allowing Pi to enter its critical section. If Pj resets flag[j] to true, it must also set turn to i. Thus, since Pi does not change the value of the variable turn while executing the while statement, Pi will enter the critical section (progress) after at most one entry by Pj (bounded waiting).
SYNCHRONIZATION HARDWARE
We can state in general that any solution to the critical-section problem requires a simple tool: a lock. Race conditions are prevented by requiring that critical regions be protected by locks. That is, a process must acquire a lock before entering a critical section; it releases the lock when it exits the critical section.
The critical-section problem could be solved simply in a single-processor environment if we could prevent interrupts from occurring while a shared variable was being modified. In this way, we could be sure that the current sequence of instructions would be allowed to execute in order without preemption. No other instructions would be run, so no unexpected modifications could be made to the shared variable. This is often the approach taken by nonpreemptive kernels.
Disabling interrupts on a multiprocessor can be time consuming, since the message is passed to
all the processors. This message passing delays entry into each critical section, and system efficiency
decreases.
Many modern computer systems therefore provide special hardware instructions that allow us
either to test and modify the content of a word or to swap the contents of two words atomically.
Definition:
boolean test_and_set(boolean *target)
{
      boolean rv = *target;
      *target = TRUE;
      return rv;
}
The definition of the test_and_set() instruction. The important characteristic of this instruction is that it is executed atomically.
Solution:
do {
      while (test_and_set(&lock))
            ; /* do nothing */
      /* critical section */
      lock = false;
      /* remainder section */
} while (true);
Mutual-exclusion implementation with test_and_set().
The compare and swap() instruction, operates on three operands.
Definition:
int compare_and_swap(int *value, int expected, int new_value)
{
      int temp = *value;
      if (*value == expected)
            *value = new_value;
      return temp;
}
The definition of the compare_and_swap() instruction.
Solution:
do {
      while (compare_and_swap(&lock, 0, 1) != 0)
            ; /* do nothing */
      /* critical section */
      lock = 0;
      /* remainder section */
} while (true);
Mutual-exclusion implementation with compare_and_swap().
Mutex Locks
do {
      acquire lock
            critical section
      release lock
            remainder section
} while (TRUE);
SEMAPHORES
A semaphore S is an integer variable that, apart from initialization, is accessed only through two standard atomic operations: wait() and signal().
wait(S)
{
      while (S <= 0)
            ; // busy waiting
      S--;
}
signal(S)
{
      S++;
}
All modifications to the integer value of the semaphore in the wait () and signal()
operations must be executed indivisibly. That is, when one process modifies the semaphore
value, no other process can simultaneously modify that same semaphore value. In addition, in
the case of wait (S), the testing of the integer value of S (S <= 0), as well as its possible
modification (S--), must be executed without interruption.
Semaphore Usage
Operating systems often distinguish between counting and binary semaphores.
The value of a counting semaphore can range over an unrestricted domain.
The value of a binary semaphore can range only between 0 and 1.
Counting semaphores can be used to control access to a given resource consisting of a finite number of
instances. The semaphore is initialized to the number of resources available. Each process that wishes to
use a resource performs a wait() operation on the semaphore (thereby decrementing the count). When a
process releases a resource, it performs a signal() operation (incrementing the count). When the count for
the semaphore goes to 0, all resources are being used. After that, processes that wish to use a resource will
block until the count becomes greater than 0.
Semaphore Implementation
When a process executes the wait() operation and finds that the semaphore value is not positive, it
must wait. However, rather than engaging in busy waiting, the process can block itself. The block
operation places a process into a waiting queue associated with the semaphore, and the state of the
process is switched to the waiting state. Then control is transferred to the CPU scheduler, which selects
another process to execute.
A process that is blocked, waiting on a semaphore S, should be restarted when some other process
executes a signal() operation. The process is restarted by a wakeup() operation, which changes the process
from the waiting state to the ready state. The process is then placed in the ready queue.
To implement semaphores so that waiting processes can be blocked rather than busy-wait, each semaphore has an integer value and a list of processes:
typedef struct {
      int value;
      struct process *list;
} semaphore;
If a semaphore value is negative, its magnitude is the number of processes waiting on that semaphore.
This fact results from switching the order of the decrement and the test in the implementation of the wait()
operation.
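Under this scheme, wait() and signal() can be sketched in the book's C-style pseudocode, assuming the semaphore carries a list of waiting processes and that block() suspends the invoking process while wakeup(P) resumes process P (both supplied by the operating system as basic system calls):

```
wait(semaphore *S)
{
      S->value--;
      if (S->value < 0) {
            add this process to S->list;
            block();
      }
}

signal(semaphore *S)
{
      S->value++;
      if (S->value <= 0) {
            remove a process P from S->list;
            wakeup(P);
      }
}
```

Note that here the decrement comes before the test, which is why a negative value records the number of waiting processes, as described above.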
The implementation of a semaphore with a waiting queue may result in a situation where two or more
processes are waiting indefinitely for an event that can be caused only by one of the waiting processes.
When such a state is reached, these processes are said to be deadlocked.
P0                    P1
wait(S);              wait(Q);
wait(Q);              wait(S);
  ...                   ...
signal(S);            signal(Q);
signal(Q);            signal(S);
Suppose that P0 executes wait(S) and then P1 executes wait(Q). When P0 executes wait(Q), it must
wait until P1 executes signal(Q). Similarly, when P1 executes wait(S), it must wait until P0 executes
signal(S). Since these signal() operations cannot be executed, P0 and P1 are deadlocked.
Priority Inversion
Scheduling problem when lower-priority process holds a lock needed by higher-priority process
is called priority inversion.
Solution:
Have only two priorities, or
Implement a priority-inheritance protocol. According to this protocol, all processes that are accessing resources needed by a higher-priority process inherit the higher priority until they are finished with the resources in question. When they are finished, their priorities revert to their original values.
MONITORS
monitor monitor_name
{
      /* shared variable declarations */
      function P1 ( . . . ) {
            . . .
      }
      function P2 ( . . . ) {
            . . .
      }
            . . .
      function Pn ( . . . ) {
            . . .
      }
      initialization_code ( . . . ) {
            . . .
      }
}
Syntax of a monitor.
A function defined within a monitor can access only those variables declared locally within the
monitor and its formal parameters.
The local variables of a monitor can be accessed by only the local functions.
The monitor construct ensures that only one process at a time is active within the monitor.
If process P invokes x.signal() and process Q is suspended in x.wait(), both Q and P cannot execute in parallel. If Q is resumed, then P must wait.
Options include:
Signal and wait: P waits until Q either leaves the monitor or waits for another condition.
Signal and continue: Q waits until P either leaves the monitor or waits for another condition.
BOUNDED-BUFFER PROBLEM
The producer and consumer processes share the following data structures:
int n;
semaphore mutex = 1;
semaphore empty = n;
semaphore full = 0;
The pool consists of n buffers, each capable of holding one item. The mutex semaphore provides
mutual exclusion for accesses to the buffer pool and is initialized to the value 1. The empty and
full semaphores count the number of empty and full buffers. The semaphore empty is initialized
to the value n; the semaphore full is initialized to the value 0.
We can interpret this code as the producer producing full buffers for the consumer or as the consumer producing empty buffers for the producer.
The structure of the producer process.
do {
      ...
      /* produce an item in next_produced */
      ...
      wait(empty);
      wait(mutex);
      ...
      /* add next_produced to the buffer */
      ...
      signal(mutex);
      signal(full);
} while (true);
The structure of the consumer process.
do {
      wait(full);
      wait(mutex);
      ...
      /* remove an item from buffer to next_consumed */
      ...
      signal(mutex);
      signal(empty);
      ...
      /* consume the item in next_consumed */
      ...
} while (true);
Readers-Writers Problem
A data set is shared among a number of concurrent processes
Readers – only read the data set; they do not perform any updates
Writers – can both read and write
Problem –
allow multiple readers to read at the same time;
only a single writer can access the shared data at a time.
To ensure that these difficulties do not arise, we require that the writers have exclusive access to
the shared database while writing to the database. This synchronization problem is referred to as
the readers–writers problem
The readers–writers problem has several variations,
The first readers–writers problem requires that no reader be kept waiting unless a
writer has already obtained permission to use the shared object. In other words, no reader
should wait for other readers to finish simply because a writer is waiting
The second readers –writers problem requires that, once a writer is ready, that writer
perform its write as soon as possible. In other words, if a writer is waiting to access the
object, no new readers may start reading.
A solution to either problem may result in starvation.
In the solution to the first readers–writers problem, the processes share the following data structures:
semaphore rw_mutex = 1;
semaphore mutex = 1;
int read_count = 0;
The structure of a writer process:
do {
      wait(rw_mutex);
      ...
      /* writing is performed */
      ...
      signal(rw_mutex);
} while (true);
The structure of a reader process:
do {
      wait(mutex);
      read_count++;
      if (read_count == 1)
            wait(rw_mutex);
      signal(mutex);
      ...
      /* reading is performed */
      ...
      wait(mutex);
      read_count--;
      if (read_count == 0)
            signal(rw_mutex);
      signal(mutex);
} while (true);