Operating System Synchronization
When two or more processes cooperate with each other, their order of execution must be
preserved; otherwise there can be conflicts in their execution and incorrect outputs can
be produced.
Process synchronization was introduced to handle the problems that arise when multiple
processes execute concurrently.
On the basis of synchronization, processes are categorized into the two types given
below:
Independent Process
Cooperative Process
Independent Processes
Two processes are said to be independent if the execution of one process does not affect the
execution of another process.
Cooperative Processes
Two processes are said to be cooperative if the execution of one process affects the
execution of another process. These processes need to be synchronized so that the order of
execution can be guaranteed.
When more than one process is executing the same code or accessing the same memory or
a shared variable at the same time, there is a possibility that the output or the value of the
shared variable comes out wrong. The processes effectively race against each other, and the
result depends on which of them finishes its access last. This condition is commonly known
as a race condition.
This situation typically occurs inside a critical section. A race condition in the critical
section happens when the result of multiple threads' execution differs according to the order
in which the threads execute.
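As a small illustration (a sketch only; the variable and function names here are chosen for
this example), the following C program creates two threads that each increment a shared
counter 100000 times without any synchronization. Because the increment is not atomic, the
two threads race and the final value is usually less than the expected 200000.

#include <pthread.h>
#include <stdio.h>

/* Shared variable accessed by both threads without any synchronization. */
static long counter = 0;

static void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++)
        counter++;              /* read-modify-write: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Expected 200000, but the race condition usually produces less. */
    printf("counter = %ld\n", counter);
    return 0;
}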
Critical Section
The regions of a program that access shared resources and may cause race conditions
are called critical sections. To avoid race conditions among the processes, we need to ensure
that only one process at a time can execute within its critical section.
The critical section is the part of a program which accesses a shared resource. That
resource may be any resource in the computer, such as a memory location, a data structure,
the CPU, or an I/O device.
Since the critical section must not be executed by more than one process at the same time,
the operating system faces difficulty in deciding when to allow and when to disallow
processes from entering the critical section.
The critical section problem is to design a set of protocols which ensure that a race
condition among the processes can never arise.
In order to synchronize the cooperative processes, our main task is to solve the critical
section problem. We need to provide a solution in such a way that the following conditions
can be satisfied.
Entry Section
In this section the process requests permission to enter its critical
section.
Exit Section
In this section the process leaves the critical section and releases it, so that other
waiting processes can enter (a sketch of this overall structure is given after the list of
conditions below).
Primary
1. Mutual Exclusion
Our solution must provide mutual exclusion. By mutual exclusion, we mean that if
one process is executing inside the critical section, then no other process may enter
the critical section at the same time.
2. Progress
Progress means that if one process does not need to execute in the critical section, then
it should not stop other processes from getting into the critical section.
Secondary
1. Bounded Waiting
There must be a bound on how long each process has to wait to get into the critical
section; no process should be left waiting endlessly to enter the critical
section.
2. Architectural Neutrality
Our mechanism must be architecture neutral. It means that if our solution works
fine on one architecture, then it should also run on other architectures as well.
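As mentioned above, the general structure of a process that uses a critical section can be
sketched as follows (a minimal, illustrative C sketch; the function names are placeholders
chosen for this example, and the entry and exit sections are left empty because the
mechanisms that implement them are the subject of the rest of these notes):

#include <stdio.h>

/* Illustrative placeholders for the four parts of a process that uses a
   critical section. */
static void entry_section(void)     { /* request permission to enter */ }
static void critical_section(void)  { puts("inside the critical section"); }
static void exit_section(void)      { /* release the critical section */ }
static void remainder_section(void) { /* work that does not touch shared data */ }

int main(void) {
    for (int i = 0; i < 3; i++) {
        entry_section();        /* entry section      */
        critical_section();     /* critical section   */
        exit_section();         /* exit section       */
        remainder_section();    /* remainder section  */
    }
    return 0;
}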
Introduction to Semaphores
A semaphore is an operating-system-provided solution to the critical section problem. It is
not a hardware mechanism by itself, but its operations must execute atomically, which is
usually guaranteed with hardware support.
In 1965, Dijkstra proposed a new and very significant technique for managing concurrent
processes by using the value of a simple integer variable to synchronize the progress of
interacting processes. This integer variable is called a semaphore. So it is basically a
synchronizing tool and is accessed only through two standard atomic
operations, wait and signal, designated by P(S) and V(S) respectively.
In very simple words, a semaphore is a variable that can hold only a non-negative
integer value, shared between all the threads, with the operations wait and signal, which
work as follows:
wait(S) {
    while (S <= 0)
        ;        // busy wait until S becomes positive
    S--;         // claim one unit
}
Note:
When one process modifies the value of a semaphore, no other
process can simultaneously modify that same semaphore's value. In the
above case, both the test of the integer value of S (S <= 0) and its
possible modification (S--) must be executed without any interruption.
signal(S) {
    S++;         // release one unit
}
Also, note that all modifications to the integer value of the semaphore in
the wait() and signal() operations must be executed indivisibly.
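To illustrate these semantics only, here is a minimal busy-waiting sketch in C11 (the type
and function names are invented for this example; a real operating system semaphore would
block the waiting process rather than spin):

#include <stdatomic.h>

typedef struct { atomic_int value; } sem_sketch;

void sem_sketch_init(sem_sketch *s, int initial) {
    atomic_init(&s->value, initial);
}

/* wait(S): spin until the value is positive, then decrement it atomically. */
void sem_sketch_wait(sem_sketch *s) {
    for (;;) {
        int v = atomic_load(&s->value);
        if (v > 0 && atomic_compare_exchange_weak(&s->value, &v, v - 1))
            return;     /* the test and the decrement happen as one atomic step */
        /* otherwise S <= 0 (or another thread got there first): keep waiting */
    }
}

/* signal(S): increment the value atomically. */
void sem_sketch_signal(sem_sketch *s) {
    atomic_fetch_add(&s->value, 1);
}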
Properties of Semaphores
1. A semaphore is a simple integer variable that never takes a negative value.
2. It is shared between all the processes or threads that need to synchronize with each other.
3. It can be accessed only through the two atomic operations, wait (P) and signal (V).
Types of Semaphores
1. Binary Semaphore:
It is a special form of semaphore used for implementing mutual
exclusion, hence it is often called a mutex. A binary semaphore is
initialized to 1 and only takes the values 0 and 1 during the
execution of a program. In a binary semaphore, the wait operation
works only if the value of the semaphore is 1, and the signal operation
succeeds when the semaphore is 0. Binary semaphores are easier to
implement than counting semaphores.
2. Counting Semaphores:
A counting semaphore can take any non-negative integer value. It is
used to control access to a resource that has a finite number of
instances: the semaphore is initialized to the number of available
instances, and its value at any moment counts how many instances
are still free.
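For example, a counting semaphore can limit how many threads use a pool of identical
resource instances at the same time. A minimal sketch using POSIX semaphores (assuming a
Linux-like system; the names and the sleep() used to simulate work are chosen for this
example):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

/* A pool of 2 identical resource instances, guarded by a counting semaphore. */
static sem_t pool;

static void *worker(void *arg) {
    long id = (long)arg;
    sem_wait(&pool);                     /* P: wait for a free instance */
    printf("thread %ld acquired an instance\n", id);
    sleep(1);                            /* pretend to use the resource */
    printf("thread %ld released an instance\n", id);
    sem_post(&pool);                     /* V: give the instance back   */
    return NULL;
}

int main(void) {
    pthread_t t[5];
    sem_init(&pool, 0, 2);               /* counting semaphore, initial value 2 */
    for (long i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&pool);
    return 0;
}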
Example of Use
Process i
begin
    P(mutex);
    execute CS;
    V(mutex);
end;
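The same pattern in runnable form, using a POSIX semaphore initialized to 1 as the mutex (a
sketch under the same assumptions as above); this also removes the race condition shown in
the earlier counter example:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t mutex;            /* binary semaphore, used as a mutex */
static int shared_counter = 0;

static void *process_i(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);      /* P(mutex): enter the critical section */
        shared_counter++;      /* critical section                     */
        sem_post(&mutex);      /* V(mutex): leave the critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);    /* initial value 1 -> binary semaphore */
    pthread_create(&t1, NULL, process_i, NULL);
    pthread_create(&t2, NULL, process_i, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared_counter = %d\n", shared_counter);   /* always 200000 */
    sem_destroy(&mutex);
    return 0;
}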
Advantages of Semaphores
With a counting semaphore, more than one process or thread can be allowed into the
section it guards at the same time, up to the semaphore's initial value.
Disadvantages of Semaphores