Process Synchronization
Process Synchronization means coordinating the way processes share system resources so that concurrent access to shared data is handled safely and the chance of data inconsistency is minimized. Process synchronization mechanisms were introduced to handle the problems that arise when multiple processes execute concurrently.

Critical Section Problem
A critical section is the part of a program that accesses shared resources. That resource may be any resource in the computer, such as a memory location, a data structure, the CPU, or an I/O device. A critical section cannot be executed by more than one process at the same time: in a group of cooperating processes, at any given point of time, only one process may be executing its critical section. If any other process also wants to execute its critical section, it must wait until the first one finishes. A critical section may be used, for example, to ensure that a shared resource such as a printer is accessed by only one process at a time.

Solution to Critical Section Problem
A solution to the critical section problem must satisfy the following three conditions:
1. Mutual Exclusion - Out of a group of cooperating processes, only one process can be in its critical section at a given point of time.
2. Progress - If no process is in its critical section and one or more processes want to enter their critical sections, then one of them must be allowed to enter; the decision cannot be postponed indefinitely.
3. Bounded Waiting - After a process requests entry to its critical section, there is a bound on how many other processes may enter their critical sections before that request is granted. Once the bound is reached, the system must grant the waiting process permission to enter its critical section.

Semaphores
A semaphore is a variable or abstract data type used to control access to a common resource by multiple processes in a concurrent system. Semaphores are integer variables that are used to solve the critical section problem by means of two atomic operations, wait and signal, which are used for process synchronization. The definitions of wait and signal are as follows:

Wait
The wait operation decrements the value of its argument S if it is positive. If S is zero or negative, no decrement is performed; the process waits until S becomes positive.
wait(S) {
    while (S <= 0);   // busy wait until S becomes positive
    S--;
}

Signal
The signal operation increments the value of its argument S.
signal(S) {
    S++;
}
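As a concrete illustration (this example is not part of the original notes), the sketch below applies wait and signal through the POSIX semaphore API; the worker function, the initial count of 2, and the four threads are arbitrary choices made for the example. It is compiled with -pthread.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t S;                                   /* the semaphore shared by all threads  */

void *worker(void *arg)
{
    int id = *(int *)arg;
    sem_wait(&S);                          /* wait(S): blocks while the count is 0 */
    printf("thread %d is inside the critical section\n", id);
    sem_post(&S);                          /* signal(S): increments the count      */
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    int id[4] = {0, 1, 2, 3};
    sem_init(&S, 0, 2);                    /* at most 2 threads may enter at once  */
    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, &id[i]);
    for (int i = 0; i < 4; i++)
        pthread_join(&t[i], NULL);
    sem_destroy(&S);
    return 0;
}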
Example
Consider a restaurant with 4 tables, so the semaphore count = number of available resources = 4. Each time a person occupies a table, the wait() operation decrements the semaphore count by 1. When the count reaches 0, no table is free and no further wait() operation can complete. As soon as one person leaves and a table becomes empty, the signal() operation increments the semaphore to S = 1, and one person from the queue can now move into the restaurant.

Types of Semaphores
There are two main types of semaphores: counting semaphores and binary semaphores.

Counting Semaphores
These are integer-valued semaphores with an unrestricted value domain. They are used to coordinate access to resources, where the semaphore count is the number of available resources. If resources are added, the semaphore count is incremented, and if resources are removed, the count is decremented.

Binary Semaphores
Binary semaphores are like counting semaphores, but their value is restricted to 0 and 1. The wait operation only succeeds when the semaphore is 1, and the signal operation succeeds when the semaphore is 0. Binary semaphores are sometimes easier to implement than counting semaphores.

Counting Semaphore
A counting semaphore has two components: an integer value and an associated waiting list (usually a queue). The value of a counting semaphore may be positive or negative. A positive value indicates the number of processes that can be present in the critical section at the same time. A negative value indicates the number of processes that are blocked in the waiting list.

Waiting List
The waiting list of a counting semaphore contains the processes that got blocked while trying to enter the critical section. The blocked processes in the waiting list are put to sleep. The waiting list is usually implemented using a queue data structure. Using a queue as the waiting list ensures bounded waiting, because the process that arrives first in the waiting queue gets the chance to enter the critical section first.

Wait operation
The wait operation is executed when a process tries to enter the critical section. It decrements the value of the counting semaphore by 1. Then the following two cases are possible:
Case-01: Counting Semaphore Value >= 0
If the resulting value of the counting semaphore is greater than or equal to 0, the process is allowed to enter the critical section.
Case-02: Counting Semaphore Value < 0
If the resulting value of the counting semaphore is less than 0, the process is not allowed to enter the critical section. In this case, the process is put to sleep in the waiting list.

Signal operation
The signal operation is executed when a process exits the critical section. It increments the value of the counting semaphore by 1. Then the following two cases are possible:
Case-01: Counting Semaphore Value <= 0
If the resulting value of the counting semaphore is less than or equal to 0, a process is chosen from the waiting list and woken up to execute.
Case-02: Counting Semaphore Value > 0
If the resulting value of the counting semaphore is greater than 0, no further action is taken.

The Down (wait) and Up (signal) operations of a counting semaphore can be written as:

Down(semaphore S)
{
    S.value = S.value - 1;
    if (S.value < 0)
    {
        put the process in the suspend list;
        Sleep();
    }
    else
        return;
}

Up(semaphore S)
{
    S.value = S.value + 1;
    if (S.value <= 0)
    {
        select a process from the suspend list;
        Wakeup();
    }
}
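For comparison, here is a minimal C sketch (added for illustration, not from the original notes) of the Down/Up logic above, built on a POSIX mutex and condition variable; the names csem_t, csem_init, csem_down and csem_up are made up for this example.

#include <pthread.h>

typedef struct {
    int value;                    /* may go negative: -value = number of waiters */
    pthread_mutex_t lock;
    pthread_cond_t  queue;        /* stands in for the waiting list              */
} csem_t;

void csem_init(csem_t *s, int initial)
{
    s->value = initial;
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->queue, NULL);
}

void csem_down(csem_t *s)         /* wait / P operation                          */
{
    pthread_mutex_lock(&s->lock);
    s->value--;
    if (s->value < 0)             /* no resource free: sleep in the waiting list */
        pthread_cond_wait(&s->queue, &s->lock);
    pthread_mutex_unlock(&s->lock);
}

void csem_up(csem_t *s)           /* signal / V operation                        */
{
    pthread_mutex_lock(&s->lock);
    s->value++;
    if (s->value <= 0)            /* someone is blocked: wake one process up     */
        pthread_cond_signal(&s->queue);
    pthread_mutex_unlock(&s->lock);
}

Note that pthread_cond_wait may wake spuriously; a production implementation would re-check an explicit wakeup count, but this sketch keeps the textbook structure of the Down/Up pseudocode.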
Binary Semaphore
A binary semaphore has two components: an integer value which can be either 0 or 1, and an associated waiting list (usually a queue).

Waiting List
The waiting list of a binary semaphore contains the processes that got blocked while trying to enter the critical section. The blocked processes in the waiting list are put to sleep. The waiting list is usually implemented using a queue data structure. Using a queue as the waiting list ensures bounded waiting, because the process that arrives first in the waiting queue gets the chance to enter the critical section first.

Wait operation
The wait operation is executed when a process tries to enter the critical section. Then there are two cases possible:

Case-01: Binary Semaphore Value = 1
If the value of the binary semaphore is 1, the value is set to 0 and the process is allowed to enter the critical section.
Case-02: Binary Semaphore Value = 0
If the value of the binary semaphore is 0, the process is blocked and not allowed to enter the critical section; it is put to sleep in the waiting list.

Signal operation
The signal operation is executed when a process exits the critical section. Then there are two cases possible:
Case-01: Waiting List is Empty
If the waiting list is empty, the value of the binary semaphore is set to 1.
Case-02: Waiting List is Not Empty
If the waiting list is not empty, a process is chosen from the waiting list and woken up to execute.

The Down (wait) and Up (signal) operations of a binary semaphore can be written as:

Down(semaphore S)
{
    if (S.value == 1)
        S.value = 0;
    else
    {
        put the process in the suspend list;
        Sleep();
    }
}

Up(semaphore S)
{
    if (the suspend list is empty)
        S.value = 1;
    else
    {
        select a process from the suspend list;
        Wakeup();
    }
}

Other names by which the wait operation may be referred to: Down operation, P operation. Other names by which the signal operation may be referred to: Up operation, V operation, Release operation.

Dining Philosophers Problem
Consider five philosophers sitting around a circular dining table. The table has five chopsticks and a bowl of rice in the middle. The Dining Philosophers Problem states that K philosophers are seated around a circular table with one chopstick between each pair of neighbours. A philosopher may eat only if he can pick up the two chopsticks adjacent to him. Each chopstick may be picked up by either of its two neighbouring philosophers, but not by both at once. At any instant, a philosopher is either eating or thinking. When a philosopher wants to eat, he uses two chopsticks, one from his left and one from his right. When a philosopher wants to think, he puts both chopsticks back in their original places.

Solution
From the problem statement, it is clear that a philosopher can think for an indefinite amount of time, but once a philosopher starts eating, he has to stop at some point of time; the philosopher is in an endless cycle of thinking and eating. When a philosopher wants to eat, he waits for the chopstick on his left and picks it up, then waits for the chopstick on his right and picks that up too. After eating, he puts both chopsticks down. But if all five philosophers become hungry simultaneously and each of them picks up one chopstick, a deadlock occurs, because each will wait forever for the other chopstick. The possible solutions for this are:
1. A philosopher is allowed to pick up the chopsticks only if both the left and right chopsticks are available.
2. Allow only four philosophers to sit at the table. That way, even if all four philosophers pick up four chopsticks, one chopstick is still left on the table, so one philosopher can start eating, and eventually two chopsticks become available. In this way, deadlock can be avoided.
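A rough C sketch of the second idea (allowing at most four philosophers to contend for chopsticks at once) is shown below; it is added for illustration and is not from the original notes. The single-round philosopher body, the thread setup, and the printf are assumptions made to keep the example self-contained.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 5                                 /* number of philosophers and chopsticks      */

sem_t chopstick[N];                         /* one binary semaphore per chopstick         */
sem_t seats;                                /* counting semaphore: at most N-1 may try    */

void *philosopher(void *arg)
{
    int i = *(int *)arg;
    /* think ... */
    sem_wait(&seats);                       /* take a seat: at most 4 philosophers compete */
    sem_wait(&chopstick[i]);                /* pick up left chopstick                      */
    sem_wait(&chopstick[(i + 1) % N]);      /* pick up right chopstick                     */
    printf("philosopher %d is eating\n", i);
    sem_post(&chopstick[(i + 1) % N]);      /* put down right chopstick                    */
    sem_post(&chopstick[i]);                /* put down left chopstick                     */
    sem_post(&seats);                       /* leave the table                             */
    return NULL;
}

int main(void)
{
    pthread_t t[N];
    int id[N];
    sem_init(&seats, 0, N - 1);
    for (int i = 0; i < N; i++) sem_init(&chopstick[i], 0, 1);
    for (int i = 0; i < N; i++) { id[i] = i; pthread_create(&t[i], NULL, philosopher, &id[i]); }
    for (int i = 0; i < N; i++) pthread_join(&t[i], NULL);
    return 0;
}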
Readers Writer Problem
The readers-writers problem is another example of a classic synchronization problem.

The Problem Statement
There is a shared resource which is accessed by multiple processes. There are two types of processes in this context: readers and writers. Any number of readers can read from the shared resource simultaneously, but only one writer can write to the shared resource at a time. When a writer is writing data to the resource, no other process can access the resource. A writer cannot write to the resource while a non-zero number of readers are accessing it. If two readers access the object at the same time, there is no problem; however, if two writers, or a reader and a writer, access the object at the same time, there may be problems. To avoid this, a writer must get exclusive access to the object: when a writer is accessing the object, no reader or writer may access it, while multiple readers may access the object at the same time. This can be implemented using semaphores. The reader and writer processes work as follows:

Reader Process
In the reader code, mutex and wrt are semaphores that are initialized to 1, and rc is a variable initialized to 0. The mutex semaphore ensures mutual exclusion on rc, while wrt handles the writing mechanism and is common to the reader and writer process code. The variable rc denotes the number of readers currently accessing the object. As soon as rc becomes 1, the wait operation is performed on wrt, so a writer can no longer access the object. After the read operation is done, rc is decremented; when rc becomes 0, the signal operation is performed on wrt, so a writer can access the object again.

Writer Process
wait(wrt);
.
.   WRITE INTO THE OBJECT
.
signal(wrt);

If a writer wants to access the object, the wait operation is performed on wrt; after that, no other writer can access the object. When the writer is done writing into the object, the signal operation is performed on wrt.
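The reader code itself is not reproduced in these notes. A common textbook formulation consistent with the description above is sketched here in C with POSIX semaphores standing in for mutex and wrt; rw_init and the function names are illustrative only.

#include <semaphore.h>

sem_t mutex;   /* initialized to 1: protects the reader count rc              */
sem_t wrt;     /* initialized to 1: held by writers, and by the first reader  */
int   rc = 0;  /* number of readers currently accessing the object            */

void rw_init(void)
{
    sem_init(&mutex, 0, 1);
    sem_init(&wrt, 0, 1);
}

void reader(void)
{
    sem_wait(&mutex);
    rc++;
    if (rc == 1)               /* first reader locks out writers               */
        sem_wait(&wrt);
    sem_post(&mutex);

    /* ... READ THE OBJECT ... */

    sem_wait(&mutex);
    rc--;
    if (rc == 0)               /* last reader lets writers in again            */
        sem_post(&wrt);
    sem_post(&mutex);
}

void writer(void)
{
    sem_wait(&wrt);
    /* ... WRITE INTO THE OBJECT ... */
    sem_post(&wrt);
}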
Producer-Consumer Problem
Also known as the bounded-buffer problem.

Problem Statement
There is a buffer of n slots, and each slot is capable of storing one unit of data. Two processes, the producer and the consumer, operate on the buffer. The producer tries to insert data into an empty slot of the buffer, and the consumer tries to remove data from a filled slot of the buffer.

Solution
One solution to this problem is to use semaphores. The semaphores used here are:
m, a binary semaphore which is used to acquire and release a lock on the buffer;
empty, a counting semaphore whose initial value is the number of slots in the buffer, since initially all slots are empty;
full, a counting semaphore whose initial value is 0.
At any instant, the current value of empty is the number of empty slots in the buffer, and full is the number of occupied slots.

The Producer Operation
The producer first waits until there is at least one empty slot. This wait decrements the empty semaphore, because there will be one less empty slot once the producer inserts data into one of them. The producer then acquires the lock on the buffer so that the consumer cannot access the buffer until the producer completes its operation. After performing the insert operation, the lock is released and the value of full is incremented, because the producer has just filled a slot in the buffer.

The Consumer Operation
The consumer waits until there is at least one full slot in the buffer. This wait decrements the full semaphore, because the number of occupied slots decreases by one after the consumer completes its operation. The consumer then acquires the lock on the buffer, performs the removal operation so that the data in one of the full slots is removed, and releases the lock. Finally, the empty semaphore is incremented by 1, because the consumer has just emptied an occupied slot.
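A compact C sketch of these two operations (added for illustration, not part of the original notes) is given below; the circular buffer, its size N = 8, and the in/out indices are assumptions made to complete the example.

#include <pthread.h>
#include <semaphore.h>

#define N 8                       /* number of buffer slots (assumed)            */

int   buffer[N];
int   in = 0, out = 0;            /* next free slot / next filled slot           */
sem_t m;                          /* binary semaphore: lock on the buffer        */
sem_t empty;                      /* counts empty slots, initialized to N        */
sem_t full;                       /* counts filled slots, initialized to 0       */

void bb_init(void)
{
    sem_init(&m, 0, 1);
    sem_init(&empty, 0, N);
    sem_init(&full, 0, 0);
}

void produce(int item)
{
    sem_wait(&empty);             /* wait for an empty slot, then claim it       */
    sem_wait(&m);                 /* lock the buffer                             */
    buffer[in] = item;
    in = (in + 1) % N;
    sem_post(&m);                 /* unlock the buffer                           */
    sem_post(&full);              /* one more filled slot                        */
}

int consume(void)
{
    sem_wait(&full);              /* wait for a filled slot                      */
    sem_wait(&m);                 /* lock the buffer                             */
    int item = buffer[out];
    out = (out + 1) % N;
    sem_post(&m);                 /* unlock the buffer                           */
    sem_post(&empty);             /* one more empty slot                         */
    return item;
}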
Two Process Solution
This is a busy waiting solution which can be implemented only for two processes. In this approach, a turn variable is used, which acts as a lock. In general, let the two processes be Pi and Pj; they share a variable called turn. The pseudo code of the program can be given as follows.

For Process Pi:
    // Non-critical section
    while (turn != i);        // busy wait; the process enters the CS when this becomes false
    // Critical Section
    turn = j;
    // Non-critical section

For Process Pj:
    // Non-critical section
    while (turn != j);
    // Critical Section
    turn = i;
    // Non-critical section

Initially, both processes Pi and Pj want to enter the critical section. The turn variable is equal to i, hence Pi gets the chance to enter the critical section, and the value of turn remains i until Pi finishes its critical section. When Pi finishes, it assigns j to the turn variable, so Pj gets the chance to enter the critical section, and the value of turn remains j until Pj finishes its critical section.

Analysis of the two process solution approach
Mutual Exclusion
The strict alternation approach provides mutual exclusion in every case. This procedure works only for two processes, and the pseudo code is different for each of them. A process enters the critical section only when it sees that the turn variable is equal to its process ID; otherwise it waits. Hence no process can enter the critical section out of its turn.
Progress
Progress is not guaranteed in this mechanism. If Pi does not want to enter the critical section on its turn, then Pj is blocked indefinitely: Pj has to wait arbitrarily long, since the turn variable will not change until Pi assigns it to j.

Two Process Solution Using a Flag Variable
We have to make sure that progress is provided by the synchronization mechanism. In the turn variable mechanism, progress was not provided because a process which does not want to enter the critical section does not consider the other interested process; the other process has to wait even though no one is inside the critical section. If the operating system makes use of an extra variable along with the turn variable, this problem can be solved and progress can be provided to a large extent.

For Process Pi:
    // Non-critical section
    flag[i] = T;
    while (flag[j] == T);
    // Critical Section
    flag[i] = F;

For Process Pj:
    // Non-critical section
    flag[j] = T;
    while (flag[i] == T);
    // Critical Section
    flag[j] = F;

In this mechanism, an extra variable flag is used. This is a Boolean array used to store each process's interest in entering the critical section. A process which wants to enter the critical section first checks, in the entry section, whether the other process is interested in getting inside, and waits as long as the other process is interested. In the exit section, the process sets its own flag back to false so that the other process can get into the critical section.
Mutual Exclusion
In the interest-variable mechanism, if one process is interested in entering the critical section, the other process waits until it becomes uninterested. Therefore, more than one process can never be present in the critical section at the same time, so the mechanism guarantees mutual exclusion.
Progress
In this mechanism, if a process is not interested in entering the critical section, it does not stop the other process from entering it. Therefore, progress is provided by this method.
Bounded Waiting
When a deadlock condition arises (both processes set their flags at the same time and then wait for each other forever), bounded waiting can never be provided.

Peterson's Solution
Peterson's Solution is a classic software-based solution to the critical section problem. In Peterson's solution, we have two shared variables:
boolean flag[i]: initialized to FALSE; initially no one is interested in entering the critical section.
int turn: the process whose turn it is to enter the critical section.
Peterson's Solution preserves all three conditions: Mutual Exclusion is assured, as only one process can access the critical section at any time; Progress is assured, as a process outside the critical section does not block other processes from entering the critical section; Bounded Waiting is preserved, as every process gets a fair chance.
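A minimal C sketch of Peterson's algorithm for two processes follows (added for illustration). Note that on modern multiprocessors plain volatile variables are not sufficient; real code would need atomic operations or memory barriers, so this sketch only mirrors the textbook logic.

#include <stdbool.h>

/* Shared variables for the two processes, numbered 0 and 1 */
volatile bool flag[2] = { false, false };  /* flag[i]: process i wants to enter    */
volatile int  turn    = 0;                 /* whose turn it is to yield            */

void enter_critical_section(int i)         /* i is 0 or 1; j is the other process  */
{
    int j = 1 - i;
    flag[i] = true;                        /* announce interest                    */
    turn = j;                              /* give priority to the other process   */
    while (flag[j] && turn == j)           /* wait only while the other process is */
        ;                                  /* interested and it is its turn        */
}

void exit_critical_section(int i)
{
    flag[i] = false;                       /* withdraw interest                    */
}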
Monitors
Monitors are abstract data types that contain shared data variables and procedures. A monitor is one of the ways to achieve process synchronization and is supported by programming languages to achieve mutual exclusion between processes.
1. It is a collection of shared data variables, condition variables, and procedures combined together in a special kind of module or package.
2. Processes running outside the monitor cannot access the internal variables of the monitor, but they can call the procedures of the monitor; the shared data variables cannot be accessed directly by a process.
3. Only one process at a time can execute code inside the monitor.

Condition Variables
Two different operations are performed on the condition variables of a monitor: wait and signal.
Wait operation, x.wait(): a process performing a wait operation on a condition variable is suspended; the suspended processes are placed in the blocked queue of that condition variable.
Signal operation, x.signal(): when a process performs a signal operation on a condition variable, one of the blocked processes is given a chance to run.

Synchronization Hardware
Many systems provide hardware support for critical section code. The critical section problem could be solved easily in a single-processor environment if we could disallow interrupts while a shared variable or resource is being modified. In this manner, we could be sure that the current sequence of instructions would be allowed to execute in order without pre-emption. Unfortunately, this solution is not feasible in a multiprocessor environment: disabling interrupts on a multiprocessor can be time consuming, as the message has to be passed to all the processors. This message transmission lag delays entry of threads into the critical section, and system efficiency decreases.

Mutex Locks
As the synchronization hardware solution is not easy to implement for everyone, a strict software approach called mutex locks was introduced. In this approach, in the entry section of code, a LOCK is acquired over the critical resources modified and used inside the critical section, and in the exit section that LOCK is released. Because the resource is locked while a process executes its critical section, no other process can access it.
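As a concrete example of this idea (added for illustration, not from the original notes), the sketch below uses a POSIX mutex as the LOCK around a critical section; shared_counter stands in for some shared resource.

#include <pthread.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;  /* the LOCK                     */
long shared_counter = 0;                           /* hypothetical shared resource */

void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);     /* entry section: acquire the LOCK          */
        shared_counter++;              /* critical section                         */
        pthread_mutex_unlock(&lock);   /* exit section: release the LOCK           */
    }
    return NULL;
}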