
Module-III (Part-I):

Process Synchronization

by:
Dr. Soumya Priyadarsini Panda
Assistant Professor
Dept. of CSE
Silicon Institute of Technology, Bhubaneswar
Basic Concepts
 A cooperating process is one that can affect or be affected by other
processes executing in the system.

 Cooperating processes use IPC mechanisms to communicate with each
other.

 They can either directly share a logical address space (that is, both
code and data) or be allowed to share data only through files or
messages.

 Concurrent access to shared data may result in data inconsistency.

 Process synchronization is required to maintain data consistency.

Classical Problems of Synchronization

 Producer Consumer problem (Bounded Buffer)

 Readers-Writers Problem

 Dining-Philosopher Problem

 Sleeping Barber Problem


Producer-Consumer Problem

 A producer process produces information that is consumed by a consumer
process.

 Examples:
  A compiler may produce assembly code that is consumed by an
assembler.
  A web server produces (provides) HTML files and images which are
consumed (read) by a client web browser requesting the resource.
Cont…

 To allow concurrent execution of the producer and consumer processes -

 Solution: use shared memory

 i.e. a buffer of items will reside in a shared memory location that can be
filled by the producer and emptied by the consumer.

 2 types of buffers can be used:
  Unbounded-buffer
  Bounded-buffer
Cont…
Unbounded-buffer:
 Places no practical limit on the size of the buffer.
 The producer can always produce new items, but the consumer may have
to wait for new items if none have been produced yet.

Bounded-buffer:
 Assumes that there is a fixed buffer size.
 The consumer must wait if the buffer is empty, and the producer must wait
if the buffer is full.
Bounded-Buffer – Shared-Memory Solution
Shared data: can be implemented using a circular array.

#define BUFFER_SIZE 5
typedef struct {
    ...
} item;

item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;

Where:
  'in' points to the next free position,
  'out' points to the first full position.
The buffer is empty when in == out.
The buffer is full when ((in + 1) % BUFFER_SIZE) == out.

(Diagram: a circular buffer with slots 0–4; in and out both start at slot 0.)
Bounded-Buffer – Shared-Memory Solution Cont…

// Producer Process:
item nextProduced;

while (true)
{
    /* produce an item in nextProduced */
    while (((in + 1) % BUFFER_SIZE) == out)
        ;   /* do nothing: no free buffers */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
}

// Consumer Process:
item nextConsumed;

while (true)
{
    while (in == out)
        ;   // do nothing: nothing to consume
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    // consume the item in nextConsumed
}

 This solution allows at most n-1 items in the buffer (of size n) at the same
time.
Example (buffer of size 5, slots 0–4):

  Initially:               out = 0, in = 0   (buffer empty)
  After producing item1:   out = 0, in = 1
  After producing item2:   out = 0, in = 2
  .......
  After producing item4:   out = 0, in = 4   (slots 0–3 hold item1 … item4)

 Now the condition while (((in + 1) % BUFFER_SIZE) == out) is satisfied,
i.e. no free buffer location is reported as available (although location 4 is empty).
 So this solution allows at most n-1 items in a buffer of size n at the same
time.
Modifications for using all n buffer
locations
 To use all n buffer locations, the producer-consumer code can be modified
by adding a counter variable initialized to 0

 Counter is incremented each time a new item is added to the buffer and
decremented each time an item is removed from the buffer.

Modified Shared data contents:

#define BUFFER_SIZE 5
typedef struct {
    ...
} item;

item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;
int counter = 0;
Modified Code

Modified Producer code:

item nextProduced;
while (true) {
    /* produce an item in nextProduced */
    while (counter == BUFFER_SIZE)
        ;   /* do nothing: no free buffers */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

Modified Consumer Code:

item nextConsumed;
while (true) {
    while (counter == 0)
        ;   /* do nothing */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
    /* consume the item in nextConsumed */
}

 The modified producer and consumer code works fine when executed
separately but doesn't function correctly when executed concurrently.
Problems with this solution

 If both the producer and consumer attempt to update counter
concurrently, the assembly language statements may get interleaved.

 Interleaving depends upon how the producer and consumer processes
are scheduled.

Example:

counter++ may be implemented as:
    register1 = counter
    register1 = register1 + 1
    counter = register1

counter-- may be implemented as:
    register2 = counter
    register2 = register2 - 1
    counter = register2
Example Cont…
Let counter = 5, and let the producer and consumer execute the statements
counter++ and counter-- concurrently.

S0: producer executes register1 = counter        {register1 = 5}
S1: producer executes register1 = register1 + 1  {register1 = 6}
S2: consumer executes register2 = counter        {register2 = 5}
S3: consumer executes register2 = register2 - 1  {register2 = 4}
S4: producer executes counter = register1        {counter = 6}
S5: consumer executes counter = register2        {counter = 4}

 An inconsistent state is reached: counter = 4 indicates that 4 buffer locations
are full (but actually 5 are full).
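
The interleaving above can be reproduced on a real machine. The following is a
minimal illustrative sketch (not from the original slides), in C with POSIX
threads: two threads repeatedly increment and decrement an unprotected shared
counter, and the final value is usually not the expected 0.

#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 1000000

int counter = 0;                  /* shared and deliberately unprotected */

void *producer_like(void *arg) {
    for (int i = 0; i < ITERATIONS; i++)
        counter++;                /* load, add, store: may interleave */
    return NULL;
}

void *consumer_like(void *arg) {
    for (int i = 0; i < ITERATIONS; i++)
        counter--;                /* load, subtract, store: may interleave */
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer_like, NULL);
    pthread_create(&c, NULL, consumer_like, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    printf("counter = %d\n", counter);   /* expected 0; a race usually leaves it nonzero */
    return 0;
}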
Race Condition
 When several processes access and manipulate the same data
concurrently, and the outcome of the execution depends on the particular
order in which the accesses take place, the situation is called a race condition.

 Example: the final value of the shared variable counter depends upon
which process finishes last.

 Race conditions must be prevented by allowing only one process to
manipulate the data value at a time.

 To avoid race conditions, concurrent processes must be synchronized.

Critical Section Problem
 Consider a system consisting of n processes {p0, p1, … pn-1}

 Each process has a segment of code called its critical section, in which the
process may change values of common variables, update a table, write
to a file, etc.

 When one process is executing in its critical section, no other process is
allowed to execute in its critical section.
 i.e. no 2 processes are executing in their critical sections at the same
time.

 The critical-section problem is to design a protocol that the processes
can use to cooperate.
Cont…
Structure of a typical process Pi:

 Each process must request permission to enter its critical
section.

 The section of code implementing this request is the
entry section.

 The critical section may be followed by an exit section.

 The remaining code is the remainder section.
Solution to the Critical-Section Problem
A solution to the critical-section problem must satisfy 3 requirements:

1. Mutual Exclusion: If process Pi is executing in its critical section, then
no other process can be executing in its critical section.

2. Progress: If no process is executing in its critical section and some
processes wish to enter their critical sections, then only those processes
that are not executing in their remainder sections can participate in
deciding which process will enter its critical section next, and this
selection cannot be postponed indefinitely.

3. Bounded Waiting: There exists a limit on the number of times that
other processes are allowed to enter their critical sections after a process
has made a request to enter its critical section and before that request is
granted.
Critical-Section Handling in OS
• There are 2 approaches used to handle critical sections in the OS:
preemptive kernels and non-preemptive kernels.

Preemptive Kernel:
• It allows a process to be preempted while running in kernel mode.
• A preemptive kernel must be carefully designed to ensure that shared
kernel data are free from race conditions.
• Preemptive kernels are more suitable for real-time programming.

Non-preemptive Kernel:
• It does not allow a process running in kernel mode to be preempted.
• A kernel-mode process will run until it exits kernel mode, blocks, or
voluntarily yields the CPU.
• A non-preemptive kernel is free from race conditions on kernel data.
Peterson's Solution
 It's a classic software-based solution to the critical-section problem.

 It is restricted to two processes that alternate execution between their
critical sections and remainder sections.

 Let Pi and Pj be 2 processes sharing some data and alternating execution
between their critical sections.

 The two processes share two variables:
  int turn;
  boolean flag[2];
Peterson's Solution Cont…

 The variable turn indicates whose turn it is to enter the critical section.
 i.e. if turn == j, then process Pj is allowed to execute in its critical
section.

 The flag array is used to indicate if a process is ready to enter its critical
section.
 i.e. flag[i] = true implies that process Pi is ready to enter its critical
section.

Structure of Process Pi

do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j);   // If Pj is ready and it is its turn, then Pi must wait

        critical section

    flag[i] = false;                // Pi is out of its critical section

        remainder section

} while (true);

To enter the critical section, process Pi first sets flag[i] to true and
then sets turn to the value j.
Structure of Process Pj

do {
    flag[j] = true;
    turn = i;
    while (flag[i] && turn == i);   // If Pi is ready and it is its turn, then Pj must wait

        critical section

    flag[j] = false;                // Pj is out of its critical section

        remainder section

} while (true);

To enter the critical section, process Pj first sets flag[j] to true and
then sets turn to the value i.
Peterson's Solution (Cont…)

 The solution is correct as it satisfies the 3 conditions:

1. Mutual exclusion is preserved:
   Pi enters its CS only if either flag[j] == false or turn == i.

2. The progress requirement is satisfied.

3. The bounded-waiting requirement is met.

 However, software-based solutions are not guaranteed to work on modern
computer architectures.

 Therefore, hardware-based solutions based on locking mechanisms are used.

Synchronization Hardware
 Many systems provide hardware support for implementing the critical
section code.

 Those solutions are based on the idea of locking


 Protecting critical regions via locks

 Solution to Critical-section Problem using Locks:

do {
acquire lock
critical section
release lock
remainder section
} while (TRUE);
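
As a concrete illustration (not part of the original slides), the acquire/release
steps above map directly onto a mutex in C with POSIX threads. A minimal sketch,
assuming a pthreads environment:

#include <pthread.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
int shared_counter = 0;               /* shared data protected by the lock */

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* acquire lock  (entry section)  */
        shared_counter++;             /* critical section               */
        pthread_mutex_unlock(&lock);  /* release lock  (exit section)   */
        /* remainder section */
    }
    return NULL;
}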
Cont…
 The critical-section problem can be solved in a uniprocessor
environment by preventing interrupts from occurring while a shared
variable is being updated.

 In this way, the currently running code would execute without
preemption and no other instructions would run.
 So no unexpected modifications are possible on shared variables.
 This solution is generally too inefficient on multiprocessor systems.

 Many modern computer systems provide special atomic (non-
interruptible) hardware instructions to -
  either test a memory word and set its value (the TestAndSet() instruction)
  or swap the contents of two memory words (the Swap() instruction)
Cont…
Definition of the TestAndSet() instruction:

Mutual Exclusion implementation with TestAndSet():
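
(The slides give the definition and the lock loop as figures. A sketch following
the usual textbook formulation, assumed here rather than reproduced from the
original, is:)

boolean test_and_set(boolean *target) {
    boolean rv = *target;        /* remember the old value              */
    *target = true;              /* set the word to true                */
    return rv;                   /* the whole function is executed      */
}                                /* atomically by the hardware          */

/* Mutual exclusion with TestAndSet(); shared variable lock is initialized to false */
do {
    while (test_and_set(&lock))
        ;   /* do nothing: spin until the lock becomes free */

        /* critical section */

    lock = false;                /* release the lock */

        /* remainder section */
} while (true);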


Cont…
Definition of the Swap() instruction:

Mutual Exclusion implementation with the Swap() instruction:

 Although these algorithms satisfy the mutual-exclusion requirement, they
do not satisfy the bounded-waiting requirement.
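
(These are also figures in the slides; a sketch along the same lines, again an
assumed textbook-style formulation, is:)

void swap(boolean *a, boolean *b) {
    boolean temp = *a;           /* exchange the contents of the two   */
    *a = *b;                     /* words; executed atomically as one  */
    *b = temp;                   /* hardware instruction               */
}

/* Mutual exclusion with Swap(); shared variable lock is initialized to false */
do {
    boolean key = true;
    while (key == true)
        swap(&lock, &key);       /* spin until lock is obtained as false */

        /* critical section */

    lock = false;                /* release the lock */

        /* remainder section */
} while (true);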
Cont…
Bounded-waiting Mutual Exclusion implementation with TestAndSet():
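
(This algorithm is also shown as a figure. A sketch of the usual bounded-waiting
formulation, assumed here, with shared data boolean waiting[n] and boolean lock
all initialized to false, is:)

do {
    waiting[i] = true;
    key = true;
    while (waiting[i] && key)
        key = test_and_set(&lock);    /* spin until granted entry          */
    waiting[i] = false;

        /* critical section */

    /* pass the lock to the next waiting process, if any */
    j = (i + 1) % n;
    while ((j != i) && !waiting[j])
        j = (j + 1) % n;
    if (j == i)
        lock = false;                 /* nobody is waiting: release lock   */
    else
        waiting[j] = false;           /* let process j enter its CS next   */

        /* remainder section */
} while (true);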
Semaphore

 The hardware-based solutions to the critical-section problem are complicated
for application programmers to use.

 To overcome this difficulty, a synchronization tool called a semaphore can
be used.

 A semaphore S is an integer variable that, apart from initialization, is
accessed only through two standard atomic operations:
  wait()
  signal()

 wait(): decrements the semaphore value
 signal(): increments the semaphore value
Semaphore Cont…
 Definition of the wait() operation:

wait(S)
{
    while (S <= 0)
        ;   // busy wait
    S--;
}

 Definition of the signal() operation:

signal(S)
{
    S++;
}

 When one process modifies the semaphore value, no other process can
simultaneously modify the same semaphore value.

 In the case of wait(S), the testing of the integer value of S (S <= 0), as
well as its possible modification (S--), must be executed without
interruption.
Example
 Counting semaphores can be used to control access to a given
resource consisting of a finite number of instances.

 The semaphore is initialized to the number of resources available.

 Each process that wishes to use a resource performs a wait() operation
on the semaphore.

 When a process releases a resource, it performs a signal() operation.

 When the count for the semaphore goes to 0, all resources are being
used.
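
For illustration (not from the slides), a POSIX counting semaphore can guard a
pool of, say, 3 identical resource instances. A minimal sketch, assuming an
unnamed semaphore shared among the threads of one process:

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

#define NUM_INSTANCES 3

sem_t resources;                           /* counting semaphore */

void *worker(void *arg) {
    sem_wait(&resources);                  /* wait(): acquire an instance (blocks when count is 0) */
    printf("thread %ld is using a resource instance\n", (long)arg);
    /* ... use the resource ... */
    sem_post(&resources);                  /* signal(): release the instance */
    return NULL;
}

int main(void) {
    pthread_t t[5];
    sem_init(&resources, 0, NUM_INSTANCES);   /* initialized to the number of instances */
    for (long i = 0; i < 5; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 5; i++)
        pthread_join(t[i], NULL);
    sem_destroy(&resources);
    return 0;
}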
Semaphore Cont…
 The main disadvantage of the semaphore defined above is:

 Busy waiting:
  While a process is in its critical section, any other process that tries
to enter its critical section must loop continuously in the entry code.
  In a multiprogramming system, where a single CPU is shared by
many processes, busy waiting wastes CPU cycles that some other
process might use productively.
  This type of semaphore is called a spinlock.
Deadlock and Starvation
Deadlock:
 Two or more processes are waiting indefinitely for an event that can be
caused only by one of the waiting processes.

Example:
 Let S and Q be two semaphores initialized to 1
P0 P1
wait(S); wait(Q);
wait(Q); wait(S);
… ...
signal(S); signal(Q);
signal(Q); signal(S);

Starvation – indefinite blocking:
 A process may never be removed from the semaphore queue in which
it is suspended.
Priority Inversion:
 When a higher-priority process needs to read or modify kernel data that
are currently being accessed by a lower-priority process, the higher-
priority process has to wait for the lower-priority process to finish with
the resource.
 If, in the meantime, the lower-priority process is preempted in favour of
another process with a priority higher than its own, the original high-priority
process has to wait even longer.
 This is called priority inversion.

Example:
 Let L, M, H be 3 processes with priorities in the order L < M < H.
 If process H requires resource R, which is currently being accessed by L,
then H has to wait for L to release the resource.
 If process M becomes runnable, it preempts L.
 Indirectly, a process with a lower priority (M) has affected the execution
of the high-priority process (H).
Priority Inheritance Protocol (PIP)

 Priority inversion can be solved via the priority-inheritance protocol
(PIP).

 In this protocol, all processes that are accessing resources needed by a
higher-priority process inherit the higher priority until they are finished
with the resources.

 When they are finished, their priorities revert to their original values.
Semaphore Solutions to the classical
synchronization problems
The Bounded-Buffer Problem
 The classical producer-consumer (bounded-buffer) problem can be solved
with semaphores.

 There are n buffer locations, each of which can hold one item.

 The semaphore mutex provides mutual exclusion for access to the buffer
pool.
 The semaphores empty and full count the number of empty and full buffer
locations.

Initialization:
Semaphore mutex initialized to the value 1
Semaphore full initialized to the value 0
Semaphore empty initialized to the value n
Semaphore solution to the Bounded-Buffer Problem
(The producer and consumer processes are shown as figures in the original
slides; a sketch follows below.)
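
A typical formulation of the two processes, following the standard textbook
solution rather than the original figures, is:

/* Producer process */
do {
    /* produce an item in next_produced */
    wait(empty);                 /* wait for a free buffer location   */
    wait(mutex);                 /* enter critical section            */
    /* add next_produced to the buffer */
    signal(mutex);               /* leave critical section            */
    signal(full);                /* one more full location            */
} while (true);

/* Consumer process */
do {
    wait(full);                  /* wait for a full buffer location   */
    wait(mutex);                 /* enter critical section            */
    /* remove an item from the buffer into next_consumed */
    signal(mutex);               /* leave critical section            */
    signal(empty);               /* one more empty location           */
    /* consume the item in next_consumed */
} while (true);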
Readers-Writers Problem
 A database is shared among a number of concurrent processes
 Readers – only read the database; they do not perform any updates
 Writers – can both read and write

 Problem – allow multiple readers to read at the same time
 Only a single writer can access the shared data at any one time

 Shared Data:
 Database
 Semaphore rw_mutex initialized to 1
 Semaphore mutex initialized to 1
 Integer read_count initialized to 0
Code

Writer process:

do {
    wait(rw_mutex);
        ...
        /* writing is performed */
        ...
    signal(rw_mutex);
} while (true);

Reader process:

do {
    wait(mutex);
    read_count++;
    if (read_count == 1)
        wait(rw_mutex);
    signal(mutex);
        ...
        /* reading is performed */
        ...
    wait(mutex);
    read_count--;
    if (read_count == 0)
        signal(rw_mutex);
    signal(mutex);
} while (true);
Dining-Philosophers Problem
 Philosophers spend their lives alternately thinking and eating.

 They don't interact with their neighbors; occasionally a philosopher tries
to pick up the 2 adjacent chopsticks (one at a time) to eat from the bowl.
 Both chopsticks are needed to eat; both are released when done.

 In the case of 5 philosophers:
 Shared data
  Bowl of rice (database)
  Semaphore chopstick[5] initialized to 1
Dining-Philosophers Problem Algorithm
Structure of Philosopher i:

do {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
    // eat
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
    // think
} while (TRUE);

Problem with the solution:
 If all 5 philosophers become hungry simultaneously and each grabs one
chopstick, none can pick up a second chopstick: a deadlock situation.
Cont…

 Deadlock Handling:
  Allow at most 4 philosophers to be sitting simultaneously at the
table.

  Allow a philosopher to pick up the chopsticks only if both are
available (the picking up must be done in a critical section).

  Use an asymmetric solution -- an odd-numbered philosopher picks
up first the left chopstick and then the right chopstick, while an
even-numbered philosopher picks up first the right chopstick and then
the left chopstick (see the sketch below).
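
A minimal sketch of the asymmetric variant (an illustration, not from the
slides), keeping the same semaphore array chopstick[5] but reversing the
pick-up order for even-numbered philosophers:

/* Structure of philosopher i under the asymmetric solution */
do {
    if (i % 2 == 1) {                     /* odd-numbered philosopher  */
        wait(chopstick[i]);               /* left chopstick first      */
        wait(chopstick[(i + 1) % 5]);     /* then the right chopstick  */
    } else {                              /* even-numbered philosopher */
        wait(chopstick[(i + 1) % 5]);     /* right chopstick first     */
        wait(chopstick[i]);               /* then the left chopstick   */
    }

    // eat

    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);

    // think
} while (TRUE);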
Monitors
 A high-level abstraction that provides a convenient and effective
mechanism for process synchronization.

 A monitor type is an abstract data type (ADT) that presents a set of
programmer-defined operations providing mutual exclusion within the
monitor.

Syntax of a monitor:

monitor monitor-name {
    // shared variable declarations

    procedure P1 (…)
    { ….
    }
    …….
    procedure Pn (…)
    {
        ……
    }

    initialization code (…) { … }
}
Schematic view of a Monitor

 The monitor construct ensures that only one process at a time is
active within the monitor.
Additional synchronization mechanisms used in Monitors:

Condition Constructs:
 A programmer can define one or more variables of type condition:
    condition x, y;

 Two operations are allowed on a condition variable:

  x.wait() – a process that invokes this operation is suspended until
another process invokes x.signal().

  x.signal() – resumes exactly one of the processes (if any) that invoked
x.wait(); if no process is suspended on x, the operation has no effect.
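
For comparison (not from the slides), a monitor's condition variables map
closely onto POSIX condition variables used under the monitor's mutex. A
minimal sketch, where monitor_lock, x, and the guarded predicate ready are
hypothetical names introduced only for illustration:

#include <pthread.h>
#include <stdbool.h>

pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;  /* the monitor's mutual exclusion */
pthread_cond_t  x            = PTHREAD_COND_INITIALIZER;   /* condition variable x           */
bool ready = false;                                        /* hypothetical guarded condition */

void waiter(void) {
    pthread_mutex_lock(&monitor_lock);
    while (!ready)                        /* x.wait(): suspend until signalled */
        pthread_cond_wait(&x, &monitor_lock);
    /* ... proceed inside the monitor ... */
    pthread_mutex_unlock(&monitor_lock);
}

void signaller(void) {
    pthread_mutex_lock(&monitor_lock);
    ready = true;
    pthread_cond_signal(&x);              /* x.signal(): resume one waiting process, if any */
    pthread_mutex_unlock(&monitor_lock);
}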
Monitor Solution to Dining Philosophers

 Each philosopher i invokes the operations pickup() and putdown() in
the following sequence:

    DiningPhilosophers.pickup(i);
        …..
        eat
        ….
    DiningPhilosophers.putdown(i);
Cont…
monitor DiningPhilosophers
{
    enum { THINKING, HUNGRY, EATING } state[5];
    condition self[5];

    void pickup(int i) {
        state[i] = HUNGRY;
        test(i);
        if (state[i] != EATING)
            self[i].wait();
    }

    void putdown(int i) {
        state[i] = THINKING;
        // test left and right neighbors
        test((i + 4) % 5);
        test((i + 1) % 5);
    }
Cont…

    void test(int i) {
        if ((state[(i + 4) % 5] != EATING) && (state[i] == HUNGRY) &&
            (state[(i + 1) % 5] != EATING))
        {
            state[i] = EATING;
            self[i].signal();
        }
    }

    initialization_code()
    {
        for (int i = 0; i < 5; i++)
            state[i] = THINKING;
    }
}
