MCH 6 Synchronization

The document discusses various algorithms for process synchronization and mutual exclusion when shared resources are involved, including Peterson's solution, semaphores, monitors, and how synchronization issues like race conditions can be resolved. It also provides examples of the producer-consumer problem and how counting methods or comparing in/out variables can ensure orderly access to critical sections shared by concurrent processes.

Synchronization and Mutual Exclusion

Neither the sun may overtake the moon, nor can the night usurp the time
of day, and all of them float through space [in accordance with Our laws].
Al Quraan al Hakeem: 36 Ya-Sin: 40
Process Synchronization and Resource
Sharing
 Background
 The Critical-Section Problem
 Peterson’s Solution
 Synchronization Hardware
 Semaphores
 Classic Problems of Synchronization
 Monitors
 Synchronization Examples
 Atomic Transactions
Shared Critical Resource (count)
Distribution of Alms to 1000 persons using a shared "Count"

Distribution Algorithm (run concurrently by several distributors):

If ( count < 1000 )
{  // distribute Rs. 5000/-
    count = count + 1   // update and hang the new count for all to see
}

Money distributed exceeded Rs. 5,000,000/- (How?)


Producer-Consumer (using shared in, out and a Bounded Buffer)
(A circular buffer is used for buffering and synchronization)

Producer:
while (true)
{
    /* produce an item, wait to buffer it */
    in = (in + 1) % BUFFER_SIZE;
    while (in == out) ;          // wait -- no free buffers
    buffer [in] = item;          // store item
}

Consumer:
while (true)
{
    while (in == out) ;          // busy waiting -- no item
    out = (out + 1) % BUFFER_SIZE;   // circularize
    item = buffer [out];         // take item from the buffer
    // use item
}

Problems: These codes of the co-operating processes have the following problems:

1. They use busy waiting in the while loops
2. The usable locations are BUFFER_SIZE - 1 (one remains empty, used for synchronization)
3. The processes can get stuck if the producer has made in = out
Modified codes to overcome the 3rd problem
Codes for the Producer and Consumer processes
in=0;
while (true)
{ // produce an item in nextProduced
while ( ((in + 1) % BUFFER_SIZE) == out); // No Buffer is free
buffer [in] = nextProduced;
in = (in + 1) % BUFFER_SIZE;
}

out=0;
while (true) {
while (in == out); // None of the items is available
nextConsumed = buffer[out]; // consume the item in nextConsumed
out = (out + 1) % BUFFER_SIZE;
}

These algorithms start storing and consuming items from 0th location and are
synchronizing their execution speed by comparing in and out shared variables
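The in/out scheme above can be exercised directly. Below is a minimal runnable sketch, a Python translation of the pseudocode (an assumption of this text, not part of the original slides); `in_idx` and `out_idx` stand in for `in` and `out`, since `in` is a Python keyword. A producer thread and a consumer thread coordinate only by comparing the two shared indices.

```python
import threading

BUFFER_SIZE = 4
buffer = [None] * BUFFER_SIZE
in_idx = 0    # next free slot, written by the producer only
out_idx = 0   # next item to take, written by the consumer only

ITEMS = list(range(10))
consumed = []

def producer():
    global in_idx
    for item in ITEMS:
        while ((in_idx + 1) % BUFFER_SIZE) == out_idx:
            pass                          # busy wait: no buffer is free
        buffer[in_idx] = item
        in_idx = (in_idx + 1) % BUFFER_SIZE

def consumer():
    global out_idx
    for _ in ITEMS:
        while in_idx == out_idx:
            pass                          # busy wait: no item is available
        consumed.append(buffer[out_idx])
        out_idx = (out_idx + 1) % BUFFER_SIZE

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
```

Note that each index is written by exactly one thread, so no update of a shared variable is ever interleaved; this is why the scheme avoids the race condition discussed next, at the cost of one unusable slot.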
Concurrent Processes (Producer Consumer)
 Concurrent Co-operating processes can communicate
through shared data (a Critical Resource)
 Concurrent access to shared data may result in data
inconsistency
 Maintaining data consistency requires mechanisms to
ensure the orderly execution of cooperating processes
 Suppose that we wanted to provide a solution to the
consumer-producer problem that fills all the buffers. We
can do so by having an integer count that keeps track of
the number of full buffers. Initially, in, out and count are
set to 0. count is incremented by the producer after it
produces an item and fills a buffer, and is decremented by
the consumer after it consumes an item and frees a buffer.
Producer (using shared count and bounded circular buffer of BUFFER_SIZE)
while (true) {
/* produce an item in nextProduced */
while (count == BUFFER_SIZE)
; // wait till availability of buffer
buffer [in] = nextProduced;
in = (in + 1) % BUFFER_SIZE;
count++;
}
Consumer (using shared count and bounded circular buffer )

while (true) {
while (count == 0)
; // wait till the availability of item
nextConsumed = buffer[out];
out = (out + 1) % BUFFER_SIZE;
count--;
/* consume the item in nextConsumed */
}
Race Condition: The result depends on the order of execution of
concurrent processes; e.g. "Producer" and "Consumer" performing count++ and count--

 count++ (being used by Producer) could be implemented as
    register1 = count
    register1 = register1 + 1
    count = register1
 count-- (being used by Consumer) could be implemented as
    register2 = count
    register2 = register2 - 1
    count = register2
 Consider following sequence of interleaved execution, due to
context switching, with “count = 5” initially:
S0: producer execute register1 = count {register1 = 5}
S1: producer execute register1 = register1 + 1 {register1 = 6}
S2: consumer execute register2 = count {register2 = 5}
S3: consumer execute register2 = register2 - 1 {register2 = 4}
S4: producer execute count = register1 {count = 6 }
S5: consumer execute count = register2 {count = 4}
 The result is inconsistent, either 6 or 4, depending upon whichever
executes last, whereas it should actually be 5
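The interleaving S0-S5 can be replayed deterministically. The sketch below (an illustration added to these notes, not from the original slides) models the two CPU registers as ordinary variables and applies the six steps in exactly the order shown, so the lost update is reproducible:

```python
# Replay of the interleaving S0..S5 with count = 5 initially.
count = 5
register1 = count              # S0: producer   register1 = count     -> 5
register1 = register1 + 1      # S1: producer   register1 = 6
register2 = count              # S2: consumer   register2 = count     -> 5
register2 = register2 - 1      # S3: consumer   register2 = 4
count = register1              # S4: producer   count = 6
count = register2              # S5: consumer   count = 4  (should be 5!)
```

Swapping S4 and S5 would leave count = 6 instead; either way the producer's increment or the consumer's decrement is lost.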
Critical-Resource:
A resource needed to be shared but cannot be used
simultaneously
 Problem: Two parallel instructions
Count ++ & count--
being used by different processes are sharing
common critical resource count without any
protocol, which causes inconsistency.
......................................
The critical resources or sections are needed to
be shared. The Sharing needs certain protocol
to be observed that ensures following conditions
ME1: Mutual Exclusion
ME2: Progress / (no blockade)
ME3: Bounded Waiting / (freedom from starvation)
Producer-Consumer Algo's Quiz

L (uses shared count):

In=0; Out=0;
while (true) { // producer process
    /* produce an item in nextProduced */
    while (count == BUFFER_SIZE) ; // wait
    buffer [in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    count++;
}
------------------------------
// consumer process
while (true) {
    while (count == 0) ; // wait till the availability
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    count--;
}

R (compares in and out):

In=0; Out=0;
while (true) { // producer process
    /* produce an item */
    in = (in + 1) % BUFFER_SIZE;
    while (in == out) ; // wait
    buffer [in] = nextProduced;
}
--------------------------
// consumer process
while (true) {
    while (in == out) ; // wait on empty
    out = (out + 1) % BUFFER_SIZE;
    nextConsumed = buffer[out];
}

In the L or R algorithm sets, comment and indicate positive and negative points
Quiz Sol
 Right side:
 Positive points
 Cooperating processes sharing same reusable circular bounded buffer for
storage and retrieval of items
 Synchronization of production and consumption comparing in and out
variables
 No modification of shared variable, no threat of inconsistency or race
condition
 Negative points
 Busy waiting at while (in == out); and blockade if the producer is fast and
makes in == out
 Can use only BUFFER_SIZE-1 locations for storage of items, since in == out
must mean empty
 Left side:
 Positive points
 Cooperating processes sharing same reusable circular bounded buffer for
storage and retrieval of items
 Synchronization of production and consumption by comparing count
variable
 Can use entire BUFFER_SIZE locations for storage of items
 Negative points
 Busy waiting at count == BUFFER_SIZE or count == 0
 Modification of shared variable count, without any synchronization protocol
 A real threat of inconsistency or race condition, when context switching
takes place in the middle of the count++ or count-- operations
Conditions for the Critical-Section (CS) Problem
1. Mutual Exclusion - If process Pi is executing in the critical
section, then no other processes can be executing in the
critical section
2. Progress / (No blockage) - If no process is executing in
critical section and there exist some processes that wish
to enter the critical section, then the selection of the
processes that will enter the critical section next cannot
be postponed indefinitely
3. Bounded Waiting - A bound must exist on the number of
times that other processes are allowed to enter the
critical section after a process has made a request to
enter its critical section and before that request is granted
(no process shall use CS continuously)
Assume that each process executes at a nonzero speed
No assumption concerning the relative speed of the N processes
or order of execution shall be taken into account for designing
any such algorithm.
Example: (shared critical resource)
Two train tracks, sharing a single Critical Resource: a Bridge,
using a filled or empty bowl indicating a busy / free bridge

[Figure: trains P0 (Pakistan side) and P1 (Afghanistan side) share the bridge (CS);
a bowl signals its state; Mutex-begin is performed before entering, Mutex-end after leaving]

Each train needs exclusive use of the resource. Devise a better mechanism to
avoid train crashes (usage by more than one user, optimum
utilization, meeting the Mutual Exclusion, Progress (no blockade), and
Bounded Waiting (starvation free) conditions)
Mechanism: Use the critical section between two lock operations, Mutex-begin and Mutex-end
Primitive Algorithms to use CS
Use the critical section between the two brackets, Mutex-begin() and Mutex-end()
Mutex-begin();
// Use Critical Section (CS)
Mutex-end ();

Various implementations of the Mutex-begin() and Mutex-end() brackets:

1) Busy flag:
Mutex-begin()
{
    while ( busy ) ;      // wait
    busy = true;
}
Mutex-end()
{
    busy = false;
}
Possibility of collision

2) Turn flag:
Mutex-begin()
{
    while ( turn == other ) ;   // wait; other sets turn = mine
}
Mutex-end()
{
    turn = other;
}
Needs strict alternation, 2 processes

3) Need flags:
Mutex-begin()
{
    need [mine] = true;
    while ( need [other] ) ;
}
Mutex-end()
{
    need [mine] = false;
}
What if both set their need as true? Progress not guaranteed

Mine = 0, other = 1 for one train (P0 process), and mine = 1 and other = 0 for the second (P1)
Discussion and deductions
The unpredictable context switching at any time, as indicated
by dotted lines in the last slide, and execution of other
process in between, causes problem due to break between
testing and setting (collision) and break between setting and
testing (blockade) of a single variable (Busy or Need)
 Busy flag, same meanings for both process i.e. wait when
busy is set, otherwise use CS. This caused collision in case
of context switching between testing busy and setting busy.
 Turn flag, exclusive meanings ( if 0 process 0 and if 1 then
process 1 will use CS) for both the processes, solved the
problem of collision and ensured mutual exclusion. But
caused problem of strict alternation.
 Two Need flags, solved the problem of strict alternation,
each user sets own Need and clears it after using CS. But
this caused the blockade problem.
 Lesson: Let's use one Turn flag to ensure exclusion and two
Need flags to handle the strict alternation, simultaneously.
Secondly, Testing/Setting (or vice versa) the two flags (Turn
and Need), instead of a single flag, solves the problem,
which ends up as the Peterson's Solution.
Peterson Solution
Two train tracks, sharing a single Critical Section: the Bridge,
using three shared bowls: one Turn and two Need

[Figure: as before, trains P0 and P1 share the bridge (CS), now with bowls
Need[0], Need[1] and Turn between Mutex-begin and Mutex-end]

The break between Testing and Setting of a variable no longer matters, since in this case two
types of flags are tested instead of one: while ( need [other] && turn == other);
The solution works well for two processes.
It is extendable to more processes as the 'Bakery Algorithm', on a turn basis, by giving "turn" in
order of the processes' requests; however the algorithm becomes clumsy and needs extra
information, i.e. the maximum number of processes involved in sharing the CS.
Peterson Solution using three shared bowls (one Turn and two Need
variables)
It is a two-process solution, where for process P0: my = 0, other = 1, and for process P1: my = 1, other = 0
The two brackets around CS can be defined as given below:

Mutex-begin();
{
need [my] = TRUE; // Indicate ‘my’ need
turn = other; // Let ‘other’ complete
while ( need [other] && turn == other); // wait until need and turn of other
}
and
Mutex-end();
{
need [my] = FALSE; // withdraw ‘my’ need
}
These brackets are used by both of the processes, with their respective values of my and other, as follows.

Mutex-begin();
// Use CS or CR
Mutex-end();
"Do indicate your need (need [my] = TRUE), but let the other take the turn (turn = other)"
Peterson’s Algorithm
ensuring “Progress” and avoiding “Strict alternation”
using three shared bowls (one Turn and two Need variables)
while (true) {
need [my] = TRUE; // Indicate ‘my’ need
turn = other; // Let ‘other’ complete
while ( need [other] && turn == other); // wait until other’s turn or need

// use CRITICAL SECTION

need [my] = FALSE; // retract my need

}
For :-----------------------------------
Process P0:- my = 0, other =1
Process P1:- my = 1, other =0
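The loop above can be run as real code. The sketch below is an assumed Python translation (added to these notes) in which two threads each increment a shared counter inside Peterson's brackets; if mutual exclusion held, no increment is lost. It relies on CPython's interpreter lock making loads and stores effectively sequentially consistent, which is what the slide's atomic LOAD/STORE assumption requires.

```python
import threading

need = [False, False]
turn = 0
counter = 0          # the shared critical resource
N = 5000             # increments per process

def worker(my):
    global turn, counter
    other = 1 - my
    for _ in range(N):
        need[my] = True                  # indicate 'my' need
        turn = other                     # let 'other' complete
        while need[other] and turn == other:
            pass                         # wait until other's turn or need
        counter += 1                     # CRITICAL SECTION
        need[my] = False                 # retract my need

t0 = threading.Thread(target=worker, args=(0,))
t1 = threading.Thread(target=worker, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
```

On real multiprocessors with weaker memory models, Peterson's algorithm additionally needs memory barriers, which is one motivation for the hardware instructions discussed later.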
Peterson’s Solution discussion
 Two process solution
 Assumes that the LOAD and STORE instructions
are atomic; that is, these cannot be interrupted.
 The two processes share two types of variables:
 int turn;
 Boolean need[2]
 The need array is used to indicate if a process is
ready to enter the critical section. need[i] = true
implies that process Pi needs CS!
 The variable turn indicates whose turn it is to enter
the Critical Section (CS), in case both are in need.
 Bakery Algorithm: Extension for more processes, uses
turn to resolve contention (conflict), needs max no. of
processes competing for resource, becomes complex.
Peterson solution (cont)
 Possible combinations in the condition for using the critical section (CS)
while ( need [other] && turn == other); // wait
If only one process needs, it can use the CS; if both need, then turn decides

turn  need[1]  need[0]  Remarks
 0      0        0      No one needs CS
 0      0        1      P0 needs and can use CS (P1 doesn't need)
 0      1        0      P1 needs and can use CS (P0 doesn't need)
 0      1        1      P0 and P1 need, but P0 can use CS
 1      0        0      No one needs CS
 1      0        1      P0 needs and can use CS (P1 doesn't need)
 1      1        0      P1 needs and can use CS (P0 doesn't need)
 1      1        1      P0 and P1 need, but P1 can use CS
The Problem: (Context switching between Testing and
Setting (or vice versa) of shared flags or variables by different
processes)
The main problem in handling the critical section (CS) is
that the span between test and set (or vice versa, as indicated
by dotted lines) is non-atomic, due to the possible
occurrence of context switching of processes, during
which the other process competing for the CS takes over.
 Since context switching can occur at any time, unexpectedly and
unscheduled, we cannot handle the problem using simple
primitive algorithms.
 One solution is to disable interrupts to avoid context switching
before TEST and enable after SET (not good)
 TEST and SET instructions are atomic individually but
combination of both (as we need) is non atomic.
 Secondly, let us combine TEST and SET as a single atomic
instruction using hardware.
Need for Synchronization Hardware
 To achieve atomicity, interrupts should be disabled (to avoid context
switching), during (test and set) operation on systems, without
hardware support.
 Uni-processor – could disable interrupts to ensure atomicity while (testing
and setting) flags.
 Currently running code would execute without preemption
 Generally too inefficient on multiprocessor systems
 Operating systems, using this are not broadly scalable
 Interrupt: A dangerous resource; disabling/enabling of interrupts, in the hands
of a novice user, may be catastrophic
 Alternatively, modern machines provide special atomic hardware
instructions for implementing critical section code.
Atomic = non-interruptible / non-divisible (Single instruction)
 Either test memory word and set value (test-n-set)
 Or swap as a single instruction, to exchange contents of two memory
words, to accommodate test-n-clear instruction.
TestAndSet Instruction
(Test any Boolean variable and also set it atomically)
 Definition: The original value is returned, while the
variable is also made true (set), as a single instruction

boolean TestAndSet (boolean *target)
{
    boolean rv = *target;
    *target = TRUE;
    return rv;
}
Use of the TestAndSet atomic instruction
busy is initialized as false. The non-atomic mutex-begin and the atomic one, using TestAndSet:

Non-atomic:
Mutex-begin()
{
    while ( busy ) ;   // wait
    busy = true;
}

Atomic:
Mutex-begin()
{
    while (TestAndSet (&busy)) ;   // wait
}

Critical Section

Mutex-end()   // the same in both versions
{
    busy = false;
}
Atomic Swap Instruction
(Test and set or re-set the target flag as per need)
Definition: The flag a is made true (for b = 1; to set) or
false (b = 0; to reset) depending upon the second value
(b), atomically, while the original value of a is
returned in b.

void Swap (boolean *a, boolean *b)
{
    boolean temp = *a;
    *a = *b;
    *b = temp;
}

Ex. Use the atomic swap instruction, by setting busy, to lock a resource
before using the critical resource, and reset busy after use.
Use of the swap(a, b) atomic instruction
busy is initialized as false, but to enter the CS it is set true. The non-atomic mutex-begin
and the atomic one, using swap:

Non-atomic:
Mutex-begin()
{
    while ( busy ) ;   // wait
    busy = true;
}

Atomic:
Mutex-begin()
{
    value = true;                        // to set busy as true
    while (value) Swap (&busy, &value) ; // wait
}

Use Critical Section

Mutex-end()   // the same in both versions
{
    busy = false;
}
Semaphore: Implemented at OS level
1) May be initialized to a nonnegative integer value
2) The semWait operation decrements the value
3) The semSignal operation increments the value
 Synchronization tool
 proposed by Dijkstra (Dutch scientist), to handle concurrency issues
 implemented at OS level
 to avoid handing over interrupts to users
 no need of hardware supported atomic instructions like test-and-set and swap
 that may not require busy waiting {while ( busy) ; }
 Semaphore S – integer variable (binary Semaphore or counting Semaphore)
 Two standard operations, to modify S: wait (s) and signal (s), Originally called P() and
V(), for Dutch words Proberen (to test) and Verhogen (to increment) respectively
 Set S=1 to indicate availability of resource or semaphore
 Less complicated

 Semaphore S can only be accessed via two indivisible (atomic) operations

 wait (S) {              // wait if locked (S <= 0)
     while (S <= 0) ;    // spin over lock
     S--;                // lock (S--)
 }
 signal (S) {            // unlock (S++)
     S++;
 }
Semaphore: General Synchronization Tool
 Binary semaphore – integer value can be only 0 and 1
 0: resource is locked; 1: resource is available
 Associate a semaphore with any critical resource
 Also known as mutex locks
 Provides mutual exclusion
 Semaphore S =1; // initialized to 1, indicating availability of resource
wait (S);
// Critical Section
signal (S);
 Counting semaphore – integer value can range over an
unrestricted domain (MAXINT)
 Can be used as “Count” for available resources
 Can implement a counting semaphore S as a binary
semaphore
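The mutex-lock usage above maps directly onto Python's `threading.Semaphore`, whose `acquire()`/`release()` play the role of wait()/signal() (this runnable sketch is an addition to these notes, not from the slides). Four threads increment a shared variable inside the semaphore brackets; with S initialized to 1, no update is lost:

```python
import threading

S = threading.Semaphore(1)   # initialized to 1: resource available
shared = 0

def worker():
    global shared
    for _ in range(10000):
        S.acquire()          # wait (S)
        shared += 1          # Critical Section
        S.release()          # signal (S)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Removing the acquire/release pair reintroduces the count++/count-- style race discussed earlier.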
Use of Semaphores (atomic wait() and signal() operations)
In the non-atomic mutex-begin, busy is initialized as false and then set. The semaphore S,
associated with the CS, is initialized as 1 (available) and wait is performed on it in atomic
fashion:

Non-atomic:
Mutex-begin()
{
    while ( busy ) ;   // wait
    busy = true;
}
Use Critical Section
Mutex-end()
{
    busy = false;
}

Using a semaphore:
Mutex-begin()
{
    wait (S) ;   // wait
}
Use Critical Section
Mutex-end()
{
    signal (S);
}
Semaphore Implementation
 Must guarantee atomicity that no two processes
can execute wait () and signal () on the same
semaphore at the same time
 Thus, implementation itself becomes the critical
section problem where the wait and signal code
are placed in the critical section.
 Could now have busy waiting in critical section
implementation
 But implementation code is short
 Little busy waiting, if critical section is rarely occupied
 Note that applications may spend lots of time in critical
sections and therefore this (busy waiting) is not a good
solution.
Spinlock and Sleeping locks (linux
semaphores)
The semaphore using Busy Waiting (BW) is also called
spinlock (process spins around, waiting for a lock). The
continued waiting is non productive. while S <= 0 ;
 Spinlock affordable when locks are expected to be held for short
times (brief CS)
 Spinlock may not be suitable for uni-processor systems but may be
used on multiprocessor systems. OR there shall be some other
agent that can break BW.
 No context switching is needed for process spinning on lock; in
contrast to sleeping locks (Linux semaphores)
 spinlocks are usually very convenient, since many kernel resources
are locked for a fraction of a millisecond only; therefore, it would be
far more time-consuming to release the CPU and reacquire it later.
 Spinlocks may cause deadlock in case of priority inversion (when
a low priority job holds a resource, needed by a high priority job) , if
proper care is not taken.
 The semaphore without BW are sleeping locks (Linux
semaphores. waiting processes on semaphore S are
blocked and activated when S is signaled.)
Semaphore Implementation with non-Busy-waiting
 With each semaphore there is an associated
waiting queue. Each semaphore has two data items:
 value (of type integer)
 pointer to the next record in the list (the waiting queue)

 Two operations, possible on processes, provided by


operating system to block or unblock any process:
 block – place the process invoking the wait operation for the
semaphore, in the waiting queue of the semaphore.
 wakeup – remove one of processes from the waiting queue
for that semaphore and place it in the ready queue.
Sleeping semaphore: Implementation with non-Busy-waiting

 Implementation of wait() without busy waiting:

wait (S){
    value--;
    if (value < 0) {
        // add this process to the waiting queue for semaphore S
        block (P); }
}
 Implementation of signal():
signal (S){
    value++;
    if (value <= 0) {
        // wakeup a process from the waiting queue for semaphore S
        wakeup (P); }
}
P is the process-id (using P=getpid() sys call) of the process, requesting
for non available semaphore S, which will be queued into Blocked queue
for semaphore S and will be woken up by moving P to the ready state, in
case S is signaled (made available) by some other process.
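The block()/wakeup() scheme can be sketched on top of a condition variable (an illustrative construction added to these notes; `Condition.wait()` plays the role of block(P) and `notify()` of wakeup(P), and a `wakeups` token counter guards against waking before a signal has actually arrived). Note that value goes negative while processes are blocked, counting the waiters:

```python
import threading

class SleepingSemaphore:
    def __init__(self, value=1):
        self.value = value
        self.wakeups = 0              # pending wakeup tokens
        self.cond = threading.Condition()

    def wait(self):
        with self.cond:
            self.value -= 1
            if self.value < 0:        # no resource: join the waiting queue
                while self.wakeups == 0:
                    self.cond.wait()  # block (P)
                self.wakeups -= 1

    def signal(self):
        with self.cond:
            self.value += 1
            if self.value <= 0:       # someone is waiting
                self.wakeups += 1
                self.cond.notify()    # wakeup (P): move one waiter to ready

# Use it as a mutex: three threads increment a shared total.
sem = SleepingSemaphore(1)
total = 0

def worker():
    global total
    for _ in range(5000):
        sem.wait()
        total += 1                    # Critical Section
        sem.signal()

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Unlike the spinlock version, a blocked waiter here consumes no CPU until it is signaled.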
Deadlock and Starvation
 Deadlock – two or more processes are waiting indefinitely for an event that
can be caused only by one of the waiting processes
 Let S and Q be two semaphores (resources) initialized to 1 (Available)
P0 P1
wait (S); wait (Q);
wait (Q); wait (S);
. .
. .
. .
signal (S); signal (Q);
signal (Q); signal (S);

 Starvation – Indefinite blocking (denial of resources for a very long time).


 Deadlock – The process may never be removed from the queue of the
semaphore in which it is suspended. i.e. Process P0 will remain waiting
for resource Q while P1, waiting for Resource S, to be freed by each other,
which is not possible.
Classical Problems of Synchronization

 Handling of the following classical


problems, using shared critical
resources by attaching certain
semaphores with each resource

 Bounded-Buffer Problem
 Readers and Writers Problem
 Dining-Philosophers Problem
Bounded-Buffer Problem (using
semaphores)
 N buffers, each can hold one item produced by
producer. Consumer can consume, if items are
available in buffer, indicated by semaphore
available
 Semaphore mutex initialized to the value 1, to
guard usage of buffer exclusively.
 Semaphore available initialized to the value 0, it
indicates that some item is available in buffer, to be
consumed.
 Counting semaphore empty initialized to the value
N, to cause producer to wait , if all N buffers are full
Bounded Buffer Problem (Cont.)
 available initialized to 0, mutex initialized to 1 and empty
(counting semaphore), initialized to N = buffer size
 The structure of the Producer process

while (true) {
// produce an item

wait (empty);// wait if no buffer empty


wait (mutex);

// add the item to the buffer

signal (mutex);
signal (available); // signal availability
}
Bounded Buffer Problem (Cont.)
 available initialized to 0, mutex initialized to 1 and empty initialized
to N (counting semaphore)
 The structure of the Consumer process

while (true) {
wait (available); // any item available ?
wait (mutex);

// remove an item from buffer


signal (mutex);
signal (empty); // indicate buffer empty
// consume the removed item
}
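The producer and consumer structures above can be assembled into a runnable sketch (an assumed Python translation added to these notes, with `available` counting full slots, `empty` counting free slots, and `mutex` guarding the buffer indices):

```python
import threading

N = 5
buf = [None] * N
in_idx = 0
out_idx = 0
empty = threading.Semaphore(N)       # counting: N free slots initially
available = threading.Semaphore(0)   # counting: no items initially
mutex = threading.Semaphore(1)       # guards buffer access exclusively
ITEMS = list(range(50))
consumed = []

def producer():
    global in_idx
    for item in ITEMS:
        empty.acquire()              # wait (empty): wait if no buffer free
        mutex.acquire()              # wait (mutex)
        buf[in_idx] = item           # add the item to the buffer
        in_idx = (in_idx + 1) % N
        mutex.release()              # signal (mutex)
        available.release()          # signal (available)

def consumer():
    global out_idx
    for _ in ITEMS:
        available.acquire()          # wait (available): any item?
        mutex.acquire()
        consumed.append(buf[out_idx])   # remove an item from the buffer
        out_idx = (out_idx + 1) % N
        mutex.release()
        empty.release()              # signal (empty)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
```

Unlike the in/out comparison scheme, all N slots are usable, and neither side busy-waits.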
Readers-Writers Problem
 A data set is shared among a number of
concurrent processes:
 Readers – only read the data set; they do not perform
any updates
 Writers – can do both read and write.
 Problem – allow multiple readers to read at the
same time. – Only one single writer can access
the shared data for update at the same time.
 Shared semaphores and data
 Semaphore mutex initialized to 1.
 Semaphore write initialized to 1.
 Integer readercount initialized to 0.
Readers-Writers Problem (Cont.)
 The structure of a writer process

while (true) {
wait (write) ;

// writing is performed

signal (write) ;
}
Readers-Writers Problem (Cont.)
 The structure of a reader process
while (true) {
wait (mutex) ; // to safeguard readercount++
readercount ++ ;
if (readercount == 1) wait (write) ;
signal (mutex)
// reading is performed
wait (mutex) ; // to safeguard readercount--
readercount - - ;
if (readercount == 0) signal (write) ;
signal (mutex) ;
}
Problem:- Writer may Starve if readers keep on coming and leaving the reader section
Solution :- No new reader should be allowed, if some writer has requested.
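The reader and writer structures above can be combined into a runnable sketch (an assumed Python translation added to these notes; one writer and five readers run concurrently, and the first/last reader performs the wait(write)/signal(write) on behalf of the whole group):

```python
import threading

mutex = threading.Semaphore(1)   # guards readercount
write = threading.Semaphore(1)   # exclusive access for writers
readercount = 0
data = 0                         # the shared data set
reads = []

def writer(value):
    global data
    write.acquire()              # wait (write)
    data = value                 # writing is performed
    write.release()              # signal (write)

def reader():
    global readercount
    mutex.acquire()              # safeguard readercount++
    readercount += 1
    if readercount == 1:
        write.acquire()          # first reader locks out writers
    mutex.release()
    reads.append(data)           # reading is performed (concurrently)
    mutex.acquire()              # safeguard readercount--
    readercount -= 1
    if readercount == 0:
        write.release()          # last reader lets writers in again
    mutex.release()

threads = [threading.Thread(target=writer, args=(42,))]
threads += [threading.Thread(target=reader) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Each reader observes either the old value (0) or the new one (42), never a mixture, since the writer holds write exclusively while updating.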
Dining Philosophers Problem
•No two philosophers
can use the same
fork at the same time
(mutual exclusion)

•No philosopher
must starve to death
(avoid deadlock and
starvation)
Dining-Philosophers Problem

 Shared data
 Multiple resources, multiple requesters
 Bowl of rice (non critical) and 5 chopsticks (CS) for five
philosophers. Eating is possible only if two sticks are available to
one philosopher
 Two philosophers (non neighbors) can simultaneously eat
 Semaphore array chopstick [5] initialized to 1 (available)
Dining-Philosophers Problem (Cont.)
 The structure of Philosopher i:
While (true) {
wait ( chopstick[i] ); // wait for ith resource
wait ( chopStick[ (i + 1) % 5] ); // circularize

// eat

signal ( chopstick[i] );
signal (chopstick[ (i + 1) % 5] );

// think
}
Semaphores - comments
 Saves “Interrupts” to be handled by the user
 Intuitively easy to use.
 wait() and signal() are to be implemented as atomic
operations by OS.
 Programming difficulties:
 signal() and wait() may get exchanged

inadvertently by the programmer. This may result


in deadlock or violation of mutual exclusion.
 Any signal() and wait() may be left out / omitted.

 Scattered Related wait() and signal():


 Related wait() and signal() may be scattered all

over the code among the processes. May become


difficult to match corresponding pairs.
Problems with Semaphores (Ordering)
Proper order of wait() and signal() functions yield proper result
for mutex initialized as 1
 Correct use of semaphore operation
 wait (mutex) … signal (mutex)  Exclusive CS

 Incorrect use of semaphore operations:


 signal (mutex) …. wait (mutex)  Crash
 wait (mutex) … wait (mutex)  Deadlock

 Problems:
 Difficult to match wait-signal pairs in large programs.
 Unexpected behavior on omitting wait (mutex) or signal
(mutex) or both
Monitors
 An other synchronization tool
 Monitor is a predecessor of the “class” concept.
 Initially it was implemented as a programming
language construct and more recently as library. The
latter made the monitor facility available for general
use with any programming language.
 Monitor contains multiple procedures, initialization
sequences, and local data. Local data is accessible
only through monitor’s own procedures.
 Only one process can be executing in a monitor at a
time. Other processes that need the monitor must
wait.
Monitors (comments)
 No major user’s role in observing proper sequences of wait/signal.
 A high-level abstraction that provides a convenient and effective
mechanism for process synchronization
 Only one process may be active within the monitor at a time
 Local variables are to be accessed by local procedures only, invisible outside the
monitor:
monitor monitor-name
{
    // shared variable declarations
    procedure P1 (…) { …. }
    …
    procedure Pn (…) { …… }
    Initialization code ( ….) { … }
}
Schematic view of a Monitor

Only one process can be executing in a monitor at a time. Other processes that
need the monitor must wait.
Condition Variables
 condition x, y;

 Two operations on a condition variable:


 x.wait () – a process that invokes the operation is
suspended.
 x.signal () – resumes one of the processes (if any) that
invoked x.wait ()
Condition Variables
 To allow a process to wait within the monitor for certain
condition. condition variables must be declared, such as
condition x, y;
 Two operations permissible on a condition
variables: Condition variables can only be used with the
wait and signal.
 The operation x.wait()

means that the process invoking this operation is


suspended until another process invokes x.signal();
 The x.signal() operation resumes exactly one (if any)

suspended process that invoked x.wait (). If no


process is waiting/suspended, then the signal
operation has no effect.
Monitor with Condition Variables
Solution to Dining Philosophers using monitor
monitor DP
{
    enum { THINKING, HUNGRY, EATING } state [5];
    condition self [5];   // for waiting or go-ahead for each philosopher

    void pickup (int i) {
        state[i] = HUNGRY;
        test(i);   // if none of the neighbors is eating, set state[i] = EATING
        if (state[i] != EATING) self[i].wait();   // else wait
    }   // eating possible

    void putdown (int i) {   // go thinking and signal waiting neighbors
        state[i] = THINKING;
        // test left and right neighbors (why?)
        test((i + 4) % 5);   // if the (i+4) neighbor is waiting, signals self[i+4]
        test((i + 1) % 5);   // if the (i+1) neighbor is waiting, signals self[i+1]
    }
Solution to Dining Philosophers (cont)

void test (int i) {


if ( (state[(i + 4) % 5] != EATING) &&
(state[i] == HUNGRY) &&
(state[(i + 1) % 5] != EATING) ) {
state[i] = EATING ;
self[i].signal () ;
}
}

 initialization_code() {   // set philosophers THINKING and ready to proceed
     for (int i = 0; i < 5; i++)
         state[i] = THINKING;
 }
Solution to Dining Philosophers (cont)

 Each philosopher i invokes the operations
pickup() and putdown() in the following
sequence:

dp.pickup (i)

// EAT; use shared critical resources

dp.putdown (i)
Monitor Implementation Using Semaphores
 Variables
semaphore mutex; // (initially = 1)
semaphore next; // (initially = 0)
int next-count = 0;
 Each procedure F will be replaced by

wait(mutex);

body of F;


if (next-count > 0)
signal(next)
else
signal(mutex);
 Mutual exclusion within a monitor is ensured.
Monitor Implementation
 For each condition variable x, we have:

semaphore x-sem; // (initially = 0)


int x-count = 0;
 The operation x.wait can be implemented as:

x-count++;
if (next-count > 0)
signal(next);
else
signal(mutex);
wait(x-sem);
x-count--;
Monitor Implementation
 The operation x.signal can be implemented as:

if (x-count > 0) {
next-count++;
signal(x-sem);
wait(next);
next-count--;
}
Critical Resources
 Mutual exclusion is a fundamental problem in
  concurrent programming of co-operating
  processes
 Flags in shared memory, on single or tightly
  coupled systems, were used to ensure the following
  conditions:
ME1: Mutual Exclusion (safety property)
ME2: Progress (no blockade; liveness property)
ME3: Bounded Waiting (freedom from starvation)
 Distributed systems
 Shared memory and flags, not possible on distributed
systems
 Synchronization through some sort of communication
between distributed processes
Message passing among distributed
processes
 Both synchronization and communication requirements
are taken care of by this mechanism.
 Moreover, this mechanism naturally yields
  synchronization methods among distributed processes.
 Basic primitives for messages handling are:
 send (destination, message);
 receive ( source, message);
 Messages are passed through some sort of channels
 Most message passing solutions implement FIFO
fairness or ordered messaging.
Solaris Synchronization
 Implements a variety of locks to support
multitasking, multithreading (including real-time
threads), and multiprocessing
 Uses adaptive mutexes for efficiency when
  protecting data accessed by short code segments
 Uses condition variables and readers-writers
locks when longer sections of code need access
to data
 Uses turnstiles to order the list of threads waiting
to acquire either an adaptive mutex or reader-
writer lock
Windows XP Synchronization
 Uses interrupt masks to protect access to global
resources on uniprocessor systems
 Uses spinlocks on multiprocessor systems
 Also provides dispatcher objects which may act
  as either mutexes or semaphores
 Dispatcher objects may also provide events
 An event acts much like a condition variable
Linux Synchronization
 Linux:
 disables interrupts to implement short critical
sections

 Linux provides:
 semaphores
 spin locks
Pthreads Synchronization
 Pthreads API is OS-independent
 It provides:
 mutex locks
 condition variables

 Non-portable extensions include:
 read-write locks
 spin locks
Atomic Transactions

 System Model
 Log-based Recovery
 Checkpoints
 Concurrent Atomic Transactions
System Model
 Assures that a group of operations executes as a single
  logical unit of work: in its entirety, or not at all
 Related to field of database systems
 Challenge is assuring atomicity despite computer
system failures
 Transaction - collection of instructions or operations that
perform a single logical function
 Here we are concerned with changes to stable storage – disk
 Transaction is series of read and write operations
 Execution sequence is called schedule
 Terminated by commit (transaction successful) or abort
(transaction failed) operation
 Aborted transaction shall be rolled back to undo any changes it
performed
Types of Storage Media
 Volatile storage – information stored here does not
survive system crashes
 Example: main memory, cache
 Nonvolatile storage – Information usually survives
crashes
 Example: disk, tape, optical media
 Stable storage – Information never gets lost
 Not actually possible, approximated via replication or
RAID (Redundant Array of Independent Disks) devices
with independent failure modes, assuming that at least
one copy survives.
 Goal is to assure transaction’s integrity / atomicity where
failures cause loss of information on volatile storage
Log-Based Recovery
 Record information about all modifications, done by a
transaction, to stable storage
 Most common is write-ahead logging
 Log on stable storage. Each log record describes single
transaction write operation, that includes
 Transaction name
 Data item name
 Old value
 New value
 <Ti starts> written to log when transaction Ti starts
 <Ti commits> written when Ti commits
 Log entry must reach stable storage before
operation on data occurs
Log-Based Recovery Algorithm
 Using the log, the system can recover from any volatile
  storage failure
 Undo(Ti) restores the value of all data updated by Ti
 Redo(Ti) sets the value of all data updated by Ti to the new values
 Undo(Ti) and Redo(Ti) must be idempotent (i.e. multiple
  executions must have the same result as a single execution)
 If the system fails, the state of all updated data is restored
  using the log:
 Undo(Ti) if the log contains <Ti starts> but not <Ti commits>
 Redo(Ti) if the log contains both <Ti starts> and <Ti commits>


Checkpoints (intermediate saved stages of execution)
 The log can grow long, and recovery can take
  correspondingly long
 Checkpoints shorten both the log and the recovery time.
 Checkpoint scheme:
1. Output all log records currently in volatile storage to
stable storage
2. Output all modified data from volatile to stable storage
3. Output a log record <checkpoint> to the log on stable
storage
 Recovery then need only consider a transaction Ti that
  started before the most recent checkpoint but had not
  committed, plus all transactions that started after Ti;
  the updates of all other transactions are already on
  stable storage
Concurrent Transactions

 Must be equivalent to a serial execution –
  serializability
 Could perform all transactions in critical section
 Inefficient, too restrictive
 Concurrency-control algorithms provide
serializability
Serializability

 Consider two data items A and B
 Consider Transactions T0 and T1
 Execute T0, T1 atomically
 Execution sequence is called schedule
 Atomically executed transaction order is called
serial schedule
 For N transactions, there are N! valid serial
schedules possible
Schedule 1: T0 then T1
(figure: serial schedule of T0 followed by T1; time flows downward)
Nonserial Schedule
 A non-serial schedule allows overlapped execution
 The resulting execution is not necessarily incorrect
 Consider schedule S, operations Oi, Oj
 Conflict if access same data item, with at least one
write
 If Oi, Oj consecutive and operations of different
transactions & Oi and Oj don’t conflict
 Then S’ with swapped order Oj Oi equivalent to S
 If S can become S’ via swapping non-conflicting
operations
 S is conflict serializable
Schedule 2: Concurrent Serializable
Schedule
Locking Protocol
 Ensure serializability by associating lock with each data
item
 Follow locking protocol for access control
 Types of Locks
 Shared – Ti has shared-mode lock (S) on item Q, Ti can read Q
but not write Q
 Exclusive – Ti has exclusive-mode lock (X) on Q, Ti can both
read and write Q
 Require every transaction on item Q to acquire
appropriate lock
 If lock already held, new request may have to wait
 Similar to readers-writers algorithm
Two-phase Locking Protocol
 Generally ensures conflict serializability
 The requests are made in two phases:
 Growing – obtaining locks
 Shrinking – releasing locks
 Each transaction issues lock requests in the ‘Growing’
  phase and unlock requests in the ‘Shrinking’ phase
 Does not prevent deadlock
 Deadlock may occur if a record needed by both concurrent
  transactions is locked by one transaction and later
  requested by the other before either transaction has
  entered its ‘Shrinking’ phase.
Timestamp-based Protocols
 Select order among transactions in advance –
timestamp-ordering
 Transaction Ti associated with timestamp TS(Ti)
before Ti starts
 TS(Ti) < TS(Tj) if Ti entered system before Tj
 TS can be generated from system clock or as logical
counter incremented at each entry of transaction
 Timestamps determine serializability order
 If TS(Ti) < TS(Tj), system must ensure produced
schedule equivalent to serial schedule where Ti
appears before Tj
Timestamp-based Protocol
 Implementation
Data item Q gets two timestamps
 W-timestamp(Q) – largest timestamp of any transaction that
executed write(Q) successfully
 R-timestamp(Q) – largest timestamp of successful read(Q)
 Updated whenever read(Q) or write(Q) executed
 Timestamp-ordering protocol assures any conflicting
read and write executed in timestamp order
 Suppose Ti executes read(Q)
 If TS(Ti) < W-timestamp(Q), Ti needs to read value of Q that was
already overwritten
 read operation rejected and Ti rolled back

 If TS(Ti) ≥ W-timestamp(Q)
 read executed, R-timestamp(Q) set to max(R-timestamp(Q),
TS(Ti))
Timestamp-ordering Protocol
 Suppose Ti executes write(Q)
 If TS(Ti) < R-timestamp(Q), the value of Q that Ti is
  producing was needed previously, and the system
  assumed that value would never be produced
 Write operation rejected, Ti rolled back
 If TS(Ti) < W-timestamp(Q), Ti attempting to write
obsolete value of Q
 Write operation rejected and Ti rolled back
 Otherwise, write executed
 Any rolled back transaction Ti is assigned new
timestamp and restarted
 Algorithm ensures conflict serializability and
freedom from deadlock
Schedule Possible Under Timestamp
Protocol
