Chapter 6
Synchronization Tools
Yunmin Go
School of CSEE
Agenda
◼ Background
◼ The Critical-Section Problem
◼ Peterson’s Solution
◼ Hardware Support for Synchronization
◼ Mutex Locks
◼ Semaphores
◼ Monitors
◼ Liveness
Synchronization Tools - 2
Background
◼ Process communication methods
◼ Message passing
◼ Shared memory → conflicts can occur!
◼ Producer-consumer problem
◼ An example of communication through shared memory
[Figure: Producer places info. into a buffer in shared memory (implemented as a circular queue); Consumer removes info. from it]
Synchronization Tools - 3
Concurrent Access of Shared Data
[Figure: circular buffer with slots [0]–[5]; items J1–J3 occupy consecutive slots, out points to the first full slot and in points to the next free slot]
◼ Producer
    while (true) {
        while (counter == BUFFER_SIZE)
            ;   /* buffer full: do nothing */
        buffer[in] = nextProduced;
        in = (in + 1) % BUFFER_SIZE;
        counter++;
    }
◼ Consumer
    while (true) {
        while (counter == 0)
            ;   /* buffer empty: do nothing */
        nextConsumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        counter--;
    }
◼ Normal situation
◼ Initial value of counter is 5
◼ The producer increments counter and the consumer decrements counter concurrently
◼ Ideally, counter should still be 5 afterwards. But…
Synchronization Tools - 4
Synchronization Problem
◼ Implementation of "counter++"
    register1 = counter
    register1 = register1 + 1
    counter = register1
◼ Implementation of "counter--"
    register2 = counter
    register2 = register2 - 1
    counter = register2
◼ If the two sequences interleave, counter can end up as 4 or 6!
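◼ For example, one possible interleaving (assuming counter starts at 5) in which the producer's update is lost and counter ends up as 4:
    T0: producer executes  register1 = counter          // register1 = 5
    T1: producer executes  register1 = register1 + 1    // register1 = 6
    T2: consumer executes  register2 = counter          // register2 = 5
    T3: consumer executes  register2 = register2 - 1    // register2 = 4
    T4: producer executes  counter = register1          // counter = 6
    T5: consumer executes  counter = register2          // counter = 4
◼ Swapping steps T4 and T5 yields the symmetric result, counter == 6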
Synchronization Tools - 5
Process Synchronization
◼ Race condition
◼ A situation in which several processes access and manipulate the same data
concurrently
◼ The outcome of execution depends on the particular order in which the accesses
take place
◼ Synchronization
◼ The coordination of occurrences to operate in unison with respect to time
◼ Ensuring only one process can access the shared data at a time
Synchronization Tools - 6
Agenda
◼ Background
◼ The Critical-Section Problem
◼ Peterson’s Solution
◼ Hardware Support for Synchronization
◼ Mutex Locks
◼ Semaphores
◼ Monitors
◼ Liveness
Synchronization Tools - 7
The Critical-Section Problem
◼ Consider system of n processes {p0, p1, … pn-1}
◼ Each process has a critical-section segment of code
◼ The process may be changing common variables, updating a table, writing a file, etc.
◼ When one process is executing in its critical section, no other process may be executing in its critical section
// General structure of process Pi
    do {
        entry section
            critical section
        exit section
            remainder section
    } while (true);
[Figure: n processes, each alternating Entry Section → Critical Section → Remainder Section, all accessing the same Shared Data]
Synchronization Tools - 8
The Critical-Section Problem
◼ Critical section problem is to design a protocol that the processes
can use to synchronize their activity so as to cooperatively share data
◼ Each process must ask permission to enter critical section in entry
section, may follow critical section with exit section, then remainder
section
◼ Entry section: code section to request permission to enter critical section
◼ Critical section: a segment of code which may change shared resources
(common variables, file, …)
◼ Exit section: code section announcing that the process is leaving its critical section
◼ Remainder section: a segment of code which doesn’t change shared
resources
Synchronization Tools - 9
Requirements for Critical-Section Problem
◼ Mutual Exclusion: If process Pi is executing in its critical section,
then no other processes can be executing in their critical sections
◼ Progress: If no process is executing in its critical section and there
exist some processes that wish to enter their critical section, then the
selection of the processes that will enter the critical section next
cannot be postponed indefinitely
◼ Bounded Waiting: A bound must exist on the number of times that
other processes are allowed to enter their critical sections after a
process has made a request to enter its critical section and before
that request is granted
* Assumptions
- Each process executes at a nonzero speed
- No assumption concerning the relative speed of the n processes
Synchronization Tools - 10
Critical-Section Handling in OS
◼ Example of race condition when assigning a pid
◼ Processes P0 and P1 are creating
child processes using the fork()
system call
◼ Race condition on kernel variable
next_available_pid which represents
the next available process identifier (pid)
◼ Unless there is mutual exclusion,
the same pid could be assigned to two
different processes!
Synchronization Tools - 11
Critical-Section Handling in OS
◼ Two general approaches are used to handle critical-section in
operating systems
◼ Non-preemptive kernels
◼ Do not allow a process running in kernel mode to be preempted
◼ A kernel-mode process will run until it exits kernel mode, blocks, or voluntarily yields the CPU
◼ Essentially free of race conditions on kernel data structures
◼ Preemptive kernels
◼ Allow a process to be preempted while running in kernel mode
◼ More responsive, since there is less risk that a kernel-mode process will run for an arbitrarily
long period before relinquishing the processor to waiting processes
◼ More suitable for real-time processes
Synchronization Tools - 12
Agenda
◼ Background
◼ The Critical-Section Problem
◼ Peterson’s Solution
◼ Hardware Support for Synchronization
◼ Mutex Locks
◼ Semaphores
◼ Monitors
◼ Liveness
Synchronization Tools - 13
Peterson’s Solution
◼ Peterson’s Solution: classic software-based solution to the critical-
section problem
◼ Not guaranteed to work on modern architectures
◼ But good algorithmic description of solving the problem!
◼ Restricted to two processes (Pi and Pj) that alternate execution between
critical section and remainder sections
◼ Assume that the load and store machine-language instructions are atomic;
that is, cannot be interrupted
◼ The two processes share two variables:
◼ int turn;
◼ boolean flag[2];
Synchronization Tools - 14
Peterson’s Solution
◼ Algorithm for Process Pi
int turn; // Indicates whose turn it is to enter the critical section
boolean flag[2]; // Used to indicate if a process is ready to enter its critical section
// flag[i] == true implies that process Pi is ready!
∙∙∙
while (true) {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
        ;   /* busy wait */
    /* critical section */
    flag[i] = false;
    /* remainder section */
}
Synchronization Tools - 15
Peterson’s Solution
◼ Algorithm for Process P0
    while (true) {
        flag[0] = true;
        turn = 1;
        while (flag[1] && turn == 1)
            ;   /* busy wait */
        /* critical section */
        flag[0] = false;
        /* remainder section */
    }
◼ Algorithm for Process P1
    while (true) {
        flag[1] = true;
        turn = 0;
        while (flag[0] && turn == 0)
            ;   /* busy wait */
        /* critical section */
        flag[1] = false;
        /* remainder section */
    }
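◼ A minimal C sketch of the two-thread algorithm (assuming Pthreads; variable and function names are illustrative). As discussed later, modern compilers and CPUs may reorder the accesses to flag and turn, so this sketch is not guaranteed to be correct without additional memory barriers:
    #include <stdbool.h>
    #include <stdio.h>
    #include <pthread.h>

    volatile bool flag[2] = { false, false };   /* shared state of Peterson's algorithm */
    volatile int turn = 0;
    int counter = 0;                            /* shared data protected by the lock */

    void *worker(void *arg) {
        int i = *(int *)arg;                    /* this thread's id: 0 or 1 */
        int j = 1 - i;                          /* the other thread's id */
        for (int k = 0; k < 100000; k++) {
            flag[i] = true;                     /* entry section */
            turn = j;
            while (flag[j] && turn == j)
                ;                               /* busy wait */
            counter++;                          /* critical section */
            flag[i] = false;                    /* exit section */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t0, t1;
        int id0 = 0, id1 = 1;
        pthread_create(&t0, NULL, worker, &id0);
        pthread_create(&t1, NULL, worker, &id1);
        pthread_join(t0, NULL);
        pthread_join(t1, NULL);
        printf("counter = %d\n", counter);      /* ideally 200000 */
        return 0;
    }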
Synchronization Tools - 16
Proof of Peterson’s Solution
◼ Proof of mutual exclusion
◼ Suppose both P0 and P1 are executing in their critical sections at the same time. Then
◼ flag[0] == flag[1] == true, so each process can only have exited its while loop because turn favored it
◼ That would require turn == 0 (for P0) and turn == 1 (for P1) at the same time, but turn holds only one value → contradiction
// P0
    while (true) {
        flag[0] = true;
        turn = 1;
        while (flag[1] && turn == 1);
    }
// P1
    while (true) {
        flag[1] = true;
        turn = 0;
        while (flag[0] && turn == 0);
    }
Synchronization Tools - 17
Proof of Peterson’s Solution
◼ Proof of progress and bounded waiting
◼ Blocking condition of Pi: while (flag[j] && turn == j);
◼ If Pj is not ready to enter its critical section, then flag[j] == false
→ Pi can enter its critical section immediately
◼ If Pj is also waiting in its while statement, then turn is either i or j; whichever value it has, one of the two processes enters its critical section (progress)
◼ When Pj exits, it resets flag[j] to false, letting Pi enter; if Pj tries to re-enter, it must first set turn = i, so Pi enters after at most one entry by Pj (bounded waiting)
Synchronization Tools - 18
Peterson’s Solution
◼ Although useful for demonstrating an algorithm, Peterson’s Solution
is not guaranteed to work on modern architectures
◼ Understanding why it will not work is also useful for better
understanding race conditions
◼ To improve performance, processors and/or compilers may reorder
operations that have no dependencies
◼ For a single-threaded program this is fine, as the result will always be the same
◼ For multithreaded programs, the reordering may produce inconsistent or
unexpected results!
Synchronization Tools - 19
Example: Peterson’s Solution
◼ Two threads share the data:
    boolean flag = false;
    int x = 0;
◼ Thread 1:
    while (!flag)
        ;   /* wait until flag is set */
    print x;
◼ Thread 2:
    x = 100;
    flag = true;
◼ The expected output is 100, but if the processor or compiler reorders Thread 2's statements (flag = true before x = 100), Thread 1 may print 0
Synchronization Tools - 20
Peterson’s Solution
◼ The effects of instruction reordering in Peterson’s Solution
◼ What happens if the assignments of the first two statements that appear in the
entry section of Peterson’s solution are reordered?
◼ Original entry section:
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j);
◼ After reordering:
    turn = j;
    flag[i] = true;
    while (flag[j] && turn == j);
◼ This allows both processes to be in their critical sections at the same time!
→ Using proper synchronization tools based on hardware support!
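◼ One way to prevent such reordering on modern hardware is to make flag and turn sequentially consistent atomics; a minimal C11 sketch (function names are illustrative, not part of the original algorithm):
    #include <stdatomic.h>
    #include <stdbool.h>

    atomic_bool flag[2];
    atomic_int turn;

    /* Entry section for thread i (the other thread is j = 1 - i).
       The default seq_cst ordering prevents the two stores from being
       reordered with each other or with the loads in the while loop. */
    void enter_region(int i) {
        int j = 1 - i;
        atomic_store(&flag[i], true);
        atomic_store(&turn, j);
        while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
            ;   /* busy wait */
    }

    void leave_region(int i) {
        atomic_store(&flag[i], false);
    }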
Synchronization Tools - 21
Agenda
◼ Background
◼ The Critical-Section Problem
◼ Peterson’s Solution
◼ Hardware Support for Synchronization
◼ Mutex Locks
◼ Semaphores
◼ Monitors
◼ Liveness
Synchronization Tools - 22
Synchronization Hardware
◼ Many systems provide hardware support for implementing the critical
section code
◼ Single-core processor: could disable interrupts
◼ Currently running code would execute without preemption
◼ Generally too inefficient on multiprocessor systems
◼ Operating systems using this approach are not broadly scalable
Synchronization Tools - 23
Memory Barriers
◼ Memory barrier (memory fence): an instruction that forces any
change in memory to be propagated (made visible) to all other
processors.
◼ The memory barrier instruction ensures that the store operations are
completed in memory and visible to other processors before future load or
store operations are performed
◼ Very low-level operations
◼ Typically only used by kernel developers when writing specialized code that
ensures mutual exclusion
Synchronization Tools - 24
Memory Barrier
◼ We could add a memory barrier to the following instructions to ensure
Thread 1 outputs 100:
◼ Thread 1
    while (!flag)
        memory_barrier();   // guarantee that flag is loaded before x
    print x;
◼ Thread 2
    x = 100;
    memory_barrier();       // ensure that the assignment to x occurs before the assignment to flag
    flag = true;
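◼ As a rough C11 analogue of the pseudocode above (a sketch; the flag itself is declared atomic so the accesses are well defined, and atomic_thread_fence() plays the role of memory_barrier()):
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    int x = 0;
    atomic_bool flag = false;

    void thread2(void) {                              /* the writer */
        x = 100;
        atomic_thread_fence(memory_order_release);    /* barrier: x is written before flag */
        atomic_store_explicit(&flag, true, memory_order_relaxed);
    }

    void thread1(void) {                              /* the reader */
        while (!atomic_load_explicit(&flag, memory_order_relaxed))
            ;                                         /* spin until flag is set */
        atomic_thread_fence(memory_order_acquire);    /* barrier: flag is read before x */
        printf("%d\n", x);                            /* prints 100 */
    }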
Hardware Instructions
◼ Special atomic hardware instructions for synchronization:
◼ Test-and-Set instruction
◼ Compare-and-Swap instruction
Synchronization Tools - 26
Test-and-Set Instruction
◼ Definition
boolean test_and_set(boolean *target) {
    boolean rv = *target;   /* save the original value */
    *target = true;         /* set the new value unconditionally */
    return rv;
}
◼ Executed atomically
◼ Returns the original value of passed parameter
◼ Set the new value of passed parameter to true
Synchronization Tools - 27
Test-and-Set Instruction
◼ Mutual exclusion implementation using test-and-set instruction
◼ Declare a shared boolean variable lock, initialized to false
do {
    while (test_and_set(&lock))
        ;   /* busy wait: the lock was already held */
    /* critical section */
    lock = false;
    /* remainder section */
} while (true);
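◼ For comparison, C11 exposes an equivalent primitive as atomic_flag_test_and_set(); a minimal spinlock sketch (names illustrative):
    #include <stdatomic.h>

    atomic_flag lock = ATOMIC_FLAG_INIT;   /* initially clear (unlocked) */

    void acquire(void) {
        while (atomic_flag_test_and_set(&lock))
            ;   /* busy wait until the previous value was false */
    }

    void release(void) {
        atomic_flag_clear(&lock);          /* set back to false */
    }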
Synchronization Tools - 28
Compare-and-Swap Instruction
◼ Definition
int compare_and_swap(int *value, int expected, int new_value) {
int temp = *value;
if (*value == expected)
*value = new_value;
return temp;
}
◼ Executed atomically
◼ Returns the original value of passed parameter value
◼ Set value to new_value (passed parameter) but only if *value ==
expected is true. That is, the swap takes place only under this
condition
Synchronization Tools - 29
Compare-and-Swap Instruction
◼ Mutual exclusion implementation using compare-and-swap instruction
◼ Declare a shared integer variable lock, initialized to 0
while (true) {
    while (compare_and_swap(&lock, 0, 1) != 0)
        ;   /* busy wait: the lock was already held */
    /* critical section */
    lock = 0;
    /* remainder section */
}
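◼ The same pattern can be sketched with C11's atomic_compare_exchange_strong() (note that, unlike the pseudocode above, the C11 call updates its expected argument on failure, so it is reset each iteration):
    #include <stdatomic.h>

    atomic_int lock = 0;                 /* 0 = free, 1 = held */

    void acquire(void) {
        int expected = 0;
        while (!atomic_compare_exchange_strong(&lock, &expected, 1))
            expected = 0;                /* CAS failed: expected now holds 1, reset it */
    }

    void release(void) {
        atomic_store(&lock, 0);
    }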
Synchronization Tools - 30
Compare-and-Swap Instruction
◼ Bounded-waiting mutual exclusion with compare-and-swap
boolean waiting[n]; // initialized to false
int lock; // initialized to 0
while (true) {
waiting[i] = true;
key = 1;
while (waiting[i] && key == 1)
key = compare_and_swap(&lock,0,1);
waiting[i] = false;
/* critical section */
j = (i + 1) % n;
while ((j != i) && !waiting[j])
j = (j + 1) % n;
if (j == i)
lock = 0;
else
waiting[j] = false;
/* remainder section */
}
Synchronization Tools - 31
Atomic Variables
◼ Typically, instructions such as compare-and-swap are used as
building blocks for other synchronization tools
◼ One tool is an atomic variable that provides atomic (uninterruptible)
updates on basic data types such as integers and booleans
◼ For example, the increment() operation on the atomic variable sequence
ensures sequence is incremented without interruption:
increment(&sequence);
void increment(atomic_int *v)
{
int temp;
do {
temp = *v;
} while (temp != compare_and_swap(v,temp,temp+1));
}
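◼ C11's <stdatomic.h> provides such atomic variables directly; for instance, the increment above roughly corresponds to the following sketch:
    #include <stdatomic.h>

    atomic_int sequence = 0;

    void increment(atomic_int *v) {
        atomic_fetch_add(v, 1);   /* atomic read-modify-write; cannot be interleaved */
    }

    /* usage: increment(&sequence); */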
Synchronization Tools - 32
Agenda
◼ Background
◼ The Critical-Section Problem
◼ Peterson’s Solution
◼ Hardware Support for Synchronization
◼ Mutex Locks
◼ Semaphores
◼ Monitors
◼ Liveness
Synchronization Tools - 33
Mutex Locks
◼ Mutex Lock: the simplest synchronization tool to implement mutual
exclusion
◼ Protect a critical section by first calling acquire() to get the lock and then calling release() to return it
◼ Boolean variable indicating if lock is available or not
◼ Calls to acquire() and release() must be atomic
◼ Usually implemented via hardware atomic instructions such as compare-and-swap
◼ acquire() function
    acquire() {
        while (!available)
            ;   /* busy waiting */
        available = false;
    }
◼ release() function
    release() {
        available = true;
    }
while (true) {
acquire lock
critical section
release lock
remainder section
}
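◼ In practice, mutex locks are usually taken from a library such as Pthreads rather than written by hand; a minimal usage sketch (names illustrative):
    #include <pthread.h>
    #include <stdio.h>

    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    int counter = 0;                      /* shared data */

    void *worker(void *arg) {
        pthread_mutex_lock(&lock);        /* acquire */
        counter++;                        /* critical section */
        pthread_mutex_unlock(&lock);      /* release */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %d\n", counter);   /* always 2 */
        return 0;
    }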
Synchronization Tools - 35
Agenda
◼ Background
◼ The Critical-Section Problem
◼ Peterson’s Solution
◼ Hardware Support for Synchronization
◼ Mutex Locks
◼ Semaphores
◼ Monitors
◼ Liveness
Synchronization Tools - 36
Semaphore
◼ Semaphore: synchronization tool that provides more sophisticated
ways (than mutex locks) for processes to synchronize their activities
◼ Semaphore S is integer variable
◼ Can only be accessed via two indivisible (atomic) operations
◼ wait() operation
wait(S) {
while (S <= 0); // waits for the lock
S--; // holds the lock
}
◼ signal() operation
signal(S) {
S++; // release the lock
}
Synchronization Tools - 37
Semaphores Usage
◼ Counting semaphore
◼ Integer value S can range over an unrestricted range
◼ Initialized to # of available resources
◼ Binary semaphore
◼ Integer value can range only between 0 and 1
◼ Same as mutex lock
◼ Can solve various synchronization problems
◼ Example: consider processes P1 and P2 where statement S1 (in P1) must happen before statement S2 (in P2)
◼ Create a semaphore "synch" initialized to 0
◼ P1:
    S1;
    signal(synch);
◼ P2:
    wait(synch);
    S2;
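◼ With POSIX semaphores, the same ordering can be sketched as follows (sem_init / sem_wait / sem_post; thread names are illustrative):
    #include <semaphore.h>
    #include <pthread.h>
    #include <stdio.h>

    sem_t synch;

    void *p1(void *arg) {
        printf("S1\n");          /* statement S1 */
        sem_post(&synch);        /* signal(synch) */
        return NULL;
    }

    void *p2(void *arg) {
        sem_wait(&synch);        /* wait(synch): blocks until P1 has finished S1 */
        printf("S2\n");          /* statement S2 */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        sem_init(&synch, 0, 0);  /* shared between threads, initial value 0 */
        pthread_create(&t2, NULL, p2, NULL);
        pthread_create(&t1, NULL, p1, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }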
Synchronization Tools - 38
Semaphore Implementation
◼ Problem with the previous definition of wait(): busy waiting (spinlock)
    wait(S) {
        while (S <= 0)
            ;   // spinlock (busy waiting)
        S--;
    }
◼ While one process is in its critical section, any other process trying to enter must loop continuously, wasting CPU cycles
Synchronization Tools - 39
Semaphore Implementation
◼ With each semaphore there is an associated waiting queue
◼ Each semaphore now has two data items:
◼ an integer value
◼ a pointer to a list of processes waiting on the semaphore
typedef struct {
int value;
struct process *list; // head of linked list for waiting queue
} semaphore;
Synchronization Tools - 40
Semaphore Implementation
◼ Two operations:
◼ sleep: place the process invoking the operation on the appropriate waiting
queue
◼ wakeup: remove one of the processes from the waiting queue and place it in the
ready queue
◼ wait()
    wait(semaphore *S) {
        S->value--;
        if (S->value < 0) {
            add this process to S->list;
            sleep();    // suspend
        }
    }
◼ signal()
    signal(semaphore *S) {
        S->value++;
        if (S->value <= 0) {
            remove a process P from S->list;
            wakeup(P);  // resume
        }
    }
Synchronization Tools - 41
Semaphore Implementation
◼ It is critical that semaphore operations be executed atomically
◼ We must guarantee that no two processes can execute the wait() and
signal() on the same semaphore at the same time
◼ Thus, the implementation becomes the critical section problem where the
wait() and signal() code are placed in the critical section
◼ Solutions to provide atomicity depend on the computer architecture
◼ Single-processor environment: disable interrupts
◼ Instructions from different processes cannot be interleaved
◼ Multicore environment: use compare_and_swap() or spinlocks
◼ Disabling interrupts on all processors would degrade performance
◼ Solutions ensure that wait() and signal() are performed atomically
◼ Busy waiting is not completely eliminated: it is moved into the critical sections of the wait() and signal() implementations themselves
◼ But this implementation code is short, so busy waiting occurs rarely and only briefly
Synchronization Tools - 43
Monitors
◼ Motivation: semaphores are still a low-level tool, and incorrect use (e.g., swapping wait() and signal(), or omitting one of them) can violate mutual exclusion or cause deadlock
◼ Correct use of semaphore operations:
    wait(mutex);
        …
        critical section
        …
    signal(mutex);
◼ Monitor: a high-level language construct (abstract data type) whose operations automatically execute with mutual exclusion
Synchronization Tools - 46
Monitor in Java
◼ Synchronized method
class Producer {
    private int product;
    …
    private synchronized void produce() { … }   // executes with mutual exclusion
}
Synchronization Tools - 47
Condition Variables
◼ Condition: additional synchronization mechanism
◼ Variable declaration
condition x, y;
◼ Wait
x.wait(); // invoking process is suspended
// until other process calls x.signal()
◼ Signal
x.signal(); // resumes exactly one suspended process
// if no process is waiting, do nothing
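◼ Pthreads provides an analogous mechanism with pthread_cond_t; a minimal sketch (the predicate ready and the names are illustrative). Unlike monitor condition variables, the wait must be paired with an explicit mutex and rechecked in a loop:
    #include <pthread.h>
    #include <stdbool.h>

    pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t  x = PTHREAD_COND_INITIALIZER;
    bool ready = false;                  /* the condition being waited on */

    void wait_for_ready(void) {
        pthread_mutex_lock(&m);
        while (!ready)                   /* recheck: wakeups may be spurious */
            pthread_cond_wait(&x, &m);   /* atomically releases m and suspends */
        pthread_mutex_unlock(&m);
    }

    void make_ready(void) {
        pthread_mutex_lock(&m);
        ready = true;
        pthread_cond_signal(&x);         /* resumes one waiting thread, if any */
        pthread_mutex_unlock(&m);
    }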
Synchronization Tools - 48
Monitor with Condition Variables
Synchronization Tools - 49
Condition Variables Choices
◼ If process P invokes x.signal(), and process Q is suspended in
x.wait(), what should happen next?
◼ Both Q and P cannot execute in parallel. If Q is resumed, then P must wait
◼ Options include
◼ Signal and wait: P waits until Q either leaves the monitor or it waits for
another condition
◼ Signal and continue: Q waits until P either leaves the monitor or it waits for
another condition
◼ Both have pros and cons: the language implementer can decide
◼ Monitors implemented in Concurrent Pascal adopt a compromise:
◼ P, upon executing signal, immediately leaves the monitor; Q is resumed
◼ Signal-and-continue semantics is used by many other languages, including Mesa, C#, and Java
Synchronization Tools - 50
Implementing a Monitor Using Semaphores
◼ Functions to implement
◼ External procedures of the monitor
◼ wait() of a condition variable
◼ signal() of a condition variable
◼ Required semaphores and counters (standard scheme, used on the following slides)
    semaphore mutex = 1;   // one process at a time active inside the monitor
    semaphore next = 0;    // signaling processes suspend themselves on next
    int next_count = 0;    // number of processes suspended on next
Synchronization Tools - 51
Implementing a Monitor Using Semaphores
◼ Each external procedure F is replaced by
    wait(mutex);
        …
        body of F;
        …
    if (next_count > 0)
        signal(next);
    else
        signal(mutex);
Synchronization Tools - 52
Implementing a Monitor Using Semaphores
◼ Condition variables
◼ Required data
semaphore x_sem; // (initially, 0)
int x_count = 0;
◼ x.wait()
    x.wait() {
        x_count++;
        if (next_count > 0)
            signal(next);
        else
            signal(mutex);
        wait(x_sem);
        x_count--;
    }
◼ x.signal()
    x.signal() {
        if (x_count > 0) {
            next_count++;
            signal(x_sem);
            wait(next);
            next_count--;
        }
    }
Synchronization Tools - 53
Resuming Processes within a Monitor
◼ If several processes queued on condition variable x, and x.signal()
is executed, which process should be resumed?
◼ FCFS frequently not adequate
◼ Use the conditional-wait construct of the form x.wait(c)
◼ where c is an integer priority number
◼ The process with the lowest number (highest priority) is resumed next
Synchronization Tools - 54
Single Resource Allocation
◼ Allocate a single resource among competing processes using priority
numbers that specify the maximum time a process plans to use the
resource
R.acquire(t);
    ...
    access the resource;
    ...
R.release();
Synchronization Tools - 55
A Monitor to Allocate Single Resource
monitor ResourceAllocator
{
    boolean busy;
    condition x;

    void acquire(int time) {
        if (busy)
            x.wait(time);
        busy = true;
    }

    void release() {
        busy = false;
        x.signal();
    }

    initialization_code() {
        busy = false;
    }
}
Synchronization Tools - 56
Agenda
◼ Background
◼ The Critical-Section Problem
◼ Peterson’s Solution
◼ Hardware Support for Synchronization
◼ Mutex Locks
◼ Semaphores
◼ Monitors
◼ Liveness
Synchronization Tools - 57
Liveness
◼ Liveness: a set of properties that a system must satisfy to ensure
that processes make progress during their execution life cycle
Synchronization Tools - 58
Liveness
◼ Deadlock: two or more processes are waiting indefinitely for an event
that can be caused by only one of the waiting processes
◼ Let S and Q be two semaphores initialized to 1
◼ P0:
    wait(S);
    wait(Q);
    ...
    signal(S);
    signal(Q);
◼ P1:
    wait(Q);
    wait(S);
    ...
    signal(Q);
    signal(S);
◼ Suppose P0 executes wait(S) and P1 executes wait(Q). When P0 then executes wait(Q), it must wait until P1 executes signal(Q)
◼ However, P1 is waiting until P0 executes signal(S)
◼ Since these signal() operations can never be executed, P0 and P1 are deadlocked
Synchronization Tools - 60