Lect 4

Chapter 4 discusses process synchronization, focusing on the critical-section problem and its solutions, including software and hardware mechanisms like mutex locks, semaphores, and monitors. It highlights the need for synchronization to prevent data inconsistency due to concurrent access by processes. The chapter also covers classical synchronization problems and their solutions, emphasizing the importance of mutual exclusion, progress, and bounded waiting in critical-section protocols.

Uploaded by

asnake ketema

Chapter 4: Process Synchronization
Lecture 4: Process Synchronization

Background
The Critical-Section Problem
Peterson’s Solution
Synchronization Hardware
Mutex Locks
Semaphores
Monitors
Classic Problems of Synchronization
The Dining-Philosophers, Readers-Writers, and Bounded-Buffer
Problems

2
Objectives

To present the concept of process synchronization.


To introduce the critical-section problem, whose solutions can be
used to ensure the consistency of shared data
To present both software and hardware solutions to the
critical-section problem
To examine some classical process-synchronization problems

3
Background

We already mentioned that processes/threads can cooperate with
each other in order to achieve a common task
As a result, processes/threads can execute concurrently
 May be interrupted at any time, partially completing execution
Concurrent access to shared data may result in data inconsistency
 Maintaining data consistency requires mechanisms to ensure the orderly
execution of cooperating processes; this is called synchronization

4
The Need for Synchronization: Example
• Consider the following real-world scenario, involving 2 roommates:
Bob and Carla

• In the example, Bob and Carla represent 2 processes/threads

• Theoretically, they should cooperate to achieve a common task (e.g.,
buying 1 liter of milk)
• In practice, though, they might run into unpleasant situations (e.g.,
buying too much milk!)
5
Another Example
Illustration of the problem:
 Suppose that we wanted to provide a solution to the consumer-producer
problem that fills all the buffers.
 We can do so by having an integer counter that keeps track of the
number of full buffers.
 Initially, counter is set to 0. It is incremented by the producer after it
produces a new buffer and is decremented by the consumer after it
consumes a buffer.

6
Example
const int BUFFER_SIZE = 10;
item buffer[BUFFER_SIZE];
int counter = 0;

Producer:
while (true) {
    /* produce an item in next_produced */
    while (counter == BUFFER_SIZE)
        ; /* do nothing */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

Consumer:
while (true) {
    while (counter == 0)
        ; /* do nothing */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
    /* consume the item in next_consumed */
}

7
Race Condition

A situation where several processes manipulate the same data concurrently,
and the outcome of the execution depends on the particular order in which the
access takes place, is called a race condition.
counter++ could be implemented as
register1 = counter
register1 = register1 + 1
counter = register1
counter-- could be implemented as
register2 = counter
register2 = register2 - 1
counter = register2
Consider this execution interleaving with “counter = 5” initially:
S0: producer executes register1 = counter {register1 = 5}
S1: producer executes register1 = register1 + 1 {register1 = 6}
S2: consumer executes register2 = counter {register2 = 5}
S3: consumer executes register2 = register2 - 1 {register2 = 4}
S4: producer executes counter = register1 {counter = 6}
S5: consumer executes counter = register2 {counter = 4}
To guard against the race condition above, we need to ensure that only one
process at a time can be manipulating the variable counter.
8
Critical Section Problem
What kind of mechanisms do we need in order to get independent yet
cooperating processes to communicate and have a consistent view of
the "world" (i.e., computational state)?

Consider a system of n processes
{p0, p1, …, pn-1}
Each process has a critical section segment of code
 Code that may be changing common variables, writing a file, etc.
When one process is in its critical section, no other may be in
its critical section
The code preceding the critical section, and which controls
access to the critical section, is termed the entry section.
 It acts like a carefully controlled locking door.

9
Cont...
The code following the critical section is termed the exit
section.
 It lets other processes know that the process is no longer in its critical section.
The rest of the code, not included in the critical
section or the entry or exit sections, is termed the
remainder section.

The critical-section problem is to design a protocol that
processes/threads can use to cooperate without entering
into a race condition.
Race condition:
 A situation where several processes manipulate the same
data concurrently and the outcome of the execution depends
on the particular order in which the access takes place, is
called a race condition.
10
Critical Section
General structure of process Pi

11
Solution to Critical-Section Problem
A solution to the critical-section problem must satisfy the following three
requirements:

1. Mutual Exclusion - If process Pi is executing in its critical section,


then no other processes can be executing in their critical sections
2. Progress - If no process is executing in its critical section and there
exist some processes that wish to enter their critical section, then the
selection of the processes that will enter the critical section next
cannot be postponed indefinitely. (Deadlock free)
3. Bounded Waiting - A bound must exist on the number of times that
other processes are allowed to enter their critical sections after a
process has made a request to enter its critical section and before that
request is granted. (Starvation-free)
 Assume that each process executes at a nonzero speed
 No assumption concerning relative speed of the n processes

12
What does synchronization look like?

13
Peterson’s Solution

Peterson's Solution is a classic software-based solution to the


critical section problem.
 Two process solution
Assume that the load and store machine-language instructions
are atomic; that is, cannot be interrupted
The two processes share two variables:
 int turn;
 Boolean flag[2]

The variable turn indicates whose turn it is to enter the critical


section
The flag array is used to indicate if a process is ready to enter
the critical section. flag[i] = true implies that process Pi is ready!

14
Algorithm for Process Pi
do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
        ; // do nothing

    /* critical section */

    flag[i] = false;

    /* remainder section */
} while (true);

15
Peterson’s Solution (Cont.)

It is provable that the three CS requirements are met:


1. Mutual exclusion is preserved
Pi enters its CS only if:
• the other thread does not want to enter (flag[j] == false), or
• the other thread wants to enter, but it is Pi's turn (turn == i)
2. Progress requirement is satisfied
• Runs CS if other process does not want to enter or
• Other process (matching turn) will eventually finish
3. Bounded-waiting requirement is met
• Each process waits at most one critical section

16
Synchronization Hardware
Many systems provide hardware support for implementing the critical
section code.

Uni-processors – could disable interrupts


 The process disables interrupts when entering its critical section and
 re-enables them when finished
 Therefore, currently running code would execute without preemption

Problems/dis-advantage
 Causes problems if a user program fails to re-enable interrupts
 Consider the effect on the system time, if it is updated by interrupts
 A thread/process that enters an infinite loop in its critical section after
disabling interrupts will never yield (give back) its processor

 Generally too inefficient on multiprocessor systems


 Disabling interrupts on one processor does not prevent processes on
other processors from simultaneously executing inside their critical sections.

17
Solution to Critical-section Problem Using Locks
Modern machines provide special atomic hardware instructions
 Atomic = non-interruptible
Two of the most known instructions are:
 Test_and_set instruction
 swap instruction

All solutions below are based on the idea of locking
 Protecting critical regions via locks

do {
acquire lock
critical section
release lock
remainder section
} while (TRUE);

18
test_and_set Instruction

Definition:
boolean test_and_set(boolean *target)
{
    boolean rv = *target;
    *target = TRUE;
    return rv;
}

It has the following properties:
1. Executed atomically
2. Returns the original value of the passed parameter
3. Sets the new value of the passed parameter to “TRUE”

19
Solution using test_and_set()
 If the machine supports the test_and_set()instruction, then we can
implement mutual exclusion by declaring a Boolean variable lock,
initialized to false.
 lock == false // implies the lock is available
 lock == true // implies the lock is taken and someone is in its CS

 Example
 Shared Boolean variable lock, initialized to FALSE
 Solution:
do {
while (test_and_set(&lock))
; /* do nothing */
/* critical section */
lock = false;
/* remainder section */
} while (true);

20
compare_and_swap Instruction

Definition:
int compare_and_swap(int *value, int expected, int new_value)
{
    int temp = *value;

    if (*value == expected)
        *value = new_value;
    return temp;
}
Has the following properties:
1. Executed atomically
2. Returns the original value of the passed parameter “value”
3. Sets the variable “value” to the value of the passed parameter
“new_value”, but only if “value” == “expected”. That is, the swap
takes place only under this condition.

21
Solution using compare_and_swap
 Shared integer “lock” initialized to 0;
 Solution:
do {
while (compare_and_swap(&lock, 0, 1) != 0)
; /* do nothing */
/* critical section */
lock = 0;
/* remainder section */
} while (true);

• Drawbacks
Although these algorithms (using compare_and_swap() and test_and_set())
satisfy the mutual-exclusion requirement, they do not satisfy the bounded-
waiting requirement. How/why?

22
Mutex Locks
 The hardware-based solutions are complicated and generally
inaccessible to application programmers
 OS designers build software tools to solve critical section problem
 Simplest is mutex lock
 Protect a critical section by first calling acquire() to get the lock,
then release() to give it back
 Boolean variable (available) indicating if lock is available or not
 Calls to acquire() and release() must be atomic
 Usually implemented via hardware atomic instructions
 But this solution requires busy waiting
 This lock therefore called a spinlock

23
acquire() and release()
The value for the mutex “available” is TRUE initially

acquire() {
    while (!available)
        ; /* busy wait (SPINLOCK) */
    available = false;
}
release() {
    available = true;
}

Solution using mutex:

do {
    acquire lock
    critical section
    release lock
    remainder section
} while (true);
24
Pros and cons of spinlock
The process “spins” while waiting for the lock to become available.
 Same is true for solutions using test_and_set and compare_and_swap
Disadvantage
 Busy waiting wastes CPU cycles that some other process might be able to
use productively.
Advantage
 No context switch is required when a process must wait on a lock
 Therefore, when locks are expected to be held for short times, spinlocks
are useful

Proposed solutions to spinlock problems:
Solution 1:
 If a process cannot get the lock, it gives up the CPU
Solution 2: sleep and wakeup
 When blocked, go to sleep
 Wakeup when it is OK to retry entering the critical region
25
Example pthread synchronization
The mutex synchronization primitive is implemented in the “pthread” library
 Therefore, include pthread.h as header
Here are the particulars
1. Declare an object of type pthread_mutex_t.
2. Initialize the object by calling pthread_mutex_init().
3. Call pthread_mutex_lock() to gain exclusive access to the shared data
object.
4. Call pthread_mutex_unlock() to release the exclusive access and allow
another thread to use the shared data object.
5. Get rid of the object by calling pthread_mutex_destroy().

26
Example
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

pthread_mutex_t lock;
int shared_data; /* Often shared data is more complex than just an int; we
                    should not allow more than one thread to modify this global */

void *thread_function(void *arg)
{
    int i;
    for (i = 0; i < 1024 * 1024; ++i) {
        /* Access the shared data here. */
        pthread_mutex_lock(&lock);
        shared_data++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* the main function */
int main(void)
{
    pthread_t thread_ID;
    void *exit_status;
    int i;

    /* Initialize the mutex before trying to use it. */
    pthread_mutex_init(&lock, NULL);
    pthread_create(&thread_ID, NULL, thread_function, NULL);

    /* Try to use the shared data. */
    for (i = 0; i < 10; ++i) {
        sleep(1);
        pthread_mutex_lock(&lock);
        printf("\rShared integer's value = %d\n", shared_data);
        pthread_mutex_unlock(&lock);
    }
    printf("\n");
    pthread_join(thread_ID, &exit_status);

    /* Clean up the mutex when we are finished with it. */
    pthread_mutex_destroy(&lock);
    return 0;
}

27
Semaphore
Synchronization tool that provides more sophisticated ways (than mutex locks)
for processes to synchronize their activities.

Semaphore S is an integer variable

Can only be accessed via two indivisible (atomic) operations:
 wait() and signal()
 Originally called P() and V()
Definition of the wait() operation
wait(S) {
while (S <= 0)
; // busy wait
S--;
}
Definition of the signal() operation
signal(S) {
S++;
}

28
Semaphore Usage
There are two types of semaphores
Counting semaphore – integer value can range over an unrestricted domain
Binary semaphore – integer value can range only between 0 and 1
 Same as a mutex lock
Semaphore can solve various synchronization problems

Consider two concurrently running processes P1 and P2 that require S1 to happen
before S2
Create a semaphore “synch” initialized to 0
P1:
S1;
signal(synch);
P2:
wait(synch);
S2;
How could counting semaphores be used to control access to a given resource
consisting of a finite number of instances?

29
Problems with Semaphores
Incorrect use of semaphore operations:

 signal(mutex) … wait(mutex)
 Allows many processes to run in their critical sections at once,
violating mutual exclusion
 wait(mutex) … wait(mutex)
 Results in deadlock, since every waiting process keeps waiting for
the others
 Omitting wait(mutex) or signal(mutex) (or both)
 Results in either deadlock or violation of the mutual-exclusion rule

Deadlock and starvation are possible.

30
Monitors
Semaphores cause errors if used incorrectly
 One of the solutions to overcome such problems: Monitors
Monitor is a high-level abstraction that provides a convenient and
effective mechanism for process synchronization
 Encapsulates data with a set of functions to operate on that data
 Local variables are accessible only by code within the monitor's procedures
 Only one process may be active within the monitor at a time
 The programmer does not need to code this synchronization
constraint explicitly
 If a second thread invokes a monitor procedure when a first thread is
already executing one, it blocks
 So the monitor has to have a wait queue…
monitor monitor-name
{
    // shared variable declarations
    procedure P1 (…) { …. }
    …
    procedure Pn (…) { …… }

    Initialization code (…) { … }
}
31
Classical Problems of Synchronization

Classical problems used to test newly proposed synchronization
schemes
 Bounded-Buffer Problem
 Readers and Writers Problem
 Dining-Philosophers Problem

32
Important points
Make sure you understand the following points
 Race condition
 Critical section problem
 Requirements for a solution to the CS problem
 Mutual exclusion
 Progress
 Bounded waiting
 Solutions to the CS problem
 H/W solutions
 Disabling interrupts
 test_and_set()
 compare_and_swap()
 S/W solutions
 Peterson's solution
 Mutex
 Semaphore
 Monitor

33
End of Chapter 4
