Week 04 Lecture Chapter 4
Inter-process Communications
Inter-process Communication
Race Conditions
Race Conditions
• A key requirement for computer system design is that
the same inputs will always produce the same output
• Deterministic
• Race conditions undermine this
• Two ways to prevent race conditions
– Communication
– Synchronisation methods
• Looking at a simplified race condition
Race Condition Example
• The O/S has to manage concurrent (and possibly parallel)
processes, so the O/S can provide useful lessons for the
programmer
• Example of concurrent computation
– Two processes want to alter your bank balance (which
is $100)
• process A is adding $1000 (your pay)
• process B is deducting $1000 (bill)
Race Condition Example
• After both processes have run, your balance may be:
– $100
– $1100
– -$900
• Why? Note: both processes need access to your
balance (i.e. it is shared)
Race Condition Example
• Process A:
balance = balance + 1000;
Race Condition Example
• Process B:
balance = balance - 1000;
Race Condition Example
• What if CPU scheduler switches between process A
and B as follows:
#A1
#A2
context switch
#B1
#B2
context switch
#A3
context switch
#B3
Race Condition Example
• Computation:
#A1 registerX = 100
#A2 registerX = 1100
Race Condition Example
context switch (process A back on CPU, value of
register (1100) restored)
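The lost update can be traced step by step. Below is a single-threaded Python sketch that simply replays the slide's schedule (no real concurrency, just the same load/modify/store steps in the same order):

```python
# Deterministic replay of the schedule #A1 #A2, #B1 #B2, #A3, #B3.
# Each "process" keeps its own register; only the stores touch the shared balance.

balance = 100

regA = balance        # #A1: A loads the shared balance (100)
regA = regA + 1000    # #A2: A adds the pay in its register (1100)

# context switch to B before A stores its result
regB = balance        # #B1: B loads the still-unchanged balance (100)
regB = regB - 1000    # #B2: B deducts the bill in its register (-900)

# context switches for the final stores
balance = regA        # #A3: balance becomes 1100
balance = regB        # #B3: balance becomes -900, A's deposit is lost

print(balance)        # -900
```

Because B loaded the balance before A stored its result, B's store overwrites A's and the $1000 deposit vanishes.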
Race Condition & Critical Regions
• The final value of balance depends on which order the
concurrent processes access it - this is a race
condition
• Code that reads or alters the value of the shared
balance is a critical section
• To ensure correct balance calculations, only one
process at a time should be allowed in a critical region
- i.e. mutually exclusive access to balance
Critical Regions
Busy Waiting
• In the next few example solutions a technique called
“busy waiting” is often used
• In most real world cases busy waiting is a bad idea
– Wastes CPU cycles
– May keep the process that holds the lock on the critical
region off the CPU
– This could mean that the waiting process delays access
to the region
– Or it could mean the waiting process never gets access
while (x != 1); /* busy wait: spin until x becomes 1 */
Mutual Exclusion with Busy Waiting
• Another attempt:
– shared Boolean array flag, initialised to false
– flag[i] == true if Pi ready to enter CS
flag[i] = true;
while (flag[j]); /* wait while the other process wants access */
CS();            /* critical section */
flag[i] = false;
• Violates conditions (3) and (4) (progress): if both p0 & p1 set
their flags, each waits forever for the other!
• And wastes system resources with the busy-wait at the
while
Test-and-set solution
• Many machines have special instructions designed to
assist synchronisation
• Test-and-set effectively performs:
boolean TS(boolean *target)
{
    boolean result;
    result = *target;  /* remember the old value */
    *target = true;    /* set the lock word */
    return result;
}
• Treated as an “atomic” instruction
• Present on machines as early as late 1960s
Priority Inversion
• Using a mutual exclusion algorithm that involves busy
waiting – what happens if…
– Low priority process is in the critical section
– High priority process needs access to the section
– Single CPU system
• High priority process has the CPU and sits in the busy
wait loop
• Low priority process is starved of access to the CPU
and thus can never exit the critical section
• The high priority process spins forever (priority inversion) -
another reason why busy waiting is a "Bad Idea"
Sleep and Wakeup
Semaphores
down(S)    /* also sometimes named wait */
{
    while (S <= 0); /* busy wait */
    S = S - 1;
}
Semaphores using blocking
• Semaphore operations now defined as
down(semaphore *S)
{
    S->value = S->value - 1;
    if (S->value < 0)
    {
        add this process to S->L; /* L: queue of blocked processes */
        block();
    }
}
Semaphores using blocking
• Semaphore operations now defined as
up(semaphore *S)
{
    S->value = S->value + 1;
    if (S->value <= 0)
    {
        remove a process P from S->L;
        wakeup(P);
    }
}
Semaphores for mutual exclusion
• Assume a semaphore S = 1
down(&S);
CS;
up(&S);
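With Python's threading.Semaphore standing in for S (the thread count, iteration counts and amounts below are made up for illustration), the same down / CS / up pattern keeps a shared balance consistent:

```python
import threading

S = threading.Semaphore(1)        # the semaphore S = 1 from the slide
balance = 0                       # shared variable (illustrative)

def update(amount, times):
    global balance
    for _ in range(times):
        S.acquire()               # down(&S): blocks if someone is in the CS
        balance = balance + amount  # critical section
        S.release()               # up(&S)

a = threading.Thread(target=update, args=(+1000, 10000))
b = threading.Thread(target=update, args=(-1000, 10000))
a.start(); b.start()
a.join(); b.join()
print(balance)                    # always 0: no update is ever lost
```

Without the semaphore the load/add/store sequence in `balance = balance + amount` could interleave exactly as in the bank-balance example.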
Semaphores for general synchronisation
• Assume S = 0
– process p0:
p0-stuff;
up(&S);
– process p1:
down(&S);
p1-stuff;
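A runnable Python sketch of this ordering, where appends to the list log stand in for p0-stuff and p1-stuff:

```python
import threading

S = threading.Semaphore(0)      # S = 0: the first down() blocks until an up()
log = []

def p0():
    log.append("p0-stuff")      # p0 does its work first...
    S.release()                 # up(&S): ...then signals p1

def p1():
    S.acquire()                 # down(&S): wait for p0's signal
    log.append("p1-stuff")

t1 = threading.Thread(target=p1)
t0 = threading.Thread(target=p0)
t1.start(); t0.start()
t0.join(); t1.join()
print(log)                      # always ['p0-stuff', 'p1-stuff']
```

Even though p1 is started first, its down() blocks until p0's up(), so p1-stuff can never run before p0-stuff.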
Semaphores
void producer(void)
{
    int item;
    while(TRUE){               /* TRUE is the constant 1 */
        item = produce_item(); /* generate something */
        down(&empty);          /* decrement empty count */
        down(&mutex);          /* enter critical region */
        insert_item(item);     /* put item in buffer */
        up(&mutex);            /* leave critical region */
        up(&full);             /* increment full count */
    }
}
Semaphores -- consumer
void consumer(void)
{
    int item;
    while(TRUE){               /* TRUE is the constant 1 */
        down(&full);           /* decrement full count */
        down(&mutex);          /* enter critical region */
        item = remove_item();  /* take item from buffer */
        up(&mutex);            /* leave critical region */
        up(&empty);            /* increment empty count */
        consume_item(item);    /* do something with the item */
    }
}
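The producer/consumer pair can be exercised as runnable Python with threading.Semaphore; the buffer size N and the number of items are illustrative choices, and insert_item / remove_item become list operations:

```python
import threading

N = 5                                   # buffer capacity (illustrative)
buffer = []
mutex = threading.Semaphore(1)          # guards the buffer
empty = threading.Semaphore(N)          # counts free slots
full = threading.Semaphore(0)           # counts filled slots
consumed = []

def producer(items):
    for item in items:
        empty.acquire()                 # down(&empty)
        mutex.acquire()                 # down(&mutex): enter critical region
        buffer.append(item)             # insert_item(item)
        mutex.release()                 # up(&mutex): leave critical region
        full.release()                  # up(&full)

def consumer(count):
    for _ in range(count):
        full.acquire()                  # down(&full)
        mutex.acquire()                 # down(&mutex)
        consumed.append(buffer.pop(0))  # remove_item()
        mutex.release()                 # up(&mutex)
        empty.release()                 # up(&empty)

p = threading.Thread(target=producer, args=(list(range(20)),))
c = threading.Thread(target=consumer, args=(20,))
p.start(); c.start()
p.join(); c.join()
print(consumed)                         # items come out in FIFO order
```

The empty semaphore makes the producer block on a full buffer and the full semaphore makes the consumer block on an empty one, while mutex keeps the buffer operations mutually exclusive.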
Busy Waiting versus Blocking Semaphores
Solaris “Adaptive Semaphore”
• Like any semaphore, but it will block or use busy
waiting depending on conditions
• If a process asks for the semaphore and the process
that currently has the semaphore is executing on a
CPU
– The waiting process busy waits instead of blocking
– Saves cost of a context switch
• Does not make an assumption about the number of
CPUs that would violate rule 2
– But the reality is that busy waiting only happens on
multi-CPU systems
Mutexes
Monitors
Example of a monitor
Monitors
• Use of a barrier
– processes approaching a barrier
– all processes but one blocked at barrier
– last process arrives, all are let through
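Python's threading.Barrier provides exactly this; in the sketch below (the thread count and the bookkeeping list are illustrative) no "after" work can start until every thread has arrived:

```python
import threading

barrier = threading.Barrier(3)          # all 3 workers must arrive
order = []
lock = threading.Lock()                 # protects the order list

def worker(i):
    with lock:
        order.append("before")          # work done while approaching the barrier
    barrier.wait()                      # block here until the last arrival
    with lock:
        order.append("after")           # only runs once everyone has arrived

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(order)                            # all "before" entries precede all "after"
```

Each thread records "before" prior to its own wait(), and wait() returns only after all three waits have been made, so every "before" is guaranteed to precede every "after".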
Dining Philosophers
• Philosophers eat/think
• Eating needs 2 forks
• Pick one fork at a time
• How to prevent deadlock
• To be more realistic, consider that they eat Asian food
instead of spaghetti and need two chopsticks
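One standard answer (not the only one) is to impose a global order on the forks: every philosopher picks up the lower-numbered fork first, so a circular chain of waiters cannot form. A runnable Python sketch with illustrative round counts:

```python
import threading

N = 5
forks = [threading.Lock() for _ in range(N)]    # one fork between each pair
meals = [0] * N

def philosopher(i, rounds):
    left, right = i, (i + 1) % N
    # resource ordering: always take the lower-numbered fork first,
    # which makes a circular wait (and hence deadlock) impossible
    first, second = min(left, right), max(left, right)
    for _ in range(rounds):
        with forks[first]:
            with forks[second]:
                meals[i] += 1                   # eat
        # think (nothing to do in this sketch)

threads = [threading.Thread(target=philosopher, args=(i, 100)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)                                    # every philosopher finished eating
```

With the naive "left fork then right fork" protocol all five philosophers can each grab their left fork and deadlock; the ordering breaks that cycle, so the program always terminates.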
Reading for next week
Tanenbaum section 2.5
In Deitel read Chapter 8