Week 04 Lecture Chapter 4

The document discusses Inter-Process Communications (IPC), emphasizing the need for IPC to manage shared resources and prevent race conditions. It covers various methods for achieving mutual exclusion, including semaphores and monitors, while also highlighting classic problems such as the dining philosophers problem. The document concludes with a summary of key concepts and suggests readings for further understanding.

Topic 4:

Inter-process Communications

Why we need IPC


Methods of IPC
Classic problems in IPC
Why we need Inter-Process
Communications
• Multiple processes running “simultaneously”
– Accessing shared resources
• Contention
• Race conditions
– Working together to complete some task
• Cooperation
• Coordination
• Co-operating processes make better use of resources
– one process using CPU while another waits on I/O

2 /45
Inter-process Communication
Race Conditions

Two processes want to access shared memory at same time

3 /45
Race Conditions
• A key requirement for computer system design is that
the same inputs will always produce the same output
• Deterministic
• Race conditions undermine this
• Two ways to prevent race conditions
– Communication
– Synchronisation methods
• Looking at a simplified race condition

4 /45
Race Condition Example
• The O/S has to manage concurrent (and possibly parallel)
processes, so the O/S can provide useful lessons for the
programmer
• Example of concurrent computation
– Two processes want to alter your bank balance (which
is $100)
• process A is adding $1000 (your pay)
• process B is deducting $1000 (bill)

5 /45
Race Condition Example
• After both processes have run, your balance may be:
– $100
– $1100
– -$900
• Why? Note: both processes need access to your
balance (i.e. it is shared)

6 /45
Race Condition Example
• ProcessA:
balance = balance + 1000;

• ProcessA (pseudo) Machine code:


(#A1) load registerX ← balance
(#A2) add registerX ← 1000
(#A3) store balance ← registerX

7 /45
Race Condition Example
• ProcessB:
balance = balance – 1000;

• ProcessB (pseudo) Machine code:


(#B1) load registerX ← balance
(#B2) sub registerX ← 1000
(#B3) store balance ← registerX

8 /45
Race Condition Example
• What if CPU scheduler switches between process A
and B as follows:
#A1
#A2
context switch
#B1
#B2
context switch
#A3
context switch
#B3

9 /45
Race Condition Example

• Computation:
#A1 registerX = 100
#A2 registerX = 1100

context switch (register value will be restored to 1100
when this process is next put on the CPU)

#B1 registerX = 100 (balance still 100)
#B2 registerX = -900

10 /45
Race Condition Example
context switch (process A back on CPU, register value
1100 restored)

#A3 balance = 1100

context switch (process B back on CPU, register value
-900 restored)

#B3 balance = -900

• Final value of balance is -900

11 /45
Race Condition & Critical Regions
• The final value of balance depends on which order the
concurrent processes access it - this is a race
condition
• Code that reads or alters the value of the shared
balance is a critical section
• To ensure correct balance calculations, only one
process at a time should be allowed in a critical region
- i.e. mutually exclusive access to balance

12 /45
Critical Regions

Four conditions to provide mutual exclusion


1. No two processes simultaneously in critical
region
2. No assumptions made about speeds or
numbers of CPUs
3. No process running outside its critical region
may block another process
4. No process must wait forever to enter its critical
region
See page 102 in Tanenbaum – Deitel does not specify the above directly

13 /45
Critical Regions

• Mutual exclusion using critical regions


• Above is what we want to achieve, but how do
we do it?

14 /45
Busy Waiting
• In the next few example solutions a technique called
“busy waiting” is often used
• In most real-world cases busy waiting is a bad idea
– Wastes CPU cycles
– May starve the process that holds the lock on the critical
region of CPU time
– This could mean that the waiting process delays access
to the region
– Or it could mean the waiting process never gets access
while (x != 1); /* busy wait: spin until x becomes 1 */

15 /45
Mutual Exclusion with Busy Waiting

Proposed solution to critical region problem


• What is wrong with this solution?
– What if the process in (a) needs to access the critical
region twice for every access by process (b)?
– If this is a single CPU system how efficient will this be?

16 /45
Mutual Exclusion with Busy Waiting

• Another attempt:
– shared Boolean array flag, initialised to false
– flag[i] == true if Pi is ready to enter its CS

flag[i] = true;
while (flag[j]); /* wait if other process wants access */
CS();            /* some stuff */
flag[i] = false;

• Violates conditions 3 and 4: if both P0 and P1 set their
flags at the same time, both wait forever
• And wastes system resources with the busy wait at the
while

17 /45
Mutual Exclusion with Busy Waiting

Peterson's solution for achieving mutual exclusion


– Solves problem of consecutive access
– Still wastes CPU cycles in busy waiting

18 /45
Test-and-set solution
• Many machines have special instructions designed to
assist synchronisation
• Test-and-set effectively performs:
boolean TS(boolean target)
{
    boolean result;
    result = target;
    target = true;
    return result;
}
• Treated as an “atomic” instruction
• Present on machines as early as late 1960s
19 /45
Mutual Exclusion with Busy Waiting

• Entering and leaving a critical region using the


TSL instruction
• Busy waiting is still a bad idea for most cases –
we need a solution that uses a different method

20 /45
Priority Inversion
• Using a mutual exclusion algorithm that involves busy
waiting – what happens if…
– Low priority process is in the critical section
– High priority process needs access to the section
– Single CPU system
• High priority process has the CPU and sits in the busy
wait loop
• Low priority process is starved of access to the CPU
and thus can never exit the critical section
• The high-priority process spins forever while the low-priority
process never runs – another reason why busy waiting is a
“Bad Idea”

21 /45
Sleep and Wakeup

Producer-consumer solution with fatal race condition


22 /45
Semaphores
• Need easier method for correctly synchronising
concurrent processes - e.g. semaphores
• Semaphore is a variable (binary semaphores only have
the values 0 or 1; counting semaphores range from 0 up
to some positive maximum)
• Accessed via two atomic functions, up & down
– atomic means indivisible: if two processes both do,
say, down(S) at “the same time”, the result is as if they
were done one after the other

23 /45
Semaphores
down(S) (also sometimes named wait)
{
    while (S <= 0); /* busy wait */
    S = S - 1;
}

up(S) (also sometimes named signal)
{
    S = S + 1;
}

These representations use a busy wait – not what really
happens, but a simpler way to look at what actually
happens
24 /45
Semaphores using blocking
• Define a semaphore as a structure
struct semaphore {
    int value;
    PCB_entry *L; /* list of waiting processes */
};
• Assume two simple operations:
– block suspends the process that invokes it.
– wakeup(P) resumes the execution of a blocked process
P.
• *L is the head of a linked list of PCB entries

25 /45
Semaphores using blocking
• Semaphore operations now defined as
down(semaphore *S)
{
    S->value = S->value - 1;
    if (S->value < 0)
    {
        add this process to S->L;
        block();
    }
}

26 /45
Semaphores using blocking
• Semaphore operations now defined as
up(semaphore *S)
{
    S->value = S->value + 1;
    if (S->value <= 0)
    {
        remove a process P from S->L;
        wakeup(P);
    }
}

27 /45
Semaphores for mutual exclusion
• Assume a semaphore S = 1

down(&S);
CS;
up(&S);

– ensures mutual exclusion

28 /45
Semaphores for general
synchronisation
• Assume S = 0
– process p0:
p0-stuff;
up(&S);

– process p1:
down(&S);
p1-stuff;

– ensures p0-stuff is done before p1-stuff

29 /45
Semaphores

The producer-consumer problem using semaphores


30 /45
Semaphores -- producer
#define N 100        /* slots in the buffer */
semaphore mutex;     /* semaphore struct as  */
semaphore empty;     /* previously defined   */
semaphore full;
mutex.value = 1;     /* initialisation       */
empty.value = N;     /* all N slots start empty */
full.value = 0;      /* no slots are full yet   */

void producer(void)
{
    int item;
    while (TRUE) {                /* TRUE is defined as 1  */
        item = produce_item();    /* generate something    */
        down(&empty);             /* decrement empty count */
        down(&mutex);             /* enter critical region */
        insert_item(item);        /* put item in buffer    */
        up(&mutex);               /* leave critical region */
        up(&full);                /* increment full count  */
    }
}

31 /45
Semaphores -- consumer
void consumer(void)
{
    int item;

    while (TRUE) {                /* infinite loop again    */
        down(&full);              /* decrement full count   */
        down(&mutex);             /* enter critical region  */
        item = remove_item();     /* take item from buffer  */
        up(&mutex);               /* exit critical region   */
        up(&empty);               /* increment empty count  */
        consume_item(item);       /* do something with it   */
    }
}

32 /45
Busy Waiting versus Blocking Semaphores

Busy Waiting
• “Wastes” CPU cycles
• Can result in deadlock in some situations
• But – does not have the cost of a context switch (we will
discuss context switches in more detail next week)

Blocking Semaphores/Mutexes
• Properly implemented, can be efficient
• Less likely to deadlock
• Context switch overhead is a negative

33 /45
Solaris “Adaptive Semaphore”
• Like any semaphore, but it will block or use busy
waiting depending on conditions
• If a process asks for the semaphore while the process
that currently holds the semaphore is executing on a
CPU
– The waiting process busy waits
– This saves the cost of a context switch
• Does not make an assumption about the number of
CPUs that would violate rule 2
– But in practice busy waiting only happens on
multi-CPU systems

34 /45
Mutexes

Implementation of mutex_lock and mutex_unlock

35 /45
Monitors

Example of a monitor
36 /45
Monitors

• Outline of producer-consumer problem with monitors


– only one monitor procedure active at one time
– buffer has N slots
37 /45
Message Passing

The producer-consumer problem with N messages


38 /45
Barriers

• Use of a barrier
– processes approaching a barrier
– all processes but one blocked at barrier
– last process arrives, all are let through

39 /45
Dining Philosophers

• Philosophers eat/think
• Eating needs 2 forks
• Pick one fork at a time
• How to prevent deadlock
• To be more realistic –
consider that they eat
Asian food instead of
spaghetti and need two
chopsticks

40 /45
Dining Philosophers

A non-solution to the dining philosophers problem

41 /45
Dining Philosophers

Solution to dining philosophers problem (part 1)


42 /45
Dining Philosophers

Solution to dining philosophers problem (part 2)


43 /45
Summary
• Inter-process communications
– Race conditions
• Methods of achieving mutual exclusion
• Semaphores
– Adaptive semaphore
• Monitor function/process
• Dining philosophers

44 /45
Reading for next week
Tanenbaum section 2.5
In Deitel read Chapter 8

45 /45
