
Inter Process Communication & Synchronization
Cooperating Processes

• An independent process cannot affect or be
affected by the execution of another process.
• A cooperating process can affect or be affected
by the execution of another process.
• Advantages of process cooperation:
– Information sharing
– Computation speed-up
– Modularity
– Convenience
Producer-Consumer Problem
• Paradigm for cooperating processes; producer
process produces information that is
consumed by a consumer process.
– unbounded-buffer places no practical limit on the
size of the buffer.
– bounded-buffer assumes that there is a fixed
buffer size
Inter process Communication (IPC)
• Provides a mechanism to allow processes to
communicate and to synchronize their actions.
• Message system - processes communicate with each
other without resorting to shared variables.
• IPC facility provides two operations:
– send(message) - messages can be of either fixed or variable
size.
– receive(message)
• If P and Q wish to communicate, they need to:
– establish a communication link between them
– exchange messages via send/receive
Implementation questions:
• How are links established?
• Can a link be associated with more than two
processes?
• How many links can there be between every
pair of communicating processes?
• What is the capacity of a link?
• Is the size of a message that the link can
accommodate fixed or variable?
• Is a link unidirectional or bidirectional?
Direct Communication
• Processes must name each other explicitly:
– send(P, message) - send a message to process P
– receive(Q, message) - receive a message from
process Q
• Properties of communication link
– Links are established automatically.
– A link is associated with exactly one pair of
communicating processes.
– Between each pair there exists exactly one link.
– The link may be unidirectional, but is usually
bidirectional.
Indirect Communication
• Messages are directed and received from mail boxes
(also referred to as ports).
– Each mailbox has a unique id.
– Processes can communicate only if they share a mailbox.
• Properties of communication link
– Link established only if the two processes share a mailbox
in common.
– A link may be associated with many processes.
– Each pair of processes may share several communication
links.
– Link may be unidirectional or bidirectional
Cont’d
• Operations
– create a new mailbox
– send and receive messages through mailbox
– destroy a mailbox
• Mailbox sharing
– P1 , P2 , and P3 share mailbox A.
– P1 sends; P2 and P3 receive.
– Who gets the message?
• Solutions
– Allow a link to be associated with at most two processes.
– Allow only one process at a time to execute a receive
operation.
– Allow the system to select arbitrarily the receiver. Sender is
notified who the receiver was.
Buffering & exceptions
• Buffering - queue of messages attached to the link;
implemented in one of three ways.
– Zero capacity - 0 messages. Sender must wait for receiver
(rendezvous).
– Bounded capacity - finite length of n messages. Sender must
wait if link full.
– Unbounded capacity - infinite length. Sender never waits.
• Exception Conditions - error recovery
– Process terminates
– Lost messages
– Scrambled Messages
Pipes
• A pipe is a simple method for communicating
between two processes

• As far as the processes are concerned, the pipe
appears to be just like a file.
• When A performs a write, it is buffered in the pipe.
• When B reads then it reads from the pipe,
blocking if there is no input
Cont’d
• In UNIX (and DOS) the output of one process can be piped
into another using the '|' character, e.g.
cat classlist | sort | more
• The cat command prints the contents of the file
'classlist'; this is piped into the sort command, which
sorts the list. Finally, the sorted list is sent to the more
command, which prints it one screenful at a time.
• Pipes may be implemented using shared memory
(UNIX) or even with temporary files (DOS).
Process Synchronization

• Background
• The Critical-Section Problem
• Synchronization Hardware
• Semaphores
• Classical Problems of Synchronization
• Critical Regions
• Monitors
• Atomic Transactions
Background
• Concurrent access to shared data may result in
data inconsistency.
• Maintaining data consistency requires
mechanisms to ensure the orderly execution
of cooperating processes.
• Suppose that we modify the producer-
consumer code by adding a variable counter,
initialized to 0 and incremented each time a
new item is added to the buffer.
• The new scheme is illustrated by the following:
Cont’d
• Shared data
typedef .... item;
item buffer[N];
int in, out, counter;
in = 0;
out = 0;
counter = 0;
Cont’d
• Producer process
while(true) {
...
produce an item in nextp
...
while(counter == N)
no-op;
buffer[in] = nextp;
in = (in+1)%N;
counter = counter + 1;
}
Cont’d
• Consumer process
while(true) {
while(counter == 0)
no-op;
nextc = buffer[out];
out = (out+1)%N;
counter = counter - 1;
...
consume the item in nextc
...
}
• The statements:
counter = counter + 1;
counter = counter - 1;
• must be executed atomically.
The Critical-Section Problem

• n processes all competing to use some shared data
• Each process has a section of code, called its critical
section, in which the shared data is accessed.
• Problem - ensure that when one process is executing in
its critical section, no other process is allowed to execute
in its critical section.
• Structure of process Pi
while(true) {
entry section
critical section
exit section
remainder section
}
Cont’d
• A solution to the critical-section problem must satisfy the
following three requirements:
1. Mutual Exclusion. If process Pi is executing in its critical
section, then no other processes can be executing in their
critical sections.
2. Progress. If no process is executing in its critical section and
there are some processes that wish to enter their critical
section, then the selection of the processes that will enter the
critical section next cannot be postponed indefinitely.
3. Bounded Waiting. A bound must exist on the number of
times that other processes are allowed to enter their critical
sections after a process has made a request to enter its critical
section and before that request is granted.
• Assumption that each process is executing at a nonzero
speed.
• No assumption concerning relative speed of the n processes.
Cont’d
• Initial attempts to solve the problem.
• Only 2 processes, P0 and P1
• General structure of process Pi (other process Pj )
while(true) {
entry section
critical section
exit section
remainder section
}
• Processes may share some common variables to
synchronize their actions
Algorithm 1
• Shared variables:
int turn;
initially turn = 0
turn = i -> Pi can enter its critical section
• Process Pi
while(true) {
while(turn!=i) no-op;
critical section
turn = j;
remainder section
}
• Satisfies mutual exclusion, but not progress.
Algorithm 2
• Shared variables
bool flag[2];
initially flag[0] = flag[1] = false.
flag[i] = true -> Pi ready to enter its critical section
• Process Pi
while(true) {
flag[i] = true;
while(flag[j]) no-op;
critical section
flag[i] = false;
remainder section
}
• Does not satisfy progress because:
• If the two processes set their flags to true at the same time, then
they will both wait forever.
Algorithm 3

• Combines the shared variables of algorithms 1 and 2.
• Process Pi
while(true) {
flag[i] = true;
turn = j;
while (flag[j] && turn==j) no-op;
critical section
flag[i] = false;
remainder section
}
• Meets all three requirements; solves the critical section
problem for two processes
Bakery Algorithm - Critical section for n
processes
• Before entering its critical section, process
receives a number. Holder of the smallest
number enters the critical section.
• If processes Pi and Pj receive the same
number, if i < j , then Pi is served first; else Pj is
served first.
• The numbering scheme always generates
numbers in increasing order of enumeration.
• Example: 1,2,3,3,3,3,4,5...
Bakery Algorithm
• Shared data –
bool choosing[n];
int number[n];
• initially:
for(i=0;i<n;i++) {
choosing[i] = false;
number[i] = 0;
}
while(true) {
choosing[i] = true;
max=0;
for(j=0;j<n;j++)
if(max<number[j]) max = number[j];
number[i] = max + 1;
choosing[i] = false;
for (j = 0; j < n; j++) {
while (choosing[j]);
while (number[j] != 0 &&
(number[j] < number[i] ||
(number[j] == number[i] && j < i)));
}
critical section
number[i] = 0;
remainder section
}
Synchronization Hardware
• Test and modify the content of a word atomically.
bool TestandSet (bool *target) {
bool t=*target; /* all this is */
*target=true; /* done by one */
return t; /* machine instruction */
}
void Exchg(bool *a,bool *b) {
bool temp=*a; /* all this is */
*a=*b; /* done by one */
*b=temp; /* machine instruction */
}
Cont’d
• Mutual exclusion algorithm
– Shared data:
bool lock=false;
Process Pi
while(true) {
while(TestandSet(&lock)) no-op;
critical section
lock = false;
remainder section
}
or

• Shared data:
bool lock=false;
Process Pi
bool key;
while(true) {
key = true;
do {
Exchg(&lock,&key);
}while(key);
critical section
lock = false;
remainder section
}
Semaphore - synchronization tool that does
not require busy waiting
• Semaphore S
• integer variable introduced by Dijkstra can only be
accessed via two indivisible (atomic) operations
wait(S): S = S - 1; if S < 0 then block(S)
signal(S): S = S + 1; if S <= 0 then wakeup(S)
• sometimes wait and signal are called down and up or P
and V
• block(S) - results in suspension of the process invoking it
(sometimes called sleep).
• wakeup(S) - results in resumption of exactly one process
that has invoked block(S)
Cont’d
• Example: critical section for n processes
• Shared variables
semaphore mutex=1;
Process Pi
while(true) {
wait(mutex);
critical section
signal(mutex);
remainder section }
• Implementation of the wait and signal operations so that they must execute
atomically.
• Uniprocessor:
– Disable interrupts around the code segment implementing the wait and signal
operations.
• Multiprocessor:
– If no special hardware provided, use a correct software solution to the critical-
section problem, where the critical sections consist of the wait and signal
operations.
– Use special hardware if available, i.e., TestandSet:
Implementation of wait(S) operation with the TestandSet instruction:

• Shared variables
boolean lock = false;
• Code for wait(S)
while (TestandSet(&lock));
S = S - 1;
if (S < 0) {
lock = false;
block(S);
} else
lock = false;
• Code for signal(S)
while (TestandSet(&lock));
S = S + 1;
if (S <=0)
wakeup(S);
lock = false;
• Race condition exists!
Cont’d
• Better Code for wait(S)
while (TestandSet(&lock1));
while (TestandSet(&lock));
S = S - 1;
if (S < 0) {
lock = false;
block(S);
} else
lock = false;
lock1 = false;
lock1 serialises the waits.
• Semaphore can be used as general synchronization tool:
• Execute B in Pj only after A executed in Pi
• Use semaphore flag initialized to 0
• Code:
Pi Pj
-- --
. .
. .
. .
A wait(flag)
signal(flag) B
Cont’d

• Deadlock - two or more processes are waiting indefinitely for an
event that can be caused by only one of the waiting processes.
• Let S and Q be two semaphores initialized to 1
P0 P1
wait(S) wait(Q)
wait(Q) wait(S)
. .
. .
. .
signal(S) signal(Q)
signal(Q) signal(S)
• Starvation - indefinite blocking
• A process may never be removed from the semaphore queue in
which it is suspended.
Two types of semaphores:
• Counting semaphore - integer value can range
over an unrestricted domain.
• Binary semaphore - integer value can range only
between 0 and 1; can be simpler to implement.
• Classical Problems of Synchronization
• Bounded-Buffer Problem
• Readers and Writers Problem
• Dining-Philosophers Problem
Bounded-Buffer Problem
• Shared data
typedef .... item;
item buffer[n];
semaphore full=0, empty=n, mutex=1;
item nextp, nextc;
• Producer process
while(true) {
...
produce an item in nextp
...
wait(empty); /* wait while buffer is full */
wait(mutex);
...
add nextp to buffer
...
signal(mutex);
signal(full); /* one more in buffer */
}
Cont’d

• Consumer process
while(true) {
wait(full); /* wait while no data */
wait(mutex);
...
remove an item from buffer to nextc
...
signal(mutex);
signal(empty); /* one less in buffer */
...
consume the item in nextc
...
}
Readers-Writers Problem

• A number of processes, some reading data, some writing.
• Any number of processes can read at the same time, but if a
writer is writing then no other process must be able to
access the data.
• Shared data
semaphore mutex=1, wrt=1;
int readcount=0;
• Writer process
wait(wrt);
...
writing is performed
...
signal(wrt);
Cont’d
• Reader process
wait(mutex);
readcount = readcount + 1;
if (readcount == 1) wait(wrt);
signal(mutex);
...
reading is performed
...
wait(mutex);
readcount = readcount - 1;
if (readcount == 0) signal(wrt);
signal(mutex);
Dining-Philosophers Problem
A problem posed by Dijkstra in 1965
Possible solution to the problem:

• void philosopher(int no) {
while(1) {
...think....
take_fork(no); /* get the left fork */
take_fork((no+1) % N); /* get the right fork */
....eat.....
put_fork(no); /* put left fork down */
put_fork((no+1) % N); /* put down right fork */
}
}
• "take_fork" waits until the specified fork is available and then
grabs it.
• Unfortunately this solution will not work: if all the philosophers
grab their left fork at the same time, each waits forever for a
right fork (deadlock).
Better solution

• Shared data
int p[N]; /* status of the philosophers */
semaphore s[N]=0; /* semaphore for each philosopher */
semaphore mutex=1; /* semaphore for mutual exclusion */
• Code
#define LEFT(n) (n+N-1)%N /* Macros to give left */
#define RIGHT(n) (n+1)%N /* and right around the table */
void test(int no) { /* can philosopher 'no' eat */
if ((p[no] == HUNGRY) &&
(p[LEFT(no)] != EATING) &&
(p[RIGHT(no)] != EATING) ) {
p[no]=EATING;
signal(s[no]); /* if so then eat */
}
}
Cont’d
void take_forks(int no) { /* get both forks */
wait(mutex); /* only one at a time here please */
p[no]=HUNGRY; /* I'm Hungry */
test(no); /* can I eat? */
signal(mutex);
wait(s[no]); /* wait until I can */ }
void put_forks(int no) { /* put the forks down */
wait(mutex); /* only one at a time here */
p[no]=THINKING; /* let me think */
test(LEFT(no)); /* see if my neighbours can now eat */
test(RIGHT(no));
signal(mutex); }
void philosopher(int no) {
while(1) {
...think....
take_forks(no); /* get the forks */
....eat.....
put_forks(no); /* put forks down */
}
}
High-level synchronization constructs
Monitors

• High-level synchronization construct that allows the safe sharing of an
abstract data type among concurrent processes (Hoare and Brinch Hansen,
1974).
• A collection of procedures, variables and data structures. Only one process
can be active in a monitor at any instant.
• monitor example
integer i;
condition c;
procedure producer(x);
begin
.
end
procedure consumer(x);
begin
.
end
end monitor;
Cont’d
• To allow a process to wait within the monitor, a condition
variable must be declared, as:
condition x;
• Condition variables can only be used with the operations wait
and signal.
– The operation
wait(x);
• means that the process invoking this operation is suspended
until another process invokes
signal(x);
– The signal(x) operation resumes exactly one suspended process. If no
process is suspended, then the signal operation has no effect.
The producer consumer problem can be solved as follows using monitors:

• monitor ProducerConsumer
condition full, empty;
integer count;
procedure enter;
begin
if count = N then wait(full);
...enter item...
count := count + 1;
if count = 1 then signal(empty)
end;
procedure remove;
begin
if count = 0 then wait(empty);
...remove item...
count := count - 1;
if count = N - 1 then signal(full)
end;
count := 0;
end monitor;
Cont’d

• procedure producer;
begin
while true do
begin
...produce item...
ProducerConsumer.enter
end
end;
procedure consumer;
begin
while true do
begin
ProducerConsumer.remove;
...consume item...
end
end;
The dining philosophers problem can also be solved easily

• monitor dining-philosophers
status state[n];
condition self[n];
procedure pickup (i:integer);
begin
state[i] := hungry;
test (i);
if state[i] <> eating then wait(self[i]);
end;
procedure putdown (i:integer);
begin
state[i] := thinking;
test ((i+4) mod 5);
test ((i+1) mod 5);
end;
Cont’d
• procedure test (k:integer);
begin
if state[(k+4) mod 5] <> eating
and state[k] = hungry
and state[(k+1) mod 5] <> eating
then begin
state[k] := eating;
signal(self[k]);
end;
end;
begin
for i := 0 to 4
do state[i] := thinking;
end
end monitor
Cont’d
• procedure philosopher(no:integer);
begin
while true do
begin
...think....
pickup(no);
....eat.....
putdown(no)
end
end
• Few languages directly support constructs such as monitors. One that
does is Java: every object has an implicit monitor, entered through
synchronized methods, with wait/notify as the condition operations.
Here is a Java class that can be used to solve the producer-consumer
problem.
Cont’d
class CubbyHole {
private int seq;
private boolean available = false;
public synchronized int get() {
while (available == false) {
try {
wait();
} catch (InterruptedException e) {} }
available = false;
notify();
return seq; }
public synchronized void put(int value) {
while (available == true) {
try {
wait();
} catch (InterruptedException e) {} }
seq = value;
available = true;
notify(); } }
Monitor implementation using semaphores

• What happens when a monitor signals a condition variable?
• A process waiting on the variable can't be active at the same
time as the signaling process; therefore, two choices:
1. Signaling process waits until the waiting process either leaves
the monitor or waits for another condition.
2. Waiting process waits until the signaling process either
leaves the monitor or waits for another condition.
• Variables
semaphore mutex=1, next=0;
int next-count=0;
• 'mutex' provides mutual exclusion inside the monitor.
• 'next' is used to suspend signaling processes.
• 'next-count' gives the number of processes suspended on
'next'.
Cont’d
• Each external procedure F will be replaced by
sem_wait(mutex);
...
body of F;
...
if (next-count > 0)
sem_signal(next);
else sem_signal(mutex);
• Mutual exclusion within a monitor is ensured by 'mutex'. For each condition
variable x, we have:
semaphore x-sem=0;
int x-count=0;
• The operation wait(x) can be implemented as:
x-count = x-count + 1;
if (next-count > 0)
sem_signal(next);
else sem_signal(mutex);
sem_wait(x-sem);
x-count = x-count - 1;
Cont’d
• The operation signal(x) can be implemented as:
if (x-count > 0) {
next-count = next-count + 1;
sem_signal(x-sem);
sem_wait(next);
next-count = next-count - 1; }
• Conditional-wait construct
cond_wait(x,c);
• 'c' is an integer expression evaluated when the wait operation is
executed.
• The value of c (priority number) is stored with the name of the process
that is suspended. When signal(x) is executed, the process with smallest
associated priority number is resumed next.
• Must check two conditions to establish the correctness of this system:
– User processes must always make their calls on the monitor in a correct
sequence.
– Must ensure that an uncooperative process does not ignore the mutual
exclusion gateway provided by the monitor, and try to access the shared
resource directly, without using the access protocols.
Atomic Transactions
• Transaction - program unit that must be executed
atomically; that is, either all the operations
associated with it are executed to completion, or
none are performed.
• Must preserve atomicity despite possibility of
failure.
• We are concerned here with ensuring transaction
atomicity in an environment where failures result
in the loss of information on volatile storage.
Log-Based Recovery
• Write-ahead log - all updates are recorded on the log,
which is kept in stable storage; log has following fields:
– transaction name
– data item name, old value, new value
• The log has a record of <Ti starts>, and either <Ti
commits> if the transactions commits, or <Ti aborts> if
the transaction aborts.
• Recovery algorithm uses two procedures:
– undo(Ti) - restores value of all data updated by transaction Ti to
the old values. It is invoked if the log contains record <Ti
starts>, but not <Ti commits>.
– redo(Ti ) - sets value of all data updated by transaction Ti to
the new values. It is invoked if the log contains both <Ti starts>
and <Ti commits>.
Checkpoints - reduce recovery overhead
1. Output all log records currently residing in volatile storage
onto stable storage.
2. Output all modified data residing in volatile storage to
stable storage.
3. Output log record <checkpoint> onto stable storage.
• Recovery routine examines log to determine the most
recent transaction Ti that started executing before the
most recent checkpoint took place.
– Search log backward for first <checkpoint> record.
– Find subsequent <Ti starts> record.
• redo and undo operations need to be applied to only
transaction Ti and all transactions Tj that started executing
after transaction Ti .
Concurrent Atomic Transactions
• Serial schedule - the transactions are executed sequentially in some order.
• Example of a serial schedule in which T0 is followed by T1 :
T0 | T1
---------------|----------------
read(A) |
write(A) |
read(B) |
write(B) |
| read(A)
| write(A)
| read(B)
| write(B)
• Conflicting operations - Oi and Oj conflict if they access the same
data item, and at least one of these operations is a write operation.
Cont’d
• Conflict serialisable schedule - schedule that can be transformed
into a serial schedule by a series of swaps of non-conflicting
operations.
• Example of a concurrent serialisable schedule:
T0 | T1
---------------|----------------
read(A) |
write(A) |
| read(A)
| write(A)
read(B) |
write(B) |
| read(B)
| write(B)
Cont’d
• Locking protocol governs how locks are acquired and
released; data item can be locked in following modes:
– Shared: If Ti has obtained a shared-mode lock on data item Q,
then Ti can read this item, but it cannot write Q.
– Exclusive: If Ti has obtained an exclusive mode lock on data item
Q, then Ti can both read and write Q.
• Two-phase locking protocol
– Growing phase: A transaction may obtain locks, but may not
release any lock.
– Shrinking phase: A transaction may release locks, but may not
obtain any new locks.
• The two-phase locking protocol ensures conflict
serializability, but does not ensure freedom from deadlock.
Cont’d

• Timestamp-ordering scheme - transaction ordering protocol for
determining serialisability order.
– With each transaction Ti in the system, associate a
unique fixed timestamp, denoted by TS(Ti ).
– If Ti has been assigned timestamp TS(Ti ), and a new
transaction Tj enters the system, then TS(Ti ) < TS(Tj ).
• Implement by assigning two timestamp values to
each data item Q .
– W-timestamp(Q) - denotes largest timestamp of any
transaction that executed write(Q) successfully.
– R-timestamp(Q) - denotes largest timestamp of any
transaction that executed read(Q) successfully.
Cont’d
• Example of a schedule possible under the time stamp protocol:
T0 | T1
---------|----------
read(B) |
| read(B)
| write(B)
read(A) |
| read(A)
| write(A)
• There are schedules that are possible under the two-phase
locking protocol but are not possible under the timestamp
protocol, and vice versa.
• The timestamp-ordering protocol ensures conflict
serializability; conflicting operations are processed in
timestamp order.
