
Synchronization

Chapter 5
Process Synchronization
 Cooperating processes may share data via
 shared address space (heap) by using threads
 shared memory objects
 files

 What can happen if processes try to access shared data concurrently?
 Concurrent access to shared data may result in data inconsistency
 Maintaining data consistency requires mechanisms to ensure the orderly execution of cooperating processes
Producer – Consumer Problem
 Producer-consumer problem is a common paradigm for cooperating processes
 Producer process produces information that is used by the consumer process
 One solution is to use shared memory for the two processes to
communicate
 Useful to have a buffer that can be filled by the producer and emptied by
the consumer if they are to run concurrently
 Must be synchronized so that the consumer does not try to consume an
item that has not yet been produced. Consumer has to wait for the producer
 An unbounded buffer places no particular limit on the size of the buffer
 A bounded buffer assumes a fixed buffer size, plus a shared variable count that tracks the number of items in the buffer

[Figure: a producer and a consumer communicating through a shared buffer holding at most BUFFER_SIZE items]
Producer and Consumer Code

Producer:

while (true) {
    /* produce an item */
    nextProduced = ...;
    while (count == BUFFER_SIZE)
        ;  // do nothing
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    count++;
}

Consumer:

while (true) {
    while (count == 0)
        ;  // do nothing
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    count--;
    /* consume item nextConsumed */
}
Race Condition

 Race condition
 The situation where several processes access and manipulate shared data concurrently
 The final value of the shared data depends on which process finishes last

 To prevent race conditions, concurrent processes must be synchronized
Race Condition
 count++ could be implemented as

register1 = count
register1 = register1 + 1
count = register1
 count-- could be implemented as

register2 = count
register2 = register2 - 1
count = register2

 Consider this execution interleaving with “count = 5” initially:

S0: producer execute register1 = count           {register1 = 5}
S1: producer execute register1 = register1 + 1   {register1 = 6}
S2: consumer execute register2 = count           {register2 = 5}
S3: consumer execute register2 = register2 – 1   {register2 = 4}
S4: producer execute count = register1           {count = 6}
S5: consumer execute count = register2           {count = 4}
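The interleaving above can be replayed deterministically in ordinary code. A minimal sketch in Python (the variable names follow the slide; the "threads" are simulated by writing the six steps in this exact order):

```python
# Replay the six-step interleaving with count = 5 initially.
count = 5

register1 = count           # S0: producer reads count        {register1 = 5}
register1 = register1 + 1   # S1: producer increments         {register1 = 6}
register2 = count           # S2: consumer reads count        {register2 = 5}
register2 = register2 - 1   # S3: consumer decrements         {register2 = 4}
count = register1           # S4: producer writes back        {count = 6}
count = register2           # S5: consumer writes back        {count = 4}

print(count)  # 4 -- one increment was lost; the correct value is 5
```

The consumer's stale write overwrites the producer's update, which is exactly the lost-update symptom of a race condition.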
The Critical-Section Problem

 n processes all competing to use some shared data

 Each process has a code segment, called critical section, in which the shared data is accessed.

 Problem – ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section.
Programs and Critical Sections
 The part of the program (process) that is accessing and changing shared data is
called its critical section.
 We should not allow more than one process to be in their critical regions where
they are manipulating the same shared data.
 The general way to do that is:

The general structure of a program:

do {
    critical section
    remainder section
} while (TRUE);

With protection added:

do {
    entry section
    critical section
    exit section
    remainder section
} while (TRUE);

 The entry section will allow only one process at a time to enter and execute critical-section code.
Solution to Critical-Section Problem
1. Mutual Exclusion - If process Pi is executing in its critical section, then no
other processes can be executing in their critical sections

 2. Progress - If no process is executing in its critical section and there exist some processes that wish to enter their critical sections, then the selection of the process that will enter the critical section next cannot be postponed indefinitely // no deadlock

 3. Bounded Waiting - A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted // no starvation of a process
 Assume that each process executes at a nonzero speed
 No assumption concerning the relative speed of the n processes
Applications and Kernel
 Multiprocess applications sharing a file or shared memory segment may face
critical section problems.
 Multithreaded applications sharing global variables may also face critical
section problems.

 Similarly, the kernel itself may face the critical-section problem: it is also a program, and it may have critical sections.
Kernel Critical Sections
 While the kernel is executing a function x(), a hardware interrupt may arrive and the interrupt handler h() may run. We must make sure that h() and x() do not access the same kernel global variable; otherwise a race condition may happen.

 While a process is running in user mode, it may invoke a system call s(). The kernel then starts running function s(), and the CPU executes in kernel mode. We say the process is now running in kernel mode (even though it is kernel code that is running).

 While a process X is running in kernel mode, it may or may not be preempted. In preemptive kernels, the process running in kernel mode can be preempted and a new process may start running. In non-preemptive kernels, the process running in kernel mode is not preempted unless it blocks or returns to user mode.
Kernel Critical Sections
 In a preemptive kernel, a process X running in kernel mode may be suspended (preempted) at an arbitrary (unsafe) time. It may be in the middle of updating a kernel variable or data structure at that moment. Then a new process Y may run and also invoke a system call; process Y then starts running in kernel mode and may try to update the same kernel variable or data structure (i.e., execute the kernel's critical-section code). We can have a race condition if the kernel is not synchronized.

 Therefore, we need to solve the synchronization and critical-section problem for the kernel itself as well. The same problem appears there too.
Peterson’s Solution
 Good algorithmic description of solving the problem
 Two process solution
 Assume that the load and store machine-language instructions are atomic;
that is, cannot be interrupted
 The two processes share two variables:
 int turn;
 Boolean flag[2]

 The variable turn indicates whose turn it is to enter the critical section.
 The flag array is used to indicate if a process is ready to enter the critical
section. flag[i] = true implies that process Pi is ready!
Algorithm for Process Pi
do {
    flag[i] = TRUE;                      // entry section
    turn = j;
    while (flag[j] && turn == j) ;
    // critical section
    flag[i] = FALSE;                     // exit section
    // remainder section
} while (1);

 Provable that the three critical-section requirements are met:
1. Mutual exclusion is preserved:
   Pi enters its critical section only if either flag[j] == false or turn == i
2. Progress requirement is satisfied
3. Bounded-waiting requirement is met
Two processes executing concurrently

Process Pi (i = 0, j = 1):

do {
    flag[i] = TRUE;
    turn = j;
    while (flag[j] && turn == j) ;
    // critical section
    flag[i] = FALSE;
    // remainder section
} while (1);

Process Pj (i = 1, j = 0):

do {
    flag[j] = TRUE;
    turn = i;
    while (flag[i] && turn == i) ;
    // critical section
    flag[j] = FALSE;
    // remainder section
} while (1);

Shared variables: flag[], turn
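Peterson's algorithm can be exercised with two Python threads incrementing a shared counter. This is a sketch, not a portable implementation: it leans on CPython's GIL to make individual loads and stores effectively atomic and sequentially consistent, which real hardware does not guarantee (the switch-interval call just forces frequent thread switches so the test runs quickly):

```python
import sys
import threading

sys.setswitchinterval(1e-4)   # switch threads often so the busy waits resolve fast

flag = [False, False]   # flag[i]: process i wants to enter its critical section
turn = 0                # whose turn it is to defer
counter = 0             # shared data protected by the critical section
N = 2000                # iterations per thread

def worker(i):
    global turn, counter
    j = 1 - i
    for _ in range(N):
        flag[i] = True                  # entry section
        turn = j
        while flag[j] and turn == j:
            pass                        # busy wait
        counter += 1                    # critical section (not atomic on its own)
        flag[i] = False                 # exit section

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 2 * N = 4000: mutual exclusion means no increments are lost
```

Without the entry/exit sections, `counter += 1` (a read, an add, and a write) could interleave exactly as in the race-condition slide and lose updates.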
Bakery Algorithm
 Critical Section for N processes
 Before entering its critical section, a process receives a number (like in a
bakery). Holder of the smallest number enters the critical section.
 The numbering scheme here always generates numbers in increasing order
of enumeration; i.e., 1,2,3,3,3,3,4,5...
 If processes Pi and Pj receive the same number, if i < j, then Pi is served
first; else Pj is served first (PID assumed unique).
 Choosing a number
 max(a0, …, an-1) is a number k such that k ≥ ai for i = 0, …, n – 1
 Notation for lexicographical order (ticket #, PID #)
 (a,b) < (c,d) if a < c or if a == c and b < d
 Shared Data:
 Boolean choosing[n] initialized to FALSE
 Integer number[n] initialized to 0
Bakery Algorithm
do {
    choosing[i] = TRUE;
    number[i] = max(number[0], …, number[n – 1]) + 1;
    choosing[i] = FALSE;

    for (j = 0; j < n; j++) {
        while (choosing[j]) ;
        while ((number[j] != 0) && ((number[j], j) < (number[i], i))) ;
    }

    critical section

    number[i] = 0;

    remainder section
} while (TRUE);
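The ticket-drawing and tie-breaking rules can be sketched on their own in Python; the helper names `take_ticket` and `goes_before` are illustrative, not part of the algorithm's standard presentation:

```python
def take_ticket(number):
    # A new ticket is max(number[0], ..., number[n-1]) + 1
    return max(number) + 1

def goes_before(i, j, number):
    # Lexicographic order on (ticket, PID): (a,b) < (c,d) iff a < c, or a == c and b < d.
    # Python compares tuples exactly this way.
    return (number[i], i) < (number[j], j)

number = [0, 0, 0]
number[1] = take_ticket(number)   # P1 draws ticket 1
number[2] = take_ticket(number)   # P2 draws ticket 2
number[0] = 2                     # suppose P0 raced P2 and drew the same ticket 2

print(goes_before(1, 2, number))  # True: the smaller ticket wins
print(goes_before(0, 2, number))  # True: equal tickets, so the smaller PID (0 < 2) wins
```

This shows why two processes can safely hold the same ticket number: the PID breaks the tie deterministically.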
Synchronization Terminology
 Atomic Operation: an indivisible operation that either runs to completion or not at all. It cannot be stopped in the middle, nor can its state be altered by someone else while it runs. It is a fundamental building block: without atomic operations, threads have no way to work together.

 Synchronization: use of atomic operations to ensure cooperation


between processes/threads

 Lock: mechanism to prevent another process from doing something


 Lock before entering a critical section and before accessing shared data
 Unlock when leaving critical section or when access to shared data is
complete
 Wait if locked
 Important idea: all synchronization involves waiting
Desirable properties of Synchronization
 Fair: if several processes waiting, let each in eventually

 Efficient: don’t use up substantial amount of resources when waiting


 e.g. no busy waiting

 Simple: should be easy to use


 e.g. just bracket the critical sections
Synchronization: “Too Much Milk” Problem
 What kind of knowledge and mechanism do we need to get independent
processes to communicate and get a consistent view of the world (computer
state)?
 Example: Too Much Milk
Too Much Milk: Correctness Properties
 Need to be careful about the correctness of concurrent programs, since they are non-deterministic
 Always write down the intended behavior first
 The impulse is to start coding first, then pull your hair out when it doesn't work
 Instead, think first, then code

 What are the correctness properties for the “Too much milk” problem???
 Only one person buys milk at a time
 Someone buys milk if you need it

 Restrict ourselves to use only atomic load and store operations as building blocks
Too Much Milk: Solution #1
 Leave a note before buying (a version of “lock”)
 Remove note after buying (a version of “unlock”)
 Don’t buy any milk if there is note (wait)

 Suppose a computer tries this (remember, only memory reads/writes are atomic). Thread A and Thread B both run:

if (noMilk && noNote) {
    leave note;
    buy milk;
    remove note;
}

 Does it work?
 No. Still too much milk, but only occasionally!
 A thread can be context-switched after checking milk and note but before buying milk!
 This solution makes the problem worse, since it fails intermittently
Too Much Milk: Solution #2
 How about using labeled notes so we can leave note before checking the
milk?

Thread A:

leave note A;
if (noNote B) {
    if (noMilk) {
        buy milk;
    }
}
remove note A;

Thread B:

leave note B;
if (noNote A) {
    if (noMilk) {
        buy milk;
    }
}
remove note B;

 Does this work?


 Possible for neither thread to buy milk
 Context switches at exactly the wrong times can lead each to think that the other is going to buy
 Extremely unlikely that this would happen, but it will happen at the worst possible time
 This kind of lockup is called “starvation”!
Too Much Milk Solution #3
Thread A:

leave note A;
X: while (note B) {
    do nothing;
}
if (noMilk) {
    buy milk;
}
remove note A;

Thread B:

leave note B;
Y: if (noNote A) {
    if (noMilk) {
        buy milk;
    }
}
remove note B;
 Does this work?
 Yes. Both can guarantee that:
 It is safe to buy, or
 Other will buy, ok to quit
 At point X, either there is a note B or not:
 if no note B, safe for A to buy since B has either not started or quit
 otherwise A waits until there is no longer a note B, and either finds milk
that B has bought or buys it if needed
 At point Y either there is a note A or not:
 if no note A, safe for B to check & buy milk (Thread A not started yet)
 Otherwise, A is either checking & buying milk or waiting for B to quit, so
B quits by removing note B.
Is Solution #3 a good solution?
 It is too complicated – it’s hard to convince ourselves this solution
works.

 It is asymmetrical – thread A and B are different. Thus adding more


threads would require different code for each new thread and
modifications to existing threads.

 A is busy waiting – A is consuming CPU resources without doing any useful work.

 The solution relies on loads and stores to be atomic.


Language Support for Synchronization
Have your programming language provide atomic routines for
synchronization

 Locks: one process holds a lock at a time, does its critical section,
releases lock

 Semaphores: more general version of locks

 Monitors: connects shared data to synchronization primitives

All of these require some hardware support, and waiting


Locks
 Locks provide mutual exclusion to shared data with two atomic routines:
 Lock.acquire() - wait until the lock is free, then grab it
 Lock.release() - unlock, and wake up any thread waiting in acquire()

 Rules of using a lock:


 Always acquire the lock before accessing shared data
 Always release the lock after finishing with shared data
 Lock is initially free
 Do not lock again if already locked
 Do not unlock if not locked by you
 Do not spend large amounts of time in critical section
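The rules above map directly onto Python's built-in `threading.Lock`; a minimal sketch (the bank-balance variable is just an illustrative piece of shared data):

```python
import threading

lock = threading.Lock()   # the lock is initially free
balance = 0               # shared data

def deposit(times):
    global balance
    for _ in range(times):
        lock.acquire()    # always acquire the lock before accessing shared data
        balance += 1      # critical section: kept short
        lock.release()    # always release the lock after finishing with shared data

threads = [threading.Thread(target=deposit, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balance)  # 40000: with the lock, no increments are lost
```

In idiomatic Python the acquire/release pair is usually written `with lock:`, which also guarantees the release on an exception, honoring the "always release" rule.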
Too Much Milk: Solution #4
 Implementing “Too Much Milk” with locks

Thread A and Thread B both run:

Lock.Acquire();
if (noMilk)
    buy milk;
Lock.Release();

 The solution is clean and symmetric

 How do we make Lock.Acquire and Lock.Release atomic?
 If two threads are waiting for the lock and both see it's free, only one succeeds in grabbing the lock

 Once again, the section of code between Lock.Acquire and Lock.Release is called a “critical section”
Semaphores

 A semaphore is a variable or abstract data type that provides a simple but useful abstraction for controlling access, by multiple processes, to a common resource in a parallel-programming or multi-user environment.
Semaphores
 A synchronization tool that does not require busy waiting (a generalized lock).

 Semaphore S: integer variable


 shared, and can be a kernel variable

 Two standard operations modify S: Wait() and Signal()


 Semaphores can only be accessed via these two atomic operations
 They can be implemented as system calls by kernel. Kernel makes
sure they are indivisible.

 Less complicated entry and exit sections when semaphores are used

 Invented by Dijkstra in 1965.


Meaning (semantics) of Operations
wait (S):
if S positive
S-- and return
else
block/wait (until somebody wakes you up; then return)

signal(S):
if there is a process waiting
wake it up and return
else
S++ and return
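These semantics can be sketched in Python, with a condition variable standing in for the kernel's protection of the wait/signal bodies. The class name `Sem` is illustrative; this is one possible rendering of the semantics above, not the kernel's actual implementation:

```python
import threading

class Sem:
    """wait/signal with the semantics above: the value never drops below zero;
    wait blocks while the value is zero, signal wakes one sleeping waiter."""
    def __init__(self, value):
        self.value = value
        self.cond = threading.Condition()   # makes the wait/signal bodies atomic

    def wait(self):
        with self.cond:
            while self.value == 0:          # not positive: block until woken
                self.cond.wait()
            self.value -= 1                 # positive: S-- and return

    def signal(self):
        with self.cond:
            self.value += 1
            self.cond.notify()              # wake a waiting process, if any

s = Sem(1)
s.wait()                 # value was 1: returns immediately, value is now 0
done = []
t = threading.Thread(target=lambda: (s.wait(), done.append(True)))
t.start()                # this thread blocks (or would): value is 0
s.signal()               # wake it up
t.join()
print(s.value, done)     # 0 [True]
```

The `while` (rather than `if`) around `cond.wait()` is deliberate: a woken thread must re-check the value, since another thread may have consumed the signal first.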
Comments
 Wait body and signal body have to be executed atomically: one
process at a time. Hence the body of wait and signal are critical
sections to be protected by the kernel.

 Note that when Wait() causes the process to block, the operation is nearly finished (the wait-body critical section is done).
 That means another process can then execute the wait body or the signal body.
Semaphore as General Synchronization Tool
 Binary semaphore – integer value can range from 0 to 1; simpler to
implement
 Guarantees mutually exclusive access to a resource (only one process is in
the critical section at a time)
 It is initialized to free (value = 1)
 Also known as mutex locks

 Counting semaphore – integer value can range over an unrestricted domain


 Useful when multiple units of a resource are available
 The initial count to which the semaphore is initialized is usually the
number of resources
 A process can acquire access so long as at least one unit of the resource is
available
 Can be used for other synchronization problems; for example for resource
allocation
Semaphores: Usage
 Binary semaphores (mutexes) can be used to solve critical section problems.
 A semaphore variable (lets say S) can be shared by N processes, and initialized
to 1
 Each process is structured as follows:

do {
    Wait(S);      // wait until semaphore S is available
    < critical section >
    Signal(S);    // signal other processes that semaphore S is free
    < remainder section >
} while (TRUE);

 Each semaphore supports a queue of processes waiting to access the critical


section
 If a process executes Wait(S) and semaphore S is free (non-zero), it continues
executing. Otherwise the OS puts the process on the wait queue for semaphore S.

Binary Semaphores: Example
 Implementing “Too Much Milk” with semaphores:

Semaphore S;   // initialized to 1

Thread A and Thread B both run:

do {
    Wait(S);
    if (noMilk)
        buy milk;
    Signal(S);
} while (1);

 Semaphores can be used for three purposes:


 To ensure mutually exclusive execution of a critical section (as locks do)
 To control access to a shared pool of resources (using a counting
semaphore)
 To cause one thread to wait for a specific action to be signaled from another
thread.
Usage: other synchronization problems

P0:  …; S1; …
P1:  …; S2; …

Assume we definitely want to have S1 executed before S2.

Solution:

semaphore x = 0;   // initialized to 0

P0:  …; S1; signal(x); …
P1:  …; wait(x); S2; …
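The same ordering pattern in Python, with `Semaphore(0)` playing the role of x and `release`/`acquire` standing in for signal/wait (a sketch: P0 and P1 become threads, and S1/S2 become list appends so the order is observable):

```python
import threading

x = threading.Semaphore(0)   # initialized to 0
trace = []

def p0():
    trace.append("S1")       # S1
    x.release()              # signal(x)

def p1():
    x.acquire()              # wait(x): blocks until P0 has signaled
    trace.append("S2")       # S2 -- guaranteed to run after S1

t1 = threading.Thread(target=p1)
t0 = threading.Thread(target=p0)
t1.start()                   # start P1 first to show the ordering still holds
t0.start()
t0.join()
t1.join()
print(trace)  # ['S1', 'S2'] regardless of scheduling
```

Because the semaphore starts at 0, P1 cannot pass wait(x) until P0 executes signal(x), no matter which thread the scheduler runs first.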
Uses of Semaphore: synchronization

Buffer is an array of BUF_SIZE cells (at most BUF_SIZE items can be put)

Semaphore Full_Cells = 0;   // initialized to 0

Producer:

do {
    // produce item
    put item into buffer
    signal (Full_Cells);
} while (TRUE);

Consumer:

do {
    wait (Full_Cells);
    remove item from buffer
} while (TRUE);

The bodies of wait() and signal() are implemented in the kernel.
Consumer/Producer is Synchronized

[Figure: Full_Cells plotted over time, bounded between 0 and BUF_SIZE; the producer sleeps when Full_Cells reaches BUF_SIZE, and the consumer sleeps when it reaches 0]
Consumer/Producer is Synchronized

Ensured by the synchronization mechanisms:

Pt – Ct ≤ BUF_SIZE   (items produced minus items consumed can never exceed BUF_SIZE)
Pt – Ct ≥ 0          (the consumer can never get ahead of the producer)

[Figure: cumulative items produced (Pt) and consumed (Ct) over time; the consumed curve never rises above the produced curve, and the two stay within BUF_SIZE of each other]
Usage: Resource Allocation
 A library has 10 study rooms, to be used by one student at a time. At most 10
students can use the rooms concurrently. Additional students that need to use
the rooms need to wait until a room is free.
 Students must request a room from the front counter and return to the counter
when finished using a room. The clerk does not keep track of which room is
occupied or who is using it. When a student requests a room, the clerk
decreases this number. When a student releases a room, the clerk increases this
number. The front desk represents a semaphore, the rooms are the resources,
and the students represent processes. How can we code those processes?
 Solution:
One of the processes creates and initializes a semaphore to 10:

Semaphore x = 10;

Each process (student) is then coded in this manner:

wait (x);
… use one instance of the resource (a room) …
signal (x);
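The study-room scenario in Python: `threading.Semaphore(10)` is the front desk, and each thread is a student. This is a sketch; the occupancy counters are instrumentation added only to check the invariant, not part of the scheme:

```python
import threading
import time

rooms = threading.Semaphore(10)   # the front desk: 10 rooms available
counter_lock = threading.Lock()   # protects the instrumentation counters
occupancy = 0                     # students currently in a room
peak = 0                          # highest occupancy ever observed

def student():
    global occupancy, peak
    rooms.acquire()               # wait(x): take a room, or wait until one is free
    with counter_lock:
        occupancy += 1
        peak = max(peak, occupancy)
    time.sleep(0.01)              # ... use the room ...
    with counter_lock:
        occupancy -= 1
    rooms.release()               # signal(x): return the room to the front desk

threads = [threading.Thread(target=student) for _ in range(25)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak <= 10, occupancy)  # True 0: never more than 10 students in rooms at once
```

With 25 students and 10 rooms, the extra 15 simply block in `acquire()` until a room is returned; the semaphore's count is exactly the clerk's number.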
Exercise
 X and Y are shared semaphores, initialized to X = 0, Y = 1. The following three pseudo-coded threads are started. What is the output, and what are the values of X and Y after every thread completes?

Thread 1:
wait(X)
print “A”
signal(Y)
signal(X)

Thread 2:
wait(X)
wait(Y)
print “B”
signal(X)

Thread 3:
wait(Y)
print “C”
signal(X)

Here wait() and signal() are implemented with busy waiting:

wait(S) {
    while (S ≤ 0) ;   // busy wait
    S--;
}

signal(S) {
    S++;
}

 Answer:
 Output: CAB
 X = 1, Y = 0
Exercise
 Write a pseudo code to synchronize processes A, B, C and D by using
semaphores so that process B must finish executing before A starts, and process
A must finish before process C or D starts. Show your solution. You should
assume three semaphores X, Y and Z and all initialized to zero.

 Solution:
 X=Y=Z=0

Process A:
wait(X)
Do works of A;
signal(Y)
signal(Y)

Process B:
Do works of B;
signal(X)

Process C:
wait(Y)
Do works of C;

Process D:
wait(Y)
Do works of D;
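The solution can be checked in Python. Note that only two of the three semaphores (X and Y) are actually used, since A signals Y twice, once for C and once for D; "do works of" becomes a list append so the order is observable:

```python
import threading

X = threading.Semaphore(0)
Y = threading.Semaphore(0)
order = []

def A():
    X.acquire()              # wait(X): B must finish before A starts
    order.append("A")        # do works of A
    Y.release()              # signal(Y)
    Y.release()              # signal(Y): one permit each for C and D

def B():
    order.append("B")        # do works of B
    X.release()              # signal(X)

def C():
    Y.acquire()              # wait(Y)
    order.append("C")        # do works of C

def D():
    Y.acquire()              # wait(Y)
    order.append("D")        # do works of D

threads = [threading.Thread(target=f) for f in (A, C, D, B)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(order[0], order[1], sorted(order[2:]))  # B A ['C', 'D']
```

Whatever the scheduler does, B always appears before A, and A before both C and D; only the relative order of C and D can vary.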
Classical Problems of Synchronization
 Bounded-Buffer Problem

 Readers-Writers Problem

 Dining-Philosophers Problem
Bounded-Buffer Problem
 N buffers, each can hold one item
 Semaphore mutex initialized to the value 1
 Semaphore full initialized to the value 0
 Semaphore empty initialized to the value N

[Figure: a producer and a consumer sharing a 10-slot buffer; with 4 slots filled, full = 4 and empty = 6]
Bounded-Buffer Problem

The structure of the producer process:

do {
    // produce an item in nextp
    wait (empty);
    wait (mutex);
    // add the item to the buffer
    signal (mutex);
    signal (full);
} while (TRUE);

The structure of the consumer process:

do {
    wait (full);
    wait (mutex);
    // remove an item from buffer to nextc
    signal (mutex);
    signal (empty);
    // consume the item in nextc
} while (TRUE);
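A runnable Python version of the two structures; `empty`, `full`, and `mutex` mirror the semaphores above, with a deque standing in for the buffer (a sketch, one producer and one consumer):

```python
import threading
from collections import deque

BUF_SIZE = 4
buffer = deque()
mutex = threading.Semaphore(1)          # protects the buffer itself
empty = threading.Semaphore(BUF_SIZE)   # counts free slots
full = threading.Semaphore(0)           # counts filled slots
consumed = []

def producer():
    for nextp in range(20):             # produce items 0..19
        empty.acquire()                 # wait(empty): block if buffer is full
        mutex.acquire()                 # wait(mutex)
        buffer.append(nextp)            # add the item to the buffer
        mutex.release()                 # signal(mutex)
        full.release()                  # signal(full)

def consumer():
    for _ in range(20):
        full.acquire()                  # wait(full): block if buffer is empty
        mutex.acquire()                 # wait(mutex)
        nextc = buffer.popleft()        # remove an item from the buffer
        mutex.release()                 # signal(mutex)
        empty.release()                 # signal(empty)
        consumed.append(nextc)          # consume the item

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed == list(range(20)), len(buffer))  # True 0
```

The wait(empty)/wait(mutex) order matters: reversing it in the producer can deadlock, since a producer holding mutex while blocked on empty would stop the consumer from ever emptying a slot.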
Readers-Writers Problem
 A data set is shared among a number of concurrent processes
 Readers – only read the data set; they do not perform any updates
 Writers – can both read and write

 Problem – allow multiple readers to read at the same time. Only one single
writer can access the shared data at the same time. If a writer is writing then
no reader is allowed to read it.

[Figure: several reader and writer processes accessing a shared data set]
Readers-Writers Problem
 Shared Data
 Data set
 Integer readcount initialized to 0
 Number of readers reading the data at the moment
 Semaphore mutex initialized to 1
 Protects the readcount variable
(multiple readers may try to modify it)
 Semaphore wrt initialized to 1
 Protects the data set
(either writer or reader(s) should access data at a time)
Readers-Writers Problem

The structure of a writer process:

do {
    wait (wrt);
    // writing is performed
    signal (wrt);
} while (TRUE);

The structure of a reader process:

do {
    wait (mutex);
    readcount++;
    if (readcount == 1)
        wait (wrt);       // first reader locks out writers
    signal (mutex);
    // reading is performed
    wait (mutex);
    readcount--;
    if (readcount == 0)
        signal (wrt);     // last reader lets writers back in
    signal (mutex);
} while (TRUE);
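A Python check of the key invariant, that no reader overlaps a writer. This is a sketch: `reading` and `violations` are instrumentation added only to observe the protocol, not part of the algorithm itself:

```python
import threading
import time

mutex = threading.Semaphore(1)   # protects readcount
wrt = threading.Semaphore(1)     # protects the data set
readcount = 0
reading = []                     # readers currently in their read section
violations = []

def reader():
    global readcount
    mutex.acquire()
    readcount += 1
    if readcount == 1:
        wrt.acquire()            # first reader locks out writers
    mutex.release()
    reading.append(1)            # reading is performed
    time.sleep(0.005)
    reading.pop()
    mutex.acquire()
    readcount -= 1
    if readcount == 0:
        wrt.release()            # last reader lets writers back in
    mutex.release()

def writer():
    wrt.acquire()
    if len(reading) > 0:         # a reader active during a write would be a violation
        violations.append(1)
    time.sleep(0.005)            # writing is performed
    wrt.release()

threads = [threading.Thread(target=reader) for _ in range(6)]
threads += [threading.Thread(target=writer) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(violations), readcount)  # 0 0: no writer ever overlapped a reader
```

Multiple readers do overlap each other freely; only the first and last reader of a batch touch wrt, which is exactly what makes concurrent reading cheap.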
Readers-Writers Problem Variations
 First variation – no reader kept waiting unless writer has permission to use
shared object
 no reader should wait for other readers to finish just because a writer is
waiting
 Writers may starve

 Second variation – once writer is ready, it performs write ASAP


 If writer is waiting then no new reader may be allowed to read
 Readers may starve

 Both may have starvation leading to even more variations


Dining-Philosophers Problem

[Figure: five philosophers around a table; each chopstick is a resource, each philosopher a process]

 Philosophers spend their lives thinking and eating. Assume a philosopher needs two chopsticks to eat. Chopsticks are like resources: while a philosopher is holding a chopstick, another one cannot have it.
 Philosophers don't interact with their neighbors; they occasionally try to pick up two chopsticks (one at a time) to eat from the bowl, and release both when done.
Dining-Philosophers Problem
 Is not a real problem

 But lots of real resource allocation problems look like this. If we can solve this
problem effectively and efficiently, we can also solve the real problems.

 From a satisfactory solution:


 We want to have concurrency: two philosophers that are not sitting next to
each other on the table should be able to eat concurrently.
 We don’t want deadlock: waiting for each other indefinitely.
 We don’t want starvation: no philosopher waits forever.
Dining-Philosophers Problem
 Semaphore chopstick [5] initialized to 1

 The structure of Philosopher i:


do {
    wait ( chopstick[i] );
    wait ( chopstick[(i + 1) % 5] );

    // eat

    signal ( chopstick[i] );
    signal ( chopstick[(i + 1) % 5] );

    // think
} while (TRUE);

 What is the problem with this algorithm?
 This solution provides concurrency but may result in deadlock (e.g., all five philosophers pick up their left chopstick at the same time).
Dining-Philosophers Problem Variations
 Represent each chopstick by a semaphore
 No two neighbors can eat simultaneously
 Can still create a deadlock

 Deadlock Handling
 Allow at most 4 philosophers to sit at the table simultaneously.
 Allow a philosopher to pick up the chopsticks only if both are available (the pickup must be done in a critical section)

 An odd philosopher picks up his left chopstick first and then his right, whereas an even philosopher picks up his right chopstick first and then his left.
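The odd/even variant can be exercised in Python. Because two neighbors always grab their shared chopstick in opposite orders, the circular wait that causes deadlock cannot form; a sketch with each chopstick as a `Semaphore(1)`:

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]
meals = []   # record of who ate (list append is atomic in CPython)

def philosopher(i):
    left, right = i, (i + 1) % N
    # odd philosophers pick up their left chopstick first; even ones their right
    first, second = (left, right) if i % 2 == 1 else (right, left)
    for _ in range(20):
        chopstick[first].acquire()    # wait(chopstick[first])
        chopstick[second].acquire()   # wait(chopstick[second])
        meals.append(i)               # eat
        chopstick[second].release()   # signal(chopstick[second])
        chopstick[first].release()    # signal(chopstick[first])
        # think

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()                          # all joins return: the asymmetry prevents deadlock
print(len(meals))  # 100: every philosopher ate 20 times
```

With the naive all-left-first version, this same test can hang forever when all five acquire their first chopstick simultaneously; the asymmetric pickup order removes that possibility.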
Problems with Semaphores
 Incorrect use of semaphore operations:

 signal (mutex) …. wait (mutex)


 wait (mutex) … wait (mutex)
 Omitting of wait (mutex) or signal (mutex) (or both)

 Deadlock and starvation are possible.


Deadlock and Starvation
 Deadlock – A condition where two or more processes are waiting indefinitely
for an event that can only be generated by one of the waiting processes.
 Let S and Q be two semaphores initialized to 1
P0 P1
wait(S); wait(Q);
wait(Q); wait(S);
... ...
signal(S); signal(Q);
signal(Q); signal(S);

 Starvation – indefinite blocking


 A process may never be removed from the semaphore queue in which it is
suspended
 Priority Inversion – Scheduling problem when lower-priority process holds a
lock needed by higher-priority process
 Solved via priority-inheritance protocol
Lost Update Problem
Transaction T:                 Transaction U:
Bank$Withdraw(A, 4);           Bank$Withdraw(C, 3);
Bank$Deposit(B, 4)             Bank$Deposit(B, 3)

Interleaved execution:

T: balance := A.Read()         $100
T: A.Write(balance - 4)        $96
U: balance := C.Read()         $300
U: C.Write(balance - 3)        $297
T: balance := B.Read()         $200
U: balance := B.Read()         $200
U: B.Write(balance + 3)        $203
T: B.Write(balance + 4)        $204   // U's deposit to B is lost
Correctness Requirements
 Threaded programs must work for all interleavings of thread instruction
sequences
 Cooperating threads inherently non-deterministic and non-reproducible
 Really hard to debug unless carefully designed!

 Example: Therac-25
 A machine for radiation therapy
 Software control of the electron accelerator and electron/X-ray production
 Software control of dosage
 Software errors caused the death of several patients
 A series of race conditions on shared variables and poor software design
 “They determined that data entry speed during editing was the key factor
in producing the error condition: If the prescription data was edited at a
fast pace, the overdose occurred.”
Space Shuttle Example
 Original Space Shuttle launch aborted 20 minutes before scheduled launch

 The Shuttle has five computers:
 Four run the “Primary Avionics Software System” (PASS)
 Asynchronous and real-time
 Runs all of the control systems
 Results synchronized and compared every 3 to 4 ms
 The fifth computer is the “Backup Flight System” (BFS)
 Stays synchronized in case it is needed
 Written by a completely different team than PASS
 Countdown aborted because BFS disagreed with PASS
 A 1/67 chance that PASS was out of sync one cycle
 Bug due to modifications in initialization code of PASS
 A delayed init request placed into timer queue
 As a result, timer queue not empty at expected time to force use of
hardware clock
 Bug not found during extensive simulation
BACKUP
acquire() and release()
 acquire() {
while (!available)
; /* busy wait */
available = false;
}
 release() {
available = true;
}
 do {
acquire lock
critical section
release lock
remainder section
} while (true);

 But this solution requires busy waiting


 This lock therefore called a spinlock
Semaphore Implementation
 Must guarantee that no two processes can execute the wait()and
signal() on the same semaphore at the same time
 Thus, the implementation becomes the critical section problem where the
wait and signal code are placed in the critical section
 Could now have busy waiting in critical section implementation
 But implementation code is short
 Little busy waiting if critical section rarely occupied
 Note that applications may spend lots of time in critical sections and therefore
this is not a good solution
Semaphore Implementation with no Busy waiting

 With each semaphore there is an associated waiting queue


 Each entry in a waiting queue has two data items:
 value (of type integer)
 pointer to next record in the list
 Two operations:
 block – place the process invoking the operation on the
appropriate waiting queue
 wakeup – remove one of processes in the waiting queue and
place it in the ready queue

 typedef struct{
int value;
struct process *list;
} semaphore;
Implementation with no Busy waiting (Cont.)

wait(semaphore *S) {
S->value--;
if (S->value < 0) {
add this process to S->list;
block();
}
}

signal(semaphore *S) {
S->value++;
if (S->value <= 0) {
remove a process P from S->list;
wakeup(P);
}
}
