Mutual exclusion in a distributed system states that only one process is allowed to execute the
critical section (CS) at any given time.
Message passing is the sole means for implementing distributed mutual exclusion.
The decision as to which process is allowed access to the CS next is arrived at by
message passing, in which each process learns about the state of all other processes
in some consistent way.
Critical Section:
When more than one process tries to access the same code segment, that segment is known
as the critical section.
The critical section contains shared variables or resources that need to be synchronized
to maintain the consistency of data variables.
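To see why this matters, here is a minimal Python sketch (illustrative, not from the
notes) in which two threads increment a shared counter; the read-modify-write inside the
critical section races without a lock, while the lock enforces mutual exclusion:

    import threading

    counter = 0                      # shared variable
    lock = threading.Lock()          # guards the critical section

    def increment(n):
        global counter
        for _ in range(n):
            with lock:               # only one thread runs this CS at a time
                counter += 1         # read-modify-write; races without the lock

    threads = [threading.Thread(target=increment, args=(100000,)) for _ in range(2)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(counter)                   # always 200000 with the lock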
Advantages:
Prevents race conditions
Provides mutual exclusion
Reduces CPU utilization
Simplifies synchronization
There are three basic approaches for implementing distributed mutual exclusion:
Token-based approach
Non-token-based approach
Quorum-based approach
Safety property:
The safety property states that at any instant, only one process can execute the
critical section. This is an essential property of a mutual exclusion algorithm.
Liveness property:
This property states the absence of deadlock and starvation. Two or more sites
should not endlessly wait for messages that will never arrive. In addition, a site must
not wait indefinitely to execute the CS while other sites are repeatedly executing the
CS. That is, every requesting site should get an opportunity to execute the CS in finite
time.
Fairness:
Fairness in the context of mutual exclusion means that each process gets a fair
chance to execute the CS. In mutual exclusion algorithms, the fairness property
generally means that the CS execution requests are executed in order of their arrival in
the system.
Performance metrics
Message complexity: This is the number of messages that are required per CS
execution by a site.
Synchronization delay: This is the time required, after a site leaves the CS, before
the next site enters the CS. (Figure 3.1)
Response time: This is the time interval a request waits for its CS execution to be
over after its request messages have been sent out. Thus, response time does not
include the time a request waits at a site before its request messages have been sent
out. (Figure 3.2)
System throughput: This is the rate at which the system executes requests for the
CS. If SD is the synchronization delay and E is the average critical section
execution time, then the system throughput is given by:
System throughput = 1 / (SD + E)
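For instance (illustrative numbers), if SD = 2 time units and E = 8 time units, the
system throughput is 1 / (2 + 8) = 0.1 CS executions per time unit.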
For example, the best and worst values of the response time are achieved when the
load is low and high, respectively; the best and worst message traffic is generated
at low and heavy load conditions, respectively.
TOKEN BASED ALGORITHM
In the token-based approach, a unique token is shared among all the sites; a site is
allowed to enter its critical section only if it possesses the token. In the
Suzuki–Kasami broadcast algorithm, a site that wants to enter the CS and does not hold
the token broadcasts a REQUEST message to all other sites. The token carries a queue of
requesting sites; when the token holder exits the CS, it passes the token to the site
at the head of this queue.
Fig 3.4: Suzuki–Kasami's broadcast algorithm
Correctness
Mutual exclusion is guaranteed because there is only one token in the system and a site holds
the token during the CS execution.
Theorem: A requesting site enters the CS in finite time.
Proof: Token request messages of a site Si reach other sites in finite time.
Since one of these sites will have the token in finite time, site Si's request will be
placed in the token queue in finite time.
Since there can be at most N − 1 requests in front of this request in the token queue,
site Si will get the token and execute the CS in finite time.
Message Complexity:
The algorithm requires no messages if the site already holds the idle token at the time
of its critical section request, and a maximum of N messages per critical section
execution otherwise. These N messages involve:
(N – 1) request messages
1 reply message
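For example, with N = 5 sites, a CS execution needs at most 5 messages: 4 REQUEST
messages broadcast to the other sites plus 1 message carrying the token.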
Performance:
Synchronization delay is 0 and no message is needed if the site holds the idle token at
the time of its request. In case the site does not hold the idle token, the maximum
synchronization delay is equal to the maximum message transmission time, and a maximum
of N messages are required per critical section invocation.
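To make the token's data structures concrete, the following is a minimal single-threaded
Python simulation sketch (class names, the shared 'network' dictionary, and instantaneous
message delivery are all illustrative assumptions, not part of the notes):

    from collections import deque

    class Site:
        # One site in a toy Suzuki-Kasami simulation.
        def __init__(self, sid, n, network):
            self.sid, self.n, self.network = sid, n, network
            self.RN = [0] * n        # RN[j]: largest request number heard from site j
            self.token = None        # the unique token: {'LN': [...], 'Q': deque()}
            self.executing = False

        def request_cs(self):
            if self.token is None:                      # must ask for the token
                self.RN[self.sid] += 1
                for s in self.network.values():         # broadcast REQUEST(sid, sn)
                    if s is not self:
                        s.on_request(self.sid, self.RN[self.sid])
            if self.token is not None:                  # token held or just arrived
                self.executing = True                   # enter the CS

        def on_request(self, j, sn):
            self.RN[j] = max(self.RN[j], sn)            # record j's request
            t = self.token
            if t is not None and not self.executing and self.RN[j] == t['LN'][j] + 1:
                self.token = None                       # pass the idle token to j
                self.network[j].token = t

        def release_cs(self):
            self.executing = False
            t = self.token
            t['LN'][self.sid] = self.RN[self.sid]       # own request is now satisfied
            for j in range(self.n):                     # queue all outstanding requests
                if self.RN[j] == t['LN'][j] + 1 and j not in t['Q']:
                    t['Q'].append(j)
            if t['Q']:                                  # send token to head of queue
                j = t['Q'].popleft()
                self.token = None
                self.network[j].token = t

    # Illustrative run: three sites, site 0 initially holds the idle token.
    network = {}
    network.update({i: Site(i, 3, network) for i in range(3)})
    network[0].token = {'LN': [0, 0, 0], 'Q': deque()}
    network[1].request_cs()      # site 0 forwards its idle token; site 1 enters the CS
    network[1].release_cs()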
NON-TOKEN BASED ALGORITHM
A site communicates with other sites in order to determine which site should execute
the critical section next. This requires the exchange of two or more successive rounds
of messages among sites.
This approach uses timestamps instead of sequence numbers to order requests for the
critical section.
Whenever a site makes a request for the critical section, it gets a timestamp.
Timestamps are also used to resolve any conflict between critical section requests.
All algorithms that follow the non-token-based approach maintain a logical clock.
Logical clocks are updated according to Lamport's scheme.
Eg: Lamport's algorithm, Ricart–Agrawala algorithm
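Because these algorithms rest on Lamport's logical clocks, here is a brief Python sketch
of the clock update rules (a minimal illustration; the class name is assumed):

    class LamportClock:
        # Lamport's logical clock: tick on local/send events, max-merge on receive.
        def __init__(self):
            self.time = 0

        def tick(self):                 # local event or message send
            self.time += 1
            return self.time

        def receive(self, msg_time):    # on receipt, jump past the sender's clock
            self.time = max(self.time, msg_time) + 1
            return self.time

    # A request is stamped (clock value, site id); ties on the clock value are
    # broken by site id, so all CS requests are totally ordered.
    def earlier(ts_a, ts_b):
        return ts_a < ts_b              # tuple comparison: time first, then site id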
LAMPORT’S ALGORITHM
In Lamport's algorithm, critical section requests are executed in increasing order of
timestamps, i.e., a request with a smaller timestamp is given permission to execute the
critical section before a request with a larger timestamp.
Three types of messages (REQUEST, REPLY and RELEASE) are used, and communication
channels are assumed to follow FIFO order.
A site sends a REQUEST message to all other sites to get their permission to enter the
critical section.
A site sends a REPLY message to the requesting site to give its permission to enter the
critical section.
A site sends a RELEASE message to all other sites upon exiting the critical section.
Every site Si keeps a queue to store critical section requests ordered by their
timestamps; request_queuei denotes the queue of site Si.
A timestamp is given to each critical section request using Lamport's logical clock.
Timestamps are used to determine the priority of critical section requests: a smaller
timestamp gets higher priority over a larger timestamp, and critical section requests
are always executed in the order of their timestamps.
Site Si enters the CS when both of the following conditions hold:
L1: Si has received a message with a timestamp larger than the timestamp of its own
request from all other sites.
L2: Si's own request is at the top of request_queuei.
When a site Si exits the critical section, it removes its own request from the top of its
request queue and sends a timestamped RELEASE message to all other sites. When a
site Sj receives the timestamped RELEASE message from site Si, it removes the
request of Si from its request queue.
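Putting the pieces together, here is a minimal single-threaded Python simulation sketch
of Lamport's algorithm (class and method names are illustrative, and the shared
'network' dictionary models instantaneous FIFO channels):

    import heapq

    class LamportMutexSite:
        def __init__(self, sid, network):
            self.sid, self.network = sid, network
            self.clock = 0
            self.queue = []              # min-heap of (timestamp, site id) requests
            self.replies = set()         # sites that have replied to our request
            self.my_request = None

        def _tick(self, other=0):
            self.clock = max(self.clock, other) + 1

        def request_cs(self):            # broadcast a timestamped REQUEST
            self._tick()
            self.my_request = (self.clock, self.sid)
            heapq.heappush(self.queue, self.my_request)
            self.replies.clear()
            for s in self.network.values():
                if s is not self:
                    s.on_request(self.my_request)

        def on_request(self, req):       # queue the request, return a REPLY
            self._tick(req[0])
            heapq.heappush(self.queue, req)
            self.network[req[1]].on_reply(self.sid, self.clock)

        def on_reply(self, j, ts):
            self._tick(ts)
            self.replies.add(j)

        def can_enter(self):             # L1: all replies in; L2: own request on top
            return (len(self.replies) == len(self.network) - 1
                    and self.queue and self.queue[0] == self.my_request)

        def release_cs(self):            # pop own request, broadcast RELEASE
            heapq.heappop(self.queue)
            self.my_request = None
            self._tick()
            for s in self.network.values():
                if s is not self:
                    s.on_release(self.sid, self.clock)

        def on_release(self, j, ts):     # drop the releasing site's request
            self._tick(ts)
            self.queue = [r for r in self.queue if r[1] != j]
            heapq.heapify(self.queue)

    # Illustrative run with three sites.
    network = {}
    network.update({i: LamportMutexSite(i, network) for i in range(3)})
    network[0].request_cs()
    assert network[0].can_enter()        # replies from all; own request at queue head
    network[0].release_cs()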
Correctness
Theorem: Lamport’s algorithm achieves mutual exclusion.
Proof: The proof is by contradiction.
Suppose two sites Si and Sj are executing the CS concurrently. For this to happen
conditions L1 and L2 must hold at both the sites concurrently.
This implies that at some instant in time, say t, both Si and Sj have their own requests
at the top of their request queues and condition L1 holds at them. Without loss of
generality, assume that Si's request has a smaller timestamp than the request of Sj.
From condition L1 and the FIFO property of the communication channels, it is clear that
at instant t the request of Si must be present in request_queuej when Sj was executing
its CS. This implies that Sj's own request is at the top of its own request queue when
a smaller timestamp request, Si's request, is present in request_queuej – a
contradiction!
An Example
In Figures 9.3 to 9.6, we illustrate the operation of Lamport's algorithm. In Figure
9.3, sites S1 and S2 are making requests for the CS and send out REQUEST messages to other
sites. The timestamps of the requests are (1, 1) and (1, 2), respectively. In Figure 9.4, both
sites S1 and S2 have received REPLY messages from all other sites. S1 has its request at the
top of its request_queue but site S2 does not have its request at the top of its request_queue.
Consequently, site S1 enters the CS. In Figure 9.5, S1 exits and sends RELEASE messages to
all other sites. In Figure 9.6, site S2 has received REPLY from all other sites and also received
a RELEASE message from site S1. Site S2 updates its request_queue and its request is now at
the top of its request_queue. Consequently, it enters the CS next.
Message Complexity:
Lamport's algorithm requires invocation of 3(N – 1) messages per critical section execution.
These 3(N – 1) messages involve:
(N – 1) request messages
(N – 1) reply messages
(N – 1) release messages
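For example, with N = 5 sites, each CS execution requires 3 × 4 = 12 messages: 4
REQUEST, 4 REPLY, and 4 RELEASE messages.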
Performance:
Synchronization delay is equal to maximum message transmission time. It requires 3(N – 1)
messages per CS execution. The algorithm can be optimized to 2(N – 1) messages by omitting
the REPLY message in some situations.
RICART–AGRAWALA ALGORITHM
The Ricart–Agrawala algorithm is an optimization of Lamport's algorithm that dispenses
with RELEASE messages by cleverly merging them with REPLY messages. It uses only two
types of messages, REQUEST and REPLY.
A site Si sends a timestamped REQUEST message to all other sites when it wants to enter
the critical section.
When a site Sj receives a REQUEST message from site Si, it sends a REPLY message
immediately if it is neither requesting nor executing the CS, or if site Si's request
has a smaller timestamp than Sj's own pending request; otherwise, the reply is deferred.
Site Si enters the CS after it has received REPLY messages from all other sites.
Upon exiting the CS, site Si sends a REPLY message to all the deferred requests.
Message Complexity:
The Ricart–Agrawala algorithm requires invocation of 2(N – 1) messages per critical
section execution. These 2(N – 1) messages involve:
(N – 1) request messages
(N – 1) reply messages
Performance:
Synchronization delay is equal to maximum message transmission time. It requires
2(N – 1) messages per critical section execution.
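As with Lamport's algorithm above, here is a minimal single-threaded Python simulation
sketch of Ricart–Agrawala (class and method names are illustrative assumptions):

    class RicartAgrawalaSite:
        def __init__(self, sid, network):
            self.sid, self.network = sid, network
            self.clock = 0
            self.my_request = None       # (timestamp, site id) while requesting
            self.replies = set()
            self.deferred = []           # sites we will REPLY to upon exit
            self.executing = False

        def _tick(self, other=0):
            self.clock = max(self.clock, other) + 1

        def request_cs(self):            # broadcast a timestamped REQUEST
            self._tick()
            self.my_request = (self.clock, self.sid)
            self.replies.clear()
            for s in self.network.values():
                if s is not self:
                    s.on_request(self.my_request)

        def on_request(self, req):
            self._tick(req[0])
            # Defer the REPLY while executing, or while our own pending request has
            # higher priority (smaller timestamp); otherwise reply immediately.
            if self.executing or (self.my_request is not None and self.my_request < req):
                self.deferred.append(req[1])
            else:
                self.network[req[1]].on_reply(self.sid)

        def on_reply(self, j):
            self.replies.add(j)

        def can_enter(self):             # enter once every other site has replied
            return (self.my_request is not None
                    and len(self.replies) == len(self.network) - 1)

        def enter_cs(self):
            assert self.can_enter()
            self.executing = True

        def release_cs(self):            # the deferred REPLYs double as RELEASEs
            self.executing = False
            self.my_request = None
            for j in self.deferred:
                self.network[j].on_reply(self.sid)
            self.deferred.clear()

    # Illustrative run with three sites.
    network = {}
    network.update({i: RicartAgrawalaSite(i, network) for i in range(3)})
    network[0].request_cs()
    network[0].enter_cs()                # no site is competing, so all reply at once
    network[0].release_cs()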