CS3551 UNIT III Notes – Distributed Systems (Anna University)

UNIT III- DISTRIBUTED MUTEX & DEADLOCK


Distributed mutual exclusion algorithms: Introduction – Preliminaries – Lamport‘s algorithm –Ricart-Agrawala
algorithm –Token based algorithm- Suzuki–Kasami‘s broadcast algorithm. Deadlock detection in distributed
systems: Introduction – System model – Preliminaries –Models of deadlocks – Chandy- Misra-Haas algorithm
for the AND model and the OR model.

DISTRIBUTED MUTUAL EXCLUSION ALGORITHMS:


 Mutual exclusion is a fundamental problem in distributed computing
systems.
 Mutual exclusion ensures that concurrent access to shared
resources or data happens in a serialized way.
 Mutual exclusion is a concurrency control property which is
introduced to prevent race conditions.
 It is the requirement that a process cannot access a shared resource
while another concurrent process is currently using or executing on the
same resource.

Mutual exclusion in a distributed system states that only one process is allowed to execute the
critical section (CS) at any given time.
 Message passing is the sole means for implementing distributed mutual exclusion.
 The decision as to which process is allowed access to the CS next is arrived at by
message passing, in which each process learns about the state of all other processes
in some consistent way.

Example (centralized coordination):
 P1 asks the coordinator for permission to enter the CS; permission is granted.
 P2 asks for permission to enter the same CS; the coordinator does not reply.
 When P1 exits the CS, the coordinator grants the permission to P2.

Critical Section:

 When more than one process tries to access the same code segment, that segment is known
as the critical section.
 The critical section contains shared variables or resources which need to be
synchronized to maintain the consistency of data variables.

Downloaded by Yazhini Infanta (infantayazhini@gmail.com)


lOMoARcPSD|38367557

 Critical section is a group of instructions/statements or regions of code that need to be


executed atomically such as accessing a resource (file, input or output port, global data,
etc.)
 In concurrent programming, if one thread tries to change the value of shared data at the
same time as another thread tries to read it (i.e., a data race across threads), the result
is unpredictable.

Advantages:
 Prevents race conditions
 Provides mutual exclusion
 Avoids busy waiting, reducing wasted CPU utilization
 Simplifies synchronization

There are three basic approaches for implementing distributed mutual exclusion:
Token-based approach
Non-token-based approach
Quorum-based approach

System Model : Preliminaries


 The system consists of N sites, S1, S2, S3, …, SN.
 Assume that a single process is running on each site.
 The process at site Si is denoted by pi. All these processes communicate
asynchronously over an underlying communication network.
 A process wishing to enter the CS requests all other or a subset of processes by
sending REQUEST messages, and waits for appropriate replies before entering the
CS.
 While waiting the process is not allowed to make further requests to enter the CS.
 A site can be in one of the following three states: requesting the CS, executing the CS,
or neither requesting nor executing the CS.
 In the requesting the CS state, the site is blocked and cannot make further requests for
the CS.
 In the idle state, the site is executing outside the CS.
 In the token-based algorithms, a site can also be in a state where a site holding the
token is executing outside the CS. Such state is referred to as the idle token state.
 At any instant, a site may have several pending requests for CS. A site queues up
these requests and serves them one at a time.
 N denotes the number of processes or sites involved in invoking the critical section, T
denotes the average message delay, and E denotes the average critical section
execution time.

Requirements of mutual exclusion algorithms


 Safety property:

The safety property states that at any instant, only one process can execute the
critical section. This is an essential property of a mutual exclusion algorithm.
 Liveness property:
This property states the absence of deadlock and starvation. Two or more sites
should not endlessly wait for messages that will never arrive. In addition, a site must
not wait indefinitely to execute the CS while other sites are repeatedly executing the
CS. That is, every requesting site should get an opportunity to execute the CS in finite
time.
 Fairness:

Downloaded by Yazhini Infanta (infantayazhini@gmail.com)


lOMoARcPSD|38367557

Fairness in the context of mutual exclusion means that each process gets a fair
chance to execute the CS. In mutual exclusion algorithms, the fairness property
generally means that the CS execution requests are executed in order of their arrival in
the system.
Performance metrics
 Message complexity: This is the number of messages that are required per CS
execution by a site.
 Synchronization delay: After a site leaves the CS, it is the time required before
the next site enters the CS. (Figure 3.1)
 Response time: This is the time interval a request waits for its CS execution to be
over after its request messages have been sent out. Thus, response time does not
include the time a request waits at a site before its request messages have been sent
out. (Figure 3.2)
 System throughput: This is the rate at which the system executes requests for the
CS. If SD is the synchronization delay and E is the average critical section
execution time, then the system throughput is 1/(SD + E).

Figure 3.1 Synchronization delay

Figure 3.2 Response Time
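The throughput relation above can be checked with a small sketch. The numeric values here are hypothetical, chosen only to illustrate the formula.

```python
# Illustrative sketch of the throughput metric: 1 / (SD + E).
def system_throughput(sd: float, e: float) -> float:
    """Rate of CS executions when each cycle costs SD (synchronization
    delay) plus E (average critical section execution time)."""
    return 1.0 / (sd + e)

# Hypothetical values: SD = 2 time units, E = 8 time units.
print(system_throughput(2, 8))  # 0.1 CS executions per time unit
```

Reducing either SD (a property of the algorithm) or E (a property of the application) raises the throughput.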


Low and High Load Performance:
 The performance of mutual exclusion algorithms is classified as two special loading
conditions, viz., “low load” and “high load”.
 The load is determined by the arrival rate of CS execution requests.
 Under low load conditions, there is seldom more than one request for the critical
section present in the system simultaneously.
 Under heavy load conditions, there is always a pending request for critical section at a
site.

Best and worst case performance


 In the best case, prevailing conditions are such that a performance metric attains the
best possible value. For example, the best value of the response time is a roundtrip
message delay plus the CS execution time, 2T +E.

Downloaded by Yazhini Infanta (infantayazhini@gmail.com)


lOMoARcPSD|38367557

 For example, the best and worst values of the response time are achieved when load
is, respectively, low and high;
 the best and the worst message traffic is generated at low and heavy load conditions,
respectively.
TOKEN BASED ALGORITHM

 A unique token is shared among all the sites.
 If a site possesses the unique token, it is allowed to enter its critical section.
 This approach uses sequence numbers to order requests for the critical section.
 Each request for the critical section contains a sequence number. This sequence
number is used to distinguish old and current requests.
 This approach ensures mutual exclusion as the token is unique.

Eg: Suzuki-Kasami’s Broadcast Algorithm

Data structures and notations:

An array of integers RN[1…N]
 A site Si keeps RNi[1…N], where RNi[j] is the largest sequence
number received so far in a REQUEST message from site Sj.

An array of integers LN[1…N]
 This array is carried by the token. LN[j] is the sequence number of the
request most recently executed by site Sj.

A queue Q
 This data structure is carried by the token to keep a record of the IDs of sites
waiting for the token.

SUZUKI–KASAMI‘s BROADCAST ALGORITHM

Downloaded by Yazhini Infanta (infantayazhini@gmail.com)


lOMoARcPSD|38367557

 Suzuki–Kasami algorithm is a token-based algorithm for achieving mutual exclusion
in distributed systems.
 This is a modification of the Ricart–Agrawala algorithm, a permission-based (non-
token-based) algorithm which uses REQUEST and REPLY messages to ensure mutual
exclusion.
 In token-based algorithms, a site is allowed to enter its critical section if it possesses
the unique token.
 Non-token-based algorithms use timestamps to order requests for the critical section,
whereas sequence numbers are used in token-based algorithms.
 Each request for the critical section contains a sequence number. This sequence number
is used to distinguish old and current requests.


Fig 3.4: Suzuki–Kasami‘s broadcast algorithm

Downloaded by Yazhini Infanta (infantayazhini@gmail.com)


lOMoARcPSD|38367557

To enter Critical section:


 When a site Si wants to enter the critical section and it does not have the token, it
increments its sequence number RNi[i] and sends a request message REQUEST(i, sn)
to all other sites in order to request the token.
 Here sn is the updated value of RNi[i].
 When a site Sj receives the request message REQUEST(i, sn) from site Si, it sets
RNj[i] to the maximum of RNj[i] and sn, i.e., RNj[i] = max(RNj[i], sn).
 After updating RNj[i], site Sj sends the token to site Si if it has the token and
RNj[i] = LN[i] + 1 (i.e., Si has an outstanding request).

To execute the critical section:


 Site Si executes the critical section if it has acquired the token.

To release the critical section:


After finishing the execution, site Si exits the critical section and does the following:
 sets LN[i] = RNi[i] to indicate that its critical section request RNi[i] has been executed;
 for every site Sj whose ID is not present in the token queue Q, it appends Sj's ID to Q if
RNi[j] = LN[j] + 1, which indicates that site Sj has an outstanding request;
 after the above update, if the queue Q is non-empty, it pops a site ID from Q and
sends the token to the site indicated by the popped ID;
 if the queue Q is empty, it keeps the token.
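The request, entry, and release steps above can be sketched in a single process. This is a minimal illustration, not a distributed implementation: message passing is simulated by direct method calls, and the `sites` registry is an illustrative convenience that a real deployment would replace with a network layer.

```python
from collections import deque

class SKSite:
    """Sketch of one Suzuki-Kasami site (messages simulated by direct calls)."""

    def __init__(self, sid, n, sites):
        self.id, self.n, self.sites = sid, n, sites
        self.rn = [0] * n        # RN[j]: largest request number seen from site j
        self.token = None        # {'ln': [...], 'q': deque()} when held here
        self.in_cs = False

    def request_cs(self):
        """Broadcast REQUEST(i, sn) if token absent; enter CS once it arrives."""
        if self.token is None:
            self.rn[self.id] += 1
            for s in self.sites:
                if s is not self:
                    s.on_request(self.id, self.rn[self.id])
        if self.token is not None:
            self.in_cs = True
        return self.in_cs

    def on_request(self, i, sn):
        """RNj[i] = max(RNj[i], sn); pass an idle token if the request is outstanding."""
        self.rn[i] = max(self.rn[i], sn)
        if self.token and not self.in_cs and self.rn[i] == self.token['ln'][i] + 1:
            token, self.token = self.token, None
            self.sites[i].token = token          # simulated token message

    def release_cs(self):
        """LN[i] = RN[i]; enqueue outstanding requesters; pass the token if any wait."""
        self.in_cs = False
        token = self.token
        token['ln'][self.id] = self.rn[self.id]
        for j in range(self.n):
            if j not in token['q'] and self.rn[j] == token['ln'][j] + 1:
                token['q'].append(j)
        if token['q']:
            nxt = token['q'].popleft()
            self.token, self.sites[nxt].token = None, token
```

For example, with three sites and S0 initially holding the idle token, a request from S1 costs N − 1 simulated REQUEST calls plus one token transfer, matching the message count stated below.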

Correctness
Mutual exclusion is guaranteed because there is only one token in the system and a site holds
the token during the CS execution.
Theorem: A requesting site enters the CS in finite time.
Proof: Token request messages of a site Si reach other sites in finite time.
Since one of these sites will have the token in finite time, site Si's request will be placed in the
token queue in finite time.
Since there can be at most N − 1 requests in front of this request in the token queue, site Si
will get the token and execute the CS in finite time.

Message Complexity:
The algorithm requires 0 messages if the site already holds the idle token at the
time of its critical section request, or a maximum of N messages per critical section execution.
These N messages involve:
 (N – 1) request messages
 1 message carrying the token

Drawbacks of Suzuki–Kasami Algorithm:


 Non-symmetric algorithm: a site retains the token even if it has not requested
the critical section.

Performance:
Synchronization delay is 0 and no message is needed if the site holds the idle token at the
time of its request. In case the site does not hold the idle token, the maximum synchronization
delay is equal to the maximum message transmission time, and a maximum of N messages are
required per critical section invocation.

NON TOKEN BASED ALGORITHM

 A site communicates with other sites in order to determine which site should
execute the critical section next. This requires the exchange of two or more successive
rounds of messages among sites.
 This approach uses timestamps instead of sequence numbers to order requests
for the critical section.
 Whenever a site makes a request for the critical section, it gets a timestamp.
The timestamp is also used to resolve any conflict between critical section
requests.
 All algorithms which follow the non-token-based approach maintain a logical
clock. Logical clocks get updated according to Lamport's scheme.
 Eg: Lamport's algorithm, Ricart–Agrawala algorithm

LAMPORT’S ALGORITHM

 Lamport's distributed mutual exclusion algorithm is a permission-based algorithm
proposed by Lamport as an illustration of his synchronization scheme for distributed
systems.
 In permission-based algorithms, a timestamp is used to order critical section requests
and to resolve any conflict between requests.

 In Lamport's algorithm, critical section requests are executed in increasing order
of timestamps, i.e., a request with a smaller timestamp will be given permission to
execute the critical section before a request with a larger timestamp.
 Three types of messages (REQUEST, REPLY and RELEASE) are used, and
communication channels are assumed to follow FIFO order.
 A site sends a REQUEST message to all other sites to get their permission to enter the
critical section.
 A site sends a REPLY message to the requesting site to give its permission to enter the
critical section.
 A site sends a RELEASE message to all other sites upon exiting the critical section.
 Every site Si keeps a queue to store critical section requests ordered by their
timestamps.
 request_queuei denotes the queue of site Si.
 A timestamp is given to each critical section request using Lamport's logical clock.
 Timestamps are used to determine the priority of critical section requests. A smaller
timestamp gets higher priority than a larger timestamp. The execution of critical
section requests is always in the order of their timestamps.

Fig 3.1: Lamport’s distributed mutual exclusion algorithm

To enter Critical section:


 When a site Si wants to enter the critical section, it sends a request message
REQUEST(tsi, i) to all other sites and places the request on request_queuei. Here, tsi
denotes the timestamp of site Si's request.
 When a site Sj receives the request message REQUEST(tsi, i) from site Si, it returns a
timestamped REPLY message to site Si and places the request of site Si on
request_queuej.

To execute the critical section:


 A site Si can enter the critical section if it has received a message with timestamp
larger than (tsi, i) from all other sites (condition L1) and its own request is at the top of
request_queuei (condition L2).

To release the critical section:

 When a site Si exits the critical section, it removes its own request from the top of its
request queue and sends a timestamped RELEASE message to all other sites. When a
site Sj receives the timestamped RELEASE message from site Si, it removes the
request of Si from its request queue.

Correctness
Theorem: Lamport’s algorithm achieves mutual exclusion.
Proof: Proof is by contradiction.
 Suppose two sites Si and Sj are executing the CS concurrently. For this to happen,
conditions L1 and L2 must hold at both sites concurrently.
 This implies that at some instant in time, say t, both Si and Sj have their own requests
at the top of their request queues and condition L1 holds at them. Without loss of
generality, assume that Si's request has a smaller timestamp than the request of Sj.
 From condition L1 and the FIFO property of the communication channels, it is clear that
at instant t the request of Si must be present in request_queuej when Sj was executing
its CS. This implies that Sj's own request is at the top of its own request queue while
a smaller-timestamp request, Si's request, is present in request_queuej – a
contradiction!

Theorem: Lamport’s algorithm is fair.


Proof: The proof is by contradiction.
 Suppose a site Si's request has a smaller timestamp than the request of another site Sj
and Sj is able to execute the CS before Si.
 For Sj to execute the CS, it has to satisfy conditions L1 and L2. This implies that
at some instant in time, say t, Sj has its own request at the top of its queue and it has
also received a message with timestamp larger than the timestamp of its request from
all other sites.
 But the request queue at a site is ordered by timestamp, and according to our assumption
Si has the lower timestamp. So Si's request must be placed ahead of Sj's request in
request_queuej. This is a contradiction!

An Example
In Figures 9.3 to 9.6, we illustrate the operation of Lamport's algorithm. In Figure
9.3, sites S1 and S2 are making requests for the CS and send out REQUEST messages to other
sites. The timestamps of the requests are (1, 1) and (1, 2), respectively. In Figure 9.4, both
sites S1 and S2 have received REPLY messages from all other sites. S1 has its request at the
top of its request_queue but site S2 does not have its request at the top of its request_queue.
Consequently, site S1 enters the CS. In Figure 9.5, S1 exits and sends RELEASE messages to
all other sites. In Figure 9.6, site S2 has received REPLY from all other sites and has also received
a RELEASE message from site S1. Site S2 updates its request_queue and its request is now at
the top of its request_queue. Consequently, it enters the CS next.

Message Complexity:
Lamport's algorithm requires 3(N – 1) messages per critical section execution.
These 3(N – 1) messages involve:
 (N – 1) request messages
 (N – 1) reply messages
 (N – 1) release messages

Drawbacks of Lamport’s Algorithm:


 Unreliable approach: failure of any one of the processes will halt the progress of the
entire system.
 High message complexity: the algorithm requires 3(N – 1) messages per critical section
invocation.

Performance:
Synchronization delay is equal to maximum message transmission time. It requires 3(N – 1)
messages per CS execution. Algorithm can be optimized to 2(N – 1) messages by omitting
the REPLY message in some situations.

RICART–AGRAWALA ALGORITHM

 Ricart–Agrawala algorithm is an algorithm for mutual exclusion in a distributed
system proposed by Glenn Ricart and Ashok Agrawala.
 This algorithm is an extension and optimization of Lamport's distributed mutual
exclusion algorithm.
 It follows the permission-based approach to ensure mutual exclusion.
 Two types of messages (REQUEST and REPLY) are used, and communication
channels are assumed to follow FIFO order.
 A site sends a REQUEST message to all other sites to get their permission to enter the
critical section.
 A site sends a REPLY message to another site to give its permission to enter the critical
section.
 A timestamp is given to each critical section request using Lamport's logical clock.
 Timestamps are used to determine the priority of critical section requests.
 A smaller timestamp gets higher priority than a larger timestamp.
 The execution of critical section requests is always in the order of their timestamps.

Fig 3.2: Ricart–Agrawala algorithm

To enter Critical section:


 When a site Si wants to enter the critical section, it sends a timestamped REQUEST
message to all other sites.
 When a site Sj receives a REQUEST message from site Si, it sends a REPLY message
to site Si if site Sj is neither requesting nor currently executing the critical section, or
if site Sj is requesting but the timestamp of site Si's request is smaller than its own.
 Otherwise, the request is deferred by site Sj.

To execute the critical section:


Site Si enters the critical section if it has received a REPLY message from all other
sites.

To release the critical section:

Upon exiting, site Si sends a REPLY message to all the deferred requests.
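The REQUEST/REPLY/defer logic above can be sketched in a single process. As with the earlier sketches, this is an illustration under assumptions: direct method calls simulate messages, entry happens inside `on_reply` for brevity, and all names are hypothetical.

```python
class RASite:
    """Sketch of one Ricart-Agrawala site (messages simulated by direct calls)."""

    def __init__(self, sid, sites):
        self.id, self.sites, self.clock = sid, sites, 0
        self.requesting, self.in_cs = False, False
        self.my_ts = None
        self.replies = set()
        self.deferred = []            # senders whose REPLY is postponed

    def request_cs(self):
        """Broadcast a timestamped REQUEST to all other sites."""
        self.clock += 1
        self.requesting, self.my_ts = True, (self.clock, self.id)
        self.replies = set()
        for s in self.sites:
            if s is not self:
                s.on_request(self.my_ts, self)

    def on_request(self, ts, sender):
        """REPLY unless in the CS, or requesting with higher (smaller-ts) priority."""
        self.clock = max(self.clock, ts[0]) + 1
        if self.in_cs or (self.requesting and self.my_ts < ts):
            self.deferred.append(sender)
        else:
            sender.on_reply(self.id)

    def on_reply(self, from_id):
        self.replies.add(from_id)
        if len(self.replies) == len(self.sites) - 1:
            self.in_cs = True         # all REPLYs received: enter the CS

    def release_cs(self):
        """Exit the CS and answer every deferred request."""
        self.in_cs, self.requesting = False, False
        for s in self.deferred:
            s.on_reply(self.id)
        self.deferred = []
```

Note the saving over Lamport's algorithm: the REPLY sent on exit doubles as the RELEASE, so a cycle costs only (N − 1) REQUEST + (N − 1) REPLY calls.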

Theorem: Ricart-Agrawala algorithm achieves mutual exclusion.


Proof: Proof is by contradiction.
 Suppose two sites Si and Sj are executing the CS concurrently and Si's request has
higher priority than the request of Sj. Clearly, Si received Sj's request after it had
made its own request.
 Thus, Sj can concurrently execute the CS with Si only if Si returns a REPLY to Sj (in
response to Sj's request) before Si exits the CS.
 However, this is impossible because Sj's request has lower priority. Therefore, the
Ricart–Agrawala algorithm achieves mutual exclusion.

An Example

Figures 9.7 to 9.10 illustrate the operation of Ricart-Agrawala algorithm. In


Figure 9.7, sites S1 and S2 are making requests for the CS and send out REQUEST messages
to other sites. The timestamps of the requests are (2, 1) and (1, 2), respectively. In Figure 9.8,
S2 has received REPLY messages from all other sites and consequently, it enters the CS. In
Figure 9.9, S2 exits the CS and sends a REPLY message to site S1. In Figure 9.10, site S1 has
received REPLY from all other sites and enters the CS next.

Message Complexity:

Ricart–Agrawala algorithm requires invocation of 2(N – 1) messages per critical section


execution. These 2(N – 1) messages involve:
 (N – 1) request messages
 (N – 1) reply messages

Drawbacks of Ricart–Agrawala algorithm:


 Unreliable approach: failure of any one node in the system can halt the progress
of the system. In this situation, the requesting process may starve forever. The problem
of node failure can be solved by detecting the failure after some timeout.

Performance:
Synchronization delay is equal to the maximum message transmission time. It requires
2(N – 1) messages per critical section execution.
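As a quick comparison of the per-CS message counts stated in this unit (function names are illustrative):

```python
def lamport_messages(n: int) -> int:
    return 3 * (n - 1)      # REQUEST + REPLY + RELEASE to every other site

def ricart_agrawala_messages(n: int) -> int:
    return 2 * (n - 1)      # REQUEST + REPLY only

def suzuki_kasami_messages(n: int) -> int:
    return n                # worst case: (N - 1) REQUESTs + 1 token message

for n in (5, 10, 100):
    print(n, lamport_messages(n), ricart_agrawala_messages(n),
          suzuki_kasami_messages(n))
```

For N = 100 sites this gives 297, 198, and 100 messages respectively, which is why token-based schemes scale better in message traffic at the cost of token-loss handling.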
