05a Mutex Slides

The document discusses different techniques for achieving mutual exclusion in distributed systems, including centralized, token-based, and contention-based algorithms. A centralized algorithm uses a single coordinator process to grant exclusive access to shared resources. A token-based algorithm passes a token around a logical ring of processes to control access. Contention-based algorithms, such as Lamport's and Ricart & Agrawala's, reach distributed agreement using reliable multicast and Lamport timestamps.

CS 417 – DISTRIBUTED SYSTEMS

Week 5: Part 1
Distributed Mutual Exclusion

Paul Krzyzanowski

© 2022 Paul Krzyzanowski. No part of this content may be reproduced or reposted in whole or in part in any manner without the permission of the copyright owner.
Process Synchronization
Techniques to coordinate execution among processes
– One process may have to wait for another
– Shared resource (critical section) may require exclusive access

Mutual exclusion
Examples:
• Update fields in database tables
• Modify a shared file
• Modify file contents that are replicated on multiple servers

Easy to handle if the entire request is atomic
• Contained in a single message; the server can manage mutual exclusion

Needs to be coordinated if the request comprises multiple messages or spans multiple systems
Centralized Systems
Achieve mutual exclusion via:
– Test & set in hardware
– Semaphores
– Messages (inter-process)
– Condition variables

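For contrast with the distributed case, here is a minimal single-system sketch (not from the slides; plain Python threading) of one of these mechanisms: a lock, i.e., a binary semaphore, guarding a critical section.

```python
# Single-machine mutual exclusion with a lock (could equally be
# threading.Semaphore(1)). Illustration only.
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:           # only one thread at a time runs this block
            counter += 1     # the critical section

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)               # 40000 with the lock; updates may be lost without it
```

The rest of the deck asks how to get the same guarantee when the cooperating processes run on different machines and can only exchange messages.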


Distributed Mutual Exclusion
Goal:
Create an algorithm to allow a process to request and obtain exclusive access
to a resource that is available on the network
Required properties:
Safety: At any instant, only one process may hold the resource
Liveness: The algorithm should make progress; processes should not wait forever for messages that will never arrive

Also desired:
Fairness: Each process gets a fair chance to hold the resource: bounded wait time & in-order processing of requests



Assumptions
• Resource identification
– Assume there is agreement on how a resource is identified
• Pass this identifier with each request
• e.g., lock("printer"), lock("table:employees"), lock("table:employees;row:15"), lock("shared_file.txt")
– We’ll just use request(R) to request exclusive access to resource R

• Process identification
– Every process has a unique ID (e.g., address.process_id)

• Reliable communication
– Network messages are reliable (may require retransmission of lost/corrupted messages)

• Live processes
– The processes in the system do not die



Categories of mutual exclusion algorithms
• Centralized
– A process can access a resource because a central coordinator allowed it to do so

• Token-based
– A process can access a resource if it is holding a token permitting it to do so

• Contention-based
– A process can access a resource via distributed agreement



Centralized algorithm
• Mimic single processor system
• One process elected as coordinator

1. Request resource: send request(R) to coordinator C
2. Wait for response
3. Receive grant(R)
4. Access resource
5. Release resource: send release(R) to C

[Figure: process P exchanges request(R), grant(R), and release(R) messages with coordinator C]



Centralized algorithm
If another process has claimed the resource:
– Coordinator does not reply until release
– Maintain queue: service requests in FIFO order

[Figure sequence, slides 8-11: while P0 holds R ("R in use by: P0"), request(R) messages from P1 and then P2 arrive at coordinator C and are placed on the request queue in FIFO order. When the holder sends release(R), C sends grant(R) to the process at the head of the queue, and so on.]


Centralized algorithm
Benefits
• Fair: All requests processed in order
• Easy to implement, understand, and verify
• Processes do not need to know group members – just the coordinator
• Efficiency: 2 messages to enter, 1 message to exit
Problems
• Process cannot distinguish being blocked from a dead coordinator
– single point of failure
• Centralized server can be a bottleneck (unlikely!)

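A minimal sketch of the centralized algorithm (illustration only, not the course's code): Python threads and queue.Queue stand in for the network messages exchanged between the processes and the coordinator C.

```python
# Centralized mutual exclusion sketch: the coordinator grants R in FIFO order,
# deferring grant(R) while the resource is held.
import queue
import threading

class Coordinator:
    def __init__(self):
        self.lock = threading.Lock()    # protects coordinator state
        self.busy = False               # is R currently granted?
        self.waiting = []               # FIFO queue of deferred requesters

    def request(self, reply_q):         # a process sent request(R)
        with self.lock:
            if not self.busy:
                self.busy = True
                reply_q.put("grant(R)")       # reply immediately
            else:
                self.waiting.append(reply_q)  # defer reply until release

    def release(self):                  # a process sent release(R)
        with self.lock:
            if self.waiting:
                self.waiting.pop(0).put("grant(R)")  # next in FIFO order
            else:
                self.busy = False

def process(name, coord):
    inbox = queue.Queue()
    coord.request(inbox)                # 1. request resource, 2. wait
    inbox.get()                         # 3. receive grant(R)
    print(f"{name} using resource R")   # 4. access resource
    coord.release()                     # 5. release resource

coord = Coordinator()
procs = [threading.Thread(target=process, args=(f"P{i}", coord)) for i in range(3)]
for t in procs: t.start()
for t in procs: t.join()
```

The two failure concerns above show up directly: a blocked process sits in inbox.get() whether the coordinator is busy or dead, and everything funnels through the single Coordinator object.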


Token Ring algorithm
Assume known group of processes
– Some ordering can be imposed on group (unique process IDs)
– Construct logical ring in software
– Process communicates with its neighbor

[Figure: logical ring of six processes P0, P1, P2, P3, P4, P5, each communicating with its neighbor]



Token Ring algorithm
• Initialization
– Process 0 creates a token for resource R

• Token circulates around the ring from Pi to P(i+1) mod N

• When a process acquires the token
– Checks to see if it needs to enter the critical section
– If no, sends the token to its neighbor
– If yes, accesses the resource
• Holds the token until done

[Figure: token(R) circulating around the ring P0 … P5]

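A minimal simulation sketch of the token ring (illustration only; Python threads with queue.Queue channels standing in for the neighbor links, and the token retired after a fixed number of laps so the demo terminates):

```python
# Token ring sketch: each Pi forwards the token for R to P(i+1) mod N,
# entering the critical section only while it holds the token.
import queue
import threading

N = 4
LAPS = 2
channels = [queue.Queue() for _ in range(N)]   # channels[i] = inbox of Pi
wants_cs = [True] * N                          # each process wants R once

def process(i):
    laps = 0
    while True:
        msg = channels[i].get()                # wait for a message from the ring
        if msg == "stop":                      # tear the ring down
            channels[(i + 1) % N].put("stop")
            return
        if wants_cs[i]:                        # msg is the token for R
            print(f"P{i} in critical section") # hold the token while using R
            wants_cs[i] = False
        if i == 0:
            laps += 1
            if laps == LAPS:                   # P0 retires the token after LAPS laps
                channels[(i + 1) % N].put("stop")
                return
        channels[(i + 1) % N].put(msg)         # pass token(R) to P(i+1) mod N

threads = [threading.Thread(target=process, args=(i,)) for i in range(N)]
for t in threads: t.start()
channels[0].put("token(R)")                    # P0 creates the token for R
for t in threads: t.join()
```

Even this toy version shows the "constant activity" issue noted later: the token keeps circulating whether or not anyone wants the resource.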


Token Ring algorithm

[Animation, slides 15-22: the token ("Your turn to access resource R") is passed around the ring from P0 to P1, P2, P3, P4, P5, and back to P0.]


Token Ring algorithm summary
• Safety: Only one process at a time has token
– Mutual exclusion guaranteed

• Liveness: Order well-defined (but not necessarily first-come, first-served)


– Starvation cannot occur
– Lack of FCFS ordering may be undesirable sometimes

• Delay:
– Request = 0…N-1 messages
– Release = 1 message



Token Ring algorithm summary
• Downsides/Problems
– Constant activity: the token circulates even when no process needs the resource
– Token loss (e.g., process died)
• It will have to be regenerated
• Detecting loss may be a problem – is the token lost or just in use by someone?
– Process loss: what if you can't talk to your neighbor?



Lamport’s Mutual Exclusion
Distributed algorithm using reliable multicast and logical clocks

• Messages are sent reliably and in single-source FIFO order


– Each message is time stamped with totally ordered Lamport timestamps
• Ensures that each timestamp is unique
• Every node can make the same decision by comparing timestamps

• Each process maintains request queue


– Queue contains mutual exclusion requests
– Queues are sorted by message timestamps

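A small sketch (not from the slides) of how the timestamps are made totally ordered: the usual construction breaks clock ties with the process ID, so every (clock, pid) pair is unique and all nodes sort requests identically.

```python
# Totally ordered Lamport timestamps: compare (clock, pid) lexicographically.
from dataclasses import dataclass

@dataclass(order=True)
class Timestamp:
    clock: int
    pid: int

class LamportClock:
    def __init__(self, pid):
        self.pid, self.clock = pid, 0

    def tick(self):                      # local event or message send
        self.clock += 1
        return Timestamp(self.clock, self.pid)

    def update(self, ts):                # on message receive
        self.clock = max(self.clock, ts.clock) + 1
        return Timestamp(self.clock, self.pid)

a, b = LamportClock(4), LamportClock(8)
t1, t2 = a.tick(), b.update(a.tick())
print(sorted([t2, t1]))                  # every node sorts these the same way
```

Because every node breaks ties the same way, the per-process request queues sorted by these timestamps end up identical everywhere, which is what the sample queue on the next slides relies on.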


1. Request a Resource
Request a critical section:
• Process Pi sends Request(R, i, Ti) to all nodes
– It also places the same request onto its own queue

• When a process Pj receives a request:
– It returns a timestamped Reply(Tj)
– Places the request on its request queue

Every process will have an identical queue
– Same contents in the same order

Sample request queue for R (identical at each process):
Process   Lamport timestamp
P4        1021
P8        1022
P1        3944
P6        8201
P12       9638



2. Use the Resource
Enter a critical section (accessing resource):
• Pi has received Reply messages from every process Pj where Tj > Ti
• Pi's request has the earliest timestamp in its queue

If your request is at the head of the queue AND you received Replies for that request … then you can access the critical section

(Sample request queue for R as on the previous slide, identical at each process.)



3. Release the resource
Release a critical section:
• Process Pi removes its request from its queue
• Sends Release(i, Ti ) to all nodes
• Each process now checks if its request is the earliest in its queue
• If so, that process now has the critical section

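A compact per-node sketch of the three steps (illustration only, not the course's code; it assumes a send(dst, msg) callback for reliable delivery and (clock, pid) tuples as the totally ordered timestamps):

```python
# Lamport's mutual exclusion, state kept at one node. Illustration only;
# `send` and `peers` are assumed to be provided by a reliable messaging layer.
class LamportMutex:
    def __init__(self, pid, peers, send):
        self.pid, self.peers, self.send = pid, peers, send
        self.clock = 0
        self.queue = []                      # request queue, sorted by timestamp
        self.last_reply = {}                 # peer -> timestamp of its last Reply

    def stamp(self, seen=0):
        self.clock = max(self.clock, seen) + 1
        return self.clock

    def request(self):                       # 1. request the resource
        ts = (self.stamp(), self.pid)
        self.queue.append(ts); self.queue.sort()
        for p in self.peers:
            self.send(p, ("request", ts))
        return ts

    def on_request(self, ts, sender):        # peer's Request(R, j, Tj) arrives
        self.stamp(ts[0])
        self.queue.append(ts); self.queue.sort()
        self.send(sender, ("reply", (self.stamp(), self.pid)))

    def on_reply(self, ts, sender):
        self.stamp(ts[0])
        self.last_reply[sender] = ts

    def may_enter(self, my_ts):              # 2. use the resource when...
        at_head = self.queue[0] == my_ts     # ...own request is earliest, and
        heard = all(self.last_reply.get(p, (0, p)) > my_ts for p in self.peers)
        return at_head and heard             # ...every peer replied with Tj > Ti

    def release(self, my_ts):                # 3. release the resource
        self.queue.remove(my_ts)
        for p in self.peers:
            self.send(p, ("release", (self.stamp(), self.pid)))

    def on_release(self, ts, sender):        # drop sender's request everywhere
        self.stamp(ts[0])
        self.queue = [q for q in self.queue if q[1] != sender]
```

Every node applies the same rules to an identical queue, so all nodes agree on which request is at the head; the Release broadcast is what lets the next process's request reach the head.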


Assessment: Lamport’s Mutual Exclusion
• Safety: Replicated queues – same process on top
• Liveness: Sorted queue & Lamport timestamps ensure earlier processes go first
• Delay/Bandwidth:
– Request = 2(N-1) messages: (N-1) Request msgs + (N-1) Reply msgs
– Release = (N-1) Release msgs

• Problems
– N points of failure
– A lot of messaging traffic
• Requests & releases are sent to the entire group

Not great … but demonstrates that a fully distributed algorithm is possible



Ricart & Agrawala algorithm
Another contention-based distributed algorithm
using reliable multicast and logical clocks

When a process wants to enter critical section:


1. Compose a Request(R, i, Ti ) message containing:
• R: Name of resource
• i: Process Identifier (machine ID, process ID)
• Ti: Timestamp (totally-ordered Lamport)

2. Reliably multicast request to all processes in group

3. Wait until everyone gives permission (sends a Reply)

4. Enter critical section / use resource



Ricart & Agrawala algorithm
When process receives a request:
– If receiver not interested: send Reply to sender
– If receiver is in critical section: do not reply; add request to queue
– If receiver just sent a request as well: (potential race condition)
• Compare timestamps on received & sent messages: earliest timestamp wins
• If receiver is the loser: send Reply
• If receiver is the winner: do not reply – queue the request
– When done with critical section: send Reply to all queued requests

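A per-node sketch in the same style as the Lamport one above (illustration only; again assumes a send(dst, msg) callback and (clock, pid) timestamps):

```python
# Ricart & Agrawala, state kept at one node. Illustration only; `send` and
# `peers` are assumed to come from a reliable messaging layer.
class RicartAgrawala:
    def __init__(self, pid, peers, send):
        self.pid, self.peers, self.send = pid, peers, send
        self.clock = 0
        self.state = "released"              # released | wanted | held
        self.my_ts = None
        self.replies = set()
        self.deferred = []                   # senders whose Reply is held back

    def stamp(self, seen=0):
        self.clock = max(self.clock, seen) + 1
        return self.clock

    def request(self):                       # multicast Request(R, i, Ti)
        self.state = "wanted"
        self.my_ts = (self.stamp(), self.pid)
        self.replies = set()
        for p in self.peers:
            self.send(p, ("request", self.my_ts))

    def on_request(self, ts, sender):
        self.stamp(ts[0])
        defer = (self.state == "held" or
                 (self.state == "wanted" and self.my_ts < ts))  # I win the tie
        if defer:
            self.deferred.append(sender)     # hold the Reply until release
        else:
            self.send(sender, ("reply", None))

    def on_reply(self, sender):
        self.replies.add(sender)
        if self.replies == set(self.peers):  # everyone gave permission
            self.state = "held"              # enter the critical section

    def release(self):                       # done: Reply to all queued requests
        self.state = "released"
        for p in self.deferred:
            self.send(p, ("reply", None))
        self.deferred = []
```

Withholding the Reply while holding the resource (or while winning a tie) is what replaces Lamport's explicit Release broadcast, which is why the message count drops to 2(N-1).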


Assessment: Ricart & Agrawala Mutual Exclusion
• Safety: Two competing processes will not send a REPLY to each other
– Timestamps in the requests are unique – one will be earlier than the other

• Liveness: Ordered by Lamport timestamp if there is contention


• Delay/Bandwidth:
– Request = 2(N-1) messages: (N-1) Request msgs + (N-1) Reply msgs
– Release = 0 … (N-1) Reply msgs to queued requests

• Problems
– N points of failure
– A lot of messaging traffic: requests & releases are sent to the entire group



Lamport vs. Ricart & Agrawala
Lamport
– Everyone replies … always – no hold-back
– 3(N-1) messages
• Request – Reply – Release
– Process decides to go based on whether its request is the earliest in its queue

Ricart & Agrawala


– If you are in the critical section (or won a tie)
• Don’t respond with a Reply until you are done with the critical section
– 2(N-1) messages
• Request – ACK
– Process decides to go if it gets ACKs from everyone



Other distributed mutex algorithms
• Suzuki-Kasami
– Adds a token to Ricart & Agrawala
– Improves performance to (N-1) requests and 1 reply

• Maekawa
– Partitions the group – each subgroup has at least one process in common with another subgroup
– Performance improved to 3√N … 6√N messages

• Many more…



The End

