CST402 Distributed Computing M3

The document discusses distributed mutual exclusion algorithms and deadlock detection in distributed systems, detailing various algorithms such as Lamport's, Ricart-Agrawala, and quorum-based methods like Maekawa's algorithm. It outlines the system model, requirements for mutual exclusion, and performance metrics, emphasizing the importance of message passing for mutual exclusion in distributed environments. Additionally, it addresses the challenges of deadlock detection and the strategies for handling deadlocks in distributed systems.

Uploaded by Gautam Uttam

CST402 DISTRIBUTED COMPUTING

Module – 3 (Mutual exclusion and Deadlock detection)

Distributed mutual exclusion algorithms – System model, Requirements of
mutual exclusion algorithm. Lamport’s algorithm, Ricart–Agrawala algorithm,
Quorum-based mutual exclusion algorithms – Maekawa’s algorithm. Token-
based algorithm – Suzuki–Kasami’s broadcast algorithm.
Deadlock detection in distributed systems – System model, Deadlock handling
strategies, Issues in deadlock detection, Models of deadlocks.

Distributed Mutual Exclusion Algorithms

• Mutual exclusion: Concurrent access of processes to a shared resource or data is executed in a mutually exclusive manner.
• Only one process is allowed to execute the critical section (CS) at any given
time.
• In a distributed system, shared variables (semaphores) or a local kernel cannot
be used to implement mutual exclusion.
• Message passing is the sole means for implementing distributed mutual
exclusion.

Distributed Mutual Exclusion Algorithms

• Distributed mutual exclusion algorithms must deal with unpredictable message delays and incomplete knowledge of the system state.
• Three basic approaches for distributed mutual exclusion:
1 Token based approach
2 Non-token based approach
3 Quorum based approach

Distributed Mutual Exclusion Algorithms

• Token-based approach:
 A unique token is shared among the sites.
 A site is allowed to enter its CS if it possesses the token.
 Mutual exclusion is ensured because the token is unique
• Non-token based approach:
 Two or more successive rounds of messages are exchanged among the
sites to determine which site will enter the CS next.
• Quorum based approach:
 Each site requests permission to execute the CS from a subset of sites
(called a quorum).
 Any two quorums contain a common site.
 This common site is responsible to make sure that only one request
executes the CS at any time.
System Model
• The system consists of N sites, S1, S2, ..., SN.
• We assume that a single process is running on each site. The process at site Si is
denoted by pi.

• A site can be in one of the following three states: requesting the CS, executing the
CS, or neither requesting nor executing the CS (i.e., idle).
• In the ‘requesting the CS’ state, the site is blocked and cannot make further
requests for the CS.
• In the ‘idle’ state, the site is executing outside the CS.
• In token-based algorithms, a site can also be in a state where a site holding the
token is executing outside the CS (called the idle token state).
• At any instant, a site may have several pending requests for CS. A site queues up
these requests and serves them one at a time.
Requirements of Mutual Exclusion Algorithms
1. Safety Property: At any instant, only one process can execute the critical
section.
2. Liveness Property: This property states the absence of deadlock and
starvation. Two or more sites should not endlessly wait for messages which
will never arrive.
3. Fairness: Each process gets a fair chance to execute the CS. Fairness property
generally means the CS execution requests are executed in the order of their
arrival (time is determined by a logical clock) in the system.

Distributed mutual exclusion algorithms
Performance metrics: The performance of mutual exclusion
algorithms is generally measured by the following four metrics:
1. Message complexity: This is the number of messages that are required per CS execution by a site.
2. Synchronization delay: After a site leaves the CS, it is the time required before the next site enters the CS.

Figure 1: Synchronization Delay.


Distributed mutual exclusion algorithms
Performance metrics
3. Response time: This is the time interval a request waits for its CS execution to be over after its request messages have been sent out.

Figure 2: Response Time

4. System throughput: This is the rate at which the system executes requests for the CS. If SD is the synchronization delay and E is the average critical section execution time, then the throughput is given by the following equation:

Throughput = 1 / (SD + E)
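The throughput relation above, throughput = 1/(SD + E), can be checked with a quick computation; the numeric values here are purely illustrative:

```python
def throughput(sd, e):
    """Rate of CS executions: one CS completes every SD + E time units."""
    return 1.0 / (sd + e)

# With synchronization delay SD = 2 and average CS execution time E = 8,
# the system completes one CS execution every 10 time units.
rate = throughput(2, 8)  # 0.1 CS executions per time unit
```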
Lamport’s algorithm
• Lamport developed a distributed mutual exclusion algorithm as an illustration
of his clock synchronization scheme
• The algorithm is fair in the sense that requests for the CS are executed in the order of their timestamps, where time is determined by logical clocks.
• When a site processes a request for the CS, it updates its local clock and
assigns the request a timestamp.
• The algorithm executes CS requests in the increasing order of timestamps.
• Every site Si keeps a queue, request_queuei , which contains mutual exclusion
requests ordered by their timestamps.
• This algorithm requires communication channels to deliver messages in FIFO
order.

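The total ordering used by the algorithm — requests compared by timestamp, with site ids breaking ties — can be sketched as follows. This is an illustrative fragment, not the full algorithm; the class and method names are our assumptions:

```python
import heapq

class LamportSite:
    """One site's logical clock and request_queue (illustrative sketch)."""

    def __init__(self, site_id):
        self.site_id = site_id
        self.clock = 0
        self.request_queue = []  # min-heap of (timestamp, site_id) pairs

    def receive(self, msg_timestamp):
        # Lamport clock rule: advance past the sender's timestamp
        self.clock = max(self.clock, msg_timestamp) + 1

    def enqueue_request(self, timestamp, site_id):
        # Requests form a total order under (timestamp, site_id):
        # equal timestamps are broken by the smaller site id
        heapq.heappush(self.request_queue, (timestamp, site_id))

    def head_request(self):
        # The request at the top of request_queue is served first
        return self.request_queue[0]

s = LamportSite(2)
s.receive(5)             # clock jumps to max(0, 5) + 1 = 6
s.enqueue_request(1, 2)
s.enqueue_request(1, 1)  # same timestamp, lower site id => higher priority
```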
Lamport’s algorithm

Lamport’s algorithm
Correctness
• Theorem 9.1 Lamport’s algorithm achieves mutual exclusion.
• Proof
• Proof is by contradiction.
• Suppose two sites Si and Sj are executing the CS concurrently. For this to
happen conditions L1 and L2 must hold at both the sites concurrently. This
implies that at some instant in time, say t, both Si and Sj have their own
requests at the top of their request_queues and condition L1 holds at them.
Without loss of generality, assume that Si’s request has smaller timestamp
than the request of Sj. From condition L1 and FIFO property of the
communication channels, it is clear that at instant t the request of Si must be
present in request_queue j when Sj was executing its CS. This implies that Sj’s
own request is at the top of its own request_queue when a smaller
timestamp request, Si’s request, is present in the request_queue j – a
contradiction! Hence, Lamport’s algorithm achieves mutual exclusion.
Lamport’s algorithm
Theorem 9.2 Lamport’s algorithm is fair.
Proof
A distributed mutual exclusion algorithm is fair if the requests for CS are
executed in the order of their timestamps. The proof is by contradiction.
Suppose a site Si’s request has a smaller timestamp than the request of another
site Sj and Sj is able to execute the CS before Si. For Sj to execute the CS, it has
to satisfy the conditions L1 and L2. This implies that at some instant in time Sj
has its own request at the top of its queue and it has also received a message
with timestamp larger than the timestamp of its request from all other sites.
But the request_queue at a site is ordered by timestamp, and according to our assumption Si has the lower timestamp. So Si’s request must be placed ahead of Sj’s request in request_queue j. This is a contradiction. Hence Lamport’s algorithm is a fair mutual exclusion algorithm.

Lamport’s algorithm
Example
In Figure 9.3, sites S1 and S2 are making requests for the CS and send out REQUEST messages
to other sites. The timestamps of the requests are (1,1) and (1,2), respectively. In Figure 9.4,
both the sites S1 and S2 have received REPLY messages from all other sites. S1 has its request
at the top of its request_queue but site S2 does not have its request at the top of its
request_queue.

Figure 9.3 Sites S1 and S2 are Making Requests for the CS

Figure 9.4 Site S1 enters the CS.

Lamport’s algorithm
Example
In Figure 9.5, S1 exits and sends RELEASE messages to all other sites. In Figure 9.6, site S2 has received REPLY from all other sites and has also received a RELEASE message from site S1. Site S2 updates its request_queue and its request is now at the top of its request_queue. Consequently, it enters the CS next.

Figure 9.5 S1 exits and sends RELEASE messages to all other sites.

Figure 9.6 Site S2 enters the CS.

Lamport’s algorithm

• Performance
• For each CS execution, Lamport’s algorithm requires:
• N − 1 REQUEST messages,
• N − 1 REPLY messages,
• and N − 1 RELEASE messages.
• Thus, Lamport’s algorithm requires 3(N − 1) messages per CS invocation. The synchronization delay in the algorithm is T, the average message delay.

Ricart–Agrawala algorithm
 The Ricart–Agrawala algorithm assumes that the communication channels are
FIFO.
 The algorithm uses two types of messages: REQUEST and REPLY.
 A process sends a REQUEST message to all other processes to request their
permission to enter the critical section.
 A process sends a REPLY message to a process to give its permission to that
process.
 Processes use Lamport-style logical clocks to assign a timestamp to critical
section requests.
 Timestamps are used to decide the priority of requests in case of conflict

Ricart–Agrawala algorithm
 If a process pi that is waiting to execute the critical section receives a REQUEST message from process pj, then if the priority of pj’s request is lower, pi defers the REPLY to pj and sends a REPLY message to pj only after executing the CS for its pending request.
 Otherwise, pi sends a REPLY message to pj immediately, provided it is
currently not executing the CS.
 Each process pi maintains the request-deferred array, RDi, the size of which
is the same as the number of processes in the system.
 Initially, ∀i ∀j: RDi[j] = 0.
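The deferral rule above can be sketched as a single receive handler. The dictionary layout and function names below are assumptions made for illustration; timestamps are (clock, site id) pairs so that tuple comparison matches the Lamport-style total order:

```python
def higher_priority(req_a, req_b):
    """Smaller (timestamp, site_id) pair wins; ties cannot occur
    because site ids are unique."""
    return req_a < req_b

def on_request(state, j, req_ts):
    """Sketch of site i handling a REQUEST from site j.

    state: dict with keys
      'requesting' - True if site i has a pending CS request
      'in_cs'      - True if site i is executing the CS
      'my_ts'      - site i's own (timestamp, site_id), or None
      'RD'         - request-deferred array; RD[j] == 1 means reply deferred
    Returns 'reply' or 'defer'.
    """
    defer = state['in_cs'] or (
        state['requesting'] and higher_priority(state['my_ts'], req_ts))
    if defer:
        state['RD'][j] = 1   # REPLY sent only after our own CS execution
        return 'defer'
    return 'reply'

# Site 1 is requesting with timestamp (3, 1); site 2's request (5, 2) loses
state = {'requesting': True, 'in_cs': False, 'my_ts': (3, 1), 'RD': [0] * 4}
decision = on_request(state, 2, (5, 2))  # 'defer'
```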

Ricart–Agrawala algorithm

Ricart–Agrawala algorithm
• Correctness
• Theorem 9.3 Ricart–Agrawala algorithm achieves mutual exclusion.
• Proof
• Proof is by contradiction. Suppose two sites Si and Sj are executing the CS
concurrently and Si’s request has higher priority (i.e., smaller timestamp)
than the request of Sj. Clearly, Si received Sj’s request after it has made its
own request. (Otherwise, Si’s request will have lower priority.) Thus, Sj can
concurrently execute the CS with Si only if Si returns a REPLY to Sj (in
response to Sj’s request) before Si exits the CS. However, this is impossible
because Sj’s request has lower priority. Therefore, the Ricart–Agrawala
algorithm achieves mutual exclusion.

Ricart–Agrawala algorithm
• Example
• In Figure 9.7, sites S1 and S2 are each making requests for the CS and sending out
REQUEST messages to other sites.

Figure 9.7 Sites S1 and S2 each make a request for the CS.

Figure 9.8 Site S1 enters the CS

Ricart–Agrawala algorithm
• Example
• In Figure 9.9, site S1 exits the CS and sends a REPLY message to S2’s deferred request. In Figure 9.10, site S2 enters the CS.

Figure 9.9 Site S1 exits the CS and sends a REPLY message to S2’s deferred request.

Figure 9.10 Site S2 enters the CS.

Ricart–Agrawala algorithm

Performance
For each CS execution, the Ricart–Agrawala algorithm requires
• N − 1 REQUEST messages
• and N − 1 REPLY messages.
• Thus, it requires 2(N −1) messages per CS execution. The synchronization
delay in the algorithm is T.

Quorum-based mutual exclusion algorithms
 Quorum-based mutual exclusion algorithms represent a departure from the earlier trend in the following two ways:
 A site does not request permission from all other sites, but only from a subset of the sites. This is a radically different approach compared to the Lamport and Ricart–Agrawala algorithms, where all sites participate in the conflict resolution of all other sites.
 A site can send out only one REPLY message at any time; it can send a REPLY message only after it has received a RELEASE message for the previous REPLY message. Therefore, a site Si locks all the sites in its request set Ri in exclusive mode before executing its CS.
Quorum-based mutual exclusion algorithms
 Quorum-based mutual exclusion algorithms significantly reduce the message
complexity of invoking mutual exclusion by having sites ask permission from only
a subset of sites.
 Since these algorithms are based on the notion of “Coteries” and “Quorums,” we
first describe the idea of coteries and quorums.
 A coterie C is defined as a set of sets, where each set g ∈C is called a quorum.
The following properties hold for quorums in a coterie:
1. Intersection property: For every pair of quorums g, h ∈ C, g ∩ h ≠ ∅. For example, sets {1,2,3}, {2,5,7}, and {5,7,9} cannot all be quorums in a coterie because the first and third sets do not have a common element.
2. Minimality property: There should be no quorums g, h in coterie C such that g ⊇ h. For example, sets {1,2,3} and {1,3} cannot both be quorums in a coterie because the first set is a superset of the second.
 Coteries and quorums can be used to develop algorithms to ensure mutual
exclusion in a distributed environment.
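The two coterie properties can be checked mechanically; the following sketch (the function name is ours) tests both against the counter-examples given in the text:

```python
def is_coterie(quorums):
    """Check the two coterie properties: every pair of quorums
    intersects, and no quorum is a superset of another."""
    qs = [set(q) for q in quorums]
    for i, g in enumerate(qs):
        for h in qs[i + 1:]:
            if not (g & h):       # intersection property violated
                return False
            if g <= h or h <= g:  # minimality property violated
                return False
    return True

# The counter-examples from the text:
assert not is_coterie([{1, 2, 3}, {2, 5, 7}, {5, 7, 9}])  # {1,2,3} and {5,7,9} are disjoint
assert not is_coterie([{1, 2, 3}, {1, 3}])                # superset breaks minimality
```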
Quorum-based mutual exclusion algorithms
 A simple protocol works as follows: let “a” be a site in quorum “A.”
 If “a” wants to invoke mutual exclusion, it requests permission from all sites in its quorum “A.”
 The minimality property is not necessary for correctness, but it ensures efficiency.

Maekawa’s algorithm
 Maekawa’s algorithm was the first quorum-based mutual exclusion algorithm.
 This algorithm requires delivery of messages to be in the order they are sent between every pair of sites.
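Maekawa’s optimal request sets come from finite projective planes; a simpler scheme with the same pairwise-intersection guarantee arranges the N = k × k sites in a grid and takes each site’s row plus its column. The grid construction below is our illustration, not Maekawa’s exact construction; it gives sets of size 2√N − 1, the same O(√N) order discussed in the performance analysis:

```python
from math import isqrt

def grid_quorums(n):
    """Build request sets for n = k*k sites arranged in a k x k grid.
    Site s's set is its whole row plus its whole column, so any two
    sets share at least one site. Size is 2k - 1, i.e. O(sqrt(n));
    Maekawa's projective-plane construction tightens this to ~sqrt(n)."""
    k = isqrt(n)
    assert k * k == n, "grid scheme needs a perfect square number of sites"
    quorums = []
    for s in range(n):
        row, col = divmod(s, k)
        members = ({row * k + c for c in range(k)} |
                   {r * k + col for r in range(k)})
        quorums.append(members)
    return quorums

qs = grid_quorums(9)  # 3x3 grid: each request set has 2*3 - 1 = 5 sites
```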

Maekawa’s algorithm
• Correctness
• Theorem 9.4 Maekawa’s algorithm achieves mutual exclusion.
• Proof Proof is by contradiction. Suppose two sites Si and Sj are concurrently executing the CS. This means site Si received a REPLY message from all sites in Ri and concurrently site Sj was able to receive a REPLY message from all sites in Rj. If Ri ∩ Rj = {Sk}, then site Sk must have sent REPLY messages to both Si and Sj concurrently, which is a contradiction.
• Performance Note that the size of a request set is √N. Therefore, an execution of
the CS requires √N REQUEST, √N REPLY, and √N RELEASE messages, resulting in
3√N messages per CS execution. Synchronization delay in this algorithm is 2T.
This is because after a site Si exits the CS, it first releases all the sites in Ri and
then one of those sites sends a REPLY message to the next site that executes the
CS. Thus, two sequential message transfers are required between two successive
CS executions. Maekawa’s algorithm is deadlock-prone. Measures to handle
deadlocks require additional messages.
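The per-CS message counts of the three permission-based algorithms can be summarized in a small helper (the function and its rounding of √N are our illustration):

```python
from math import sqrt

def messages_per_cs(n):
    """Messages per CS execution for the three permission-based
    algorithms (Maekawa's count ignores deadlock-handling messages)."""
    return {
        'Lamport':         3 * (n - 1),        # REQUEST + REPLY + RELEASE
        'Ricart-Agrawala': 2 * (n - 1),        # REQUEST + REPLY
        'Maekawa':         round(3 * sqrt(n))  # sqrt(N) each of three types
    }

counts = messages_per_cs(25)
```

For N = 25 sites this gives 72, 48, and 15 messages respectively, which is why quorum-based algorithms scale better in message complexity.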
Token-based algorithms
 In token-based algorithms, a unique token is shared among the
sites.
 A site is allowed to enter its CS if it possesses the token.
 A site holding the token can enter its CS repeatedly until it sends
the token to some other site.
 Depending upon the way a site carries out the search for the token, there are numerous token-based algorithms.
 Token-based algorithms use sequence numbers instead of timestamps; every request for the token contains a sequence number.

Suzuki–Kasami’s broadcast algorithm
 In Suzuki–Kasami’s algorithm if a site that wants to enter the CS does not
have the token, it broadcasts a REQUEST message for the token to all
other sites.
 A site that possesses the token sends it to the requesting site upon the
receipt of its REQUEST message.
 If a site receives a REQUEST message when it is executing the CS, it sends
the token only after it has completed the execution of the CS
 Although the basic idea underlying this algorithm may sound rather
simple, there are two design issues that must be efficiently addressed:
1. How to distinguish an outdated REQUEST message from a current REQUEST message
2. How to determine which site has an outstanding request for the CS

Suzuki–Kasami’s broadcast algorithm

RNi[1..N] – an array of integers at each site Si, where RNi[j] is the largest sequence number received so far in a REQUEST message from site Sj.
The token consists of a queue of requesting sites, Q, and an array of integers LN[1..N], where LN[j] is the sequence number (the number of times the site has requested the token) of the request which site Sj executed most recently.
Suzuki–Kasami’s broadcast algorithm
• Correctness Mutual exclusion is guaranteed because there is only one token
in the system and a site holds the token during the CS execution.
• Theorem 9.5 A requesting site enters the CS in finite time.

• Proof Token request messages of a site Si reach other sites in finite time.
Since one of these sites will have the token in finite time, site Si’s request will be
placed in the token queue in finite time. Since there can be at most N −1
requests in front of this request in the token queue, site Si will get the token
and execute the CS in finite time.
• Performance The beauty of the Suzuki–Kasami algorithm lies in its simplicity
and efficiency. No message is needed and the synchronization delay is zero if
a site holds the idle token at the time of its request. If a site does not hold
the token when it makes a request, the algorithm requires N messages to
obtain the token. The synchronization delay in this algorithm is 0 or T.
Deadlock detection in distributed systems – System
model, Deadlock handling strategies, Issues in
deadlock detection, Models of deadlocks.

Deadlock detection in distributed systems
• Deadlocks are a fundamental problem in distributed systems
• In distributed systems, a process may request resources in any order, which
may not be known a priori, and a process can request a resource while holding
others.
• If the allocation sequence of process resources is not controlled in such
environments, deadlocks can occur.
• A deadlock can be defined as a condition where a set of processes request
resources that are held by other processes in the set.
• Deadlocks can be dealt with using any one of the following three strategies:
1. deadlock prevention
2. deadlock avoidance
3. deadlock detection

Deadlock detection in distributed systems
1. Deadlock prevention is commonly achieved by either having a process
acquire all the needed resources simultaneously before it begins execution
or by pre-empting a process that holds the needed resource.

2. In the deadlock avoidance approach to distributed systems, a resource is granted to a process if the resulting global system state is safe.

3. Deadlock detection requires an examination of the status of the process–resource interactions for the presence of a deadlock condition.
• To resolve the deadlock, we have to abort a deadlocked process.

Deadlock detection in distributed systems
System model
• A distributed system consists of a set of processors that are
connected by a communication network.
• The communication delay is finite but unpredictable.
• A distributed program is composed of a set of n asynchronous processes P1, P2, …, Pn that communicate by message passing over the communication network.

Deadlock detection in distributed systems
• Without loss of generality we assume that each process is running on a
different processor.
• The processors do not share a common global memory and communicate
solely by passing messages over the communication network.
• There is no physical global clock in the system to which processes have
instantaneous access.
• The communication medium may deliver messages out of order, messages
may be lost, garbled, or duplicated due to timeout and retransmission,
processors may fail, and communication links may go down.
• The system can be modeled as a directed graph in which vertices represent
the processes and edges represent unidirectional communication channels.

Deadlock detection in distributed systems
• We make the following assumptions:
▪ The systems have only reusable resources.
▪ Processes are allowed to make only exclusive access to resources.
▪ There is only one copy of each resource.
▪ A process can be in two states: running or blocked. In the running state (also called the active state), a process has all the needed resources and is either executing or is ready for execution.
▪ In the blocked state, a process is waiting to acquire some resource.

Deadlock detection in distributed systems
Wait-for graph (WFG)
▪ In distributed systems, the state of the system can be modeled by a directed graph, called a wait-for graph (WFG).
▪ In a WFG, nodes are processes and there is a directed edge from node P1 to
node P2 if P1 is blocked and is waiting for P2 to release some resource.
▪ A system is deadlocked if and only if there exists a directed cycle or knot in the
WFG

Figure 10.1: An Example of a WFG
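For models where a cycle alone implies deadlock (the single-resource and AND models discussed later), detecting deadlock in a WFG snapshot reduces to cycle detection. A minimal sketch, with the WFG given as an adjacency dictionary (representation and names are our choice):

```python
def has_cycle(wfg):
    """DFS cycle detection on a wait-for graph {process: [waited-on...]}.
    In the single-resource and AND models, a cycle means deadlock."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {p: WHITE for p in wfg}

    def dfs(p):
        color[p] = GRAY
        for q in wfg.get(p, []):
            if color.get(q, WHITE) == GRAY:   # back edge => cycle
                return True
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in list(wfg))

# P1 waits for P2, P2 for P3, P3 for P1: deadlocked under the AND model
deadlocked = has_cycle({'P1': ['P2'], 'P2': ['P3'], 'P3': ['P1']})
```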
Deadlock detection in distributed systems
Deadlock handling strategies
There are three strategies for handling deadlocks,
1. deadlock prevention,
2. deadlock avoidance,
3. deadlock detection.
Handling of deadlocks becomes highly complicated in distributed systems
because no site has accurate knowledge of the current state of the system
and because every inter-site communication involves a finite and
unpredictable delay.
Deadlock prevention is commonly achieved either by having a process
acquire all the needed resources simultaneously before it begins executing or
by preempting a process that holds the needed resource.
This approach is highly inefficient and impractical in distributed systems.
Deadlock detection in distributed systems

• In deadlock avoidance approach to distributed systems, a resource is


granted to a process if the resulting global system state is safe.
• Due to several problems, however, deadlock avoidance is impractical in
distributed systems.
• Deadlock detection requires an examination of the status of process–
resource interactions for the presence of cyclic wait.
• Deadlock detection in distributed systems seems to be the best approach to
handle deadlocks in distributed systems

Deadlock detection in distributed systems
• Issues in deadlock detection
Deadlock handling using the approach of deadlock detection entails addressing two basic issues:
1. detection of existing deadlocks
2. resolution of detected deadlocks
Detection of deadlocks
Detection of deadlocks itself involves addressing two issues:
1. maintenance of the WFG
2. searching the WFG for the presence of cycles
Since, in distributed systems, a cycle or knot may involve several sites, the search
for cycles greatly depends upon how the WFG of the system is represented across
the system.

Deadlock detection in distributed systems
• Depending upon the way WFG information is maintained and the search for cycles is carried out, deadlock detection algorithms can be centralized, distributed, or hierarchical.
• Correctness criteria: A deadlock detection algorithm must satisfy the following
two conditions:
1. Progress (no undetected deadlocks) :
• The algorithm must detect all existing deadlocks in a finite time.
• After all wait-for dependencies for a deadlock have formed, the algorithm
should not wait for any more events to occur to detect the deadlock
2. Safety (no false deadlocks) :
• The algorithm should not report deadlocks that do not exist (called phantom or
false deadlocks).
• In distributed systems where there is no global memory and there is no global
clock, it is difficult to design a correct deadlock detection algorithm because
sites may obtain an out-of-date and inconsistent WFG of the system. As a result,
sites may detect a cycle that never existed.
Deadlock detection in distributed systems
Resolution of a detected deadlock

• Deadlock resolution involves breaking existing wait-for dependencies


between the processes to resolve the deadlock.
• It involves rolling back one or more deadlocked processes and assigning
their resources to blocked processes so that they can resume execution

Deadlock detection in distributed systems
Models of deadlocks
• Distributed systems allow many kinds of resource requests.
• A process might require a single resource or a combination of resources for its
execution
• The models of deadlocks introduce a hierarchy of request models, starting with very restricted forms and moving to ones with no restrictions:
1. The single-resource model
2. The AND model
3. The OR model
4. The AND-OR model
5. The p-out-of-q model
6. The unrestricted model

1. The single-resource model
• The single-resource model is the simplest resource model in a distributed
system, where a process can have at most one outstanding request for
only one unit of a resource.

• Since the maximum out-degree of a node in a WFG for the single resource
model can be 1, the presence of a cycle in the WFG shall indicate that
there is a deadlock

2. The AND model
• In the AND model, a process can request more than one resource simultaneously
and the request is satisfied only after all the requested resources are granted to the
process.
• The requested resources may exist at different locations.
• The out degree of a node in the WFG for AND model can be more than 1.
• The presence of a cycle in the WFG indicates a deadlock in the AND model.
• Example:
• Each node of the WFG in such a model is called an AND node. Consider the example
WFG described in the Figure 10.1. Process P11 has two outstanding resource
requests. In case of the AND model, P11 shall become active from idle state only
after both the resources are granted. There is a cycle P11 → P21 → P24 → P54 →
P11, which corresponds to a deadlock situation
• Consider process P44 in Figure 10.1. It is not a part of any cycle but is still
deadlocked as it is dependent on P24, which is deadlocked. Since in the single-
resource model, a process can have at most one outstanding request, the AND
model is more general than the single-resource model.
Figure 10.1: An Example of a WFG
3. The OR model
• In the OR model, a process can make a request for numerous resources simultaneously
and the request is satisfied if any one of the requested resources is granted.
• The requested resources may exist at different locations.
• If all requests in the WFG are OR requests, then the nodes are called OR nodes.
• Presence of a cycle in the WFG of an OR model does not imply a deadlock in the OR
model.
• Example
• Consider Figure 10.1. If all nodes are OR nodes, then process P11 is not deadlocked, because once process P33 releases its resources, P32 shall become active as one of its requests is satisfied. After P32 finishes execution and releases its resources, process P11 can continue with its processing. In the OR model, the presence of a knot indicates a deadlock. A set of processes S is deadlocked if the following conditions hold:
1. Each process in the set S is blocked.
2. The dependent set for each process in S is a subset of S.
3. No grant message is in transit between any two processes in set S.
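The knot condition can be checked on a WFG snapshot by comparing reachable sets: a set K is a knot when every process in K reaches exactly K. The sketch below (representation and function names are ours) distinguishes a cycle with an escape path, which is not an OR-model deadlock, from a pure knot, which is:

```python
def reachable(wfg, v):
    """All nodes reachable from v via one or more wait-for edges."""
    seen, stack = set(), list(wfg.get(v, []))
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(wfg.get(u, []))
    return seen

def find_knot(wfg):
    """Return a knot (as a set) if one exists, else None. K is a knot
    when every node in K reaches exactly K; under the OR model this,
    not a mere cycle, signals deadlock."""
    for v in wfg:
        r = reachable(wfg, v)
        if v in r and all(reachable(wfg, u) == r for u in r):
            return r
    return None

# A cycle with an escape edge (P3 -> P4, P4 not blocked): no knot
escaping = {'P1': ['P2'], 'P2': ['P3'], 'P3': ['P1', 'P4'], 'P4': []}
# A pure cycle with no way out is a knot:
knotted = {'P1': ['P2'], 'P2': ['P3'], 'P3': ['P1']}
```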
4. The AND-OR model

• A generalization of the previous two models (OR model and AND model) is
the AND-OR model.
• In the AND-OR model, a request may specify any combination of AND and OR in the resource request.
• For example, in the AND-OR model, a request for multiple resources can be of the form x and (y or z).
• A deadlock in the AND-OR model can be detected by repeated application of
the test for OR-model deadlock

5. The p-out-of-q model

• Another form of the AND-OR model is the p-out-of-q model, which allows a request to obtain any k available resources from a pool of n resources.
• Both models have the same expressive power: every request in the p-out-of-q model can be expressed in the AND-OR model and vice versa.
• The p-out-of-q model, however, lends itself to a much more compact formation of a request.

6. Unrestricted model

• In the unrestricted model, no assumptions are made regarding the underlying structure of resource requests.
• The only assumption made is that the deadlock is stable; hence, it is the most general model.
• This model helps separate concerns: concerns about properties of the problem (stability and deadlock) are separated from concerns about the underlying distributed system computations (e.g., message passing versus synchronous communication).

Module-3
END

