
Parallel and Distributed Computing
Fall 2022
Dr. Zeshan Iqbal
Lecture 4: Failure Detection and Membership

A Challenge
• You’ve been put in charge of a datacenter, and your manager has told you, “Oh no! We don’t have any failures in our datacenter!”
• Do you believe him/her?
• What would be your first responsibility?
• Build a failure detector
• What are some things that could go wrong if you didn’t do this?

Failures are the Norm
… not the exception, in datacenters.

Say the rate of failure of one machine (OS/disk/motherboard/network, etc.) is once every 10 years (120 months) on average.

When you have 120 servers in the DC, the mean time to failure (MTTF) of the next machine is 1 month.

When you have 12,000 servers in the DC, the MTTF is about once every 7.2 hours!

Soft crashes and failures are even more frequent!
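A quick back-of-the-envelope check of these numbers (a minimal Python sketch; the independence assumption and the 30-day month used for unit conversion are mine, not from the slide):

machine_mttf_months = 10 * 12            # one failure per machine every 10 years, on average

for servers in (120, 12_000):
    fleet_mttf_months = machine_mttf_months / servers
    fleet_mttf_hours = fleet_mttf_months * 30 * 24   # assume a 30-day month
    print(f"{servers:>6} servers -> next failure roughly every "
          f"{fleet_mttf_months:.2f} months (~{fleet_mttf_hours:.1f} hours)")

# 120 servers   -> about once a month (~720 hours)
# 12,000 servers -> about every 0.01 month (~7.2 hours)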

To build a failure detector
• You have a few options:

1. Hire 1000 people, each to monitor one machine in the datacenter and report to you when it fails.
2. Write a (distributed) failure detector program that automatically detects failures and reports to your workstation.

Which is preferable, and why?
Target Settings
• Process ‘group’-based systems
– Clouds/Datacenters
– Replicated servers
– Distributed databases
• Fail-stop (crash) process failures

Group Membership Service
[Diagram] An application process pi sends application queries (e.g., gossip, overlays, DHTs) to a membership protocol, which maintains the group membership list and tracks joins, leaves, and failures of members over unreliable communication.

Two sub-protocols
[Diagram] At each application process pi, the membership protocol consists of two sub-protocols, a Failure Detector and Dissemination, sitting between the group membership list and the unreliable communication layer.
The membership list maintained at each process can be:
• A complete list at all times (strongly consistent), e.g., virtual synchrony
• An almost-complete list (weakly consistent), e.g., gossip-style, SWIM, … (the focus of this series of lectures)
• A partial-random list (other systems), e.g., SCAMP, T-MAN, Cyclon, …

Large Group: Scalability, a Goal
[Diagram] A process group of 1000’s of processes (“members”), including us (pi), connected by an unreliable communication network.

Group Membership Protocol
[Diagram] Three parts: (I) some process pj crashes; (II) the Failure Detector at some process pi finds out quickly; (III) Dissemination spreads the news to the rest of the group. Everything runs over an unreliable communication network; fail-stop failures only.

Next
• How do you design a group membership protocol?

I. pj crashes
• Nothing we can do about it!
• A frequent occurrence
• Common case rather than exception
• Frequency goes up linearly with the size of the datacenter

II. Distributed Failure Detectors: Desirable Properties
• Completeness = each failure is detected
• Accuracy = there is no mistaken detection
• Speed
– Time to first detection of a failure
• Scale
– Equal Load on each member
– Network Message Load

Distributed Failure Detectors: Properties
• Completeness
• Accuracy
(Completeness and Accuracy are impossible to achieve together in lossy networks [Chandra and Toueg]. If it were possible, we could solve consensus; but consensus is known to be unsolvable in asynchronous systems.)
• Speed
– Time to first detection of a failure
• Scale
– Equal Load on each member
– Network Message Load

What Real Failure Detectors Prefer
• Completeness: guaranteed
• Accuracy: partial/probabilistic guarantee
• Speed
– Time to first detection of a failure, i.e., time until some process detects the failure
• Scale
– Equal Load on each member: no bottlenecks/single failure point
– Network Message Load

Failure Detector Properties
These properties must hold in spite of arbitrary simultaneous process failures:
• Completeness
• Accuracy
• Speed
– Time to first detection of a failure
• Scale
– Equal Load on each member
– Network Message Load

Centralized Heartbeating
[Diagram] Every process pi periodically sends a heartbeat message (pi, heartbeat seq. l++) to one central process pj.
• Heartbeats sent periodically
• If a heartbeat is not received from pi within the timeout, mark pi as failed
☹ The central process is a hotspot

Ring Heartbeating
[Diagram] Processes are arranged in a virtual ring; each process pi sends its heartbeats (pi, heartbeat seq. l++) to its neighbor(s) pj on the ring.
☹ Unpredictable on simultaneous multiple failures

All-to-All Heartbeating
[Diagram] Each process pi sends its heartbeats (pi, heartbeat seq. l++) to every other process pj.
☺ Equal load per member
☹ A single heartbeat loss → false detection
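A minimal sketch of the receiver-side bookkeeping behind heartbeat schemes like the one above, assuming every member runs it for every other member. The function names and the 3-second timeout are illustrative choices, not from the slides:

import time

HEARTBEAT_TIMEOUT = 3.0     # seconds; illustrative value
last_heard = {}             # last_heard[member] = local time of newest heartbeat

def on_heartbeat(sender, seq):
    # All-to-all: every member sends to every other member, so every process runs this.
    last_heard[sender] = time.monotonic()

def detect_failures():
    # A single lost or late heartbeat past the timeout already triggers a
    # (possibly false) detection -- the weakness noted on the slide.
    now = time.monotonic()
    return [m for m, t in last_heard.items() if now - t > HEARTBEAT_TIMEOUT]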

Next
• How do we increase the robustness of all-to-all heartbeating?

Gossip-style Heartbeating
[Diagram] Each process pi maintains an array of heartbeat seq. numbers, one per member, and periodically gossips it to a subset of members.
☺ Good accuracy properties

Gossip-Style Failure Detection
[Diagram] Four nodes (1–4), each holding a membership list with columns: Address | Heartbeat Counter | Time (local). Example: node 1 gossips its list {1: 10120 @ 66, 2: 10103 @ 62, 3: 10098 @ 63, 4: 10111 @ 65} to node 2, whose list was {1: 10118 @ 64, 2: 10110 @ 64, 3: 10090 @ 58, 4: 10111 @ 65}; node 2 (current local time 70; clocks are asynchronous) merges them into {1: 10120 @ 70, 2: 10110 @ 64, 3: 10098 @ 70, 4: 10111 @ 65}.
Protocol:
• Nodes periodically gossip their membership list: pick random nodes, send them the list
• On receipt, it is merged with the local membership list
• When an entry times out, the member is marked as failed
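A minimal sketch of the gossip step and the merge rule implied by the example above: the higher heartbeat counter wins, and the entry is stamped with the local clock because clocks are asynchronous. The data structures, the node address, and the fan-out of 2 are illustrative assumptions:

import random, time

membership = {}     # membership[addr] = (heartbeat_counter, local_time_last_updated)
MY_ADDR = 1         # illustrative

def gossip_once(peers, send):
    # Bump our own heartbeat, then send the whole list to a couple of random peers.
    hb, _ = membership.get(MY_ADDR, (0, 0.0))
    membership[MY_ADDR] = (hb + 1, time.monotonic())
    for peer in random.sample(peers, k=min(2, len(peers))):
        send(peer, dict(membership))

def on_receive(remote_list):
    # Merge rule: higher heartbeat counter wins; stamp it with OUR local clock,
    # because remote local times are not comparable to ours.
    now = time.monotonic()
    for addr, (remote_hb, _) in remote_list.items():
        local_hb, _ = membership.get(addr, (-1, 0.0))
        if remote_hb > local_hb:
            membership[addr] = (remote_hb, now)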

Gossip-Style Failure Detection
• If the heartbeat has not increased for more than Tfail seconds, the member is considered failed
• And after a further Tcleanup seconds, it will delete the member from the list
• Why an additional timeout? Why not delete right away?
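Continuing the illustrative sketch above, the two timeouts might look like this (the 24-second values mirror the example on the next slide; making Tcleanup equal to Tfail is an assumption):

T_FAIL = 24.0       # no heartbeat increase for this long -> consider failed
T_CLEANUP = 24.0    # keep the failed entry around this much longer before deleting

failed = set()

def check_timeouts():
    now = time.monotonic()
    for addr, (hb, t) in list(membership.items()):
        if addr == MY_ADDR:
            continue
        if now - t > T_FAIL:
            failed.add(addr)               # marked failed, but still kept in the list
        if now - t > T_FAIL + T_CLEANUP:
            membership.pop(addr, None)     # only now is the entry deleted
            failed.discard(addr)           # (the next slide shows why we wait)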

Gossip-Style Failure Detection
• What if an entry pointing to a failed node is deleted right after Tfail (=24) seconds?
[Diagram] Node 2 (current local time 75) deletes node 3’s timed-out entry (e.g., 3: 10098 @ 50) as soon as Tfail expires. A later gossip from node 1, whose list still carries an entry for node 3 (3: 10098), would then re-insert the dead node into node 2’s list as if it were new.
• Fix: remember the failed entry for another Tfail before deleting it


Next
• Is there a better failure detector?

SWIM Failure Detector Protocol
[Diagram] One protocol period (= T’ time units) at process pi:
• pi picks a random process pj and sends it a ping, expecting an ack
• if the ping or ack is lost (X), pi picks K random processes and sends each a ping-req(pj)
• each of those K processes pings pj and relays pj’s ack back to pi
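A minimal sketch of one SWIM protocol period as drawn above: a direct ping to a random target, then K indirect ping-reqs if no ack arrives. The transport callbacks, K=3, and the timeout value are illustrative assumptions:

import random

K = 3                   # number of indirect probers; illustrative
PING_TIMEOUT = 0.5      # seconds; illustrative

def protocol_period(members, me, ping, ping_req):
    # ping(target, timeout) and ping_req(helper, target) are assumed to be
    # caller-supplied callbacks that return True iff an ack came back.
    target = random.choice([m for m in members if m != me])

    if ping(target, timeout=PING_TIMEOUT):
        return True                      # direct ack: target is alive

    # No direct ack: ask K random other members to probe the target for us.
    others = [m for m in members if m not in (me, target)]
    helpers = random.sample(others, k=min(K, len(others)))
    if any(ping_req(h, target) for h in helpers):
        return True                      # an indirect ack was relayed back

    return False                         # no ack at all: mark target for suspicion/failure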

Time-bounded Completeness
• Key: select each membership element once as a ping target in a traversal
– Round-robin pinging
– Random permutation of the list after each traversal
• Each failure is detected in worst case 2N-1 (local) protocol periods
• Preserves FD properties
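A small sketch of the target-selection rule described above: round-robin over a random permutation of the membership list, reshuffled after each full traversal (the class and method names are illustrative):

import random

class PingTargetSelector:
    """Yields each member exactly once per traversal, in a fresh random order."""

    def __init__(self, members):
        self.members = list(members)
        self._queue = []

    def next_target(self):
        if not self._queue:                  # previous traversal finished (or first call)
            self._queue = list(self.members)
            random.shuffle(self._queue)      # new random permutation for this traversal
        return self._queue.pop()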

Next
• How do failure detectors fit into the big picture of a group membership protocol?
• What are the missing blocks?

Group Membership Protocol
[Diagram] The same three parts as before: (I) pj crashes; (II) the Failure Detector at some process pi finds out quickly; (III) Dissemination spreads the news to the group, over an unreliable communication network, fail-stop failures only. But HOW is the news disseminated?
Dissemination Options
• Multicast (Hardware / IP)
– unreliable
– multiple simultaneous multicasts
• Point-to-point (TCP / UDP)
– expensive
• Zero extra messages: Piggyback on Failure Detector messages
– Infection-style Dissemination


Infection-style Dissemination
[Diagram] The same ping / ack / ping-req exchanges as the SWIM failure detector (protocol period = T time units), with piggybacked membership information carried on every message.
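A sketch of the piggybacking idea: recent membership updates ride along on every failure-detector message, so dissemination adds zero extra messages. The bounded-resend buffer policy and the constants are illustrative assumptions, not from the slide:

MAX_PIGGYBACK = 6      # updates attached to each message; illustrative
MAX_RESENDS = 10       # how many times each update is re-gossiped; illustrative

pending = []           # elements are [update, times_sent]; update is e.g. ("Failed", "pj", 3)

def queue_update(update):
    pending.append([update, 0])

def piggyback():
    # Called whenever ANY failure-detector message (ping, ack, ping-req) is built.
    chosen = pending[:MAX_PIGGYBACK]
    for entry in chosen:
        entry[1] += 1
    pending[:] = [e for e in pending if e[1] < MAX_RESENDS]   # drop well-gossiped updates
    return [update for update, _ in chosen]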

Suspicion Mechanism
• False detections, due to
– Perturbed processes
– Packet losses, e.g., from congestion
• Indirect pinging may not solve the problem
• Key: suspect a process before declaring it as failed in the group

Suspicion Mechanism
[State diagram at process pi, per monitored process pj: states Alive, Suspected, Failed]
• Alive → Suspected: pi’s ping to pj fails, or pi receives Dissmn::(Suspect pj); pi disseminates (Suspect pj)
• Suspected → Alive: a ping to pj succeeds, or pi receives Dissmn::(Alive pj); pi disseminates (Alive pj)
• Suspected → Failed: the suspicion timeout expires, or pi receives Dissmn::(Failed pj); pi disseminates (Failed pj)

Suspicion Mechanism
• Distinguish multiple suspicions of a process
– Per-process incarnation number
– Inc # for pi can be incremented only by pi
• e.g., when it receives a (Suspect, pi) message
– Somewhat similar to DSDV (routing protocol in ad-hoc nets)
• Higher inc# notifications over-ride lower inc#’s
• Within an inc#: (Suspect, inc #) > (Alive, inc #)
• (Failed, inc #) overrides everything else
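A sketch of the override rules listed above, written as a single comparison over (status, incarnation#) pairs; the function name and the numeric ranking are an illustrative encoding:

_RANK = {"Alive": 0, "Suspect": 1, "Failed": 2}   # ordering within one incarnation number

def overrides(new, old):
    """True if notification `new` = (status, inc#) should replace `old` = (status, inc#)."""
    new_status, new_inc = new
    old_status, old_inc = old
    if new_status == "Failed":
        return True                               # (Failed, inc#) overrides everything else
    if old_status == "Failed":
        return False
    if new_inc != old_inc:
        return new_inc > old_inc                  # higher inc# notifications win
    return _RANK[new_status] > _RANK[old_status]  # same inc#: (Suspect, inc#) > (Alive, inc#)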

Wrap Up
• Failures are the norm, not the exception, in datacenters
• Every distributed system uses a failure detector
• Many distributed systems use a membership service
• Ring failure detection underlies
– IBM SP2 and many other similar clusters/machines
• Gossip-style failure detection underlies
– Amazon EC2/S3 (rumored!)

You might also like

pFad - Phonifier reborn

Pfad - The Proxy pFad of © 2024 Garber Painting. All rights reserved.

Note: This service is not intended for secure transactions such as banking, social media, email, or purchasing. Use at your own risk. We assume no liability whatsoever for broken pages.


Alternative Proxies:

Alternative Proxy

pFad Proxy

pFad v3 Proxy

pFad v4 Proxy