
ASSIGNMENT 2

Group members:

Areej Emaan

Shoaib Ali Raza

Hammad Shahid

Ameer Hamza

Submitted to: Professor Muhamad Hammad Majeed


Step 1

Paxos – The Classic Consensus Algorithm

• Overview:

Paxos is a fundamental consensus algorithm used to get a group of distributed systems (computers) to agree on a single value, even if some computers fail or messages are lost.

• Key Concepts:

Quorum: A majority (more than half) of participants must agree on a value for it to be
chosen. This ensures consistency, as no two different values can be chosen by two different
majorities.

• Roles:

Proposer: Suggests values to be agreed upon.

Acceptor: Votes on the proposed values.

Learner: Learns the agreed-upon value.

• How It Works:
- The proposer sends out a proposal (value) to the acceptors.
- The acceptors respond if they haven’t already promised a higher-numbered proposal.
- Once a majority of acceptors agree, the value is decided, and the learners are informed of the consensus.

• Mathematical Foundation
- Paxos relies on majority-based voting: any two majorities (quorums) must share at least one acceptor, so two conflicting proposals can never both be chosen. This prevents conflicting decisions (ensures safety).
- Liveness (the system eventually reaches a decision) can be tricky in Paxos due to potential livelock when multiple proposers compete to get their values accepted.
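To make the overlap argument concrete, here is a small illustrative sketch in Python (the node counts are arbitrary examples, and min_overlap is a made-up helper name):

# Any two majority quorums out of n acceptors must overlap:
# |Q1| + |Q2| = 2 * (n // 2 + 1) > n, so by the pigeonhole principle
# they share at least one acceptor.
def min_overlap(n_acceptors: int) -> int:
    quorum = n_acceptors // 2 + 1       # smallest majority
    return 2 * quorum - n_acceptors     # guaranteed number of shared acceptors

for n in (3, 5, 7):
    print(f"{n} acceptors: quorum = {n // 2 + 1}, guaranteed overlap >= {min_overlap(n)}")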

• Fault Tolerance:

Paxos can handle failures like network partitions (nodes can be cut off from others) and node crashes. If a majority of nodes are alive, Paxos can still reach a decision.

• Efficiency:

Paxos can be inefficient in terms of message complexity because it requires multiple rounds of communication for each decision (i.e., two phases: prepare and accept).

Raft: A More Understandable Alternative to Paxos

• Overview:

Raft is another consensus algorithm designed to be simpler and more understandable than Paxos. It’s used in systems like etcd and HashiCorp Consul to manage distributed logs.

• Key Concepts:
- Leader-Based: Raft elects a leader who is responsible for managing the consensus
process, making it more straightforward.
- Terms: Raft operates in periods called terms. Each term has at most one leader; if no leader can be elected, the term ends and a new election begins.
- Log Replication: The leader is responsible for managing a log of events and
replicating it to all followers.

• How It Works
- Leader Election: If the leader fails, a new one is elected. Each follower can become a candidate and ask the other nodes for votes. A candidate that gathers votes from a majority of nodes becomes the leader.
- Log Replication: The leader accepts new commands from clients and replicates them to the followers, ensuring all nodes have the same sequence of events (logs).
- Commitment: Once the leader gets confirmation from a majority of followers, it considers the log entry committed and tells everyone.
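The two ideas above can be sketched in a few lines of Python. This is only an illustration of the mechanism, not code from etcd or Consul; the timeout range and helper names are assumptions:

import random

def election_timeout_ms(low=150, high=300):
    # Randomized election timeouts: followers rarely time out at the same
    # moment, so repeated split votes are unlikely.
    return random.uniform(low, high)

def committed_index(match_indexes):
    # An entry is committed once a majority of servers store it. Sorting the
    # per-server match indexes and taking the median yields the highest index
    # replicated on a majority of servers.
    ordered = sorted(match_indexes)
    return ordered[len(ordered) // 2]

# Example: leader at index 5, followers at 5, 4, 2 and 1.
print(committed_index([5, 5, 4, 2, 1]))  # -> 4 (stored on 3 of 5 servers)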

• Mathematical Foundations:
- Majority Voting: Like Paxos, Raft ensures consistency using majority voting. This guarantees that no two leaders can be elected in the same term, and all nodes eventually agree on the same sequence of log entries.
- Liveness is better in Raft than in Paxos because election timeouts are randomized, which makes repeated split votes (two candidates competing for leadership at the same time) unlikely.

• Fault Tolerance:

Raft handles crashes and network partitions well. If the leader crashes, a new one can be elected. If a majority of nodes are up, Raft can continue to operate.

• Efficiency:
- Raft is generally more efficient than Paxos, thanks to its leader-based design. The leader
reduces the number of messages needed to make decisions and replicate data.
- The leader sends heartbeat messages to keep followers in sync, minimizing unnecessary
communication.

PBFT (Practical Byzantine Fault Tolerance) – Handling Malicious Nodes

• Overview:

PBFT is a consensus algorithm designed to work even if some of the nodes in the network
behave maliciously (Byzantine failures). It’s more complex than Paxos and Raft but is
essential for systems that require strong fault tolerance, like blockchains.

• Key Concepts:
- Byzantine Failures: Unlike Paxos and Raft, PBFT can handle malicious nodes that
might send incorrect or conflicting information.
- 3f + 1 Fault Tolerance: PBFT can tolerate up to f faulty nodes if the total number of nodes
is 3f + 1. For example, to handle 1 faulty node, you need at least 4 nodes in total.
• How It Works:
- Primary Node: A node is designated as the primary (leader) to manage consensus.
- Pre-Prepare, Prepare, Commit: Nodes go through three stages of agreement on a
message:
§ Pre-prepare: The primary node suggests a value.
§ Prepare: All nodes verify the suggestion.
§ Commit: If a node gets enough matching messages (quorum), it commits to the
value.
• View Change: If the primary node is faulty, the system replaces it with a new one.

• Mathematical Foundations:

Quorum Size: PBFT uses a larger quorum size than Paxos or Raft because it needs to tolerate
Byzantine failures. It needs 3f + 1 nodes to tolerate f faulty nodes, ensuring both safety
(correct decisions) and liveness (eventual progress).
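A small worked example of these sizes (an illustrative sketch; pbft_sizes is a made-up helper, not part of the PBFT protocol itself):

def pbft_sizes(f: int):
    n = 3 * f + 1        # total replicas needed to tolerate f Byzantine nodes
    quorum = 2 * f + 1   # prepare/commit quorum: two quorums of this size
                         # overlap in at least f + 1 replicas, so they always
                         # share at least one honest node
    return n, quorum

for f in (1, 2, 3):
    n, q = pbft_sizes(f)
    print(f"f = {f}: n = {n} replicas, quorum = {q}")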

• Fault Tolerance:

PBFT handles both crashes and Byzantine failures (nodes acting maliciously). This makes it
more resilient than Paxos or Raft, but it comes with a tradeoff of increased complexity.

• Efficiency:

PBFT is less efficient than Paxos and Raft because it requires more rounds of
communication and more nodes to reach consensus. However, this complexity is necessary
to handle Byzantine faults.

Comparison: Paxos vs. Raft vs. PBFT

Aspect | Paxos | Raft | PBFT
Leader Election | Not central (any proposer can propose) | Single elected leader per term (randomized timeouts) | Primary node; view change if faulty
Message Complexity | High (two-phase process) | Lower (leader-based) | Very high (more rounds for Byzantine resilience)
Fault Tolerance | Handles crashes, partitions | Handles crashes, partitions | Handles Byzantine failures and malicious nodes
Efficiency | Lower (many messages) | Higher (leader reduces comms) | Lower (Byzantine fault tolerance adds overhead)

Deliverable: Summary and Visuals

• Introduction:
Briefly explain the need for consensus algorithms in distributed systems and blockchain.

• Paxos:
- Add a diagram showing the two-phase process (propose, accept).
- Summarize its key concepts and fault tolerance approach.

• Raft:
- Include a diagram of leader election and log replication.
- Explain how it simplifies consensus compared to Paxos.

• PBFT:
- Add a visual showing the pre-prepare, prepare, and commit stages.
- Highlight how it handles Byzantine failures.

What are the mathematical foundations of these algorithms?

Mathematical Foundations of Consensus Algorithms

• Paxos:

Quorums: Paxos relies on the concept of quorums, which ensures that most nodes (or
acceptors) agree on a value. In Paxos, a quorum is any majority subset of nodes that can
decide. Mathematically, this ensures that any two quorums must overlap, preventing
two conflicting decisions from being made at the same time. This concept guarantees
safety (no two nodes can decide on different values).

Proposals and Acceptances: Paxos uses two rounds of communication, where proposers
send out proposals and acceptors respond. The quorum voting ensures that only one
value can be chosen, and once chosen, it remains the same.

Liveness: Paxos doesn't always guarantee progress (liveness); when multiple proposers issue proposals in parallel, they can repeatedly pre-empt each other and delay a decision indefinitely.

• Raft:

Majority Voting and Terms: Like Paxos, Raft uses quorum-based majority voting to reach
consensus. A majority of nodes must agree on a leader and on log entries. This ensures that no
two leaders can be elected in the same term and that log entries are consistent across the
system.

Leader Election: Raft introduces a leader election process, where nodes vote for a
candidate to become the leader. The randomness of election timeouts prevents two
nodes from repeatedly competing for leadership, reducing the chances of repeated split votes.

Log Replication: Raft ensures log consistency by replicating the leader's log to the
followers. Log entries are committed once a majority of nodes have received and acknowledged
them.

• PBFT (Practical Byzantine Fault Tolerance):

Byzantine Quorums: PBFT is designed to handle Byzantine faults (malicious or faulty nodes). Mathematically, PBFT requires 3f + 1 nodes to tolerate f faulty nodes. This ensures that even if up to f nodes (fewer than one third of the network) act maliciously, the remaining honest nodes can still reach consensus.

Cryptographic Signatures: To prevent malicious nodes from tampering with messages, PBFT relies on cryptographic techniques like digital signatures to verify the integrity of messages. This ensures that nodes cannot forge or alter messages in the network.

Quorum Overlap: Like Paxos, PBFT ensures that any two quorums overlap, which
prevents conflicting decisions. However, PBFT adds another layer by ensuring that the
system works even if some nodes behave maliciously.

How do they compare in terms of fault tolerance and efficiency?

• Fault Tolerance:

- Paxos: Paxos is designed to handle crash failures (where nodes can stop functioning)
and network partitions (where some nodes can't communicate). If a majority of
nodes are available, Paxos can continue to function. However, Paxos doesn’t handle
Byzantine failures (malicious nodes sending incorrect information).
- Raft: Raft, like Paxos, handles crash failures and network partitions. It ensures that
even if a leader fails, a new one can be elected, and the system can continue
operating. Raft tolerates the same class of failures as Paxos; its randomized election
timeouts simply make recovery from a failed leader more predictable.
- PBFT: PBFT offers the highest level of fault tolerance because it handles Byzantine
failures. It can tolerate nodes that behave maliciously or send false information. To
achieve this, PBFT requires a minimum of 3f + 1 nodes to tolerate f faulty nodes,
which is more resource-intensive but essential for systems like blockchains that
need to be highly secure.

• Efficiency:

- Paxos: Paxos is less efficient due to its message complexity. It uses a two-phase
protocol (prepare and accept), which requires multiple rounds of
communication between nodes. This can lead to delays, especially in large
distributed systems.
- Raft: Raft is generally more efficient than Paxos because it uses a leader-based
approach. The leader simplifies communication and reduces the number of
messages needed to replicate logs and reach consensus. Raft’s leader election
process is also more efficient, as it avoids conflicts and deadlocks with randomized
timeouts.

- PBFT: PBFT is the least efficient of the three because it must handle Byzantine
failures, which requires more communication rounds and higher message
complexity. PBFT requires three phases (pre-prepare, prepare, commit) and
additional messages to verify the integrity of information in the presence of
malicious nodes. This makes PBFT slower but more secure.

Step 2

Group Formation and Advanced Role Assignment

How will each role contribute to the success of the project?

• Algorithm Researcher:

Provides the theoretical foundation for the project, ensuring that the group understands
how the consensus algorithm works. Their research will guide the project's direction and
help in choosing the most suitable algorithm.

• Lead Developer:
Builds the core system, translating the theoretical work into a functional blockchain
application. The Lead Developer ensures that the code is high-quality and aligned with
the project’s goals.
• Systems Architect:
Designs a solid and scalable architecture for the blockchain application. Their work
ensures that the system is robust enough to handle real-world conditions like network
delays and node failures.

• Tester:
Ensures that the system is reliable, secure, and performs well under various conditions.
Their work will help identify bugs and weaknesses in the system, ensuring that the final
product is stable.
• Data Analyst:
Provides valuable insights into the system’s performance and scalability. They help the
group understand how the system behaves under different workloads and suggest
improvements based on data.

• Think about how each role contributes to the project’s success:


- The researcher provides knowledge, ensuring the team understands the algorithms.
- The developer turns theory into code.
- The architect organizes the system to run efficiently.
- The tester ensures everything works properly.
- The data analyst evaluates performance.

• Consider the deliverables (what each person will produce):


- The researcher provides summaries or reports on the algorithms.
- The developer delivers working code.
- The architect produces system designs and diagrams.
- The tester creates test results and identifies any bugs.
- The data analyst provides performance metrics and analysis.

What are the expected deliverables for each role?

Here’s a breakdown of the expected deliverables for each role in the context of your project on
consensus algorithms (Paxos, Raft, PBFT):

• Algorithm Researcher
- Deliverables:
§ Research Summary: A clear, concise summary of each consensus algorithm
(Paxos, Raft, PBFT). This could be a written document (Google Docs) explaining
how each algorithm works, its strengths and weaknesses, and how it handles
failures (like leader election, node failures, etc.).
§ Comparison Document: A comparative analysis of the algorithms, highlighting
key differences in terms of fault tolerance, efficiency, and message complexity.
§ Visual Aids: Diagrams or flowcharts illustrating how each algorithm processes
leader election, log replication, and consensus in a network of nodes.

• Lead Developer
- Deliverables:
§ Codebase: A fully functional implementation of Paxos, Raft, and PBFT algorithms.
The code should be well-organized and easy to understand.
§ Version Control Setup: A GitHub repository containing the code for all three
algorithms. The repository should be well-structured, with clear commit
messages and branches (if necessary).
§ Documentation: Inline code comments explaining the function of each module
or section of the code, as well as a README file in the GitHub repo explaining
how to run the algorithms.

• Systems Architect
- Deliverables:
§ System Design Document: A document detailing the architecture of the system.
This should include the components of the system (e.g., nodes, leader election
module, message handler) and how they interact with each other.
§ Blueprint or Flowchart: Visual representations (such as flowcharts) of the system’s
architecture, showing the flow of messages and decision-making processes within
the consensus protocols.
§ Module Breakdown: A description of each module or class in the system,
explaining its purpose and how it fits into the larger design.

• Tester
- Deliverables:
§ Test Plan: A detailed test plan outlining the different scenarios that need to be
tested (e.g., normal operation, leader failure, network partition, Byzantine faults
for PBFT).
§ Test Cases: A list of specific test cases with expected outcomes. This might
include tests for leader election, message delays, or handling node crashes.
§ Test Results: A document or report detailing the results of the tests. Include
observations about whether the algorithms worked as expected, what issues
were encountered, and any bugs that were found.

• Data Analyst
- Deliverables:
§ Performance Metrics: A document or spreadsheet containing performance
metrics for each algorithm (e.g., time taken to reach consensus, number of
messages exchanged, latency, and throughput).
§ Comparison Report: An analysis comparing the performance of Paxos, Raft, and
PBFT under different conditions. This might include performance under normal
conditions, under network failure, and when nodes fail or behave maliciously.

STEP 3:

Architecture Diagram

Selected Consensus Algorithm: Raft

 +--------+
 | Client |
 +--------+
      |
      | (RPC)
      v
+------------+
|   Leader   |
+------------+
      |
      | (Append Entries)
      v
+------------+
| Followers  |
+------------+
      |
      | (Heartbeat)
      v
+------------+
| Candidates |
+------------+

• Components

- Nodes:

§ Leader
§ Followers
§ Candidates

- Client
- Network
- Data Storage

Data Structures

• Log Entry Structure:

{
  "term": int,
  "command": "string"
}

• State Machine States:

- Leader: Receives client requests and replicates log entries.
- Follower: Responds to requests from the leader and candidates.
- Candidate: Initiates elections to become a leader.
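For reference, the same structures could be written as plain Python types (an illustrative sketch; the field names follow the JSON structure above, the class names are assumptions):

from dataclasses import dataclass
from enum import Enum

class NodeState(Enum):
    LEADER = "leader"        # receives client requests and replicates log entries
    FOLLOWER = "follower"    # responds to requests from the leader and candidates
    CANDIDATE = "candidate"  # initiates elections to become a leader

@dataclass
class LogEntry:
    term: int      # election term in which the entry was created
    command: str   # client command to apply to the state machine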

Development Environment Setup

• Step 1: Create a Dockerfile

# Use the official Node.js image.
FROM node:14

# Set the working directory in the container.
WORKDIR /usr/src/app

# Copy package.json and package-lock.json files.
COPY package*.json ./

# Install dependencies.
RUN npm install

# Copy the rest of the application files.
COPY . .

# Expose the application port.
EXPOSE 3000

# Command to run the application.
CMD ["node", "app.js"]

• Step 2: Create a docker-compose.yml File

version: '3'
services:
  raft-app:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - .:/usr/src/app
    networks:
      - raft-network

networks:
  raft-network:
    driver: bridge
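With both files in place, the environment can typically be started with docker-compose up --build (or docker compose up --build on newer Docker installations), which builds the image and exposes the application on port 3000.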

Step 5

Paxos Consensus Algorithm Implementation

class Proposal:
    def __init__(self, proposal_id, value):
        self.proposal_id = proposal_id
        self.value = value


class Acceptor:
    def __init__(self):
        self.promised_id = None
        self.accepted_id = None
        self.accepted_value = None

    def prepare(self, proposal):
        # Promise to ignore proposals numbered lower than this one.
        if self.promised_id is None or proposal.proposal_id > self.promised_id:
            self.promised_id = proposal.proposal_id
            return True
        return False

    def accept(self, proposal):
        # Accept only the proposal that matches the current promise.
        if self.promised_id == proposal.proposal_id:
            self.accepted_id = proposal.proposal_id
            self.accepted_value = proposal.value
            return True
        return False


class Learner:
    def __init__(self):
        self.learned_value = None

    def learn(self, accepted_value):
        self.learned_value = accepted_value


class Proposer:
    def __init__(self, acceptors, learners):
        self.acceptors = acceptors
        self.learners = learners
        self.proposal_id = 0

    def propose(self, value):
        self.proposal_id += 1
        proposal = Proposal(self.proposal_id, value)
        majority = len(self.acceptors) // 2 + 1

        # Phase 1: collect promises from a majority of acceptors.
        promises = sum(1 for acceptor in self.acceptors if acceptor.prepare(proposal))
        if promises < majority:
            return False

        # Phase 2: ask the acceptors to accept; the value is chosen once a majority agrees.
        accepts = sum(1 for acceptor in self.acceptors if acceptor.accept(proposal))
        if accepts < majority:
            return False

        # Inform the learners of the chosen value.
        for learner in self.learners:
            learner.learn(proposal.value)
        return True


class Client:
    def __init__(self, proposer):
        self.proposer = proposer

    def request(self, value):
        self.proposer.propose(value)


# Example usage
acceptors = [Acceptor(), Acceptor(), Acceptor()]
learners = [Learner(), Learner()]
proposer = Proposer(acceptors, learners)
client = Client(proposer)

client.request("value1")
client.request("value2")

print("Learned values:", [learner.learned_value for learner in learners])

Step 6

Stress Testing

• Fault Injection Tests:

Implement tests to simulate node failures and message losses. This can be done by
introducing random failures or message losses during the proposal and acceptance
phases.

• Load Testing:
Increase the number of client requests and measure the performance of the algorithm
under heavy loads.

Performance Metrics Analysis

• Latency:
Measure the time taken for a proposal to be accepted and learned by all nodes.

• Throughput:
Measure the number of proposals accepted per unit time.

• Scalability:
Measure the performance of the algorithm with an increasing number of nodes.

Code for stress testing and performance metric analysis:

import time
import random
import threading


class PaxosStressTest:
    def __init__(self, proposer, num_nodes, num_requests):
        self.proposer = proposer
        self.num_nodes = num_nodes
        self.num_requests = num_requests

    def fault_injection_test(self):
        # Simulate node failures and message losses by making a random
        # acceptor reject every prepare request.
        for _ in range(self.num_requests):
            if random.random() < 0.5:  # 50% chance of injecting a failure
                failed = self.proposer.acceptors[random.randint(0, self.num_nodes - 1)]
                failed.prepare = lambda proposal: False
            self.proposer.propose("value")

    def load_test(self):
        # Increase the number of concurrent client requests.
        threads = []
        for _ in range(self.num_requests):
            t = threading.Thread(target=self.proposer.propose, args=("value",))
            threads.append(t)
            t.start()
        for t in threads:
            t.join()

    def measure_latency(self):
        # Measure the time taken for a proposal to be accepted and learned.
        start_time = time.time()
        self.proposer.propose("value")
        end_time = time.time()
        return end_time - start_time

    def measure_throughput(self):
        # Measure the number of proposals processed per unit time.
        start_time = time.time()
        for _ in range(self.num_requests):
            self.proposer.propose("value")
        end_time = time.time()
        return self.num_requests / (end_time - start_time)

    def measure_scalability(self):
        # Measure latency and throughput as the number of acceptors grows.
        latencies = []
        throughputs = []
        for num_nodes in range(1, 11):  # increase the number of nodes from 1 to 10
            self.proposer.acceptors = [Acceptor() for _ in range(num_nodes)]
            latencies.append(self.measure_latency())
            throughputs.append(self.measure_throughput())
        return latencies, throughputs


# Example usage
proposer = Proposer([Acceptor() for _ in range(5)], [Learner() for _ in range(5)])
stress_test = PaxosStressTest(proposer, 5, 1000)

stress_test.fault_injection_test()
stress_test.load_test()

latency = stress_test.measure_latency()
throughput = stress_test.measure_throughput()
latencies, throughputs = stress_test.measure_scalability()

print("Latency:", latency)
print("Throughput:", throughput)
print("Scalability (Latency, Throughput):", latencies, throughputs)

STEP 7

Fault Tolerance Analysis

• Handling Node Failures:

- Crash Resilience:
The Paxos algorithm continues to function as long as a majority of acceptors are alive. In
our testing, this resilience was simulated with random node failures.
§ Example: If 1 out of 3 nodes crashes, the system can still function with the
remaining two nodes (since they constitute a majority).
• Network Partition Handling:
- Paxos ensures consistency even in the case of partial network partitions. If a
minority of nodes are isolated, consensus will not proceed, preventing
inconsistent state updates.
• Performance and Fault Injection Results:
- Fault Injection Test Insights:
In the stress test, we introduced random 50% node failures and still observed
successful proposals when a quorum was available. This confirms that Paxos
tolerates partial failures.
- Latency and Throughput Impact:
§ Latency increased slightly under node failure conditions, as fewer
acceptors were available.
§ Throughput was moderately reduced, showing that performance depends on
how many acceptors remain available (even though adding more nodes also
increases coordination overhead).
• Scalability Limitations:
- As the number of nodes increased during our scalability tests, latency rose due
to the need for more communication among nodes.
- Paxos becomes less efficient with larger clusters, requiring more messages to
maintain consistency. This trade-off is a key reason why simpler algorithms (like
Raft) are preferred in non-Byzantine environments.

Visual Representation:

Here are two graphs representing key insights from our Paxos fault tolerance and performance
analysis:

• Latency vs. Number of Nodes:

As the number of nodes increases, the latency rises. This is due to the increased
communication overhead required for reaching a consensus with a majority quorum.

• Throughput vs. Number of Nodes:

Throughput decreases as more nodes are added. Although a larger network offers
higher redundancy, it also requires more message exchanges, reducing the number of
proposals accepted per second.

- These graphs visually capture the trade-offs between scalability and performance for the Paxos algorithm.

Security Analysis

Strengths of the Implementation

• Crash Fault Tolerance:

- The code is structured to handle scenarios where some nodes (acceptors or learners) become unavailable. The system can proceed with consensus as long as a majority quorum remains operational.

- Fault Injection Tests simulated failures during stress testing, validating the system’s
ability to function under partial node failures.

• Consistency Assurance:

- The logic in the Proposer and Acceptor classes ensures that no conflicting proposals
can be accepted simultaneously.

- Learner nodes correctly update the final agreed value, ensuring consistency across
the system.

• Randomized Fault Injection (in Stress Test):

- Although the implementation has no fully fledged leader election, the random fault
injection in the stress test exercises the protocol under changing sets of available
acceptors, giving some confidence that progress does not depend on any single node.

Security Vulnerabilities in the Code

• Lack of Byzantine Fault Tolerance:

- The code assumes that all nodes act honestly (following the protocol). If a node
sends incorrect proposals or malicious responses, the system has no defense
mechanism to detect or reject it.

• Denial of Service (DoS) Attack Potential:

- Unlimited client requests: The code does not enforce rate-limiting for proposals. A
malicious client could continuously flood the network, blocking other proposals and
delaying consensus.

- Malicious nodes in quorum: If an attacker controls enough nodes to disrupt the quorum (e.g., they reject every proposal), the system halts.

Mitigation: Introduce a rate-limiting mechanism to limit the number of proposals per client within a time window, as sketched below.
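One possible shape for that mitigation, as a minimal sketch (the RateLimiter class is not part of the implementation above, and the window length and request budget are arbitrary assumptions):

import time
from collections import deque

class RateLimiter:
    # Sliding-window limiter: at most max_requests per client per window.
    def __init__(self, max_requests=10, window_seconds=1.0):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self.history = {}  # client_id -> deque of recent request timestamps

    def allow(self, client_id):
        now = time.time()
        timestamps = self.history.setdefault(client_id, deque())
        # Drop timestamps that have fallen outside the window.
        while timestamps and now - timestamps[0] > self.window_seconds:
            timestamps.popleft()
        if len(timestamps) >= self.max_requests:
            return False  # reject: this client exceeded its budget
        timestamps.append(now)
        return True

A proposer could call allow(client_id) before handling a request and drop or delay proposals from clients that exceed their budget.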

• Replay Attack Vulnerability:

- Old proposals can be replayed because there is no use of nonces (unique identifiers)
or timestamps in the Proposal class. This allows a malicious actor to resubmit old
proposals and disrupt consensus.

Mitigation: Add timestamps or nonces to the Proposal class to detect and reject
stale or repeated proposals.
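A minimal sketch of that mitigation, extending the Proposal idea from Step 5 (FreshProposal, the freshness window, and the nonce cache are illustrative assumptions):

import time
import uuid

class FreshProposal:
    # Proposal carrying a nonce and timestamp so stale copies can be rejected.
    def __init__(self, proposal_id, value):
        self.proposal_id = proposal_id
        self.value = value
        self.nonce = uuid.uuid4().hex   # unique per proposal, never reused
        self.timestamp = time.time()

MAX_AGE_SECONDS = 5.0
seen_nonces = set()

def is_replay(proposal):
    # Reject proposals that are too old or whose nonce has been seen before.
    if time.time() - proposal.timestamp > MAX_AGE_SECONDS:
        return True
    if proposal.nonce in seen_nonces:
        return True
    seen_nonces.add(proposal.nonce)
    return False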

• No Message Authentication or Encryption:

- The code does not use any cryptographic validation (like signatures or hashed
tokens) for communication between proposers, acceptors, and learners. This opens
the possibility of message tampering by intermediate nodes.
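As an illustration of the missing check, each message could at least carry an authentication tag computed over its contents. The sketch below uses an HMAC with a shared key as a simple stand-in for the digital signatures a real deployment would use; the key handling is deliberately simplified and the payload format is an assumption:

import hmac
import hashlib

SHARED_KEY = b"replace-with-a-per-node-secret"  # placeholder; real systems would use key pairs

def sign_message(payload: bytes) -> str:
    # Tag the message so receivers can detect tampering in transit.
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

def verify_message(payload: bytes, tag: str) -> bool:
    return hmac.compare_digest(sign_message(payload), tag)

# Example: a proposer signs "proposal_id:value" and an acceptor verifies it.
payload = b"42:value1"
tag = sign_message(payload)
print(verify_message(payload, tag))          # True
print(verify_message(b"42:tampered", tag))   # False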

• Sybil Attack Vulnerability:

- There is no mechanism to verify the identity of nodes (e.g., public-key authentication). This makes the system vulnerable to a Sybil attack, where an attacker creates multiple fake nodes to influence consensus.

Visual Representation:

• Categories:

- Byzantine Faults: High severity, as Paxos does not handle malicious nodes.

- Denial of Service (DoS) Attacks: Moderate to high severity, as a malicious proposer can overwhelm the system.

- Message Replay Attacks: Requires timestamps or nonces for prevention.

- Sybil Attack: Creating fake nodes can disrupt consensus without identity verification.

• Chart Interpretation:

- Red Area: Represents the severity of each security vulnerability.

- Green Area: Represents the effort required to mitigate each issue effectively.

