# Adhoc Unit 3

Here's a simplified explanation of how TCP works in ad hoc networks:

### Introduction to TCP in Ad Hoc Networks

- **TCP (Transmission Control Protocol)** is the most widely used protocol to handle reliable data transfer
over the internet.

- Ad hoc networks are unique because they’re made up entirely of wireless links. There are no fixed routers,
and any device can move around unpredictably.

- TCP was originally designed for stable, wired networks. In ad hoc networks, however, we need to adjust TCP
to handle the unique challenges, like frequent disconnections, mobility, and higher error rates.

### How TCP Was Originally Designed

- TCP's design heavily relies on the **end-to-end argument**, which means it expects all the reliability
checks (like error handling, encryption, and flow control) to happen at the endpoints.

- In wired networks, TCP uses **flow control** and **congestion control** to keep data flowing smoothly and
prevent overloads.

- **Timeouts and retransmissions** are used to recover lost data.

- But since ad hoc networks are less stable, adding **error detection and correction at the link layer** (the
layer that directly handles connections) helps improve TCP performance.

### Challenges in Ad Hoc Networks

Ad hoc networks are tricky for TCP because:

- There are more errors due to wireless signals.

- There can be longer delays, and nodes (devices) keep moving.

- TCP needs to adjust to work effectively in this unpredictable environment.

### TCP Basics: How TCP Works

TCP has some key features:

1. **Byte Stream Delivery**: TCP takes data from applications (like a web browser) and decides how to split it
into manageable chunks (called segments) to send across the network.
2. **Connection-Oriented**: Before any data transfer, both devices (sender and receiver) need to "agree" to
start communicating, forming a connection.

3. **Full-Duplex Communication**: TCP usually allows data to flow in both directions at the same time (full-
duplex). But, during connection setup or closure, it may only go one way for a short time.
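Connection orientation and byte-stream delivery can be seen directly with the standard sockets API. The toy loopback echo below (the helper name `echo_once` is illustrative) completes a real TCP handshake on `connect`/`accept` before any bytes flow:

```python
import socket
import threading

def echo_once(message: bytes) -> bytes:
    """Echo `message` over a real TCP connection on loopback, illustrating
    connection setup (connect/accept) and byte-stream delivery."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))              # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def server():
        conn, _ = srv.accept()              # three-way handshake completes here
        conn.sendall(conn.recv(1024))       # echo the received bytes back
        conn.close()

    t = threading.Thread(target=server)
    t.start()
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(("127.0.0.1", port))        # client side of the handshake
    cli.sendall(message)
    reply = cli.recv(1024)
    cli.close()
    t.join()
    srv.close()
    return reply
```

Note that `sendall`/`recv` deal in byte streams, not messages: TCP itself decides how the bytes are split into segments on the wire.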

### How TCP Ensures Reliable Data Transfer

To make sure data is received accurately, TCP uses several methods:

1. **Checksums**: Each segment (data chunk) has a checksum (a type of "fingerprint") so the receiver can
detect errors in the data.

2. **Duplicate Data Detection**: TCP tracks which bytes of data have already been received to avoid
processing the same data twice.

3. **Retransmissions**: If data is lost, TCP resends it. If the sender doesn’t get an acknowledgment
(confirmation of receipt) in time, it retransmits.

4. **Sequencing**: TCP keeps the segments in order, so data is delivered to the application in the correct
sequence, even if packets arrive out of order.

5. **Timers**: TCP uses timers to keep track of how long it waits for an acknowledgment. If time runs out, it
assumes data was lost and resends it.
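The checksum in item 1 can be made concrete. Below is a minimal sketch of the 16-bit one's-complement checksum in the style TCP uses (per RFC 1071); real TCP also sums a pseudo-header, which is omitted here, and `internet_checksum` is an illustrative name:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement checksum (RFC 1071 style, simplified)."""
    if len(data) % 2:                       # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carry bits back in
    return (~total) & 0xFFFF               # one's complement of the sum
```

A receiver that recomputes the sum over the data plus the checksum field gets `0xFFFF` when the segment arrived intact; anything else signals corruption.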

In summary, TCP in ad hoc networks needs adjustments to handle wireless challenges, such as high error
rates and device mobility, by enhancing its reliability features, like error checking and retransmission.

Congestion Control Mechanisms

TCP uses several methods to control data flow to avoid overloading the network. Let’s break them down:

1. Slow Start:
a. When a TCP connection begins, the sender doesn’t know how much data the network can
handle at once, so it starts slowly.
b. The sender sets a Congestion Window (CWND), which is a limit on the amount of
unacknowledged data it can send.
c. Initially, this window is very small, usually one segment (the smallest unit of data TCP can
send).
d. For each acknowledgment (ACK) received from the receiver, the sender increases the
congestion window by one segment. This allows the sender to increase its transmission rate
gradually.
e. This "slow start" approach helps the sender avoid flooding the network with data right away.
2. Congestion Avoidance:
a. Slow Start's rapid growth cannot continue indefinitely; beyond a point, sending
more data risks congesting the network and dropping packets.
b. When the congestion window reaches a threshold (called ssthresh), TCP switches
from Slow Start to Congestion Avoidance.
c. In Congestion Avoidance, the sender grows its congestion window more slowly to avoid
overwhelming the network, usually increasing it by one segment per Round-Trip Time (RTT)
(the time it takes for a packet to go from sender to receiver and back).
d. This helps the sender balance between sending enough data and avoiding network
congestion.
3. Fast Retransmit:
a. If a sender gets duplicate ACKs (i.e., the same ACK received more than once), it could mean
a packet was lost or delayed.
b. After receiving three duplicate ACKs, TCP assumes the missing packet was lost rather than
just delayed.
c. Instead of waiting for a Retransmission Timeout (the usual waiting period before
retransmitting lost data), the sender retransmits the lost packet right away.
d. This allows TCP to recover from packet loss faster, reducing delays.
4. Fast Recovery:
a. After Fast Retransmit, the sender doesn’t immediately go back to Slow Start.
b. Since duplicate ACKs indicate some packets are still being received, the sender can assume
there’s still some data flowing.
c. The sender then goes into Congestion Avoidance mode instead of Slow Start, allowing it to
send data at a more balanced rate.
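The four mechanisms above can be sketched as a single window-update rule. This is a simplified Reno-style model in units of whole segments (real TCP counts bytes and has more cases); the helper name `next_cwnd` is illustrative:

```python
def next_cwnd(cwnd: float, ssthresh: float, event: str):
    """Return (cwnd, ssthresh) after one event, in units of segments.
    Simplified Reno-style rules, not a full implementation."""
    if event == "ack":
        if cwnd < ssthresh:
            cwnd += 1                       # Slow Start: +1 segment per ACK
        else:
            cwnd += 1 / cwnd                # Congestion Avoidance: ~+1 per RTT
    elif event == "triple_dup_ack":         # Fast Retransmit / Fast Recovery
        ssthresh = max(cwnd / 2, 2)
        cwnd = ssthresh                     # resume in Congestion Avoidance
    elif event == "timeout":
        ssthresh = max(cwnd / 2, 2)
        cwnd = 1                            # fall all the way back to Slow Start
    return cwnd, ssthresh
```

Feeding a stream of events through this rule reproduces the familiar sawtooth: exponential growth, linear growth, then a halving (on duplicate ACKs) or a reset to one segment (on timeout).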

Time Estimation and Round-Trip Time (RTT)

In TCP, estimating the time it takes for data to travel from the sender to the receiver and back (Round-Trip
Time or RTT) is crucial for retransmissions. Here’s how it works:

1. RTT Estimation:
a. When the sender transmits a packet, it starts an internal timer.
b. When it receives an acknowledgment from the receiver, it checks the time and calculates
the RTT (the time from sending to receiving the ACK).
2. Why RTT is Important:
a. The sender uses RTT to set the Retransmission Timeout period.
b. If the RTT estimate is too low, the sender may retransmit packets too soon, which wastes
network resources.
c. If the RTT estimate is too high, the sender might wait too long to retransmit, causing
unnecessary delays.
3. Fine-Tuning RTT:
a. On a fast network (like Ethernet), replies usually come back in a few microseconds.
b. TCP dynamically adjusts its RTT estimate based on real-time measurements, allowing it to
optimize its retransmission timing to match the network conditions.
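The dynamic adjustment described above is commonly done with exponentially weighted moving averages. Below is a simplified Jacobson/Karels-style estimator in the spirit of RFC 6298 (the class name is illustrative, and details like the minimum RTO and clock granularity are omitted):

```python
class RttEstimator:
    """Smoothed RTT (SRTT) and RTT variance (RTTVAR), simplified."""
    ALPHA, BETA = 1 / 8, 1 / 4              # standard smoothing gains

    def __init__(self, first_sample: float):
        self.srtt = first_sample            # smoothed round-trip time
        self.rttvar = first_sample / 2      # estimate of RTT variability

    def update(self, sample: float) -> float:
        """Fold in one new RTT measurement; return the updated RTO."""
        self.rttvar = (1 - self.BETA) * self.rttvar + self.BETA * abs(self.srtt - sample)
        self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * sample
        return self.rto()

    def rto(self) -> float:
        """Retransmission timeout: mean RTT plus a variance safety margin."""
        return self.srtt + 4 * self.rttvar
```

The variance term is what keeps the RTO from being too tight (spurious retransmissions) or too loose (long recovery delays) as network conditions fluctuate.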

TCP Challenges in MANETs

In a MANET (Mobile Ad Hoc Network), nodes (devices) connect wirelessly and can move
freely. This dynamic setup creates challenges for TCP, which was originally designed for
stable, wired networks. Here’s why:

1. Changing Topology:
a. Nodes in MANETs move around, which often changes the network paths
(routes) between sender and receiver.
b. When a path between two nodes breaks, TCP detects this as a packet loss
and enters retransmission mode.
c. TCP uses timeouts to decide when to retransmit lost packets. If packets
keep getting lost due to changing paths, TCP repeats these timeouts, and
each time the timeout period doubles (exponentially increases).
d. This “exponential backoff” (increasing timeouts repeatedly) causes TCP to
send fewer packets over time, which can severely affect its performance in
MANETs.
2. Dependence on Congestion Window (CWND):
a. TCP adjusts its sending rate based on a "congestion window" (CWND), which
is the amount of data it thinks the network can handle without dropping
packets.
b. In MANETs, with their frequent route changes, the CWND is often too large or
too small for the current network conditions, making it hard for TCP to send
data smoothly.

TCP Unfairness in MANETs

TCP's rate adjustments lead to some fairness issues when used in MANETs. Here’s how:

1. Increasing Transmission Rate:
a. TCP keeps increasing its sending rate (CWND) as long as no packet loss is detected. It does
this to maximize network usage.
2. Responding to Packet Loss:
a. When TCP detects packet loss (due to congestion or route changes), it assumes the network
is congested.
b. It then shrinks the CWND, retransmits the lost packet, and resumes transmission at a
lower rate.
3. Exponential Backoff:
a. If losses keep happening with each retransmission, TCP uses a strategy called exponential
backoff.
b. With exponential backoff, TCP doubles the Retransmission Timeout (RTO) each time a
packet is lost and not acknowledged, which means it waits increasingly longer before
resending the data.
c. This method can cause TCP to slow down significantly, which might unfairly benefit some
TCP flows while severely slowing down others.
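The exponential backoff in item 3 is easy to sketch: the RTO doubles on each successive loss, up to a cap (64 seconds in standard TCP). The helper name is illustrative:

```python
def backoff_rtos(initial_rto: float, losses: int, max_rto: float = 64.0):
    """List the RTO used for each of `losses` successive retransmissions,
    doubling on every loss and capping at `max_rto` (exponential backoff)."""
    rtos, rto = [], initial_rto
    for _ in range(losses):
        rtos.append(rto)
        rto = min(rto * 2, max_rto)
    return rtos
```

After a handful of losses on a broken MANET route, the sender can end up idle for tens of seconds, which is why backoff during route failures (rather than congestion) is so damaging to throughput.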

Slide 1: Impact of Lower Layers on TCP – MAC Layer Impact

This slide discusses the role of the MAC (Medium Access Control) layer in TCP (Transmission Control
Protocol) communication, particularly for mobile nodes in a network. Key points are:

1. Shared Broadcast Channel: The MAC layer is responsible for efficiently managing a shared
broadcast channel, which is essential for mobile nodes to communicate within the network.
2. RTS/CTS Handshake: In IEEE 802.11 networks, the Request to Send/Clear to Send (RTS/CTS)
mechanism is used as a precaution. This mechanism only triggers when the DATA packet size is
above a specific threshold. RTS/CTS helps prevent data collisions by coordinating access to the
network.
3. Transmission Duration Information: Each RTS or CTS frame also informs nearby nodes of the
remaining time required to complete the transmission. This notification allows nodes within range to
delay their transmissions, reducing the chance of collision.
4. Inter-Frame Space (IFS): After a transmission, nodes must wait for a short period (IFS interval)
before trying to access the channel again.
5. Binary Exponential Backoff: If contention occurs, a backoff mechanism kicks in. Each node waits
for a random interval before retrying transmission, and the wait time increases exponentially with
each collision, helping manage network traffic.
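The binary exponential backoff in item 5 can be sketched as follows. This is a simplified 802.11-flavored model (slot counts and the `cw_min`/`cw_max` defaults are illustrative, not taken from the standard verbatim): the contention window roughly doubles with each collision, and the node waits a uniformly random number of slots within it.

```python
import random

def contention_wait_slots(collisions: int, cw_min: int = 15,
                          cw_max: int = 1023) -> int:
    """Pick a random backoff (in slots) after `collisions` failed attempts.
    The window doubles per collision until it hits cw_max."""
    cw = min((cw_min + 1) * (2 ** collisions) - 1, cw_max)
    return random.randint(0, cw)            # wait a random slot in [0, cw]
```

Randomizing within a growing window is what de-synchronizes colliding nodes: after repeated collisions they spread their retries over a wider and wider interval.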

Slide 2: Issues at the MAC Layer

This slide covers common issues encountered in MAC-layer communication within a linear topology:

1. Linear Topology: In this setup, each node can only communicate with its neighboring nodes.
2. Hidden Node Problem:
a. This issue arises when two nodes, like node 1 and node 3, are both trying to communicate
with node 2 but are unaware of each other’s transmissions.
b. This results in collisions at node 2, as both nodes can’t detect each other’s signals.
3. Exposed Node Problem:
a. Here, a node (like node 3) detects a transmission in progress between two distant nodes
(like node 4 and 5), mistakenly thinking it must remain silent.
b. This results in unnecessary silence, reducing the network's overall efficiency.
4. Capture Effect:
a. This phenomenon occurs when a stronger signal (from node 3) overpowers a weaker one
(from node 2) at the receiver end.
b. The receiver will prioritize the stronger signal, ignoring the weaker one, which can lead to
unfair channel access.

Slide 3: TCP Throughput

This slide explains how TCP throughput (data transfer rate) is affected by the number of nodes (or hops) in a
network:

• More Hops, Lower Throughput: As the number of hops in a TCP connection increases, the
throughput decreases. This drop happens because every additional node introduces contention for
network resources, which slows down data transfer.
• Inverse Relationship: TCP throughput is inversely proportional to the number of hops. The more
nodes a TCP packet must pass through, the greater the delays and the lower the overall efficiency of
the network.
NETWORK LAYER IMPACT

Dynamic Source Routing (DSR) Protocol Overview

1. On-Demand Operation:

• DSR is like a "request-on-demand" system. It only finds a route when it's needed.
• If Node A wants to send data to Node B, it sends out a message called a Route Request (RREQ) to
discover a path to Node B.

2. Route Reply (RREP):

• Once Node B (or another node that knows the way to Node B) gets the RREQ, it responds with a
Route Reply (RREP), which is like a roadmap back to Node A.

3. Route Caching:

• Nodes in the network save paths they've learned or overheard, storing these in something called a
route cache.
• This caching reduces the need for regular route updates, saving network resources.
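A route cache can be sketched in a few lines. This toy version (class and method names are illustrative, not from any DSR implementation) also makes the stale-route hazard visible: nothing in the cache knows whether a stored path is still valid.

```python
class RouteCache:
    """Toy DSR-style route cache: full source routes learned from route
    replies or overheard traffic. Nothing here detects staleness."""
    def __init__(self):
        self.routes = {}                    # destination -> list of node ids

    def learn(self, destination, path):
        """Store (or overwrite with) the most recently learned route."""
        self.routes[destination] = list(path)

    def lookup(self, destination):
        # Avoids a fresh RREQ broadcast if we have *any* cached route,
        # even one that may have broken since it was learned.
        return self.routes.get(destination)
```
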
Problem with Stale Routes in DSR

Stale Routes:

• Imagine a fast-changing network where nodes are constantly moving.


• In such a scenario, a saved route can quickly become outdated if, for example, Node C in the path
has moved or a link has broken by the time the data is ready to be sent.

Impact on TCP:

• If TCP tries to send data over an outdated route, it might get delayed or fail.
• TCP, expecting a response, may go into a "backoff" mode, where it slows down to avoid congestion.
This slows down data transfer even more, hurting network performance.

Possible Solution:

• One way to reduce stale route issues is to stop nodes from responding using their cached routes.
Instead, a fresh route discovery is done every time.
• This makes routes more accurate but increases network traffic because every new path discovery
requires a broadcast of RREQ messages.

Temporally Ordered Routing Algorithm (TORA) Protocol Overview

1. Hybrid Nature:

• TORA is mostly on-demand, but it also has some proactive features.


• It’s designed for dynamic situations, where routes need to be updated quickly with minimal control
messages. This minimizes unnecessary updates to nodes unaffected by a change.

2. Multiple Routes:

• Unlike DSR, TORA keeps multiple paths between nodes, so if one path breaks, others are available.
• TORA uses Directed Acyclic Graphs (DAGs), where each node has multiple alternative paths to the
destination.

3. Efficient Route Maintenance:

• TORA only initiates route discovery when all available routes are gone.
• When an invalid route is detected, affected nodes send a clear packet to neighbors, removing stale
routes locally without affecting the entire network.

Problems with Stale Routes in TORA

Stale Routes:
• Like DSR, TORA can also have issues with stale routes. However, because TORA maintains multiple
paths, it’s less vulnerable.
• If TCP tries to send data over an invalid route, delays or errors can still happen, but this is less
frequent since TORA only needs to rediscover routes if all options fail.

Impact on TCP with Multiple Paths:

• TORA's approach to having multiple paths can cause packets to arrive out of order.
• For example, one packet might take a long path and arrive after a newer packet that took a shorter
path, causing TCP to think a packet was lost.
• TCP then retransmits data unnecessarily, reducing overall performance.

Solution:

• To fix this, a self-adaptive mechanism could help, prioritizing shorter paths to reduce out-of-
sequence deliveries, or managing the selection of paths more efficiently.

Impact on TCP and Routing Protocol Design (Cross-Layer Design)

• Cross-Layer Design:
o The issues in both DSR and TORA show that routing protocols should consider their impact
on TCP.
o By coordinating the network layer (routing) with the transport layer (TCP), these problems
could be reduced, leading to better overall network performance.
In short, network and transport layers should "talk" to each other to prevent TCP from reacting poorly to the
route changes in DSR and TORA.

Solutions for TCP over Ad Hoc Networks

1. TCP-Feedback:
a. What it does: If there’s a route failure in the network, this solution sends a
notification (called Route Failure Notification or RFN) to TCP. This helps TCP
know that the issue is due to a route failure, not network congestion.
b. Why it's useful: It helps TCP avoid mistakenly reducing its sending rate due
to congestion when the actual problem is a broken route.
2. ELFN Approach (Explicit Link Failure Notification):
a. What it does: This approach uses communication between different layers
of the network (called cross-layer interaction) to tell TCP when a link fails.
When a failure is detected, TCP pauses its transmission until the route is
restored.
b. Why it's useful: It helps TCP avoid sending data over a broken route, making
it more efficient.
3. Fixed Retransmission Timeout (RTO):
a. What it does: TCP usually waits a certain time before retransmitting lost
data. With this solution, if TCP detects a route failure, it tries to recover
quickly by retransmitting. After two timeouts, it assumes the failure is due to
a route issue and stops increasing the timeout duration.
b. Why it's useful: It tries to recover the route faster but avoids making the
timeout wait too long, which helps the network perform better.
4. ATCP Protocol (Ad hoc TCP):
a. What it does: ATCP adds a layer between the network and transport layers.
This layer helps improve performance by avoiding unnecessary congestion
control that happens when TCP doesn't know the network's state.
b. Why it's useful: It optimizes TCP performance by preventing it from acting
too conservatively when the network is actually fine.
5. TCP-DOOR (Detection of Out-of-Order and Response):
a. What it does: In ad hoc networks, packets may arrive out of order due to the
dynamic nature of the network. TCP-DOOR detects these out-of-order
packets and adapts to this behavior.
b. Why it's useful: It makes TCP more robust in handling situations where
packet delivery is not in the expected order.
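The ELFN idea in item 2 boils down to a freeze/resume switch on the sender. The sketch below captures only that idea; the class and method names are hypothetical and do not come from any real implementation:

```python
class ElfnSenderState:
    """Sketch of the ELFN behavior: on an explicit link-failure notice the
    sender freezes its timers and window; on restoration it resumes."""
    def __init__(self):
        self.frozen = False

    def on_link_failure_notice(self):
        self.frozen = True                  # stop the RTO timer; hold CWND as-is

    def on_route_restored(self):
        self.frozen = False                 # resume transmission with saved state

    def may_transmit(self) -> bool:
        return not self.frozen
```

The key point is what does *not* happen while frozen: no RTO doubling and no CWND shrinking, so the connection resumes at full speed once the route comes back.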

Main Drawbacks

1. Reliance on network feedback or explicit notifications:
a. Problem: These solutions depend on receiving feedback or notifications
from the network (like RFN or ELFN). If there’s a security breach or the
notifications are unreliable, this can lead to issues where TCP might make
wrong decisions, causing data loss or delays.
b. Why it’s an issue: In real-world networks, feedback can be delayed, lost, or
tampered with, making it unreliable for TCP to adjust correctly.
2. Assumptions in protocols like TCP-DOOR may not always hold:
a. Problem: TCP-DOOR assumes that out-of-order packets are the main
problem in mobile networks and tries to adapt to this. However, in some real
situations, other issues might cause packet delivery problems, and TCP-
DOOR might not work as expected.
b. Why it’s an issue: Real networks can be much more complex than the
assumptions that protocols like TCP-DOOR make, leading to performance
issues or even failures if the assumptions don't hold true.

COPAS: COntention-based PAth Selection

COPAS is a protocol designed to improve TCP performance in ad hoc networks by
addressing two specific issues:

1. Capture Problem
2. TCP Unfairness

1. Capture Problem:

• What it is: In ad hoc networks, when two nodes try to send data to the same
destination, the one with the stronger signal often "captures" the receiver's
attention, while the weaker signal is ignored or lost.
• Why it’s a problem for TCP: TCP relies on reliable delivery, so when packets are
lost due to the capture problem, TCP has to retransmit them, which can lead to
delays and performance degradation.

2. TCP Unfairness:

• What it is: In ad hoc networks, multiple TCP flows sharing the same channel might
face unfairness. This happens when some flows dominate the network bandwidth,
while others get less than their fair share.
• Why it’s a problem for TCP: Unfairness can cause congestion and slower
transmission for certain flows, which harms overall network efficiency.

How COPAS Helps:

COPAS proposes two novel routing techniques to balance contention (or conflicts)
between data packets and acknowledgments (ACKs) in the network.

Key Techniques in COPAS:

1. Disjoint Forward and Reverse Paths:
a. What it does: COPAS suggests using separate paths for TCP data (forward
path) and TCP ACKs (reverse path).
i. The forward path carries data packets from the sender to the
receiver.
ii. The reverse path carries ACK packets from the receiver back to the
sender.
b. Why it's useful: By using disjoint paths, the TCP data and ACK packets no
longer travel over the same path. This reduces conflicts or congestion
between the two types of packets, especially in scenarios where the forward
and reverse packets might interfere with each other.
2. Contention-Balancing:
a. What it does: COPAS focuses on balancing contention in the network by
reducing the competition for the same resources (like bandwidth) between
TCP data and ACKs.
b. Why it’s useful: Reducing contention between the two types of packets
leads to better network performance and fairness. This ensures that neither
the data nor the ACK packets dominate the network resources.
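The disjoint-path idea can be sketched as a simple selection rule. This is a COPAS-flavored illustration only (the function name is hypothetical, and real COPAS weighs paths by measured contention rather than a first-fit scan):

```python
def pick_disjoint_reverse(forward_path, candidate_reverse_paths):
    """Choose an ACK (reverse) path sharing no intermediate node with the
    data (forward) path, if one exists; otherwise signal no disjoint option."""
    inner = set(forward_path[1:-1])         # endpoints necessarily repeat
    for path in candidate_reverse_paths:
        if inner.isdisjoint(path[1:-1]):
            return path
    return None                             # caller falls back to any route
```

When data and ACKs traverse node-disjoint paths, a busy forward hop no longer delays the very ACKs that would let the sender keep its window open.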

Benefits of COPAS:

• Reduces Capture Problem: By using separate paths for data and ACKs, the
network becomes less likely to experience the capture problem, leading to fewer
retransmissions and better TCP performance.
• Improves Fairness: By balancing the use of network resources between data and
ACKs, COPAS helps prevent unfairness, where one TCP flow might dominate the
bandwidth over others.
• Improves TCP Efficiency: With less contention and fewer packet collisions, TCP
connections can achieve higher throughput and lower latency.

In Simple Terms:

Imagine you're in a crowded room (the network), and you're trying to have a conversation
(data packets). There’s someone else in the room also trying to talk to the same person
(ACK packets). If both of you are speaking at the same time, you’ll both be ignored (lost
packets). COPAS solves this by setting up two separate channels — one for you (data)
and one for the other person (ACK) — so you can both communicate clearly without
interfering with each other.
Main Drawback:

While COPAS improves performance in ad hoc networks, it introduces additional
complexity in terms of path management and requires careful coordination between
different routing paths, which might add overhead in certain situations.

In essence, COPAS helps solve TCP issues in wireless networks by balancing data and
ACKs efficiently, ensuring better throughput and fairness.
