Transport Layer and Security Protocols for Ad Hoc Wireless Networks
9.1 INTRODUCTION
The objectives of a transport layer protocol include the setting up of an end-to-end
connection, end-to-end delivery of data packets, flow control, and congestion control.
There exist simple, unreliable, and connection-less transport layer protocols such
as UDP, and reliable, byte-stream-based, and connection-oriented transport layer
protocols such as TCP for wired networks. These traditional wired transport layer
protocols are not suitable for ad hoc wireless networks due to the inherent problems
associated with the latter. The first half of this chapter discusses the issues and challenges in designing a transport layer protocol for ad hoc wireless networks and the reasons for performance degradation when TCP is employed in such networks, and then describes some of the existing TCP extensions and other transport layer protocols proposed for ad hoc wireless networks.
The previous chapters discussed various networking protocols for ad hoc wireless networks. However, almost none of them took into consideration a very important aspect of communication: security. Due to the unique characteristics of ad hoc wireless networks, which have been mentioned in the previous chapters, such networks are far more vulnerable to security attacks than wired networks or infrastructure-based wireless networks (such as cellular networks). Therefore, the security protocols used in other networks (wired networks and infrastructure-based wireless networks) cannot be directly applied to ad hoc wireless networks.
The second half of this chapter focuses on the security aspect of communication in
ad hoc wireless networks. Some of the recently proposed protocols for achieving
secure communication are discussed.
The following are some of the important goals to be met while designing a transport layer protocol for ad hoc wireless networks:
• The protocol should incur minimum connection setup and connection main-
tenance overheads. It should minimize the resource requirements for setting
up and maintaining the connection in order to make the protocol scalable in
large networks.
• The transport layer protocol should have mechanisms for congestion control
and flow control in the network.
• The protocol should be able to adapt to the dynamics of the network such as
the rapid change in topology and changes in the nature of wireless links from
uni-directional to bidirectional or vice versa.
• One of the important resources, the available bandwidth, must be used effi-
ciently.
• The transport layer protocol should make use of information from the lower
layers in the protocol stack for improving the network throughput.
[Figure: Classification of transport layer solutions for ad hoc wireless networks into split approaches and end-to-end approaches, including ATCP [12], ACTP [14], and ATP [15].]
(Data belonging to the same TCP connection, carried as a stream of bytes, may be combined into a single packet, or a single packet may be split into multiple packets, but the data are delivered as a stream of bytes. Hence, a TCP packet is considered a segment containing several bytes rather than a single packet; however, the terms segment and packet are used interchangeably in this chapter.)
Figure 9.2. Illustration of the TCP congestion window (congestion window size in MSS versus time), showing the slow-start threshold, the linear increase during congestion avoidance, the multiplicative decrease when congestion is detected, and the behavior of regular TCP (TCP Tahoe) and TCP Reno.
When the TCP sender transmits a segment, it starts the retransmission timeout (RTO) timer and waits for the corresponding ACK packet. If the ACK packet does not arrive within the RTO period, the sender assumes that the packet is lost. TCP attributes the packet loss to congestion in the network and invokes its congestion control mechanism. The TCP sender does the following during congestion control: (i) reduces the slow-start threshold to half the current congestion window or two MSSs, whichever is larger, (ii) resets the congestion window size to one MSS, (iii) activates the slow-start algorithm, and (iv) resets the RTO with an exponential back-off value which doubles with every subsequent retransmission. The slow-start process doubles the congestion window with every successfully acknowledged window and, upon reaching the slow-start threshold, enters the congestion avoidance phase.
The TCP sender also assumes a packet loss if it receives three consecutive duplicate ACKs (DUPACKs), that is, repeated acknowledgments for the same TCP segment that was successfully received in order at the receiver. Upon reception of three DUPACKs, the TCP sender retransmits the oldest unacknowledged segment. This is called the fast retransmit scheme. When the TCP receiver receives out-of-order packets, it generates DUPACKs to indicate to the TCP sender the sequence number of the last in-order segment received successfully.
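The following is a minimal sketch, in Python, of the sender-side reactions described above: the timeout response (shrinking the slow-start threshold, resetting the window, backing off the RTO) and the fast retransmit trigger on three DUPACKs. The class and method names are illustrative assumptions, not part of any actual TCP implementation.

```python
MSS = 1  # congestion window measured in MSS units

class TahoeSender:
    """Toy model of a TCP Tahoe sender's congestion-control reactions."""

    def __init__(self):
        self.cwnd = 1 * MSS        # congestion window
        self.ssthresh = 64 * MSS   # slow-start threshold
        self.rto = 1.0             # retransmission timeout (seconds)
        self.dupacks = 0

    def on_window_acked(self):
        """Called once per successfully acknowledged window."""
        if self.cwnd < self.ssthresh:
            self.cwnd *= 2           # slow start: exponential growth
        else:
            self.cwnd += 1 * MSS     # congestion avoidance: linear growth

    def on_timeout(self):
        # (i) halve the slow-start threshold (at least 2 MSS),
        # (ii) reset cwnd to one MSS, (iii) re-enter slow start,
        # (iv) back off the RTO exponentially.
        self.ssthresh = max(self.cwnd // 2, 2 * MSS)
        self.cwnd = 1 * MSS
        self.rto *= 2
        self.dupacks = 0

    def on_dupack(self):
        self.dupacks += 1
        if self.dupacks == 3:
            # fast retransmit: resend the oldest unacknowledged segment
            self.retransmit_oldest_unacked()
            self.dupacks = 0

    def retransmit_oldest_unacked(self):
        pass  # placeholder for the actual retransmission machinery
```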
Among the several extensions of TCP, some of the important schemes are discussed below. The regular TCP discussed above is referred to as TCP Tahoe [2] in most of the existing literature. TCP Reno [3] is similar to TCP Tahoe, but adds fast recovery. On a timeout or on the arrival of three DUPACKs, the TCP Reno sender enters fast recovery, during which (refer to points C-J-K in Figure 9.2)
the TCP Reno sender retransmits the lost packet, reduces the slow-start threshold and the congestion window to half the size of the current congestion window, and increments the congestion window linearly (by one MSS per DUPACK) with every subsequent DUPACK. On reception of a new ACK (not a DUPACK, i.e., an ACK with a sequence number higher than the highest sequence number seen so far), TCP Reno sets the congestion window to the slow-start threshold and enters the congestion avoidance phase, similar to TCP Tahoe (points K-L-M in Figure 9.2).
J. C. Hoe proposed TCP-New Reno [4], an extension of TCP Reno in which the TCP sender does not exit the fast-recovery state when a new ACK is received. Instead, it remains in the fast-recovery state until all the packets that were outstanding when fast recovery was entered are acknowledged. For every partial (intermediate) ACK packet, TCP-New Reno assumes that the packet immediately following the last acknowledged one is lost, and retransmits it.
TCP with selective ACK (SACK) [5], [6] improves the performance of TCP by
using the selective ACKs provided by the receiver. The receiver sends a SACK
instead of an ACK, which contains a set of SACK blocks. These SACK blocks
contain information about the recently received packets which is used by the TCP
sender while retransmitting the lost packets.
The route reestablishment time after a path break depends on factors such as the bandwidth of the channel, the traffic load in the network, and the nature of the routing protocol. If the route reestablishment time is greater than the RTO period of the TCP sender, the TCP sender assumes congestion in the network, retransmits the lost packets, and initiates the congestion control algorithm. These retransmissions waste bandwidth and battery power. Moreover, even when a new route is found, the TCP throughput remains low for some time because the sender has to rebuild the congestion window through the traditional slow start.
• Effect of path length: It is found that the TCP throughput degrades rapidly
with an increase in path length in string (linear chain) topology ad hoc wireless
networks [7], [8]. This is shown in Figure 9.3. The possibility of a path break
increases with path length. Given that the probability of a link break is p_l, the probability of a path break (p_b) for a path of length k hops can be obtained as p_b = 1 − (1 − p_l)^k. Figure 9.4 shows the variation of p_b with path length for p_l = 0.1. Hence, as the path length increases, the probability of a path break increases, resulting in degradation of the throughput in the network.
Figure 9.3. Variation of TCP throughput (in kbps) with path length (number of hops).

Figure 9.4. Variation of the probability of path break with path length (number of hops) for p_l = 0.1.
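As a quick check of the relation p_b = 1 − (1 − p_l)^k, the short Python snippet below tabulates the path-break probability for p_l = 0.1 over the range of path lengths plotted in Figure 9.4.

```python
# Path-break probability p_b = 1 - (1 - p_l)^k for a k-hop path,
# evaluated for p_l = 0.1 and k = 1..12.
p_l = 0.1
for k in range(1, 13):
    p_b = 1 - (1 - p_l) ** k
    print(f"path length {k:2d} hops: p_b = {p_b:.3f}")
# An 8-hop path, for instance, gives p_b = 1 - 0.9**8, which is about 0.57.
```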
Due to frequent path breaks, the congestion window maintained by the TCP sender may not reflect the maximum transmission rate acceptable to the network and the receiver. In addition, since ACKs travel along the reverse path, a path break on an entirely different reverse path can affect the performance of the network as much as a path break in the forward path.
• Multipath routing: There exists a set of QoS routing and best-effort rout-
ing protocols that use multiple paths between a source-destination pair. There
are several advantages in using multipath routing. Some of these advantages
include the reduction in route computing time, the high resilience to path
breaks, high call acceptance ratio, and better security. For TCP, however, some of these advantages may contribute to throughput degradation: multipath routing can lead to a significant number of out-of-order packet arrivals, which in turn generate DUPACKs, causing additional power consumption and unnecessary invocation of congestion control.
When the TCP-F sender receives a route failure notification (RFN) packet, it goes into a state called snooze. In the snooze state, the sender stops sending any further packets to the destination, freezes its congestion window and all its timers, including the retransmission timer, and sets up a route failure timer. The value of this route failure timer is dependent on the routing protocol, the network size, and the network dynamics, and is to be taken as the worst-case route reconfiguration time. When the route failure timer expires, the TCP sender changes
from the snooze state to the connected state. Figure 9.6 shows the operation of the
TCP-F protocol. In the figure, a TCP session is set up between node A and node
D over the path A-B-C-D [refer to Figure 9.6 (a)]. When the intermediate link
between node C and node D fails, node C originates an RFN packet and forwards
it on the reverse path to the source node [see Figure 9.6 (b)]. The sender’s TCP
state is changed to the snooze state upon receipt of an RFN packet. If the link CD
rejoins, or if any of the intermediate nodes obtains a path to destination node D,
a route reestablishment notification (RRN) packet is sent to node A and the TCP
state is updated back to the connected state [Figure 9.6 (c)].
As soon as the TCP-F sender receives an RRN packet, it transmits all the packets in its buffer, assuming that the network has returned to its original state. This can also take
care of all the packets that were not acknowledged or lost during transit due to the
path break. In fact, such a step avoids going through the slow-start process that
would otherwise have occurred immediately after a period of congestion. The route
failure timer set after receiving the RFN packet ensures that the sender does not
remain in the snooze state indefinitely. Once the route failure timer expires, the
sender goes back to the connected state in which it reactivates the frozen timers and
starts sending the buffered and unacknowledged packets. This can also take care
of the loss of the RRN packet due to any possible subsequent congestion. TCP-F
permits the TCP congestion control algorithm to be in effect when the sender is
not in the snooze state, thus making it sensitive to congestion in the network.
[Figure 9.6: Operation of the TCP-F protocol over the path A-B-C-D, showing the TCP state changing between connected and snooze as RFN and RRN packets propagate.]
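The TCP-F sender behavior described above can be summarized as a small state machine. The sketch below is a hedged illustration in Python; the timer handling, the packet formats, and the method names are assumptions made for illustration, not part of the TCP-F specification.

```python
import threading

class TCPFSender:
    """Illustrative state machine for a TCP-F sender."""

    CONNECTED, SNOOZE = "connected", "snooze"

    def __init__(self, worst_case_route_reconfig_time):
        self.state = self.CONNECTED
        self.route_failure_timeout = worst_case_route_reconfig_time
        self.route_failure_timer = None

    def on_rfn(self):
        """Route failure notification received from an intermediate node."""
        if self.state == self.CONNECTED:
            self.state = self.SNOOZE
            self.freeze_window_and_timers()
            # ensure the sender does not remain in the snooze state indefinitely
            self.route_failure_timer = threading.Timer(
                self.route_failure_timeout, self.on_route_failure_timeout)
            self.route_failure_timer.start()

    def on_rrn(self):
        """Route re-establishment notification: resume immediately."""
        if self.state == self.SNOOZE:
            if self.route_failure_timer:
                self.route_failure_timer.cancel()
            self.resume()

    def on_route_failure_timeout(self):
        # also covers the case where the RRN packet itself is lost
        if self.state == self.SNOOZE:
            self.resume()

    def resume(self):
        self.state = self.CONNECTED
        self.unfreeze_window_and_timers()
        self.send_buffered_and_unacked_packets()

    # placeholders standing in for the real TCP machinery
    def freeze_window_and_timers(self): pass
    def unfreeze_window_and_timers(self): pass
    def send_buffered_and_unacked_packets(self): pass
```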
(Such notifications are used to inform the sender about path breaks so that the sender can recompute a fresh route to the destination; they are especially useful with on-demand routing protocols such as DSR.)
The disadvantages of this approach include (i) the origination of periodic probe packets, which consume bandwidth and power, and (ii) the fact that the congestion window used after a new route is obtained may not reflect the achievable transmission rate acceptable to the network and the TCP receiver.
9.5.5 TCP-BuS
TCP with buffering capability and sequence information (TCP-BuS) [10] is similar
to the TCP-F and TCP-ELFN in its use of feedback information from an interme-
diate node on detection of a path break. But TCP-BuS is more dependent on the
routing protocol compared to TCP-F and TCP-ELFN. TCP-BuS was proposed with the associativity-based routing (ABR) [11] protocol as the routing scheme. Hence,
it makes use of some of the special messages such as localized query (LQ) and
REPLY, defined as part of ABR for finding a partial path. These messages are
modified to carry TCP connection and segment information. Upon detection of
a path break, an upstream intermediate node [called pivot node (PN)] originates
an explicit route disconnection notification (ERDN) message. This ERDN packet
is propagated to the TCP-BuS sender and, upon reception of it, the TCP-BuS
sender stops transmission and freezes all timers and windows as in TCP-F. The
packets in transit at the intermediate nodes from the TCP-BuS sender to the PN
are buffered until a new partial path from the PN to the TCP-BuS receiver is ob-
tained by the PN. In order to avoid unnecessary retransmissions, the timers for the
buffered packets at the TCP-BuS sender and at the intermediate nodes up to PN
use timeout values proportional to the round-trip time (RTT). The intermediate
nodes between the TCP-BuS sender and the PN can request the TCP-BuS sender
to selectively retransmit any of the lost packets. Upon detection of a path break,
the downstream node originates a route notification (RN) packet to the TCP-BuS
receiver, which is forwarded by all the downstream nodes in the path. An interme-
diate node that receives an RN packet discards all packets belonging to that flow.
The ERDN packet is propagated to the TCP-BuS sender in a reliable way by using
an implicit acknowledgment and retransmission mechanism. The PN includes the
sequence number of the TCP segment belonging to the flow that is currently at
the head of its queue in the ERDN packet. The PN also attempts to find a new
partial route to the TCP-BuS receiver, and the availability of such a partial path to the destination is conveyed to the TCP-BuS sender through an explicit route successful
notification (ERSN) packet. TCP-BuS utilizes the route reconfiguration mechanism
of ABR to obtain the partial route to the destination. Due to this, other routing
protocols may require changes to support TCP-BuS. The LQ and REPLY messages
are modified to carry TCP segment information, including the last successfully re-
ceived segment at the destination. The LQ packet carries the sequence number of
the segment at the head of the queue buffered at the PN and the REPLY carries
the sequence number of the last segment successfully received by the TCP-BuS receiver.
This enables the TCP-BuS receiver to understand the packets lost in transition and
those buffered at the intermediate nodes. This is used to avoid fast retransmission
requests usually generated by the TCP-BuS receiver when it notices an out-of-order
packet delivery. Upon a successful LQ-REPLY process to obtain a new route to the
Section 9.5. TCP Over Ad Hoc Wireless Networks 465
TCP-BuS receiver, PN informs the TCP-BuS sender of the new partial path using
the ERSN packet. When the TCP-BuS sender receives an ERSN packet, it resumes
the data transmission.
Since there is a chance for ERSN packet loss due to congestion in the network,
it needs to be sent reliably. The TCP-BuS sender also periodically originates probe
packets to check the availability of a path to the destination. Figure 9.7 shows an
illustration of the propagation of ERDN and RN messages when a link between
nodes 4 and 12 fails.
When a TCP-BuS sender receives the ERSN message, it understands, from the
sequence number of the last successfully received packet at the destination and
the sequence number of the packet at the head of the queue at PN, the packets
lost in transition. The TCP-BuS receiver understands that the lost packets will
be delayed further and hence uses a selective acknowledgment strategy instead of
fast retransmission. These lost packets are retransmitted by the TCP-BuS sender.
During the retransmission of these lost packets, the network congestion between the
TCP-BuS sender and PN is handled in a way similar to that in traditional TCP.
Figure 9.7. An illustration of the propagation of ERDN and RN messages in TCP-BuS when the link between nodes 4 and 12 fails: the pivot node (node 4) propagates the ERDN toward the TCP-BuS sender (node 1), the RN message propagates toward the TCP-BuS receiver (node 15) along the downstream nodes, and an LQ-REPLY exchange establishes a new partial path from the pivot node to the receiver.
Figure 9.8. An illustration of (a) the ATCP thin layer implementation, in which ATCP (ATCP_Input()/ATCP_Output()) is inserted between the TCP layer (TCP_Input()/TCP_Output()) and the network layer (IP_Input()/IP_Output()), and (b) the state transition diagram for the ATCP sender, with the states NORMAL, CONGESTED, LOSS, and DISCONN driven by events such as ECN messages, three DUPACKs or imminent RTO expiry, and ICMP destination unreachable (DUR) messages; the TCP sender is kept in the persist state while ATCP is in the LOSS or DISCONN state, and ATCP retransmits segments from the TCP buffer in the LOSS state.
When the ATCP sender, in the LOSS state, receives a new (non-duplicate) ACK, the ACK is forwarded to TCP, the TCP sender is removed from the persist state, and the ATCP sender changes to the NORMAL state.
When the ATCP sender is in the LOSS state, the receipt of an ECN message or
an ICMP source quench message changes it to the CONGESTED state. Along with
this state transition, the ATCP sender removes the TCP sender from the persist
state. When the network gets congested, the explicit congestion notification (ECN) flag (ECN is now an IETF standard, specified in RFC 3168) is set in the data and the ACK packets. When the ATCP sender receives this ECN message in the NORMAL state, it changes to the CONGESTED state and simply remains invisible, permitting
TCP to invoke normal congestion control mechanisms. When a route failure or
a transient network partition occurs in the network, ATCP expects the network
layer to detect these and inform the ATCP sender through an ICMP destination
unreachable (DUR) message. Upon reception of the DUR message, ATCP puts the
TCP sender into the persist state and enters into the DISCONN state. It remains in
the DISCONN state until it is connected and receives any data or duplicate ACKs.
On the occurrence of any of these events, ATCP changes to the NORMAL state.
The connected status of the path can be detected by the acknowledgments for the
periodic probe packets generated by the TCP sender. The receipt of an ICMP DUR message in the LOSS state or the CONGESTED state causes a transition to the DISCONN state. Whenever ATCP puts TCP into the persist state, it also sets the congestion window to one segment in order to make TCP probe for a new congestion window when a new route becomes available.
Table 9.1. Events and the corresponding actions taken by ATCP

Event                                             Action
Packet loss due to high BER                       Retransmits the lost packets without reducing the congestion window
Route recomputation delay                         Makes the TCP sender go to the persist state and stop transmission until a new route has been found
Transient partitions                              Makes the TCP sender go to the persist state and stop transmission until a new route has been found
Out-of-order packet delivery (multipath routing)  Keeps the TCP sender unaware of this and retransmits the packets from the TCP buffer
Change in route                                   Recomputes the congestion window
In summary, ATCP attempts to perform the actions listed in Table 9.1 in response to the corresponding events.
Split-TCP [13] provides a unique solution to this problem by splitting the transport layer objectives into congestion control and end-to-end reliability. Congestion is mostly a local phenomenon, caused by high contention and high traffic load in a local region, and in the ad hoc wireless network environment it therefore demands local solutions. Reliability, on the other hand, is an end-to-end requirement and needs end-to-end acknowledgments.
In addition to splitting the congestion control and reliability objectives, split-
TCP splits a long TCP connection into a set of short concatenated TCP connections
(called segments or zones) with a number of selected intermediate nodes (known as
proxy nodes) as terminating points of these short connections. Figure 9.9 illustrates
the operation of split-TCP where a three segment split-TCP connection exists be-
tween source node 1 and destination node 15. A proxy node receives TCP packets, reads their contents, stores them in its local buffer, and sends an acknowledgment to the source (or to the previous proxy). This acknowledgment, called a local acknowledgment (LACK), does not guarantee end-to-end delivery; the responsibility of further delivery of the packets is taken over by the proxy node. A proxy node clears a buffered
packet once it receives LACK from the immediate successor proxy node for that
packet. Split-TCP maintains the end-to-end acknowledgment mechanism intact, ir-
respective of the addition of zone-wise LACKs. The source node clears the buffered
packets only after receiving the end-to-end acknowledgment for those packets.
Figure 9.9. An illustration of Split-TCP: a TCP session between source node 1 and destination node 15, with intermediate nodes acting as proxy nodes that return zone-wise LACKs, while end-to-end TCP ACKs still travel from the destination to the source.
In Figure 9.9, node 1 initiates a TCP session to node 15. Node 4 and node 13
are chosen as proxy nodes. The number of proxy nodes in a TCP session is deter-
mined by the length of the path between source and destination nodes. Based on a
distributed algorithm, the intermediate nodes that receive TCP packets determine
whether to act as a proxy node or just as a simple forwarding node. The simplest such algorithm elects a node as a proxy if the packet has already traversed more than a predetermined number of hops from the last proxy node or from the sender of the TCP session. In Figure 9.9, the path between nodes 1 and 4 is the first zone (segment), the path between nodes 4 and 13 is the second zone (segment), and the last zone is between nodes 13 and 15.
The proxy node 4, upon receipt of each TCP packet from source node 1, ac-
knowledges it with a LACK packet, and buffers the received packets. This buffered
packet is forwarded to the next proxy node (in this case, node 13) at a transmission
rate proportional to the arrival of LACKs from the next proxy node or destination.
The transmission control window at the TCP sender is also split into two windows,
that is, the congestion window and the end-to-end window. The congestion window
changes according to the rate of arrival of LACKs from the next proxy node and
the end-to-end window is updated based on the arrival of end-to-end ACKs. Both
these windows are updated as per traditional TCP except that the congestion win-
dow should stay within the end-to-end window. In addition to these transmission
windows at the TCP sender, every proxy node maintains a congestion window that
governs the segment level transmission rate.
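A hedged sketch of this proxy behavior is given below in Python. The hop threshold, the packet fields, and the helper functions are assumptions made for illustration; Split-TCP itself only specifies the hop-count-based election rule and the LACK-driven buffering described above.

```python
HOP_THRESHOLD = 4  # assumed zone length; Split-TCP leaves this as a tunable parameter

def handle_tcp_packet(packet, my_address):
    """Per-hop decision at an intermediate node (illustrative, not the real protocol code)."""
    packet["hops_since_proxy"] += 1
    if packet["hops_since_proxy"] > HOP_THRESHOLD:
        # elect this node as a proxy: take custody of the packet,
        # acknowledge it locally, and restart the hop count
        buffer_packet(packet)
        send_lack(packet, to=packet["last_proxy"])
        packet["hops_since_proxy"] = 0
        packet["last_proxy"] = my_address
    forward(packet)

# placeholders standing in for the node's buffering and forwarding machinery
def buffer_packet(packet): pass
def send_lack(packet, to): pass
def forward(packet): pass
```

At the sender, the effective transmission limit would then be the smaller of the LACK-driven congestion window and the end-to-end window, mirroring the description above.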
[Figure: ACTP as a thin layer between the application layer and the network layer; the application invokes ACTP_SendTo(delay, message number, priority) to send a message and IsACKed(message number) to query its delivery status, while ACTP uses IP_Output() to hand packets to the network layer.]
The return value of the IsACKed(message number) function call can reflect (i) a successful delivery of the packet (ACK received), (ii) a possible loss of the packet (no ACK received and the deadline has expired), (iii) the remaining time for the packet (no ACK received but the deadline has not expired), or (iv) the absence of any state information at the ACTP layer regarding the message under consideration. A zero in the delay field refers to the
highest priority packet, which requires immediate transmission with minimum pos-
sible delay. Any other value in the delay field refers to the delay that the message
can experience. On getting the information about the delivery status, the applica-
tion layer can decide on retransmission of a packet with the same old priority or
with an updated priority. Once the packet's lifetime expires, ACTP clears the packet's state information and delivery status. The packet's lifetime is calculated as 4 × retransmission timeout (RTO) and is set when the packet is handed to
the network layer. A node estimates the RTO interval by using the round-trip time
between the transmission time of a message and the time of reception of the corre-
sponding ACK. Hence, the RTO value may not be available if there are no existing
reliable connections to a destination. A packet without any message number (i.e.,
no delivery status required) is handled exactly the same way as in UDP without
maintaining any state information.
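The snippet below sketches, in Python, how an application might drive this interface. The two calls, ACTP_SendTo and IsACKed, follow the names in the figure above; the return-value constants, the actp object, and the retransmission policy shown are assumptions made for illustration, not part of the ACTP specification.

```python
# Assumed symbolic return values of IsACKed(), mirroring cases (i)-(iv) above.
DELIVERED, LOST, PENDING, NO_STATE = range(4)

def send_message(actp, payload, msg_no, priority, delay_budget):
    # delay_budget == 0 asks for immediate transmission (highest priority);
    # any other value is the delay the message can tolerate.
    actp.ACTP_SendTo(payload, delay=delay_budget,
                     message_number=msg_no, priority=priority)

def check_and_maybe_resend(actp, payload, msg_no, priority):
    status = actp.IsACKed(msg_no)
    if status == LOST:
        # application-level retransmission, here with a raised priority
        actp.ACTP_SendTo(payload, delay=0,
                         message_number=msg_no, priority=priority + 1)
    return status
```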
The major aspects in which ATP differs from TCP are (i) coordination among multiple layers, (ii) rate-based transmissions, (iii) decoupling of congestion control and reliability, and (iv) assisted congestion control.
Similar to other TCP variants proposed for ad hoc wireless networks, ATP uses
services from network and MAC layers for improving its performance. ATP uses
information from lower layers for (i) estimation of the initial transmission rate, (ii)
detection, avoidance, and control of congestion, and (iii) detection of path breaks.
Unlike TCP, ATP utilizes a timer-based transmission, where the transmission
rate is decided by the granularity of the timer which is dependent on the congestion
in the network. The congestion control mechanism is decoupled from the reliability
and flow control mechanisms. The network congestion information is obtained from
the intermediate nodes, whereas the flow control and reliability information are
obtained from the ATP receiver. The intermediate nodes attach the congestion
information to every ATP packet and the ATP receiver collates it before including
it in the next ACK packet. The congestion information is expressed in terms of the
weighted average queuing delay (D_Q) and contention delay (D_C) experienced by
the packets at every intermediate node. The field in which this delay information
is included is referred to as the rate feedback field and the transmission rate is the
inverse of the delay information contained in the rate feedback field. Intermediate
nodes attach the current delay information to every ATP data packet if the already
existing value is smaller than the current delay. The ATP receiver collects this
delay information and the weighted average value is attached in the periodic ACK
(ATP uses SACK mechanism, hence ACK refers to SACK) packet sent back to the
ATP sender. During a connection startup process or when ATP recovers from a
path break, the transmission rate to be used is determined by a process called quick
start. During the quick start process, the ATP sender propagates a probe packet to
which the intermediate nodes attach the transmission rate (in the form of current
delay), which is received by the ATP receiver, and an ACK is sent back to the
ATP sender. The ATP sender starts using the newly obtained transmission rate by
setting the data transmission timers. During a connection startup, the connection
request and the ACK packets are used as probe packets in order to reduce control
overhead. When there is no traffic around an intermediate node, the transmission
delay is approximated as β × (D_Q + D_C), where β is the factor that considers the
induced traffic load. This is to consider the induced load (load on a particular link
due to potential contention introduced by the upstream and downstream nodes in
the path) when the actual transmission begins. A default value of 3 is used for β.
ATP uses SACK packets periodically to ensure the selective retransmission of lost
packets, which ensures the reliability of packet delivery. The SACK period is chosen
such that it is more than the round-trip time and can track the network dynamics.
The receiver performs a weighted average of the delay/transmission rate information
for every incoming packet to obtain the transmission rate for an ATP flow and this
value is included in the subsequent SACK packet it sends. In addition to the rate
feedback, the ATP receiver includes flow control information in the SACK packets.
(The delay average is called an "exponentially averaged" value in the original ATP proposal; weighted average is the more appropriate term and is used here.)
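The rate-feedback path described above can be sketched as follows. This is an illustrative Python fragment under stated assumptions: the smoothing weight ALPHA, the field names, and the function boundaries are not specified by ATP; only the β factor (default value 3), the max-based stamping by intermediate nodes, the weighted averaging at the receiver, and the rate-as-inverse-delay interpretation come from the description above.

```python
BETA = 3       # default factor accounting for induced traffic load (from the text)
ALPHA = 0.85   # assumed smoothing weight for the receiver's weighted average

def intermediate_node_stamp(packet, d_q, d_c):
    """A node overwrites the rate-feedback field only if its own delay is larger."""
    my_delay = BETA * (d_q + d_c)
    packet["rate_feedback"] = max(packet.get("rate_feedback", 0.0), my_delay)

def receiver_update(avg_delay, packet):
    """Weighted average of the per-packet delay feedback kept at the ATP receiver."""
    return ALPHA * avg_delay + (1 - ALPHA) * packet["rate_feedback"]

def rate_for_sack(avg_delay):
    """The transmission rate reported back in the periodic SACK is the inverse delay."""
    return 1.0 / avg_delay if avg_delay > 0 else float("inf")
```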
• Lack of association: Since these networks are dynamic in nature, a node can join or leave the network at any point in time. If no proper authentication mechanism is used for associating nodes with the network, an intruder can join the network quite easily and carry out attacks.
(A firewall is a program that works closely with a router program and filters all packets entering the network to determine whether or not to forward those packets toward their intended destinations. A firewall protects the resources of a private network from malicious intruders on foreign networks such as the Internet. In an ad hoc wireless network, the firewall software could be installed on each node of the network.)
Internal attacks, in contrast, originate from compromised nodes that are actually part of the network. Since the adversaries are already part of the network as authorized nodes, internal attacks are more severe and more difficult to detect than external attacks.
Figure 9.11 shows a classification of the different types of attacks possible in
ad hoc wireless networks. The following sections describe the various attacks listed
in the figure.
[Figure 9.11: Classification of security attacks in ad hoc wireless networks into MAC layer attacks, network layer attacks, transport layer attacks, application layer attacks, and other attacks; snooping is among the attacks shown.]
• Resource consumption attack: In this type of attack, an adversary tries to consume or waste the resources of other nodes in the network. The resources targeted are battery power, bandwidth, and computational power, which are available only in limited quantities in ad hoc wireless networks. The
attacks could be in the form of unnecessary requests for routes, very frequent
generation of beacon packets, or forwarding of stale packets to nodes. Using
up the battery power of another node by keeping that node always busy by
continuously pumping packets to that node is known as a sleep deprivation
attack.
• Routing attacks: There are several types of attacks that can be mounted on the routing protocol, aimed at disrupting the operation of the network. In what follows, the various attacks on the routing protocol are described briefly.
– Routing table overflow: In this type of attack, an adversary node
advertises routes to non-existent nodes, to the authorized nodes present
in the network. The main objective of such an attack is to cause an
overflow of the routing tables, which would in turn prevent the creation
of entries corresponding to new routes to authorized nodes. Proactive
routing protocols are more vulnerable to this attack compared to reactive
routing protocols.
– Routing table poisoning: Here, the compromised nodes in the net-
works send fictitious routing updates or modify genuine route update
packets sent to other uncompromised nodes. Routing table poisoning may result in sub-optimal routing, congestion in portions of the network, or even the inaccessibility of some parts of the network.
– Packet replication: In this attack, an adversary node replicates stale
packets. This consumes additional bandwidth and battery power re-
sources available to the nodes and also causes unnecessary confusion in
the routing process.
– Route cache poisoning: In the case of on-demand routing protocols
(such as the AODV protocol [18]), each node maintains a route cache
which holds information regarding routes that have become known to the
node in the recent past. Similar to routing table poisoning, an adversary
can also poison the route cache to achieve similar objectives.
– Rushing attack: On-demand routing protocols that use duplicate sup-
pression during the route discovery process are vulnerable to this attack
[19]. An adversary node which receives a RouteRequest packet from the
source node floods the packet quickly throughout the network before
other nodes which also receive the same RouteRequest packet can react.
Nodes that receive the legitimate RouteRequest packets assume those
packets to be duplicates of the packet already received through the ad-
versary node and hence discard those packets. Any route discovered by
the source node would contain the adversary node as one of the inter-
mediate nodes. Hence, the source node would not be able to find secure
routes, that is, routes that do not include the adversary node. It is
extremely difficult to detect such attacks in ad hoc wireless networks.
Multi-layer Attacks
Multi-layer attacks are those that could occur in any layer of the network protocol
stack. Denial of service and impersonation are some of the common multi-layer
attacks. This section discusses some of the multi-layer attacks in ad hoc wireless
networks.
• Denial of service (DoS): In this type of attack, an adversary attempts to prevent legitimate and authorized users from accessing the services offered by the network. An adversary could, for instance, exploit the routing protocol to disrupt the normal functioning of the network.
For example, an adversary node could participate in a session but simply drop
a certain number of packets, which may lead to degradation in the QoS being
offered by the network. On the higher layers, an adversary could bring down
critical services such as the key management service (key management will be
described in detail in the next section). Some of the DoS attacks are described
below.
• Man-in-the-middle attack: This is another type of impersonation attack. Here, the adversary reads, and possibly modifies, messages between two end nodes without letting either of them know
that they have been attacked. Suppose two nodes X and Y are communicat-
ing with each other; the adversary impersonates node Y with respect to node
X and impersonates node X with respect to node Y , exploiting the lack of
third-party authentication of the communication between nodes X and Y.
Device Tampering
Unlike nodes in a wired network, nodes in ad hoc wireless networks are usually
compact, soft, and hand-held in nature. They could get damaged or stolen easily.
Most cryptographic schemes are not provably secure. They rely upon the difficulty of solving certain mathematical problems, and the network would be open to attacks once the underlying mathematical problem is solved.
[Figure 9.12: (a) A substitution cipher in which each letter of the plaintext alphabet is replaced by the letter four positions ahead of it (A→E, B→F, ..., Z→D); (b) a transposition cipher in which the positions 1 2 3 4 5 of each block are permuted to 3 5 1 4 2.]
Figure 9.12 (a) shows an example of a substitution cipher, and Figure 9.12 (b) shows a transposition cipher. The block length used in the transposition cipher is five.
A stream cipher is, in effect, a block cipher of block length one. One of the
simplest stream ciphers is the Vernam cipher, which uses a key of the same length
as the plaintext for encryption. For example, if the plaintext is the binary string
10010100, and the key is 01011001, then the encrypted string is given by the XOR
of the plaintext and key, to be 11001101. The plaintext is again recovered by XOR-
ing the ciphertext with the same key. If the key is randomly chosen, transported
securely to the receiver, and used for only one communication, this forms the one-
time pad which has proven to be the most secure of all cryptographic systems. The
only bottleneck here is to be able to securely send the key to the receiver.
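The Vernam encryption and decryption in the example above amount to a single XOR in each direction, as the short Python snippet below verifies.

```python
# Vernam cipher example from the text: ciphertext = plaintext XOR key,
# and XORing the ciphertext with the same key recovers the plaintext.
plaintext = 0b10010100
key       = 0b01011001

ciphertext = plaintext ^ key
recovered  = ciphertext ^ key

print(f"ciphertext: {ciphertext:08b}")   # 11001101
print(f"recovered:  {recovered:08b}")    # 10010100
assert recovered == plaintext
```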
Key Predistribution
Key predistribution, as the name suggests, involves distributing keys to all inter-
ested parties before the start of communication. This method involves much less
communication and computation, but all participants must be known a priori, dur-
ing the initial configuration. Once deployed, there is no mechanism to include
new members in the group or to change the key. As an improvement over the basic
predistribution scheme, sub-groups may be formed within the group, and some com-
munication can be restricted to a subgroup. However, the formation of sub-groups
is also an a priori decision with no flexibility during the operation.
Key Transport
In key transport systems, one of the communicating entities generates keys and
transports them to the other members. The simplest scheme assumes that a shared
key already exists among the participating members. This prior shared key is used to encrypt a new key, and the encrypted key is transmitted to all corresponding nodes. Only those nodes which have the prior shared key can decrypt it. This is called the
key encrypting key (KEK) method. However, the existence of a prior key cannot
always be assumed. If the public key infrastructure (PKI) is present, the key can be
encrypted with each participant’s public key and transported to it. This assumes
the existence of a trusted third party (TTP), which may not be available in ad hoc wireless networks.
An interesting method for key transport without prior shared keys is Shamir's three-pass protocol [22]. The scheme is based on a special type of encryption called commutative encryption, which is reversible and for which the order in which two encryptions are applied does not matter, that is, f(g(x)) = g(f(x)) for the encryption functions f and g. Consider two nodes X and Y which wish to communicate. Node X selects a key K which it wants to use in its communication with node Y. It then generates a random key k_x, encrypts K with f under k_x, and sends the result to node Y. Node Y encrypts this with its own random key k_y using g, and sends it back to node X. Node X now removes its own encryption by applying the inverse function f^{-1} with key k_x, and sends the result to node Y. Finally, node Y decrypts the message using k_y and g^{-1} to obtain the key K. The message exchanges of the protocol are illustrated in Figure 9.13.
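The toy walk-through below uses XOR as the commutative operation, mirroring the setup of Problem 14 at the end of this chapter. XOR is used purely to make the message flow concrete; it is not a secure choice, since an eavesdropper who captures all three messages can XOR them together to recover K.

```python
# Shamir's three-pass protocol with XOR standing in for commutative encryption.
K  = 0b11001001   # key node X wants to share with node Y
kx = 0b10010101   # node X's random key
ky = 0b00101011   # node Y's random key

m1 = K ^ kx          # pass 1: X -> Y
m2 = m1 ^ ky         # pass 2: Y -> X (now "encrypted" under both keys)
m3 = m2 ^ kx         # pass 3: X removes its key and sends the result to Y
recovered = m3 ^ ky  # Y removes its own key and obtains K

assert recovered == K
print(f"K recovered by Y: {recovered:08b}")
```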
Key Arbitration
Key arbitration schemes use a central arbitrator to create and distribute keys among
all participants. Hence, they are a class of key transport schemes. Networks which
have a fixed infrastructure use the AP as an arbitrator, since it does not have strin-
gent power or computation constraints. In ad hoc wireless networks, the problem
with implementation of arbitrated protocols is that the arbitrator has to be powered
on at all times to be accessible to all nodes. This leads to a power drain on that
particular node. An alternative would be to make the keying service distributed,
but simple replication of the arbitration at different nodes would be expensive for
resource-constrained devices and would offer many points of vulnerability to at-
tacks. If any one of the replicated arbitrators is attacked, the security of the whole
system breaks down.
Figure 9.13. Message exchanges in Shamir's three-pass protocol: node X generates K and k_x and sends f(k_x, K) to node Y; node Y generates k_y and returns g(k_y, f(k_x, K)); node X removes its encryption and sends g(k_y, K); node Y finally decrypts with g^{-1} to obtain K.
Key Agreement
Most key agreement schemes are based on asymmetric key algorithms. They are
used when two or more people want to agree upon a secret key, which will then be
used for further communication. Key agreement protocols are used to establish a
secure context over which a session can be run, starting with many parties who wish
to communicate and an insecure channel. In group key agreement schemes, each
participant contributes a part to the secret key. These need the least amount of
preconfiguration, but such schemes have high computational complexity. The most
popular key agreement schemes use the Diffie-Hellman exchange [21], an asymmetric
key algorithm based on discrete logarithms.
Consider, for example, a scenario in which the people present in a room, each carrying a mobile device, want to start a secure session. Here, the parties involved in the
session are to be identified based on their location, that is, all devices in the room
can be part of the session. Hence, relative location is used as the criterion for access
control. If a TTP which knows the location of the participants exists, then it can
implement location-based access control. Alternatively, a prior shared secret can be obtained through a physically more secure medium such as a wired network: a device can obtain the secret by first plugging into the wired network, before switching to the wireless mode.
A password-based system has been explored where, in the simplest case, a long
string is given as the password for users for one session. However, human be-
ings tend to favor natural language phrases as passwords, over randomly generated
strings. Such passwords, if used as keys directly during a session, are very weak
and open to attack because of high redundancy, and the possibility of reuse over
different sessions. Hence, protocols have been proposed to derive a strong key (not
vulnerable to attacks) from the weak passwords given by the participants. This
password-based system could be two-party, with a separate exchange between any
two participants, or it could be for the whole group, with a leader being elected to
preside over the session. Leader election is a special case of establishing an order
among all participants. The protocol used is as follows. Each participant generates
a random number, and sends it to all others. When every node has received the
random number of every other node, a common predecided function is applied on
all the numbers to calculate a reference value. The nodes are ordered based on the
difference between their random number and the reference value.
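A minimal Python sketch of this ordering step is shown below. The choice of the arithmetic mean as the common predecided function is an assumption made for illustration; any function agreed upon in advance by the participants would do.

```python
import random

def order_participants(node_ids, rng=random.Random(42)):
    """Order nodes by the distance of their random draw from a common reference value."""
    draws = {node: rng.random() for node in node_ids}      # each node's broadcast number
    reference = sum(draws.values()) / len(draws)           # common predecided function
    # the node whose number is closest to the reference comes first (e.g., as the leader)
    return sorted(node_ids, key=lambda n: abs(draws[n] - reference))

print(order_participants(["A", "B", "C", "D"]))
```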
Threshold Cryptography
Public key infrastructure (PKI) enables the easy distribution of keys and is a scal-
able method. Each node has a public/private key pair, and a certifying authority
(CA) can bind the keys to the particular node. But the CA has to be present at
all times, which may not be feasible in ad hoc wireless networks. It is also not
advisable to simply replicate the CA at different nodes. In [20], a scheme based on
threshold cryptography has been proposed by which n servers exist in the ad hoc
wireless network, out of which any (t+1) servers can jointly perform any arbitration
or authorization successfully, but t servers cannot perform the same. Hence, up to
t compromised servers can be tolerated. This is called an (n, t + 1) configuration,
where n ≥ 3t + 1.
To sign a certificate, each server generates a partial signature using its private
key and submits it to a combiner. The combiner can be any one of the servers. In
order to ensure that the key is combined correctly, t + 1 combiners can be used to
account for at most t malicious servers. Using t + 1 partial signatures (obtained
from itself and t other servers), the combiner computes a signature and verifies its
validity using a public key. If the verification fails, it means that at least one of the
t + 1 keys is not valid, so another subset of t + 1 partial signatures is tried. If the
combiner itself is malicious, it cannot get a valid key, because its own partial signature is always invalid.
Having seen the various key management techniques employed in ad hoc wireless
networks, we now move on to discuss some of the security-aware routing schemes
for ad hoc wireless networks.
• The routing protocol should ensure that such attacks do not permanently disrupt the routing process. The protocol must also ensure Byzantine robustness, that is, it should work properly even if some of the nodes which were earlier participating in the routing process turn malicious at a later point in time or are intentionally damaged.
[Figure: Example network topology with a chosen route passing through nodes S1, S2, and S3; other nodes shown include O1, O2, P1, and P3.]
The SAR protocol can be explained using any one of the traditional routing pro-
tocols. This section explains SAR using the AODV protocol [18] discussed in detail
in Chapter 7. In the AODV protocol, the source node broadcasts a RouteRequest
packet to its neighbors. An intermediate node, on receiving a RouteRequest packet,
forwards it further if it does not have a route to the destination. Otherwise, it ini-
tiates a RouteReply packet back to the source node using the reverse path traversed
by the RouteRequest packet. In SAR, a certain level of security is incorporated into
the packet-forwarding mechanism. Here, each packet is associated with a security
level which is determined by a number calculation method (explained later in this
section). Each intermediate node is also associated with a certain level of security.
On receiving a packet, the intermediate node compares its level of security with that
defined for the packet. If the node’s security level is less than that of the packet,
the RouteRequest is simply discarded. If it is greater, the node is considered to be
a secure node and is permitted to forward the packet in addition to being able to
view the packet. If the security levels of the intermediate node and the received
packet are found to be equal, then the intermediate node will not be able to view
the packet (which can be ensured using a proper authentication mechanism); it just
forwards the packet further.
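The forwarding rule just described can be captured in a few lines. The sketch below is illustrative Python; the numeric security levels and the returned action names are assumptions, since SAR only requires that the trust level of a node be comparable with the trust level carried by a packet.

```python
def handle_packet(node_security_level, packet_security_level):
    """SAR-style forwarding decision at an intermediate node (illustrative sketch)."""
    if node_security_level < packet_security_level:
        return "discard"             # node is not trusted enough to handle the packet
    if node_security_level > packet_security_level:
        return "view_and_forward"    # node may view the packet and forward it
    return "forward_without_view"    # equal levels: forward, but cannot view the contents

print(handle_packet(node_security_level=2, packet_security_level=3))  # discard
```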
Nodes of equal levels of trust distribute a common key among themselves and
with those nodes having higher levels of trust. Hence, a hierarchical level of security
could be maintained. This ensures that an encrypted packet can be decrypted (using
the common key) only by nodes of the same or higher levels of security compared
to the level of security of the packet. Different levels of trust can be defined using
a number calculated based on the level of security required. It can be calculated
using many methods. Since timeliness, in-order delivery of packets, authenticity,
authorization, integrity, confidentiality, and non-repudiation are some of the desired
characteristics of a routing protocol, a suitable number can be defined for the trust
level for nodes and packets based on the number of such characteristics taken into
account.
The SAR mechanism can be easily incorporated into the traditional routing pro-
tocols for ad hoc wireless networks. It could be incorporated into both on-demand
and table-driven routing protocols. The SAR protocol allows the application to
choose the level of security it requires. But the protocol requires different keys for
different levels of security. This tends to increase the number of keys required when
the number of security levels used increases.
In SEAD, every routing update carries an element of the sender's one-way hash chain along with it. If the authentication value used is h_{km+j}, then an attacker who tries to modify this value can do so only if he/she knows h_{km+j-1}. Since it is a one-way hash chain, calculating h_{km+j-1} from h_{km+j} is computationally infeasible. An intermediate node, on receiving this authenticated update, calculates the new hash value based on the earlier update (h_{km+j-1}), the value of the metric, and the sequence number. If the calculated value matches the one present in the route update message, then the update is effected; otherwise, the received update is discarded.
SEAD avoids routing loops unless the loop contains more than one attacker.
This protocol could be implemented easily with slight modifications to the existing
distance vector routing protocols. The protocol is robust against multiple unco-
ordinated attacks. The SEAD protocol, however, would not be able to overcome
attacks where the attacker uses the same metric and sequence number which were
used by the recent update message, and sends a new routing update.
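The one-way property that SEAD relies on is easy to demonstrate with a standard hash function. The Python sketch below builds a hash chain and verifies a newly disclosed element against a previously authenticated one; the chain length and the use of SHA-256 are assumptions for illustration, and the mapping of chain elements to sequence numbers and metrics used by SEAD is not reproduced here.

```python
import hashlib
import os

def build_chain(length):
    """h[0] is a random seed; h[i+1] = H(h[i]). Elements are disclosed in reverse order."""
    chain = [os.urandom(32)]
    for _ in range(length):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain

def verify(disclosed, previously_authenticated):
    """Accept a disclosed element only if hashing it yields the last trusted element."""
    return hashlib.sha256(disclosed).digest() == previously_authenticated

chain = build_chain(10)
anchor = chain[10]                    # distributed authentically beforehand
assert verify(chain[9], anchor)       # the next disclosed element verifies against the anchor
assert not verify(os.urandom(32), anchor)  # a forged value fails (with overwhelming probability)
```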
Issue of Certificates
This section discusses the certification process in which the certificates are issued
to the nodes in the ad hoc wireless network. There exists an authenticated trusted
server whose public key is known to all legal nodes in the network. The ARAN
protocol assumes that keys are generated a priori by the server and distributed to
all nodes in the network. The protocol does not specify any specific key distribution
algorithm. On joining the network, each node receives a certificate from the trusted
server. The certificate received by a node A from the trusted server T looks like
the following:

cert_A = [IP_A, K_{A+}, t, e] K_{T-}

Here, IP_A, K_{A+}, t, e, and K_{T-} represent the IP address of node A, the public key of node A, the time of creation of the certificate, the time of expiry of the certificate, and the private key of the server, respectively; the certificate is signed with the server's private key.
Whenever the source sends a route discovery message, it increments the value of
nonce. Nonce is a counter used in conjunction with the time-stamp in order to
make nonce recycling easier. When a node receives a route discovery packet (RDP) from the
source with a higher value of the source’s nonce than that in the previously received
RDP packets from the same source node, it makes a record of the neighbor from
which it received the packet, encrypts the packet further with its own certificate,
and broadcasts it further. When the RDP reaches the destination, the destination node D unicasts a signed route reply (REP) packet to node X, the neighbor of node D that had originally forwarded the RDP packet to node D. The REP packet follows the same procedure
on the reverse path as that followed by the route discovery packet. An error message
is generated if the time-stamp or nonce does not match the requirements or if the certificate verification fails. The error message looks similar to the other packets except that
the packet identifier is replaced by the ERR message.
Table 9.3 shows a comparison between the AODV, DSR, and ARAN protocols
with respect to their security-related features. ARAN remains robust in the presence
of attacks such as unauthorized participation, spoofed route signaling, fabricated
routing messages, alteration of routing messages, securing shortest paths, and replay
attacks.
Table 9.3. Comparison of vulnerabilities of ARAN with DSR and AODV protocols

Attack                                    AODV                          DSR             ARAN
Modifications required during             Sequence numbers and          Source routes   None
remote redirection                        hop counts
Tunneling during remote redirection       Yes                           Yes             Yes
Spoofing                                  Yes                           Yes             No
Cache poisoning                           No                            Yes             No
In the AODV protocol, the route discovery process is initiated by sending RouteRequest packets only when data packets arrive at a node for transmission. A malicious intermediate node could
advertise that it has the shortest path to the destination, thereby redirecting all
the packets through itself. This is known as a blackhole attack, as explained in
Section 9.10.1. The blackhole attack is illustrated in Figure 9.15. Let node M be
the malicious node that enters the network. It advertises that it has the shortest
path to the destination node D when it receives the RouteRequest packet sent by
node S. The attacker may not be able to succeed if node A, which also receives
the RouteRequest packet from node S, replies earlier than node M . But a major
advantage for the malicious node is that it does not have to search its routing table
for a route to the destination.

Figure 9.15. An illustration of the blackhole attack: the malicious node M replies to the RouteRequest from source node S, diverting the traffic meant for destination node D onto the path through itself (PATH 2) instead of the legitimate path through node A (PATH 1).

Also, the RouteReply packets originate directly from
the malicious node and not from the destination node. Hence, the malicious node
would be able to reply faster than node A, which would have to search its routing
table for a route to the destination node. Thus, node S may tend to establish a
route to destination D through the malicious node M , allowing node M to listen
to all packets meant for the destination node.
[Figure: A defense against the blackhole attack, in which the source node S sends a FurtherRouteRequest to the next-hop node E reported by the suspect node M, and node E responds with a FurtherRouteReply.]
Since node M is a malicious node that is not present in the routing list of node E, the
FurtherRouteReply packet sent by node E will not contain a route to the malicious
node M . But if it contains a route to the destination node D, then the new route to
the destination through node E is selected, and the earlier selected route through
node M is rejected. This protocol completely eliminates the blackhole attack caused
by a single attacker. The major disadvantage of this scheme is that the control
overhead of the routing protocol increases considerably. Also, if the malicious nodes
work in a group, this protocol fails miserably.
9.13 SUMMARY
This chapter discussed the major challenges that a transport layer protocol faces in
ad hoc wireless networks. The major design goals of a transport layer protocol were
listed and a classification of existing transport layer solutions was provided. TCP is
the most widely used transport layer protocol and is considered to be the backbone
of today’s Internet. It provides end-to-end, reliable, byte-streamed, in-order deliv-
ery of packets to nodes. Since TCP was designed to handle problems present in
traditional wired networks, many of the issues that are present in dynamic topology
networks such as ad hoc wireless networks are not addressed. This causes reduction
of throughput when TCP is used in ad hoc wireless networks. It is nevertheless important to employ TCP in ad hoc wireless networks, since seamless communication with the Internet, whenever and wherever it is available, is highly desirable. This chapter provided
a discussion on the major reasons for the degraded performance of traditional TCP
in ad hoc wireless networks and explained a number of recently proposed solutions
to improve TCP’s performance. Other non-TCP solutions were also discussed in
detail.
The second half of this chapter dealt with the security aspect of communication
in ad hoc wireless networks. The issues and challenges involved in provisioning se-
curity in ad hoc wireless networks were identified. This was followed by a layer-wise
classification of the various types of attacks. Detailed discussions on key manage-
ment techniques and secure routing techniques for ad hoc wireless networks were
provided. Table 9.4 lists out the various attacks possible in ad hoc wireless networks
along with the solutions proposed for countering those attacks.
9.14 PROBLEMS
1. Assume that when the current size of the congestion window is 48 KB, the
TCP sender experiences a timeout. What will be the congestion window size
if the next three transmission bursts are successful? Assume that MSS is 1
KB. Consider (a) TCP Tahoe and (b) TCP Reno.
2. Find out the probability of a path break for an eight-hop path, given that the
probability of a link break is 0.2.
3. Discuss the effects of multiple breaks on a single path at the TCP-F sender.
5. Mention one advantage and one disadvantage of using probe packets for de-
tection of a new path.
6. Mention one advantage and one disadvantage of using LQ and REPLY for
finding partial paths in TCP-BuS.
9. What are the pros and cons of assigning the responsibility of end-to-end reli-
ability to the application layer?
10. What is the default value of β used for handling induced traffic in ATP and
why is such a value chosen?
11. Explain how network security requirements vary in the following application
scenarios of ad hoc wireless networks:
(a) Home networks
(b) Classroom networks
(c) Emergency search-and-rescue networks
(d) Military networks
12. Explain how security provisioning in ad hoc wireless networks differs from that in infrastructure-based networks.
13. Explain the key encrypting key (KEK) method.
14. Nodes A and B want to establish a secure communication, and node A gener-
ates a random key 11001001. Suppose the function used by both nodes A and
B for encryption is XOR, and let node A generate a random transport key
10010101, and let node B generate 00101011. Explain the three-pass Shamir
protocol exchanges.
15. Why is it not advisable to use natural-language passwords directly for cryp-
tographic algorithms?
16. Consider the certificate graph shown in Figure 9.17, with the local certificate
repositories of nodes A and B as indicated. Find the possible paths of trust
from node A to node B which can be obtained using a chain of keys.
17. List a few inherent security flaws present in the following types of routing
protocols: (a) table-driven and (b) on-demand routing.
18. List and explain how some of the inherent properties of the wireless ad hoc
networks introduce difficulties while implementing security in routing proto-
cols.
19. Mark the paths chosen by the following secure-routing protocols for the net-
work topology shown in Figure 9.18: (a) Shortest path routing and (b) SAR
protocol. Assume that node 2 is a secure node. (c) If node 2 (which lies in the
path chosen by SAR protocol) is suddenly attacked and becomes a malicious
node, then mark an alternative path chosen by SAODV protocol.
[Figure 9.18: Network topology for Problem 19, showing the source node, the destination node, intermediate nodes numbered 1 through 13, and a legend distinguishing private, malicious, and secure nodes.]
[1] J. Postel, “Transmission Control Protocol,” IETF RFC 793, September 1981.
[6] S. Floyd, J. Mahdavi, M. Mathis, and M. Podolsky, “An Extension to the Selec-
tive Acknowledgment (SACK) Option for TCP,” IETF RFC 2883, July 2000.
[8] G. Holland and N. Vaidya, “Analysis of TCP Performance over Mobile Ad Hoc
Networks,” Proceedings of ACM MOBICOM 1999, pp. 219-230, August 1999.
[12] J. Liu and S. Singh, “ATCP: TCP for Mobile Ad Hoc Networks,” IEEE Journal
on Selected Areas in Communications, vol. 19, no. 7, pp. 1300-1315, July 2001.