Chapter 3: The Transport Layer: Internet Protocols
Internet Protocols
Internet transport services:
reliable, in-order unicast delivery (TCP)
congestion control
flow control
connection setup
unreliable (best-effort), unordered unicast or multicast delivery: UDP
services not available:
real-time
bandwidth guarantees
reliable multicast
UDP
no frills, bare bones Internet transport protocol
best effort service, UDP segments may be:
lost
delivered out of order to applications
connectionless:
no handshaking between UDP sender, receiver
each UDP segment handled independently of others
Why is there a UDP?
no connection establishment (which can add delay)
simple: no connection state at sender, receiver
small segment header
no congestion control: UDP can blast away as fast as desired
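As a concrete illustration, here is a minimal sketch of a UDP exchange in Python (the port number 9999 and the message text are arbitrary choices for this example): there is no handshake, and each datagram is sent and handled independently.

import socket

def receiver(port=9999):
    # Receiver: bind a local port and read one datagram; no accept() needed.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # SOCK_DGRAM = UDP
    sock.bind(("", port))
    data, addr = sock.recvfrom(2048)       # one segment, handled independently
    print("received", data, "from", addr)
    sock.close()

def sender(port=9999):
    # Sender: no connection setup; just hand a datagram to the network.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(b"hello via UDP", ("127.0.0.1", port))   # may be lost or reordered
    sock.close()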
UDP header
The UDP header is 8 octets, made up of four 16-bit fields (occupying bits 0-15 and 16-31 of each 32-bit word): source port, destination port, length, and checksum.
Header details
Source and destination port numbers
Identify the source and destination processes
Length = length of header + data
Checksum covers header and data
Optional in UDP but mandatory in TCP
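A small sketch of building the 8-octet header with Python's struct module; the port numbers and payload below are made-up values, and the checksum is left at 0 (which in IPv4 UDP means "no checksum computed").

import struct

def build_udp_header(src_port, dst_port, payload, checksum=0):
    # Four 16-bit fields in network (big-endian) byte order.
    length = 8 + len(payload)          # length = header (8 octets) + data
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

header = build_udp_header(5000, 53, b"example payload")   # hypothetical ports/payload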
UDP Checksum
Sender:
treat segment contents as sequence of 16-bit integers
checksum: addition (1s complement sum) of segment contents
sender puts checksum value into UDP checksum field
Receiver:
compute checksum of received segment
check if computed checksum equals checksum field value:
NO - error detected
YES - no error detected
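A sketch of the sender-side 1s complement sum over 16-bit words, with the matching receiver check; the IP pseudo-header that real UDP also covers is omitted here to keep the example short.

def ones_complement_checksum(data: bytes) -> int:
    if len(data) % 2:                     # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]        # next 16-bit integer
        total = (total & 0xFFFF) + (total >> 16)     # wrap the carry (1s complement add)
    return (~total) & 0xFFFF                         # sender stores the complement

def checksum_ok(data: bytes, received_checksum: int) -> bool:
    # Receiver: recompute over the received segment and compare with the field value.
    return ones_complement_checksum(data) == received_checksum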
Uses of UDP
Both TCP and UDP use port (or socket) numbers to pass information to the upper
layers.
Port numbers are used to keep track of different conversations that cross the
network at the same time.
Application software developers have agreed to use the well-known port numbers
that are defined in RFC1700.
The range of these numbers is below 255 for TCP and UDP applications.
Applications of UDP
Remote Procedure Call
Mechanisms
Client process calls the client stub
Marshalling-packing the parameters
Kernel receives from client stub and sends to server machine
Kernel on server OS passes the message to server stub
The server stub processes it and the reply follows the same path in the other
direction
Problems that may occur in RPC
Passing pointer parameters from client space to server space
Weakly typed languages (e.g. C) may not be suitable, since parameter sizes are not always known
Type conversion
Use of global variables, since two different address spaces are involved
Still, UDP is commonly used for RPC, as the sketch below illustrates
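As a rough illustration of why UDP suits RPC, here is a minimal request/reply sketch in Python; the add procedure, the JSON marshalling, and port 7000 are assumptions made for this example (real RPC systems use generated stubs and a binary wire format).

import json, socket

def rpc_server(port=7000):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    msg, addr = sock.recvfrom(2048)
    call = json.loads(msg)                              # unmarshal the request
    result = call["a"] + call["b"]                      # the remote procedure: add(a, b)
    sock.sendto(json.dumps({"result": result}).encode(), addr)

def rpc_client(a, b, port=7000):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    request = json.dumps({"proc": "add", "a": a, "b": b}).encode()   # marshalling
    sock.sendto(request, ("127.0.0.1", port))           # single request datagram
    sock.settimeout(1.0)                                # UDP gives no reliability: time out, retry if needed
    reply, _ = sock.recvfrom(2048)
    return json.loads(reply)["result"]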
Figure (a): the position of RTP in the protocol stack.
RTP Header
Version field
P: padding bit
X: extension header present or not
CC: count of contributing sources
M: marker bit
Payload type
Sequence number
Timestamp
Synchronization source and contributing source identifiers
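A sketch of extracting these fields from the first 12 bytes of an RTP packet with bit masking; the layout follows the standard fixed header, and any sample packet bytes fed to it would be invented.

import struct

def parse_rtp_header(packet: bytes):
    b0, b1, seq, timestamp, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,            # Version field (2 bits)
        "P": (b0 >> 5) & 0x1,          # padding bit
        "X": (b0 >> 4) & 0x1,          # extension header present or not
        "CC": b0 & 0x0F,               # count of contributing sources
        "M": b1 >> 7,                  # marker bit
        "payload_type": b1 & 0x7F,     # payload type
        "seq": seq,                    # sequence number
        "timestamp": timestamp,        # time stamp
        "ssrc": ssrc,                  # synchronization source identifier
    }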
----------------------------------------------------------------------------------------------------
Figure: TCP send and receive buffers - the sending application writes data through its socket door into TCP's send buffer, TCP forms segments, and the receiving application reads data from TCP's receive buffer.
TCP is specially designed to provide a reliable end-to-end byte stream over an unreliable network. An internetwork differs from a single network in topology, bandwidth, delay, and packet size; TCP adapts to the properties of such a network. Each machine supporting TCP has a TCP entity. The IP layer provides no guarantee that datagrams will be delivered, so TCP has to provide the reliability.
TCP
point-to-point:
one sender, one receiver
reliable, in-order byte stream:
no message boundaries
pipelined:
TCP congestion and flow control set the window size
send & receive buffers; buffer sizes are negotiated at connection setup
full duplex data:
bi-directional data flow in same connection
MSS: maximum segment size
connection-oriented:
handshaking (exchange of control msgs) inits sender, receiver state before
data exchange
flow controlled:
sender will not overwhelm receiver
TCP Header
Every TCP segment has a sequence number, which makes it easy to reassemble data in order, detect the loss of a packet, and perform retransmission
The SYN bit is used for connection setup and the FIN bit for connection release
Urgent data must be delivered quickly; its location is indicated by the urgent pointer
TCP segment format (32 bits wide):
source port, dest port
sequence number (counting by bytes of data, not segments!)
acknowledgement number
head len, unused bits, flag bits U A P R S F, rcvr window size (# bytes rcvr willing to accept)
Internet checksum (as in UDP), ptr to urgent data
Options (variable length)
application data (variable length)

Flag bits:
URG: urgent data (generally not used)
ACK: ACK # valid
PSH: push data now (generally not used)
RST, SYN, FIN: connection estab (setup, teardown commands)
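A sketch of unpacking the fixed 20-byte TCP header and its flag bits in Python; the offsets follow the standard segment format described above.

import struct

def parse_tcp_header(segment: bytes):
    (src_port, dst_port, seq, ack, offs_flags,
     window, checksum, urgent_ptr) = struct.unpack("!HHIIHHHH", segment[:20])
    return {
        "src_port": src_port, "dst_port": dst_port,
        "seq": seq, "ack": ack,                    # counted in bytes, not segments
        "header_len": (offs_flags >> 12) * 4,      # head len, in bytes
        "URG": bool(offs_flags & 0x20), "ACK": bool(offs_flags & 0x10),
        "PSH": bool(offs_flags & 0x08), "RST": bool(offs_flags & 0x04),
        "SYN": bool(offs_flags & 0x02), "FIN": bool(offs_flags & 0x01),
        "window": window,                          # bytes the receiver is willing to accept
        "checksum": checksum, "urgent_ptr": urgent_ptr,
    }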
Step 1: client end system sends TCP SYN control segment to server
specifies initial seq number
Step 2: server end system receives SYN, replies with SYNACK control segment
ACKs received SYN
allocates buffers
specifies the server's initial seq. number
Step 3: client replies with an ACK of the server's seq number and may send its request data
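In practice the handshake is carried out by the kernel: in this Python sketch the client's connect() sends the SYN and waits for the SYNACK, and the server's accept() completes the exchange (port 8080 is an arbitrary choice).

import socket

def server(port=8080):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # SOCK_STREAM = TCP
    srv.bind(("", port))
    srv.listen(1)                       # ready to receive SYNs
    conn, addr = srv.accept()           # handshake done: SYN in, SYNACK out, ACK in
    request = conn.recv(1024)           # the client's first data
    conn.close()

def client(port=8080):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect(("127.0.0.1", port))   # kernel sends SYN with the initial seq number
    sock.sendall(b"request")            # step 3: request follows the final ACK
    sock.close()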
Connection Release
Connection management
The states used in the TCP connection management finite state machine.
TCP connection management finite state machine.
The heavy solid line is the normal path for a client.
The heavy dashed line is the normal path for a server.
The light lines are unusual events.
Each transition is labeled by the event causing it and the action resulting from it,
separated by a slash.
---------------------------------------------------------------------------------------------------------
TCP connection management
Silly window syndrome
At the receiver side, even if only one byte of buffer space becomes available, it is advertised; the sender sends one byte, the buffer is full again, the sender waits and probes for the window size, and so the cycle repeats as a loop. To avoid this, the receiver is forced to wait until a good amount of buffer space is available before advertising a window, which breaks the loop.
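A minimal sketch of that receiver-side rule; the policy of waiting for at least one maximum segment or half the buffer is an assumption for illustration, not a quote from the notes.

def advertised_window(free_space: int, buffer_size: int, mss: int) -> int:
    # Avoid silly window syndrome: advertise 0 until a good amount of buffer
    # space is free, instead of advertising every freed byte immediately.
    if free_space < min(mss, buffer_size // 2):
        return 0                 # keep the sender waiting
    return free_space            # now a full segment is worth sending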
Congestion:
informally: too many sources sending too much data too fast for network to
handle
different from flow control!
manifestations:
lost packets (buffer overflow at routers)
long delays (queueing in router buffers)
one router, finite buffers
sender retransmission of lost packet
Another cost of congestion:
when a packet is dropped, any upstream transmission capacity used for that packet was wasted!
TCP Congestion control
How TCP prevents congestion
when connection established, window size chosen
The receiver specifies a window according to its buffer size
Still congestion occurs
The two problems are Network Capacity and Receiver Capacity
Solution?
Solution
Sender maintains two windows: the window the receiver has granted
and the Congestion Window
At connection establishment, the congestion window is set to the size of the maximum segment in use on the connection
Each acknowledged burst doubles the congestion window
The congestion window grows exponentially
This is called the Slow Start algorithm
Another Solution?
Slowstart algorithm
initialize: Congwin = 1
for (each segment ACKed)
    Congwin++
until (loss event OR CongWin > threshold)

Figure: Host A sends one segment in the first RTT, two segments in the next, then four segments, as ACKs from Host B arrive over time.
Solution
Uses threshold
initially set to some value, in addition to the receiver window and the congestion window
On a timeout, the threshold is set to half of the current congestion window
Congestion window is set to one max segment
Slow start is used to find what the network can handle
Exponential growth stops when the threshold is hit
From that point the congestion window grows linearly
Congestion avoidance
/* slowstart is over */
/* Congwin > threshold */
Until (loss event) {
    every w segments ACKed:
        Congwin++
}
threshold = Congwin/2
Congwin = 1
perform slowstart
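A small simulation of the two phases above, with the window measured in segments; the starting threshold of 16 segments and the timing of the loss are assumed values for the trace.

def next_congwin(congwin, threshold, loss):
    # One RTT step of TCP congestion control.
    if loss:                          # timeout: threshold = Congwin/2, restart slow start
        return 1, max(congwin // 2, 1)
    if congwin < threshold:           # slow start: double every RTT
        return congwin * 2, threshold
    return congwin + 1, threshold     # congestion avoidance: grow linearly

congwin, threshold = 1, 16
trace = []
for rtt in range(12):
    loss = (rtt == 8)                 # pretend a timeout happens on the ninth RTT
    congwin, threshold = next_congwin(congwin, threshold, loss)
    trace.append(congwin)             # e.g. 2, 4, 8, 16, 17, 18, ... then back to 1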
Example
Segment size=1K
Congwin=64KB
on a timeout, threshold = 32KB (half of the 64KB congestion window)
Congwin = 1KB
the congestion window grows exponentially until it hits the threshold and then linearly
TCP RTT
Jacobson algorithm
Another smoothed value, D (the deviation), tracks the difference between the expected and observed RTT, |RTT - M|:
D = αD + (1 - α)|RTT - M|
Timeout interval = RTT + 4 × D
The problem of ambiguous RTT samples on retransmission is answered by Karn's algorithm:
RTT is not updated for retransmitted segments, and the timeout is doubled on each failure until the segment gets through the first time
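A sketch of these two rules together, using a smoothing factor α = 7/8; the exact constant is an assumption here, not something fixed by the notes.

ALPHA = 7 / 8     # weight given to the old estimate (a common choice, assumed)

def update_rtt(rtt, dev, measured, was_retransmitted):
    # Jacobson's smoothed RTT and deviation; Karn's rule skips retransmitted samples.
    if was_retransmitted:
        return rtt, dev, rtt + 4 * dev           # leave the estimates untouched
    rtt = ALPHA * rtt + (1 - ALPHA) * measured
    dev = ALPHA * dev + (1 - ALPHA) * abs(rtt - measured)
    timeout = rtt + 4 * dev
    return rtt, dev, timeout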
There is another timer called the persistence timer. It is used when the sender is made to wait due to lack of buffer space at the receiver. When this timer goes off, the sender sends a probe to learn about the receiver's buffer space; without it a deadlock could occur, so this timer is used to resolve that.
The third timer is the keepalive timer. It is used for connections that have been idle for a long time; if this timer goes off, the connection is closed.
Wireless TCP
Indirect TCP splits the TCP connection into two separate connections:
the first from the sender to the base station, the second from the base station to the receiver
the advantage is that each connection is homogeneous
The disadvantage is that it breaks the semantics of TCP
Another solution, which keeps the semantics of TCP, is Transactional TCP
Transactional TCP
Figure (a) above shows a normal RPC, where nine messages are exchanged between the client and the server.
Figure (b) shows the exchange with Transactional TCP (T/TCP), where the request is sent together with the SYN, and the FIN is carried along as well, reducing the number of messages and providing faster service.
--------------------------------------------------------------------------------------------------
Different performance issues in networks
The basic loop for improving network performance.
Measure relevant network parameters, performance.
Try to understand what is going on.
Change one parameter.
Rules:
CPU speed is more important than network speed.
Reduce packet count to reduce software overhead.
Minimize context switches.
Minimize copying.
You can buy more bandwidth but not lower delay.
Avoiding congestion is better than recovering from it.
Avoid timeouts.
The fast path from sender to receiver is shown with a heavy line.
The processing steps on this path are shaded.
Another example
In the TCP header, the fields that are the same between consecutive TPDUs on a one-way flow are shaded
All the sending TCP entity has to do is copy the prototype header into the output buffer
It then hands the header and data to the special IP procedure for sending a regular maximum-size TPDU
IP copies its own prototype header and the packet is ready
The other two areas where major performance gains are possible are
Buffer management
Timer Management
Timer management can be done with a timing wheel, as sketched below
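A minimal timing-wheel sketch: timers are hashed into slots, and each clock tick inspects only one slot instead of scanning every outstanding timer. The slot count of 8 is arbitrary, and this simple version only handles delays shorter than one full rotation of the wheel.

class TimingWheel:
    def __init__(self, slots=8):
        self.slots = [[] for _ in range(slots)]   # one bucket of callbacks per tick
        self.current = 0

    def schedule(self, ticks_from_now, callback):
        # O(1) insertion; assumes ticks_from_now < len(self.slots).
        slot = (self.current + ticks_from_now) % len(self.slots)
        self.slots[slot].append(callback)

    def tick(self):
        # Advance the clock by one tick and fire only this slot's timers.
        self.current = (self.current + 1) % len(self.slots)
        for cb in self.slots[self.current]:
            cb()
        self.slots[self.current] = []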
Gigabit networks pose some problems for protocols, along with possible solutions
Problems
Sequence Numbers
Communication Speeds
Go-back-n protocol and its poor performance when the bandwidth-delay product is large
gigabit lines are delay limited rather than bandwidth limited
Demands of new applications
------------------------------------------------------------------------------------------------------------
1: Define the following terms:
(a) Slow start
Answer
The phase in TCP congestion control in which the window size starts at one segment and increases by one segment for every ACK received (that is, it sends first one segment, then two, then four, then eight, and so on, as ACKs arrive for the segments transmitted).
3: When doing a connection setup in TCP both parties are required to pick a random
number for the initial sequence number.
(a) Ignoring security concerns, why do they not just pick 0 or 1?
Answer
This would substantially increase the likelihood of a lost segment from a previous connection re-appearing and messing up an existing connection.
(b) Why do they not just increment the last used sequence number for the particular
source/destination pair (assuming that we could readily keep track of this information)?
Answer
It allows a third party to fake a connection.
4: When TCP receives a segment that it has already received and acknowledged, it will
reply with an acknowledgement.
(a) Why is this acknowledgment necessary?
Answer
The previous acknowledgement may have been lost.
5:The sequence number of the segment received is 1234, and the length of the segment is
10 bytes.
(a) Do we know what the acknowledgement number will be that TCP will reply with?
If so, what is it? If not, why not? What can we say about the acknowledgement number
that TCP will reply with?
5:Answer
No. We do not. If this is the greatest contiguous segment currently received, then the
ACK will be 1244. However, if a prior segment has been lost, then the acknowledgement
number will be less than 1234. Likewise, if this is a retransmission of segment 1234, and
a subsequent segment has been received, the acknowledgement may be greater than 1244.
We do know that it will be either less than 1234 or greater than or equal to 1244.
6: If TCP retransmits a segment, what impact, if any, does this have on the RTT
calculation?
Answer
This transmission/retransmission cannot be included in the estimate, as we cannot tell which transmission the acknowledgement is for: the first (which may simply have been delayed) or the second.
8: A network has a maximum packet size of 128 bytes, a maximum packet lifetime of 10 sec, and an 8-bit sequence number. Find the maximum data rate per connection.
Answer
With an 8-bit sequence number there are 2^8 = 256 distinct sequence numbers, so at most 255 new packets can be sent within the 10 sec packet lifetime
In 10 sec, 128 × 8 × 255 = 261120 bits can be sent
Max data rate per connection = 261120 bits / 10 sec
= 26112 bits/sec
9: A TCP machine is sending full windows of 65535 bytes over a 1 Gbps channel that has a 10 msec one-way delay. What is the maximum throughput achievable? What is the line efficiency?
Answer
Given one-way delay 10 msec, RTT = 10 + 10 = 20 msec, so 1/RTT = 1/(20 × 10^-3 s) = 50 windows/sec
Max throughput = 65535 × 8 bits × 50/sec = 26.214 Mbps
Line efficiency = Max throughput / Bandwidth
= (26.214 Mbps / 1 Gbps) × 100 = 2.62%
Slow start
13: What is meant by nesting of TPDUs? Illustrate with a diagram the connection establishment between a client and a server using TPDUs.
-----------------------------------------------------------------------------------------------------