ch23 - TCP - DKK - Upto Slide 82
Process-to-Process Delivery:
UDP, TCP, and SCTP
Copyright © The McGraw-Hill Companies, Inc. Permission required for reproduction or display.
23-1 PROCESS-TO-PROCESS DELIVERY
Figure 23.1 Types of data deliveries
Port Address or Service Point Address
Figure 23.2 Port numbers
Client/Server Model for Process-to-Process Communication
The most common way to achieve process-to-process communication is through the client/server paradigm.
A process on the local host, called a client, needs services from a process, usually on a remote host, called a server.
For example, to get the day and time from a remote machine, we need a Daytime client process running on the local host and a Daytime server process running on a remote machine.
Addressing at the transport layer
At the transport layer, a transport-layer address, called a port number, is used to choose among multiple processes running on the destination host.
The destination port number is needed for delivery; the source port number is needed for the reply.
In the Internet model, port numbers are 16-bit integers between 0 and 65,535.
The client program defines itself with a port number chosen randomly by the transport-layer software running on the client host. This is the ephemeral (temporary) port number.
The server process uses a universal well-known port number defined in the Internet model, which is known to all client processes.
For example, as shown in the figure, the Daytime client process uses 52000 (an ephemeral port number) and the server process uses the well-known port number 13; a minimal socket sketch follows.
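The sketch below illustrates this port usage with Python sockets: the client addresses the well-known Daytime port 13, while the transport-layer software picks the ephemeral source port. The server host name is a placeholder, and the call will only succeed if a Daytime server is actually listening, so treat this as an illustration rather than the textbook's own code.

```python
# Minimal sketch of the Daytime client/server port usage described above.
# The server address is a placeholder; Daytime (port 13) is rarely enabled today.
import socket

SERVER = "example.com"        # hypothetical host running a Daytime server
WELL_KNOWN_PORT = 13          # well-known port used by the Daytime server process

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect((SERVER, WELL_KNOWN_PORT))
    # The transport software chooses the ephemeral (temporary) source port for us.
    ephemeral_port = s.getsockname()[1]
    print("client ephemeral port:", ephemeral_port)
    print("daytime reply:", s.recv(1024).decode(errors="replace"))
```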
Figure 23.3 IP addresses versus port numbers
Figure 23.4 IANA ranges
Figure 23.5 Socket address
Figure 23.6 Multiplexing and demultiplexing
Connectionless and Connection-Oriented Services
In the transport layer, a message is normally divided into transmittable segments.
A transport-layer protocol can be either connectionless or connection-oriented.
A connectionless transport layer treats each segment as an independent packet and delivers it to the transport layer at the destination machine. It is unreliable communication because it does not involve numbering of packets or flow and error control, e.g., UDP.
A connection-oriented transport layer first makes a connection with the transport layer at the destination machine before delivering the packets. It is reliable communication because it includes packet sequencing and flow and error control, e.g., TCP and SCTP (Stream Control Transmission Protocol).
After all the data is transferred, the connection is terminated.
Figure 23.7 Error control
Figure 23.8 Position of UDP, TCP, and SCTP in TCP/IP suite
23-2 USER DATAGRAM PROTOCOL (UDP)
Example 23.1
SNMP uses two port numbers (161 and 162), each for a different purpose, as we will see in Chapter 28.
Figure 23.9 User datagram format
Note: UDP length = IP length − IP header's length
Figure 23.10 Pseudoheader for checksum calculation
Example 23.2
Figure 23.11 Checksum calculation of a simple UDP user datagram
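The checksum in Figure 23.11 is the 16-bit one's-complement sum over the pseudoheader, the UDP header (with the checksum field set to zero), and the data. The sketch below reproduces that calculation; the IP addresses, ports, and payload are sample values I chose, not the figure's exact datagram.

```python
# Sketch of the UDP checksum over pseudoheader + header + data (Figures 23.10/23.11).
# Addresses, ports, and payload below are made-up sample values.
import socket
import struct

def ones_complement_sum16(data: bytes) -> int:
    if len(data) % 2:                                  # pad to an even number of bytes
        data += b"\x00"
    total = 0
    for (word,) in struct.iter_unpack("!H", data):     # sum 16-bit words
        total += word
        total = (total & 0xFFFF) + (total >> 16)       # wrap carries around
    return total

def udp_checksum(src_ip, dst_ip, src_port, dst_port, payload: bytes) -> int:
    udp_len = 8 + len(payload)
    pseudo = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
              + struct.pack("!BBH", 0, 17, udp_len))   # zero, protocol 17 (UDP), UDP length
    header = struct.pack("!HHHH", src_port, dst_port, udp_len, 0)   # checksum field = 0
    return (~ones_complement_sum16(pseudo + header + payload)) & 0xFFFF

print(hex(udp_checksum("192.0.2.1", "192.0.2.2", 52000, 13, b"TESTING")))
```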
UDP Operation
UDP packets are called user datagrams.
Connectionless Services
• Each user datagram sent by UDP is an independent
datagram
• No relationship between the different user datagrams
• User datagrams are not numbered
• No connection establishment and no connection
termination
• Each user datagram can travel on a different path
• Each request must be small enough to fit into one user
datagram
Flow and error control
• No flow control. The receiver may overflow with incoming
messages
• No error control mechanism except for the checksum
• This means that the sender does not know if a message
has been lost or duplicated
• When the receiver detects an error through the
checksum, the user datagram is silently discarded
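Because there is no connection setup and no error control beyond the checksum, a UDP request-reply exchange is just one sendto and one recvfrom, as in this minimal sketch. The server address and port are placeholders, and the timeout is there only because UDP itself never tells the sender that a datagram was lost.

```python
# Minimal connectionless request-response over UDP, as described above.
# The server address/port are hypothetical; the request must fit in one datagram.
import socket

SERVER = ("192.0.2.1", 9999)              # hypothetical UDP server

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.settimeout(2.0)                     # UDP gives no hint that a datagram was lost
    s.sendto(b"what time is it?", SERVER)  # one request = one user datagram
    try:
        reply, addr = s.recvfrom(1024)
        print("reply:", reply)
    except socket.timeout:
        # No error control in UDP: the request or the reply may simply have been lost.
        print("no reply (request or reply lost)")
```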
Encapsulation and Decapsulation
To send a message from one process to another, the UDP
protocol encapsulates and decapsulates messages in an IP
datagram
Queuing
In UDP, queues are associated with ports
At the client site: when a process starts, it requests a port
number from the operating system.
Some implementations create both an incoming and an
outgoing queue associated with each process.
Other implementations create only an incoming queue
associated with each process
The queues opened by the client are, in most cases, identified
by ephemeral port numbers.
The queues function as long as the process is running
At the server site: a server asks for incoming and outgoing
queues, using its well-known port, when it starts running.
The queues remain open as long as the server is running
Figure 23.12 Queues in UDP
Use of UDP
UDP is suitable for a process that requires simple request-
response communication with little concern for flow and error
control. It is not usually used for a process such as FTP that
needs to send bulk data
UDP is suitable for a process with internal flow and error
control mechanisms. For example, the Trivial File Transfer
Protocol (TFTP) process includes flow and error control. It can
easily use UDP.
UDP is a suitable transport protocol for multicasting.
Multicasting capability is embedded in the UDP software but
not in the TCP software.
UDP is used for management processes such as SNMP
UDP is used for some route updating protocols such as
Routing Information Protocol (RIP)
23-3 TCP
• Connection-oriented
• Full-duplex
TCP Services:
• Reliable transport
• Flow control
• Congestion control
TCP Services
TCP provides flow and error control much as the data link layer does; however, there are a few important differences between the transport layer and the data link layer.
Table 23.2 Well-known ports used by TCP
Figure 23.13 Stream delivery
[B] Stream Delivery Service
In UDP, a process (an application program) sends messages, with predefined boundaries, to UDP for delivery.
UDP adds its own header to each of these messages and delivers them to IP for transmission.
Each message from the process is called a user datagram and eventually becomes one IP datagram.
Neither IP nor UDP recognizes any relationship between the datagrams.
• TCP, by contrast, is a stream-oriented protocol: it allows the sending process to deliver data as a stream of bytes and allows the receiving process to obtain data as a stream of bytes.
• TCP creates an environment in which the two processes seem to be connected by an imaginary "tube" that carries their data across the Internet.
• The sending process produces (writes to) the stream of bytes, and the receiving process consumes (reads from) them.
Figure 23.14 Sending and receiving buffers
[C] Sending and Receiving Buffers
Why are buffers required?
Because the sending and receiving processes may not write or read data at the same speed, TCP needs buffers for storage.
There are two buffers, the sending buffer and the receiving buffer, one for each direction.
These buffers are also necessary for the flow and error control mechanisms used by TCP.
One way to implement a buffer is to use a circular array of 1-byte locations, as in the sketch below.
Normally the buffers are hundreds or thousands of bytes in size, depending on the implementation.
The figure shows the buffers as the same size, which is not always the case.
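The following is a minimal sketch of such a circular array of 1-byte locations: the producer (sending process or arriving segments) writes bytes, the consumer (TCP or the receiving process) reads them in order, and freed locations are reused. The class and method names are illustrative, not taken from any real TCP implementation.

```python
# Minimal sketch of a circular byte buffer such as TCP's send/receive buffers.
# Names are illustrative only.
class CircularBuffer:
    def __init__(self, size: int):
        self.buf = bytearray(size)
        self.size = size
        self.read_pos = 0      # next byte the consumer will read
        self.write_pos = 0     # next free slot the producer will fill
        self.count = 0         # bytes currently stored

    def write(self, data: bytes) -> int:
        """Store as many bytes as fit; return how many were accepted."""
        accepted = 0
        for b in data:
            if self.count == self.size:           # buffer full: producer must wait
                break
            self.buf[self.write_pos] = b
            self.write_pos = (self.write_pos + 1) % self.size
            self.count += 1
            accepted += 1
        return accepted

    def read(self, n: int) -> bytes:
        """Consume up to n bytes in FIFO order, freeing their locations."""
        n = min(n, self.count)
        out = bytearray()
        for _ in range(n):
            out.append(self.buf[self.read_pos])
            self.read_pos = (self.read_pos + 1) % self.size
            self.count -= 1
        return bytes(out)

buf = CircularBuffer(8)
print(buf.write(b"HELLO WORLD"))   # only 8 bytes fit
print(buf.read(5))                 # b'HELLO'
print(buf.write(b"!!!"))           # freed locations can be reused
```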
Figure 23.15 TCP segments
[D] Segments
The IP layer, as a service provider for TCP, needs to send
data in packets, not as a stream of bytes.
At the transport layer, TCP groups a number of bytes
together into a packet called a segment.
TCP adds a header to each segment (for control
purposes) and delivers the segment to the IP layer for
transmission.
The segments are encapsulated in IP datagrams and
transmitted.
This entire operation is transparent to the receiving
process
Segments may be received out of order, lost, or corrupted and resent; TCP handles all of this, and the receiving process is unaware of these activities.
The segments may differ in size (hundreds of bytes in practice).
[E] Full-Duplex Communication
• TCP offers full-duplex service, in which data can flow in both
directions at the same time.
• Each TCP then has a sending and receiving buffer, and
segments move in both directions.
Numbering System
Byte number: all the bytes transmitted in a connection are numbered.
Sequence number: the sequence number of each segment is the number of the first byte carried in that segment.
Acknowledgment number: each party also uses an acknowledgment number to confirm the bytes it has received. It is a cumulative number; a small worked sketch follows.
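The sketch below works through this numbering with made-up values: a starting byte number and three equal-sized segments. It simply applies the two rules above (a segment's sequence number is the number of its first byte; the cumulative ACK names the next byte expected).

```python
# Worked sketch of TCP's byte/sequence numbering with made-up numbers:
# suppose the first byte of the stream is numbered 10001 and the data is
# sent in three segments of 1000 bytes each.
first_byte = 10001
segment_sizes = [1000, 1000, 1000]

seq = first_byte
for i, size in enumerate(segment_sizes, start=1):
    # The sequence number of a segment is the number of its first byte.
    print(f"segment {i}: bytes {seq}-{seq + size - 1}, sequence number = {seq}")
    seq += size

# A cumulative acknowledgment names the next byte the receiver expects,
# so after all three segments it acknowledges with 13001.
print("cumulative ACK after all segments:", seq)
```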
Flow Control
The receiver of the data controls the amount of data that are to be sent
by the sender. This is done to prevent the receiver from being
overwhelmed with data. The numbering system allows TCP to use a
byte-oriented flow control.
Error Control
To provide reliable service, TCP implements an error control mechanism. Although error control considers a segment as the unit of data for error detection (lost or corrupted segments), error control is byte-oriented.
Congestion Control
• TCP takes into account congestion in the network.
• The amount of data sent by a sender is not only controlled by the receiver (flow control), but is also determined by the level of congestion in the network.
Example 23.3
Figure 23.16 TCP segment format
TCP Segment Format Description
Source port address: A 16-bit field that defines the port number of
the application program in the host that is sending the segment
Destination port address: A 16-bit field that defines the port number
of the application program in the host that is receiving the segment
Sequence number: This 32-bit field defines the number assigned to
the first byte of data contained in this segment
Acknowledgment number: This 32-bit field defines the byte number
that the receiver of the segment is expecting to receive from the other
party
Header length: This 4-bit field indicates the number of 4-byte words
in the TCP header. The length of the header can be between 20 and
60 bytes. Therefore, the value of this field can be between 5 (5 x 4
=20) and 15 (15 x 4 =60).
Reserved: This is a 6-bit field reserved for future use.
Control: This field defines 6 different control bits or flags
Window size:
This field defines the size of the window, in bytes, that the other party must maintain.
The length of this field is 16 bits, which means that the maximum size of the window is 65,535 bytes.
This value is normally referred to as the receiving window (rwnd) and is determined by the receiver.
The sender must obey the dictation of the receiver in this case.
Checksum: This 16-bit field contains the checksum.
Urgent pointer:
This 16-bit field, which is valid only if the urgent flag is set, is used when the segment contains urgent data.
It defines the number that must be added to the sequence number to obtain the number of the last urgent byte in the data section of the segment.
Options: There can be up to 40 bytes of optional information in the
TCP header.
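The fixed part of the header described above is 20 bytes, and its fields can be unpacked directly, as in this sketch. The sample header bytes are fabricated for illustration; only the field layout follows the description.

```python
# Sketch: unpacking the fixed 20-byte TCP header fields described above.
# The sample header contents are fabricated for illustration.
import struct

sample = struct.pack("!HHIIHHHH",
                     52000, 13,          # source port, destination port
                     10001, 1,           # sequence number, acknowledgment number
                     (5 << 12) | 0x018,  # header length (5 words) + control flags (PSH, ACK)
                     4000,               # window size (rwnd)
                     0, 0)               # checksum, urgent pointer

(src, dst, seq, ack, off_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", sample)
header_len_bytes = ((off_flags >> 12) & 0xF) * 4   # number of 4-byte words -> 20..60 bytes
flags = off_flags & 0x3F                           # the six control bits
print(src, dst, seq, ack, header_len_bytes, bin(flags), window)
```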
Figure 23.17 Control field
Table 23.3 Description of flags in the control field
A TCP Connection
• As a connection-oriented transport protocol, TCP establishes a
virtual path between the source and destination.
• All the segments belonging to a message are then sent over
this virtual path.
• Using a single virtual pathway for the entire message facilitates the acknowledgment process as well as retransmission of damaged or lost segments.
• A TCP connection is virtual, not physical. TCP operates at a
higher level. TCP uses the services of IP to deliver individual
segments to the receiver, but it controls the connection itself.
• If a segment is lost or corrupted, it is retransmitted.
• Unlike TCP, IP is unaware of this retransmission. If a
segment arrives out of order, TCP holds it until the missing
segments arrive; IP is unaware of this reordering.
• In TCP, connection-oriented transmission requires three
phases: connection establishment, data transfer, and
connection termination.
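Connection establishment uses the three-way handshake shown in Figure 23.18. The sketch below lists the three segments with made-up initial sequence numbers; it only illustrates which flags are set and which sequence/acknowledgment numbers each segment carries.

```python
# Sketch of the three segments exchanged during connection establishment
# (three-way handshake), with made-up initial sequence numbers (ISNs).
client_isn = 8000          # chosen by the client's TCP
server_isn = 15000         # chosen by the server's TCP

# 1. Client -> Server: SYN (consumes one sequence number, carries no ACK)
syn = {"flags": "SYN", "seq": client_isn}

# 2. Server -> Client: SYN + ACK (acknowledges the client's SYN)
syn_ack = {"flags": "SYN+ACK", "seq": server_isn, "ack": client_isn + 1}

# 3. Client -> Server: ACK (completes establishment; data transfer can begin)
ack = {"flags": "ACK", "seq": client_isn + 1, "ack": server_isn + 1}

for segment in (syn, syn_ack, ack):
    print(segment)
```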
Figure 23.18 Connection establishment using three-way handshaking
Simultaneous Open: A rare situation, called a simultaneous open,
may occur when both processes issue an active open.
• In this case, both TCPs transmit a SYN + ACK segment to each
other, and one single connection is established between them.
Figure 23.20 Connection termination using three-way handshaking
Figure 23.21 Half-close
Figure 23.22 Sliding window
Flow Control
• TCP uses a sliding window to handle flow control.
• The sliding window protocol used by TCP, however, is something between the Go-Back-N and Selective Repeat sliding window protocols.
• It looks like Go-Back-N because it does not use NAKs; it looks like Selective Repeat because the receiver holds the out-of-order segments until the missing ones arrive. A minimal sketch of the sender's window calculation follows.
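The sender sizes its window as the smaller of the receiver-advertised rwnd (flow control) and its own cwnd (congestion control), and the usable portion shrinks by the bytes already sent but not yet acknowledged. All numbers in the sketch are made up.

```python
# Sketch of how the sender sizes its sliding window; all values are made up.
rwnd = 4000            # advertised by the receiver (flow control)
cwnd = 3000            # maintained by the sender (congestion control)
in_flight = 1200       # bytes sent but not yet acknowledged

window = min(rwnd, cwnd)        # window size is the smaller of rwnd and cwnd
usable = window - in_flight     # how many more bytes may be sent right now
print("window size:", window)   # 3000
print("usable window:", usable) # 1800
```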
Example 23.4
What is the value of the receiver window (rwnd) advertised by host B if host B has a buffer size of 5000 bytes and 1000 bytes of received but unprocessed data?
Solution
The value of rwnd = 5000 − 1000 = 4000. Host B can
receive only 4000 bytes of data before overflowing its
buffer. Host B advertises this value in its next segment to
A.
Example 23.5
Solution
The size of the window is the smaller of rwnd and cwnd,
which is 3000 bytes.
Example 23.6
Figure 23.23 Example 23.6
Note: In modern implementations, a retransmission occurs if the retransmission timer expires or three duplicate ACK segments have arrived. A sketch of these two triggers follows.
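The sketch below models just the decision in that note: retransmit a segment when its retransmission timer (RTO) has expired or when three duplicate ACKs have been counted (fast retransmission). The RTO value and the helper are illustrative, not part of any real TCP stack.

```python
# Simplified sketch of the two retransmission triggers described in the note above.
import time

RTO = 2.0  # illustrative retransmission timeout in seconds

def should_retransmit(send_time: float, dup_acks: int, now: float) -> bool:
    timer_expired = (now - send_time) >= RTO       # retransmission timer ran out
    fast_retransmit = dup_acks >= 3                # three duplicate ACKs arrived
    return timer_expired or fast_retransmit

sent_at = time.monotonic()
print(should_retransmit(sent_at, dup_acks=3, now=sent_at + 0.3))  # True: fast retransmit
print(should_retransmit(sent_at, dup_acks=1, now=sent_at + 2.5))  # True: timer expired
print(should_retransmit(sent_at, dup_acks=1, now=sent_at + 0.5))  # False: keep waiting
```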
Figure 23.24 Normal operation
Figure 23.25 Lost segment
Figure 23.26 Fast retransmission
23-4 SCTP
Table 23.4 Some SCTP applications
Figure 23.27 Multiple-stream concept
Figure 23.28 Multihoming concept
Figure 23.29 Comparison between a TCP segment and an SCTP packet
Figure 23.30 Packet, data chunks, and streams
Figure 23.31 SCTP packet format
Figure 23.32 General header
Table 23.5 Chunks
Figure 23.33 Four-way handshaking
Figure 23.34 Simple data transfer
Figure 23.35 Association termination
Figure 23.36 Flow control, receiver site
Figure 23.37 Flow control, sender site
Figure 23.38 Flow control scenario
Figure 23.39 Error control, receiver site
Figure 23.40 Error control, sender site