6.1 THE TRANSPORT SERVICE

6.1.1 Services Provided to the Upper Layers

• The ultimate goal of the transport layer is to provide efficient, reliable, and cost-effective data transmission service to its users, normally processes in the application layer. To achieve this, the transport layer makes use of the services provided by the network layer. The software and/or hardware within the transport layer that does the work is called the transport entity.
• The transport entity can be located in the operating system kernel, in a library package bound into network applications, in a separate user process, or even on the network interface card. The first two options are most common on the Internet. The (logical) relationship of the network, transport, and application layers is illustrated in Fig. 6-1.
• Connection-oriented communication: It is much easier for an application to interpret a connection as a data stream than to cope with the connectionless models that underlie it, such as the datagram models of the Internet Protocol (IP) and UDP.
• Byte orientation: It is easier for an application to process the data as a stream of bytes than to deal with the message formats of the underlying communication system. This simplification lets applications work independently of the underlying message formats.
• Same order delivery: The underlying network does not generally guarantee that data packets are received in the same order in which they were sent, but this is one of the desired features of the transport layer. Segment numbering is used to provide this feature, so the data packets are passed to the receiver in order. Head-of-line blocking is a consequence of implementing it.
• Reliability: During transport, some data packets may be lost because of errors and problems such as network congestion. Using an error detection mechanism such as a CRC (cyclic redundancy check), the transport protocol can check the data for corruption, and correct reception is confirmed by sending an ACK or NACK signal to the sending host. Schemes such as ARQ (automatic repeat request) are then used to retransmit corrupted or lost data.
• Flow control: The rate at which data is transmitted between two nodes is managed to prevent a fast sending host from transmitting more data than the receiver's buffer can hold at a time; otherwise, a buffer overrun would occur.
• Congestion avoidance: Congestion control regulates traffic entry into the network so as to avoid congestive collapse. Automatic repeat requests alone can otherwise keep the network in a congested state.

6.1.2 Transport Service Primitives

• A service is specified by a set of primitives. A primitive is an operation; a user process accesses the service by invoking these primitives. The primitives for a connection-oriented service differ from those for a connectionless service.
• There are five types of service primitives (a short usage sketch follows the list):
1. LISTEN: When a server is ready to accept an incoming connection, it executes the LISTEN primitive. It blocks, waiting for an incoming connection.
2. CONNECT: The client executes CONNECT to establish a connection with the server and then waits for the response.
3. RECEIVE: The RECEIVE call blocks the server until data arrives.
4. SEND: The client executes the SEND primitive to transmit its request, followed by RECEIVE to get the reply.
5. DISCONNECT: This primitive is used to terminate the connection; after it, no further messages can be sent. When the client sends a DISCONNECT packet, the server sends a DISCONNECT packet back to acknowledge it. When the server's packet is received by the client, the connection is terminated.
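• As a purely illustrative sketch, a connection-oriented exchange built on these primitives might look like the C code below. The conn_t type and the prototypes simply mirror the five primitives named above; they are not a real library API.

```c
/* Hypothetical illustration only: conn_t and these prototypes simply
 * mirror the five primitives above; they are not a real library API. */
typedef int conn_t;                     /* handle for an established connection */

conn_t LISTEN(int local_addr);          /* block until a client connects        */
conn_t CONNECT(int remote_addr);        /* actively set up a connection         */
int    SEND(conn_t c, const void *buf, int len);
int    RECEIVE(conn_t c, void *buf, int maxlen);
void   DISCONNECT(conn_t c);

/* Server: wait for one request and answer it. */
void server_once(int my_addr) {
    char req[128];
    conn_t c = LISTEN(my_addr);         /* 1. blocks until a CONNECT arrives */
    RECEIVE(c, req, sizeof(req));       /* 3. blocks until the client SENDs  */
    SEND(c, "reply", 6);                /*    answer the request             */
    DISCONNECT(c);                      /* 5. tear the connection down       */
}

/* Client: connect, send one request, read the reply. */
void client_once(int server_addr) {
    char rep[128];
    conn_t c = CONNECT(server_addr);    /* 2. blocks until the server accepts */
    SEND(c, "request", 8);              /* 4. transmit the request            */
    RECEIVE(c, rep, sizeof(rep));       /*    wait for the reply              */
    DISCONNECT(c);
}
```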
• Consider an application with a server and several remote clients.
• To start with, the server executes a LISTEN primitive, typically by calling a library procedure that makes a system call to block the server until a client turns up.
• For lack of a better term, we will reluctantly use the somewhat ungainly acronym TPDU (Transport Protocol Data Unit) for messages sent from transport entity to transport entity.
• Thus, TPDUs (exchanged by the transport layer) are contained in packets (exchanged by the network layer).
• In turn, packets are contained in frames (exchanged by the data link layer).
• When a frame arrives, the data link layer processes the frame header and passes the contents of the frame payload field up to the network entity (a toy sketch of this nesting follows).
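• The containment of a TPDU inside a packet inside a frame can be pictured with a toy set of C structs. The field names and sizes below are invented purely to show the nesting; they do not correspond to any real header layout.

```c
#include <stdint.h>

#define MAX_PAYLOAD 1024

/* Toy layouts with invented fields, only to show the containment:
 * frame { frame header | packet { packet header | TPDU } | trailer } */
struct tpdu {                        /* exchanged transport entity <-> transport entity */
    uint16_t src_port, dst_port;
    uint8_t  data[MAX_PAYLOAD];
};

struct packet {                      /* exchanged by the network layer */
    uint32_t src_addr, dst_addr;     /* packet header                  */
    struct tpdu payload;             /* TPDU carried as the payload    */
};

struct frame {                       /* exchanged by the data link layer */
    uint8_t  dst_mac[6], src_mac[6]; /* frame header                     */
    struct packet payload;           /* packet carried in the frame      */
    uint32_t checksum;               /* frame trailer                    */
};
```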
• When a client wants to talk to the server, it executes a CONNECT primitive.
• The transport entity carries out this primitive by blocking the caller and sending a packet to the server.
• Encapsulated in the payload of this packet is a transport layer message for the server's transport entity.
• The client's CONNECT call causes a CONNECTION REQUEST TPDU to be sent to the server.
• When it arrives, the transport entity checks to see that the server is blocked on a LISTEN.
• It then unblocks the server and sends a CONNECTION ACCEPTED TPDU back to the client.
• When this TPDU arrives, the client is unblocked, and the connection is established. Data can now be exchanged using the SEND and RECEIVE primitives.
• In the simplest form, either party can do a (blocking) RECEIVE to wait for the other party to do a SEND. When the TPDU arrives, the receiver is unblocked.
• It can then process the TPDU and send a reply. As long as both sides can keep track of whose turn it is to send, this scheme works fine.
• When a connection is no longer needed, it must be released to free up table space within the two transport entities.

6.1.3 Berkeley Sockets

• Sockets were first released as part of the Berkeley UNIX 4.2BSD software distribution in 1983.
• The primitives are now widely used for Internet programming on many operating systems, especially UNIX-based systems, and there is a socket-style API for Windows called "winsock".
• The first four primitives in the list are executed in that order by servers.
o The SOCKET primitive creates a new endpoint and allocates table space for it within the transport entity.
o A successful SOCKET call returns an ordinary file descriptor for use in succeeding calls, the same way an OPEN call on a file does.
o Newly created sockets do not have network addresses. These are assigned using the BIND primitive.
o Next comes the LISTEN call, which allocates space to queue incoming calls for the case that several clients try to connect at the same time.
o To block waiting for an incoming connection, the server executes an ACCEPT primitive.
• Now let us look at the client side.
o Here, too, a socket must first be created using the SOCKET primitive, but BIND is not required since the address used does not matter to the server. The CONNECT primitive blocks the caller and actively starts the connection process.
o When it completes, the client process is unblocked and the connection is established.
• Both sides can now use SEND and RECEIVE to transmit and receive data over the full-duplex connection.
• Connection release with sockets is symmetric. When both sides have executed a CLOSE primitive, the connection is released (a minimal C example of both sides follows).
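• A minimal sketch of the two sides using the actual Berkeley socket calls. Error handling is trimmed, and the port number 12345 and the message contents are chosen arbitrarily for illustration.

```c
/* Minimal sketch only: one request/reply exchange, error handling trimmed. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int run_server(void) {
    int s = socket(AF_INET, SOCK_STREAM, 0);            /* SOCKET              */
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(12345);
    if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0)   /* BIND        */
        return -1;
    listen(s, 5);                                       /* LISTEN, queue of 5  */
    int c = accept(s, NULL, NULL);                      /* ACCEPT blocks here  */
    char buf[128];
    ssize_t n = recv(c, buf, sizeof(buf), 0);           /* RECEIVE             */
    if (n > 0)
        send(c, buf, (size_t)n, 0);                     /* SEND (echo it back) */
    close(c);                                           /* CLOSE is symmetric  */
    close(s);
    return 0;
}

int run_client(const char *server_ip) {
    int s = socket(AF_INET, SOCK_STREAM, 0);            /* SOCKET, no BIND needed */
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(12345);
    inet_pton(AF_INET, server_ip, &addr.sin_addr);
    if (connect(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) /* CONNECT    */
        return -1;
    send(s, "hello", 5, 0);
    char reply[128];
    recv(s, reply, sizeof(reply), 0);
    close(s);
    return 0;
}
```

The server follows exactly the SOCKET, BIND, LISTEN, ACCEPT order described above, while the client needs only SOCKET and CONNECT before exchanging data.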
6.3 CONGESTION CONTROL

• Congestion control is a critical aspect of network performance, jointly managed by the network and transport layers. While congestion is detected at the network layer, it is caused by the rapid transmission of packets from the transport layer.
• Effective congestion control involves regulating the packet transmission rate from hosts to prevent network congestion and performance degradation. TCP and other protocols implement congestion control algorithms to maintain good network performance.

6.3.1 Desirable Bandwidth Allocation

• To effectively implement a congestion control algorithm, it is crucial to define the desired state in which the algorithm should operate. Beyond merely avoiding congestion, the goal is to achieve an optimal bandwidth allocation among the transport entities using the network.
• An optimal allocation ensures high performance by using the available bandwidth efficiently while preventing congestion. Additionally, it promotes fairness among competing entities and enables prompt adjustment to changes in traffic demands. Each of these criteria is discussed in turn below.

Efficiency and Power

• An efficient bandwidth allocation across transport entities uses the available network capacity well. However, it is not accurate to assume that each entity should simply receive an equal share of the bandwidth, especially on bursty networks.
• As the load increases in Fig. 6-19(a), goodput initially rises at a steady rate but eventually plateaus because occasional bursts cause packet loss and delays. Poorly designed transport protocols can lead to congestion collapse, where senders overwhelm the network without achieving useful work. The corresponding delay is given in Fig. 6-19(b).
• Similarly, delay initially remains nearly constant but rises rapidly as the load nears capacity, mainly because of traffic bursts. Performance degradation begins with the onset of congestion, which is why bandwidth should be allocated below the point at which delay escalates rapidly.
• Kleinrock (1979) proposed the power metric, power = load / delay, to identify the optimal load for efficient network performance. The load with the highest power represents an efficient allocation for the transport entities (a small numeric sketch follows).
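• A small illustration of the power metric. The (load, delay) pairs below are made up purely to mimic the shape of Fig. 6-19, where delay grows sharply as the load approaches capacity.

```c
#include <stdio.h>

/* power = load / delay (Kleinrock, 1979). The sample values are invented
 * for illustration: delay blows up as load nears capacity, so power peaks
 * well below full load.                                                   */
int main(void) {
    const double load[]  = {0.2, 0.4, 0.6, 0.8, 0.95};  /* fraction of capacity */
    const double delay[] = {1.0, 1.2, 1.6, 2.8, 9.0};   /* arbitrary time units */
    double best_power = 0.0, best_load = 0.0;
    for (int i = 0; i < 5; i++) {
        double power = load[i] / delay[i];
        if (power > best_power) { best_power = power; best_load = load[i]; }
        printf("load %.2f  delay %.1f  power %.3f\n", load[i], delay[i], power);
    }
    printf("best operating load (by power): %.2f\n", best_load);
    return 0;
}
```

With these sample numbers, power peaks at a load of about 0.6, well below full capacity, which is the operating point the text calls an efficient allocation.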
Max-Min Fairness

• A max-min fair allocation is shown for a network with four flows, A, B, C, and D, in Fig. 6-20.
• Each of the links between routers has the same capacity, taken to be 1 unit, though in the general case the links will have different capacities. Three flows compete for the bottom-left link between routers R4 and R5.
• Each of these flows therefore gets 1/3 of the link. The remaining flow, A, competes with B on the link from R2 to R3. Since B has an allocation of 1/3, A gets the remaining 2/3 of the link.
• Notice that all of the other links have spare capacity. However, this capacity cannot be given to any of the flows without decreasing the allocation of another, lower flow.
• To give more bandwidth to B, the capacity of flow C or D (or both) would have to be decreased, and those flows would then have less bandwidth than B. Thus, the allocation is max-min fair (a sketch of computing such an allocation follows).
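• Max-min fair rates can be computed by progressive filling: raise all rates together and freeze a flow as soon as any link it crosses fills up. The sketch below hard-codes only the two links that matter in the Fig. 6-20 description (R2-R3 carrying A and B, R4-R5 carrying B, C, and D); the step size is arbitrary.

```c
#include <stdio.h>

#define FLOWS 4
#define LINKS 2

int main(void) {
    /* uses[f][l] = 1 if flow f crosses link l, from the Fig. 6-20 description:
     * link 0 is R2-R3 (carries A and B), link 1 is R4-R5 (carries B, C, D).  */
    const int uses[FLOWS][LINKS] = {
        {1, 0},  /* A */
        {1, 1},  /* B */
        {0, 1},  /* C */
        {0, 1},  /* D */
    };
    const double cap[LINKS] = {1.0, 1.0};
    double rate[FLOWS] = {0.0}, used[LINKS] = {0.0};
    int frozen[FLOWS] = {0};
    const double step = 0.001;

    for (;;) {
        int active = 0;
        for (int f = 0; f < FLOWS; f++) {
            if (frozen[f]) continue;
            int blocked = 0;                 /* does any link of f lack room? */
            for (int l = 0; l < LINKS; l++)
                if (uses[f][l] && used[l] + step > cap[l]) blocked = 1;
            if (blocked) { frozen[f] = 1; continue; }
            rate[f] += step;                 /* progressive filling step      */
            for (int l = 0; l < LINKS; l++)
                if (uses[f][l]) used[l] += step;
            active = 1;
        }
        if (!active) break;                  /* every flow is frozen          */
    }
    for (int f = 0; f < FLOWS; f++)
        printf("flow %c gets %.2f of a unit link\n", "ABCD"[f], rate[f]);
    return 0;
}
```

Running it prints roughly 0.67 for A and 0.33 for B, C, and D, matching the allocation worked out above.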
Convergence

• A bandwidth allocation that changes over time and converges quickly is shown in Fig. 6-21. Initially, flow 1 has all of the bandwidth. One second later, flow 2 starts; it needs bandwidth as well. The allocation quickly changes to give each of these flows half the bandwidth.
• At 4 seconds, a third flow joins. However, this flow uses only 20% of the bandwidth, which is less than its fair share (which is a third). Flows 1 and 2 quickly adjust, dividing the available bandwidth so that each has 40% of the bandwidth.
• At 9 seconds, the second flow leaves, and the third flow remains unchanged. The first flow quickly captures 80% of the bandwidth. At all times, the total allocated bandwidth is approximately 100%, so the network is fully used, and competing flows get equal treatment.

6.3.2 Regulating the Sending Rate

• In Fig. 6-22(a), we see a thick pipe leading to a small-capacity receiver. This is a flow-control limited situation. As long as the sender does not send more water than the bucket can contain, no water will be lost.
• In Fig. 6-22(b), the limiting factor is not the bucket capacity, but the internal carrying capacity of the network. If too much water comes in too fast, it will back up and some will be lost (in this case, by overflowing the funnel).
• These cases may appear similar to the sender, as transmitting too fast causes packets to be lost. However, they have different causes and call for different solutions.
• We have already talked about a flow-control solution with a variable-sized window. Now we will consider a congestion control solution. Since either of these problems can occur, the transport protocol will in general need to run both solutions and slow down if either problem occurs (sketched below).
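• In practice, running both solutions usually amounts to bounding the sender by the smaller of the two limits. The struct and field names below are schematic, not any particular protocol's state.

```c
#include <stdint.h>

/* Schematic sender state: the flow-control window advertised by the
 * receiver and the congestion window maintained by the sender itself. */
struct sender_state {
    uint32_t receiver_window;   /* how much the receiver's buffer can take  */
    uint32_t congestion_window; /* how much the network is believed to take */
    uint32_t bytes_in_flight;   /* sent but not yet acknowledged            */
};

static uint32_t min_u32(uint32_t a, uint32_t b) { return a < b ? a : b; }

/* How many more bytes may be transmitted right now: the sender slows down
 * if EITHER the receiver (flow control) or the network (congestion control)
 * is the limiting factor.                                                  */
uint32_t sendable_bytes(const struct sender_state *s) {
    uint32_t window = min_u32(s->receiver_window, s->congestion_window);
    return window > s->bytes_in_flight ? window - s->bytes_in_flight : 0;
}
```

TCP, for example, bounds its sending window by both the receiver's advertised window and its congestion window in essentially this way.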
6.3.3 Wireless Issues

• Fig. 6-26 shows a path with a wired and a wireless link for which the masking strategy is used. There are two aspects to note. First, the sender does not necessarily know that the path includes a wireless link, since all it sees is the wired link to which it is attached.
• Internet paths are heterogeneous and there is no general method for the sender to tell what kinds of links make up the path. This complicates the congestion control problem, as there is no easy way to use one protocol for wireless links and another protocol for wired links.
• The second aspect is a puzzle. The figure shows two mechanisms that are driven by loss: link layer frame retransmissions and transport layer congestion control. The puzzle is how these two mechanisms can coexist without getting confused.
• After all, a loss should cause only one mechanism to take action, because it is either a transmission error or a congestion signal; it cannot be both. If both mechanisms take action, we are back to the original problem of transports that run far too slowly over wireless links.

6.4 The Internet Transport Protocols: UDP

6.4.1 Introduction to UDP

• The Internet protocol suite supports a connectionless transport protocol called UDP (User Datagram Protocol). UDP provides a way for applications to send encapsulated IP datagrams without having to establish a connection. UDP is described in RFC 768.
• UDP transmits segments consisting of an 8-byte header followed by the payload. The header is shown in Fig. 6-27. The two ports serve to identify the end-points within the source and destination machines.
• When a UDP packet arrives, its payload is handed to the process attached to the destination port.
• The source port is primarily needed when a reply must be sent back to the source. By copying the Source port field from the incoming segment into the Destination port field of the outgoing segment, the process sending the reply can specify which process on the sending machine is to get it.
• The UDP Length field includes the 8-byte header and the data. The minimum length is 8 bytes, to cover the header. The maximum length is 65,515 bytes, which is lower than the largest number that will fit in 16 bits because of the size limit on IP packets.
• An optional Checksum is also provided for extra reliability. It checksums the header, the data, and a conceptual IP pseudoheader. When performing this computation, the Checksum field is set to zero and the data field is padded out with an additional zero byte if its length is an odd number.
• The pseudoheader for the case of IPv4 is shown in Fig. 6-28.
• It contains the 32-bit IPv4 addresses of the source and destination machines, the protocol number for UDP (17), and the byte count for the UDP segment (including the header). It is different but analogous for IPv6.
• Including the pseudoheader in the UDP checksum computation helps detect misdelivered packets, but including it also violates the protocol hierarchy, since the IP addresses in it belong to the IP layer, not to the UDP layer.
• TCP uses the same pseudoheader for its checksum (a sketch of the computation follows).
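• A sketch of that computation in C: the standard Internet one's-complement checksum taken over the pseudoheader, the UDP header (with its Checksum field zeroed), and the zero-padded data. It assumes the caller supplies all fields and addresses already in network byte order, and it ignores everything else a real stack would handle.

```c
#include <stddef.h>
#include <stdint.h>

struct udp_header {               /* 8-byte UDP header (RFC 768)            */
    uint16_t src_port, dst_port;
    uint16_t length;              /* header + data, in bytes                */
    uint16_t checksum;            /* must be 0 while the sum is computed    */
};

struct pseudo_header {            /* conceptual IPv4 pseudoheader (Fig. 6-28);
                                     this layout has no padding on common ABIs */
    uint32_t src_addr, dst_addr;  /* taken from the IP layer                */
    uint8_t  zero;
    uint8_t  protocol;            /* 17 for UDP                             */
    uint16_t udp_length;          /* same value as the UDP Length field     */
};

/* One's-complement sum of 16-bit words; an odd final byte is padded with 0. */
static uint32_t sum16(const void *data, size_t len, uint32_t sum) {
    const uint8_t *p = data;
    while (len > 1) { sum += (uint32_t)((p[0] << 8) | p[1]); p += 2; len -= 2; }
    if (len == 1)   sum += (uint32_t)(p[0] << 8);
    return sum;
}

/* All multi-byte fields are assumed to be in network byte order already. */
uint16_t udp_checksum(uint32_t src, uint32_t dst,
                      const struct udp_header *h,
                      const void *payload, size_t payload_len) {
    struct pseudo_header ph = { src, dst, 0, 17, h->length };
    uint32_t sum = 0;
    sum = sum16(&ph, sizeof(ph), sum);          /* pseudoheader              */
    sum = sum16(h, sizeof(*h), sum);            /* UDP header, checksum = 0  */
    sum = sum16(payload, payload_len, sum);     /* data, zero-padded if odd  */
    while (sum >> 16) sum = (sum & 0xFFFF) + (sum >> 16);   /* fold carries  */
    uint16_t result = (uint16_t)~sum;
    return result ? result : 0xFFFF;            /* 0 is sent as all ones     */
}
```

The final line reflects the RFC 768 rule that a computed checksum of zero is transmitted as all 1s, because an all-zero field means that no checksum was generated.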
6.4.2 Remote Procedure Call (RPC)

• When a process on machine 1 calls a procedure on machine 2, the calling process on 1 is suspended and execution of the called procedure takes place on 2.
• Information can be transported from the caller to the callee in the parameters and can come back in the procedure result. No message passing is visible to the application programmer. This technique is known as RPC (Remote Procedure Call) and has become the basis for many networking applications.
• To call a remote procedure, the client program must be bound with a small library procedure, called the client stub, that represents the server procedure in the client's address space.
• Similarly, the server is bound with a procedure called the server stub. These procedures hide the fact that the procedure call from the client to the server is not local.
• The actual steps in making an RPC are shown in Fig. 6-29 (a toy client stub is sketched after the steps).
• Step 1: The client calls the client stub. This is a local procedure call, with the parameters pushed onto the stack in the normal way.
• Step 2: The client stub packs the parameters into a message and makes a system call to send the message. Packing the parameters is called marshaling.
• Step 3: The operating system sends the message from the client machine to the server machine.
• Step 4: The operating system passes the incoming packet to the server stub.
• Step 5: Finally, the server stub calls the server procedure with the unmarshaled parameters. The reply traces the same path in the other direction.
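• A toy client stub is sketched below. The wire format, the remote_add() procedure, and the net_send_and_wait() helper are all invented for illustration; a real RPC system would also deal with data representation, lost messages, and binding.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical transport helper, assumed for this sketch only: sends a
 * request message and blocks until the reply arrives.                    */
int net_send_and_wait(const void *req, size_t req_len,
                      void *reply, size_t reply_len);

/* Invented wire format for one remote procedure, add(a, b). */
struct add_request { uint32_t opcode; int32_t a, b; };
struct add_reply   { int32_t result; };

/* Client stub: to the application programmer this looks like a local call. */
int32_t remote_add(int32_t a, int32_t b) {
    struct add_request req;           /* Step 2: marshal the parameters      */
    req.opcode = 1;                   /* invented opcode meaning "add"       */
    req.a = a;
    req.b = b;
    struct add_reply rep;
    /* Steps 2-4: the message is handed to the OS, crosses the network, and
     * is delivered to the server stub, which unmarshals it and calls the
     * real procedure (Step 5). The reply comes back the same way.          */
    net_send_and_wait(&req, sizeof(req), &rep, sizeof(rep));
    return rep.result;                /* unmarshal the reply                 */
}
```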
6.4.3 Real-Time Transport Protocols

• Two aspects of real-time transport:
o 1) The RTP protocol for transporting audio and video data in packets.
o 2) The processing that takes place, mostly at the receiver, to play out the audio and video at the right time. These functions fit into the protocol stack as shown in Fig. 6-30.
• RTP normally runs in user space over UDP (in the operating system). It operates as follows:
• The multimedia application consists of multiple audio, video, text, and possibly other streams. These are fed into the RTP library, which is in user space along with the application.
• This library multiplexes the streams and encodes them in RTP packets, which it stuffs into a socket.
• On the operating system side of the socket, UDP packets are generated to wrap the RTP packets and handed to IP for transmission over a link such as Ethernet.
• The reverse process happens at the receiver. The multimedia application eventually receives multimedia data from the RTP library. It is responsible for playing out the media.
• The protocol stack for this situation is shown in Fig. 6-30(a). The packet nesting is shown in Fig. 6-30(b).
• The RTP header is illustrated in Fig. 6-31. It consists of three 32-bit words and potentially some extensions (a small parser for the fixed fields is sketched after the field list).
• The first word contains the Version field, which is already at 2.
• The P bit indicates that the packet has been padded to a multiple of 4 bytes.
• The X bit indicates that an extension header is present.
• The CC field tells how many contributing sources are present, from 0 to 15.
• The M bit is an application-specific marker bit.
• The Payload type field tells which encoding algorithm has been used.
• The Sequence number is just a counter that is incremented on each RTP packet sent. It is used to detect lost packets.
• The Timestamp is produced by the stream's source to note when the first sample in the packet was made.
• The Synchronization source identifier tells which stream the packet belongs to.
• Finally, the Contributing source identifiers, if any, are used when mixers are present in the studio. In that case, the mixer is the synchronizing source, and the streams being mixed are listed here.
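• As a sketch, the fixed fields listed above can be pulled out of a received packet as follows. Only the 12-byte fixed part is handled; CSRC entries and header extensions are left out.

```c
#include <stddef.h>
#include <stdint.h>

/* Decoded view of the fixed RTP header fields named above (RFC 3550).
 * The parser reads them from the raw bytes rather than relying on
 * compiler-specific bit-field layout.                                  */
struct rtp_header {
    uint8_t  version;          /* 2 bits, currently 2                  */
    uint8_t  padding;          /* P bit                                */
    uint8_t  extension;        /* X bit                                */
    uint8_t  csrc_count;       /* CC, 0..15                            */
    uint8_t  marker;           /* M bit                                */
    uint8_t  payload_type;     /* 7 bits, the encoding in use          */
    uint16_t sequence;         /* incremented on each packet sent      */
    uint32_t timestamp;        /* sampling instant of the first sample */
    uint32_t ssrc;             /* synchronization source identifier    */
};

/* Returns 0 on success, -1 if the buffer is shorter than the 12-byte
 * fixed header. CSRC identifiers (4 bytes each) would follow the SSRC. */
int rtp_parse(const uint8_t *p, size_t len, struct rtp_header *h) {
    if (len < 12) return -1;
    h->version      = p[0] >> 6;
    h->padding      = (p[0] >> 5) & 1;
    h->extension    = (p[0] >> 4) & 1;
    h->csrc_count   = p[0] & 0x0F;
    h->marker       = p[1] >> 7;
    h->payload_type = p[1] & 0x7F;
    h->sequence     = (uint16_t)((p[2] << 8) | p[3]);
    h->timestamp    = ((uint32_t)p[4] << 24) | ((uint32_t)p[5] << 16)
                    | ((uint32_t)p[6] << 8)  |  (uint32_t)p[7];
    h->ssrc         = ((uint32_t)p[8] << 24) | ((uint32_t)p[9] << 16)
                    | ((uint32_t)p[10] << 8) |  (uint32_t)p[11];
    return 0;
}
```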
RTCP—The Real-time Transport Control Protocol

• It is defined along with RTP in RFC 3550 and handles feedback, synchronization, and the user interface. It does not transport any media samples.
• The first function can be used to provide feedback on delay, variation in delay (jitter), bandwidth, congestion, and other network properties to the sources.
• This information can be used by the encoding process to increase the data rate when the network is functioning well and to cut back the data rate when there is trouble in the network.
• By providing continuous feedback, the encoding algorithms can be continuously adapted to provide the best quality possible under the current circumstances.
• The Payload type field is used to tell the destination what encoding algorithm is used for the current packet, making it possible to vary it on demand.
• RTCP also handles interstream synchronization.
• Finally, RTCP provides a way for naming the various sources (e.g., in ASCII text). This information can be displayed on the receiver's screen to indicate who is talking at the moment.

Playout with Buffering and Jitter Control

• Once the media information reaches the receiver, it must be played out at the right time.
• Even if the packets are injected with exactly the right intervals between them at the sender, they will reach the receiver with different relative times. This variation in delay is called jitter.
• Even a small amount of packet jitter can cause distracting media artifacts, such as jerky video frames and unintelligible audio, if the media is simply played out as it arrives.
• The solution to this problem is to buffer packets at the receiver before they are played out, to reduce the jitter (a small playout sketch closes this section).
• As an example, in Fig. 6-32 we see a stream of packets being delivered with a substantial amount of jitter. Packet 1 is sent from the server at t = 0 sec and arrives at the client at t = 1 sec.
• Packet 2 undergoes more delay and takes 2 sec to arrive. As the packets arrive, they are buffered on the client machine.
• At t = 10 sec, playback begins. At this time, packets 1 through 6 have been buffered so that they can be removed from the buffer at uniform intervals for smooth play.
• Unfortunately, we can see that packet 8 has been delayed so much that it is not available when its play slot comes up. There are two options. Packet 8 can be skipped and the player can move on to subsequent packets. Alternatively, playback can stop until packet 8 arrives, creating an annoying gap in the music or movie.
• The difference between a low-jitter and a high-jitter connection is shown in Fig. 6-33.
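• A small playout loop illustrating the idea, with the skip-the-late-packet option. Playback starts at t = 10 s and each packet fills a 1-second slot, as in the Fig. 6-32 example; the arrival times are made up, and a real player would choose the playout delay from measured jitter rather than using a constant.

```c
#include <stdio.h>

#define NPKT 8
#define PLAYBACK_START 10.0   /* playback begins at t = 10 s, as above   */
#define PACKET_SPACING 1.0    /* each packet carries 1 s of media        */

int main(void) {
    /* Arrival times loosely modeled on the Fig. 6-32 narrative:
     * packets 1-6 are buffered by t = 10, packet 8 is very late.        */
    const double arrival[NPKT] = {1.0, 3.0, 4.5, 5.5, 7.0, 8.5, 10.5, 18.0};

    for (int i = 0; i < NPKT; i++) {
        double slot = PLAYBACK_START + i * PACKET_SPACING;  /* playout deadline */
        if (arrival[i] <= slot)
            printf("t=%4.1f  play packet %d (buffered %.1f s)\n",
                   slot, i + 1, slot - arrival[i]);
        else
            printf("t=%4.1f  packet %d has not arrived -> skip it\n",
                   slot, i + 1);
    }
    return 0;
}
```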
