1%.IV-Sem - Computer Network Notes
Subject Name: Computer Networks
Subject Code: CS-602
Semester: 6th
Downloaded from www.rgpvnotes.in
Syllabus: Computer Network: Definitions, goals, components, Architecture, Classifications & Types.
Layered Architecture: Protocol hierarchy, Design Issues, Interfaces and Services, Connection Oriented &
Connectionless Services, Service primitives, Design issues & its functionality. ISO OSI Reference Model:
Principle, Model, Descriptions of various layers and its comparison with TCP/IP. Principals of physical
layer: Media, Bandwidth, Data rate and Modulations.
#Goals
• Resource and load sharing: several machines can share printers, tape drives, etc., and programs do not need to run on a single machine.
• Reduced cost.
• High reliability: if a machine goes down, another can take over.
• Mail and communication.
#Components
A data communications system has five components.
1. Message. The message is the information (data) to be communicated. Popular forms of information
include text, numbers, pictures, audio, and video.
2. Sender. The sender is the device that sends the data message. It can be a computer, workstation,
telephone handset, video camera, and so on.
3. Receiver. The receiver is the device that receives the message. It can be a computer, workstation,
telephone handset, television, and so on.
4. Transmission medium. The transmission medium is the physical path by which a message travels from
sender to receiver. Some examples of transmission media include twisted-pair wire, coaxial cable, fiber-
optic cable, and radio waves
5. Protocol. A protocol is a set of rules that govern data communications. It represents an agreement
between the communicating devices. Without a protocol, two devices may be connected but not
communicating.
#Architecture
Network architecture is the design of a communications network. It is a framework for the specification of
a network's physical components and their functional organization and configuration.
In telecommunication, the specification of a network architecture may also include a detailed description
of products and services delivered via a communications network, as well as detailed rate and billing
structures under which services are compensated. The network architecture of the Internet is
predominantly expressed by its use of the Internet Protocol Suite, rather than a specific model for
interconnecting networks or nodes in the network, or the usage of specific types of hardware link
#Computer Networks: Classifications & Types.
There are three types of network classification
1) LAN (Local area network)
2) MAN (Metropolitan Area network)
3) WAN (Wide area network)
Characteristics of MAN
1) It covers a town or city (up to about 50 km).
2) A MAN most often uses optical fibre cable as its communication medium, but other media can also be used.
#Layered Architecture:
Protocol hierarchy: To tame design complexity, most networks are organized as a set of layers or levels. The fundamental idea of layered architecture is to divide the design into small pieces. Layering provides modularity to the network design. The main duty of each layer is to offer services to the higher layers and to provide abstraction. The main benefits of layered architecture are modularity and clear interfaces.
#Connection Oriented & Connectionless Services, Service primitives, Design issues & its functionality
Connection-oriented
There is a sequence of operation to be followed by the users of connection-oriented service. They are:
1. Connection is established
2. Information is sent
3. Connection is released
In connection-oriented service we must establish a connection before starting the communication. Once the connection is established, we send the message or information, and then we release the connection. Connection-oriented service is more reliable than connectionless service. An example of a connection-oriented protocol is TCP (Transmission Control Protocol).
Connectionless
It is similar to postal services, as it carries the full address where the message (letter) is to be carried. Each
message is routed independently from source to destination. The order of message sent can be different
from the order received.
In connectionless service, data is transferred in one direction from source to destination without checking whether the destination still exists or is prepared to accept the message. Authentication is not needed here. An example of a connectionless protocol is UDP (User Datagram Protocol).
#Service Primitives
Connection Oriented Service Primitives
LISTEN Block waiting for an incoming connection
CONNECT Establish a connection with a waiting peer
RECEIVE Block waiting for an incoming message
SEND Sending a message to the peer
DISCONNECT Terminate a connection
Connectionless Service Primitives
UNITDATA This primitive sends a packet of data
FACILITY, REPORT Primitive for enquiring about the performance of the network, like delivery statistics.
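The connection-oriented primitives above map closely onto the Berkeley socket API. The following is a minimal sketch on the loopback interface; the port is chosen automatically and the message contents are illustrative, not part of any standard.

```python
import socket
import threading

def server(ready):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))            # any free loopback port
    srv.listen(1)                          # LISTEN: block for an incoming connection
    ready["port"] = srv.getsockname()[1]
    ready["up"].set()
    conn, _ = srv.accept()                 # connection established with the peer
    data = conn.recv(1024)                 # RECEIVE: block for an incoming message
    conn.sendall(b"ACK:" + data)           # SEND: send a message to the peer
    conn.close()                           # DISCONNECT: terminate the connection
    srv.close()

ready = {"up": threading.Event()}
threading.Thread(target=server, args=(ready,)).start()
ready["up"].wait()                         # wait until the server publishes its port

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", ready["port"]))  # CONNECT: establish a connection
cli.sendall(b"hello")                      # SEND
reply = cli.recv(1024)                     # RECEIVE
cli.close()                                # DISCONNECT
print(reply)  # b'ACK:hello'
```

A connectionless (UNITDATA-style) exchange would use `SOCK_DGRAM` sockets and `sendto`/`recvfrom` instead, with no CONNECT or DISCONNECT step.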
Fig 1.9 The TCP/IP reference model. Fig 1.10 Protocols in the TCP/IP model initially.
Media
Network media refers to the communication channels used to interconnect nodes on a computer
network. Typical examples of network media include copper coaxial cable, copper twisted pair
cables and optical fiber cables used in wired networks, and radio waves used in wireless data
communications networks.
In data communication terminology, a transmission medium is a physical path between the transmitter and the receiver, i.e., the channel through which data is sent from one place to another.
Bandwidth
Bandwidth describes the maximum data transfer rate of a network or Internet connection. It
measures how much data can be sent over a specific connection in a given amount of time. For
example, a gigabit Ethernet connection has a bandwidth of 1,000 Mbps (125 megabytes per
second). An Internet connection via cable modem may provide 25 Mbps of bandwidth. Bandwidth
also refers to a range of frequencies used to transmit a signal. This type of bandwidth is measured
in hertz and is often referenced in signal processing applications.
Data rate
The data rate denotes the transmission speed, i.e., the number of bits transferred per second. The useful data rate for the user is usually less than the actual data rate transported on the network. One reason is that additional bits are transferred for signalling, addressing, recovery of timing information at the receiver, or error correction to compensate for possible transmission errors. In telecommunications it is common practice to express the data rate in bits per second (bit/s); see bit rate. In data communication, the data rate is often expressed in bytes per second (B/s).
Modulation
Modulation plays a key role in communication system to encode information digitally in analog world. It is
very important to modulate the signals before sending them to the receiver section for larger distance
transfer, accurate data transfer and low-noise data reception.
Note: Bandwidth and data rate are related by the modulation format.
Different modulation formats require different bandwidths for the same data rate. For FM modulation, Carson's rule gives the bandwidth as approximately 2*(df + fm), where df is the maximum frequency deviation and fm is the highest frequency of the message signal.
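Plugging broadcast-FM-like numbers into this rule gives the familiar channel estimate. The deviation and message-frequency values below are illustrative:

```python
def fm_bandwidth(df_hz, fm_hz):
    """Carson's rule: approximate FM bandwidth = 2 * (peak deviation + message frequency)."""
    return 2 * (df_hz + fm_hz)

# 75 kHz peak deviation, 15 kHz audio bandwidth
bw = fm_bandwidth(75_000, 15_000)
print(bw)  # 180000 Hz, i.e. roughly a 180 kHz channel
```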
Syllabus: Data Link Layer: Need, Services Provided, Framing, Flow Control, Error control. Data Link Layer
Protocol: Elementary &Sliding Window protocol: 1-bit, Go-Back-N, Selective Repeat, Hybrid ARQ.
Protocol verification: Finite State Machine Models & Petri net models. ARP/RARP/GARP
Character Count
This method uses a field in the header to specify the number of characters in the frame. When the data
link layer at the destination sees the character count, it knows how many characters follow, and hence
where the end of the frame is. The disadvantage is that if the count is garbled by a transmission error, the
destination will lose synchronization and will be unable to locate the start of the next frame. So, this
method is rarely used.
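The scheme above can be sketched in a few lines. This is a toy encoder/decoder, not any standard's format: here the count byte covers itself plus the payload, and you can see from `deframe` that a single corrupted count byte would desynchronize every frame that follows.

```python
def frame(payloads):
    # Prefix each payload with a one-byte count (count byte included in the count).
    out = bytearray()
    for p in payloads:
        out.append(len(p) + 1)
        out += p
    return bytes(out)

def deframe(stream):
    frames, i = [], 0
    while i < len(stream):
        count = stream[i]                # a garbled count desynchronizes everything after it
        frames.append(stream[i + 1:i + count])
        i += count
    return frames

data = frame([b"abc", b"de"])
print(deframe(data))  # [b'abc', b'de']
```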
Character stuffing
In the second method, each frame starts with the ASCII character sequence DLE STX and ends with the
sequence DLE ETX. This method overcomes the drawback of the character count method. However, character stuffing is closely tied to 8-bit characters, which is a major hurdle when transmitting characters of arbitrary size.
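A minimal character-stuffing sketch: the sender doubles any DLE byte inside the payload so the receiver can distinguish data bytes from the real DLE ETX trailer. The payload here is illustrative.

```python
DLE, STX, ETX = b"\x10", b"\x02", b"\x03"

def stuff(payload: bytes) -> bytes:
    # Double every DLE in the data, then wrap with DLE STX ... DLE ETX.
    return DLE + STX + payload.replace(DLE, DLE + DLE) + DLE + ETX

def unstuff(frame: bytes) -> bytes:
    body = frame[2:-2]                    # strip DLE STX header and DLE ETX trailer
    return body.replace(DLE + DLE, DLE)   # collapse doubled DLEs back to one

msg = b"A\x10B"                           # payload that itself contains a DLE byte
print(unstuff(stuff(msg)) == msg)  # True
```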
Bit stuffing
The third method allows data frames to contain an arbitrary number of bits and allows character codes
with an arbitrary number of bits per character. At the start and end of each frame is a flag byte consisting
of the special bit pattern 01111110. Whenever the sender's data link layer encounters five consecutive 1s
in the data, it automatically stuffs a zero bit into the outgoing bit stream. This technique is called bit
stuffing.
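The rule above can be sketched on strings of '0'/'1' characters for clarity (real implementations work on raw bits). Note how a payload containing the flag pattern itself survives the round trip:

```python
FLAG = "01111110"

def bit_stuff(bits: str) -> str:
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:             # five consecutive 1s: stuff a 0
            out.append("0")
            run = 0
    return FLAG + "".join(out) + FLAG

def bit_unstuff(frame: str) -> str:
    bits = frame[len(FLAG):-len(FLAG)]
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:             # the bit after five 1s is a stuffed 0: skip it
            i += 1
            run = 0
        i += 1
    return "".join(out)

data = "011111101111110"         # contains the flag pattern itself
print(bit_unstuff(bit_stuff(data)) == data)  # True
```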
Physical layer coding violations
The final framing method is physical layer coding violations and is applicable to networks in which the
encoding on the physical medium contains some redundancy. In such cases normally, a 1 bit is a high-low
pair and a 0 bit is a low-high pair. The combinations of low-low and high-high which are not used for data
may be used for marking frame boundaries.
To ensure reliable communication, there needs to exist flow control (managing the amount of data the
sender sends), and error control (that data arrives at the destination error free).
• Flow and error control needs to be done at several layers.
• For node-to-node links, flow and error control is carried out in the data-link layer.
• For end-point to end-point, flow and error control is carried out in the transport layer.
There may be three types of errors: lost data, lost acknowledgement, and delayed acknowledgement.
The problem with the Stop-and-Wait protocol is that it is very inefficient. At any one moment, only one frame is in transit. The sender must wait at least one round-trip time before sending the next frame. The wait can be long on a slow network such as a satellite link.
Stop and Wait Protocol
Characteristics
• Used in Connection-oriented communication.
• It offers error and flow control
• It is used in Data Link and Transport Layers
• Stop and Wait ARQ mainly implements Sliding Window Protocol concept with Window Size 1
Useful Terms:
• Propagation Delay: Amount of time taken by a packet to make a physical journey from one router to
another router.
Propagation Delay = (Distance between routers) / (Velocity of propagation)
• RoundTripTime (RTT) = 2* Propagation Delay
• TimeOut (TO) = 2* RTT
• Time To Live (TTL) = 2* TimeOut. (Maximum TTL is 180 seconds)
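The formulas above can be exercised with illustrative numbers (copper and fibre propagate at roughly 2×10⁸ m/s); the last function also shows why Stop-and-Wait is inefficient, since only one frame is in flight per round trip:

```python
def propagation_delay(distance_m, velocity_mps=2e8):
    # Propagation Delay = distance / velocity of propagation
    return distance_m / velocity_mps

def utilisation(frame_time_s, prop_delay_s):
    # Stop-and-Wait link utilisation: one frame time per (frame time + RTT)
    return frame_time_s / (frame_time_s + 2 * prop_delay_s)

d = propagation_delay(1_000_000)   # a 1000 km link
rtt = 2 * d
timeout = 2 * rtt
print(d, rtt, timeout)             # 0.005 s, 0.01 s, 0.02 s
print(round(utilisation(0.001, d), 4))  # a 1 ms frame uses only ~9% of the link
```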
3. Delayed Acknowledgement:
This is resolved by introducing sequence numbers for acknowledgements as well.
HYBRID ARQ
HARQ is the use of conventional ARQ along with an error-correction technique called 'soft combining', which no longer discards received bad data (frames with errors).
With soft combining, data packets that are not properly decoded are no longer discarded. The received signal is stored in a buffer and combined with the next retransmission.
That is, two or more packets received each one with insufficient SNR to allow individual decoding can be
combined in such a way that the total signal can be decoded!
The following image explains this procedure. The transmitter sends packet [1]. If packet [1] arrives and is OK, the receiver sends an ACK.
In our example, we see that the packet arrived 'wrong' twice. What is the limit on these retransmissions? Up to 4; i.e., we can have up to 4 retransmissions in each process, which is the maximum number supported by the buffer.
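A toy Chase-combining sketch of the idea: each retransmission of the same BPSK symbols (+1 for bit 1, −1 for bit 0) arrives with noise; the receiver sums the buffered soft values and decides on the sign. The received values are hand-picked so that neither copy decodes correctly on its own, but their combination does:

```python
def decide(soft_values):
    # Hard decision on each combined soft value: positive -> 1, else 0.
    return [1 if s > 0 else 0 for s in soft_values]

sent_bits = [1, 0, 1]
rx1 = [+0.2, +0.4, -0.1]   # decodes to [1, 1, 0]: bits 1 and 2 wrong
rx2 = [-0.1, -0.9, +0.5]   # decodes to [0, 0, 1]: bit 0 wrong
combined = [a + b for a, b in zip(rx1, rx2)]   # soft combining in the buffer
print(decide(rx1), decide(rx2), decide(combined))
# [1, 1, 0] [0, 0, 1] [1, 0, 1]  -> only the combined copy matches sent_bits
```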
SDLC
SDLC identifies two types of network nodes: primary and secondary. Primary nodes control the operation of other stations, called secondaries. The primary polls the secondaries in a predetermined order, and a secondary can then transmit if it has outgoing data. The primary also sets up and tears down links and manages the link while it is operational. Secondary nodes are controlled by a primary, which means that a secondary can send information to the primary only if the primary grants permission.
1. Information (I) frame: Carries upper-layer information and some control information. This
frame sends and receives sequence numbers, and the poll final (P/F) bit performs flow and
error control. The send-sequence number refers to the number of the frame to be sent next.
The receive-sequence number provides the number of the frame to be received next. Both
sender and receiver maintain send- and receive-sequence numbers.
A primary station uses the P/F bit to tell the secondary whether it requires an immediate
response. A secondary station uses the P/F bit to tell the primary whether the current frame is
the last in its current response.
2. Supervisory (S) frame: Provides control information. An S frame can request and suspend
transmission, reports on status, and acknowledge receipt of I frames. S frames do not have an
information field.
3. Unnumbered (U) frame: Supports control purposes and is not sequenced. A U frame can be
used to initialize secondaries. Depending on the function of the U frame, its control field is 1 or 2
bytes. Some U frames have an information field.
• Data---Contains path information unit (PIU) or exchange identification (XID) information.
• Frame Check Sequence (FCS) ---Precedes the ending flag delimiter and is usually a cyclic
redundancy check (CRC) calculation remainder. The CRC calculation is redone in the receiver. If the
result differs from the value in the original frame, an error is assumed.
HDLC
High-Level Data Link Control (HDLC) is a bit-oriented code-transparent synchronous data link layer
protocol. HDLC provides both connection-oriented and connectionless service. HDLC can be used for point
to multipoint connections, but is now used almost exclusively to connect one device to another, using what
is known as Asynchronous Balanced Mode (ABM). The original master-slave modes Normal Response
Mode (NRM) and Asynchronous Response Mode (ARM) are rarely used.
FRAMING
HDLC frames can be transmitted over synchronous or asynchronous serial communication links. Those links
have no mechanism to mark the beginning or end of a frame, so the beginning and end of each frame has
to be identified. This is done by using a frame delimiter, or flag, which is a unique sequence of bits that is
guaranteed not to be seen inside a frame. This sequence is '01111110', or, in hexadecimal notation, 0x7E.
Each frame begins and ends with a frame delimiter. A frame delimiter at the end of a frame may also mark
the start of the next frame. A sequence of 7 or more consecutive 1-bits within a frame will cause the frame
to be aborted.
When no frames are being transmitted on a simplex or full-duplex synchronous link, a frame delimiter is
continuously transmitted on the link. Using the standard NRZI encoding from bits to line levels (0 bit =
transition, 1 bit = no transition), this generates one of two continuous waveforms, depending on the initial
state:
Flag: 8 bits | Address: 8 or more bits | Control: 8 or 16 bits | Information: variable length, 0 or more bits | FCS: 16 or 32 bits | Flag: 8 bits
BISYNC
Binary Synchronous Communication (BSC or Bisync) is an IBM character-oriented, half duplex link protocol,
announced in 1967 after the introduction of System/360. It replaced the synchronous transmit-receive
(STR) protocol used with second generation computers. The intent was that common link management
rules could be used with three different character encodings for messages. Six-bit Transcode looked backwards to older systems, while USASCII and EBCDIC looked forward.
BISYNC establishes rules for transmitting binary-coded data between a terminal and a host computer's
BISYNC port. While BISYNC is a half-duplex protocol, it will synchronize in both directions on a full-duplex
channel. BISYNC supports both point-to-point (over leased or dial-up lines) and multipoint transmissions.
Each message must be acknowledged, adding to its overhead.
BISYNC is character oriented, meaning that groups of bits (bytes) are the main elements of transmission,
rather than a stream of bits. The BISYNC frame is pictured next. It starts with two sync characters that the
receiver and transmitter use for synchronizing. This is followed by a start of header (SOH) command, and
then the header. Following this are the start of text (STX) command and the text. Finally, an end of text (ETX) command and a cyclic redundancy check (CRC) end the frame. The CRC provides error detection.
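How a CRC catches corruption can be sketched with a bit-by-bit CRC-16 computation. BISYNC itself used CRC-16/ANSI, so treat the specific variant below (CRC-16/CCITT-FALSE: polynomial 0x1021, initial value 0xFFFF) as illustrative:

```python
def crc16_ccitt(data: bytes, poly=0x1021, init=0xFFFF) -> int:
    crc = init
    for byte in data:
        crc ^= byte << 8                 # bring the next byte into the high bits
        for _ in range(8):               # shift out one bit at a time
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

good = crc16_ccitt(b"123456789")         # standard check input for CRC variants
bad = crc16_ccitt(b"123456788")          # one corrupted byte
print(hex(good), good != bad)  # 0x29b1 True
```

The receiver recomputes the CRC over the received frame; any mismatch with the transmitted FCS flags the frame as corrupted.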
Frame types
• I-Frames (Information frames): Carries upper-layer information and some control information. I-
frame functions include sequencing, flow control, and error detection and recovery. I-frames carry
send and receive sequence numbers.
• S-Frames (Supervisory Frames): Carries control information. S-frame functions include requesting
and suspending transmissions, reporting on status, and acknowledging the receipt of I-frames. S-
frames carry only receive sequence numbers.
• U-Frames (Unnumbered Frames): Carries control information. U-frame functions include link setup
and disconnection, as well as error reporting. U-frames carry no sequence numbers.
Frame format
Finite State Machine Models
In biology and artificial intelligence research, state machines or hierarchies of state machines have been used to describe neurological systems. In linguistics, they are used to describe simple parts of the grammars of natural languages.
The FSM consists of:
• States: the instants at which the protocol machine is waiting for the next event to happen, e.g. waiting for an ACK.
• Transitions: these occur when some event happens, e.g. when a frame is sent, when a frame arrives, when a timer goes off, or when an interrupt occurs.
• Initial state: a description of the system when it starts running.
A deadlock is a situation in which the protocol can make no more forward progress: there exists a set of states from which there is no exit and no progress can be made.
How do we know a protocol really works? We specify and verify the protocol using, e.g., a finite state machine:
–Each protocol machine (sender or receiver) is at a specific state at every time instant
–Each state has zero or more possible transitions to other states
–One particular state is initial state: from initial state, some or possibly all other states may be reachable
by a sequence of transitions.
• Simplex stop and wait ARQ protocol:
–State SRC: S = 0, 1 → which frame sender is sending;
R = 0, 1 → which frame receiver is expecting;
C = 0, 1, A (ACK), − (empty) → channel state, i.e. what is in channel
There are 9 transitions
Transition | Who runs? | Frame Accepted | Frame Emitted | To Network Layer
0          | –         | (frame lost)   | (frame lost)  | –
1          | R         | 0              | A             | –
2          | S         | A              | 1             | Yes
3          | R         | 1              | A             | –
4          | S         | A              | 0             | Yes
5          | R         | 0              | A             | –
6          | R         | 1              | A             | No
7          | S         | (time out)     | 0             | No
8          | S         | (time out)     | 1             | –
Table 2.1 List of Transitions
–Initial state (000): sender has just sent frame 0, receiver is expecting frame 0, and frame 0 is currently in
channel
–Transition 0: consists of the channel losing its contents.
–Transition 1: consists of channel correctly delivering packet 0 to receiver, and receiver expecting frame 1
and emitting ACK 0. Also receiver delivering packet 0 to the network layer.
–During normal operation, transitions 1,2,3,4 are repeated in order over and over: in each cycle, two
frames are delivered, bringing sender back to initial state.
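The normal cycle can be simulated directly from Table 2.1. This sketch encodes the (S, R, C) triple and the four normal-operation transitions; running 1-2-3-4 delivers two frames and returns the machine to its initial state:

```python
def step(state, transition):
    s, r, c = state
    if transition == 1:          # receiver takes frame 0, ACKs, now expects 1
        return (s, 1, "A")
    if transition == 2:          # sender takes the ACK, sends frame 1
        return (1, r, "1")
    if transition == 3:          # receiver takes frame 1, ACKs, now expects 0
        return (s, 0, "A")
    if transition == 4:          # sender takes the ACK, sends frame 0
        return (0, r, "0")
    raise ValueError(transition)

state = (0, 0, "0")              # initial state (000): frame 0 in channel
for t in (1, 2, 3, 4):
    state = step(state, t)
print(state)  # (0, 0, '0') -- back to the initial state after one full cycle
```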
Fig 2.27 FSM for Stop and Wait Protocol (Half Duplex)
Petri Net
Petri Net (PN) is an abstract model to show the interaction between asynchronous processes. It is only one of many ways to represent these interactions. Asynchronous means that the designer doesn't know when the processes start or in which sequence they will take place. A common way to visualize the concepts is with places, tokens, transitions and arcs. We refer to the basics of Petri Nets for a first introduction to the notation. Note that a transition can only fire when there are tokens in every input place. When it fires, one token is taken from every input place, and every output place of the transition gets an (extra) token.
The Basics:
A Petri Net is a collection of directed arcs connecting places and transitions. Places may hold tokens. The
state or marking of a net is its assignment of tokens to places. Here is a simple net containing all
components of a Petri Net:
A transition is enabled when the number of tokens in each of its input places is at least equal to the arc
weight going from the place to the transition. An enabled transition may fire at any time. When fired, the
tokens in the input places are moved to output places, according to arc weights and place capacities. This
results in a new marking of the net, a state description of all places.
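The enabling and firing rules can be sketched with a marking held in a dictionary. Place names and arc weights here are illustrative; note how differing weights make tokens appear to be created:

```python
def enabled(marking, inputs):
    # A transition is enabled when every input place holds at least its arc weight.
    return all(marking[p] >= w for p, w in inputs.items())

def fire(marking, inputs, outputs):
    if not enabled(marking, inputs):
        raise RuntimeError("transition not enabled")
    m = dict(marking)
    for p, w in inputs.items():
        m[p] -= w                    # consume the enabling tokens
    for p, w in outputs.items():
        m[p] = m.get(p, 0) + w       # produce tokens per output arc weight
    return m

m0 = {"P1": 2, "P2": 0}
m1 = fire(m0, inputs={"P1": 2}, outputs={"P2": 3})
print(m1)  # {'P1': 0, 'P2': 3} -- two tokens consumed, three produced
```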
Fig 2.29
When arcs have different weights, we have what might at first seem confusing behaviour. Here is a similar net, ready
to fire:
Fig 2.30
and here it is after firing:
Fig 2.31
When a transition fires, it takes the tokens that enabled it from the input places; it then distributes tokens
to output places according to arc weights. If the arc weights are all the same, it appears that tokens are
moved across the transition. If they differ, however, it appears that tokens may disappear or be created.
That, in fact, is what happens; think of the transition as removing its enabling tokens and producing output
tokens according to arc weight.
A special kind of arc, the inhibitor arc, is used to reverse the logic of an input place. With an inhibitor arc,
the absence of a token in the input place enables, not the presence:
Fig 2.32
This transition cannot fire, because the token in P2 inhibits it.
Tokens can play the following roles:
• A physical object: a robot;
• An information object: a message between two robots;
• A collection of objects: the people mover;
• An indicator of a state: the state a robot is in, e.g. defender/attacker;
• An indicator of a condition: a token indicates whether a certain condition is fulfilled (e.g. the soccer game starts when the referee gives the signal).
Transitions can play the following roles:
• An event: start a thread, the switching of a machine from normal to safe mode;
• A transformation of an object: a robot that changes his role, see further;
• A transport of an object: the ball is passed between the robots.
An arc connects only places and transitions and indicates the direction in which the token travels.
ARP:
ARP, or Address Resolution Protocol, is a simple communications protocol used primarily today in IP and Ethernet networks. Its main purpose is to discover and associate IP addresses with physical MAC hardware addresses. ARP is used to find the MAC address of a device on a network using only its IP address. The ARP protocol broadcasts a request to the network asking for the MAC address of the destination IP address. The machine with that IP address responds with its MAC address. Communication then drops to the link layer for physical-to-physical data transfer between computers.
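The request/reply/cache cycle can be sketched as a toy simulation; the IP and MAC addresses below are illustrative, and real ARP of course operates on Ethernet frames, not Python dictionaries:

```python
# Every host "hears" the broadcast; only the owner of the IP answers.
hosts = {"192.168.1.5": "aa:bb:cc:00:00:05",
         "192.168.1.7": "aa:bb:cc:00:00:07"}
arp_cache = {}                        # requester's ARP cache

def arp_request(target_ip):
    mac = hosts.get(target_ip)        # broadcast: "who has target_ip?"
    if mac is not None:
        arp_cache[target_ip] = mac    # cache the reply for later frames
    return mac

print(arp_request("192.168.1.7"))  # aa:bb:cc:00:00:07
print(arp_cache)                   # the mapping is now cached
```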
RARP:
RARP (Reverse ARP) is a legacy protocol that has since been replaced by BOOTP and later by DHCP. Its purpose was to let diskless workstations (i.e., machines with no ability to store an IP address) discover their own IP address based on their MAC address. At boot, the workstation would broadcast a request for its IP, and a RARP server would respond with the appropriate address. For example:
RARP Request: What is my IP address (MAC address is within Ethernet header)?
RARP Response: Your IP address is 192.168.1.11.
The main problems with RARP were:
• The RARP server needed to be populated with the MAC-to-IP mappings.
• No additional data (DNS, NTP) could be sent other than the IP address.
In more advanced networking situations you may run across something known as Gratuitous ARP (GARP).
A gratuitous ARP is something often performed by a computer when it first boots up. When a NIC is first powered on, it performs what's known as a gratuitous ARP, automatically announcing its MAC address to the entire network. This allows switches to learn the location of the physical device, and DHCP servers to know where to send an IP address if one is needed and requested. Gratuitous ARP is also used by many high-availability routing and load-balancing devices. Routers or load balancers are often configured in an HA (high-availability) pair to provide optimum reliability and maximum uptime. Usually these devices are configured in an Active/Standby pair: one device is active while the second sleeps, waiting for the active device to fail. Think of it as an understudy for the lead role in a movie: if the leading lady gets sick, the understudy gladly and quickly takes her place in the limelight.
When a failure occurs, the standby device asserts itself as the new active device and issues a gratuitous ARP out to the network, instructing all other devices to send traffic to its MAC address instead of the failed device's.
IPV4 ADDRESSES:
An IPv4 address is a 32-bit address that uniquely and universally defines the connection of a host
or a router to the Internet. IPv4 addresses are unique in the sense that each address defines one,
and only one, connection to the Internet.
IPv4 addresses are universal in the sense that the addressing system must be accepted by any host
that wants to be connected to the Internet.
Address Space
An address space is the total number of addresses used by the protocol. If a protocol uses b bits to define an address, the address space is 2^b, because each bit can have two different values (0 or 1). IPv4 uses 32-bit addresses, which means that the address space is 2^32, or 4,294,967,296 (more than four billion). If there were no restrictions, more than 4 billion devices could be connected to the Internet.
Notation
There are three common notations to show an IPv4 address: binary notation (base 2), dotted-
decimal notation (base 256), and hexadecimal notation (base 16).
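The three notations are different renderings of the same 32-bit number, which the standard library makes easy to show; the address below is illustrative:

```python
import ipaddress

addr = ipaddress.IPv4Address("192.168.1.1")   # dotted-decimal (base 256)
as_int = int(addr)                            # the underlying 32-bit value
binary = format(as_int, "032b")               # binary notation (base 2)
hexa = format(as_int, "08X")                  # hexadecimal notation (base 16)
print(binary)  # 11000000101010000000000100000001
print(hexa)    # C0A80101
```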
Hierarchy in Addressing
A 32-bit IPv4 address is also hierarchical, but divided only into two parts. The first part of the
address, called the prefix, defines the network; the second part of the address, called the suffix,
defines the node (connection of a device to the Internet). The prefix length is n bits and the suffix
length is (32- n) bits.
Classful Addressing:
Address Depletion:
The reason that classful addressing has become obsolete is address depletion. To understand the
problem, let us think about class A. This class can be assigned to only 128 organizations in the
world, but each organization needs to have a single network with 16,777,216 nodes. Class B
addresses were designed for midsize organizations, but many of the addresses in this class also
remained unused. Class C addresses have a completely different flaw in design. The number of
addresses that can be used in each network (256) was so small that most companies were not
comfortable using a block in this address class.
In subnetting, a class A or class B block is divided into several subnets. Each subnet has a larger
prefix length than the original network. For example, if a network in class A is divided into four
subnets, each subnet has a prefix of nsub = 10. At the same time, if all of the addresses in a network
are not used, subnetting allows the addresses to be divided among several organizations.
While subnetting was devised to divide a large block into smaller ones, supernetting was devised
to combine several class C blocks into a larger block to be attractive to organizations that need
more than the 256 addresses available in a class C block. This idea did not work either because it
makes the routing of packets more difficult.
Given an address, we can easily find the class of the address and, since the prefix length for each
class is fixed, we can find the prefix length immediately. In other words, the prefix length in
classful addressing is inherent in the address; no extra information is needed to extract the prefix
and the suffix.
Classless Addressing:
In 1996, the Internet authorities announced a new architecture called classless addressing. In
classless addressing, variable-length blocks are used that belong to no classes. We can have a
block of 1 address, 2 addresses, 4 addresses, 128 addresses, and so on. In classless addressing, the
whole address space is divided into variable length blocks. The prefix in an address defines the
block (network); the suffix defines the node (device). Theoretically, we can have a block of 20, 21,
22, . . . , 232 addresses.
Unlike classful addressing, the prefix length in classless addressing is variable. We can have a
prefix length that ranges from 0 to 32. The size of the network is inversely proportional to the
length of the prefix. A small prefix means a larger network; a large prefix means a smaller
network. The idea of classless addressing can easily be applied to classful addressing. An address
in class A can be thought of as a classless address in which the prefix length is 8. An address in
class B can be thought of as a classless address in which the prefix is 16, and so on. In other words,
classful addressing is a special case of classless addressing.
In this case, the prefix length, n, is added to the address, separated by a slash. The notation is
informally referred to as slash notation and formally as classless interdomain routing or CIDR
(pronounced cider) strategy.
Extracting Information from an Address
An address in CIDR notation gives three pieces of information about the block to which it belongs: the number of addresses, the first address in the block, and the last address.
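The standard library extracts all three directly; 167.199.170.82/27 is an illustrative address inside a /27 block:

```python
import ipaddress

# strict=False lets us pass a host address rather than the block's first address
net = ipaddress.ip_network("167.199.170.82/27", strict=False)
print(net.num_addresses)        # 32 addresses in the block (2^(32-27))
print(net.network_address)      # first address: 167.199.170.64
print(net.broadcast_address)    # last address:  167.199.170.95
```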
Subnetting
An organization (or an ISP) that is granted a range of addresses may divide the range into several
subranges and assign each subrange to a subnetwork (or subnet). Note that nothing stops the
organization from creating more levels. A subnetwork can be divided into several sub-
subnetworks. A sub-subnetwork can be divided into several sub-sub-subnetworks, and so on.
Designing Subnets
We assume the total number of addresses granted to the organization is N, the prefix length is n,
the assigned number of addresses to each subnetwork is Nsub, and the prefix length for each
subnetwork is nsub. Then the following steps need to be carefully followed to guarantee the proper
operation of the subnetworks.
❑ The prefix length for each subnetwork should satisfy n_sub = 32 − log2(N_sub).
❑ The starting address in each subnetwork should be divisible by the number of addresses
in that subnetwork. This can be achieved if we first assign addresses to the larger subnetworks.
Example
An organization is granted a block of addresses with the beginning address 14.24.74.0/24. The
organization needs to have 3 subblocks of addresses to use in its three subnets: one subblock of 10
addresses, one subblock of 60 addresses, and one subblock of 120 addresses. Design the
subblocks.
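A worked solution sketch for this example, following the steps above: round each request up to a power of two, assign the largest subblock first so every starting address is divisible by its block size, and compute each prefix as n_sub = 32 − log2(N_sub):

```python
import math
import ipaddress

base = ipaddress.ip_network("14.24.74.0/24")
needs = sorted([10, 60, 120], reverse=True)    # assign the largest subblock first

start = int(base.network_address)
subnets = []
for n in needs:
    size = 2 ** math.ceil(math.log2(n))        # round the request up to a power of two
    prefix = 32 - int(math.log2(size))         # n_sub = 32 - log2(N_sub)
    subnets.append(ipaddress.ip_network((start, prefix)))
    start += size                              # next subblock begins right after

for s in subnets:
    print(s)
# 14.24.74.0/25  (128 addresses for the 120-address subblock)
# 14.24.74.128/26 (64 addresses for the 60-address subblock)
# 14.24.74.192/28 (16 addresses for the 10-address subblock)
```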
Network Address Translation (NAT)
A technology that can provide the mapping between the private and universal addresses, and at
the same time support virtual private networks, is Network Address Translation (NAT). The
technology allows a site to use a set of private addresses for internal communication and a set of
global Internet addresses (at least one) for communication with the rest of the world.
IPv6 ADDRESSING:
The main reason for migration from IPv4 to IPv6 is the small size of the address space in
IPv4. An IPv6 address is 128 bits or 16 bytes (octets) long, four times the address length in IPv4.
Representation
The following shows two of these notations: binary and colon hexadecimal.
Abbreviation
Although the IP address, even in hexadecimal format, is very long, many of the digits are
zeros. In this case, we can abbreviate the address. The leading zeros of a section (four
digits between two colons) can be omitted. Only the leading zeros can be dropped, not
the trailing zeros.
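As a quick check of the abbreviation rules, Python's standard ipaddress module can produce both forms of an address. Note that the compressed form also applies zero compression: one run of consecutive zero sections is replaced by a double colon (::), in addition to dropping leading zeros:

```python
import ipaddress

# compressed (abbreviated) vs. full colon-hexadecimal forms of one address
addr = ipaddress.IPv6Address("FDEC:0074:0000:0000:0000:B0FF:0000:FFF0")
print(addr.compressed)   # leading zeros and one run of zero sections dropped
print(addr.exploded)     # the full form can always be restored
```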
Congestion Control
Congestion control refers to techniques and mechanisms that can either prevent congestion
before it happens or remove congestion after it has happened. In general, we can divide
congestion control mechanisms into two broad categories: open-loop congestion control
(prevention) and closed-loop congestion control (removal).
Window Policy The type of window at the sender may also affect congestion. The Selective Repeat
window is better than the Go-Back-N window for congestion control.
Acknowledgment Policy The acknowledgment policy imposed by the receiver may also affect
congestion. If the receiver does not acknowledge every packet it receives, it may slow down the
sender and help prevent congestion. A receiver may send an acknowledgment only if it has a
packet to be sent or a special timer expires. A receiver may decide to acknowledge only N packets
at a time. Note that acknowledgments are also part of the load in a network; sending fewer
acknowledgments imposes less load on the network.
Discarding Policy A good discarding policy by the routers may prevent congestion and at the
same time may not harm the integrity of the transmission. For example, in audio transmission, if
the policy is to discard less sensitive packets when congestion is likely to happen, the quality of
sound is still preserved and congestion is prevented or alleviated.
Admission Policy An admission policy, which is a quality-of-service mechanism , can also prevent
congestion in virtual-circuit networks. Switches in a flow first check the resource requirement of a
flow before admitting it to the network. A router can deny establishing a virtual-circuit connection
if there is congestion in the network or if there is a possibility of future congestion.
Implicit Signaling In implicit signaling, there is no communication between the congested node
or nodes and the source. The source guesses that there is congestion somewhere in the network
from other symptoms. For example, when a source sends several packets and there is no
acknowledgment for a while, one assumption is that the network is congested. The delay in
receiving an acknowledgment is interpreted as congestion in the network; the source should slow
down.
Explicit Signaling The node that experiences congestion can explicitly send a signal to the source
or destination. The explicit-signaling method, however, is different from the choke-packet method.
In the choke-packet method, a separate packet is used for this purpose; in the explicit-signaling
method, the signal is included in the packets that carry data. Explicit signaling can occur in either
the forward or the backward direction. This type of congestion control can be seen in an ATM
network.
ADDRESS MAPPING
A physical address is a local address. It is called a physical address because it is usually (but not
always) implemented in hardware. An example of a physical address is the 48-bit MAC address in
the Ethernet protocol, which is imprinted on the NIC installed in the host or router. The physical
address and the logical address are two different identifiers.
Suppose a system (A) has a packet that needs to be delivered to another system (B) with IP
address 141.23.56.23. System A needs to pass the packet to its data link layer for the actual
delivery, but it does not know the physical address of the recipient. It uses the services of ARP by
asking the ARP protocol to send a broadcast ARP request packet to ask for the physical address of
a system with an IP address of 141.23.56.23.
This packet is received by every system on the physical network, but only system B will answer it,
as shown in Figure 21.1 b. System B sends an ARP reply packet that includes its physical address.
Now system A can send all the packets it has for this destination by using the physical address it
received.
Four cases using ARP: The following are four different cases in which the services of ARP can
be used
1. The sender is a host and wants to send a packet to another host on the same network. In this
case, the logical address that must be mapped to a physical address is the destination IP address in
the datagram header.
2. The sender is a host and wants to send a packet to another host on another network. In this
case, the host looks at its routing table and finds the IP address of the next hop (router) for this
destination. If it does not have a routing table, it looks for the IP address of the default router. The
IP address of the router becomes the logical address that must be mapped to a physical address.
3. The sender is a router that has received a datagram destined for a host on another network. It
checks its routing table and finds the IP address of the next router. The IP address of the next
router becomes the logical address that must be mapped to a physical address.
4. The sender is a router that has received a datagram destined for a host on the same network.
The destination IP address of the datagram becomes the logical address that must be mapped to a
physical address.
An ARP request is broadcast; an ARP reply is unicast.
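The four cases reduce to one decision: which IP address must be handed to ARP for resolution? A minimal sketch of that decision for the two host cases (the helper names and addresses are hypothetical; Python's ipaddress module does the network test):

```python
import ipaddress

def arp_target(dest_ip, my_net, routing_table, default_router):
    """Return the IP whose physical address must be resolved:
    the destination itself if it is on the local network (cases 1 and 4),
    otherwise the next-hop router from the routing table, or the
    default router if no entry exists (cases 2 and 3)."""
    dest = ipaddress.ip_address(dest_ip)
    if dest in ipaddress.ip_network(my_net):
        return dest_ip
    return routing_table.get(dest_ip, default_router)

table = {"141.23.56.23": "10.0.0.1"}   # hypothetical next-hop entries
print(arp_target("10.0.0.7", "10.0.0.0/24", table, "10.0.0.1"))      # local host
print(arp_target("141.23.56.23", "10.0.0.0/24", table, "10.0.0.1"))  # via router
```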
There are occasions in which a host knows its physical address but needs to find its logical
address, for example:
1. A diskless station is just booted. The station can find its physical address by checking its
interface, but it does not know its IP address.
2. An organization does not have enough IP addresses to assign to each station; it needs to
assign IP addresses on demand. The station can send its physical address and ask for a
short time lease.
RARP
Reverse Address Resolution Protocol (RARP) finds the logical address for a machine that knows
only its physical address. The machine can get its physical address (by reading its NIC, for
example), which is unique locally. It can then use the physical address to get the logical address by
using the RARP protocol. A RARP request is created and broadcast on the local network. Another
machine on the local network that knows all the IP addresses will respond with a RARP reply. The
requesting machine must be running a RARP client program; the responding machine must be
running a RARP server program.
There is a serious problem with RARP: Broadcasting is done at the data link layer. The physical
broadcast address, all 1s in the case of Ethernet, does not pass the boundaries of a network. This
means that an administrator with several networks or several subnets needs to assign a RARP
server for each network or subnet. This is the reason that RARP is almost obsolete. Two protocols,
BOOTP and DHCP, are replacing RARP.
BOOTP
The Bootstrap Protocol (BOOTP) is a client/server protocol designed to provide physical address
to logical address mapping. BOOTP is an application layer protocol. The administrator may put the
client and the server on the same network or on different networks.
How a client can send an IP datagram when it knows neither its own IP address (the source
address) nor the server's IP address (the destination address). The client simply uses all 0s as the
source address and all 1s as the destination address.
One of the advantages of BOOTP over RARP is that the client and server are application-
layer processes. As in other application-layer processes, a client can be in one network and
the server in another, separated by several other networks. However, there is one problem
that must be solved. The BOOTP request is broadcast because the client does not know the
IP address of the server. A broadcast IP datagram cannot pass through any router. To solve
the problem, there is a need for an intermediary.
One of the hosts (or a router that can be configured to operate at the application layer) can
be used as a relay. The host in this case is called a relay agent. The relay agent knows the
unicast address of a BOOTP server. When it receives this type of packet, it encapsulates the
message in a unicast datagram and sends the request to the BOOTP server.
The packet, carrying a unicast destination address, is routed by any router and reaches the
BOOTP server. The BOOTP server knows the message comes from a relay agent because
one of the fields in the request message defines the IP address of the relay agent. The relay
agent, after receiving the reply, sends it to the BOOTP client.
DHCP
BOOTP is not a dynamic configuration protocol. When a client requests its IP address, the
BOOTP server consults a table that matches the physical address of the client with its IP
address. This implies that the binding between the physical address and the IP address of
the client already exists. The binding is predetermined.
However, what if a host moves from one physical network to another? What if a host wants
a temporary IP address? BOOTP cannot handle these situations because the binding
between the physical and IP addresses is static and fixed in a table until changed by the
administrator. BOOTP is a static configuration protocol.
The Dynamic Host Configuration Protocol (DHCP) has been devised to provide static and
dynamic address allocation that can be manual or automatic.
Static Address Allocation In this capacity DHCP acts as BOOTP does. It is backward
compatible with BOOTP, which means a host running the BOOTP client can request a static
address from a DHCP server. A DHCP server has a database that statically binds physical
addresses to IP addresses.
Dynamic Address Allocation DHCP has a second database with a pool of available IP
addresses. This second database makes DHCP dynamic. When a DHCP client requests a
temporary IP address, the DHCP server goes to the pool of available (unused) IP addresses
and assigns an IP address for a negotiable period of time.
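A toy model of the two databases can make the static/dynamic distinction concrete (all names, addresses, and lease values here are hypothetical):

```python
class DhcpServer:
    """Toy model: a static table (BOOTP-style bindings) plus a dynamic
    pool of unused addresses leased for a negotiable time."""
    def __init__(self, static_table, pool):
        self.static = static_table
        self.pool = list(pool)
        self.leases = {}                       # physical addr -> (ip, lease s)

    def request(self, mac, lease=3600):
        if mac in self.static:                 # static allocation
            return self.static[mac]
        ip = self.pool.pop(0)                  # dynamic allocation from pool
        self.leases[mac] = (ip, lease)
        return ip

    def release(self, mac):
        ip, _ = self.leases.pop(mac)
        self.pool.append(ip)                   # address returns to the pool

srv = DhcpServer({"aa:bb:cc:00:00:01": "192.168.1.10"},
                 ["192.168.1.100", "192.168.1.101"])
print(srv.request("aa:bb:cc:00:00:01"))   # static binding, BOOTP-compatible
print(srv.request("aa:bb:cc:00:00:02"))   # temporary address from the pool
```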
Routing:
The goal of the network layer is to deliver a datagram from its source to its destination or
destinations. If a datagram is destined for only one destination (one-to-one delivery), we have
unicast routing. If the datagram is destined for several destinations (one-to-many delivery), we
have multicast routing.
A routing table can be either static or dynamic. A static table is one with manual entries.
A dynamic table, on the other hand, is one that is updated automatically when there is a change
somewhere in the internet. The tables need to be updated as soon as there is a change in the
internet. For instance, they need to be updated when a router is down, and they need to be
updated whenever a better route has been found.
Initialization
The tables in Figure 22.14 are stable; each node knows how to reach any other node and the cost.
At the beginning, however, this is not the case. Each node can know only the distance between
itself and its immediate neighbors, those directly connected to it. So for the moment, we assume
that each node can send a message to the immediate neighbors and find the distance between
itself and these neighbors. The distance for any entry that is not a neighbor is marked as infinite
(unreachable).
Sharing
The whole idea of distance vector routing is the sharing of information between neighbors.
Although node A does not know about node E, node C does. So if node C shares its routing table
with A, node A can also know how to reach node E. On the other hand, node C does not know how
to reach node D, but node A does. If node A shares its routing table with node C, node C also knows
how to reach node D. In other words, nodes A and C, as immediate neighbors, can improve their
routing tables if they help each other.
In other words, sharing here means sending only the first two columns of the routing table
(destination and cost), not the next-hop column.
In distance vector routing, each node shares its routing table with its immediate neighbors
periodically and when there is a change.
Updating
When a node receives a two-column table from a neighbor, it needs to update its routing table.
When to Share
The question now is, When does a node send its partial routing table (only two columns) to all its
immediate neighbors? The table is sent both periodically and when there is a change in the table.
Periodic Update A node sends its routing table, normally every 30 s, in a periodic update. The
period depends on the protocol that is using distance vector routing.
Triggered Update A node sends its two-column routing table to its neighbors anytime there is a
change in its routing table. This is called a triggered update. The change can result from the
following.
1. A node receives a table from a neighbor, resulting in changes in its own table after
updating.
2. A node detects some failure in the neighboring links which results in a distance change to
infinity.
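The updating rule itself is small: for each destination in the received two-column table, the node keeps the route through the neighbor if it is cheaper. A sketch under simplified assumptions (Python dictionaries in place of real tables):

```python
INF = float("inf")

def dv_update(my_table, neighbor, cost_to_neighbor, neighbor_table):
    """Merge a neighbor's two-column table {destination: cost} into ours:
    new cost = cost to the neighbor + neighbor's cost to the destination."""
    changed = False
    for dest, cost in neighbor_table.items():
        candidate = cost_to_neighbor + cost
        if candidate < my_table.get(dest, (INF, None))[0]:
            my_table[dest] = (candidate, neighbor)   # route via the neighbor
            changed = True
    return changed        # True would fire a triggered update

A = {"A": (0, None), "C": (2, "C")}     # A's table: dest -> (cost, next hop)
C_table = {"A": 2, "C": 0, "E": 4}      # two-column table received from C
dv_update(A, "C", 2, C_table)
print(A["E"])                           # A learns how to reach E through C
```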
Count to Infinity:
A problem with distance-vector routing is that any decrease in cost (good news) propagates
quickly, but any increase in cost (bad news) will propagate slowly. For a routing protocol to work
properly, if a link is broken (cost becomes infinity), every other router should be aware of it
immediately, but in distance-vector routing, this takes some time. The problem is referred to as
count to infinity.
Split Horizon
One solution to instability is called split horizon. In this strategy, instead of flooding the table
through each interface, each node sends only part of its table through each interface. If, according
to its table, node B thinks that the optimum route to reach X is via A, it does not need to advertise
this piece of information to A; the information has come from A (A already knows). Taking
information from node A, modifying it, and sending it back to node A is what creates the confusion.
In our scenario, node B eliminates the last line of its forwarding table before it sends it to A. In this
case, node A keeps the value of infinity as the distance to X. Later, when node A sends its
forwarding table to B, node B also corrects its forwarding table. The system becomes stable after
the first update: both node A and node B know that X is not reachable.
Poison Reverse
Using the split-horizon strategy has one drawback. Normally, the corresponding protocol uses a
timer, and if there is no news about a route, the node deletes the route from its table. When node B
in the previous scenario eliminates the route to X from its advertisement to A, node A cannot
guess whether this is due to the split-horizon strategy (the source of information was A) or
because B has not received any news about X recently.
In the poison reverse strategy B can still advertise the value for X, but if the source of information
is A, it can replace the distance with infinity as a warning: “Do not use this value; what I know
about this route comes from you.”
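Both strategies only change what a node advertises to a particular neighbor. A minimal sketch (the table layout is hypothetical) of building that per-neighbor advertisement:

```python
INF = float("inf")

def advertisement(table, to_neighbor, poison_reverse=True):
    """Build the two-column table sent to one neighbor. Split horizon
    drops routes learned from that neighbor; poison reverse keeps them
    but advertises an infinite distance as a warning."""
    adv = {}
    for dest, (cost, next_hop) in table.items():
        if next_hop == to_neighbor:
            if poison_reverse:
                adv[dest] = INF   # "what I know about this route comes from you"
            # split horizon alone: omit the entry entirely
        else:
            adv[dest] = cost
    return adv

B = {"X": (5, "A"), "Y": (1, None)}
print(advertisement(B, "A"))                        # X poisoned with infinity
print(advertisement(B, "A", poison_reverse=False))  # X omitted (split horizon)
```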
Link-State Routing:
In the link-state routing algorithm, the cost associated with an edge defines the state of the link. Links
with lower costs are preferred to links with higher costs; if the cost of a link is infinity, it means
that the link does not exist or has been broken.
Now the question is how each node can create this LSDB that contains information about the
whole internet. This can be done by a process called flooding. Each node can send some greeting
messages to all its immediate neighbors (those nodes to which it is connected directly) to collect
two pieces of information for each neighboring node: the identity of the node and the cost of the
link. The combination of these two pieces of information is called the LS packet (LSP); the LSP is
sent out of each interface. When a node receives an LSP from one of its interfaces, it compares the
LSP with the copy it may already have. If the newly arrived LSP is older than the one it has (found
by checking the sequence number), it discards the LSP. If it is newer or the first one received, the
node discards the old LSP (if there is one) and keeps the received one. It then sends a copy of it out
of each interface except the one from which the packet arrived. This guarantees that flooding stops
somewhere in the network (where a node has only one interface).
In other words, a node can make the whole map if it needs to, using this LSDB.
In the distance-vector routing algorithm, each router tells its neighbors what it knows
about the whole internet; in the link-state routing algorithm, each router tells the whole
internet what it knows about its neighbors.
To build its shortest-path tree from the LSDB, each node runs Dijkstra's algorithm:
1. The node chooses itself as the root of the tree, creating a tree with a single node, and
sets the total cost of each node based on the information in the LSDB.
2. The node selects one node, among all nodes not in the tree, which is closest to the root,
and adds this to the tree. After this node is added to the tree, the cost of all other nodes
not in the tree needs to be updated because the paths may have been changed.
3. The node repeats step 2 until all nodes are added to the tree.
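These three steps can be sketched compactly with a priority queue; the LSDB here is a hypothetical adjacency map of link costs:

```python
import heapq

def dijkstra(lsdb, root):
    """Shortest-path tree from the LSDB, following the three steps above."""
    dist = {root: 0}
    prev = {}
    heap = [(0, root)]
    in_tree = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in in_tree:
            continue
        in_tree.add(node)                       # add closest node to the tree
        for nbr, cost in lsdb[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd                  # update nodes not yet in tree
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    return dist, prev

lsdb = {"A": {"B": 2, "D": 3}, "B": {"A": 2, "C": 5, "E": 4},
        "C": {"B": 5, "E": 4}, "D": {"A": 3, "E": 5},
        "E": {"B": 4, "C": 4, "D": 5}}
dist, _ = dijkstra(lsdb, "A")
print(dist)   # least cost from A to every other node
```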
TECHNIQUES TO IMPROVE QoS
Some techniques can be used to improve the quality of service. Four common methods are
discussed here: scheduling, traffic shaping, admission control, and resource reservation.
Scheduling
Packets from different flows arrive at a switch or router for processing. A good scheduling
technique treats the different flows in a fair and appropriate manner. Several scheduling
techniques are designed to improve the quality of service.
FIFO Queuing
In first-in, first-out (FIFO) queuing, packets wait in a buffer (queue) until the node (router
or switch) is ready to process them. If the average arrival rate is higher than the average
processing rate, the queue will fill up and new packets will be discarded. A FIFO queue is
familiar to those who have had to wait for a bus at a bus stop.
Priority Queuing
In priority queuing, packets are first assigned to a priority class. Each priority class has its own
queue. The packets in the highest-priority queue are processed first. Packets in the lowest-priority
queue are processed last. Note that the system does not stop serving a queue until it is empty.
A priority queue can provide better QoS than the FIFO queue because higher priority traffic, such
as multimedia, can reach the destination with less delay. However, there is a potential drawback. If
there is a continuous flow in a high-priority queue, the packets in the lower-priority queues will
never have a chance to be processed. This is a condition called starvation.
Traffic Shaping
Traffic shaping is a mechanism to control the amount and the rate of the traffic sent to the
network. Two techniques can shape traffic: leaky bucket and token bucket.
Leaky Bucket
If a bucket has a small hole at the bottom, the water leaks from the bucket at a constant rate
as long as there is water in the bucket. The rate at which the water leaks does not depend
on the rate at which the water is input to the bucket unless the bucket is empty. The input
rate can vary, but the output rate remains constant. Similarly, in networking, a technique
called leaky bucket can smooth out bursty traffic. Bursty chunks are stored in the bucket
and sent out at an average rate.
A leaky bucket algorithm shapes bursty traffic into fixed-rate traffic by averaging the
data rate. It may drop the packets if the bucket is full.
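A discrete-time sketch of the idea (packet counts per tick; the parameters are made up for illustration): bursty arrivals are queued, the excess is dropped when the bucket is full, and the output never exceeds a fixed rate:

```python
from collections import deque

def leaky_bucket(arrivals, capacity, out_rate):
    """Per tick: queue arriving packets (drop when the bucket is full),
    then release at most out_rate packets - a constant output rate."""
    bucket = deque()
    sent, dropped = [], 0
    for burst in arrivals:
        for pkt in burst:
            if len(bucket) < capacity:
                bucket.append(pkt)
            else:
                dropped += 1                   # bucket full: packet is lost
        sent.append([bucket.popleft()
                     for _ in range(min(out_rate, len(bucket)))])
    return sent, dropped

# a burst of 5 packets followed by silence, smoothed to 2 packets per tick
out, lost = leaky_bucket([[1, 2, 3, 4, 5], [], []], capacity=10, out_rate=2)
print(out, lost)
```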
Token Bucket
The leaky bucket is very restrictive. It does not credit an idle host. For example, if a host is
not sending for a while, its bucket becomes empty. Now if the host has bursty data, the
leaky bucket allows only an average rate. The time when the host was idle is not taken into
account. On the other hand, the token bucket algorithm allows idle hosts to accumulate
credit for the future in the form of tokens.
Assume the capacity of the bucket is c tokens and tokens enter the bucket at the rate of r
tokens per second. The system removes one token for every cell of data sent. The maximum
number of cells that can enter the network during any time interval of length t is
Maximum number of cells = c + r × t
The maximum average rate for the token bucket is
Maximum average rate = (c + r × t)/t cells per second
This means that the token bucket limits the average packet rate to the network.
Example 30.2
Let’s assume that the bucket capacity is 10,000 tokens and tokens are added at the rate of 1000
tokens per second. If the system is idle for 10 seconds (or more), the bucket collects 10,000 tokens
and becomes full. Any additional tokens will be discarded. The maximum average rate is shown
below.
Maximum average rate = (1000t + 10,000)/t
The token bucket can easily be implemented with a counter. The counter is initialized to
zero. Each time a token is added, the counter is incremented by 1. Each time a unit of data
is sent, the counter is decremented by 1. When the counter is zero, the host cannot send
data.
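The counter description translates directly into code. The sketch below reuses the numbers of Example 30.2 (c = 10,000 tokens, r = 1000 tokens per second):

```python
class TokenBucket:
    """Counter implementation: tokens accumulate at rate r (capped at c);
    one token is removed per unit of data sent; sending stops at zero."""
    def __init__(self, c, r):
        self.c, self.r = c, r
        self.tokens = 0

    def tick(self, seconds=1):
        self.tokens = min(self.c, self.tokens + self.r * seconds)

    def send(self, units):
        allowed = min(units, self.tokens)   # burst limited by saved credit
        self.tokens -= allowed
        return allowed

tb = TokenBucket(c=10_000, r=1_000)
tb.tick(10)                 # idle for 10 s: the bucket fills to capacity
print(tb.send(12_000))      # at most 10,000 cells may enter at once
```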
User Datagram
UDP packets, called user datagrams, have a fixed-size header of 8 bytes made of four fields, each
of 2 bytes (16 bits). The first two fields define the source and destination port numbers. The third
field defines the total length of the user datagram, header plus data. The 16 bits can define a total
length of 0 to 65,535 bytes. However, the total length needs to be less because a UDP user
datagram is stored in an IP datagram with the total length of 65,535 bytes. The last field can carry
the optional checksum.
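The fixed 8-byte header can be built with Python's struct module in network byte order (the port numbers and payload below are made up; the checksum is left 0, i.e. unused):

```python
import struct

def udp_header(src_port, dst_port, payload, checksum=0):
    """Pack the four 16-bit fields: source port, destination port,
    total length (header + data), and optional checksum."""
    length = 8 + len(payload)                  # fixed 8-byte header
    return struct.pack("!HHHH", src_port, dst_port, length, checksum) + payload

dgram = udp_header(52010, 53, b"query")
print(len(dgram))                              # 8-byte header + 5 data bytes
print(struct.unpack("!HHHH", dgram[:8]))       # the four header fields
```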
UDP Services:
Process-to-Process Communication
UDP provides process-to-process communication using socket addresses, a combination of IP
addresses and port numbers.
Connectionless Services
UDP provides a connectionless service. This means that each user datagram sent by UDP is an
independent datagram. There is no relationship between the different user datagrams even if they
are coming from the same source process and going to the same destination program. The user
datagrams are not numbered. Also, unlike TCP, there is no connection establishment and no
connection termination. This means that each user datagram can travel on a different path.
Flow Control
UDP is a very simple protocol. There is no flow control, and hence no window mechanism. The
receiver may overflow with incoming messages. The lack of flow control means that the process
using UDP should provide for this service, if needed.
Error Control
There is no error control mechanism in UDP except for the checksum. This means that the sender
does not know if a message has been lost or duplicated. When the receiver detects an error
through the checksum, the user datagram is silently discarded. The lack of error control means
that the process using UDP should provide for this service, if needed.
Congestion Control
Since UDP is a connectionless protocol, it does not provide congestion control. UDP assumes that
the packets sent are small and sporadic and cannot create congestion in the network.
TCP Services:
Process-to-Process Communication
As with UDP, TCP provides process-to-process communication using port numbers.
Stream Delivery Service
TCP, unlike UDP, is a stream-oriented protocol. It allows the sending process
to deliver data as a stream of bytes and allows the receiving process to obtain data as a stream of
bytes. TCP creates an environment in which the two processes seem to be connected by an
imaginary “tube” that carries their bytes across the Internet. The sending process produces
(writes to) the stream and the receiving process consumes (reads from) it.
Sending and Receiving Buffers
Because the sending and the receiving processes may not necessarily write or read data at the
same rate, TCP needs buffers for storage. There are two buffers, the sending buffer and the
receiving buffer, one for each direction. These buffers are also necessary for the flow- and
error-control mechanisms used by TCP. One way to implement a buffer is to use a circular array
of 1-byte locations, as shown in the figure.
At the sender, the buffer has three types of chambers. The white section contains empty chambers
that can be filled by the sending process (producer). The colored area holds bytes that have been
sent but not yet acknowledged. The TCP sender keeps these bytes in the buffer until it receives an
acknowledgment. The shaded area contains bytes to be sent by the sending TCP.
The operation of the buffer at the receiver is simpler. The circular buffer is divided into two areas
(shown as white and colored). The white area contains empty chambers to be filled by bytes
received from the network. The colored sections contain received bytes that can be read by the
receiving process. When a byte is read by the receiving process, the chamber is recycled and added
to the pool of empty chambers.
Segments:
The network layer, as a service provider for TCP, needs to send data in packets, not as a stream of
bytes. At the transport layer, TCP groups a number of bytes together into a packet called a segment.
TCP adds a header to each segment (for control purposes) and delivers the segment to the
network layer for transmission. The segments are encapsulated in an IP datagram and
transmitted. This entire operation is transparent to the receiving process.
Full-Duplex Communication
TCP offers full-duplex service, in which data can flow in both directions at the same time. Each TCP
endpoint then has its own sending and receiving buffer, and segments move in both directions.
Connection-Oriented Service
TCP, unlike UDP, is a connection-oriented protocol. When a process at site A wants to send to and
receive data from another process at site B, the following three phases occur: the two TCPs
establish a connection between them, data are exchanged in both directions, and the connection
is terminated.
Reliable Service
TCP is a reliable transport protocol. It uses an acknowledgment mechanism to check the safe and
sound arrival of data.
TCP Segment
A packet in TCP is called a segment.
Source port address. This is a 16-bit field that defines the port number of the application program
in the host that is sending the segment.
Destination port address. This is a 16-bit field that defines the port number of the application
program in the host that is receiving the segment.
Sequence number. This 32-bit field defines the number assigned to the first byte of data contained
in this segment. TCP is a stream transport protocol. To ensure connectivity, each byte to be
transmitted is numbered. The sequence number tells the destination which byte in this sequence
is the first byte in the segment. During connection establishment (discussed later) each party uses
a random number generator to create an initial sequence number (ISN), which is usually
different in each direction.
Acknowledgment number. This 32-bit field defines the byte number that the receiver of the
segment is expecting to receive from the other party. If the receiver of the segment has successfully
received byte number x from the other party, it returns x+1 as the acknowledgment number.
Acknowledgment and data can be piggybacked together.
Header length. This 4-bit field indicates the number of 4-byte words in the TCP header. The length
of the header can be between 20 and 60 bytes. Therefore, the value of this field is always between
5 (5 × 4 = 20) and 15 (15 × 4 = 60).
Control. This field defines 6 different control bits or flags, as shown in Figure 24.8. One or more of
these bits can be set at a time.
Window size. This field defines the window size of the sending TCP in bytes. Note that the length
of this field is 16 bits, which means that the maximum size of the window is 65,535 bytes. This
value is normally referred to as the receiving window (rwnd) and is determined by the receiver.
The sender must obey the dictation of the receiver in this case.
Checksum. This 16-bit field contains the checksum. The calculation of the checksum for TCP
follows the same procedure as the one described for UDP
Urgent pointer. This 16-bit field, which is valid only if the urgent flag is set, is used when the
segment contains urgent data.
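The fixed part of the header can be decoded with Python's struct module; the sketch below (the field values are hypothetical) shows how the 4-bit header length and the 6 control bits are extracted from the same 16-bit word:

```python
import struct

def parse_tcp_header(seg):
    """Decode the fixed fields of a TCP segment header (network byte order)."""
    (src, dst, seq, ack, off_flags, window,
     checksum, urgent) = struct.unpack("!HHIIHHHH", seg[:20])
    hlen = (off_flags >> 12) * 4          # 4-bit field counts 4-byte words
    flags = off_flags & 0x3F              # 6 control bits (URG ... FIN)
    return {"src": src, "dst": dst, "seq": seq, "ack": ack,
            "hlen": hlen, "flags": flags, "window": window}

# a hypothetical SYN segment: header length 20, only the SYN bit (0x02) set
syn = struct.pack("!HHIIHHHH", 49152, 80, 8000, 0, (5 << 12) | 0x02,
                  65535, 0, 0)
print(parse_tcp_header(syn))
```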
Connection Establishment
TCP transmits data in full-duplex mode. When two TCPs in two machines are connected, they are
able to send segments to each other simultaneously.
Three-Way Handshaking: The connection establishment in TCP is called three-way
handshaking. The process starts with the server. The server program tells its TCP that it is ready
to accept a connection. This request is called a passive open. Although the server TCP is ready to
accept a connection from any machine in the world, it cannot make the connection itself.
The client program issues a request for an active open. A client that wishes to connect to an open
server tells its TCP to connect to a particular server. TCP can now start the three-way
handshaking process.
1. The client sends the first segment, a SYN segment, in which only the SYN flag is set. This
segment is for synchronization of sequence numbers. The client in our example chooses a
random number as the first sequence number and sends this number to the server. This
sequence number is called the initial sequence number (ISN).
A SYN segment cannot carry data, but it consumes one sequence number.
2. The server sends the second segment, a SYN + ACK segment with two flag bits set as: SYN
and ACK. This segment has a dual purpose.
A SYN + ACK segment cannot carry data, but it does consume one sequence number.
3. The client sends the third segment. This is just an ACK segment. It acknowledges the
receipt of the second segment with the ACK flag and acknowledgment number field.
An ACK segment, if carrying no data, consumes no sequence number.
A serious security problem can arise here, called the SYN flooding attack: a malicious attacker
sends a large number of SYN segments, pretending that each comes from a different client by
faking the source IP addresses. The
TCP server then sends the SYN + ACK segments to the fake clients, which are lost. When the
server waits for the third leg of the handshaking process, however, resources are allocated
without being used. If, during this short period of time, the number of SYN segments is large, the
server eventually runs out of resources and may be unable to accept connection requests from
valid clients. This SYN flooding attack belongs to a group of security attacks known as a denial of
service attack, in which an attacker monopolizes a system with so many service requests that
the system overloads and denies service to valid requests.
Data Transfer:
Connection Termination:
Using Three-Way Handshaking: Connection termination can also use three-way handshaking:
one end sends a FIN segment, the other responds with a FIN + ACK segment, and the first end
completes the exchange with an ACK segment.
Nagle's Algorithm: To avoid flooding the network with many small segments, the sending TCP
uses Nagle's algorithm:
1. The sending TCP sends the first piece of data it receives from the sending application
program even if it is only 1 byte.
2. After sending the first segment, the sending TCP accumulates data in the output buffer
and waits until either the receiving TCP sends an acknowledgment or until enough data
have accumulated to fill a maximum-size segment. At this time, the sending TCP can send
the segment.
3. Step 2 is repeated for the rest of the transmission. Segment 3 is sent immediately if an
acknowledgment is received for segment 2, or if enough data have accumulated to fill a
maximum-size segment.
A receiver that consumes data slowly can create the silly window syndrome. One solution
(Clark's solution) is to acknowledge data as they arrive but to announce a window size of zero
until the buffer has enough space. The second solution is to delay sending the acknowledgment:
when a segment arrives, it is not acknowledged immediately. The receiver waits until there is a
decent amount of space in its incoming buffer before acknowledging the arrived
segments. The delayed acknowledgment prevents the sending TCP from sliding its
window. After the sending TCP has sent the data in the window, it stops. This kills the
syndrome.
Congestion Policy
TCP's general policy for handling congestion is based on three phases: slow start, congestion
avoidance, and congestion detection. In the slow-start phase, the sender starts with a very slow
rate of transmission, but increases the rate rapidly to reach a threshold. When the threshold is
reached, the data rate is reduced to avoid congestion. Finally if congestion is detected, the sender
goes back to the slow-start or congestion avoidance phase based on how the congestion is
detected.
1. If a time-out occurs, there is a stronger possibility of congestion; a segment has probably
been dropped in the network, and there is no news about the segments sent afterward. TCP
sets the threshold to one-half of the current window size, reduces the congestion window
to one segment, and starts the slow-start phase again.
2. If three duplicate ACKs are received, there is a weaker possibility of congestion; a segment
may have been dropped, but some segments after it have probably arrived safely since three
duplicate ACKs were received. This is called fast retransmission and fast recovery. In this
case TCP sets both the threshold and the congestion window to one-half of the current
window size and moves to the congestion-avoidance phase.
Example: We assume that the maximum window size is 32 segments. The threshold is set to 16
segments (one-half of the maximum window size). In the slow-start phase the window size starts
from 1 and grows exponentially until it reaches the threshold. After it reaches the threshold, the
congestion avoidance (additive increase) procedure allows the window size to increase linearly
until a timeout occurs or the maximum window size is reached. In Figure 24.11, the time-out
occurs when the window size is 20. At this moment, the multiplicative decrease procedure takes
over and reduces the threshold to one-half of the previous window size. The previous window
size was 20 when the time-out happened so the new threshold is now 10.
TCP moves to slow start again and starts with a window size of 1, and TCP moves to additive
increase when the new threshold is reached. When the window size is 12, a three duplicate ACKs
event happens. The multiplicative decrease procedure takes over again. The threshold is set to 6
and TCP goes to the additive increase phase this time. It remains in this phase until another time-
out or another three duplicate ACKs happen.
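The example's window trace can be reproduced with a few lines. This is a simplified model (one growth step per RTT, with the reactions described above), not a full TCP implementation:

```python
def next_cwnd(cwnd, ssthresh):
    """One RTT of window growth: exponential below the threshold
    (slow start), additive above it (congestion avoidance)."""
    if cwnd < ssthresh:
        return min(cwnd * 2, ssthresh)   # slow start
    return cwnd + 1                      # additive increase

def on_timeout(cwnd):
    """Strong reaction: halve the threshold, restart slow start."""
    return 1, max(cwnd // 2, 2)

def on_three_dup_acks(cwnd):
    """Weaker reaction: halve both, stay in congestion avoidance."""
    return max(cwnd // 2, 2), max(cwnd // 2, 2)

cwnd, ssthresh = 1, 16
trace = [cwnd]
while cwnd < 20:                 # grow until the example's time-out point
    cwnd = next_cwnd(cwnd, ssthresh)
    trace.append(cwnd)
print(trace)                     # 1, 2, 4, 8, 16, then +1 per RTT
cwnd, ssthresh = on_timeout(cwnd)
print(cwnd, ssthresh)            # back to window 1 with threshold 10
```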
Sockets:
Communication between a client process and a server process is communication between two
sockets, created at the two ends. The client thinks that the socket is the entity that receives the
request and gives the response; the server thinks that the socket is the one that has a request and
needs the response. If we create two sockets, one at each end, and define the source and
destination addresses correctly, we can use the available instructions to send and receive data.
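A minimal sketch of two UDP sockets exchanging a request and a response on the local host (binding to port 0 lets the OS choose a free port; the message contents are made up):

```python
import socket

# each socket is identified by a socket address: an IP address + port number
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))               # port 0: OS picks a free port
server.settimeout(5)
server_addr = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(5)
client.sendto(b"request", server_addr)      # the client socket sends a request

data, client_addr = server.recvfrom(1024)   # the server socket receives it...
server.sendto(b"response: " + data, client_addr)   # ...and sends the response

reply, _ = client.recvfrom(1024)
print(reply)
server.close(); client.close()
```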
Assignment:
Question: Consider the effect of using slow start on a line with a 10-msec RTT and no
congestion. The receive window is 30 KB and the maximum segment size is 2 KB. How long does it
take before the first full window can be sent?
Question: Consider the effect of using congestion avoidance on a line with a 10-msec RTT and
no congestion. The receive window is 30 KB and the maximum segment size is 2 KB. How long
does it take before the first full window can be sent?
Socket Addresses: