CN Mid 2
IEEE 802.11 defines the Physical (PHY) layer and Medium Access Control (MAC) layer for
wireless communication:
PHY Layer: Manages data transmission over radio frequencies. The protocol has various
specifications for different frequencies and modulation techniques (such as DSSS,
OFDM).
MAC Layer: Manages channel access and data packet handling over the wireless
medium. The MAC layer utilizes techniques like CSMA/CA to avoid packet collision.
Control Frames: Used for controlling access to the medium, like the RTS/CTS (Request-
to-Send and Clear-to-Send) frames, which help manage access and avoid collisions.
Data Frames: Carry the actual data between devices once a connection is established.
The IEEE 802.11 MAC layer employs Carrier Sense Multiple Access with Collision Avoidance
(CSMA/CA) for channel access:
Before transmitting, a device listens to check if the channel is free. If the channel is busy,
the device waits for a random backoff time before checking again.
When the channel is free, the device sends an RTS (Request to Send) signal, and if the
recipient responds with a CTS (Clear to Send), the data transmission begins.
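The random backoff step above can be sketched in Python. The contention-window values below are illustrative 802.11 defaults, not values taken from this text:

```python
import random

def backoff_slots(attempt, cw_min=15, cw_max=1023):
    """Pick a random backoff count from the contention window.

    The window doubles after each failed attempt (binary exponential
    backoff) and is capped at cw_max. cw_min=15 / cw_max=1023 are
    illustrative defaults.
    """
    cw = min((cw_min + 1) * (2 ** attempt) - 1, cw_max)
    return random.randint(0, cw)

# A station that keeps colliding waits longer on average each retry.
random.seed(1)
for attempt in range(4):
    print(f"attempt {attempt}: wait {backoff_slots(attempt)} slots")
```

The key point is that the expected wait grows with each collision, which spreads competing stations apart in time.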
Security is a crucial aspect of IEEE 802.11, and there are multiple methods to secure data:
WEP (Wired Equivalent Privacy): An early security standard, which has now been
deemed insecure due to vulnerabilities.
WPA (Wi-Fi Protected Access) and WPA2: Enhanced security mechanisms using TKIP and
AES encryption.
WPA3: The latest standard providing stronger encryption and better protections against
brute-force attacks.
Key amendments to the standard include:
802.11n: Improved speed and range using MIMO (Multiple Input, Multiple Output).
802.11ac: Increased speeds with wider channel bandwidth and higher-order modulation
(256-QAM).
802.11ax (Wi-Fi 6): Increased efficiency and capacity, handling more simultaneous
connections, often used in dense network environments.
The IEEE 802.11 protocol allows devices to communicate wirelessly by defining the
physical and MAC layer operations. It includes mechanisms for channel access, frame
management, and security, evolving with each amendment to support faster speeds,
higher capacities, and better security.
2) Describe the significance of each field in IPv4 header with a neat sketch
The IPv4 header is crucial for routing and delivering data packets over an IP-based network. Each
field in the IPv4 header carries specific information to guide the packet through the network to its
intended destination. Here’s a description of each field, along with a sketch to help visualize the
layout.
The IPv4 header is typically 20 bytes in length (if no options are used) and includes the following
fields:
1. Version (4 bits): Indicates the IP protocol version. For IPv4, this value is set to 4.
2. Internet Header Length (IHL) (4 bits): Specifies the length of the IP header in 32-bit words.
The minimum value is 5 (for a 20-byte header), and it varies if options are included.
3. Type of Service (ToS) (8 bits): This field, also known as DSCP (Differentiated Services Code
Point), specifies the quality of service or priority level of the packet, helping to manage traffic
in QoS (Quality of Service) implementations.
4. Total Length (16 bits): Defines the entire length of the IP packet, including both the header
and the data. The maximum size is 65,535 bytes.
5. Identification (16 bits): Used to identify fragments of the original packet if it is broken up
into smaller fragments. Each fragment of a packet carries the same Identification value for
reassembly.
6. Flags (3 bits): Control fragmentation. The first bit is reserved, the second is DF (Don't
Fragment), and the third is MF (More Fragments), which is set on every fragment except
the last.
7. Fragment Offset (13 bits): Specifies the position of this fragment relative to the start
of the original unfragmented packet, allowing the receiving device to reassemble
fragments correctly.
8. Time to Live (TTL) (8 bits): Indicates the maximum number of hops a packet can
take before it’s discarded, preventing packets from circulating indefinitely in the
network.
9. Protocol (8 bits): Identifies the protocol used in the data portion of the IP packet
(e.g., TCP, UDP, ICMP).
10. Header Checksum (16 bits): A checksum for error-checking the IP header,
recalculated at each hop.
11. Source Address (32 bits): Contains the IP address of the original sender.
12. Destination Address (32 bits): Contains the IP address of the intended recipient.
13. Options (variable length, optional): Provides additional options for security, record
routing, timestamping, etc., if required. This field varies in size, and its presence
extends the header length beyond 20 bytes.
14. Padding (variable, optional): Ensures the header is a multiple of 32 bits by adding
extra bits if necessary.
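The field layout above can be demonstrated by unpacking the fixed 20-byte header with Python's struct module. The sample packet below is hand-built for illustration (checksum left as 0), not captured traffic:

```python
import struct

def parse_ipv4_header(raw: bytes) -> dict:
    """Unpack the fixed 20-byte IPv4 header into its fields."""
    ver_ihl, tos, total_len, ident, flags_frag, ttl, proto, checksum, src, dst = \
        struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,
        "ihl_words": ver_ihl & 0x0F,           # header length in 32-bit words
        "tos": tos,
        "total_length": total_len,
        "identification": ident,
        "flags": flags_frag >> 13,             # 3-bit flags field
        "fragment_offset": flags_frag & 0x1FFF,
        "ttl": ttl,
        "protocol": proto,                     # 6 = TCP, 17 = UDP, 1 = ICMP
        "checksum": checksum,
        "source": ".".join(str(b) for b in src),
        "destination": ".".join(str(b) for b in dst),
    }

# Example header: version 4, IHL 5, total length 40, DF flag set,
# TTL 64, protocol TCP, 192.168.0.1 -> 10.0.0.5.
sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0x4000, 64, 6, 0,
                     bytes([192, 168, 0, 1]), bytes([10, 0, 0, 5]))
h = parse_ipv4_header(sample)
print(h["version"], h["ttl"], h["source"], h["destination"])
```

Note how Version and IHL share one byte, and Flags share two bytes with the Fragment Offset, exactly as in the sketch above.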
Each field in the IPv4 header provides specific information crucial for successful data
packet delivery. From identifying packet length and fragmentation to specifying TTL and
protocol type, these fields work together to ensure efficient, reliable packet transport
across IP networks.
Several algorithms for computing the shortest path between two nodes of a graph are known.
Bellman-Ford Algorithm [Distance Vector]
A to B with a cost of 1
A to C with a cost of 3
B to C with a cost of 1
B to D with a cost of 4
C to D with a cost of 1
C to E with a cost of 6
D to E with a cost of 2
Each router needs to determine the shortest path to all other routers. Let’s see how Router A would
compute these paths using Dijkstra’s algorithm.
1. Initialize: set the distance to A to 0 and the distance to every other node to infinity.
2. Step 1: Visit A:
o B = 1 (A to B)
o C = 3 (A to C)
o Mark A as visited.
3. Step 2: Visit B (the unvisited node with the smallest distance, 1):
o C = min(3, 1 + 1) = 2 (A → B → C)
o D = 1 + 4 = 5 (A → B → D)
o Mark B as visited.
4. Step 3: Visit C (distance 2):
o D = min(5, 2 + 1) = 3 (A → B → C → D)
o E = 2 + 6 = 8 (A → B → C → E)
o Mark C as visited.
5. Step 4: Visit D (distance 3):
o E = min(8, 3 + 2) = 5 (A → B → C → D → E)
o Mark D as visited.
6. Step 5: Visit E:
o All nodes have been visited, and the shortest paths are finalized.
The shortest paths from A to each router, along with their costs, are as follows:
A to B: Path = A → B, Cost = 1
A to C: Path = A → B → C, Cost = 2
A to D: Path = A → B → C → D, Cost = 3
A to E: Path = A → B → C → D → E, Cost = 5
Each router will perform this calculation independently, resulting in optimal routes for all
destinations in the network.
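The calculation above can be reproduced with a short Dijkstra implementation over the same link costs:

```python
import heapq

def dijkstra(graph, start):
    """Shortest-path costs and predecessors from `start` using Dijkstra."""
    dist = {node: float("inf") for node in graph}
    prev = {}
    dist[start] = 0
    heap = [(0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry, already finalized
        for v, w in graph[u].items():
            if d + w < dist[v]:
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(heap, (dist[v], v))
    return dist, prev

# The link costs from the example above (undirected links).
graph = {
    "A": {"B": 1, "C": 3},
    "B": {"A": 1, "C": 1, "D": 4},
    "C": {"A": 3, "B": 1, "D": 1, "E": 6},
    "D": {"B": 4, "C": 1, "E": 2},
    "E": {"C": 6, "D": 2},
}
dist, prev = dijkstra(graph, "A")
print(dist)  # {'A': 0, 'B': 1, 'C': 2, 'D': 3, 'E': 5}
```

The computed costs match the table above: B = 1, C = 2, D = 3, E = 5, and the predecessor map gives the path A → B → C → D → E.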
Simple Mail Transfer Protocol (SMTP) is a mechanism for exchanging email messages between
servers. It is an essential component of the email communication process and operates at the
application layer of the TCP/IP protocol stack.
SMTP is an application layer protocol. The client who wants to send the mail opens a TCP connection
to the SMTP server and then sends the mail across the connection. The SMTP server is always in
listening mode. As soon as it detects a TCP connection request from a client, the SMTP process
initiates a connection on port 25. After successfully establishing the TCP connection, the client
process sends the mail immediately.
SMTP Protocol
End-to-End Method
Store-and-Forward Method
The end-to-end model is used to communicate between different organizations whereas the store
and forward method is used within an organization. An SMTP client who wants to send mail will
contact the destination host's SMTP server directly to send the mail. The sending SMTP server
keeps the mail until it is successfully copied to the receiver's SMTP server.
The client SMTP is the one that initiates the session so let us call it the client-SMTP and the server
SMTP is the one that responds to the session request so let us call it receiver-SMTP. The client-SMTP
will start the session and the receiver SMTP will respond to the request.
In the SMTP model user deals with the user agent (UA), for example, Microsoft Outlook, Netscape,
Mozilla, etc. To exchange the mail using TCP, MTA is used. The user sending the mail doesn’t have to
deal with MTA as it is the responsibility of the system admin to set up a local MTA. The MTA
maintains a small queue of mail so that it can schedule repeat delivery of mail in case the receiver is
not available. The MTA delivers the mail to the mailboxes and the information can later be
downloaded by the user agents.
Components of SMTP
Mail User Agent (MUA): It is a computer application that helps you in sending and retrieving
mail. It is responsible for creating email messages for transfer to the mail transfer
agent(MTA).
Mail Submission Agent (MSA): It is a computer program that receives mail from a Mail User
Agent(MUA) and interacts with the Mail Transfer Agent(MTA) for the transfer of the mail.
Mail Transfer Agent (MTA): It is software that has the work to transfer mail from one system
to another with the help of SMTP.
Mail Delivery Agent (MDA): A mail Delivery agent or Local Delivery Agent is basically a
system that helps in the delivery of mail to the local system.
Communication between the sender and the receiver: The sender’s user agent prepares the
message and sends it to the MTA. The MTA’s responsibility is to transfer the mail across the
network to the receiver’s MTA. To send mail, a system must have a client MTA, and to receive
mail, a system must have a server MTA.
Sending Emails: Mail is sent by a series of request and response messages between
the client and the server. The message which is sent across consists of a header and a body.
A null line is used to terminate the mail header and everything after the null line is
considered the body of the message, which is a sequence of ASCII characters. The message
body contains the actual information read by the recipient.
Receiving Emails: The user agent on the server side checks the mailboxes at particular
intervals of time. If any mail has arrived, it informs the user. When the user
tries to read the mail, it displays a list of emails with a short description of each mail in the
mailbox. By selecting any of the mails, the user can view its contents on the terminal.
5) Mention the header of TCP and elaborate each field of the header
A summary of these fields follows:
Source port: this is a 16-bit field that specifies the port number of the sender.
Destination port: this is a 16-bit field that specifies the port number of the receiver.
Sequence number: the sequence number is a 32-bit field that tracks how much
data is sent during the TCP session. When you establish a new TCP connection (3-way
handshake) the initial sequence number is a random 32-bit value. The receiver
will use this sequence number and send back an acknowledgment.
Acknowledgment number: this 32-bit field is used by the receiver to request the next
TCP segment. This value is the received sequence number incremented by the number of
bytes received (or by 1 during connection setup and teardown, since SYN and FIN each
consume one sequence number).
DO: this is the 4 bit data offset field, also known as the header length. It indicates the
length of the TCP header so that we know where the actual data begins.
RSV: these are 3 bits for the reserved field. They are unused and are always set to 0.
Flags: there are 9 bits for flags, we also call them control bits. We use them to
establish connections, send data and terminate connections:
URG: urgent pointer. When this bit is set, the data should be treated as priority over other data.
ACK: used for the acknowledgment.
PSH: this is the push function. This tells an application that the data should be
transmitted immediately and that we don’t want to wait to fill the entire TCP
segment.
RST: this resets the connection, when you receive this you have to terminate
the connection right away. This is only used when there are unrecoverable
errors and it’s not a normal way to finish the TCP connection.
SYN: we use this for the initial three way handshake and it’s used to set the
initial sequence number.
FIN: this finish bit is used to end the TCP connection. TCP is full duplex so
both parties have to use the FIN bit to end the connection. This is the
normal way to end a connection.
Window: the 16 bit window field specifies how many bytes the receiver is willing to
receive. It is used so the receiver can tell the sender that it would like to receive more
data than what it is currently receiving. It does so by specifying the number of bytes
beyond the sequence number in the acknowledgment field.
Checksum: 16 bits are used for a checksum to check if the TCP header is OK or not.
Urgent pointer: these 16 bits are used when the URG bit has been set, the urgent
pointer is used to indicate where the urgent data ends.
TCP Options: this field is optional and can be anywhere between 0 and 320 bits.
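The header layout above can be demonstrated by unpacking the fixed 20-byte portion with struct. The SYN segment below is hand-built for illustration (ports and sequence number are made-up values):

```python
import struct

def parse_tcp_header(raw: bytes) -> dict:
    """Unpack the fixed 20-byte portion of a TCP header."""
    src, dst, seq, ack, off_flags, window, checksum, urg = \
        struct.unpack("!HHIIHHHH", raw[:20])
    return {
        "source_port": src,
        "dest_port": dst,
        "sequence": seq,
        "acknowledgment": ack,
        "data_offset": off_flags >> 12,   # header length in 32-bit words
        "flags": off_flags & 0x01FF,      # the 9 control bits
        "syn": bool(off_flags & 0x0002),
        "ack_flag": bool(off_flags & 0x0010),
        "fin": bool(off_flags & 0x0001),
        "window": window,
        "checksum": checksum,
        "urgent_pointer": urg,
    }

# A hand-built SYN segment: port 49152 -> 80, data offset 5, SYN set.
syn = struct.pack("!HHIIHHHH", 49152, 80, 1000, 0, (5 << 12) | 0x0002,
                  64240, 0, 0)
h = parse_tcp_header(syn)
print(h["source_port"], h["dest_port"], h["syn"], h["ack_flag"])
```

Note how the 4-bit data offset, 3 reserved bits, and 9 flag bits together occupy a single 16-bit word, which is why bit masking is needed to separate them.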
6) Explain in detail about IEEE-802.3 protocol
Ethernet is the standard way to connect computers on a network over a wired connection. It
provides a simple interface for connecting multiple devices, such as computers, routers, and
switches.
Ethernet is commonly used in local area networks (LAN), metropolitan area networks (MAN) and
wide area networks (WAN).
It was commercially introduced in 1980 and first standardized in 1983 as IEEE 802.3.
The basic frame format which is required for all MAC implementation is defined in IEEE 802.3
standard. Though several optional formats are being used to extend the protocol’s basic capability.
The Ethernet frame starts with the Preamble and SFD, both of which work at the physical layer. The
Ethernet header contains both the Source and Destination MAC address, after which the payload of
the frame is present. The last field is the CRC, which is used to detect errors.
A standard Ethernet network can transmit data at a rate up to 10 Megabits per second (10 Mbps).
Preamble This field contains 7 bytes (56 bits) of alternating 1’s and 0’s that alert the receiving system
to the coming frame and enable it to synchronize its clock if it's out of synchronization. The pattern
provides only an alert and a timing pulse. The 56-bit pattern allows the stations to miss some bits at
the beginning of the frame. The preamble is actually added at the physical layer and is not (formally)
part of the frame.
Start frame delimiter (SFD) This field (1 byte: 10101011) signals the beginning of the frame. The SFD
warns the station or stations that this is the last chance for synchronization. The last 2 bits are 11 and
alert the receiver that the next field is the destination address. This field is actually a flag that
defines the beginning of the frame. We need to remember that an Ethernet frame is a variable-
length frame, so it needs a flag to define its beginning. The SFD field is also added at the
physical layer.
Destination address (DA) This field is six bytes (48 bits) and contains the link layer address of the
destination station or stations to receive the packet. When the receiver sees its own link-layer
address, or a multicast address for a group that the receiver is a member of, or a broadcast address,
it decapsulates the data from the frame and passes the data to the upper layer protocol defined by
the value of the type field.
Source address (SA) This field is also six bytes and contains the link-layer address of the sender of the
packet. This is a 6-Byte field that contains the MAC address of the source machine. As Source
Address is always an individual address (Unicast), the least significant bit of the first byte is always 0.
Length or type This field is defined as a type field or length field. The original Ethernet used this field
as the type field to define the upper-layer protocol whose packet is encapsulated in the MAC frame.
This protocol can be IP, ARP, OSPF, and so on. In other words, it serves the same purpose as the
protocol field in a datagram and the port number in a segment or user datagram. It is used for
multiplexing and demultiplexing.
The IEEE standard used it as the length field to define the number of bytes in the data field. Both
uses are common today.
Data This field carries data encapsulated from the upper-layer protocols. It is a minimum of 46 and a
maximum of 1500 bytes. This is the place where the actual data is inserted, also known as the
Payload. Both the IP header and data will be inserted here if Internet Protocol is used over Ethernet.
If the data length is less than the minimum of 46 bytes, padding of 0's is added to meet the minimum
length.
CRC The last field contains error detection information, in this case a CRC-32.
CRC is a 4-byte field. It contains a 32-bit hash code generated over the Destination Address,
Source Address, Length, and Data fields. If the checksum computed by the destination does not
match the checksum value sent, the received data is corrupted.
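The FCS check can be sketched with Python's zlib, whose crc32 uses the same CRC-32 algorithm as Ethernet; the MAC addresses below are made-up examples, and on-wire bit-ordering details are glossed over:

```python
import zlib

# Build the fields the FCS covers: destination, source, type, and data.
dst = bytes.fromhex("ffffffffffff")     # broadcast destination MAC
src = bytes.fromhex("00163e112233")     # example source MAC (made up)
ethertype = bytes.fromhex("0800")       # 0x0800 = IPv4
payload = b"hello" + b"\x00" * 41       # padded to the 46-byte minimum

frame = dst + src + ethertype + payload
fcs = zlib.crc32(frame) & 0xFFFFFFFF
print(f"FCS = {fcs:#010x}")

# The receiver recomputes the CRC; even a single flipped bit changes it.
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]
print((zlib.crc32(corrupted) & 0xFFFFFFFF) != fcs)  # True
```

This also shows the padding rule in action: the 5-byte payload is padded with zeros up to the 46-byte minimum before the CRC is computed.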
7) Explain the concept of classful IP addressing
An IP address is an address that has information about how to reach a specific host, especially
outside the LAN. An IP address is a 32-bit unique address having an address space of 2^32.
Classful IP addressing is a way of organizing and managing IP addresses, which are used to identify
devices on a network. Think of IP addresses like street addresses for houses; each device on a
network needs its unique address to communicate with other devices
IPv4 uses classful IP addressing, where IP addresses are allocated according to five classes, A to E.
The 32-bit IP address is divided into five sub-classes. These are given below:
Class A
Class B
Class C
Class D
Class E
Each of these classes has a valid range of IP addresses. Classes D and E are reserved for multicast and
experimental purposes respectively. The order of bits in the first octet determines the classes of the
IP address.
The class of IP address is used to determine the bits used for network ID and host ID and the number
of total networks and hosts possible in that particular class. Each ISP or network administrator
assigns an IP address to each device that is connected to its network.
Class A
IP addresses belonging to class A are assigned to the networks that contain a large number of hosts.
The higher-order bit of the first octet in class A is always set to 0. The remaining 7 bits in the first
octet are used to determine network ID. The 24 bits of host ID are used to determine the host in any
network. The default subnet mask for Class A is 255.x.x.x (255.0.0.0). Therefore, class A has a total of:
2^7 = 128 network IDs and 2^24 – 2 = 16,777,214 usable host IDs per network.
Class B
IP addresses belonging to class B are assigned to networks that range from medium-sized to
large-sized networks.
The higher-order bits of the first octet of IP addresses of class B are always set to 10. The remaining
14 bits are used to determine the network ID. The 16 bits of host ID are used to determine the host
in any network. The default subnet mask for class B is 255.255.x.x (255.255.0.0). Class B has a total of:
2^14 = 16,384 network IDs and 2^16 – 2 = 65,534 usable host IDs per network.
Class C
IP addresses belonging to class C are assigned to small-sized networks.
The higher-order bits of the first octet of IP addresses of class C are always set to 110. The remaining
21 bits are used to determine the network ID. The 8 bits of host ID are used to determine the host in
any network. The default subnet mask for class C is 255.255.255.x (255.255.255.0). Class C has a total of:
2^21 = 2,097,152 network IDs and 2^8 – 2 = 254 usable host IDs per network.
Class D
IP addresses belonging to class D are reserved for multicasting. The higher-order bits of the first octet
of IP addresses belonging to class D are always set to 1110. The remaining bits are for the address
that interested hosts recognize.
Class D does not possess any subnet mask. IP addresses belonging to class D range from 224.0.0.0 –
239.255.255.255.
Class E
IP addresses belonging to class E are reserved for experimental and research purposes. IP addresses
of class E range from 240.0.0.0 – 255.255.255.255. This class doesn’t have any subnet mask. The
higher-order bits of the first octet of class E are always set to 1111.
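The class ranges above follow directly from the leading bits of the first octet, which a few comparisons can reproduce:

```python
def address_class(ip: str) -> str:
    """Classify an IPv4 address by the leading bits of its first octet."""
    first = int(ip.split(".")[0])
    if first < 128:
        return "A"   # leading bit 0   (0.x.x.x - 127.x.x.x)
    if first < 192:
        return "B"   # leading bits 10  (128 - 191)
    if first < 224:
        return "C"   # leading bits 110 (192 - 223)
    if first < 240:
        return "D"   # leading bits 1110 (224 - 239, multicast)
    return "E"       # leading bits 1111 (240 - 255, experimental)

for ip in ["10.0.0.1", "172.16.5.4", "192.168.1.1", "224.0.0.9", "250.1.2.3"]:
    print(ip, "-> Class", address_class(ip))
```

Only the first octet matters for classification, which is exactly the point of the classful scheme: a router could determine the network/host split from the address alone.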
8) Demonstrate the distance vector routing algorithm with an example.
Distance Vector Routing (DVR) Protocol is a method used by routers to find the best path for data to
travel across a network. Each router keeps a table that shows the shortest distance to every other
router, based on the number of hops (or steps) needed to reach them. Routers share this information
with their neighbors, allowing them to update their tables and find the most efficient routes. This
protocol helps ensure that data moves quickly and smoothly through the network.
The protocol requires that a router inform its neighbors of topology changes periodically. Historically
known as the old ARPANET routing algorithm (or known as the Bellman-Ford algorithm).
Each router maintains a Distance Vector table containing the distance between itself and all possible
destination nodes. Distances, based on a chosen metric, are computed using information from the
neighbors' distance vectors.
Distance to itself = 0
A router transmits its distance vector to each of its neighbors in a routing packet.
Each router receives and saves the most recently received distance vector from each of its
neighbors.
Note:
From time to time, each node sends its own distance vector estimate to its neighbors.
When a node x receives a new DV estimate from any neighbor v, it saves v's distance vector
and updates its own DV using the Bellman-Ford equation:
Dx(y) = min over all neighbors v of { c(x, v) + Dv(y) }, for every destination y.
Example :
Consider 3 routers X, Y and Z as shown in the figure. Each router has its own routing table, and every
routing table contains the distance to the destination nodes.
Consider router X: X will share its routing table with its neighbors, and the neighbors will share their
routing tables with X; the distance from node X to each destination is then calculated using the
Bellman-Ford equation.
As we can see, the distance from X to Z is smaller when Y is the intermediate node (hop), so it is
updated in X's routing table.
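One Bellman-Ford update round for router X can be sketched as below. The link costs (X–Y = 2, X–Z = 7, Y–Z = 1) are assumed example values in the spirit of the 3-router figure, not taken from the text:

```python
def dv_update(own, neighbour_vectors, link_cost):
    """One Bellman-Ford relaxation for router x:
    D_x(y) = min over neighbours v of { c(x, v) + D_v(y) }."""
    updated = dict(own)
    for dest in own:
        for v, vec in neighbour_vectors.items():
            candidate = link_cost[v] + vec.get(dest, float("inf"))
            if candidate < updated[dest]:
                updated[dest] = candidate
    return updated

# Router X with neighbours Y and Z (assumed link costs).
link_cost = {"Y": 2, "Z": 7}
x = {"X": 0, "Y": 2, "Z": 7}                  # X's initial vector
vectors = {"Y": {"X": 2, "Y": 0, "Z": 1},     # vectors received from Y and Z
           "Z": {"X": 7, "Y": 1, "Z": 0}}
print(dv_update(x, vectors, link_cost))  # {'X': 0, 'Y': 2, 'Z': 3}
```

Going to Z via Y costs 2 + 1 = 3, less than the direct cost of 7, so X's routing table entry for Z is updated, exactly the situation described above.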
User Datagram Protocol (UDP) is a Transport Layer protocol. UDP is a part of the Internet Protocol
suite, referred to as UDP/IP suite. Unlike TCP, it is an unreliable and connectionless protocol. So,
there is no need to establish a connection before data transfer. The UDP helps to establish low-
latency and loss-tolerating connections over the network. The UDP enables process-to-process
communication.
User Datagram Protocol (UDP) is one of the core protocols of the Internet Protocol (IP) suite. It is a
communication protocol used across the internet for time-sensitive transmissions such as video
playback or DNS lookups . Unlike Transmission Control Protocol (TCP), UDP is connectionless and
does not guarantee delivery, order, or error checking, making it a lightweight and efficient option for
certain types of data transmission.
UDP header is an 8-byte fixed and simple header, while for TCP it may vary from 20 bytes to 60 bytes.
The first 8 Bytes contain all necessary header information and the remaining part consists of data.
UDP port number fields are each 16 bits long, therefore the range for port numbers is defined from 0
to 65535; port number 0 is reserved. Port numbers help to distinguish different user requests or
processes.
Source Port: Source Port is a 2 Byte long field used to identify the port number of the source.
Destination Port: It is a 2 Byte long field, used to identify the port of the destined packet.
Length: Length is the length of UDP including the header and the data. It is a 16-bits field.
Checksum: Checksum is 2 Bytes long field. It is the 16-bit one’s complement of the one’s
complement sum of the UDP header, the pseudo-header of information from the IP header,
and the data, padded with zero octets at the end (if necessary) to make a multiple of two
octets.
Unlike TCP, the checksum calculation is not mandatory in UDP. No error control or flow control is
provided by UDP; hence UDP depends on IP and ICMP for error reporting. UDP does provide port
numbers, so that it can differentiate between user requests.
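The 8-byte header described above can be built and unpacked in a few lines; the port numbers and payload below are made-up examples:

```python
import struct

def build_udp_header(src_port, dst_port, payload: bytes, checksum=0):
    """Pack the 8-byte UDP header: source port, destination port,
    length (header + data), checksum (0 = unused, permitted over IPv4)."""
    length = 8 + len(payload)
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

header = build_udp_header(53000, 53, b"example-dns-query")
src, dst, length, chk = struct.unpack("!HHHH", header)
print(src, dst, length, chk)  # 53000 53 25 0
```

The Length field of 25 is the 8-byte header plus the 17-byte payload, illustrating that the field covers both header and data.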
10. Explain with example about HTTP Request and Response methods.
HTTP (Hypertext Transfer Protocol) is a fundamental protocol of the Internet, enabling the transfer
of data between a client and a server. It is the foundation of data communication for the World Wide
Web.
HTTP is a request-response protocol, which means that for every request sent by a client (typically a
web browser), the server responds with a corresponding response. The basic flow of an HTTP
request-response cycle is as follows:
Client sends an HTTP request: The client (usually a web browser) initiates the process by
sending an HTTP request to the server. This request includes a request method (GET, POST,
PUT, DELETE, etc.), the target URI (Uniform Resource Identifier, e.g., a URL), headers, and an
optional request body.
Server processes the request: The server receives the request and processes it based on the
requested method and resource. This may involve retrieving data from a database, executing
server-side scripts, or performing other operations.
Server sends an HTTP response: After processing the request, the server sends an HTTP
response back to the client. The response includes a status code (e.g., 200 OK, 404 Not
Found), response headers, and an optional response body containing the requested data or
content.
Client processes the response: The client receives the server's response and processes it
accordingly. For example, if the response contains an HTML page, the browser will render
and display it. If it's an image or other media file, the browser will display or handle it
appropriately.
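The request/response cycle above can be illustrated with raw HTTP messages; no network is used here, and the host and response contents are made-up examples:

```python
# A minimal GET request, as a client would send it.
request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "Connection: close\r\n"
    "\r\n"                       # blank line ends the headers
)

# A canned response, as a server might return it.
response = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "Content-Length: 14\r\n"
    "\r\n"
    "<h1>Hello</h1>"
)

# The client splits the response into status line, headers, and body.
head, _, body = response.partition("\r\n\r\n")
status_line, *header_lines = head.split("\r\n")
version, status_code, reason = status_line.split(" ", 2)
headers = dict(line.split(": ", 1) for line in header_lines)
print(request.splitlines()[0])   # GET /index.html HTTP/1.1
print(status_code, reason, headers["Content-Type"], len(body))
```

The blank line separating headers from body is the same convention in both directions, which is why a single `partition` on `\r\n\r\n` recovers the body.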
11. Describe the frame structure, working principle and limitations of Bluetooth.
Bluetooth is used for short-range wireless voice and data communication. It is a Wireless Personal
Area Network (WPAN) technology and is used for data communications over smaller distances. The
architecture of Bluetooth defines two types of networks:
Piconet
Scatternet
The following diagram shows us a schematic representation of different fields used in the Bluetooth
Frame structure :
Bluetooth Frame Structure is of a baseband packet format type and it consists of three main fields
namely Access code, Packet header, and payload.
Access Code: It is the first field of the frame structure and is 72 bits in size. It is divided
into three parts: the first part is the preamble (4 bits), the second is the synchronization
word (64 bits), and the third is the trailer (4 bits). The Access Code field is used for timing
synchronization and piconet identification.
Packet header: Its size is 54 bits. It contains six subfields: the first is the address, which is
3 bits in size and can define up to 7 slaves; the second is the type, which is 4 bits in size and
identifies the type of data; the third is the flow bit, used for flow control; the fourth is
ARQN, used for acknowledgement; the fifth is SEQN, which carries the sequence numbers
of frames; and the sixth is HEC, used to detect errors in the header.
Payload: This field can be 0-2744 bits long and its structure depends on the type of link
established.
12. Explain the concept of congestion. How many ways congestion can be handled in network.
Congestion in a computer network happens when there is too much data being sent at the same
time, causing the network to slow down. Just like traffic congestion on a busy road, network
congestion leads to delays and sometimes data loss. When the network can’t handle all the incoming
data, it gets “clogged,” making it difficult for information to travel smoothly from one place to
another.
Congestion control is a crucial concept in computer networks. It refers to the methods used to
prevent network overload and ensure smooth data flow. When too much data is sent through the
network at once, it can cause delays and data loss. Congestion control techniques help manage the
traffic, so all users can enjoy a stable and efficient network connection. These techniques are
essential for maintaining the performance and reliability of modern networks.
Congestion Control is a mechanism that controls the entry of data packets into the network,
enabling a better use of a shared network infrastructure and avoiding congestive collapse.
Congestive-avoidance algorithms (CAA) are implemented at the TCP layer as the mechanism
to avoid congestive collapse in a network.
The leaky bucket algorithm finds its use in the context of network traffic shaping or rate-
limiting.
A leaky bucket execution and a token bucket execution are predominantly used for traffic
shaping algorithms.
This algorithm is used to control the rate at which traffic is sent to the network and shape
the burst traffic to a steady traffic stream.
A disadvantage of the leaky-bucket algorithm is its inefficient use of available network
resources: spare capacity, such as bandwidth, goes unused whenever traffic falls below the
fixed output rate.
Let us consider an example to understand this. Imagine a bucket with a small hole in the bottom. No
matter the rate at which water enters the bucket, the outflow is at a constant rate. When the bucket
is full, additional water entering spills over the sides and is lost.
Similarly, each network interface contains a leaky bucket and the following steps are involved in leaky
bucket algorithm:
When a host wants to send a packet, the packet is thrown into the bucket.
The bucket leaks at a constant rate, meaning the network interface transmits packets at a
constant rate.
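The two steps above can be sketched as a tick-by-tick simulation; the capacity and leak rate below are arbitrary example values:

```python
def leaky_bucket(arrivals, capacity, leak_rate):
    """Simulate a leaky bucket: arrivals[t] packets arrive at tick t,
    at most leak_rate packets leave per tick, and packets arriving
    when the bucket is full are dropped (the 'spill')."""
    level, sent, dropped = 0, [], 0
    for arriving in arrivals:
        space = capacity - level
        accepted = min(arriving, space)
        dropped += arriving - accepted   # overflow is lost
        level += accepted
        out = min(level, leak_rate)      # constant-rate output
        level -= out
        sent.append(out)
    return sent, dropped

# A burst of 10 packets is smoothed to a steady 2 packets per tick;
# 2 packets overflow the 8-packet bucket and are dropped.
sent, dropped = leaky_bucket([10, 0, 0, 0, 0], capacity=8, leak_rate=2)
print(sent, dropped)  # [2, 2, 2, 2, 0] 2
```

This shows both properties of the algorithm at once: the bursty input becomes a constant-rate output, and traffic beyond the bucket's capacity is lost.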
The leaky bucket algorithm has a rigid output design at an average rate independent of the
bursty traffic.
In some applications, when large bursts arrive, the output is allowed to speed up. This calls
for a more flexible algorithm, preferably one that never loses information. Therefore, a token
bucket algorithm finds its uses in network traffic shaping or rate-limiting.
It is a control algorithm that indicates when traffic should be sent, based on the presence of
tokens in the bucket.
The bucket holds tokens, each of which represents a packet of predetermined size.
Tokens are added to the bucket at a fixed rate, and a token is removed from the bucket for
each packet transmitted.
No token means no flow sends its packets. Hence, a flow can transmit traffic up to its peak
burst rate only while there are tokens in the bucket.
The leaky bucket algorithm enforces an output pattern at the average rate, no matter how bursty the
traffic is. So, in order to deal with bursty traffic, we need a more flexible algorithm so that the data is
not lost. One such algorithm is the token bucket algorithm: if there is a ready packet and the bucket
holds a token, a token is removed from the bucket and the packet is sent.
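The token bucket rules above can be simulated the same way as the leaky bucket; the bucket size and token rate below are arbitrary example values:

```python
def token_bucket(arrivals, bucket_size, token_rate):
    """Simulate a token bucket: token_rate tokens are added per tick
    (up to bucket_size); each departing packet consumes one token, so
    an idle flow saves up tokens and may later burst."""
    tokens, queue, sent = 0, 0, []
    for arriving in arrivals:
        tokens = min(bucket_size, tokens + token_rate)  # refill
        queue += arriving
        out = min(queue, tokens)   # one token per packet sent
        tokens -= out
        queue -= out
        sent.append(out)
    return sent

# After three idle ticks the bucket holds saved tokens, so a burst of
# 5 packets goes out as 4 at once and 1 on the next tick.
print(token_bucket([0, 0, 0, 5, 0], bucket_size=4, token_rate=1))
```

Contrast this with the leaky bucket: the burst is not flattened to the average rate but allowed through up to the number of accumulated tokens, which is exactly the flexibility the text describes.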
The process of communication between devices over the internet happens according to the
current TCP/IP suite model (a stripped-down version of the OSI reference model). The Application
layer sits at the top of the TCP/IP stack, and it is from here that network-facing applications like web
browsers on the client side establish a connection with the server. From the application layer, the
information is transferred to the transport layer, where our topic comes into the picture. The two important
protocols of this layer are – TCP, and UDP(User Datagram Protocol) out of which TCP is
prevalent(since it provides reliability for the connection established). However, you can find an
application of UDP in querying the DNS server to get the binary equivalent of the Domain Name used
for the website.
TCP provides reliable communication with something called Positive Acknowledgement with
Retransmission (PAR). The Protocol Data Unit (PDU) of the transport layer is called a segment. A
device using PAR resends the data unit until it receives an acknowledgement. If the data unit received
at the receiver's end is damaged (the receiver checks the data with the checksum functionality of the
transport layer that is used for error detection), the receiver discards the segment. So the sender has
to resend any data unit for which a positive acknowledgement is not received. You can see from this
mechanism that three segments are exchanged between the sender (client) and the receiver (server)
for a reliable TCP connection to be established. Let us delve into how this mechanism works.
Synchronization Sequence Number (SYN) − The client sends the SYN to the server
When the client wants to connect to the server, then it sends the message to the server by
setting the SYN flag as 1.
The message carries some additional information like the sequence number (32-bit random
number).
The ACK flag is set to 0. The maximum segment size and the window size are also set. For example, if the window size is 1000 bytes and the maximum segment size is 100 bytes, then at most 10 data segments (1000/100 = 10) can be outstanding on the connection.
The server acknowledges the client's request by setting the ACK flag to 1. The ACK acknowledges the segment it received, and the SYN indicates the sequence number with which the server will start its own segments.
For example, if the client has sent the SYN with sequence number = 500, then the server will send the ACK with acknowledgment number = 501.
If the server also wants to establish the connection, it sets the SYN flag to 1 and sends it to the client (usually combined with the ACK in a single SYN+ACK segment). The sequence number used for the server's SYN is chosen independently of the client's. The server also advertises its own window size and maximum segment size to the client. At this point, the connection is established in the client-to-server direction.
The client sends the acknowledgment (ACK) to the server after receiving the synchronization
(SYN) from the server.
After the server receives this ACK from the client, the connection is fully established between the client and
the server.
Now the data can be transmitted between the client and server sides.
First, the client requests the server to terminate the established connection by sending FIN.
After receiving the client request, the server sends back the FIN and ACK request to the
client.
After receiving the FIN + ACK from the server, the client confirms by sending an ACK to the
server.
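The sequence/acknowledgment arithmetic of the three-way handshake can be traced with a small Python sketch. The initial sequence numbers passed in are hypothetical; in real TCP they are random 32-bit values.

```python
def three_way_handshake(client_isn, server_isn):
    """Trace the three segments of TCP connection establishment.
    A SYN consumes one sequence number, so each ACK acknowledges isn + 1."""
    return [
        # 1. Client -> Server: SYN carrying the client's initial sequence number.
        ("SYN", {"seq": client_isn}),
        # 2. Server -> Client: SYN+ACK; acknowledges client_isn + 1.
        ("SYN+ACK", {"seq": server_isn, "ack": client_isn + 1}),
        # 3. Client -> Server: ACK; acknowledges server_isn + 1.
        ("ACK", {"seq": client_isn + 1, "ack": server_isn + 1}),
    ]
```

With client_isn = 500, the SYN+ACK carries acknowledgment number 501.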
15. Explain the File Transfer Protocol (FTP) and its features.
File Transfer Protocol (FTP) is an Internet tool provided by TCP/IP. It helps transfer files from one computer to another by providing access to directories or folders on remote computers, and it allows software, data, and text files to be transferred between different kinds of computers. The end user in the connection is known as the local host, and the server that provides the data is known as the remote host.
It shields users from system variations (operating system, directory structures, file
structures, etc.)
Anonymous FTP
Some sites enable anonymous FTP, whose files are available for public access. A user can access those files without a username or password: the username is set to anonymous and the password to guest by default. Such a user's access is very limited; for example, the user may copy files but is not allowed to navigate through directories.
Detailed steps of FTP
The FTP client contacts the FTP server at port 21, specifying TCP as the transport protocol.
When the server receives a command for a file transfer, the server opens a TCP data connection to the client.
Stream Mode: This is the default mode. In stream mode, data is delivered from FTP to TCP as a continuous stream of bytes, and TCP is responsible for fragmenting the data into segments. If the data is simply a stream of bytes, no end-of-file marker is needed; the sender signals the end of the file by closing the data connection.
Block Mode: In block mode, data is passed from FTP to TCP in blocks, each preceded by a 3-byte header. The first byte, known as the block descriptor, carries information about the block; the other two bytes give the size of the block.
Compressed Mode: This mode is used to transfer big files. Because very large files cannot be sent efficiently as-is, compressed mode reduces the size of the data (typically using run-length encoding) before sending it over the Internet.
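The 3-byte block-mode header can be illustrated with Python's struct module. This is a sketch of the header layout only (a descriptor byte plus a 16-bit big-endian byte count, as in RFC 959), not a working FTP client; the function names are our own.

```python
import struct

def make_block(payload: bytes, descriptor: int = 0) -> bytes:
    """Prepend the block-mode header: 1 descriptor byte + 2-byte block size."""
    return struct.pack(">BH", descriptor, len(payload)) + payload

def read_block(buf: bytes):
    """Split one block back into (descriptor, payload)."""
    descriptor, size = struct.unpack(">BH", buf[:3])
    return descriptor, buf[3:3 + size]
```

A 5-byte payload thus travels as an 8-byte block: 3 bytes of header followed by the data.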
Applications of FTP
FTP connection is used by different big business organizations for transferring files in
between them, like sharing files to other employees working at different locations or
different branches of the organization.
FTP connection is used by IT companies to provide backup files at disaster recovery sites.
Financial services use FTP connections to securely transfer financial documents to the
respective company, organization, or government.
Employees use FTP connections to share any data with their co-workers.
16. Define the type of the following destination addresses: a. 4A:30:10:21:10:1A b.
47:20:1B:2E:08:EE c. FF:FF:FF:FF:FF:FF
To find the type of the address, we need to look at the second hexadecimal digit from the left. If it is
even, the address is unicast. If it is odd, the address is multicast. If all digits are Fs, the address is
broadcast.
4A:30:10:21:10:1A
In a MAC address, if the least significant bit of the first byte (here 4A in hexadecimal) is 0, it
indicates a unicast address.
In this case, 4A in binary is 01001010, whose least significant bit is 0, so this is a
unicast address.
47:20:1B:2E:08:EE
If the least significant bit of the first byte is 1, it indicates a multicast address.
47 in hexadecimal is 01000111 in binary, and its least significant bit is 1, confirming
it as a multicast address.
FF:FF:FF:FF:FF:FF
This is the broadcast address: every bit is set to 1 (every byte is FF in hexadecimal), and it is
used to communicate with all devices on a network segment.
The address is sent left to right, byte by byte; within each byte, the bits are sent right to left (least significant bit first).
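The classification rule can be checked mechanically in Python (a small sketch; the function name is ours):

```python
def mac_type(mac: str) -> str:
    """Classify a destination MAC address.
    The I/G bit (least significant bit of the first byte) is 0 for
    unicast and 1 for multicast; the all-ones address is broadcast."""
    octets = [int(part, 16) for part in mac.split(":")]
    if all(o == 0xFF for o in octets):
        return "broadcast"
    return "multicast" if octets[0] & 1 else "unicast"
```

Applied to the three addresses of the question, it yields unicast, multicast, and broadcast respectively.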
Token Bus (IEEE 802.4) is a popular standard for token-passing LANs. In a token bus LAN, the physical medium is a bus or a tree (typically coaxial cable), over which a logical ring is created. The token is passed from one station to the next in a fixed sequence (clockwise or anticlockwise), and each station knows the addresses of the stations logically to its "left" and "right" in the ring. A station can transmit data only while it holds the token, so the working of a token bus is quite similar to that of Token Ring.
[Figure: logical ring, shown with arrows, formed in a bus-based token-passing LAN]
Frame Format:
1. Preamble – Used by the receiver for bit synchronization. It is a 1-byte field.
2. Start Delimiter – These bits mark the beginning of the frame. It is a 1-byte field.
3. Frame Control – This field specifies the type of frame – data frame and control frames. It is a
1-byte field.
4. Destination Address – This field contains the destination address. It is a 2 to 6 bytes field.
5. Source Address – This field contains the source address. It is a 2 to 6 bytes field.
6. Data – The data field may be up to 8182 bytes long when 2-byte addresses are used, and up to 8174 bytes in the case of 6-byte addresses.
7. Checksum – This field contains the checksum bits, which are used to detect errors in the transmitted data. It is a 4-byte field.
8. End Delimiter – This field marks the end of a frame. It is a 1-byte field.
Under heavy traffic, token passing makes a ring topology perform better than a bus topology.
Characteristics:-
1. Bus Topology: Token Bus uses a bus topology, where all devices on the network are
connected to a single cable or “bus”.
2. Token Passing: A “token” is passed around the network, which gives permission for a device
to transmit data.
3. Priority Levels: Token Bus supports multiple priority classes (IEEE 802.4 defines four access classes), with the highest priority reserved for control messages and the lowest for ordinary data transmission.
4. Collision Avoidance: Because only the station holding the token may transmit, Token Bus ensures that two devices never transmit data at the same time.
5. Maximum Cable Length: The maximum cable length for Token Bus is limited to 1000 meters.
6. Data Transmission Rates: Token Bus can transmit data at speeds of up to 10 Mbps.
7. Limited Network Size: Token Bus is typically used for small to medium-sized networks with up
to 72 devices.
8. No Centralized Control: Token Bus does not require a central controller to manage network
access, which can make it more flexible and easier to implement.
9. Vulnerable to Network Failure: If the token is lost or a device fails, the network can become
congested or fail altogether.
10. Security: Token Bus has limited security features, and unauthorized devices can potentially
gain access to the network.
Overall, Token Bus has some advantages and disadvantages depending on the specific requirements
of the network. It is suitable for use in some situations where reliability and deterministic behavior
are important, but may not be the best choice for high-speed, high-bandwidth applications.
19. Change the multicast IP address 232.43.14.7 to an Ethernet multicast physical address.
We keep the rightmost 23 bits of the multicast IP address 232.43.14.7. The last three bytes are 43, 14, and 7, which are 2B, 0E, and 07 in hexadecimal (the most significant bit of 43 is already 0, so no information is lost). Adding this result to the starting Ethernet multicast address 01:00:5E:00:00:00 gives the physical address 01:00:5E:2B:0E:07.
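The 23-bit mapping can be sketched in Python (the helper name is ours):

```python
def ip_to_ethernet_multicast(ip: str) -> str:
    """Map a multicast IPv4 address onto an Ethernet multicast MAC:
    the low-order 23 bits of the IP replace the low-order 23 bits of
    the base address 01:00:5E:00:00:00."""
    o = [int(part) for part in ip.split(".")]
    mac = [0x01, 0x00, 0x5E, o[1] & 0x7F, o[2], o[3]]  # drop the top bit of byte 2
    return ":".join(f"{b:02X}" for b in mac)
```

For 232.43.14.7 this produces 01:00:5E:2B:0E:07.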
Subnetting is the procedure of dividing a network into sub-networks, or subnets. A subnet address is an internal address made up of a combination of the network segment and part of the host segment. In subnetting, a few bits are borrowed from the host portion to create smaller subnetworks within the original network; that is, host bits are converted into network bits.
Supernetting is the procedure of combining small networks into a larger address space. In subnetting, the number of network-address bits is increased; in supernetting, the number of host-address bits is increased. Subnetting is implemented via Variable-Length Subnet Masking (VLSM), while supernetting is implemented via Classless Inter-Domain Routing (CIDR).
Subnetting vs Supernetting:
- In subnetting, the network-address bits are increased; in supernetting, the host-address bits are increased.
- In subnetting, the mask bits are moved towards the right; in supernetting, towards the left.
- Subnetting reduces or removes address depletion; supernetting is used to simplify the routing process.
Subnetting
Advantages of subnetting
1. Effective IP address use: Subnetting enables the division of a large network into smaller
subnets, which aids in the efficient use of IP address allocation. It lessens IP address wastage
and enables organizations to allocate IP addresses in accordance with their unique
requirements.
2. Subnetting can help reduce network congestion and enhance overall network performance
by breaking up a large network into smaller subnets. Smaller subnets improve the efficiency
of routing and switching operations and allow for better network traffic control.
Disadvantages of subnetting
1. Complexity: Subnetting can make network configuration and design more complicated. It can
be difficult, especially for large networks, to choose the right subnet sizes, plan IP address
ranges, and manage routing between subnets.
2. Subnetting requires more administrative work, especially when adding new subnets or
changing the configuration of existing ones. In addition to maintaining routing tables and
ensuring proper connectivity between subnets, it entails managing IP address ranges.
Supernetting
Advantages of supernetting
1. Supernetting enables the consolidation of several smaller networks into a single, larger
network block, which reduces the size of the routing table and maximizes the use of IP
address space.
2. Routing can be made easier by combining several smaller networks into a supernet because
fewer routing updates and table entries are required. This may result in increased routing
effectiveness and decreased router overhead.
3. Because fewer routing lookups are needed for packet forwarding, supernetting can also improve network performance: packet processing may be faster and latency lower.
Disadvantages of supernetting
1. Loss of network granularity: Supernetting involves aggregating multiple networks into larger
network blocks. This can result in a loss of granularity, making it more challenging to
implement fine-grained network management, security policies, and traffic control.
2. Increased risk of network failures: If a single supernet experiences a network failure, it can
affect multiple smaller networks within that supernet. This makes troubleshooting and
isolating network issues more complex.
3. Limited flexibility: Supernetting requires careful planning and coordination to ensure that
the aggregated networks have compatible address ranges. It may limit the ability to make
independent changes to individual subnets within a supernet without affecting the entire
supernet.
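Both operations can be tried with Python's standard ipaddress module; the network 192.168.0.0/24 below is just an illustrative example, not taken from the question:

```python
import ipaddress

# Subnetting: borrow 2 host bits from 192.168.0.0/24, yielding four /26 subnets.
net = ipaddress.ip_network("192.168.0.0/24")
subnets = list(net.subnets(prefixlen_diff=2))

# Supernetting: give 2 network bits back, aggregating a /26 into its /24.
supernet = ipaddress.ip_network("192.168.0.0/26").supernet(prefixlen_diff=2)
```

The four subnets start at .0, .64, .128, and .192, and the supernet recovers the original /24 block.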
22. The following is the content of a UDP header in hexadecimal format. CB84000D001C001C a.
What is the source port number? b. What is the destination port number? c. What is the total length
of the user datagram? d. What is the length of the data?
a) The source port number is the first four hexadecimal digits (CB84)16, which means that the source
port number is 52100.
b) The destination port number is the second four hexadecimal digits (000D)16, which means that
the destination port number is 13.
c) The third four hexadecimal digits (001C)16 define the length of the whole UDP packet as 28 bytes.
d) The length of the data is the length of the whole packet minus the length of the header, or 28 - 8 =
20 bytes.
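The computation above can be reproduced in Python by unpacking the four big-endian 16-bit header fields (the fourth field, 001C, is the checksum, which the question does not ask about):

```python
import struct

def parse_udp_header(hex_header: str) -> dict:
    """Decode the 8-byte UDP header: four big-endian 16-bit fields."""
    src, dst, length, checksum = struct.unpack("!4H", bytes.fromhex(hex_header))
    return {
        "src_port": src,
        "dst_port": dst,
        "length": length,            # total length of the user datagram
        "data_length": length - 8,   # the UDP header is always 8 bytes
        "checksum": checksum,
    }
```

For the dump CB84000D001C001C this yields source port 52100, destination port 13, total length 28, and data length 20.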
23. Describe the role of resource records and name servers in resolving the domain names.
Name Servers
The host asks a DNS name server to resolve a domain name, and the name server returns the IP address corresponding to that domain name so that the host can then connect to that IP address.
Hierarchy of Name Servers
Root Name Servers: A root server is contacted by name servers that cannot resolve a name themselves. If the mapping is not known, it contacts the authoritative name server, obtains the mapping, and the IP address is returned to the host.
Top-Level Domain (TLD) Servers: Responsible for com, org, edu, etc., and for all country-code top-level domains such as uk, fr, ca, and in. They hold information about the authoritative name servers, knowing the name and IP address of each authoritative name server for the second-level domains.
Authoritative Name Servers: These are an organization's own DNS servers, providing the authoritative hostname-to-IP mapping for the organization's servers; they may be maintained by the organization itself or by a service provider. To reach cse.dtu.in, we first ask a root DNS server, which points us to the top-level domain server, which in turn points to the authoritative name server that actually holds the IP address. The authoritative name server then returns the associated IP address.
Resource Records (RRs)
DNS servers hold different types of records to manage resolution efficiently and to provide important information about a domain. These records are the details cached by DNS servers. Each record has a TTL (Time To Live) value in seconds associated with it; this value sets the expiration time of the cached record, typically ranging from 60 to 86400 seconds depending on the DNS provider.
Resource records are entries in a DNS database that provide information about a
domain, such as its IP address or mail server.
Each record has a specific format and purpose. Some common types of resource
records include:
A record – points to the IPv4 address of the machine where the website is hosted
AAAA record – points to the IPv6 address of the machine where the website is hosted
MX – points to the domain's email servers
CNAME – canonical name; an alias that points one hostname to another hostname
ANAME – auto-resolved alias; works like a CNAME but points a hostname to the IP of another hostname
NS – lists the name servers for a domain or subdomain
PTR – maps an IP address back to a hostname (reverse lookup)
SOA – contains administrative information about the DNS zone
SRV – service record used to locate hosts for other services
TXT – text record, mostly used for verification (SPF, DKIM, DMARC, and more)
CAA – certificate authority authorization record for SSL/TLS certificates
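The delegation chain root → TLD → authoritative server can be mimicked with a toy resolver. The zone data below is entirely hypothetical (only cse.dtu.in is taken from the example in this answer); a real resolver would query remote servers over the network instead of dictionaries.

```python
# Toy zone data mimicking the DNS delegation hierarchy (hypothetical values).
ROOT = {"in": "tld-server.example"}          # root knows each TLD's server
TLD = {"dtu.in": "auth-server.example"}      # TLD server knows each zone's server
AUTHORITATIVE = {"cse.dtu.in": "10.0.0.7"}   # authoritative A records (name -> IPv4)

def resolve(name: str) -> str:
    """Walk the hierarchy: root -> TLD -> authoritative answer."""
    tld = name.rsplit(".", 1)[-1]
    if tld not in ROOT:
        raise KeyError(f"unknown TLD: {tld}")
    zone = ".".join(name.split(".")[-2:])
    if zone not in TLD:
        raise KeyError(f"unknown zone: {zone}")
    return AUTHORITATIVE[name]  # final, authoritative mapping
```

Each lookup step corresponds to one referral in the real hierarchy; only the last server answers authoritatively.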