
DECAP256: Computer Networks

Unit 1: Introduction to Computer Networks


Unit 1 of the Computer Networks course introduces the basic concepts of data communication and computer
networks. It starts by explaining what communication is, emphasizing that it is a two-way process between a
sender and a receiver, with feedback completing the process. The unit covers the components of a data
communication system, including the sender, receiver, message, medium, and protocols. It also highlights the
role of physical and wireless media in transmitting data between devices.

The concept of computer networks is discussed, describing them as interconnected systems that enable
resource sharing, file exchange, and communication between computers. Various types of networks are
introduced, including Local Area Networks (LAN), Wide Area Networks (WAN), and Metropolitan Area
Networks (MAN), with each network serving different purposes based on geographical scope and
connectivity.

The unit also focuses on network topologies, which define the physical or logical layout of network
components. Different topologies, such as bus, star, and ring, are discussed, with considerations for choosing
the most suitable one based on performance and resource requirements. Social issues originating from the use
of computer networks, such as security and privacy concerns, are also addressed. Finally, the unit emphasizes
the importance of fault tolerance, scalability, and security as key characteristics of computer networks.

Detailed answers to review questions


1. What are the major factors that have made computer networks an integral part of business?

The use of computer networks has become an integral part of modern business operations due to
several key factors. Firstly, communication efficiency has improved drastically with the advent of
computer networks. Businesses can now communicate internally and externally via emails, video
conferencing, and instant messaging, enhancing collaboration across different geographical locations.
Secondly, data sharing and accessibility have been simplified. Networks allow businesses to store
data on central servers, making it easier for employees to access and share information. This improves
decision-making and workflow efficiency. Thirdly, cost reduction is another major factor, as
businesses can share resources like printers, scanners, and other hardware, reducing the need for
duplicate equipment. Scalability also plays a role, as networks allow businesses to expand their IT
infrastructure easily, adding new employees or branches without major overhauls. Lastly, remote work
and cloud computing have become more feasible with secure network infrastructure, enabling
employees to work from anywhere and access critical applications and resources through the cloud.
These factors, combined with technological advancements, have made computer networks an essential
component for modern businesses to remain competitive and agile.

2. How are computer networks classified? Mention some of the important reasons for the
classification of computer networks.

Computer networks are typically classified based on their geographical coverage, architecture, and
functional purposes. Some of the main classifications include:

 Local Area Network (LAN): This type of network covers a small geographical area, such as a single
building or a campus. It is typically used within an organization for sharing resources like files and
printers.
 Wide Area Network (WAN): WANs cover large geographical areas, often spanning cities, countries,
or continents. The Internet is the most common example of a WAN.
 Metropolitan Area Network (MAN): A MAN is larger than a LAN but smaller than a WAN, often
connecting networks across a city or a large campus.
 Personal Area Network (PAN): A PAN is used for personal devices like smartphones, laptops, and
tablets, typically within a small area like a room.

The classification of networks is essential for understanding their design, scope of use, capacity, and
performance requirements. It helps businesses and organizations select the most appropriate type of
network based on factors like coverage area, the number of devices to be connected, data transfer
speed, and security concerns.

3. How is LAN characterized? Explain.

A Local Area Network (LAN) is a network designed to connect computers and devices within a
limited geographical area, such as an office building, school campus, or home. LANs are characterized
by high data transfer speeds, typically ranging from 100 Mbps to 10 Gbps, depending on the type of
technology used (e.g., Ethernet, Wi-Fi). The primary feature of a LAN is its ability to provide resource
sharing: users can share files, printers, and other devices over the network, which increases efficiency
and reduces hardware costs. Reliability is another key characteristic, as LANs typically use wired
connections (Ethernet) that offer stable and consistent performance. LANs also support the connection
of multiple devices, from computers and printers to smartphones and other peripherals. Additionally,
LANs are usually privately owned and can be managed by a single organization or individual, offering
centralized control over network security and resources. Security protocols such as firewalls,
passwords, and encryption methods are often implemented to protect the network from unauthorized
access.

4. What are the different technologies available for implementing WAN?

Wide Area Networks (WANs) use various technologies to enable long-distance data transmission. The
main technologies available for implementing WANs include:

 Leased Lines: These are private, dedicated communication circuits provided by telecommunication
companies. They offer high reliability and performance for WAN connections but can be expensive.
 Frame Relay: Frame Relay is a cost-effective WAN technology that provides reliable data transfer by
dividing data into frames. It is commonly used for connecting LANs across wide distances but is being
replaced by newer technologies like MPLS.
 Multiprotocol Label Switching (MPLS): MPLS is a technology that directs data packets based on
predefined labels instead of IP addresses, offering faster and more efficient routing. It is widely used in
large-scale enterprise WANs.
 Satellite Communication: Satellite links are used for remote or rural WAN connections where
terrestrial options may not be feasible. They offer global coverage but can have higher latency.
 Public Switched Telephone Network (PSTN): Traditional telephone lines can be used for WAN
connections, though this method is less common today due to slower speeds.
 Virtual Private Networks (VPN): VPNs allow the creation of secure connections over the public
Internet, enabling businesses to establish a private WAN using existing internet infrastructure. VPNs
use encryption protocols to ensure security.

Each of these technologies has its advantages and is selected based on the specific needs of the
business, such as cost, speed, and geographic coverage.

5. What is WAN? How does it differ from LANs and MANs? Give at least two examples of
popular WANs.

A Wide Area Network (WAN) is a network that spans a large geographical area, such as a city,
country, or even globally. WANs are designed to connect multiple LANs and MANs, allowing
organizations to communicate and share data across vast distances. WANs typically rely on leased
lines, satellite links, or public Internet infrastructure to transmit data over long distances.

The primary differences between WANs, LANs, and MANs are:

 Geographical Coverage: A LAN is confined to a small area like a home, office, or building, whereas a
MAN covers a larger area like a city, and a WAN covers even larger areas, often extending across
countries or continents.
 Speed: LANs usually offer faster data transfer speeds compared to WANs, as they operate over shorter
distances with dedicated connections. MANs fall in between, providing higher speeds than WANs but
less than LANs.
 Cost: WANs are typically more expensive to set up and maintain than LANs and MANs, as they
require advanced technology, long-distance communication infrastructure, and often third-party service
providers.

Examples of popular WANs include:

 The Internet: The largest and most widely known WAN, connecting millions of computers and
networks around the world.
 Private Corporate WANs: Large organizations often set up their own WANs to connect their global
offices and facilities, ensuring secure communication between them.

In summary, while LANs and MANs are confined to smaller areas with higher speeds and lower costs,
WANs enable large-scale connectivity across vast distances, serving the needs of global businesses and
governments.

Unit 02: Data Communication

Unit 2 of the Computer Networks course focuses on data communication, exploring its fundamental concepts,
signal classifications, transmission modes, impairments, protocols, and network standards. It begins by
defining data communication as the exchange of digital data between devices using wired or wireless
transmission media. The unit then classifies signals into periodic and non-periodic, as well as analog and
digital signals, discussing their characteristics and transmission efficiency.

Different transmission modes are explained, including simplex (one-way communication), half-duplex
(alternating two-way communication), and full-duplex (simultaneous two-way communication), with real-
world examples such as keyboards, walkie-talkies, and telephone networks. The discussion extends to
performance metrics like bandwidth, throughput, latency, packet loss, and retransmission rates, which help
assess network efficiency.

The unit also addresses transmission impairments that degrade data signals, including attenuation (signal
weakening), distortion (alteration in signal shape), and noise (unwanted signal interference). Various types of
noise, such as thermal noise, induced noise, and crosstalk, are explained along with methods to mitigate their
effects.

Protocols, which define the rules for communication between network devices, are covered extensively. The
unit explains how different layers in a network architecture interact using protocol stacks to ensure reliable
data transfer. Finally, it highlights network standards, differentiating between de facto (industry-adopted) and
de jure (legally established) standards, which ensure interoperability among devices from different
manufacturers.

Detailed answers to review questions


1. Underline the key differences between Bit Rate and Baud Rate. Also elaborate on Bit Length
and Bit Interval.

The terms Bit Rate and Baud Rate are often used interchangeably, but they have distinct meanings in
data communication. Bit Rate refers to the number of bits transmitted per second over a
communication channel. It is measured in bits per second (bps), kilobits per second (kbps), or megabits
per second (Mbps). For example, if a channel transmits 1,000 bits every second, the bit rate is 1,000
bps. On the other hand, Baud Rate refers to the number of signal changes or symbols transmitted per
second in a communication system. It is measured in baud, where each signal change or symbol can
represent multiple bits depending on the modulation technique used. If each signal represents one bit,
the baud rate and bit rate will be the same, but if each signal represents more than one bit (such as in
quadrature amplitude modulation), the bit rate will exceed the baud rate.

Bit Length refers to the physical distance a single bit occupies on the transmission medium. It equals the
propagation speed of the signal multiplied by the bit interval, so it depends on both the bit rate and how
fast the signal travels through the medium. Bit Interval, on the other hand, is the time required to
transmit a single bit. It is the inverse of the bit rate: a channel operating at 1,000 bps has a bit interval
of one millisecond.
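
As a quick worked example, the sketch below (in Python) runs through these relationships for a hypothetical 16-QAM link; the baud rate and propagation speed are assumed values, not figures from the unit:

    import math

    # Hypothetical 16-QAM link: each symbol (signal change) carries log2(16) = 4 bits.
    baud_rate = 2_500                               # symbols per second (assumed)
    bits_per_symbol = int(math.log2(16))            # 4 bits per symbol

    bit_rate = baud_rate * bits_per_symbol          # 10,000 bps: bit rate exceeds baud rate
    bit_interval = 1 / bit_rate                     # time to transmit one bit, in seconds
    propagation_speed = 2e8                         # m/s, an assumed value for copper
    bit_length = propagation_speed * bit_interval   # distance one bit occupies on the medium

    print(f"Bit rate:     {bit_rate} bps")           # 10000 bps
    print(f"Bit interval: {bit_interval * 1e6} us")  # 100.0 us
    print(f"Bit length:   {bit_length / 1000} km")   # 20.0 km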

2. Signals can be classified based on different parameters. Elaborate on the different classification categories.

Signals can be classified in various ways based on their physical properties, signal structure, and
modulation techniques. Broadly, signals are classified into analog and digital signals:

 Analog Signals: These are continuous signals that vary smoothly over time, representing information
in a continuous form. The voltage level of an analog signal can take any value within a range. Analog
signals are used for audio and video transmission, such as sound waves or television signals.
 Digital Signals: These signals consist of discrete values, typically represented by binary digits (0 and
1). Digital signals are less susceptible to noise and distortion compared to analog signals and are widely
used in modern communication systems like computers and digital phones.

Signals can also be classified based on their frequency into:

 Low-Frequency Signals: Roughly 30 kHz to 300 kHz, used for navigation aids and longwave broadcasting.
 Medium-Frequency Signals: Roughly 300 kHz to 3 MHz, used for AM radio broadcasts.
 High-Frequency and VHF Signals: Roughly 3 MHz to 300 MHz, used for shortwave radio and television
broadcasting; satellite links operate at still higher microwave frequencies.

Moreover, signals can be classified based on their modulation techniques:

 Amplitude Modulation (AM): The signal's amplitude is varied to encode information.
 Frequency Modulation (FM): The signal's frequency is varied for encoding.
 Phase Modulation (PM): The phase of the signal is altered to carry information.
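
A minimal NumPy sketch of the three techniques, using an assumed 5 Hz carrier and a 1 Hz message signal so the waveforms are easy to inspect; all parameters are illustrative:

    import numpy as np

    t = np.linspace(0, 1, 1000)               # one second of time samples
    message = np.sin(2 * np.pi * 1.0 * t)     # 1 Hz message signal (assumed)
    fc = 5.0                                  # carrier frequency in Hz (assumed, kept low for clarity)

    # AM: the carrier's amplitude follows the message.
    am = (1 + 0.5 * message) * np.sin(2 * np.pi * fc * t)

    # FM: the carrier's instantaneous frequency follows the message,
    # so the message is integrated into the phase.
    kf = 2.0                                  # frequency deviation in Hz (assumed)
    dt = t[1] - t[0]
    fm = np.sin(2 * np.pi * fc * t + 2 * np.pi * kf * np.cumsum(message) * dt)

    # PM: the carrier's phase is shifted directly by the message.
    pm = np.sin(2 * np.pi * fc * t + 0.5 * np.pi * message)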

3. Explain the different factors that can affect Network Performance. Explain the metrics that
are essential for any business to consider.

Network performance is influenced by several factors, each impacting how efficiently and reliably data
can be transmitted across a network. These factors include:

 Bandwidth: The maximum rate of data transfer across a network, measured in bits per second. A
higher bandwidth allows more data to be transmitted simultaneously, improving performance.
 Latency: The delay in transmitting data from the sender to the receiver. High latency can cause lag,
affecting real-time applications like video conferencing or online gaming.
 Packet Loss: Occurs when data packets fail to reach their destination. Packet loss can lead to
incomplete or corrupt data, negatively affecting applications like VoIP or video streaming.
 Jitter: The variation in packet arrival times, causing irregularities in the delivery of data packets. Jitter
can be problematic for real-time services, such as VoIP or video calls.
 Network Congestion: When too much data is transmitted simultaneously, it can overwhelm the
network's capacity, causing delays, packet loss, or slower throughput.

For businesses, the following network performance metrics are essential to consider:

 Throughput: The actual amount of data successfully transmitted over the network, often lower than
the bandwidth due to inefficiencies.
 Quality of Service (QoS): Refers to prioritizing certain types of traffic (such as voice or video) to
ensure better performance for critical applications.
 Error Rate: The percentage of corrupted packets transmitted over the network. Lower error rates are
crucial for maintaining data integrity.

Businesses should focus on these metrics to ensure optimal network performance, as they impact
everything from productivity to customer satisfaction.
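
To make these metrics concrete, here is a rough sketch of how they might be computed from a per-packet measurement log; the log entries and values are hypothetical:

    # Hypothetical per-packet log: (send_time_s, recv_time_s or None if lost, size_bytes)
    packets = [
        (0.00, 0.020, 1500),
        (0.01, 0.032, 1500),
        (0.02, None,  1500),   # lost packet
        (0.03, 0.049, 1500),
    ]

    received = [p for p in packets if p[1] is not None]
    latencies = [recv - sent for sent, recv, _ in received]

    loss_rate = 1 - len(received) / len(packets)      # fraction of packets lost
    avg_latency = sum(latencies) / len(latencies)     # average one-way delay
    jitter = max(latencies) - min(latencies)          # simple measure of delay variation
    duration = max(r for _, r, _ in received) - min(s for s, _, _ in packets)
    throughput_bps = sum(size * 8 for _, _, size in received) / duration

    print(f"loss: {loss_rate:.0%}, latency: {avg_latency * 1000:.1f} ms, "
          f"jitter: {jitter * 1000:.1f} ms, throughput: {throughput_bps / 1e6:.2f} Mbps")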

4. Explain how the imperfections in the transmission media cause signal impairments. Explain
the various types of impairments.

Transmission media, whether wired or wireless, can introduce imperfections that lead to signal
impairments, which degrade the quality and reliability of communication. These impairments occur due
to various physical and environmental factors affecting the signal as it travels from the sender to the
receiver. The main types of impairments include:

 Attenuation: This refers to the loss of signal strength as it travels through the transmission medium.
Over long distances, the signal weakens due to resistance in cables or atmospheric conditions in
wireless transmission, leading to the need for signal amplification.
 Noise: External interference from sources such as electrical equipment, machinery, or other signals can
introduce unwanted disturbances, which corrupt the transmitted signal. Noise can be random (thermal
noise) or caused by other devices operating at similar frequencies (cross-talk).
 Distortion: Occurs when different parts of the signal arrive at the receiver at different times due to
variations in the transmission medium. This can cause signal components to become misaligned,
leading to errors in interpretation. For example, signal distortion occurs when high-frequency signals
travel at different speeds compared to low-frequency signals.
 Interference: This occurs when external electromagnetic signals interfere with the transmission
medium. For example, in wireless communications, interference from other radio signals can degrade
the quality of the transmitted signal.
 Dispersion: In fiber optics and other transmission media, different signal frequencies can travel at
different speeds, causing the signal to spread out over time, which results in a degradation of the
signal's clarity.

These impairments affect the quality of the received signal and can cause data loss, slower transmission
speeds, or misinterpretation of the transmitted information.
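
Attenuation in particular is usually quantified in decibels. A short sketch of that calculation, with assumed power levels:

    import math

    # Assumed power levels, purely for illustration.
    p_in = 10.0    # power fed into the medium, in milliwatts
    p_out = 2.5    # power measured at the far end, in milliwatts

    loss_db = 10 * math.log10(p_in / p_out)
    print(f"Attenuation: {loss_db:.2f} dB")   # ~6.02 dB: the power was quartered
    # An amplifier restoring the original level would need the same gain in dB.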

5. Compare and contrast the various types of transmission modes. Take suitable examples to
explain.

Transmission modes refer to the manner in which data is transmitted between devices over a
communication channel. The main types of transmission modes are simplex, half-duplex, and full-
duplex:

 Simplex Mode: In simplex mode, data flows in only one direction. There is no provision for a response
or feedback from the receiver. This mode is used in applications where data transmission only needs to
go one way. An example of simplex mode is the transmission of TV signals, where the signal is sent
from the broadcasting station to the viewers without any need for feedback from the viewers.
 Half-Duplex Mode: Half-duplex transmission allows data to flow in both directions, but not
simultaneously. The transmission direction alternates, meaning one device sends data while the other
receives, but only one device can transmit at any given time. An example of half-duplex mode is
walkie-talkies, where one person speaks while the other listens, and they must take turns.
 Full-Duplex Mode: Full-duplex transmission allows for bidirectional communication at the same time.
Both devices can send and receive data simultaneously, which improves communication efficiency. An
example of full-duplex mode is a telephone call, where both parties can talk and listen at the same time
without interference.

The choice of transmission mode depends on the requirements of the communication system, such as
the need for real-time feedback, data volume, and the type of application. Full-duplex is typically used
for modern communication systems like the Internet and telephones, whereas simplex and half-duplex
are used in more specific scenarios.

Unit 03: Network Models

Unit 3 of the Computer Networks course focuses on network models, particularly the layered architecture
used in modern networking. The unit introduces the importance of breaking down complex networking
functions into manageable layers, highlighting how this approach reduces design complexity, enhances
modularity, and makes modifications and testing easier.

The unit covers two main networking models: the OSI (Open Systems Interconnection) model and the TCP/IP
model. The OSI model, developed by ISO, consists of seven layers, each performing distinct functions related
to network communication. These layers are categorized into upper layers (Application, Presentation, and
Session) that handle software-based functions and lower layers (Transport, Network, Data Link, and Physical)
responsible for data transmission and network hardware interactions. Each layer in the OSI model has defined
responsibilities, such as addressing, packetization, routing, error handling, encryption, and session
management. The encapsulation and de-encapsulation process, where data passes through each layer while
being wrapped in protocol-specific headers, is also discussed.
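
A toy sketch of encapsulation and de-encapsulation, where each layer wraps the payload in its own header; the header strings are invented placeholders, not real protocol formats:

    # Each layer prepends its own header to whatever the layer above handed down.
    def encapsulate(data: str) -> str:
        segment = "TCP_HDR|" + data      # transport layer adds port information
        packet = "IP_HDR|" + segment     # network layer adds logical (IP) addresses
        frame = "ETH_HDR|" + packet      # data link layer adds physical (MAC) addresses
        return frame

    # The receiver strips the headers in the reverse order.
    def de_encapsulate(frame: str) -> str:
        packet = frame.removeprefix("ETH_HDR|")
        segment = packet.removeprefix("IP_HDR|")
        return segment.removeprefix("TCP_HDR|")

    frame = encapsulate("GET /index.html")
    print(frame)                  # ETH_HDR|IP_HDR|TCP_HDR|GET /index.html
    print(de_encapsulate(frame))  # GET /index.html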

The TCP/IP model, which is widely used in practical networking, simplifies the OSI model into four layers:
Application, Transport, Internet (equivalent to the Network layer in OSI), and Network Interface (combining
the Data Link and Physical layers). This model is compared with the OSI model, highlighting key differences
in design philosophy and real-world implementation.

Other key topics in the unit include addressing mechanisms, which define how devices in a network
communicate. The four levels of addresses—physical (MAC), logical (IP), port, and specific—are explained
in relation to how data travels between source and destination. The unit also touches on important networking
protocols such as ARP (Address Resolution Protocol) for mapping IP addresses to MAC addresses and ICMP
(Internet Control Message Protocol) for error reporting.

By understanding the layered approach, network models, and addressing schemes, students gain a solid
foundation in how networks function and how different protocols and architectures interact to ensure reliable
data communication.

Detailed answers to review questions

1. What are the important design issues for the information exchange among
computers?

The design of information exchange among computers involves several critical factors to ensure
effective and reliable communication. These design issues include:

 Data Representation: One of the primary concerns is how data is represented and encoded. Different
systems may have different formats for representing data (e.g., text encoding or number formatting), so
it's important to standardize the format to ensure that data sent from one system can be interpreted
correctly by another system.
 Data Integrity and Error Handling: During transmission, data is often subject to errors due to noise
or other factors. Ensuring data integrity by implementing error detection and correction techniques,
such as checksums, parity bits, and retransmission protocols, is essential.
 Data Synchronization: Both sender and receiver must maintain proper synchronization during
communication to ensure that the data is sent and received correctly. This includes managing timing,
packet sequencing, and flow control to prevent data loss or corruption.
 Routing and Addressing: Efficient routing and addressing mechanisms are necessary to guide data
across networks, especially large, decentralized systems. Proper addressing schemes allow the data to
reach the correct destination, while routing protocols determine the best path to follow.
 Security: Data exchanged among computers must be protected from unauthorized access, tampering,
or eavesdropping. Encryption, authentication, and secure communication protocols are critical to
ensuring the confidentiality, integrity, and authenticity of the exchanged information.
 Scalability and Efficiency: As the number of connected devices increases, the network must scale
efficiently. Efficient use of bandwidth, resource allocation, and minimal latency are key considerations
to handle large volumes of traffic effectively.
 Interoperability: Different systems, networks, and devices must be able to communicate with each
other. Designing systems that can work together despite differences in hardware, software, or protocol
is a key challenge.

2. What are the major functions of the network layer in the ISO-OSI model? How does the packet delivery function of the network layer differ from that of the data link layer?

The Network Layer in the OSI model is responsible for the logical addressing, routing, and forwarding
of packets across the network. Its major functions include:

 Routing: Determining the best path for data to travel from the source to the destination across multiple
networks, based on factors like network topology, traffic load, and the cost of the route.
 Logical Addressing: Assigning unique network addresses (such as IP addresses) to each device on the
network to ensure that data is routed correctly.
 Packet Forwarding: After determining the optimal route, the network layer forwards packets from
one router to another until they reach the destination network.
 Fragmentation and Reassembly: The network layer may break down large packets into smaller
fragments suitable for transmission over networks with smaller maximum transmission units (MTUs).
These fragments are then reassembled at the destination.

The function of packet delivery in the Network Layer differs from the Data Link Layer in the
following ways:

 The Network Layer handles the logical addressing and routing of packets across different networks,
making sure the packet reaches its destination, even if it requires passing through multiple intermediate
networks.
 The Data Link Layer, on the other hand, is responsible for the physical transmission of data over a
specific link between two directly connected devices (e.g., within a local network). It manages how
data is packaged into frames and ensures error-free communication between adjacent nodes. While the
Data Link Layer handles local addressing (e.g., MAC addresses), the Network Layer handles global
addressing and end-to-end routing.

Thus, while both layers are involved in data transmission, the Network Layer is concerned with end-to-
end delivery across networks, while the Data Link Layer focuses on direct communication between
adjacent devices.

3. What is the purpose of layer isolation in the OSI reference model?

Layer isolation in the OSI reference model serves several key purposes:

 Modularity: Each layer of the OSI model has its own specific functionality and interacts with adjacent
layers in a well-defined manner. This isolation helps simplify the development, maintenance, and
troubleshooting of networking systems, as issues in one layer do not directly affect others. For example,
changes to the physical layer, such as adopting new hardware, don't require changes to the transport or
application layers.
 Abstraction: Each layer abstracts the underlying complexity of the layers below it. This means that
upper layers don’t need to know the details of how data is transmitted at the lower layers. For instance,
the Application Layer doesn’t need to concern itself with the details of how the data is routed or
transmitted over physical wires.
 Interoperability: By isolating layers, systems built by different vendors can communicate more
easily. Each layer is standardized, which allows products from different manufacturers to work together
as long as they follow the same layer-specific standards (e.g., TCP/IP for the transport layer or Ethernet
for the data link layer).
 Simplified Troubleshooting: If a problem arises in the network, isolating issues to a specific layer
makes it easier to identify and solve the problem. For example, if there’s an issue with data being
corrupted during transmission, it can be traced to the Data Link or Physical Layer, rather than affecting
the entire stack.

4. Why was the OSI Reference Model widely adopted? What enabled it to establish itself as a standard for data communication?

The OSI Reference Model was widely adopted due to its comprehensive, structured approach to data
communication. It provided a unified framework that offered standardized protocols and guidelines
for designing and implementing network communication. The adoption of the OSI model set it apart as
a global standard for the following reasons:

 Interoperability: The OSI model provided a common ground for different networking technologies to
communicate with each other. It promoted the development of networking standards, ensuring that
equipment and software from different vendors could interact seamlessly.
 Modular Design: By breaking down network functions into seven distinct layers, the OSI model
allowed developers to focus on specific network functionalities, reducing complexity. It also allowed
for flexibility, as developers could innovate within each layer without disrupting the entire network
architecture.
 Clear Definition of Functions: The OSI model clearly defined the functions of each layer, allowing
for easier understanding of how data flows through a network and where different types of protocols fit
into the communication process. This made it easier to design and implement effective and efficient
networks.
 Scalability and Flexibility: The model's layered approach allowed networks to scale easily and
integrate new technologies as they emerged. This made OSI adaptable to the rapid evolution of
networking hardware, software, and protocols.
 Global Standardization: The OSI model became widely recognized as the global standard for data
communication. It facilitated the development of protocols and services that could be used
internationally, creating a universal framework for networking.

5. Highlight the differences between OSI reference model and TCP/IP model.

The OSI Reference Model and the TCP/IP Model are both conceptual frameworks for understanding
and implementing networking protocols, but they differ in structure, functionality, and usage.

 Number of Layers: The OSI model consists of seven layers: Physical, Data Link, Network,
Transport, Session, Presentation, and Application. The TCP/IP model, in contrast, has four layers:
Link (or Network Interface), Internet, Transport, and Application. The OSI model is more granular,
whereas the TCP/IP model consolidates some layers.
 Layer Functions: The OSI model separates the Session and Presentation layers, which are
responsible for managing sessions and formatting data for presentation. In the TCP/IP model, these
functionalities are combined into the Application layer. This makes the TCP/IP model more
streamlined but less detailed in its differentiation between these functions.
 Protocol Orientation: The OSI model was developed by the International Organization for
Standardization (ISO) as a theoretical framework to guide the development of network protocols, while
the TCP/IP model was developed by the U.S. Department of Defense as a practical, working protocol
suite to support Internet communications. As a result, TCP/IP is more widely adopted for real-world
network implementations.
 Standardization: While OSI was developed as an open standard for networking, it did not gain the
same widespread adoption as TCP/IP, which became the de facto standard for networking, especially in
the context of the Internet.

In essence, the OSI model provides a more theoretical and detailed approach to networking, while the
TCP/IP model is more practical, with a focus on real-world Internet-based communication. Despite
these differences, both models share similar underlying principles of network architecture.

Unit 04: Physical Layer

Unit 4 of the Computer Networks course delves into the physical layer, which is the foundational layer of
network communication. This unit explores the essential role of the physical layer in transmitting raw
bitstreams over physical transmission media and ensuring efficient and reliable data transfer.

The unit starts by defining the physical layer and explaining its primary functions, including data encoding,
transmission, and synchronization. It discusses different types of transmission media, such as guided media
(twisted pair cables, coaxial cables, and fiber optics) and unguided media (radio waves, microwaves, and
infrared). Each medium has its own characteristics, advantages, and limitations based on factors like
bandwidth, attenuation, and interference.

Propagation modes of signals are also covered, explaining how signals travel in simplex, half-duplex, and
full-duplex modes. The unit further introduces networking devices such as hubs, repeaters, and modems,
which operate at the physical layer to enhance data transmission. Signal impairments like attenuation, noise,
and distortion are analyzed, along with methods to mitigate their effects and ensure data integrity.

Finally, the unit highlights the importance of bandwidth in network performance and discusses modulation
techniques used in data transmission, such as amplitude, frequency, and phase modulation. By understanding
these fundamental concepts, students gain insight into the mechanisms that enable data transmission and
networking at the most basic level.

Detailed answers to review questions


1. What are the different transmission mediums over which data communication devices can
provide service?

Data communication devices can operate over a variety of transmission mediums, each offering different
advantages and limitations. The primary transmission mediums include:

 Twisted Pair Cable: A pair of insulated copper wires twisted together, used widely for telephone
networks and Ethernet connections. It comes in two types: Unshielded Twisted Pair (UTP) and Shielded
Twisted Pair (STP).
 Coaxial Cable: A copper cable with an inner conductor, an insulating layer, a metal shield, and an outer
insulating layer. Coaxial cables are often used in cable television and broadband Internet connections.
 Fiber Optic Cable: Composed of glass or plastic fibers that carry light signals, fiber optics offer high
bandwidth and long-distance transmission with minimal loss and electromagnetic interference. They are
increasingly used for high-speed internet and backbone networks.
 Wireless Transmission: This includes technologies like radio waves, microwaves, and infrared signals.
Wireless media is often used for mobile communications, Wi-Fi, Bluetooth, and satellite
communications.
 Satellite Communication: Uses satellites in space to transmit data signals between Earth-based stations.
This medium is used for long-distance communication and broadcasting.
 Free Space Optics (FSO): This is a technology that transmits data through the atmosphere using light,
particularly laser beams. It is useful for point-to-point communication over short distances.

Each of these transmission mediums offers different characteristics such as bandwidth, speed, and range,
making them suitable for various applications in data communication.

2. What are the major limitations of twisted pair wire?

While twisted pair cables are widely used due to their low cost and flexibility, they have several
limitations:

 Limited Bandwidth: Twisted pair cables, particularly Unshielded Twisted Pair (UTP), have relatively
low bandwidth compared to other media like fiber optics, which limits their use for high-speed data
transfer.
 Susceptibility to Interference: Twisted pair cables, especially UTP, are highly susceptible to
electromagnetic interference (EMI) and radio frequency interference (RFI), which can degrade signal
quality.
 Signal Attenuation: Over long distances, the signal strength in twisted pair cables degrades, requiring
the use of repeaters or signal boosters to maintain communication quality.
 Distance Limitations: Due to attenuation and interference, twisted pair cables are not suitable for long-
distance transmission without amplification, limiting their use in larger networks.
 Security Concerns: The data transmitted over twisted pair cables can be intercepted more easily than
signals transmitted through optical fiber or wireless methods.

3. Describe how satellite communication is different from radio broadcast.

Satellite communication and radio broadcast are both wireless communication technologies, but they
differ in terms of their applications, range, and technology:

 Technology: Satellite communication uses satellites positioned in orbit around the Earth to relay signals
between distant locations, whereas radio broadcast relies on terrestrial transmitters and antennas to
broadcast signals to a local or regional area.
 Range: Satellite communication can cover vast geographical areas, including remote and international
regions, by bouncing signals off satellites. Radio broadcasts are limited to local ranges determined by
the power of the transmitter and the type of frequency used.
 Two-Way Communication: Satellite communication typically supports two-way communication,
enabling data transmission in both directions (e.g., in satellite telephony or satellite Internet), whereas
radio broadcasting is primarily one-way communication from a broadcaster to a listener.
 Bandwidth: Satellite communication typically has higher bandwidth capacity compared to radio
broadcasts, allowing for a wide range of applications including internet access, TV, and voice
communication. Radio broadcasting, on the other hand, has limited bandwidth and is used primarily for
audio transmission.

4. State with the help of a diagram the different components of a typical fiber optic link. Mention
the various components of signal loss.

A typical fiber optic link consists of the following components:

 Transmitter: The transmitter converts electrical signals into optical signals. It usually contains a laser
or LED.
 Optical Fiber Cable: The core of the fiber optic cable, where light signals travel. The core is
surrounded by the cladding, which reflects the light back into the core to prevent signal loss.
 Receiver: The receiver converts the optical signals back into electrical signals. It typically contains a
photodetector (such as a photodiode).
 Amplifiers: These may be included in long-distance fiber optic links to boost the signal strength,
compensating for attenuation.

Signal Loss Components:

 Attenuation: The gradual loss of signal strength as light travels through the fiber, caused by scattering,
absorption, and other factors.
 Dispersion: The spreading of light pulses over time, which can cause signal distortion, especially over
long distances.
 Connector Losses: Losses occur when fibers are connected or spliced together. Imperfect connections
lead to scattering or reflection of light.

A diagram would show the transmitter and receiver connected via the fiber optic cable, with amplifiers
in between to maintain signal integrity.

5. What is reflection? What happens to a beam of light as it travels to a less dense medium? What
happens if it travels to a denser medium?

Reflection occurs when a wave, such as light, bounces off a surface or boundary instead of passing
through it. In the context of fiber optics, when light travels within the fiber, it reflects off the walls of the
core due to total internal reflection.

 When light travels to a less dense medium: If light travels from a denser medium (like the core of a
fiber optic cable) toward a less dense medium (like the cladding), it bends away from the normal at the
boundary. If the angle of incidence is smaller than the critical angle, part of the light refracts through
the boundary and is lost; once the angle of incidence reaches or exceeds the critical angle, total internal
reflection occurs and the light stays inside the denser medium.
 When light travels to a denser medium: When light travels from a less dense medium to a denser
medium, it bends towards the normal, causing the angle of refraction to decrease. This allows light to
stay confined within the fiber, maintaining the integrity of the signal.
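
Snell's law makes the critical angle straightforward to compute. A short sketch using assumed refractive indices typical of a fiber core and cladding:

    import math

    n_core = 1.48      # refractive index of the core (assumed typical value)
    n_cladding = 1.46  # refractive index of the cladding (must be lower than the core's)

    # Total internal reflection occurs when the angle of incidence,
    # measured from the normal, meets or exceeds the critical angle.
    critical_angle = math.degrees(math.asin(n_cladding / n_core))
    print(f"Critical angle: {critical_angle:.1f} degrees")   # ~80.6 degrees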

6. What advantages do coaxial cables offer over twisted pair cables?

Coaxial cables have several advantages over twisted pair cables:

 Higher Bandwidth: Coaxial cables offer a much higher bandwidth than twisted pair cables, making
them more suitable for high-speed data transmission and broadband services like cable TV and internet.
 Reduced Electromagnetic Interference: Coaxial cables are shielded with a metal layer, which reduces
the impact of electromagnetic interference (EMI) and radio frequency interference (RFI) compared to
twisted pair cables.
 Longer Distance: Coaxial cables can transmit signals over longer distances without significant
degradation of signal quality, whereas twisted pair cables experience attenuation more rapidly.
 Better Security: Coaxial cables are more difficult to tap into without detection, providing more security
compared to twisted pair cables.

7. Compare fiber optic cable with UTP cable when used as transmission media in LANs.

When comparing fiber optic cables with Unshielded Twisted Pair (UTP) cables in Local Area Networks
(LANs):

 Speed and Bandwidth: Fiber optic cables support much higher speeds and bandwidth compared to UTP
cables. Fiber optics can carry data at speeds up to 100 Gbps and beyond, while UTP cables typically
support up to 10 Gbps.
 Distance: Fiber optic cables can transmit data over much longer distances than UTP cables without
signal degradation. Fiber optics can cover distances of several kilometers, whereas UTP cables are
limited to about 100 meters in length.
 Signal Integrity: Fiber optics are immune to electromagnetic interference, whereas UTP cables can
suffer from EMI and crosstalk, especially in electrically noisy environments.
 Cost: UTP cables are cheaper to install than fiber optic cables. Fiber optics require more expensive
components and installation, though prices have been decreasing.
 Future-Proofing: Fiber optic cables offer better scalability for future technologies and higher speeds,
making them a more future-proof option for expanding networks.

8. What is the purpose of cladding in an optical fiber? Discuss its density with respect to the core.

The cladding in an optical fiber serves as a reflective layer that surrounds the core. Its purpose is to
reflect light back into the core of the fiber through total internal reflection, ensuring that the light signals
stay confined within the core during transmission.

The density of the cladding is lower than that of the core. The refractive index of the cladding is
intentionally made lower than the core’s refractive index, which is crucial for the phenomenon of total
internal reflection. This difference in refractive indices allows the light to be guided through the fiber
without escaping into the cladding, maintaining signal integrity over long distances.

9. What is skin effect and how does it affect the performance of TP cables? How does coaxial cable reduce the problem of skin effect and become an appropriate medium for higher-frequency data transmission?

The skin effect refers to the tendency of alternating current (AC) to flow near the surface of a
conductor, with less current flowing through the core. This effect increases with the frequency of the
signal, meaning that high-frequency signals tend to be carried by the outer surface of the conductor.

In twisted pair cables, this results in higher resistance at higher frequencies, which causes signal
attenuation and degradation. The signal’s energy is concentrated on the surface of the wires, leading to
inefficient transmission.

Coaxial cables cope better with the skin effect because of their geometry. The central conductor carries
the signal while the surrounding metal shield carries the return current, so the electromagnetic field is
confined to the space between the two conductors. This confinement limits radiation losses and external
interference, and the comparatively large conductor surface area keeps high-frequency resistance
manageable, making coaxial cable an appropriate medium for higher-frequency data transmission.

10. Which type of transmission medium finds extensive deployment for digital transmission, and why?

Fiber optic cables are the preferred choice for digital transmission due to their significant advantages
over other media. Fiber optics offer:

 Higher bandwidth: They can transmit large amounts of data at very high speeds, making them ideal for
digital communications, such as internet, television, and high-definition video.
 Lower signal loss: Fiber optics experience very little attenuation and can transmit data over much
longer distances without the need for repeaters, making them highly efficient for long-distance digital
transmission.
 Immunity to interference: Unlike copper cables, fiber optics are immune to electromagnetic
interference, ensuring clean, high-quality signal transmission.

For these reasons, fiber optics are increasingly used for high-speed internet and backbone connections in
digital communication systems.

Unit 05: Data Link Layer - Error Detection and Correction Methods

Unit 5 of the Computer Networks course focuses on the Data Link Layer, specifically error detection and
correction methods. It introduces the importance of ensuring accurate data transmission in networks, as errors
can occur due to interference, noise, or hardware issues.

The unit begins by explaining the functions of the Data Link Layer, including framing, flow control, and error
control. It categorizes errors into single-bit errors (where only one bit in the data frame is altered) and burst
errors (where multiple bits are affected). Various error detection techniques are introduced, including the
Parity Check Method, which appends a parity bit to data to maintain an even or odd count of ones. Another
method, the Cyclic Redundancy Check (CRC), involves treating the data as a polynomial and using division
to generate a remainder that acts as a checksum. The checksum method is also discussed, where a summation
of data segments is performed to detect errors.
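
To make the parity and checksum ideas concrete, here is a minimal sketch of both: a single even-parity bit and a simple carry-folding checksum. Real protocols (and CRC in particular) use more elaborate schemes:

    def even_parity_bit(bits: str) -> str:
        # Append a bit so the total count of 1s in the transmitted word is even.
        return bits + ("1" if bits.count("1") % 2 else "0")

    def simple_checksum(segments: list[int]) -> int:
        # Sum 8-bit segments, fold any carries back in, then complement.
        total = sum(segments)
        while total > 0xFF:
            total = (total & 0xFF) + (total >> 8)
        return ~total & 0xFF

    print(even_parity_bit("1011001"))                 # 10110010 (count of 1s already even)
    print(hex(simple_checksum([0x4F, 0xA2, 0x33])))   # 0xda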

Error correction methods are also highlighted, distinguishing between error detection (which identifies errors
but does not correct them) and forward error correction (which allows data to be corrected at the receiving end
without retransmission). Techniques such as Hamming Code are explored, demonstrating how redundant bits
are used to identify and correct single-bit errors.
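
A minimal Hamming(7,4) sketch illustrating the idea: four data bits are protected by three parity bits, and the failing parity checks spell out the position of a single flipped bit:

    def hamming74_encode(d: list[int]) -> list[int]:
        # Codeword positions 1..7, with parity bits at positions 1, 2 and 4.
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4   # covers positions 3, 5, 7
        p2 = d1 ^ d3 ^ d4   # covers positions 3, 6, 7
        p3 = d2 ^ d3 ^ d4   # covers positions 5, 6, 7
        return [p1, p2, d1, p3, d2, d3, d4]

    def hamming74_correct(code: list[int]) -> list[int]:
        # Recompute each parity check; the failing checks spell out the
        # 1-indexed position of the erroneous bit (the "syndrome").
        c = code[:]
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        syndrome = s1 * 1 + s2 * 2 + s3 * 4
        if syndrome:
            c[syndrome - 1] ^= 1   # flip the bad bit back
        return c

    codeword = hamming74_encode([1, 0, 1, 1])
    corrupted = codeword[:]
    corrupted[4] ^= 1   # flip one bit "in transit"
    print(hamming74_correct(corrupted) == codeword)   # True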

By covering these topics, the unit provides a foundation for understanding how modern networks ensure data
integrity and reliability.

Detailed answers to review questions


1. What is the data link protocol?

A data link protocol is a set of rules and procedures that define how data is transmitted across a
physical link in a network. It operates at the data link layer of the OSI model and is responsible for
establishing, maintaining, and terminating connections between devices on a network. The protocol
handles data framing, error detection, and correction, and ensures that the data sent is reliably received
by the destination device. It controls access to the shared medium and provides mechanisms to detect
and manage transmission errors such as data loss or corruption. Examples of data link protocols include
Ethernet, Point-to-Point Protocol (PPP), and High-Level Data Link Control (HDLC).

2. What advantages does the Selective Repeat sliding window protocol offer over the Go-Back-N protocol?

The Selective Repeat sliding window protocol is an improvement over the Go Back N protocol in
terms of efficiency and performance in data transmission, particularly in handling lost or corrupted
packets. The key advantages of Selective Repeat over Go Back N are:

 Less Redundant Transmission: In Go Back N, when a single packet is lost or corrupted, all
subsequent packets in the window must be retransmitted, which can result in significant inefficiencies.
In contrast, Selective Repeat only retransmits the specific lost or corrupted packets, reducing
unnecessary retransmissions.
 Better Utilization of Bandwidth: Since Selective Repeat allows for the transmission of non-
retransmitted packets while waiting for a retransmission of the lost packet, it better utilizes available
bandwidth. This is especially important in networks with high error rates.
 Improved Throughput: Selective Repeat can maintain higher throughput in scenarios where packet
loss occurs, as it avoids the need to resend the entire window of data, unlike Go Back N, which resends
all packets after a lost packet.

In summary, Selective Repeat provides a more efficient way of handling data transfer with fewer
retransmissions and better bandwidth utilization compared to Go Back N.
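
A back-of-the-envelope comparison of retransmission counts under the two schemes, assuming a window of eight frames and a single lost frame:

    # Assume a window of 8 frames is in flight and frame 3 is lost.
    window = list(range(8))
    lost = 3

    go_back_n = window[lost:]       # frame 3 and every frame after it
    selective_repeat = [lost]       # only the lost frame

    print(f"Go-Back-N retransmits {len(go_back_n)} frames: {go_back_n}")
    print(f"Selective Repeat retransmits {len(selective_repeat)} frame: {selective_repeat}")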

3. What is the purpose of flow control?

Flow control is a mechanism used in data communication to manage the rate of data transmission
between two devices to ensure that the receiver is not overwhelmed by too much data. It is crucial in
maintaining an efficient and stable communication link by preventing buffer overflow at the receiving
end. Flow control is particularly important when there is a significant difference in processing speeds or
buffer sizes between the sender and receiver. Without flow control, the sender might send data too
quickly for the receiver to handle, leading to data loss. Common flow control methods include:

 Stop-and-Wait: The sender waits for an acknowledgment after sending each packet before sending the
next one.
 Sliding Window: The sender can send multiple packets before waiting for acknowledgments, but the
number of packets sent is limited by the receiver’s available buffer space.

By regulating the rate of data flow, flow control ensures that communication remains smooth and
reliable, preventing congestion and data loss.
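
A minimal sketch of stop-and-wait flow control against a slow receiver; the one-slot buffer and the frame names are invented for illustration:

    from collections import deque

    class Receiver:
        def __init__(self, buffer_size: int = 1):
            self.buffer = deque(maxlen=buffer_size)

        def accept(self, frame: str) -> bool:
            if len(self.buffer) == self.buffer.maxlen:
                return False             # buffer full: no ACK, sender must wait
            self.buffer.append(frame)
            return True                  # ACK

        def process_one(self) -> None:
            if self.buffer:
                self.buffer.popleft()    # frees space for the next frame

    receiver = Receiver()
    for frame in ["F0", "F1", "F2"]:
        # Stop-and-wait: send one frame, then block until it is acknowledged.
        while not receiver.accept(frame):
            receiver.process_one()       # stand-in for "wait for the receiver"
        print(f"{frame} acknowledged")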

4. Describe how the finite state machine model carries out protocol verification.

A Finite State Machine (FSM) model is a mathematical model used to represent the states and
transitions of a system, making it a powerful tool for protocol verification. In the context of network
protocols, FSMs are used to describe the sequence of operations, state transitions, and expected actions
of both the sender and receiver in a protocol. Here's how FSM models help in protocol verification:

 State Representation: Each state in the FSM represents a particular condition or stage in the protocol’s
operation (e.g., waiting for data, receiving data, or acknowledging data).
 Transition Rules: Transitions between states are triggered by events or actions such as receiving a
packet, sending a response, or timing out. These transitions are defined based on the rules of the
protocol.
 Verification: FSMs can be used to formally verify that a protocol operates correctly by checking all
possible state transitions and ensuring that the protocol adheres to the expected behaviors, such as error
handling, flow control, and correct message ordering. Verification tools can analyze the FSM to ensure
that all states and transitions are covered and that the protocol can handle different scenarios like
message loss, duplication, or corruption.

Through FSM-based verification, protocol designers can identify potential issues like deadlocks,
unreachable states, or incorrect state transitions, ensuring the protocol operates as intended in all
conditions.
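
As a small illustration, here is a stop-and-wait sender expressed as a transition table, with a simple verification pass that checks every transition stays within the defined state set and every state is reachable; the states and events are simplified:

    # Transition table for a stop-and-wait sender: (state, event) -> next state.
    transitions = {
        ("READY",    "send"):    "WAIT_ACK",
        ("WAIT_ACK", "ack"):     "READY",
        ("WAIT_ACK", "timeout"): "WAIT_ACK",   # retransmit and keep waiting
    }
    states = {"READY", "WAIT_ACK"}

    # Verification pass 1: every transition must stay inside the defined state set.
    assert all(s in states and nxt in states for (s, _), nxt in transitions.items())

    # Verification pass 2: every state must be reachable from the initial state.
    reachable, frontier = {"READY"}, ["READY"]
    while frontier:
        state = frontier.pop()
        for (s, _), nxt in transitions.items():
            if s == state and nxt not in reachable:
                reachable.add(nxt)
                frontier.append(nxt)
    assert reachable == states, "unreachable states found"
    print("FSM verified: transitions closed over the state set, all states reachable")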

5. What are the different data link protocols available? Why has PPP become popular?

There are several data link protocols used in networking, each with its own characteristics and
applications. Some of the key data link protocols include:

 Ethernet: The most widely used protocol for local area networks (LANs), Ethernet defines how data is
formatted and transmitted over wired LANs. It uses a method called Carrier Sense Multiple Access
with Collision Detection (CSMA/CD) to manage access to the shared medium.
 High-Level Data Link Control (HDLC): A bit-oriented protocol that provides both error detection
and correction. It is widely used in point-to-point communication and in both synchronous and
asynchronous modes.
 Point-to-Point Protocol (PPP): A widely used data link protocol that operates over serial links and
provides features such as error detection, link management, and support for multiple network layer
protocols (e.g., IP, IPX). PPP is flexible, simple, and offers dynamic configuration features, making it
suitable for dial-up connections and VPNs.
 Asynchronous Transfer Mode (ATM): A cell-based protocol used for high-speed data transmission.
It is often used in wide area networks (WANs) for real-time communications.
 Frame Relay: A protocol used for WAN communications, particularly in leased-line applications. It
provides a simple, low-overhead method of packet-switched communication.

Why PPP has become popular:

 Multilayer Support: PPP can encapsulate multiple network layer protocols (such as IP, IPv6, and
AppleTalk) within the same frame, making it a versatile choice for various network environments.
 Error Detection: PPP provides error detection mechanisms that ensure data integrity, ensuring that
corrupted data is retransmitted.
 Authentication: PPP supports authentication protocols such as PAP (Password Authentication
Protocol) and CHAP (Challenge Handshake Authentication Protocol), which help secure the
connection between devices.
 Link Management: PPP dynamically configures the link using the Link Control Protocol (LCP),
allowing automatic negotiation of link parameters, such as maximum transmission unit (MTU) size and
authentication options.

These advantages have made PPP a standard choice for establishing point-to-point connections,
especially in dial-up and VPN applications.

Unit 06: Data Link layer - Flow and Error Control Protocols

Unit 6 of the Computer Networks course focuses on data link layer flow and error control protocols, which are
crucial for reliable data transmission in networks. It begins by explaining the need for flow control
mechanisms to prevent overwhelming a receiver with data it cannot process quickly enough. The two primary
flow control techniques discussed are Stop-and-Wait and Sliding Window protocols. The Stop-and-Wait
protocol ensures that a sender transmits a frame and waits for an acknowledgment before sending the next
frame, which is simple but inefficient for large data transfers. In contrast, the Sliding Window protocol allows
multiple frames to be sent before requiring an acknowledgment, significantly improving efficiency.

Error control mechanisms are also covered in detail, ensuring that data corruption or loss during transmission
does not impact communication. Automatic Repeat Request (ARQ) strategies, including Stop-and-Wait ARQ,
Go-Back-N ARQ, and Selective Repeat ARQ, are introduced to handle retransmissions in case of errors.
Stop-and-Wait ARQ retransmits a frame if an acknowledgment is not received within a specified time. Go-
Back-N ARQ retransmits all frames starting from the last acknowledged frame, while Selective Repeat ARQ
retransmits only the erroneous frames, optimizing bandwidth usage.

The unit also discusses the role of acknowledgments (ACK) and negative acknowledgments (NAK) in error
control, ensuring data integrity and synchronization between sender and receiver. By understanding these
fundamental protocols, students gain insight into how modern networks maintain reliable and efficient data
transmission despite challenges such as network congestion and transmission errors.

Detailed answers to review questions

1. What is the data link protocol?

A data link protocol refers to a set of rules that govern the transmission of data across a
communication link between two devices within a network. It operates at the second layer (Data Link
Layer) of the OSI model and ensures reliable communication by managing data framing, error
detection, and flow control. The protocol is responsible for the proper packaging of data into frames,
ensuring the data is transmitted without errors, and reassembling the frames at the destination. It also
handles addressing, using MAC (Media Access Control) addresses, and manages how devices share the
communication medium (especially in shared environments like Ethernet). Common examples of data
link protocols include Ethernet, HDLC (High-Level Data Link Control), and PPP (Point-to-Point
Protocol).

2. What advantages does the Selective Repeat sliding window protocol offer over the Go Back N
protocol?

The Selective Repeat sliding window protocol offers several significant advantages over the Go Back
N protocol, especially in environments with high error rates. The key advantages include:

 Efficiency in Data Transmission: Unlike Go Back N, where all frames after a lost or corrupted frame
are retransmitted, Selective Repeat only retransmits the specific frame(s) that were lost or damaged.
This prevents unnecessary retransmissions and makes better use of bandwidth, especially in networks
with occasional packet loss.
 Higher Throughput: Since Selective Repeat only retransmits the problematic frames, the sender can
continue transmitting other frames that were successfully received. This results in a higher throughput,
especially when frame losses are infrequent.
 Lower Latency: With Selective Repeat, as long as the frames are correctly received, they don’t need to
be retransmitted, reducing delays compared to Go Back N, where retransmissions may cause
substantial delays for the entire window.

In summary, Selective Repeat improves efficiency by reducing redundant retransmissions, enhancing
throughput, and reducing network congestion compared to Go Back N.

3. What is the purpose of flow control?

Flow control in data communications refers to the process of managing the rate of data transmission
between two devices to ensure that the receiver is not overwhelmed by incoming data. Without flow
control, if a sender transmits data too quickly for the receiver to process or store, the receiver’s buffer
may overflow, leading to data loss. Flow control helps maintain a balance between the sender’s
transmission speed and the receiver’s ability to process and store the data. It also ensures that data is
transmitted at a rate that is manageable for both devices involved in the communication. Common flow
control mechanisms include:

 Stop-and-Wait: The sender transmits one frame at a time and waits for an acknowledgment before
sending the next one.
 Sliding Window: This allows multiple frames to be sent before requiring an acknowledgment, but with
a limit on the number of frames that can be in transit at any given time.

Flow control is essential for preventing data congestion, maintaining data integrity, and ensuring
efficient communication.
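
To make the two mechanisms concrete, here is a minimal Python sketch of the sender-side bookkeeping in a sliding window protocol; the window size, frame count, and print-based "channel" are invented for illustration, and real protocols add sequence-number wrapping, timers, and retransmission on top of this. Setting WINDOW_SIZE to 1 degenerates the loop into Stop-and-Wait.

```python
WINDOW_SIZE = 4  # assumed window of 4 outstanding frames

def send_frames(total_frames):
    base = 0      # oldest unacknowledged frame
    next_seq = 0  # next frame to transmit
    while base < total_frames:
        # Transmit while the window is not full.
        while next_seq < total_frames and next_seq < base + WINDOW_SIZE:
            print(f"send frame {next_seq}")
            next_seq += 1
        # Simulate the acknowledgment of the oldest outstanding frame.
        print(f"recv ACK  {base}")
        base += 1  # the window slides forward

send_frames(8)
```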

4. Describe how the finite state machine model carries out protocol verification.

A Finite State Machine (FSM) is a mathematical model used to represent the various states of a
protocol and the transitions between these states based on events or inputs. In the context of protocol
verification, FSMs are used to ensure that the protocol behaves correctly under all possible conditions
by systematically checking state transitions and ensuring compliance with the protocol rules. Here's
how FSMs are used for protocol verification:

 State Representation: FSM models represent the protocol's behavior in terms of its states, such as
waiting for a packet, receiving data, sending an acknowledgment, etc.
 Transition Rules: FSM defines transitions between states, which occur based on events like receiving
a packet, sending an acknowledgment, or encountering an error.
 Protocol Verification: FSMs help in verifying whether the protocol will handle all potential situations
correctly. Verification tools simulate the behavior of the protocol under different scenarios and check
for any unreachable states, deadlocks, or incorrect state transitions. This helps ensure that the protocol
operates as expected in all conditions.

Thus, FSM-based verification is a powerful tool for detecting flaws in the protocol's design, ensuring
its robustness in a variety of operating conditions.
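
As a toy illustration of the approach, the Python sketch below models a Stop-and-Wait-style sender as a finite state machine and runs the kind of reachability check that verification tools automate; the state and event names are invented rather than taken from any standard.

```python
# Transition table: (current state, event) -> next state
TRANSITIONS = {
    ("ready",   "send"):    "waiting",
    ("waiting", "ack"):     "ready",
    ("waiting", "timeout"): "waiting",  # retransmit and keep waiting
}

def reachable_states(start):
    """Explore every state reachable from `start` via the transition table."""
    seen, frontier = {start}, [start]
    while frontier:
        state = frontier.pop()
        for (src, _event), dst in TRANSITIONS.items():
            if src == state and dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return seen

all_states = {src for src, _ in TRANSITIONS} | set(TRANSITIONS.values())
unreachable = all_states - reachable_states("ready")
print("unreachable states:", unreachable or "none")  # expected: none
```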

5. What are the different data link protocols available? Why has PPP become popular?

There are several data link protocols available, each catering to different network types and
communication requirements. Some of the widely used protocols include:

 Ethernet: Predominantly used in local area networks (LANs), Ethernet defines the rules for packet
structure, addressing, and access to the physical medium.
 HDLC (High-Level Data Link Control): A bit-oriented protocol used in point-to-point
communication. It ensures the reliable transmission of data between two devices and is often used in
WAN connections.
 PPP (Point-to-Point Protocol): PPP is widely used in serial connections, such as dial-up Internet or
VPN connections. It provides error detection, authentication, and encapsulation of various network
layer protocols (e.g., IP, IPX). PPP allows for dynamic configuration and supports link establishment,
maintenance, and termination.
 Frame Relay: A WAN protocol used for connecting devices over long distances, offering efficient data
transfer and relatively low overhead.
 ATM (Asynchronous Transfer Mode): A cell-based protocol used for high-speed networking, often
used in large-scale networks like broadband services.

Why PPP has become popular: PPP has become a popular choice for point-to-point communication
due to its flexibility and ease of use. It supports multiple network protocols, provides robust error
detection, offers efficient link management through the Link Control Protocol (LCP), and allows for
authentication using protocols like PAP and CHAP. It also allows dynamic configuration of the
connection, which is particularly useful for dial-up and VPN applications.

6. How does the data link layer accomplish the transmission of data from the source network
layer to the destination network layer?

The data link layer is responsible for the reliable transmission of data between two devices over a
physical medium. Here's how it accomplishes this task:

1. Framing: The data link layer receives the raw data from the network layer (Layer 3) and breaks it into
smaller units called frames. Each frame contains a header with addressing information (e.g., MAC
address) and control information (e.g., error checking), followed by the payload, which contains the
data from the network layer.
2. Error Detection and Correction: The data link layer adds error detection mechanisms, such as
checksums or CRC (Cyclic Redundancy Check), to the frame header. Upon receiving the frame, the
destination device checks for errors and requests a retransmission if any errors are detected.
3. Flow Control: The data link layer manages the rate of data transmission using flow control methods
(like sliding window or stop-and-wait), ensuring that the sender does not overwhelm the receiver.
4. Addressing: The data link layer uses physical addresses (such as MAC addresses) to uniquely identify
devices on the network. This ensures that the data is directed to the correct destination.
5. Medium Access Control: The data link layer also manages how devices share the communication
medium. In Ethernet, for example, it uses CSMA/CD (Carrier Sense Multiple Access with Collision
Detection) to avoid collisions when two devices try to send data simultaneously.

By performing these tasks, the data link layer ensures that data is transmitted reliably and efficiently
from the source to the destination over the physical medium.
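
For instance, the receive-side error check from step 2 can be sketched with Python's built-in CRC-32 (zlib.crc32) standing in for the link's checksum; real protocols define their own polynomials and frame layouts, but the accept-or-request-retransmission logic is the same.

```python
import zlib

payload = b"data handed down from the network layer"
frame = payload + zlib.crc32(payload).to_bytes(4, "big")  # append CRC trailer

received = frame  # pretend this crossed the physical medium
data, trailer = received[:-4], received[-4:]
if zlib.crc32(data).to_bytes(4, "big") == trailer:
    print("frame accepted")
else:
    print("frame corrupted -> request retransmission")
```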

7. What are the major advantages of the Stop and Wait Automatic Repeat Request (ARQ)?

The Stop-and-Wait ARQ protocol is a simple yet effective mechanism used in data communication
for reliable transmission. The key advantages of Stop-and-Wait ARQ are:

 Simplicity: The Stop-and-Wait ARQ protocol is easy to implement due to its straightforward approach.
After transmitting a frame, the sender stops and waits for an acknowledgment before sending the next
frame.
 Error Control: The protocol ensures that any frame lost or corrupted during transmission is detected.
If an acknowledgment is not received within a specified timeout period, the sender will retransmit the
frame, ensuring reliable delivery.
 Flow Control: Since the sender must wait for an acknowledgment before sending the next frame, the
flow of data is inherently controlled, preventing the receiver from being overwhelmed by too many
frames.

Despite its simplicity, Stop-and-Wait ARQ can be inefficient in high-latency networks, as the sender
must wait for an acknowledgment before sending more data. However, its reliability and ease of
implementation make it a good choice for certain use cases.
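
A rough Python sketch of this send-wait-retransmit loop is shown below; the 30% acknowledgment-loss rate and the retry limit are invented parameters used only to exercise the timeout path.

```python
import random

def channel_delivers_ack():
    return random.random() > 0.3  # assume 30% ACK loss for illustration

def send_with_arq(frame, max_retries=5):
    for attempt in range(1, max_retries + 1):
        print(f"send {frame!r} (attempt {attempt})")
        if channel_delivers_ack():
            print("ACK received -> proceed to next frame")
            return True
        print("timeout, no ACK -> retransmit")
    return False  # give up after max_retries attempts

random.seed(1)
send_with_arq("frame 0")
```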

8. Explain how damaged frames are managed in the Selective Repeat ARQ.

In the Selective Repeat ARQ (Automatic Repeat Request) protocol, damaged frames are managed
efficiently by retransmitting only the specific frame(s) that were corrupted or lost during transmission.
This approach is more efficient than protocols like Go-Back-N ARQ, which would retransmit all
frames in a window after a single error. Here’s how Selective Repeat handles damaged frames:

 Error Detection: Each frame in Selective Repeat contains an error-detection mechanism, such as a
CRC (Cyclic Redundancy Check). When the receiver detects a corrupted frame, it sends a negative
acknowledgment (NACK) or simply does not acknowledge the frame.
 Selective Retransmission: The sender maintains a buffer of sent frames and retransmits only the
frames that were not successfully received or acknowledged by the receiver. This reduces redundant
transmissions and improves bandwidth utilization.
 Receiver Buffering: The receiver can buffer out-of-order frames and process them once the missing or
damaged frame is successfully retransmitted. This ensures that the receiver does not need to discard
frames that are received correctly but are part of a sequence that includes a missing frame.

By retransmitting only the necessary frames and allowing out-of-order frames to be stored, Selective
Repeat ARQ significantly improves the efficiency of data transmission in networks where frame loss or
corruption is common.
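
The receiver-side behavior can be illustrated with a short Python sketch (sequence numbers and payloads are invented): out-of-order frames are buffered in a dictionary, and an in-order run is delivered to the upper layer as soon as the missing frame is retransmitted.

```python
buffer, expected = {}, 0

def receive(seq, data):
    global expected
    buffer[seq] = data         # accept and buffer even if out of order
    while expected in buffer:  # deliver any in-order run we now have
        print(f"deliver frame {expected}: {buffer.pop(expected)}")
        expected += 1

receive(0, "a")  # delivered immediately
receive(2, "c")  # buffered; frame 1 was damaged in transit
receive(3, "d")  # buffered
receive(1, "b")  # retransmission arrives -> frames 1, 2, 3 all delivered
```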
Unit 07: Data Link Layer - Medium Access Control

Unit 7 of the Computer Networks course explores Medium Access Control (MAC) protocols, which manage
how devices share a common transmission medium efficiently while minimizing collisions. The unit begins
by introducing the two sublayers of the Data Link Layer: the Logical Link Control (LLC) sublayer,
responsible for error control and framing, and the Medium Access Control (MAC) sublayer, which regulates
access to the shared transmission medium.

The unit categorizes MAC protocols into three main types: random access, controlled access, and
channelization protocols. Random access protocols, such as ALOHA and Carrier Sense Multiple Access
(CSMA), rely on devices transmitting data without prior coordination. Pure ALOHA allows devices to send
data whenever they have it, leading to high collision rates, whereas Slotted ALOHA reduces collisions by
enforcing time slots. CSMA improves efficiency by requiring devices to sense the medium before
transmitting. It comes in variations like CSMA/CD (Collision Detection) and CSMA/CA (Collision
Avoidance), used in Ethernet and wireless networks, respectively.

Controlled access protocols, including polling and token passing, ensure only one device transmits at a time,
reducing collisions but adding overhead. Token Ring and Token Bus networks use token-based access, where
a device must receive a special frame (token) before transmitting. Channelization protocols, such as
Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), and Code Division
Multiple Access (CDMA), allocate separate frequency bands, time slots, or unique codes to devices, ensuring
simultaneous transmission with minimal interference.

By examining these protocols, students gain an understanding of how networks efficiently manage multiple
devices communicating over a shared medium, balancing factors like collision prevention, fairness, and
network performance.

Detailed answers to review questions


1. How can a collision be avoided in a CSMA/CD network?

In a Carrier Sense Multiple Access with Collision Detection (CSMA/CD) network, collisions are
inevitable when multiple devices try to send data over the same channel at the same time. However,
there are mechanisms in place to minimize and handle collisions. Collisions can be avoided (or at least
reduced) through several steps:

 Carrier Sensing: Before sending data, a device listens to the network to check if the channel is already
in use. If the channel is free, it begins transmission. If the channel is busy, the device waits until it
becomes idle.
 Collision Detection: While sending data, the transmitting device continues to listen to the channel. If it
detects a collision (i.e., if the signal on the network differs from what the device is transmitting), it
stops transmitting immediately.
 Backoff Mechanism: After a collision is detected, both devices involved in the collision stop
transmitting and wait for a random amount of time before attempting to send their data again. This
random backoff helps reduce the chances of further collisions when they retry.

These methods reduce the occurrence of collisions, though they cannot entirely eliminate them in a
CSMA/CD network. Collisions still happen, especially in highly congested networks, but the backoff
and retransmission protocols help ensure that communication eventually succeeds.
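
The random backoff is conventionally implemented as binary exponential backoff: after the n-th collision a station waits a random number of slot times drawn from 0 to 2^n - 1. The Python sketch below uses the classic 10 Mbps Ethernet slot time of 51.2 microseconds and the usual cap of 10 doublings; treat it as an illustration, not a full MAC implementation.

```python
import random

SLOT_TIME_US = 51.2  # slot time of classic 10 Mbps Ethernet, in microseconds

def backoff_delay(collision_count):
    k = min(collision_count, 10)         # cap the doubling at 10 collisions
    slots = random.randint(0, 2**k - 1)  # pick a random slot in [0, 2^k - 1]
    return slots * SLOT_TIME_US

random.seed(0)
for n in range(1, 5):
    print(f"collision {n}: wait {backoff_delay(n):.1f} us")
```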
2. Compare and contrast CSMA/CD and token passing access methods.

CSMA/CD (Carrier Sense Multiple Access with Collision Detection) and token passing are two
popular methods for controlling access to a shared communication medium, but they differ significantly
in how they manage data transmission:

 Collision Handling:
o CSMA/CD: This method relies on detecting and handling collisions. Devices listen to the channel
before transmitting to ensure it is clear. If a collision occurs, the devices involved stop transmitting and
wait for a random backoff period before trying again. The efficiency of CSMA/CD can degrade in
highly congested networks due to frequent collisions.
o Token Passing: In this method, a "token" (a special data packet) is passed around the network in a
controlled manner. Only the device holding the token is allowed to transmit data. There are no
collisions, as the token ensures only one device transmits at a time.
 Efficiency:
o CSMA/CD: The efficiency of CSMA/CD decreases as network traffic increases because the
probability of collisions rises, leading to delays and retransmissions.
o Token Passing: Token passing provides a more predictable and efficient way of transmitting data, as
the token is only passed from one device to the next in a controlled sequence. This avoids collisions
altogether, making it more efficient in networks with high traffic.
 Usage:
o CSMA/CD: Commonly used in Ethernet networks, particularly in shared media environments like
coaxial cable or older hub-based LANs.
o Token Passing: Commonly used in Token Ring and Fiber Distributed Data Interface (FDDI) networks,
where a physical token circulates through the network.

In summary, CSMA/CD is more suitable for environments where the network is lightly used, but it can
suffer from performance issues in high traffic. Token passing, on the other hand, avoids collisions
entirely and is more efficient in high-traffic environments.

3. Is Slotted Aloha always better than Aloha? Explain your answer with justification.

Slotted Aloha is generally considered more efficient than Aloha due to its structured approach to
transmission. Both Aloha and Slotted Aloha are random access protocols used to manage data
transmission in shared communication channels, but they differ in how they handle the timing of data
transmission.

 Aloha: In the Aloha protocol, devices transmit data whenever they have data to send, without checking
whether the channel is currently in use. If two devices transmit at the same time, a collision occurs, and
both devices must retransmit their data. The major drawback of Aloha is that it allows for uncontrolled
access, leading to a higher probability of collisions, especially as the network load increases.
 Slotted Aloha: Slotted Aloha improves on Aloha by dividing time into fixed-length slots. Devices are
required to send their data only at the beginning of a time slot. If a collision occurs, devices will wait
for the next slot to attempt retransmission. This synchronization reduces the chances of collisions
compared to pure Aloha, as it minimizes the overlap of transmissions by allowing devices to only
transmit at predetermined intervals.
 Comparison:
o Efficiency: Slotted Aloha is more efficient than Aloha, particularly in high-traffic situations. In Aloha,
the system has a throughput of about 18.4% of the total available bandwidth due to frequent collisions.

In Slotted Aloha, the throughput can reach up to 36.8%, almost doubling that of pure Aloha, as the
fixed time slots reduce the possibility of collision.
o Collision Probability: Slotted Aloha reduces the probability of collision compared to Aloha. In Aloha,
any random overlap can cause a collision, while Slotted Aloha ensures that only devices that are
waiting for the next slot can transmit, reducing random overlaps.

However, Slotted Aloha is not always "better" in every scenario. In situations where the network has
very low traffic, the additional complexity of synchronizing time slots may not provide significant
improvements over Aloha. Therefore, Slotted Aloha is generally better in environments with moderate
to high traffic, where the risk of collisions is a significant concern.
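
The 18.4% and 36.8% figures come from the classical throughput formulas S = G·e^(-2G) for pure Aloha and S = G·e^(-G) for Slotted Aloha, where G is the offered load; the maxima occur at G = 0.5 and G = 1 respectively, as a quick check in Python confirms.

```python
import math

G_pure, G_slotted = 0.5, 1.0
print(f"pure Aloha peak:    {G_pure * math.exp(-2 * G_pure):.3f}")    # 0.184
print(f"slotted Aloha peak: {G_slotted * math.exp(-G_slotted):.3f}")  # 0.368
```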

4. Explain the two functionality-oriented sublayers of the Data Link Layer.

The Data Link Layer (Layer 2 of the OSI model) is responsible for the reliable transmission of data
between two directly connected nodes over a physical medium. It is divided into two sublayers, each of
which performs specific functions:

 Logical Link Control (LLC) Sublayer:
o The LLC sublayer is responsible for managing communication between the network layer and the
media access control sublayer. It provides a standardized interface for the upper layers, including error
detection and flow control, ensuring that data is transferred reliably across the network.
o LLC provides functions like addressing and multiplexing for different network layer protocols. It can
handle different types of frame formats and can support multiple protocols, such as IP, IPX, and others,
on the same physical network.
 Media Access Control (MAC) Sublayer:
o The MAC sublayer is responsible for the actual process of accessing the transmission medium. It
controls how devices on the network take turns transmitting data over the shared medium, ensuring that
no two devices transmit simultaneously, which could result in a collision.
o The MAC sublayer handles tasks such as framing, addressing (using MAC addresses), and determining
when and how devices can transmit data using specific protocols (e.g., Ethernet’s CSMA/CD or token
passing in Token Ring).

Together, the LLC and MAC sublayers ensure reliable and efficient data communication between
devices on the same network, with the LLC handling higher-level control and the MAC focusing on
access to the transmission medium.

5. List the various Multiple Access Protocols and explain the various Random Access Protocols.

Multiple Access Protocols manage how multiple devices share a communication medium in a
network. These protocols are essential in determining when each device can transmit data, especially
when the channel is shared. Here are the major types of multiple access protocols:

 Random Access Protocols: Devices transmit whenever they have data to send, without first checking
whether the medium is idle. These protocols are generally simpler but can lead to collisions. Key
random access protocols include:
o Aloha: As discussed earlier, in Aloha, devices transmit whenever they have data to send. If there is a
collision, the sender will retransmit after a random delay. Aloha is simple but not very efficient because
collisions are frequent.
o Slotted Aloha: An improvement over Aloha, Slotted Aloha divides time into slots, allowing devices to
transmit only at the beginning of these time slots. This reduces collisions and increases efficiency.
o CSMA (Carrier Sense Multiple Access): In CSMA, devices listen to the channel before transmitting.
If the channel is idle, they transmit; otherwise, they wait. While CSMA reduces collisions by ensuring
the channel is free, it can still result in collisions if multiple devices sense the channel as idle and
transmit simultaneously.
o CSMA/CD (Carrier Sense Multiple Access with Collision Detection): This is an enhancement of
CSMA that detects collisions while transmitting. If a collision is detected, the devices involved stop
transmitting and wait for a random period before retransmitting. It is commonly used in Ethernet
networks.
 Controlled Access Protocols: These protocols avoid the randomness of random access methods by
using a central controller or specific rules for access. Examples include:
o Token Passing: In this method, a special token circulates around the network. Only the device holding
the token is allowed to transmit, ensuring that there is no conflict for the medium.
o Polling: A central device (such as a master in a network) polls each device to see if it has data to
transmit. This approach ensures that only one device transmits at a time, but it requires more overhead
from the central device.

In conclusion, random access protocols like Aloha, Slotted Aloha, and CSMA/CD are best for networks
with bursty or unpredictable traffic, while controlled access protocols like Token Passing are better for
more predictable and controlled environments.

Unit 08: Network Layer - Logical Addressing

Unit 8 of the Computer Networks course focuses on the Network Layer, particularly logical addressing. It
begins by explaining the concept of the Internet Protocol (IP) and its role in uniquely identifying devices
within a network. The unit differentiates between IPv4 and IPv6, highlighting their structural differences,
address formats, and limitations. IPv4, a 32-bit addressing system, is currently the most widely used, but its
limitations in address availability have led to the development of IPv6, a 128-bit addressing system designed
to accommodate the growing number of internet-connected devices.

The unit explores IP addressing types, including unicast, multicast, and broadcast addressing, and classful
versus classless addressing schemes. It explains subnetting and supernetting, which optimize IP address
allocation and improve routing efficiency. Network Address Translation (NAT) is discussed as a method to
enable multiple devices in a private network to share a single public IP address, enhancing security and
conserving address space.

Additionally, the Address Resolution Protocol (ARP) and Reverse ARP (RARP) are introduced as essential
mechanisms for mapping IP addresses to physical MAC addresses and vice versa. The unit emphasizes the
importance of efficient logical addressing for proper data delivery across networks and the role of the
Network Layer in ensuring seamless communication between devices.

Detailed answers to review questions


1. What is the role of the Data Link Layer in computer networks, and how does it ensure reliable
communication?

The Data Link Layer (DLL) is the second layer in the OSI model and plays a crucial role in ensuring
reliable communication between devices on the same network. It is responsible for framing data,
detecting errors, and managing flow control. One of the key tasks of the Data Link Layer is to divide
the data into frames, which are smaller, manageable units of data that can be transmitted over the
physical medium. This layer also implements error detection and correction methods such as parity
checks, Cyclic Redundancy Checks (CRC), and checksum algorithms, which help identify and fix
errors that occur during transmission. In addition to error handling, the Data Link Layer manages flow
control, ensuring that the sender does not overwhelm the receiver with too much data at once. Protocols
like Stop-and-Wait and Sliding Window are used to regulate the amount of data in transit and ensure
that the receiver can process the information efficiently without data loss. Through these functions, the
Data Link Layer ensures the reliability and integrity of the communication between directly connected
devices.

2. Explain the differences between IPv4 and IPv6 addressing in computer networks. Why is IPv6
necessary?

IPv4 and IPv6 are both Internet Protocol versions used for assigning unique IP addresses to devices in a
network, but they differ significantly in structure, capacity, and functionality. IPv4 uses a 32-bit address
space, which allows for approximately 4.3 billion unique IP addresses. While this seemed sufficient at
the time of its creation, the rapid growth of internet-connected devices has led to IPv4 address
exhaustion. IPv6 was developed to solve this problem by using a 128-bit address space, which provides
an astronomically larger number of unique addresses—around 340 undecillion addresses (3.4 × 10^38),
ensuring that the world’s devices have unique identifiers well into the future. Apart from address
capacity, IPv6 also simplifies network configuration with features such as stateless address
autoconfiguration, which allows devices to automatically generate their IP addresses without the need
for a central DHCP server. Additionally, IPv6 enhances security by incorporating IPsec, a protocol for
secure communication, as a mandatory feature. In summary, IPv6 is necessary to address the
limitations of IPv4, particularly the shortage of available IP addresses, and to support the growing
number of connected devices in the modern world.
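
The difference in scale is easy to verify directly, and Python's standard ipaddress module parses both formats; the addresses below come from the reserved documentation ranges.

```python
import ipaddress

print(2**32)   # 4294967296 -> roughly 4.3 billion IPv4 addresses
print(2**128)  # about 3.4e38 IPv6 addresses

v4 = ipaddress.ip_address("192.0.2.1")    # IPv4 documentation address
v6 = ipaddress.ip_address("2001:db8::1")  # IPv6 documentation address
print(v4.version, v6.version)             # 4 6
```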

3. What is the purpose of flow control in networking, and what are the differences between Stop-
and-Wait and Sliding Window protocols?

Flow control is essential in networking because it prevents a sender from overwhelming a receiver with
too much data, ensuring that both devices can communicate effectively without causing congestion or
loss of data. There are two primary flow control mechanisms discussed in networking: Stop-and-Wait
and Sliding Window. The Stop-and-Wait protocol is simpler and involves the sender sending a single
frame of data and waiting for an acknowledgment from the receiver before sending the next frame.
While this approach is reliable, it is inefficient for large data transfers because the sender must wait for
an acknowledgment before proceeding, leading to idle time. On the other hand, the Sliding Window
protocol allows the sender to transmit multiple frames without waiting for an acknowledgment of each
frame. The receiver can send acknowledgments for several frames at once, and the sender can "slide"
its window forward to send additional frames while still waiting for acknowledgments for earlier
frames. This approach significantly improves efficiency, particularly in high-latency environments, by
reducing the idle time and allowing multiple frames to be in transit simultaneously. The Sliding
Window protocol can be further optimized with variations like Go-Back-N and Selective Repeat ARQ,
which handle retransmissions in case of errors.

4. What is Medium Access Control (MAC) in the context of computer networks, and how do
different MAC protocols handle access to shared communication channels?

Medium Access Control (MAC) is a sublayer of the Data Link Layer that regulates how devices access
and share a common communication medium, especially in networks where multiple devices must
transmit over the same channel. MAC protocols are essential for preventing collisions and ensuring
efficient use of the shared medium. There are three primary types of MAC protocols: random access,
controlled access, and channelization. Random access protocols, such as ALOHA and Carrier Sense
Multiple Access (CSMA), allow devices to transmit whenever they have data to send. In ALOHA,
devices send data whenever they have it, which can lead to collisions, while CSMA improves this by
requiring devices to listen to the channel before transmitting. CSMA further evolves into CSMA/CD
(Collision Detection) for Ethernet networks and CSMA/CA (Collision Avoidance) for wireless
networks to reduce the likelihood of collisions. Controlled access protocols like polling and token
passing ensure that only one device can transmit at a time, thus avoiding collisions but introducing
overhead, as a centralized or distributed control mechanism is required. Finally, channelization
protocols, including Time Division Multiple Access (TDMA), Frequency Division Multiple Access
(FDMA), and Code Division Multiple Access (CDMA), allocate separate resources such as time slots,
frequency bands, or codes for each device, allowing multiple devices to communicate simultaneously
without interfering with each other. By managing access to the medium, MAC protocols ensure that
devices can share the network efficiently and fairly while minimizing collisions.

5. How does the concept of subnetting enhance network management, and what is the role of a
subnet mask?

Subnetting is a technique used in IP networking to divide a larger network into smaller, more
manageable sub-networks or subnets. This helps in optimizing the use of available IP addresses,
improving network performance, and increasing security by isolating different segments of a network.
A subnet mask is a 32-bit binary number used to distinguish the network portion and the host portion of
an IP address. The subnet mask "masks" the network portion of the IP address, leaving the host portion
to identify individual devices within a subnet. For example, in IPv4 addressing, a common subnet mask
is 255.255.255.0, which means that the first three octets (24 bits) represent the network part of the
address, and the last octet (8 bits) represents the host part. Subnetting allows network administrators to
create multiple smaller networks within a large network, each with its own range of IP addresses,
thereby reducing network congestion, optimizing routing, and improving security by limiting broadcast
traffic to smaller groups of devices. By using subnetting, organizations can make more efficient use of
their IP address space and tailor the network's structure to meet specific needs, such as isolating
departments, controlling traffic flow, or securing sensitive data.
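
As a worked example, the sketch below uses Python's standard ipaddress module to split the 192.0.2.0/24 documentation block into four /26 subnets and print each subnet's usable host range.

```python
import ipaddress

net = ipaddress.ip_network("192.0.2.0/24")
print("mask:", net.netmask)  # 255.255.255.0

for subnet in net.subnets(new_prefix=26):  # four /26 subnets of 64 addresses
    hosts = list(subnet.hosts())           # usable hosts (no network/broadcast)
    print(subnet, "->", hosts[0], "-", hosts[-1])
```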

These review questions and answers cover key concepts and protocols in networking, helping to
reinforce understanding and preparation for exams or practical applications in computer networks.

Unit 09: Network Layer – Routing

Unit 9 of the Computer Networks course focuses on Routing in the Network Layer. The unit introduces the
essential concept of routing, which is responsible for selecting paths in a network to ensure that data packets
reach their destination. It discusses various routing techniques, including unicast, broadcast, and multicast
routing, explaining their respective applications and challenges.

The unit compares routing with flooding, where flooding involves sending data packets to all possible paths
without considering destination information. In contrast, routing uses algorithms to determine the most
efficient path. The two main types of routing algorithms are adaptive and non-adaptive. Adaptive algorithms
change their routing paths based on network conditions like congestion, whereas non-adaptive algorithms use
predetermined static paths.

Unicast routing, the most common form of routing, involves sending data from one sender to one receiver.
The unit highlights the various unicast routing protocols like RIP (Routing Information Protocol) and OSPF
(Open Shortest Path First), explaining their functionalities and scenarios where they are used. The unit also
covers broadcast algorithms, used for sending data to all devices in a network, and multicast routing, which
involves sending data to multiple, but not all, devices. Multicast is crucial for applications like video
conferencing and group emails, where the same data needs to be sent to a group of receivers. The unit
discusses the differences between multicast and multiple unicast, where multiple unicast sends individual
messages to each receiver, making it less efficient than multicast.

In addition to discussing these routing schemes, the unit examines spanning-tree creation for maintaining a
loop-free network and the challenges faced in multicasting, such as multicast address management and
delivery at the Data Link Layer. Finally, it covers the advantages of multicasting over unicast, especially in
scenarios that involve broadcasting data to multiple users with minimal bandwidth usage.

Detailed answers to review questions

1. Discuss the role of network layer in the OSI model.

The Network Layer (Layer 3) of the OSI model is responsible for the delivery of data packets from the
source to the destination across multiple networks. Its primary role is to provide logical addressing,
routing, and forwarding, ensuring that data can travel across different devices and networks. Key
functions of the network layer include:

 Routing: It determines the best path for data to travel from the source to the destination. This is
accomplished through routing algorithms, which use various metrics (e.g., distance, cost, delay) to
choose the most efficient path.
 Logical Addressing: The network layer assigns logical addresses, such as IP addresses, to devices on a
network. These addresses are used to uniquely identify devices across different networks, allowing for
data transfer between them.
 Packet Forwarding: Once the path is determined, the network layer forwards packets of data to their
next hop, ultimately reaching the destination network or device.
 Fragmentation and Reassembly: If the size of a data packet exceeds the Maximum Transmission Unit
(MTU) of a network, the network layer can break it into smaller fragments for transmission. The
destination device is then responsible for reassembling the fragments into the original packet.

Thus, the network layer enables communication between different networks and ensures that data
reaches its destination, regardless of intermediate devices, network protocols, or distances involved.

2. What are the main issues of concerns for the design of the network layer?

Designing the network layer requires addressing several key concerns to ensure efficient and reliable
communication across diverse and potentially complex network environments. The main issues
include:

 Addressing and Address Space: The network layer must provide a system for assigning unique
identifiers (logical addresses) to devices, ensuring that each device on the network can be reliably
located. The system must support scalability, as the number of devices on the internet and networks
continues to grow.
 Routing and Path Selection: The network layer is responsible for determining the optimal route for
data packets across a network or series of interconnected networks. The routing algorithm must handle
dynamic network conditions such as congestion, topology changes, and failures, ensuring data reaches
its destination efficiently.
 Scalability: The network layer design must support an increasing number of devices and networks.
This requires routing protocols that can handle large address spaces, flexible routing tables, and
adaptable network architectures.
 Error Handling and Quality of Service (QoS): Ensuring reliable data transmission in a network
involves detecting and correcting errors that may occur in packet forwarding or routing. The network
layer must also be able to provide quality-of-service guarantees, such as latency, bandwidth, and
priority for certain types of traffic.
 Security and Privacy: As networks are exposed to potential threats, the network layer design must
address issues of data security, including encryption, authentication, and access control, to protect data
as it traverses different networks.

These concerns must be tackled to ensure a network layer that supports seamless, reliable, and secure
data transmission across a wide range of network configurations.

3. Describe briefly how the hierarchical algorithm works.

A hierarchical algorithm in networking is used to simplify the complexity of routing and address
management by organizing the network into smaller, manageable segments or levels. It divides the
network into a hierarchy of routing domains or regions, with each domain responsible for managing its
internal routing and passing information to neighboring domains. The hierarchical approach can be
used in routing protocols to minimize the size of routing tables and improve the scalability of the
network.

In the context of routing, hierarchical algorithms work by:

1. Dividing the network into regions or zones, each handled by a local router or a set of routers, which
only need to maintain knowledge about the internal topology of their own region.
2. Routing information is exchanged between these regions through border routers or gateways. These
routers only need to know how to route packets to other regions, rather than the detailed paths within
each region.
3. Top-level routers or control points manage the routing information between different zones, allowing
them to handle more general routing tasks and significantly reduce the overall complexity of routing
across the entire network.

Hierarchical algorithms are widely used in large-scale networks like the Internet, where protocols such
as BGP (Border Gateway Protocol) use a hierarchical approach to reduce routing table size and
improve efficiency.

4. What is the main purpose of using a router in a network?

The main purpose of a router in a network is to direct traffic between different networks by
determining the optimal path for data packets to travel. Routers are responsible for forwarding data
packets from one network to another based on their destination IP addresses. Key functions of routers
include:

 Packet Forwarding: Routers inspect incoming packets and forward them to the appropriate next hop
based on their destination address. This process ensures that data can travel between devices on
different networks.
 Routing and Path Selection: Routers use routing algorithms and protocols (e.g., RIP, OSPF, BGP) to
determine the best possible path for data transmission, considering factors such as network congestion,
link reliability, and topology changes.
 Segmentation of Broadcast Domains: Routers divide large networks into smaller broadcast domains,
which helps reduce network traffic and increase efficiency.
 Network Address Translation (NAT): Routers often perform NAT, which allows multiple devices
within a private network to share a single public IP address for communication with external networks,
such as the Internet.

By performing these tasks, routers play a critical role in interconnecting networks, managing data
traffic, and ensuring that data reaches its destination efficiently and accurately.

5. Differentiate between:

a) Connectionless and connection-oriented service

 Connectionless Service: In a connectionless service, each data packet is treated independently, and
there is no need for establishing a connection before transmission. Each packet contains enough
information (e.g., destination address) for the network to deliver it. IP (Internet Protocol) is an example
of a connectionless protocol, where packets can be sent without establishing a prior connection between
the source and destination.
 Connection-Oriented Service: In a connection-oriented service, a logical connection is established
between the sender and receiver before the data transfer begins. The communication is guaranteed, and
the data is transferred in a specific order. TCP (Transmission Control Protocol) is an example of a
connection-oriented protocol, as it ensures reliable transmission with acknowledgment of data receipt.

b) Interior and Exterior Routing

 Interior Routing: This type of routing occurs within an autonomous system (AS), typically within a
single organization or network. It uses protocols like OSPF (Open Shortest Path First) and RIP
(Routing Information Protocol) to manage routing within the network.
 Exterior Routing: Exterior routing occurs between different autonomous systems, typically between
organizations or across the internet. The most common exterior routing protocol is BGP (Border
Gateway Protocol), which is used to exchange routing information between different ISPs or large
networks.

c) Link State and Distance Vector Routing

 Link State Routing: In link state routing, each router maintains a map of the entire network topology.
It periodically exchanges information about the state of its links (connections) with all other routers,
ensuring that all routers have the same view of the network. OSPF (Open Shortest Path First) is an
example of link state routing.
 Distance Vector Routing: In distance vector routing, each router maintains a table that lists the
distance to all other routers in the network. Routers exchange their tables with neighbors periodically,
and the router chooses the path with the least distance. RIP (Routing Information Protocol) is an
example of distance vector routing.

In summary, link state routing provides a complete view of the network, whereas distance vector
routing focuses on finding the shortest path based on distance calculations. Link state protocols tend to
be more efficient and scalable for larger networks compared to distance vector protocols.
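
The core of a distance-vector update fits in a few lines of Python; the routers, destinations, and link costs below are invented, and real protocols such as RIP layer split horizon, timers, and hop-count limits on top of this rule.

```python
def dv_update(my_table, neighbour_table, link_cost):
    """Merge a neighbour's advertised distances into our own table."""
    changed = False
    for dest, dist in neighbour_table.items():
        candidate = link_cost + dist
        if candidate < my_table.get(dest, float("inf")):
            my_table[dest] = candidate  # shorter path found via the neighbour
            changed = True
    return changed

table_A = {"A": 0, "B": 1}
table_B = {"B": 0, "C": 2, "D": 7}
dv_update(table_A, table_B, link_cost=1)
print(table_A)  # {'A': 0, 'B': 1, 'C': 3, 'D': 8}
```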

Unit 10: Transport Layer – Protocols

Unit 10 of the Computer Networks course focuses on the Transport Layer and its associated protocols,
which are essential for providing end-to-end communication services between applications on different hosts.
The transport layer ensures the delivery of data between these applications, offering services such as
multiplexing, reliable delivery, flow control, and error handling.

The unit begins by explaining the services provided by the transport layer, which are fundamental for
maintaining communication between application processes. These services include end-to-end delivery,
where the transport layer ensures the data is transmitted from the source to the destination, and reliable
delivery, which includes error control, sequence control, loss control, and duplication control. The unit details
how the transport layer is responsible for retransmitting lost packets, ensuring that data is delivered in the
correct sequence, and managing any duplicates that may arise during transmission.

The unit further distinguishes between connection-oriented services, like TCP (Transmission Control
Protocol), and connectionless services, such as UDP (User Datagram Protocol). TCP provides a reliable,
connection-oriented service that guarantees delivery, error-free transmission, and data integrity, using a three-
way handshake to establish a connection and ensure reliability. UDP, on the other hand, offers a faster,
connectionless service where there is no guarantee of data delivery, making it suitable for applications where
speed is prioritized over reliability, such as streaming or online gaming.

The unit also includes an analysis of the packet segment format of both TCP and UDP protocols,
highlighting key fields like the source and destination ports, sequence numbers, acknowledgment numbers
(for TCP), and checksums. Understanding the differences between TCP and UDP helps to recognize their
respective strengths and use cases in network applications, where TCP is used for applications requiring
reliable delivery (e.g., web browsing and file transfer) and UDP is used in scenarios where speed and low
overhead are critical (e.g., voice and video streaming).

In summary, Unit 10 provides a comprehensive understanding of the transport layer's crucial role in ensuring
reliable and efficient communication across networks, covering both connection-oriented and connectionless
protocols, with in-depth insights into their applications and differences.

Detailed answers to review questions

1. How is the transport layer different from the data link layer when the services provided at both
layers are almost similar?

While both the transport layer and the data link layer provide services related to data
communication, their roles and scope in a network are fundamentally different. The data link layer
(Layer 2) operates within a single network, focusing on providing reliable data transfer between
adjacent devices within the same network or over a local link. It handles tasks such as framing, error
detection, error correction, and the physical addressing of devices (using MAC addresses). The data
link layer is primarily concerned with node-to-node communication on a local scale, and it provides
services like flow control and access to the shared medium.

In contrast, the transport layer (Layer 4) operates at a higher level in the OSI model and is concerned
with end-to-end communication between devices across potentially multiple networks, often using IP
addresses for logical addressing. While the data link layer is concerned with communication between
devices on the same network segment, the transport layer ensures that data is reliably delivered to the
correct application on the destination device, across multiple networks, using protocols like TCP
(Transmission Control Protocol) or UDP (User Datagram Protocol). The transport layer handles tasks
such as segmentation, reassembly, flow control, error correction, and multiplexing data from multiple
applications.

In summary, while both layers deal with data transfer, the data link layer operates within a local
network or segment, whereas the transport layer handles communication across multiple networks,
providing end-to-end delivery guarantees and addressing higher-level concerns like application data
integrity and sequencing.

2. Why is the transport layer required when both the network and data link layers provide
connectionless and connection-oriented services?

The transport layer is required because while both the network layer and data link layer can offer
connectionless and connection-oriented services, they are generally focused on different aspects of
communication. The network layer provides services for packet forwarding and routing between
different networks, such as addressing and path selection (e.g., IP protocol). It handles communication
between devices across various networks and does not guarantee end-to-end data integrity or reliability.
The network layer is primarily concerned with the efficient delivery of packets from source to
destination, without ensuring the correct order or ensuring that every packet has been delivered.

The transport layer, on the other hand, is responsible for ensuring reliable, error-free, and ordered
delivery of data between applications on different devices. While the network layer handles packet
routing, the transport layer handles error detection and correction, flow control, and data integrity at the
application level. This means that the transport layer ensures that data from a source application is
reliably transmitted to the corresponding application on the destination device, providing services such
as reliable connection-oriented delivery (TCP) or unreliable connectionless delivery (UDP).

Therefore, while the network and data link layers facilitate the movement of packets across networks
and local links, the transport layer provides essential end-to-end communication guarantees that ensure
the integrity, sequencing, and proper delivery of data to the application layer.

3. Why is UDP used when it provides unreliable connectionless service to the transport layer?

UDP (User Datagram Protocol) is used despite providing an unreliable connectionless service
because of its simplicity and low overhead. UDP does not perform connection establishment, error
recovery, or flow control like TCP (Transmission Control Protocol). It simply sends packets, called
datagrams, to the destination without any acknowledgment or retransmission if the packets are lost or
corrupted.

This makes UDP particularly useful in scenarios where:


 Speed is critical: Applications such as streaming video, voice over IP (VoIP), and online gaming
require low-latency communication, and the overhead of error checking, acknowledgment, and
retransmissions (as in TCP) would negatively affect performance. UDP provides a faster alternative
because it doesn’t include these features.
 The application can handle errors: Some applications, like real-time multimedia or DNS (Domain
Name System), may tolerate some loss of data packets without any significant negative impact. These
applications are designed to function correctly even if some packets are lost or arrive out of order.
 Multicasting: UDP supports one-to-many communication (multicasting), which is ideal for
applications that need to send data to multiple destinations simultaneously without the overhead of
maintaining multiple connections as required by TCP.

In essence, UDP is chosen when the advantages of faster transmission and lower overhead outweigh the
need for guaranteed delivery, error recovery, and order, making it ideal for time-sensitive applications
where occasional packet loss is acceptable.
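
A minimal demonstration of UDP's connectionless nature using Python's standard socket API; the loopback address and port are arbitrary, and both endpoints run in one process purely for the demo.

```python
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 9999))

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello over UDP", ("127.0.0.1", 9999))  # no handshake, no ACK

data, addr = receiver.recvfrom(1024)
print(data, "from", addr)

sender.close()
receiver.close()
```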

4. What is the purpose of flow control?

Flow control is a mechanism used to ensure that the sender does not overwhelm the receiver with too
much data too quickly. It is an essential part of data communication to prevent congestion and ensure
efficient data transmission. In networking, flow control typically involves managing the rate at which
data is sent and making sure that the sender and receiver are synchronized in their data handling
capacity.

The primary goals of flow control include:

 Preventing Buffer Overflow: If the sender transmits data faster than the receiver can process or store
it, the receiver's buffer may overflow, leading to data loss. Flow control ensures that the sender adjusts
its transmission speed to avoid overwhelming the receiver.
 Efficient Use of Resources: Flow control mechanisms help maintain a balance between efficient use of
network resources and avoiding network congestion or dropped packets, which can degrade
performance.
 Maintaining Reliable Communication: Flow control is especially important in reliable
communication protocols like TCP, where it helps to ensure that data is transmitted at an appropriate
rate without causing congestion or unnecessary delays.

Protocols such as TCP use flow control mechanisms like the sliding window technique to manage how
much data can be sent before waiting for acknowledgment from the receiver, ensuring that data is
transmitted smoothly and reliably.

5. Describe TCP and its major advantages over UDP.

TCP (Transmission Control Protocol) is a connection-oriented, reliable transport layer protocol.
Unlike UDP, which is connectionless and unreliable, TCP ensures reliable delivery of data packets by
implementing several key mechanisms:

 Connection Establishment and Termination: TCP establishes a connection between the sender and
receiver through a process known as three-way handshake before data transmission begins. This
connection ensures that both parties are ready to communicate, which is not required in UDP.

 Reliable Delivery: TCP provides acknowledgments for received packets and ensures that lost packets
are retransmitted. It also provides mechanisms for sequencing data packets, ensuring that they are
delivered in the correct order, regardless of the order in which they were sent.
 Flow Control: TCP includes flow control mechanisms, such as the sliding window technique, to ensure
that the sender does not overwhelm the receiver with too much data at once.
 Error Detection and Correction: TCP includes error checking and provides automatic retransmission
of corrupted data. If data is received incorrectly, the receiver can request the sender to resend the data.
 Congestion Control: TCP uses congestion control algorithms to detect and mitigate congestion in the
network, ensuring that data is transmitted at a rate that does not overwhelm the network.

The major advantages of TCP over UDP include reliable delivery, order preservation, flow control,
and error detection. TCP is suitable for applications where data integrity and reliability are essential,
such as file transfers, web browsing, and email. On the other hand, UDP is favored when speed and low
latency are more important than reliability, such as in streaming or online gaming.
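
For comparison, here is the same exchange over TCP with Python's socket API: connect() triggers the kernel's three-way handshake, after which ordering, acknowledgments, retransmission, and flow control are handled transparently by the TCP stack. The port number is an arbitrary demo value.

```python
import socket, threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 9998))
server.listen(1)

def serve():
    conn, _ = server.accept()        # handshake completes here
    conn.sendall(b"hello over TCP")  # reliable, ordered byte stream
    conn.close()

threading.Thread(target=serve).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 9998))  # SYN, SYN-ACK, ACK
print(client.recv(1024))
client.close()
server.close()
```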

Unit 11: Congestion Control and Quality of Service

Unit 11 of the Computer Networks course covers Congestion Control and Quality of Service (QoS). The
unit begins by discussing congestion, a condition where the network resources, such as bandwidth, become
overwhelmed due to excessive traffic. It highlights the impact of congestion on network performance,
including delays, packet loss, and decreased throughput. Understanding Quality of Service (QoS) is essential
to managing these issues, as it ensures that the network can meet specific performance requirements for
different types of traffic, such as voice or video, which are more sensitive to delays than typical data traffic.

The unit introduces the Leaky Bucket and Token Bucket algorithms for congestion management. The
Leaky Bucket algorithm helps in controlling data transmission by smoothing out bursty traffic, while the
Token Bucket algorithm allows for more flexibility in handling traffic flows, ensuring that bursts of traffic
are accommodated within certain limits. These algorithms help in enforcing a predictable traffic flow,
preventing congestion from disrupting network performance.

Furthermore, the unit covers congestion control mechanisms that are either implemented in the network
layer or transport layer, depending on the architecture. It compares open-loop (pre-emptive) and closed-
loop (feedback-based) congestion control strategies. Open-loop strategies attempt to prevent congestion by
controlling traffic before it happens, while closed-loop strategies rely on feedback from the network to adjust
the flow of data dynamically.

Finally, the unit explores the costs of congestion, such as the impact on network bandwidth and service
quality, and how proper congestion control can mitigate these issues. It emphasizes the importance of
effective QoS management to ensure the network's ability to deliver reliable, high-quality services even in
high-demand situations.

Detailed answers to review questions


1. Explain the general principles of congestion.

Congestion in networking refers to the condition where the demand for network resources exceeds the
available capacity, resulting in the degradation of network performance. When congestion occurs,
packets may be delayed, lost, or dropped, leading to a reduction in throughput, increased latency, and
packet retransmissions. It typically happens in data networks when routers, switches, or network links
become overwhelmed with traffic, surpassing their processing capabilities. Congestion is often caused
by high traffic volumes, network failures, or inefficient routing decisions.

The general principles of congestion revolve around the balance between the amount of data being
transmitted and the available capacity of the network. Key factors that contribute to congestion include
network bandwidth limitations, buffer sizes in routers and switches, and the nature of traffic (such as
bursty vs. smooth traffic). Effective congestion control mechanisms aim to minimize packet loss,
reduce delay, and ensure that the network operates efficiently even under high load. The goal is to
avoid network overload by dynamically adjusting transmission rates, applying traffic shaping
techniques, and optimizing routing paths to maintain smooth and reliable communication.

2. What do you understand by QoS? Describe the basic QoS structure.

Quality of Service (QoS) refers to the overall performance of a network, particularly in terms of its
ability to deliver traffic with the desired level of service. QoS guarantees that certain applications or
types of traffic receive higher priority than others, ensuring that critical services like voice calls or
video conferencing are not disrupted due to network congestion. It is especially important in networks
with varying types of data traffic and where bandwidth may be limited or fluctuating.

The basic QoS structure consists of several components that work together to ensure optimal network
performance:

• Traffic Classification: The process of categorizing network traffic into different types or classes, often based on attributes like IP address, protocol, or application type. This allows different types of traffic to be handled differently based on their requirements.
• Traffic Policing: This involves monitoring the traffic to ensure that it adheres to predefined rate limits and quality standards. If traffic exceeds the allowed threshold, it may be delayed or dropped.
• Traffic Shaping: Traffic shaping smooths out bursts of traffic by controlling the flow rate to fit the network's capacity. It is used to ensure that traffic is transmitted at a consistent and manageable rate.
• Queue Management: When network traffic reaches a router or switch, it is placed in a queue. QoS ensures that higher-priority traffic is served first, and lower-priority traffic may be delayed or dropped during congestion.
• Congestion Management: QoS also incorporates congestion management techniques to prioritize certain traffic flows during periods of congestion, ensuring that critical services maintain their performance.

Overall, QoS guarantees that applications with strict requirements on delay, jitter, or throughput, such
as VoIP or online gaming, receive the necessary resources to function smoothly.
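
To make the queue-management component concrete, the following is a minimal sketch of strict-priority queueing in Python. The class and packet labels are hypothetical, and real devices typically use more elaborate schedulers (such as weighted fair queueing), but the idea of serving higher-priority traffic first is the same:

import heapq
import itertools

class PriorityQueueScheduler:
    # Strict-priority scheduler: lower priority number is served first.
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker keeps FIFO order per class

    def enqueue(self, packet, priority):
        heapq.heappush(self._heap, (priority, next(self._counter), packet))

    def dequeue(self):
        # Returns the highest-priority packet, or None if the queue is empty.
        if not self._heap:
            return None
        _, _, packet = heapq.heappop(self._heap)
        return packet

sched = PriorityQueueScheduler()
sched.enqueue("bulk-data-1", priority=2)   # low-priority file transfer
sched.enqueue("voice-1", priority=0)       # delay-sensitive VoIP packet
print(sched.dequeue())                     # voice-1 is served first
print(sched.dequeue())                     # bulk-data-1 follows

Under congestion, a scheduler like this keeps voice and video flowing while best-effort traffic waits, which is exactly the behaviour QoS queue management aims for.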

3. Discuss the following two algorithms:

(a) Leaky Bucket Algorithm


The Leaky Bucket Algorithm is a traffic shaping and congestion control mechanism that smoothens
out bursts of data in a network. In this algorithm, data is added to a "bucket," which has a fixed
capacity. The data is then transmitted from the bucket at a constant rate, regardless of the rate at which
it enters the bucket. If data arrives when the bucket is full, it is discarded, effectively controlling the
traffic flow and preventing bursts from overwhelming the network. The constant rate at which data is
sent ensures that traffic is evenly distributed, helping to prevent congestion.

This algorithm is used to enforce smooth traffic flow, making it suitable for applications that require
consistent and predictable data rates. However, it may lead to packet loss if there is a sudden burst of
traffic that exceeds the bucket's capacity.
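
A minimal Python sketch of the idea follows. The capacity and rate values are arbitrary illustrative numbers, and a real implementation would be driven by a clock rather than explicit tick() calls:

class LeakyBucket:
    def __init__(self, capacity, leak_rate):
        self.capacity = capacity    # maximum data (e.g., bytes) the bucket can hold
        self.leak_rate = leak_rate  # data drained (transmitted) per tick
        self.level = 0              # current fill level

    def arrive(self, nbytes):
        # Admit the packet only if it fits; otherwise it is discarded.
        if self.level + nbytes > self.capacity:
            return False
        self.level += nbytes
        return True

    def tick(self):
        # One clock tick: output drains at the constant rate regardless of input.
        sent = min(self.level, self.leak_rate)
        self.level -= sent
        return sent

bucket = LeakyBucket(capacity=1000, leak_rate=200)
print(bucket.arrive(600))  # True: admitted
print(bucket.arrive(600))  # False: bucket would overflow, packet dropped
print(bucket.tick())       # 200 units leave at the constant output rate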

(b) Token Bucket Algorithm


The Token Bucket Algorithm is another traffic shaping mechanism, but unlike the Leaky Bucket, it
allows for bursts of traffic. In this algorithm, tokens are added to a bucket at a fixed rate. Each token
represents permission to send a certain amount of data. For data to be transmitted, the sender must have
enough tokens in the bucket. If the bucket contains tokens, data can be transmitted; if not, the data must
wait until more tokens are available.

The advantage of the Token Bucket algorithm over the Leaky Bucket is that it permits bursts of traffic,
as long as enough tokens have accumulated in the bucket; the token generation rate still enforces the
long-term average rate. This is especially useful for applications that need to handle sudden spikes in
traffic while still respecting an overall traffic limit.
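
The contrast with the Leaky Bucket is easy to see in code. Below is a minimal Python sketch under the same illustrative assumptions (abstract ticks, arbitrary units):

class TokenBucket:
    def __init__(self, rate, burst):
        self.rate = rate      # tokens added per tick (long-term average rate)
        self.burst = burst    # bucket depth (maximum burst size)
        self.tokens = burst   # start full, so an initial burst is permitted

    def tick(self):
        # Tokens accumulate at a fixed rate but never exceed the bucket depth.
        self.tokens = min(self.burst, self.tokens + self.rate)

    def send(self, nbytes):
        # Transmit only if enough tokens are available; otherwise the packet waits.
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False

tb = TokenBucket(rate=100, burst=500)
print(tb.send(400))  # True: a burst up to the bucket depth is allowed
print(tb.send(200))  # False: only 100 tokens remain, the packet must wait
tb.tick()            # 100 more tokens arrive
print(tb.send(200))  # True: enough tokens have accumulated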

4. What are two types of congestion control? Where is congestion control implemented in each
case?

There are two main types of congestion control:

• Open Loop Congestion Control: Open loop congestion control refers to preventing congestion from occurring by adjusting network parameters and routing decisions before congestion becomes a problem. This type of control does not rely on feedback from the network but instead focuses on managing traffic patterns and the allocation of network resources. Examples of open loop congestion control include traffic shaping and rate limiting, which help to prevent the onset of congestion by limiting the amount of traffic entering the network. It is typically implemented at the sender side, where the transmission rate can be adjusted to prevent overloading the network.
• Closed Loop Congestion Control: Closed loop congestion control involves reacting to congestion when it is detected. This type of control requires feedback from the network, such as signals from routers or switches, to indicate congestion levels. When congestion is detected, the sender reduces its transmission rate to alleviate the congestion. A common example of closed-loop congestion control is the TCP congestion control mechanism, where the sender adjusts its transmission window size based on feedback from the receiver and the network (e.g., packet loss or delay); a minimal sketch of this feedback rule follows this list. This type of control is implemented at both the sender and receiver ends, with feedback loops between them.
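
As an illustration of the closed-loop case, here is a simplified sketch of the additive-increase/multiplicative-decrease (AIMD) rule that underlies TCP congestion control. The function and parameter names are hypothetical, and real TCP adds slow start, timeouts, and other machinery on top of this rule:

def aimd_window(events, initial_cwnd=1.0, increase=1.0, decrease_factor=0.5):
    # events: sequence of "ack" (no congestion) or "loss" (congestion feedback).
    cwnd = initial_cwnd
    history = []
    for ev in events:
        if ev == "ack":
            cwnd += increase                         # additive increase
        elif ev == "loss":
            cwnd = max(1.0, cwnd * decrease_factor)  # multiplicative decrease
        history.append(cwnd)
    return history

# The window grows linearly on acknowledgments and halves on a loss signal.
print(aimd_window(["ack", "ack", "ack", "loss", "ack"]))
# [2.0, 3.0, 4.0, 2.0, 3.0]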

5. Discuss the various causes of the costs of congestion.

The costs of congestion can be significant and arise from several factors:

• Increased Delays: Congestion often leads to packet delays as routers and switches become overloaded with traffic. This can affect the timely delivery of data, particularly for real-time applications like voice or video calls, leading to poor user experiences.
• Packet Loss: When network devices such as routers or switches are overloaded, they may drop packets, which results in data loss. This requires the sender to retransmit the lost packets, which increases traffic and further exacerbates congestion.
• Reduced Throughput: Congestion leads to lower throughput because data transmission is delayed, and the network resources are not being used efficiently. As a result, the overall data rate for the network decreases.
• Increased Overhead: Congestion control mechanisms, such as retransmissions, acknowledgments, and error detection, introduce additional overhead. These mechanisms, while essential for reliable communication, add to the network traffic and reduce its efficiency.
• Resource Wastage: Network resources such as bandwidth, processing power, and memory are wasted when congestion occurs. Devices are often forced to store, forward, and manage excessive traffic, which consumes resources that could be better utilized elsewhere.
• Impact on Application Performance: Congestion can severely affect the performance of applications, especially those requiring real-time data delivery (e.g., streaming, VoIP). Applications may experience increased latency, jitter, or loss of packets, which degrades their quality and reliability.

In summary, congestion creates costs in terms of network performance, user experience, and resource
utilization. Effective congestion control mechanisms are essential to mitigate these costs and ensure
that the network remains efficient and reliable under heavy traffic conditions.

Unit 12: Application Layer – Services and Protocols

Unit 12 of the Computer Networks course focuses on the Application Layer, which is the topmost layer of
the OSI model responsible for providing various network services to end-users and applications. The unit
starts with an introduction to the essential role of the Application Layer in network communication,
particularly in enabling interaction between software applications and the underlying network infrastructure.

Key protocols discussed in this unit include Telnet, FTP (File Transfer Protocol), DNS (Domain Name
System), SMTP (Simple Mail Transfer Protocol), POP (Post Office Protocol), and IMAP (Internet
Message Access Protocol), each serving distinct purposes in the networked communication process. Telnet is
a client-server protocol that provides a bidirectional interactive communication service, enabling users to log
into remote machines and access applications as though they were physically present at the machine. FTP is
used for transferring files between computers over the internet, allowing users to upload and download files to
and from servers.

The unit also covers the Domain Name System (DNS), explaining its critical role in mapping domain names
to IP addresses and its hierarchical structure, which includes root name servers, top-level domain servers, and
authoritative name servers. DNS helps solve the problem of human-readable domain names by converting
them into machine-readable IP addresses. The security challenges related to DNS, such as DNS cache
poisoning, are also discussed.
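
For illustration, the resolution step can be observed with Python's standard library. The call below asks the system resolver, which in turn walks the DNS hierarchy just described; "example.com" is only an illustrative name, and the address returned may vary:

import socket

# Map a human-readable domain name to a machine-readable IPv4 address.
print(socket.gethostbyname("example.com"))  # e.g. 93.184.216.34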

Additionally, SMTP, POP, and IMAP are explored in the context of email communication. SMTP is used to
send emails, while POP and IMAP are used for retrieving emails from the server, with IMAP offering more
advanced features like email synchronization across multiple devices. These protocols are essential for the
smooth operation of email systems and the efficient handling of electronic mail across different platforms.

Overall, the unit emphasizes the crucial role of the Application Layer in providing services and protocols that
directly interact with end-users, facilitating tasks such as file transfer, email communication, and domain
resolution. It provides an understanding of how these protocols work together to ensure seamless internet
communication and application access.
Detailed answers to review questions
1. Key Differences between Bit Rate and Baud Rate, and Elaboration on Bit Length and Bit
Interval

Bit Rate and Baud Rate are two key terms used in data communication, and although they are related,
they are not the same. Bit Rate refers to the number of bits transmitted per second over a
communication channel. It indicates the amount of data transferred in a given period of time, typically
measured in bits per second (bps). Baud Rate, on the other hand, refers to the number of signal
changes or symbols transmitted per second in a communication channel. Each signal change or symbol
can represent one or more bits, depending on the modulation technique used.

The difference between the two lies in the fact that Bit Rate measures the total amount of data
transferred (in bits), whereas Baud Rate measures the speed of signal changes (in symbols or
waveforms). For example, if a signal uses one symbol to represent more than one bit (such as in
Quadrature Amplitude Modulation, QAM), the Bit Rate can be higher than the Baud Rate.

Additionally, Bit Length is the distance a single bit occupies on the transmission medium; it equals the
propagation speed multiplied by the bit duration. The Bit Interval is the time required to transmit one
bit (the interval between the start of one bit and the start of the next), and it is the reciprocal of the Bit
Rate. For instance, in a system transmitting at 1 Mbps (1 million bits per second), the Bit Interval is 1
microsecond (1/1,000,000 seconds).
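
The relationships can be checked with a short calculation. The link parameters below are hypothetical: a modem using 16-QAM (4 bits per symbol) at 250,000 baud:

import math

baud_rate = 250_000                      # symbol changes per second
bits_per_symbol = math.log2(16)          # 16-QAM carries 4 bits per symbol
bit_rate = baud_rate * bits_per_symbol   # 1,000,000 bps = 1 Mbps
bit_interval = 1 / bit_rate              # reciprocal of the bit rate

print(f"bit rate     = {bit_rate:.0f} bps")                     # 1000000 bps
print(f"bit interval = {bit_interval * 1e6:.1f} microseconds")  # 1.0 microseconds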

2. Classification of Signals Based on Different Parameters

Signals can be classified based on multiple parameters, including signal type, frequency, amplitude,
and modulation. The main types of signal classifications include:

• Analog vs. Digital Signals: Analog signals vary continuously over time and can take any value within a range, such as sound waves. Digital signals, on the other hand, have discrete levels (usually binary) and represent data in the form of bits (0s and 1s).
• Baseband vs. Passband Signals: A baseband signal occupies frequencies from near zero up to its bandwidth and is transmitted without modulation, typically over short distances (as in Ethernet). A passband signal, conversely, is modulated onto a carrier to fit within a specific frequency band, as is common in radio communications.
• Unipolar, Polar, and Bipolar Signals: Signals can also be classified based on their voltage levels. Unipolar signals use a single polarity (for example, zero and a positive voltage). Polar signals use two voltage levels, typically positive and negative. Bipolar signals use three levels (positive, negative, and zero), as in AMI encoding, where successive 1s alternate between positive and negative.
• Modulated vs. Non-modulated Signals: Modulation involves varying the signal's properties (amplitude, frequency, or phase) to encode information for transmission over a communication medium. Non-modulated signals, such as baseband signals, do not undergo modulation.

These classifications help in designing and analyzing the transmission of data over communication
networks and can significantly affect the efficiency, reliability, and bandwidth requirements of the
system.

3. Factors Affecting Network Performance and Metrics Essential for Businesses

Several factors can influence network performance, and these must be considered for optimal network
operation, especially for businesses relying on continuous and efficient communication. These factors
include:

• Bandwidth: The amount of data that can be transmitted over a network in a given time. Higher bandwidth allows for faster transmission of data, essential for tasks like file transfers or video conferencing.
• Latency: The time it takes for data to travel from the source to the destination. Low latency is critical for real-time applications such as voice and video calls.
• Packet Loss: Occurs when data packets are lost during transmission, often due to congestion or unreliable network components. High packet loss leads to degraded performance and requires retransmissions, which waste bandwidth.
• Jitter: The variation in packet arrival times. High jitter can result in poor performance for real-time applications such as VoIP and online gaming, where timing is crucial.
• Error Rates: High error rates can lead to data corruption and retransmissions, which impact the network's overall efficiency.
• Network Topology: The physical and logical layout of network devices impacts how traffic flows, the risk of congestion, and the ease of fault isolation.

The essential metrics for businesses to monitor include throughput, availability, response time, and
uptime, as these directly impact productivity and the quality of service (QoS) delivered to customers.

4. Imperfect Transmission Media and Signal Impairments

Transmission media imperfections are the main cause of signal impairments in communication
systems. Imperfections may arise from various sources, leading to issues such as:

• Attenuation: The reduction in signal strength as it travels through the transmission medium. This results in weaker signals at the receiver end, especially over long distances. For example, in copper cables, attenuation increases with distance, requiring amplification or regeneration of the signal.
• Distortion: Occurs when different frequency components of a signal travel at different speeds through the medium, leading to changes in the signal shape. This can be particularly problematic in wide-bandwidth signals, where the distortion can cause a loss of information.
• Noise: External electromagnetic interference from various sources, such as electrical devices or other communication systems, can corrupt the transmitted signal. Thermal noise and crosstalk (signal interference from nearby cables) are common forms of noise that degrade signal quality.
• Delay: Signals may experience delay as they propagate through a medium, especially in fiber optics and wireless communication, affecting the performance of real-time applications.

These impairments reduce the quality of the communication channel, requiring techniques such as
error correction, modulation, and signal amplification to mitigate their effects and maintain signal
integrity.

5. Comparison of Different Transmission Modes

Transmission modes describe the direction in which data flows between two devices. The three main
types are:

• Simplex: In simplex transmission, data flows in only one direction, from sender to receiver. There is no capability for the receiver to send data back to the sender. An example of simplex communication is a radio broadcast.
• Half-Duplex: Half-duplex communication allows data to flow in both directions, but not simultaneously. One device transmits while the other listens, and then they switch roles. A good example is a walkie-talkie, where one person speaks while the other listens, and they alternate.
• Full-Duplex: Full-duplex communication allows data to flow in both directions simultaneously. Both the sender and receiver can transmit and receive at the same time. A common example of full-duplex communication is a telephone call, where both parties can speak and listen at the same time.

The choice of transmission mode depends on the application requirements. Simplex is useful in
broadcast scenarios, while half-duplex is suitable for two-way communication where only one party
needs to communicate at a time (like two-way radios). Full-duplex is ideal for interactive applications
where continuous bidirectional communication is needed, such as in internet browsing or
telecommunication.

In conclusion, understanding the differences between transmission modes, signal impairments, and
factors affecting network performance is crucial for designing and maintaining efficient communication
systems. This knowledge allows businesses to optimize network performance and ensure smooth data
transmission for various applications.

Unit 13 : Introduction to Computer Networks

Unit 13 of the Computer Networks course focuses on Introduction to Computer Networks, offering a
comprehensive understanding of how the Internet works, key protocols, and the underlying infrastructure. The
unit starts with a discussion on the Internet as a global network of networks, emphasizing its role in enabling
communication across the globe by connecting various computer systems. The Internet Protocol (IP), which
is essential for routing data packets between devices, is explored, particularly the transition from IPv4 to IPv6
due to the exhaustion of IPv4 addresses.

The unit also introduces Uniform Resource Identifiers (URI), the system used for identifying resources on
the Internet, such as web pages and services. It explains the structure of URLs (Uniform Resource Locators)
and highlights the roles of protocols like HTTP and HTTPS in facilitating web communication. The security
implications of HTTP, such as the need for secure data transmission (via HTTPS), are also covered.

Another key topic is the World Wide Web (WWW), distinguishing it from the Internet and explaining its
role as a system for accessing information using the Internet. The unit delves into how web servers and clients
interact, providing an overview of how websites and web services are structured and accessed by users.

Finally, the unit touches on Virtual Private Networks (VPNs), their types, and how they provide secure
communication over public networks by using encryption techniques like IPSec. This section also discusses
the lifecycle of an IPSec tunnel, which ensures confidentiality and integrity of data as it travels across the
internet. In summary, Unit 13 provides foundational knowledge about the structure and functioning of the
Internet, web protocols, security techniques, and how modern communication and data sharing occur over
networks.

Detailed answers to review questions


1. General Principles of Congestion

Congestion in computer networks refers to a situation where the demand for network resources exceeds
the available capacity, causing delays, packet loss, and reduced network performance. Congestion can
occur at various levels in the network, such as in routers, switches, or the communication channels. The
general principle behind congestion is that when too many data packets are injected into the network at
once, the network devices (like routers) become overwhelmed and can no longer handle the traffic. As
a result, packets are either queued or dropped, leading to delays in transmission and possible loss of
data. To manage congestion effectively, networks implement congestion control mechanisms that
attempt to regulate traffic flow and ensure that data is transmitted efficiently without overwhelming the
network infrastructure.

2. What is QoS (Quality of Service)? Describe the Basic QoS Structure

Quality of Service (QoS) refers to the overall performance of a network, particularly the ability to
guarantee specific performance levels for data traffic. It is crucial in environments where network
resources are shared by multiple applications with different performance requirements, such as voice,
video, and data. QoS ensures that high-priority traffic, like real-time voice or video calls, is given
precedence over less critical traffic like email or file transfers. The basic structure of QoS involves
several key parameters, including bandwidth, latency, jitter, and packet loss. QoS mechanisms
include traffic shaping, traffic policing, resource reservation, and scheduling algorithms to provide
guaranteed service levels and minimize disruptions. QoS can be implemented using various techniques
such as Differentiated Services (DiffServ) and Integrated Services (IntServ) models, each focusing on
prioritizing traffic and managing congestion.

3. Discuss the Following Two Algorithms:

(a) Leaky Bucket Algorithm


The Leaky Bucket algorithm is used for traffic shaping in networks. It controls the rate at which data
packets are sent into the network by using a "bucket" analogy. The "bucket" has a fixed capacity, and
packets enter it at varying rates. The algorithm ensures that packets are sent out of the bucket at a fixed
rate, thereby smoothing out bursty traffic and providing a more consistent flow. If the bucket
overflows, it means that the incoming traffic is too high for the available capacity, and the excess
packets are discarded. This helps in preventing network congestion by regulating the traffic flow into
the network.

(b) Token Bucket Algorithm


The Token Bucket algorithm is another method used for traffic shaping and rate limiting. In this
algorithm, a "bucket" holds tokens, and tokens are generated at a fixed rate. To send a packet, a device
must first acquire a token from the bucket. If the bucket is empty (i.e., no tokens are available), the
packet must wait until a token becomes available. The advantage of the Token Bucket algorithm over
the Leaky Bucket is that it allows bursts of traffic to be sent as long as there are tokens available in the
bucket, while still enforcing an average rate over time. This makes the Token Bucket algorithm more
flexible, allowing occasional bursts while still limiting overall traffic flow.

4. Types of Congestion Control and Where They Are Implemented

There are two main types of congestion control mechanisms used in networks:

• Open Loop Congestion Control: This type of control is proactive and does not rely on feedback from the network. It attempts to avoid congestion by controlling the rate at which data is sent into the network, often by limiting the data injection rate before congestion can occur. For example, a sender might reduce its sending rate based on an estimated available bandwidth. Open loop congestion control is usually implemented at the source of the transmission, such as in the transport layer.
• Closed Loop Congestion Control: This method relies on feedback from the network itself. The network components (such as routers) inform the sender when congestion is detected, prompting the sender to reduce the transmission rate or implement corrective actions. Common closed-loop mechanisms include TCP congestion control, where the sender adjusts its rate based on feedback from the receiver, such as packet loss or delayed acknowledgments. Closed-loop congestion control is typically implemented in the transport layer or network layer, depending on the protocol.

5. Causes of the Costs of Congestion

Congestion can incur significant costs in terms of performance and resource utilization. Some of the
main causes of these costs include:

• Increased Latency: As congestion builds, the time it takes for data to travel from the source to the destination increases. This is especially noticeable in real-time applications like voice and video, where delays can affect the quality of the service. High latency can lead to slow communication and poor user experiences.
• Packet Loss: When the network becomes congested, routers and switches may drop packets because they are unable to handle the traffic load. Packet loss leads to retransmissions, which consume additional bandwidth and time, further exacerbating congestion.
• Reduced Throughput: As congestion increases, the overall throughput of the network decreases. This is because the network resources (such as bandwidth and buffer capacity) are being shared among too many competing flows. The reduced throughput leads to inefficient use of the network infrastructure and slower data transmission rates.
• Increased Operational Costs: Networks affected by congestion may require additional resources, such as more bandwidth, better hardware, or additional network infrastructure to handle the traffic. These resources come at an additional operational cost, increasing the financial burden on businesses that rely on network services.
• Fairness Issues: Congestion can also result in unfairness, where certain traffic flows are disproportionately affected. Some applications or users may receive a much higher share of resources than others, leading to suboptimal performance for other users. This issue is critical in shared network environments where fairness is important for maintaining equal access.

In summary, congestion in networks is a complex issue that impacts various aspects of performance,
including latency, packet loss, throughput, and overall network efficiency. Understanding congestion
control techniques and the underlying causes of congestion is essential for maintaining optimal network
performance.

Unit 14: Network Security

Unit 14 of the Computer Networks course focuses on Network Security, an essential area in safeguarding
digital communications and protecting data from various security threats. The unit introduces key concepts of
cybersecurity, which includes the protection of systems, networks, and data from cyber-attacks. The
objective of cybersecurity is not only to defend systems but also to ensure confidentiality, integrity, and
availability of data. These principles form the foundation of network security practices, including protecting
sensitive information, ensuring that data remains untampered during transmission, and making sure that only
authorized users can access systems and data.

The unit also addresses different types of cyber-attacks, such as phishing, malware, denial-of-service (DoS),
and man-in-the-middle attacks, which attempt to compromise network security by exploiting vulnerabilities.
Specific attention is given to cryptography, the science of securing communication through encryption and
decryption techniques. It explores both symmetric and asymmetric encryption, detailing how they work,
their benefits, and their applications. For example, symmetric encryption uses a single key for both
encryption and decryption, while asymmetric encryption uses a pair of keys: a public key for encryption and
a private key for decryption.

Message integrity is another vital topic covered in the unit, focusing on ensuring that the data received is
identical to the data sent, without alteration during transmission. The unit discusses methods such as message
digests and cryptographic hash functions to validate data integrity. It also highlights the importance of
firewalls in preventing unauthorized access to networks, serving as a barrier between trusted internal
networks and external threats.
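
As a small illustration of message digests, Python's standard hashlib module computes a SHA-256 digest; any alteration to the message produces a completely different digest, which is how the receiver detects tampering (the message contents below are, of course, made up):

import hashlib

message = b"transfer 100 to account 42"
digest = hashlib.sha256(message).hexdigest()   # sent alongside the message

tampered = b"transfer 900 to account 42"
print(hashlib.sha256(tampered).hexdigest() == digest)  # False: alteration detected
print(hashlib.sha256(message).hexdigest() == digest)   # True: integrity verified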

Finally, the unit introduces the concept of email security, covering protocols and techniques such as Privacy
Enhanced Mail (PEM), which ensures the confidentiality and integrity of email communications. This
section also touches on security protocols like SMTP for sending emails and IMAP/POP for retrieving them,
as well as methods for ensuring that emails are sent securely and can be verified for authenticity.

In summary, Unit 14 provides an overview of the critical security measures necessary for protecting networks
and data from attacks, ensuring that data remains secure, accessible, and unaltered as it travels across
potentially vulnerable networks. The principles of encryption, firewalls, and secure communication protocols
are essential to maintaining a safe and reliable network environment.

Detailed answers to review questions


1) Different Criteria to Keep Information Private When Sent Over a Public Network

When sending information over a public network, it is crucial to ensure its confidentiality, integrity,
and authenticity. The following are key criteria for maintaining privacy:

• Encryption: This transforms the original data into an unreadable format using algorithms and keys, ensuring that only authorized parties can decrypt and access the information.
• Authentication: To verify the identities of users and systems involved, ensuring that the data is sent from a legitimate source and received by the intended recipient.
• Data Integrity: Ensuring that the information has not been tampered with during transmission. This can be achieved using hash functions and digital signatures.
• Secure Protocols: Using secure communication protocols such as SSL/TLS (Secure Sockets Layer/Transport Layer Security) to encrypt data during transmission.
• Access Control: Restricting access to sensitive data by employing strong user authentication mechanisms like passwords, biometrics, or two-factor authentication.

2) How Encryption Affects Network Performance

Encryption, while crucial for ensuring data privacy and security, can have an impact on network
performance. This is due to the computational overhead involved in encrypting and decrypting data.
The encryption process requires additional processing power, which can slow down the transmission
speed, especially if the encryption algorithm is complex. For large datasets, encryption can lead to
higher latency, reduced throughput, and increased CPU load on both the sender and the receiver.
However, advancements in hardware and more efficient encryption algorithms, such as AES
(Advanced Encryption Standard), have minimized these performance losses. Despite this, encryption
remains essential for protecting sensitive data, and the performance trade-off is often considered
acceptable in most secure communication systems.

3) How to Prevent Undesirable Persons from Accessing Information on the Internet

To prevent unauthorized individuals from accessing sensitive information, several strategies can be
employed:

• Firewalls: Implementing both hardware and software firewalls that monitor incoming and outgoing network traffic, blocking unauthorized access to the system.
• Encryption: Encrypting sensitive data ensures that even if intercepted, the information remains unreadable without the appropriate decryption keys.
• Authentication and Authorization: Using strong user authentication mechanisms, including multi-factor authentication, to ensure only authorized users can access certain information.
• Intrusion Detection Systems (IDS): These systems monitor network traffic for unusual patterns or unauthorized access attempts and can alert system administrators to potential threats.
• Access Control Lists (ACLs): These lists define who can access certain resources on a network, effectively limiting access to sensitive information.
• Secure Websites (HTTPS): Websites should implement HTTPS (Hypertext Transfer Protocol Secure), ensuring that data transferred between users and websites is encrypted.

4) How to Keep Computers Safe from Hackers: Hypothetical Situation

To protect computers from hackers, several practices should be followed. For example, let's consider a
hypothetical situation where a small business has a website that handles sensitive customer data. To
keep both the business's and customers' computers safe:

• Regular Software Updates: Ensure that all software, including operating systems and antivirus programs, is regularly updated to patch vulnerabilities that hackers could exploit.
• Firewall Implementation: Install firewalls both at the network and device levels to block malicious traffic and unauthorized access.
• Strong Passwords and Multi-Factor Authentication: Use complex passwords and enable multi-factor authentication (MFA) for all accounts accessing critical systems.
• Data Encryption: Use SSL/TLS encryption on the website to protect data exchanged between the website and its customers.
• Employee Awareness and Training: Employees should be educated about phishing attacks, social engineering tactics, and how to avoid unsafe downloads or clicking on malicious links.

5) What is a Cipher? Why are Ciphers Used for Large Messages?

A cipher is an algorithm used to encrypt or decrypt information. It transforms plaintext (original data)
into ciphertext (encoded data) to ensure that unauthorized individuals cannot easily understand the
information. Ciphers, particularly symmetric ones, are used for large messages because they can secure
large volumes of data efficiently without exposing sensitive information to unauthorized parties. Using
a cipher ensures that, even if the message is intercepted, it cannot be read without the decryption key.
Modern ciphers are highly efficient and capable of encrypting large amounts of data quickly while
maintaining strong security.

6) Two Kinds of Security Attacks on an Internet-Connected Computer System

• Denial of Service (DoS) Attack: In a DoS attack, the attacker floods a target system with excessive traffic or requests, causing the system to become overwhelmed and unavailable to legitimate users. In its distributed form (DDoS), the attack comes from multiple sources, making it harder to mitigate.
• Man-in-the-Middle (MitM) Attack: This type of attack occurs when an attacker intercepts communication between two parties, allowing them to eavesdrop, alter, or inject malicious data into the communication. MitM attacks often target unencrypted communication channels.

7) Difference Between Secret Key and Public Key Encryption

• Secret Key Encryption (Symmetric Encryption): In this method, the same key is used for both encryption and decryption. Both the sender and the receiver must have the same secret key, which makes the system fast and efficient but requires secure key distribution to prevent interception by unauthorized users. An example of symmetric encryption is AES (Advanced Encryption Standard); a short illustrative sketch follows this list.
• Public Key Encryption (Asymmetric Encryption): This method uses a pair of keys: a public key and a private key. The public key is used to encrypt data, and the private key is used for decryption. The main advantage is that the private key is never shared, making it more secure for communication over untrusted networks. RSA (Rivest-Shamir-Adleman) is a common example of public key encryption.
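
As a brief illustration of the symmetric case, the sketch below uses the third-party Python "cryptography" package (an assumption: it must be installed separately, e.g. with pip). Both parties need the same key, which is exactly the key-distribution problem that public key encryption avoids:

from cryptography.fernet import Fernet  # requires: pip install cryptography

# Symmetric (secret key) encryption: one shared key encrypts and decrypts.
key = Fernet.generate_key()   # the shared secret; must reach the receiver securely
f = Fernet(key)

token = f.encrypt(b"meet at noon")  # ciphertext is safe to send over the network
print(f.decrypt(token))             # b'meet at noon' -- the same key decrypts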

8) What is Cryptography? What Are the Benefits of Using This Technique?

Cryptography is the practice and study of techniques for securing communication and data from third
parties. It involves the use of mathematical algorithms to protect data confidentiality, integrity, and
authenticity. The benefits of cryptography include:

• Data Confidentiality: Ensures that sensitive data remains private and can only be accessed by authorized parties.
• Data Integrity: Ensures that data has not been altered or tampered with during transmission.
• Authentication: Verifies the identity of users and systems, ensuring that both parties in a communication are legitimate.
• Non-repudiation: Ensures that the sender cannot deny having sent a message, providing accountability.

9) Substitution and Transposition Ciphers: Differences

• Substitution Cipher: In a substitution cipher, each element of the plaintext (such as a letter or a number) is replaced by another symbol or letter. The classic example is the Caesar cipher, where each letter of the alphabet is shifted by a fixed number of positions. This method changes the characters but keeps their order intact.
• Transposition Cipher: In a transposition cipher, the positions of the symbols in the plaintext are rearranged according to a specific system, without changing the actual symbols themselves. An example is the Rail Fence cipher, where the characters are written in a zigzag pattern and then read off in rows. The main difference between the two is that substitution ciphers alter the characters themselves, while transposition ciphers alter only the arrangement of the characters. Both are commonly used in combination to enhance security, and both are illustrated in the sketch below.
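
The difference is easy to demonstrate with two toy Python functions. These are classical, insecure ciphers shown purely for illustration:

def caesar(plaintext, shift):
    # Substitution: each letter is replaced by the letter `shift` places later.
    out = []
    for ch in plaintext:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

def rail_fence(plaintext, rails):
    # Transposition: characters keep their identity but are written in a zigzag
    # across `rails` rows and then read off row by row.
    rows = [[] for _ in range(rails)]
    rail, step = 0, 1
    for ch in plaintext:
        rows[rail].append(ch)
        if rail == 0:
            step = 1
        elif rail == rails - 1:
            step = -1
        rail += step
    return ''.join(''.join(r) for r in rows)

print(caesar("ATTACK", 3))      # DWWDFN: letters change, order is preserved
print(rail_fence("ATTACK", 2))  # ATCTAK: letters preserved, order changes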
