Sir Joy CN

The document outlines the OSI model's seven layers, detailing their functions in network communication, and discusses network standards that ensure interoperability among devices. It also covers various network devices, transmission impairments, multiple-access techniques, data transmission methods, and encryption types, including symmetric and asymmetric encryption. Additionally, it explains the RSA algorithm, digital signatures, and the Ethernet frame format, emphasizing their roles in secure and efficient data transmission.

1.

OSI 7 Layers

The OSI (Open Systems Interconnection) model is a conceptual framework that divides
network communication into seven distinct layers, standardizing how different systems can
interact. Developed by the ISO, it acts as a universal guide for network design,
troubleshooting, and protocol development. Each layer performs specific functions, building
upon the services of the layer below it and providing services to the layer above, a process
called encapsulation on sending and de-encapsulation on receiving.

Here's a breakdown of the layers:

• Layer 1: Physical Layer: Deals with the physical transmission of raw bits over the
network medium (cables, Wi-Fi signals). It defines electrical signals, cabling, and
connectors. (e.g., Ethernet cables, hubs)
• Layer 2: Data Link Layer: Responsible for reliable frame transfer between adjacent
network nodes on a single link. It handles MAC addressing, error detection, and flow
control. (e.g., Ethernet, Wi-Fi, switches)
• Layer 3: Network Layer: Manages packet routing across different networks
(internetworking). It uses logical IP addresses to determine the best path from source
to destination. (e.g., IP, routers)
• Layer 4: Transport Layer: Provides end-to-end communication between
applications. It ensures reliable, ordered delivery of segments (TCP) or efficient,
connectionless delivery of datagrams (UDP), using port numbers. (e.g., TCP, UDP)
• Layer 5: Session Layer: Establishes, manages, and terminates communication
sessions between applications. It handles dialog control and synchronization. (e.g.,
NetBIOS, RPC)
• Layer 6: Presentation Layer: Focuses on data format and representation. It handles
data translation, encryption/decryption, and compression/decompression so that data
is understandable by the receiving application. (e.g., JPEG, ASCII, SSL/TLS)
• Layer 7: Application Layer: The layer closest to the end-user, providing network
services directly to applications. It defines protocols for specific application functions.
(e.g., HTTP, FTP, DNS)
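The encapsulation process described above can be sketched in a few lines. This is a toy illustration, not a real protocol stack: each layer wraps the payload from the layer above with its own (here, fake string) header on the way down, and the receiver strips them in reverse order.

```python
# Toy OSI-style encapsulation: each layer prepends its own header to the
# payload handed down from the layer above.

def encapsulate(data: bytes) -> bytes:
    segment = b"TCP|" + data      # Layer 4: transport header
    packet = b"IP|" + segment     # Layer 3: network header
    frame = b"ETH|" + packet      # Layer 2: data-link header
    return frame

def decapsulate(frame: bytes) -> bytes:
    # The receiving side removes headers in reverse order (de-encapsulation).
    packet = frame.removeprefix(b"ETH|")
    segment = packet.removeprefix(b"IP|")
    return segment.removeprefix(b"TCP|")

msg = b"GET / HTTP/1.1"
assert decapsulate(encapsulate(msg)) == msg
```

Real headers are binary structures with addresses, lengths, and checksums, but the nesting order is exactly this.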

2. Network Standard

A network standard (or protocol standard) is a set of formal rules and specifications that
define how data is formatted, transmitted, and received across a computer network. These
standards are crucial for ensuring interoperability, allowing devices from different
manufacturers to communicate effectively. They specify everything from physical
connections and signaling methods to data encapsulation, addressing, error detection, and
routing algorithms. Organizations like the IEEE (Institute of Electrical and Electronics
Engineers), IETF (Internet Engineering Task Force), and ISO (International Organization for
Standardization) develop and publish these standards. Examples include IEEE 802.3 for
Ethernet, IEEE 802.11 for Wi-Fi, TCP/IP (a suite of protocols that forms the backbone of the
Internet), and various application-level protocols like HTTP, FTP, and SMTP. Adherence to
standards is fundamental to the global interconnectedness of the Internet and local networks,
ensuring seamless data exchange.
3. Network Devices

Network devices are specialized hardware components that facilitate communication and data
exchange within a computer network. They perform various functions, from simply
connecting cables to intelligently routing data and securing network traffic.

• Hubs (obsolete): Simple devices that broadcast all incoming data to all connected
ports, operating at the Physical layer (Layer 1).
• Bridges: Connect two LAN segments, forwarding data frames based on MAC
addresses, operating at the Data Link layer (Layer 2).
• Switches: Advanced bridges that intelligently forward data frames to specific
destination ports based on MAC addresses, vastly improving network efficiency
compared to hubs. They also operate at Layer 2.
• Routers: Connect different networks (LANs to WANs, or the Internet), forwarding
data packets based on IP addresses, operating at the Network layer (Layer 3). They
use routing tables to determine the best path.
• Modems: Convert digital signals from computers to analog signals for transmission
over phone lines or cable, and vice versa.
• Access Points (APs): Enable wireless devices to connect to a wired network.
• Firewalls: Network security devices that monitor and filter incoming and outgoing
network traffic based on predefined security rules.
• Gateways: Connect networks using different communication protocols, often
functioning as protocol converters.

These devices are the building blocks that enable data flow and connectivity across
diverse network infrastructures.

4. Transmission Impairment

Transmission impairment refers to any degradation of the signal quality that occurs during
data transmission over a communication medium, preventing the received signal from being a
perfect replica of the transmitted signal. These impairments can lead to errors in the data
received. The three primary types of transmission impairment are:

• Attenuation: The loss of signal strength as it travels through the medium. The signal
energy dissipates over distance, becoming weaker. This is compensated for by using
amplifiers or repeaters.
• Distortion: Occurs when the signal changes its form or shape. This can happen due to
different frequency components of a composite signal traveling at different speeds
(delay distortion) or different signal components arriving with different phases (phase
distortion).
• Noise: Unwanted electrical or electromagnetic energy that interferes with the desired
signal. Common types include thermal noise (random electron motion), impulse noise
(spikes from power lines or lightning), crosstalk (interference from adjacent wires),
and intermodulation noise (sum/difference frequencies from multiple signals sharing a
medium).

These impairments necessitate various error detection and correction techniques in
network protocols to ensure reliable data delivery.
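Attenuation is conventionally measured in decibels, dB = 10·log10(P_out/P_in), so a fixed cable length costs a fixed number of dB regardless of input power. A quick worked example (values are illustrative):

```python
import math

def attenuation_db(p_in: float, p_out: float) -> float:
    """Signal change in decibels; negative means the signal got weaker."""
    return 10 * math.log10(p_out / p_in)

# A signal whose power halves over a cable run loses about 3 dB.
print(round(attenuation_db(10.0, 5.0), 2))  # -3.01
```

Because decibels add, two such segments in series lose about 6 dB, which is why repeaters are placed at regular intervals.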
5. FDMA, TDMA, CDMA

These are three fundamental multiple-access techniques used in wireless communication to
allow multiple users to share a common communication channel without interfering with
each other.

• FDMA (Frequency Division Multiple Access): In FDMA, the total available
bandwidth is divided into smaller, non-overlapping frequency bands (channels). Each
user is assigned a unique frequency band for the duration of their communication.
Users transmit and receive simultaneously on their assigned frequency. It's like
having separate radio stations, each broadcasting on a different frequency. While
simple, it can be inefficient if users don't utilize their assigned bandwidth fully,
leading to wasted spectrum.
• TDMA (Time Division Multiple Access): In TDMA, users share the same frequency
channel, but they are allocated specific, non-overlapping time slots within that
channel. Each user transmits data in bursts during their assigned time slot. It's like
multiple people talking on the same walkie-talkie, but each person waits for their turn.
TDMA is more efficient than FDMA in terms of spectrum utilization as it allocates all
available bandwidth to a user during their time slot, but it requires precise
synchronization.
• CDMA (Code Division Multiple Access): In CDMA, all users transmit on the same
frequency band at the same time. However, each user is assigned a unique spreading
code (a pseudorandom noise sequence). The user's data is multiplied by this code,
spreading the signal across the entire bandwidth. At the receiver, the same unique
code is used to despread and recover only the desired signal, while other users' signals
appear as noise and are rejected. It's like multiple people talking simultaneously in a
room, but each speaking a different language that only the intended listener
understands. CDMA offers excellent spectral efficiency, strong interference rejection,
and enhanced security, but it is more complex to implement.
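The CDMA idea of spreading codes can be demonstrated with a tiny sketch. This toy uses two orthogonal Walsh codes (their dot product is zero): both users transmit at the same time, the medium simply adds their signals, and each receiver recovers its own bit by correlating with its code. Real CDMA uses much longer pseudorandom sequences.

```python
CODE_A = [1, 1, 1, 1]      # orthogonal spreading (Walsh) codes:
CODE_B = [1, -1, 1, -1]    # dot(CODE_A, CODE_B) == 0

def spread(bit: int, code: list[int]) -> list[int]:
    # Map bit {0,1} to a symbol {-1,+1}, then multiply by the chip sequence.
    symbol = 1 if bit else -1
    return [symbol * chip for chip in code]

def despread(channel: list[int], code: list[int]) -> int:
    # Correlate with the user's code; the sign of the sum recovers the bit.
    corr = sum(s * c for s, c in zip(channel, code))
    return 1 if corr > 0 else 0

# Both users transmit simultaneously; the shared medium adds their signals.
channel = [a + b for a, b in zip(spread(1, CODE_A), spread(0, CODE_B))]
print(despread(channel, CODE_A), despread(channel, CODE_B))  # 1 0
```

Each receiver sees the other user's energy average out to zero under its own code, which is exactly the "different language" analogy above.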

6. Synchronous and Asynchronous Data Transmission

These terms describe two fundamental methods of timing and controlling the flow of data
between a sender and a receiver. The key difference lies in how the sender and receiver
maintain synchronization.

• Asynchronous Data Transmission:
o Description: Data is transmitted character by character or byte by byte. Each
character is framed with start and stop bits. The start bit signals the beginning
of a character, and the stop bit signals its end. There is no common clock
signal shared between the sender and receiver.
o Synchronization: The receiver synchronizes with each incoming character
using the start bit. The timing within the character (for reading individual bits)
is derived from the receiver's own clock, which must run at nearly the same
rate as the sender's clock.
o Characteristics: Simple to implement, less efficient due to framing bits
overhead, suitable for low-speed, bursty data transmission (e.g., keyboard
input, serial port communication). Gaps (idle times) can occur between
characters.
• Synchronous Data Transmission:
o Description: Data is transmitted in continuous blocks or frames of many
characters/bits without start and stop bits for each individual character. The
sender and receiver share a common clock signal or derive timing from the
data stream itself (e.g., using encoding techniques like Manchester encoding).
o Synchronization: Both sender and receiver are continuously synchronized
through the shared clock or embedded timing information. Special
synchronization characters or patterns (preambles) are often used at the
beginning of each block to establish initial synchronization.
o Characteristics: More efficient due to reduced overhead, suitable for high-
speed, continuous data transmission (e.g., network communications like
Ethernet, T1 lines). Requires more complex hardware for clock recovery and
synchronization.
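The asynchronous framing described above can be sketched in a few lines. This models a classic serial (UART-style) link: each byte is sent least-significant-bit first, preceded by a start bit (0) and followed by a stop bit (1); the bit order and framing values here follow common UART convention.

```python
# Toy asynchronous framing: start bit + 8 data bits (LSB first) + stop bit.

def frame_byte(byte: int) -> list[int]:
    data_bits = [(byte >> i) & 1 for i in range(8)]  # LSB first
    return [0] + data_bits + [1]                     # start + data + stop

def unframe(bits: list[int]) -> int:
    assert bits[0] == 0 and bits[-1] == 1, "bad framing"
    return sum(bit << i for i, bit in enumerate(bits[1:9]))

framed = frame_byte(ord("A"))          # 'A' = 0x41
print(framed)                          # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
print(hex(unframe(framed)))            # 0x41
```

The 2 framing bits per 8 data bits are the 20% overhead that makes asynchronous transmission less efficient than synchronous block transfer.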

7. Data Link Layer Protocol

A Data Link Layer protocol operates at Layer 2 of the OSI model, responsible for reliable
and efficient transmission of data frames between adjacent nodes on a single physical link. It
addresses issues like error detection, flow control, and physical addressing. Key functions
include:

• Framing: Dividing the raw bit stream from the Physical layer into logical units called
frames for transmission. Each frame has a header and a trailer.
• Physical Addressing (MAC Addressing): Adding source and destination MAC
addresses to the frame header to ensure it reaches the correct device on the local
network segment.
• Flow Control: Preventing a fast sender from overwhelming a slow receiver by
regulating the data transmission rate.
• Error Control: Detecting and, in some cases, correcting errors that may occur during
transmission over the physical medium. This often involves checksums or cyclic
redundancy checks (CRCs).
• Access Control: For shared media (like Wi-Fi), managing how devices share the
channel to avoid collisions (e.g., CSMA/CD for Ethernet, CSMA/CA for Wi-Fi).

Common examples of Data Link Layer protocols include Ethernet (IEEE 802.3), Wi-Fi
(IEEE 802.11), Point-to-Point Protocol (PPP), and HDLC. These protocols ensure the
integrity and orderly delivery of data across individual network links before it is
handed off to the Network layer for routing.
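The error-control idea (checksums/CRCs) can be shown concretely. This sketch appends a CRC-32 trailer to a payload, which the receiver recomputes to detect corruption; Python's `zlib.crc32` implements the same CRC-32 polynomial used by Ethernet's FCS.

```python
# CRC-based error detection at the Data Link layer: sender appends a
# checksum, receiver recomputes it over the payload and compares.
import zlib

def make_frame(payload: bytes) -> bytes:
    crc = zlib.crc32(payload)
    return payload + crc.to_bytes(4, "big")

def check_frame(frame: bytes) -> bool:
    payload, received_crc = frame[:-4], int.from_bytes(frame[-4:], "big")
    return zlib.crc32(payload) == received_crc

frame = make_frame(b"hello")
print(check_frame(frame))                 # True
corrupted = b"jello" + frame[5:]          # payload altered in transit
print(check_frame(corrupted))             # False
```

A CRC detects corruption but cannot repair it; on detecting a bad frame the protocol typically discards it and relies on retransmission at a higher layer.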

8. RSA Algorithm

The RSA (Rivest-Shamir-Adleman) algorithm is a widely used public-key cryptographic
algorithm, named after its inventors. It is fundamental to secure communications like HTTPS,
secure email, and digital signatures. RSA relies on the mathematical difficulty of factoring
large numbers into their prime factors. It uses a pair of keys: a public key, which can be
freely distributed, and a private key, which must be kept secret by its owner.

• Key Generation: Involves selecting two large prime numbers p and q, calculating their
product n = p·q and the totient φ(n) = (p−1)(q−1), then deriving a public exponent (e) and
a private exponent (d) such that e·d ≡ 1 (mod φ(n)). The public key is (e, n), and the
private key is (d, n).
• Encryption: To encrypt a message M for someone, you use their public key (e, n) to
compute the ciphertext C = M^e mod n.
• Decryption: The recipient uses their private key (d, n) to decrypt the ciphertext C
and recover the original message: M = C^d mod n.
• Digital Signatures: RSA can also be used for digital signatures by reversing the
process: the sender "encrypts" a hash of the message with their private key, and the
recipient verifies it with the sender's public key.

RSA's security relies on the assumption that factoring large integers is computationally
infeasible within a reasonable time, making it a cornerstone of modern asymmetric
cryptography.
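The key-generation, encryption, and decryption steps can be worked through with the classic textbook primes p = 61, q = 53 (real RSA keys use primes hundreds of digits long, with padding):

```python
# Worked toy RSA example; all numbers follow the standard p=61, q=53
# illustration. Never use key sizes like this in practice.

p, q = 61, 53
n = p * q                  # 3233
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent: e*d ≡ 1 (mod phi)  ->  2753

M = 65                     # message, must be < n
C = pow(M, e, n)           # encryption: C = M^e mod n
print(C)                   # 2790
print(pow(C, d, n))        # decryption: C^d mod n recovers 65
```

Note that `pow(e, -1, phi)` (Python 3.8+) computes the modular inverse directly, which is the e·d ≡ 1 (mod φ(n)) condition from the key-generation step.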

9. Working Principle of Digital Signature

A digital signature is a cryptographic mechanism used to verify the authenticity, integrity,
and non-repudiation of digital documents or messages. It ensures that the message has not
been tampered with and confirms the sender's identity, analogous to a handwritten signature.
The working principle involves asymmetric (public-key) cryptography and hashing:

1. Hashing: The sender first computes a cryptographic hash (a fixed-size digest) of the
original message. This hash function is one-way, meaning it's computationally
infeasible to reverse engineer the message from the hash, and even a tiny change in
the message will produce a drastically different hash.
2. Signing (Encryption with Private Key): The sender then "encrypts" this hash value
using their own private key. This encrypted hash is the digital signature. It's not true
encryption for confidentiality, but rather a mathematical operation that proves
ownership of the private key.
3. Transmission: The sender transmits the original message along with the digital
signature.
4. Verification (Decryption with Public Key): The recipient receives the message and
the digital signature. They first compute their own hash of the received message.
Then, they use the sender's public key to "decrypt" the received digital signature,
which reveals the original hash that the sender generated.
5. Comparison: The recipient compares the hash they computed from the received
message with the hash recovered from the digital signature. If the two hash values
match, it confirms two things:
o Integrity: The message has not been altered since it was signed.
o Authenticity/Non-repudiation: The signature was indeed created by the
holder of the private key (the sender), and they cannot later deny having sent
it.

This process provides strong cryptographic assurances for digital communication.
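The five steps above can be sketched with SHA-256 and a toy RSA key (the n = 3233, e = 17, d = 2753 textbook numbers; real signatures use large keys plus a padding scheme such as RSASSA-PSS, and the hash here is reduced mod n only so it fits the toy modulus):

```python
# Toy sign/verify sketch: hash, then "encrypt" the hash with the private
# key; the verifier recomputes the hash and compares.
import hashlib

n, e, d = 3233, 17, 2753               # toy RSA key pair (illustrative)

def sign(message: bytes) -> int:
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)           # step 2: sign with private key

def verify(message: bytes, signature: int) -> bool:
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest  # steps 4-5: recover and compare

sig = sign(b"transfer $100")
print(verify(b"transfer $100", sig))   # True: intact and authentic
print(verify(b"transfer $900", sig))   # almost surely False: message altered
```

Any change to the message produces a different hash, so the comparison in `verify` fails, which is exactly the integrity guarantee described in step 5.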
10. Difference between Symmetric and Asymmetric Encryption

Symmetric and asymmetric encryption are two fundamental approaches to cryptography,
differing primarily in their use of keys for encryption and decryption.

• Symmetric Encryption (Private Key Cryptography):
o Key Usage: Uses a single, shared secret key for both encryption and
decryption.
o Key Management: The biggest challenge is securely exchanging this shared
secret key between communicating parties before any secure communication
can begin.
o Speed: Generally much faster than asymmetric encryption, making it suitable
for encrypting large amounts of data.
o Examples: AES (Advanced Encryption Standard), DES (Data Encryption
Standard), Triple DES.
o Primary Use: Confidentiality of data.
• Asymmetric Encryption (Public Key Cryptography):
o Key Usage: Uses a pair of mathematically related keys: a public key and a
private key. The public key can be freely distributed, while the private key
must be kept secret. Data encrypted with the public key can only be decrypted
with the corresponding private key, and vice versa.
o Key Management: Solves the key exchange problem of symmetric
encryption. The public key can be openly shared.
o Speed: Significantly slower than symmetric encryption, making it impractical
for encrypting large data volumes.
o Examples: RSA (Rivest-Shamir-Adleman), ECC (Elliptic Curve
Cryptography).
o Primary Use: Secure key exchange (for symmetric keys), digital signatures,
and non-repudiation.

In practice, a hybrid approach is often used: asymmetric encryption is used to securely
exchange a symmetric key, which is then used for the bulk of data encryption due to its
speed.

11. Ethernet Frame Format

An Ethernet frame is the data link layer (Layer 2) unit of transmission over an Ethernet
network. It encapsulates the IP packet (and higher-layer data) for physical transmission. The
standard Ethernet II frame format, widely used today, includes several fields:

• Preamble (7 bytes): A sequence of alternating 1s and 0s (10101010) that provides
synchronization for the receiving station's clock.
• Start Frame Delimiter (SFD) (1 byte): The sequence 10101011 that signals the
beginning of the actual frame content.
• Destination MAC Address (6 bytes): The physical address of the receiving network
interface card. Can be unicast, multicast, or broadcast.
• Source MAC Address (6 bytes): The physical address of the sending network
interface card.
• EtherType (2 bytes): Identifies the protocol of the payload data encapsulated within
the frame (e.g., 0x0800 for IPv4, 0x0806 for ARP, 0x86DD for IPv6).
• Payload Data (46-1500 bytes): The actual data being transmitted, typically an IP
packet. If the payload is less than 46 bytes, padding bits are added to meet the
minimum frame size requirement.
• Frame Check Sequence (FCS) (4 bytes): Contains a Cyclic Redundancy Check
(CRC) value, calculated over the Destination MAC, Source MAC, EtherType, and
Data fields. The receiver recalculates the CRC and compares it to the received FCS to
detect transmission errors.

The minimum Ethernet frame size is 64 bytes (including header and FCS, excluding
preamble/SFD), and the maximum is 1518 bytes. This structured format ensures that Ethernet
devices can properly interpret and process data transmitted over the network.
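The fixed 14-byte header (destination MAC, source MAC, EtherType) can be parsed with the `struct` module. The preamble and SFD are consumed by the NIC hardware and never reach software, so parsing starts at the destination address; the addresses below are made-up example values.

```python
# Parsing the Ethernet II header: 6-byte dst MAC, 6-byte src MAC,
# 2-byte EtherType, all in network (big-endian) byte order.
import struct

frame = (bytes.fromhex("ffffffffffff")      # destination MAC (broadcast)
         + bytes.fromhex("02004c4f4f50")    # source MAC (example value)
         + bytes.fromhex("0800")            # EtherType 0x0800 = IPv4
         + b"\x45" + b"\x00" * 45)          # 46-byte minimum payload

dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
print(dst.hex(":"))        # ff:ff:ff:ff:ff:ff
print(hex(ethertype))      # 0x800 -> payload is an IPv4 packet
```

Dispatching on the EtherType value is exactly how an operating system decides whether to hand the payload to the IPv4, ARP, or IPv6 code path.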

12. QoS (Quality of Service)

Quality of Service (QoS) refers to a set of technologies and techniques used to manage
network resources and prioritize different types of network traffic to ensure a certain level of
performance for critical applications. In networks without QoS, all traffic is treated equally
on a first-come, first-served basis, which can lead to unpredictable delays, packet loss, and
jitter (variation in delay) for real-time or business-critical applications when congestion
occurs.

QoS mechanisms address these issues by:

• Classification: Identifying and categorizing different types of traffic (e.g., VoIP,
video, web browsing, file transfers).
• Marking: Tagging classified traffic with specific QoS labels (e.g., DSCP values in IP
headers) to indicate their priority.
• Queuing: Managing buffers and prioritizing traffic based on their marks, ensuring
high-priority packets are sent first.
• Congestion Avoidance: Proactively dropping lower-priority packets before
congestion becomes severe (e.g., Weighted Random Early Detection - WRED).
• Traffic Shaping/Policing: Controlling the rate at which traffic is sent or received to
ensure it conforms to a predefined bandwidth limit.

Common benefits of QoS include clear voice calls, smooth video conferencing, reliable
performance for enterprise applications, and efficient utilization of bandwidth by preventing
less critical traffic from overwhelming the network.
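The queuing mechanism can be sketched with a priority queue: packets tagged with a DSCP-style priority are dequeued highest-priority first regardless of arrival order. The traffic names and DSCP values below are illustrative (46 and 34 are the conventional EF and AF41 code points for voice and video).

```python
# Minimal priority-queuing sketch: higher DSCP drains first, FIFO on ties.
import heapq

queue = []
arrivals = [("file-transfer", 0), ("voip", 46), ("video", 34), ("web", 0)]
for seq, (name, dscp) in enumerate(arrivals):
    # heapq is a min-heap, so negate DSCP; seq preserves FIFO order on ties.
    heapq.heappush(queue, (-dscp, seq, name))

order = []
while queue:
    _, _, name = heapq.heappop(queue)
    order.append(name)

print(order)  # ['voip', 'video', 'file-transfer', 'web']
```

Even though the file transfer arrived first, the voice and video packets jump the queue, which is what keeps calls clear under congestion.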

13. Adaptive and Non-Adaptive Routing Algorithms

Routing algorithms are protocols that determine the optimal paths for data packets to travel
from source to destination across an internetwork. They can be broadly classified as adaptive
or non-adaptive based on their ability to react to network changes.
• Non-Adaptive Routing Algorithms (Static Routing):
o Description: These algorithms make routing decisions based on pre-
computed, fixed routing tables that do not change in response to real-time
network conditions (like traffic load, link failures, or congestion). Routes are
typically configured manually by network administrators or derived from a
central database offline.
o Characteristics:
 Simplicity: Easier to configure and manage for small, stable networks.
 Predictability: Routes are fixed and deterministic.
 No Overhead: No signaling overhead for route updates.
 Lack of Responsiveness: Cannot adapt to network failures or
congestion, leading to inefficient routing or network outages if paths
change.
o Example: Manual static routes configured on routers.
• Adaptive Routing Algorithms (Dynamic Routing):
o Description: These algorithms adjust routing decisions and update their
routing tables dynamically in response to changes in network topology, traffic
load, or link costs. They use routing protocols to exchange information with
other routers and calculate optimal paths in real-time.
o Characteristics:
 Responsiveness: Can quickly adapt to network failures, congestion,
and changes in link metrics.
 Complexity: More complex to implement and manage due to
continuous route calculations and information exchange.
 Overhead: Involves routing protocol overhead (CPU, memory,
bandwidth) for maintaining routing tables.
 Scalability: Essential for large, complex, and constantly changing
networks like the Internet.
o Examples: Distance Vector algorithms (e.g., RIP) and Link State algorithms
(e.g., OSPF, IS-IS).

Adaptive routing is generally preferred for most modern networks due to its resilience and
ability to optimize routes, while non-adaptive routing is typically used for very small, stable
networks or for specific purposes like default routes.
14. Dijkstra's Algorithm

Dijkstra's algorithm is a greedy algorithm used to find the shortest paths from a single source
node to all other nodes in a graph with non-negative edge weights (costs). It is a fundamental
algorithm in computer science and is extensively used in network routing protocols, most
notably by Link State Routing protocols like OSPF to calculate the best paths across
networks.

The core idea is to iteratively explore the graph, building up a set of visited nodes for which
the shortest path from the source has already been determined.

Working Principle:

1. Initialization:
o Assign a distance of 0 to the starting (source) node and infinity (∞) to all other
nodes. These distances represent the currently known shortest path from the
source.
o Maintain a set of "unvisited" nodes, initially containing all nodes.
2. Iteration:
o While there are unvisited nodes:
 Select the unvisited node with the smallest current distance from the
source. This node becomes the "current node."
 Move the current node from the "unvisited" set to the "visited" set.
 For each neighbor of the current node:
 Calculate a tentative distance to the neighbor through the
current node (current node's distance + weight of the edge to
the neighbor).
 If this tentative distance is shorter than the neighbor's currently
recorded distance, update the neighbor's distance to this new,
shorter value. Also, record the current node as the predecessor
for that neighbor (to reconstruct the path later).
3. Termination: The algorithm finishes when all nodes have been visited, or when the
smallest distance among unvisited nodes is infinity (meaning the remaining unvisited
nodes are unreachable).

Result: Upon completion, the final distance assigned to each node represents the shortest
path cost from the source node to that node. By tracing back through the recorded
predecessors, the actual shortest path can be reconstructed. In routing, each router runs
Dijkstra's with itself as the source to build its own routing table.
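The steps above translate directly into code. This is a straightforward heap-based implementation of Dijkstra's algorithm; the four-node graph is an illustrative topology, with the distance map doubling as the "visited" bookkeeping via the stale-entry check.

```python
# Dijkstra's shortest-path algorithm with a binary heap.
import heapq

def dijkstra(graph: dict, source: str) -> dict:
    # Step 1: source at distance 0, everything else at infinity.
    dist = {node: float("inf") for node in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        # Step 2: pick the unvisited node with the smallest distance.
        d, node = heapq.heappop(heap)
        if d > dist[node]:
            continue                    # stale entry: node already finalized
        for neighbor, weight in graph[node].items():
            alt = d + weight            # tentative distance via `node`
            if alt < dist[neighbor]:    # relax the edge if shorter
                dist[neighbor] = alt
                heapq.heappush(heap, (alt, neighbor))
    return dist

graph = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```

Note how A reaches C at cost 3 via B rather than 4 directly, and D at cost 4 via B and C: the relaxation step keeps replacing distances until every path has been considered. Tracking a predecessor alongside each updated distance would let the actual paths be reconstructed, as described above.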
