CN Unit 5
Multiple Access Links
Key Benefits:
Increased Efficiency: Multiple access techniques allow more users to connect simultaneously,
maximizing the use of available bandwidth.
Cost-Effectiveness: By sharing a single communication channel, organizations can reduce
infrastructure costs associated with dedicated links for each user.
Enhanced Connectivity: Supports various applications, from telecommunication to data
transmission, ensuring seamless communication among devices.
Types of Multiple Access Links
1. Time Division Multiple Access (TDMA):
Divides the channel into time slots.
Each user transmits in their allocated time slot.
Efficient for synchronous communication and suitable for applications requiring fixed
bandwidth.
2. Frequency Division Multiple Access (FDMA):
Allocates separate frequency bands to each user.
Users transmit simultaneously but on different frequencies.
Commonly used in analog communication systems and broadcasting.
3. Code Division Multiple Access (CDMA):
Assigns unique codes to each user for transmission.
Users can transmit simultaneously over the same frequency.
Provides robust performance against interference and is widely used in mobile communication.
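As a concrete illustration of the code-division idea in the last item above, the sketch below spreads two users' bits over orthogonal Walsh chip sequences, adds them on a shared channel, and recovers each bit by correlating against the user's own code. The codes, bit values, and user names are illustrative assumptions, not parameters from any real CDMA standard.

```python
# A minimal sketch of CDMA-style code division: each user spreads its bit with an
# orthogonal (Walsh) chip sequence, the spread signals add on the shared channel,
# and the receiver recovers each bit by correlating with that user's code.
# The codes and bit values below are illustrative, not from any specific standard.

# Two orthogonal 4-chip Walsh codes (their dot product is zero)
codes = {
    "user_a": [1, 1, 1, 1],
    "user_b": [1, -1, 1, -1],
}

# Data bits encoded as +1 / -1
bits = {"user_a": 1, "user_b": -1}

# Each user spreads its bit over its code; the channel carries the chip-wise sum
channel = [sum(bits[u] * codes[u][i] for u in codes) for i in range(4)]

# Receiver despreads: correlate the channel signal with a user's code and normalize
def despread(signal, code):
    return sum(s * c for s, c in zip(signal, code)) / len(code)

for user in codes:
    recovered = despread(channel, codes[user])
    print(user, "sent", bits[user], "recovered", recovered)  # matches the original bit
```

Because the two codes are orthogonal, each correlation cancels the other user's contribution, which is why simultaneous transmission over the same frequency remains separable.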
Multiple Access Protocols
Multiple Access Protocols are essential for managing how multiple devices share a communication
medium. Key protocols include:
ALOHA Protocol: A simple, uncoordinated protocol where devices transmit whenever they
have data. If a collision occurs (simultaneous transmission), devices wait a random time before
retrying. ALOHA is easy to implement but has low efficiency, especially under high traffic.
Carrier Sense Multiple Access (CSMA): This protocol requires devices to listen to the medium
before transmitting. If the channel is idle, the device sends its data. If the channel is busy, the
device waits until it becomes free, reducing the chance of collisions.
CSMA/CD (Collision Detection): An extension of CSMA used in wired networks. After
transmitting, devices listen for collisions. If a collision is detected, they stop transmitting and
wait a random time before retrying. This method improves efficiency in busy networks (a backoff sketch follows this list).
CSMA/CA (Collision Avoidance): Commonly used in wireless networks, CSMA/CA minimizes
collisions by using a request-to-send (RTS) and clear-to-send (CTS) mechanism. Devices must
request permission to transmit, reducing the likelihood of overlapping transmissions.
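To make the CSMA/CD behaviour described above more concrete, here is a minimal sketch of the sense-transmit-backoff loop with binary exponential backoff. The channel is modelled by two stand-in callbacks and the slot constants are illustrative assumptions, not a faithful Ethernet MAC implementation.

```python
import random

# A minimal sketch of CSMA/CD on a shared wired medium: sense the channel, transmit
# if idle, and on collision back off for a random number of slot times drawn from a
# window that doubles with each attempt (binary exponential backoff).
# The channel model and slot constants here are illustrative assumptions.

SLOT_TIME = 1          # abstract slot unit
MAX_ATTEMPTS = 16      # classic Ethernet aborts after 16 attempts

def try_to_send(channel_busy, collision_occurred):
    """channel_busy() and collision_occurred() are stand-ins for the physical layer."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        while channel_busy():          # carrier sense: wait until the medium is idle
            pass
        if not collision_occurred():   # collision detection while transmitting
            return f"frame sent on attempt {attempt}"
        # Collision: back off a random number of slots in [0, 2^k - 1], k capped at 10
        k = min(attempt, 10)
        backoff_slots = random.randint(0, 2 ** k - 1)
        wait = backoff_slots * SLOT_TIME
        # (a real NIC performs a timed wait here; this sketch just reports it)
        print(f"collision on attempt {attempt}, backing off {wait} slot times")
    return "transmission aborted after too many collisions"

# Toy channel: always idle, with a 30% chance of collision on each attempt
print(try_to_send(channel_busy=lambda: False,
                  collision_occurred=lambda: random.random() < 0.3))
```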
Switched Local Area Networks (LANs)
Definition of LANs:
A Local Area Network (LAN) is a network that connects computers and devices in a limited
geographical area, such as a home, school, or office building.
LANs facilitate high-speed communication and data sharing among connected devices.
Architecture of LANs:
Star Topology: Most common architecture, where devices are connected to a central switch. It
offers improved performance and easier troubleshooting.
Bus Topology: All devices share a single communication line. It’s simpler but can lead to
performance issues with increased traffic.
Ring Topology: Devices are connected in a circular format, where each device acts as a repeater
for the signals. It can offer better performance but is less robust to failures.
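For comparison, the short sketch below lists the links each of the three topologies creates for the same five devices; the device names and the single shared-bus segment are purely illustrative.

```python
# A minimal sketch contrasting the three LAN topologies above by listing the links
# each one creates for the same set of five devices. Device names are illustrative.

devices = ["pc1", "pc2", "pc3", "pc4", "pc5"]

# Star: every device has one link to a central switch
star = [("switch", d) for d in devices]

# Bus: every device taps a single shared line (modelled here as one shared segment)
bus = [("shared_bus", d) for d in devices]

# Ring: each device links to the next, and the last closes the loop back to the first
ring = [(devices[i], devices[(i + 1) % len(devices)]) for i in range(len(devices))]

print("star:", star)
print("bus:", bus)
print("ring:", ring)
```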
Function of Switches:
Switches are integral components of a LAN, functioning as intelligent devices that connect
multiple computers and devices.
They operate at the Data Link Layer (Layer 2) of the OSI model, facilitating communication
between devices within the same network.
Key Roles of Switches:
Packet Forwarding: Switches receive incoming data packets, read their MAC addresses, and
forward them only to the intended recipient, minimizing unnecessary traffic (see the forwarding sketch after this list).
Collision Domain Management: By creating separate collision domains for each connected
device, switches reduce the chances of data collisions, enhancing overall network performance.
VLAN Support: Switches can create Virtual Local Area Networks (VLANs), allowing for
logical separation of network segments while utilizing the same physical infrastructure,
improving security and performance.
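The packet-forwarding and collision-domain points above come down to a simple learn-and-forward rule. The sketch below shows one plausible version of that rule; the MAC addresses and port numbers are made up for illustration.

```python
# A minimal sketch of how a Layer-2 switch forwards frames: it learns which port each
# source MAC address lives on, forwards frames destined to a known MAC out of that
# single port, and floods frames with unknown destinations out of every other port.
# The MAC addresses and port numbers are illustrative.

mac_table = {}  # MAC address -> switch port

def handle_frame(src_mac, dst_mac, in_port, num_ports=4):
    # Learning: remember which port the source is reachable on
    mac_table[src_mac] = in_port

    if dst_mac in mac_table:
        out_ports = [mac_table[dst_mac]]            # forward only to the known port
    else:
        out_ports = [p for p in range(num_ports) if p != in_port]  # flood
    return out_ports

# Host AA on port 0 sends to unknown host BB: the frame is flooded
print(handle_frame("AA", "BB", in_port=0))   # -> [1, 2, 3]
# BB replies from port 2; the switch has learned AA, so it forwards on port 0 only
print(handle_frame("BB", "AA", in_port=2))   # -> [0]
```

Flooding unknown destinations is what lets the table populate itself: once a host replies, its port is learned and later frames for it go out a single port only.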
Link Virtualization
Concept of Link Virtualization
Link virtualization refers to the abstraction of physical network links, allowing multiple logical
connections to share the same physical infrastructure. It enables efficient resource allocation and
management by treating a single physical link as multiple virtual links, which can be dynamically
adjusted based on network demands.
Benefits of Link Virtualization
Enhanced Flexibility: Supports diverse network applications by allocating bandwidth as needed.
Improved Resource Utilization: Maximizes the use of physical links by consolidating multiple
virtual connections.
Scalability: Facilitates easy network expansion without requiring additional physical
infrastructure.
Fault Tolerance: Allows for redundancy and load balancing, enhancing network reliability.
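One simple way to picture link virtualization is as weighted partitioning of a physical link's capacity among virtual links, recomputed as virtual links are added or removed. The sketch below allocates bandwidth shares by weight; the class name, capacities, and VLAN-style labels are illustrative assumptions, not an API from any particular platform.

```python
# A minimal sketch of link virtualization as bandwidth partitioning: one physical
# link's capacity is split among several virtual links in proportion to configured
# weights, and the shares can be recomputed whenever a virtual link is added.
# Capacities and weights are illustrative assumptions.

class PhysicalLink:
    def __init__(self, capacity_mbps):
        self.capacity = capacity_mbps
        self.virtual_links = {}   # name -> weight

    def add_virtual_link(self, name, weight):
        self.virtual_links[name] = weight

    def allocations(self):
        """Return each virtual link's bandwidth share in Mbps."""
        total_weight = sum(self.virtual_links.values())
        return {name: self.capacity * w / total_weight
                for name, w in self.virtual_links.items()}

link = PhysicalLink(capacity_mbps=1000)
link.add_virtual_link("vlan10_voice", weight=1)
link.add_virtual_link("vlan20_data", weight=3)
print(link.allocations())   # {'vlan10_voice': 250.0, 'vlan20_data': 750.0}
```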
Data Center Networking
Structure and Design of Data Center Networks:
Data center networks are typically organized in a tiered architecture, consisting of core,
aggregation, and access layers. The core layer provides high-speed backbone connectivity, the
aggregation layer manages traffic and connects multiple access switches, while the access layer
connects servers and storage devices. This layered approach ensures efficient data flow and
resource allocation, enhancing overall performance.
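The tiered layout described above can be sketched as a list of switch-to-switch and switch-to-server links. The switch counts, redundancy rule, and naming scheme below are illustrative assumptions rather than a specific vendor design.

```python
from itertools import product

# A minimal sketch of the three-tier data center layout: core switches at the top,
# aggregation switches in the middle, and access switches connecting servers.
# Counts and names are illustrative.

def build_three_tier(num_core=2, num_agg=4, num_access=8, servers_per_access=4):
    links = []
    # Every aggregation switch uplinks to every core switch
    links += [(f"core{c}", f"agg{a}")
              for c, a in product(range(num_core), range(num_agg))]
    # Each access switch connects to two aggregation switches for redundancy
    for x in range(num_access):
        a = x % num_agg
        links.append((f"agg{a}", f"access{x}"))
        links.append((f"agg{(a + 1) % num_agg}", f"access{x}"))
    # Servers hang off access switches
    for x in range(num_access):
        links += [(f"access{x}", f"srv{x}_{s}") for s in range(servers_per_access)]
    return links

topology = build_three_tier()
print(len(topology), "links in the topology")
```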
Coaxial Cable
Advantages: Higher bandwidth than twisted pair; supports longer distances than twisted pair.
Disadvantages: More expensive than twisted pair; bulkier and less flexible for installation.