Chapter 1
Spring 2024
Foundations
Goals of CMPE 344
• Emphasize the design choices made when building computer networks
• Understand why networks are designed the way
they are
• Discuss “systems” approach to understand the
big picture
• Gain insight on architectural implications and
interaction of components rather than focusing
on rigidly defined layers
Network Applications
• Most people are familiar with networks through their applications:
– World Wide Web
– Email
– Online social network
– Streaming audio/video
– Videoconferencing (e.g., Skype™)
– File sharing
– Instant messaging
– … and many more
Links and nodes
• A link is a physical medium that connects
computers or nodes.
– A node is a piece of hardware such as a PC, a switch, a router, etc.
• Key challenge in network design: Scalability
– A system that is designed to support growth to
an arbitrarily large size is said to scale
– How should we connect all the nodes in the
world?
Connectivity: Direct links
Figure: (a) A point-to-point link; (b) multiple access, where several nodes share a single physical link.
Connectivity: Switched network
Store-and-forward
• In packet switched networks, each node
– first receives complete packet over some link
– stores the packet in its internal memory
– and then forwards the complete packet to the
next node
• Primary function of packet switches is to store
and forward packets
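To make the mechanism concrete, here is a minimal Python sketch of a store-and-forward node (the `StoreAndForwardNode` and `Packet` names are made up for illustration; a real switch would also consult a forwarding table to pick the next hop):

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Packet:
    payload: bytes
    destination: str

class StoreAndForwardNode:
    def __init__(self, name: str):
        self.name = name
        self.buffer = deque()   # packets held in the node's internal memory

    def receive(self, packet: Packet) -> None:
        # Act only once the *complete* packet has been received over a link.
        self.buffer.append(packet)

    def forward(self, next_hop: "StoreAndForwardNode") -> None:
        # Send the oldest stored packet, whole, to the next node.
        if self.buffer:
            next_hop.receive(self.buffer.popleft())

# Usage: a packet crosses two switches on its way toward host-2.
a = StoreAndForwardNode("switch-A")
b = StoreAndForwardNode("switch-B")
a.receive(Packet(b"hello", "host-2"))
a.forward(b)   # switch-A stored the complete packet, now forwards it to switch-B
```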
Interconnection of networks
An internetwork or an internet
Unicasting, broadcasting, and
multicasting
• Unicast: Source sends a message to a single destination
node
• Broadcast: Source sends a message to all the nodes on
the network
• Multicast: Source sends a message to some subset of
the other nodes (but not all of them)
• Thus, in addition to node-specific addresses, a network
may have to support broadcast and multicast addresses
as well
• Other “cast”s:
– Anycast: One-to-one-of-many (Applications?)
– Convergecast: Many-to-one (Applications?)
Cost-effective resource sharing
• How do hosts share the network and the same
link when they all want to use it at the same
time?
• Multiplexing: Sharing a system resource among
multiple users
Figure: Multiplexing and demultiplexing: hosts L1, L2, and L3 share the link between Switch 1 and Switch 2 to reach R1, R2, and R3.
TDM, FDM, CDMA
• Synchronous time-division multiplexing (STDM) or just
time-division multiplexing (TDM)
– Time is divided into equal-sized quanta and each user
is given a chance to send data in a round-robin manner
• Frequency-division multiplexing (FDM)
– Each source transmits at a different frequency
• Code-division multiple access (CDMA)
– Each source has its own “code”
– Suitable for bursty data (we’ll define “bursty” later)
– In fact, it is a “multiple access” method (see more later)
Q: Find out what kind of multiplexing is used in cellular networks and radio/TV broadcasting.
Q: What kind of multiplexing is used in fiber optic communications? Hint: Optical carriers can be described by their wavelengths.
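As a rough illustration of synchronous TDM as described above, the following Python sketch hands out one fixed slot per user per frame (the user names and queued data are made up; note how slots go idle when a user has nothing to send):

```python
# queued data per user; B and C illustrate partly idle senders
users = {"A": ["a1", "a2"], "B": ["b1"], "C": []}

def tdm_frame(users):
    """One TDM frame: every user gets one fixed slot, in round-robin order."""
    frame = []
    for name, queue in users.items():
        if queue:
            frame.append((name, queue.pop(0)))   # user transmits in its slot
        else:
            frame.append((name, None))           # slot stays idle (wasted)
    return frame

for i in range(3):
    print(f"frame {i}: {tdm_frame(users)}")
```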
TDM vs. FDM
Figure: Example with 4 users: in FDM each user occupies its own frequency band for the whole time, while in TDM each user gets the full band in rotating time slots.
Limitations of TDM and FDM
• If one of the pairs has no data to send, its share of the physical link (its time quantum or its frequency band) remains idle, even if one of the other pairs has data to transmit
• In computer communications the idle time can
be large! (Q: Why?)
– We say that computer communications traffic is
bursty.
• Also, the maximum number of flows in these
schemes is fixed ahead of time
Statistical Multiplexing
• Not all of the pairs are active at the same time!
• Like STDM: Link is shared over time
• Unlike STDM: Data is transmitted from each
user on demand rather than in a predetermined
slot
• Statistical multiplexing is efficient!
Q: How does one ensure that transmission decisions are made fairly?
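Here is a minimal sketch of the on-demand idea, assuming a hypothetical round-robin scheduler over non-empty queues as one simple answer to the fairness question (queue contents are made up). Because idle users are simply skipped, no transmission opportunity is wasted:

```python
from collections import deque
from itertools import cycle

queues = {"A": deque(["a1", "a2", "a3"]), "B": deque(), "C": deque(["c1"])}
rr = cycle(queues)              # round-robin order over the user names

def transmit_next():
    """Give the link to the next user that actually has a packet queued."""
    for _ in range(len(queues)):
        name = next(rr)
        if queues[name]:
            return name, queues[name].popleft()
    return None                 # everybody is idle right now

while (sent := transmit_next()) is not None:
    print("link carries:", sent)
```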
More on statistical multiplexing
• A packet-switched network fragments a large message into packets (each with a bounded maximum size) so that other flows get a turn to transmit their own packets (see the sketch below)
• Receiver will have to reassemble the packets
back into original message
Figure: Multiplexing packets from multiple flows onto a shared link.
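A minimal sketch of fragmentation and reassembly, assuming a made-up maximum packet size (`MTU`) and in-order delivery; real networks can reorder packets, which is why fragments also carry sequencing information:

```python
MTU = 4   # maximum packet payload in bytes (made-up number)

def fragment(message: bytes, mtu: int = MTU) -> list:
    """Split a large message into packets of at most `mtu` bytes each."""
    return [message[i:i + mtu] for i in range(0, len(message), mtu)]

def reassemble(packets) -> bytes:
    """The receiver puts the fragments back together into the original message."""
    return b"".join(packets)

msg = b"a fairly long application message"
packets = fragment(msg)
assert reassemble(packets) == msg
print(f"{len(msg)} bytes sent as {len(packets)} packets of at most {MTU} bytes")
```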
More on statistical multiplexing
• The switch may have to buffer packets in its
memory if it receives packets faster than the
shared link can handle
• If the switch receives packets faster than it can send them for an extended period of time, the resource becomes congested
– Packets will be dropped
– Packets will be delayed excessively
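The following toy simulation, with made-up arrival and departure rates and buffer size, shows how a switch with finite memory starts dropping packets once overload persists:

```python
from collections import deque

BUFFER_SLOTS = 4                        # finite switch memory, in packets
buffer, sent, dropped = deque(), 0, 0

for second in range(5):
    arrivals, departures = 3, 2         # in > out: sustained overload
    for p in range(arrivals):
        if len(buffer) < BUFFER_SLOTS:
            buffer.append((second, p))  # store the packet for later forwarding
        else:
            dropped += 1                # buffer full: the packet is lost
    for _ in range(min(departures, len(buffer))):
        buffer.popleft()                # forward onto the shared link
        sent += 1
    print(f"t={second}s queued={len(buffer)} sent={sent} dropped={dropped}")
```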
Statistical multiplexing vs. circuit
switching
• Suppose users share a 1 Mbps link and each user
alternates between periods of activity (generates data at
constant rate 100 kbps) and periods of inactivity
• Further, each user is active 10% of time
• With circuit switching (TDM, FDM), 100 kbps must be
reserved for each user at all times
– E.g., TDM: 1-sec. frame is divided into 10 time slots of
100 ms each
– The link can support only 10 simultaneous users
Statistical multiplexing vs. circuit
switching
• With statistical multiplexing, the probability that each user is active is
0.1 (10%)
• If there are 35 users, the probability that there are 11 or more
simultaneously active users is approx. 0.0004 (see next slide)
• When there are 10 or fewer simultaneously active users (which
happens with probability 0.9996), the aggregate arrival rate of data
is less than or equal to 1 Mbps (the output rate of the link)
• When there are more than 10 active users, queue will begin to grow
(until aggregate rate falls below 1 Mbps)
• The probability of having more than 10 simultaneously active users is minuscule, so the performance is essentially the same as with circuit switching, while more than three times as many users (35 instead of 10) are supported
Statistical multiplexing vs. circuit
switching
• Let p=0.1
• Probability of having n simultaneously active
users out of 35 at any given time is
C(35, n) p^n (1 - p)^(35-n)
• Probability that there are 11 or more users transmitting simultaneously is
Σ_{n=11}^{35} C(35, n) p^n (1 - p)^(35-n) ≈ 0.0004
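A quick numerical check of the tail probability quoted on the previous slide, using Python's `math.comb`:

```python
from math import comb

p, n_users = 0.1, 35
tail = sum(comb(n_users, n) * p**n * (1 - p)**(n_users - n)
           for n in range(11, n_users + 1))
print(f"P(11 or more simultaneously active users) = {tail:.6f}")  # about 0.0004
```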
Support for common services
• Goal: Hide the complexity of the network from
the application
• Logical channels: Application-to-application
communication path or a pipe
Figure: Application processes communicating over an abstract channel; the channel must meet the functionality requirements of the applications.
Common communication
patterns
• After understanding the communication needs of a
representative collection of applications, extract their
common communication requirements, and finally
incorporate the functionality that meets these
requirements in the network
• Examples: Request/reply channels and message stream
channels
– Can you give example applications that make use of
these channels?
Characterization of networks
according to their sizes
– PAN (personal area network): around an individual
– SAN (storage area network): in a room
Three types of failures
• Bit errors
– 1 is turned into a 0 or vice versa
– Causes: Lightning, power surges, microwave ovens
– These are rare: about 1 in 10^6-10^7 bits on copper cable, and 1 in 10^12-10^14 bits on fiber
– Single bit errors and burst errors
• Packet losses
– Uncorrectable bit errors or packet drops at intermediate nodes
due to lack of buffer space
• Node- or link-level failures
– Crashed computers, broken links, misconfiguration
– Q: How do you distinguish between a failed computer and one that is merely slow? What about the difference between a link that has been cut and one that is very flaky and therefore introduces a high number of bit errors? (See more in CMPE 449 Distributed Systems)
Layering
• A fundamental design and implementation
concept
• When designing and analyzing complex
systems, we usually abstract away the details of
components and provide an interface for other
components of the system
• Services provided at higher layers are
implemented in terms of services provided by
lower layers
Protocols
• Protocols
– are abstract objects that make up the layers of a
network system
– provide communication service that higher level
objects (e.g. application processes or higher level
protocols) use to exchange messages
– In fact, the term “protocol” is overloaded: it can refer to the abstract specification of the interfaces or to the module that actually implements them
• Two different interfaces are generally provided
– Service interface: To other objects on the same
computer
– Peer interface: To a protocol’s counterpart (peer) on another machine
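As a loose illustration (not the lecture's example), the sketch below shows a toy protocol exposing a service interface to local callers and a peer interface (its message format) to its counterpart; `LoopbackLink` is a made-up stand-in for the lower-level service that simply delivers bytes back locally:

```python
class LoopbackLink:
    """Stand-in for the lower-level service; it just loops bytes back locally."""
    def __init__(self):
        self.deliver_to = None
    def send(self, message: bytes) -> None:
        self.deliver_to(message)

class ToyRequestReplyProtocol:
    def __init__(self, lower):
        self.lower = lower
        lower.deliver_to = self.handle_from_peer   # wire up the receive path

    # Service interface: called by higher-level objects on the same machine.
    def send_request(self, data: bytes) -> None:
        self.lower.send(b"REQ " + data)

    # Peer interface: the message format agreed with the peer protocol.
    def handle_from_peer(self, message: bytes) -> None:
        kind, payload = message[:4].strip(), message[4:]
        print(f"peer message of type {kind.decode()} carrying {payload!r}")

link = LoopbackLink()
proto = ToyRequestReplyProtocol(link)
proto.send_request(b"hello")   # the application uses the service interface
```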
Service and peer interfaces of
protocols
Encapsulation
• A header (and/or trailer) is attached to a
message body or payload
• Referred to as multiplexing and demultiplexing
up and down the protocol graph
• Headers attached to messages contain an
identifier that records the higher level object to
which the message belongs
• Trailers may contain information for error
detection/correction
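A minimal sketch of encapsulation and demultiplexing with a hypothetical one-byte header carrying a demux key; the protocol names RRP and MSP and the key values are illustrative only:

```python
DEMUX_KEYS = {"RRP": 1, "MSP": 2}                 # hypothetical higher-level protocols
HANDLERS = {1: lambda m: print("RRP received", m),
            2: lambda m: print("MSP received", m)}

def encapsulate(protocol: str, payload: bytes) -> bytes:
    """Attach a 1-byte header recording which higher-level object owns the payload."""
    return bytes([DEMUX_KEYS[protocol]]) + payload

def demultiplex(message: bytes) -> None:
    """Strip the header and hand the payload up to the right protocol."""
    key, payload = message[0], message[1:]
    HANDLERS[key](payload)

demultiplex(encapsulate("RRP", b"request #1"))    # header added, then stripped
```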
Encapsulation (cont.)
Figure: Messages can be broken down into smaller messages.
Open Systems Interconnection
(OSI) reference model
Figure: The OSI reference model, with the application, presentation, session, and transport layers on each end host, above the network, data link, and physical layers.
Description of layers
• Transport Layer
– Implements a process-to-process channel
– The unit of data exchanged in this layer is called a message
• Session Layer
– Provides a name space that is used to tie together the potentially
different transport streams that are part of a single application
– Example: Management of an audio stream and a video stream
that are being combined in a teleconferencing application
• Presentation Layer
– Concerned with the format of data exchanged between peers
– Is an integer 16, 32, or 64 bits long? Is the most significant byte transmitted first or last? How is a video stream formatted?
• Application Layer
– Standardizes common types of exchanges
The transport layer and the higher layers typically run only on end hosts and not on the intermediate switches and routers
Internet Architecture
Figure: The Internet protocol graph: applications such as FTP, HTTP, NV, and TFTP run over TCP or UDP, which run over IP, which in turn runs over many different network technologies (NET 1, NET 2, ..., NET n). The Internet reference model imposes no strict layering.
Network performance
• Network performance is measured in two fundamental
ways
– Bandwidth (or sometimes throughput): Number of bits that can be transmitted over the network in a certain period of time (e.g., at 10 Mbps it takes 0.1 µs to transmit each bit)
• Notation: KB = 2^10 bytes; Mbps = 10^6 bits per second
– Latency (or delay): Time it takes a message to travel
from one end of a network to the other (e.g., 24 ms in
transcontinental networks)
• Round-trip time (RTT): Time it takes to send a message from
one end of a network to the other and back
Bandwidth
Delay and throughput
• In general, latency has 3 main components
– Latency = Propagation + Transmit + Queue
– Propagation = Distance / SpeedOfLight
– Transmit = Size (bits) / Bandwidth (bps)
• Be careful since delay/latency/RTT are context
dependent
– We will make it explicit whenever necessary
• Effective end-to-end throughput
– Throughput = TransferSize (bits) / TransferTime (sec)
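A small worked example applying the formulas above, with made-up numbers (a 1 MB transfer over a 4,000 km fiber path at 100 Mbps, no queueing delay):

```python
SPEED_OF_LIGHT = 2.0e8      # metres/second in fibre, roughly

def latency(distance_m, size_bits, bandwidth_bps, queue_s=0.0):
    propagation = distance_m / SPEED_OF_LIGHT       # Distance / SpeedOfLight
    transmit = size_bits / bandwidth_bps            # Size / Bandwidth
    return propagation + transmit + queue_s

size_bits = 8 * 10**6       # a 1 MB transfer expressed in bits
t = latency(distance_m=4_000_000, size_bits=size_bits, bandwidth_bps=100e6)
print(f"latency    = {t * 1000:.1f} ms")                # 20 ms + 80 ms = 100 ms
print(f"throughput = {size_bits / t / 1e6:.1f} Mbps")   # TransferSize / TransferTime
```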
Bandwidth vs. Latency
• Relative importance of bandwidth and latency
depends on application
– For large file transfer, bandwidth is critical
– For small messages (HTTP, NFS, etc.), latency is
critical
– Variance in latency (jitter) can also affect some
applications (e.g., audio/video conferencing) (see
more later)
Latency vs. bandwidth domination
Figure: Transfer time as a function of RTT (ms), showing when latency dominates and when bandwidth dominates.
Delay x Bandwidth product
• Volume of the pipe: The maximum number of bits that could be in transit through the pipe at any given instant
• A transcontinental channel with a one-way latency of 50 ms and a bandwidth of 45 Mbps can hold 50 × 10^-3 s × 45 × 10^6 bits/s = 2.25 × 10^6 bits ≈ 280 KB of data
Figure: The network viewed as a pipe whose length is the delay and whose diameter is the bandwidth.
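Checking the arithmetic on this slide in a couple of lines of Python:

```python
delay_s = 50e-3             # one-way latency of the transcontinental channel
bandwidth_bps = 45e6

bits_in_flight = delay_s * bandwidth_bps
print(f"{bits_in_flight:.2e} bits in flight")          # 2.25e+06 bits
print(f"= {bits_in_flight / 8:.0f} bytes, roughly 280 KB")
```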
Sample delay x bandwidth products
Table columns: Link type, Bandwidth, Distance, RTT, Delay × BW
Keeping the pipe full
• Delay x bandwidth is important to know when
constructing high-performance networks
because it corresponds to how many bits the
sender must transmit before the first bit arrives
at the receiver
• If the sender is expecting the receiver to somehow signal that bits are starting to arrive, the sender can send up to 2 × delay × bandwidth (= RTT × bandwidth) worth of data before hearing from the receiver
Keeping the pipe full
• TCP will have performance problems on paths with high bandwidth and long round-trip delays (“long fat networks”, or LFNs)
• The relevant parameter here is the product of
bandwidth (bits per second) and round-trip delay
(RTT in seconds)
– Number of bits it takes to "fill the pipe", i.e.,
the amount of unacknowledged data that TCP
must handle to keep the pipe full
Time scales and performance
• Average vs. “instantaneous” bandwidth:
– Time interval over which average is computed is
important
– Suppose a video application needs 2 Mbps on
average:
• If it transmits 1 Mb in first second and 3 Mb in the following
second, over the 2-second interval average rate is 2 Mbps
• However, just knowing the average may not be enough. What if the network can support no more than 2 Mb in any one second?
• Generally, one puts an upper bound on how large a “burst”
can be (Burst: Peak rate maintained for some period of time)
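The slide's example in code, with the per-second figures as given:

```python
per_second_mbits = [1, 3]          # Mb carried in each one-second interval
average = sum(per_second_mbits) / len(per_second_mbits)
print(f"average over 2 s   = {average:.0f} Mbps")          # 2 Mbps, as required
print(f"peak in one second = {max(per_second_mbits)} Mb")  # exceeds a 2 Mb/s cap
```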
Delay and jitter
• Delay requirement: As little delay as possible
• Jitter requirement: As little variation in delay as
possible
• Smoothing out jitter is important for better
performance in video applications
– Jitter can be smoothed out by delaying the
time at which playback starts
Figure: Packets leave the source with equal interpacket gaps but arrive at the sink with varying gaps; the network induces jitter!
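A minimal sketch of smoothing jitter by delaying the start of playback; the arrival times and the 200 ms playout delay are made-up numbers. Choosing the playout delay trades added latency against the risk of packets arriving after their playback time:

```python
arrival_times = [0.00, 0.09, 0.25, 0.28]   # seconds; gaps vary because of jitter
packet_spacing = 0.10                      # the source sent one packet every 100 ms
playout_delay = 0.20                       # playback starts this long after packet 0

for i, arrived in enumerate(arrival_times):
    playout = playout_delay + i * packet_spacing   # evenly spaced playback times
    status = "on time" if arrived <= playout else "LATE (playback glitch)"
    print(f"packet {i}: arrived {arrived:.2f}s, played at {playout:.2f}s -> {status}")
```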
Cloudification or softwarization
of the network
• Network operators are trying to simultaneously accelerate the pace
of innovation (known as feature velocity) and yet continue to offer a
reliable service (preserve stability)
• Adopt the best practices of cloud providers:
– Take advantage of commodity hardware and move all
intelligence into software
– Adopt agile engineering processes that break down barriers
between development and operations
• Software Defined Networking (SDN) is a game changer in terms of how rapidly the network evolves to support new features (See more in CMPE 448 Modern Networking Concepts)