
Application layer: overview

 Principles of network applications
 Web and HTTP
 E-mail, SMTP, IMAP
 The Domain Name System (DNS)
 P2P applications
 Video streaming and content distribution networks

Application Layer: 2-2


Creating a network app

write programs that:
 run on (different) end systems
 communicate over network
 e.g., web server software communicates with browser software

no need to write software for network-core devices
 network-core devices do not run user applications
 applications on end systems allow for rapid app development, propagation

[figure: protocol stacks (application, transport, network, data link, physical) at end systems in home, enterprise, and datacenter networks, interconnected by local/regional and national/global ISPs]

Application Layer: 2-3


Some network apps
 Web
 social networking
 text messaging (WhatsApp, WeChat, …)
 e-mail
 multi-user network games
 streaming stored video (YouTube, Netflix, …)
 P2P file sharing
 voice over IP (e.g., Skype)
 real-time video conferencing (e.g., Zoom)
 Internet search
 remote login
 …

Application Layer: 2-4


Client-server paradigm
server:
 always-on host
 permanent IP address
 often in data centers, for scaling
clients:
 contact, communicate with server
 may be intermittently connected
 may have dynamic IP addresses
 do not communicate directly with each other
 examples: HTTP, IMAP, SMTP

Application Layer: 2-5


Peer-to-peer architecture
 no always-on server
 arbitrary end systems directly communicate
 peers request service from other peers, provide service in return to other peers
• self scalability – new peers bring new service capacity, as well as new service demands
 peers are intermittently connected and change IP addresses
• complex management
 example: P2P file sharing

Application Layer: 2-6


Processes communicating
process: program running within a host
 within same host, two processes communicate using inter-process communication (defined by OS)
 processes in different hosts communicate by exchanging messages

clients, servers
 client process: process that initiates communication
 server process: process that waits to be contacted
 note: applications with P2P architectures have both client processes & server processes

Application Layer: 2-7
Sockets
 process sends/receives messages to/from its socket
 socket analogous to door
• sending process shoves message “out the door”
• sending process relies on transport infrastructure on other side of door to deliver message to socket at receiving process
• two sockets involved: one on each side

[figure: two processes, each with a socket; the socket is controlled by the app developer, while the transport/network/link/physical layers below it are controlled by the OS]

Application Layer: 2-8
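To make the socket abstraction concrete, here is a minimal sketch (not from the slides) of a TCP client that creates a socket, pushes a message "out the door", and reads the reply; the host and port values are hypothetical placeholders.

```python
import socket

# minimal TCP client sketch: the application only talks to its socket;
# the OS transport layer delivers the bytes to the socket on the other side
SERVER_HOST = "example.com"   # hypothetical server
SERVER_PORT = 12345           # hypothetical port

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect((SERVER_HOST, SERVER_PORT))   # client initiates the connection
    s.sendall(b"hello from client\n")       # send message "out the door"
    reply = s.recv(4096)                    # read whatever the server sends back
    print(reply.decode(errors="replace"))
```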


Addressing processes
 to receive messages, process must have identifier
 host device has unique 32-bit IP address
 Q: does IP address of host on which process runs suffice for identifying the process?
 A: no, many processes can be running on same host
 identifier includes both IP address and port number associated with process on host
 example port numbers:
• HTTP server: 80
• mail server: 25
 to send HTTP message to gaia.cs.umass.edu web server:
• IP address: 128.119.245.12
• port number: 80

Application Layer: 2-9


An application-layer protocol defines:
 types of messages exchanged,
• e.g., request, response
 message syntax:
• what fields in messages & how fields are delineated
 message semantics
• meaning of information in fields
 rules for when and how processes send & respond to messages

open protocols:
 defined in RFCs, everyone has access to protocol definition
 allows for interoperability
 e.g., HTTP, SMTP
proprietary protocols:
 e.g., Skype

Application Layer: 2-10
Application layer: overview
 Principles of network applications
 Web and HTTP
 E-mail, SMTP, IMAP
 The Domain Name System (DNS)
 P2P applications
 video streaming and content distribution networks

Application Layer: 2-11


Web and HTTP
A quick review…
 web page consists of objects, each of which can be stored on different Web servers
 object can be HTML file, JPEG image, Javascript file, audio file, …
 web page consists of base HTML-file which includes several referenced objects, each addressable by a URL, e.g.,
     www.someschool.edu/someDept/pic.gif
     (host name: www.someschool.edu, path name: /someDept/pic.gif)

Application Layer: 2-12
HTTP overview
HTTP: hypertext transfer protocol
 Web’s application-layer protocol
 client/server model:
• client: browser that requests, receives (using HTTP protocol), and “displays” Web objects (e.g., PC running Firefox, iPhone running Safari)
• server: Web server (e.g., Apache) sends (using HTTP protocol) objects in response to requests

Application Layer: 2-13


HTTP overview (continued)
HTTP uses TCP:
 client initiates TCP connection (creates socket) to server, port 80
 server accepts TCP connection from client
 HTTP messages (application-layer protocol messages) exchanged between browser (HTTP client) and Web server (HTTP server)
 TCP connection closed

HTTP is “stateless”
 server maintains no information about past client requests

aside: protocols that maintain “state” are complex!
 past history (state) must be maintained
 if server/client crashes, their views of “state” may be inconsistent, must be reconciled

Application Layer: 2-14


HTTP connections: two types
Non-persistent HTTP
1. TCP connection opened
2. at most one object sent over TCP connection
3. TCP connection closed
downloading multiple objects requires multiple connections

Persistent HTTP
 TCP connection opened
 multiple objects can be sent over a single TCP connection between client and that server
 TCP connection closed

Application Layer: 2-15


Non-persistent HTTP: example
User enters URL: www.someSchool.edu/someDepartment/home.index
(containing text, references to 10 jpeg images)

1a. HTTP client initiates TCP connection to HTTP server (process) at www.someSchool.edu on port 80
1b. HTTP server at host www.someSchool.edu waiting for TCP connection at port 80 “accepts” connection, notifying client
2. HTTP client sends HTTP request message (containing URL) into TCP connection socket. Message indicates that client wants object someDepartment/home.index
3. HTTP server receives request message, forms response message containing requested object, and sends message into its socket
Application Layer: 2-16
Non-persistent HTTP: example (cont.)
User enters URL: www.someSchool.edu/someDepartment/home.index
(containing text, references to 10 jpeg images)

4. HTTP server closes TCP connection.
5. HTTP client receives response message containing html file, displays html. Parsing html file, finds 10 referenced jpeg objects
6. Steps 1-5 repeated for each of 10 jpeg objects

Application Layer: 2-17
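As an illustration only (not part of the slides), the non-persistent exchange above can be reproduced with a raw socket: one TCP connection, one request, one response, connection closed; the host and path are placeholders.

```python
import socket

# one non-persistent, HTTP/1.0-style fetch: connect, send one GET, read until close
HOST = "example.com"   # placeholder host
PATH = "/index.html"   # placeholder object

with socket.create_connection((HOST, 80)) as s:
    request = (f"GET {PATH} HTTP/1.1\r\n"
               f"Host: {HOST}\r\n"
               "Connection: close\r\n\r\n")   # ask the server to close after this object
    s.sendall(request.encode())
    chunks = []
    while True:
        data = s.recv(4096)
        if not data:          # server closed the connection: end of this object
            break
        chunks.append(data)

response = b"".join(chunks)
print(response.split(b"\r\n", 1)[0].decode())  # status line, e.g. HTTP/1.1 200 OK
# fetching the 10 referenced images this way repeats the whole connect/request/close cycle
```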


Non-persistent HTTP: response time
RTT (definition): time for a small packet to travel from client to server and back

HTTP response time (per object):
 one RTT to initiate TCP connection
 one RTT for HTTP request and first few bytes of HTTP response to return
 object/file transmission time

[figure: timeline showing initiate TCP connection (RTT), request file (RTT), time to transmit file, file received]

Non-persistent HTTP response time = 2RTT + file transmission time
Application Layer: 2-18
Persistent HTTP (HTTP 1.1)
Non-persistent HTTP issues:
 requires 2 RTTs per object
 OS overhead for each TCP connection
 browsers often open multiple parallel TCP connections to fetch referenced objects in parallel

Persistent HTTP (HTTP 1.1):
 server leaves connection open after sending response
 subsequent HTTP messages between same client/server sent over open connection
 client sends requests as soon as it encounters a referenced object
 as little as one RTT for all the referenced objects (cutting response time in half)
Application Layer: 2-19
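A small sketch (mine, not the slides) of persistent HTTP using Python's standard http.client: one TCP connection is opened once and reused for several requests; the host and paths are placeholders.

```python
from http.client import HTTPConnection

HOST = "example.com"                      # placeholder host
PATHS = ["/", "/a.jpg", "/b.jpg"]         # placeholder objects

conn = HTTPConnection(HOST, 80)           # one TCP connection, kept open (HTTP/1.1 keep-alive)
for path in PATHS:
    conn.request("GET", path)             # reuses the same connection for each object
    resp = conn.getresponse()
    body = resp.read()                    # must fully read the body before the next request
    print(path, resp.status, len(body), "bytes")
conn.close()                              # close once, after all objects are fetched
```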
HTTP request message
 two types of HTTP messages: request, response
 HTTP request message:
• ASCII (human-readable format)

request line (GET, POST, HEAD commands), then header lines; a carriage return + line feed at the start of a line indicates the end of the header lines:

GET /index.html HTTP/1.1\r\n
Host: www-net.cs.umass.edu\r\n
User-Agent: Firefox/3.6.10\r\n
Accept: text/html,application/xhtml+xml\r\n
Accept-Language: en-us,en;q=0.5\r\n
Accept-Encoding: gzip,deflate\r\n
Accept-Charset: ISO-8859-1,utf-8;q=0.7\r\n
Keep-Alive: 115\r\n
Connection: keep-alive\r\n
\r\n

* Check out the online interactive exercises for more examples: http://gaia.cs.umass.edu/kurose_ross/interactive/ Application Layer: 2-20
HTTP request message: general format
 request line: method sp URL sp version cr lf
 header lines: header field name: value cr lf (repeated)
 blank line: cr lf
 entity body

Application Layer: 2-21


Other HTTP request messages
POST method:
 web page often includes form input
 user input sent from client to server in entity body of HTTP POST request message

GET method (for sending data to server):
 include user data in URL field of HTTP GET request message (following a ‘?’):
     www.somesite.com/animalsearch?monkeys&banana

HEAD method:
 requests headers (only) that would be returned if specified URL were requested with an HTTP GET method.

PUT method:
 uploads new file (object) to server
 completely replaces file that exists at specified URL with content in entity body of the PUT HTTP request message

Application Layer: 2-22


HTTP response message
status line (protocol, status code, status phrase), then header lines, then data (e.g., requested HTML file):

HTTP/1.1 200 OK\r\n
Date: Sun, 26 Sep 2010 20:09:20 GMT\r\n
Server: Apache/2.0.52 (CentOS)\r\n
Last-Modified: Tue, 30 Oct 2007 17:00:02 GMT\r\n
ETag: "17dc6-a5c-bf716880"\r\n
Accept-Ranges: bytes\r\n
Content-Length: 2652\r\n
Keep-Alive: timeout=10, max=100\r\n
Connection: Keep-Alive\r\n
Content-Type: text/html; charset=ISO-8859-1\r\n
\r\n
data data data data data ...

* Check out the online interactive exercises for more examples: http://gaia.cs.umass.edu/kurose_ross/interactive/
Application Layer: 2-23
HTTP response status codes
 status code appears in 1st line in server-to-client response message.
 some sample codes:
200 OK
• request succeeded, requested object later in this message
301 Moved Permanently
• requested object moved, new location specified later in this message (in
Location: field)
400 Bad Request
• request msg not understood by server
404 Not Found
• requested document not found on this server
505 HTTP Version Not Supported
Application Layer: 2-24
Maintaining user/server state: cookies
Recall: HTTP GET/response interaction is stateless
 no notion of multi-step exchanges of HTTP messages to complete a Web “transaction”
• no need for client/server to track “state” of multi-step exchange
• all HTTP requests are independent of each other
• no need for client/server to “recover” from a partially-completed-but-never-completed transaction

Application Layer: 2-25


Maintaining user/server state: cookies
Web sites and client browser use cookies to maintain some state between transactions

four components:
1) cookie header line of HTTP response message
2) cookie header line in next HTTP request message
3) cookie file kept on user’s host, managed by user’s browser
4) back-end database at Web site

Example:
 Susan uses browser on laptop, visits specific e-commerce site for first time
 when initial HTTP request arrives at site, site creates:
• unique ID (aka “cookie”)
• entry in backend database for ID
• subsequent HTTP requests from Susan to this site will contain cookie ID value, allowing site to “identify” Susan
Application Layer: 2-26
Maintaining user/server state: cookies (example)
 client (cookie file: ebay 8734) sends usual HTTP request msg to Amazon server
 Amazon server creates ID 1678 for user, creates entry in backend database
 usual HTTP response with set-cookie: 1678; client’s cookie file now: ebay 8734, amazon 1678
 later: usual HTTP request msg with cookie: 1678 → server performs cookie-specific action (accesses backend database); usual HTTP response msg
 one week later: usual HTTP request msg with cookie: 1678 → cookie-specific action; usual HTTP response msg
Application Layer: 2-27
HTTP cookies: comments
What cookies can be used for:
 authorization
 shopping carts
 recommendations
 user session state (Web e-mail)

Challenge: How to keep state:
 protocol endpoints: maintain state at sender/receiver over multiple transactions
 cookies: HTTP messages carry state

aside – cookies and privacy:
 cookies permit sites to learn a lot about you on their site
 third-party persistent cookies (tracking cookies) allow common identity (cookie value) to be tracked across multiple web sites

Application Layer: 2-28


Web caches (proxy servers)
Goal: satisfy client request without involving origin server
 user configures browser to point to a Web cache (proxy server)
 browser sends all HTTP requests to cache
• if object in cache: cache returns object to client
• else cache requests object from origin server, caches received object, then returns object to client

Application Layer: 2-29


Web caches (proxy servers)
 Web cache acts as both client and server
• server for original requesting client
• client to origin server
 typically cache is installed by ISP (university, company, residential ISP)

Why Web caching?
 reduce response time for client request
• cache is closer to client
 reduce traffic on an institution’s access link
 Internet is dense with caches
• enables “poor” content providers to more effectively deliver content

Application Layer: 2-30


Caching example
Scenario:
 access link rate: 1.54 Mbps
 RTT from institutional router to server: 2 sec
 Web object size: 100K bits
 average request rate from browsers to origin servers: 15/sec
 average data rate to browsers: 1.50 Mbps

Performance:
 LAN traffic intensity: .0015
 access link traffic intensity = .97   (problem: large delays at high utilization!)
 end-end delay = Internet delay + access link delay + LAN delay
   = 2 sec + minutes + usecs

[figure: institutional network with 1 Gbps LAN, connected to the public Internet and origin servers over a 1.54 Mbps access link]
Application Layer: 2-31
Caching example: buy a faster access link
Scenario:
 access link rate: 1.54 Mbps → 154 Mbps
 RTT from institutional router to server: 2 sec
 Web object size: 100K bits
 avg request rate from browsers to origin servers: 15/sec
 avg data rate to browsers: 1.50 Mbps

Performance:
 LAN traffic intensity: .0015
 access link traffic intensity = .97 → .0097
 end-end delay = Internet delay + access link delay + LAN delay
   = 2 sec + msecs + usecs

Cost: faster access link (expensive!) Application Layer: 2-32
Caching example: install a web cache
Scenario:
 access link rate: 1.54 Mbps
 RTT from institutional router to server: 2 sec
 Web object size: 100K bits
 avg request rate from browsers to origin servers: 15/sec
 avg data rate to browsers: 1.50 Mbps

Performance:
 LAN traffic intensity: ?
 access link traffic intensity = ?
 average end-end delay = ?

Cost: web cache (cheap!)
How to compute local web cache traffic intensity, delay?
Application Layer: 2-33
Caching example: install a web cache
Calculating access link traffic intensity, end-end delay:
 suppose cache hit rate is 0.4: 40% of requests satisfied at cache, 60% of requests satisfied at origin
 access link: 60% of requests use access link
 data rate to browsers over access link = 0.6 * 1.50 Mbps = 0.9 Mbps
 access link traffic intensity = 0.9/1.54 = .58
 average end-end delay
   = 0.6 * (delay from origin servers) + 0.4 * (delay when satisfied at cache)
   = 0.6 * (2.01 sec) + 0.4 * (~msecs) ≈ 1.2 secs

lower average end-end delay than with 154 Mbps link (and cheaper too!) Application Layer: 2-34
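A small sketch (not from the slides) that redoes the arithmetic above, so the hit rate, traffic intensities, and average delay can be checked or varied; all parameter values come from the example scenario, and the cache-hit delay is an assumed "a few msecs" value.

```python
# caching-example arithmetic from the scenario above
object_size_bits = 100e3        # 100K bits per object
request_rate = 15.0             # requests/sec from browsers
access_link_bps = 1.54e6        # 1.54 Mbps access link
lan_bps = 1e9                   # 1 Gbps LAN
origin_delay_s = 2.01           # ~2 sec Internet delay plus small access-link delay
hit_rate = 0.4                  # fraction of requests satisfied at the local cache
cache_hit_delay_s = 0.01        # assumed ~msecs when served from the LAN cache

miss_rate = 1 - hit_rate
data_rate_bps = request_rate * object_size_bits              # 1.5 Mbps offered load
access_intensity = miss_rate * data_rate_bps / access_link_bps
lan_intensity = data_rate_bps / lan_bps
avg_delay_s = miss_rate * origin_delay_s + hit_rate * cache_hit_delay_s

print(f"access link traffic intensity: {access_intensity:.2f}")   # ~0.58
print(f"LAN traffic intensity:        {lan_intensity:.4f}")       # ~0.0015
print(f"average end-end delay:        {avg_delay_s:.2f} s")       # ~1.2 s
```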
Conditional GET
Goal: don’t send object if cache has up-to-date cached version
• no object transmission delay
• lower link utilization
 cache: specify date of cached copy in HTTP request
   If-modified-since: <date>
 server: response contains no object if cached copy is up-to-date:
   HTTP/1.0 304 Not Modified

[figure: if object not modified before <date>, server replies HTTP/1.0 304 Not Modified; if object modified after <date>, server replies HTTP/1.0 200 OK with <data>]

Application Layer: 2-35
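A minimal sketch (mine, not the slides) of a conditional GET with Python's http.client: the client presents the date of its cached copy in If-Modified-Since and either keeps its copy (304) or stores the new one (200); host and path are placeholders.

```python
from http.client import HTTPConnection

HOST = "example.com"     # placeholder origin server
PATH = "/index.html"     # placeholder cached object
cached_date = "Sun, 26 Sep 2010 20:09:20 GMT"   # date of the copy we already hold

conn = HTTPConnection(HOST, 80)
conn.request("GET", PATH, headers={"If-Modified-Since": cached_date})
resp = conn.getresponse()

if resp.status == 304:
    resp.read()                      # empty body: our cached copy is still valid
    print("304 Not Modified: use cached copy")
elif resp.status == 200:
    body = resp.read()               # object changed: replace the cached copy
    print(f"200 OK: received {len(body)} bytes, update cache")
conn.close()
```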


Application layer: overview
 Principles of network applications
 Web and HTTP
 E-mail, SMTP, IMAP
 The Domain Name System (DNS)
 P2P applications
 video streaming and content distribution networks

Application Layer: 2-36


E-mail
Three major components:
 user agents
 mail servers
 simple mail transfer protocol: SMTP

User Agent
 a.k.a. “mail reader”
 composing, editing, reading mail messages
 e.g., Outlook, iPhone mail client
 outgoing, incoming messages stored on server

[figure: user agents attached to mail servers; each mail server holds user mailboxes and an outgoing message queue, and mail servers talk to each other via SMTP]

Application Layer: 2-37
E-mail: mail servers
mail servers:
 mailbox contains incoming messages for user
 message queue of outgoing (to be sent) mail messages
 SMTP protocol between mail servers to send email messages
• client: sending mail server
• “server”: receiving mail server

Application Layer: 2-38


Scenario: Alice sends e-mail to Bob
1) Alice uses UA to compose e-mail message “to” bob@someschool.edu
2) Alice’s UA sends message to her mail server; message placed in message queue
3) client side of SMTP opens TCP connection with Bob’s mail server
4) SMTP client sends Alice’s message over the TCP connection
5) Bob’s mail server places the message in Bob’s mailbox
6) Bob invokes his user agent to read message

Application Layer: 2-39
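To connect steps 2–4 to real code, here is a hedged sketch (not from the slides) using Python's standard smtplib, which plays the SMTP client role toward a mail server; the server name, addresses, and message are placeholders.

```python
import smtplib
from email.message import EmailMessage

# Alice's user agent hands the message to her mail server over SMTP
msg = EmailMessage()
msg["From"] = "alice@someschool.edu"      # placeholder sender
msg["To"] = "bob@someschool.edu"          # placeholder recipient
msg["Subject"] = "Hello"
msg.set_content("Searching for the meaning of life.")

# the SMTP client opens a TCP connection to the mail server (port 25 by default)
with smtplib.SMTP("mail.someschool.edu") as smtp:   # placeholder mail server
    smtp.send_message(msg)                          # message queued for delivery to Bob's server
```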
Mail access protocols
 SMTP: delivery/storage of e-mail messages to receiver’s server
 mail access protocol: retrieval from server
• IMAP: Internet Mail Access Protocol [RFC 3501]: messages stored on server, IMAP provides retrieval, deletion, folders of stored messages on server
 HTTP: gmail, Hotmail, Yahoo!Mail, etc. provide a web-based interface on top of SMTP (to send), IMAP (or POP) to retrieve e-mail messages

[figure: sender’s UA → sender’s mail server → (SMTP) → receiver’s mail server → (e-mail access protocol, e.g., IMAP or HTTP) → receiver’s UA]
Application Layer: 2-40
Application Layer: Overview
 Principles of network applications
 Web and HTTP
 E-mail, SMTP, IMAP
 The Domain Name System (DNS)
 P2P applications
 video streaming and content distribution networks

Application Layer: 2-41


DNS: Domain Name System
Internet hosts, routers:
• IP address (32 bit) – used for addressing datagrams
• “name”, e.g., cityu.edu.hk – used by humans
Q: how to map name to IP address?

Domain Name System:
 distributed database implemented in hierarchy of many name servers
 application-layer protocol: hosts, name servers communicate to resolve names (address/name translation)
• provides core Internet function, but implemented as application-layer protocol
• complexity at network’s “edge”

Application Layer: 2-42


DNS: services, structure
DNS services
 hostname to IP address translation
 host aliasing
• canonical, alias names
 mail server aliasing
 load distribution
• replicated Web servers: many IP addresses correspond to one name

Q: Why not centralize DNS?
 single point of failure
 traffic volume
 distant centralized database
 maintenance
 doesn’t scale!

Application Layer: 2-43


DNS: a distributed, hierarchical database
 Root DNS servers (Root)
 .com DNS servers, .org DNS servers, .edu DNS servers, … (Top Level Domain)
 yahoo.com, amazon.com, pbs.org, nyu.edu, umass.edu DNS servers, … (Authoritative)

Client wants IP address for www.amazon.com (1st approximation):
 client queries root server to find .com DNS server
 client queries .com DNS server to get amazon.com DNS server
 client queries amazon.com DNS server to get IP address for www.amazon.com
Application Layer: 2-44
DNS: root name servers
 at the “root” of name server hierarchy
 contacted by local name server if it cannot resolve name
 13 logical root name “servers” worldwide, each “server” replicated many times:
   a. Verisign, Los Angeles CA (5 other sites)
   b. USC-ISI Marina del Rey, CA
   c. Cogent, Herndon, VA (5 other sites)
   d. U Maryland College Park, MD
   e. NASA Mt View, CA
   f. Internet Software C., Palo Alto, CA (and 48 other sites)
   g. US DoD Columbus, OH (5 other sites)
   h. ARL Aberdeen, MD
   i. Netnod, Stockholm (37 other sites)
   j. Verisign, Dulles VA (69 other sites)
   k. RIPE London (17 other sites)
   l. ICANN Los Angeles, CA (41 other sites)
Application Layer: 2-45
TLD and Authoritative Servers
Top-Level Domain (TLD) servers:
 responsible for .com, .org, .net, .edu, .aero, .jobs, .museums, and all
top-level country domains, e.g.: .cn, .uk, .fr, .ca, .jp
 Network Solutions: authoritative registry for .com, .net TLD
 Educause: .edu TLD
Authoritative DNS servers:
 organization’s own DNS server(s), providing authoritative hostname
to IP mappings for organization’s named hosts
 can be maintained by organization or service provider

Application Layer: 2-46


Local DNS name servers
 does not strictly belong to hierarchy
 each ISP (residential ISP, company, university) has one
• also called “default name server”
 when host makes DNS query, query is sent to its local DNS
server:
• has local cache of recent name-to-address translation pairs (but may
be out of date!)
• acts as proxy, forwards query into hierarchy

Application Layer: 2-47


DNS name resolution: iterated query
Example: host at engineering.nyu.edu wants IP address for gaia.cs.umass.edu

Iterated query:
 contacted server replies with name of server to contact
 “I don’t know this name, but ask this server”

[figure: (1) requesting host → local DNS server dns.nyu.edu; (2)-(3) local DNS server ↔ root DNS server; (4)-(5) local DNS server ↔ TLD DNS server; (6)-(7) local DNS server ↔ authoritative DNS server dns.cs.umass.edu; (8) answer returned to requesting host]

Application Layer: 2-48


DNS name resolution: recursive query
Example: host at engineering.nyu.edu wants IP address for gaia.cs.umass.edu

Recursive query:
 puts burden of name resolution on contacted name server
 heavy load at upper levels of hierarchy?

[figure: (1) requesting host → local DNS server dns.nyu.edu; (2)-(7) the query is forwarded recursively through root, TLD, and authoritative DNS server dns.cs.umass.edu and the answer passed back; (8) answer returned to requesting host]

Application Layer: 2-49


Caching, Updating DNS Records
 if name server learns mapping: adds to cache
• cache entries discarded after some time (TTL)
• TLD servers typically cached in local name servers
•  thus root name servers not often visited
 cached entries may be out-of-date (best-effort name-to-
address translation!)
• if name host changes IP address, may not be known Internet-wide
until all TTLs expire!

Application Layer: 2-50


DNS records
DNS: distributed database storing resource records (RR)
RR format: (name, value, type, ttl)

type=A
 name is hostname
 value is IP address

type=NS
 name is domain (e.g., foo.com)
 value is hostname of authoritative name server for this domain

type=CNAME
 name is alias name for some “canonical” (the real) name
 value is canonical name
 Ex: www.ibm.com is really servereast.backup2.ibm.com

type=MX
 value is canonical name of mail server that has an alias hostname in field name
Application Layer: 2-51
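As an illustration (not from the slides), these record types can be queried from code. The sketch below assumes the third-party dnspython package (`pip install dnspython`); the standard library alone can resolve A records via socket.

```python
import socket
import dns.resolver   # third-party package: dnspython (assumed installed)

# A record via the OS resolver (standard library)
print("A:", socket.gethostbyname("www.ibm.com"))

# NS and MX records via dnspython
for rtype in ("NS", "MX"):
    try:
        answers = dns.resolver.resolve("ibm.com", rtype)
        for rr in answers:
            print(rtype, rr.to_text())
    except dns.resolver.NoAnswer:
        print(rtype, "no records returned")
```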
DNS protocol messages
DNS query and reply messages, both have same format:

message header (first bytes):
 identification: 16-bit # for query; reply to query uses same #
 flags:
• query or reply
• recursion desired
• recursion available
• reply is authoritative
 counts: # questions, # answer RRs, # authority RRs, # additional RRs
then: questions, answers, authority, additional info (each a variable # of entries)

Application Layer: 2-52


DNS protocol messages
DNS query and reply messages, both have same format:
 questions: name, type fields for a query
 answers: RRs in response to query
 authority: records for other authoritative servers
 additional info: additional “helpful” info that may be used
Application Layer: 2-53
Inserting records into DNS
Example: new startup “Network Utopia”
 register name networkutopia.com at DNS registrar (e.g., Network Solutions)
• provide names, IP addresses of authoritative name server (primary and secondary)
• registrar inserts NS and A RRs into .com TLD server:
   (networkutopia.com, dns1.networkutopia.com, NS)
   (dns1.networkutopia.com, 212.212.212.1, A)
 create authoritative server locally with IP address 212.212.212.1
• type A record for www.networkutopia.com
• type MX record for networkutopia.com

Application Layer: 2-54


Application Layer: Overview
 Principles of network applications
 Web and HTTP
 E-mail, SMTP, IMAP
 The Domain Name System (DNS)
 P2P applications
 video streaming and content distribution networks

Application Layer: 2-55


Peer-to-peer (P2P) architecture
 no always-on server
 arbitrary end systems directly communicate
 peers request service from other peers, provide service in return to other peers
• self scalability – new peers bring new service capacity, and new service demands
 peers are intermittently connected and change IP addresses
• complex management
 examples: P2P file sharing (BitTorrent), streaming (KanKan), VoIP (Skype)
Application Layer: 2-56
File distribution: client-server vs P2P
Q: How much time to distribute file (size F) from 1 server to N peers?
• upload/download capacity is limited resource
 us: server upload capacity
 di: peer i download capacity
 ui: peer i upload capacity

[figure: server with file of size F and upload capacity us; N peers with upload/download capacities ui/di, connected through the Internet core (with abundant bandwidth)]
Introduction: 1-57
File distribution time: client-server
 server transmission: must sequentially send (upload) N file copies:
• time to send one copy: F/us
• time to send N copies: NF/us
 client: each client must download file copy
• dmin = minimum client download rate
• min client download time: F/dmin

time to distribute file to N clients using client-server approach:
   Dc-s ≥ max{NF/us, F/dmin}
increases linearly in N Introduction: 1-58


File distribution time: P2P
 server transmission: must upload at least one copy:
• time to send one copy: F/us
 client: each client must download file copy
• min client download time: F/dmin
 clients: as aggregate must download NF bits
• max upload rate (limiting max download rate) is: us + Σui

time to distribute file to N clients using P2P approach:
   DP2P ≥ max{F/us, F/dmin, NF/(us + Σui)}
increases linearly in N …
… but so does this, as each peer brings service capacity Application Layer: 2-59
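A short sketch (mine, not the slides) that evaluates the two lower bounds above so they can be compared numerically; it uses the same parameters as the example on the next slide (F/u = 1 hour, us = 10u, with download rates assumed not to be the bottleneck).

```python
def d_client_server(N, F, us, dmin):
    """Lower bound on client-server distribution time: max{NF/us, F/dmin}."""
    return max(N * F / us, F / dmin)

def d_p2p(N, F, us, dmin, u):
    """Lower bound on P2P distribution time: max{F/us, F/dmin, NF/(us + N*u)}."""
    return max(F / us, F / dmin, N * F / (us + N * u))

u = 1.0            # peer upload rate (normalized)
F = 1.0            # file size chosen so that F/u = 1 hour
us = 10 * u        # server upload rate
dmin = 1e9         # assume downloads are never the bottleneck

for N in (5, 10, 20, 30):
    print(N, round(d_client_server(N, F, us, dmin), 2),
          round(d_p2p(N, F, us, dmin, u), 2))
# client-server time grows linearly with N; P2P time stays bounded (< 1 hour here)
```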
Client-server vs. P2P: example
client upload rate = u, F/u = 1 hour, us = 10u, dmin ≥ us

[plot: minimum distribution time (hours) vs. N; client-server time grows linearly with N (≈3.5 hours at N = 35), while P2P time stays bounded (approaching about 1 hour)]

Application Layer: 2-60
P2P file distribution: BitTorrent
 file divided into 256Kb chunks
 peers in torrent send/receive file chunks
 tracker: tracks peers participating in torrent
 torrent: group of peers exchanging chunks of a file

Alice arrives … obtains list of peers from tracker … and begins exchanging file chunks with peers in torrent

Application Layer: 2-61


P2P file distribution: BitTorrent
 peer joining torrent:
• has no chunks, but will accumulate them
over time from other peers
• registers with tracker to get list of peers,
connects to subset of peers
(“neighbors”)
 while downloading, peer uploads chunks to other peers
 peer may change group of peers with whom it exchanges chunks
 churn: peers may come and go
 once peer has entire file, it may (selfishly) leave or (altruistically) remain
in torrent

Application Layer: 2-62


BitTorrent: requesting, sending file chunks
Requesting chunks:
 at any given time, different peers have different subsets of file chunks
 periodically, Alice asks each peer for list of chunks that they have
 Alice requests missing chunks from peers
 “rarest first” rule

Sending chunks: tit-for-tat
 Alice sends chunks to those four peers currently sending her chunks at highest rate
• other peers are “choked” by Alice (do not receive chunks from her)
• re-evaluate top 4 every 10 secs
 every 30 secs: randomly select another peer, start sending chunks
• “optimistically unchoke” this peer
• newly chosen peer may join top 4
Application Layer: 2-63
BitTorrent: tit-for-tat
(1) Alice “optimistically unchokes” Bob
(2) Alice becomes one of Bob’s top-four providers; Bob reciprocates
(3) Bob becomes one of Alice’s top-four providers

higher upload rate: find better trading


partners, get file faster !

Application Layer: 2-64


Application layer: overview
 Principles of network applications
 Web and HTTP
 E-mail, SMTP, IMAP
 The Domain Name System (DNS)
 P2P applications
 video streaming and content distribution networks

Application Layer: 2-65


Video Streaming and CDNs
 streaming video traffic: major consumer of Internet bandwidth
• Netflix, YouTube, Amazon Prime: 80% of residential ISP traffic (2020)
 challenge: scale – how to reach ~1B users?
• single mega-video server won’t work (why?)
 challenge: heterogeneity
• different users have different capabilities (e.g., wired versus mobile; bandwidth rich versus bandwidth poor)
 solution: distributed, application-level infrastructure
Application Layer: 2-66
Streaming stored video
simple scenario: video server (stored video) → Internet → client

Main challenges:
 server-to-client bandwidth will vary over time, with changing network congestion levels (in house, in access network, in network core, at video server)
 packet loss and delay due to congestion will delay playout, or result in poor video quality
Application Layer: 2-67
Streaming stored video: challenges
 continuous playout constraint: once client
playout begins, playback must match original
timing
• … but network delays are variable (jitter), so will
need client-side buffer to match playout
requirements
 other challenges:
• client interactivity: pause, fast-forward, rewind,
jump through video
• video packets may be lost, retransmitted
Application Layer: 2-68
Streaming stored video: playout buffering
[figure: constant-bit-rate video transmission at the server, variable network delay, buffered video at the client, constant-bit-rate playout at the client starting after a client playout delay]

 client-side buffering and playout delay: compensate for network-added delay, delay jitter
Application Layer: 2-69
Streaming multimedia: DASH
 DASH: Dynamic, Adaptive Streaming over HTTP
 server:
• divides video file into multiple chunks
• each chunk stored, encoded at different rates
• manifest file: provides URLs for different chunks
 client:
• periodically measures server-to-client bandwidth
• consulting manifest, requests one chunk at a time
• chooses maximum coding rate sustainable given current bandwidth
• can choose different coding rates at different points in time (depending on available bandwidth at time)

Application Layer: 2-70


Streaming multimedia: DASH
“intelligence” at client: client determines…
• when to request chunk (so that buffer starvation, or overflow, does not occur)
• what encoding rate to request (higher quality when more bandwidth available)
• where to request chunk (can request from URL server that is “close” to client or has high available bandwidth)

Streaming video = encoding + DASH + playout buffering

Application Layer: 2-71
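To make the client-side “intelligence” concrete, here is a simplified sketch (mine, not a real DASH implementation): it keeps an estimate of available bandwidth and, before each chunk, picks the highest encoding rate the estimate can sustain. The rate ladder, safety factor, and smoothing weight are assumptions.

```python
# simplified DASH-style rate selection (illustrative only)
ENCODING_RATES_KBPS = [300, 750, 1500, 3000, 6000]   # assumed rates listed in the manifest

class RateSelector:
    def __init__(self, safety_factor=0.8, smoothing=0.7):
        self.est_bw_kbps = ENCODING_RATES_KBPS[0]   # start conservatively
        self.safety_factor = safety_factor          # leave headroom below the estimate
        self.smoothing = smoothing                  # EWMA weight on the old estimate

    def report_download(self, chunk_bits, seconds):
        """Update the bandwidth estimate after each chunk download (EWMA)."""
        measured = (chunk_bits / 1000.0) / seconds  # kbps
        self.est_bw_kbps = (self.smoothing * self.est_bw_kbps
                            + (1 - self.smoothing) * measured)

    def next_rate(self):
        """Highest encoding rate sustainable at the current estimate."""
        budget = self.safety_factor * self.est_bw_kbps
        candidates = [r for r in ENCODING_RATES_KBPS if r <= budget]
        return candidates[-1] if candidates else ENCODING_RATES_KBPS[0]

sel = RateSelector()
sel.report_download(chunk_bits=4_000_000, seconds=1.0)   # measured ~4000 kbps
print(sel.next_rate())   # picks a rate comfortably below the (smoothed) estimate
```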
Content distribution networks (CDNs)
 challenge: how to stream content (selected from millions of
videos) to hundreds of thousands of simultaneous users?

 option 1: single, large “mega-server”


• single point of failure
• point of network congestion
• long path to distant clients
• multiple copies of video sent over outgoing link

….quite simply: this solution doesn’t scale


Application Layer: 2-72
Content distribution networks (CDNs)
 challenge: how to stream content (selected from millions of
videos) to hundreds of thousands of simultaneous users?
 option 2: store/serve multiple copies of videos at multiple
geographically distributed sites (CDN)
• enter deep: push CDN servers deep into many
access networks
• close to users
• Akamai: 240,000 servers deployed in more than 120
countries (2015)
• bring home: smaller number (10’s) of larger
clusters in IXPs (internet exchange points) near (but not
within) access networks
• used by Limelight
Application Layer: 2-73
Content distribution networks (CDNs)
 CDN: stores copies of content at CDN nodes
• e.g., Netflix stores copies of <your favorite show>
 subscriber requests content from CDN
• directed to nearby copy, retrieves content
• may choose different copy if network path congested

Application Layer: 2-74


CDN content access: a closer look
Bob (client) requests video http://video.netcinema.com/6Y7
 video stored in CDN at http://KingCDN.com/NetC6y&B23V

1. Bob gets URL for video http://video.netcinema.com/6Y7 from netcinema.com web page
2. Bob resolves http://video.netcinema.com via his local DNS server
3. netcinema’s authoritative DNS returns CNAME a1105.kingCDN.com
4-5. Bob’s local DNS server resolves a1105.kingCDN.com via KingCDN’s authoritative DNS and returns the CDN server’s address to Bob
6. Bob requests video from the KingCDN server, streamed via HTTP

Application Layer: 2-75
Case study: Netflix
 Netflix registration and accounting servers run in the Amazon cloud; multiple versions of each video are uploaded to CDN servers
1. Bob manages Netflix account
2. Bob browses Netflix video
3. manifest file returned for requested specific video
4. DASH server (a CDN server) selected, contacted, streaming begins

Application Layer: 2-76
CS3201 Computer Networks
Tutorial 2 (Week 2)

Prof Weifa Liang


Weifa.liang@cityu.edu.hk

Slides based on book Computer Networking: A Top-Down Approach.


Packet queueing delay
 R: link bandwidth (bps)
 L: packet length (bits)
 a: average packet arrival rate
traffic intensity = La/R
 La/R ~ 0: avg. queueing delay small
 La/R --> 1: avg. queueing delay large
 La/R > 1: the arriving “workload” is more than the servicing workload
   => average delay infinite (in theory)!

[plot: average queueing delay vs. traffic intensity La/R, blowing up as La/R approaches 1]
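A toy simulation (not from the slides) that illustrates the La/R behavior: packets of fixed length L arrive at random (Poisson) with rate a and are served by a link of rate R; the average queueing delay stays small for low intensity and grows sharply as La/R approaches 1.

```python
import random

def avg_queueing_delay(L_bits, rate_a, R_bps, n_packets=200_000, seed=1):
    """Simulate a single FIFO link; return the average queueing delay (sec)."""
    random.seed(seed)
    service = L_bits / R_bps          # transmission time per packet
    t_arrival, link_free_at, total_wait = 0.0, 0.0, 0.0
    for _ in range(n_packets):
        t_arrival += random.expovariate(rate_a)      # Poisson arrivals
        start = max(t_arrival, link_free_at)         # wait if the link is busy
        total_wait += start - t_arrival
        link_free_at = start + service
    return total_wait / n_packets

L, R = 1500 * 8, 2e6                  # 1500-byte packets, 2 Mbps link
for intensity in (0.1, 0.5, 0.9, 0.98):
    a = intensity * R / L             # arrival rate giving the desired La/R
    print(f"La/R = {intensity:.2f}: avg delay ≈ {avg_queueing_delay(L, a, R):.4f} s")
```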
HTTP connections: two types
Non-persistent HTTP
1. TCP connection opened
2. at most one object sent over TCP connection
3. TCP connection closed
downloading multiple objects requires multiple connections

Persistent HTTP
 TCP connection opened
 multiple objects can be sent over a single TCP connection between the client and the server
 TCP connection closed
Non-persistent HTTP: an example (requesting 10 objects)

1a. HTTP client initiates TCP connection to HTTP server (process) at www.someSchool.edu on port 80
1b. HTTP server at host www.someSchool.edu waiting for TCP connection at port 80 “accepts” connection, notifying client
2. HTTP client sends HTTP request message (containing URL) into TCP connection socket. Message indicates that client wants object someDepartment/home.index
3. HTTP server receives request message, forms response message containing requested object, and sends message into its socket
Non-persistent HTTP: example (cont.)
4. HTTP server closes TCP connection.
5. HTTP client receives response message containing html file, displays html. Parsing html file, finds 10 referenced jpeg objects
6. Steps 1-5 repeated for each of 10 jpeg objects
Non-persistent HTTP: response time
RTT (definition): time for a small packet to travel from a client to a server and back

HTTP response time (per object):
 one RTT to initiate TCP connection
 one RTT for HTTP request and first few bytes of HTTP response to return
 object/file transmission time

Non-persistent HTTP response time = 2RTT + file transmission time
Time to work on questions…

2-7
1. Suppose that you click a URL link within a web browser to retrieve a web page.
Suppose that the web page associated with that link contains 8 objects.
Let RTT denote the Round Trip Time between the local host and the server containing the
base HTML file and all 8 objects. Suppose that the transmission time is negligible. How long
does it take before the host can receive all objects?

a) Non-persistent HTTP is used

Answer:
If non-persistent HTTP is used, it needs to set up a TCP connection for each HTTP
request.
- It takes 2 RTTs to retrieve the base HTML file.
- It takes 2 RTTs to retrieve each referenced object.
- Thus, it takes 2 RTT + 8 * 2 RTT = 18 RTTs.

2-8
b) Persistent HTTP is used.

Answer:
- Only one TCP connection is established.
- It takes 2RTTs to receive the Base HTML file.
- All requests for the following 8 objects will be sent back-to-back.
- The responses for the 8 objects will be sent by the server back-to-back (in pipeline).
Thus, it takes 2RTT+RTT=3RTTs.

Application Layer 2-9


2. Consider the figure below. Suppose that each link between the
server and the client has a packet loss probability p, and the
packet loss probability for these links is independent.
a) What is the probability that a packet (sent by the server) is
successfully received by the client?
b) If a packet is lost in the path, then the server will eventually
re-transmit the packet. On average, how many times will the
server have to transmit a packet until the client successfully
receives the packet?

2-
Solution:
a) Probability that the i-th link does not fail: (1 – p).
This is the same for all i.
Since a packet is successfully received only if none of the links fails, we thus have

Pr[ no link fails ] = Pr[ 1st link doesn’t fail ] … Pr[ n-th link doesn’t fail ]
= (1 – p)^n

b) The server “succeeds” if the packet is not lost; otherwise it “fails” if
the packet is lost on some link.
We know from part a) that
Pr[ server succeeds ] = (1 – p)^n
What we want to compute is the expected number of times that the
server has to transmit until the event that “server succeeds” happens.
This quantity is the expected value of the geometric distribution
with parameter Pr[ server succeeds ], and thus its expected value is 1 / (1 – p)^n.
3. A packet switch receives a packet P and determines the
outbound link to which the packet should be forwarded. When
packet P arrives, another packet is halfway done being transmitted
on this outbound link and four other packets are in the waiting
queue of the switch waiting to be transmitted. Packets are
transmitted in order of arrival. Suppose all packets have a length
of 1,500 bits and the transmission rate of the outbound link is 2
Mbps. What is the queueing delay for packet P ?
Answer:
(a) The delay due to the packet that is halfway through its transmission is:

750 / (2*10^6) sec = 0.000375 seconds

(b) The delay due to the 4 packets that are ahead of the
arriving packet P is:

4 * 1500 / (2*10^6) sec = 0.003 seconds

In total, there is a queueing delay of 0.003375 seconds.


4. [Harder] Consider a sequence of N packets with each
having a length of L bits, and a router with an outbound link
with the transmission rate R. Suppose that, at time 0, the
first N/2 packets arrive simultaneously at the router and,
after (L/R) seconds, the remaining N/2 packets arrive. Apart
from these N packets, no other packets are currently being
queued or transmitted by the router. (You can assume that
N is even, i.e., N/2 is an integer.)

a) What is the queueing delay for the i-th packet, if i ≤ N/2?


b) What is the queueing delay for the i-th packet, if i > N/2?
c) What is the average queueing delay for these N packets?

2-
Solution:
a) We number the packets 1,…,N.
 Here we are looking at the first N/2 packets that arrive at the router
at time 0.
 The 1st packet has 0 delay.
 The 2nd packet has delay of L/R.
  The i-th packet will have a delay of (i-1)(L/R) sec.

 b) We now are looking at the second group, i.e., the remaining N/2
packets, that simultaneously arrive at the router at time L/R.
 At time L/R, the router has just finished sending one packet of the
first group of packets, therefore:
 The (N/2+1)-th packet will have a delay of:
 (N/2 – 1)(L/R) sec.
 The (N/2+2)-th packet will have a delay of:
 (N/2)(L/R) sec.
 The (N/2+3)-th packet will have a delay of:
 (N/2 + 1)(L/R) sec.
  The i-th packet (i> N/2) will have a delay of (i – 2)(L/R) sec.
Solution: c) To compute the average delay, we first compute the total delay
of all N packets and then divide by the number of packets. The total delay is
calculated as follows (assuming N is even):

  sum_{i=1}^{N/2} (i - 1)(L/R) + sum_{i=N/2+1}^{N} (i - 2)(L/R)
  = (L/R) [ sum_{i=1}^{N/2} (i - 1) + sum_{i=1}^{N/2} (N/2 + i - 2) ]
  = (L/R) sum_{i=1}^{N/2} (2i - 3 + N/2)
  = (L/R) [ 2 sum_{i=1}^{N/2} i - 3(N/2) + (N/2)(N/2) ]
  = (L/R) [ (N/2)(N/2 + 1) - 3N/2 + N^2/4 ]
  = (L/R) [ N^2/4 + N/2 - 3N/2 + N^2/4 ]
  = (L/R) (N^2/2 - N)

Therefore, the average queueing delay is (L/R)(N/2 - 1).
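A quick sketch (mine) that checks the closed form against a direct computation of the per-packet delays defined in parts a) and b).

```python
def avg_queueing_delay(N, L, R):
    """Average delay computed directly from the per-packet delays in parts a) and b)."""
    delays = [(i - 1) * L / R for i in range(1, N // 2 + 1)]          # first N/2 packets
    delays += [(i - 2) * L / R for i in range(N // 2 + 1, N + 1)]     # remaining N/2 packets
    return sum(delays) / N

N, L, R = 10, 1500 * 8, 2e6
direct = avg_queueing_delay(N, L, R)
closed_form = (L / R) * (N / 2 - 1)
print(direct, closed_form)   # the two values agree
```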
CS3201 Computer Networks
Tutorial (Week 3)

Prof Weifa Liang


Weifa.liang@cityu.edu.hk

Slides based on book Computer Networking: A Top-Down Approach.


Maintaining user/server state: cookies
 client (cookie file: ebay 8734) sends usual HTTP request msg to Amazon server
 Amazon server creates ID 1678 for user, creates entry in backend database
 usual HTTP response with set-cookie: 1678; client’s cookie file now: ebay 8734, amazon 1678
 later: usual HTTP request msg with cookie: 1678 → server performs cookie-specific action (accesses backend database); usual HTTP response msg
 one week later: usual HTTP request msg with cookie: 1678 → cookie-specific action; usual HTTP response msg
Web caches (proxy servers)
Goal: satisfy client request without involving origin server
 user configures browser to point to a Web cache (proxy server)
 browser sends all HTTP requests to cache
• if an object is in cache: cache returns the object to the client
• else cache requests the object from origin server, caches the received object, then returns the object to the client
DNS: a distributed, hierarchical database
 Root DNS servers (Root)
 .com DNS servers, .org DNS servers, .edu DNS servers, … (Top Level Domain)
 yahoo.com, amazon.com, pbs.org, nyu.edu, umass.edu DNS servers, … (Authoritative)

Client wants IP address for www.amazon.com (1st approximation):
 client queries root server to find .com DNS server
 client queries .com DNS server to get amazon.com DNS server
 client queries amazon.com DNS server to get IP address for www.amazon.com
Application Layer: 2-4
DNS name resolution: iterated query
Example: host at engineering.nyu.edu wants IP address for gaia.cs.umass.edu

Iterated query:
 contacted server replies with name of server to contact
 “I don’t know this name, but ask this server”

[figure: requesting host → local DNS server dns.nyu.edu, which iteratively queries the root, TLD, and authoritative DNS server dns.cs.umass.edu, then returns the answer to the requesting host]
Application Layer: 2-6

DNS name resolution: recursive query
Example: host at engineering.nyu.edu wants IP address for gaia.cs.umass.edu

Recursive query:
 puts burden of name resolution on contacted name server
 heavy load at upper levels of hierarchy?

[figure: requesting host → local DNS server dns.nyu.edu; the query is forwarded recursively through root, TLD, and authoritative DNS server dns.cs.umass.edu, and the answer is passed back]
Work on questions

2-7
1. Suppose that A has a file with size of 1 Gbits to
send to B through the following path. How much
time (in sec) will pass from the time when B
receives the first bit of the file until B has
received the whole file?

40Mbps 10Mbps
A B

3-8
Answer:
1 Gbit = 1,000 Mbit
 The throughput of the routing path is 10 Mbps, i.e., B can receive 10 Mbit per sec from A.
 It thus takes 1 Gbit / 10 Mbps = 100 sec for B to receive the file.
2. Suppose within your Web browser you click on a link to obtain
a Web page. The IP address for the associated URL is not cached
in your local host, so a DNS lookup is necessary to obtain the IP
address. It takes RTT0 = 5 msecs for the host to send a DNS
lookup and get the IP address. The Web page associated with the
link does not reference any other objects. The RTT between the
local host and the Web server containing the object is
RTTHTTP = 15 msecs.

Assuming it takes zero transmission time for the


HTML object, how much time elapses from when the
client clicks on the link until the client receives the
object?

3-10
 Answer:
 The time from when the Web request is made in
the browser until the page is displayed in the
browser is
 RTT0 + 2*RTTHTTP = 5 + 2*15 = 35 msecs.
 Note that 2 RTTHTTPs are needed to fetch the
HTML object - one RTTHTTP to establish the TCP
connection, and the other RTTHTTP to perform the
HTTP GET/response over that TCP connection.
3. Consider the networks shown in the figure below. There are two user machines
m1.a.com and m2.a.com in the network a.com. Suppose the user at m1.a.com types
the URL www.b.com/bigfile.htm into a browser to retrieve a 1 Gbit (1,000 Mbit) file
from www.b.com. We have the following assumptions:
 The packets containing any DNS commands and HTTP commands such as GET are
very small compared to the size of the file, and thus their transmission times (but
not their propagation times) can be neglected.
 Propagation delays within the LAN are small enough to be ignored. The propagation
delay from router R1 to router R2 is small enough to be ignored.
 The propagation delay from anywhere in a.com to any other site in the Internet
(except b.com) is 500 ms (= 0.5 seconds).

[figure: a.com has a 1 Gbps LAN with hosts m1.a.com and m2.a.com, an HTTP cache, and a
local DNS server; router R1 connects a.com over a 1 Mbps link (in each direction) via router R2
to the Internet, where www.b.com (on a 1 Gbps LAN) and the authoritative DNS server for
b.com are located]
3-12
List the sequence of DNS and HTTP messages sent/received from/by m1.a.com as well
as any other messages that leave/enter the a.com network that are not directly
sent/received by m1.a.com from the point that the URL is entered into the browser
until the file is completely received. Indicate the source and destination of each message.
You can assume that every HTTP request by m1.a.com is first directed to the HTTP
cache in a.com. Assume that all caches (HTTP cache and local DNS server) are initially
empty. Moreover, all DNS requests are iterated queries. Calculate the time for
m1.a.com to receive the file.
Answer:
Name resolution messages and delay:
 M1.a.com needs to resolve the name www.b.com to an IP address so it sends a DNS
REQUEST message to its local DNS resolver
• (this takes no time given the assumptions)
 Local DNS server does not have any information so it contacts a root DNS server with a
REQUEST message
• (this takes 500 ms given the assumptions)
 Root DNS server returns name of DNS Top Level Domain server for .com
• (this takes 500 ms given the assumptions)
 Local DNS server contacts .com TLD
• (this takes 500 ms given the assumptions)
 TLD .com server returns the authoritative name server for b.com
(this takes 500 ms given the assumptions)
 Local DNS server contacts the authoritative name server for b.com
• ( this takes no time given the assumptions)
 Authoritative name server for b.com returns IP address of www.b.com.
• (this takes no time given the assumptions)
3-13
Answer (continued):

HTTP messages and delay


 Since we ignore the propagation delay and transmission delay for short messages, RTT for
TCP connection is ignored.
 HTTP client sends HTTP GET message to www.b.com, which it sends to the HTTP cache in
the a.com network (this takes no time given the assumptions).
 The HTTP cache does not find the requested document in its cache, so it sends the GET
request to www.b.com. (this takes no time given the assumptions)
 www.b.com receives the GET request. It takes 1,000 seconds to send the 1 Gbit file over the 1 Mbps access link
from www.b.com to the cache, and then 1 second to send it to m1.a.com over the 1 Gbps LAN.
 The total delay is thus: .5 + .5 + .5 +.5 + 1,000 +1= 1,003 sec.

3-14
Now assume that machine m2.a.com makes a request to exactly the same URL that
m1.a.com made.
List the sequence of DNS and HTTP messages sent/received from/by m2.a.com as well
as any other messages that leave/enter the a.com network that are not directly
sent/received by m2.a.com from the point that the URL is entered into the browser
until the file is completely received. Indicate the source and destination of each message.
Answer:
 m2.a.com needs to resolve the name www.b.com to an IP address, so it sends a DNS
REQUEST message to its local DNS resolver
• this takes no time given the assumptions
 The local DNS server looks in its cache and finds the IP address for www.b.com, since
m1.a.com had just requested that name which has been resolved already, and returns the
IP address to m2.a.com.
• this takes no time given the assumptions
 HTTP client at m2.a.com sends a HTTP GET message to www.b.com, which it sends to the
HTTP cache in the a.com network (this takes no time given the assumptions).
 The HTTP cache finds the requested document in its cache, so it sends a GET request with
an If-Modified-Since to www.b.com. (this takes no time given the assumptions)
 www.b.com receives the GET request. The document has not changed, so www.b.com sends
a short HTTP RESPONSE message to the HTTP cache in a.com indicating that the cached
copy is valid. (this takes no time given the assumptions)
 There is a 1 sec delay to send the 1 Gbit file from the HTTP cache to m2.a.com.
 The total delay is thus: 1 sec
3-15
4. You accessed Amazon.com before. When you access Amazon.com
again, the website lists the items that you browsed before and provide
some recommendations to you. Explain how this happens.

Answer:
 Amazon’s server has a backend database.
 When a client accesses Amazon’s web server for the first time, a cookie
(for the user) will be generated and an entry will be added to its
backend database.
 This cookie can be stored at the client’s computer permanently.
 When a client browses an item, the cookie and the item’s id are saved
in the backend database of Amazon’s web server.
 When the client visits Amazon’s web server again, the cookie is
included in the HTTP request sent to Amazon’s web server.
 By using the cookie id, all items that the client browsed previously can
be retrieved from the backend database of the Amazon’s web server.
5. Suppose that a web browser wants to display a web
page that contains references to 10 objects. Assume that
the web page and its referenced objects are very small,
hence their transmission times can be ignored.
For each one of the scenarios (a-d) stated below, answer
the following two questions:

 How many HTTP request messages does the web


browser need to send (in total) to retrieve all objects?

 How many RTTs does it take until the client has


received all objects?
 Scenarios:
a) The web browser can open up to 5 parallel TCP connections
to the server over which it can send/receive HTTP messages. Assume
that non-persistent HTTP is used.
• Answer: For all scenarios, the number of requests is the same.
• Non-persistent HTTP means we need to create a new TCP
connection for each object. Therefore,
• 2 RTTs for the base HTML file (one for TCP and one for HTML).
• 2 RTTs for the first 5 objects since we can perform these in
parallel (through the 5 parallel TCP connections).
• Similarly, 2 RTTs for the remaining 5 objects.
• In total: 6 RTTs.
b) The web browser can open up to 5 parallel TCP connections
to the server over which it can send/receive HTTP messages. Assume
that persistent HTTP is used.
• Answer: Since persistent HTTP is used, the connections stay open and requests/responses
can be sent back-to-back (each parallel connection transmits 2 objects). In total 3 RTTs (2 RTTs
for the base HTML file + 1 RTT for the 10 objects).
 Scenarios:
c) The web browser can create a single TCP connection to the
server over which it can send/receive HTTP messages. Assume that
non-persistent HTTP is used.
Answer:
• Each HTTP request/response requires a separate TCP connection
and these are established sequentially.
• In total, we need 2*11 = 22 RTTs, where 2RTTs for a TCP
connection and the base HTML file, and
• 2 RTTS for each object downloading with one RTT for the TCP
connection and another for request/response messages between
the client and the server.
d) The web browser can create a single TCP connection to the
server over which it can send/receive HTTP messages. Assume that
persistent HTTP is used.

• Answer: Same as for scenario b). 3RTTs


CS3201 Computer Networks
Tutorial 4 (Week 4)

Prof Weifa Liang


Weifa.liang@cityu.edu.hk

Slides based on book Computer Networking: A Top-Down Approach.


Connectionless demultiplexing: an example
[figure: three hosts with UDP sockets – serverSocket bound to port 6428, and client sockets mySocket2 (port 9157) and mySocket1 (port 5775); segments carry (source port, dest port) pairs such as (9157, 6428) and (6428, 9157) and are demultiplexed by destination port]

If segments from different IP addresses have the same destination IP address and port number, they will be directed to the same process
Transport Layer: 3-2
Connection-oriented demultiplexing: example
[figure: server at IP address B runs three processes (P4, P5, P6), each with its own TCP socket; host A (IP address A) sends a segment with source IP,port A,9157; host C (IP address C) sends segments with source IP,port C,5775 and C,9157; all three segments have dest IP,port B,80]

Three segments, all destined to IP address B, dest port 80, are demultiplexed to different sockets

File distribution: client-server vs P2P
Q: How much time to distribute file (size F) from 1 server to N peers?
• upload/download capacity is limited resource
 us: server upload capacity
 di: peer i download capacity
 ui: peer i upload capacity

[figure: server with file of size F and upload capacity us; N peers with upload/download capacities ui/di, connected through the Internet core (with abundant bandwidth)]

File distribution time: client-server
 server transmission: must sequentially send (upload) N file copies:
• time to send one copy: F/us
• time to send N copies: NF/us
 client: each client must download file copy
• dmin = minimum client download rate
• min client download time: F/dmin

time to distribute file to N clients using client-server approach:
   Dc-s ≥ max{NF/us, F/dmin}
increases linearly in N
File distribution time: P2P
 server transmission: must upload at least one copy:
• time to send one copy: F/us
 client: each client must download file copy
• min client download time: F/dmin
 clients: as aggregate must download NF bits
• max upload rate (limiting max download rate) is: us + Σui

time to distribute file to N clients using P2P approach:
   DP2P ≥ max{F/us, F/dmin, NF/(us + Σui)}
increases linearly in N …
… but so does this, as each peer brings service capacity Application Layer: 2-6
P2P file distribution: BitTorrent
 file divided into 256Kb chunks
 peers in torrent send/receive file chunks
• tracker: tracks peers participating in torrent
• torrent: group of peers exchanging chunks of a file

Alice arrives … obtains a list of peers from the tracker … and begins exchanging
file chunks with peers in the torrent.
Application Layer: 2-7
P2P file distribution: BitTorrent
 peer joining torrent:
• has no chunks, but will accumulate them
over time from other peers
• registers with tracker to get list of
peers, connects to subset of peers
(“neighbors”)
 while downloading, peer uploads chunks to other peers
 peer may change group of peers with whom it exchanges chunks
 churn: peers may come and go
 once peer has an entire file, it may (selfishly) leave or (altruistically)
remain in torrent

Application Layer: 2-8
BitTorrent: requesting, sending file chunks
Requesting chunks:
 at any given time, different peers have different subsets of file chunks
 periodically, Alice asks each peer for the list of chunks that they have
 Alice requests missing chunks from peers, "rarest first"
Sending chunks: tit-for-tat
 Alice sends chunks to those four peers currently sending her chunks at the
highest rate
• other peers are "choked" by Alice (do not receive chunks from her)
• re-evaluate the top 4 every 10 secs
 every 30 secs: randomly select another peer, start sending it chunks
• "optimistically unchoke" this peer
• the newly chosen peer may join the top 4
Application Layer: 2-9
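The unchoking logic above can be summarized in a few lines. The sketch below is an illustrative simplification (the 10-second and 30-second timers are replaced by explicit method calls, and the observed rates are just a map of example numbers); it is not BitTorrent's actual implementation.

import java.util.*;
import java.util.stream.Collectors;

// Simplified tit-for-tat neighbor selection.
public class TitForTatSketch {
    // Every 10 s: unchoke the 4 neighbors currently sending us chunks fastest.
    static List<String> topFour(Map<String, Integer> rates) {
        return rates.entrySet().stream()
                .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
                .limit(4)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }

    // Every 30 s: additionally ("optimistically") unchoke one random choked neighbor.
    static String optimisticUnchoke(Set<String> neighbors, List<String> unchoked) {
        List<String> choked = new ArrayList<>(neighbors);
        choked.removeAll(unchoked);
        return choked.isEmpty() ? null : choked.get(new Random().nextInt(choked.size()));
    }

    public static void main(String[] args) {
        // Hypothetical observed download rates (kbps) from each neighbor.
        Map<String, Integer> rates = Map.of(
                "peerA", 120, "peerB", 95, "peerC", 300, "peerD", 10, "peerE", 75);
        List<String> top4 = topFour(rates);
        System.out.println("top-4 (tit-for-tat): " + top4);
        System.out.println("optimistically unchoked: " + optimisticUnchoke(rates.keySet(), top4));
    }
}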
UDP checksum
Goal: detect errors (i.e., flipped bits) in the transmitted segment
sender:
 treat the contents of the UDP segment (including UDP header fields and IP
addresses) as a sequence of 16-bit integers
 checksum: addition (1s complement sum) of the segment content
 checksum value put into the UDP checksum field
receiver:
 compute the checksum of the received segment
 check if the computed checksum equals the checksum field value:
• Not equal - error detected
• Equal - no error detected. But maybe errors nonetheless?
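A minimal sketch of this 1s-complement computation (16-bit words added with end-around carry, then complemented; the UDP pseudo-header is left out and odd-length padding is handled in the simplest way, so this is an illustration of the arithmetic rather than a full UDP implementation):

// Sketch of the 16-bit Internet checksum arithmetic used by UDP.
public class InternetChecksum {
    static int checksum(byte[] data) {
        int sum = 0;
        for (int i = 0; i < data.length; i += 2) {
            int hi = data[i] & 0xFF;
            int lo = (i + 1 < data.length) ? (data[i + 1] & 0xFF) : 0;  // pad odd length
            sum += (hi << 8) | lo;
            sum = (sum & 0xFFFF) + (sum >> 16);      // wraparound (end-around carry)
        }
        return ~sum & 0xFFFF;                        // 1s complement of the sum
    }

    public static void main(String[] args) {
        byte[] segment = "example payload!".getBytes();   // 16 bytes (even length)
        int csum = checksum(segment);
        System.out.printf("checksum = 0x%04x%n", csum);

        // Receiver-side check: recomputing over data + checksum must give 0.
        byte[] withCsum = new byte[segment.length + 2];
        System.arraycopy(segment, 0, withCsum, 0, segment.length);
        withCsum[segment.length] = (byte) (csum >> 8);
        withCsum[segment.length + 1] = (byte) (csum & 0xFF);
        System.out.println("recomputed over data+checksum (0 = no error detected): "
                + checksum(withCsum));
    }
}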
 Work on your questions

Transport Layer 3-11


1. Suppose we used 8-bit sums instead of 16-
bit sums to compute UDP checksum. What’s
the checksum of three bytes: 01010011,
01010111, 01110100?

01010011
01010111
+ 01110100
100011110
• Adding the carry out back into the least significant bit (wraparound) gives 00011110 + 1 = 00011111,
• so the checksum (the 1s complement of this sum) is: 11100000
• Q: what do we get if we compute: sum + checksum ?
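A quick check of the question above (using the three bytes from the exercise): adding the sum and the checksum with the same wraparound addition gives all 1s, 11111111, which is exactly the condition the receiver tests for.

// Reproduces the 8-bit example and answers: sum + checksum = 11111111.
public class Checksum8Demo {
    static int onesComplementSum8(int... values) {
        int sum = 0;
        for (int v : values) {
            sum += v;
            sum = (sum & 0xFF) + (sum >> 8);         // add the carry back into the LSB
        }
        return sum;
    }

    static String bin8(int x) {
        return String.format("%8s", Integer.toBinaryString(x)).replace(' ', '0');
    }

    public static void main(String[] args) {
        int sum = onesComplementSum8(0b01010011, 0b01010111, 0b01110100);
        int checksum = ~sum & 0xFF;
        System.out.println("sum            = " + bin8(sum));                               // 00011111
        System.out.println("checksum       = " + bin8(checksum));                          // 11100000
        System.out.println("sum + checksum = " + bin8(onesComplementSum8(sum, checksum))); // 11111111
    }
}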

Transport Layer 3-12


2. With the 1’s complement of the sum, how
does the receiver detect errors? Is it possible
that a 1-bit error will go undetected? How about
a 2-bit error?
 Answer: at the receiver, we can add up the 4
bytes (3 bytes + checksum). If the result is all
1s, there was no error; otherwise, there is an
error.
 It is impossible that a 1-bit error will go
undetected, as any 1-bit error will make the
result have at least one 0.
 It is possible that a 2-bit error will go
undetected as it is possible that the sum
remains the same when 2 bits are flipped.
3-13
UDP checksum: undetected error!

first number:    1110011001100110
second number:   1101010101010101
wraparound sum:  1 1011101110111011  (carry added back into the LSB)
sum:             1011101110111100
checksum:        0100010001000011

Even though the numbers have changed (a bit flipped 0→1 in one number and a bit
in the same position flipped 1→0 in the other), there is no change in the sum,
and hence no change in the checksum!
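This scenario (which also answers question 2's 2-bit case) can be reproduced directly; the sketch below uses the two 16-bit words from the slide, and bit position 4 is chosen purely as an illustrative pair of opposite flips:

// Demonstrates a 2-bit error that the 1s-complement checksum cannot detect.
public class UndetectedErrorDemo {
    static int onesComplementSum16(int a, int b) {
        int sum = a + b;
        return (sum & 0xFFFF) + (sum >> 16);         // wraparound carry
    }

    public static void main(String[] args) {
        int a = 0b1110011001100110;
        int b = 0b1101010101010101;

        int aBad = a | (1 << 4);                     // 0 -> 1 in word a
        int bBad = b & ~(1 << 4);                    // 1 -> 0 in word b, same column

        System.out.println("original sum:  " + Integer.toBinaryString(onesComplementSum16(a, b)));
        System.out.println("corrupted sum: " + Integer.toBinaryString(onesComplementSum16(aBad, bBad)));
        // Same sum => same checksum => the 2-bit error goes undetected.
    }
}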
3. Consider distributing a file of F = 15 Gbits to N peers. The server has an upload rate
of us = 30 Mbps, and each peer has a download rate of di = 2 Mbps and an upload rate of
u = 700 Kbps. Give the minimum distribution time for N = 10 and N = 1,000 for both
(a) client-server distribution; and (b) P2P distribution.
 For calculating the minimum distribution time, we use:
• For client-server: DCS >= max{ N*F/us, F/dmin }, where dmin is the minimum
download rate
• For P2P: DP2P >= max{ F/us, F/dmin, N*F/(us + Σui) }
 Before plugging the values of F and u into these formulas, we convert them
to Mbits and Mbps.
 N = 10
• DCS >= max{ 10 * 15*10^3 / 30, 15*10^3 / 2 } = max{ 5,000, 7,500 }
  = 7,500 sec (the 2nd term dominates because the number of users is small)
• DP2P >= max{ 15*10^3 / 30, 15*10^3 / 2, 10 * 15*10^3 / (30 + 10*0.7) }
  = max{ 500, 7,500, 4,054 } = 7,500 sec
 N = 1,000
• DCS >= max{ 1,000 * 15*10^3 / 30, 15*10^3 / 2 }
  = max{ 500,000, 7,500 } = 500,000 sec
• DP2P >= max{ 15*10^3 / 30, 15*10^3 / 2, 1,000 * 15*10^3 / (30 + 1,000*0.7) }
  = max{ 500, 7,500, 20,548 } = 20,548 sec
  (in the last term the numerator and denominator both scale with N, so they
  (almost) cancel each other out)
3-15
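The numbers above can be reproduced with a few lines (units are Mbits and Mbps, as in the worked solution; the class and method names are just for this illustration):

// Reproduces the distribution-time bounds for question 3.
public class DistributionTime {
    static final double F  = 15_000;   // file size (15 Gbits, in Mbits)
    static final double US = 30;       // server upload rate (Mbps)
    static final double D  = 2;        // per-peer download rate (Mbps)
    static final double U  = 0.7;      // per-peer upload rate (700 Kbps, in Mbps)

    static double clientServer(int n) {            // D_cs >= max{ N F / us, F / dmin }
        return Math.max(n * F / US, F / D);
    }

    static double p2p(int n) {                     // D_p2p >= max{ F/us, F/dmin, N F / (us + N u) }
        return Math.max(F / US, Math.max(F / D, n * F / (US + n * U)));
    }

    public static void main(String[] args) {
        for (int n : new int[]{10, 1000}) {
            System.out.printf("N=%4d: client-server %,9.0f s   P2P %,9.0f s%n",
                    n, clientServer(n), p2p(n));
        }
        // N=  10: client-server     7,500 s   P2P     7,500 s
        // N=1000: client-server   500,000 s   P2P    20,548 s
    }
}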
4. Suppose that Alice writes a BitTorrent client that
does not allow other clients to download any data from
her system. She claims that, by using this client, she can
join a torrent and still receive a complete copy of the
shared file. Is Alice's claim possible? Why or why not?
 Answer:
 Yes, it is possible.
 Note that even though Alice will never become one of
the top-4 hosts of other peers, she might nevertheless
become “optimistically unchoked” by other peers and
start receiving chunks for some time.
 Thus, if she waits long enough, she is likely to receive
all chunks of the file.

3-16
 5. Consider a short, 10-meter link with a propagation speed of 300*10^6 m/sec,
over which a sender can transmit at a rate of 150 bits/sec in both directions.
Suppose that packets containing data are 100,000 bits long, and packets that contain
only control messages (TCP or HTTP GET requests) are 200 bits long. Assume that
there are N parallel connections, each getting 1/N of the link bandwidth.
 Consider the HTTP protocol and suppose that each downloaded object is 100 Kbits
long, and that the base HTML file contains 10 referenced objects located at the same
host.
a) Would parallel downloads via 10 parallel instances of non-persistent HTTP make
sense in this case? How long does it take?
3-17
Answer:
 Let Tp denote the one-way propagation delay between the client and the server.
• Tp = 10 / (300 * 10^6) sec ≈ 0.0333 microseconds (1 second = 10^6 microseconds),
which is negligible compared with the transmission delays.
 Each downloaded object fits completely into one data packet.
 Time to get the base HTML file (over the full 150 bits/sec link):
• B = 200/150 + Tp + 200/150 + Tp + 200/150 + Tp + 100,000/150 + Tp ≈ 671 sec
 Time to get the 10 objects over 10 parallel non-persistent connections, each
with only (1/10)-th of the bandwidth (15 bits/sec):
• 200/(150/10) + Tp + 200/(150/10) + Tp + 200/(150/10) + Tp + 100,000/(150/10) + Tp ≈ 6,707 sec
 In total: ≈ 7,377 + 8*Tp seconds.
 Parallel downloads therefore make little sense here: each connection gets only
(1/10)-th of the link bandwidth, so the total time is essentially the same as
fetching the objects one after another over the full link.
3-18
b) Consider persistent HTTP (without parallel connections). Do you expect significant
gains over the non-persistent case?
• Answer: total time = time to get the base HTML file (B) + 10
requests/responses sent back-to-back over the same TCP connection:
• After having received the base HTML file, the client sends the 10
requests back-to-back.
• Sending these 10 requests takes 10 * 200/150 + Tp ≈ 13 sec.
• However, as soon as the server receives the 1st request, it starts
transmitting the first object, which takes 100,000/150 ≈ 667 sec.
• Since 13 < 667, the remaining 9 client requests arrive at the server
before the server has finished transmitting the 1st object.
• Therefore, the server can transmit each of the 10 objects as soon as it
is done with the previous one. This takes 10*100,000/150 + Tp sec.
• In total:
• B + (200/150 + Tp) + (10*100,000/150 + Tp) ≈ 671 + 1.3 + 6,667 + 6*Tp
≈ 7,339 seconds
• Conclusion: Persistent HTTP is faster (about 7,339 sec versus about 7,377 sec),
but the gain is small, because the transmission time of the objects dominates
in both cases.
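A short calculation pulling parts (a) and (b) together (the variable names are just for this sketch; the part (b) total includes the time B for the base HTML file, as in the corrected figure above):

// Totals for question 5, parts (a) and (b).
public class HttpTimingDemo {
    public static void main(String[] args) {
        double R    = 150;            // link rate, bits/sec
        double CTRL = 200;            // control packet (TCP setup / HTTP GET), bits
        double OBJ  = 100_000;        // object / data packet size, bits
        double TP   = 10 / 300e6;     // one-way propagation delay, sec (negligible)
        int    N    = 10;             // referenced objects

        // Base HTML file over the full link: three 200-bit control packets, then the file.
        double base = 3 * CTRL / R + OBJ / R + 4 * TP;

        // (a) 10 objects over 10 parallel non-persistent connections, each with R/10.
        double r = R / N;
        double parallel = 3 * CTRL / r + OBJ / r + 4 * TP;
        System.out.printf("(a) non-persistent, 10 parallel:   %.1f s%n", base + parallel); // ~7377

        // (b) persistent HTTP, one connection: one pipelined GET, then the server
        //     streams the 10 objects back-to-back at the full link rate.
        double persistent = base + (CTRL / R + TP) + (N * OBJ / R + TP);
        System.out.printf("(b) persistent, single connection: %.1f s%n", persistent);      // ~7339
    }
}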
