History of The Internet

The history of the Internet began with efforts by scientists and engineers to interconnect computer networks, leading to the development of the Internet Protocol Suite through international collaboration. Key milestones include the establishment of ARPANET in 1969, the introduction of packet switching, and the creation of the World Wide Web by Tim Berners-Lee in the early 1990s. The Internet has since transformed global communication, commerce, and technology, evolving rapidly with advancements in data transmission and connectivity.


The history of the Internet originated in the efforts of scientists and
engineers to build and interconnect computer networks. The Internet
Protocol Suite, the set of rules used to communicate between networks and
devices on the Internet, arose from research and development in the United
States and involved international collaboration, particularly with researchers
in the United Kingdom and France.[1][2][3]

Computer science was an emerging discipline in the late 1950s that began to
consider time-sharing between computer users, and later, the possibility of
achieving this over wide area networks. J. C. R. Licklider developed the idea
of a universal network at the Information Processing Techniques Office (IPTO)
of the United States Department of Defense (DoD) Advanced Research
Projects Agency (ARPA). Independently, Paul Baran at the RAND
Corporation proposed a distributed network based on data in message blocks
in the early 1960s, and Donald Davies conceived of packet switching in 1965
at the National Physical Laboratory (NPL), proposing a national commercial
data network in the United Kingdom.

ARPA awarded contracts in 1969 for the development of
the ARPANET project, directed by Robert Taylor and managed by Lawrence
Roberts. ARPANET adopted the packet switching technology proposed by
Davies and Baran. The network of Interface Message Processors (IMPs) was
built by a team at Bolt, Beranek, and Newman, with the design and
specification led by Bob Kahn. The host-to-host protocol was specified by a
group of graduate students at UCLA, led by Steve Crocker, along with Jon
Postel and others. The ARPANET expanded rapidly across the United States
with connections to the United Kingdom and Norway.

Several early packet-switched networks emerged in the 1970s which
researched and provided data networking. Louis Pouzin and Hubert
Zimmermann pioneered a simplified end-to-end approach
to internetworking at the IRIA. Peter Kirstein put internetworking into practice
at University College London in 1973. Bob Metcalfe developed the theory
behind Ethernet and the PARC Universal Packet. ARPA initiatives and
the International Network Working Group developed and refined ideas for
internetworking, in which multiple separate networks could be joined into
a network of networks. Vint Cerf, now at Stanford University, and Bob Kahn,
now at DARPA, published their research on internetworking in 1974. Through
the Internet Experiment Note series and later RFCs this evolved into
the Transmission Control Protocol (TCP) and Internet Protocol (IP), two
protocols of the Internet protocol suite. The design included concepts
pioneered in the French CYCLADES project directed by Louis Pouzin. The
development of packet switching networks was underpinned by
mathematical work in the 1970s by Leonard Kleinrock at UCLA.


In the late 1970s, national and international public data networks emerged
based on the X.25 protocol, designed by Rémi Després and others. In the
United States, the National Science Foundation (NSF) funded
national supercomputing centers at several universities in the United States,
and provided interconnectivity in 1986 with the NSFNET project, thus
creating network access to these supercomputer sites for research and
academic organizations in the United States. International connections to
NSFNET, the emergence of architecture such as the Domain Name System,
and the adoption of TCP/IP on existing networks in the United States and
around the world marked the beginnings of the Internet.[4][5][6]

Commercial Internet service providers (ISPs) emerged in 1989 in the
United States and Australia.[7] Limited private connections to parts of the
Internet by officially commercial entities emerged in several American cities
by late 1989 and 1990.[8] The optical backbone of the NSFNET was
decommissioned in 1995, removing the last restrictions on the use of the
Internet to carry commercial traffic, as traffic transitioned to optical networks
managed by Sprint, MCI and AT&T in the United States.

Research at CERN in Switzerland by the British computer scientist Tim
Berners-Lee in 1989–90 resulted in the World Wide Web,
linking hypertext documents into an information system, accessible from
any node on the network.[9] The dramatic expansion of the capacity of the
Internet, enabled by the advent of wave division multiplexing (WDM) and the
rollout of fiber optic cables in the mid-1990s, had a revolutionary impact on
culture, commerce, and technology. This made possible the rise of near-
instant communication by electronic mail, instant messaging, voice over
Internet Protocol (VoIP) telephone calls, video chat, and the World Wide Web
with its discussion forums, blogs, social networking services, and online
shopping sites. Increasing amounts of data are transmitted at higher and
higher speeds over fiber-optic networks operating at 1 Gbit/s, 10 Gbit/s, and
800 Gbit/s by 2019.[10] The Internet's takeover of the global communication
landscape was rapid in historical terms: it only communicated 1% of the
information flowing through two-way telecommunications networks in the
year 1993, 51% by 2000, and more than 97% of the telecommunicated
information by 2007.[11] The Internet continues to grow, driven by ever
greater amounts of online information, commerce, entertainment, and social
networking services. However, the future of the global network may be
shaped by regional differences.[12]

Foundations

Precursors

Telegraphy

The practice of transmitting messages between two different places through
an electromagnetic medium dates back to the electrical telegraph in the late
19th century, which was the first fully digital communication
system. Radiotelegraphy began to be used commercially in the early 20th
century. Telex became an operational teleprinter service in the 1930s. Such
systems were limited to point-to-point communication between two end
devices.

Information theory

Fundamental theoretical work in telecommunications technology was
developed by Harry Nyquist and Ralph Hartley in the 1920s. Information
theory, as enunciated by Claude Shannon in 1948, provided a firm
theoretical underpinning to understand the trade-offs between
signal-to-noise ratio, bandwidth, and error-free transmission in the
presence of noise.
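Shannon's 1948 result is often summarized by the channel capacity formula, which makes the trade-off between bandwidth and signal-to-noise ratio explicit:

```latex
C = B \log_2\!\left(1 + \frac{S}{N}\right)
```

where C is the maximum error-free data rate in bits per second, B is the channel bandwidth in hertz, and S/N is the signal-to-noise power ratio. For example, a 3 kHz telephone channel with an S/N of roughly 1000 (30 dB) has a capacity of about 30 kbit/s, which is why analog dial-up modems plateaued near that speed.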

Computers and modems

Early fixed-program computers in the 1940s were operated manually by
entering small programs via switches in order to load and run a series of
programs. As transistor technology evolved in the 1950s, central processing
units and user terminals came into use by 1955. The mainframe
computer model was devised, and modems, such as the Bell 101,
allowed digital data to be transmitted over regular unconditioned telephone
lines at low speeds by the late 1950s. These technologies made it possible to
exchange data between remote computers. However, a fixed-line link was
still necessary; the point-to-point communication model did not allow for
direct communication between any two arbitrary systems. In addition, the
applications were specific and not general purpose. Examples
included SAGE (1958) and SABRE (1960).

Time-sharing
Christopher Strachey, who became Oxford University's first Professor
of Computation, filed a patent application in the United Kingdom for time-
sharing in February 1959.[13][14] In June that year, he gave a paper "Time
Sharing in Large Fast Computers" at the UNESCO Information Processing
Conference in Paris where he passed the concept on to J. C. R. Licklider.[15][16]
Licklider, a vice president at Bolt Beranek and Newman, Inc. (BBN),
promoted the idea of time-sharing as an alternative to batch processing.[14]
John McCarthy, at MIT, wrote a memo in 1959 that broadened the concept
of time sharing to encompass multiple interactive user sessions, which
resulted in the Compatible Time-Sharing System (CTSS) implemented at MIT.
Other multi-user mainframe systems developed, such as PLATO at
the University of Illinois.[17] In the early 1960s, the Advanced Research
Projects Agency (ARPA) of the United States Department of Defense funded
further research into time-sharing at MIT through Project MAC.

Inspiration

J. C. R. Licklider, while working at BBN, proposed a computer network in his
March 1960 paper Man-Computer Symbiosis:[18]

A network of such centers, connected to one another by wide-band
communication lines [...] the functions of present-day libraries together with
anticipated advances in information storage and retrieval and symbiotic
functions suggested earlier in this paper

In August 1962, Licklider and Welden Clark published the paper "On-Line
Man-Computer Communication"[19] which was one of the first descriptions of
a networked future.

In October 1962, Licklider was hired by Jack Ruina as director of the newly
established Information Processing Techniques Office (IPTO) within ARPA,
with a mandate to interconnect the United States Department of Defense's
main computers at Cheyenne Mountain, the Pentagon, and SAC HQ. There he
formed an informal group within DARPA to further computer research. He
began by writing memos in 1963 describing a distributed network to the
IPTO staff, whom he called "Members and Affiliates of the Intergalactic
Computer Network".[20]

Although he left the IPTO in 1964, five years before the ARPANET went live, it
was his vision of universal networking that provided the impetus for one of
his successors, Robert Taylor, to initiate the ARPANET development. Licklider
later returned to lead the IPTO in 1973 for two years. [21]
Packet switching

The "message block", designed by Paul Baran in 1962 and refined in 1964, is the first proposal of a data packet.[22][23]

Main article: Packet switching

The infrastructure for telephone systems at the time was based on circuit
switching, which requires pre-allocation of a dedicated communication line
for the duration of the call. Telegram services had developed store and
forward telecommunication techniques. Western Union's Automatic Telegraph
Switching System Plan 55-A was based on message switching. The U.S.
military's AUTODIN network became operational in 1962. These systems, like
SAGE and SABRE, still required rigid routing structures that were prone
to single point of failure.[24]

The technology was considered vulnerable for strategic and military use
because there were no alternative paths for the communication in case of a
broken link. In the early 1960s, Paul Baran of the RAND Corporation produced
a study of survivable networks for the U.S. military in the event of nuclear
war.[25] Information would be transmitted across a "distributed" network,
divided into what he called "message blocks". [26][27][28][29][30] Baran's design was
not implemented.

In addition to being prone to a single point of failure, existing telegraphic
techniques were inefficient and inflexible. Beginning in 1965 Donald Davies,
at the National Physical Laboratory in the United Kingdom, independently
developed a more advanced proposal of the concept, designed for high-
speed computer networking, which he called packet switching, the term that
would ultimately be adopted.[31][32][33][34]

Packet switching is a technique for transmitting computer data by splitting it
into very short, standardized chunks, attaching routing information to each
of these chunks, and transmitting them independently through a computer
network. It provides better bandwidth utilization than traditional
circuit-switching used for telephony, and enables the connection of
computers with different transmission and receive rates. It is a concept
distinct from message switching.[35]
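The mechanics described above can be sketched in a few lines of Python. This is an illustration only, not any historical implementation; the field names and the 4-byte payload size are arbitrary choices for the example:

```python
def packetize(message: bytes, src: str, dst: str, payload_size: int = 4):
    """Split a message into short chunks, each carrying routing info.

    Each packet records source, destination, and a sequence number so
    the chunks can travel independently and be reassembled in order.
    """
    packets = []
    for seq, offset in enumerate(range(0, len(message), payload_size)):
        packets.append({
            "src": src,
            "dst": dst,
            "seq": seq,
            "payload": message[offset:offset + payload_size],
        })
    return packets


def reassemble(packets):
    """Reorder packets by sequence number and concatenate the payloads."""
    return b"".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))
```

Because each chunk carries its own routing information, the packets can take different paths and arrive out of order; `reassemble` restores the original message from the sequence numbers.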

Networks that led to the Internet


NPL network

Main article: NPL network

Following discussions with J. C. R. Licklider in 1965, Donald Davies became
interested in data communications for computer networks.[36][37] Later that
year, at the National Physical Laboratory (NPL) in the United Kingdom,
Davies designed and proposed a national commercial data network based on
packet switching.[38] The following year, he described the use of "switching
nodes" to act as routers in a digital communication network.[39][40] The
proposal was not taken up nationally but he produced a design for a local
network to serve the needs of the NPL and prove the feasibility of packet
switching using high-speed data transmission.[41][42] To deal with packet
permutations (due to dynamically updated route preferences) and with
datagram losses (unavoidable when fast sources send to a slow
destination), he assumed that "all users of the network will provide
themselves with some kind of error control",[43] thus inventing what came to
be known as the end-to-end principle. In 1967, he and his team were the first
to use the term 'protocol' in a modern data-commutation context. [44]
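The end-to-end idea, that the communicating hosts rather than the network verify data integrity, can be illustrated with a small hypothetical sketch. CRC-32 is a modern convenience here, not what the NPL systems actually used:

```python
import zlib


def send_frame(payload: bytes) -> bytes:
    """Sending host appends a CRC-32 checksum to each payload."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")


def receive_frame(frame: bytes):
    """Receiving host checks the payload itself: a mismatch (e.g. after
    corruption in transit) returns None, prompting a retransmission.
    The network is not trusted to guarantee delivery or integrity."""
    payload, checksum = frame[:-4], int.from_bytes(frame[-4:], "big")
    return payload if zlib.crc32(payload) == checksum else None
```

The point of the end-to-end principle is visible in `receive_frame`: error control lives at the edges, so the network in between can stay simple and unreliable.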

In 1968,[45] Davies began building the Mark I packet-switched network to
meet the needs of his multidisciplinary laboratory and prove the technology
under operational conditions.[46][47] The network's development was described
at a 1968 conference.[48][49] Elements of the network became operational in
early 1969,[46][50] the first implementation of packet switching,[51][52] and the
NPL network was the first to use high-speed links. [53] Many other packet
switching networks built in the 1970s were similar "in nearly all respects" to
Davies' original 1965 design.[36] The Mark II version which operated from
1973 used a layered protocol architecture.[53] In 1976, 12 computers and 75
terminal devices were attached,[54] and more were added. The NPL team
carried out simulation work on wide-area packet networks,
including datagrams and congestion; and research
into internetworking and secure communications.[46][55][56] The network was
replaced in 1986.[53]

ARPANET

Main article: ARPANET

Robert Taylor was promoted to the head of the Information Processing
Techniques Office (IPTO) at Advanced Research Projects Agency (ARPA) in
1966. He intended to realize Licklider's ideas of an interconnected
networking system.[57] As part of the IPTO's role, three network terminals had
been installed: one for System Development Corporation in Santa Monica,
one for Project Genie at University of California, Berkeley, and one for
the Compatible Time-Sharing System project at Massachusetts Institute of
Technology (MIT).[58] The need for networking became obvious to Taylor
from the waste of resources he observed.

For each of these three terminals, I had three different sets of user
commands. So if I was talking online with someone at S.D.C. and I wanted to
talk to someone I knew at Berkeley or M.I.T. about this, I had to get up from
the S.D.C. terminal, go over and log into the other terminal and get in touch
with them.... I said, oh man, it's obvious what to do: If you have these three
terminals, there ought to be one terminal that goes anywhere you want to go
where you have interactive computing. That idea is the ARPAnet. [58]

Bringing in Larry Roberts from MIT in January 1967, he initiated a project to
build such a network. Roberts and Thomas Merrill had been researching
computer time-sharing over wide area networks (WANs).[59] Wide area
networks emerged during the late 1950s and became established during the
1960s. At the first ACM Symposium on Operating Systems Principles in
October 1967, Roberts presented a proposal for the "ARPA net", based
on Wesley Clark's idea to use Interface Message Processors (IMP) to create
a message switching network.[60][61][62] At the conference, Roger
Scantlebury presented Donald Davies' work on a hierarchical digital
communications network using packet switching and referenced the work
of Paul Baran at RAND. Roberts incorporated the packet switching and
routing concepts of Davies and Baran into the ARPANET design and upgraded
the proposed communications speed from 2.4 kbit/s to 50 kbit/s.[22][63][64][65]

ARPA awarded the contract to build the network to Bolt Beranek & Newman.
The "IMP guys", led by Frank Heart and Bob Kahn, developed the routing,
flow control, software design and network control. [36][66] The first ARPANET link
was established between the Network Measurement Center at the University
of California, Los Angeles (UCLA) Henry Samueli School of Engineering and
Applied Science directed by Leonard Kleinrock, and the NLS system
at Stanford Research Institute (SRI) directed by Douglas Engelbart in Menlo
Park, California at 22:30 hours on October 29, 1969.[67][68]

"We set up a telephone connection between us and the guys at SRI ...",
Kleinrock ... said in an interview: "We typed the L and we asked on the
phone,

"Do you see the L?"


"Yes, we see the L," came the response.

We typed the O, and we asked, "Do you see the O."

"Yes, we see the O."

Then we typed the G, and the system crashed ...

Yet a revolution had begun" ....[69][70]

Postage stamp of Azerbaijan (2004): 35 Years of the Internet, 1969–2004

By December 1969, a four-node network was connected by adding the
Culler-Fried Interactive Mathematics Center at the University of California, Santa
Barbara followed by the University of Utah Graphics Department.[71] In the
same year, Taylor helped fund ALOHAnet, a system designed by
professor Norman Abramson and others at the University of Hawaiʻi at
Mānoa that transmitted data by radio between seven computers on four
islands in Hawaii.[72]

Steve Crocker formed the "Network Working Group" in 1969 at UCLA.
Working with Jon Postel and others,[73] he initiated and managed the Request
for Comments (RFC) process, which is still used today for proposing and
distributing contributions. RFC 1, entitled "Host Software", was written by
Steve Crocker and published on April 7, 1969. The protocol for establishing
links between network sites in the ARPANET, the Network Control
Program (NCP), was completed in 1970. These early years were documented
in the 1972 film Computer Networks: The Heralds of Resource Sharing.

Roberts presented the idea of packet switching to the communication
professionals, and faced anger and hostility. Before ARPANET was operating,
they argued that the router buffers would quickly run out. After the ARPANET
was operating, they argued packet switching would never be economic
without the government subsidy. Baran faced the same rejection and thus
failed to convince the military to construct a packet switching network.[74][75]

Early international collaborations via the ARPANET were sparse. Connections
were made in 1973 to the Norwegian Seismic Array (NORSAR),[76] via a
satellite link at the Tanum Earth Station in Sweden, and to Peter Kirstein's
research group at University College London, which provided a gateway
to British academic networks, the first international heterogenous resource
sharing network.[77] Throughout the 1970s, Leonard Kleinrock developed the
mathematical theory to model and measure the performance of packet-
switching technology, building on his earlier work on the application
of queueing theory to message switching systems.[78] By 1981, the number of
hosts had grown to 213.[79] The ARPANET became the technical core of what
would become the Internet, and a primary tool in developing the
technologies used.

Merit Network

Main article: Merit Network

The Merit Network[80] was formed in 1966 as the Michigan Educational
Research Information Triad to explore computer networking between three of
Michigan's public universities as a means to help the state's educational and
economic development.[81] With initial support from the State of Michigan and
the National Science Foundation (NSF), the packet-switched network was first
demonstrated in December 1971 when an interactive host to host
connection was made between the IBM mainframe computer systems at
the University of Michigan in Ann Arbor and Wayne State
University in Detroit.[82] In October 1972 connections to the CDC mainframe
at Michigan State University in East Lansing completed the triad. Over the
next several years, in addition to host-to-host interactive connections, the
network was enhanced to support terminal-to-host connections, host-to-host
batch connections (remote job submission, remote printing, batch file
transfer), interactive file transfer, gateways to the Tymnet and Telenet public
data networks, X.25 host attachments, gateways to X.25 data networks, and
Ethernet-attached hosts; eventually TCP/IP was added and additional public
universities in Michigan joined the network.[82][83] All of this set
the stage for Merit's role in the NSFNET project starting in the mid-1980s.

CYCLADES

Main article: CYCLADES

The CYCLADES packet switching network was a French research network
designed and directed by Louis Pouzin. In 1972, he began planning the
network to explore alternatives to the early ARPANET design and to
support internetworking research. First demonstrated in 1973, it was the first
network to implement the end-to-end principle conceived by Donald Davies
and make the hosts responsible for reliable delivery of data, rather than the
network itself, using unreliable datagrams.[84][85] Concepts implemented in
this network influenced TCP/IP architecture.[86][87]

X.25 and public data networks

Main articles: X.25 and public data network

1974 interview with Arthur C. Clarke by the Australian Broadcasting Corporation, in which he describes a future of ubiquitous networked personal computers

Based on international research initiatives, particularly the contributions
of Rémi Després, packet switching network standards were developed by
the International Telegraph and Telephone Consultative Committee (ITU-T) in
the form of X.25 and related standards.[88][89] X.25 is built on the concept
of virtual circuits emulating traditional telephone connections. In 1974, X.25
formed the basis for the SERCnet network between British academic and
research sites, which later became JANET, the United Kingdom's high-
speed national research and education network (NREN). The initial ITU
Standard on X.25 was approved in March 1976.[90] Existing networks, such
as Telenet in the United States adopted X.25 as well as new public data
networks, such as DATAPAC in Canada and TRANSPAC in France.[88][89]
X.25 was supplemented by the X.75 protocol which enabled
internetworking between national PTT networks in Europe and commercial
networks in North America.[91][92][93]

The British Post Office, Western Union International, and Tymnet collaborated
to create the first international packet-switched network, referred to as
the International Packet Switched Service (IPSS), in 1978. This network grew
from Europe and the US to cover Canada, Hong Kong, and Australia by 1981.
By the 1990s it provided a worldwide networking infrastructure. [94]

Unlike ARPANET, X.25 was commonly available for business
use. Telenet offered its Telemail electronic mail service, which was also
targeted to enterprise use rather than the general email system of the
ARPANET.

The first public dial-in networks used asynchronous teleprinter (TTY) terminal
protocols to reach a concentrator operated in the public network. Some
networks, such as Telenet and CompuServe, used X.25 to multiplex the
terminal sessions into their packet-switched backbones, while others, such
as Tymnet, used proprietary protocols. In 1979, CompuServe became the
first service to offer electronic mail capabilities and technical support to
personal computer users. The company broke new ground again in 1980 as
the first to offer real-time chat with its CB Simulator. Other major dial-in
networks were America Online (AOL) and Prodigy that also provided
communications, content, and entertainment features. [95] Many bulletin board
system (BBS) networks also provided on-line access, such as FidoNet which
was popular amongst hobbyist computer users, many of
them hackers and amateur radio operators.

UUCP and Usenet

Main articles: UUCP and Usenet

In 1979, two students at Duke University, Tom Truscott and Jim Ellis,
originated the idea of using Bourne shell scripts to transfer news and
messages on a serial line UUCP connection with nearby University of North
Carolina at Chapel Hill. Following public release of the software in 1980, the
mesh of UUCP hosts forwarding on the Usenet news rapidly expanded.
UUCPnet, as it would later be named, also created gateways and links
between FidoNet and dial-up BBS hosts. UUCP networks spread quickly due
to the lower costs involved, ability to use existing leased lines, X.25 links or
even ARPANET connections, and the lack of strict use policies compared to
later networks like CSNET and BITNET. All connections were local. By 1981 the
number of UUCP hosts had grown to 550, nearly doubling to 940 in 1984. [96]

Sublink Network, operating since 1987 and officially founded in Italy in 1989,
based its interconnectivity upon UUCP to redistribute mail and news groups
messages throughout its Italian nodes (about 100 at the time) owned both
by private individuals and small companies. Sublink Network evolved into
one of the first examples of Internet technology coming into use through
popular diffusion.

1973–1989: Merging the networks and creating the Internet


Map of the TCP/IP test network in February 1982

TCP/IP

Main article: Internet protocol suite

See also: Transmission Control Protocol and Internet Protocol

First Internet demonstration, linking the ARPANET, PRNET, and SATNET on November 22, 1977

With so many different networking methods seeking interconnection, a
method was needed to unify them. Louis Pouzin initiated
the CYCLADES project in 1972,[97] building on the work of Donald Davies and
the ARPANET.[98] An International Network Working Group formed in 1972;
active members included Vint Cerf from Stanford University, Alex McKenzie
from BBN, Donald Davies and Roger Scantlebury from NPL, and Louis Pouzin
and Hubert Zimmermann from IRIA.[99][100][101] Pouzin coined the
term catenet for concatenated network. Bob Metcalfe at Xerox PARC outlined
the idea of Ethernet and PARC Universal Packet (PUP)
for internetworking. Bob Kahn, now at DARPA, recruited Vint Cerf to work
with him on the problem. By 1973, these groups had worked out a
fundamental reformulation, in which the differences between network
protocols were hidden by using a common internetworking protocol. Instead
of the network being responsible for reliability, as in the ARPANET, the hosts
became responsible.[2][102]

Cerf and Kahn published their ideas in May 1974, [103] which incorporated
concepts implemented by Louis Pouzin and Hubert Zimmermann in the
CYCLADES network.[104][105] The specification of the resulting protocol,
the Transmission Control Program, was published as RFC 675 by the Network
Working Group in December 1974.[106] It contains the first attested use of the
term internet, as a shorthand for internetwork. This software was monolithic
in design using two simplex communication channels for each user session.

With the role of the network reduced to a core of functionality, it became
possible to exchange traffic with other networks independently from their
detailed characteristics, thereby solving the fundamental problems of
internetworking. DARPA agreed to fund the development of prototype
software. Testing began in 1975 through concurrent implementations at
Stanford, BBN and University College London (UCL).[3] After several years of
work, the first demonstration of a gateway between the Packet Radio
network (PRNET) in the SF Bay area and the ARPANET was conducted by
the Stanford Research Institute. On November 22, 1977, a three network
demonstration was conducted including the ARPANET, the SRI's Packet Radio
Van on the Packet Radio Network and the Atlantic Packet Satellite
Network (SATNET) including a node at UCL.[107][108]

The software was redesigned as a modular protocol stack, using full-duplex
channels; between 1976 and 1977, Yogen Dalal and Robert Metcalfe among
others, proposed separating TCP's routing and transmission control functions
into two discrete layers,[109][110] which led to the splitting of the Transmission
Control Program into the Transmission Control Protocol (TCP) and the Internet
Protocol (IP) in version 3 in 1978.[110][111] Version 4 was described
in IETF publication RFC 791 (September 1981), 792 and 793. It was installed
on SATNET in 1982 and the ARPANET in January 1983 after the DoD made it
standard for all military computer networking.[112][113] This resulted in a
networking model that became known informally as TCP/IP. It was also
referred to as the Department of Defense (DoD) model or DARPA model.[114]
Cerf credits his graduate students Yogen Dalal, Carl Sunshine, Judy
Estrin, Richard Karp, and Gérard Le Lann with important work on the design
and testing.[115] DARPA sponsored or encouraged the development of TCP/IP
implementations for many operating systems.
Decomposition of the quad-dotted IPv4 address representation to its binary value
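The decomposition in that caption is mechanical: each decimal field of a quad-dotted address is one byte of a 32-bit value. A minimal sketch in Python:

```python
def ipv4_to_binary(address: str) -> str:
    """Render a quad-dotted IPv4 address as four 8-bit binary fields."""
    octets = [int(part) for part in address.split(".")]
    if len(octets) != 4 or not all(0 <= o <= 255 for o in octets):
        raise ValueError(f"not a valid IPv4 address: {address!r}")
    # Each octet becomes a zero-padded 8-bit binary string.
    return ".".join(f"{o:08b}" for o in octets)
```

For example, `ipv4_to_binary("192.0.2.1")` yields `"11000000.00000000.00000010.00000001"`, the 32-bit value routers actually compare against.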

From ARPANET to NSFNET

Main article: NSFNET

BBN Technologies TCP/IP Internet map of early 1986

After the ARPANET had been up and running for several years, ARPA looked
for another agency to hand off the network to; ARPA's primary mission was
funding cutting-edge research and development, not running a
communications utility. In July 1975, the network was turned over to
the Defense Communications Agency, also part of the Department of
Defense. In 1983, the U.S. military portion of the ARPANET was broken off as
a separate network, the MILNET. MILNET subsequently became the
unclassified but military-only NIPRNET, in parallel with the SECRET-
level SIPRNET and JWICS for TOP SECRET and above. NIPRNET does have
controlled security gateways to the public Internet.

The networks based on the ARPANET were government funded and therefore
restricted to noncommercial uses such as research; unrelated commercial
use was strictly forbidden.[116] This initially restricted connections to military
sites and universities. During the 1980s, the connections expanded to more
educational institutions, and a growing number of companies such as Digital
Equipment Corporation and Hewlett-Packard, which were participating in
research projects or providing services to those who were. Data transmission
speeds depended upon the type of connection, the slowest being analog
telephone lines and the fastest using optical networking technology.

Several other branches of the U.S. government, the National Aeronautics and
Space Administration (NASA), the National Science Foundation (NSF), and
the Department of Energy (DOE) became heavily involved in Internet
research and started development of a successor to ARPANET. In the mid-
1980s, all three of these branches developed the first Wide Area Networks
based on TCP/IP. NASA developed the NASA Science Network, NSF
developed CSNET and DOE evolved the Energy Sciences Network or ESNet.

T3 NSFNET Backbone, c. 1992

NASA developed the TCP/IP based NASA Science Network (NSN) in the mid-
1980s, connecting space scientists to data and information stored anywhere
in the world. In 1989, the DECnet-based Space Physics Analysis Network
(SPAN) and the TCP/IP-based NASA Science Network (NSN) were brought
together at NASA Ames Research Center creating the first multiprotocol wide
area network called the NASA Science Internet, or NSI. NSI was established
to provide a totally integrated communications infrastructure to the NASA
scientific community for the advancement of earth, space and life sciences.
As a high-speed, multiprotocol, international network, NSI provided
connectivity to over 20,000 scientists across all seven continents.

In 1981, NSF supported the development of the Computer Science Network
(CSNET). CSNET connected with ARPANET using TCP/IP, and ran
TCP/IP over X.25, but it also supported departments without sophisticated
network connections, using automated dial-up mail exchange. CSNET played
a central role in popularizing the Internet outside the ARPANET. [23]

In 1986, the NSF created NSFNET, a 56 kbit/s backbone to support the NSF-
sponsored supercomputing centers. The NSFNET also provided support for
the creation of regional research and education networks in the United
States, and for the connection of university and college campus networks to
the regional networks.[117] The use of NSFNET and the regional networks was
not limited to supercomputer users and the 56 kbit/s network quickly
became overloaded. NSFNET was upgraded to 1.5 Mbit/s in 1988 under a
cooperative agreement with the Merit Network in partnership with IBM, MCI,
and the State of Michigan. The existence of NSFNET and the creation
of Federal Internet Exchanges (FIXes) allowed the ARPANET to be
decommissioned in 1990.

NSFNET was expanded and upgraded to dedicated fiber, optical lasers and
optical amplifier systems capable of delivering T3 speeds of 45 Mbit/s in
1991. However, the T3 transition by MCI took longer than
expected, allowing Sprint to establish a coast-to-coast long-distance
commercial Internet service. When NSFNET was decommissioned in 1995, its
optical networking backbones were handed off to several commercial
Internet service providers, including MCI, PSI Net and Sprint.[118] As a result,
when the handoff was complete, Sprint and its Washington DC Network
Access Points began to carry Internet traffic, and by 1996, Sprint was the
world's largest carrier of Internet traffic.[119]

The research and academic community continues to develop and use
advanced networks such as Internet2 in the United States and JANET in the
United Kingdom.

Transition towards the Internet

The term "internet" was reflected in the first RFC published on the TCP
protocol (RFC 675:[120] Internet Transmission Control Program, December
1974) as a short form of internetworking, when the two terms were used
interchangeably. In general, an internet was a collection of networks linked
by a common protocol. In the time period when the ARPANET was connected
to the newly formed NSFNET project in the late 1980s, the term was used as
the name of the network, Internet, being the large and global TCP/IP network.[121]

Opening the Internet and the fiber optic backbone to corporations and
consumers increased demand for network capacity. The expense and delay
of laying new fiber led providers to test a fiber bandwidth expansion
alternative that had been pioneered in the late 1970s by Optelecom using
"interactions between light and matter, such as lasers and optical devices
used for optical amplification and wave mixing".[122] This technology became
known as wavelength-division multiplexing (WDM). Bell Labs deployed a 4-channel
WDM system in 1995.[123] To develop a mass capacity (dense) WDM
system, Optelecom and its former head of Light Systems Research, David R.
Huber formed a new venture, Ciena Corp., that deployed the world's first
dense WDM system on the Sprint fiber network in June 1996. [123] This was
referred to as the real start of optical networking. [124]

As interest in networking grew out of needs for collaboration, exchange of
data, and access to remote computing resources, Internet technologies spread
throughout the rest of the world. The hardware-agnostic approach in TCP/IP
supported the use of existing network infrastructure, such as
the International Packet Switched Service (IPSS) X.25 network, to carry
Internet traffic.

Many sites unable to link directly to the Internet created simple gateways for
the transfer of electronic mail, the most important application of the time.
Sites with only intermittent connections used UUCP or FidoNet and relied on
the gateways between these networks and the Internet. Some gateway
services went beyond simple mail peering, such as allowing access to File
Transfer Protocol (FTP) sites via UUCP or mail.[125]

Finally, routing technologies were developed for the Internet to remove the
remaining centralized routing aspects. The Exterior Gateway Protocol (EGP)
was replaced by a new protocol, the Border Gateway Protocol (BGP). This
provided a meshed topology for the Internet and reduced the centric
architecture which ARPANET had emphasized. In 1994, Classless Inter-
Domain Routing (CIDR) was introduced to support better conservation of
address space which allowed use of route aggregation to decrease the size
of routing tables.[126]
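The effect of CIDR route aggregation can be illustrated with Python's standard `ipaddress` module (a modern convenience, obviously not the routers' implementation): four contiguous /24 prefixes collapse into a single /22 advertisement.

```python
import ipaddress

# Four contiguous /24 prefixes that would each have needed
# their own routing-table entry under classful addressing.
routes = [ipaddress.ip_network(p) for p in (
    "10.1.0.0/24", "10.1.1.0/24", "10.1.2.0/24", "10.1.3.0/24")]

# CIDR lets a router advertise them as a single aggregate route.
aggregated = list(ipaddress.collapse_addresses(routes))
print(aggregated)  # [IPv4Network('10.1.0.0/22')]
```

One aggregate entry now stands in for four, which is exactly how CIDR shrinks routing tables.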

Optical networking

The MOS transistor underpinned the rapid growth of telecommunication
bandwidth over the second half of the 20th century.[127] To address the need
for transmission capacity beyond that provided by radio, satellite and analog
copper telephone lines, engineers developed optical
communications systems based on fiber optic cables powered
by lasers and optical amplifier techniques.

The concept of lasing arose from a 1917 paper by Albert Einstein, "On the
Quantum Theory of Radiation". Einstein expanded upon a conversation
with Max Planck on how atoms absorb and emit light, part of a thought
process that, with input from Erwin Schrödinger, Werner Heisenberg and
others, gave rise to quantum mechanics. Specifically, in his quantum theory,
Einstein mathematically determined that light could be generated not only
by spontaneous emission, such as the light emitted by an incandescent
light or the Sun, but also by stimulated emission.

Forty years later, on November 13, 1957, Columbia University physics
student Gordon Gould first realized how to make light by stimulated emission
through a process of optical amplification. He coined the term LASER for this
technology: Light Amplification by Stimulated Emission of Radiation.[128]
Using Gould's light amplification method (patented as "Optically Pumped
Laser Amplifier"),[129] Theodore Maiman made the first working laser on May
16, 1960.[130]

Gould co-founded Optelecom in 1973 to commercialize his inventions in
optical fiber telecommunications,[131] just as Corning Glass was producing the
first commercial fiber optic cable in small quantities. Optelecom configured
its own fiber lasers and optical amplifiers into the first commercial optical
communication systems which it delivered to Chevron and the US Army
Missile Defense.[132] Three years later, GTE deployed the first optical
telephone system in 1977 in Long Beach, California. [133] By the early 1980s,
optical networks powered by lasers, LED and optical amplifier equipment
supplied by Bell Labs, NTT and Pirelli were used by select
universities and long-distance telephone providers.

TCP/IP goes global (1980s)

CERN and the European Internet

See also: Protocol Wars

In 1982, NORSAR/NDRE and Peter Kirstein's research group at University
College London (UCL) left the ARPANET and began to use TCP/IP over
SATNET.[102] There were 40 British academic research groups using UCL's link
to the ARPANET in 1975.[77][134]

Between 1984 and 1988, CERN began installation and operation of TCP/IP to
interconnect its major internal computer systems, workstations, PCs, and an
accelerator control system. CERN continued to operate a limited self-
developed system (CERNET) internally and several incompatible (typically
proprietary) network protocols externally. There was considerable resistance
in Europe towards more widespread use of TCP/IP, and the CERN TCP/IP
intranets remained isolated from the Internet until 1989, when a
transatlantic connection to Cornell University was established. [135][136][137]
The Computer Science Network (CSNET) began operation in 1981 to provide
networking connections to institutions that could not connect directly to
ARPANET. Its first international connection was to Israel in 1984. Soon after,
connections were established to computer science departments in Canada,
France, and Germany.[23]

In 1988, the first international connections to NSFNET were established by
France's INRIA,[138][139] and Piet Beertema at the Centrum Wiskunde &
Informatica (CWI) in the Netherlands.[140] Daniel Karrenberg, from CWI,
visited Ben Segal, CERN's TCP/IP coordinator, looking for advice about the
transition of EUnet, the European side of the UUCP Usenet network (much of
which ran over X.25 links), over to TCP/IP. The previous year, Segal had met
with Len Bosack from the then still small company Cisco about purchasing
some TCP/IP routers for CERN, and Segal was able to give Karrenberg advice
and forward him on to Cisco for the appropriate hardware. This expanded the
European portion of the Internet across the existing UUCP networks.
The NORDUnet connection to NSFNET was in place soon after, providing open
access for university students in Denmark, Finland, Iceland, Norway, and
Sweden.[141] In January 1989, CERN opened its first external TCP/IP
connections.[142] This coincided with the creation of Réseaux IP Européens
(RIPE), initially a group of IP network administrators who met regularly to
carry out coordination work together. Later, in 1992, RIPE was formally
registered as a cooperative in Amsterdam.

The United Kingdom's national research and education
network (NREN), JANET, began operation in 1984 using the UK's Coloured
Book protocols and connected to NSFNET in 1989. In 1991, JANET adopted
Internet Protocol on the existing network.[143][144] The same year, Dai Davies
introduced Internet technology into the pan-European NREN, EuropaNet,
which was built on the X.25 protocol.[145][146] The European Academic and
Research Network (EARN) and RARE adopted IP around the same time, and
the European Internet backbone EBONE became operational in 1992.[135]

Nonetheless, for a period in the late 1980s and early 1990s, engineers,
organizations and nations were polarized over the issue of which standard,
the OSI model or the Internet protocol suite, would result in the best and most
robust computer networks.[100][147][148]

The link to the Pacific

South Korea set up a two-node domestic TCP/IP network in 1982, the System
Development Network (SDN), adding a third node the following year. SDN
was connected to the rest of the world in August 1983 using UUCP (Unix-to-
Unix-Copy); connected to CSNET in December 1984; [23] and formally
connected to the NSFNET in 1990.[149][150][151]

Japan, which had built the UUCP-based network JUNET in 1984, connected to
CSNET,[23] and later to NSFNET in 1989, marking the spread of the Internet to
Asia.

In Australia, ad hoc networking to ARPA and between Australian
universities formed in the late 1980s, based on various technologies such as
X.25, UUCPNet, and CSNET.[23] These were limited in their connection to
the global networks, due to the cost of making individual international UUCP
dial-up or X.25 connections. In 1989, Australian universities joined the push
towards using IP protocols to unify their networking
infrastructures. AARNet was formed in 1989 by the Australian Vice-
Chancellors' Committee and provided a dedicated IP based network for
Australia.

New Zealand adopted the UK's Coloured Book protocols as an interim
standard and established its first international IP connection to the U.S. in
1989.[152]

A "digital divide" emerges

Internet users in 2023 as a percentage of a country's population

Source: International Telecommunication Union.[153]

Main articles: Global digital divide and Digital divide


Fixed broadband Internet subscriptions in 2012
as a percentage of a country's population

Source: International Telecommunication Union.[154]

Mobile broadband Internet subscriptions in 2012
as a percentage of a country's population

Source: International Telecommunication Union.[155]

While developed countries with technological infrastructures were joining the
Internet, developing countries began to experience a digital
divide separating them from the Internet. On an essentially continental basis,
they built organizations for Internet resource administration and to share
operational experience, which enabled more transmission facilities to be put
into place.

Africa

At the beginning of the 1990s, African countries relied upon X.25 IPSS and
2400 baud modem UUCP links for international and internetwork computer
communications.

In August 1995, InfoMail Uganda, Ltd., a privately held firm in Kampala now
known as InfoCom, and NSN Network Services of Avon, Colorado, sold in
1997 and now known as Clear Channel Satellite, established Africa's first
native TCP/IP high-speed satellite Internet services. The data connection was
originally carried by a C-Band RSCC Russian satellite which connected
InfoMail's Kampala offices directly to NSN's MAE-West point of presence using
a private network from NSN's leased ground station in New Jersey. InfoCom's
first satellite connection was just 64 kbit/s, serving a Sun host computer and
twelve US Robotics dial-up modems.

In 1996, a USAID-funded project, the Leland Initiative, started work on
developing full Internet connectivity for the continent. Guinea,
Mozambique, Madagascar and Rwanda gained satellite earth stations in
1997, followed by Ivory Coast and Benin in 1998.

Africa is building an Internet infrastructure. AFRINIC, headquartered
in Mauritius, manages IP address allocation for the continent. As with other
Internet regions, there is an operational forum, the Internet Community of
Operational Networking Specialists.[156]

There are many programs to provide high-performance transmission plant,
and the western and southern coasts have undersea optical cable. High-speed
cables join North Africa and the Horn of Africa to intercontinental cable
systems. Undersea cable development is slower for East Africa; the original
joint effort between New Partnership for Africa's Development (NEPAD) and
the East Africa Submarine System (Eassy) has broken off and may become
two efforts.[157]

Asia and Oceania

The Asia Pacific Network Information Centre (APNIC), headquartered in
Australia, manages IP address allocation for the region. APNIC sponsors
an operational forum, the Asia-Pacific Regional Internet Conference on
Operational Technologies (APRICOT).[158]

In South Korea, VDSL, a last mile technology developed in the 1990s by
NextLevel Communications, connected corporate and consumer copper-based
telephone lines to the Internet.[159]

The People's Republic of China established its first TCP/IP college
network, Tsinghua University's TUNET, in 1991. The PRC went on to make its
first global Internet connection in 1994, between the Beijing Electro-
Spectrometer Collaboration and Stanford University's Linear Accelerator
Center. However, China went on to create its own digital divide by
implementing a country-wide content filter.[160]
Japan hosted the annual meeting of the Internet Society, INET'92, in Kobe.
Singapore developed TECHNET in 1990, and Thailand gained a global
Internet connection between Chulalongkorn University and UUNET in 1992.
[161]

Latin America

As with the other regions, the Latin American and Caribbean Internet
Addresses Registry (LACNIC) manages the IP address space and other
resources for its area. LACNIC, headquartered in Uruguay, operates DNS root,
reverse DNS, and other key services.

1990–2003: Rise of the global Internet, Web 1.0

Main articles: History of the World Wide Web and Information Age

Development

Initially, as with its predecessor networks, the system that would evolve into
the Internet was primarily for government and government body use.
Although commercial use was forbidden, the exact definition of commercial
use was unclear and subjective. UUCPNet and the X.25 IPSS had no such
restrictions, which eventually led to the official barring of UUCPNet from
using ARPANET and NSFNET connections.

Number of Internet hosts worldwide: 1969–2019

Source: Internet Systems Consortium.[162]


As a result, during the late 1980s, the first Internet service provider (ISP)
companies were formed. PSINet, UUNET, Netcom, and Portal Software
provided service to the regional research networks and offered alternate
network access, UUCP-based email, and Usenet News to the public. In 1989,
MCI Mail became the first commercial email
provider to get an experimental gateway to the Internet. [163] The first
commercial dialup ISP in the United States was The World, which opened in
1989.[164]

In 1992, the U.S. Congress passed the Scientific and Advanced-Technology
Act, 42 U.S.C. § 1862(g), which allowed NSF to support access by the
research and education communities to computer networks which were not
used exclusively for research and education purposes, thus permitting
NSFNET to interconnect with commercial networks. [165][166] This caused
controversy within the research and education community, who were
concerned commercial use of the network might lead to an Internet that was
less responsive to their needs, and within the community of commercial
network providers, who felt that government subsidies were giving an unfair
advantage to some organizations.[167]

By 1990, ARPANET's goals had been fulfilled, new networking
technologies exceeded the original scope, and the project came to a close.
New network service providers including PSINet, Alternet, CERFNet, ANS
CO+RE, and many others were offering network access to commercial
customers. NSFNET was no longer the de facto backbone and exchange point
of the Internet. The Commercial Internet eXchange (CIX), Metropolitan Area
Exchanges (MAEs), and later Network Access Points (NAPs) were becoming
the primary interconnections between many networks. The final restrictions
on carrying commercial traffic ended on April 30, 1995, when the National
Science Foundation ended its sponsorship of the NSFNET Backbone
Service.[168][169] NSF provided initial support for the NAPs and interim support to help
the regional research and education networks transition to commercial ISPs.
NSF also sponsored the very high speed Backbone Network Service (vBNS)
which continued to provide support for the supercomputing centers and
research and education in the United States. [170]

An event held on 11 January 1994, The Superhighway Summit at UCLA's
Royce Hall, was the "first public conference bringing together all of the major
industry, government and academic leaders in the field [and] also began the
national dialogue about the Information Superhighway and its implications".[171]

Internet use in wider society

The invention of the World Wide Web by Tim Berners-Lee at CERN, as an
application on the Internet,[172] brought many social and commercial uses to
what was, at the time, a network of networks for academic and research
institutions.[173][174] The Web opened to the public in 1991 and began to enter
general use in 1993–94, when websites for everyday use started to become
available.[175]

Stamped envelope of Russian Post issued in 1993 with stamp and graphics
dedicated to the first Russian underwater digital optic cable laid in 1993 by
Rostelecom from Kingisepp to Copenhagen

During the first decade or so of the public Internet, the immense changes it
would eventually enable in the 2000s were still nascent. For context, mobile
cellular devices ("smartphones" and other cellular devices), which today
provide near-universal access, were used for business and were not yet a
routine household item owned by parents and children worldwide. Social
media in the modern sense had yet to come into existence, laptops were
bulky, and most households did not have computers. Data rates were slow
and most people lacked the means to record or digitize video; media storage
was transitioning slowly from analog tape to digital optical discs (DVD and,
to an extent still, floppy disc to CD). Enabling technologies used from the
early 2000s, such as PHP, modern JavaScript and Java, AJAX, HTML 4 (and
its emphasis on CSS), and various software frameworks, which simplified
and sped up web development, largely awaited invention and their eventual
widespread adoption.

The Internet was widely used for mailing lists, emails, creating and
distributing maps with tools like MapQuest, e-commerce and early
popular online shopping (Amazon and eBay for example), online
forums and bulletin boards, and personal websites and blogs, and use was
growing rapidly, but by more modern standards the systems used were
static and lacked widespread social engagement. A number of events in the
early 2000s would change the Internet from a communications technology
into a key part of global society's infrastructure.

Typical design elements of these "Web 1.0" era websites included: [176] Static
pages instead of dynamic HTML;[177] content served from filesystems instead
of relational databases; pages built using Server Side Includes or CGI instead
of a web application written in a dynamic programming language; HTML 3.2-
era structures such as frames and tables to create page layouts;
online guestbooks; overuse of GIF buttons and similar small graphics
promoting particular items;[178] and HTML forms sent via email. (Support
for server side scripting was rare on shared servers, so the usual feedback
mechanism was via email, using mailto forms and the visitor's email program.[179])
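A minimal sketch of the era's CGI approach described above, with hypothetical names throughout: the web server passes request data in environment variables and the script writes an HTTP response to standard output, assembling static-style HTML on every request.

```python
# Illustrative "Web 1.0"-era CGI-style handler (names are hypothetical).
# A real server would exec this script and pipe stdout back to the browser.
from urllib.parse import parse_qs

def render(environ: dict) -> str:
    # The server conveys the request via environment variables such as
    # QUERY_STRING; the script must emit headers, a blank line, then HTML.
    params = parse_qs(environ.get("QUERY_STRING", ""))
    name = params.get("name", ["visitor"])[0]
    return ("Content-Type: text/html\r\n\r\n"
            f"<html><body><h1>Guestbook</h1><p>Hello, {name}!</p></body></html>")

print(render({"QUERY_STRING": "name=Alice"}))
```

Every request re-runs the whole script, which is why CGI sites of the period felt slow compared with later web applications that kept a process resident.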

During the period 1997 to 2001, the first speculative
investment bubble related to the Internet took place, in which "dot-com"
companies (referring to the ".com" top level domain used by businesses)
were propelled to exceedingly high valuations as investors rapidly
stoked stock values, followed by a market crash; this was the first dot-com
bubble. However, this only temporarily slowed enthusiasm and growth, which
quickly recovered and continued to grow.

The history of the World Wide Web up to around 2004 was retrospectively
named and described by some as "Web 1.0".[180]

IPv6

In the final stage of IPv4 address exhaustion, the last IPv4 address block was
assigned in January 2011 at the level of the regional Internet registries.[181]
IPv4 uses 32-bit addresses, which limits the address space to 2^32
addresses, i.e. 4294967296 addresses.[111] IPv4 is in the process of
replacement by IPv6, its successor, which uses 128-bit addresses, providing
2^128 addresses, i.e. 340282366920938463463374607431768211456,[182] a
vastly increased address space. The shift to IPv6 is expected to take a long
time to complete.[181]
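The address-space figures quoted above follow directly from the address widths:

```python
# Address-space sizes implied by 32-bit (IPv4) and 128-bit (IPv6) addresses.
ipv4_space = 2 ** 32
ipv6_space = 2 ** 128

print(ipv4_space)  # 4294967296
print(ipv6_space)  # 340282366920938463463374607431768211456
```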

2004–present: Web 2.0, global ubiquity, social media

Main articles: Web 2.0 and History of the World Wide Web § Web 2.0

The rapid technical advances that would propel the Internet into its place as
a social system, which has completely transformed the way humans interact
with each other, took place during a relatively short period from around 2005
to 2010, coinciding roughly with the point in the late 2000s when the number
of IoT devices surpassed the number of humans alive. They included:

 The call to "Web 2.0" in 2004 (first suggested in 1999).

 Accelerating adoption and commoditization among households of, and
familiarity with, the necessary hardware (such as computers).

 Accelerating storage technology and data access speeds – hard drives
emerged and took over from far smaller, slower floppy discs, and grew
from megabytes to gigabytes (and by around 2010, terabytes); RAM grew
from hundreds of kilobytes to gigabytes as typical amounts on a system;
and Ethernet, the enabling technology for TCP/IP, moved from common
speeds of kilobits to tens of megabits per second, to gigabits per second.

 High speed Internet and wider coverage of data connections, at lower
prices, allowing larger traffic rates, more reliable and simpler traffic, and
traffic from more locations.

 The public's accelerating perception of the potential of computers to
create new means and approaches to communication, the emergence
of social media and websites such as Twitter and Facebook to their
later prominence, and global collaborations such as Wikipedia (which
existed before but gained prominence as a result).

 The mobile device revolution, particularly with smartphones and tablet
computers becoming widespread, which began to provide easy access
to the Internet to much of human society of all ages, in their daily lives,
and allowed them to share, discuss, and continually update, inquire,
and respond.

 Non-volatile RAM rapidly grew in size and reliability, and decreased in
price, becoming a commodity capable of enabling high levels of
computing activity on these small handheld devices as well as
solid-state drives (SSD).

 An emphasis on power-efficient processor and device design, rather
than purely high processing power; one of the beneficiaries of this
was Arm, a British company which had focused since the 1980s on
powerful but simple, low-cost microprocessors. The ARM architecture
family rapidly gained dominance in the market for mobile and
embedded devices.
Web 2.0

The term "Web 2.0" describes websites that emphasize user-generated
content (including user-to-user interaction), usability, and interoperability. It
first appeared in a January 1999 article called "Fragmented Future" written
by Darcy DiNucci, a consultant on electronic information design, where she
wrote:[183][184][185][186]

The Web we know now, which loads into a browser window in essentially
static screenfuls, is only an embryo of the Web to come. The first
glimmerings of Web 2.0 are beginning to appear, and we are just starting to
see how that embryo might develop. The Web will be understood not as
screenfuls of text and graphics but as a transport mechanism, the ether
through which interactivity happens. It will [...] appear on your computer
screen, [...] on your TV set [...] your car dashboard [...] your cell phone [...]
hand-held game machines [...] maybe even your microwave oven.

The term resurfaced during 2002–2004,[187][188][189][190] and gained prominence
in late 2004 following presentations by Tim O'Reilly and Dale Dougherty at
the first Web 2.0 Conference. In their opening remarks, John Battelle and Tim
O'Reilly outlined their definition of the "Web as Platform", where software
applications are built upon the Web as opposed to upon the desktop. The
unique aspect of this migration, they argued, is that "customers are building
your business for you".[191] They argued that the activities of
users generating content (in the form of ideas, text, videos, or pictures)
could be "harnessed" to create value.

"Web 2.0" does not refer to an update to any technical specification, but
rather to cumulative changes in the way Web pages are made and used.
"Web 2.0" describes an approach, in which sites focus substantially upon
allowing users to interact and collaborate with each other in a social
media dialogue as creators of user-generated content in a virtual community,
in contrast to Web sites where people are limited to the passive viewing
of content. Examples of Web 2.0 include social networking
services, blogs, wikis, folksonomies, video sharing sites, hosted
services, Web applications, and mashups.[192] Terry Flew, in his 3rd edition
of New Media, described what he believed to characterize the differences
between Web 1.0 and Web 2.0:

[The] move from personal websites to blogs and blog site aggregation, from
publishing to participation, from web content as the outcome of large up-
front investment to an ongoing and interactive process, and from content
management systems to links based on tagging (folksonomy). [193]

This era saw several household names gain prominence through their
community-oriented operation – YouTube, Twitter, Facebook, Reddit and
Wikipedia being some examples.

Telephone networks convert to VoIP

Telephone systems have been slowly adopting voice over IP since 2003. Early
experiments proved that voice could be converted to digital packets and sent
over the Internet, then collected and converted back to analog
voice.[194][195][196]
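The packetize-and-reassemble idea can be shown with a toy sketch; sequence-numbered lists stand in for a real codec and RTP stack, and all names here are illustrative.

```python
from typing import List, Tuple

def packetize(samples: List[int], size: int) -> List[Tuple[int, List[int]]]:
    """Split a stream of digitized voice samples into sequence-numbered packets."""
    return [(seq, samples[i:i + size])
            for seq, i in enumerate(range(0, len(samples), size))]

def reassemble(packets: List[Tuple[int, List[int]]]) -> List[int]:
    """Reorder packets by sequence number and concatenate their payloads."""
    stream: List[int] = []
    for _, payload in sorted(packets):
        stream.extend(payload)
    return stream

audio = list(range(10))   # stand-in for digitized voice samples
packets = packetize(audio, 4)
packets.reverse()         # packets may arrive out of order on the Internet
assert reassemble(packets) == audio
```

The sequence numbers are what let the receiver rebuild the original stream even when the network delivers packets out of order.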

The mobile revolution

Main articles: History of mobile phones, Mobile web, and Responsive web
design

The process of change that generally coincided with Web 2.0 was itself
greatly accelerated and transformed only a short time later by the increasing
growth in mobile devices. This mobile revolution meant that computers in
the form of smartphones became something many people used, took with
them everywhere, communicated with, used for photographs and videos
they instantly shared or to shop or seek information "on the move" – and
used socially, as opposed to items on a desk at home or just used for work.

Location-based services, services using location and other sensor
information, and crowdsourcing (frequently but not always location based),
became common, with posts tagged by location, or websites and services
becoming location aware. Mobile-targeted websites (such as
"m.example.com") became common, designed especially for the new
devices used. Netbooks, ultrabooks, widespread 4G and Wi-Fi, and mobile
chips capable of running at nearly the power of desktops from only a few
years before on far lower power usage, became enablers of this stage of
Internet development, and the term "App" (short for "application program")
became popularized, as did the "App store".

This "mobile revolution" has allowed people to have a nearly unlimited
amount of information at hand at all times. With the ability to access the
internet from cell phones came a change in the way media was consumed.
Media consumption statistics show that over half of media consumption
among those aged 18 to 34 took place on a smartphone.[197]
Networking in outer space

Main article: Interplanetary Internet

The first Internet link into low Earth orbit was established on January 22,
2010, when astronaut T. J. Creamer posted the first unassisted update to his
Twitter account from the International Space Station, marking the extension
of the Internet into space.[198] (Astronauts at the ISS had used email and
Twitter before, but these messages had been relayed to the ground through
a NASA data link before being posted by a human proxy.) This personal Web
access, which NASA calls the Crew Support LAN, uses the space station's
high-speed Ku band microwave link. To surf the Web, astronauts can use a
station laptop computer to control a desktop computer on Earth, and they
can talk to their families and friends on Earth using Voice over IP equipment.[199]

Communication with spacecraft beyond Earth orbit has traditionally been
over point-to-point links through the Deep Space Network. Each such data
link must be manually scheduled and configured. In the late 1990s NASA and
Google began working on a new network protocol, delay-tolerant
networking (DTN), which automates this process, allows networking of
spaceborne transmission nodes, and takes into account that spacecraft can
temporarily lose contact when they move behind the Moon or planets, or
when space weather disrupts the connection. Under such conditions, DTN
stores and retransmits data packets instead of dropping them, as standard
TCP/IP does. NASA conducted the first field test of
what it calls the "deep space internet" in November 2008. [200] Testing of DTN-
based communications between the International Space Station and Earth
(now termed disruption-tolerant networking) has been ongoing since March
2009, and was scheduled to continue until March 2014. [201]
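The store-and-forward behavior described above can be sketched in a few lines. This is a hypothetical illustration of the bundle-queuing idea only, not the actual DTN Bundle Protocol; the `DTNNode` class and the bundle name are invented for the example.

```python
# Illustrative sketch of DTN's core idea: when the link is down, bundles are
# stored rather than dropped, and forwarded once contact resumes.
from collections import deque

class DTNNode:
    def __init__(self):
        self.queue = deque()   # bundles awaiting a contact window
        self.link_up = False   # e.g. spacecraft currently behind the Moon
        self.delivered = []

    def send(self, bundle):
        # Always enqueue; delivery only happens during a contact window.
        self.queue.append(bundle)
        self.flush()

    def flush(self):
        # Forward queued bundles while the link is available.
        while self.link_up and self.queue:
            self.delivered.append(self.queue.popleft())

node = DTNNode()
node.send("telemetry-1")   # link down: the bundle is stored, not dropped
node.link_up = True        # contact window opens again
node.flush()
print(node.delivered)      # ['telemetry-1']
```

A plain TCP/IP connection in the same situation would time out and lose the data; here the bundle simply waits in the queue until the next contact window.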

This network technology is supposed to ultimately enable missions that
involve multiple spacecraft where reliable inter-vessel communication might
take precedence over vessel-to-Earth downlinks. According to a February
2011 statement by Google's Vint Cerf, the so-called "bundle protocols" have
been uploaded to NASA's EPOXI mission spacecraft (which is in orbit around
the Sun) and communication with Earth has been tested at a distance of
approximately 80 light seconds.[202]

Internet governance

Main article: Internet governance


As a globally distributed network of voluntarily interconnected autonomous
networks, the Internet operates without a central governing body. Each
constituent network chooses the technologies and protocols it deploys from
the technical standards that are developed by the Internet Engineering Task
Force (IETF).[203] However, successful interoperation of many networks
requires certain parameters that must be common throughout the network.
For managing such parameters, the Internet Assigned Numbers
Authority (IANA) oversees the allocation and assignment of various technical
identifiers.[204] In addition, the Internet Corporation for Assigned Names and
Numbers (ICANN) provides oversight and coordination for the two
principal name spaces in the Internet, the Internet Protocol address
space and the Domain Name System.

NIC, InterNIC, IANA, and ICANN

The IANA function was originally performed by USC Information Sciences
Institute (ISI), and it delegated portions of this responsibility with respect to
numeric network and autonomous system identifiers to the Network
Information Center (NIC) at Stanford Research Institute (SRI International)
in Menlo Park, California. ISI's Jonathan Postel managed the IANA, served as
RFC Editor and performed other key roles until his death in 1998. [205]

As the early ARPANET grew, hosts were referred to by names, and a
HOSTS.TXT file would be distributed from SRI International to each host on
the network. As the network grew, this became cumbersome. A technical
solution came in the form of the Domain Name System, created by ISI's Paul
Mockapetris in 1983.[206] The Defense Data Network—Network Information
Center (DDN-NIC) at SRI handled all registration services, including the top-
level domains (TLDs) of .mil, .gov, .edu, .org, .net, .com and .us, root
nameserver administration and Internet number assignments under a United
States Department of Defense contract.[204] In 1991, the Defense Information
Systems Agency (DISA) awarded the administration and maintenance of
DDN-NIC (managed by SRI up until this point) to Government Systems, Inc.,
who subcontracted it to the small private-sector Network Solutions, Inc.[207]
[208]
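The change described above, from one flat HOSTS.TXT file copied to every host to hierarchical lookup delegated one label at a time, can be sketched as follows. The host names and addresses here are purely illustrative, not historical records.

```python
# Flat model: a single table distributed to every host on the network.
HOSTS_TXT = {"SRI-NIC": "10.0.0.73"}   # illustrative entry

# DNS model: each zone answers only for its own label and delegates the rest.
ROOT = {"edu": {"isi": {"venera": "128.9.0.32"}}}  # illustrative zone data

def dns_lookup(name, zone=ROOT):
    node = zone
    for label in reversed(name.split(".")):  # resolve from the rightmost label
        node = node[label]                   # follow one delegation per label
    return node

print(dns_lookup("venera.isi.edu"))          # '128.9.0.32'
```

The flat table had to be re-copied to every host on each change; in the hierarchical model each zone can be updated independently by its own administrator, which is what made the system scale.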

The increasing cultural diversity of the Internet also posed administrative
challenges for centralized management of the IP addresses. In October 1992,
the Internet Engineering Task Force (IETF) published RFC 1366, [209] which
described the "growth of the Internet and its increasing globalization" and
set out the basis for an evolution of the IP registry process, based on a
regionally distributed registry model. This document stressed the need for a
single Internet number registry to exist in each geographical region of the
world (which would be of "continental dimensions"). Registries would be
"unbiased and widely recognized by network providers and subscribers"
within their region. The RIPE Network Coordination Centre (RIPE NCC) was
established as the first RIR in May 1992. The second RIR, the Asia Pacific
Network Information Centre (APNIC), was established in Tokyo in 1993, as a
pilot project of the Asia Pacific Networking Group. [210]

Since at this point in history most of the growth on the Internet was coming
from non-military sources, it was decided that the Department of
Defense would no longer fund registration services outside of the .mil TLD. In
1993 the U.S. National Science Foundation, after a competitive bidding
process in 1992, created the InterNIC to manage the allocations of addresses
and management of the address databases, and awarded the contract to
three organizations. Registration Services would be provided by Network
Solutions; Directory and Database Services would be provided by AT&T; and
Information Services would be provided by General Atomics.[211]

Over time, after consultation with the IANA, the IETF, RIPE NCC, APNIC, and
the Federal Networking Council (FNC), the decision was made to separate the
management of domain names from the management of IP numbers.[210]
Following the examples of RIPE NCC and APNIC, it was recommended that
management of IP address space then administered by the InterNIC should
be under the control of those that use it, specifically the ISPs, end-user
organizations, corporate entities, universities, and individuals. As a result,
the American Registry for Internet Numbers (ARIN) was established in
December 1997 as an independent, not-for-profit corporation by direction of
the National Science Foundation and became the third Regional Internet
Registry.[212]

In 1998, both the IANA and remaining DNS-related InterNIC functions were
reorganized under the control of ICANN, a California non-profit
corporation contracted by the United States Department of Commerce to
manage a number of Internet-related tasks. As these tasks involved technical
coordination for two principal Internet name spaces (DNS names and IP
addresses) created by the IETF, ICANN also signed a memorandum of
understanding with the IAB to define the technical work to be carried out by
the Internet Assigned Numbers Authority.[213] The management of Internet
address space remained with the regional Internet registries, which
collectively were defined as a supporting organization within the ICANN
structure.[214] ICANN provides central coordination for the DNS system,
including policy coordination for the split registry / registrar system, with
competition among registry service providers to serve each top-level-domain
and multiple competing registrars offering DNS services to end-users.

Internet Engineering Task Force

The Internet Engineering Task Force (IETF) is the largest and most visible of
several loosely related ad-hoc groups that provide technical direction for the
Internet, including the Internet Architecture Board (IAB), the Internet
Engineering Steering Group (IESG), and the Internet Research Task
Force (IRTF).

The IETF is a loosely self-organized group of international volunteers who
contribute to the engineering and evolution of Internet technologies. It is the
principal body engaged in the development of new Internet standard
specifications. Much of the work of the IETF is organized into Working
Groups. Standardization efforts of the Working Groups are often adopted by
the Internet community, but the IETF does not control or patrol the Internet.
[215][216]

The IETF grew out of quarterly meetings with U.S. government-funded
researchers, starting in January 1986. Non-government representatives were
invited by the fourth IETF meeting in October 1986. The concept of Working
Groups was introduced at the fifth meeting in February 1987. The seventh
meeting in July 1987 was the first meeting with more than one hundred
attendees. In 1992, the Internet Society, a professional membership society,
was formed and IETF began to operate under it as an independent
international standards body. The first IETF meeting outside of the United
States was held in Amsterdam, the Netherlands, in July 1993. Today, the IETF
meets three times per year and attendance has been as high as ca. 2,000
participants. Typically one in three IETF meetings is held in Europe or Asia.
The proportion of non-US attendees is typically around 50%, even at meetings held
in the United States.[215]

The IETF is not a legal entity, has no governing board, no members, and no
dues. The closest status resembling membership is being on an IETF or
Working Group mailing list. IETF volunteers come from all over the world and
from many different parts of the Internet community. The IETF works closely
with and under the supervision of the Internet Engineering Steering
Group (IESG)[217] and the Internet Architecture Board (IAB).[218] The Internet
Research Task Force (IRTF) and the Internet Research Steering Group (IRSG),
peer activities to the IETF and IESG under the general supervision of the IAB,
focus on longer-term research issues.[215][219]

RFCs

RFCs are the main documentation for the work of the IAB, IESG, IETF, and
IRTF.[220] Originally intended as requests for comments, RFC 1, "Host
Software", was written by Steve Crocker at UCLA in April 1969. These
technical memos documented aspects of ARPANET development. They were
edited by Jon Postel, the first RFC Editor.[215][221]

RFCs cover a wide range of information from proposed standards, draft
standards, full standards, best practices, experimental protocols, history, and
other informational topics.[222] RFCs can be written by individuals or informal
groups of individuals, but many are the product of a more formal Working
Group. Drafts are submitted to the IESG either by individuals or by the
Working Group Chair. An RFC Editor, appointed by the IAB, separate from
IANA, and working in conjunction with the IESG, receives drafts from the IESG
and edits, formats, and publishes them. Once an RFC is published, it is never
revised. If the standard it describes changes or its information becomes
obsolete, the revised standard or updated information will be re-published as
a new RFC that "obsoletes" the original.[215][221]

The Internet Society

The Internet Society (ISOC) is an international, nonprofit organization
founded during 1992 "to assure the open development, evolution and use of
the Internet for the benefit of all people throughout the world". With offices
near Washington, DC, US, and in Geneva, Switzerland, ISOC has a
membership base comprising more than 80 organizational and more than
50,000 individual members. Members also form "chapters" based on either
common geographical location or special interests. There are currently more
than 90 chapters around the world.[223]

ISOC provides financial and organizational support to and promotes the work
of the standards-setting bodies for which it is the organizational home:
the Internet Engineering Task Force (IETF), the Internet Architecture
Board (IAB), the Internet Engineering Steering Group (IESG), and the Internet
Research Task Force (IRTF). ISOC also promotes understanding and
appreciation of the Internet model of open, transparent processes and
consensus-based decision-making.[224]

Globalization and Internet governance in the 21st century


Since the 1990s, the Internet's governance and organization have been of
global importance to governments, commerce, civil society, and individuals.
The organizations which held control of certain technical aspects of the
Internet were the successors of the old ARPANET oversight and the current
decision-makers in the day-to-day technical aspects of the network. While
recognized as the administrators of certain aspects of the Internet, their roles
and their decision-making authority are limited and subject to increasing
international scrutiny and objections. These objections led ICANN to sever
its relationship first with the University of Southern California in 2000,[225]
and then, in September 2009, to gain autonomy from the US government
through the ending of its longstanding agreements,
although some contractual obligations with the U.S. Department of
Commerce continued.[226][227][228] Finally, on October 1, 2016, ICANN ended its
contract with the United States Department of Commerce National
Telecommunications and Information Administration (NTIA), allowing
oversight to pass to the global Internet community. [229]

The IETF, with financial and organizational support from the Internet Society,
continues to serve as the Internet's ad-hoc standards body and
issues Request for Comments.

In November 2005, the World Summit on the Information Society, held
in Tunis, called for an Internet Governance Forum (IGF) to be convened
by the United Nations Secretary-General. The IGF opened an ongoing, non-
binding conversation among stakeholders representing governments, the
private sector, civil society, and the technical and academic communities
about the future of Internet governance. The first IGF meeting was held in
October/November 2006, with follow-up meetings held annually thereafter.[230]
Since WSIS, the term "Internet governance" has been broadened beyond
narrow technical concerns to include a wider range of Internet-related policy
issues.[231][232]

Tim Berners-Lee, inventor of the web, was becoming concerned about
threats to the web's future, and in November 2009 at the IGF in Washington
DC launched the World Wide Web Foundation (WWWF) to campaign to make
the web a safe and empowering tool for the good of humanity with access to
all.[233][234] In November 2019 at the IGF in Berlin, Berners-Lee and the WWWF
went on to launch the Contract for the Web, a campaign initiative to
persuade governments, companies and citizens to commit to nine principles
to stop "misuse" with the warning "If we don't act now - and act together - to
prevent the web being misused by those who want to exploit, divide and
undermine, we are at risk of squandering" (its potential for good). [235]

Politicization of the Internet

Due to its prominence and immediacy as an effective means of mass
communication, the Internet has also become more politicized as it has
grown. This has in turn led discourses and activities that would once have
taken place in other ways to migrate to being mediated by the Internet.

Examples include political activities such as public protest and canvassing of
support and votes, but also:

 The spreading of ideas and opinions;

 Recruitment of followers, and "coming together" of members of the
public, for ideas, products, and causes;

 Providing and widely distributing and sharing information that might be
deemed sensitive or relate to whistleblowing (and efforts by specific
countries to prevent this by censorship);

 Criminal activity and terrorism (and resulting law enforcement use,
together with its facilitation by mass surveillance);

 Politically motivated fake news.

Net neutrality

Main article: Net neutrality

On April 23, 2014, the Federal Communications Commission (FCC) was
reported to be considering a new rule that would permit Internet service
providers to offer content providers a faster track to send content, thus
reversing their earlier net neutrality position.[236][237][238] A possible solution to
net neutrality concerns may be municipal broadband, according to Professor
Susan Crawford, a legal and technology expert at Harvard Law School.[239] On
May 15, 2014, the FCC decided to consider two options regarding Internet
services: first, permit fast and slow broadband lanes, thereby compromising
net neutrality; and second, reclassify broadband as a telecommunication
service, thereby preserving net neutrality.[240][241] On November 10,
2014, President Obama recommended the FCC reclassify broadband Internet
service as a telecommunications service in order to preserve net neutrality.[242][243][244]
On January 16, 2015, Republicans presented legislation, in the form
of a U.S. Congress HR discussion draft bill, that makes concessions to net
neutrality but prohibits the FCC from accomplishing the goal or enacting any
further regulation affecting Internet service providers (ISPs).[245][246] On
January 31, 2015, AP News reported that the FCC will present the notion of
applying ("with some caveats") Title II (common carrier) of
the Communications Act of 1934 to the internet in a vote expected on
February 26, 2015.[247][248][249][250][251] Adoption of this notion would reclassify
internet service from one of information to one
of telecommunications[252] and, according to Tom Wheeler, chairman of the
FCC, ensure net neutrality.[253][254] The FCC is expected to enforce net
neutrality in its vote, according to The New York Times.[255][256]

On February 26, 2015, the FCC ruled in favor of net neutrality by


applying Title II (common carrier) of the Communications Act of
1934 and Section 706 of the Telecommunications Act of 1996 to the Internet.[257][258][259]
The FCC chairman, Tom Wheeler, commented, "This is no more a
plan to regulate the Internet than the First Amendment is a plan to regulate
free speech. They both stand for the same concept." [260]

On March 12, 2015, the FCC released the specific details of the net neutrality
rules.[261][262][263] On April 13, 2015, the FCC published the final rule on its new
"Net Neutrality" regulations.[264][265]

On December 14, 2017, the FCC repealed their March 12, 2015 decision by a
3–2 vote regarding net neutrality rules.[266]

Use and culture

Email and Usenet


Email has often been called the killer application of the Internet. It predates
the Internet, and was a crucial tool in creating it. Email started in 1965 as a
way for multiple users of a time-sharing mainframe computer to
communicate. Although the history is undocumented, among the first
systems to have such a facility were the System Development
Corporation (SDC) Q32 and the Compatible Time-Sharing System (CTSS) at
MIT.[267]

The ARPANET computer network made a large contribution to the evolution
of electronic mail. An experimental inter-system mail transfer took place on
the ARPANET shortly after its creation.[268] In 1971 Ray Tomlinson created what
was to become the standard Internet electronic mail addressing format,
using the @ sign to separate mailbox names from host names.[269]
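Tomlinson's mailbox@host convention can be split mechanically; a minimal sketch follows, with an address invented for illustration (it is not a historical address).

```python
# Split an address of the form mailbox@host into its two parts,
# mirroring the convention Tomlinson introduced in 1971.
def parse_address(addr):
    # rpartition splits on the last '@', so the host is everything after it.
    mailbox, _, host = addr.rpartition("@")
    return mailbox, host

print(parse_address("user@example-host"))  # ('user', 'example-host')
```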

A number of protocols were developed to deliver messages among groups of
time-sharing computers over alternative transmission systems, such
as UUCP and IBM's VNET email system. Email could be passed this way
between a number of networks, including ARPANET, BITNET and NSFNET, as
well as to hosts connected directly to other sites via UUCP. See the history of
the SMTP protocol.

In addition, UUCP allowed the publication of text files that could be read by
many others. The News software developed by Steve Daniel and Tom
Truscott in 1979 was used to distribute news and bulletin board-like
messages. This quickly grew into discussion groups, known as newsgroups,
on a wide range of topics. On ARPANET and NSFNET similar discussion
groups would form via mailing lists, discussing both technical issues and
more culturally focused topics (such as science fiction, discussed on the
sflovers mailing list).

During the early years of the Internet, email and similar mechanisms were
also fundamental to allow people to access resources that were not available
due to the absence of online connectivity. UUCP was often used to distribute
files using the 'alt.binaries' groups. Also, FTP e-mail gateways allowed people
who lived outside the US and Europe to download files using ftp commands
written inside email messages. The file was encoded, broken into pieces and
sent by email; the receiver had to reassemble and decode it later, and it was
the only way for people living overseas to download items such as the earlier
Linux versions using the slow dial-up connections available at the time. After
the popularization of the Web and the HTTP protocol such tools were slowly
abandoned.
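The encode/split/reassemble workflow those gateways relied on can be sketched as follows. The chunk size and payload here are illustrative, not any historical gateway's actual limits, and base64 stands in for whichever text armoring a given gateway used.

```python
# Sketch of the FTP-by-email workflow: text-armor a binary file, split it
# into mail-sized pieces, then recombine and decode on the receiving side.
import base64

CHUNK = 16  # characters of encoded text per "message"; purely illustrative

def to_messages(data: bytes):
    # Armor the binary payload as ASCII, then slice it into pieces.
    encoded = base64.b64encode(data).decode("ascii")
    return [encoded[i:i + CHUNK] for i in range(0, len(encoded), CHUNK)]

def from_messages(parts):
    # The receiver concatenates the pieces in order and decodes them.
    return base64.b64decode("".join(parts))

payload = b"contents of some downloaded archive"
parts = to_messages(payload)          # several small "email messages"
assert from_messages(parts) == payload  # receiver recovers the file
```

On a slow dial-up link, each small piece could be fetched in a separate mail session, which is what made the scheme workable at the time.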
File sharing

Main articles: File sharing, Peer-to-peer file sharing, and Timeline of file
sharing

Resource or file sharing has been an important activity on computer
networks from well before the Internet was established and was supported in
a variety of ways including bulletin board
systems (1978), Usenet (1980), Kermit (1981), and many others. The File
Transfer Protocol (FTP) for use on the Internet was standardized in 1985 and
is still in use today.[270] A variety of tools were developed to aid the use of FTP
by helping users discover files they might want to transfer, including
the Wide Area Information Server (WAIS) in 1991, Gopher in 1991, Archie in
1991, Veronica in 1992, Jughead in 1993, Internet Relay Chat (IRC) in 1988,
and eventually the World Wide Web (WWW) in 1991 with Web
directories and Web search engines.

In 1999, Napster became the first peer-to-peer file sharing system.[271]
Napster used a central server for indexing and peer discovery, but the
storage and transfer of files was decentralized. A variety of peer-to-peer file
sharing programs and services with different levels of decentralization
and anonymity followed, including: Gnutella, eDonkey2000, and Freenet in
2000, FastTrack, Kazaa, Limewire, and BitTorrent in 2001, and Poisoned in
2003.[272]
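Napster's hybrid design, with centralized discovery but decentralized transfer, can be sketched as follows. The class name and peer addresses are invented for illustration; the real protocol was considerably more involved.

```python
# Sketch of a Napster-style hybrid architecture: a central server indexes
# which peer holds which file, while the transfer itself is peer-to-peer.
class CentralIndex:
    def __init__(self):
        self.index = {}                    # filename -> set of peer addresses

    def register(self, peer, files):
        # Peers announce their shared files to the central server.
        for f in files:
            self.index.setdefault(f, set()).add(peer)

    def lookup(self, filename):
        # Discovery is centralized: ask the server who has the file.
        return self.index.get(filename, set())

index = CentralIndex()
index.register("peer-a:6699", ["song.mp3"])
peers = index.lookup("song.mp3")
print(peers)   # {'peer-a:6699'} -- the download then goes directly to that peer
```

This central index is also why Napster could be shut down by legal action, whereas later fully decentralized systems such as Gnutella had no single point to target.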

All of these tools are general purpose and can be used to share a wide
variety of content, but sharing of music files, software, and later movies and
videos became major uses.[273] While some of this sharing is legal, large
portions are not. Lawsuits and other legal actions caused Napster in 2001,
eDonkey2000 in 2005, Kazaa in 2006, and Limewire in 2010 to shut down or
refocus their efforts.[274][275] The Pirate Bay, founded in Sweden in 2003,
continues despite a trial and appeal in 2009 and 2010 that resulted in jail
terms and large fines for several of its founders. [276] File sharing remains
contentious and controversial with charges of theft of intellectual property on
the one hand and charges of censorship on the other.[277][278]

File hosting services

Main article: File-hosting service


File hosting allowed people to expand their computers' hard drives and
"host" their files on a server. Most file hosting services offer free storage, as
well as larger storage amounts for a fee. These services have greatly
expanded the internet for business and personal use.

Google Drive, launched on April 24, 2012, has become the most popular file
hosting service. Google Drive allows users to store, edit, and share files with
themselves and other users. Not only does this application allow for file
editing, hosting, and sharing; it also provides Google's own free-to-access
office programs, such as Google Docs, Google Slides, and Google Sheets.
The application serves as a useful tool for university professors and
students, as well as those in need of cloud storage.[279][280]

Dropbox, released in June 2007, is a similar file hosting service that allows
users to keep all of their files in a folder on their computer, which is synced
with Dropbox's servers. This differs from Google Drive in that it is not web-
browser based. Today, Dropbox focuses on keeping workers and files in sync
and efficient.[281]

Mega, with over 200 million users, is an encrypted storage and
communication system that offers users free and paid storage, with an
emphasis on privacy.[282] As three of the largest file hosting services,
Google Drive, Dropbox, and Mega represent the core ideas and values of
such services.

Online piracy

Main article: Online piracy

The earliest form of online piracy began with a P2P (peer-to-peer) music
sharing service named Napster, launched in 1999. Sites like LimeWire, The
Pirate Bay, and BitTorrent allowed for anyone to engage in online piracy,
sending ripples through the media industry. With online piracy came a
change in the media industry as a whole.[283]

Mobile telephone data traffic

See also: Mobile web

Total global mobile data traffic reached 588 exabytes during 2020, [284] a 150-
fold increase from 3.86 exabytes/year in 2010.[285] Most recently,
smartphones accounted for 95% of this mobile data traffic with video
accounting for 66% by type of data.[284] Mobile traffic travels by radio
frequency to the closest cell phone tower and its base station where the
radio signal is converted into an optical signal that is transmitted over high-
capacity optical networking systems that convey the information to data
centers. The optical backbones enable much of this traffic as well as a host
of emerging mobile services including the Internet of things, 3-D virtual
reality, gaming and autonomous vehicles. The most popular mobile phone
application is texting, of which 2.1 trillion messages were logged in 2020.[286]
The texting phenomenon began on December 3, 1992, when Neil
Papworth sent the first text message of "Merry Christmas" over a commercial
cell phone network to the CEO of Vodafone.[287]
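The growth figure quoted above for mobile data traffic can be checked directly: 588 exabytes in 2020 against 3.86 exabytes in 2010 is indeed roughly a 150-fold increase.

```python
# Verify the "150-fold increase" claim from the cited traffic figures.
traffic_2020_eb = 588    # exabytes, 2020
traffic_2010_eb = 3.86   # exabytes, 2010

growth = traffic_2020_eb / traffic_2010_eb
print(round(growth))     # 152, consistent with "a 150-fold increase"
```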

The first mobile phone with Internet connectivity was the Nokia 9000
Communicator, launched in Finland in 1996. The viability of Internet service
access on mobile phones was limited until prices came down from that
model, and network providers started to develop systems and services
conveniently accessible on phones. NTT DoCoMo in Japan launched the first
mobile Internet service, i-mode, in 1999, and this is considered the birth of
mobile phone Internet services. In 2001, the mobile phone email system
by Research in Motion (now BlackBerry Limited) for their BlackBerry product
was launched in America. To make efficient use of the small screen and tiny
keypad and one-handed operation typical of mobile phones, a specific
document and networking model was created for mobile devices,
the Wireless Application Protocol (WAP). Most mobile device Internet services
operate using WAP. The growth of mobile phone services was initially a
primarily Asian phenomenon with Japan, South Korea and Taiwan all soon
finding the majority of their Internet users accessing resources by phone
rather than by PC.[288] Developing countries followed, with India, South Africa,
Kenya, the Philippines, and Pakistan all reporting that the majority of their
domestic users accessed the Internet from a mobile phone rather than a PC.
The European and North American use of the Internet was influenced by a
large installed base of personal computers, and the growth of mobile phone
Internet access was more gradual, but had reached national penetration
levels of 20–30% in most Western countries.[289] The cross-over occurred in
2008, when more Internet access devices were mobile phones than personal
computers. In many parts of the developing world, the ratio is as much as 10
mobile phone users to one PC user.[290]

Growth in demand

Global Internet traffic continues to grow at a rapid rate, rising 23% from 2020
to 2021[291] when the number of active Internet users reached 4.66 billion
people, representing half of the global population. Further demand for data,
and the capacity to satisfy this demand, are forecast to increase to 717
terabits per second in 2021.[292] This capacity stems from the optical
amplification and WDM systems that are the common basis of virtually every
metro, regional, national, international and submarine telecommunications
network.[293] These optical networking systems have been installed
throughout the 5 billion kilometers of fiber optic lines deployed around the
world.[294] Continued growth in traffic is expected for the foreseeable future
from a combination of new users, increased mobile phone adoption,
machine-to-machine connections, connected homes, 5G devices and the
burgeoning requirement for cloud and Internet services such
as Amazon, Facebook, Apple Music and YouTube.

Historiography

Further information: Protocol Wars § Historiography

There are nearly insurmountable problems in supplying a historiography of
the Internet's development. The process of digitization represents a twofold
challenge both for historiography in general and, in particular, for historical
communication research.[295] A sense of the difficulty in documenting early
developments that led to the internet can be gathered from the quote:

"The Arpanet period is somewhat well documented because the corporation
in charge – BBN – left a physical record. Moving into the NSFNET era, it
became an extraordinarily decentralized process. The record exists in
people's basements, in closets. ... So much of what happened was done
verbally and on the basis of individual trust."

— Doug Gale (2007)[296]

Notable works on the subject were published by Katie Hafner and Matthew
Lyon, Where Wizards Stay Up Late: The Origins Of The Internet (1996), Roy
Rosenzweig, Wizards, Bureaucrats, Warriors, and Hackers: Writing the
History of the Internet (1998), and Janet Abbate, Inventing the
Internet (2000).[297]

Most scholarship and literature on the Internet lists ARPANET as the prior
network that was iterated on and studied to create it, [298] although other early
computer networks and experiments existed alongside or before ARPANET.
[299]

These histories of the Internet have since been characterized
as teleologies or Whig history; that is, they take the present to be the end
point toward which history has been unfolding based on a single cause:

In the case of Internet history, the epoch-making event is usually said to be
the demonstration of the 4-node ARPANET network in 1969. From that single
happening the global Internet developed.

— Martin Campbell-Kelly, Daniel D. Garcia-Swartz[300]

In addition to these characteristics, historians have cited methodological
problems arising in their work:

"Internet history" ... tends to be too close to its sources. Many Internet
pioneers are alive, active, and eager to shape the histories that describe
their accomplishments. Many museums and historians are equally eager to
interview the pioneers and to publicize their stories.

— Andrew L. Russell (2012)[301]
