History of the Internet
Computer science was an emerging discipline in the late 1950s that began to
consider time-sharing between computer users, and later, the possibility of
achieving this over wide area networks. J. C. R. Licklider developed the idea
of a universal network at the Information Processing Techniques Office (IPTO)
of the United States Department of Defense (DoD) Advanced Research
Projects Agency (ARPA). Independently, Paul Baran at the RAND
Corporation proposed a distributed network based on data in message blocks
in the early 1960s, and Donald Davies conceived of packet switching in 1965
at the National Physical Laboratory (NPL), proposing a national commercial
data network in the United Kingdom.
In the late 1970s, national and international public data networks emerged
based on the X.25 protocol, designed by Rémi Després and others. In the
United States, the National Science Foundation (NSF) funded
national supercomputing centers at several universities in the United States,
and provided interconnectivity in 1986 with the NSFNET project, thus
creating network access to these supercomputer sites for research and
academic organizations in the United States. International connections to
NSFNET, the emergence of architecture such as the Domain Name System,
and the adoption of TCP/IP on existing networks in the United States and
around the world marked the beginnings of the Internet.[4][5][6]
Commercial Internet service providers (ISPs) emerged in 1989 in the
United States and Australia.[7] Limited private connections to parts of the
Internet by officially commercial entities emerged in several American cities
by late 1989 and 1990.[8] The optical backbone of the NSFNET was
decommissioned in 1995, removing the last restrictions on the use of the
Internet to carry commercial traffic, as traffic transitioned to optical networks
managed by Sprint, MCI and AT&T in the United States.
Foundations
Precursors
Time-sharing
Christopher Strachey, who became Oxford University's first Professor
of Computation, filed a patent application in the United Kingdom for time-
sharing in February 1959.[13][14] In June that year, he gave a paper "Time
Sharing in Large Fast Computers" at the UNESCO Information Processing
Conference in Paris where he passed the concept on to J. C. R. Licklider.[15][16]
Licklider, a vice president at Bolt Beranek and Newman, Inc. (BBN),
promoted the idea of time-sharing as an alternative to batch processing.[14]
John McCarthy, at MIT, wrote a memo in 1959 that broadened the concept
of time sharing to encompass multiple interactive user sessions, which
resulted in the Compatible Time-Sharing System (CTSS) implemented at MIT.
Other multi-user mainframe systems were developed, such as PLATO at
the University of Illinois.[17] In the early 1960s, the Advanced Research
Projects Agency (ARPA) of the United States Department of Defense funded
further research into time-sharing at MIT through Project MAC.
Inspiration
In August 1962, Licklider and Welden Clark published the paper "On-Line
Man-Computer Communication"[19] which was one of the first descriptions of
a networked future.
In October 1962, Licklider was hired by Jack Ruina as director of the newly
established Information Processing Techniques Office (IPTO) within ARPA,
with a mandate to interconnect the United States Department of Defense's
main computers at Cheyenne Mountain, the Pentagon, and SAC HQ. There he
formed an informal group within DARPA to further computer research. He
began by writing memos in 1963 describing a distributed network to the
IPTO staff, whom he called "Members and Affiliates of the Intergalactic
Computer Network".[20]
Although he left the IPTO in 1964, five years before the ARPANET went live, it
was his vision of universal networking that provided the impetus for one of
his successors, Robert Taylor, to initiate the ARPANET development. Licklider
later returned to lead the IPTO in 1973 for two years.[21]
Packet switching
The infrastructure for telephone systems at the time was based on circuit
switching, which requires pre-allocation of a dedicated communication line
for the duration of the call. Telegram services had developed store and
forward telecommunication techniques. Western Union's Automatic Telegraph
Switching System Plan 55-A was based on message switching. The U.S.
military's AUTODIN network became operational in 1962. Like SAGE and
SABRE, these systems still required rigid routing structures that were prone
to a single point of failure.[24]
The technology was considered vulnerable for strategic and military use
because there were no alternative paths for the communication in case of a
broken link. In the early 1960s, Paul Baran of the RAND Corporation produced
a study of survivable networks for the U.S. military in the event of nuclear
war.[25] Information would be transmitted across a "distributed" network,
divided into what he called "message blocks".[26][27][28][29][30] Baran's design was
not implemented.
ARPANET
Robert Taylor, who succeeded Licklider at the IPTO, recalled the frustration that inspired the ARPANET:
For each of these three terminals, I had three different sets of user
commands. So if I was talking online with someone at S.D.C. and I wanted to
talk to someone I knew at Berkeley or M.I.T. about this, I had to get up from
the S.D.C. terminal, go over and log into the other terminal and get in touch
with them.... I said, oh man, it's obvious what to do: If you have these three
terminals, there ought to be one terminal that goes anywhere you want to go
where you have interactive computing. That idea is the ARPAnet.[58]
ARPA awarded the contract to build the network to Bolt Beranek & Newman.
The "IMP guys", led by Frank Heart and Bob Kahn, developed the routing,
flow control, software design and network control. [36][66] The first ARPANET link
was established between the Network Measurement Center at the University
of California, Los Angeles (UCLA) Henry Samueli School of Engineering and
Applied Science directed by Leonard Kleinrock, and the NLS system
at Stanford Research Institute (SRI) directed by Douglas Engelbart in Menlo
Park, California at 22:30 hours on October 29, 1969.[67][68]
"We set up a telephone connection between us and the guys at SRI ...",
Kleinrock ... said in an interview: "We typed the L and we asked on the
phone, 'Do you see the L?' 'Yes, we see the L,' came the response. We typed
the O, and we asked, 'Do you see the O.' 'Yes, we see the O.' Then we typed
the G, and the system crashed ... Yet a revolution had begun."
The British Post Office, Western Union International, and Tymnet collaborated
to create the first international packet-switched network, referred to as
the International Packet Switched Service (IPSS), in 1978. This network grew
from Europe and the US to cover Canada, Hong Kong, and Australia by 1981.
By the 1990s it provided a worldwide networking infrastructure.[94]
The first public dial-in networks used asynchronous teleprinter (TTY) terminal
protocols to reach a concentrator operated in the public network. Some
networks, such as Telenet and CompuServe, used X.25 to multiplex the
terminal sessions into their packet-switched backbones, while others, such
as Tymnet, used proprietary protocols. In 1979, CompuServe became the
first service to offer electronic mail capabilities and technical support to
personal computer users. The company broke new ground again in 1980 as
the first to offer real-time chat with its CB Simulator. Other major dial-in
networks were America Online (AOL) and Prodigy that also provided
communications, content, and entertainment features.[95] Many bulletin board
system (BBS) networks also provided on-line access, such as FidoNet which
was popular amongst hobbyist computer users, many of
them hackers and amateur radio operators.
In 1979, two students at Duke University, Tom Truscott and Jim Ellis,
originated the idea of using Bourne shell scripts to transfer news and
messages on a serial line UUCP connection with nearby University of North
Carolina at Chapel Hill. Following public release of the software in 1980, the
mesh of UUCP hosts forwarding on the Usenet news rapidly expanded.
UUCPnet, as it would later be named, also created gateways and links
between FidoNet and dial-up BBS hosts. UUCP networks spread quickly due
to the lower costs involved, ability to use existing leased lines, X.25 links or
even ARPANET connections, and the lack of strict use policies compared to
later networks like CSNET and BITNET. All connections were local. By 1981 the
number of UUCP hosts had grown to 550, nearly doubling to 940 in 1984.[96]
Sublink Network, operating since 1987 and officially founded in Italy in 1989,
based its interconnectivity upon UUCP to redistribute mail and news groups
messages throughout its Italian nodes (about 100 at the time) owned both
by private individuals and small companies. Sublink Network evolved into
one of the first examples of Internet technology coming into use through
popular diffusion.
TCP/IP
Cerf and Kahn published their ideas in May 1974,[103] which incorporated
concepts implemented by Louis Pouzin and Hubert Zimmermann in the
CYCLADES network.[104][105] The specification of the resulting protocol,
the Transmission Control Program, was published as RFC 675 by the Network
Working Group in December 1974.[106] It contains the first attested use of the
term internet, as a shorthand for internetwork. This software was monolithic
in design using two simplex communication channels for each user session.
After the ARPANET had been up and running for several years, ARPA looked
for another agency to hand off the network to; ARPA's primary mission was
funding cutting-edge research and development, not running a
communications utility. In July 1975, the network was turned over to
the Defense Communications Agency, also part of the Department of
Defense. In 1983, the U.S. military portion of the ARPANET was broken off as
a separate network, the MILNET. MILNET subsequently became the
unclassified but military-only NIPRNET, in parallel with the SECRET-
level SIPRNET and JWICS for TOP SECRET and above. NIPRNET does have
controlled security gateways to the public Internet.
The networks based on the ARPANET were government funded and therefore
restricted to noncommercial uses such as research; unrelated commercial
use was strictly forbidden.[116] This initially restricted connections to military
sites and universities. During the 1980s, the connections expanded to more
educational institutions, and a growing number of companies such as Digital
Equipment Corporation and Hewlett-Packard, which were participating in
research projects or providing services to those who were. Data transmission
speeds depended upon the type of connection, the slowest being analog
telephone lines and the fastest using optical networking technology.
Several other branches of the U.S. government, the National Aeronautics and
Space Administration (NASA), the National Science Foundation (NSF), and
the Department of Energy (DOE) became heavily involved in Internet
research and started development of a successor to ARPANET. In the mid-
1980s, all three of these branches developed the first Wide Area Networks
based on TCP/IP. NASA developed the NASA Science Network, NSF
developed CSNET and DOE evolved the Energy Sciences Network or ESNet.
NASA developed the TCP/IP based NASA Science Network (NSN) in the mid-
1980s, connecting space scientists to data and information stored anywhere
in the world. In 1989, the DECnet-based Space Physics Analysis Network
(SPAN) and the TCP/IP-based NASA Science Network (NSN) were brought
together at NASA Ames Research Center creating the first multiprotocol wide
area network called the NASA Science Internet, or NSI. NSI was established
to provide a totally integrated communications infrastructure to the NASA
scientific community for the advancement of earth, space and life sciences.
As a high-speed, multiprotocol, international network, NSI provided
connectivity to over 20,000 scientists across all seven continents.
In 1986, the NSF created NSFNET, a 56 kbit/s backbone to support the NSF-
sponsored supercomputing centers. The NSFNET also provided support for
the creation of regional research and education networks in the United
States, and for the connection of university and college campus networks to
the regional networks.[117] The use of NSFNET and the regional networks was
not limited to supercomputer users and the 56 kbit/s network quickly
became overloaded. NSFNET was upgraded to 1.5 Mbit/s in 1988 under a
cooperative agreement with the Merit Network in partnership with IBM, MCI,
and the State of Michigan. The existence of NSFNET and the creation
of Federal Internet Exchanges (FIXes) allowed the ARPANET to be
decommissioned in 1990.
NSFNET was expanded and upgraded in 1991 to dedicated fiber, optical lasers
and optical amplifier systems capable of delivering T3 speeds of 45 Mbit/s.
However, the T3 transition by MCI took longer than
expected, allowing Sprint to establish a coast-to-coast long-distance
commercial Internet service. When NSFNET was decommissioned in 1995, its
optical networking backbones were handed off to several commercial
Internet service providers, including MCI, PSI Net and Sprint.[118] As a result,
when the handoff was complete, Sprint and its Washington DC Network
Access Points began to carry Internet traffic, and by 1996, Sprint was the
world's largest carrier of Internet traffic.[119]
The term "internet" was reflected in the first RFC published on the TCP
protocol (RFC 675:[120] Internet Transmission Control Program, December
1974) as a short form of internetworking, when the two terms were used
interchangeably. In general, an internet was a collection of networks linked
by a common protocol. In the time period when the ARPANET was connected
to the newly formed NSFNET project in the late 1980s, the term was used as
the name of the network, Internet, being the large and global TCP/IP network.[121]
Opening the Internet and the fiber optic backbone to corporate and
consumers increased demand for network capacity. The expense and delay
of laying new fiber led providers to test a fiber bandwidth expansion
alternative that had been pioneered in the late 1970s by Optelecom using
"interactions between light and matter, such as lasers and optical devices
used for optical amplification and wave mixing".[122] This technology became
known as wavelength-division multiplexing (WDM). Bell Labs deployed a
4-channel WDM system in 1995.[123] To develop a mass capacity (dense) WDM
system, Optelecom and its former head of Light Systems Research, David R.
Huber formed a new venture, Ciena Corp., that deployed the world's first
dense WDM system on the Sprint fiber network in June 1996.[123] This was
referred to as the real start of optical networking.[124]
Many sites unable to link directly to the Internet created simple gateways for
the transfer of electronic mail, the most important application of the time.
Sites with only intermittent connections used UUCP or FidoNet and relied on
the gateways between these networks and the Internet. Some gateway
services went beyond simple mail peering, such as allowing access to File
Transfer Protocol (FTP) sites via UUCP or mail.[125]
Finally, routing technologies were developed for the Internet to remove the
remaining centralized routing aspects. The Exterior Gateway Protocol (EGP)
was replaced by a new protocol, the Border Gateway Protocol (BGP). This
provided a meshed topology for the Internet and reduced the centric
architecture which ARPANET had emphasized. In 1994, Classless Inter-
Domain Routing (CIDR) was introduced to support better conservation of
address space which allowed use of route aggregation to decrease the size
of routing tables.[126]
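The saving that CIDR's route aggregation provides can be illustrated with Python's standard ipaddress module (the address ranges below are documentation examples chosen for this sketch, not routes from the period):

```python
import ipaddress

# Four contiguous /24 networks that, before CIDR, a router would have
# carried as four separate class C routing table entries.
routes = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("198.51.101.0/24"),
    ipaddress.ip_network("198.51.102.0/24"),
    ipaddress.ip_network("198.51.103.0/24"),
]

# CIDR allows them to be advertised as a single aggregated prefix,
# shrinking the routing table from four entries to one.
aggregated = list(ipaddress.collapse_addresses(routes))
print(aggregated)  # [IPv4Network('198.51.100.0/22')]
```

Applied across an ISP's whole customer base, this kind of aggregation is what kept backbone routing tables tractable as the Internet grew.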
Optical networking
The concept of lasing arose from a 1917 paper by Albert Einstein, "On the
Quantum Theory of Radiation". Einstein expanded upon a conversation
with Max Planck on how atoms absorb and emit light, part of a thought
process that, with input from Erwin Schrödinger, Werner Heisenberg and
others, gave rise to quantum mechanics. Specifically, in his quantum theory,
Einstein mathematically determined that light could be generated not only
by spontaneous emission, such as the light emitted by an incandescent
light or the Sun, but also by stimulated emission.
Between 1984 and 1988, CERN began installation and operation of TCP/IP to
interconnect its major internal computer systems, workstations, PCs, and an
accelerator control system. CERN continued to operate a limited self-
developed system (CERNET) internally and several incompatible (typically
proprietary) network protocols externally. There was considerable resistance
in Europe towards more widespread use of TCP/IP, and the CERN TCP/IP
intranets remained isolated from the Internet until 1989, when a
transatlantic connection to Cornell University was established. [135][136][137]
The Computer Science Network (CSNET) began operation in 1981 to provide
networking connections to institutions that could not connect directly to
ARPANET. Its first international connection was to Israel in 1984. Soon after,
connections were established to computer science departments in Canada,
France, and Germany.[23]
Nonetheless, for a period in the late 1980s and early 1990s, engineers,
organizations and nations were polarized over the question of which standard,
the OSI model or the Internet protocol suite, would result in the best and most
robust computer networks.[100][147][148]
South Korea set up a two-node domestic TCP/IP network in 1982, the System
Development Network (SDN), adding a third node the following year. SDN
was connected to the rest of the world in August 1983 using UUCP (Unix-to-
Unix-Copy); connected to CSNET in December 1984;[23] and formally
connected to the NSFNET in 1990.[149][150][151]
Japan, which had built the UUCP-based network JUNET in 1984, connected to
CSNET,[23] and later to NSFNET in 1989, marking the spread of the Internet to
Asia.
Africa
At the beginning of the 1990s, African countries relied upon X.25 IPSS and
2400 baud modem UUCP links for international and internetwork computer
communications.
In August 1995, InfoMail Uganda, Ltd., a privately held firm in Kampala now
known as InfoCom, and NSN Network Services of Avon, Colorado, sold in
1997 and now known as Clear Channel Satellite, established Africa's first
native TCP/IP high-speed satellite Internet services. The data connection was
originally carried by a C-Band RSCC Russian satellite which connected
InfoMail's Kampala offices directly to NSN's MAE-West point of presence using
a private network from NSN's leased ground station in New Jersey. InfoCom's
first satellite connection was just 64 kbit/s, serving a Sun host computer and
twelve US Robotics dial-up modems.
Latin America
As with the other regions, the Latin American and Caribbean Internet
Addresses Registry (LACNIC) manages the IP address space and other
resources for its area. LACNIC, headquartered in Uruguay, operates DNS root,
reverse DNS, and other key services.
Main articles: History of the World Wide Web and Information Age
Development
Initially, as with its predecessor networks, the system that would evolve into
the Internet was primarily for government and government body use.
Although commercial use was forbidden, the exact definition of commercial
use was unclear and subjective. UUCPNet and the X.25 IPSS had no such
restrictions, which would eventually see the official barring of UUCPNet use
of ARPANET and NSFNET connections.
During the first decade or so of the public Internet, the immense changes it
would eventually enable in the 2000s were still nascent. In terms of providing
context for this period, mobile cellular devices ("smartphones" and other
cellular devices) which today provide near-universal access, were used for
business and not a routine household item owned by parents and children
worldwide. Social media in the modern sense had yet to come into existence,
laptops were bulky and most households did not have computers. Data rates
were slow and most people lacked the means to record or digitize video;
media storage was transitioning slowly from analog tape and floppy discs to
digital optical discs (CD and DVD). Enabling technologies used from the early
2000s such as PHP, modern JavaScript and Java, technologies such as AJAX,
HTML 4 (and its emphasis on CSS), and various software frameworks, which
enabled and sped up web development, largely awaited invention and
eventual widespread adoption.
The Internet was widely used for mailing lists, emails, creating and
distributing maps with tools like MapQuest, e-commerce and early
popular online shopping (Amazon and eBay for example), online
forums and bulletin boards, and personal websites and blogs. Use was
growing rapidly, but by more modern standards the systems used were
static and lacked widespread social engagement. A number of events in the
early 2000s were needed before the Internet gradually developed from a
communications technology into a key part of global society's infrastructure.
Typical design elements of these "Web 1.0" era websites included: [176] Static
pages instead of dynamic HTML;[177] content served from filesystems instead
of relational databases; pages built using Server Side Includes or CGI instead
of a web application written in a dynamic programming language; HTML 3.2-
era structures such as frames and tables to create page layouts;
online guestbooks; overuse of GIF buttons and similar small graphics
promoting particular items;[178] and HTML forms sent via email. (Support
for server-side scripting was rare on shared servers, so the usual feedback
mechanism was via email, using mailto forms and the visitor's email program.[179])
The history of the World Wide Web up to around 2004 was retrospectively
named and described by some as "Web 1.0".[180]
IPv6
In the final stage of IPv4 address exhaustion, the last IPv4 address block was
assigned in January 2011 at the level of the regional Internet registries.[181]
IPv4 uses 32-bit addresses, which limits the address space to 2^32, i.e.
4,294,967,296 addresses.[111] IPv4 is in the process of being replaced by
IPv6, which uses 128-bit addresses and thus provides a vastly larger address
space.
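The arithmetic behind that limit is easy to check; a quick illustration using Python's standard ipaddress module:

```python
import ipaddress

# IPv4 addresses are 32 bits wide; IPv6 addresses are 128 bits wide.
ipv4_space = 2 ** 32
ipv6_space = 2 ** 128

print(ipv4_space)   # 4294967296

# The "whole Internet" prefix 0.0.0.0/0 contains exactly that many addresses.
print(ipaddress.ip_network("0.0.0.0/0").num_addresses == ipv4_space)  # True

# IPv6's space is larger by a factor of 2**96.
print(ipv6_space // ipv4_space == 2 ** 96)  # True
```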
Main articles: Web 2.0 and History of the World Wide Web § Web 2.0
The rapid technical advances that would propel the Internet into its place as
a social system, completely transforming the way humans interact with each
other, took place during a relatively short period from around 2005 to 2010,
around the time, in the late 2000s, when the number of IoT devices surpassed
the number of humans alive.
Writing in 1999, Darcy DiNucci anticipated this shift:
The Web we know now, which loads into a browser window in essentially
static screenfuls, is only an embryo of the Web to come. The first
glimmerings of Web 2.0 are beginning to appear, and we are just starting to
see how that embryo might develop. The Web will be understood not as
screenfuls of text and graphics but as a transport mechanism, the ether
through which interactivity happens. It will [...] appear on your computer
screen, [...] on your TV set [...] your car dashboard [...] your cell phone [...]
hand-held game machines [...] maybe even your microwave oven.
"Web 2.0" does not refer to an update to any technical specification, but
rather to cumulative changes in the way Web pages are made and used.
"Web 2.0" describes an approach, in which sites focus substantially upon
allowing users to interact and collaborate with each other in a social
media dialogue as creators of user-generated content in a virtual community,
in contrast to Web sites where people are limited to the passive viewing
of content. Examples of Web 2.0 include social networking
services, blogs, wikis, folksonomies, video sharing sites, hosted
services, Web applications, and mashups.[192] Terry Flew, in his 3rd edition
of New Media, described what he believed to characterize the differences
between Web 1.0 and Web 2.0:
[The] move from personal websites to blogs and blog site aggregation, from
publishing to participation, from web content as the outcome of large up-
front investment to an ongoing and interactive process, and from content
management systems to links based on tagging (folksonomy).[193]
This era saw several household names gain prominence through their
community-oriented operation – YouTube, Twitter, Facebook, Reddit and
Wikipedia being some examples.
Telephone systems have been slowly adopting voice over IP since 2003. Early
experiments proved that voice can be converted to digital packets and sent
over the Internet. The packets are collected and converted back to analog
voice.[194][195][196]
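The packetize-and-reassemble step described above can be sketched in Python. This is a toy illustration of the idea only (fixed-size frames of fake samples, with made-up names), not a real RTP or codec implementation:

```python
# Toy sketch of voice-over-IP packetization: chop a stream of audio
# samples into fixed-size frames, tag each with a sequence number,
# then put them back in order on the receiving side.
FRAME = 160  # samples per packet, e.g. 20 ms of 8 kHz audio

def packetize(samples):
    """Split samples into (sequence_number, frame) packets."""
    return [(seq, samples[i:i + FRAME])
            for seq, i in enumerate(range(0, len(samples), FRAME))]

def reassemble(packets):
    """Restore the original sample stream, tolerating out-of-order arrival."""
    out = []
    for _, frame in sorted(packets, key=lambda p: p[0]):
        out.extend(frame)
    return out

audio = list(range(480))               # three frames' worth of fake samples
packets = packetize(audio)
# Even if the network delivers packets out of order, the stream survives.
assert reassemble(list(reversed(packets))) == audio
```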
Main articles: History of mobile phones, Mobile web, and Responsive web
design
The process of change that generally coincided with Web 2.0 was itself
greatly accelerated and transformed only a short time later by the increasing
growth in mobile devices. This mobile revolution meant that computers in
the form of smartphones became something many people used, took with
them everywhere, communicated with, used for photographs and videos
they instantly shared or to shop or seek information "on the move" – and
used socially, as opposed to items on a desk at home or just used for work.
This "mobile revolution" has allowed for people to have a nearly unlimited
amount of information at all times. With the ability to access the internet
from cell phones came a change in the way media was consumed. Media
consumption statistics show that over half of media consumption among
those aged 18 to 34 took place on a smartphone.[197]
Networking in outer space
The first Internet link into low Earth orbit was established on January 22,
2010, when astronaut T. J. Creamer posted the first unassisted update to his
Twitter account from the International Space Station, marking the extension
of the Internet into space.[198] (Astronauts at the ISS had used email and
Twitter before, but these messages had been relayed to the ground through
a NASA data link before being posted by a human proxy.) This personal Web
access, which NASA calls the Crew Support LAN, uses the space station's
high-speed Ku band microwave link. To surf the Web, astronauts can use a
station laptop computer to control a desktop computer on Earth, and they
can talk to their families and friends on Earth using Voice over IP equipment.
[199]
Internet governance
Since at this point in history most of the growth on the Internet was coming
from non-military sources, it was decided that the Department of
Defense would no longer fund registration services outside of the .mil TLD. In
1993 the U.S. National Science Foundation, after a competitive bidding
process in 1992, created the InterNIC to manage the allocations of addresses
and management of the address databases, and awarded the contract to
three organizations. Registration Services would be provided by Network
Solutions; Directory and Database Services would be provided by AT&T; and
Information Services would be provided by General Atomics.[211]
Over time, after consultation with the IANA, the IETF, RIPE NCC, APNIC, and
the Federal Networking Council (FNC), the decision was made to separate the
management of domain names from the management of IP numbers.
[210]
Following the examples of RIPE NCC and APNIC, it was recommended that
management of IP address space then administered by the InterNIC should
be under the control of those that use it, specifically the ISPs, end-user
organizations, corporate entities, universities, and individuals. As a result,
the American Registry for Internet Numbers (ARIN) was established in
December 1997 as an independent, not-for-profit corporation by direction of
the National Science Foundation and became the third Regional Internet
Registry.[212]
In 1998, both the IANA and remaining DNS-related InterNIC functions were
reorganized under the control of ICANN, a California non-profit
corporation contracted by the United States Department of Commerce to
manage a number of Internet-related tasks. As these tasks involved technical
coordination for two principal Internet name spaces (DNS names and IP
addresses) created by the IETF, ICANN also signed a memorandum of
understanding with the IAB to define the technical work to be carried out by
the Internet Assigned Numbers Authority.[213] The management of Internet
address space remained with the regional Internet registries, which
collectively were defined as a supporting organization within the ICANN
structure.[214] ICANN provides central coordination for the DNS system,
including policy coordination for the split registry / registrar system, with
competition among registry service providers to serve each top-level-domain
and multiple competing registrars offering DNS services to end-users.
The Internet Engineering Task Force (IETF) is the largest and most visible of
several loosely related ad-hoc groups that provide technical direction for the
Internet, including the Internet Architecture Board (IAB), the Internet
Engineering Steering Group (IESG), and the Internet Research Task
Force (IRTF).
The IETF is not a legal entity, has no governing board, no members, and no
dues. The closest status resembling membership is being on an IETF or
Working Group mailing list. IETF volunteers come from all over the world and
from many different parts of the Internet community. The IETF works closely
with and under the supervision of the Internet Engineering Steering
Group (IESG)[217] and the Internet Architecture Board (IAB).[218] The Internet
Research Task Force (IRTF) and the Internet Research Steering Group (IRSG),
peer activities to the IETF and IESG under the general supervision of the IAB,
focus on longer-term research issues.[215][219]
RFCs
RFCs are the main documentation for the work of the IAB, IESG, IETF, and
IRTF.[220] Originally intended as requests for comments, RFC 1, "Host
Software", was written by Steve Crocker at UCLA in April 1969. These
technical memos documented aspects of ARPANET development. They were
edited by Jon Postel, the first RFC Editor.[215][221]
ISOC provides financial and organizational support to and promotes the work
of the standards settings bodies for which it is the organizational home:
the Internet Engineering Task Force (IETF), the Internet Architecture
Board (IAB), the Internet Engineering Steering Group (IESG), and the Internet
Research Task Force (IRTF). ISOC also promotes understanding and
appreciation of the Internet model of open, transparent processes and
consensus-based decision-making.[224]
The IETF, with financial and organizational support from the Internet Society,
continues to serve as the Internet's ad-hoc standards body and
issues Request for Comments.
Net neutrality
On March 12, 2015, the FCC released the specific details of the net neutrality
rules.[261][262][263] On April 13, 2015, the FCC published the final rule on its new
"Net Neutrality" regulations.[264][265]
On December 14, 2017, the FCC repealed their March 12, 2015 decision by a
3–2 vote regarding net neutrality rules.[266]
In addition, UUCP allowed the publication of text files that could be read by
many others. The News software developed by Steve Daniel and Tom
Truscott in 1979 was used to distribute news and bulletin board-like
messages. This quickly grew into discussion groups, known as newsgroups,
on a wide range of topics. On ARPANET and NSFNET similar discussion
groups would form via mailing lists, discussing both technical issues and
more culturally focused topics (such as science fiction, discussed on the
sflovers mailing list).
During the early years of the Internet, email and similar mechanisms were
also fundamental to allow people to access resources that were not available
due to the absence of online connectivity. UUCP was often used to distribute
files using the 'alt.binaries' groups. Also, FTP e-mail gateways allowed people
that lived outside the US and Europe to download files using ftp commands
written inside email messages. The file was encoded, broken in pieces and
sent by email; the receiver had to reassemble and decode it later, and it was
the only way for people living overseas to download items such as the earlier
Linux versions using the slow dial-up connections available at the time. After
the popularization of the Web and the HTTP protocol such tools were slowly
abandoned.
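The encode, split, and reassemble workflow described above can be sketched in a few lines. This is a minimal modern illustration using base64 rather than the uuencode format typical of the era, and the function names and chunk size are hypothetical:

```python
import base64

CHUNK_SIZE = 24  # characters per message body; real gateways used far larger chunks

def split_for_email(data: bytes, chunk_size: int = CHUNK_SIZE) -> list[str]:
    """Encode a binary file as ASCII text and split it into numbered message bodies."""
    encoded = base64.b64encode(data).decode("ascii")
    return [encoded[i:i + chunk_size] for i in range(0, len(encoded), chunk_size)]

def reassemble(parts: list[str]) -> bytes:
    """Concatenate the parts (in order) and decode back to the original bytes."""
    return base64.b64decode("".join(parts))

original = b"example binary payload for an FTP-by-email transfer"
parts = split_for_email(original)
assert reassemble(parts) == original
```

In practice each chunk would be wrapped in its own email message, and the receiver had to collect every part before decoding, since losing a single message made the file unrecoverable.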
File sharing
Main articles: File sharing, Peer-to-peer file sharing, and Timeline of file
sharing
All of these tools are general-purpose and can be used to share a wide
variety of content, but the sharing of music files, software, and later movies
and videos has been a major use.[273] While some of this sharing is legal,
large portions are not. Lawsuits and other legal actions caused Napster in
2001, eDonkey2000 in 2005, Kazaa in 2006, and Limewire in 2010 to shut
down or refocus their efforts.[274][275] The Pirate Bay, founded in Sweden in
2003, continues to operate despite a trial and appeal in 2009 and 2010 that
resulted in jail terms and large fines for several of its founders.[276] File
sharing remains contentious and controversial, with charges of intellectual
property theft on the one hand and charges of censorship on the other.[277][278]
File hosting allowed people to expand their computers' storage by "hosting"
their files on a server. Most file hosting services offer free storage, as well
as larger amounts of storage for a fee. These services have greatly expanded
the Internet's usefulness for business and personal use.
Google Drive, launched on April 24, 2012, has become the most popular file
hosting service. Google Drive allows users to store, edit, and share files with
other users. Beyond file hosting, editing, and sharing, it also provides access
to Google's free office applications, such as Google Docs, Google Slides, and
Google Sheets. The service has become a useful tool for university
professors and students, as well as anyone in need of cloud storage.[279][280]
Dropbox, released in June 2007, is a similar file hosting service that lets
users keep all of their files in a folder on their computer, which is synced
with Dropbox's servers. It differs from Google Drive in that it is not primarily
web-browser based. Today, Dropbox focuses on keeping workers and their
files in sync across devices.[281]
Online piracy
The earliest form of online piracy began with Napster, a peer-to-peer (P2P)
music sharing service launched in 1999. Services such as LimeWire, The
Pirate Bay, and BitTorrent later allowed anyone to engage in online piracy,
sending ripples through the media industry and changing it as a whole.[283]
Mobile revolution
Total global mobile data traffic reached 588 exabytes during 2020,[284] a 150-
fold increase from 3.86 exabytes/year in 2010.[285] Most recently,
smartphones accounted for 95% of this mobile data traffic with video
accounting for 66% by type of data.[284] Mobile traffic travels by radio
frequency to the closest cell phone tower and its base station where the
radio signal is converted into an optical signal that is transmitted over high-
capacity optical networking systems that convey the information to data
centers. The optical backbones enable much of this traffic as well as a host
of emerging mobile services including the Internet of things, 3-D virtual
reality, gaming and autonomous vehicles. The most popular mobile phone
application is texting: 2.1 trillion messages were logged in 2020.[286]
The texting phenomenon began on December 3, 1992, when Neil
Papworth sent the first text message of "Merry Christmas" over a commercial
cell phone network to the CEO of Vodafone.[287]
The first mobile phone with Internet connectivity was the Nokia 9000
Communicator, launched in Finland in 1996. The viability of Internet services
access on mobile phones was limited until prices came down from that
model, and network providers started to develop systems and services
conveniently accessible on phones. NTT DoCoMo in Japan launched the first
mobile Internet service, i-mode, in 1999; this is considered the birth of
mobile phone Internet services. In 2001, Research in Motion (now BlackBerry
Limited) launched its mobile phone email system for the BlackBerry product
in America. To make efficient use of the small screens, tiny keypads, and
one-handed operation typical of mobile phones, a specific document and
networking model, the Wireless Application Protocol (WAP), was created for
mobile devices. Most early mobile device Internet services operated using
WAP. The growth of mobile phone services was initially a
primarily Asian phenomenon with Japan, South Korea and Taiwan all soon
finding the majority of their Internet users accessing resources by phone
rather than by PC.[288] Developing countries followed, with India, South Africa,
Kenya, the Philippines, and Pakistan all reporting that the majority of their
domestic users accessed the Internet from a mobile phone rather than a PC.
The European and North American use of the Internet was influenced by a
large installed base of personal computers, and the growth of mobile phone
Internet access was more gradual, but had reached national penetration
levels of 20–30% in most Western countries.[289] The cross-over occurred in
2008, when more Internet access devices were mobile phones than personal
computers. In many parts of the developing world, the ratio is as much as 10
mobile phone users to one PC user.[290]
Growth in demand
Global Internet traffic continues to grow at a rapid rate, rising 23% from 2020
to 2021,[291] when the number of active Internet users reached 4.66 billion
people, representing more than half of the global population. Further
demand for data, and the capacity to satisfy it, were forecast to reach 717
terabits per second in 2021.[292] This capacity stems from the optical
amplification and WDM systems that are the common basis of virtually every
metro, regional, national, international and submarine telecommunications
network.[293] These optical networking systems have been installed
throughout the 5 billion kilometers of fiber optic lines deployed around the
world.[294] Continued growth in traffic is expected for the foreseeable future
from a combination of new users, increased mobile phone adoption,
machine-to-machine connections, connected homes, 5G devices and the
burgeoning requirement for cloud and Internet services such
as Amazon, Facebook, Apple Music and YouTube.
Historiography
Notable works on the subject were published by Katie Hafner and Matthew
Lyon, Where Wizards Stay Up Late: The Origins Of The Internet (1996), Roy
Rosenzweig, Wizards, Bureaucrats, Warriors, and Hackers: Writing the
History of the Internet (1998), and Janet Abbate, Inventing the
Internet (2000).[297]
Most scholarship and literature on the Internet lists ARPANET as the prior
network that was iterated on and studied to create it,[298] although other early
computer networks and experiments existed alongside or before ARPANET.[299]
"Internet history" ... tends to be too close to its sources. Many Internet
pioneers are alive, active, and eager to shape the histories that describe
their accomplishments. Many museums and historians are equally eager to
interview the pioneers and to publicize their stories.