Cyber Security 01
🔍 What is Cyber?
Refers to the interconnected digital world, encompassing the internet, networks, and information
systems.
🛡️ What is Security?
The practice of protecting systems and data from unauthorized access, ensuring confidentiality,
integrity, and availability.
🧠 Practical Questions
Encourages critical thinking about cybersecurity scenarios and policies.
🏢 Private Sector Support and Moving the Internet Out of the Lab
Transition of internet technologies from research labs to commercial applications.
🏛️ What is Governance?
Mechanisms and policies that regulate internet operations and standards.
🎮 Hobbyists
Individuals experimenting with hacking for learning or personal challenge.
💰 Criminal Organizations
Groups engaging in cybercrime for financial gain.
🎭 Hacktivists
Hackers driven by political or social motivations.
🔍 Reconnaissance
Gathering information about targets to identify vulnerabilities.
🧰 Weaponization
Developing or selecting tools to exploit identified vulnerabilities.
📤 Delivery
Transmitting the exploit to the target system.
🎯 Effects
Actions taken post-compromise, such as data exfiltration or system disruption.
⚠️ Primary Effects
Immediate, direct consequences of an attack, such as system compromise, data loss, or service outage.
🔄 Secondary Effects
Subsequent consequences, including reputational damage and financial loss.
🌐 Second-Level Effects
Broader implications, such as changes in policy, public trust, and international relations.
United States Government Accountability Office
GAO Testimony
Before the Subcommittee on Oversight,
Investigations, and Management,
Committee on Homeland Security, House
of Representatives
CYBERSECURITY: Threats Impacting the Nation
For Release on Delivery: Expected at 2:00 p.m. EDT, Tuesday, April 24, 2012
GAO-12-666T
What GAO Recommends
GAO has previously made recommendations to resolve identified significant control deficiencies.

The number of cybersecurity incidents reported by federal agencies continues to rise, and recent incidents illustrate that these pose serious risk. Over the past 6 years, the number of incidents reported by federal agencies to the federal information security incident center has increased by nearly 680 percent. These incidents include unauthorized access to systems; improper use of computing resources; and the installation of malicious software, among others. Reported attacks and unintentional incidents involving federal, private, and infrastructure systems demonstrate that the impact of a serious attack could be significant, including loss of personal or sensitive information, disruption or destruction of critical infrastructure, and damage to national and economic security.

View GAO-12-666T. For more information, contact Gregory C. Wilshusen at (202) 512-6244 or wilshuseng@gao.gov.
Thank you for the opportunity to testify at today’s hearing on the cyber-based threats facing our nation.
In my testimony today, I will describe (1) cyber threats facing the nation’s systems, (2) vulnerabilities present in federal systems and systems supporting critical infrastructure,3 and (3) reported cyber incidents and their impacts. In preparing this statement in April 2012, we relied on our previous work in these areas. (Please see the related GAO products in appendix I.) These products contain detailed overviews of the scope and methodology we used. We also reviewed more recent agency and inspector general reports.
1. James R. Clapper, Director of National Intelligence, Unclassified Statement for the Record on the Worldwide Threat Assessment of the US Intelligence Community for the Senate Select Committee on Intelligence (January 31, 2012).
2. See, most recently, GAO, High-Risk Series: An Update, GAO-11-278 (Washington, D.C.: February 2011).
3. Critical infrastructures are systems and assets, whether physical or virtual, so vital to our nation that their incapacity or destruction would have a debilitating impact on national security, economic well-being, public health or safety, or any combination of these.
The unique nature of cyber-based attacks can vastly enhance their reach
and impact. For example, cyber attackers do not need to be physically
close to their victims, technology allows attacks to easily cross state and
national borders, attacks can be carried out at high speed and directed at
a number of victims simultaneously, and cyber attackers can more easily
remain anonymous. Moreover, the use of these and other techniques is
becoming more sophisticated, with attackers using multiple or “blended”
approaches that combine two or more techniques. Using these
techniques, threat actors may target individuals, resulting in loss of
privacy or identity theft; businesses, resulting in the compromise of
proprietary information or intellectual capital; critical infrastructures,
resulting in their disruption or destruction; or government agencies,
resulting in the loss of sensitive information and damage to economic and
national security.
4. The 24 major departments and agencies are the Departments of Agriculture, Commerce,
Defense, Education, Energy, Health and Human Services, Homeland Security, Housing
and Urban Development, the Interior, Justice, Labor, State, Transportation, the Treasury,
and Veterans Affairs; the Environmental Protection Agency, General Services
Administration, National Aeronautics and Space Administration, National Science
Foundation, Nuclear Regulatory Commission, Office of Personnel Management, Small
Business Administration, Social Security Administration, and U.S. Agency for International
Development.
5. A material weakness is a deficiency, or a combination of deficiencies, in internal control
such that there is a reasonable possibility that a material misstatement of the entity’s
financial statements will not be prevented, or detected and corrected on a timely basis. A
significant deficiency is a deficiency, or a combination of deficiencies, in internal control
that is less severe than a material weakness, yet important enough to merit attention by
those charged with governance. A control deficiency exists when the design or operation
of a control does not allow management or employees, in the normal course of performing
their assigned functions, to prevent, or detect and correct, misstatements on a timely
basis.
Over the past several years, we and agency inspectors general have
made hundreds of recommendations to resolve similar previously
identified significant control deficiencies. We have also recommended
that agencies fully implement comprehensive, agencywide information
security programs, including by correcting weaknesses in specific areas
of their programs. The effective implementation of these
recommendations will strengthen the security posture at these agencies.
6. GAO, Critical Infrastructure Protection: Multiple Efforts to Secure Control Systems Are Under Way, but Challenges Remain, GAO-07-1036 (Washington, D.C.: Sept. 10, 2007).
7. GAO, Information Security: TVA Needs to Address Weaknesses in Control Systems and Networks, GAO-08-526 (Washington, D.C.: May 21, 2008).
8. According to US-CERT, the growth in the number of incidents is attributable, in part, to agencies improving detection and reporting of security incidents on their respective networks.
In summary, the cyber threats facing the nation are evolving and growing,
with a wide array of potential threat actors having access to increasingly
sophisticated techniques for exploiting system vulnerabilities. The danger
posed by these threats is heightened by the weaknesses that continue to
exist in federal information systems and systems supporting critical
infrastructures. Ensuring the security of these systems is critical to
avoiding potentially devastating impacts, including loss, disclosure, or
modification of personal or sensitive information; disruption or destruction
of critical infrastructure; and damage to our national and economic
security.
Information Security: FDIC Has Made Progress, but Further Actions Are
Needed to Protect Financial Data. GAO-11-708. Washington, D.C.:
August 12, 2011.
This is a work of the U.S. government and is not subject to copyright protection in the
United States. The published product may be reproduced and distributed in its entirety
without further permission from GAO. However, because this work may contain
copyrighted images or other material, permission from the copyright holder may be
necessary if you wish to reproduce this material separately.
GAO’s Mission
The Government Accountability Office, the audit, evaluation, and
investigative arm of Congress, exists to support Congress in meeting its
constitutional responsibilities and to help improve the performance and
accountability of the federal government for the American people. GAO
examines the use of public funds; evaluates federal programs and
policies; and provides analyses, recommendations, and other assistance
to help Congress make informed oversight, policy, and funding decisions.
GAO’s commitment to good government is reflected in its core values of
accountability, integrity, and reliability.
NOTICE: The project that is the subject of this report was approved by the Governing Board of the National Research Council, whose members are drawn from the councils of the National Academy of Sciences, the National Academy of Engineering, and the Institute of Medicine. The members of the committee responsible for the report were chosen for their special competences and with regard for appropriate balance.
Support for this project was provided by the National Science Foundation under Award Number CNS-0940372. Additional support was provided by Microsoft Corporation, Google, Inc., and the President’s Committee of the National Academies. Any opinions, findings, conclusions, or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the organizations or agencies that provided support for the project.
Staff
HERBERT S. LIN, Study Director and Chief Scientist, Computer Science
and Telecommunications Board
ERIC WHITAKER, Senior Program Assistant, Computer Science and
Telecommunications Board
1. Ms. Blumenthal resigned from the committee on May 1, 2013, and accepted a position
Preface
Private-sector companies both large and small suffer from cyber thefts of sensitive information, cyber vandalism (e.g., defacing of Web sites), and denial-of-service attacks. The nation’s critical infrastructure, including the electric power grid, depends on information technology for its operation.
Concerns about the vulnerability of the information technology on which the nation relies have deepened in the security-conscious environment after the September 11, 2001, attacks and in light of increased cyber espionage directed at private companies and government agencies
in the United States. National policy makers have become increasingly concerned that adversaries backed by considerable resources will attempt to exploit these vulnerabilities. Various policy proposals have been advanced, and a number of bills have been introduced in Congress to tackle parts of the cybersecurity challenge.
Although the larger public discourse sometimes treats the topic of cybersecurity as a new one, the Computer Science and Telecommunications Board (CSTB) has been examining it for more than two decades. The present report also addresses issues not covered in earlier CSTB work, and the committee acknowledges with gratitude input from William Press.
The 1991 CSTB report Computers at Risk warned that “as computer systems
become more prevalent, sophisticated, embedded in physical processes, and
interconnected, society becomes more vulnerable to poor system design . . . and
attacks on computer systems” and that “the nature and magnitude of computer
system problems are changing dramatically” (p. 1). It also lamented that “known
techniques are not being used” to increase security.
In 1999, CSTB released Trust in Cyberspace, which proposed a research agenda to increase the trustworthiness of information technology (IT), with a special focus on networked information systems. This report went beyond security matters alone, addressing as well other dimensions of trustworthiness such as correctness, reliability, safety, and survivability. Importantly, it also noted that “economic and political context is critical to the successful development and deployment of new technologies” (p. viii).
In 2002, CSTB issued Cybersecurity Today and Tomorrow: Pay Now or Pay Later, which reprised recommendations from a decade of CSTB cybersecurity studies. Its preface noted that “it is a sad commentary on the state of the world that what CSTB wrote more than 10 years ago is still timely and relevant. For those who work in computer security, there is a deep frustration that research and recommendations do not seem to translate easily into deployment and utilization” (p. v).
CSTB’s 2007 report Toward a Safer and More Secure Cyberspace observed that “there is an inadequate understanding of what makes IT systems vulnerable to attack, how best to reduce these vulnerabilities, and how to transfer cybersecurity knowledge to actual practice” (p. vii). It set forth an updated research agenda, sought to inspire the nation to strive for a safer and more secure cyberspace, and focused “substantial attention on the very real challenges of incentives, usability, and embedding advances in cybersecurity into real-world products, practices, and services” (p. xii).
In 2009, CSTB turned its attention to the technical and policy dimensions
of cyberattack—the offensive side of cybersecurity. Technology, Policy, Law, and
Ethics Regarding U.S. Acquisition and Use of Cyberattack Capabilities concluded
that although cyberattack capabilities are an important asset for the United States,
the current policy and legal framework for their use is ill-formed, undeveloped,
and highly uncertain and that U.S. policy should be informed by an open and
public national debate on technological, policy, legal, and ethical issues posed by
cyberattack capabilities.
In 2010, the CSTB report Toward Better Usability, Security, and Privacy of
Information Technology: Report of a Workshop identified research opportunities
and ways to embed usability considerations in design and development related to
security and privacy. In that year, CSTB also produced a second workshop report,
Proceedings of a Workshop on Deterring Cyberattacks: Informing Strategies and
Developing Options, a collection of papers that examined governmental, economic,
technical, legal, and psychological challenges involved in deterring cyberattacks.
NOTE: All of these reports were published by the National Academies Press, Washington, D.C.
Acknowledgment of Reviewers
The review of this report was overseen by Sam Fuller (Analog Devices). Appointed by the National Research Council, he was responsible for making certain that an independent examination of this report was carried out in accordance with institutional procedures and that all review comments were carefully considered. Responsibility for the final content of this report rests entirely with the authoring committee and the institution.
Contents
3.2.2 Cyberattack, 32
3.3 Inherent Vulnerabilities of Information Technology, 35
4.2.5 Deterrence, 86
4.3 Assessing Cybersecurity, 88
4.4 On the Need for Research, 90
5.1 Economics, 93
5.1.1 Economic Approaches to Enhancing Cybersecurity, 93
5.1.2 Economic Impact of Compromises in Cybersecurity, 96
5.2 Innovation, 98
5.3 Civil Liberties, 100
5.3.1 Privacy, 100
APPENDIXES
Summary
Modern military forces use weapons that are computer controlled. Even more important, the movements and actions of military forces are increasingly coordinated through computer-based networks that allow information to flow among them. These systems must do what they are supposed to do and only when they are supposed to do it.
Problems such as crime, drug abuse, and so on are rarely “solved” or taken off the policy agenda so decisively that they will never reappear—and the same is true for cybersecurity.
At the same time, improvements to the cybersecurity posture of the nation have considerable value in reducing the loss and damage that may be associated with cybersecurity breaches. A well-defended target is less attractive to many malevolent actors than are poorly defended targets. In addition, defensive measures raise the cost of a successful attack, thus making intrusion attempts slower and more costly and possibly helping to deter future intrusions.
Improvements to cybersecurity call for two distinct kinds of activity: efforts to more effectively and more widely use what is known about improving cybersecurity, and efforts to develop new knowledge about cybersecurity. The gap between the U.S. national cybersecurity posture and the threat has two parts. The first part (Part 1) of the gap is the difference between what our cybersecurity posture is and what it could be if known best cybersecurity practices and technologies were widely deployed and used. The second part (Part 2) is the gap between the strongest posture possible with known practices and technologies and the still stronger posture that new knowledge could make possible. Closing the Part 1 gap is largely nontechnical in nature (requiring, e.g., research relating to economic or psychological factors regarding the use of known practices and techniques, enhanced educational efforts to promote security-responsible user behavior, and greater security awareness). Closing the Part 1 gap does not require new technical knowledge.
For a number of years, the cybersecurity issue has received increasing public attention, and a greater amount of authoritative information regarding cybersecurity threats is available publicly. But all too many decision makers have done little to strengthen their own organizational cybersecurity postures, and little has been done to harness market forces to address matters related to the cybersecurity posture of the nation as a whole. If the nation’s cybersecurity posture is to be improved to a level that is higher than the level to which today’s market will drive it, the incentives for cybersecurity must be altered in some fashion.
Cybersecurity is important to the nation, but the United States has other interests as well, some of which conflict with the imperatives of cybersecurity. Tradeoffs are inevitable and will have to be accepted through the nation’s political and policy-making processes. Senior policy makers have many issues on their agenda, and they must set priorities for the issues that warrant their attention. In an environment of many competing priorities, reactive policy making is often the outcome. Support for efforts to prevent a disaster that has not yet occurred is typically less than support for efforts to respond to a disaster that has already occurred. We want cybersecurity, yes, but we also want a private sector that innovates rapidly, and the convenience of not having to worry about cybersecurity, and the ability for applications to interoperate easily and quickly with one another, and the right to no diminution in our civil liberties, and so on. Research and deeper thought may reveal that, in some cases, tradeoffs between security and these other equities are not as stark as they might appear at first glance.
The opacity surrounding U.S. offensive capabilities in cyberspace has many undesirable consequences, but one of the most important is that the role offensive capabilities could play in defending important information technology assets of the United States cannot be discussed fully.
What is sensitive about offensive U.S. capabilities in cyberspace is usually the U.S. interest in a particular technology (rather than the nature of that technology itself); fragile and sensitive operational details (e.g., a particular vulnerability, a particular operational program); or U.S. knowledge of an adversary’s capabilities. Even so, the open literature provides a generally reasonable basis for understanding what can be done and for policy discussions that focus primarily on what should be done.
The cybersecurity problem will never be solved once and for all. Solutions to the problem, limited in scope and longevity though they may be, are at least as much nontechnical as technical in nature.
Many computers are not openly visible. People type at the keyboards of computers or tablets and use their smartphones daily. People’s personal lives involve computing through social networking, home management, communication with family and friends, and management of personal affairs. The operation of medical devices implanted in human bodies is controlled by embedded (built-in) microprocessors.
A much larger collection of information technology (IT) is embedded in systems that people do not interact with directly. (On the meaning of “cyberspace,” see https://blogs.cisco.com/security/cyberspace-what-is-it/; the term is an evocative one rather than a precise one.)
Given our dependence on cyberspace, we want and need our infor-
mation technologies to do what they are supposed to do and only when
they are supposed to do it. We also want these technologies to not do
things they are not supposed to do. And we want these things to be true
in the face of deliberately hostile or antisocial actions.
Cybersecurity issues arise because of three factors taken together.
First, we live in a world in which there are parties that will act in deliber-
ately hostile or antisocial ways—parties that would do us harm or sepa-
rate us from our money or violate our privacy or steal our ideas. Second,
we rely on IT for a large and growing number of societal functions. Third,
IT systems, no matter how well constructed (and many are not as well constructed as the state of the art would allow), inevitably have vulnerabilities that the bad guys can take advantage of.
When hostile parties are relying on such technologies and it is the U.S. government that takes actions to render their technologies inoperative, the impact would usually be seen as positive.
Similarly, many repressive regimes put into place various mechanisms in cyberspace to monitor the communications of dissidents. These mechanisms, like other systems, are subject to misuse and, perhaps most alarmingly, to deliberate attack. As the 1991 report Computers at Risk observed: “The modern thief can steal more with a computer than with a gun. Tomorrow’s terrorist may be able to do more damage with a keyboard than with a bomb.” (p. 7)

“Everything We Learned from Edward Snowden in 2013,” National Journal, December 31, 2013, available at http://www.nationaljournal.com/defense/everything-we-learned-from-edward-snowden-in-2013-20131231.
The following trends, and their security implications, illustrate the changing environment:

Trend: The number of Internet users has grown by at least two orders of magnitude in the past two decades, and hundreds of millions of new users (perhaps as many as a billion) will begin to use the Internet as large parts of Africa, South America, and Asia come online.
Implication: Many new users are untutored in the need for security and are thus more vulnerable, and a larger user base means a larger number of potentially malevolent actors.

Trend: Devices will become increasingly connected to the Internet of Things, on the theory that network connections between these devices will enable them to operate more efficiently and effectively.

Trend: The rise of social networking, exemplified by services such as Facebook and Twitter, is based on the ability of IT to bring large numbers of people into contact with one another.
Implication: Connectivity among friends and contacts offers opportunities for malevolent actors to improperly take advantage of trust relationships.
Cybercrime
Criminals use the Internet and IT to steal valuable assets (e.g., money) from their rightful owners or otherwise to take actions that would be regarded as criminal if these actions were taken in person, and a breach of security is usually an important element of the crime. Criminal activity using cyber means includes cyber fraud and theft of services (e.g., stealing credit card numbers); cyber harassment and bullying (e.g., taking advantage of online anonymity to threaten a victim); cyber vandalism (e.g., defacing a Web site); penetration or circumvention of cybersecurity mechanisms intended to protect the privacy of communications or stored information; and identity theft (e.g., stealing login names and passwords to forge e-mail or to improperly manipulate bank accounts). Loss of privacy and theft of intellectual property are also crimes (at least sometimes) but generally occupy their own categories of concern. Note also that, in addition to the direct losses they cause, threats to cybersecurity consume resources (e.g., money, talent) that could be better used to build improved products or services or to create new knowledge. And, in some cases, concerns about cybersecurity have been known to inhibit the use of IT for some particular application, thus leading to self-denial of the benefits that IT might otherwise provide.
Loss of privacy. Losses of privacy can result from the actions of others or of the individual concerned. Large-scale data breaches occur from time to time, for reasons including loss of laptops containing sensitive data and system penetrations by sophisticated intruders. Intruders have used the sound and video capabilities of home computers for blackmail.

Attacks on critical infrastructure could cause massive loss of life and long-lasting disruption of the services that these infrastructures provide, and could have negative effects in other areas as well. Some of the most important reasons such risks are neglected: decision makers do not believe action is necessary, and the costs of inaction are not borne by the relevant decision makers; decision makers discount future possibilities so much that they do not see the need for present-day action. Also, cybersecurity is increasingly regarded as a part of risk management—an important part in many cases, but only one part.
Digital information can be created without any authentic source per se simply by creating the bits, and bits so created can be used to produce everything from photo-realistic images to an animation to forged e-mail. Digital encoding can represent many kinds of information with which programs compute. The fact that a program’s behavior depends on the data it receives means that the programmer must anticipate what the program should do for all possible data inputs. This mental task is difficult, and it is easy to fail to properly anticipate some particular set of data (e.g., the program processes only numeric input, and fails to account for the possibility that a user might input a letter).
A further consequence is that for programs of any meaningful utility, testing for all possible outcomes is essentially impossible when treating the full range of possible inputs. This means that although it may be possible to show that the program does what it is supposed to do when presented with certain inputs, it is impossible to show that it will never do what it is not supposed to do for every possible input.
The fact that a given sequence of bits could just as easily be a program as data means that a computer that receives information assuming it to be data could in fact be receiving a program, and that program could be hostile. Mechanisms in a computer are supposed to keep data and program separate, but these mechanisms are not foolproof and can sometimes be tricked into allowing the computer to interpret data as instructions. Seemingly innocuous files received from elsewhere can be programs that can penetrate the computer’s security, and opening them can place the computer at risk. Cryptography can help protect the confidentiality of information that may have come improperly into the possession of others. Consider the traditional sealed letter: if Bob received the container with an unbroken seal, it meant that the letter had not been disclosed or altered (data integrity), and Bob would verify Alice’s signature and read the message. If he received the container with a broken seal, Bob would then take appropriate actions.
With modern cryptographic techniques, each of these steps has a digital counterpart.
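One such digital counterpart of the seal can be sketched with a keyed hash (HMAC) from the Python standard library. This is an illustration, not the report's construction: a true digital signature would use asymmetric keys (e.g., RSA or ECDSA), whereas an HMAC assumes Alice and Bob already share a secret; the key and message below are made up.

```python
import hashlib
import hmac

KEY = b"alice-and-bob-shared-secret"   # hypothetical shared key

def seal(message):
    # The HMAC tag plays the role of the wax seal: it covers both
    # integrity (unaltered) and origin (someone holding the key made it).
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify(message, tag):
    # compare_digest avoids timing side channels when checking the seal.
    return hmac.compare_digest(seal(message), tag)

msg = b"Meet at noon."
tag = seal(msg)
print(verify(msg, tag))              # True: the seal is intact
print(verify(b"Meet at one.", tag))  # False: a broken seal; Bob takes action
```

As with the physical seal, a failed check tells Bob only that something is wrong, not who tampered or when.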
The separation of the Internet into nodes for transmitting and receiving data and links and routers for moving data through the Internet captures the essence of its original architectural design, but in truth it presents a somewhat oversimplified picture.
FIGURE 2.1 A schematic of the Internet. Three “layers” of the Internet are depicted. The top and bottom layers (the applications layer and the physical infrastructure layer) are shown as much wider than the middle layer (the packet-switching layer), because within each of the wide layers is found a large number of largely independent actors. But within the packet-switching layer, the number of relevant actors is much smaller, and those that do have some control over the packet-switching layer act in tight coordination.
Users navigate the Internet using the Domain Name System.
How can a user navigate from one computer to another on the Internet? To
navigate—to follow a course to a goal—across any space requires a method for
designating locations in that space. On a topographic map, each location is des-
ignated by a combination of a latitude and a longitude. In the telephone system,
a telephone number corresponding to a landline designates each location. On
a street map, locations are designated by street addresses. Just like a physical
neighborhood, the Internet has addresses—32- or 128-bit numbers, called IP addresses (IP for Internet Protocol)—that define the specific location of every device on the Internet.
Also like the physical world, the Internet has names—called domain names,
which are generally more easily remembered and informative than the addresses
that are attached to most devices—that serve as unchanging identifiers of those
devices even when their specific addresses are changed. The use of domain
names on the Internet relies on a system of servers—called name servers—that
translate the user-friendly domain names into the corresponding IP addresses.
This system of addresses and names linked by name servers is called the Domain
Name System (DNS) and is the basic infrastructure supporting navigation across
the Internet.
Conceptually, the DNS is in essence a directory assistance service. George
uses directory assistance to look up Sam’s number, so that George can call Sam.
Similarly, a user who wants to visit the home page of the National Academy of
Sciences must either know that the IP address for this page is 144.171.1.30, or use
the DNS to perform the lookup for www.nas.edu. The user gives the name www.nas.edu to a DNS name server and receives in return the IP address 144.171.1.30.
However, in practice, the user almost never calls on the DNS explicitly—rather, the
entire process of DNS lookup is hidden from the user in the process of viewing a
Web page, sending e-mail, and so on.
Disruptions to the DNS affect the user experience. Disruptions may prevent
users from accessing the Web sites of their choosing. A disruption can lead a user
to a “look-alike” Web site pretending to be its legitimate counterpart. If the look-alike site is operated by a malevolent actor, the tricked user may lose control of vital information (such as login credentials).
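The lookup described above can be made explicit with Python's standard library. This is an illustrative sketch, not part of the report; it queries the system resolver, so results for any real name depend on the live DNS. The demo therefore sticks to localhost, which resolves without a network.

```python
import ipaddress
import socket

def lookup(name):
    # Ask the system resolver (which consults the DNS) for name's addresses.
    infos = socket.getaddrinfo(name, None)
    return sorted({info[4][0] for info in infos})  # sockaddr[0] is the IP

# "localhost" resolves locally; a name such as www.nas.edu would be looked
# up the same way, with the DNS machinery hidden inside getaddrinfo.
print(lookup("localhost"))  # typically includes 127.0.0.1

# IP addresses really are 32- or 128-bit numbers, as noted above.
ip = ipaddress.ip_address("144.171.1.30")  # the example address from the text
print(ip.version, int(ip))  # 4 2427126046
```

The `int(ip)` value is the underlying 32-bit integer that the dotted-quad notation merely formats for human readers.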
• Open process. Any interested person can participate in the work, know
what is being decided, and make his or her voice heard on an issue. All IETF documents, mailing lists, attendance lists, and meeting minutes are publicly available
on the Internet.
• Technical competence. The issues addressed in IETF-produced documents are issues that the IETF has the competence to speak to, and the IETF
is willing to listen to technically competent input from any source. The IETF’s
technical competence also means that IETF output is designed according to sound network
engineering principles, an element often referred to as “engineering quality.”
• Volunteer core. IETF participants and leaders are people who come to the
IETF because they want to do work that furthers IETF’s mission of “making the
Internet work better.”
• Rough consensus and running code. The IETF makes standards based on
the combined engineering judgment of its participants and their real-world experi-
ence in implementing and deploying its specifications.
• Protocol ownership. When the IETF takes ownership of a protocol or func-
tion, it accepts the responsibility for all aspects of the protocol, even though some
aspects may rarely or never be seen on the Internet. Conversely, when the IETF
is not responsible for a protocol or function, it does not attempt to exert control
over it, even though such a protocol or function may at times touch or affect the
Internet.
SOURCE: Adapted from material found at the IETF Web site at http://www.ietf.org.
Because the Internet's transport function is content-neutral (moving data from A to B without regard for the content of that data), any given
applications provider can architect a service without having to obtain
agreement from any other party. As long as the data packets are properly
formed and adhere to the standard Internet Protocol, the application pro-
vider can be assured that the transport mechanisms will accept the data
for forwarding to users of the application. Interpretation of those packets
is the responsibility of programs on the receiver’s end.
Some argue that network operators are well positioned to monitor traffic for a wide range of security threats, and perhaps take action to curb or eliminate those threats on behalf of application users. Further, this argument implies that the end-to-end design philosophy has impeded or even prevented the growth of such security services.
Those favoring the preservation of the end-to-end design philosophy argue that because of the higher potential for inadvertent disruption as a side effect of a change in architecture or protocols, every proposed change must be tested and validated, because such changes potentially affect many users. Some providers do offer services that warn users when they detect that a user's security has been compromised. Such offerings are not uncommon, and they have a modest impact on users' experience.
Chapter 1 points out that bad things that can happen in cyberspace
fall into a number of different categories: cybercrime; losses of privacy;
misappropriation of intellectual property such as proprietary software,
R&D work, blueprints, trade secrets, and other product information; espi-
onage; disruption of services; destruction of or damage to physical prop-
erty; and threats to national security. After a brief note about terminology,
this chapter addresses how adversarial cyber operations can result in any
or all of these outcomes.
mind the conceptual basis) for discussions about cybersecurity and public policy. Concerns about cybersecurity have spread rapidly in the past 10 years to many sectors. An important distinction is between information exfiltration (the essential characteristic of cyber espionage) and other kinds of
law and domestic legal authorities for conducting such actions; these
points are discussed further in Section 4.2.3 on domestic and international
law.)
• Any hostile or unfriendly action taken against a computer system
or network if (and only if) that action is intended to cause damage to or
destruction of information stored in or transiting through that system or
network and is effected primarily through the direct use of information technology is a cyberattack.
should not have access to it. To date, the vast majority—nearly all—of
and bid information, and software source code have all been obtained by
Once such a theft is discovered, the credit card owner can notify the bank and prevent the card's further use.
the winter holiday season of 2013, when the Target retail store chain
suffered a data breach in which personal information belonging to 70
million to 110 million people was stolen.1 Such information included
names, mailing and e-mail addresses, phone numbers, and credit card
numbers. Shortly after the breach occurred, observers noted an order-
of-magnitude increase in the number of high-value stolen cards on black
market Web sites, from nearly every bank and credit union. The Target
1 New York Times, available at http://www.nytimes.com/2014/01/11/business/target-breach-affected-70-million-customers.html.
of the new Miss Teen USA and took pictures that he subsequently used to blackmail the victim.4 The software he used prevented the warning light on the camera from turning on.
3.2.2 Cyberattack
A cyberattack is an action intended to cause a denial of service or
damage to or destruction of information stored in or transiting through
an information technology system or network.
A denial-of-service (DOS) attack is intended to render a properly
functioning system or network unavailable for normal use. A DOS attack
may mean that the e-mail does not go through, or the computer simply stops responding (as may any physical process controlled by the system). As a rule, the effects of a DOS attack vanish when
the attack ceases. DOS attacks are not uncommon, and have occurred
against individual corporations, government agencies (both civilian and
military), and nations.
A DOS attack often floods the target with bogus requests for service (e.g., requests to display a Web page, to receive and forward e-mail), consuming the resources needed to handle legitimate requests for service and thus blocking others from using those resources. Such an attack is relatively easy to block if these bogus requests for service come from a single source, because the target can simply drop all service requests from that source. A distributed denial-of-service (DDOS) attack, by contrast, issues bogus requests from many computers located around the world. In the 2007 attacks on Estonia, the duration and intensity of attacks varied across the Web sites attacked; most attacks lasted 1 minute to 1 hour, and a few lasted up to 10 hours.6 Attacks were stopped when the attackers ceased their efforts rather than being stopped by Estonian defensive measures.7 The Estonian government was quick to claim links between those conducting the attacks and the Russian government,8 although Russian officials denied any involvement.9
2 Paul Ziobro, "Target Earnings Suffer After Breach," Wall Street Journal, February 27, 2014. 3 "… at $200 Million," Wall Street Journal, February 19, 2014, available at http://online.wsj.com/news/articles/SB10001424052702304675504579391080333769014.
4 Nate Anderson, "Webcam Spying Goes Mainstream as Miss Teen USA Describes Hack,"
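The single-source blocking described above, and why distribution defeats it, can be sketched as follows (the threshold and addresses are illustrative, not from the text):

```python
# Sketch: a target can drop all requests from one flooding source, but a
# distributed attack arriving from many sources each staying under the
# threshold defeats this simple per-source filter.
from collections import Counter

REQUESTS_PER_WINDOW = 100   # assumed threshold for a "bogus flood"
BLOCKED: set[str] = set()
counts: Counter = Counter()

def handle_request(source_ip: str) -> bool:
    """Return True if the request is served, False if dropped."""
    if source_ip in BLOCKED:
        return False
    counts[source_ip] += 1
    if counts[source_ip] > REQUESTS_PER_WINDOW:
        BLOCKED.add(source_ip)  # drop all further requests from this source
        return False
    return True
```

A single source exceeding the threshold is silenced; thousands of sources sending a few requests each all pass the filter, which is precisely the point of distributing the attack.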
A damaging or destructive attack can alter a computer’s program-
ming in such a way that the computer does not later behave as it should.
If a physical device (such as a generator) is controlled by the computer,
the operation of that device may be compromised. The attack may also corrupt or destroy data (whether stored on the computer or being sent from one point to another). Such an attack may delete data
5 Economist; see also Estonia, presentation to Centre for Strategic and International Studies, November 28, 2007.
6 Available at http://asert.arbornetworks.com/2007/05/estonian-ddos-attacks-a-summary-to-date/.
7 McAfee Corporation, Cybercrime: The Next Wave, McAfee Virtual Criminology Report.
8 Mark Landler and John Markoff, "Digital Fears Emerge After Data Siege in Estonia," New York Times.
the victim does not have a chance to prepare for it), the effects of an attack
may or may not be concealed. If the intent of an attack is to destroy a
they generally use the same basic technical approaches to penetrate the
security of a system or network, even though they have different outcomes in mind. No useful system is entirely secure (see the left side of Figure 3.1). Of course, this
FIGURE 3.1 A secure but useless computer (left), and an insecure but useful
computer (right).
information or programs are good when they are in fact bad. This fact
underscores a basic point about most adversarial cyber operations—the
11 This asymmetry applies primarily when the intruder can choose when to act, that
is, when the precise timing of the intrusion's success does not matter. If the intruder must succeed at a specific moment, he may need a large number of tries to succeed, and the asymmetry between intruder and defender may be reduced.
something that many people do every day with ease. The user can type
the name of a Web page (called a URL, uniform resource locator) as
depicted in the top of Figure 3.2, and the proper Web page appears in a
second or two as depicted at the bottom of Figure 3.2. In addition, the user also wants the display of the Web page to be the only thing that happens. A failure by any of these actors may mean that the requested page does not appear as required.
Inspection of Figure 3.3 with a powerful magnifying lens (not supplied) reveals the many actors involved behind the scenes.
FIGURE 3.2 Viewing a Web page, from the user’s perspective. When a user types
a Web page’s name (top), the corresponding page appears and can be read (bottom).
Each of these actors must carry out correctly the role it plays in the underlying networking protocols if packets are to reach their destination. Moreover, each of
these actors could take (or be tricked into taking) one or more actions
that thwart the user’s intent in retrieving a given Web page, which is to
receive the requested Web page promptly and to have only that task be
accomplished, and not have any other unrequested task be accomplished.
a problem very different from that posed by the same cabinet located elsewhere. To gain access to the cabinet, the intruder might take advantage of an easily pickable lock on the cabinet—that is, an easily pickable lock is a vulnerability. The payload in this analogy is what the intruder does once the cabinet is open: he can alter the information on those papers, perhaps by replacing certain pages with pages of the intruder's creation (i.e., he alters the data recorded on the pages), he can pour ink over the papers (i.e., he renders the data unavailable to any legitimate user), or he can copy the papers and take the copies away (i.e., he steals the data).
Access
one that the intruder need spend only a little effort preparing and the
target that is known to be connected to the Internet. Public Web sites are
computer language used for managing data input and data manipulation
was rendered unusable through online attacks. The computer was probed within 30 seconds of being connected.
13 SC Magazine, compromises-32-million-passwords/article/159676/.
14 SC Magazine, rockyou-to-pay-ftc-250k-after-breach-of-32m-passwords/article/233992/.
15 Henry Samuel, “Chip and Pin Scam ‘Has Netted Millions from British Shoppers,’”
A supply chain penetration may be effected late in the chain, for example,
against a deployed computer in operation or one that is awaiting delivery on a
loading dock. In these cases, such a penetration is by its nature narrowly and
specifically targeted, and it is also not scalable, because the number of computers
that can be penetrated is proportional to the number of human assets available. In
other cases, a supply chain penetration may be effected early in the supply chain
(e.g., introducing a vulnerability during development), and high leverage against
many different targets might result from such an attack.
nature of these documents, many have asked how their security could
have been compromised. According to a Reuters news report,16 Snowden persuaded a number of co-workers to provide him with their credentials, telling them that he needed that information in his role as systems administrator.
Sometimes, social engineering is combined with either remote access
or close access methods. An intruder may make contact through the Inter-
net with someone likely to have privileges on the system or network of
interest. Through that contact, the intruder can trick the person into taking some compromising action. For example, the intruder sends the victim an e-mail with a link to a Web page and when
the victim clicks on that link, the Web page may take advantage of a
technical vulnerability in the browser to run a hostile program of its own
choosing on the user’s computer, often or usually without the permission
or even the knowledge of the user.
Social engineering can be combined with close access techniques as well. USB ports can be glued shut, but such a countermeasure also makes it impossible to use those ports for legitimate purposes.
16 Mark Hosenball and Warren Strobel, "Snowden Persuaded Other NSA Workers to Give Up Passwords–Sources," Reuters, November 7, 2013, available at http://www.reuters.com/article/2013/11/08/net-us-usa-security-snowden-idUSBRE9A703020131108.
The red team scattered USB drives in parking lots, smoking areas, and other locations frequented by employees. Software on each drive reported back when the drive was inserted, and the result was that 75 percent of the USB drives distributed were inserted into a computer.17
Vulnerability
Access is only one aspect of a penetration, which also requires the intruder to take advantage of a vulnerability in the target system or network. A vulnerability may be an unintentional flaw in software or a feature of the target such as a default setting that leaves system protections turned off. Vulnerabilities arise from the characteristics of information technology and information technology systems described above.
Some vulnerabilities become publicly known when they are discovered and can then be used by anyone with moderate technical skills until a patch can be developed, disseminated, and installed. Intruders with the time and resources may also discover unintentional defects that they protect as valuable secrets that can be used when necessary. As long
as those defects go unaddressed, the vulnerabilities they create can be
used by the intruder.
For example, legitimate users of a Web site would have an assigned login name and password that would enable them to do certain things (and only those things) on the Web site. But if the program's creator had installed a back door, a knowledgeable intruder (perhaps in cahoots with the program's creator) could enter a special 40-character password and use any login name and then be able to do anything he wanted on the system.
17 See Steve Stasiukonis, "Social Engineering, the USB Way," Dark Reading.
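The back-door scenario just described can be sketched as follows (the password value and login routine are hypothetical, not from the text):

```python
# Sketch of a back door hidden in a login check: one special 40-character
# password bypasses normal authentication for ANY login name.
def login(username: str, password: str, user_db: dict) -> bool:
    backdoor = "Z" * 40  # hypothetical secret known only to the program's creator
    if password == backdoor:
        return True      # any login name accepted: the back door
    return user_db.get(username) == password
```

The danger is that the back door is invisible to legitimate users and administrators; anyone who learns (or rediscovers) the special password inherits full access.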
The vulnerabilities with the greatest impact are those in a remotely accessible service that runs by default on all
versions of a widely used piece of software—under such circumstances,
an intruder could take advantage of the vulnerability in many places
nearly simultaneously, with all of the consequences that penetrations of
such scale might imply.
Those who discover such vulnerabilities in systems face the question
of what to do with them. A private party may choose to report a vulnerability privately to those responsible for maintaining the system so that it can be fixed quietly.
penetration approaches and techniques, and thus may look quite simi-
lar to the victim, at least until the nature of the malware involved is
ascertained.
ability for future use, and so on. The payload is what determines if an
Malware may also install itself in ways that keep it from being
detected. It may delete itself, leaving behind little or no trace that it was
ever present. In some cases, malware can remain even after a computer is
scanned with anti-malware software or even when the operating system
is reinstalled from scratch.
Most computers—including the personal computers in everyday use—run through a particular power-on sequence
when their power is turned on. The computer’s power-on sequence loads
a small program from a chip inside the computer known as the BIOS
(Basic Input-Output System), and then runs the BIOS program. The BIOS
program then loads the operating system from another part of the com-
puter, usually its hard drive. Most anti-malware software scans only the
operating system on the hard drive, assuming the BIOS chip to be intact.
But some malware is designed to modify the program on the BIOS chip,
and reinstalling the operating system simply does not touch the (modified) BIOS program, so the malware survives.
• Target selection may be driven by economic, political, or nationalistic considerations.
The skills of malevolent actors also span a very broad range. Some
have only a rudimentary understanding of the underlying technology and
are capable only of using tools that others develop to conduct their own
operations but in general are not capable of developing new tools. Those
with an intermediate level of skill are capable of developing hacking tools
on their own.
Those with the most advanced levels of skills—that is, the high-
end threat—can identify weaknesses in target systems and networks and
develop tools to take advantage of such knowledge. Moreover, they are
1 See Fortinet, Inc., “Threats on the Horizon: The Rise of the Advanced Persistent Threat,”
out of which a malevolent actor can assemble his own adversarial cyber
operation. In an environment in which such services can be bought and
• Bad guys who want to have an effect on their targets have some
motivation to keep trying, even if their initial efforts are not successful in
intruding on a victim’s computer systems or networks.
• Bad guys nearly always make use of deception in some form—
they trick the victim into doing something that is contrary to the victim’s
interests.
• A would-be bad guy who is induced or persuaded in some way to
refrain from intruding on a victim’s computer systems or networks results
in no harm to those systems or networks, and such an outcome is just as
good as thwarting his hostile operation (and may be better if he is persuaded to avoid conducting such operations in the future).
• Cyber bad guys will be with us forever for the same reason that crime will be with us forever—as long as the information stored in, processed by, or carried through a computer system or network has value to third parties, cyber bad guys will have some reason to conduct adversarial operations against a potential victim's computer systems and networks.
19 R. Velde, "Bitcoin: A Primer," Chicago Fed Letter, Number 517, December 2013, available at http://www.chicagofed.org/digital_assets/publications/chicago_fed_letter/2013/
Adversaries differ in their target set (i.e., what targets the adversary seeks to penetrate) and in what the adversary wishes to do or be able to do once penetration is achieved.
Enhancing Cybersecurity
The benefits of using IT must be weighed against the security risks that the use of IT entails. In some cases, those risks cannot be reduced to a sufficient degree, and the use of IT should be rejected. In other cases, security costs should be factored into the decision. But what should not happen is that security risks be ignored entirely—as may sometimes be the case.
safety of the computer system cannot be taken for granted forever after.
But disconnection does help under many circumstances.
The broader point can be illustrated by supervisory control and data
acquisition (SCADA) systems, some of which are connected to the Inter-
net.1 SCADA systems are used to control many elements of physical
infrastructure: electric power, gas and oil pipelines, chemical plants, fac-
tories, water and sewage, and so on. Infrastructure operators connect their
SCADA systems to the Internet to facilitate communications with them, at least in part because the necessary connections and communications hardware are inexpensive and widely available.
Detection
From the standpoint of an individual system or network operator,
the only thing worse than being penetrated is being penetrated and not
knowing about it. Detecting that one has been the target of a hostile cyber action is thus a prerequisite for any timely response.
1 See http://cyberarms.wordpress.com/2013/03/19/worldwide-map-of-internet-connected-scada-systems/.
ENHANCING CYBERSECURITY 55
when it was created, a hash of the program,2 and so on. Signatures might
also be associated with the path through which a program has arrived at
By law and policy, DHS is the primary agency responsible for protecting
U.S. government agencies other than the Department of Defense and the
header information in each packet but not the content of a packet itself)
and compares that data to known patterns of such data that have previ-
dropped).
This signature-based technique for detection has two primary weak-
nesses. First, it is easy to morph the code without affecting what the
program can do so that there are an unlimited number of functionally
equivalent versions with different signatures. Second, the technique can-
not identify a program as malware if the program has never been seen
before.
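The signature matching just described can be sketched minimally as follows (the "malware" bytes and hash set are invented for illustration):

```python
# Sketch of signature-based detection: compute a hash of a program and look
# it up in a set of known-malware hashes.
import hashlib

KNOWN_MALWARE_HASHES = {
    hashlib.sha256(b"evil-program-bytes").hexdigest(),
}

def is_known_malware(program_bytes: bytes) -> bool:
    return hashlib.sha256(program_bytes).hexdigest() in KNOWN_MALWARE_HASHES

# Weakness 1: changing even one byte changes the hash entirely, so morphed
#             variants of the same malware evade the set.
# Weakness 2: never-before-seen malware is, by definition, absent from the set.
```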
Another technique for detection monitors the behavior of a program;
are behavioral signatures that help with anomaly detection, this tech-
With a well-constructed algorithm, hashes of two different bit sequences are very unlikely to have the same hash value.
3 Department of Homeland Security, National Cyber Security Division, Computer Emergency Readiness Team (US-CERT), Privacy Impact Assessment [of the] Einstein Program: Collecting, Analyzing, and Sharing Computer Security Information Across the Federal Civilian Government, privacy_pia_eisntein.pdf.
Assessment
A hostile action taken against an individual system or network may or
may not be part of a larger adversary operation that affects many systems
simultaneously, and the scale and the nature of the systems and networks
affected in an operation are critical information for decision makers.
Detecting a coordinated adversary effort against the background noise
of ongoing hostile operations also remains an enormous challenge, given
that useful information from multiple sites must be made available on a
timely basis. (And as detection capabilities improve, adversaries will take
steps to mask such signs of coordinated efforts.)
An assessment addresses many factors, including the scale of the hos-
tile cyber operation (how many entities are being targeted), the nature of
the targets (which entities are being targeted), the success of the operation
4 See the OpenNet Initiative (http://opennet.net/) and the Information Warfare Moni-
tor (http://www.infowar-monitor.net/) Web sites for more information on these groups.
A useful press report on the activities of these groups can be found at Kim Hart, “A
New Breed of Hackers Tracks Online Acts of War,” Washington Post, August 27, 2008,
available at http://www.washingtonpost.com/wp-dyn/content/article/2008/08/26/
AR2008082603128_pf.html.
1 A MAC address (MAC is an acronym for media access control) is a unique number as-
sociated with a physical network adapter, specified by the manufacturer and hard-coded into
the adapter hardware. An IP address (Internet Protocol address) is a number assigned by the
operator of a network using the Internet Protocol to a device (e.g., a computer) attached to
that network; the operator may, or may not, use a configuration protocol that assigns a new
number every time the device appears on the network.
2 See Gerry Smith, “FBI Agent: We’ve Dismantled the Leaders of Anonymous,” The
Saltzer and Schroeder articulate eight design principles that can guide sys-
tem design and contribute to an implementation without security flaws:
5 If the app store does whitelisting consistently and rigorously (and app stores do vary significantly in this regard), the device cannot run programs that have not been properly signed. Another issue for whitelisting is who establishes any given whitelist—the user (who
themselves that the system they are about to use is adequate for their individual
purposes. Finally, it is simply not realistic to attempt to maintain secrecy for any
system that receives wide distribution.
• Separation of privilege: Where feasible, a protection mechanism that re-
quires two keys to unlock it is more robust and flexible than one that allows access
to the presenter of only a single key. The reason for this greater robustness and
flexibility is that, once the mechanism is locked, the two keys can be physically
separated, and distinct programs, organizations, or individuals can be made re-
sponsible for them. From then on, no single accident, deception, or breach of trust
is sufficient to compromise the protected information.
• Least privilege: Every program and every user of the system should oper-
ate using the least set of privileges necessary to complete the job. This principle
reduces the number of potential interactions among privileged programs to the
minimum for correct operation, so that unintentional, unwanted, or improper uses
of privilege are less likely to occur. Thus, if a question arises related to the possible
misuse of a privilege, the number of programs that must be audited is minimized.
• Least common mechanism: The amount of mechanism common to more
than one user and depended on by all users should be minimized. Every shared
mechanism (especially one involving shared variables) represents a potential infor-
mation path between users and must be designed with great care to ensure that it
does not unintentionally compromise security. Further, any mechanism serving all
users must be certified to the satisfaction of every user, a job presumably harder
than satisfying only one or a few users.
• Psychological acceptability: It is essential that the human interface be
designed for ease of use, so that users routinely and automatically apply the pro-
tection mechanisms correctly. More generally, the use of protection mechanisms
should not impose burdens on users that might lead users to avoid or circumvent
them—when possible, the use of such mechanisms should confer a benefit that
makes users want to use them. Thus, if the protection mechanisms make the
system slower or cause the user to do more work—even if that extra work is
“easy”—they are arguably flawed.
SOURCE: Adapted from J.H. Saltzer and M.D. Schroeder, “The Protection of Information in
Computer Systems,” Proceedings of the IEEE 63(9):1278-1308, September 1975.
or another party (who may not be willing or able to provide the full range of applications desired by the user or may accept software too uncritically for inclusion on the whitelist).
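The "properly signed" requirement above can be sketched as follows; HMAC stands in here for real public-key code signing, and the key name is illustrative:

```python
# Sketch of whitelisting via signing: the store signs approved programs, and
# the device refuses to run anything whose signature does not verify.
import hashlib
import hmac

STORE_KEY = b"app-store-signing-key"  # hypothetical signing secret

def sign(program: bytes) -> bytes:
    """Produce the store's signature over a program's bytes."""
    return hmac.new(STORE_KEY, program, hashlib.sha256).digest()

def may_run(program: bytes, signature: bytes) -> bool:
    """The device runs a program only if the store's signature verifies."""
    return hmac.compare_digest(sign(program), signature)
```

Real app stores use asymmetric signatures so devices need only a public verification key, but the policy question the text raises is the same: whoever holds the signing key decides what may run.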
These approaches to defense are well known, and are often imple-
mented to a certain degree in many situations. But in general, these
approaches have not been adopted as fully as they could be, leaving sys-
tems more vulnerable than they would otherwise be. If the approaches
• In many cases, closing down access paths and introducing cybersecurity to a system's design slows it down or makes it harder to use. Restricting access privileges to users often has serious usability implications and makes it harder
upgrade all parts of the system at once. This means that, for practical purposes, systems operate in a mixed technology environment in which the parts that have not been replaced are likely still vulnerable, and their interconnection to the parts that have been replaced may make even the new components vulnerable.
a government agency.
As applied to individuals, authentication serves two purposes:
privileges and no others. Because certain users have privileges that others
lost (forgotten) passwords. Because people often reuse the same name
and password combinations across different systems to ease the burden
(in order of frequency) "123456," "password," and "12345678." See Impact Lab, "The Top 50 Gawker Media Passwords," December 14, 2010, available at http://www.impactlab.net/2010/12/14/the-top-50-gawker-media-passwords/.
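One standard mitigation for the reuse problem described above is salted, iterated password hashing, sketched here (iteration count and salt length are illustrative):

```python
# Sketch of salted password storage: even if two users pick "123456",
# distinct random salts yield distinct stored hashes, so one site's leaked
# table does not directly expose another site's.
import hashlib
import hmac
import os

def store(password: str) -> tuple:
    """Return (salt, digest) to be stored instead of the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

Salting does not stop a user from reusing "123456" everywhere; it only ensures that each site's stored hash differs, which slows attackers who obtain one site's password table.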
code that can be used to reset the password. Although anyone can request
Biometrics Firms," The Register, May 16, 2002, http://www.theregister.co.uk/2002/05/16/
taking action on behalf of someone without those privileges.
Organizational Authentication
Given the role of the CA, its compromise is a dangerous event that can undermine trust broadly. Some CAs have been compromised, and some have even gone rogue on their own. The security of the Internet is under stress today in part because the number of trusted but not trustworthy CAs is growing. Thus, CAs must do what they can to ensure that their own operation is not compromised, and to limit the damage when one is compromised.
In principle, a user presented with a certificate should check its status to see if it has been revoked. Few users are so diligent—they rely on software to perform such checks. Sometimes the software fails to perform a check, leaving the user with a false sense of security. Sometimes the software reports that a certificate has been revoked and asks the user if he or she wants to proceed. Faced with this question, the user often proceeds.
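The revocation-checking behavior described above can be sketched as follows (the serial numbers and revocation set are illustrative):

```python
# Sketch of certificate revocation checking, including the fallback the text
# describes: when the check cannot be performed, software asks the user, who
# usually proceeds anyway.
REVOKED = {"serial-1234"}

def accept_certificate(serial: str, check_available: bool,
                       user_says_yes: bool) -> bool:
    if check_available:
        return serial not in REVOKED      # a proper revocation check
    return user_says_yes                  # fallback: ask the user
```

The weak link is the fallback branch: a failed check degrades into a user prompt, and users habitually click through.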
Furthermore, there is an inherent tension between authentication
and privacy, because the act of authentication involves some disclosure that associates a given party with any given piece of information.
for transporting information from one computer to another, but today’s
Internet protocols do not require a validated identity to be associated with
the packets that are sent.
Nevertheless, nearly all users of the Internet obtain service through
an Internet service provider, and the ISP usually does have—for billing
purposes—information about the party sending or receiving any given
packet. In other words, access to the Internet usually requires some kind
of authentication of identity, but the architecture of the Internet does not
require that identity to be carried with sent packets all the way to the
intended recipient. (As an important aside, an ISP knows only who pays
a bill for Internet service, and one bill may well cover Internet access for
multiple users. However, the paying entity may itself have accounting
systems in place to differentiate among these multiple users.)
In the name of greater security, proposals have been made for a "strongly authenticated" Internet as a solution to the problem of attribution.
Forensics
Forensic investigations are necessary because, among other things, intruders often seek to cover their tracks.
Attribution is difficult in part because digital information carries with it no physical signature that can be associated with a particular person. Although a digital signature on a document says something about the computer that signed the document using a private and secret cryptographic key, it does not necessarily follow that the individual associated with that key signed the document. Because the key is a long string of digits, it is almost certainly stored in machine-readable form, and the association of the individual with the signed document requires a demonstration that no one else could have used that key.
Forensic techniques include searches for information that the perpetrator of a hostile action may have tried to delete or did not know was recorded, audits of system logs for reconstruction of a perpetrator's system accesses and activities, and statistical and historical analyses. Digital forensics is often relevant to civil proceedings, both because the standards of proof there are lower and because the use of digital forensics in business activities may also be the subject of litigation.
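One technique above, auditing system logs to reconstruct a perpetrator's accesses, can be sketched as follows; the log format is invented for illustration:

```python
# Sketch of a log audit: find the first successful login that follows failed
# attempts, and return everything the account did from that point on.
LOG = [
    "2014-03-01T02:11:09 LOGIN fail user=admin src=203.0.113.7",
    "2014-03-01T02:11:31 LOGIN fail user=admin src=203.0.113.7",
    "2014-03-01T02:12:02 LOGIN ok user=admin src=203.0.113.7",
    "2014-03-01T02:15:40 READ file=payroll.db user=admin",
]

def accesses_after_suspicious_login(log: list) -> list:
    """Return entries from the first successful login that follows failures."""
    saw_failure = False
    for i, line in enumerate(log):
        if "LOGIN fail" in line:
            saw_failure = True
        elif "LOGIN ok" in line and saw_failure:
            return log[i:]
    return []
```

Real audits are far richer (correlating sources, times, and privileges across machines), but the core activity is the same: reconstructing a timeline from records the intruder did not, or could not, erase.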
Also, the forensic investigator must proceed differently in an after-
the-fact investigation than in a real-time investigation. Law enforcement
authorities are often oriented toward after-the-fact forensics, which help to identify perpetrators; less often is prevention or mitigation of damage the goal of law enforcement authorities.
Containment
Containment refers to the process of limiting the effects of a hostile cyber operation. One approach is to confine untrusted activity to a computing environment designed to be disposable—corruption or compromise in this environment does not matter much to the user, and the intruder is unlikely to gain much in the way of additional resources or privileges.
Provision must be made for safe interaction between the buffer and the "real" environment, and in an imperfectly designed disposable environment, unsafe actions can escape into the real environment.
Recovery
In general, recovery-oriented approaches accomplish repair by restoring a system to its state at an earlier point in time. If that point in time is too recent, then the restoration will include the damage to the system caused by the attack. If that point in time is too far back, an unacceptable amount of useful work may be lost.
Resilience
A resilient system is one whose performance degrades gradually rather than catastrophically when its other defensive mechanisms are breached. A resilient system continues to perform some of its intended functions, although perhaps more slowly or for fewer people or with fewer applications.
Redundancy is one way to provide a measure of resilience. For example, one way to provide redundancy for certain systems is simply to replicate the system and run the replicas in parallel.
is, systems and networks that it has the legal right to access, monitor, and
modify. These also may reduce important functionality in the systems
(or is occurring).
The DOD does not describe active cyber defense in any detail, but
the formulation above for “active cyber defense” could, if read broadly,
An intruder who obtains false information that looks like the real thing may be misled into taking action harmful to his own interests, and at the very least has been forced to waste time, effort, and resources in obtaining useless information.
The term “honeypot” in computer security jargon refers to a machine,
a virtual machine, or other network resource that is intended to act as a
decoy or diversion for would-be intruders. Honeypots intentionally contain resources that appear attractive to an intruder, and they record the techniques and methods used by the intruder. This process allows
administrators to be better prepared for hostile operations against their
real production systems. Honeypots are very useful for gathering infor-
mation about new types of operation, new techniques, and information
on how things like worms or malicious code propagate through systems,
and they are used as much by security researchers as by network security
administrators.
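A minimal honeypot in the spirit described above can be sketched as a decoy listener that records who connects and what they send (addresses, port choice, and usage are illustrative):

```python
# Sketch of a honeypot: a decoy TCP listener with no real function, whose
# only job is to log the source and content of whatever probes it receives.
import socket
import threading

def run_honeypot(srv: socket.socket, log: list) -> None:
    """Accept one connection on a pre-bound listening socket and log it."""
    conn, addr = srv.accept()        # wait for a would-be intruder
    data = conn.recv(1024)           # capture what the intruder sends
    log.append((addr[0], data))      # record source address and probe content
    conn.close()

# Usage sketch:
#   srv = socket.socket(); srv.bind(("127.0.0.1", 0)); srv.listen(1)
#   threading.Thread(target=run_honeypot, args=(srv, log)).start()
```

Production honeypots emulate realistic services and isolate themselves carefully so an intruder gains nothing by compromising them; the essential design choice, as the text notes, is that no legitimate traffic should ever reach the decoy, so everything it logs is suspect by definition.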
When the effects of a honeypot are limited in scope to the victim’s sys-
tems and networks, the legal and policy issues are relatively limited. But if
they have effects on the intruder’s systems, both the legal and the policy issues become more significant, as when a victim acts to impair the intruder’s systems in the future. All of these actions raise legal and policy issues
regarding their propriety.
Disruption
Disruption is intended to reduce the damage being caused by an
adversarial cyber operation in progress, usually by affecting the operation
of the computer systems being used to conduct the operation.
One example is disrupting or disabling the computers that control a botnet. Of course, this approach presupposes that the computers controlling the botnet can be identified. In one case, U.S. authorities seized servers used to command and control those botnets.9 In addition, they
provided information about the botnets to computer emergency response
teams (CERTs) located abroad, requesting that they target related com-
mand-and-control infrastructure. At the same time, the FBI provided
related information to its overseas law enforcement counterparts.
Preemption
Preemption—sometimes also known as anticipatory self-defense—is action taken to neutralize a hostile operation before it can be launched, and some observers have advocated it as a tool for protecting U.S. national security.10
Preemption as a defensive strategy is a controversial subject, and the policy and legal issues it raises are substantial.11
10 Mike McConnell, “How to Win the Cyber War We’re Losing,” Washington Post,
February 28, 2010, available at http://www.washingtonpost.com/wp-dyn/content/article/
2010/02/25/AR2010022502493.html.
11 Herbert Lin, “A Virtual Necessity: Some Modest Steps Toward Greater Cybersecurity,” Bulletin of the Atomic Scientists, September 1, 2012, available at http://www.thebulletin.org/2012/september/virtual-necessity-some-modest-steps-toward-greater-cybersecurity.
Successful preemption requires that the adversary take nearly all of the measures and make all of the preparations needed to carry out that action. The potential victim considering
preemption must thus be able to target the adversary’s cyber assets that
would be used to launch a hostile operation. But the assets needed to launch such an operation may be hard to identify before it begins.
The task of securing the routing protocols of the Internet makes a good case
study of the nontechnical complexities that can emerge in what might have been
thought of as a purely technical problem.
As noted in Chapter 2, the Internet is a network of networks. Each network
acts as an autonomous system under a common administration and with common
routing policies. BGP, the Border Gateway Protocol, is the Internet protocol that networks use to describe themselves to each other, and in particular to every network operated by an Internet service provider (ISP).
In general, the characterization is provided by the ISP responsible for the
network, and in part the characterization specifies how that ISP would route traffic
to a given destination. A problem arises if and when a malicious ISP in some part
of the Internet falsely asserts that it is the right path to a given destination (i.e., it
asserts that it would forward traffic to a destination but in fact would not). Traffic sent
to that destination can be discarded, causing that destination to appear to be off
the net. Further, the malicious ISP might be able to mimic the expected behavior
of the correct destination, fooling unsuspecting users into thinking that their traffic
has been delivered properly and thus causing further damage.
The technical proposal to mitigate this problem was to have the owner of each
region of Internet addresses digitally sign an assertion to the effect that it is the
rightful owner (which would be done using cryptographic mechanisms), and then
delegate this assertion to the ISP that actually provides access to the addresses,
which in turn would validate it by a further signature, and so on as the assertion
crossed the Internet. A suspicious ISP trying to decide if a routing assertion is valid
could check this series of signed assertions to validate it.
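The chain-of-signatures idea can be sketched in miniature. The sketch below is a toy model, not the actual protocol machinery (real proposals such as the RPKI use public-key certificates): an HMAC with a per-authority secret stands in for a digital signature, and all names, keys, and prefixes are invented. The validation logic is the point: each assertion must verify under its parent's key, and each delegated address block must fall within the parent's block.

```python
import hashlib
import hmac
import ipaddress

def sign(key: bytes, msg: str) -> str:
    # Toy "signature": HMAC-SHA256. Real schemes (e.g., the RPKI) use
    # public-key signatures, so verifiers hold no secrets.
    return hmac.new(key, msg.encode(), hashlib.sha256).hexdigest()

def delegate(parent_key: bytes, child: str, child_key: bytes, prefix: str) -> dict:
    """A parent signs the assertion: 'prefix belongs to child, whose key is child_key.'"""
    msg = f"{child}|{child_key.hex()}|{prefix}"
    return {"child": child, "child_key": child_key, "prefix": prefix,
            "sig": sign(parent_key, msg)}

def validate_chain(root_key: bytes, root_prefix: str, chain: list) -> bool:
    """Walk a delegation chain from a trusted root: every signature must
    verify under the parent's key, and every delegated prefix must lie
    inside the parent's address block."""
    key, block = root_key, ipaddress.ip_network(root_prefix)
    for link in chain:
        msg = f"{link['child']}|{link['child_key'].hex()}|{link['prefix']}"
        if sign(key, msg) != link["sig"]:
            return False                              # forged or altered assertion
        child_block = ipaddress.ip_network(link["prefix"])
        if not child_block.subnet_of(block):
            return False                              # parent lacked authority here
        key, block = link["child_key"], child_block   # descend one level
    return True

# The root of trust owns 10.0.0.0/8 and delegates down to an ISP and a customer.
root_key, isp_key, cust_key = b"root-secret", b"isp-secret", b"cust-secret"
chain = [
    delegate(root_key, "isp", isp_key, "10.1.0.0/16"),
    delegate(isp_key, "customer", cust_key, "10.1.2.0/24"),
]
print(validate_chain(root_key, "10.0.0.0/8", chain))  # True

# An ISP asserting address space outside its delegation fails validation.
bad = [chain[0], delegate(isp_key, "customer", cust_key, "10.9.0.0/24")]
print(validate_chain(root_key, "10.0.0.0/8", bad))  # False
```

Note that nothing in the validation logic answers the question the text poses next: why the verifier should trust the root key in the first place. That is the nontechnical problem that stalled the original proposal.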
This scheme has a bit of overhead, which is one objection, but it also has an-
other problem—how can a suspicious ISP know that the signed assertion is valid?
It has been signed using some cryptographic key, but the suspicious ISP must
know who owns that key. To this end, it is necessary to have a global key distribu-
tion and validation scheme, which is called a public-key infrastructure, or PKI. The
original proposal was that there would be a “root of trust,” an actor that everyone
trusted, who would sign a set of assertions about the identities of lower-level enti-
ties, and so on until there was a chain of correctness-confirming assertions that
linked the assertions of each owner of an address block back to this root of trust.
This idea proved unacceptable for the reason, perhaps obvious to nontechni-
cal people, that there is no actor that everyone—every nation, every corporation,
and so on—is willing to trust. If there were such an actor, and if it were to suddenly
refuse to validate the identity of some lower-level actor, that lower-level actor would
be essentially removed from the Internet. The alternative approach was to have
many roots of trust—perhaps each country would be the root of trust for actors
within its borders. But this approach, too, is hard to make work in practice—for
example, what if a malicious country signs some assertion that an ISP within its
border is the best means to reach some range of addresses? How can someone
know that this particular root of trust did not in fact have the authority to make as-
sertions about this part of the address space? Somehow one must cross-link the
various roots of trust, and the resulting complexity may be too hard to manage.
Schemes that have been proposed to secure the global routing mechanisms
of the Internet differ with respect to the overhead, the range of threats to which
they are resistant, and so on. But the major problem that all these schemes come
up against is the nontechnical problem of building a scheme that can successfully
stabilize a global system built out of regions that simply do not trust each other.
And of course routing is only part of making a secure and resilient Internet. An
ISP that is malicious can make correct routing assertions and then just drop or
otherwise disrupt the packets as they are forwarded. The resolution of these sorts
of dilemmas seems to depend on an understanding of how to manage trust, not
on technical mechanisms for signing identity assertions.
4.2.1 Economics12
Many problems of cybersecurity can be understood better from an economic perspective, using concepts such as externalities, misaligned incentives, regulatory frameworks, and the tragedy of the commons. Taken together, economic considerations help explain why, under today’s conditions, cybersecurity is and will be a hard problem to address.
Many actors make decisions that affect cybersecurity: technology vendors, end users, companies, law enforcement, the intelligence community, and governments (both as technology
users and as guardians of the larger social good). Each of these actors gets
plenty of blame for being the “problem”: if technology vendors would just
properly engineer their products, if end users would just use the technol-
ogy available to them and learn and practice safe behavior, if companies
would just invest more in cybersecurity or take it more seriously, if law
enforcement would just pursue the bad guys more aggressively, if policy
makers would just do a better job of regulation or legislation, and so on.
There is some truth to such assertions, and yet it is important to
understand the incentives for these actors to behave as they do. For
example, for technology vendors, security adds complexity, time, and cost in design and testing while being hard to value or even assess by customers.
Companies and individuals do sometimes (perhaps even often) take cybersecurity into account.
But these parties have strong incentives to take only those cybersecurity
measures that are valuable for addressing their own cybersecurity needs,
and not those of other parties. For example, an intruder targeting victim V may first compromise intermediary M’s computer facilities in order to attack V. This convoluted routing is done so that V will have a harder time tracing the intrusion back to its true origin.
12 For an overview of the economic issues underlying cybersecurity, see Tyler Moore,
“Introducing the Economics of Cybersecurity: Principles and Policy Options,” in National
Research Council, Proceedings of a Workshop on Deterring Cyberattacks: Informing Strategies
and Developing Options for U.S. Policy, pp. 3-24, The National Academies Press, Washington
D.C., 2010. An older but still very useful paper is Ross Anderson, “Why Information Security
Is Hard—An Economic Perspective,” Proceedings of the 17th Annual Computer Security
Applications Conference, IEEE Computer Society, New Orleans, La., 2001, pp. 358-365.
4.2.2 Psychology
A wide variety of psychological factors and issues are relevant to
cybersecurity.
Social Engineering
Social engineering refers to manipulating people into divulging sensitive information or taking actions that compromise security.13
13 See “Social Engineering: The Basics,” CSO Online, available at http://www.csoonline.com/article/514063/social-engineering-the-basics.
In analyzing the decision-making process, Stein points to uncertainty about realities on the ground as a complicating factor.14 Regarding information processing, Stein points out that because the information-processing capability of people is limited, they are forced in confusing situations to use a variety of cognitive shortcuts.
14 Janice Gross Stein, “Threat Perception in International Relations,” in The Oxford Handbook of Political Psychology, 2nd ed., Oxford University Press, 2013.
households—but the basic models for security and privacy are essentially
unchanged.
Security features can be clumsy and awkward to use and can present real obstacles to getting work done. As a result, security measures are all too often disabled or bypassed by the users they are intended to protect. Because the intent of security is to make a system harder to use in unauthorized ways, some tension with usability is inherent. For example, a user may establish an electronic link between two systems because it is much easier than rekeying the data by hand.
But establishing an electronic link between the systems may add an access
path that is useful to an intruder. Taking into account the needs of usable
security might call for establishing the link but protecting it or tearing
down the link after the data has been transferred.
In other cases, security techniques do not transfer well from one technology or context to another.
4.2.3 Law
U.S. domestic law, international law, and foreign domestic law affect
cybersecurity in a number of ways.
Domestic Law
A number of federal statutes address various aspects of cybersecurity either directly or indirectly.17 (The acts discussed below are listed with the date of original
passage, and “as amended” should be understood with each act.)
Some statutes criminalize certain cyber-related actions. These statutes include the Computer Fraud and Abuse Act of
1986 (prohibits various intrusions on federal computer systems or on
computer systems used by banks or in interstate and foreign commerce);
the Electronic Communications Privacy Act of 1986 (ECPA; prohibits the unauthorized interception of and access to electronic communications);
17 Eric A. Fischer, Federal Laws Relating to Cybersecurity: Overview and Discussion of Proposed Revisions, Congressional Research Service, R42114, available at www.fas.org/sgp/crs/natsec/R42114.pdf.
Some have argued, for example, that the War Powers Act is poorly suited to U.S. military forces that might engage in cyber operations.
International Law
Norms of appropriate behavior must often be inferred from historical precedent and practice, and there are no such precedents of long standing for cyberspace. Moreover, much international law is not found in the form of treaties but rather in customary practice and international case law. Here too, guidance for what counts as proper behavior in cyberspace is lacking. Universal adherence to norms of behavior in cyberspace could help
to provide nations with information about the intentions and capabilities of other nations. If another nation adheres to accepted norms for activities in cyberspace, the United States and that other nation are more likely to be able to work together to combat hostile cyber operations that affect them both.
to be clear, and many factors relevant to a decision will not be known. For example, because cyber operations can be conducted remotely, anonymously, and clandestinely, knowledge about the scope and character of a cyberattack will be hard to obtain quickly. Attributing the incident to a specific actor may require an extended
period of time. Other nontechnical factors may also play into the assess-
ment of a cyber incident, such as the state of political relations with other
nations that are capable of launching the cyber operations involved in
the incident.
Once the possibility of a cyberattack is made known to national
authorities, information must be gathered to determine perpetrator and
purpose, and must be gathered using the available legal authorities. Some
entity within the federal government integrates the relevant information,
and then it or another higher entity (e.g., the National Security Council) decides what action should be taken; the process by which such decisions are taken has evolved over time. Today, the National Cybersecurity and Communications Integration Center is the entity within the U.S. government that fuses information on the above factors and integrates the intelligence, national security, law enforcement, and private-sector perspectives involved.20
4.2.5 Deterrence
Deterrence relies on the idea that inducing a would-be intruder to
refrain from acting in a hostile manner is as good as successfully defend-
ing against or recovering from a hostile cyber operation. Deterrence
through the threat of retaliation is based on imposing negative conse-
quences on adversaries for attempting a hostile operation.
Imposing a penalty on an intruder serves two functions. It serves
20See U.S. Department of Homeland Security, “About the National Cybersecurity and
Communications Integration Center,” available at http://www.dhs.gov/about-national-
cybersecurity-communications-integration-center.
the goal of justice—an intruder should not be able to cause damage with
impunity, and the penalty is a form of punishment for the intruder’s
misdeeds. In addition, it sets the precedent that misdeeds can and will
result in a penalty for the intruder, and it seeks to instill in future would-be intruders the fear that they will suffer for their own misdeeds, thereby deterring further misdeeds.
What the nature of the penalty should be and who should impose the
penalty are key questions in this regard. (Note that a penalty need not be imposed in cyberspace.) For hostile operations that implicate national security, the penalty can take the form of diplomacy such as
demarches and breaks in diplomatic relations, economic actions such as
trade sanctions, international law enforcement such as actions taken in
international courts, nonkinetic military operations such as deploying
forces as visible signs of commitment and resolve, military operations
such as the use of cruise missiles against valuable adversary assets, or
cyber operations launched in response.
People have strong intuitions that some systems are more secure than others, but
assessing a system’s cybersecurity posture turns out to be a remarkably
thorny problem. From a technical standpoint, assessing the nature and extent of a system’s security is difficult for several reasons:
• It is hard to specify precisely what it means for the system to operate securely. Indeed,
many vulnerabilities in systems can be traced to misunderstandings or a
lack of clarity about what a system should do under a particular set of
circumstances (such as the use of penetration techniques or attack tools
that the defender has never seen before).
• A system may contain functionality that should not be present, and exercising that functionality may entail doing something harmful. Discovering that a system contains such unwanted functionality is difficult.
• Many factors other than technology affect the security of a system, including the behavior of the people using the system, the access control policy in place, and the boundaries of the system (e.g., are users allowed to connect their own devices to it?).
What does the discussion above imply for the development of cyber-
security metrics—measurable quantities whose values provide information about the cybersecurity posture of a system? A single comprehensive metric of cybersecurity is unlikely to be achieved for the foreseeable future. But other metrics may still be use-
ful under some circumstances.
It is important to distinguish between input metrics (metrics for what
system users or designers do to the system), output metrics (metrics for
what the system produces), and outcome metrics (metrics for what users
or designers are trying to achieve—the “why” for the output metrics).21
• Input metrics measure properties of a system and its environment that are believed to be associated with desirable cybersecurity outcomes; such associations often reflect beliefs about good security practice, are not validated in practice, and/or are established intuitively.
• Output metrics measure aspects of a system’s behavior or parameters that are believed to be associated with desirable cybersecurity outcomes.
framework/part3.pdf.
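The distinction among input, output, and outcome metrics can be illustrated with toy numbers (all figures below are invented for illustration):

```python
# Hypothetical monthly figures for one organization (all numbers invented).
hosts_total = 200
hosts_patched = 170           # defender effort applied to the system
intrusion_attempts = 50
intrusions_blocked = 45       # what the defenses produced
loss_last_month = 12_500.0    # realized damage: what we actually care about
loss_this_month = 9_375.0

# Input metric: what users/designers do to the system (patch coverage).
patch_coverage = hosts_patched / hosts_total              # 0.85

# Output metric: what the system produces (fraction of attempts stopped).
block_rate = intrusions_blocked / intrusion_attempts      # 0.9

# Outcome metric: the "why" behind the others (reduction in realized loss).
loss_reduction = 1 - loss_this_month / loss_last_month    # 0.25

print(patch_coverage, block_rate, loss_reduction)  # 0.85 0.9 0.25
```

The input and output metrics are easy to compute but only presumed to matter; the outcome metric is what matters but, as the text notes, is the hardest to measure honestly.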
Research is needed on solutions to a range of cybersecurity problems. A good solution to a cybersecurity problem is one that is effective, is affordable, and is easy to use.
Cybersecurity research should engage both academic and industry actors, and it can involve collaboration early and often with technology-transition stakeholders, even
in the basic science stages.
• Respect the need for breadth in the research agenda. Cybersecurity
5.1 ECONOMICS
Economics and cybersecurity are intimately intertwined in the public policy debate in two ways—the scale of economic losses due to adversarial cyber operations, and the use of economic incentives to improve the nation’s cybersecurity posture.
• Information. One type of information is more and better information about threats and vulnerabilities, which could enable individual organizations to improve their cybersecurity postures.
• Insurance. The insurance industry may have a role in incentivizing better cybersecurity practices, although the market for cybersecurity insurance has been hampered by a scarcity of actuarial data, the intangible nature of losses and assets, and unclear legal grounds.
• Standards. This approach is based on the premise that adherence to sound cybersecurity standards can improve both an organization’s cybersecurity posture and its competitive position in the marketplace. Relevant standards-setting bodies include the National Institute of Standards and Technology for the U.S. government.
— Public recognition of adherence to high cybersecurity standards could reward investment in cybersecurity.
— Regulation is complicated by the fact that risks vary greatly from system to system, and there is wide variation in the cost-effectiveness of different measures.
Believing that standards can be important in cybersecurity, the present administration is promulgating its Cybersecurity Framework. Under development as this report is being written, the framework is a set of core practices to develop capabilities to manage cybersecurity.3 To encourage critical infrastructure companies to adopt the framework, the administration is considering a variety of incentives.4 Estimates of the economic losses due to hostile cyber operations vary widely, even among the best-known estimates.
3 National Institute of Standards and Technology, “Cybersecurity Framework,” available at http://www.nist.gov/cyberframework/.
4 Michael Daniel, “Incentives to Support the Adoption of the Cybersecurity Framework,” White House Blog, 2013.
posts/2012/07/09/nsa_chief_cybercrime_constitutes_the_greatest_ transfer_of_wealth_in_
Estimates of economic loss are controversial and are discussed in Section 3.6 on threat assessment.
Moreover, service disruptions often delay service but do not deny it, and a customer
who visits a Web site that is inaccessible today may well visit it tomor-
row when it is accessible. Should the opportunity cost of such a disruption count as an economic loss? In addition, concerns about reputation may deter a company that suffers a breach from reporting it. The surveys taken to determine economic loss are often
not representative, and questions about loss can be structured in a way
that does not allow erroneously large estimates to be corrected by errors
on the other side of the ledger.7
6 Center for Strategic and International Studies, The Economic Impact of Cybercrime and Cyber Espionage, 2013, rp-economic-impact-cybercrime.pdf.
5.2 INNOVATION
A stated goal of U.S. public policy is to promote innovation in prod-
ucts and services in the private sector. In information technology (as in other sectors), a vendor that is first to market can profit handsomely from its offering, at least until a competitor comes along. During this period, the
vendor has the chance to establish relationships with customers and to
build its brand. This environment is not conducive to focusing on security from the outset. Software prototypes are often built quickly, and many are thrown away. In this environment, it makes very little sense to invest up front in secure development practices unless such adherence is relatively inexpensive. Moreover, building in security from the start generally requires knowing very well and in some considerable detail just what the ultimate artifact
is supposed to do. But some large software systems emerge from incre-
mental additions to small software systems in ways that have not been
anticipated by the designers of the original system, and sometimes users
change their minds about the features they want, or even worse, want
contradictory features.
Functionality that users demand is sometimes in tension with secu-
rity as well. Users demand attributes such as ease of use, interoperability,
and backward compatibility. Often, information technology purchasers do not place a high priority on security. Vendors thus face a decision in shipping a product—whether to ship with the security features turned on or off. Security features that are turned on often get in the way of using the product, an outcome that may lead to
frustration and customer dissatisfaction. Inability to use the product may
also result in a phone call to the vendor for customer service, which is costly for the vendor to handle. Shipping with security features turned off tends to reduce one source of customer
complaints and makes it easier for the customer to use the product. The
customer may then bear the costs of any security breaches that may occur as a result, at which point tying those consequences to the default settings is difficult. Under such circumstances, many vendors will choose to ship with security turned off—and many customers will simply accept forever the vendor’s initial default settings.
Restricting users’ access privileges often has serious usability impli-
cations and makes it harder for users to get legitimate work done, as for example when installing a needed application requires administrator privileges.
5.3.1 Privacy
8 What an individual regards as “private” may not be the same as what the law regards as private; the law may say otherwise. No technical security measure will protect the privacy interests of an individual against a party legally entitled to inspect the information in question.
Some security measures entail the inspection of all traffic passing a monitoring point, including traffic that is not implicated in any way. If the entities with whom the information is shared are law
enforcement or national security authorities, privacy concerns are likely
to be even stronger.
especially when the causes involved are unpopular. In such cases, one way of protecting individuals who support those causes is to preserve their anonymity.9
9Steven M. Bellovin, “Identity and Security,” IEEE Security and Privacy 8(2, March-
April):88, 2010.
e-commerce.11
International debates over what should constitute the proper scope
of Internet governance are quite contentious, with the United States generally supporting today’s multistakeholder arrangements and some other nations arguing that the free flow of information poses threats to their national security and political stability (e.g., news
10 Lennard G. Kruger, “Internet Governance and the Domain Name System: Issues for Congress,” Congressional Research Service.
The United States and other Western nations have opposed such measures in multiple forums, and in
particular have opposed attempts to broaden the Internet governance agenda to encompass the regulation of content. Disputes over Internet governance are thus often disputes over content regulation in the name of
Internet security.
Internet governance also encompasses the protocols and standards for passing information and what these protocols and standards should require. There have been proposals—both in other nations and in the United States—that would require packet-level authentication in the basic Internet protocols in the name of promoting greater security. Requiring authentication in this manner would
implicate all of the civil liberties issues discussed above as well as the
performance and feasibility issues discussed in Chapter 2.
12 White House, International Strategy for Cyberspace: Prosperity, Security, and Openness in a Networked World, May 2011, available at http://www.whitehouse.gov/sites/default/files/rss_viewer/international_strategy_for_cyberspace.pdf.
On the face of it, these two policy objectives are inconsistent with each other—one promotes cybersecurity internationally and the other may undermine it.13
13 IEEE Security and Privacy, doi:10.1109/MSP.2013.161.
Attributing actions in cyberspace to an appropriately responsible actor is problematic under many circumstances, which complicates the enforcement of norms of behavior in cyberspace.
For illustrative purposes, two domains in which norms may be relevant to cybersecurity relate to conducting cyber operations for different purposes and to limiting cyber weapons.
15 White House, International Strategy for Cyberspace—Prosperity, Security, and Openness in a Networked World, May 2011, available at http://www.whitehouse.gov/sites/default/files/rss_viewer/international_strategy_for_cyberspace.pdf.
Most other nations do not draw such a sharp line between these two
kinds of information collection. But even were all nations to agree in
principle that such a line should be drawn, how might these two types
of information (information related to national security and information of commercial value) be distinguished and verified in practice?
Today, the United States does not target intelligence assets for the economic benefit of individual U.S. companies. This restraint rests in part on the desire of the United States to uphold a robust legal regime for the protection of intellectual property; using intelligence agencies to benefit particular firms would also not sit well with foreign companies that the United States was trying to persuade to relocate to the United States. And that use of its intelligence agencies might well undercut the basis on which the United States could object to other
nations’ conduct of economic espionage. The observation that a given intrusion looks like espionage rather than attack (or preparations for attack) should not preclude the possibility of other purposes; some analysts suggest that the nature of a targeted entity can provide useful clues to the intruder’s intent.
16 Strategic Studies Quarterly 6(3):46-70, 2012.
One complication is that such weapons have legitimate uses (e.g., both military and civilian
entities use such weapons to test their own defenses). Distinguishing
offensive capabilities developed for cyberattack from those used to shore up one’s own defenses may be a nearly impossible task.
Nations might also agree to refrain from attacking certain targets, such as financial systems or power grids, much as nations today have agreed to avoid targeting hospitals in a kinetic attack. Agreements to restrict use are by their nature difficult to verify, but that difficulty has not prevented the world’s nations (including the United States) from entering into agreements with similarly “unverifiable” restrictions.
One issue is that nonstate actors may have access to some of the same
cyber capabilities as do national signatories, and nonstate actors are
unlikely to adhere to any agreement that restricts their use of such capabilities.
17 Much of the discussion in this section is based on Herbert Lin, “A Virtual Necessity:
Some Modest Steps Toward Greater Cybersecurity,” Bulletin of the Atomic Scientists, Septem-
ber 1, 2012, available at http://www.thebulletin.org/2012/september/virtual-necessity-
some-modest-steps-toward-greater-cybersecurity.
Nevertheless, discussing in a multilateral way various nations’ views about the nature of cyber weapons, cyberspace, offensive operations, and so on could promote
greater mutual understanding among the parties involved.
Whether such considerations definitively refute, even in principle, the possibility of meaningful arms control
agreements in cyberspace is open to question today. What is clear is that
progress in cyber arms control, if it is feasible at all, is likely to be slow.
The United States and China play major roles in the IT industry, as do Ireland, Israel, Korea, and other nations. The components of a typical computer come from many places; for example:
Memory: Puerto Rico, Singapore, South Korea, Taiwan, United States
Motherboard: Taiwan
• Using trusted suppliers. Such parties must be able to show that they
have taken adequate measures to ensure the dependability of the com-
ponents they supply or ship. Usually, such measures would be regarded
as “best practices” that should be taken by suppliers whether they are
foreign or domestic.
• Diversifying suppliers. The use of multiple suppliers increases the likelihood that a problem introduced by any one supplier will have only a limited effect.
• Testing components. As a general rule, testing can indicate only the presence of a problem—not its absence.
Thus, testing generally cannot demonstrate the presence of unwanted
(and hostile) functionality in a component, although testing may be able
to provide evidence that the component does in fact perform as it is sup-
posed to perform.
It is fair to say that the risk associated with corruption in the supply chain can
be managed and mitigated to a certain degree—but not avoided entirely.
Offensive operations in cyberspace can be conducted for cyber defensive purposes and also for other purposes.19 Furthermore, according to a variety of public sources, policy regarding offensive operations in cyberspace includes the following points:
19 National Research Council, Technology, Policy, Law, and Ethics Regarding U.S. Acquisition
and Use of Cyberattack Capabilities, The National Academies Press, Washington, D.C., 2009.
20 White House, International Strategy for Cyberspace—Prosperity, Security, and Openness in a Networked World, May 2011.
21 Harold Koh, “International Law in Cyberspace,” remarks at the USCYBERCOM Inter-Agency Legal Conference, Ft. Meade, Md., September 18, 2012, available at http://opiniojuris.org/2012/09/19/harold-koh-on-international-law-in-cyberspace/.
22 Robert Gellman, “Secret Cyber Directive Calls for Ability to Attack Without Warning,” Washington Post, 2013.
23 Gellman, “Secret Cyber Directive Calls for Ability to Attack Without Warning,” 2013.
24 Glenn Greenwald and Ewen MacAskill, “Obama Orders U.S. to Draw Up Overseas Target List for Cyber-Attacks,” The Guardian, June 7, 2013.
the fact that attribution is much more uncertain, and the growing capabilities of nonstate actors. A victim of hostile cyber operations can tighten its defensive posture, and it can seek the assistance of law enforcement
authorities to investigate and to take action to mitigate the threat.
Although both of these responses (if properly implemented) are
helpful, their effectiveness is limited. Tightening security often reduces
important functionality in the systems being locked down—they become harder to use. Maintaining a heightened defensive posture is also costly. Law enforcement authorities can help, but they
cannot do so quickly and the resources they can bring to bear are usually
overwhelmed by the demands for their assistance.
A number of commentators and reports have suggested that a more
aggressive defensive posture—that is, an active defense—is appropriate
under some circumstances.25 Such an approach, especially if carried out
25 See, for example, “Active Defense at Center of Debate on Cyberattacks,” Washington Post, February 27, 2012, available at http://www.washingtonpost.com/blogs/checkpoint-washington/post/active-defense-at-center-of-debate-on-; and “To Thwart Hackers, Firms Salting Their Servers with Fake Data,” Washington Post, available at http://www.washingtonpost.com/world/national-security/to-thwart-
6.1 FINDINGS
Finding 1. Cybersecurity is a never-ending battle. A permanently
decisive solution to the problem will not be found in the foresee-
able future.
For the most part, cybersecurity problems result from the inherent complexity of information technology systems and from human fallibility in making judgments about what actions and information are safe or unsafe from a cybersecurity perspective.
None of these factors is likely to change in the foreseeable future, and thus
there are no silver bullets—or even combinations of silver bullets—that
can “solve the problem” permanently.
In addition, threats to cybersecurity evolve. As new defenses emerge
to stop older threats, intruders adapt by developing new tools and tech-
niques to compromise security. As information technology becomes more
ubiquitously integrated into society, the incentives to compromise the
security of deployed IT systems grow. As innovation produces new infor-
mation technology applications, new venues for criminals, terrorists, and
other hostile parties also emerge, along with new vulnerabilities that can be exploited. The growing number of people with access to cyberspace multiplies the number of possible victims
and also the number of potential malevolent actors.
Improving cybersecurity is thus an ongoing process rather than something that can be done once and
then forgotten. Adversaries—especially at the high-end part of the threat
spectrum—constantly adapt and evolve their intrusion techniques, and
the defender must adapt and evolve as well.
These comments should not be taken to indicate a standstill in the struggle. Societal problems such as crime, drug abuse, and so on are rarely “solved” or taken off the policy agenda entirely; they are rarely resolved so decisively that they will never reappear—and the same is true for
cybersecurity.
Nonetheless, defensive measures have real value in reducing the loss and damage that may be associated with cybersecurity breaches. Waiting for perfect defenses before any are deployed is surely a recipe for inaction that leaves one vulnerable to many lower-level threats.
The value of defensive measures is found in several points:
• Slowing down an adversary may help to prevent him from being able to access everything on the targeted system.
• A well-defended target is usually less attractive to malevolent actors than a poorly defended one.
One part of the nation’s cybersecurity gap—Part 1—is the gap between the strongest posture possible with known practices and current cybersecurity postures that are not the best. The second part—Part 2—is the gap between the strongest posture possible with known practices and the posture needed to counter threats unknown today. Even if the Part 1 gap were fully closed, the resulting cybersecurity posture would still leave the nation exposed to future threats; closing the Part 2 gap calls for research that increases the ability to respond quickly in the future when threats unknown today emerge.
Note that the Part 1 gap is primarily nontechnical in nature (requir-
ing, e.g., research relating to economic or psychological factors regarding
the use of known practices and techniques, enhanced educational efforts
to promote security-responsible user behavior, and incentives to build and deploy more secure products). For the most part, closing the Part 1 gap does not require new technical knowledge of cybersecurity, but closing it is nonetheless important to the United States as a nation.
too many decision makers still focus on the short-term costs of improving their own cybersecurity postures rather than on the longer-term benefits of doing so. Furthermore, little has been done to harness market forces to address matters
related to the cybersecurity posture of the nation as a whole.
Such a culture would include security tools and practices that make it easy and intuitive for developers and users to “do the right
thing”; the employment of business drivers and policy mechanisms to
facilitate security technology transfer and diffusion of R&D into com-
mercial products and services; and the promotion of risk-based decision
making (and metrics to support this effort).
Consider what such a culture might mean in practice:
altered somehow, and the business cases for the security of these organizations would change as a result.
1 Vanity Fair, April 2011, available at http://www.vanityfair.com/culture/features/2011/04/; The Atlantic, March 4, 2011, available at http://www.theatlantic.com/technology/archive/2011/03/; Christian Science Monitor, September 22, 2011, available at http://www.csmonitor.com/USA/2011/0922/From-the-man-who-discovered-
relations as well, given that the United States usually has many interests at stake. China, for example, is widely reported to conduct cyber espionage on a very large scale. But China is also the largest single holder of U.S. debt and one of the largest trading partners of the United States. The United States and China are arguably the most important nations regarding the
mitigation of global climate change. And this list goes on. What is the right balance among these interests? It is always possible to point to the benefits of application X as justifying why immediate attention and action to improve the cybersecurity posture of application X can be deferred or studied further. Reactive
approaches thus tend to dominate. We want better cybersecurity, but we also want a private sector that innovates rapidly, and the convenience
of not having to worry about cybersecurity, and the ability for applica-
tions to interoperate easily and quickly with one another, and the right to
no diminution of our civil liberties, and so on.
But the tradeoffs between security and these other national interests may be less stark than they appear. It may be that decisions made today in each case entail sharper and starker tradeoffs than are necessary and that a better cybersecurity posture for the nation might also provide better protection for intellectual
property, thereby enhancing the nation’s capability for innovation. More
usable security technologies or procedures could provide better security
and also increase the convenience of using information technology.
Nonetheless, irreconcilable tensions will sometimes be encountered.
At that point, policy makers will have to confront rather than side-
step those tensions, and honest acknowledgment and discussion of the
tradeoffs (e.g., a better cybersecurity posture may reduce the nation’s
innovative capability, may increase the inconvenience of using informa-
tion technology, may reduce the ability to collect intelligence) will go a
long way toward building public support for a given policy position.
U.S. Cyber Command, for example, is charged with directing the operations and defense of specified Department of Defense information networks and with preparing to, and when directed, conducting full-spectrum military cyberspace operations in order to enable actions in all domains, ensure U.S./Allied freedom of action in cyberspace, and deny the same to our adversaries.
The United States has publicly stated that it does not collect intel-
ligence information for the purpose of enhancing the competitiveness or
business prospects of U.S. companies. And it has articulated its view that
established principles of international law—including those of the law of armed conflict—apply in cyberspace.
But beyond these very general statements, the U.S. government has
placed little on the public record, and there is little authoritative information about U.S. offensive capabilities in cyberspace, rules of engagement, the division of responsibilities within the Department of Defense and the intelligence community, and a host of other topics related to offensive operations.
Some of these topics have been discussed at length in other contexts. But a full public discussion of these issues as they relate to cyberspace has not occurred, leaving U.S. government thinking highly opaque. Such opacity has many undesirable consequences, but one of the most important is that the
role offensive capabilities could play in defending important information
technology assets of the United States cannot be discussed fully.
What is sensitive about offensive U.S. capabilities in cyberspace is usually the specific U.S. interest in a given technology (rather than the nature of that technology itself); fragile and sensitive operational details (e.g., a particular vulnerability, a particular operational program); or U.S. knowledge of adversary capabilities. Even so, the open literature provides a generally reasonable basis for understanding what can be done and for policy discussions that focus primarily on what should be done.
6.2 CONCLUSION
COMMITTEE MEMBERS
DAVID CLARK, Chair, is a senior research scientist at the MIT Computer Science and Artificial Intelligence Laboratory, where he has worked since receiving his Ph.D. there in 1973. Since the mid-1970s, Clark has been
leading the development of the Internet; from 1981 to 1989 he acted as
chief protocol architect in this development, and he chaired the Internet Activities Board. His current research looks at the architectural underpinnings of the Internet and at the relationship of technology and architecture to economic, societal, and policy considerations. He is active in the National Science Foundation’s Future Internet Design program. Clark is past chair of the Computer Science and Telecommunications Board of the National Research Council and has contributed to a number of studies on the societal and policy impact of computer
communications. He is co-director of the MIT Communications Futures
program, a project for industry collaboration and coordination along the
communications value chain.
STAFF
HERBERT S. LIN is chief scientist at the Computer Science and Telecommunications Board, National Research Council of the National Academies,
where he has been the study director of major projects on public policy
and information technology. These projects include a number of studies
related to cybersecurity: Cryptography’s Role in Securing the Information
Society (1996); Realizing the Potential of C4I: Fundamental Challenges (1999);
Engaging Privacy and Information Technology in a Digital Age (2007); Toward
a Safer and More Secure Cyberspace (2007); Technology, Policy, Law, and Ethics Regarding U.S. Acquisition and Use of Cyberattack Capabilities (2009); and
Proceedings of a Workshop on Deterring Cyberattacks: Informing Strategies and
Developing Options (2010). Prior to his NRC service, he was a professional
staff member and staff scientist for the House Armed Services Committee
(1986-1990), where his portfolio included defense policy and arms control
issues. He received his doctorate in physics from MIT.
Bibliography
This bibliography lists the reports from the National Research Coun-
cil’s Computer Science and Telecommunications Board from which this
report takes much of its material. All were published by and are available
from the National Academies Press, Washington, D.C.
Chapter 1
• Computers at Risk: Safe Computing in the Information Age (1991)
• Toward a Safer and More Secure Cyberspace (2007)
Chapter 2
• Computing the Future: A Broader Agenda for Computer Science and
Engineering (1992)
• Trust in Cyberspace (1999)
• Being Fluent with Information Technology (1999)
• The Internet’s Coming of Age (2001)
• Signposts in Cyberspace: The Domain Name System and Internet Navigation (2005)
Chapter 3
• Toward a Safer and More Secure Cyberspace (2007)
• Technology, Policy, Law, and Ethics Regarding U.S. Acquisition and Use
of Cyberattack Capabilities (2009)
Chapter 4
• Cryptography’s Role in Securing the Information Society (1996)
• Who Goes There? Authentication Through the Lens of Privacy (2003)
• Toward a Safer and More Secure Cyberspace (2007)
• Technology, Policy, Law, and Ethics Regarding U.S. Acquisition and Use
of Cyberattack Capabilities (2009)
• Toward Better Usability, Security, and Privacy of Information Technology: Report of a Workshop (2010)
• Letter Report from the Committee on Deterring Cyberattacks: Informing
Strategies and Developing Options for U.S. Policy (2010)
Chapter 5
• Toward a Safer and More Secure Cyberspace (2007)
• Engaging Privacy and Information Technology in a Digital Age (2007)
• Assessing the Impacts of Changes in the Information Technology R&D
Ecosystem: Retaining Leadership in an Increasingly Global Environment
(2009)
• Technology, Policy, Law, and Ethics Regarding U.S. Acquisition and Use
of Cyberattack Capabilities (2009)
Chapter 6
• Toward a Safer and More Secure Cyberspace (2007)
• Technology, Policy, Law, and Ethics Regarding U.S. Acquisition and Use
of Cyberattack Capabilities (2009)
February 2018
Publicity surrounding the threat of cyber-attacks continues to grow, yet immature classification
methods for these events prevent technical staff, organizational leaders, and policy makers from
engaging in meaningful and nuanced conversations about the risk to their organizations or
critical infrastructure. This paper provides a taxonomy of cyber events that is used to analyze
2,431 publicized cyber events from 2014 to 2016 by industrial sector. Industrial sectors vary
in the scale of events they are subjected to, the distribution between exploitive and disruptive
event types, and the method by which data is stolen or organizational operations are disrupted.
The number, distribution, and mix of cyber event types highlight significant differences by
sector, demonstrating that strategies may vary based on deeper understandings of the threat
environment faced across industries.
As the private and public sectors grapple with the problem of cyber events, disagreement
remains regarding what can and should be done. Technical solutions, organizational resiliency,
employee education, and improvements in system controls are among many options to reduce
risk. Yet, they are rarely evaluated as part of a strategic approach for addressing diverse threats,
which vary by industry.
Confusion about threats and response options originates in part from imprecision in how we categorize and measure the range of disruptive cyber events. When we fail to recognize the distinctions between specific forms of attack, the effects they produce on the targeted networks, the financial strain they place on the targeted organizations, and their broader effects on society, the result is a misallocation of resources.
This paper provides a new taxonomy that expands on earlier work by the author and colleagues
to classify cyber incidents by the range of disruptive and exploitative effects produced. It applies
the taxonomy in a sector-based analysis of 2,431 publicized cyber events from 2014 to 2016. It
finds some striking differences across industries in the scale, method of attack, and distribution
of effect. Government and Professional Services face the largest number of attacks. Governments
experience a mix of disruptive and exploitive events, whereas retail and hotel operators primarily
face exploitive attacks. These findings highlight the need for deeper analysis by sector to assess
the risk for specific organizations and critical infrastructure. They also suggest the importance of
tailoring risk mitigation strategies to fit the different threat environments in various sectors.
Cyber Taxonomies
A confusing array of cyber threat classification systems has been proposed over the past two
decades. Some are based on different phases of the hacking process, while others focus on
specific targets. For example, de Bruijne et al. (2017) have created a classification of actors and
methods, whereas Gruschka (2010) develops a taxonomy of attacks against cloud systems. Other
classification approaches focus on specific techniques, such as Distributed Denial of Service or
DDoS attacks (Mirkovic 2004); specific targets, such as browsers (Gaur 2015); or particular IT
capabilities, such as industrial control systems (Zhu 2017) and smart grids (Hu 2014).
Few taxonomies in the information security literature seek to classify events by impact on the
target, the key question for risk assessment. Only two, Howard (1998) and Kjaerland (2005), directly propose categories for the effect on the victim. Others, including Hansman (2005), focus on other dimensions, such as the attack vector and target.
Howard’s widely cited taxonomy includes classification methods for attackers, objectives, tools,
access, and impact. He divides the impact of cyber activity, described as the “unauthorized
results,” into five categories: Corruption of Information, Disclosure of Information, Denial of Service, Increased Access, and Theft of Resources.
Kjaerland (2005) classifies cyber effects differently, assigning each to one of four categories: Disrupt, Distort, Destruct, and Disclosure. These categories are developed in concert with other dimensions of analysis to evaluate the linkage between sector, actor, method, and target.
Both of these effect-based taxonomies fail to meet basic standards of a well-defined taxonomy
(Ranganathan 1957), including:
Exhaustiveness - Taken together, the categories should account for all items to be classified.
Exclusiveness - No two categories should overlap or have the same scope and boundaries.
Ascertainability - Each category should be definitively and immediately understandable from its
name.
Consistency - The rules for making the selection should be consistently adhered to.
Affinity and Context - As you move from the top of the hierarchical classification to the bottom,
the specification of the classification should increase.
Currency - Names of the categories in the classification should reflect the language in the
domain for which it is created.
Differentiation - When differentiating a category, it should give rise to at least two subcategories.
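Two of these criteria, exhaustiveness and exclusiveness, can be checked mechanically once every event has been assigned its categories. A minimal sketch in Python; the category names and event identifiers are illustrative, not drawn from the paper's dataset:

```python
# Check that a classification is exhaustive (every event gets a category)
# and exclusive (no event gets more than one). Names here are hypothetical.
CATEGORIES = {"Data Attack", "Message Manipulation", "External Denial of Service"}

def check_classification(assignments):
    """assignments maps an event id to the set of categories assigned to it."""
    problems = []
    for event, cats in assignments.items():
        if not cats:
            problems.append(f"{event}: unclassified (violates exhaustiveness)")
        elif len(cats) > 1:
            problems.append(f"{event}: multiple categories (violates exclusiveness)")
        elif not cats <= CATEGORIES:
            problems.append(f"{event}: unknown category {cats - CATEGORIES}")
    return problems

issues = check_classification({
    "evt-1": {"Data Attack"},                          # well classified
    "evt-2": set(),                                    # exhaustiveness failure
    "evt-3": {"Data Attack", "Message Manipulation"},  # exclusiveness failure
})
```

By this test, both Howard's and Kjaerland's schemes would report violations: Howard's leaves some modern events unclassified, and Kjaerland's permits one event to land in several categories.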
Howard’s taxonomy fails the exhaustiveness requirement because some important and
increasingly common types of cyber events do not fit any of its categories. Examples of these
omissions include attacks on Supervisory Control and Data Acquisition (SCADA) systems, data
deletion resulting from the use of wiper viruses, or social media account hijacking and website
defacement.
The Howard taxonomy also fails the test of exclusivity by including two overlapping effects
categories: Increased Access and Theft of Resources. Most hackers seek greater access and
misuse system resources as a means to an end, not as the final result. Their ultimate goal is not
just access, but the illicit acquisition of information or the disruption of organizational services.
For example, if a hacker wanted to illicitly gain and disseminate information about a company, they would first obtain unauthorized use of a specific computer or network. Using Howard’s taxonomy, that single effort would fall into both the Increased Access and the Disclosure of Information categories.
Kjaerland’s classification system also fails important tests for a well-designed taxonomy. By
allowing the same event to be assigned to multiple categories, it violates the criteria of
exclusivity and consistency. For example, Kjaerland’s definition of Destruct notes that “Destruct
is seen as the most invasive and malicious and may include Distort or Disrupt.”
Kjaerland also fails the test of context by mixing impact classification (e.g., destruction of information) with specific tactics or tools. For example, the definition of Disrupt classifies use of a Trojan as a Disrupt event. A Trojan, however, is a technique for hiding a malicious program inside another program; it can cause many different types of effects depending on whether it is used to steal or destroy information.
A New Taxonomy
This paper extends previous work (Harry 2015) (Harry & Gallagher 2017) to offer a new
taxonomy for classifying the primary effects on a target of any given cyber event.
I define a cyber event as the result of any single unauthorized effort, or the culmination of many such technical actions, that engineers a desired primary effect on a target through the use of computer technology and networks. For example, if a hacker used a spearphishing email to gain
access and then laterally moved through the network to delete data on five machines, that would
count as a single event type whose primary effect resulted in the destruction of data. This
encapsulation of hacker tactics and tradecraft into specification of the primary effect of those
actions is what I define as a cyber event.
In the risk assessment framework developed at the Center for International and Security Studies
at Maryland (CISSM), primary effects are the direct impacts to the target organization’s data or
IT-enabled operations. Cyber events can also cause secondary effects to the organization, such as
the financial costs of replacing equipment damaged in an attack, a drop in the organization’s stock price due to bad publicity from the attack, or a loss of confidence in the organization’s
ability to safeguard confidential data. And, they can cause second order effects on individuals or
organizations who rely on the targeted organization for some type of goods or services. These
could include effects on the physical environment, the supply chain, or even distortions an attack
might have on an individual’s attitudes, preferences, or opinion deriving from the release of
salacious information. While these are important areas to consider, they are outside of the scope
of this paper.
Any given cyber event can have one of two types of primary objectives: the disruption to the
functions of the target organization, or the illicit acquisition of information. An attacker might
disrupt an organization’s ability to make products, deliver services, carry out internal functions,
or communicate with the outside world in a number of ways. Alternatively, hackers may seek to
steal credit card user accounts, intellectual property, or sensitive internal communications to get
financial or other benefits without disrupting the organization’s operations.
Disruptive Events
A malicious actor may utilize multiple tactics that have wildly different disruptive effects
depending on how an organization uses information technology to carry out its core functions.
For example, an actor could delete data from one or more corporate networks, deploy
ransomware, destroy physical equipment used to produce goods by manipulating Supervisory
Control and Data Acquisition (SCADA) systems, prevent customers from reaching an
organization’s website, or deny access to a social media account.
Disruptive effects can be classified into five sub-categories depending on the part of an
organization’s IT infrastructure that is most seriously impacted, regardless of what tactics or techniques were used to accomplish that result. They are: Message Manipulation, External
Denial of Service, Internal Denial of Service, Data Attack, and Physical Attack.
Message Manipulation. Any cyber event that interferes with a victim’s ability to accurately
present or communicate its “message” to its user or customer base is a Message Manipulation
attack. These include the hijacking of social media accounts, such as Facebook or Twitter, or
defacing a company website by replacing the legitimate site with pages supporting a political
cause. For example, in 2015, ISIS-affiliated hackers gained access to the YouTube and Twitter
accounts for US CENTCOM. The hackers changed the password, posted threatening messages to
U.S. Service members, and replaced graphics with ISIS imagery (Lamothe 2015). Similarly, in
2016, the website for the International Weightlifting Federation (IWF) was defaced after a
controversial decision to disqualify an Iranian competitor (Cimpanu 2016). Both events used
different tactics, but the primary effect on the targeted organization’s ability to interact with its
audience was the same.
External Denial of Service. When a cyber event mounted from outside a victim’s network degrades or denies access to the organization’s externally facing services, such as a public website, it is an External Denial of Service attack. The most common technique is the Distributed Denial of Service (DDoS) attack, in which traffic from many machines overwhelms the target.
Internal Denial of Service. When a cyber event executed from inside a victim’s network
degrades or denies access to other internal systems, it is an Internal Denial of Service attack. For
instance, an attacker who had gained remote access to a router inside an organization’s network
could reset a core router to factory settings so that devices inside the network could no longer
communicate with one another. The anti-DDoS vendor Staminus apparently experienced such an
internal denial of service attack in 2016. It issued a public statement that “a rare event cascaded
across multiple routers in a system-wide event, making our backbone unavailable.” (Reza 2016).
An attacker using malware installed on a file server to disrupt data sent and received between
itself and a user workstation would achieve a similar effect.
Data Attack. Any cyber event that manipulates, destroys, or encrypts data in a victim’s network
is categorized as a Data Attack. Common techniques include the use of wiper viruses and
ransomware. Using stolen administrative credentials to manipulate data and violate its integrity,
such as changing grades in a university registrar’s database, would also fit this category. For example, in 2017 the mass deployment of the NotPetya ransomware resulted in thousands of data attack cyber events against individuals as well as small, medium, and large businesses, with one case costing the shipping firm Maersk over $200 million (Matthews 2017).
Physical Attack. A cyber event that manipulates, degrades, or destroys physical systems is
classified as a Physical Attack. Current techniques used to achieve this type of effect include manipulating Programmable Logic Controllers (PLCs) to open or close electrical breakers, or using stolen user passwords to access a human-machine interface and change settings to overheat a blast furnace, causing damage to physical equipment. For example, in the December 2015 cyber-
attack on a Ukrainian utility, a malicious actor accessed and manipulated the control interface to
trip several breakers in power substations. This de-energized a portion of the electrical grid, and
tens of thousands of customers lost power for an extended period of time (Lee et al 2016).
Exploitive Events
Some cyber events are designed to steal information rather than to disrupt operations. Hackers
may be seeking customer data, intellectual property, classified national security information, or
sensitive details about the organization itself. While the tactics or techniques used by malicious
actors may change regularly, the location from which they get that information does not. I define five categories of exploitive events below: Exploitation of Sensors, Exploitation of End Hosts, Exploitation of Network Infrastructure, Exploitation of Application Servers, and Exploitation of Data in Transit.
Exploitation of Sensors. A cyber event that results in the loss of data from a peripheral device
like a credit card reader, automobile, smart lightbulb, or a network-connected thermostat is
categorized as an Exploitation of Sensor event. The attack on Eddie Bauer stores where hackers
gained access to hundreds of Point of Sale machines and systematically stole credit card numbers
from thousands of customers fits this category (Krebs 2016). Other examples include illicit
acquisition of technical, customer, personal, or organizational data from CCTV cameras, smart
TVs, or baby monitors.
Exploitation of End Hosts. Hackers often are interested in the data stored on users’ desktop
computers, laptops, or mobile devices. When data is stolen through illicit access to devices used
directly by employees of an organization or by private individuals, it is categorized as an
Exploitation of End Host cyber event. Tactics used in this type of attack include sending a
malicious link for a user to click or leveraging compromised user credentials to log in to an
account.
Exploitation of Data in Transit. Hackers who acquire data as it is being transmitted between
devices cause Exploitation of Data in Transit events. Examples of this type of event include the
acquisition of unencrypted data as it is sent from a PoS device to a database or moved from an
end-user device through an unsecured wireless hotspot at a local coffee shop.
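The effect categories described above form a simple two-level hierarchy, which can be represented directly in code. A sketch in Python; the disruptive names match the list above, and the exploitive list assumes the five CISSM category names (two of which, Exploitation of Network Infrastructure and Exploitation of Application Servers, are referenced elsewhere in this paper rather than defined here):

```python
from enum import Enum

class EventType(Enum):
    DISRUPTIVE = "Disruptive"
    EXPLOITIVE = "Exploitive"

# Two-level hierarchy: each effect sub-category maps to its top-level type.
TAXONOMY = {
    "Message Manipulation": EventType.DISRUPTIVE,
    "External Denial of Service": EventType.DISRUPTIVE,
    "Internal Denial of Service": EventType.DISRUPTIVE,
    "Data Attack": EventType.DISRUPTIVE,
    "Physical Attack": EventType.DISRUPTIVE,
    "Exploitation of Sensors": EventType.EXPLOITIVE,
    "Exploitation of End Hosts": EventType.EXPLOITIVE,
    "Exploitation of Network Infrastructure": EventType.EXPLOITIVE,
    "Exploitation of Application Servers": EventType.EXPLOITIVE,
    "Exploitation of Data in Transit": EventType.EXPLOITIVE,
}

def parent_type(sub_category: str) -> EventType:
    """Look up the top-level event type for an effect sub-category name."""
    return TAXONOMY[sub_category]
```

Because the mapping keys are effects rather than tactics, a new hacking technique does not require a new category; only the effect it produces needs to be identified.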
The best way to assess how well this classification system meets the criteria for a well-defined
taxonomy is to see whether it can be easily and unambiguously used to categorize all the events
in an extensive data set.
Unfortunately, there are no public datasets of cyber attacks that include a variety of cyber events
with a range of both exploitive and disruptive effects. Most public data repositories focus on
some types of events to the exclusion of others. The Privacy Rights Clearinghouse, for example,
has a dataset focused on domestic exploitive attacks, while Zone-H.org has a dataset focused on website defacement attacks (a subset of Message Manipulation). Other datasets are
on privately maintained blogs and webpages. Some do not use a repeatable process to classify or
categorize by sector, thereby limiting the range of analysis that can be applied. Others are
compiled from proprietary data or are only available for a steep fee.
To create a dataset that had the information needed to test the CISSM taxonomy, the author used
systematic web searches to identify cyber events that could be characterized by their effects.
Initial searches for generalized references to cyber attacks yielded 3,355 possible events that
were referenced by blogs, security vendor portals, or other English-language news sources from
January 2014 through December 2016.
Of the 3,355 candidate cyber events initially discovered, 2,431 (72 percent) were included in the dataset. Media reports about 909 of the candidate events were broad discussions of
malware campaigns or generalized discussions about threat actor plans and tactics. These were
excluded because they did not provide information on the primary effect to a specific victim.
Media reports about an additional 15 events specifically spoke to the tactics used by the threat
actor independent of the effect to the primary victim, so they were also discarded. For example,
one source discussed the use of compromised Amazon Web services credentials to access a
system but did not talk about what types of actions took place once in the target network.
In complex cases where the victim suffered multiple effects (e.g. website defacement and
DDoS), the dataset counts each effect as a separate, but overlapping, event registered to the
victim. Cyber events were coded to include date, event type, organization type (using the North American Industry Classification System, or NAICS), a description of the event, and a link to the source.
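The coding scheme just described can be sketched as a small record type, with a helper that splits a complex incident into separate but overlapping events, one per primary effect. The field names and the sample incident are illustrative, not entries from the actual dataset:

```python
from dataclasses import dataclass

@dataclass
class CyberEvent:
    """One coded record: a single primary effect on a single victim."""
    date: str          # e.g. "2016-08-05"
    event_type: str    # one of the ten effect-based sub-categories
    naics_sector: str  # NAICS organization type of the victim
    description: str
    source_url: str

def code_incident(date, naics_sector, description, source_url, effects):
    """A complex incident with several effects (e.g. defacement plus DDoS)
    is recorded as one event per effect, all registered to the same victim."""
    return [CyberEvent(date, effect, naics_sector, description, source_url)
            for effect in effects]

events = code_incident(
    "2016-08-05", "Public Administration",
    "Hypothetical example: website defaced, then taken offline by DDoS",
    "https://example.com/report",
    effects=["Message Manipulation", "External Denial of Service"],
)
```

Counting each effect separately is what allows the taxonomy's exclusivity requirement to hold even for multi-effect incidents.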
This dataset is not an exhaustive accounting of all cyber events during this period. It only
includes events for which there was a direct news source that was verifiable and that provided
some insight into the methods of the attack. The true population of malign cyber activity is
unknown because some significant events are kept secret and many other cyber incidents are too
trivial to warrant media attention. Nevertheless, this dataset includes a large enough number of
events for it to be useful for testing the taxonomy and making rough generalizations about
relative frequencies of different types of events in different sectors.
As discussed earlier, a well-designed taxonomy should, among other things, account for all the items to be classified, clearly differentiate among categories, and ensure that each item has a unique classification.
Each of the 2,431 events in the dataset could be coded as either Exploitive or Disruptive and
assigned to one of ten effect-based sub-categories in the CISSM taxonomy. This fulfills the
exhaustiveness requirement. Treating complex attacks in which multiple effects were achieved
by the hacker as a set of separate but overlapping events made it possible to apply the taxonomy
in a consistent manner, to differentiate between categories of effect, and to maintain clear
differentiation between the categorized effects. This analysis did not assess the taxonomy’s
currency, ascertainability, or affinity, because these standards should be judged by individual
users rather than the creator of the taxonomy.
Many cyber classification systems run into the same three major problems: their inability to distinguish between tactics and effects; their difficulty remaining relevant as threat actors change and hacking techniques evolve; and their applicability to some types of IT systems but not others. The CISSM taxonomy disentangles stable categories of effects from the rapidly
advancing tactics employed by an ever-changing set of state and non-state hackers in a way that
can be applied to all IT systems in use today or envisioned for the future.
Categorizing cyber events according to their effects, rather than treating them as an indistinguishable but ever-increasing mass of “cyberattacks,” yields a number of useful insights.
Of the 2,431 cyber events during the three-year period reviewed, over 70 percent (1,700) were
exploitive, whereas 30 percent (725) were disruptive. This ratio appears to be relatively stable when
examining events on a yearly basis, too. Of the 633 events recorded in 2014, 67 percent (423)
were exploitive, and 33 percent (210) were disruptive. Of the 843 cyber events in 2015, 67
percent (563) were exploitive and 33 percent (280) were disruptive. And of the 955 events
recorded in 2016, 75 percent (714) were exploitive events, compared with 25 percent (241) that
were disruptive.
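The yearly percentages quoted above are simple proportions of the annual totals; a quick check of the arithmetic, using the counts reported in the text:

```python
# Yearly exploitive/disruptive counts as reported in the text.
yearly = {
    2014: {"exploitive": 423, "disruptive": 210},
    2015: {"exploitive": 563, "disruptive": 280},
    2016: {"exploitive": 714, "disruptive": 241},
}

def shares(counts):
    """Return each category's percentage of the year's total, rounded."""
    total = sum(counts.values())
    return {k: round(100 * v / total) for k, v in counts.items()}

per_year = {year: shares(c) for year, c in yearly.items()}
# 2014 and 2015 split roughly 67/33; 2016 shifts toward exploitive events.
```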
Of the 1,700 exploitative events, the two most common sub-categories are Exploitation of
Application Server events and Exploitation of End Host events. Ninety percent of all exploitive
events in the dataset fall into one of these two categories. This reflects the current popularity of
SQL injection attacks against web applications, and the heavy use of spearphishing campaigns
against end users.
A much smaller percentage of exploitative attacks fall into the other three categories, most likely
because these types of events often require internal access, are inherently more difficult to pull
off, are not as well monitored, or are not as well publicized. Exploitation of Sensor events
represent only 5 percent of the exploitive events sample, probably because the value of data from
many of the devices in this category, like smart thermostats and baby monitors, might not be as
large as records from other sources. Whereas a ready market exists on the Dark Web for
customer data stolen from POS devices, most types of sensor data will not be of broad interest.
The 725 disruptive cyber events in the dataset follow a similar pattern with most activity falling
into categories that are generally less problematic. Ninety-six percent of all disruptive events are
either Message Manipulation (60 percent, 433) or External Denial of Service (36 percent, 263)
events. These events reflect efforts by malign actors using less sophisticated techniques to deface
websites that are vulnerable to external access and manipulation, weak passwords surrounding
social media accounts, or high levels of DDoS activity applied by actors against identified
targets.
The remaining 4 percent (29 events) are split between Internal Denial of Service (2 percent, 11 events), Data Attack (2 percent, 14 events), and Physical Attack (1 percent, 4 events). These types
of events involved internal networks, so they required more sophisticated access techniques or
malware leveraged to engineer the intended disruptive effects.
In Figure 2, the level of cyber event activity in different sectors is ranked into three tiers—high, medium, and low—to identify which sectors are currently most prone to the types of
cyberattacks that make it into the public record. Sectors that experience more than 15 percent of
all cyber events in our dataset fit into the highest tier. Government services and professional
services fall into this category; together, they account for 38 percent of all events recorded.
Medium-activity sectors include those that see at least 3.8 percent, but less than 15 percent, of
the events in the full dataset. Sectors falling into this tier include information services, education,
healthcare, finance, retail, entertainment, and accommodation services. The nine sectors in this
category experienced approximately 56.7 percent of the total number of cyber events, suggesting
that a larger breadth of industries is affected by significant numbers of cyber events.
The lowest activity tier includes sectors that had fewer than 3.8 percent of the total events. This
tier includes traditional industries that are less dependent on information technology than other
sectors of the modern economy, such as agriculture, mining, real estate, and construction. Two
sectors considered critical infrastructure—transportation and utilities—also fell into this tier.
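The tiering rule described above reduces to two thresholds on a sector's share of all events in the dataset. A minimal sketch with the threshold values taken from the text (the text leaves the 15 percent boundary itself ambiguous; this sketch assigns it to the high tier):

```python
def activity_tier(sector_share_pct: float) -> str:
    """Assign a sector to an activity tier from its share of all events.
    Thresholds follow the text: 15% and above is high; at least 3.8% but
    below 15% is medium; below 3.8% is low."""
    if sector_share_pct >= 15.0:
        return "high"
    if sector_share_pct >= 3.8:
        return "medium"
    return "low"
```

Under this rule, Government Services and Professional Services (together 38 percent of events) land in the high tier, while sectors such as agriculture, mining, transportation, and utilities fall into the low tier.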
In addition to the frequency of event activity, the nature of those effects is also an important
factor in assessing risk to specific sectors. In Figure 3, the percentage of total cyber events
characterized as exploitive is plotted versus the percentage that are disruptive in nature. Only the
ten sectors with the highest frequencies of cyber events are represented in the figure, as many of
the low tier sectors have too few observations to draw meaningful conclusions.
Figure 3: Exploitive vs Disruptive as a Share of Total Events for Top 10 Industry Sectors
The only sectoral category where the relative frequency of exploitative and disruptive events is
roughly the same as in the entire data set (70 percent Exploitive, 30 percent Disruptive) is the
“other” category. The relative frequency within most sectors is significantly different from the
average distribution. This highlights the importance of assessing risks on a per industry basis
instead of applying general guidance about what types of cyber events are most common.
Lastly, the categories of cyber events are also found to vary between sectors. Table 1 highlights
all cyber events, by share, drawing out some interesting differences. For example, while
Accommodation and Food Services represent only 4.8 percent of all cyber events in the dataset,
that sector accounts for over 36 percent of all Exploitation of Sensor events, well above the
average rate of 3.8 percent. This observation draws attention to the heavy targeting by hackers of
PoS devices used by fast food restaurants and hotels. The same sector is under-represented for
Message Manipulation events. Only 3.4 percent of the events it experienced fell in this category,
compared to the average of 17.9 percent for all sectors.
Differences between sectors in the frequencies of different types of cyber events likely reflect
differences in attacker motivations, vulnerabilities, and benefits that can be obtained through different types of exploitation of data or disruption of key organizational services. For example,
Government Services suffers more Message Manipulation and External Denial of Service event
types, whereas it does not see many Application Server events. A review of specific incidents in
the dataset reveals a large number of attacks against websites aimed at promoting a political
message. These attacks often exploit misconfigurations and can be automated, thereby producing
larger numbers of events, whereas exploitation of applications may occur less often when the
targeted information requires greater effort by the hacker to achieve their goals.
Conclusion
Having an easy-to-use taxonomy that provides an exclusive, exhaustive, and consistent way to
differentiate the primary effects of cyber activity will help organizational leaders and policy
makers have more sophisticated discussions about the different types of threats they face, and the
appropriate risk mitigation strategies. The taxonomy presented in this paper and the analysis of
three years of publicized cyber event data highlight variance in scale, effects, and method.
Differences in the types of disruptive or exploitive attacks directly inform organizational leaders
on both the range as well as concentration of effects they might face. By disentangling tactics
from effect this classification provides a first step in creating a framework by which
organizational leaders can categorize and assess the most consequential forms of cyber attack
they might face. Additional work to measure the impact of specific attacks would allow
organizations and governments to adequately plan for the types of threats they are most likely to
face.

Charles Harry is a senior leader, practitioner, and researcher with over 20 years of experience in
intelligence and cyber operations. Dr. Harry is the Director of Operations at the Maryland Global
Initiative in Cybersecurity (MaGIC), an Associate Research Professor in the School of Public
Policy, and a Senior Research Associate at CISSM.
Nancy Gallagher is the CISSM Director and a Research Professor at the School of Public Policy.
This article is an editorial note submitted to CCR. It has NOT been peer reviewed. The authors take full responsibility for this
article's technical content. Comments can be posted through CCR Online.
Categories and Subject Descriptors
C.2.1 [Network Architecture and Design]: Packet-switching networks.

General Terms
Design, Experimentation, Management.

Keywords
Internet, History.

1. INTRODUCTION
The Internet has revolutionized the computer and communications world like nothing before. The invention of the telegraph, telephone, radio, and computer set the stage for this unprecedented integration of capabilities. The Internet is at once a world-wide broadcasting capability, a mechanism for information dissemination, and a medium for collaboration and interaction between individuals and their computers without regard for geographic location.

This is intended to be a brief, necessarily cursory and incomplete history. Much material currently exists about the Internet, covering history, technology, and usage. A trip to almost any bookstore will find shelves of material written about the Internet.

In this paper, several of us involved in the development and evolution of the Internet share our views of its origins and history.

* Deceased

1 http://www.isoc.org/internet/history/brief.shtml
2 Perhaps this is an exaggeration based on the lead author's residence in Silicon Valley.
3 On a recent trip to a Tokyo bookstore, one of the authors counted 14 English language magazines devoted to the Internet.
4 An abbreviated version of this article appears in the 50th anniversary issue of the CACM, Feb. 97. The authors would like to express their appreciation to Andy Rosenbloom, CACM Senior Editor, for both instigating the writing of this article and his invaluable assistance in editing both this and the abbreviated version.
ACM SIGCOMM Computer Communication Review 22 Volume 39, Number 5, October 2009
This history revolves around four distinct aspects. There is the technological evolution that began with early research on packet switching and the ARPANET (and related technologies), and where current research continues to expand the horizons of the infrastructure along several dimensions, such as scale, performance, and higher level functionality. There is the operations and management aspect of a global and complex operational infrastructure. There is the social aspect, which resulted in a broad community of Internauts working together to create and evolve the technology. And there is the commercialization aspect, resulting in an extremely effective transition of research results into a broadly deployed and available information infrastructure.

The Internet today is a widespread information infrastructure, the initial prototype of what is often called the National (or Global or Galactic) Information Infrastructure. Its history is complex and involves many aspects - technological, organizational, and community. And its influence reaches not only to the technical fields of computer communications but throughout society as we move toward increasing use of online tools to accomplish electronic commerce, information acquisition, and community operations.

2. ORIGINS OF THE INTERNET
The first recorded description of the social interactions that could be enabled through networking was a series of memos written by J.C.R. Licklider of MIT in August 1962 discussing his "Galactic Network" concept [9]. He envisioned a globally interconnected set of computers through which everyone could quickly access data and programs from any site. In spirit, the concept was very much like the Internet of today. Licklider was the first head of the computer research program at DARPA, starting in October 1962. While at DARPA he convinced his successors at DARPA, Ivan Sutherland, Bob Taylor, and MIT researcher Lawrence G. Roberts, of the importance of this networking concept.

Leonard Kleinrock at MIT published the first paper on packet switching theory in July 1961 [6] and the first book on the subject in 1964 [7]. Kleinrock convinced Roberts of the theoretical feasibility of communications using packets rather than circuits, which was a major step along the path towards computer networking. The other key step was to make the computers talk together. To explore this, in 1965 working with Thomas Merrill, Roberts connected the TX-2 computer in Mass. to the Q-32 in California with a low speed dial-up telephone line, creating the first (however small) wide-area computer network ever built [10]. The result of this experiment was the realization that the time-shared computers could work well together, running programs and retrieving data as necessary on the remote machine, but that the circuit switched telephone system was totally inadequate for the job. Kleinrock's argument for packet switching was confirmed.

In late 1966 Roberts went to DARPA to develop the computer network concept and quickly put together his plan for the "ARPANET", publishing it in 1967 [11]. At the conference where he presented the paper, there was also a paper on a packet network concept from the UK by Donald Davies and Roger Scantlebury of NPL. Scantlebury told Roberts about the NPL work as well as that of Paul Baran and others at RAND. The RAND group had written a paper on packet switching networks for secure voice in the military in 1964 [1]. It happened that the work at MIT (1961-1967), at RAND (1962-1965), and at NPL (1964-1967) had all proceeded in parallel without any of the researchers knowing about the other work. The word "packet" was adopted from the work at NPL, and the proposed line speed to be used in the ARPANET design was upgraded from 2.4 kbps to 50 kbps.

In August 1968, after Roberts and the DARPA funded community had refined the overall structure and specifications for the ARPANET, an RFQ was released by DARPA for the development of one of the key components, the packet switches called Interface Message Processors (IMP's). The RFQ was won in December 1968 by a group headed by Frank Heart at Bolt Beranek and Newman (BBN). As the BBN team worked on the IMP's with Bob Kahn playing a major role in the overall ARPANET architectural design, the network topology and economics were designed and optimized by Roberts working with Howard Frank and his team at Network Analysis Corporation, and the network measurement system was prepared by Kleinrock's team at UCLA.

Due to Kleinrock's early development of packet switching theory and his focus on analysis, design and measurement, his Network Measurement Center at UCLA was selected to be the first node on the ARPANET. All this came together in September 1969 when BBN installed the first IMP at UCLA and the first host computer was connected. Doug Engelbart's project on "Augmentation of Human Intellect" (which included NLS, an early hypertext system) at Stanford Research Institute (SRI) provided a second node. SRI supported the Network Information Center, led by Elizabeth (Jake) Feinler and including functions such as maintaining tables of host name to address mapping as well as a directory of the RFC's. One month later, when SRI was connected to the ARPANET, the first host-to-host message was sent from Kleinrock's laboratory to SRI. Two more nodes were added at UC

5 The Advanced Research Projects Agency (ARPA) changed its name to Defense Advanced Research Projects Agency (DARPA) in 1971, then back to ARPA in 1993, and back to DARPA in 1996. We refer throughout to DARPA, the current name.
6 It was from the RAND study that the false rumor started claiming that the ARPANET was somehow related to building a network resistant to nuclear war. This was never true of the ARPANET; only the unrelated RAND study on secure voice considered nuclear war. However, the later work on Internetting did emphasize robustness and survivability, including the capability to withstand losses of large portions of the underlying networks.
7 Including amongst others Vint Cerf, Steve Crocker, and Jon Postel. Joining them later were David Crocker, who was to play an important role in documentation of electronic mail protocols, and Robert Braden, who developed the first NCP and then TCP for IBM mainframes and was also to play a long term role in the ICCB and IAB.
Santa Barbara and University of Utah. These last two nodes incorporated application visualization projects, with Glen Culler and Burton Fried at UCSB investigating methods for display of mathematical functions using storage displays to deal with the problem of refresh over the net, and Robert Taylor and Ivan Sutherland at Utah investigating methods of 3-D representations over the net. Thus, by the end of 1969, four host computers were connected together into the initial ARPANET, and the budding Internet was off the ground. Even at this early stage, it should be noted that the networking research incorporated both work on the underlying network and work on how to utilize the network. This tradition continues to this day.

Computers were added quickly to the ARPANET during the following years, and work proceeded on completing a functionally complete Host-to-Host protocol and other network software. In December 1970 the Network Working Group (NWG) working under S. Crocker finished the initial ARPANET Host-to-Host protocol, called the Network Control Protocol (NCP). As the ARPANET sites completed implementing NCP during the period 1971-1972, the network users finally could begin to develop applications.

In October 1972 Kahn organized a large, very successful demonstration of the ARPANET at the International Computer Communication Conference (ICCC). This was the first public demonstration of this new network technology. It was also in 1972 that the initial "hot" application, electronic mail, was introduced. In March Ray Tomlinson at BBN wrote the basic email message send and read software, motivated by the need of the ARPANET developers for an easy coordination mechanism. In July, Roberts expanded its utility by writing the first email utility program to list, selectively read, file, forward, and respond to messages. From there email took off as the largest network application for over a decade. This was a harbinger of the kind of activity we see on the World Wide Web today, namely, the enormous growth of all kinds of "people-to-people" traffic.

3. THE INITIAL INTERNETTING CONCEPTS
The original ARPANET grew into the Internet. Internet was based on the idea that there would be multiple independent networks of rather arbitrary design, beginning with the ARPANET as the pioneering packet switching network, but soon to include packet satellite networks, ground-based packet radio networks and other networks. The Internet as we now know it embodies a key underlying technical idea, namely that of open architecture networking. In this approach, the choice of any individual network technology was not dictated by a particular network architecture but rather could be selected freely by a provider and made to interwork with the other networks through a meta-level "Internetworking Architecture". Up until that time there was only one general method for federating networks. This was the traditional circuit switching method, where networks would interconnect at the circuit level, passing individual bits on a synchronous basis along a portion of an end-to-end circuit between a pair of end locations. Recall that Kleinrock had shown in 1961 that packet switching was a more efficient switching method. Along with packet switching, special purpose interconnection arrangements between networks were another possibility. While there were other limited ways to interconnect different networks, they required that one be used as a component of the other, rather than acting as a peer of the other in offering end-to-end service.

In an open-architecture network, the individual networks may be separately designed and developed and each may have its own unique interface which it may offer to users and/or other providers, including other Internet providers. Each network can be designed in accordance with the specific environment and user requirements of that network. There are generally no constraints on the types of network that can be included or on their geographic scope, although certain pragmatic considerations will dictate what makes sense to offer.

The idea of open-architecture networking was first introduced by Kahn shortly after having arrived at DARPA in 1972. This work was originally part of the packet radio program, but subsequently became a separate program in its own right. At the time, the program was called "Internetting". Key to making the packet radio system work was a reliable end-end protocol that could maintain effective communication in the face of jamming and other radio interference, or withstand intermittent blackout such as caused by being in a tunnel or blocked by the local terrain. Kahn first contemplated developing a protocol local only to the packet radio network, since that would avoid having to deal with the multitude of different operating systems, and continuing to use NCP.

However, NCP did not have the ability to address networks (and machines) further downstream than a destination IMP on the ARPANET, and thus some change to NCP would also be required. (The assumption was that the ARPANET was not changeable in this regard.) NCP relied on ARPANET to provide end-to-end reliability. If any packets were lost, the protocol (and presumably any applications it supported) would come to a grinding halt. In this model NCP had no end-end host error control, since the ARPANET was to be the only network in existence and it would be so reliable that no error control would be required on the part of the hosts.

Thus, Kahn decided to develop a new version of the protocol which could meet the needs of an open-architecture network environment. This protocol would eventually be called the Transmission Control Protocol/Internet Protocol (TCP/IP). While NCP tended to act like a device driver, the new protocol would be more like a communications protocol.

Four ground rules were critical to Kahn's early thinking:

• Each distinct network would have to stand on its own, and no internal changes could be required to any such network to connect it to the Internet.

• Communications would be on a best effort basis. If a packet didn't make it to the final destination, it would shortly be retransmitted from the source.

• Black boxes would be used to connect the networks; these would later be called gateways and routers. There would be no information retained by the gateways about the individual flows of packets passing through them, thereby keeping them simple and avoiding complicated adaptation and recovery from various failure modes.

• There would be no global control at the operations level.
Other key issues that needed to be addressed were:

• Algorithms to prevent lost packets from permanently disabling communications and enabling them to be successfully retransmitted from the source.

• Providing for host to host "pipelining" so that multiple packets could be enroute from source to destination at the discretion of the participating hosts, if the intermediate networks allowed it.

• Gateway functions to allow them to forward packets appropriately. This included interpreting IP headers for routing, handling interfaces, breaking packets into smaller pieces if necessary, etc.

• The need for end-end checksums, reassembly of packets from fragments, and detection of duplicates, if any.

• The need for global addressing.

• Techniques for host to host flow control.

• Interfacing with the various operating systems.

• There were also other concerns, such as implementation efficiency and internetwork performance, but these were secondary considerations at first.

Kahn began work on a communications-oriented set of operating system principles while at BBN and documented some of his early thoughts in an internal BBN memorandum entitled "Communications Principles for Operating Systems" [4]. At this point he realized it would be necessary to learn the implementation details of each operating system to have a chance to embed any new protocols in an efficient way. Thus, in the spring of 1973, after starting the internetting effort, he asked Vint Cerf (then at Stanford) to work with him on the detailed design of the protocol. Cerf had been intimately involved in the original NCP design and development and already had the knowledge about interfacing to existing operating systems. So armed with Kahn's architectural approach to the communications side and with Cerf's NCP experience, they teamed up to spell out the details of what became TCP/IP.

The give and take was highly productive, and the first written version of the resulting approach was distributed at a special meeting of the International Network Working Group (INWG), which had been set up at a conference at Sussex University in September 1973. Cerf had been invited to chair this group and used the occasion to hold a meeting of INWG members who were heavily represented at the Sussex Conference.

Some basic approaches emerged from this collaboration between Kahn and Cerf:

• Communication between two processes would logically consist of a very long stream of bytes (they called them octets). The position of any octet in the stream would be used to identify it.

• Flow control would be done by using sliding windows and acknowledgments (acks). The destination could select when to acknowledge, and each ack returned would be cumulative for all packets received to that point.

• It was left open as to exactly how the source and destination would agree on the parameters of the windowing to be used. Defaults were used initially.

• Although Ethernet was under development at Xerox PARC at that time, the proliferation of LANs was not envisioned at the time, much less PCs and workstations. The original model was national level networks like ARPANET, of which only a relatively small number were expected to exist. Thus a 32 bit IP address was used, of which the first 8 bits signified the network and the remaining 24 bits designated the host on that network. This assumption, that 256 networks would be sufficient for the foreseeable future, was clearly in need of reconsideration when LANs began to appear in the late 1970s.

The original Cerf/Kahn paper on the Internet described one protocol, called TCP, which provided all the transport and forwarding services in the Internet. Kahn had intended that the TCP protocol support a range of transport services, from the totally reliable sequenced delivery of data (virtual circuit model) to a datagram service in which the application made direct use of the underlying network service, which might imply occasional lost, corrupted or reordered packets.

However, the initial effort to implement TCP resulted in a version that only allowed for virtual circuits. This model worked fine for file transfer and remote login applications, but some of the early work on advanced network applications, in particular packet voice in the 1970s, made clear that in some cases packet losses should not be corrected by TCP, but should be left to the application to deal with. This led to a reorganization of the original TCP into two protocols: the simple IP, which provided only for addressing and forwarding of individual packets, and the separate TCP, which was concerned with service features such as flow control and recovery from lost packets. For those applications that did not want the services of TCP, an alternative called the User Datagram Protocol (UDP) was added in order to provide direct access to the basic service of IP.

A major initial motivation for both the ARPANET and the Internet was resource sharing - for example allowing users on the packet radio networks to access the time sharing systems attached to the ARPANET. Connecting the two together was far more economical than duplicating these very expensive computers. However, while file transfer and remote login (Telnet) were very important applications, electronic mail has probably had the most significant impact of the innovations from that era. Email provided a new model of how people could communicate with each other, and changed the nature of collaboration, first in the building of the Internet itself (as is discussed below) and later for much of society.

There were other applications proposed in the early days of the Internet, including packet based voice communication (the precursor of Internet telephony), various models of file and disk sharing, and early "worm" programs that showed the concept of agents (and, of course, viruses). A key concept of the Internet is that it was not designed for just one application, but as a general infrastructure on which new applications could be conceived, as illustrated later by the emergence of the World Wide Web. It is

8 This was subsequently published as Reference [4].
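The cumulative acknowledgment idea sketched in the Kahn/Cerf design points above can be illustrated in a few lines. This is a toy sketch of my own, not the article's or TCP's actual mechanism (real TCP adds sequence-number windows, wraparound, and retransmission timers):

```python
# Toy model: the receiver tracks which octet positions of the stream have
# arrived; a cumulative ack names the highest position N such that every
# octet 0..N is present, so one ack covers all packets received so far.

def cumulative_ack(received):
    """Highest octet position N with octets 0..N all present (-1 if none)."""
    n = 0
    while n in received:
        n += 1
    return n - 1

arrived = {0, 1, 2, 3, 4, 7, 8, 9}   # octets 5-6 lost; 7-9 arrived early
print(cumulative_ack(arrived))        # 4: "everything through octet 4"

arrived |= {5, 6}                     # source retransmits the gap
print(cumulative_ack(arrived))        # 9: the ack jumps past buffered data
```

Note how a single retransmission advances the ack past data that was already buffered out of order, which is why the destination "could select when to acknowledge" without losing information.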
the general purpose nature of the service provided by TCP and IP that makes this possible.

4. PROVING THE IDEAS
DARPA let three contracts to Stanford (Cerf), BBN (Ray Tomlinson) and UCL (Peter Kirstein) to implement TCP/IP (it was simply called TCP in the Cerf/Kahn paper but contained both components). The Stanford team, led by Cerf, produced the detailed specification, and within about a year there were three independent implementations of TCP that could interoperate.

This was the beginning of long term experimentation and development to evolve and mature the Internet concepts and technology. Beginning with the first three networks (ARPANET, Packet Radio, and Packet Satellite) and their initial research communities, the experimental environment has grown to incorporate essentially every form of network and a very broad-based research and development community [5]. With each expansion has come new challenges.

The early implementations of TCP were done for large time sharing systems such as Tenex and TOPS 20. When desktop computers first appeared, it was thought by some that TCP was too big and complex to run on a personal computer. David Clark and his research group at MIT set out to show that a compact and simple implementation of TCP was possible. They produced an implementation, first for the Xerox Alto (the early personal workstation developed at Xerox PARC) and then for the IBM PC. That implementation was fully interoperable with other TCPs, but was tailored to the application suite and performance objectives of the personal computer, and showed that workstations, as well as large time-sharing systems, could be a part of the Internet. In 1976, Kleinrock published the first book on the ARPANET [8]. It included an emphasis on the complexity of protocols and the pitfalls they often introduce. This book was influential in spreading the lore of packet switching networks to a very wide community.

Widespread development of LANs, PCs and workstations in the 1980s allowed the nascent Internet to flourish. Ethernet technology, developed by Bob Metcalfe at Xerox PARC in 1973, is now probably the dominant network technology in the Internet, and PCs and workstations the dominant computers. This change from having a few networks with a modest number of time-shared hosts (the original ARPANET model) to having many networks has resulted in a number of new concepts and changes to the underlying technology. First, it resulted in the definition of three network classes (A, B, and C) to accommodate the range of networks. Class A represented large national scale networks (small number of networks with large numbers of hosts); Class B represented regional scale networks; and Class C represented local area networks (large number of networks with relatively few hosts).

A major shift occurred as a result of the increase in scale of the Internet and its associated management issues. To make it easy for people to use the network, hosts were assigned names, so that it was not necessary to remember the numeric addresses. Originally, there were a fairly limited number of hosts, so it was feasible to maintain a single table of all the hosts and their associated names and addresses. The shift to having a large number of independently managed networks (e.g., LANs) meant that having a single table of hosts was no longer feasible, and the Domain Name System (DNS) was invented by Paul Mockapetris of USC/ISI. The DNS permitted a scalable distributed mechanism for resolving hierarchical host names (e.g. www.acm.org) into an Internet address.

The increase in the size of the Internet also challenged the capabilities of the routers. Originally, there was a single distributed algorithm for routing that was implemented uniformly by all the routers in the Internet. As the number of networks in the Internet exploded, this initial design could not expand as necessary, so it was replaced by a hierarchical model of routing, with an Interior Gateway Protocol (IGP) used inside each region of the Internet, and an Exterior Gateway Protocol (EGP) used to tie the regions together. This design permitted different regions to use a different IGP, so that different requirements for cost, rapid reconfiguration, robustness and scale could be accommodated. Not only the routing algorithm, but the size of the addressing tables, stressed the capacity of the routers. New approaches for address aggregation, in particular classless inter-domain routing (CIDR), have recently been introduced to control the size of router tables.

As the Internet evolved, one of the major challenges was how to propagate the changes to the software, particularly the host software. DARPA supported UC Berkeley to investigate modifications to the Unix operating system, including incorporating TCP/IP developed at BBN. Although Berkeley later rewrote the BBN code to more efficiently fit into the Unix system and kernel, the incorporation of TCP/IP into the Unix BSD system releases proved to be a critical element in dispersion of the protocols to the research community. Much of the CS research community began to use Unix BSD for their day-to-day computing environment. Looking back, the strategy of incorporating Internet protocols into a supported operating system for the research community was one of the key elements in the successful widespread adoption of the Internet.

One of the more interesting challenges was the transition of the ARPANET host protocol from NCP to TCP/IP as of January 1, 1983. This was a "flag-day" style transition, requiring all hosts to convert simultaneously or be left having to communicate via rather ad-hoc mechanisms. This transition was carefully planned within the community over several years before it actually took place and went surprisingly smoothly (but resulted in a distribution of buttons saying "I survived the TCP/IP transition").

TCP/IP had been adopted as a defense standard three years earlier, in 1980. This enabled defense to begin sharing in the DARPA Internet technology base and led directly to the eventual partitioning of the military and non-military communities. By 1983, ARPANET was being used by a significant number of defense R&D and operational organizations. The transition of ARPANET from NCP to TCP/IP permitted it to be split into a MILNET supporting operational requirements and an ARPANET supporting research needs.

Thus, by 1985, Internet was already well established as a technology supporting a broad community of researchers and developers, and was beginning to be used by other communities for daily computer communications. Electronic mail was being used broadly across several communities, often with different systems, but interconnection of different mail systems was showing the utility of inter-personal electronic communication
ACM SIGCOMM Computer Communication Review 26 Volume 39, Number 5, October 2009
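The class-based addressing split described above (Classes A, B, and C) can be made concrete with a small sketch. The first-octet boundaries below are the standard historical classful ranges rather than figures stated in this article, and the helper name is ours:

```python
def address_class(ip: str) -> str:
    """Classify an IPv4 address under the historical classful scheme.

    Classes D and E (multicast and experimental) came later and are
    outside the A/B/C split discussed in the text.
    """
    first = int(ip.split(".")[0])
    if first < 128:   # leading bit 0: few networks, many hosts (Class A)
        return "A"
    if first < 192:   # leading bits 10: regional-scale networks (Class B)
        return "B"
    if first < 224:   # leading bits 110: many small networks (Class C)
        return "C"
    return "other (D/E)"

for ip in ["18.0.0.1", "130.132.1.1", "192.0.2.7"]:
    print(ip, "->", address_class(ip))  # -> A, B, C respectively
```

The leading-bit encoding is why the boundaries fall at 128, 192, and 224: each class is distinguished by its high-order bits, which routers of the era could test cheaply.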
5. TRANSITION TO WIDESPREAD INFRASTRUCTURE

At the same time that the Internet technology was being experimentally validated and widely used amongst a subset of computer science researchers, other networks and networking technologies were being pursued. The usefulness of computer networking - especially electronic mail - demonstrated by DARPA and Department of Defense contractors on the ARPANET was not lost on other communities and disciplines, so that by the mid-1970s computer networks had begun to spring up wherever funding could be found for the purpose. The U.S. Department of Energy (DoE) established MFENet for its researchers in Magnetic Fusion Energy, whereupon DoE's High Energy Physicists responded by building HEPNet. NASA Space Physicists followed with SPAN, and Rick Adrion, David Farber, and Larry Landweber established CSNET for the (academic and industrial) Computer Science community with an initial grant from the U.S. National Science Foundation (NSF). AT&T's free-wheeling dissemination of the UNIX computer operating system spawned USENET, based on UNIX's built-in UUCP communication protocols, and in 1981 Ira Fuchs and Greydon Freeman devised BITNET, which linked academic mainframe computers in an "email as card images" paradigm.

With the exception of BITNET and USENET, these early networks (including ARPANET) were purpose-built - i.e., they were intended for, and largely restricted to, closed communities of scholars; there was hence little pressure for the individual networks to be compatible and, indeed, they largely were not. In addition, alternate technologies were being pursued in the commercial sector, including XNS from Xerox, DECNet, and IBM's SNA.⁹ It remained for the British JANET (1984) and U.S. NSFNET (1985) programs to explicitly announce their intent to serve the entire higher education community, regardless of discipline. Indeed, a condition for a U.S. university to receive NSF funding for an Internet connection was that "... the connection must be made available to ALL qualified users on campus."

In 1985, Dennis Jennings came from Ireland to spend a year at NSF leading the NSFNET program. He worked with the community to help NSF make a critical decision - that TCP/IP would be mandatory for the NSFNET program. When Steve Wolff took over the NSFNET program in 1986, he recognized the need for a wide area networking infrastructure to support the general academic and research community, along with the need to develop a strategy for establishing such infrastructure on a basis ultimately independent of direct federal funding. Policies and strategies were adopted (see below) to achieve that end.

NSF also elected to support DARPA's existing Internet organizational infrastructure, hierarchically arranged under the (then) Internet Activities Board (IAB). The public declaration of this choice was the joint authorship, by the IAB's Internet Engineering and Architecture Task Forces and by NSF's Network Technical Advisory Group, of RFC 985 (Requirements for Internet Gateways), which formally ensured interoperability of DARPA's and NSF's pieces of the Internet.

In addition to the selection of TCP/IP for the NSFNET program, Federal agencies made and implemented several other policy decisions which shaped the Internet of today.

• Federal agencies shared the cost of common infrastructure, such as trans-oceanic circuits. They also jointly supported "managed interconnection points" for interagency traffic; the Federal Internet Exchanges (FIX-E and FIX-W) built for this purpose served as models for the Network Access Points and "*IX" facilities that are prominent features of today's Internet architecture.

• To coordinate this sharing, the Federal Networking Council¹⁰ was formed. The FNC also cooperated with other international organizations, such as RARE in Europe, through the Coordinating Committee on Intercontinental Research Networking, CCIRN, to coordinate Internet support of the research community worldwide.

• This sharing and cooperation between agencies on Internet-related issues had a long history. An unprecedented 1981 agreement between Farber, acting for CSNET and the NSF, and DARPA's Kahn, permitted CSNET traffic to share ARPANET infrastructure on a statistical and no-metered-settlements basis.

• Subsequently, in a similar mode, the NSF encouraged its regional (initially academic) networks of the NSFNET to seek commercial, non-academic customers, expand their facilities to serve them, and exploit the resulting economies of scale to lower subscription costs for all.

• On the NSFNET Backbone - the national-scale segment of the NSFNET - NSF enforced an "Acceptable Use Policy" (AUP) which prohibited Backbone usage for purposes "not in support of Research and Education." The predictable (and intended) result of encouraging commercial network traffic at the local and regional level, while denying its access to national-scale transport, was to stimulate the emergence and/or growth of "private", competitive, long-haul networks such as PSI, UUNET, ANS CO+RE, and (later) others. This process of privately-financed augmentation for commercial uses was thrashed out starting in 1988 in a series of NSF-initiated conferences at Harvard's Kennedy School of Government on "The Commercialization and Privatization of the Internet" - and on the "com-priv" list on the net itself.

⁹ The desirability of email interchange, however, led to one of the first "Internet books": !%@:: A Directory of Electronic Mail Addressing and Networks, by Frey and Adams, on email address translation and forwarding.

¹⁰ Originally named the Federal Research Internet Coordinating Committee, FRICC. The FRICC was originally formed to coordinate U.S. research network activities in support of the international coordination provided by the CCIRN.
• In 1988, a National Research Council committee, chaired by Kleinrock and with Kahn and Clark as members, produced a report commissioned by NSF titled "Towards a National Research Network". This report was influential on then-Senator Al Gore, and ushered in high speed networks that laid the networking foundation for the future information superhighway.

• In 1994, a National Research Council report, again chaired by Kleinrock (and with Kahn and Clark as members again), entitled "Realizing The Information Future: The Internet and Beyond", was released. This report, commissioned by NSF, was the document in which a blueprint for the evolution of the information superhighway was articulated, and it has had a lasting effect on the way we think about that evolution. It anticipated the critical issues of intellectual property rights, ethics, pricing, education, architecture and regulation for the Internet.

• NSF's privatization policy culminated in April 1995, with the defunding of the NSFNET Backbone. The funds thereby recovered were (competitively) redistributed to regional networks to buy national-scale Internet connectivity from the now numerous, private, long-haul networks.

The backbone had made the transition from a network built from routers out of the research community (the "Fuzzball" routers from David Mills) to commercial equipment. In its 8 1/2 year lifetime, the Backbone had grown from six nodes with 56 kbps links to 21 nodes with multiple 45 Mbps links. It had seen the Internet grow to over 50,000 networks on all seven continents and outer space, with approximately 29,000 networks in the United States.

Such was the weight of the NSFNET program's ecumenism and funding ($200 million from 1986 to 1995) - and the quality of the protocols themselves - that by 1990, when the ARPANET itself was finally decommissioned¹¹, TCP/IP had supplanted or marginalized most other wide-area computer network protocols worldwide, and IP was well on its way to becoming THE bearer service for the Global Information Infrastructure.

6. THE ROLE OF DOCUMENTATION

A key to the rapid growth of the Internet has been the free and open access to the basic documents, especially the specifications of the protocols.

The beginnings of the ARPANET and the Internet in the university research community promoted the academic tradition of open publication of ideas and results. However, the normal cycle of traditional academic publication was too formal and too slow for the dynamic exchange of ideas essential to creating networks.

In 1969 a key step was taken by S. Crocker (then at UCLA) in establishing the Request for Comments (or RFC) series of notes [3]. These memos were intended to be an informal, fast distribution way to share ideas with other network researchers. At first the RFCs were printed on paper and distributed via snail mail. As the File Transfer Protocol (FTP) came into use, the RFCs were prepared as online files and accessed via FTP. Now, of course, the RFCs are easily accessed via the World Wide Web at dozens of sites around the world. SRI, in its role as Network Information Center, maintained the online directories. Jon Postel acted as RFC Editor as well as managing the centralized administration of required protocol number assignments, roles that he continued to play until his death on October 16, 1998.

The effect of the RFCs was to create a positive feedback loop, with ideas or proposals presented in one RFC triggering another RFC with additional ideas, and so on. When some consensus (or at least a consistent set of ideas) had come together, a specification document would be prepared. Such a specification would then be used as the base for implementations by the various research teams.

Over time, the RFCs have become more focused on protocol standards (the "official" specifications), though there are still informational RFCs that describe alternate approaches, or provide background information on protocols and engineering issues. The RFCs are now viewed as the "documents of record" in the Internet engineering and standards community.

The open access to the RFCs (for free, if you have any kind of a connection to the Internet) promotes the growth of the Internet because it allows the actual specifications to be used for examples in college classes and by entrepreneurs developing new systems.

Email has been a significant factor in all areas of the Internet, and that is certainly true in the development of protocol specifications, technical standards, and Internet engineering. The very early RFCs often presented a set of ideas developed by the researchers at one location to the rest of the community. After email came into use, the authorship pattern changed - RFCs were presented by joint authors with a common view, independent of their locations.

Specialized email mailing lists have long been used in the development of protocol specifications, and they continue to be an important tool. The IETF now has in excess of 75 working groups, each working on a different aspect of Internet engineering. Each of these working groups has a mailing list to discuss one or more draft documents under development. When consensus is reached on a draft document it may be distributed as an RFC.

As the current rapid expansion of the Internet is fueled by the realization of its capability to promote information sharing, we should understand that the network's first role in information sharing was sharing the information about its own design and operation through the RFC documents. This unique method for evolving new capabilities in the network will continue to be critical to future evolution of the Internet.
7. FORMATION OF THE BROAD COMMUNITY

The Internet is as much a collection of communities as a collection of technologies, and its success is largely attributable to both satisfying basic community needs as well as utilizing the community in an effective way to push the infrastructure forward. This community spirit has a long history beginning with the early ARPANET. The early ARPANET researchers worked as a close-knit community to accomplish the initial demonstrations of packet switching technology described earlier. Likewise, the Packet Satellite, Packet Radio and several other DARPA computer science research programs were multi-contractor collaborative activities that heavily used whatever available mechanisms there were to coordinate their efforts, starting with electronic mail and adding file sharing, remote access, and eventually World Wide Web capabilities. Each of these programs formed a working group, starting with the ARPANET Network Working Group. Because of the unique role that ARPANET played as an infrastructure supporting the various research programs, as the Internet started to evolve, the Network Working Group evolved into the Internet Working Group.

In the late 1970s, recognizing that the growth of the Internet was accompanied by a growth in the size of the interested research community and therefore an increased need for coordination mechanisms, Vint Cerf, then manager of the Internet Program at DARPA, formed several coordination bodies: an International Cooperation Board (ICB), chaired by Peter Kirstein of UCL, to coordinate activities with some cooperating European countries centered on Packet Satellite research; an Internet Research Group, which was an inclusive group providing an environment for general exchange of information; and an Internet Configuration Control Board (ICCB), chaired by Clark. The ICCB was an invitational body to assist Cerf in managing the burgeoning Internet activity.

In 1983, when Barry Leiner took over management of the Internet research program at DARPA, he and Clark recognized that the continuing growth of the Internet community demanded a restructuring of the coordination mechanisms. The ICCB was disbanded and in its place a structure of Task Forces was formed, each focused on a particular area of the technology (e.g. routers, end-to-end protocols, etc.). The Internet Activities Board (IAB) was formed from the chairs of the Task Forces. It of course was only a coincidence that the chairs of the Task Forces were the same people as the members of the old ICCB, and Dave Clark continued to act as chair.

After some changing membership on the IAB, Phill Gross became chair of a revitalized Internet Engineering Task Force (IETF), at the time merely one of the IAB Task Forces. As we saw above, by 1985 there was a tremendous growth in the more practical/engineering side of the Internet. This growth resulted in an explosion in the attendance at the IETF meetings, and Gross was compelled to create substructure to the IETF in the form of working groups.

This growth was complemented by a major expansion in the community. No longer was DARPA the only major player in the funding of the Internet. In addition to NSFNet and the various US and international government-funded activities, interest in the commercial sector was beginning to grow. Also in 1985, both Kahn and Leiner left DARPA and there was a significant decrease in Internet activity at DARPA. As a result, the IAB was left without a primary sponsor and increasingly assumed the mantle of leadership.

The growth continued, resulting in even further substructure within both the IAB and IETF. The IETF combined Working Groups into Areas, and designated Area Directors. An Internet Engineering Steering Group (IESG) was formed of the Area Directors. The IAB recognized the increasing importance of the IETF, and restructured the standards process to explicitly recognize the IESG as the major review body for standards. The IAB also restructured so that the rest of the Task Forces (other than the IETF) were combined into an Internet Research Task Force (IRTF) chaired by Postel, with the old task forces renamed as research groups.

The growth in the commercial sector brought with it increased concern regarding the standards process itself. Starting in the early 1980s and continuing to this day, the Internet grew beyond its primarily research roots to include both a broad user community and increased commercial activity. Increased attention was paid to making the process open and fair. This, coupled with a recognized need for community support of the Internet, eventually led to the formation of the Internet Society in 1991, under the auspices of Kahn's Corporation for National Research Initiatives (CNRI) and the leadership of Cerf, then with CNRI.

In 1992, yet another reorganization took place: the Internet Activities Board was re-organized and re-named the Internet Architecture Board, operating under the auspices of the Internet Society. A more "peer" relationship was defined between the new IAB and IESG, with the IETF and IESG taking a larger responsibility for the approval of standards. Ultimately, a cooperative and mutually supportive relationship was formed between the IAB, IETF, and Internet Society, with the Internet Society taking on as a goal the provision of service and other measures which would facilitate the work of the IETF.

The recent development and widespread deployment of the World Wide Web has brought with it a new community, as many of the people working on the WWW have not thought of themselves as primarily network researchers and developers. A new coordination organization was formed, the World Wide Web Consortium (W3C). Initially led from MIT's Laboratory for Computer Science by Tim Berners-Lee (the inventor of the WWW) and Al Vezza, the W3C has taken on the responsibility for evolving the various protocols and standards associated with the Web.

Thus, through over two decades of Internet activity, we have seen a steady evolution of organizational structures designed to support and facilitate an ever-increasing community working collaboratively on Internet issues.
8. COMMERCIALIZATION OF THE TECHNOLOGY

Commercialization of the Internet involved not only the development of competitive, private network services, but also the development of commercial products implementing the Internet technology. In the early 1980s, dozens of vendors were incorporating TCP/IP into their products because they saw buyers for that approach to networking. Unfortunately they lacked both real information about how the technology was supposed to work and how the customers planned on using this approach to networking. Many saw it as a nuisance add-on that had to be glued on to their own proprietary networking solutions: SNA, DECNet, Netware, NetBios. The DoD had mandated the use of TCP/IP in many of its purchases but gave little help to the vendors regarding how to build useful TCP/IP products.

In 1985, recognizing this lack of information availability and appropriate training, Dan Lynch in cooperation with the IAB arranged to hold a three day workshop for ALL vendors to come learn about how TCP/IP worked and what it still could not do well. The speakers came mostly from the DARPA research community who had both developed these protocols and used them in day to day work. About 250 vendor personnel came to listen to 50 inventors and experimenters. The results were surprises on both sides: the vendors were amazed to find that the inventors were so open about the way things worked (and what still did not work) and the inventors were pleased to listen to new problems they had not considered, but were being discovered by the vendors in the field. Thus a two way discussion was formed that has lasted for over a decade.

After two years of conferences, tutorials, design meetings and workshops, a special event was organized that invited those vendors whose products ran TCP/IP well enough to come together in one room for three days to show off how well they all worked together and also ran over the Internet. In September of 1988 the first Interop trade show was born. 50 companies made the cut. 5,000 engineers from potential customer organizations came to see if it all did work as was promised. It did. Why? Because the vendors worked extremely hard to ensure that everyone's products interoperated with all of the other products - even with those of their competitors. The Interop trade show has grown immensely since then and today it is held in 7 locations around the world each year to an audience of over 250,000 people who come to learn which products work with each other in a seamless manner, learn about the latest products, and discuss the latest technology.

In parallel with the commercialization efforts that were highlighted by the Interop activities, the vendors began to attend the IETF meetings that were held 3 or 4 times a year to discuss new ideas for extensions of the TCP/IP protocol suite. Starting with a few hundred attendees mostly from academia and paid for by the government, these meetings now often exceed a thousand attendees, mostly from the vendor community and paid for by the attendees themselves. This self-selected group evolves the TCP/IP suite in a mutually cooperative manner. The reason it is so useful is that it is comprised of all stakeholders: researchers, end users and vendors.

Network management provides an example of the interplay between the research and commercial communities. In the beginning of the Internet, the emphasis was on defining and implementing protocols that achieved interoperation. As the network grew larger, it became clear that the sometimes ad hoc procedures used to manage the network would not scale. Manual configuration of tables was replaced by distributed automated algorithms, and better tools were devised to isolate faults. In 1987 it became clear that a protocol was needed that would permit the elements of the network, such as the routers, to be remotely managed in a uniform way. Several protocols for this purpose were proposed, including the Simple Network Management Protocol or SNMP (designed, as its name would suggest, for simplicity, and derived from an earlier proposal called SGMP), HEMS (a more complex design from the research community) and CMIP (from the OSI community). A series of meetings led to the decision that HEMS would be withdrawn as a candidate for standardization, in order to help resolve the contention, but that work on both SNMP and CMIP would go forward, with the idea that SNMP could be a more near-term solution and CMIP a longer-term approach. The market could choose the one it found more suitable. SNMP is now used almost universally for network-based management.

In the last few years, we have seen a new phase of commercialization. Originally, commercial efforts mainly comprised vendors providing the basic networking products, and service providers offering the connectivity and basic Internet services. The Internet has now become almost a "commodity" service, and much of the latest attention has been on the use of this global information infrastructure for support of other commercial services. This has been tremendously accelerated by the widespread and rapid adoption of browsers and the World Wide Web technology, allowing users easy access to information linked throughout the globe. Products are available to facilitate the provisioning of that information and many of the latest developments in technology have been aimed at providing increasingly sophisticated information services on top of the basic Internet data communications.

9. HISTORY OF THE FUTURE

On October 24, 1995, the FNC unanimously passed a resolution defining the term Internet. This definition was developed in consultation with members of the internet and intellectual property rights communities.

RESOLUTION: The Federal Networking Council (FNC) agrees that the following language reflects our definition of the term "Internet". "Internet" refers to the global information system that -- (i) is logically linked together by a globally unique address space based on the Internet Protocol (IP) or its subsequent extensions/follow-ons; (ii) is able to support communications using the Transmission Control Protocol/Internet Protocol (TCP/IP) suite or its subsequent extensions/follow-ons, and/or other IP-compatible protocols; and (iii) provides, uses or makes accessible, either publicly or privately, high level services layered on the communications and related infrastructure described herein.
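The SNMP design discussed above can be illustrated in miniature. SNMP names every managed object with an OID, a dotted sequence of integers forming a path into a tree of definitions (the MIB), and a GET request resolves an OID to the agent's current value. The sketch below is a toy, in-process model only: the tiny MIB fragment and agent state are simplified stand-ins (the system-group OIDs shown are the well-known MIB-II ones), not a real SNMP implementation.

```python
# Toy model of SNMP naming: OIDs are dotted integer paths into a tree.
# MIB fragment mapping OIDs to object names (simplified illustration).
mib = {
    "1.3.6.1.2.1.1.3": "sysUpTime",   # MIB-II system group
    "1.3.6.1.2.1.1.5": "sysName",
    "1.3.6.1.2.1.2.2": "ifTable",     # interfaces table
}

# Hypothetical state held by a managed element (e.g., a router).
agent_state = {"sysUpTime": 123456, "sysName": "router-1"}

def snmp_get(oid: str):
    """Resolve an OID to its object name, then return the agent's
    current value for it: the essence of an SNMP GET operation."""
    name = mib.get(oid)
    if name is None:
        return None  # a real agent would signal a noSuchObject error
    return agent_state.get(name)

print(snmp_get("1.3.6.1.2.1.1.5"))  # -> router-1
```

The uniformity the text describes comes from this single naming scheme: any element that speaks the protocol can be queried the same way, whatever vendor built it.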
The Internet has changed much in the two decades since it came into existence. It was conceived in the era of time-sharing, but has survived into the era of personal computers, client-server and peer-to-peer computing, and the network computer. It was designed before LANs existed, but has accommodated that new network technology, as well as the more recent ATM and frame switched services. It was envisioned as supporting a range of functions from file sharing and remote login to resource sharing and collaboration, and has spawned electronic mail and more recently the World Wide Web. But most important, it started as the creation of a small band of dedicated researchers, and has grown to be a commercial success with billions of dollars of annual investment.

One should not conclude that the Internet has now finished changing. The Internet, although a network in name and geography, is a creature of the computer, not the traditional network of the telephone or television industry. It will, indeed it must, continue to change and evolve at the speed of the computer industry if it is to remain relevant. It is now changing to provide such new services as real time transport, in order to support, for example, audio and video streams. The availability of pervasive networking (i.e., the Internet) along with powerful affordable computing and communications in portable form (i.e., laptop computers, two-way pagers, PDAs, cellular phones), is making possible a new paradigm of nomadic computing and communications.

This evolution will bring us new applications - Internet telephone and, slightly further out, Internet television. It is evolving to permit more sophisticated forms of pricing and cost recovery, a perhaps painful requirement in this commercial world. It is changing to accommodate yet another generation of underlying network technologies with different characteristics and requirements, from broadband residential access to satellites. New modes of access and new forms of service will spawn new applications, which in turn will drive further evolution of the net itself.

The most pressing question for the future of the Internet is not how the technology will change, but how the process of change and evolution itself will be managed. As this paper describes, the architecture of the Internet has always been driven by a core group of designers, but the form of that group has changed as the number of interested parties has grown. With the success of the Internet has come a proliferation of stakeholders - stakeholders now with an economic as well as an intellectual investment in the network. We now see, in the debates over control of the domain name space and the form of the next generation IP addresses, a struggle to find the next social structure that will guide the Internet in the future. The form of that structure will be harder to find, given the large number of concerned stakeholders. At the same time, the industry struggles to find the economic rationale for the large investment needed for future growth, for example to upgrade residential access to a more suitable technology. If the Internet stumbles, it will not be because we lack for technology, vision, or motivation. It will be because we cannot set a direction and march collectively into the future.

Figure 1: Timeline

10. REFERENCES

1. P. Baran, "On Distributed Communications Networks," IEEE Trans. Comm. Systems, March 1964.

2. V. G. Cerf and R. E. Kahn, "A Protocol for Packet Network Interconnection," IEEE Trans. Comm. Tech., vol. COM-22, no. 5, pp. 627-641, May 1974.

3. S. Crocker, "Host Software," RFC 1, April 7, 1969.

4. R. Kahn, Communications Principles for Operating Systems, internal BBN memorandum, Jan. 1972.

5. Proceedings of the IEEE, Special Issue on Packet Communication Networks, vol. 66, no. 11, November 1978. (Guest editor: Robert Kahn; associate guest editors: Keith Uncapher and Harry van Trees.)

6. L. Kleinrock, "Information Flow in Large Communication Nets," RLE Quarterly Progress Report, July 1961.

7. L. Kleinrock, Communication Nets: Stochastic Message Flow and Delay, McGraw-Hill (New York), 1964.

8. L. Kleinrock, Queueing Systems: Vol. II, Computer Applications, John Wiley and Sons (New York), 1976.

9. J.C.R. Licklider and W. Clark, "On-Line Man Computer Communication," August 1962.

10. L. Roberts and T. Merrill, "Toward a Cooperative Network of Time-Shared Computers," Fall AFIPS Conf., Oct. 1966.

11. L. Roberts, "Multiple Computer Networks and Intercomputer Communication," ACM Gatlinburg Conf., October 1967.
🔐 Privatizing the Internet: Locking the Doors
🌐 What Does It Mean?
Privatizing the internet means restricting or controlling access to parts of the internet, typically by governments or corporations. It's the opposite of a fully open, decentralized network. Think of it like building walls and gates in a digital city. 🧱🚪
🛡️ Purpose in Cybersecurity
1. National Security: Countries may privatize portions of the internet to prevent cyber espionage or
terrorist activity.
2. Data Sovereignty: Ensures that data stays within national borders (e.g., China's Great Firewall 🇨🇳).
3. Corporate Control: Companies may build private networks (Intranets) for sensitive data, safe
from the chaos of the public web.
🧠 Example
A government mandates that all citizen data be stored on local servers, not global cloud
platforms. ☁️➡️ 🏠
⚖️ Pros & Cons
✅ Pros: Enhanced security
❌ Cons: Limits freedom of information
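As an aside to the corporate-intranet point above: IP networking reserves specific address ranges for private networks, and Python's standard ipaddress module can test for them. A minimal, illustrative sketch (the sample addresses are arbitrary):

```python
import ipaddress

# 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16 are the reserved
# private IPv4 ranges commonly used inside intranets.
for addr in ["10.0.0.5", "192.168.1.20", "8.8.8.8"]:
    ip = ipaddress.ip_address(addr)
    print(addr, "private" if ip.is_private else "public")
```

Traffic to private addresses is not routable on the public internet, which is one simple, technical form of the "walls and gates" idea in these notes.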
"Build walls where needed 🏰, but also expand roads for progress 🛤️—while guarding every corner with cyber shields. 🛡️⚔️"
GAINING THE ADVANTAGE
Applying Cyber Kill Chain® Methodology to Network Defense
THE MODERN DAY ATTACKER
Cyberattacks aren’t new, but the stakes at every level are higher than ever. Adversaries are more
sophisticated, well-resourced, trained, and adept at launching skillfully planned intrusion campaigns called
Advanced Persistent Threats (APT). Our nation’s security and prosperity depend on critical infrastructure.
Protecting these assets requires a clear understanding of our adversaries, their motivations and strategies.
Adversaries are intent on the compromise and extraction of data for economic, political
and national security advancement. Even worse, adversaries have demonstrated
their willingness to conduct destructive attacks. Their tools and techniques have the
ability to defeat most common computer network defense mechanisms.
Stopping adversaries at any stage breaks the chain of attack! Adversaries must progress completely through all phases to succeed; this puts the odds in our favor, because defenders need to block them at only a single phase. Every intrusion is a chance to understand more about our adversaries and use their persistence to our advantage.
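The "break the chain at any phase" logic above can be sketched as a toy model. This is illustrative only: the phase names follow the Lockheed Martin Cyber Kill Chain® framework, but the `blocked_phases` set is a hypothetical stand-in for real defensive controls.

```python
# Toy model of the kill chain idea: an intrusion succeeds only if EVERY
# phase completes; the defender wins by blocking ANY single phase.
KILL_CHAIN_PHASES = [
    "reconnaissance",
    "weaponization",
    "delivery",
    "exploitation",
    "installation",
    "command_and_control",
    "actions_on_objectives",
]

def intrusion_succeeds(blocked_phases):
    """Return True only if the adversary progresses through all phases."""
    for phase in KILL_CHAIN_PHASES:
        if phase in blocked_phases:
            return False  # chain broken: the defense at this phase wins
    return True

print(intrusion_succeeds(set()))         # no defenses: the attack succeeds
print(intrusion_succeeds({"delivery"}))  # one blocked phase: the attack fails
```

The asymmetry the text describes falls out directly: the adversary must win at all seven phases, while the defender needs to win at just one.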
1 RECONNAISSANCE — Identify the Targets

2 WEAPONIZATION

ADVERSARY
The adversaries are in the preparation and staging phase of their operation. Malware generation is likely not done by hand – they use automated tools. A “weaponizer” couples malware and exploit into a deliverable payload.
• Obtain a weaponizer, either in-house or through public or private channels
• For file-based exploits, select a “decoy” document to present to the victim
• Select a backdoor implant and appropriate command and control infrastructure for the operation

DEFENDER
This is an essential phase for defenders to understand. Though they cannot detect weaponization as it happens, they can infer by analyzing malware artifacts. Detections against weaponizer artifacts are often the most durable and resilient defenses.
• Conduct full malware analysis – not just what payload it drops, but how it was made
• Build detections for weaponizers – find new campaigns and new payloads only because they re-used a weaponizer toolkit

3 DELIVERY

ADVERSARY
The adversaries convey the malware to the target. They have launched their operation.
• Adversary-controlled delivery: direct against web servers
• Adversary-released delivery: malicious email

DEFENDER
This is the first and most important opportunity for defenders to block the operation. A key measure of effectiveness is the fraction of intrusion attempts that are blocked at the delivery stage.
• Analyze the delivery medium – understand upstream infrastructure
RESOURCES
White Paper | Video | Article

Connect: cyber.security@lmco.com
855-LMCYBER (855-562-9237)
The opinions expressed in this publication are those of the author and do not necessarily
reflect the views of the Centre for International Governance Innovation or its Operating
Board of Directors or International Board of Governors.
Contents
Acronyms
Introduction
Conclusions
Acknowledgements
Works Cited
About CIGI
CIGI Masthead
GLOBAL COMMISSION ON INTERNET GOVERNANCE Paper Series: no. 1 — May 2014

ABOUT THE GLOBAL COMMISSION ON INTERNET GOVERNANCE

The Global Commission on Internet Governance was established in January 2014 to articulate and advance a strategic vision for the future of Internet governance. The two-year project conducts and supports independent research on Internet-related dimensions of global public policy, culminating in an official commission report that will articulate concrete policy recommendations for the future of Internet governance. These recommendations will address concerns about the stability, interoperability, security and resilience of the Internet ecosystem.

Launched by two independent global think tanks, the Centre for International Governance Innovation (CIGI) and Chatham House, the Global Commission on Internet Governance will help educate the wider public on the most effective ways to promote Internet access, while simultaneously championing the principles of freedom of expression and the free flow of ideas over the Internet.

The Global Commission on Internet Governance will focus on four key themes:
• enhancing governance legitimacy — including regulatory approaches and standards;
• stimulating economic innovation and growth — including critical Internet resources, infrastructure and competition policy;

ACRONYMS

CERTs: computer emergency response teams
CSIS: Center for Strategic International Studies
DDoS: distributed denial-of-service
DNS: domain name system
GATT: General Agreement on Tariffs and Trade
GGE: Group of Governmental Experts (UN)
IANA: Internet Assigned Numbers Authority
IEEE: Institute of Electrical and Electronics Engineers
IETF: Internet Engineering Task Force
ICANN: Internet Corporation for Assigned Names and Numbers
ISOC: Internet Society
ISP: Internet service provider
ITU: International Telecommunication Union
LOAC: Laws of Armed Conflict
NSA: National Security Agency (US)
W3C: World Wide Web Consortium
WCIT: World Conference on International Telecommunications
WGIG: Working Group on Internet Governance (UN)
WIPO: World Intellectual Property Organization
WTO: World Trade Organization
profit corporation under US law, although its procedures have evolved to include government voices (but not votes). In any event, its mandate is limited to domain names and assignment of top-level numeric addresses, not the full panoply of cyberspace governance. National governments control copyright and intellectual property laws, although they are subject to negotiation and litigation, sometimes within the frameworks of the World Intellectual Property Organization (WIPO) and the World Trade Organization (WTO). Governments also determine national spectrum allocation within an international framework negotiated at the International Telecommunication Union (ITU).

The United Nations Charter, the Laws of Armed Conflict (LOAC) and various regional organizations provide a general overarching framework as national governments try to manage problems of security and espionage. The Council of Europe’s Convention on Cybercrime (2014) in Budapest provides a legal framework that has been ratified by 42 states. Incident response teams (computer emergency response teams [CERTs] and CSIRTs [Computer Security Incident Response Teams]) cooperate regionally and globally to share information about disruptions. Bilateral negotiations, track two dialogues, regular forums and independent commissions strive to develop norms and confidence-building measures. Much of the governance effort occurs within national legal frameworks, although the technological volatility of the cyber domain means that laws and regulations are always chasing a moving target.

The cyberspace domain is often described as a public good or a global commons, but these terms are an imperfect fit. A public good is one from which all can benefit and none should be excluded, and while this may describe some of the information protocols of the Internet, it does not describe the physical infrastructure, which is a scarce proprietary resource located within the boundaries of sovereign states and more like a “club good” available to some, but not all. And cyberspace is not a commons like the high seas, because parts of it are under sovereign control. At best, it is an “imperfect commons” or a condominium of joint ownership without well-developed rules (pers. comm. with James A. Lewis; see Center for Strategic International Studies [CSIS] 2008). It has also been termed a club good where a shared resource is subject to various degrees of exclusion according to the rules and agreements of different institutions (Raymond 2013).

Cyberspace can also be categorized as what Elinor Ostrom termed a “common pool resource,” from which exclusion is difficult and exploitation by one party can subtract value for other parties.3 Government is not the sole solution to such common pool resource problems. Ostrom showed that community self-organization is possible under certain conditions. However, the conditions that she associated with successful self-governance are weak in many parts of the cyber domain because of the large size of the resource, the large number of users and the poor understanding of how the system will evolve (among others).

3 See Ostrom et al. (1999, 278), for a challenge to Garrett Hardin’s (1968, 1243) formulation of “the tragedy of the commons.”

In its earliest days, the Internet was like a small village of known users — an authentication layer of code was not necessary and development of norms was simple in a climate of trust. All of that changed with burgeoning growth and commercial use. While the openness and accessibility of cyberspace as a medium of communication provide valuable benefits to all, free-riding behaviour in the form of crime, attacks and threats creates insecurity. The result is a demand for protection that can lead to fragmentation, “walled gardens,” private networks and cyber equivalents to the seventeenth century enclosures that were used to solve that era’s “tragedy of the commons” (Ostrom 2009, 421; Hurwitz 2009). Internet experts worry about “balkanization” or fragmentation. To some extent that has already occurred, yet most states do not want fragmentation into a “splinter-net” that would curtail economic benefits.

Providing security is a classic function of government, and some observers believe that growing insecurity will lead to an increased role for governments in cyberspace. Many states desire to extend their sovereignty in cyberspace, seeking the technological means to do so. As Deibert and Rohozinski (2010) put it, “securing cyberspace has definitely entailed a ‘return of the state’ but not in ways that suggest a return to the traditional Westphalian paradigm of state sovereignty.” Moreover, while accounts of cyberwar have been exaggerated, cyber espionage is rampant and more than 30 governments are reputed to have developed offensive capabilities and doctrines for the use of cyber weapons (Rid 2013). US Cyber Command has announced plans to employ 6,000 professionals by 2016 (Garamone 2014). Ever since the Stuxnet virus was used to disrupt Iran’s nuclear centrifuge program in 2009 and 2010, the hypothetical use of cyber weapons has become very real to governments (Demchak and Dombrowski 2011, 32).

Efforts to attack or secure a government network also involve the use of cyber weapons by non-state actors. The number of criminal attacks has increased, with estimates of global costs ranging from US$80–400 billion annually (Lewis and Baker 2013, 5). Corporations and private actors, however, can also help to protect the Internet, and this often entails devolution of responsibilities and authority (Deibert and Rohozinski 2010, 30; see Demchak and Dombrowski 2011). For example, banking and financial firms have developed their own elaborate systems of security and punishment through networks of connectedness, such as depriving repeat offenders of their trading rights, and by slowing speeds and raising transaction costs for addresses that are associated with suspect behaviour. Informal consortia, such as the Conficker Working Group, have arisen to deal with particular problems, and hacker groups like Anonymous have acted to punish corporate and government behaviour of which they disapprove.

Governments want to protect the Internet so their societies can continue to benefit from it, but at the same time, they also want to protect their societies from what might come through the Internet. China, for example, has developed a firewall and pressures Chinese companies to self-censor behind it, and the country could reduce its connections to the Internet if it is attacked (Clarke and Knake 2012, 146). Nonetheless, China — and other governments — still seeks the economic benefits of connectivity. The tension between protection of the Internet and protecting society leads to imperfect compromises (see Zittrain 2008). Reaching an agreement on norms to govern security is complicated by the fact that while Western countries speak of “cyber security,” authoritarian countries such as Russia and China refer to “information security,” which includes censorship of content that would be constitutionally protected in democratic states.

These differences were dramatized at the December 2012 World Conference on International Telecommunications (WCIT) convened by the ITU in Dubai. Although the meeting was ostensibly about updating telephony regulations, the underlying issue was the extent to which the ITU would play a role in the governance of the Internet. Authoritarian countries, and many developing countries, feel that their approach to security and development would benefit from the UN bloc politics that characterize the ITU. Moreover, they dislike the fact that ICANN is a non-profit incorporated in the United States and at least partially accountable to the US Commerce Department. Western governments, on the other hand, fear that the cumbersome features of the ITU would undercut the flexibility of the “multi-stakeholder” process that stresses the role of the private and non-profit sectors as well as governments. While there are different interpretations of multi-stakeholderism, which can be traced back to the Geneva and Tunis meetings of the UN’s World Summit on the Information Society in 2003 and 2005 (Maurer 2011), respectively, the vote in Dubai was 89 to 55 (Klimburg 2013, 3) against the “Western” governments (including Japan and India). In the aftermath of the WCIT conference, there were articles about the crisis in Internet governance and worries about a new Cold War (see Klimburg 2013; Mueller 2012). Many of these fears were overstated, however, if one looks at cyber governance through the lens of regime theory.

REGIMES AND REGIME COMPLEXES

Regimes are a subset of norms, which are shared expectations about appropriate behaviour. Norms can be descriptive, prescriptive or both. They can also be institutionalized (or not) to varying degrees. A regime has a degree of hierarchical coherence among norms. A regime complex is a loosely coupled set of regimes. On a spectrum of formal institutionalization, a regime complex is intermediate between a single legal instrument at one end and fragmented arrangements at the other. While there is no single regime for the governance of cyberspace, there is a set of loosely coupled norms and institutions that ranks somewhere between an integrated institution that imposes regulation through hierarchical rules, and highly fragmented practices and institutions with no identifiable core and non-existent linkages.

The oval map of cyber governance activities in Figure 1 mixes norms, institutions and procedures, some of which are large in scale, while others are relatively small; some are quite formal and some very informal. The labels are often arbitrary.4 The oval is not designed to map all governance activities in cyberspace (which is a massive undertaking) and, thus, is deliberately incomplete. Like all heuristics, it distorts reality as it simplifies. Nonetheless, it is a useful corrective to the usual UN versus multi-stakeholder dichotomy as an approach to cyber governance, and it locates Internet governance within the larger context of cyber governance. First, it indicates the extent and wide range of actors and activities related to governance that exist in the space. Second, it separates issues related to the technical function of connectivity, such as the domain name system (DNS) and technical standards where a relatively coherent and hierarchical regime exists, from the much broader range of issues that constitute the larger regime complex. Third, it encourages us to think of layers and domains of cyber governance that are much broader than just the issues of DNS and ICANN, which have limited functions and little to do directly with larger issues such as security, human rights or development. As Laura DeNardis (2014, 226) writes, “a question such as ‘who should control the Internet, the United Nations or some other organization’ makes no sense whatsoever. The appropriate question involves determining what is the most effective form of governance in each specific context.”

4 I am indebted to Alexander Klimburg for help with the labels.

When we look at the whole range of cyber governance issues, some of the bipolarity in alignments that characterized the WCIT begins to erode. Liberalism is not the only divide. For example, some of the countries that voted against the West were not authoritarian, but were post-colonial or developing countries concerned about issues of sovereignty, which can be swayed by programs to develop their cyber capabilities or to protect the interests of their telecom companies. Also, within the liberal democratic bloc, there are important differences between the United States and Europe over issues of privacy, which have been increased by Edward Snowden’s revelations regarding surveillance. Such issues may wind up having strong effects and being resolved within trade
[Figure 1: the oval map of cyber governance activities. Clusters include: International Law Conventions (UN Charter, UNGA Resolutions and LOAC); Human Rights Regimes (ICCPR); Government Groupings (G8, G20, 3G and OECD); UN — 1/3 Committee (GGE); UN — WSIS Process (IGF, WSIS and WGIG); Telecom Regimes (ITU (ITRs) and GUCCI Conferences); Incident Response Regimes (London Process, IWWN and FIRST); International Policy Standards (NETmundial and 1Net Group); Internet Technical Standards (ICANN, IANA, ISOC and RIRs; IETF, W3C and IAB); Regional Organizations (CoE, OSCE, SCO, OAS and ARF); Intelligence Community Alliances (“Five Eyes” and Bern Group); Corporate Decisions (ISPs and telcos, including routing and content); Independent Commissions (Toomas Hendrik Ilves, Carl Bildt); Civil Rights Organizations (EFF, Freedom House and Access); Trade Regimes (WTO and Wassenaar Arrangement); Intellectual Property Regimes (WIPO and ACTA). Source: Author.]
agreements like the proposed Trans-Atlantic Trade and Investment Partnership. It oversimplifies the politics of cyber governance to compress all of these dimensions into a bipolar dispute over liberal versus authoritarian approaches to content control.

This mapping of a regime complex also indicates the importance of linkages of cyber to normative and regime structures outside the issue area. The various actors that are located at the edge of the oval have independent structures of power and institutions outside the cyber issue area, but still play a significant role in issues of cyber governance. In other words, much of cyber governance comes from actors and institutions that are not focused purely on cyber. Moreover, these institutions compete and are used in a process of “contested multilateralism,” whereby state and non-state actors seek to shape the norms that govern activities within the oval (Morse and Keohane, forthcoming).

Finally, this approach helps to relieve some of the fears of extreme balkanization. Interference with the central regime of domain names and standards could fragment the functioning of the Internet, and it might make sense to consider a special treaty limited to that area (Sofaer, Clark and Diffie 2010). However, trying to develop a treaty for the broad range of cyberspace as a whole could be counterproductive. The loose coupling among issues that now exists permits cooperation among actors in some areas at the same time that they have disagreements in others. For example, China and the United States can use the Internet for economic cooperation even as they differ on human rights and content control. Countries could cooperate on cybercrime, even while they differ on laws of war or espionage.

What regime complexes lack in coherence, they make up in flexibility and adaptability. Particularly in a domain with extremely volatile technological change, these characteristics help both states and non-state actors to adjust to uncertainty. Moreover, they permit the formation of clubs or smaller groupings of like-minded states that can pioneer the development of norms that may be extended to larger groups at a later time. As Keohane and Victor (2011, 7) note of the regime complex for climate change, “adaptability and flexibility are particularly important in a setting...in which the most demanding international commitments are interdependent yet governments vary widely in their interest and ability to implement them.”

NORMS AND CYBER SUB-ISSUES

The norms that affect the various sub-issues of regime complexes can be compared along a variety of dimensions such as effectiveness, resilience, autonomy and others (Hasenclever, Mayer and Rittberger 1997). It is more useful to compare cyber issues in terms of four dimensions: depth, breadth, fabric and compliance. Depth refers to the hierarchical coherence of a set of rules or norms. Is there an overarching set of rules, which are compatible and mutually reinforcing (even if they are not adhered to or complied with by all actors)? For example, on the issue of domain names and standards, the norms, rules and procedures have coherence and depth; however, on the issue of espionage, there are few. Breadth refers to the scope of the numbers of state and non-state actors that have accepted a set of norms (whether they fully comply or not). For instance, on the issue of crime, 42 states have ratified the Budapest convention.

“Fabric” refers to the mix of state and non-state actors in an issue area. This is particularly interesting in cyber because the low barriers to entry mean many of the resources and much of the action is controlled by non-state actors. Issues with a high degree of state control have a “tight fabric”; those where non-state actors are pre-eminent have a loosely woven fabric. Security issues such as the laws of war in cyber have a tight fabric of sovereign control, while the DNS has a loose fabric in which non-state actors play a major role. As suggested above, a loosely woven fabric is not synonymous with shallowness or incoherence. A fourth dimension for comparison is compliance: how widespread is the behavioural adherence to a set of norms? For instance, on the sub-issue of domain names and standards, compliance is high; on issues of privacy it is mixed; and on human rights it is low. Some of the major sub-issues of the cyber regime complex are compared along these dimensions below. (The list is not designed to be complete and other rows for trade, intellectual property or development can easily be added to the table.)
[Table: comparison of cyber sub-issues along the dimensions of depth, breadth, fabric and compliance. Source: Author.]
The variation in the characteristics of these sub-issues suggests why cyberspace is likely to remain a regime complex rather than a single, strong regime for some time. As Keohane and Victor (2011, 8) argue in regard to climate change, it is “actually many different cooperation problems, implying different tasks and structures. Three forces — the distribution of interests, the gains from linkages, and the management of uncertainty — help to account for the variation in the institutional outcomes, from integration to fragmentation.” This is clearly true of cyberspace as well, though it is important to notice that there is one area of the cyber domain where interests and gains from linkages are strong enough that a coherent regime exists.

Partly because of strong common interests in connectivity, and partly because of path dependency and the way the basic standards of the Internet were established in the United States, there is a core regime related to standards and assigned names and numbers, including management of the DNS root zone servers. While there has been controversy about the status of ICANN, and the US government has indicated it plans to devolve the IANA function to ICANN in the future, no state has thus far found it would benefit from ceasing to comply. The development of standards is advanced primarily by non-state actors, such as the IETF, the W3C, the IEEE and others, where states and voting have minimal effect. This is the area of cyber where the concept of multi-stakeholderism is most apparent.

Crime might seem to be the next likely sub-issue to be susceptible to regime formation. The issue has a loose fabric in which spammers, criminals and other free riders impose large costs on both states and private actors. The Budapest convention provides a coherent structure with depth, but its breadth has been limited by its origins in Europe. Many post-colonial countries and authoritarian countries such as Russia and China object to obligations that they see as intrusions on their sovereignty as well as the European origin of the norms. Some developing countries also see little to gain by joining, as few of their national companies would benefit, while they fear the potentially high costs of enforcement, should they become signatories. Moreover, some private companies find it is in their economic interest to hide the extent to which they have been victimized and simply absorb it as a business cost, rather than suffer reputational and regulatory costs. States may also think that the costs are not high enough to merit action — even if cybercrime costs US$400 billion, that is still only about 0.5 percent of global GDP. Thus, insurance markets are difficult to develop and compliance is far from satisfactory. This may change in the future if the costs of cybercrime increase, given its sophistication and scope. Despite differences over what information activities constitute a crime in authoritarian and democratic countries, cooperation could be modelled after extradition laws that relate to actions that are “doubly criminal” — that is, illegal in both countries.

War has an overarching normative structure that is derived from the UN Charter and the LOAC. The issue has a tight structure growing out of the nature of war as a sovereign action of states. The third meeting of the UN’s GGE, which concluded in July 2013, agreed in principle that such laws applied in the cyber domain. What this means in practice, when there is great technological uncertainty, is more challenging. While a group of NATO legal scholars has produced the Tallinn Manual on International Law Applicable to Cyber Warfare — which attempts to translate general principles regarding proportion, discrimination and collateral damage into the cyber domain — the scope of the acceptance of these principles has been limited by its origins (Schmitt 2013). While there has been no cyberwar in a strict sense, there has been cyber sabotage, such as Stuxnet, and cyber instruments, such as distributed denial-of-service (DDoS) attacks, which were used in the Russian invasion of Georgia. On the other hand, there have been press accounts that the United States decided not to use cyber adjuncts in Iraq, Libya and elsewhere, because of uncertainties about civilians and collateral damage (Schmitt and Shanker 2011; Markoff and Shanker 2009). Thus, compliance with these norms is judged to be mixed.

According to press accounts, there is extensive use of cyber espionage by a wide variety of states and non-state actors. While espionage is an ancient practice that is not against international law, it often violates the domestic laws of sovereign states. Traditionally (for example, in the US-Soviet competition during the Cold War), rough “rules of the road” led to reciprocal expulsions and reductions in diplomatic missions as a means of regulating the friction created by espionage. Thus far, cyber espionage is so easy and relatively safe that no such rules of the road have been developed. The United States has complained about Chinese cyber espionage that steals intellectual property, and raised the issue at the summit between US President Barack Obama and President of the People’s Republic of China Xi Jinping in June 2013. However, the US effort to create a norm that differentiates spying for commercial gain from all other spying has been lost in the noise created by the revelations of extensive National Security Agency (NSA) surveillance released by Snowden (Goldsmith 2013). Moreover, normative efforts have been plagued by the loose fabric of the issue. Although the exposure of Chinese spying in 2013 by Mandiant suggested a clear government connection, many other instances are more ambiguous about whether they are by government or non-state actors (Sanger, Barboza and Perlroth 2013).

Privacy is a sub-issue of growing importance given the increases in computing power and storage that are often summarized as the “era of big data.” There are widespread concerns about companies, criminals and governments storing and misusing personal data. At the same time, in
the age of social media, there are changing generational attitudes in many societies about where to draw the appropriate lines between public and private. Private terms-of-service agreements are often cumbersome and opaque to consumers. Additionally, personal identification information, once on the Internet, can end up in numerous places, rendering futile most efforts to have the initial posting site remove it. At the same time, European efforts to enforce a “right to be forgotten” with legal excisions of history have raised concerns among some civil libertarians. The concept of privacy is poorly defined and understood, and has very different legal structures in Europe and the United States, not to mention authoritarian states (see Brenner 2014). Thus, it is not surprising that while there are conflicting norms, the normative structure for the sub-issue lacks depth, breadth or compliance.

Content control is another sub-issue with conflicting norms with little depth or breadth. For authoritarian states, information that crosses borders by any means and jeopardizes the stability of a regime is a threat. The SCO has, therefore, expressed a concern about information security, and Russia and China have proposed UN resolutions to that effect. In practice, authoritarian countries filter such threatening messages and would like to have a normative structure that would encourage other states to comply. But the United States could not stop a Falun Gong email to China without violating the free speech clauses of the US Constitution. This is why democratic countries refer to cyber security and argue against the control of the content of Internet packets.

At the same time, democratic countries do control some content. Most try to stop child pornography but are divided on issues such as hate speech, and many Internet corporations have been caught between conflicting national legal systems. Moreover, this sub-issue has a loosely woven fabric and various private groups create black and gray lists of what they regard as violators of various norms. In some cases, these vigilantes have been able to borrow the authority of government (Mueller 2010, chapter 9). Copyright is another important area related to content control. For example, the proposed Stop Online Piracy Act in the US Congress would have required Web hosting companies, search engines and ISPs to sever relations with websites and users found in violation of copyright. While such measures have met with strong resistance, it is likely they will remain contentious both in domestic and transnational politics. Thus, there is no depth, breadth or widespread compliance with a normative structure for content control.

Human rights is a cyber sub-issue that has many of the same problems of conflicting values that plague content control, but there is an overriding legal structure in the form of the Universal Declaration of Human Rights. Moreover, in June 2012, the UN Human Rights Council affirmed […] protected online. Within the declaration, however, there is a potential tension between Article 19 (freedom of opinion and expression) and Article 29 (public order and general welfare). On the other hand, different states interpret the declaration in different ways, and authoritarian states that feel threatened by freedom of speech or assembly make no exceptions for the Internet. The US government has proclaimed an Internet freedom agenda, but has not explained whether this includes a right of privacy for foreigners. This agenda has also been complicated in the wake of the Snowden revelations. In 2011, the Netherlands held a conference that launched a Freedom Online Coalition, which now includes 22 states committed to human rights online, but the disparities in behaviour led to the conclusion that the normative structure in this sub-issue lacks depth, breadth or compliance. Nonetheless, the loose fabric of the issue allows ample opportunity for non-state actors to press for human rights in cyberspace. For instance, the civil society organization Global Network Initiative has been pressing private companies to sign up to principles that advance transparency and respect human rights (MacKinnon 2012, chapter 14).

THE FUTURE DYNAMICS OF THE CYBER REGIME COMPLEX

Given the youth of the issue and the volatility of the technology, there are many potential paths along which cyber norms may evolve. Regime theorists have developed three quite different causal models that tend to complement each other. Realists argue that regimes are created and sustained by the most powerful state. Such hegemons have the incentive to provide public goods and discipline free riders because they will benefit disproportionately. But, as their power ebbs, the maintenance of regimes becomes more difficult (Gilpin 1987). From this point of view, the declining US control of the Internet suggests future fragmentation.

A second approach, liberal institutionalism, emphasizes the rational self-interest of states seeking the benefits of cooperative solutions to collective action problems. Regimes and their institutions help states achieve benefits by providing information and reducing transactions costs. They cut contracting costs, provide focal points, enhance transparency and credibility, monitor compliance and provide a basis for sanctioning deviant behaviour (Keohane 1984). This approach helps to explain why a regime exists for the DNS, where perceived interests in cooperation are high, while a regime does not exist in the sub-issue of espionage, where interests diverge significantly.

A constructivist set of theories emphasizes cognitive factors, such as how constituencies, groups and social movements change the perception and organization of their interests over time (Ruggie 1998). It is a cliché that
states act in their national interest. The important question
that the same rights that people have off-line must also be
is how those interests are perceived and implemented. accompanied the Russian disruption of Estonia in 2007
This is particularly important in the cyber domain, where and invasion of Georgia in 2008; the establishment of the
the technology is new, and states are still struggling to American Cyber Command in 2009; and the discovery
understand and define their interests. In a chronological of Stuxnet in 2010. Others point to the 2013 Snowden
analogy, state learning of interests in the cyber domain is revelations that the NSA not only carried out espionage
equivalent to about the year 1960, in what was then a new (which is not new or unique), but allegedly subverted
technology of nuclear weapons and nuclear energy (Nye encryption standards and open-source software. Some
2011a). It was not until 1963 that the first arms control treaty technologists believe that trust can be rebuilt from the
was ratified — the atmospheric test ban — and 1968 that bottom up with new software technologies, as well as
the Non-Proliferation Treaty was signed. The situation in procedures for inspection of hardware supply chains.
cyber is made more complex by the much greater roles of a Others argue that low trust will be a persistent condition
diverse set of private and non-profit actors responding to and it will exacerbate a fragmenting trend toward greater
rapid social and economic change. Transnational epistemic control by sovereign states (see Schneier 2013).
communities of people and groups that share ideas and
outlooks — such as ISOC and the IETF — play important Some analysts reinforce their pessimistic projections
roles (Adler and Haas 1992). Over time, the extent and by pointing to realist theories about the decline of US
interests of these cyber epistemic communities has grown. hegemony over the Internet. In its early days, the Internet
Cognitive theories help to explain the evolution of norms, was largely American, but today, China has twice as
but also why there is considerable fragmentation in the many users as the United States. Where once only roman
normative structures of sub-issues like privacy, content characters were used on the internet and HTML tags
control and human rights. were based on abbreviated English words, now there are
generic top-level domain names in Chinese, Arabic and
Optimists about the development of norms in the cyber Cyrillic scripts, with more alphabets expected to come
regime complex can point to some recent evidence of online shortly (ICANN 2013). And in 2014, the United
progress. For example, the disagreement between the States announced that it would relax its Department
sovereigntist and multi-stakeholder philosophies seemed of Commerce’s supervision of ICANN and the IANA
somewhat less stark at the NETmundial conference in function. Some experts worried that this would open the
Sao Paolo, Brazil in 2014 than at the WCIT conference in way for authoritarian states to try to exert control over
Dubai in 2012. Moreover, while early meetings of the GGE the system of root zone servers, and use that to censor the
were unable to reach consensus, the latest meeting reached addresses of opponents.
agreement on a number of points, including the principle
that international laws of war applied to cyberspace. In Such fears seem exaggerated both on technical grounds
addition, the number of states acceding to the Council and in their underlying premises. Not only would such
of Europe’s Convention on Cybercrime has gradually censorship be difficult, but, as liberal institutionalist
increased, and INTERPOL has established a cybercrime theories point out, there are self-interested grounds for
centre in Singapore. Forty-one states have agreed to use states to avoid such fragmentation of the Internet. In
the Wassenaar Arrangement on Export Controls for addition, the descriptions in the decline in US power
Conventional Arms and Dual-Use Goods and Technologies in the cyber regime are overstated. Not only does the
to stop sales of spyware to authoritarian countries. There United States remain the second-largest user of the
has been an increase in international and transnational Internet, but it is also the home of eight of the 10 largest
cooperation among CERTs. Before the recent dispute over global information companies (Statista 2013).5 Moreover,
Ukraine, the United States and Russia agreed that their when one looks at the composition of voluntary multi-
hotline arrangements would be extended to cyber events. stakeholder communities such as the IETF, one sees a
The United States and China established an official working disproportionate number of Americans participating for
group on cyber in 2013. Numerous track two groups and path dependent and technical expertise reasons. From an
various private conferences and commissions continued institutionalist or constructivist viewpoint, the loosening
to work on the development of norms. Industry groups of US influence over ICANN could be seen as a strategy
continued to work on standards regarding everything for strengthening the institution and reinforcing the
from undersea cable protection to financial services. And American multi-stakeholder philosophy rather than as a
non-profit groups pressed companies and governments to sign of defeat (Zittrain 2014).
protect privacy and human rights.
It is interesting to look at the experience of other regimes
Conversely, pessimists about normative change in the when US pre-eminence diminished in an issue area. In
cyber regime complex point to the overall decline of the trade, for example, the United States was by far the largest
trust that is so important in the issue area. Some observers trading nation when the General Agreement on Tariffs and
date this loss to what they see as the militarization
of cyberspace symbolized by: the DDOS attacks that 5 Note that Yahoo and Yahoo-Japan have been treated as one entity for
the purposes of company rankings.
Trade (GATT) was created in 1947, and the United States CONCLUSIONS
deliberately accepted trade discrimination by Europe and
Japan as part of its Cold War strategy. After those countries Predicting the future of the normative structures that will
recovered, they joined the United States in a club of like- govern the various issues of cyberspace is difficult because
minded nations within the GATT (Keohane and Nye of the newness and volatility of the technology, the rapid
2001). In the 1990s, as other states’ shares of global trade changes in economic and political interests, and the social
increased, the United States supported the expansion and generational cognitive evolution that is affecting how
of GATT into the WTO, and the club model became state and non-state actors understand and define their
obsolete. The United States supported Chinese accession interests. While the explanations are complementary, it
to the WTO and China surpassed it as the world’s largest seems likely that liberal institutionalist and cognitive
trading nation. While global rounds of trade negotiations regime theories will provide better tools for understanding
became more difficult to accomplish and various free trade those changes than oversimplified theories of hegemonic
agreements proliferated, the rules of the WTO continued transition.
to provide a general framework where the norm of most
favoured nation status and reciprocity created a structure One projection does seem clear. It is unlikely that there
where particular club deals could be generalized to a will be a single overarching regime for cyberspace any
larger number of countries. Moreover, new entrants, such time soon. A good deal of fragmentation exists now and
as China, found it in their interests to observe even adverse is likely to persist. The evolution of the present regime
judgments of the WTO dispute settlement process. complex, which lies halfway between a single coherent
legal structure and complete fragmentation of normative
Similar to the non-proliferation regime, when the United structures, is more likely. Different sub-issues are likely
States had a nuclear monopoly in the 1940s, it proposed to develop at different rates, with some progressing and
the Baruch Plan for UN control, which the Soviet Union some regressing in the dimensions of depth, breadth
rejected in order to pursue it own nuclear weapons. In the and compliance. Some areas, such as crime, in which
1950s as nuclear technology spread, the United States used states have common interests against third-party free
the Atoms for Peace program, coupled with inspections riders, seem ripe for interstate agreement, even if only an
by the new International Atomic Energy Agency, to try to agreement to assist in legal and forensic efforts (Tikk 2011).
separate the peaceful from weapons purposes. During the Other issues, such as privacy, may see compromises in the
1960s, the five nuclear weapon states negotiated the Non- context of trade negotiations, which apparently have no
Proliferation Treaty, which promised peaceful assistance to direct connection with the cyber area. And some areas,
states that accepted a legal status of non-nuclear weapon such as war, may not be susceptible to formal arms control
states. In the 1970s, after India’s explosion of a nuclear agreements, but may see the evolution of declaratory policy,
device and the further spread of technology for the confidence-building measures and rough rules of the road.
enrichment and reprocessing of fissile materials, the United Rather than global agreements, like-minded states may
States and like-minded states created a Nuclear Suppliers act together to avoid destabilizing behaviour, and later
Group that agreed “to exercise restraint” in the export of try to generalize such behaviour to a broader group of
sensitive technologies, as well as an International National actors through means ranging from formal negotiation to
Nuclear Fuel Cycle Evaluation, which called into question development assistance. Whatever the outcomes, analysts
the optimistic projections about the use of plutonium fuels. interested in the development of normative structures
While none of these regime adaptations were perfect, and for the governance of cyberspace should avoid the over-
problems persist with North Korea and Iran today, the net simplified popular dichotomies of a “war” between the
effect of the normative structure was to slow the growth in ITU and ICANN. Instead, they would do better to view the
the number of nuclear weapon states from the 25 expected problems in the full complexity offered by regime theories
in the 1960s to the nine that exist today (see Nye 1981). In and the concept of regime complexes.
2003, the United States launched the Proliferation Security
Initiative, a loosely structured grouping of countries ACKNOWLEDGEMENTS
that shares information and coordinates efforts to stop
trafficking in nuclear proliferation-related materials. I am indebted to Amelia Mitchell for research assistance,
and to Laura DeNardis, Fen Hampson, Melissa Hathaway,
In short, projections based on realist theories of hegemony Roger Hurwitz, James Lewis, Robert O. Keohane,
are based on poorly specified indicators of change (see Alexander Klimberg, John Mallery, Tim Maurer, Bruce
Nye 2011b, chapter 6). Even after monopolies over a new Schneier and Jonathan Zittrain for comments.
technology erode, it is possible to develop normative
frameworks for governance of an issue area.
WORKS CITED

Adler, Emanuel and Peter M. Haas. 1992. "Conclusion: Epistemic Communities, World Order, and the Creation of a Reflective Research Program." International Organization 46 (1): 367–90.

Blumenthal, Marjory and David D. Clark. 2009. "The Future of the Internet and Cyberpower." In Cyberpower and National Security, edited by Franklin D. Kramer, Stuart Starr and Larry K. Wentz. Washington, DC: National Defense University Press.

Brenner, Joel. 2013. "Mr. Wemmick's Condition; or Privacy as a Disposition, Complete with Skeptical Observations Regarding Various Regulatory Enthusiasms." Lawfare Research Paper Series 2 (1): 1–43.

Choucri, Nazli. 2012. Cyberpolitics in International Relations. Cambridge: MIT Press.

Clarke, Richard A. and Robert K. Knake. 2012. Cyber War: The Next Threat to National Security and What to Do About It. New York: Ecco.

Council of Europe Convention on Cybercrime. 2014. "Convention on Cybercrime CETS No.: 185." http://conventions.coe.int/Treaty/Commun/ChercheSig.asp?NT=185&CM=8&DF=&CL=ENG.

CSIS. 2008. Securing Cyberspace for the 44th Presidency: A Report of the CSIS Commission on Cybersecurity for the 44th Presidency. Washington, DC: CSIS.

Deibert, Ronald J. and Rafal Rohozinski. 2010. "Risking Security: Policies and Paradoxes of Cyberspace Security." International Political Sociology 4 (1).

Demchak, Chris C. and Peter Dombrowski. 2011. "Rise of a Cybered Westphalian Age." Strategic Studies Quarterly (Spring).

DeNardis, Laura. 2014. The Global War for Internet Governance. New Haven: Yale University Press.

Garamone, Jim. 2014. "Hagel Thanks Alexander, Cyber Community for Defense Efforts." American Forces Press Service, March 28. www.defense.gov/news/newsarticle.aspx?id=121928.

Gilpin, Robert. 1987. "The Theory of Hegemonic Stability." Understanding International Relations: 477–84.

Goldsmith, Jack. 2013. "Reflections on U.S. Economic Espionage, Post-Snowden." Lawfare, December 10. www.lawfareblog.com/2013/12/reflections-on-u-s-economic-espionage-post-snowden/.

Goldsmith, Jack and Tim Wu. 2006. Who Controls the Internet? Illusions of a Borderless World. Oxford: Oxford University Press.

Hardin, Garrett. 1968. "The Tragedy of the Commons." Science 162 (3859).

Hasenclever, Andrea, Peter Mayer and Volker Rittberger. 1997. Theories of International Regimes. Cambridge: Cambridge University Press.

Hurwitz, Roger. 2009. "The Prospects for Regulating Cyberspace." Unpublished paper. November.

ICANN. 2013. "Internet Domain Name Expansion Now Underway." News release, October 23. www.icann.org/en/news/press/releases/release-23oct13-en.

Keohane, Robert O. 1984. After Hegemony: Cooperation and Discord in the World Political Economy. Princeton, NJ: Princeton University Press.

Keohane, Robert O. and Joseph S. Nye. 1977. Power and Interdependence. Boston: Little, Brown.

———. 2001. "Between Centralization and Fragmentation: The Club Model of Multilateral Cooperation and Problems of Democratic Legitimacy." John F. Kennedy School of Government, Harvard University Faculty Research Working Paper Series, RWP01-004.

Keohane, Robert O. and David G. Victor. 2011. "The Regime Complex for Climate Change." Perspectives on Politics 9.

Klimburg, Alexander. 2013. "The Internet Yalta." Center for a New American Security Commentary. www.cnas.org/sites/default/files/publications-pdf/CNAS_WCIT_commentary%20corrected%20%2803.27.13%29.pdf.

Krasner, Stephen, ed. 1983. International Regimes. Ithaca, NY: Cornell University Press.

Kuehl, Daniel T. 2009. "From Cyberspace to Cyberpower: Defining the Problem." In Cyberpower and National Security, edited by Franklin D. Kramer, Stuart Starr and Larry K. Wentz, 26–28. Washington, DC: National Defense University Press.

Lewis, James A. and Stewart Baker. 2013. The Economic Impact of Cybercrime and Cyberespionage. CSIS report. http://csis.org/files/publication/60396rpt_cybercrime-cost_0713_ph4_0.pdf.

Libicki, Martin. 2009. Cyberdeterrence and Cyberwar. Santa Monica: RAND.

MacKinnon, Rebecca. 2012. Consent of the Networked: The Worldwide Struggle for Internet Freedom. New York: Basic Books.

Markoff, John and Thom Shanker. 2009. "Halted '03 Iraq Plan Illustrates U.S. Fear of Cyberwar Risk." The New York Times, August 1. www.nytimes.com/2009/08/02/us/politics/02cyber.html.

Maurer, Tim. 2011. "Cyber Norm Emergence at the United Nations — An Analysis of the UN's Activities Regarding Cyber-security?" Belfer Center for Science and International Affairs, Harvard Kennedy School Discussion Paper 2011-11.

Morse, Julia and Robert O. Keohane. Forthcoming. "Contested Multilateralism." The Review of International Organizations.

Mueller, Milton. 2010. Networks and States. Cambridge, MA: MIT Press.

———. 2012. "ITU Phobia: Why WCIT Was Derailed." Internet Governance Project. www.internetgovernance.org/2012/12/18/itu-phobia-why-wcit-was-derailed/.

Nye, Joseph S. 1981. "Maintaining the Non-Proliferation Regime." International Organization: 15–38.

———. 2011a. "Nuclear Lessons for Cyber Security." Strategic Studies Quarterly: 18–38.

———. 2011b. The Future of Power. New York: PublicAffairs.

Ostrom, Elinor. 2009. "A General Framework for Analyzing Sustainability of Social-Ecological Systems." Science 325.

Ostrom, Elinor, Joanna Burger, Christopher Field, Richard Norgaard and David Policansky. 1999. "Revisiting the Commons: Local Lessons, Global Challenges." Science 284 (5412).

Raymond, Mark. 2013. "Puncturing the Myth of the Internet as a Commons." Georgetown Journal of International Affairs Special Issue: 5–15.

Rid, Thomas. 2013. Cyber War Will Not Take Place. New York: Oxford University Press.

Ruggie, John Gerard. 1982. "International Regimes, Transactions, and Change: Embedded Liberalism in the Postwar Economic Order." International Organization 36 (2).

Schmitt, Eric and Thom Shanker. 2011. "U.S. Debated Cyberwarfare in Attack Plan on Libya." The New York Times, October 17. www.nytimes.com/2011/10/18/world/africa/cyber-warfare-against-libya-was-debated-by-us.html.

Schmitt, Michael N., ed. 2013. Tallinn Manual on the International Law Applicable to Cyber Warfare. Cambridge: Cambridge University Press.

Schneier, Bruce. 2013. "The Battle for Power on the Internet." The Atlantic, October 24. www.theatlantic.com/technology/archive/2013/10/the-battle-for-power-on-the-internet/280824/.

Sofaer, Abraham D., David Clark and Whitfield Diffie. 2010. "Cyber Security and International Agreements." In Proceedings of a Workshop on Deterring Cyberattacks: Informing Strategies and Developing Options for U.S. Policy, edited by the Committee on Deterring Cyberattacks and the National Research Council. Washington, DC: National Academies Press.

Starr, Stuart H. 2009. "Toward a Preliminary Theory of Cyberpower." In Cyberpower and National Security, edited by Franklin D. Kramer, Stuart Starr and Larry K. Wentz. Washington, DC: National Defense University Press.

Statista. 2013. "Market Value of the Largest Internet Companies Worldwide as of May 2013 (In Billion U.S. Dollars)." Statista. www.statista.com/statistics/277483/market-value-of-the-largest-internet-companies-worldwide/.

Tikk, Eneken. 2011. "Ten Rules for Cyber Security." Survival 53 (3): 119–32.

WGIG. 2005. Report of the Working Group on Internet Governance. Château de Bossey: WGIG. www.wgig.org/docs/WGIGREPORT.pdf.

Zittrain, Jonathan. 2008. The Future of the Internet and How to Stop It. New Haven: Yale University Press.

———. 2014. "No Barack Obama Isn't Handing Control of the Internet Over to China." The New Republic (224).
CIGI’s current research programs focus on three themes: the global economy; global security & politics; and international
law.
CIGI was founded in 2001 by Jim Balsillie, then co-CEO of Research In Motion (BlackBerry), and collaborates with and
gratefully acknowledges support from a number of strategic partners, in particular the Government of Canada and the
Government of Ontario.
10 St James’s Square
London, England SW1Y 4LE, United Kingdom
tel +44 (0)20 7957 5700 fax +44 (0)20 7957 5710
www.chathamhouse.org
April 2017
Preface
This paper extrapolates from present trends to describe plausible future crises playing out in
multiple global cities within 10 years. While predicting the future is fraught with uncertainty, much
of what occurs in the scenarios that follow is fully possible today and, absent a significant course change, probable in the timeframe discussed.
It is not hard to find tech evangelists touting that ubiquitous and highly interconnected digital
technology will bring great advances in productivity and efficiency, as well as new capabilities we
cannot foresee. This paper attempts to reveal what is possible when these technologies are applied
to critical infrastructure applications en masse without adequate security in densely populated
cities of the near future that are less resilient than other environments. Megacities need and will
deploy these new technologies to keep up with insatiable demand for energy, communications,
transportation, and other services, but it is important to recognize that they are also made more
vulnerable by following this path.
To illustrate what these eventualities could look like, we have constructed four scenarios for the
not-too-distant future (2025) that lay out some of the more extreme risks we may face in an all-
digital world.
The photos are all going, “Hey,” and the plate goes and refills itself and brings you fresh
food, and your beer mug tells you you’re drinking too much. Everything is just smart. This is
my view of the Internet of Things: you’re able to infuse intelligence into everything, you’re
able to put a chip in everything, you’re able to put software in everything, you’re able to
connect everything online and just everything is a lot smarter. The doorknob is a lot
smarter, and the lightbulb is a lot smarter, and your wristwatch is a lot smarter. Everything
starts to get really, really smart.
On the role of technology in improving the human condition, techno utopianism has been soundly
besting ludditism going on two centuries now, and the world of late 2025, with its autonomous
vehicles, fully integrated smart cities, deep virtual and augmented realities, and artificial intelligence
getting ever closer to human-parity general intelligence, is the result.
With that said, what’s transpired around the world just this past year has got to give pause to even
the most ardent optimists. Compiled below are the unnerving events witnessed in four of the
planet’s largest and most important cities as reported by the local media and then deconstructed
by an artificial intelligence (AI)-assisted omniscient cyber forensicist. In all, it's clear that the
technologies that helped the infrastructure managers of these cities handle the almost
incomprehensibly complex operations of a modern megacity were also the root cause (or at a
minimum, the enabler) of the disasters that befell them. Undoubtedly, cyber attackers played a
greater or lesser role in getting these crises rolling, but it appears that despite decades of warnings,
in the name of progress we’ve made things ever so easy for them. Now all we can hope for is that
we learn from these experiences and implement changes as quickly as possible, knowing full well
that change in infrastructure matters never comes quickly.
Bangkok Post, Wednesday, April 23, 2025, Midnight—On a normal day, most residents of Bangkok
could expect clean water to flow from their faucets and their toilets to flush. Bold infrastructure
engineering work done in the late nineteenth century supported public health improvements that
led to ever-larger populations. Population growth in turn put stress on the systems and was
relieved by subsequent waves of engineering imagination and excellence. Some of the world’s
largest, most efficient water treatment plants have given this city some of the most affordable,
mainly clean water in Asia.
IoT, Automation, Autonomy, and Megacities in 2025 | 3
However, as of five days ago, nothing in this sprawling city of 30 million has been normal:
● Multiple fires raging out of control when fire hydrants couldn’t produce water
● Industrial businesses shuttered because they couldn’t make their products without reliable
water supplies
● Foul water coming out of faucets, spilling forth from toilets in apartments and on the streets
from manhole covers
● And perhaps worst of all, some power plants in and around the city are running at reduced
capacity due to having less of the water they require for cooling, and power outages are
undermining all efforts to restore order.
Years of drought had already brought groundwater to historically low levels. Then, a few days ago, reports of low and then no water pressure started coming in from households, businesses, and government offices. By early evening the state-run Metropolitan Waterworks Authority (MWA) issued a
statement saying it had lost control over the majority of the pumps responsible for maintaining
water pressure, as well as its operator consoles, and was investigating the issue.
Throughout this ordeal, the governor of Bangkok tried to keep a calm face. But today he appeared to lose his composure. On the Royal Thai Army's Channel 7, the governor said:
People of Krung Thep, I realize many of you cannot hear my message, but for those who
can, I strongly urge you to be strong, and carry on with as much faith and discipline as you
can muster. It appears we may be the victims of an unprecedented cyber attack on our
water infrastructure. The smartest engineers in our city are working day and night to
understand the full extent of the attack and with luck, will restore water, electricity, order,
and hope to our city as soon as possible. Otherwise, I am not sure what will become of us.
Tomorrow will be day six. With order breaking down, the Army trying to help the police keep the
growing riots in check, and with bottled water reserves almost depleted, we can only pray the
engineers will have success soon.
The problem took root earlier that April, when a water authority engineer received a file
from a collaboration platform used for getting new files from the city’s primary water service
infrastructure automation vendor. The file appeared to correspond to a firmware update posted on
the vendor’s knowledge portal indicating it was required to patch a memory leak problem. Once
downloaded, the file was then transferred from the engineer’s laptop to three engineering
workstations.
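The crisis began with an unverified file accepted as a legitimate firmware update. A minimal sketch of the kind of integrity check that was evidently missing here: comparing a downloaded image against a digest published out-of-band on the vendor's portal. The function names and workflow are illustrative assumptions, not the MWA's or any real vendor's actual tooling.

```python
import hashlib
import hmac
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in 64 KiB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_firmware(image: Path, published_digest: str) -> bool:
    """Accept the image only if its digest matches the one published
    out-of-band (e.g., on the vendor's knowledge portal), using a
    timing-safe comparison."""
    return hmac.compare_digest(sha256_of(image), published_digest.lower())
```

An engineer following this procedure would refuse to copy the file to any workstation when `verify_firmware` returns False; vendor-signed firmware with public-key verification would be stronger still, since a compromised portal could publish a matching digest for a Trojaned image.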
It only took seconds for the attackers’ command-and-control system to find the signal emanating
from the Trojan contained in the file. Almost immediately, additional implants made their way
4 | Michael Assante and Andrew Bochman
undetected onto the target workstations and began to exfiltrate the information needed to seed
the necessary changes to system software. That was the software residing on actuators and
digitally controlled pumps throughout the sprawling water system serving central Bangkok and
surrounding districts.
While remaining undetected, the attackers eventually learned enough to capture the digital
credentials they needed to manage the IT and operational technology (OT) infrastructure. Data
stolen from both the business network and the water Supervisory Control and Data Acquisition
(SCADA) network provided the keys needed to focus the attackers’ engineering efforts. It took
three months, but testing proved their bricking 3 payloads would load successfully 90 percent of
the time and result in irrecoverable device shutdowns. Staging the automatic software loads was
now the only step to be completed.
When the first attack came, the Bangkok water system experienced several waves of destruction as
malicious firmware was propagated to digital systems, including variable-speed drives required for
pumps and communication devices throughout the water transmission and distribution system.
Engineering teams were getting good at using analytics to predict failures and deal with probabilistic failures in the system, but failures on this scale had never been seen before.
Equipment failures spanned from distribution pumping stations, water treatment plants and
chemical feeding systems, to transmission pumping stations. The attackers were able to shut down
pumping at five key stations, rapidly depressurizing the entire water distribution system and setting
off an overwhelming onslaught of alarms at the water control center. Programmable logic
controllers (PLCs) began to report errors before their symbols went gray on operator consoles.
First the pumps, then the routers and modems, and finally the controllers were lost.
Work crews were unable to quickly repower units, to say nothing of the systems that were in a
bricked state. System planners ran through dusty plans for restoring the system by using older
pumps found in outlying stations. A crisis soon developed as newer systems to measure water
quality were no longer feeding data up to the quality analysis application running in the private
cloud. The water-quality checkpoints failed to report data and vital components failed in the
elaborate array of automatic chlorine feeding systems. The overflow of sewage was now
threatening water quality throughout the system.
Not only were the operators blinded, but they were also robbed of the tools necessary to control water
processing for treatment and pumping. With PLCs no longer functioning, there were also problems
with digital actuators such as discharge and suction valves, variable-speed drives, motor-control
units, and supply and exhaust fans. This was an attack of unprecedented scale, and
the IT group and SCADA support engineers found they did not have the tools for the job or the
ability to touch the staggering number of affected devices. The small IT department was unable to
3 “Bricking” is a term used when a computational device is rendered inoperable or is unable to perform its intended
function. A bricked device would be described as entering a disabled state. A “disabled state” encapsulates any behavior
that deviates from the documented function of the device. Examples can include nonresponsive connectivity ports,
improper I/O function, erroneous status information, or communication by the device that it is in a faulted state.
IoT, Automation, Autonomy, and Megacities in 2025 | 5
keep up with the reporting and requests for assistance from the Raw Water Development
Department, Treatment Plant Services, and Water Distribution and Control Departments.
Frantic calls to device manufacturers became an all-hands effort as inventories were quickly
exhausted. Devices on the shelf had already been rushed to a central station nearest to the city’s
emergency response center and sports arena.
2. Inability to detect intrusions allowed attackers to discover many firmware devices and to
engineer payloads for several different models, allowing for a massive attack
3. Automation and connectivity provided a pathway to find, touch, and deliver firmware
uploads
Xinhua News Service, May 5, 2025, 11:10 pm—At the time of this report, 80 percent of Shanghai’s
transportation system is completely inoperable. The computer systems that manage airports,
airlines, trains, subways, buses, and more have been massively disrupted. Airlines are reporting their
logistics scheduling systems are unstable. The few rail operators we reached are saying they can’t
see the positions of their trains and, in some cases, can’t verify the position of their track switches.
Following established procedures in this state of uncertainty, they stopped all movement. To top it
off, bus and taxi services, both autonomous and with drivers, are unable to keep up with the
unprecedented surge in demand and may be experiencing glitches of their own.
Although the cause is unknown, some opinions are forming. According to Mr. Steve Hu, chairman
of Huawei’s Global Cyber Security and User Privacy Committee:
The scale of this attack on transportation infrastructure seems unprecedented. There can
be little doubt there is a nation state behind this action. Who else could muster the
resources to create so many concurrent impacts on such diverse systems?
In time, we’ll come to know how fast these services can be returned to normal and hopefully
identify the root cause. What we do know for sure: hotels are reporting they are completely full,
more than 3 million people are stranded, and it’s going to be a long night.
Here’s a recap of today’s events. During this evening’s rush hour, subways and trains serving
Shanghai’s Pudong and Hongqiao international airports started running late and then stopped
6 | Michael Assante and Andrew Bochman
running altogether. In short order, concentric rings of similar troubles spread across the greater
Shanghai region. Rail commuters, both residents of the city and business people and tourists
from other parts of China and around the world, are utterly stranded. Buses and taxis initially
responded to the huge surge in demand; however, as the 15 million people dependent on trains
and subways turned to these alternatives, they too experienced systems failures that rendered
them nearly useless. One international banker we interviewed said he’s never seen anything like
this:
I was waiting for the 4:15 train to Pudong and even though the monitor said it was arriving,
it never actually came. Eventually I tried hailing a taxi but the app indicated a five-hour wait
time. My flight to Chengdu for work was supposed to depart at 7 pm but now I understand
it was just canceled. I give up. I want to just go home but even that now seems impossible.
Then came the airlines. Chinese airlines including Air China, Shanghai Airlines, and Juneyao Airlines
as well as foreign carriers Delta, Emirates, Singapore Airlines, and others had in recent months
begun reporting intermittent issues with their scheduling and logistics systems. As late afternoon
turned into evening, a clear disruption to air operations led some experts to suspect a coordinated
cyber attack.
The global economy took notice, with sharp drops in the Shanghai and Hong Kong indexes, and
overseas the Dow and FTSE are falling as well in pre-opening trading. The costs to productivity
seem likely to be massive, as are the ripple effects of unprecedented supply chain disruption.
Though contemporaries called the cause “unknown” and attributed it to a terrorist attack, in
retrospect the cause was obvious and not malevolent. High-precision time measurement matters
more than ever in 2025 as larger, more interconnected systems rely on the efficient exchange of
accurate time-stamped data. Many developers had been warned to select their algorithms and
libraries carefully, but not all heeded that advice, and this cascading transportation disruption
began in 100-nanosecond increments before it built into a time typhoon:
• Dynamic power management (DPM) had been rolled out to reduce the costs of paying for
the “electrification of everything”
• Widely deployed DPM schemas were used to shut down devices when they were not
needed and to wake them before they were needed to receive/send data or process data
• Industrial Internet of Things (IIoT) implementations had been coded to optimize the
performance of associated devices in an attempt to manage out inefficiencies. A software
update addressed a few known bugs and added an innovative new way to manage DPM
• Recent modernization projects allowed the world’s most-used metro to squeeze additional
capacity from the fixed core system and already maxed-out train car-per-track
arrangement
• New optimization software had taken advantage of cheap slap-on instruments to measure
activity in stations and along tracks to handle growing commuter numbers
The inflows from Maglev stations and hand-off stations to other forms of transportation like
airports were synchronized to better control train traffic. The software had already delivered
results, and slight tweaks were showing more promise under the incredible demand to do more
with what was in place. Additional software-based controls allowed system designers to deal with the scale
of more data inputs and larger sensor deployments. “Run trains closer together, safely” was the
motto and the driving force for innovation, which included upgrading track-positioning sensors
from passive radio frequency identification (RFID) tags to more powerful multisensor devices
whose measurements conveyed not only location but also indications of train loading and
maintenance status. The software coordinated messaging and device power-on based on
advances in predictive analysis, estimating train location while using the sensors to verify and
report. All of this meant Shanghai could keep up with its growing population
and continue to serve as an engine for growth and global investment.
The first failures in trackside instruments, caused by a compounding error that began impacting
instruments weeks after the update had been loaded, were being handled by logic in trackside
controllers and local system estimators. The predictive algorithm worked well, but its insatiable
appetite for data would finally go unmet as trackside devices failed to wake in time to provide
anticipated reports. Trackside controllers could not send necessary outputs and the matching of
train controller-fed locational data began to deviate. The complexity of multiple data sources and
the management of large underground deployments of firmware-based devices had been moved
into software. The DPM software tweak left devices in a sleep state too long, resulting in
unanticipated extra controller-initiated communications when devices awoke outside of their
predictive windows.
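The compounding wake-window drift described above can be sketched in a few lines. This is a purely illustrative model: the 100-nanosecond increment comes from the scenario, while the window size and per-cycle drift behavior are assumptions.

```python
# Hypothetical sketch: a compounding scheduling error in a dynamic power
# management (DPM) routine. Each sleep cycle overshoots by a tiny amount
# (100 ns, as in the scenario); the error is never corrected, so over
# enough cycles the device wakes outside the controller's predictive
# window and its report is missed.

WINDOW_NS = 1_000_000          # assume controller accepts reports within 1 ms
DRIFT_PER_CYCLE_NS = 100       # per-cycle overshoot introduced by the update

def cycles_until_missed_window(window_ns=WINDOW_NS, drift_ns=DRIFT_PER_CYCLE_NS):
    """Return how many sleep/wake cycles pass before cumulative drift
    exceeds the predictive window and the device 'fails to wake in time'."""
    lateness = 0
    cycles = 0
    while lateness <= window_ns:
        lateness += drift_ns   # the error compounds cycle after cycle
        cycles += 1
    return cycles

print(cycles_until_missed_window())   # 10001 cycles at these rates
```

At these assumed rates the failure takes thousands of quiet cycles to surface, which is consistent with the scenario's detail that trackside instruments began failing weeks after the update was loaded.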
The predictive applications began to fail, and safety logic brought trains to a stop until sufficient
data would allow the verification logic to solve. The loss of vital data and disruption in train
service quickly cascaded to other transportation elements as passengers became stranded,
stations were occupied to capacity, and data flows between the transit systems and other systems
warned that something was terribly wrong. The larger transportation system-of-systems began to
fail as humans were not where they were supposed to be and data triggered verification routines.
Tremendous amounts of data sent by automated systems, along with individual customer requests
for automobile-sharing services, overwhelmed dispatching applications, causing a denial of
service and preventing timely processing.
Human override of the train safety logic in the applications was ruled out and initial forensics was
able to uncover the power management issue. A fallback version of the software was staged and
deployed, but the scale of the instrument failure meant hours were going to spread into tens of
hours and possibly multiple days. Transit authority maintenance crews had never had to touch so
many devices that quickly. Offers to have military units available to aid in the loading of fallback
software were turned down as the loads were tricky and had to be verified.
If the rippling impacts of stranded commuters were not enough, the congestion of the city’s
cellular network began to stress priority service schemes, eventually leading to network latencies
as voice data and digital messaging began to overwhelm towers and backbones. The technology
implemented to digitize infrastructures had outpaced the cell networks they relied upon. The
traffic models had not anticipated a day anything like this and although the cellular network
remained available the latencies affected smart grid meters and telemetry signals from field
terminal units and digital sensing devices. The congestion resulted in local power management
conflicts that resulted in losing power to sections of the distribution system feeding one of the
airports and associated operational data centers.
The loss of power prompted power meters and non-power IIoT/IoT devices to send “last will and
testament” messages using capacitors for a last joule of energy. The resulting communication
surges piled onto the already-congested network. More congestion resulted in additional spot
power outages. The power disruptions were exacerbated by failures in backup generator and
micro-grid supplies, also due to cellular network congestion. The dependencies between
applications, data, and infrastructures became painfully obvious. It may be many weeks, if not
months, from now before the true chain of events can be mapped out.
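The “last will and testament” behavior described here resembles MQTT's Last Will feature, in which a broker holds a message registered at connect time and publishes it when the client disconnects ungracefully. The following self-contained sketch uses no real broker or protocol, and the class, topic, and payload names are invented; it only illustrates how a mass power loss converts stored wills into a single burst of traffic.

```python
# Illustrative sketch only: models how broker-held "last will" messages,
# in the style of MQTT's Last Will and Testament, turn a mass disconnect
# into a surge of publishes. No real network or broker is involved.

class Broker:
    def __init__(self):
        self.wills = {}        # client_id -> (topic, payload)
        self.published = []    # messages actually sent out

    def connect(self, client_id, will_topic, will_payload):
        # The will is registered at connect time and held by the broker.
        self.wills[client_id] = (will_topic, will_payload)

    def ungraceful_disconnect(self, client_id):
        # An abrupt loss (e.g. power failure) triggers the stored will.
        topic, payload = self.wills.pop(client_id)
        self.published.append((topic, payload))

broker = Broker()
for i in range(50_000):                     # a neighborhood of smart meters
    broker.connect(f"meter-{i}", f"meters/{i}/status", "offline")

# A spot outage drops every meter at once; each death rattle is one publish.
for i in range(50_000):
    broker.ungraceful_disconnect(f"meter-{i}")

print(len(broker.published))   # 50000 messages hit the network in one surge
```

The point of the sketch is the shape of the failure: the wills are individually tiny, but because the triggering event (a power outage) is correlated across every device, they arrive as one synchronized surge on an already congested network.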
Radio Fórmula Cadena Nacional, July 26, 2025—In other big cities around the world we’ve seen
cyber-attacks on infrastructure spark the devolution of city services and, almost instantaneously,
civil order. What’s playing out here in Mexico’s beloved capital is a reversal of that sequence, with
all-too-familiar city employee strikes that have slowed the city to a crawl for the past few months
setting the stage for something quite out of the ordinary. We Mexicans long ago learned to expect
and tolerate near-crippling bureaucracy and inefficiency. But Mexico City, also known as Ciudad
de la Esperanza, or “The City of Hope,” has been the exception in many ways: exuberant, business-
fueled perpetual motion despite the enervating friction of its incorrigible, corrupt, and bloated
government.
All that, however, seems to have unraveled quickly when tens of thousands of strikers and other
protesters, enraged by the latest round of pay cuts, turned out on the streets and brought the city
to a weeklong standstill. Incessant social media campaigns in support of the strikers were to be
expected as were cyber-attacks of mixed success on city government websites. But when the built
infrastructure started acting as if it were possessed by demons, it became clear that a more
disruptive type of cyber assault might be occurring. It appears now that over the course of
approximately 90 minutes, about half of all the elevators in the city froze, often stuck in between
floors, stranding hundreds, maybe thousands, of people all over the city in truly desperate
situations. How the cyber protesters were able to make this happen is anyone’s guess. One thing is
certain though: first responders including police, fire, and assorted facilities engineers were 100
percent occupied when the next crisis hit.
Within a few hours of the start of the elevator troubles, fire alarms and sprinkler systems activated
inexplicably in other buildings, forcing their occupants into the street. One could sense the
beginning of mass panic. With the police fully engaged in frantic rescue attempts across the city
and the military not yet activated, the streets began to boil. It was at about this time that the attack
on Santa Úrsula’s Estadio Azteca turned out the lights, emptying tens of thousands onto already-
jammed streets. The stampedes and barbarism that ensued and expanded from there have left
many thinking there may be no imaginable point of return.
Over the last 15 years the operations and maintenance of heavily used machines like elevators and
escalators have been brought into cloud analytic platforms with remote access, diagnostics, and
predictive maintenance. Elevators and escalators are typically out of service two days per year as a
result of planned inspection and maintenance or a malfunction. The collection of diagnostic data
combined with predictive analytics and remote access provides more efficiency in servicing and
enhancing the already high levels of availability. Instruments collect data and feed it over wireless
pathways to communication gateway devices, then on to a controller and up to a cloud
platform. The cloud platform software provides a view to remotely manage hundreds of thousands
of machines while collecting data from millions of sensors. Local building managers receive a
feed of the transport systems in their facilities, and service providers and manufacturers can
monitor machine health and maintenance for an entire fleet. The software aids engineers in
determining if and when technicians need to be sent out, while equipping them with information
for tests and the work that needs to be performed. Sensors can provide vibration, speed, and
temperature data to building managers and service technicians’ smart phones and tablets armed
with maintenance applications.
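The telemetry path this paragraph describes, from instrument to wireless gateway to controller to a cloud fleet view, can be illustrated with a minimal data-flow model. All function names, field names, and machine identifiers below are invented for illustration.

```python
# Minimal sketch of the described telemetry path:
#   sensor -> wireless link -> gateway (batching) -> cloud fleet view.
# Names and fields are hypothetical, not from any real elevator platform.

def sensor_reading(machine_id, vibration_mm_s, speed_m_s, temp_c):
    # One instrument's measurement, as it would leave the machine.
    return {"machine": machine_id, "vibration_mm_s": vibration_mm_s,
            "speed_m_s": speed_m_s, "temp_c": temp_c}

def gateway_forward(readings):
    # A gateway batches readings from many machines onto one uplink.
    return {"batch": readings, "count": len(readings)}

def cloud_ingest(batch, fleet_view):
    # The cloud platform folds each reading into a fleet-wide health view
    # that service providers and building managers both consume.
    for r in batch["batch"]:
        fleet_view[r["machine"]] = r
    return fleet_view

fleet = {}
batch = gateway_forward([sensor_reading("elev-17", 2.4, 1.1, 41.0),
                         sensor_reading("esc-03", 5.9, 0.6, 55.5)])
cloud_ingest(batch, fleet)
print(sorted(fleet))   # ['elev-17', 'esc-03']
```

The centralization the scenario warns about is visible even in this toy: every machine's state converges on one shared view, so whoever controls that layer can observe, and potentially command, the whole fleet.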
These global systems harness powerful core analytics, saving money by improving computing
efficiency and machine performance. Engineers on different continents can diagnose faults and
performance irregularities on large numbers of machines each. Centralization has increased
productivity of service providers, but it also provides an adept individual or group with the ability to
remotely interact with many machines at once.
A group of hackers began toying around and found easy ways to make money using their skills.
They were smart and stayed below the radar for the most part. The group acted more like a club
than a gang. The recent social tensions had been a big topic at the lot gatherings. Two of their
members had been experimenting with their apartment building’s automated systems. They found
it comical that wireless network broadcasts would advertise “central elevator data” and “west side
cargo elevator.” Their explorations brought them into contact with sensor data streams and a host
of IP-addressable microcomputers. Some had web interfaces; others did not. It was mostly just for
fun until their explorations uncovered remote connections, and evidence of interactions that came
from mobile phone applications and a central data depository.
When three of the club members were caught up in the strike, they began encouraging other
members to get involved. One of the members was put into the hospital at the hands of riot
control police, and finally the group came together to plan an assault. They used their own
apartment building access to figure out how to access the systems of buildings surrounding the
protests. The idea was simple: dump more people into the streets so the police would need to pull
back and city officials would be forced to negotiate with the strikers.
This handful of hackers fully appreciated the potential consequences of their actions. Their actual
plan was basic: put elevators into shutdown and maintenance modes while removing or changing
IP addresses and configurations so machines could only be restarted onsite. The only thing holding
them back was scale. They had captured the username and password for three buildings, but some
implementations had not been accessed recently. A Google search yielded a hardcoded user
account created by the manufacturer. Then they were in all over the city. Soon, they were knocking
machines offline by the hundreds. The next move was a little more interactive, as the group tripped
evacuation alarms from a cracked building management application. The alarms got people
moving, but it was the triggering of the fire-suppression systems, based on bad data fed to
temperature sensors, that finished the job.
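The pivot from three captured passwords to being “in all over the city” hinges on a single hardcoded manufacturer account. The sketch below is hypothetical, with an invented credential and class, and is not drawn from any real product; it only shows why such an account defeats per-building credentials.

```python
# Hypothetical sketch of the flaw the scenario describes: per-building
# operator accounts exist, but the firmware also ships a hardcoded
# manufacturer account (CWE-798), so one leaked credential opens every
# deployment. The credential below is invented for illustration.

FACTORY_USER = ("svc_maint", "elev8tor!")   # baked into every firmware image

class BuildingController:
    def __init__(self, building, operator_user, operator_pass):
        self.building = building
        self._creds = {operator_user: operator_pass,
                       FACTORY_USER[0]: FACTORY_USER[1]}  # the backdoor

    def login(self, user, password):
        return self._creds.get(user) == password

fleet = [BuildingController(f"bldg-{i}", f"op{i}", f"secret{i}")
         for i in range(1000)]

# A per-building credential only opens its own controller...
assert fleet[0].login("op0", "secret0") and not fleet[1].login("op0", "secret0")

# ...but the factory account, once surfaced by a search engine, opens them all.
compromised = sum(c.login(*FACTORY_USER) for c in fleet)
print(compromised)   # 1000
```

The design flaw, not the attackers' sophistication, supplies the scale: per-building secrets bound the blast radius to one building, while the shared factory secret makes the entire fleet one target.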
A few hours of play created this chaos, and it proved enough to tip the city into a prolonged
and brutish emergency with deadly results.
New York Times, Friday, September 19, 2025, 2 pm—In what is already viewed as the worst attack
on New York since 2001, and what may turn out to be many times worse before it’s over, the city
has just been hit with what appears to be a coordinated cyber-physical attack of the kind national
security experts have been warning about for decades. The ultimate costs and causes may never
be known, and it seems the largest and most famous American city will never be the same.
The city of 25 million has just been plunged into what is inarguably its worst blackout of all time.
Historical blackouts (most famously in 1965, 1977, and 2003) were contained to between one and
several days in duration. The current blackout is going into its third full week. All five boroughs are
affected, along with parts of New Jersey, White Plains, and Long Island.
Electricity outages quickly impaired other critical city services like water and sewage,
transportation, communications, and more. Residents with the means have been streaming out
of the city since July 5, flooding suburbs to the north and west as well as inundating Boston,
Baltimore, and Washington, D.C. Drone footage captured this morning offered views that were
nothing short of apocalyptic: stores shuttered and/or looted, street lights out, nonfunctional
subways, and gas cars moving fast to avoid organized bands of thieves. New York City, traumatized
once more, is now held together by the National Guard acting under State of Emergency authority.
The day prior looked like it was going to be a typical Friday leading into an Indian Summer
weekend, albeit a hotter one than normal, as summer high temps had been well into the 100s.
Then, by most accounts, the 4G and 5G phone and data networks stopped cold, not just for
residents but for most businesses and government workers too, and the city shifted with startling
speed from a festive holiday mood to anxiety, and then to what lies beyond anxiety.
One might have thought there’d be strong backup systems for the wireless systems on which so
much depended, but one department of public services (DPS) employee we reached shared:
Others put blame on the less-than-reliable renewable energy systems that have been deployed en
masse since the NY REV grid modernization plan took full effect in the late teens and early
twenties. But then the hydro power the city has relied on from Canada should have saved the day,
right? Well it most certainly has not.
It’s hard to say where this is going to end up. The U.S. economy is in shock, and the Dow and
global stock markets have declined between 30 and 50 percent since July 7. Right now, your best
bet for New York is to get out if you’re there and stay out if you’re not. The city that never sleeps
has been plunged into a deep coma from which it seems unlikely to ever fully reemerge.
The engineers could not fathom how they ended up here. The first quarter of the twenty-first
century ushered in the smart grid, which soon gave rise to the industrial internet and distributed
intelligent microgrids, all of which were coordinated by the cloud-hosted computing grid. The
tremendous complexity of this system of systems was concealed by mathematical algorithms and
data analysis. Beautiful pictures and data displays would tell us where to go and what to do to
maintain it. Modern society became tethered to a digital infrastructure that could not be
catalogued. This digital fabric spread across the globe and was deeply embedded in all things, from
the removal of waste water to the determination of how billions of people might best travel to
work each morning. Cloud computing further united parts of the world. Many argued that
globalization in the physical world had taken a step back in the 2020s, but the cyber world told a
different story.
Some experts had warned about the uniquely potent risk posed by highly targeted and
sophisticated cyber attacks. Several had been observed in other parts of the world that should have
served as a harbinger of sorts, but each time they were dismissed with “it could not happen like
that in America.” Even the insurance industry, wary of such scenarios, did not believe any capable
threat actor would attempt a massively damaging attack, let alone succeed.
The countdown to disaster began more than a decade ago, with several cyber campaigns that
were discovered and discussed openly in the media, given names like Den of Thieves and Elegant
Frost. They revealed a tarnished national pride, offended by a series of Western energy and
trade policies that had left the attackers with a smaller share of the new global prosperity. What
infuriated them most was the prosperity being enjoyed by neighbors while they languished and
were outcompeted.
Their own words warned us—they had felt as if they had been pushed into a corner—but what we
did not know is that, from the corner, the groups could flip several switches. With the Shanghai,
Mexico City, and Bangkok disasters preceding it, the year leading up to what came to be called the
worst cyber attack the world had ever seen was uniquely chaotic and dangerous.
There were already several countries that were dealing with a nasty web of insurrection and
subversive armed intrusions.
The lessons of hybrid warfare borne out in the 2010–2020 timeframe in eastern Europe were
being applied with some effect. It was years of deeply knowing several targeted organizations and
their operations that allowed planners to build their plan. The attackers were well positioned, as
their country had enjoyed a short season of growth and modernization that brought Western and
Chinese firms to upgrade their power systems and cellular networks, and to bring IIoT to their
country. These improvements gave the attackers the ability to understand the system of digital
bricks that the world was building its future upon. It took over a year to engineer an attack, and
months to position everything under a veil of darkness. Strategic investments in research and
technology programs had combined a deep technical understanding of modern satellite and
atmospheric communication networks, automation and control technology, and a chip-level
working knowledge of microcomputer boards.
• Stage 1. It all began with a series of implants in meter and microgrid data aggregators and
select communication gateways. The code was lightweight and easily positioned in a few initial
hosts; once in place, it could self-propagate from device to device across the native
communication networks. The only trick was to propagate in a manner where the attack
did not congest its own pathways and did not get so noisy as to reveal itself in large swaths
of traffic where it hit public networks. The simple family of exploits took advantage of an
unknown weakness in the code used to enable web-capable management interfaces.
Researchers first published the vulnerability four years prior but no one had put the time
into operationalizing a working exploit, or at least no one thought that had happened. The
takeover of hundreds of thousands of power grid meters and power inverters provided a
large homogenous botnet that could quickly overwhelm New York’s telecommunication
networks while refusing remote connection attempts by the utility. If you could even get
through all the traffic, the devices would no longer recognize authentication attempts. This
attack stage created a great deal of confusion while complicating all sorts of
communications that relied on shared networks to receive vital data from the many “micro-
processor-based things” that helped the city function.
• Stage 2. The second stage was composed of a few select actions to disrupt power flowing
in and to one of the world’s hungriest load centers. This attack required serious
engineering, but once in place, the code would do all the work. Operational traffic captures
from a few unmanned substations provided a good look at how the utilities being targeted
were applying a common industrial protocol used in SCADA applications. The software
implants had been coded to verify the specific implementation of breaker control before it
began to send commands to RTUs to open remotely operated circuit breakers and de-energize
critical circuits. These precise actions would create pockets of outages, pushing the system
closer to stability limits. The loss of load would result in an overfrequency condition that
machines would instantly sense and begin to balance. That is when the final attack would
activate.
• Stage 3. The last stage of the attack was timed as a final shot, before the other malicious
code would turn to a final payload module and overwrite memory at a basic level, forcing
replacement of many devices. The long-term prospects of reenergizing the power
system that served the city would become bleak in a matter of seconds. The final shot had a
40–50 percent chance of creating a wider outage to be felt outside of NYC. The attackers
had been hard at work finding their way onto the operational networks for a number of
cloud-connected gas turbines that supplied large portions of the consumed power on the
island. Once there, they devised two primary methods for placing the turbines in a
dangerous condition and, after several attempts, began to overwrite the system software
and firmware. Some of the attacks succeeded in changing control setpoints that tripped
units, while a few others actually caused physical damage. The result was a well-
synchronized loss of supply pushing the grid back in the other direction. The outage was
still contained to the region and manifested itself in several pockets—leaving some
microgrid devices and power meters to continue sending a tsunami of messages.
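The grid behavior Stages 2 and 3 exploit, where shedding load pushes frequency up and tripping generation pushes it back down, can be approximated with a simplified swing-equation calculation. The constants below are illustrative assumptions, not values from the scenario.

```python
# Back-of-the-envelope sketch of the grid physics the staged attack
# exploits. Losing load while generation is unchanged leaves surplus
# power that accelerates the machines, raising frequency; losing
# generation does the reverse. Simplified swing equation; constants
# (nominal frequency, inertia, system base) are illustrative only.

F0 = 60.0        # nominal frequency, Hz
H = 5.0          # assumed aggregate inertia constant, s
S_BASE = 10_000  # assumed system base, MVA

def freq_rate_of_change(power_surplus_mw):
    """Initial rate of change of frequency (Hz/s) after a sudden
    imbalance: df/dt = dP * f0 / (2 * H * S)."""
    return power_surplus_mw * F0 / (2 * H * S_BASE)

# Stage 2: attackers trip breakers, shedding 2,000 MW of load at once.
print(f"{freq_rate_of_change(2_000):.2f} Hz/s")   # +1.20 Hz/s, overfrequency

# Stage 3: with the grid swung high, a synchronized loss of generation
# drives the frequency hard in the other direction.
print(f"{freq_rate_of_change(-2_000):.2f} Hz/s")  # -1.20 Hz/s
```

Under these assumptions, a swing of a hertz or more per second is far faster than normal governor response, which is why the scenario's one-two sequencing of load loss followed by generation loss is so destabilizing.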
The combination of all three stages overwhelmed grid operators and city managers, creating
conditions that stressed the well-practiced plans to deal with all sorts of crises. The city known for
its planning and ability to absorb assault was plunged into a dark and eerie silence. The pause
lasted longer than normal as emergency operations personnel waited to see if the power would
return and tried to make sense of why they were receiving minimal data from what had recently
been heralded as one of the most instrumented cities in the world. The optimization cloud
applications were providing strange results for fire and police units on their city-wide operational
picture displays. Few people knew that the ocean of power meters was jamming networks with
constant streams of gibberish packets.
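The Stage 1 constraint described earlier, spreading device to device without congesting the implant's own pathways, can be modeled as a propagation loop with a self-imposed per-round infection cap. The topology, rates, and cap below are invented for illustration.

```python
# Simplified model of congestion-aware propagation: the implant spreads
# host to host but caps new infections per round so its own traffic never
# saturates the shared networks or becomes noisy enough to detect.
# Fleet size, fanout, and cap are hypothetical.

import random

def propagate(n_devices=10_000, fanout=4, cap_per_step=500, seed=7):
    """Spread from one host; return the number of rounds needed to reach
    every device while honoring a per-round infection budget."""
    random.seed(seed)
    infected = {0}
    frontier = [0]
    steps = 0
    while len(infected) < n_devices:
        budget = cap_per_step                 # self-imposed traffic ceiling
        next_frontier = []
        for host in frontier:
            for _ in range(fanout):
                if budget == 0:
                    break
                target = random.randrange(n_devices)
                if target not in infected:
                    infected.add(target)
                    next_frontier.append(target)
                    budget -= 1
        # If a quiet round found nothing new, retry from all owned hosts.
        frontier = next_frontier or list(infected)
        steps += 1
    return steps

print(propagate())   # rounds of quiet spread needed to own the whole fleet
```

The trade-off the sketch captures is the attackers' "only trick": a lower cap means slower takeover but less chance of revealing the implant in traffic spikes, which is why positioning took months rather than hours.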
At first everyone focused on the immediate crisis of clogged communications and power outages,
but the city’s hydrologists knew there were bigger problems to worry about. NYC has been kept
dry by a series of huge pumps that remove water intruding into the city’s vast underground,
returning it to the Hudson. Slight variations in the water height had required a massive city works
project to keep underground vaults dry and allow New Yorkers to use one of the most important
services the city offered—public mass transit. The pumps had been configured to receive power
from multiple redundant circuits, but several key pumping stations were now offline and the water
intrusion spread. Unknown to the attackers, two of the key substations attacked were required to
maintain power flow downstream to those pumping stations. The failure of a proper make-and-
break configuration on the local backup generator would go unnoticed as alerts were never sent
to the city’s hydrology ops center. The intruding water set off a number of tiny sensors used to
show the spread of water, but that data never found its way to the NYC private cloud providing
data to city engineers. The water would undermine a valiant effort to restore power to sections of
the city, as transformers and conduits were energized without crews knowing those sections were
underwater. Several electrical shorts occurred, adding to the damage.
It took two days to simply hatch a plan to combat the remaining botnet, and within the first hour
the plan would become unnecessary as the meters and inverters began to pop offline, never to
reset and reboot and come back. It was not the eye of the data storm as one person joked but the
end of the advanced meter network the utility had come to rely on. The utility power meter
engineers and security team, analyzing infected meters taken from the field, had missed the
module responsible for the firmware overwrite routine, as they had focused on the portion of the
code responsible for sending out all the errant messages. The plan would now have to be modified to
visit each device and swap them out. New reports were starting to be radioed in or sent via satellite
phone that two of the three types of meters had actually performed remote disconnects,
interrupting power to homes and buildings. The outage would grow in size for one last time.
The only saving grace was that there were few ways to get to the remaining systems to perform
additional cyber attacks. By day three, cell towers, which had only recently been providing
sufficient throughput, began blinking off the communications grid, while other emergency facilities
suffered the same fate, losing their backup generators. The decision to evacuate was a
hard one, but no one could provide a confident estimate for restoring power at the edge of the
system, where meters had been bricked. Even worse, the city was literally flooding from the
basements up, making habitation a health risk and further undermining efforts to move and care
for people. The mayor requested the governor send in the National Guard to help utility personnel
remove meters for direct connections. The procedure was not complex but it did require two-man
teams to visit hundreds of thousands of locations. A return to normal would be measured in
neither days nor weeks, but more likely months if not years. And it would certainly have to be a
“new normal.”
Now mix in the accelerating rate at which connectivity between not just intelligent objects and the
cloud, but between objects and other objects, is expanding, and the degree of interdependence
we’re building and accepting is simply staggering. Boy, do things work great when they work. But
what are our plans B and C for when these things fail, or when the systems on which they depend
fail? And fail they will.
Below find a starter list of cautionary observations, each of which suggests its own high-level
solution.
• Mass cascades—Single system disruptions can quickly cascade as large numbers of people’s
routines or plans are changed, resulting in capacity surges and difficult-to-predict impacts
as first-order impacts are accompanied by second- and third-order impacts
• Mono-culture risk
What’s it going to take to follow through on any of these suggestions? Accidents, property
damage, corporate reputational damage, national security impacts, injuries, and significant loss of
human life. In short, problems that individuals, companies, and governments recognize today as
safety problems. Security pundits, particularly those focused on cyber security risks to industrial
operations, have been warning for years that interconnecting and automating systems that control
often-highly dangerous physical processes brings with it a type and scale of risk we had previously
not seen. Many have said that the answer lies in fusing security matters with safety culture.
We’ve seen cars, phones, toys, and many other types of tech-enabled products recalled or
terminated due to safety issues. When the same business and social impulses begin to extend into
the security realm, when more industrial software has to meet the requirements of “safety critical”
systems, we may find ways to avoid scenes such as those depicted in this paper.
16 | Michael Assante and Andrew Bochman
Our civilization is grappling with unbounded complexity and cyber exposure brought by
automating important processes without a full consideration of the possible cyber consequences.
Obvious and seemingly unstoppable trend lines are pointing to massive deployment of increasingly
automated and even autonomous systems underway now and accelerating over the next few
years. We recommend a strategic pause to reconsider how we more fully value automation from a
cyber-informed cost-benefit perspective. And with or without that pause (we assume most won’t
understand the rationale) it is imperative that we find ways to identify, interrupt, and prevent
catastrophic cyber-physical consequences of both cyber-attack and malfunction of these
technologies.
Acknowledgments
This report is made possible by general support to CSIS. No direct sponsorship contributed to its
publication.
This report is produced by the Center for Strategic and International Studies (CSIS), a private,
tax-exempt institution focusing on international public policy issues. Its research is
nonpartisan and nonproprietary. CSIS does not take specific policy positions. Accordingly, all
views, positions, and conclusions expressed in this publication should be understood to be
solely those of the author(s).
© 2017 by the Center for Strategic and International Studies. All rights reserved.
The Internet of Things: Frequently Asked
Questions
Eric A. Fischer
Senior Specialist in Science and Technology
Summary
“Internet of Things” (IoT) refers to networks of objects that communicate with other objects and
with computers through the Internet. “Things” may include virtually any object for which remote
communication, data collection, or control might be useful, such as vehicles, appliances, medical
devices, electric grids, transportation infrastructure, manufacturing equipment, or building
systems.
In other words, the IoT potentially includes huge numbers and kinds of interconnected objects. It
is often considered the next major stage in the evolution of cyberspace. Some observers believe it
might even lead to a world where cyberspace and human space would seem to effectively merge,
with unpredictable but potentially momentous societal and cultural impacts.
Two features make objects part of the IoT—a unique identifier and Internet connectivity. Such
“smart” objects each have a unique Internet Protocol (IP) address to identify the object sending
and receiving information. Smart objects can form systems that communicate among themselves,
usually in concert with computers, allowing automated and remote control of many independent
processes and potentially transforming them into integrated systems.
Those systems can potentially impact homes and communities, factories and cities, and every
sector of the economy, both domestically and globally. Although the full extent and nature of the
IoT’s impacts remain uncertain, economic analyses predict that it will contribute trillions of
dollars to economic growth over the next decade. Sectors that may be particularly affected
include agriculture, energy, government, health care, manufacturing, and transportation.
The IoT can contribute to more integrated and functional infrastructure, especially in “smart
cities,” with projected improvements in transportation, utilities, and other municipal services. The
Obama Administration announced a smart-cities initiative in September 2015.
There is no single federal agency that has overall responsibility for the IoT. Agencies may find
IoT applications useful in helping them fulfill their missions. Each is responsible for the
functioning and security of its own IoT, although some technologies, such as drones, may fall
under the jurisdiction of other agencies as well. Various agencies also have relevant regulatory,
sector-specific, and other mission-related responsibilities, such as the Departments of Commerce,
Energy, and Transportation, the Federal Communications Commission, and the Federal Trade
Commission.
Security and privacy are often cited as major issues for the IoT, given the perceived difficulties of
providing adequate cybersecurity for it, the increasing role of smart objects in controlling
components of infrastructure, and the enormous increase in potential points of attack posed by the
proliferation of such objects. The IoT may also pose increased risks to privacy, with cyberattacks
potentially resulting in exfiltration of identifying or other sensitive information about an
individual. With an increasing number of IoT objects in use, privacy concerns also include
questions about the ownership, processing, and use of the data they generate.
Several other issues might affect the continued development and implementation of the IoT.
Among them are
the lack of consensus standards for the IoT, especially with respect to
connectivity;
the transition to a new Internet Protocol (IPv6) that can handle the exponential
increase in the number of IP addresses that the IoT will require;
methods for updating the software used by IoT objects in response to security and
other needs;
energy management for IoT objects, especially those not connected to the electric
grid; and
the role of the federal government, including investment, regulation of
applications, access to wireless communications, and the impact of federal rules
regarding “net neutrality.”
No bills specifically on the IoT have been introduced in the 114th Congress, although S.Res. 110
was agreed to in March 2015, and H.Res. 195 was introduced in April. Both call for a U.S. IoT
strategy, a focus on a consensus-based approach to IoT development, commitment to federal use
of the IoT, and its application in addressing challenging societal issues. House and Senate
hearings have been held on the IoT, and several congressional caucuses may consider associated
issues. Moreover, bills affecting privacy, cybersecurity, and other aspects of communication could
affect IoT applications.
Contents
What Is the Internet of Things (IoT)?
How Does the IoT Work?
What Impacts Will the IoT Have?
  Economic Growth
  Economic Sectors
    Agriculture
    Energy
    Health Care
    Manufacturing
    Transportation
    Infrastructure and Smart Cities
  Social and Cultural Impacts
What Is the Current Federal Role?
What Issues Might Affect the Development and Implementation of the IoT?
  Technical Issues
    Internet Addresses
    High-Speed Internet
    Wireless Communications
    Standards
    Other Technical Issues
  Cybersecurity
  Safety
  Privacy
  Other Policy Issues
    Federal Role
    Spectrum Access
    Net Neutrality
What Actions Has Congress Taken?
  Legislation
    Bills
    Resolutions
  Hearings
  Caucuses
Where Can I Find Additional Resources on This Topic?
Contacts
Author Contact Information
Acknowledgments
The Internet of Things (IoT) is a complex, often poorly understood phenomenon. The term
is more than a decade old, but interest has grown considerably over the last few years as
applications have increased.1 The impacts of the IoT on the economy and society more
generally are expected by many to grow substantially. This report was developed to assist
Congress in responding to some commonly asked questions about it:
“What Is the Internet of Things (IoT)?”
“How Does the IoT Work?”
“What Impacts Will the IoT Have?”
“What Is the Current Federal Role?”
“What Issues Might Affect the Development and Implementation of the IoT?”
“What Actions Has Congress Taken?”
“Where Can I Find Additional Resources on This Topic?”
1
Postscapes, “A Brief History of the Internet of Things,” 2015, http://postscapes.com/internet-of-things-history.
2
See, for example, Roberto Minerva, Abyi Biru, and Domenico Rotondi, “Towards a Definition of the Internet of
Things (IoT)” (IEEE Internet Initiative, May 27, 2015), http://iot.ieee.org/images/files/pdf/
IEEE_IoT_Towards_Definition_Internet_of_Things_Revision1_27MAY15.pdf.
3
Adam Thierer, “The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without
Derailing Innovation” (Mercatus Center (George Mason University), November 19, 2014, http://mercatus.org/
publication/internet-things-and-wearable-technology-addressing-privacy-and-security-concerns-without.
4
See, for example, Goldman Sachs Global Investment Research, “Our Thinking—What Is the Internet of Things?,”
Goldman Sachs, September 2014, http://www.goldmansachs.com/our-thinking/pages/iot-infographic.html. Some
observers even use Industrial Internet as a synonym for the IoT, although it more commonly applies to manufacturing
and other industrial activities. See, for example, World Economic Forum, “Industrial Internet of Things: Unleashing the
Potential of Connected Products and Services” (World Economic Forum, January 2015), http://www.weforum.org/
reports/industrial-internet-things-unleashing-potential-connected-products-and-services; Industrial Internet Consortium,
(continued...)
context of IoT to denote related concepts such as cyber-physical systems5 and the Internet of
Everything.6
The IoT is often considered the next major stage in the evolution of cyberspace.7 The first
electronic computers were developed in the 1940s, but forty years passed before connecting
computers through wired devices began to spread in the 1980s. The first decade of the twenty-
first century saw the next stage, marked by the rapid spread of smartphones and other mobile
devices that use wireless communications,8 as well as social media, big-data analytics, and cloud
computing.9 Building on those advances, connections between two or more machines (M2M) and
between machines and people are expected by many observers to lead to huge growth in the IoT
by 2020.10
(...continued)
“Home,” 2015, http://www.industrialinternetconsortium.org/index.htm.
5
National Institute of Standards and Technology, “Cyber-Physical Systems,” May 22, 2015, http://www.nist.gov/cps/
index.cfm. NIST defines cyber-physical systems as “co-engineered interacting networks of physical and computational
components.” It is a somewhat broader concept than the IoT, in that such systems need not be connected to the Internet
to function.
6
Cisco, “The Internet of Everything,” 2013, http://perma.cc/Y4LQ-633J?type=live. This concept is similar to that of
the IoT but emphasizes its ubiquity, leading some observers to argue that it is more comprehensive (Dorothy
Shamonsky, “Internet of Things vs. Internet of Everything: Does the Distinction Matter to User Experience
Designers?,” ICS Insight Blog, July 13, 2015, http://www.ics.com/blog/internet-things-vs-internet-everything-does-
distinction-matter-user-experience-designers). For purposes of this report, they are treated as synonymous.
7
The term cyberspace usually refers to the worldwide collection of connected ICT components, the information that is
stored in and flows through those components, and the ways that information is structured and processed. Its evolution
has been characterized in many different ways, but IoT’s emergence is a common theme. See, for example, Janna
Anderson and Lee Rainie, “The Internet of Things Will Thrive by 2025,” Pew Research Center, May 14, 2014,
http://www.pewinternet.org/2014/05/14/internet-of-things/; Simona Jankowski et al., “The Internet of Things: Making
Sense of the Next Mega-Trend” (Goldman Sachs Global Investment Research, September 3, 2014),
http://www.goldmansachs.com/our-thinking/pages/internet-of-things/iot-report.pdf; The White House, “Cyberspace
Policy Review,” May 29, 2009, http://www.whitehouse.gov/assets/documents/Cyberspace_Policy_Review_final.pdf.
8
Pew Research Internet Project, “Device Ownership over Time,” January 2014, http://www.pewinternet.org/data-trend/
mobile/device-ownership/.
9
Nicholas D. Evans, “SMAC and the Evolution of IT,” Computerworld, December 9, 2013,
http://www.computerworld.com/article/2475696/it-transformation/smac-and-the-evolution-of-it.html. SMAC stands for
social media, mobile devices, analytics (big data), and cloud computing.
10
Gartner, Inc., “Gartner Says 4.9 Billion Connected ‘Things’ Will Be in Use in 2015” (press release, November 11,
2014), http://www.gartner.com/newsroom/id/2905717; Leon Spencer, “Internet of Things Market to Hit $7.1 Trillion
by 2020: IDC,” June 5, 2014, http://www.zdnet.com/article/internet-of-things-market-to-hit-7-1-trillion-by-2020-idc/.
11
See CRS Report R42338, Smart Meter Data: Privacy and Cybersecurity, by Brandon J. Murrill, Edward C. Liu, and
Richard M. Thompson II.
12
See CRS Report R44192, Unmanned Aircraft Systems (UAS): Commercial Outlook for a New Industry, by Bill
Canis.
13
Tove B. Danovich, “Internet-Connected Sheep and the New Roaming Wireless,” The Atlantic, February 9, 2015,
http://www.theatlantic.com/technology/archive/2015/02/internet-connected-sheep-and-the-new-roaming-wireless/
385274/; David Evans, “Introducing the Wireless Cow,” The Agenda, July 2015, http://www.politico.com/agenda/
(continued...)
What makes an object part of the IoT is embedded or attached computer chips or similar components that give the
object both a unique identifier and Internet connectivity. Objects with such components are often
called “smart”—such as smart meters and smart cars.
Internet connectivity allows a smart object to communicate with computers and with other smart
objects. Connections of smart objects to the Internet can be wired, such as through Ethernet
cables, or wireless, such as via a Wi-Fi or cellular network.
To enable precise communications, each IoT object must be uniquely identifiable. That is
accomplished through an Internet Protocol (IP) address, a number assigned to each Internet-
connected device, whether a desktop computer, a mobile phone, a printer, or an IoT object.14
Those IP addresses ensure that the device or object sending or receiving information is correctly
identified.
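As an illustration of the addressing scale involved, the following sketch uses Python’s standard `ipaddress` module to compare the sizes of the IPv4 and IPv6 address spaces; it is purely illustrative and not drawn from the report itself.

```python
import ipaddress

# Compare the sizes of the IPv4 and IPv6 address spaces.
ipv4_space = ipaddress.ip_network("0.0.0.0/0").num_addresses  # 32-bit addresses
ipv6_space = ipaddress.ip_network("::/0").num_addresses       # 128-bit addresses

print(f"IPv4 addresses: {ipv4_space:,}")    # 4,294,967,296 (about 4.3 billion)
print(f"IPv6 addresses: {ipv6_space:.2e}")  # about 3.40e+38

# IPv4 alone cannot assign a unique address to the tens of billions of
# IoT objects forecast for the coming years, one reason the transition
# to IPv6 is a recurring technical issue for the IoT.
```

The contrast in magnitudes is the point: IPv6’s 128-bit addresses make the exhaustion pressures of the 32-bit IPv4 space effectively moot for IoT growth.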
What kinds of information do IoT objects communicate? The answer depends on the nature of the
object, and it can be simple or complex. For example, a smart thermometer might have only one
sensor, used to communicate ambient temperature to a remote weather-monitoring center. A
wireless medical device might, in contrast, use various sensors to communicate a person’s body
temperature, pulse, blood pressure, and other variables to a medical service provider via a
computer or mobile phone.
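The contrast between simple and complex reporting can be sketched as the data payloads each device might transmit. The device names, field names, and values below are hypothetical and purely for illustration; real devices use vendor- or standard-specific formats.

```python
import json

# Hypothetical payloads illustrating simple vs. complex IoT reporting.

# A smart thermometer with a single sensor reports one value.
thermometer_reading = {"device_id": "thermo-042", "ambient_temp_c": 21.5}

# A wireless medical device aggregates several sensors into one report.
medical_reading = {
    "device_id": "med-117",
    "body_temp_c": 37.1,
    "pulse_bpm": 68,
    "blood_pressure_mmhg": {"systolic": 118, "diastolic": 76},
}

# Either payload is serialized and transmitted over the Internet
# to the relevant monitoring service.
payload = json.dumps(medical_reading)
print(payload)
```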
Smart objects can also be involved in command networks. For example, industrial control
systems can adjust manufacturing processes based on input from both other IoT objects and
human operators. Network connectivity can permit such operations to be performed in “real
time”—that is, almost instantaneously.
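A command network of the kind described above can be sketched as a feedback loop in which a controller maps incoming sensor readings to commands. The setpoint, tolerance, function names, and commands here are hypothetical, not drawn from any particular control system.

```python
# Minimal sketch of a command network: a controller adjusts a
# process based on readings from smart sensors.

TARGET_TEMP_C = 180.0   # desired process temperature
TOLERANCE_C = 5.0       # acceptable deviation

def decide_command(sensor_temp_c: float) -> str:
    """Map a sensor reading to a control command."""
    if sensor_temp_c > TARGET_TEMP_C + TOLERANCE_C:
        return "REDUCE_HEAT"
    if sensor_temp_c < TARGET_TEMP_C - TOLERANCE_C:
        return "INCREASE_HEAT"
    return "HOLD"

# Simulated stream of readings arriving in near real time.
readings = [172.0, 179.5, 188.2, 181.0]
commands = [decide_command(r) for r in readings]
print(commands)  # ['INCREASE_HEAT', 'HOLD', 'REDUCE_HEAT', 'HOLD']
```

In practice the loop would run continuously, with commands flowing back to actuators over the same network that delivers the sensor data.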
Smart objects can form systems that communicate information and commands among themselves,
usually in concert with computers they connect to. This kind of communication enables the use of
smart systems in homes, vehicles, factories, and even entire cities.
Smart systems allow for automated and remote control of many processes. A smart home can
permit remote control of lighting, security, HVAC (heating, ventilating, and air conditioning), and
appliances. In a smart city, an intelligent transportation system (ITS) may permit vehicles to
communicate with other vehicles and roadways to determine the fastest route to a destination,
avoiding traffic jams, and traffic signals can be adjusted based on congestion information
received from cameras and other sensors.15 Buildings might automatically adjust electric usage,
based on information sent from remote thermometers and other sensors.16 An Industrial Internet
application can permit companies to monitor production systems and adjust processes, remotely
control and synchronize machinery operations, track inventory and supply chains, and perform
other tasks.17
(...continued)
story/2015/06/internet-of-things-growth-challenges-000098.
14
Internet Assigned Numbers Authority (IANA), “Number Resources,” 2015, https://www.iana.org/numbers.
15
Department of Transportation, “Intelligent Transportation Systems (ITS),” 2015, http://www.its.dot.gov/index.htm;
Bruce Katz, “Why the U.S. Government Should Embrace Smart Cities” (Brookings Institution, July 26, 2011),
http://www.brookings.edu/research/opinions/2011/07/26-cities-katz.
16
Richard Barker and Amy Liu, “Smart Buildings the Next Step for Seattle,” Brookings Institution, July 28, 2014,
http://www.brookings.edu/blogs/the-avenue/posts/2014/07/28-smart-buildings-seattle-barker-liu; Bob Violino, “Smart
Cities Are Here Today—and Getting Smarter,” Computerworld, February 12, 2014, http://www.computerworld.com/
article/2487526/emerging-technology-smart-cities-are-here-today-and-getting-smarter.html.
17
See, for example, Industrial Internet Consortium, “Home.”
IoT connections and communications can be created across a broad range of objects and networks
and can transform previously independent processes into integrated systems. These integrated
systems can potentially have substantial effects on homes and communities, factories and cities,
and every sector of the economy, both domestically and globally.
Economic Growth
Several economic analyses have predicted that the IoT will contribute significantly to economic
growth over the next decade, but the predictions vary substantially in magnitude. The current
global IoT market has been valued at about $2 trillion, with estimates of its predicted value over
the next five to ten years varying from $4 trillion to $11 trillion.19 Such variability demonstrates
the difficulty of making economic forecasts in the face of various uncertainties, including a lack
of consensus among researchers about exactly what the IoT is and how it will develop.20
Economic Sectors
Agriculture
The IoT can be leveraged by the agriculture industry through precision agriculture, with the goal
of optimizing production and efficiency while reducing costs and environmental impacts. For
farming operations, it involves analysis of detailed, often real-time data on weather, soil and air
quality, water supply, pest populations, crop maturity, and other factors such as the cost and
availability of equipment and labor.21 Field sensors test soil moisture and chemical balance,
18
See, for example, National Security Telecommunications Advisory Committee, “NSTAC Report to the President on
the Internet of Things,” November 19, 2014, http://www.dhs.gov/sites/default/files/publications/
NSTAC%20Report%20to%20the%20President%20on%20the%20Internet%20of%20Things%20Nov%202014%20%2
8updat%20%20%20.pdf..
19
Denise Lund et al., “Worldwide and Regional Internet of Things (IoT) 2014–2020 Forecast: A Virtuous Circle of
Proven Value and Demand,” May 2014; Gartner, Inc., “Gartner Says the Internet of Things Installed Base Will Grow to
26 Billion Units By 2020” December 12, 2013, http://www.gartner.com/newsroom/id/2636073; James Manyika et al.,
“The Internet of Things: Mapping the Value Beyond the Hype” (McKinsey Global Institute, June 2015),
http://www.mckinsey.com/~/media/McKinsey/dotcom/Insights/Business Technology/Unlocking the potential of the
Internet of Things/Unlocking_the_potential_of_the_Internet_of_Things_Full_report.ashx; Verizon, “State of the
Market: The Internet of Things 2015,” February 20, 2015, http://www.verizonenterprise.com/resources/reports/
rp_state-of-market-the-market-the-internet-of-things-2015_en_xg.pdf.
20
Anderson and Rainie, “The Internet of Things Will Thrive by 2025.”
21
Jasper Janangir Mohammed, “Surprise: Agriculture Is Doing More with IoT Innovation than Most Other Industries,”
VentureBeat, December 7, 2014, http://venturebeat.com/2014/12/07/surprise-agriculture-is-doing-more-with-iot-
innovation-than-most-other-industries/; IBM Research, “Precision Agriculture,” 2015, http://www.research.ibm.com/
(continued...)
which can be coupled with location technologies to enable precise irrigation and fertilization.22
Drones and satellites can be used to take detailed images of fields, giving farmers information
about crop yield, nutrient deficiencies, and weed locations.23 For ranching and animal operations,
radio frequency identification (RFID) chips and electronic identification (EID) readers help
monitor animal movements, feeding patterns, and breeding capabilities, while maintaining
detailed records on individual animals.24
Energy
Within the energy sector, the IoT may impact both production and delivery, for example through
facilitating monitoring of oil wellheads and pipelines.25 When IoT components are embedded into
parts of the electrical grid, the resulting infrastructure is commonly referred to as the “smart
grid.”26 This use of IoT enables greater control by utilities over the flow of electricity and can
enhance the efficiency of grid operations.27 It can also expedite the integration of microgenerators
into the grid.28
Smart-grid technology can also provide consumers with greater knowledge and control of their
energy usage through the use of smart meters in the home or office.29 Connection of smart meters
to a building’s HVAC, lighting, and other systems can result in “smart buildings” that integrate
the operation of those systems.30 Smart buildings use sensors and other data to automatically
adjust room temperatures, lighting, and overall energy usage, resulting in greater efficiency and
lower energy cost.31 Information from adjacent buildings may be further integrated to provide
additional efficiencies in a neighborhood or larger division in a city.
(...continued)
articles/precision_agriculture.shtml.
22
Agnes Szolnoki and Andras Nabradi, “Economic, Practical Impacts of Precision Farming—With Especial Regard to
Harvesting,” Applied Studies in Agribusiness and Commerce 8, no. 2–3 (2014): 141–46, http://ageconsearch.umn.edu//
handle/202892.
23
Matthew J. Grassi, “Imagery: Which Way Is Right for Me?,” PrecisionAg, August 6, 2015,
http://www.precisionag.com/data/imagery/imagery-which-way-is-right-for-me/.
24
See, for example, Adrianne Jeffries, “Internet of Cows: Technology Could Help Track Disease, but Ranchers Are
Resistant,” The Verge, May 13, 2013, http://www.theverge.com/2013/5/10/4316658/internet-of-cows-technology-
offers-ways-to-track-livestock-but; The State of Victoria, “On-Farm Benefits of Sheep Electronic Identification (EID),”
Agriculture, 2015, http://agriculture.vic.gov.au/agriculture/farm-management/national-livestock-identification-system/
nlis-sheep-and-goats/on-farm-benefits-of-sheep-electronic-identification.
25
Verizon, “State of the Market: The Internet of Things 2015.”
26
Department of Energy, “The Smart Grid,” 2015, http://www.smartgrid.gov/the_smart_grid#smart_grid.
27
CRS Report R41886, The Smart Grid and Cybersecurity—Regulatory Policy and Issues, by Richard J. Campbell.
28
Jean Kumagai, “The Rise of the Personal Power Plant,” IEEE Spectrum, May 28, 2014, http://spectrum.ieee.org/
energy/the-smarter-grid/the-rise-of-the-personal-power-plant.
29
CRS Report R42338, Smart Meter Data: Privacy and Cybersecurity, by Brandon J. Murrill, Edward C. Liu, and
Richard M. Thompson II.
30
Institute for Building Efficiency, “What Is a Smart Building?,” April 2011, http://www.institutebe.com/smart-grid-
smart-building/What-is-a-Smart-Building.aspx.
31
IBM, “Smarter Buildings,” 2015, http://www.ibm.com/smarterplanet/us/en/green_buildings/overview/.
Health Care
The IoT has many applications in the health care field,32 in both health monitoring and treatment,
including telemedicine and telehealth.33 Applications may involve the use of medical technology
and the Internet to provide long-distance health care and education.34 Medical devices—which
can be wearable or nonwearable, or even implantable, injectable, or ingestible35—can permit
remote tracking of a patient’s vital signs, chronic conditions, or other indicators of health and
wellness.36 Wireless medical devices may be used not only in hospital settings but also in remote
monitoring and care, freeing patients from sustained or recurring hospital visits.37 Some experts
have stated that advances in healthcare IoT applications will be important for providing
affordable, quality care to the aging U.S. population.38
Manufacturing
Integration of IoT technologies into manufacturing and supply chain logistics is predicted to have
a transformative effect on the sector.39 The biggest impact may be realized in optimization of
operations, making manufacturing processes more efficient.40 Efficiencies can be achieved by
connecting components of factories to optimize production, but also by connecting components
of inventory and shipping for supply chain optimization.41 Another application is predictive
maintenance, which uses sensors to monitor machinery and factory infrastructure for damage.
Resulting data can enable maintenance crews to replace parts before potentially dangerous and/or
costly malfunctions occur.42
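The predictive-maintenance idea can be sketched as a rolling-average check on sensor data: a machine is flagged for service once readings trend past a limit, before an outright failure. The threshold, window size, and units below are hypothetical, chosen only to illustrate the technique.

```python
from collections import deque

# Illustrative predictive maintenance: flag a machine for service when
# the rolling average of a vibration sensor exceeds a threshold.

VIBRATION_LIMIT_MM_S = 7.0   # hypothetical alert threshold (mm/s RMS)
WINDOW = 3                   # readings in the rolling window

def needs_maintenance(readings, limit=VIBRATION_LIMIT_MM_S, window=WINDOW):
    """Return True once a rolling average of readings exceeds the limit."""
    recent = deque(maxlen=window)
    for r in readings:
        recent.append(r)
        if len(recent) == window and sum(recent) / window > limit:
            return True
    return False

healthy = [3.1, 3.3, 3.0, 3.4, 3.2]
wearing = [3.2, 4.8, 6.5, 7.9, 8.4]   # vibration trending upward

print(needs_maintenance(healthy))  # False
print(needs_maintenance(wearing))  # True
```

Real systems use far richer models, but the principle is the same: act on a trend in the sensor data rather than waiting for the failure it predicts.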
Transportation
Transportation systems are becoming increasingly connected. New motor vehicles are equipped
with features such as global positioning systems (GPS) and in-vehicle entertainment, as well as
32
The use of IoT in medicine is sometimes referred to as “connected” or “digital” health. See, for example, Food and
Drug Administration, “Digital Health,” September 22, 2015, http://www.fda.gov/ForConsumers/ConsumerUpdates/
ucm20035974.htm.
33
American Telemedicine Association, “What Is Telemedicine?” 2015, http://www.americantelemed.org/about-
telemedicine/what-is-telemedicine.
34
Health Resources and Services Administration, “Telehealth,” Department of Health and Human Services, 2015,
http://www.hrsa.gov/ruralhealth/about/telehealth/telehealth.html.
35
Manyika et al., “The Internet of Things: Mapping the Value Beyond the Hype.”
36
Jerome Couturier et al., “How Can the Internet of Things Help to Overcome Current Healthcare Challenges,”
Digiworld Economic Journal, no. 87 (Q 2012): 67–81, http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2304133.
37
See, for example, Food and Drug Administration, “Wireless Medical Devices,” September 22, 2015,
http://www.fda.gov/MedicalDevices/DigitalHealth/WirelessMedicalDevices/default.htm.
38
See testimony from Senate Special Committee on Aging, Roundtable: Harnessing the Power of Telehealth: Promises
and Challenges?, 2014, http://www.aging.senate.gov/hearings/roundtable-harnessing-the-power-of-telehealth-
promises-and-challenges; House Committee on the Judiciary, Subcommittee on Courts, Intellectual Property, and the
Internet, Internet of Things, 2015, http://judiciary.house.gov/index.cfm/2015/7/hearing-internet-of-things.
39
Lopez Research, “Building Smarter Manufacturing with the Internet of Things (IoT),” January 2014,
http://www.cisco.com/web/solutions/trends/iot/iot_in_manufacturing_january.pdf; James Macaulay, Lauren Buckalew,
and Gina Chung, “Internet of Things in Logistics” (DHL Trend Research and Cisco Consulting Services, 2015),
http://www.dhl.com/content/dam/Local_Images/g0/New_aboutus/innovation/DHLTrendReport_Internet_of_things.pdf.
40
Manyika et al., “The Internet of Things: Mapping the Value Beyond the Hype.”
41
Macaulay, Buckalew, and Chung, “Internet of Things in Logistics.”
42
Manyika et al., “The Internet of Things: Mapping the Value Beyond the Hype.”
advanced driver assistance systems (ADAS), which utilize sensors in the vehicle to assist the
driver, for example with parking and emergency braking.43 Further connection of vehicle systems
enables fully autonomous or self-driving automobiles, which are predicted to be commercialized
in the next 5-20 years.44
Additionally, IoT technologies can allow vehicles within and across modes—including cars,
buses, trains, airplanes, and unmanned aerial vehicles (drones)—to “talk” to one another and to
components of the IoT infrastructure, creating intelligent transportation systems (ITS). Potential
benefits of ITS may include increased safety and collision avoidance, optimized traffic flows, and
energy savings, among others.45
43
Intel, “Technology and Computing Requirements for Self-Driving Cars,” June 2014, http://www.intel.com/content/
dam/www/public/us/en/documents/white-papers/automotive-autonomous-driving-vision-paper.pdf.
44
James M. Anderson et al., Autonomous Vehicle Technology: A Guide for Policymakers (Santa Monica, CA: Rand
Corporation, 2014), http://www.rand.org/content/dam/rand/pubs/research_reports/RR400/RR443-1/RAND_RR443-
1.pdf.
45
Intelligent Transportation Systems (ITS) and Joint Program Office (JPO), “ITS 2015-2019 Strategic Plan”
(Department of Transportation, February 19, 2015), http://www.its.dot.gov/strategicplan.pdf.
46
Manyika et al., “The Internet of Things: Mapping the Value Beyond the Hype”; Matthew Cuddy et al., “The
Smart/Connected City and Its Implications for Connected Transportation” (Department of Transportation, October 14,
2014), http://www.its.dot.gov/itspac/Dec2014/Smart_Connected_City_FINAL_111314.pdf.
47
Katz, “Why the U.S. Government Should Embrace Smart Cities.”
48
GE Lighting, “GE Announces Programs for Intelligent Cities on Both U.S. Coasts as It Pilots New Connected LED
Solution” (Press Release, April 15, 2015), http://pressroom.gelighting.com/news/ge-announces-programs-for-
intelligent-cities-on-both-u-s-coasts-as-it-pilots-new-connected-led-solution#.VcuyzfnYjnh.
49
See Andrea Zanella et al., “Internet of Things for Smart Cities,” IEEE Internet of Things Journal 1, no. 1 (February
2014): 22–32, doi:10.1109/JIOT.2014.2306328.
50
See, for example, Brookings Institution, “Getting Smarter About Smart Cities,” April 18, 2014,
http://www.brookings.edu/~/media/research/files/papers/2014/04/smart-cities/bmpp_smartcities.pdf; City of Scottsdale,
“myScottsdale,” 2015, http://www.scottsdaleaz.gov/service-request/myScottsdale; City of Dubuque, “DBQ IQ Water
Management,” 2015, http://www.cityofdubuque.org/1786/DBQ-IQ; Cleantech San Diego, “Smart Cities San Diego,”
2015, http://cleantechsandiego.org/smart-city-san-diego/; Boyd Cohen, “The 10 Smartest Cities In North America,”
Fast Company, November 14, 2013, http://www.fastcoexist.com/3021592/the-10-smartest-cities-in-north-america; GE
Lighting, “GE Announces Programs for Intelligent Cities”; Smart Cities Council, “Vision,” 2015,
http://smartcitiescouncil.com/category-vision; Violino, “Smart Cities Are Here Today—and Getting Smarter.”
As with IoT and other popular technology terms, there is no established consensus definition or
set of criteria for characterizing what a smart city is. Specific characterizations vary widely, but in
general they involve the use of IoT and related technologies to improve energy, transportation,
governance, and other municipal services for specified goals such as sustainability or improved
quality of life.51 The related technologies include
- social media (such as Facebook and Twitter),
- mobile computing (such as smartphones and wearable devices),
- data analytics (big data—the processing and use of very large data sets; and open data—
  databases that are publicly accessible), and
- cloud computing (the delivery of computing services from a remote location,
  analogous to the way utilities such as electricity are provided).52
Together, these are sometimes called SMAC.53
51
See, for example, Brookings Institution, “Getting Smarter About Smart Cities”; Hafedh Chourabi et al.,
“Understanding Smart Cities: An Integrative Framework” (45th Hawaii International Conference on System Sciences,
IEEE, 2012), 2289–97, doi:10.1109/HICSS.2012.615; Frost and Sullivan, “Strategic Opportunity Analysis of the
Global Smart City Market,” August 2013, http://twimgs.com/audiencedevelopment/JC/LANDINGPAGES/GOV/
YEAR_2014/020314/4Define.pdf; GSMA and A.T. Kearney, “GSMA Mobile Economy 2013,” July 19, 2013,
http://www.gsmamobileeconomy.com/GSMA%20Mobile%20Economy%202013.pdf; Smart Cities Council,
“Definitions and Overviews,” 2015, http://smartcitiescouncil.com/smart-cities-information-center/definitions-and-
overviews.
52
See CRS Report R42887, Overview and Issues for Implementation of the Federal Cloud Computing Initiative:
Implications for Federal Information Technology Reform Management, by Patricia Moloney Figliola and Eric A.
Fischer.
53
See, for example, Evans, “SMAC and the Evolution of IT.”
54
Carlo Ratti of the Massachusetts Institute of Technology, as quoted in Violino, “Smart Cities Are Here Today—and
Getting Smarter.”
55
See, for example, Hayley Tsukayama, “What Eric Schmidt Meant When He Said ‘the Internet Will Disappear,’” The
Washington Post, January 23, 2015, https://www.washingtonpost.com/blogs/the-switch/wp/2015/01/23/what-eric-
schmidt-meant-when-he-said-the-internet-will-disappear/.
helping them fulfill their missions through a variety of applications such as those discussed in this
report and elsewhere.56 Each agency is responsible under various laws and regulations for the
functioning and security of its own IoT, although some technologies, such as drones, may also fall
under some aspects of the jurisdiction of other agencies.
Various agencies have regulatory, sector-specific, and other mission-related responsibilities that
involve aspects of IoT. For example, entities that use wireless communications for their IoT
devices will be subject to allocation rules for the portions of the electromagnetic spectrum that
they use.
- The Federal Communications Commission (FCC) allocates and assigns
  spectrum for nonfederal entities.57
- In the Department of Commerce, the National Telecommunications and
  Information Administration (NTIA) fulfills that function for federal entities,58
  and the National Institute of Standards and Technology (NIST) creates
  standards, develops new technologies, and provides best practices for the Internet
  and Internet-enabled devices.59
- The Federal Trade Commission (FTC) regulates and enforces consumer
  protection policies, including for privacy and security of consumer IoT devices.60
- The Department of Homeland Security (DHS) is responsible for coordinating
  security for the 16 critical infrastructure sectors.61 Many of those sectors use
  industrial control systems (ICS), which are often connected to the Internet, and
  the DHS National Cybersecurity and Communications Integration Center
  (NCCIC) has an ICS Cyber Emergency Response Team (ICS-CERT) to help
  critical-infrastructure entities address ICS cybersecurity issues.62
- The Food and Drug Administration (FDA) also has responsibilities with
  respect to the cybersecurity of Internet-connected medical devices.63
- The Department of Justice (DOJ) addresses law-enforcement aspects of IoT,
  including cyberattacks, unlawful exfiltration of data from devices and/or
56
See, for example, Joseph Bradley et al., “Internet of Everything: A $4.6 Trillion Public-Sector Opportunity,” White
Paper (Cisco, 2013), http://internetofeverything.cisco.com/sites/default/files/docs/en/
ioe_public_sector_vas_white%20paper_121913final.pdf.
57
CRS Report RL32589, The Federal Communications Commission: Current Structure and Its Role in the Changing
Telecommunications Landscape, by Patricia Moloney Figliola; CRS Report R43256, Spectrum Policy: Provisions in
the 2012 Spectrum Act, by Linda K. Moore.
58
CRS Report R43866, The National Telecommunications and Information Administration (NTIA): An Overview of
Programs and Funding, by Linda K. Moore.
59
See, for example, National Institute of Standards and Technology, “Cyber-Physical Systems.”
60
See, for example, FTC Staff, “Internet of Things: Privacy and Security in a Connected World” (Federal Trade
Commission, January 2015), http://www.ftc.gov/system/files/documents/reports/federal-trade-commission-staff-report-
november-2013-workshop-entitled-internet-things-privacy/150127iotrpt.pdf.
61
For descriptions of these sectors, see The White House, “Critical Infrastructure Security and Resilience” (Presidential
Policy Directive 21, February 12, 2013), http://www.whitehouse.gov/the-press-office/2013/02/12/presidential-policy-
directive-critical-infrastructure-security-and-resil. The directive also identifies sector-specific agencies for each of the
identified sectors.
62
Department of Homeland Security, “About the National Cybersecurity and Communications Integration Center,”
April 27, 2015, http://www.dhs.gov/about-national-cybersecurity-communications-integration-center.
63
See, for example, Food and Drug Administration, “Cybersecurity for Medical Devices and Hospital Networks: FDA
Safety Communication,” June 13, 2013, http://www.fda.gov/MedicalDevices/Safety/AlertsandNotices/ucm356423.htm.
64
See, for example, Department of Justice, “FY 2015 Budget Request: Cybersecurity,” February 28, 2014,
http://www.justice.gov/sites/default/files/jmd/legacy/2014/08/18/cyber-security.pdf.
65
See, for example, CRS Report R40147, Issues in Green Building and the Federal Response: An Introduction, by Eric
A. Fischer; CRS Report R41886, The Smart Grid and Cybersecurity—Regulatory Policy and Issues, by Richard J.
Campbell.
66
See, for example, Brian Cronin and Kevin Dopart, “Connected Vehicles—Improving Safety, Mobility, and the
Environment” (U.S. Department of Transportation, April 9, 2014), http://www.its.dot.gov/presentations/pdf/
NASA_Briefingv3.2.pdf; Intelligent Transportation Systems (ITS) and Joint Program Office (JPO), “ITS 2015-2019
Strategic Plan.”; CRS Report R42367, Medicaid and Federal Grant Conditions After NFIB v. Sebelius: Constitutional
Issues and Analysis, by Kenneth R. Thomas.
67
Intelligent Transportation Systems Joint Program Office, “About ITS,” Department of Transportation, 2015,
http://www.its.dot.gov/its_program/about_its.htm.
68
CRS Report R42718, Pilotless Drones: Background and Considerations for Congress Regarding Unmanned Aircraft
Operations in the National Airspace System, by Bart Elias.
69
CRS Report R44192, Unmanned Aircraft Systems (UAS): Commercial Outlook for a New Industry, by Bill Canis.
70
Denise E Zheng and William A. Carter, “Leveraging the Internet of Things for a More Efficient and Effective
Military” (Center for Strategic and International Studies, September 2015), http://csis.org/files/publication/
150915_Zheng_LeveragingInternet_WEB.pdf.
71
National Science Foundation, “Cyber-Physical Systems (CPS),” 2015, http://www.nsf.gov/funding/pgm_summ.jsp?
pims_id=503286&org=CISE&sel_org=CISE&from=fund; National Science Foundation, “Partnerships for Innovation:
Building Innovation Capacity,” 2015, http://nsf.gov/funding/pgm_summ.jsp?pims_id=504708.
72
Subcommittee on Networking and Information Technology Research and Development, Committee on Technology,
“Supplement to the President’s Budget for Fiscal Year 2015: The Networking and Information Technology Research
(continued...)
Agencies involved in such R&D include the Food and Drug Administration (FDA), the National
Aeronautics and Space Administration (NASA), the National Institutes of
Health (NIH), the Department of Veterans Affairs (VA), and several DOD
agencies.
The White House has also announced a smart-cities initiative focusing on the
development of a research infrastructure, demonstration projects, and other R&D
activities.73
Technical Issues
Prominent technical limitations that may affect the growth and use of the IoT include a lack of
new Internet addresses under the most widely used protocol, the availability of high-speed and
wireless communications, and lack of consensus on technical standards.
Internet Addresses
A potential barrier to the development of IoT is the technical limitations of the version of the
Internet Protocol (IP) that is used most widely. IP is the set of rules that computers use to send
and receive information via the Internet, including the unique address that each connected device
or object must have to communicate. Version 4 (IPv4) is currently in widest use. It can
accommodate about four billion addresses, and it is close to saturation, with few new addresses
available in many parts of the world.74
Some observers predict that Internet traffic will grow faster for IoT objects than for any other kind
of device over the next five years,75 with more than 25 billion IoT objects in use by 2020,76 and
(...continued)
and Development Program,” February 2015, https://www.nitrd.gov/pubs/2016supplement/
FY2016NITRDSupplement.pdf.
73
The White House, “Fact Sheet: Administration Announces New ‘Smart Cities’ Initiative to Help Communities
Tackle Local Challenges and Improve City Services” (Press Release, September 14, 2015),
https://www.whitehouse.gov/the-press-office/2015/09/14/fact-sheet-administration-announces-new-smart-cities-
initiative-help.
74
Iljitsch van Beijnum, “It’s Official: North America Out of New IPv4 Addresses,” Ars Technica, July 2, 2015,
http://arstechnica.com/information-technology/2015/07/us-exhausts-new-ipv4-addresses-waitlist-begins/.
75
Cisco predicts an annual growth rate of 71% for IoT traffic during that period, with mobile devices at about 63% and
desktop computers under 10% (Cisco, “The Zettabyte Era—Trends and Analysis,” May 2015, http://www.cisco.com/c/
en/us/solutions/collateral/service-provider/visual-networking-index-vni/VNI_Hyperconnectivity_WP.html).
76
Lund et al., “Worldwide and Regional Internet of Things (IoT) 2014–2020 Forecast: A Virtuous Circle of Proven
Value and Demand”; Gartner, Inc., “Gartner Says the Internet of Things Installed Base Will Grow to 26 Billion Units
By 2020.”
perhaps 50 billion devices altogether.77 IPv4 appears unlikely to meet that growing demand, even
with the use of workarounds such as methods for sharing IP addresses.78
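The address-sharing workaround referenced above is, in practice, network address translation (NAT): many private hosts share a single public IPv4 address, distinguished by translated port numbers. A minimal illustrative sketch follows (the addresses are from the documentation range, and this is not a real NAT implementation):

```python
# Port-based address sharing (NAT) in miniature: each (private IP, port)
# pair is mapped to a distinct port on one shared public address.
PUBLIC_IP = "203.0.113.7"  # illustrative documentation-range address

class NatTable:
    def __init__(self):
        self._map = {}         # (private_ip, private_port) -> public_port
        self._next_port = 40000

    def translate(self, private_ip, private_port):
        key = (private_ip, private_port)
        if key not in self._map:
            self._map[key] = self._next_port
            self._next_port += 1
        return (PUBLIC_IP, self._map[key])

nat = NatTable()
print(nat.translate("192.168.1.10", 5001))  # ('203.0.113.7', 40000)
print(nat.translate("192.168.1.11", 5001))  # ('203.0.113.7', 40001)
print(nat.translate("192.168.1.10", 5001))  # same mapping reused
```

The port field is 16 bits, which is one reason such sharing only postpones, rather than removes, the limits of the IPv4 address space.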
Version 6 (IPv6) allows for a huge increase in the number of IP addresses. With IPv4, the maximum
number of unique addresses, 4.2 billion, is not enough to provide even one address for each of the
7.3 billion people on Earth. IPv6, in contrast, will accommodate over 10^38 addresses, more than
a trillion trillion per person.
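The scale difference is straightforward to verify: IPv4 addresses are 32 bits long and IPv6 addresses are 128 bits, so the two address spaces can be computed directly (the population figure matches the 2015 estimate used above):

```python
# IPv4 uses 32-bit addresses; IPv6 uses 128-bit addresses.
ipv4_space = 2 ** 32    # 4,294,967,296, about 4.3 billion
ipv6_space = 2 ** 128   # about 3.4e38

world_population = 7.3e9  # 2015 estimate
print(f"IPv4 addresses per person: {ipv4_space / world_population:.2f}")
print(f"IPv6 addresses per person: {ipv6_space / world_population:.2e}")
```

The IPv4 figure comes out below one address per person, while the IPv6 figure exceeds 10^24 (a trillion trillion) per person, consistent with the comparison in the text.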
It is highly likely that, to accommodate the anticipated growth in the number of Internet-
connected objects, IPv6 will have to be implemented broadly. It has been available since 1999 but
was not formally launched until 2012.79 In most countries, fewer than 10% of IP addresses were
in IPv6 as of September 2015. Adoption is highest in some European countries and in the United
States,80 where adoption has doubled in the past year to about 20%.81 Globally, adoption has
doubled annually since 2011, to about 7% of addresses in mid-2015.82 While growth in adoption
is expected to continue, it is not yet clear whether the rate of growth will be sufficient to
accommodate the expected growth in the IoT. That will depend on a number of factors, including
replacement of some older systems and applications that cannot handle IPv6 addresses,83
resolution of security issues associated with the transition, and availability of sufficient resources
for deployment.84
Efforts to transition federal systems to IPv6 began more than a decade ago.85 According to
estimates by NIST, adoption for public-facing services has been much greater within the federal
government than within industry or academia.86 However, adoption varies substantially among
77
Dave Evans, “The Internet of Things: How the Next Evolution of the Internet Is Changing Everything” (Cisco
Internet Business Solutions Group (IBSG), April 2011), http://www.cisco.com/web/about/ac79/docs/innov/
IoT_IBSG_0411FINAL.pdf. The latter figure also includes computers and mobile devices such as smartphones.
78
Matt Ford et al., “Address Sharing-Coming to a Network near You,” IETF Journal, June 2009,
http://www.internetsociety.org/articles/address-sharing-coming-network-near-you.
79
Internet Society, “IPv6: Making Room for the Next 5 Billion People,” March 26, 2014,
http://www.internetsociety.org/deploy360/wp-content/uploads/2014/03/gen-ipv6factsheet-201403-en_FA_web.pdf.
The launch was essentially an organized attempt to stimulate adoption.
80
The top five were Belgium (34%), Switzerland (19%), the United States (18%), Germany, and Peru (17% each)
(Akamai, “IPv6 Adoption by Country and Network,” State of the Internet, September 16, 2015,
https://www.stateoftheinternet.com/trends-visualizations-ipv6-adoption-ipv4-exhaustion-global-heat-map-network-
country-growth-data.html).
81
Ibid.; Google, “IPv6,” September 23, 2015, http://www.google.com/intl/en/ipv6/. Google lists the U.S. adoption rate
at 21%.
82
Google, “IPv6”; Internet Society, “World IPv6 Launch,” May 27, 2014, http://www.worldipv6launch.org/
infographic/.
83
IPv6 addresses are four times longer than those in IPv4, and some systems and applications cannot process the longer
addresses properly (van Beijnum, “It’s Official: North America Out of New IPv4 Addresses”).
84
Sheila Frankel et al., “Guidelines for the Secure Deployment of IPv6,” SP 800-119 (National Institute of Standards
and Technology, December 2010), http://csrc.nist.gov/publications/nistpubs/800-119/sp800-119.pdf; van Beijnum,
“It’s Official: North America Out of New IPv4 Addresses”; Panayotis A. Yannakogeorgos, “The Rise of IPv6,” Air and
Space Power Journal, April 2015, 103–28, http://www.au.af.mil/au/afri/aspj/digital/pdf/articles/2015-Mar-Apr/F-
Pano.pdf.
85
Chief Information Officers Council, “Planning Guide/Roadmap toward IPv6 Adoption Within the U.S.
Government,” June 2012, https://cio.gov/wp-content/uploads/downloads/2012/09/
2012_IPv6_Roadmap_FINAL_20120712.pdf; Yannakogeorgos, “The Rise of IPv6.”
86
National Institute of Standards and Technology, “Estimating IPv6 & DNSSEC Deployment Status,” September 24,
2015, http://fedv6-deployment.antd.nist.gov/snap-all.html.
agencies, and some data suggest that federal adoption plateaued in 2012.87 Data were not
available for this report on domains that are not public-facing, and it is not clear whether adoption
of IPv6 by federal agencies will affect their deployment of IoT applications.
High-Speed Internet
Use and growth of the IoT can also be limited by the availability of access to high-speed Internet
and advanced telecommunications services, commonly known as broadband, on which it
depends. While many urban and suburban areas have access, that is not the case for many rural
areas, for which private-sector providers may not find establishment of the required infrastructure
profitable, and government programs may be limited.88
Wireless Communications
Many observers believe that issues relating to access to the electromagnetic spectrum89 will need
to be resolved to ensure the functionality and interoperability of IoT devices. Access to spectrum,
both licensed and unlicensed, is essential for devices and objects to communicate wirelessly. IoT
devices are being developed and deployed for new purposes and industries, and some argue that
the current framework for spectrum allocation may not serve these new industries well.90
Standards
Currently, there is no single universally recognized set of technical standards for the IoT,
especially with respect to communications,91 or even a commonly accepted definition among the
various organizations that have produced IoT standards or related documents.92 Many observers
agree that a common set of standards will be essential for interoperability and scalability of
devices and systems.93 However, others have expressed pessimism that a universal standard is
feasible or even desirable, given the diversity of objects that the IoT potentially encompasses.94
Several different sets of de facto standards have been in development, and some observers do not
87
Ibid.; Mohana Ravindranath, “Government Outpacing Private Sector in IPv6 Adoption, Official Says,” NextGov:
CIO Briefing, May 18, 2015, http://www.nextgov.com/cio-briefing/2015/05/government-could-be-outpacing-private-
sector-ipv6-adoption/113056/.
88
For more information, see CRS Report R44080, Municipal Broadband: Background and Policy Debate, by Lennard
G. Kruger and Angele A. Gilroy, and CRS Report RL30719, Broadband Internet Access and the Digital Divide:
Federal Assistance Programs, by Lennard G. Kruger and Angele A. Gilroy.
89
Electromagnetic spectrum, commonly referred to as radio frequency spectrum or wireless spectrum, refers to
electromagnetic waves that, with applied technology, can transmit signals to deliver voice, text, and video
communications.
90
For more information, see CRS Report R43256, Spectrum Policy: Provisions in the 2012 Spectrum Act, by Linda K.
Moore.
91
Colin Neagle, “A Guide to the Confusing Internet of Things Standards World,” Network World, July 21, 2014,
http://www.networkworld.com/article/2456421/internet-of-things/a-guide-to-the-confusing-internet-of-things-
standards-world.html.
92
Minerva, Biru, and Rotondi, “Towards a Definition of the Internet of Things (IoT).”
93
See, for example, World Economic Forum, “Industrial Internet of Things: Unleashing the Potential of Connected
Products and Services.”
94
Christopher Null, “The State of IoT Standards: Stand by for the Big Shakeout,” TechBeacon, September 2, 2015,
http://techbeacon.com/state-iot-standards-stand-big-shakeout.
expect formal standards to appear before 2017. Whether conflicts between standards will affect
growth of the sector, as they did for some other technologies, is not clear.95
Cybersecurity
The security of devices and the data they acquire, process, and transmit is often cited as a top
concern in cyberspace.98 Cyberattacks can result in theft of data and sometimes even physical
destruction. Some sources estimate losses from cyberattacks in general to be very large—in the
hundreds of billions or even trillions of dollars.99 As the number of connected objects in the IoT
grows, so will the potential for successful intrusions and the costs resulting from those
incidents.
Cybersecurity involves protecting information systems, their components and contents, and the
networks that connect them from intrusions or attacks involving theft, disruption, damage, or
other unauthorized or wrongful actions.100 IoT objects are potentially vulnerable targets for
hackers.101 Economic and other factors may reduce the degree to which such objects are designed
with adequate cybersecurity capabilities built in. IoT devices are small, are often built to be
disposable, and may have limited capacity for software updates to address vulnerabilities that
come to light after deployment.
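One consequence of limited update capacity is that a device must at least verify the authenticity of any update it does accept. The sketch below illustrates the idea with a shared-key HMAC; real devices would more typically use asymmetric signatures (e.g., ECDSA), and the key and firmware image here are hypothetical:

```python
import hashlib
import hmac

# Simplified sketch: a device checks that a firmware image came from the
# vendor before installing it. A shared secret keeps the example
# self-contained; production systems normally use public-key signatures.
VENDOR_KEY = b"demo-shared-secret"  # hypothetical; provisioned at manufacture

def sign_firmware(image: bytes) -> bytes:
    return hmac.new(VENDOR_KEY, image, hashlib.sha256).digest()

def verify_and_install(image: bytes, tag: bytes) -> bool:
    if not hmac.compare_digest(sign_firmware(image), tag):
        return False  # reject tampered or unsigned image
    # ... write image to flash and reboot ...
    return True

image = b"firmware-v2-image-bytes"
tag = sign_firmware(image)
print(verify_and_install(image, tag))            # accepted
print(verify_and_install(image + b"\x00", tag))  # rejected: tampered
```

Even this minimal check requires secure key storage and a working update channel, which is exactly what small, disposable devices often lack.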
95
Lawson, “Why Internet of Things ‘Standards’ Got More Confusing in 2014,” PC World, December 24, 2014,
http://www.pcworld.com/article/2863572/iot-groups-are-like-an-orchestra-tuning-up-the-music-starts-in-2016.html.
96
See, for example, Roger Ordman, “Efficient Over-the-Air Software and Firmware Updates for the Internet of
Things,” Embedded Computing Design, April 10, 2014, http://embedded-computing.com/articles/efficient-software-
firmware-updates-the-internet-things/.
97
Keita Sekine, “Energy-Harvesting Devices Replace Batteries in IoT Sensors,” Core & Code, Q3 2014,
http://core.spansion.com/article/energy-harvesting-devices-replace-batteries-in-iot-sensors/.
98
See, for example, National Security Telecommunications Advisory Committee, “NSTAC Report to the President on
the Internet of Things.”
99
Center for Strategic and International Studies, “Net Losses: Estimating the Global Cost of Cybercrime” (McAfee,
June 2014), http://www.mcafee.com/us/resources/reports/rp-economic-impact-cybercrime2.pdf?cid=BHP028; World
Economic Forum, “Industrial Internet of Things: Unleashing the Potential of Connected Products and Services.”
100
CRS Report R43831, Cybersecurity Issues and Challenges: In Brief, by Eric A. Fischer.
101
Scott R. Peppet, “Regulating the Internet of Things: First Steps toward Managing Discrimination, Privacy, Security
& Consent,” Texas Law Review, Forthcoming, March 1, 2014, http://papers.ssrn.com/abstract=2409074.
The interconnectivity of IoT devices may also provide entry points through which hackers can
access other parts of a network. For example, a hacker might gain access first to a building
thermostat, and subsequently to security cameras or computers connected to the same network,
permitting access to and exfiltration or modification of surveillance footage or other
information.102 Control of a set of smart objects could permit hackers to use their computing
power in malicious networks called botnets to perform various kinds of cyberattacks.103
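A common mitigation for this kind of lateral movement is network segmentation: placing IoT devices on a subnet separate from computers and cameras, so a compromised thermostat has no direct path to them. A minimal sketch using Python's ipaddress module, with illustrative addresses:

```python
import ipaddress

# On a flat network, the thermostat and the security camera share a subnet;
# segmented, they do not, so the router/firewall can block direct access.
def same_subnet(addr_a: str, addr_b: str, prefix: int = 24) -> bool:
    net_a = ipaddress.ip_network(f"{addr_a}/{prefix}", strict=False)
    return ipaddress.ip_address(addr_b) in net_a

# Flat network: everything lives in 10.0.0.0/24.
print(same_subnet("10.0.0.20", "10.0.0.30"))  # True: thermostat reaches camera
# Segmented: IoT devices moved to 10.0.9.0/24.
print(same_subnet("10.0.9.20", "10.0.0.30"))  # False: traffic must cross the router
```

Segmentation does not prevent the initial compromise, but it narrows what an attacker can reach from any single device.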
Access could also be used for destruction, such as by modifying the operation of industrial
control systems, as with the Stuxnet malware that caused centrifuges to self-destruct at Iranian
nuclear plants.104 Among other things, Stuxnet showed that smart objects can be hacked even if
they are not connected to the Internet. The growth of smart weapons and other connected objects
within DOD has led to growing concerns about their vulnerabilities to cyberattack and increasing
attempts to prevent and mitigate such attacks, including improved design of IoT objects.105
Cybersecurity for the IoT may be complicated by factors such as the complexity of networks and
the need to automate many functions that can affect security, such as authentication.
Consequently, new approaches to security may be needed for the IoT.106
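Automated authentication of the sort mentioned above is often built as a challenge-response exchange, so that no human enters credentials and no secret crosses the network. A simplified shared-key sketch follows; real deployments would more likely use certificates or hardware-backed keys, and the key below is hypothetical:

```python
import hashlib
import hmac
import secrets

# Challenge-response sketch: the server sends a random nonce; the device
# proves possession of its key by returning HMAC(key, nonce).
DEVICE_KEY = b"per-device-provisioned-key"  # hypothetical

def respond(key: bytes, challenge: bytes) -> bytes:
    return hmac.new(key, challenge, hashlib.sha256).digest()

def authenticate(device_response: bytes, challenge: bytes) -> bool:
    expected = respond(DEVICE_KEY, challenge)
    return hmac.compare_digest(device_response, expected)

challenge = secrets.token_bytes(16)  # fresh nonce for each attempt
print(authenticate(respond(DEVICE_KEY, challenge), challenge))   # genuine device
print(authenticate(respond(b"wrong-key", challenge), challenge)) # impostor rejected
```

A fresh nonce per attempt prevents replay of an old response, one of the automated safeguards that flat password schemes lack.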
IoT cybersecurity will also likely vary among economic sectors and subsectors, given their
different characteristics and requirements. Each sector will have a role in developing
cybersecurity best practices unique to its needs. The federal government has a role in securing
federal information systems, as well as assisting with security of nonfederal systems, especially
critical infrastructure.107 Cybersecurity legislation considered in the 114th Congress, while not
focusing specifically on the IoT, would address several issues that are potentially relevant to IoT
applications, such as information sharing and notification of data breaches.108
Safety
Given that smart objects can be used both to monitor conditions and to control machinery, the IoT
has broad implications for safety, with respect to both improvements and risks. For example,
102
Government Accountability Office, “Federal Facility Cybersecurity: DHS and GSA Should Address Cyber Risk to
Building and Access Control Systems,” December 12, 2014, http://www.gao.gov/products/GAO-15-6.
103
See, for example, Eduard Kovacs, “‘Spike’ DDoS Toolkit Targets PCs, Servers, IoT Devices: Akamai,” Security
Week, September 25, 2014, http://www.securityweek.com/spike-ddos-toolkit-targets-pcs-servers-iot-devices-akamai.
104
CRS Report R41524, The Stuxnet Computer Worm: Harbinger of an Emerging Warfare Capability, by Paul K.
Kerr, John W. Rollins, and Catherine A. Theohary.
105
Sydney J. Freedberg, Jr., “Cybersecurity Now Key Requirement for All Weapons: DoD Cyber Chief,” Breaking
Defense, January 27, 2015, http://breakingdefense.com/2015/01/cybersecurity-now-key-requirement-for-all-weapons-
dod-cio/; Patrick Tucker, “For Years, the Pentagon Hooked Everything to the Internet. Now It’s a ‘Big, Big Problem,’”
Defense One, September 29, 2015, http://www.defenseone.com/technology/2015/09/years-pentagon-hooked-
everything-internet-now-its-big-big-problem/122402/.
106
Benjamin Jun, “Make Way for the Internet of Things!” (RSA Conference 2014, San Francisco, CA, February 27,
2014), http://www.rsaconference.com/writable/presentations/file_upload/tech-r02-internet-of-things-v2.pdf; Benjamin
Jun, “Endpoints in the New Age: Apps, Mobility, and the Internet of Things” (RSA Conference 2015, San Francisco,
CA, April 21, 2015), https://www.rsaconference.com/writable/presentations/file_upload/eco-t07r-endpoints-in-the-
new-age-apps-mobility-and-the-internet-of-things.pdf.
107
Critical infrastructure was defined by the USA PATRIOT Act as “systems and assets, physical or virtual, so vital to
the United States that the incapacity or destruction of such systems and assets would have a debilitating impact on
security, national economic security, national public health and safety, or any combination of those matters” (5 U.S.C.
§5195c(e)).
108
For more discussion of congressional and executive-branch actions in cybersecurity, see CRS Report R43831,
Cybersecurity Issues and Challenges: In Brief, by Eric A. Fischer, and related reports.
objects embedded in pipelines can monitor both the condition of the equipment and the flow of
contents. Among other benefits, that can help both to expedite shutoffs in the event of leaks and
to prevent them through predictive maintenance.109 Connected vehicles can help reduce vehicle
collisions through crash avoidance technologies and other applications.110 Wireless medical
devices can improve patient safety by permitting remote monitoring and facilitating adjustments
in care.111
However, given the complexities involved in some applications of the IoT, malfunctions might in
some instances result in catastrophic system failures, creating significant safety risks such as
flooding from dams or levees.112 In addition, hackers could cause malfunctions of devices such as
insulin pumps113 or automobiles,114 with similarly serious safety consequences.
Privacy
Cyberattacks may also compromise privacy, resulting in access to and exfiltration of identifying
or other sensitive information about an individual. For example, an intrusion into a wearable
device might permit exfiltration of information about the location, activities, or even the health of
the wearer.
In addition to the question of whether security measures are adequate to prevent such intrusions,
privacy concerns also include questions about the ownership, processing, and use of such data.
With an increasing number of IoT objects being deployed, large amounts of information about
individuals and organizations may be created and stored by both private entities and governments.
With respect to government data collection, the U.S. Supreme Court has been reticent about
making broad pronouncements concerning society’s expectations of privacy under the Fourth
Amendment of the Constitution while new technologies are in flux, as reflected in opinions over
the last five years.115 Congress may also update certain laws, such as the Electronic
Communications Privacy Act of 1986, given the ways that privacy expectations of the public are
evolving in response to IoT and other new technologies.116 IoT applications may also create
109
Adam Lesser, “Internet of Things: The Influence of M2M Data on the Energy Industry” (GigaOm Research, March
4, 2014), http://research.gigaom.com/report/internet-of-things-the-influence-of-m2m-data-on-the-energy-industry/.
110
Cronin and Dopart, “Connected Vehicles—Improving Safety, Mobility, and the Environment.”
111
Couturier et al., “How Can the Internet of Things Help to Overcome Current Healthcare Challenges.”
112
AIG, “The Internet of Things: Evolution or Revolution?,” June 10, 2015, https://www.aig.com/Chartis/internet/US/
en/AIG%20White%20Paper%20-%20IoT%20English%20DIGITAL_tcm3171-677828.pdf.
113
FTC Staff, “Internet of Things: Privacy and Security in a Connected World.”
114
Ian Foster et al., “Fast and Vulnerable: A Story of Telematic Failures,” in Proceedings of the 9th USENIX
Conference on Offensive Technologies (USENIX Association, 2015), 15–15, https://www.usenix.org/system/files/
conference/woot15/woot15-paper-foster.pdf; Andy Greenberg, “Hackers Remotely Kill a Jeep on the Highway—With
Me in It,” accessed October 6, 2015, http://www.wired.com/2015/07/hackers-remotely-kill-jeep-highway/.
115
In the 2010 case City of Ontario v. Quon, the Court sidestepped the question whether individuals have a reasonable
expectation of privacy in their electronic communications by resolving the case on other grounds (City of Ontario v.
Quon, 560 U.S. 746 (2010) [“The judiciary risks error by elaborating too fully on the Fourth Amendment implications
of emerging technology before its role in society has become clear.”]). Similarly, in the 2012 GPS tracking case United
States v. Jones, the majority avoided the question of whether people should expect privacy in their public movements
over a long period of time by instead relying on a centuries-old trespass theory of the Fourth
Amendment (United States v. Jones, 132 S. Ct. 945, 954 [2012]). More recently, in the 2014 case
Riley v. California, the Court held that the government must obtain a warrant before accessing
the data on a cellphone confiscated upon an arrest; however, the ruling did not separately opine
on the level of protection for data stored in the cloud, on which IoT applications will
undoubtedly rely (Riley v. California, 134 S. Ct. 2473, 2495 [2014]).
116
See, CRS Report R44036, Stored Communications Act: Reform of the Electronic Communications Privacy Act
(continued...)
challenges for interpretation of other laws relating to privacy, such as the Health Insurance
Portability and Accountability Act and various state laws, as well as established practices such as
those arising from norms such as the Fair Information Practice Principles.117
Federal Role
As described in the section “What Is the Current Federal Role?”, many federal agencies are
involved in different aspects of the IoT. Some business representatives and others have stressed
the role of effective public/private partnerships in the development of this technology space.118
However, observers have also expressed concerns about the role of government regulations and
policy, as discussed further in sections below, and about the degree and effectiveness of
coordination among the involved federal agencies.119 Concerns of some extend beyond the federal
role to that of state, local, and foreign governments.120
Given the eclectic nature of the IoT, overall coordination of federal efforts may be challenging
with respect to identification of both the goals of coordination and the methods for achieving
them. Nevertheless, several observers have argued in favor of a national strategy for the IoT,121
including in resolutions considered in the 114th Congress (see “What Actions Has Congress
Taken?”).
Some interagency initiatives have been established with respect to specific aspects of the IoT. For
example, in addition to the R&D coordination activities for cyber-physical systems under the
NITRD program,122 a specific framework has been developed for smart cities123 as part of the
overall White House initiative involving several federal agencies, local governments, and the
private sector.124
(...continued)
(ECPA), by Richard M. Thompson II and Jared P. Cole.
117
FTC Staff, “Internet of Things: Privacy and Security in a Connected World”; Thierer, “The Internet of Things and
Wearable Technology.”
118
See, for example, Brookings Institution, “Getting Smarter About Smart Cities”; House Committee on Energy and
Commerce, The Internet of Things: Exploring the Next Technology Frontier, 2015, http://energycommerce.house.gov/
hearing/internet-things-exploring-next-technology-frontier; House Committee on the Judiciary, Subcommittee on
Courts, Intellectual Property, and the Internet, Internet of Things.
119
See, for example, Gary Arlen, “Internet of Things Caucus Readies House Hearings,” Multichannel News, July 8,
2015, http://www.multichannel.com/blog/i-was-saying/internet-things-caucus-readies-house-hearings/392024; Darren
Samuelson, “The Agenda—Internet of Things,” July 2015, http://www.politico.com/agenda/issue/internet-of-things-
july-2015.
120
See, for example, Helen Rebecca Schindler et al., “Europe’s Policy Options for a Dynamic and Trustworthy
Development of the Internet of Things” (RAND Europe, July 26, 2013), http://www.rand.org/content/dam/rand/pubs/
research_reports/RR300/RR356/RAND_RR356.pdf; Thierer, “The Internet of Things and Wearable Technology.”
121
See, for example, Samuelson, “The Agenda—Internet of Things.”
122
See “What Is the Current Federal Role?” above.
123
Subcommittee on Networking and Information Technology Research and Development, Committee on Technology,
“Smart Cities and Connected Communities Framework,” September 11, 2015, https://www.nitrd.gov/sccc/.
124
The White House, “Fact Sheet: Administration Announces New ‘Smart Cities’ Initiative.”
Spectrum Access
Radio frequency (electromagnetic) spectrum is widely regarded as a critical link in IoT
communications, with reliable and affordable access to it required to accommodate the billions of
new IoT devices projected to go online over the next decade.125 New technology for mobile
communications is predicted to allow devices to operate on any available radio frequency and
potentially permit communications technologies and cyber-physical systems to converge
further.126 Concerns have been raised that current spectrum policy may favor consumer-oriented
mobile services and the wireless industry, rather than emerging markets for IoT devices, such as
transportation and manufacturing.127 Congress may therefore be faced with decisions about
whether the current policy needs to be revised.
Net Neutrality
The concept of “net neutrality” includes the two general principles that owners of the networks
that comprise and provide access to the Internet should not control how end users lawfully use
that network, and that they should not be able to discriminate against content provider access to
that network.128 The FCC adopted an order in February 2015 that established regulatory
guidelines to protect the marketplace from potential abuses that could threaten the net neutrality
concept.129 The order bans broadband Internet access providers (both fixed and wireless) from
blocking and throttling lawful content, and it prohibits paid prioritization of affiliated or
proprietary content.130 The order also creates a general conduct standard that Internet service
providers cannot harm consumers or providers of applications, content, and services. These rules
went into effect, with limited exceptions, on June 12, 2015, but have been challenged in the U.S.
Court of Appeals for the D.C. Circuit.131
It remains unclear how the FCC order will affect IoT devices and services. Some observers view
the implementation of FCC regulations as a positive development. They believe that it will ensure
openness and nondiscrimination for service providers, leading to the growth of new services and
consumer demand. Others have expressed concerns that the regulations will stifle investment and
innovation to the detriment of the expansion and growth of Internet deployment and services.
Furthermore, the rules are subject to “reasonable network management,” as defined by the FCC,
and a category of “specialized services,” defined as those that “do not provide access to the
Internet generally,” is exempt from the rules established by the order.132 Depending on how
individual IoT services and devices are categorized and the degree of network management such
125
CRS Report R43256, Spectrum Policy: Provisions in the 2012 Spectrum Act, by Linda K. Moore.
126
CRS Insight IN10191, What Is 5G? Implications for Spectrum and Technology Policy, by Linda K. Moore.
127
CRS Insight IN10221, The Robot Did It: Spectrum Policy and the Internet of Things, by Linda K. Moore.
128
For additional information on the net neutrality issue see CRS Report R40616, Access to Broadband Networks: The
Net Neutrality Debate, by Angele A. Gilroy, and CRS Report R43971, Net Neutrality: Selected Legal Issues Raised by
the FCC’s 2015 Open Internet Order, by Kathleen Ann Ruane.
129
Federal Communications Commission, “Protecting and Promoting the Open Internet; Final Rule,” Federal Register
80, no. 70 (April 13, 2015): 19738–850, http://www.gpo.gov/fdsys/pkg/FR-2015-04-13/pdf/2015-07841.pdf.
130
Paid prioritization occurs when a broadband Internet access provider accepts payment (monetary or otherwise) to
manage its network in a way that benefits particular content, applications, devices, or services.
131
The challenges were consolidated under U.S. Telecom Association v. FCC, D.C. Cir. No. 15-1063, April 14, 2015.
132
The FCC order cited heart monitors and energy consumption sensors as examples of “specialized services.” See
Federal Communications Commission, “Protecting and Promoting the Open Internet; Final Rule” para. 35.
specialized services may need, the order could also affect IoT applications on a case-by-case
basis.
Bills
No bills have been introduced in the last two Congresses relating specifically to the IoT.
However, many bills have been introduced with provisions related to aspects of the IoT such as
connected vehicles, cyber-physical systems, smart cities, and the smart grid. None of those bills
were enacted as of September 2015, although some bills with provisions on applications and
appropriations relating to telehealth and telemedicine were enacted in both the 113th and 114th
Congresses. Several bills in the 114th Congress would address issues that are potentially relevant
to IoT applications, such as information sharing in cybersecurity, privacy, and notification of data
breaches.133
Resolutions
Two similar resolutions on the IoT have been submitted in the 114th Congress, one in the House
(H.Res. 195/Lance, introduced April 13, 2015) and one in the Senate (S.Res. 110/Fischer,
introduced and passed March 24). Both call for
- a U.S. strategy for development of the IoT to improve social well-being while allowing for innovation and protecting against misuse;
- recognition of the importance of a consensus-based approach and the role of businesses in that development;
- federal government commitment to use the IoT; and
- a U.S. commitment to use the IoT for developing new technologies to address challenging societal issues.
The House version also calls for the use of cost-benefit analysis to determine when federal action
is needed to address “discrete harms” in the marketplace. It also refers explicitly to energy
optimization and the need for cybersecurity.
Hearings
Both the House and the Senate have held hearings on the IoT in 2015. In the Senate, the
Committee on Commerce, Science, and Transportation held a hearing on February 11.134 In the
House, one was held by the Energy and Commerce Committee on March 24,135 and another by
the Subcommittee on Courts, Intellectual Property, and the Internet of the Committee on the
133
For more information, see CRS Report R43831, Cybersecurity Issues and Challenges: In Brief, by Eric A. Fischer,
and related reports.
134
Senate Committee on Commerce, Science, and Transportation, The Connected World: Examining the Internet of
Things, 2015, http://www.commerce.senate.gov/public/index.cfm/hearings?ID=d3e33bde-30fd-4899-b30d-
906b47e117ca.
135
House Committee on Energy and Commerce, The Internet of Things: Exploring the Next Technology Frontier.
Judiciary on July 29.136 The hearings featured witnesses from businesses and associations who
discussed the growth, uses, and economic potential of the IoT, as well as some of the issues
described in this report, such as privacy, regulation, security, spectrum management, and
standards.
Caucuses
There are several congressional caucuses that may consider issues associated with the IoT.
Among them are caucuses on cloud computing,137 cybersecurity,138 the Internet,139 and high-
performance buildings. In addition, new caucuses announced in this session included one
expressly on the Internet of Things,140 and one on smart transportation.141
Eric A. Fischer
Senior Specialist in Science and Technology
efischer@crs.loc.gov, 7-7071
Acknowledgments
This report was originally coauthored by Stephanie M. Logan while she was a CRS research assistant.
Stephanie performed most of the research and provided much of the organizational structure and text for
the report. Her insights and other contributions were invaluable.
136
House Committee on the Judiciary, Subcommittee on Courts, Intellectual Property, and the Internet, Internet of
Things.
137
Cloud Computing Caucus Advisory Group, “Home,” 2015, https://www.cloudcomputingcaucus.org/.
138
Congressional Cybersecurity Caucus, “Welcome,” 2015, http://cybercaucus.langevin.house.gov/.
139
Congressional Internet Caucus Advisory Committee, “NetCaucus,” 2015, http://www.netcaucus.org/.
140
The Honorable Suzan DelBene, “U.S. Reps. DelBene and Issa Announce Creation of the Congressional Internet of
Things Caucus” (Press Release, January 13, 2015), https://delbene.house.gov/media-center/press-releases/us-reps-
delbene-and-issa-announce-creation-of-the-congressional-internet.
141
Senator Gary Peters, “Peters, Gardner Announce New Bipartisan Smart Transportation Caucus” (press release, June
10, 2015), http://www.peters.senate.gov/newsroom/press-releases/peters-gardner-announce-new-bipartisan-smart-
transportation-caucus.
The following CRS staff contributed to sections of this report: Angele A. Gilroy to “Net Neutrality,” Linda
K. Moore to “Spectrum Access,” Megan Stubbs to “Agriculture,” and Richard M. Thompson II and Jared
P. Cole to “Privacy.”
🧪 2. Vulnerability Scanning
Nessus – Comprehensive vulnerability scanner.
OpenVAS – Open-source vulnerability scanning.
Nikto – Scans web servers for dangerous files, outdated software, etc.
Qualys – Cloud-based scanner (enterprise-level).
🔍 3. Penetration Testing
Metasploit Framework – Exploitation framework for penetration testing.
Burp Suite – Web application security testing (has free and pro versions).
SQLmap – Automated SQL injection tool.
Hydra – Password brute-forcing tool.
Aircrack-ng – Wi-Fi penetration testing tool.
🔑 5. Password Security
John the Ripper – Password cracking tool.
Hashcat – GPU-based password recovery tool.
CrackStation – Online password hash cracking tool (for legal use only).
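As an aside on how these crackers work: a dictionary attack simply hashes each candidate password and compares the digest against the stolen hash. A minimal Python sketch of the idea (the `md5_hex` and `dictionary_attack` helpers and the "letmein" password are illustrative, not code from any of the tools above):

```python
import hashlib

def md5_hex(word):
    """Return the hex MD5 digest of a candidate password."""
    return hashlib.md5(word.encode("utf-8")).hexdigest()

def dictionary_attack(target_hash, wordlist):
    """Hash each candidate and compare against the stolen hash."""
    for word in wordlist:
        if md5_hex(word) == target_hash:
            return word
    return None

# Simulate a leaked, unsalted MD5 hash of a weak password.
stolen = md5_hex("letmein")
print(dictionary_attack(stolen, ["password", "123456", "letmein"]))  # letmein
```

Tools like John the Ripper and Hashcat apply the same idea at vastly larger scale, adding rule-based mutations and GPU acceleration; salted, deliberately slow hashes (bcrypt, Argon2) are the standard defense.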
OTHERS
🧪 3. Vulnerability Assessment
Scan for known weaknesses before attackers do.
§ Georgian military
§ Armenian military
§ Kavkaz Center
§ NATO
§ OSCE
§ UK
§ Turkey
§ China
§ Japan
§ South Korea
§ Compilation times
– Over 96% of the malware samples were compiled between
Monday and Friday
– More than 89% were compiled between 8AM and 6PM in the
UTC+3 / UTC+4 time zone, which parallels the working hours in
Moscow and St. Petersburg
– These samples had compile dates ranging from mid-2007 to
September 2014
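The timestamp analysis described above can be sketched in outline: convert each compile time to UTC+3 and count the share falling in weekday working hours. The sample timestamps below are made up for illustration (the real malware samples are not reproduced here):

```python
from datetime import datetime, timezone, timedelta

MSK = timezone(timedelta(hours=3))  # UTC+3 (Moscow / St. Petersburg)

def working_hours_share(timestamps):
    """Fraction of timestamps falling Monday-Friday, 08:00-18:00 in UTC+3."""
    hits = 0
    for ts in timestamps:
        local = ts.astimezone(MSK)
        if local.weekday() < 5 and 8 <= local.hour < 18:
            hits += 1
    return hits / len(timestamps)

# Illustrative compile timestamps, not real APT28 samples.
samples = [
    datetime(2014, 9, 2, 7, 30, tzinfo=timezone.utc),  # Tuesday, 10:30 UTC+3
    datetime(2014, 9, 6, 12, 0, tzinfo=timezone.utc),  # Saturday
]
print(working_hours_share(samples))  # 0.5
```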
Multi-vector / multi-flow based exploit
EVILTOSS – obfuscated/encrypted to evade detection
CHOPSTICK – obfuscated/encrypted to evade detection, dedicated
§ SOURFACE
– This downloader is typically called Sofacy within the cyber
security community.
– However, because we have observed the name “Sofacy” used to
refer to APT28 malware generally (to include the SOURFACE
dropper, EVILTOSS, CHOPSTICK, and the credential harvester
OLDBAIT), we are using the name SOURFACE to precisely refer
to a specific downloader.
– This downloader obtains a second-stage backdoor from a C2
server.
§ OLDBAIT
– It is a credential harvester
– Installs itself in %ALLUSERPROFILE%\Application Data\Microsoft\MediaPlayer\updatewindws.exe
– Credentials for the following applications are collected: Internet
Explorer, Mozilla Firefox, Eudora, The Bat! (an email client),
Becky! (an email client)
– Both email and HTTP can be used to send out the collected
credentials
§ FireEye Blog
https://www.fireeye.com/blog/threat-research/2014/10/apt28-a-window-
into-russias-cyber-espionage-operations.html
🧪 Example:
Bit 1 = +5V
Bit 0 = 0V
This is called binary voltage signaling.
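A minimal sketch of this encoding, with illustrative `encode`/`decode` helpers and a 2.5 V decision threshold chosen for the example:

```python
# Illustrative binary voltage signaling: bit 1 -> +5 V, bit 0 -> 0 V.
HIGH, LOW = 5.0, 0.0

def encode(bits):
    """Map each bit to its line voltage."""
    return [HIGH if b else LOW for b in bits]

def decode(voltages, threshold=2.5):
    """Recover bits by comparing each sample to a decision threshold."""
    return [1 if v > threshold else 0 for v in voltages]

signal = encode([1, 0, 1, 1])
print(signal)          # [5.0, 0.0, 5.0, 5.0]
print(decode(signal))  # [1, 0, 1, 1]
```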
🔌 Coaxial Cable
Copper core with shielding.
Higher bandwidth than twisted pair.
Used in older broadband networks.
💡 Fiber Optics
Uses light pulses for transmission.
Immune to electromagnetic interference.
Supports long distances and very high bandwidth (up to Tbps).
📶 Wireless Media
Uses radio frequencies, microwaves, or infrared.
Signal is affected by interference, attenuation, and obstacles.
3. 🔄 Transmission Modes
Defines directionality of data flow:
Simplex: One direction only (e.g., keyboard to computer).
Half-Duplex: Both directions, but only one at a time (e.g., walkie-talkie).
Full-Duplex: Both directions simultaneously (e.g., telephone call).
5. 🕰️ Synchronization
Ensures sender and receiver agree on bit timing (when a bit starts/ends).
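A toy Python model of why agreed bit timing matters: the sender holds each voltage level for one full bit period, and the receiver samples once per period, ideally mid-bit. The `SAMPLES_PER_BIT` constant and helper names are illustrative:

```python
SAMPLES_PER_BIT = 4  # both sides must agree on this rate

def transmit(bits):
    """Hold each bit's level for a full bit period (oversampled waveform)."""
    wave = []
    for b in bits:
        wave.extend([b] * SAMPLES_PER_BIT)
    return wave

def receive(wave):
    """Sample once in the middle of every bit period."""
    mid = SAMPLES_PER_BIT // 2
    return [wave[i + mid] for i in range(0, len(wave), SAMPLES_PER_BIT)]

print(receive(transmit([1, 0, 1])))  # [1, 0, 1]
```

If the receiver's notion of the bit period drifts from the sender's, it samples the wrong positions and recovers garbage, which is why clock synchronization (or self-clocking encodings) is needed.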
8. 🧷 Line Configuration
Point-to-Point: Direct link between two devices.
Multipoint: Shared link between multiple devices.
🧠 RECAP
February 2018
Publicity surrounding the threat of cyber-attacks continues to grow, yet immature classification
methods for these events prevent technical staff, organizational leaders, and policy makers from
engaging in meaningful and nuanced conversations about the risk to their organizations or
critical infrastructure. This paper provides a taxonomy of cyber events that is used to analyze
2,431 publicized cyber events from 2014-2016 by industrial sector. Industrial sectors vary
in the scale of events they are subjected to, the distribution between exploitive and disruptive
event types, and the method by which data is stolen or organizational operations are disrupted.
The number, distribution, and mix of cyber event types highlight significant differences by
sector, demonstrating that strategies may vary based on deeper understandings of the threat
environment faced across industries.
As the private and public sectors grapple with the problem of cyber events, disagreement
remains regarding what can and should be done. Technical solutions, organizational resiliency,
employee education, and improvements in system controls are among many options to reduce
risk. Yet, they are rarely evaluated as part of a strategic approach for addressing diverse threats,
which vary by industry.
Confusion about threats and response options originates in part from imprecision in how we
categorize and measure the range of disruptive cyber events. Because it obscures the distinctions
between specific forms of attack, the effects they produce on targeted networks, the financial
strain they place on targeted organizations, and their broader effects on society, this confusion
leads to the misallocation of resources.
This paper provides a new taxonomy that expands on earlier work by the author and colleagues
to classify cyber incidents by the range of disruptive and exploitative effects produced. It applies
the taxonomy in a sector-based analysis of 2,431 publicized cyber events from 2014-2016. It
finds some striking differences across industries in the scale, method of attack, and distribution
of effect. Government and Professional Services face the largest number of attacks. Governments
experience a mix of disruptive and exploitive events, whereas retail and hotel operators primarily
face exploitive attacks. These findings highlight the need for deeper analysis by sector to assess
the risk for specific organizations and critical infrastructure. They also suggest the importance of
tailoring risk mitigation strategies to fit the different threat environments in various sectors.
Cyber Taxonomies
A confusing array of cyber threat classification systems has been proposed over the past two
decades. Some are based on different phases of the hacking process, while others focus on
specific targets. For example, de Bruijne et al. (2017) have created a classification of actors and
methods, whereas Gruschka (2010) develops a taxonomy of attacks against cloud systems. Other
classification approaches focus on specific techniques, such as Distributed Denial of Service or
DDoS attacks (Mirkovic 2004); specific targets, such as browsers (Gaur 2015); or particular IT
capabilities, such as industrial control systems (Zhu 2017) and smart grids (Hu 2014).
Few taxonomies in the information security literature seek to classify events by impact on the
target, the key question for risk assessment. Only two, Howard (1998) and Kjaerland (2005),
directly propose categories for the effect on the victim. Others, including Hansman (2005), focus on
Howard’s widely cited taxonomy includes classification methods for attackers, objectives, tools,
access, and impact. He divides the impact of cyber activity, described as the “unauthorized
results,” into five categories: Corruption of Data, Disclosure of Information, Denial of Service,
Increased Access, and Theft of Resources.
Kjaerland (2005) classifies cyber effects differently, assigning each to one of four categories:
Disrupt, Distort, Destruct, and Disclosure. He develops these categories in concert with other
dimensions of analysis to evaluate the linkage between sector, actor, method, and target.
Both of these effect-based taxonomies fail to meet basic standards of a well-defined taxonomy
(Ranganathan 1957), including:
Exclusiveness - No two categories should overlap or have the same scope and boundaries.
Ascertainability - Each category should be definitively and immediately understandable from its
name.
Consistency - The rules for making the selection should be consistently adhered to.
Affinity and Context - As you move from the top of the hierarchical classification to the bottom,
the specification of the classification should increase.
Currency - Names of the categories in the classification should reflect the language in the
domain for which it is created.
Differentiation - When differentiating a category, it should give rise to at least two subcategories.
Howard’s taxonomy fails the exhaustiveness requirement because some important and
increasingly common types of cyber events do not fit any of its categories. Examples of these
omissions include attacks on Supervisory Control and Data Acquisition (SCADA) systems, data
deletion resulting from the use of wiper viruses, or social media account hijacking and website
defacement.
The Howard taxonomy also fails the test of exclusivity by including two overlapping effects
categories: Increased Access and Theft of Resources. Most hackers seek greater access and
misuse system resources as a means to an end, not as the final result. Their ultimate goal is not
just access, but the illicit acquisition of information or the disruption of organizational services.
For example, if a hacker wanted to illicitly gain and disseminate information about a company,
they would first obtain unauthorized use of a specific computer or network. Using Howard’s
Kjaerland’s classification system also fails important tests for a well-designed taxonomy. By
allowing the same event to be assigned to multiple categories, it violates the criteria of
exclusivity and consistency. For example, Kjaerland’s definition of Destruct notes that “Destruct
is seen as the most invasive and malicious and may include Distort or Disrupt.”
Kjaerland also fails the test of context by mixing impact classification (e.g. destruction of
information) with specific tactics or tools. For example, in his definition of Disrupt he classifies
use of a Trojan as a Disrupt event. However, a Trojan is a technique for hiding a malicious
program inside another. That technique can cause many different types of effects depending on
whether it is used to steal or destroy information.
A New Taxonomy
This paper extends previous work (Harry 2015) (Harry & Gallagher 2017) to offer a new
taxonomy for classifying the primary effects on a target of any given cyber event.
I define a cyber event as the result of any single unauthorized effort, or the culmination of many
such technical actions, that engineers, through use of computer technology and networks, a
desired primary effect on a target. For example, if a hacker used a spearphish email to gain
access and then laterally moved through the network to delete data on five machines, that would
count as a single event type whose primary effect resulted in the destruction of data. This
encapsulation of hacker tactics and tradecraft into specification of the primary effect of those
actions is what I define as a cyber event.
In the risk assessment framework developed at the Center for International and Security Studies
at Maryland (CISSM), primary effects are the direct impacts to the target organization’s data or
IT-enabled operations. Cyber events can also cause secondary effects to the organization, such as
the financial costs of replacing equipment damaged in an attack, a drop in the organization’s
stock price due to bad publicity from the attack, or a loss of confidence in the organization’s
ability to safeguard confidential data. And, they can cause second order effects on individuals or
organizations who rely on the targeted organization for some type of goods or services. These
could include effects on the physical environment, the supply chain, or even distortions an attack
might have on an individual’s attitudes, preferences, or opinion deriving from the release of
salacious information. While these are important areas to consider, they are outside of the scope
of this paper.
Any given cyber event can have one of two types of primary objectives: the disruption to the
functions of the target organization, or the illicit acquisition of information. An attacker might
disrupt an organization’s ability to make products, deliver services, carry out internal functions,
or communicate with the outside world in a number of ways. Alternatively, hackers may seek to
steal credit card user accounts, intellectual property, or sensitive internal communications to get
financial or other benefits without disrupting the organization’s operations.
Disruptive Events
A malicious actor may utilize multiple tactics that have wildly different disruptive effects
depending on how an organization uses information technology to carry out its core functions.
For example, an actor could delete data from one or more corporate networks, deploy
ransomware, destroy physical equipment used to produce goods by manipulating Supervisory
Control and Data Acquisition (SCADA) systems, prevent customers from reaching an
organization’s website, or deny access to a social media account.
Disruptive effects can be classified into five sub-categories depending on the part of an
organization’s IT infrastructure that is most seriously impacted, regardless of what tactic or
techniques were used to accomplish that result. They are: Message Manipulation, External
Denial of Service, Internal Denial of Service, Data Attack, and Physical Attack.
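The five sub-categories can be written down as a small classification scheme. A minimal sketch, assuming a simple enum rendering of the taxonomy; the `DisruptiveEffect` enum and the example mappings are illustrative and draw on incidents discussed in this section, not code from the paper:

```python
from enum import Enum

class DisruptiveEffect(Enum):
    """The five disruptive sub-categories of the CISSM taxonomy."""
    MESSAGE_MANIPULATION = "Message Manipulation"
    EXTERNAL_DENIAL_OF_SERVICE = "External Denial of Service"
    INTERNAL_DENIAL_OF_SERVICE = "Internal Denial of Service"
    DATA_ATTACK = "Data Attack"
    PHYSICAL_ATTACK = "Physical Attack"

# Illustrative mapping from a reported primary effect to its category;
# classification keys on the impacted infrastructure, not the tactic used.
EXAMPLES = {
    "website defacement": DisruptiveEffect.MESSAGE_MANIPULATION,
    "ransomware encrypting file shares": DisruptiveEffect.DATA_ATTACK,
    "PLC manipulation tripping breakers": DisruptiveEffect.PHYSICAL_ATTACK,
}

print(EXAMPLES["website defacement"].value)  # Message Manipulation
```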
Message Manipulation. Any cyber event that interferes with a victim’s ability to accurately
present or communicate its “message” to its user or customer base is a Message Manipulation
attack. These include the hijacking of social media accounts, such as Facebook or Twitter, or
defacing a company website by replacing the legitimate site with pages supporting a political
cause. For example, in 2015, ISIS-affiliated hackers gained access to the YouTube and Twitter
accounts for US CENTCOM. The hackers changed the password, posted threatening messages to
U.S. Service members, and replaced graphics with ISIS imagery (Lamothe 2015). Similarly, in
2016, the website for the International Weightlifting Federation (IWF) was defaced after a
controversial decision to disqualify an Iranian competitor (Cimpanu 2016). Both events used
different tactics, but the primary effect on the targeted organization’s ability to interact with its
audience was the same.
Internal Denial of Service. When a cyber event executed from inside a victim’s network
degrades or denies access to other internal systems, it is an Internal Denial of Service attack. For
instance, an attacker who had gained remote access to a router inside an organization’s network
could reset a core router to factory settings so that devices inside the network could no longer
communicate with one another. The anti-DDoS vendor Staminus apparently experienced such an
internal denial of service attack in 2016. It issued a public statement that “a rare event cascaded
across multiple routers in a system-wide event, making our backbone unavailable.” (Reza 2016).
An attacker using malware installed on a file server to disrupt data sent and received between
itself and a user workstation would achieve a similar effect.
Data Attack. Any cyber event that manipulates, destroys, or encrypts data in a victim’s network
is categorized as a Data Attack. Common techniques include the use of wiper viruses and
ransomware. Using stolen administrative credentials to manipulate data and violate its integrity,
such as changing grades in a university registrar’s database would also fit this category. For
example, in 2017 the mass deployment of the NotPetya ransomware resulted in thousands of data
attack cyber events against individuals as well as small, medium, and large businesses, with
one case costing the shipping firm Maersk over $200 million (Matthews 2017).
Physical Attack. A cyber event that manipulates, degrades, or destroys physical systems is
classified as a Physical Attack. Current techniques used to achieve this type of effect include
manipulating Programmable Logic Controllers (PLCs) to open or close electrical breakers, or
using stolen user passwords to access a human-machine interface and change settings to overheat
a blast furnace, causing damage to physical equipment. For example, in the December 2015
cyber-attack on a Ukrainian utility, a malicious actor accessed and manipulated the control interface to
trip several breakers in power substations. This de-energized a portion of the electrical grid, and
tens of thousands of customers lost power for an extended period of time (Lee et al. 2016).
Exploitive Events
Some cyber events are designed to steal information rather than to disrupt operations. Hackers
may be seeking customer data, intellectual property, classified national security information, or
sensitive details about the organization itself. While the tactics or techniques used by malicious
actors may change regularly, the location from which they get that information does not. I define
five categories of an exploitive event below: Exploitation of Sensors, Exploitation of End Hosts,
Exploitation of Sensors. A cyber event that results in the loss of data from a peripheral device
like a credit card reader, automobile, smart lightbulb, or a network-connected thermostat is
categorized as an Exploitation of Sensors event. The attack on Eddie Bauer stores, in which
hackers gained access to hundreds of Point of Sale machines and systematically stole credit card
numbers from thousands of customers, fits this category (Krebs 2016). Other examples include illicit
acquisition of technical, customer, personal, or organizational data from CCTV cameras, smart
TVs, or baby monitors.
Exploitation of End Hosts. Hackers often are interested in the data stored on users’ desktop
computers, laptops, or mobile devices. When data is stolen through illicit access to devices used
directly by employees of an organization or by private individuals it is categorized as an
Exploitation of End Host cyber event. Tactics used in this type of attack include sending a
malicious link for a user to click or leveraging compromised user credentials to log in to an
account.
Exploitation of Data in Transit. Hackers who acquire data as it is being transmitted between
devices cause Exploitation of Data in Transit events. Examples of this type of event include the
acquisition of unencrypted data as it is sent from a PoS device to a database or moved from an
end-user device through an unsecured wireless hotspot at a local coffee shop.
The best way to assess how well this classification system meets the criteria for a well-defined
taxonomy is to see whether it can be easily and unambiguously used to categorize all the events
in an extensive data set.
Unfortunately, there are no public datasets of cyber attacks that include a variety of cyber events
with a range of both exploitive and disruptive effects. Most public data repositories focus on
some types of events to the exclusion of others. The Privacy Rights Clearinghouse, for example,
has a dataset focused on domestic exploitive attacks, while Zone-H.org has a dataset
focused on website defacement attacks (a subset of Message Manipulation). Other datasets are
on privately maintained blogs and webpages. Some do not use a repeatable process to classify or
categorize by sector thereby limiting the range of analysis that can be applied. Others are
compiled from proprietary data or are only available for a steep fee.
To create a dataset that had the information needed to test the CISSM taxonomy, the author used
systematic web searches to identify cyber events that could be characterized by their effects.
Initial searches for generalized references to cyber attacks yielded 3,355 possible events that
were referenced by blogs, security vendor portals, or other English-language news sources from
January 2014 through December 2016.
Of the initial 3,355 candidate cyber events initially discovered, 2,431 were included in the
dataset (72%). Media reports about 909 of the candidate events were broad discussions of
malware campaigns or generalized discussions about threat actor plans and tactics. These were
excluded because they did not provide information on the primary effect to a specific victim.
Media reports about an additional 15 events specifically spoke to the tactics used by the threat
actor independent of the effect to the primary victim, so they were also discarded. For example,
one source discussed the use of compromised Amazon Web services credentials to access a
system but did not talk about what types of actions took place once in the target network.
In complex cases where the victim suffered multiple effects (e.g. website defacement and
DDoS), the dataset counts each effect as a separate, but overlapping, event registered to the
victim. Cyber events were coded to include date, event type, organization type (using the North
American Industry Classification System, NAICS), a description of the event, and a link to the
source.
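The coding scheme described above can be sketched as a small record type. This is a hypothetical layout (field names are illustrative); it also shows how a complex incident with multiple effects is registered as separate, overlapping events for the same victim:

```python
from dataclasses import dataclass

@dataclass
class CyberEvent:
    """One coded cyber event, following the fields described in the text."""
    date: str          # date of the event, e.g. "2016-08-12"
    event_type: str    # one of the ten effect-based sub-categories
    naics_code: str    # organization type, e.g. "92" (Public Administration)
    description: str   # short description of the event
    source_url: str    # link to the news source

# A victim that suffered both a defacement and a DDoS is coded as two
# separate, overlapping events sharing the same date and source:
defacement = CyberEvent("2016-08-12", "Message Manipulation", "92",
                        "Website defaced", "https://example.org/report")
ddos = CyberEvent("2016-08-12", "External Denial of Service", "92",
                  "Site knocked offline", "https://example.org/report")
```

Coding each effect separately keeps the categories mutually exclusive while still capturing multi-effect incidents.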
This dataset is not an exhaustive accounting of all cyber events during this period. It only
includes events for which there was a direct news source that was verifiable and that provided
some insight into the methods of the attack. The true population of malign cyber activity is
unknown because some significant events are kept secret and many other cyber incidents are too
trivial to warrant media attention. Nevertheless, this dataset includes a large enough number of
events for it to be useful for testing the taxonomy and making rough generalizations about
relative frequencies of different types of events in different sectors.
As discussed earlier, a well-designed taxonomy should, among other things, account for all the
items to be classified, clearly differentiate among categories, and ensure that each item has a
unique classification.
Each of the 2,431 events in the dataset could be coded as either Exploitive or Disruptive and
assigned to one of ten effect-based sub-categories in the CISSM taxonomy. This fulfills the
exhaustiveness requirement. Treating complex attacks in which multiple effects were achieved
by the hacker as a set of separate but overlapping events made it possible to apply the taxonomy
in a consistent manner, to differentiate between categories of effect, and to maintain clear
differentiation between the categorized effects. This analysis did not assess the taxonomy’s
currency, ascertainability, or affinity, because these standards should be judged by individual
users rather than the creator of the taxonomy.
Many cyber classification systems run into the same three major problems: Their inability to
distinguish between tactics and effects; their difficulty remaining relevant as threat actors change
and hacking techniques evolve; and their applicability to some types of IT systems, but not
others. The CISSM taxonomy disentangles stable categories of effects from the rapidly
advancing tactics employed by an ever-changing set of state and non-state hackers in a way that
can be applied to all IT systems in use today or envisioned for the future.
Categorizing cyber events according to their effects rather than treating them as an
indistinguishable, but ever increasing, mass of “cyberattacks” yields a number of useful insights.
Of the 2,431 cyber events during the three-year period reviewed, over 70 percent (1,700) were
exploitive, whereas 30 percent (725) were disruptive. This ratio appears to be relatively stable when
examining events on a yearly basis, too. Of the 633 events recorded in 2014, 67 percent (423)
were exploitive, and 33 percent (210) were disruptive. Of the 843 cyber events in 2015, 67
percent (563) were exploitive and 33 percent (280) were disruptive. And of the 955 events
recorded in 2016, 75 percent (714) were exploitive events, compared with 25 percent (241) that
were disruptive.
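The yearly shares quoted above can be reproduced directly from the raw counts given in the text:

```python
# (exploitive, disruptive) counts per year, as reported in the text
yearly = {2014: (423, 210), 2015: (563, 280), 2016: (714, 241)}

for year, (expl, disr) in yearly.items():
    total = expl + disr
    print(year, total,
          f"{100 * expl / total:.0f}% exploitive, {100 * disr / total:.0f}% disruptive")
# 2014 633 67% exploitive, 33% disruptive
# 2015 843 67% exploitive, 33% disruptive
# 2016 955 75% exploitive, 25% disruptive
```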
Of the 1,700 exploitative events, the two most common sub-categories are Exploitation of
Application Server events and Exploitation of End Host events. Ninety percent of all exploitive
events in the dataset fall into one of these two categories. This reflects the current popularity of
SQL injection attacks against web applications, and the heavy use of spearphishing campaigns
against end users.
A much smaller percentage of exploitative attacks fall into the other three categories, most likely
because these types of events often require internal access, are inherently more difficult to pull
off, are not as well monitored, or are not as well publicized. Exploitation of Sensor events
represent only 5 percent of the exploitive events sample, probably because the value of data from
many of the devices in this category, like smart thermostats and baby monitors, might not be as
large as records from other sources. Whereas a ready market exists on the Dark Web for
customer data stolen from POS devices, most types of sensor data will not be of broad interest.
The 725 disruptive cyber events in the dataset follow a similar pattern with most activity falling
into categories that are generally less problematic. Ninety-six percent of all disruptive events are
either Message Manipulation (60 percent, 433) or External Denial of Service (36 percent, 263)
events. These events reflect efforts by malign actors using less sophisticated techniques: defacing
websites that are vulnerable to external access and manipulation, exploiting weak passwords on
social media accounts, or directing high levels of DDoS activity against identified targets.
The remaining 4 percent (29 events) are split between Internal Denial of Service (2 percent, 11
events), Data Attack (2 percent, 14 events), and Physical Attack (1 percent, 4 events). These types
of events involved internal networks, so they required more sophisticated access techniques or
malware to engineer the intended disruptive effects.
In Figure 2, the level of cyber event activity in different sectors is ranked into three tiers—high,
medium, and low—to identify which sectors are currently most prone to the types of
cyberattacks that make it into the public record. Sectors that experience more than 15 percent of
all cyber events in our dataset fit into the highest tier. Government services and professional
services fall into this category; together, they account for 38 percent of all events recorded.
Medium-activity sectors include those that see at least 3.8 percent, but less than 15 percent, of
the events in the full dataset. Sectors falling into this tier include information services, education,
healthcare, finance, retail, entertainment, and accommodation services. The nine sectors in this
category experienced approximately 56.7 percent of the total number of cyber events, suggesting
that a larger breadth of industries is affected by significant numbers of cyber events.
The lowest activity tier includes sectors that had fewer than 3.8 percent of the total events. This
tier includes traditional industries that are less dependent on information technology than other
sectors of the modern economy, such as agriculture, mining, real estate, and construction. Two
sectors considered critical infrastructure—transportation and utilities—also fell into this tier.
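The three-tier binning described above amounts to two percentage thresholds. A minimal sketch, using the cutoffs from the text (more than 15 percent for high, at least 3.8 percent for medium, otherwise low); the function name is illustrative:

```python
def activity_tier(share_pct: float) -> str:
    """Bin a sector by its share of all cyber events in the dataset."""
    if share_pct > 15.0:      # more than 15 percent of all events
        return "high"
    if share_pct >= 3.8:      # at least 3.8, but not more than 15 percent
        return "medium"
    return "low"              # fewer than 3.8 percent
```

For example, a sector with 19 percent of all events lands in the high tier, one with 4.8 percent in the medium tier, and one with 1 percent in the low tier.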
In addition to the frequency of event activity, the nature of those effects is also an important
factor in assessing risk to specific sectors. In Figure 3, the percentage of total cyber events
characterized as exploitive is plotted versus the percentage that are disruptive in nature. Only the
ten sectors with the highest frequencies of cyber events are represented in the figure, as many of
the low tier sectors have too few observations to draw meaningful conclusions.
Figure 3: Exploitive vs Disruptive as a Share of Total Events for Top 10 Industry Sectors
The only sectoral category where the relative frequency of exploitative and disruptive events is
roughly the same as in the entire data set (70 percent Exploitive, 30 percent Disruptive) is the
“other” category. The relative frequency within most sectors is significantly different from the
average distribution. This highlights the importance of assessing risks on a per industry basis
instead of applying general guidance about what types of cyber events are most common.
Lastly, the categories of cyber events are also found to vary between sectors. Table 1 highlights
all cyber events, by share, drawing out some interesting differences. For example, while
Accommodation and Food Services represent only 4.8 percent of all cyber events in the dataset,
that sector accounts for over 36 percent of all Exploitation of Sensor events, well above the
average rate of 3.8 percent. This observation draws attention to the heavy targeting by hackers of
PoS devices used by fast food restaurants and hotels. The same sector is under-represented for
Message Manipulation events. Only 3.4 percent of the events it experienced fell in this category,
compared to the average of 17.9 percent for all sectors.
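The over- and under-representation comparison above can be sketched as a ratio between a sector's share of one event category and its share of all events; a ratio well above 1 signals heavy targeting. This is a hypothetical helper (not from the paper), illustrated with the Accommodation and Food Services figures quoted in the text:

```python
def representation_ratio(category_share_pct: float, overall_share_pct: float) -> float:
    """How over- (>1) or under- (<1) represented a sector is in one event category."""
    return category_share_pct / overall_share_pct

# Accommodation and Food Services: 36% of Exploitation of Sensor events,
# but only 4.8% of all events in the dataset.
print(round(representation_ratio(36.0, 4.8), 1))  # 7.5
```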
Differences between sectors in the frequencies of different types of cyber events likely reflect
differences in attacker motivations, vulnerabilities, and benefits that can be obtained through
different types of exploitation of data or disruption of key organizational services. For example,
Government Services suffers more Message Manipulation and External Denial of Service event
types, whereas it does not see many Application Server events. A review of specific incidents in
the dataset reveals a large number of attacks against websites aimed at promoting a political
message. These attacks often exploit misconfigurations and can be automated, producing larger
numbers of events, whereas exploitation of application servers may occur less often because the
information sought requires greater effort by the hacker to obtain.
Conclusion
Having an easy-to-use taxonomy that provides an exclusive, exhaustive, and consistent way to
differentiate the primary effects of cyber activity will help organizational leaders and policy
makers have more sophisticated discussions about the different types of threats they face, and the
appropriate risk mitigation strategies. The taxonomy presented in this paper and the analysis of
three years of publicized cyber event data highlights variance in scale, effects, and method.
Differences in the types of disruptive or exploitive attacks directly inform organizational leaders
on both the range as well as concentration of effects they might face. By disentangling tactics
from effect this classification provides a first step in creating a framework by which
organizational leaders can categorize and assess the most consequential forms of cyber attack
they might face. Additional work to measure the impact of specific attacks would allow
organizations and governments to adequately plan for the types of threats they are most likely to
face.
Charles Harry is a senior leader, practitioner, and researcher with over 20 years of experience in
intelligence and cyber operations. Dr. Harry is the Director of Operations at the Maryland Global
Initiative in Cybersecurity (MaGIC), an Associate Research Professor in the School of Public
Policy, and a Senior Research Associate at CISSM.
Nancy Gallagher is the CISSM Director and a Research Professor at the School of Public Policy.
Cimpanu C. (2016) “Iranian Hacker Defaces IWF Website Following Controversial Rio
Olympics Decision” Softpedia News http://news.softpedia.com/news/iranian-hackers-deface-
iwf-website-following-controversial-rio-olympics-decision-507436.shtml
Choo, K. (2011). “The cyber threat landscape: Challenges and future research directions”
Computers & Security, 30(8), 719-731. doi: http://dx.doi.org/10.1016/j.cose.2011.08.004
de Bruijne, M., van Eeten M., Ganan, C., Pieters, W. (2017). “Towards a New Cyber Threat
Actor Typology: A Hybrid Method for the NCSC Cyber Security Assessment” Delft University
of Technology https://www.wodc.nl/binaries/2740_Volledige_Tekst_tcm28-273243.pdf
Gruschka, N., & Jensen, M. (2010). “Attack Surfaces: A Taxonomy for Attacks on Cloud
Services” Paper presented at the IEEE CLOUD
Hansman, S., Hunt R., (2005) “A taxonomy of network and computer attacks”. Computers and
Security, Vol. 24, Issue 1, pp. 31-43
Harry, C. (2015) “A Framework for Characterizing Disruptive Cyber Activity and Assessing its
Impact”, Working Paper, Center for International and Security Studies at Maryland (CISSM),
University of Maryland
Harry C. & Gallagher N. (2017) “Categorizing and Assessing Severity of Disruptive Cyber
Events” Policy Brief, Center for International and Security Studies at Maryland (CISSM),
University of Maryland
Howard, J. and Longstaff, T. (1998) “A Common Language for Computer Security Incidents,”
Technical Report, Sandia National Laboratories
Jiankun H., Hemanshu R., and Song G. (2014) “Taxonomy of Attacks for Agent-Based Smart
Grids” IEEE Transactions on Parallel and Distributed Systems, Vol 25, No 7
Kjaerland, M., (2005) “A taxonomy and comparison of computer security incidents from the
commercial and government sectors”. Computers and Security, Vol. 25, pp. 522–538.
Krebs, B. (2016) “Malware Infected All Eddie Bauer Stores in US and Canada”, Krebs on
Security, https://krebsonsecurity.com/2016/08/malware-infected-all-eddie-bauer-stores-in-u-s-
canada/
Lamothe, D. (2015) “U.S. military social media accounts apparently hacked by Islamic State
sympathizers”, Washington Post, http://www.washingtonpost.com/news/checkpoint/wp/2015/01/12/centcom-
twitter-account-apparently-hacked-by-islamic-statesympathizers/
Lee, R., Assante, M., Conway, T. (2016) “Analysis of the Cyber Attack on the Ukrainian Power
Grid”, SANS Institute, https://ics.sans.org/media/E-ISAC_SANS_Ukraine_DUC_5.pdf
Matthews, L. (2017) “NotPetya Ransomware Attack Cost Shipping Giant Maersk Over $200
Million”, Forbes https://www.forbes.com/sites/leemathews/2017/08/16/notpetya-ransomware-
attack-cost-shipping-giant-maersk-over-200-million/#40970b504f9a
Mirkovic, J., and Reiher, P. (2004) “A taxonomy of DDoS attack and DDoS defense
mechanisms” SIGCOMM Comput. Commun. Rev., Vol. 34(2), pp. 39-53
Reza, Ali (2016) “Anti-DDoS firm Staminus hacked, private data posted online”, Hack Read,
https://www.hackread.com/anti-ddos-firm-staminus-hacked-private-data-posted-online/
Saini, A. Gaur M.S, Laxmi V.S (2015) “A Taxonomy of Browser Attacks”, Handbook of
Research on Digital Crime, Cyberspace Security, and Information Assurance 2015 p. 291-313
Simmons, C., Ellis, C., Shiva, S., Dasgupta, D., & Wu, Q (2009). “AVOIDIT: A cyber attack
taxonomy". University of Memphis.
Woolf, N (2016) “DDoS attack that disrupted internet was the largest of its kind in history,
experts say.” Guardian https://www.theguardian.com/technology/2016/oct/26/ddos-attack-dyn-
mirai-botnet
Zetter, K. (2016) “Inside the Cunning Unprecedented Hack of Ukraine’s Power Grid”, Wired,
https://www.wired.com/2016/03/inside-cunning-unprecedented-hack-ukraines-power-grid/