Chapter 4 Cybersecurity
Let’s take a more detailed look at computer networking and securing the network. In today’s world, the internet connects nearly everyone and everything, and this is accomplished through networking. While most see computer networking as a positive, criminals routinely use the internet, and the networking protocols themselves, as weapons and tools to exploit vulnerabilities, and for this reason we must do our best to secure the network. We will review the basic components of a network and the threats and attacks against it, and learn how to protect networks from attackers. Network security itself can be a specialty career within cybersecurity; however, all information security professionals need to understand how networks operate and are exploited in order to better secure them.
Learning Objectives
Chapter at a Glance
While working through Chapter 4, Network Security, make sure to:
Module Objectives
Manny: One of the biggest issues in cybersecurity is that computers are all linked together, sometimes by physical networks within a building, and almost always via the Internet, so it's easy for viruses and other threats to move rapidly through networks.
Tasha: That's right, and cyber threats and attacks are getting more sophisticated all the time. This aspect of cybersecurity is always evolving. Let's find out more.
What is Networking?
A network is simply two or more computers linked together to share data, information or
resources.
To properly establish secure data communications, it is important to explore all of the
technologies involved in computer communications. From hardware and software to
protocols and encryption and beyond, there are many details, standards and procedures to be
familiar with.
Types of Networks
Local area network (LAN) - A local area network (LAN) is a network typically spanning a
single floor or building. This is commonly a limited geographical area.
Wide area network (WAN) - Wide area network (WAN) is the term usually assigned to the
long-distance connections between geographically remote networks.
Network Devices
HUB
Hubs are used to connect multiple devices in a network. They’re less likely to be seen in business or
corporate networks than in home networks. Hubs are wired devices and are not as smart as switches
or routers.
SWITCH
Rather than using a hub, you might consider using a switch, or what is also known as an
intelligent hub. Switches are wired devices that know the addresses of the devices connected
to them and route traffic to that port/device rather than retransmitting to all devices.
Offering greater efficiency for traffic delivery and improving the overall throughput of data,
switches are smarter than hubs, but not as smart as routers. Switches can also create separate
broadcast domains when used to create VLANs, which will be discussed later.
ROUTER
Routers are used to control traffic flow on networks and are often used to connect similar networks
and control traffic flow between them. Routers can be wired or wireless and can connect multiple
switches. Smarter than hubs and switches, routers determine the most efficient “route” for the
traffic to flow across the network.
FIREWALL
Firewalls are essential tools in managing and controlling network traffic and protecting the network.
A firewall is a network device used to filter traffic. It is typically deployed between a private network
and the internet, but it can also be deployed between departments (segmented networks) within an
organization (overall network). Firewalls filter traffic based on a defined set of rules, also called
filters or access control lists.
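To make rule-based filtering concrete, the sketch below is a minimal, hypothetical packet-filter written in Python. The rule format, field names and sample rules are illustrative assumptions, not the syntax of any particular firewall product.

```python
# A minimal, illustrative packet filter: each rule is
# (source network, destination port or None for "any", action).
# First match wins; anything unmatched is denied.
import ipaddress

RULES = [
    (ipaddress.ip_network("10.0.0.0/8"), 22, "allow"),   # internal hosts may use SSH
    (ipaddress.ip_network("0.0.0.0/0"), 443, "allow"),   # anyone may reach HTTPS
    (ipaddress.ip_network("0.0.0.0/0"), None, "deny"),   # explicit default deny
]

def evaluate(src_ip: str, dst_port: int) -> str:
    """Return 'allow' or 'deny' for a packet using first-match semantics."""
    src = ipaddress.ip_address(src_ip)
    for network, port, action in RULES:
        if src in network and port in (None, dst_port):
            return action
    return "deny"  # implicit deny if no rule matched

print(evaluate("10.1.2.3", 22))      # allow (matches the first rule)
print(evaluate("203.0.113.9", 22))   # deny  (falls through to default deny)
```

Real firewalls evaluate far richer criteria (protocol, direction, connection state), but the first-match, default-deny pattern shown here is the core idea behind an access control list.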
SERVER
A server is a computer that provides information to other computers on a network. Some common
servers are web servers, email servers, print servers, database servers and file servers. All of these
are, by design, networked and accessed in some way by a client computer. Servers are usually
secured differently than workstations to protect the information they contain.
ENDPOINT
Endpoints are the ends of a network communication link. One end is often at a server where a
resource resides, and the other end is often a client making a request to use a network resource. An
endpoint can be another server, desktop workstation, laptop, tablet, mobile phone or any other end
user device.
Other Networking Terms
Ethernet
Ethernet (IEEE 802.3) is a standard that defines wired connections of networked devices.
This standard defines the way data is formatted over the wire to ensure disparate devices can
communicate over the same cables.
Device Address
Media Access Control (MAC) Address - Every network device is assigned a Media Access
Control (MAC) address. An example is 00-13-02-1F-58-F5. The first 3 bytes (24 bits) of the
address denote the vendor or manufacturer of the physical network interface. No two
devices can have the same MAC address in the same local network; otherwise an address
conflict occurs.
Internet Protocol (IP) Address - While MAC addresses are generally assigned in the firmware
of the interface, IP hosts associate that address with a unique logical address. This logical IP
address represents the network interface within the network and can be useful to maintain
communications when a physical device is swapped with new hardware. Examples are
192.168.1.1 and 2001:db8::ffff:0:1.
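As a small illustration of these two address types, the following standard-library Python sketch splits the example MAC address above into its vendor (OUI) and device portions, and parses the example IPv4 and IPv6 logical addresses.

```python
import ipaddress

# Physical (hardware) address: the first three bytes identify the vendor (OUI).
mac = "00-13-02-1F-58-F5"
octets = mac.split("-")
print("Vendor/OUI portion:", "-".join(octets[:3]))   # 00-13-02
print("Device portion:    ", "-".join(octets[3:]))   # 1F-58-F5

# Logical (IP) addresses: the same module parses IPv4 and IPv6.
for text in ("192.168.1.1", "2001:db8::ffff:0:1"):
    addr = ipaddress.ip_address(text)
    print(text, "is a valid IPv{} address".format(addr.version))
```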
Networking at a Glance
This diagram represents a small business network, which we will build upon during this
lesson. The lines depict wired connections. Notice how all devices behind the firewall
connect via the network switch, and the firewall lies between the network switch and the
internet.
The network diagram below represents a typical home network. Notice the primary
difference between the home network and the business network is that the router, firewall,
and network switch are often combined into one device supplied by your internet provider
and shown here as the wireless access point.
Networking Models
Many different models, architectures and standards exist that provide ways to interconnect
different hardware and software systems with each other for the purposes of sharing
information, coordinating their activities and accomplishing joint or shared tasks.
Computers and networks emerge from the integration of communication devices, storage
devices, processing devices, security devices, input devices, output devices, operating
systems, software, services, data and people.
Translating the organization’s security needs into safe, reliable and effective network systems starts with a simple premise. The purpose of all communications is to exchange information and ideas between people and organizations so that they can get work done.
Those simple goals can be re-expressed in network (and security) terms.
In the most basic form, a network model has at least two layers:
Upper Layer
The upper layer, also known as the host or application layer, is responsible for managing the
integrity of a connection and controlling the session as well as establishing, maintaining and
terminating communication sessions between two computers. It is also responsible for
transforming data received from the Application Layer into a format that any system can
understand. And finally, it allows applications to communicate and determines whether a
remote communication partner is available and accessible.
Lower Layer
The lower layer is often referred to as the media or transport layer and is responsible for receiving
bits from the physical connection medium and converting them into a frame. Frames are grouped
into standardized sizes. Think of frames as a bucket and the bits as water. If the buckets are sized
similarly and the water is contained within the buckets, the data can be transported in a controlled
manner. Route data is added to the frames of data to create packets. In other words, a destination
address is added to the bucket. Once we have the buckets sorted and ready to go, the host layer
takes over.
Open Systems Interconnection (OSI) Model
The OSI Model was developed to establish a common way to describe the communication
structure for interconnected computer systems. The OSI model serves as an abstract
framework, or theoretical model, for how protocols should function in an ideal world, on
ideal hardware. Thus, the OSI model has become a common conceptual reference that is used
to understand the communication of various hierarchical components from software
interfaces to physical hardware.
The OSI model divides networking tasks into seven distinct layers. Each layer is responsible
for performing specific tasks or operations with the goal of supporting data exchange (in
other words, network communication) between two computers. The layers are
interchangeably referenced by name or layer number. For example, Layer 3 is also known as
the Network Layer. The layers are ordered specifically to indicate how information flows
through the various levels of communication. Each layer communicates directly with the
layer above and the layer below it. For example, Layer 3 communicates with both the Data
Link (2) and Transport (4) layers.
The Application, Presentation, and Session Layers (5-7) are commonly referred to simply as data. However, each layer has the potential to perform encapsulation. Encapsulation is the addition of a header, and possibly a footer (trailer), by a protocol used at that layer of the OSI model. Encapsulation is particularly important when discussing the Transport, Network and
Data Link layers (2-4), which all generally include some form of header. At the Physical
Layer (1), the data unit is converted into binary, i.e., 01010111, and sent across physical
wires such as an ethernet cable.
It's worth mapping some common networking terminology to the OSI Model so you can see
the value in the conceptual model.
When someone references an image file like a JPEG or PNG, we are talking about the
Presentation Layer (6).
When discussing logical ports such as NetBIOS, we are discussing the Session Layer (5).
When discussing TCP/UDP, we are discussing the Transport Layer (4).
When discussing routers sending packets, we are discussing the Network Layer (3).
When discussing switches, bridges or WAPs sending frames, we are discussing the Data Link
Layer (2).
Encapsulation occurs as the data moves down the OSI model from Application to Physical.
As data is encapsulated at each descending layer, the previous layer’s header, payload and
footer are all treated as the next layer’s payload. The data unit size increases as we move
down the conceptual model and the contents continue to encapsulate.
The inverse action occurs as data moves up the OSI model layers from Physical to
Application. This process is known as de-encapsulation (or decapsulation). The header and
footer are used to properly interpret the data payload and are then discarded. As we move up
the OSI model, the data unit becomes smaller. The encapsulation/de-encapsulation process is
best depicted visually below:
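In place of the layered diagram usually shown here, the short Python sketch below imitates the idea: on the way down, each layer wraps the layer above it in its own header (plus a trailer at the Data Link Layer); on the way up, each layer strips what was added. The header strings are placeholders, not real protocol formats.

```python
# Moving down the stack: each layer treats everything above it as payload.
app_data = "GET /index.html"                     # Layers 5-7: application data
segment  = "TCP_HDR|" + app_data                 # Layer 4: Transport header added
packet   = "IP_HDR|" + segment                   # Layer 3: Network header added
frame    = "ETH_HDR|" + packet + "|ETH_TRL"      # Layer 2: header and trailer added
bits     = frame.encode()                        # Layer 1: sent as raw bits

# Moving up the stack (de-encapsulation): each layer strips what it added.
# (str.removeprefix/removesuffix require Python 3.9 or newer.)
frame_rx   = bits.decode()
packet_rx  = frame_rx.removeprefix("ETH_HDR|").removesuffix("|ETH_TRL")
segment_rx = packet_rx.removeprefix("IP_HDR|")
data_rx    = segment_rx.removeprefix("TCP_HDR|")
assert data_rx == app_data  # the original application data is recovered intact
```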
Transmission Control Protocol/Internet Protocol (TCP/IP)
The OSI model wasn’t the first or only attempt to streamline networking protocols or
establish a common communications standard. In fact, the most widely used protocol today,
TCP/IP, was developed in the early 1970s. The OSI model was not developed until the late
1970s. The TCP/IP protocol stack focuses on the core functions of networking.
The most widely used protocol suite is TCP/IP, but it is not just a single protocol; rather, it is a
protocol stack comprising dozens of individual protocols. TCP/IP is a platform-independent protocol
based on open standards. However, this is both a benefit and a drawback. TCP/IP can be found in
just about every available operating system, but it consumes a significant amount of resources and is
relatively easy to hack into because it was designed for ease of use rather than for security.
The two primary Transport Layer protocols of TCP/IP are TCP and UDP. TCP is a full-
duplex connection-oriented protocol, whereas UDP is a simplex connectionless protocol. In
the Internet Layer, Internet Control Message Protocol (ICMP) is used to determine the health
of a network or a specific link. ICMP is utilized by ping, traceroute and other network
management tools. The ping utility employs ICMP echo packets and bounces them off remote
systems. Thus, you can use ping to determine whether the remote system is online, whether
the remote system is responding promptly, whether the intermediary systems are supporting
communications, and the level of performance efficiency at which the intermediary systems
are communicating.
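The difference between connection-oriented TCP and connectionless UDP is visible directly in the socket API. The standard-library Python sketch below uses a hypothetical host address and ports purely for illustration (connecting to it will fail unless you substitute a real, reachable server): TCP's connect() must complete a handshake before data can flow, while UDP's sendto() simply transmits a datagram with no connection or delivery guarantee.

```python
import socket

HOST = "192.0.2.10"  # hypothetical server address (TEST-NET); replace with a real host

# TCP (SOCK_STREAM): connection-oriented, full-duplex. connect() performs the
# handshake and raises an error if the port is closed or the host is unreachable.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as tcp:
    tcp.settimeout(2)
    tcp.connect((HOST, 80))
    tcp.sendall(b"GET / HTTP/1.0\r\n\r\n")
    print(tcp.recv(1024))

# UDP (SOCK_DGRAM): connectionless. sendto() just transmits a datagram;
# there is no handshake and no built-in confirmation of delivery.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as udp:
    udp.sendto(b"hello", (HOST, 12345))
```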
Internet Protocol (IPv4 and IPv6)
IP is currently deployed and used worldwide in two major versions. IPv4 provides a 32-bit
address space, which by the late 1980s was projected to be exhausted. IPv6 was introduced in
December 1995 and provides a 128-bit address space along with several other important
features.
With the ever-increasing number of computers and networked devices, it is clear that IPv4
does not provide enough addresses for our needs. To overcome this shortcoming, IPv4 was
sub-divided into public and private address ranges. Public addresses are limited with IPv4,
but this issue was addressed in part with private addressing. Private addresses can be shared
by anyone, and it is highly likely that everyone on your street is using the same address
scheme.
The nature of the addressing scheme established by IPv4 meant that network designers had to
start thinking in terms of IP address reuse. IPv4 facilitated this in several ways, such as its
creation of the private address groups; this allows every LAN in every SOHO (small office,
home office) situation to use addresses such as 192.168.2.xxx for its internal network
addresses, without fear that some other system can intercept traffic on their LAN.
The private IPv4 address ranges are:
10.0.0.0 to 10.255.255.254
172.16.0.0 to 172.31.255.254
192.168.0.0 to 192.168.255.254
The first octet of 127 is reserved for a computer’s loopback address. Usually, the address
127.0.0.1 is used. The loopback address is used to provide a mechanism for self-diagnosis
and troubleshooting at the machine level. This mechanism allows a network administrator to
treat a local machine as if it were a remote machine and ping the network interface to
establish whether it is operational.
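Python's standard ipaddress module already knows the private and loopback ranges described above, so a quick classification check can be sketched as follows (the sample addresses are only examples).

```python
import ipaddress

for text in ("10.20.30.40", "172.16.5.5", "192.168.2.25", "127.0.0.1", "8.8.8.8"):
    ip = ipaddress.ip_address(text)
    if ip.is_loopback:
        kind = "loopback"      # 127.0.0.0/8
    elif ip.is_private:
        kind = "private"       # RFC 1918 and related reserved ranges
    else:
        kind = "public"        # routable on the internet
    print(f"{text:>15}  {kind}")
```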
A much larger address field: IPv6 addresses are 128 bits, which supports 2^128 or 340,282,366,920,938,463,463,374,607,431,768,211,456 hosts. This ensures that we will not run out of addresses.
Improved security: IPsec is an optional part of IPv4 networks, but a mandatory component
of IPv6 networks. This will help ensure the integrity and confidentiality of IP packets and
allow communicating partners to authenticate with each other.
Improved quality of service (QoS): This will help services obtain an appropriate share of a
network’s bandwidth.
An IPv6 address is shown as 8 groups of four digits. Instead of numeric (0-9) digits like IPv4,
IPv6 addresses use the hexadecimal range (0000-ffff) and are separated by colons (:) rather
than periods (.). An example IPv6 address is 2001:0db8:0000:0000:0000:ffff:0000:0001. To
make it easier for humans to read and type, it can be shortened by removing the leading zeros
at the beginning of each field and substituting two colons (::) for the longest consecutive zero
fields. All fields must retain at least one digit. After shortening, the example address above is
rendered as 2001:db8::ffff:0:1, which is much easier to type. As in IPv4, there are some
addresses and ranges that are reserved for special uses:
::1 is the local loopback address, used the same as 127.0.0.1 in IPv4.
The range 2001:db8:: to 2001:db8:ffff:ffff:ffff:ffff:ffff:ffff is reserved for documentation use,
just like in the examples above.
fc00:: to fdff:ffff:ffff:ffff:ffff:ffff:ffff:ffff are addresses reserved for internal network use and
are not routable on the internet.
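The shortening rules described above are the same ones Python's standard ipaddress module applies when printing an IPv6 address, which makes it a convenient way to check shortening by hand, as in this small sketch.

```python
import ipaddress

full = "2001:0db8:0000:0000:0000:ffff:0000:0001"
addr = ipaddress.IPv6Address(full)

print(addr.compressed)  # '2001:db8::ffff:0:1' - leading zeros dropped, longest zero run -> ::
print(addr.exploded)    # back to the full eight-group form
```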
Formatting IPv6
Which of the following examples is a correctly shortened version of the address
2001:0db8:0000:0000:0000:ffff:0000:0001?
A. 2001:db8::ffff:0000:1
B. 2001:0db8:0:ffff::1
Incorrect. The longest run of consecutive fields of 0000 is shortened to :: and a single remaining field of 0000 is shortened to just 0.
C. 2001:0db8::ffff:0:0001
D. 2001:db8::ffff:0:1
What is WiFi?
Wireless networking is a popular method of connecting corporate and home systems because
of the ease of deployment and relatively low cost. It has made networking more versatile than
ever before. Workstations and portable systems are no longer tied to a cable but can roam
freely within the signal range of the deployed wireless access points. However, with this
freedom comes additional vulnerabilities.
Wi-Fi range is generally wide enough for most homes or small offices, and range extenders
may be placed strategically to extend the signal for larger campuses or homes. Over time the
Wi-Fi standard has evolved, with each updated version faster than the last.
In a LAN, threat actors need to enter the physical space or immediate vicinity of the physical
media itself. For wired networks, this can be done by placing sniffer taps onto cables,
plugging in USB devices, or using other tools that require physical access to the network. By
contrast, wireless media intrusions can happen at a distance.
TCP/IP (as well as most protocols) is also subject to passive attacks via monitoring or
sniffing. Network monitoring, or sniffing, is the act of monitoring traffic patterns to obtain
information about a network.
Physical Ports
Physical ports are the ports on the routers, switches, servers, computers, etc. to which you connect the wires, e.g., fiber optic cables, Cat5 cables, etc., to create a network.
Logical Ports
Well-known ports (0–1023): These ports are related to the common protocols that are at the core of the Transmission Control Protocol/Internet Protocol (TCP/IP) model, such as Domain Name Service (DNS), Simple Mail Transfer Protocol (SMTP), etc.
Registered ports (1024–49151): These ports are often associated with proprietary
applications from vendors and developers. While they are officially approved by the
Internet Assigned Numbers Authority (IANA), in practice many vendors simply
implement a port of their choosing. Examples include Remote Authentication Dial-In
User Service (RADIUS) authentication (1812), Microsoft SQL Server (1433/1434)
and the Docker REST API (2375/2376).
Dynamic or private ports (49152–65535): Whenever a service is requested that is
associated with well-known or registered ports, those services will respond with a
dynamic port that is used for that session and then released.
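To see these categories in practice, the sketch below attempts TCP connections to two well-known ports on an example host and prints the dynamic (ephemeral) port the client side receives for each successful session. The host name is only an example; substitute a system you are authorized to test.

```python
import socket

HOST = "www.example.com"  # example target; substitute a host you are authorized to test

for port in (80, 443):  # well-known ports (0-1023)
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(3)
        try:
            s.connect((HOST, port))
        except OSError:
            print(f"port {port}: closed or filtered")
        else:
            # getsockname() reveals the dynamic/ephemeral port the operating
            # system assigned to the client side of this session.
            local_port = s.getsockname()[1]
            print(f"port {port}: open, client used ephemeral port {local_port}")
```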
Secure Ports
Some network protocols transmit information in clear text, meaning it is not encrypted and
should not be used. Clear text information is subject to network sniffing. This tactic uses
software to inspect packets of data as they travel across the network and extract text such as
usernames and passwords. Network sniffing could also reveal the content of documents and
other files if they are sent via insecure protocols. The table below shows some of the insecure
protocols along with recommended secure alternatives.
The following entries describe why each insecure port is insecure and why the corresponding secure alternative port is preferred.
21-FTP
Insecure port: 21 - FTP (File Transfer Protocol). Secure alternative: 22* - SFTP (Secure File Transfer Protocol).
Port 21, File Transfer Protocol (FTP), sends the username and password using plaintext from the client to the server. This could be intercepted by an attacker and later used to retrieve confidential information from the server. The secure alternative, SFTP, on port 22 uses encryption to protect the user credentials and packets of data being transferred.
23-Telnet
Insecure port: 23 - Telnet. Secure alternative: 22* - SSH (Secure Shell).
Port 23, telnet, is used by many Linux systems and any other systems as a basic text-based terminal. All information to and from the host on a telnet connection is sent in plaintext and can be intercepted by an attacker. This includes the username and password as well as all information that is being presented on the screen, since this interface is all text. Secure Shell (SSH) on port 22 uses encryption to ensure that traffic between the host and terminal is not sent in a plaintext format.
25-SMTP
Insecure port: 25 - SMTP (Simple Mail Transfer Protocol). Secure alternative: 587 - SMTP with TLS.
Port 25, Simple Mail Transfer Protocol (SMTP), is the default unencrypted port for sending email messages. Since it is unencrypted, data contained within the emails could be discovered by network sniffing. The secure alternative is to use port 587 for SMTP using Transport Layer Security (TLS), which will encrypt the data between the mail client and the mail server.
37-Time
Insecure port: 37 - Time Protocol. Secure alternative: 123 - NTP (Network Time Protocol).
Port 37, Time Protocol, may be in use by legacy equipment and has mostly been replaced by using port 123 for Network Time Protocol (NTP). NTP on port 123 offers better error-handling capabilities, which reduces the likelihood of unexpected errors.
53-DNS
Insecure port: 53 - DNS (Domain Name Service). Secure alternative: 853 - DoT (DNS over TLS).
Port 53, Domain Name Service (DNS), is still used widely. However, using DNS over TLS (DoT) on port 853 protects DNS information from being modified in transit.
80-HTTP
Insecure port: 80 - HTTP (HyperText Transfer Protocol). Secure alternative: 443 - HTTPS (HyperText Transfer Protocol over SSL/TLS).
Port 80, HyperText Transfer Protocol (HTTP), is the basis of nearly all web browser traffic on the internet. Information sent via HTTP is not encrypted and is susceptible to sniffing attacks. HTTPS using TLS encryption is preferred, as it protects the data in transit between the server and the browser. Note that this is often notated as SSL/TLS. Secure Sockets Layer (SSL) has been compromised and is no longer considered secure. It is now recommended for web servers and clients to use Transport Layer Security (TLS) 1.3 or higher for the best protection.
143-IMAP
Insecure port: 143 - IMAP (Internet Message Access Protocol). Secure alternative: 993 - IMAP over SSL/TLS.
Port 143, Internet Message Access Protocol (IMAP), is a protocol used for retrieving emails. IMAP traffic on port 143 is not encrypted and is susceptible to network sniffing. The secure alternative is to use port 993 for IMAP, which adds SSL/TLS security to encrypt the data between the mail client and the mail server.
161/162-SNMP
Insecure ports: 161/162 - SNMP (Simple Network Management Protocol). Secure alternative: 161/162 - SNMPv3.
Ports 161 and 162, Simple Network Management Protocol (SNMP), are commonly used to send and receive data used for managing infrastructure devices. Because sensitive information is often included in these messages, it is recommended to use SNMP version 2 or 3 (abbreviated SNMPv2 or SNMPv3) to include encryption and additional security features. Unlike many of the others discussed here, all versions of SNMP use the same ports, so there is not a definitive secure and insecure pairing. Additional context will be needed to determine whether information on ports 161 and 162 is secured or not.
445-SMB
Insecure port: 445 - SMB (Server Message Block). Secure alternative: 2049 - NFS (Network File System).
Port 445, Server Message Block (SMB), is used by many versions of Windows for accessing files over the network. Files are transmitted unencrypted, and many vulnerabilities are well-known. Therefore, it is recommended that traffic on port 445 not be allowed to pass through a firewall at the network perimeter. A more secure alternative is port 2049, Network File System (NFS). Although NFS can use encryption, it is recommended that NFS not be allowed through firewalls either.
389-LDAP
Insecure port: 389 - LDAP (Lightweight Directory Access Protocol). Secure alternative: 636 - LDAPS (Lightweight Directory Access Protocol Secure).
Port 389, Lightweight Directory Access Protocol (LDAP), is used to communicate directory information from servers to clients. This can be an address book for email or usernames for logins. The LDAP protocol also allows records in the directory to be updated, introducing additional risk. Since LDAP is not encrypted, it is susceptible to sniffing and manipulation attacks. Lightweight Directory Access Protocol Secure (LDAPS) adds SSL/TLS security to protect the information while it is in transit.
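To see one of these secure alternatives in action, the following standard-library Python sketch wraps a connection to port 443 (HTTPS) in TLS and reports the negotiated protocol version and the server certificate's subject. The host name is only an example.

```python
import socket
import ssl

HOST = "www.example.com"  # any HTTPS-enabled host

context = ssl.create_default_context()  # verifies the server certificate by default
with socket.create_connection((HOST, 443), timeout=5) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print("Negotiated protocol:", tls_sock.version())          # e.g. 'TLSv1.3'
        print("Certificate subject:", tls_sock.getpeercert()["subject"])
```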
Which of the following protocols is the secure alternative to telnet?
A. HTTPS
Incorrect. HyperText Transfer Protocol Secure is the secure alternative for HTTP and uses SSL/TLS for
securing website communications.
B. LDAPS
Incorrect. Lightweight Directory Access Protocol Secure (LDAPS) is the secure alternative for Lightweight
Directory Access Protocol (LDAP) and is used to exchange directory information in a secured protocol.
C. SFTP
Incorrect. Secure File Transfer Protocol (SFTP) is the secure alternative to FTP and is used to transfer files.
D. SSH
Correct. Secure Shell (SSH) is the secure alternative to telnet as it encrypts all traffic between the host and
remote user.
Manny: It's not just cybersecurity experts who have to know about the different types of network and cyber threats and attacks.
Tasha: You're right, Manny. Everyone from small businesses (like Java Sip) to the biggest corporations needs to know the impact of network and cyber-attacks. It seems like every day there is news of ransomware or other cyber-attacks. These attacks are costing the world financially and they're increasing every year.
Manny: Anyone who uses a smartphone or has an email or social media account has probably encountered spoofing, phishing, and other nefarious attempts to defraud users or infect their devices. Let's find out more.
Chad Kliewer: I'll say greetings and welcome to the discussion on cyberattacks. I'm
your host,
Chad Kliewer, holder of a CISSP and CCSP, and current (ISC)2 member. I'll be
facilitating our experience. And I'm extremely excited to welcome our special guest,
Joe Sullivan, CISSP, and also an (ISC)2 member. Joe's a former CISO in the
banking and finance industry, who now specializes in forensics, incident response
and recovery. So, Joe, you ready to get started?
Joe Sullivan: I am looking forward to this. I'm excited.
Kliewer: All right. Anything else you'd like to add about your background? I didn't give
you much opportunity to do that.
Sullivan: Just a brief overview. I’ve been in information security for 2 years now in
various aspects as you've mentioned.
Kliewer: Okay, awesome. Thank you much. So, I'm going to dive right into some
content here. Because part of what we're trying to do is we're trying to look at how
we prevent attacks, and then once those attacks happen, how they really impact the
business and how they impact the companies. We all hear about these attacks
constantly, but we never really look so much at how they impact each individual
business. So, we're going to start out just talking a little bit and say, if we can't detect
any future ongoing attack, how are we going to remediate that, and how are we
going to stop it? And the one point we want to make here is how important it is to
make sure that we're aggregating all that data using this Security Information and Event Management system, or SIEM, S-I-E-M. And what are your thoughts on using a SIEM to make actionable intelligence, Joe?
Sullivan: Integrating a SIEM for actionable intelligence, I think you have to take a
step back and think about, when do we trigger incident response, typically? Over the
course of my career, incident response is usually triggered after something bad
happens. They're on the network, or we see an exploit, or we've been compromised, or there's a knock on the door that says, hey, your data's out there. If we have a SIEM or user behavior analytics, whatever the case may be, properly optimized and tuned, we can pick up on those indicators of compromise before the bad things happen. And when I say indicators of compromise, I'm referring to things like scanning, malicious email attachments, web application enumeration and things like
that. Attackers spend the majority of their time in the recon phase. If we can detect
those recon activities, that's actionable intelligence where we can block IPs, block
tools and things like that before they actually get on the network. Even once they get
on a network, recon still takes place. I get on a machine, what's the vendor? What
software am I running on this machine?
What applications are installed? What's the network look like? And still, we're not to
the point where a breach is actually taking place yet. Again, if we're detecting an
activity in our SIEM with the appropriate logging, monitoring and alerting, we can
trigger incident response well before the actual breach takes place.
Kliewer: So, what are your thoughts on the actionable intelligence and how we
prevent threats? Do you think most of the threats or most of the, well, we'll say
incidents, are actually detected by internal systems, or do you think they're mostly
the result of receiving the indicators of compromise from a third-party organization,
such as a government entity or something like that?
Sullivan: If you look at it as far as detection, we have events; determining what's malicious and what's just an event or a false positive is the challenge here. When
you have lean running security teams, who don't have the time to go in and tune and
optimize this (but then again, something is better than nothing) a well operationalized
security program with the appropriate headcount has the chance of detecting these
and getting those alerts and indicators of compromise and acting on those earlier;
whereas, if you have a lean running program (a two- to three-headcount security
department that are wearing many different hats) it's a little bit more challenging to
tune and optimize that. It's in scenarios like that where it might be beneficial to
outsource that to a third-party SOC or something, and let them say, “Hey, we've
detected this going on in your network, it doesn't look like a false positive, you should
go check this out.”
Kliewer: Awesome. So, I'm going to paraphrase a little bit and read between the lines
and say that I didn't hear one thing in there about, ‘You need to buy this software
product to detect all the incidents.’
Sullivan: You don't really need to buy a software product to detect all the incidents.
You know, if you look at, like, the CIS Controls, the CSF (the cybersecurity framework), or even NIST 800-53, if you implement those and get your logs where you just have some visibility into them, monitoring something, you can detect these. It doesn't really need to be a high-dollar SIEM or something like that. Network segmentation, we'll look at that. Host-based firewalls do a lot of good for limiting the impact of an
incident.
Kliewer: Okay, awesome. So now I want to kind of roll that just a little bit more, and
we kind of talked about that that's more the processes to log retention, so do you
think what we've talked about so far still holds true when it's cloud-based software
products or even cloud-based, and I'm going to say cloud-based SIEM, like a lot of
them are?
Sullivan: The concept still holds true, right? We still want to aggregate the logs. The challenge in the cloud is the threat surface is a little bit different. I have all these different
authentication portals and command line tools that can be used in public cloud
services. And your threat model is things like permissions and IAM—identity and
access management—if you don't have the appropriate permissions set up, you
don't know what a user can do (like in some cases with a particular public cloud
service I won't name) if you have a certain permission where you can actually roll
back permissions, but you're limited, you can actually roll back your own policy and
do something where you had permission at an earlier date, but you don't now. It's
those little gotchas like that that you need to be aware of. And then there is
provisioning cloud services, depending on how you provision certain virtual
machines, RDP and SSH is enabled by default facing the internet, so you want to be
aware of what's the context of if I provision that here or from the command line tool?
The logging, monitoring, and alerting, you can have a cloud-based SIEM third party,
or a lot of public cloud providers have their own tools. It's a little bit different
approach, a little bit different aggregating those logs and reading them, setting up the
alerts, so there might be a learning curve there. And then there's things like the
instance metadata service, which if you get in contact with that, you can actually—it’s
like getting all the metadata on your VMs, your hosts, your disk drives, your backups
and things like that, and gives you a wealth of information. And we're seeing older
attacks like server-side request forgery coming back. In the Capital One breach a
while back in a public cloud service, we've seen that take place. And there's various
controls and mitigations they put in place to mitigate the IMDS attacks, and you need
to be aware of what those are and how you can prevent those from happening. So,
it's a little bit different, a little bit more comprehensive. It's not the same as your
traditional on-prem resources, so there's a learning curve going through there. It's a
little bit more challenging at first, but I think overall, it's the same approach, you just
have a different way of implementing it.
Kliewer: Awesome. So, thanks for answering that. Since you mentioned the recent
Capital One breach that involved the cloud service, can you kind of give our listeners
an overview, we'll say about a 15,000-foot view of that breach and what happened?
Sullivan: The Capital One breach was actually an insider threat. They actually had
access to this system, had worked with it before, and the instance metadata
service—so you hit the web application, which caused a URL on the back end to get
data, allocate resources, authentication and things like that. Like say, you have data
in an S3 bucket, you can actually hit that IMDS and get that information back. That
server-side request forgery attack let that person enumerate those resources and get
access to them and download them. So, they had to go back and determine, “Well,
how can we prevent this from happening?” And implemented things like now
you need a token to send to the IMDS to actually get that information back, or we're
going to limit the response from the IMDS into one hop, that way it doesn't go past
the machine out to the internet. So, an attacker can't actually get that.
Kliewer: Okay. Awesome. Thanks for covering that for us. I want to shift gears just a
little bit, and we're talking about an attack here that involved some cloud
components, but not necessarily in the cloud. And I wanted to talk just a little bit,
because it was such a widespread incident—I mean, it can be called a cyberattack,
we'll call it an incident with SolarWinds—it was one that was very widespread,
gained a lot of notoriety because it was one that affected a lot of US government
agencies, and I'm guessing probably a lot of other government agencies as well.
And this was a very good example of a supply chain attack, where some malicious
code or malicious programs were embedded within the supply chain or within an
update package. So, would you like to kind of lead us through a little bit, Joe, and
just once again at a real high level of what steps that SolarWinds attack really took?
I'm going to preface it by saying the reason it has such a huge impact was because it
went undetected for so long. It went undetected, I think for at least, I'm going to say
at least six to eight months that we know of, possibly quite longer. But if you could
give our listeners an overview of that SolarWinds attack and how they actually
utilized the cloud components.
Sullivan: Sure, no problem there. SolarWinds was a really, really clever attack. The
initial foothold, we're not sure. They gained access to the internal network. We don't
know if it was a spear phishing attack. There had also been rumor that a password
was leaked as well. It could have been someone had set up a site for a watering hole
attack. However, they did it, once they got access to the network, they focused on
the build server where the actual code is compiled. And instead of implementing their malicious code in the source that goes into the build, they injected it as the output of the build process; that way it got packaged in and signed with the SolarWinds software. They took that approach because, one, it keeps them
off the radar for code scanning and code review. They're not going to see that code.
And once they get signed, it's trusted at that point. So, once they got pushed out to
the update server, all these individual companies who were running SolarWinds Orion downloaded that, it gets on their network, but the attack didn't start or that
malware didn't trigger for two weeks. And once it started triggering, it communicated
with cloud resources where they set up their C2 network with AWS, GCP, Azure,
GoDaddy and services like that and actually mimicked the Orion syntax. So, it
looked like regular Orion traffic going back and forth. And that gave them access to
the network. They could read email, obtain documents. They even got certificates
where they could impersonate users. And it wasn't detected for a long time. It was a
really sophisticated attack. They were very patient, and this was a really crafty
attack.
Kliewer: Awesome. And just to point out there, because I want to point out in a little
bit for our listeners and our learners in our courses that we've talked about some of
these different components. I think we talked about C2, the command and control,
which is what they're actually using to actually go back and obtain that information
out of the host networks once they're compromised. And the fact that these
command-and-control networks were propagated or stored in not just one cloud
network infrastructure, but they used multiple cloud infrastructures and multiple cloud
providers to do this, and all of that stuff helped them evade detection basically. So,
like I said, I wanted to point that out a little bit. And I can tell you as one person who
was part of an organization, who was named in that SolarWinds attack, and one of
the initial organizations that were listed as compromised—I'm going to back this up
to our SIEM conversation earlier and say that SIEM was absolutely priceless in
showing us that, yes, we did establish the initial communication with their command
and control, but nothing happened past that point. We can show beyond a shadow of
a doubt that we did not exfiltrate data, that there was no other data that went back
and forth between our internal network and that command-and-control service. So
that's where that whole SIEM ties into it. So, Joe, I wanted to talk about one other
thing, which I know is one of those areas that's kind of near and dear to your heart as
a hacker kind of guy, not to use that in a negative component, but I'd like to hear
your thoughts on threat hunting versus pen testing, vulnerability scanning,
and malicious actors. I mean, how do you know the difference between somebody
that's out there doing threat hunting or vulnerability assessment across the internet
versus somebody who's a real malicious actor or a real threat?
Sullivan: Well, I think when you look at threat hunting, pen testing and vulnerability
scanning, if you're doing it internally, obviously you know this is happening. If you're
a third-party performing this for another organization, obviously you're doing it with
permission so they're aware of it; whereas if you see these activities taking place,
then you haven't given anyone permission, they're not going on internally, you have
bigger issues. And these are often used interchangeably today. Threat hunting, in
my mind, in my experience is I'm actually going to look at my network and act like
there's a potential attacker here, we've been breached and we're going to treat it like
that. We're going to look at our business-critical systems. We're going to capture
memory. We're going to do packet captures. We're looking for indicators of
compromise to see if do we actually have a bad actor on the network? This is
beneficial because of your attack dwell time, right? You don't always detect the
attacker immediately. Hopefully you do, but usually there's four to six weeks or
something like that where they're on the network. This helps shorten that time period
if you perform regular threat hunting. Whereas pen testing, I want to know, can you
actually get into my network? Is it possible to compromise my software, my
configurations, my people? Can you get access into the building? And that tells
you, like I say, people ask me, what do you do? Well, I hack networks and break into
buildings to keep people from hacking networks and breaking into buildings. If you
have a good idea of how this takes place, you can better shore up your defenses in
those particular areas. Vulnerability scanning is something every organization should
be doing. I'm running regular scans with whatever vulnerability scanner you like that
fits into your particular context, that identifies these vulnerabilities as they take place
or as they get released and you can set up a remediation plan to patch those.
Kliewer: Awesome. I think that is a great breakdown of those different pieces. So, I'm
trying to figure out here if we have any other questions. And I want to take just a
couple minutes here to—I want to roll back a little bit, and it's not so much in a cloud
context, but still help define some of the rules and regulations we have in place
today. But what I wanted to do, Joe, is I want to back up and talk a little bit about the
T.J. Maxx incident. It happened quite a few years back, and I think it's probably used in a lot of textbooks. But there was an incident with T.J. Maxx, or basically, somebody was able to access their networks and use network sniffers, you name it, to siphon off credit card numbers flowing from their front-end systems to their backend systems, and then turn around and sell those numbers on the dark web, you name it. Does that about sum that up? Do you have a better summary of it?
Sullivan: Yeah, this one's going way back aways, right? The T.J. Maxx hack, if I remember right, was primarily, the initial foothold was they had an unsecured wireless network. Once they got on that wireless network, there was no network segmentation, so they were able to move freely. I think they got 94 million credit cards or so. It was a huge breach, but yes, that's basically, from a high level, what the T.J. Maxx attack was.
Kliewer: Awesome. And the reason I bring that up is because I wanted to talk about that for our listeners a little bit, because everybody's
also familiar with the PCI DSS or the Payment Card Industry Data Security
Standards. And ultimately, that was one of the incidents and one of the cyberattacks
that really led up to that PCI rule. And I want to be clear. It is a rule, not necessarily
a regulation or a law, it's something that's set forth by industry. I mean, what are the
pieces that PCI covers, Joe? I heard you mention several causes of that T.J. Maxx
incident. Can you help us connect the dots between that incident and PCI?
Sullivan: Sure. Just to kind of step back and kind of recap what you were saying
about PCI, a lot of times, it's misstated that this is a regulation or a law. It's actually a
contractual obligation between you and the credit card companies. And the credit
card companies got together and did all this because they wanted to avoid
government regulation. So, they said, “Hey, we actually police ourselves, we don't
need you to get in our business here.” So, they came up with PCI. The T.J. Maxx
incident impacted PCI. They looked at what happened at T.J. Maxx, and they
said, “You know what? You really need to better secure your wireless networks, and they need to be separate from your regular network, and your systems that actually hold PCI data have to be segmented. They have to have network access control as well. And you need to use the appropriate encryption to encrypt all this in transit and at rest.” And so, we came up with more strict PCI requirements, and you get into
the network segmentation. And you don't want to apply PCI to all your resources,
right, on the network (your systems, your servers, your devices), because then
everything has to be PCI compliant. The secret to becoming PCI compliant is
narrowing the scope, applying it just to those credit card-related systems. There was something else on that one too. My train of thought totally crashed there. Oh, they also recommend using a higher-level, agnostic security or control framework, and then scoping down to your PCI systems. So, then you're looking at something like the CIS Controls or the cybersecurity framework as well.
Kliewer: All right. And I think that's a great point to make there is regardless of what
country you are in or your geolocation, whatever, PCI pretty much applies worldwide, but
there are other frameworks and other tools you can use depending on your
geographic location that can help implement those same regulations and rules, and I
think that's a great connection to make there. And all right, I want to kind of start
wrapping things up here just a little bit, Joe. Are there any other real last minute or
overarching things that you'd like to talk about on the attack
surface or what you'd like our listeners to know when it comes to the cyberattacks
and what happens out there?
Sullivan: I think I'm going to sound like a broken record on this one, right? It still goes
back to doing those basic things like you see in the CIS controls. Notice where you're
at with asset inventory, know what assets you have, know what are business-critical
assets, know where the crown jewels are, segment those, appropriate logging,
monitoring, alerting, patch management, vulnerability scanning. In fact, it was June
of last year, the White House actually came out with a document that said, these are
the things you should be doing to protect your information security program—regular
backups, penetration testing, vulnerability management. These things still hold true.
And that was very much a watershed event. I don't remember a time where the
White House actually came out and said, “Hey, this is what you needed to do to
secure your network.” Why did they do that? Because you see organizations
like SolarWinds getting government organizations breached, and you see the
Colonial Pipeline, which is supplying oil to the United States, and the meat packing
processing plant, which also got ransomware at that time—provides food and meat
to people in the US. It's where these incidents, these cyber events and these
ransomware attacks aren't just affecting individual companies now, they're affecting
people across the nation when you get to this level. So that really changed the
criticality of what you need to be doing to secure your network. And you see, CISA
came out with supply chain guidelines to protect your organization against those. I
guess what I'm getting at is do the basics and determine what your context is. Do I
need to focus on supply chain? Do I need to focus on vulnerability scanning,
penetration testing—are my backups in place? And take care of the basics and build
on top of that.
Kliewer: Awesome, great advice, Joe. And I want to take just a moment here. To our
listeners, I hope you've enjoyed this discussion. I hope you found this useful, and I
hope you found it helpful as the official training that you've been taking. And again, I
want to offer many, many, many thanks to our special guest, Joe Sullivan for
volunteering his time to share his experience with us.
Sullivan: Oh, good to be here, Chad, I enjoyed it. Good conversation.
Types of Threats
There are many types of cyber threats to organizations. Below are several of the most
common types:
Spoofing:
An attack with the goal of gaining access to a target system through the use of a falsified identity.
Spoofing can be used against IP addresses, MAC address, usernames, system names, wireless
network SSIDs, email addresses, and many other types of logical identification.
Phishing:
An attack that attempts to misdirect legitimate users to malicious websites through the abuse of
URLs or hyperlinks in emails could be considered phishing.
DOS/DDOS:
A denial-of-service (DoS) attack is a network resource consumption attack that has the primary goal
of preventing legitimate activity on a victimized system. Attacks involving numerous unsuspecting
secondary victim systems are known as distributed denial-of-service (DDoS) attacks.
Virus:
The computer virus is perhaps the earliest form of malicious code to plague security administrators.
As with biological viruses, computer viruses have two main functions—propagation and destruction.
A virus is a self-replicating piece of code that spreads without the consent of a user, but frequently
with their assistance (a user has to click on a link or open a file).
Worm:
Worms pose a significant risk to network security. They contain the same destructive potential as
other malicious code objects with an added twist—they propagate themselves without requiring any
human intervention.
Trojan:
Named after the ancient story of the Trojan horse, the Trojan is a software program that appears
benevolent but carries a malicious, behind-the-scenes payload that has the potential to wreak havoc
on a system or network. For example, ransomware often uses a Trojan to infect a target machine
and then uses encryption technology to encrypt documents, spreadsheets and other files stored on
the system with a key known only to the malware creator.
On-Path Attack:
In an on-path attack, attackers place themselves between two devices, often between a web
browser and a web server, to intercept or modify information that is intended for one or both of the
endpoints. On-path attacks are also known as man-in-the-middle (MITM) attacks.
Side Channel:
A side-channel attack is a passive, noninvasive attack to observe the operation of a device. Methods
include power monitoring, timing and fault analysis attacks.
Advanced Persistent Threat (APT):
Advanced persistent threat (APT) refers to threats that demonstrate an unusually high level of technical and operational sophistication spanning months or even years. APT attacks are often conducted by highly organized groups of attackers.
Insider Threat:
Insider threats are threats that arise from individuals who are trusted by the organization. These
could be disgruntled employees or employees involved in espionage. Insider threats are not always
willing participants. A trusted user who falls victim to a scam could be an unwilling insider threat.
Malware:
A program that is inserted into a system, usually covertly, with the intent of compromising the
confidentiality, integrity or availability of the victim’s data, applications or operating system or
otherwise annoying or disrupting the victim.
Ransomware:
Malware used for the purpose of facilitating a ransom attack. Ransomware attacks often use
cryptography to “lock” the files on an affected computer and require the payment of a ransom fee in
return for the “unlock” code.
A Viral Threat
Tasha: Before her shift starts, Gabriela attempts to upload a school assignment on her iPad,
but the device is not responding.
Gabriela: Ugh, why is nothing working? This stupid thing. I need to turn in this assignment.
Keith: What is it?
Gabriela: It just spins and spins.
Keith: Have you updated recently?
Gabriela: Yes.
Keith: Have you clicked on any new links?
Gabriela: Oh, no. That strange email from the other day! It said I won a gift certificate, but
when I clicked the link, it didn't go anywhere.
Keith: It's okay. Sounds like you have a virus though. But we can ask Susan for help. Have
you backed it up to the cloud?
Gabriela: I have.
Keith: Great, everything will be all right then.
Types of Threats
What is a type of malware that encrypts files and demands payment for the decryption code?
(D4, L4.2.1)
A. APT
Incorrect. APT (Advanced Persistent Threat) is a long-term reconnaissance that is usually associated with
organized crime or nation-state adversaries.
B. Ransomware
Correct. Ransomware is a type of malware that encrypts files and demands payment for the decryption code.
C. Phishing
Incorrect. While a ransomware attack may start with a phishing attack, not all phishing attacks encrypt files.
D. Denial of Service
Incorrect. A denial-of-service (DoS) attack is a network resource consumption attack that has the primary goal
of preventing legitimate activity on a victimized system.
Identify the Malware Threats
Which threats are directly associated with malware? Select all that apply.
APT This option is incorrect. Advanced persistent threat (APT) is not associated with malware. APT
refers to threats that demonstrate an unusually high level of technical and operational sophistication spanning
months or even years. APT attacks are associated with social engineering and are often conducted by highly
organized groups of attackers.
Ransomware This option is correct. Ransomware is malware used for the purpose of facilitating a
ransom attack.
Trojan This option is correct. The Trojan is a software program that appears benevolent but carries a
malicious, behind-the-scenes payload that has the potential to wreak havoc on a system or network. As such,
the Trojan is associated with malware.
DDoS This option is incorrect. A denial-of-service (DoS) attack is a network resource consumption
attack, not associated with malware, that has the primary goal of preventing legitimate activity on a victimized
system. Attacks involving numerous unsuspecting secondary victim systems are known as distributed denial-
of-service (DDoS) attacks.
Phishing This option is incorrect. Phishing is not associated with malware. It is associated with social
engineering as an attack that attempts to misdirect legitimate users to malicious websites through the abuse
of URLs or hyperlinks in emails, for example.
Virus This option is correct. A virus is associated with malware because it is a self-replicating piece of
code that spreads without the consent of a user.
While there is no single step you can take to protect against all attacks, there are some basic
steps you can take that help to protect against many types of attacks.
Here are some examples of steps that can be taken to protect networks.
If a system doesn’t need a service or protocol, it should not be running. Attackers cannot
exploit a vulnerability in a service or protocol that isn’t running on a system.
Firewalls can prevent many different types of attacks. Network-based firewalls protect
entire networks, and host-based firewalls protect individual systems.
Identify Threats and Tools Used to Prevent Them (Continued)
Narrator: This table lists tools used to identify threats that can help to protect against many types of attacks, like virus and malware, Denial of Service attacks, spoofing, on-path and side-channel attacks. From monitoring activity on a single computer, like with HIDS, to gathering log data, like with SIEM, to filtering network traffic like with firewalls, these tools help to protect entire networks and individual systems.
These tools, which we will cover more in depth, all help to identify potential threats, while anti-malware, firewall and intrusion protection system tools also have the added ability to prevent threats.
IDSs can recognize attacks that come from external connections, such as an attack from the
internet, and attacks that spread internally, such as a malicious worm. Once they detect a
suspicious event, they respond by sending alerts or raising alarms. A primary goal of an IDS
is to provide a means for a timely and accurate response to intrusions.
Intrusion detection and prevention refer to capabilities that are part of isolating and protecting
a more secure or more trusted domain or zone from one that is less trusted or less secure.
These are natural functions to expect of a firewall, for example.
IDS types are commonly classified as host-based and network-based. A host-based IDS
(HIDS) monitors a single computer or host. A network-based IDS (NIDS) monitors a
network by observing network traffic patterns.
Intrusion Detection System (IDS)
Host-based Intrusion Detection System (HIDS)
A HIDS monitors activity on a single computer, including process calls and information
recorded in system, application, security and host-based firewall logs. It can often examine
events in more detail than a NIDS can, and it can pinpoint specific files compromised in an
attack. It can also track processes employed by the attacker. A benefit of HIDSs over NIDSs
is that HIDSs can detect anomalies on the host system that NIDSs cannot detect. For
example, a HIDS can detect infections where an intruder has infiltrated a system and is
controlling it remotely. HIDSs are more costly to manage than NIDSs because they require
administrative attention on each system, whereas NIDSs usually support centralized
administration. A HIDS cannot detect network attacks on other systems.
A NIDS monitors and evaluates network activity to detect attacks or event anomalies. It
cannot monitor the content of encrypted traffic but can monitor other packet details. A single
NIDS can monitor a large network by using remote sensors to collect data at key network
locations that send data to a central management console. These sensors can monitor traffic at
routers, firewalls, network switches that support port mirroring, and other types of network
taps. A NIDS has very little negative effect on the overall network performance, and when it
is deployed on a single-purpose system, it doesn’t adversely affect performance on any other
computer. A NIDS is usually able to detect the initiation of an attack or ongoing attacks, but
it cannot always provide information about the success of an attack. It will not know whether
an attack affected specific systems, user accounts, files or applications.
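To make the detection idea concrete, here is a minimal, hypothetical sketch (in Python) of one kind of heuristic a NIDS sensor might apply: raising an alert when a single source address probes many distinct ports within a short time window. The record format, addresses and thresholds are invented for illustration; real intrusion detection systems use far richer signatures and anomaly models.

```python
"""Illustrative sketch of a network-based IDS (NIDS) heuristic.

This is not a real IDS; it only demonstrates the idea of watching traffic
records and alerting when a pattern looks suspicious. The record format,
addresses and thresholds below are assumptions made for the example.
"""
from collections import defaultdict

# Hypothetical traffic records a sensor might report: (timestamp, src_ip, dst_ip, dst_port)
traffic = [
    (0.0, "203.0.113.5", "192.0.2.10", 22),
    (0.2, "203.0.113.5", "192.0.2.10", 23),
    (0.4, "203.0.113.5", "192.0.2.10", 80),
    (0.5, "203.0.113.5", "192.0.2.10", 443),
    (0.6, "203.0.113.5", "192.0.2.10", 3389),
    (1.0, "198.51.100.7", "192.0.2.10", 443),
]

PORT_SCAN_THRESHOLD = 5   # distinct destination ports from one source
WINDOW_SECONDS = 2.0      # within this time window


def detect_port_scans(records):
    """Alert when one source touches many distinct ports in a short window."""
    seen = defaultdict(list)  # src_ip -> recent (timestamp, dst_port) pairs
    alerts = []
    for ts, src, _dst, port in records:
        seen[src].append((ts, port))
        seen[src] = [(t, p) for t, p in seen[src] if ts - t <= WINDOW_SECONDS]
        if len({p for _, p in seen[src]}) >= PORT_SCAN_THRESHOLD:
            alerts.append(f"ALERT: possible port scan from {src}")
    return alerts


if __name__ == "__main__":
    for alert in detect_port_scans(traffic):
        print(alert)
```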
Security management involves the use of tools that collect information about the IT
environment from many disparate sources to better examine the overall security of the
organization and streamline security efforts. These tools are generally known as security
information and event management (or S-I-E-M, pronounced “SIM”) solutions. The general
idea of a SIEM solution is to gather log data from various sources across the enterprise to
better understand potential security concerns and apportion resources accordingly.
SIEM systems can be used along with other components (defense-in-depth) as part of an
overall information security program.
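As a rough illustration of the SIEM idea, the sketch below (in Python) takes events that have already been parsed from different sources and correlates repeated failed logins for the same account across systems. The log entries, field names and threshold are all made up for the example; a real SIEM ingests, normalizes and correlates far more data.

```python
"""Illustrative sketch of SIEM-style log correlation.

The events, field names and threshold are hypothetical; the point is only
to show correlation of related events gathered from different sources.
"""
from collections import Counter

# Hypothetical, already-parsed log events from different sources
events = [
    {"source": "vpn-gateway", "event": "login_failed", "user": "alice"},
    {"source": "web-portal",  "event": "login_failed", "user": "alice"},
    {"source": "file-server", "event": "login_failed", "user": "alice"},
    {"source": "web-portal",  "event": "login_ok",     "user": "bob"},
]

FAILED_LOGIN_THRESHOLD = 3  # failures across all sources before alerting


def correlate_failed_logins(log_events):
    """Count failed logins per account across every log source."""
    failures = Counter(e["user"] for e in log_events if e["event"] == "login_failed")
    return [user for user, count in failures.items() if count >= FAILED_LOGIN_THRESHOLD]


if __name__ == "__main__":
    for user in correlate_failed_logins(events):
        print(f"ALERT: {user} has repeated failed logins across multiple systems")
```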
Preventing Threats
While there is no single step you can take to protect against all threats, there are some basic
steps you can take that help reduce the risk of many types of threats.
Keep systems and applications up to date. Vendors regularly release patches to correct bugs
and security flaws, but these only help when they are applied. Patch management ensures
that systems and applications are kept up to date with relevant patches.
Remove or disable unneeded services and protocols. If a system doesn’t need a service or
protocol, it should not be running. Attackers cannot exploit a vulnerability in a service or
protocol that isn’t running on a system. As an extreme contrast, imagine a web server is
running every available service and protocol. It is vulnerable to potential attacks on any of
these services and protocols.
Use intrusion detection and prevention systems. As discussed, intrusion detection and
prevention systems observe activity, attempt to detect threats and provide alerts. They can
often block or stop attacks.
Use up-to-date anti-malware software. We have already covered the various types of
malicious code such as viruses and worms. A primary countermeasure is anti-malware
software.
Use firewalls. Firewalls can prevent many different types of threats. Network-based firewalls
protect entire networks, and host-based firewalls protect individual systems. This chapter
included a section describing how firewalls can prevent attacks.
Antivirus
The use of antivirus products is strongly encouraged as a security best practice and is a
requirement for compliance with the Payment Card Industry Data Security Standard (PCI
DSS). There are several antivirus products available, and many can be deployed as part of an
enterprise solution that integrates with several other security products.
Antivirus systems try to identify malware based on the signature of known malware or by
detecting abnormal activity on a system. This identification is done with various types
of scanners, pattern recognition and advanced machine learning algorithms.
Anti-malware now goes beyond just virus protection, as modern solutions try to provide a
more holistic approach by detecting rootkits, ransomware and spyware. Many endpoint
solutions also include software firewalls and IDS or IPS capabilities.
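To show the "known signature" idea in its simplest form, here is a hypothetical sketch (in Python) that flags files whose SHA-256 digest appears in a list of known-bad hashes. The hash value is a placeholder, and real antivirus engines rely on much more than file hashes (byte patterns, heuristics, behavior analysis and machine learning).

```python
"""Illustrative sketch of signature-based malware detection via file hashes.

The signature database below is a placeholder, not real malware data.
"""
import hashlib
from pathlib import Path

# Hypothetical signature database: SHA-256 digests of known-bad files (placeholder value)
KNOWN_BAD_HASHES = {"0" * 64}


def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large files do not need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def scan_directory(directory: Path):
    """Return paths whose hash matches a known-bad signature."""
    return [p for p in directory.rglob("*") if p.is_file() and sha256_of(p) in KNOWN_BAD_HASHES]


if __name__ == "__main__":
    for suspect in scan_directory(Path(".")):
        print(f"Signature match (quarantine candidate): {suspect}")
```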
Scans
Firewalls
In building construction or vehicle design, a firewall is a specially built physical barrier that
prevents the spread of fire from one area of the structure to another or from one compartment
of a vehicle to another. Early computer security engineers borrowed that name for the devices
and services that isolate network segments from each other, as a security measure. As a
result, firewalling refers to the process of designing, using or operating different processes in
ways that isolate high-risk activities from lower-risk ones.
Firewalls enforce policies by filtering network traffic based on a set of rules. While a firewall
should always be placed at internet gateways, other internal network considerations and
conditions determine where a firewall would be employed, such as network zoning or
segregation of different levels of sensitivity. Firewalls have rapidly evolved over time to
provide enhanced security capabilities. Next-generation firewalls go beyond the traditional
model by integrating a variety of threat management capabilities into a single framework,
including proxy services, intrusion prevention services (IPS) and tight integration with the
identity and access management (IAM) environment to ensure only authorized users are
permitted to pass traffic across the infrastructure. While firewalls can manage traffic at
Layers 2 (MAC addresses), 3 (IP ranges) and 7 (application programming interface (API) and
application firewalls), the traditional implementation has been to control traffic at Layer 4.
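As a minimal illustration of the traditional Layer 3/Layer 4 approach, the sketch below (in Python) evaluates an ordered rule set in which the first matching rule wins and anything unmatched is denied by default. The rules, networks and ports are invented for the example and are not a recommended policy.

```python
"""Illustrative sketch of simple firewall rule evaluation (first match wins,
default deny). Rules, addresses and ports are hypothetical examples.
"""
from ipaddress import ip_address, ip_network

# Hypothetical rule set: (action, source network, destination port)
RULES = [
    ("allow", ip_network("10.0.0.0/8"), 443),  # internal clients to HTTPS
    ("allow", ip_network("10.0.0.0/8"), 53),   # internal clients to DNS
    ("deny",  ip_network("0.0.0.0/0"),  23),   # block Telnet from anywhere
]


def evaluate(src_ip: str, dst_port: int) -> str:
    """Walk the ordered rules; traffic matching nothing is implicitly denied."""
    source = ip_address(src_ip)
    for action, network, port in RULES:
        if source in network and dst_port == port:
            return action
    return "deny"  # implicit default-deny


if __name__ == "__main__":
    print(evaluate("10.1.2.3", 443))     # allow (explicit rule)
    print(evaluate("203.0.113.9", 23))   # deny (explicit rule)
    print(evaluate("203.0.113.9", 443))  # deny (default)
```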
Which tools help to identify, prevent or both identify and prevent threats? Select
identify, prevent or both for each tool.
1. IDS
Identify. An Intrusion Detection System helps to identify threats, but does not have the capability
to prevent them.
2. HIDS
Identify. A Host Intrusion Detection System helps to identify threats to a host system, but does not
prevent them.
3. NIDS
Identify. A Network Intrusion Detection System helps to identify threats based on network traffic,
but does not prevent them.
4. SIEM
Identify. A Security Information and Event Management (SIEM) system identifies threats by
correlating and storing logs from multiple systems, but does not take action to prevent the threats
from materializing.
5. Anti-malware/Antivirus
Both. Anti-malware/Antivirus helps to both identify and prevent threats by identifying malicious
software and stopping the processes before they fully execute.
6. Scans
Identify. Scans help to identify threats, often by conducting a vulnerability analysis, and may
suggest actions to mitigate the threats, but do not prevent them.
7. Firewall
Both. Most modern firewalls both identify and prevent threats by automatically adjusting rules to
block malicious traffic from entering a secured network.
8. IPS (NIPS/HIPS)
Both. An Intrusion Prevention System, whether network-based (NIPS) or host-based (HIPS), both
identifies threats and can actively block or stop them before they reach their target.
Module Objective
Manny: In this section, we are going to be exploring the concepts and terminology around
data centers and the cloud. Sounds exciting!
Tasha: It can be, Manny. This is where a lot of the future applications of cybersecurity will
come from. As threats evolve, so does the technology to improve data protection, wherever
that data is stored and however it's transmitted.
On-Premises Data Centers
When it comes to data centers, there are two primary options: organizations can outsource
the data center or own it. If the data center is owned, it will likely be built on premises. A
physical place, such as a building, is needed for the data center, along with power, HVAC,
fire suppression and redundancy.
High-density equipment and equipment within enclosed spaces requires adequate cooling and
airflow. Well-established standards for the operation of computer equipment exist, and
equipment is tested against these standards. For example, the recommended range for
optimized maximum uptime and hardware life is from 64° to 81°F (18° to 27°C), and it is
recommended that a rack have three temperature sensors, positioned at the top, middle and
bottom of the rack, to measure the actual operating temperature of the environment. Proper
management of data center temperatures, including cooling, is essential.
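As a simple illustration of that kind of monitoring, the sketch below (in Python) checks hypothetical rack sensor readings against the recommended 18° to 27°C (64° to 81°F) range mentioned above. The sensor names and readings are made up for the example.

```python
"""Illustrative sketch of checking rack temperature sensors against the
recommended operating range. Sensor names and readings are hypothetical.
"""
RECOMMENDED_MIN_C = 18.0
RECOMMENDED_MAX_C = 27.0

# Hypothetical readings from the three sensors on one rack (top, middle, bottom)
readings_c = {"rack42-top": 29.5, "rack42-middle": 24.0, "rack42-bottom": 21.5}


def out_of_range(readings):
    """Return warnings for sensors outside the recommended range."""
    return [
        f"WARNING: {sensor} at {temp:.1f} C is outside {RECOMMENDED_MIN_C}-{RECOMMENDED_MAX_C} C"
        for sensor, temp in readings.items()
        if not RECOMMENDED_MIN_C <= temp <= RECOMMENDED_MAX_C
    ]


if __name__ == "__main__":
    for warning in out_of_range(readings_c):
        print(warning)
```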
Cooling is not the only issue with airflow: Contaminants like dust and noxious fumes require
appropriate controls to minimize their impact on equipment. Monitoring for water or gas
leaks, sewer overflow or HVAC failure should be integrated into the building control
environment, with appropriate alarms to signal to organizational staff. Contingency planning
to respond to the warnings should prioritize the systems in the building, so the impact of a
major system failure on people, operations or other infrastructure can be minimized.
Data Center/Closets
The facility wiring infrastructure is integral to overall information system security and
reliability. Protecting access to the physical layer of the network is important in minimizing
intentional or unintentional damage. Proper protection of the physical site must address these
sorts of security challenges. Data centers and wiring closets may include the following:
Power
Data centers and information systems in general consume a tremendous amount of electrical
power, which needs to be delivered both constantly and consistently. Wide fluctuations in the
quality of power affect system lifespan, while disruptions in supply completely stop system
operations.
Power at the site is always an integral part of data center operations. Regardless of fuel
source, backup generators must be sized to provide for the critical load (the computing
resources) and the supporting infrastructure. Similarly, battery backups must be properly
sized to carry the critical load until generators start and stabilize. As with data backups,
testing is necessary to ensure the failover to alternate power works properly.
Fire Suppression
For server rooms, appropriate fire detection/suppression must be considered based on the size of
the room, typical human occupation, egress routes and risk of damage to equipment. For example,
water used for fire suppression would cause significant harm to servers and other electronic
components. Gas-based fire suppression systems are friendlier to the electronics, but can be
toxic to humans.
Redundancy
The concept of redundancy is to design systems with duplicate components so that if a failure
were to occur, there would be a backup. This can apply to the data center as well. Risk
assessments pertaining to the data center should identify when multiple separate utility
service entrances are necessary for redundant communication channels and/or mechanisms.
If the organization requires full redundancy, devices should have two power supplies
connected to diverse power sources. Those power sources would be backed up by batteries
and generators. In a high-availability environment, even generators would be redundant and
fed by different fuel types.
Example of Redundancy (Application of)
Narrator: In addition to keeping redundant backups of information, you also have a
redundant source of power: an uninterruptible power supply, or UPS, provides backup power
so that operations continue without interruption. Transfer switches or transformers may also
be involved. And in case the power is interrupted by weather or blackouts, a backup generator
is essential. Often there will be two generators connected by two different transfer switches.
These generators might be powered by diesel or gasoline or another fuel such as propane, or
even by solar panels. A hospital or essential government agency might contract with more
than one power company and be on two different grids in case one goes out. This is what we
mean by redundancy.
For example, Hospital A and Hospital B are competitors in the same city. The hospitals
create an agreement with each other: if something bad happens to Hospital A (a fire, flood,
bomb threat, loss of power, etc.), that hospital can temporarily send personnel and systems to
work inside Hospital B in order to stay in business during the interruption (and Hospital B
can relocate to Hospital A, if Hospital B has a similar problem). The hospitals have decided
that they are not going to compete based on safety and security—they are going to compete
on service, price and customer loyalty. This way, they protect themselves and the healthcare
industry as a whole.
The service level agreement goes down to the granular level. For example, if I'm outsourcing
the IT services, then I will need to have two full-time technicians readily available, at least
from Monday through Friday from eight to five. With cloud computing, I need to have access
to the information in my backup systems within 10 minutes. An SLA specifies the more
intricate aspects of the services.
We must be very cautious when outsourcing with cloud-based services, because we have to
make sure that we understand exactly what we are agreeing to. If the SLA promises 100
percent accessibility to information, is the access directly to you at the moment, or is it access
to their website or through their portal when they open on Monday? That's where you'll rely
on your legal team, who can supervise and review the conditions carefully before you sign
on the dotted line.
Which of the following is typically associated with an on-premises data center? (D4, L4.3.1)
A. Fire suppression
Fire suppression is associated with an on-premises data center, but this answer is incorrect. What else might be
associated with an on-premises data center?
B. HVAC
HVAC is associated with an on-premises data center, but this answer is incorrect. What else might be associated
with an on-premises data center?
C. Power
Power is associated with an on-premises data center, but this answer is incorrect. What else might be associated
with an on-premises data center?
D. All of the above
Correct. Fire suppression, HVAC and power are all associated with an on-premises data center.
Which of the following is not a source of redundant power?
A. HVAC
B. Generator
C. Utility
D. UPS
HVAC is not a source of redundant power, but it is something that needs to be protected by a
redundant power supply, which is what the other three options will provide. What happens if
the HVAC system breaks and equipment gets too hot? If the temperature in the data center
gets too hot, then there is a risk that the server will shut down or fail sooner than expected,
which presents a risk that data will be lost. So that is another system that requires redundancy
in order to reduce the risk of data loss. But it is not itself a source of redundant power.
Cloud
Cloud computing is usually associated with an internet-based set of computing resources, and
typically sold as a service, provided by a cloud service provider (CSP).
There are various definitions of what cloud computing means according to the leading
standards, including NIST. This NIST definition is commonly used around the globe, cited by
professionals and others alike to clarify what the term “cloud” means:
“a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of
configurable computing resources (such as networks, servers, storage, applications, and
services) that can be rapidly provisioned and released with minimal management effort or
service provider interaction.” NIST SP 800-145
Cloud computing is typically described in terms of its essential characteristics, service
models and deployment models, all of which will be covered in this section.
Cloud Redundancy
Narrator: Many organizations have moved from hard-wired server rooms to operations that
are run by cloud-based facilities, because it provides both security and flexibility. Cloud
service providers have different availability zones, so that if one goes down, activities can
shift to another. You don't have to maintain a whole data center with all the redundancy that
entails; the cloud service provider does that for you.
There are several ways to contract with a cloud service provider. You can set up the billing so
that it depends on the data used, just like your mobile phone. And you have resource pooling,
meaning you can share in the resources of other colleagues or similar types of industries to
provide data for artificial intelligence or analytics.
Cloud Characteristics
Cloud-based assets include any resources that an organization accesses using cloud
computing. Cloud computing refers to on-demand access to computing resources available
from almost anywhere, and cloud computing resources are highly available and easily
scalable. Organizations typically lease cloud-based resources from outside the organization.
Cloud computing has many benefits for organizations, which include but are not limited to:
Usage is metered and priced according to units (or instances) consumed. This can also be
billed back to specific departments or functions.
Reduced cost of ownership. There is no need to buy any assets for everyday use, no loss of
asset value over time and a reduction of other related costs of maintenance and support.
Reduced energy and cooling costs, along with “green IT” environment effect with optimum
use of IT resources and systems.
Allows an enterprise to scale up new software or data-based services/solutions through
cloud systems quickly and without having to install massive hardware locally.
Service Models
Some cloud-based services only provide data storage and access. When storing data in the
cloud, organizations must ensure that security controls are in place to prevent unauthorized
access to the data.
There are varying levels of responsibility for assets depending on the service model. This
includes maintaining the assets, ensuring they remain functional, and keeping the systems and
applications up to date with current patches. In some cases, the cloud service provider is
responsible for these steps. In other cases, the consumer is responsible for these steps.
Types of cloud computing service models include Software as a Service (SaaS), Platform as
a Service (PaaS) and Infrastructure as a Service (IaaS).
Deployment Models
There are four cloud deployment models. The cloud deployment model also affects the
breakdown of responsibilities of the cloud-based assets. The four cloud models available are
public, private, hybrid and community.
Public
Public clouds are what we commonly refer to as the cloud for the public user. It is very easy to get
access to a public cloud; there is no real barrier to entry other than applying for and paying for the
cloud service. It is open to the public and is, therefore, a shared resource that many people will be
able to use as part of a resource pool. A public cloud deployment model includes assets available for
any consumers to rent or lease and is hosted by an external cloud service provider (CSP). Service
level agreements can be effective at ensuring the CSP provides the cloud-based services at a level
acceptable to the organization.
Private
Private clouds begin with the same technical concept as public clouds, except that instead of being
shared with the public, they are generally developed and deployed for a private organization that
builds its own cloud. Organizations can create and host private clouds using their own resources.
Therefore, this deployment model includes cloud-based assets for a single organization. As such, the
organization is responsible for all maintenance. However, an organization can also rent resources
from a third party and split maintenance requirements based on the service model (SaaS, PaaS or
IaaS). Private clouds provide organizations and their departments private access to the computing,
storage, networking and software assets that are available in the private cloud.
Hybrid
A hybrid cloud deployment model is created by combining two forms of cloud computing
deployment models, typically a public and private cloud. Hybrid cloud computing is gaining
popularity with organizations by providing them with the ability to retain control of their IT
environments, conveniently allowing them to use public cloud service to fulfill non-mission-critical
workloads, and taking advantage of flexibility, scalability and cost savings. Important drivers or
benefits of hybrid cloud deployments include:
Retaining ownership and oversight of critical tasks and processes related to technology
Reusing previous investments in technology within the organization
Control over the most critical business components and systems
A cost-effective means of fulfilling noncritical business functions (utilizing public cloud components)
Community
Community clouds can be either public or private. What makes them unique is that they are
generally developed for a particular community. An example could be a public community cloud
focused primarily on organic food, or maybe a community cloud focused specifically on financial
services. The idea behind the community cloud is that people of like minds or similar interests can
get together, share IT capabilities and services, and use them in a way that is beneficial for the
particular interests that they share.
Think of a rule book and legal contract—that combination is what you have in a service-level
agreement (SLA). Let us not underestimate or downplay the importance of this document/
agreement. In it, the minimum level of service, availability, security, controls, processes,
communications, support and many other crucial business elements are stated and agreed to
by both parties.
The purpose of an SLA is to document specific parameters, minimum service levels and
remedies for any failure to meet the specified requirements. It should also affirm data
ownership and specify data return and destruction details. Other important SLA points to
consider include the following:
A. Public
B. Private
C. Hybrid
Incorrect. A hybrid cloud uses both public and private clouds together.
Which cloud service model is most suitable for customers who want to build and operate their own
software applications?
A. SaaS
Incorrect. SaaS provides access to software applications but not the equipment necessary for customers to build
and operate their own software.
B. IaaS
Incorrect. IaaS provides use of hardware and related equipment that is retained by the provider but does not
allow customers to build and operate their own software in the most suitable way, since it would also require
them to manage the operating systems as well.
C. PaaS
Correct. PaaS typically provides a set of software building blocks and development tools, such as programming
languages and supporting a run-time environment, that facilitate the construction of high-quality, scalable
applications.
D. SLA
Network Design
The objective of network design is to satisfy data communication requirements and result in
efficient overall performance.
Several elements are considered when planning for security in a network.
Defense in Depth
Defense in depth uses a layered approach when designing the security posture of an
organization. Think about a castle that holds the crown jewels. The jewels will be placed in a
vaulted chamber in a central location guarded by security guards. The castle is built around
the vault with additional layers of security—soldiers, walls, a moat. The same approach is
true when designing the logical security of a facility or system. Using layers of security will
deter many attackers and encourage them to focus on other, easier targets.
Defense in depth provides more of a starting point for considering all types of controls—
administrative, technological, and physical—that empower insiders and operators to work
together to protect their organization and its systems.
Here are some examples that further explain the concept of defense in depth:
Data: Controls that protect the actual data with technologies such as encryption, data leak
prevention, identity and access management and data controls.
Application: Controls that protect the application itself with technologies such as data leak
prevention, application firewalls and database monitors.
Host: Every control that is placed at the endpoint level, such as antivirus, endpoint firewall,
configuration and patch management.
Internal network: Controls that are in place to protect uncontrolled data flow and user
access across the organizational network. Relevant technologies include intrusion detection
systems, intrusion prevention systems, internal firewalls and network access controls.
Perimeter: Controls that protect against unauthorized access to the network. This level
includes the use of technologies such as gateway firewalls, honeypots, malware analysis and
secure demilitarized zones (DMZs).
Physical: Controls that provide a physical barrier, such as locks, walls or access control.
Policies, procedures and awareness: Administrative controls that reduce insider threats
(intentional and unintentional) and identify risks as soon as they appear.
Zero Trust
Zero trust networks are often microsegmented networks, with firewalls at nearly every
connecting point. Zero trust encapsulates information assets, the services that apply to them
and their security properties. This concept recognizes that once inside a trust-but-verify
environment, a user has perhaps unlimited capabilities to roam around, identify assets and
systems and potentially find exploitable vulnerabilities. Placing a greater number of firewalls
or other security boundary control devices throughout the network increases the number of
opportunities to detect a troublemaker before harm is done. Many enterprise architectures are
pushing this to the extreme of microsegmenting their internal networks, which enforces
frequent re-authentication of a user ID.
Consider a rock music concert. By traditional perimeter controls, such as firewalls, you
would show your ticket at the gate and have free access to the venue, including backstage
where the real rock stars are. In a zero-trust environment, additional checkpoints are added.
Your identity (ticket) is validated to access the floor level seats, and again to access the
backstage area. Your credentials must be valid at all 3 levels to meet the stars of the show.
Zero trust is an evolving design approach which recognizes that even the most robust access
control systems have their weaknesses. It adds defenses at the user, asset and data level,
rather than relying on perimeter defense. In the extreme, it insists that every process or action
a user attempts to take must be authenticated and authorized; the window of trust becomes
vanishingly small.
While microsegmentation adds internal perimeters, zero trust places the focus on the assets,
or data, rather than the perimeter. Zero trust builds more effective gates to protect the assets
directly rather than building additional or higher walls.
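A tiny, hypothetical sketch (in Python) of this per-request mindset appears below: every request is re-authenticated and checked against an asset-level policy, and nothing is trusted simply because it originates inside the network. The policy table, roles and assets are invented for illustration.

```python
"""Illustrative sketch of zero-trust style per-request authorization.

The policy table, roles, assets and actions are hypothetical examples;
the point is that every request is checked, regardless of network location.
"""
# Hypothetical policy: which roles may perform which actions on which assets
POLICY = {
    ("payroll-db", "read"): {"payroll-clerk", "payroll-admin"},
    ("payroll-db", "write"): {"payroll-admin"},
    ("wiki", "read"): {"payroll-clerk", "payroll-admin", "engineer"},
}


def authorize(user_role, asset, action, authenticated):
    """Every single request is re-checked; being 'inside' the LAN grants nothing."""
    if not authenticated:
        return False
    return user_role in POLICY.get((asset, action), set())


if __name__ == "__main__":
    print(authorize("engineer", "payroll-db", "read", authenticated=True))         # False
    print(authorize("payroll-clerk", "payroll-db", "read", authenticated=True))    # True
    print(authorize("payroll-admin", "payroll-db", "write", authenticated=False))  # False
```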
At one time, network access was limited to internal devices. Gradually, that was extended to
remote connections, although initially those were the exceptions rather than the norm. This
started to change with the concepts of bring your own device (BYOD) and Internet of Things
(IoT).
Considering just IoT for a moment, it is important to understand the range of devices that
might be found within an organization. They range from heating, ventilation and air conditioning
(HVAC) systems that monitor the ambient temperature and adjust the heating or cooling levels
automatically, and air quality monitoring systems, through security systems, sensors and cameras,
right down to vending and coffee machines. Look around your own environment and you will
quickly see the scale of their use.
Having identified the need for a NAC solution, we need to identify what capabilities a
solution may provide. As we know, everything begins with a policy. The organization’s
access control policies and associated security policies should be enforced via the NAC
device(s). Remember, of course, that an access control device only enforces a policy and
doesn’t create one.
The NAC device will provide the network visibility needed for access security and may later
be used for incident response. Aside from identifying connections, it should also be able to
provide isolation for noncompliant devices within a quarantined network and provide a
mechanism to “fix” the noncompliant elements, such as turning on endpoint protection. In
short, the goal is to ensure that all devices wishing to join the network do so only when they
comply with the requirements laid out in the organization policies. This visibility will
encompass internal users as well as any temporary users such as guests or contractors, etc.,
and any devices they may bring with them into the organization.
Medical devices
IoT devices
BYOD/mobile devices (laptops, tablets, smartphones)
Guest users and contractors
As we have established, it is critically important that all mobile devices, regardless of their
owner, go through an onboarding process, ideally each time a network connection is made,
and that the device is identified and interrogated to ensure the organization’s policies are
being met.
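The sketch below (in Python) illustrates the kind of admission decision a NAC solution might make during onboarding: compliant corporate devices are admitted, noncompliant ones are quarantined for remediation, and guest devices are kept on a guest network. The compliance checks and segment names are assumptions for the example, not a description of any particular NAC product.

```python
"""Illustrative sketch of a NAC-style admission decision.

The required checks and segment names are hypothetical examples.
"""
REQUIRED = {"endpoint_protection": True, "patched": True, "disk_encrypted": True}


def admit(device):
    """Return the network segment a connecting device should be placed on."""
    if device.get("owner") == "guest":
        return "guest-vlan"  # guests and contractors never join the corporate segment
    failures = [name for name, value in REQUIRED.items() if device.get(name) != value]
    if failures:
        print(f"Quarantining {device['name']}: noncompliant ({', '.join(failures)})")
        return "quarantine-vlan"  # isolate until the noncompliant elements are fixed
    return "corporate-vlan"


if __name__ == "__main__":
    print(admit({"name": "laptop-123", "owner": "employee",
                 "endpoint_protection": True, "patched": True, "disk_encrypted": True}))
    print(admit({"name": "byod-tablet", "owner": "employee",
                 "endpoint_protection": False, "patched": True, "disk_encrypted": True}))
    print(admit({"name": "visitor-phone", "owner": "guest"}))
```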
Network-enabled devices are any type of portable or nonportable device that has native
network capabilities. This generally assumes the network in question is a wireless type of
network, typically provided by a mobile telecommunications company. Network-enabled
devices include smartphones, mobile phones, tablets, smart TVs or streaming media players
(such as a Roku Player, Amazon Fire TV, or Google Android TV/Chromecast), network-
attached printers, game systems, and much more.
The Internet of Things (IoT) is the collection of devices that can communicate over the
internet with one another or with a control console in order to affect and monitor the real
world. IoT devices might be labeled as smart devices or smart-home equipment. Many of the
ideas of industrial environmental control found in office buildings are finding their way into
more consumer-available solutions for small offices or personal homes.
Embedded systems and network-enabled devices that communicate with the internet are
considered IoT devices and need special attention to ensure that communication is not used in
a malicious manner. Because an embedded system is often in control of a mechanism in the
physical world, a security breach could cause harm to people and property. Since many of
these devices have multiple access routes, such as ethernet, wireless, Bluetooth, etc., special
care should be taken to isolate them from other devices on the network. You can impose
logical network segmentation with switches using VLANs, or through other traffic-control
means, including MAC addresses, IP addresses, physical ports, protocols, or application
filtering, routing, and access control management. Network segmentation can be used to
isolate IoT environments.
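To illustrate the isolation idea, the sketch below (in Python) models a default-deny, segment-to-segment policy in which IoT devices can reach only their controller and nothing else. The segment names and allowed flows are hypothetical; in practice this policy would be enforced by switches, VLAN access control lists or firewalls rather than application code.

```python
"""Illustrative sketch of segment-to-segment traffic control for isolating IoT
devices. Segment names and allowed flows are hypothetical examples.
"""
# Which (source segment -> destination segment) flows are permitted
ALLOWED_FLOWS = {
    ("iot-vlan", "iot-controller"),   # sensors may reach their controller
    ("corp-vlan", "iot-controller"),  # staff may reach the management console
    ("corp-vlan", "internet"),
}


def permitted(src_segment, dst_segment):
    """Default-deny: only explicitly allowed segment pairs may communicate."""
    return (src_segment, dst_segment) in ALLOWED_FLOWS


if __name__ == "__main__":
    print(permitted("iot-vlan", "iot-controller"))  # True
    print(permitted("iot-vlan", "internet"))        # False: IoT cannot reach the internet directly
    print(permitted("iot-vlan", "corp-vlan"))       # False: IoT isolated from corporate systems
```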
Microsegmentation
The toolsets of current adversaries are polymorphic in nature and allow threats to bypass
static security controls. Modern cyberattacks take advantage of traditional security models to
move easily between systems within a data center. Microsegmentation aids in protecting
against these threats. A fundamental design requirement of microsegmentation is to
understand the protection requirements for traffic within a data center and for traffic flows to
and from the internet.
When organizations avoid infrastructure-centric design paradigms, they are more likely to
become more efficient at service delivery in the data center and more adept at detecting and
preventing advanced persistent threats.
VLANs do not guarantee a network’s security. At first glance, it may seem that traffic cannot
be intercepted because communication within a VLAN is restricted to member devices.
However, there are attacks that allow a malicious user to see traffic from other VLANs (so-
called VLAN hopping). The VLAN technology is only one tool that can improve the overall
security of the network environment.
VLAN Segmentation
Narrator: VLANs are virtual separations within a switch and are used mainly to limit
broadcast traffic. A VLAN can be configured to communicate with other VLANs or not, and
may be used to segregate network segments.
There are a few common uses of VLANs in corporate networks. The first is to separate Voice
Over IP (VOIP) telephones from the corporate network. This is most often done to more
effectively manage the network traffic generated by voice communications by isolating it
from the rest of the network.
Another common use of VLANs in a corporate network is to separate the data center from all
other network traffic. This makes it easier to keep the server-to-server traffic contained to the
data center network while allowing certain traffic from workstations or the web to access the
servers. As briefly discussed earlier, VLANs can also be used to segment networks. For
example, a VLAN can separate the payroll workstations from the rest of the workstations in
the network. Routing rules can also be used to only allow devices within this Payroll VLAN
to access the servers containing payroll information.
Earlier, we also discussed Network Access Control (NAC). These systems use VLANs to
control whether devices connect to the corporate network or to a guest network. Even though
a wireless access controller may attach to a single port on a physical network switch, the
VLAN associated with the device connection on the wireless access controller determines the
VLAN that the device operates on and to which networks it is allowed to connect.
Finally, in large corporate networks, VLANs can be used to limit the amount of broadcast
traffic within a network. This is most common in networks of more than 1,000 devices and
may be separated by department, location/building, or any other criteria as needed.
The most important thing to remember is that while VLANs are logically separated, they may
be allowed to access other VLANs. They can also be configured to deny access to other
VLANs.
Which of the following describes a communication tunnel that provides point-to-point transmission
of both authentication and data traffic over an untrusted network?
A. VPN
Correct. A Virtual Private Network (VPN) describes a communication tunnel that provides point-to-point
transmission of both authentication and data traffic over an untrusted network.
B. Zero Trust
Incorrect. Zero trust is a concept that uses microsegmentation to protect network segments.
C. DMZ
Incorrect. DMZ is a virtual segment usually defined by a firewall that contains less-trusted devices between the
corporate network and the internet.
Module 4: Chapter 4 Summary
Domain 4.1.1, 4.1.2, 4.1.3, 4.2.1, 4.2.2, 4.2.3, 4.3.1, 4.3.2, 4.3.3
Module Objective
In this chapter, we covered computer networking and securing the network. A network is
simply two or more computers linked together to share data, information or resources. There
are many types of networks, such as LAN, WAN, WLAN and VPN, to name a few. Some of
the devices found on a network can be hubs, switches, routers, firewalls, servers, endpoints
(e.g., desktop computer, laptop, tablet, mobile phone, VOIP or any other end user device).
Other network terms you need to know and understand include ports, protocols, ethernet, Wi-
Fi, IP address and MAC address.
The two models discussed in this chapter are OSI and TCP/IP. The OSI model has seven
layers and the TCP/IP four. They both take the 1s and 0s from the physical or network
interface layer, where the cables or Wi-Fi connect, to the Application Layer, where users
interact with the data. The data traverses the network as packets, with headers or footers
being added and removed accordingly as they get passed layer to layer. This helps route the
data and ensures packets are not lost and remain together. IPv4 is slowly being phased out by
IPv6 to improve security, improve quality of service and support more devices.
As mentioned, Wi-Fi has replaced many of our wired networks, and with its ease of use, it
also brings security issues. Securing Wi-Fi is very important.
We then learned about some of the attacks on a network, e.g., DoS/DDoS attacks, fragment
attacks, oversized packet attacks, spoofing attacks, and man-in-the-middle attacks. We also
discussed the ports and protocols that connect the network and services that are used on
networks, from physical ports, e.g., LAN port, that connect the wires, to logical ports, e.g., 80
or 443, that connect the protocols/services.
We then examined some possible threats to a network, including spoofing, DoS/DDoS, virus,
worm, Trojan, on-path (man-in-the-middle) attack, and side-channel attack. The chapter went
on to discuss how to identify threats, e.g., using IDS/NIDS/HIDS or SIEM, and prevent
threats, e.g., using antivirus, scans, firewalls, or IPS/NIPS/HIPS. We discussed on-premises
data centers and their requirements, e.g., power, HVAC, fire suppression, redundancy and
MOU/MOA. We reviewed the cloud and its characteristics, to include service models: SaaS,
IaaS and PaaS; and deployment models: public, private, community and hybrid. The
importance of an MSP and SLA were also discussed.
2. The most essential representation of data (zero or one) at Layer 1 of the Open Systems
Interconnection (OSI) model.
BIT
Broadcast
4. The byte is a unit of digital information that most commonly consists of eight bits.
Byte
5. A model for enabling ubiquitous, convenient, on-demand network access to a shared pool of
configurable computing resources (e.g., networks, servers, storage, applications, and services) that
can be rapidly provisioned and released with minimal management effort or service provider
interaction. NIST 800-145
Cloud computing
6. A system in which the cloud infrastructure is provisioned for exclusive use by a specific community
of consumers from organizations that have shared concerns (e.g., mission, security requirements,
policy and compliance considerations). It may be owned, managed and operated by one or more of
the organizations in the community, a third party or some combination of them, and it may exist on
or off premises. NIST 800-145
Community cloud
7. The opposite process of encapsulation, in which bundles of data are unpacked or revealed.
De-encapsulation
Denial-of-Service (DoS)
9. This acronym can be applied to three interrelated elements: a service, a physical server and a
network protocol.
Domain Name Service (DNS)
10. Enforcement of data hiding and code hiding during all phases of software development and
operational use. Bundling together data and methods is the process of encapsulation; its opposite
process may be called unpacking, revealing, or using other terms. Also used to refer to taking any set
of data and packaging it or hiding it in another data structure, as is common in network protocols
and encryption.
Encapsulation
11. The process and act of converting the message from its plaintext to ciphertext. Sometimes it is
also referred to as enciphering. The two terms are sometimes used interchangeably in literature and
have similar meanings.
Encryption
12. The internet protocol (and program) used to transfer files between hosts.
File Transfer Protocol (FTP)
13. In a fragment attack, an attacker fragments traffic in such a way that a system is unable to put
data packets back together.
Fragment attack
Hardware
15. A combination of public cloud storage and private cloud storage where some critical data resides
in the enterprise's private cloud while other data is stored and accessible from a public cloud storage
provider.
Hybrid cloud
16. The provider of the core computing, storage and network hardware and software that is the
foundation upon which organizations can build and then deploy applications. IaaS is popular in the
data center where software and servers are purchased as a fully outsourced service and usually
billed on usage and how much of the resource is used.
Infrastructure as a Service (IaaS)
17. An IP network protocol standardized by the Internet Engineering Task Force (IETF) through RFC
792 to determine if a particular service or host is available.
Internet Control Message Protocol (ICMP)
18. Standard protocol for transmission of data from source to destinations in packet-switched
communications networks and interconnected systems of such networks. CNSSI 4009-2015
Internet Protocol (IP)
19. An attack where the adversary positions himself in between the user and the system so that he
can intercept and alter data traveling between them. Source: NISTIR 7711
Man-in-the-Middle
20. Part of a zero-trust strategy that breaks LANs into very small, highly localized zones using
firewalls or similar technologies. At the limit, this places a firewall at every connection point.
Microsegmentation
21. Purposely sending a network packet that is larger than expected or larger than can be handled by
the receiving system, causing the receiving system to fail unexpectedly.
Oversized packet attack
22. Representation of data at Layer 3 of the Open Systems Interconnection (OSI) model.
Packet
Payload
24. An information security standard administered by the Payment Card Industry Security Standards
Council that applies to merchants and service providers who process credit or debit card
transactions.
Payment Card Industry Data Security Standard (PCI DSS)
26. The phrase used to describe a cloud computing platform that is implemented within the
corporate firewall, under the control of the IT department. A private cloud is designed to offer the
same features and benefits of cloud systems, but removes a number of objections to the cloud
computing model, including control over enterprise and customer data, worries about security, and
issues connected to regulatory compliance.
Private cloud
27. A set of rules (formats and procedures) to implement and control some type of association (that
is, communication) between systems. NIST SP 800-82 Rev. 2
Protocols
28. The cloud infrastructure is provisioned for open use by the general public. It may be owned,
managed, and operated by a business, academic, or government organization, or some combination
of them. It exists on the premises of the cloud provider. NIST SP 800-145
Public cloud
29. The standard communication protocol for sending and receiving emails between senders and
receivers.
Simple Mail Transfer Protocol (SMTP)
Software
31. The cloud customer uses the cloud provider's applications running within a cloud infrastructure.
The applications are accessible from various client devices through either a thin client interface, such
as a web browser or a program interface. The consumer does not manage or control the underlying
cloud infrastructure including network, servers, operating systems, storage, or even individual
application capabilities, with the possible exception of limited user-specific application configuration
settings. Derived from NIST 800-145
Software as a Service (SaaS)
32. Faking the sending address of a transmission to gain illegal entry into a secure system. CNSSI
4009-2015
Spoofing
33. Internetworking protocol model created by the IETF, which specifies four layers of functionality:
Link layer (physical communications), Internet Layer (network-to-network communication),
Transport Layer (basic channels for connections and connectionless exchange of data between
hosts), and Application Layer, where other protocols and user applications programs make use of
network services.
Transmission Control Protocol/Internet Protocol (TCP/IP)
34. A virtual local area network (VLAN) is a logical group of workstations, servers, and network
devices that appear to be on the same LAN despite their geographical distribution.
VLAN
35. A virtual private network (VPN), built on top of existing networks, that can provide a secure
communications mechanism for transmission between networks.
VPN
36. A wireless local area network (WLAN) is a group of computers and devices that are located in the
same vicinity, forming a network based on radio transmissions rather than wired connections. A Wi-
Fi network is a type of WLAN.
WLAN
37. The graphical user interface (GUI) for the Nmap Security Scanner, an open-source application
that scans networks to determine everything that is connected as well as other information.
Zenmap
38. Removing the design belief that the network has any trusted space. Security is managed at each
possible level, representing the most granular asset. Microsegmentation of workloads is a tool of the
model.
Zero Trust
Summary
Description
This quiz will help you to confirm your understanding and retention of concepts for this chapter.
Please complete it by answering all questions, reviewing correct answers and feedback, and
revisiting any chapter material you feel you need extra time with.
Instructions
Quiz:
Chapter 4: Resource
Below you will find the Chapter 4 resource. It includes the chapter summary, exam domain
mapping, key takeaways and graphics, terms and definitions and formulas that were covered
in the content.