Domain 5
Understand Data Security
Data Handling
Data goes through a lifecycle as users create, use, share, and modify it. Many different models of the data
lifecycle can be found, and all have basic operational steps in common. The data security lifecycle model, as seen
in Figure 5.1, is useful because it easily aligns with roles that people and organizations perform during the
evolution of data from creation to destruction or disposal. It also helps put the different data states (in use, at
rest, and in motion) into context. Let’s take a closer look.
All ideas, data, information, or knowledge go through six major phases during their lifetime. Conceptually, these
are: create, store, use, share, archive, and destroy.
Data handling is extremely important. As soon as you receive assets, or data that you need to protect, you need to
know the best practices for handling it.
First, you need to recognize which assets to protect. This is based on the data’s value, according to the data
owner. The kinds of risks and vulnerabilities you face with respect to information being compromised, destroyed, or
changed by any means must be acknowledged and accounted for. This is the lifecycle of data handling, from
create, to store, to use, to share, to archive, and finally, to destroy. At any point, there are different risks to the data
and different practices for handling it. Some of these procedures are mandated by government standards.
For example, in the United States, the Occupational Safety and Health Administration (OSHA) is the federal
government agency that protects the well-being of workers. Under the rules of the Health Insurance Portability
and Accountability Act (HIPAA), medical records need to be kept for 10 years, but under OSHA, if we have a
medical record of an on-the-job injury, that record needs to be maintained for more than 30 years, even after the
employee’s last day of work in the organization. That’s a regulatory requirement, and if it is not abided by, the result
can be penalties and fines. So, you must be cautious when deciding how to handle data, as there may be multiple
regulations that apply to a single piece.
Also, in the United States, there are specific guidelines related to Payment Card Industry Data Security Standard
(PCI DSS) requirements regarding credit card information and how to maintain it securely. In the European Union,
the GDPR also has specific requirements regarding the handling of financial data. To protect data properly, all
relevant requirements for the type of data being protected in various geographic areas must be known.
Many countries and jurisdictions have regulations that require certain data protections throughout every stage of
the data’s lifecycle. These govern how the data is acquired, processed, stored, and ultimately destroyed. So, when
looking at the data lifecycle, we need to keep a watchful eye and protect information at every stage, even if it’s
ready to be legally destroyed at the end of the lifecycle. In some cases, multiple jurisdictions may impose rules
affecting the data. In these instances, you need to be aware of all regulations that affect the data.
Data has value and must be handled appropriately. In this section, you’ll explore the basics of classifying and
labeling data to ensure it is treated and controlled in a manner consistent with the data’s sensitivity. In addition,
you’ll review what’s required to complete the data lifecycle by documenting retention requirements and ensuring
the destruction of data that is no longer in use.
Classification
Businesses recognize that information has value that others might use to their advantage if the information is not
kept confidential, so businesses classify data. Such classification dictates rules and restrictions about how
information is used, stored, or shared with others. During the data classification process, an initial step is to
determine the level of confidentiality, which then determines the labeling, handling, and use of the data.
Before labels can be attached to data sets that indicate sensitivity or handling requirements, the potential impact
or loss to the organization needs to be assessed. Classification is the process of recognizing the organizational
impacts should information suffer security compromise related to its characteristics of confidentiality, integrity,
and availability. Information is then labeled and handled accordingly.
Classifications are derived from laws, regulations, contract-specified standards, or other business expectations.
One classification might indicate the risk as “Minor. May disrupt some processes,” while a more extreme risk
might be labeled, “Grave. Could lead to loss of life or threaten the organization’s ongoing existence.” Classification
descriptions should reflect the ways in which the organization has chosen (or been mandated) to characterize and
manage risks.
Labeling
Security labels are part of implementing controls to protect various categories of information. It is reasonable to
want a simple way of assigning a level of sensitivity to a data asset, such that the higher the level, the greater the
presumed harm to the organization, and thus the greater security protection the data asset requires. While this
approach is useful, it should not be taken to mean that clear and precise boundaries exist between classification
labels such as “low sensitivity” and “moderate sensitivity,” for example.
Unless otherwise mandated, organizations are free to create classification systems that best meet their own
needs. In professional practice, it is typically best if the organization has enough classifications to distinguish
between sets of assets with differing sensitivity or value, but not so many classifications that the distinction
between each is confusing to individuals. Typically, two or three classifications are manageable, and more than
four tends to be difficult. For example, an organization might use sensitivity labels such as the following:
Highly restricted. Compromise of data with this sensitivity label could possibly put an organization’s future
existence at risk. Compromise could lead to substantial loss of life, injury or property damage, and the
litigation and claims that would follow.
Moderately restricted. Compromise of data with this sensitivity label could lead to loss of temporary
competitive advantage, loss of revenue, or disruption of planned investments or activities.
Low sensitivity (sometimes called “internal use only”). Compromise of data with this sensitivity label could
cause minor disruptions, delays, or impacts.
Unrestricted public data. As this data is already published, no harm can come from further dissemination or
disclosure.
Organizations experience initial costs to establish classification systems and label data. It is important that data
classification and labeling encompasses all data. Do not be tempted to reduce costs by implementing labeling for
important data only. For instance, if you decide that only important data should be labeled, the result will be
reduced confidence in the classification of unlabeled data. Stated differently, you will not be able to determine
whether the data was mistakenly left unlabeled or is simply unimportant.
After initial classification and labeling, organizations may realize new efficiencies in security control
implementation. All information classified at a given level can adopt a blanket policy and common security
controls. This leads to more efficient security design and bolsters the consistency of implementations.
Retention
Information and data should be kept only as long as they are beneficial. For various data types, certain industry
standards, laws, and regulations define retention periods. If there are no external requirements for data, the
organization is responsible for defining and implementing its own data retention policy.
Data retention policies are applicable for hard copy and electronic data, and no data should be kept beyond its
required or useful life. Security professionals should ensure that data destruction is performed when an asset
reaches its retention limit. For the security professional to succeed in this assignment, an accurate inventory must
be maintained, including the asset location, retention period requirement, and destruction requirements.
Organizations should conduct a periodic review of retained records to reduce the volume of information stored
and to ensure that only necessary information is preserved.
Records retention policies indicate how long an organization is required to maintain information and assets.
Policies should be written in a way that:
Personnel understand the various retention requirements for data of different types throughout the
organization.
The organization appropriately documents the retention requirements for each type of information.
The organization’s systems, processes, and individuals retain information in accordance with the required
schedule, but no longer.
A common mistake with records retention is applying the longest retention period to all types of information in an
organization. This not only wastes storage but also increases risk of data exposure and adds unnecessary “noise”
when searching or processing information contained in relevant records. It also may be in violation of externally
mandated requirements such as legislation, regulations, or contracts (violation of which may result in fines or
other judgments). Records and information no longer mandated to be retained should be destroyed in accordance
with the policies of the enterprise and any appropriate legal requirements that may need to be considered.
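To make the idea of retaining records according to the required schedule, but no longer, concrete, here is a minimal sketch in Python that flags records past their retention period. The record types and retention periods shown are illustrative assumptions, not mandated values.

```python
# Minimal sketch: flag records whose retention period has elapsed.
# Record types and retention periods below are illustrative assumptions only.
from datetime import date, timedelta

RETENTION_YEARS = {
    "invoice": 7,         # example financial record
    "access_log": 1,      # example operational log
    "injury_record": 30,  # example long-term regulatory requirement
}

def is_past_retention(record_type: str, created: date, today: date | None = None) -> bool:
    """Return True when the record has exceeded its retention period."""
    today = today or date.today()
    limit = timedelta(days=365 * RETENTION_YEARS[record_type])
    return today - created > limit

# Records flagged True become candidates for destruction per policy.
print(is_past_retention("access_log", date(2020, 1, 15)))
```

In practice, the asset inventory described above would supply each record’s creation date, retention requirement, and destruction method.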
Destruction
Data that might be left on media after deletion is known as remanence and may be a significant security concern.
Steps must be taken to reduce the risk to an acceptable level that data remanence could compromise sensitive
information. This can be done by one of several means:
Clearing the device or system, which usually involves writing multiple patterns of random values throughout
all storage media (such as main memory, registers, and fixed disks). This is sometimes called “overwriting”
or “zeroizing” the system, although writing zeros has the risk that a missed block or storage extent may still
contain recoverable, sensitive information after the process is completed.
Purging a device or system, which eliminates (or greatly reduces) the chance that residual physical effects
from the writing of the original data values may still be recovered, even after the system is cleared.
Physical destruction of a device or system is the ultimate remedy to data remanence. Magnetic or optical
disks and some flash drive technologies may require being mechanically shredded, chopped or broken up,
etched in acid, or burned; their remains may be buried in protected landfills, in some cases.
In many routine operational environments, clearing a system may be considered a sufficient defense against data remanence.
But when system elements are to be removed and replaced, either as part of maintenance upgrades or for
disposal, purging or destruction may be required to protect sensitive information from compromise by an
attacker.
Logging and Monitoring Security Events
Proactive security practitioners may spend time reviewing data that has been aggregated into actionable
information via monitoring systems or log files. You cannot defend against an attack you are unaware of.
This section discusses the high-level concepts around security event management. The following diagrams show an
example of a community-based threat intelligence tool that provides proactive information regarding threat
activities.
Logging is the primary form of instrumentation that attempts to capture signals generated by events. Events are
any actions that take place within a system’s environment and cause measurable or observable change in one or
more elements or resources within the system. An example of raw log data is shown in Figure 5.4.
Logging imposes a computational cost but is invaluable when determining accountability. Proper design of
logging environments and regular log reviews remain best practices regardless of the type of computer system.
Major controls frameworks emphasize the importance of organizational logging practices. Information that may
be relevant to log and review includes, but is not limited to, the items below (a minimal logging sketch follows the list):
User IDs
System activities
Dates and times of key events (e.g., logon and logoff)
Device and location identity
Successful and rejected system and resource access attempts
System configuration changes and system protection activation and deactivation events
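As a concrete illustration of the fields listed above, the following minimal Python sketch writes each security-relevant event as one structured (JSON) log line. The field names are illustrative assumptions, not a prescribed schema.

```python
# Minimal sketch of structured security-event logging in Python.
# Field names (user_id, action, outcome, source_ip) are illustrative assumptions.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="security_events.log", level=logging.INFO,
                    format="%(message)s")
logger = logging.getLogger("security")

def log_event(user_id: str, action: str, outcome: str, source_ip: str) -> None:
    """Record one security-relevant event as a single JSON line."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "action": action,      # e.g., logon, logoff, config_change
        "outcome": outcome,    # e.g., success, rejected
        "source_ip": source_ip,
    }
    logger.info(json.dumps(event))

log_event("jdoe", "logon", "rejected", "203.0.113.7")
```

Writing one event per line in a consistent format makes later review and correlation across systems much easier.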
Logging and monitoring the health of the information environment is essential to identifying inefficient or
improperly performing systems, detecting compromises, and providing a record of how systems are used. Robust
logging practices provide tools to effectively correlate information from diverse systems to fully understand the
relationship between activities.
Log reviews are an essential function not only for security assessment and testing but also for identifying security
incidents, policy violations, fraudulent activities, and operational problems near the time of occurrence. Log
reviews support audits (forensic analysis related to internal and external investigations) and provide support for
organizational security baselines. Review of historic audit logs can determine whether a vulnerability identified in a
system has been previously exploited.
Controls are implemented to protect against unauthorized changes to log information. Operational problems with
logging are often related to alterations to recorded messages, edited or deleted log files, and storage capacity of
log file media being exceeded.
Organizations must adhere to retention policy for logs as prescribed by law, regulations, and corporate
governance. Since attackers want to hide the evidence of an attack, the organization’s policies and procedures
should address the preservation of original logs. Additionally, logs contain valuable and sensitive information
about the organization. Appropriate measures must be taken to protect log data from malicious use.
Different tools are used to log events depending on whether the attack risk is from traffic coming into or leaving
the infrastructure. Ingress monitoring refers to surveillance and assessment of all inbound communications
traffic and access attempts. Devices and tools that offer log and alert opportunities for ingress monitoring
include:
Firewalls
Gateways
Remote authentication servers
IDS/IPS tools
SIEM solutions
Anti-malware solutions
Egress monitoring is used to regulate data leaving the organization’s IT environment. This is also known as
data loss prevention (DLP) or data leak protection. A DLP solution should be deployed so that it can inspect all
forms of data leaving the organization.
Encryption Overview
Almost every action taken in the modern digital world involves cryptography. Encryption protects personal and
business transactions; digitally signed software updates verify a creator or supplier’s claim to authenticity.
Digitally signed contracts, binding on all parties, are routinely exchanged via email without fear of being repudiated
later by the sender.
Cryptography is used to protect information by keeping its meaning or content secret and making it unintelligible
to someone who does not have a way to decrypt (unlock) that protected information. The objective of every
encryption system is to transform an original set of data, called plaintext, into an otherwise unintelligible
encrypted form, called ciphertext.
Properly used, alone or in combination, cryptographic solutions provide a range of services that can help achieve
required systems security postures in many ways.
An encryption system, as shown in Figure 5.5, is a set of hardware, software, algorithms, control parameters, and
operational methods that provide a set of encryption services.
Plaintext is the data or message in its normal, unencrypted form and format. Its meaning or value to an end user
(a person or process) is immediately available for use. It is important to remember that plaintext can be anything,
and much of it is not readable to humans.
Symmetric Encryption
The central characteristic of a symmetric algorithm is that it uses the same key during both the encryption and
decryption processes. It could be said that the decryption process is a mirror image of the encryption process.
Figure 5.6 displays how symmetric algorithms work.
Since the same key is used for both encryption and decryption, the two parties communicating need to share
knowledge of the same key. This type of algorithm adds a layer of data protection since a person who does not
have the correct key would not be able to read the encrypted message. However, because the key is shared,
several security challenges may arise:
If two parties suspect a specific communication path between them is compromised, they obviously can't
share key material along that path. Someone who has compromised communications between parties
likely would also intercept a key.
Distribution of a key is difficult, because the key cannot be sent in the same channel as the encrypted
message, or a man-in-the-middle (MITM) could gain access to the key. Sending the key through a different
channel (band) than the encrypted message is called out-of-band key distribution. Examples of out-of-
band key distribution would include sending the key via courier, fax or phone.
Any party with knowledge of the key can access (and therefore change) the message.
Each individual or group of people wishing to communicate would need to use a different key for each
individual or group with whom they want to connect. This raises the challenge of scalability—the number of
keys needed grows as the number of different users or groups increases. Under this type of symmetric
arrangement, an organization of 1,000 employees would need to manage 499,500 keys if every employee
wanted to communicate confidentially with every other employee.
Other names for symmetric algorithms, which you may encounter, include:
Same key
Single key
Shared key
Secret key
Session key
An example of symmetric encryption is a substitution cipher, which involves the simple process of substituting
letters for other letters, or more appropriately, substituting bits for other bits, based upon a cryptovariable. These
ciphers involve replacing each letter of the plaintext with another that may be further down the alphabet.
Figure 5.7: Symmetric Encryption via Decoding Ring
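The decoder-ring idea in Figure 5.7 can be sketched in a few lines of Python. This is a toy Caesar-style substitution cipher for illustration only; the shared shift value plays the role of the secret key, and such a cipher is trivially breakable.

```python
# Toy Caesar-style substitution cipher: the same shift value (the shared "key")
# is used to encrypt and, reversed, to decrypt.
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def caesar(text: str, key: int) -> str:
    """Shift each letter by `key` positions; pass other characters through."""
    result = []
    for ch in text.lower():
        if ch in ALPHABET:
            result.append(ALPHABET[(ALPHABET.index(ch) + key) % 26])
        else:
            result.append(ch)
    return "".join(result)

ciphertext = caesar("attack at dawn", 3)   # 'dwwdfn dw gdzq'
plaintext = caesar(ciphertext, -3)         # decrypt with the same key, applied in reverse
print(ciphertext, "->", plaintext)
```

Modern symmetric algorithms such as AES operate on bits rather than letters, but the defining property is the same: one shared secret key for both directions.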
Asymmetric Encryption
Asymmetric encryption uses one key to encrypt and a different key to decrypt the input plaintext. This is in stark
contrast to symmetric encryption, which uses the same key to encrypt and decrypt. Within most security teams,
asymmetric encryption is left to the cryptanalysts and cryptographers.
A user wishing to communicate using an asymmetric algorithm would first generate a key pair. To ensure the
strength of the key generation process, this is usually done by a cryptographic application or a public key
infrastructure (PKI) implementation without user involvement. One half of the key pair is kept secret; only the key
holder knows that key—hence, why it is called a private key. The other half of the key pair can be given freely to
anyone who wants a copy. In many companies, it may be available through a corporate website or access to a key
server. Therefore, this second half of a key pair is referred to as a public key.
Note that anyone can encrypt something using the recipient’s public key, but only the recipient, with their private
key, can decrypt it.
Asymmetric key cryptography solves the problem of key distribution by allowing a message to be sent across an
untrusted medium in a secure manner without the overhead of prior key exchange or key material distribution. It
also allows for several other features not readily available in symmetric cryptography, such as the nonrepudiation
of origin and delivery, access control, and data integrity. Asymmetric key cryptography also solves the problem of
scalability. It scales well as numbers increase, as each party only requires a key pair—the private and public keys.
An organization with 100,000 employees would only need 200,000 keys (one private and one public for each
employee), compared with the roughly 5 billion pairwise keys that symmetric encryption would require for the same population.
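A quick sketch makes the scalability difference concrete: pairwise symmetric keys grow as n(n−1)/2, while asymmetric key pairs grow as 2n.

```python
# Key-management math from the text: pairwise symmetric keys vs. asymmetric key pairs.
def symmetric_keys(n: int) -> int:
    return n * (n - 1) // 2   # every pair of users needs its own shared key

def asymmetric_keys(n: int) -> int:
    return 2 * n              # one private and one public key per user

for n in (1_000, 100_000):
    print(n, symmetric_keys(n), asymmetric_keys(n))
# 1,000 users   ->       499,500 symmetric keys vs.   2,000 asymmetric keys
# 100,000 users -> 4,999,950,000 symmetric keys vs. 200,000 asymmetric keys
```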
The problem, however, is that asymmetric cryptography is extremely slow compared with its symmetric
counterpart. Asymmetric cryptography is impractical for everyday use in encrypting large amounts of data or for
frequent transactions where speed is required. This is because asymmetric key cryptography handles much larger
keys and is mathematically intensive.
Below is an example that illustrates the use of asymmetric cryptography to achieve different security attributes.
The two keys (private and public) are a key pair; they must be used together. This means that any message that is
encrypted with a public key can only be decrypted with the corresponding other half of the key pair, the private key.
Similarly, signing a message with a sender’s private key can only be verified by the recipient decrypting its
signature with the sender’s public key. Therefore, if the key holder keeps the private key secure, there exists a
method of transmitting a message confidentially. The sender would encrypt the message with the receiver’s public
key. Only the receiver with the private key would be able to open or read the message, providing confidentiality.
Figure 5.8 shows how asymmetric encryption can be used to send a confidential message across an untrusted
channel.
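The confidentiality flow in Figure 5.8 can be sketched with the third-party pyca/cryptography package (a library choice assumed here for illustration, not one prescribed by the course): the sender encrypts with the receiver’s public key, and only the receiver’s private key can decrypt.

```python
# Sketch of asymmetric confidentiality using RSA with OAEP padding.
# Assumes the third-party "cryptography" package (pip install cryptography).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Receiver generates a key pair; the public half can be shared freely.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# Sender encrypts with the receiver's public key...
ciphertext = public_key.encrypt(b"meet at the usual place", oaep)
# ...and only the receiver's private key can recover the plaintext.
print(private_key.decrypt(ciphertext, oaep))
```

In practice, asymmetric encryption like this is usually used to protect a small symmetric session key, which then encrypts the bulk of the data, combining the convenience of asymmetric key exchange with the speed of symmetric ciphers.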
Hashing
Hashing takes an input set of data (of almost arbitrary size) and returns a fixed-length result called the hash value.
A hash function is the algorithm used to perform this transformation. When used with cryptographically strong
hash algorithms, this is the most common method of ensuring message integrity today.
Hashes have many uses in computing and security, one of which is to create a message digest by applying such a
hash function to the plaintext body of a message. Hashing puts data through a hash function or algorithm to
create a fixed-length alphanumeric value, called a digest, that means nothing to people who might view it. No matter
how long the input is, the hash digest will be the same number of characters. Any minor change in the input, such as a
misspelling or a change in capitalization, will create a completely different hash digest. So, you can use the hash
digest to confirm that the input exactly matches what is expected or required; a good example is a password.
To be useful and secure, a cryptographic hash function must demonstrate five main properties:
Usefulness. It is easy to compute the hash value for any given message.
Nonreversible. It is computationally infeasible to reverse the hash process or otherwise derive the original
plaintext of a message from its hash value (unlike an encryption process, for which there must be a
corresponding decryption process).
Content integrity assured. It is computationally infeasible to modify a message such that reapplying the
hash function will produce the original hash value.
Uniqueness. It is computationally infeasible to find two or more different, sensible messages that hash to
the same value.
Deterministic. The same input will always generate the same hash, when using the same hashing algorithm.
Cryptographic hash functions have many applications in information security, including digital signatures,
message authentication codes, and other forms of authentication. They also can be used for fingerprinting, to
detect duplicate data or uniquely identify files, and as checksums to detect accidental data corruption. The
operation of a hashing algorithm is demonstrated in Figure 5.9.
In this example, the originator wants to send a message to the receiver and ensure that the message is not altered
by noise or lost packets as it is transmitted. The originator runs the message through a hashing algorithm that
generates a digest of the message. The digest is appended to the message and sent together with the message to
the recipient. Once the message is delivered, the receiver will generate their own digest of the received message
using the same hashing algorithm. The digest of the received message is compared with the digest sent by the
originator. If the digests are the same, the received message is the same as the sent message.
The problem with a simple hash function like this is that it does not protect against a malicious attacker who
would be able to change both the message and the hash/digest by intercepting it in transit. The general idea of a
cryptographic hash function can be summarized with the following formula:
Variable data input + hashing algorithm = fixed bit size data output (the digest)
As seen in Figure 5.10, even the slightest change in the input message results in a completely different hash
value.
Hash functions are sensitive to any changes in the message. Because the size of the hash digest does not vary
according to the message size, you cannot tell the size of a message based on the digest.
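A short sketch with Python’s standard hashlib module shows both properties: the digest length never changes, and even a one-character change in the input produces a completely different digest.

```python
# Sketch: fixed-length output and the "avalanche" effect with SHA-256.
import hashlib

messages = [b"Hello, world", b"Hello, World", b"x" * 1_000_000]
for m in messages:
    digest = hashlib.sha256(m).hexdigest()
    # Every digest is 64 hex characters, no matter how large the input.
    print(len(digest), digest)
```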
Here’s another example: You pay your monthly $500 rent through automatic withdrawal. Someone at the rental
office thinks they can change the amount to $5,000, keep the extra money, and no one will notice. However, that
change completely alters the digest. Since the digest is different, it will indicate that someone corrupted the
information by changing the value of the automatic withdrawal, and it will not go through. Hashing is an extra layer
of defense.
Before a software product provided by a third party goes live, you must make sure that no one has changed
anything after testing. The programmer usually will send a digest of the code to compare it to the original. This is
also known as a checksum. If a discrepancy is found, that means something has changed. Security coders will
then compare the original checksum to the new one using software. The coders may need to go line by line to
discover what needs to be fixed. Often these problems are not intentional; rather, they sneak in when final
adjustments to the software are made.
Often passwords will be stored as a fixed hash value or digest, so that the system can tell whether your password
matches without the password itself ever being visible.
A more secure password with alphanumeric and special characters will generate a different type of hash digest.
However, this system of password management is already becoming obsolete. Often, for security purposes, you
will be asked to generate a new password with a minimum number of characters, and the software behind it will
evaluate the new password and tell you whether it is sufficiently secure, or it will prompt for creation
of a better one.
Attackers use password hashes to guess passwords offline. If an attacker can copy the hashed password file
from a compromised workstation or server, and they know the algorithm that is used to hash the password, the
attacker can use a computer to generate random sequences of letters and number combinations to match the
known password hash.
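Because attackers guess against stolen hashes offline, stored password hashes are usually salted and deliberately slow to compute. The sketch below uses PBKDF2 from Python’s standard library; the salt size and iteration count are illustrative assumptions, not required values.

```python
# Sketch: salted, iterated password hashing and verification with PBKDF2.
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative work factor; higher values slow offline guessing

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) for storage; the plaintext password is never stored."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("guess123", salt, stored))                      # False
```

The unique salt means two users with the same password produce different stored digests, which defeats precomputed lookup tables.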
Understand System Hardening
Overview
Hardening is the process of applying secure configurations to reduce the attack surface and lock down various
hardware, communications systems, and software, including operating systems, web servers, application servers,
and applications. In this module, you will review configuration management practices that will ensure systems
are installed and maintained according to industry and organizational security standards.
Configuration management is a process and discipline used to ensure that the only changes made to a system
are those that have been authorized and validated. It is both a decision-making process and a set of control
processes. If we look more closely at this definition, the basic configuration management process includes
components such as identification, baselines, updates, and patches.
Identification. This is the baseline identification of a system and all its components, interfaces, and
documentation.
Baseline. A security baseline is a minimum level of protection used as a reference point. Baselines ensure
that updates to technology and architectures are subjected to the minimum understood and acceptable
level of security requirements.
Change control. This is an update process for requesting changes to a baseline by changing one or more
components in that baseline. A review and approval process is needed for all changes, including updates
and patches.
Verification and audit. This is a regression and validation process, which may involve testing and analysis, to
verify that nothing in the system was broken by a newly applied set of changes. An audit process can
validate that the currently in-use baseline matches the total of its initial baseline plus all approved changes
applied in sequence.
Effective use of configuration management gives systems owners, operators, support teams, and security
professionals another important set of tools to monitor and oversee the configuration of the organization’s
devices, networks, applications, and projects.
An organization may mandate the configuration of equipment through standards and baselines, which can ensure
that network devices, software, hardware, and endpoint devices are consistently configured and that all such
devices are compliant with the organization’s security baseline. If a device is found to be not compliant, it may be
disabled or isolated into a quarantine area until it can be checked and updated.
Inventory
Making an inventory, catalog, or registry of all the organization’s information assets is the first step in any asset
management process. It requires that you locate and identify all assets of interest, including—and especially—the
information assets. You can’t protect what you don’t know you have.
Baselines
IT departments manage the configurations of network appliances, servers, end-user computers, and
corresponding firmware and applications, which amounts to millions or billions of possible configuration settings. Even a
commercial software product might have thousands of individual modules, processes, parameters, and
initialization files or other elements. If any one of them is missing or misconfigured, the system may not function
correctly or may introduce a security weakness. A baseline requires a total inventory of all system components: hardware, software,
data, administrative controls, documentation, and a desired or known configuration. For example, operating
system baselines will include a predetermined application suite and configuration which is aimed at
accomplishing organizational security needs while facilitating the efficient provisioning of new systems.
When protecting assets, baselines can be particularly helpful in achieving a minimal protection level of those
assets based on value. That is, if assets have been classified based on value, and meaningful baselines have been
established for each of the classification levels, you can establish a minimum level of security required for each
level.
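As a simple illustration of checking systems against a baseline, the sketch below compares a device’s reported settings with a minimum configuration. The setting names and required values are illustrative assumptions, not a published baseline.

```python
# Minimal sketch: compare a device's settings against a security baseline.
# Setting names and values are illustrative assumptions only.
BASELINE = {
    "firewall_enabled": True,
    "disk_encryption": True,
    "auto_updates": True,
    "min_password_length": 14,
}

def check_compliance(current: dict) -> list[str]:
    """Return the settings that deviate from the baseline."""
    findings = []
    for setting, required in BASELINE.items():
        if current.get(setting) != required:
            findings.append(f"{setting}: expected {required}, found {current.get(setting)}")
    return findings

device = {"firewall_enabled": True, "disk_encryption": False,
          "auto_updates": True, "min_password_length": 8}
for finding in check_compliance(device):
    print(finding)   # non-compliant settings could trigger quarantine or remediation
```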
Updates
Repairs, maintenance actions, and updates are frequently required on almost all levels of systems elements, from
basic infrastructure to operating systems, applications platforms, networks, and user interfaces. Such
modifications must be acceptance tested to verify that newly installed or repaired functionality works as required.
They also must be regression tested to verify that the modifications did not introduce other erroneous or
unexpected behaviors in the system. Ongoing security assessment and evaluation testing ascertains whether a
system that has passed acceptance testing is still secure.
Patches
Patch management mostly applies to software and hardware devices that are subject to regular modification. A
patch is an update, upgrade, or modification to a system or component. Generally, patches address a vulnerability
or improve functionality. The challenge for the security professional is the timely maintenance of all patches, as
they can come at irregular intervals from many different vendors. Some patches, such as zero-day patches, are
critical and should be deployed quickly, while others may not be as critical but should still be rapidly deployed
because subsequent critical patches may depend on the earlier version being implemented. Standards such as
the PCI DSS require organizations to deploy security patches within a certain timeframe.
While security professionals worldwide support timely patching, patches have been known to introduce new
vulnerabilities or flaws. For example, during the well-known SolarWinds attack, many organizations were adversely
impacted by a compromised update from a reputable vendor that carried malicious code. To mitigate risk, an
organization should test a patch before rolling it out, preferably in a test instance prior to
deploying it to the production environment.
If a patch does not work or has unacceptable consequences, it may be necessary to roll back to a
previous, pre-patch state. Typically, the criteria for rollback are documented in advance, and the rollback is
performed automatically when those criteria are met.
Many vendors offer a patch management solution for their products. These systems often have certain
automated processes, or unattended updates, that allow the patching of systems without interaction from the
administrator. Unattended or automated patching might result in unscheduled outages as production systems are
taken offline or rebooted as part of the patch process. The risk of using unattended patching should be weighed
against the risk of having unpatched systems in the organization’s network.
Risk of Change
A robust change management process must be in place and testing undertaken in model environments before any
change in a production or live environment. Even with extensive planning and testing, there are sometimes
unintended consequences, so you must make sure there is a rollback plan. A rollback means restoring the system to
the state it was in before the change, when it was working properly.
A rollback plan is important in all environments, but it is critical for those who are unable to fully test a change.
Understand Best Practice Security Policies
Overview
An organization’s security policies define what “security” means to that organization, which in almost all cases
reflects the tradeoff between security, operability, affordability, and potential risk impacts. Security policies
express or impose behavioral or other constraints on a system and its uses. Well-designed systems operating
within these constraints should reduce the potential of security breaches to an acceptable level.
Security governance that does not align properly with organizational goals can lead to implementation of security
policies and decisions that unnecessarily inhibit productivity, impose undue costs, and hinder strategic intent.
All policies must support the organization’s regulatory and contractual obligations. Sometimes, it can be
challenging to ensure a policy encompasses all requirements while remaining simple enough for users to
understand.
Here are six common security-related policies that exist in most organizations.
Data Handling Policy
This aspect of a security policy defines whether data is for use within the company, is restricted for use by only
certain roles, or can be made public to anyone outside the organization. In addition, some data has associated
legal usage definitions. Your organization’s policy should spell out any legal restrictions or refer to legal definitions
as required. Proper data classification helps organizations comply with pertinent laws and regulations. For
example, classifying credit card data as confidential can help ensure compliance with the PCI DSS. One of the
requirements of this standard is to encrypt credit card information. Data owners who correctly defined the
encryption aspect of their organization’s data classification policy will require that the data be encrypted according
to the specifications defined in this standard.
Password Policy
Every organization should have a password policy that defines expectations of systems and users. The password
policy should describe senior leadership’s commitment to ensuring secure access to data, outline any standards
that the organization has selected for password formulation, and identify who is designated to enforce and
validate the policy. Often a password policy is based on industry best practice and enhanced by other regulatory
or contractual requirements. Here are some examples of what that password policy might include.
Password creation:
All user and admin passwords must be of a certain length. Longer passphrases are encouraged.
Passwords cannot be the same as, or similar to, passwords used on any other website, system, application, or
personal account.
Passwords should not be a single word or a commonly used phrase.
Avoid passwords that are easy to guess, such as the names and birthdays of friends and family, favorite
bands or catchphrases you like to use.
Dictionary words and phrases should be avoided.
Default installation passwords must be changed immediately after installation is complete.
Password aging:
User passwords must be changed on a schedule established by the organization. Previously used
passwords may not be reused.
System-level passwords must be changed according to a schedule established by the organization.
Password protection:
Passwords must not be shared with anyone, even IT staff or supervisors, and must not be revealed or sent
electronically. Do not write down your passwords.
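A minimal sketch of how the password-creation rules above might be enforced in software is shown below; the minimum length and the banned-word list are illustrative assumptions rather than values from any particular standard.

```python
# Minimal sketch of enforcing a password-creation policy.
# MIN_LENGTH and BANNED are illustrative assumptions, not prescribed values.
MIN_LENGTH = 14
BANNED = {"password", "letmein", "changeme", "admin"}   # stand-in for a dictionary list

def check_password(candidate: str, previous: set[str]) -> list[str]:
    """Return a list of policy violations; an empty list means the password is acceptable."""
    problems = []
    if len(candidate) < MIN_LENGTH:
        problems.append(f"must be at least {MIN_LENGTH} characters")
    if candidate.lower() in BANNED:
        problems.append("must not be a common or dictionary word")
    if candidate in previous:
        problems.append("must not reuse a previous password")
    return problems

print(check_password("admin", {"Spring2023!"}))   # fails the length and dictionary checks
```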
Acceptable Use Policy
The acceptable use policy (AUP) defines acceptable use of the organization’s network and computer systems and
can help protect the organization from legal action. It should detail the appropriate and approved usage of the
organization’s assets, including the IT environment, devices, and data. Each employee (or anyone having access to
the organization’s assets) should be required to sign a copy of the AUP and both parties should keep a copy.
Policy aspects commonly included in AUPs:
Data access
System access
Data disclosure
Passwords
Data retention
Internet usage
Company device usage
Bring Your Own Device (BYOD) Policy
An organization may allow workers to acquire equipment of their choosing and use personally owned equipment
for business and personal use. This is sometimes called Bring Your Own Device (BYOD).
Letting employees choose the device that is most comfortable for them may be good for employee morale, but it
presents additional challenges for the security professional because it means the organization loses some control
over standardization and privacy. If employees are allowed to use their phones and laptops for both personal and
business use, this can pose a challenge if, for example, the device must be examined for a forensic audit. It can be
hard to ensure that the device is configured securely and does not have any backdoors or other vulnerabilities that
could be used to access organizational data or systems. Recent innovations in mobile operating systems and
software have introduced methods to segregate company data from personal data, which lessens the risk to
organizational data on a personal device.
All employees must read and agree to adhere to a BYOD policy before any access to the systems, network, and/or
data is allowed. Certainly, the appropriate tools will be necessary to manage the use of, and security around, BYOD
devices. The organization needs to establish clear user expectations and set the appropriate business rules.
Privacy Policy
Often, personnel have access to personally identifiable information (PII) or, in the healthcare industry, electronic
protected health information (ePHI). It is imperative that an organization documents that personnel
understand and acknowledge the organization’s policies and procedures for handling of PII and are aware of the
legal repercussions of handling such sensitive data. This type of documentation is like the AUP but is specific to
privacy-related data.
The organization’s privacy policy should stipulate which information is considered PII/ePHI, the appropriate
handling procedures and mechanisms used by the organization, how the user is expected to perform in
accordance with the stated policy and procedures, any enforcement mechanisms and punitive measures for
failure to comply, as well as references to applicable regulations and laws to which the organization is subject.
This can include national and international laws, such as the GDPR in the EU and Personal Information Protection
and Electronic Documents Act (PIPEDA) in Canada; laws for specific industries in certain countries such as HIPAA
and Gramm–Leach–Bliley Act (GLBA) in the U.S.; or local laws in the market in which the organization operates.
Organizations should create a public document that explains how private information is used, both internally and
externally. For example, it may be required that a medical provider presents patients with a description of how the
provider will protect their information or a reference to where they can find this description, such as the provider’s
website.
Change Management Policy
Change management is the discipline of transitioning from the current state to a future state. It consists of three
major activities: deciding to change, making the change, and confirming that the change has been correctly
accomplished. Change management focuses on making the decision to change and results in approvals for
systems support teams, developers, and end users to start making the directed alterations.
Throughout a system lifecycle, changes made to the system, its individual components, and its operating
environment all have the capability to introduce new vulnerabilities and thus undermine the enterprise’s security.
Change management requires a process to implement necessary changes so they do not adversely affect
business operations.
Policies will be set according to the organization’s needs, vision and mission. Each policy should have a penalty or
consequence for noncompliance. The first time may be a written warning; the next might be a forced leave of
absence or suspension without pay, and a critical violation could even result in an employee’s immediate
termination. Compliance with policies should be outlined clearly during onboarding, particularly for information
security personnel. It should be made clear who is responsible for enforcing policies, and the employee must sign
off on policies and have documentation saying they have done so.
Some organizations include a few questions in a survey or quiz to confirm that the employees truly understand the
policies to which they are agreeing. The policies described in this section are part of the baseline security posture
of any organization. Any security or data handling procedures should be backed up by the appropriate policies.
Change management comprises three primary components: the request for change, the approval process, and the
rollback plan.
Figure 5.12: Change Management Components
All major change management practices address a common set of core activities that start with a
request for change (RFC) and move through various development and test stages until the change is released to
the end users. From first to last, each step is subject to some form of formalized management and decision-
making; each step produces accounting or log entries to document results.
Approval
Depending upon the nature of the change, a variety of activities may need to be completed before the change is
approved.
Rollback
Rollback authority would generally be defined in the rollback plan, which might be immediate or scheduled as a
subsequent change if monitoring of the change suggests inadequate performance.
Change management happens in a cycle. There is no real stopping point; it is continuously going. This means that
there must be continuous monitoring of that environment. So, if you or anyone should request a change, it needs
to go through the appropriate approvals. The organization must be prepared for rollback if necessary, meaning
that if that change did not work, you need to be able to roll back to the legacy system.
While change management is an organization-wide process, it often falls on information security professionals to
coordinate the effort and maybe to provide oversight and governance. Depending on an organization’s size,
change management also may fall under IT, a project management office, or a quality or risk management
department. The common theme is that change management acknowledges and incorporates input from the end
users as well as all areas of IT, development, information security, and—most importantly—management, to
ensure that all changes are properly tested, approved, and communicated prior to implementation.
Understand Security Awareness Training
Raising the cyber literacy of all people—especially your employees—goes a long way in securing information and
systems. The security team can’t watch over the shoulders of everyone with access to your information and
systems. That’s why to reduce the effectiveness of certain types of attacks (such as social engineering), it is
crucial that organizations inform employees and contractors about how to recognize security problems and how
to operate in a secure manner. While the specifics of secure operation differ in each organization, there are some
general concepts that are applicable to all security awareness programs.
Security awareness training is undertaken so that employees know what is expected of them, based on their
responsibilities and accountabilities, as well as to uncover ignorance, carelessness, or complacency that could
pose a risk to the organization. Through training, organizations can align information security goals with their
mission and vision.
There are three types of learning activities that organizations use, whether for information security or for any other
purpose:
Education. The overall goal of education is to help learners improve their understanding of ideas and their
ability to relate them to their own experiences and apply that learning in useful ways.
Training. This focuses on building proficiency in a specific set of skills or actions, including sharpening the
perception and judgment needed to select and apply skills. Training can focus on low-level skills, an entire
task, or complex workflows consisting of many tasks.
Awareness. These activities attract and engage the learner’s attention by acquainting them with aspects of
an issue, concern, problem or need.
Notice that none of these have an expressed or implied degree of formality, prioritization, location, or target
audience. Whether the learner is a senior executive or the janitor, you start at the awareness level for both.
Here is an example of security awareness training using an organization’s strategy to improve fire safety in the
workplace:
Education may help workers in a secure server room understand the interaction of the various fire and
smoke detectors, suppression systems, alarms, and their interactions with electrical power, lighting, and
ventilation systems.
Training would provide those workers with task-specific, detailed learning about the proper actions each
should take in the event of an alarm, a suppression system going off without an alarm, a ventilation system
failure, or other contingency. This training would build on the learning acquired via the educational
activities.
Awareness activities would include not only posting the appropriate signage, floor, or doorway markings,
but also other indicators to help workers detect an anomaly, respond to an alarm, and take appropriate
action. In this case, awareness is a constantly available reminder of what to do when the alarms go off.
The same three-level approach applies to defending against phishing and social engineering:
Education to help select groups better understand how social engineering attacks are conducted and
engage those employees in creating and testing their own strategies for improving their defensive
techniques.
Training to help users increase their proficiency in recognizing a potential phishing or similar attempt, while
also practicing the correct responses to such events. Training may include simulated phishing emails sent
to users on a network to test their ability to identify a phishing email.
Raising users’ overall awareness of the threat posed by phishing, vishing, SMS phishing (also called
“smishing”), and other social engineering tactics. Awareness techniques also can alert selected users to
new approaches that such attacks might take.
Let’s look at some common risks and why it’s important to include them in your security awareness training
programs.
Phishing
The use of phishing attacks to target individuals, entire departments, and even companies is a significant threat
that the security professional must be aware of and prepared to defend against. Countless variations on the basic
phishing attack have appeared in recent years, leading to a variety of attacks that are deployed relentlessly
against individuals and networks in a stream of emails, phone calls, spam, instant messages, videos, file
attachments, and other delivery mechanisms.
Phishing attacks that attempt to trick highly placed officials or private individuals with sizable assets into
authorizing large fund wire transfers to previously unknown entities are known as whaling attacks.
Many security teams use simulated phishing emails to raise awareness of this tactic. Simulated emails provide
practice opportunities so employees can learn what to look for in email messages and identify them as potential
threats.
Social Engineering
Social engineering is an important part of any security awareness training program for one very simple reason:
Bad actors know that social engineering works. For the cyberattackers, social engineering is an inexpensive
investment with a potentially high payoff. Social engineering, applied over time, can extract significant insider
knowledge about almost any organization or individual.
One of the most important messages to deliver in a security awareness program is an understanding of the threat
of social engineering. People need to be reminded of the threat and types of social engineering so that they can
recognize and resist a social engineering attack.
Most social engineering techniques are not new. Many have even been taught as basic fieldcraft for espionage
agencies and are part of the repertoire of investigative techniques used by police detectives. A short list of tactics
that we see across cyberspace currently includes:
Phone phishing (or “vishing”). Using a rogue interactive voice response (IVR) system to recreate a legitimate-
sounding copy of a bank or other institution’s IVR system. The victim is prompted through a phishing email
to call via a provided phone number to verify information such as account numbers, account access codes
or a PIN, and to confirm answers to security questions, contact information, and addresses. A typical
vishing system will reject logins continually, ensuring the victim enters PINs or passwords multiple times,
often disclosing several different passwords. More advanced systems may be used to transfer the victim to
a human posing as a customer service agent for further questioning.
Pretexting. The human equivalent of phishing, where someone impersonates an authority figure or a trusted
individual to gain access to login information. The pretexter may claim to be an IT support worker who is
doing maintenance or an investigator performing a company audit. Or they might impersonate a coworker,
the police, a tax authority, or another seemingly legitimate person. The goal is to gain access to computers
and information.
Quid pro quo. A request for password or login credentials in exchange for some compensation, such as a
“free gift,” a monetary payment, or access to an online game or service. If it sounds too good to be true, it
probably is.
Tailgating. The practice of following an authorized user into a restricted area or system. The low-tech
version of tailgating occurs when a stranger asks you to hold the door open behind you because they forgot
their company RFID card. In a more sophisticated version, someone may ask to borrow your phone or
laptop to perform a simple action when in fact they are installing malicious software onto your device.
Social engineering works because it plays on human tendencies. Education, training, and awareness work best to
counter or defend against social engineering because they help people realize that every person in the
organization plays a role in information security. Like simulated phishing emails, simulated social engineering
exercises, such as phone phishing calls, pretexting, or tailgating attempts, build employee awareness and help them recognize when something is suspicious.
Password Protection
We use many different passwords and systems. Many password managers will store a user’s passwords, so the
user does not have to remember passwords for multiple systems. The greatest disadvantage of these solutions is
the risk of compromise to the password manager.
Password managers may be protected by a weak password or passphrase chosen by the user and easily
compromised. There have been many cases where a person’s private data was stored by a cloud provider but
easily accessed by unauthorized persons through password compromise.
Organizations should encourage the use of different passwords for different systems and should provide a
recommended password management solution for their users.
Examples of poor password protection that should be avoided are:
Reusing passwords for multiple systems, especially using the same password for business and personal
use.
Writing down passwords and leaving them in unsecured areas.
Sharing a password with tech support or a coworker.
By following a good password policy and appropriate procedures, you can improve password security immensely.
Appropriate communications about current and potential threats keeps awareness high. Among the ways to
communicate are encouraging friendly competition between departments to spot the most phishing attempts or
offering friendly reminders such as a squishy stress ball that says, “Lock your computer.” There are also automatic
systems that lock the computer when the user steps away.
Make sure the organization’s leaders understand the importance of training, promoting, and improving the
organization’s information security environment. Provide the opportunity for personnel to practice what they’ve
learned with exercises and simulations. Occasionally, you can send simulated phishing emails, for example, and
provide positive feedback to those who report a possible problem. Feedback received on training helps ensure that
it is appropriate and understood.
Chapter Summary
This chapter focused on the day-to-day, moment-by-moment, use of security controls and risk mitigation
strategies in an organization. You discovered ways to secure data and the systems on which they reside. Data
(information) security as a process and discipline provides a structure for protecting the value of data as the
organization creates, stores, shares, uses, modifies, archives, and—finally—destroys that data (known as data
handling). During data handling, an organization classifies (assigns data sensitivity levels), categorizes
(determines type of data), labels (applies a name to the data), retains (determines how long to keep the data), and
destroys (erases or obliterates) the data.
A best practice for securing data is encryption. The chapter covered the process of encrypting data in plaintext
with a key and algorithm to create ciphertext, then using either the same key (symmetric) or a different key
(asymmetric) and the same algorithm to decrypt the ciphertext and convert it back to plaintext. Next, hashing was
methodically described; hashing takes an input set of data of almost arbitrary size and returns a fixed-length result
called the hash value.
System hardening is the process of applying secure configurations (to reduce the attack surface) and locking
down various hardware, communications systems, and software, including operating systems, web servers,
application servers, and applications.
Configuration management was introduced as a process and discipline used to ensure that the only changes
made to a system are those that have been authorized and validated. Configuration management consists of
identification, baseline, change control, verification, and audit. During configuration management, one must
conduct inventory, baselines, updates, and patches.
The following best practice security policies were examined: data handling (appropriate use of data), password
(appropriate use of passwords), acceptable use (appropriate use of the assets, devices, and data), Bring Your Own
Device (appropriate use of personal devices), privacy (appropriate protection of one’s privacy), and change
management (appropriate transition from current state to a future state).
The chapter ended by exploring the importance of security awareness training and how it reduces internal threats
to an organization. By breaking down the levels of security awareness training into education, training, and
awareness, we identified that the training can be tailored to the security topic(s), organization, position, or
individual. There are many methods for training: computer-based training, live training, online synchronous
training, regular communications, reward mechanisms, gamification, and microtraining. The module highlighted
some of the main threats, including phishing and social engineering (e.g., baiting, phone phishing or vishing,
pretexting, quid pro quo, and tailgating). The importance of protecting passwords was also emphasized.