Assets, Threats, and Vulnerabilities
Hello, and welcome to Assets, Threats, and Vulnerabilities, the fifth course
in the Google Cybersecurity Certificate. You’re on an exciting journey!
By the end of this course, you’ll build an understanding of the wide range of
assets organizations must protect. You’ll explore many of the most common
security controls used to protect valuable assets from risk. You’ll also discover
the variety of ways assets are vulnerable to threats by adopting an attacker
mindset.
Course 5 content
Each course of this certificate program is broken into modules. You can
complete courses at your own pace, but the module breakdowns are designed
to help you finish the entire Google Cybersecurity Certificate in about six
months.
What’s to come? Here’s a quick overview of the skills you’ll learn in each
module of this course.
You will focus on security controls that protect organizational assets. You'll
explore how privacy impacts asset security and understand the role that
encryption plays in maintaining the privacy of digital assets. You'll also
explore how authentication and authorization systems help verify a user’s
identity.
Finally, you will explore common types of threats to digital asset security.
You'll also examine the tools and techniques used by cybercriminals to target
assets. In addition, you'll be introduced to the threat modeling process and
learn ways security professionals stay ahead of security breaches.
Understand risks, threats, and
vulnerabilities
When security events occur, you’ll need to work in close coordination with
others to address the problem. Doing so quickly requires clear communication
between you and your team to get the job done.
Security risk
Security plans are all about how an organization defines risk. However, this
definition can vary widely by organization. As you may recall, a risk is
anything that can impact the confidentiality, integrity, or availability of an
asset. Since organizations have particular assets that they value, they tend to
differ in how they interpret and approach risk.
One way to interpret risk is to consider the potential effects that negative
events can have on a business. Another way to present this idea is with this
calculation:
Likelihood x Impact = Risk
For example, you risk being late when you drive a car to work. This negative
event is more likely to happen if you get a flat tire along the way. And the
impact could be serious, like losing your job. All these factors influence how
you approach commuting to work every day. The same is true for how
businesses handle security risks.
The business impact of a negative event will always depend on the asset and
the situation. Your primary focus as a security professional will be on the
likelihood side of the equation: dealing with the factors that increase the
odds of a problem.
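One way to make the likelihood-times-impact idea concrete is a toy risk score. The 1-to-5 ratings and the commuting values below are illustrative assumptions, not part of any formal risk methodology:

```shell
# Toy risk scoring: risk = likelihood x impact, each rated 1-5.
# The specific ratings here are illustrative assumptions.
likelihood=4   # e.g., flat tires are fairly likely on this route
impact=5       # e.g., being late could cost you your job
risk=$((likelihood * impact))
echo "Risk score: $risk out of 25"
```

Reducing either factor, such as driving a cleaner road to lower the likelihood, lowers the overall score.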
Risk factors
As you’ll discover throughout this course, there are two broad risk factors that
you’ll be concerned with in the field:
Threats
Vulnerabilities
Let’s apply this to the risk of being late to work. A threat would be a nail
puncturing your tire, since tires are vulnerable to running over sharp objects.
In terms of security planning, you would want to reduce the likelihood of this
risk by driving on a clean road.
Categories of threat
Threats are circumstances or events that can negatively impact assets. There
are many different types of threats. However, they are commonly categorized
as two types: intentional and unintentional.
Categories of vulnerability
Key takeaways
Previously, you learned that identifying, tracking, and classifying assets are all
important parts of asset management. In this reading, you’ll learn more about
the purpose and benefits of asset classification, including common
classification levels.
Keeping assets safe requires a workable system that helps businesses operate
smoothly. Setting these systems up requires having detailed knowledge of the
assets in an environment. For example, a bank needs to have money available
each day to serve its customers. Equipment, devices, and processes need to be
in place to ensure that money is available and secure from unauthorized
access.
Regardless of its type, every asset should be classified and accounted for. As
you may recall, asset classification is the practice of labeling assets based on
sensitivity and importance to an organization. Determining each of those two
factors varies, but the sensitivity and importance of an asset typically requires
knowing the following:
What you have
Where it is
How important it is
Note: Although many organizations adopt this classification scheme, there can
be variability at the highest levels. For example, government organizations
label their most sensitive assets as confidential instead of restricted.
For example, a business might issue a laptop to one of its employees to allow
them to work remotely. You might assume the business is the asset owner in
this situation. But, what if the employee uses the laptop for personal matters,
like storing their photos?
Key takeaways
Every business is different. Each business will have specific requirements to
address when devising their security strategy. Knowing why and how
businesses classify their assets is an important skill to have as a security
professional. Information is one of the most important assets in the world. As
a cybersecurity professional, you will be closely involved with protecting
information from damage, disclosure, and misuse. Recognizing the challenges
that businesses face classifying this type of asset is a key to helping them solve
their security needs.
Earlier, you learned that most information is in the form of data, which is in a
constant state of change. In recent years, businesses started moving their data
to the cloud. The adoption of cloud-based services has complicated how
information is kept safe online. In this reading, you’ll learn about these
challenges and the opportunities they’ve created for security professionals.
Soaring into the cloud
Cloud-based services
PaaS refers to back-end application development tools that clients can access
online. Developers use these resources to write code and build, manage, and
deploy their own apps. Meanwhile, the cloud service providers host and
maintain the back-end hardware and software that the apps use to operate.
Some examples of PaaS services include Google App Engine™ platform,
Heroku®, and VMware Cloud Foundry.
IaaS customers are given remote access to a range of back-end systems that
are hosted by the cloud service provider. This includes data processing
servers, storage, networking resources, and more. Resources are commonly
licensed as needed, making it a cost-effective alternative to buying and
maintaining on premises.
Microsoft Azure
Cloud security
Shifting applications and infrastructure over to the cloud can make it easier to
operate an online business. It can also complicate keeping data private and
safe. Cloud security is a growing subfield of cybersecurity that specifically
focuses on the protection of data, applications, and infrastructure in the cloud.
For example, a PaaS client pays to access the resources they need to build
their applications. So, it is reasonable to expect them to be responsible for
securing the apps they build. On the other hand, the responsibility for
maintaining the security of the servers they are accessing should belong to the
cloud service provider because there are other clients using the same systems.
Resource configuration
Data handling
Many other challenges exist besides these. As more businesses adopt cloud-
based services, there’s a growing need for cloud security professionals to meet
a growing number of risks. Burning Glass, a leading labor market analytics
firm, ranks cloud security among the most in-demand skills in cybersecurity.
Key takeaways
So much of the global marketplace has shifted to cloud-based services. Cloud
technology is still new, resulting in the emergence of new security models and
a range of security challenges. And it’s likely that other concerns might arise
as more businesses become reliant on the cloud. Being familiar with the cloud
and the different services that are available is an important step towards
supporting any organization's efforts to protect information online.
The U.K.’s National Cyber Security Centre has a detailed guide for choosing,
using, and deploying cloud services securely based on the shared
responsibility model.
As you might recall, the framework consists of three main components: the
core, tiers, and profiles. In the following sections, you'll learn more about each
of these CSF components.
Core
The CSF core is a set of desired cybersecurity outcomes that help
organizations customize their security plan. It consists of six functions, or
parts: Identify, Protect, Detect, Respond, Recover, and Govern. These functions
are commonly used as an informative reference to help organizations identify
their most important assets and protect those assets with appropriate
safeguards. The CSF core is also used to understand ways to detect attacks and
develop response and recovery plans should an attack happen.
Previously, the core consisted of just five functions. Govern was added in
February of 2024 to emphasize the importance of leadership and decision-
making when it comes to managing cybersecurity risks.
Tiers
Profiles
The CSF profiles are pre-made templates of the NIST CSF that are developed
by a team of industry experts. CSF profiles are tailored to address the specific
risks of an organization or industry. They are used to help organizations
develop a baseline for their cybersecurity plans, or as a way of comparing
their current cybersecurity posture to a specific industry standard.
Note: The core, tiers, and profiles were each designed to help any business
improve their security operations. Although there are only three components,
the entire framework consists of a complex system of subcategories and
processes.
Note: Regulations are rules that must be followed, while frameworks are
resources you can choose to use.
Since its creation, many businesses have used the NIST CSF. However, CSF can
be a challenge to implement due to its high level of detail. It can also be tough
to find where the framework fits in. For example, some businesses have
established security plans, making it unclear how CSF can benefit them.
Alternatively, some businesses might be in the early stages of building their
plans and need a place to start.
Analyze and prioritize existing gaps in security operations that place the
business's assets at risk.
Pro tip: Always consider current risk, threat, and vulnerability trends when
using the NIST CSF.
You can learn more about implementing the CSF in this report by CISA that
outlines how the framework was applied in the commercial facilities sector.
The NIST CSF has continued to evolve since its introduction in 2014. Its design
is influenced by the standards and best practices of some of the largest
companies in the world.
A benefit of the framework is that it aligns with the security practices of many
organizations across the global economy. It also helps with regulatory
compliance that might be shared by business partners.
Key takeaways
The NIST CSF is a flexible resource that organizations may choose to use to
assess and improve their security posture. It's a useful framework that
combines the security best practices of industries around the world.
Implementing the CSF can be a challenge for any organization. The CSF can
help businesses meet regulatory compliance requirements to avoid financial and
reputational risks.
Every business needs to plan for the risk of data theft, misuse, or abuse.
Implementing the principle of least privilege can greatly reduce the risk of
costly incidents like data breaches by:
Limiting access to sensitive information
Guest accounts are provided to external users who need to access an internal
network, like customers, clients, contractors, or business partners.
It's best practice to determine a baseline access level for each account type
before implementing least privilege. However, the appropriate access level
can change from one moment to the next. For example, a customer support
representative should only have access to your information while they are
helping you. Your data should then become inaccessible when the support
agent starts working with another customer and they are no longer actively
assisting you. Least privilege can only reduce risk if user accounts are
routinely and consistently monitored.
Pro tip: Passwords play an important role when implementing the principle
of least privilege. Even if user accounts are assigned appropriately, an
insecure password can compromise your systems.
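At the file level, least privilege often comes down to permission settings. As a small sketch (the filename is hypothetical, not from this course), the following restricts a file so only its owner can modify it, only the owner's group can read it, and everyone else has no access:

```shell
# Create a hypothetical sensitive file, then restrict access to it.
touch customer_report.txt
# 640: owner read/write, group read-only, others no access.
chmod 640 customer_report.txt
ls -l customer_report.txt
```

Assigning the minimum permissions a role needs, rather than defaulting to wide-open access, is the same principle applied to user accounts.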
Usage audits
Privilege audits
Usage audits
When conducting a usage audit, the security team will review which resources
each account is accessing and what the user is doing with the resource. Usage
audits can help determine whether users are acting in accordance with an
organization’s security policies. They can also help identify whether a user has
permissions that can be revoked because they are no longer being used.
Privilege audits
Users tend to accumulate more access privileges than they need over time, an
issue known as privilege creep. This might occur if an employee receives a
promotion or switches teams and their job duties change. Privilege audits
assess whether a user's role is in alignment with the resources they have
access to.
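A privilege audit often begins by simply enumerating what an account can currently do. On a Linux system, a first pass might look like this (checking a specific account by name is shown as a comment because the username is hypothetical):

```shell
# Show the current user's identity and group memberships;
# group membership frequently drives access rights on Linux systems.
id
groups
# For a specific (hypothetical) account: id some_user
```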
Key takeaways
The principle of least privilege is a security control that can reduce the risk of
unauthorized access to sensitive information and resources. Setting up and
configuring user accounts with the right levels of access and authorization is
an important step toward implementing least privilege. Auditing user
accounts and revoking unnecessary access rights is an important practice that
helps to maintain the confidentiality, integrity, and availability of information.
The data lifecycle
Organizations of all sizes handle a large amount of data that must be kept
private. You learned that data can be vulnerable whether it is at rest, in use, or
in transit. Regardless of the state it is in, information should be kept private by
limiting access and authorization.
The data lifecycle is an important model that security teams consider when
protecting information. It influences how they set policies that align with
business objectives. It also plays an important role in the technologies security
teams use to make information accessible.
In general, the data lifecycle has five stages. Each describes how data flows
through an organization from the moment it is created until it is no longer
useful:
Collect
Store
Use
Archive
Destroy
Information must be protected at each stage of this process so that it remains
accessible and recoverable should something go wrong.
Data governance
Data owner: the person who decides who can access, edit, use, or destroy
their information.
Data custodian: anyone or anything that's responsible for the safe handling,
transport, and storage of information.
Data steward: the person or group that maintains and implements data
governance policies set by an organization.
Businesses store, move, and transform data using a wide range of IT systems.
Data governance policies often assign accountability to data owners,
custodians, and stewards.
Most security plans include a specific policy that outlines how information
will be managed across an organization. This is known as a data governance
policy. These documents clearly define procedures that should be followed to
participate in keeping data safe. They place limits on who or what can access
data. Security professionals are important participants in data governance. As
a data custodian, you will be responsible for ensuring that data isn’t damaged,
stolen, or misused.
Securing data can be challenging. In large part, that's because data owners
generate more data than they can manage. As a result, data custodians and
stewards sometimes lack direct, explicit instructions on how they should
handle specific types of data. Governments and other regulatory agencies have
bridged this gap by creating rules that specify the types of information that
organizations must protect by default:
PHI stands for protected health information. In the U.S., it is regulated by the
Health Insurance Portability and Accountability Act (HIPAA), which defines
PHI as “information that relates to the past, present, or future physical or
mental health or condition of an individual.” In the EU, PHI has a similar
definition but it is regulated by the General Data Protection Regulation
(GDPR).
SPII is a specific type of PII that falls under stricter handling guidelines. The S
stands for sensitive, meaning this is a type of personally identifiable
information that should only be accessed on a need-to-know basis, such as a
bank account number or login credentials.
Key takeaways
Previously, you learned how regulations and compliance reduce security risk.
To review, refer to the reading about how security controls, frameworks, and
compliance regulations are used together to manage security and minimize
risk. In this reading, you will learn how information privacy regulations affect
data handling practices. You'll also learn about some of the most influential
security regulations in the world.
Security and privacy are two terms that often get used interchangeably
outside of this field. Although the two concepts are connected, they represent
specific functions:
The key difference: Privacy is about providing people with control over their
personal information and how it's shared. Security is about protecting
people’s choices and keeping their information safe from potential threats.
For example, a retail company might want to collect specific kinds of personal
information about its customers for marketing purposes, like their age,
gender, and location. How this private information will be used should be
disclosed to customers before it's collected. In addition, customers should be
given an option to opt-out if they decide not to share their data.
Note: Privacy and security are both essential for maintaining customer trust
and brand reputation.
Data privacy and protection are topics that started gaining a lot of attention in
the late 1990s. At that time, tech companies suddenly went from processing
people’s data to storing and using it for business purposes. For example, if a
user searched for a product online, companies began storing and sharing
access to information about that user’s search history with other companies.
Businesses were then able to deliver personalized shopping experiences to
the user for free.
Note: The more data is collected, stored, and used, the more vulnerable it is to
breaches and threats.
GDPR
GDPR is a set of rules and regulations developed by the European Union (EU)
that puts data owners in total control of their personal information. Under
GDPR, types of personal information include a person's name, address, phone
number, financial information, and medical information.
The GDPR applies to any business that handles the data of EU citizens or
residents, regardless of where that business operates. For example, a U.S.-based
company that handles the data of EU visitors to its website is subject to the
GDPR's provisions.
PCI DSS
HIPAA
HIPAA is a U.S. law that requires the protection of sensitive patient health
information. HIPAA prohibits the disclosure of a person's medical information
without their knowledge and consent.
Several other security and privacy compliance laws exist. Which ones your
organization needs to follow will depend on the industry and the area of
authority. Regardless of the circumstances, regulatory compliance is
important to every business.
As a security analyst, you are likely to be involved with security audits and
assessments in the field. Businesses usually perform security audits less
frequently than assessments, approximately once per year. Security audits may
be performed both internally and by external third-party groups.
Key takeaways
Types of encryption
Ciphers are vulnerable to brute force attacks, which use a trial and error
process to discover private information. This tactic is the digital equivalent of
trying every number in a combination lock trying to find the right one. In
modern encryption, longer key lengths are considered to be more secure.
Longer key lengths mean more possibilities that an attacker needs to try to
unlock a cipher.
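To see why key length matters, you can count the keyspace directly. Bash integer arithmetic can handle the 56-bit DES keyspace; 128-bit values overflow 64-bit shell integers, so the AES figure is noted in a comment:

```shell
# Number of possible 56-bit DES keys: 2^56.
des_keys=$((1 << 56))
echo "56-bit keyspace: $des_keys"   # 72057594037927936, about 7.2 x 10^16
# A 128-bit AES keyspace is 2^128, about 3.4 x 10^38: too large for 64-bit
# shell arithmetic, and far beyond any practical brute force attempt.
```

Each additional bit doubles the number of keys an attacker would need to try.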
Approved algorithms
Symmetric algorithms
Triple DES (3DES) is known as a block cipher because of the way it converts
plaintext into ciphertext in “blocks.” Its origins trace back to the Data
Encryption Standard (DES), which was developed in the early 1970s. DES was
one of the earliest symmetric encryption algorithms that generated 64-bit
keys, although only 56 bits are used for encryption. A bit is the smallest unit of
data measurement on a computer. As you might imagine, Triple DES generates
keys that are three times as long. Triple DES applies the DES algorithm three
times, using three different 56-bit keys. This results in an effective key length
of 168 bits. Despite the longer keys, many organizations are moving away
from using Triple DES due to limitations on the amount of data that can be
encrypted. However, Triple DES is likely to remain in use for backwards
compatibility purposes.
Advanced Encryption Standard (AES) is one of the most secure symmetric
algorithms today. AES generates keys that are 128, 192, or 256 bits.
Cryptographic keys of this size are considered to be safe from brute force
attacks. It’s estimated that brute forcing an AES 128-bit key could take a
modern computer billions of years!
Asymmetric algorithms
Rivest Shamir Adleman (RSA) is named after its three creators who developed
it while at the Massachusetts Institute of Technology (MIT). RSA is one of the
first asymmetric encryption algorithms that produces a public and private key
pair. Asymmetric algorithms like RSA produce even longer key lengths. In
part, this is due to the fact that these functions are creating two keys. RSA key
sizes are 1,024, 2,048, or 4,096 bits. RSA is mainly used to protect highly
sensitive data.
Digital Signature Algorithm (DSA) is a standard asymmetric algorithm that
was introduced by NIST in the early 1990s. DSA also generates key lengths of
2,048 bits. This algorithm is widely used today as a complement to RSA in
public key infrastructure.
Generating keys
Note: OpenSSL is just one option. There are various others available that can
generate keys with any of these common algorithms.
Companies use both symmetric and asymmetric encryption. They often work
as a team, balancing security with user experience.
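As a sketch of how the two families pair up in practice, the OpenSSL commands below generate a 2,048-bit RSA key pair and a random 256-bit symmetric key. In a hybrid scheme, the symmetric key encrypts the bulk data while the RSA public key protects the symmetric key. The file names are illustrative:

```shell
# Generate a 2,048-bit RSA private key, then extract its public half.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out private.pem
openssl pkey -in private.pem -pubout -out public.pem
# Generate a random 256-bit (32-byte) symmetric key, hex-encoded.
openssl rand -hex 32 > symmetric.key
```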
Key takeaways
As a security analyst, it’s important that you understand the role of encryption
to secure data online and that you’re familiar with the right security controls
to do so.
Scenario
In this scenario, all of the files in your home directory have been encrypted.
You’ll need to use Linux commands to break the Caesar cipher and decrypt the
files so that you can read the hidden messages they contain.
Here’s how you’ll do this task: First, you’ll explore the contents of the home
directory and read the contents of a file. Next, you’ll find a hidden file and
decrypt the Caesar cipher it contains. Finally, you’ll decrypt the encrypted
data file to recover your data and reveal the hidden message.
Note: The lab starts with you logged in as user analyst, with your home
directory, /home/analyst, as the current working directory.
You'll need to start the lab before you can access the materials. To do this,
click the green “Start Lab” button at the top of the screen.
After you click the Start Lab button, you will see a shell where you will
perform the remaining steps of the lab.
When you have completed all the tasks, refer to the End your Lab section that
follows the tasks for information on how to end your lab.
The lab starts in your home directory, /home/analyst, as the current working
directory.
In this task, you need to explore the contents of your home directory and read
the contents of a file to get further instructions.
1. Use the ls command to list the files in the current working directory.
The command to complete this step:
ls /home/analyst
Two files, Q1.encrypted and README.txt, and a subdirectory, caesar, are
listed.
2. Use the cat command to list the contents of the README.txt file.
The command to complete this step:
cat README.txt
This will display the following output:
Hello,
All of your data has been encrypted. To recover your data, you will need to
solve a cipher. To get started look for a hidden file in the caesar subdirectory.
The message in the README.txt file advises that the caesar subdirectory
contains a hidden file.
In the next task, you’ll need to find the hidden file and solve the Caesar cipher
that protects it. The file contains instructions on how to recover your data.
Click Check my progress to verify that you have completed this task correctly.
In this task, you need to find a hidden file in your home directory and decrypt
the Caesar cipher it contains. This task will enable you to complete the next
task.
1. First, use the cd command to change to the caesar subdirectory of your
home directory:
cd caesar
2. Use the ls -a command to list all files, including hidden files, in your
home directory:
ls -a
This will display the following output:
. .. .leftShift3
Hidden files in Linux can be identified by their name starting with a period (.).
3. Use the cat command to list the contents of the .leftShift3 file.
The command to complete this step:
cat .leftShift3
The message in the .leftShift3 file appears to be scrambled. This is because the
data has been encrypted using a Caesar cipher. This cipher can be solved by
shifting each alphabet character to the left or right by a fixed number of
spaces. In this example, the shift is three letters to the left. Thus "d" stands for
"a", and "e" stands for "b".
4. You can decrypt the Caesar cipher in the .leftShift3 file by using the
following command:
cat .leftShift3 | tr "d-za-cD-ZA-C" "a-zA-Z"
Note: The tr command translates text from one set of characters to another,
using a mapping. The first parameter to the tr command represents the input set
of characters, and the second represents the output set of characters. Hence, if
you provide parameters “abcd” and “pqrs”, and the input string to
the tr command is “ac”, the output string will be “pr".
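You can verify the note's example directly in the shell:

```shell
# Under the abcd -> pqrs mapping, "a" maps to "p" and "c" maps to "r".
printf 'ac' | tr 'abcd' 'pqrs'   # prints: pr
```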
This will display the following output:
In order to recover your files you will need to enter the following command:
openssl aes-256-cbc -pbkdf2 -a -d -in Q1.encrypted -out Q1.recovered -k ettubrute
In this case, the command tr "d-za-cD-ZA-C" "a-zA-Z" translates all the
lowercase and uppercase letters in the alphabet back to their original position.
The first character set, indicated by "d-za-cD-ZA-C", is translated to the second
character set, which is "a-zA-Z".
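The same tr mapping run in reverse encrypts rather than decrypts. Swapping the two character sets shifts each letter three positions to the right, reproducing the cipher:

```shell
# Encrypt: shift each letter three to the right (a -> d, b -> e, ...).
printf 'abc' | tr 'a-zA-Z' 'd-za-cD-ZA-C'   # prints: def
# Decrypt: the lab's mapping shifts each letter three back to the left.
printf 'def' | tr 'd-za-cD-ZA-C' 'a-zA-Z'   # prints: abc
```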
Note: The output provides you with the command you need to solve the next
task!
You don’t need to copy the command revealed in the output. It will be provided
in the next task.
5. Now, return to your home directory before completing the next task:
cd ~
Click Check my progress to verify that you have completed this task correctly.
1. Use the exact command revealed in the previous task to decrypt the
encrypted file:
openssl aes-256-cbc -pbkdf2 -a -d -in Q1.encrypted -out Q1.recovered -k ettubrute
Although you don't need to memorize this command, to help you better
understand the syntax used, let's break it down.
In this instance, the openssl command reverses the encryption of the file with
a secure symmetric cipher, as indicated by AES-256-CBC. The -pbkdf2 option
is used to add extra security to the key, and -a indicates the desired encoding
for the output. The -d indicates decrypting, while -in specifies the input file
and -out specifies the output file. The -k specifies the password, which in this
example is ettubrute.
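For contrast, the encryption direction uses the same command without -d. This sketch (the file names and password are illustrative, not part of the lab) encrypts a file and then recovers it:

```shell
printf 'hello, analyst' > note.txt
# Encrypt with AES-256-CBC, PBKDF2 key derivation, base64 (-a) output.
openssl aes-256-cbc -pbkdf2 -a -in note.txt -out note.enc -k examplepass
# Decrypt it back with -d.
openssl aes-256-cbc -pbkdf2 -a -d -in note.enc -out note.dec -k examplepass
cat note.dec   # prints: hello, analyst
```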
2. Use the ls command to list the contents of your current working directory
again:
ls
The new file Q1.recovered in the directory listing is the decrypted file and
contains a message.
3. Use the cat command to list the contents of the Q1.recovered file.
The command to complete this step:
cat Q1.recovered
This will display the following output:
If you are able to read this, then you have successfully decrypted the classic
cipher text. You recovered the encryption key that was used to encrypt this
file. Great work!
Click Check my progress to verify that you have completed this task correctly.
Conclusion
Great work! You now have practical experience in using basic Linux Bash shell
commands to list hidden files, decrypt a Caesar cipher, and decrypt an
encrypted file.
As a security analyst, it’s important that you understand the role of encryption
to secure data online and that you’re familiar with the right security controls
to do so.
Scenario
In this scenario, all of the files in your home directory have been encrypted.
You’ll need to use Linux commands to break the Caesar cipher and decrypt the
files so that you can read the hidden messages they contain.
Here’s how you’ll do this task: First, you’ll explore the contents of the home
directory and read the contents of a file. Next, you’ll find a hidden file and
decrypt the Caesar cipher it contains. Finally, you’ll decrypt the encrypted
data file to recover your data and reveal the hidden message.
Note: The lab starts with you logged in as user analyst, with your home
directory, /home/analyst, as the current working directory.
The lab starts in your home directory, /home/analyst, as the current working
directory.
In this task, you need to explore the contents of your home directory and read
the contents of a file to get further instructions.
1. Use the ls command to list the files in the current working directory.
1
ls /home/analyst
1
Q1.encrypted README.txt caesar
2. Use the cat command to list the contents of the README.txt file.
1
cat README.txt
The message in the README.txt file advises that the caesar subdirectory
contains a hidden file.
In the next task, you’ll need to find the hidden file and solve the Caesar cipher
that protects it. The file contains instructions on how to recover your data.
In this task, you need to find a hidden file in your home directory and decrypt
the Caesar cipher it contains. This task will enable you to complete the next
task.
1. First, use the cd command to change to the caesar subdirectory of your home
directory:
1
cd caesar
2. Use the ls -a command to list all files, including hidden files, in your home
directory.
1
ls -a
1
. .. .leftShift3
Hidden files in Linux can be identified by their name starting with a period (.).
3. Use the cat command to list the contents of the .leftShift3 file.
1
cat .leftShift3
The message in the .leftShift3 file appears to be scrambled. This is because
the data has been encrypted using a Caesar cipher. This cipher can be solved
by shifting each alphabet character to the left or right by a fixed number of
spaces. In this example, the shift is three letters to the left. Thus "d" stands for
"a", and "e" stands for "b".
4. You can decrypt the Caesar cipher in the .leftshift3 file by using the
following command:
1
cat .leftShift3 | tr "d-za-cD-ZA-C" "a-zA-Z"
Note: The tr command translates text from one set of characters to another,
using a mapping. The first parameter to the tr command represents the input
set of characters, and the second represents the output set of characters. Hence,
if you provide parameters “abcd” and “pqrs”, and the input string to
the tr command is “ac”, the output string will be “pr".
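These mappings are easy to verify directly in the shell. The snippet below is an illustrative sketch (the sample strings are not lab data); the second command applies the same left-shift-3 idea used above to a sample word:

```shell
# The mapping from the note: with sets "abcd" -> "pqrs", input "ac" becomes "pr"
echo "ac" | tr "abcd" "pqrs"

# Caesar decryption: shift each lowercase letter three positions to the left,
# so "khoor" ("hello" shifted right by three) decrypts back to "hello"
echo "khoor" | tr "d-za-c" "a-z"
```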
In order to recover your files you will need to enter the following command:
openssl aes-256-cbc -pbkdf2 -a -d -in Q1.encrypted -out Q1.recovered -k ettubrute
Note: The output provides you with the command you need to solve the next
task!
You don’t need to copy the command revealed in the output. It will be provided
in the next task.
5. Now, return to your home directory before completing the next task:
cd ~
1. Use the exact command revealed in the previous task to decrypt the encrypted
file:
openssl aes-256-cbc -pbkdf2 -a -d -in Q1.encrypted -out Q1.recovered -k ettubrute
Although you don't need to memorize this command, to help you better
understand the syntax used, let's break it down.
In this instance, the openssl command reverses the encryption of the file with
a secure symmetric cipher, as indicated by AES-256-CBC. The -pbkdf2 option
is used to add extra security to the key, and -a indicates the desired encoding
for the output. The -d indicates decrypting, while -in specifies the input file
and -out specifies the output file. The -k specifies the password, which in this
example is ettubrute.
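To see these options working together, here is a minimal round trip you can try in a scratch directory. It assumes the openssl command-line tool is installed; demo.txt, demo.encrypted, and demo.recovered are throwaway example file names:

```shell
# Create a small plaintext file
echo "hello" > demo.txt

# Encrypt it with the same cipher and options used in the lab (no -d)
openssl aes-256-cbc -pbkdf2 -a -in demo.txt -out demo.encrypted -k ettubrute

# Decrypt it back (-d) and display the recovered contents
openssl aes-256-cbc -pbkdf2 -a -d -in demo.encrypted -out demo.recovered -k ettubrute
cat demo.recovered
```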
2. Use the ls command to list the contents of your current working directory
again.
ls
The new file Q1.recovered in the directory listing is the decrypted file and
contains a message.
3. Use the cat command to list the contents of the Q1.recovered file.
cat Q1.recovered
Conclusion
Great work! You now have practical experience in using basic Linux Bash shell
commands to list and find hidden files, decrypt a Caesar cipher, and use
openssl to decrypt an encrypted file.
Previously, you learned that hash functions are algorithms that produce a
code that can't be decrypted. Hash functions convert information into a
unique value that can then be used to determine its integrity. In this reading,
you’ll learn about the origins of hash functions and how they’ve changed over
time.
Origins of hashing
Hash functions have been around since the early days of computing. They
were originally created as a way to quickly search for data. Since the
beginning, these algorithms have been designed to represent data of any size
as small, fixed-size values, or digests. Using a hash table, which is a data
structure that's used to store and reference hash values, these small values
became a more secure and efficient way for computers to reference data.
One of the earliest hash functions is Message Digest 5, more commonly known
as MD5. Professor Ronald Rivest of the Massachusetts Institute of Technology
(MIT) developed MD5 in the early 1990s as a way to verify that a file sent over
a network matched its source file.
Generally, the longer the hash value, the more secure it is. It wasn’t long after
MD5's creation that security practitioners discovered 128-bit digests resulted
in a major vulnerability.
MD5 values are limited to 32 characters in length. Due to the limited output
size, the algorithm is considered to be vulnerable to hash collision, an
instance when different inputs produce the same hash value. Because hashes
are used for authentication, a hash collision is similar to copying someone’s
identity. Attackers can carry out collision attacks to fraudulently impersonate
authentic data.
Next-generation hashing
To avoid the risk of hash collisions, functions that generated longer values
were needed. MD5's shortcomings gave way to a new group of functions
known as the Secure Hashing Algorithms, or SHAs.
SHA-1
SHA-224
SHA-256
SHA-384
SHA-512
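To make the difference in digest sizes concrete, you can compare the output of md5sum and sha256sum (this assumes the standard GNU coreutils tools; the input string is an arbitrary example, and printf is used so no trailing newline is hashed):

```shell
# MD5 produces a 128-bit digest, printed as 32 hexadecimal characters
printf 'hello' | md5sum
# 5d41402abc4b2a76b9719d911017c592  -

# SHA-256 produces a 256-bit digest, printed as 64 hexadecimal characters
printf 'hello' | sha256sum
# 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824  -
```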
Secure password storage
Hashes are also commonly used to protect stored passwords: instead of saving
passwords in plaintext, a system stores each password's hash value and compares
hashes at login. This is a safe system unless an attacker gains access to the user database. If
passwords are stored in plaintext, then an attacker can steal that information
and use it to access company resources. Hashing adds an additional layer of
security. Because hash values can't be reversed, an attacker would not be able
to steal someone's login credentials if they managed to gain access to the
database.
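A common hardening step is to combine each password with a salt before hashing, so identical passwords produce different stored values. The sketch below only illustrates the idea with sha256sum and a fixed example salt so the output is reproducible; real systems use a random salt per user and a dedicated password-hashing function such as bcrypt or Argon2:

```shell
# In practice the salt is random per user; a fixed value is used here
# so this example is reproducible.
salt="a1b2c3d4"
password="mypassword"

# Store hash(salt + password) instead of the plaintext password
printf '%s%s' "$salt" "$password" | sha256sum
```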
Rainbow tables
A rainbow table is a file of pre-generated hash values and their associated
plaintext, which attackers can use to quickly match stolen hash values back to
their original inputs. Functions with larger digests are less vulnerable to
collision and rainbow table attacks. But as you’re learning, no security
control is perfect.
Key takeaways
Activity overview
As a security analyst, you’ll need to implement security controls to protect
organizations against a range of threats.
That’s where hashing comes in. Previously, you learned that a hash function is
an algorithm that produces a code that can’t be decrypted. Hash functions are
used to uniquely identify the contents of a file so that you can check whether it
has been modified. This code provides a unique identifier known as a hash
value or digest.
For example, a malicious program may mimic an original program. If one code
line is different from the original program, it produces a different hash value.
Security teams can then identify the malicious program and work to mitigate
the risk.
Many tools are available to compare hashes for various scenarios. But for a
security analyst it’s important to know how to manually compare hashes.
In this lab activity, we’ll create hash values for two files and use Linux
commands to manually examine the differences.
Scenario
In this scenario, you need to investigate whether two files are identical or
different.
Here’s how you'll do this task: First, you’ll display the contents of two files
and create hashes for each file. Next, you’ll examine the hashes and compare
them.
Note: The lab starts with your user account, called analyst, already logged in to
the Bash shell. This means you can start the tasks as soon as you click the Start
Lab button.
Disclaimer: For optimal performance and compatibility, it is recommended to
use either Google Chrome or Mozilla Firefox browsers while accessing the labs.
You'll need to start the lab before you can access the materials. To do this,
click the green “Start Lab” button at the top of the screen.
After you click the Start Lab button, you will see a shell, where you will be
performing further steps in the lab. You should have a shell like this:
When you have completed all the tasks, refer to the End your Lab section that
follows the tasks for information on how to end your lab.
The lab starts in your home directory, /home/analyst, as the current working
directory. This directory contains two files, file1.txt and file2.txt, which
appear to contain the same data.
In this task, you need to display the contents of each of these files. You’ll then
generate a hash value for each of these files and send the values to new files,
which you’ll use to examine the differences in these values later.
1. Use the ls command to list the contents of the current working directory:
ls
Two files, file1.txt and file2.txt, are listed.
2. Use the cat command to display the contents of the file1.txt file:
cat file1.txt
Note: If you enter a command incorrectly and it fails to return to the command-
line prompt, you can press CTRL+C to stop the process and force the shell to
return to the command-line prompt.
3. Use the cat command to display the contents of the file2.txt file:
cat file2.txt
4. Review the output of the two file contents:
analyst@4fb6d613b6b0:~$ cat file1.txt
X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*
analyst@4fb6d613b6b0:~$ cat file2.txt
X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*
Do the contents of the two files appear identical when you use the cat
command?
Answer: Yes. The contents of the two files appear identical when you use
the cat command to display the file contents.
Although the contents of both files appear identical when you use
the cat command, you need to generate the hash for each file to determine if
the files are actually different.
5. Use the sha256sum command to generate the hash of the file1.txt file:
sha256sum file1.txt
You now need to follow the same step for the file2.txt file.
6. Use the sha256sum command to generate the hash of the file2.txt file:
sha256sum file2.txt
7. Review the generated hashes of the contents of the two files:
analyst@4fb6d613b6b0:~$ sha256sum file1.txt
131f95c51cc819465fa1797f6ccacf9d494aaaff46fa3eac73ae63ffbdfd8267 file1.txt
analyst@4fb6d613b6b0:~$ sha256sum file2.txt
2558ba9a4cad1e69804ce03aa2a029526179a91a5e38cb723320e83af9ca017b file2.txt
Do both files produce the same generated hash value?
Answer: No. The generated hash value for file1.txt is different from the
generated hash value for file2.txt, which indicates that the file contents are not
identical.
Click Check my progress to verify that you have completed this task correctly.
In this task, you’ll write the hashes to two separate files and then compare
them to find the difference.
1. Use the sha256sum command to generate the hash of the file1.txt file,
and send the output to a new file called file1hash:
sha256sum file1.txt >> file1hash
You now need to complete the same step for the file2.txt file.
2. Use the sha256sum command to generate the hash of the file2.txt file,
and send the output to a new file called file2hash:
sha256sum file2.txt >> file2hash
Now, you should have two hashes written to separate files. The first hash was
written to the file1hash file, and the second hash was written to
the file2hash file.
3. Use the cat command to display the contents of the file1hash and
file2hash files:
cat file1hash
cat file2hash
4. Inspect the output and note the difference in the hash values.
Note: Although the content in file1.txt and file2.txt previously appeared
identical, the hashes written to the file1hash and file2hash files
are completely different.
Now, you can use the cmp command to compare the two files byte by byte. If a
difference is found, the command reports the byte and line number where the
first difference is found.
5. Use the cmp command to highlight the differences in
the file1hash and file2hash files:
cmp file1hash file2hash
6. Review the output, which reports the first difference between the two
files:
analyst@4fb6d613b6b0:~$ cmp file1hash file2hash
file1hash file2hash differ: char 1, line 1
Note: The output of the cmp command indicates that the hashes differ at the
first character in the first line.
Based on the hash values, is file1.txt different from file2.txt?
Answer: Yes, the contents of the two files are different because the hash
values of each file are different.
Click Check my progress to verify that you have completed this task correctly.
Conclusion
Great work!
These are valuable tools you can use to validate data integrity as you
contribute to the control of your organization’s security.
The rise of SSO and MFA
Most companies help keep their data safely locked up behind authentication
systems. Usernames and passwords are the keys that unlock information for
most organizations. But are those credentials enough? Information security
often focuses on managing users’ access to information and what they are
authorized to do with it.
For example, SSO can connect a user to multiple applications with one access
token.
Limitations of SSO
Usernames and passwords alone are not always the most secure way of
protecting sensitive information. SSO provides useful benefits, but there’s still
the risk associated with using one form of authentication. For example, a lost
or stolen password could expose information across multiple services.
Thankfully, there’s a solution to this problem.
MFA builds on the benefits of SSO. It works by having users prove that they
are who they claim to be. The user must provide two factors (2FA) or three
factors (3FA) to authenticate their identity. The MFA process asks users
to provide these proofs, such as:
Something a user knows: most commonly a username and password
Something a user has: normally received from a service provider, like a one-
time passcode (OTP) sent via SMS
Something a user is: a physical characteristic of the user, like a fingerprint
or facial scan
Key takeaways
Implementing both SSO and MFA security controls improves security without
sacrificing the user experience. Relying on passwords alone is a serious
vulnerability. Implementing SSO means fewer points of entry, but that’s not
enough. Combining SSO and MFA can be an effective way to protect
information, so that users have a streamlined experience while unauthorized
people are kept away from important information.
The principle of least privilege, in which a user is only granted the minimum
level of access and authorization required to complete a task or function.
Separation of duties, which is the principle that users should not be given
levels of authorization that would allow them to misuse a system.
Both principles typically support each other. For example, according to least
privilege, a person who needs permission to approve purchases from the IT
department shouldn't have the permission to approve purchases from every
department. Likewise, according to separation of duties, the person who can
approve purchases from the IT department should be different from the
person who can input new purchases.
In other words, least privilege limits the access that an individual receives,
while separation of duties divides responsibilities among multiple people to
prevent any one person from having too much control.
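Least privilege shows up concretely in everyday Linux file permissions. In this illustrative sketch (the file name and the group arrangement are hypothetical), a purchase-records file is readable by its group but writable only by its owner, and invisible to everyone else:

```shell
# Create a records file and restrict it: owner read/write, group read-only,
# no access for other users (least privilege in file-permission form)
touch it_purchases.csv
chmod 640 it_purchases.csv

# Verify the resulting permissions
ls -l it_purchases.csv
```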
Authenticating users
Pro tip: Another way to remember this authentication model is: something
you know, something you have, and something you are.
User provisioning
Back-end systems need to be able to verify whether the information provided
by a user is accurate. To accomplish this, users must be properly provisioned.
User provisioning is the process of creating and maintaining a user's digital
identity. For example, a college might create a new user account when a new
instructor is hired. The new account will be configured to provide access to
instructor-only resources while they are teaching. Security analysts are
routinely involved with provisioning users and their access privileges.
Pro tip: Another role analysts have in IAM is to deprovision users. This is an
important practice that removes a user's access rights when they should no
longer have them.
Granting authorization
If the right user has been authenticated, the network should ensure the right
resources are made available. There are three common frameworks that
organizations use to handle this step of IAM:
Mandatory access control (MAC)
Discretionary access control (DAC)
Role-based access control (RBAC)
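As a small illustration of this authorization step, role-based access control maps a user's role to a set of permissions. The roles and resources below are hypothetical examples, not part of any real framework's API:

```shell
# Hypothetical RBAC lookup: the assigned role decides what is granted
role="analyst"

case "$role" in
  admin)   echo "access: all resources" ;;
  analyst) echo "access: read security logs" ;;
  *)       echo "access: denied" ;;
esac
```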
Key takeaways
Controlling access requires a collection of systems and tools. IAM and AAA are
common frameworks for implementing least privilege and separation of
duties. As a security analyst, you might be responsible for user provisioning
and collaborating with other IAM or AAA teams. Having familiarity with these
models is valuable for helping organizations achieve their security objectives.
They each ensure that the right user is granted access to the right resources at
the right time and for the right reasons.
Vulnerabilities of CI/CD
Protect Your Software Pipeline: CI/CD Security
CI/CD automates the entire software release process, from code creation to
deployment. This automation is what enables modern development teams to
be agile and respond quickly to user needs. Let's break down the key parts:
Continuous Integration (CI): Developers frequently merge code changes into a
shared repository, where each change triggers an automated build and test run
so problems are caught early.
Continuous Delivery and Continuous Deployment (CD): Changes that pass those
checks are automatically packaged for release; with continuous deployment,
they are pushed all the way to production without manual steps.
You might be wondering how security fits into all this automation. The good
news is that Continuous Delivery and Deployment can actually enhance
security. CD allows you to build security checks right into your deployment
pipeline. This ensures that only thoroughly vetted software versions are
released.
Knowing the benefits of CI/CD is only half the battle. You also need to
understand the potential security weaknesses. Here are some common
vulnerabilities to be aware of:
Vulnerable Dependencies: Managing Third-Party Risk
CI/CD pipelines often use many third-party libraries and components. If these
components have known vulnerabilities (Common Vulnerabilities and
Exposures, or CVEs), those vulnerabilities can be unknowingly added to your
application during the automated build process.
Action Step: Regularly scan and update your dependencies. Make sure you’re
using secure versions of all external components.
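Even a simple check can surface risky dependencies. The sketch below (the requirements.txt content is a made-up example) flags Python dependencies that are not pinned to a specific version, which makes builds harder to reproduce and audit; dedicated scanners such as pip-audit or npm audit go further by matching pinned versions against known CVEs:

```shell
# Example dependency file: one pinned entry, one unpinned
printf 'requests==2.31.0\nflask\n' > requirements.txt

# Print dependencies that lack a pinned version (no "==")
grep -v '==' requirements.txt
# flask
```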
Misconfigured Permissions: Controlling Access
Weak access controls in CI/CD tools, code repositories, and related systems
are a significant vulnerability. Unauthorized access can allow attackers to
modify code, pipeline configurations, or inject malicious content.
Action Step: Enforce least-privilege access controls and multi-factor
authentication on CI/CD tools and code repositories. In addition, integrate
automated security testing (SAST and DAST) into your CI/CD pipeline as a core
part of your secure CI/CD strategy.
Hardcoded Secrets: Managing Sensitive Data
Hardcoding sensitive data like API keys, passwords, and tokens directly into
code or pipeline settings is a serious security mistake. If exposed, these
secrets can lead to major security breaches.
Action Step: Never hardcode secrets. Use secure vaults or dedicated secrets
management tools to store and manage sensitive information. Enforce this
practice across your team.
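A lightweight pattern scan can catch obvious hardcoded secrets before they ship; real pipelines typically use a dedicated secrets scanner. The config.py file and the key value below are fabricated examples for illustration only:

```shell
# Example source file containing a hardcoded credential (fake value)
printf 'api_key = "AKIA1234EXAMPLE"\n' > config.py

# Flag likely hardcoded secrets by matching common variable names
grep -nE '(api_key|password|token) *=' config.py
```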
Unsecured Build Environments: Protecting the Pipeline Infrastructure
The CI/CD environment itself (the servers and systems that run your pipeline)
needs to be secure. If this environment is vulnerable, attackers can
compromise it to alter builds, inject malicious code, or steal sensitive data.
Action Step: Harden, patch, and isolate the systems that run your pipeline,
and restrict who and what can reach them over the network.
Key takeaways
The essence of securing your CI/CD pipeline is to bring robust security to your
software release process, enabling engineers to develop, test, and deploy code
with confidence and resilience against threats. By building security into your
CI/CD pipeline, you empower your team to release features, improvements, and
critical security updates rapidly and reliably. Software is then delivered not
only efficiently but also securely, proactively protecting your organization
and your customers.
Resources: