CySA Study Notes
CompTIA is a registered trademark of CompTIA. You can learn more about CompTIA
trademarks on the USPTO trademark search (TESS) website.
Contents
Explaining the Importance of Security Controls and Security Intelligence
Identify Security Control Types
Explain the Importance of Threat Data and Intelligence
Utilizing Threat Data and Intelligence
Classify Threats and Threat Actor Types
Utilize Attack Frameworks and Indicator Management
Utilize Threat Modeling and Hunting Methodologies
Analyzing Security Monitoring Data
Analyze Network Monitoring Output
Analyze Appliance Monitoring Output
Analyze Endpoint Monitoring Output
Analyze Email Monitoring Output
Collecting and Querying Security Monitoring Data
Configure Log Review and SIEM Tools
Analyze and Query Logs and SIEM Data
Utilizing Digital Forensics and Indicator Analysis Techniques
Identify Digital Forensics Techniques
Analyze Network-related IoCs
Analyze Host-related IoCs
Analyze Application-Related IoCs
Analyze Lateral Movement and Pivot IoCs
Applying Incident Response Procedures
Explain Incident Response Processes
Apply Detection and Containment Processes
Apply Eradication, Recovery, and Post-Incident Processes
Applying Risk Mitigation and Security Frameworks
Apply Risk Identification, Calculation, and Prioritization Processes
Explain Frameworks, Policies, and Procedures
Performing Vulnerability Management
Analyze Output from Enumeration Tools
Configure Infrastructure Vulnerability Scanning Parameters
Analyze Output from Infrastructure Vulnerability Scanners
● Security controls can be classified based on how they uphold the CIA triad
(confidentiality, integrity, availability).
● Some technical controls may uphold confidentiality but not integrity or availability.
● An organization needs to define which of these attributes it must uphold for each asset in order to mitigate risk.
● Open-source threat intelligence includes publicly available sources that provide valuable
information about cybersecurity threats.
● Examples of open-source providers include AT&T Cybersecurity (Open Threat Exchange), the Malware Information Sharing Platform (MISP), Spamhaus, SANS ISC Suspicious Domains, and VirusTotal.
● Blogs and discussions from experienced practitioners offer insights into cybersecurity
trends and attitudes.
● Critical infrastructure sectors like communications, energy, and healthcare have their own Information Sharing and Analysis Centers (ISACs).
● Embedded systems and industrial control systems are crucial areas of focus for
cybersecurity in critical infrastructure industries.
● Threat intelligence should be shared with different security functions to enhance risk
management, security engineering, incident response, vulnerability management, and
detection and monitoring.
● It helps organizations stay updated on threat sources, actors, tactics, and vulnerabilities,
allowing them to make informed decisions and improve security.
● Threats were historically classified based on "static" known threats like viruses, rootkits,
Trojans, and botnets.
● Modern threats require classifying based on behaviors, not just known attack signatures.
● Threat classification is essential for detecting unknown threats, including known
unknowns, recycled threats, and unknown unknowns.
● The attack surface encompasses all points at which an adversary could interact with a
system and compromise it.
● To determine the attack surface, inventory the assets and processes on your network.
● Consider various threat-modeling scenarios, such as corporate data networks,
websites/cloud, and bespoke software apps.
Attack Vector:
● The attack vector is a specific means of exploiting a point on the attack surface.
● MITRE identifies three principal categories of attack vectors: Cyber, Human, and
Physical.
● Risk assessment is crucial and involves assessing the likelihood and impact of an event.
● Likelihood is measured as a probability or percentage, while impact is expressed as a
cost (dollar) value.
● Helps prioritize responses to the most critical threat models.
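The likelihood-times-impact calculation above can be sketched as follows (all names and dollar figures are illustrative, not from the source):

```python
# Quantitative risk prioritization sketch: risk = likelihood (probability)
# x impact (dollar cost). Values are hypothetical.
threat_models = [
    {"name": "phishing leads to credential theft", "likelihood": 0.60, "impact": 50_000},
    {"name": "ransomware on file server",          "likelihood": 0.15, "impact": 400_000},
    {"name": "defacement of public website",       "likelihood": 0.30, "impact": 10_000},
]

for tm in threat_models:
    tm["risk"] = tm["likelihood"] * tm["impact"]   # expected cost of the event

# Respond to the most critical threat models first.
for tm in sorted(threat_models, key=lambda t: t["risk"], reverse=True):
    print(f'{tm["name"]}: ${tm["risk"]:,.0f}')
```

Note that the highest-impact scenario is not automatically the top priority; the probability weighting can reorder the list.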
● Threat hunting involves proactively searching for evidence of Tactics, Techniques, and
Procedures (TTPs) within a network.
● It contrasts with a reactive process triggered by incident reports.
● Utilizes insights from threat research and modeling to discover signs of TTPs.
Establishing a Hypothesis:
Open-Source Intelligence:
Shodan:
● Email harvesting and social media profiling to gather information about employees.
● Methods include trading lists, Google searches, and testing for valid email addresses.
● Unwary users may share sensitive information on social media.
Flow Analysis:
NetFlow:
Zeek (Bro):
● Zeek is a passive network monitor that reads packets from a network tap.
● It selectively logs data of interest, reducing storage and processing requirements.
● Customizable data collection and alert settings.
● Correlating IP addresses, domains, and URLs in network traffic with reputation tracking
whitelists and blacklists.
● Identifying known-bad IP addresses and domains using reputation risk intelligence.
● Malware uses dynamically generated domains via a domain generation algorithm (DGA).
● DGAs create a range of possible DNS names.
● Secure recursive DNS resolvers can help detect DGAs.
● Fast flux networks continually change IP addresses.
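One common heuristic for spotting DGA-generated names is character entropy: algorithmically generated labels tend to be long and random-looking. A minimal sketch (the threshold and sample domains are assumptions, and real detectors use many more features):

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character in the string."""
    counts = Counter(s)
    return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())

def looks_like_dga(domain: str, threshold: float = 3.5) -> bool:
    # Flag long, high-entropy second-level labels; the 3.5-bit threshold
    # is illustrative and would need tuning against real traffic.
    label = domain.split(".")[0]
    return len(label) >= 12 and shannon_entropy(label) >= threshold

print(looks_like_dga("example.com"))           # ordinary dictionary word
print(looks_like_dga("xjq7powz31hfbk9m.net"))  # random-looking label
```

Secure recursive DNS resolvers apply similar (far more sophisticated) scoring to block DGA lookups before they resolve.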
HTTP Methods:
Percent Encoding:
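Percent encoding replaces reserved URL characters with a % sign followed by two hex digits; attackers use it (including double encoding) to obfuscate malicious requests past filters. A minimal illustration with Python's standard library (the payload string is a made-up example):

```python
from urllib.parse import quote, unquote

payload = "/search?q=' OR 1=1--"
encoded = quote(payload, safe="")    # percent-encode every reserved character
print(encoded)

# Double encoding: %27 becomes %2527, so one decode pass misses the payload.
double = quote(encoded, safe="")
print(unquote(double) == payload)            # one decode is not enough
print(unquote(unquote(double)) == payload)   # analysts may need to decode repeatedly
```

This is why inspection tools normalize (repeatedly decode) URLs before matching rules against them.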
● Forward proxies act on behalf of internal hosts, forwarding their HTTP requests.
● Proxies ensure compliance with administrative and security policies for outbound
Internet traffic.
● Proxies can be non-transparent (client configuration required) or transparent (intercept
traffic without client reconfiguration).
● Analysis of proxy logs reveals details of HTTP requests, visited websites, and content.
● Proxies may use Common Log Format, recording data in space-delimited fields,
including user ID, request method, HTTP status code, resource size, and MIME type.
● Proxies intercepting or blocking traffic can record matched rules, aiding in intent
determination.
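The space-delimited fields described above can be pulled apart with a regular expression. A sketch against a fabricated Common Log Format line (the host, user, and URL are invented for illustration):

```python
import re

# Common Log Format: host ident authuser [timestamp] "request" status size
CLF = re.compile(
    r'(?P<host>\S+) (?P<ident>\S+) (?P<user>\S+) '
    r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) (?P<size>\d+|-)'
)

line = ('10.1.0.5 - jsmith [12/Oct/2023:14:01:02 -0400] '
        '"GET http://malware.example/drop.exe HTTP/1.1" 200 51200')
m = CLF.match(line)
fields = m.groupdict()
print(fields["user"], fields["status"], fields["request"])
```

From fields like these an analyst can pivot on the user ID, the requested resource, or the response size.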
Reverse Proxy:
● Web Application Firewalls (WAFs) apply rules to HTTP traffic, parsing headers and
HTML message bodies.
● WAFs address web-based vulnerabilities like SQL injection and cross-site scripting
(XSS).
● Logs record source/destination addresses, matched rules, and actions taken.
● Log formats can vary, but useful information includes event time, severity, URL
parameters, HTTP methods, and context for the rule.
● IDS is a packet sniffer that analyzes traffic and generates event logs based on rule
matches.
● IDS sensors are placed inside firewalls or near critical servers to identify malicious
traffic.
● Spanning ports or TAPs are used for monitoring in switched environments.
● IDS can also function as an Intrusion Prevention System (IPS), taking action to block
malicious traffic.
● IDS/IPS solutions include Snort, Zeek, and Security Onion.
● IDS/IPS tools create log entries for rule matches and rule changes.
● Log entries may contain event time, severity, URL parameters, and more.
● Various output formats include unified, syslog, CSV, and pcap.
● Analysts monitor alerts and decide whether to escalate them to incidents.
● NAC authenticates users and evaluates device integrity before granting network access.
● IEEE 802.1X provides port-based NAC, requiring authentication for network access.
● NAC policies define health checks, time-based, location-based, role-based, and rule-
based access rules.
● PsExec can exploit the default behavior of launching processes with local SYSTEM
account privileges.
● Detection of suspicious PowerShell parameters should be done using host-based EDR
or protection suites with behavioral analysis routines.
● Post-exploitation allows an attacker to manipulate files with SYSTEM account privileges
and open a reverse shell connection for further actions.
Injecting a Keylogger:
● A keylogger can capture user activity, but its suspicious behavior risks alerting defenders to the compromise.
● Discovering modern malware with administrative privileges can be challenging; check for
network communication to its handler.
● Process Monitor records process interactions with the system, including Registry key
usage, and helps analyze operations.
● Autoruns shows autostart processes and their configurations in the Registry and file
system.
● System Monitor (sysmon) logs security-relevant event types.
● EDR configurations need tuning to reduce false positives and to share threat
intelligence.
● Custom malware signatures can be developed and shared through industry portals or
with security vendors.
● Blacklisting blocks known threats but risks false positives and may not cover all threats.
● Whitelisting allows only trusted elements but can be restrictive and requires constant
fine-tuning.
● Execution control enforces what software can be installed beyond a baseline and can
use whitelisting or blacklisting.
Configuration Changes:
● Maintain and update blacklists and whitelists in response to incidents and threat
monitoring.
● Consider strategic changes like adopting a "least privileges" model for increased security
but assess the potential impact.
● Internet email headers contain sender and recipient addresses, plus details about the
servers handling email transmission.
● Multiple servers are involved in the email's journey from sender to recipient, and each
adds information to the header.
● Headers consist of three "sender" address fields: Display from, Envelope from, and
Received from/by.
● Analyzing these headers can reveal the true origin of an email, especially in cases of
spoofing.
● Emails can carry malicious payloads, including exploits targeting vulnerabilities in email
clients or file attachments designed to trick users.
● Attackers also use embedded links in emails, which may lead to malicious sites.
● Analyzing the email's body content, MIME format, and embedded links can help detect
malicious content.
● Sender Policy Framework (SPF) and DomainKeys Identified Mail (DKIM) authenticate
email senders.
● Domain-Based Message Authentication, Reporting, and Conformance (DMARC) helps
ensure the effective use of SPF and DKIM.
● These mechanisms can be used to identify authorized email servers and prevent email
spoofing.
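SPF and DMARC policies are published as DNS TXT records. A sketch of what a receiving server extracts from them (the records below are illustrative, not real lookups):

```python
# Hypothetical published records for a domain (syntax follows the SPF/DMARC
# record formats; the addresses are invented).
spf = "v=spf1 ip4:198.51.100.0/24 include:_spf.mail.example -all"
dmarc = "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"

# The SPF mechanisms list the authorized senders; '-all' means everything
# else hard-fails authentication.
mechanisms = spf.split()[1:]
print(mechanisms)

# The DMARC policy tells receivers what to do with mail that fails SPF/DKIM.
policy = dict(item.strip().split("=", 1) for item in dmarc.split(";"))
print(policy["p"])   # 'reject' instructs receivers to drop unauthenticated mail
```

A `p=reject` policy, backed by aligned SPF/DKIM results, is what actually stops spoofed mail from reaching users.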
● SMTP logs record the exchange of requests and responses between local and remote
email servers.
● Status codes in SMTP logs indicate the success or failure of email transmission.
● Analyzing SMTP logs helps identify email issues and potential security concerns.
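SMTP reply codes follow the RFC 5321 convention: the first digit indicates success (2xx), an intermediate step (3xx), a transient failure (4xx), or a permanent failure (5xx). A sketch of triaging codes from a log (the timestamps are invented):

```python
def classify_smtp_reply(code: int) -> str:
    """Classify an SMTP reply code by its leading digit (RFC 5321 convention)."""
    return {2: "success", 3: "intermediate", 4: "transient failure",
            5: "permanent failure"}.get(code // 100, "unknown")

# Hypothetical log excerpt: (timestamp, reply code) pairs.
log = [("14:01:02", 250), ("14:01:09", 421), ("14:01:15", 550)]
for ts, code in log:
    print(ts, code, classify_smtp_reply(code))
```

A spike in 5xx replies to a domain, for example, can indicate misconfiguration or that the organization's mail servers have landed on a blocklist.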
● Security Information and Event Management (SIEM) systems apply rules to data inputs
and generate alerts for analysts to investigate.
● SIEMs produce both false positives and false negatives, making it crucial to understand how alerts are generated.
● Simple correlation methods in SIEM include signature detection and rules-based
policies, but they tend to produce many false positives and are blind to new threats.
● Heuristic analysis and machine learning enhance simple correlation methods by
analyzing data points and context to generate alerts.
● Human analysts are slow compared to automated systems and cannot handle the
volume of data.
● Machine learning allows systems to adapt and respond to evolving threats.
Behavioral Analysis:
Anomaly Analysis:
● Anomaly analysis identifies events that don't conform to expected patterns or rules.
● It doesn't rely on known malicious signatures, reducing false negatives.
● It can check network traffic or host-based events against established standards and
raise alerts if deviations occur.
Trend Analysis:
● Trend analysis identifies patterns within datasets over time to predict future events and
detect relationships between events.
● It can help in predicting attacks and understanding the nature of incidents.
● Trend analysis involves frequency-based, volume-based, and statistical deviation
analysis.
● Metrics for trend analysis include the number of alerts and incidents, network and host
metrics, threat awareness education, compliance, and externally measured threat levels.
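Statistical deviation analysis, mentioned above, can be sketched with a z-score over a metric such as daily alert counts (the counts below are fabricated):

```python
import statistics

# Hypothetical daily alert counts; the final value is today's count.
alerts = [41, 38, 45, 40, 39, 44, 42, 37, 43, 40, 41, 39, 44, 97]

mean = statistics.mean(alerts[:-1])    # baseline from the prior days
stdev = statistics.stdev(alerts[:-1])
z = (alerts[-1] - mean) / stdev        # how far today deviates from the trend

if z > 3:
    print(f"statistical deviation: today is {z:.1f} sigma above baseline")
```

A common convention is to alert above three standard deviations; in practice the baseline window and threshold are tuned per metric.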
● Correlation rules interpret relationships between data points and diagnose significant
incidents.
● Correlation rules use logical expressions and operators to match conditions.
● Queries extract records based on conditions from stored data.
● Regular expressions are used to search for patterns in string data.
● The "grep" command in Unix-like systems and "findstr" in Windows support string
searches using regular expressions.
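The same pattern matching grep performs can be reproduced in a script. A sketch that prints only the lines containing an IPv4 address (the log lines are invented):

```python
import re

# Rough Python equivalent of: grep -E '([0-9]{1,3}\.){3}[0-9]{1,3}' proxy.log
ip_pattern = re.compile(r"(?:\d{1,3}\.){3}\d{1,3}")

lines = [
    "GET http://update.example/check 200 10.1.0.22",
    "CONNECT tunnel.badsite.example:443 403 10.1.0.35",
    "no address on this line",
]

for line in lines:
    if ip_pattern.search(line):   # like grep, emit only matching lines
        print(line)
```

The equivalent Windows command would be `findstr` with its regular-expression switch, though its regex dialect is more limited.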
Scripting Tools:
● Bash and PowerShell are scripting languages used to automate data analysis tasks.
● Awk is a scripting engine used to modify and extract data.
● WMIC (Windows Management Instrumentation Command-line) can be used to query logs and system information on remote Windows machines.
● PowerShell offers advanced functionality, cmdlets, and the ability to execute scripts for
managing Windows systems.
3. Disk Image Acquisition: This refers to acquiring data from non-volatile storage media
like hard disk drives (HDDs) and solid-state drives (SSDs). There are different methods
for acquisition, such as live acquisition (while the computer is running), static acquisition
by shutting down the computer, or static acquisition by pulling the plug. It's crucial to
document the acquisition process.
4. Write Blockers: Write blockers ensure that the image acquisition tools do not change
the source disk's data. These can be hardware devices or software running on the
forensics workstation.
5. Imaging Utilities: Once the target disk is connected, imaging utilities are used to create
a cryptographic hash of the data and obtain a bit-by-bit copy of the disk contents.
Different tools and formats are mentioned, including vendor-specific file formats like
.e01.
6. Hashing: Creating cryptographic hashes of disk contents is essential for proving that the
data has not been tampered with. Secure Hash Algorithm (SHA) and Message Digest
Algorithm (MD5) are mentioned as common methods for hashing.
7. File Integrity and Changes to Binaries: Hash values can be used to check the integrity
of files, especially for operating system and application binaries. Differences in hash
values might indicate changes, which can be investigated for potential malware.
8. Timeline Generation and Analysis: Timelines are constructed to establish a
chronological order of events, helping in forensic investigations. The timeline can provide
insights into how an adversary gained access, installed tools, made changes, retrieved
data, and potentially exfiltrated data.
9. Carving: File carving involves extracting data from an image when there's no associated
file system metadata. This process is based on file signatures and is used to reconstruct
deleted files or data fragments from unallocated and slack space.
10. Chain of Custody: The chain of custody refers to documenting the handling of evidence
from collection to presentation in court. It ensures the integrity and proper custody of
evidence. Physical devices need to be properly identified, labeled, and stored, and
metadata should be created to describe evidence characteristics. Adequate physical
security measures are also crucial for preserving evidence.
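The hashing and file-integrity steps above (items 6 and 7) can be sketched with the standard library; the "evidence" bytes and binary names are stand-ins for real acquired data:

```python
import hashlib

evidence = b"bit-for-bit copy of the acquired disk image"

# 1. Hash the image at acquisition time and record the value with the
#    chain-of-custody documentation.
acquisition_hash = hashlib.sha256(evidence).hexdigest()

# 2. Before analysis (or in court), re-hash the stored image and compare
#    to prove the data has not been tampered with.
print("integrity intact:", hashlib.sha256(evidence).hexdigest() == acquisition_hash)

# 3. The same comparison validates OS/application binaries against
#    known-good hashes; a mismatch is worth investigating for malware.
known_good = {"utility.exe": hashlib.md5(b"original binary contents").hexdigest()}
suspect = hashlib.md5(b"trojanized binary contents").hexdigest()
print("binary modified:", suspect != known_good["utility.exe"])
```

MD5 is shown only because it still appears in legacy forensic tooling; it is collision-prone, so SHA-256 is the safer choice for new work.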
Memory Overflow:
● Memory overflow, or buffer overflow, can be used by attackers to execute arbitrary code.
● A memory leak may indicate an attempt at a buffer overflow.
● Analyze file system changes and anomalies, even if malware isn't saved to disk.
● Metadata about file creation, access, and modification can help establish timelines.
● Attackers often aim to exfiltrate data; their motives may evolve as they gain access.
● Data staging techniques include temporary files, user profiles, alternate data streams,
etc.
● Data may be compressed and encrypted for exfiltration.
Cryptography Tools:
● Cryptography analysis tools help determine encryption types and key strength.
● In some cases, decryption keys can be recovered from system memory.
Persistence Indicators:
● Application services may fail to start or stop unexpectedly for various reasons.
● Service interruption may lead to suspicions of cybersecurity issues.
● Causes can include adversaries preventing security services from running, compromised
service processes, DoS/DDoS attacks, or excessive bandwidth usage.
● Tools to monitor running services in Windows include Task Manager, Services.msc, net
start, and the Get-Service PowerShell cmdlet.
● Linux offers commands like who, w, and rwho to monitor user sessions.
● The lastlog command provides log-on history.
● User account creation and authentication attempts are logged in /var/log/auth.log or
/var/log/secure.
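Those authentication logs can be mined for IoCs such as brute-force attempts and suspicious account creation. A sketch over a fabricated /var/log/auth.log excerpt:

```python
import re
from collections import Counter

# Hypothetical /var/log/auth.log excerpt (addresses and names invented).
auth_log = """\
Oct 12 14:01:02 host sshd[1203]: Failed password for invalid user admin from 203.0.113.77 port 52144 ssh2
Oct 12 14:01:04 host sshd[1203]: Failed password for invalid user admin from 203.0.113.77 port 52146 ssh2
Oct 12 14:02:11 host sshd[1300]: Accepted password for jsmith from 198.51.100.9 port 52150 ssh2
Oct 12 14:03:30 host useradd[1350]: new user: name=backdoor, UID=0, GID=0, home=/root, shell=/bin/bash
"""

# Repeated failures from one source suggest password guessing.
failed = Counter(
    m.group(1)
    for m in re.finditer(r"Failed password .* from (\S+) port", auth_log)
)
print(failed.most_common())

# A newly created UID-0 account is a classic persistence indicator.
new_users = re.findall(r"useradd.*name=(\w+), UID=(\d+)", auth_log)
print(new_users)
```

The same queries are typically expressed as grep pipelines or SIEM correlation rules in production.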
● In incident response, the OODA loop (Observe, Orient, Decide, Act) model is used to
make tactical decisions in analyzing and responding to specific incidents.
● It helps in maintaining clarity and decisiveness when responding to stressful and intense
situations, such as cybersecurity incidents.
● Defensive capabilities are categorized in the courses of action (CoA) matrix, mapping to
each stage of an adversary's kill chain.
● Defensive capabilities include Detect, Destroy, Degrade, Disrupt, Deny, and Deceive.
● These capabilities are used to address various stages of an adversary's attack.
● Incident detection and analysis rely on both manual and automated detection
mechanisms.
● Identifying indicators of compromise (IoCs) from various sources is crucial.
● Accurate analysis is needed to distinguish between false positives and real incidents.
● Using security information and event management (SIEM) tools helps aggregate and
analyze data.
Impact Analysis:
● Factors affecting impact analysis include data integrity, system process criticality,
downtime, economic consequences, and data correlation.
● Incident security level classification helps categorize incidents based on their impact.
Containment:
Reconstruction/Reimaging:
● Restoring the host software and settings through reimaging using clean backups or
templates.
● Reconstructing a system using a configuration template or scripted install from trusted
media.
● Ensuring that a sanitized system is free from infection.
Reconstitution of Resources:
● Steps for manually reconstituting a resource when reimaging is not possible, focusing on
malware removal and system cleanup.
● Disabling autostart locations to prevent malicious processes from executing.
● Replacing contaminated OS and application processes with clean versions.
● Continuing to monitor the system after reconstitution.
Recovery:
● Recovery aims to restore capabilities and services, depending on the nature of the
incident.
● Examples of recovery scenarios include data restoration, rebooting servers, and
malware removal.
● Patching vulnerable systems is crucial to prevent future incidents.
● Restoration of permissions, verification of logging and communication to security
monitoring, and vulnerability mitigation through system hardening are discussed.
Post-Incident Activities:
● Post-incident activities include report writing, evidence retention, lessons learned, and
incident response plan updates.
● Report writing should convey technical information to non-technical executives,
emphasizing the impact, security policy changes, and recommendations.
● Evidence must be preserved for legal or regulatory purposes.
● Lessons learned meetings are conducted to analyze incidents, determine root causes,
and improve procedures.
● Lessons learned reports serve as the basis for incident summary reporting.
● Indicator of Compromise (IoC) generation and monitoring for improved detection is
emphasized.
● Corrective actions and controls should follow the change control process, ensuring
minimal impact on business functions.
● The selection of security controls depends on various factors, including regulations, cost,
and risk impact.
● The Return on Security Investment (ROSI) helps evaluate the cost-effectiveness of
security controls.
Engineering Tradeoffs:
● Risk scenarios must be articulated in plain language, explaining the cause, effect, and
business impact.
● Effective communication ensures that stakeholders understand the risks associated with
their workflows.
Risk Register:
● A risk register documents the results of risk assessments, including risk ratings,
descriptions, and countermeasures.
● It should be shared among stakeholders to enhance risk visibility.
● Compensating controls are used when a standard control cannot be implemented due to
business or technical reasons.
● They require documentation to demonstrate their effectiveness and consistent use.
Exception Management:
● Training and exercises are essential for ongoing risk management and security control
validation.
● Tabletop exercises are facilitated training events where participants respond to risk
scenarios.
● Penetration testing involves actively trying to exploit vulnerabilities to test security
controls.
● Red and blue team exercises simulate adversarial attacks and defense, with a white
team overseeing and reporting the results.
crack WEP encryption. However, it's mostly effective against outdated WEP
security.
○ Reaver: Reaver targets Wi-Fi Protected Setup (WPS) vulnerabilities to gain
unauthorized access to wireless networks. It can crack WPS PINs through brute
force attacks, but it's not effective against networks with strong security
measures.
Report Confidentiality:
● Vulnerability scan reports are logged and should be treated as highly confidential.
● Access to these reports should be limited to authorized administrators.
● Some tools allow for automated distribution of reports via email or alerts for non-
compliance.
Common Identifiers:
● Vulnerability scan reports often use CVSS metrics to rate the severity of vulnerabilities.
● The CVSS scores vulnerabilities from 0 (none) to 10 (critical).
● These scores are based on various metrics, including access vector, access complexity,
privileges required, user interaction, scope, and confidentiality, integrity, and availability
impacts.
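CVSS v3.x maps base scores to qualitative severity bands (0.0 None, 0.1-3.9 Low, 4.0-6.9 Medium, 7.0-8.9 High, 9.0-10.0 Critical). A sketch of triaging a scan report with that mapping (the findings list is illustrative):

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS v3.x base score to its qualitative severity rating."""
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

# Hypothetical scan findings: remediate the highest-severity items first.
findings = {"CVE-2021-44228": 10.0, "CVE-2019-0708": 9.8, "weak TLS cipher": 3.7}
for vuln, score in sorted(findings.items(), key=lambda kv: kv[1], reverse=True):
    print(vuln, score, cvss_severity(score))
```

Note that CVSS base scores ignore local context; environmental metrics or an internal risk register should adjust the final priority.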
● Vulnerability scans can produce false positives (incorrectly identifying issues that don't
exist) and false negatives (missing real issues).
● Addressing false positives may involve adjusting scan scopes, updating baselines, or
adding exceptions.
● False negatives can be mitigated through repeated scans, different scan types, and the use of multiple scanners or vulnerability databases.
● Validating scan results involves reconciling them with your knowledge of the
environment.
● This can include reconciling results, correlating results with other data sources,
comparing to best practices, and identifying exceptions.
Scanner Examples:
Certificate Management:
● Digital certificates are crucial for machine and user identity assurance.
● Root certificates are vital, and their compromise can be a high-value target.
● Certificates are used for authentication, encryption, and digital signatures.
● OpenSSL and certutil are tools for creating and managing certificates.
● SSH key management is essential to prevent data breaches.
Federation:
Privilege Management:
● Auditing and monitoring are critical to detect insider threats and unauthorized access.
● Logs should include access attempts and changes to system configuration.
● SIEM systems and rule-based monitoring can automate log analysis.
● Manual review ensures proper account and privilege management.
● A code of conduct and privileged user agreement (PUA) sets ethical behavior
expectations.
● Acceptable use policies (AUP) govern the proper use of equipment and services.
● AUPs prevent misuse of equipment and protect organizations from legal issues.
Network Architecture:
● Physical network architecture refers to cabling, switch ports, router ports, and wireless
access points, which can introduce vulnerabilities if not secured.
● Security controls such as physical security measures, authentication, and endpoint
security help protect physical network components.
● Adversaries may attempt to exploit physical access points using devices like Wi-Fi
Pineapple.
● VPNs (Virtual Private Networks) are used to secure remote access to internal network
resources.
● Software-Defined Networking (SDN) abstracts network functions into control, data, and
management planes, simplifying network configuration and enhancing security.
Segmentation:
● Network segmentation divides the network into distinct subnetworks to limit the spread of
compromises.
● Segmentation creates secure zones, such as DMZs, management interfaces, and audit
and logging zones.
● VLANs (Virtual LANs) and firewalls are commonly used for segmentation.
● System isolation or "air gap" physically separates networks or hosts with special security
requirements from others.
● Logical isolation via firewalls or VPNs may also be used to protect sensitive hosts.
● Physical segmentation uses separate switches and routers for network segments.
● Virtual segmentation leverages VLANs and is more cost-effective and flexible.
● When onboarding new vendors, suppliers, or partners, due diligence should confirm that
they meet minimum standards for cybersecurity risk management, security assurance,
product support lifecycle, and more.
● For high-value data processing, it's essential to verify every stage of the supply chain,
including electronics manufacturing, to ensure no backdoors or monitoring mechanisms
are present.
● The US Department of Defense (DoD) operates the Trusted Foundry Program to ensure
secure supply chain operations.
● Organizations should purchase hardware from reputable suppliers to avoid counterfeit or
compromised devices.
● A hardware root of trust (RoT) or trust anchor is a secure subsystem that provides
attestation for system integrity.
● Trusted Platform Module (TPM) is a common RoT, often found in computers, and used
to verify system integrity.
● TPM helps verify and secure the boot process and system integrity.
● A hardware security module (HSM) is used for secure key management, especially where multiple entities require secure key pairs for encryption.
● HSMs automate key management processes and minimize the risk of human
compromise.
● HSMs come in various form factors and can be used for enterprise key management.
Anti-Tamper:
● Anti-tamper mechanisms use field programmable gate arrays (FPGAs) and physically unclonable functions (PUFs) to detect tampering with hardware.
● These mechanisms can automatically take remedial actions like zero-filling
cryptographic keys.
Trusted Firmware:
eFUSE:
Secure Processing:
● Secure processing is designed to protect sensitive data in memory from malicious code.
● Processor security extensions enable secure processing, and trusted execution ensures
a trusted OS is running.
● Secure enclaves secure sensitive data in an encrypted container, preventing attacks like
buffer overflows.
○ Embedded systems are static environments that are ideal for security but can be
black boxes to security administrators.
○ Updates for embedded systems are possible but should be carefully controlled.
6. Vulnerabilities Associated with Controller Systems:
○ Industrial systems prioritize safety, availability, and integrity over confidentiality.
○ Workflow and process automation systems are often used to control critical
infrastructure, and vulnerabilities can have significant consequences.
7. Mitigation for Vulnerabilities in Specialized Systems:
○ Recommendations include hiring staff with expertise in operational technology
(OT) networks, minimizing connections to OT networks, developing and testing a
patch management program, and regularly auditing logical and physical access
to OT systems.
8. Vulnerabilities Associated with Premises and Vehicular Systems:
○ Building automation and physical access control systems can have vulnerabilities
related to PLCs, plaintext credentials, and code injection.
○ Gaining physical access to these systems may lead to further attacks.
○ Vehicles and drones with CAN bus systems may be vulnerable to attacks due to
the lack of source addressing and message authentication in the CAN bus
protocol. Remote access to these systems can also pose risks.
Act (GLBA), the Federal Information Security Management Act (FISMA), and more.
These legal requirements are non-technical controls that organizations must adhere to.
5. Personal Data Processing Policies: The text outlines privacy principles and policies,
including purpose limitation, data minimization, data sovereignty, and data retention.
These policies are non-technical controls aimed at ensuring the proper handling of
personal data.
6. Data Ownership Policies and Roles: The text defines various roles within an
organization related to data ownership and stewardship, such as data owner, data
steward, data custodian, and privacy officer. These roles are responsible for managing
and protecting data, and they represent non-technical controls.
7. Data Sharing and Privacy Agreements: The text discusses legal agreements like
Service Level Agreements (SLAs), Interconnection Security Agreements (ISAs), Non-
Disclosure Agreements (NDAs), and Data Sharing and Use Agreements. These
agreements are non-technical controls used to formalize the responsibilities and
expectations related to data sharing and privacy.
● Access control models apply to data security, often used in network, file system, and
database security.
● File systems use Access Control Lists (ACLs) to specify permissions for objects, such as
files and directories.
● Database security involves securing various database objects like tables, views, rows,
and columns.
● Geographic access requirements involve data sovereignty and controlling access from
different locations.
● Data breaches often result from incorrect permissions, which can be identified through
audits.
● Tools like 'icacls' (Windows) and 'chmod' (Linux) allow configuration and modification of
file permissions.
● Linux permissions consist of read, write, and execute, with settings for owner, group,
and others.
● Absolute mode uses octal notation (r=4, w=2, x=1) to set permissions.
● More advanced permission schemes can be configured using special permissions
and ACLs.
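The octal arithmetic above (r=4, w=2, x=1, summed per owner/group/others triplet) can be sketched as a small Python helper; the function name is illustrative, not a standard library call:

```python
def symbolic_to_octal(perms: str) -> str:
    """Convert a 9-character symbolic permission string (e.g. 'rwxr-x---')
    to its octal form (e.g. '750'): r=4, w=2, x=1, summed per triplet."""
    assert len(perms) == 9
    digits = []
    for i in range(0, 9, 3):              # owner, group, others
        triplet = perms[i:i + 3]
        value = 0
        if triplet[0] == "r": value += 4
        if triplet[1] == "w": value += 2
        if triplet[2] == "x": value += 1
        digits.append(str(value))
    return "".join(digits)

print(symbolic_to_octal("rwxr-x---"))  # 750
print(symbolic_to_octal("rw-r--r--"))  # 644
```

On a live Linux system the same mode would be applied with `chmod 750 file` or, from Python, `os.chmod(path, 0o750)`.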
Encryption:
● Encryption safeguards data against unauthorized access and is used for data at rest,
data in transit, and data in use.
● Data at rest is stored on persistent media, which can be encrypted using whole disk
encryption, database encryption, etc.
● Data in transit is protected using transport encryption protocols like TLS or IPsec.
● Data in use is stored in volatile memory and can be encrypted to prevent unauthorized
access.
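To illustrate the symmetric-encryption concept behind data-at-rest protection, here is a toy stream cipher that derives a keystream by hashing a key, nonce, and counter, then XORs it with the data. This is a teaching sketch only — real deployments should use a vetted algorithm such as AES-GCM from an audited library, never a hand-rolled construction:

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream by hashing key + nonce + counter.
    Illustrative only -- use a vetted cipher (e.g. AES-GCM) in practice."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_encrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR data with the keystream; applying the same operation twice decrypts."""
    ks = keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

secret = b"card number 4111-1111-1111-1111"
ct = xor_encrypt(b"key", b"nonce", secret)
assert xor_encrypt(b"key", b"nonce", ct) == secret   # round-trips
assert ct != secret                                   # ciphertext differs
```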
● DLP products automate data discovery, classification, and policy enforcement to prevent
unauthorized data access or transfer.
● Components include a policy server, endpoint agents, and network agents.
● DLP agents scan structured and unstructured data, preventing data leakage through
various means.
● Remediation actions include alerting, blocking, quarantining, or using tombstone
techniques.
● DLP uses methods like classification, dictionaries, policy templates, exact data match,
document matching, and statistical analysis.
● DLP helps to protect data with confidentiality classifications or sensitive data types.
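The dictionary/pattern-matching method above can be sketched with a few regexes; the pattern set and function names here are hypothetical, and commercial DLP products use far richer matching (exact data match, document fingerprinting, statistical analysis):

```python
import re

# Hypothetical DLP dictionary: regex patterns for sensitive data types.
DLP_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def scan(text: str) -> list[str]:
    """Return the names of sensitive data types detected in the text."""
    return [name for name, pat in DLP_PATTERNS.items() if pat.search(text)]

print(scan("SSN 123-45-6789 on file"))        # ['ssn']
print(scan("nothing sensitive here"))          # []
```

A network agent applying such rules could then alert, block, or quarantine the message depending on the policy server's configuration.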
Secure Development Life Cycle (SDL):
● The SDL runs parallel to, or integrated with, the development life cycle's focus on software
functionality and usability.
● SDL incorporates threat, vulnerability, and risk-related controls within the life cycle to
produce systems that are secure by design.
● Examples of SDL frameworks include Microsoft's SDL and the OWASP Software
Security Assurance Process.
● Planning phase involves training developers and testers in security, acquiring security
analysis tools, and ensuring a secure development environment.
● Requirements phase determines security and privacy needs regarding data processing
and access controls.
● Design phase identifies threats, controls, and secure coding practices to meet
requirements.
● Implementation phase includes white-box source code analysis and code review to
identify and resolve vulnerabilities.
● Testing phase involves black-box or grey-box analysis to test for vulnerabilities in the
published application.
● Deployment phase includes source authenticity verification of installer packages and
best practice configuration.
● Secure coding standards provide rules and guidelines for developing secure software
systems.
● Open Web Application Security Project (OWASP) provides resources on secure
programming, web app vulnerabilities, and best practices.
● The SysAdmin, Audit, Network, and Security (SANS) Institute offers research, white papers, and
best practice guidance on secure coding.
● Attacks against software code aim to run the attacker's code on the system.
● Arbitrary code execution allows attackers to run their code on the system.
● Privilege escalation occurs when a user gains access to additional resources or
functionality they are not normally allowed to access.
● Types of privilege escalation include vertical (gaining higher-privileged access) and
horizontal (accessing another user's resources at the same privilege level).
Rootkits:
● Rootkits are tools with root-level access to the computing device, allowing unrestricted
access.
● Kernel mode rootkits can gain complete control over the system and require low-level
access.
● User mode rootkits work within the user-level processes and are less privileged.
● Buffer overflow attacks target the stack or heap, potentially allowing arbitrary code
execution.
● Integer overflow attacks can cause unexpected behavior in software, like changing a
debit to a credit or altering buffer sizes.
● Memory layout, languages used, and security measures can mitigate overflow issues.
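Python integers do not overflow, so the wraparound behavior described above can be demonstrated by forcing a value into a C-style signed 32-bit integer with `ctypes`; the banking scenario is the same hypothetical one the note mentions:

```python
import ctypes

def add_i32(a: int, b: int) -> int:
    """Add two values as a C-style signed 32-bit integer, wrapping on overflow."""
    return ctypes.c_int32(a + b).value

balance = 2_147_483_647           # INT32_MAX: a large positive balance
print(add_i32(balance, 1))        # -2147483648: overflow flips the sign,
                                  # e.g. turning a debit into a credit
```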
● Race conditions occur when the order and timing of events affect the outcome of
execution processes.
● Race conditions can be exploited to trigger a null pointer dereference, causing a crash or
an exploitable fault.
● Time of check to time of use (TOCTTOU) race conditions can lead to exploits in which a
resource is changed between checking and using it.
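A minimal sketch of the TOCTTOU gap: the first function checks access and then opens the file, leaving a window in which the file could be swapped (e.g. replaced with a symlink); the second simply attempts the operation and handles failure, closing the gap:

```python
import os
import tempfile

def read_unsafe(path: str) -> str:
    """TOCTTOU-prone: the file can be changed between the
    os.access() check and the open() use."""
    if os.access(path, os.R_OK):          # time of check
        with open(path) as f:             # time of use
            return f.read()
    raise PermissionError(path)

def read_safe(path: str) -> str:
    """Safer: attempt the operation directly and handle failure,
    removing the check/use window."""
    try:
        with open(path) as f:
            return f.read()
    except OSError as e:
        raise PermissionError(path) from e

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
    tmp.write("data")
print(read_safe(tmp.name))  # data
os.unlink(tmp.name)
```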
Persistent XSS:
● The attacker submits a post containing a malicious script, which is stored by the server and
executes when other users view the message.
● Exploits server-side storage and rendering of user-supplied content.
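The standard mitigation is output encoding: escaping user input before embedding it in HTML so a stored script renders as inert text. A minimal sketch using Python's standard-library `html.escape` (the `render_post` wrapper is hypothetical):

```python
import html

def render_post(message: str) -> str:
    """Escape user input before embedding it in HTML so a stored
    script is displayed as text instead of executing."""
    return "<p>" + html.escape(message) + "</p>"

print(render_post("<script>steal(document.cookie)</script>"))
# <p>&lt;script&gt;steal(document.cookie)&lt;/script&gt;</p>
```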
XML Attacks:
● XML is used for authentication, data exchange, and uploading in web apps.
● Vulnerable to spoofing, request forgery, and arbitrary data/code injection.
● Types of attacks include the XML bomb (Billion Laughs) and XML External Entity (XXE) injection.
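The Billion Laughs structure can be shown by building (not parsing) the payload: each entity expands to ten copies of the previous one, so a few kilobytes of XML balloon to a billion strings when a naive parser expands entities. Never parse untrusted XML with entity expansion enabled; hardened parsers (e.g. the third-party defusedxml package) reject such documents:

```python
# Construct (do NOT parse) a "Billion Laughs" payload to show its shape:
# ten nesting levels, each entity referencing the previous one ten times.
levels = 10
entities = ['<!ENTITY lol0 "lol">']
for i in range(1, levels):
    refs = ("&lol%d;" % (i - 1)) * 10
    entities.append('<!ENTITY lol%d "%s">' % (i, refs))
payload = "<!DOCTYPE bomb [%s]><bomb>&lol9;</bomb>" % "".join(entities)

print(len(payload))          # a few kilobytes of XML source...
print(10 ** (levels - 1))    # ...expanding to 1,000,000,000 copies of "lol"
```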
Formal Methods and Regression Testing:
● Formal methods are used for critical software where corner cases must be eliminated.
● They require a formal system specification, which can be complex.
● They are valuable in verifying the security of systems.
● Byron Cook's article on formal reasoning about Amazon Web Services is a case study.
● Regression testing verifies that code changes haven't caused existing functionality to
fail.
● Security regression testing focuses on input validation, data processing, and control
logic.
● It identifies broken security mechanisms after code changes.
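A security regression test in this spirit might pin down an input validation routine so that later code changes cannot silently weaken it; the validator and test cases below are hypothetical examples using Python's standard `unittest` framework:

```python
import unittest

def is_valid_username(name: str) -> bool:
    """Input validation under test: 3-16 alphanumeric/underscore characters."""
    return 3 <= len(name) <= 16 and name.replace("_", "").isalnum()

class SecurityRegressionTests(unittest.TestCase):
    """Re-run after every code change to catch broken validation logic."""

    def test_accepts_normal_input(self):
        self.assertTrue(is_valid_username("alice_01"))

    def test_rejects_injection_payloads(self):
        self.assertFalse(is_valid_username("alice'; DROP TABLE users;--"))
        self.assertFalse(is_valid_username("<script>"))

    def test_rejects_boundary_lengths(self):
        self.assertFalse(is_valid_username("ab"))
        self.assertFalse(is_valid_username("a" * 17))

unittest.main(argv=["regression"], exit=False)
```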
Multicloud and Hybrid Clouds:
● Multicloud architectures combine services from more than one cloud provider; hybrid clouds
combine public and private cloud components.
● A Virtual Private Cloud (VPC) provides an isolated virtual network within a public cloud.
● Consumers are responsible for configuring and securing their network components within the VPC.
Cloud Forensics:
● Forensic analysis in the cloud can be challenging due to virtualized resources and
dispersed data.
● On-demand cloud services make data recovery difficult.
● Chain of custody issues can arise, and reliance on cloud service providers may be
required.
● Continuous Integration (CI) emphasizes frequent code commits and automated testing
to detect and resolve conflicts early in development.
● Continuous Delivery (CD) involves testing all infrastructure components supporting the
app.
● Continuous Deployment is the process of making changes to the production
environment to support a new app version.
DevSecOps:
● DevSecOps makes security a shared responsibility embedded throughout the CI/CD pipeline,
rather than a final gate before release.
Machine Learning:
● Machine learning uses algorithms and data to develop strategies for tasks such as
identifying objects or detecting patterns.
● Deep learning involves neural networks with multiple hidden layers to make more
informed determinations about complex concepts.
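A minimal sketch of the learning idea: a single perceptron adjusts its weights from example data until it reproduces the OR function, rather than being hand-coded with the rule. Deep learning stacks many such units into multi-layer networks:

```python
# Minimal sketch: a single perceptron learns the OR function from examples,
# illustrating how weights are adjusted from data instead of hand-coded.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w0, w1, bias, lr = 0.0, 0.0, 0.0, 0.1   # weights, bias, learning rate

for _ in range(20):                      # training epochs
    for (x0, x1), target in data:
        pred = 1 if w0 * x0 + w1 * x1 + bias > 0 else 0
        err = target - pred              # nudge weights toward the target
        w0 += lr * err * x0
        w1 += lr * err * x1
        bias += lr * err

for (x0, x1), target in data:            # the learned model matches OR
    assert (1 if w0 * x0 + w1 * x1 + bias > 0 else 0) == target
print("learned OR:", round(w0, 2), round(w1, 2), round(bias, 2))
```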