Security Operations Maturity Model White Paper
A practical guide to assessing and improving the maturity of
your security operations through Threat Lifecycle Management
TABLE OF CONTENTS
Introduction
    The Necessity of a Balanced Security Approach
Obstacles to Faster Threat Detection and Response
    Information Overload and Alarm Fatigue
    Lack of Centralized Forensic Visibility
    Swivel-Chair Analysis
    Ineffective Holistic Threat Detection
    Fragmented Workflow
    Lack of Automation
Conclusion
Security Operations Maturity Model
This white paper explores how to assess and evolve the principal programs of
the security operations center (SOC): threat monitoring, threat hunting, threat
investigation, and incident response. LogRhythm developed the Threat Lifecycle
Management (TLM) framework to help organizations ideally align technology,
people, and process in support of these programs. The TLM framework
defines the critical security operations technological capabilities and workflow
processes that are vital to realize an efficient and effective SOC. LogRhythm’s
Security Operations Maturity Model (SOMM) helps organizations measure
the effectiveness of their security operations and mature their security
operations capabilities. Using our TLM framework, the SOMM provides a practical
guide for organizations that wish to optimally reduce their mean time to detect
(MTTD) and mean time to respond (MTTR) — thereby dramatically improving their
resilience to cyberthreats.
Introduction
The Necessity of a Balanced Security Approach
Organizations globally are being compromised by sophisticated cyberattacks
at an unprecedented rate and with devastating and costly consequences.
A 2018 CyberEdge survey of 1,200 global IT security professionals representing
organizations with 500 or more employees indicates that 77 percent of surveyed
organizations were compromised during the 12 months preceding the study.1
Modern threat actors include criminal organizations motivated by financial
gain, ideologically driven groups that seek to disrupt or discredit their targets,
malicious insiders driven by profit or revenge, and nation-states and state-
sponsored organizations engaged in covert operations and industrial espionage
targeting both public and private interests.
These threat actors are highly motivated and well-funded. They often have
software development capabilities that rival those of mainstream technology
innovators and will go to extreme lengths to achieve their objectives. The
emergence of an increasingly mature cybercrime supply chain and underground
economy that support these threat actors serves to heighten their capabilities
and increase their ranks. In fact, Cybercrime-as-a-Service (CaaS) is estimated to
generate more than $1 trillion in annual revenue.2 The nature of a cyber-incident,
meanwhile, is such that the cost is significant and increases as an attack’s
lifecycle progresses. A 2018 Mandiant report indicates that threat actors were
present on victims’ networks for a median of 101 days before being detected.3
The longer an attacker can remain undetected within an organization, the more
data of value they can exfiltrate, the more pervasive the effort required to
neutralize and recover from the threat and, consequently, the more damaging
and expensive the incident.
Obstacles to Faster Threat Detection and Response
Even as security budgets rise and organizations place increasing emphasis on a
more balanced cybersecurity strategy that addresses detection and response
alongside prevention, significant reductions in MTTD and MTTR have been
difficult to realize due to six common obstacles.
3. Swivel-Chair Analysis
Because of its investment in multiple point security products, an organization’s
security team must triage and investigate threats by moving back and forth
among numerous product user interfaces to develop a complete picture of a
cyberthreat and assess its risk. This inefficient and disjointed process — often
referred to as “swivel-chair analysis” — is time-consuming, does not scale, and
is prone to errors and inconsistent results.
4. Ineffective Holistic Threat Detection
One of the most common obstacles in detecting and remediating threats is the
failure to realize central and holistic visibility into threats across the extended IT
landscape. First-generation SIEMs and other point analytics solutions have tried
to serve this need, but they lack the depth and breadth of centralized forensic
data and of business and operational risk context. Furthermore, they lack the ability to
perform analytics across all attack surfaces — whether user, network, or endpoint
— and consequently cannot corroborate activity across those attack surfaces to
detect advanced threats. Products that perform point analytics on specific attack
surfaces are vulnerable both to more false negatives, because they lack visibility
into the full scope of threat indicators, and to more false positives, where
additional context could have ruled out the potential threat activity.
5. Fragmented Workflow
To facilitate collaboration across members of the threat detection, threat
investigation, and incident response teams, security teams likely utilize multiple
disjointed communications tools and techniques, including point security
products, IT ticketing systems, email, spreadsheets, and shared online document
stores. The disparate nature of these approaches prevents alignment of people
and processes in the security operation and introduces inefficient workflow,
inability to create consistent, repeatable processes, and extends ramp time
of new team members.
6. Lack of Automation
Organizations must perform numerous tasks to effectively triage, investigate,
neutralize, and recover from a threat. Many of these tasks are routine, repetitive,
and time-consuming. Automation allows analysts to focus on higher-value
activities. However, it becomes increasingly difficult to implement automation
solutions when leveraging multiple point security tools with independent data
silos. Without automation of preapproved actions, security teams cannot act to
immediately neutralize threats, and system changes can often sit in IT ticketing
queues for hours or days.
Materially reducing cyberthreat MTTD and MTTR is only possible when these
traditional obstacles are overcome. This allows organizations to detect and
neutralize threats early in the Cyberattack Lifecycle, thereby avoiding
damaging cyber-incidents.
Understanding the Cyberattack Lifecycle
When a threat actor targets an organization’s environment, a process unfolds
from initial intrusion through eventual data breach. Whether the attacker is a
lone actor, a criminal group, or a nation-state operations unit, if they are detected
and neutralized quickly, damage is more likely to be negligible. Conversely, if an
attacker is allowed to dwell for weeks or months, a data breach is much more
likely, and the attacker may have compromised hundreds of systems and/or user
accounts while working toward their goal. In its Quantifying the Value of Time in
Cyber-Threat Detection and Response report, Aberdeen Group determined that
limiting dwell time to 30 days reduces the business impact of an incident by
23 percent.6 Compressing dwell time further delivers even stronger results:
when dwell time is confined to seven days, the impact is reduced by
77 percent, and if it is shortened to just one day, business impact is reduced by as much
as 96 percent.
Threat actors may adopt many different strategies to achieve their goals. The
Cyberattack Lifecycle provides a useful framework to understand how the phases
of an attack build toward that ultimate goal. Some of the phases may be merged
in certain types of attacks, and in other cases, phases may be skipped altogether.
However, while attack types vary, the overall pattern remains consistent. Mature
security operations teams kill threats early through technology-enabled threat
management processes that drive down MTTD and MTTR — rapidly detecting and
neutralizing threats before real damage occurs.
The following graphic illustrates the Cyberattack Lifecycle and the typical steps
involved in a cyber-incident such as a data breach:
Reconnaissance → Initial Compromise → Command & Control → Lateral Movement → Target Attainment → Exfiltration, Corruption, Disruption
6. Quantifying the Value of Time in Cyber-Threat Detection and Response, Aberdeen Group, February 2016
Phase 1: Reconnaissance
The first stage in reconnaissance is identifying potential targets (companies or
individuals) that satisfy the mission of the attacker (e.g., financial gain, targeted
access to sensitive information, brand damage, etc.). Once the target or targets
are identified, the attacker determines the best mode of entry.
Phase 5: Target Attainment
At this stage in the lifecycle, the attacker typically has multiple remote access
entry points and may have compromised hundreds (or even thousands) of an
organization’s internal systems and user accounts. They have mapped out and
deeply understand the aspects of the IT environment of highest interest to them.
Ultimately, the attacker is within reach of the desired target(s) and can complete
their ultimate mission at a time of their choosing.
The LogRhythm Threat Lifecycle Management Framework
Organizations that strive to reduce their cybersecurity risk through significant
reductions in MTTD and MTTR must realize an enterprise capability for detecting
and responding to threats across the holistic physical, virtual, and cloud-based
information technology (IT) environment. Industries such as critical infrastructure
and manufacturing, or industries affected by the rise of IoT, should
realize the same enterprise threat detection and response capability across the
operational technology (OT) environment as well.
PRINCIPAL PROGRAMS OF A SOC
Threat monitoring consists of evaluating alarms and events that might indicate
the presence of a cyberthreat, and quickly triaging them to determine if further
investigation is required.
Stage 1: Centralize Event and Forensic Data
Before detecting any threat, organizations must be able to see evidence of the
attack within the IT/OT environment. Because threats target all aspects of the IT/
OT infrastructure, the more organizations can see, the better they can detect.
There are three principal types of data enterprises should focus on, generally in
the following priority:
• Endpoint forensic sensors that can record with high fidelity all activity
occurring on the monitored system.
Stage 2: Discover
Once organizations establish visibility, they stand a chance of detecting and
responding to threats. Discovery of potential threats is accomplished through a
blend of search and machine analytics.
Search Analytics
This type of analytics is performed by people and enabled by software. It includes
activities such as targeted threat hunting using monitoring dashboards and
search capabilities, as well as reviewing reports to identify known
exceptions. Search analytics is people intensive. Thus, while effective, it cannot be
the sole (or even primary) analytics method for most organizations.
Machine Analytics
This type of analytics is performed by software using machine learning (ML)
and other automated analysis techniques where outputs can be efficiently
leveraged by people. Machine analytics is the future of a modern and efficient
threat discovery capability. The goal of using machine analytics should be to help
organizations realize a “risk-based monitoring” strategy through the automatic
identification and prioritization of attacks and threats. This is critical both for
detecting advanced threats via data science-driven approaches and for helping
organizations orient precious human cognitive cycles to the areas of highest risk
to the business.
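To make the idea of risk-based monitoring concrete, the sketch below ranks alarms by a simple risk score that combines an analytic's confidence that a threat is real with the criticality of the affected asset. The scoring formula, field names, and example values are illustrative assumptions, not LogRhythm's algorithm.

```python
# Minimal sketch of risk-based alarm prioritization.
# The risk formula and field names are illustrative assumptions, not a vendor algorithm.
from dataclasses import dataclass


@dataclass
class Alarm:
    alarm_id: str
    threat_confidence: float   # 0.0-1.0: how confident the analytic is that this is a real threat
    asset_criticality: float   # 0.0-1.0: business criticality of the affected asset


def risk_score(alarm: Alarm) -> float:
    """Combine confidence and criticality into a single 0-100 risk score."""
    return round(100 * alarm.threat_confidence * alarm.asset_criticality, 1)


def prioritize(alarms: list[Alarm]) -> list[tuple[str, float]]:
    """Return (alarm_id, score) pairs sorted highest risk first, so analysts triage the riskiest work first."""
    return sorted(((a.alarm_id, risk_score(a)) for a in alarms),
                  key=lambda pair: pair[1], reverse=True)


if __name__ == "__main__":
    queue = [Alarm("A-1001", 0.90, 0.40),   # likely threat, low-value asset
             Alarm("A-1002", 0.60, 0.95),   # moderate confidence, critical asset
             Alarm("A-1003", 0.20, 0.30)]   # probably noise
    for alarm_id, score in prioritize(queue):
        print(alarm_id, score)
```

A production analytic would of course weigh many more inputs, such as threat intelligence, vulnerability state, and user privilege, but the principle of ordering analyst attention by risk is the same.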
Stage 3: Qualify
Threats must be rapidly qualified to assess the potential impact to the business
and the urgency of additional investigation and response efforts. The qualification
process is manual and time intensive, while also being very time sensitive. An
inefficient qualification process increases the level of human investment needed
to evaluate all threat indicators (e.g., alarms), but an efficient process allows
organizations to analyze more indicators with less staff.
False positives will happen. Organizations need the tools to identify them quickly
and accurately. Inefficient qualification could mean a true threat (aka “true
positive”) has been ignored for hours or days. Incorrect qualification could mean
that organizations miss a critical threat and let it go unattended. Philosophically
and practically, it is important to note that only qualified threats can truly be
considered detected; otherwise, it is simply noise — an alarm bell going off that
nobody really hears.
Stage 4: Investigate
Once threats have been qualified, they need to be fully investigated to
conclusively determine whether a security incident has occurred or is in progress.
This begins with conducting a deep investigation using all the collected evidence
to understand the risk presented by the threat and its scope. Rapid access to
forensic data and intelligence on the threat is paramount. Automating routine
investigatory tasks and using tools that facilitate cross-organizational collaboration
are key to optimally reducing MTTR.
Ideally, a secure facility for keeping track of all active and past investigations
is available. This can help ensure that forensic evidence is well-organized
and is available to collaborators. It can also provide an account of who did
what in support of investigation and response activities to measure
organizational effectiveness and hold parties responsible for the tasks they
own in the investigation.
Stage 5: Neutralize
When an incident is qualified, organizations must implement mitigations to
reduce and eventually eliminate risk to the business. For some threats, such as
ransomware or compromised privileged users, every second counts. To maximally
reduce MTTR, easily accessible and updated incident response processes and
playbooks, coupled with automation, are critically important. Similar to the
Investigate stage, facilities that enable cross-organizational (e.g., IT, legal, HR)
information sharing and collaboration are also important.
Stage 6: Recover
Once the incident has been neutralized and risk to the business is under control,
full recovery efforts can commence. These efforts are less time critical, and
they can take days or weeks depending on the scope of the incident. To recover
effectively and on a timely basis, it is imperative that an organization’s security
team has access to all forensic information surrounding the investigation and
incident-response process. This includes ensuring that any changes made during
incident response are tracked, audit trail information is captured, and the affected
systems are updated and brought back online. Many recovery-related processes
can benefit from automation. In addition, the recovery process should ideally
include putting measures in place that leverage the gathered threat intelligence
to detect whether the threat returns or has left behind a back door.
Figure 3. Neutralizing an Attack Earlier in the Lifecycle Results in a Drastic Reduction in Financial Cost to the Company and Damage to its Reputation. (The figure contrasts slow MTTD & MTTR, which allow an attack to progress through the full Cyberattack Lifecycle at high financial cost, with fast MTTD & MTTR, which stop the attack early at a fraction of the cost.)
Technology Enablement
Each of the TLM phases is critically dependent on technology. The right
technological approach and strategy will significantly influence the organizational
capability and cost when it comes to realizing TLM and resulting levels of MTTD/
MTTR. The same three-person virtual SOC, leveraging a more optimal technology
approach, might have twice the capacity and realize twice the
reduction in MTTD/MTTR compared with a team relying on outdated or poorly integrated
technologies. While there are a variety of strategies and approaches to realizing
technologically enabled TLM, security operations teams ideally have a modern
and highly integrated technological platform that delivers all of the following:
• Centralized Security Intelligence: Centralized visibility into all security alerts
and alarms generated across the distributed IT/OT infrastructure, including
visibility into the current status of active threat investigations and incidents
with real-time situational awareness
• Centralized Forensic Visibility and Search: Centralized search into all forensic
data from across the distributed IT/OT environment, including immediate access
to complete, full-fidelity forensic data to accelerate threat investigation and
incident response
• Holistic Threat Analytics: The application of artificial intelligence, TTP/IOC-
based scenario analytics and deep contextual analytics across a 360-degree
view of forensic data to detect advanced threats and accurately prioritize all
threats across the holistic attack surface
• Task Automation: The automation of routine and time-consuming tasks
performed in support of threat investigation and incident response, including
automated execution of mitigations and countermeasures for threat
containment and neutralization
• Operational Metrics: The ability to easily capture metrics and effectively
report on the business key performance indicators (KPIs), service-level
agreements (SLAs), and operating-level agreements (OLAs)
Understanding and Measuring the Capabilities of a Security Operations Program
Enterprises should think of TLM as a critical business operation. Like any core
business operation, mature organizations will want to measure operational
effectiveness to identify whether KPIs and SLAs are being met. The following
are some of the key operational metrics that allow enterprises to measure, and
communicate to the business, their current organizational and operational effectiveness
in detecting and responding to cyberthreats.
• Enterprise Security Event Visibility: the percentage of security event-generating devices that can be centrally searched and forensically analyzed
• Enterprise Log Visibility: the percentage of log-generating devices and servers that can be centrally searched and forensically analyzed
• Enterprise Network Forensic Visibility: the percentage of the infrastructure that is being independently monitored by a network forensics (e.g., full packet capture) technology
• Enterprise Endpoint Forensic Visibility: the percentage of the infrastructure that is being independently monitored by an endpoint forensics (e.g., EDR) technology
These metrics:
• Should be measurable/reportable by business unit, compliance domains, and data risk domains
• Can be separately measured across the IT and OT infrastructure
• Will indicate inherent threat detection risk when forensic visibility is low
• Will indicate inherent threat response and recovery risk when forensic visibility is low
CFV Calculation: This metric and related sub-metrics are difficult to empirically
measure. An organizational method for estimating visibility should be formalized
and then consistently applied. Organizations should consider establishing target
visibility for each type (e.g., enterprise log visibility target = 100 percent of
production servers in data domains A, B, and C; 50 percent for data domains
X, Y, and Z). Organizations can then measure their current visibility against
targets, as well as against the whole environment.
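As one way to operationalize this, the sketch below estimates visibility per data domain against targets like those described above, assuming a hypothetical asset inventory in which each log-generating device carries a data-domain tag and a flag indicating whether it is centrally collected; the field names, domains, and target values are illustrative, not prescriptive.

```python
# Illustrative sketch: estimate log visibility per data domain against stated targets.
# The Asset fields, domain names, and target percentages are hypothetical examples.
from dataclasses import dataclass


@dataclass
class Asset:
    name: str
    data_domain: str           # e.g., "A", "B", "X"
    centrally_collected: bool  # True if the asset's logs are centrally searchable


# Example targets: 100% for domains A-C, 50% for domains X-Z (mirrors the example above)
TARGETS = {"A": 1.00, "B": 1.00, "C": 1.00, "X": 0.50, "Y": 0.50, "Z": 0.50}


def visibility_report(assets: list[Asset]) -> dict[str, dict[str, float]]:
    """Return current visibility, target, and gap for each data domain that has assets."""
    report = {}
    for domain, target in TARGETS.items():
        in_domain = [a for a in assets if a.data_domain == domain]
        if not in_domain:
            continue
        current = sum(a.centrally_collected for a in in_domain) / len(in_domain)
        report[domain] = {"current": current, "target": target,
                          "gap": max(0.0, target - current)}
    return report


if __name__ == "__main__":
    inventory = [Asset("srv-01", "A", True), Asset("srv-02", "A", False),
                 Asset("wks-07", "X", True)]
    for domain, stats in visibility_report(inventory).items():
        print(f"Domain {domain}: {stats['current']:.0%} visible "
              f"(target {stats['target']:.0%}, gap {stats['gap']:.0%})")
```

Visibility against the whole environment can be computed the same way by dropping the per-domain grouping.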
• Centralized Security Event Prioritization: the percentage of security event-generating devices across which automated correlation and prioritization is being performed to risk score and prioritize related alarms
• Centralized Scenario Analytics: the percentage of log-generating devices across which automated TTP- or IOC-based scenario analytics is being applied to detect applicable threats and further risk score and prioritize related alarms
• Centralized User Behavior Analytics: the percentage of enterprise users across which behavioral analytics is being applied to detect behavioral shifts that might indicate a user-borne threat is present
• Centralized Network Behavior Analytics: the percentage of enterprise network infrastructure across which behavioral analytics is being applied to detect behavioral shifts that might indicate a network-borne threat is present
These metrics:
• Should be measurable/reportable by business unit, compliance domains, and data risk domains
• Can be separately measured across the IT and OT infrastructure
• Will indicate inherent false positive risk and related operational efficiency risk when machine analytics is low
• Will indicate inherent false negative risk and related threat detection risk when machine analytics is low
• Can support the business case for realizing expanded machine analytics
CMAV Calculation: This metric and related sub-metrics are difficult to empirically
measure. An organizational method for estimating machine analytics visibility
should be formalized and then consistently applied. Organizations should consider
establishing target machine analytics visibility for each type (e.g., Centralized
User Behavior Analytics target = 100 percent of IT workers and execs; 50 percent
for all other users). Organizations can then measure their current visibility against
targets, as well as against the distributed IT/OT environment.
Workflow Metrics
The following figure shows the key workflow metrics that should be measured to
ultimately determine TLM operational effectiveness and the effectiveness of
the supporting TLM technological solution. Each metric is then described in
further detail.
[Figure: Key TLM workflow milestones by stage — Earliest Evidence (Collect), Alarm Creation (Discover), Initial Inspection (Qualify), Case Creation and Elevation to Incident (Investigate), Mitigation (Neutralize), Recovery (Recover)]
• Should be measurable/reportable within alarm priority bands (e.g., high/medium/low, risk score bands, etc.)
• Measures operational effectiveness and capacity of the front-line (i.e., security analyst) team
• Might indicate the team can take on additional monitoring load (e.g., monitoring another area of the IT infrastructure)
• Might indicate a need for increased staff, or for the team to narrow its monitoring focus (e.g., focusing only on highest-risk areas of the IT infrastructure and ignoring others)
TTT Calculation: The date/time difference between alarm creation and the initial
inspection of the alarm
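A minimal sketch of this calculation, assuming alarm records with a creation timestamp, a first-inspection timestamp, and a priority band (the field names are hypothetical), summarized here as a median per priority band as one reasonable aggregation:

```python
# Illustrative sketch: compute TTT per alarm and summarize it within priority bands.
# Alarm records and field names are assumptions for the example.
from collections import defaultdict
from datetime import datetime, timedelta
from statistics import median


def ttt(alarm_created: datetime, first_inspected: datetime) -> timedelta:
    """TTT = the date/time difference between alarm creation and initial inspection."""
    return first_inspected - alarm_created


def ttt_by_priority(alarms: list[dict]) -> dict[str, timedelta]:
    """Median TTT per priority band (e.g., high/medium/low)."""
    buckets: dict[str, list[timedelta]] = defaultdict(list)
    for alarm in alarms:
        buckets[alarm["priority"]].append(ttt(alarm["created"], alarm["first_inspected"]))
    return {band: median(deltas) for band, deltas in buckets.items()}


alarms = [
    {"priority": "high", "created": datetime(2019, 5, 1, 9, 0),
     "first_inspected": datetime(2019, 5, 1, 9, 12)},
    {"priority": "high", "created": datetime(2019, 5, 1, 10, 0),
     "first_inspected": datetime(2019, 5, 1, 10, 45)},
    {"priority": "low", "created": datetime(2019, 5, 1, 9, 0),
     "first_inspected": datetime(2019, 5, 2, 9, 0)},
]
print(ttt_by_priority(alarms))  # median TTT per band, as timedelta objects
```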
Alarm Time to Qualify (TTQ):
TTQ measures the amount of time it took an alarm to be fully inspected and
qualified. It helps organizations identify bottlenecks and understand the team’s
capacity for qualifying threats. This metric:
• Should be measurable/reportable within alarm priority bands (e.g., high/medium/low, risk score bands, etc.)
• Should be measurable/reportable within alarm outcome (e.g., false positive, benign issue, incident, etc.)
• Measures operational effectiveness and capacity of the front-line (i.e., security analyst) team
• Might indicate weakness in the technological TLM solution in the area of alarm drill down, search, data analysis, and contextual analysis
TTQ Calculation: The date/time difference between alarm creation and the alarm
either being closed or added to a case
• Should be measurable/reportable based on threat/incident types (e.g., via the MITRE ATT&CK categories)
• Measures operational effectiveness and capacity of the second-line (i.e., threat investigation) team
• Might indicate slowness in the technology TLM solution in the area of search, data analysis, contextual analysis, and collaboration
TTI Calculation: The date/time difference between the case being created and the
case either being closed or elevated to an incident
• Should be measurable/reportable based on threat/incident types (e.g., via the MITRE ATT&CK categories)
• Measures operational effectiveness and capacity of the third-line (i.e., incident response) team
• Might indicate slowness in the technology TLM solution in the area of evidence capture and use, standard playbooks, automation, and collaboration
Many organizations are adopting the MITRE Adversarial Tactics, Techniques, and
Common Knowledge (ATT&CK) framework for assessing their overall maturity in
being able to respond to threats across the Cyberattack Lifecycle. TLM can help
organizations empirically measure MTTD and MTTR across MITRE tactics.
• Should be measurable/reportable based on threat/incident types (e.g., via the MITRE ATT&CK categories)
• Measures operational effectiveness and capacity of third-line (i.e., incident response) teams and other supporting teams (e.g., IT, Legal, HR)
• Might indicate slowness/weakness in the technology TLM solution in the area of evidence capture and use, standard playbooks, automation, and collaboration
TTV Calculation: The date/time difference between incident mitigation and the
incident being considered fully recovered from and closed
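The qualification, investigation, and recovery calculations above all follow the same date/time-difference pattern. A consolidated sketch, assuming each record carries the relevant milestone timestamps (alarm creation, qualification, case creation, case resolution or elevation, mitigation, and closure); the field names are hypothetical:

```python
# Illustrative sketch: derive TTQ, TTI, and TTV from one record's milestone timestamps.
# Field names are assumptions; each value is a datetime captured by the case-management workflow.
from datetime import datetime, timedelta
from typing import Optional


def workflow_metrics(milestones: dict) -> dict[str, Optional[timedelta]]:
    """Each metric is a date/time difference between two workflow milestones."""
    def diff(start_key: str, end_key: str) -> Optional[timedelta]:
        start, end = milestones.get(start_key), milestones.get(end_key)
        return (end - start) if start and end else None  # None if a milestone never occurred

    return {
        # TTQ: alarm creation -> alarm closed or added to a case
        "TTQ": diff("alarm_created", "alarm_qualified"),
        # TTI: case creation -> case closed or elevated to an incident
        "TTI": diff("case_created", "case_closed_or_elevated"),
        # TTV: incident mitigation -> incident fully recovered and closed
        "TTV": diff("incident_mitigated", "incident_closed"),
    }


example = {
    "alarm_created": datetime(2019, 5, 1, 9, 0),
    "alarm_qualified": datetime(2019, 5, 1, 10, 30),
    "case_created": datetime(2019, 5, 1, 10, 30),
    "case_closed_or_elevated": datetime(2019, 5, 1, 16, 0),
    "incident_mitigated": datetime(2019, 5, 2, 11, 0),
    "incident_closed": datetime(2019, 5, 6, 17, 0),
}
print(workflow_metrics(example))
```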
• Should be measurable/reportable based on threat/incident types (e.g., via the MITRE ATT&CK categories)
• Should be measurable/reportable based on threat detection method (e.g., hunting, behavioral analytics, scenario analytics, specific threat detection technology, etc.)
• Measures operational effectiveness and capacity of the first- and second-line teams
• Might indicate slowness/weakness in the technology TLM solution in the areas supporting threat discovery (e.g., threat hunting, behavioral anomaly detection) and workflow capabilities supporting threat qualification (e.g., search, data analysis)
• Should be measurable/reportable based on threat/incident types (e.g., via the MITRE ATT&CK categories)
• Measures operational effectiveness and capacity of the second-line (e.g., threat investigation) and third-line (e.g., incident response) teams
• Might indicate slowness/weakness in the technology TLM solution in the areas supporting threat investigation (e.g., search) and mitigation (e.g., automation)
The LogRhythm Security Operations Maturity Model
LogRhythm has developed a Security Operations Maturity Model (SOMM) —
based on LogRhythm’s Threat Lifecycle Management (TLM) framework — that
can be used to assess an organization’s current maturity, and plan for improved
maturity over time. As an organization’s TLM capability matures, it will realize
improved effectiveness of its security operations, resulting in faster MTTD and
MTTR. Material reductions in MTTD/MTTR will profoundly decrease the risk of
experiencing high-impact cybersecurity incidents.
[Figure: MTTD & MTTR fall from months to weeks, days, and hours as security operations maturity increases]
The following table describes each level in further detail, identifying the key TLM
technological and workflow/process capabilities that should be realized. These
capabilities are described at a high level with the intent of serving as a guidepost
for enterprises. The manner in which each capability is realized will vary from
organization to organization. The important thing is that the intent of the capability
is realized. For each level, LogRhythm has also described typical associated
organizational characteristics and risk characteristics. This is to provide additional
context in support of security operations maturity assessment and planning.
Organizations should use this model as a basis to evaluate their current security
operations maturity and develop a roadmap to achieve the level of maturity that
is appropriate in light of their resources, budget, and risk tolerance.
Level 2: Securely Compliant

TLM Capabilities:
• Targeted log data and security event centralization
• Targeted server and endpoint forensics
• Targeted environmental risk characterization
• Reactive and manual vulnerability intelligence workflow
• Reactive and manual threat intelligence workflow
• Basic machine analytics for correlation and alarm prioritization
• Basic monitoring and response processes established

Organizational Characteristics:
• Moving beyond minimal, “check box” compliance, seeking efficiencies and improved assurance
• Have recognized organization is effectively blind to most threats; striving toward a material improvement that works to detect and respond to potential high-impact threats, focused on areas of highest risk
• Have established formal processes and assigned responsibilities for monitoring and responding to high-risk alarms
• Have established basic, yet formal process for incident response

Risk Characteristics:
• Extremely resilient and highly effective compliance posture
• Good visibility to insider threats, with some blind spots
• Good visibility to external threats, with some blind spots
• Mostly blind to APTs, but more likely to detect indicators and evidence of APTs
• More resilient to cybercriminals, except those leveraging APT-type attacks or targeting blind spots
• Highly vulnerable to nation-states
Level 3: Vigilant

TLM Capabilities:
• Holistic log data and security event centralization
• Holistic server and endpoint forensics
• Targeted network forensics
• IOC-based threat intelligence integrated into analytics and workflow
• Holistic vulnerability integration with basic correlation and workflow integration
• Advanced machine analytics for IOC- and TTP-based scenario analytics for known threat detection
• Targeted machine analytics for anomaly detection (e.g., via behavioral analytics)
• Formal and mature monitoring and response process with standard playbooks for most common threats
• Functional physical or virtual SOC
• Case management for threat investigation workflow
• Targeted automation of investigation and mitigation workflow
• Basic MTTD/MTTR operational metrics

Organizational Characteristics:
• Have recognized organization is blind to many high-impact threats
• Have invested in the organizational processes and headcount to significantly improve ability to detect and respond to all classes of threats
• Have invested in and established a formal security operations and incident response center (SOC) that is running effectively with trained staff
• Are effectively monitoring alarms and have progressed into proactive threat hunting
• Are leveraging automation to improve the efficiency and speed of threat investigation and incident response processes

Risk Characteristics:
• Extremely resilient and highly effective compliance posture
• Great visibility into, and quickly responding to insider threats
• Great visibility into, and quickly responding to external threats
• Good visibility to APTs, but have blind spots
• Very resilient to cybercriminals, except those leveraging APT-type attacks that target blind spots
• Still vulnerable to nation-states, but much more likely to detect early and respond quickly
Level 4: Resilient

TLM Capabilities:
• Holistic log data and security event centralization
• Holistic server and endpoint forensics
• Holistic network forensics
• Industry-specific IOC- and TTP-based threat intelligence integrated into analytics and workflows
• Holistic vulnerability intelligence with advanced correlation and automation workflow integration
• Advanced IOC- and TTP-based scenario machine analytics for known threat detection
• Advanced machine analytics for holistic anomaly detection (e.g., via multi-vector AI/ML-based behavioral analytics)
• Established, documented, and mature response processes with standard playbooks for advanced threats (e.g., APTs)
• Established, functional 24/7 physical or virtual SOC
• Cross-organizational case management collaboration and automation
• Extensive automation of investigation and mitigation workflow
• Fully autonomous automation, from qualification to mitigation, for common threats
• Advanced MTTD/MTTR operational metrics and historical trending

Organizational Characteristics:
• Are a high-value target for nation-states, cyber terrorists, and organized crime
• Are continuously being attacked across all potential vectors: physical, logical, social
• A disruption of service or breach is intolerable and represents organizational failure at the highest level
• Takes a proactive stance toward threat management and security in general
• Invests in best-in-class people, technology, and processes
• Have 24/7 alarm monitoring with organizational and operational redundancies in place
• Have extensive proactive capabilities for threat prediction and threat hunting
• Have automated threat qualification, investigation, and response processes wherever possible

Risk Characteristics:
• Extremely resilient and highly efficient compliance posture
• Seeing and quickly responding to all classes of threats
• Seeing evidence of APTs early in the Cyberattack Lifecycle and are able to strategically manage their activities
• Extremely resilient to all classes of cybercriminals
• Can withstand and defend against the most extreme nation-state-level adversary
CONCLUSION
The world will continue to be hostile.
Threats will continue to target data, and threat actors will be persistent and
creative in their efforts. There is no silver bullet on the horizon — no magic AI that
will easily solve the problem. To realize an improved security posture and reduce
cyber-incident risk, organizations must invest in realizing more mature levels of
Threat Lifecycle Management — at an enterprise level across the holistic IT and
OT infrastructure.
About LogRhythm
LogRhythm is a world leader in NextGen SIEM, empowering organizations on six
continents to successfully reduce risk by rapidly detecting, responding to, and neutralizing
damaging cyberthreats. The LogRhythm NextGen SIEM Platform combines enterprise log
management, user and entity behavior analytics (UEBA), network detection and response
(NDR), and security orchestration, automation, and response (SOAR) in a single end-to-
end solution. The LogRhythm platform is powered by AI and our patented Machine Data
Intelligence Fabric. Its seamlessly integrated solution set is designed to deliver enterprises
the highest-efficacy Threat Lifecycle Management (TLM) at the lowest total cost of ownership
(TCO). A LogRhythm-powered security operations center (SOC) helps customers
measurably secure their cloud, physical, and virtual infrastructures for both IT and OT
environments. Built for security professionals by security professionals, the LogRhythm
NextGen SIEM Platform has won many accolades, including being positioned as a Leader
in Gartner’s SIEM Magic Quadrant. www.logrhythm.com
Chris Petersen co-founded LogRhythm in March 2003 and has served as a member of
LogRhythm’s board of directors and chief technology officer (CTO) since its inception.
Mr. Petersen currently serves as LogRhythm’s chief product and technology officer (CPO/
CTO). In his current role, Mr. Petersen is responsible for product from concept to delivery
as the executive leader for product management, engineering, and LogRhythm Labs. Mr.
Petersen has served in a variety of other executive roles at LogRhythm including SVP of
products, SVP of research & development, and SVP of customer care.
Immediately before co-founding LogRhythm, he led product marketing for the Dragon
Intrusion Detection product line as part of Enterasys Networks. He was also a faculty
member at the Institute for Applied Network Security, providing expert advice on intrusion
detection and security information and event management (SIEM) to Fortune 500 clients
across North America.
threat intelligence research and supporting the development of SOCRATES, its back-end
SIEM technology. Mr. Petersen is a sought-after expert in cybersecurity and is often quoted
in media publications. He holds a B.S. in Accounting from Colorado State University.
Andrew Hollister
Chief Architect & Product Manager, LogRhythm Labs
Prior to joining LogRhythm, Mr. Hollister was a consultant with engagements encompassing
technologies such as Data Loss Prevention, Application Whitelisting, and NG Firewalls.
James Carder
Chief Information Security Officer & Vice President, LogRhythm Labs
James Carder brings more than 21 years of experience working in corporate IT security
and consulting for the Fortune 500 and U.S. Government. At LogRhythm, he develops
and maintains the company’s security governance model and risk strategies, protects
the confidentiality, integrity, and availability of information assets, leads the security
awareness program, and oversees both threat and vulnerability management as well as
the security operations center (SOC). He also directs the mission and strategic vision for
the LogRhythm Labs, Machine Data Intelligence, strategic integrations, threat research,
and compliance research teams.
Prior to joining LogRhythm, Mr. Carder was the director of Security Informatics
at Mayo Clinic where he oversaw Threat Intelligence, Incident Response, Security
Operations, and the Offensive Security groups. Prior to Mayo, he served as a Senior
Manager at MANDIANT, where he led professional services and incident response
engagements. He led criminal and national security-related investigations at the city,
state and federal levels, including those involving the theft of credit card information
and advanced persistent threats (APTs).
Mr. Carder is a sought-after and frequent speaker at cybersecurity events and is a noted
author of several cybersecurity publications. He holds a Bachelor of Science degree in
Computer Information Systems from Walden University and an MBA from the University of
Minnesota’s Carlson School of Management. He is an Advisory Board member for Colorado
University (Boulder and Denver), a member of the Forbes Technology Council, and a
Certified Information Systems Security Professional (CISSP).
1.866.384.0713 // info@logrhythm.com // 4780 Pearl East Circle, Boulder CO, 80301