W11 Security

The document discusses security principles and terminology. It covers why most software is insecure, the need to consider security throughout the development lifecycle, and common security objectives like confidentiality, integrity and availability. It also discusses authentication methods like passwords and their problems, as well as alternatives like one-time passwords, shared secrets, and hardware tokens.


SECURITY

Lecture 11
Content
■ Security Principles
■ Security Metrics
References
■ Security Engineering: A Guide to Building Dependable
Distributed Systems 2nd Edition, Ross J. Anderson, Wiley,
2008. https://www.cl.cam.ac.uk/~rja14/book.html
(3rd edition appeared in November 2020)
■ Secure Software Design and Programming: Class Materials
by David A. Wheeler, George Mason University
https://dwheeler.com/secure-class/
■ https://inst.eecs.berkeley.edu/~cs161/fa18/lectures/
■ [RFC 1392] https://tools.ietf.org/html/rfc1392#appendix-C
■ Daniel Geer – Measuring Security tutorial
■ Common Vulnerability Scoring System (CVSS)
https://www.first.org/cvss/specification-document
Security principles

■ Why is most software insecure?


■ Must consider security throughout lifecycle
■ Information security principles/terminology
■ Weakness groupings
■ Risk management/assurance cases
Insecure software
■ Insecure software may:
– Release private/secret information
– Corrupt information
– Lose service
■ Costing:
– Money
– Time
– Trust
– Lives
Why is most software insecure?
■ Few developers know how to develop secure software
– Most schools don’t have it in their curricula
– Programming books/courses don’t teach it
– Some common operations intrinsically dangerous (esp.
C/C++)
– Most developers don’t think like an attacker
– Developers don’t learn from others’ security mistakes
■ Most vulnerabilities caused by same mistakes over
40+ years
■ Customers can’t easily evaluate software security
■ Security often not seriously considered
– E.g., in contracts, requirements, & evaluation criteria
■ Managers don’t always resource/train adequately
Must consider security throughout
lifecycle

Source: “Improving Security Across the Software Development Lifecycle – Task Force Report”, April 1, 2004. http://www.cyberpartnership.org/init.html;
based on Gary McGraw 2004, IEEE Security and Privacy. Fair use asserted.
What do other organizations do?
BSIMM Survey
■ Building Security in Maturity Model (BSIMM)
– Study (survey) of software security initiatives of various
organizations
– Shows % of various activities among 109 organizations surveyed
– https://www.bsimm.com/
■ 4 domains (divided into 12 practices, divided into many activities)
– Governance: Practices that help organize, manage, and measure a
software security initiative
– Intelligence: Practices that result in collections of corporate
knowledge used in carrying out software security activities
– Secure Software Development Lifecycle (SSDL) Touchpoints:
Practices associated with analysis and assurance of particular
software development artifacts and processes.
– Deployment: Practices that interface with traditional network
security and software maintenance organizations

Developing secure software requires more than design & code... but you need those fundamentals
Information Security Terminology

■ Attack: “Any kind of malicious activity that attempts to collect, disrupt,


deny, degrade, or destroy information system resources or the
information itself.” [National Information Assurance (IA) Glossary, CNSS
4009]
■ Attacker: Someone who attacks a system (without authorization)
■ Cracker: “an individual who attempts to access computer systems
without authorization” (type of attacker) [RFC 1392]
■ Hacker: “A person who delights in having an intimate understanding of
the internal workings of a system, computers and computer networks in
particular” [RFC 1392]
– NOTE: Hacker ≠ attacker
– Most hackers don’t attack systems
– Many attackers aren’t hackers (might not be clever or
knowledgeable)
– Common journalist mistake
Many types of attackers
■ Criminals (for money)
■ Terrorists
■ Governments (e.g., the PRISM program)
■ Crackers (often for pleasure)
■ …
We want to prevent their attacks from succeeding!
It is harder to defend vs. well-resourced adversary:
– What are their resources?
– What are they trying to do (so we can
counter)?
Security objectives
■ Typical security objectives (CIA):
– Confidentiality: “No unauthorized read”
– Integrity: “No unauthorized modification (write/delete)”
– Availability: “Keeps working in presence of attack”
■ vs. “Denial of Service” (DoS) attack
■ Sometimes separately-listed objectives:
– Non-repudiation (of sender and/or receiver)
– Privacy (e.g., protecting user identity)
– Auditing/accountability/logging
– Identity & [identity] authentication (I&A), authorization
■ Last two abbreviated as AuthN and AuthZ
[User] Authentication
■ Proving the identity of a user (might be a program!)
■ Authentication is the basis of an authorization decision
– All objectives depend on whether you’re authorized
– So authentication is fundamental to authorization
■ Authentication approaches (first 3 traditional):
– Something you know (passwords)
– Something you have (key, token)
– Something you are (biometrics)
– Somebody you know (vouching) – often forgotten
■ Most common approach: Passwords (“something you know”) to prove
username (identity)
■ Strong authentication = more than one approach
– Two-factor = two different approaches both required
All authentication systems have
problems
■ First three authentication approaches can also
be described as:
– Something you forgot
– Something you lost
– Something you cease to be
■ Be aware of each system’s problems
– Recovery mechanisms can be biggest
weakness
– Choose approach(es) appropriate to task
Source: “Security” (post) by JasterBobaMereel (1102861), in reply to Amazon Wants To Replace Passwords With Selfies and Videos
https://it.slashdot.org/comments.pl?sid=8881629&cid=51699623
Password Problems
■ User-created passwords often easily guessed
– Often based on user name, personal traits, etc.
– Often based on dictionaries with trivial substitutions
– Often too short
■ Yet system-generated passwords often too hard to
remember
■ How many passwords do you have to remember?
– If browser stores them, what if browser is subverted?
– If reuse, breaking into one breaks into many
■ Often passwords can be captured or discovered
– E.g., keyloggers, confuse users into thinking they’re on
different site
Attackers vs. passwords
■ Capture password
– Keyloggers, network eavesdropping, shoulder surfing
– Break into server, capture passwords, reuse
elsewhere
■ Brute force attack
– Try all combinations
■ Dictionary attacks
– Guess passwords using a password dictionary +
permutations
– Password dictionaries widely available
■ Include multiple human languages, terms from
wide interests (e.g., Shakespeare and Star Trek)
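As a toy illustration of a dictionary attack with trivial substitutions, the sketch below tries "leetspeak" variants of each dictionary word (the wordlist, substitution table, and function names are invented for this example):

```python
from itertools import product

# Common letter-for-symbol substitutions (a small, illustrative table)
SUBS = {"a": "a@4", "e": "e3", "o": "o0", "i": "i1!", "s": "s$5"}

def variants(word):
    """Yield every substitution variant of a dictionary word."""
    pools = [SUBS.get(c, c) for c in word.lower()]
    for combo in product(*pools):
        yield "".join(combo)

def crack(target, wordlist):
    """Return the matching guess, or None if the dictionary misses."""
    for word in wordlist:
        for guess in variants(word):
            if guess == target:
                return guess
    return None
```

A password like "p@ssw0rd" falls to a two-word dictionary in microseconds, which is why substitution tricks add almost no real strength.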
Defending passwords
■ Encrypt connection carrying passwords
■ Require “good” passwords when user tries to set one
– Long enough (most important), different symbol types,
etc.
– Check against dictionaries of “bad” passwords
■ On server, don’t store passwords as clear text
■ Require occasional password changes
■ Make it hard for attacker to exploit “lost my password”
■ Alert user when the password is changed
– e.g., via email
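The "don't store passwords as clear text" advice is usually implemented with a per-user salt and a deliberately slow hash. A minimal sketch using Python's standard-library PBKDF2 (the iteration count and salt size are illustrative choices, not a recommendation):

```python
import hashlib, hmac, os

def hash_password(password):
    """Return (salt, digest); the server stores both, never the password."""
    salt = os.urandom(16)  # unique random salt per user defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, stored):
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, stored)
```

The high iteration count makes each offline guess expensive for an attacker who steals the database, while the salt forces the attacker to attack each account separately.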
Alternatives to passwords

■ Algorithms
– One-time passwords
– Shared secret
– Public key cryptography
■ Hardware
One-time passwords
■ Password list – must use in order, can’t reuse
– Give user a list, cross off each one as used
■ Pros:
– Counters network eavesdropping, shoulder
surfing
– Cheap to implement; tiny state to store at
server
■ Cons:
– Harder to distribute list
– Compromise of list allows impersonation
– Users hate them (when implemented by hand)
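One way to realize such a list with tiny server state is a Lamport-style hash chain: the server stores only the tip of the chain, and each login reveals the previous link. This is a sketch under that assumption (the slide's printed list could equally be plain random passwords):

```python
import hashlib, os

def sha(b):
    return hashlib.sha256(b).digest()

def make_chain(seed, n):
    """Client applies the hash n times; chain[n] goes to the server."""
    chain = [seed]
    for _ in range(n):
        chain.append(sha(chain[-1]))
    return chain  # client logs in with chain[n-1], then chain[n-2], ...

class Server:
    def __init__(self, tip):
        self.tip = tip  # tiny state: a single hash value

    def login(self, otp):
        if sha(otp) == self.tip:
            self.tip = otp  # "cross off": next login needs the previous link
            return True
        return False
```

A captured one-time password is useless for replay, since the server has already advanced past it.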
Shared secret
■ User & server have shared secret
■ Authentication process:
– Server generates nonce (random number), sends to
client
– Client encrypts nonce with secret, sends back
– Server also encrypts, compares with client value
– If same, user must know the secret – ok!
■ Pros:
– Prevents network eavesdropping
■ Cons:
– If secret compromised, user can be impersonated
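A minimal sketch of the protocol above, substituting an HMAC for the generic "encrypt the nonce with the secret" step (a common practical choice, not necessarily what the slide intends):

```python
import hashlib, hmac, os

SECRET = os.urandom(32)  # shared out-of-band between client and server

def server_challenge():
    """Server generates a fresh random nonce; it must never be reused."""
    return os.urandom(16)

def client_response(secret, nonce):
    """Client proves knowledge of the secret without sending it."""
    return hmac.new(secret, nonce, hashlib.sha256).digest()

def server_verify(secret, nonce, response):
    """Server computes the same value and compares in constant time."""
    expected = hmac.new(secret, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

An eavesdropper sees only the nonce and the MAC; without the secret, neither lets them answer the next (different) challenge.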
Public key cryptography
■ Use “public key cryptography”
– User has two numbers, “public key” & “private key”
– Server knows public key of users
■ Authentication process:
– Server generates/sends random nonce to client
– Client encrypts nonce with private key, sends back
– Server decrypts with public key, if match, ok!
■ Pros:
– Provides non-repudiation of client key
■ Cons:
– If user’s private key compromised, user can be
impersonated
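"Encrypting the nonce with the private key" is, in modern terms, producing a digital signature. A textbook-RSA sketch with tiny fixed primes (illustrative only: no padding, insecure key size, numbers chosen just to make the math visible):

```python
# Toy RSA parameters (hypothetical, far too small for real use)
p, q = 61, 53
n = p * q          # modulus: 3233
e = 17             # public exponent
d = 2753           # private exponent: d*e ≡ 1 (mod (p-1)*(q-1))

def sign(nonce):
    """Client: 'encrypt' the nonce with the private key."""
    return pow(nonce, d, n)

def verify(nonce, sig):
    """Server: 'decrypt' with the public key and compare."""
    return pow(sig, e, n) == nonce
```

Only the private-key holder can produce a value that the public key maps back to the nonce, which is also why this provides non-repudiation.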
Hardware devices
■ Challenge-response: Server sends number
(nonce), device receives it and generates response,
response sent back to server
– Could implement shared-secret or public key
■ Time-based challenge-response: Uses current time
to determine what to send to server
– Server and token must have synchronized
time
■ Smartcards: Contains user credentials
– Better ones never yield credentials outside
card
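Time-based tokens typically follow TOTP (RFC 6238): both sides derive a short code from the shared secret and the current 30-second counter. A compact sketch:

```python
import hashlib, hmac, struct, time

def totp(secret, t=None, step=30, digits=6):
    """RFC 6238-style code: HMAC over the current time-step counter."""
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)
```

In practice the server also accepts codes for one or two adjacent time steps, to tolerate clock drift between server and token.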
Example: YubiKey
■ YubiKey: Physical device, plugs into USB port, pretends to
be (an additional) keyboard
■ User moves cursor to “password” position and presses
button on Yubikey
■ On button press, generates and “types in” a one-time
password + ENTER
■ Server verifies; if it verifies, that password can’t be reused
■ Internally works with a shared secret key and AES
■ Shared secret is used to encrypt a “serial number” that’s
incremented
■ Sources: http://lwn.net/Articles/409031/
http://yubico.com/products/yubikey/
Will passwords disappear soon?
Doubtful
■ “Hey, have you seen [insert thing here]? It's totally going to kill passwords!
No, it's not.”
– “Despite its many flaws, the one thing that the humble password has
going for it over technically superior alternatives is that everyone
understands how to use it. Everyone.”
– “If your [password-replacement] product is so awesome, have you
stopped to consider why no one is using it?” [Hunt]
■ “We evaluate two decades of proposals to replace text passwords… Our
comprehensive approach leads to key insights about the difficulty of
replacing passwords… no known scheme come[s] close to providing all
desired benefits: none even retains the full set of benefits that legacy
passwords already provide… many academic proposals have failed to gain
traction because researchers rarely consider a sufficiently wide range of
real-world constraints.”
■ They’ll be replaced someday, but it’s not as easy as you might think
Source: “Here's Why [Insert Thing Here] Is Not a Password Killer” by Troy Hunt, 2018-11-05,
https://www.troyhunt.com/heres-why-insert-thing-here-is-not-a-password-killer/
“The Quest to Replace Passwords: A Framework for Comparative Evaluation of Web Authentication Schemes”
by Joseph Bonneau et al., 2012 IEEE Symposium on Security and Privacy
Authorization
■ Once you have user identity and authentication, you can
determine what they’re authorized to do
■ Discretionary Access Control
– Data has owner, owner decides who can do what
■ Mandatory Access Control
– Data has certain properties, some access rights
cannot be granted even by owner (e.g., classification)
■ Role Based Access Control (RBAC)
– Assigns users into roles (static or dynamic)
– Access granted to the role, not directly to the user
– Sometimes membership restrictions
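A minimal RBAC sketch (the role names, permission names, and users below are invented): permissions attach to roles, and a user's request succeeds only through role membership, never directly:

```python
# Permissions are granted to roles...
ROLE_PERMS = {
    "teller":  {"read_account", "post_transaction"},
    "auditor": {"read_account", "read_log"},
}
# ...and users are assigned to roles
USER_ROLES = {"alice": {"teller"}, "bob": {"auditor"}}

def allowed(user, permission):
    """Access check: does any of the user's roles carry the permission?"""
    return any(permission in ROLE_PERMS.get(role, set())
               for role in USER_ROLES.get(user, set()))
```

Changing what a job function may do then means editing one role, not every user who holds it.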
Auditing/Accountability/Logging
■ Record system actions, esp. security-relevant ones (e.g., log
in)
■ Detect unusual activity that might signal attack or
exploitation
– So you can take action: Disconnect that connection,
take down system, prosecute, …
– May help recovery or preventing future exploitation (by
knowing what happened)
– Operational systems often send logs elsewhere
■ If system subverted, older log entries can’t be
changed
Defense-in-depth/breadth

■ Defense in depth: Having multiple defense


mechanisms (“layers”) in place, so that an
attacker has to defeat multiple mechanisms to
perform a successful attack
■ Defense in breadth: Applying approaches to
develop secure software throughout the lifecycle
Common assumptions about
attacks
■ Attackers can interact with our systems without particular notice
■ Probing (poking at systems) may go unnoticed …
– … even if highly repetitive, leading to crashes, and easy to detect
■ It’s easy for attackers to know general information about their targets
– OS types, software versions, usernames, server ports, IP
addresses, usual patterns of activity, administrative procedures
■ Attackers can obtain access to a copy of a given system to measure
and/or determine how it works
– Shannon's Maxim: "The Enemy Knows the System"
■ Attackers can make energetic use of automation
– They can often find clever ways to automate
■ Attackers can pull off complicated coordination across a bunch of
different elements/systems
Common assumptions about
attacks
■ Attackers can bring large resources to bear if req’d
– Computation, network capacity
– But they are not super-powerful (e.g., they do not control entire ISPs)
■ If it helps the attacker in some way, assume they can obtain privileges
– But if the privilege gives everything away (attack becomes trivial),
then we care about unprivileged attacks
■ The ability to robustly detect that an attack has occurred does not
replace the desirability of preventing it
■ Infrastructure machines/systems are well protected (hard to directly
take over)
– So a vulnerability that requires infrastructure compromise is less
worrisome than same vulnerability that doesn’t
Common assumptions about
attacks
■ Network routing is hard to alter … other than with physical access
near clients (e.g., “wifi/coffeeshop”)
– Such access helps fool clients to send to wrong place
– Can enable Man-in-the-Middle (MITM) attacks
■ We worry about attackers who are lucky
– Since automation/repetition can often help “make luck”:
– If it’s 1 in a million, just try a million times!
■ Just because a system does not have apparent value, it may still be a
target
– “Let’s break into the casino network... through the fish tank”
■ Attackers are mostly undaunted by fear of getting caught
– There are exceptions
Trustworthiness vs. trust
■ “Trustworthiness implies that something is worthy of
being trusted”
■ “Trust merely implies that you trust something, whether
it is trustworthy or not”
– Trust is a decision
– You should only trust things that have adequate
evidence of being trustworthy
■ A trusted system or component is one whose failure can
break the security policy
■ A trustworthy system or component is one that won’t fail
Source: Definitions from “Principled Assuredly Trustworthy Composable Architectures“ by Peter
Neumann, 2004
Weaknesses & Vulnerabilities
■ Weakness: A type of defect/flaw that might lead to
a failure to meet security objectives

■ Vulnerability: “Weakness in an information system,


system security procedures, internal controls, or
implementation that could be exploited by a threat
source” [CNSS 4009]
Weakness classifications
■ Software is vulnerable because of some weakness
that is exploitable
– Typically vulnerability is unintentional
– Usually the weakness (type/kind of flaw) has
occurred thousands of times before
■ Many weakness classification systems exist
– Common Weakness Enumeration (CWE), a merged
community list (http://cwe.mitre.org)
– “Seven pernicious kingdoms”, etc.
■ Key is to learn what these weaknesses are
Risk Management & Assurance
Cases
■ You cannot eliminate all risks!
■ You can manage them

Source: DoD Risk, Issue, and Opportunity Management Guide for Defense Acquisition Programs, January 2017,
http://www.acq.osd.mil/se/docs/2017-RIO.pdf
Assurance case (ISO/IEC 15026)
■ Assurance = Grounds for justified confidence that a claim
has been/will be achieved (but how communicate that?)
■ ISO/IEC 15026-2:2011 defines the structure &
contents of an assurance case
– Facilitates stakeholder communications, engineering
decisions
– Typically for claims such as safety & security
■ An assurance case includes:
– Claim(s): Top-level claim(s) for a property of a system or
product
– Arguments: Systematic argumentation justifying this
claim
– Evidence/assumptions: evidence & explicit
assumptions underlying argument
Structure of an assurance case

[Diagram: a top-level Claim (conclusions, uncertainty) is supported by an
Argument (with a justification of the argument), which rests on Evidence,
Assumptions, and Sub-claims.]

“Arguing through multiple levels of subordinate claims, this structured
argumentation connects the top-level claim to the evidence and
assumptions.”
Security-specific example of an
assurance case (moderate threat)

[Diagram] Top-level claim: “System is adequately secure against moderate
threats”, supported by four sub-claims:
■ System design counters or reduces impact of most common vulnerabilities
– All untrusted inputs identified and checked by strict white-lists
– Components given limited privilege, so break-ins are less likely to
have significant harm
– Passwords stored as salted hashes, not clear text, so an attacker
cannot easily reuse them
■ Most vulnerabilities are due to likely/common weaknesses (defect types),
& custom software is unlikely to have them
– Identified developers trained in likely weaknesses & how to avoid them
– All likely weaknesses have specific countermeasures
– Buffer overflows not possible in selected programming language
– All SQL statements prepared
■ OTS/platform is secure if acquired
■ System security verification found no issues
– Static analysis results ok
– Dynamic analysis results ok
Good assurance cases
■ Good assurance case makes it easy to determine “enough
has been done”
– Supports decisions (“we’ve done enough”)
■ Use powerful terms: “All” / “highest priority” / “most
important”
– Work to identify (& justify) “these are all the important
cases”
– Then show “we’ve addressed all important cases”
– Like testing, “create a graph & cover it”
■ “A list of stuff you did” is not very convincing; organize the
argument!
Basic measurement terminology
applied to defect detection tools

Analysis/tool report  | Report correct             | Report incorrect
----------------------|----------------------------|------------------------------
Reported (a defect)   | True positive (TP):        | False positive (FP):
                      | correctly reported         | incorrect report (of a
                      | (a defect)                 | “defect” that’s not a
                      |                            | defect) (“Type I error”)
Did not report        | True negative (TN):        | False negative (FN):
(a defect)            | correctly did not report   | incorrect because it failed
                      | (a given defect)           | to report (a defect)
                      |                            | (“Type II error”)
■ Developers worry about false positives (FPs)
– “Tool report wasted my time”
■ Auditors worry about false negatives (FNs)
– “Tool missed something important”
Receiver operating characteristic
(ROC) curve
■ Binary classifiers must generally
trade off between FPs and TPs
■ ROC curve graphically illustrates
this
■ Don’t normally know the true
values for given tools, but effect is
still pronounced
– Tool suppliers often focus on
meeting developer needs, not
auditors
– Tool users can configure tool
to affect trade-off

FPR = #FP / (#FP + #TN)


TPR = #TP / (#TP + #FN)
[ Source: Wikipedia “ROC curve”]
Measurement roll-ups useful for
defect-detecting tools
■ Precision = #TP/(#TP+#FP)
– “Probability that a report is correct”
– High precision desired by developers for defect detectors
■ Recall (sensitivity, soundness, find rate, TPR) = #TP/(#TP+#FN)
– “Probability that a report will be generated when it’s supposed to be”
– High recall desired by auditors in defect detectors
■ F-score (harmonic mean) = 2 x (Precision x Recall) / (Precision + Recall)
– Harmonic mean (in general) = reciprocal of average of reciprocals,
an especially good “averaging” measure for many situations involving
ratios
– A common way to combine precision & recall into one number
■ Discrimination rate = #Discriminations / #Test_Pairs
– Given a pair of tests (one with defect, one without)
– Discrimination occurs if tool correctly reports flaw (TP) in test with
flaw AND doesn’t when there’s no flaw (TN)
Source: CAS Static Analysis Tool Study - Methodology (Dec 2011) http://samate.nist.gov/docs/CAS_2011_SA_Tool_Method.pdf
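The roll-ups above (and the ROC rates FPR/TPR) are one-line formulas; a sketch with an invented tool run of 40 TP, 10 FP, 20 FN, and 930 TN:

```python
def precision(tp, fp):
    return tp / (tp + fp)           # P(a report is correct)

def recall(tp, fn):
    return tp / (tp + fn)           # P(a real defect gets reported)

def f_score(p, r):
    return 2 * p * r / (p + r)      # harmonic mean of precision & recall

def fpr(fp, tn):
    return fp / (fp + tn)           # ROC curve x-axis

# Hypothetical counts: precision 0.80, recall 2/3
p, r = precision(40, 10), recall(40, 20)
```

Note how the same run looks good to a developer (precision 0.80) but poor to an auditor (recall 0.67, i.e., one real defect in three is missed).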
Security metrics
■ Based on risk analysis
■ Risk = Threat*Vulnerability*Impact
■ To quantify them, Lindstrom (2005) defines
– Asset Value: quantifiable values to be assigned
to each asset for objective evaluation
– Potential Loss: five distinct types of breaches:
confidentiality, integrity, availability,
productivity, and liability
– Security spending: often divided among various
business units and departments, as well as
being lumped in with network and
infrastructure spending
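As a toy quantification of Risk = Threat × Vulnerability × Impact (all numbers below are invented for illustration):

```python
def risk(threat, vulnerability, impact):
    # threat: likelihood of an attack attempt, in [0, 1]
    # vulnerability: probability the attempt succeeds, in [0, 1]
    # impact: loss (e.g., asset value) if it succeeds
    return threat * vulnerability * impact

# Hypothetical asset worth 100,000 with a 50% attempt likelihood
# and a 20% chance of success gives an expected loss of 10,000
expected_loss = risk(0.5, 0.2, 100_000)
```

The product form makes the management options explicit: reduce threat (deterrence), vulnerability (hardening), or impact (backups, insurance).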
Security Measurement for Situational
Awareness in Cyberspace
■ How to define and use metrics as quantitative
characteristics to represent the security state of a
computer system or network?

■ How to define and use metrics to measure Cyber


Situational Awareness (CSA) from a defender’s
point of view?
Situational Awareness
Measurements classification
■ Objective measures: can be gathered in three
ways: (i) in real-time as the task is completed, (ii)
during an interruption in task performance, or (iii)
post-test following completion of the task

■ Subjective measures: Asking individuals to rate


their own or the observed SA of individuals on an
anchored scale
Situational Awareness
Measurements classification
■ Performance measures: set of commonly used
performance metrics, including the quantity of output
or productivity level, time to perform the task or
respond to an event, the accuracy of the response, and
the number of errors committed.
Note! Good SA does not always lead to good
performance, and poor SA does not always lead to poor
performance (Endsley, 1990). Performance measures
should be used in conjunction with other measures for
more accurate assessment.
■ Behavioral measures: are subjective in nature, as they
primarily rely on observer ratings.
Security Measurements
Challenges
■ Lack of real-time CSA
■ Lack of understanding of impacts of cyber events
on high level mission operations
■ Lack of quantitative metrics and measures for
comprehensive security assessment
■ Lack of incorporating human (analyst) cognition
into cyber-physical situational awareness
■ Lack of mission assurance policy
Security Measurements
Questions
■ How to identify and represent mission composition and dependency
relationships?
■ How to derive the dependency relationships between mission elements
and cyber assets?
■ As a single vulnerability may enable widespread compromises in an
enterprise, how to quickly identify the start point of an attack and
predict its potential attack path?
■ How to assess the direct impact and propagation of cyber incidents on
high level mission elements and operations?
■ How to systematically represent and model the identified inter- and
intra- dependency relationships between major elements or
components involved in cyber mission assurance?
■ How to define and develop quantitative metrics and measures for
meaningful cyber situational awareness, enterprise security
management and mission assurance analysis?
Limitations of some approaches

■ CAMUS (Applied Visions, Inc.) – ontology-fusion-based mapping of cyber
assets to missions and users
– Centralized approach; lack of cyber impact assessment; lack of
mission asset prioritization
■ MAAP (Carnegie Mellon University) – mission assurance and operational
risk analysis in complex work processes
– Centralized approach; focus on operational risk analysis; lack of
mission asset dependencies
■ RiskMAP (MITRE) – risk-to-mission assessment at network and business
objectives levels
– Centralized approach; lack of mission asset dependencies
■ Ranked Attack Graph (Carnegie Mellon University) – identifying critical
assets based on page rank and reachability analysis on attack graphs
– Lack of mission models; cannot analyze cyber impacts on high-level
missions
■ CMIA (MITRE) – cyber mission impact assessment based on military
mission models
– Centralized approach; lack of cyber impact analysis; lack of mission
asset prioritization
To address these challenges, key technologies
such as quantitative and meaningful security
Common Security and
Performance Metrics for CSA

■ Asset Capacity (AC): the (remaining) capacity of a cyber asset (after
being attacked or compromised). [0, 1]: 0 means not operational; 1
means fully operational
■ Average Length of Attack Paths (ALAP): the average effort to penetrate
a network, or compromise a system/service; evaluated by attack graphs.
n: the average length of potential attack paths
■ Compromised Host Percentage (CHP): the percentage of compromised hosts
in a network at time t. [0, 1]: 0 means no compromise; 1 means all
compromised
■ Exploit Probability (EP): how easy (or hard) it is to exploit a
vulnerability; could be measured by the CVSS exploitability sub-score.
[0, 1]: 0 means hard to exploit; 1 means easy to exploit
■ Impact Factor (IF): the impact level of a vulnerability after being
exploited; could be measured by the CVSS impact sub-score. [0, 1]: 0
means no impact; 1 means totally destroyed
■ Number of Attack Paths (NAP): the number of potential attack paths in
a network; could be evaluated based on attack graphs. n: the number of
potential attack paths
■ Network Preparedness (NP): is a network ready to carry out a mission?
E.g., all required services are supported by available cyber assets.
[0, 1]: 0 means not ready; 1 means fully ready
■ Network Resilience (NR): the percentage of compromised systems/services
that can be replaced/recovered by backup/alternative systems/services.
[0, 1]: 0 means cannot recover; 1 means can be fully recovered
■ Operational Capacity (OC): the (remaining) operational capacity of a
system/service (after being affected by a direct attack or indirect
impact). [0, 1]: 0 means not operational; 1 means fully operational
■ Resource Redundancy (RR): is there any redundant (backup) resource
assigned or allocated for a critical task/operation? 0 or 1: 0 means
no backup system; 1 means at least 1 backup system
■ Service Availability (SA): the availability of a required service to
support a particular mission, task, or operation. 0 or 1: 0 means not
available; 1 means service is available
■ Shortest Attack Path (SAP): the minimal effort to penetrate a network,
or compromise a system or service; evaluated by attack graphs. n: the
shortest length of potential attack paths
■ Severity Score (SS): the severity/risk of a vulnerability if it was
successfully exploited; could be measured based on the CVSS score.
[0, 1]: 0 means no risk; 1 means extremely high risk
■ Vulnerable Host Percentage (VHP): the percentage of vulnerable hosts
in a network. [0, 1]: 0 means no vulnerable host; 1 means all hosts
are vulnerable
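Two of the listed percentages are direct counts over the host inventory; a sketch over an invented four-host network (host names and states are made up):

```python
# Hypothetical host inventory: name -> current state
hosts = {"web1": "compromised", "web2": "vulnerable",
         "db1": "clean", "dns1": "vulnerable"}

def chp(hosts):
    """Compromised Host Percentage: fraction of hosts compromised."""
    return sum(s == "compromised" for s in hosts.values()) / len(hosts)

def vhp(hosts):
    """Vulnerable Host Percentage: fraction of hosts still vulnerable."""
    return sum(s == "vulnerable" for s in hosts.values()) / len(hosts)
```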
Common Vulnerability Scoring
System (CVSS)

http://www.first.org/cvss/
Common Vulnerability Scoring
System (CVSS)
■ Base: intrinsic and fundamental characteristics of
a vulnerability that are constant over time and
user environments,
■ Temporal: characteristics of a vulnerability that
change over time but not among user
environments
■ Environmental: characteristics of a vulnerability
that are relevant and unique to a particular user's
environment

http://www.first.org/cvss/
Base
Metrics

■ Exploitability Metrics
– reflect the characteristics of the thing that is
vulnerable
– when scoring Base metrics, it should be
assumed that the attacker has advanced
knowledge of the weaknesses of the target
system, including general configuration and
default defense mechanisms (e.g., built-in
firewalls, rate limits, traffic policing).
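Concretely, the CVSS v3.1 Base score's exploitability sub-score multiplies fixed coefficients for these metrics. A sketch with the coefficient values taken from the v3.1 specification (the PR weights below assume Scope: Unchanged):

```python
# CVSS v3.1 metric weights (Scope: Unchanged for PR)
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}  # Attack Vector
AC = {"L": 0.77, "H": 0.44}                        # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}             # Privileges Required
UI = {"N": 0.85, "R": 0.62}                        # User Interaction

def exploitability(av, ac, pr, ui):
    """CVSS v3.1 exploitability sub-score."""
    return 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
```

The easiest case (network-reachable, low complexity, no privileges, no user interaction) yields the sub-score's maximum of about 3.89; a physical, high-complexity, high-privilege attack needing user interaction scores far lower.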
Attack
Vector

Metric Value Description


Network (N) The vulnerable component is bound to the network stack and the set of possible attackers extends beyond the
other options listed below, up to and including the entire Internet. Such a vulnerability is often termed “remotely
exploitable” and can be thought of as an attack being exploitable at the protocol level one or more network hops
away (e.g., across one or more routers). An example of a network attack is an attacker causing a denial of service
(DoS) by sending a specially crafted TCP packet across a wide area network (e.g., CVE-2004-0230).

Adjacent (A) The vulnerable component is bound to the network stack, but the attack is limited at the protocol level to a logically
adjacent topology. This can mean an attack must be launched from the same shared physical (e.g., Bluetooth or
IEEE 802.11) or logical (e.g., local IP subnet) network, or from within a secure or otherwise limited administrative
domain (e.g., MPLS, secure VPN to an administrative network zone). One example of an Adjacent attack would be
an ARP (IPv4) or neighbor discovery (IPv6) flood leading to a denial of service on the local LAN segment (e.g.,
CVE-2013-6014).
Local (L) The vulnerable component is not bound to the network stack and the attacker’s path is via read/write/execute
capabilities. Either: the attacker exploits the vulnerability by accessing the target system locally (e.g., keyboard,
console) or remotely (e.g., SSH); or the attacker relies on User Interaction by another person to perform actions
required to exploit the vulnerability (e.g., using social engineering techniques to trick a legitimate user into
opening a malicious document).
Physical (P) The attack requires the attacker to physically touch or manipulate the vulnerable component. Physical interaction
may be brief (e.g., evil maid attack) or persistent. An example of such an attack is a cold boot attack in which an
attacker gains access to disk encryption keys after physically accessing the target system. Other examples include
peripheral attacks via FireWire/USB Direct Memory Access (DMA).
Privileges
required

Metric Description
Value
None The attacker is unauthorized prior to attack, and therefore does not
(N) require any access to settings or files of the vulnerable system
to carry out an attack.
Low (L) The attacker requires privileges that provide basic user capabilities
that could normally affect only settings and files owned by a user.
Alternatively, an attacker with Low privileges has the ability to
access only non-sensitive resources.
High (H) The attacker requires privileges that provide significant (e.g.,
administrative) control over the vulnerable component allowing
access to component-wide settings and files.
Impact
Metrics

■ capture the effects of a successfully exploited


vulnerability on the component that suffers the
worst outcome that is most directly and predictably
associated with the attack
■ Only the increase in access, privileges gained, or
other negative outcome as a result of successful
exploitation should be considered when scoring the
Impact metrics of a vulnerability.
Confidentiality

Metric Description
Value
High (H) There is a total loss of confidentiality, resulting in all resources within the
impacted component being divulged to the attacker. Alternatively, access to
only some restricted information is obtained, but the disclosed information
presents a direct, serious impact. For example, an attacker steals the
administrator's password, or private encryption keys of a web server.
Low (L) There is some loss of confidentiality. Access to some restricted information
is obtained, but the attacker does not have control over what information is
obtained, or the amount or kind of loss is limited. The information
disclosure does not cause a direct, serious loss to the impacted
component.
None (N) There is no loss of confidentiality within the impacted component.
Temporal Metrics
■ measure
– the current state of exploit techniques or code availability,
– the existence of any patches or workarounds,
– or the confidence in the description of a vulnerability.
Exploit Code Maturity
Metric Value Description
Not Defined (X) Assigning this value indicates there is insufficient information to choose one of the other values, and has no impact on the overall Temporal Score, i.e., it has the same effect on scoring as assigning High.
High (H) Functional autonomous code exists, or no exploit is required (manual trigger) and details are widely available. Exploit code works in every situation, or is actively being delivered via an autonomous agent (such as a worm or virus). Network-connected systems are likely to encounter scanning or exploitation attempts. Exploit development has reached the level of reliable, widely available, easy-to-use automated tools.
Functional (F) Functional exploit code is available. The code works in most situations where the vulnerability exists.
Proof-of-Concept (P) Proof-of-concept exploit code is available, or an attack demonstration is not practical for most systems. The code or technique is not functional in all situations and may require substantial modification by a skilled attacker.
Unproven (U) No exploit code is available, or an exploit is theoretical.
Environmental Metrics
■ enable the analyst to customize the CVSS score depending on the importance of the affected IT asset to a user's organization, measured in terms of complementary/alternative security controls in place, Confidentiality, Integrity, and Availability.
■ the metrics are the modified equivalent of Base metrics and are assigned values based on the component placement within organizational infrastructure.
Base Metrics Equations
ISS = 1 - [ (1 - Confidentiality) × (1 - Integrity) × (1 - Availability) ]

Impact =
If Scope is Unchanged: 6.42 × ISS
If Scope is Changed: 7.52 × (ISS - 0.029) - 3.25 × (ISS - 0.02)^15

Exploitability = 8.22 × AttackVector × AttackComplexity × PrivilegesRequired × UserInteraction

BaseScore =
If Impact <= 0: 0, else
If Scope is Unchanged: Roundup (Minimum [(Impact + Exploitability), 10])
If Scope is Changed: Roundup (Minimum [1.08 × (Impact + Exploitability), 10])
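The Base Score equations can be implemented directly. The sketch below uses the exploitability weights from the Metric Values table later in this lecture, the C/I/A impact weights (None 0, Low 0.22, High 0.56) and the Roundup function from Appendix A of the CVSS v3.1 specification; function and dictionary names are illustrative, not part of CVSS.

```python
# Metric weights from the CVSS v3.1 specification.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}
AC = {"L": 0.77, "H": 0.44}
# Privileges Required weights depend on whether Scope is changed.
PR = {False: {"N": 0.85, "L": 0.62, "H": 0.27},
      True:  {"N": 0.85, "L": 0.68, "H": 0.5}}
UI = {"N": 0.85, "R": 0.62}
CIA = {"N": 0.0, "L": 0.22, "H": 0.56}

def roundup(x):
    """CVSS v3.1 Roundup: smallest number, to one decimal place, >= x.
    Integer arithmetic avoids floating-point artifacts (spec Appendix A)."""
    i = int(round(x * 100000))
    return i / 100000.0 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def base_score(av, ac, pr, ui, scope_changed, c, i, a):
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    if scope_changed:
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    else:
        impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[scope_changed][pr] * UI[ui]
    if impact <= 0:
        return 0.0
    score = impact + exploitability
    if scope_changed:
        score *= 1.08
    return roundup(min(score, 10))

# MySQL example from this lecture: AV:N/AC:L/PR:L/UI:N/S:C/C:L/I:L/A:N
print(base_score("N", "L", "L", "N", True, "L", "L", "N"))  # 6.4
# VMware example: AV:N/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:H
print(base_score("N", "L", "L", "N", True, "H", "H", "H"))  # 9.9
```

The two calls reproduce the Base Scores of the MySQL (6.4) and VMware (9.9) worked examples later in this lecture.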
Temporal Metrics Equations

TemporalScore = Roundup (BaseScore × ExploitCodeMaturity × RemediationLevel × ReportConfidence)
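A minimal sketch of the Temporal Score equation. The numeric weights below are not listed on these slides; they are taken from the CVSS v3.1 specification (Not Defined is 1.0 for all three metrics, so an unset metric leaves the Base Score unchanged).

```python
# Temporal metric weights from the CVSS v3.1 specification.
EXPLOIT_CODE_MATURITY = {"X": 1.0, "H": 1.0, "F": 0.97, "P": 0.94, "U": 0.91}
REMEDIATION_LEVEL = {"X": 1.0, "U": 1.0, "W": 0.97, "T": 0.96, "O": 0.95}
REPORT_CONFIDENCE = {"X": 1.0, "C": 1.0, "R": 0.96, "U": 0.92}

def roundup(x):
    """CVSS v3.1 Roundup: smallest number, to one decimal place, >= x."""
    i = int(round(x * 100000))
    return i / 100000.0 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def temporal_score(base, e="X", rl="X", rc="X"):
    return roundup(base * EXPLOIT_CODE_MATURITY[e]
                        * REMEDIATION_LEVEL[rl]
                        * REPORT_CONFIDENCE[rc])

# Functional exploit code (F) plus an official fix (O) lowers the
# MySQL example's 6.4 base score:
print(temporal_score(6.4, e="F", rl="O", rc="C"))  # 5.9
```

Note that Temporal metrics can only keep or lower the Base Score, never raise it, since every weight is at most 1.0.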
Environmental Metrics Equations
MISS = Minimum ( 1 - [ (1 - ConfidentialityRequirement × ModifiedConfidentiality) × (1 - IntegrityRequirement × ModifiedIntegrity) × (1 - AvailabilityRequirement × ModifiedAvailability) ], 0.915)

ModifiedImpact =
If ModifiedScope is Unchanged: 6.42 × MISS
If ModifiedScope is Changed: 7.52 × (MISS - 0.029) - 3.25 × (MISS × 0.9731 - 0.02)^13

ModifiedExploitability = 8.22 × ModifiedAttackVector × ModifiedAttackComplexity × ModifiedPrivilegesRequired × ModifiedUserInteraction
Environmental Metrics Equations
EnvironmentalScore =
If ModifiedImpact <= 0: 0, else
If ModifiedScope is Unchanged: Roundup (Roundup [Minimum ([ModifiedImpact + ModifiedExploitability], 10)] × ExploitCodeMaturity × RemediationLevel × ReportConfidence)
If ModifiedScope is Changed: Roundup (Roundup [Minimum (1.08 × [ModifiedImpact + ModifiedExploitability], 10)] × ExploitCodeMaturity × RemediationLevel × ReportConfidence)
Metric Values
Metric: Metric Value, Numerical Value
Attack Vector / Modified Attack Vector: Network 0.85, Adjacent 0.62, Local 0.55, Physical 0.2
Attack Complexity / Modified Attack Complexity: Low 0.77, High 0.44
Privileges Required / Modified Privileges Required: None 0.85, Low 0.62 (or 0.68 if Scope / Modified Scope is Changed), High 0.27 (or 0.5 if Scope / Modified Scope is Changed)
User Interaction / Modified User Interaction: None 0.85, Required 0.62
Qualitative Severity Rating Scale
Rating CVSS Score
None 0.0
Low 0.1 - 3.9
Medium 4.0 - 6.9
High 7.0 - 8.9
Critical 9.0 - 10.0
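The qualitative scale above is a simple lookup on the numeric score. A sketch, with the function name chosen for illustration:

```python
def severity_rating(score):
    """Map a CVSS v3.1 score (0.0 to 10.0) to its qualitative rating."""
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

# Ratings for the two worked examples in this lecture:
print(severity_rating(6.4))  # Medium   (MySQL stored SQL injection)
print(severity_rating(9.9))  # Critical (VMware guest-to-host escape)
```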
Example MySQL Stored SQL Injection
■ Vulnerability
– A vulnerability in the MySQL Server database could allow a remote,
authenticated user to inject SQL code that runs with high privileges
on a remote MySQL Server database. A successful attack could
allow any data in the remote MySQL database to be read or
modified. The vulnerability occurs due to insufficient validation of
user-supplied data as it is replicated to remote MySQL Server
instances.
■ Attack
– An attacker requires an account on the target MySQL database with
the privilege to modify user-supplied identifiers, such as table
names. The account must be on a database which is configured to
replicate data to one or more remote MySQL databases. An attack
consists of logging in using the account and modifying an identifier
to a new value that contains a quote character and a fragment of
malicious SQL. This SQL will later be replicated to, and executed on,
one or more remote systems, as a highly privileged user. The
malicious SQL is injected into SQL statements in a way that
prevents the execution of arbitrary SQL statements.
CVSS
■ CVSS v3.1 Base Score: 6.4
Metric Value Comments
Attack Vector Network The attacker connects to the exploitable MySQL
database over a network.
Attack Complexity Low Replication must be enabled on the target database.
Following the guidance in Section 2.1.2 of the CVSS
v3.1, we assume the system is configured in this way.
Privileges Required Low The attacker requires an account with the ability to
change user-supplied identifiers, such as table
names. Basic users do not get this privilege by
default, but it is not considered a sufficiently trusted
privilege to warrant this metric being High.
User Interaction None No user interaction is required as replication happens
automatically.
Scope Changed The vulnerable component is the MySQL server
database that the attacker logs into to perform the
attack. The impacted component is a remote MySQL
server database (or databases) that this database
replicates to.
CVSS score continued
Metric Value Comments
Confidentiality Low The injected SQL runs with high privilege and can
access information the attacker should not have
access to. Although this runs on a remote database
(or databases), it may be possible to exfiltrate the
information as part of the SQL statement. The
malicious SQL is injected into SQL statements that
are part of the replication functionality, preventing the
attacker from executing arbitrary SQL statements.
Integrity Low The injected SQL runs with high privilege and can
modify information the attacker should not have
access to. The malicious SQL is injected into SQL
statements that are part of the replication
functionality, preventing the attacker from executing
arbitrary SQL statements.
Availability None Although injected code is run with high privilege, the
nature of this attack prevents arbitrary SQL
statements being run that could affect the availability
of MySQL databases.
Example VMware Guest to Host
Escape Vulnerability
■ Vulnerability
– Due to a flaw in the handler function for Remote Procedure Call (RPC)
commands, it is possible to manipulate data pointers within the Virtual
Machine Executable (VMX) process. This vulnerability may allow a user
in a Guest Virtual Machine to crash the VMX process resulting in a
Denial of Service (DoS) on the host or potentially execute code on the
host.
■ Attack
– A successful exploit requires an attacker to have access to a Guest
Virtual Machine (VM). The Guest VM needs to be configured to have
4GB or more of memory. The attacker would then have to construct a
specially crafted remote RPC call to exploit the VMX process.
– The VMX process runs in the VMkernel that is responsible for handling
input/output to devices that are not critical to performance. It is also
responsible for communicating with user interfaces, snapshot
managers, and remote console. Each virtual machine has its own VMX
process which interacts with the host processes via the VMkernel.
– The attacker can exploit the vulnerability to crash the VMX process
resulting in a DoS of the host or potentially execute code on the host
operating system.
CVSS
■ CVSS v3.1 Base Score: 9.9

Metric Value Comments
Attack Vector Network VMX process is bound to the network stack
and the attacker can send RPC commands
remotely.
Attack Low The only required condition for this attack is
Complexity for virtual machines to have 4GB of memory.
Virtual machines that have less than 4GB of
memory are not affected.
Privileges Low The attacker must have access to the guest
Required virtual machine. This is easy in a tenant
environment.
User Interaction None The attacker requires no user interaction to
successfully exploit the vulnerability. RPC
commands can be sent anytime.
CVSS score continued
Metric Value Comments
Scope Changed The vulnerable component is a VMX
process that can only be accessed
from the guest virtual machine. The
impacted component is the host
operating system which has separate
authorization authority from the guest
virtual machine.
Confidentiality High Full compromise of the host operating
system via remote code execution.
Integrity High Full compromise of the host operating
system via remote code execution.
Availability High Full compromise of the host operating
system via remote code execution.
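Published CVSS scores (e.g., in NVD advisories) are usually accompanied by a compact vector string listing the metric values. A small parser sketch; the vector string below is assembled from the VMware metric table above, not quoted from an advisory:

```python
def parse_vector(vector):
    """Split a CVSS v3.x vector string into a {metric: value} dict."""
    prefix, _, metrics = vector.partition("/")
    if not prefix.startswith("CVSS:"):
        raise ValueError("not a CVSS vector string")
    return dict(part.split(":", 1) for part in metrics.split("/"))

# Vector corresponding to the VMware example's metrics:
v = parse_vector("CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:H")
print(v["AV"], v["S"], v["C"])  # N C H
```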
Wrap-up
■ Meaningful security metrics are necessary to quantitatively evaluate and measure the operational effectiveness and performance of a system.
■ The Common Vulnerability Scoring System (CVSS) has been widely adopted as the primary method for assessing the severity of computer system security vulnerabilities.
■ CVSS is supported by a CVSS Calculator: https://www.first.org/cvss/calculator/3.1
Quiz time
■ Please switch over to Moodle

■ THANK YOU ALL, HAVE A GREAT HOLIDAY AND AN AWESOME NEW YEAR!
