
Security Models
Bell-LaPadula, Biba, Chinese Wall


Introduction


Basic Components

• Confidentiality
– Keeping data and resources hidden
• Integrity
– Two views:
• Data integrity (integrity)
• Origin integrity (authentication)
– Integrity mechanisms fall into two classes:
• Prevention mechanisms
• Detection mechanisms
• Availability
– Enabling access to data and resources


Classes of Threats

• Disclosure:
– unauthorized access to information
– snooping / wiretapping
• Deception
– modification, spoofing (phishing), repudiation of origin,
denial of receipt
• Disruption
– modification
• Usurpation
– modification, spoofing, delay, denial of service


Policies and Mechanisms

• Policy says what is, and is not, allowed


– This defines “security” for the site/system/etc.
– May be represented mathematically, as a list of allowed
(secure) and disallowed (nonsecure) states
• A given policy provides an axiomatic description of secure and
nonsecure states
– In practice, policies are rarely so precise

• Mechanisms enforce policies
– A mechanism can be a method, tool, or procedure for enforcing a security policy
• Composition of policies
– If policies conflict, discrepancies may create security
vulnerabilities


Goals of Security

• Given a security policy’s specification of “secure” and “nonsecure” actions, the goals of security are:
– Prevention
• Prevent attackers from violating security policy
– Detection
• Detect attackers’ violation of security policy
– Recovery
• Stop attack, assess and repair damage
• Continue to function correctly even if attack succeeds


Trust and Assumptions

• Underlie all aspects of security


• Policies
– Unambiguously partition system states
– Correctly capture security requirements
• Mechanisms
– Assumed to enforce policy
– Support mechanisms work correctly


Types of Mechanisms

[Figure: mechanism types compared against the set of reachable states and the set of secure states]
• Secure: every reachable state is a secure state
• Precise: the reachable states are exactly the secure states
• Broad: some reachable states are not secure


Assurance

• Specification
– Requirements analysis
– Statement (formal or informal) of desired functionality
• Design
– How system will meet specification
• Implementation
– Programs/systems that carry out design


Operational Issues

• Cost-Benefit Analysis
– Is it cheaper to prevent or recover?
• Risk Analysis
– Should we protect something?
– How much should we protect this thing?
• Laws and Customs
– Are desired security measures illegal?
– Will people do them?


Human Issues

• Organizational Problems
– Power and responsibility
– Financial benefits
• People problems
– Outsiders and insiders
– Social engineering


The security life cycle
(tying it all together)


Access Control Matrix


Access Control Matrix Overview

• Protection state of a system
– Describes the current settings and values of the system that are relevant to protection
– It can be described using an access control matrix
• Consider the set of possible protection states P
– Some subset Q of P consists of exactly those states in which the system is authorized to reside
• Whenever the system state is in Q, the system is secure
– When the current state is in P − Q, the system is not secure
• Characterizing the states in Q is the function of a security policy
– Preventing the system from entering a state in P − Q is the function of a security mechanism


Access Control Matrix

• Describes the protection state precisely
• A matrix describing the rights of subjects with respect to every object in the system
– A subject is just an active entity
– An object can also be a subject
• State transitions change elements of the matrix
– If the system starts in one of a set of authorized states, and only secure operations are applied, then by induction the system will always be secure


Description

• Subjects S = { s1, …, sn }
• Objects O = { o1, …, om }
• Rights R = { r1, …, rk }
• Entries A[si, oj] ⊆ R
– A[si, oj] = { rx, …, ry } means subject si has rights rx, …, ry over object oj
• The triple (S, O, A) is the set of protection states of the system

[Matrix: rows are the subjects s1, …, sn; columns are the objects o1, …, om (which include the subjects); cell A[si, oj] holds si’s rights over oj]
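A minimal sketch of this structure in Python (the subject, object, and right names here are illustrative, not taken from the slides’ examples):

    # ACM as a nested dict: A[s][o] is the set of rights subject s
    # holds over object o; a missing entry means the empty set.
    subjects = {"process-1", "process-2"}
    objects = {"file-1", "file-2"} | subjects   # subjects are objects too

    A = {s: {o: set() for o in objects} for s in subjects}
    A["process-1"]["file-1"] = {"read", "write", "own"}

    def rights(A, s, o):
        """Return A[s, o], the rights s holds over o."""
        return A.get(s, {}).get(o, set())

    print(rights(A, "process-1", "file-1"))  # e.g. {'own', 'read', 'write'}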


Example 1
(set of rights r,w,x,a,o)

            File-1             File-2     Process-1                    Process-2
Process-1   read, write, own   read       read, write, execute, own    write
Process-2   append             read, own  read                         read, write, execute, own

Process-1 owns File-1, so it could alter the contents of A[x, File-1], where x is any subject


R - the set of rights


• The ACM model is an abstract model of the protection state
– The meaning of a right is associated with a particular implementation of the system
– The meaning of a right may vary depending on the object involved
• For example, for a process in *nix:
– Accessing a file, the rights r, w, x have their usual meanings
– Accessing a directory:
• r: to be able to list the contents of the directory
• w: to be able to create, rename, or delete files or subdirectories
• x: to be able to access files or subdirectories within it
– Accessing a process:
• r: to be able to receive signals
• w: to be able to send signals
• x: to be able to execute the process as a subprocess
– The superuser “owns” all objects on the system
• The superuser can access any (local) file regardless of the permissions the owner has granted


Example 2 – Hosts on a LAN


(set of rights ftp, mail, nfs, own)

            mquiroga   Sicua                 Banner
mquiroga    own        ftp                   ftp
Sicua                  ftp, nfs, mail, own   ftp, nfs, mail
Banner                 ftp, mail             ftp, nfs, mail, own

mquiroga has an ftp client but no servers, so neither of the other systems can access it, but it can ftp to them.
Sicua and Banner offer ftp services to anyone.


Example 3 – Programs
(set of rights +, -, call)

           counter   inc_ctr   dec_ctr   manager
inc_ctr    +
dec_ctr    −
manager              call      call      call

Subjects are the procedures… Objects are the variables…

inc_ctr can increment counter; dec_ctr can decrement it

manager can call inc_ctr, dec_ctr, and itself… presumably it is recursive


Example 4
(ACM at 3AM and 10AM)

At 3AM the time condition is met; the ACM contains:
A[annie, picture] = { paint }

At 10AM the time condition is not met; the ACM contains:
A[annie, picture] = ∅


Access Controlled by History

Name      Position    Age   Salary
Alice     Teacher     45    $40,000
Bob       Aide        20    $20,000
Cathy     Principal   37    $60,000
Dilbert   Teacher     50    $50,000
Eve       Teacher     33    $50,000

Query C1 = “position = teacher”
sum_salary(C1) can be answered.
Query C2 = “age > 40 & position = teacher”
sum_salary(C2) should not be answered.

How can this control be modeled using an ACM?


Protection State Transitions

• As processes execute operations, the state of the protection system changes
• Let the initial state of the system be X0 = (S0, O0, A0)
• The symbol → represents a transition
– Xi →τi+1 Xi+1: command τi+1 moves the system from state Xi to Xi+1
– X →* Y: a sequence of commands moves the system from state X to state Y
• Commands are often called transformation procedures
– For every command, there is a sequence of state transition operations that takes the initial state Xi to the resulting state Xi+1


Primitive Operations

• create subject s; create object o
– create subject adds a new row and a new column to the ACM; create object adds only a new column
• destroy subject s; destroy object o
– destroy subject deletes a row and a column from the ACM; destroy object deletes only a column
• enter r into A[s,o]
– Adds right r for subject s over object o
• delete r from A[s,o]
– Removes right r from subject s over object o
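A sketch of the six primitive operations over the dict-based ACM from the earlier sketch; the function names are mine, and each mutates (S, O, A) following the postconditions given on the next slides:

    def create_subject(S, O, A, s):
        S.add(s); O.add(s)                  # new row and new column
        A[s] = {o: set() for o in O}
        for x in S:
            A[x].setdefault(s, set())

    def create_object(S, O, A, o):
        O.add(o)                            # new column only
        for x in S:
            A[x][o] = set()

    def destroy_subject(S, O, A, s):
        S.discard(s); O.discard(s)
        A.pop(s, None)                      # delete the row
        for x in S:
            A[x].pop(s, None)               # delete the column

    def destroy_object(S, O, A, o):
        O.discard(o)
        for x in S:
            A[x].pop(o, None)

    def enter(A, r, s, o):                  # enter r into A[s, o]
        A[s][o].add(r)

    def delete(A, r, s, o):                 # delete r from A[s, o]
        A[s][o].discard(r)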


Create Subject

• Precondition: s ∉ S
• Primitive command: create subject s
• Postconditions:
– S´ = S ∪{ s }, O´ = O ∪{ s }
– (∀y ∈ O´)[a´[s, y] = ∅], (∀x ∈ S´)[a´[x, s] = ∅]
– (∀x ∈ S)(∀y ∈ O)[a´[x, y] = a[x, y]]


Create Object

• Precondition: o ∉ O
• Primitive command: create object o
• Postconditions:
– S´ = S, O´ = O ∪ { o }
– (∀x ∈ S´)[a´[x, o] = ∅]
– (∀x ∈ S)(∀y ∈ O)[a´[x, y] = a[x, y]]


Add Right

• Precondition: s ∈ S, o ∈ O
• Primitive command: enter r into a[s, o]
• Postconditions:
– S´ = S, O´ = O
– a´[s, o] = a[s, o] ∪ { r }
– (∀x ∈ S´)(∀y ∈ O´ – { o }) [a´[x, y] = a[x, y]]
– (∀x ∈ S´ – { s })(∀y ∈ O´) [a´[x, y] = a[x, y]]


Delete Right

• Precondition: s ∈ S, o ∈ O
• Primitive command: delete r from a[s, o]
• Postconditions:
– S´ = S, O´ = O
– a´[s, o] = a[s, o] – { r }
– (∀x ∈ S´)(∀y ∈ O´ – { o }) [a´[x, y] = a[x, y]]
– (∀x ∈ S´ – { s })(∀y ∈ O´) [a´[x, y] = a[x, y]]


Destroy Subject

• Precondition: s ∈ S
• Primitive command: destroy subject s
• Postconditions:
– S´ = S – { s }, O´ = O – { s }
– (∀y ∈ O´)[a´[s, y] = ∅], (∀x ∈ S´)[a´[x, s] = ∅]
– (∀x ∈ S´)(∀y ∈ O´) [a´[x, y] = a[x, y]]


Destroy Object

• Precondition: o ∈ O
• Primitive command: destroy object o
• Postconditions:
– S´ = S, O´ = O – { o }
– (∀x ∈ S´)[a´[x, o] = ∅]
– (∀x ∈ S´)(∀y ∈ O´) [a´[x, y] = a[x, y]]


Creating File

• Process p creates file f with owner read (r) and write (w) permission

command create•file(p, f)
create object f;
enter own into A[p, f];
enter r into A[p, f];
enter w into A[p, f];
end


Creating a new process

• Process p wishes to create a new process q

command spawn•process(p, q)
create subject q;
enter own into A[p, q];
enter r into A[p, q];
enter w into A[p, q];
enter r into A[q, p];
enter w into A[q, p];
end


Mono-Operational Commands

• The system can update the matrix only by using defined commands…
– However, you can write mono-operational commands (commands with a single primitive operation)
• Make process p the owner of file g

command make•owner(p, g)
enter own into A[p, g];
end


Conditional Commands

• Let p give q the r right over f, if p owns f
command grant•read•file•1(p, f, q)
if own in A[p, f] then enter r into A[q, f];
end

• Let p give q the r and w rights over f, if p owns f and p has c rights over q
command grant•read•file•2(p, f, q)
if own in A[p, f] and c in A[p, q]
then enter r into A[q, f];
enter w into A[q, f];
end
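A sketch of this conditional command in Python, reusing the enter primitive from the earlier sketch; the guard is evaluated first, and the primitives run only if it holds:

    # command grant•read•file•2(p, f, q)
    def grant_read_file_2(A, p, f, q):
        if "own" in A[p][f] and "c" in A[p][q]:
            enter(A, "r", q, f)
            enter(A, "w", q, f)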


Not or, not not

• All conditions are joined by and, never by or
– Joining conditions with or is equivalent to writing two commands, each with one of the conditions
• The negation of a condition is not permitted


Copy Right
(aka grant right)

• Allows possessor to give rights to another


• Often attached to a right, so only applies to that right
– r is read right that cannot be copied
– rc is read right that can be copied (pass it on)
• Is copy flag copied when giving r rights?
– Depends on model, instantiation of model

command grant•r(p, f, q)
if r in A[p, f] and c in A[p, f]
then enter r into A[q, f];
end


Attenuation of Privilege

• The principle says you can’t give rights you do not possess
– Restricts the addition of rights within a system
– Usually ignored for the owner
• Why? The owner can give herself the rights, give them to others, and then delete her own rights.


Own Right

• Usually allows the possessor to change entries in an ACM column
– So the owner of an object can add and delete rights for others
• The owner of an object is usually the subject that created the object, or a subject to which the creator gave ownership


Key Points

• The access control matrix is the simplest abstraction for representing the protection state
• Transitions alter the protection state
• Six primitive operations alter the matrix
– Transitions can be expressed as commands composed of these operations and, possibly, conditions


Security Policies


Types of Security Policies

• A military security policy is a security policy developed primarily to provide confidentiality
– a.k.a. governmental security policy
• A commercial security policy is a security policy developed primarily to provide integrity


Confidentiality Policy

• Goal: prevent the unauthorized disclosure of information
– Deals with information flow
– Integrity is incidental
• Multilevel security models are the best-known examples
– The Bell-LaPadula Model is the basis for many, or most, of these


Bell-LaPadula Model v1

• Security levels are arranged in a linear ordering
– Top Secret: highest
– Secret
– Confidential
– Unclassified: lowest
• A subject has a security clearance
– L(s) = ls
• An object has a security classification
– L(o) = lo
– For all security classifications li, i = 0, …, k−1: li < li+1
• The goal is to prevent read access to objects at a security classification higher than the subject’s clearance


Example

Security Level      Subjects           Objects
Top Secret (TS)     Tamara, Thomas     Personnel Files
Secret (S)          Samuel, Sally      Electronic Mail Files
Confidential (C)    Claire, Clarence   Activity Log Files
Unclassified (UC)   Ursula, Ulaley     Telephone List Files

Thomas can read all files.
Claire cannot read Personnel or E-Mail Files.
Ursula can only read Telephone Lists.


Reading Information
(simple security condition v1)

• Information flows up, not down
– “reads up” disallowed, “reads down” allowed
• Simple Security Condition v1
– Subject s can read object o iff:
• L(o) ≤ L(s), and
• s has permission to read o
– Combines mandatory control (the relationship of security levels) with discretionary control (the required permission)
– Sometimes called the “no reads up” rule


Writing Information
(star property v1)

• Information flows up, not down
– “writes up” allowed, “writes down” disallowed
• *-Property v1
– Subject s can write object o iff:
• L(s) ≤ L(o), and
• s has permission to write o
– Combines mandatory control (the relationship of security levels) with discretionary control (the required permission)
– Sometimes called the “no writes down” rule
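A sketch of both v1 checks in Python; the linear ordering is encoded as integers, L maps each entity to its level, and the discretionary part is looked up in the ACM from the earlier sketches:

    LEVELS = {"Unclassified": 0, "Confidential": 1,
              "Secret": 2, "Top Secret": 3}

    def can_read(L, A, s, o):
        # simple security condition v1: L(o) <= L(s) plus read permission
        return LEVELS[L[o]] <= LEVELS[L[s]] and "r" in A[s][o]

    def can_write(L, A, s, o):
        # *-property v1: L(s) <= L(o) plus write permission
        return LEVELS[L[s]] <= LEVELS[L[o]] and "w" in A[s][o]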


Basic Security Theorem v1

If a system starts in a secure state, and every transition of the system satisfies the simple security condition (v1) and the *-property (v1), then every state of the system is secure.

Proof: induct on the number of transitions.


Bell-LaPadula Model v2

• Depending on the task, a subject can have different security clearances:
– TS for some tasks, S for others
• The idea is to expand the notion of security level to include categories
– e.g. Nuc, Nato, Crypto, US, Eur, Asi, Lat, …
• So a security level is the pair (clearance, category set)
– For example:
• ( Top Secret, { Nuc, Eur, US } )
• ( Confidential, { Eur, US } )
• ( Secret, { Nuc, US } )
• ( Confidential, { Eur } )


A modern algebra detour


(sets and relations)

• For a set S, a relation R is any subset of S × S
– If a, b ∈ S and (a, b) ∈ R, write aRb
• Example
– Let S = {1, 2, 3} and let the relation R be ≤
– R = {(1,1), (1,2), (1,3), (2,2), (2,3), (3,3)}
– So we write 1 ≤ 2 and 3 ≤ 3, but not 3 ≤ 2


A modern algebra detour


(relation properties)

• The following definitions describe properties of relations
– Reflexive:
• For all a ∈ S, aRa
– On the integers, ≤ is reflexive: 1 ≤ 1, 2 ≤ 2, 3 ≤ 3
– Antisymmetric:
• For all a, b ∈ S, aRb ∧ bRa ⇒ a = b
– On the integers, ≤ is antisymmetric
– Transitive:
• For all a, b, c ∈ S, aRb ∧ bRc ⇒ aRc
– On the integers, ≤ is transitive: 1 ≤ 2 and 2 ≤ 3 means 1 ≤ 3


A modern algebra detour


(a bigger example)

• Let C be the set of complex numbers with integer parts
– a ∈ C ⇒ a = aR + aI·i, with aR, aI integers
• Let’s define the relation ≤C
– a ≤C b if, and only if, aR ≤ bR and aI ≤ bI
• Clearly ≤C is reflexive, antisymmetric, and transitive
– Of course, we are using the relation ≤ defined over the integers (aR, aI are integers)

A modern algebra detour


(partial ordering)

• A relation R orders some members of the set S
– If all pairs are ordered, it’s a total ordering
• Example
– ≤ on the integers is a total ordering
– ≤C is a partial ordering on C (because neither 3+5i ≤C 4+2i nor 4+2i ≤C 3+5i holds)
• A total ordering requires an additional property:
– Comparability:
• For all a, b ∈ S, aRb or bRa
– On the integers, ≤ is comparable: for any a and b, either a ≤ b or b ≤ a


A modern algebra detour


(upper bounds)

• Under a partial ordering we define the “upper bound” of two elements
• For a, b ∈ S, if ∃ u ∈ S such that aRu and bRu, then u is an upper bound for a, b
– u is the least upper bound if there is no other upper bound t with tRu
• Example
– For 1 + 5i, 2 + 4i ∈ C, upper bounds include:
• 2 + 5i, 3 + 8i, and 9 + 100i
– The least upper bound of those is 2 + 5i


A modern algebra detour


(lower bounds)

• Under a partial ordering we define the “lower bound” of two elements
• For a, b ∈ S, if ∃ l ∈ S with lRa and lRb, then l is a lower bound for a, b
– l is the greatest lower bound if there is no other lower bound t with lRt
• Example
– For 1 + 5i, 2 + 4i ∈ C, lower bounds include:
• 0, –1 + 2i, 1 + 1i, and 1 + 4i
– The greatest lower bound of those is 1 + 4i


A modern algebra detour


(lattices)

• A lattice is the combination of a set of elements S and a relation R meeting the following criteria:
– R is reflexive, antisymmetric, and transitive on the elements of S
– For every s, t ∈ S, there exists a greatest lower bound under R
– For every s, t ∈ S, there exists a least upper bound under R


A modern algebra detour


(a couple of lattices examples)

• The set {0, 1, 2} forms a lattice under the relation ≤
– ≤ is clearly reflexive, antisymmetric, and transitive
– The least upper bound of any two integers is the larger
– The greatest lower bound of any two integers is the smaller
• Consider the subset C′ of the complex numbers whose real and imaginary parts are integers from 0 to 10, inclusive
– C′ and ≤C form a lattice
• As shown earlier, ≤C is reflexive, antisymmetric, and transitive
• Least upper bound of a and b:
– cR = max(aR, bR), cI = max(aI, bI); then c = cR + cI·i
• Greatest lower bound of a and b:
– cR = min(aR, bR), cI = min(aI, bI); then c = cR + cI·i
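A quick sketch of ≤C and its lattice operations in Python, using the slide’s own example points:

    def leq_c(a, b):                 # a ≤C b: componentwise ≤
        return a.real <= b.real and a.imag <= b.imag

    def lub_c(a, b):                 # least upper bound: componentwise max
        return complex(max(a.real, b.real), max(a.imag, b.imag))

    def glb_c(a, b):                 # greatest lower bound: componentwise min
        return complex(min(a.real, b.real), min(a.imag, b.imag))

    a, b = 1 + 5j, 2 + 4j
    print(leq_c(a, b), leq_c(b, a))  # False False: partial, not total, order
    print(lub_c(a, b))               # (2+5j)
    print(glb_c(a, b))               # (1+4j)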


A modern algebra detour


(a lattice example with picture)

        2+5i
       /    \
   1+5i      2+4i
       \    /
        1+4i

Edges (read upward) represent ≤C


A modern algebra detour


(two exercises)

• Prove that the set of all subsets of a given set S (a.k.a. the power set of S) forms a lattice under the relation “subset” (⊆)
• Consider a set whose elements are totally ordered by a relation. Does the set form a lattice under that relation? If so, show that it does. If not, give a counterexample.

• END DETOUR, you are on your own.


Bell-LaPadula Model v2

• …a security level is the pair (clearance, category set)
– ( Top Secret, { Nuc, Eur, US } ), ( Confidential, { Eur, US } ), …
• The category sets form a lattice:

           {NUC, EUR, US}
  {NUC, EUR}  {NUC, US}  {EUR, US}
     {NUC}      {EUR}      {US}

Edges (read upward) represent ⊆


Levels and Lattices

• The security level (L, C) dominates the security level (L´, C´) iff L´ ≤ L and C´ ⊆ C
– Examples
• George is cleared into security level (SECRET, {NUC, EUR})
• DocA is classified as (CONFIDENTIAL, {NUC})
• DocB is classified as (SECRET, {EUR, US})
• DocC is classified as (SECRET, {EUR})
– George dom DocA
– George ¬dom DocB
» {EUR, US} ⊄ {NUC, EUR}
– George dom DocC
– More examples
• (Top Secret, {Nuc, Asi}) dom (Secret, {Nuc})
• (Secret, {Nuc, Eur}) dom (Confidential, {Nuc, Eur})
• (Top Secret, {Nuc}) ¬dom (Confidential, {Eur})
• Any pair of security levels may (or may not) be related by dom
– dom is a partial order


Levels and Lattices (cont.)

• Let C be the set of classifications (TS, S, C, UC, …) and K the set of categories (NUC, EUR, US, …)
• The set of security levels L = C × 2^K, together with the relation dom, forms a lattice
– lub((A, B), (A´, B´)) = (max(A, A´), B ∪ B´)
– glb((A, B), (A´, B´)) = (min(A, A´), B ∩ B´)

• Personal note: more precisely, L = C × 2^K, since the second component is a set of categories
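A sketch of dom, lub, and glb in Python, with clearances mapped to integers; the example values follow the George/DocB example above:

    CLEARANCE = {"UC": 0, "C": 1, "S": 2, "TS": 3}

    def dom(l1, l2):
        """(A, B) dom (A', B') iff A' <= A and B' is a subset of B."""
        (a1, b1), (a2, b2) = l1, l2
        return CLEARANCE[a2] <= CLEARANCE[a1] and b2 <= b1

    def lub(l1, l2):
        (a1, b1), (a2, b2) = l1, l2
        return (max(a1, a2, key=CLEARANCE.get), b1 | b2)

    def glb(l1, l2):
        (a1, b1), (a2, b2) = l1, l2
        return (min(a1, a2, key=CLEARANCE.get), b1 & b2)

    george = ("S", {"NUC", "EUR"})
    doc_b  = ("S", {"EUR", "US"})
    print(dom(george, doc_b))  # False: {EUR, US} not a subset of {NUC, EUR}
    print(lub(george, doc_b))  # ('S', {'NUC', 'EUR', 'US'})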


Reading Information
(simple security condition v2)

• Information flows up, not down
– “reads up” disallowed, “reads down” allowed
• Simple Security Condition v2
– Subject s can read object o iff:
• L(s) dom L(o), and
• s has permission to read o
– Combines mandatory control (the relationship of security levels) with discretionary control (the required permission)
– Sometimes called the “no reads up” rule


Writing Information
(star property v2)

• Information flows up, not down
– “writes up” allowed, “writes down” disallowed
• *-Property v2
– Subject s can write object o iff:
• L(o) dom L(s), and
• s has permission to write o
– Combines mandatory control (the relationship of security levels) with discretionary control (the required permission)
– Sometimes called the “no writes down” rule


Basic Security Theorem v2

Let Σ be a system with a secure initial state σ0, and let T be a set of state transformations. If every element of T preserves the simple security condition (v2) and the *-property (v2), then every state σi, i ≥ 0, is secure.

Proof: induct on the number of transitions.


Problem

• Suppose a military organization where the Colonel’s clearance is (Secret, {Nuc, Eur}) and the Major’s clearance is (Secret, {Eur})
– The Colonel can’t write a message to the Major
• *-property: L(Major) ¬dom L(Colonel)
– By the simple security condition, the Major can’t read from the Colonel
– The Major can write to the Colonel
• *-property: L(Colonel) dom L(Major)
– By the simple security condition, the Colonel can read from the Major

• Clearly absurd!


Solution

• A subject may decrease its security level in order to communicate with entities at lower security levels
– A subject has a maximum security level and a current security level
• Of course, maxlevel(s) dom curlevel(s)
• Example
– The Colonel has maxlevel (Secret, {Nuc, Eur})
• The Colonel sets curlevel to (Secret, {Eur})
– This is valid: maxlevel(Colonel) dom curlevel(Colonel)
– Now the Colonel can write a message to the Major
• *-property: L(Major) dom curlevel(Colonel)
• The Major can read from the Colonel
– simple security condition: L(Major) dom curlevel(Colonel)


Bell-LaPadula models
(key points)

• Confidentiality models restrict the flow of information
• Bell-LaPadula models multilevel security
– Cornerstone of much work in computer security
• e.g. DoD’s Trusted Computer System Evaluation Criteria (“Orange Book”)


Integrity Policies

• Requirements
– Very different from confidentiality policies
• For commercial applications, integrity, rather than confidentiality, is key
• Biba’s models
– Low-Water-Mark policy
– Ring policy
– Strict Integrity policy
• Clark-Wilson model


Requirements of Integrity Policies

1. Users will not write their own programs, but will use existing production programs and databases
2. Programmers will develop and test programs on a nonproduction system; if they need access to actual data, they will be given production data via a special process, but will use it on their development system
3. A special process must be followed to install a program from the development system onto the production system
4. The special process in requirement 3 must be controlled and audited
5. The managers and auditors must have access to both the system state and the system logs that are generated


Some principles of operation

• Separation of duty:
– If two or more steps are required to perform a critical function, at least two different people should perform the steps
• Separation of function:
– Developers don’t develop new programs directly on production systems
– Developers don’t process production data on the development systems
• Developers and testers may receive sanitized production data
• Auditing:
– Commercial systems emphasize recovery and accountability
• What actions took place and who performed them
• Logging and auditing are a must
• Disclosure is certainly an issue
– Bell-LaPadula is too complex for a commercial environment
• Too many categories, too many security levels!


Biba Integrity Model


(basis for all three models)
• A system consists of
– A set of subjects S, a set of objects O, and a set of integrity levels I
• We can define
– The relation ≤ ⊆ I × I, holding when the second integrity level dominates or is the same as the first
– The relation < ⊆ I × I, holding when the second integrity level strictly dominates the first
– min: I × I → I returns the lesser of two integrity levels
– i: S ∪ O → I gives the integrity level of a subject or an object
– The relation r ⊆ S × O defines the ability of a subject s ∈ S to read an object o ∈ O
– The relation w ⊆ S × O defines the ability of a subject s ∈ S to write an object o ∈ O
– The relation x ⊆ S × S defines the ability of a subject s1 ∈ S to invoke (execute) another subject s2 ∈ S


Intuition for Integrity Levels

• The higher the level, the more confidence
– That a program will execute correctly
– That data is accurate and/or reliable
• Note the relationship between integrity and trustworthiness
• Important point: integrity levels are not security levels
– Security labels primarily limit the flow of information
– Integrity labels primarily inhibit the modification of information


Information Transfer Path

• An information transfer path is a sequence of objects o1, …, on+1 and a corresponding sequence of subjects s1, …, sn such that for all i, 1 ≤ i ≤ n:
– si r oi, and
– si w oi+1
• Idea: information can flow from o1 to on+1 along this path by successive reads and writes


Biba’s Low-Water-Mark Policy

• General idea: whenever a subject accesses an object, the policy changes the integrity level of the subject to the lower of the subject’s and the object’s
• Rules
– If s ∈ S reads o ∈ O, then i´(s) = min(i(s), i(o)), where i´(s) is the subject’s integrity level after the read
• The subject is relying on data less trustworthy than itself, so its trustworthiness drops to that of the object
– s ∈ S can write to o ∈ O if and only if i(o) ≤ i(s)
• This prevents a subject from writing to a more highly trusted object
– s1 ∈ S can execute s2 ∈ S if and only if i(s2) ≤ i(s1)
• This prevents a less trusted invoker from controlling the execution of the invoked subject, corrupting it even though it is more trustworthy
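A sketch of the three low-water-mark rules in Python; i maps subjects and objects to integer integrity levels (higher = more trusted), and the entity names are illustrative:

    def read(i, s, o):
        # reading drops the subject to the lower of the two levels
        i[s] = min(i[s], i[o])

    def can_write(i, s, o):
        return i[o] <= i[s]       # never write to a more trusted object

    def can_execute(i, s1, s2):
        return i[s2] <= i[s1]     # only invoke less/equally trusted subjects

    i = {"editor": 2, "download.dat": 0}
    read(i, "editor", "download.dat")
    print(i["editor"])            # 0: the subject's trustworthiness dropped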


Information Flow Theorem

• If there is an information transfer path from o1 ∈ O to on+1 ∈ O, then enforcement of the low-water-mark policy requires:
– i(on+1) ≤ i(o1) for all n ≥ 1
• Proof?


Problems

• Subjects’ integrity levels decrease as the system runs
– Soon no subject will be able to access objects at high integrity levels
• Alternative: change object levels rather than subject levels
– Soon all objects will be at the lowest integrity level
• The crux of the problem is that the model prevents indirect modification
– Because subject levels are lowered when the subject reads from a low-integrity object


Biba’s Ring Policy

• Idea: subject integrity levels are static
• Rules
– Any subject can read any object
• A subject’s integrity level never goes down
– s ∈ S can write to o ∈ O if and only if i(o) ≤ i(s)
• This prevents a subject from writing to a more highly trusted object
– s1 ∈ S can execute s2 ∈ S if and only if i(s2) ≤ i(s1)
• This prevents a less trusted invoker from controlling the execution of the invoked subject, corrupting it even though it is more trustworthy
• Eliminates the indirect modification problem
• The same information flow theorem holds


Biba’s Strict Integrity Policy


(aka Biba’s model)

• Similar to the Bell-LaPadula model:
– s ∈ S can read o ∈ O iff i(s) ≤ i(o)
– s ∈ S can write to o ∈ O iff i(o) ≤ i(s)
– s1 ∈ S can execute s2 ∈ S iff i(s2) ≤ i(s1)
• Add compartments and discretionary controls to get the full dual of the Bell-LaPadula model
• The information flow theorem holds
– Different proof, though


LOCUS and Biba

• Goal: prevent untrusted software from altering data or other software
• Approach: make levels of trust explicit
– A credibility rating is based on an estimate of the software’s trustworthiness (0 untrusted, n highly trusted)
• A credibility rating is a Biba integrity level
– Trusted file systems contain software with a single credibility level
– A process has a risk level (a Biba integrity level)
• A user may execute programs with a credibility level at least as great as the user’s risk level
• The run-untrusted command must be used to run software at a lower credibility level
– This acknowledges the risk the user is taking


Clark-Wilson Integrity Model

• Uses transactions as the basic operation
– Models many commercial systems more realistically than Biba’s models
• Integrity is defined by a set of constraints
– Data is in a consistent or valid state when it satisfies these constraints
• Example: a bank
– D is today’s deposits, W withdrawals, YB yesterday’s balance, TB today’s balance
– Integrity constraint: TB = YB + D − W
• Well-formed transactions move the system from one consistent state to another
• Issue: who examines and certifies that transactions are done correctly?
– The principle of separation of duty requires that the certifier and the implementers be different people


Clark-Wilson Integrity Model


(entities)
• CDIs: constrained data items
– Data subject to integrity controls
• e.g. for a bank, the account balances are CDIs
• UDIs: unconstrained data items
– Data not subject to integrity controls
• e.g. for a bank, the gifts account holders selected when they opened their accounts, or anything else not crucial to the bank’s operation
• IVPs: integrity verification procedures
– Procedures that test that the CDIs conform to the integrity constraints
• TPs: transaction procedures
– Procedures that take the system from one valid state to another
– TPs implement well-formed transactions
• Let’s consider the relationship between CDIs, IVPs, and TPs…


Certification Rules 1 and 2

• CR1: When any IVP is run, it must ensure all CDIs are in a valid state
• CR2: For some associated set of CDIs, a TP must transform those CDIs from a valid state into a (possibly different) valid state
– CR2 defines a relation certified (C) that associates a set of CDIs with a particular TP
– Example: the TP balance and the CDIs accounts, in the bank example
• (balance, account1), (balance, account2), …, (balance, accountn) ∈ C
– A TP may corrupt a CDI if it is not certified to work on that CDI…

• The system must prevent TPs from operating on CDIs for which they have not been certified…


Enforcement Rules 1 and 2

• ER1: The system must maintain the certified relation and must ensure that only TPs certified to run on a CDI manipulate that CDI
– If a TP f operates on a CDI o, then (f, o) ∈ C
• ER2: The system must associate a user with each TP and set of CDIs. The TP may access those CDIs on behalf of the associated user. The TP cannot access a CDI on behalf of a user not associated with that TP and CDI
– This defines a set of triples (user, TP, { CDI set }) to capture the association of users, TPs, and CDIs
• Call this relation allowed
– The system must maintain and enforce the certified relation
– The system must also restrict access based on user ID (the allowed relation)
• What about the principle of separation of duty? …
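A sketch of ER1/ER2 enforcement in Python; the certified and allowed relations are plain sets, and the names (teller, balance, account1) are illustrative, following the bank example:

    certified = {("balance", "account1"), ("balance", "account2")}
    allowed   = {("teller", "balance", frozenset({"account1", "account2"}))}

    def may_run(user, tp, cdis):
        # ER1: the TP must be certified for every CDI it touches
        if any((tp, c) not in certified for c in cdis):
            return False
        # ER2: the (user, TP, CDI set) association must be allowed
        return any(u == user and t == tp and set(cdis) <= s
                   for (u, t, s) in allowed)

    print(may_run("teller", "balance", {"account1"}))   # True
    print(may_run("intern", "balance", {"account1"}))   # False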


Users and Rules

• CR3: The allowed relation must meet the requirements imposed by the principle of separation of duty
• ER3: The system must authenticate each user attempting to execute a TP
– The type of authentication is undefined, and depends on the instantiation
– Authentication is not required before use of the system (a user may manipulate UDIs), but is required before manipulation of CDIs (which requires using TPs)
• An auditor may want to review the transactions…


Logging

• CR4: All TPs must append enough information to reconstruct the operation to an append-only CDI
– This CDI is the log
– The auditor needs to be able to determine what happened during reviews of transactions

• What about untrusted input? …
– e.g. a user deposits a check at an ATM, but the amount entered at the keypad differs from the amount written on the check


Handling Untrusted Input

• CR5: Any TP that takes a UDI as input may perform only valid transformations, or no transformations, for all possible values of the UDI. The transformation either rejects the UDI or transforms it into a CDI
– In the bank, numbers entered at the keyboard are UDIs, so they cannot be input to TPs. TPs must validate the numbers (making them CDIs) before using them; if validation fails, the TP rejects the UDI

• If a user could create a TP and associate some set of entities and herself with that TP, she could have the TP perform unauthorized acts that violate the integrity constraints…


Separation of duty and the model

• ER4: Only the certifier of a TP may change the list of entities associated with that TP. No certifier of a TP, or of an entity associated with that TP, may ever have execute permission with respect to that entity
– Enforces separation of duty with respect to the certified and allowed relations


Requirements of Integrity Policies


(revisited)
1. Users will not write their own programs, but will use existing production programs and databases
2. Programmers will develop and test programs on a nonproduction system; if they need access to actual data, they will be given production data via a special process, but will use it on their development system
3. A special process must be followed to install a program from the development system onto the production system
4. The special process in requirement 3 must be controlled and audited
5. The managers and auditors must have access to both the system state and the system logs that are generated


Comparison With Requirements


(production programs: TPs, production data: CDIs)

1. Users can’t certify TPs, so CR5 and ER4 enforce this
– They must use existing TPs (production programs) and CDIs (production databases)
2. The model doesn’t directly cover this
– No technical control can prevent a programmer from developing a program on a production system
• The usual control is to delete software development tools from production systems
3. Installing a program from development onto production requires a TP to do the installation and “trusted personnel” to do the certification


Comparison With Requirements

4. CR4 provides logging; ER3 authenticates the trusted personnel doing the installation; CR5 and ER4 control the installation procedure
– A new program is a UDI before certification, and a CDI (and a TP) afterwards
5. The log is a CDI, so an appropriate TP can give managers and auditors access
– Access to the system state is handled similarly


Comparison to Biba

• Biba
– Several integrity levels
– No notion of certification rules; trusted subjects ensure actions obey the rules
– Untrusted data is examined before being made trusted
• Clark-Wilson
– Each object has only two integrity levels (CDI and UDI); each subject has two integrity levels (certified, i.e. a TP, and uncertified)
– Explicit requirements that actions must meet
– A trusted entity must certify the method used to upgrade untrusted data (and not certify the data itself)


Hybrid Policies
(Chinese Wall Model)

• A security policy that concerns confidentiality and integrity equally
– Describes policies that involve a conflict of interest
• Problem:
– Tony counsels Bank of America in its investments
– He also counsels Citibank
• Conflict of interest: the two banks’ investments may come into conflict


Organization

• Organize entities into “conflict of interest” classes
• Security
– Control subject accesses to each class
– Control writing to all classes to ensure information is not passed along in violation of the rules
• Allow sanitized data to be viewed by everyone


Definitions

• Objects: items of information related to a company
• Company dataset (CD): contains objects related to a single company
• Conflict of interest class (COI): contains the datasets of companies in competition
• Notation:
– COI(O) represents the COI class that contains object O
– CD(O) represents the company dataset that contains object O
• Assume: each object belongs to exactly one COI class


Example

Bank COI Class: Bank of America, Citibank, Bank of the West
Gasoline Company COI Class: Shell Oil, Standard Oil, Union ’76, ARCO

Tony has access to the objects in the CD of Bank of America. Because the CD of Citibank is in the same COI class, Tony cannot gain access to the objects in Citibank’s CD.
• He can, however, access both Bank of America’s CD and ARCO’s CD


Temporal Element

• Imagine this scenario: Tony used to work on Bank of America’s portfolio, then quit, and was then contracted by Citibank
– He is working on only one CD in the bank COI class at a time
– However, much of the information learned from Bank of America’s portfolio will still be current
– It is possible that information learned earlier may allow him to make decisions later
• If Tony reads any CD in a COI class, he can never read another CD in that COI class


CW-Simple Security Condition (v1)

• Let PR(s) be the set of objects that s has already read
• Subject s can read object o iff either condition holds:
– There is an object o´ such that s has accessed o´ and CD(o´) = CD(o)
• Meaning s has read something in o’s dataset
– For all objects o´ ∈ O, o´ ∈ PR(s) ⇒ COI(o´) ≠ COI(o)
• Meaning s has not read any objects in o’s conflict of interest class
• Initially, PR(s) = ∅, so the initial read request is granted
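A sketch of this condition in Python; COI and CD map each object to its conflict-of-interest class and company dataset, PR maps each subject to the objects already read, and the object names are illustrative:

    def cw_can_read(s, o, PR, COI, CD):
        prior = PR.get(s, set())
        same_cd = any(CD[p] == CD[o] for p in prior)
        no_conflict = all(COI[p] != COI[o] for p in prior)
        return same_cd or no_conflict

    COI = {"boa-memo": "banks", "citi-memo": "banks", "shell-memo": "oil"}
    CD  = {"boa-memo": "BoA", "citi-memo": "Citi", "shell-memo": "Shell"}
    PR  = {"tony": {"boa-memo"}}

    print(cw_can_read("tony", "citi-memo", PR, COI, CD))   # False: same COI
    print(cw_can_read("tony", "shell-memo", PR, COI, CD))  # True: no conflict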


CW-Simple Security Condition (v1) (cont.)

• Two consequences:
– Once a subject reads any object in a COI class, the only other objects in that COI class that the subject can read are in the same CD as the read object
– The minimum number of subjects needed to access every object in a COI class is the number of CDs in that COI class
• The gasoline company COI class requires at least four analysts to access all information in the COI class without any conflict of interest
• Ignores sanitized data


CW-Simple Security Condition (final version, with sanitization)

• Public information may belong to a CD
– As it is publicly available, no conflicts of interest arise
– So it should not affect the ability of analysts to read
– Typically, all sensitive data is removed from such information before it is released publicly (this is called sanitization)
• Subject s can read object o iff any of these conditions holds:
– There is an object o´ such that s has accessed o´ and CD(o´) = CD(o)
– For all objects o´ ∈ O, o´ ∈ PR(s) ⇒ COI(o´) ≠ COI(o)
– o is a sanitized object


Writing

• Tony and Susan work in the same trading house
• Tony can read Bank of America’s CD and Shell’s CD
• Susan can read Citibank’s CD and Shell’s CD
• If Tony could write to Shell’s CD, Susan could read it
– Hence, indirectly, she could read information from Bank of America’s CD, a clear conflict of interest!


CW-*-Property

• A subject s can write to an object o iff both of the following hold:
– The CW-simple security condition permits s to read o; and
– For all unsanitized objects o´, if s can read o´, then CD(o´) = CD(o)
• Says that s can write to an object only if all the (unsanitized) objects it can read are in the same dataset
• In the example, Tony can read BoA’s CD and Shell’s CD
– Assuming CD(BoA) contains unsanitized objects (a reasonable assumption), Tony can’t write to Shell’s CD, because of the second condition
• CD(BoA) ≠ CD(Shell)!
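A sketch of the CW-*-property, reusing cw_can_read and the example data from the earlier sketch, plus a set of sanitized objects:

    def cw_can_write(s, o, PR, COI, CD, sanitized):
        if not cw_can_read(s, o, PR, COI, CD):
            return False
        # every unsanitized object s can read must lie in o's dataset
        readable = {p for p in CD if cw_can_read(s, p, PR, COI, CD)}
        return all(CD[p] == CD[o] for p in readable if p not in sanitized)

    # Tony has read BoA's (unsanitized) CD, so he may not write to Shell's CD
    print(cw_can_write("tony", "shell-memo", PR, COI, CD, set()))  # False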


Where we are…

• There is a formal treatment behind the Chinese Wall model
– Unfortunately, we will not cover it in this course
• There is a good description in “Computer Security: Art and Science” by Matt Bishop
– http://www.amazon.com/gp/product/0201440997/104-2720675-9815955?v=glance&n=283155&n=507846&s=books&v=glance

