CS 5950/6030 - Computer Security and Information Assurance Section 3: Program Security
Slides not created by the above authors are © 2006 by Leszek T. Lilien
Requests to use original slides for non-profit purposes will be gladly granted upon a written request.
Program Security – Outline (1)
3.1. Secure Programs – Defining & Testing
a. Introduction
b. Judging S/w Security by Fixing Faults
c. Judging S/w Security by Testing Pgm Behavior
d. Judging S/w Security by Pgm Security Analysis
e. Types of Pgm Flaws
Program Security – Outline (2)
3.3. Malicious Code
3.3.1. General-Purpose Malicious Code incl. Viruses
a. Introduction
b. Kinds of Malicious Code
c. How Viruses Work
d. Virus Signatures
e. Preventing Virus Infections
f. Seven Truths About Viruses
g. Case Studies
h. Virus Removal and System Recovery After Infection
Program Security – Outline (3)
3.4. Controls for Security
a. Introduction
b. Developmental controls for security
c. Operating System controls for security
d. Administrative controls for security
e. Conclusions
3. Program Security (1)
Program security –
Our first step in applying security to computing
Protecting programs is the heart of computer security
All kinds of programs: applications, OS, DBMS, network s/w
Issues:
How to keep pgms free from flaws
How to protect computing resources against pgms with flaws
Partial answers:
Third-party evaluations
Program Security (2)
Outline:
3.1. Secure Programs – Defining and Testing
3.2. Nonmalicious Program Errors
3.3. Malicious Code
3.3.1. General-Purpose Malicious Code incl. Viruses
3.3.2. Targeted Malicious Code
3.4. Controls Against Program Threats
3.1. Secure Programs - Defining & Testing
Outline
a. Introduction
b. Judging S/w Security by Fixing Faults
c. Judging S/w Security by Testing Pgm Behavior
d. Judging S/w Security by Pgm Security Analysis
e. Types of Pgm Flaws
Introduction (2)
Fault tolerance terminology:
Error - may lead to a fault
Fault - cause for deviation from intended function
Failure - system malfunction caused by fault
Note: [cf. A. Striegel]
Faults - seen by „insiders” (e.g., programmers)
Failures - seen by „outsiders” (e.g., independent testers, users)
Error/fault/failure example:
Programmer’s indexing error, leads to buffer overflow fault
Buffer overflow fault causes system crash (a failure)
Judging S/w Security by Testing Pgm Behavior (3)
Problems with pgm behavior testing
Limitations of testing
Can’t test exhaustively
Testing checks what the pgm should do
Can’t test what the pgm should not do
i.e., can’t make sure that pgm does only what it should do
– nothing more
Evolving technology
New s/w technologies appear
Security techniques catching up with s/w technologies
[cf. A. Striegel]
d. Judging S/w Security
by Pgm Security Analysis
Best approach to judging s/w security:
pgm security analysis
Analyze what can go wrong
At every stage of program development!
From requirement definition to testing
After deployment
Configurations / policies / practices
a. Buffer Overflows (1)
Buffer overflow flaw — often inadvertent (=>nonmalicious)
but with serious security consequences
Buffer Overflows (5)
Suppose the buffer overflow affects a call stack area — cont.
Stack: [..........][A][data][data][...]
Subroutine finishes
Buffer for char sample[10] is deallocated
Stack: [A][data][data][...]
RET operation pops A from stack (considers it ret. addr.)
Stack: [data][data][...]
Pgm (which called the subroutine) jumps to A
=> shifts program control to where attacker wanted
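A minimal C sketch of the flaw just described (illustrative only — actual exploitability depends on compiler, stack layout, and protections such as stack canaries, ASLR, and non-executable stacks):

#include <string.h>

/* Vulnerable pattern from the walkthrough: unbounded copy into the
   10-byte buffer lets long input overwrite adjacent stack data,
   eventually the saved return address ("A" in the diagram). */
void vulnerable(const char *attacker_input) {
    char sample[10];
    strcpy(sample, attacker_input);        /* no length check! */
}

/* Standard fix: bound the copy to the buffer size. */
void safer(const char *attacker_input) {
    char sample[10];
    strncpy(sample, attacker_input, sizeof(sample) - 1);
    sample[sizeof(sample) - 1] = '\0';
}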
Buffer Overflows (7)
Web server attack similar to buffer overflow attack:
pass very long string to web server (details: textbook, p.103)
b. Incomplete Mediation (1)
Incomplete mediation flaw — often inadvertent (=>
nonmalicious) but with serious security consequences
Incomplete mediation:
Sensitive data are in exposed, uncontrolled condition
Example
URL to be generated by client’s browser to access server,
e.g.:
http://www.things.com/order/final&custID=101&part=555A&qy=20&price=10&ship=boat&shipcost=5&total=205
Instead, user edits URL directly, changing price and total
cost as follows:
http://www.things.com/order/final&custID=101&part=555A&qy=20&price=1&ship=boat&shipcost=5&total=25
User uses forged URL to access server
The server takes 25 as the total cost
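A hedged C sketch of the fix — the server recomputes the total from authoritative data instead of trusting the client's fields (lookup_unit_price is a hypothetical helper, not from the textbook):

#include <stdio.h>

/* Hypothetical helper standing in for the server's authoritative
   price database. */
static int lookup_unit_price(const char *part) {
    (void)part;
    return 10;                        /* e.g., part 555A costs 10 */
}

/* Complete mediation: recompute the total; never trust the client's. */
int order_total(const char *part, int qty, int ship_cost,
                int client_claimed_total) {
    int total = lookup_unit_price(part) * qty + ship_cost;
    if (total != client_claimed_total)
        fprintf(stderr, "possible tampering: claimed %d, actual %d\n",
                client_claimed_total, total);
    return total;                     /* charge the recomputed value */
}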
c. Time-of-check to Time-of-use Errors (1)
Time-of-check to time-of-use flaw — often inadvertent (=>
nonmalicious) but with serious security consequences
A.k.a. synchronization flaw / serialization flaw
TOCTTOU — mediation with “bait and switch” in the middle
Non-computing example:
Swindler shows buyer real Rolex watch (bait)
After buyer pays, switches real Rolex to a forged one
In computing:
Change of a resource (e.g., data) between time
access checked and time access used
Q: Any examples of TOCTTOU problems from
computing?
Time-of-check to Time-of-use Errors (2)
...
TOCTTOU — mediation with “bait and switch” in the middle
...
Q: Any examples of TOCTTOU problems from
computing?
A: E.g., DBMS/OS: serialization problem:
pgm1 reads value of X = 10
pgm1 adds X = X+ 5
pgm2 reads X = 10, adds 3 to X, writes X = 13
pgm1 writes X = 15
Time-of-check to Time-of-use Errors (4)
Prevention of TOCTTOU errors
...
Q: Any examples of preventing TOCTTOU from
DBMS/OS areas?
A1: E.g., DBMS: locking to enforce proper serialization
(locks need not use signatures – access is fully controlled by the DBMS)
In the previous example:
will force writing X = 15 by pgm 1, before pgm2
reads X (so pgm 2 adds 3 to 15)
OR:
will force writing X = 13 by pgm 2, before pgm1
reads X (so pgm 1 adds 5 to 13)
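A minimal C sketch of the idea using a pthreads mutex (an illustrative stand-in for a DBMS lock manager):

#include <pthread.h>

static int X = 10;
static pthread_mutex_t x_lock = PTHREAD_MUTEX_INITIALIZER;

/* Serialize the X = X + delta updates from the example: holding the
   lock across the whole read-modify-write removes the gap between
   time-of-check (read) and time-of-use (write). */
void add_to_x(int delta) {
    pthread_mutex_lock(&x_lock);
    X = X + delta;                   /* check and use as one atomic step */
    pthread_mutex_unlock(&x_lock);
}

With the lock held, pgm1's X = 15 is written before pgm2 reads (so pgm2 produces 18), or vice versa — never the lost update above.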
d. Combinations of Nonmal. Pgm Flaws
The above flaws can be exploited in multiple steps by a
concerted attack
Nonmalicious flaws can be exploited to plant malicious
flaws (next)
3.3. Malicious Code
Malicious code or rogue pgm is written to exploit flaws in
pgms
Malicious code can do anything a pgm can
Malicious code can change
data
other programs
3.3.1. General-Purpose Malicious Code
(incl. Viruses)
Outline
a. Introduction
b. Kinds of Malicious Code
c. How Viruses Work
d. Virus Signatures
e. Preventing Virus Infections
f. Seven Truths About Viruses
g. Case Studies
h. Virus Removal and System Recovery After Infection
Malicious code: many kinds and varieties, benign or harmful
[Figure: taxonomy of malicious code – trapdoors, Trojan horses, logic bombs, bacteria, worms, viruses]
b. Kinds of Malicious Code (2)
Trojan horse - A computer program that appears to have a
useful function, but also has a hidden and potentially
malicious function that evades security mechanisms,
sometimes by exploiting legitimate authorizations of a
system entity that invokes the program
Virus - A hidden, self-replicating section of computer
software, usually malicious logic, that propagates by
infecting (i.e., inserting a copy of itself into and becoming part of)
another program. A virus cannot run by itself; it requires
that its host program be run to make the virus active.
Worm - A computer program that can run independently,
can propagate a complete working version of itself onto
other hosts on a network, and may consume computer
resources destructively.
Kinds of Malicious Code (3)
Bacterium - A specialized form of virus which does not attach to a specific file.
Usage obscure.
c. How Viruses Work (1)
Pgm containing virus must be executed to spread virus or
infect other pgms
Even one pgm execution suffices to spread virus widely
E.g., a single run of an infected INSTALL or SETUP pgm:
Virus installs itself in any/all executing pgms present in
memory
Virus installs itself in pgms on hard disk
How Viruses Work (2)
Document virus
Spreads via picture, document, spreadsheet, slide
presentation, database, ...
E.g., via .jpg, via MS Office documents .doc, .xls, .ppt, .mdb
Currently most common!
How Viruses Work (3)
Kinds of viruses w.r.t. way of attaching to infected pgms
1) Appended viruses
Appends to pgm
2) Surrounding viruses
Surrounds pgm – runs before and after it, concealing its own existence
E.g., if it surrounds „ls”, the „after” part removes the listing of the virus file produced by „ls” so user can’t see it
3) Integrating viruses
Integrates into pgm code
How Viruses Work (5)
OR: Replacing pointer to T with pointer to V (textbook, Fig. 3-7)
OS has File Directory
File Directory has an entry that points to file with code for T
Virus replaces pointer to T’s file with pointer to V’s file
In both cases actions of V replace actions of T when user
executes what she thinks is „T”
How Viruses Work (6)
Easy to create
How Viruses Work (7)
Virus hiding places
1) In bootstrap sector – best place for virus
Bec. virus gains control early in the boot process
[Figures: boot sector layout before and after infection]
Other hiding places incl. runtime monitors and runtime debuggers
d. Virus Signatures (1)
Virus hides but can’t become invisible – leaves behind a virus
signature, defined by patterns:
1) Storage patterns : must be stored somewhere/somehow
(maybe in pieces)
2) Execution patterns: executes in a particular way
3) Distribution patterns: spreads in a certain way
Virus scanners use virus signatures to detect viruses
(in boot sector, on hard disk, in memory)
Scanner can use file checksums to detect changes to files
Once scanner finds a virus, it tries to remove it
i.e., tries to remove all pieces of a virus V from target pgm T
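A toy C sketch of storage-pattern scanning — search a pgm image for one known signature (the signature bytes are invented placeholders; real scanners match thousands of signatures, with wildcards and heuristics):

#include <stddef.h>
#include <string.h>

/* Invented placeholder signature bytes. */
static const unsigned char SIG[] = { 0xDE, 0xAD, 0xBE, 0xEF };

/* Return 1 if the known byte pattern occurs in the pgm image. */
int contains_signature(const unsigned char *image, size_t len) {
    if (len < sizeof(SIG))
        return 0;
    for (size_t i = 0; i <= len - sizeof(SIG); i++)
        if (memcmp(image + i, SIG, sizeof(SIG)) == 0)
            return 1;                 /* storage pattern found */
    return 0;
}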
Virus Signatures (2)
Detecting Virus Signatures (1)
Difficulty 1 — in detecting execution patterns:
Most effects of virus execution (see next page) are „invisible”
Bec. they are normal – any legitimate pgm could cause them (hiding in a crowd)
=> can’t help in detection
Virus Signatures (6)
Detecting Virus Signatures (5)
Encrypting virus: Encrypts its object code (each time with a
different/random key), decrypts code to run
[Figure: structure of encrypting virus – decr_key, procedure decrypt, encrypted virus code]
Virus Signatures (7)
Detecting Virus Signatures (6)
...
Q: Is there any signature of an encrypting virus that a scanner can detect?
A: Look at the virus parts:
decr_key – random key used to encrypt/decrypt – polymorphic
procedure decrypt (or a pointer to a library decrypt procedure) – unencrypted, static
=> procedure decrypt of V is its signature, visible to a scanner
But: virus writer can use polymorphic techniques on procedure decrypt to hide even this signature
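A toy C sketch of why procedure decrypt stays scannable — the loop below must itself be in cleartext to run, even though the body it restores differs per infection (XOR is an illustrative stand-in for the virus's cipher):

#include <stddef.h>

/* The encrypted body varies per infection (random decr_key); this
   loop's machine code does not — it is what the scanner matches. */
void decrypt_body(unsigned char *body, size_t len, unsigned char decr_key) {
    for (size_t i = 0; i < len; i++)
        body[i] ^= decr_key;
}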
e. Preventing Virus Infections
Use commercial software from
trustworthy sources
But even this is not an absolute
guarantee of virus-free code!
Test new software on isolated computers
Open only safe attachments
Keep recoverable system image in safe place
Backup executable system files
Use virus scanners often (daily)
Update virus detectors daily
Databases of virus signatures change very often
[cf. B. Endicott-Popovsky]
No absolute guarantees even if you follow all the rules –
just much better chances of preventing a virus
f. Seven Truths About Viruses
Viruses can infect any platform
Viruses can modify “hidden” / “read only” files
Viruses can appear anywhere in system
Viruses spread anywhere sharing occurs
Viruses cannot remain in memory after a complete power off/power on reboot
But virus reappears if saved on disk (e.g., in the boot sector)
Viruses infect software that runs hardware
There are firmware viruses (if firmware writeable by s/w)
Viruses can be malevolent, benign, or benevolent
Hmmm...
Would you like a benevolent virus doing good things (like compressing
pgms to save storage) but without your knowledge?
g. Case Studies (1)
The Internet Worm (1988)
Exploited flaws in utilities of Berkeley UNIX systems
Infected computers used their resources to attack still more computers
Damage to Internet:
Some uninfected networks were scared into disconnecting from Internet => severed connections stopped necessary work
Made many computers unusable via resource exhaustion
Was a rabbit – supposedly by mistake, unintended by its writer
Perpetrator was convicted in 1990 ($10,000 fine + 400 hrs of community service + 3-year suspended jail sentence)
Caused forming of the Computer Emergency Response Team (CERT) at CMU
[cf. textbook & B. Endicott-Popovsky]
Case Studies (2)
Other case studies [textbook – interesting reading]
The Brain (Pakistani) Virus (1986)
Code Red (2001)
Denial-of-service (DoS) attack on www.whitehouse.gov
Web Bugs (generic potentially malicious code on web
pages)
Placing a cookie on your hard drive
Cookie collects statistics on user’s surfing habits
Can be used to get your IP address, which can then be used to
target you for attack
Block cookies or delete cookies periodically (e.g., using browser
command; in MS IE: Tools>Internet Options-General:Delete
Cookies)
Tool: Bugnosis from Privacy Foundation – locates web bugs
h. Virus Removal and
System Recovery After Infection
Fixing a system after infection by virus V:
1) Disinfect (remove) viruses (using antivirus pgm)
Can often remove V from infected file for T w/o
damaging T
if V code can be separated from T code and V did
not corrupt T
Have to delete T if can’t separate V from T code
2) Recover files:
- deleted by V
- modified by V
- deleted during disinfection (by antivirus pgm)
=> need file backups!
Make sure to have backups of (at least) important files
3.3.2. Targeted Malicious Code
Targeted = written to attack a particular system, a particular
application, and for a particular purpose
Outline:
a. Trapdoors
b. Salami attack
c. Covert channels
a. Trapdoors (1)
Original def:
Trapdoor / backdoor - A hidden computer flaw known to an
intruder, or a hidden computer mechanism (usually
software) installed by an intruder, who can activate the trap
door to gain access to the computer without being blocked
by security services or mechanisms.
A broader definition:
Trapdoor – an undocumented entry point to a module
Inserted during code development
For testing
As a hook for future extensions
As emergency access in case of s/w failure
Trapdoors (2)
Testing:
With stubs and drivers for unit testing (Fig. 3-10 p. 138)
Testing with debugging code inserted into tested
modules
May allow programmer to modify internal module variables
b. Salami attack
Salami attack - merges bits of seemingly inconsequential
data to yield powerful results
Old example: interest calculation in a bank:
Fractions of 1 ¢ „shaved off” n accounts and deposited in
attacker’s account
Nobody notices/cares if 0.1 ¢ vanishes
Can accumulate to a large sum
Easy target for salami attacks: Computer computations
combining large numbers with small numbers
Require rounding and truncation of numbers
Relatively small amounts of error from these op’s are
accepted as unavoidable – not checked unless a strong
suspicion
Attacker can hide „salami slices” within the error margin
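A small C sketch of the arithmetic being abused — interest truncated to whole cents, shaved fractions diverted (numbers and units purely illustrative):

/* Interest truncated to whole cents; the shaved sub-cent fractions
   accumulate for the attacker. Illustrative only. */
static long stolen_millicents = 0;           /* attacker's accumulator */

long credit_interest(long balance_cents, double rate) {
    double exact = balance_cents * rate;     /* exact interest in cents */
    long credited = (long)exact;             /* truncation: the rounding
                                                error everyone accepts */
    stolen_millicents += (long)((exact - credited) * 1000.0); /* the slice */
    return credited;                         /* what the account sees */
}

Run over n accounts every interest period, the invisible sub-cent slices add up to a large sum.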
c. Covert Channels (CC) (1)
Outline:
i. Covert Channels - Definition and Examples
ii. Types of Covert Channels
iii. Storage Covert Channels
iv. Timing Covert Channels
v. Identifying Potential Covert Channels
vi. Covert Channels - Conclusions
i. CC – Definition and Examples (1)
So far: we looked at malicious pgms that perform wrong
actions
Now: pgms that disclose confidential/secret info
They violate confidentiality, secrecy, or privacy of info
Covert channels = channels of unwelcome disclosure of info
Extract/leak data clandestinely
Examples
1) An old military radio communication network
The busiest node is most probably the command center
Nobody is so naive nowadays
2) Secret ways spies recognize each other
Holding a certain magazine in hand
Exchanging a secret gesture when approaching each other
...
Covert Channels – Definition and Examples (2)
How do programmers create covert channels?
Providing pgm with built-in Trojan horse
Uses covert channel to communicate extracted data
Spy
(Spy - e.g., programmer who put Trojan into pgm;
directly or via Spy Pgm)
Covert Channels – Definition and Examples (4)
Example – ctd.
How Trojan within pgm can leak a 4-bit value of a
protected variable X?
cf. Fig. 3-12, p.143
Trojan signals value of X as follows:
Bit-1 = 1 if >1 space follows ‘ACCOUNT CODE:’; 0 otherwise
Bit-2 = 1 if last digit in ‘seconds’ field is >5; 0 otherwise
Bit-3 = 1 if heading uses ‘TOTALS’; 0 otherwise (uses ‘TOTAL’)
Bit-4 = 1 if no space follows subtotals line; 0 otherwise
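A hedged C sketch of such formatting-based signaling (bit 2, carried by the timestamp's seconds field, is left as a comment; purely illustrative):

#include <stdio.h>

/* Leak the low 4 bits of protected value x through innocent-looking
   report formatting, per the slide's scheme. Purely illustrative. */
void print_report(unsigned x) {
    /* bit 1: >1 space after 'ACCOUNT CODE:' signals 1 */
    printf("ACCOUNT CODE:%s101\n", (x & 0x1) ? "  " : " ");
    /* bit 2: would be signaled by a timestamp whose last 'seconds'
       digit is >5 -- omitted here */
    /* bit 3: heading 'TOTALS' signals 1, 'TOTAL' signals 0 */
    printf("%s\n", (x & 0x4) ? "TOTALS" : "TOTAL");
    /* bit 4: no space after the subtotals line signals 1 */
    printf("subtotals%s\n", (x & 0x8) ? "" : " ");
}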
ii. Types of Covert Channels
Types of covert channels
Storage covert channels
Convey info by presence or absence of an object in
storage
Timing covert channels
Convey info by varying the speed at which things
happen
iii. Storage Channels (1)
Example of storage channel: file lock covert channel
Protected variable X has n bits: X1, ..., Xn
Trojan within Service Pgm leaks value of X
Trojan and Spy Pgm synchronized, so can „slice” time
into n intervals
File FX (not used by anybody else)
To signal that Xk=1, Trojan locks file FX for interval k
(1≤ k ≤ n)
To signal that Xk=0, Trojan unlocks file FX for interval k
Spy Pgm tries to lock FX during each interval
If it succeeds during k-th interval, Xk = 0 (FX was unlocked)
Otherwise, Xk = 1 (FX was locked)
(see Fig. 3-13, 3-14 – p.144-145)
Q: Why FX should not be used by anybody else?
Storage Channels (2)
Example of storage channel: file lock covert channel
...
Q: Why FX should not be used by anybody else?
A: Any other user locking/unlocking FX would interfere with
Trojan’s covert channel signaling.
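A hedged POSIX C sketch of the Trojan's side of this channel (flock() chosen for illustration; interval synchronization with the Spy Pgm is assumed, not implemented):

#include <fcntl.h>
#include <sys/file.h>
#include <unistd.h>

/* Trojan's side: lock FX during interval k to signal Xk = 1, leave it
   unlocked to signal 0. Interval timing idealized via sleep(). */
void leak_bits(int fx_fd, const int *x_bits, int n, unsigned interval_sec) {
    for (int k = 0; k < n; k++) {
        if (x_bits[k])
            flock(fx_fd, LOCK_EX);   /* Xk = 1: FX locked this interval */
        sleep(interval_sec);         /* Spy Pgm probes FX meanwhile */
        if (x_bits[k])
            flock(fx_fd, LOCK_UN);   /* release before interval k+1 */
    }
}

The Spy Pgm's probe each interval would be flock(fd, LOCK_EX | LOCK_NB): failure means FX is locked, i.e., Xk = 1.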
Storage Channels (3)
Examples of covert storage channels (synchronized intervals!)
Covert channels can use:
File locks (discussed above)
Disk storage quota
To signal Xk=1, Trojan creates an enormous file (consuming
most of available disk space)
Spy Pgm attempts to create enormous file. If Spy fails
(bec. no disk space available), Xk = 1; otherwise, Xk = 0
Existence of a file
To signal Xk=1, Trojan creates file FX (even empty file)
Spy Pgm attempts to create file named FX. If Spy fails
(bec. FX already exists), Xk = 1; otherwise, Xk = 0 (see sketch after this list)
Other resources - similarly
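A hedged POSIX C sketch of the Spy's probe for the file-existence variant (O_CREAT | O_EXCL makes creation fail exactly when FX already exists):

#include <fcntl.h>
#include <errno.h>
#include <unistd.h>

/* Spy's side: try to create FX exclusively. Failure with EEXIST means
   the Trojan created FX this interval, i.e., Xk = 1. */
int read_bit(const char *fx_path) {
    int fd = open(fx_path, O_CREAT | O_EXCL | O_WRONLY, 0600);
    if (fd < 0)
        return errno == EEXIST;      /* FX exists => Xk = 1 */
    close(fd);
    unlink(fx_path);                 /* clean up our probe file */
    return 0;                        /* creation succeeded => Xk = 0 */
}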
Storage Channels (4)
Covert storage channels require:
Shared resource
To indicate Xk=1 or Xk=0
Synchronized time
To know which bit is signaled:
in interval k, Xk is signaled
iv. Timing Channels
Recall: Timing channels convey info by varying the speed
at which things happen
Simple example of timing channel:
Multiprogramming system „slices” processor time for
programs running on the processor
2 processes only: Trojan (Pgm w/ Trojan) and Spy Pgm
Trojan receives all odd slices (unless abstains)
Spy Pgm receives all even slices (unless abstains)
Trojan signals Xk=1 by using its time slice,
signals Xk=0 by abstaining from using its slice
see: Fig.3-15, p.147 – how ‘101’ is signaled
Details: Trojan takes Slice 1 (its 1st slice) signaling X1=1
Trojan abstains from taking Slice 3 (its 2nd slice) signaling X2=0
Trojan takes Slice 5 (its 3rd slice) signaling X3=1
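A hedged C sketch of the Trojan's side of the timing channel (slice boundaries and synchronization with the Spy Pgm are idealized assumptions):

#include <sched.h>
#include <time.h>

/* "Use the time slice": burn roughly 10 ms of CPU time. */
static void burn_slice(void) {
    clock_t end = clock() + CLOCKS_PER_SEC / 100;
    while (clock() < end) { }
}

/* In each of "its" slices, the Trojan burns CPU (signals 1) or yields
   immediately (signals 0); the Spy Pgm infers bits from when it runs. */
void signal_bits(const int *x_bits, int n) {
    for (int k = 0; k < n; k++) {
        if (x_bits[k])
            burn_slice();            /* Xk = 1: slice consumed */
        else
            sched_yield();           /* Xk = 0: abstain from the slice */
    }
}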
v. Identifying Potential Covert Channels (1)
Covert channels are not easy to identify
Otherwise wouldn’t be covert, right?
Two techniques for locating covert channels:
1) Shared Resource Matrix
2) Information Flow Method
Identifying Potential Covert Channels (2)
1) Shared Resource Matrix
Build a matrix: rows = resources, columns = processes; mark R where a process can read the resource, M where it can modify it
E.g., for the file lock channel:
Resource        Process 1 (Trojan)    Process 2 (Spy Pgm)
Lock on FX      R, M                  R, M
X (confid.)     R                     –
A resource that one process can modify and another can read (here: the lock on FX) marks a potential covert channel from the process that can also read confidential X
Identifying Potential Covert Channels (7)
2) Information Flow Method (IFM) – analyzes how info can flow among pgm statements/variables, to find nonobvious flows that could leak data
Variants of IFM:
1) IFM during compilation
2) IFM on design specs
Covert Channels - Conclusions
Covert channels are a serious threat to confidentiality and
thus to security (the „C” of the „C-I-A” triad)
3.4. Controls for Security
How to control security of pgms during their development
and maintenance
Outline:
a. Introduction
b. Developmental controls for security
c. Operating system controls for security
d. Administrative controls for security
e. Conclusions
a. Introduction
„Better to prevent than to cure”
Preventing security flaws
We have seen a lot of possible security flaws
How to prevent (some of) them?
Software engineering concentrates on developing and
maintaining quality s/w
We’ll take a look at some techniques useful
specifically for developing/ maintaining secure s/w
1) Modularity
Modules should be:
Single-purpose - logically/functionally
Small - for a human to grasp
Simple - for a human to grasp
Independent – high cohesion, low coupling
High cohesion – highly focused on (single) purpose
Low coupling – free from interference from other modules
Modularity should improve correctness
Fewer flaws => better security
Developmental Controls for Security (3)
2) Encapsulation
Minimizing info sharing with other modules
=> Limited interfaces reduce # of covert channels
Well documented interfaces
„Hiding what should be hidden and showing what should
be visible.”
3) Information hiding
Module is a black box
Well defined function and I/O
Easy to know what module does but not how it does it
Reduces complexity, interactions, covert channels, ...
=> better security
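A minimal C sketch of information hiding (illustrative: in real code the struct body would live only in the implementing .c file, with just the typedef and prototypes in the header, so callers cannot see or touch it):

#include <stdlib.h>

/* The hidden "how": callers use only the functions below. */
struct counter { int value; };
typedef struct counter counter;

counter *counter_new(void) { return calloc(1, sizeof(counter)); }
void counter_add(counter *c, int n) { c->value += n; }
int  counter_value(const counter *c) { return c->value; }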
Developmental Controls for Security (4)
1) Peer reviews
Review – artifact presented informally before a team of reviewers
Walk-throughs
Inspections
2) Hazard analysis
= systematic techniques to expose
potentially hazardous system states,
incl. security vulnerabilities
Components of HA
Hazard lists
Begins Day 1
Techniques
HAZOP – hazard and operability studies
3) Testing – phases:
Module/component/unit testing of indiv. modules
White box / clear box testing – testers can examine design and code
Developmental Controls for Security (8)
4) Good design
Good design uses:
i. Modularity / encapsulation / info hiding
ii. Fault tolerance
iii. Consistent failure handling policies
iv. Design rationale and history
v. Design patterns
Developmental Controls for Security (9)
4) Good design – cont.1a
ii. Using fault tolerance for reliability and security
System tolerates component failures
Developmental Controls for Security (10)
4) Good design – cont.1b
Example 1: Majority voting (using h/w redundancy)
3 processors running the same s/w
E.g., in a spaceship
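A minimal C sketch of the voter (illustrative; real voters also flag three-way disagreement as a failure):

/* Majority voter for the triple-redundant setup above: any single
   faulty processor is outvoted by the two that agree. */
int majority_vote(int r1, int r2, int r3) {
    if (r1 == r2 || r1 == r3)
        return r1;                   /* r1 agrees with at least one peer */
    return r2;                       /* else r2 == r3 is the majority
                                        (if all three differ, there is
                                        no majority to return) */
}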
Developmental Controls for Security (11)
Retrying
Restore previous state, redo service using a different strategy
Correcting
Restore previous state, correct sth, run service using the same
code as before
Reporting
Restore previous state, report failure to error handler, don’t
rerun service
Developmental Controls for Security (13)
Benefits of good design (cont.): understandability, reuse,
correctness, better testing
5) Prediction / mgmt of risks – during development and deployment
Make plans to handle unwelcome events should they occur
Risk prediction/mgmt esp. important for risks with security consequences
Risk prediction/mgmt helps to select proper security controls
Developmental Controls for Security (15)
6) Static analysis
Before system is up and running, examine its design and code
7) Configuration management
= process of controlling system modifications during
development and maintenance
Offers security benefits by scrutinizing new/changed code
(e.g., detecting malicious code and neutralizing it)
Must cope with proliferation of different versions and releases:
Older and newer
For different platforms
For different application environments and/or customer categories
Developmental Controls for Security (17)
Adaptive changes
To maintain control over system’s modifications
Perfective changes
To perfect existing acceptable system functions
Preventive changes
To prevent system’s performance degradation to
unacceptable levels
Developmental Controls for Security (18)
Activities involved in configuration management process
(performed by reps from developers, customers, users, etc.)
1) Baseline identification
Certain release/version (R/v) selected & frozen as
baseline
Other R’s/v’s described as changes to the baseline
2) Configuration control / change control (see below)
3) Configuration auditing
System must be audited regularly — to verify:
Recording of changes
Matching of documentation to actual state of systems in the field
Performed by independent parties
4) Status accounting
Records info about system components:
Origin of components (e.g., purchased, reused, or written from scratch)
Version
Change history
Developmental Controls for Security (20)
2) Configuration control – proposed changes reviewed and approved
by CCB (configuration control board)
Developmental Controls for Security (21)
8) Additional developmental controls
8a) Learning from mistakes
Avoiding such mistakes in the future enhances security
Operating System Controls for Security (2)
Trusted software
– code rigorously developed and analyzed so we can trust that
it does all and only what specs say
Trusted code establishes foundation upon which untrusted
code runs
Trusted code establishes security baseline for the whole
system
In particular, OS can be trusted s/w
Operating System Controls for Security (3)
2) Enforcement of integrity
OS keeps integrity of its data and other resources even when given erroneous or unauthorized commands/data
Operating System Controls for Security (5)
2) Confinement
OS can confine access to resources by suspected pgm
Example 1: strict compartmentalization
Pgm can affect data and other pgms only within its
compartment
Example 2: sandbox for untrusted pgms
Can limit spread of viruses
Operating System Controls for Security (6)
3) Audit log / access log
Records who/when/how (e.g., for how long)
accessed/used which objects
Events logged: logins/logouts, file accesses, pgm
executions, device uses, failures, repeated
unsuccessful commands (e.g., many repeated failed
login attempts can indicate an attack)
Audit frequently for unusual events, suspicious patterns
A forensic measure, not a protective measure
Forensics – investigation to find who broke law,
policies, or rules
d. Administrative Controls for Security (1)
They prohibit or demand certain human behavior via
policies, procedures, etc.
They include:
1) Standards of program development
2) Security audits
3) Separation of duties
Administrative Controls for Security (2)
They include:
1) Standards of program development: design / coding / review / testing standards & guidelines (S&G)
2) Security audits
Check compliance with S&G
Administrative Controls for Security (3)
3) Separation of duties
Break sensitive tasks into 2 pieces to be performed by different people
e. Conclusions (for Controls for Security)
Developmental / OS / administrative controls help
produce/maintain higher-quality (also more secure) s/w
Art and science - no „silver bullet” solutions
„A good developer who truly understands security will
incorporate security into all phases of development.”
[textbook, p. 172]
[cf. B. Endicott-Popovsky]