20CB620 Unit 4

Logic based system

UNIT - 4
Malicious Logic

Computer viruses, worms, and Trojan horses are effective tools
with which to attack computer systems.

They assume an authorized user’s identity.

This makes most traditional access controls useless. This
chapter presents several types of malicious logic, focusing on
Trojan horses and computer viruses, and discusses defenses.

Malicious logic is a set of instructions that cause a site’s
security policy to be violated
Trojan Horses

A critical observation is the notion of being "tricked." Suppose an attacker
places a script named ls, containing the commands below, in a directory, and
the user root executes it unintentionally (for example, by typing "ls" while
in that directory).
cp /bin/sh /tmp/.xxsh       # copy the shell to a hidden file
chmod u+s,o+x /tmp/.xxsh    # make the copy setuid to its owner (the user who ran the script) and executable by others
rm ./ls                     # delete the Trojan script to hide the evidence
ls $*                       # run the real ls so the output looks normal


This would be a violation of the security policy. However, if root had
typed these commands deliberately, the security policy would not be violated.
Trojan Horses

A Trojan horse is a program with an overt
(documented or known) effect and a covert
(undocumented or unexpected) effect.

In the preceding example, the overt purpose is to list
the files in a directory. The covert purpose is to create
a shell that is setuid to the user executing the script.

Hence, this program is a Trojan horse
Trojan Horses

Trojan horses can make copies of themselves. One of the
earliest Trojan horses was a version of the game animal.

When this game was played, it created an extra copy of itself.
These copies spread, taking up much room.

The program was modified to delete one copy of the earlier
version and create two copies of the modified program.

A propagating Trojan horse (also called a replicating
Trojan horse) is a Trojan horse that creates a copy of itself.
Computer Viruses
VIRUS – Vital Information Resources Under Siege
It refers to the type of malicious software or malware
that can cause damage to your data, files, and software
through replication.
A computer virus is a program that inserts itself into one
or more files and then performs some (possibly null)
action
The first phase, in which the virus inserts itself into
a file, is called the insertion phase.
The second phase, in which it performs some
action, is called the execution phase.
The following pseudocode fragment shows how a
simple computer virus works
beginvirus:
    if spread-condition then begin
        for some set of target files do begin
            if target is not infected then begin
                determine where to place virus instructions
                copy instructions from beginvirus to endvirus into target
                alter target to execute added instructions
            end;
        end;
    end;
    perform some action(s)
    goto beginning of infected program
endvirus:
Boot Sector Infectors
The boot sector is the part of a disk used to bootstrap
the system or mount a disk.
Code in that sector is executed when the system
“sees” the disk for the first time
When the system boots, or the disk is mounted, any
virus in that sector is executed
A boot sector infector is a virus that inserts itself into
the boot sector of a disk
Boot Sector Infectors
EXAMPLE: The Brain virus for the IBM PC is a boot sector
infector.
When the system boots from an infected disk, the virus is in
the boot sector and is loaded. It moves the disk interrupt
vector (location 13H, or 19 decimal) to an alternative interrupt vector
(location 6DH, or 109 decimal) and sets the disk interrupt vector
location to invoke the Brain virus now in memory.
It then loads the original boot sector and continues the
boot.
Executable Infectors
An executable infector is a virus that infects
executable programs.
On the PC, executable infectors are called
COM or EXE viruses because they infect programs
with those extensions. The figure illustrates how infection
can occur. The virus can prepend itself to the
executable (as shown in the figure) or append itself.
Executable Infectors
EXAMPLE: The Jerusalem virus (also called the Israeli
virus) is triggered when an infected program is executed.
The virus first puts the value 0E0H into register ax and
invokes the DOS service interrupt (21H). If on return the
high eight bits of register ax contain 03H, the virus is
already resident on the system and the executing version
quits, invoking the original program. Otherwise, the virus
sets itself up to respond to traps to the DOS service
interrupt vector
Multipartite Viruses
A multipartite virus is one that can infect either
boot sectors or applications
Such a virus typically has two parts, one for each
type. When it infects an executable, it acts as an
executable infector; when it infects a boot sector,
it works as a boot sector infector.
TSR Viruses
A terminate and stay resident (TSR) virus is one that stays active
(resident) in memory after the application (or bootstrapping, or disk
mounting) has terminated.
TSR viruses can be boot sector infectors or executable infectors.
Both the Brain and Jerusalem viruses are TSR viruses.
Viruses that are not TSR execute only when the host application is
executed (or the disk containing the infected boot sector is
mounted).
An example is the Encroacher virus, which appends itself to the
ends of executables
Stealth Viruses
Stealth viruses are viruses that conceal the infection of files.
These viruses intercept calls to the operating system that
access files.
If the call is to obtain file attributes, the original attributes of
the file are returned. If the call is to read the file, the file is
disinfected as its data is returned.
But if the call is to execute the file, the infected file is
executed.
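As a rough illustration of this dispatch logic, the following C program simulates the three cases with plain strings rather than real system calls; the request names, buffers, and "infected" data are invented for this sketch and do not come from any actual virus.

#include <stdio.h>
#include <string.h>

/* Hypothetical simulation of a stealth virus's call interception:
 * attribute and read requests see the clean file, execution gets
 * the infected image. All data here is illustrative text. */

enum request { GET_SIZE, READ_FILE, EXECUTE_FILE };

static const char original[] = "original program bytes";
static const char payload[]  = " + appended virus payload";

static size_t intercepted_call(enum request req, char *buf, size_t buflen) {
    char infected[64];
    snprintf(infected, sizeof infected, "%s%s", original, payload);

    switch (req) {
    case GET_SIZE:                       /* attribute query: report the clean size */
        return strlen(original);
    case READ_FILE:                      /* read: return the disinfected bytes */
        snprintf(buf, buflen, "%s", original);
        return strlen(original);
    case EXECUTE_FILE:                   /* execute: the infected image runs */
        snprintf(buf, buflen, "%s", infected);
        return strlen(infected);
    }
    return 0;
}

int main(void) {
    char buf[64] = "";
    printf("reported size: %zu\n", intercepted_call(GET_SIZE, buf, sizeof buf));
    intercepted_call(READ_FILE, buf, sizeof buf);
    printf("read returns : %s\n", buf);
    intercepted_call(EXECUTE_FILE, buf, sizeof buf);
    printf("exec runs    : %s\n", buf);
    return 0;
}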
Encrypted Viruses
Computer virus detectors often look for known
sequences of code to identify computer viruses.
To conceal these sequences, some viruses encipher
most of the virus code, leaving only a small decryption
routine and a random cryptographic key in the clear.
An encrypted virus is one that enciphers all of the
virus code except for a small decryption routine.
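A minimal sketch of the idea, using printable text in place of machine code and a toy XOR cipher in place of real encryption; all names and data are illustrative.

#include <stdio.h>
#include <string.h>

/* Only the short decryption loop and the key are "in the clear";
 * the body is stored enciphered, so it shows no fixed signature. */

static void xor_buffer(unsigned char *buf, size_t len, unsigned char key) {
    for (size_t i = 0; i < len; i++)
        buf[i] ^= key;              /* the same loop enciphers and deciphers */
}

int main(void) {
    unsigned char body[] = "pretend these bytes are the virus body";
    size_t len = strlen((char *)body);
    unsigned char key = 0x5A;       /* the random per-copy key */

    xor_buffer(body, len, key);     /* stored form: enciphered */
    printf("stored   : %.8s...\n", (char *)body);

    xor_buffer(body, len, key);     /* the decryption routine runs first */
    printf("executed : %s\n", (char *)body);
    return 0;
}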
Polymorphic Viruses
A polymorphic virus is a virus that changes its form each time it
inserts itself into another program
Consider an encrypted virus. The body of the virus varies depending
on the key chosen, so detecting known sequences of instructions will
not detect the virus. However, the decryption algorithm can be
detected.
Polymorphic viruses were designed to prevent this.
They change the instructions in the virus to something equivalent but
different. In particular, the deciphering code is the segment of the
virus that is changed. In some sense, they are successors to the
encrypting viruses and are often used in conjunction with them.
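To illustrate "equivalent but different" code, the two C functions below decrypt a buffer in the same way but with different instruction patterns; a polymorphic engine performs this kind of rewriting on the deciphering routine, normally at the machine-code level. The function names and the inserted no-op are invented for this sketch.

#include <stdio.h>
#include <string.h>

/* Variant 1: index-based loop. */
static void decrypt_v1(unsigned char *b, size_t n, unsigned char k) {
    for (size_t i = 0; i < n; i++)
        b[i] ^= k;
}

/* Variant 2: pointer walk with a do-nothing instruction inserted.
 * The code pattern differs, but the effect is identical. */
static void decrypt_v2(unsigned char *b, size_t n, unsigned char k) {
    unsigned char *end = b + n;
    while (b < end) {
        k = k ^ 0;                  /* no-op padding: changes only the byte pattern */
        *b++ ^= k;
    }
}

int main(void) {
    unsigned char a[] = "hello", b[] = "hello";
    decrypt_v1(a, 5, 0x21);
    decrypt_v2(b, 5, 0x21);
    printf("same result? %s\n", memcmp(a, b, 5) == 0 ? "yes" : "no");
    return 0;
}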
Macro Viruses
A macro virus is a virus composed of a sequence of
instructions that is interpreted, rather than executed directly.
Conceptually, macro viruses are no different from ordinary
computer viruses.
They can execute on any system that can interpret the
instructions. For example, a spreadsheet virus executes when the
spreadsheet interprets these instructions.
If the macro language allows the macro to access files or other
systems, the virus can access them, too.
Computer Worms
A computer virus infects other programs. A variant
of the virus is a program that spreads from
computer to computer, spawning copies of itself
on each one
A computer worm is a program that copies
itself from one computer to another
Other Forms of Malicious Logic
Rabbits and Bacteria
• Some malicious logic multiplies so rapidly that resources
become exhausted. This creates a denial of service attack
• A bacterium or a rabbit is a program that absorbs all of some
class of resource
• A bacterium is not required to use all resources on the system.
• Exhausting resources of a specific class, such as file descriptors or
process table slots, may not affect currently running processes, but it
will affect new processes.
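A harmless sketch of absorbing one resource class (per-process file descriptors) on a POSIX system; unlike a real bacterium it stops at the limit and exits immediately, releasing everything.

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    int count = 0;
    for (;;) {
        int fd = open("/dev/null", O_RDONLY);
        if (fd < 0) {
            /* EMFILE: this resource class is exhausted for the process */
            printf("stopped after %d descriptors: %s\n", count, strerror(errno));
            break;
        }
        count++;
    }
    return 0;   /* descriptors are released when the process exits */
}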
Other Forms of Malicious Logic
Logic Bombs
• Some malicious logic triggers on an external event, such as a
user logging in or the arrival of midnight
• A logic bomb is a program that performs an action that violates
the security policy when some external event occurs.
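A skeleton of a logic bomb's trigger check; the date condition and the printed "action" are placeholders for illustration only. Real logic bombs key on events such as a user logging in, a particular date arriving, or a record disappearing from a file.

#include <stdio.h>
#include <time.h>

int main(void) {
    time_t now = time(NULL);
    struct tm *t = localtime(&now);

    /* Example trigger: any January 1st in or after the year 2030. */
    if (t->tm_year + 1900 >= 2030 && t->tm_mon == 0 && t->tm_mday == 1) {
        printf("trigger condition met: the (malicious) action would run here\n");
    } else {
        printf("condition not met: behave normally\n");
    }
    return 0;
}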
Defenses
Defending against malicious logic takes
advantage of several different characteristics of
malicious logic to detect, or to block, its execution.
The defenses inhibit the suspect behavior
Malicious Logic Acting as Both Data and Instructions
Some malicious logic acts as both data and instructions.
A computer virus inserts code into another program. During
this writing, the object being written into the file is data.
The virus then executes itself. The instructions it executes
are the same as what it has just written.
Here, the object is treated as an executable set of
instructions.
Protection mechanisms based on this property treat all
programs as type “data” until some certifying authority
changes the type to “executable”
Malicious Logic Assuming the
Identity of a User
Because a user (unknowingly) executes malicious
logic, that code can access and affect objects
within the user’s protection domain.
So, limiting the objects accessible to a given
process run by the user is an obvious protection
technique.
Malicious Logic Assuming the Identity of a User
1. Information Flow Metrics
Define the flow distance metric fd(x) for some information x
as follows. Initially, all information has fd(x) = 0. Whenever x is
shared, fd(x) increases by 1. Whenever x is used as input to a
computation, the flow distance of the output is the maximum
of the flow distances of the inputs.

Information is accessible only while its flow distance is less than
some particular value.
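A toy worked example of the metric, with flow distances tracked by hand and an assumed threshold of 2; in a real system the tagging would be done by the operating system, not by the program itself.

#include <stdio.h>

/* The two rules: sharing adds 1, a computation's output takes the
 * maximum flow distance of its inputs. */
static int share(int fd_x)              { return fd_x + 1; }
static int compute(int fd_a, int fd_b)  { return fd_a > fd_b ? fd_a : fd_b; }

int main(void) {
    const int limit = 2;               /* assumed policy: accessible only while fd(x) < limit */

    int fd_program  = 0;               /* locally written program: fd = 0          */
    int fd_copy     = share(fd_program);        /* shared once: fd = 1             */
    int fd_output   = compute(fd_copy, 0);      /* output of running it: fd = 1    */
    int fd_reshared = share(fd_output);         /* shared a second time: fd = 2    */

    printf("copy accessible?     %s\n", fd_copy     < limit ? "yes" : "no");
    printf("reshared accessible? %s\n", fd_reshared < limit ? "yes" : "no");
    return 0;
}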
Malicious Logic Assuming the Identity of a User
2. Reducing the Rights
The user can reduce her associated protection domain when running a
suspect program. This follows from the principle of least privilege.
3. Sandboxing
Sandboxes and virtual machines implicitly restrict
process rights. A common implementation of this approach
is to restrict the program by modifying it. Usually, special
instructions inserted into the object code cause traps
whenever an instruction violates the security policy.
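A minimal sketch of reducing rights before running a suspect program on a POSIX system; the resource limit, the uid value, and the program being run are placeholders, and real sandboxes (for example seccomp filters, containers, or virtual machines) restrict far more than this.

#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>
#include <unistd.h>

int main(void) {
    struct rlimit no_new_files = { 0, 0 };

    /* Limit the maximum size of files the suspect program may create to zero bytes. */
    if (setrlimit(RLIMIT_FSIZE, &no_new_files) != 0)
        perror("setrlimit");

    /* If running as root, switch to an unprivileged account (placeholder uid). */
    if (getuid() == 0 && setuid(65534) != 0) {
        perror("setuid");
        exit(1);
    }

    /* Run the untrusted program inside the reduced protection domain. */
    execl("/bin/ls", "ls", (char *)NULL);   /* placeholder for the suspect program */
    perror("execl");
    return 1;
}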
Malicious Logic Crossing Protection
Domain Boundaries by Sharing
Inhibiting users in different protection domains
from sharing programs or data will inhibit
malicious logic from spreading among those
domains.
This takes advantage of the separation implicit in
integrity policies
Malicious Logic Altering Files
Mechanisms using manipulation detection codes (or
MDCs) apply some function to a file to obtain a set of bits
called the signature block and then protect that block.
If, after recomputing the signature block, the result differs
from the stored signature block, the file has changed,
possibly as a result of malicious logic altering the file.
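A sketch of the mechanism, assuming a simple non-cryptographic hash (FNV-1a) as the signature function to keep the example short; a real integrity checker would use a cryptographic hash and would protect the stored signature blocks, for example by keeping them offline or on read-only media.

#include <stdint.h>
#include <stdio.h>

/* Compute a 64-bit FNV-1a hash of a file as a stand-in signature block. */
static uint64_t fnv1a_file(const char *path) {
    FILE *f = fopen(path, "rb");
    if (!f) return 0;
    uint64_t h = 0xcbf29ce484222325ULL;     /* FNV-1a 64-bit offset basis */
    int c;
    while ((c = fgetc(f)) != EOF) {
        h ^= (uint64_t)(unsigned char)c;
        h *= 0x100000001b3ULL;              /* FNV-1a 64-bit prime */
    }
    fclose(f);
    return h;
}

int main(int argc, char *argv[]) {
    if (argc != 3) {
        fprintf(stderr, "usage: %s file stored-signature-hex\n", argv[0]);
        return 2;
    }
    unsigned long long stored = 0;
    if (sscanf(argv[2], "%llx", &stored) != 1) {
        fprintf(stderr, "bad signature value\n");
        return 2;
    }
    uint64_t current = fnv1a_file(argv[1]);
    printf("%s\n", current == (uint64_t)stored
                   ? "signature matches"
                   : "file changed: possible tampering");
    return current == (uint64_t)stored ? 0 : 1;
}

The stored signature for a file can be produced by running the same hash once while the file is known to be clean, recording the value, and protecting it.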
Malicious Logic Performing Actions Beyond Specification
Fault-tolerant techniques keep systems functioning correctly
when the software or hardware fails to perform to specifications.
Proof-Carrying Code
A technique that combines specification and integrity checking.
This method, called proof-carrying code (PCC), requires a “code
consumer” (user) to specify a safety requirement
The “code producer” (author) generates a proof that the code
meets the desired safety property and integrates that proof with
the executable code. This produces a PCC binary
Malicious Logic Altering Statistical
Characteristics
Like human languages, programs have specific
statistical characteristics that malicious logic might
alter.
Detection of such changes may lead to detection
of malicious logic
The Notion of Trust
The effectiveness of any security mechanism
depends on the security of the underlying base on
which the mechanism is implemented and the
correctness of the implementation.
If the trust in the base or in the implementation is
misplaced, the mechanism will not be secure.
Vulnerability Analysis
Vulnerabilities arise from computer system design,
implementation, maintenance, and operation.
This chapter presents a general technique for
testing for vulnerabilities in all these areas and
discusses several models of vulnerabilities
Vulnerability Analysis
A “computer system” is more than hardware and
software; it includes the policies, procedures, and
organization under which that hardware and
software is used.
Lapses in security can arise from any of these
areas or from any combination of these areas.
Vulnerability Analysis
When someone breaks into a computer system, that
person takes advantage of lapses in procedures,
technology, or management, allowing unauthorized access
or actions.
The specific failure of the controls is called a vulnerability
or security flaw; using that failure to violate the site security
policy is called exploiting the vulnerability.
One who attempts to exploit the vulnerability is called an
attacker
Vulnerability Analysis
Formal verification and property-based testing are techniques
for detecting vulnerabilities.
Both are based on the design and/or implementation of the
computer system, but a “computer system” includes policies,
procedures, and an operating environment, and these external
factors can be difficult to express in a form amenable to formal
verification or property-based testing.
Yet these factors determine whether or not a computer system
implements the site security policy to an acceptable degree
Vulnerability Analysis
Penetration Testing
The tester hypothesizes that flaws exist in the system.
Given the hypothesis, the tester determines the state in which the
vulnerability will arise. This is the precondition.
The tester puts the system into that state and analyzes the system.
After the analysis, the tester will have information about the resulting
state of the system (the postconditions) that can be compared with the
site security policy.
If the security policy and the postconditions are inconsistent, the
hypothesis (that a vulnerability exists) is correct.
Vulnerability Analysis
Penetration testing is a testing technique, not a proof technique.
It can never prove the absence of security flaws; it can only prove their
presence.
In theory, formal verification can prove the absence of vulnerabilities.
However, to be meaningful, a formal verification proof must include all
external factors. Hence, formal verification proves the absence of flaws
within a particular program or design and not the absence of flaws within
the computer system as a whole.
Incorrect configuration, maintenance, or operation of the program or
system may introduce flaws that formal verification will not detect.
Vulnerability Analysis
Penetration Studies
A penetration study is a test for evaluating the strengths of all security
controls on the computer system.
The goal of the study is to violate the site security policy. A penetration
study (also called a tiger team attack or red team attack) is not a
replacement for careful design and implementation with structured
testing.
It provides a methodology for testing the system in toto, once it is in
place. (An attacker who is able to control a step in the supply chain
can alter the product to insert malicious logic.)
Unlike other testing and verification technologies, it examines
procedural and operational controls as well as technological controls
Goals
A penetration test is an authorized attempt to violate
specific constraints stated in the form of a security or
integrity policy.
This formulation implies a metric for determining whether
the study has succeeded.
It also provides a framework in which to examine those
aspects of procedural, operational, and technological
security mechanisms relevant to protecting the particular
aspect of system security in question.
Goals
EXAMPLE: A company obtains documents from other vendors
and, after 30 days, publishes them on the World Wide Web. The
vendors require that the documents be confidential for that
length of time. A penetration study of this site might set the goal
of obtaining access to a specific file; the test could be limited to
30 days in order to duplicate the conditions under which the site
will operate. An alternative goal might be to gain access to any
of these files; in this case, no time limit should be specified
because a test could involve planting of Trojan horses that
would last more than 30 days.
Layering of Tests
A penetration test is designed to characterize the
effectiveness of security mechanisms and controls
to attackers.
To this end, these studies are conducted from an
attacker’s point of view, and the environment in
which the tests are conducted is that in which a
putative attacker would function.
Layering of Tests
A layering model for a penetration study.
1. External attacker with no knowledge of the
system. At this level, the testers know that the
target system exists and have enough information
to identify it once they reach it.
Layering of Tests
2. External attacker with access to the system. At
this level, the testers have access to the system and
can proceed to log in or to invoke network services
available to all hosts on the network
3. Internal attacker with access to the system. At
this level, the testers have an account on the system
and can act as authorized users of the system
Methodology at Each Layer
The penetration testing methodology springs from
the Flaw Hypothesis Methodology.
The usefulness of a penetration study comes from
the documentation and conclusions drawn from
the study and not from the success or failure of
the attempted penetration.
Flaw Hypothesis Methodology
The Flaw Hypothesis Methodology was developed
at System Development Corporation and provides
a framework for penetration
It consists of four steps, with a fifth step, flaw elimination,
often added.
Flaw Hypothesis Methodology
1. Information gathering. In this step, the testers become
familiar with the system’s functioning. They examine
the system’s design, its implementation, its operating
procedures, and its use. The testers become as
familiar with the system as possible.
2. Flaw hypothesis. Drawing on the knowledge gained in
the first step, and on knowledge of vulnerabilities in
other systems, the testers hypothesize flaws of the
system under study
Flaw Hypothesis Methodology
3. Flaw testing. The testers test their hypothesized flaws. If a
flaw does not exist (or cannot be exploited), the testers go back
to step 2. If the flaw is exploited, they proceed to the next step.
4. Flaw generalization. Once a flaw has been successfully
exploited, the testers attempt to generalize the vulnerability and
find others similar to it. They feed their new understanding (or
new hypothesis) back into step 2 and iterate until the test is
concluded.
5. Flaw elimination. The testers suggest ways to eliminate
the flaw or to use procedural controls to ameliorate it.
Vulnerability Classification
Vulnerability classification frameworks describe
security flaws from various perspectives.
Some frameworks describe vulnerabilities by
classifying the techniques used to exploit them.
Others characterize vulnerabilities in terms of the
software and hardware components and interfaces
that make up the vulnerability.
The goal of vulnerability analysis is to develop methodologies
that provide the following abilities
1. The ability to specify, design, and implement a computer
system without vulnerabilities.
2. The ability to analyze a computer system to detect
vulnerabilities (which feeds into the Flaw Hypothesis
Methodology step of penetration testing).
3. The ability to address any vulnerabilities introduced during
the operation of the computer system (possibly leading to
a redesign or reimplementation of the flawed
components).
4. The ability to detect attempted exploitations of vulnerabilities.
Two Security Flaws
This section describes two widely known security vulnerabilities in
some versions of the UNIX operating system.
We will use these vulnerabilities as examples
when comparing and contrasting the various
frameworks
Suppose the user wishes to log to an existing file. The following code
fragment checks that the file is writable and then opens it for appending:

if (access("/usr/tom/X", W_OK) == 0) {
    if ((fd = open("/usr/tom/X", O_WRONLY|O_APPEND)) < 0) {
        /* handle error: cannot open file */
    }
}
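This fragment is a classic time-of-check-to-time-of-use flaw: between the access() check and the open() call, an attacker can replace /usr/tom/X (for example, with a symbolic link to a file the user cannot legally write), so the check and the use refer to different objects. The danger is greatest when the code runs with more privilege than the user; the well-known instance of this flaw was in a setuid-root program that logged on the user's behalf. One mitigation, sketched below, is to open the file first and then validate the descriptor itself with fstat(); the function name, the ownership test, and the use of O_NOFOLLOW are illustrative choices, not the historical fix.

#include <fcntl.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

/* Open the file, then check the object actually opened, so a rename or
 * symlink swap between "check" and "use" no longer helps the attacker. */
int open_log_carefully(const char *path, uid_t expected_owner) {
    int fd = open(path, O_WRONLY | O_APPEND | O_NOFOLLOW);
    if (fd < 0)
        return -1;                       /* cannot open file */

    struct stat st;
    if (fstat(fd, &st) < 0 ||
        !S_ISREG(st.st_mode) ||          /* must be a regular file  */
        st.st_uid != expected_owner) {   /* must belong to the user */
        close(fd);
        return -1;                       /* refuse a suspicious target */
    }
    return fd;
}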
Auditing
Logging is the recording of events or statistics
to provide information about system use and
performance.
Auditing is the analysis of log records to
present information about the system in a
clear and understandable manner
Anatomy of an Auditing System
An auditing system consists of three components:
• The logger
• The analyzer
• The notifier.
These components collect data, analyze it, and
report the results
Logger
Logging mechanisms record information. The type and
quantity of information are dictated by system or program
configuration parameters.
The mechanisms may record information in binary or
human-readable form or transmit it directly to an analysis
mechanism
A log-viewing tool is usually provided if logs are recorded in
binary form, so a user can examine the raw data or
manipulate it using text-processing tools.
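As a small concrete example, a program on a POSIX system can hand records to the system's logging mechanism through syslog(3); what is recorded, and where, is decided by the logger's configuration rather than by the program. The identifier and message contents below are placeholders.

#include <syslog.h>

int main(void) {
    openlog("example-app", LOG_PID, LOG_AUTH);   /* tag records with the program name and pid */
    syslog(LOG_WARNING, "failed login for user %s from %s", "alice", "10.0.0.5");
    closelog();
    return 0;
}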
Analyzer
An analyzer takes a log as input and analyzes it.
The results of the analysis may lead to changes in
the data being recorded, to detection of some
event or problem, or both.
Notifier
The analyzer passes the results of the analysis to
the notifier.
The notifier informs the analyst, and other
entities, of the results of the audit.
The entities may take some action in response to
these results
A Posteriori Design
The design of an effective auditing subsystem is
straightforward when one is aware of all possible
policy violations and can detect them.
Most security breaches arise on existing systems
that were not designed with security
considerations in mind.
A Posteriori Design
Auditing may have two different goals.
The first goal is to detect any violations of a stated policy
The second is to detect actions that are known to be part of an attempt to
breach security.
The difference is subtle but important.
The first goal focuses on the policy and, as with the a priori design of an
auditing subsystem, records (attempted) actions that violate the policy.
The second goal focuses on specific actions that the managers of the
system have determined indicate behavior that poses a threat to system
security.
Auditing to Detect Violations of a
Known Policy
Design mechanisms for checking that the actions
and settings are in fact consistent with the policy.
There are two ways to proceed:
1. State-based auditing
2. Transition-based auditing.
State-based auditing
A state-based logging mechanism records
information about a system’s state.
A state-based auditing mechanism determines
whether or not a state of the system is
unauthorized
Transition-Based Auditing
A transition-based logging mechanism records
information about an action on a system.
A transition-based auditing mechanism examines
the current state of the system and the proposed
transition (command) to determine if the result will
place the system in an unauthorized state
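A toy contrast between the two approaches, using a single made-up attribute (whether a file is world-writable); the structures and the policy rule are invented for illustration.

#include <stdbool.h>
#include <stdio.h>

struct file_state { const char *name; bool world_writable; };

/* State-based check: examine the recorded state itself. */
static bool state_is_unauthorized(const struct file_state *s) {
    return s->world_writable;            /* example policy: no world-writable files */
}

/* Transition-based check: examine the current state plus the proposed command. */
static bool transition_is_unauthorized(const struct file_state *s, bool make_world_writable) {
    bool resulting = s->world_writable || make_world_writable;
    return resulting;                    /* would the command enter an unauthorized state? */
}

int main(void) {
    struct file_state f = { "/etc/passwd", false };
    printf("current state unauthorized?  %s\n", state_is_unauthorized(&f) ? "yes" : "no");
    printf("proposed chmod o+w allowed?  %s\n",
           transition_is_unauthorized(&f, true) ? "no" : "yes");
    return 0;
}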
Auditing to Detect Known Violations
of a Policy
The security policy is not stated explicitly. However,
certain behaviors are considered to be “non-secure.”
For example, an attack that floods a network to the
point that it is not usable, or accessing of a
computer by an unauthorized person, would violate
the implicit security policy.
Auditing Mechanisms
Different systems approach logging in different
ways. Most systems log all events by default and
allow the system administrator to disable the
logging of specific events.
Auditing Mechanisms
Secure Systems
Systems designed with security in mind have auditing
mechanisms integrated with the system design and
implementation.
Typically, these systems provide a language or interface
that allows system managers to configure the system to
report specific events or to monitor accesses by a particular
subject or to a particular object.
This is controlled at the audit subsystem so that irrelevant
actions or accesses are not recorded
Auditing Mechanisms
Non-secure Systems
Auditing subsystems for systems not designed with security
in mind are generally for purposes of accounting.
Although these subsystems can be used to check for
egregious security violations, they rarely record the level of
detail or the types of events that enable security officers to
determine if security has been violated.
The level of detail needed is typically provided by an added
subsystem.
