ST Mod1 CH1

The document discusses software testing concepts like errors, faults, failures and incidents. It also defines key terms like test, quality attributes, requirements and input domains. Correctness of a program depends on its behavior for all possible inputs from the input domain as defined by requirements.

SOFTWARE TESTING

Textbooks:
1. Software Testing: A Craftsman's Approach – Paul C. Jorgensen
2. Software Testing and Analysis: Process, Principles and Techniques – Mauro Pezze, Michal Young
3. Foundations of Software Testing – Aditya P. Mathur
What is Software Testing?
• Software testing is the process of evaluating a software item to detect differences between given input and expected output.
• It is also used to determine the features of a software item.
• Testing assesses the quality of the product.
• In other words, software testing is a verification and validation process.
Chapter 1
A Perspective on Testing
Basic Definitions:
Errors
• People make errors.
• A synonym for error is mistake.
• When people make mistakes while coding, we call these mistakes bugs.
• Errors tend to propagate.
Ex: a requirement error may be magnified during design and amplified still more during coding.
Fault
• A fault is the result of an error.
• Defect is a good synonym for fault.
There are 2 types of faults:
1. Faults of omission
2. Faults of commission
• A fault of omission occurs when we fail to enter correct information.
• A fault of commission occurs when we enter something into a representation that is incorrect.
• Of the two types, faults of omission are more difficult to detect and resolve.
Failure
• A failure occurs when a faulty piece of code is executed, leading to an incorrect state that propagates to the program's output.
• Two subtleties arise here:
1. Failures only occur in an executable representation, which is usually source code or loaded object code.
2. Failure relates only to faults of commission.
Incident
• Whenever a failure occurs, it may or may not be readily apparent (visible or clear) to the user.
• An incident is the symptom associated with a failure that alerts the user to the occurrence of a failure.
Test
• Testing is concerned with errors, faults, failures and
incidents.

• A test is the act of exercising software using test cases.
• A test has 2 distinct goals:
-- To find failures.
-- To demonstrate correct execution.
• The following points can be summarized from the testing life cycle:
• The first three phases are putting bugs in.
• The testing phase is finding bugs.
• The last three phases are getting bugs out.
• Fault resolution is another opportunity for errors.
• When a fix causes formerly correct software to misbehave, the fix is deficient.
• The process of testing can be subdivided into
separate steps:
• Test planning
• Test case development
• Running test cases
• Evaluating test results.
SOFTWARE QUALITY
• Software quality is a multidimensional quantity and is
measurable.
Quality Attributes
• There exist several measures of software quality.
• These can be divided into static and dynamic quality attributes.
• Static quality attributes refer to the actual code and related documentation.
• Dynamic quality attributes relate to the behavior of the application while in use.
• Static quality attributes include structured, maintainable, and testable code, as well as the availability of correct and complete documentation.
• Example: A poorly documented piece of code will be harder to understand and hence difficult to modify.
• Poorly structured code might be harder to modify and difficult to test.
• Dynamic quality Attributes:
• Reliability
• Correctness
• Completeness
• Consistency
• Usability
• Performance
Reliability:
• Refers to the probability of failure-free operation.
Correctness:
• Refers to the correct operation and is always with reference
to some artifact.
• For a tester, correctness is w.r.t. the requirements.
• For a user, correctness is w.r.t. the user manual.
Completeness:
• Refers to the availability of all the features listed in the
requirements or in the user manual.
• Incomplete software is software that does not fully implement all features required.
Consistency:
• Refers to adherence to a common set of conventions and
assumptions.
• Ex: All buttons in the user interface might follow a common color-coding convention.
Usability:
• Refers to the ease with which an application can be used.
• This is an area in itself, and there exist techniques for usability testing.
• Psychology plays an important role in the design of techniques for usability testing.
• Usability testing is testing done by the product's potential users.
• The development organization invites a selected set of potential users and asks them to test the product.
• Users in turn test for ease of use, functionality as expected, performance, safety, and security.
• Users thus serve as an important source of tests that developers or testers within the organization might not have conceived.
• Usability testing is sometimes referred to as user-centric testing.
Performance:
• Refers to the time the application takes to perform a requested task.
• Performance is considered a non-functional requirement.
Reliability:
• Software reliability is the probability of failure-free operation of software over a given time interval and under given conditions. (ANSI/IEEE Std)
• Software reliability can vary from one operational profile to another.
• Software reliability is the probability of failure-free operation of software in its intended environments.
• The term environment refers to the software and hardware elements needed to execute the application.
• These elements include the operating system (OS), hardware requirements, and any other applications needed for communication.
1.3 Requirements, Behaviour and
Correctness:
• Products (or software) are designed in response to requirements. (Requirements specify the functions that a product is expected to perform.)
• During the development of the product, the requirements might have changed from what was stated originally.
• Regardless of any change, the expected behaviour of the product is determined by the tester's understanding of the requirements during testing.
Example:
• Requirement 1: It is required to write a program that inputs two integers and outputs the maximum of these.
• Requirement 2: It is required to write a program that inputs a sequence of integers and outputs the sorted version of this sequence.
• Suppose that the program max is developed to satisfy Requirement 1 above. The expected output of max when the input integers are 13 and 19 can easily be determined to be 19.
• Suppose now that the tester wants to know whether the two integers are to be input to the program on one line, or separately with a carriage return typed in after each number.
• The requirement as stated above fails to provide an answer to this question. This example illustrates the incompleteness of Requirement 1.
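A minimal sketch of max under Requirement 1 may help; Python is used for illustration, and the function name and call interface are assumptions, since the requirement says nothing about how the two integers are supplied:

```python
def max_of_two(a: int, b: int) -> int:
    """Return the maximum of two integers (Requirement 1)."""
    return a if a >= b else b

# The expected output for inputs 13 and 19 is easily determined to be 19.
print(max_of_two(13, 19))  # 19
```

Note that the incompleteness lies entirely in the input-handling, which this sketch sidesteps by taking the integers as function arguments.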
• The second requirement in the above example is ambiguous.
• It is not clear from this requirement whether the input sequence is to be sorted in ascending or descending order.
• The behaviour of the sort program, written to satisfy this requirement, will depend on the decision taken by the programmers while writing sort.
• Testers are often faced with incomplete/ambiguous requirements.
• In such situations, a tester may resort to a variety of ways to determine what behaviour to expect from the program under test.
• Regardless of the nature of the requirements, testing requires the determination of the expected behaviour of the program under test.
• The observed behaviour of the program is compared with the expected behaviour to determine if the program functions as desired.
1.3.1 Input Domain and Program Correctness
• A program is considered correct if it behaves as desired on all possible test inputs.
• Usually, the set of all possible inputs is too large for the program to be executed on each input.
• For example, if each integer ranges over -32,768 to 32,767 (2^16 values), a pair of integers admits 2^32 combinations; exercising max on every pair requires 2^32 executions.
• Testing a program on all possible inputs is known as "exhaustive testing".
• If the requirements are complete and unambiguous, it should be possible to determine the set of all possible inputs.
• Definition: Input Domain
The set of all possible inputs to program P is known as the input domain, or input space, of P.
• Modified Requirement 2: It is required to write a program that inputs a sequence of integers and outputs the integers in this sequence sorted in either ascending or descending order. The order of the output sequence is determined by an input request character, which should be "A" when an ascending sequence is desired, and "D" otherwise. While providing input to the program, the request character is entered first, followed by the sequence of integers to be sorted. The sequence is terminated with a period.
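A sketch of a sort program satisfying the modified requirement; the function interface is an assumption, and the period-terminated input parsing is omitted:

```python
def sort_sequence(request_char, numbers):
    """Sort ascending when the request character is 'A', descending
    otherwise (modified Requirement 2). Reading the period-terminated
    input stream is left out of this sketch."""
    return sorted(numbers, reverse=(request_char != "A"))

print(sort_sequence("A", [7, 19, 3]))  # [3, 7, 19]
print(sort_sequence("D", [7, 19, 3]))  # [19, 7, 3]
```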

Definition: Correctness
• A program is considered correct if it behaves as
expected on each element of its input domain.
1.3.2 Valid and Invalid Inputs:
• The input domain is derived from the requirements. It is difficult to determine the input domain for incomplete requirements.
• Identifying the set of invalid inputs and testing the program against these inputs are important parts of the testing activity.
• Even when the requirements fail to specify the program behaviour on invalid inputs, the programmer does treat these in one way or another.
• Testing a program against invalid inputs might reveal errors in the program.
• Ex: suppose sort is given the input <E 7 19 ...>, where "E" is an invalid request character. The sort program enters an infinite loop and neither asks the user for any input nor responds to anything typed by the user. This observed behaviour points to a possible error in sort.
1.4 Correctness versus reliability
1.4.1 Correctness
• Though correctness of a program is desirable, it is almost never the objective of testing.
• To establish correctness via testing would imply testing a program on all elements in the input domain, which is impossible to accomplish in most cases.
• Thus, correctness is established via mathematical proofs of programs.
• While correctness attempts to establish that the program is error-free, testing attempts to find if there are any errors in it.
Completeness of testing
• Thus, completeness of testing does not necessarily demonstrate that a program is error-free.
• As testing progresses, errors might be revealed.
• Removal of errors from the program usually improves the chances, or the probability, of the program executing without any failure.
• Also, testing, debugging, and the error-removal process together increase confidence in the correct functioning of the program under test.
Example:
• Consider the following program that inputs two integers x and y and prints the value of f(x, y) or g(x, y) depending on the condition x < y.

integer x, y
input x, y
if (x < y)      // this condition should be x ≤ y
{
  print f(x, y)
}
else
{
  print g(x, y)
}
• Suppose that function f produces an incorrect result whenever it is invoked with x = y, and that f(x, y) ≠ g(x, y) when x = y.
• In its present form the program fails when tested with equal input values, because function g is invoked instead of function f.
• When this error is removed by changing the condition x < y to x ≤ y, the program fails again when the input values are the same.
• The latter failure is due to the error in function f.
• When the error in f is also removed, the program will be correct, assuming that all other code is correct.
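The corrected control flow can be sketched as follows; f and g here are placeholder functions chosen only so that they disagree when x = y, since the example leaves them unspecified:

```python
def f(x, y):  # placeholder body; the example only requires f(x,y) != g(x,y) at x == y
    return x + y

def g(x, y):  # placeholder body
    return x - y

def compute(x, y):
    # Corrected condition: x <= y. The faulty version used x < y,
    # so g was invoked instead of f when x == y.
    if x <= y:
        return f(x, y)
    return g(x, y)

print(compute(2, 2))  # 4 — f is now (correctly) invoked for equal inputs
```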
1.4.2 Reliability
• "The reliability of a program P is the probability of its successful execution on a randomly selected element from its input domain."
• Correctness is a binary metric, whereas reliability is a continuous metric ranging from 0 to 1.
• Ex: Program P takes a pair of integers as input. Its input domain is the set of all pairs of integers.
• Suppose that in actual use the inputs to P are {<(0,0), (-1,1), (1,-1)>}.
• If P fails on exactly one of the three possible input pairs, then its reliability is 2/3.
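The 2/3 figure can be reproduced with a small sketch; the program P below, which fails (raises) on exactly the input (0, 0), is hypothetical:

```python
def P(a, b):
    if (a, b) == (0, 0):
        raise ValueError("hypothetical fault triggered by this input")
    return a + b

def reliability(program, inputs):
    """Fraction of the given inputs on which the program executes
    successfully — reliability of P over this input set."""
    ok = 0
    for x in inputs:
        try:
            program(*x)
            ok += 1
        except Exception:
            pass
    return ok / len(inputs)

print(reliability(P, [(0, 0), (-1, 1), (1, -1)]))  # 2/3 ≈ 0.667
```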
1.4.3 Program use and the operational profile
• “Operational profile is a numerical description of
how a program is used.”
• Ex: Consider the sort program.

Operational Profile 1: Numbers only (probability 0.9), Alphanumeric strings (probability 0.1)
Operational Profile 2: Numbers only (probability 0.1), Alphanumeric strings (probability 0.9)

• OP1: sort is used mostly for sorting sequences of numbers.
• OP2: sort is used mostly for sorting alphanumeric strings.
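To see why reliability varies across operational profiles, suppose (hypothetically) that sort fails on alphanumeric inputs but always succeeds on numeric ones; weighting failure by usage gives a different reliability under each profile:

```python
# Hypothetical failure behavior: sort always fails on alphanumeric
# inputs and always succeeds on numeric ones.
P_FAIL = {"numbers": 0.0, "alphanumeric": 1.0}

def reliability(profile):
    """Reliability under an operational profile: sum over input classes
    of P(class used) * P(success | class)."""
    return sum(p * (1.0 - P_FAIL[cls]) for cls, p in profile.items())

op1 = {"numbers": 0.9, "alphanumeric": 0.1}
op2 = {"numbers": 0.1, "alphanumeric": 0.9}
print(reliability(op1))  # 0.9 — mostly numeric use
print(reliability(op2))  # 0.1 — mostly alphanumeric use
```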
1.5 Testing and Debugging
• (Testing is the process of determining if a program
behaves as expected.)
• In the process one may discover errors in the program
under test.
• However, when testing reveals an error, the process
used to determine the cause of this error and to
remove it is known as debugging.
• Testing and debugging are often used as two related
activities in a cyclic manner.
• Steps are
1. Preparing a test plan
2. Constructing test data
3. Executing the program
4. Specifying program behaviour
5. Assessing the correctness of program behaviour
6. Construction of oracle
1. Preparing a test plan:
• A test cycle is often guided by a test plan.
• When relatively small programs are being tested, the test plan is usually informal and in the tester's mind, or there may be no plan at all.
• An example test plan considers items such as the method used for testing, the method for evaluating the adequacy (sufficiency) of test cases, and the method to determine if a program has failed or not.
Test plan for sort:
The sort program is to be tested to meet the requirements given in the example.
1. Execute the program on at least two input sequences, one with "A" and the other with "D" as request characters.
2. Execute the program on an empty input sequence.
3. Test the program for robustness against erroneous input, such as "R" typed in as the request character.
4. All failures of the test program should be recorded in a suitable file using the company failure report form.
2. Constructing test data:
• A test case is a pair consisting of the test data to be input to the program and the expected output.
• The test data is a set of values, one for each input variable.
• A test set is a collection of zero or more test cases.
• The program requirements and the test plan help in the construction of test data.
• Execution of the program on test data might begin after all or a few test cases have been constructed.
• Based on the results obtained, the testers decide whether to continue the construction of additional test cases or to enter the debugging phase.
• Test cases for the sort program are generated using the test plan above.
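A test set derived from the plan above might look like the following; the concrete values are illustrative, not taken from the book's figure:

```python
# Each test case pairs test data (request character, input sequence)
# with the expected output, per the test plan for sort.
TEST_SET = [
    ("A", [19, 7, 3], [3, 7, 19]),   # plan item 1: ascending request
    ("D", [19, 7, 3], [19, 7, 3]),   # plan item 1: descending request
    ("A", [], []),                   # plan item 2: empty input sequence
]

def sort_sequence(request_char, numbers):
    return sorted(numbers, reverse=(request_char != "A"))

for req, data, expected in TEST_SET:
    assert sort_sequence(req, data) == expected
```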
3. Executing the program:
• Execution of a program under test is the next significant step in testing.
• The complexity of actual program execution is dependent on the program itself.
• Testers might be able to construct a test harness (automated test framework) to aid in program execution.
• The harness initializes any global variables, inputs a test case, and executes the program.
• The output generated by the program may be saved in a file for subsequent examination by a tester.
• In preparing this test harness, assume that:
(a) sort is coded as a procedure.
(b) The get_input procedure reads the request character and the sequence to be sorted into the variables request_char, num_item, and in_numbers. The test_setup procedure is invoked first to set up the test, which includes identifying and opening the file containing tests.
• The check_output procedure serves as the oracle that checks if the program under test behaves correctly.
• report_failure: invoked when the output from sort is incorrect; the failure may be reported via a message or saved in a file.
• print_sequence: prints the sequence generated by the sort program. This can also be saved in a file for subsequent examination.
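The harness loop can be sketched as follows; the structure mirrors the procedures named above (execute, check_output, report_failure), but collapses them into one simplified function rather than reproducing the slide's exact harness:

```python
def run_harness(test_cases, sort_fn):
    """Minimal test-harness loop: feed each test case to the program
    under test and let an oracle judge the result (check_output);
    failed cases are collected (report_failure)."""
    failures = []
    for request_char, in_numbers, expected in test_cases:
        result = sort_fn(request_char, in_numbers)  # execute program under test
        if result != expected:                      # oracle check
            failures.append((request_char, in_numbers, result))
    return failures

def sort_fn(req, nums):
    return sorted(nums, reverse=(req != "A"))

print(run_harness([("A", [2, 1], [1, 2]), ("D", [2, 1], [2, 1])], sort_fn))  # []
```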
4. Specifying program behaviour:
• State vector: collecting the current values of the program variables into a vector gives the state vector.
• An indication of where the control of execution is at any instant of time can be given by using an identifier associated with the next program statement (the PC).
• A state sequence diagram can be used to specify the behavioural requirements.
• This same specification can then be used during testing to check if the application conforms to the requirements.
5. Assessing the correctness of program behaviour: This has two steps:
1. Observe the behaviour.
2. Analyze the observed behaviour.
• This task is extremely complex for large distributed systems. The entity that performs the task of checking the correctness of the observed behaviour is known as an oracle.
• A human oracle is the best available oracle, but it is error prone, slower, and checks only trivial I/O behaviours.
• Oracles can also be programs designed to check the behaviour of other programs.
Oracle: Example
6. Construction of oracles:
• Construction of automated oracles, such as one to check a matrix multiplication program or a sort program, requires determination of the I/O relationship.
• When tests are generated from models such as finite-state machines (FSMs) or statecharts, both the inputs and the corresponding outputs are available.
• This makes it possible to construct an oracle while generating the tests.
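For the sort program, the I/O relationship is that the output must be a permutation of the input arranged in the requested order. A sketch of such an oracle:

```python
from collections import Counter

def sort_oracle(input_seq, output_seq, ascending=True):
    """Oracle for a sort program: verify the output is the input's
    elements (a permutation) arranged in the requested order."""
    pairs = list(zip(output_seq, output_seq[1:]))
    in_order = (all(a <= b for a, b in pairs) if ascending
                else all(a >= b for a, b in pairs))
    return in_order and Counter(input_seq) == Counter(output_seq)

print(sort_oracle([7, 19, 3], [3, 7, 19]))           # True
print(sort_oracle([7, 19, 3], [3, 7, 7]))            # False: not a permutation
print(sort_oracle([7, 19, 3], [19, 7, 3], False))    # True: descending order
```

Note the oracle checks the I/O relationship without re-implementing sort itself, which keeps it independent of the program under test.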
Example
• Consider a program named Hvideo that allows one to keep track of home videos.
• In data entry mode, it displays a screen in which the user types in information about a DVD.
• In search mode, the program displays a screen into which a user can type some attribute of the video being searched for and set up a search criterion.
• To test Hvideo we need to create an oracle that checks whether the program functions correctly in data entry and search modes.
• The input generator generates a data entry request.
• The input generator then requests the oracle to test if Hvideo performed its task correctly on the input given for data entry.
• The oracle uses the input to check if the information to be entered into the database has been entered correctly or not.
• The oracle returns a pass or no pass to the input generator.

Test case
• A test case has an identity and is associated with a program behavior.
• A test case also has a set of inputs and expected outputs.
Test Cases
• The essence of software testing is to determine a set of test cases for the item to be tested.
• What information should be in a test case?
• Inputs of two types: preconditions and actual inputs.
• Expected outputs of two types: postconditions and actual outputs.
• The act of testing entails establishing the necessary preconditions, providing the test case inputs, observing the outputs, comparing these with the expected outputs, and then ensuring that the expected postconditions exist to determine whether the test passed.
• Test cases need to be developed, reviewed, used, managed, and saved.
Typical test case information:
• Test Case ID
• Purpose
• Preconditions
• Inputs
• Expected Outputs
• Postconditions
• Execution History (Date, Result, Version, Run By)
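The typical test-case record above maps naturally onto a data structure; the class and field names below are illustrative, not a standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """Typical test-case information as a record type."""
    test_case_id: str
    purpose: str
    preconditions: list
    inputs: list
    expected_outputs: list
    postconditions: list
    # Each history entry: (date, result, version, run_by)
    execution_history: list = field(default_factory=list)

tc = TestCase("TC-1", "max of two integers", [], [13, 19], [19], [])
tc.execution_history.append(("2024-01-01", "pass", "v1", "tester"))
```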


Insights from a Venn Diagram
• Consider the set S of specified behaviors and the set P of programmed behaviors.
• If certain specified behaviors have not been programmed (implemented), then they are faults of omission.
• If certain programmed (implemented) behaviors have not been specified, then they are faults of commission.
• The intersection of S and P is the correct portion, that is, behaviors that are both specified and implemented.
• A new circle is added for test cases.
• There may be specified behaviors that are not tested, specified behaviors that are tested, and test cases that correspond to unspecified behaviors.
• There may be programmed behaviors that are not tested, programmed behaviors that are tested, and test cases that correspond to unprogrammed behaviors.
• If specified behaviors exist for which no test cases are available, the testing is necessarily incomplete.
• If certain test cases correspond to unspecified behaviors, two possibilities arise:
– Either such a test case is unwarranted, or
– The specification is deficient.
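The Venn-diagram regions correspond directly to set operations; the behavior labels below are invented purely for illustration:

```python
# Illustrative behavior labels: S = specified, P = programmed, T = tested.
S = {"login", "logout", "search"}
P = {"login", "search", "debug_menu"}   # "logout" omitted, "debug_menu" extra
T = {"login", "debug_menu"}

print(S - P)   # faults of omission: specified but not programmed
print(P - S)   # faults of commission: programmed but not specified
print(S & P)   # correct portion: behaviors both specified and implemented
print(S - T)   # specified behaviors left untested -> testing is incomplete
print(T - S)   # test cases for unspecified behavior -> unwarranted test
               # or deficient specification
```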
Identifying Test Cases
• Two fundamental approaches are used to identify test cases:
• Functional Testing
• Structural Testing
• Each of these approaches has several distinct test case identification methods, commonly called testing methods.
Functional Testing
• Functional testing is based on the view that any program can be considered to be a function that maps values from its input domain to values in its output range.
• This leads to the term black box testing, in which the content of the box is not known, and the function of the black box is understood completely in terms of its inputs and outputs.
• Functional test cases have two distinct advantages:
• They are independent of how the software is implemented, so if the implementation changes, the test cases are still useful.
• Test case development can occur in parallel with implementation, thereby reducing the overall project development interval.
• On the other hand, the disadvantages are:
• Significant redundancies may exist among the test cases.
• This is compounded by the possibility of gaps of untested software.
Structural Testing:
• This is also known as white box testing.
• The ability to “see inside” the black box allows the
tester to identify test cases based on how the
function is implemented.
• To really understand structural testing, familiarity
with concepts of linear graph theory is essential.
• With these concepts the tester can rigorously
describe exactly what is tested.
1.10 Test generation strategies
• Any form of test generation uses a source document.
• In the most informal of test methods, the source document resides in the mind of the tester, who generates tests based on a knowledge of the requirements.
• In several commercial environments, the process is a bit more formal.
• The tests are generated using a mix of formal and informal methods, often directly from the requirements document serving as the source.
• In more advanced test processes, requirements serve as a source for the development of formal models.
Test generation strategies
• Model based: requires that a subset of the requirements be modeled using a formal notation (usually graphical).
• Models: finite-state machines, timed automata, Petri nets, etc.
• Specification based: requires that a subset of the requirements be modeled using a formal mathematical notation.
• Examples: B, Z, and Larch.
• Code based: generates tests directly from the code.
TEST METRICS
• The term metric refers to a standard of measurement.
• In software testing, there exist a variety of metrics.
• There are four general core areas that assist in the design of metrics: schedule, quality, resources, and size.
• Schedule-related metrics: measure actual completion times of various activities and compare these with the estimated times to completion.
• Quality-related metrics: measure the quality of a product or a process.
• Resource-related metrics: measure items such as cost in dollars, manpower, and tests executed.
• Size-related metrics: measure the size of various objects, such as the source code and the number of tests in a test suite.
• Organizational metrics: metrics at the level of an organization are useful in overall project planning and management.
• Ex: the number of defects reported after product release, averaged over a set of products developed and marketed by an organization, is a useful metric of product quality at the organizational level.
• Organizational metrics allow senior management to monitor the overall strength of the organization and point to areas of weakness.
• Thus, these metrics help senior management in setting new goals and planning for the resources needed to realize these goals.
• Project metrics:
• Project metrics relate to a specific project, for example an I/O device testing project or a compiler project.
• These are useful in the monitoring and control of a specific project.
• The ratio of actual-to-planned system test effort is one project metric. Test effort could be measured in terms of tester-man-months.
• Process metrics:
• Every project uses some test process. A big-bang approach is well suited for small, single-person projects.
• The goal of a process metric is to assess the goodness of the process.
• When a test process consists of several phases, such as unit test, integration test, and system test, one can measure how many defects were found in each phase.
• It is well known that the later a defect is found, the costlier it is to fix.
• Product metrics:
• Useful in making decisions related to the product.
• Cyclomatic complexity: V(G) = E - N + 2P for a program containing N nodes, E edges, and P connected components (procedures).
• A larger value of V(G) implies higher program complexity; such a program is more difficult to understand and test than one with a smaller value.
• Values of V(G) of 5 or less are recommended.
• Halstead complexity: the number of errors (B) is estimated from the program size (S) and effort (E):

B = 7.6 E^0.667 S^0.333
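Both formulas are simple enough to compute directly; the edge/node counts in the example call are made up for illustration:

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe's cyclomatic complexity: V(G) = E - N + 2P."""
    return edges - nodes + 2 * components

def halstead_bugs(effort, size):
    """Halstead's estimate of the number of errors, per the formula
    above: B = 7.6 * E**0.667 * S**0.333. The constants are empirical."""
    return 7.6 * effort ** 0.667 * size ** 0.333

# Hypothetical flow graph with 9 edges and 7 nodes:
print(cyclomatic_complexity(edges=9, nodes=7))  # 4 — within the recommended limit of 5
```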


• Product metrics for OO software:
• Metrics include reliability, defect density, defect severity, test coverage, cyclomatic complexity, weighted methods per class, response set, and number of children.
• Static and dynamic metrics: static metrics are those computed without having to execute the product.
• Ex: the number of testable entities in an application.
• A dynamic metric requires code execution.
• Ex: the number of testable entities actually covered by a test suite is a dynamic metric.
Testability:
• According to IEEE, testability is the "degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met".
• Two types:
1. Static testability metrics
2. Dynamic testability metrics
• Static testability metric: software complexity is one static testability metric.
• The more complex an application, the lower its testability, that is, the higher the effort required to test it.
• Dynamic metrics for testability include various code-based coverage criteria.
• Ex: a program for which it is difficult to generate tests that satisfy the statement coverage criterion is considered to have lower testability than one for which it is easier to construct such tests.
Error and Fault Taxonomies:
• Errors and faults hinge on the distinction between process and product: process refers to how we do something, and product is the end result of a process.
• A comprehensive treatment of types of faults is defined in the IEEE Standard Classification for Software Anomalies, 1993.
Faults classified by severity:
1. Mild – Misspelled word
2. Moderate – Misleading or redundant information
3. Annoying – Truncated names, bill for $0.00
4. Disturbing – Some transaction(s) not processed
5. Serious – Lose a transaction
6. Very serious – Incorrect transaction execution
7. Extreme – Frequent "very serious" errors
8. Intolerable – Database corruption
9. Catastrophic – System shutdown
10. Infectious – Shutdown that spreads to others
Input/Output Faults
Input:
• Correct input not accepted
• Incorrect input accepted
• Description wrong or missing
Output:
• Wrong format
• Wrong result
• Correct result at wrong time
• Incomplete or missing result
• Spurious result
• Spelling/grammar
Logic Faults
• Missing case(s)
• Duplicate case(s)
• Extreme condition neglected
• Misinterpretation
• Missing condition
• Extraneous condition(s)
• Test of wrong variable
• Incorrect loop iteration
• Wrong operator ( < instead of <=)
Computation Faults
• Incorrect algorithm
• Missing computation
• Incorrect operand
• Incorrect operation
• Parenthesis error
• Insufficient precision(round-off)
• Wrong built-in function
Interface Faults
• Incorrect interrupt handling
• I/O timing
• Call to wrong procedure
• Call to non existent procedure
• Parameter mismatch(type , number)
• Incompatible types
• Superfluous inclusion
Data Faults
• Incorrect initialization
• Incorrect storage/access
• Wrong flag/index value
• Incorrect packing/unpacking
• Wrong variable used
• Wrong data reference
• Scaling/units error
• Incorrect type
• Inconsistent data
Levels of Testing
• Levels of testing echo the levels of abstraction found in
the waterfall model of the SDLC.
• This emphasizes the correspondence between testing
and design levels.
• The three levels of definition correspond directly to
three levels of testing.
• Structural testing is appropriate at unit level while
functional testing is appropriate at system level.
Levels of abstraction and testing in the waterfall
model
Testing and verification
• Program verification aims at proving the correctness of a program by showing that it contains no errors.
• This is very different from testing, which aims at uncovering errors in a program.
• Program verification and testing are best considered as complementary techniques.
• In practice, program verification is often avoided, and the focus is on testing.
• Testing is not a perfect technique, in that a program might contain errors despite the success of a set of tests.
• Verification promises to show that a program is free from errors.
• However, the person or tool that verified a program might have made a mistake in the verification process;
• there might be an incorrect assumption about the input conditions;
• incorrect assumptions might be made regarding the components that interface with the program, and so on.
Static Testing
• Static testing is carried out without executing the application under test. This is in contrast to dynamic testing, which requires one or more executions of the application under test.
• Static testing is useful in that it may lead to the discovery of faults in the application,
• as well as ambiguities and errors in the requirements and other application-related documents, at relatively low cost.
Elements of static testing
Static testing assesses the following:
– Requirement documents
– Design documents
– User manuals
– Static testing tools
– Static testing tools take the application code as input and generate a variety of data useful in the test process.
Walkthroughs
• A walkthrough is an informal process to review any application-related document. E.g., requirements are reviewed using a process termed a requirements walkthrough.
• Requirements walkthrough
• Code walkthrough (peer code review)
• A walkthrough begins with a review plan.
• The test team must review the requirements to ensure that they match user needs and are free from ambiguities and inconsistencies.
• Functional and non-functional requirements are reviewed.
• A detailed report is generated from the understanding of the desired application.
Inspections
• Inspection is a formal process, usually associated with code.
• Formal code inspection is done to improve code quality at lower cost.
• The team works with the help of an inspection plan, which consists of the following elements:
 Statement of purpose
 Work product to be inspected (code and document inspection)
 Team formation (roles and tasks to perform)
 Rate at which the inspection task is to be completed
 Data collection forms (record defects discovered, coding standard violations, and time spent on each task)
Inspection team members' roles:
• Moderator (takes charge of the process and leads the review)
• Reader (reads the code using a code browser and a large monitor)
• Recorder (records any error discovered or issue to be looked into)
• Author (the actual developer, who helps others understand the code)

1. Static code analysis tools – control flow graph (CFG), data flow graph
2. Software complexity and static testing: software analysis tools often compute complexity using one or more complexity metrics.
