STM Unit 3
Paths, Path products and Regular expressions : Path products & path expression, reduction procedure,
applications, regular expressions & flow anomaly detection.
This unit gives an in-depth overview of the paths of various flow graphs, their interpretation, and applications.
MOTIVATION:
o Flow graphs are an abstract representation of programs.
o Any question about a program can be cast into an equivalent question about an appropriate
flowgraph.
o Most software development, testing and debugging tools use flow graphs analysis techniques.
PATH PRODUCTS:
o Normally, flow graphs are used to denote only control flow connectivity.
o The simplest weight we can give to a link is a name.
o Using link names as weights, we then convert the graphical flow graph into an equivalent algebra-like expression that denotes the set of all possible paths from entry to exit of the flow graph.
o Every link of a graph can be given a name.
o The link name will be denoted by lower case italic letters.
o In tracing a path or path segment through a flow graph, you traverse a succession of link names.
o The name of the path or path segment that corresponds to those links is expressed naturally by
concatenating those link names.
o For example, if you traverse links a,b,c and d along some path, the name for that path segment is
abcd. This path name is also called a path product. Figure 5.1 shows some examples:
PATH EXPRESSION:
o Consider a pair of nodes in a graph and the set of paths between those nodes.
o Denote that set of paths by an uppercase letter such as X or Y. From Figure 5.1c, the members of the path set can be listed as follows:
ac + abc + abbc + abbbc + abbbbc + ...
o The + sign is understood to mean "or" between the two nodes of interest, paths ac, or abc, or
abbc, and so on can be taken.
o Any expression that consists of path names and "OR"s and which denotes a set of paths between two nodes is called a "path expression".
PATH PRODUCTS:
o The name of a path that consists of two successive path segments is conveniently expressed by
the concatenation or Path Product of the segment names.
o For example, if X and Y are defined as X=abcde,Y=fghij,then the path corresponding to X
followed by Y is denoted by
XY=abcdefghij
o Similarly,
o YX=fghijabcde
o aX=aabcde
o Xa=abcdea
XaX=abcdeaabcde
o If X and Y represent sets of paths or path expressions, their product represents the set of paths
that can be obtained by following every element of X by any element of Y in all possible ways.
For example,
o X = abc + def + ghi
o Y = uvw + z
Then,
XY = abcuvw + defuvw + ghiuvw + abcz + defz + ghiz
o If a link or segment name is repeated, that fact is denoted by an exponent. The exponent's value
denotes the number of repetitions:
o a^1 = a; a^2 = aa; a^3 = aaa; a^n = aaa...a (n times).
Similarly, if
X = abcde
then
X^1 = abcde
X^2 = abcdeabcde = (abcde)^2
X^3 = abcdeabcdeabcde = (abcde)^2(abcde)
= abcde(abcde)^2 = (abcde)^3
RULE 1: A(BC)=(AB)C=ABC
where A,B,C are path names, set of path names or path expressions.
o The zeroth power of a link name, path product, or path expression is also needed for
completeness. It is denoted by the numeral "1" and denotes the "path" whose length is zero - that
is, the path that doesn't have any links.
o a^0 = 1
o X^0 = 1
PATH SUMS:
o The "+" sign was used to denote the fact that path names were part of the same set of paths.
o The "PATH SUM" denotes paths in parallel between nodes.
o Links a and b in Figure 5.1a are parallel paths and are denoted by a + b. Similarly, links c and d
are parallel paths between the next two nodes and are denoted by c + d.
o The set of all paths between nodes 1 and 2 can be thought of as a set of parallel paths and
denoted by eacf+eadf+ebcf+ebdf.
o If X and Y are sets of paths that lie between the same pair of nodes, then X+Y denotes the
UNION of those set of paths. For example, in Figure 5.2:
The first set of parallel paths is denoted by X + Y + d and the second set by U + V + W + h + i +
j. The set of all paths in this flowgraph is f(X + Y + d)g(U + V + W + h + i + j)k
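To make the product and sum operations concrete, here is a minimal Python sketch (not part of the original text) that treats a path expression as a set of link-name strings; the sets X and Y below are the ones used in the product example above.

    # Path expressions as Python sets of link-name strings.
    # Product (XY) concatenates every member of X with every member of Y;
    # sum (X + Y) is simply set union.

    def product(X, Y):
        """Path product: follow every path in X by every path in Y."""
        return {x + y for x in X for y in Y}

    def path_sum(X, Y):
        """Path sum: the union of two sets of parallel paths."""
        return X | Y

    def power(X, n):
        """X^n: X concatenated with itself n times (X^0 is the zero-length path '1')."""
        result = {""}                      # the zero-length path, denoted "1" in the text
        for _ in range(n):
            result = product(result, X)
        return result

    X = {"abc", "def", "ghi"}
    Y = {"uvw", "z"}
    print(product(X, Y))    # six paths: abcuvw, abcz, defuvw, defz, ghiuvw, ghiz
    print(path_sum(X, Y))   # the union of the two path sets
    print(power({"ab"}, 3)) # {'ababab'}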
DISTRIBUTIVE LAWS:
o The product and sum operations are distributive, and the ordinary rules of multiplication apply; that is, A(B + C) = AB + AC and (B + C)D = BD + CD.
o If a set consists of path names and a member of that set is added to it, the "new" name, which is already in that set of names, contributes nothing and can be ignored.
o For example,
o if X=a+aa+abc+abcd+def then
X+a = X+aa = X+abc = X+abcd = X+def = X
It follows that any arbitrary sum of identical path expressions reduces to the same path
expression.
LOOPS:
o Loops can be understood as an infinite set of parallel paths. Say that the loop consists of a single link b. Then the set of all paths through that loop point is b^0 + b^1 + b^2 + b^3 + b^4 + b^5 + ...
Figure 5.3: Examples of path loops.
o This potentially infinite sum is denoted by b* for an individual link and by X* when X is a path
expression.
In this notation, ab*c = ac + abc + abbc + abbbc + ...
o Evidently, aa* = a*a = a+ and XX* = X*X = X+.
o It is often more convenient to denote the fact that a loop cannot be taken more than a certain number of times, say n.
o A bar is used under the exponent to denote this fact: X with an underlined exponent n denotes X^0 + X^1 + X^2 + ... + X^n, i.e., the loop taken at most n times.
RULES 6 - 16:
o The following rules can be derived from the previous rules:
o RULE 6: X^n + X^m = X^n if n > m
RULE 6: X^n + X^m = X^m if m > n
RULE 7: X^n X^m = X^(n+m)
RULE 8: X^n X* = X*X^n = X*
RULE 9: X^n X+ = X+X^n = X+
RULE 10: X*X+ = X+X* = X+
RULE 11: 1 + 1 = 1
RULE 12: 1X = X1 = X
Following or preceding a set of paths by a path of zero length does not change the set.
RULE 13: 1^n = 1* = 1+ = 1
No matter how often you traverse a path of zero length, it is still a path of zero length.
RULE 14: 1+ + 1 = 1* = 1
The null set of paths is denoted by the numeral 0. It obeys the following rules:
RULE 15: X + 0 = 0 + X = X
RULE 16: 0X = X0 = 0
If you block the paths of a graph fore or aft by a graph that has no paths, there won't be any paths.
REDUCTION PROCEDURE:
o The procedure removes nodes one at a time: serial links are combined by multiplying their path expressions, parallel links are combined by adding their path expressions, and a self-loop whose expression is Z is replaced by Z*; the cross-term step (steps 4 and 5) then removes an intermediate node by replacing every inlink/outlink pair through it with a single link whose expression is the product of the two.
o A self-loop at the node being removed can be handled in either of two ways:
o In the first way, we remove the self-loop and then multiply all outgoing links by Z*.
o In the second way, we split the node into two equivalent nodes, call them A and A', and put in a link between them whose path expression is Z*. Then we remove node A' using steps 4 and 5 to yield outgoing links whose path expressions are Z*X and Z*Y.
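As an illustration only, the following Python sketch applies the node-removal idea to a graph stored as nested dictionaries. The helper names and the tiny example graph are assumptions of the sketch, not Figure 5.5, and self-loops created at the entry or exit node are not handled here.

    def star(expr):
        return "" if expr is None else "(" + expr + ")*"

    def concat(*parts):
        pieces = []
        for p in parts:
            if p:                                        # "" plays the role of the path "1"
                pieces.append("(" + p + ")" if "+" in p else p)
        return "".join(pieces)

    def add(a, b):
        if a is None: return b
        if b is None: return a
        return a + "+" + b

    def reduce_graph(graph, entry, exit_):
        """graph: {node: {successor: path_expression}} with parallel links already added."""
        for k in [n for n in graph if n not in (entry, exit_)]:
            loop = star(graph[k].pop(k, None))           # replace a self-loop Z on k by Z*
            outs = graph.pop(k)                          # remove node k ...
            for i in list(graph):
                if k in graph[i]:
                    in_expr = graph[i].pop(k)
                    for j, out_expr in outs.items():     # ... via the cross-term step
                        graph[i][j] = add(graph[i].get(j),
                                          concat(in_expr, loop, out_expr))
        return graph[entry][exit_]

    # Illustrative graph (not Figure 5.5): 1 --a--> 2 --c--> 3, with a self-loop b on node 2.
    g = {1: {2: "a"}, 2: {2: "b", 3: "c"}, 3: {}}
    print(reduce_graph(g, 1, 3))                         # a(b)*c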
A REDUCTION PROCEDURE - EXAMPLE:
o Let us see this by applying the algorithm to the following graph, where we remove several nodes in order.
o Removing the loop and then node 6 results in the following expression:
o a(bgjf)*b(c+gkh)d((ilhd)*imf(bjgf)*b(c+gkh)d)*(ilhd)*e
o You can practice by applying the algorithm on the following flowgraphs and generate their
respective path expressions:
Figure 5.6: Some graphs and their path expressions.
APPLICATIONS:
o The purpose of the node removal algorithm is to present one very generalized concept: the path expression, and a way of getting it.
o Every application follows this common pattern:
1. Convert the program or graph into a path expression.
2. Identify a property of interest and derive an appropriate set of "arithmetic" rules that
characterizes the property.
3. Replace the link names by the link weights for the property of interest. The path
expression has now been converted to an expression in some algebra, such as ordinary
algebra, regular expressions, or boolean algebra. This algebraic expression summarizes
the property of interest over the set of all paths.
4. Simplify or evaluate the resulting "algebraic" expression to answer the question you
asked.
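A minimal sketch of this four-step pattern, assuming a hand-built tuple encoding of the path expression: the same evaluator serves different applications simply by swapping the "plus", "times" and "loop" rules. Here the maximum-path-count arithmetic is plugged in for the small expression a(b+c)d.

    import functools

    def evaluate(expr, weights, plus, times, loop):
        """expr is ('link', name), ('+', e1, e2, ...), ('.', e1, e2, ...) or ('loop', e, n)."""
        kind = expr[0]
        if kind == "link":
            return weights[expr[1]]
        if kind in ("+", "."):
            vals = [evaluate(e, weights, plus, times, loop) for e in expr[1:]]
            return functools.reduce(plus if kind == "+" else times, vals)
        if kind == "loop":
            return loop(evaluate(expr[1], weights, plus, times, loop), expr[2])
        raise ValueError(kind)

    # "Arithmetic" for the maximum-path-count property: parallel links add,
    # serial links multiply, a loop taken 0..n times sums the powers of its body.
    plus = lambda a, b: a + b
    times = lambda a, b: a * b
    loop = lambda w, n: sum(w ** i for i in range(n + 1))

    # Step 1: the path expression a(b+c)d as a hand-built tree.
    expr = (".", ("link", "a"),
                 ("+", ("link", "b"), ("link", "c")),
                 ("link", "d"))

    # Steps 3-4: every link gets weight 1; the result is the number of paths.
    print(evaluate(expr, {"a": 1, "b": 1, "c": 1, "d": 1}, plus, times, loop))   # 2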
HOW MANY PATHS IN A FLOWGRAPH ?
o The question is not simple. Here are some ways you could ask it:
1. What is the maximum number of different paths possible?
2. What is the fewest number of paths possible?
3. How many different paths are there really?
4. What is the average number of paths?
o Determining the actual number of different paths is an inherently difficult problem because there
could be unachievable paths resulting from correlated and dependent predicates.
o If we know both of these numbers (maximum and minimum number of possible paths) we have a
good idea of how complete our testing is.
o Asking for "the average number of paths" is meaningless.
MAXIMUM PATH COUNT ARITHMETIC:
o Label each link with a link weight that corresponds to the number of paths that link represents.
o Also mark each loop with the maximum number of times that loop can be taken. If the answer is
infinite, you might as well stop the analysis because it is clear that the maximum number of paths
will be infinite.
o There are three cases of interest: parallel links, serial links, and loops.
o This arithmetic is an ordinary algebra. The weight is the number of paths in each set.
o EXAMPLE:
The following is a reasonably well-structured program.
Each link represents a single link and consequently is given a weight of "1" to start. Let's say the outer loop will be taken exactly four times and the inner loop can be taken zero to three times. Its path expression, with a little work, is:
A: The flow graph should be annotated by replacing each link name with the maximum number of paths through that link (1), and by noting the number of times each loop can be taken.
B: Combine the first pair of parallel links outside the loop and also the pair in the outer loop.
C: Multiply the things out and remove nodes to clear the clutter.
For the inner loop:
D: Calculate the total weight of the inner loop, which can be taken a minimum of 0 and a maximum of 3 times. The inner loop therefore evaluates as follows:
1^0 + 1^1 + 1^2 + 1^3 = 1 + 1 + 1 + 1 = 4
Alternatively, you could have substituted a "1" for each link in the path expression and then simplified, as
follows:
a(b+c)d{e(fi)*fgj(m+l)k}*e(fi)*fgh
= 1(1 + 1)1(1(1 x 1)^3 1 x 1 x 1(1 + 1)1)^4 1(1 x 1)^3 1 x 1 x 1
= 2(1^3 x (2))^4 x (1^3)
= 2(4 x 2)^4 x 4
= 2 x 8^4 x 4 = 32,768
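The same arithmetic can be checked mechanically. The sketch below assumes, as above, that every link has weight 1, the inner loop runs zero to three times and the outer loop exactly four times, and reproduces the 32,768 figure.

    def bounded_loop(w, n):
        """Weight of a loop whose body weight is w, taken 0..n times."""
        return sum(w ** i for i in range(n + 1))

    inner = bounded_loop(1 * 1, 3)                        # (fi) taken 0..3 times -> 4
    outer_body = 1 * inner * 1 * 1 * 1 * (1 + 1) * 1      # e(fi)fgj(m+l)k -> 8
    outer = outer_body ** 4                               # outer loop taken exactly 4 times
    total = (1 + 1) * outer * inner                       # a(b+c)d{...}^4 e(fi)fgh
    print(total)                                          # 32768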
STRUCTURED FLOWGRAPH:
Structured code can be defined in several different ways that do not involve ad-hoc rules such as not using
GOTOs.
A structured flowgraph is one that can be reduced to a single link by successive application of the
transformations of Figure 5.7.
Figure 5.7: Structured Flowgraph Transformations.
The node-by-node reduction procedure can also be used as a test for structured code.
Flow graphs that DO NOT contain one or more of the graphs shown below (Figure 5.8) as subgraphs are
structured.
0. Jumping into loops
1. Jumping out of loops
2. Branching into decisions
3. Branching out of decisions
LOWER PATH COUNT ARITHMETIC:
The values of the weights are the number of members in a set of paths.
EXAMPLE:
Applying this arithmetic to the earlier example gives identical steps up to step 3 (C), as below:
From step 4 onward, it differs from the previous example:
If you observe the original graph, you will find that it takes at least two paths to achieve coverage and that coverage can, in fact, be achieved with two paths.
If you have fewer paths in your test plan than this minimum, you probably haven't achieved coverage. It's another check.
CALCULATING THE PROBABILITY:
Path selection should be biased toward the low-probability rather than the high-probability paths.
This raises an interesting question:
What is the probability of being at a certain point in a routine?
This question can be answered under suitable assumptions, primarily that all probabilities involved are
independent, which is to say that all decisions are independent and uncorrelated.
We use the same algorithm as before : node-by-node removal of uninteresting nodes.
Weights, Notations and Arithmetic:
Probabilities can come into the act only at decisions (including decisions associated with
loops).
Annotate each outlink with a weight equal to the probability of going in that direction.
Evidently, the sum of the outlink probabilities must equal 1
For a simple loop, if the loop will be taken a mean of N times, the looping probability is
N/(N + 1) and the probability of not looping is 1/(N + 1).
A link that is not part of a decision node has a probability of 1.
The arithmetic rules are those of ordinary arithmetic.
In this table, in the case of a loop, PA is the probability of the link leaving the loop and PL is the probability of looping.
The rules are those of ordinary probability theory.
1. If you can do something either from column A with a probability of PA or from column B with a probability of PB, then the probability that you do either is PA + PB.
2. For the series case, if you must do both things, and their probabilities are
independent (as assumed), then the probability that you do both is the product
of their probabilities.
For example, a loop node has a looping probability of PL and a probability of not looping of PA, which is obviously equal to 1 - PL.
Following the above rule, all we've done is replace the outgoing probability with 1 - so
why the complicated rule? After a few steps in which you've removed nodes, combined
parallel terms, removed loops and the like, you might find something like this:
EXAMPLE:
Here is a complicated bit of logic. We want to know the probability associated with
cases A, B, and C.
Let us do this in three parts, starting with case A. Note that the sum of the probabilities
at each decision node is equal to 1. Start by throwing away anything that isn't on the
way to case A, and then apply the reduction procedure. To avoid clutter, we usually
leave out probabilities equal to 1.
CASE A:
Case B is simpler:
Case C is similar and should yield a probability of 1 - 0.125 - 0.158 = 0.717:
This checks. It's a good idea when doing this sort of thing to calculate all the
probabilities and to verify that the sum of the routine's exit probabilities does equal 1.
If it doesn't, then you've made a calculation error or, more likely, you've left out some branching probability.
How about path probabilities? That's easy. Just trace the path of interest and multiply
the probabilities as you go.
Alternatively, write down the path name and do the indicated arithmetic operation.
Say that a path consisted of links a, b, c, d, e, and the associated probabilities were 0.2, 0.5, 1, 0.01, and 1 respectively. Path abcbcbcdeabddea would have a probability of 5 x 10^-10.
Long paths are usually improbable.
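A small sketch of this path-probability calculation, using the link probabilities quoted above (the dictionary encoding is mine):

    import math

    # Link probabilities as given in the text: a = 0.2, b = 0.5, c = 1, d = 0.01, e = 1.
    probability = {"a": 0.2, "b": 0.5, "c": 1.0, "d": 0.01, "e": 1.0}

    def path_probability(path):
        """Multiply the probabilities of the links along the path name."""
        return math.prod(probability[link] for link in path)

    print(path_probability("abcbcbcdeabddea"))   # about 5e-10, i.e. 5 x 10^-10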
MEAN PROCESSING TIME OF A ROUTINE:
1. Combine the parallel links of the outer loop. The result is just the mean of the processing times for the links because there aren't any other links leaving the first node. Also combine the pair of links at the beginning of the flowgraph.
2. Combine as many serial links as you can.
3. Use the cross-term step to eliminate a node and to create the inner self-loop.
4. Finally, you can get the mean processing time by using the arithmetic rules as follows:
PUSH/POP, GET/RETURN:
This model can be used to answer several different questions that can turn up in
debugging. It can also help decide which test cases to design.
The question is:
Given a pair of complementary operations such as PUSH (the stack) and POP (the stack), considering the set
of all possible paths through the routine, what is the net effect of the routine? PUSH or POP? How many
times? Under what conditions?
Here are some other examples of complementary operations to which this model applies:
GET/RETURN a resource block.
OPEN/CLOSE a file.
START/STOP a device or process.
The numeral 1 is used to indicate that nothing of interest (neither PUSH nor POP)
occurs on a given link.
"H" denotes PUSH and "P" denotes POP. The operations are commutative, associative,
and distributive.
These expressions state that the stack will be popped only if the inner loop is not taken.
The stack will be left alone only if the inner loop is iterated once, but it may also be
pushed.
For all other values of the inner loop, the stack will only be pushed.
EXAMPLE 2 (GET / RETURN):
Exactly the same arithmetic tables used for previous example are used for GET /
RETURN a buffer block or resource, or, in fact, for any pair of complementary
operations in which the total number of operations in either direction is cumulative.
The arithmetic tables for GET/RETURN are:
"G" denotes GET and "R" denotes RETURN.
G(G + R)G(GR)*GGR*R
= G(G + R)G^3R*R
= (G + R)G^3R*
= (G^4 + G^2)R*
This expression specifies the conditions under which the resources will be balanced on
leaving the routine.
If the upper branch is taken at the first decision, the second loop must be taken four
times.
If the lower branch is taken at the first decision, the second loop must be taken twice.
For any other values, the routine will not balance. Therefore, the first loop does not have
to be instrumented to verify this behavior because its impact should be nil.
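As a rough illustration of the GET/RETURN bookkeeping, the sketch below counts +1 for every G and -1 for every R along a concrete operation sequence; the sequences themselves are read off the result (G^4 + G^2)R* and are assumptions of the sketch, not the original flowgraph.

    def net_effect(sequence):
        """+1 for every GET (G), -1 for every RETURN (R); 0 means the routine balances."""
        return sum(+1 if op == "G" else -1 for op in sequence if op in "GR")

    # The two balancing cases read off (G^4 + G^2)R*:
    print(net_effect("GGGG" + "R" * 4))   # 0: upper branch, second loop taken four times
    print(net_effect("GG" + "R" * 2))     # 0: lower branch, second loop taken twice
    print(net_effect("GG" + "R" * 3))     # -1: any other count leaves the resources unbalanced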
LIMITATIONS AND SOLUTIONS:
The main limitation to these applications is the problem of unachievable paths.
The node-by-node reduction procedure, and most graph-theory-based algorithms, work well when all paths are possible, but may provide misleading results when some paths are unachievable.
The approach to handling unachievable paths (for any application) is to partition the graph into subgraphs
so that all paths in each of the subgraphs are achievable.
The resulting subgraphs may overlap, because one path may be common to several different subgraphs.
Each predicate's truth-functional value potentially splits the graph into two subgraphs. For n predicates, there could be as many as 2^n subgraphs.
THE PROBLEM:
o The generic flow-anomaly detection problem (note: not just data-flow anomalies, but any flow anomaly) is that of looking for a specific sequence of operations considering all possible paths through a routine.
o Let the operations be SET and RESET, denoted by s and r respectively, and we want to know if there is a SET followed immediately by a SET or a RESET followed immediately by a RESET (an ss or an rr sequence).
o Some more application examples:
1.A file can be opened (o), closed (c), read (r), or written (w). If the file is read or written
to after it's been closed, the sequence is nonsensical. Therefore, cr and cw are
anomalous. Similarly, if the file is read before it's been written, just after opening, we
may have a bug. Therefore, or is also anomalous. Furthermore, oo and cc, though not
actual bugs, are a waste of time and therefore should also be examined.
2. A tape transport can do a rewind (d), fast-forward (f), read (r), write (w), stop (p), and
skip (k). There are rules concerning the use of the transport; for example, you cannot go
from rewind to fast-forward without an intervening stop or from rewind or fast-forward
to read or write without an intervening stop. The following sequences are anomalous: df,
dr, dw, fd, and fr. Does the flowgraph lead to anomalous sequences on any path? If so,
what sequences and under what circumstances?
3. The data-flow anomalies discussed in Unit 4 require us to detect the dd, dk, kk, and ku sequences. Are there paths with anomalous data flows?
THE METHOD:
o Annotate each link in the graph with the appropriate operator or the null operator 1.
o Simplify things to the extent possible, using the fact that a + a = a and 1^2 = 1.
o You now have a regular expression that denotes all the possible sequences of operators in that
graph. You can now examine that regular expression for the sequences of interest.
o EXAMPLE: Let A, B, C be nonempty sets of character sequences whose smallest string is at least one character long. Let T be a two-character string of characters. Then if T is a substring of (i.e., if T appears within) AB^nC, then T will appear in AB^2C. (HUANG's Theorem)
o As an example, let
A = pp
B = srr
C = rp
T = ss
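A short sketch of the check that HUANG's theorem justifies: it is enough to inspect the single string AB^2C. The values of A, B, C and T are the ones given above; the extra rr check is my own addition.

    A, B, C, T = "pp", "srr", "rp", "ss"

    # By the theorem it is enough to look at A B^2 C, i.e. the single string A + B + B + C.
    test_string = A + B * 2 + C                       # "ppsrrsrrrp"
    print(T, "anomalous?", T in test_string)          # False: no ss sequence on these paths

    # The same check finds a genuine anomaly, e.g. T = "rr":
    print("rr", "anomalous?", "rr" in test_string)    # True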
INTRODUCTION:
o The functional requirements of many programs can be specified by decision tables, which
provide a useful basis for program and test design.
o Consistency and completeness can be analyzed by using boolean algebra, which can also be used
as a basis for test design. Boolean algebra is trivialized by using Karnaugh-Veitch charts.
o "Logic" is one of the most often used words in programmers' vocabularies but one of their least
used techniques.
o Boolean algebra is to logic as arithmetic is to mathematics. Without it, the tester or programmer
is cut off from many test and design techniques and tools that incorporate those techniques.
o Logic has been, for several decades, the primary tool of hardware logic designers.
o Many test methods developed for hardware logic can be adapted to software logic testing.
Because hardware testing automation is 10 to 15 years ahead of software testing automation, hardware testing methods and their associated theory are a fertile ground for software testing methods.
o As programming and test techniques have improved, the bugs have shifted closer to the process
front end, to requirements and their specifications. These bugs range from 8% to 30% of the total
and because they're first-in and last-out, they're the costliest of all.
o The trouble with specifications is that they're hard to express.
o Boolean algebra (also known as the sentential calculus) is the most basic of all logic systems.
o Higher-order logic systems are needed and used for formal specifications.
o Much of logical analysis can be and is embedded in tools. But these tools incorporate methods to
simplify, transform, and check specifications, and the methods are to a large extent based on
boolean algebra.
o KNOWLEDGE BASED SYSTEM:
The knowledge-based system (also expert system, or "artificial intelligence" system)
has become the programming construct of choice for many applications that were once
considered very difficult.
Knowledge-based systems incorporate knowledge from a knowledge domain such as
medicine, law, or civil engineering into a database. The data can then be queried and
interacted with to provide solutions to problems in that domain.
One implementation of knowledge-based systems is to incorporate the expert's
knowledge into a set of rules. The user can then provide data and ask questions based on
that data.
The user's data is processed through the rule base to yield conclusions (tentative or
definite) and requests for more data. The processing is done by a program called the
inference engine.
Understanding knowledge-based systems and their validation problems requires an
understanding of formal logic.
o Decision tables are extensively used in business data processing; Decision-table preprocessors as
extensions to COBOL are in common use; boolean algebra is embedded in the implementation of
these processors.
o Although programmed tools are nice to have, most of the benefits of boolean algebra can be
reaped by wholly manual means if you have the right conceptual tool: the Karnaugh-Veitch
diagram is that conceptual tool.
DECISION TABLES:
Figure 6.1 is a limited-entry decision table. It consists of four areas called the condition stub, the condition entry, the action stub, and the action entry.
Each column of the table is a rule that specifies the conditions under which the actions named in the action
stub will take place.
The condition stub is a list of names of conditions.
A rule specifies whether a condition should or should not be met for the rule to be satisfied. "YES" means
that the condition must be met, "NO" means that the condition must not be met, and "I" means that the
condition plays no part in the rule, or it is immaterial to that rule.
The action stub names the actions the routine will take or initiate if the rule is satisfied. If the action entry
is "YES", the action will take place; if "NO", the action will not take place.
The table in Figure 6.1 can be translated as follows:
Action 1 will take place if conditions 1 and 2 are met and if conditions 3 and 4 are not met (rule 1) or if
conditions 1, 3, and 4 are met (rule 2).
"Condition" is another word for predicate.
Decision-table uses "condition" and "satisfied" or "met". Let us use "predicate" and TRUE / FALSE.
Now the above translations become:
1. Action 1 will be taken if predicates 1 and 2 are true and if predicates 3 and 4 are false (rule 1), or
if predicates 1, 3, and 4 are true (rule 2).
2. Action 2 will be taken if the predicates are all false, (rule 3).
3. Action 3 will take place if predicate 1 is false and predicate 4 is true (rule 4).
In addition to the stated rules, we also need a default rule that specifies the default action to be taken when all other rules fail. The default rules for the table in Figure 6.1 are shown in Figure 6.3.
Figure 6.3: The default rules of the table in Figure 6.1.
DECISION-TABLE PROCESSORS:
o Decision tables can be automatically translated into code and, as such, are a higher-order language.
o The rules are tried in order: if rule 1 is satisfied, the corresponding action takes place.
o Otherwise, rule 2 is tried. This process continues until either a satisfied rule results in an action or no rule is satisfied and the default action is taken.
o Decision tables have become a useful tool in the programmer's kit, especially in business data processing.
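A minimal decision-table interpreter, sketched in Python under the assumption that rules are limited-entry (Y/N/I) and tried in order; the rule table below encodes the translation of Figure 6.1 given later in this unit, with the immaterial (I) entries filled in as my assumption.

    RULES = [   # my own encoding; "I" marks an immaterial entry
        ({"p1": "Y", "p2": "Y", "p3": "N", "p4": "N"}, "action 1"),   # rule 1
        ({"p1": "Y", "p2": "I", "p3": "Y", "p4": "Y"}, "action 1"),   # rule 2
        ({"p1": "N", "p2": "N", "p3": "N", "p4": "N"}, "action 2"),   # rule 3
        ({"p1": "N", "p2": "I", "p3": "I", "p4": "Y"}, "action 3"),   # rule 4
    ]

    def decide(facts, rules, default="default action"):
        """Try the rules in order; the first satisfied rule selects the action."""
        for entries, action in rules:
            if all(want == "I" or facts[name] == (want == "Y")
                   for name, want in entries.items()):
                return action
        return default

    print(decide({"p1": True, "p2": True, "p3": False, "p4": False}, RULES))   # action 1
    print(decide({"p1": True, "p2": False, "p3": False, "p4": False}, RULES))  # default action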
DECISION-TABLES AS BASIS FOR TEST CASE DESIGN:
0. The specification is given as a decision table or can be easily converted into one.
1. The order in which the predicates are evaluated does not affect interpretation of the rules or the
resulting action - i.e., an arbitrary permutation of the predicate order will not, or should not,
affect which action takes place.
2. The order in which the rules are evaluated does not affect the resulting action - i.e., an arbitrary
permutation of rules will not, or should not, affect which action takes place.
3. Once a rule is satisfied and an action selected, no other rule need be examined.
4. If several actions can result from satisfying a rule, the order in which the actions are executed
doesn't matter
DECISION-TABLES AND STRUCTURE:
o Decision tables can also be used to examine a program's structure.
o Figure 6.4 shows a program segment that consists of a decision tree.
o These decisions, in various combinations, can lead to actions 1, 2, or 3.
Figure 6.4 : A Sample Program
o If the decision appears on a path, put in a YES or NO as appropriate. If the decision does not appear on the path, put in an I. Rule 1 does not contain decision C; therefore, its entries are YES, YES, I, YES.
o The corresponding decision table is shown in Table 6.1
o Similarly, if we expand the immaterial cases for Table 6.1 above, we get Table 6.2 as below:
            R1      RULE 2      R3      RULE 4      R5      R6
CONDITION A YY      YYYY        YY      NNNN        NN      NN
CONDITION B YY      NNNN        YY      YYNN        NY      YN
CONDITION C YN      NNYY        YN      YYYY        NN      NN
CONDITION D YY      YNNY        NN      NYYN        YY      NN
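The expansion of immaterial cases can be sketched mechanically. The rule encoding below is an assumption; rule 2 is written as A = Y, B = N with C and D immaterial, which is how it can be read back from the RULE 2 block of Table 6.2.

    from itertools import product

    def expand(rule):
        """Replace each immaterial ('I') entry by both Y and N and list the explicit cases."""
        names = list(rule)
        choices = [("Y", "N") if rule[n] == "I" else (rule[n],) for n in names]
        return [dict(zip(names, combo)) for combo in product(*choices)]

    # Rule 2 as read back from Table 6.2: A = Y, B = N, C and D immaterial.
    rule_2 = {"A": "Y", "B": "N", "C": "I", "D": "I"}
    for case in expand(rule_2):
        print(case)        # four explicit cases, matching the RULE 2 block of Table 6.2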
GENERAL:
o Logic-based testing is structural testing when it's applied to structure (e.g., control flowgraph of
an implementation); it's functional testing when it's applied to a specification.
o In logic-based testing we focus on the truth values of control flow predicates.
o A predicate is implemented as a process whose outcome is a truth-functional value.
o For our purpose, logic-based testing is restricted to binary predicates.
o We start by generating path expressions by path tracing as in Unit V, but this time, our purpose is to convert the path expressions into boolean algebra, using the predicates' truth values (e.g., A and its complement, written A with an overbar) as weights.
BOOLEAN ALGEBRA:
o STEPS:
1. Label each decision with an uppercase letter that represents the truth value of the predicate. The YES or TRUE branch is labeled with a letter (say A) and the NO or FALSE branch with the same letter overscored (say A with an overbar).
2. The truth value of a path is the product of the individual labels. Concatenation or
products mean "AND". For example, the straight-through path of Figure 6.5, which goes
via nodes 3, 6, 7, 8, 10, 11, 12, and 2, has a truth value of ABC. The path via nodes 3, 6,
7, 9 and 2 has a value of .
3. If two or more paths merge at a node, the fact is expressed by use of a plus sign (+)
which means "OR".
o Using this convention, the truth-functional values for several of the nodes can be expressed in
terms of segments from previous nodes. Use the node name to identify the point.
o There are only two numbers in boolean algebra: zero (0) and one (1). One means "always true"
and zero means "always false".
o RULES OF BOOLEAN ALGEBRA:
Boolean algebra has three operators: X (AND), + (OR) and the overbar (NOT).
X : meaning AND. Also called multiplication. A statement such as AB (A X B) means "A and B are both true". This symbol is usually left out, as in ordinary algebra.
+ : meaning OR. "A + B" means "either A is true or B is true or both".
The overbar : meaning NOT. Also negation or complementation. This is read as either "not A" or "A bar". The entire expression under the bar is negated.
The following are the laws of boolean algebra:
In all of the above, a letter can represent a single sentence or an entire boolean algebra
expression. Individual letters in a boolean algebra expression are called Literals (e.g. A,B)
The product of several literals is called a product term (e.g., ABC, DE).
An arbitrary boolean expression that has been multiplied out so that it consists of the sum of products
(e.g., ABC + DEF + GH) is said to be in sum-of-products form.
The result of simplification (using the rules above) is again in sum-of-products form, and each product term in such a simplified version is called a prime implicant. For example, ABC + AB + DEF reduces by rule 20 to AB + DEF; that is, AB and DEF are prime implicants.
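A small truth-table check of the simplification just quoted (the helper and variable names are mine): ABC + AB + DEF and AB + DEF agree on every combination of truth values.

    from itertools import product

    def equivalent(f, g, names):
        """Compare two boolean functions over every row of the truth table."""
        return all(f(**dict(zip(names, row))) == g(**dict(zip(names, row)))
                   for row in product([False, True], repeat=len(names)))

    original   = lambda A, B, C, D, E, F: (A and B and C) or (A and B) or (D and E and F)
    simplified = lambda A, B, C, D, E, F: (A and B) or (D and E and F)

    print(equivalent(original, simplified, "ABCDEF"))   # True: AB + DEF covers ABC + AB + DEF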
The path expressions of Figure 6.5 can now be simplified by applying these rules.
Similarly,
The deviation from the specification is now clear. The functions should have been:
Loops complicate things because we may have to solve a boolean equation to determine what predicate-
value combinations lead to where.
KV CHARTS:
INTRODUCTION:
o If you had to deal with expressions in four, five, or six variables, you could get bogged down in
the algebra and make as many errors in designing test cases as there are bugs in the routine you're
testing.
o Karnaugh-Veitch chart reduces boolean algebraic manipulations to graphical trivia.
o Beyond six variables these diagrams get cumbersome and may not be effective.
SINGLE VARIABLE:
o Figure 6.6 shows all the boolean functions of a single variable and their equivalent representation
as a KV chart.
TWO VARIABLES:
o Each box corresponds to the combination of values of the variables for the row and column of that box.
o A pair may be adjacent either horizontally or vertically but not diagonally.
o Any variable that changes in either the horizontal or vertical direction does not appear in the
expression.
o In the fifth chart, the B variable changes from 0 to 1 going down the column, and because the A
variable's value for the column is 1, the chart is equivalent to a simple A.
o Figure 6.8 shows the remaining eight functions of two variables.
THREE VARIABLES:
o KV charts for three variables are shown below.
o As before, each box represents an elementary term of three variables with a bar appearing or not
appearing according to whether the row-column heading for that box is 0 or 1.
o A three-variable chart can have groupings of 1, 2, 4, and 8 boxes.
o A few examples will illustrate the principles:
Figure 6.8 : KV Charts for Functions of Three Variables.
o You'll notice that there are several ways to circle the boxes into maximum-sized covering groups.
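As a rough illustration (not from the text), the sketch below prints a 2 x 4 KV chart for a three-variable function, using the usual Gray-code column order 00, 01, 11, 10; the example function is arbitrary.

    def kv_chart(f):
        cols = [(0, 0), (0, 1), (1, 1), (1, 0)]          # BC values in Gray-code order
        print("A\\BC  " + "  ".join(f"{b}{c}" for b, c in cols))
        for a in (0, 1):
            row = "   ".join(str(int(f(a, b, c))) for b, c in cols)
            print(f"  {a}    " + row)

    # Illustrative function: ABC + (not A)(not B)(not C)
    f = lambda a, b, c: (a and b and c) or (not a and not b and not c)
    kv_chart(f)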