Principles of Testing
Almost everything we use today has an element of software in it. In the early days of evolution
of software, the users of software formed a small number compared to the total strength of an
organization. Today, in a typical workplace (and at home), just about everyone uses a computer
and software. Administrative staff use office productivity software (replacing the typewriters of
yesteryears). Accountants and finance people use spreadsheets and other financial packages to
help them do much faster what they used to do with calculators (or even manually). Everyone in
an organization and at home uses e-mail and the Internet for entertainment, education,
communication, interaction, and for getting any information they want. In addition, of course, the
“technical” people use programming languages, modeling tools, simulation tools, and database
management systems for tasks that they were mostly executing manually a few years earlier.
The above examples are just some instances where the use of software is “obvious” to the users.
However, software is more ubiquitous and pervasive than seen in these examples. Software
today is as common as electricity was in the early part of the last century. Almost every gadget
and device we have at home and at work is embedded with a significant amount of software.
Mobile phones, televisions, wrist watches, refrigerators, and other kitchen equipment all have
embedded software.
Another interesting dimension is that software is being used now in mission critical situations
where failure is simply unacceptable. There is no way one can suggest a solution of “please
shut down and reboot the system” for software that sits in someone's pacemaker! Almost every
service we take for granted runs on software. Banks, air traffic control systems, and cars are all
powered by software that simply cannot afford to fail. These systems have to run reliably, predictably, all
the time, every time.
This pervasiveness, ubiquity, and mission criticality places certain demands on the way the
software is developed and deployed.
First, an organization that develops any form of software product or service must put in every
effort to drastically reduce and, preferably, eliminate any defects in each delivered product or
service. Users are increasingly intolerant of the hit-and-miss approach that characterized
software products. From the point of view of a software development organization also, it may
not be economically viable to deliver products with defects. For instance, imagine finding a
defect in the software embedded in a television after it is shipped to thousands of customers.
How is it possible to send “patches” to these customers and ask them to “install the patch?”
Thus, the only solution is to do it right the first time, before sending a product to the customer.
Second, defects are unlikely to remain latent for long. When the number of users was limited and
the way they used the product was also predictable (and highly restricted), it was quite possible
that there could be defects in the software product that would never get detected or uncovered for
a very long time. However, with the number of users increasing, the chances of a defect going
undetected are becoming increasingly slim. If a defect is present in the product, someone will hit
upon it sooner rather than later.
Finally, the consequence and impact of every single defect needs analysis, especially for mission
critical applications. It may be acceptable to say that 99.9% of the defects in a product are fixed for a
release and only 0.1% remain outstanding. That appears to be an excellent statistic to justify going ahead
and releasing the product. However, if we map that remaining 0.1% onto mission-critical applications,
the consequences are anything but acceptable.
This book focuses on software testing. Traditionally, testing is defined as being narrowly
confined to testing the program code. We would like to consider testing in a broader context as
encompassing all activities that address the implications of producing quality products discussed
above. Producing a software product entails several phases (such as requirements gathering,
design, and coding) in addition to testing (in the traditional sense of the term). While testing is
definitely one of the factors (and one of the phases) that contributes to a high quality product, it
alone cannot add quality to a product. Proper interaction of testing with other phases is essential
for a good product. These interactions and their impact are captured in the grid in Figure 1.1.
If the quality of the other phases is low and the effectiveness of testing is low (lower left-hand
corner of the grid), the situation is not sustainable. The product will most likely go out of
business very soon. Trying to compensate for poor quality in other phases with increased
emphasis on the testing phase (upper left-hand corner of the grid) is likely to put high pressure
on everyone as the defects get detected closer to the time the product is about to be released.
Similarly, blindly believing other phases to be of high quality and having a poor testing phase
(lower right-hand side of the grid) will lead to the risky situation of unforeseen defects being
detected at the last minute. The ideal state of course is when high quality is present in all the
phases including testing (upper right-hand corner of the grid). In this state, the customers feel the
benefits of quality and this promotes better teamwork and success in an organization.
1.2 ABOUT THIS CHAPTER
In this chapter, we discuss some of the basic principles of testing. We believe that these
principles are fundamental to the objective of testing, namely, to provide quality products to
customers. These principles also form the motivation for the rest of the book. Thus this chapter
acts as an anchor for the rest of the book.
1. The goal of testing is to find defects before customers find them out.
2. Exhaustive testing is not possible; program testing can only show the presence of defects,
never their absence.
3. Testing applies all through the software life cycle and is not an end-of-cycle activity.
4. Understand the reason behind the test.
5. Test the tests first.
6. Tests develop immunity and have to be revised constantly.
7. Defects occur in convoys or clusters, and testing should focus on these convoys.
8. Testing encompasses defect prevention.
9. Testing is a fine balance of defect prevention and defect detection.
10. Intelligent and well-planned automation is key to realizing the benefits of testing.
11. Testing requires talented, committed people who believe in themselves and work in
teams.
We will take up each of these principles in the subsequent sections. Where appropriate, we will
illustrate the principle with a simple story from outside the arena of information technology to
drive home the point.
We would like to assign a broader meaning to the term “customer.” It does not mean just
external customers. There are also internal customers. For example, if a product is built using
different components from different groups within an organization, the users of these different
components should be considered customers, even if they are from the same organization.
Having this customer perspective enhances the quality of all the activities including testing.
We can take the internal customer concept a step further where the development team considers
the testing team as its internal customer. This way we can ensure that the product is built not
only for usage requirements but also for testing requirements. This concept improves
“testability” of the product and improves interactions between the development and testing
teams.
Sales representative / Engineer: “This car has the best possible transmission and brake, and
accelerates from 0 to 80 mph in under 20 seconds!”
Customer: “Well, that may be true, but unfortunately it accelerates (even faster) when I press
the brake pedal!”
We would like to urge the reader to retain these two perspectives—customer perspective and
perspective of quality not being an add-on in the end, but built in every activity and component
right from the beginning—throughout the book.
If our job is to give a complete car to the customer (and not ask the customers to paint the car)
and if our intent is to make sure the car works as expected, without any (major) problems, then
we should ensure that we catch and correct all the defects in the car ourselves. This is the
fundamental objective of testing. Whatever we do in testing, it behooves us to keep this objective in mind.
Consider a program that is supposed to accept a six-character code and ensure that the first
character is numeric and the rest of the characters are alphanumeric. How many combinations of
input data should we test, if our goal is to test the program exhaustively?
The first character can be filled in one of 10 ways (the digits 0-9). The second through sixth
characters can each be filled in 62 ways (digits 0-9, lowercase letters a-z, and capital letters
A-Z). This means that we have a total of 10 × 62^5, or 9,161,328,320, valid combinations of
values to test. Assuming that each combination takes 10 seconds to test, testing all these valid
combinations will take approximately 2,905 years!
Therefore, after 2,905 years, we may conclude that all valid inputs are accepted. But that is not
the end of the story—what will happen to the program when we give invalid data? Continuing
the above example, if we assume there are 10 punctuation characters, then we will have to spend
a total of 44,176 years to test all the valid and invalid combinations of input data.
All this just to accept one field and test it exhaustively. Obviously, exhaustive testing of a real
life program is never possible.
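The arithmetic above is easy to verify with a few lines of code. The following sketch (a Python illustration; the character-set sizes and the 10-second cost per test are simply the assumptions stated in the example) reproduces the figures quoted above.

```python
# Illustrative sketch: the cost of exhaustively testing the six-character code
# described above. Character-set sizes follow the example in the text.

SECONDS_PER_YEAR = 365 * 24 * 60 * 60   # roughly 31.5 million seconds
SECONDS_PER_TEST = 10                   # assumed cost of executing one test

valid = 10 * 62 ** 5                    # 1 numeric + 5 alphanumeric positions
print(f"Valid combinations: {valid:,}")                                            # 9,161,328,320
print(f"Years to test them: {valid * SECONDS_PER_TEST / SECONDS_PER_YEAR:,.0f}")   # ~2,905

# Adding 10 punctuation characters gives a 72-character alphabet for every
# position, so all valid and invalid inputs together number 72 ** 6.
total = 72 ** 6
print(f"All combinations  : {total:,}")                                            # 139,314,069,504
print(f"Years to test them: {total * SECONDS_PER_TEST / SECONDS_PER_YEAR:,.0f}")   # ~44,176
```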
All the above means that we can choose to execute only a subset of the tests. To be effective, we
should choose a subset of tests that can uncover the maximum number of errors. In Chapter 3, on
White Box Testing, and Chapter 4, on Black Box Testing, we will discuss techniques such as
equivalence partitioning, boundary value analysis, code path analysis, and so on, which help in
identifying subsets of test cases that have a higher likelihood of uncovering defects.
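As a small preview of those techniques, the sketch below shows how equivalence partitioning and boundary value analysis shrink the input space for the six-character code to a handful of representative cases. The validator and the chosen test data are hypothetical and purely illustrative.

```python
import re

def is_valid_code(code: str) -> bool:
    """Hypothetical validator: six characters, first numeric, rest alphanumeric."""
    return bool(re.fullmatch(r"[0-9][0-9A-Za-z]{5}", code))

# One representative per equivalence class, plus a few boundary lengths,
# instead of billions of exhaustive combinations.
representative_cases = {
    "3abCd9": True,     # valid: numeric first character, alphanumeric rest
    "xabcd9": False,    # invalid class: first character not numeric
    "3abc d": False,    # invalid class: non-alphanumeric character in positions 2-6
    "3abcd": False,     # boundary: one character too short
    "3abcde1": False,   # boundary: one character too long
    "": False,          # boundary: empty input
}

for code, expected in representative_cases.items():
    assert is_valid_code(code) == expected, code
print("All representative cases behave as expected")
```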
Testing can only prove the presence of defects, never their absence.
Nevertheless, regardless of which subset of test cases we choose, we can never be 100% sure that
there are no defects left. But then, to extend an old cliché, nothing is certain other than death and
taxes; yet we live and do other things by judiciously managing the uncertainties.
Defects in a product can come from any phase. There could have been errors while gathering
initial requirements. If a wrong or incomplete requirement forms the basis for the design and
development of a product, then that functionality can never be realized correctly in the eventual
product. Similarly, when a product design—which forms the basis for the product development
(that is, coding)—is faulty, the code that realizes the faulty design will also not meet the
requirements. Thus, it is essential that every phase of software development (requirements,
design, coding, and so on) catches and corrects defects in that phase itself, without letting the
defects seep into the next stage.
Let us look at the cost implications of letting defects seep through. If, during requirements
capture, some requirements are erroneously captured and the error is not detected until the
product is delivered to the customer, the organization incurs extra expenses for reworking every
phase that was built on that error: correcting the requirements and then redoing the design,
coding, and testing, before re-releasing the product.
In Figure 1.2, the defects originating in the requirements phase are shown in gray (the colored
version of the figure is available in the illustrations). As you can see, these gray boxes are carried
forward through three of the subsequent stages—design, coding, and testing.
Figure 1.2 How defects from early phases add to the costs.
When this erroneous product reaches the customer after the testing phase, the customer may
incur a potential downtime that can result in loss of productivity or business. This in turn would
reflect as a loss of goodwill to the software product organization. On top of this loss of goodwill,
the software product organization would have to redo all the steps listed above, in order to rectify
the problem.
The cost of building a product and the number of defects in it increase steeply with the number
of defects allowed to seep into the later phases.
Similarly, when a defect is encountered during the design phase (though the requirements were
captured correctly, depicted by yellow), the costs of all of the subsequent phases (coding, testing,
and so on) have to be incurred multiple times. However, presumably, the costs would be lower
than in the first case, where even the requirements were not captured properly. This is because
the design errors (represented by yellow boxes) are carried forward only to the coding and
testing phases. Similarly, a defect in the coding phase is carried forward to the testing phase
(green boxes). Again, as fewer phases are affected by this defect (compared to requirements
defects or design defects), we can expect that the cost of defects in coding should be less than the
earlier defects. As can be inferred from the above discussion, the cost of a defect is compounded
depending on the delay in detecting the defect.
Hence, the smaller the lag between defect injection (when the defect was introduced) and defect
detection (when the defect was encountered and corrected), the lower the unnecessary costs. Thus,
it becomes essential to catch defects as early as possible. Industry data reaffirm these findings.
While there is no consensus about the exact costs incurred due to delay in defect detection, a
defect introduced during the requirements phase that makes it to the final release may cost as
much as a thousand times what it would have cost to detect and correct it during requirements
gathering itself.
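This compounding effect can be made concrete with a back-of-the-envelope calculation. The cost multipliers in the sketch below are illustrative assumptions only (the text commits to nothing more precise than the thousand-fold figure), not industry benchmarks.

```python
# Illustrative sketch: relative cost of fixing one requirements defect,
# depending on the phase in which it is finally detected. The multipliers
# are assumptions chosen only to show the compounding trend.
relative_cost = {
    "requirements": 1,      # caught in the phase where it was injected
    "design": 5,
    "coding": 10,
    "testing": 50,
    "post-release": 1000,   # the "thousand times" figure cited in the text
}

base_cost = 100  # assumed cost (in any currency unit) of an immediate fix
for phase, factor in relative_cost.items():
    print(f"Detected in {phase:<12}: {base_cost * factor:>8,}")
```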
A saint sat meditating. A cat that was prowling around was disturbing his concentration. Hence
he asked his disciples to tie the cat to a pillar while he meditated. This sequence of events
became a daily routine. The tradition continued over the years with the saint's descendants and
the cat's descendants. One day, there were no cats in the hermitage. The disciples panicked
and searched for a cat, saying, “We need a cat. Only when we get a cat, can we tie it to a pillar
and only after that can the saint start meditating!”
Testing requires asking about and understanding what you are trying to test, knowing what the
correct outcome is, and why you are performing any test. If we carry out tests without
understanding why we are running them, we will end up running inappropriate tests that do
not address what the product should do. In fact, it may even turn out that the product is modified
to make sure the tests are run successfully, even if the product does not meet the intended
customer needs!
Understanding the rationale of why we are testing certain functionality leads to different types of
tests, which we will cover in Part II of the book. We do white box testing to check the various
paths in the code and make sure they are exercised correctly. Knowing which code paths should
be exercised for a given test enables making necessary changes to ensure that appropriate paths
are covered. Knowing the external functionality of what the product should do, we design black
box tests. Integration tests are used to make sure that the different components fit together.
Internationalization testing is used to ensure that the product works with multiple languages
found in different parts of the world. Regression testing is done to ensure that changes work as
designed and do not have any unintended side-effects.
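To make the distinction concrete, here is a minimal sketch contrasting a white box test, which deliberately exercises each code path, with a black box test, which checks only externally specified behaviour. The function and its specification are hypothetical.

```python
def shipping_charge(weight_kg: float) -> float:
    """Hypothetical function: flat rate up to 5 kg, per-kg rate beyond that."""
    if weight_kg <= 5:                       # path 1
        return 50.0
    return 50.0 + (weight_kg - 5) * 12.0     # path 2

# White box tests: inputs chosen so that each branch of the if statement is exercised.
assert shipping_charge(2) == 50.0            # drives path 1
assert shipping_charge(8) == 50.0 + 36.0     # drives path 2

# Black box test: derived purely from the external specification
# ("charges never decrease as the parcel gets heavier"), with no
# knowledge of how the function is coded.
charges = [shipping_charge(w) for w in (1, 5, 6, 10, 20)]
assert charges == sorted(charges)
print("All path and specification checks passed")
```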
An audiologist was testing a patient, telling her, “I want to test the range within which you can
hear. I will ask you to tell me your name from various distances, and you should answer. Please
turn around and answer.” The patient understood what needed to be done. The audiologist asked
from thirty feet away and heard no answer, asked again from twenty feet and still heard nothing,
and finally walked right up to the patient, who had in fact been answering every single time.
From the above example, it is clear that it is the audiologist who has a hearing problem, not the
patient! Imagine if the doctor prescribed a treatment for the patient assuming that the latter could
not hear at 20 feet and 30 feet.
Tests are also artifacts produced by human beings, much as programs and documents are. We
cannot assume that the tests will be perfect either! It is important to make sure that the tests
themselves are not faulty before we start using them. One way of making sure that the tests are
tested is to document the inputs and expected outputs for a given test and have this description
validated by an expert or get it counter-checked by some means outside the tests themselves. For
example, by giving a known input value and separately tracing out the path to be followed by the
program or the process, one can manually ascertain the output that should be obtained. By
comparing this “known correct result” with the result produced by the product, the confidence
level of the test and the product can be increased. The practices of reviews and inspection and
meticulous test planning discussed in Chapter 3 and Chapter 15 provide means to test the test.
Test the tests first—a defective test is more dangerous than a defective product!
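One practical way of testing the test, as described above, is to compute the expected output through a route that is independent of the product and only then compare it with what the product produces. The sketch below is a minimal illustration; the function and the hand-traced figure are hypothetical.

```python
def monthly_interest(principal: float, annual_rate: float) -> float:
    """Hypothetical product code under test."""
    return principal * annual_rate / 12

# Step 1: document the input and the expected output, derived independently of
# the code (traced by hand: 12,000 at 6% is 720 per year, i.e. 60 per month).
test_input = (12_000, 0.06)
expected = 60.0   # the "known correct result", reviewable by an expert

# Step 2: only after the expected value has been validated do we run the
# product and compare. A wrong expected value here would be a defective test.
actual = monthly_interest(*test_input)
assert abs(actual - expected) < 1e-9, f"expected {expected}, got {actual}"
print("Test and product agree on the hand-traced case")
```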
Defects are like pests; testing is like designing the right pesticides to catch and kill the pests; and
the test cases that are written are like pesticides. Just like pests, defects develop immunity against
test cases! As and when we write new test cases and uncover new defects in the product, other
defects that were “hiding” underneath show up.
Every year, pests of various types attack fields and crops. Agriculture and crop experts find the
right antidote to counter these pests and design pesticides with new and improved formulae.
Interestingly, the pests get used to the new pesticides, develop immunity, and render the new
pesticides ineffective. In subsequent years, the old pesticides have to be used to kill the pests
which have not yet developed this immunity and new and improved formulae that can combat
these tougher variants of pests have to be introduced. This combination of new and old pesticides
could sometimes even hinder the effectiveness of the (working) old pesticide. Over time, the old
pesticides become useless. Thus, there is a constant battle between pests and pesticides to get
ahead of each other. Sometimes the pesticides win, but in a number of cases the pests succeed in
defying the latest pesticides. This battle results in a constant churning and evolution of the nature
and composition of pesticides.
There are two possible ways to explain how products develop this “immunity” against test cases.
One explanation is that the initial tests go a certain distance into the code and are stopped from
proceeding further because of the defects they encounter. Once these defects are fixed, the tests
proceed further, encounter newer parts of the code that have not been dealt with before, and
uncover new defects. This takes a “white box” or a code approach to explain why new defects
get unearthed with newer tests.
Tests are like pesticides—you have to constantly revise their composition to tackle new pests
(defects).
A second explanation for immunity is that when users (testers) start using (exercising) a product,
the initial defects prevent them from using the full external functionality. As tests are run, defects
are uncovered, and problems are fixed, users get to explore new functionality that has not been
used before and this causes newer defects to be exposed. This “black box” view takes a
functionality approach to explain the cause for this “more we test more defects come up”
phenomenon.
An alternative way of looking at this problem is not that the defects develop immunity but the
tests go deeper to further diagnose a problem and thus eventually “kill the defects.”
Unfortunately, given the complex nature of software and the interactions among multiple
components, this final kill happens very rarely. Defects still survive the tests, haunt the
customers, and cause untold havoc.
The need for constantly revising the tests to be run, with the intent of identifying new strains of
the defects, will take us to test planning and different types of tests, especially regression tests.
Regression tests acknowledge that new fixes (pesticides) can cause new “side-effects” (new
strains of pests) and can also cause some older defects to appear. The challenge in designing and
running regression tests centers around designing the right tests to combat new defects
introduced by the immunity acquired by a program against old test cases. We will discuss
regression tests in Chapter 8.
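A minimal sketch of that idea, written as pytest-style tests on a hypothetical function: every defect that gets fixed earns a test that is kept and re-run forever, so that a later fix (a new pesticide) cannot quietly reintroduce an old strain of defect. The defect numbers and function are made up for illustration.

```python
def normalize_phone(raw: str) -> str:
    """Hypothetical function that once mishandled inputs containing spaces."""
    return "".join(ch for ch in raw if ch.isdigit())

# Regression test added when (hypothetical) defect #123 was fixed. It stays in
# the suite permanently, so any future change that breaks this case is caught
# on the very next run.
def test_defect_123_spaces_are_ignored():
    assert normalize_phone("98 4 5") == "9845"

# New tests keep being added as new fixes go in; old ones are never removed.
def test_defect_187_plus_prefix_is_dropped():
    assert normalize_phone("+91 9845") == "919845"
```

Running such a suite (for example, with pytest) after every fix is what gives regression testing its safety-net character.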
All of us experience traffic congestion. Typically, during such congestion, we see a convoy
effect. There are stretches of road with very heavy congestion, with vehicles looking as if they
are moving in a convoy. This is followed by a stretch of smooth sailing (rather, driving) until we
encounter the next convoy.
Defects in a program also typically display this convoy phenomenon. They occur in clusters.
Glenford Myers, in his seminal work on software testing [MYER-79], proposed that the
probability of the existence of more errors in a section of a program is proportional to the
number of errors already found in that section.
Testing can only find a part of the defects that exist in a cluster; fixing a defect may introduce
another defect into the cluster.
This may sound counter-intuitive, but can be logically reasoned out. A fix for one defect
generally introduces some instability and necessitates another fix. All these fixes produce side-
effects that eventually cause the convoy of defects in certain parts of the product.
From a test planning perspective, this means that if we find defects in a particular part of the
product, more—not less—effort should be spent on testing that part. This increases the return
on investment in testing, as the purpose of testing is to find defects. It also means that
whenever a product undergoes any change, these error-prone areas need to be retested, as they
may get affected. We will cover these aspects in Chapter 8, Regression Testing.
Figure 1.4 The number of defects yet to be found increases with the number of defects
uncovered.
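From a planning viewpoint, the clustering principle can be turned into a simple heuristic: weight the test effort for each part of the product by the defects already found there. The module names, defect counts, and hour budget below are made up purely for illustration.

```python
# Illustrative sketch: distribute a fixed budget of test hours in proportion
# to the defects already found per module (Myers' clustering observation).
defects_found = {"billing": 24, "reports": 9, "login": 3}   # hypothetical data
test_hours_available = 120

total_defects = sum(defects_found.values())
allocation = {
    module: round(test_hours_available * count / total_defects)
    for module, count in defects_found.items()
}
print(allocation)   # {'billing': 80, 'reports': 30, 'login': 10}
```

Modules with more known defects get proportionally more testing, which is exactly the opposite of the intuitive "we already tested that part thoroughly" reaction.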
A fix for a defect is made around certain lines of code. This fix can produce side-effects around
the same piece of code. This sets off spiraling changes to the program, all localized to certain
select portions of the code. When we look at the code that received the fixes for a convoy of
defects, it is likely to look like a patchwork of rags! Fixing a tear in one place of a shirt would
most likely cause damage in another place. The only long-term solution in such a case is to throw
away the shirt and make a new one. This amounts to re-architecting the design and rewriting the code.
There was a wooden bridge across a river in a city. Whenever people walked over it to cross
the river, they would fall off. To take care of this problem, the city appointed a strong
policeman to stand under the bridge and save the people who fell. While this helped to some
extent, people continued to fall off the bridge. When that policeman moved to a different
position, a new policeman was appointed to the job. During his first few days, instead of
standing at the bottom of the bridge and saving the falling people, the new policeman worked
with an engineer and fixed the hole in the bridge, which had not been noticed by the earlier
policeman. People then stopped falling off the bridge and the new policeman had no one left
to save. (This made his current job redundant and he moved on to do other things that
yielded even better results for himself and the people…)
Testers are probably best equipped to know the problems customers may encounter. Like the
second police officer in the above story, they know people fall and they know why people fall.
Rather than simply catch people who fall (and thereby be exposed to the risk of a missed catch),
they should also look at the root cause for falling and advise preventive action. It may not be
possible for testers themselves to carry out preventive action. Just as the second police officer
had to enlist the help of an engineer to plug the hole, testers have to work with development
engineers to make sure the root causes of the defects are addressed. The testers
should not feel that by eliminating the problems totally their jobs are at stake. Like the second
policeman, their careers can be enriching and beneficial to the organization if they harness their
defect detection experience and transform some of it to defect prevention initiatives.
Prevention is better than cure—you may be able to expand your horizon much farther.
Defect prevention is a part of a tester's job. A career as a tester can be enriching and rewarding, if
we can balance defect prevention and defect detection activities. Some of these career path
possibilities are encapsulated in a three-stage model in Chapter 13, Common People Issues. We
will now visit the question of what is the right balance between defect prevention and defect
detection.
1.11 THE ENDS OF THE PENDULUM
The eventual goal of any software organization is to ensure that the customers get products that
are reasonably free of defects. There are two approaches to achieving this goal. One is to focus
on defect detection and correction and the second is to focus on defect prevention. These are also
called quality control focus and quality assurance focus.
Quality assurance is normally associated with process models such as CMM, CMMI, ISO 9001,
and so on. Quality control, on the other hand, is associated with testing (which forms the bulk of
the discussion in this book). This has caused an unnatural dichotomy between these two functions.
Unfortunately, organizations view these two functions as mutually exclusive, “either-or” choices.
We have even heard statements such as “with good processes, testing becomes redundant” or
“processes are mere overheads—we can find out everything by testing.” It is almost as if there
are two schools of thought at either extremes of a pendulum—one rooting for defect prevention
(quality assurance) focus and the other rooting for the defect detection (quality control) focus. It
is also common to find an organization swinging from one extreme to another over time, like a
pendulum (Figure 1.5).
Figure 1.5 Quality control and quality assurance as two methods to achieve quality.
Rather than view defect prevention and defect detection as mutually exclusive functions or ends
of a pendulum, we believe it is worthwhile to view these two as supplementary activities, being
done in the right mix. Figure 1.6 gives a defect prevention—defect detection grid, which views
the two functions as two dimensions. The right mix of the two activities corresponds to choosing
the right quadrant in this grid.
Figure 1.6 Relationship between defect detection focus and defect prevention focus.
When the focus on defect prevention is low, the emphasis on the use of appropriate standards,
reviews, and processes is very low. This acts as an ideal “breeding ground” for defects. Most of
the effort in ensuring quality of a product is left in the hands of the testing and defect detection
team. If the focus on defect detection is also low (represented by the lower left-hand quadrant),
this is a bad state for an organization to be in. Lack of testing and defect detection activities does
not “kill” these defects in time; hence the defects reach the customers. This is obviously not a
healthy state to be in.
Even when the defect detection focus increases, with a continued low defect prevention focus
(upper left-hand quadrant), the testing function becomes a high-adrenaline, high-pressure job.
Most defects are detected in the last minute—before the product release. Testers thus become
superheroes who “save the day” by finding all the defects just in time. They may also become
adversaries to developers as they always seem to find problems in what the developers do. This
quadrant is better than the previous one, but ends up being difficult to sustain because the last-
minute adrenalin rush burns people out faster.
Three Chinese doctors were brothers. The youngest one was a surgeon and well known in all
parts of the world. He could find tumors in the body and remove them. The middle one was a
doctor who could find out disease in its early days and prescribe medicine to cure it. He was
known only in the city they lived in. The eldest of the brothers was not known outside the house,
but his brothers always took his advice because he was able to tell them how to prevent any
illness before it cropped up. The eldest brother may not have been the most famous, but he
was surely the most effective.
Preventing an illness is more effective than curing it. People who prevent defects usually do not
get much attention. They are usually the unsung heroes of an organization. Those who put out
the fires are the ones who get visibility, not necessarily those who make sure fires do not happen
in the first place. This, however, should not dampen the motivation of the people who work on
defect prevention.
As we saw in the previous section, defect prevention and defect detection are not mutually
exclusive. They need to be balanced properly for producing a quality product. Defect prevention
improves the quality of the process producing the products while defect detection and testing is
needed to catch and correct defects that escape the process. Defect prevention is thus process
focused while defect detection is product focused. Defect detection acts as an extra check to
augment the effectiveness of defect prevention.
An increase in defect prevention focus enables putting in place review mechanisms, upfront
standards to be followed, and documented processes for performing the job. This upfront and
proactive focus on doing things right to start with causes the testing (or defect detection) function
to add more value, and enables catching any residual defects (that escape the defect prevention
activities) before the defects reach the customers. Quality is institutionalized with this
consistently high focus on both defect prevention and defect detection. An organization may
have to allocate sufficient resources for sustaining a high level of both defect prevention and
defect detection activities (upper right-hand quadrant in Figure 1.6).
Defect prevention and defect detection should supplement each other and not be considered as
mutually exclusive.
However, an organization should be careful about relying too much on defect prevention while
reducing the focus on defect detection (lower right-hand quadrant in Figure 1.6). Such a high
focus on defect prevention combined with a low focus on defect detection does not give
management confidence in the quality of the released product, since very few defects are found
internally. This lack of comfort, in turn, gives rise to the introduction of still more processes
intended to improve the effectiveness of defect detection. Too many processes and defect
prevention initiatives may end up being perceived as a bureaucratic exercise, not flexible or
adaptable to different scenarios. While processes bring in discipline and reduce dependency on
specific individuals, they—when not implemented in spirit—can also end up being double-edged
swords, acting as a damper on people's drive and initiative. When an organization places equally
high emphasis on defect prevention and defect detection (upper right-hand corner of the grid), it
may appear expensive, but this investment is bound to have a rich payback by institutionalizing
quality internally and making the benefits visible externally to the customers.
An organization should choose the right place on each of these two—defect detection and defect
prevention—dimensions and thus choose the right place in the grid. The relative emphasis to be
placed on the two dimensions will vary with the type of product, closeness to the release date,
and the resources available. Making a conscious choice of the balance by considering the various
factors will enable an organization to produce better quality products. It is important for an
organization not to over-emphasize one of these at the expense of the other, as the next section
will show.
As we can see from all the above discussions, testing requires abundant talent in multiple
dimensions. People in the testing profession should have a customer focus, understanding the
implications from the customer's perspective. They should have adequate analytical skills to be
able to choose the right subset of tests and be able to counter the pesticide paradox. They should
think ahead in terms of defect prevention and yet be able to spot and rectify errors that crop up.
Finally (as we will see in the next section), they must be able to perform automation functions.
Despite requiring all these challenging technical and interpersonal skills, testing still remains a
function that is not much sought after. There was an interesting experiment described by De
Marco and Lister in their book, Peopleware [DEMA-87]. The testing team was seeded with
motivated people who were “free from cognitive dissonance that hampers developers when
testing their own programs.” The team was given an identity (black attire, amidst the
traditionally dressed remainder of the organization) and tremendous importance. All this
increased their pride in work and made their performance grow by leaps and bounds, “almost
like magic.” Long after the individual founding members left and were replaced by new people,
the “Black Team” continued its existence and reputation.
The biggest bottleneck in taking up testing as a profession is the lack of self-belief. This lack of
self-belief and apparent distrust of the existence of career options in testing makes people view
the profession as a launching pad to do other software functions (notably, “development,” a
euphemism for coding). As a result, testers do not necessarily seek a career path in testing and
develop skepticism towards the profession.
We have devoted an entire chapter in Part III of the book to career aspirations and other similar
issues that people face. A part of the challenge that is faced is the context of globalization—the
need to harness global resources to stay competitive. We address the organizational issues arising
out of this in another chapter in Part III.
A farmer had to use water from a well which was located more than a mile away. Therefore, he
employed 100 people to draw water from the well and water his fields. Each of those employed
brought a pot of water a day but this was not sufficient. The crops failed.
Just before the next crop cycle, the farmer remembered the failures of the previous season. He
thought about automation as a viable way to increase productivity and avoid such failures. He
had heard about motorcycles as faster means of commuting (with the weight of water).
Therefore, he got 50 motorcycles, laid off 50 of his workers, and asked each rider to fetch two pots
of water. The reasoning seemed sound: thanks to the improved productivity (that is, the speed and
convenience of a motorcycle), he needed fewer people. Unfortunately, he chose to introduce the
motorcycles just before his crop cycle started. Hence, for the first few weeks, the workers were
kept busy learning to ride the motorcycles. While they were learning to balance them, the number
of pots of water they could fetch fell. Added to this, since the number of workers was also lower,
productivity actually dropped. The crops failed again.
The next crop cycle came. Now all workers were laid off except one. The farmer bought a truck
this time to fetch water. This time he realized the need for training and got his worker to learn
driving. However, the road leading from the well to the farm was narrow and the truck could not
be used to bring in the water. No portion of the crop could be saved this time either.
After these experiences the farmer said, “My life was better without automation!”
If you go through the story closely, there appear to be several reasons for the crop failures that
have nothing to do with the intent of automation at all. The farmer's frustration should be directed
not at automation but at the process he followed for automation and the inappropriate choices he
made. In the second crop cycle, the reason for failure was a lack of skills, and in the third it was
an improper choice and implementation of the tool.
Failures outnumber successes in automation. Equal skills and focus are needed for automation as
in product development.
In the first crop cycle, the farmer laid off his workers immediately after the purchase of
motorcycles and expected cost and time to come down. He repeated the same mistake for the
third crop cycle. Automation does not yield results immediately.
The moral of the above story as it applies to testing is that automation requires careful planning,
evaluation, and training. Automation may not produce immediate returns. An organization that
expects immediate returns from automation may end up being disappointed and wrongly blame
automation for their failures, instead of objectively looking at their level of preparedness for
automation in terms of planning, evaluation, and training.
A large number of organizations fail in their automation initiatives and revert to manual testing.
Unfortunately, they conclude—wrongly—that automation will never work.
Testing, by nature, involves repetitive work. Thus, it lends itself naturally to automation.
However, automation is a double-edged sword. Some of the points to keep in mind while going
in for automation are as follows.
Know first why you want to automate and what you want to automate, before
recommending automation for automation's sake.
Evaluate multiple tools before choosing one as being most appropriate for your need.
Try to choose tools to match your needs, rather than changing your needs to match the
tool's capabilities.
Train people first before expecting them to be productive.
Do not expect overnight returns from automation.
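When automation is adopted for the right reasons (repetitive, data-driven checking is the classic case), the payoff is easy to see. The sketch below is a hypothetical example that uses pytest parametrization to automate the representative cases for the six-character code discussed earlier in the chapter.

```python
import re

import pytest

def is_valid_code(code: str) -> bool:
    """Same hypothetical validator used earlier in the chapter."""
    return bool(re.fullmatch(r"[0-9][0-9A-Za-z]{5}", code))

# Repetitive checking of many input combinations is exactly the kind of work
# that rewards automation: one parametrized test replaces dozens of manual
# executions and can be re-run unattended on every build.
@pytest.mark.parametrize("code,expected", [
    ("3abCd9", True),
    ("xabcd9", False),
    ("3abc d", False),
    ("3abcd", False),
])
def test_six_character_code(code, expected):
    assert is_valid_code(code) == expected
```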
We have discussed several basic principles of testing in this chapter. These principles provide an
anchor for the other chapters in the rest of the book. We have organized the book into
five parts. The first part (which includes this chapter) is Setting the Context, which sets the
context for the rest of the book. In the chapter that follows, we cover Software Development Life
Cycle (SDLC) Models in the context of testing, verification and validation activities.
In Part II, Types of Testing, we cover the common types of testing. Chapters 3 through 10 cover
white box testing, black box testing, integration testing, system and acceptance testing,
performance testing, regression testing, internationalization testing, and ad hoc testing.
Part III, Select Topics in Specialized Testing, addresses two specific and somewhat esoteric
testing topics—object oriented testing in Chapter 11 and usability and accessibility testing in
Chapter 12.
Part IV, People and Organizational Issues in Testing, provides an oft-ignored perspective.
Chapter 13 addresses common people issues such as misconceptions, career path concerns, and
so on. Chapter 14 addresses the different organizational structures in vogue for setting up
effective testing teams, especially in the context of globalization.
The final part, Part V, Test Management and Automation, addresses the process, management,
and automation issues involved in ensuring effective testing in an organization. Chapter 16
discusses test planning, management, and execution, covering various aspects of putting together
a test plan, tracking a testing project, and related issues. Chapter 17 goes into the details of the
benefits, challenges, and approaches in test automation—an area of emerging and increasing
importance in the test community. The final chapter, Chapter 18, details what data need to be
captured and what analysis should be performed to measure the effectiveness of testing and the
quality of a product, and how this information can be used to achieve quantifiable, continuous
improvement.
While we have provided the necessary theoretical foundation in different parts of the book, our
emphasis throughout has been on the state of practice. This section should set the context for
what the reader can expect in the rest of the book.
REFERENCES
One of the early seminal works on testing is [MYER-79]. In particular, the example of trying to
write test cases for verifying three numbers to be the sides of a valid triangle still remains one of
the best ways to bring forth the principles of testing. [DEMA-87] provides several interesting
perspectives on the entire software engineering discipline; the concept of the Black Team is
illustrated in that work. The emphasis required for process and quality assurance methodologies,
and the balance to be struck between quality assurance and quality control, are brought out in
[HUMP-86]. Some of the universally applicable quality principles are discussed in the classics
[CROS-80] and [DEMI-86]. [DIJK-72], a Turing Award lecture, brings out the doctrine that
program testing can never prove the absence of defects. [BEIZ-90] discusses the pesticide
paradox.
1. We have talked about the pervasiveness of software as a reason why defects left in a
product would get detected sooner than later. Assume that televisions with embedded
software were able to download, install and self-correct patches over the cable network
automatically and the TV manufacturer told you that this would just take five minutes
every week “at no cost to you, the consumer.” Would you agree? Give some reasons why
this is not acceptable.
2. Your organization has been successful in developing a client-server application that is
installed at several customer locations. You are changing the application to be a hosted,
web-based application that anyone can use after a simple registration process. Outline
some of the challenges that you should expect from a quality and testing perspective of
the changed application.
3. The following were some of the statements made by people in a product development
organization. Identify the fallacies, if any, in the statements and relate them to the principles
discussed in this chapter.
1. “The code for this product is generated automatically by a CASE tool — it is
therefore defect-free.”
2. “We are certified according to the latest process models — we do not need
testing.”
3. “We need to test the software with dot matrix printers because we have never
released a product without testing with a dot matrix printer.”
4. “I have run all the tests that I have been running for the last two releases and I
don't need to run any more tests.”
5. “This automation tool is being used by our competitors — hence we should also use
the same tool.”
4. Assume that each defect in gathering requirements allowed to go to customers costs $10,000,
and that the corresponding costs for design defects and coding defects are $1,000
and $100, respectively. Also, assume that current statistics indicate that on average ten
new defects come from each of the phases. In addition, each phase also lets the defects
from the previous phase seep through. What is the total cost of the defects under the
current scenario? If you put a quality assurance process to catch 50% of the defects from
each phase not to go to the next phase, what are the expected cost savings?
5. You are to write a program that adds two two-digit integers. Can you test this program
exhaustively? If so, how many test cases are required? Assuming that each test case can
be executed and analyzed in one second, how long would it take for you to run all the
tests?
6. We argued that the number of defects left in a program is proportional to the number of
defects detected. Give reasons why this argument looks counterintuitive. Also, give
practical reasons why this phenomenon causes problems in testing.