Issues in Acceptance Testing: No Collaboration - As A Specification of A System That Acts As The Shared Definition of
The traditional model of gathering requirements and building specifications is based on a lot
of formalising, handing over and translating. Business analysts first extract knowledge about
requirements from customers, formalising it into specifications and handing that over to
developers and testers. Developers extract knowledge from that and translate it into
executable code, which is handed over to testers. Testers then take the specifications,
extract knowledge from them and translate it into verification scripts, which are then applied
to the code that was handed over to them by the developers. In theory, this works just fine
and everyone is happy. In practice, this process is essentially flawed and leaves huge
communication gaps at every step. Important ideas fall through those gaps and mysteriously
disappear. After every translation, information gets distorted and misunderstood, leading to
large mistakes once the ideas come out of the other end of the pipe. A tester's independent
interpretation might help to correct the developers' interpretation, or it might very well be a
completely different misinterpretation of the system requirements. With agile processes, the
feedback loop is much shorter than in a traditional process, so problems get discovered
quickly. But if agile acceptance testing is not applied, even with other agile practices in
place, there is still a lot of room for mistakes. Instead of just discovering problems sooner,
we need to work out how to stop them from appearing in the first place.
People often seem surprised by this result, even though most of us encountered it, and used it
to amusing effect, in childhood. Antony Marcano draws a parallel between the traditional
software development process and the Telephone game. The Telephone game works by getting a
group of children into a line; the first child whispers a phrase or sentence to the next child,
the second child whispers what they heard to the third, and so on. The last child in the line
says what they heard out loud, and it is often significantly different from the original
phrase. Although those cumulative distortions may have been amusing when we were children,
they are not so funny when it comes to solving real problems that obstruct people in doing
their jobs.
Symptoms of this problem are developers writing acceptance tests themselves, testers being
expected to handle everything about acceptance testing, and the business dictating tests
without any feedback from developers or testers.
3. Tests unusable as live documentation — one of the greatest benefits of agile acceptance
testing is that we gradually build a human-readable specification of the system. This
specification is ideally automated to a great degree, so that we can be confident it is in sync
with the code. This solves the problem of the running code being the only thing you can really
trust about what the system does. Correct, easily understandable and accessible documentation
is crucial for future change. If tests do not serve as live documentation, those benefits are
lost. Symptoms of this problem are very technical tests; long, hard-to-understand tests; and
tests that are so poorly organised that you cannot quickly find the ones relevant to a
particular piece of functionality. The sketch below contrasts the two styles.
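To make the symptom concrete, here is a minimal Python sketch contrasting a technical test with one that reads as documentation. The discount rule, the Order class and all names are illustrative assumptions, not taken from the text.

```python
# A minimal sketch, assuming a pytest-style suite. The discount rule
# and all names are illustrative, not from the original text.

class Order:
    """Toy domain object: orders over 100 get a 10% discount
    (integer arithmetic keeps the example exact)."""

    def __init__(self, amount):
        self.amount = amount

    def total_after_discount(self):
        if self.amount > 100:
            return self.amount - self.amount // 10
        return self.amount

# Symptom: technically correct, but the business rule is invisible,
# so the test cannot serve as documentation.
def test_case_17():
    assert Order(110).total_after_discount() == 99

# Better: the name and body state the rule in domain language, so the
# test doubles as a live, human-readable specification.
def test_orders_over_100_get_a_ten_percent_discount():
    order_just_over_threshold = Order(110)
    assert order_just_over_threshold.total_after_discount() == 99
```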
5. Focusing on tools — some teams focus too much on the features of particular tools,
disregarding things that do not naturally fit into their chosen toolset. This makes the
specification vague in the parts that a particular tool does not cover well. Symptoms of this
problem are ignoring the UI (because UI tests are hard to automate) and converting
specifications into a format suitable for a particular tool during workshops, which breaks the
flow of the discussion and wastes valuable time.
6. Not considering acceptance testing a value-adding activity — agile acceptance testing
is not about QA; it is about specifying and agreeing on what needs to be done. Teams that
consider it part of QA often delegate the responsibility for it to junior programmers and
testers, which effectively means that junior team members will be writing the specification
(which is wrong on so many levels that I don’t know where to start).
7. Did not Understand that UAT is Performed at the Worst Possible Time in the Project
Most UAT efforts happen at the end of the project because this is when the entire system is
assembled or installed. Until the end of the project, users may be able to test parts of the
system or application, but not the system as a whole. This is bad because the end of the
project is the worst time to find and fix major problems. A problem that has been in the
system since the requirements phase typically costs around ten times as much to find and fix
in system testing or UAT as it would have cost during requirements or design. For example, a
misunderstanding that a one-hour requirements review would have caught can take days to repair
once it is embedded in the design, the code and the tests. This is due to the ripple effect
that the fix may require in other areas of the system.
8. Underestimating the skill required to do this well — introducing acceptance testing is a
tall order, and often means changing the way teams are organised and the way they approach
specifications. This requires a lot of time and investment in building up skills, mastering
tools, facilitating workshops and dealing with resistance to change. Underestimating the
effort required to do this properly can lead teams to conclude that they have failed and to
give up too early.
Collaborative Specifications
Traditional processes rely on domain experts or analysts to get the specifications and
requirements documents right. These processes do not harness the knowledge of the whole
team. Software developers and testers are typically kept out of the loop. However,
developers and testers have key technical insights that can help to specify better solutions
or avoid technical difficulties.
Instead of relying on one person to get the specifications right in isolation, include the whole
team in specifying the solution. People coming from different backgrounds use different
heuristics to solve problems and have different ideas. Involving a diverse group of people in
producing the specifications helps to avoid groupthink. Technical experts can suggest better
solutions. Testers can communicate their concerns about potential problems. Collaborative
specifications harness the knowledge and experience of the whole team.
Specification workshops
Collaborative and incremental specifications require the input of project stakeholders,
domain experts, software developers and testers. Getting so many people together can
be a real challenge, especially since domain experts and stakeholders have other things
to do. In order to keep up with the pace of short iterations, we need to have a very
efficient flow of information. As an iteration might only last two weeks, delaying key
decisions for even a day or two can have a serious impact on the outcome of the
iteration. Specifications need to be nailed down and efficiently communicated to all
project participants during each iteration. We need to identify functional gaps and clear
up inconsistencies and misunderstandings. Written specifications are fine as
documentation, but they do not support an efficient flow of information.
Instead of using written specifications and relying on people to review them separately,
organise a specification workshop at the start of each iteration. Get everyone in the
same room, discuss examples, and resolve issues, allowing people to voice their
concerns. Ensure that all participants build a consistent shared understanding of what
the system should do, so that developers and testers have enough information to
complete their work for the current iteration. Specification workshops are an efficient use
of the time of senior project stakeholders, as we can schedule them regularly in
advance. Write down and clean up the examples identified and use them as
documentation of the results of the workshop, but do not rely on written specifications for
communication.
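To illustrate what writing down and cleaning up the examples can lead to, here is a minimal sketch of how examples agreed in a workshop might be captured as data and checked against the code, assuming a pytest-style suite. The free-delivery rule and every name are illustrative assumptions, not taken from the text.

```python
import pytest

# Examples agreed in the workshop, written down as plain data. The
# free-delivery rule and every name here are illustrative assumptions.
WORKSHOP_EXAMPLES = [
    # (books_in_order, qualifies_for_free_delivery)
    (4, False),
    (5, True),
    (6, True),
]

def qualifies_for_free_delivery(books_in_order):
    """Stand-in for the production rule under discussion."""
    return books_in_order >= 5

@pytest.mark.parametrize("books, expected", WORKSHOP_EXAMPLES)
def test_free_delivery_examples_from_the_workshop(books, expected):
    # Each workshop example becomes an executable check, so the written
    # record of the discussion stays verifiably in sync with the code.
    assert qualifies_for_free_delivery(books) == expected
```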
Communicating Intent
Traditionally, requirements specify what the system should do without explaining why.
Developers and testers can then only blindly follow the specifications, relying on them to
be 100% correct and precise. Without really understanding why something is being
done, they cannot even spot problems that are obvious to domain experts, nor can they
verify that what they are doing is actually what the customers want.
Defining what the system should do is crucial to communicating the specifications
effectively, but explaining why gives developers and testers a much-needed framework
for understanding what the customers really want. This knowledge enables them to spot
inconsistencies, functional gaps and incorrect requirements. These issues
can then be sorted out during development rather than after the initial delivery.
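One lightweight way to carry the why alongside the what is to record the business reason in the test itself. This is only a sketch: the retry rule, the throttling payment gateway and all names are hypothetical.

```python
# A sketch of recording intent (the "why") next to the behaviour (the
# "what"). The retry rule and the throttling gateway are hypothetical.

def retry_delay_seconds(attempt):
    """Stand-in for the behaviour under specification."""
    return min(2 ** attempt, 60)

def test_retry_delay_is_capped_at_sixty_seconds():
    """Why: the (hypothetical) payment gateway throttles clients that
    poll too aggressively, so delays grow exponentially but must never
    exceed one minute. A developer who later 'improves' the cap can
    see from this note that they would be breaking a real constraint."""
    assert retry_delay_seconds(10) == 60
```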
Live Documentation
Large specification documents often get out-of-date as soon as development starts, as
people implement change requests directly in the code without updating the
specifications. Out-of-date specifications are misleading, but are often all we have
available to explain the system. The code is the only true description of what the
system does, but it is unusable for communication.
An incrementally built set of acceptance tests does not suffer from these issues. As
acceptance tests are automated and connected directly to the code, we can have the
same level of confidence in them as we have in the code. Acceptance tests are by
nature easily understandable (or at least they should be) so the set of implemented
acceptance tests serves as a very reliable source of information on the system.
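A brief sketch of why that confidence is justified: because the documented example exercises the real code, any behaviour change makes the example fail visibly. The shipping rule and all names are illustrative assumptions.

```python
# Because the documented example calls the production rule directly,
# the documentation cannot silently go stale: change the code and the
# example fails. The shipping rule and all names are illustrative.

def standard_shipping_fee(order_total):
    """Imagine this is the production rule: free shipping from 50 up."""
    return 0.0 if order_total >= 50 else 4.99

def test_documented_example_orders_of_50_or_more_ship_free():
    assert standard_shipping_fee(50) == 0.0      # at the threshold
    assert standard_shipping_fee(49.99) == 4.99  # just below it
```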
7. Did not Understand that UAT is Performed at the Worst Possible Time in the Project: Solutions
The first solution is to involve users throughout the project, from the very beginning. While
users are providing input into the user requirements, they can also be defining acceptance
criteria, and they can be involved in requirement reviews and inspections.
Another solution is to match the intensity of the testing to the relative risk and to the
skills of the users. Not every project requires extensive testing. However, for projects that
control high-value assets or affect personal safety, extensive validation is required. Users
and others on the project may question the need for defined test cases and test scripts, but
when viewed in the light of project and business (or operational) risks, the time and
resources spent on effective testing are resources well spent.
To match testing to the risk, perform a risk assessment that can be quantified and
documented. Just guessing at the level of risk is not good enough: after a critical failure,
you need to be able to explain why you judged something to be a low risk. The risk assessment
should indicate which system and business areas are most exposed to risk. This allows test
resources to be allocated where they will have the greatest impact in detecting defects that
could have severe negative consequences. A minimal sketch of such an assessment follows.
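As one way to make "quantified and documented" concrete, the snippet below scores each area as likelihood times impact on simple 1-to-5 scales and ranks the areas for test effort. The areas, scales and scores are illustrative assumptions.

```python
# A minimal, documentable risk assessment: score each area as
# likelihood x impact on 1-5 scales, then rank areas so test effort
# goes where exposure is highest. Areas and scores are illustrative.

RISKS = [
    # (area, likelihood_of_failure 1-5, impact_of_failure 1-5)
    ("payment processing", 3, 5),
    ("report formatting",  4, 2),
    ("user preferences",   2, 1),
]

def risk_score(likelihood, impact):
    return likelihood * impact

# Rank from highest to lowest exposure; the written record of these
# scores is what lets you justify, after a failure, why an area was
# treated as low risk.
for area, likelihood, impact in sorted(
        RISKS, key=lambda r: risk_score(r[1], r[2]), reverse=True):
    print(f"{area}: score {risk_score(likelihood, impact)}")
```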
A further solution is to have users design tests that model their world. The purpose of such
tests is to determine whether the system or application will correctly support real-world
conditions.
Finally, at the very least, hold limited review sessions with the actual users. Complete
review sessions that examine items such as the user requirements in detail are even better.
Also, have contingency plans in place for when unexpected problems are found during UAT. If
users are unwilling or unable to participate in the project, raise this situation as a risk in
the project status reports.