ASE Module 3
DEPLOYMENT PIPELINE
A deployment pipeline is a series of automated steps that software code goes through from development to production, ensuring code quality and reliability at each stage.
Its primary purpose is to automate the building, testing, and deployment of software.
It ensures that code changes are thoroughly tested before reaching production, reducing the risk of introducing bugs or issues.
• Quality Checks: Performing code analysis, security scans, and other checks.
• Manual Approvals: Some stages may require manual intervention for critical
decision-making.
• Reliability: Ensures that only code that passes all tests and quality checks is
deployed, reducing the risk of production failures.
• Consistency: Ensures that each code change follows the same deployment
process.
• Visibility: Provides visibility into the status of code changes as they progress
through the pipeline.
Continuous Monitoring:
After deployment, continuous monitoring and alerting help detect and address issues in
production promptly.
• Popular CI/CD tools include Jenkins, Travis CI, CircleCI, GitLab CI/CD, and GitHub Actions.
• Parallel and Sequential Stages: Pipelines can have parallel stages for faster
execution or sequential stages for strict control.
• Infrastructure as Code (IaC): IaC tools like Terraform or AWS CloudFormation can
be integrated into the pipeline to manage infrastructure changes.
Continuous integration (CI) and continuous deployment (CD) are two important practices related to deployment that increase software project development productivity and quality.
This enables the team to create large and complex systems with a much higher level of confidence and control, and facilitates immediate and rapid feedback.
The objective of a deployment pipeline is to execute the various phases of software delivery from development to the end user at the click of a button, with powerful feedback at each stage.
Once the code is tested at the press of a button, with rapid feedback, it is passed to the next stage of deployment, where a production-like environment is mirrored to exercise the software.
The developers can see the build's stage in the release process and the problems in each stage. The managers can also watch key metrics such as cycle time, productivity, and code quality. In this process, everybody in the delivery process gets what they want at the time they need it, and increased visibility into the whole process improves feedback on the release process. In short, a deployment pipeline is the implementation of end-to-end automation of the build, deploy, test, and release processes. This approach is used to create, test, and deploy complex systems with higher quality, at significantly lower cost and lower risk.
Defining the deployment pipeline: it is the automated manifestation of the entire process of getting the software from version control (VC) into the hands of the users. The movement of changes through a deployment pipeline is shown in Figure 5.2.
The input to the deployment pipeline is every revision of the software committed to version control. Each change creates a build that has to pass through a sequence of testing stages before being taken to production. The sequence of testing stages evaluates the build from different perspectives, and this constitutes the continuous integration process. As the build passes each of the tests, confidence in the code's fitness increases. After testing, the build proceeds to a production-like environment. The major trade-offs in deployment are shown in Figure 5.3.
In addition to bugs in the software code itself, there is also the possibility that newly released software may break down due to unforeseen interactions between the components of the system and its environment. For example, a new network topology, or a slight difference in the configuration of the production server, may cause issues.
Since deployment and production releases are also automated, these phases can be executed rapidly, repeatably, and reliably. Therefore, it is easy to perform releases more frequently. If it is ever required to step back to an earlier version, this is also possible. This allows releases to be done without much risk.
To achieve these benefits, it is a compulsory requirement to automate a suite of tests that prove the release candidate is fit for release. Automating deployment to the testing, staging, and production environments removes manually intensive, error-prone steps.
The main stages in a deployment pipeline include the commit stage and the automated acceptance test stage, followed by the manual test stages and finally the release stage.
In the commit stage, the system needs to work at the technical level. It should compile, pass a suite of primarily unit-level automated tests, and pass code analysis.
The automated acceptance test stage asserts that the system works at both the functional and the non-functional level: the behaviour of the code should meet the needs of its end users and the specifications of the customer.
The manual test stages verify that the system is usable and fulfils its requirements. They should detect any defect not caught by the automated testing process, and should also verify that the system provides value to the end user. This stage includes exploratory testing, integration environments, and user acceptance testing.
Finally, the release stage delivers the system to the users, either in the form of a packaged product or by deploying it into a production or staging environment. Here the staging environment is a testing environment identical to the production environment. This whole automated software delivery process is modelled using the deployment pipeline, and the objective is effective continuous integration of the build, deployment, test, and release activities.
Though this is automated, it doesn't mean that there is no human interaction with the system through the release process. Rather, it ensures that all the error-prone and complex steps are automated so that they are reliable and repeatable in execution. The ability to move a system through all stages of its development with the press of a button increases the frequency of testing, analysis, and deployment.
The deployment pipeline process is shown in Figure 5.4. It starts with the developers committing their changes into the version control system. The continuous integration management system then responds to the commit by triggering a new instance of the deployment pipeline. In this commit stage, the pipeline compiles the code, runs the unit tests, performs code analysis, and creates installers.
Once all the unit tests have passed, the executable code is assembled into binaries and stored in an artifact repository. Continuous integration servers use repository tools like Nexus and Artifactory to store the binaries and relevant artifacts and manage them effectively.
In the next stage, the pipeline prepares a test environment to use for acceptance testing. Modern continuous integration servers can execute these jobs in parallel on a build grid.
Acceptance testing, though automated, takes a longer time. Therefore, the continuous integration server splits these tests into suites which can be executed in parallel, so that speed increases and feedback is faster. The acceptance testing phase is automatically triggered by the successful completion of the first stage in the pipeline. Once acceptance testing is successfully completed, the pipeline automatically branches to deployment of the build into various environments. These include user acceptance testing, capacity testing, and production.
The operations team designs automated scripts that perform the deployment. Every tester is able to see the release candidates available to them, as well as the status of each: the stages each build has passed, the checks it has cleared, and any comments attached to each of the builds created. Finally, on successful completion of the test stages, a tester should be able to press a button to deploy the selected build by running the deployment script against the relevant environment.
The basic purpose of the automated execution of these stages is to get faster feedback: the ability to see which build is deployed into which environment, where each build is in the pipeline, and which stages it has passed.
The executable code of the software, known as the binaries, is built from the source files, for example as .jar files or .NET assemblies. When the code is repeatedly compiled at various stages, at commit, again at acceptance testing, and again for capacity testing, there is a risk of introducing some difference at each stage. To keep the deployment pipeline efficient, it is essential to ensure that no changes have been introduced, either maliciously or by mistake, intentionally or unintentionally, between the creation of the binaries and the performing of the release. This can be done by storing hashes of the binaries at the time they are created, and verifying that the binary is identical at every subsequent stage in the process. The binaries should be created only once, during the commit stage of the build. The binaries should be stored on a filesystem, not in version control, and should be retrievable for the later stages in the pipeline. The binaries should never be environment specific, as binary files are not intended to run in only a single environment.
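As a minimal sketch of the hash check described above (the class name, file paths, and command-line arguments are all invented for illustration, and it assumes Java 17+ for HexFormat), the snippet below computes a SHA-256 digest for a binary when it is created and verifies the binary is unchanged before a later stage reuses it:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.security.MessageDigest;
    import java.util.HexFormat;

    public class BinaryVerifier {

        // Compute a SHA-256 digest of a binary produced by the commit stage.
        static String sha256(Path binary) throws Exception {
            byte[] bytes = Files.readAllBytes(binary);
            byte[] digest = MessageDigest.getInstance("SHA-256").digest(bytes);
            return HexFormat.of().formatHex(digest);
        }

        public static void main(String[] args) throws Exception {
            Path binary = Path.of(args[0]);    // e.g. build/app.jar
            String recordedHash = args[1];     // hash stored when the binary was created

            // Before reusing the binary in a later stage, confirm it is identical.
            if (!sha256(binary).equals(recordedHash)) {
                throw new IllegalStateException("Binary has changed since the commit stage");
            }
            System.out.println("Binary verified: identical to the commit-stage artifact");
        }
    }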
The software, when being deployed to different environments, should use the same process. Normally deployment is done by the developers; only rarely do testers or analysts do the deployment. The risk associated with deployment to an environment is inversely proportional to the frequency of deployment to it: the more often a deployment is tried, the lower the risk.
The deployment process must be tested many times in different environments, since every environment differs in some way: the operating system, middleware, configuration settings, the location of the databases and external services, and other configuration information that needs to be set at deployment time. This does not mean, however, that a different deployment script is required for every environment. A solution is to use a property file to hold the environment-specific configuration information. This property file should be checked into version control as well. Other ways of supplying deployment-time configuration include keeping it in directory services like LDAP or Active Directory, or storing it in a database and accessing it through an application like Escape.
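As a minimal sketch of the property-file approach (the file layout and keys are hypothetical), a single deployment program loads the environment-specific values from a checked-in properties file, while the deployment logic itself stays identical for every environment:

    import java.io.FileInputStream;
    import java.util.Properties;

    public class DeployConfig {
        public static void main(String[] args) throws Exception {
            // The environment name selects the checked-in property file,
            // e.g. config/staging.properties or config/production.properties.
            String environment = args[0];
            Properties config = new Properties();
            try (FileInputStream in = new FileInputStream("config/" + environment + ".properties")) {
                config.load(in);
            }

            // The same deployment logic runs everywhere; only the values differ.
            String databaseUrl = config.getProperty("database.url");
            String serviceHost = config.getProperty("service.host");
            System.out.printf("Deploying to %s: db=%s host=%s%n", environment, databaseUrl, serviceHost);
        }
    }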
Once the deployment method is the same for every environment, if the deployment doesn't work in a particular environment the cause can be narrowed to a few possibilities: a mistake in the application's environment-specific configuration file, a problem with the infrastructure or the services on which the application depends, or the configuration of the environment into which the deployment is done.
When the application is deployed using an automated script, a smoke test verifies that the software is up and running. This means launching the application and checking to ensure that the main screen comes up with the expected content. It also checks whether the services that the application depends on are up and running, such as a database, a messaging bus, or any external service required.
A smoke test, or deployment test, is among the most important tests. Once the unit test suite is passing, a successful smoke test gives the team confidence that the application can actually run. Conversely, if the application fails the smoke test, this indicates that some basic diagnosis is required to know whether the application is down because something it depends on is not working.
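A minimal smoke-test sketch follows, assuming the application exposes an HTTP main page and a health endpoint that reports on its dependencies; the URLs and the expected "Welcome" text are invented for the example:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class SmokeTest {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();

            // Check that the main screen comes up with the expected content.
            HttpResponse<String> home = client.send(
                    HttpRequest.newBuilder(URI.create("http://localhost:8080/")).build(),
                    HttpResponse.BodyHandlers.ofString());
            if (home.statusCode() != 200 || !home.body().contains("Welcome")) {
                throw new IllegalStateException("Main screen did not come up as expected");
            }

            // Check that the services the application depends on (database,
            // messaging bus, external services) report healthy.
            HttpResponse<String> health = client.send(
                    HttpRequest.newBuilder(URI.create("http://localhost:8080/health")).build(),
                    HttpResponse.BodyHandlers.ofString());
            if (health.statusCode() != 200) {
                throw new IllegalStateException("A dependency is down: " + health.body());
            }
            System.out.println("Smoke test passed: application and dependencies are up");
        }
    }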
Several problems can occur when the production environment is significantly different from the testing and development environments. To increase confidence in deployment and release, testing and continuous integration need to run on environments that are as similar as possible to the production environment. Exact copies of production can be used to run the manual and automated tests. This requires certain practices to be followed and ensured: the infrastructure, including network topology and firewall configuration, must be recorded and maintained; the operating system configuration, including patches, needs to be verified; the application stack must be the same; and finally the application's data must also be known and valid.
Before the introduction of continuous integration, different parts of the process ran on different schedules; some might take hours, some might run over the weekend. The deployment pipeline, by contrast, takes a different approach: completion of one stage triggers execution of the next.
Once a change is checked into version control, creating version 1, this in turn triggers the first stage of the pipeline, which includes the build and the unit tests. When this stage passes, it triggers the second stage, which includes the automated acceptance tests. If, along the way, any developer checks in another change, resulting in the creation of version 2 or version 3, this in turn triggers the build and unit tests once again.
The pipeline uses intelligent scheduling to handle the different versions. Once an instance of the build is created and the unit tests have finished, the continuous integration system checks for new changes and, if any are available, builds against the most recent set available. If the most recent commit breaks the build or the unit tests, the build system doesn't know which change (version 2 or version 3, as the case may be) caused the failure; at this stage, the developers need to work it out by themselves manually.
Similarly, while the automated acceptance tests are running, if the build process finds a revision newer than revision 3, say revision 4, it takes the latest revision. When the acceptance tests finish, if the continuous integration scheduler finds a further new change, say version 5, it will trigger a run of the acceptance tests against version 5 once again.
If any part of the pipeline fails, stop the line.
The main objective of the deployment pipeline is that when the team checks new code into version control, it successfully builds and passes every test, in such a way that the pipeline produces rapid, repeatable, reliable releases. This applies to the entire deployment pipeline: if a deployment to an environment fails, the whole team owns that failure. They should stop and fix it before doing anything else.
The building and testing of simple projects can be accomplished using an integrated development environment (IDE). But large projects demand more control, and demand scripts for building, testing, and packaging the application, especially in a distributed team.
In an automated continuous integration environment, the continuous integration server runs scripts or commands to create the binaries. For example, a Rails project can run the default Rake task; .NET projects use MSBuild, and Java projects use Ant, Maven, Buildr, or Gradle. In automated deployment, the software has a lot of complexity, as it requires a series of steps that include configuring the application, initialising data, configuring the infrastructure and the operating systems, and so on.
Decisions on deployments are taken by the developers and operations personnel together.
Build Tools
Software development uses automated build tools like Ant and Make, of which several variants are available. For example, when tests need to be run on a piece of software, it is necessary to compile the code first and then run the tests; compiling requires the environment to be installed, and the test data must be set up. Every build tool models two essential things: the things the build tool does (tasks), and the things those tasks depend on. Build tools can be task oriented or product oriented. Task-oriented build tools include Ant, NAnt, and MSBuild: the build dependency network in Figure 6.1 is described in terms of a set of tasks. A product-oriented tool such as Make, by contrast, describes all the activities in terms of the products they generate, such as executables. For correctness, the build process requires each prerequisite to be executed exactly once: if a prerequisite is missed, the results of the build will be wrong, and if a prerequisite is executed more than once, that can equally produce wrong results.
An Ant build might have an init task that sets up the test data, followed by compile-source, compile-tests, and run-tests. In a task-oriented tool, each task knows whether or not it has already been executed as part of the build process, so even though the init task is referenced twice, it is actually executed only once. In a product-oriented tool, compile-source and compile-tests each result in a single file containing the compiled code, say source.so and tests.so, and run-tests generates a file called testreports.zip. A product-oriented build system ensures that run-tests is invoked after compile-source and compile-tests. Product-oriented build tools keep timestamps on the files generated by each task; for example, while compiling C/C++, the build ensures that source code files are compiled only if they have changed since the last build, known as an incremental build. In languages that run on a virtual machine, the compiler just creates bytecode, and the virtual machine's just-in-time (JIT) compiler does the optimization at runtime.
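To make the task-oriented semantics concrete, here is a small Java sketch (not from the text; the task names mirror the example above) in which each task runs its dependencies first and records that it has run, so a task referenced twice, like init, still executes only once:

    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    public class TaskRunner {
        record Task(String name, List<Task> dependsOn) {}

        private final Set<String> executed = new HashSet<>();

        // Run a task's dependencies first, then the task itself, at most once each.
        void run(Task task) {
            if (!executed.add(task.name())) {
                return; // already executed as part of this build
            }
            for (Task dep : task.dependsOn()) {
                run(dep);
            }
            System.out.println("Executing task: " + task.name());
        }

        public static void main(String[] args) {
            Task init = new Task("init", List.of());
            Task compileSource = new Task("compile-source", List.of(init));
            Task compileTests = new Task("compile-tests", List.of(init, compileSource));
            Task runTests = new Task("run-tests", List.of(compileSource, compileTests));

            // init is reachable twice in the dependency network but runs only once.
            new TaskRunner().run(runTests);
        }
    }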
Ant: With the growth of cross-platform development in Java, the Make tool showed many limitations. The Java community experimented with combining Make-style builds with XML, and the result was the Apache Ant build tool, a fully cross-platform tool. It includes a set of tasks written in Java to perform common operations such as compilation and file-system manipulation. Moreover, Ant can easily be extended with new tasks written in Java. This tool therefore quickly became the de facto build tool for Java projects. It is a task-oriented tool in which the runtime components of Ant are written in Java, but Ant scripts use an external DSL written in XML. This combination gives Ant powerful cross-platform capabilities, making it extremely flexible and powerful, even though Ant suffers from several shortcomings.
The disadvantages include: Ant requires build scripts to be written in XML, which is difficult to work with; developers spend a great deal of time writing scripts to compile, create JAR files, run tests, and so on; the declarative language used in Ant and its syntax create confusion for developers; and generating build statistics and similar operations is quite difficult.
NAnt and MSBuild: The .NET framework introduced by Microsoft has many features in common with the Java language and its environment. NAnt essentially uses the same syntax as Ant, with a few differences. NAnt was later modified by Microsoft with some minor variations, and this version is called MSBuild, a descendant of NAnt that is more tightly integrated into Visual Studio solutions and projects and their dependency management. But both NAnt and MSBuild have many of the limitations suffered by Ant.
Maven: Maven attempted to remove a large number of the disadvantages Ant had in managing complex domains. Maven can perform almost any build, deploy, test, and release task with a single command, without writing many lines of script. One of the important features of Maven is that it supports automated management of Java libraries and of dependencies between projects, which was one of the major concerns in Java projects. Another advantage of Maven is that it tames complex and rigid software using a partitioning scheme that allows it to decompose complex solutions into smaller components. In spite of this, Maven has several disadvantages too. First, if the project does not conform to Maven's structure and life cycle, it is extremely difficult to work with; Maven forces the development team to structure their projects according to its requirements, dictating inflexibility. In this respect Ant is far more flexible than Maven.
The second problem with Maven is that it uses a DSL written in XML, which requires writing code if it is to be extended, although Maven has several plugins which can comfortably do almost anything needed in an average Java project. The third problem with Maven is its self-updating default configuration: since the core of Maven is very small, in order to be functional it downloads its own plugins from the internet. Every time Maven is run, it tries to update itself, and the resulting upgrade or downgrade of one of its plugins can sometimes fail unpredictably, which may lead to an unpredictable build. It also takes a long time to restructure a build according to Maven's assumptions. If the versions of the dependencies used by Maven are not effectively tracked, the build may end up with diamond dependency problems and breakages due to Maven changing the version of a library without notice.
Rake: Rake is both a product-oriented and a task-oriented tool. It is a Ruby build tool that uses an internal domain-specific language (DSL) in Ruby. Rake scripts are plain Ruby scripts and use Ruby's API to carry out their tasks. Therefore it creates powerful, platform-independent build files with a good understanding of dependencies. The use of a general-purpose language makes the normal development tools available for maintaining the build scripts. Along with this, if a bug occurs during execution, the Rake build script produces a stack trace to help the developer understand what went wrong. Ruby provides classes which are open for extension, so more methods can be added to Rake's classes from within the build script for debugging purposes. Rake is therefore a general-purpose build scripting tool. There are two general disadvantages of Rake: first, it requires a decent Ruby runtime to be available on the platform, and second, interaction with RubyGems is required.
Buildr: This belongs to a new generation of build tools that includes Buildr, Gradle, and Gant. It adopts the simplicity and power of Rake, but attempts to make the more complex challenges of dependency management and multi-project builds extremely easy. Everything that can be done in Rake can also be done in Buildr; that is, Buildr is built on top of Rake. Buildr is much faster than Maven. Moreover, it is extremely simple to customize tasks and create new ones. Build tools like Buildr or Gradle can be preferred when starting new Java projects or looking for a replacement for Ant or Maven.
Psake: Psake is an internal DSL written in PowerShell, which supports task-oriented dependency networks. It is meant for Windows users.
The deployment pipeline provides an excellent way of organizing the various activities in the build scripts. Instead of a single script containing every operation performed in the deployment, the scripts are divided into separate scripts for each stage in the pipeline. Thus, for the commit stage, a 'commit script' contains all the targets required to compile the application, run the commit test suite, and perform static analysis of the code. Following the commit tests, a functional 'acceptance test script' calls the deployment tool to deploy the application into an appropriate environment and prepares the required data. Scripts for any non-functional tests, such as a stress test or security test, can also be written.
In an application's deployment process, "binaries" are put onto the target environments in which the application is deployed. These binaries are a bunch of files created in the build process; they, any library files the application requires, and all the other static files must be checked into version control. When these files are distributed loosely across the file system, maintenance becomes very inefficient and costly. Therefore "packaging systems" are used, each targeting a single operating system (OS) or a small set of related OSs. This mechanism uses the OS packaging technology to bundle up everything that needs to be deployed. For example, Debian and Ubuntu both use the Debian packaging system; RedHat, SuSE, and other flavours of Linux use the RedHat packaging system; Windows users use the Microsoft Installer system. The deployment process uses environment management tools like Puppet, CFEngine, or Marimba: the package is uploaded to the organization's repository, and these tools install the correct version of the package. In short, packaging the binaries should be automated as part of the deployment pipeline, though some commercial middleware servers use special deployment tools, leading to a hybrid approach.
Any deployment process should leave the target environment in the same exact correct state regardless of the state it finds it in. This is achieved by using a known good baseline environment, which is provisioned either automatically or through virtualization. This environment should include all the appropriate middleware and anything else the application requires to work. The deployment process can then fetch the specified version of the application and deploy it into the environment using the middleware's deployment tools. If the application is built, tested, and integrated as a single piece, then it should be deployed as a single piece. This means that every time a deployment is done, everything from the binaries to the revision they were derived from must be available in version control. Use artifacts that minimize the changes needed at deployment.
Evolve the deployment system incrementally.
In an automated deployment process, the release of the software is done at the push of a button in the deployment system. The system is a collection of simple incremental steps which, over time, create a sophisticated system. The incremental development and evolution of a deployment system starts by getting the operations team and developers to work together to automate the deployment of the application into a testing environment. The operations people should also be comfortable with the tools used for deploying. The developers need to use the same process to deploy and run the application in their development environment. This can be refined by refining the scripts used in the acceptance test environment to deploy and run the application. Next, it is to be ensured that the operations team uses the same tools to deploy the application into staging and finally into production.
Commit Stage
The commit stage begins with a change to the project being committed to the version control system. The commit stage can either fail or succeed; if it succeeds, it produces a collection of binary artifacts and deployable assemblies to be used in the subsequent test and release stages. The commit stage minimizes the time spent on code-level integration, guards the quality of the code, and speeds delivery by letting deployment begin the moment the commit stage passes. When a change is checked into the mainline (trunk) in version control, the continuous integration server detects the change, checks out the source code, and performs a series of tasks. The commit stage is shown in Figure 7.1. The tasks include:
Compiling and running the commit tests against the integrated source code.
Creating binaries that can be deployed into any environment.
Performing any analysis necessary to check the health of the code base.
Creating any other artifacts that will be used later in the deployment pipeline.
The tasks are automated by a build script that is run by the CI server.
The binaries and other reports are stored in the central artifact repository.
The primary goal of the deployment pipeline is to eliminate any build which is not fit to make it into production. The principal goal of the commit stage is to either create deployable artifacts or fail fast and notify the team of the reason for the failure. There are some principles and practices that make an effective commit stage. These include:
Provide fast and useful feedback.
What should break the commit stage?
Tend the commit stage carefully.
Give developers ownership.
Use a build master for very large teams.
Provide fast and useful feedback. Commit test failures occur due to three causes: a) a syntax error introduced in the code, caught by compilation in compiled languages; b) a semantic error introduced in the application, causing a test to fail; c) a problem with the configuration of the application or its environment. Whatever the reason for failure, the commit stage should notify the developers as soon as the commit tests are complete and provide a concise summary of the reasons for the failure, such as a list of failed tests, compile errors, or any other error conditions. Errors can be fixed most easily if they are detected early; the problems found in the commit stage are significantly simpler to fix than those identified later in the process. This makes the deployment pipeline efficient. Every change is integrated with the existing commits on the mainline, and an automated proofreading of the integrated application is performed to identify errors as early as possible.
What should break the commit stage? The commit stage should fail in circumstances that include compilation failures, test breakages, or environmental problems; otherwise, the commit stage should report that everything is okay. In some cases, however, the compilation produces lots of warnings, and the commit stage can easily give a false positive impression of the quality of the application. Therefore, tagging a commit stage run as a binary success or failure is not the best practice; richer information and metrics, such as code coverage, should be gathered upon completion of the commit stage run. This information can be aggregated in a graph or on a sliding scale. This means that continuous review of the application's quality, and enforcement of quality metrics, needs to be done through the commit stage.
Tend the commit stage carefully. The commit stage includes the build script and the scripts to run the unit tests, static analysis tools, and so on. These scripts are very important and need to be maintained as carefully as the codebase itself, because a poor build system is very expensive when its problems reach the higher stages. Any problem in the build stage draws expensive development effort away from the important job of creating the business behaviour of the application, and also slows down anyone who is trying to implement that business behaviour. Therefore, constant work needs to be done to improve the quality, design, and performance of the scripts in the commit stage. Script modularity needs to be maintained: separate tasks from each other, keep the code that runs different stages of the deployment separate, and keep environment-specific configuration separate from the build scripts.
Give developers ownership. All the members of the delivery team should have a sense of ownership of the commit stage. Even so, the presence of an expert specialist can establish good structures, patterns, and use of technology, and transfer that knowledge to the delivery team.
Use a build master for very large teams. With larger and widely distributed teams, it is useful to have somebody play the role of a build master, whose job is to oversee and direct the maintenance of the build and to encourage and enforce build discipline. This is also very important in teams which are new to continuous integration.
The commit stage in a deployment pipeline has both inputs and outputs. The inputs are source code; the outputs are binaries and reports. The reports generated include the test results, needed to work out what went wrong if a test fails, and reports from analysis of the code base, including test coverage, cyclomatic complexity, cut-and-paste analysis, coupling information, and other useful metrics that help establish the quality of the code base. The binaries generated in the commit stage will be reused throughout the pipeline, and are what is eventually released to the user.
Artifact repository. The outputs of the commit stage, the reports and binaries, are stored in an artifact repository, to be reused in the later stages of the pipeline and by the team members. Although these artifacts are to be reused, for several reasons the version control system is not the right place to store them. The artifact repository is a different kind of version control system that keeps only selected artifacts; it need not store the output of any failed stage of the deployment pipeline. It should be possible to trace back from any release of the software to the revisions in version control it was built from, and to connect any instance of the deployment pipeline with the revisions in the version control system. Figure 7.2 shows the use of the artifact repository.
An acceptance criterion for a good configuration management strategy is that the binary creation process should be repeatable. Modern continuous integration servers provide an artifact repository, with settings that allow unwanted artifacts to be removed after some time, mechanisms to specify declaratively which artifacts need to be stored in the repository, and a web interface for the team to access the reports and binaries. Dedicated artifact repositories such as Nexus and other Maven-style repository managers can handle binaries and store reports.
3) On successful completion, the binaries as well as any reports and metadata are saved to the artifact repository.
5) The continuous integration server runs the acceptance tests, reusing the binaries created by the commit stage.
7) The testers obtain the list of all builds which have passed acceptance testing, and at the press of a button an automated process deploys them into the manual testing environment.
9) On successful completion of manual testing, the testers update the status of the release candidate to indicate that it has passed manual testing.
10) Next, the continuous integration server retrieves from the artifact repository the latest candidate that has passed acceptance testing or manual testing, depending on the stage in the pipeline configuration, and deploys the application to the production test environment.
11) Next, the capacity tests are run against the release candidate.
12) On successful completion of the capacity tests, the status of the candidate is updated to "capacity tested".
14) Once the code has passed all the relevant stages, it is ready for release. It can be released by anybody with appropriate authorization.
15) At the conclusion of the release process, the release candidate is marked as "released".
There are some important principles and practices that govern the design of a commit test suite. In the test automation pyramid, shown in Figure 7.3 below, unit tests form the vast majority of the tests. The most important property unit tests should have is that they are fast to execute; the build is failed if the test suite is not sufficiently fast. The second important property of unit tests is that they should cover a large proportion of the code base, around 80%, which gives good confidence that the application will work. Unit tests exercise only small parts of the system, and each unit test should complete in just a few seconds. In comparison to the unit tests, the acceptance tests are few in number and are subdivided into service tests and UI tests. The service tests and the UI tests take longer to execute because they run against the full running system. All of these levels are essential to ensure that the application is working and delivering the expected business value.
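As a hedged illustration of the fast, focused unit tests at the base of the pyramid, here is a small JUnit 5 test of a hypothetical PriceCalculator class; both names are invented for the example:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    // A hypothetical class under test: pure logic, with no UI, database, or network,
    // so each test runs in a fraction of a second and thousands fit in the commit stage.
    class PriceCalculator {
        double totalWithTax(double net, double taxRate) {
            return net * (1 + taxRate);
        }
    }

    class PriceCalculatorTest {
        @Test
        void addsTaxToNetPrice() {
            PriceCalculator calculator = new PriceCalculator();
            assertEquals(110.0, calculator.totalWithTax(100.0, 0.10), 0.0001);
        }
    }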
There are several design strategies required for designing commit tests. These are used to minimize the scope of any given test and keep it focused on testing one aspect of the system. They include:
1. Avoid the user interface
2. Use dependency injection
3. Avoid the database
4. Avoid asynchrony in unit tests
5. Use test doubles
6. Minimize state in tests
7. Faking time
8. Brute force
Avoid the user interface: The user interface is the most obvious place where bugs appear, so the natural tendency is to focus test efforts there. The best recommendation for commit tests, however, is to avoid testing via the UI, because the UI involves a lot of components or levels of software under it; this may be problematic, and it takes effort and time to get all the pieces ready before executing the test itself.
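A minimal sketch of testing just below the UI (the presenter class and its rules are invented): instead of driving a login screen, the commit test calls the logic that the screen would invoke, avoiding the slow setup of the UI stack:

    // Invented presenter-level logic that the UI would normally call.
    class LoginPresenter {
        String submit(String username, String password) {
            if (username.isBlank() || password.isBlank()) {
                return "error: both fields are required";
            }
            return "welcome " + username;
        }
    }

    class LoginPresenterTest {
        public static void main(String[] args) {
            LoginPresenter presenter = new LoginPresenter();

            // The same checks a UI test would make, without starting the UI.
            if (!presenter.submit("ada", "secret").equals("welcome ada")) {
                throw new AssertionError("valid login should be accepted");
            }
            if (!presenter.submit("", "secret").startsWith("error")) {
                throw new AssertionError("blank username should be rejected");
            }
            System.out.println("Presenter behaviour verified without the UI");
        }
    }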
Avoid the database: If automated test code that exercises some layer of the code stores its results in a database, the tests run very slowly. This becomes painful when several tests need to be executed in succession, and it creates infrastructure-setup complexity that makes the whole testing approach harder to establish and manage. The database can be kept out of the tested code base by good layering and separation of concerns, which also results in better code: the developer must be able to separate the code under test from its storage.
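A minimal sketch of that separation (all interface and class names are invented): the business logic depends on a repository interface, so a commit test can substitute an in-memory implementation and never touch a real database:

    import java.util.HashMap;
    import java.util.Map;

    // The code under test depends on this interface, not on a real database.
    interface CustomerRepository {
        void save(String id, String name);
        String findName(String id);
    }

    // An in-memory substitute used only by the tests; the production
    // implementation (not shown) would talk to the actual database.
    class InMemoryCustomerRepository implements CustomerRepository {
        private final Map<String, String> store = new HashMap<>();
        public void save(String id, String name) { store.put(id, name); }
        public String findName(String id) { return store.get(id); }
    }

    class CustomerService {
        private final CustomerRepository repository;
        CustomerService(CustomerRepository repository) { this.repository = repository; }

        String greet(String id) {
            return "Hello, " + repository.findName(id);
        }
    }

    class Demo {
        public static void main(String[] args) {
            CustomerRepository repository = new InMemoryCustomerRepository();
            repository.save("42", "Ada");
            // The test exercises CustomerService without any database running.
            System.out.println(new CustomerService(repository).greet("42"));
        }
    }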
Avoid asynchrony in unit tests: Asynchronous behaviour of the code within the scope of a single test makes the test very difficult to manage. The simplest approach is to avoid the asynchrony entirely by splitting the test: run one test up to the asynchronous boundary, and then start a separate test on the other side. For example, if the system posts a message and then acts on the message, the raw messaging can be wrapped behind an interface. One test can then confirm that the call to the messaging interface happens as expected, and a second test can verify the behaviour of the message handler.
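A sketch of that split, with invented names: instead of one test waiting on a real queue, the first check verifies that the producer sends the expected message through a MessageSender interface, and the second invokes the handler directly and synchronously:

    // The messaging technology is hidden behind this interface so tests
    // never have to wait on a real queue.
    interface MessageSender {
        void send(String message);
    }

    class OrderPlacer {
        private final MessageSender sender;
        OrderPlacer(MessageSender sender) { this.sender = sender; }

        void placeOrder(String orderId) {
            sender.send("ORDER:" + orderId);
        }
    }

    class OrderHandler {
        // The second test calls this directly with a sample message.
        String handle(String message) {
            return message.startsWith("ORDER:") ? "accepted" : "rejected";
        }
    }

    class SplitTests {
        public static void main(String[] args) {
            // Test 1: the producer sends the expected message (recorded, not queued).
            StringBuilder recorded = new StringBuilder();
            new OrderPlacer(recorded::append).placeOrder("42");
            if (!recorded.toString().equals("ORDER:42")) {
                throw new AssertionError("producer did not send the expected message");
            }

            // Test 2: the handler behaves correctly when invoked synchronously.
            if (!new OrderHandler().handle("ORDER:42").equals("accepted")) {
                throw new AssertionError("handler rejected a valid message");
            }
            System.out.println("Both halves of the asynchronous flow tested synchronously");
        }
    }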
Using test doubles: A unit test focuses on a small, closely related set of code components, typically a single class or a few closely related classes, which demonstrates good encapsulation. Problems occur when the class under test sits in the middle of a network of relationships, which may require a lengthy setup of all the surrounding classes. This can be solved by faking the interactions with the class's dependencies. The normal solution is stubbing the code of such dependencies: stubbing is the replacement of part of a system with a simulated version that provides canned responses. Stubs are widely used, even for large-scale components and subsystems. There are several mocking and stubbing tools available, such as Mockito, Rhino Mocks, EasyMock, JMock, NMock, and Mocha.
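As a hedged sketch of stubbing with Mockito (the gateway and converter classes are invented), the dependency is replaced with a mock that returns a canned response, so the class under test runs in isolation with no lengthy setup:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import org.junit.jupiter.api.Test;

    // An invented dependency that would normally call a remote service.
    interface ExchangeRateGateway {
        double rateFor(String currency);
    }

    // The invented class under test, isolated from the real gateway.
    class Converter {
        private final ExchangeRateGateway gateway;
        Converter(ExchangeRateGateway gateway) { this.gateway = gateway; }

        double toEuros(double amount, String currency) {
            return amount * gateway.rateFor(currency);
        }
    }

    class ConverterTest {
        @Test
        void convertsUsingStubbedRate() {
            // Stub the dependency: no network call, an instant canned answer.
            ExchangeRateGateway gateway = mock(ExchangeRateGateway.class);
            when(gateway.rateFor("USD")).thenReturn(0.9);

            assertEquals(90.0, new Converter(gateway).toEuros(100.0, "USD"), 0.0001);
        }
    }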
Minimizing state in tests: Unit tests assert the behaviour of the system, and building up elaborate state around the tests makes effective test design difficult. Some input values are required by any component of the system to get results back, and the test is written by organizing the relevant data structures so that the inputs can be submitted in the correct form and the results compared with the expected output. But without proper care, the system and its associated tests become more and more complex, and it is easy to fall into the trap of building elaborate, hard-to-understand, and hard-to-maintain data structures in order to support the tests. To avoid this and minimize the dependency of the tests on state, it is sensible to maintain a constant focus on the complexity of the environment that must be constructed in order to run each test.
Brute force: Developers always wish for a fast commit cycle, but the commit suite must also be able to identify the most common errors likely to be introduced. This optimization process is iterative and works largely through trial and error. In some cases it is better to accept a slower commit stage than to spend too much time optimizing the tests for speed or reducing the proportion of bugs caught. Normally the aim is to keep the commit stage under 10 minutes; using a build grid can make the commit stage run faster.