Software Engineering Note by Biswajit Saha

Software Engineering is a systematic approach to the design, development, operation, and maintenance of software systems, focusing on objectives like maintainability, efficiency, and reliability. It encompasses various software types, including system software, business software, and embedded software, and follows structured development life cycle (SDLC) models such as Waterfall, Agile, and Spiral. Each model has distinct phases and methodologies to ensure successful software delivery and project management.


Software Engineering

Software is a program or set of programs containing instructions that provide desired
functionality. Engineering is the process of designing and building something that serves
a particular purpose and finds a cost-effective solution to problems.

Software Engineering is a systematic, disciplined, quantifiable study and approach to the
design, development, operation, and maintenance of a software system.

Dual Role of Software:

1. As a product –

• It delivers the computing potential across networks of hardware.
• It enables the hardware to deliver the expected functionality.
• It acts as an information transformer, because it produces, manages, acquires, modifies,
displays, or transmits information.

2. As a vehicle for delivering a product –

• It provides system functionality (e.g., a payroll system).
• It controls other software (e.g., an operating system).
• It helps build other software (e.g., software tools).

Objectives of Software Engineering:


1. Maintainability –
It should be feasible for the software to evolve to meet changing requirements.
2. Efficiency –
The software should not make wasteful use of computing resources such as memory,
processor cycles, etc.
3. Correctness –
A software product is correct if the different requirements as specified in the SRS document
have been correctly implemented.
4. Reusability –
A software product has good reusability if the different modules of the product can easily be
reused to develop new products.
5. Testability –
The software should facilitate both the establishment of test criteria and the evaluation of
the software with respect to those criteria (a small illustration follows this list).
6. Reliability –
It is an attribute of software quality: the extent to which a program can be expected to
perform its intended function over a given period of time.
7. Portability –
In this case, the software can be transferred from one computer system or environment to
another.
8. Adaptability –
The software should allow differing system constraints and user needs to be satisfied by
making changes to the software.
9. Interoperability –
The capability of two or more functional units to process data cooperatively.
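
As a brief illustration of the testability objective, here is a minimal sketch in Python; the compute_discount function and its threshold are hypothetical choices, not part of the original note. The test class states explicit test criteria and evaluates the software against them.

```python
import unittest

def compute_discount(amount: float) -> float:
    """Hypothetical module under test: 10% discount on orders over 100."""
    return amount * 0.9 if amount > 100 else amount

class TestComputeDiscount(unittest.TestCase):
    """Each test method encodes one explicit, checkable test criterion."""

    def test_discount_applied_above_threshold(self):
        self.assertAlmostEqual(compute_discount(200), 180.0)

    def test_no_discount_at_or_below_threshold(self):
        self.assertEqual(compute_discount(100), 100)

if __name__ == "__main__":
    unittest.main()
```

Software written this way is testable because its criteria are explicit and the evaluation against them is automated.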

Program vs Software Product:


1. A program is a set of instructions that are given to a computer in order to achieve a specific
task, whereas software is a program that is made available for commercial use and is
properly documented along with its licensing. Software = Program + Documentation + Licensing.
2. A program is one of the stages involved in the development of software, whereas software
development usually follows a life cycle, which involves a feasibility study of the
project, requirement gathering, development of a prototype, system design, coding, and
testing.

Classification of Software

On the basis of application:

1. System Software –
System software is necessary to manage the computer resources and support the execution of
application programs. Software such as operating systems, compilers, editors, and drivers
comes under this category. A computer cannot function without these.
Operating systems are needed to link the machine-dependent needs of a program with the
capabilities of the machine on which it runs. Compilers translate programs from high-level
language to machine language.
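
As a hedged illustration of this translation idea, the snippet below uses Python's standard dis module to show a high-level function lowered into simpler low-level instructions. Python compiles to interpreter bytecode rather than machine language, so this is only an analogy to what a compiler such as a C compiler does, not the same process.

```python
import dis

def add(a, b):
    # High-level source: a single expression.
    return a + b

# Print the low-level instructions this source is translated into
# (e.g., LOAD_FAST, a binary-add instruction, RETURN_VALUE).
dis.dis(add)
```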

2. Networking and Web Applications Software –
Networking software provides the required support necessary for computers to interact with
each other and with data storage facilities. It is also used when
software runs on a network of computers (such as the World Wide Web). It includes all
network management software, server software, security and encryption software, and software
to develop web-based applications with technologies like HTML, PHP, XML, etc.

3. Embedded Software –
This type of software is embedded into the hardware, normally in Read Only Memory
(ROM), as part of a larger system, and is used to support certain functionality under specific
operating conditions. Examples are software used in instrumentation and control applications
like washing machines, satellites, microwaves, etc.

4. Reservation Software –
A reservation system is primarily used to store and retrieve information and perform
transactions related to air travel, car rental, hotels, or other activities. Such systems also provide
access to bus and railway reservations, although these are not always integrated with the main
system. They are also used to relay computerized information to users in the hotel industry,
helping make reservations and ensuring that the hotel is not overbooked.

5. Business Software –
This category of software is used to support the business applications and is the most widely
used category of software. Examples are software for inventory management, accounts,
banking, hospitals, schools, stock markets, etc.

6. Entertainment Software –
Education and entertainment software provides a powerful tool for educational agencies,
especially those that deal with educating young children. There is a wide range of
entertainment software such as computer games, educational games, translation software,
mapping software, etc.

7. Artificial Intelligence Software –


Software like expert systems, decision support systems, pattern recognition software, artificial
neural networks, etc. comes under this category. Such software uses non-numerical algorithms
to solve complex problems that are not amenable to straightforward computation or analysis.

8. Scientific Software –
Scientific and engineering software satisfies the needs of a scientific or engineering user to
perform domain-specific tasks. Such software is written for specific applications using
principles, techniques and formulae specific to that field. Examples are software like
MATLAB, AUTOCAD, PSPICE, ORCAD, etc.

9. Utilities Software –
The programs coming under this category perform specific tasks and are different from other
software in terms of size, cost and complexity. Examples are anti-virus software, voice
recognition software, compression programs, etc.

10. Document Management Software –


A Document Management Software is used to track, manage and store documents in order to
reduce the paperwork. Such systems are capable of keeping a record of the various versions
created and modified by different users (history tracking). They commonly provide storage,
versioning, metadata, security, as well as indexing and retrieval capabilities.

On the basis of copyright:

1. Commercial –
It represents the majority of software, which we purchase from software companies,
commercial computer stores, etc. In this case, when users buy software, they acquire a
license key to use it. Users are not allowed to make copies of the software. The copyright
of the program is owned by the company.

2. Shareware –
Shareware software is also covered under copyright, but purchasers are allowed to make
and distribute copies on the condition that, after testing the software, if the purchaser adopts
it for use, then they must pay for it.
In both of the above types of software, changes to the software are not allowed.

3. Freeware –
In general, according to freeware software licenses, copies of the software can be made both
for archival and distribution purposes but here, distribution cannot be for making a profit.
Derivative works and modifications to the software are allowed and encouraged. Decompiling
of the program code is also allowed without the explicit permission of the copyright holder.

4. Public Domain –
In case of public domain software, the original copyright holder explicitly relinquishes all
rights to the software. Hence software copies can be made both for archival and distribution
purposes with no restrictions on distribution. Modifications to the software and reverse
engineering are also allowed.
Need of SDLC
The development team must select a suitable life cycle model for a particular project and then
adhere to it.

Without a precise life cycle model, the development of a software product would not proceed in
a systematic and disciplined manner. When a team is developing a software product, there must
be a clear understanding among team members about when and what to do. Otherwise, it
would lead to chaos and project failure. This problem can be illustrated with an example.
Suppose a software development problem is divided into various parts and the parts are assigned to
the team members. From then on, suppose each team member is allowed the freedom to
carry out the roles assigned to them in whatever way they like. It is possible that one
member might start writing the code for his part, another might choose to prepare the test
documents first, and some other engineer might begin with the design phase of the roles assigned
to him. This would be a perfect recipe for project failure.

A software life cycle model describes entry and exit criteria for each phase. A phase can begin
only if its stage-entry criteria have been fulfilled. So without a software life cycle model, the
entry and exit criteria for a stage cannot be recognized. Without software life cycle models, it
becomes tough for software project managers to monitor the progress of the project.

What is SDLC?

SDLC is a process followed for a software project, within a software organization. It consists of
a detailed plan describing how to develop, maintain, replace and alter or enhance specific
software. The life cycle defines a methodology for improving the quality of software and the
overall development process.
The following figure is a graphical representation of the various stages of a typical SDLC.
A typical Software Development Life Cycle consists of the following stages −

Stage 1: Planning and Requirement Analysis

Requirement analysis is the most important and fundamental stage in SDLC. It is performed by
the senior members of the team with inputs from the customer, the sales department, market
surveys and domain experts in the industry. This information is then used to plan the basic
project approach and to conduct a product feasibility study in the economic, operational and
technical areas.
Planning for the quality assurance requirements and identification of the risks associated with
the project is also done in the planning stage. The outcome of the technical feasibility study is to
define the various technical approaches that can be followed to implement the project
successfully with minimum risks.

Stage 2: Defining Requirements

Once the requirement analysis is done, the next step is to clearly define and document the
product requirements and get them approved by the customer or the market analysts. This is
done through an SRS (Software Requirement Specification) document which consists of all
the product requirements to be designed and developed during the project life cycle.

Stage 3: Designing the Product Architecture

SRS is the reference for product architects to come out with the best architecture for the product
to be developed. Based on the requirements specified in SRS, usually more than one design
approach for the product architecture is proposed and documented in a DDS - Design Document
Specification.
This DDS is reviewed by all the important stakeholders and, based on various parameters such
as risk assessment, product robustness, design modularity, budget and time constraints, the best
design approach is selected for the product.
A design approach clearly defines all the architectural modules of the product along with their
communication and data-flow representation with the external and third-party modules (if any).
The internal design of all the modules of the proposed architecture should be clearly defined,
down to the minutest detail, in the DDS.

Stage 4: Building or Developing the Product

In this stage of SDLC the actual development starts and the product is built. The programming
code is generated as per DDS during this stage. If the design is performed in a detailed and
organized manner, code generation can be accomplished without much hassle.
Developers must follow the coding guidelines defined by their organization and programming
tools like compilers, interpreters, debuggers, etc. are used to generate the code. Different high-
level programming languages such as C, C++, Pascal, Java and PHP are used for coding. The
programming language is chosen with respect to the type of software being developed.

Stage 5: Testing the Product

In modern SDLC models, testing activities are involved in all the stages, so this stage is
usually a subset of all of them. However, this stage refers to the testing-only stage of the
product, where product defects are reported, tracked, fixed and retested until the product
reaches the quality standards defined in the SRS.

Stage 6: Deployment in the Market and Maintenance

Once the product is tested and ready to be deployed it is released formally in the appropriate
market. Sometimes product deployment happens in stages as per the business strategy of that
organization. The product may first be released in a limited segment and tested in the real
business environment (UAT- User acceptance testing).
Then, based on the feedback, the product may be released as it is or with suggested
enhancements in the targeted market segment. After the product is released in the market, its
maintenance is done for the existing customer base.
SDLC Models
Software Development Life Cycle (SDLC) is a conceptual model used in project management
that defines the stages included in an information system development project, from an initial
feasibility study to the maintenance of the completed application.

There are various software development life cycle models defined and designed, which are
followed during the software development process. These models are also called "Software
Development Process Models." Each process model follows a series of phases unique to its type
to ensure success in the steps of software development.

Here are some important SDLC models:

Waterfall Model

The waterfall is a universally accepted SDLC model. In this method, the whole process of
software development is divided into various phases. The waterfall model is a sequential
software development model in which development is seen as flowing steadily downwards (like
a waterfall) through the steps of requirements analysis, design, implementation, testing
(validation), integration, and maintenance. Linear ordering of activities has some significant
consequences. First, to identify the end of a phase and the beginning of the next, some
certification techniques have to be employed at the end of each step. This is usually done
through verification and validation, which ensure that the output of a stage is consistent with its
input (which is the output of the previous step), and that the output of the stage is consistent with
the overall requirements of the system.

RAD Model
The RAD or Rapid Application Development process is an adaptation of the waterfall model; it
targets developing software in a short period. The RAD model is based on the concept that a
better system can be developed in less time by using focus groups to gather system
requirements. Its phases are:

o Business Modeling
o Data Modeling
o Process Modeling
o Application Generation
o Testing and Turnover

Spiral Model

The spiral model is a risk-driven process model. This SDLC model helps the team to adopt
elements of one or more process models, such as waterfall, incremental, prototyping, etc. The
spiral technique is a combination of rapid prototyping and concurrency in design and
development activities.

Each cycle in the spiral begins with the identification of objectives for that cycle, the different
alternatives that are possible for achieving the goals, and the constraints that exist. This is the
first quadrant of the cycle (upper-left quadrant).

The next step in the cycle is to evaluate these different alternatives based on the objectives and
constraints. The focus of evaluation in this step is based on the risk perception for the project.

The next step is to develop strategies that resolve the uncertainties and risks. This step may
involve activities such as benchmarking, simulation, and prototyping.

V-Model

In this type of SDLC model, the testing and development steps are planned in parallel. So, there
are verification phases on one side and validation phases on the other side. The two sides are
joined by the coding phase.

Incremental Model

The incremental model is not a separate model. It is essentially a series of waterfall cycles. The
requirements are divided into groups at the start of the project. For each group, the SDLC model
is followed to develop software. The SDLC process is repeated, with each release adding more
functionality until all requirements are met. In this method, each cycle acts as the maintenance
phase for the previous software release. A modification to the incremental model allows
development cycles to overlap; that is, a subsequent cycle may begin before the previous cycle
is complete.
Agile Model

Agile methodology is a practice which promotes continuous interaction of development and
testing during the SDLC process of any project. In the Agile method, the entire project is divided
into small incremental builds. All of these builds are provided in iterations, and each iteration
lasts from one to three weeks.

Agile processes are characterized in a manner that addresses several key assumptions
about the bulk of software projects:

1. It is difficult to predict in advance which software requirements will persist and which will
change. It is equally difficult to predict how user priorities will change as the project
proceeds.
2. For many types of software, design and development are interleaved. That is, both
activities should be performed in tandem so that design models are proven as they are
created. It is difficult to predict how much design is necessary before construction is
used to prove the design.
3. Analysis, design, development, and testing are not as predictable (from a planning point
of view) as we might like.

Iterative Model

It is a particular implementation of a software development life cycle that focuses on an initial,
simplified implementation, which then progressively gains more complexity and a broader
feature set until the final system is complete. In short, iterative development is a way of breaking
down the software development of a large application into smaller pieces.

Big bang model

The Big Bang model focuses all resources on software development and coding, with little
or no planning. The requirements are understood and implemented as they come.

This model works best for small projects with small development teams working
together. It is also useful for academic software development projects. It is an ideal model when
requirements are either unknown or the final release date is not fixed.

Prototype Model

The prototyping model starts with the requirements gathering. The developer and the user meet
and define the purpose of the software, identify the needs, etc.

A 'quick design' is then created. This design focuses on those aspects of the software that will be
visible to the user. It then leads to the development of a prototype. The customer then checks the
prototype, and any modifications or changes that are needed are made to the prototype.
Looping takes place in this step, and better versions of the prototype are created. These are
continuously shown to the user so that any new changes can be updated in the prototype. This
process continues until the customer is satisfied with the system. Once the user is satisfied, the
prototype is converted into the actual system, with all considerations for quality and security.
Waterfall Model:
The classical waterfall model is the basic software development life cycle model. It is very
simple but idealistic. Earlier this model was very popular, but nowadays it is not used directly.
However, it is very important because all the other software development life cycle models are
based on the classical waterfall model.
The classical waterfall model divides the life cycle into a set of phases. This model considers
that one phase can be started only after the completion of the previous phase; that is, the output
of one phase will be the input to the next phase. Thus the development process can be
considered as a sequential flow, as in a waterfall. Here the phases do not overlap with each
other. The different sequential phases of the classical waterfall model are shown in the figure
below.

Let us now learn about each of these phases in brief detail:


1. Feasibility Study: The main goal of this phase is to determine whether it would be
financially and technically feasible to develop the software.
The feasibility study involves understanding the problem and then determining the various
possible strategies to solve it. These different identified solutions are analyzed
based on their benefits and drawbacks. The best solution is chosen, and all the other phases
are carried out as per this solution strategy.

2. Requirements analysis and specification: The aim of the requirement analysis and
specification phase is to understand the exact requirements of the customer and document
them properly. This phase consists of two different activities.
• Requirement gathering and analysis: Firstly all the requirements regarding the software
are gathered from the customer and then the gathered requirements are analyzed. The goal
of the analysis part is to remove incompleteness (an incomplete requirement is one in
which some parts of the actual requirements have been omitted) and inconsistencies
(an inconsistent requirement is one in which some part of the requirement contradicts
some other part).
• Requirement specification: The analyzed requirements are documented in a software
requirement specification (SRS) document. The SRS document serves as a contract between
the development team and the customer. Any future dispute between the customers and the
developers can be settled by examining the SRS document.

3. Design: The aim of the design phase is to transform the requirements specified in the SRS
document into a structure that is suitable for implementation in some programming
language.

4. Coding and Unit testing: In the coding phase, the software design is translated into source
code using any suitable programming language; thus each designed module is coded. The aim
of the unit testing phase is to check whether each module is working properly.

5. Integration and System testing: Integration of the different modules is undertaken soon after
they have been coded and unit tested. Integration of the various modules is carried out
incrementally over a number of steps. During each integration step, previously planned
modules are added to the partially integrated system and the resultant system is tested.
Finally, after all the modules have been successfully integrated and tested, the full working
system is obtained and system testing is carried out on it.
System testing consists of three different kinds of testing activities, as described below:

• Alpha testing: Alpha testing is the system testing performed by the development team.
• Beta testing: Beta testing is the system testing performed by a friendly set of customers.
• Acceptance testing: After the software has been delivered, the customer performs
acceptance testing to determine whether to accept the delivered software or to reject it.

6. Maintenance: Maintenance is the most important phase of a software life cycle. The effort
spent on maintenance is about 60% of the total effort spent to develop the full software. There
are basically three types of maintenance:
• Corrective Maintenance: This type of maintenance is carried out to correct errors that
were not discovered during the product development phase.
• Perfective Maintenance: This type of maintenance is carried out to enhance the
functionalities of the system based on the customer’s request.
• Adaptive Maintenance: Adaptive maintenance is usually required for porting the
software to work in a new environment, such as on a new computer platform or with
a new operating system.
Advantages of Classical Waterfall Model

Classical waterfall model is an idealistic model for software development. It is very simple, so
it can be considered as the basis for other software development life cycle models. Below are
some of the major advantages of this SDLC model:
• This model is very simple and is easy to understand.
• Phases in this model are processed one at a time.
• Each stage in the model is clearly defined.
• This model has very clear and well-understood milestones.
• Process, actions and results are very well documented.
• Reinforces good habits: define-before-design, design-before-code.
• This model works well for smaller projects and projects where requirements are well
understood.

Drawbacks of Classical Waterfall Model


The classical waterfall model suffers from various shortcomings; basically, we can't use it in
real projects. Instead, we use other software development life cycle models which are based on
the classical waterfall model. Below are some major drawbacks of this model:
• No feedback path: In the classical waterfall model, the evolution of software from one phase
to the next is like a waterfall. It assumes that no error is ever committed by developers
during any phase. Therefore, it does not incorporate any mechanism for error correction.
• Difficult to accommodate change requests: This model assumes that all the customer
requirements can be completely and correctly defined at the beginning of the project, but
actually customers’ requirements keep on changing with time. It is difficult to accommodate
any change requests after the requirements specification phase is complete.
• No overlapping of phases: This model recommends that a new phase start only after the
completion of the previous phase. But in real projects, this can't be maintained. To increase
efficiency and reduce cost, phases may overlap.

Waterfall Model - Application

Every software developed is different and requires a suitable SDLC approach to be followed
based on the internal and external factors. Some situations where the use of Waterfall model is
most appropriate are −
• Requirements are very well documented, clear and fixed.
• Product definition is stable.
• Technology is understood and is not dynamic.
• There are no ambiguous requirements.
• Ample resources with required expertise are available to support the product.
• The project is short.
Iterative Waterfall Model

In practical software development projects, the classical waterfall model is hard to use. The
iterative waterfall model can therefore be thought of as incorporating the necessary changes to
the classical waterfall model to make it usable in practical software development projects. It is
almost the same as the classical waterfall model, except that some changes are made to increase
the efficiency of the software development.
The iterative waterfall model provides feedback paths from every phase to its preceding phases,
which is the main difference from the classical waterfall model.

Feedback paths introduced by the iterative waterfall model are shown in the figure below.

When errors are detected at some later phase, these feedback paths allow correcting errors
committed during an earlier phase. The feedback paths allow the phase in which the errors were
committed to be reworked, and these changes are reflected in the later phases. However,
there is no feedback path to the feasibility study stage, because once a project has been
undertaken, it is not given up easily.
It is good to detect errors in the same phase in which they are committed. It reduces the effort
and time required to correct the errors.
Phase Containment of Errors: The principle of detecting errors as close to their points of
commitment as possible is known as Phase containment of errors.

Advantages of Iterative Waterfall Model

• Feedback Path: In the classical waterfall model, there are no feedback paths, so there is no
mechanism for error correction. But in iterative waterfall model feedback path from one phase
to its preceding phase allows correcting the errors that are committed and these changes are
reflected in the later phases.
• Simple: Iterative waterfall model is very simple to understand and use. That’s why it is one of
the most widely used software development models.
Drawbacks of Iterative Waterfall Model

• Difficult to incorporate change requests: The major drawback of the iterative waterfall
model is that all the requirements must be clearly stated before the development phase
starts. The customer may change requirements after some time, but the iterative waterfall
model does not leave any scope to incorporate change requests that are made after the
development phase starts.
• Incremental delivery not supported: In the iterative waterfall model, the full software is
completely developed and tested before delivery to the customer. There is no scope for any
intermediate delivery. So, customers have to wait long for getting the software.
• Overlapping of phases not supported: The iterative waterfall model assumes that one phase
can start only after completion of the previous phase. But in real projects, phases may overlap
to reduce the effort and time needed to complete the project.
• Risk handling not supported: Projects may suffer from various types of risks. But, Iterative
waterfall model has no mechanism for risk handling.
• Limited customer interactions: Customer interaction occurs at the start of the project at the
time of requirement gathering and at project completion at the time of software delivery.
These fewer interactions with the customers may lead to many problems as the finally
developed software may differ from the customers’ actual requirements.
Spiral Model

Spiral model is one of the most important Software Development Life Cycle models, which
provides support for Risk Handling. In its diagrammatic representation, it looks like a spiral
with many loops. The exact number of loops of the spiral is unknown and can vary from
project to project. Each loop of the spiral is called a Phase of the software development
process. The exact number of phases needed to develop the product can be varied by the
project manager depending upon the project risks. As the project manager dynamically
determines the number of phases, so the project manager has an important role to develop a
product using the spiral model.
The Radius of the spiral at any point represents the expenses(cost) of the project so far, and the
angular dimension represents the progress made so far in the current phase.
The diagram below shows the different phases of the Spiral Model:

Each phase of the Spiral Model is divided into four quadrants as shown in the above figure.
The functions of these four quadrants are discussed below-

1. Objectives determination and identification of alternative solutions: Requirements are
gathered from the customers and the objectives are identified, elaborated, and analyzed at the
start of every phase. Then alternative solutions possible for the phase are proposed in this
quadrant.
2. Identify and resolve Risks: During the second quadrant, all the possible solutions are
evaluated to select the best possible solution. Then the risks associated with that solution are
identified and the risks are resolved using the best possible strategy. At the end of this
quadrant, the Prototype is built for the best possible solution.

3. Develop next version of the Product: During the third quadrant, the identified features are
developed and verified through testing. At the end of the third quadrant, the next version of
the software is available.

4. Review and plan for the next Phase: In the fourth quadrant, the Customers evaluate the so
far developed version of the software. In the end, planning for the next phase is started.

Risk Handling in Spiral Model


A risk is any adverse situation that might affect the successful completion of a software
project. The most important feature of the spiral model is handling these unknown risks after
the project has started. Such risk resolutions are easier done by developing a prototype. The
spiral model supports coping with risks by providing the scope to build a prototype at every
phase of the software development.
The Prototyping Model also supports risk handling, but the risks must be identified
completely before the start of the development work of the project. In real life, however,
project risks may occur after the development work starts; in that case, we cannot use the
Prototyping Model. In each phase of the Spiral Model, the features of the product are
elaborated and analyzed, and the risks at that point in time are identified and resolved through
prototyping. Thus, this model is much more flexible compared to other SDLC models.

Why Spiral Model is called Meta Model?


The Spiral model is called a Meta-Model because it subsumes all the other SDLC models. For
example, a single loop spiral actually represents the Iterative Waterfall Model. The spiral
model incorporates the stepwise approach of the Classical Waterfall Model. The spiral model
uses the approach of the Prototyping Model by building a prototype at the start of each phase
as a risk-handling technique. Also, the spiral model can be considered as supporting the
evolutionary model – the iterations along the spiral can be considered as evolutionary levels
through which the complete system is built.

Advantages of Spiral Model:


Below are some advantages of the Spiral Model.

1. Risk Handling: For projects with many unknown risks that occur as the development
proceeds, the Spiral Model is the best development model to follow due to the risk
analysis and risk handling at every phase.

2. Good for large projects: It is recommended to use the Spiral Model in large and complex
projects.
3. Flexibility in Requirements: Change requests in the requirements at a later phase can be
incorporated accurately by using this model.

4. Customer Satisfaction: Customers can see the development of the product at an early phase
of the software development and thus become habituated with the system by using it before
completion of the total product.

Disadvantages of Spiral Model:


Below are some main disadvantages of the spiral model.

1. Complex: The Spiral Model is much more complex than other SDLC models.

2. Expensive: Spiral Model is not suitable for small projects as it is expensive.

3. Too much dependency on Risk Analysis: The successful completion of the project is
very much dependent on risk analysis. Without highly experienced experts, a project
developed using this model is likely to fail.

4. Difficulty in time management: As the number of phases is unknown at the start of the
project, so time estimation is very difficult.

Spiral Model Application

The Spiral Model is widely used in the software industry as it is in sync with the natural
development process of any product, i.e. learning with maturity which involves minimum risk
for the customer as well as the development firms.
The following pointers explain the typical uses of a Spiral Model −
• When there is a budget constraint and risk evaluation is important.
• For medium to high-risk projects.
• Long-term project commitment because of potential changes to economic priorities as
the requirements change with time.
• Customer is not sure of their requirements which is usually the case.
• Requirements are complex and need evaluation to get clarity.
• New product line which should be released in phases to get enough customer feedback.
• Significant changes are expected in the product during the development cycle.

Incremental process model:

The incremental process model is also known as the Successive Versions model.

First, a simple working system implementing only a few basic features is built and then
delivered to the customer. Thereafter, many successive iterations/versions are
implemented and delivered to the customer until the desired system is released.

A, B, C are modules of Software Product that are incrementally developed and delivered.
Life cycle activities –
Requirements of Software are first broken down into several modules that can be incrementally
constructed and delivered. At any time, the plan is made just for the next increment and not for
any long-term plan. Therefore, it is easier to modify a version as per the needs of the
customer. The development team first undertakes to develop the core features (those that do not
need services from other features) of the system.
Once the core features are fully developed, then these are refined to increase levels of
capabilities by adding new functions in Successive versions. Each incremental version is
usually developed using an iterative waterfall model of development.
As each successive version of the software is constructed and delivered, feedback from
the customer is taken and incorporated into the next version. Each version
of the software has additional features over the previous ones.
After requirements gathering and specification, the requirements are split into several
different versions; starting with version 1, in each successive increment the next version is
constructed and deployed at the customer site. After the last version (version n), the software is
deployed at the client site.
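
As a hedged sketch of how successive versions build on one another (the feature shown is invented for illustration, not from the note): version 1 delivers only the core feature, and version 2 refines it by adding a capability on top of the version 1 code rather than replacing it.

```python
# Version 1: core feature only - total an order's prices.
def order_total_v1(prices):
    return sum(prices)

# Version 2: the successive increment adds tax support,
# reusing the already-delivered core feature.
def order_total_v2(prices, tax_rate=0.0):
    return order_total_v1(prices) * (1 + tax_rate)

print(order_total_v1([10, 20]))       # version 1 delivered first -> 30
print(order_total_v2([10, 20], 0.1))  # next increment adds tax -> 33.0
```
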
When to use this –
1. When there are funding schedule, risk, or program complexity concerns, or a need for early
realization of benefits.
2. When requirements are known up-front.
3. For projects with lengthy development schedules.
4. For projects with new technology.

Advantages –
• Error Reduction (core modules are used by the customer from the beginning of the phase
and then these are tested thoroughly)
• Uses divide and conquer for breakdown of tasks.
• Lowers initial delivery cost.
• Incremental Resource Deployment.

Disadvantages –
• Requires good planning and design.
• Total cost is not lower.
• Well defined module interfaces are required.
RAD (Rapid Application Development) Model
RAD is a linear sequential software development process model that emphasizes a concise
development cycle using a component-based construction approach. If the requirements are well
understood and described, and the project scope is constrained, the RAD process enables a
development team to create a fully functional system within a very short time period.

RAD (Rapid Application Development) is a concept that products can be developed faster and of
higher quality through:

o Gathering requirements using workshops or focus groups
o Prototyping and early, reiterative user testing of designs
o The re-use of software components
o A rigidly paced schedule that defers design improvements to the next product version
o Less formality in reviews and other team communication

The various phases of RAD are as follows:

1. Business Modelling: The information flow among business functions is defined by answering
questions like what data drives the business process, what data is generated, who generates it,
where does the information go, who processes it, and so on.

2. Data Modelling: The data collected from business modeling is refined into a set of data
objects (entities) that are needed to support the business. The attributes (characteristics of each
entity) are identified, and the relations between these data objects (entities) are defined.
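
A minimal sketch of what such a data model might look like, assuming hypothetical Customer and Order entities (the names and attributes are invented for illustration): each dataclass is an entity, its fields are the attributes, and the orders field models a one-to-many relation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Order:
    """Entity: an order (attributes: order_id, total)."""
    order_id: int
    total: float

@dataclass
class Customer:
    """Entity: a customer; 'orders' models a one-to-many relation to Order."""
    customer_id: int
    name: str
    orders: List[Order] = field(default_factory=list)
```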


3. Process Modelling: The data objects defined in the data modeling phase are
transformed to achieve the information flow necessary to implement a business function.
Processing descriptions are created for adding, modifying, deleting, or retrieving a data object.
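
A minimal sketch of such processing descriptions, assuming a simple in-memory store (the store and function names are hypothetical): each function adds, retrieves, or deletes a data object, mirroring the operations named above.

```python
from typing import Dict, Optional

# Hypothetical in-memory store of customer names keyed by id.
customers: Dict[int, str] = {}

def add_customer(customer_id: int, name: str) -> None:
    customers[customer_id] = name      # add or modify a data object

def get_customer(customer_id: int) -> Optional[str]:
    return customers.get(customer_id)  # retrieve a data object

def delete_customer(customer_id: int) -> None:
    customers.pop(customer_id, None)   # delete a data object

add_customer(1, "Alice")
print(get_customer(1))   # -> Alice
delete_customer(1)
```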

4. Application Generation: Automated tools are used to facilitate construction of the software,
including the use of fourth-generation (4GL) techniques.

5. Testing & Turnover: Many of the programming components have already been tested, since
RAD emphasizes reuse. This reduces the overall testing time. But the new components must be
tested, and all interfaces must be fully exercised.

When to use RAD Model?

o When a system needs to be created in a modularized fashion within a short span of time
(2-3 months).
o When the requirements are well known.
o When the technical risk is limited.
o When the budget allows the use of automated code-generating tools.

Advantage of RAD Model

o This model is flexible to change.
o In this model, changes are easily accommodated.
o Each phase in RAD brings the highest-priority functionality to the customer.
o It reduces development time.
o It increases the reusability of components.

Disadvantage of RAD Model

o It requires highly skilled designers.
o Not all applications are compatible with RAD.
o For smaller projects, we cannot use the RAD model.
o It is not suitable when technical risk is high.
o It requires user involvement.
Agile Model
Agile SDLC model is a combination of iterative and incremental process models with focus on
process adaptability and customer satisfaction by rapid delivery of working software product.
Agile Methods break the product into small incremental builds. These builds are provided in
iterations. Each iteration typically lasts from about one to three weeks. Every iteration involves
cross functional teams working simultaneously on various areas like −

• Planning
• Requirements Analysis
• Design
• Coding
• Unit Testing and
• Acceptance Testing.
At the end of the iteration, a working product is displayed to the customer and important
stakeholders.

What is Agile?

Agile model believes that every project needs to be handled differently and the existing methods
need to be tailored to best suit the project requirements. In Agile, the tasks are divided to time
boxes (small time frames) to deliver specific features for a release.
Iterative approach is taken and working software build is delivered after each iteration. Each
build is incremental in terms of features; the final build holds all the features required by the
customer.
Here is a graphical illustration of the Agile Model −
The Agile thought process started early in software development and became
popular with time due to its flexibility and adaptability.
The most popular Agile methods include Rational Unified Process (1994), Scrum (1995),
Crystal Clear, Extreme Programming (1996), Adaptive Software Development, Feature Driven
Development, and Dynamic Systems Development Method (DSDM) (1995). These are now
collectively referred to as Agile Methodologies, after the Agile Manifesto was published in
2001.
Following are the Agile Manifesto principles −
• Individuals and interactions − In Agile development, self-organization and motivation
are important, as are interactions like co-location and pair programming.
• Working software − Demo working software is considered the best means of
communication with the customers to understand their requirements, instead of just
depending on documentation.
• Customer collaboration − As the requirements cannot be gathered completely in the
beginning of the project due to various factors, continuous customer interaction is very
important to get proper product requirements.
• Responding to change − Agile Development is focused on quick responses to change
and continuous development.

Agile Vs Traditional SDLC Models

Agile is based on adaptive software development methods, whereas traditional SDLC
models like the waterfall model are based on a predictive approach. Predictive teams in the
traditional SDLC models usually work with detailed planning and have a complete forecast of
the exact tasks and features to be delivered in the next few months or during the product life
cycle.
Predictive methods entirely depend on the requirement analysis and planning done in the
beginning of cycle. Any changes to be incorporated go through a strict change control
management and prioritization.
Agile uses an adaptive approach where there is no detailed planning and there is clarity on
future tasks only in respect of what features need to be developed. There is feature driven
development and the team adapts to the changing product requirements dynamically. The
product is tested very frequently, through the release iterations, minimizing the risk of any
major failures in future.
Customer Interaction is the backbone of this Agile methodology, and open communication
with minimum documentation are the typical features of Agile development environment. The
agile teams work in close collaboration with each other and are most often located in the same
geographical location.

Agile Model - Pros and Cons


Agile methods are widely accepted in the software world today. However, this method
may not always be suitable for all products. Here are some pros and cons of the Agile model.
The advantages of the Agile Model are as follows −
• Is a very realistic approach to software development.
• Promotes teamwork and cross training.
• Functionality can be developed rapidly and demonstrated.
• Resource requirements are minimum.
• Suitable for fixed or changing requirements
• Delivers early partial working solutions.
• Good model for environments that change steadily.
• Minimal rules, documentation easily employed.
• Enables concurrent development and delivery within an overall planned context.
• Little or no planning required.
• Easy to manage.
• Gives flexibility to developers.
The disadvantages of the Agile Model are as follows −
• Not suitable for handling complex dependencies.
• More risk of sustainability, maintainability and extensibility.
• An overall plan, an agile leader and agile PM practice is a must without which it will not
work.
• Strict delivery management dictates the scope, functionality to be delivered, and
adjustments to meet the deadlines.
• Depends heavily on customer interaction, so if customer is not clear, team can be driven
in the wrong direction.
• There is a very high individual dependency, since there is minimum documentation
generated.
• Transfer of technology to new team members may be quite challenging due to lack of
documentation.
Iterative Model:
In the Iterative model, iterative process starts with a simple implementation of a small set of the
software requirements and iteratively enhances the evolving versions until the complete system
is implemented and ready to be deployed.
An iterative life cycle model does not attempt to start with a full specification of requirements.
Instead, development begins by specifying and implementing just part of the software, which is
then reviewed to identify further requirements. This process is then repeated, producing a new
version of the software at the end of each iteration of the model.

Iterative Model - Design

Iterative process starts with a simple implementation of a subset of the software requirements
and iteratively enhances the evolving versions until the full system is implemented. At each
iteration, design modifications are made and new functional capabilities are added. The basic
idea behind this method is to develop a system through repeated cycles (iterative) and in smaller
portions at a time (incremental).
The following illustration is a representation of the Iterative and Incremental model −

Iterative and Incremental development is a combination of both iterative design (or iterative
method) and the incremental build model for development. "During software development, more
than one iteration of the software development cycle may be in progress at the same time." This
process may be described as an "evolutionary acquisition" or "incremental build" approach.
In this incremental model, the whole requirement is divided into various builds. During each
iteration, the development module goes through the requirements, design, implementation and
testing phases. Each subsequent release of the module adds function to the previous release. The
process continues till the complete system is ready as per the requirement.
The key to a successful use of an iterative software development lifecycle is rigorous validation
of requirements, and verification & testing of each version of the software against those
requirements within each cycle of the model. As the software evolves through successive
cycles, tests must be repeated and extended to verify each version of the software.
Iterative Model - Application

Like other SDLC models, Iterative and incremental development has some specific applications
in the software industry. This model is most often used in the following scenarios −
• Requirements of the complete system are clearly defined and understood.
• Major requirements must be defined; however, some functionalities or requested
enhancements may evolve with time.
• There is a time to the market constraint.
• A new technology is being used and is being learnt by the development team while
working on the project.
• Resources with needed skill sets are not available and are planned to be used on contract
basis for specific iterations.
• There are some high-risk features and goals which may change in the future.

Iterative Model - Pros and Cons

The advantage of this model is that there is a working model of the system at a very early stage
of development, which makes it easier to find functional or design flaws. Finding issues at an
early stage of development enables to take corrective measures in a limited budget.
The disadvantage with this SDLC model is that it is applicable only to large and bulky software
development projects. This is because it is hard to break a small software system into further
small serviceable increments/modules.
The advantages of the Iterative and Incremental SDLC Model are as follows −
• Some working functionality can be developed quickly and early in the life cycle.
• Results are obtained early and periodically.
• Parallel development can be planned.
• Progress can be measured.
• Less costly to change the scope/requirements.
• Testing and debugging during smaller iteration is easy.
• Risks are identified and resolved during iteration; and each iteration is an easily managed
milestone.

The disadvantages of the Iterative and Incremental SDLC Model are as follows −
• More resources may be required.
• Although the cost of change is lower, it is not very suitable for changing requirements.
• More management attention is required.
• System architecture or design issues may arise because not all requirements are gathered
in the beginning of the entire life cycle.
• Defining increments may require definition of the complete system.
• Not suitable for smaller projects.
• Management complexity is more.
• The end of the project may not be known, which is a risk.
Prototype Model
The prototype model requires that, before carrying out the development of the actual software, a
working prototype of the system should be built. A prototype is a toy implementation of the
system. A prototype usually turns out to be a very crude version of the actual system, possibly
exhibiting limited functional capabilities, low reliability, and inefficient performance as
compared to the actual software. In many instances, the client only has a general view of what is
expected from the software product. In such a scenario, where there is an absence of detailed
information regarding the input to the system, the processing needs, and the output
requirements, the prototyping model may be employed.

Steps of Prototype Model

1. Requirement Gathering and Analysis
2. Quick Design
3. Build a Prototype
4. Assessment or User Evaluation
5. Prototype Refinement
6. Engineer Product

Advantage of Prototype Model

1. Reduces the risk of incorrect user requirements
2. Good where requirements are changing/uncommitted
3. Regular visible progress aids management
4. Supports early product marketing
5. Reduces maintenance cost
6. Errors can be detected much earlier, as the system is built side by side

Disadvantage of Prototype Model

1. An unstable/badly implemented prototype often becomes the final product.
2. Requires extensive customer collaboration
o Costs the customer money
o Needs a committed customer
o Difficult to finish if the customer withdraws
o May be too customer-specific, with no broad market
3. Difficult to know how long the project will last.
4. Easy to fall back into code-and-fix without proper requirement analysis, design,
customer evaluation, and feedback.
5. Prototyping tools are expensive.
6. Special tools and techniques are required to build a prototype.
7. It is a time-consuming process.
V-Model:
The V-model is an SDLC model where execution of processes happens in a sequential manner
in a V-shape. It is also known as Verification and Validation model.
The V-Model is an extension of the waterfall model and is based on the association of a testing
phase for each corresponding development stage. This means that for every single phase in the
development cycle, there is a directly associated testing phase. This is a highly-disciplined
model and the next phase starts only after completion of the previous phase.

V-Model - Design

Under the V-Model, the corresponding testing phase of the development phase is planned in
parallel. So, there are Verification phases on one side of the ‘V’ and Validation phases on the
other side. The Coding Phase joins the two sides of the V-Model.
The following illustration depicts the different phases in a V-Model of the SDLC.

V-Model - Verification Phases

There are several Verification phases in the V-Model, each of these are explained in detail
below.

Business Requirement Analysis

This is the first phase in the development cycle where the product requirements are understood
from the customer’s perspective. This phase involves detailed communication with the
customer to understand their expectations and exact requirements. This is a very important activity
and needs to be managed well, as most of the customers are not sure about what exactly they
need. The acceptance test design planning is done at this stage as business requirements can
be used as an input for acceptance testing.

System Design

Once you have clear and detailed product requirements, it is time to design the complete
system. The system design involves understanding and detailing the complete hardware and
communication setup for the product under development. The system test plan is developed
based on the system design. Doing this at an earlier stage leaves more time for the actual test
execution later.

Architectural Design

Architectural specifications are understood and designed in this phase. Usually more than one
technical approach is proposed and based on the technical and financial feasibility the final
decision is taken. The system design is broken down further into modules taking up different
functionality. This is also referred to as High Level Design (HLD).
The data transfer and communication between the internal modules and with the outside world
(other systems) is clearly understood and defined in this stage. With this information,
integration tests can be designed and documented during this stage.

Module Design

In this phase, the detailed internal design for all the system modules is specified, referred to
as Low Level Design (LLD). It is important that the design is compatible with the other
modules in the system architecture and the other external systems. Unit tests are an essential
part of any development process and help eliminate the maximum number of faults and errors at
a very early stage. These unit tests can be designed at this stage based on the internal module designs.

Coding Phase

The actual coding of the system modules designed in the design phase is taken up in the Coding
phase. The most suitable programming language is chosen based on the system and architectural
requirements.
The coding is performed based on the coding guidelines and standards. The code goes through
numerous code reviews and is optimized for best performance before the final build is checked
into the repository.
Validation Phases

The different Validation Phases in a V-Model are explained in detail below.

Unit Testing

Unit tests designed in the module design phase are executed on the code during this validation
phase. Unit testing is the testing at code level and helps eliminate bugs at an early stage, though
all defects cannot be uncovered by unit testing.
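
As a minimal illustration (the add() module and its tests are hypothetical, using the standard C++ assert macro), a unit test executed at code level might look like this:

#include <cassert>

// Hypothetical module under test: integer addition.
int add(int a, int b) { return a + b; }

// Unit tests derived from the module design; each checks the
// module in isolation, at code level.
int main() {
    assert(add(2, 3) == 5);   // normal case
    assert(add(-1, 1) == 0);  // boundary: opposite signs
    assert(add(0, 0) == 0);   // boundary: zeros
    return 0;                 // reaching here means all tests passed
}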

Integration Testing

Integration testing is associated with the architectural design phase. Integration tests are
performed to test the coexistence and communication of the internal modules within the system.

System Testing

System testing is directly associated with the system design phase. System tests check the entire
system functionality and the communication of the system under development with external
systems. Most of the software and hardware compatibility issues can be uncovered during this
system test execution.

Acceptance Testing

Acceptance testing is associated with the business requirement analysis phase and involves
testing the product in user environment. Acceptance tests uncover the compatibility issues with
the other systems available in the user environment. It also discovers the non-functional issues
such as load and performance defects in the actual user environment.

V-Model ─ Application

The V-Model's application is almost the same as that of the waterfall model, as both models are of
a sequential type. Requirements have to be very clear before the project starts, because it is
usually expensive to go back and make changes. This model is used in the medical development
field, as it is a strictly disciplined domain.
The following pointers are some of the most suitable scenarios to use the V-Model application.
• Requirements are well defined, clearly documented and fixed.
• Product definition is stable.
• Technology is not dynamic and is well understood by the project team.
• There are no ambiguous or undefined requirements.
• The project is short.
V-Model - Pros and Cons

The advantage of the V-Model method is that it is very easy to understand and apply. The
simplicity of this model also makes it easier to manage. The disadvantage is that the model is
not flexible to changes; if there is a requirement change, which is very common in today's
dynamic world, it becomes very expensive to implement the change.
The advantages of the V-Model method are as follows −
• This is a highly-disciplined model and Phases are completed one at a time.
• Works well for smaller projects where requirements are very well understood.
• Simple and easy to understand and use.
• Easy to manage due to the rigidity of the model. Each phase has specific deliverables and
a review process.
The disadvantages of the V-Model method are as follows −
• High risk and uncertainty.
• Not a good model for complex and object-oriented projects.
• Poor model for long and ongoing projects.
• Not suitable for the projects where requirements are at a moderate to high risk of
changing.
• Once an application is in the testing stage, it is difficult to go back and change a
functionality.
• No working software is produced until late during the life cycle.
Big Bang Model
The Big Bang model is an SDLC model in which we do not follow any specific process.
Development simply starts with the required money and effort as the input, and the output is the
developed software, which may or may not be as per customer requirements. The Big Bang
Model does not follow a process or procedure, and very little planning is required. Even the
customer is not sure about what exactly they want, and the requirements are implemented on the
fly without much analysis.
Usually this model is followed for small projects where the development teams are very small.

Big Bang Model ─ Design and Application

The Big Bang Model involves focusing all possible resources on software development and
coding, with very little or no planning. The requirements are understood and implemented as
they come. Any changes required may or may not need to revamp the complete software.
This model is ideal for small projects with one or two developers working together and is also
useful for academic or practice projects. It is an ideal model for a product where the requirements
are not well understood and no final release date is given.

Big Bang Model - Pros and Cons

The advantage of the Big Bang Model is that it is very simple and requires very little or no
planning. It is easy to manage, and no formal procedure is required.
However, the Big Bang Model is a very high-risk model, and changes in the requirements or
misunderstood requirements may even lead to a complete reversal or scrapping of the project. It
is ideal for repetitive or small projects with minimum risks.
The advantages of the Big Bang Model are as follows −
• This is a very simple model
• Little or no planning required
• Easy to manage
• Very few resources required
• Gives flexibility to developers
• It is a good learning aid for newcomers or students.
The disadvantages of the Big Bang Model are as follows −
• Very High risk and uncertainty.
• Not a good model for complex and object-oriented projects.
• Poor model for long and ongoing projects.
• Can turn out to be very expensive if requirements are misunderstood.
Comparison of different life cycle models

Classical Waterfall Model: The Classical Waterfall model can be considered as the basic model
and all other life cycle models are based on this model. It is an ideal model. However, the
Classical Waterfall model cannot be used in practical project development, since this model does
not support any mechanism to correct the errors that are committed during any of the phases but
detected at a later phase. This problem is overcome by the Iterative Waterfall model through the
inclusion of feedback paths.
Iterative Waterfall Model: The Iterative Waterfall model is probably the most used software
development model. This model is simple to use and understand. But this model is suitable only
for well-understood problems and is not suitable for the development of very large projects and
projects that suffer from a large number of risks.
Prototyping Model: The Prototyping model is suitable for projects in which either the customer
requirements or the technical solutions are not well understood. These risks must be identified
before the project starts. This model is especially popular for the development of the user
interface part of a project.
Evolutionary Model: The Evolutionary model is suitable for large projects which can be
decomposed into a set of modules for incremental development and delivery. This model is
widely used in object-oriented development projects. This model is only used if incremental
delivery of the system is acceptable to the customer.
Spiral Model: The Spiral model is considered as a meta-model as it includes all other life cycle
models. Flexibility and risk handling are the main characteristics of this model. The spiral model
is suitable for the development of technically challenging and large software that is prone to
various risks that are difficult to anticipate at the start of the project. But this model is more
complex than the other models.
Agile Model: The Agile model was designed to incorporate change requests quickly. In this
model, requirements are decomposed into small parts that can be incrementally developed. But
the main principle of the Agile model is to deliver an increment to the customer after each
time-box. The end date of an iteration is fixed; it cannot be extended. This agility is achieved by
removing unnecessary activities that waste time and effort.

Selection of appropriate life cycle model for a project: Selection of proper lifecycle model to
complete a project is the most important task. It can be selected by keeping the advantages and
disadvantages of various models in mind. The different issues that are analyzed before selecting
a suitable life cycle model are given below :
• Characteristics of the software to be developed: The choice of the life cycle model largely
depends on the type of the software that is being developed. For small services projects, the
agile model is favored. On the other hand, for product and embedded development, the
Iterative Waterfall model can be preferred. The evolutionary model is suitable to develop an
object-oriented project. User interface part of the project is mainly developed through
prototyping model.
• Characteristics of the development team: The team members' skill level is an important factor
in deciding the life cycle model to use. If the development team is experienced in developing
similar software, then even an embedded software can be developed using the Iterative
Waterfall model. If the development team is entirely novice, then even a simple data
processing application may require a prototyping model.
• Risk associated with the project: If the risks are few and can be anticipated at the start of the
project, then prototyping model is useful. If the risks are difficult to determine at the
beginning of the project but are likely to increase as the development proceeds, then the spiral
model is the best model to use.
• Characteristics of the customer: If the customer is not quite familiar with computers, then
the requirements are likely to change frequently as it would be difficult to form complete,
consistent and unambiguous requirements. Thus, a prototyping model may be necessary to
reduce later change requests from the customers. Initially, the customer's confidence in the
development team is high. During the lengthy development process, customer confidence
normally drops off, as no working software is yet visible. So, the evolutionary model is useful,
as the customer can experience partially working software much earlier than the complete
software. Another advantage of the evolutionary model is that it reduces the customer's
trauma of getting used to an entirely new system.
Requirements Engineering Process
Requirement Engineering is the process of defining, documenting and maintaining the
requirements. It is a process of gathering and defining the services provided by the system.
Requirements Engineering Process consists of the following main activities:
• Requirements elicitation
• Requirements specification
• Requirements verification and validation
• Requirements management

Requirements Elicitation:
It is related to the various ways used to gain knowledge about the project domain and
requirements. The various sources of domain knowledge include customers, business manuals,
existing software of the same type, standards, and other stakeholders of the project.
The techniques used for requirements elicitation include interviews, brainstorming, task analysis,
Delphi technique, prototyping, etc. Some of these are discussed here. Elicitation does not
produce formal models of the requirements understood. Instead, it widens the domain knowledge
of the analyst and thus helps in providing input to the next stage.

Requirements specification:
This activity is used to produce formal software requirement models. All the requirements
including the functional as well as the non-functional requirements and the constraints are
specified by these models in totality. During specification, more knowledge about the problem
may be required which can again trigger the elicitation process.
The models used at this stage include ER diagrams, data flow diagrams (DFDs), function
decomposition diagrams (FDDs), data dictionaries, etc.

Requirements verification and validation:


Verification: It refers to the set of tasks that ensures that the software correctly implements a
specific function.
Validation: It refers to a different set of tasks that ensures that the software that has been built is
traceable to customer requirements.
If requirements are not validated, errors in the requirement definitions would propagate to the
successive stages resulting in a lot of modification and rework.
The main steps for this process include:
• The requirements should be consistent with all the other requirements, i.e., no two requirements
should conflict with each other.
• The requirements should be complete in every sense.
• The requirements should be practically achievable.
Reviews, buddy checks, making test cases, etc. are some of the methods used for this.

Requirements management:
Requirement management is the process of analyzing, documenting, tracking, prioritizing and
agreeing on the requirement and controlling the communication to relevant stakeholders. This
stage takes care of the changing nature of requirements. It should be ensured that the SRS is as
modifiable as possible so as to incorporate changes in requirements specified by the end users at
later stages too. Being able to modify the software as per requirements in a systematic and
controlled manner is an extremely important part of the requirements engineering process.

Classification of Software Requirements

A software requirement can be of 3 types:


• Functional requirements
• Non-functional requirements
• Domain requirements

Functional Requirements: These are the requirements that the end user specifically demands
as basic facilities that the system should offer. All these functionalities need to be necessarily
incorporated into the system as a part of the contract. These are represented or stated in the
form of input to be given to the system, the operation performed and the output expected. They
are basically the requirements stated by the user which one can see directly in the final product,
unlike the non-functional requirements.
For example, in a hospital management system, a doctor should be able to retrieve the
information of their patients. Each high-level functional requirement may involve several
interactions or dialogues between the system and the outside world. In order to accurately
describe the functional requirements, all scenarios must be enumerated.
There are many ways of expressing functional requirements e.g., natural language, a structured
or formatted language with no rigorous syntax and formal specification language with proper
syntax.

Non-functional requirements: These are basically the quality constraints that the system must
satisfy according to the project contract. The priority or extent to which these factors are
implemented varies from one project to other. They are also called non-behavioral
requirements.
They basically deal with issues like:
• Portability
• Security
• Maintainability
• Reliability
• Scalability
• Performance
• Reusability
• Flexibility

Non-functional requirements are classified into following types:
• Interface constraints
• Performance constraints: response time, security, storage space, etc.
• Operating constraints
• Life cycle constraints: maintainability, portability, etc.
• Economic constraints
The process of specifying non-functional requirements requires the knowledge of the
functionality of the system, as well as the knowledge of the context within which the system
will operate.

Domain requirements: Domain requirements are the requirements which are characteristic of
a particular category or domain of projects. The basic functions that a system of a specific
domain must necessarily exhibit come under this category. For instance, in an academic
software that maintains records of a school or college, the functionality of being able to access
the list of faculty and list of students of each grade is a domain requirement. These
requirements are therefore identified from that domain model and are not user specific.

What is SRS?
A software requirements specification (SRS) is a description of a software system to be
developed. It lays out functional and non-functional requirements, and may include a set of use
cases that describe user interactions that the software must provide.
Why SRS?
In order to fully understand one's project, it is very important to come up with an SRS
listing out the requirements, how they are going to be met, and how the project will be
completed. It helps the team save time, as they are able to comprehend how they are going
to go about the project. Doing this also enables the team to find out about the limitations and
risks early on.
Quality Characteristics of a good SRS

1. Correctness:
User review is used to ensure the correctness of requirements stated in the SRS. SRS is said to
be correct if it covers all the requirements that are actually expected from the system.
2. Completeness:
Completeness of the SRS indicates every sense of completion, including the numbering of all
the pages, resolving the "to be determined" parts as far as possible, as well as covering
all the functional and non-functional requirements properly.
3. Consistency:
Requirements in SRS are said to be consistent if there are no conflicts between any set of
requirements. Examples of conflict include differences in terminologies used at separate
places, logical conflicts like time period of report generation, etc.

4. Unambiguousness:
An SRS is said to be unambiguous if all the requirements stated have only one interpretation.
Some of the ways to prevent ambiguity include the use of modelling techniques like
ER diagrams, proper reviews and buddy checks, etc.
5. Ranking for importance and stability:
There should be a criterion to classify the requirements as less or more important, or more
specifically as desirable or essential. An identifier mark can be used with every requirement to
indicate its rank or stability.
6. Modifiability:
SRS should be made as modifiable as possible and should be capable of easily accepting
changes to the system to some extent. Modifications should be properly indexed and cross-
referenced.
7. Verifiability:
An SRS is verifiable if there exists a specific technique to quantifiably measure the extent to
which every requirement is met by the system. For example, a requirement stating that the
system must be user-friendly is not verifiable, and listing such requirements should be avoided.
8. Traceability:
One should be able to trace a requirement to design component and then to code segment in
the program. Similarly, one should be able to trace a requirement to the corresponding test
cases.
9. Design Independence:
There should be an option to choose from multiple design alternatives for the final system.
More specifically, the SRS should not include any implementation details.
10. Testability:
An SRS should be written in such a way that it is easy to generate test cases and test plans from
the document.
11. Understandable by the customer:
An end user may be an expert in his/her specific domain but might not be an expert in
computer science. Hence, the use of formal notations and symbols should be avoided as far
as possible. The language should be kept easy and clear.
12. Right level of abstraction:
If the SRS is written for the requirements phase, the details should be explained explicitly.
Whereas, for a feasibility study, fewer details can be used. Hence, the level of abstraction
varies according to the purpose of the SRS.

Requirements Elicitation
Requirements elicitation is perhaps the most difficult, most error-prone and most
communication-intensive activity in software development. It can be successful only through an
effective customer-developer partnership. It is needed to know what the users really need.

Requirements elicitation Activities:


Requirements elicitation includes the following activities. A few of them are listed below –
• Knowledge of the overall area where the system is applied.
• The details of the precise customer problem where the system is going to be applied must be
understood.
• Interaction of system with external requirements.
• Detailed investigation of user needs.
• Define the constraints for system development.

Requirements elicitation Methods:

There are a number of requirements elicitation methods. A few of them are listed below –
1. Interviews
2. Brainstorming Sessions
3. Facilitated Application Specification Technique (FAST)
4. Quality Function Deployment (QFD)
5. Use Case Approach

The success of an elicitation technique used depends on the maturity of the analyst, developers,
users, and the customer involved.
1. Interviews:
Objective of conducting an interview is to understand the customer’s expectations from the
software.
It is impossible to interview every stakeholder hence representatives from groups are selected
based on their expertise and credibility.
Interviews may be open-ended or structured.
1. In open-ended interviews there is no pre-set agenda. Context free questions may be asked to
understand the problem.
2. In structured interview, agenda of fairly open questions is prepared. Sometimes a proper
questionnaire is designed for the interview.
2. Brainstorming Sessions:
• It is a group technique
• It is intended to generate lots of new ideas hence providing a platform to share views
• A highly trained facilitator is required to handle group bias and group conflicts.
• Every idea is documented so that everyone can see it.
• Finally, a document is prepared which consists of the list of requirements and their priority if
possible.

3. Facilitated Application Specification Technique:


Its objective is to bridge the expectation gap – the difference between what the developers think
they are supposed to build and what customers think they are going to get.
A team-oriented approach is used for requirements gathering.
Each attendee is asked to make a list of objects that are-
1. Part of the environment that surrounds the system
2. Produced by the system
3. Used by the system
Each participant prepares his/her list, different lists are then combined, redundant entries are
eliminated, team is divided into smaller sub-teams to develop mini-specifications and finally a
draft of specifications is written down using all the inputs from the meeting.
4. Quality Function Deployment:
In this technique, customer satisfaction is of prime concern; hence, it emphasizes the
requirements which are valuable to the customer.
3 types of requirements are identified –
• Normal requirements –
In this, the objectives and goals of the proposed software are discussed with the customer.
Example – normal requirements for a result management system may be entry of marks,
calculation of results, etc.
• Expected requirements –
These requirements are so obvious that the customer need not explicitly state them. Example –
protection from unauthorized access.
• Exciting requirements –
These include features that are beyond the customer's expectations and prove to be very
satisfying when present. Example – when unauthorized access is detected, the system should
back up data and shut down all processes.

5. Use Case Approach:


This technique combines text and pictures to provide a better understanding of the requirements.
Use cases describe the 'what' of a system, not the 'how'. Hence, they only give a functional
view of the system.
The use case design includes three major components – actors, use cases, and the use case
diagram.
1. Actor –
It is an external agent that lies outside the system but interacts with it in some way. An actor
may be a person, a machine, etc. It is represented as a stick figure. Actors can be primary actors
or secondary actors.
• Primary actor – It requires assistance from the system to achieve a goal.
• Secondary actor – It is an actor from which the system needs assistance.
2. Use cases –
They describe the sequence of interactions between actors and the system. They capture
who (actors) does what (interaction) with the system. A complete set of use cases specifies all
possible ways to use the system.
3. Use case diagram –
A use case diagram graphically represents what happens when an actor interacts with a
system. It captures the functional aspect of the system.
Software Design
Software design is a mechanism to transform user requirements into some suitable form, which
helps the programmer in software coding and implementation. It deals with representing the
client's requirements, as described in the SRS (Software Requirement Specification) document,
in a form that is easily implementable using a programming language.

The software design phase is the first step in the SDLC (Software Development Life Cycle) that
moves the concentration from the problem domain to the solution domain. In software design,
we consider the system to be a set of components or modules with clearly defined behaviors
and boundaries.

Objectives of Software Design

Following are the purposes of software design:

1. Correctness: Software design should be correct as per the requirements.
2. Completeness: The design should have all components like data structures, modules, and
external interfaces, etc.
3. Efficiency: Resources should be used efficiently by the program.
4. Flexibility: It should be able to be modified on changing needs.
5. Consistency: There should not be any inconsistency in the design.
6. Maintainability: The design should be simple enough to be easily maintained by
other designers.

Software design Strategies:

Software design is a process to conceptualize the software requirements into software
implementation. Software design takes the user requirements as challenges and tries to find an
optimum solution. While the software is being conceptualized, a plan is chalked out to find the
best possible design for implementing the intended solution.
There are multiple variants of software design. Let us study them briefly:

1.Structured Design
Structured design is a conceptualization of the problem into several well-organized elements of
solution. It is basically concerned with the solution design. The benefit of structured design is
that it gives a better understanding of how the problem is being solved. Structured design also
makes it simpler for the designer to concentrate on the problem more accurately.
Structured design is mostly based on the 'divide and conquer' strategy, where a problem is
broken into several small problems and each small problem is individually solved until the
whole problem is solved.
The small pieces of the problem are solved by means of solution modules. Structured design
emphasizes that these modules be well organized in order to achieve a precise solution.
These modules are arranged in hierarchy. They communicate with each other. A good
structured design always follows some rules for communication among multiple modules,
namely -
Cohesion - grouping of all functionally related elements.
Coupling - communication between different modules.
A good structured design has high cohesion and low coupling arrangements.

Coupling and Cohesion

When a software program is modularized, its tasks are divided into several modules based on
some characteristics. As we know, modules are sets of instructions put together in order to
achieve some task. Though modules are considered as single entities, they may refer to each
other to work together. There are measures by which the quality of the design of modules and
the interaction among them can be measured. These measures are called coupling and cohesion.
Cohesion

Cohesion is a measure that defines the degree of intra-dependability within elements of a
module. The greater the cohesion, the better is the program design.
There are seven types of cohesion, namely –

• Co-incidental cohesion - It is unplanned and random cohesion, which might be the result
of breaking the program into smaller modules for the sake of modularization. Because it
is unplanned, it may cause confusion for the programmers and is generally not accepted.
• Logical cohesion - When logically categorized elements are put together into a module,
it is called logical cohesion.
• Temporal Cohesion - When elements of module are organized such that they are
processed at a similar point in time, it is called temporal cohesion.
• Procedural cohesion - When elements of module are grouped together, which are
executed sequentially in order to perform a task, it is called procedural cohesion.
• Communicational cohesion - When elements of module are grouped together, which are
executed sequentially and work on same data (information), it is called communicational
cohesion.
• Sequential cohesion - When elements of module are grouped because the output of one
element serves as input to another and so on, it is called sequential cohesion.
• Functional cohesion - It is considered to be the highest degree of cohesion, and it is
highly expected. Elements of module in functional cohesion are grouped because they all
contribute to a single well-defined function. It can also be reused.
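
To make two of these categories concrete, here is a minimal C++ sketch (the function names and stub bodies are hypothetical) contrasting functional cohesion with coincidental cohesion:

#include <iostream>
#include <vector>

// Functional cohesion: every statement contributes to a single
// well-defined function -- computing the arithmetic mean.
double mean(const std::vector<double>& values) {
    if (values.empty()) return 0.0;
    double sum = 0.0;
    for (double v : values) sum += v;
    return sum / values.size();
}

// Coincidental cohesion: unrelated actions grouped together only
// for the sake of modularization (stubbed here for illustration).
void openLogFile()   { std::cout << "log opened\n"; }
void resetCounters() { std::cout << "counters reset\n"; }
void miscellaneous() { openLogFile(); resetCounters(); }

int main() {
    std::cout << mean({1.0, 2.0, 3.0}) << '\n';  // prints 2
    miscellaneous();
    return 0;
}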

Coupling

Coupling is a measure that defines the level of inter-dependability among modules of a
program. It tells at what level the modules interfere and interact with each other. The lower the
coupling, the better the program.
There are five levels of coupling, namely -

• Content coupling - When a module can directly access or modify or refer to the content
of another module, it is called content level coupling.
• Common coupling- When multiple modules have read and write access to some global
data, it is called common or global coupling.
• Control coupling- Two modules are called control-coupled if one of them decides the
function of the other module or changes its flow of execution.
• Stamp coupling- When multiple modules share common data structure and work on
different part of it, it is called stamp coupling.
• Data coupling- Data coupling is when two modules interact with each other by means of
passing data (as parameter). If a module passes data structure as parameter, then the
receiving module should use all its components.
Ideally, no coupling is considered to be the best.
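
To make the distinction concrete, here is a small C++ sketch (the functions are hypothetical) contrasting common (global) coupling with data coupling:

#include <iostream>

// Common coupling: both callers share one global; changing
// 'Total' in one place silently affects every other module.
int Total = 0;
void addCommon(int x) { Total += x; }

// Data coupling: modules interact only through parameters and
// return values -- the loosest practical form of coupling.
int addData(int base, int x) { return base + x; }

int main() {
    addCommon(5);
    std::cout << Total << '\n';          // 5, via shared global state
    std::cout << addData(0, 5) << '\n';  // 5, via passed data only
    return 0;
}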
Differentiate between Coupling and Cohesion

• Coupling is also called Inter-Module Binding, whereas Cohesion is also called Intra-Module Binding.
• Coupling shows the relationships between modules, whereas Cohesion shows the relationships within a module.
• Coupling shows the relative independence between modules, whereas Cohesion shows a module's relative functional strength.
• While creating modules, you should aim for low coupling, i.e., dependency among modules should be low, but for high cohesion, i.e., a cohesive module should focus on a single function (single-mindedness) with little interaction with other modules of the system.
• In coupling, modules are linked to other modules; in cohesion, the module focuses on a single thing.

2.Function Oriented Design

In function-oriented design, the system is comprised of many smaller sub-systems known as
functions. These functions are capable of performing significant tasks in the system. The system
is considered as the top view of all functions.
Function-oriented design inherits some properties of structured design, where the divide and
conquer methodology is used.
This design mechanism divides the whole system into smaller functions, which provides means
of abstraction by concealing the information and their operation. These functional modules can
share information among themselves by means of information passing and by using information
available globally.
Another characteristic of functions is that when a program calls a function, the function changes
the state of the program, which is sometimes not acceptable to other modules. Function-oriented
design works well where the system state does not matter and programs/functions work on
input rather than on a state.
Design Process

• The whole system is seen in terms of how data flows in the system, by means of the data
flow diagram.
• The DFD depicts how functions change the data and state of the entire system.
• The entire system is logically broken down into smaller units known as functions on the
basis of their operation in the system.
• Each function is then described at large.

Design Notations

Design notations are primarily meant to be used during the process of design and are used to
represent design or design decisions. For a function-oriented design, the design can be
represented graphically or mathematically using the tools described below.

Software analysis and design includes all activities which help the transformation of a
requirement specification into implementation. Requirement specifications specify all
functional and non-functional expectations from the software. These requirement specifications
come in the shape of human-readable and understandable documents, which mean nothing to a
computer.
Software analysis and design is the intermediate stage, which helps transform human-readable
requirements into actual code.
Let us see a few analysis and design tools used by software designers:

Data Flow Diagram

A data flow diagram is a graphical representation of the flow of data in an information system.
It is capable of depicting incoming data flow, outgoing data flow and stored data. The DFD does
not mention anything about how data flows through the system.
There is a prominent difference between DFD and Flowchart. The flowchart depicts flow of
control in program modules. DFDs depict flow of data in the system at various levels. A DFD
does not contain any control or branch elements.

Types of DFD

Data Flow Diagrams are either Logical or Physical.

• Logical DFD - This type of DFD concentrates on the system process and the flow of data in
the system. For example, in a banking software system, it shows how data moves between
different entities.
• Physical DFD - This type of DFD shows how the data flow is actually implemented in
the system. It is more specific and close to the implementation.

DFD Components

DFD can represent Source, destination, storage and flow of data using the following set of
components -

• Entities - Entities are the source and destination of information data. Entities are represented
by rectangles with their respective names.
• Process - Activities and action taken on the data are represented by Circle or Round-
edged rectangles.
• Data Storage - There are two variants of data storage - it can either be represented as a
rectangle with absence of both smaller sides or as an open-sided rectangle with only one
side missing.
• Data Flow - Movement of data is shown by pointed arrows. Data movement is shown
from the base of arrow as its source towards head of the arrow as destination.

Levels of DFD

• Level 0 - Highest abstraction level DFD is known as Level 0 DFD, which depicts the
entire information system as one diagram concealing all the underlying details. Level 0
DFDs are also known as context level DFDs.

• Level 1 - The Level 0 DFD is broken down into more specific, Level 1 DFD. Level 1
DFD depicts basic modules in the system and flow of data among various modules. Level
1 DFD also mentions basic processes and sources of information.
• Level 2 - At this level, DFD shows how data flows inside the modules mentioned in
Level 1.
Higher-level DFDs can be transformed into more specific lower-level DFDs with a deeper
level of understanding, until the desired level of specification is achieved.

Structure Charts

Structure chart is a chart derived from Data Flow Diagram. It represents the system in more
detail than DFD. It breaks down the entire system into lowest functional modules, describes
functions and sub-functions of each module of the system to a greater detail than DFD.
Structure chart represents hierarchical structure of modules. At each layer a specific task is
performed.
Here are the symbols used in construction of structure charts -
• Module - It represents process or subroutine or task. A control module branches to more
than one sub-module. Library Modules are re-usable and invokable from any module.

• Condition - It is represented by a small diamond at the base of the module. It depicts that
the control module can select any of the sub-routines based on some condition.

• Jump - An arrow is shown pointing inside the module to depict that the control will jump
into the middle of the sub-module.

• Loop - A curved arrow represents a loop in the module. All sub-modules covered by the
loop repeat execution of the module.

• Data flow - A directed arrow with empty circle at the end represents data flow.

• Control flow - A directed arrow with filled circle at the end represents control flow.
Pseudo-Code

Pseudo code is written closer to the programming language. It may be considered as an
augmented programming language, full of comments and descriptions.
Pseudo code avoids variable declarations, but it is written using some actual programming
language's constructs, like C, Fortran, Pascal, etc.
Pseudo code contains more programming details than Structured English. It provides a method
to perform the task, as if a computer were executing the code.

Example

Program to print Fibonacci up to n numbers.


void function Fibonacci
Get value of n;
Set value of a to 0;
Set value of b to 1;
for (i = 0; i < n; i++)
{
Print a;
Set next to a + b;
Set a to b;
Set b to next;
}
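
For comparison, a direct C++ translation of the above pseudo code might look like the following minimal sketch (the names mirror the pseudo code):

#include <iostream>

// Print the first n Fibonacci numbers: 0 1 1 2 3 5 ...
void fibonacci(int n) {
    int a = 0, b = 1;
    for (int i = 0; i < n; i++) {
        std::cout << a << ' ';
        int next = a + b;  // the pseudo code's "Set next to a + b"
        a = b;
        b = next;
    }
    std::cout << '\n';
}

int main() {
    fibonacci(10);  // first 10 Fibonacci numbers
    return 0;
}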
Decision Tables

A Decision table represents conditions and the respective actions to be taken to address them, in
a structured tabular format.
It is a powerful tool to debug and prevent errors. It helps group similar information into a single
table and then by combining tables it delivers easy and convenient decision-making.

Creating Decision Table

To create the decision table, the developer must follow four basic steps:

• Identify all possible conditions to be addressed
• Determine actions for all identified conditions
• Create the maximum possible number of rules
• Define the action for each rule

Decision tables should be verified by end-users and can later be simplified by eliminating
duplicate rules and actions.

Example

Let us take a simple example of a day-to-day problem with our Internet connectivity. We begin
by identifying all problems that can arise while starting the Internet and their respective possible
solutions.
We list all possible problems under the column Conditions and the prospective actions under
the column Actions.

Conditions/Actions                  Rules

Conditions:
Shows Connected           N N N N Y Y Y Y
Ping is Working           N N Y Y N N Y Y
Opens Website             Y N Y N Y N Y N

Actions (an X marks the action taken for a rule):
Check network cable       X
Check internet router     X X X X
Restart Web Browser       X
Contact Service provider  X X X X X X
Do no action

Table: Decision Table – In-house Internet Troubleshooting
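
Because a decision table maps each combination of condition values to its actions, it translates naturally into a rule lookup in code. The C++ sketch below is illustrative only: it encodes the three conditions as bits and indexes a rule table whose action assignments are hypothetical, not read off the table above.

#include <iostream>
#include <string>

// Encode the three conditions as bits: connected, ping, website.
// Each of the 8 combinations (rules) maps to one action.
// The action assignments here are hypothetical illustrations.
std::string troubleshoot(bool connected, bool ping, bool website) {
    int rule = (connected ? 4 : 0) + (ping ? 2 : 0) + (website ? 1 : 0);
    static const std::string actions[8] = {
        "Check network cable",       // rule 0: N N N
        "Check internet router",     // rule 1: N N Y
        "Contact service provider",  // rule 2: N Y N
        "Contact service provider",  // rule 3: N Y Y
        "Check internet router",     // rule 4: Y N N
        "Restart web browser",       // rule 5: Y N Y
        "Contact service provider",  // rule 6: Y Y N
        "Do no action",              // rule 7: Y Y Y
    };
    return actions[rule];
}

int main() {
    std::cout << troubleshoot(false, false, false) << '\n';  // check cable
    std::cout << troubleshoot(true, true, true) << '\n';     // do no action
    return 0;
}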
Entity-Relationship Model

The Entity-Relationship model is a type of database model based on the notion of real-world
entities and the relationships among them. We can map a real-world scenario onto the ER
database model. The ER Model creates a set of entities with their attributes, a set of constraints,
and relations among them.
The ER Model is best used for the conceptual design of a database. The ER Model can be
represented as follows:

• Entity - An entity in the ER Model is a real-world entity, which has some properties
called attributes. Every attribute is defined by its corresponding set of values,
called its domain.
For example, consider a school database. Here, a student is an entity. A student has
various attributes like name, id, age, class, etc.
• Relationship - The logical association among entities is called relationship.
Relationships are mapped with entities in various ways. Mapping cardinalities define the
number of associations between two entities.
Mapping cardinalities:

o one to one
o one to many
o many to one
o many to many
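
Although ER models are normally drawn as diagrams rather than written as code, the concepts map naturally onto data structures. A minimal, hypothetical C++ sketch of the school example (a Student entity with attributes, and a one-to-many relationship from a class to its students):

#include <iostream>
#include <string>
#include <vector>

// Entity: Student, with attributes drawn from their domains.
struct Student {
    int id;            // domain: positive integers
    std::string name;  // domain: character strings
    int age;
};

// Relationship: one SchoolClass is associated with many Students
// (a one-to-many mapping cardinality).
struct SchoolClass {
    std::string grade;
    std::vector<Student> students;
};

int main() {
    SchoolClass c{"Grade 5", {{1, "Asha", 10}, {2, "Ravi", 11}}};
    std::cout << c.grade << " has " << c.students.size() << " students\n";
    return 0;
}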
Data Dictionary

A data dictionary is the centralized collection of information about data. It stores the meaning
and origin of data, its relationship with other data, data format for usage, etc. A data dictionary
has rigorous definitions of all names in order to facilitate users and software designers.
A data dictionary is often referred to as a meta-data (data about data) repository. It is created
along with the DFD (Data Flow Diagram) model of the software program and is expected to be
updated whenever the DFD is changed or updated.

Requirement of Data Dictionary

The data is referenced via the data dictionary while designing and implementing software. The
data dictionary removes any chance of ambiguity. It helps keep the work of programmers and
designers synchronized by using the same object reference everywhere in the program.
The data dictionary provides a way of documenting the complete database system in one
place. Validation of the DFD is carried out using the data dictionary.

Contents

Data dictionary should contain information about the following

• Data Flow
• Data Structure
• Data Elements
• Data Stores
• Data Processing
Data Flow is described by means of DFDs as studied earlier and represented in algebraic form
as described.

= Composed of

{} Repetition
() Optional
+ And
[/] Or
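
For example, a hypothetical data dictionary entry for a customer order could be written in this notation as:

order = order_id + customer_name + {item} + (discount_code) + [cash / card]
item = item_name + quantity + price

Here {item} indicates that an order is composed of a repetition of items, (discount_code) is optional, and [cash / card] means exactly one of the two payment modes is chosen.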

Data Elements

Data elements consist of Name and descriptions of Data and Control Items, Internal or External
data stores etc. with the following details:

• Primary Name
• Secondary Name (Alias)
• Use-case (How and where to use)
• Content Description (Notation etc. )
• Supplementary Information (preset values, constraints etc.)

Data Store

It stores the information about where the data enters the system and where it exits the system.
The Data Store may include -

• Files
o Internal to software.
o External to software but on the same machine.
o External to software and system, located on different machine.
• Tables
o Naming convention
o Indexing property

Data Processing

There are two types of Data Processing:

• Logical: As user sees it


• Physical: As software sees it
3.Object Oriented Design

Object-oriented design works around the entities and their characteristics instead of the
functions involved in the software system. This design strategy focuses on entities and their
characteristics. The whole concept of the software solution revolves around the engaged entities.
Let us see the important concepts of Object Oriented Design:

The different terms related to object design are:

1. Objects: All entities involved in the solution design are known as objects. For example,
person, banks, company, and users are considered as objects. Every entity has some
attributes associated with it and has some methods to perform on the attributes.
2. Classes: A class is a generalized description of an object. An object is an instance of a
class. A class defines all the attributes, which an object can have and methods, which
represents the functionality of the object.
3. Messages: Objects communicate by message passing. Messages consist of the identity
of the target object, the name of the requested operation, and any other information needed
to perform the function. Messages are often implemented as procedure or function calls.
4. Abstraction: In object-oriented design, complexity is handled using abstraction.
Abstraction is the removal of the irrelevant and the amplification of the essentials.
5. Encapsulation: Encapsulation is also called an information hiding concept. The data and
operations are linked to a single unit. Encapsulation not only bundles essential
information of an object together but also restricts access to the data and methods from
the outside world.
6. Inheritance: OOD allows similar classes to stack up in a hierarchical manner where the
lower classes or sub-classes can import, implement, and re-use allowed variables and functions
from their immediate superclasses. This property of OOD is called inheritance. This
makes it easier to define a specific class and to create generalized classes from specific
ones.
7. Polymorphism: OOD languages provide a mechanism whereby methods performing
similar tasks but varying in arguments can be assigned the same name. This is known as
polymorphism, which allows a single interface to perform functions for different
types. Depending upon how the service is invoked, the respective portion of the code gets
executed.
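
The following minimal C++ sketch (the Shape, Circle, and Square classes are hypothetical) ties several of these concepts together: encapsulation of data, inheritance of sub-classes from a superclass, polymorphism through a common interface, and message passing via method calls:

#include <iostream>

// Class: a generalized description; objects are its instances.
class Shape {
public:
    virtual double area() const = 0;  // one interface, many forms
    virtual ~Shape() = default;
};

// Inheritance: Circle re-uses and extends the Shape abstraction.
class Circle : public Shape {
    double radius;  // encapsulated: hidden from the outside world
public:
    explicit Circle(double r) : radius(r) {}
    double area() const override { return 3.14159 * radius * radius; }
};

class Square : public Shape {
    double side;
public:
    explicit Square(double s) : side(s) {}
    double area() const override { return side * side; }
};

int main() {
    const Circle c(1.0);
    const Square s(2.0);
    // Polymorphism: the same "message" dispatches to different code.
    const Shape* shapes[] = {&c, &s};
    for (const Shape* sh : shapes)
        std::cout << sh->area() << '\n';
    return 0;
}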
Design Process

The software design process can be perceived as a series of well-defined steps. Though it varies
according to the design approach (function-oriented or object-oriented), it may involve the
following steps:

• A solution design is created from the requirements or a previously used system and/or
system sequence diagrams.
• Objects are identified and grouped into classes on the basis of similarity in attribute
characteristics.
• The class hierarchy and the relations among classes are defined.
• The application framework is defined.

Software Design Approaches


Here are two generic approaches for software designing:

Top Down Design

We know that a system is composed of more than one sub-system, and it contains a number of
components. Further, these sub-systems and components may have their own sets of sub-systems
and components, creating a hierarchical structure in the system.
Top-down design takes the whole software system as one entity and then decomposes it into
more than one sub-system or component based on some characteristics. Each sub-system or
component is then treated as a system and decomposed further. This process keeps on
running until the lowest level of system in the top-down hierarchy is achieved.
Top-down design starts with a generalized model of the system and keeps on defining the more
specific parts of it. When all components are composed, the whole system comes into existence.
Top-down design is more suitable when the software solution needs to be designed from scratch
and specific details are unknown.

Bottom-up Design

The bottom-up design model starts with the most specific and basic components. It proceeds by
composing higher levels of components using basic or lower-level components. It keeps
creating higher-level components until the desired system evolves as one single component.
With each higher level, the amount of abstraction increases.
The bottom-up strategy is more suitable when a system needs to be created from some existing
system, where the basic primitives can be used in the newer system.
Neither the top-down nor the bottom-up approach is practical individually. Instead, a good
combination of both is used.

Coding
Coding is the process of transforming the design of a system into a computer language format.
The coding phase of software development is concerned with translating the design specification
into source code. It is necessary to write source code and internal documentation so that
conformance of the code to its specification can be easily verified.

Coding is done by coders or programmers, who may be people other than the designers. The
goal is not only to reduce the effort and cost of the coding phase, but to cut the cost of later
stages. The cost of testing and maintenance can be significantly reduced with efficient coding.

Goals of Coding

1. To translate the design of the system into a computer language format: Coding is the
process of transforming the design of a system into a computer language format which
can be executed by a computer and which performs the tasks specified by the design
during the design phase.
2. To reduce the cost of later phases: The cost of testing and maintenance can be
significantly reduced with efficient coding.
3. Making the program more readable: The program should be easy to read and understand.
Having readability and understandability as a clear objective of the coding activity can
itself help in producing more maintainable software.

For implementing our design into code, we require a high-level functional language. A
programming language should have the following characteristics:

Characteristics of Programming Language


Following are the characteristics of Programming Language:

Readability: A good high-level language will allow programs to be written in ways that
resemble a quasi-English description of the underlying functions. The coding may be done in
an essentially self-documenting way.

Portability: High-level languages, being virtually machine-independent, should make it easy to
develop portable software.

Generality: Most high-level languages allow the writing of a vast collection of programs, thus
relieving the programmer of the need to develop into an expert in many diverse languages.

Brevity: The language should have the ability to implement the algorithm with a small amount
of code. Programs expressed in high-level languages are often significantly shorter than their
low-level equivalents.

Error checking: A programmer is likely to make many errors in the development of a computer
program. Many high-level languages provide extensive error checking both at compile-time and
run-time.

Cost: The ultimate cost of a programming language is a function of many of its characteristics.

Quick translation: It should permit quick translation.

Efficiency: It should authorize the creation of an efficient object code.

Modularity: It is desirable that programs can be developed in the language as several separately
compiled modules, with the appropriate structure for ensuring self-consistency among these
modules.

Widely available: Language should be widely available, and it should be feasible to provide
translators for all the major machines and all the primary operating systems.
A coding standard lists several rules to be followed during coding, such as the way variables are
to be named, the way the code is to be laid out, error return conventions, etc.

Coding Standards

General coding standards refer to how the developer writes code, so here we will discuss some
essential standards regardless of the programming language being used.

The following are some representative coding standards:

1. Indentation: Proper and consistent indentation is essential in producing easy to read and
maintainable programs.
Indentation should be used to:
o Emphasize the body of a control structure such as a loop or a select statement.
o Emphasize the body of a conditional statement
o Emphasize a new scope block
2. Inline comments: Inline comments describing the functioning of the subroutine, or key
aspects of the algorithm, shall be frequently used.
3. Rules for limiting the use of globals: These rules specify what types of data can be declared
global and what cannot.
4. Structured Programming: Structured (or modular) programming methods shall be
used. "GOTO" statements shall not be used, as they lead to "spaghetti" code, which is
hard to read and maintain, except as outlined in the FORTRAN Standards and
Guidelines.
5. Naming conventions for global variables, local variables, and constant identifiers: A
possible naming convention can be that global variable names always begin with a capital
letter, local variable names are made of small letters, and constant names are always
capital letters.
6. Error return conventions and exception handling system: The way error conditions are
reported by different functions in a program should be standard within an
organization. For example, different functions, on encountering an error condition, should
either return a 0 or a 1 consistently.
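
A minimal C++ sketch of such conventions (the specific choices shown here, globals capitalized, locals in small letters, constants in capitals, and a 0/1 error-return convention, are illustrative, not mandated by any particular standard):

#include <iostream>

const int MAX_RETRIES = 3;   // constant: all capital letters
int ActiveConnections = 0;   // global: begins with a capital letter

// Error return convention: 0 on success, 1 on error.
int openConnection(int port) {
    int retries = MAX_RETRIES;  // local: small letters
    if (port <= 0 || retries == 0) {
        std::cerr << "error: invalid port " << port << '\n';
        return 1;  // error reported consistently
    }
    ++ActiveConnections;
    return 0;      // success
}

int main() {
    return openConnection(8080);
}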

Coding Guidelines

General coding guidelines provide the programmer with a set of best practices which can be
used to make programs easier to read and maintain. Most of the examples use the C
language syntax, but the guidelines can be applied to all languages.

The following are some representative coding guidelines recommended by many software
development organizations.

1. Line Length: It is considered a good practice to keep the length of source code lines at or
below 80 characters. Lines longer than this may not be visible properly on some terminals and
tools. Some printers will truncate lines longer than 80 columns.

2. Spacing: The appropriate use of spaces within a line of code can improve readability.

3. The code should be well-documented: As a rule of thumb, there must be at least one
comment line on average for every three source lines.

4. The length of any function should not exceed 10 source lines: A very lengthy function is
generally very difficult to understand, as it possibly carries out many different functions. For the
same reason, lengthy functions are likely to have a disproportionately larger number of bugs.

5. Do not use goto statements: Use of goto statements makes a program unstructured and very
tough to understand.

6. Inline Comments: Inline comments promote readability.


7. Error Messages: Error handling is an essential aspect of computer programming. This does
not only include adding the necessary logic to test for and handle errors but also involves making
error messages meaningful.

Programming Style

Programming style refers to the technique used in writing the source code for a computer
program. Most programming styles are designed to help programmers quickly read and
understand the program as well as avoid making errors. (Older programming styles also focused
on conserving screen space.) A good coding style can overcome the many deficiencies of a first
programming language, while poor style can defeat the intent of an excellent language.

The goal of good programming style is to provide understandable, straightforward, elegant code.
The programming style used in a particular program may be derived from the coding standards
or code conventions of a company or other computing organization, as well as from the
preferences of the actual programmer.

Some general rules or guidelines in respect of programming style:


1. Clarity and simplicity of expression: The program should be designed in such a manner
that its objectives are clear.


2. Naming: In a program, you are required to name modules, processes, variables, and so
on. Care should be taken that the naming style is not cryptic and non-representative.

3. Control constructs: It is desirable that, as much as possible, single-entry and single-exit
constructs be used.

4. Information hiding: The information contained in the data structures should be hidden from
the rest of the system where possible. Information hiding can decrease the coupling between
modules and make the system more maintainable.

5. Nesting: Deep nesting of loops and conditions greatly harms the static and dynamic behavior
of a program. It also makes the program logic difficult to understand, so it is desirable to avoid
deep nesting (a short illustration follows this list).

6. User-defined types: Make heavy use of user-defined data types like enum, class, structure,
and union. These data types make your program code easy to write and easy to understand.

7. Module size: The module size should be uniform. The size of the module should not be too
big or too small. If the module size is too large, it is not generally functionally cohesive. If the
module size is too small, it leads to unnecessary overheads.

8. Module Interface: A module with a complex interface should be carefully examined.

9. Side-effects: When a module is invoked, it sometimes has a side effect of modifying the
program state. Such side effects should be avoided wherever possible.
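
As an illustration of the nesting guideline above (the validation routine is hypothetical), compare a deeply nested version with an equivalent early-return version:

#include <iostream>

// Deeply nested: the logic is buried three conditions deep.
bool validateNested(int age, bool hasId, bool paid) {
    if (age >= 18) {
        if (hasId) {
            if (paid) {
                return true;
            }
        }
    }
    return false;
}

// The same logic with early returns: flat and easier to follow.
bool validateFlat(int age, bool hasId, bool paid) {
    if (age < 18) return false;
    if (!hasId)   return false;
    return paid;
}

int main() {
    std::cout << validateNested(20, true, true) << ' '
              << validateFlat(20, true, true) << '\n';  // prints: 1 1
    return 0;
}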
Software Testing

Software testing is the evaluation of software against the requirements gathered from users and
the system specifications. Testing is conducted at the phase level in the software development
life cycle or at the module level in program code. Software testing comprises Validation and
Verification.
There are seven principles in software testing:
• Testing shows the presence of defects: The goal of software testing is to make the software
fail. Software testing talks about the presence of defects and does not talk about their
absence. Testing can show that defects are present, but it cannot prove that the software is
defect-free. Even multiple rounds of testing can never ensure that software is 100% bug-free.
Testing can reduce the number of defects, but it cannot remove all defects.
• Exhaustive testing is not possible: Testing the functionality of software with all possible
inputs (valid or invalid) and pre-conditions is known as exhaustive testing. Exhaustive
testing is impossible: the software can never be tested with every possible test case. Only
some test cases can be run, under the assumption that the software will then produce correct
output for the remaining cases. Testing every possible case would take excessive cost and
effort, which is impractical.
• Early Testing: Test activities should start early so that defects are found sooner. A
defect detected in the early phases of the SDLC is far less expensive to fix. For better
results, software testing should start at the initial phase, i.e., at the requirement
analysis phase.
• Defect clustering: In a project, a small number of modules can contain most of the
defects. The Pareto Principle, applied to software testing, states that 80% of software
defects come from 20% of the modules.
• Pesticide paradox: Repeating the same test cases again and again will not find new bugs.
So it is necessary to review the test cases and add or update test cases to find new bugs.
• Testing is context dependent: The testing approach depends on the context of the software
being developed. Different types of software need different types of testing. For example,
testing an e-commerce site is different from testing an Android application.
• Absence of errors fallacy: If software is 99% bug-free but does not meet the user
requirements, it is unusable. It is not enough for software to be 99% bug-free; it is also
mandatory that it fulfill all the customer requirements.

Software Verification :
Software Verification is the process of checking that software achieves its goal without any
bugs. It ensures that the product is being developed right, i.e., that the developed product
fulfills the requirements that we have. Verification is static testing.
Verification asks: Are we building the product right?
Software Validation :
Software Validation is the process of checking whether the software product is up to the mark,
in other words, whether it meets the high-level requirements. It checks that what we are
developing is the right product, by validating the actual product against the expected
product. Validation is dynamic testing.
Validation asks: Are we building the right product?

The differences between Verification and Validation are as follows:

• Verification includes checking documents, design, code, and programs; validation includes
testing and validating the actual product.
• Verification is static testing; validation is dynamic testing.
• Verification does not include the execution of the code; validation includes the execution of
the code.
• Methods used in verification are reviews, walkthroughs, inspections, and desk-checking;
methods used in validation are black-box testing, white-box testing, and non-functional testing.
• Verification checks whether the software conforms to specifications; validation checks whether
the software meets the requirements and expectations of the customer.
• Verification can find bugs in the early stages of development; validation can only find the
bugs that could not be found by the verification process.
• The goal of verification is the application and software architecture and specification; the
goal of validation is the actual product.
• The quality assurance team does verification; validation is executed on software code with
the help of the testing team.
• Verification comes before validation; validation comes after verification.
• Verification consists of checking documents/files and is performed by humans; validation
consists of the execution of the program and is performed by computer.

Manual Vs Automated Testing

Testing can either be done manually or using an automated testing tool:


• Manual - This testing is performed without the help of automated testing tools. The
software tester prepares test cases for different sections and levels of the code, executes
the tests, and reports the results to the manager.
Manual testing is time- and resource-consuming, and the tester needs to confirm whether
or not the right test cases are used. A major portion of testing involves manual testing.
• Automated - This testing is performed with the aid of automated testing tools, which can
overcome the limitations of manual testing.
For example, checking that a webpage can be opened in Internet Explorer can easily be done
with manual testing, but checking whether the web server can take the load of 1 million users
is practically impossible to test manually.
There are software and hardware tools that help the tester conduct load testing, stress
testing, and regression testing.
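
As a minimal sketch of what an automated test looks like in practice, the following uses
Python's standard unittest module; the add function is a hypothetical unit under test, not
something from the original text. The tool runs every test and reports the results without
manual intervention:

    import unittest

    def add(a, b):
        # Hypothetical unit under test.
        return a + b

    class TestAdd(unittest.TestCase):
        def test_positive_numbers(self):
            self.assertEqual(add(2, 3), 5)

        def test_negative_numbers(self):
            self.assertEqual(add(-2, -3), -5)

    if __name__ == "__main__":
        unittest.main()   # discovers and runs all tests, then reports pass/fail

Once written, such a test can be re-run after every change, which is what makes automated
regression testing cheap.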

Testing Approaches

Tests can be conducted based on two approaches –

• Functionality testing
• Implementation testing
When functionality is being tested without taking the actual implementation into account, it is
known as black-box testing. The other side is known as white-box testing, where not only is
functionality tested but the way it is implemented is also analyzed.
Exhaustive testing is the best-desired method for perfect testing: every single possible value in
the range of the input and output values is tested. However, it is not possible to test each and
every value in a real-world scenario if the range of values is large.
Black-box testing
It is carried out to test the functionality of the program. It is also called ‘behavioral’ testing.
The tester, in this case, has a set of input values and the corresponding desired results. On
providing input, if the output matches the desired results, the program is tested ‘ok’, and
problematic otherwise.

In this testing method, the design and structure of the code are not known to the tester;
testing engineers and end users conduct this test on the software.
Black-box testing techniques:
1. Syntax Driven Testing – This type of testing is applied to systems that can be syntactically
represented by some language, for example compilers, or languages that can be represented by a
context-free grammar. Here, the test cases are generated so that each grammar rule is used at
least once.
2. Equivalence partitioning – It is often seen that many types of inputs work similarly, so
instead of giving all of them separately we can group them together and test only one input of
each group. The idea is to partition the input domain of the system into a number of
equivalence classes such that each member of a class works in a similar way, i.e., if a test case
in one class results in some error, other members of the class would result in the same error
(see the sketch after this list).

To calculate the square root of a number, the equivalence classes will be:
(a) Valid inputs:
• A whole number which is a perfect square – the output will be an integer.
• A whole number which is not a perfect square – the output will be a decimal number.
• Positive decimals.
(b) Invalid inputs:
• Negative numbers (integer or decimal).
• Characters other than numbers, like “a”, “!”, “;”, etc.

3. Boundary value analysis – Boundaries are very good places for errors to occur. Hence, if
test cases are designed for the boundary values of the input domain, the efficiency of testing
improves and the probability of finding errors also increases. For example, if the valid range is
10 to 100, then in addition to ordinary valid and invalid inputs, test 10 and 100 themselves as
well as values just outside the boundaries, such as 9 and 101 (see the sketch after this list).
4. Cause effect Graphing – This technique establishes relationship between logical input
called causes with corresponding actions called effect. The causes and effects are represented
using Boolean graphs. The following steps are followed:
1. Identify inputs (causes) and outputs (effect).
2. Develop cause effect graph.
3. Transform the graph into decision table.
4. Convert decision table rules to test cases.

[Figure: an example cause-effect graph and the decision table derived from it.]

Each column of the decision table corresponds to a rule, which becomes a test case for testing;
a table with four rules therefore yields 4 test cases.
5. Requirement based testing – It includes validating the requirements given in SRS of
software system.
6. Compatibility testing – The test case result depends not only on the product but also on the
infrastructure used for delivering functionality. When the infrastructure parameters are changed,
the software is still expected to work properly.
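
To make techniques 2 and 3 concrete, here is a minimal Python sketch; the safe_sqrt wrapper
and the 10-100 range are illustrative assumptions, not part of the original example. One
representative is picked per equivalence class, and boundary values are added around the
assumed valid range:

    import math

    def safe_sqrt(x):
        # Illustrative unit under test: rejects invalid inputs.
        if not isinstance(x, (int, float)) or x < 0:
            raise ValueError("invalid input")
        return math.sqrt(x)

    # Equivalence partitioning: one representative per class is enough.
    assert safe_sqrt(25) == 5          # perfect square -> integer result
    assert 2 < safe_sqrt(5) < 3        # non-perfect square -> decimal result
    assert safe_sqrt(2.25) == 1.5      # positive decimal

    for bad in (-4, -0.5, "a"):        # invalid classes: negatives, non-numbers
        try:
            safe_sqrt(bad)
            raise AssertionError("expected ValueError")
        except ValueError:
            pass

    # Boundary value analysis for a hypothetical valid range of 10..100:
    # test the boundaries themselves plus values just outside them.
    boundary_cases = [9, 10, 100, 101]
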

White-box testing
It is conducted to test the program and its implementation, in order to improve code efficiency
or structure. It is also known as ‘structural’ testing.

White-box testing techniques analyze the internal structure, the data structures used, the
internal design, the code structure, and the working of the software, rather than just the
functionality as in black-box testing. It is also called glass-box testing, clear-box testing,
or structural testing.
Working process of white box testing:
• Input: Requirements, Functional specifications, design documents, source code.
• Processing: Performing risk analysis for guiding through the entire process.
• Proper test planning: Designing test cases so as to cover entire code. Execute rinse-repeat
until error-free software is reached. Also, the results are communicated.
• Output: Preparing final report of the entire testing process.
Testing techniques:

• Statement coverage: In this technique, the aim is to traverse every statement at least once,
so each line of code is tested. In the case of a flowchart, every node must be traversed at
least once. Since all lines of code are covered, this helps in pointing out faulty code.

• Branch Coverage: In this technique, test cases are designed so that each branch from all
decision points is traversed at least once. In a flowchart, all edges must be traversed at least
once. In the accompanying example, 4 test cases are required so that all branches of all
decisions, i.e., all edges of the flowchart, are covered.

• Condition Coverage: In this technique, all individual conditions must be covered, as shown
in the following example:

1. READ X, Y
2. IF(X == 0 || Y == 0)
3. PRINT ‘0’

In this example, there are 2 conditions: X == 0 and Y == 0. Each condition must be tested so
that it takes both TRUE and FALSE values. One possible set of test cases would be:
• #TC1 – X = 0, Y = 55
• #TC2 – X = 5, Y = 0

• Path coverage Testing: In this technique, a control flow graph is made from the code or
flowchart, and then the cyclomatic complexity is calculated. It defines the number of
independent paths, so that the minimal number of test cases can be designed, one for each
independent path (see the sketch after this list).

Steps:
1. Make the corresponding control flow graph.
2. Calculate the cyclomatic complexity.
3. Find the independent paths.
4. Design test cases corresponding to each independent path.

Flow graph notation: It is a directed graph consisting of nodes and edges. Each node
represents a sequence of statements, or a decision point. A predicate node is the one that
represents a decision point that contains a condition after which the graph splits. Regions are
bounded by nodes and edges.
• Loop Testing: Loops are widely used and fundamental to many algorithms, hence their
testing is very important. Errors often occur at the beginnings and ends of loops.

1. Simple loops: For simple loops of size n, test cases are designed that:
• Skip the loop entirely
• Make only one pass through the loop
• Make 2 passes
• Make m passes, where m < n
• Make n-1 and n+1 passes
2. Nested loops: For nested loops, all the loops are set to their minimum count and we start
from the innermost loop. Simple loop tests are conducted for the innermost loop, and this
is worked outwards till all the loops have been tested.
3. Concatenated loops: Independent loops, one after another. Simple loop tests are applied
for each. If they are not independent, treat them like nested loops.
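
As a hedged sketch of the path-coverage steps above, the standard cyclomatic complexity
formula V(G) = E - N + 2P (edges, nodes, connected components) can be computed directly;
the 7-node, 8-edge graph below is a made-up example, not taken from the original figures:

    def cyclomatic_complexity(edges, nodes, components=1):
        # V(G) = E - N + 2P: the number of independent paths, and hence
        # the minimal number of test cases needed for path coverage.
        return edges - nodes + 2 * components

    # Hypothetical control flow graph with 7 nodes and 8 edges
    # (e.g., an if-else inside a loop): V(G) = 8 - 7 + 2 = 3.
    print(cyclomatic_complexity(edges=8, nodes=7))   # -> 3
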

Advantages:
1. White box testing is very thorough, as the entire code and structure are tested.
2. It results in optimization of the code, removing errors and helping remove extra lines of
code.
3. It can start at an earlier stage, as it doesn’t require a working interface, unlike black box
testing.
4. Easy to automate.
Disadvantages:
1. Main disadvantage is that it is very expensive.
2. Redesign of code and rewriting code needs test cases to be written again.
3. Testers are required to have in-depth knowledge of the code and programming language as
opposed to black box testing.
4. Missing functionalities cannot be detected as the code that exists is tested.
5. Very complex and at times not realistic.
Differences between Black Box Testing and White Box Testing:

• Black box testing is a way of software testing in which the internal structure, code, or
program is hidden and nothing is known about it; in white box testing, the tester has knowledge
of the internal structure, code, or program of the software.
• Black box testing is mostly done by software testers; white box testing is mostly done by
software developers.
• Black box testing needs no knowledge of implementation; white box testing requires
knowledge of implementation.
• Black box testing can be referred to as outer or external software testing; white box testing
is inner or internal software testing.
• Black box testing is a functional test of the software; white box testing is a structural test
of the software.
• Black box testing can be initiated on the basis of the requirement specification document;
white box testing is started after the detailed design document is ready.
• Black box testing requires no knowledge of programming; for white box testing, knowledge of
programming is mandatory.
• Black box testing is behavior testing of the software; white box testing is logic testing of
the software.
• Black box testing is applicable to the higher levels of software testing; white box testing is
generally applicable to the lower levels.
• Black box testing is also called closed testing; white box testing is also called clear box
testing.
• Black box testing is the least time-consuming; white box testing is the most time-consuming.
• Black box testing is not suitable or preferred for algorithm testing; white box testing is
suitable for algorithm testing.
• Black box testing can be done by trial-and-error methods; in white box testing, data domains
along with inner or internal boundaries can be better tested.
• Types of black box testing: functional testing, non-functional testing, regression testing.
Types of white box testing: path testing, loop testing, condition testing.

Testing Levels
Testing itself may be defined at various levels of the SDLC. The testing process runs parallel
to software development: before jumping to the next stage, a stage is tested, validated, and
verified. Testing at separate levels is done to make sure that there are no hidden bugs or
issues left in the software. Software is tested at various levels -

Unit Testing

While coding, the programmer performs some tests on that unit of the program to know whether
it is error-free. Unit testing is performed under the white-box testing approach. It helps
developers decide whether individual units of the program are working as per requirement and
are error-free.
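
A minimal sketch of a unit test that isolates the unit from its surroundings: the discount
function and the stubbed rate lookup below are hypothetical, but they show the idea of
exercising one unit with its dependency replaced:

    def apply_discount(price, get_rate):
        # Unit under test; get_rate is injected so it can be stubbed.
        return round(price * (1 - get_rate()), 2)

    def stub_rate():
        # Stub standing in for a real rate lookup (e.g., a database call).
        return 0.10

    assert apply_discount(200.0, stub_rate) == 180.0   # 10% off 200 -> 180
    print("unit test passed")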

Integration Testing
Integration testing is the process of testing the interface between two software units or
modules. Its focus is on determining the correctness of the interface. The purpose of
integration testing is to expose faults in the interaction between integrated units. Once all
the modules have been unit tested, integration testing is performed.

Integration test approaches –


There are four types of integration testing approaches. Those approaches are the following:

1. Big-Bang Integration Testing –


It is the simplest integration testing approach: all the modules are combined, and the
functionality is verified after the completion of individual module testing. In simple words,
all the modules of the system are simply put together and tested. This approach is practicable
only for very small systems. Once an error is found during integration testing, it is very
difficult to localize, as the error may potentially belong to any of the modules being
integrated. So, debugging errors reported during big bang integration testing is very
expensive.

Advantages:
• It is convenient for small systems.
Disadvantages:
• There will be quite a lot of delay because you would have to wait for all the modules to be
integrated.
• High risk critical modules are not isolated and tested on priority since all modules are tested
at once.

2. Bottom-Up Integration Testing –


In bottom-up testing, each module at the lower levels is tested with higher modules until all
modules are tested. The primary purpose of this integration testing is to test the interfaces
among the various modules making up each subsystem. This integration testing uses test drivers
to drive and pass appropriate data to the lower-level modules.

Advantages:
• In bottom-up testing, no stubs are required.
• A principal advantage of this integration testing is that several disjoint subsystems can be
tested simultaneously.
Disadvantages:
• Driver modules must be produced.
• Complexity arises when the system is made up of a large number of small subsystems.

3. Top-Down Integration Testing –


Top-down integration testing uses stubs to simulate the behaviour of the lower-level modules
that are not yet integrated. In this integration testing, testing takes place from top to
bottom: first, the high-level modules are tested, then the low-level modules, and finally the
low-level modules are integrated with the high level to ensure the system is working as intended.
Advantages:
• Modules are debugged separately.
• Few or no drivers are needed.
• It is more stable and accurate at the aggregate level.
Disadvantages:
• Needs many Stubs.
• Modules at lower level are tested inadequately.

4. Mixed Integration Testing –


Mixed integration testing is also called sandwiched integration testing. It follows a
combination of the top-down and bottom-up testing approaches. In the top-down approach,
testing can start only after the top-level modules have been coded and unit tested; in the
bottom-up approach, testing can start only after the bottom-level modules are ready. The
sandwich (mixed) approach overcomes this shortcoming of the top-down and bottom-up approaches.
Advantages:
• The mixed approach is useful for very large projects having several sub-projects.
• The sandwich approach overcomes the shortcomings of the top-down and bottom-up
approaches.
Disadvantages:
• Mixed integration testing requires a very high cost, because one part follows the top-down
approach while another part follows the bottom-up approach.
• This integration testing cannot be used for smaller systems with huge interdependence
between the modules.
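
The stub and driver vocabulary used above can be made concrete with a short sketch; all the
function names here are hypothetical. In top-down integration a stub stands in for an
unfinished lower module, while in bottom-up integration a driver exercises a finished lower
module:

    # Top-down: a stub replaces the not-yet-integrated tax module.
    def tax_stub(amount):
        # Canned answer until the real tax module is ready.
        return 0.0

    def billing(amount, tax_fn):
        # High-level module under test; the tax module plugs in below it.
        return amount + tax_fn(amount)

    assert billing(100.0, tax_stub) == 100.0   # tests billing in isolation

    # Bottom-up: a driver feeds test data to the finished lower module.
    def real_tax(amount):
        return amount * 0.05

    def driver():
        assert real_tax(100.0) == 5.0          # checks the interface directly

    driver()
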

System Testing

The software is compiled as a product and then tested as a whole. This can be accomplished
using one or more of the following tests:
• Functionality testing - Tests all functionalities of the software against the requirements.
• Performance testing - This test proves how efficient the software is. It tests the
effectiveness and the average time taken by the software to do a desired task. Performance
testing is done by means of load testing and stress testing, where the software is put
under high user and data load under various environment conditions.
• Security & Portability - These tests are done when the software is meant to work on
various platforms and be accessed by a number of users.

Acceptance Testing
When the software is ready to hand over to the customer, it has to go through the last phase of
testing, where it is tested for user interaction and response. This is important because even
if the software matches all user requirements, if the user does not like the way it appears or
works, it may be rejected.
• Alpha testing - The team of developers themselves performs alpha testing by using the
system as if it were being used in the work environment. They try to find out how a user
would react to some action in the software and how the system should respond to inputs.
• Beta testing - After the software is tested internally, it is handed over to the users to use
under their production environment, only for testing purposes. This is not yet the
delivered product. Developers expect that users at this stage will surface minute problems
that were previously missed.

Regression Testing
Whenever a software product is updated with new code, feature or functionality, it is tested
thoroughly to detect if there is any negative impact of the added code. This is known as
regression testing.

Testing vs. Quality Control, Quality Assurance and Audit

We need to understand that software testing is different from software quality assurance,
software quality control and software auditing.
• Software quality assurance - This is a means of monitoring the software development
processes, by which it is assured that all measures are taken as per the standards of the
organization. This monitoring is done to make sure that proper software development
methods are followed.
• Software quality control - This is a system to maintain the quality of the software product.
It may include functional and non-functional aspects of the software product, which enhance
the goodwill of the organization. This system makes sure that the customer is receiving a
quality product for their requirements and that the product is certified as ‘fit for use’.
• Software audit - This is a review of the procedures used by the organization to develop the
software. A team of auditors, independent of the development team, examines the software
process, procedures, requirements, and other aspects of the SDLC. The purpose of a software
audit is to check that the software and its development process both conform to standards,
rules, and regulations.
Debugging
In the context of software engineering, debugging is the process of fixing a bug in the
software. In other words, it refers to identifying, analyzing and removing errors. This activity
begins after the software fails to execute properly and concludes by solving the problem and
successfully testing the software. It is considered to be an extremely complex and tedious task
because errors need to be resolved at all stages of debugging.

Debugging Process: Steps involved in debugging are:


• Problem identification and report preparation.
• Assigning the report to a software engineer to verify that the defect is genuine.
• Defect analysis using modeling, documentation, finding and testing candidate flaws, etc.
• Defect resolution by making the required changes to the system.
• Validation of corrections.

Debugging Strategies:
1. Study the system for a longer duration in order to understand it. This helps the debugger
construct different representations of the system being debugged, depending on the need. The
system is also studied actively to find recent changes made to the software.
2. Backward analysis of the problem, which involves tracing the program backward from the
location of the failure message in order to identify the region of faulty code. A detailed
study of the region is conducted to find the cause of the defect.
3. Forward analysis of the program, which involves tracing the program forward using
breakpoints or print statements at different points in the program and studying the results.
The region where the wrong outputs are obtained is the region that needs to be focused on to
find the defect (see the sketch below).
4. Using past experience, debug the software by comparing it with software that had problems
similar in nature. The success of this approach depends on the expertise of the debugger.
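
As a hedged illustration of forward analysis (strategy 3), Python's built-in pdb debugger can
pause execution at a chosen point so that intermediate values can be inspected while tracing
forward; the average function here is illustrative:

    import pdb

    def average(values):
        total = sum(values)
        # Breakpoint for forward analysis: step on from here and inspect 'total'.
        # pdb.set_trace()           # uncomment to drop into the debugger
        return total / len(values)  # fails with ZeroDivisionError for []

    print(average([2, 4, 6]))       # trace forward from a known-good input
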

Debugging Tools:
A debugging tool is a computer program that is used to test and debug other programs. A lot of
public-domain debuggers, like gdb and dbx, are available; they offer console-based command-line
interfaces. Examples of automated debugging tools include code-based tracers, profilers,
interpreters, etc.
Some of the widely used debuggers are:
• Radare2
• WinDbg
• Valgrind
Difference Between Debugging and Testing:
Debugging is different from testing. Testing focuses on finding bugs, errors, etc., whereas
debugging starts after a bug has been identified in the software. Testing is used to ensure
that the program does what it was supposed to do, with a certain minimum success rate. Testing
can be manual or automated, and there are several different types of testing, like unit testing,
integration testing, alpha and beta testing, etc.
Debugging requires a lot of knowledge, skill, and expertise. It can be supported by some
automated tools, but it remains largely a manual process, as every bug is different and
requires a different technique, unlike a pre-defined testing mechanism.
Software Maintenance
Software maintenance is a part of the Software Development Life Cycle. Its primary goal is to
modify and update a software application after delivery, to correct errors and to improve
performance. Software is a model of the real world; when the real world changes, the software
requires alteration wherever applicable.

Software maintenance is an inclusive activity that includes error correction, enhancement of
capabilities, deletion of obsolete capabilities, and optimization.

Need for Maintenance

Software Maintenance is needed for:-

o To correct errors
o To adapt to changes in user requirements over time
o To adapt to changing hardware/software requirements
o To improve system efficiency
o To optimize the code to run faster
o To modify components
o To reduce any unwanted side effects.

Thus the maintenance is required to ensure that the system continues to satisfy user
requirements.

Types of maintenance

In a software lifetime, the type of maintenance may vary based on its nature. It may be just a
routine maintenance task, such as fixing a bug discovered by some user, or it may be a large
event in itself, based on the size or nature of the maintenance. Following are some types of
maintenance based on their characteristics:
• Corrective Maintenance - This includes modifications and updates done in order to
correct or fix problems which are either discovered by users or concluded from user error
reports.
• Adaptive Maintenance - This includes modifications and updates applied to keep the
software product up to date and tuned to the ever-changing world of technology and
business environment.
• Perfective Maintenance - This includes modifications and updates done in order to keep
the software usable over a long period of time. It includes new features and new user
requirements, refining the software and improving its reliability and performance.
• Preventive Maintenance - This includes modifications and updates to prevent future
problems with the software. It aims to attend to problems which are not significant at this
moment but may cause serious issues in the future.
Software Project
A Software Project is the complete procedure of software development from requirement
gathering to testing and maintenance, carried out according to the execution methodologies, in a
specified period of time to achieve intended software product.

Need of software project management

Software is said to be an intangible product. Software development is a relatively new stream
in world business, and there is very little experience in building software products. Most
software products are tailor-made to fit the client’s requirements. Most importantly, the
underlying technology changes and advances so frequently and rapidly that experience with one
product may not apply to another. All such business and environmental constraints bring risk
to software development; hence it is essential to manage software projects efficiently.

Software projects are governed by the triple constraints of time, cost, and quality. It is an
essential part of a software organization to deliver a quality product, keep the cost within the
client’s budget constraint, and deliver the project as per schedule. There are several factors,
both internal and external, which may impact this triple-constraint triangle. Any one of the
three factors can severely impact the other two. Therefore, software project management is
essential to incorporate user requirements along with budget and time constraints.

Software Project Management consists of many activities, that includes planning of the project,
deciding the scope of product, estimation of cost in different terms, scheduling of tasks, etc.


The list of activities is as follows:

1. Project planning and Tracking


2. Project Resource Management
3. Scope Management
4. Estimation Management
5. Project Risk Management
6. Scheduling Management
7. Project Communication Management
8. Configuration Management

Now we will discuss all these activities -

1. Project Planning: It is a set of multiple processes, or we can say a set of tasks performed
before the construction of the product starts.


2. Scope Management: It describes the scope of the project. Scope management is important
because it clearly defines what will be done and what will not. Scope management keeps the
project contained to limited and quantifiable tasks, which can be easily documented, and in
turn avoids cost and time overrun.

3. Estimation management: This is not only about cost estimation: whenever we start to
develop software, we also estimate its size (lines of code), effort, and time, as well as cost.

If we talk about size, the lines of code depend upon user or software requirements.

If we talk about effort, we should know the size of the software, because based on the size we
can quickly estimate how big a team is required to produce the software.

If we talk about time, once size and effort are estimated, the time required to develop the
software can easily be determined.

And if we talk about cost, it includes all the elements such as:

o Size of software
o Quality
o Hardware
o Communication
o Training
o Additional Software and tools
o Skilled manpower

4. Scheduling Management: Scheduling management in software refers to completing all the
activities in the specified order and within the time slotted for each activity. Project
managers define multiple tasks and arrange them keeping various factors in mind.

For scheduling, it is necessary to:

o Identify the tasks and correlate them.
o Divide time into units.
o Assign the respective number of work-units to every task.
o Calculate the total time from start to finish.
o Break down the project into modules.

5. Project Resource Management: In software development, all the elements are referred to as
resources for the project. These can be human resources, productive tools, and libraries.

Resource management includes:

o Creating a project team and assigning responsibilities to every team member
o Developing a resource plan derived from the project plan
o Adjustment of resources.

6. Project Risk Management: Risk management consists of all the activities like identifying,
analyzing, and preparing a plan for predictable and unpredictable risks in the project.

Several points show the risks in the project:

o An experienced team leaves the project and a new team joins it.
o Changes in requirements.
o Changes in technology and the environment.
o Market competition.

7. Project Communication Management: Communication is an essential factor in the success
of the project. It is a bridge between the client, the organization, the team members, and
other stakeholders of the project, such as hardware suppliers.

From planning to closure, communication plays a vital role. In all phases, communication
must be clear and understood. Miscommunication can create a big blunder in the project.

8. Project Configuration Management: Configuration management is about controlling the
changes in software, such as requirements, design, and development of the product.

The primary goal is to increase productivity with fewer errors.

Some reasons that show the need for configuration management:

o Several people work on software that is continually being updated.
o It helps to build coordination among suppliers.
o Changes in requirements, budget, and schedule need to be accommodated.
o Software should run on multiple systems.
Tasks perform in Configuration management:

o Identification
o Baseline
o Change Control
o Configuration Status Accounting
o Configuration Audits and Reviews

Project Management Tools


The risk and uncertainty rise manifold with respect to the size of the project, even when the
project is developed according to set methodologies.
There are tools available which aid effective project management. A few are described -

Gantt Chart

The Gantt chart was devised by Henry Gantt (1917). It represents the project schedule with
respect to time periods. It is a horizontal bar chart with bars representing activities and the
time scheduled for the project activities.

PERT Chart
PERT (Program Evaluation & Review Technique) chart is a tool that depicts project as network
diagram. It is capable of graphically representing main events of project in both parallel and
consecutive way. Events, which occur one after another, show dependency of the later event
over the previous one.

Events are shown as numbered nodes. They are connected by labeled arrows depicting sequence
of tasks in the project.

Resource Histogram

This is a graphical tool that contains bars representing the number of resources (usually
skilled staff) required over time for a project event (or phase). The resource histogram is an
effective tool for staff planning and coordination.

Critical Path Analysis

This tool is useful in recognizing interdependent tasks in the project. It also helps to find
out the shortest path, or critical path, to complete the project successfully. Like the PERT
diagram, each event is allotted a specific time frame. This tool shows the dependency of
events, assuming an event can proceed to the next only if the previous one is completed.
The events are arranged according to their earliest possible start time. The path between the
start and end nodes is the critical path, which cannot be reduced further, and all its events
must be executed in the same order.
Software Project Planning
A Software Project is the complete methodology of software development, from requirement
gathering to testing and support, carried out according to the execution procedures in a
specified period to achieve the intended software product.

Need of Software Project Management

Software development is a relatively new stream in world business, and there is very little
experience in building software products. Most software products are customized to
accommodate the customer’s requirements. Most significantly, the underlying technology changes
and advances so rapidly that the experience of one product may not carry over to another one.
All such business and environmental constraints bring risk to software development; hence, it
is fundamental to manage software projects efficiently.

Software Project Manager

The software project manager is responsible for planning and scheduling project development.
They manage the work to ensure that it is completed to the required standard, and monitor
progress to check that development is on time and within budget. Project planning must
incorporate the major issues like size and cost estimation, scheduling, project monitoring,
personnel selection and evaluation, and risk management. To plan a successful software project,
we must understand:

o Scope of work to be completed
o Risk analysis
o The resources required
o The project to be accomplished
o The record to be followed

Software project planning starts before technical work starts. Size estimation is the first of
the planning activities: the size is the crucial parameter for the estimation of other
activities. Resource requirements are estimated on the basis of cost and development time. The
project schedule may prove very useful for controlling and monitoring the progress of the
project; it depends on resources and development time.
Project size estimation techniques

Estimation of the size of software is an essential part of Software Project Management. It helps
the project manager to further predict the effort and time which will be needed to build the
project. Various measures are used in project size estimation. Some of these are:
• Lines of Code
• Number of entities in ER diagram
• Total number of processes in detailed data flow diagram
• Function points

1. Lines of Code (LOC): As the name suggests, LOC counts the total number of lines of source
code in a project. The units of LOC are:
• KLOC - Thousand lines of code
• NLOC - Non-comment lines of code
• KDSI - Thousands of delivered source instructions
The size is estimated by comparison with existing systems of the same kind. Experts use it
to predict the required size of the various components of the software and then add them to
get the total size.
Advantages:
• Universally accepted and is used in many models like COCOMO.
• Estimation is closer to developer’s perspective.
• Simple to use.

Disadvantages:
• Different programming languages require different numbers of lines for the same functionality.
• No proper industry standard exists for this technique.
• It is difficult to estimate the size using this technique in the early stages of a project.

2. Number of entities in ER diagram: The ER model provides a static view of the project. It
describes the entities and their relationships. The number of entities in the ER model can be
used as a measure of the size of the project, since more entities need more classes/structures,
thus leading to more code.

Advantages:
• Size estimation can be done during initial stages of planning.
• Number of entities is independent of programming technologies used.

Disadvantages:
• No fixed standards exist; some entities contribute more to project size than others.
• Just like FPA, it is less used in cost estimation models; hence, it must be converted to LOC.

3. Total number of processes in detailed data flow diagram: The Data Flow Diagram (DFD)
represents the functional view of the software. The model depicts the main processes/functions
involved in the software and the flow of data between them. The number of processes in the DFD
is used to predict software size: already existing processes of a similar type are studied and
used to estimate the size of each process, and the sum of the estimated sizes of all processes
gives the final estimated size.

Advantages:
• It is independent of the programming language.
• Each major process can be decomposed into smaller processes, which increases the accuracy
of the estimation.

Disadvantages:
• Studying similar kinds of processes to estimate size takes additional time and effort.
• Not all software projects require the construction of a DFD.

4. Function Point Analysis: In this method, the number and type of functions supported by the
software are utilized to find FPC(function point count). The steps in function point analysis are:
• Count the number of functions of each proposed type.
• Compute the Unadjusted Function Points(UFP).
• Find Total Degree of Influence(TDI).
• Compute Value Adjustment Factor(VAF).
• Find the Function Point Count(FPC).
A common form of these calculations is sketched below.
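
The following Python sketch shows one common form of the FPC computation; the counts and the
TDI value are illustrative assumptions, and the weights are the commonly published "average
complexity" weights:

    # Sketch of a Function Point computation under the stated assumptions.
    counts = {"EI": 10, "EO": 7, "EQ": 5, "ILF": 4, "EIF": 2}    # hypothetical counts
    weights = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}   # average weights

    ufp = sum(counts[t] * weights[t] for t in counts)   # Unadjusted Function Points

    tdi = 30                    # Total Degree of Influence: sum of 14 ratings, 0-5 each
    vaf = 0.65 + 0.01 * tdi     # Value Adjustment Factor
    fpc = ufp * vaf             # Function Point Count

    print(ufp, vaf, fpc)
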
Advantages:
• It can be easily used in the early stages of project planning.
• It is independent of the programming language.
• It can be used to compare different projects even if they use different technologies
(database, language, etc.).
Disadvantages:
• It is not good for real-time systems and embedded systems.
• Many cost estimation models like COCOMO use LOC; hence FPC must be converted to LOC.
COCOMO Model
COCOMO (Constructive Cost Model) is a regression model based on LOC, i.e., the number of lines
of code. It is a procedural cost estimation model for software projects, often used to reliably
predict the various parameters associated with a project, such as size, effort, cost, time, and
quality. It was proposed by Barry Boehm in 1981 and is based on a study of 63 projects, which
makes it one of the best-documented models.
The key parameters that define the quality of any software product, and which are also an
outcome of COCOMO, are primarily effort and schedule:
• Effort: The amount of labor that will be required to complete a task. It is measured in
person-month units.
• Schedule: The amount of time required for the completion of the job, which is, of course,
proportional to the effort put in. It is measured in units of time such as weeks or months.
Different models of COCOMO have been proposed to predict the cost estimation at different
levels, based on the amount of accuracy and correctness required. All of these models can be
applied to a variety of projects, whose characteristics determine the values of the constants
to be used in the subsequent calculations. These characteristics, pertaining to different
system types, are mentioned below.

Boehm’s definition of organic, semidetached, and embedded systems:


1. Organic – A software project is said to be an organic type if the team size required is
adequately small, the problem is well understood and has been solved in the past and also the
team members have a nominal experience regarding the problem.
2. Semi-detached – A software project is said to be a Semi-detached type if the vital
characteristics such as team-size, experience, knowledge of the various programming
environment lie in between that of organic and Embedded. The projects classified as Semi-
Detached are comparatively less familiar and difficult to develop compared to the organic
ones and require more experience and better guidance and creativity. Eg: Compilers or
different Embedded Systems can be considered of Semi-Detached type.
3. Embedded – A software project requiring the highest level of complexity, creativity, and
experience falls under this category. Such software requires a larger team size than the other
two models, and the developers need to be sufficiently experienced and creative to develop such
complex models.
All the above system types utilize different values of the constants used in Effort Calculations.
Types of Models: COCOMO consists of a hierarchy of three increasingly detailed and
accurate forms. Any of the three forms can be adopted according to our requirements. These
are types of COCOMO model:

1. Basic COCOMO Model


2. Intermediate COCOMO Model
3. Detailed COCOMO Model

The first level, Basic COCOMO can be used for quick and slightly rough calculations of
Software Costs. Its accuracy is somewhat restricted due to the absence of sufficient factor
considerations.
Intermediate COCOMO takes these cost drivers into account, and Detailed COCOMO additionally
accounts for the influence of individual project phases; i.e., in the Detailed case it accounts
for both the cost drivers and phase-wise calculations, thereby producing a more accurate
result. These two models are further discussed below.
Estimation of Effort: Calculations –

1. Basic Model –

Effort = a * (KLOC)^b person-months
Development Time = c * (Effort)^d months

These formulas are used for the cost estimation of the basic COCOMO model, and are also
used in the subsequent models (a worked sketch appears at the end of this section). The
constant values a, b, c, and d for the Basic Model for the different categories of system are:
Software Projects a b c d

Organic 2.4 1.05 2.5 0.38

Semi Detached 3.0 1.12 2.5 0.35

Embedded 3.6 1.20 2.5 0.32


The effort is measured in person-months and, as evident from the formula, is dependent on
kilo-lines of code.
The development time is measured in months.
These formulas are used as-is in the Basic Model calculations; since not much consideration
of different factors such as reliability and expertise is taken into account, the estimate
is rough.
2. Intermediate Model –
The basic COCOMO model assumes that effort is only a function of the number of lines of
code and some constants evaluated according to the different software systems. However, in
reality, no system’s effort and schedule can be solely calculated on the basis of lines of
code: various other factors such as reliability, experience, and capability must be
considered. These factors are known as cost drivers, and the Intermediate Model utilizes 15
such drivers for cost estimation.
Classification of Cost Drivers and their attributes:
(i) Product attributes –
• Required software reliability extent
• Size of the application database
• The complexity of the product
(ii) Hardware attributes –
• Run-time performance constraints
• Memory constraints
• The volatility of the virtual machine environment
• Required turnabout time
(iii) Personnel attributes –
• Analyst capability
• Software engineering capability
• Applications experience
• Virtual machine experience
• Programming language experience
(iv) Project attributes –
• Use of software tools
• Application of software engineering methods
• Required development schedule

3. Detailed Model –
Detailed COCOMO incorporates all characteristics of the intermediate version with an
assessment of the cost driver’s impact on each step of the software engineering process. The
detailed model uses different effort multipliers for each cost driver attribute. In detailed
cocomo, the whole software is divided into different modules and then we apply COCOMO in
different modules to estimate effort and then sum the effort.
The six phases of detailed COCOMO are:
1. Planning and requirements
2. System design
3. Detailed design
4. Module code and test
5. Integration and test
6. Cost constructive model
The effort is calculated as a function of program size and a set of cost drivers are given
according to each phase of the software lifecycle.
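
Pulling together the Basic Model formulas and the constants table from earlier in this section,
here is a small worked sketch in Python; the 32 KLOC input is an illustrative assumption:

    # Basic COCOMO: Effort = a * (KLOC)^b person-months,
    #               Time   = c * (Effort)^d months.
    COEFFS = {
        "organic":       (2.4, 1.05, 2.5, 0.38),
        "semi-detached": (3.0, 1.12, 2.5, 0.35),
        "embedded":      (3.6, 1.20, 2.5, 0.32),
    }

    def basic_cocomo(kloc, mode="organic"):
        a, b, c, d = COEFFS[mode]
        effort = a * kloc ** b      # person-months
        time = c * effort ** d      # months
        return effort, time

    effort, time = basic_cocomo(32, "organic")   # hypothetical 32 KLOC project
    print(f"effort = {effort:.1f} PM, time = {time:.1f} months")
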

Software Metrics
A software metric is a measure of software characteristics which are measurable or countable.
Software metrics are valuable for many reasons, including measuring software performance,
planning work items, measuring productivity, and many other uses.

Within the software development process, there are many metrics that are all connected.
Software metrics relate to the four functions of management: planning, organization, control,
and improvement.

Classification of Software Metrics

Software metrics can be classified into two types as follows:

1. Product Metrics: These are the measures of various characteristics of the software product.
The two important software characteristics are:


1. Size and complexity of software.


2. Quality and reliability of software.

These metrics can be computed for different stages of SDLC.

2. Process Metrics: These are the measures of various characteristics of the software
development process. For example, the efficiency of fault detection. They are used to measure
the characteristics of methods, techniques, and tools that are used for developing software.
Types of Metrics

Internal metrics: Internal metrics are the metrics used for measuring properties that are viewed
to be of greater importance to a software developer. For example, Lines of Code (LOC) measure.

External metrics: External metrics are the metrics used for measuring properties that are viewed
to be of greater importance to the user, e.g., portability, reliability, functionality, usability, etc.

Hybrid metrics: Hybrid metrics are the metrics that combine product, process, and resource
metrics. For example, cost per FP where FP stands for Function Point Metric.

Project metrics: Project metrics are the metrics used by the project manager to check the
project's progress. Data from past projects are used to collect various metrics, like time and
cost; these estimates are used as a baseline for new software. As the project proceeds, the
project manager checks its progress from time to time and compares the effort, cost, and time
with the original effort, cost, and time. These metrics are used to decrease development cost,
time, effort, and risk. The project quality can also be improved: as quality improves, the
number of errors, as well as the time and cost required, is reduced.

Advantage of Software Metrics

o Comparative study of the various design methodologies of software systems.
o Analysis, comparison, and critical study of different programming languages with respect to
their characteristics.
o Comparing and evaluating the capabilities and productivity of people involved in software
development.
o Preparation of software quality specifications.
o Verification of the compliance of software systems with requirements and specifications.
o Making inferences about the effort to be put into the design and development of software
systems.
o Getting an idea about the complexity of the code.

Disadvantage of Software Metrics

The application of software metrics is not always easy, and in some cases, it is difficult and
costly.

The verification and justification of software metrics are based on historical/empirical data
whose validity is difficult to verify.

These are useful for managing software products but not for evaluating the performance of the
technical staff.
Size Oriented Metrics
LOC Metrics

It is one of the earliest and simplest metrics for calculating the size of a computer program.
It is generally used in calculating and comparing the productivity of programmers. These
metrics are derived by normalizing quality and productivity measures with respect to the size
of the product.

Following are the points regarding LOC measures:

1. In size-oriented metrics, LOC is considered to be the normalization value.
2. It is an older method that was developed when FORTRAN and COBOL programming
were very popular.
3. Productivity is defined as KLOC / EFFORT, where effort is measured in person-months.
4. Size-oriented metrics depend on the programming language used.
5. Because productivity depends on KLOC, assembly language code will show higher
apparent productivity.
6. The LOC measure requires a level of detail which may not be practically achievable.
7. The more expressive the programming language, the lower the apparent productivity.

Based on the LOC/KLOC count of software, many other metrics can be computed:

a. Errors/KLOC.
b. $/ KLOC.
c. Defects/KLOC.
d. Pages of documentation/KLOC.
e. Errors/PM.
f. Productivity = KLOC/PM (effort is measured in person-months).
g. $/ Page of documentation.
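
A tiny sketch of how such derived measures are computed, with made-up project numbers (all
values are illustrative assumptions):

    kloc = 33.2          # hypothetical size in thousands of lines of code
    effort_pm = 12.0     # person-months spent
    defects = 58
    cost = 168000.0      # currency units

    print("productivity   =", kloc / effort_pm, "KLOC/PM")
    print("defect density =", defects / kloc, "defects/KLOC")
    print("unit cost      =", cost / kloc, "$/KLOC")
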

Advantages of LOC

1. Simple to measure

Disadvantage of LOC

1. It is defined on the code; for example, it cannot measure the size of the specification.
2. It characterizes only one specific view of size, namely length, and takes no account of
functionality or complexity.
3. Bad software design may cause an excessive number of lines of code.
4. It is language dependent.
5. Users cannot easily understand it.

What is Risk?
"Tomorrow problems are today's risk." Hence, a clear definition of a "risk" is a problem that
could cause some loss or threaten the progress of the project, but which has not happened yet.

These potential issues might harm cost, schedule or technical success of the project and the
quality of our software device, or project team morale.

Risk management is the system of identifying, addressing, and eliminating these problems before
they can damage the project.

We need to differentiate risks, as potential issues, from the current problems of the project.


Different methods are required to address these two kinds of issues.

For example, a staff shortage, because we have not been able to select people with the right
technical skills, is a current problem; the threat of our technical people being hired away by
the competition is a risk.

Risk Management

A software project can be affected by a large variety of risks. In order to be able to
systematically identify the significant risks which might affect a software project, it is
essential to classify risks into different classes. The project manager can then check which
risks from each class are relevant to the project.

There are three main classifications of risks which can affect a software project:

1. Project risks
2. Technical risks
3. Business risks

1. Project risks: Project risks concern various forms of budgetary, schedule, personnel,
resource, and customer-related problems. A vital project risk is schedule slippage. Since
software is intangible, it is very tough to monitor and control a software project; it is very
hard to control something that cannot be seen. For any manufacturing project, such as the
manufacturing of cars, the project manager can see the product taking shape.

2. Technical risks: Technical risks concern potential design, implementation, interfacing,
testing, and maintenance issues. They also include ambiguous specifications, incomplete
specifications, changing specifications, technical uncertainty, and technical obsolescence.
Most technical risks appear due to the development team's insufficient knowledge of the project.

3. Business risks: This type of risk includes the risk of building an excellent product that no
one needs, losing budgetary or personnel commitments, etc.

Other risk categories

1. Known risks: Those risks that can be uncovered after careful assessment of the project
plan, the business and technical environment in which the project is being developed, and
reliable data sources (e.g., an unrealistic delivery date).
2. Predictable risks: Those risks that are hypothesized from previous project experience
(e.g., past staff turnover).
3. Unpredictable risks: Those risks that can and do occur, but are extremely tough to
identify in advance.

Principle of Risk Management

1. Global Perspective: In this, we review the bigger system description, design, and
implementation. We look at the chance and the impact the risk is going to have.
2. Take a forward-looking view: Consider the threat which may appear in the future and
create future plans for directing the next events.
3. Open Communication: This is to allow the free flow of communications between the
client and the team members so that they have certainty about the risks.
4. Integrated management: In this method risk management is made an integral part of
project management.
5. Continuous process: In this phase, the risks are tracked continuously throughout the risk
management paradigm.
Risk Management Activities
Risk management consists of three main activities, described below.

Risk Assessment

The objective of risk assessment is to rank the risks in terms of their loss-causing potential.
For risk assessment, first, every risk should be rated in two ways:

o The probability of the risk coming true (denoted as r).
o The consequence of the problems associated with that risk (denoted as s).

Based on these two factors, the priority of each risk can be estimated:

p = r * s

where p is the priority with which the risk must be controlled, r is the probability of the
risk becoming true, and s is the severity of the loss caused if the risk becomes true. If all
identified risks are prioritized, then the most likely and most damaging risks can be controlled
first, and more comprehensive risk abatement methods can be designed for those risks.
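
A minimal Python sketch of this p = r * s prioritization over a hypothetical risk register (the
risks and their ratings are invented for illustration):

    # p = r * s: probability of the risk times severity of the loss.
    risks = [
        ("key developer leaves", 0.3, 9),   # (name, r, s)
        ("requirements change",  0.6, 5),
        ("server capacity low",  0.1, 8),
    ]

    for name, r, s in sorted(risks, key=lambda t: t[1] * t[2], reverse=True):
        print(f"p = {r * s:.2f}  {name}")   # handle highest-priority risks first
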

1. Risk Identification: The project organizer needs to anticipate the risks in the project as
early as possible, so that the impact of the risks can be reduced by making an effective risk
management plan.

A project can be affected by a large variety of risks. In order to identify the significant
risks that might affect a project, it is necessary to categorize risks into different classes.
There are different types of risks which can affect a software project:

1. Technology risks: Risks that arise from the software or hardware technologies that are
used to develop the system.
2. People risks: Risks that are connected with the people in the development team.
3. Organizational risks: Risks that arise from the organizational environment where the
software is being developed.
4. Tools risks: Risks that arise from the software tools and other support software used to
create the system.
5. Requirement risks: Risks that arise from changes to the customer requirements and the
process of managing the requirements change.
6. Estimation risks: Risks that arise from the management estimates of the resources
required to build the system.

2. Risk Analysis: During the risk analysis process, you have to consider every identified risk
and form a judgment about the probability and seriousness of that risk.

There is no simple way to do this; you have to rely on your own judgment and experience of
previous projects and the problems that arose in them.

It is not possible to make an exact numerical estimate of the probability and seriousness of
each risk. Instead, you should assign each risk to one of several bands:

1. The probability of the risk might be determined as very low (0-10%), low (10-25%),
moderate (25-50%), high (50-75%) or very high (+75%).
2. The effect of the risk might be determined as catastrophic (threaten the survival of the
plan), serious (would cause significant delays), tolerable (delays are within allowed
contingency), or insignificant.

Risk Control

Risk control is the process of managing risks to achieve the desired outcomes. After all the
identified risks of a project have been assessed, plans must be made to contain the most harmful
and the most likely risks. Different risks require different containment methods; in fact, most
risks demand some ingenuity on the part of the project manager.

There are three main strategies for containing a risk:


1. Avoid the risk: This may take several forms, such as discussing with the client a change
to the requirements that reduces the scope of the work, or giving incentives to engineers
to reduce the risk of staff turnover.
2. Transfer the risk: This method involves getting the risky component developed by a third
party, buying insurance cover, etc.
3. Risk reduction: This means planning ways to contain the loss caused if the risk
materializes. For instance, if there is a risk that some key personnel might leave, new
recruitment can be planned.

Risk Leverage: To choose between the various methods of handling a risk, the project manager
must weigh the cost of controlling the risk against the corresponding reduction in risk exposure.
For this, the risk leverage of the various risks can be computed.

Risk leverage is the difference in risk exposure divided by the cost of reducing the risk.

Risk leverage = (risk exposure before reduction - risk exposure after reduction) / (cost of
reduction)
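A minimal sketch of this computation, assuming risk exposure is expressed as probability times expected loss, with exposures and reduction costs in the same monetary unit (the figures in the example are made up):

# Minimal sketch of risk leverage, assuming exposures and costs
# are all expressed in the same monetary unit.

def risk_exposure(probability: float, loss: float) -> float:
    # Expected loss from a risk: probability * loss.
    return probability * loss

def risk_leverage(exposure_before: float, exposure_after: float,
                  cost_of_reduction: float) -> float:
    # Leverage = (exposure before - exposure after) / cost of reduction.
    # Values above 1 mean the reduction saves more than it costs.
    return (exposure_before - exposure_after) / cost_of_reduction

# Hypothetical example: a 40% chance of a $50,000 loss, reduced to
# a 10% chance by spending $5,000 on mitigation.
before = risk_exposure(0.4, 50_000)  # 20,000
after = risk_exposure(0.1, 50_000)   # 5,000
print(risk_leverage(before, after, 5_000))  # 3.0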

1. Risk Planning: The risk planning process considers each of the key risks that have been
identified and develops ways to manage those risks.

For each risk, you have to think of the actions that you may take to minimize the
disruption to the project if the problem identified in the risk occurs.

You should also think about the information that you might need to collect while monitoring the
project so that problems can be anticipated.

Again, there is no easy process that can be followed for contingency planning. It relies on the
judgment and experience of the project manager.

2. Risk Monitoring: Risk monitoring is the process of checking that your assumptions about the
product, process, and business risks have not changed.
Software Reliability
Software reliability means operational reliability. It is described as the ability of a system or
component to perform its required functions under stated conditions for a specified period of time.

Software reliability is also defined as the probability that a software system fulfills its assigned
task in a given environment for a predefined number of input cases, assuming that the hardware
and the input are free of error.
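Under this probabilistic definition, a crude (and deliberately simplistic) estimate of reliability is the fraction of representative input cases that the software handles correctly. In the sketch below, run_case is a hypothetical stand-in for executing the system on one input:

# Crude sketch of estimating reliability as the fraction of
# representative input cases the software handles correctly.
# run_case() is a hypothetical stand-in for executing one test input.

def run_case(case: int) -> bool:
    # Hypothetical system under test: fails on exactly 2% of inputs.
    return case % 50 != 0

def estimate_reliability(cases) -> float:
    # Reliability is approximated as successful runs / total runs
    # over the sample of input cases.
    successes = sum(1 for case in cases if run_case(case))
    return successes / len(cases)

print(estimate_reliability(range(10_000)))  # 0.98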

Software reliability is an essential attribute of software quality, together with functionality,
usability, performance, serviceability, capability, installability, maintainability, and
documentation. Software reliability is hard to achieve because the complexity of software tends
to be high. While any system with a high degree of complexity, including software, is hard to
bring to a given level of reliability, system developers tend to push complexity into the software
layer, encouraged by the rapid growth of system size and the ease of doing so by upgrading the
software.

For example, large next-generation aircraft will have over 1 million source lines of software on-
board; next-generation air traffic control systems will contain between one and two million lines;
the upcoming International Space Station will have over two million lines on-board and over 10
million lines of ground support software; several significant life-critical defense systems will
have over 5 million source lines of software. While the complexity of software is inversely
associated with software reliability, it is directly related to other vital factors in software quality,
especially functionality, capability, etc.

Software Failure Mechanisms

Software failures can be classified as follows:

Transient failure: These failures occur only for certain inputs.

Permanent failure: These failures occur for all inputs.

Recoverable failure: The system can recover without operator help.

Unrecoverable failure: The system can recover only with operator help.

Non-corrupting failure: The failure does not corrupt system state or data.

Corrupting failure: The failure damages the system state or data.
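As a small illustration, this classification could be represented directly in code, for example to tag entries in a failure log. The enum names below simply mirror the categories above and are not from any standard library:

# Hypothetical representation of the failure classification above,
# e.g. for tagging entries in a failure log.

from enum import Enum, auto

class FailureScope(Enum):
    TRANSIENT = auto()       # occurs only for certain inputs
    PERMANENT = auto()       # occurs for all inputs

class FailureRecovery(Enum):
    RECOVERABLE = auto()     # system recovers without operator help
    UNRECOVERABLE = auto()   # system recovers only with operator help

class FailureEffect(Enum):
    NON_CORRUPTING = auto()  # system state and data remain intact
    CORRUPTING = auto()      # system state or data is damaged

# A single observed failure is described by one value from each axis.
failure = (FailureScope.TRANSIENT, FailureRecovery.RECOVERABLE,
           FailureEffect.NON_CORRUPTING)
print(failure)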

Software failures may be due to bugs, ambiguities, oversights or misinterpretation of the
specification that the software is supposed to satisfy, carelessness or incompetence in writing
code, inadequate testing, incorrect or unexpected usage of the software, or other unforeseen
problems.
