
(19A05404T)- SOFTWARE ENGINEERING (Unit-1)

Software Engineering

The term software engineering is the product of two words: software and engineering.
Software is a collection of integrated programs.
Engineering is the application of scientific and practical knowledge to invent, design, build, maintain, and improve frameworks, processes, etc.
Software engineering is the branch of engineering concerned with developing software products using well-defined scientific principles, techniques, and procedures. The result of software engineering is an effective and reliable software product.
Definitions
Def-1: IEEE defines software engineering as:
a. The application of a systematic, disciplined, quantifiable approach to the development,
operation and maintenance of software; that is, the application of engineering to
software.
b. The study of approaches as in the above statement.

Def-2: Fritz Bauer, a German computer scientist, defines software engineering as:
Software engineering is the establishment and use of sound engineering principles in order to obtain economically software that is reliable and works efficiently on real machines.
Why is Software Engineering required? (Need/importance of Software Engineering)
 Without software engineering principles, it would be difficult to develop large programs. In industry, large programs are usually needed to accommodate multiple functions.
 A problem with developing such large commercial programs is that the complexity and
difficulty levels of the programs increase exponentially with their sizes.
 For example, a program of size 1,000 lines of code has some complexity. But a program with
10,000 LOC is not just 10 times more difficult to develop, but may as well turn out to be 100
times more difficult unless software engineering principles are used.
 In such situations software engineering techniques come to the rescue.

Software Engineering is required due to the following reasons:

 To manage large software
 For better scalability
 For cost management
 To manage the dynamic nature of software
 For better quality management

The need for software engineering arises from the high rate of change in user requirements and in the environment in which the software operates.

 Huge programming: It is simpler to build a wall than a house or building; similarly, as the size of software becomes extensive, engineering has to step in to give it a scientific process.
 Adaptability: If the software process were not based on scientific and engineering ideas, it would be easier to re-create new software than to scale an existing one.
 Cost: The cost of programming remains high if the proper process is not adopted.

Designed by Dr. Penchal, NECN for JNTUA-R19



 Dynamic nature: The software is continually changing, so new upgrades need to be made to the existing system.
 Quality management: A better procedure of software development provides a better-quality software product.

Characteristics of a good software engineer


The features that good software engineers should possess are as follows:
 Exposure to systematic methods, i.e., familiarity with software engineering principles.
 Good technical knowledge and domain knowledge.
 Good programming abilities.
 Good communication skills, comprising oral, written, and interpersonal skills.
 High motivation.
 Sound knowledge of the fundamentals of computer science.
 Ability to work in a team, discipline, etc.
Abstraction and Decomposition
Software engineering helps to reduce programming complexity. Software engineering principles use two important techniques to reduce problem complexity: abstraction and decomposition.
Principle of abstraction
• The principle of abstraction (see fig.) implies that a problem can be simplified by omitting irrelevant details.
• Once the simpler problem is solved, the omitted details can be taken into consideration to solve the next lower level of abstraction, and so on.
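One common form of abstraction in code is hiding irrelevant details behind a simple interface. The sketch below (the Stack class and its methods are illustrative, not from the text) lets callers work with push/pop operations while the underlying storage detail is omitted from their view:

```python
class Stack:
    """Abstraction: users push and pop items without knowing how they are stored."""

    def __init__(self):
        self._items = []          # omitted detail: a plain list holds the data

    def push(self, item):
        self._items.append(item)  # caller sees only the abstract operation

    def pop(self):
        return self._items.pop()  # last-in, first-out, storage hidden


s = Stack()
s.push(1)
s.push(2)
print(s.pop())  # → 2
```

If the internal list were later replaced by another structure, callers of `push` and `pop` would not need to change, which is exactly the simplification abstraction aims for.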
Decomposition:
• In this technique, a complex problem is divided into several smaller problems, and then the smaller problems are solved one by one.
• However, in this technique any random decomposition of a problem into smaller parts will not help.


• The problem has to be decomposed such that each component of the decomposed problem can be
solved independently and then the solution of the different components can be combined to get the
full solution.
• A good decomposition of a problem as shown in fig. 33.5 should minimize interactions among
various components.
• If the different subcomponents are interrelated, then the different components cannot be solved
separately and the desired reduction in complexity will not be realized.
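The decomposition described above can be sketched in code. In this hedged example (the payroll functions and rates are illustrative assumptions, not from the text), each sub-problem is solved by an independent function, and the full solution combines them:

```python
def gross_pay(hours: float, rate: float) -> float:
    """Sub-problem 1: compute gross pay from hours worked and hourly rate."""
    return hours * rate


def tax(gross: float, tax_rate: float = 0.10) -> float:
    """Sub-problem 2: compute tax on the gross pay (10% assumed)."""
    return gross * tax_rate


def net_pay(hours: float, rate: float) -> float:
    """Combine the independently solvable sub-solutions into the full solution."""
    g = gross_pay(hours, rate)
    return g - tax(g)


print(net_pay(40, 20.0))  # → 720.0
```

Because `gross_pay` and `tax` do not depend on each other, each can be written and tested separately, which is what a good decomposition with minimal interaction among components looks like.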
Program vs. Software Product

 Programs are developed by individuals for their personal use. They are, therefore, small in size and have limited functionality, whereas software products are extremely large.
 In case of a program, the programmer himself is the sole user but on the other hand, in case of
a software product, most users are not involved with the development.
 In case of a program, a single developer is involved but in case of a software product, a large
number of developers are involved.
 For a program, the user interface may not be very important, because the programmer is the
sole user.
 On the other hand, for a software product, user interface must be carefully designed and
implemented because developers of that product and users of that product are totally
different.
 In case of a program, very little documentation is expected, but a software product must be
well documented.
 A program can be developed according to the programmer’s individual style of development,
but a software product must be developed using the accepted software engineering
principles.

Evolution of Program Design (software Engineering) Techniques

a. During the 1950s, most programs were written in assembly language. These programs were limited to a few hundred lines of assembly code. Every programmer developed programs in his own individual style, based on his intuition. This type of programming was called Exploratory Programming.
b. The next significant development, which occurred during the early 1960s in the area of computer programming, was high-level language programming. The use of high-level languages reduced development effort and development time significantly. Languages like FORTRAN, ALGOL, and COBOL were introduced at that time.

c. Structured Programming: As the size and complexity of programs kept on increasing, the
exploratory programming style proved to be insufficient. To cope with this problem,
experienced programmers advised other programmers to pay particular attention to the
design of the program’s control flow structure.

 A structured program uses three types of program constructs, i.e., sequence, selection, and iteration.
 Structured programs avoid unstructured control flows by restricting the use of GOTO
statements.



Structured programming uses single-entry, single-exit program constructs such as if-then-else, do-while, etc.
 Structured programs are easier to maintain. They require less effort and time for development. They are amenable to easier debugging, and usually fewer errors are made in the course of writing such programs.
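The three structured constructs above can be sketched in a short example (the marks data and pass threshold are illustrative assumptions, not from the text); note that every construct has a single entry and a single exit, with no jumps:

```python
def classify_marks(marks):
    """Uses sequence, iteration, and selection — no GOTO-style jumps."""
    total = 0                      # sequence: statements execute one after another
    i = 0
    while i < len(marks):          # iteration: single-entry, single-exit loop
        total += marks[i]
        i += 1
    average = total / len(marks)
    if average >= 50:              # selection: exactly one of two branches runs
        return "pass"
    else:
        return "fail"


print(classify_marks([60, 70, 40]))  # → pass
```

Because control flow only ever enters each construct at the top and leaves at the bottom, the program can be read top-to-bottom, which is why structured programs are easier to debug and maintain.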
d. Data Structure-Oriented Design: This technique pays more attention to the design of the data structures of the program than to the design of its control structure.
e. Data Flow-Oriented Design: Next significant development in the late 1970s was the
development of data flow-oriented design technique. Every program reads data and then
processes that data to produce some output.
f. Object-Oriented Design: Object-oriented design (1980s) is the latest and very widely used
technique. It has an intuitively appealing design approach in which natural objects (such as
employees, pay-roll register, etc.) occurring in a problem are first identified. Relationships
among objects are determined.
g. Modern practice: The modern practice of software development is to develop the software
through several well-defined stages such as requirements specification, design, coding,
testing, etc., and attempts are made to detect and fix as many errors as possible in the same
phase in which they occur.
Now, projects are first thoroughly planned. Project planning normally includes preparation of
various types of estimates, resource scheduling, and development of project tracking plans.
Several techniques and tools for tasks such as configuration management, cost estimation,
scheduling, etc. are used for effective software project management.

Software development life cycle (SDLC) models

Software Life Cycle Model (also called Process Model)


1. A software life cycle model (also called process model) is a descriptive and diagrammatic
representation of the software life cycle.
2. A life cycle model represents all the activities required to make a software product transit
through its life cycle phases.
3. It also captures the order in which these activities are to be undertaken.
4. In other words, a life cycle model maps the different activities performed on a software
product from its inception to its retirement.
5. Different life cycle models may map the basic development activities to phases in different
ways.
6. Thus, no matter which life cycle model is followed, the basic activities are included in all life
cycle models though the activities may be carried out in different orders in different life cycle
models.
7. During any life cycle phase, more than one activity may also be carried out.
The Need for a Life Cycle Model

 The development team must identify a suitable life cycle model for the particular project and then adhere to it. Without a particular life cycle model, the development of a software product would not proceed in a systematic and disciplined manner.


 When a software product is being developed by a team there must be a clear understanding
among team members about when and what to do. Otherwise it would lead to chaos and
project failure.
Example:
 Suppose a software development problem is divided into several parts and the parts are
assigned to the team members. From then on, suppose the team members are allowed the
freedom to develop the parts assigned to them in whatever way they like. It is possible that
one member might start writing the code for his part, another might decide to prepare the
test documents first, and some other engineer might begin with the design phase of the parts
assigned to him. This would be one of the perfect recipes for project failure.
 A software life cycle model defines entry and exit criteria for every phase. So without a
software life cycle model, the entry and exit criteria for a phase cannot be recognized.

A few important and commonly used life cycle models are as follows:

Waterfall Model

The classical waterfall model is intuitively the most obvious way to develop software. Though the
classical waterfall model is elegant and intuitively obvious, we will see that it is not a practical model in
the sense that it cannot be used in actual software development projects.
Thus, we can consider this model to be a theoretical way of developing software. But all other life
cycle models are essentially derived from the classical waterfall model. So, in order to be able to
appreciate other life cycle models, we must first learn
the classical waterfall model.

The classical waterfall model divides the life cycle into the following phases, as shown in the figure:
1. Feasibility study
2. Requirements analysis and specification
3. Design
4. Coding and unit testing
5. Integration and system testing
6. Maintenance

1. Feasibility Study
The main aim of the feasibility study is to determine whether developing the product would be feasible financially, technically, operationally, and in terms of schedule.
• At first, project managers or team leaders try to have a rough understanding of what is required to be done by visiting the customer site. They study the different input data to the system and the output data, and they look at the various constraints on the behaviour of the system.
• After they have an overall understanding of the problem, they investigate the different solutions
that are possible.
• They pick the best solution and determine whether the solution is feasible financially and
technically. They check whether the customer budget would meet the cost of the product and
whether they have sufficient technical expertise in the area of development.
2. Requirements Analysis and Specification
The aim of the requirements analysis and specification phase is to understand the exact
requirements of the customer and to document them properly. This phase consists of two distinct
activities, namely


o Requirements gathering and analysis, and


o Requirements specification
 Requirements gathering and analysis :
The goal of the requirements gathering activity is to collect all relevant information from the
customer regarding the product to be developed with a view to clearly understand the
customer requirements.
 Requirements specification :
After requirements gathering, the requirements specification activity can start. During this
activity, the user requirements are systematically organized into a Software Requirements
Specification (SRS) document.
The important components of SRS document are functional requirements, the non-functional
requirements, and the goals of implementation.
3. Design
The goal of the design phase is to transform the requirements specified in the SRS document into
a structure that is suitable for implementation in some programming language. In technical terms,
during the design phase the software architecture is derived from the SRS document. Two distinctly
different approaches are available:
o Traditional design approach and
o Object-oriented design approach.
 Traditional design approach: Traditional design consists of two different activities;
first a structured analysis of the requirements specification is carried out where the
detailed structure of the problem is examined. This is followed by a structured design
activity. During structured design, the results of structured analysis are transformed
into the software design.
 Object-oriented design approach: In this technique, various objects that occur in the
problem domain and the solution domain are first identified, and the different
relationships that exist among these objects are identified. The object structure is
further refined to obtain the detailed design.
 At the start of the design phase, the context diagram and different levels of DFDs are produced according to the SRS document. At the end of this phase, the module structure (structure chart) is produced.

4. Coding and Unit Testing


The purpose of the coding and unit testing phase (sometimes called the implementation phase) of
software development is to translate the software design into source code. Each component of the
design is implemented as a program module. The end-product of this phase is a set of program
modules that have been individually tested.
During this phase, each module is unit tested to determine the correct working of all the individual
modules. It involves testing each module in isolation as this is the most efficient way to debug the
errors identified at this stage.
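As a hedged sketch of unit testing a module in isolation (the `discount` function and its values are illustrative, not from the text), Python's standard unittest library can exercise one module on its own, before integration:

```python
import unittest


def discount(price: float, percent: float) -> float:
    """The module under test: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)


class DiscountTest(unittest.TestCase):
    """Unit tests check this module alone, in isolation from other modules."""

    def test_normal_case(self):
        self.assertAlmostEqual(discount(200.0, 25), 150.0)

    def test_invalid_percent(self):
        with self.assertRaises(ValueError):
            discount(100.0, 150)


if __name__ == "__main__":
    # run the tests without exiting the interpreter
    unittest.main(argv=["discount-tests"], exit=False, verbosity=2)
```

Once each module passes its own tests like these, the integration phase combines the already-verified modules, which is far easier than debugging everything at once.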

5. Integration and System Testing


Integration of different modules is undertaken once they have been coded and unit tested. During the
integration and system testing phase, the modules are integrated in a planned manner.

Integration is normally carried out incrementally over a number of steps. Finally, when all the modules
have been successfully integrated and tested, system testing is carried out.
The goal of system testing is to ensure that the developed system conforms to the requirements laid
out in the SRS document.
System testing usually consists of three different kinds of testing activities:

 α – testing: It is the system testing performed by the development team.


 β – testing: It is the system testing performed by a friendly set of customers.
 Acceptance testing: It is the system testing performed by the customer himself after
product delivery to determine whether to accept or reject the delivered product.
6. Maintenance
Maintenance involves performing any one or more of the following three kinds of activities:

 Correcting errors that were not discovered during the product development phase. This is
called corrective maintenance.
 Improving the implementation of the system, and enhancing the functionalities of the system
according to the customer’s requirements. This is called perfective maintenance.
 Porting the software to work in a new environment. For example, porting may be required to
get the software to work on a new computer platform or with a new operating system. This is
called adaptive maintenance.

Shortcomings of the Classical Waterfall Model


 The classical waterfall model is an idealistic one since it assumes that no development error is
ever committed by the engineers during any of the life cycle phases.
 However, in practical development environments, the engineers do commit a large number of
errors in almost every phase of the life cycle.
Disadvantages:
 It is difficult to measure progress within stages.
 Poor model for long and ongoing projects.
 No working software is produced until late during the life cycle.
 High amounts of risk and uncertainty.
 Not a good model for long and object oriented projects.
 Cannot accommodate changing requirements.
When Should You Use It?
1. Requirements are clear, fixed, and unlikely to change.
2. There are no ambiguous requirements (no confusion).
3. It is good to use this model when the technology is well understood.
4. The project is short and the cost is low.
5. Risk is zero or minimal.
Advantages:
 It is simple and easy to understand and use.
 It is easy to manage.
 It works well for smaller, low-budget projects where requirements are very well understood.
 Stages are clearly defined and well understood.
 It is easy to arrange tasks.
 Process and results are well documented.

Iterative Waterfall model


In practical software development, the classical waterfall model is hard to

use. So, the iterative waterfall model can be thought of as incorporating the necessary changes to the classical waterfall model to make it usable in practical software development.
It is almost the same as the classical waterfall model, except some changes are made to increase the efficiency of the software development.
The iterative waterfall model provides feedback paths from every phase to its preceding phases,
which is the main difference from the classical waterfall model.
Feedback paths introduced by the iterative waterfall model are shown in the figure below.

When errors are detected at some later phase, these feedback paths allow the errors committed during an earlier phase to be corrected.

Incremental Model

• The incremental model is a process of software development where the requirements are divided into multiple standalone modules of the software development cycle.
• In this model, each module goes through the requirements, design, implementation, and testing phases. Every subsequent release of a module adds functionality to the previous release.
• The process continues until the complete system is achieved.

When we use the Incremental Model?


o When the requirements are clearly understood.
o When a project has a lengthy development schedule.
o When the software team is not very well skilled or trained.
o When the customer demands a quick release of the product.
o When you can develop prioritized requirements first.


Advantages of Incremental Model

o Errors are easy to recognize.
o Easier to test and debug.
o More flexible.
o Risk is simpler to manage because it is handled during each iteration.
o The client gets important functionality early.
Disadvantages of Incremental Model
o Needs good planning.
o Total cost is high.
o Well-defined module interfaces are needed.

Evolutionary model
This model is a combination of the incremental and iterative models. In the evolutionary model, all work is divided into smaller chunks, which are presented to the customer one by one, increasing the customer's confidence. This model also allows for changing requirements: all development is done in different pieces, and all the work is maintained as chunks.
Where the evolutionary model is useful
• It is very useful in a large project where you can easily find a module for step-by-step implementation.
• The evolutionary model is used when the users need to start using the main features instead of waiting for the complete software.
• The evolutionary model is also very useful in object-oriented software development because all the development is divided into different units.
Disadvantages of Evolutionary Model
• It is difficult to divide the problem into several parts that would be acceptable to the customer and that can be incrementally implemented and delivered.
The following are the evolutionary models:
1. The prototyping model
2. The spiral model
3. The concurrent development model

Prototype Model

The prototype model requires that before carrying out the development of actual software, a working
prototype of the system should be built. A prototype is a toy implementation of the system.


The Need for a Prototype


There are several uses of a prototype. An
important purpose is to illustrate the input data
formats, messages, reports, and the interactive
dialogues to the customer. This is a valuable
mechanism for gaining better understanding of
the customer’s needs.
• how screens might look like
• how the user interface would behave
• how the system would produce outputs, etc.
Steps of Prototype Model
1. Requirements Gathering and Analysis
2. Quick Design
3. Build a Prototype
4. Assessment or User Evaluation
5. Prototype Refinement
6. Engineer Product
Advantages of Prototype Model
1. Reduces the risk of incorrect user requirements.
2. Good where requirements are changing or uncommitted.
3. A regularly visible process aids management.
4. Supports early product marketing.
5. Reduces maintenance cost.
6. Errors can be detected much earlier, as the system is built side by side.
Disadvantages of Prototype Model
1. Requires extensive customer collaboration:
o Costs the customer money.
o Needs a committed customer.
o Difficult to finish if the customer withdraws.
o May be too customer-specific, with no broad market.
2. Difficult to know how long the project will last.
3. It is a time-consuming process.

Spiral Model

• The spiral model, initially proposed by Boehm, is an evolutionary software process model that
couples the iterative feature of prototyping with the controlled and systematic aspects of the
linear sequential model.
• It provides the potential for rapid development of incremental versions of the software.
• Using the spiral model, the software is developed in a series of incremental releases.
• During the early iterations, the additional release may be a paper model or prototype.
• During later iterations, more and more complete versions of the engineered system are produced.


Each cycle in the spiral is divided into four parts:


Objective setting: Each cycle in the spiral starts with the identification of the objectives for that cycle, the various alternatives that are possible for achieving those objectives, and the constraints that exist.
Risk assessment and reduction: The next phase in the cycle is to evaluate these various alternatives based on the objectives and constraints. The focus of evaluation in this stage is on the risk perception for the project.
Development and validation: The next phase is to develop strategies that resolve the uncertainties and risks. This process may include activities such as benchmarking, simulation, and prototyping.
Planning: Finally, the next step is planned. The project is reviewed, and a choice is made whether to continue with a further cycle of the spiral. If it is decided to continue, plans are drawn up for the next phase of the project.
The development phase depends on the remaining risks. For example, if performance or user-interface risks are considered more significant than the program-development risks, the next phase may be an evolutionary development that includes developing a more detailed prototype for resolving those risks.
The risk-driven feature of the spiral model allows it to accommodate any mixture of specification-oriented, prototype-oriented, simulation-oriented, or other types of approaches. An essential element of the model is that each cycle of the spiral is completed by a review that covers all the products developed during that cycle, including plans for the next cycle. The spiral model works for development as well as enhancement projects.
When to use Spiral Model?
o When frequent releases are required.
o When the project is large.
o When requirements are unclear and complex.
o When changes may be required at any time.
o For large, high-budget projects.
Advantages
o High amount of risk analysis.
o Useful for large and mission-critical projects.
Disadvantages
o Can be a costly model to use.
o Risk analysis requires highly specific expertise.
o Doesn't work well for smaller projects.


RAD (Rapid Application Development) Model


RAD is a linear sequential software development process model that emphasizes a concise development cycle using a component-based construction approach. If the requirements are well understood and described, and the project scope is constrained, the RAD process enables a development team to create a fully functional system within a very short time period.

RAD (Rapid Application Development) is a concept that products can be developed faster and of
higher quality through:

o Gathering requirements using workshops or focus groups
o Prototyping and early, iterative user testing of designs
o The reuse of software components
o A rigidly paced schedule that defers design improvements to the next product version
o Less formality in reviews and other team communication

The various phases of RAD are as follows:


1. Business Modelling: The information flow among business functions is defined by answering questions like: what data drives the business process, what data is generated, who generates it, where does the information go, who processes it, and so on.
2. Data Modelling: The data collected from business modeling is refined into a set of data objects
(entities) that are needed to support the business. The attributes (character of each entity) are
identified, and the relation between these data objects (entities) is defined.
3. Process Modelling: The information objects defined in the data modelling phase are transformed to achieve the data flow necessary to implement a business function. Processing descriptions are created for adding, modifying, deleting, or retrieving a data object.
4. Application Generation: Automated tools are used to facilitate construction of the software, often using fourth-generation (4GL) techniques.
5. Testing & Turnover: Many of the programming components have already been tested, since RAD emphasizes reuse. This reduces the overall testing time. But the new components must be tested, and all interfaces must be fully exercised.
When to use RAD Model?
o When a system that can be modularized needs to be created in a short span of time (2-3 months).


o When the requirements are well known.
o When the technical risk is limited.
o When the budget allows the use of automated code-generation tools.
Advantages of RAD Model
o This model is flexible to change.
o Changes are easily adopted.
o Each phase in RAD delivers the highest-priority functionality to the customer.
o It reduces development time.
o It increases the reusability of components.
Disadvantages of RAD Model
o It requires highly skilled designers.
o Not all applications are compatible with RAD.
o It cannot be used for smaller projects.
o It is not suitable when the technical risk is high.
o It requires user involvement.

Agile Model
• The meaning of Agile is "versatile."
• The Agile method proposes an incremental and iterative approach to software design.
• AGILE methodology is a practice that promotes continuous iteration of development and testing
throughout the software development lifecycle of the project. In the Agile model, both
development and testing activities are concurrent.
• Agile software development refers to a group of software development methodologies based on
iterative development, where requirements and solutions evolve through collaboration between
self-organizing cross-functional teams.
• Agility is achieved by fitting the process to the project, removing activities that may not be essential for a specific project. Also, anything that is a waste of time and effort is avoided.
Agile principles
1. The highest priority of this process is to satisfy the customer.
2. Changing requirements are accepted, even late in development.
3. Working software is delivered frequently, in small time spans.
4. Throughout the project, business people and developers work together on a daily basis.
5. The primary measure of progress is working software.
Phases of Agile Model:
The phases in the Agile model are as follows:
1. Requirements gathering
2. Design the requirements
3. Construction/ iteration
4. Testing/ Quality assurance
5. Deployment
6. Feedback

Designed by Dr. Penchal, NECN for JNTUA-R19



When to use the Agile Model?


o When frequent changes are required.
o When a highly qualified and experienced team is available.
o When a customer is ready to have a meeting with a software team all the time.
o When project size is small.
Advantages (Pros) of Agile Method:
1. Frequent Delivery
2. Face-to-Face Communication with clients.
3. Efficient design and fulfils the business requirement.
4. Anytime changes are acceptable.
5. It reduces total development time.
Disadvantages (Cons) of Agile Model:
1. Due to the shortage of formal documents, it creates confusion and crucial decisions taken
throughout various phases can be misinterpreted at any time by different team members.
2. Due to the lack of proper documentation, once the project completes and the developers
allotted to another project, maintenance of the finished project can become a difficulty.
Agile Testing Methods:
o Scrum
o Crystal
o Dynamic Software Development Method(DSDM)
o Feature Driven Development(FDD)
o Lean Software Development
o eXtreme Programming(XP)

Scrum
 SCRUM is an agile development process focused primarily on how to manage tasks within a
team-based development environment.
 The name Scrum is derived from an activity that occurs during a rugby match.
 Scrum believes in empowering the development team and advocates working in small teams
(say 7 to 9 members).
It consists of three roles, and their responsibilities are explained as follows:
o Product owner: The product owner creates the product backlog, prioritizes the backlog items, and
is responsible for the delivery of functionality at each iteration.
o Scrum Master: The Scrum Master sets up the team, arranges the meetings, and removes
obstacles from the process.
o Scrum Team: The team manages and organizes its own work to complete the sprint or
cycle.
Process flow of Scrum Methodologies:
Process flow of scrum testing is as follows:
 Each iteration of a scrum is known as a Sprint
 The Product backlog is a list where all details are entered to get the end product
 During each Sprint, the top user stories of the Product backlog are selected and turned into the
Sprint backlog
 The team works on the defined sprint backlog
 The team checks its work daily (the daily scrum)


 At the end of the sprint, team delivers product functionality
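The sprint-planning step in the flow above can be sketched in code. This is a minimal illustration only, not part of the Scrum framework itself; the story names, story-point values, and the capacity figure are invented for the example.

```python
# A minimal sketch of sprint planning: top-priority user stories from the
# product backlog are pulled into the sprint backlog until the team's
# capacity (in story points) is filled. All values here are illustrative.
product_backlog = [
    ("Login screen", 5),
    ("Account balance report", 8),
    ("Password reset", 3),
    ("Audit log export", 13),
]  # already ordered by priority by the product owner

def plan_sprint(backlog, capacity):
    """Select stories in priority order until capacity is exhausted."""
    sprint_backlog, used = [], 0
    for story, points in backlog:
        if used + points <= capacity:
            sprint_backlog.append(story)
            used += points
    return sprint_backlog

print(plan_sprint(product_backlog, capacity=16))
```

With a capacity of 16 points the first three stories fit (5 + 8 + 3 = 16) and the 13-point story is deferred to a later sprint.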


Scrum Practices
The practices are described in detail in the figure.

eXtreme Programming(XP)
Extreme Programming technique is very helpful when there are constantly changing demands or
requirements from the customers, or when they are not sure about the functionality of the system. It
advocates frequent "releases" of the product in short development cycles, which inherently improves
the productivity of the system and also introduces a checkpoint where any customer requirements can
be easily implemented.

Business requirements are gathered in terms of stories. All those stories are stored in a place called
the parking lot.
In this type of methodology, releases are based on shorter cycles called iterations, each with a span of
14 days. Each iteration includes phases like coding, unit testing and system testing, where at
each phase some minor or major functionality will be built into the application.
SCRUM vs XP

There are, however, some differences, some of them very subtle, particularly in the following
aspects:
1. Iteration length
Scrum
 Typically from two weeks to one month long.
XP
 Typically one or two weeks long.


2. Whether requirements are allowed to be modified in iteration


Scrum
 Do not allow changes into their sprints.
 Once the sprint planning meeting is completed and a commitment made to deliver a set of
product backlog items, that set of items remains unchanged through the end of the sprint.
XP
 Much more amenable to change within their iterations.
 As long as the team hasn’t started work on a particular feature, a new feature of equivalent
size can be swapped into the XP team’s iteration in exchange for the un-started feature.
3. Whether User Story is implemented strictly according to priority in iterations.
Scrum
 Scrum product owner prioritizes the product backlog but the team determines the sequence
in which they will develop the backlog items.
 A Scrum team may well choose to work on the second most important item before the first.
XP
 Work in a strict priority order.
 Features to be developed are prioritized by the customer (Scrum’s Product Owner) and
the team is required to work on them in that order.
Crystal
 Introduced by Alistair Cockburn, Crystal Methods, which is a collection of Agile software
development approaches, focuses primarily on people and the interaction among them while
they work on a software development project.
 Unlike more fixed frameworks like scrum, crystal recognizes that different teams will perform
differently depending on team size, criticality, and priority of the project and encourages users
to adapt the framework for their individual situation.
 For example, a small team can keep itself aligned with regular communication, so it doesn't
need much status reporting and documentation, whereas a large team is likely to get
out-of-sync and would benefit from a more structured approach.
These are categorized by color, according to the number of people in the project;
 Crystal clear - Teams with less than 8 people
 Crystal yellow - Teams with between 10 and 20 people
 Crystal orange - Teams with between 20-50 people
 Crystal red - Teams with between 50-100 people
Main practices recommended by Crystal
 An iterative and incremental development approach
 Active user involvement
 Delivering on commitments


Dynamic Software Development Method (DSDM)


DSDM is a Rapid Application Development (RAD) approach to software development and provides an
agile project delivery framework. The important aspect of DSDM is that the users are required to be
involved actively, and the teams are given the power to make decisions. Frequent delivery of product
becomes the active focus with DSDM. The techniques used in DSDM are
1. Time Boxing
2. MoSCoW Rules (Must Have, Should Have, Could Have, Won't Have this time)
3. Prototyping

Feature Driven Development (FDD):


This method focuses on "Designing and Building" features. Feature-Driven Development (FDD) is
customer-centric, iterative, and incremental, with the goal of delivering tangible software results often
and efficiently.

Lean Software Development:


Lean software development methodology follows the principle of "just-in-time production." The lean
method aims at increasing the speed of software development and reducing costs.


Software Project Management (SPM)


What is Project?
A project is a group of tasks that need to be completed to reach a clear result. A project can also be
defined as a set of inputs and outputs which are required to achieve a goal.
What is software project management?
Software project management is the art and discipline of planning and supervising software projects,
in which software projects are planned, implemented, monitored and controlled.
Goal of SPM:
The main goal of software project management is to enable a group of developers to work effectively
towards the successful completion of a project.
Prerequisites of software project management
There are three needs for software project management. These are:
 Time
 Cost
 Quality
Software Project Management Complexities
1. Invisibility: Software remains invisible, until its development is complete and it is operational.
Anything that is invisible is difficult to manage and control.
2. Changeability: Frequent changes to the requirements and the invisibility of software are
possibly the two major factors making software project management a complex task.
3. Complexity: Even moderate-sized software can have a very large number of functions that interact
with each other in many ways: data coupling, serial and concurrent runs, state transitions, control
dependency, file sharing, etc.
4. Uniqueness: Every software project is usually associated with many unique features or
situations.
5. Exactness of the solution: Exactness of the solution introduces additional risks and contributes
to the complexity of managing software projects
Responsibilities of a software project manager (or SPM Activities)
A software project manager takes the overall responsibility of steering a project to success. We can
broadly classify a project manager’s varied responsibilities into the following two major categories:
1. Project planning, and
2. Project monitoring and control.

1. Project planning:
 Project planning is undertaken immediately after the feasibility study phase and before the
starting of the requirements analysis and specification phase.
 Project planning involves estimating several characteristics of a project and then planning the
project activities based on these estimates made.
2. Project monitoring and control:
 Project monitoring and control activities are undertaken once the development activities start.
 The focus of project monitoring and control activities is to ensure that the software
development proceeds as per plan.


Software Project Planning


Definition: Project planning is a discipline for stating how to complete a project within a certain
timeframe, usually with defined stages, and with designated resources. During project planning, the
project manager performs the following activities.

1. Estimation: The following project attributes are estimated.


• Cost: How much is it going to cost to develop the software product?
• Duration: How long is it going to take to develop the product?
• Effort: How much effort would be necessary to develop the product?
2. Scheduling: After all the necessary project parameters have been estimated, the schedules for
manpower and other resources are developed.
3. Staffing: Staff organization and staffing plans are made.
4. Risk management: This includes risk identification, analysis, and abatement planning.
5. Miscellaneous plans: This includes making several other plans such as quality assurance plan, and
configuration management plan, etc
Precedence ordering among planning activities
Size is the most fundamental parameter based on which all other estimations and project plans are
made.

Size is the crucial parameter for the estimation of other activities. Resource requirements are
estimated from the effort and development time estimates. The project schedule proves very useful
for controlling and monitoring the progress of the project.
1. Sliding Window Planning: In the sliding window planning technique, starting with an initial plan,
the project is planned more accurately over a number of stages.
2. The SPMP Document of Project Planning: Once project planning is complete, project managers
document their plans in a software project management plan (SPMP) document.
Organization of the software project management plan (SPMP) document

1. Introduction
(a) Objectives
(b) Major Functions
(c) Performance Issues
(d) Management and Technical Constraints
2. Project estimates
(a) Historical Data Used
(b) Estimation Techniques Used
(c) Effort, Resource, Cost, and Project Duration Estimates
3. Schedule
(a) Work Breakdown Structure
(b) Gantt Chart Representation
(c) PERT Chart Representation


4. Project resources
(a) People
(b) Hardware and Software
(c) Special Resources
5. Staff organization
(a) Team Structure
(b) Management Reporting
6. Risk management plan
(a) Risk Analysis
(b) Risk Identification
(c) Risk Estimation
(d) Risk Abatement (reduction) Procedures
7. Project tracking and control plan
(a) Metrics to be tracked
(b) Tracking plan
(c) Control plan
8. Miscellaneous plans
(a) Process Tailoring
(b) Quality Assurance Plan
(c) Configuration Management Plan
(d) Validation and Verification
(e) System Testing Plan
(f ) Delivery, Installation, and Maintenance Plan

Metrics for Project Size Estimation


Currently, two metrics are popularly used to measure size:
— lines of code (LOC)
— function point (FP)
Lines of Code (LOC): This metric measures the size of a project by counting the number of source
instructions in the developed program. While counting the number of source instructions,
comment lines and header lines are ignored.

Estimating LoC
 Accurate estimation of the LOC count at the beginning of a project is a very difficult task.
 One can estimate the LOC count at the start of a project only by using some form of systematic
guess, which typically involves the following:
 The project manager divides the problem into modules, and each module into sub-
modules and so on, until the LOC counts of the leaf-level modules are small enough to be
predicted.
 To predict the LOC counts of the various leaf-level modules sufficiently accurately, past
experience in developing similar modules is helpful.
Function point (FP): This metric measures the size of a project by considering that "a software
product is directly dependent on the number of different high-level functions or features it supports".

Function point (FP) metric computation


FP is computed using the following three steps:
Step 1: using a heuristic expression
The unadjusted function points (UFP) is computed as the weighted sum of five characteristics
UFP = (I)*4 + (O)*5 + (Q)*4 + (F)*10 + (N)*10
 (I): Number of Inputs: Each data item input by the user is counted.
 (O): Number of Outputs: These include reports printed, screen outputs, error messages produced, etc.


 (Q): Number of Inquiries: An inquiry is a user command (without any data input); examples are print
account balance, print all student grades, display rank holders' names, etc.
 (F): Number of Files: The files referred to here are logical files.
 (N): Number of Interfaces: The different mechanisms used to exchange information, like data
files on tapes, disks, communication links with other systems, etc.

Step 2: Refine parameters


Each parameter (input, output, etc.) is classified as simple, average, or complex, and its weight is refined accordingly.

Step 3: Refine UFP based on complexity of the overall project


In the final step, several factors (14 parameters) that can impact the overall project size are
considered to refine the UFP computed in step 2. Each factor is rated, and the ratings are summed to
give the total degree of influence (DI).
A technical complexity factor (TCF) is computed as (0.65 + 0.01 * DI), and the final count is
FP = UFP * TCF.
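The three steps can be combined into a small sketch. The weights are the ones quoted in Step 1; DI, the total degree of influence of the 14 complexity factors, is assumed to be supplied by the estimator, and the sample counts below are invented for illustration.

```python
def function_points(inputs, outputs, inquiries, files, interfaces, di=0):
    """Compute FP from the unadjusted weights given in the text
    (inputs*4 + outputs*5 + inquiries*4 + files*10 + interfaces*10),
    refined by TCF = 0.65 + 0.01*DI, where DI is the summed degree of
    influence of the 14 complexity factors (0..70)."""
    ufp = inputs * 4 + outputs * 5 + inquiries * 4 + files * 10 + interfaces * 10
    tcf = 0.65 + 0.01 * di
    return ufp * tcf

# Illustrative counts: 30 inputs, 60 outputs, 20 inquiries, 5 files,
# 2 interfaces, and a total degree of influence of 35.
print(function_points(30, 60, 20, 5, 2, di=35))
```

With DI = 35 the TCF is exactly 1.0, so the adjusted count equals the UFP (570 in this example).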

Project Estimation

A large number of estimation techniques have been proposed by researchers. These can broadly be
classified into three main categories:
1. Empirical estimation techniques
2. Heuristic techniques
3. Analytical estimation techniques
1. Empirical Estimation Techniques
Empirical estimation techniques are based on common sense and subjective judgment, refined over
the years; while using them, prior experience with the development of similar products is helpful.
2. Heuristic Techniques
Heuristic techniques assume that the relationships that exist among the different project parameters
can be satisfactorily modeled using suitable mathematical expressions.
Different heuristic estimation models can be divided into the following two broad categories:
[1]. single variable models, and
[2]. multivariable models.
[1]. Single variable estimation models assume that various project characteristics can be predicted
based on a single previously estimated characteristic of the software, such as its size.

Example: Basic COCOMO Model.


[2].A multivariable cost estimation model assumes that a parameter can be predicted based on
the values of more than one independent parameter.

Example: Intermediate COCOMO Model.


3. Analytical Estimation Techniques:
Unlike empirical and heuristic techniques, analytical techniques have a scientific basis. An
example of an analytical technique is Halstead's software science.


COCOMO Model
Boehm proposed COCOMO (COnstructive COst MOdel) in 1981. COCOMO is one of the most
widely used software estimation models in the world. COCOMO predicts the effort and schedule
of a software product based on the size of the software.
The necessary steps in this model are:
1. Get an initial estimate of the development effort from the estimated product size in thousands
of delivered lines of source code (KDLOC).
2. Determine a set of 15 multiplying factors from various attributes of the project.
3. Calculate the effort estimate by multiplying the initial estimate with all the multiplying factors
i.e., multiply the values in step1 and step2.
To determine the initial effort Ei in person-months, an equation of the following type is used:
Ei = a * (KDLOC)^b
The values of the constants a and b depend on the project type.
In COCOMO, projects are categorized into three types:
1. Organic
2. Semidetached
3. Embedded
1. Organic: The project deals with developing a well-understood application program, the size of the
development team is reasonably small, and the team members are experienced in developing similar
types of projects.
Examples: Simple business systems, simple inventory management systems, and data processing
systems.
2. Semidetached: A development project can be treated as semidetached if the development team
consists of a mixture of experienced and inexperienced staff. Team members may have limited
experience with related systems and may be unfamiliar with some aspects of the system being developed.
Examples: a new operating system (OS), a database management system (DBMS).
3. Embedded: The software being developed is strongly coupled to complex hardware, or stringent
regulations on the operational procedures exist.
Example: ATM, Air Traffic control.
According to Boehm, software cost estimation should be done through three stages:
1. Basic Model
2. Intermediate Model
3. Detailed Model

1. Basic COCOMO Model: The basic COCOMO model assumes that effort is only a function of
the number of lines of code and some constants that depend on the software system category.
The following expressions give the basic COCOMO estimation model:
Effort = a1 * (KLOC)^a2 PM
Tdev = b1 * (Effort)^b2 Months
Where
 KLOC is the estimated size of the software product indicate in Kilo Lines of Code
 a1,a2,b1,b2 are constants for each group of software products,
 Tdev is the estimated time to develop the software, expressed in months,
 Effort is the total effort required to develop the software product, expressed in person months.
What is a person-month?
o Person-month (PM) is a popular unit for effort measurement.
o Person-month (PM) is considered to be an appropriate unit for measuring effort,
o because developers are typically assigned to a project for a certain number of months.


Estimation of development Effort


For the three classes of software products, the formulas for estimating the effort based on the code
size are shown below:
Organic: Effort = 2.4 * (KLOC)^1.05 PM
Semi-detached: Effort = 3.0 * (KLOC)^1.12 PM
Embedded: Effort = 3.6 * (KLOC)^1.20 PM
Estimation of development Time
For the three classes of software products, the formulas for estimating the development time based
on the effort are given below:
Organic: Tdev = 2.5 * (Effort)^0.38 Months
Semi-detached: Tdev = 2.5 * (Effort)^0.35 Months
Embedded: Tdev = 2.5 * (Effort)^0.32 Months

Example 1: Suppose a project was estimated to be 400 KLOC. Calculate the effort and development time for each of the
three modes, i.e., organic, semi-detached and embedded.
Solution: The basic COCOMO equations take the form:
Effort = a1 * (KLOC)^a2 PM
Tdev = b1 * (Effort)^b2 Months
Estimated size of project = 400 KLOC
(i) Organic Mode
E = 2.4 * (400)^1.05 = 1295.31 PM
D = 2.5 * (1295.31)^0.38 = 38.07 Months
(ii) Semidetached Mode
E = 3.0 * (400)^1.12 = 2462.79 PM
D = 2.5 * (2462.79)^0.35 = 38.45 Months
(iii) Embedded Mode
E = 3.6 * (400)^1.20 = 4772.81 PM
D = 2.5 * (4772.81)^0.32 = 38 Months
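The basic COCOMO formulas, including the worked Example 1 above, can be reproduced with a short function; the coefficient table is the one given in the text.

```python
def basic_cocomo(kloc, mode="organic"):
    """Basic COCOMO estimate: effort in person-months, Tdev in months.

    Coefficients (a1, a2, b1, b2) are the standard values quoted in
    the text for the three product classes."""
    coeff = {
        "organic":      (2.4, 1.05, 2.5, 0.38),
        "semidetached": (3.0, 1.12, 2.5, 0.35),
        "embedded":     (3.6, 1.20, 2.5, 0.32),
    }
    a1, a2, b1, b2 = coeff[mode]
    effort = a1 * kloc ** a2      # person-months
    tdev = b1 * effort ** b2      # months
    return effort, tdev

# Reproduce Example 1 (400 KLOC, organic mode)
effort, tdev = basic_cocomo(400, "organic")
print(f"Effort = {effort:.2f} PM, Tdev = {tdev:.2f} months")
```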

2. Intermediate Model: The intermediate COCOMO model refines the initial estimates obtained
through the basic COCOMO model by using a set of 15 cost drivers based on various attributes of
software engineering.
Classification of cost drivers and their attributes:
Product attributes
1. Required software reliability extent
2. Size of the application database
3. The complexity of the product
Hardware attributes
4. Run-time performance constraints
5. Memory constraints
6. The volatility of the virtual machine environment
7. Required turnaround time
Personnel attributes
8. Analyst capability
9. Software engineering capability
10. Applications experience
11. Virtual machine experience
12. Programming language experience
Project attributes
13. Use of software tools
14. Application of software engineering methods
15. Required development schedule

Intermediate COCOMO equations:
E = ai * (KLOC)^bi * EAF
D = ci * (E)^di

Coefficients for intermediate COCOMO:
Project        ai    bi    ci    di
Organic        2.4   1.05  2.5   0.38
Semidetached   3.0   1.12  2.5   0.35
Embedded       3.6   1.20  2.5   0.32
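A sketch of the intermediate model: the effort adjustment factor (EAF) is the product of the cost-driver multipliers, and it scales the basic estimate. The two multiplier values used below are illustrative assumptions, not values from Boehm's published cost-driver tables.

```python
from functools import reduce

def intermediate_cocomo(kloc, mode, multipliers):
    """Intermediate COCOMO: the basic estimate scaled by the EAF, the
    product of the 15 cost-driver multipliers. Multiplier values here
    are illustrative, not Boehm's published tables."""
    coeff = {
        "organic":      (2.4, 1.05, 2.5, 0.38),
        "semidetached": (3.0, 1.12, 2.5, 0.35),
        "embedded":     (3.6, 1.20, 2.5, 0.32),
    }
    a, b, c, d = coeff[mode]
    eaf = reduce(lambda x, y: x * y, multipliers, 1.0)
    effort = a * kloc ** b * eaf   # person-months
    tdev = c * effort ** d         # months
    return effort, tdev, eaf

# Two illustrative drivers: high required reliability (1.15),
# an experienced team (0.90).
effort, tdev, eaf = intermediate_cocomo(32, "semidetached", [1.15, 0.90])
print(f"EAF = {eaf:.3f}, Effort = {effort:.1f} PM, Tdev = {tdev:.1f} months")
```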
3. Detailed COCOMO Model: Detailed COCOMO incorporates all characteristics of the intermediate
version, with an assessment of the cost drivers' impact on each phase of the software engineering process.


Halstead's Software Science


Halstead's software science is an analytical technique to measure the size, development effort, and
development cost of software products.
According to Halstead, "A computer program is an implementation of an algorithm considered to
be a collection of tokens which can be classified as either operators or operands."

Token Count
In these metrics, a computer program is considered to be a collection of tokens, which may be
classified as either operators or operands. All software science metrics can be defined in terms of
these basic symbols, which are called tokens.
The basic measures are
 n1 = count of unique operators.
 n2 = count of unique operands.
 N1 = count of total occurrences of operators.
 N2 = count of total occurrence of operands.
 V* = minimal volume of the most succinct program in
which a problem can be coded.
Halstead metrics are:
1. Vocabulary (n): the number of unique tokens, n = n1 + n2.
2. Program Length (N): the total number of tokens, N = N1 + N2; the estimated length is computed
from the numbers of unique operators and operands.
3. Program Volume (V): the actual size of a program in "bits".
4. Program Level (L): the ratio of the minimal volume V* to the actual volume V; L = 1 represents a
program written at the highest possible level.
5. Program Difficulty (D): the inverse of the program level; it grows with the number of unique
operators in the program.
6. Programming Effort (E): the amount of mental activity needed to translate the existing algorithm
into an implementation in the specified programming language.
7. Faults (B): the number of faults in a program, estimated as a function of its volume.
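The measures above can be computed from the four basic counts. The formulas used here are the standard Halstead expressions (estimated length = n1*log2(n1) + n2*log2(n2), V = N*log2(n), D = (n1/2)*(N2/n2), E = D*V); the fault estimate B = V/3000 is one commonly used approximation, and the sample counts are invented for illustration.

```python
import math

def halstead(n1, n2, N1, N2):
    """Halstead software-science measures from the four basic counts.

    Standard formulas; B = V/3000 is one common fault approximation."""
    n = n1 + n2                                       # vocabulary
    N = N1 + N2                                       # program length
    N_hat = n1 * math.log2(n1) + n2 * math.log2(n2)   # estimated length
    V = N * math.log2(n)                              # volume (bits)
    D = (n1 / 2) * (N2 / n2)                          # difficulty
    L = 1 / D                                         # program level
    E = D * V                                         # effort
    B = V / 3000                                      # estimated faults
    return {"n": n, "N": N, "N_hat": N_hat, "V": V,
            "D": D, "L": L, "E": E, "B": B}

# Illustrative counts for a small program
m = halstead(n1=10, n2=7, N1=25, N2=18)
print(f"Volume = {m['V']:.1f} bits, Difficulty = {m['D']:.2f}, Effort = {m['E']:.0f}")
```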


Project Scheduling
 A project schedule consists of sequenced activities and milestones that need to be
delivered within a given period of time.
 The most common and important forms of project schedule are PERT charts, CPM, and Gantt charts.

Scheduling Process:
1. Identify all the major activities that need to be carried out to complete the project.
2. Break down each activity into tasks.
3. Determine the dependency among different tasks.
4. Establish the estimates for the time durations necessary to complete the tasks.
5. Represent the information in the form of an activity network.
6. Determine task starting and ending dates from the information represented in the activity
network.
7. Determine the critical path. A critical path is a chain of tasks that determines the duration of
the project.
8. Allocate resources to tasks.
Work Breakdown Structure
1. Work breakdown structure (WBS) is used to recursively decompose a given set of activities into
smaller activities.
2. WBS provides a notation for representing the activities, sub-activities, and tasks needed to be
carried out in order to solve a problem. Each of these is represented using a rectangle (see Figure).
3. The root of the tree is labeled by the project name. Each node of the tree is broken down into
smaller activities that are made the children of the node.
4. Figure 3.7 represents the WBS of management information system (MIS) software.
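The recursive decomposition that a WBS performs can be represented as a simple tree of activities and sub-activities. The activity names below are illustrative, not taken from the textbook's Figure 3.7.

```python
# A minimal WBS sketch: the project is the root, each node is broken
# down into smaller activities (its children). Names are illustrative.
wbs = {
    "MIS Application": {
        "Requirements specification": {},
        "Design": {"Database design": {}, "GUI design": {}},
        "Code": {},
        "Test": {},
        "Document": {},
    }
}

def print_wbs(tree, depth=0):
    """Print the WBS tree with indentation showing the decomposition."""
    for activity, subtree in tree.items():
        print("  " * depth + activity)
        print_wbs(subtree, depth + 1)

print_wbs(wbs)
```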


Activity Networks:

An activity network shows the different activities making up a project, their estimated durations, and
their interdependencies. Two equivalent representations for activity networks are possible and are in
use:
Activity on Node (AoN): In this representation, each activity is represented by a rectangular (some use
circular) node and the duration of the activity is shown alongside each task in the node. The inter-task
dependencies are shown using directional edges (see Figure ).

Activity on Edge (AoE): In this representation tasks are associated with the edges. The edges are also
annotated with the task duration. The nodes in the graph represent project milestones.

Critical Path Method (CPM)


CPM is an algorithmic approach to determine the critical paths and slack times for tasks not on the
critical paths involves calculating the following quantities:
1. Minimum time (MT): It is the minimum time required to complete the project.
2. Earliest start (ES): The ES of a task is the maximum of the durations of all paths from the start to
this task.
3. Latest start time (LST): It is the difference between MT and the maximum of the durations of all
paths from this task to the finish.
4. Earliest finish time (EF): The EF of a task is the sum of its earliest start time and its duration.
5. Latest finish (LF): LF indicates the latest time by which a task can finish without affecting the final
completion time of the project.
6. Slack time (ST): The slack time (or float time) is the total time that a task may be delayed before it
will affect the end time of the project.
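The quantities above can be computed with a forward pass (ES, EF) followed by a backward pass (LF, LS) over the activity network; tasks with zero slack form the critical path. The four tasks and their durations below are invented for illustration.

```python
# A minimal CPM sketch on a small activity-on-node network.
# 'deps' lists each task's predecessors; values are illustrative.
duration = {"A": 3, "B": 2, "C": 4, "D": 2}
deps = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
order = ["A", "B", "C", "D"]          # a topological order of the tasks

# Forward pass: earliest start (ES) and earliest finish (EF)
es, ef = {}, {}
for t in order:
    es[t] = max((ef[p] for p in deps[t]), default=0)
    ef[t] = es[t] + duration[t]
mt = max(ef.values())                 # minimum project completion time

# Backward pass: latest finish (LF), latest start (LS), then slack
lf, ls = {}, {}
for t in reversed(order):
    succ = [s for s in order if t in deps[s]]
    lf[t] = min((ls[s] for s in succ), default=mt)
    ls[t] = lf[t] - duration[t]
slack = {t: ls[t] - es[t] for t in order}
critical = [t for t in order if slack[t] == 0]

print(f"MT = {mt}, slack = {slack}, critical path = {critical}")
```

Here path A-C-D (3 + 4 + 2 = 9) is longer than A-B-D, so MT = 9, task B has a slack of 2, and A, C, D form the critical path.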
Example: The activity network with computed ES and EF values has been shown in the figure.


The activity network with computed LS and LF values has been shown in Figure

 The CPM can be used to determine the duration of a project, but does not provide any
indication of the probability of meeting that schedule.
PERT Charts
Project evaluation and review technique (PERT) charts are a more sophisticated form of activity chart.
Each task is annotated with three estimates:
 Optimistic (O): The best possible case task completion time.
 Most likely estimate (M): Most likely task completion time.
 Worst case (W): The worst possible case task completion time.
The PERT chart representation of the MIS problem of Figure 3
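The three estimates are conventionally combined into an expected task time using the beta-distribution approximation TE = (O + 4M + W) / 6, with standard deviation (W - O) / 6. This combining formula is standard PERT practice, though not stated explicitly in the text; the sample numbers are invented.

```python
def pert_estimate(o, m, w):
    """PERT expected completion time and standard deviation for a task,
    using the standard beta-distribution approximation:
    TE = (O + 4M + W) / 6, sigma = (W - O) / 6."""
    te = (o + 4 * m + w) / 6
    sigma = (w - o) / 6
    return te, sigma

# Example: optimistic 4, most likely 6, worst case 14 days
te, sigma = pert_estimate(4, 6, 14)
print(f"Expected time = {te:.1f} days, sigma = {sigma:.2f}")
```

Unlike CPM, these per-task standard deviations give PERT some indication of the probability of meeting the schedule.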


Gantt Charts
Gantt chart has been named after its developer Henry Gantt. Gantt chart is a special type of bar chart
where each bar represents an activity. The bars are drawn along a time line. The length of each bar is
proportional to the duration of time planned for the corresponding activity.
A Gantt chart representation for the MIS problem of Figure 3

We can summarize the differences between the two as listed in the table below:

Gantt chart                                        PERT chart
Gantt chart is defined as a bar chart.             PERT chart is similar to a network diagram.
Gantt chart is often used for small projects.      PERT chart can be used for large and complex projects.
Gantt chart focuses on the time required to        PERT chart focuses on the dependency relationships
complete a task.                                   between tasks.
Gantt chart is simpler and more straightforward.   PERT chart can sometimes be confusing and complex,
                                                   but can be used to visualize the critical path.

Personnel Planning(Staffing)
Personnel planning deals with staffing. Staffing deals with appointing personnel to the positions that
are identified by the organizational structure. It involves:
 Defining requirements for personnel
 Recruiting (identifying, interviewing, and selecting candidates)
 Compensating
 Developing and promoting personnel
 For personnel planning and scheduling, it is helpful to have effort and schedule estimates for the
subsystems and components of the system.
 Typically the staff required for the project is small during requirements and design, maximum
during implementation and testing, and drops again during the final stages of integration and
testing.
 Using the COCOMO model, the average staff requirement for the various phases can be calculated
once the effort and schedule for each phase are known.
 When the schedule and average staff level for every action are well-known, the overall personnel
allocation for the project can be planned.
 This plan will indicate how many people will be required for different activities at different times
for the duration of the project.


Organization and Team Structures


Usually every software development organization handles several projects at any time. Software
organizations assign different teams of developers to handle different software projects.
(A). Organization Structure
Essentially there are three broad ways in which a software development organization can be
structured:
 Functional format,
 Project format, and
 Matrix format.
1. Functional format: In the functional format, the development staff are divided based on the specific
functional group to which they belong. This format is shown schematically in the figure.
2. Project format: In the project format, the development staff are divided based on the project for
which they work (See Figure).

The main advantages of a functional organization are:


• Ease of staffing
• Production of good quality documents
• Job specialization
• Efficient handling of the problems associated with manpower turnover
3. Matrix format: A matrix organization is intended to provide the advantages of both functional and
project structures. In a matrix organization, the pool of functional specialists is assigned to different
projects as needed.


(B). Team Structure


There are many ways to organize the project team. Some important ways are as follows :
1. Hierarchical team organization
2. Chief-programmer team organization
3. Egoless team organization
1. Hierarchical team organization :
In this structure, the people of the organization are arranged at different levels following a tree
structure. People at the bottom level generally possess the most detailed knowledge of the system.
People at higher levels have a broader appreciation of the whole project.

Benefits of hierarchical team organization :


 It limits the number of communication paths and still allows for the needed communication.
 It is well suited for the development of the hierarchical software products.
 Large software projects may have several levels.
Limitations of hierarchical team organization:
 As information has to travel up the levels, it may get distorted.
2.Chief-programmer team organization:
This team organization is composed of a small
team consisting the following team members :

 The chief programmer: The person who is actively involved in the planning, specification, and
design process, and ideally in the implementation process as well.
 The project assistant: The closest technical co-worker of the chief programmer.
 The project secretary: Relieves the chief programmer and all other programmers of
administrative tasks.
 Specialists: These people select the implementation language, implement individual system
components, employ software tools, and carry out other specialised tasks.

Advantages of Chief-programmer team organization :


 Centralized decision-making
 Reduced communication paths
 Small teams are more productive than large teams
 The chief programmer is directly involved in system development and can exercise better
control over the project.
Disadvantages of Chief-programmer team organization :
 Project survival depends on one person only.
 Can cause psychological problems, as the “chief programmer” is like a “king” who takes
all the credit, leaving other members resentful.


3. Egoless Team Organization:


Egoless programming is a state of mind in which programmers are expected to separate themselves
from their product. Here, group ‘leadership’ rotates based on the tasks to be performed and the
differing abilities of members.
Risk management
Risk
 Definition: “Risk” is a problem that could cause some loss or threaten the progress of the project,
but which has not happened yet.
 These potential issues might harm the cost, schedule, or technical success of the project, the
quality of the software product, or project team morale.
Risk Management
Risk management is the process of identifying, addressing, and eliminating potential issues (which
might harm the cost, schedule, or technical success of the project) before they can damage the project.
There are three main classifications of risks which can affect a software project:

Types of Risks
1. Project risks
2. Technical risks
3. Business risks
1. Project risks: Project risks concern different forms of budgetary, schedule, personnel, resource, and
customer-related problems.
2. Technical risks: Technical risks concern potential design, implementation, interfacing, testing, and
maintenance issues. They also include ambiguous, incomplete, or changing specifications, technical
uncertainty, and technical obsolescence.
3. Business risks: These risks include building an excellent product that no one needs, losing
budgetary or personnel commitments, etc.

Principle of Risk Management

1. Global Perspective: In this, we review the bigger system description, design, and
implementation. We look at the chance and the impact the risk is going to have.
2. Take a forward-looking view: Consider the threat which may appear in the future and create
future plans for directing the next events.
3. Open Communication: This is to allow the free flow of communications between the client
and the team members so that they have certainty about the risks.
4. Integrated management: In this method risk management is made an integral part of project
management.
5. Continuous process: Risks are tracked continuously throughout the risk management paradigm.

Risk Management Activities

Risk management consists of two main activities, as shown in the figure:


(1).Risk Assessment

The objective of risk assessment is to rank the risks in terms of their loss-causing potential.

For risk assessment, every risk is first rated in two ways:

1. The likelihood of the risk coming true

2. The consequence of the problems associated with that risk
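These two ratings are commonly combined into a single risk exposure value (exposure = likelihood × loss), which lets the team rank risks and attend to the most damaging ones first. A minimal sketch in Python (the risk names and numeric values are purely illustrative, not from the source):

```python
# Rank risks by exposure = probability of occurrence x loss if it occurs.
def rank_risks(risks):
    """Return (name, exposure) pairs sorted by descending exposure.

    `risks` is a list of (name, probability, loss) tuples, where
    probability is in [0, 1] and loss is in arbitrary cost units.
    """
    scored = [(name, p * loss) for name, p, loss in risks]
    return sorted(scored, key=lambda item: item[1], reverse=True)

# Hypothetical project risks (illustrative values only).
risks = [
    ("Key developer leaves", 0.3, 100),        # personnel (project) risk
    ("Requirements change late", 0.6, 40),     # project risk
    ("Third-party API is dropped", 0.1, 200),  # technical risk
]
for name, exposure in rank_risks(risks):
    print(f"{name}: exposure={exposure:.1f}")
```

Ranking by exposure rather than by likelihood alone prevents a rare but catastrophic risk from being ignored.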

(2).Risk Control(Mitigation)

It is the process of managing risks to achieve desired outcomes.


There are three main methods:
Avoid the risk: This may take several forms, such as discussing with the client to change the
requirements to reduce the scope of the work, giving incentives to the engineers to avoid the risk of
manpower turnover, etc.
Transfer the risk: This method involves getting the risky element developed by a third party, buying
insurance cover, etc.
Risk reduction: This means planning ways to contain the loss due to a risk. For instance, if there is a
risk that some key personnel might leave, new recruitment can be planned.


Software Configuration Management


Definition: Software Configuration Management (SCM) is a technique of identifying, organizing,
and controlling modifications to software being built by a programming team.

Why do we need Configuration Management?


 When we develop software, the product undergoes many changes in its maintenance
phase; we need to handle these changes effectively.
 Multiple people work on software that is constantly being updated. A project may involve
multiple versions, branches, and authors, with a geographically distributed team working
concurrently. Changes in user requirements, policy, budget, and schedules also need to be
accommodated.
Importance of SCM
 It helps in controlling and managing access to various SCIs, e.g., by preventing two
members of a team from checking out the same component for modification at the
same time.
 It provides tools to ensure that changes are being properly implemented.
 It has the capability of describing and storing the various constituents of the software.
 SCM keeps the system in a consistent state by automatically producing derived
versions upon modification of a component.
SCM Process
It uses tools to ensure that the necessary changes have been implemented adequately in the
appropriate components.
The SCM process defines a number of tasks:
1. Identification of objects in the software configuration
2. Version Control
3. Change Control
4. Configuration Audit
5. Status Reporting

1. Identification: A unit of text (a software configuration item) created by a software engineer
during analysis, design, coding, or testing is identified and named.
2. Version Control: Version control combines procedures and tools to handle the different versions
of configuration objects that are generated during the software process.
3. Change Control: The "check-in" and "check-out" process implements two necessary elements
of change control: access control and synchronization control.
4. Configuration Audit: SCM audits verify that the software product satisfies the baseline
requirements and ensure that what is built is what is delivered.
5. Status Reporting: Status reporting provides accurate status and current configuration data to
developers, testers, end users, customers, and other stakeholders.
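The check-in/check-out mechanism mentioned under change control can be sketched as a reserve-and-release lock on a configuration item, so that two team members cannot modify the same component simultaneously. A minimal illustration (the class and method names are hypothetical, not a real SCM tool's API):

```python
# Minimal sketch of check-out locking: only one engineer may hold a
# configuration item for modification at a time.
class ConfigItem:
    def __init__(self, name):
        self.name = name
        self.checked_out_by = None  # None means the item is available

    def check_out(self, engineer):
        if self.checked_out_by is not None:
            raise RuntimeError(
                f"{self.name} is already checked out by {self.checked_out_by}")
        self.checked_out_by = engineer

    def check_in(self, engineer):
        if self.checked_out_by != engineer:
            raise RuntimeError(f"{engineer} does not hold {self.name}")
        self.checked_out_by = None  # item becomes available again

item = ConfigItem("billing_module.c")
item.check_out("alice")
try:
    item.check_out("bob")    # rejected: alice holds the lock
except RuntimeError as err:
    print(err)
item.check_in("alice")
item.check_out("bob")        # succeeds after alice checks in
```

Real SCM systems add version histories and merge support on top of (or instead of) such pessimistic locking, but the access-control idea is the same.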



(19A05404T)- Requirements analysis and specification (Unit-2)

THE NATURE OF SOFTWARE

Nature of Software:

To understand the nature of software, it is important to examine the characteristics of software
that make it different from other things that human beings build.
1. Software is developed or engineered: Since software is a purely logical rather than a physical
system element, it is not manufactured in the classical sense. Software projects cannot be
managed as if they were manufacturing projects (in a factory, each person may follow a
specific set of tasks; a worker may tighten a screw all day long).
2. Software doesn’t “wear out”: Stated simply, hardware begins to wear out over time. Software is
not susceptible to the environmental maladies that cause hardware to wear out. However,
software deteriorates due to changes.
Figure depicts failure rate as a function of time for hardware. The relationship, often called the
“bathtub curve,” indicates that hardware exhibits relatively high failure rates early in its life;
defects are corrected and the failure rate drops to a steady-state level (hopefully, quite low)
for some period of time.

3. Software is custom built: A software component should be designed and implemented so
that it can be reused in many different programs. Although the industry is moving toward
component-based construction, most software continues to be custom built.
Changing Nature of Software (Software Application Domains):
Nowadays, seven broad categories of computer software present continuing challenges for software
engineers .which is given below:
1. System Software: System software is a collection of programs which are written to service other
programs.
2. Application Software: Application software is defined as programs that solve a specific business
need.
3. Engineering and Scientific Software: This software is used to facilitate engineering functions
and tasks.
4. Embedded Software: Embedded software resides within the system or product and is used to
implement and control feature and function for the end-user and for the system itself.
5. Product-line Software: Designed to provide a specific capability for use by many different
customers
6. Web Application: It is a client-server computer program which the client runs on the web browser
7. Artificial Intelligence Software: Application within this area includes robotics, expert system,
pattern recognition, artificial neural network, theorem proving etc.


THE UNIQUE NATURE OF WEBAPPS


The following attributes are encountered in the vast majority of WebApps.

1. Network intensiveness. A WebApp resides on a network and must serve the needs of a diverse
community of clients. The network may enable worldwide access and communication.
2. Concurrency. A large number of users may access the WebApp at one time.
3. Unpredictable load. The number of users of the WebApp may vary by orders of magnitude from
day to day.
4. Performance. WebApp should have good performance.
5. Availability. WebApps often demand access on a 24/7/365 basis.
6. Data driven. The primary function of many WebApps is to use hypermedia to present text,
graphics, audio, and video content to the end user.
7. Content sensitive. The quality and aesthetic (beauty) nature of content remains an important
determinant of the quality of a WebApp.
8. Continuous evolution: Unlike conventional application software that evolves over a series of
planned, chronologically spaced releases, Web applications evolve continuously.
9. Immediacy: WebApps often exhibit a time-to-market that can be a matter of a few days or weeks.
10. Security: Because WebApps are available via network access, strong security measures must be
implemented to protect sensitive content and provide secure modes of data transmission.
11. Aesthetics. When a WebApp has been designed to market or sell products or ideas, aesthetics may
have as much to do with success as technical design.

SOFTWARE MYTHS

Software Myth: Software engineering professionals recognize myths as “misleading attitudes that
have caused serious problems for managers and practitioners alike”. Software myths propagate false
beliefs and confusion in the minds of management, users and developers.
1. Management Myths
2. User Myths
3. Developer Myths
1. Management Myths
Managers, who own software development responsibility, are often under strain and pressure to
maintain a software budget, time constraints, improved quality, and many other considerations.
Common management myths are listed in Table.
Myth 1: The members of an organization can acquire all the information they require from a
manual, which contains standards, procedures, and principles.
Reality:
 Standards are often incomplete, inadaptable, and outdated.
 Developers are often unaware of all the established standards.
 Developers rarely follow all the known standards, because not all standards tend to decrease
the delivery time of software while maintaining its quality.
Myth 2: If the project is behind schedule, increasing the number of programmers can reduce the
time gap.
Reality:
 Adding more manpower to a project that is already behind schedule delays it further.
 New workers take longer to learn about the project compared to those already working on it.


Myth 3: If the project is outsourced to a third party, the management can relax and let the other
firm develop the software for them.
Reality:
 Outsourcing software to a third party does not help an organization that is incompetent in
managing and controlling the software project internally. The organization invariably suffers
when it outsources the software project.

2. User Myths
Myth 1: Brief requirements stated in the initial process are enough to start development; detailed
requirements can be added at later stages.
Reality:
 Starting development with incomplete and ambiguous requirements often leads to software
failure.
 Adding requirements at a later stage often requires repeating the entire development process.
Myth 2: Software is flexible; hence software requirement changes can be added during any phase
of the development process.
Reality:
 Incorporating change requests early in the development process costs less than incorporating
them at later stages, because later changes may require redesigning and extra resources.

3. Developer Myths
Myth 1: Software development is considered complete when the code is delivered.
Reality:
 50% to 70% of all effort is expended after the software is delivered to the user.
Myth 2: The success of a software project depends on the quality of the product produced.
Reality:
 The quality of the programs is not the only factor that makes a project successful; the
documentation and software configuration also play a crucial role.
Myth 3: Software engineering requires unnecessary documentation, which slows down the project.
Reality:
 Software engineering is about creating quality at every level of the software project.
 Proper documentation enhances quality, which results in reducing the amount of rework.
Myth 4: The only product that is delivered after the completion of a project is the working
program(s).
Reality:
 The deliverables of a successful project include not only the working program but also the
documentation to guide users in using the software.
Myth 5: Software quality can be assessed only after the program is executed.
Reality:
 The quality of software can be measured during any phase of the development process by
applying a quality assurance mechanism. One such mechanism is the formal technical review,
which can be used during each phase of development to uncover certain errors.


REQUIREMENTS GATHERING AND ANALYSIS

 The requirements analysis and specification phase starts after the feasibility study.
 The requirements analysis and specification phase ends when the Software Requirements
Specification (SRS) document has been developed and reviewed.
(1). The Goal of The Requirements Analysis And Specification Phase:
 The goal of the requirements analysis and specification phase is to clearly understand the customer
requirements and to systematically organize the requirements into a document called the Software
Requirements Specification (SRS) document.
(2). Who carries out requirements analysis and specification?
 Requirements analysis and specification activity is usually carried out by a few experienced
members called system analysts, and it normally requires them to spend some time at the
customer site.
(3). How is the SRS document validated?
 Once the SRS document is ready, it is first reviewed internally by the project team to
ensure that it accurately captures all the user requirements, and that it is understandable,
consistent, unambiguous, and complete.
 The SRS document is then given to the customer for review.
 After the customer has reviewed and agrees to it, it forms the basis for all future
development activities and also serves as a contract document between the customer and
the development organization.
Activities

We can conceptually divide the requirements gathering and analysis activity into two separate tasks:
1. Requirements gathering (requirements elicitation) Process.
2. Requirements analysis

1. Requirements Gathering Process(Requirements Elicitation):


The primary objective of the requirements gathering task is to collect the requirements from
the stakeholders (usually a person, or a group of persons who either directly or indirectly are
concerned with the software).

 Requirements gathering - The analyst discusses with the client and end users and knows their
expectations from the software.
 Organizing Requirements - The analyst prioritizes and arranges the requirements in order of
importance, urgency and convenience.
 Negotiation & discussion - If requirements are ambiguous or conflicting, they are negotiated
and discussed with the stakeholders.
 Documentation - All formal & informal, functional and non-functional requirements are
documented and made available for next phase processing.
(a). Requirement Gathering /Elicitation Techniques:
(i). Interviews: Interviews are strong medium to collect requirements. Organization may conduct
several types of interviews such as:
 Structured (closed) interviews, where every single information to gather is decided in
advance.
 Non-structured (open) interviews, where information to gather is not decided in advance.


 Oral interviews
 Written interviews
 One-to-one interviews which are held between two persons across the table.
 Group interviews which are held between groups of participants. They help to uncover
any missing requirement as numerous people are involved.
(ii). Surveys: Organization may conduct surveys among various stakeholders by querying about
their expectation and requirements from the upcoming system.
(iii). Questionnaires: A document with pre-defined set of objective questions and respective
options is handed over to all stakeholders to answer, which are collected and compiled.
(iv). Task analysis: Team of engineers and developers may analyze the operation for which the
new system is required. If the client already has some software to perform certain operation, it is
studied and requirements of proposed system are collected.
(v).Domain Analysis: Every software falls into some domain category. The expert people in the
domain can be a great help to analyze general and specific requirements.
(vi). Brainstorming: An informal debate is held among various stakeholders and all their inputs are
recorded for further requirements analysis.
(vii). Prototyping: Prototyping is building user interface without adding detail functionality for user
to interpret the features of intended software product.
(viii). Observation: Team of experts visit the client’s organization or workplace and observe the
actual working of the existing installed systems.
2. Requirements Analysis
After requirements gathering is complete, the analyst analyses the gathered requirements to form a
clear understanding of the exact customer requirements.
The main purpose of the requirements analysis activity is to analyse the gathered requirements to
remove all ambiguities, incompleteness, and inconsistencies from the gathered customer
requirements and to obtain a clear understanding of the software to be developed.
The following basic questions pertaining to the project should be clearly understood by the analyst
before carrying out analysis:

? What is the problem?


? Why is it important to solve the problem?
? What exactly are the data input to the system and what exactly are the data output by the
system?
? What are the possible procedures that need to be followed to solve the problem?
? What are the likely complexities that might arise while solving the problem?
? If there are external software or hardware with which the developed software has to
interface, then what should be the data interchange formats with the external systems?

During requirements analysis, the analyst needs to identify and resolve three main types of problems
in the requirements:
• Anomaly
• Inconsistency
• Incompleteness
o Anomaly: An ambiguity is an anomaly. When a requirement is ambiguous, several interpretations
of that requirement are possible. Any anomaly in any of the requirements can lead to the
development of an incorrect system.

Example: When the temperature becomes high, the heater should be switched off.


Words like “high”, “low”, “good”, “bad” are ambiguous.
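Such an anomaly is removed by quantifying the condition, which also makes the requirement testable. A sketch with an assumed threshold (the 70°C value is an illustrative assumption, not from the source requirement):

```python
# Ambiguous requirement: "when the temperature becomes high, the heater
# should be switched off". An unambiguous restatement fixes a threshold.
HIGH_TEMP_C = 70  # assumed threshold, for illustration only

def heater_command(temperature_c):
    """Return 'off' once the temperature reaches the stated threshold."""
    return "off" if temperature_c >= HIGH_TEMP_C else "on"

print(heater_command(75))  # heater is switched off above the threshold
print(heater_command(20))  # heater stays on below it
```

With the threshold stated explicitly, every reader (and every test case) interprets the requirement the same way.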


o Inconsistency: Two requirements are said to be inconsistent, if one of the requirements
contradicts the other.

Example: Consider the following two requirements that were collected from two different
stakeholders in a process control application development project.

 The furnace should be switched-off when the temperature of the furnace rises above 500.
 When the temperature of the furnace rises above 500 the water shower should be switched-
on and the furnace should remain on.
o Incompleteness: An incomplete set of requirements is one in which some requirements have been
overlooked.
Example:
If a student secures a grade point average (GPA) of less than 6, then the parents of the student must
be intimated about the regrettable performance through a (postal) letter as well as through e-mail.
However, on an examination of all requirements, it was found that there is no provision by which
either the postal or e-mail address of the parents of the students can be entered into the system.
Note: Can an analyst detect all the problems existing in the gathered requirements?
Ans: No. A few problems in the requirements may be very subtle and escape even the most experienced eyes.


SOFTWARE REQUIREMENTS SPECIFICATION (SRS)


SRS document:
The SRS document is the final outcome of the requirements analysis and specification phase.
(1). Users of SRS Document: Usually a large number of different people need the SRS document for
very different purposes. Some of the important categories of users of the SRS document are as
follows:
a. Users, customers, and marketing personnel
b. Software developers
c. Test engineers
d. User documentation writers
e. Project managers
f. Maintenance engineers
(2). Why Spend Time and Resource to Develop an SRS Document? (Importance of SRS)
A well-formulated SRS document finds a variety of usage other than the primary intended usage as a
basis for starting the software development work. The following are the major important uses of a
well-formulated SRS document:
a. Forms an agreement between the customers and the developers: A good SRS document sets
the stage for the customers to form their expectation about the software and the developers
about what is expected from the software.
b. Reduces future reworks: The process of preparation of the SRS document forces the
stakeholders to rigorously think about all of the requirements before design and development
get underway. This reduces later redesign, recoding, and retesting.
c. Provides a basis for estimating costs and schedules: Project managers usually estimate the
size of the software from an analysis of the SRS document. Based on this estimate they make
other estimations such as the effort required to develop the software and the total cost of
development. The SRS document also serves as a basis for price negotiations with the
customer. The project manager also uses the SRS document for work scheduling.
d. Provides a baseline for validation and verification: The SRS document provides a baseline
against which compliance of the developed software can be checked. It is also used by the test
engineers to create the test plan.
e. Facilitates future extensions: The SRS document usually serves as a basis for planning future
enhancements.


Characteristics of a Good SRS Document (IEEE 830 Guidelines)

IEEE Recommended Practice for Software Requirements Specifications [IEEE830] describes the content
and qualities of a good software requirements specification (SRS). Some of the identified desirable
qualities of an SRS document are the following:
Attributes of Good SRS Document
1. Concise: The SRS document should be concise and at the same time unambiguous, consistent,
and complete.
2. Implementation-independent: The SRS should be free of design and implementation
decisions unless those decisions reflect actual requirements. This means that the SRS
document should specify the externally visible behavior of the system and not discuss the
implementation issues.
3. Black-box view: The SRS document should describe the system to be developed as a black
box, and should specify only the externally visible behavior of the system. For this reason, the
SRS document is also called the black-box specification of the software being developed.
4. Traceable: Traceability is also important to verify the results of a phase with respect to the
previous phase and to analyse the impact of changing a requirement on the design elements
and the code.
Example: It should be possible to trace a specific requirement to the design elements that
implement it and vice versa. Similarly, it should be possible to trace a requirement to the code
segments that implement it and the test cases that test this requirement and vice versa.
5. Modifiable: Customers frequently change the requirements during the software development
due to a variety of reasons. To cope up with the requirements changes, the SRS document
should be easily modifiable. For this, an SRS document should be well-structured.
6. Identification of response to undesired events: The SRS document should discuss the system
responses to various undesired events and exceptional conditions that may arise.
7. Verifiable: All requirements of the system as documented in the SRS document should be
verifiable. This means that it should be possible to design test cases based on the description
of the functionality as to whether or not requirements have been met in an implementation.
Example: “When the name of a book is entered, the software should display whether the book
is available for issue or it has been loaned out” is verifiable.
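To illustrate, a verifiable requirement like the one above can be turned directly into an executable test case. A minimal sketch (the function and catalogue below are hypothetical helpers, not part of any real SRS):

```python
# Illustrative check for the verifiable requirement: entering a book's
# name should report whether it is available for issue or loaned out.
def book_status(catalogue, name):
    """Return 'available' or 'loaned out' for a known book title.

    `catalogue` maps book names to True (on shelf) or False (loaned out).
    """
    return "available" if catalogue[name] else "loaned out"

catalogue = {"Software Engineering": True, "Compilers": False}
print(book_status(catalogue, "Software Engineering"))  # available
print(book_status(catalogue, "Compilers"))             # loaned out
```

An unverifiable phrasing such as "the software should respond quickly" admits no such test; quantified, externally visible behavior does.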


Attributes of Bad SRS Documents


1. Over-specification: It occurs when the analyst tries to address the “how to” aspects in the SRS
document. For example, in the library automation problem, one should not specify whether
the library membership records need to be stored indexed on the member’s first name or on
the library member’s identification (ID) number. Over-specification restricts the freedom of
the designers in arriving at a good design solution.
2. Forward references: One should not refer to aspects that are discussed much later in the SRS
document. Forward referencing seriously reduces readability of the specification.
3. Wishful thinking: This type of problem concerns descriptions of aspects that would be
difficult to implement.
4. Noise: The term noise refers to presence of material not directly relevant to the software
development process.
For example, in the register customer function, suppose the analyst writes that customer
registration department is manned by clerks who report for work between 8 am and 5 pm, 7
days a week. This information can be called noise.

Important Categories of Customer Requirements (As per the IEEE 830 guidelines)

As per the IEEE 830 guidelines, the important categories of user requirements are the following.
An SRS document should clearly document the following aspects of software:
(i) Functional requirements
(ii). Non-functional requirements
— Design and implementation constraints
— External interfaces required
— Other non-functional requirements
(iii). Goals of implementation.
(i). Functional Requirements
Requirements, which are related to functional aspect of software fall into this category.
They define functions and functionality within and from the software system.
Examples -
 Search option given to user to search from various invoices.
 User should be able to mail any report to management.
 Users can be divided into groups and groups can be given separate rights.
 Should comply business rules and administrative functions.
 Software is developed keeping downward compatibility intact.
(ii). Non-Functional Requirements
The non-functional requirements are non-negotiable obligations that must be supported by the
software. Non-functional requirements usually address aspects concerning:
 External interfaces,
 User interfaces,
 Maintainability,
 Portability,
 Usability,
 Maximum number of concurrent users,
 Timing and throughput (transactions per second, etc.).
 Design and implementation constraints:
Design and implementation constraints describe any items or issues that will limit the options
available to the developers.


Example constraints can be:


 Corporate or regulatory policies that need to be honored;
 Hardware limitations;
 Interfaces with other applications;
 Specific technologies, tools, and databases to be used; specific communications protocols to
be used; security considerations; design conventions or programming standards to be
followed,
Consider an example of a constraint that can be included —Oracle DBMS needs to be used as this
would facilitate easy interfacing with other applications that are already operational in the
organization.
 External interfaces required: Examples of external interfaces are— hardware, software and
communication interfaces, user interfaces, report formats, etc.
 Other non-functional requirements: This section contains a description of non-functional
requirements that are neither design constraints nor external interface requirements.
Example: Performance requirement such as the number of transactions completed per unit time.
(iii). Goals of implementation: The ‘goals of implementation’ part of the SRS document offers some
general suggestions regarding the software to be developed. These are not binding on the developers,
and they may take these suggestions into account if possible.

Example: The developers may use these suggestions while choosing among different design solutions

CASE STUDY: ATM (Check additional material)

REPRESENTING COMPLEX REQUIREMENTS USING DECISION TABLES AND DECISION TREES

Decision tree:

A decision tree gives a graphic view of the processing logic involved in decision making and the
corresponding actions taken. The edges of a decision tree represent conditions and the leaf nodes
represent the actions to be performed depending on the outcome of testing the condition.
Example: -
Consider Library Membership Automation Software (LMS), which should support the following three
options:
a. New member
b. Renewal
c. Cancel membership
a. New member option:
Decision: When the 'new member' option is selected, the software asks for details about the
member, such as the member's name, address, phone number, etc.
Action: If proper information is entered then a membership record for the member is created and
a bill is printed for the annual membership charge plus the security deposit payable.
b. Renewal option:
Decision: If the 'renewal' option is chosen, the LMS asks for the member's name and his
membership number to check whether he is a valid member or not.
Action: If the membership is valid then membership expiry date is updated and the annual
membership bill is printed, otherwise an error message is displayed.
Designed by Dr. Penchal, NECN for JNTUA-R19


(19A05404T)- Requirements analysis and specification (Unit-2)

c. Cancel membership option:


Decision: If the 'cancel membership' option is selected, then the software asks for member's name
and his membership number.
Action: The membership is cancelled, a cheque for the balance amount due to the member is
printed and finally the membership record is deleted from the database.
Decision tree representation of the above example –
The following tree shows the graphical representation of the above example. After getting information
from the user, the system makes a decision and then performs the corresponding actions.
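The decision tree above can be mirrored directly in code, with one conditional branch per edge and one action per leaf. A minimal sketch, where the function name, record fields, and returned action strings are illustrative, not part of the LMS specification:

```python
VALID_MEMBERS = {101, 102}  # hypothetical membership database

def lms_decide(option, record):
    """Walk the LMS decision tree: each 'if' is an edge (a decision),
    each returned string names a leaf action."""
    if option == "new member":
        # Decision: were proper member details entered?
        if record.get("name") and record.get("address"):
            return "create membership record; print bill (annual charge + security deposit)"
        return "display error message"
    if option == "renewal":
        # Decision: is the member valid?
        if record.get("member_no") in VALID_MEMBERS:
            return "update expiry date; print annual membership bill"
        return "display error message"
    if option == "cancel membership":
        # Decision: is the member valid?
        if record.get("member_no") in VALID_MEMBERS:
            return "print cheque for balance; delete membership record"
        return "display error message"
    return "display error message"
```

For instance, `lms_decide("renewal", {"member_no": 101})` follows the renewal branch and yields its leaf actions.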
Decision table

A decision table is used to represent the complex processing logic in a tabular or a matrix form. The
upper rows of the table specify the variables or conditions to be evaluated. The lower rows of the
table specify the actions to be taken when the corresponding conditions are satisfied. A column in a
table is called a rule. A rule implies that if a condition is true, then the corresponding action is to be
executed.
Example: -
Consider the previously discussed LMS example. The following decision table shows how to represent
the LMS problem in a tabular form. Here the table is divided into two parts, the upper part shows the
conditions and the lower part shows what actions are taken. Each column of the table is a rule.

From the above table you can easily understand that, if the valid selection condition is false then the
action taken for this condition is 'display error message'. Similarly, the actions taken for other
conditions can be inferred from the table.
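A decision table translates naturally into a lookup structure: each rule (column) maps a combination of condition outcomes to an action. A sketch in Python with an illustrative rule set for the LMS example; the exact conditions and actions of the original table are not reproduced here:

```python
# Each key is a tuple of condition outcomes (one rule / column of the
# decision table); the value is the action taken when those conditions hold.
DECISION_TABLE = {
    # (valid_selection, new_member, renewal, cancel): action
    (True,  True,  False, False): "ask member details; create record; print bill",
    (True,  False, True,  False): "ask membership number; update expiry date; print bill",
    (True,  False, False, True):  "ask membership number; print cheque; delete record",
}

def decide(valid_selection, new_member, renewal, cancel):
    # Any combination not covered by a rule falls through to the error action.
    return DECISION_TABLE.get(
        (valid_selection, new_member, renewal, cancel),
        "display error message",
    )
```

Storing the rules as data keeps the processing logic in one place, which is exactly the readability benefit the tabular form provides on paper.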


FORMAL SYSTEM SPECIFICATION

Formal System Specification methods provide us with tools to precisely describe a system and show
that a system is correctly implemented.
Formal Technique:
Definition: A formal technique is a mathematical method to specify a hardware and/or software
system, verify whether a specification is realisable, verify that an implementation satisfies its
specification, prove properties of a system without necessarily running the system, etc.
Why Formal Techniques?
• To accurately describe the execution behaviour of a language.
• English descriptions are often incomplete and ambiguous.
• Compiler writers must implement the language description accurately.
• Programmers want the same behaviour on different platforms.
Note:
There is no single widely accepted notation or formalism for describing semantics.
Formal System Types

Formal Language
A formal specification language consists of two sets, syn and sem, and a relation sat between them. The set syn is called the syntactic domain, the set sem is called the semantic domain, and the relation sat is called the satisfaction relation.


For a given specification syn, and model of the system sem, if sat (syn, sem), then syn is said to be the
specification of sem, and sem is said to be the specificand of syn.
• Syntactic domains: The syntactic domain of a formal specification language consists of an alphabet of symbols and a set of formation rules to construct well-formed formulas from the alphabet.
• Semantic domains: Formal techniques can have considerably different semantic domains. Abstract data type specification languages are used to specify algebras, theories, and programs.
• Satisfaction relation: Given the model of a system, it is important to determine whether an element of the semantic domain satisfies the specifications.

Model-oriented vs. property-oriented approaches:

Formal methods are usually classified into two broad categories:


• Model-oriented approaches
• Property-oriented approaches
In a model-oriented style, one defines a system’s behavior directly by constructing a model of the
system in terms of mathematical structures such as tuples, relations, functions, sets, sequences, etc.
In the property-oriented style, the system's behavior is defined indirectly by stating its properties,
usually in the form of a set of axioms that the system must satisfy.
Operational Semantics
Describe the meaning of a program by executing its statements on a machine, either simulated or
actual. The change in the state of the machine (memory, registers, etc.) defines the meaning of the
statement.

• Linear semantics: In this approach, a run of a system is described by a sequence (possibly infinite) of events or states.
• Branching semantics: In this approach, the behaviour of a system is represented by a directed graph.
• Maximally parallel semantics: In this approach, all the concurrent actions enabled at any state are assumed to be taken together.
• Partial order semantics: Under this view, the semantics ascribed to a system is a structure of states satisfying a partial order relation among the states (events).


AXIOMATIC SPECIFICATION

In axiomatic specification of a system, first-order logic is used to write the pre- and post-
conditions to specify the operations of the system in the form of axioms.

• The pre-conditions capture the conditions that must be satisfied before an operation can successfully be invoked. In essence, the pre-conditions capture the requirements on the input parameters of a function.
• The post-conditions are the conditions that must be satisfied after a function completes execution. In essence, the post-conditions are constraints on the results produced for the function execution to be considered successful.
How to develop an axiomatic specification?
The following are the sequence of steps that can be followed to systematically develop the axiomatic
specifications of a function:
1. Establish the range of input values over which the function should behave correctly. Establish
the constraints on the input parameters as a predicate.
2. Specify a predicate defining the condition which must hold on the output of the function if it
behaved properly.
3. Establish the changes made to the function’s input parameters after execution of the
function.
4. Combine all of the above into pre- and post-conditions of the function.
Example:
Specify the pre- and post-conditions of a function that takes a real number as argument and returns
half the input value if the input is less than or equal to 100, or else returns double the value.
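The pre- and post-conditions for this example can be stated as: pre: x is a real number; post: (x ≤ 100 → result = x/2) and (x > 100 → result = 2x). A sketch that checks these conditions as runtime assertions (the function name is illustrative; the original specification appears in a figure not reproduced here):

```python
def halve_or_double(x):
    # Pre-condition: the argument must be a real number.
    assert isinstance(x, (int, float)), "pre-condition violated"
    result = x / 2 if x <= 100 else 2 * x
    # Post-condition: result = x/2 when x <= 100, otherwise result = 2x.
    assert (x <= 100 and result == x / 2) or (x > 100 and result == 2 * x)
    return result
```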

Example:
Axiomatically specify a function named search which takes an integer array and an integer key value as
its arguments and returns the index in the array where the key value is present.
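Similarly, the search function's axioms can be stated as: pre: the key occurs somewhere in the array; post: the array element at the returned index equals the key. A runtime-checked sketch (illustrative, not the textbook's exact formulation):

```python
def search(a, key):
    # Pre-condition: the key must be present somewhere in the array.
    assert key in a, "pre-condition violated: key not present"
    i = a.index(key)
    # Post-condition: the element at the returned index equals the key.
    assert a[i] == key
    return i
```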

ALGEBRAIC SPECIFICATION
• In the algebraic specification technique, an object class or type is specified in terms of the relationships existing between the operations defined on that type.
• Essentially, algebraic specifications define a system as a heterogeneous algebra. A heterogeneous algebra is a collection of different sets on which several operations are defined.
An algebraic specification is usually presented in four sections:
• Types section
• Exception section
• Syntax section
• Equations section
The operators used in an algebraic specification are classified into:
• Construction operators
• Inspection operators


Example 1: Let us specify a data type point supporting the operations create, xcoord, ycoord, isequal; where the operations have their usual meanings.

Example 2: Let us specify a bounded FIFO queue having a maximum size of MaxSize and supporting the operations create, append, remove, first, and isempty; where the operations have their usual meaning.
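The equations section of such a specification can be exercised in code. Below, a simple list-based model of the bounded FIFO queue is checked against a few standard queue axioms; the axioms shown are typical ones for FIFO queues, stated from general knowledge rather than copied from the omitted figure:

```python
MAXSIZE = 3  # stands in for the MaxSize bound of the specification

def create():
    return []

def append(q, e):
    if len(q) >= MAXSIZE:            # exception: appending to a full queue
        raise OverflowError("queue full")
    return q + [e]

def remove(q):
    if not q:                        # exception: removing from an empty queue
        raise IndexError("queue empty")
    return q[1:]

def first(q):
    if not q:
        raise IndexError("queue empty")
    return q[0]

def isempty(q):
    return len(q) == 0

# Typical axioms, checked against the model:
#   isempty(create()) = true
#   first(append(create(), e)) = e
#   remove(append(create(), e)) = create()
assert isempty(create())
assert first(append(create(), 7)) == 7
assert remove(append(create(), 7)) == create()
```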



(19A05404T)- Requirements analysis and specification (Unit-3)

SOFTWARE DESIGN

During the software design phase, the design document is produced, based on the customer
requirements as documented in the SRS document.

Definition: The activities carried out during the design phase (called the design process) transform the SRS document into the design document.

Classification of Design Activities:


We can broadly classify into two important stages.
• Preliminary (or high-level) design, and
• Detailed design.
Preliminary (or high-level) design:
• A problem is decomposed into a set of modules. The control relationships among the modules are identified, and also the interfaces among various modules are identified.
• The outcome of high-level design is called the program structure or the software architecture.
Detailed design:
• Once the high-level design is complete, detailed design is undertaken.
• During detailed design, each module is examined carefully to design its data structures and the algorithms.
Characteristics of a Good Software Design
The definition of a “good” software design can vary depending on the exact application being designed. For example, “memory size used up by a program” may be an important issue for some applications (such as embedded software) but not for others.
Characteristics of a good software design are listed below:
• Correctness: A good design should first of all be correct. That is, it should correctly implement all the functionalities of the system.
• Understandability: A good design should be easily understandable. Unless a design solution is easily understandable, it would be difficult to implement and maintain it.
• Efficiency: A good design solution should adequately address resource, time, and cost optimization issues.
• Maintainability: A good design should be easy to change. This is an important requirement, since change requests usually keep coming from the customer even after product release.
Understandability of a Design
• While performing the design of a certain problem, assume that we have arrived at a large number of design solutions and need to choose the best one. Obviously all incorrect designs have to be discarded first.
• Out of the correct design solutions, how can we identify the best one?
Understandability of a design solution is possibly the most important issue to be considered while judging the goodness of a design.
NOTE:
A design solution should be modular and layered to be understandable


A good design follows:
(a) the modularity design principle, and
(b) the layered design principle.

(a) Modularity:
1. A modular design is an effective decomposition of a problem.
2. It is a basic characteristic of any good design solution.
3. A modular design, in simple words, implies that the problem has been decomposed into a set of modules that have only limited interactions with each other.
4. Decomposition of a problem into modules facilitates taking advantage of the divide and conquer principle.
5. If different modules have either no interactions or little interaction with each other, then each module can be understood separately.

Note:
A design solution is said to be highly modular, if the different modules in the solution have high
cohesion and their inter-module couplings are low.

(b). Layered design:


A layered design is one in which when the call relations among different modules are represented
graphically, it would result in a tree-like diagram with clear layering. In a layered design solution, the
modules are arranged in a hierarchy of layers. A module can only invoke functions of the modules in
the layer immediately below it.
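A layered call structure can be sketched as three tiny modules in which each layer invokes only the layer immediately below it; all names here are illustrative:

```python
# Bottom layer: data access.
def fetch_record(key):
    return {"42": "record-42"}.get(key)

# Middle layer: business logic; calls only the bottom layer.
def lookup(key):
    rec = fetch_record(key)
    return rec if rec is not None else "not found"

# Top layer: the 'manager'; calls only the middle layer.
def show(key):
    return f"result: {lookup(key)}"
```

Note that `show` never calls `fetch_record` directly; keeping each call one layer deep is what makes the structure easy to understand and debug.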


COHESION AND COUPLING


Cohesion is a measure of the functional strength of a module.
Coupling is a measure of the degree of interaction (or interdependence) between two modules.
A design solution is said to be highly modular, if the different modules in the solution have high cohesion and their inter-module couplings are low.

Difference between COHESION AND COUPLING

COHESION vs. COUPLING
1. Cohesion is an intra-module concept; coupling is an inter-module concept.
2. Cohesion represents the relationships within a module; coupling represents the relationships between modules.
3. Increasing cohesion is good for software; increasing coupling is to be avoided.
4. Cohesion represents the functional strength of a module; coupling represents the interdependence among modules.
5. Highly cohesive modules give the best software; loose coupling gives the best software.

Types/levels of Cohesion:
There are many levels of cohesion, listed here from the lowest (worst) to the highest (best):
1. Coincidental cohesion (LOW)
2. Logical cohesion
3. Temporal cohesion
4. Procedural cohesion
5. Communicational cohesion
6. Sequential cohesion
7. Informational cohesion
8. Functional cohesion (HIGH)

1. Coincidental cohesion (worst)
Coincidental cohesion is when parts of a module are grouped arbitrarily.
Example:
Transaction Processing System: In a transaction processing system (TPS), the get-input, print-error,
and summarize-members functions are grouped into one module. The grouping does not have any
relevance to the structure of the problem.

Designed by Dr. Penchal, NECN for JNTUA-R19


(19A05404T)- Requirements analysis and specification (Unit-3)

2. Logical cohesion
Logical cohesion is when parts of a module are grouped because they are logically categorized to do
the same thing, even if they are different by nature.
Examples:
Input handling routines: Grouping all mouse and keyboard handling routines
3. Temporal cohesion
Temporal cohesion is when parts of a module are grouped by when they are processed – the parts
are processed at a particular time in program execution.
Examples:
The set of functions responsible for initialization, start-up, shutdown of some process, etc. exhibit
temporal cohesion
4. Procedural cohesion
A module is said to possess procedural cohesion, if the set of functions of the module are all part of
a procedure (algorithm) in which certain sequence of steps have to be carried out for achieving an
objective.
Examples:
The algorithm for decoding a message
5. Communicational/informational cohesion
A module is said to have communicational cohesion, if all functions of the module refer to or update
the same data structure.
Examples:
Module determines customer details like use customer account no to find and return customer
name and loan balance.
6. Sequential cohesion
Sequential cohesion is when parts of a module are grouped because the output from one part is the
input to another part like an assembly line (e.g. a function which reads data from a file and
processes the data).
Examples:
In a TPS, the get-input, validate-input, sort-input functions are grouped into one module.
7. Functional cohesion (best)
Functional cohesion is when parts of a module are grouped because they all contribute to a single well-defined task of the module (e.g. lexical analysis of an XML string). The module is focused (it has a strong, single-minded purpose), and no element does unrelated activities.
Examples:
Read transaction record
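The two extremes can be contrasted in code; this is a hypothetical sketch whose function names follow the TPS and transaction-record examples above:

```python
# Coincidental cohesion (worst): unrelated functions grouped into one
# module with no relevance to the structure of the problem.
def get_input(raw):
    return raw.strip()

def print_error(msg):
    return f"ERROR: {msg}"

def summarize_members(members):
    return len(members)

# Functional cohesion (best): every line contributes to the single,
# well-defined task of reading one transaction record (record format
# assumed: "id,amount").
def read_transaction_record(line):
    txn_id, amount = line.split(",")
    return {"id": int(txn_id), "amount": float(amount)}
```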
Types/levels of Coupling:
The levels of coupling, from the worst (highest degree of coupling) to the best (lowest), are:
1. Content Coupling (worst)
2. Common Coupling
3. External Coupling
4. Control Coupling
5. Stamp Coupling
6. Data Coupling (best)

1. Content Coupling: In a content coupling, one module can modify the data of another module or
control flow is passed from one module to the other module. This is the worst form of coupling
and should be avoided.
2. Common Coupling: The modules have shared data such as global data structures.

Designed by Dr. Penchal, NECN for JNTUA-R19


(19A05404T)- Requirements analysis and specification (Unit-3)

3. External Coupling: In external coupling, the modules depend on other modules, external to the
software being developed or to a particular type of hardware.
Ex- protocol, external file, device format, etc.
4. Control Coupling: If the modules communicate by passing control information, then they are said
to be control coupled.
Example- sort function that takes comparison function as an argument
5. Stamp Coupling: In stamp coupling, a complete data structure is passed from one module to another module.
6. Data Coupling: If the dependency between the modules is based on the fact that they
communicate by passing only data, then the modules are said to be data coupled.
Example-customer billing system
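The control-coupling and data-coupling examples above can be sketched as follows (illustrative code, not taken from the text):

```python
# Control coupling: the caller passes control information (here a
# key-extraction function) that steers the callee's internal behaviour.
def sort(items, key_fn):
    return sorted(items, key=key_fn)

# Data coupling: the modules communicate by passing plain data only,
# as in a simple customer-billing computation.
def compute_bill(units_consumed, rate_per_unit):
    return units_consumed * rate_per_unit
```

In the first case the caller influences *how* the callee works; in the second it only supplies *what* to work on, which is why data coupling is preferred.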

Control Hierarchy

Control Hierarchy: The control hierarchy represents the organization of program components in terms
of their call relationships. Thus we can say that the control hierarchy of a design is determined by the
order in which different modules call each other.

Layering:

• In a layered design solution, the modules are arranged into several layers based on their call relationships. A module is allowed to call only the modules that are at a lower layer. That is, a module should not call a module that is either at a higher layer or even in the same layer.
• An important characteristic feature of a good design solution is layering of the modules. A layered design achieves control abstraction and is easier to understand and debug.
• In a layered design, the top-most module in the hierarchy can be considered as a manager that only invokes the services of the lower level modules to discharge its responsibility.

Terminologies associated with a layered design:


• Superordinate and subordinate modules: In a control hierarchy, a module that controls another module is said to be superordinate to it. Conversely, a module controlled by another module is said to be subordinate to the controller.
• Visibility: A module B is said to be visible to another module A, if A directly calls B. Thus, only the immediately lower layer modules are said to be visible to a module.
• Control abstraction: In a layered design, a module should only invoke the functions of the modules that are in the layer immediately below it. In other words, the modules at the higher layers should not be visible (that is, abstracted out) to the modules at the lower layers. This is referred to as control abstraction.
• Depth and width: Depth and width of a control hierarchy provide an indication of the number of layers and the overall span of control respectively. For the design of Figure 5.6(a), the depth is 3 and the width is also 3.


• Fan-out: Fan-out is a measure of the number of modules that are directly controlled by a given module. In Figure 5.6(a), the fan-out of the module M1 is 3. A design in which the modules have very high fan-out numbers is not a good design. The reason for this is that a very high fan-out is an indication that the module lacks cohesion. A module having a large fan-out (greater than 7) is likely to implement several different functions and not just a single cohesive function.
• Fan-in: Fan-in indicates the number of modules that directly invoke a given module. High fan-in represents code reuse and is, in general, desirable in a good design. In Figure 5.6(a), the fan-in of the module M1 is 0, that of M2 is 1, and that of M5 is 2.
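Both metrics can be computed mechanically from a call graph. The sketch below uses a hypothetical graph (Figure 5.6(a) itself is not reproduced here) chosen to match the quoted values: fan-out of M1 = 3, fan-in of M1 = 0, of M2 = 1, of M5 = 2:

```python
# Call graph: module -> list of modules it directly calls.
CALLS = {
    "M1": ["M2", "M3", "M4"],
    "M2": ["M5"],
    "M3": ["M5"],
    "M4": [],
    "M5": [],
}

def fan_out(module):
    # Number of modules directly controlled by the given module.
    return len(CALLS[module])

def fan_in(module):
    # Number of modules that directly invoke the given module.
    return sum(module in callees for callees in CALLS.values())
```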

Software design approaches

There are two fundamentally different approaches to software design that are in use today:
• function-oriented design, and
• object-oriented design.
Function-oriented Design: A system is viewed as something that performs a set of functions. Starting
at this high-level view of the system, each function is successively refined into more detailed functions.
Example: consider a function create-new library-member which essentially creates the record for a
new member, assigns a unique membership number to him, and prints a bill towards his membership
charge.
Function Oriented Design Strategies:
1. Data Flow Diagram (DFD)
2. Data Dictionaries
3. Structure Charts
4. Pseudo Code

The following are the salient features of the function-oriented design approach:
Top-down decomposition: In top-down decomposition, starting at a high-level view of the system,
each high-level function is successively refined into more detailed functions.
Centralized system state: The system state is centralized and shared among different functions.

Designed by Dr. Penchal, NECN for JNTUA-R19


(19A05404T)- Requirements analysis and specification (Unit-3)

Object-oriented design: In the object-oriented design approach,


the system is viewed as collection of objects (i.e. entities). The
state is decentralized among the objects and each object manages
its own state information.

For example, in a Library Automation Software, each library


member may be a separate object with its own data and functions
to operate on these data.

Object Oriented Design Strategies


1. Class
2. Attributes
3. Objects
4. Methods (Behaviour )
5. Message
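The five elements listed above can be seen together in a miniature example based on the library member object mentioned earlier; the class and method names are illustrative:

```python
class LibraryMember:                     # 1. class
    def __init__(self, name):
        self.name = name                 # 2. attribute
        self.issued_books = []           # 2. attribute

    def issue_book(self, title):         # 4. method (behaviour)
        self.issued_books.append(title)
        return f"{title} issued to {self.name}"

member = LibraryMember("Asha")           # 3. object: an instance of the class
message = member.issue_book("SE Notes")  # 5. message: invoking a method
```

Each member object manages its own state (its attributes), which is exactly the decentralization of state described above.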
Function-oriented vs. object-oriented design approach

Function-oriented vs. object-oriented design:
• Abstraction: In function-oriented design, the basic abstractions given to the user are real-world functions. In object-oriented design, the basic abstractions are not real-world functions but data abstractions through which the real-world entities are represented.
• Function grouping: In function-oriented design, functions are grouped together to obtain a higher-level function. In object-oriented design, functions are grouped together on the basis of the data they operate on, since classes are associated with their methods.
• State information: In function-oriented design, the state information is often represented in a centralized shared memory. In object-oriented design, the state information is not centralized but is distributed among the objects of the system.
• Approach: Function-oriented design is a top-down approach; object-oriented design is a bottom-up approach.
• Starting point: Function-oriented design begins by considering the use-case diagrams and the scenarios; object-oriented design begins by identifying objects and classes.
• Decomposition: In function-oriented design we decompose at the function/procedure level; in object-oriented design we decompose at the class level.
• Use: Function-oriented design is mainly used for computation-sensitive applications; object-oriented design is mainly used for evolving systems that mimic a business process.


Overview of SA/SD methodology

The Structured Analysis (SA)/Structured Design (SD) technique can be used to perform the high-level
design of software.

• SA: Structured analysis is used to capture the detailed structure of the system as perceived by the user.
• SD: Structured design is used to define the structure of the solution that is suitable for implementation in some programming language.
• During structured analysis, the SRS document is transformed into a data flow diagram (DFD).
• During structured design, the DFD model is transformed into a structure chart.

As shown in Figure 6.1, the structured analysis activity transforms the SRS document into a graphic
model called the DFD model. During structured analysis, functional decomposition of the system is
achieved. On the other hand, during structured design, all functions identified during structured
analysis are mapped to a module structure.

Structured Analysis
The structured analysis technique is based on the following underlying principles:
1. Top-down decomposition approach.
2. Application of divide and conquer principle. Through this each high level function is
independently decomposed into detailed functions.
3. Graphical representation of the analysis results using data flow diagrams (DFDs).

Data Flow Diagrams (DFDs)

1. A DFD is a hierarchical graphical model of a system that shows the different processing activities
or functions that the system performs and the data interchange among those functions.
2. DFD model only represents the data flow aspects and does not show the sequence of execution of
the different functions and the conditions based on which a function may or may not be executed.
3. In the DFD terminology, each function is called a process or a bubble. It is useful to consider each
function as a processing station (or process) that consumes some input data and produces some
output data.
4. Starting with a set of high-level functions that a system performs, a DFD model represents the sub-
functions performed by the functions using a hierarchy of diagrams.


Data Dictionaries
Overview:
Data Dictionary is the major component in the structured analysis model of the system. A data
dictionary in Software Engineering means a file or a set of files that includes a database’s metadata
(hold records about other objects in the database), like data ownership, relationships of the data to
another object, and some other data.

The data dictionary, in general, includes information about the following:


o Name of the data item
o Aliases
o Description/purpose
o Related data items
o Range of values
o Data structure definition/Forms
Composite data items can be defined in terms of primitive data items using the following data
definition operators.
Notations Meaning
+ Denotes composition of two data items, e.g. a+b represents data a and b
[,,] Represents selection, i.e. any one of the data items listed inside the square brackets can occur. For example, [a,b] represents either a occurs or b occurs.
() The contents inside the bracket represent optional data which may or may not appear.
a+(b) represents either a or a+b occurs.
{} Represents iterative data definition, e.g. {name}5 represents five name data.
{name}* represents zero or more instances of name data.
= represents equivalence, e.g. a=b+c means that a is a composite data item comprising of
both b and c
/* */ Anything appearing within /* and */ is considered as comment.

Primitive symbols used for constructing DFDs

There are essentially five different types of symbols used for constructing DFDs. These primitive
symbols are depicted in Figure 6.2. The meanings of these symbols are explained as follows:

• Function symbol: A function is represented using a circle. This symbol is called a process or a bubble. Bubbles are annotated with the names of the corresponding functions.


• External entity symbol: An external entity such as a librarian, a library member, etc. is represented by a rectangle. The external entities are essentially those physical entities external to the software system which interact with the system by inputting data to the system or by consuming the data produced by the system.
• Data flow symbol: A directed arc (or an arrow) is used as a data flow symbol. A data flow symbol represents the data flow occurring between two processes or between an external entity and a process in the direction of the data flow arrow.
• Data store symbol: A data store is represented using two parallel lines. It represents a logical file. That is, a data store symbol can represent either a data structure or a physical file on disk.
• Output symbol: The output symbol is used when a hard copy is produced.

Synchronous and asynchronous operations:

• Synchronous: If two bubbles are directly connected by a data flow arrow, then they are synchronous.
Example: Here, the validate-number bubble can start processing only after the read-number bubble has supplied data to it; and the read-number bubble has to wait until the validate-number bubble has consumed its data.
• Asynchronous: If two bubbles are connected through a data store, they are asynchronous.
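The distinction can be mimicked in code: a direct call makes the two bubbles synchronous, while a data store (modelled here as a queue) lets them proceed asynchronously. All names are illustrative:

```python
from collections import deque

def read_number(raw):
    return int(raw)

def validate_number(n):
    return -1000 <= n <= 1000

# Synchronous: validate-number runs only when read-number hands it data,
# like two bubbles joined by a direct data flow arrow.
def read_and_validate(raw):
    return validate_number(read_number(raw))

# Asynchronous: the two bubbles communicate through a data store,
# so the producer never waits for the consumer.
store = deque()

def produce(raw):
    store.append(read_number(raw))

def consume():
    return validate_number(store.popleft())
```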

Levels in DFD
Levels in DFD are numbered 0, 1, 2 or beyond. Here, we will see primarily three levels in the data flow
diagram, which are:
• 0-level DFD,
• 1-level DFD, and
• 2-level DFD.
o 0-level DFD (Context diagram)
The context diagram is the most abstract (highest level) data flow representation of a system. It represents the entire system as a single bubble. The bubble in the context diagram is annotated with the name of the software system being developed (usually a noun). The context diagram represents the entire software requirement as a single bubble, with input and output data denoted by incoming and outgoing arrows.
o Level 1 DFD
The level 1 DFD usually contains three to seven bubbles. That is, the system is represented as
performing three to seven important functions.
o Note: What if a system has more than seven high-level requirements identified in the SRS
document? In this case, some of the related requirements have to be combined and represented
as a single bubble in the level 1 DFD.


o Numbering of bubbles
It is necessary to number the different bubbles occurring in the DFD. These numbers help in
uniquely identifying any bubble in the DFD from its bubble number. The bubble at the context
level is usually assigned the number 0 to indicate that it is the 0 level DFD. Bubbles at level 1 are numbered 0.1, 0.2, 0.3, etc.
Ex:
(RMS Calculating Software) A software system called RMS calculating software would read three integral numbers from the user in the range of –1000 to +1000, determine the root mean square (RMS) of the three input numbers, and display it.
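The RMS software's processing steps (validate-input, compute-rms, display-result) can be sketched directly; the function names and the error message are illustrative, not from the DFD:

```python
import math

def validate(a, b, c):
    # validate-input: each number must lie in the range -1000 to +1000.
    return all(-1000 <= n <= 1000 for n in (a, b, c))

def compute_rms(a, b, c):
    # compute-rms: mean of the squares, then the square root.
    return math.sqrt((a * a + b * b + c * c) / 3)

def rms_software(a, b, c):
    # display-result: format the answer (or an error) for the user.
    if not validate(a, b, c):
        return "error: input out of range"
    return f"rms = {compute_rms(a, b, c):.3f}"
```

Each function corresponds to one bubble of the level 1 DFD, with plain data passed between them.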

Commonly made errors while constructing a DFD model


• Many beginners commit the mistake of drawing more than one bubble in the context diagram. The context diagram should depict the system as a single bubble.
• Many beginners create DFD models in which external entities appear at all levels of the DFDs. All external entities interacting with the system should be represented only in the context diagram. The external entities should not appear in the DFDs at any other level.
• It is a common oversight to have either too few or too many bubbles in a DFD. Only three to seven bubbles per diagram should be allowed. This also means that each bubble in a DFD should be decomposed into three to seven bubbles at the next level.
• Many beginners leave the DFDs at the different levels of a DFD model unbalanced.
• A common mistake committed by many beginners while developing a DFD model is attempting to represent control information in a DFD.


Data dictionary:

data-items: {integer}3
rms: float
valid-data: data-items
a: integer
b: integer
c: integer
asq: integer
bsq: integer
csq: integer
msq: integer
Example: Tic-Tac-Toe Computer Game:
Tic-tac-toe is a computer game in which a human player and the computer make alternate moves on a
3 × 3 square. A move consists of marking a previously unmarked square. The player who is first to
place three consecutive marks along a straight line (i.e., along a row, column, or diagonal) on the
square wins. As soon as either the human player or the computer wins, a message congratulating
the winner should be displayed. If neither player manages to get three consecutive marks along a
straight line, and all the squares on the board are filled up, then the game is drawn. The computer
always tries to win a game. The context diagram and the level 1 DFD are shown in Figure 6.9.

Data dictionary

1. move: integer /* number between 1 to 9 */


2. display: game+result
3. game: board
4. board: {integer}9
5. result: [“computer won”, “human won”, “drawn”]


Example: Supermarket Prize Scheme


Example 6.5 (Personal Library Software) Perform structured analysis for the personal library
software of Example 6.5.

Shortcomings of the DFD model

• We judge the function performed by a bubble from its label. However, a short label may not capture the entire functionality of a bubble.
• Control aspects are not well defined by a DFD.
• The method of carrying out decomposition to arrive at the successive levels, and the ultimate level to which decomposition is carried out, are highly subjective and depend on the choice and judgment of the analyst.
• For the same problem, several alternative DFD representations are possible.


Extending DFD Technique to Make it Applicable to Real-time Systems

 In a real-time system, some of the high-level functions are associated with deadlines. Therefore, a
function must not only produce correct results but also should produce them by some pre-
specified time.
 For real-time systems, execution time is an important consideration for arriving at a correct
design. Therefore, explicit representation of control and event flow aspects is essential. One of the
widely accepted techniques for extending the DFD technique to real-time system analysis is the
Ward and Mellor technique.
 In the Ward and Mellor notation, a type of process that handles only Control flows is introduced.
These processes representing control processing are denoted using dashed bubbles. Control flows
are shown using dashed lines/arrows.

Control specifications represent the behavior of the system in two different ways:

 It contains a State Transition Diagram (STD). The STD is a sequential specification of behavior.
 It contains a Program Activation Table (PAT). The PAT is a combinatorial specification of behavior.
 PAT represents invocation sequence of bubbles in a DFD.

Figures: State Transition Diagram (STD) for a chess game; Program Activation Table (PAT) for a vending machine.

STRUCTURED DESIGN

The aim of structured design is to transform the results of the structured analysis (that is, the DFD
model) into a structure chart.
The basic building blocks using which structure charts are designed are as follows:
o Rectangular boxes: A rectangular box represents a module. Usually, every rectangular box is
annotated with the name of the module it represents
o Module invocation arrows: An arrow connecting two modules implies that during program
execution control is passed from one module to the other in the direction of the connecting
arrow
o Data flow arrows: These are small arrows appearing alongside the module invocation arrows.
The data flow arrows are annotated with the corresponding data name.


o Library modules: A library module is usually represented by a rectangle with double edges.
Libraries comprise the frequently called modules. Usually, when a module is invoked by many
other modules, it is made into a library module.
o Selection: The diamond symbol represents the fact that one module of several modules
connected with the diamond symbol is invoked depending on the outcome of the condition
attached with the diamond symbol.
o Repetition: A loop around the control flow arrows denotes that the respective modules are
invoked repeatedly.
Flow chart versus structure chart
A structure chart differs from a flow chart in three principal ways:
1. It is usually difficult to identify the different modules of a program from its flow chart
representation.
2. Data interchange among different modules is not represented in a flow chart.
3. Sequential ordering of tasks that is inherent to a flow chart is suppressed in a structure chart.
Transformation of a DFD Model into Structure Chart
Structured design provides two strategies to guide transformation of a DFD into a structure chart:
1. Transform analysis
2. Transaction analysis
 In transform analysis, first all the data entering the DFD need to be identified.
 In a transaction-driven system, different data items may pass through different computation
paths through the DFD.
Normally, one would start with the level 1 DFD, transform it into module representation using either
the transform or transaction analysis and then proceed toward the lower level DFDs.
1. Transform analysis:
Transform analysis identifies the primary functional components (modules) and the input and
output data for these components. The first step in transform analysis is to divide the DFD into
three types of parts:
• Input
• Processing
• Output
Draw the structure chart for the RMS software

2. Transaction analysis
Transaction analysis is an alternative to transform analysis and is useful while designing transaction
processing programs. A transaction allows the user to perform some specific type of work by using the
software. For example, ‘issue book’, ‘return book’, ‘query book’, etc., are transactions.
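The core idea of transaction analysis, one processing path per transaction type, can be sketched as a dispatch table in Python. The handler names below are purely illustrative, not part of the source material:

```python
# One handler module per transaction, as transaction analysis would derive.
def issue_book(data):
    return f"issued {data}"

def return_book(data):
    return f"returned {data}"

def query_book(data):
    return f"found {data}"

# The root module routes each transaction to its own processing path.
HANDLERS = {"issue": issue_book, "return": return_book, "query": query_book}

def dispatch(transaction_type, data):
    try:
        return HANDLERS[transaction_type](data)
    except KeyError:
        raise ValueError(f"unknown transaction: {transaction_type}")

print(dispatch("issue", "B42"))
```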
Structure chart for the Tic-tac-toe software


Draw the structure chart for the Supermarket Prize:

DETAILED DESIGN:
During detailed design the pseudo code description of the processing and the different data structures
are designed for the different modules of the structure chart. These are usually described in the form
of module specifications (MSPEC). MSPEC is usually written using structured English.
DESIGN REVIEW:
After a design is complete, the design is required to be reviewed. The reviewers check the following
aspects of the design:
 Traceability: Whether each bubble of the DFD can be traced to some module in the structure
chart and vice versa. They check whether each functional requirement in the SRS document
can be traced to some bubble in the DFD model and vice versa.
 Correctness: Whether all the algorithms and data structures of the detailed design are correct.
 Maintainability: Whether the design can be easily maintained in future.
 Implementation: Whether the design can be easily and efficiently be implemented. After the
points raised by the reviewers are addressed by the designers, the design document becomes
ready for implementation.

BASIC OBJECT-ORIENTATION (OO) CONCEPTS

The following diagrams depict basics of Object-Orientation (OO) Concepts.

(a). Object: An object-oriented


program usually represents a
tangible real-world entity such as a
library member, a book, an issue
register, etc. Each object essentially
consists of some data that is private
to the object and a set of functions
(termed as operations or methods)
that operate on those data.


A key advantage of considering a system as a set of objects is the following:


When the system is analyzed, developed, and implemented in terms of objects, it becomes easy to
understand the design and the implementation of the system, since objects provide an excellent
decomposition of a large problem into small parts.
(b). Class: A class is a collection of similar objects. It is a logical entity. A class can also be defined as a
blueprint from which you can create an individual object. A class does not by itself consume any space.
A class is an abstract data type (ADT).
Class Relationships: Classes in a programming solution can be related to each other in the
Following four ways:
• Inheritance
• Association and link
• Aggregation and composition
• Dependency
 Inheritance:
o When one object acquires all the properties and behaviors of a parent object, it is
known as inheritance. It provides code reusability and is used to achieve runtime
polymorphism. In the figure, observe that the classes Faculty, Students, and Staff have
been derived from the base class LibraryMember through an inheritance relationship.

o Each derived class can be considered as a specialisation of its base class because it
modifies or extends the basic properties of the base class in certain ways. Therefore,
the inheritance relationship can be viewed as a generalisation-specialisation
relationship.
o When a new definition of a method that existed in the base class is provided in a
derived class, the method is said to be overridden in the derived class.
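The LibraryMember hierarchy described above can be sketched in a few lines of Python; the issue limits used here are made-up values for illustration only:

```python
class LibraryMember:
    def issue_limit(self):
        return 2            # default limit for a generic member (illustrative)

class Faculty(LibraryMember):
    def issue_limit(self):  # the base-class method is overridden here
        return 10

class Student(LibraryMember):
    pass                    # specialisation that keeps the base behaviour

print(Faculty().issue_limit(), Student().issue_limit())
```

Faculty both *is a* LibraryMember (generalisation-specialisation) and overrides a method the base class already defined.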
 Association and link:
o Association is a common type of relation among classes.
o When two classes are associated, the relationship between two objects of the
corresponding classes is called a link. An association describes a group of similar links.
o Consider the following example. A Student can register in one Elective subject. In this
example, the class Student is associated with the class Elective Subject.
o In unary association, two (or more) different objects of the same class are linked by
the association relationship.


 Composition and Aggregation:


o Composition and aggregation represent part/whole relationships among objects.
Objects which contain other objects are called composite objects.
o Example: A Book object can have up to ten Chapters. In this case, a Book object is said
to be composed of up to ten Chapter objects. The composition/aggregation
relationship can also be read as follows—“A Book has up to ten Chapter objects”. The
composition/aggregation relationship is also known as a has-a relationship.

o Dependency: A class is said to be dependent on another class if any change to the
latter class necessitates a change to be made to the dependent class.
o Abstract class: Classes that are not intended to produce instances of them are called
abstract classes. In other words, an abstract class cannot be instantiated.
Difference between Aggregation and Composition

 In both aggregation and composition, an object of one class "owns" an
object of another class.
 But there is a subtle difference. In composition, the owned object
cannot live on its own (this is also called a "death relationship");
it always lives as a part of its owning object.
 In aggregation, the dependent object is standalone and can exist
even if the object of the owning class is destroyed.
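This lifetime difference can be demonstrated in a short Python sketch. The class and attribute names are illustrative, chosen to echo the Book/Chapter and Student examples used earlier:

```python
class Chapter:
    """A Chapter exists only as part of a Book (composition)."""
    def __init__(self, title):
        self.title = title

class Book:
    def __init__(self, chapter_titles):
        # Composition: the Book itself creates and owns its Chapter
        # objects; when the Book dies, its Chapters die with it.
        self.chapters = [Chapter(t) for t in chapter_titles]

class Student:
    def __init__(self, name):
        self.name = name

class ElectiveSubject:
    def __init__(self, students):
        # Aggregation: Student objects are created outside and merely
        # referenced; they live on even if the subject is deleted.
        self.students = students

alice = Student("alice")
subject = ElectiveSubject([alice])
del subject                  # the owning object is gone ...
print(alice.name)            # ... but the standalone Student survives

book = Book(["Intro", "Design"])
print(len(book.chapters))    # Chapters exist only inside the Book
```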

How to Identify Class Relationships:


In the following, we give examples of a few key words (shown in italics) that indicate the specific
relationships among two classes A and B:
(i).Composition
 B is a permanent part of A
 A is made up of Bs
 A is a permanent collection of Bs
(ii).Aggregation
 B is a part of A
 A contains B
 A is a collection of Bs


(iii). Inheritance
 A is a kind of B
 A is a specialisation of B
 A behaves like B
(iv).Association
 A delegates to B
 A needs help from B
 A collaborates with B.
(c).Abstraction: The abstraction mechanism allows us to represent a problem in a simpler way by
considering only those aspects that are relevant to some purpose and omitting all other details that are
irrelevant. Abstraction is supported in two different ways in object-oriented designs (OODs). These are
the following:

 Feature abstraction: The inheritance mechanism can be thought of as providing feature
abstraction.
 Data abstraction: An object itself can be considered as a data abstraction entity, because it
abstracts out the exact way in which it stores its various private data items and it merely
provides a set of methods to other objects to access and manipulate these data items.

(d).Encapsulation: The data of an object is encapsulated within its methods. To access the data
internal to an object, other objects have to invoke its methods, and cannot directly access the data.

Encapsulation offers the following three important advantages:

o Protection from unauthorised data access
o Data hiding
o Weak coupling
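A minimal Python sketch of encapsulation follows, using Python's name mangling to keep the data private to the object; the Account class is our own illustration, not from the text:

```python
class Account:
    def __init__(self):
        self.__balance = 0          # double underscore: not reachable
                                    # as account.__balance from outside

    def deposit(self, amount):
        # Data is accessed and modified only through methods,
        # which can also enforce validity checks.
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.__balance += amount

    def balance(self):
        return self.__balance

acc = Account()
acc.deposit(100)
print(acc.balance())
```

Other objects must invoke deposit() or balance(); a direct read of the private data raises an AttributeError, which is exactly the protection from unauthorised access listed above.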

(e).Polymorphism:

Polymorphism literally means many forms: poly (many) and morph (form).

o Static polymorphism: Static polymorphism occurs when multiple methods implement the
same operation and the binding is resolved at compile time (statically).
o Dynamic polymorphism: Dynamic polymorphism is
also called dynamic binding. In dynamic binding, the
exact method that would be invoked (bound) on a
method call can be known only at run time
(dynamically).
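A short sketch of dynamic binding in Python follows; Shape, Circle, and Square are illustrative classes, not taken from the text:

```python
class Shape:
    def area(self):
        raise NotImplementedError

class Circle(Shape):
    def __init__(self, r):
        self.r = r
    def area(self):
        return 3.14159 * self.r ** 2

class Square(Shape):
    def __init__(self, side):
        self.side = side
    def area(self):
        # Same operation name as Circle.area; which body runs is
        # decided at run time from each object's dynamic type.
        return self.side ** 2

for shape in [Circle(1.0), Square(2)]:
    print(shape.area())
```

The loop variable is typed only as "some Shape"; the method actually bound on each shape.area() call is known only when the call executes.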

(f).Method overriding: When a new definition of a method that existed in the base class is provided in
a derived class, the method is said to be overridden in the derived class.
Advantages of OOD:
The main reason for the popularity of OOD is that it holds out the following promises
1. Code and design reuse
2. Increased productivity
3. Ease of testing and maintenance


4. Better code and design understandability enabling development of large programs


Of all the above-mentioned advantages, it is usually agreed that the chief advantage of OOD is
improved productivity, which comes about due to a variety of factors, such as the following:
1. Code reuse by the use of predeveloped class libraries
2. Code reuse due to inheritance
3. Simpler and more intuitive abstraction, i.e., better management of inherent problem and code
complexity
4. Better problem decomposition
Disadvantages of OOD:
1. Object-oriented programs tend to run a little slower
2. Spatial locality of data becomes weak and this leads to higher cache miss ratios and
consequently to larger memory access times

UML Diagrams

Unified Modeling Language (UML) can be used to document object-oriented analysis and design
results that have been obtained using any methodology. UML was developed to standardize the large
number of object-oriented modelling notations:

1. OMT [Rumbaugh 1991]


2. Booch’s methodology [Booch 1991]
3. OOSE [Jacobson 1992]
4. Odell’s methodology [Odell 1992]
5. Shlaer and Mellor methodology[Shlaer 1992]
As shown in Figure 7.12, OMT had the most profound influence on UML.
Evolution of UML:

Model: A model is an abstraction of a real problem (or situation), and is constructed by leaving out
unnecessary details. This reduces the problem complexity and makes it easy to understand the
problem (or situation).

UML DIAGRAMS

UML diagrams can capture the following views (models) of a system:

1. User’s view
2. Structural view
3. Behaviourial view
4. Implementation view
5. Environmental view


Figure 7.14 shows the different views that the UML diagrams can document.

1. User View: The users’ view captures the view of the system in terms of the functionalities
offered by the system to its users.
2. Structural view: The structural model is also called the static model, since the structure of a
system does not change with time.
3. Behavioral view: The behavioral view captures how objects interact with each other in time
to realise the system behavior. The system behavior captures the time-dependent (dynamic)
behavior of the system. It therefore constitutes the dynamic model of the system.
4. Implementation view: This view captures the important components of the system and their
interdependencies. For example, the implementation view might show the GUI part, the
middleware, and the database part as the different parts and also would capture their
interdependencies.
5. Environmental view: This view models how the different components are implemented on
different pieces of hardware

USE CASE MODEL

 Intuitively, the use cases represent the different ways in which a system can be used by the users.
 A use case can be viewed as a set of related scenarios tied
together by a common goal. The main line sequence and
each of the variations are called scenarios or instances of
the use case. Each scenario is a single path of user events
and system activity.
 In contrast to all other types of UML diagrams, the use
case model represents a functional or process model of a
system.
 Both human users and external systems can be represented by stick person icons (called actors).
 When a stick person icon represents an external system, it is annotated by the stereotype
<<external system>>.

Example: The use case model for the Tic-tac-toe game software.


The use case diagram of the Super market prize:

Text description
 U1: register-customer: Using this use case, the customer can register himself by providing the
necessary details.
o Scenario 1: Mainline sequence
1. Customer: select register customer option
2. System: display prompt to enter name, address, and telephone number.
3. Customer: enter the necessary values
4. System: display the generated id and the message that the customer has
successfully been registered.
o Scenario 2: At step 4 of mainline sequence
4: System: displays the message that the customer has already registered.
o Scenario 3: At step 4 of mainline sequence
4: System: displays message that some input information has not been entered. The
system displays a prompt to enter the missing values.
 U2: register-sales: Using this use case, the clerk can register the details of the purchase made by a
customer.
o Scenario 1: Mainline sequence
1. Clerk: selects the register sales option.
2. System: displays prompt to enter the purchase details and the id of the customer.
3. Clerk: enters the required details.
4. System: displays a message of having successfully registered the sale.
 U3: select-winners. Using this use case, the manager can generate the winner list.
o Scenario 1: Mainline sequence
1. Manager: selects the select-winner option.
2. System: displays the gold coin and the surprise gift winner list.


Generalization: Use case generalisation can be used when you have one use case that is similar to
another, but does something slightly differently or something more.

Includes: The includes relationship implies that one use case includes the behaviour of another use
case in its sequence of events and actions.

Extends: The main idea behind the extends relationship among use cases is that it allows you to show
optional system behaviour. An optional system behaviour is executed only if certain conditions hold;
otherwise, the optional behaviour is not executed.

USE CASE PACKAGING: Packaging is the mechanism provided by UML to handle complexity. When we
have too many use cases in the top-level diagram, we can package the related use cases.


Characteristics of a Good User Interface


The user interface portion of a software product is responsible for all interactions with the user.

1. Speed of learning:
a. A good user interface should be easy to learn.
b. Speed of learning is hampered by complex syntax and semantics of the command
issue procedures.
c. A good user interface should not require its users to memorise commands.
d. Neither should the user be asked to remember information from one screen to
another while performing various tasks using the interface.

Besides, the following three issues are crucial to enhance the speed of learning:

o Use of metaphors and intuitive command names: Speed of learning an
interface is greatly facilitated if commands are based on day-to-day real-life examples
or on physical objects with which the users are familiar.
 (Ex: shopping cart).
o Consistency: Once a user learns about a command, he should be able to use similar
commands in different circumstances for carrying out similar actions.
o Component-based interface: Users can learn an interface faster if its interaction style
is very similar to that of other applications with which the user is already familiar.
2. Speed of use: Speed of use of a user interface is determined by the time and user effort
necessary to initiate and execute different commands. This characteristic of the interface
is sometimes referred to as the productivity support of the interface.
3. Speed of recall: Once users learn how to use an interface, the speed with which they can
recall the command issue procedure should be maximised.
4. Error prevention: A good user interface should minimise the scope of committing errors
while initiating different commands.
5. Aesthetic and attractive: A good user interface should be attractive to use. An attractive
user interface catches user attention and fancy.
6. Consistency: The commands supported by a user interface should be consistent.
7. Feedback: A good user interface must provide feedback to various user actions.
8. Support for multiple skill levels: A good user interface should support multiple levels of
sophistication of command issue procedure for different categories of users.
9. Error recovery (undo facility): While issuing commands, even the expert users can commit
errors. Therefore, a good user interface should allow a user to undo a mistake committed
by him while using the interface.
10. User guidance and on-line help: Users seek guidance and on-line help when they either
forget a command or are unaware of some features of the software.


User Guidance and On-line Help

 Users may seek help about the operation of the software any time while using the software.
 This is provided by the on-line help system.

On-line help system: Users expect the on-line help messages to be tailored to the context in which
they invoke the “help system”. Therefore, a good on-line help system should keep track of what a
user is doing while invoking the help system and provide the output message in a context-dependent
way.

Guidance messages: The guidance messages should be carefully designed to prompt the user
about the next actions he might pursue, the current status of the system, the progress so far made
in processing his last command, etc.

Error messages: Error messages are generated by a system either when the user commits some
error or when some errors are encountered by the system during processing due to exceptional
conditions, such as out of memory, a broken communication link, etc.

Mode-based Vs Mode-less Interface

A mode is a state or collection of states in which only a subset of all user interaction tasks can be
performed. In a modeless interface, the same set of commands can be invoked at any time during the
running of the software. Thus, a modeless interface has only a single mode and all the commands are
available all the time during the operation of the software. On the other hand, in a mode-based
interface, different set of commands can be invoked depending on the mode in which the system is,
i.e. the mode at any instant is determined by the sequence of commands already issued by the user.

A mode-based interface can be represented using a state transition diagram, where each node of the
state transition diagram would represent a mode. Each state of the state transition diagram can be
annotated with the commands that are
meaningful in that state.
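The state transition diagram behind a mode-based interface can be captured directly as a transition table. A small Python sketch follows; the modes and commands are invented for illustration:

```python
# (mode, command) -> next mode. Commands absent from a mode's entries
# are simply not available in that mode, as in a mode-based interface.
TRANSITIONS = {
    ("top", "open"):           "file-dialog",
    ("file-dialog", "select"): "editing",
    ("editing", "close"):      "top",
}

def issue(mode, command):
    try:
        return TRANSITIONS[(mode, command)]
    except KeyError:
        raise ValueError(f"'{command}' is not available in mode '{mode}'")

print(issue("top", "open"))
```

Each key of the table annotates a state of the STD with the commands that are meaningful in it; an attempt to issue any other command is rejected.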

Fig 9.2 shows the interface of a word processing program. The top-level menu provides the user
with a gamut of operations like file open, close, save, etc. When the user chooses the open option,
another frame is popped up which limits the user to selecting a name from one of the folders.


Types of user interfaces

Broadly speaking, user interfaces can be classified into the following three categories:

1. Command language-based interfaces


2. Menu-based interfaces
3. Direct manipulation interfaces

1.Command Language-based Interface:

 As the name itself suggests, is based on designing a command language which the user can
use to issue the commands.
 The user is expected to frame the appropriate commands in the language and type them
appropriately whenever required.
 Command language-based interfaces allow fast interaction with the computer and simplify the
input of complex commands.
 Further, a command language-based interface can be implemented even on cheap
alphanumeric terminals.
 Usually, command language-based interfaces are difficult to learn and require the user to
memorise the set of primitive commands

Issues in designing a command language-based interface:

Two overbearing command design issues are to:

 Reduce the number of primitive commands that a user has to remember and
 Minimize the total typing required.

2. Menu-based Interface

 An important advantage of a menu-based interface over a command language-based interface
is that a menu-based interface does not require the users to remember the exact syntax of the
commands.
 A menu-based interface is based on recognition of the command names, rather than
recollection.
 In a menu-based interface, the typing effort is minimal, as most interactions are carried out
through menu selections using a pointing device.

Scrolling menu: When the full choice list is large and cannot be displayed within the menu area,
scrolling of the menu items is required.

Walking menu: In this technique, when a menu item is selected, further menu items are
displayed adjacent to it in a sub-menu.

Hierarchical menu: This type of menu is suitable for small screens with limited display area such as
that in mobile phones. In a hierarchical menu, the menu items are organised in a hierarchy or tree
structure


3. Direct Manipulation Interfaces:

 Direct manipulation interfaces present the interface to the user in the form of visual models
(i.e., icons or objects).
 For this reason, direct manipulation interfaces are sometimes called as iconic interfaces.
 In this type of interface, the user issues commands by performing actions on the visual
representations of the objects, e.g., pull an icon representing a file into an icon representing a
trash box, for deleting the file.

Component-based GUI development

The current style of user interface development is component-based. It recognises that every user
interface can easily be built from a handful of predefined components such as menus, dialog boxes,
forms, etc.
Window System:
Most modern graphical user interfaces are
developed using some window system. A
window is a rectangular area on the screen. A
window can be considered to be a virtual
screen, in the sense that it provides an interface
to the user for carrying out independent
activities, e.g., one window can be used for
editing a program and another for drawing
pictures, etc


Window management system (WMS)


A graphical user interface typically consists of a large number of windows. Therefore, it is necessary to
have some systematic way to manage these windows. Most graphical user interface development
environments do this through a window
management system (WMS).
A WMS consists of two parts (see Figure 9.4):
• A window manager, and
• A window system.
Window manager is the component of WMS
with which the end user interacts to do various
window-related operations such as window
repositioning, window resizing, iconification,
etc.
The window manager can be considered as a
special kind of client that makes use of the
services (function calls) supported by the
window system. The application programmer can also directly invoke the services of the window
system to develop the user interface.

Component-based development

A development style based on widgets (a widget is the short form of window object) is called the
component-based (or widget-based) GUI development style.
Types of Widgets

1. Label widget: A label widget does nothing except display a label, i.e., it does not have any
other interaction capabilities and is not sensitive to mouse clicks. A label widget is often used
as a part of other widgets.
2. Container widget: These widgets do not stand by themselves, but exist merely to contain
other widgets. Other widgets are created as children of the container widget.
3. Pop-up menu: These are transient and task specific. A pop-up menu appears upon pressing
the mouse button, irrespective of the mouse position.

4. Pull-down menu : These are more permanent and general. You have to move the cursor to a
specific location and pull down this type of menu.
5. Dialog boxes: We often need to select multiple elements from a selection list. A dialog box
remains visible until explicitly dismissed by the user.
6. Push button: A push button contains key words or pictures that describe the action that is
triggered when you activate the button.

7. Radio buttons: A set of radio buttons are used when only one option has to be selected out of
many options.
8. Combo boxes: A combo box looks like a button until the user interacts with it. When the user
presses or clicks it, the combo box displays a menu of items to choose from.

A USER INTERFACE DESIGN METHODOLOGY

A GUI Design Methodology

Interface design methodology consists of the following important steps:

 Examine the use case model of the software.
 Task and object modelling.
 Metaphor selection.
 Interaction design and rough layout.
 Detailed presentation and graphics design.
 GUI construction and usability evaluation.

Use case model: The Use-case model is defined as a model which is used to show how users interact
with the system in order to solve a problem. As such, the use case model defines the user's objective,
the interactions between the system and the user, and the system's behavior required to meet these
objectives.

Task and object modeling: A task is a human activity intended to achieve some goals. Examples of task
goals can be as follows:
 Reserve an airline seat
 Buy an item
 Transfer money from one account to another
 Book a cargo for transmission to an address
Metaphor selection: The first place one should look for while trying to identify the candidate
metaphors is the set of parallels to objects, tasks, and terminologies of the use cases.
Ex: white board, shopping cart, post box, etc.

Interaction design and rough layout: The interaction design involves mapping the subtasks into
appropriate controls and other widgets such as forms, text boxes, etc. This involves making a choice
from a set of available components that would best suit the subtask. Rough layout concerns how the
controls and other widgets are to be organised in windows.

Detailed presentation and graphics design: Each window should represent either an object or many
objects that have a clear relationship to each other.

GUI construction and usability evaluation: Proper GUI construction, based on usability and the
context of use, is important.

Ex: Some of the windows have to be defined as modal dialogs. When a window is a modal dialog, no
other windows in the application are accessible until the current window is closed. When a modal
dialog is closed, the user is returned to the window from which the modal dialog was invoked.

(19A05404T)- CODING AND TESTING (Unit-4)

Coding Standards and Guidelines

 Coding is undertaken once the design phase is complete and the design documents have been
successfully reviewed.
 In the coding phase, every module specified in the design document is coded and unit tested.
 During unit testing, each module is tested in isolation from other modules. That is, a module is
tested independently as and when its coding is complete.
 After all the modules of a system have been coded and unit tested, the integration and system
testing phase is undertaken.
 Over the years, the general perception of testing as monkeys typing in random data and trying to
crash the system has changed. Now testers are looked upon as masters of specialised concepts,
techniques, and tools.

CODING: The objective of the coding phase is to transform the design of a system into code in a high-
level language, and then to unit test this code.

Coding Standards and Guidelines

 Normally, good software development organisations require their programmers to adhere to
some well-defined and standard style of coding, which is called their coding standard.
 It is mandatory for the programmers to follow the coding standards.
 Compliance of their code to coding standards is verified during code inspection. Any code that
does not conform to the coding standards is rejected during code review and the code is reworked
by the concerned programmer.
 Good software development organisations usually develop their own coding standards and
guidelines depending on what suits their organization best and based on the specific types of
software they develop.
Coding Standards
1. Rules for limiting the use of global: These rules list what types of data can be declared global and
what cannot, with a view to limit the data that needs to be defined with global scope.
2. Standard headers for different modules: The header of different modules should have standard
format and information for ease of understanding and maintenance.
The following is an example of header format that is being used in some companies:
a. Name of the module.
b. Date on which the module was created.
c. Author’s name.
d. Modification history.
e. Synopsis of the module.
f. Different functions supported in the module, along with their input/output parameters.
g. Global variables accessed/modified by the module
3. Naming conventions for global variables, local variables, and constant identifiers: A popular
naming convention is that variables are named using mixed case lettering. Global variable names
would always start with a capital letter (e.g., GlobalData) and local variable names start with small
letters (e.g., localData). Constant names should be formed using capital letters only (e.g.,
CONSTDATA).
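
The convention described above can be illustrated with a small, hypothetical C fragment (the names MAXUSERS, GlobalErrorCount, and addUser are invented for this illustration):

```c
/* Hypothetical illustration of the naming convention described above */
#define MAXUSERS 100             /* constant identifier: capital letters only    */

int GlobalErrorCount = 0;        /* global variable: starts with a capital letter */

static int addUser(int currentUsers) {
    int newCount = currentUsers + 1;   /* local variable: starts with a small letter */
    if (newCount > MAXUSERS)
        GlobalErrorCount++;            /* record the attempt to exceed the limit */
    return newCount;
}
```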


4. Conventions regarding error return values and exception handling mechanisms: The way error
conditions are reported by different functions in a program should be standard within an
organization. For example, all functions while encountering an error condition should either return
a 0 or 1 consistently, independent of which programmer has written the code. This facilitates
reuse and debugging.
Coding Guidelines
1. Do not use a coding style that is too clever or too difficult to understand: Code should be easy to
understand. Many inexperienced engineers actually take pride in writing cryptic and
incomprehensible code. Clever coding can obscure meaning of the code and reduce code
understandability; thereby making maintenance and debugging difficult and expensive.
2. Avoid obscure side effects: The side effects of a function call include modifications to the
parameters passed by reference, modification of global variables, and I/O operations. An obscure
side effect is one that is not obvious from a casual examination of the code.
3. Do not use an identifier for multiple purposes: Programmers often use the same identifier to
denote several temporary entities.
4. Each variable should be given a descriptive name indicating its purpose.
5. Use of variables for multiple purposes usually makes future enhancements more difficult.
6. Code should be well-documented: As a rule of thumb, there should be at least one comment line
on the average for every three source lines of code.
7. Length of any function should not exceed 10 source lines: A lengthy function is usually very
difficult to understand as it probably has a large number of variables and carries out many
different types of computations. For the same reason, lengthy functions are likely to have
disproportionately larger number of bugs.
8. Do not use GO TO statements: Use of GO TO statements makes a program unstructured. This
makes the program very difficult to understand, debug, and maintain.

CODE REVIEW

 Review is a very effective technique to remove defects from source code. In fact, review has been
acknowledged to be more cost-effective in removing defects as compared to testing.
 Testing is an effective defect removal mechanism. However, testing is applicable to only
executable code.
 The reason behind why code review is a much more cost-effective strategy to eliminate errors
from code compared to testing is that reviews directly detect errors. On the other hand, testing
only helps detect failures and significant effort is needed to locate the error during debugging.
 Normally, the following two types of reviews are carried out on the code of a module:
o Code walkthrough.
o Code inspection.

Code walkthrough:

1. The main objective of code walkthrough is to discover the algorithmic and logical errors in the
code.
2. Code walkthrough is an informal code analysis technique.
3. In this technique, a module is taken up for review after the module has been coded,
successfully compiled, and all syntax errors have been eliminated.


4. A few members of the development team are given the code a couple of days before the
walkthrough meeting.
5. Each member selects some test cases and simulates execution of the code by hand (i.e., traces
the execution through different statements and functions of the code).
6. The members note down their findings of their walkthrough and discuss those in a
walkthrough meeting where the coder of the module is present.
Code Inspection:
1. The principal aim of code inspection is to check for the presence of some common types of
errors that usually creep into code due to programmer mistakes and oversights and to check
whether coding standards have been adhered to.
2. The programmer usually receives feedback on programming style, choice of algorithm, and
programming techniques.
Following is a list of some classical programming errors which can be checked during code
inspection:
 Use of uninitialized variables.
 Jumps into loops.
 Non-terminating loops.
 Incompatible assignments.
 Array indices out of bounds.
 Improper storage allocation and deallocation.
 Use of incorrect logical operators or incorrect precedence among operators.
 Dangling reference caused when the referenced memory has not been allocated.

SOFTWARE DOCUMENTATION
When software is developed, in addition to the executable files and the source code, several kinds of
documents such as users’ manual, software requirements specification (SRS) document, design
document, test document, installation manual, etc., are developed as part of the software engineering
process. All these documents are considered a vital part of any good software development practice.
Good documents are helpful in the following ways:

 Good documents help enhance understandability of code.
 Documents help the users to understand and effectively use the system.
 Good documents help to effectively tackle the manpower turnover problem.
 Production of good documents helps the manager to effectively track the progress of the
project.


Different types of software documents can broadly be classified into the following:
Internal documentation:
1. These are provided in the source code itself.
2. Internal documentation can be provided in the code in several forms.
3. The important types of internal documentation are the following:
a. Comments embedded in the source code.
b. Use of meaningful variable names.
c. Module and function headers.
d. Code indentation.
e. Code structuring (i.e., code decomposed into modules and functions).
f. Use of enumerated types.
g. Use of constant identifiers.
h. Use of user-defined data types
External documentation: These are the supporting documents such as SRS document, installation
document, user manual, design document, and test document
Gunning’s fog index:
Gunning’s fog index (developed by Robert Gunning in 1952) is a metric that has been designed
to measure the readability of a document. The computed metric value (fog index) of a
document indicates the number of years of formal education that a person should have, in
order to be able to comfortably understand that document.

Example 10.1 Consider the following sentence: “The Gunning’s fog index is based on the
premise that use of short sentences and simple words makes a document easy to understand.”
Calculate its fog index.
The fog index is computed as 0.4 × (average number of words per sentence + percentage of
words having three or more syllables). The example sentence has 23 words in a single
sentence, of which three (sentences, document, understand) have three or more syllables, so
its fog index is 0.4 × (23 + 100 × 3/23) ≈ 14.4.

If a users’ manual is to be designed for use by factory workers whose educational qualification is class
8, then the document should be written such that the Gunning’s fog index of the document does not
exceed 8.
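
Assuming the standard Gunning formula, 0.4 × (average words per sentence + percentage of words with three or more syllables), the computation can be sketched as:

```c
/* Sketch of Gunning's fog index, assuming the standard formula:
   0.4 * (average words per sentence + percentage of complex words),
   where a complex word has three or more syllables */
double fog_index(int words, int sentences, int complexWords) {
    return 0.4 * ((double)words / sentences + 100.0 * (double)complexWords / words);
}
```

For the 23-word example sentence above, with one sentence and three complex words, fog_index(23, 1, 3) evaluates to about 14.4.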

TESTING

Definition: Testing a program involves executing the program with a set of test inputs and observing if
the program behaves as expected. If the program fails to behave as expected, then the input data and
the conditions under which it fails are noted for later debugging and error correction.


Terminologies

Few important terminologies that have been standardised by the IEEE Standard Glossary of Software
Engineering Terminology [IEEE90]:

1. Mistake: A mistake is essentially any programmer action that later shows up as an incorrect
result during program execution. A programmer may commit a mistake in almost any
development activity.
For example, during coding a programmer might commit the mistake of not initializing a certain
variable, or might overlook the errors that might arise in some exceptional situations such as
division by zero in an arithmetic operation.
2. Error: An error is the result of a mistake committed by a developer in any of the development
activities. One example of an error is a call made to a wrong function. The terms error, fault,
bug, and defect are considered to be synonyms in the area of program testing
3. Failure: A failure of a program essentially denotes an incorrect behaviour exhibited by the
program during its execution. Every failure is caused by some bugs present in the program.
Example: A program crashes on an input.

Note: It may be noted that mere presence of an error in a program code may not necessarily lead to a
failure during its execution.
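
The code fragment referred to below has been omitted from these notes; a hypothetical reconstruction of the kind of code meant (the names marks and getMarks are invented) is:

```c
/* Hypothetical reconstruction: roll is used as an array index without validation */
int marks[100];                 /* marks indexed by roll numbers 1..100 */

int getMarks(int roll) {
    return marks[roll - 1];     /* latent error: no check that roll >= 1 */
}
```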

In the above code, if the variable roll assumes zero or some negative value under some circumstances,
then an array index out of bound type of error would result.

4. Test Case: A test case is a specification of the inputs, execution conditions, testing procedure,
and expected results that define a single test to be executed to achieve a particular software
testing objective.
5. Test suite: A test suite is the set of all test cases that have been designed by a tester to test a
given program.
6. Testability: A program is more testable if it can be adequately tested with a smaller number of
test cases. Obviously, a less complex program is more testable.

Verification vs Validation
Barry Boehm described verification and validation as the following:
 Verification: Are we building the product right?
 Validation: Are we building the right product?
1. Verification:
Verification is the process of checking whether the software is being built correctly, that is,
whether the product under development fulfils the requirements that were specified for it.
Verification is static testing.
Activities involved in verification:
 Inspections


 Reviews
 Walkthroughs
2. Validation:
Validation is the process of checking whether the software product meets the customer’s
actual (high-level) requirements, i.e., whether we are developing the right product. It
compares the actual product against the expected product. Validation is dynamic testing.

Activities involved in validation:
1. Black box testing
2. White box testing
3. Unit testing
4. Integration testing
Error detection techniques = Verification techniques + Validation techniques

Testing Activities

1. Test suite design
2. Running test cases and checking the results to detect failures
3. Locate error
4. Error correction

Levels of Testing
A software product is normally tested in three levels or stages:
1. Unit testing: During unit testing, the individual functions (or units) of a program are tested.
2. Integration testing: After testing all the units individually, the units are slowly integrated and
tested after each step of integration (integration testing).
3. System testing: Finally, the fully integrated system is tested (system testing).


Unit Testing
Unit testing is defined as a type of software testing where individual components of software
are tested. Unit testing of a software product is carried out during the development of an
application. An individual component may be either an individual function or a procedure.
Unit testing is typically performed by the developer.

Objective of Unit Testing:
1. To isolate a section of code.
2. To verify the correctness of code.
3. To test every function and procedure.
4. To fix bugs early in the development cycle and to save costs.
Integration testing
Integration testing is the process of testing the interface between two software units or modules. It
focuses on determining the correctness of the interface. There are four types of integration testing
approaches:

a. Big-Bang Integration Testing: It is the simplest integration testing approach, where all the
modules are combined and the functionality is verified after the completion of individual
module testing.
b. Bottom-Up Integration Testing: In bottom-up testing, each module at lower levels is tested
with higher modules until all modules are tested.
c. Top-Down Integration Testing: In top-down integration testing, testing starts from the top-level
module, and stubs are used to simulate the behaviour of the lower-level modules that are not yet
integrated.
d. Mixed Integration Testing: A mixed integration testing is also called sandwiched integration testing.
A mixed integration testing follows a combination of top down and bottom-up testing approaches
System Testing
System Testing is carried out on the whole system in the context of the system requirement
specifications, the functional requirement specifications, or both. System testing tests the design and
behaviour of the system against the expectations of the customer, and verifies that the system meets
the requirements stated in the software requirements specification (SRS).

BLACK-BOX TESTING

In black-box testing, test cases are designed from an examination of the input/output values only and no
knowledge of design or code is required. The following are the two main approaches available to design black
box test cases:

 Equivalence class partitioning
 Boundary value analysis

 Equivalence Class Partitioning

In the equivalence class partitioning approach, the domain of input values to the program under test is
partitioned into a set of equivalence classes. The partitioning is done such that for every input data
belonging to the same equivalence class, the program behaves similarly.
Example 1 : For a software that computes the square root of an input integer that can assume values
in the range of 0 and 5000. Determine the equivalence classes and the black box test suite.


Answer: There are three equivalence classes—The set of negative integers, the set of integers in the
range of 0 and 5000, and the set of integers larger than 5000. Therefore, the test cases must include
representatives for each of the three equivalence classes. A possible test suite can be: {–5,500,6000}.

Example 2: Design equivalence class partitioning test suite for a function that reads a character string
of size less than five characters and displays whether it is a palindrome.

Answer: The equivalence classes are the leaf level classes shown in Figure 10.4. The equivalence
classes are palindromes, non-palindromes, and invalid inputs. Now, selecting one representative value
from each equivalence class, we have the required test suite: {abc,aba,abcdef}.
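
A sketch of such a palindrome function (the name and the return-code convention are assumptions: 1 = palindrome, 0 = non-palindrome, -1 = invalid input):

```c
#include <string.h>

/* Sketch: palindrome check for strings of fewer than five characters */
int checkPalindrome(const char *s) {
    size_t n = strlen(s);
    if (n >= 5)
        return -1;                    /* invalid input class */
    for (size_t i = 0; i < n / 2; i++)
        if (s[i] != s[n - 1 - i])
            return 0;                 /* non-palindrome class */
    return 1;                         /* palindrome class */
}
```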

 Boundary Value Analysis
Boundary value analysis-based test suite design involves designing test cases using the values at the
boundaries of different equivalence classes.
Example, programmers may improperly use < instead of <=, or conversely <= for <, etc.
Example 10.9 For a function that computes the square root of the integer values in the range of 0 and
5000, determine the boundary value test suite.
Answer: There are three equivalence classes—The set of negative integers, the set of integers in the
range of 0 and 5000, and the set of integers larger than 5000. The boundary value-based test suite is:
{0,-1,5000,5001}.
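
The kind of off-by-one mistake that boundary values expose can be sketched as follows (the in_range check is a hypothetical example, not from the original text):

```c
/* Hypothetical buggy range check: '<' was written where '<=' was intended */
int in_range(int n) {
    return n >= 0 && n < 5000;    /* bug: should be n <= 5000 */
}
```

The boundary test case 5000 (which should be reported as valid) fails against this code and reveals the bug, while a non-boundary representative such as 500 would not.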

Important steps in the black-box test suite design approach:
1. Examine the input and output values of the program.
2. Identify the equivalence classes.
3. Design equivalence class test cases by picking one representative value from each equivalence
class.
4. Design the boundary value test cases as follows. Examine if any equivalence class is a range of
values. Include the values at the boundaries of such equivalence classes in the test suite.

WHITE-BOX TESTING

White-box testing is an important type of unit testing. A large number of white-box testing strategies
exist. A white-box testing strategy can either be (i) coverage-based or (ii) fault-based.
Coverage-based testing
A coverage-based testing strategy attempts to execute (or cover) certain elements of a program.
Popular examples of coverage-based testing strategies are statement coverage, branch coverage,
multiple condition coverage, and path coverage-based testing.


Fault-based testing
A fault-based testing strategy targets to detect certain types of faults. These faults that a test strategy
focuses on constitute the fault model of the strategy. An example of a fault-based strategy is
mutation testing (testers change specific components of an application's source code to ensure a
software test suite will be able to detect the changes).

Stronger versus weaker testing

A large number of white-box testing strategies have been proposed. It therefore becomes
necessary to compare the effectiveness of different testing strategies in detecting faults. We can
compare two testing strategies by determining whether one is stronger, weaker, or
complementary to the other.

 Stronger testing strategy covers all program elements covered by the weaker testing strategy, and
the stronger strategy additionally covers at least one program element that is not covered by the
weaker strategy.
 If a stronger testing has been performed, then a weaker testing need not be carried out.

COVERAGE-BASED TESTING
Statement Coverage: The statement coverage strategy aims to design test cases so as to execute
every statement in a program at least once.
Example Design statement coverage-based test suite for the following Euclid’s GCD computation
program:
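
The program itself is omitted from these notes; it is presumably the classic subtraction-based Euclid's algorithm, sketched below:

```c
/* Sketch of the subtraction-based Euclid's GCD computation */
int gcd(int x, int y) {
    while (x != y) {          /* made false at entry by (3,3), true by the others */
        if (x > y)
            x = x - y;        /* true branch: exercised by (x = 4, y = 3) */
        else
            y = y - x;        /* false branch: exercised by (x = 3, y = 4) */
    }
    return x;
}
```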

Answer: To design the test cases for the statement coverage, the conditional expression of the while
statement needs to be made true and the conditional expression of the if statement needs to be made
both true and false. By choosing the test set {(x = 3, y = 3), (x = 4, y = 3), (x = 3, y =4)}, all statements of
the program would be executed at least once.

Branch Coverage: A test suite satisfies branch coverage, if it makes each branch condition
in the program to assume true and false values in turn.
Example 2: Design branch coverage -based test suite for the following Euclid’s GCD computation
program
Answer: The test suite {(x = 3, y = 3), (x = 3, y = 2), (x = 4, y = 3), (x =3, y = 4)} achieves branch coverage.

Note: Branch coverage-based testing is stronger than statement coverage-based testing.

Multiple Condition Coverage: In the multiple condition (MC) coverage-based testing, test cases are
designed to make each component of a composite conditional expression to assume both true and
false values. For example, consider the composite conditional expression ((c1 .and.c2 ).or.c3). A test
suite would achieve MC coverage, if all the component conditions c1, c2 and c3 are each made to
assume both true and false values.
Consider the following C program segment:
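
The segment is omitted from these notes; based on the discussion that follows, it was presumably of the following form (warningOn, the thresholds, and the wrapper function are a hypothetical reconstruction, not the verbatim original):

```c
/* Hypothetical reconstruction of the buggy segment discussed below */
int warningOn(int temperature) {
    if (temperature > 150 || temperature > 50)   /* bug: should be temperature < 50 */
        return 1;                                /* i.e., setWarningLightOn() */
    return 0;
}
```

With branch coverage alone, c2 (temperature > 50) is never made true while c1 is false; MC coverage forces a value such as 100, which exposes the wrong behaviour.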


The program segment has a bug in the second component condition: it should have been
temperature<50. The test suite {temperature=160, temperature=40} achieves branch
coverage, but it is not able to detect that setWarningLightOn(); should not be called for
temperature values between 50 and 150.

Path Coverage: A test suite achieves path coverage if it executes each linearly independent path (or
basis path) at least once. A linearly independent path can be defined in terms of the control flow
graph (CFG) of a program.
Control flow graph (CFG): A control flow graph describes the sequence in which the different
instructions of a program get executed. We can define a CFG as follows:
A CFG is a directed graph consisting of a set of nodes and edges (N, E), such that each node n ∈ N
corresponds to a unique program statement, and an edge exists between two nodes if control can
transfer from one node to the other.

McCabe’s Cyclomatic Complexity Metric:


 Cyclomatic complexity Metric is the quantitative measure of the number of linearly
independent paths in it.
 Cyclomatic complexity of a program is a measure of the psychological complexity or the
level of difficulty in understanding the program.
 It is a software metric used to indicate the complexity of a program.
 It is computed using the Control Flow Graph of the program.
 The nodes in the graph indicate the smallest group of commands of a program, and a
directed edge connects two nodes if the second command might immediately follow
the first command.
 McCabe’s cyclomatic complexity defines an upper bound on the number of independent paths
in a program.
For example, if the source code contains no control flow statement, then its cyclomatic complexity will
be 1, since the source code contains a single path in it.


Similarly, if the source code contains one if condition, then the cyclomatic complexity will be 2,
because there will be two paths: one for true and the other for false.

There are three different ways to compute the cyclomatic complexity.

Method 1: Given a control flow graph G of a program, the cyclomatic complexity V(G) can be
computed as:
V(G) = E – N + 2
Where, N is the number of nodes of the control flow graph and E is the number of edges in the control
flow graph.
For the CFG of example shown in the above Figure E = 7 and N = 6. Therefore,
the value of the Cyclomatic complexity = 7 – 6 + 2 = 3.

Method 2: An alternate way of computing the cyclomatic complexity of a program is based on a visual
inspection of the control flow graph. In this method, the cyclomatic complexity V(G) for a graph G is
given by the following expression:
V(G) = Total number of non-overlapping bounded areas + 1
From a visual examination of the CFG, the number of bounded areas is 2. Therefore the cyclomatic
complexity computed with this method is also 2 + 1 = 3.
Method 3: The cyclomatic complexity of a program can also be easily computed by computing the
number of decision and loop statements of the program. If N is the number of decision and loop
statements of a program, then the McCabe’s metric is equal to N + 1.
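
Method 3 can be illustrated with a small example (the function classify is invented for this sketch):

```c
/* Hypothetical example for method 3: count decision and loop statements */
int classify(int n) {
    if (n < 0)                /* decision statement 1 */
        return -1;
    while (n > 9)             /* loop statement 2 */
        n /= 10;              /* reduce n to its leading digit */
    return n;                 /* N = 2, so V(G) = 2 + 1 = 3 */
}
```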

FAULT-BASED TESTING
Mutation Testing: Mutation test cases are designed to help detect specific types of faults in a
program. The idea behind mutation testing is to make a few arbitrary changes to a program at a time.
Each time the program is changed, it is called a mutated program and the change effected is called a
mutant.

Mutated program is tested against the original test suite of the program. If there exists at least one
test case in the test suite for which a mutated program yields an incorrect result, then the mutant is
said to be dead, since the error introduced by the mutation operator has successfully been detected
by the test suite.
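
A minimal sketch of a mutant and how a test suite kills it (the function add and the '+' to '-' mutation are illustrative assumptions):

```c
/* Original program */
int add(int a, int b)        { return a + b; }

/* Mutated program: the change replaced '+' with '-' */
int add_mutant(int a, int b) { return a - b; }
```

Any test case such as add(2, 3) with expected result 5 yields an incorrect result on the mutant, so the mutant is said to be dead.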

DEBUGGING

After a failure has been detected, it is necessary to first identify the program statement(s) that are in
error and are responsible for the failure, the error can then be fixed.

Debugging Approaches

Brute force method: This is the most common method of debugging but is the least efficient method.
In this approach, print statements are inserted throughout the program to print the intermediate
values with the hope that some of the printed values will help to identify the statement in error.

Backtracking: This is also a fairly common approach. In this approach, starting from the statement at
which an error symptom has been observed, the source code is traced backwards until the error is
discovered. Unfortunately, as the number of source lines to be traced back increases, the number of


potential backward paths increases and may become unmanageably large for complex programs,
limiting the use of this approach.
Cause elimination method: In this approach, once a failure is observed, the symptoms of the failure
(i.e., a certain variable having a negative value though it should be positive, etc.) are noted. Based on
the failure symptoms, a list of causes that could possibly have contributed to the symptom is
developed, and tests are conducted to eliminate each.
Program slicing: This technique is similar to backtracking. In the backtracking approach, one often has
to examine a large number of statements. However, the search space is reduced by defining slices.

PROGRAM ANALYSIS TOOLS

A program analysis tool usually is an automated tool that takes either the source code or the
executable code of a program as input and produces reports regarding several important
characteristics of the program, such as its size, complexity, adequacy of commenting, adherence to
programming standards, adequacy of testing, etc. We can classify various program analysis tools into
the following two broad categories:
1. Static analysis tools
2. Dynamic analysis tools
Static Analysis Tools:
Static program analysis tools assess and compute various characteristics of a program without
executing it.
Typically, static analysis tools analyse the source code to compute certain metrics characterising the
source code (such as size, cyclomatic complexity, etc.) and also report certain analytical conclusions.
These also check the conformance of the code with the prescribed coding standards. In this context, it
displays the following analysis results:
 To what extent have the coding standards been adhered to?
 Do certain programming errors exist, such as uninitialised variables, mismatch between actual
and formal parameters, variables that are declared but never used, etc.?
 A list of all such errors is displayed.

Dynamic Analysis Tools


Dynamic program analysis tools can be used to evaluate several program characteristics based on an
analysis of the run time behaviour of a program. These tools usually record and analyse the actual
behaviour of a program while it is being executed.
For example, the dynamic analysis tool can report the statement, branch, and path coverage achieved
by a test suite. If the coverage achieved is not satisfactory more test cases can be designed, added to
the test suite, and run. Further, dynamic analysis results can help eliminate redundant test cases from
a test suite.

SYSTEM TESTING
System tests are designed to validate a fully developed system to assure that it meets its
requirements. The test cases are therefore designed solely based on the SRS document.

There are essentially three main kinds of system testing depending on who carries out testing:

1. Alpha Testing: Alpha testing refers to the system testing carried out by the test team within the
developing organisation.


2. Beta Testing: Beta testing is the system testing performed by a select group of friendly customers.
3. Acceptance Testing: Acceptance testing is the system testing performed by the customer to
determine whether to accept the delivery of the system.

Functionality and Performance test cases

The system test cases can be classified into functionality and performance test cases. The
functionality tests are designed to check whether the software satisfies the functional requirements as
documented in the SRS document. The performance tests, on the other hand, test the conformance of
the system with the non-functional requirements of the system.

Smoke Testing:

Before a fully integrated system is accepted for system testing, smoke testing is performed. Smoke
testing is done to check whether at least the main functionalities of the software are working properly.
Unless the software is stable and at least the main functionalities are working satisfactorily, system
testing is not undertaken.

For smoke testing, a few test cases are designed to check whether the basic functionalities are
working. For example, for a library automation system, the smoke tests may check whether books can
be created and deleted, whether member records can be created and deleted, and whether books can
be loaned and returned.

Performance Testing

1. Performance testing is carried out to check whether the system meets the nonfunctional
requirements identified in the SRS document. There are several types of performance testing
corresponding to various types of non-functional requirements. All performance tests can be
considered as black-box tests.
2. Stress testing: Stress testing is also known as endurance testing. Stress testing evaluates system
performance when it is stressed for short periods of time.
For example, suppose an operating system is supposed to support fifteen concurrent transactions,
then the system is stressed by attempting to initiate fifteen or more transactions simultaneously.
3. Volume testing: Volume testing checks whether the data structures (buffers, arrays, queues,
stacks, etc.) have been designed to successfully handle extraordinary situations.
4. Configuration testing: Configuration testing is used to test system behaviour in various hardware
and software configurations specified in the requirements.
5. Compatibility testing: This type of testing is required when the system interfaces with external
systems (e.g., databases, servers, etc.). Compatibility aims to check whether the interfaces with
the external systems are performing as required.
6. Regression testing: This type of testing is required when a software is maintained to fix some bugs
or enhance functionality, performance, etc.
7. Recovery testing: Recovery testing tests the response of the system to the presence of faults, or
loss of power, devices, services, data, etc.
8. Maintenance testing: This addresses testing the diagnostic programs, and other procedures that
are required to help maintenance of the system.
9. Documentation testing: It is checked whether the required user manual, maintenance manuals,
and technical manuals exist and are consistent.

Designed by Dr. Penchal, NECN for JNTUA-R19


(19A05404T)- CODING AND TESTING (Unit-4)

10. Usability testing: Usability testing checks whether the user interface meets all user requirements.
During usability testing, the display screens, messages, report formats, and other aspects of the
user interface are tested.
11. Security testing: Security testing is essential for software that handles or processes confidential
data that must be guarded against theft.
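Of the performance tests listed above, stress testing lends itself to a concrete sketch. The example below assumes a hypothetical TransactionServer rated for fifteen concurrent transactions; it is stressed with twenty, and we check that the excess five are rejected gracefully rather than crashing the system. All names here are assumptions for the example, not a real API.

```python
import threading

class TransactionServer:
    CAPACITY = 15  # rated number of concurrent transactions

    def __init__(self):
        self._slots = threading.BoundedSemaphore(self.CAPACITY)
        self.rejected = 0

    def begin_transaction(self):
        # Non-blocking acquire: when stressed past capacity, the
        # server refuses the transaction instead of failing.
        if self._slots.acquire(blocking=False):
            return True
        self.rejected += 1
        return False


def stress_test(server, attempts):
    """Initiate `attempts` transactions; report (accepted, rejected)."""
    accepted = sum(server.begin_transaction() for _ in range(attempts))
    return accepted, server.rejected


accepted, rejected = stress_test(TransactionServer(), 20)
# accepted == 15, rejected == 5: the system degrades gracefully under stress.
```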

Regression Testing

Regression Testing is the process of testing the modified parts of the code and the parts that might get
affected due to the modifications to ensure that no new errors have been introduced in the software
after the modifications have been made.
When to do regression testing?
1. When a new functionality is added to the system and the code has been modified to absorb and
integrate that functionality with the existing code.
2. When some defect has been identified in the software and the code is debugged to fix it.
3. When the code is modified to optimize its working.
Process of Regression testing:
1. Firstly, whenever we make changes to the source code for any reason, such as adding new
functionality or optimization, the program, when executed, may fail some test cases in the
previously designed test suite.
2. After the failure, the source code is debugged in order to identify the bugs in the program.
3. After identification of the bugs in the source code, appropriate modifications are made.
4. Then appropriate test cases are selected from the already existing test suite which covers all the
modified and affected parts of the source code.
5. New test cases can be added if required. Finally, regression testing is performed using the
selected test cases.
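Step 4 of this process, selecting from the existing suite the test cases that cover the modified and affected parts of the code, can be sketched as below. The coverage map, function names, and test names are all assumptions for the example.

```python
# Regression test selection: given a mapping from each test case to
# the functions it covers, pick only the tests touching modified code.

def select_regression_tests(coverage_map, modified_funcs):
    """Return the tests that exercise any modified function."""
    return sorted(test for test, funcs in coverage_map.items()
                  if funcs & modified_funcs)


coverage_map = {
    "test_login":    {"authenticate", "hash_password"},
    "test_checkout": {"compute_total", "apply_discount"},
    "test_report":   {"compute_total", "render_pdf"},
}

# Suppose compute_total was debugged to fix a defect; only the two
# tests that touch it need to be rerun:
selected = select_regression_tests(coverage_map, {"compute_total"})
# selected == ["test_checkout", "test_report"]
```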

TESTING OBJECT-ORIENTED PROGRAMS


UNIT TESTING

 Traditional techniques are considered not satisfactory for testing object-oriented programs.
 Adequate testing of individual methods does not ensure that a class has been satisfactorily tested.
 An object is the basic unit of testing of object-oriented programs.

Grey-Box Testing of Object-oriented Programs

1. Model-based testing is important for object-oriented programs.


2. For object-oriented programs, several types of test cases can be designed based on the design
models of object-oriented programs. These are called the grey-box test cases.
3. The following are some important types of grey-box testing that can be carried on based on UML
models:
State-model-based testing
 State coverage: Each method of an object is tested at each state of the object.
 State transition coverage: It is tested whether all transitions depicted in the state model
work satisfactorily.
 State transition path coverage: All transition paths in the state model are tested.
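State transition coverage can be made concrete with a small sketch. It assumes a hypothetical BookCopy object whose state model has two states (AVAILABLE, ISSUED) and two transitions (issue, return); every depicted transition is exercised, and one invalid transition is checked to be rejected.

```python
# State-model-based testing: exercise each transition of the object's
# state model and verify the resulting state.

class BookCopy:
    def __init__(self):
        self.state = "AVAILABLE"

    def issue(self):
        if self.state != "AVAILABLE":
            raise RuntimeError("can only issue an AVAILABLE copy")
        self.state = "ISSUED"

    def return_copy(self):
        if self.state != "ISSUED":
            raise RuntimeError("can only return an ISSUED copy")
        self.state = "AVAILABLE"


def test_state_transitions():
    copy = BookCopy()
    copy.issue()                      # AVAILABLE -> ISSUED
    assert copy.state == "ISSUED"
    copy.return_copy()                # ISSUED -> AVAILABLE
    assert copy.state == "AVAILABLE"
    try:                              # invalid: return when AVAILABLE
        copy.return_copy()
        raise AssertionError("invalid transition was not rejected")
    except RuntimeError:
        pass
    return "all transitions covered"
```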


Use case-based testing:

 Scenario coverage: Each use case typically consists of a mainline scenario and several
alternate scenarios. For each use case, the mainline and all alternate sequences are tested to
check if any errors show up.
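Scenario coverage can be sketched for a hypothetical "withdraw cash" use case: one test drives the mainline scenario, and one test each drives the two alternate scenarios, so every scenario is covered. The function and outcome names are assumptions for the example.

```python
# Use-case-based testing: cover the mainline and all alternate
# scenarios of one use case.

def withdraw(balance, pin_ok, amount):
    """Run the use case; return (new_balance, outcome)."""
    if not pin_ok:
        return balance, "invalid-pin"          # alternate scenario 1
    if amount > balance:
        return balance, "insufficient-funds"   # alternate scenario 2
    return balance - amount, "cash-dispensed"  # mainline scenario


def test_scenarios():
    assert withdraw(100, True, 40) == (60, "cash-dispensed")
    assert withdraw(100, False, 40) == (100, "invalid-pin")
    assert withdraw(100, True, 400) == (100, "insufficient-funds")
    return "all scenarios covered"
```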

Class diagram-based testing

 Testing derived classes: All derived classes of the base class have to be instantiated and
tested. In addition to testing the new methods defined in the derived class, the inherited
methods must be retested.
 Association testing: All association relations are tested.
 Aggregation testing: Various aggregate objects are created and tested.
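The "testing derived classes" idea can be sketched as follows: the interest() method is tested on the base class and then retested on every derived class, including the one that merely inherits it. The Account hierarchy and the integer interest rates are assumptions for the example.

```python
# Class-diagram-based testing: retest inherited methods on every
# derived class, not only on the base class.

class Account:
    def __init__(self, balance):
        self.balance = balance

    def interest(self):
        return self.balance * 4 // 100    # 4% base rate


class SavingsAccount(Account):
    def interest(self):                   # overridden in the derived class
        return self.balance * 6 // 100    # 6% rate


class CurrentAccount(Account):
    pass                                  # inherits interest() unchanged


def test_hierarchy():
    assert Account(1000).interest() == 40          # base class
    assert SavingsAccount(1000).interest() == 60   # new method tested
    assert CurrentAccount(1000).interest() == 40   # inherited method retested
    return "hierarchy tested"
```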

INTEGRATION TESTING

There are two main approaches to integration testing of object-oriented programs:


• Thread-based
• Use-based
Thread-based approach: In this approach, all classes that need to collaborate to realise the behaviour
of a single use case are integrated and tested.

Use-based approach: Use-based integration begins by testing classes that either need no service from
other classes or need services from at most a few other classes. After these classes have been
integrated and tested, classes that use the services from the already integrated classes are integrated
and tested. This is continued till all the classes have been integrated and tested.
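The use-based integration order described above can be derived mechanically: classes needing no services are integrated first, then classes whose services are all already integrated, and so on, which is effectively a topological sort of the "uses" relation. The class names below are illustrative assumptions.

```python
# Use-based integration: compute the order in which classes are
# integrated and tested from their "uses" dependencies.

def use_based_order(uses):
    """uses maps each class name to the set of classes it needs."""
    order, integrated = [], set()
    while len(order) < len(uses):
        ready = sorted(c for c, deps in uses.items()
                       if c not in integrated and deps <= integrated)
        if not ready:
            raise ValueError("cyclic 'uses' relation")
        order.extend(ready)
        integrated.update(ready)
    return order


uses = {
    "Logger":     set(),                  # needs no services from others
    "Database":   {"Logger"},
    "Catalog":    {"Database", "Logger"},
    "CheckoutUI": {"Catalog"},
}
order = use_based_order(uses)
# order == ["Logger", "Database", "Catalog", "CheckoutUI"]
```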
