
EASWARI ENGINEERING COLLEGE

(AUTONOMOUS)

DEPARTMENT OF ARTIFICIAL INTELLIGENCE


AND DATA SCIENCE

191AIE504T - SOFTWARE ENGINEERING


QUESTION BANK

III YEAR – B.TECH (AI&DS)


July 2023 to October 2023

PREPARED BY: Mrs. K. P. Revathi

APPROVED BY: HOD
UNIT I
INTRODUCTION

PART - A

1. What is software engineering?


Software engineering is a discipline in which theories, methods and tools are applied to develop professional software.

2.Define software engineering.


Software engineering: (1) The application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; that is, the application of engineering to software. (2) The study of approaches as in (1).

3. What is Software?
Software is a collection of computer programs and related documents that are intended to provide desired features, functionalities and better performance.

4. What are the characteristics of the software?


* Software is engineered, not manufactured.
* Software does not wear out.
* Most software is custom built rather than being assembled from components.

5. What are the various categories of software?


* System software
* Application software
* Engineering/Scientific software
* Embedded software

6. What are the challenges in software?


* Coping with legacy systems.
* Heterogeneity challenge
* Delivery times challenge.

7. Define software process.


Software process is defined as the structured set of activities that are required to develop the software system.

8. What are the fundamental activities of a software process?


* Specification
* Design and implementation
* Validation
* Evolution
9. What are prescriptive and specialized process models?

Prescriptive process models:


The waterfall model, the incremental process model, the evolutionary process model, and the spiral model
Specialized process models:
Component-based development, The formal methods model and Aspect-oriented software development

10.What are the umbrella activities of a software process?


* Software project tracking and control.
* Risk management.
* Software Quality Assurance.
* Formal Technical Reviews.
* Software Configuration Management.
* Work product preparation and production.
* Reusability management.
* Measurement.

11. What are the merits of incremental model?


i) The incremental model can be adopted when only a small number of people are involved in the project.
ii) Technical risks can be managed with each increment.
iii) Within a short time span, at least the core product can be delivered to the customer.
12. List the task regions in the Spiral model.
* Customer communication – tasks required to establish effective communication between developer and customer.
* Planning – All planning activities are carried out
* Risk analysis – The tasks required to calculate technical and management risks.
* Engineering – tasks required to build one or more representations of applications
* Construct and release – tasks required to construct, test, install the applications
* Customer evaluation - tasks are performed and implemented at installation stage based on the customer evaluation.

13. What are the drawbacks of spiral model?


i) It is based on customer communication. If the communication is not proper, then the software product that gets developed will not be up to the mark.
ii) It demands considerable risk assessment. Only if the risk assessment is done properly can a successful product be obtained.

14. Name the Evolutionary process Models.


i. Incremental model
ii. Spiral model
iii. WIN-WIN spiral model
iv. Concurrent Development

15. What is Agile Methodology?


Agile methodology is a practice that promotes continuous iteration of development and testing throughout the software development lifecycle of the project. Both development and testing activities are concurrent, unlike in the Waterfall model.
16. Compare the Agile and Waterfall models of software development.

Agile Model vs. Waterfall Model:

1. Agile: proposes an incremental and iterative approach to software design. Waterfall: development of the software flows sequentially from start point to end point.

2. Agile: the process is broken into individual models that designers work on. Waterfall: the design process is not broken into individual models.

3. Agile: the customer has early and frequent opportunities to look at the product and make decisions and changes to the project. Waterfall: the customer can only see the product at the end of the project.

4. Agile: considered unstructured compared to the waterfall model. Waterfall: more secure because it is so plan-oriented.

5. Agile: small projects can be implemented very quickly; for large projects, it is difficult to estimate the development time. Waterfall: all sorts of projects can be estimated and completed.

6. Agile: errors can be fixed in the middle of the project. Waterfall: the whole product is tested only at the end; if a requirement error is found or any changes have to be made, the project has to start from the beginning.

17. Define Extreme Programming.


Extreme Programming (XP) is an agile software development framework that aims to produce higher quality software, and higher quality
of life for the development team. XP is the most specific of the agile frameworks regarding appropriate engineering practices for software
development.

PART B :
1. Neatly explain all the prescriptive process models and specialized process models. May: 03, 05, 06, 09, 10, 14, 16; Dec: 04, 08, 09, 12, 16
 "Prescriptive" means that the model prescribes a set of process elements—framework activities, software engineering actions, tasks, work products, quality assurance, and change control mechanisms—for each project. Each process model also prescribes a process flow (also called a work flow)—that is, the manner in which the process elements are interrelated to one another.
 The software process model is also known as the Software Development Life Cycle (SDLC) model or software paradigm.
 The various prescriptive process models are:

 The Waterfall Model
 The Incremental Model
 Prototyping
 The RAD Model
 The Spiral Model
 The Concurrent Model

Need for Process Model


Each team member in software product development will understand what the next activity is and how to do it. Thus a process model brings definiteness and discipline to the overall development process.

1. The Waterfall Model

[Figure: The waterfall model: Communication → Planning (estimation) → Modeling → Construction → Deployment (delivery)]

The waterfall model, sometimes called the classic life cycle, is a systematic, sequential approach to
software development that begins with customer specification of requirements and progresses through planning,
modeling, construction, and deployment, culminating in ongoing support of the completed software. A variation
in the representation of the waterfall model is called the V-model. The V-model depicts the relationship of quality assurance actions to the actions associated with communication, modeling, and early construction activities. As the software team moves down the left side of the V, basic problem requirements are refined into progressively more detailed and technical representations of the problem and its solution. Once code has been generated, the team moves up the right side of the V, essentially performing a series of tests that validate each of the models created as the team moved down the left side.
The V-model provides a way of visualizing how verification and validation actions are applied to
earlier engineering work. The waterfall model is the oldest paradigm for software engineering.
Disadvantages:
1. It is difficult to follow the sequential flow in the software development process. If changes are made at some phase, they may cause confusion.
2. The requirement analysis is done initially, and it is sometimes not possible to state all the requirements explicitly at the beginning. This causes difficulty in projects.
3. The customer can see a working model of the project only at the end. If the customer is dissatisfied after reviewing the working model, it causes serious problems.
The waterfall model can serve as a useful process model in situations where requirements are fixed and work is
to proceed to completion in a linear manner.
2. Incremental Process Models
The incremental model delivers series of releases to the customer. These releases are called increments.
 A process model that is designed to produce the software in increments. The incremental model combines elements of linear and parallel process flows. The incremental model applies linear sequences in a staggered fashion as calendar time progresses. Each linear sequence produces deliverable "increments" of the software in a manner that is similar to the increments produced by an evolutionary process flow.
 For example, word-processing software developed using the incremental paradigm might deliver basic
file management, editing, and document production functions in the first increment; more sophisticated
editing and document production capabilities in the second increment; spelling and grammar checking
in the third increment; and advanced page layout capability in the fourth increment. It should be noted
that the process flow for any increment can incorporate the prototyping paradigm. When an
incremental model is used, the first increment is often a core product.
 The incremental process model focuses on the delivery of an operational product with each increment.
Early increments are stripped-down versions of the final product, but they do provide capability that
serves the user and also provide a platform for evaluation by the user. Incremental development is
particularly useful when staffing is unavailable for a complete implementation by the business deadline
that has been established for the project.
 Early increments can be implemented with fewer people. If the core product is well received, then
additional staff (if required) can be added to implement the next increment. In addition, increments can
be planned to manage technical risks.
Advantages
 Incremental development is particularly useful when staffing is unavailable for a complete
implementation by the business deadline that has been established for the project. Early increments can
be implemented with fewer people.
i).RAD Model
RAD model is Rapid Application Development model. It is a type of incremental model. In RAD
model the components or functions are developed in parallel as if they were mini projects. The developments
are time boxed, delivered and then assembled into a working prototype. This can quickly give the customer

something to see and use and to provide feedback regarding the delivery and their requirements.

Business modeling: The information flow is identified between various business functions.
Data modeling: Information gathered from business modeling is used to define data objects that are needed for
the business.
Process modeling: Data objects defined in data modeling are transformed to achieve the information flow necessary to implement a specific business objective. Descriptions are identified and created for CRUD (create, read, update, delete) operations on data objects.
Application generation: Automated tools are used to convert process models into code and the actual system.
Testing and turnover: Test new components and all the interfaces.
Advantages of the RAD model:
 Reduced development time.
 Increases reusability of components
 Quick initial reviews occur
 Encourages customer feedback
 Integration from the very beginning solves a lot of integration issues.

Disadvantages of RAD model:


 Depends on strong team and individual performances for identifying business requirements.
 Only systems that can be modularized can be built using RAD.
 Requires highly skilled developers/designers.
 High dependency on modeling skills
 Inapplicable to cheaper projects as cost of modeling and automated code generation is very high.

When to use RAD model:


 RAD should be used when there is a need to create a system that can be modularized and delivered in 2-3 months' time.
 It should be used if there's high availability of designers for modeling and the budget is high enough to afford their cost along with the cost of automated code generating tools.
 RAD SDLC model should be chosen only if resources with high business knowledge are available and
there is a need to produce the system in a short span of time (2-3 months).
3. Evolutionary Process Models
Evolutionary models are iterative. They are characterized in a manner that enables you to develop increasingly more complete versions of the software. Two common evolutionary process models are prototyping and the spiral model.
i) Prototyping: Prototyping can be used as a stand-alone process model; it is more commonly used as a
technique that can be implemented within the context of any one of the process models. The prototyping
paradigm assists the stakeholders to better understand what is to be built when requirements are fuzzy.

The prototyping paradigm begins with communication. We meet with other stakeholders to define the
overall objectives for the software, identify whatever requirements are known, and outline areas where further
definition is mandatory. Prototyping iteration is planned quickly, and modeling occurs. A quick design focuses
on a representation of those aspects of the software that will be visible to end users (e.g., human interface layout
or output display formats). The quick design leads to the construction of a prototype.

The prototype is deployed and evaluated by stakeholders, who provide feedback that is used to further refine
requirements. Iteration occurs as the prototype is tuned to satisfy the needs of various stakeholders, while at
the same time enabling you to better understand what needs to be done. The prototype serves as a mechanism
for identifying software requirements. If a working prototype is to be built, you can make use of existing
program fragments or apply tools (e.g., report generators and window managers) that enable working programs
to be generated quickly.
Disadvantages:
In the first version itself, the customer often wants a "few fixes" rather than a rebuilding of the system, whereas rebuilding the system would maintain a high level of quality.
Sometimes the developer may make implementation compromises to get the prototype working quickly. Later on, the developer may become comfortable with these compromises and forget why they are inappropriate.
ii) The Spiral Model: The spiral model is an evolutionary software process model that couples the iterative nature of prototyping with the controlled and systematic aspects of the waterfall model. Software is developed in a series of evolutionary releases; during early iterations the release might be a model or prototype, while during later iterations increasingly more complete versions of the engineered system are produced. Anchor point milestones—a combination of work products and conditions that are attained along the path of the spiral—are noted for each evolutionary pass.
The first circuit around the spiral might result in the development of a product specification; each pass
through the planning region results in adjustments to the project plan. Cost and schedule are adjusted based on
feedback derived from the customer after delivery. The project manager adjusts the planned number of iterations
required to complete the software.
Advantages
The spiral model is a realistic approach to the development of large-scale systems and software.
Because software evolves as the process progresses, the developer and customer better understand and react to
risks at each evolutionary level.

Concurrent Models
The concurrent development model, sometimes called concurrent engineering, allows a software team
to represent iterative and concurrent elements of any of the process models. Figure provides a schematic
representation of one software engineering activity.
The activity—modeling—may be in any one of the states noted at any given time. Similarly, other activities, actions, or tasks (e.g., communication or construction) can be represented in an analogous manner. For example, early in a project the communication activity has completed its first iteration and exists in the awaiting changes state.
The modeling activity, which existed in the inactive state while initial communication was being completed, now makes a transition into the under development state. If, however, the customer indicates that changes in requirements must be made, the modeling activity moves from the under development state into the awaiting changes state.
Concurrent modeling defines a series of events that will trigger transitions from state to state for each of the software engineering activities, actions, or tasks. For example, when an inconsistency in the requirements model is uncovered, the event analysis model correction is generated, which will trigger the requirements analysis action to move from the done state into the awaiting changes state.
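
These state transitions can be pictured with a minimal sketch; the state and event names below are illustrative assumptions, since the concurrent model itself does not prescribe any implementation.

from enum import Enum, auto

# A minimal sketch of one activity's state machine in the concurrent model.
class State(Enum):
    INACTIVE = auto()
    UNDER_DEVELOPMENT = auto()
    AWAITING_CHANGES = auto()
    DONE = auto()

class Activity:
    def __init__(self, name):
        self.name = name
        self.state = State.INACTIVE

    def on_event(self, event):
        # Events trigger transitions from state to state for each activity.
        if event == "communication_complete" and self.state is State.INACTIVE:
            self.state = State.UNDER_DEVELOPMENT
        elif event == "requirements_changed":
            self.state = State.AWAITING_CHANGES

modeling = Activity("modeling")
modeling.on_event("communication_complete")   # inactive -> under development
modeling.on_event("requirements_changed")     # -> awaiting changes
print(modeling.state)                         # State.AWAITING_CHANGES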

UNIT II– SOFTWARE REQUIREMENT SPECIFICATION

1. What is requirement engineering?


Requirement engineering is the process of establishing the services that the customer requires from the system and the constraints
under which it operates and is developed.

2. What are the elements of Analysis model?


i. Data Dictionary
ii. Entity Relationship Diagram
iii. Data Flow Diagram
iv. State Transition Diagram
v. Control Specification
vi. Process specification.

3. What are functional requirements?


Functional requirements are "statements of services the system should provide, how the system should react to particular inputs, and how the system should behave in particular situations".

4. What are non functional requirements?


Non-functional requirements are constraints on the services or functions offered by the system, such as timing constraints, constraints on the development process, standards, etc.

5.What is User requirements?


User requirements are statements, in a natural language plus diagrams, of what services the system is expected to
provide and the constraints under which it must operate.
6.What are the system requirements?
System requirements set out the system's functions, services and operational constraints in detail.
The system requirements document should be precise. It should define exactly what is to be implemented. It may be the contract between
the system buyer and the software developer
7.What are the characteristics of SRS?
i. Correct – The SRS should be kept up to date as appropriate requirements are identified.
ii. Unambiguous – Only when the requirements are correctly understood is it possible to write unambiguous software.
iii. Complete – To make the SRS complete, everything the software is required to do should be specified.
iv. Consistent – It should be consistent with reference to the functionalities identified.
v. Specific – The requirements should be mentioned specifically.
vi. Traceable – The need for every mentioned requirement should be traceable to its origin.
8.What are the requirement engineering processes?
● Feasibility studies
● Requirement elicitation and analysis
● Requirements validation
● Requirement management

9. What is the outcome of feasibility study?


The outcome of feasibility study is the results obtained from the following questions:
● Which system contributes to organizational objectives?
● Whether the system can be engineered? Is it within the budget?
● Whether the system can be integrated with other existing system?
10.What is petri nets in structured system analysis?
It is a formal technique for describing concurrent, interrelated activities.
A Petri net consists of four parts:
(1) A set of places
(2) A set of transitions
(3) An input function
(4) An output function
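
As a rough illustration only, the four parts can be written down directly; the names and the single-token arc weights below are simplifying assumptions, not part of the formal definition.

from dataclasses import dataclass

# A minimal sketch of the four parts of a Petri net.
@dataclass
class PetriNet:
    places: set        # (1) a set of places
    transitions: set   # (2) a set of transitions
    inputs: dict       # (3) input function: transition -> set of input places
    outputs: dict      # (4) output function: transition -> set of output places

    def enabled(self, marking, t):
        # A transition is enabled when every input place holds a token.
        return all(marking.get(p, 0) > 0 for p in self.inputs[t])

    def fire(self, marking, t):
        # Fire an enabled transition: consume input tokens, produce output tokens.
        m = dict(marking)
        for p in self.inputs[t]:
            m[p] -= 1
        for p in self.outputs[t]:
            m[p] = m.get(p, 0) + 1
        return m

net = PetriNet({"p1", "p2"}, {"t"}, {"t": {"p1"}}, {"t": {"p2"}})
print(net.fire({"p1": 1}, "t"))   # {'p1': 0, 'p2': 1}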

11.Define Data Dictionary.


The data dictionary can be defined as an organized collection of all the data elements of the system with precise and rigorous
definitions so that user and system analyst will have a common understanding of inputs, outputs, components of stores and
intermediate calculations.
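
For illustration, a single data dictionary entry might be recorded as follows; the fields and values shown are hypothetical, not a prescribed format.

# One hypothetical data dictionary entry; field names are illustrative.
data_dictionary = {
    "ORDER_ID": {
        "description": "Unique identifier allocated to every order",
        "type": "string",
        "where_used": ["place_order", "print_invoice"],  # processes using it
        "validation": "must be unique across all orders",
    },
}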

PART B:
1. Explain functional & non-functional requirements. May: 14,16

Requirement engineering is the process of establishing the services that the customer requires from the system and the constraints
under which it operates and is developed. The requirements themselves are the descriptions of the system services and constraints that are
generated during the requirements engineering process. A requirement can range from a high-level abstract statement of a service or of a
system constraint to a detailed mathematical functional specification.

Types of Requirements

User Requirements | System Requirements | Software Requirements Specification (SRS) | Functional Requirements

1. Functional Requirements 2. Non-Functional Requirements

User requirements


It is a collection of statements in natural language plus description of the service the system provides and its operational
constraints. It is written for customers.

Guidelines for Writing User Requirements – For example:

Consider a spell-checking and correcting system in a word processor. The user requirements can be given in natural language as: the system should possess a standard word dictionary and a user-supplied dictionary. It shall provide a user-activated facility which checks the spelling of words in the document against spellings in the system dictionary and user-supplied dictionaries.
When a word is found in the document which is not given in the dictionary, the system should suggest 10 alternative words. These alternative words should be based on a match between the word found and corresponding words in the dictionaries. When a word is found in the document which is not in any dictionary, the system should propose the following options to the user (a sketch of the suggestion step follows this list):
 Ignore the corresponding instance of the word and go to next sentence.

 Ignore all instances of the word

 Replace the word with a suggested word from the dictionary

 Edit the word with user-supplied text

 Ignore this instance and add the word to a specified dictionary


System Requirement
System requirements are more detailed specifications of system functions, services and constraints than user requirements.
 They are intended to be a basis for designing the system.

 They may be incorporated into the system contract.

 The system requirements can be expressed using system models.

The requirements specify what the system does and the design specifies how it does it.
System requirement should simply describe the external behavior of the system and its operational constraints. They should not be
concerned with how the system should be designed or implemented. For a complex software system design it is necessary to give all the
requirements in detail. Usually, natural language is used to write system requirements specification and user requirements.

Software specification
It is a detailed software description that can serve as a basis for design or implementation. Typically it is written for software
developers.
Functional Requirements
Functional requirements should describe all the required functionality or system services.
The customer should provide a statement of services. It should be clear how the system should react to particular inputs and how a
particular system should behave in particular situations. Functional requirements are heavily dependent upon the type of software, expected
users and the type of system where the software is used.
Functional user requirements may be high-level statements of what the system should do but functional system requirements
should describe the system services in detail.
For example: Consider a library system in which there is a single interface provided to multiple databases. These databases are
collection of articles from different libraries. A user can search for, download and print these articles for a personal study.
From this example we can obtain functional Requirements as-
 The user shall be able to search either all of the initial set of databases or select a subset from it.

 The system shall provide appropriate viewers for the user to read documents in the document store.

 A unique identifier (ORDER_ID) should be allocated to every order. This identifier can be copied by the user to the account's permanent
storage area.
Problems Associated with Requirements
Requirements imprecision

 Problems arise when requirements are not precisely stated.

 Ambiguous requirements may be interpreted in different ways by developers and users.

 Consider the meaning of the term 'appropriate viewers':


User intention - special purpose viewer for each different document type;
Developer interpretation - Provide a text viewer that shows the contents of the document.
Requirements completeness and consistency
The requirements should be both complete and consistent. Complete means they should include descriptions of all facilities
required. Consistent means there should be no conflicts or contradictions in the descriptions of the system facilities. Actually in practice, it is
impossible to produce a complete and consistent requirements document.

Examples of functional requirements


The LIBSYS system:
A library system that provides a single interface to a number of databases of articles in different libraries. Users can search for,
download and print these articles for personal study. The user shall be able to search either all of the initial set of databases or select a subset
from it.
The system shall provide appropriate viewers for the user to read documents in the document store. Every order shall be allocated a unique identifier (ORDER_ID) which the user shall be able to copy to the account's permanent storage area.

Non-Functional requirements
Requirements that are not directly concerned with the specific functions delivered by the system Typically relate to the system as
a whole rather than the individual system features Often could be deciding factor on the survival of the system (e.g. reliability, cost,
response time)

Non-Functional requirements classifications:


Product requirements
These requirements specify that a delivered product should behave in a particular way. Most NFRs are concerned with specifying constraints on the behavior of the executing system.
Specifying product requirements
Some product requirements can be formulated precisely, and thus easily quantified.
• Performance
• Capacity
Others are more difficult to quantify and, consequently, are often stated informally.
• Usability
Process requirements
Process requirements are constraints placed upon the development process of the system.
Process requirements include:

 Requirements on development standards and methods which must be followed

 CASE tools which should be used

 The management reports which must be provided

Examples of process requirements


The development process to be used must be explicitly defined and must be conformant with ISO 9000 standards. The system
must be developed using the XYZ suite of CASE tools.
Management reports setting out the effort expended on each identified system component must be produced every two weeks. A disaster
recovery plan for the system development must be specified.
External requirements
May be placed on both the product and the process. Derived from the environment in which the system is developed.
External requirements are based on:

 application domain information

 organizational considerations

 the need for the system to work with other systems

 health and safety or data protection regulations

 or even basic natural laws such as the laws of physics


Examples of external requirements
Medical data system: the organization's data protection officer must certify that all data is maintained according to data protection legislation before the system is put into operation.
Train protection system: the time required to bring the train to a complete halt is computed using the following function:

Organizational requirements
The requirements which are consequences of organizational policies and procedures come under this category. For instance: process standards used and implementation requirements.

2. Narrate the importance of SRS. Explain the typical SRS structure and its parts. Show the IEEE template of the SRS document. Dec: 05, Nov: 12, May: 16

An SRS is basically an organization's understanding (in writing) of a customer or potential client's system requirements and
dependencies at a particular point in time (usually) prior to any actual design or development work. It's a two-way insurance policy that
assures that both the client and the organization understand the other's requirements from that perspective at a given point in time.
The SRS document itself states in precise and explicit language those functions and capabilities a software system (i.e., a
software application, an eCommerce Web site, and so on) must provide, as well as states any required constraints by which the system must
abide. The SRS also functions as a blueprint
for completing a project with as little cost growth as possible. The SRS is often referred to as the "parent" document because all subsequent
project management documents, such as design specifications, statements of work, software architecture specifications, testing and
validation plans, and documentation plans, are related to it.
It's important to note that an SRS contains functional and nonfunctional requirements only; it doesn't offer design suggestions, possible solutions to technology or business issues, or any other information other than what the development team understands the customer's system requirements to be.
A well-designed, well-written SRS accomplishes four major goals:
It provides feedback to the customer. An SRS is the customer's assurance that the development organization understands the
issues or problems to be solved and the software behavior necessary to address those problems. Therefore, the SRS should be written in
natural language (versus a formal language, explained later in this article), in an unambiguous manner that may also include charts, tables,
data flow diagrams, decision tables, and so on.
It decomposes the problem into component parts. The simple act of writing down software requirements in a well-designed
format organizes information, places borders around the problem, solidifies ideas, and helps break down the problem into its component
parts in an orderly fashion.
It serves as an input to the design specification. As mentioned previously, the SRS serves as the parent document to subsequent
documents, such as the software design specification and statement of work. Therefore, the SRS must contain sufficient detail in the
functional system requirements so that a design solution can be devised.
It serves as a product validation check. The SRS also serves as the parent document for testing and validation strategies that will
be applied to the requirements for verification.
SRSs are typically developed during the first stages of "Requirements Development," which is the initial product development
phase in which information is gathered about which requirements are needed and which are not. This information-gathering stage can include onsite
visits, questionnaires, surveys, interviews, and perhaps a return-on-investment (ROI) analysis or needs analysis of the customer or client's
current business environment. The actual specification, then, is written after the requirements have been gathered and analyzed.
SRS development process can offer several benefits:
Technical writers are skilled information gatherers, ideal for eliciting and articulating customer requirements. The presence of a
technical writer on the requirements-gathering team helps balance the type and amount of information extracted from customers, which can
help improve the SRS.
Technical writers can better assess and plan documentation projects and better meet customer document needs. Working on SRSs
provides technical writers with an opportunity for learning about customer needs firsthand--early in the product development process.
Technical writers know how to determine the questions that are of concern to the user or customer regarding ease of use and
usability. Technical writers can then take that knowledge and apply it not only to the specification and documentation development, but also
to user interface development, to help ensure the UI (User Interface) models the customer requirements.
Technical writers involved early and often in the process, can become an information resource throughout the process, rather than
an information gatherer at the end of the process.
The IEEE has identified nine topics that must be addressed when designing and writing an SRS:
 Interfaces

 Functional Capabilities

 Performance Levels

 Data Structures/Elements

 Safety

 Reliability

 Security/Privacy

 Quality

 Constraints and Limitations


An SRS document typically includes four ingredients:
 A template

 A method for identifying requirements and linking sources

 Business operation rules

 A traceability matrix
UNIT – III SOFTWARE DESIGN

1. What are the elements of design model?


i. Data design
ii. Architectural design
iii. Interface design
iv. Component-level design

2. Define design process.


Design process is a sequence of steps carried through which the requirements are translated into a system or software model.

3. List the principles of a software design.


i. The design process should not suffer from “tunnel vision”
ii. The design should be traceable to the analysis model.
iii. The design should exhibit uniformity and integration.
iv. Design is not coding.
v. The design should not reinvent the wheel.

4. What is the benefit of modular design?


Changes made during testing and maintenance becomes manageable and they do not affect other modules.

5. What is a cohesive module?


A cohesive module performs only “one task” in software procedure with little interaction with other modules. In other words
cohesive module performs only one thing.

6. What are the different types of Cohesion?


i. Coincidentally cohesive - The modules in which the set of tasks are related with each other only loosely.
ii. Logically cohesive – A module that performs the tasks that are logically related with each other.
iii. Temporal cohesion – The module in which the tasks need to be executed in some specific time span.
iv. Procedural cohesion – When processing elements of a module are related with one another and must be executed in some
specific order.
v. Communicational cohesion – When the processing elements of a module share the data then such module is called
communicational cohesive.

7. What is coupling?
Coupling is the measure of interconnection among modules in a program structure. It depends on the interface complexity
between modules.

8. What are the various types of coupling?


i. Data coupling – The data coupling is possible by parameter passing or data interaction.
ii. Control coupling – The modules share related control data in control coupling.
iii. Common coupling – The common data or a global data is shared among modules.
iv. Content coupling – Content coupling occurs when one module makes use of data or control information maintained in another
module.

9. What are the common activities in design process?


i. System structuring – The system is subdivided into principle subsystems components and communications between these
subsystems are identified.
ii. Control modeling – A model of control relationships between different parts of the system is established.
iii. Modular decomposition – The identified subsystems are decomposed into modules.

10. What are the benefits of horizontal partitioning?


i. Software that is easy to test.
ii. Software that is easier to maintain.
iii. Propagation of fewer side effects.
iv. Software that is easier to extend.

11. What is vertical partitioning?


Vertical partitioning, often called factoring, suggests that the control and work should be distributed top-down in the program structure.

12. What are the advantages of vertical partitioning?


i. Changes are easier to make and maintain.
ii. They reduce the change impact and error propagation.

13. What are the various elements of data design?


i. Data object – The data objects are identified and relationship among various data objects can be represented using ERD or data
dictionaries.
ii. Databases – Using software design model, the data models are translated into data structures and data bases at the application
level.
iii. Data warehouses – At the business level useful information is identified from various databases and the data warehouses are
created.

14. Name the commonly used architectural styles.


i. Data centered architecture.
ii. Data flow architecture.
iii. Call and return architecture.
iv. Object-oriented architecture.
v. Layered architecture.

15. What is Transform mapping?


The transform mapping is a set of design steps applied on the DFD in order to map the transformed flow characteristics into
specific architectural style.

16. What is an Architectural design?

The architectural design defines the relationship between major structural elements of the software, the “design patterns” that can
be used to achieve the requirements that have been defined for the system.

17. What is data design?


The data design transforms the information domain model created during analysis into the data structures that will be required to
implement the software.

18.What is User interface design?


User interface design creates an effective communication medium between a human and a computer

PART B:
1. Explain about the various design process & design concepts considered during design. May:
03.06,07,08, Dec: 05

Software design is an iterative process through which requirements are translated into a "blueprint" for constructing the software. Initially, the blueprint depicts a holistic view of the software. That is, the design is represented at a high level of abstraction, a level that can be directly traced to the specific system objective and to more detailed data, functional, and behavioral requirements.
Three characteristics that serve as a guide for the evaluation of a good design:
• The design must implement all of the explicit requirements contained in the requirements model, and it must
accommodate all of the implicit requirements desired by stakeholders.
• The design must be a readable, understandable guide for those who generate code and for those who test and
subsequently support the software.
• The design should provide a complete picture of the software, addressing the data, functional, and behavioral
domains from an implementation perspective.

Some guidelines:
1. A design should exhibit an architecture that (1) has been created using recognizable architectural styles or
patterns, (2) is composed of components that exhibit good design characteristics and (3) can be implemented in
an evolutionary fashion, thereby facilitating implementation and testing.
2. A design should be modular; that is, the software should be logically partitioned into elements or
subsystems.
3. A design should contain distinct representations of data, architecture, interfaces, and components.
4. A design should lead to data structures that are appropriate for the classes to be implemented and are drawn
from recognizable data patterns.
5. A design should lead to components that exhibit independent functional characteristics.
6. A design should lead to interfaces that reduce the complexity of connections between components and with
the external environment.
7. A design should be derived using a repeatable method that is driven by information obtained during
software requirements analysis.
8. A design should be represented using a notation that effectively communicates its meaning.

Quality Attributes
• Usability is assessed by considering human factors, overall aesthetics, consistency, and documentation.

• Reliability is evaluated by measuring the frequency and severity of failure, the accuracy of output results, the
mean-time-to-failure (MTTF), the ability to recover from failure, and the predictability of the program.
• Performance is measured by considering processing speed, response time, resource consumption,
throughput, and efficiency.
• Supportability combines the ability to extend the program (extensibility), adaptability, and serviceability—these three attributes represent a more common term, maintainability—and in addition testability, compatibility, and configurability.
Design Concepts
The software design concept provides a framework for implementing the right software. Following are
certain issues that are considered while designing the software:
i) Abstraction means the ability to cope with complexity. At the highest level of abstraction, a solution is stated in broad terms using the language of the problem environment. At lower levels of abstraction, a more detailed description of the solution is provided. A procedural abstraction refers to a sequence of instructions that have a specific and limited function.
An example of a procedural abstraction would be the word open for a door. Open implies a long sequence of
procedural steps (e.g., walk to the door, reach out and grasp knob, turn knob and pull door, step away from
moving door, etc.) A data abstraction is a named collection of data that describes a data object. In the context of
the procedural abstraction open, we can define a data abstraction called door. Like any data object, the data
abstraction for door would encompass a set of attributes that describe the door (e.g., door type, swing direction,
opening mechanism, weight, dimensions). It follows that the procedural abstraction open would make use of
information contained in the attributes of the data abstraction door.
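
A minimal sketch of this door example, with illustrative attribute names: the data abstraction (Door) names a collection of attributes, and the procedural abstraction (open_door) names a sequence of steps whose details stay hidden from the caller.

from dataclasses import dataclass

# Data abstraction: a named collection of data describing the door.
@dataclass
class Door:
    door_type: str
    swing_direction: str
    opening_mechanism: str
    weight: float
    is_open: bool = False

# Procedural abstraction: "open" names a specific, limited function whose
# detailed steps (grasp knob, turn, pull, step away) are hidden.
def open_door(door):
    door.is_open = True

front = Door("hinged", "inward", "knob", 25.0)
open_door(front)
print(front.is_open)   # True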
ii) Software architecture means "the overall structure of the software and the ways in which that structure provides conceptual integrity for a system". Architecture is the structure or organization of program components
(modules), the manner in which these components interact, and the structure of data that are used by the
components. The architectural design can be represented using one or more of a number of different models
Structural models represent architecture as an organized collection of program components.
Framework models increase the level of design abstraction by attempting to identify repeatable architectural
design frameworks that are encountered in similar types of applications.
Dynamic models address the behavioral aspects of the program architecture, indicating how the structure or
system configuration may change as a function of external events.

iii) Information Hiding: The principle of information hiding suggests that modules be "characterized by design decisions that (each) hides from all others." In other words, modules should be specified and designed so that information (algorithms and data) contained within a module is inaccessible to other modules
that have no need for such information.
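
A minimal sketch, using Python's underscore convention for internal names: clients use only push and pop; the list-based representation is a hidden design decision that can change without affecting them.

# Information hiding: _items is internal; clients see only the operations.
class Stack:
    def __init__(self):
        self._items = []          # hidden design decision: a list

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()  # callers need not know how storage works

s = Stack()
s.push(1)
s.push(2)
print(s.pop())   # 2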
iv) The concept of functional independence is a direct outgrowth of separation of concerns, modularity, and
the concepts of abstraction and information hiding. Independence is assessed using two qualitative criteria:
cohesion and coupling. Cohesion is an indication of the relative functional strength of a module. Coupling is an
indication of the relative interdependence among modules. Cohesion is a natural extension of the information-hiding concept. A cohesive module performs a single task, requiring little interaction with other components in
other parts of a program. Coupling is an indication of interconnection among modules in a software structure.
Coupling depends on the interface complexity between modules, the point at which entry or reference is made to
a module, and what data pass across the interface.
v) Refinement is actually a process of elaboration. Refinement helps to reveal low-level details as design
progresses. Both concepts allow you to create a complete design model as the design evolves.
vi) Refactoring is "the process of changing a software system in such a way that it does not alter the external behavior of the code yet improves its internal structure." When software is refactored, the existing design is
examined for redundancy, unused design elements, inefficient or unnecessary algorithms, poorly constructed or
inappropriate data structures, or any other design failure that can be corrected to yield a better design.
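
A small before/after sketch with illustrative names: the external behavior (the returned total) is identical, but the redundant temporary list and the duplicated loop are removed.

# Before refactoring: an unnecessary intermediate list and two loops.
def total_price(items):
    prices = []
    for item in items:
        prices.append(item["price"] * item["qty"])
    total = 0
    for p in prices:
        total += p
    return total

# After refactoring: same external behavior, simpler internal structure.
def total_price_refactored(items):
    return sum(item["price"] * item["qty"] for item in items)

items = [{"price": 2.0, "qty": 3}, {"price": 5.0, "qty": 1}]
print(total_price(items) == total_price_refactored(items))   # True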
vii) Design classes refine the analysis classes by providing design detail that will enable the classes to be implemented, and they implement a software infrastructure that supports the business solution.
Five different types of design classes, each representing a different layer of the design architecture, can be
developed:
• User interface classes define all abstractions that are necessary for human computer interaction (HCI). In
many cases, HCI occurs within the context of a metaphor (e.g., a checkbook, an order form, a fax machine), and
the design classes for the interface may be visual representations of the elements of the metaphor.
• Business domain classes The classes identify the attributes and services (methods) that are required to
implement some element of the business domain.
• Process classes implement lower-level business abstractions required to fully manage the business domain
classes.
• Persistent classes represent data stores (e.g., a database) that will persist beyond the execution of the
software.
• System classes implement software management and control functions that enable the system to operate and
communicate within its computing environment and with the outside world.

2. Explain in details about architectural styles and design. May: 07, 14

The architecture is not the operational software. Rather, it is a representation that enables a software engineer to:
(1) Analyze the effectiveness of the design in meeting its stated requirements,
(2) Consider architectural alternatives at a stage when making design changes is still relatively easy, and
(3) Reduce the risks associated with the construction of the software.
Architectural model or style is a pattern for creating the system architecture for a given problem. We can use architectural style as a descriptive mechanism to differentiate a house from other styles (e.g., A-frame, raised ranch, Cape Cod). But more important, the architectural style is also a template for construction.
The software that is built for computer-based systems also exhibits one of many architectural styles.
Each style describes a system category that encompasses
(1) a set of components (e.g., a database, computational modules) that perform a function required by a system; (2) a set of connectors that enable "communication, coordination and cooperation" among components; (3) constraints that define how components can be integrated
to form the system; and (4) semantic models that enable a designer to understand the overall properties of a
system. An architectural style is a transformation that is imposed on the design of an entire system.
The intent is to establish a structure for all components of the system. An architectural pattern, like an
architectural style, imposes a transformation on the design of architecture. However, a pattern differs from a
style in a number of fundamental ways: (1) the scope of a pattern is less broad, focusing on one aspect of the
architecture rather than the architecture in its entirety; (2) a pattern imposes a rule on the architecture, describing
how the software will handle some aspect of its functionality at the infrastructure level (e.g., concurrency)

The commonly used architectural styles are

1. Data-centered architectures. A data store (e.g., a file or database) resides at the center of this architecture
and is accessed frequently by other components that update, add, delete, or otherwise modify data within the
store. Figure illustrates a typical data-centered style. Client software accesses a central repository. In some cases
the data repository is passive. That is, client software accesses the data independent of any changes to the data or
the actions of other client software. A variation on this approach transforms the repository into a "blackboard" that sends notifications to client software when data of interest to the client changes. Data-centered architectures promote integrability. That is, existing components can be changed and new client components added to the architecture without concern about other clients (because the client components operate independently). In addition, data can be passed among clients using the blackboard mechanism (i.e., the blackboard component serves to coordinate the transfer of information between clients). Client components independently execute processes.
2. Data-flow architectures. This architecture is applied when input data are to be transformed through a series
of computational or manipulative components into output data. A pipe-and-filter pattern has a set of
components, called filters, connected by pipes that transmit data from one component to the next. Each filter works independently of those components upstream and downstream, is designed to expect data input of a
certain form, and produces data output (to the next filter) of a specified form. However, the filter does not
require knowledge of the workings of its neighboring filters. If the data flow degenerates into a single line of
transforms, it is termed batch sequential. This structure accepts a batch of data and then applies a series of
sequential components (filters) to transform it.
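
A minimal pipe-and-filter sketch with illustrative filters: each filter expects input of one form and produces output of a specified form without knowing how its neighbors work, and the pipeline function plays the role of the pipes.

# Filters: independent transforms with agreed input/output forms.
def read_lines(text):
    return text.splitlines()

def strip_blank(lines):
    return [ln for ln in lines if ln.strip()]

def to_upper(lines):
    return [ln.upper() for ln in lines]

# The "pipes": pass each filter's output to the next; a single line of
# transforms like this is the batch sequential case described above.
def pipeline(data, *filters):
    for f in filters:
        data = f(data)
    return data

print(pipeline("alpha\n\nbeta\n", read_lines, strip_blank, to_upper))
# ['ALPHA', 'BETA']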
3. Call and return architectures. This architectural style enables you to achieve a program structure that is
relatively easy to modify and scale. A number of substyles exist within this category, including main program/subprogram architectures and layered architectures.

[Figure: Layered Architecture]

As architectural design begins, the software to be developed must be put into context—that is, the
design should define the external entities (other systems, devices, people) that the software interacts with and
the nature of the interaction. This information can generally be acquired from the requirements model and all
other information gathered during requirements engineering. Once context is modeled and all external software
interfaces have been described, you can identify a set of architectural archetypes. An archetype is an abstraction
(similar to a class) that represents one element of system behavior. The set of archetypes provides a collection
of abstractions that must be modeled architecturally if the system is to be constructed, but the archetypes
themselves do not provide enough implementation detail.
Representing the System in Context
At the architectural design level, a software architect uses an architectural context diagram (ACD) to
model the manner in which software interacts with entities external to its boundaries. The generic structure of
the architectural context diagram is illustrated in figure.
Referring to the figure, systems that interoperate with the target system (the system for which an architectural
design is to be developed) are represented as
• Superordinate systems—those systems that use the target system as part of some higher-level processing
scheme.
• Subordinate systems—those systems that are used by the target system and provide data or processing that
are necessary to complete target system functionality.
• Peer-level systems—those systems that interact on a peer-to-peer basis (i.e., information is either produced or consumed by the peers and the target system).
• Actors—entities (people, devices) that interact with the target system by producing or consuming information
that is necessary for requisite processing.

3. Describe the concepts of cohesion and coupling. State difference between cohesion and coupling with a
suitable examples. May: 03 , 08, 15,16.

Cohesion
• Cohesion is the "single-mindedness" of a component.
• It implies that a component or class encapsulates only attributes and operations that are closely related to one another and to the class or component itself.
• The objective is to keep cohesion as high as possible.
• The kinds of cohesion can be ranked in order from highest (best) to lowest (worst):
Functional cohesion
A module performs one and only one computation and then returns a result.
Layer cohesion
A higher layer component accesses the services of a lower layer component.
Communicational cohesion
All operations that access the same data are defined within one class.
Sequential cohesion
Components or operations are grouped in a manner that allows the first to provide input to the next, and so on, in order to implement a sequence of operations.
Procedural cohesion
Components or operations are grouped in a manner that allows one to be invoked immediately after the preceding one was invoked, even when no data is passed between them.
Temporal cohesion
Operations are grouped to perform a specific behavior or establish a certain state, such as program start-up or when an error is detected.
Utility cohesion
Components, classes, or operations are grouped within the same category because of similar general functions but are otherwise unrelated to each other.

Coupling
• As the amount of communication and collaboration increases between operations and classes, the complexity
of the computer-based system also increases
• As complexity rises, the difficulty of implementing, testing, and maintaining software also increases
• Coupling is a qualitative measure of the degree to which operations and classes are connected to one another
• The objective is to keep coupling as low as possible
• The kinds of coupling can be ranked in order from lowest (best) to highest (worst):
Data coupling
Operation A() passes one or more atomic data operands to operation B(); the fewer the operands, the lower the level of coupling.
Stamp coupling
A whole data structure or class instantiation is passed as a parameter to an operation.
Control coupling
Operation A() invokes operation B() and passes a control flag to B() that directs logical flow within B(). Consequently, a change in B() can require a change to be made to the meaning of the control flag passed by A(); otherwise an error may result.
Common coupling
A number of components all make use of a global variable, which can lead to uncontrolled error propagation and unforeseen side effects.
Content coupling
One component secretly modifies data that is stored internally in another component.
Subroutine call coupling
When one operation is invoked, it invokes another operation within it.
Type use coupling
Component A uses a data type defined in component B, such as for an instance variable or a local variable declaration. If/when the type definition changes, every component that declares a variable of that data type must also change.
S.No. Coupling / Cohesion
1. Coupling represents how modules are connected with other modules or with the outside world. / A cohesive module performs only one thing.
2. Coupling decides the interface complexity. / Cohesion enables data hiding.
3. The goal is to achieve the lowest possible coupling. / The goal is to achieve high cohesion.
4. Types of coupling: data coupling, control coupling, common coupling, content coupling. / Types of cohesion: coincidental cohesion, logical cohesion, temporal cohesion, procedural cohesion.
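
The coupling levels above can be illustrated with a short sketch; all module and variable names here are hypothetical.

# 1. Data coupling (lowest, best): only atomic operands cross the interface.
def area(width, height):
    return width * height

# 2. Control coupling: a flag from the caller directs logic inside the callee,
#    so changing the callee's branches can force changes in every caller.
def format_area(value, verbose):
    return f"The area is {value} square units" if verbose else str(value)

# 3. Common coupling (worse): functions share a global variable, so an error
#    in one can propagate invisibly to the other.
UNITS = {"name": "cm"}

def describe(width, height):
    return f"{area(width, height)} {UNITS['name']}^2"   # reads the global

def set_units(name):
    UNITS["name"] = name                                # writes the global

set_units("m")
print(describe(2, 3))            # 6 m^2
print(format_area(6, True))      # control-coupled call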

UNIT – IV TESTING AND MAINTENANCE


1. Define software testing?
Software testing is a critical element of software quality assurance and represents the ultimate review of specification, design, and
coding.
2. What are the objectives of testing?
i. Testing is a process of executing a program with the intent of finding an error.
ii. A good test case is one that has a high probability of finding an undiscovered error.
iii. A successful test is one that uncovers an as-yet undiscovered error.
3. What are the testing principles the software engineer must apply while performing the software testing?
i. All tests should be traceable to customer requirements.
ii. Tests should be planned long before testing begins.
iii. The Pareto principle can be applied to software testing – 80% of all errors uncovered during testing will likely be traceable to 20% of all program modules.
iv. Testing should begin “in the small” and progress toward testing “in the large”.
v. Exhaustive testing is not possible.
vi. To be most effective, an independent third party should conduct testing.
4. What are the two levels of testing?
i. Component testing - Individual components are tested. Tests are derived from developer’s experience.
ii. System Testing - Groups of components are integrated to create a system or sub-system, which is then tested. These tests are based on the system specification.
5. What are the various testing activities?
i. Test planning
ii. Test case design
iii. Test execution
iv. Data collection
v. Effective evaluation
6. Write short note on black box testing.
The black box testing is also called behavioral testing. This method fully focuses on the functional requirements of the software. Tests are derived that fully exercise all functional requirements.
7. What is cyclomatic complexity?
Cyclomatic complexity is a software metric that gives a quantitative measure of the logical complexity of the program.
8. How to compute the cyclomatic complexity?
The cyclomatic complexity can be computed by any one of the following ways.
1. The numbers of regions of the flow graph correspond to the cyclomatic complexity.
2. Cyclomatic complexity (G), for the flow graph G, is defined as: V(G)=E-N+2, E -- number of flow graph edges, N -- number
of flow graph nodes
3. V(G) = P+1 Where P is the number of predicate nodes contained in the flow graph.
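As a quick illustration of way 2, a minimal Python sketch (the edge list is a hypothetical flow graph, not one from the syllabus):

# Flow graph as (from_node, to_node) edges: node 3 is a single decision
# whose two branches rejoin at node 6.
edges = [(1, 2), (2, 3), (3, 4), (3, 5), (4, 6), (5, 6)]

E = len(edges)                                 # number of edges = 6
N = len({n for edge in edges for n in edge})   # number of nodes = 6

print("V(G) =", E - N + 2)                     # 6 - 6 + 2 = 2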
9. Distinguish between verification and validation.
Verification refers to the set of activities that ensure that software correctly implements a specific function.
Validation refers to a different set of activities that ensure that the software that has been built is traceable to the customer
requirements.
10. What are the various testing strategies for conventional software?
i. Unit testing
ii. Integration testing.
iii. Validation testing.
iv. System testing.
11. Write about drivers and stubs.
Driver and stub software need to be developed to test incomplete software.
The “driver” is a program that accepts the test data and prints the relevant results.
The “stub” is a subprogram that uses the module's interfaces and performs the minimal data manipulation required.
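A minimal Python sketch (the module and data names are hypothetical) showing a driver exercising a module whose collaborator is replaced by a stub:

def lookup_price(item_id):
    # Stub: stands in for the unfinished pricing module and returns
    # minimal canned data through the real interface.
    return {"A1": 10.0}.get(item_id, 0.0)

def compute_total(item_id, qty):
    # Module under test: depends on lookup_price().
    return lookup_price(item_id) * qty

def driver():
    # Driver: feeds test data to the module and prints the results.
    for item_id, qty, expected in [("A1", 3, 30.0), ("ZZ", 5, 0.0)]:
        print(item_id, qty, "->", compute_total(item_id, qty), "expected", expected)

driver()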
12. What are the approaches of integration testing?
The integration testing can be carried out using two approaches.
1. The non-incremental testing.
2. Incremental testing.
13. Distinguish between alpha and beta testing
Alpha and beta testing are the types of acceptance testing.
Alpha test : The alpha testing is a testing in which the version of the complete software is tested by the customer under the supervision of the developer. This testing is performed at the developer's site.
Beta test : The beta testing is a testing in which the version of the software is tested by the customer without the developer being present.
This testing is performed at customer’s site.
14. What are the various types of system testing?
1. Recovery testing – is intended to check the system’s ability to recover from failures.
2. Security testing – verifies that system protection mechanism prevent improper penetration or
data alteration.
3. Stress testing – Determines breakpoint of a system to establish maximum service level.
4. Performance testing – evaluates the run time performance of the software, especially real-time software.
15. Define debugging.
Debugging is defined as the process of removal of a defect. It occurs as a consequence of successful testing.
16. What are the common approaches in debugging?
Brute force method:
Memory dumps and run-time traces are examined, and the program is loaded with write statements to obtain clues to the causes of errors.
Back tracking method:
The source code is examined by looking backwards from symptom to potential causes of errors.
Cause elimination method:
This method uses binary partitioning to reduce the number of locations where errors can exist.
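A minimal Python sketch of the binary-partitioning idea (the version numbers and failure check are hypothetical): repeatedly halve an ordered list of suspects until the first failing one is isolated.

def first_bad(candidates, is_bad):
    # Binary partitioning: each probe halves the remaining suspects.
    lo, hi = 0, len(candidates) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(candidates[mid]):
            hi = mid              # defect is at or before mid
        else:
            lo = mid + 1          # defect is after mid
    return candidates[lo]

# Hypothetical usage: versions 0..9, defect present from version 6 onward.
print(first_bad(list(range(10)), lambda v: v >= 6))   # -> 6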
17. What is business process reengineering(BPR model)?
❖ Business process reengineering is the act of recreating a core business process with the goal of improving product output, quality,
or reducing costs.
❖ Typically, it involves the analysis of company workflows, finding processes that are sub-par or inefficient, and figuring out ways
to get rid of them or change them.
PART B:
1. Illustrate white box testing. May: 04, 07, Dec: 07, May: 15
White box testing:
• White-box testing of software is predicated on close examination of procedural detail.
• Logical paths through the software are tested by providing test cases that exercise specific sets of conditions and/or loops.
• The "status of the program" may be examined at various points.
• White-box testing, sometimes called glass-box testing, is a test case design method that uses the control structure of the procedural design to derive test cases.
• Using this method, the software engineer can derive test cases that:
o Guarantee that all independent paths within a module have been exercised at least once
o Exercise all logical decisions on their true and false sides
o Execute all loops at their boundaries and within their operational bounds
o Exercise internal data structures to ensure their validity.
Basis path testing:
• Basis path testing is a white-box testing technique used to derive a logical complexity measure of a procedural design.
• Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least one time.
Methods:
1. Flow graph notation
2. Independent program paths or Cyclomatic complexity
3. Deriving test cases
4. Graph matrices
Flow Graph Notation:
• Start with a simple notation for the representation of control flow (called a flow graph). It represents logical control flow.
• Fig. A represents the program control structure and Fig. B maps the flowchart into a corresponding flow graph. In Fig. B each circle, called a flow graph node, represents one or more procedural statements.
• A sequence of process boxes and a decision diamond can map into a single node.
• The arrows on the flow graph, called edges or links, represent flow of control and are parallel to flowchart arrows.
• An edge must terminate at a node, even if the node does not represent any procedural statement.
• Areas bounded by edges and nodes are called regions. When counting regions, we include the area outside the graph as a region.
• When compound conditions are encountered in a procedural design, the flow graph becomes slightly more complicated.
• When we translate a PDL segment into a flow graph, a separate node is created for each condition.
• Each node that contains a condition is called a predicate node and is characterized by two or more edges emanating from it.
Independent program paths or Cyclomatic complexity:
• An independent path is any path through the program that introduces at least one new set of processing statements or a new condition.
• For example, a set of independent paths for the flow graph:
o Path 1: 1-11
o Path 2: 1-2-3-4-5-10-1-11
o Path 3: 1-2-3-6-8-9-1-11
o Path 4: 1-2-3-6-7-9-1-11
• Note that each new path introduces a new edge.
• The path 1-2-3-4-5-10-1-2-3-6-8-9-1-11 is not considered to be an independent path because it is simply a combination of already specified paths and does not traverse any new edges.
• Test cases should be designed to force execution of these paths (the basis set).
• Every statement in the program will then be executed at least once and every condition will have been executed on both its true and false sides.
• Cyclomatic complexity is a software metric that provides a quantitative measure of the logical complexity of a program.
• It defines the number of independent paths in the basis set and also provides the number of tests that must be conducted.
• Cyclomatic complexity can be computed in one of three ways:
1. The number of regions of the flow graph corresponds to the cyclomatic complexity.
2. Cyclomatic complexity, V(G), for a flow graph, G, is defined as V(G) = E - N + 2, where E is the number of flow graph edges and N is the number of flow graph nodes.
3. Cyclomatic complexity, V(G), is also defined as V(G) = P + 1, where P is the number of predicate nodes.
• The value of V(G) provides us with an upper bound on the number of test cases.
Deriving Test Cases:
• Deriving test cases is a method consisting of a series of steps.
• The procedure average is depicted in PDL.
• Average, an extremely simple algorithm, contains compound conditions and loops.
To derive the basis set, follow the steps:
1. Using the design or code as a foundation, draw a corresponding flow graph. A flow graph is created by numbering those PDL statements that will be mapped into corresponding flow graph nodes.
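As a quick check of the arithmetic, a minimal Python sketch (the edge list is reconstructed from the four example paths listed above) computing V(G) for that flow graph:

# Edges reconstructed from the example paths 1-11, 1-2-3-4-5-10-1-11,
# 1-2-3-6-8-9-1-11 and 1-2-3-6-7-9-1-11.
edges = {(1, 11), (1, 2), (2, 3), (3, 4), (4, 5), (5, 10), (10, 1),
         (3, 6), (6, 8), (8, 9), (9, 1), (6, 7), (7, 9)}
nodes = {n for e in edges for n in e}

print("V(G) =", len(edges) - len(nodes) + 2)   # 13 - 11 + 2 = 4 paths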
2. Explain the various types of black box testing methods. Dec: 07,16 May: 15
Black box testing:
• Also called behavioral testing, it focuses on the functional requirements of the software.
• It enables the software engineer to derive sets of input conditions that will fully exercise all functional requirements for a program.
• Black-box testing is not an alternative to white-box techniques; it is a complementary approach.
• Black-box testing attempts to find errors in the following categories:
o Incorrect or missing functions
o Interface errors
o Errors in data structures or external database access
o Behavior or performance errors
o Initialization and termination errors
• Black-box testing purposely ignores control structure; attention is focused on the information domain. Tests are designed to answer the following questions:
o How is functional validity tested?
o How are system behavior and performance tested?
o What classes of input will make good test cases?
• By applying black-box techniques, we derive a set of test cases that satisfy the following criteria:
o Test cases that reduce the number of additional test cases that must be designed to achieve reasonable testing (i.e., minimize effort and time)
o Test cases that tell us something about the presence or absence of classes of errors
• Black box testing methods:
o Graph-Based Testing Methods
o Equivalence partitioning
o Boundary value analysis (BVA)
o Orthogonal Array Testing
Graph-Based Testing Methods:
• First, understand the objects that are modeled in software and the relationships that connect these objects.
• The next step is to define a series of tests that verify that all objects have the expected relationships to one another.
• Stated another way:
o Create a graph of important objects and their relationships
o Develop a series of tests that will cover the graph
• So that each object and relationship is exercised and errors are uncovered.
• Begin by creating a graph:
o a collection of nodes that represent objects
o links that represent the relationships between objects
o node weights that describe the properties of a node
o link weights that describe some characteristic of a link.
• Nodes are represented as circles connected by links that take a number of different forms.
• A directed link (represented by an arrow) indicates that a relationship moves in only one direction.
• A bidirectional link, also called a symmetric link, implies that the relationship applies in both directions.
• Parallel links are used when a number of different relationships are established between graph nodes.
Example:
• Object #1 = new file menu select
• Object #2 = document window
• Object #3 = document text
• Referring to the example figure, a menu select on new file generates a document window.
• The link weight indicates that the window must be generated in less than 1.0 second.
• The node weight of document window provides a list of the window attributes that are to be expected when the window is generated.
• An undirected link establishes a symmetric relationship between the new file menu select and document text.
• Parallel links indicate relationships between document window and document text.
• A number of behavioral testing methods can make use of graphs:
o Transaction flow modeling: The nodes represent steps in some transaction and the links represent the logical connections between steps.
o Finite state modeling: The nodes represent different user-observable states of the software and the links represent the transitions that occur to move from state to state (starting point and ending point).
o Data flow modeling: The nodes are data objects and the links are the transformations that occur to translate one data object into another.
o Timing modeling: The nodes are program objects and the links are the sequential connections between those objects. Link weights are used to specify the required execution times as the program executes.
Equivalence Partitioning:
• Equivalence partitioning is a black-box testing method that divides the input domain of a program into classes of data from which test cases can be derived.
• Test case design for equivalence partitioning is based on an evaluation of equivalence classes for an input condition.
• An equivalence class represents a set of valid or invalid states for input conditions.
• Typically, an input condition is a specific numeric value, a range of values, a set of related values, or a Boolean condition.
• To define equivalence classes, follow these guidelines:
o If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
o If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
o If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
o If an input condition is Boolean, one valid and one invalid class are defined.
Example:
o area code—blank or three-digit number
o prefix—three-digit number not beginning with 0 or 1
o suffix—four-digit number
o password—six-digit alphanumeric string
o commands—check, deposit, bill pay, and the like
area code:
o Input condition, Boolean—the area code may or may not be present.
o Input condition, value—three-digit number
o Input condition, range—values defined between 200 and 999, with specific exceptions.
suffix:
o Input condition, value—four-digit length
password:
o Input condition, Boolean—a password may or may not be present.
o Input condition, value—six-character string.
command:
o Input condition, set—check, deposit, bill pay.
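A small Python sketch (the validator is illustrative, not from the text) deriving one representative test value per equivalence class for the area code field:

# Range condition 200-999: one valid and two invalid classes (guideline 1),
# plus the valid "blank" case from the Boolean condition.
classes = {
    "valid: blank (area code absent)": "",
    "valid: three digits in 200-999": "555",
    "invalid: below the range": "150",
    "invalid: above the range / wrong length": "1000",
}

def is_valid_area_code(s):
    return s == "" or (s.isdigit() and len(s) == 3 and 200 <= int(s) <= 999)

for label, value in classes.items():
    print(label, "->", repr(value), "accepted:", is_valid_area_code(value))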
Boundary Value Analysis (BVA):
• Boundary value analysis is a test case design technique that complements equivalence partitioning.
• Rather than selecting any element of an equivalence class, BVA leads to the selection of test cases at the "edges" of the class.
• In other words, rather than focusing solely on input conditions, BVA derives test cases from the output domain as well.
Guidelines for BVA
1. If an input condition specifies a range bounded by values a and b, test cases should be designed with
values a and b and just above and just below a and b.
2. If an input condition specifies a number of values, test cases should be developed that exercise the
minimum and maximum numbers. Values just above and below minimum and maximum are also
tested.
3. Apply guidelines 1 and 2 to output conditions.
4. If internal program data structures have prescribed boundaries, be certain to design a test case to exercise the data structure at its boundary.
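A minimal Python sketch of guideline 1, generating the boundary test values for a range [a, b] (reusing the 200 to 999 range from the earlier example):

def bva_values(a, b, step=1):
    # Bounds themselves plus values just below and just above each bound.
    return [a - step, a, a + step, b - step, b, b + step]

print(bva_values(200, 999))   # [199, 200, 201, 998, 999, 1000]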
Orthogonal Array Testing:
• Applicable when the number of input parameters is small and the values that each of the parameters may take are clearly bounded.
• When these numbers are very small (e.g., three input parameters taking on three discrete values each), it is possible to consider every input permutation.
• However, as the number of input values grows and the number of discrete values for each data item increases, exhaustive testing becomes impractical.
• Orthogonal array testing can be applied to problems in which the input domain is relatively small but too large to accommodate exhaustive testing.
• Orthogonal array testing can be used to reduce the number of combinations and provide maximum coverage with a minimum number of test cases.
Example:
• Consider the send function for a fax application.
• Four parameters, P1, P2, P3, and P4, are passed to the send function. Each takes on three discrete values.
• P1 takes on the values:
o P1 = 1, send it now
o P1 = 2, send it one hour later
o P1 = 3, send it after midnight
• P2, P3, and P4 would also take on values of 1, 2 and 3, signifying other send functions.
• An orthogonal array is an array of values in which each column represents a parameter that can take on a certain set of values called levels.
• Each row represents a test case.
• Parameters are combined pair-wise rather than representing all possible combinations of parameters and levels.
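For the fax example above, the standard L9(3^4) orthogonal array covers every parameter pair with 9 test cases instead of the 3^4 = 81 exhaustive combinations. A small sketch listing the array and verifying its pair-wise property:

from itertools import combinations

# Standard L9(3^4) array: one row per test case, one column per parameter.
L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

# Every pair of columns exhibits all 9 level combinations exactly once.
for c1, c2 in combinations(range(4), 2):
    assert {(r[c1], r[c2]) for r in L9} == {(i, j) for i in (1, 2, 3) for j in (1, 2, 3)}

for case, (p1, p2, p3, p4) in enumerate(L9, 1):
    print(f"Test case {case}: P1={p1} P2={p2} P3={p3} P4={p4}")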
3. Explain about various testing strategies. May: 05, 06, Dec: 08, May: 10, 13
Unit Testing Details:
• Interfaces tested for proper information flow.
• Local data are examined to ensure that integrity is maintained.
• Boundary conditions are tested.
• Basis path testing should be used.
• All error handling paths should be tested.
• Drivers and/or stubs need to be developed to test incomplete software.
1. Unit Testing:
In unit testing the individual components are tested independently to ensure their quality. The focus is to
uncover the errors in design and implementation.
The various tests that are conducted during the unit test are described as below –
1. Module interfaces are tested for proper information flow in and out of the program.
2. Local data are examined to ensure that integrity is maintained.
3. Boundary conditions are tested to ensure that the module operates properly at boundaries established to limit or restrict processing.
4. All the basis (independent) paths are tested to ensure that all statements in the module have been executed at least once.
5. All error handling paths should be tested.
2. Integration Testing:
Integration testing tests integration or interfaces between components, interactions with different parts of the system such as an operating system, file system and hardware, or interfaces between systems. Also, after integrating two different components together we do the integration testing. As displayed in the image below, when two different modules 'Module A' and 'Module B' are integrated then the integration testing is done.
A group of dependent components are tested together to ensure the quality of their integration unit.
The objective is to take unit tested components and build a program structure that has been dictated by
software design.
The integration testing can be carried out using two approaches.
Big Bang integration testing:
In Big Bang integration testing all components or modules are integrated simultaneously, after which everything is tested as a whole. As per the below image, all the modules from 'Module 1' to 'Module 6' are integrated simultaneously, and then the testing is carried out.
Advantage:
Big Bang testing has the advantage that everything is finished before integration testing starts.
Disadvantage:
The major disadvantage is that in general it is time consuming and difficult to trace the cause of failures because of this late integration.
Top-Down Integration Testing:
(Figure: a control hierarchy with a main module at the top and subordinate modules B, C, D, E, F, and G below it.)
• Top-down integration is an incremental approach in which modules are integrated by moving down through the control structure.
The top-down integration process can be performed using the following steps:
1. Main program used as a test driver and stubs are substitutes for components directly subordinate to it.
2. Subordinate stubs are replaced one at a time with real components (following the depth-first or breadth-first
approach).
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with a real component.
5. Regression testing may be used to ensure that new errors not introduced.
Advantages of Top-Down approach:
The tested product is very consistent because the integration testing is basically performed in an environment that is almost similar to reality.
Stubs can be written in less time than drivers because stubs are simpler to author.
Disadvantages of Top-Down approach:
Basic functionality is tested only at the end of the cycle.
Bottom-Up Integration:
In Bottom-Up Integration the modules at the lowest levels are integrated first, then integration is done
by moving upward through the control structure.
Bottom up Integration process can be performed using following steps.
• Low level components are combined in clusters that perform a specific software function.
• A driver (control program) is written to coordinate test case input and output.
• The cluster is tested.
• Drivers are removed and clusters are combined moving upward in the program structure.
Advantage of Bottom-Up approach:
• In this approach development and testing can be done together so that the product or application will be efficient and as per the customer specifications.
Disadvantages of Bottom-Up approach:
• Key interface defects are caught only at the end of the cycle.
• It is required to create test drivers for modules at all levels except the top control.
3. Regression Testing:
• The selective retesting of a software system that has been modified to ensure that any bugs have been fixed
and that no other previously working functions have failed as a result of the reparations and that newly added
features have not created problems with previous versions of the software. Also referred to as verification
testing, regression testing is initiated after a programmer has attempted to fix a recognized problem or has added
source code to a program that may have inadvertently introduced errors. It is a quality control measure to ensure
that the newly modified code still complies with its specified requirements and that unmodified code has not
been affected by the maintenance activity.
• Regression test suit contains 3 different classes of test cases
□ Representative sample of existing test cases is used to exercise all software functions.
□ Additional test cases focusing software functions likely to be affected by the change.
□ Tests cases that focus on the changed software components.
4. Smoke Testing:
Is a kind of integration testing technique used for time-critical projects wherein the project needs to be assessed on a frequent basis.
Following activities need to be carried out in smoke testing:
• Software components already translated into code are integrated into a build.
• A series of tests designed to expose errors that will keep the build from performing its functions are created.
• The build is integrated with the other builds and the entire product is smoke tested daily using either top-down or bottom-up integration.
5. Validation Testing
□ Ensure that each function or performance characteristic conforms to its specification.
□ Configuration review or audit is used to ensure that all elements of the software configuration have been
properly developed, catalogued, and documented to allow its support during its maintenance phase.
6. Acceptance Testing:
• Is a kind of testing conducted to ensure that the software works correctly for intended user in his or her
normal work environment.
Types of acceptance testing are:
• Alpha test
Is a testing in which the version of the complete software is tested by the customer under the supervision of the developer at the developer's site.
• Beta test
Is a testing in which the version of the complete software is tested by customer at his or her own site without the
developer being present
7. System Testing:
The system test is a series of tests conducted to fully exercise the computer-based system. Various types of system tests are:
• Recovery testing
Is intended to check the system's ability to recover from failures.
• Security testing
Security testing verifies that system protection mechanisms prevent improper penetration or data alteration.
• Stress testing
Determines the breakpoint of a system to establish the maximum service level. The program is checked to see how well it deals with abnormal resource demands.
8. Performance testing
Performance testing evaluates the run-time performance of software. Performance tests include:
• Stress tests.
• Volume tests.
• Configuration tests (hardware & software).
• Compatibility tests.
• Regression tests.
• Security tests.
• Timing tests.
• Environmental tests.
• Quality tests.
• Recovery tests.
• Maintenance tests.
• Documentation tests.
• Human factors tests.
Testing Life Cycle:
• Establish test objectives.
• Design criteria (review criteria).
□ Correct.
□ Feasible.
□ Coverage.
□ Demonstrate functionality.
UNIT – V PROJECT MANAGEMENT
1. Define measure.
Measure is defined as a quantitative indication of the extent, amount, dimension, or size of some attribute of a product or process.
2. Define metrics.
Metrics is defined as the degree to which a system component, or process possesses a given attribute.
3. What are the types of metrics?
Direct metrics – It refers to immediately measurable attributes.
Example – Lines of code, execution speed.
Indirect metrics – It refers to the aspects that are not immediately quantifiable or measurable. Example – functionality of a
program.
4. Write short note on the various estimation techniques.
Algorithmic cost modeling – The cost estimation is based on the size of the software.
Expert judgment – Experts from software development and the application domain use their experience to predict software costs.
Estimation by analogy – The cost of a project is computed by comparing the project to a similar project in the same application domain.
Parkinson's Law – The cost is determined by available resources rather than by objective assessment.
Pricing to win – The project costs whatever the customer is ready to spend on it.
5. What is COCOMO model?
COnstructive COst MOdel is a cost model, which gives the estimate of number of man-months it will take to develop the software product.
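As an illustrative sketch only, the basic COCOMO effort equation Effort = a × (KLOC)^b with the standard organic-mode coefficients (a = 2.4, b = 1.05); the 32-KLOC size is a made-up input:

a, b = 2.4, 1.05        # basic COCOMO, organic-mode coefficients
kloc = 32               # hypothetical product size in thousands of LOC

effort = a * kloc ** b  # estimated effort in man-months
print(round(effort, 1)) # about 91.3 man-months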
6. What is the purpose of timeline chart?
The purpose of the timeline chart is to emphasize the scope of the individual task. Hence set of tasks are given as input to the timeline chart.
7. What is EVA?
Earned Value Analysis is a technique of performing quantitative analysis of the software project. It provides a common value
scale for every task of software project. It acts as a measure for software project progress.
8. What is software maintenance?
Software maintenance is an activity in which program is modified after it has been put into use.
9. What are the types of software maintenance?
Corrective maintenance – Means the maintenance for correcting the software faults.
Adaptive maintenance – Means maintenance for adapting the change in environment.
Perfective maintenance – Means modifying or enhancing the system to meet the new requirements.
Preventive maintenance – Means changes made to improve future maintainability.
10. What is meant by Software project management?
Software project management is an activity of organizing, planning and scheduling software projects.
11. What is meant by risk management?
Risk management is an activity in which risks in the software project are identified, analyzed, and prioritized.
12. What is RMMM?
RMMM - Risk Mitigation, Monitoring and Management. It is an effective strategy to assist the project team in dealing with risk.
13. What is meant by software project scheduling?
Software project scheduling is an activity that distributes estimated effort across the planned project duration by allocating the
effort to specified software engineering tasks.
14. Write about software change strategies.
The software change strategies that could be applied separately or together are:
Software maintenance – The changes are made in the software due to requirements.
Architectural transformation – It is the process of changing one architecture into another form.
Software re-engineering – New features can be added to existing system and then the system is reconstructed for better use of it in
future.
15. How is the risk exposure RE computed?
RE = p x c
Where p = probability of occurrence for a risk
c = the cost to the project should the risk occur
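A quick illustrative calculation (both numbers are hypothetical):

p = 0.25            # probability of occurrence for the risk
c = 80000           # cost to the project should the risk occur ($)
print(p * c)        # RE = 20000.0, i.e., $20,000 of exposure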
PART B:
1. Describe two metrics which have been used to measure the software. May 04,05

Software process and project metrics are quantitative measures. The software measures are collected by
software engineers and software metrics are analyzed by software managers.
• They are a management tool.
• They offer insight into the effectiveness of the software process and the projects that are conducted using the
process as a framework.
• Basic quality and productivity data are collected.
• These data are analyzed, compared against past averages, and assessed.
• The goal is to determine whether quality and productivity improvements have occurred.
• The data can also be used to pinpoint problem areas.
• Remedies can then be developed and the software process can be improved.
Use of Measurement:
• Can be applied to the software process with the intent of improving it on a continuous basis.
• Can be used throughout a software project to assist in estimation, quality control, productivity assessment, and
project control.
• Can be used to help assess the quality of software work products and to assist in tactical decision making as a
project proceeds.
Reasons to measure:
• To characterize in order to
• Gain an understanding of processes, products, resources, and environments
• Establish baselines for comparisons with future assessments
• To evaluate in order to determine status with respect to plans
• To predict in order to gain understanding of relationships among processes and products
• Build models of these relationships
• To improve in order to Identify roadblocks, root causes, inefficiencies, and other opportunities for improving
product quality and process performance
Metrics in the Process Domain:
• Process metrics are collected across all projects and over long periods of time.
• They are used for making strategic decisions.
• The intent is to provide a set of process indicators that lead to long-term software process improvement.
• The only way to know how/where to improve any process is to
• Measure specific attributes of the process
• Develop a set of meaningful metrics based on these attributes
• Use the metrics to provide indicators that will lead to a strategy for improvement
Properties of Process Metrics
• Use common sense and organizational sensitivity when interpreting metrics data
• Provide regular feedback to the individuals and teams who collect measures and metrics
• Don't use metrics to evaluate individuals
• Work with practitioners and teams to set clear goals and the metrics that will be used to achieve them
• Never use metrics to threaten individuals or teams
• Metrics data that indicate a problem should not be considered "negative" – such data are merely an indicator for process improvement
• Don't obsess on a single metric to the exclusion of other important metrics
Metrics in Project Domain
• Project metrics enable a software project manager to
– Assess the status of an ongoing project
– Track potential risks
– Uncover problem areas before their status becomes critical
– Adjust work flow or tasks
– Evaluate the project team's ability to control quality of software work products
• Many of the same metrics are used in both the process and project domain
• Project metrics are used for making tactical decisions
– They are used to adapt project workflow and technical activities
Use of Project Metrics:
• The first application of project metrics occurs during estimation
– Metrics from past projects are used as a basis for estimating time and effort
• As a project proceeds, the amount of time and effort expended are compared to original estimates
• As technical work commences, other project metrics become important
– Production rates are measured (represented in terms of models created, review hours, function points, and
delivered source lines of code)
– Errors uncovered during each generic framework activity (i.e., communication, planning, modeling, construction, deployment) are measured
Categories of Software Measurement
• Two categories of software measurement
– Direct measures of the
• Software process (cost, effort, etc.)
• Software product (lines of code produced, execution speed, defects reported over time, etc.)
– Indirect measures of the
• Software product (functionality, quality, complexity, efficiency, reliability,
maintainability, etc.)
• Project metrics can be consolidated to create process metrics for an organization
Size oriented metrics
• Derived by normalizing quality and/or productivity measures by considering the size of the software produced
• Thousand lines of code (KLOC) are often chosen as the normalization value
• Metrics include
o Errors per KLOC - Errors per person-month
o Defects per KLOC - KLOC per person-month
o Dollars per KLOC - Dollars per page of documentation
o Pages of documentation per KLOC
Size-oriented metrics are not universally accepted as the best way to measure the software process
Opponents argue that KLOC measurements
o Are dependent on the programming language
o Penalize well-designed but short programs
o Cannot easily accommodate nonprocedural languages
o Require a level of detail that may be difficult to achieve
Function oriented metrics
• Function-oriented metrics use a measure of the functionality delivered by the application as a normalization value
• The most widely used metric of this type is the function point: FP = count total × [0.65 + 0.01 × Σ(value adjustment factors)]
• Function point values on past projects can be used to compute, for example, the average number of lines of code per function point (e.g., 60)
• Like the KLOC measure, function point use also has proponents and opponents
• Proponents claim that
o FP is programming language independent
o FP is based on data that are more likely to be known in the early stages of a project, making it more attractive as an estimation approach
• Opponents claim that
o FP requires some "sleight of hand" because the computation is based on subjective data
o Counts of the information domain can be difficult to collect after the fact
o FP has no direct physical meaning… it's just a number
Object Oriented Metrics:
• Average number of support classes per key class
o Key classes are identified early in a project (e.g., at requirements analysis)
o Estimation of the number of support classes can be made from the number of key classes
o GUI applications have between two and three times more support classes as key classes
o Non-GUI applications have between one and two times more support classes as key classes
Number of subsystems
o A subsystem is an aggregation of classes that support a function that is visible to the end user of a system
Metrics for Software Quality
o Correctness
This is the number of defects per KLOC, where a defect is a verified lack of conformance to requirements.
o Defects
Defects are those problems reported by a program user after the program is released for general use.
o Maintainability
This describes the ease with which a program can be corrected if an error is found, adapted if the environment changes, or enhanced if the customer has changed requirements.
o Mean time to change (MTTC)
The time to analyze, design, implement, test, and distribute a change to all users. Maintainable programs on average have a lower MTTC.
Defect Removal Efficiency (DRE)
DRE represents the effectiveness of quality assurance activities. The DRE also helps the project manager to
assess the progress of software project as it gets developed through its scheduled work task. Any errors that
remain uncovered and are found in later tasks are called defects.
The defect removal efficiency can be defined as DRE = E / (E + D), where E is the number of errors found before delivery of the software and D is the number of defects found after delivery.
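A quick illustrative calculation (the counts are hypothetical):

E, D = 80, 20       # 80 errors found before delivery, 20 defects after
print(E / (E + D))  # DRE = 0.8, i.e., 80% of problems removed before release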
Measuring Quality
Following are the measure of the software quality:
1. Correctness: Is a degree to which the software produces the desired functionality. The correctness can
be measured as Correctness = Defects per KLOC.
2. Integrity: Integrity is basically an ability of the system to withstand against the attacks. There are two
attributes that are associated with integrity: threat and security.
3. Usability: User friendliness of the system or ability of the system that indicates the usefulness of the
system.
4. Maintainability: Is an ability of the system to accommodate the corrections made after encountering
errors, adapting the environment changes in the system in order to satisfy the user.
2. What are the categories of software risks? Give an overview about risk management. May: 14
Risk is a potential problem – it might happen and it might not.
Conceptual definition of risk:
o Risk concerns future happenings
o Risk involves change in mind, opinion, actions, places, etc.
o Risk involves choice and the uncertainty that choice entails
Two characteristics of risk:
o Uncertainty – the risk may or may not happen; that is, there are no 100% risks (those, instead, are called constraints)
o Loss – the risk becomes a reality and unwanted consequences or losses occur
Risk Categorization
1) Project risks
They threaten the project plan. If they become real, it is likely that the project schedule will slip and that costs
will increase
2) Technical risks
They threaten the quality and timeliness of the software to be produced. If they become real, implementation
may become difficult or impossible
3) Business risks
They threaten the viability of the software to be built. If they become real, they jeopardize the project or the
product
Sub-categories of Business risks
i) Market risk – building an excellent product or system that no one really wants
ii) Strategic risk – building a product that no longer fits into the overall business strategy for the company
iii)Sales risk – building a product that the sales force doesn't understand how to sell
iv) Management risk – losing the support of senior management due to a change in focus or a change in people
v) Budget risk – losing budgetary or personnel commitment
4) Known risks
Those risks that can be uncovered after careful evaluation of the project plan, the business and technical environment in which the project is being developed, and other reliable information sources (e.g., an unrealistic delivery date)
5) Predictable risks
Those risks that are extrapolated from past project experience (e.g., past turnover)
6) Unpredictable risks
Those risks that can and do occur, but are extremely difficult to identify in advance
Risk Identification
Risk identification is a systematic attempt to specify threats to the project plan. By identifying known
and predictable risks, the project manager takes a first step toward avoiding them when possible and controlling
them when necessary
Generic risks
Risks that are a potential threat to every software project.
Product-specific risks
Risks that can be identified only by those with a clear understanding of the technology, the people, and the environment that is specific to the software that is to be built. This requires examination of the project plan and the statement of scope: "What special characteristics of this product may threaten our project plan?"
Risk Item Checklist
Used as one way to identify risks.
Focuses on known and predictable risks in specific subcategories; it can be organized in several ways:
o A list of characteristics relevant to each risk subcategory
o Questionnaire that leads to an estimate on the impact of each risk
o A list containing a set of risk component and drivers and their probability of occurrence
Known and Predictable Risk Categories
o Product size – risks associated with overall size of the software to be built
o Business impact – risks associated with constraints imposed by management or the marketplace
o Customer characteristics – risks associated with sophistication of the customer and the developer's ability to communicate with the customer in a timely manner
o Process definition – risks associated with the degree to which the software process has been defined and is followed
o Development environment – risks associated with availability and quality of the tools to be used to build the
project
o Technology to be built – risks associated with complexity of the system to be built and the "newness" of the
technology in the system
o Staff size and experience – risks associated with overall technical and project experience of the software
engineers who will do the work
The project manager identifies the risk drivers that affect the following risk components
o Performance risk - the degree of uncertainty that the product will meet its requirements and be fit for its
intended use
o Cost risk - the degree of uncertainty that the project budget will be maintained
o Support risk - the degree of uncertainty that the resultant software will be easy to correct, adapt, and enhance
o Schedule risk - the degree of uncertainty that the project schedule will be maintained and that the product
will be delivered on time
Risk projection
Risk projection (or estimation) attempts to rate each risk in two ways:
o The probability that the risk is real.
o The consequences of the problems associated with the risk, should it occur.
The project planner, managers, and technical staff perform four risk projection steps. The intent of these steps is to consider risks in a manner that leads to prioritization. By prioritizing risks, the software team can allocate limited resources where they will have the most impact.
Steps
1. Establish a scale that reflects the perceived likelihood of a risk (e.g., 1 – low, 10 – high)
2. Delineate the consequences of the risk
3. Estimate the impact of the risk on the project and product
4. Assess the overall accuracy of the risk projection so that there will be no misunderstandings
Risk Table
A risk table provides a project manager with a simple technique for risk projection
It consists of five columns:
o Risk Summary – short description of the risk
o Risk Category – one of seven risk categories
o Probability – estimation of risk occurrence based on group input
o Impact – (1) catastrophic (2) critical (3) marginal (4) negligible
o RMMM – pointer to a paragraph in the Risk Mitigation, Monitoring, and Management Plan
RISK MITIGATION, MONITORING, AND MANAGEMENT
An effective strategy must consider three issues: risk avoidance, risk monitoring, and risk management
and contingency planning. If a software team adopts a proactive approach to risk, avoidance is always the best
strategy. This is achieved by developing a plan for risk mitigation. For example, assume that high staff turnover
is noted as a project risk r1.
Based on past history and management intuition, the likelihood l1 of high turnover is estimated to be
0.70 (70 percent, rather high) and the impact x1 is projected as critical. That is, high turnover will have a critical
impact on project cost and schedule. To mitigate this risk, you would develop a strategy for reducing turnover.
Among the possible steps to be taken are:
• Meet with current staff to determine causes for turnover (e.g., poor working conditions, low pay, and
competitive job market).
• Mitigate those causes that are under your control before the project starts.
• Once the project commences, assume turnover will occur and develop techniques to ensure continuity when
people leave.
• Organize project teams so that information about each development activity is widely dispersed.
• Define work product standards and establish mechanisms to be sure that all models and documents are
developed in a timely manner.
• Conduct peer reviews of all work (so that more than one person is "up to speed").
• Assign a backup staff member for every critical technologist.
As the project proceeds, risk-monitoring activities commence. The project manager monitors factors
that may provide an indication of whether the risk is becoming more or less likely. In the case of high staff
turnover, the general attitude of team members based on project pressures, the degree to which the team has
jelled, interpersonal relationships among team members, potential problems with
compensation and benefits, and the availability of jobs within the company and outside it are all monitored.
In addition to monitoring these factors, a project manager should monitor the effectiveness of risk
mitigation steps. For example, a risk mitigation step noted here called for the definition of work product
standards and mechanisms to be sure that work products are developed in a timely manner. This is one
mechanism for ensuring continuity, should a critical individual leave the project. The project manager should
monitor work products carefully to ensure that each can stand on its own and that each imparts information that
would be necessary if a newcomer were forced to join the software team somewhere in the middle of the project.
Risk management and contingency planning assumes that mitigation efforts have failed and that the
risk has become a reality. Continuing the example, the project is well under way and a number of people
announce that they will be leaving. If the mitigation strategy has been followed, backup is available, information
is documented, and knowledge has been dispersed across the team.
In addition, you can temporarily refocus resources (and readjust the project schedule) to those functions that are fully staffed, enabling newcomers who must be added to the team to "get up to speed." Those individuals who are leaving are asked to stop all work and spend their last weeks in "knowledge transfer mode." This might include video-based knowledge capture, the development of "commentary documents or wikis," and/or meeting with other team members who will remain on the project.
It is important to note that risk mitigation, monitoring, and management (RMMM) steps incur
additional project cost. For example, spending the time to back up every critical technologist costs money. Part
of risk management, therefore, is to evaluate when the benefits accrued by the RMMM steps are outweighed by
the costs associated with implementing them. In essence, you perform a classic cost- benefit analysis.
If risk aversion steps for high turnover will increase both project cost and duration by an estimated 15 percent,
but the predominant cost factor is ―backup,‖ management may decide not to implement this step. On the other
hand, if the risk aversion steps are projected to increase costs by 5 percent and duration by only 3 percent,
management will likely put all into place.
THE RMMM PLAN
A risk management strategy can be included in the software project plan, or the risk management steps
can be organized into a separate risk mitigation, monitoring, and management plan (RMMM). The RMMM plan
documents all work performed as part of risk analysis and is used by the project manager as part of the overall
project plan.
There are three issues in strategy for handling the risk is
1) Risk avoidance 2) Risk Monitoring 3) Risk management
Risk mitigation
Risk mitigation means preventing the risks to occur. Following are the steps to be taken for mitigating the risks:
1. Communicate with the concerned staff to find of probable risk.
2. Find out and eliminate all those causes that can create risk before the project starts.
3. Conduct timely reviews in order to speed up the work.
Risk Monitoring
The risk monitoring process following things must be monitored by the project manager,
1. The approach or the behavior of the team members as pressure of project varies.
2. The degree to which the team performs with the spirit of "teamwork".
3. The type of co-operation among the team members.
4. The types of problems that are occurring.
Risk Management
Project manager performs this task when risk becomes a reality. If project manager is successful in
applying the project mitigation effectively then it becomes very much easy to manage the risks.
Some software teams do not develop a formal RMMM document. Rather, each risk is documented
individually using a risk information sheet. In most cases, the RIS is maintained using a database system so that
creation and information entry, priority ordering, searches, and other analysis may be accomplished easily.
The format of the RIS is illustrated in the figure. Once RMMM has been documented and the project has begun, risk mitigation and monitoring steps commence. As I have already discussed, risk mitigation is a problem avoidance activity. Risk monitoring is a project tracking activity with three primary objectives: (1) to assess whether predicted risks do, in fact, occur; (2) to ensure that
risk aversion steps defined for the risk are being properly applied; and (3) to collect information that can be used
for future risk analysis. In many cases, the problems that occur during a project can be traced to more than one risk. Another job of risk monitoring is to attempt to allocate origin, that is, to determine which risk(s) caused which problems throughout the project.
3. Describe function point analysis with a neat example. Dec:06 Nov: 10
• Function-oriented metrics use a measure of the functionality delivered by the application as a normalization value
• The most widely used metric of this type is the function point: FP = count total × [0.65 + 0.01 × Σ(value adjustment factors)]
• Function point values on past projects can be used to compute, for example, the average number of lines of code per function point (e.g., 60)
• Like the KLOC measure, function point use also has proponents and opponents
• Proponents claim that
o FP is programming language independent
o FP is based on data that are more likely to be known in the early stages of a project, making it more attractive as an estimation approach
• Opponents claim that
o FP requires some "sleight of hand" because the computation is based on subjective data
o Counts of the information domain can be difficult to collect after the fact
o FP has no direct physical meaning… it's just a number
Function points are derived using
1. Countable measures of the software requirements domain.
2. Assessments of the software complexity.
Calculate Function Point
The data for following information domain characteristics are collected.
1. Number of user inputs – Each user input which provides distinct application data to the software is counted.
2. Number of user outputs – Each user output that provides application data to the user is counted, e.g –
Screens, reports, error messages.
3. Number of user inquiries – An on-line input that results in the generation of some immediate software
response in the form of an output.
4. Number of files – Each logical master file, i.e a logical grouping of data that may be part of a database or a
separate file.
5. Number of external interfaces – All machine readable interfaces that are used to transmit information to
another system are counted.
The organization needs to develop criteria which determine whether a particular entry is simple, average, or complex. The weighting factors should be determined by observations or by experiments.
Information Domain Characteristics     Count    Weighting Factor
                                                Simple  Average  Complex
Number of user inputs                  ___ ×      3        4        6
Number of user outputs                 ___ ×      4        5        7
Number of user inquiries               ___ ×      3        4        6
Number of files                        ___ ×      7       10       15
Number of external interfaces          ___ ×      5        7       10
Count total                            ___
The count total can be computed with the help of the above table.
Now the software complexity can be computed by answering the following questions. These are the complexity adjustment values (Fi).
• Rate the above factors according to the given scale.
• Function Points (FP) = count total × [0.65 + (0.01 × Σ(Fi))]
Once the function point is calculated, we can compute various measures as follows:
• Productivity = FP / person-month
• Quality = Number of faults / FP
• Cost = $ / FP
• Documentation = Pages of documentation / FP
3b) Consider the following function point components and their complexity. If the total degree of influence is 52, find the estimated function points.

Function Type    Estimated Count    Complexity
EIF                     2                7
ILF                     4               10
EQ                     22                4
EO                     16                5
EI                     24                4
Solution:
FP = UFC × VAF
where FP = Function Point, UFC = unadjusted function point count (the count total), and VAF = Value Adjustment Factor.
UFC = (2 × 7) + (4 × 10) + (22 × 4) + (16 × 5) + (24 × 4) = 318
VAF = [0.65 + (0.01 × Σ(Fi))] = [0.65 + (0.01 × 52)] = 1.17
FP estimated = 318 × 1.17 = 372.06 ≈ 372
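A small Python sketch reproducing the calculation above (counts and weights taken from the worked example):

# Unadjusted function point count: sum of (estimated count x weight).
components = {
    "EIF": (2, 7),
    "ILF": (4, 10),
    "EQ":  (22, 4),
    "EO":  (16, 5),
    "EI":  (24, 4),
}
ufc = sum(count * weight for count, weight in components.values())  # 318

tdi = 52                         # total degree of influence, sum(Fi)
vaf = 0.65 + 0.01 * tdi          # value adjustment factor = 1.17
print(round(ufc * vaf))          # FP = 372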