SE - QB Updated With Ans
(AUTONOMOUS)
PREPARED BY: Mrs. K. P. Revathi    APPROVED BY: HOD
UNIT I
INTRODUCTION
PART - A
3. What is Software?
Software is a collection of computer programs, together with related documents, that are intended to provide the desired
features, functionality, and performance.
Agile model | Waterfall model
The agile method proposes an incremental and iterative approach to software design. | Development of the software flows sequentially from start point to end point.
The agile process is broken into individual models that designers work on. | The design process is not broken into individual models.
The customer has early and frequent opportunities to look at the product and make decisions and changes to the project. | The customer can only see the product at the end of the project.
The agile model is considered unstructured compared to the waterfall model. | Waterfall models are more secure because they are so plan-oriented.
Small projects can be implemented very quickly; for large projects, it is difficult to estimate the development time. | All sorts of projects can be estimated and completed.
Errors can be fixed in the middle of the project. | The whole product is tested only at the end; if a requirement error is found or any changes have to be made, the project has to start from the beginning.
PART B :
1. Neatly explain all the Prescriptive process models and Specialized process models May: 03, 05,
06,09,10,14,16 Dec : 04,08,09,12 ,16
"Prescriptive" means that the model prescribes a set of process elements—framework activities, software engineering
actions, tasks, work products, quality assurance, and change control mechanisms—for each project. Each
process model also prescribes a process flow (also called a work flow)—that is, the manner in which
the process elements are interrelated to one another.
The software process model is also known as the Software Development Life Cycle (SDLC) model or
software paradigm.
The various prescriptive process models are:
Waterfall Model
Incremental Model
Prototyping
RAD Model
Spiral Model
Concurrent Model
Fig: The waterfall model: communication, planning (estimation), modeling, construction, deployment (delivery)
The waterfall model, sometimes called the classic life cycle, is a systematic, sequential approach to
software development that begins with customer specification of requirements and progresses through planning,
modeling, construction, and deployment, culminating in ongoing support of the completed software. A variation
in the representation of the waterfall model is called the V-model. Represented in figure, the V-model depicts
the relationship of quality assurance actions to the actions associated with
communication, modeling, and early construction activities. As the software team moves down the left
side of the V, basic problem requirements are refined into progressively more detailed and technical
representations of the problem and its solution. Once code has been generated, the team moves up the right side
of the V, essentially performing a series of tests that validate each of the models created as the team moved
down the left side.
The V-model provides a way of visualizing how verification and validation actions are applied to
earlier engineering work. The waterfall model is the oldest paradigm for software engineering.
Disadvantages:
1. It is difficult to follow a strictly sequential flow in a real software development process; if changes are made at
some phase, they may cause confusion.
2. The requirement analysis is done initially and sometimes it is not possible to state all the requirements
explicitly in the beginning. This causes difficulty in the projects.
3. The customer can see the working model of the project only at the end. After reviewing the working
model, if the customer is dissatisfied, it causes serious problems.
The waterfall model can serve as a useful process model in situations where requirements are fixed and work is
to proceed to completion in a linear manner.
2. Incremental Process Models
The incremental model delivers a series of releases to the customer; these releases are called increments.
It is a process model designed to produce the software in increments. The incremental model
combines elements of linear and parallel process flows: the incremental model applies linear sequences
in a staggered fashion as calendar time progresses. Each linear sequence produces deliverable
"increments" of the software, in a manner that is similar to the increments produced by an
evolutionary process flow.
For example, word-processing software developed using the incremental paradigm might deliver basic
file management, editing, and document production functions in the first increment; more sophisticated
editing and document production capabilities in the second increment; spelling and grammar checking
in the third increment; and advanced page layout capability in the fourth increment. It should be noted
that the process flow for any increment can incorporate the prototyping paradigm. When an
incremental model is used, the first increment is often a core product.
The incremental process model focuses on the delivery of an operational product with each increment.
Early increments are stripped-down versions of the final product, but they do provide capability that
serves the user and also provide a platform for evaluation by the user. Incremental development is
particularly useful when staffing is unavailable for a complete implementation by the business deadline
that has been established for the project.
Early increments can be implemented with fewer people. If the core product is well received, then
additional staff (if required) can be added to implement the next increment. In addition, increments can
be planned to manage technical risks.
Advantages
Incremental development is particularly useful when staffing is unavailable for a complete
implementation by the business deadline that has been established for the project. Early increments can
be implemented with fewer people.
i) RAD Model
The RAD (Rapid Application Development) model is a type of incremental model. In the RAD
model the components or functions are developed in parallel as if they were mini projects. The developments
are time boxed, delivered and then assembled into a working prototype. This can quickly give the customer
something to see and use and to provide feedback regarding the delivery and their requirements.
Business modeling: The information flow is identified between various business functions.
Data modeling: Information gathered from business modeling is used to define data objects that are needed for
the business.
Process modeling: Data objects defined in data modeling are converted to achieve the business information
flow to achieve some specific business objective.
Descriptions are identified and created for the CRUD (create, read, update, delete) operations on data objects.
Application generation: Automated tools are used to convert process models into code and the actual system.
Testing and turnover: Test new components and all the interfaces.
Advantages of the RAD model:
Reduced development time.
Increases reusability of components
Quick initial reviews occur
Encourages customer feedback
Integration from the very beginning solves a lot of integration issues.
ii) Prototyping
The prototyping paradigm begins with communication. We meet with other stakeholders to define the
overall objectives for the software, identify whatever requirements are known, and outline areas where further
definition is mandatory. Prototyping iteration is planned quickly, and modeling occurs. A quick design focuses
on a representation of those aspects of the software that will be visible to end users (e.g., human interface layout
or output display formats). The quick design leads to the construction of a prototype.
The prototype is deployed and evaluated by stakeholders, who provide feedback that is used to further refine
requirements. Iteration occurs as the prototype is tuned to satisfy the needs of various stakeholders, while at
the same time enabling you to better understand what needs to be done. The prototype serves as a mechanism
for identifying software requirements. If a working prototype is to be built, you can make use of existing
program fragments or apply tools (e.g., report generators and window managers) that enable working programs
to be generated quickly.
Disadvantages:
In the first version itself, the customer often wants a "few fixes" rather than rebuilding of the system, whereas
rebuilding the system would maintain a high level of quality.
Sometimes developer may make implementation compromises to get prototype working quickly. Later on
developer may become comfortable with compromises and forget why they are inappropriate.
iii) Spiral Model
Using the spiral model, software is developed in a series of evolutionary releases, with risk considered as each
revolution of the spiral is made. Anchor point milestones—a combination of work products and conditions that are attained along
the path of the spiral—are noted for each evolutionary pass.
The first circuit around the spiral might result in the development of a product specification; each pass
through the planning region results in adjustments to the project plan. Cost and schedule are adjusted based on
feedback derived from the customer after delivery. The project manager adjusts the planned number of iterations
required to complete the software.
Advantages
The spiral model is a realistic approach to the development of large-scale systems and software.
Because software evolves as the process progresses, the developer and customer better understand and react to
risks at each evolutionary level.
Concurrent Models
The concurrent development model, sometimes called concurrent engineering, allows a software team
to represent iterative and concurrent elements of any of the process models. Figure provides a schematic
representation of one software engineering activity.
The activity—modeling—may be in any one of the states noted at any given time. Similarly, other
activities, actions, or tasks (e.g., communication or construction) can be represented in an analogous manner.
For example, early in a project the communication activity has completed its first iteration and exists in the
awaiting changes state.
The modeling activity, which existed in the inactive state while initial communication was being completed,
now makes a transition into the under development state. If, however, the customer indicates that changes in
requirements must be made, the modeling activity moves from the under development state into the awaiting
changes state.
Concurrent modeling defines a series of events that will trigger transitions from state to state for each
of the software engineering activities, actions, or tasks. For example, an analysis model correction event
will trigger the requirements analysis action to move from the done state into the awaiting changes state.
PART B:
1. Explain functional & non-functional requirements. May: 14,16
Requirement engineering is the process of establishing the services that the customer requires from a system and the constraints
under which it operates and is developed. The requirements themselves are the descriptions of the system services and constraints that are
generated during the requirements engineering process. A requirement can range from a high-level abstract statement of a service or of a
system constraint to a detailed mathematical functional specification.
Types of Requirements
The requirements specify what the system does, and the design specifies how it does it.
System requirements should simply describe the external behavior of the system and its operational constraints. They should not be
concerned with how the system should be designed or implemented. For a complex software system design it is necessary to give all the
requirements in detail. Usually, natural language is used to write system requirements specification and user requirements.
Software specification
It is a detailed software description that can serve as a basis for design or implementation. Typically it is written for software
developers.
Functional Requirements
Functional requirements should describe all the required functionality or system services.
The customer should provide statement of service. It should be clear how the system should react to particular inputs and how a
particular system should behave in particular situation. Functional requirements are heavily dependent upon the type of software, expected
users and the type of system where the software is used.
Functional user requirements may be high-level statements of what the system should do but functional system requirements
should describe the system services in detail.
For example: Consider a library system in which there is a single interface provided to multiple databases. These databases are
collection of articles from different libraries. A user can search for, download and print these articles for a personal study.
From this example we can obtain functional Requirements as-
The user shall be able to search either all of the initial set of databases or select a subset from it.
The system shall provide appropriate viewers for the user to read documents in the document store.
A unique identifier (ORDER_ID) shall be allocated to every order, which the user shall be able to copy to the account's permanent
storage area.
Problems Associated with Requirements
Requirements imprecision: imprecisely stated requirements are ambiguous and may be interpreted in different ways by developers
and users, leading to disputes later in the project.
Non-Functional Requirements
Non-functional requirements are requirements that are not directly concerned with the specific functions delivered by the system.
They typically relate to the system as a whole rather than to individual system features, and they can often be the deciding factor
in the survival of the system (e.g., reliability, cost, response time).
Organizational requirements
The requirements which are consequences of organizational policies and procedures come under this category, for instance:
process standards used and implementation requirements.
2. Narrate the importance of SRS. Explain typical SRS structure and its parts. . Show the IEEE template of SRS document. Dec:
05, Nov: 12 ,May :16
An SRS is basically an organization's understanding (in writing) of a customer or potential client's system requirements and
dependencies at a particular point in time (usually) prior to any actual design or development work. It's a two-way insurance policy that
assures that both the client and the organization understand the other's requirements from that perspective at a given point in time.
The SRS document itself states in precise and explicit language those functions and capabilities a software system (i.e., a
software application, an eCommerce Web site, and so on) must provide, as well as states any required constraints by which the system must
abide. The SRS also functions as a blueprint
for completing a project with as little cost growth as possible. The SRS is often referred to as the "parent" document because all subsequent
project management documents, such as design specifications, statements of work, software architecture specifications, testing and
validation plans, and documentation plans, are related to it.
It's important to note that an SRS contains functional and nonfunctional requirements only; it doesn't offer design suggestions,
possible solutions to technology or business issues, or any other information other than what the development team understands the
customer's system requirements to be.
A well-designed, well-written SRS accomplishes four major goals:
It provides feedback to the customer. An SRS is the customer's assurance that the development organization understands the
issues or problems to be solved and the software behavior necessary to address those problems. Therefore, the SRS should be written in
natural language (versus a formal language, explained later in this article), in an unambiguous manner that may also include charts, tables,
data flow diagrams, decision tables, and so on.
It decomposes the problem into component parts. The simple act of writing down software requirements in a well-designed
format organizes information, places borders around the problem, solidifies ideas, and helps break down the problem into its component
parts in an orderly fashion.
It serves as an input to the design specification. As mentioned previously, the SRS serves as the parent document to subsequent
documents, such as the software design specification and statement of work. Therefore, the SRS must contain sufficient detail in the
functional system requirements so that a design solution can be devised.
It serves as a product validation check. The SRS also serves as the parent document for testing and validation strategies that will
be applied to the requirements for verification.
SRSs are typically developed during the first stage of "Requirements Development," which is the initial product development
phase in which information is gathered about which requirements are needed and which are not. This information-gathering stage can include onsite
visits, questionnaires, surveys, interviews, and perhaps a return-on-investment (ROI) analysis or needs analysis of the customer or client's
current business environment. The actual specification, then, is written after the requirements have been gathered and analyzed.
Involving technical writers in the SRS development process can offer several benefits:
Technical writers are skilled information gatherers, ideal for eliciting and articulating customer requirements. The presence of a
technical writer on the requirements-gathering team helps balance the type and amount of information extracted from customers, which can
help improve the SRS.
Technical writers can better assess and plan documentation projects and better meet customer document needs. Working on SRSs
provides technical writers with an opportunity for learning about customer needs firsthand--early in the product development process.
Technical writers know how to determine the questions that are of concern to the user or customer regarding ease of use and
usability. Technical writers can then take that knowledge and apply it not only to the specification and documentation development, but also
to user interface development, to help ensure the UI (User Interface) models the customer requirements.
Technical writers involved early and often in the process can become an information resource throughout the process, rather than
an information gatherer at the end of the process.
The IEEE has identified nine topics that must be addressed when designing and writing an SRS:
Interfaces
Functional Capabilities
Performance Levels
Data Structures/Elements
Safety
Reliability
Security/Privacy
Quality
A traceability matrix
UNIT – III SOFTWARE DESIGN
7. What is coupling?
Coupling is the measure of interconnection among modules in a program structure. It depends on the interface complexity
between modules.
The architectural design defines the relationship between major structural elements of the software, the “design patterns” that can
be used to achieve the requirements that have been defined for the system.
PART B:
1. Explain about the various design process & design concepts considered during design. May:
03.06,07,08, Dec: 05
Software design is an iterative process through which requirements are translated into a "blueprint"
for constructing the software. Initially, the blueprint depicts a holistic view of software. That is, the design is
represented at a high level of abstraction—a level that can be directly traced to the specific system objective and
more detailed data, functional, and behavioral requirements.
Three characteristics that serve as a guide for the evaluation of a good design:
• The design must implement all of the explicit requirements contained in the requirements model, and it must
accommodate all of the implicit requirements desired by stakeholders.
• The design must be a readable, understandable guide for those who generate code and for those who test and
subsequently support the software.
• The design should provide a complete picture of the software, addressing the data, functional, and behavioral
domains from an implementation perspective.
Some guidelines:
1. A design should exhibit an architecture that (1) has been created using recognizable architectural styles or
patterns, (2) is composed of components that exhibit good design characteristics and (3) can be implemented in
an evolutionary fashion, thereby facilitating implementation and testing.
2. A design should be modular; that is, the software should be logically partitioned into elements or
subsystems.
3. A design should contain distinct representations of data, architecture, interfaces, and components.
4. A design should lead to data structures that are appropriate for the classes to be implemented and are drawn
from recognizable data patterns.
5. A design should lead to components that exhibit independent functional characteristics.
6. A design should lead to interfaces that reduce the complexity of connections between components and with
the external environment.
7. A design should be derived using a repeatable method that is driven by information obtained during
software requirements analysis.
8. A design should be represented using a notation that effectively communicates its meaning.
Quality Attributes
• Usability is assessed by considering human factors, overall aesthetics, consistency, and documentation.
• Reliability is evaluated by measuring the frequency and severity of failure, the accuracy of output results, the
mean-time-to-failure (MTTF), the ability to recover from failure, and the predictability of the program.
• Performance is measured by considering processing speed, response time, resource consumption,
throughput, and efficiency.
• Supportability combines the ability to extend the program (extensibility), adaptability, and serviceability—
these three attributes represent a more common term, maintainability—and in addition, testability, compatibility,
and configurability.
Design Concepts
The software design concept provides a framework for implementing the right software. Following are
certain issues that are considered while designing the software:
i) Abstraction means the ability to cope with complexity. At the highest level of abstraction, a
solution is stated in broad terms using the language of the problem environment. At lower levels of abstraction,
a more detailed description of the solution is provided. A procedural abstraction refers to a sequence of
instructions that have a specific and limited function.
An example of a procedural abstraction would be the word open for a door. Open implies a long sequence of
procedural steps (e.g., walk to the door, reach out and grasp knob, turn knob and pull door, step away from
moving door, etc.) A data abstraction is a named collection of data that describes a data object. In the context of
the procedural abstraction open, we can define a data abstraction called door. Like any data object, the data
abstraction for door would encompass a set of attributes that describe the door (e.g., door type, swing direction,
opening mechanism, weight, dimensions). It follows that the procedural abstraction open would make use of
information contained in the attributes of the data abstraction door.
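The door/open example can be made concrete in code. Below is a minimal Python sketch (the class, attribute, and function names are illustrative, not taken from the text):

class Door:
    """Data abstraction: a named collection of attributes that describes a door."""
    def __init__(self, door_type, swing_direction, weight):
        self.door_type = door_type
        self.swing_direction = swing_direction
        self.weight = weight
        self.is_open = False

def open_door(door):
    """Procedural abstraction: 'open' names a specific, limited sequence of steps."""
    # walk to the door, reach out and grasp the knob, turn the knob, pull the door...
    door.is_open = True

As in the text, the procedural abstraction (open_door) makes use of the information contained in the attributes of the data abstraction (Door).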
ii) Software architecture means "the overall structure of the software and the ways in which that structure
provides conceptual integrity for a system". Architecture is the structure or organization of program components
(modules), the manner in which these components interact, and the structure of data that are used by the
components. The architectural design can be represented using one or more of a number of different models
Structural models represent architecture as an organized collection of program components.
Framework models increase the level of design abstraction by attempting to identify repeatable architectural
design frameworks that are encountered in similar types of applications.
Dynamic models address the behavioral aspects of the program architecture, indicating how the structure or
system configuration may change as a function of external events.
iii) Information Hiding: The principle of information hiding suggests that modules be "characterized by
design decisions that (each) hides from all others." In other words, modules should be specified and
designed so that information (algorithms and data) contained within a module is inaccessible to other modules
that have no need for such information.
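A minimal Python sketch of information hiding (a hypothetical Stack class whose internal storage is hidden behind push and pop operations):

class Stack:
    def __init__(self):
        self._items = []  # hidden detail: clients need push/pop, not the list itself

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

If the internal list is later replaced by, say, a linked structure, no client module has to change; that insulation from change is the benefit information hiding provides.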
iv) The concept of functional independence is a direct outgrowth of separation of concerns, modularity, and
the concepts of abstraction and information hiding. Independence is assessed using two qualitative criteria:
cohesion and coupling. Cohesion is an indication of the relative functional strength of a module. Coupling is an
indication of the relative interdependence among modules. Cohesion is a natural extension of the information-
hiding concept. A cohesive module performs a single task, requiring little interaction with other components in
other parts of a program. Coupling is an indication of interconnection among modules in a software structure.
Coupling depends on the interface complexity between modules, the point at which entry or reference is made to
a module, and what data pass across the interface.
v) Refinement is actually a process of elaboration. Refinement helps to reveal low-level details as design
progresses. Both concepts allow you to create a complete design model as the design evolves.
vi) Refactoring is the process of changing a software system in such a way that it does not alter the external
behavior of the code yet improves its internal structure. When software is refactored, the existing design is
examined for redundancy, unused design elements, inefficient or unnecessary algorithms, poorly constructed or
inappropriate data structures, or any other design failure that can be corrected to yield a better design.
vii) Design classes refine the analysis classes by providing design detail that will enable the classes to
be implemented, and implement a software infrastructure that supports the business solution.
Five different types of design classes, each representing a different layer of the design architecture, can be
developed:
• User interface classes define all abstractions that are necessary for human computer interaction (HCI). In
many cases, HCI occurs within the context of a metaphor (e.g., a checkbook, an order form, a fax machine), and
the design classes for the interface may be visual representations of the elements of the metaphor.
• Business domain classes identify the attributes and services (methods) that are required to
implement some element of the business domain.
• Process classes implement lower-level business abstractions required to fully manage the business domain
classes.
• Persistent classes represent data stores (e.g., a database) that will persist beyond the execution of the
software.
• System classes implement software management and control functions that enable the system to operate and
communicate within its computing environment and with the outside world.
The architecture is not the operational software. Rather, it is a representation that enables a software
engineer to:
(1) Analyze the effectiveness of the design in meeting its stated requirements,
(2) Consider architectural alternatives at a stage when making design changes is still relatively easy, and
(3) Reduce the risks associated with the construction of the software.
An architectural model or style is a pattern for creating the system architecture for a given problem. An architectural
style is a descriptive mechanism to differentiate a house from other styles (e.g., A-frame, raised ranch, Cape
Cod). But more important, the architectural style is also a template for construction.
The software that is built for computer-based systems also exhibits one of many architectural styles.
Each style describes a system category that encompasses
(1) a set of components (e.g., a database, computational modules) that perform a function required by a
system; (2) a set of connectors that enable "communication, coordination and cooperation" among
components; (3) constraints that define how components can be integrated
to form the system; and (4) semantic models that enable a designer to understand the overall properties of a
system. An architectural style is a transformation that is imposed on the design of an entire system.
The intent is to establish a structure for all components of the system. An architectural pattern, like an
architectural style, imposes a transformation on the design of architecture. However, a pattern differs from a
style in a number of fundamental ways: (1) the scope of a pattern is less broad, focusing on one aspect of the
architecture rather than the architecture in its entirety; (2) a pattern imposes a rule on the architecture, describing
how the software will handle some aspect of its functionality at the infrastructure level (e.g., concurrency)
1. Data-centered architectures. A data store (e.g., a file or database) resides at the center of this architecture
and is accessed frequently by other components that update, add, delete, or otherwise modify data within the
store. Figure illustrates a typical data-centered style. Client software accesses a central repository. In some cases
the data repository is passive. That is, client software accesses the data independent of any changes to the data or
the actions of other client software. A variation on this approach transforms the repository into a "blackboard"
that sends notifications to client software when data of interest to the client changes. Data-centered
architectures promote integrability. That is, existing components can be changed and new client components
can be added to the architecture without concern
about other clients (because the client components operate independently). In addition, data can be passed
among clients using the blackboard mechanism (i.e., the blackboard component serves to coordinate the transfer
of information between clients). Client components independently execute processes.
2. Data-flow architectures. This architecture is applied when input data are to be transformed through a series
of computational or manipulative components into output data. A pipe-and-filter pattern has a set of
components, called filters, connected by pipes that transmit data from one component to the next. Each filter
works independently of those components upstream and downstream, is designed to expect data input of a
certain form, and produces data output (to the next filter) of a specified form. However, the filter does not
require knowledge of the workings of its neighboring filters. If the data flow degenerates into a single line of
transforms, it is termed batch sequential. This structure accepts a batch of data and then applies a series of
sequential components (filters) to transform it.
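A minimal pipe-and-filter sketch in Python, assuming three hypothetical text-processing filters. Each filter expects input of a certain form and produces output of a specified form for the next filter, without knowing how its neighbors work:

def split_lines(text):          # filter 1: raw text -> list of lines
    return text.splitlines()

def drop_blank_lines(lines):    # filter 2: list of lines -> non-blank lines only
    return [line for line in lines if line.strip()]

def uppercase(lines):           # filter 3: list of lines -> uppercased lines
    return [line.upper() for line in lines]

def pipeline(text):
    # the "pipe": a batch-sequential flow that applies the filters in order
    data = text
    for filt in (split_lines, drop_blank_lines, uppercase):
        data = filt(data)
    return data

print(pipeline("alpha\n\nbeta"))   # prints ['ALPHA', 'BETA']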
3. Call and return architectures. This architectural style enables you to achieve a program structure that is
relatively easy to modify and scale. Two sub-styles exist within this category: main program/subprogram
architecture, which decomposes a main program into a hierarchy of subprograms, and remote procedure call
architecture, in which the components of a main program/subprogram architecture are distributed across
computers on a network.
Fig: Layered Architecture
As architectural design begins, the software to be developed must be put into context—that is, the
design should define the external entities (other systems, devices, people) that the software interacts with and
the nature of the interaction. This information can generally be acquired from the requirements model and all
other information gathered during requirements engineering. Once context is modeled and all external software
interfaces have been described, you can identify a set of architectural archetypes. An archetype is an abstraction
(similar to a class) that represents one element of system behavior. The set of archetypes provides a collection
of abstractions that must be modeled architecturally if the system is to be constructed, but the archetypes
themselves do not provide enough implementation detail.
Representing the System in Context
At the architectural design level, a software architect uses an architectural context diagram (ACD) to
model the manner in which software interacts with entities external to its boundaries. The generic structure of
the architectural context diagram is illustrated in figure.
Referring to the figure, systems that interoperate with the target system (the system for which an architectural
design is to be developed) are represented as
• Superordinate systems—those systems that use the target system as part of some higher-level processing
scheme.
• Subordinate systems—those systems that are used by the target system and provide data or processing that
are necessary to complete target system functionality.
• Peer-level systems—those systems that interact on a peer-to-peer basis (i.e., information is either produced or
consumed by the peers and the target system).
• Actors—entities (people, devices) that interact with the target system by producing or consuming information
that is necessary for requisite processing.
3. Describe the concepts of cohesion and coupling. State the difference between cohesion and coupling with
suitable examples. May: 03, 08, 15, 16.
Cohesion
• Cohesion is the "single-mindedness" of a component.
• It implies that a component or class encapsulates only attributes and operations that are closely related to
one another and to the class or component itself.
• The objective is to keep cohesion as high as possible.
• The kinds of cohesion can be ranked in order from highest (best) to lowest (worst):
Functional cohesion
A module performs one and only one computation and then returns a result.
Layer cohesion
A higher layer component accesses the services of a lower layer component.
Communicational cohesion
All operations that access the same data are defined within one class.
Sequential cohesion
Components or operations are grouped in a manner that allows the first to provide input to the next and so on,
in order to implement a sequence of operations.
Procedural cohesion
Components or operations are grouped in a manner that allows one to be invoked immediately after the
preceding one was invoked, even when no data are passed between them.
Temporal cohesion
Operations are grouped to perform a specific behavior or establish a certain state, such as program start-up or
when an error is detected.
Utility cohesion
Components, classes, or operations are grouped within the same category because of similar general functions
but are otherwise unrelated to each other.
Coupling
• As the amount of communication and collaboration increases between operations and classes, the complexity
of the computer-based system also increases
• As complexity rises, the difficulty of implementing, testing, and maintaining software also increases
• Coupling is a qualitative measure of the degree to which operations and classes are connected to one another
• The objective is to keep coupling as low as possible
• The kinds of coupling can be ranked in order from lowest (best) to highest (worst):
Data coupling
Operation A() passes one or more atomic data operands to operation B(); the smaller the number of operands,
the lower the level of coupling.
Stamp coupling
A whole data structure or class instantiation is passed as a parameter to an operation.
Control coupling
Operation A() invokes operation B() and passes a control flag to B() that directs logical flow within B().
Consequently, a change in B() can require a change to be made to the meaning of the control flag passed by
A(); otherwise an error may result.
Common coupling
A number of components all make use of a global variable, which can lead to uncontrolled error propagation
and unforeseen side effects.
Content coupling
One component secretly modifies data that is stored internally in another component.
Subroutine call coupling
When one operation is invoked, it invokes another operation inside of it.
Type use coupling
Component A uses a data type defined in component B, such as for an instance variable or a local variable
declaration. If or when the type definition changes, every component that declares a variable of that data type
must also change.
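The contrast between the lowest and one of the highest kinds of coupling can be sketched in Python (the function and variable names are hypothetical):

# Data coupling (low): A() passes atomic data operands to B()
def monthly_interest(principal, rate):
    return principal * rate / 12

# Common coupling (high): several functions share a global variable, so an
# error in one can propagate invisibly to the others
balance = 0.0

def deposit(amount):
    global balance
    balance += amount

def withdraw(amount):
    global balance
    balance -= amount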
S.No | Coupling | Cohesion
1. | Coupling represents how modules are connected with other modules or with the outside world. | A cohesive module performs only one thing.
2. | With coupling, interface complexity is decided. | With cohesion, data hiding can be done.
3. | The goal of coupling is to achieve the lowest coupling. | The goal of cohesion is to achieve high cohesion.
4. | Various types of coupling are: data coupling, control coupling, common coupling, content coupling. | Various types of cohesion are: coincidental cohesion, logical cohesion, temporal cohesion, procedural cohesion.
PART B:
1. Illustrate white box testing. May: 04, 07, Dec: 07, May: 15
White-box testing uses the control structure described as part of component-level design to derive test cases.
Using white-box testing methods, we can derive test cases that:
Guarantee that all independent paths within a module have been exercised at least once
Exercise all logical decisions on their true and false sides,
Execute all loops at their boundaries and within their operational bounds
Exercise internal data structures to ensure their validity.
Basis path testing:
Basis path testing is a white-box testing technique used to derive a logical complexity measure of a
procedural design.
Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at
least one time.
Methods:
1. Flow graph notation
2. Independent program paths or Cyclomatic complexity
3. Deriving test cases
4. Graph Matrices
Flow Graph Notation:
Basis path testing starts with a simple notation for the representation of control flow, called a flow graph,
which represents logical control flow.
Fig. A represents a program control structure and Fig. B maps the flowchart into a corresponding flow graph.
In Fig. B each circle, called a flow graph node, represents one or more procedural statements.
A sequence of process boxes and a decision diamond can map into a single node.
The arrows on the flow graph, called edges or links, represent flow of control and are parallel to
flowchart arrows.
An edge must terminate at a node, even if the node does not represent any procedural statement.
Areas bounded by edges and nodes are called regions. When counting regions, we include the area
outside the graph as a region.
When compound conditions are encountered in a procedural design, the flow graph becomes slightly more
complicated.
When translating a PDL segment into a flow graph, a separate node is created for each condition.
Each node that contains a condition is called a predicate node and is characterized by two or more edges
emanating from it.
Independent program paths or Cyclomatic complexity:
An independent path is any path through the program that introduces at least one new set of processing
statements or a new condition.
For example, a set of independent paths for flow graph:
Path 1: 1-11
Path 2: 1-2-3-4-5-10-1-11
Path 3: 1-2-3-6-8-9-1-11
Path 4: 1-2-3-6-7-9-1-11
Note that each new path introduces a new edge.
The path 1-2-3-4-5-10-1-2-3-6-8-9-1-11 is not considered to be an independent path because it is simply
a combination of already specified paths and does not traverse any new edges.
Test cases should be designed to force execution of these paths (basis set).
Every statement in the program should be executed at least once, and every condition will have been
executed on both its true and false sides.
Cyclomatic complexity is a software metric that provides a quantitative measure of the logical
complexity of a program.
It defines the number of independent paths in the basis set and also provides the number of tests that
must be conducted.
Cyclomatic complexity can be computed in one of three ways:
1. The number of regions of the flow graph corresponds to the cyclomatic complexity.
2. Cyclomatic complexity, V(G), for a flow graph, G, is defined as V(G) = E - N + 2, where E is the number
of flow graph edges and N is the number of flow graph nodes.
3. Cyclomatic complexity, V(G), for a flow graph, G, is also defined as V(G) = P + 1, where P is the number
of predicate nodes.
So the value of V(G) provides us with an upper bound on the number of test cases that must be designed.
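A worked example, assuming the classic textbook flow graph referred to above, which has N = 11 nodes, E = 13 edges, and P = 3 predicate nodes:
V(G) = E - N + 2 = 13 - 11 + 2 = 4
V(G) = P + 1 = 3 + 1 = 4
Number of regions = 4
All three computations agree, so at most four test cases (one for each of the four independent paths listed above) are needed to exercise the basis set.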
Deriving Test Cases:
Deriving test cases is a series of steps. Consider the procedure average, depicted in PDL: average, an
extremely simple algorithm, contains compound conditions and loops. To derive the basis set, follow these steps:
1. Using the design or code as a foundation, draw a corresponding flow graph. A flow graph is created by
numbering those PDL statements that will be mapped into corresponding flow graph nodes.
2. Determine the cyclomatic complexity of the resultant flow graph.
3. Determine a basis set of linearly independent paths.
4. Prepare test cases that will force execution of each path in the basis set.
2. Explain the various types of black box testing methods. Dec: 07,16 May: 15
Black box testing:
Also called behavioral testing, focuses on the functional requirements of the software.
It enables the software engineer to derive sets of input conditions that will fully exercise all functional
requirements for a program.
Black-box testing is not an alternative to white-box techniques; rather, it is a complementary approach.
Black-box testing attempts to find errors in the following categories:
Incorrect or missing functions,
Interface errors,
Errors in data structures or external data base access.
Behavior or performance errors,
Initialization and termination errors.
Black-box testing purposely ignores control structure; attention is focused on the information domain.
Tests are designed to answer the following questions:
How is functional validity tested?
How is system behavior and performance tested?
What classes of input will make good test cases?
By applying black-box techniques, we derive a set of test cases that satisfy the following criteria:
Test cases that reduce the number of additional test cases that must be designed to achieve
reasonable testing (i.e., minimize effort and time)
Test cases that tell us something about the presence or absence of classes of errors
Black box testing methods
Graph-based testing methods
Equivalence partitioning
Boundary value analysis (BVA)
Orthogonal array testing
Graph-Based Testing Methods:
The first step is to understand the objects that are modeled in software and the relationships that connect
these objects.
The next step is to define a series of tests that verify that "all objects have the expected relationship to one
another."
Stated another way:
o Create a graph of important objects and their relationships
o Develop a series of tests that will cover the graph
So that each object and relationship is exercised and errors are uncovered.
Begin by creating a graph consisting of:
a collection of nodes that represent objects
links that represent the relationships between objects
node weights that describe the properties of a node
link weights that describe some characteristic of a link.
Nodes are represented as circles connected by links that take a number of different forms.
A directed link (represented by an arrow) indicates that a relationship moves in only one direction.
A bidirectional link, also called a symmetric link, implies that the relationship applies in both
directions.
Parallel links are used when a number of different relationships are established between graph nodes.
An undirected link establishes a symmetric relationship between the new file menu select and
document text,
parallel links indicate relationships between document window and document text
A number of behavioral testing methods can make use of graphs:
Transaction flow modeling
o The nodes represent steps in some transaction and the links represent the logical connections
between steps.
Finite state modeling
o The nodes represent different user-observable states of the software and the links represent the
transitions that occur to move from state to state (starting point to ending point).
Data flow modeling
o The nodes are data objects and the links are the transformations that occur to translate one data
object into another.
Timing modeling
o The nodes are program objects and the links are the sequential connections between those objects.
o Link weights are used to specify the required execution times as the program executes.
Equivalence Partitioning:
Equivalence partitioning is a black-box testing method that divides the input domain of a program into
classes of data from which test cases can be derived.
Test case design for equivalence partitioning is based on an evaluation of equivalence classes for an
input condition.
An equivalence class represents a set of valid or invalid states for input conditions.
Typically, an input condition is a specific numeric value, a range of values, a set of related values, or a
Boolean condition.
To define equivalence classes, follow these guidelines:
If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
If an input condition requires a specific value, one valid and two invalid equivalence classes are
defined.
If an input condition specifies a member of a set, one valid and one invalid equivalence class are
defined.
If an input condition is Boolean, one valid and one invalid class are defined.
Example: suppose the user must supply the following fields:
area code—blank or a three-digit number
prefix—three-digit number not beginning with 0 or 1
suffix—four-digit number
password—six-digit alphanumeric string
commands—check, deposit, bill pay, and the like
Equivalence classes for the area code:
o Input condition, Boolean—the area code may or may not be present.
o Input condition, value—three-digit number.
o Input condition, range—values defined between 200 and 999, with specific exceptions.
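Applying the guidelines to the password field above ("six-digit alphanumeric string"), for example, yields one valid and two invalid equivalence classes; the sample values are illustrative:
Valid class: exactly six alphanumeric characters, e.g. AB12CD
Invalid class: fewer than six characters, e.g. AB12
Invalid class: more than six characters or containing non-alphanumeric characters, e.g. AB12CD#9
One representative test case is then chosen from each class.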
Orthogonal Array Testing:
Example: Consider the send function for a fax application.
Four parameters, P1, P2, P3, and P4, are passed to the send function. Each takes on three discrete
values.
P1 takes on values:
o P1 = 1, send it now
o P1 = 2, send it one hour later
o P1 = 3, send it after midnight
P2, P3, and P4 would also take on values of 1, 2 and 3, signifying other send functions.
An orthogonal array is an array of values in which each column represents a parameter that can take on a
certain set of values called levels.
Each row represents a test case.
Parameters are combined pair-wise rather than representing all possible combinations of parameters
and levels
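For the fax example above, testing every combination of P1 through P4 would require 3^4 = 81 test cases, while a standard L9 orthogonal array covers every pairwise combination of parameter levels in just nine test cases (shown here as a sketch; each row is one test case):
Test case  P1  P2  P3  P4
1           1   1   1   1
2           1   2   2   2
3           1   3   3   3
4           2   1   2   3
5           2   2   3   1
6           2   3   1   2
7           3   1   3   2
8           3   2   1   3
9           3   3   2   1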
3. Explain about various testing strategy. May: 05, 06, Dec: 08, May: 10, 13
Big Bang Integration:
In big bang integration, all components are combined at once and the entire program is tested as a whole.
Advantage:
Big bang testing has the advantage that everything is finished before integration testing starts.
Disadvantage:
The major disadvantage is that, in general, it is time consuming and it is difficult to trace the cause of failures
because of this late integration.
Fig: Module hierarchy used to illustrate integration testing
Top-Down Integration:
Top-down integration is an incremental approach in which modules are integrated by moving down through
the control structure. The top-down integration process can be performed using the following steps:
1. Main program used as a test driver and stubs are substitutes for components directly subordinate to it.
2. Subordinate stubs are replaced one at a time with real components (following the depth-first or breadth-first
approach).
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real component.
5. Regression testing may be used to ensure that new errors have not been introduced.
Advantages of Top-Down approach:
The tested product is very consistent because the integration testing is basically performed in an environment
that is almost similar to reality.
Stubs can be written in less time than drivers, because stubs are simpler to author.
Disadvantages of Top-Down approach:
Basic functionality is tested only at the end of the cycle.
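The stubs mentioned in the steps above can be very simple placeholders that return canned data so the modules above them can be exercised. A hedged Python sketch (all names are hypothetical):

# Real subordinate component, not yet integrated:
# def fetch_account(account_id): ...

def fetch_account_stub(account_id):
    # Stub: stands in for the subordinate module during top-down testing
    return {"id": account_id, "balance": 100.0}  # canned response

def print_statement(account_id, fetch=fetch_account_stub):
    account = fetch(account_id)
    return "Account %s: balance %.2f" % (account["id"], account["balance"])

As integration proceeds, the stub is replaced, one at a time, with the real component (step 2 above).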
Bottom-Up Integration:
In bottom-up integration the modules at the lowest levels are integrated first; integration is then done
by moving upward through the control structure.
Bottom up Integration process can be performed using following steps.
• Low level components are combined in clusters that perform a specific software function.
• A driver (control program) is written to coordinate test case input and output.
• The cluster is tested.
• Drivers are removed and clusters are combined moving upward in the program structure.
7. System Testing:
The system test is a series of tests conducted to fully exercise the computer-based system. Various types of system tests
are:
• Recovery testing
Recovery testing checks the system's ability to recover from failures.
• Security testing
Security testing verifies that system protection mechanism prevents improper penetration or data alteration
• Stress testing
Stress testing determines the breakpoint of a system in order to establish the maximum service level. The program is
checked to see how well it deals with abnormal resource demands.
8. Performance testing
Performance testing evaluates the run-time performance of software. Types of performance tests:
• Stress test.
• Volume test.
• Configuration test (hardware & software).
• Compatibility.
• Regression tests.
• Security tests.
• Timing tests.
• Environmental tests.
• Quality tests.
• Recovery tests.
• Maintenance tests.
• Documentation tests.
• Human factors tests.
Testing Life Cycle:
• Establish test objectives.
• Design criteria (review criteria):
o Correct.
o Feasible.
o Coverage.
o Demonstrate functionality.
UNIT – V PROJECT MANAGEMENT
1. Define measure.
Measure is defined as a quantitative indication of the extent, amount, dimension, or size of some attribute of a product or process.
2. Define metrics.
Metrics is defined as the degree to which a system, component, or process possesses a given attribute.
3. What are the types of metrics?
Direct metrics – It refers to immediately measurable attributes.
Example – Lines of code, execution speed.
Indirect metrics – It refers to the aspects that are not immediately quantifiable or measurable. Example – functionality of a
program.
7. What is EVA?
Earned Value Analysis is a technique of performing quantitative analysis of the software project. It provides a common value
scale for every task of software project. It acts as a measure for software project progress.
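A short worked example with assumed values, using the standard EVA measures (BCWS = budgeted cost of work scheduled, BCWP = budgeted cost of work performed, i.e., the earned value, and ACWP = actual cost of work performed): suppose at some point BCWS = 100 person-days, BCWP = 80, and ACWP = 90. Then:
SPI (schedule performance index) = BCWP / BCWS = 80 / 100 = 0.80
CPI (cost performance index) = BCWP / ACWP = 80 / 90 = 0.89 (approx.)
Both indices are below 1.0, so the project is behind schedule and over cost.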
Software project management is an activity of organizing, planning and scheduling software projects.
12.What is RMMM?
RMMM stands for Risk Mitigation, Monitoring and Management. It is an effective strategy to assist the project
team in dealing with risk.
Software project scheduling is an activity that distributes estimated effort across the planned project duration by allocating the
effort to specified software engineering tasks.
PART B :
1. Describe two metrics which have been used to measure the software. May 04,05
Software process and project metrics are quantitative measures. Software measures are collected by
software engineers, and software metrics are analyzed by software managers.
• They are a management tool.
• They offer insight into the effectiveness of the software process and the projects that are conducted using the
process as a framework.
• Basic quality and productivity data are collected.
• These data are analyzed, compared against past averages, and assessed.
• The goal is to determine whether quality and productivity improvements have occurred.
• The data can also be used to pinpoint problem areas.
• Remedies can then be developed and the software process can be improved.
Use of Measurement:
• Can be applied to the software process with the intent of improving it on a continuous basis.
• Can be used throughout a software project to assist in estimation, quality control, productivity assessment, and
project control.
• Can be used to help assess the quality of software work products and to assist in tactical decision making as a
project proceeds.
Reasons to measure:
• To characterize, in order to gain an understanding of processes, products, resources, and environments,
and to establish baselines for comparisons with future assessments.
• To evaluate, in order to determine status with respect to plans.
• To predict, in order to gain an understanding of relationships among processes and products and to build
models of these relationships.
• To improve, in order to identify roadblocks, root causes, inefficiencies, and other opportunities for improving
product quality and process performance.
2. What are the categories of software risks? Give an overview about risk management. May: 14
Risk is a potential problem: it might happen and it might not.
Conceptual definition of risk:
o Risk concerns future happenings.
o Risk involves change, as in changes of mind, opinion, actions, places, etc.
o Risk involves choice and the uncertainty that choice entails.
Two characteristics of risk:
o Uncertainty – the risk may or may not happen; that is, there are no 100% risks (those, instead, are called
constraints).
o Loss – the risk becomes a reality and unwanted consequences or losses occur.
Risk Categorization
1) Project risks
They threaten the project plan. If they become real, it is likely that the project schedule will slip and that costs
will increase
2) Technical risks
They threaten the quality and timeliness of the software to be produced. If they become real, implementation
may become difficult or impossible
3) Business risks
They threaten the viability of the software to be built. If they become real, they jeopardize the project or the
product
Sub-categories of Business risks
i) Market risk – building an excellent product or system that no one really wants
ii) Strategic risk – building a product that no longer fits into the overall business strategy for the company
iii)Sales risk – building a product that the sales force doesn't understand how to sell
iv) Management risk – losing the support of senior management due to a change in focus or a change in people
v) Budget risk – losing budgetary or personnel commitment
4) Known risks
Those risks that can be uncovered after careful evaluation of the project plan, the business and technical
environment in which the project is being developed, and other reliable information sources (e.g., unrealistic
delivery date)
5) Predictable risks
Those risks that are extrapolated from past project experience (e.g., past turnover)
6) Unpredictable risks
Those risks that can and do occur, but are extremely difficult to identify in advance
Risk Identification
Risk identification is a systematic attempt to specify threats to the project plan. By identifying known
and predictable risks, the project manager takes a first step toward avoiding them when possible and controlling
them when necessary
Generic risks
Risks that are a potential threat to every software project.
Product-specific risks
Risks that can be identified only by those with a clear understanding of the technology, the people, and the
environment that is specific to the software that is to be built. This requires examination of the project plan and
the statement of scope: "What special characteristics of this product may threaten our project plan?"
Risk projection
Risk projection (or estimation) attempts to rate each risk in two ways:
(1) the probability that the risk is real, and (2) the consequences of the problems associated with the risk, should it occur.
The project planner, managers, and technical staff perform four risk projection steps. The intent of these steps is
to consider risks in a manner that leads to prioritization. By prioritizing risks, the software team can allocate
limited resources where they will have the most impact.
Steps
1. Establish a scale that reflects the perceived likelihood of a risk (e.g., 1 = low, 10 = high).
2. Delineate the consequences of the risk.
3. Estimate the impact of the risk on the project and the product.
4. Note the overall accuracy of the risk projection so that there will be no misunderstandings.
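These steps are often combined into a single number, risk exposure, RE = P x C, where P is the probability of occurrence and C is the cost to the project should the risk occur. A worked example with assumed numbers: if there is an estimated 80% probability that 18 components will have to be custom built at an average cost of $2,000 each, then C = 18 x $2,000 = $36,000 and RE = 0.80 x $36,000 = $28,800.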
Risk monitoring
Risk monitoring is an activity with three primary objectives: (1) to assess whether predicted risks do, in fact,
occur; (2) to ensure that risk aversion steps defined for the risk are being properly applied; and (3) to collect
information that can be used for future risk analysis. In many cases, the problems that occur during a project can
be traced to more than one risk. Another job of risk monitoring is to attempt to allocate origin, that is, to
determine which risk(s) caused which problems throughout the project.
3. Describe function point analysis with a neat example. Dec:06 Nov: 10
• Function-oriented metrics use a measure of the functionality delivered by the application as a normalization
value
• The most widely used metric of this type is the function point:
FP = count total x [0.65 + 0.01 x Sum(value adjustment factors)]
• Function point values on past projects can be used to compute, for example, the average number of lines of
code per function point (e.g., 60)
• Like the KLOC measure, function point use also has proponents and opponents
• Proponents claim that:
o FP is programming language independent.
o FP is based on data that are more likely to be known in the early stages of a project, making it more attractive
as an estimation approach.
• Opponents claim that:
o FP requires some "sleight of hand" because the computation is based on subjective data.
o Counts of the information domain can be difficult to collect after the fact.
o FP has no direct physical meaning; it's just a number.
Function points are derived using:
1. Countable measures of the software requirements domain.
2. Assessments of the software complexity.
Calculate Function Point
The data for following information domain characteristics are collected.
1. Number of user inputs – Each user input which provides distinct application data to the software is counted.
2. Number of user outputs – Each user output that provides application data to the user is counted, e.g.,
screens, reports, error messages.
3. Number of user inquiries – An on-line input that results in the generation of some immediate software
response in the form of an output.
4. Number of files – Each logical master file, i.e., a logical grouping of data that may be part of a database or a
separate file, is counted.
5. Number of external interfaces – All machine readable interfaces that are used to transmit information to
another system are counted.
The organization needs to develop criteria which determine whether a particular entry is simple, average, or
complex. The weighting factors should be determined by observations or by experiments.
Count total
The count total is computed with the help of the weighting table above.
Now the software complexity can be computed by answering the complexity adjustment questions; the answers
are the complexity adjustment values, Fi.
Each factor is rated on a scale from 0 (no influence) to 5 (essential).
Function points: FP = count total x (0.65 + 0.01 x Sum(Fi))
Once the function point value is calculated, we can compute various measures as follows:
Productivity = FP / person – month
Quality = Number of faults / FP
Cost = $ / FP
Documentation = Pages of documentation / FP
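A short worked example with assumed counts (illustrative; not the data for question 3b below): suppose the weighted count total is 320 and the sum of the complexity adjustment values is 52. Then
FP = 320 x (0.65 + 0.01 x 52) = 320 x 1.17 = 374.4, i.e., about 374 function points.
If the project then takes 10 person-months, productivity = 374 / 10 = 37.4 FP/person-month.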
3b) Consider the following function point components and their complexity. If the total degree of influence is
52, find the estimated function points.