UNIT – 1
1. Define software?
Software can be defined as follows:
Instructions or computer programs that when executed provide desired
features, function & performance.
Data structures that enable the programs to adequately manipulate information.
Documents (descriptive information) in both hard copy and virtual forms that
describe the operation and use of the programs.
Software quality attributes (ISO 9126):
Functionality.
Reliability.
Usability.
Efficiency.
Maintainability.
Portability.
4. Draw software and hardware failure rate curves as a function of time?
Hardware failures follow a "bathtub curve": a high early (infant-mortality) failure rate, a low
steady-state rate, and a rising rate again as components wear out. Software does not wear out, so its
idealized failure-rate curve flattens after early defects are corrected; in practice each change tends to
introduce new defects, producing spikes that gradually raise the minimum failure rate (the actual curve).
12. What is legacy software? What are the characteristics of legacy software?
Definition: Legacy software is old software that is still useful
Characteristics:
a. It cannot simply be discarded because it contains accumulated and validated knowledge.
b. It was developed to answer precise needs that are not covered by existing software.
c. The system might be running on old hardware.
d. The system might be currently used by other people
13. List out the prescriptive process models for the development of software?
The list of prescriptive models is:
1. Waterfall model
2. V-model
3. Incremental model
4. Evolutionary models: Prototyping model, Spiral model and Concurrent model
19. Differentiate between personal software process and team software process model
Personal software process model (PSP): PSP measures the work products that are produced and the
resulting quality of those products. The PSP model defines 5 framework activities:
Planning ---> High-level design ---> High-level design review ---> Development ---> Postmortem
Team software process model (TSP): The goal of TSP is to build a “self-directed” project team
that organizes itself to produce high-quality software. TSP has the following framework activities:
Launch ---> High-level design ---> Implementation ---> Integration and test ---> Postmortem
UNIT – 2
4. Why is scenario-based modeling getting popular in the field of requirements modeling?
Scenario-based elements
i. Using a scenario-based approach, the system is described from the user’s point of
view.
ii. Use-case—descriptions of the interaction between an “actor” and the system are
developed.
iii. Activity diagrams and swim-lane diagrams can be developed to complement use-
case diagrams.
UNIT – 3
1. List out software quality attributes
a. Functionality: is assessed by evaluating the feature set, the generality of the functions
that are delivered, and the security of the overall system.
b. Usability: is assessed by considering human factors, overall aesthetics, consistency, and
documentation.
c. Reliability: is evaluated by measuring the frequency and severity of failure, the accuracy
of output results, the mean-time-to-failure, the ability to recover from failure, and the
predictability of the program.
d. Performance: is measured by processing speed, response time, resource consumption,
throughput, and efficiency.
e. Supportability: combines extensibility (the ability to extend the program), adaptability, and
serviceability (together, maintainability), in addition to testability, compatibility, configurability, etc.
16. Define Software architecture with IEEE definition and its types.
The IEEE Standard defines an architectural description (AD) as “a collection of products to
document an architecture.”
The description itself is represented using multiple views, where each view is “a
representation of a whole system from the perspective of a related set of [stakeholder]
concerns.”
The IEEE Computer Society has proposed IEEE-Std-1471-2000, Recommended Practice for
Architectural Description of Software-Intensive Systems, to establish a conceptual framework
and vocabulary for use during the design of software architecture.
Interface design
Navigation semantic unit (NSU)—“a set of information and related navigation structures that collaborate in the fulfillment of
a subset of related user requirements”.
Ways of navigation (WoN)—represents the best navigation way or path for users
with certain profiles to achieve their desired goal or sub-goal.
2. What are the designs included in the object-oriented hypermedia design method (OOHDM)?
a. Conceptual design
b. Navigational design
c. Abstract Interface design
d. Implementation
UNIT – 4
Testing often accounts for more project effort than any other software engineering action. If it
is conducted haphazardly, time is wasted, unnecessary effort is expended, and even worse,
errors sneak through undetected. It would therefore seem reasonable to establish
a systematic strategy for testing software.
Verification refers to the set of tasks that ensure that software correctly implements a specific
function.
Validation refers to a different set of tasks that ensure that the software that has been built is
traceable to customer requirements.
Boehm states this another way:
a. Verification: "Are we building the product right?"
b. Validation: "Are we building the right product?"
Software Testing: Testing is the process of exercising a program with the specific intent of
finding errors prior to delivery to the end user.
Definition: Unit testing is a method by which individual units of source code are tested to
determine if they are fit for use.
GOAL: The goal of unit testing is to segregate each part of the program and test that the
individual parts are working correctly.
The following activities are performed in unit testing:
i. Module interfaces are tested for proper information flow.
ii. Local data are examined to ensure that integrity is maintained.
iii. Boundary conditions are tested.
iv. Basis (independent) paths are tested.
v. All error handling paths should be tested.
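For illustration, a minimal sketch of a unit test using Python's built-in unittest module; divide() is a hypothetical unit under test, and the cases cover a normal path, a boundary condition, and an error-handling path:

import unittest

def divide(a, b):
    # Hypothetical unit under test
    if b == 0:
        raise ValueError("division by zero")
    return a / b

class TestDivide(unittest.TestCase):
    def test_normal_path(self):
        self.assertEqual(divide(10, 2), 5)

    def test_boundary_condition(self):
        # Boundary: smallest legal divisor
        self.assertEqual(divide(5, 1), 5)

    def test_error_handling_path(self):
        # Error-handling path must raise, not return a wrong value
        with self.assertRaises(ValueError):
            divide(5, 0)

if __name__ == "__main__":
    unittest.main()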
Definition: Integration Testing is a type of software testing where individual units are combined
and tested as a group. Integration Testing exposes defects in the interfaces and in the interactions
between integrated components or systems.
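As a sketch of integration testing, two hypothetical modules (a parser and an evaluator) that pass unit testing individually are combined, and the test exercises the interface between them:

import unittest

def parse_expression(text):
    # Hypothetical module 1: splits "2 + 3" into operands and an operator
    left, op, right = text.split()
    return float(left), op, float(right)

def evaluate(left, op, right):
    # Hypothetical module 2: consumes the parser's output
    return left + right if op == "+" else left - right

class TestParserEvaluatorIntegration(unittest.TestCase):
    def test_interface_between_modules(self):
        # Data produced by the parser flows directly into the evaluator,
        # exposing any mismatch at the interface between the two units.
        self.assertEqual(evaluate(*parse_expression("2 + 3")), 5.0)

if __name__ == "__main__":
    unittest.main()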
Smoke testing: Smoke Testing, also known as “Build Verification Testing”, is a type of
software testing that consists of a non-exhaustive set of tests aimed at ensuring that the most
important functions work.
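A sketch of a smoke test: a deliberately small, non-exhaustive check that the build's most critical function works at all before deeper testing begins (create_order is a hypothetical critical function):

import unittest

def create_order(item, qty):
    # Hypothetical "most important function" of the daily build
    return {"item": item, "qty": qty, "status": "created"}

class SmokeTest(unittest.TestCase):
    def test_critical_path_runs(self):
        # Pass/fail answer to "is this build testable at all?"
        self.assertEqual(create_order("book", 1)["status"], "created")

if __name__ == "__main__":
    unittest.main()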
Regression testing: it is used to check for defects propagated to other modules by changes made
to the existing program. Regression means retesting the unchanged parts of the application.
Test cases are re-executed in order to check whether the previous functionality of the application
still works correctly and the new changes have not introduced any new bugs.
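A sketch of how regression testing is typically applied: the existing test cases are kept and re-executed unchanged after every modification (apply_discount and its recently added guard clause are hypothetical):

import unittest

def apply_discount(price, percent):
    # Existing function; a recent change added the percent > 100 guard
    if percent < 0 or percent > 100:
        raise ValueError("invalid percent")
    return price * (1 - percent / 100)

class RegressionSuite(unittest.TestCase):
    # Pre-existing test cases re-executed after the change to confirm
    # that previously working behaviour has not been broken.
    def test_no_discount(self):
        self.assertEqual(apply_discount(100, 0), 100)

    def test_half_price(self):
        self.assertEqual(apply_discount(100, 50), 50)

if __name__ == "__main__":
    unittest.main()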
White Box Testing is Testing based on an analysis of the internal structure of the component or
system.
It is also known as Clear Box Testing, Open Box Testing, Glass Box Testing, Transparent Box
Testing, Code-Based Testing, or Structural Testing; it is a software testing method in which the
internal structure/design/implementation of the item being tested is known to the tester.
WHITE BOX TESTING ADVANTAGES
i. Testing can be commenced at an earlier stage. One need not wait for the GUI to be available.
ii. Testing is more thorough, with the possibility of covering most paths.
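As a sketch of white-box (structural) testing, the test cases below are derived from the internal branch structure of a small hypothetical function so that every independent (basis) path is executed at least once:

import unittest

def classify_triangle(a, b, c):
    # Hypothetical unit; its branches drive the test cases
    if a <= 0 or b <= 0 or c <= 0:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

class WhiteBoxTests(unittest.TestCase):
    # One test per independent (basis) path through the function
    def test_invalid_branch(self):
        self.assertEqual(classify_triangle(0, 1, 1), "invalid")

    def test_equilateral_branch(self):
        self.assertEqual(classify_triangle(2, 2, 2), "equilateral")

    def test_isosceles_branch(self):
        self.assertEqual(classify_triangle(2, 2, 3), "isosceles")

    def test_scalene_branch(self):
        self.assertEqual(classify_triangle(3, 4, 5), "scalene")

if __name__ == "__main__":
    unittest.main()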
Interface integrity – internal and external module interfaces are tested as each module or cluster is
added to the software
Making sure the software works correctly for the intended user in his or her normal work
environment.
Debugging is the process of finding and resolving defects that prevent the correct operation of
computer software.
Debugging (removal of a defect) occurs as a consequence of successful testing.
Common approaches (may be partially automated with debugging tools):
a. Brute force – memory dumps and run-time traces are examined for clues to error causes
b. Backtracking – source code is examined by looking backwards from symptom to
potential causes of errors
c. Cause elimination – uses binary partitioning to reduce the number of potential locations
where errors can exist.
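A sketch of the cause-elimination idea, assuming the failure is deterministic and triggered by a single input item: the input data is binary-partitioned until the offending item is isolated (process_item is a hypothetical function under suspicion):

def process_item(item):
    # Hypothetical function under suspicion; fails on one particular input
    if item == "corrupt":
        raise RuntimeError("processing failed")

def find_failing_item(items):
    # Binary-partition the input until the single failing item is isolated
    while len(items) > 1:
        mid = len(items) // 2
        first, second = items[:mid], items[mid:]
        try:
            for item in first:
                process_item(item)
        except RuntimeError:
            items = first      # the failure lies in the first half
            continue
        items = second         # first half is clean; look in the second half
    return items[0]

print(find_failing_item(["a", "b", "corrupt", "d"]))  # -> corrupt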