Se Course File Quick Revision of Unit 1 and 2
COURSE FILE
Software Engineering
A.Renuka
Asst. Professor(C)
3. Will have experience and/or awareness of testing problems and will be able to
develop a simple testing report
UNIT-I:
UNIT-II:
Software Requirements: Functional and non-functional requirements, user requirements,
system requirements, interface specification, the software requirements document.
Requirements engineering process: Feasibility studies, requirements elicitation and analysis,
requirements validation, requirements management.
System models: Context models, behavioral models, data models, object models, structured
methods.
UNIT-III:
Design Engineering: Design process and design quality, design concepts, the design model.
Creating an architectural design: software architecture, data design, architectural styles and
patterns, architectural design, conceptual model of UML, basic structural modeling, class
diagrams, sequence diagrams, collaboration diagrams, use case diagrams, component diagrams.
UNIT-IV:
Testing Strategies: A strategic approach to software testing, test strategies for conventional
software, black-box and white-box testing, validation testing, system testing, the art of
debugging.
Product metrics: Software quality, metrics for analysis model, metrics for design model,
metrics for source code, metrics for testing, metrics for maintenance.
UNIT-V:
Metrics for Process and Products: Software measurement, metrics for software quality.
Risk management: Reactive Vs proactive risk strategies, software risks, risk identification, risk
projection, risk refinement, RMMM, RMMM plan.
Quality Management: Quality concepts, software quality assurance, software reviews, formal
technical reviews, statistical software quality assurance, software reliability, the ISO 9000
quality standards.
Text Books:
1. Software Engineering, A Practitioner's Approach - Roger S. Pressman, 6th edition, McGraw-Hill International Edition.
2. Software Engineering - Sommerville, 7th edition, Pearson Education.
3. The Unified Modeling Language User Guide - Grady Booch, James Rumbaugh, Ivar Jacobson, Pearson Education.
References:
1. Software Engineering, An Engineering Approach - James F. Peters, Witold Pedrycz, John Wiley.
2. Software Engineering Principles and Practice - Waman S. Jawadekar, The McGraw-Hill Companies.
3. Fundamentals of Object Oriented Design Using UML - Meilir Page-Jones, Pearson Education.
LECTURE NOTES
UNIT-I
INTRODUCTION TO SOFTWARE ENGINEERING
Software: Software is
(1) Instructions (computer programs) that provide desired features, function, and
performance, when executed
(2) Data structures that enable the programs to adequately manipulate information,
(3) Documents that describe the operation and use of the programs.
Characteristics of Software:
(1) Software is developed or engineered; it is not manufactured in the classical sense.
(2) Software does not “wear out”
(3) Although the industry is moving toward component-based construction, most
software continues to be custom built.
Software Engineering:
(1) The systematic, disciplined, quantifiable approach to the development, operation,
and maintenance of software; that is, the application of engineering to software.
(2) The study of approaches as in (1).
The role of computer software has undergone significant change over a span of little more than 50
years
- Dramatic Improvements in hardware performance
- Vast increases in memory and storage capacity
- A wide variety of exotic input and output options
Late 1990s:
Yourdon re-evaluated the prospects of the software professional and
suggested "the rise and resurrection" of the American programmer.
The impact of the Y2K "time bomb" was felt at the end of the 20th century.
As the 2000s progressed:
Johnson discussed the power of "emergence", a phenomenon that explains what
happens when interconnections among relatively simple entities result in a system
that "self-organizes to form more intelligent, more adaptive behavior".
Yourdon revisited the tragic events of 9/11 to discuss the continuing impact of
global terrorism on the IT community
Wolfram presented a treatise on a “new kind of science” that posits a
unifying theory based primarily on sophisticated software simulations
Daconta and his colleagues discussed the evolution of “the semantic web”.
Today a huge software industry has become a dominant factor in the economies of the
industrialized world.
The 7 broad categories of computer software present continuing challenges for software engineers:
1) System software
2) Application software
3) Engineering/scientific software
4) Embedded software
5) Product-line software
6) Web-applications
7) Artificial intelligence software.
System software: System software is a collection of programs written to service other
programs.
The systems software is characterized by
- heavy interaction with computer hardware
- concurrent operation that requires scheduling, resource sharing, and
sophisticated process management
- complex data structures
- multiple external interfaces
Examples include compilers, editors, and file management utilities.
Application software:
- Application software consists of standalone programs that solve a specific business
need.
- It facilitates business operations or management/technical decision making.
- It is also used to control business functions in real time, e.g.,
point-of-sale transaction processing and real-time manufacturing process control.
Ubiquitous computing: The challenge for software engineers will be to develop systems
and application software that will allow small devices, personal computers and enterprise
systems to communicate across vast networks.
Net sourcing: The challenge for software engineers is to architect simple and
sophisticated applications that provide benefit to a targeted end-user market worldwide.
Open Source: The challenge for software engineers is to build source code that is self-
descriptive, but more importantly to develop techniques that will enable both customers
and developers to know what changes have been made and how those changes manifest
themselves within the software.
The “new economy”: The challenge for software engineers is to build applications that
will facilitate mass communication and mass product distribution.
SOFTWARE MYTHS
Beliefs about software and the process used to build it can be traced to the earliest days
of computing. Myths have a number of attributes that have made them insidious.
Management myths: Managers with software responsibility, like managers in most
disciplines, are often under pressure to maintain budgets, keep schedules from slipping,
and improve quality.
Myth: We already have a book that's full of standards and procedures for building
software - won't that provide my people with everything they need to know?
Reality: The book of standards may very well exist but, is it used? Are software
practitioners aware of its existence? Does it reflect modern software engineering
practice?
Myth: If we get behind schedule, we can add more programmers and catch up.
Reality: Software development is not a mechanistic process like manufacturing. Adding
people to a late software project tends to make it later (Brooks's law), because new
people must be taught by those already working, reducing the team's productivity.
Customer myths: The customer believes myths about software because software
managers and practitioners do little to correct misinformation. Myths lead to false
expectations and ultimately, dissatisfaction with the developer.
Myth: A general statement of objectives is sufficient to begin writing programs - we
can fill in the details later.
Reality: A poor up-front definition is a major cause of failed software efforts; ambiguous
statements of objectives are a recipe for disaster.
Practitioner's myths: Myths that are still believed by software practitioners. During the
early days of software, programming was viewed as an art form; old ways and attitudes
die hard.
Myth: Once we write the program and get it to work, our jobs are done.
Reality: Someone once said that the sooner you begin writing code, the longer it’ll take
you to get done. Industry data indicate that between 60 and 80 percent of all effort
expended on software will be expended after it is delivered to the customer for the first
time.
Reality: A working program is only one part of a software configuration that includes
many elements. Documentation provides guidance for software support.
Myth: software engineering will make us create voluminous and unnecessary
documentation and will invariably slow us down.
Reality: software engineering is not about creating documents. It is about creating
quality. Better quality leads to reduced rework. And reduced rework results in faster
delivery times.
Myth: The only deliverable work product for a successful project is the working program.
Figure: Software engineering layers - a quality focus (the foundation), process, methods, and tools.
The foundation for software engineering is the process layer. The software engineering
process is the glue that holds the technology layers together. Process defines a framework
that must be established for effective delivery of software engineering technology.
The software process forms the basis for management control of software projects and
establishes the context in which
- technical methods are applied,
- work products are produced,
- milestones are established,
- quality is ensured,
- And change is properly managed.
Software engineering methods rely on a set of basic principles that govern each area of
the technology and include modeling activities.
Software engineering tools provide automated or semi-automated support for the
process and the methods. When tools are integrated so that information created by one
tool can be used by another, a system for the support of software development, called
computer-aided software engineering (CASE), is established.
A PROCESS FRAMEWORK:
A software process must be established for effective delivery of software engineering
technology.
A process framework establishes the foundation for a complete software process
by identifying a small number of framework activities that are applicable to all
software projects, regardless of their size or complexity.
The process framework encompasses a set of umbrella activities that are
applicable across the entire software process.
Each framework activity is populated by a set of software engineering actions.
Each software engineering action is represented by a number of different task
sets - each a collection of software engineering work tasks, related work products,
quality assurance points, and project milestones.
In brief
"A process defines who is doing what, when, and how to reach a certain goal."
A Process Framework
- establishes the foundation for a complete software process
- identifies a small number of framework activities
- applies to all s/w projects, regardless of size/complexity.
- also, set of umbrella activities
- applicable across entire s/w process.
- Each framework activity has a set of s/w engineering actions.
- Each s/w engineering action (e.g., design) has a collection of related task sets:
work tasks, work products, quality assurance points, and project milestones.

Figure: Process framework - the software process contains framework activities #1..#n
spanned by umbrella activities; each framework activity contains software engineering
actions, and each action carries task sets (work tasks, work products, quality assurance
points, project milestones).
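To make the hierarchy concrete, here is a minimal sketch in Python (the names and the
example activity are illustrative, not taken from the text) of how a framework activity
decomposes into actions and task sets:

from dataclasses import dataclass, field
from typing import List

@dataclass
class TaskSet:
    """A collection of work tasks, work products, QA points, and milestones."""
    work_tasks: List[str]
    work_products: List[str]
    qa_points: List[str]
    milestones: List[str]

@dataclass
class EngineeringAction:
    """A software engineering action (e.g., design) within a framework activity."""
    name: str
    task_sets: List[TaskSet] = field(default_factory=list)

@dataclass
class FrameworkActivity:
    """A framework activity applicable to all projects (e.g., modeling)."""
    name: str
    actions: List[EngineeringAction] = field(default_factory=list)

# Hypothetical example: the "modeling" activity with a "design" action.
design = EngineeringAction("design", [TaskSet(
    work_tasks=["derive architecture", "design components"],
    work_products=["design model"],
    qa_points=["design review"],
    milestones=["design model approved"],
)])
modeling = FrameworkActivity("modeling", [design])
print(modeling.name, "->", [a.name for a in modeling.actions])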
Typical umbrella activities include:
1) Software project tracking and control - allows the software team to assess
progress against the project plan and take necessary action to maintain the schedule.
2) Risk management - assesses risks that may affect the outcome of the project or
the quality of the product.
3) Software Quality Assurance - defines and conducts the activities required to
ensure software quality.
4) Formal Technical Reviews - assesses software engineering work products in an
effort to uncover and remove errors before they are propagated to the next action
or activity.
5) Measurement - defines and collects process, project, and product measures that
assist the team in delivering software that meets the customer's needs; it can be used
in conjunction with all other framework and umbrella activities.
6) Software configuration management - manages the effects of change
throughout the software process.
The CMMI defines each process area in terms of “specific goals” and the “specific
practices” required to achieve these goals. Specific practices refine a goal into a set of
process-related activities.
The specific goals (SG) and the associated specific practices(SP) defined for project planning are
SG 1 Establish estimates
SP 1.1 Estimate the scope of the project
SP 1.2 Establish estimates of work product and task attributes
SP 1.3 Define project life cycle
SP 1.4 Determine estimates of effort and cost
SG 2 Develop a project plan
SP 2.1 Establish the budget and schedule
SP 2.2 Identify project risks
SP 2.3 Plan for data management
SP 2.4 Plan for needed knowledge and skills
SP 2.5 Plan stakeholder involvement
SP 2.6 Establish the project plan
SG 3 Obtain commitment to the plan
SP 3.1 Review plans that affect the project
SP 3.2 Reconcile work and resource levels
SP 3.3 Obtain plan commitment
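As a rough illustration (this representation is not part of the CMMI itself), the goals and
practices above can be held as data so that goal satisfaction can be checked against the
practices an organization has actually institutionalized:

# Hypothetical representation of the project-planning process area.
PROJECT_PLANNING = {
    "SG 1 Establish estimates": ["SP 1.1", "SP 1.2", "SP 1.3", "SP 1.4"],
    "SG 2 Develop a project plan": ["SP 2.1", "SP 2.2", "SP 2.3",
                                    "SP 2.4", "SP 2.5", "SP 2.6"],
    "SG 3 Obtain commitment to the plan": ["SP 3.1", "SP 3.2", "SP 3.3"],
}

def unsatisfied_goals(achieved_practices):
    """A goal is satisfied only when all of its specific practices are achieved."""
    return [goal for goal, practices in PROJECT_PLANNING.items()
            if not all(p in achieved_practices for p in practices)]

# Example: an organization that has not yet planned data management (SP 2.3).
achieved = {"SP 1.1", "SP 1.2", "SP 1.3", "SP 1.4",
            "SP 2.1", "SP 2.2", "SP 2.4", "SP 2.5", "SP 2.6",
            "SP 3.1", "SP 3.2", "SP 3.3"}
print(unsatisfied_goals(achieved))  # -> ['SG 2 Develop a project plan']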
In addition to specific goals and practices, the CMMI also defines a set of five generic
goals and related practices for each process area. Each of the five generic goals
corresponds to one of the five capability levels. Hence to achieve a particular capability
level, the generic goal for that level and the generic practices that correspond to that goal
must be achieved. To illustrate, the generic goals (GG) and practices (GP) for the
project planning process area are enumerated in the same fashion.
PROCESS PATTERNS
The software process can be defined as a collection of patterns that define a set of
activities, actions, work tasks, work products and/or related behaviors required to develop
computer software.
A process pattern provides us with a template - a consistent method for describing
an important characteristic of the software process. A pattern might be used to describe a
complete process or a task within a framework activity.
Pattern Name: The pattern is given a meaningful name that describes its
function within the software process.
Initial Context: The conditions under which the pattern applies are described. Prior to the
initiation of the pattern, we ask:
(1) What organizational or team-related activities have already occurred?
(2) What is the entry state for the process?
(3) What software engineering information or project information already exists?
Resulting Context: The conditions that will result once the pattern has been successfully
implemented are described. Upon completion of the pattern we ask:
(1) What organizational or team-related activities must have occurred?
(2) What is the exit state for the process?
(3) What software engineering information or project information has been developed?
Known Uses: The specific instances in which the pattern is applicable are indicated.
Process patterns provide an effective mechanism for describing any software process.
The patterns enable a software engineering organization to develop a hierarchical process
description that begins at a high-level of abstraction.
Once process patterns have been developed, they can be reused for the definition of
process variants - that is, a customized process model can be defined by a software team
using the patterns as building blocks for the process models.
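A minimal sketch of the pattern template described above, with a hypothetical pattern
filled in (the example pattern and its field values are illustrative only):

from dataclasses import dataclass

@dataclass
class ProcessPattern:
    """Template fields for describing a process pattern."""
    name: str               # meaningful name within the software process
    initial_context: str    # conditions before the pattern is initiated
    solution: str           # the activities/actions/tasks the pattern prescribes
    resulting_context: str  # conditions after successful application
    known_uses: str         # specific instances where the pattern applies

# Hypothetical example pattern (illustrative, not from the text).
req_gathering = ProcessPattern(
    name="RequirementsGathering",
    initial_context="Stakeholders identified; project scope agreed in outline.",
    solution="Interview stakeholders, build scenarios, record requirements.",
    resulting_context="A validated, prioritized requirements list exists.",
    known_uses="Projects where customer needs are initially fuzzy.",
)
print(req_gathering.name)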
PROCESS ASSESSMENT
The existence of a software process is no guarantee that software will be delivered
on time, that it will meet the customer's needs, or that it will exhibit the technical
characteristics that will lead to long-term quality. The process itself should therefore be
assessed to ensure that it meets a set of basic process criteria that have been shown to be
essential for successful software engineering.
Figure: Software process assessment - the software process is examined by assessment,
which leads to capability determination and motivates process improvement.
The best software process is one that is close to the people who will be doing the work.
Each software engineer would create a process that best fits his or her needs, and at the
same time meets the broader needs of the team and the organization. Alternatively, the
team itself would create its own process that would simultaneously meet the narrower
needs of individuals and the broader needs of the organization.
Personal software process (PSP): The PSP model defines five framework activities:
planning, high-level design, high-level design review, development, and postmortem.
Planning: This activity isolates requirements and, based on these, develops both size and
resource estimates. In addition, a defect estimate is made. All metrics are recorded on
worksheets or templates. Finally, development tasks are identified and a project schedule
is created.
High level design: External specifications for each component to be constructed are
developed and a component design is created. Prototypes are built when uncertainty
exists. All issues are recorded and tracked.
High level design review: Formal verification methods are applied to uncover errors in the
design. Metrics are maintained for all important tasks and work results.
Development: The component level design is refined and reviewed. Code is generated,
reviewed, compiled, and tested. Metrics are maintained for all important task and work
results.
Postmortem: Using the measures and metrics collected, the effectiveness of the
process is determined. Measures and metrics should provide guidance for modifying
the process to improve its effectiveness.
PSP stresses the need for each software engineer to identify errors early and, just as
important, to understand the types of errors that he or she is likely to make.
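To illustrate the PSP's emphasis on recorded measures, here is a small hypothetical
sketch of a planning worksheet that compares estimates against actuals, as a postmortem
would (the numbers are invented):

# Hypothetical PSP-style worksheet: record estimates, then actuals,
# and compute estimation error in the postmortem.
def estimation_error(estimated, actual):
    """Percentage error of an estimate, as reviewed during the postmortem."""
    return 100.0 * (actual - estimated) / estimated

plan = {"size_loc": 400, "defects": 12}      # recorded during planning
actual = {"size_loc": 520, "defects": 17}    # recorded during development

for measure in plan:
    err = estimation_error(plan[measure], actual[measure])
    print(f"{measure}: estimated {plan[measure]}, actual {actual[measure]}, "
          f"error {err:+.0f}%")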
Team software process (TSP): The goal of TSP is to build a "self-directed" project team
that organizes itself to produce high-quality software. The following are the objectives
for TSP:
Build self-directed teams that plan and track their work, establish goals, and
own their processes and plans. These can be pure software teams or integrated
product teams (IPT) of 3 to about 20 engineers.
Show managers how to coach and motivate their teams and how to help
them sustain peak performance.
Accelerate software process improvement by making CMM level 5 behavior normal and
expected.
Provide improvement guidance to high-maturity organizations.
Facilitate university teaching of industrial-grade team skills.
A self-directed team:
- defines roles and responsibilities for each team member
- tracks quantitative project data
- identifies a team process that is appropriate for the project, and a strategy for
implementing the process
- defines local standards that are applicable to the team's software engineering work
- continually assesses risk and reacts to it
- tracks, manages, and reports project status.
TSP defines the following framework activities: launch, high-level design,
implementation, integration and test, and postmortem.
TSP makes use of a wide variety of scripts, forms, and standards that serve to guide team
members in their work.
Scripts define specific process activities and other more detailed work functions that
are part of the team process.
Each project is “launched” using a sequence of tasks.
PROCESS MODELS
Prescriptive process models define a set of activities, actions, tasks, milestones, and
work products that are required to engineer high-quality software. These process models
are not perfect, but they do provide a useful roadmap for software engineering work.
THE WATERFALL MODEL:
The waterfall model, sometimes called the classic life cycle, suggests a systematic,
sequential approach to software development that begins with customer specification of
requirements and progresses through planning, modeling, construction, and deployment.
Advantage:
It can serve as a useful process model in situations where requirements are fixed
and work is to proceed to completion in a linear manner.
The problems that are sometimes encountered when the waterfall model is applied are:
1. Real projects rarely follow the sequential flow that the model proposes. Although
the linear model can accommodate iteration, it does so indirectly. As a result,
changes can cause confusion as the project team proceeds.
2. It is often difficult for the customer to state all requirements explicitly. The
waterfall model requires this and has difficulty accommodating the natural
uncertainty that exists at the beginning of many projects.
3. The customer must have patience. A working version of the program will not be
available until late in the project time-span. If a major blunder goes undetected
until the working program is reviewed, the results can be disastrous.
THE INCREMENTAL MODEL:
Figure: The incremental model - increments #1, #2, ... #n each pass through
communication, planning, modeling (analysis, design), construction (code, test), and
deployment (delivery, feedback), with delivery of the 1st, 2nd, ... nth increments
staggered over project calendar time.
The incremental model combines elements of the waterfall model applied in an iterative
fashion.
The incremental model delivers a series of releases called increments that provide
progressively more functionality for the customer as each increment is delivered.
When an incremental model is used, the first increment is often a core product.
That is, basic requirements are addressed. The core product is used by the
customer. As a result, a plan is developed for the next increment.
The plan addresses the modification of the core product to better meet the needs
of the customer and the delivery of additional features and functionality.
This process is repeated following the delivery of each increment, until the
complete product is produced.
For example, word-processing software developed using the incremental paradigm might
deliver basic file management, editing, and document production functions in the first
increment; more sophisticated editing and document production capabilities in the
second increment; spelling and grammar checking in the third increment; and advanced
page layout capability in the fourth increment.
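The increment plan for this word-processor example might be captured as simple data,
one entry per delivered release (a hypothetical sketch; the feature names follow the
example above):

# Hypothetical increment plan for the word-processing example above.
increments = [
    (1, ["basic file management", "basic editing", "document production"]),
    (2, ["sophisticated editing", "richer document production"]),
    (3, ["spelling checking", "grammar checking"]),
    (4, ["advanced page layout"]),
]

for number, features in increments:
    print(f"Increment #{number} delivers: {', '.join(features)}")
# The first increment is the core product; each later increment
# extends it based on feedback from use of the previous delivery.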
THE RAD MODEL:
Figure: The RAD (rapid application development) model - after communication and
planning, multiple teams (#1, #2, ... #n) work in parallel, each performing business
modeling, data modeling, and process modeling, then construction using component
reuse and automatic code generation, followed by testing; the integrated result is
deployed (delivery and feedback) within a compressed 60-90 day cycle.
Evolutionary process models produce an increasingly more complete version of the
software with each iteration. Evolutionary models are iterative; they enable software
engineers to develop increasingly more complete versions of the software.
PROTOTYPING:
Prototyping is more commonly used as a technique that can be implemented within the
context of any one of the process models.
The prototyping paradigm begins with communication. The software engineer and
customer meet and define the overall objectives for the software, identify whatever
requirements are known, and outline areas where further definition is mandatory.
Prototyping iteration is planned quickly and modeling occurs. The quick design leads to
the construction of a prototype. The prototype is deployed and then evaluated by the
customer/user.
Iteration occurs as the prototype is tuned to satisfy the needs of the customer, while at the
same time enabling the developer to better understand what needs to be done.
Figure: The prototyping paradigm - communication, quick plan, modeling (quick
design), construction of prototype, and deployment (delivery & feedback), repeated in a
cycle.
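The communicate-plan-model-construct-deploy cycle can be summarized as a loop; a
minimal sketch follows, in which every helper function is a hypothetical placeholder for
an activity performed with the customer:

# Hypothetical sketch of the prototyping cycle.
def prototyping(objectives, max_iterations=5):
    requirements = set(objectives)           # communication: known requirements
    prototype = None
    for _ in range(max_iterations):
        design = quick_design(requirements)  # quick plan + modeling
        prototype = construct(design)        # construction of prototype
        feedback = evaluate(prototype)       # deployment, delivery & feedback
        if not feedback:                     # customer satisfied: stop tuning
            return prototype
        requirements |= set(feedback)        # refine requirements and iterate
    return prototype

# Placeholder implementations so the sketch runs end to end.
def quick_design(reqs): return sorted(reqs)
def construct(design): return f"prototype({len(design)} requirements)"
def evaluate(prototype): return []           # pretend the customer is satisfied

print(prototyping({"edit files", "print documents"}))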
Context:
If a customer defines a set of general objectives for software but does not identify
detailed input, processing, or output requirements, then in such a situation the
prototyping paradigm is the best approach.
If a developer is unsure of the efficiency of an algorithm or the adaptability of an
operating system, he or she can go for the prototyping method.
Advantages:
The prototyping paradigm assists the software engineer and the customer to better
understand what is to be built when requirements are fuzzy.
The prototype serves as a mechanism for identifying software requirements. If a
working prototype is built, the developer attempts to make use of existing program
fragments or applies tools.
Drawbacks:
The developer often makes implementation compromises in order to get the prototype
working quickly: an inappropriate operating system or programming language may be
used, or an inefficient algorithm may be implemented simply to demonstrate capability.
After a time, the developer may become comfortable with these choices and forget all
the reasons why they were inappropriate. The less-than-ideal choice has now become an
integral part of the system.
THE SPIRAL MODEL:
Figure: The spiral model - starting at the center, each circuit of the spiral passes through
communication, planning, modeling, construction, and deployment.
Context: The spiral model can be adopted to apply throughout the entire life
cycle of an application, from concept development to maintenance.
Advantages:
It provides the potential for rapid development of increasingly more complete versions
of the software.
Draw Backs:
The spiral model is not a panacea. It may be difficult to convince customers
that the evolutionary approach is controllable. It demands considerable risk
assessment expertise and relies on this expertise for success. If a major risk is
not uncovered and managed, problems will undoubtedly occur.
UNIT-II
SOFTWARE REQUIREMENTS
Types of requirement:
User requirements
o Statements in natural language plus diagrams of the services the system
provides and its operational constraints. Written for customers.
System requirements
o A structured document setting out detailed descriptions of the system’s
functions, services and operational constraints. Defines what should be
implemented so may be part of a contract between client and contractor.
Definitions and specifications:
User requirement definition:
The software must provide the means of representing and accessing external files created
by other tools.
System Requirement specification:
The user should be provided with facilities to define the type of external files.
Each external file type may have an associated tool which may be applied to the file.
Each external file type may be represented as a specific icon on the user’s display.
Facilities should be provided for the icon representing an external file type to
be defined by the user.
When a user selects an icon representing an external file, the effect of that
selection is to apply the tool associated with the type of the external file to the
file represented by the selected icon.
Requirements readers:
FUNCTIONAL REQUIREMENTS:
Describe functionality or system services.
Depend on the type of software, expected users and the type of system
where the software is used.
Functional user requirements may be high-level statements of what the
system should do but functional system requirements should describe the
system services in detail.
The functional requirements for the LIBSYS system:
A library system that provides a single interface to a number of databases of
articles in different libraries.
Users can search for, download and print these articles for personal study.
Examples of functional requirements
The user shall be able to search either all of the initial set of databases or select a subset
from it.
The system shall provide appropriate viewers for the user to read documents
in the document store.
Every order shall be allocated a unique identifier (ORDER_ID) which the
user shall be able to copy to the account’s permanent storage area.
Requirements imprecision
Problems arise when requirements are not precisely stated.
Ambiguous requirements may be interpreted in different ways by developers and
users.
Consider the term ‘appropriate viewers’
o User intention - special purpose viewer for each different document type;
o Developer interpretation - Provide a text viewer that shows the contents of the
document.
Requirements completeness and consistency:
In principle, requirements should be both complete and consistent.
Complete: they should include descriptions of all facilities required.
Consistent: there should be no conflicts or contradictions in the descriptions of the
system facilities.
In practice, it is impossible to produce a complete and consistent requirements document.
NON-FUNCTIONAL REQUIREMENTS
These define system properties and constraints e.g. reliability, response
time and storage requirements. Constraints are I/O device capability,
system representations, etc.
Process requirements may also be specified mandating a particular CASE
system, programming language or development method.
Non-functional requirements may be classified as:
Product requirements
• Requirements which specify that the delivered product must behave in a particular
way, e.g. execution speed, reliability, etc.
• E.g.: The user interface for LIBSYS shall be implemented as simple HTML
without frames or Java applets.
Organisational requirements
• Requirements which are a consequence of organisational policies and
procedures e.g. process standards used, implementation requirements,
etc.
• E.g.: The system development process and deliverable documents shall
conform to the process and deliverables defined in XYZCo-SP-STAN-95.
External requirements
• Requirements which arise from factors which are external to the
system and its development process e.g. interoperability requirements,
legislative requirements, etc.
• Eg: The system shall not disclose any personal information about
customers apart from their name and reference number to the operators
of the system.
Requirements measures:
Property - Measure
Speed - Processed transactions/second; user/event response time; screen refresh time
Size - Mbytes; number of ROM chips
Ease of use - Training time; number of help frames
Reliability - Mean time to failure; probability of unavailability; rate of failure
occurrence; availability
Robustness - Time to restart after failure; percentage of events causing failure;
probability of data corruption on failure
Requirements interaction:
• Conflicts between different non-functional requirements are common in complex systems.
• Spacecraft system example: to minimise weight, the number of separate chips should be
minimised; to minimise power consumption, lower-power chips should be used; but using
low-power chips may mean that more chips are needed - which is the most critical
requirement?
DOMAIN REQUIREMENTS
Derived from the application domain and describe system characteristics and
features that reflect the domain.
Domain requirements may be new functional requirements, constraints on existing
requirements, or definitions of specific computations.
If domain requirements are not satisfied, the system may be unworkable.
Domain requirements problems:
Understandability
• Requirements are expressed in the language of the application domain;
• This is often not understood by software engineers developing the system.
Implicitness
• Domain specialists understand the area so well that they do not think
of making the domain requirements explicit.
2) USER REQUIREMENTS
Should describe functional and non-functional requirements in such a
way that they are understandable by system users who don’t have
detailed technical knowledge.
User requirements are defined using natural language, tables and diagrams
as these can be understood by all users.
Requirement problems:
Database requirements include both conceptual and detailed information
• Describes the concept of a financial accounting system that is to be included in
LIBSYS;
• However, it also includes the detail that managers can configure this
system - this is unnecessary at this level.
Grid requirement mixes three different kinds of requirement
• Conceptual functional requirement (the need for a grid);
• Non-functional requirement (grid units);
• Non-functional UI requirement (grid switching).
• Structured presentation
Guidelines for writing requirements
Invent a standard format and use it for all requirements.
Use language in a consistent way. Use shall for mandatory requirements,
should for desirable requirements.
Use text highlighting to identify key parts of the requirement.
Avoid the use of computer jargon.
3) SYSTEM REQUIREMENTS
More detailed specifications of system functions, services and constraints than user
requirements.
They are intended to be a basis for designing the system.
They may be incorporated into the system contract.
System requirements may be defined or illustrated using system models
Alternatives to NL specification:
Design description languages: This approach uses a language like a programming
language, but with more abstract features, to specify the requirements by defining an
operational model of the system. The approach is not now widely used, although it can
be useful for interface specifications.
Form-based specifications
Definition of the function or entity.
Description of inputs and where they come from.
Description of outputs and where they go to.
Indication of other entities required.
Pre and post conditions (if appropriate).
The side effects (if any) of the function.
Tabular specification
Used to supplement natural language.
Particularly useful when you have to define a number of possible alternative courses of
action.
Graphical models
Graphical models are most useful when you need to show how state changes or
where you need to describe a sequence of actions.
Fields in a form-based specification:
1. Function
2. Description
3. Inputs
4. Source
5. Outputs
6. Destination
7. Action
8. Requires
9. Pre-condition
10. Post-condition
11. Side-effects
When a standard form is used for specifying functional requirements, the following
information should be included:
1. Description of the function or entity being specified
2. Description of its inputs and where these come from
3. Description of its outputs and where these go to
4. Indication of what other entities are used
5. Description of the action to be taken
6. If a functional approach is used, a pre-condition setting out what must be true
before the function is called and a post-condition specifying what is true after the
function is called
7. Description of the side effects of the operation.
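A form-based specification is easy to represent as structured data; here is a sketch with a
hypothetical LIBSYS-style function filled in (all field values are invented for
illustration):

# Hypothetical form-based specification entry, following the fields above.
spec = {
    "Function":       "Search databases",
    "Description":    "Search the selected subset of library databases.",
    "Inputs":         "Search terms; list of selected databases",
    "Source":         "User via the search dialog",
    "Outputs":        "List of matching articles",
    "Destination":    "Results screen",
    "Action":         "Query each selected database and merge the results.",
    "Requires":       "Database catalogue of available sources",
    "Pre-condition":  "At least one database has been selected.",
    "Post-condition": "Results correspond to the search terms entered.",
    "Side-effects":   "None",
}

for field_name, value in spec.items():
    print(f"{field_name}: {value}")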
4) INTERFACE SPECIFICATION
Most systems must operate with other systems and the operating interfaces
must be specified as part of the requirements.
Three types of interface may have to be defined
• Procedural interfaces where existing programs or sub-systems offer a
range of services that are accessed by calling interface procedures. These
interfaces are sometimes called Application Programming Interfaces (APIs).
• Data structures that are passed from one sub-system to another. Graphical
data models are the best notations for this type of description.
• Data representations that have been established for an existing sub-system
Formal notations are an effective technique for interface specification.
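A procedural interface (API) can be specified precisely by declaring the operations a
sub-system offers without exposing its implementation; a minimal sketch, with
hypothetical operations for an imagined print service:

from abc import ABC, abstractmethod

class PrintService(ABC):
    """Hypothetical procedural interface offered by an existing sub-system.
    Callers depend only on these operations, not on the implementation."""

    @abstractmethod
    def get_queue_length(self, printer_id: str) -> int:
        """Return the number of jobs waiting on the given printer."""

    @abstractmethod
    def submit(self, printer_id: str, document: bytes) -> str:
        """Queue a document for printing and return a job identifier."""

    @abstractmethod
    def cancel(self, job_id: str) -> None:
        """Remove a queued job before it is printed."""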
Requirements engineering:
The alternative perspective on the requirements engineering process presents the process
as a three-stage activity where the activities are organized as an iterative process around
a spiral. The amount of time and effort devoted to each activity in an iteration depends on
the stage of the overall process and the type of system being developed. Early in the
process, most effort will be spent on understanding high-level business and non-
functional requirements and the user requirements. Later in the process, in the outer rings
of the spiral, more effort will be devoted to system requirements engineering and system
modeling.
1) FEASIBILITY STUDIES
A feasibility study decides whether or not the proposed system is worthwhile. The
input to the feasibility study is a set of preliminary business requirements, an outline
description of the system and how the system is intended to support business processes.
The results of the feasibility study should be a report that recommends whether or not it
is worth carrying on with the requirements engineering and system development process.
• A short focused study that checks
– If the system contributes to organisational objectives;
– If the system can be engineered using current technology and within budget;
– If the system can be integrated with other systems that are used.
2) REQUIREMENTS ELICITATION AND ANALYSIS
Process activities:
1. Requirements discovery
– Interacting with stakeholders to discover their requirements. Domain
requirements are also discovered at this stage.
2. Requirements classification and organisation
– Groups related requirements and organises them into coherent clusters.
3. Prioritisation and negotiation
– Prioritising requirements and resolving requirements conflicts.
4. Requirements documentation
– Requirements are documented and input into the next round of the spiral.
The process cycle starts with requirements discovery and ends with requirements
documentation. The analyst’s understanding of the requirements improves with each
round of the cycle.
Requirements classification and organization is primarily concerned with identifying
overlapping requirements from different stakeholders and grouping related requirements.
The most common way of grouping requirements is to use a model of the system
architecture to identify subsystems and to associate requirements with each sub-system.
Inevitably, stakeholders have different views on the importance and priority of
requirements, and sometimes these views conflict. During the process, you should
organize regular stakeholder negotiations so that compromises can be reached.
In the requirement documenting stage, the requirements that have been elicited are
documented in such a way that they can be used to help with further requirements
discovery.
REQUIREMENTS DISCOVERY:
• Requirement discovery is the process of gathering information about the
proposed and existing systems and distilling the user and system requirements
from this information.
• Sources of information include documentation, system stakeholders and the
specifications of similar systems.
• Requirements engineers interact with stakeholders through interviews and observation,
and may use scenarios and prototypes to help with the requirements discovery.
• Stakeholders range from system end-users through managers and external
stakeholders such as regulators who certify the acceptability of the system.
• For example, system stakeholder for a bank ATM include
1. Bank customers
2. Representatives of other banks
3. Bank managers
4. Counter staff
5. Database administrators
6. Security managers
7. Marketing department
8. Hardware and software maintenance engineers
9. Banking regulators
Requirements sources (stakeholders, domain, systems) can all be represented as system
viewpoints, where each viewpoint presents a sub-set of the requirements for the system.
Viewpoints:
• Viewpoints are a way of structuring the requirements to represent the
perspectives of different stakeholders.
• Identify viewpoints using
– Providers and receivers of system services;
– Systems that interact directly with the system being specified;
– Regulations and standards;
– Sources of business and non-functional requirements.
– Engineers who have to develop and maintain the system;
– Marketing and other business viewpoints.
Interviewing
In formal or informal interviewing, the RE team puts questions to stakeholders about
the system that they use and the system to be developed.
There are two types of interview:
- Closed interviews, where a pre-defined set of questions are answered.
- Open interviews, where there is no pre-defined agenda and a range of issues
are explored with stakeholders.
Interviews in practice:
• Normally a mix of closed and open-ended interviewing.
• Interviews are good for getting an overall understanding of what stakeholders
do and how they might interact with the system.
• Interviews are not good for understanding domain requirements
– Requirements engineers cannot understand specific domain terminology;
– Some domain knowledge is so familiar that people find it hard to
articulate or think that it isn’t worth articulating.
Effective interviewers:
• Interviewers should be open-minded, willing to listen to stakeholders, and
should not have pre-conceived ideas about the requirements.
• They should prompt the interviewee with a question or a proposal and should
not simply expect them to respond to a question such as 'What do you want?'.
Scenarios:
Scenarios are real-life examples of how a system can be used.
• They should include
– A description of the starting situation;
– A description of the normal flow of events;
– A description of what can go wrong;
– Information about other concurrent activities;
– A description of the state when the scenario finishes.
Use cases
• Use-cases are a scenario based technique in the UML which identify the actors in
an interaction and which describe the interaction itself.
• A set of use cases should describe all possible interactions with the system.
• Sequence diagrams may be used to add detail to use-cases by showing the
sequence of event processing in the system.
ETHNOGRAPHY:
• A social scientist spends a considerable time observing and analysing how people
actually work.
• People do not have to explain or articulate their work.
• Social and organisational factors of importance may be observed.
• Ethnographic studies have shown that work is usually richer and more complex
than suggested by simple system models.
Focused ethnography:
• Developed in a project studying the air traffic control process
• Combines ethnography with prototyping
• Prototype development results in unanswered questions which focus the ethnographic
analysis.
• The problem with ethnography is that it studies existing practices which may
have some historical basis which is no longer relevant.
Scope of ethnography:
• Requirements that are derived from the way that people actually work, rather than
the way in which process definitions suggest that they ought to work.
• Requirements that are derived from cooperation and awareness of other people’s activities.
3) REQUIREMENTS VALIDATION
• Concerned with demonstrating that the requirements define the system that the
customer really wants.
• Requirements error costs are high so validation is very important
– Fixing a requirements error after delivery may cost up to 100 times the
cost of fixing an implementation error.
Requirements checking:
• Validity: Does the system provide the functions which best support the customer’s needs?
• Consistency: Are there any requirements conflicts?
• Completeness: Are all functions required by the customer included?
• Realism: Can the requirements be implemented given the available budget and technology?
• Verifiability: Can the requirements be checked?
Requirements reviews:
• Regular reviews should be held while the requirements definition is being formulated.
• Both client and contractor staff should be involved in reviews.
• Reviews may be formal (with completed documents) or informal. Good
communications between developers, customers and users can resolve problems
at an early stage.
Review checks:
• Verifiability: Is the requirement realistically testable?
• Comprehensibility: Is the requirement properly understood?
• Traceability: Is the origin of the requirement clearly stated?
• Adaptability: Can the requirement be changed without a large impact on other
requirements?
4) REQUIREMENTS MANAGEMENT
• Requirements management is the process of managing changing
Requirements change
• The priority of requirements from different viewpoints changes during the development
process.
• System customers may specify requirements from a business perspective that
conflict with end-user requirements.
• The business and technical environment of the system changes during its development.
Requirements evolution:
Traceability:
Traceability is concerned with the relationships between requirements, their sources and the
system design
• Source traceability
– Links from requirements to stakeholders who proposed these requirements;
• Requirements traceability
– Links between dependent requirements;
• Design traceability - Links from the requirements to the design;
CASE tool support:
• Requirements storage
– Requirements should be managed in a secure, managed data store.
• Change management
– The process of change management is a workflow process whose stages
can be defined and information flow between these stages partially
automated.
• Traceability management
– Automated retrieval of the links between requirements.
Change management:
SYSTEM MODELLING
System modelling helps the analyst to understand the functionality of the
1) CONTEXT MODELS:
Context models are used to illustrate the operational context of a system -
they show what lies outside the system boundaries.
Social and organisational concerns may affect the decision on where to
position system boundaries.
Architectural models show the system and its relationship with other systems.
2) BEHAVIOURAL MODELS:
Behavioural models are used to describe the overall behaviour of a system.
Two types of behavioural model are:
o Data processing models that show how data is processed as it moves through the
system;
o State machine models that show the systems response to events.
These models show different perspectives so both of them are required to
describe the system’s behaviour.
Data-processing models:
Data flow diagrams (DFDs) may be used to model the system’s data processing.
These show the processing steps as data flows through a system.
Statecharts:
Allow the decomposition of a model into sub-models.
A brief description of the actions is included following the ‘do’ in each state.
Can be complemented by tables describing the states and the stimuli.
State - Description
Waiting The oven is waiting for input. The display shows the current time.
Half power The oven power is set to 300 watts. The display shows ‘Half power’.
Full power The oven power is set to 600 watts. The display shows ‘Full power’.
Set time The cooking time is set to the user’s input value. The display shows the
cooking time selected and is updated as the time is set.
Disabled Oven operation is disabled for safety. Interior oven light is on. Display
shows ‘Not ready’.
Enabled Oven operation is enabled. Interior oven light is off. Display shows ‘Ready to
cook’.
Operation Oven in operation. Interior oven light is on. Display shows the timer
countdown. On completion of cooking, the buzzer is sounded for 5
seconds. Oven light is on. Display shows ‘Cooking complete’ while
buzzer is sounding.
Stimulus - Description
Half power The user has pressed the half power button
Full power The user has pressed the full power button
Timer The user has pressed one of the timer buttons
Number The user has pressed a numeric key
Door open The oven door switch is not closed
Door closed The oven door switch is closed
Start The user has pressed the start button
Cancel The user has pressed the cancel button
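The state and stimulus tables translate directly into code. A partial sketch follows;
only a few transitions are shown, chosen to be consistent with the tables above, and the
transition set is illustrative rather than a complete oven specification:

# Partial state machine for the microwave oven, derived from the tables above.
# Keys are (current_state, stimulus); values are the next state.
TRANSITIONS = {
    ("Waiting", "Half power"):   "Half power",   # power set to 300 watts
    ("Waiting", "Full power"):   "Full power",   # power set to 600 watts
    ("Half power", "Timer"):     "Set time",
    ("Full power", "Timer"):     "Set time",
    ("Set time", "Door open"):   "Disabled",     # safety: door must be closed
    ("Set time", "Door closed"): "Enabled",
    ("Enabled", "Start"):        "Operation",
    ("Operation", "Cancel"):     "Waiting",
}

def next_state(state, stimulus):
    """Apply a stimulus; stay in the current state if no transition matches."""
    return TRANSITIONS.get((state, stimulus), state)

state = "Waiting"
for stimulus in ["Full power", "Timer", "Door closed", "Start"]:
    state = next_state(state, stimulus)
print(state)  # -> Operation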
Data dictionaries
Data dictionaries are lists of all of the names used in the system models.
Descriptions of the entities, relationships and attributes are also included.
Advantages
o Support name management and avoid duplication;
o Store of organisational knowledge linking analysis, design and implementation;
Many CASE workbenches support data dictionaries.
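A data dictionary entry can be kept as a simple record keyed by name; a minimal sketch
with hypothetical entries from a library model (the names and descriptions are invented
for illustration):

# Hypothetical data dictionary: every name used in the system models gets
# an entry describing what it is and where it is used.
data_dictionary = {
    "Article": {
        "type": "entity",
        "description": "A published article held in a library database.",
        "attributes": ["title", "authors", "pdf_file", "fee"],
    },
    "fee": {
        "type": "attribute",
        "description": "The charge payable to the copyright owner per copy.",
        "attributes": [],
    },
}

def lookup(name):
    """Support name management: one authoritative definition per name."""
    return data_dictionary.get(name, {"description": "UNDEFINED NAME"})

print(lookup("fee")["description"])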
4) OBJECT MODELS:
Object models describe the system in terms of object classes and their associations.
Inheritance models:
Organise the domain object classes into a hierarchy.
Classes at the top of the hierarchy reflect the common features of all classes.
Object classes inherit their attributes and services from one or more super-classes;
these may then be specialised as necessary.
Class hierarchy design can be a difficult process if duplication in different
branches is to be avoided.
Multiple inheritance:
Rather than inheriting the attributes and services from a single parent class,
a system which supports multiple inheritance allows object classes to
inherit from several super-classes.
This can lead to semantic conflicts where attributes/services with the same
name in different super-classes have different semantics.
Multiple inheritance makes class hierarchy reorganisation more complex.
Figure: multiple inheritance.
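Python supports multiple inheritance directly, which makes the semantic-conflict
problem easy to demonstrate. In this hypothetical example (the classes are invented),
both super-classes define a summary service with different semantics, and the subclass
silently inherits only one of them:

# Two super-classes that both provide a 'summary' service with
# different semantics.
class TalkingBook:
    def summary(self):
        return "Audio running time and narrator."   # one meaning of 'summary'

class Book:
    def summary(self):
        return "Abstract of the written text."      # a different meaning

# A class inheriting from both: which 'summary' does it get?
class TalkingBookEdition(TalkingBook, Book):
    pass

# Python resolves the conflict by method resolution order (left to right),
# but the *design* ambiguity remains, and reorganising the hierarchy is harder.
print(TalkingBookEdition().summary())  # -> "Audio running time and narrator."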
Object aggregation:
An aggregation model shows how classes that are collections are composed of other
classes.
Aggregation models are similar to the part-of relationship in semantic data models.
Object aggregation
Sequence diagrams (or collaboration diagrams) in the UML are used to model
interaction between objects.
Sequence diagrams
These show the sequence of events that take place during some user interaction with a
system.
You read them from top to bottom to see the order of the actions that take place.
Cash withdrawal from an ATM
• Validate card;
• Handle request;
• Complete transaction.
Figure: object classes in a home security system - SensorManagement, Sensor,
homeownerAccess, externalAccess, Security, Surveillance, homeManagement, and
communication, running on a personal computer.
5) STRUCTURED METHODS:
Structured methods incorporate system modelling as an inherent part of the method.
Methods define a set of models, a process for deriving these models and rules and guidelines
that should apply to the models.
CASE tools support system modelling as part of a structured method.
Method weaknesses:
They do not model non-functional system requirements.
They do not usually include information about whether a method is appropriate for a
given problem.
They may produce too much documentation.
The system models are sometimes too detailed and difficult for users to understand.
CASE workbenches:
A coherent set of tools that is designed to support related software process activities such
as analysis, design or testing.
Analysis and design workbenches support system modelling during both requirements
engineering and system design.
These workbenches may support a specific design method or may provide support for
creating several different types of system model.
UNIT-III
DESIGN ENGINEERING
Design engineering encompasses the set of principles, concepts, and
practices that lead to the development of a high-quality system or product.
Design principles establish an overriding philosophy that guides the designer
in the work that is performed.
Design concepts must be understood before the mechanics of design practice are applied and
Design practice itself leads to the creation of various representations of the
software that serve as a guide for the construction activity that follows.
What is design:
Design is what virtually every engineer wants to do. It is the place where creativity
rules – customer’s requirements, business needs, and technical considerations all come
together in the formulation of a product or a system. Design creates a representation or
model of the software, but unlike the analysis model, the design model provides detail
about software data structures, architecture, interfaces, and components that are
necessary to implement the system.
Why is it important:
Design allows a software engineer to model the system or product that is to be built.
This model can be assessed for quality and improved before code is generated, tests are
conducted, and end-users become involved in large numbers. Design is the place where
software quality is established.
Goals of design:
McGlaughlin suggests three characteristics that serve as a guide for the evaluation of a good design.
The design must implement all of the explicit requirements contained in the
analysis model, and it must accommodate all of the implicit requirements
desired by the customer.
The design must be a readable, understandable guide for those who generate
code and for those who test and subsequently support the software.
The design should provide a complete picture of the software, addressing the
data, functional, and behavioral domains from an implementation perspective.
Quality guidelines:
In order to evaluate the quality of a design representation we must establish
technical criteria for good design. These are the following guidelines:
1. A design should exhibit an architecture that
a. has been created using recognizable architectural styles or patterns
b. is composed of components that exhibit good design characteristics and
c. can be implemented in an evolutionary fashion, thereby facilitating
implementation and testing.
2. A design should be modular; that is, the software should be logically partitioned
into elements or subsystems.
Quality attributes:
The FURPS quality attributes represent a target for all software design:
Functionality is assessed by evaluating the feature set and capabilities of
the program, the generality of the functions that are delivered, and the
security of the overall system.
Usability is assessed by considering human factors, overall aesthetics,
consistency and documentation.
Reliability is evaluated by measuring the frequency and severity of failure, the
accuracy of output results, the mean-time-to-failure (MTTF), the ability to
recover from failure, and the predictability of the program.
Performance is measured by processing speed, response time, resource
consumption, throughput, and efficiency
Supportability combines the ability to extend the program (extensibility),
adaptability, and serviceability - three attributes that represent a more common
term, maintainability.
Not every software quality attribute is weighted equally as the software design is developed.
One application may stress functionality with a special emphasis on security.
Another may demand performance with particular emphasis on processing speed.
2) DESIGN CONCEPTS:
M. A. Jackson once said: "The beginning of wisdom for a software engineer is to recognize
the difference between getting a program to work, and getting it right." Fundamental
software design concepts provide the necessary framework for "getting it right."
I. Abstraction: At the highest level of abstraction, a solution is stated in broad terms
using the language of the problem environment; at lower levels a more detailed
description is provided. A procedural abstraction refers to a sequence of instructions that
have a specific and limited function - the word open for a door is a procedural abstraction
that implies a long sequence of procedural steps. In the context of the procedural
abstraction open, we can define a data abstraction called door. Like any data object, the
data abstraction for door would encompass a set of attributes that describe the door (e.g.,
door type, swing operation, opening mechanism, weight, dimensions). It follows that the
procedural abstraction open would make use of information contained in the attributes of
the data abstraction door.
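A minimal sketch of the door example: the data abstraction is a class whose attributes
describe the door, and the procedural abstraction open uses those attributes (the
attribute values are illustrative):

from dataclasses import dataclass

@dataclass
class Door:
    """Data abstraction: attributes that describe a door."""
    door_type: str
    swing_operation: str
    opening_mechanism: str
    weight_kg: float
    is_open: bool = False

def open_door(door: Door) -> None:
    """Procedural abstraction 'open': a named sequence of steps that
    makes use of the information in the Door attributes."""
    if door.opening_mechanism == "sliding":
        pass  # slide the panel aside
    else:
        pass  # turn the handle and swing the door
    door.is_open = True

front = Door("exterior", "inward", "hinged", 40.0)
open_door(front)
print(front.is_open)  # -> True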
II. Architecture:
Software architecture alludes to “the overall structure of the software and the ways in
which that structure provides conceptual integrity for a system”. In its simplest form,
architecture is the structure or organization of program components (modules), the
manner in which these components interact, and the structure of data that are used by the
components.
One goal of software design is to derive an architectural rendering of a system. The
rendering serves as a framework from which more detailed design activities are
conducted.
The architectural design can be represented using one or more of a number of different
models. Structured models represent architecture as an organized collection of program
components.
Framework models increase the level of design abstraction by attempting to
identify repeatable architectural design frameworks that are encountered in
similar types of applications.
Dynamic models address the behavioral aspects of the program architecture, indicating
how the structure or system configuration may change as a function of external events.
Process models focus on the design of the business or technical process that the system must
accommodate.
Functional models can be used to represent the functional hierarchy of a system.
III. Patterns:
Brad Appleton defines a design pattern in the following manner: "a pattern is a named
nugget of insight which conveys the essence of a proven solution to a recurring problem
within a certain context amidst competing concerns." Stated in another way, a design
pattern describes a design structure that solves a particular design problem within a
specific context and amid "forces" that may have an impact on the manner in which the
pattern is applied and used.
The intent of each design pattern is to provide a description that enables a designer to determine
1) Whether the pattern is applicable to the current work,
2) Whether the pattern can be reused,
3) Whether the pattern can serve as a guide for developing a similar, but
functionally or structurally different pattern.
IV. Modularity:
Software architecture and design patterns embody modularity; software is divided
into separately named and addressable components, sometimes called modules that are
integrated to satisfy problem requirements.
It has been stated that “modularity is the single attribute of software that allows a
program to be intellectually manageable”. Monolithic software cannot be easily grasped
by a software engineer. The number of control paths, span of reference, number of
variables, and overall complexity would make understanding close to impossible.
The “divide and conquer” strategy- it’s easier to solve a complex problem when
you break it into manageable pieces. This has important implications with regard to
modularity and software. If we subdivide software indefinitely, the effort required to
develop it will become negligibly small. The effort to develop an individual software
module does decrease as the total number of modules increases. Given the same set of
requirements, more modules means smaller individual size. However, as the number of
modules grows, the effort associated with integrating the modules also grow.
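The trade-off can be made visible with a toy cost model (the formulas and numbers here
are illustrative only, not from the text): per-module development effort falls as the
module count rises, while integration effort grows, so total effort has a minimum at some
moderate number of modules - the classic "region of minimum cost":

# Toy cost model for modularity (illustrative numbers only).
def total_effort(num_modules, total_size=10000):
    """Development effort shrinks per module; integration effort grows."""
    module_size = total_size / num_modules
    development = num_modules * (module_size ** 1.2) / 100  # superlinear in size
    integration = 5.0 * num_modules                         # grows with count
    return development + integration

# Sweep module counts and report the cheapest region ("M" in the classic curve).
costs = {m: total_effort(m) for m in range(1, 201)}
best = min(costs, key=costs.get)
print(f"Minimum total effort at ~{best} modules ({costs[best]:.0f} units)")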
Undermodularity and overmodularity should both be avoided; there is a region of minimum
cost between the two extremes (illustrated in the sketch below). We modularize a design
so that development can be more easily planned; software increments can be defined and
delivered; changes can be more easily accommodated; testing and debugging can be
conducted more efficiently; and long-term maintenance can be conducted without serious
side effects.
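The following minimal Python sketch illustrates this cost tradeoff. The constants are hypothetical assumptions chosen only to reproduce the shape of the curve, not measured values:

def total_cost(num_modules, total_size=100.0, integration_per_module=2.0):
    # Development effort is assumed superlinear in module size, so splitting
    # the same functionality into more, smaller modules lowers this term.
    development = num_modules * (total_size / num_modules) ** 1.2
    # Integration effort grows with the number of modules and interfaces.
    integration = integration_per_module * num_modules
    return development + integration

for m in (1, 5, 10, 20, 50):
    print(m, round(total_cost(m), 1))
# Total cost falls, bottoms out near 10-20 modules, then rises again
# as integration effort dominates: the region of minimum cost.

The exact numbers are immaterial; the point is that total effort is minimized somewhere between one monolith and an unbounded number of tiny modules.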
V. Information Hiding:
The principle of information hiding suggests that modules be “characterized by
design decisions that (each) hides from all others.”
Modules should be specified and designed so that information contained within a
module is inaccessible to other modules that have no need for such information.
Hiding implies that effective modularity can be achieved by defining a set of
independent modules that communicate with one another only the information necessary
to achieve software function. Abstraction helps to define the procedural entities that
make up the software; hiding defines and enforces access constraints to both procedural
detail within a module and any local data structures used by the module.
The use of information hiding as a design criterion for modular systems provides
the greatest benefits when modifications are required during testing and, later, during
software maintenance. Because most data and procedures are hidden from other parts of
the software, inadvertent errors introduced during modification are less likely to
propagate to other locations within the software.
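A minimal Python sketch of information hiding (class and method names are hypothetical): the internal representation is hidden behind a small public interface, so it can later be replaced without propagating changes to client modules.

class TaskQueue:
    def __init__(self):
        self._items = []              # hidden local data structure

    def add(self, task):              # the only operations other modules use
        self._items.append(task)

    def next_task(self):
        return self._items.pop(0)

q = TaskQueue()
q.add("compile")
q.add("test")
print(q.next_task())                  # "compile"; callers never touch _items

Because clients depend only on add and next_task, the internal list could be swapped for a linked structure or a priority queue without changes rippling outward.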
VII. Refinement:
Stepwise refinement is a top-down design strategy originally proposed by Niklaus
Wirth. A program is developed by successively refining levels of procedural detail;
a hierarchy is developed by decomposing a macroscopic statement of function in a
stepwise fashion until programming language statements are reached.
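As a sketch of stepwise refinement (all names hypothetical), each abstract statement below is replaced by more detailed steps until code-level detail remains:

def process_report(records):
    # Level 1: "produce report" refined into three steps.
    cleaned = remove_invalid(records)
    ordered = sorted(cleaned)
    return format_lines(ordered)

def remove_invalid(records):
    # Level 2: "remove invalid records" refined into a concrete filter.
    return [r for r in records if r is not None]

def format_lines(records):
    # Level 2: "format the report" refined into line-by-line output.
    return "\n".join(str(r) for r in records)

print(process_report([3, None, 1, 2]))   # prints 1, 2, 3 on separate lines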
VIII. Refactoring:
Refactoring is a reorganization technique that simplifies the design of a component
without changing its function or behavior. Fowler defines refactoring in the following
manner: “refactoring is the process of changing a software system in such a way that it
does not alter the external behavior of the code yet improves its internal structure.”
When software is refactored, the existing design is examined for
redundancy, unused design elements, inefficient or unnecessary algorithms, poorly
constructed or inappropriate data structures, or any other design failure that can be
corrected to yield a better design. For example, the designer may decide that a component
with mixed responsibilities should be refactored into three separate components, each
exhibiting high cohesion. The result will be software that is easier to integrate, easier
to test, and easier to maintain.
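A minimal before/after sketch of refactoring in Python (hypothetical example): external behavior is unchanged, but mixed concerns are separated into small, cohesive functions.

def report_total(prices):             # before: computation and formatting tangled
    t = 0
    for p in prices:
        t += p
    return "Total: " + str(round(t, 2))

def total(prices):                    # after: one cohesive responsibility each
    return sum(prices)

def format_total(amount):
    return f"Total: {round(amount, 2)}"

# Behavior is preserved, which is the defining property of a refactoring:
assert report_total([1.5, 2.5]) == format_total(total([1.5, 2.5]))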
Design classes: As the design model is refined, the team creates a new set of design
classes that implement a software infrastructure to support the design solution.
Five different types of design classes, each representing a different layer of the design
architecture, are suggested (a sketch in code follows the list):
User interface classes: define all abstractions that are necessary for human-computer
interaction (HCI). In many cases, HCI occurs within the context of a metaphor, and the
design classes for the interface may be visual representations of the elements of
the metaphor.
Business domain classes: are often refinements of the analysis classes defined
earlier. The classes identify the attributes and services that are required to
implement some element of the business domain.
Process classes implement lower – level business abstractions required to fully
manage the business domain classes.
Persistent classes represent data stores that will persist beyond the execution of the software.
System classes implement software management and control functions that enable
the system to operate and communicate within its computing environment and
with the outside world.
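A hypothetical Python sketch of the five design-class layers for a simple ordering feature (all class names are illustrative assumptions, not from the source):

class OrderScreen:                 # user interface class: HCI abstraction
    def show(self, text):
        print(text)

class Order:                       # business domain class: attributes + services
    def __init__(self, items):
        self.items = items
    def total(self):
        return sum(self.items)

class DiscountPolicy:              # process class: lower-level business abstraction
    def apply(self, amount):
        return amount * 0.9

class OrderStore:                  # persistent class: data outliving execution
    def save(self, order):
        pass                       # e.g., write to a database in a real system

class AppController:               # system class: management and control
    def run(self):
        order = Order([10, 15])
        price = DiscountPolicy().apply(order.total())
        OrderStore().save(order)
        OrderScreen().show(f"Amount due: {price}")

AppController().run()              # prints "Amount due: 22.5"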
As the design model evolves, the software team must develop a complete
set of attributes and operations for each design class. The level of abstraction is reduced
as each analysis class is transformed into a design representation. Each design class
should be reviewed to ensure that it is “well-formed”; four characteristics of a
well-formed design class are defined:
Complete and sufficient: A design class should be the complete encapsulation of all
attributes and methods that can reasonably be expected to exist for the class. Sufficiency
ensures that the design class contains only those methods that are sufficient to achieve the
intent of the class, no more and no less.
High cohesion: A cohesive design class has a small, focused set of responsibilities and
single- mindedly applies attributes and methods to implement those responsibilities.
Low coupling: Within the design model, it is necessary for design classes to collaborate
with one another. However, collaboration should be kept to an acceptable minimum. If a
design model is highly coupled the system is difficult to implement, to test, and to
maintain over time. In general, design classes within a subsystem should have only
limited knowledge of classes in other subsystems. This restriction, called the Law of
Demeter, suggests that a method should send messages only to methods in neighboring
classes.
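A short Python sketch of the Law of Demeter (hypothetical names): the charge function talks only to its neighbor, Customer, rather than reaching through it to the Wallet.

class Wallet:
    def __init__(self, balance):
        self._balance = balance
    def debit(self, amount):
        self._balance -= amount

class Customer:
    def __init__(self, balance):
        self._wallet = Wallet(balance)
    def pay(self, amount):
        self._wallet.debit(amount)    # the neighbor manages its own wallet

def charge(customer, amount):
    # customer._wallet.debit(amount) would reach through a neighbor (a violation);
    # sending the message to the neighbor itself respects the law:
    customer.pay(amount)

charge(Customer(100), 30)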
The elements of the design model use many of the same UML diagrams that were used in
the analysis model. The difference is that these diagrams are refined and elaborated as
part of design; more implementation-specific detail is provided, and architectural
structure and style, the components that reside within the architecture, and the
interfaces between the components and the outside world are all emphasized.
It is important to mention, however, that the elements of the design model are not
always developed in a sequential fashion. In most cases preliminary architectural
design sets the stage and is followed by interface design and component-level design,
which often occur in parallel. The deployment model is usually delayed until the design
has been fully developed.
At the program component level, the design of data structures and the associated
algorithms required to manipulate them is essential to the creation of high-quality
applications.
At the application level, the translation of a data model into a database is pivotal to
achieving the business objectives of a system.
At the business level, the collection of information stored in disparate databases and
reorganized into a “data warehouse” enables data mining or knowledge discovery
that can have an impact on the success of the business itself.
The interface design elements for software tell how information flows into and
out of the system and how it is communicated among the components defined as
part of the architecture. There are three important elements of interface design:
- the user interface (UI);
- external interfaces to other systems, devices, networks, or other producers or
consumers of information; and
- internal interfaces between various design components.
These interface design elements allow the software to communicate externally
and enable internal communication and collaboration among the components that
populate the software architecture.
ARCHITECTURAL DESIGN
1) SOFTWARE ARCHITECTURE:
What Is Architecture?
Architectural design represents the structure of data and program components that are
required to build a computer-based system.
The design of software architecture considers two levels of the design pyramid:
- data design
- architectural design
Data design enables us to represent the data component of the architecture.
Architectural design focuses on the representation of the structure of software
components, their properties, and interactions.
DATA DESIGN:
The data design activity translates data objects defined as part of the analysis model
into data structures at the software component level and, when necessary, into a
database architecture at the application level.
At the program component level, the design of data structures and the
associated algorithms required to manipulate them is essential to the
creation of high-quality applications.
At the application level, the translation of a data model (derived as part of
requirements engineering) into a database is pivotal to achieving the
business objectives of a system.
At the business level, the collection of information stored in disparate databases and
reorganized into a “data warehouse” enables data mining or knowledge discovery
that can have an impact on the success of the business itself.
To address the challenge of extracting useful business-level information from many
large, disparate databases, the business IT community has developed data mining
techniques, also called knowledge discovery in databases (KDD), that navigate through
existing databases in an attempt to extract appropriate business-level information. An
alternative solution, called a data warehouse, adds an additional layer to the data
architecture. A data warehouse is a large, independent database that encompasses some,
but not all, of the data that are stored in databases that serve the set of applications
required by a business.
Several principles guide data specification:
- The representation of a data structure should be known only to those modules that
must make direct use of the data contained within the structure.
- A library of useful data structures and the operations that may be applied to
them should be developed.
- A software design and programming language should support the specification and
realization of abstract data types.
ARCHITECTURAL STYLES:
Data-flow architectures. This style is applied when input data are to be transformed
through a series of computational or manipulative components into output data. A
pipe-and-filter pattern has a set of components, called filters, connected by pipes that
transmit data from one component to the next. Each filter works independently of those
components upstream and downstream, is designed to expect data input of a certain form,
and produces data output of a specified form.
If the data flow degenerates into a single line of transforms, it is termed batch sequential. This
pattern accepts a batch of data and then applies a series of sequential components (filters) to
transform it.
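A minimal pipe-and-filter sketch in Python (filter names hypothetical): each filter consumes input of a known form and produces output of a specified form, independent of what sits upstream or downstream.

def strip_blank(lines):                    # filter 1: drop empty lines
    return (ln for ln in lines if ln.strip())

def to_upper(lines):                       # filter 2: normalize case
    return (ln.upper() for ln in lines)

def numbered(lines):                       # filter 3: prefix line numbers
    return (f"{i}: {ln}" for i, ln in enumerate(lines, 1))

raw = ["alpha", "", "beta"]
pipeline = numbered(to_upper(strip_blank(raw)))   # the "pipes"
print("\n".join(pipeline))                 # 1: ALPHA / 2: BETA

Running each filter to completion over the whole batch before the next begins would correspond to the degenerate batch-sequential form described above.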
Call and return architectures. This architectural style enables a software designer
(system architect) to achieve a program structure that is relatively easy to modify and
scale. A number of substyles [BAS98] exist within this category:
- Main program/subprogram architectures. This classic program structure decomposes
function into a control hierarchy where a “main” program invokes a number of program
components, which in turn may invoke still other components.
- Remote procedure call architectures. The components of a main program/subprogram
architecture are distributed across multiple computers on a network.
Persistence: Data persists if it survives past the execution of the process that created
it. Two patterns are common:
- a database management system pattern that applies the storage and retrieval
capability of a DBMS to the application architecture, and
- an application-level persistence pattern that builds persistence features into the
application architecture.
Mandel defines golden rules that place the user in control:
- Define interaction modes in a way that does not force a user into unnecessary or
undesired actions. For example, spell checking in a word processor should let the user
move to edit mode and back, entering and exiting with little or no effort.
- Provide for flexible interaction. Offer several modes of interaction (keyboard, mouse,
digitizer pen, or voice recognition), recognizing that not every action is amenable to
every mechanism; it is difficult to draw a circle using keyboard commands.
- Allow user interaction to be interruptible and undoable. The user should be able to
stop, do something else, and then resume where he or she left off, and should be able to
undo any action.
- Streamline interaction as skill levels advance and allow the interaction to be
customized. Users who perform the same actions repeatedly should have a macro mechanism
so they can customize the interface.
- Hide technical internals from the casual user. The user should never be required to
use OS commands, file management functions, or other arcane computing technology.
- Design for direct interaction with objects that appear on the screen. The user has a
feeling of control when interacting directly with objects, for example, stretching an
object.
Mandel also defines design principles that enable an interface to reduce the user’s memory load.
User Model: The user model establishes the profile of end-users of the system. To
build an effective user interface, "all design should begin with an understanding of the
intended users, including profiles of their age, sex, physical abilities, education, cultural
or ethnic background, motivation, goals and personality" [SHN90]. In addition, users
can be categorized as
- novices,
- knowledgeable, intermittent users, and
- knowledgeable, frequent users.
Mental Model: The user’s mental model (system perception) is the image of the
system that end-users carry in their heads.
These models enable the interface designer to satisfy a key element of the
most important principle of user interface design: "Know the user, know the
tasks."
Interface Validation:
Validation focuses on
- the ability of the interface to implement every user task correctly, to
accommodate all task variations, and to achieve all general user requirements;
- the degree to which the interface is easy to use and easy to learn; and
- the users’ acceptance of the interface as a useful tool in their work.
INTERFACE ANALYSIS
A key tenet of all software engineering process models is this: understand the problem
before you attempt to design a solution. In the case of user interface design,
understanding the problem means understanding (1) the people who will interact with the
system through the interface, (2) the tasks that end-users must perform to do their work,
(3) the content that is presented as part of the interface, and (4) the environment in
which these tasks will be conducted.
In the sections that follow, we examine each of these elements of interface analysis with the
intent of establishing a solid foundation for the design tasks that follow.
User analysis
Earlier we noted that each user has a mental image or system perception of the software that may
be different from the mental image developed by other users.
User interviews. The most direct approach: representatives from the software team meet
with end-users to better understand their needs, motivations, work culture, and a myriad
of other issues. Interviews can be conducted in one-on-one meetings or through focus
groups.
Sales input. Sales people meet with customers and users on a regular basis and can gather
information that will help the software team to categorize users and better understand
their requirements.
Marketing input. Market analysis can be invaluable in the definition of market segments
while providing an understanding of how each segment might use the software in subtly
different ways.
Support input. Support staff talk with users on a daily basis, making them the most
likely source of information on what works and what doesn’t, what users like and what
they dislike, what features generate questions, and what features are easy to use.
UNIT-IV
SOFTWARE ENGINEERING
Software Testing
There are two major categories of software testing:
- black box testing
- white box testing
Black box testing
Black box testing treats the system as a black box whose behavior can be determined by
studying its inputs and related outputs; it is not concerned with the internal structure
of the program.
Equivalence partitioning
- Divides all possible inputs into classes such that there is a finite number of
equivalence classes.
- An equivalence class is a set of inputs that can be linked by a relationship; testing
one representative per class reduces the cost of testing.
Example:
- Suppose valid input consists of the integers 1 to 10.
- The classes are then n < 1, 1 <= n <= 10, and n > 10.
- Choose one valid value within the allowed range and two invalid values, one greater
than the maximum value and one smaller than the minimum value (see the sketch below).
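A short Python sketch of the 1-to-10 example above: one representative value from the valid class and one from each invalid class (the function under test is hypothetical).

def accepts(n):                        # unit under test
    return 1 <= n <= 10

test_cases = {
    5: True,     # valid class: 1 <= n <= 10
    0: False,    # invalid class: n < 1
    11: False,   # invalid class: n > 10
}
for value, expected in test_cases.items():
    assert accepts(value) == expected
print("one test per equivalence class passes")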
Boundary value analysis
- Select inputs from equivalence classes such that each input lies at the edge of its
equivalence class.
- Test data lie on the edge or boundary of a class of input data, or generate data that
lie at the boundary of a class of output data.
Example: if 0.0 <= x <= 1.0, then use test cases 0.0 and 1.0 for valid input and -0.1
and 1.1 for invalid input.

Orthogonal array testing
- Applies to problems in which the input domain is relatively small but too large for
exhaustive testing.
Software Quality
Conformance to explicitly stated functional and performance requirements, explicitly
documented development standards, and implicit characteristics that are expected
of all professionally developed software.
Factors that affect software quality can be categorized in two broad groups:
- factors that can be directly measured (e.g., defects uncovered during testing), and
- factors that can be measured only indirectly (e.g., usability, efficiency, or
maintainability).
Product metrics
Product metrics for computer software help us to assess quality.
Measure
Provides a quantitative indication of the extent, amount, dimension, capacity or size of
some attribute of a product or process
Metric (IEEE 93 definition)
A quantitative measure of the degree to which a system, component, or process possesses
a given attribute.
Indicator
A metric or a combination of metrics that provide insight into the software process, a
software project or a product itself
Product Metrics for Analysis, Design, Test, and Maintenance
Product metrics for the Analysis model
Function point Metric
First proposed by Albrecht
Measures the functionality delivered by the system
FP is computed from the following parameters:
- number of external inputs (EIs)
- number of external outputs (EOs)
- number of external inquiries (EQs)
- number of internal logical files (ILFs)
- number of external interface files (EIFs)
Each parameter is classified as simple, average, or complex, and weights are assigned
as follows:

Information Domain Count   Simple   Average   Complex
EIs                           3        4         6
EOs                           4        5         7
EQs                           3        4         6
ILFs                          7       10        15
EIFs                          5        7        10
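These weights feed the standard function point computation, FP = count total x (0.65 + 0.01 x sum(Fi)), where the fourteen Fi value-adjustment factors (each rated 0 to 5) capture general system complexity. A Python sketch with hypothetical counts:

WEIGHTS = {  # (simple, average, complex) weights from the table above
    "EI": (3, 4, 6), "EO": (4, 5, 7), "EQ": (3, 4, 6),
    "ILF": (7, 10, 15), "EIF": (5, 7, 10),
}

def function_points(counts, fi):
    # counts maps each parameter to (n_simple, n_average, n_complex)
    count_total = sum(
        n * w
        for name, ns in counts.items()
        for n, w in zip(ns, WEIGHTS[name])
    )
    return count_total * (0.65 + 0.01 * sum(fi))

counts = {"EI": (3, 0, 0), "EO": (2, 0, 0), "EQ": (2, 0, 0),
          "ILF": (1, 0, 0), "EIF": (0, 0, 0)}
print(round(function_points(counts, [3] * 14), 1))   # 32.1 for this example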
DSQI (Design Structure Quality Index) is computed as
DSQI = sum of wi * Di, for i = 1 to 6
where the Di are design values and wi is the weight assigned to each Di. If the weights
sum to 1 and all are equal, each wi = 0.167. The DSQI of the present design can be
compared with past DSQI values; if the DSQI is significantly lower than the average,
further design work and review are indicated.
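A small Python sketch of this computation, with hypothetical design values D1..D6 (each assumed normalized to the 0..1 range) and six equal weights of 0.167:

weights = [0.167] * 6
d_values = [0.9, 0.8, 0.95, 0.7, 0.85, 0.9]   # hypothetical D1..D6
dsqi = sum(w * d for w, d in zip(weights, d_values))
print(round(dsqi, 3))   # 0.852; compare against the average of past designs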
METRICS FOR SOURCE CODE
HSS (Halstead Software Science)
Primitive measures that may be derived after the code is generated, or estimated once
design is complete:
- n1 = the number of distinct operators that appear in a program
- n2 = the number of distinct operands that appear in a program
- N1 = the total number of operator occurrences
- N2 = the total number of operand occurrences
The overall program length N can be estimated as
N = n1 log2 n1 + n2 log2 n2
and the program volume as
V = N log2 (n1 + n2)
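A small Python sketch of these two formulas with hypothetical counts of distinct operators and operands (note that N here is Halstead's estimated length, per the formula above):

from math import log2

def halstead_length_and_volume(n1, n2):
    N = n1 * log2(n1) + n2 * log2(n2)   # estimated program length
    V = N * log2(n1 + n2)               # program volume
    return N, V

N, V = halstead_length_and_volume(16, 24)   # hypothetical counts
print(round(N, 1), round(V, 1))             # approx 174.0 and 926.2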
SOFTWARE MEASUREMENT
Software measurement can be categorized in two ways:
- Direct measures of the software engineering process include cost and effort applied.
Direct measures of the product include lines of code (LOC) produced, execution speed,
memory size, and defects reported over some set period of time.
- Indirect measures of the product include functionality, quality, complexity,
efficiency, reliability, and maintainability.
Size-Oriented Metrics
If we choose lines of code as a normalization value, a set of simple size-oriented
metrics (such as errors per KLOC, defects per KLOC, and cost per LOC) can be developed
for each project.
Function-Oriented Metrics
Function-oriented software metrics use a measure of the functionality delivered
by the application as a normalization value. Since ‘functionality’ cannot be measured
directly, it must be derived indirectly using other direct measures. Function-oriented
metrics were first proposed by Albrecht, who suggested a measure called the function
point. Function points are derived using an empirical relationship based on countable
(direct) measures of software's information domain and assessments of software
complexity.
- Proponents claim that FP is programming language independent, making it ideal for
applications using conventional and nonprocedural languages, and that it is based on
data that are more likely to be known early in the evolution of a project, making FP
more attractive as an estimation approach.
- Opponents claim that the method requires some “sleight of hand” in that computation
is based on subjective rather than objective data, that counts of the information
domain can be difficult to collect after the fact, and that FP has no direct physical
meaning: it’s just a number.
Typical function-oriented metrics:
- errors per FP
- defects per FP
- $ per FP
- pages of documentation per FP
- FP per person-month
Measuring Quality
The measures of software quality are correctness, maintainability, integrity, and
usability. These measures provide useful indicators for the project team.
- Correctness. Correctness is the degree to which the software performs its required
function. The most common measure for correctness is defects per KLOC, where a defect
is defined as a verified lack of conformance to requirements.
- Maintainability. Maintainability is the ease with which a program can be corrected
if an error is encountered, adapted if its environment changes, or enhanced if the
customer desires a change in requirements. A simple time-oriented metric is
mean-time-to-change (MTTC): the time it takes to analyze the change request, design an
appropriate modification, implement the change, test it, and distribute the change to
all users.
UNIT-V
RISK MANAGEMENT
REACTIVE VS. PROACTIVE RISK STRATEGIES
At best, a reactive strategy monitors the project for likely risks, and resources are
set aside to deal with them should they become actual problems. More commonly, the
software team does nothing about risks until something goes wrong; then the team flies
into action in an attempt to correct the problem rapidly. This is often called
fire-fighting mode. In a reactive strategy:
- the project team reacts to risks when they occur;
- mitigation: plan for additional resources in anticipation of fire fighting;
- fix on failure: resources are found and applied when the risk strikes;
- crisis management: failure does not respond to applied resources and the project is
in jeopardy.
A proactive strategy begins long before technical work is initiated. Potential risks
are identified, their probability and impact are assessed, and they are ranked by
importance. Then the software team establishes a plan for managing risk:
- formal risk analysis is performed;
- the organization corrects the root causes of risk, examining risk sources that lie
beyond the bounds of the software and developing the skill to manage change.
SOFTWARE RISK
Risk always involves two characteristics:
- Uncertainty: the risk may or may not happen; that is, there are no 100% probable risks.
- Loss: if the risk becomes a reality, unwanted consequences or losses will occur.
When risks are analyzed, it is important to quantify the level of uncertainty in the degree
of loss associated with each risk. To accomplish this, different categories of risks are
considered.
Project risks threaten the project plan. That is, if project risks become real, it is
likely that the project schedule will slip and that costs will increase.
Technical risks threaten the quality and timeliness of the software to be produced. If a
technical risk becomes a reality, implementation may become difficult or impossible.
Technical risks identify potential design, implementation, interface, verification, and
maintenance problems.
Business risks threaten the viability of the software to be built. Business risks often
jeopardize the project or the product. Candidates for the top five business risks are:
- building an excellent product or system that no one really wants (market risk),
- building a product that no longer fits into the overall business strategy for the
company (strategic risk),
- building a product that the sales force doesn’t understand how to sell,
- losing the support of senior management due to a change in focus or a change in
people (management risk), and
- losing budgetary or personnel commitment (budget risks).
Known risks are those that can be uncovered after careful evaluation of the project
plan, the business and technical environment in which the project is being developed,
and other reliable information sources.
Predictable risks are extrapolated from past project experience.
Unpredictable risks are the joker in the deck. They can and do occur, but
they are extremely difficult to identify in advance.
2) RISK IDENTIFICATION
Risk identification is a systematic attempt to specify threats to the project plan.
There are two distinct types of risks:
- generic risks, and
- product-specific risks.
Generic risks are a potential threat to every software project.
Product-specific risks can be identified only by those with a clear understanding of
the technology, the people, and the environment that is specific to the project that is
to be built.
Known and predictable risks fall into the following generic subcategories:
- Product size: risks associated with the overall size of the software to be built or
modified.
- Business impact: risks associated with constraints imposed by management or the
marketplace.
- Customer characteristics: risks associated with the sophistication of the customer
and the developer's ability to communicate with the customer in a timely manner.
- Process definition: risks associated with the degree to which the software process
has been defined and is followed by the development organization.
- Development environment: risks associated with the availability and quality of the
tools to be used to build the product.
- Technology to be built: risks associated with the complexity of the system to be
built and the "newness" of the technology that is packaged by the system.
- Staff size and experience: risks associated with the overall technical and project
experience of the software engineers who will do the work.
Sample risk-item checklist questions include:
- Does the software engineering team have the right mix of skills?
- Are project requirements stable?
- Does the project team have experience with the technology to be implemented?
- Is the number of people on the project team adequate to do the job?
- Do all customer/user constituencies agree on the importance of the project and on
the requirements for the system/product to be built?
The impact of each risk driver on the risk component is divided into one of four
impact categories: negligible, marginal, critical, or catastrophic.
RISK PROJECTION
Risk projection, also called risk estimation, attempts to rate each risk in two ways—the
likelihood or probability that the risk is real and the consequences of the problems
associated with the risk, should it occur.
The project planner, along with other managers and technical staff, performs four risk
projection activities:
1. establish a scale that reflects the perceived likelihood of a risk,
2. delineate the consequences of the risk,
3. estimate the impact of the risk on the project and the product, and
4. note the overall accuracy of the risk projection so that there will be no
misunderstandings.
A project team begins by listing all risks (no matter how remote) in the first column
of a risk table; probability and impact are then estimated for each.
High-probability, high-impact risks percolate to the top of the table, and
low-probability risks drop to the bottom. This accomplishes first-order risk
prioritization.
The project manager then studies the resultant sorted table and defines a cutoff line.
The cutoff line (drawn horizontally at some point in the table) implies that only risks
above the line will be given further attention; risks that fall below the line are
re-evaluated to accomplish second-order prioritization.
4.2 Assessing Risk Impact
Three factors affect the consequences that are likely if a risk does occur: its nature,
its scope, and its timing.
- The nature of the risk indicates the problems that are likely if it occurs.
- The scope of a risk combines the severity (just how serious is it?) with its overall
distribution.
- The timing of a risk considers when and for how long the impact will be felt.
The exposure of each risk can be computed as RE = P x C, where P is the probability of
occurrence and C is the cost to the project should the risk occur. The total risk
exposure for all risks above the cutoff in the risk table can provide a means for
adjusting the final cost estimate for a project.
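A short Python sketch of first-order prioritization using risk exposure RE = P x C; the risks, probabilities, costs, and cutoff below are all hypothetical figures:

risks = [                      # (name, probability, cost if it occurs)
    ("staff turnover",      0.60, 45_000),
    ("reusable components", 0.30, 20_000),
    ("schedule slip",       0.50, 60_000),
]
table = sorted(((p * c, name) for name, p, c in risks), reverse=True)
cutoff = 15_000                # the manager's cutoff line
for exposure, name in table:
    action = "manage" if exposure >= cutoff else "re-evaluate"
    print(f"{name}: RE = {exposure:,.0f} ({action})")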
RISK REFINEMENT
One way to refine risks is to represent each risk in condition-transition-consequence
(CTC) format. As an example, a general condition concerning uncertainty about planned
reuse of software components can be refined in the following manner:
Subcondition 1. Certain reusable components were developed by a third party with
no knowledge of internal design standards.
Subcondition 2. The design standard for component interfaces has not been
solidified and may not conform to certain existing reusable components.
Subcondition 3. Certain reusable components have been implemented in a language
that is not supported on the target environment.
As another example, consider the risk of high staff turnover. To mitigate this risk,
project management must develop a strategy for reducing turnover. Among the possible
steps to be taken:
- Meet with current staff to determine causes for turnover (e.g., poor working
conditions, low pay, competitive job market).
- Mitigate those causes that are under our control before the project starts.
- Once the project commences, assume turnover will occur and develop techniques to
ensure continuity when people leave.
- Organize project teams so that information about each development activity is widely
dispersed.
- Define documentation standards and establish mechanisms to ensure that documents are
developed in a timely manner.
- Conduct peer reviews of all work (so that more than one person is "up to speed").
- Assign a backup staff member for every critical technologist.
As the project proceeds, risk-monitoring activities commence. The following factors can
be monitored:
- general attitude of team members based on project pressures,
- the degree to which the team has jelled,
- interpersonal relationships among team members,
- potential problems with compensation and benefits, and
- the availability of jobs within the company and outside it.
Software safety and hazard analysis are software quality assurance activities that focus
on the identification and assessment of potential hazards that may affect software
negatively and cause an entire system to fail. If hazards can be identified early in the
software engineering process, software design features can be specified that will either
eliminate or control potential hazards.
QUALITY MANAGEMENT
QUALITY CONCEPTS:
Quality management
encompasses
(1) a quality management approach,
(2) effective software engineering technology (methods and tools),
(3) formal technical reviews that are applied throughout the software process,
(4) a multitier testing strategy,
(5) control of software documentation and the changes made to it,
(6) a procedure to ensure compliance with software development standards (when applicable), and
(7) measurement and reporting mechanisms.
Variation control is the heart of quality control.
1.1 Quality
The American Heritage Dictionary defines quality as “a characteristic or attribute of something.”
Quality of design refers to the characteristics that designers specify for an item.
Quality of conformance is the degree to which the design specifications are
followed during manufacturing.
In software development, quality of design encompasses requirements, specifications,
and the design of the system. Quality of conformance is an issue focused primarily on
implementation. If the implementation follows the design and the resulting system meets
its requirements and performance goals, conformance quality is high.
Robert Glass argues that a more “intuitive” relationship is in order:
User satisfaction = compliant product + good quality + delivery within budget and schedule
Quality costs may be divided into costs associated with prevention, appraisal, and failure.
Prevention costs include:
- quality planning
- formal technical reviews
- test equipment
- training
Appraisal costs include activities to gain insight into product condition the “first
time through” each process. Examples of appraisal costs include:
- in-process and interprocess inspection
- equipment calibration and maintenance
- testing
SOFTWARE QUALITY ASSURANCE
The SQA group serves as the customer's in-house representative; that is, the people
who perform SQA must look at the software from the customer's point of view. The SQA
group:
Prepares an SQA plan for a project. The plan is developed during project planning and
is reviewed by all interested parties. Quality assurance activities performed by the
software engineering team and the SQA group are governed by the plan. The plan
identifies:
- evaluations to be performed
- audits and reviews to be performed
- standards that are applicable to the project
- procedures for error reporting and tracking
- documents to be produced by the SQA group
- amount of feedback provided to the software project team
Audits designated software work products to verify compliance with those defined as
part of the software process. The SQA group reviews selected work products; identifies,
documents, and tracks deviations; verifies that corrections have been made; and
periodically reports the results of its work to the project manager.
Ensures that deviations in software work and work products are documented and handled
according to a documented procedure. Deviations may be encountered in the project plan,
process description, applicable standards, or technical work products.
3) SOFTWARE REVIEWS
Software reviews are a "filter" for the software engineering process. That is, reviews are
applied at various points during software development and serve to uncover errors and
defects that can then be removed. Software reviews "purify" the software engineering
activities that we have called analysis, design, and coding.
Many different types of reviews can be conducted as part of software engineering, and
each has its place. An informal meeting around the coffee machine is a form of review
if technical problems are discussed. A formal presentation of software design to an
audience of customers, management, and technical staff is also a form of review.
A formal technical review (FTR) is the most effective filter from a quality assurance
standpoint. Conducted by software engineers (and others) for software engineers, the
FTR is an effective means for improving software quality. The primary objective of the
FTR is to find errors during the process so that they do not become defects after
release of the software.
A number of industry studies indicate that design activities introduce between 50
and 65 percent of all errors during the software process. However, formal review
techniques have been shown to be up to 75 percent effective in uncovering design
errors. By detecting and removing a large percentage of these errors, the review
process substantially reduces the cost of subsequent steps in the development and
support phases.
To illustrate the cost impact of early error detection, consider a series of relative
costs based on actual cost data collected for large software projects. Assume that an
error uncovered during design will cost 1.0 monetary unit to correct. Relative to this
cost, the same error uncovered just before testing commences will cost 6.5 units;
during testing, 15 units; and after release, between 60 and 100 units.
It is important to establish a follow-up procedure to ensure that items on the issues list
have been properly corrected.
Sample-driven reviews (SDRs) attempt to quantify those work products that are primary
targets for full FTRs. To accomplish this, the following steps are suggested (a short
sketch follows):
- Inspect a fraction ai of each software work product i. Record the number of faults fi
found within ai.
- Develop a gross estimate of the number of faults within work product i by multiplying
fi by 1/ai.
- Sort the work products in descending order according to the gross estimate of the
number of faults in each.
- Focus available review resources on those work products that have the highest
estimated number of faults.
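A small Python sketch of this estimate with hypothetical inspection fractions and fault counts: the gross estimate for each work product is fi / ai, and products are reviewed in descending order of estimated faults.

samples = {   # work product: (fraction inspected a_i, faults found f_i)
    "requirements model": (0.20, 4),
    "design model":       (0.25, 9),
    "test plan":          (0.50, 2),
}
estimates = {wp: f / a for wp, (a, f) in samples.items()}
for wp, est in sorted(estimates.items(), key=lambda kv: -kv[1]):
    print(f"{wp}: ~{est:.0f} faults estimated")   # review top items first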
For software, statistical quality assurance implies the following steps: information
about software defects is collected and categorized; an attempt is made to trace each
defect to its underlying cause (e.g., non-conformance to specifications, design error,
violation of standards, poor communication with the customer); using the Pareto
principle (80 percent of the defects can be traced to 20 percent of all possible
causes), the 20 percent (the "vital few") are isolated; and once the vital few causes
have been identified, the problems that have caused the defects are corrected.
The application of statistical SQA and the Pareto principle can be summarized in a
single sentence: spend your time focusing on things that really matter, but first be
sure that you understand what really matters.
Six Sigma for Software Engineering: The Six Sigma methodology defines three core steps:
- Define customer requirements and deliverables and project goals via well-defined
methods of customer communication.
- Measure the existing process and its output to determine current quality performance
(collect defect metrics).
- Analyze defect metrics and determine the vital few causes.
If an existing software process is in place but improvement is required, Six Sigma
suggests two additional steps:
- Improve the process by eliminating the root causes of defects.
- Control the process to ensure that future work does not reintroduce the causes of
defects.
These core and additional steps are sometimes referred to as the DMAIC (define,
measure, analyze, improve, and control) method.
If an organization is developing a software process (rather than improving an existing
process), the core steps are augmented as follows:
- Design the process to avoid the root causes of defects and to meet customer
requirements.
- Verify that the process model will, in fact, avoid defects and meet customer
requirements.
This variation is sometimes called the DMADV (define, measure, analyze, design, and
verify) method.
6) THE ISO 9000 QUALITY STANDARDS
A quality assurance system may be defined as the organizational structure, responsibilities,
procedures, processes, and resources for implementing quality management.
ISO 9000 describes quality assurance elements in generic terms that can be applied to any
business regardless of the products or services offered.
ISO 9001:2000 is the quality assurance standard that applies to software engineering.
The standard contains 20 requirements that must be present for an effective quality
assurance system. Because the ISO 9001:2000 standard is applicable to all engineering
disciplines, a special set of ISO guidelines has been developed to help interpret the
standard for use in the software process.
The requirements delineated by ISO 9001 address topics such as:
- management responsibility,
- quality system,
- contract review,
- design control,
- document and data control,
- product identification and traceability,
- process control,
- inspection and testing,
- corrective and preventive action, and
- control of quality records.
SOFTWARE RELIABILITY
Software reliability is defined in statistical terms as "the probability of
failure-free operation of a computer program in a specified environment for a
specified time."