Software Engineering Notes

Design and Development of I.S

SOFTWARE DEVELOPMENT

Definition: The activities that go into producing an “information systems” solution to an “organizational” problem, opportunity or challenge.
 It is part of the organizational problem-solving process.
 The problem may be: -
 Non-performance: the organization is not performing as expected.
 Realization that the organization should take advantage of new opportunities (to perform more successfully).

System development is a structured problem-solving process with distinct activities: -

 System analysis
 System design
 Programming/ implementation
 Testing
 Conversion
 Production
 Maintenance

NB: The activities normally take place sequentially, but some may need to be repeated while others may take place simultaneously, depending on the system-building approach employed.

SYSTEMS ANALYSIS

Definition: The analysis of a problem that the organization will try to solve with an
Information System.
It consists of: -
 Defining the problem
 Identifying its causes.
 Specifying the solution
 Identifying the information requirements that must be met
by a system solution (parameters)

Important factor
A thorough understanding of the existing organization and system is required. The systems analyst therefore identifies the stakeholders (owners and users) of data in the organization, and also identifies the hardware and software serving the organization.
The analyst is then able to detail the problems of the existing system by: -
1. Examining documents, work papers, and procedures
2. Observing system operations
3. Interviewing stakeholders

The analyst thereby identifies the problem areas and the objectives to be achieved by the solution, which often is:
- Building a new system
- Improving on the existing one.

FEASIBILITY

Systems analysis includes a feasibility study to determine whether a solution is feasible, or achievable, within the organization's constraints and resources.

Definition
A feasibility study is a process/ way of determining whether a solution is achievable, given the organization's resources and constraints.
Three major areas of feasibility must be addressed
 Technical feasibility
Determine whether the proposed solution can be implemented with the
available hardware, software, and technical resources.

 Economic feasibility
Determine whether the monetary benefits of the proposed solution outweigh the costs (a simple worked sketch follows this list).

 Operational feasibility
Determine whether the solution is desirable within the existing managerial
and organizational framework.
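The economic feasibility test above can be illustrated with a minimal Python sketch. The figures and the three-year horizon are assumptions for illustration only, not part of these notes.

# Illustrative economic feasibility check: do the monetary benefits of the
# proposed solution outweigh its costs over the period considered?

def net_benefit(one_off_cost, annual_running_cost, annual_benefit, years):
    """Total benefit minus total cost over the given number of years."""
    total_cost = one_off_cost + annual_running_cost * years
    total_benefit = annual_benefit * years
    return total_benefit - total_cost

# Hypothetical figures for a proposed system (assumed for illustration only).
surplus = net_benefit(one_off_cost=50_000,
                      annual_running_cost=8_000,
                      annual_benefit=30_000,
                      years=3)
print("Economically feasible" if surplus > 0 else "Not economically feasible",
      f"(net benefit over 3 years: {surplus})")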

System analysis process identifies several alternative solutions that can be pursued by
the organization.
Three basic solutions to a problem are: -
1. To do nothing and leave the existing situation unchanged
2. To modify/ enhance existing systems
3. To develop a new system

To select either the 2nd or 3rd solution, the analyst should produce a written systems proposal/ report describing: -
 Cost and benefits
 Advantages and disadvantages of each alternative
 Technical features
 Organizational impacts

Therefore management can select.

ESTABLISHING SYSTEM REQUIREMENTS (work of systems analyst)

Definition
A systems requirement is a detailed statement of the needs that a new system solution must satisfy, identifying who needs it, where, when and how.
 It defines the objectives of the new/ modified system
 It develops a detailed description of the functions the new system must perform
 It must consider economic, technical and time constraints
 It must also consider the goals, procedures and decision processes of the organization

NB: Faulty/ poor requirements analysis is a leading cause of system failure and high system development costs.
Requirements are difficult to establish if the users are unsure of what they want/ need.

NB: Systems analysis describes what the system should do to meet the requirements.



SYSTEMS DESIGN

Definition
Systems Design details how a system will meet the information requirements as
determined by the systems analysis i.e. how it will fulfill the objectives

 It is the overall plan or model for the system.


 It consists of all specifications that give the system its form and structure
 It is an exacting and creative task demanding imagination, sensitivity to detail and expert skills
 It has three objectives:

1. The system designer is responsible for considering alternative technology configurations for carrying out and developing the system as described by the analyst - by analyzing the performance, security, changeability, portability etc. of different hardware and software.
2. Management and control of the technical realization, testing and training, as well as the actual procurement of the hardware, consultants and software needed by the system.
3. Detailed system specification that will deliver the functions identified during system analysis, and which should address all components of the system solution.

There are two types of system design – LOGICAL AND PHYSICAL DESIGN

LOGICAL DESIGN lays out the components of the system and their relationships to each other, as they would appear to the user.
 Shows what the system would do, not how it would do it
 Describes inputs and outputs, processing functions, procedures and controls

PHYSICAL DESIGN is the process of translating the abstract logical model into the
specific technical design for the new system
 Produces actual specifications for Hardware, Software, physical
database, input/output media, manual procedure and specific
control

End users must participate in specifying the requirements (that drive the system-building effort) and give their priorities, needs and biases. This increases acceptance, and reduces problems of unfamiliarity, power transfer and inter-group conflict.

PROGRAMMING/ IMPLEMENTING/ DEVELOPMENT

Definition
Programming is the process of translating the system specifications prepared during the design stage into program code, i.e. a fully operational system.

A system development environment is a workbench that offers the hardware and software tools required by the developer, and should have: -
 Language translators – compilers and assemblers
 Module linkers and libraries
 Error reporting features
 Specification requirements and validation features
 Documentation processing and production
 Estimation, planning and process monitoring tools
 Testing tools etc

TESTING

Definition
The exhaustive and thorough process that determines whether the system produces the
desired results under known conditions
 As much as 50% of the development budget can be expended in testing
 Test data must be carefully prepared, results reviewed and corrections made in the system

Testing is divided into three activities: -


 Unit testing
This is the process of testing each unit (program) separately in the system (program testing); a minimal sketch follows this list.

 System testing
It tests the functioning of the system as a whole (integrated system) to
determine if discrete units (sub-systems) will function together as planned

 Acceptance testing
Provides the final certification that the system is ready to be used in a
productive setting.
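A minimal unit-testing sketch in Python. The vat_inclusive_price function and the 16% rate are hypothetical; the point is that a single unit is exercised separately under known conditions and its outputs compared with the expected results.

import unittest

def vat_inclusive_price(net_price, rate=0.16):
    """Hypothetical unit under test: add VAT at the given rate."""
    if net_price < 0:
        raise ValueError("net price cannot be negative")
    return round(net_price * (1 + rate), 2)

class VatInclusivePriceTest(unittest.TestCase):
    def test_known_value(self):
        # Known condition and expected result.
        self.assertEqual(vat_inclusive_price(100), 116.0)

    def test_rejects_negative_price(self):
        with self.assertRaises(ValueError):
            vat_inclusive_price(-1)

if __name__ == "__main__":
    unittest.main()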

To ensure testing is clear and comprehensive a systematic test plan must be employed

NB: the development team and users prepare a test plan with details on how tests will be carried out. It must detail: -
 Expected inputs
 Expected outputs
 Expected error reactions
 Expected communications
 Expected termination etc

CONVERSION

Definition
The process of changing from the old system to the new system.
- Starts by checking if the system will work
under real conditions

Four main conversion strategies can be employed


1. Parallel strategy is a safe and conservative conversion approach where both the old system and its potential replacement are run together for a time until the new system is proved to be functioning correctly.
2. Direct cutover strategy is a risky conversion approach in which the new system completely replaces the old one on an appointed date.
3. Pilot study strategy is introduction of the new system to a limited area of the
organization until it is proven to be fully functional, after which full conversion
can take place
4. Phased approach strategy introduces the new system in stages (either by
functions or organizational units)

NB: A formal conversion plan provides a schedule of all activities required to install
a new system

Conversion requires
- End user training
- Detailed documentation

PRODUCTION AND MAINTENANCE

Definition
Production is the stage after system installation and conversion, during which users and specialists review the system to determine how well it has met its original goals. It is also decided whether any revisions or modifications are in order.

Maintenance refers to changes in hardware, software, documentation, or procedures made to a production system to correct errors, meet new requirements or improve processing efficiency.

SYSTEMS DESIGN

The process of interpreting the functions of the (each) system in a way that they can be transformed into one or more computer programs.

It involves bringing scattered particulars under one idea so that everyone (in the team) understands it, and also separating the idea into related parts.

A common design approach is the top-down approach, where a design is developed in a series of stages, each successive stage being a more refined version of the previous one.
It uses both programming-language-based notations (selection and iteration) and graphical notations (flow charts, data flow diagrams, syntax diagrams, HIPO (Hierarchical Input Process Output) diagrams etc).
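A minimal Python sketch of the top-down approach. The payroll example and the function names are assumptions for illustration: the top level is written first in terms of lower-level operations, and each successive stage refines the previous one until the parts are simple enough to implement directly.

# Stage 1: the whole system as one abstract step, expressed in terms of
# sub-steps that are refined in later stages.
def run_payroll(employees):
    records = read_employee_records(employees)
    payments = calculate_payments(records)
    print_payslips(payments)

# Stage 2: each sub-step is refined in turn; here the refinements are simple
# enough to implement directly (illustrative logic only).
def read_employee_records(employees):
    return [{"name": name, "hours": hours, "rate": rate}
            for name, hours, rate in employees]

def calculate_payments(records):
    return [(r["name"], r["hours"] * r["rate"]) for r in records]

def print_payslips(payments):
    for name, amount in payments:
        print(f"{name}: {amount}")

run_payroll([("Achieng", 160, 550), ("Mwangi", 150, 600)])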

It should be easy and straightforward to check the design for consistency and translate it for implementation.

It should be validated to verify its correctness, i.e. that it complies with the designer's perception of the solution and meets the specified requirements.

Independent experts not involved in the project should review it.

The main activities of system design include: -


 Define output requirements, volume, frequency, format and distribution
 Specify input layout, frequency etc
 Develop overall system logic
 Determine control and audit procedure
 Establish information flow, data elements, output requirements, data
relationships etc
 Identify master files, working files, frequency of updating, length of retention and speed of response required from each file
 Decide which storage devices should be used
 Determine the file organization, layout and data volumes, or the database design
 Identify the computer programs and manual procedures required
 Prepare program (system) specifications
 Develop general test requirements
 Revise estimation of operational costs of systems
 Document the design phase in a report for user and system management

DESIGN OBJECTIVES
i. Produce information with all the desired qualities, i.e. accuracy, efficiency etc
ii. Develop a system that conforms with the organization's policy
iii. Reduce the total volume and amount of input and output to the minimum practical level
iv. Design a system that is able to handle all envisaged expansion during its lifetime
v. Develop a system which will link with other systems in the organization
vi. Develop and design a realistic and cost-effective system
vii. Improve information flow
viii. Design a system that can be understood by its operators
ix. Identify system security needs, therefore ensuring confidentiality of data and minimizing risks and accidents

SOFTWARE DESIGN TOOLS


 Structured flow charts

[Figure: structured flow chart symbols (start/stop, flow, decision), illustrated with a maintenance process flow chart: change implementation, adaptive/ corrective/ perfective maintenance, system release planning and system release.]
 HIPO (Hierarchical Input Process Output)


Comprises two parts
i. Visual table of contents (V.T.O.C)
ii. Function Diagram

The V.T.O.C shows the relationship between each of the documents making up a HIPO package. The numbers in the contents section correspond to those in the organization section. The modules are shown in increasing detail; three to five levels of modules are common.
[Figure: V.T.O.C for an example invoice program, with modules such as check database, print personal record, calculate cost and payments. Program 1.0 can call programs 2.0, 3.0 and 4.0, which in turn can call modules 2.1, 2.2 and 3.1, 3.2, 3.3 respectively.]

[Figure: change request and impact analysis within the system development process. NB: each activity entails interaction with the organization.]

Levels of design and the design principles they rely on:
 Data design - abstraction
 Architectural design - modularity
 Interface design - encapsulation
 Component-level design - cohesion and coupling

 Activity diagram (UML)


PROFESSIONAL ISSUES IN SYSTEMS DEVELOPMENT

System development is a profession and belongs to the engineering discipline, which employs scientific methods in solving problems and providing solutions to society.

A profession is an employment (not mechanical) that requires some degree of learning; a calling or habitual employment. Collectively, a profession is the body of persons engaged in that calling.

There are a number of tasks carried out in an engineering organization, and they are classified by their function: -
a) Production: activities that directly contribute to creating the products and services the organization sells
b) Quality management: activities necessary to ensure that the quality of products/ services is maintained at the agreed level
c) Research and development: finding ways of creating/ improving products and the production process
d) Sales and marketing: selling products/ services; involves activities such as advertising, transporting, distribution etc

The main professional task in system development is the management of these tasks, with the aim of producing systems that meet users' needs, on time and within budget.

Therefore main concerns of the management are: -


 Planning
 Progress monitoring
 Quality control

MANAGING SYSTEM DEVELOPMENT PROJECT

Effective system project management focuses on 4P’s i.e.


 People : recruiting, selection, performance management, training,
compensation, career development, organization and work design and
team/ culture development
 Product : the product objectives and scope should be established first, alternative solutions considered, parameters established etc. By defining the product it becomes possible to estimate cost and effort and break the project down into manageable schedules
 Process : the framework activities from which a comprehensive plan for system development can be established
 A number of framework activities, each made up of tasks, milestones, work products and quality assurance points, which the project team adapts
 Umbrella activities such as quality assurance, system configuration management and measurement overlay the process model
 Project : a planned and controlled (system) undertaking to achieve a goal/ attain a solution

PEOPLE
“Systems are not developed by individuals but by teams”.
Players in system development: -
 Senior Managers: define business issues that have significant influence on the
project
 Project (technical) managers: plan, motivate, organize and control the practitioners who do the development work
 Practitioners : deliver the technical skill necessary to engineer a product
 Customers : specify the requirement for the system to be engineered
 End-users : interact with the released system/ product

SYSTEM DEVELOPMENT TEAM LEADERS


They should be: -
 Motivating : able to encourage team members
 Organized : able to mould existing processes (or invent new ones) that will enable the initial concept to be translated into the final product
 Innovative : able to generate new/ creative ideas/ solutions
 Achievers : able to optimize the productivity of team members
 Problem solvers : able to diagnose the technical and organizational issues that are relevant and develop a solution
 Controllers/ authoritative : able to take charge of the project, therefore confident enough to control it
 Understanding and flexible : able to understand others' points of view, read others' reactions/ signals, change position flexibly, and remain in control during high-stress situations.
The team should be motivated: -
 Provided with a conducive working environment
 Properly rewarded
 Issued with properly drafted and interpreted specifications and tasks
 Given security

PRODUCT
A major challenge to the system development manager is producing quantitative estimates and an organized plan in view of scattered requirements, the unavailability of solid information and fluid (changing) requirements.
Therefore the product and the problem to be solved must be examined at the start of the project.

The first management activity is to determine the system scope by looking at: -

a) Context : how does it fit into a larger system, product, or business context
b) Information objectives : what are its input and output requirements
c) Function and performance : what functions does it perform in order to transform input into output

The scope is then refined through problem decomposition/ problem partitioning/ problem elaboration.

PROCESS
The generic phases that characterize the system development process are definition, development and support. An appropriate engineering (process) model must be employed: -
a) Linear sequential (traditional/ waterfall) model
b) Prototyping
c) RAD model
d) Spiral model
e) Incremental model

PROJECT PLANNING
Managers are responsible for
a) Writing project proposal
b) Writing project costing
c) Project planning and scheduling
d) Project monitoring and reviewing
e) Personnel selection and evaluation
f) Report writing and presentations

Project planning is concerned with identifying the activities, milestones and deliverables produced by a project.
- A plan must be drawn up to guide the development towards the project goals
- System project estimation is the activity concerned with estimating the resources required to accomplish the project plan

The project manager must anticipate problems which might arise and prepare tentative solutions to those problems.
- The plan is used as the driver for the project
- The (initial) plan is not static but must be modified as the project progresses and as more information becomes available

TYPES OF PLAN
a) Quality plan : describes quality procedures and standards that will be used in a
project
b) Validation plan : describes the approach, resources and schedule used for system
validation
c) Configuration management plan : describes the configuration management
procedures and structure to be used
d) Maintenance plan : predicts the maintenance requirements of the system,
maintenance cost and effort required
e) Staff development plan : describes how the skills and experience of the project
team members will be developed.

The planning process starts with an assessment of the constraints (required delivery
date, overall budget, staff available etc) affecting the project

This is carried out in conjunction with an estimation of project parameters such as


structure, size and distribution of functions

The program milestone and deliverables are then defined

A schedule (for the project) is drawn, analyzed and passed and subjected to later
reviews

PROJECT PLAN
Sets out: -
 Resources available to the project
 The work breakdown
 Schedule for carrying out the work

Project plan structure (for the development process): -


 Introduction: describes the objectives of the project and the constraints
(budget, time etc) affecting the project management.
 Project organization : describe the organization of development team,
people involved and their roles in the team
 Risk analysis: describes possible project risks, the likelihood of their occurrence and the risk reduction strategies proposed.
 Hardware and software resources requirements
 Work breakdown: describes the breakdown of the project into activities
and identifies milestone and deliverable for each activity
 Project schedule: describes the dependencies between activities, the
estimated time required for each milestone and allocation of people to
activities
 Monitoring and reporting mechanisms: describes the management reports to be produced, when they should be produced, and the project monitoring mechanisms used.
PROJECT SCHEDULING

Scheduling is the estimation of the time and resources required to complete activities and the organization of these activities into a coherent sequence.

It involves separating the work (project) into separate activities and judging the time required to complete these activities, some of which are carried out in parallel.

Schedules must: -
 Properly co-ordinate the parallel activities
 Avoid situations where the whole project is delayed while waiting for a critical task to be finished

Schedules must include allowances for problems that can cause delays in completion, and must therefore be flexible.

They must also estimate resources needed to complete each task (human effort,
hardware, software, finance (budget) etc)

NB: the key to estimation is to estimate as if nothing will go wrong, then increase the estimate to cover anticipated problems, and finally add a further contingency factor to cover unanticipated problems.

Project schedule is usually presented as a set of charts showing


 Work breakdown
 Activity dependency
 Staff allocation

Such charts include: -


 Activity bar charts
 Activity network charts
 Gantt charts (staff allocation vs. time charts)

PROJECT ESTIMATION
System (software) cost and effort estimates can never be exact; too many variables - human, technical, environmental, political - can affect the system cost and the effort applied to development.

Project estimation strives to achieve reliable cost and effort estimates.

A number of options arise in trying to achieve this: -

a) Delay estimation until late in the project (estimates are only fully accurate after the project is complete)
b) Base estimates on similar projects that have already been completed
c) Use relatively simple decomposition techniques to generate project cost and effort estimates
d) Use one or more empirical models for system cost and effort estimation

The 1st option is not practical; estimates must be provided “upfront”.
The 2nd option only works where similar projects exist.
The 3rd and 4th options should be used together so that each checks the other.

Decomposition techniques take a “divide and conquer” approach to project estimation: the project is decomposed (divided) into major functions and related activities, and cost and effort are estimated for each.

Empirical estimation models are based on experience (historical data) and take the form

D = f(vi)

where D = one of a number of estimated values (e.g. effort, cost, project duration etc)
and vi = selected independent parameters (e.g. estimated LOC or FP)

DECOMPOSITION TECHNIQUES

SOFTWARE/ SYSTEM SIZING

The accuracy of a system (software) project estimate is predicated on a number of things:

a) The degree to which the planner has properly estimated the size of the product to be built.
b) The ability to translate the size estimate into human effort, calendar time and money.
c) The degree to which the project plan reflects the abilities of the system development team.
d) The stability of the product requirements and of the environment that supports the system development effort.

A project estimate is only as good as the estimate of the size of the work to be accomplished.
Size is a quantifiable outcome of the system/ software project.

Four different approaches to the sizing problem are: -

a) “Fuzzy-logic” sizing: an approach that uses the approximate reasoning techniques that are the cornerstone of fuzzy logic; it is qualitative
b) Function point sizing
c) Standard component sizing
d) Change sizing

PROBLEM BASED ESTIMATION

LOC and FP are used in project estimation: -

a) As estimation variables used to “size” each element of the system/ software
b) As baseline metrics collected from past projects

(LOC = Lines of Code; FP = Function Points)
EMPIRICAL ESTIMATION MODELS

An estimation model for computer software uses an empirically derived formula to predict effort as a function of LOC or FP, typically of the form

E = A + B x (ev)^C

where A, B and C are empirically derived constants, E is effort (e.g. in person-months) and ev is the estimation variable (LOC or FP).

COCOMO MODEL (Barry Boehm)

COCOMO is Constructive Cost Model


It has been revised into COCOMO II
COCOMO II is a hierarchy of estimation models that addresses the following areas: -
a) Application composition model – used in the early development stages, when the user interface is being prototyped and performance and technology maturity are being assessed.
b) Early design stage model – used after the requirements have been established and the basic system architecture has been established.
c) Post-architecture stage model – used during construction of the system.

Like other models, COCOMO II requires sizing information.
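As an illustration of how an empirical model turns a size estimate into an effort estimate, here is a minimal Python sketch of the original (basic) COCOMO formula, effort = a x (KLOC)^b person-months, with Boehm's published coefficients for the three project classes. COCOMO II itself uses more elaborate scale factors and cost drivers.

# Basic COCOMO (Boehm): effort in person-months = a * (KLOC ** b).
COEFFICIENTS = {
    "organic":       (2.4, 1.05),  # small teams, familiar problems
    "semi-detached": (3.0, 1.12),  # intermediate size and experience
    "embedded":      (3.6, 1.20),  # tight hardware/operational constraints
}

def basic_cocomo_effort(kloc, mode="organic"):
    a, b = COEFFICIENTS[mode]
    return a * (kloc ** b)

# e.g. a 32 KLOC organic-mode project
print(round(basic_cocomo_effort(32, "organic"), 1), "person-months")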

QUALITY MANAGEMENT AND ASSURANCE

It is the responsibility of quality managers to ensure that the required level of quality is achieved.

Definition – Quality management involves defining appropriate procedures and standards and checking that all engineers (developers) follow them.

It depends on developing a “quality culture”.

System quality is multi-dimensional – the product should meet its specification, but:
a) The specification depends on customer needs and wants, as well as on the developers' needs/ requirements, which may not be included in the specification
b) Some qualities are difficult to measure
c) Some specifications are incomplete

“Quality is hard to define, impossible to measure and easy to recognize.” (Kitchenham 1987)

Definition – “Quality is continually satisfying customer requirements” (Smith 1987)
International Standards Organization (ISO) – The totality of features and characteristics of a product or service that bear on its ability to satisfy specified or implied needs (ISO 1986)

Garvin's view of quality (Garvin 1984) identifies five views of quality:

a) The transcendent view – Quality is immeasurable but can be seen, sensed or felt and appreciated, e.g. in art or music
b) Product based view – Quality is measured by the attributes/ ingredients of a product
c) User based view – Quality is fitness for purpose, meeting needs as specified
d) Manufacturing based view – Quality is conformance to the specification
e) Value based view – the ability to provide the customer with the product/ services they want at a price they can afford.

Quality Assurance
The quality management system defines: -
- The relevant procedures and standards to be followed
- The Quality Assurance assessments to be carried out

Definition – Quality Assurance consists of controls to ensure that: -
- The relevant procedures and standards are followed
- The relevant deliverables are produced

Specifying the standards to be applied during development enforces the quality of the products. The specification should include the Quality Assurance (QA) standards to be adopted, which should be one of the recognized standards or client-specified ones, e.g.

ISO 9000 set – ISO


BS 5750 - British
EN 29000 - European

Good Quality System should have some measurable qualities: -


 Correctness
 Maintainability
 Integrity
 Usability

Correctness ensures the system operates correctly, provides value to its users and performs the required functions; defects must therefore be fixed/ corrected.

Maintainability is the ease with which the system can be corrected if an error is encountered, adapted if its environment changes, or enhanced if the user desires a change in requirements.

Integrity is the measure of the system's ability to withstand attacks (accidental or intentional) on its security, in terms of its data (processing, performance), programs and documentation.

Usability is the measure of the user-friendliness of a system, measured in terms of the physical and intellectual skills required to learn the system, the time required to become moderately efficient in using it, the net increase in productivity when used by a moderately efficient user, and the general user attitude towards the system.

System Quality can be looked at in two ways: -


a) Quality of design
b) Quality of conformance

Quality of design – the characteristics the designers specify for an item/ product: the grade of the materials, tolerances and performance specifications.

Quality of conformance
The degree to which the design specification is followed during development and construction (implementation).

Since quality should be measurable, quality assurance needs to be put in place.

Quality Assurance consists of the auditing and reporting functions of management. It is based on recognized standards or client-designed standards.
Quality Assurance must lay down the working procedures to be adopted during the project lifetime, which include: -

 Design reviews
 Program monitoring and reporting
 Program reviews
 Liaison mechanism
 Quality Assurance related procedure
 Test procedure
 Fault reporting
 Delivery mechanisms
 Safety aspects
 Resource usage

Correct usage of the procedures must be verified during the development phase

The Quality Assurance system should be managed independently of the development and production departments, and clients should have a right to access the contractor's Quality Assurance system and plan.

Quality Assurance builds the client's confidence (increasing acceptability) as well as the contractor's own confidence in knowing that they are building the right system and that it will be highly acceptable.

Testing and error correction assure that the system will perform as expected without defects or collapse, and also ensure accuracy and reliability.

POOR QUALITY SYSTEM


 High cost of maintenance and correcting errors (unnecessary maintenance)
 Low productivity
 Unreliability
 Risk of injury – for safety-critical systems (e.g. robots)
 Loss of business due to errors
 Lack of confidence in the developers by clients.

SOFTWARE QUALITY ISSUES

METRICS
A metric is a quantitative measure of the degree to which a system, component, or process possesses a given attribute. Measurement occurs as a result of the collection of one or more data points.
Software engineers collect measures and develop metrics so that indicators can be obtained.
An indicator is a metric or combination of metrics that provides insight into the software process, the project or the product itself.
An indicator provides insight that enables project managers or software engineers to adjust the process or project to make things better.
Metrics should be collected so that process and product indicators can be ascertained, and so that software engineers and organizations gain insight into the efficiency of an existing process (i.e. the paradigm, software engineering tasks, work products, and milestones).

Metrics are mainly applied to software productivity and quality. They are used to measure software development “output” as a function of the effort and time applied, and to measure the “fitness for use” of the product.
Software processes, products and resources are measured to characterize them, gain understanding of them and establish baselines for comparison with future assessments.
Software is also measured:
 To evaluate and determine status with respect to plans, and to ensure we do not get off track during the engineering life cycle/ lifetime.
 To predict: by gaining an understanding of the relationships among processes and products, observed values can be used to predict others. This helps in planning for future trends, costs, time and quality.
 For projection and estimation of costs, useful in risk analysis and in making design-cost trade-offs.
 For improvement, after identifying problems, root causes, inefficiencies and other opportunities for improving software quality and process performance.

Metrics help managers assess what works and what doesn’t.


Process metrics
These are collected across all projects over long periods of time. They provide indicators that lead to long-term software process improvements.
Project metrics (indicators)
These enable software managers to assess the status of an on-going project, track potential risks, uncover problem areas before they become critical, adjust workflow or tasks, and evaluate the project team's ability to control the quality of software products.

Software Measurements
 Size-oriented metrics
 Function oriented metrics
 Extended function point metrics

Size oriented metrics


Size-oriented metrics are derived by normalizing quality and productivity measures by the size of the software produced.
This could be done by measuring (a small sketch follows this list):
1. Lines of code (LOC),
2. Effort (person-months of effort),
3. The cost incurred in all activities in the production lifetime (analysis, design, coding, testing e.t.c.),
4. Documentation size in pages,
5. Errors recorded before software release,
6. Defects after release,
7. And the total number of people involved in its development.
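A minimal Python sketch showing how the measures above are normalized by size (per KLOC) to give size-oriented quality and productivity metrics. The project figures are assumptions for illustration only.

# Hypothetical project data (illustrative figures, not from the notes).
loc = 24_000          # lines of code delivered
effort_pm = 36        # person-months of effort
cost = 1_800_000      # total cost of all lifecycle activities
doc_pages = 420       # pages of documentation
errors = 130          # errors recorded before release
defects = 18          # defects reported after release
people = 5            # people involved in development

kloc = loc / 1000
print("Errors per KLOC:       ", errors / kloc)
print("Defects per KLOC:      ", defects / kloc)
print("Cost per KLOC:         ", cost / kloc)
print("Pages of docs per KLOC:", doc_pages / kloc)
print("Productivity (LOC/pm): ", loc / effort_pm)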

Functional oriented metrics


Function-oriented metrics use a measure of the functionality of the software as the normalizing value.
 Function points are used to measure functionality in these metrics
 The function points are derived using an empirical relationship based on countable measures of the software's information domain and an assessment of software complexity
 The information domain values include the numbers of user inputs, outputs, inquiries, files and external interfaces, weighted by their complexity (see the sketch after this list)
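A minimal Python sketch of an unadjusted function point count using the commonly published average complexity weights (inputs 4, outputs 5, inquiries 4, internal files 10, external interfaces 7) and the standard adjustment FP = count total x (0.65 + 0.01 x sum Fi). The domain counts and the complexity ratings are assumptions for illustration.

# Average complexity weights for the five information-domain values
# (simple or complex systems would use different weights).
WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4,
           "internal_files": 10, "external_interfaces": 7}

# Illustrative counts for a hypothetical system.
counts = {"inputs": 24, "outputs": 16, "inquiries": 22,
          "internal_files": 4, "external_interfaces": 2}

count_total = sum(WEIGHTS[k] * counts[k] for k in WEIGHTS)

# Fourteen complexity adjustment factors, each rated 0 (no influence)
# to 5 (essential); assumed ratings summing to 46 here.
sum_fi = 46
fp = count_total * (0.65 + 0.01 * sum_fi)
print("Unadjusted count:", count_total, "  Function points:", round(fp, 1))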

Extended function point metrics


Extended function point metrics accommodate function points as well as behavioural (control) dimensions, hence they work with feature points.
Good software metrics should encompass a number of attributes, such as:
 Simplicity
 Be empirically and intuitively persuasive
 Consistent and objective
 Programming language independent
 Consistent in its use of units and dimensions
 Have an effective mechanism for quality feedback.

Every level of software engineering has appropriate metrics that can be applied. These
includes:
 Metrics for analysis which include the function based metrics
 The bang metrics
 Metrics for specification quality
 Metrics for the design model: architectural design metrics, component-level design metrics (which assess coupling and cohesion)
 Interface design metrics
 The metrics for the source code
 Metrics for testing
 Metrics for maintenance

Software Metrics
A software metric is any type of measurement which relates to a software system, process or related documentation.
E.g. size measured in lines of code; the Fog index (Gunning 1962), a measure of the readability of a product manual, e.t.c.
Metrics fall into two classes:
 Control metrics
These provide information about the process, and process quality is related to product quality.
 Predictor metrics
Measurements of a product attribute that can be used to predict an associated product quality.
E.g. the Fog index predicts readability, and cyclomatic complexity predicts the maintainability of software (see the sketch below).
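A minimal Python sketch of one such predictor metric, McCabe's cyclomatic complexity, computed from a program flow graph as V(G) = E - N + 2 (edges minus nodes plus two). The small flow graph used here is an assumption for illustration.

# Flow graph as an adjacency list: node -> nodes it can branch to.
# (Hypothetical graph of a small routine with one decision and one loop.)
flow_graph = {
    1: [2],
    2: [3],
    3: [4, 5],   # decision node
    4: [6],
    5: [6],
    6: [7],
    7: [2, 8],   # loop back or exit
    8: [],
}

nodes = len(flow_graph)
edges = sum(len(successors) for successors in flow_graph.values())
cyclomatic_complexity = edges - nodes + 2   # V(G) = E - N + 2
print("V(G) =", cyclomatic_complexity)      # also the size of a basis path set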

TESTING

Definition 1
Testing involves the actual execution of program code using representative test data sets to exercise the program; the outputs are examined to detect any deviations from the expected output.

Definition 2
Testing is classified as a dynamic verification and validation (V&V) activity.

Reviews can be applied to:


 Requirement specifications
 High level system designs
 Detailed designs
 Program code
 User documentation
 Operation of delivered system

Objectives of Testing
1. To demonstrate the operation of the software.
2. To detect errors in the software and therefore:
 Obtain a level of confidence,
 Produce measure of quality.

THE TESTING PROCESS


Systems should not be tested as a single, monolithic unit; testing should proceed in stages, carried out incrementally in conjunction with system implementation.
The most widely used testing process consists of 5 stages:
(a) Unit testing
(b) Module testing
(c) Sub-system testing
(d) System testing
(e) Acceptance (alpha) testing.

(A) UNIT TESTING


Unit testing is where individual components are tested independently to ensure they
operate correctly.
(B) MODULE TESTING
A module is a collection of dependent components, e.g. an object class, an abstract data type or a collection of procedures and functions.
Module testing is where a module of related components is tested without the other system modules.
(C) SUB-SYSTEM TESTING
Sub-system testing is where the modules that make up a sub-system are integrated and tested. It aims at finding errors caused by unanticipated interactions between the components, particularly at their interfaces.
(D) SYSTEM TESTING
The sub-systems are integrated to make up the entire system. System testing aims at finding errors resulting from unanticipated interactions between sub-systems, and at validating that the system meets its functional and non-functional requirements.

(E) ACCEPTANCE TESTING (ALPHA TESTING)


Acceptance testing is also known as alpha testing and is the last stage of testing.
In this case the system is tested with real data (from client) and not simulated test
data.
Acceptance testing:
 Reveals errors and omissions in systems requirements definition.
 Test whether the system meets the users’ needs or if the system performance is
acceptable.
Acceptance testing is carried out till users /clients agree it’s an acceptable
implementation of the system.

N/B 1:Beta testing


Beta testing approach is used for software to be marketed.
It involves delivering it to a number of potential customers who agree to use it and
report problems to the developers.
After this feedback, it is modified and released again for another beta testing or
general use.

N/B 2:
The five stages of testing are based on incremental system integration, i.e. (unit testing – module testing – sub-system testing – system testing – acceptance testing). Object-oriented development is different, as the levels are not so clearly distinct: -
 Operations and data are combined to form objects – the equivalent of units
 Integrated objects form classes/ clusters – the equivalent of modules
Therefore the equivalent of module testing is class/ cluster testing.

MODES OF TESTING
1. Black Box Functional Testing
This is based on specification alone, without reference to implementation details.
2. White Box (Structural or Glass Box) Testing
This is based on inspection of the code structure (implementation details, low-level design).

Black Box Functional Testing

The test cases are derived from the specification of the module/ sub-system/ system under test.
The actual and expected results are compared.

Techniques
I. Equivalence partitioning
II. Boundary value analysis.

Equivalence partitioning
 Equivalence partitioning involves breaking down the input data into sets regarded
as “equivalent”.
 It relies on an assumption of “uniform behaviour” within ranges of input values that are not significantly different in terms of the specification.
E.g.
- A program must handle from 1 to 10,000 records.
- If it can handle 40 records and 9,000 records, chances are that it will work with 5,000 records.
- Therefore the chances of detecting a fault (if present) are equally good if any test case from 1-10,000 is selected.
- Therefore if the program works for any one test case it will probably work for any test case in the range.
 The range 1-10,000 constitutes an equivalence class i.e. A set of test cases such
that any one member of the class is as good a test case as any other.
 Therefore classes:
Equivalence class 1 – less than one
Equivalence class 2 –1 to 10,000
Equivalence class 3 – more than 10,000.
 Equivalence partitioning requires a test case from each class be carried out.

Boundary value analysis.


Test case 1: 0 records, i.e. member of class 1, adjacent to the lower boundary.
Test case 2: 1 record, i.e. lower boundary value.
Test case 3: 2 records, i.e. adjacent to the lower boundary.
Test case 4: 500 records, i.e. member of class 2.
Test case 5: 9,999 records, i.e. adjacent to the upper boundary.
Test case 6: 10,000 records, i.e. upper boundary value.
Test case 7: 10,001 records, i.e. member of class 3, adjacent to the upper boundary.
(These cases are sketched as a table-driven test below.)
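A minimal Python sketch of the equivalence classes and boundary values above expressed as a table-driven unit test. The load_records function is hypothetical; the point is that one representative value is drawn from each equivalence class, plus the boundary values of the valid range 1-10,000.

import unittest

def load_records(n):
    """Hypothetical unit under test: accepts 1 to 10,000 records."""
    if not 1 <= n <= 10_000:
        raise ValueError("record count out of range")
    return n  # stand-in for real processing

# (record count, should the call be accepted?)
CASES = [
    (0, False),       # class 1, adjacent to lower boundary
    (1, True),        # lower boundary value
    (2, True),        # adjacent to lower boundary
    (500, True),      # representative member of class 2
    (9_999, True),    # adjacent to upper boundary
    (10_000, True),   # upper boundary value
    (10_001, False),  # class 3, adjacent to upper boundary
]

class LoadRecordsTest(unittest.TestCase):
    def test_equivalence_and_boundaries(self):
        for n, accepted in CASES:
            if accepted:
                self.assertEqual(load_records(n), n)
            else:
                with self.assertRaises(ValueError):
                    load_records(n)

if __name__ == "__main__":
    unittest.main()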

White box testing


Test cases are derived from examination of the program code, with the following levels of coverage:
 Statement coverage
 Branch coverage
 Multiple condition coverage
 Path coverage

Statement coverage
Statement coverage involves running a series of test cases that ensure every statement
in the code is executed at least once.
E.g. Pg. 6 (Comm 64 – St. h. book)

Branch coverage
Requires test data that causes each branch to have a true or false outcome
E.g. Pg. 6 (IBID)

Multiple condition coverage


Multiple condition coverage requires all possible combinations of the true or false outcomes of the individual conditions to be tested.
E.g. Pg. 7 (IBID)
With n conditions there are 2^n combinations, hence 2^n test cases (contrasted with branch coverage in the sketch below).
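A minimal Python sketch contrasting branch coverage with multiple condition coverage for a hypothetical two-condition decision: branch coverage only needs the decision as a whole to be both true and false, while multiple condition coverage needs all 2^2 = 4 combinations of the individual conditions.

from itertools import product

def can_withdraw(balance_ok, daily_limit_ok):
    """Hypothetical decision with two conditions."""
    if balance_ok and daily_limit_ok:
        return True
    return False

# Branch coverage: one case where the decision is true, one where it is false.
branch_cases = [(True, True), (False, True)]

# Multiple condition coverage: every true/false combination of the
# individual conditions, i.e. 2**2 = 4 cases for two conditions.
multiple_condition_cases = list(product([True, False], repeat=2))

for case in multiple_condition_cases:
    print(case, "->", can_withdraw(*case))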

Path coverage
Path coverage is concerned with testing all paths through a program (basis path testing).
It enables the logical complexity of a program to be measured, and uses this measure to define a basis set of execution paths.
Flow graphs depict the logical flow through a program.

Example (pseudocode for a routine that counts characters and lines in a file; the dashes in the original point to nodes of its flow graph, which is not reproduced here):

    Initialize
    Do
        get character from file
        If character <> newline then
            add 1 to character count
        Else
            add 1 to line count
        End if
    While not end of file
    Print character count and line count
Test cases are created to guarantee that every statement is executed at least once.
Basis set of paths (the numbers refer to nodes of the flow graph):
1-2-3-6-7-8
1-2-3-5-6-7-8
1-2-3-4-6-7-2
1-2-3-5-6-7-2
(A runnable version of this routine follows.)
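A runnable Python version of the character/line counting routine above (a sketch only: the original is pseudocode, and the flow-graph node numbering behind the basis paths is not reproduced). The decision and the loop are commented so that test cases for the basis paths can be mapped onto them.

def count_characters_and_lines(path):
    """Count non-newline characters and lines in a text file."""
    character_count = 0
    line_count = 0
    with open(path) as f:                 # initialize
        for character in f.read():        # loop: get character from file
            if character != "\n":         # decision
                character_count += 1      #   then-branch
            else:
                line_count += 1           #   else-branch
    print(character_count, line_count)    # print character and line count
    return character_count, line_count

# Example usage (assumes a file named 'input.txt' exists):
# count_characters_and_lines("input.txt")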

TEST PLANNING
Test planning is setting out standards for the testing process rather than describing
product tests.
Test plans allow developers to get an overall picture of the system tests, as well as ensuring that the required hardware, software and resources are available to the testing team.
Components of a test plan:
 Testing process
This is a description of the major phases of the testing process.
 Requirement traceability
This is a plan to test all requirements individually.
 Testing schedule
This includes the overall testing schedule and resource allocation.
 Test recording procedures
This is the systematic recording of test results.
 Hardware and software requirements.
Here you set out the software tools required and hardware utilization.
 Constraints
This involves anticipation of hardships /drawbacks affecting testing e.g.
staff shortage should be anticipated here.

N/B
Test plan should be revised regularly.

TESTING STRATEGIES
This is the general approach to the testing process.
There are different strategies depending on the type of system to be tested and
development process used: -
 Top-down testing
This involves testing from most abstract component downwards.
 Bottom-up testing
This involves testing from fundamental components upwards.
 Thread testing
This is testing for systems with multiple processes where the processing
of transactions threads through these processes.
 Stress testing
This relies on stressing the system by going beyond the specified limits
therefore testing on how well it can cope with overload situations.
 Back to back testing
It is used to test versions of a system and compare the outputs.

N/B
Large systems are usually tested using a mixture of strategies.

Top-down testing
Tests high levels of a system before testing its detailed components. The program is
represented as a single abstract component with sub-components represented by stubs.
Stubs have the same interface as the component, but limited functionality.
After the top-level component (the system program) is tested, its sub-components (sub-systems) are implemented and tested in the same way, and this continues down to the bottom-level components (units).
If top-down testing is used:
- Unnoticed (structural) errors may be detected early
- Validation is done early in the process.
Disadvantages of using top-down testing
1. It is difficult to implement because:
Stubs are required to simulate the lower levels of the system, and for complex components it is impractical to produce a stub that allows the higher level to be tested correctly.
It may require knowledge of the internal representation (e.g. pointers) used by the missing components.
2. Test output is difficult to observe. Some higher-level components do not generate output and therefore must be forced to do so (e.g. by creating an artificial environment to generate test results).
N/B: it is therefore not appropriate for object-oriented systems as a whole, although individual (sub)systems may be tested this way.

Bottom-up testing
This is the opposite of top-down testing: modules at the lower levels of the hierarchy are tested first, then working up to the final (top) level.
The advantages of bottom-up testing are the disadvantages of top-down testing, and vice versa:
1. Architectural faults are unlikely to be discovered until much of the system has been tested.
2. It is appropriate for object-oriented systems because individual objects can be tested using their own test drivers, then integrated and collectively tested (a sketch of a stub and a test driver follows).
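A minimal Python sketch of the stub and test-driver ideas used in top-down and bottom-up testing. The sales-report example and its names are hypothetical: the stub has the same interface as the real lower-level component but only canned behaviour, while the driver exercises a low-level component before the components that will eventually call it exist.

# --- Top-down: the high-level component is tested against a stub that
# --- replaces the (not yet implemented) lower-level component.
def fetch_sales_stub(region):
    """Same interface as the real fetch_sales, but fixed, limited behaviour."""
    return [100, 250, 75]          # canned data instead of a database query

def sales_report(region, fetch_sales=fetch_sales_stub):
    figures = fetch_sales(region)
    return f"{region}: total={sum(figures)}, transactions={len(figures)}"

print(sales_report("West"))        # high level tested before low level exists

# --- Bottom-up: a test driver exercises the low-level component directly,
# --- before the higher-level components that will call it are written.
def total(figures):                # low-level unit
    return sum(figures)

def total_driver():
    assert total([100, 250, 75]) == 425
    assert total([]) == 0
    print("total() passed its driver tests")

total_driver()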

Thread testing (transaction flow testing-by Bezier 1990)


This is for testing real time systems.
It is an event-based approach where tests are based on the events which trigger system actions.
It may be used after objects have been individually tested and integrated into sub-systems.
-Processing of each external event “threads” its way through the system
processes or objects with processing carried out at each stage
It involves identifying and executing each possible processing thread.
The system should be analyzed to identify as many threads as possible.
After each thread has been tested with a single event, processing of multiple events of
same type should be tested without events of any other type (multiple-input thread
testing).
After multiple-input thread testing, the system is tested for its reactions to more than one class of simultaneous event, i.e. multiple thread testing.

SOFTWARE MAINTENANCE

The maintenance stage of system development involves


a) correcting errors discovered after other stages of system development
b) improving implementation of the system units
c) enhancing system services as new requirements are perceived

Information is fed back to all previous development phases: errors and omissions in the original software requirements are discovered, program and design errors are found, and the need for new software functionality is identified.

Definition 1
Maintenance is the process of changing a system after it has been delivered and is in
use.
Simple changes - correcting coding errors
More extensive changes - correcting design errors
Enhancement - correcting specification errors or accommodating new requirements.

Definition 2
Maintenance is the evolution i.e. process of changing a system to maintain its ability
to survive.
Types of maintenance
There are four different types of maintenance:
a) Corrective maintenance:
This involves fixing discovered errors in software.
(Coding errors, design errors, requirement errors.)
b) Adaptive maintenance:
This is changing the software to operate in a different environment (operating
system, hardware) this doesn’t radically change the software functionality.
c) Perfective maintenance:
Implementing new functional or non-functional system requirements, generated
by software customers as their organization or business changes.
d) Preventive maintenance:
Making changes on software to prevent possible problems or difficulties (collapse,
slow down, stalling, self-destructive e.g. Y2K).

Operation stage involves


 use of documentation to train users on the system and its resources
 system configuration
 repairs and maintenance
 safety precautions
 data control
 training users to get help on the system.

Maintenance costs (fixing bugs) are usually higher than the original development cost of the software due to: -
I. Programs being maintained may be old and not consistent with modern software engineering techniques.
They may be unstructured and optimized for efficiency rather than
understandability.
II. Changes made may introduce new faults, which trigger further change
requests. This is mainly since complexity of the system may make it difficult to
assess the effects of a change.
III. Changes made tend to degrade system structure, making it harder to understand
and make further changes (program becomes less cohesive.)
IV. The program may lose its links to its associated documentation, so the documentation becomes unreliable and new documentation is needed.

Factors affecting maintenance


Module independence
Use of design methods that allow easy change through concepts such as
functional independence or object classes (where one can be maintained
independently)
Quality of documentation
A program is easier to understand when supported by clear and concise
documentation.
Programming language and style
Use of a high-level language and adoption of a consistent style throughout the code.
Program validation and testing
Comprehensive validation of system design and program testing will reduce
corrective maintenance.
Configuration management
Ensure that all system documentation is kept consistent through out various
releases of system (documentation of new editions.)
Understanding of current system and staff availability
Original development staff may not always be available. Undocumented code
can be difficult to understand (team management).
Application domain
Clear and understood requirements.
Staff availability
Hardware stability
Dependence of program on external environment

MAINTENANCE PROCESS
The maintenance process is triggered as follows:
(a) A set of change requests is received from users, management or customers.
(b) The cost and impact of the changes are assessed; if acceptable, a new release is planned (involving elements of adaptive, corrective and perfective maintenance).
(c) The changes are implemented and validated, and a new version of the system is released.

[Figure not reproduced: flow chart of the process, using start/stop, process, iteration and loop symbols.]
SYSTEM DOCUMENTATION
It is a very important aid to maintenance engineers.
Definition.
It includes all documents describing implementation of the system from requirements
specification to final test plan.
The documents include:
 Requirement documents and an associated rationale
 System architecture documents
 Design description
 Program source code
 Validation documents on how validation is done
 Maintenance guide for possible /known problems.

Documentation should be
 Clear and non-ambiguous
 Structured and directive
 Readable and presentable
 Tool-assisted (case tools) in production (automation).
PROGRAM EVOLUTION DYNAMICS
Program evolution dynamics is the study of system change. Lehman's Laws (Lehman and Belady, 1985) describe system change.
The Laws are:
a) Law of continuing change
A program used in a real-world environment must change, or become progressively less useful in that environment.
b) Law of increasing complexity
As a program changes, its structure becomes more complex; extra resources must be devoted to preserving and simplifying the structure.
c) Law of large program evolution
Program evolution is a self-regulating process.
d) Law of organizational stability
Over a program's lifetime, its rate of development is approximately constant and independent of the resources devoted to system development.
e) Law of conservation of familiarity
Over the system's lifetime, the incremental change in each release is approximately constant.

1. CONFIGURATION MANAGEMENT

Software configuration
A collection of the items that comprise all information produced as part of the
software process.
The output of the software process is information, and it includes computer programs, documentation and data (both within the programs and external to them).

Software configuration management


Definition 1
This is a set of activities developed to manage change throughout the life cycle of computer software.
Changes are caused by:
 new business / market conditions and rules
 new customer needs
 reorganization or business growth
 Budgetary or scheduling constraints e.t.c.

Definition 2
The process which controls the changes made to a system and manages the different versions of the evolving software product.
It involves the development and application of procedures and standards for managing an evolving system product. Procedures should be developed for building systems and releasing them to customers.
Standards should be developed for recording and processing proposed system changes, and for identifying and storing different versions of the system.
Configuration managers (team) are responsible for controlling software changes.
Controlled systems are called baselines. They are the starting point for controlled
evolution.

Software may exist in different configurations (versions).


 produced for different computers (hardware)
 produced for different operating system
 produced for different client-specific functions e.t.c

Configuration managers are responsible for keeping track of the differences between software versions and ensuring that new versions are derived in a controlled way.
They are also responsible for ensuring that new versions are released to the correct customers at the appropriate time.

Configuration management and the associated documentation should be based on a set of standards, which should be published in a configuration management handbook (or quality handbook), e.g. IEEE Std 828-1983, which is a standard for configuration management plans.

The main configuration management activities are:


1. configuration management planning (planning for product evolution)
2. managing changes to the systems
3. controlling versions and releases (of systems)
4. building systems from other components

Configuration Management Planning

Configuration management takes control of the systems after they have been developed, therefore planning the configuration management process must start during development.
The plan should be developed as part of the overall project planning process.
The plan should include:
(a) Definitions of what entities are to be managed and a formal scheme for identifying these entities.
(b) A statement of who is responsible for configuration management (the configuration management team).
(c) Configuration management policies for change and version control / management.
(d) Description of the tools to be used in configuration management and the process
to be used.
(e) Definition of the configuration database which will be used to record
configuration information. (Recording and retrieval of project information.)
(f) Description of management of external information
(g) Auditing procedures.

The configuration database is used to record all relevant information relating to configurations, in order to:
a) assist with assessing the impact of system changes
b) provide management information about configuration management.
The configuration database defines/ describes:
 Customers who have taken delivery of a particular version
 The hardware and operating system requirements to run a given version
 The number of versions of the system made so far, and when they were made e.t.c.

2. CHANGE MANAGEMENT.
Change management process involves technical change analysis, cost-benefit analysis
and change tracking

Stages of change management:


(1) 1st stage
The 1st stage in change management is to complete a change request form (CRF).
A change request form is a formal document that sets out the change required to the system and records recommendations regarding the change, the estimated costs of the change, and the dates when the change was requested, approved, implemented and validated. It also has a section where engineers outline how the change is to be implemented (modelled as a data structure in the sketch after these stages).
(2) 2nd stage
This involves analysis of the requested change for validity. (If it is invalid, duplicated or already considered, it is rejected.) Any rejected request should be returned to the person who submitted it.
(3) 3rd stage
For valid changes. Assessment and costing of the change is made, the impact of the
change on the rest of system assessed, also check on how (technically) it will be
implemented.
(4) 4th Stage
Submission to the Change Control Board (CCB), who decide whether or not the change should be accepted (after considering cost, impact e.t.c.)
(5) 5th Stage
After approval by the CCB the software is taken to software maintenance team for
implementation, after which it is validated (tested) then released (by configuration
management team and not maintenance team).
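A minimal Python sketch of the change request form (CRF) described in stage 1, modelled as a data structure. The field names follow the description above but are otherwise assumptions; a real organization's CRF and CCB workflow would define their own.

from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ChangeRequestForm:
    """Formal record of a requested change, per the stages described above."""
    change_required: str                      # what must change, and why
    requested_by: str
    date_requested: date
    recommendation: str = ""                  # analyst's recommendation
    estimated_cost: Optional[float] = None
    implementation_outline: str = ""          # how engineers will implement it
    date_approved: Optional[date] = None      # set after CCB approval
    date_implemented: Optional[date] = None
    date_validated: Optional[date] = None
    status: str = "submitted"                 # submitted/ rejected/ approved/ ...

crf = ChangeRequestForm("Add VAT field to invoice screen",
                        "Accounts dept", date(2024, 3, 1))
crf.status = "under analysis"
print(crf)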

(3) VERSION AND RELEASE MANAGEMENT.


Version and release management are the processes of identifying and keeping track of the versions and new releases of a system.
 They ensure that the right version is released at the right time.
 Some versions may be designed to operate on different hardware or software (operating system) platforms even though their functions are the same.
System release - the version that is distributed to customers.
A release is not only a set of programs; it includes: -
 Configuration files - defining how the release should be configured for
installations.
 Data files – needed for successful system operation.
 Installation programs - to help install the system on target hardware
 Electronic and paper documentation - describing the system.
All information must be availed to customers.
(4) SYSTEM BUILDING
System building is the process of combining the components of a system into a program which executes on a particular target configuration.
It involves: -
 Compilation of some components.
 Linking: processes that put object code together to make executable systems.
System building tools have been developed to reduce the build time and the effort of compiling and linking (e.g. “make” for Unix systems); the idea is sketched below.
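A minimal Python sketch of the idea behind a build tool such as make: a target is rebuilt only when it is missing or older than one of the files it depends on. The compile/link command and the file names are placeholders.

import os
import subprocess

def out_of_date(target, sources):
    """True if the target is missing or older than any of its sources."""
    if not os.path.exists(target):
        return True
    target_time = os.path.getmtime(target)
    return any(os.path.getmtime(src) > target_time for src in sources)

def build(target, sources, command):
    if out_of_date(target, sources):
        print("building", target)
        subprocess.run(command, check=True)   # e.g. invoke the compiler/linker
    else:
        print(target, "is up to date")

# Placeholder example: link two object files into an executable.
# build("app", ["main.o", "util.o"], ["cc", "-o", "app", "main.o", "util.o"])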
