Ch.5 Software Effort Estimation
Delivering:
➢ Agreed functionality
➢ On time at the agreed cost
➢ With the required quality
Stages:
➢ Set targets
➢ Attempt to achieve targets
A key point here is that developers may in fact be very competent, but incorrect estimates
leading to unachievable targets will lead to extreme customer dissatisfaction.
Strategic planning
Feasibility Study
System specification
Evaluation of supplier’s proposals
Project planning
❖ Measure of Work :
The project size is a measure of the problem complexity in terms of the effort and time
required to develop the product.
Two metrics are used to measure project size:
➢ Source Lines of Code (SLOC)
➢ Function point (FP)
FP is nowadays favoured over SLOC because of the many shortcomings of SLOC:
➢ No precise definition
➢ Difficult to estimate at the start of a project
➢ Only a code measure
➢ Programmer-dependent
➢ Does not consider code complexity
❖ Measure of effort:
Person-Month (PM): the work done by one person in one month is the standard unit for measuring effort.
Bottom-up
➢ use when you have no data about similar past projects
➢ identify all tasks that have to be done – so quite time-consuming
Top-down
➢ produce overall estimate based on project cost drivers
➢ based on past project data
➢ divide overall estimate between jobs to be done
There is often confusion between the two approaches, as the first part of the bottom-up approach is a top-down analysis of the tasks to be done, followed by the bottom-up adding up of effort for all the work to be done.
Bottom-up estimating:
Break project into smaller and smaller components
Stop when you get to what one person can do in one/two weeks
Estimate costs for the lowest level activities
At each higher level calculate estimate by adding estimates for lower levels
Envisage the number and type of software modules in the final system
Estimate the SLOC of each identified module
Estimate the work content, taking into account complexity and technical difficulty
Calculate the work-days effort
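The roll-up described above can be sketched as follows; the module names, SLOC figures and the productivity assumption (25 SLOC per work-day) are hypothetical illustrations, not values from the text:

```python
# Minimal bottom-up roll-up sketch. All figures below are hypothetical:
# each module gets an SLOC estimate, converted to work-days via an assumed
# average productivity, then the lowest-level estimates are summed upwards.
modules = {
    "login": 400,
    "report_generator": 1200,
    "data_access": 800,
}

PRODUCTIVITY_SLOC_PER_DAY = 25  # assumed organizational average

def workdays(sloc, complexity_factor=1.0):
    """Effort for one module; complexity_factor > 1 for harder modules."""
    return sloc * complexity_factor / PRODUCTIVITY_SLOC_PER_DAY

# Add up the lowest-level estimates to get the project total.
total_days = sum(workdays(s) for s in modules.values())
print(total_days)  # 96.0 work-days
```

In practice each module would carry its own complexity factor, and higher levels of the work breakdown would add integration and management effort on top of this raw sum.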
Top-down estimates:
Algorithmic/Parametric models:
COCOMO was originally based on a size parameter of lines of code (actually 'thousands of delivered source instructions', or kdsi). Newer versions recognize the use of function points as a size measure, but convert them to a number called 'equivalent lines of code' (eloc).
Expert judgement :
Asking someone who is familiar with and knowledgeable about the application area and the
technologies to provide an estimate
Particularly appropriate where existing code is to be modified
Research shows that experts’ judgment in practice tends to be based on analogy.
Estimating by analogy:
Stages: identify :
➢ Significant features of the current project
➢ previous project(s) with similar features
➢ differences between the current and previous projects
➢ possible reasons for error (risk)
➢ measures to reduce uncertainty
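Tools supporting analogy-based estimation typically measure the "distance" between the current project and past projects over a set of numeric features. A sketch, assuming hypothetical feature names and project values:

```python
import math

# Analogy-based selection sketch: pick the past project "closest" to the
# current one by Euclidean distance over numeric features, then use its
# recorded effort as the starting estimate. All values are hypothetical.
past_projects = {
    "proj_a": {"inputs": 20, "outputs": 15, "entities": 10, "effort_pm": 30},
    "proj_b": {"inputs": 80, "outputs": 60, "entities": 40, "effort_pm": 120},
}
current = {"inputs": 25, "outputs": 18, "entities": 12}

def distance(past, now):
    """Euclidean distance over the features of the current project."""
    return math.sqrt(sum((past[k] - now[k]) ** 2 for k in now))

nearest = min(past_projects, key=lambda name: distance(past_projects[name], current))
print(nearest, past_projects[nearest]["effort_pm"])  # proj_a 30
```

The analyst would then adjust the analogue's effort for the identified differences between the two projects, rather than using it unmodified.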
Albrecht worked at IBM and needed a way of measuring the relative productivity of different
programming languages.
Needed some way of measuring the size of an application without counting lines of code.
Identified five types of component or functionality in an information system
Counted occurrences of each type of functionality in order to get an indication of the size of
an information system
• External input (EI) types – input transactions which update internal computer files
• External output (EO) types – transactions which extract and display data from internal
computer files. Generally involves creating reports.
• External inquiry (EQ) types – user initiated transactions which provide information but do not
update internal files. Normally the user inputs some data that guides the system to the information
the user needs.
• Logical internal file (LIF) types – These are standing files. Equates roughly to a data store in
systems analysis terms. Created and accessed by the target system
• External interface file types (EIF) – where data is retrieved from a data store which is actually
maintained by a different application.
Determine the complexity of each user type (high, average or low).
The International FP User Group (IFPUG) have developed and published extensive rules
governing FP counting. Hence Albrecht FPs is now often referred to as IFPUG FPs.
Data element types: < 20 | 20 – 50 | > 50
The boundaries shown in this table show how the complexity level (low, average or high) for the logical internal files is decided.
There are similar tables for external inputs and outputs.
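Once each component has been classified, the unadjusted FP count is the weighted sum over the five component types. The weights below are the standard published IFPUG values for low/average/high complexity (they come from the IFPUG tables, not from the text above), and the component counts are hypothetical:

```python
# Unadjusted function point count over the five Albrecht/IFPUG component
# types. WEIGHTS holds the standard published IFPUG weights per complexity
# level; the occurrence counts below are hypothetical.
WEIGHTS = {
    "EI":  {"low": 3, "average": 4,  "high": 6},
    "EO":  {"low": 4, "average": 5,  "high": 7},
    "EQ":  {"low": 3, "average": 4,  "high": 6},
    "LIF": {"low": 7, "average": 10, "high": 15},
    "EIF": {"low": 5, "average": 7,  "high": 10},
}

counts = {  # (component type, complexity) -> number of occurrences
    ("EI", "low"): 4, ("EO", "average"): 3, ("EQ", "low"): 2,
    ("LIF", "average"): 2, ("EIF", "low"): 1,
}

ufp = sum(n * WEIGHTS[ctype][cx] for (ctype, cx), n in counts.items())
print(ufp)  # 4*3 + 3*5 + 2*3 + 2*10 + 1*5 = 58
```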
𝑊𝑖 , 𝑊𝑒 and 𝑊𝑜 are weightings derived by asking developers the proportions of effort spent in
previous projects developing the code dealing respectively with inputs, accessing and modifying
stored data and processing outputs.
Most FP counters use industry averages which are currently
0.58 for 𝑊𝑖 , 1.66 for 𝑊𝑒 and 0.26 for 𝑊𝑜 .
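The weightings quoted above are the industry averages of Symons' Mark II function points; the Mark II count for a transaction can be sketched as:

```python
# Symons' Mark II function point formula. W_I, W_E, W_O are the industry
# averages quoted above; the transaction counts passed in below are
# hypothetical.
W_I, W_E, W_O = 0.58, 1.66, 0.26

def mark2_fp(n_input_types, n_entities_referenced, n_output_types):
    """UFP = Wi*Ni + We*Ne + Wo*No for one transaction; sum over all
    transactions to size the whole system."""
    return (W_I * n_input_types
            + W_E * n_entities_referenced
            + W_O * n_output_types)

print(round(mark2_fp(10, 4, 12), 2))  # 0.58*10 + 1.66*4 + 0.26*12 = 15.56
```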
COSMIC FFP stands for Common Software Measurement International Consortium Full Function Points.
This approach was developed to measure the sizes of real-time or embedded systems.
In the COSMIC method, the system architecture is decomposed into a hierarchy of software layers.
The method defines four types of data movement that a software component can deal with:
➢ Entries (E): effected by sub-processes that move a data group into the software component in question from a user outside its boundary.
➢ Exits (X): effected by sub-processes that move a data group from the software component to a user outside its boundary.
➢ Reads (R): data movements that move data groups from persistent storage (e.g. a database) to the software component.
➢ Writes (W): data movements that move data groups from the software component to persistent storage.
The overall FFP is derived by simply summing the counts of the four groups.
The method doesn’t take account of any processing of the data groups once they are moved into
the software component.
It is not recommended for systems that include complex mathematical algorithms.
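Because each data movement contributes one COSMIC function point, the sizing step reduces to a sum; the movement counts below are hypothetical:

```python
# COSMIC sizing sketch: one COSMIC function point per data movement,
# summed over Entries (E), Exits (X), Reads (R) and Writes (W).
# The counts below are hypothetical.
movements = {"E": 5, "X": 4, "R": 3, "W": 2}

cfp = sum(movements.values())
print(cfp)  # 14
```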
COCOMO II: A Parametric Productivity Model
Organic mode.
• Small team,
• Small system,
• Interface requirements flexible,
• In-house software development.
➢ Examples: Systems such as payroll, inventory.
Embedded mode.
• Product has to operate within very tight constraints,
• the project team is large,
• development environment consists of many complex interfaces,
• Changes are very costly.
➢ Examples: Real-time systems such as those for air traffic control, ATMs, or weapon
systems.
Semi-detached mode.
• Combined elements from the two above modes or characteristics that come in between.
➢ Examples: Systems such as compilers, database systems, and editors.
Basic COCOMO estimates effort (in person-months) as c × (size in kdsi)^k, with the constants depending on the system type:
System type       c     k
Organic           2.4   1.05
Semi-detached     3.0   1.12
Embedded          3.6   1.20
Problem: Assume that the size of an organic type software product has been estimated to be
32,000 lines of source code. Assume that the average salary of software engineers be 15,000/- per
month.
Determine
➢ The effort required to develop the software product
➢ Cost required to develop the product
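A solution sketch using the Basic COCOMO organic-mode constants (c = 2.4, k = 1.05 for effort; 2.5 and 0.38 for the nominal development time):

```python
# Basic COCOMO, organic mode:
#   effort (person-months) = 2.4 * (KLOC ** 1.05)
#   nominal duration (months) = 2.5 * (effort ** 0.38)
kloc = 32            # 32,000 lines of source code
salary = 15_000      # cost per person-month

effort_pm = 2.4 * kloc ** 1.05
duration_months = 2.5 * effort_pm ** 0.38
cost = effort_pm * salary

print(round(effort_pm, 1))        # ~91.3 person-months
print(round(duration_months, 1))  # ~13.9 months
print(round(cost))                # ~1,370,000
```

So the product needs roughly 91 person-months of effort, costing about 1.37 million at the stated salary; the duration figure is the model's nominal schedule, not effort divided by team size.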
Barry Boehm and his co-workers have refined a family of cost estimation models of which the
key one is COCOMO II.
This approach uses various multipliers and exponents the values of which have been set initially
by experts.
Each of the scale factors for project is rated according to a range of judgments: very low, low,
nominal, high, very high, extra high.
There is a number related to each rating of the individual scale factors – see Table 5.5.
These are summed, then multiplied by 0.01 and added to the constant (B=0.91) to get the overall
exponent scale factor.
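The exponent computation described above can be sketched as follows; B = 0.91 and the 0.01 multiplier come from the text, while the five rating values (and the A = 2.94 constant and 10 KLOC size in the effort line) are illustrative assumptions:

```python
# COCOMO II scale-factor exponent: E = B + 0.01 * sum(scale factor ratings),
# with B = 0.91 as stated above. The five ratings are hypothetical numbers
# standing in for the Table 5.5 values of the chosen rating levels.
B = 0.91
scale_factors = [3.72, 3.04, 4.24, 3.29, 4.68]  # illustrative only

exponent = B + 0.01 * sum(scale_factors)
print(round(exponent, 4))  # 0.91 + 0.01 * 18.97 = 1.0997

# Effort then follows effort = A * size**E (effort multipliers assumed 1.0);
# A = 2.94 is the published COCOMO II.2000 constant.
A = 2.94
kloc = 10
effort_pm = A * kloc ** exponent
print(round(effort_pm, 1))  # ~37 person-months
```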
Cost Estimation
Project cost can be obtained by multiplying the estimated effort (in person-months, from the effort estimate) by the manpower cost per month.
This assumes that the entire project cost is incurred on account of the manpower cost alone.
In addition to manpower cost, however, a project would incur several other types of costs, which we shall refer to as the overhead costs.
The overhead costs would include the costs of hardware and software required for the project and
the company overheads for administration, office space, etc.
Depending on the expected values of the overhead costs, the project manager has to suitably scale
up the cost estimated by using the COCOMO formula.
Staffing Pattern
After the effort required to complete a software project has been estimated, the staffing
requirement for the project can be determined.
Norden was one of the first to investigate staffing pattern of general research and development
(R&D) type of projects.
Putnam extended the work of Norden.
Norden’s work
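Norden found that the staffing level on R&D projects approximates the Rayleigh curve. A sketch of the staffing-level function, where K is the total project effort and td is the time at which staffing peaks (the K and td values below are illustrative):

```python
import math

# Rayleigh staffing curve observed by Norden:
#   m(t) = (K / td**2) * t * exp(-t**2 / (2 * td**2))
# K = total effort, td = time of peak staffing. Values are illustrative.
def staffing_level(t, K=100.0, td=10.0):
    return (K / td ** 2) * t * math.exp(-t ** 2 / (2 * td ** 2))

# Staffing rises to a peak at t = td, then tails off gradually.
print(round(staffing_level(5), 2))   # build-up
print(round(staffing_level(10), 2))  # peak
print(round(staffing_level(20), 2))  # tail-off
```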
Putnam’s work:
Putnam studied the problem of staffing of software projects and found that the staffing pattern of software development projects is very similar to that of R&D projects.
Putnam adapted the Rayleigh-Norden curve to relate the number of delivered lines of code to the
effort and the time required to develop the product.
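Putnam's software equation relates size to effort and time as Size = Ck · K^(1/3) · td^(4/3), where K is the total effort, td the development time, and Ck a technology constant. A sketch with illustrative values, showing the well-known consequence that for a fixed size, effort grows as the fourth power of schedule compression:

```python
# Putnam's software equation: Size = Ck * K**(1/3) * td**(4/3).
# K = total effort (person-years), td = development time (years),
# Ck = technology/state-of-practice constant. Values are illustrative.
def putnam_size(K, td, Ck=5000.0):
    return Ck * K ** (1 / 3) * td ** (4 / 3)

size = putnam_size(K=20.0, td=2.0)

# Solve for the effort needed to deliver the SAME size in td = 1.5 years:
K_fast = (size / (5000.0 * 1.5 ** (4 / 3))) ** 3

print(round(K_fast, 1))  # much more than 20: K scales as 1/td**4
```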
After product delivery, the number of project staff falls consistently during product maintenance.
Putnam suggested that, starting from a small number of developers, there should be a gradual staff build-up, and after a peak team size has been achieved, a gradual reduction in staff.
Experience shows that a very rapid build-up of project staff any time during the project
development correlates with schedule slippage.