Planning Software Project
Chapter 2
Introduction
Building computer software is a complex undertaking, particularly if it
involves many people working over a relatively long time.
That’s why software projects need to be managed. Management
activities vary among the people involved in a software project.
A software engineer manages her day-to-day activities, planning,
monitoring, and controlling technical tasks.
Project managers plan, monitor, and control the work of a team of software
engineers.
Senior managers coordinate the interface between the business and the software professionals.
Software project management is an umbrella activity within software
engineering.
It begins before any technical activity is initiated and continues throughout
the definition, development and lifetime support of computer software.
The project management activity encompasses measurement and metrics,
estimation and scheduling, risk analysis, tracking, and control.
The goal of software project management is an effective team that focuses its
attention on customer needs and product quality.
Management Spectrum
Effective software project management focuses on the four P’s: people,
product, process, and project.
People must be organized to perform software work effectively.
Communication with the customer and other stakeholders must occur so that
product scope and requirements are understood.
A process that is appropriate for the people and the product should be selected.
The project must be planned by estimating effort and calendar time to
accomplish work tasks: defining work products, establishing quality
checkpoints, and identifying mechanisms to monitor and control work defined
by the plan.
A. The People
People are the primary key factor in a successful organization.
To create a successful software production environment, an organization
must address proper staffing, communication and coordination, work
environment, performance management, training, compensation (or
reward), competency analysis and development, career development,
workgroup development, team/culture development, and others.
The People Capability Maturity Model (People-CMM) states that “every
organization needs to continually improve its ability to attract, develop,
motivate, organize, and retain the workforce needed to accomplish its
strategic business objectives” by improving the quality of its staff.
B. The Product
Before a project can be planned, product objectives and scope should be
established, alternative solutions should be considered, and technical and
management constraints should be identified. Without this product
information, it is impossible to define reasonable (and accurate) estimates
of cost, an effective assessment of risk, a realistic breakdown of
project tasks, or a manageable project schedule that provides a meaningful
indication of progress.
C. The Process
A software process provides the framework from which a
comprehensive plan for software development can be established.
Although the detailed activities of project development may
differ from problem to problem, the general framework for
software development remains the same.
D. The Project
We conduct planned and controlled software projects in order to manage
project complexity.
To avoid project failure, a software project manager and the software
engineers who build the product must avoid a set of common warning
signs, understand the critical success factors that lead to good project
management, and develop a commonsense approach for planning,
monitoring, and controlling the project.
The W5HH Principles
On software process and projects, Barry Boehm states: “you need an
organizing principle that scales down to provide simple plans for
simple projects.”
Boehm suggests an approach that addresses project objectives,
milestones and schedules, responsibilities, management and technical
approaches, and required resources.
He calls it the W5HH Principle, after a series of questions that lead to a
definition of key project characteristics and the resultant project plan:
● Why is the system being developed? All stakeholders should assess the validity of business
reasons for the software work. Does the business purpose justify the expenditure of people,
time, and money?
● What will be done? The task set required for the project is defined.
● When will it be done? The team establishes a project schedule by identifying when project
tasks are to be conducted and when milestones are to be reached.
● Who is responsible for a function? The role and responsibility of each member of the
software team is defined.
● Where are they located organizationally? Not all roles and responsibilities reside within
the software team. The customer, users, and other stakeholders also have responsibilities.
● How will the job be done technically and managerially? Once product scope is established, a
management and technical strategy for the project must be defined.
● How much of each resource is needed? The answer to this question is derived by developing
estimates based on answers to earlier questions.
Project Planning Process
Software project management begins with a set of activities that are
collectively called project planning.
The objective of software project planning is to provide a framework
that enables the manager to make reasonable estimates of resources,
cost, and schedule.
Software project planning encompasses five major
activities—estimation, scheduling, risk analysis, quality management
planning, and change management planning.
Schedule slippage, cost overruns, poor quality, and high
maintenance costs for software can all result from a lack of planning.
Thus, planning attempts to define best-case and worst-case scenarios
so that project outcomes can be bounded.
Although there is an inherent degree of uncertainty, the software team
embarks on a plan that has been established as a consequence of these
tasks.
Therefore, the plan must be adapted and updated as the project
proceeds because “The more you know, the better you estimate.
Therefore, update your estimates as the project progresses.”
Estimation attempts to determine how much money, effort, resources, and
time it will take to build a specific software-based system or product.
Estimation begins with a description of the scope of the problem.
The problem is then decomposed into a set of smaller problems, and each of
these is estimated using historical data and experience as guides. Problem
complexity and risk are considered before a final estimate is made.
Once estimation is complete, project scheduling begins. Scheduling defines
software engineering tasks and milestones, identifies who is responsible for
conducting each task, and specifies the inter-task dependencies that may
have a strong bearing on progress.
Software Scope
Software scope describes the functions and features that are to be delivered to end
users; the data that are input and output; the “content” that is presented to users as
a consequence of using the software; and the performance, constraints, interfaces,
and reliability that bound the system. Scope is defined using one of two
techniques:
● A narrative description of software scope is developed after communication
with all stakeholders.
● A set of use cases is developed by end users.
In a general sense, the software scope indicates the limits within which the
software will be accepted by the end user or environment. The software scope
must be unambiguous and understandable at both the management and technical
levels. Once scope has been
identified (with the concurrence of the customer), it is reasonable to ask: “Can we
build software to meet this scope? Is the project feasible?”
Feasibility
Feasibility describes whether the software can be realized and accepted under
different conditions of technology, finance, time, resources, and process of
operation, as explained below:
● Technical Feasibility
The software product must be technically feasible using present and near-future
hardware and techniques. It must also be feasible to afford the technical
personnel needed for any new techniques or features required in the software product.
3. Use relatively simple decomposition techniques to generate project cost and effort
estimates. Such decomposition techniques include LOC-based estimation,
FP-based estimation, process-based estimation, estimation with use cases, etc.
4. Use one or more empirical models (such as COCOMO II) for software cost and effort
estimation.
Software metrics (quantitative measures) can be categorized in much the
same way as real-world measurements.
However, results from software measurement metrics alone are not sufficient
for the overall sizing of software, because some metrics are hidden and
appear only at the time of software construction.
Software estimation is generally done at two levels: the top-down and the
bottom-up approach.
Top Down Estimation
Top-down estimation starts at the system level and assesses the overall
system functionality and how it is delivered through sub-systems.
● Usable without knowledge of the system architecture and the
components that might be part of the system.
● Takes into account costs such as integration, configuration
management and documentation.
● Can underestimate the cost of solving difficult low-level technical
problems.
Bottom Up Estimation
Bottom-up estimation starts at the component level and estimates the effort
required for each component; these efforts are then added to reach a final estimate.
● Usable when the architecture of the system is known and
components identified.
● This can be an accurate method if the system has been designed in
detail.
● It may underestimate the costs of system level activities such as
integration and documentation.
Decomposition Techniques
Software project estimation is a form of problem solving, and in most cases, the
problem to be solved (i.e. developing a cost and effort estimate for a software
project) is too complex to be considered in one piece. For this reason, we should
decompose the problem, re-characterizing it as a set of smaller (and hopefully,
more manageable) problems.
The decomposition approach was discussed from two different points of view:
decomposition of the problem and decomposition of the process. Estimation uses
one or both forms of partitioning. But before an estimate can be made, the project
planner must understand the scope of the software to be built and generate an
estimate of its “size.”
A. Software Sizing
The accuracy of software project estimation depends on:
● The size of the product to be built.
● The human effort, calendar time, and availability of reusable software
components.
● The ability and experience of the software team.
● The stability of the product requirements and the supporting
environment.
In the context of project planning, size refers to a quantifiable outcome of the
software project. If a direct approach is taken, size can be measured in lines
of code (LOC). If an indirect approach is chosen, size is represented as
function points (FP).
B. Problem Based Estimation
In problem-based estimation, productivity metrics are
computed using lines-of-code (LOC) and function-point (FP) estimation.
LOC and FP estimation are distinct estimation techniques. Yet
both have a number of characteristics in common. LOC and FP
data are used in two ways during software project estimation: (1)
to “size” each element of the software and (2) as baseline metrics
collected from past projects and used in conjunction with
estimation variables to develop cost and effort projections.
The expected value for the estimation variable S can be
computed as a weighted average of the optimistic (Sopt), most
likely (Sm), and pessimistic (Spess) estimates:
S = (Sopt + 4Sm + Spess) / 6
The estimated value may be expressed in LOC or FP.
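The weighted average above is the standard three-point (beta distribution) estimate. A minimal sketch, with hypothetical LOC figures for a single function:

```python
def expected_value(s_opt, s_likely, s_pess):
    """Three-point (beta distribution) estimate:
    S = (s_opt + 4*s_likely + s_pess) / 6."""
    return (s_opt + 4 * s_likely + s_pess) / 6

# Hypothetical optimistic / most likely / pessimistic LOC for one function
print(expected_value(4600, 6900, 8600))  # -> 6800.0
```

The most likely value is weighted four times as heavily as the two extremes, so a single pessimistic outlier pulls the estimate only mildly.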
Software Measurement
Results from software measurement metrics alone are not sufficient for the overall
sizing of software, because some metrics are hidden and appear only at the time of
software construction. Measurement in the physical world can be
categorized in two ways: direct measurement and indirect measurement.
● Direct measure: Direct measure of software process includes cost and effort
applied which are easier to measure by person-month analysis. Also, it includes
line of code (LOC) produced, execution speed, memory size and defects
reported over some set period of time.
● Indirect measure: Indirect measures of the product include functionality,
quality, complexity, efficiency, reliability, maintenance possibilities, usability,
portability, etc. They are relatively difficult to measure.
a. Line of Code (LOC) measurement
LOC is the simplest way to estimate project size among all available metrics. The
measure was first proposed when programs were typed on cards, with one line per
card.
To find the LOC at the beginning of a project, divide each module into sub-modules,
and so on, until the size of each module can be predicted.
Estimating project size by counting source instructions is ambiguous,
because some lines of code are used for comments, headers, etc.
Thus, a commonly adopted convention is to count only the lines of code that are
delivered to the customer as part of the product. The LOC estimation technique
has the following limitations:
● The LOC varies with programming style, i.e., complex
logic may replace several lines of simpler code.
● The reuse of code varies the program size.
● A larger program does not necessarily have better quality.
● Accurate LOC can be computed only after project completion.
● In project estimation, effort does not depend on the LOC alone;
it also varies with analysis, design, testing, etc.
For example, if:
Estimated total LOC for the project = 33,200
Organizational average productivity = 620 LOC/pm
Labor rate per month = $8,000
Then:
Cost per LOC = ($8,000/pm) / (620 LOC/pm) ≈ $13
Total project cost = Total LOC x Cost per LOC = 33,200 x $13 =
$431,600
Estimated effort = Total LOC / Average productivity = (33,200 LOC) /
(620 LOC/person-month) ≈ 54 person-months
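The arithmetic above can be sketched as a small helper; the figures are those of the example. Note that the slide rounds cost per LOC up to $13 before multiplying, so computing without intermediate rounding gives a slightly lower total:

```python
def loc_estimates(total_loc, productivity_loc_pm, labor_rate_pm):
    """Derive cost and effort figures from a LOC estimate."""
    cost_per_loc = labor_rate_pm / productivity_loc_pm   # $/LOC
    total_cost = total_loc * cost_per_loc                # $
    effort_pm = total_loc / productivity_loc_pm          # person-months
    return cost_per_loc, total_cost, effort_pm

cost_per_loc, total_cost, effort = loc_estimates(33_200, 620, 8_000)
print(f"${cost_per_loc:.2f}/LOC, ${total_cost:,.0f}, {effort:.1f} pm")
# -> $12.90/LOC, $428,387, 53.5 pm
```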
b. Function based metric (FP)
The function point (FP) metric can be used effectively as a means
for measuring the functionality delivered by a system. Using
historical data, the FP metric can be used to:
● Estimate the cost or effort required to design, code, and test the
software.
● Predict the number of errors that will be encountered during
testing.
● Forecast the number of components in the implemented system.
The major limitation of FP-based estimation is that the weight of each item
in the metric is fixed, which may not be sufficient for all cases. This
problem can be addressed by providing a range of weights for each item
based on a subjective determination of simple, average, or complex.
● Function points are derived using an empirical relationship based on countable
(direct) measures of software’s information domain (UFP) and qualitative
assessments of software complexity (TCF). Information domain values are
defined in the following manner:
● Number of external inputs (EIs) (weight-4): Each external input originates
from a user or is transmitted from another application and provides distinct
application-oriented data or control information. Inputs are often used to
update internal logical files (ILFs). Inputs should be distinguished from
inquiries, which are counted separately
● Number of external outputs (EOs) (weight-5): Each external output is
derived data within the application that provides information to the user. In
this context external output refers to reports, screens, error messages, etc.
Individual data items within a report are not counted separately.
● Number of external inquiries (EQs) (weight-4): An external inquiry
is defined as an online input that results in the generation of some
immediate software response in the form of an online output (often
retrieved from an ILF).
● Number of internal logical files (ILFs) (weight-10): Each internal
logical file is a logical grouping of data that resides within the
application’s boundary and is maintained via external inputs.
● Number of external interface files (EIFs) (weight-10): Each
external interface file is a logical grouping of data that resides
external to the application but provides information that may be of
use to the application.
Function point (FP) = Unadjusted function point (UFP) x Technical
complexity factor (TCF),
where TCF = 0.65 + 0.01 x (sum of Fi), and the Fi are the answers to 14
technical complexity questions. Each question is answered using a scale
that ranges from 0 (not important or applicable) to 5 (absolutely essential).
Note that if all 14 technical questions are given the average value
(3), then TCF = 0.65 + 0.01 x (14 x 3) = 1.07.
For example, if FP = UFP x TCF = 375 FP, average productivity =
6.5 FP/PM, and labor rate = $8,000 per month, then:
Cost per FP = $8,000 / (6.5 FP/PM) ≈ $1,230
Total project cost = 375 x $1,230 = $461,250
Estimated effort = Total FP / Average productivity = 375 FP / (6.5
FP/PM) ≈ 58 person-months
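A minimal sketch of the FP arithmetic, including the TCF formula; the productivity and labor-rate figures are those of the example:

```python
def technical_complexity_factor(f_values):
    """TCF = 0.65 + 0.01 * sum(Fi) over the 14 complexity questions (0-5)."""
    assert len(f_values) == 14
    return 0.65 + 0.01 * sum(f_values)

def fp_estimates(fp, productivity_fp_pm, labor_rate_pm):
    """Cost per FP, total cost, and effort (person-months) from an FP count."""
    cost_per_fp = labor_rate_pm / productivity_fp_pm
    return cost_per_fp, fp * cost_per_fp, fp / productivity_fp_pm

tcf = technical_complexity_factor([3] * 14)        # all-average answers
cost_per_fp, total_cost, effort = fp_estimates(375, 6.5, 8_000)
print(f"TCF={tcf:.2f}, ${cost_per_fp:,.2f}/FP, ${total_cost:,.0f}, {effort:.0f} pm")
```

The slide rounds cost per FP down to $1,230 before multiplying, giving $461,250; computing without intermediate rounding yields roughly $461,538.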
C. Empirical estimation models
An estimation model reflects the population of projects from which it has been
derived. Therefore, the model is domain sensitive. An estimation model for
computer software uses empirically derived formulas to predict effort as a
function of LOC or FP. Instead of using the tables described in previous sections,
the resultant values for LOC or FP are plugged into the estimation model.
The empirical data that support most estimation models are derived from a
limited sample of projects. For this reason, no estimation model is appropriate
for all classes of software or all development environments. An estimation
model should therefore be tested against data from completed local projects;
if agreement between predicted and actual values is poor, the model must be
tuned and retested before it can be used.
Structure of Estimation Models
A typical estimation model is derived using regression analysis
on data collected from past software projects.
The overall structure of such models takes the form: E =A + B x
(ev)^C, where A, B, and C are empirically derived constants, E is
effort in person-months, and ev is the estimation variable (either
LOC or FP).
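As an illustration of this structure, one published LOC-oriented instance is the Bailey-Basili model, E = 5.5 + 0.73 (KLOC)^1.16; the 10-KLOC input below is a hypothetical value:

```python
def structural_effort(a, b, c, ev):
    """General structure of regression-based estimation models:
    E = A + B * (ev)**C, with E in person-months and ev in LOC or FP."""
    return a + b * ev ** c

# Bailey-Basili LOC-oriented model: E = 5.5 + 0.73 * (KLOC)^1.16
print(f"{structural_effort(5.5, 0.73, 1.16, 10):.1f} person-months for 10 KLOC")
```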
The COCOMO II Model
The constructive cost model (COCOMO) is an algorithmic software cost
estimation model developed by Barry Boehm and published in 1981 in
his book Software Engineering Economics.
The model uses a basic regression formula, with some parameters that are
derived from historical project data and current project characteristics.
Building on the original COCOMO, COCOMO II was developed starting in 1997
and finally published in 2000 in the book Software Cost Estimation with
COCOMO II.
Thus, COCOMO-II is the successor of COCOMO-81 and is better
suited for estimating software development projects.
It provides more support for modern software development
processes and an updated project database.
The need for a new model arose as software development technology
moved from mainframe and overnight batch processing to desktop
development, code reusability, and the use of off-the-shelf software
components.
COCOMO II is actually a hierarchy of estimation models that address
the following areas:
● Application composition model: Used during the early stages of
software engineering, when prototyping of user interfaces,
consideration of software and system interaction, assessment of
performance, and evaluation of technology maturity are
paramount.
● Early design stage model: Used once requirements have been
stabilized and basic software architecture has been established.
● Post-architecture-stage model: Used during the construction of the
software.
Basic COCOMO II Model
COCOMO applies to three classes of software projects:
● Organic: well-understood application programs developed by small,
experienced teams.
● Semi-detached: a mix of experienced and inexperienced team members
working on a less familiar system.
● Embedded: software strongly coupled to computer hardware and tight
operational constraints.
Basic COCOMO
Effort = a (KLOC)^b PM
Time = c (Effort)^d Months
Number of people required = (Effort applied) / (Development time)
Example: COCOMO II Model
The program size is expressed in thousands of lines of code (KLOC).
The coefficients a, b, c, and d are given as:
Project class   a     b     c     d
Organic         2.4   1.05  2.5   0.38
Semi-detached   3.0   1.12  2.5   0.35
Embedded        3.6   1.20  2.5   0.32
The size of an organic software product is estimated to be 32,000 LOC. The average
salary of a software engineer is Rs. 15,000 per month. What will be the effort and
time for the completion of the project?
Solution:
Effort applied = 2.4 x (32)^1.05 PM = 91.33 PM (Since: 32000 LOC = 32KLOC)
Time = 2.5 x (91.33)^0.38 Month = 13.899 Months
Cost = Time x Average salary per month = 13.899 x 15,000 ≈ Rs. 208,485
People required = (Effort applied) / (Development time) = 91.33 / 13.899 = 6.57 ≈ 7 persons
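The worked example can be reproduced with a short sketch of Basic COCOMO; the coefficient table is the standard one (the example's 2.4, 1.05, 2.5, 0.38 are the organic-mode values):

```python
# Standard Basic COCOMO coefficients for the three project classes
COEFFS = {
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic", salary_pm=15_000):
    a, b, c, d = COEFFS[mode]
    effort = a * kloc ** b        # person-months
    time = c * effort ** d        # months
    people = effort / time        # average staffing level
    cost = time * salary_pm       # currency units per the example
    return effort, time, people, cost

effort, time, people, cost = basic_cocomo(32)   # 32,000 LOC, organic mode
print(f"{effort:.2f} PM, {time:.2f} months, ~{people:.2f} people, Rs. {cost:,.0f}")
```

This reproduces the slide's figures: about 91.33 PM and 13.90 months; the average staffing of 6.57 rounds up to 7 people.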
D. Estimation for Object-Oriented Projects
It is worthwhile to supplement conventional software cost estimation methods with a
technique that has been designed explicitly for OO software. Lorenz and Kidd
suggest the following approach:
1. Develop estimates using effort decomposition, FP analysis, and any other
method that is applicable for conventional applications.
2. Using the requirements model, develop use cases and determine a count.
Recognize that the number of use cases may change as the project progresses.
3. From the requirements model, determine the number of key classes (called
analysis classes).
4. Categorize the type of interface for the application and develop a multiplier for
support classes: Multiply the number of key classes (step 3) by the multiplier to
obtain an estimate for the number of support classes.
5. Multiply the total number of classes (key + support) by the average number
of work units per class. Lorenz and Kidd suggest 15 to 20 person-days per class.
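The five steps above can be sketched as follows; the class count and interface multiplier are hypothetical, and the per-class work-unit figure is taken from the Lorenz-Kidd 15 to 20 person-day range:

```python
def oo_effort_person_days(key_classes, support_multiplier,
                          person_days_per_class=18):
    """Lorenz-Kidd style estimate (sketch): support classes are derived
    from the key (analysis) classes via an interface-type multiplier,
    then total classes are scaled by 15-20 person-days per class."""
    support_classes = key_classes * support_multiplier
    total_classes = key_classes + support_classes
    return total_classes * person_days_per_class

# Hypothetical: 16 key classes, interface multiplier of 2.0 (assumed value)
print(oo_effort_person_days(16, 2.0))  # -> 864.0 person-days
```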
● Product size—risks associated with the overall size of the software to be built or modified.
● Business impact—risks associated with constraints imposed by management or the
marketplace.
● Stakeholder characteristics—risks associated with the sophistication of the stakeholders and
the developer’s ability to communicate with stakeholders in a timely manner.
● Process definition—risks associated with the degree to which the software process has been
defined and is followed by the development organization.
● Development environment—risks associated with the availability and quality of the tools to
be used to build the product.
● Technology to be built—risks associated with the complexity of the system to be built and
the “newness” of the technology that is packaged by the system.
● Staff size and experience—risks associated with the overall technical and project experience
of the software engineers who will do the work.
Risk Components
The risk components defined by the U.S. Air Force are as follows:
Performance risk—the degree of uncertainty that the product will meet its
requirements and be fit for its intended use.
Cost risk—the degree of uncertainty that the project budget will be maintained.
Support risk—the degree of uncertainty that the resultant software will be easy
to correct, adapt, and enhance.
Schedule risk—the degree of uncertainty that the project schedule will be
maintained and that the product will be delivered on time.
The impact of each risk component is divided into one of four categories
—negligible, marginal, critical, or catastrophic.
C. Risk Projection and Risk Table
Risk estimation attempts to rate each risk by calculating the probability
that the risk is real and the consequences of the problems associated with
the risk, should it occur.
During risk estimation, the first step is to prioritize risks, so that
resources can be allocated where they will have the most impact.
The process of categorizing the various possible risks that may appear in
a project is called risk assessment.
The analysis of their nature, scope, and timing forms the main component
of risk assessment.
A risk table provides a simple technique for risk projection.
The risk table can be implemented as a spreadsheet model.
This enables easy manipulation and sorting of the entries.
Here, at first all risks are listed in the first column of the table.
This can be accomplished with the help of the risk item checklists reference.
Each risk is categorized in the second column.
The probability of occurrence of each risk is entered in the next column of
the table.
The probability value for each risk can be estimated by team members individually.
Finally, the table is sorted by probability and by impact. High-probability,
high-impact risks percolate to the top of the table, and low-probability
risks drop to the bottom.
This accomplishes first-order risk prioritization.
After the table is sorted, a cutoff line is defined.
The cutoff line (drawn horizontally at some point in the table) implies that
only risks that lie above the line will be given further attention.
Risks that fall below the line are reevaluated to accomplish second-order
prioritization. The column labeled RMMM contains a pointer into a risk
mitigation, monitoring, and management plan.
Risk impact and probability have a distinct influence on management
concern.
A risk factor that has a high impact but a very low probability of
occurrence should not absorb a significant amount of management time.
However, high-impact risks with moderate to high probability and
low-impact risks with high probability should be carried forward into the
risk analysis steps that follow.
Three factors affect the consequences that are likely if a risk does occur:
its nature, its scope, and its timing.
The overall risk exposure RE is determined using the following
relationship: RE = P X C, Where P is the probability of occurrence for a
risk, and C is the cost to the project should the risk occur.
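A risk table of this kind is easy to sketch in code; the risks, probabilities, and cost figures below are purely illustrative:

```python
# Each entry: (description, category, probability P, cost C if it occurs)
risks = [
    ("Size estimate may be significantly low", "product size",    0.60, 40_000),
    ("End users resist the system",            "business impact", 0.40, 25_000),
    ("Key staff turnover",                     "staff",           0.30, 60_000),
]

# First-order prioritization: sort so that high-probability risks
# percolate to the top; risk exposure RE = P * C for each entry.
table = sorted(risks, key=lambda r: r[2], reverse=True)
for name, category, p, c in table:
    print(f"{name:40s} P={p:.2f}  RE={p * c:,.0f}")
```

A manager would then draw the cutoff line somewhere in this sorted list and carry only the entries above it into the RMMM plan.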
Risk Refinement
During the early stages of project planning, a risk may be stated quite generally. As time
passes and more is learned about the project and the risk, it may be possible to refine the
risk into a set of more detailed risks, each somewhat easier to mitigate, monitor, and manage.
One way to do this is to represent the risk in condition-transition-consequence (CTC) format;
that is, the risk is stated in the form:
Given that <condition> then there is concern that (possibly) <consequence>.
Risks can occur after the software has been successfully developed and
delivered to the customer.