
UNIT-2

Project Life Cycle and Effort Estimation

Software process and Process Models

A software process model is an abstraction of the software development process. A model specifies the stages of the process and the order in which they are carried out: it is a representation of the activities of the process and the sequence in which they are performed.

A model will define the following:

● The tasks to be performed

● The input and output of each task

● The pre and post conditions for each task

● The flow and sequence of each task


A software process is a coherent set of activities for specifying, designing, implementing and testing software systems. There are many different software processes, but all involve the following:

● Specification – defining what the system should do;

● Design and implementation – defining the organization of the system and

implementing the system;

● Validation – checking that it does what the customer wants;

● Evolution – changing the system in response to changing customer needs.

Types of Software Process Model

Software processes, methodologies and frameworks range from specific prescriptive

steps that can be used directly by an organization in day-to-day work, to flexible

frameworks that an organization uses to generate a custom set of steps tailored to the

needs of a specific project or group. In some cases a “sponsor” or “maintenance”

organization distributes an official set of documents that describe the process.

Software Process and Software Development Lifecycle Model

One of the basic notions of the software development process is the SDLC model, which stands for Software Development Life Cycle model. There are many development life

cycle models that have been developed in order to achieve different required

objectives. The models specify the various stages of the process and the order in which
they are carried out. The most widely used and important SDLC models are given below:

● Waterfall model

● V model

● Incremental model

● RAD model

● Agile model

● Iterative model

● Spiral model

● Prototype model

Waterfall Model

The waterfall model is a breakdown of project activities into linear sequential phases,

where each phase depends on the deliverables of the previous one and corresponds to

a specialisation of tasks. The approach is typical for certain areas of engineering

design.
V Model

The V-model represents a development process that may be considered an extension of the waterfall model. Instead of

moving down in a linear way, the process steps are bent upwards after the coding

phase, to form the typical V shape. The V-Model demonstrates the relationships

between each phase of the development life cycle and its associated phase of testing.

The horizontal and vertical axes represent time or project completeness (left-to-right)

and level of abstraction (coarsest-grain abstraction uppermost), respectively.


Incremental model

The incremental build model is a method of software development where the model is

designed, implemented and tested incrementally (a little more is added each time)

until the product is finished. It involves both development and maintenance. The

product is defined as finished when it satisfies all of its requirements. Each iteration

passes through the requirements, design, coding and testing phases. Each subsequent release of the system adds function to the previous release until all designed functionality has been implemented. This model combines the elements of

the waterfall model with the iterative philosophy of prototyping.

Iterative Model

An iterative life cycle model does not attempt to start with a full specification of requirements. Instead, development begins by focusing on an initial, simplified set of user features, which then progressively gains more complexity and a broader set of features until the targeted

system is complete. When adopting the iterative approach, the philosophy of

incremental development will also often be used liberally and interchangeably.

In other words, the iterative approach begins by specifying and implementing just part

of the software, which can then be reviewed and prioritized in order to identify further

requirements. This iterative process is then repeated by delivering a new version of

the software for each iteration. In a light-weight iterative project the code may

represent the major source of documentation of the system; however, in a critical

iterative project a formal software specification may also be required.


RAD model

Rapid application development was a response to plan-driven waterfall processes,

developed in the 1970s and 1980s, such as the Structured Systems Analysis and Design

Method (SSADM). Rapid application development (RAD) is often referred to as adaptive software development. RAD is an incremental prototyping approach to software development based on the idea that end users can provide better feedback when examining a live system, as opposed to working strictly with documentation. It puts less emphasis on planning and more emphasis on an adaptive process.

RAD may result in a lower level of rejection when the application is placed into production, but this success most often comes at the expense of dramatic overruns in project costs and schedule. The RAD approach is especially well suited for developing

software that is driven by user interface requirements. Thus, some GUI builders are

often called rapid application development tools.


Spiral model

The spiral model, first described by Barry Boehm in 1986, is a risk-driven software

development process model which was introduced for dealing with the shortcomings

in the traditional waterfall model. A spiral model looks like a spiral with many loops.

The exact number of loops of the spiral is unknown and can vary from project to

project. This model supports risk handling, and the project is delivered in loops. Each

loop of the spiral is called a Phase of the software development process.

The initial phase of the spiral model corresponds to the early stages of the waterfall life cycle that are needed to develop a software product. The exact number of phases needed to develop the product can be varied by the project manager depending upon the project risks. Since the project manager dynamically determines the number of phases, the project manager plays an important role in developing a product using the spiral model.


Agile model

Agile is an umbrella term for a set of methods and practices based on the values and principles expressed in the Agile Manifesto: a way of thinking that enables teams and businesses to innovate and respond quickly to changing demand while mitigating risk. Organizations can be agile using any of the many available frameworks, such as Scrum, Kanban, Lean, and Extreme Programming (XP).


The Agile movement proposes alternatives to traditional project management. Agile

approaches are typically used in software development to help businesses respond to

unpredictability. They refer to a group of software development methodologies based

on iterative development, where requirements and solutions evolve through

collaboration between self-organizing cross-functional teams.

The primary goal of being Agile is to empower the development team to create and respond to change in order to succeed in an uncertain and turbulent environment. The Agile software development approach typically operates in rapid and

small cycles. This results in more frequent incremental releases with each release

building on previous functionality. Thorough testing is done to ensure that software

quality is maintained.
Choice of software process model

Factors in choosing a software process

Choosing the right software process model for your project can be difficult. If you
know your requirements well, it will be easier to select a model that best matches your
needs. You need to keep the following factors in mind when selecting your software
process model:

Project requirements

Before you choose a model, take some time to go through the project requirements and
clarify them alongside your organization’s or team’s expectations. Will the user need to
specify requirements in detail after each iterative session? Will the requirements
change during the development process?

Project size

Consider the size of the project you will be working on. Larger projects mean bigger
teams, so you’ll need more extensive and elaborate project management plans.

Project complexity

Complex projects may not have clear requirements. The requirements may change
often, and the cost of delay is high. Ask yourself if the project requires constant
monitoring or feedback from the client.

Cost of delay

Is the project highly time-bound with a huge cost of delay, or are the timelines
flexible?

Customer involvement

Do you need to consult the customers during the process? Does the user need to
participate in all phases?

Familiarity with technology

This involves the developers’ knowledge and experience with the project domain,
software tools, language, and methods needed for development.
Project resources

This involves the amount and availability of funds, staff, and other resources.

Rapid Application Development Model

RAD is a linear sequential software development process model that emphasizes a concise development cycle using a component-based construction approach. If the requirements are well understood and described, and the project scope is constrained, the RAD process enables a development team to create a fully functional system within a concise time period.

RAD (Rapid Application Development) is a concept that holds that products can be developed faster and with higher quality through:

● Gathering requirements using workshops or focus groups

● Prototyping and early, reiterative user testing of designs

● The re-use of software components

● A rigidly paced schedule that defers design improvements to the next product version

● Less formality in reviews and other team communication


The various phases of RAD are as follows:

1.Business Modelling: The information flow among business functions is defined by


answering questions like what data drives the business process, what data is
generated, who generates it, where the information goes, who processes it, and so on.

2. Data Modelling: The data collected from business modeling is refined into a set of
data objects (entities) that are needed to support the business. The attributes
(character of each entity) are identified, and the relation between these data objects
(entities) is defined.

3. Process Modelling: The information objects defined in the data modeling phase are
transformed to achieve the data flow necessary to implement a business function.
Processing descriptions are created for adding, modifying, deleting, or retrieving a
data object.

4. Application Generation: Automated tools are used to facilitate construction of the software; these often employ fourth-generation language (4GL) techniques.
5. Testing & Turnover: Many of the programming components have already been tested, since RAD emphasizes reuse. This reduces the overall testing time. But the new parts must be tested, and all interfaces must be fully exercised.

When to use RAD Model?

● When the system can be modularized and delivered in a short span of time (2-3 months).

● When the requirements are well-known.

● When the technical risk is limited.

● When the budget allows the use of automated code generating tools.

Advantage of RAD Model

● This model is flexible for change.

● In this model, changes can be accommodated easily.

● Each phase in RAD brings the highest-priority functionality to the customer.

● It reduces development time.

● It increases the reusability of features.

Disadvantage of RAD Model

● It requires highly skilled designers.

● Not all applications are compatible with RAD.

● For smaller projects, the RAD model cannot be used.

● It is not suitable when technical risk is high.

● It requires significant user involvement.

Agile Methods

Agile is the ability to create and respond to change. It is a way of dealing with, and

ultimately succeeding in, an uncertain and turbulent environment. Agile refers to the

methods and best practices for organizing projects based on the values and principles

documented in the Agile Manifesto.

Agile software development is an umbrella term for a set of frameworks and practices based on the values and principles expressed in the Manifesto for Agile Software Development and the 12 Principles behind it. Following are some of the most common Agile frameworks.

Kanban

Kanban is a simple, visual means of managing projects that enables teams to see the

progress so far and what’s coming up next. Kanban projects are primarily managed

through a Kanban board, which segments tasks into three columns: “To Do,” “Doing,”

and “Done.”

Scrum
Scrum is similar to Kanban in many ways. Scrum typically uses a Scrum board, similar

to a Kanban board, and groups tasks into columns based on progress. Unlike Kanban,

Scrum focuses on breaking a project down into sprints and only planning and

managing one sprint at a time. Scrum also has unique project roles: Scrum master and

product owner.

Extreme Programming (XP)

Extreme Programming (XP) was designed for Agile software development projects. It

focuses on continuous development and customer delivery and uses intervals or

sprints, similar to a Scrum methodology. However, XP also has 12 supporting processes

specific to the world of software development:

● Planning game

● Small releases

● Customer acceptance tests

● Simple design

● Pair programming

● Test-driven development

● Refactoring

● Continuous integration
● Collective code ownership

● Coding standards

● Metaphor

● Sustainable pace

Feature-driven development (FDD)

Feature-driven development is another software-specific Agile framework. This

methodology involves creating software models every two weeks and requires a

development and design plan for every model feature. It has more rigorous

documentation requirements than XP, so it’s better for teams with advanced design

and planning abilities. FDD breaks projects down into five basic activities:

● Develop an overall model

● Build a feature list

● Plan by feature

● Design by feature

● Build by feature

Dynamic Systems Development Method (DSDM)


The Dynamic Systems Development Method (DSDM) was born of the need for a

common industry framework for rapid software delivery. Rework is to be expected,

and any development changes that occur must be reversible. Like Scrum, XP, and FDD,

DSDM uses sprints. This framework is based on eight fundamental principles:

● Focus on the business need

● Deliver on time

● Collaborate

● Never compromise quality

● Build incrementally from firm foundations

● Develop iteratively

● Communicate continuously and clearly

● Demonstrate control

Crystal

Crystal is a family of Agile methodologies that includes Crystal Clear, Crystal Yellow,

Crystal Orange, Crystal Red, etc. Each has a unique framework. Your choice depends

on several project factors, such as your team size, priorities, and project criticality.

Lean
Lean development is often grouped with Agile, but it’s an entirely different

methodology that happens to share many of the same values. The main principles of

the Lean methodology include:

● Eliminate waste

● Build quality in

● Create knowledge

● Defer commitment

● Deliver fast

● Respect people

● Optimize the whole

Dynamic System Development Method

The Dynamic Systems Development Method (DSDM) is an agile software development approach that provides a framework for building and maintaining systems.

The DSDM philosophy is borrowed from a modified version of the Pareto principle: 80% of an application can be delivered in 20% of the time it would take to deliver the entire (100%) application.

DSDM is an iterative software process in which every iteration follows the 80% rule: just enough work is done in each increment to facilitate movement to the next increment.

The remaining detail can be completed later, once more business requirements are known or changes have been requested and accommodated.

The DSDM Consortium has defined an agile development model, known as the DSDM life cycle, that defines three different iterative cycles, preceded by two additional life cycle activities:

Feasibility Study:
It establishes the basic business requirements and constraints associated with the application to be built, and then assesses whether the application is a viable candidate for the DSDM process.

Business Study:
It establishes the functional and information requirements that will allow the application to deliver business value; in addition, it defines the basic application architecture and identifies the maintainability requirements for the application.

Functional Model Iteration:
It produces a set of incremental prototypes that demonstrate functionality for the customer.
(Note: All DSDM prototypes are intended to evolve into the deliverable application.)
The intent during this iterative cycle is to gather additional requirements by eliciting feedback from users as they exercise the prototype.
Design and Build Iteration:
It revisits prototypes built during functional model iteration to ensure that each has been engineered in a manner that will enable it to provide operational business value for end users. In some cases, functional model iteration and design and build iteration occur concurrently.

Implementation:
It places the latest software increment (an "operationalized" prototype) into the operational environment. It should be noted that:

(a) the increment may not be 100% complete, or

(b) changes may be requested as the increment is put into place. In either case, DSDM development work continues by returning to the functional model iteration activity.

(Diagram: the DSDM life cycle, showing the feasibility and business studies followed by the three iterative cycles.)


DSDM can be combined with XP to provide a combined approach that couples a solid process model (the DSDM life cycle) with the nuts-and-bolts practices (XP) that are required to build software increments. In addition, the ASD concepts of collaboration and self-organizing teams can be adapted to a combined process model.

Extreme Programming
Extreme Programming (XP) is an agile software development framework that
aims to produce higher quality software, and higher quality of life for the
development team.

XP is the most specific of the agile frameworks regarding appropriate


engineering practices for software development.

The general characteristics where XP is appropriate are:

● Dynamically changing software requirements


● Risks caused by fixed time projects using new technology
● Small, co-located extended development team
● The technology you are using allows for automated unit and
functional tests.



How Does Extreme Programming (XP) Work?

XP, unlike other methodologies, is very opinionated when it comes to engineering


practices.

Besides practices, XP is built upon values and principles.

Values provide purpose to teams. They act as a “north star” to guide your decisions in a
high-level way. However, values are abstract and too fuzzy for specific guidance. For
instance: saying that you value communication can result in many different outcomes.
Practices are, in some ways, the opposite of values. They’re concrete and down to
earth, defining the specifics of what to do. Practices help teams hold themselves
accountable to the values. For instance, the practice of Informative Workspaces favors
transparent and simple communication.

Principles are domain-specific guidelines that bridge the gap between practices and
values.

Managing interactive processes

Managing People

● Act as project leader

● Liaison(communication and cooperation) with stakeholders

● Managing human resources

● Setting up reporting hierarchy


Managing Project

● Defining and setting up project scope

● Managing project management activities

● Monitoring progress and performance

● Risk analysis at every phase

● Take necessary steps to avoid or recover from problems

● Act as project spokesperson

Basics of Software Estimation

Estimation is the process of finding an estimate, or approximation, which is a value

that can be used for some purpose even if input data may be incomplete, uncertain, or

unstable.

Estimation determines how much money, effort, resources, and time it will take to

build a specific system or product. Estimation is based on −

● Past Data/Past Experience

● Available Documents/Knowledge

● Assumptions

● Identified Risks

Estimation need not be a one-time task in a project. It can take place during −

○ Acquiring a Project.

○ Planning the Project.

○ Execution of the Project as the need arises.


Project scope must be understood before the estimation process begins. It will be

helpful to have historical Project Data.

Project metrics can provide a historical perspective and valuable input for generation

of quantitative estimates.

Planning requires technical managers and the software team to make an initial

commitment as it leads to responsibility and accountability.

Past experience can aid greatly.

● Use at least two estimation techniques to arrive at the estimates and reconcile

the resulting values.

● Plans should be iterative and allow adjustments as time passes and more details

are known.

Activities involved in Software Estimation:

1. Project planning: Estimation determines how much money, effort, resources, and time it will take to build a specific system or product.

2. Scope and feasibility: The functions and features that are to be delivered to end users; the data that are input to and output from the system; the "content" that is presented to users as a consequence of using the software.

3. Project resources: Each resource is specified with a description of the resource, a statement of availability, the time when the resource will be required, and the duration of time that the resource will be applied (the time window).

4. Estimation of project cost and effort: The accuracy of a software project estimate is predicated on the degree to which the planner has properly estimated the size (e.g., KLOC) of the product to be built, and the ability to translate the size estimate into human effort, calendar time, and money.

5. Decomposition techniques: Before an estimate can be made and decomposition techniques applied, the planner must understand the scope of the software to be built and generate an estimate of the software's size.

6. Empirical estimation models: Estimation models for computer software use empirically derived formulas to predict effort as a function of LOC (lines of code) or FP (function points). Resultant values computed for LOC or FP are entered into an estimation model.

Cost estimation is the process of arriving at cost estimates. In software engineering, the cost estimate is the financial outlay required for the effort to develop and test the software.

Cost estimation models are some mathematical algorithms or parametric equations


that are used to estimate the cost of a product or a project.

Various techniques or models are available for cost estimation, also known as cost estimation models, as described below.

Empirical Estimation Technique

Empirical estimation is a technique or model in which empirically derived formulas are used to predict the parameters that are a required and essential part of the software project planning step.

These techniques are usually based on the data that is collected previously from a

project and also based on some guesses, prior experience with the development of

similar types of projects, and assumptions.

It uses the size of the software to estimate the effort.

In this technique, an educated guess of project parameters is made.

These models are based on common sense.

However, as there are many activities involved, empirical estimation techniques have been formalized; examples include the Delphi technique and the expert judgement technique.

Heuristic Technique

The word "heuristic" is derived from a Greek word meaning "to discover".

The heuristic technique is a technique or model that is used for solving problems,

learning, or discovery in the practical methods which are used for achieving

immediate goals.
These techniques are flexible and simple, allowing quick decisions to be taken through shortcuts and good-enough calculations, most often when working with complex data. However, the decisions made using this technique are not necessarily optimal.

In this technique, the relationship among different project parameters is expressed

using mathematical equations.

The popular heuristic technique is given by the Constructive Cost Model (COCOMO).

This technique is also used to increase or speed up the analysis and investment

decisions.

Analytical Estimation Technique

Analytical estimation is a type of technique that is used to measure work.

In this technique, firstly the task is divided or broken down into its basic component

operations or elements for analyzing.

Second, if standard times are available from some other source, then they are applied to each element or component of work.

Third, if there is no such time available, then the work is estimated based on the

experience of the work.


In this technique, results are derived by making certain basic assumptions about the

project. Hence, the analytical estimation technique has some scientific basis.

Halstead’s software science is based on an analytical estimation model.

Accurate estimation of project size is central to satisfactory estimation of all other

project parameters such as effort, completion time, and total project cost.

The project size is a measure of the problem complexity in terms of the effort and time

required to develop the product.

Currently, two metrics are popularly being used to measure size—lines of code (LOC)

and function point (FP).

LOC

LOC is possibly the simplest among all metrics available to measure project size.

Consequently, this metric is extremely popular. This metric measures the size of a

project by counting the number of source instructions in the developed program.

Obviously, while counting the number of source instructions, comment lines, and

header lines are ignored.

Determining the LOC count at the end of a project is very simple. However, accurate estimation of the LOC count at the beginning of a project is a very difficult task. One can possibly estimate the LOC count at the start of a project only by using some form of systematic guesswork. Systematic guessing typically involves the following: the project manager divides the problem into modules, and each module into sub-modules, and so on, until the LOC of the leaf-level modules is small enough to be predicted. To predict the LOC count for the various leaf-level modules sufficiently accurately, past experience in developing similar modules is very helpful. By adding the estimates for all leaf-level modules together, project managers arrive at the total size estimate. In spite of its conceptual simplicity, the LOC metric has several shortcomings when used to measure problem size. We discuss the important shortcomings of the LOC metric below:

LOC is a measure of coding activity alone. A good problem size measure should consider the total effort needed to carry out the various life cycle activities (i.e., specification, design, coding, testing, etc.) and not just the coding effort. LOC, however, focuses on the coding activity alone: it merely counts the number of source lines in the final program.

LOC count depends on the choice of specific instructions.


LOC measure correlates poorly with the quality and efficiency of the code.

LOC metric penalizes use of higher-level programming languages and code reuse.

LOC metric measures the lexical complexity of a program and does not address the

more important issues of logical and structural complexities.

It is very difficult to accurately estimate LOC of the final program from problem

specification.
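As a concrete illustration of the counting rule described above (source instructions are counted; comment lines and blank lines are ignored), here is a minimal sketch in Python. Treating "#" and "//" as comment markers is an assumption made purely for illustration; a real counter would follow the comment conventions of the language being measured.

    # Minimal LOC counter sketch: counts source lines, skipping blank
    # lines and comment-only lines. Assumes '#' or '//' start a comment;
    # block comments and string literals are deliberately ignored here.
    def count_loc(source_text):
        loc = 0
        for line in source_text.splitlines():
            stripped = line.strip()
            if not stripped:                      # blank line
                continue
            if stripped.startswith(("#", "//")):  # comment-only line
                continue
            loc += 1
        return loc

    program = """
    # compute factorial
    def fact(n):
        if n <= 1:
            return 1
        return n * fact(n - 1)
    """
    print(count_loc(program))  # -> 4 source lines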

Function Point (FP) Metric

Function point metric was proposed by Albrecht in 1983. This metric overcomes many

of the shortcomings of the LOC metric. Since its inception in late 1970s, function point

metric has steadily gained popularity. Function point metric has several advantages

over LOC metric. One of the important advantages of the function point metric over

the LOC metric is that it can easily be computed from the problem specification itself.

Using the LOC metric, on the other hand, the size can accurately be determined only

after the product has fully been developed.

The conceptual idea behind the function point metric is the following. The size of a

software product is directly dependent on the number of different high-level functions

or features it supports. This assumption is reasonable, since each feature would take

additional effort to implement.


Step 1: Compute the unadjusted function point (UFP) using a heuristic expression.

Step 2: Refine UFP to reflect the actual complexities of the different parameters used

in UFP computation.

Step 3: Compute FP by further refining UFP to account for the specific characteristics

of the project that can influence the entire development effort.
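The text above outlines the three steps without reproducing the detailed formulas. As a minimal sketch, the classical Albrecht-style computation can be written as follows; the average-complexity weights and the 14 degree-of-influence ratings (each 0-5) used here are the commonly published values, stated as an assumption about the intended arithmetic rather than as this document's definitions.

    # Sketch of function point computation in the classical Albrecht style.
    # Weights below are the standard "average complexity" weights; Step 2
    # of the text would replace them with simple/average/complex weights.
    WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4,
               "files": 10, "interfaces": 7}

    def unadjusted_fp(counts):
        # Step 1: UFP = sum over parameter types of (count * weight)
        return sum(counts[k] * WEIGHTS[k] for k in WEIGHTS)

    def function_points(counts, degrees_of_influence):
        # Step 3: FP = UFP * (0.65 + 0.01 * DI), where DI is the sum of
        # the 14 degree-of-influence ratings (each rated 0-5)
        ufp = unadjusted_fp(counts)
        di = sum(degrees_of_influence)
        return ufp * (0.65 + 0.01 * di)

    counts = {"inputs": 20, "outputs": 15, "inquiries": 10,
              "files": 5, "interfaces": 2}
    print(function_points(counts, [3] * 14))  # 259 UFP * 1.07 = 277.13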

It should be carefully noted that an effort estimation of 100 PM does not imply that 100

persons should work for 1 month. Neither does it imply that 1 person should be

employed for 100 months to complete the project. The effort estimation simply

denotes the area under the person-month curve (see Figure 3.3 ) for the project.

The basic COCOMO model

It is a single variable heuristic model that gives an approximate estimate of the project

parameters. The basic COCOMO estimation model is given by expressions of the

following forms:

Effort = a1 × (KLOC)^a2 PM

Tdev = b1 × (Effort)^b2 months

where,

KLOC is the estimated size of the software product expressed in Kilo Lines Of Code.
a1, a2, b1, b2 are constants for each category of software product.

Tdev is the estimated time to develop the software, expressed in

months.

Effort is the total effort required to develop the software product, expressed in person-

months (PMs).

According to Boehm, every line of source text should be calculated as one LOC

irrespective of the actual number of instructions on that line. Thus, if a single

instruction spans several lines (say n lines), it is considered to be n LOC. The values of

a1, a2, b1, b2 for different categories of products as given by Boehm [1981] are

summarised below. He derived these values by examining historical data collected

from a large number of actual projects.

Estimation of development effort

For the three classes of software products, the formulas for estimating the effort based

on the code size are shown below:

Organic : Effort = 2.4(KLOC)^1.05 PM

Semi-detached : Effort = 3.0(KLOC)^1.12 PM

Embedded : Effort = 3.6(KLOC)^1.20 PM

Estimation of development time: For the three classes of software products, the

formulas for estimating the development time based on the effort are given below:

Organic : Tdev = 2.5(Effort)^0.38 months

Semi-detached : Tdev = 2.5(Effort)^0.35 months

Embedded : Tdev = 2.5(Effort)^0.32 months

We can gain some insight into the basic COCOMO model, if we plot the estimated

effort and duration values for different software sizes. Figure 3.4 shows the plots of

estimated effort versus product size for different categories of software products.
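The basic COCOMO expressions above translate directly into a short computation. The sketch below uses the Boehm [1981] constants quoted in this section; the 32 KLOC input is an arbitrary illustrative value.

    # Basic COCOMO: Effort = a1*(KLOC)^a2 PM, Tdev = b1*(Effort)^b2 months.
    # Constants per product class as given by Boehm [1981].
    COCOMO = {
        "organic":       (2.4, 1.05, 2.5, 0.38),
        "semi-detached": (3.0, 1.12, 2.5, 0.35),
        "embedded":      (3.6, 1.20, 2.5, 0.32),
    }

    def basic_cocomo(kloc, product_class):
        a1, a2, b1, b2 = COCOMO[product_class]
        effort = a1 * kloc ** a2   # person-months
        tdev = b1 * effort ** b2   # months
        return effort, tdev

    effort, tdev = basic_cocomo(32, "organic")
    print(f"Effort = {effort:.1f} PM, Tdev = {tdev:.1f} months")
    # -> Effort = 91.3 PM, Tdev = 13.9 months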

Intermediate COCOMO

The basic COCOMO model assumes that effort and development time are functions of

the product size alone. However, a host of other project parameters besides the

product size affect the effort as well as the time required to develop the product. For

example the effort to develop a product would vary depending upon the sophistication

of the development environment.

COSMIC Full Function Points


The COSMIC method is an international standard (ISO 19761) for sizing the functional requirements of any software.

'COSMIC' stands for the 'Common Software Measurement International Consortium', a not-for-profit organization.

COSMIC function points are a unit of measure of software functional size.

The size is a consistent measurement (or estimate) which is very useful for

planning and managing software and related activities.

The process of measuring software size is called functional size measurement

(FSM).

The COSMIC FFP measurement method associates the functional user

requirements for each piece with a specific layer. Each layer possesses an

intrinsic boundary for which specific users are identified.

The COSMIC measurement principle is:

The functional size of a piece of software is equal to the number of its data

movements

A functional size is measured in units of ‘COSMIC Function Points’, abbreviated

as ‘CFP’ where:
1 CFP is defined, by convention, as the size of a single data movement of a

single data group

Then:

The size of a functional process is equal to the number of its data movements, expressed in CFP.

The size of a piece of software is equal to the sum of the CFP sizes of its

functional processes.

ENTRY: An ENTRY (E) is a movement of the data attributes found in one data group from the user side of the software boundary to the inside of the software boundary. An ENTRY does not update the data it moves. Functionally, an ENTRY sub-process brings data lying on the user's side of the software boundary within reach of the functional process to which it belongs. Note also that in COSMIC FFP, an ENTRY is considered to include certain associated data manipulation (validation) sub-processes.

EXIT: An EXIT (X) is a movement of the data attributes found in one data group from inside the software boundary to the user side of the software boundary. An EXIT does not read the data it moves. Functionally, an EXIT sub-process sends data lying inside the functional process to which it belongs (implicitly inside the software boundary) within reach of the user side of the boundary. Note also that in COSMIC FFP, an EXIT is considered to include certain associated data manipulation sub-processes.

READ: A READ (R) refers to data attributes found in one data group. Functionally, a READ sub-process brings data from storage within reach of the functional process to which it belongs. Note also that in COSMIC FFP, a READ is considered to include certain associated data manipulation sub-processes.

WRITE: A WRITE (W) refers to data attributes found in one data group. Functionally, a WRITE sub-process sends data lying inside the functional process to which it belongs to storage. Note also that in COSMIC FFP, a WRITE is considered to include certain associated data manipulation sub-processes.

Example 1: When there is a single data movement of each of the 4 types in Fig. 1, the functional size is: 1 Entry + 1 Exit + 1 Read + 1 Write = 1 CFP + 1 CFP + 1 CFP + 1 CFP = 4 CFP.

Example 2: When there are two data movements of each of the 4 types in Fig. 1, the functional size is: 2 Entries + 2 Exits + 2 Reads + 2 Writes = 2 CFP + 2 CFP + 2 CFP + 2 CFP = 8 CFP.
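Because 1 CFP is, by convention, the size of a single data movement, the sizing arithmetic in the two examples reduces to simple sums. A minimal sketch:

    # COSMIC sizing: each data movement (Entry, Exit, Read, Write) of a
    # data group contributes 1 CFP; a functional process's size is the
    # sum of its movements, and the software's size is the sum over all
    # of its functional processes.
    def process_size(entries, exits, reads, writes):
        return entries + exits + reads + writes  # CFP

    def software_size(processes):
        return sum(process_size(*p) for p in processes)

    print(process_size(1, 1, 1, 1))                     # Example 1: 4 CFP
    print(process_size(2, 2, 2, 2))                     # Example 2: 8 CFP
    print(software_size([(1, 1, 1, 1), (2, 2, 2, 2)]))  # total: 12 CFP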

COCOMO-II

COCOMO-II is the revised version of the original COCOMO (Constructive Cost Model) and was developed at the University of Southern California.

It is the model that allows one to estimate the cost, effort and schedule when

planning a new software development activity.


Application composition model: This model as the name suggests, can be

used to estimate the cost for prototype development. We had already discussed

in Chapter 2 that a prototype is usually developed to resolve user interface

issues.

Early design model: This supports estimation of cost at the architectural

design stage.

Post-architecture model: This provides cost estimation during detailed

design and coding stages.

The post-architectural model can be considered as an update of the original

COCOMO. The other two models help consider the following two factors. GUI

development constitutes a significant part of the overall development effort. The

second factor concerns several issues that affect productivity such as the extent

of reuse.

Application composition model

The application composition model is based on counting the number

of screens, reports, and modules (components). Each of these

components is considered to be an object (this has nothing to do with


the concept of objects in the object-oriented paradigm). These are

used to compute the object points of the application.

Effort is estimated in the application composition model as follows:

1. Estimate the number of screens, reports, and modules

(components) from an analysis of the SRS document.

2. Determine the complexity level of each screen and report, and rate

these as either simple, medium, or difficult. The complexity of a

screen or a report is determined by the number of tables and views it

contains.

3. Use the weight values in Table

Screen complexity assignments based on the number of data tables:

No. of views   Tables < 4   Tables < 8   Tables >= 8
< 3            Simple       Simple       Medium
3-7            Simple       Medium       Difficult
> 8            Medium       Difficult    Difficult


Report complexity assignments based on the number of data tables:

No. of sections   Tables < 4   Tables < 8   Tables >= 8
0 or 1            Simple       Simple       Medium
2 or 3            Simple       Medium       Difficult
4 or more         Medium       Difficult    Difficult

4. Add all the assigned complexity values for the object instances

together to obtain the object points.

Complexity weights for each object type:

Object type     Simple   Medium   Difficult
Screen          1        2        3
Report          2        5        8
3GL component   -        -        10

5. Estimate percentage of reuse expected in the system. Note that

reuse refers to the amount of pre-developed software that will be


used within the system. Then, evaluate New Object-Point count

(NOP) as follows,

NOP = [(Object Points) × (100 - % of reuse)] / 100

6. Determine productivity using Table 3.6. The productivity depends

on the experience of the developers as well as the maturity of the

CASE environment used.

Developers' experience   Very low   Low   Nominal   High   Very high
CASE maturity            Very low   Low   Nominal   High   Very high
Productivity             4          7     13        25     50

Productivity Table

7. Finally, the estimated effort in person-months is computed as E =


NOP/PROD.
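Steps 1-7 can be combined into a short computation. The sketch below encodes the weight and productivity values from the tables above; the component list, the 20% reuse figure, and the choice of nominal productivity (13) are illustrative assumptions.

    # Object-point estimation per the application composition model.
    # Weights: screen 1/2/3, report 2/5/8, 3GL component 10 (see tables).
    WEIGHTS = {
        "screen": {"simple": 1, "medium": 2, "difficult": 3},
        "report": {"simple": 2, "medium": 5, "difficult": 8},
        "3gl":    {"difficult": 10},
    }

    def object_points(components):
        # components: (object_type, complexity) pairs taken from the SRS
        return sum(WEIGHTS[t][c] for t, c in components)

    def effort_pm(components, reuse_pct, productivity):
        op = object_points(components)
        nop = op * (100 - reuse_pct) / 100  # New Object Points
        return nop / productivity           # E = NOP / PROD

    comps = [("screen", "simple"), ("screen", "medium"),
             ("report", "difficult"), ("3gl", "difficult")]
    # 1 + 2 + 8 + 10 = 21 object points; 20% reuse; nominal PROD = 13
    print(effort_pm(comps, 20, 13))  # -> 16.8 NOP / 13 ≈ 1.29 PM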

Early design model

The unadjusted function points (UFP) are counted and converted to source lines of code (SLOC).

In a typical programming environment, each UFP would correspond

to about 128 lines of C, 29 lines of C++, or 320 lines of assembly code.

The conversion from UFP to LOC is environment specific, and

depends on factors such as extent of reusable libraries supported.

Seven cost drivers that characterise the post-architecture model are

used.

These are rated on a seven points scale.

The cost drivers include product reliability and complexity, the

extent of reuse, platform sophistication, personnel experience,

CASE support, and schedule.


The effort is calculated using the following formula:

Effort = KSLOC × (product of the cost drivers)

Post-architecture model

The effort is calculated using the following formula, which is similar

to the original COCOMO model.

Effort = a × (KSLOC)^b × (product of the cost drivers)

The post-architecture model differs from the original COCOMO

model in the choice of the set of cost drivers and the range of values

of the exponent b.

The exponent b can take values in the range of 1.01 to 1.26.
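A sketch of the two formula forms above. The UFP-to-SLOC factor of 128 (for C) is quoted in the early design discussion; the values chosen here for a, b, and the cost-driver multipliers are illustrative assumptions only, not calibrated constants from this text.

    import math

    # Early design: size in KSLOC is obtained from UFP via a language-
    # specific conversion (e.g., ~128 SLOC per UFP for C, as quoted above).
    def ufp_to_ksloc(ufp, sloc_per_ufp=128):
        return ufp * sloc_per_ufp / 1000

    # Post-architecture: Effort = a * (KSLOC)^b * (product of cost drivers),
    # with b in the range 1.01 to 1.26. The values a = 2.94, b = 1.10 and
    # the driver multipliers below are assumptions for illustration.
    def post_architecture_effort(ksloc, b, cost_drivers, a=2.94):
        em = math.prod(cost_drivers)  # product of the effort multipliers
        return a * ksloc ** b * em    # person-months

    ksloc = ufp_to_ksloc(250)         # 250 UFP of C -> 32 KSLOC
    print(post_architecture_effort(ksloc, b=1.10, cost_drivers=[1.1, 0.9]))
    # -> ≈ 131.7 person-months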
