Unit-2, SPM
A software process model is an abstract representation of a software development process. It is a framework for designing, implementing and testing software systems. There are many different software process frameworks that an organization can use to generate a custom set of steps tailored to the needs of a particular project.
One of the basic notions of the software development process is SDLC models which
stands for Software Development Life Cycle models. There are many development life
cycle models that have been developed in order to achieve different required
objectives. The models specify the various stages of the process and the order in which
they are carried out. The most popular and widely used SDLC models are given below:
● Waterfall model
● V model
● Incremental model
● RAD model
● Agile model
● Iterative model
● Spiral model
● Prototype model
Waterfall Model
The waterfall model is a breakdown of project activities into linear sequential phases,
where each phase depends on the deliverables of the previous one and corresponds to
a specialization of tasks. The approach is typical for certain areas of engineering design.
V Model
The V-model represents a development process that may be considered an extension of the waterfall model and is an example of the more general V-model. Instead of
moving down in a linear way, the process steps are bent upwards after the coding
phase, to form the typical V shape. The V-Model demonstrates the relationships
between each phase of the development life cycle and its associated phase of testing.
The horizontal and vertical axes represent time or project completeness (left-to-right) and level of abstraction (coarsest-grain abstraction uppermost), respectively.
Incremental Model
The incremental build model is a method of software development where the model is
designed, implemented and tested incrementally (a little more is added each time)
until the product is finished. It involves both development and maintenance. The
product is defined as finished when it satisfies all of its requirements. Each iteration
passes through the requirements, design, coding and testing phases. And each
subsequent release of the system adds function to the previous release until all
designed functionality has been implemented. This model combines the elements of the waterfall model with the iterative philosophy of prototyping.
Iterative Model
An iterative life cycle model does not attempt to start with a full specification of requirements. Instead, development begins with an initial, simplified set of user features, which then progressively gains more complexity and a broader set of features until the targeted system is complete.
In other words, the iterative approach begins by specifying and implementing just part of the software, which can then be reviewed and prioritized in order to identify further requirements. This iterative process is then repeated, producing a new version of the software for each iteration. In a light-weight iterative project the code may represent the major source of documentation of the system; in a critical iterative project a formal software specification may also be required.
RAD Model
Rapid application development (RAD) was a response to plan-driven waterfall processes developed in the 1970s and 1980s, such as the Structured Systems Analysis and Design Method (SSADM). RAD is built on the assumption in software development that end users can produce better feedback when examining a live system, as opposed to working strictly with documentation. It puts less emphasis on planning and more emphasis on an adaptive process.
RAD may result in a lower level of rejection when the application is placed into production, but this success most often comes at the expense of dramatic overruns in project costs and schedule. The RAD approach is especially well suited for developing software that is driven by user interface requirements. Thus, some GUI builders are often called rapid application development tools.
Spiral Model
The spiral model, first described by Barry Boehm in 1986, is a risk-driven software
development process model which was introduced for dealing with the shortcomings
in the traditional waterfall model. A spiral model looks like a spiral with many loops.
The exact number of loops of the spiral is unknown and can vary from project to
project. This model supports risk handling, and the project is delivered in loops. Each loop of the spiral represents a phase of the software development process, and the initial loops correspond to the early stages of the Waterfall Life Cycle that are needed to develop a software product. The exact number of phases needed to develop the product can be varied by the project manager depending upon the project risks. As the project manager dynamically determines the number of phases, the project manager has an important role in developing a product using the spiral model.
Agile Model
Agile is an umbrella term for a set of methods and practices based on the values and principles expressed in the Agile Manifesto. It is a way of thinking that enables teams and businesses to innovate and respond quickly to changing demand while mitigating risk. Organizations can be agile using many of the available frameworks, such as Scrum, Kanban, Lean and Extreme Programming (XP).
The primary goal of being Agile is to empower the development team with the ability to create and respond to change. Work is carried out in small cycles, which results in more frequent incremental releases, with each release building on the previous functionality, while thorough testing ensures that software quality is maintained.
Choice of software process model
Choosing the right software process model for your project can be difficult. If you
know your requirements well, it will be easier to select a model that best matches your
needs. You need to keep the following factors in mind when selecting your software
process model:
Project requirements
Before you choose a model, take some time to go through the project requirements and
clarify them alongside your organization’s or team’s expectations. Will the user need to
specify requirements in detail after each iterative session? Will the requirements
change during the development process?
Project size
Consider the size of the project you will be working on. Larger projects mean bigger
teams, so you’ll need more extensive and elaborate project management plans.
Project complexity
Complex projects may not have clear requirements. The requirements may change
often, and the cost of delay is high. Ask yourself if the project requires constant
monitoring or feedback from the client.
Cost of delay
Is the project highly time-bound with a huge cost of delay, or are the timelines
flexible?
Customer involvement
Do you need to consult the customers during the process? Does the user need to
participate in all phases?
Development team
This involves the developers’ knowledge and experience with the project domain,
software tools, language, and methods needed for development.
Project resources
This involves the amount and availability of funds, staff, and other resources.
The RAD approach also assumes a rigidly paced schedule that defers design improvements to the next product version. The phases of the RAD model are:
1. Business Modelling: The information flow among business functions is defined by answering questions such as what data drives the business process, what data is generated, who generates it, where the information goes, and who processes it.
2. Data Modelling: The data collected from business modelling is refined into a set of data objects (entities) that are needed to support the business. The attributes (character of each entity) are identified, and the relation between these data objects (entities) is defined.
3. Process Modelling: The information objects defined in the data modelling phase are
transformed to achieve the data flow necessary to implement a business function.
Processing descriptions are created for adding, modifying, deleting, or retrieving a
data object.
When to use the RAD model:
● When the system needs to be built as a set of modules within a short period of time.
● It should be used only if the budget allows the use of automatic code generating
tools.
Agile Methods
Agile is the ability to create and respond to change. It is a way of dealing with, and
ultimately succeeding in, an uncertain and turbulent environment. Agile refers to the
methods and best practices for organizing projects based on the values and principles
documented in the Agile Manifesto.
Agile software development is thus an umbrella term for a set of frameworks and practices based on the values and principles expressed in the Manifesto for Agile Software Development and the twelve principles behind it. Following are some of the most common Agile frameworks.
Kanban
Kanban is a simple, visual means of managing projects that enables teams to see the
progress so far and what’s coming up next. Kanban projects are primarily managed
through a Kanban board, which segments tasks into three columns: “To Do,” “Doing,”
and “Done.”
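As a rough sketch only (the column names come from the description above, while the task names are hypothetical), a Kanban board can be modelled as three named lists, and moving a card is simply a transfer between them:

# A Kanban board as a simple three-column structure; task names are
# hypothetical. Moving a card removes it from one column and appends
# it to the next.
board = {
    "To Do": ["write user stories", "design login screen"],
    "Doing": ["implement billing API"],
    "Done":  ["set up repository"],
}

def move(board, task, source, destination):
    # Transfer a card between columns to reflect its progress.
    board[source].remove(task)
    board[destination].append(task)

move(board, "implement billing API", "Doing", "Done")
for column, cards in board.items():
    print(f"{column}: {cards}")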
Scrum
Scrum is similar to Kanban in many ways. Scrum typically uses a Scrum board, similar
to a Kanban board, and groups tasks into columns based on progress. Unlike Kanban,
Scrum focuses on breaking a project down into sprints and only planning and
managing one sprint at a time. Scrum also has unique project roles: Scrum master and
product owner.
Extreme Programming (XP)
Extreme Programming (XP) was designed for Agile software development projects. It focuses on continuous development and frequent delivery to the customer, and it defines a set of core engineering practices, listed below (a small test-driven development sketch follows the list):
● Planning game
● Small releases
● Simple design
● Pair programming
● Test-driven development
● Refactoring
● Continuous integration
● Collective code ownership
● Coding standards
● Metaphor
● Sustainable pace
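To make the test-driven development practice above concrete, here is a minimal Python sketch (the cart_total function and its 10% discount rule are hypothetical examples chosen for illustration, not part of XP itself): the tests are written first, and just enough production code is then added to make them pass.

import unittest

def cart_total(prices):
    # Production code written only after the tests below existed:
    # sum the item prices and apply a 10% discount above 100.
    subtotal = sum(prices)
    return subtotal * 0.9 if subtotal > 100 else subtotal

class CartTotalTest(unittest.TestCase):
    # In TDD these tests are written first and fail until the
    # function above is implemented.
    def test_small_order_has_no_discount(self):
        self.assertEqual(cart_total([20, 30]), 50)

    def test_large_order_gets_ten_percent_discount(self):
        self.assertAlmostEqual(cart_total([60, 60]), 108.0)

if __name__ == "__main__":
    unittest.main()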
Feature-Driven Development (FDD)
The Feature-Driven Development (FDD) methodology involves creating software models every two weeks and requires a development and design plan for every model feature. It has more rigorous documentation requirements than XP, so it is better suited to teams with advanced design and planning abilities. FDD breaks projects down into five basic activities:
● Develop an overall model
● Build a feature list
● Plan by feature
● Design by feature
● Build by feature
Dynamic Systems Development Method (DSDM)
The Dynamic Systems Development Method (DSDM) builds rework into the process, and any development changes that occur must be reversible. Like Scrum, XP, and FDD,
● Deliver on time
● Collaborate
● Develop iteratively
● Demonstrate control
Crystal
Crystal is a family of Agile methodologies that includes Crystal Clear, Crystal Yellow,
Crystal Orange, Crystal Red, etc. Each has a unique framework. Your choice depends
on several project factors, such as your team size, priorities, and project criticality.
Lean
Lean development is often grouped with Agile, but it’s an entirely different
methodology that happens to share many of the same values. The main principles of Lean development include:
● Eliminate waste
● Build quality in
● Create knowledge
● Defer commitment
● Deliver fast
● Respect people
DSDM Life Cycle
Feasibility Study:
It establishes the essential business requirements and constraints associated with the application to be built and then assesses whether the application is a viable candidate for the DSDM process.
Business Study:
It establishes the functional and information requirements that will allow the application to deliver business value; it also defines the basic application architecture and identifies the maintainability requirements for the application.
Implementation:
It places the latest software increment (an “operationalized” prototype) into the operational environment. It should be noted that: (a) the increment may not be 100 per cent complete, or (b) changes may be requested as the increment is put into place. In either case, DSDM development work continues by returning to the functional model iteration activity.
DSDM can be combined with XP to provide a combination approach that defines a solid process model (the DSDM life cycle) together with the nuts and bolts practices (XP) that are needed to build software increments. Additionally, the ASD concepts of collaboration and self-organizing teams can be adapted to this combined process model.
Extreme Programming
Extreme Programming (XP) is an agile software development framework that
aims to produce higher quality software, and higher quality of life for the
development team.
Values provide purpose to teams. They act as a “north star” to guide your decisions in a
high-level way. However, values are abstract and too fuzzy for specific guidance. For
instance: saying that you value communication can result in many different outcomes.
Practices are, in some ways, the opposite of values. They’re concrete and down to
earth, defining the specifics of what to do. Practices help teams hold themselves
accountable to the values. For instance, the practice of Informative Workspaces favors
transparent and simple communication.
Principles are domain-specific guidelines that bridge the gap between practices and
values.
Managing People
Estimation
Estimation is the process of finding an approximate value that can be used for some purpose even if input data may be incomplete, uncertain, or unstable.
Estimation determines how much money, effort, resources, and time it will take to build a specific system or product. Estimation is based on inputs such as:
● Available Documents/Knowledge
● Assumptions
● Identified Risks
Estimation need not be a one-time task in a project. It can take place during −
○ Acquiring a Project.
○ Planning the Project.
○ Executing the Project, as the need arises.
Project metrics can provide a historical perspective and valuable input for generation
of quantitative estimates.
Planning requires technical managers and the software team to make an initial commitment, as it leads to responsibility and accountability.
● Use at least two estimation techniques to arrive at the estimates and reconcile the resulting values.
● Plans should be iterative and allow adjustments as time passes and more details
are known.
Cost estimation refers to the techniques used to arrive at the cost estimates for a project. The cost estimate is the financial spend on the effort needed to develop and test the software.
Various techniques or models are available for cost estimation, also known as Cost
Estimation Models. The main classes are empirical, heuristic, and analytical estimation techniques, described below.
Empirical Estimation Technique
These techniques are usually based on the data that is collected previously from a project and also on some guesses and prior experience with the development of similar types of projects. In this technique, an educated guess of project parameters is made, so these models are based on common sense. However, as there are many activities involved in empirical estimation, it is a formalized technique. Examples include the Delphi technique and Expert Judgement.
Heuristic Technique
The word heuristic is derived from a Greek word that means “to discover”.
A heuristic technique is a technique or model for problem solving, learning, or discovery that employs practical methods to achieve immediate goals.
These techniques are flexible and simple, allowing quick decisions to be taken through shortcuts and good-enough calculations, most notably when working with complex data; however, the decisions made using this technique are not necessarily optimal.
A popular heuristic technique is the Constructive Cost Model (COCOMO).
This technique is also used to speed up analysis and investment decisions.
Analytical Estimation Technique
Analytical estimation is a technique used to measure work. In this technique, the task is first divided or broken down into its basic component operations or elements for analysis. Second, if a standard time is available from some other source, those standard times are applied to each element of the work. Third, if no such time is available, the work is estimated based on the experience of the work. In this technique, results are derived by making certain basic assumptions about the project; hence, the analytical estimation technique has some scientific basis.
Project Size Estimation
Accurate estimation of the problem size is fundamental to satisfactory estimation of other project parameters such as effort, completion time, and total project cost.
The project size is a measure of the problem complexity in terms of the effort and time required to develop the product. Currently, two metrics are popularly being used to measure size—lines of code (LOC) and function point (FP).
LOC
LOC is possibly the simplest among all metrics available to measure project size.
Consequently, this metric is extremely popular. This metric measures the size of a project by counting the number of source instructions in the developed program. Obviously, while counting the number of source instructions, comment lines and header lines are ignored.
Determining the LOC count at the end of a project is very simple. However, accurate
estimation of LOC count at the beginning of a project is a very difficult task. One can
possibly estimate the LOC count at the start of a project only by using some form
of systematic guess work. Systematic guessing typically involves the following. The
project manager divides the problem into modules, and each module into sub-modules
and so on, until the LOC of the leaf-level modules are small enough to be predicted. To be able to predict the LOC count for the various leaf-level modules sufficiently accurately, past experience in developing similar modules is helpful. By adding the estimates for all leaf-level modules together, the project manager arrives at the estimate for the size of the whole product.
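The systematic guesswork described above can be sketched as follows; the module names and leaf-level LOC figures are purely illustrative assumptions, not data from any real project:

# Hypothetical decomposition of a product into modules and sub-modules.
# Leaf values are the project manager's guessed LOC for leaf-level modules.
module_tree = {
    "user_interface": {"login_screen": 300, "dashboard": 500},
    "business_logic": {"billing": 800, "reporting": 600},
    "data_access": {"orm_layer": 400},
}

def estimate_loc(tree):
    # Add up the leaf-level LOC estimates to obtain the product-level estimate.
    total = 0
    for value in tree.values():
        total += estimate_loc(value) if isinstance(value, dict) else value
    return total

print("Estimated size:", estimate_loc(module_tree), "LOC")  # 2600 LOC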
However, LOC as a measure of problem size has several shortcomings:
● LOC is a measure of the coding activity alone. A good problem size measure should consider the total effort needed to carry out various life cycle activities (i.e. specification, design, code, test, etc.) and not just the coding effort. LOC, however, focuses on the coding activity alone; it merely computes the number of source lines in the final program.
● The LOC metric penalizes use of higher-level programming languages and code reuse.
● The LOC metric measures the lexical complexity of a program and does not address the more important issues of logical and structural complexities.
● It is very difficult to accurately estimate the LOC of the final program from the problem specification.
Function point metric was proposed by Albrecht in 1983. This metric overcomes many
of the shortcomings of the LOC metric. Since its inception in the late 1970s, function point
metric has steadily gained popularity. Function point metric has several advantages
over LOC metric. One of the important advantages of the function point metric over
the LOC metric is that it can easily be computed from the problem specification itself.
Using the LOC metric, on the other hand, the size can accurately be determined only after the product has fully been developed.
The conceptual idea behind the function point metric is the following: the size of a software product is directly dependent on the number of different high-level functions or features it supports. This assumption is reasonable, since each feature would take additional effort to implement.
Function point is computed in the following three steps:
Step 1: Compute the unadjusted function point (UFP) using a heuristic expression based on the counts of inputs, outputs, inquiries, files, and interfaces.
Step 2: Refine UFP to reflect the actual complexities of the different parameters used in UFP computation.
Step 3: Compute FP by further refining UFP to account for the specific characteristics of the project that can influence the development effort.
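As a hedged sketch of these three steps, an Albrecht-style computation is shown below. The five counts are hypothetical, the weights are the commonly quoted average-complexity values (so Step 2 is approximated by treating every parameter as average), and DI is the sum of the 14 general system characteristic ratings, assumed to be 30 here.

# Function point computation sketch (classical Albrecht-style formulation).
counts  = {"inputs": 20, "outputs": 15, "inquiries": 10, "files": 6, "interfaces": 2}
weights = {"inputs": 4,  "outputs": 5,  "inquiries": 4,  "files": 10, "interfaces": 7}

# Step 1/2: unadjusted function points with average-complexity weights.
ufp = sum(counts[k] * weights[k] for k in counts)

# Step 3: adjust for project characteristics via the degree of influence (DI).
di = 30                      # assumed sum of the 14 characteristic ratings (0-5 each)
fp = ufp * (0.65 + 0.01 * di)

print("UFP =", ufp)          # 20*4 + 15*5 + 10*4 + 6*10 + 2*7 = 269
print("FP  =", fp)           # 269 * 0.95 = 255.55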
It should be carefully noted that an effort estimation of 100 PM does not imply that 100
persons should work for 1 month. Neither does it imply that 1 person should be
employed for 100 months to complete the project. The effort estimation simply
denotes the area under the person-month curve (see Figure 3.3) for the project.
Basic COCOMO Model
It is a single variable heuristic model that gives an approximate estimate of the project parameters. According to Boehm, software products are classified into three categories—organic, semidetached, and embedded—depending on the team size, the nature of the problem, and the development environment. The basic COCOMO estimation expressions take the following forms:
Effort = a1 × (KLOC)^a2 PM
Tdev = b1 × (Effort)^b2 months
where,
KLOC is the estimated size of the software product expressed in Kilo Lines Of Code,
a1, a2, b1, b2 are constants for each category of software product,
Tdev is the estimated time to develop the software, expressed in months,
Effort is the total effort required to develop the software product, expressed in person-months (PMs).
According to Boehm, every line of source text should be calculated as one LOC irrespective of the actual number of instructions on that line; if a single instruction spans several lines (say n lines), it is considered to be n LOC. The values of a1, a2, b1, b2 for the different categories of products, as given by Boehm [1981], are shown below.
Estimation of development effort: For the three classes of software products, the formulas for estimating the effort based on the code size are:
Organic: Effort = 2.4 (KLOC)^1.05 PM
Semi-detached: Effort = 3.0 (KLOC)^1.12 PM
Embedded: Effort = 3.6 (KLOC)^1.20 PM
Estimation of development time: For the three classes of software products, the formulas for estimating the development time based on the effort are:
Organic: Tdev = 2.5 (Effort)^0.38 months
Semi-detached: Tdev = 2.5 (Effort)^0.35 months
Embedded: Tdev = 2.5 (Effort)^0.32 months
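A small sketch applying the basic COCOMO expressions above; the constants are Boehm’s published values quoted earlier, while the 32 KLOC input size is a made-up example.

# Basic COCOMO effort and development-time estimation.
COCOMO_CONSTANTS = {
    # category: (a1, a2, b1, b2)
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, category):
    a1, a2, b1, b2 = COCOMO_CONSTANTS[category]
    effort = a1 * kloc ** a2      # person-months
    tdev = b1 * effort ** b2      # months
    return effort, tdev

effort, tdev = basic_cocomo(32, "organic")
print(f"Effort = {effort:.1f} PM, Tdev = {tdev:.1f} months")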
We can gain some insight into the basic COCOMO model, if we plot the estimated
effort and duration values for different software sizes. Figure 3.4 shows the plots of
estimated effort versus product size for the different categories of software products.
Intermediate COCOMO
The basic COCOMO model assumes that effort and development time are functions of
the product size alone. However, a host of other project parameters besides the
product size affect the effort as well as the time required to develop the product. For
example, the effort to develop a product would vary depending upon the sophistication of the development environment. To take such factors into account, intermediate COCOMO refines the basic estimate using a set of cost drivers covering product, hardware, personnel, and development environment attributes.
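The sketch below shows the usual mechanism of intermediate COCOMO: the nominal (basic) effort is multiplied by an effort adjustment factor (EAF) obtained as the product of cost driver multipliers. The driver names and multiplier values here are placeholders chosen for illustration, not Boehm’s published rating table.

# Intermediate COCOMO: adjust the nominal effort with an EAF.
nominal_effort = 91.3                 # PM, e.g. from the basic model above
cost_driver_multipliers = {
    "required_reliability": 1.15,     # rated above nominal (placeholder value)
    "analyst_capability":   0.85,     # experienced analysts (placeholder value)
    "use_of_tools":         0.95,     # good tool support (placeholder value)
}

eaf = 1.0
for multiplier in cost_driver_multipliers.values():
    eaf *= multiplier                 # EAF is the product of all multipliers

adjusted_effort = nominal_effort * eaf
print(f"EAF = {eaf:.3f}, adjusted effort = {adjusted_effort:.1f} PM")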
COSMIC Function Points (COSMIC FFP)
The size is a consistent measurement (or estimate) of the software which is very useful for estimation and productivity analysis; measuring size from the functional user requirements is known as Functional Size Measurement (FSM).
The COSMIC method associates the functional user requirements for each piece of software with a specific layer. Each layer possesses an intrinsic boundary for which specific functional users are identified.
The functional size of a piece of software is equal to the number of its data movements. The standard unit of measurement is denoted as ‘CFP’, where:
1 CFP is defined, by convention, as the size of a single data movement of a single data group.
Then:
The size of a functional process is equal to the number of its data movement types in CFP.
The size of a piece of software is equal to the sum of the CFP sizes of its functional processes.
A functional process comprises four types of data movement sub-processes: ENTRY, EXIT, READ, and WRITE.
ENTRY An ENTRY (E) is a movement of the data attributes found in one data group from the user side of the software boundary to the inside of the software boundary. An ENTRY (E) does not update the data it moves. Note also that in COSMIC FFP, an entry is considered to include certain associated data manipulation sub-processes.
EXIT An EXIT (X) is a movement of the data attributes found in one data
group from inside the software boundary to the user side of the
software boundary. An EXIT (X) does not read the data it moves.
Functionally, an EXIT sub-process sends data lying inside the
functional process to which it belongs (implicitly inside the
software boundary) within reach of the user side of the boundary.
Note also that in COSMIC FFP, an exit is considered to include
certain associated data manipulation sub-processes.
READ A READ (R) refers to data attributes found in one data group.
Functionally, a READ sub-process brings data from storage, within
reach of the functional process to which it belongs. Note also that in
COSMIC FFP, a READ is considered to include certain associated
data manipulation sub-processes.
WRITE A WRITE (W) refers to data attributes found in one data group.
Functionally, a WRITE sub-process sends data lying inside the
functional process to which it belongs to storage. Note also that in
COSMIC FFP, a WRITE is considered to include certain associated
data manipulation sub-processes.
Example 1: When there is one data movement of each of the 4 types in Fig. 1, the functional size is: 1 Entry + 1 Exit + 1 Read + 1 Write = 1 CFP + 1 CFP + 1 CFP + 1 CFP = 4 CFP.
Example 2: When there are two data movements of each of the 4 types in Fig. 1, the functional size is: 2 Entry + 2 Exit + 2 Read + 2 Write = 2 CFP + 2 CFP + 2 CFP + 2 CFP = 8 CFP.
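A tiny sketch of the CFP arithmetic used in the two examples above; the functional processes and their lists of already-classified data movements are hypothetical.

# COSMIC sizing sketch: each classified data movement (Entry, Exit, Read,
# Write) contributes 1 CFP; a process's size is simply the count of its
# data movements, and the software size is the sum over all processes.
functional_processes = {
    "record_order":  ["Entry", "Read", "Write", "Exit"],         # 4 CFP
    "print_invoice": ["Entry", "Read", "Read", "Exit", "Exit"],  # 5 CFP
}

process_sizes = {name: len(moves) for name, moves in functional_processes.items()}
software_size = sum(process_sizes.values())

print(process_sizes)                        # {'record_order': 4, 'print_invoice': 5}
print("Total size:", software_size, "CFP")  # 9 CFP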
COCOMO-II
COCOMO 2 is a model that allows one to estimate the cost, effort and schedule when planning a new software development activity. It provides three models that yield increasingly accurate estimates and can be applied at different stages of the same project:
Application composition model: Used to estimate the cost for prototype development. We had already discussed that a prototype is usually built to resolve user interface and requirements issues.
Early design model: Used to obtain rough estimates once the requirements have been specified, at the early design stage.
Post-architecture model: Used during the actual development, after the architecture of the system has been designed; it is the most detailed of the three models. The other two models help consider the following two factors: GUI development constitutes a significant part of the overall development effort (the first factor), and the second factor concerns several issues that affect productivity such as the extent of reuse.
The application composition model is based on counting the number of screens, reports, and 3GL components that the application will contain (object points). Effort is estimated as follows:
1. Estimate the number of screens, reports, and 3GL components that will comprise the application.
2. Determine the complexity level of each screen and report, and rate these as simple, medium, or difficult depending on the number of views and tables each contains.
3. Assign a complexity value to each object instance using the weight table below:
Object type        Simple   Medium   Difficult
Screen             1        2        3
Report             2        5        8
3GL component      –        –        10
4. Add all the assigned complexity values for the object instances together to obtain the object points (OP).
5. Estimate the percentage of reuse expected and compute the new object points (NOP) as follows:
NOP = OP × (100 − %reuse) / 100
6. Determine the productivity (NOP per person-month) from the productivity table below, based on the developer's experience and capability:
Developer's experience/capability   Very low   Low   Nominal   High   Very high
Productivity (NOP/PM)               4          7     13        25     50
7. The effort in person-months is then obtained by dividing NOP by the productivity value used.
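A short sketch of the object point arithmetic in the steps above, using the complexity weights from the table; the screen/report mix, the 20 per cent reuse figure, and the nominal productivity rating are assumptions made only for illustration.

# Application composition model sketch: object points -> NOP -> effort.
complexity_weights = {                   # from the weight table above
    "screen":        {"simple": 1, "medium": 2, "difficult": 3},
    "report":        {"simple": 2, "medium": 5, "difficult": 8},
    "3gl_component": {"difficult": 10},
}
inventory = [                            # hypothetical application make-up
    ("screen", "simple", 4), ("screen", "medium", 2),
    ("report", "medium", 3), ("3gl_component", "difficult", 1),
]

object_points = sum(complexity_weights[kind][level] * count
                    for kind, level, count in inventory)       # 4 + 4 + 15 + 10 = 33
reuse_percent = 20
nop = object_points * (100 - reuse_percent) / 100              # 26.4 NOP
productivity = 13                                              # nominal rating, NOP per PM
effort_pm = nop / productivity

print(f"OP = {object_points}, NOP = {nop:.1f}, effort = {effort_pm:.1f} PM")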
Post-architecture model
The post-architecture model is essentially a refinement of the intermediate COCOMO model; it differs from the original COCOMO model in the choice of the set of cost drivers and the range of values of the exponent b.