SPM Unit 2 Notes

The document discusses software processes, emphasizing the key activities involved in software development, including specification, development, validation, and evolution. It also covers various software process models such as RAD, DSDM, and Extreme Programming, highlighting their methodologies and phases. Additionally, it addresses software estimation techniques, focusing on the importance of accurate project estimation and the steps involved in the estimation process.


Unit – II

Software Processes

The term software refers to the set of computer programs, procedures, and associated
documents (flowcharts, manuals, etc.) that describe the programs and how they are to be
used.

A software process is the set of activities and associated outcomes that produce a software
product. Software engineers mostly carry out these activities. There are four key process
activities, which are common to all software processes. These activities are:

1. Software specifications: The functionality of the software and constraints on its
operation must be defined.
2. Software development: The software to meet the requirements must be produced.
3. Software validation: The software must be validated to ensure that it does what
the customer wants.
4. Software evolution: The software must evolve to meet changing client needs.

Software Process Model

A software process model is a simplified description of a software process,
presented from a particular perspective. Models, by their nature,
are simplifications, so a software process model is an abstraction of the
actual process being described. Process models may contain
the activities that are part of the software process, software products, and
the roles of the people involved in software engineering. Some examples of
the types of software process models that may be produced are:

1. A workflow model: This shows the sequence of activities in the
process along with their inputs, outputs, and dependencies. The
activities in this model represent human actions.
2. A dataflow or activity model: This represents the process as a set
of activities, each of which carries out some data transformation. It
shows how the input to the process, such as a specification, is converted
to an output, such as a design. The activities here may be at a lower
level than the activities in a workflow model. They may represent
transformations carried out by people or by computers.
3. A role/action model: This represents the roles of the people involved
in the software process and the activities for which they are responsible.

RAD (Rapid Application Development)

What is RAD?
Rapid application development is a software development methodology
that uses minimal planning in favor of rapid prototyping. A prototype is
a working model that is functionally equivalent to a component of the
product.
In the RAD model, the functional modules are developed in parallel as
prototypes and are integrated to make the complete product for faster
product delivery. Since there is no detailed preplanning, it is easier
to incorporate changes within the development process.
RAD projects follow an iterative and incremental model and have small
teams comprising developers, domain experts, customer
representatives, and other IT resources working progressively on their
component or prototype.
The most important aspect for this model to be successful is to make sure
that the prototypes developed are reusable.
RAD Model Design
RAD model distributes the analysis, design, build and test phases into a
series of short, iterative development cycles.
Following are the various phases of the RAD Model −
Business Modelling
The business model for the product under development is designed in
terms of flow of information and the distribution of information between
various business channels. A complete business analysis is performed to
find the information vital to the business, how it can be obtained, how and
when the information is processed, and what factors drive the
successful flow of information.
Data Modelling
The information gathered in the Business Modelling phase is reviewed
and analyzed to form sets of data objects vital for the business. The
attributes of all data sets are identified and defined. The relationships
between these data objects are established and defined in detail in relation to the
business model.
Process Modelling
The data object sets defined in the Data Modelling phase are converted
to establish the business information flow needed to achieve specific
business objectives as per the business model. The process model for any
changes or enhancements to the data object sets is defined in this phase.
Process descriptions for adding, deleting, retrieving or modifying a data
object are given.
Application Generation
The actual system is built and coding is done by using automation tools
to convert process and data models into actual prototypes.
Testing and Turnover
The overall testing time is reduced in the RAD model as the prototypes
are independently tested during every iteration. However, the data flow
and the interfaces between all the components need to be thoroughly
tested with complete test coverage. Since most of the programming
components have already been tested, it reduces the risk of any major
issues.

Dynamic Systems Development Method

The Dynamic Systems Development Method (DSDM) is an
agile software development approach that provides a
framework for building and maintaining systems. The DSDM
philosophy is borrowed from a modified version of the Pareto
principle: 80 percent of an application can often be delivered in 20 percent
of the time it would take to deliver the entire (100 percent) application.
DSDM is an iterative software process in which every iteration
follows the 80% rule: just enough work is required for every
increment to facilitate movement to the next increment. The
remaining detail can be completed later, once more business
requirements are known or changes are requested and accommodated.
The DSDM Consortium (www.dsdm.org) is a worldwide group of
member companies that together take on the role of "keeper" of the
method. The consortium has defined the DSDM life cycle, which
defines 3 different iterative cycles, preceded by 2 further life-cycle
activities:

1. Feasibility Study:
It establishes the basic business requirements and constraints
associated with the application to be built, and then assesses whether
the application is a viable candidate for the DSDM process.
2. Business Study:
It establishes the functional and information requirements that will allow the
application to provide business value; additionally, it defines the basic
application architecture and identifies the maintainability requirements for
the application.

3. Functional Model Iteration:
It produces a set of incremental prototypes that demonstrate
functionality for the customer.
(Note: All DSDM prototypes are intended to evolve into the
deliverable application.) The intent throughout this iterative cycle
is to gather additional requirements by eliciting feedback from users as
they exercise the prototype.

4. Design and Build Iteration:
It revisits prototypes built during functional model iteration to
make sure that each has been engineered in a manner that
will enable it to provide operational business value for end users. In
some cases, functional model iteration and design and build iteration occur
at the same time.

5. Implementation:
It places the latest software increment (an "operationalized" prototype)
into the operational environment. It should be noted that:
• (a) the increment may not be 100% complete, or
• (b) changes may be requested as the increment is put
into place. In either case, DSDM development work continues by
returning to the functional model iteration activity.

Extreme Programming

Extreme programming (XP) is one of the most important software
development frameworks. It is used to improve software quality and
responsiveness to changing customer requirements. The extreme programming model
recommends taking the best practices that have worked well in past
program development projects to extreme levels.
Good practices in extreme programming: Some of
the good practices that have been recognized in the extreme
programming model, and whose use it suggests maximizing, are given
below:
• Code Review: Code review detects and corrects errors efficiently. XP
suggests pair programming, in which coding and reviewing of written code
are carried out by a pair of programmers who switch their work
between them every hour.
• Testing: Testing code helps to remove errors and improves its
reliability. XP suggests test-driven development (TDD) to
continually write and execute test cases. In the TDD approach test
cases are written even before any code is written.
• Incremental development: Incremental development is very effective
because customer feedback is gained after each iteration, and based on it
the development team comes up with new increments every few
days.
• Simplicity: Simplicity makes it easier to develop good-quality code,
as well as to test and debug it.
• Design: Good-quality design is important for developing good-quality
software. So, everybody should design daily.
• Integration testing: It helps to identify bugs at the interfaces of
different functionalities. Extreme programming suggests that the
developers should achieve continuous integration by building and
performing integration testing several times a day.
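The test-first rhythm that TDD prescribes can be sketched in a few lines. The add function below is a hypothetical example, not something from these notes; the point is the order: the test cases exist before the implementation does.

```python
# Minimal sketch of test-driven development (TDD): tests first, code second.
import unittest

class TestAdd(unittest.TestCase):
    # In TDD, these test cases are written BEFORE add() is implemented;
    # they fail until the code below exists.
    def test_sum(self):
        self.assertEqual(add(2, 3), 5)

    def test_identity(self):
        self.assertEqual(add(0, 7), 7)

# The simplest implementation that makes the tests above pass.
def add(a, b):
    return a + b

# Run the suite programmatically (a CI server would do this many times a day,
# matching the continuous-integration practice described above).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Pair programming fits naturally here: one programmer writes the failing test, the other writes the code that passes it, and they swap.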

ITERATIVE PROCESS
Modern software development processes have moved away from the
conventional waterfall model, in which each stage of the development
process is dependent on completion of the previous stage. The economic
benefits inherent in transitioning from the conventional waterfall model
to an iterative development process are significant but difficult to
quantify. As one benchmark of the expected economic impact of process
improvement, consider the process exponent parameters of the
COCOMO II model. (Appendix B provides more detail on the
COCOMO model.) This exponent can range from 1.01 (virtually no
diseconomy of scale) to 1.26 (significant diseconomy of scale). The
parameters that govern the value of the process exponent are application
precedentedness, process flexibility, architecture risk resolution, team
cohesion, and software process maturity. The following paragraphs map
the process exponent parameters of COCOMO II to the top 10 principles
of a modern process.
• Application precedentedness. Domain experience is a critical factor
in understanding how to plan and execute a software development
project. For unprecedented systems, one of the key goals is to confront
risks and establish early precedents, even if they are incomplete or
experimental. This is one of the primary reasons that the software
industry has moved to an iterative life-cycle process. Early iterations in
the life cycle establish precedents from which the product, the process,
and the plans can be elaborated in evolving levels of detail.
• Process flexibility. Development of modern software is characterized
by such a broad solution space and so many interrelated concerns that
there is a paramount need for continuous incorporation of changes. These
changes may be inherent in the problem understanding, the solution
space, or the plans. Project artifacts must be supported by efficient
change management commensurate with project needs. A configurable
process that allows a common framework to be adapted across a range of
projects is necessary to achieve a software return on investment.
• Architecture risk resolution. Architecture-first development is a
crucial theme underlying a successful iterative development process. A
project team develops and stabilizes architecture before developing all
the components that make up the entire suite of applications components.
An architecture-first and component-based development approach forces
the infrastructure, common mechanisms, and control mechanisms to be
elaborated early in the life cycle and drives all component make/buy
decisions into the architecture process.
• Team cohesion. Successful teams are cohesive, and cohesive teams are
successful. Successful teams and cohesive teams share common
objectives and priorities. Advances in technology (such as programming
languages, UML, and visual modeling) have enabled more rigorous and
understandable notations for communicating software engineering
information, particularly in the requirements and design artifacts that
previously were ad hoc and based completely on paper exchange. These
model-based formats have also enabled the round-trip engineering
support needed to establish change freedom sufficient for evolving
design representations.
• Software process maturity. The Software Engineering Institute's
Capability Maturity Model (CMM) is a well-accepted benchmark for
software process assessment. One of its key themes is that truly mature
processes are enabled through an integrated environment that provides
the appropriate level of automation to instrument the process for
objective quality control.
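The diseconomy-of-scale exponent quoted at the start of this section can be illustrated numerically. Only the exponent range (1.01 to 1.26) comes from the text; the multiplier A = 2.94 and the project sizes are illustrative assumptions.

```python
# Illustration of the COCOMO II diseconomy of scale: effort grows as size^E.
def effort(size_ksloc, exponent, a=2.94):
    # a is an assumed nominal multiplier; result is in person-months (PM).
    return a * size_ksloc ** exponent

for e in (1.01, 1.26):
    small = effort(10, e)
    large = effort(100, e)
    # With E near 1, a 10x larger system costs about 10x the effort;
    # with E = 1.26, the same growth costs about 18x the effort.
    print(f"E={e}: 10 KSLOC = {small:.0f} PM, 100 KSLOC = {large:.0f} PM, "
          f"ratio {large / small:.1f}x")
```

The gap between the two ratios is the economic benefit the process-exponent parameters are trying to capture: improving precedentedness, flexibility, architecture, cohesion, and maturity pushes E toward 1.01.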

BASICS OF SOFTWARE ESTIMATION


Estimation is the process of finding an estimate, or approximation,
which is a value that can be used for some purpose even if input data may
be incomplete, uncertain, or unstable.
Estimation determines how much money, effort, resources, and time it
will take to build a specific system or product. Estimation is based on −

• Past Data/Past Experience
• Available Documents/Knowledge
• Assumptions
• Identified Risks
The four basic steps in Software Project Estimation are −

• Estimate the size of the development product.
• Estimate the effort in person-months or person-hours.
• Estimate the schedule in calendar months.
• Estimate the project cost in agreed currency.
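The four steps above chain together naturally. A minimal sketch, in which the size, productivity, team size, and rate figures are all assumed purely for illustration:

```python
# Sketch of the four basic estimation steps, chained together.
size_fp = 320                  # Step 1: estimated size (function points, assumed)
productivity_fp_per_pm = 8     # assumed historical baseline: FP per person-month

effort_pm = size_fp / productivity_fp_per_pm   # Step 2: effort in person-months
team_size = 5
schedule_months = effort_pm / team_size        # Step 3: schedule in calendar months
rate_per_pm = 10_000                           # assumed cost per person-month
cost = effort_pm * rate_per_pm                 # Step 4: cost in agreed currency

print(effort_pm, schedule_months, cost)
```

Each derived number is only as good as the baseline feeding it, which is why the observations below stress historical project data.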
Observations on Estimation
• Estimation need not be a one-time task in a project. It can take place
during −
o Acquiring a Project.
o Planning the Project.
o Execution of the Project as the need arises.
• Project scope must be understood before the estimation process
begins. It will be helpful to have historical Project Data.
• Project metrics can provide a historical perspective and valuable
input for generation of quantitative estimates.
• Planning requires technical managers and the software team to
make an initial commitment as it leads to responsibility and
accountability.
• Past experience can aid greatly.
• Use at least two estimation techniques to arrive at the estimates and
reconcile the resulting values. Refer Decomposition Techniques in
the next section to learn about reconciling estimates.
• Plans should be iterative and allow adjustments as time passes and
more details are known.
General Project Estimation Approach
The Project Estimation Approach that is widely used is Decomposition
Technique. Decomposition techniques take a divide and conquer
approach. Size, Effort and Cost estimation are performed in a stepwise
manner by breaking down a Project into major Functions or related
Software Engineering Activities.
Step 1 − Understand the scope of the software to be built.
Step 2 − Generate an estimate of the software size.
• Start with the statement of scope.
• Decompose the software into functions that can each be estimated
individually.
• Calculate the size of each function.
• Derive effort and cost estimates by applying the size values to your
baseline productivity metrics.
• Combine function estimates to produce an overall estimate for the
entire project.
Step 3 − Generate an estimate of the effort and cost. You can arrive at
the effort and cost estimates by breaking down a project into related
software engineering activities.
• Identify the sequence of activities that need to be performed for the
project to be completed.
• Divide activities into tasks that can be measured.
• Estimate the effort (in person hours/days) required to complete each
task.
• Combine effort estimates of tasks of activity to produce an estimate
for the activity.
• Obtain cost units (i.e., cost/unit effort) for each activity from the
database.
• Compute the total effort and cost for each activity.
• Combine effort and cost estimates for each activity to produce an
overall effort and cost estimate for the entire project.
Step 4 − Reconcile estimates: Compare the resulting values from Step 3
to those obtained from Step 2. If both sets of estimates agree, then your
numbers are highly reliable. Otherwise, if widely divergent estimates
occur, conduct further investigation to determine whether −
• The scope of the project is not adequately understood or has been
misinterpreted.
• The function and/or activity breakdown is not accurate.
• Historical data used for the estimation techniques is inappropriate
for the application, or obsolete, or has been misapplied.
Step 5 − Determine the cause of divergence and then reconcile the
estimates.
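Steps 4 and 5 amount to comparing the two independently derived figures and measuring their spread. A sketch, in which the 10% agreement threshold is an assumed project policy, not something stated in these notes:

```python
# Sketch of reconciling two independently derived effort estimates
# (e.g., the size-based figure from Step 2 vs the activity-based one from Step 3).
def divergence(est_a, est_b):
    # Relative spread between two estimates, as a fraction of their mean.
    mean = (est_a + est_b) / 2
    return abs(est_a - est_b) / mean

size_based_pm = 42.0       # illustrative value from Step 2
activity_based_pm = 46.0   # illustrative value from Step 3

if divergence(size_based_pm, activity_based_pm) <= 0.10:   # assumed policy
    print("Estimates agree; the numbers are reliable.")
else:
    print("Widely divergent; re-check scope, breakdown, and historical data.")
```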
Estimation Accuracy
Accuracy is an indication of how close something is to reality. Whenever
you generate an estimate, everyone wants to know how close the numbers
are to reality. You will want every estimate to be as accurate as possible,
given the data you have at the time you generate it. And of course you
don’t want to present an estimate in a way that inspires a false sense of
confidence in the numbers.
Important factors that affect the accuracy of estimates are −
• The accuracy of all the estimate’s input data.
• The accuracy of any estimate calculation.
• How closely the historical data or industry data used to calibrate the
model matches the project you are estimating.
• The predictability of your organization’s software development
process.
• The stability of both the product requirements and the environment
that supports the software engineering effort.
• Whether or not the actual project was carefully planned, monitored
and controlled, and no major surprises occurred that caused
unexpected delays.
Following are some guidelines for achieving reliable estimates −

• Base estimates on similar projects that have already been
completed.
• Use relatively simple decomposition techniques to generate project
cost and effort estimates.
• Use one or more empirical estimation models for software cost and
effort estimation.
Refer to the section on Estimation Guidelines in this chapter.
To ensure accuracy, you are always advised to estimate using at least two
techniques and compare the results.
Estimation Issues
Often, project managers resort to estimating schedules while skipping
size estimation. This may be because of the timelines set by top
management or the marketing team. However, whatever the reason, if
this is done, then at a later stage it will be difficult to adjust the
schedules to accommodate scope changes.
While estimating, certain assumptions may be made. It is important to
record all these assumptions in the estimation sheet, though many
estimators still fail to do so.
Even good estimates have inherent assumptions, risks, and uncertainty,
and yet they are often treated as though they are accurate.
The best way of expressing estimates is as a range of possible outcomes
by saying, for example, that the project will take 5 to 7 months instead of
stating it will be complete on a particular date or in a
fixed number of months. Beware of committing to a range that is too narrow,
as that is equivalent to committing to a definite date.
• You could also include uncertainty as an accompanying probability
value. For example, there is a 90% probability that the project will
complete on or before a definite date.
• Organizations often do not collect accurate project data. Since the
accuracy of the estimates depends on the historical data, this
becomes an issue.
• For any project, there is a shortest possible schedule that will allow
you to include the required functionality and produce quality
output. If there is a schedule constraint by management and/or
client, you could negotiate on the scope and functionality to be
delivered.
• Agree with the client on handling scope creeps to avoid schedule
overruns.
• Failure to accommodate contingency in the final estimate causes
issues, e.g., time for meetings and organizational events.
• Resource utilization should be taken as less than 80%. This is
because resources are typically productive for only 80% of their
time. If you assign resources at more than 80% utilization, there are
bound to be slippages.
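The 80% utilization guideline can be turned into a quick capacity check. The head count and hours below are assumptions for illustration:

```python
# Quick capacity check using the 80%-utilization guideline above.
people = 5
hours_per_month = 160
nominal_hours = people * hours_per_month        # 800 nominal hours per month
productive_hours = nominal_hours * 0.80         # plan against 640, not 800

work_package = 700                              # assumed task estimate in hours
fits = work_package <= productive_hours         # False: slippage is likely
print(productive_hours, fits)
```

Planning the 700-hour package into the 800 nominal hours looks safe on paper; against the 640 productive hours it is already a slip.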
Estimation Guidelines
One should keep the following guidelines in mind while estimating a
project −
• During estimation, ask other people about their experiences. Also,
draw on your own experience.
• Assume resources will be productive for only 80 percent of their
time. Hence, during estimation take the resource utilization as less
than 80%.
• Resources working on multiple projects take longer to complete
tasks because of the time lost switching between them.
• Include management time in any estimate.
• Always build in contingency for problem solving, meetings and
other unexpected events.
• Allow enough time to do a proper project estimate. Rushed
estimates are inaccurate, high-risk estimates. For large
development projects, the estimation step should really be regarded
as a mini project.
• Where possible, use documented data from your organization’s
similar past projects. It will result in the most accurate estimate. If
your organization has not kept historical data, now is a good time
to start collecting it.
• Use developer-based estimates, as the estimates prepared by people
other than those who will do the work will be less accurate.
• Use several different people to estimate and use several different
estimation techniques.
• Reconcile the estimates. Observe the convergence or spread among
the estimates. Convergence means that you have got a good
estimate. Wideband-Delphi technique can be used to gather and
discuss estimates using a group of people, the intention being to
produce an accurate, unbiased estimate.
• Re-estimate the project several times throughout its life cycle.

COST ESTIMATION TECHNIQUES


Cost estimation is a technique used to predict the financial outlay
required for the effort to develop and test software in Software Engineering. Cost
estimation models are mathematical algorithms or parametric
equations that are used to estimate the cost of a product or a project.
Various techniques or models are available for cost estimation, also
known as Cost Estimation Models, as shown below:

1. Empirical Estimation Technique –
Empirical estimation is a technique or model in which empirically
derived formulas are used for predicting data that are a required
and essential part of the software project planning step. These
techniques are usually based on data collected previously
from projects, as well as on guesses, prior experience
with the development of similar types of projects, and assumptions.
It uses the size of the software to estimate the effort.
In this technique, an educated guess of project parameters is made.
Hence, these models are based on common sense. However, as there
are many activities involved in empirical estimation, the
technique is formalized. Examples are the Delphi technique and the Expert
Judgement technique.
2. Heuristic Technique –
The word heuristic is derived from a Greek word meaning "to
discover". The heuristic technique is a technique or model
used for solving problems, learning, or discovery through practical
methods aimed at achieving immediate goals. These
techniques are flexible and simple, allowing quick decisions through
shortcuts and good-enough calculations, especially when
working with complex data. However, the decisions made using
this technique are not necessarily optimal.
In this technique, the relationship among different project parameters
is expressed using mathematical equations. The most popular heuristic
technique is the Constructive Cost Model (COCOMO). This
technique is also used to speed up analysis and
investment decisions.
3. Analytical Estimation Technique –
Analytical estimation is a type of technique that is used to measure
work. In this technique, firstly the task is divided or broken down
into its basic component operations or elements for analyzing.
Second, if the standard time is available from some other source,
then these sources are applied to each element or component of
work.
Third, if no such time is available, then the work is estimated
based on experience with similar work. In this technique, results are
derived by making certain basic assumptions about the project.
Hence, the analytical estimation technique has a scientific
basis. Halstead's software science is based on an analytical
estimation model.
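Halstead's software science, mentioned above, derives its estimates analytically from operator and operand counts. The formulas below are the standard Halstead ones; the counts are for an assumed toy program, not a real measurement:

```python
# Halstead's software science metrics from operator/operand counts.
import math

n1, n2 = 10, 7     # distinct operators, distinct operands (assumed counts)
N1, N2 = 40, 25    # total operator and operand occurrences (assumed counts)

vocabulary = n1 + n2                       # n = n1 + n2
length = N1 + N2                           # N = N1 + N2
volume = length * math.log2(vocabulary)    # V = N * log2(n)
difficulty = (n1 / 2) * (N2 / n2)          # D = (n1/2) * (N2/n2)
effort = difficulty * volume               # E = D * V

print(f"V={volume:.1f}, D={difficulty:.2f}, E={effort:.1f}")
```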
COCOMO MODEL

COCOMO (Constructive Cost Model) is a regression model based on
LOC, i.e., the number of Lines of Code. It is a procedural cost-estimation
model for software projects and is often used as a process for reliably
predicting the various parameters associated with a project, such
as size, effort, cost, time, and quality. It was proposed by Barry Boehm
in 1981 and is based on a study of 63 projects, which makes it one of
the best-documented models.

The key parameters that define the quality of any software product,
which are also outcomes of COCOMO, are primarily Effort and
Schedule:

• Effort: The amount of labor that will be required to complete a task. It
is measured in person-month units.
• Schedule: The amount of time required for the
completion of the job, which is, of course, proportional to the effort
put in. It is measured in units of time such as weeks or months.

In COCOMO, projects are categorized into three types:

1. Organic
2. Semidetached
3. Embedded

1. Organic – A software project is said to be of organic type if the
team size required is sufficiently small, the problem is well
understood and has been solved in the past, and the team
members have nominal experience with the problem.

2. Semi-detached – A software project is said to be of semi-detached
type if vital characteristics such as team size, experience, and
knowledge of the various programming environments lie between
those of organic and embedded projects. Projects classified as
semi-detached are comparatively less familiar and more difficult to develop
than organic ones, and require more experience, better
guidance, and creativity. E.g., compilers or various embedded
systems can be considered of semi-detached type.
3. Embedded – Software projects requiring the highest levels of
complexity, creativity, and experience fall under this
category. Such software requires a larger team size than the other
two models, and the developers need to be sufficiently
experienced and creative to develop such complex systems.
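The three categories map onto different coefficients in the basic COCOMO equations: Effort = a * (KLOC)^b person-months and Schedule = c * (Effort)^d months. A sketch using the standard published coefficients for the basic model; the 32-KLOC size is an assumed example:

```python
# Basic COCOMO (Boehm, 1981): effort and schedule for the three project types.
COEFFS = {
    #                a     b     c     d
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode):
    a, b, c, d = COEFFS[mode]
    effort = a * kloc ** b          # person-months
    schedule = c * effort ** d      # calendar months
    return effort, schedule

for mode in COEFFS:
    e, t = basic_cocomo(32, mode)   # 32 KLOC is an illustrative size
    print(f"{mode}: {e:.1f} PM over {t:.1f} months")
```

Note how, for the same size, an embedded project demands noticeably more effort than an organic one: the larger exponent b encodes the diseconomy of scale of complex, unfamiliar work.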
