
VIDYAVATI MUKAND LAL GIRLS

COLLEGE, GHAZIABAD

A
PROJECT REPORT
ON

“FURNITURE WEBSITE”

BACHELOR OF COMPUTER APPLICATION


(BCA 3rd YEAR)
CCS UNIVERSITY, MEERUT
SESSION (2019 – 2022)

Under the Guidance of:
Mrs. Princy Jain
Mrs. Neeru Jain

Submitted by:
Harpreet Kaur (190962106010)
Ayushi Yadav (190962106005)
TABLE OF CONTENTS

 ACKNOWLEDGEMENT

 CERTIFICATE

 TITLE OF THE PROJECT

 INTRODUCTION
Basic Introduction of Project
Objective & Scope
Work Flow
Advantages & Disadvantages
Technical Details

 SYSTEM ANALYSIS
Preliminary Analysis & Information Gathering
Input/Output
Feasibility Study
System Requirement Specification
Cost Estimation
Project Scheduling

 SYSTEM DESIGN
Project Planning
Modules
Flowcharts
Data Flow Diagram (DFD)
E-R Diagram
Screen Shots

 TESTING

 IMPLEMENTATION & MAINTENANCE

 REFERENCES
ACKNOWLEDGEMENT

Primarily, I would like to thank my faculty, MRS. PRINCY JAIN AND MRS.
NEERU JAIN, whose valuable guidance helped me put together this project and
make it a success. Their suggestions and instructions have been the major
contributors towards the completion of the project.

Then I would like to thank my parents and friends, whose valuable
suggestions and guidance have been helpful in various phases of the
completion of the project.

Last but not least, I would like to thank my classmates, who have helped me
a lot.

Names of students:
Harpreet Kaur Saluja
Ayushi Yadav
CERTIFICATE

This is to certify that the project report entitled “FURNITURE WEBSITE” was
submitted by Harpreet Kaur and Ayushi Yadav during the academic year
2021-2022. The data sources have been fully acknowledged. I wish them
success in all their future endeavours.

Project Guide Principal

Signature: Signature:
Name: Name:
Date: Date:

Internal Examiner External Examiner

Signature: Signature:
Name: Name:
Date: Date:
TITLE OF THE PROJECT

“FURNITURE WEBSITE”
INTRODUCTION

INTRODUCTION OF FURNITURE WEBSITE


The home décor and furniture industry has great potential to grow the trend
of interior designing and room décor with various themes, colour-matched
furniture, and so on. Nowadays, many people are looking for the perfect home
décor accessory to suit their room design needs. Searching physically, shop
by shop, is not feasible for customers. An online furniture application
therefore lets customers browse good décor and furnishings from a desktop,
laptop, or mobile device.

Attributes of a Successful Online Furniture Store App
How attractive a particular product or application is to customers depends
on the qualities that you add to your store. Below are some of the major
attributes that businesses follow today. A furniture eCommerce app provides
a complete online eCommerce solution for all furniture shopping online.
OBJECTIVES

This project will serve the following objectives: -


1. Add and maintain records of available products.
2. Add and maintain customer details.
3. Add and maintain descriptions of new products.
4. Add and maintain newly entered categories of products.
5. Provide weekly, monthly, and yearly financial reports to the owner.
6. Provide a convenient billing solution.
7. Make an easy-to-use environment for users and customers.
8. Add a huge variety of products.
ADVANTAGES

 The system reduces much of the human effort in calculating bills,
especially for large orders.
 Saves the organization's money and resources by excluding the use of
paper or sheets in making bills.
 It can detect product information and prices instantaneously using
RFID technology.
 Saves time.
 It provides accurate, faultless billing calculations.
 The system is designed with an attractive GUI and detailed descriptions.
 It is flexible and user-friendly.
 It also notifies customers by sending an electronic bill via email.
DISADVANTAGES

 Requires a large database.
 Time-consuming.
 Cannot track product information if the RFID tag is abraded.

Some customers still prefer to go to the showroom when choosing furniture.
Buyers are attracted by the fact that they can see the furniture live, sit
in an armchair, or try out the bed. Of course, there are also conversations
with friendly sellers, which some people consider necessary for the right
choice of furniture. On the other hand, some people consider going to the
showroom, having long conversations with the seller, and facing the crowds
to be old-fashioned ways of buying, which they want to avoid. Besides,
coming to the showroom creates a certain pressure for some customers, so
they cannot think calmly about their decision. That is why more and more of
them are turning to online shopping, with which they can forget about the
crowds and all the pressure.
TECHNICAL DETAILS

Software Requirements:
OPERATING SYSTEM: Windows 10
EDITOR: Notepad
LANGUAGE: HTML

Hardware Requirements:
PROCESSOR: Intel i5
RAM: 4 GB
SYSTEM ANALYSIS
ANALYSIS OF PRESENT SYSTEM

Before we build a new system, it is important to study the system that will
be improved or replaced (if there is one). We need to analyze how this
system uses hardware, software, network, and people resources to convert
data resources, such as transaction data, into information products, such as
reports and displays. Thus, we should document how the information system
activities of input, processing, output, storage, and control are
accomplished.
INFORMATION GATHERING

This activity consists of gathering information about the functioning of the
present system. Large quantities of information need to be collected,
evaluated, managed, and communicated. The four most commonly used methods of
gathering information are:

Interview: Interviews must be well planned in advance. Each participant
should know the objectives of the interview beforehand and prepare for it.
Questionnaire: A questionnaire is shorter and more highly structured than an
interview. It is a useful technique for obtaining the same information from
a large group of users.
Observation: Observation requires the system analyst to go to the work site
to watch what is being done there. It is a good way of confirming and
correcting information gathered by other techniques.
Study of existing documents: Since most organizations today are involved in
a lot of paperwork, the analyst can learn a lot about the system by studying
its documents.
INPUT & OUTPUT

In input, the user has to enter his or her details, as the system requires,
to access the services that he or she needs.

In output, the user can access information from the system after entering
the input into the system.
FEASIBILITY STUDY OF THE PROJECT

The concept of feasibility is to determine whether or not a project is worth
doing. The process followed in making this determination is called a
feasibility study. Once it has been determined that a project is feasible,
the system analyst can go ahead and prepare the project specification, which
finalizes the project requirements.

Types of feasibility:

1. Technical Feasibility
2. Operational Feasibility
3. Economic Feasibility
4. Social Feasibility

Here we describe a few of these in detail:


TECHNICAL FEASIBILITY

 The proposed system has the technical capacity required to hold the data.
 This project is efficient and responds quickly to various enquiries,
regardless of the number of locations.
 The proposed system could be expanded easily and efficiently, whenever
required.

OPERATIONAL FEASIBILITY

The management of the organisation has fully supported us in bringing up the
project, and data security in this project is provided by setting up a
password procedure so that only authorized users can access the system.
ECONOMIC FEASIBILITY

 Paperwork has been computerized and reduced to a large extent.
 With the help of this project, a single person can now do the tasks of
5 to 7 persons.
 Due to the processing speed of the computer, we can extract the desired
information in a fraction of a second.

SOCIAL FEASIBILITY

It is the determination of whether a proposed project will be acceptable to
the people or not. This determination typically examines the probability of
the project being accepted by the group directly affected by the proposed
system change. To solve actual problems in an industry setting, a team of
engineers must adopt a development strategy that encompasses the process,
methods, and tools layers. This strategy is often referred to as a process
model or a software engineering paradigm.
SOFTWARE REQUIREMENTS SPECIFICATIONS

The product of the requirements stage of the software development process is
the Software Requirements Specification (SRS) (also called a requirements
document). This report lays a foundation for software engineering activities
and is constructed once the entire requirements are elicited and analysed.
The SRS is a formal report which acts as a representation of the software
and enables the customers to review whether it meets their requirements. It
comprises user requirements for a system as well as detailed specifications
of the system requirements. The SRS is a specification for a specific
software product, program, or set of applications that perform particular
functions in a specific environment. It serves several goals depending on
who is writing it. First, the SRS could be written by the client of a
system. Second, the SRS could be written by a developer of the system. The
two cases create entirely different situations and establish different
purposes for the document. In the first case, the SRS is used to define the
needs and expectations of the users. In the second case, the SRS is written
for different purposes and serves as a contract document between the
customer and the developer.
SOFTWARE DEVELOPMENT LIFE CYCLE(SDLC)

Software Development Life Cycle (SDLC) is a framework that defines the steps
involved in the development of software at each phase. It covers the
detailed plan for building, deploying, and maintaining the software.
SDLC defines the complete cycle of development, i.e., all the tasks involved
in planning, creating, testing, and deploying a software product.
SDLC MODEL

A software life cycle model is a descriptive representation of the software
development cycle. SDLC models might have different approaches, but the
basic phases and activities remain the same for all the models.

Following are the most important and popular SDLC models:

1. Waterfall Model
2. Incremental Process Model
• Iterative Enhancement Model
• The Rapid Application Development (RAD) Model
3. Evolutionary Process Model
• Spiral Model
• Prototyping Model
WATERFALL MODEL

The waterfall is a cascade SDLC model that presents the development process
as a flow, moving step by step through the phases of analysis, design,
implementation, testing, deployment, and support. This SDLC model involves
the gradual execution of every stage. Waterfall implies strict
documentation: the features expected of each phase of this SDLC model are
predefined in advance.

The waterfall life cycle model is considered one of the best-established
ways to handle complex projects.

This approach helps avoid many mistakes that may appear because of
insufficient control over the project. However, it results in pervasive
documentation development. This is beneficial to developers who may be
working with the product in the future, but it takes a long time to write
everything down.

The waterfall model is a sequential design process in which progress is seen
as flowing steadily downwards (like a waterfall) through the phases of
requirements analysis, system design, implementation, testing, deployment,
and maintenance.
Sequential Phases in Waterfall Model

Requirements Analysis: All possible requirements of the system to be
developed are captured in this phase and documented in a requirement
specification document.

System Design: The requirement specifications from the first phase are
studied in this phase and the system design is prepared. The system design
helps in specifying hardware and software requirements and in defining the
overall system architecture.

Implementation: With inputs from the system design, the system is first
developed in small programs called units, which are integrated in the next
phase. Each unit is developed and tested for its functionality, which is
referred to as Unit Testing.

Integration and Testing: All the units developed in the implementation phase
are integrated into a system after the testing of each unit. Post
integration, the entire system is tested for any faults and failures.

Deployment of System: Once the functional and non-functional testing is
done, the product is deployed in the customer environment and released into
the market.

Maintenance: Some issues come up in the client environment. To fix those
issues, patches are released. Also, to enhance the product, better versions
are released. Maintenance is done to deliver these changes in the customer
environment.

Incremental Process Model

The Incremental Model is one of the most widely adopted models of the
software development process, in which the software requirement is broken
down into many standalone modules in the software development life cycle.
Once the modules are split, incremental development is carried out in steps
covering analysis, design, implementation, all the required testing or
verification, and maintenance.

In incremental models, each increment is developed separately, and hence
each stage goes through the requirements, design, coding, and testing
activities of the software development life cycle. Functionality developed
in each stage is added to the previously developed functionality, and this
repeats until the software is fully developed. At each incremental stage
there is a thorough review, based on which the decision on the next stage is
taken.

Iterative Enhancement Model

The iterative model is also called an incremental model, in which a
particular project or piece of software is broken down into a large number
of iterations, where each iteration is a complete development loop resulting
in a release of an executable product: a subset of the final product under
development, which grows from iteration to iteration to become the final
product or software. Prototyping, the Rational Unified Process (RUP), agile
development, and Rapid Application Development are examples of the iterative
model.

The SDLC (Software Development Life Cycle) is notably large and abundant in
testing and development actions, techniques, methodologies, and tools. It
involves intensive planning, administration, computation, and arrangement.
It is only after the sustained effort of the software engineers that an
application is successfully created. The iterative model is also a part of
the SDLC.

It is a specific execution of the software development life cycle that
concentrates on an initial, simplified implementation, which then
progressively gains more complexity and a broader feature set until the
final system is complete. In brief, development in the iterative model is a
way of breaking down the software development of a large application into
smaller pieces.

The iterative model life cycle does not begin with a full set of
requirements. Instead, development starts by specifying and implementing
just one component of the software, which is then analysed to identify
further requirements. The iterative process begins with a simple
implementation of a small set of the software requirements, which
iteratively enhances the evolving versions until the whole system is
implemented and ready to be deployed. Every iterative-model release is
developed in a fixed, established period of time known as an iteration.
Rapid Application Development (RAD) Model

The RAD (Rapid Application Development) model is based on prototyping and
iterative development with no specific planning involved. The process of
writing the software itself involves the planning required for developing
the product.
Rapid Application Development focuses on gathering customer requirements
through workshops or focus groups, early testing of the prototypes by the
customer using an iterative concept, reuse of the existing prototypes
(components), continuous integration, and rapid delivery.

Rapid application development is a software development methodology that
uses minimal planning in favour of rapid prototyping. A prototype is a
working model that is functionally equivalent to a component of the product.
In the RAD model, the functional modules are developed in parallel as
prototypes and are integrated to make the complete product for faster
product delivery. Since there is no detailed preplanning, it is easier to
incorporate changes within the development process.

RAD projects follow the iterative and incremental model and have small teams
comprising developers, domain experts, customer representatives, and other
IT resources working progressively on their component or prototype.
The most important aspect for this model to be successful is to make sure
that the prototypes developed are reusable.
You can break down the process in a few ways, but in general, RAD follows
four main phases.

Phase 1: Requirement Planning


Phase 2: User Design
Phase 3: Construction Phase
Phase 4: Cutover Phase
Phase 1: Requirement Planning
This phase is equivalent to a project scoping meeting. Although the planning
phase is condensed compared to other project management methodologies, it is
the critical step for the ultimate success of the project. During this
stage, the developers, the client, and team members communicate to determine
the goals and expectations for the project, as well as current and potential
issues that would need to be addressed during the build.

A basic breakdown of this stage involves:


• Researching the current problem
• Defining the requirements for the project
• Finalizing the requirements with each stakeholder's approval

It is important that everyone has the opportunity to evaluate the goals and
expectations for the project and weigh in. By getting approval from each key
stakeholder and the developers, teams can avoid miscommunication and costly
change orders down the road.

Phase 2: User Design

During this phase, users interact with system analysts and develop models
and prototypes that represent all system processes, inputs, and outputs. The
RAD group or subgroups typically use a combination of Joint Application
Development (JAD) techniques and CASE tools to translate user needs into
working models.

Phase 3: Construction Phase

This phase focuses on program and application development tasks similar to
the SDLC. In RAD, however, users continue to participate and can still
suggest changes or improvements as actual screens or reports are developed.

Phase 4: Cutover Phase

This phase resembles the final tasks in the SDLC implementation phase,
including data conversion, testing, changeover to the new system, and user
training. Compared with traditional methods, the entire process is
compressed. As a result, the new system is built, delivered, and placed in
operation much sooner.
Evolutionary Software Process Model

The evolutionary model is based on the idea that an initial implementation,
exposed to user comments, can be refined through many versions until an
adequate system is developed. In addition to having separate activities,
this model provides feedback to developers.

The evolutionary model divides the development cycle into smaller
"incremental waterfall" cycles, in which users are able to get access to the
product at the end of each cycle.

The users provide feedback on the product for the planning stage of the next
cycle, and the development team responds, often by changing the product,
plans, or process.
1. SPIRAL MODEL

The Spiral Model is a combination of the waterfall model and the iterative
model. Each phase in the spiral model begins with a design goal and ends
with the client reviewing the progress. The spiral model was first described
by Barry Boehm in his 1986 paper.
The development team in the spiral SDLC model starts with a small set of
requirements and goes through each development phase for that set of
requirements. The team adds functionality for the additional requirements in
ever-increasing spirals until the application is ready for the production
phase.
The radial dimension of the model represents the cumulative cost; each loop
around the spiral is indicative of increased cost. The angular dimension
represents the progress made in completing each cycle. Each loop of the
spiral from the X-axis clockwise through 360° represents one phase.
Spiral Model Phases

1. Planning :-

It includes estimating the cost, schedule, and resources for the iteration.
It also involves understanding the system requirements for continuous
communication between the system analyst and the customer.

2. Risk Analysis :-

Here is where the alternatives are analysed and the associated risks are
identified and evaluated.

3. Development :-

It includes testing, coding, and developing software at the customer site.
This may follow either the prototyping or classic life cycle approach.

4. Evaluation :-

This involves a review of the preceding development effort and, therefore,
planning for the next phase.
Software Prototyping

Software prototyping is the activity of creating prototypes of software
applications, i.e., incomplete versions of the software program being
developed. It is an activity that can occur in software development and is
comparable to prototyping as known from other fields, such as mechanical
engineering or manufacturing.

A prototype typically simulates only a few aspects of, and may be completely
different from, the final product.

Prototyping has several benefits: the software designer and implementer can
get valuable feedback from the users early in the project. The client and
the contractor can check whether the software matches the software
specification according to which the software program is built. It also
gives the software engineer some insight into the accuracy of initial
project estimates and whether the deadlines and milestones proposed can be
successfully met. The degree of completeness and the techniques used in
prototyping have been in development and debate since the concept's proposal
in the early 1970s.
TYPES OF SOFTWARE PROTOTYPE

1. Rapid Throwaway Prototype

Rapid prototyping is also known as “throwaway prototyping” because the


prototype is expected to be relevant only in the short term, such as one sprint
in the Agile development framework. It may go through several cycles of
feedback, modification, and evaluation during that time. When all the
stakeholders are satisfied, it becomes a reference for the designers and
developers to use. After the sprint is completed, the prototype is discarded
and a new one is built for the next sprint.

2. Evolutionary Prototyping
An evolutionary prototype differs from the traditional notion of a software
prototype; an evolutionary prototype is a functional piece of software, not
just a simulation. Evolutionary prototyping starts with a product that meets
only the system requirements that are understood. It won’t do everything the
customer requires, but it makes a good starting point. New features and
functions can be added as those requirements become clear to the
stakeholders. That’s the “evolutionary” nature of this prototype.

3. Incremental Prototyping
Incremental prototyping is useful for enterprise software that has many modules
and components which may be loosely related to one another. In incremental
prototyping, separate small prototypes are built in parallel. The individual
prototypes are evaluated and refined separately, and then merged into a
comprehensive whole, which can then be evaluated for consistency in look,
feel, behavior, and terminology.

4. Extreme Prototyping
This type of prototyping model is mainly used for web applications. It is
divided into three phases:

 First, a basic prototype with static pages is created; it consists of
HTML pages.
 Next, using a services layer, data processing is simulated.
 In the last phase, the services are implemented.
PROTOTYPE MODEL

BASIC REQUIREMENT IDENTIFICATION

This step involves understanding the very basic project requirements,
especially in terms of the user interface. The more intricate details of the
internal design and external aspects like performance and security can be
ignored at this stage.

DEVELOPING THE INITIAL PROTOTYPE :-

The initial prototype is developed in this stage, where the very basic
requirements are showcased and the user interfaces are provided. These
features may not work in exactly the same manner internally in the actual
software developed, and workarounds are used to give the same look and feel
to the customer in the prototype.

REVIEW OF THE PROTOTYPE :-

The prototype developed is then presented to the customer and the other
important stakeholders in the project. The feedback is collected in an
organized manner and used for further enhancements in the product under
development.

REVISE AND ENHANCE THE PROTOTYPE :-

The feedback and review comments are discussed during this stage, and some
negotiations happen with the customer based on factors like time and budget
constraints and the technical feasibility of the actual implementation. The
changes accepted are incorporated in the new prototype, and the cycle
repeats until the customer's expectations are met.
COST ESTIMATIONS

Whether designing a building or developing software, successful projects
require accurate cost estimates. Cost estimates forecast the resources and
associated costs needed to complete a project, which helps ensure you
achieve project objectives within the approved timelines and budgets.
Cost estimating is a well-developed discipline. By understanding the nuances
of cost estimating and using standard estimation techniques, you can improve
your forecasts. This guide to project cost estimating walks through the key
concepts and major estimating techniques. Additionally, how-tos, templates,
and tips for key industries help you get started with your estimates.
A model used for estimation:

COCOMO (Constructive Cost Model)

The Constructive Cost Model was introduced in Dr. Barry Boehm's textbook
Software Engineering Economics. This model is now generally called
"COCOMO 81". It refers to a group of models and is used to estimate the
development effort involved in a project. COCOMO is based upon the
estimation of the lines of code in a system and the development time. COCOMO
also considers aspects like project attributes, hardware, and assessment of
the product. This provides transparency to the model, which allows software
managers to understand why the model gives the estimates it does.

HISTORY OF COCOMO
The Constructive Cost Model was developed by Barry W. Boehm in the late
1970s and published in Boehm's 1981 book Software Engineering Economics as a
model for estimating effort, cost, and schedule for software projects. It
drew on a study of 63 projects at TRW Aerospace, where Boehm was Director of
Software Research and Technology. The study examined projects ranging in
size from 2,000 to 100,000 lines of code, and programming languages ranging
from assembly to PL/I. These projects were based on the waterfall model of
software development, which was the prevalent software development process
in 1981. References to this model typically call it COCOMO 81. In 1995
COCOMO II was developed, and it was finally published in 2000 in the book
Software Cost Estimation with COCOMO II. COCOMO II is the successor of
COCOMO 81 and is claimed to be better suited for estimating modern software
development projects, providing support for more recent software development
processes; it was tuned using a larger database of 161 projects. The need
for the new model came as software development technology moved from
mainframe and overnight batch processing to desktop development, code
reusability, and the use of off-the-shelf software components.

COCOMO consists of a hierarchy of three increasingly detailed and accurate
forms. The first level, Basic COCOMO, is good for quick, early, rough
order-of-magnitude estimates of software costs, but its accuracy is limited
by its lack of factors to account for differences in project attributes
(cost drivers). Intermediate COCOMO takes these cost drivers into account,
and Detailed COCOMO additionally accounts for the influence of individual
project phases.
BASIC MODEL

The Basic COCOMO model estimates the software development effort using only
lines of code. Program size is estimated in thousands of source lines of
code (SLOC, KLOC).

There are three classes of software projects:

1. Organic mode: In this mode, relatively simple, small software
projects with a small team are handled. Such teams should have good
application experience with less rigid requirements.
2. Semi-detached mode: In this class, intermediate projects in which
teams with mixed experience levels are handled. Such projects may
have a mix of rigid and less rigid requirements.
3. Embedded mode: In this class, projects with tight hardware,
software, and operational constraints are handled.
TYPES OF COCOMO

Model 1. Basic COCOMO Model

It is a static model that estimates software development effort quickly and
roughly. It mainly deals with the number of lines of code, and the level of
estimation accuracy is lower because not all parameters of the project are
considered. The estimated effort and scheduled time for the project are
given by the relations:

Effort (E) = a*(KLOC)^b MM
Scheduled Time (D) = c*(E)^d Months (M)

• E = Total effort required for the project in Man-Months (MM).

• D = Total time required for project development in Months (M).

• KLOC = The size of the code for the project in kilo lines of code.

• a, b, c, d = Constant parameters for the software project.
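As a rough illustration, the two relations above can be evaluated directly.
The sketch below assumes the commonly quoted COCOMO 81 organic-mode
constants (a = 2.4, b = 1.05, c = 2.5, d = 0.38) and a hypothetical 32 KLOC
project; it is not data from this report:

```python
# Basic COCOMO 81 estimate, using the standard organic-mode
# constants a=2.4, b=1.05, c=2.5, d=0.38.
def basic_cocomo(kloc, a=2.4, b=1.05, c=2.5, d=0.38):
    effort = a * kloc ** b        # E, in man-months (MM)
    schedule = c * effort ** d    # D, in months (M)
    return effort, schedule

# Example: a hypothetical 32 KLOC organic-mode project.
effort, schedule = basic_cocomo(32)
print(f"Effort: {effort:.1f} MM, Schedule: {schedule:.1f} months")
# -> Effort: 91.3 MM, Schedule: 13.9 months
```

For semi-detached and embedded projects the constants change (for example,
a rises to 3.0 and 3.6 respectively in Boehm's tables), which is why the
project mode must be chosen before estimating.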

Model 2. Intermediate Model


The basic COCOMO model assumes that effort is only a function of the number
of lines of code and some constants calculated according to the various
software systems. The intermediate COCOMO model recognizes that this is not
realistic and refines the initial estimate obtained through the basic COCOMO
model by using a set of 15 cost drivers based on various attributes of
software engineering.

1. Product attributes -
 Required software reliability extent

 Size of the application database

 The complexity of the product

2. Hardware attributes -
 Run-time performance constraints

 Memory constraints

 Volatility of the virtual machine environment

 Required turnaround time

3. Personnel attributes -
 Analyst capability

 Software engineering capability

 Applications experience

 Virtual machine experience

 Programming language experience

4. Project attributes -
 Use of software tools

 Application of software engineering

methods
 Required development schedule
Model 3. Detailed COCOMO Model

Detailed COCOMO incorporates all the qualities of the standard
version, with an assessment of the cost drivers' effect on each step
of the software engineering process. The detailed model uses a
different effort multiplier for each cost-driver attribute. In
detailed COCOMO, the whole software is divided into multiple modules;
COCOMO is then applied to each module to estimate effort, and the
module estimates are summed.

The six phases of detailed COCOMO are:

1. Planning and requirements
2. System structure
3. Complete structure
4. Module code and test
5. Integration and test
6. Cost constructive model

The effort is determined as a function of program size, and a set of
cost drivers is given for every phase of the software lifecycle.
Project Scheduling

Project scheduling is concerned with the techniques that can be
employed to manage the activities that need to be undertaken during
the development of a project.

Scheduling is carried out in advance of the project commencing and involves:

• Identifying the tasks that need to be carried out.
• Estimating how long they will take.
• Allocating resources (mainly personnel).
• Scheduling when the tasks will occur.

Once the project is underway, control needs to be exerted to ensure
that the plan continues to represent the best prediction of what will
occur in the future:

• Control is based on what occurs during the development.
• It often necessitates revision of the plan.

Effective project planning will help to ensure that the system is delivered:

• Within cost.
• Within the time constraints.
• To a specific standard of quality.
SYSTEM DESIGN
PROJECT PLANNING
The project planning phase of project management is where a project
manager builds the project roadmap, including the project plan, project
scope, project schedule, project constraints, work breakdown structure,
and risk analysis.

It doesn't matter whether the project is a new website or a new
building: the project planning phase serves as a roadmap and acts as a
control tool throughout the project. Project planning provides guidance
by answering questions like:

• What product(s) or service(s) will we deliver?
• How much will the project cost?
• How can we meet the needs of our stakeholders?
• How will progress be measured?

Project planning communicates deliverables, timing, and schedules,
along with team roles and responsibilities. During the planning phase of
a project, the project manager is forced to think through potential risks
and hang-ups that could occur during the project.
These early considerations can prevent future issues from affecting the
overall success of the project or, at times, causing it to fail. Too
little planning causes chaos and frustration; too much planning causes a
lot of administrative work and not enough time for creative work.

Ultimately, the planning phase of project management determines how
smoothly your projects move through the life cycle, which is why it is so
important to spend some time at the beginning of a project and get
your planning right.
FLOWCHART
A flowchart is a type of diagram that represents a workflow or process.
A flowchart can also be defined as a diagrammatic representation of an
algorithm, a step-by-step approach to solving a task.
The flowchart shows the steps as boxes of various kinds, and their order
by connecting the boxes with arrows. This diagrammatic representation
illustrates a solution model to a given problem. Flowcharts are used in
analysing, designing, documenting or managing a process or program in
various fields.
Flowcharts are used in designing and documenting simple processes or
programs. Like other types of diagrams, they help visualize what is
going on and thereby help understand a process, and perhaps also find
less-obvious features within the process, like flaws and bottlenecks.
There are different types of flowcharts: each type has its own set of
boxes and notations.
A flowchart is described as "cross-functional" when the chart is divided
into different vertical or horizontal parts, to describe the control of
different organizational units. A symbol appearing in a particular part is
within the control of that organizational unit. A cross-functional
flowchart allows the author to correctly locate the responsibility for
performing an action or making a decision, and to show the
responsibility of each organizational unit for different parts of a single
process.
SYMBOLS OF FLOWCHART
Different flowchart shapes have different conventional meanings. The
meanings of some of the more common shapes are as follows:

Terminator
The terminator symbol represents the starting or ending point of the
system.

Process
A box indicates some particular operation.

Decision
A diamond represents a decision or branching point. Lines coming out
of the diamond indicate different possible situations, leading to
different subprocesses.

Input/Output
It represents information entering or leaving the system. An input might
be an order from a customer. Output can be a product to be delivered.

Flow
Lines represent the flow of the sequence and direction of a process.
DATA FLOW DIAGRAM (DFD)
ZERO LEVEL DFD
E-R DIAGRAM
SCREEN SHOTS
TESTING
Software testing is the act of examining the artifacts and the behaviour
of the software under test by validation and verification. Software testing
can also provide an objective, independent view of the software to allow
the business to appreciate and understand the risks of software
implementation. Test techniques include, but are not necessarily limited to:
• analysing the product requirements for completeness and correctness
in various contexts, such as industry perspective, business perspective,
feasibility and viability of implementation, usability, performance,
security, infrastructure considerations, etc.
• reviewing the product architecture and the overall design of the
product
• working with product developers on improvements in coding
techniques, design patterns, and tests that can be written as part of
code, based on techniques like boundary conditions
• executing a program or application with the intent of examining
behaviour
• reviewing the deployment infrastructure and associated scripts &
automation
• taking part in production activities by using monitoring & observability
techniques
Although software testing can determine the correctness of software
under the assumption of some specific hypotheses, testing cannot
identify all the failures within the software. Instead, it furnishes a
criticism or comparison that compares the state and behaviour of the
product against test oracles: principles or mechanisms by which someone
might recognize a problem. These oracles may include (but are not
limited to) specifications, contracts, comparable products, past
versions of the same product, inferences about intended or expected
purpose, user or customer expectations, relevant standards, applicable
laws, or other criteria.
A primary purpose of testing is to detect software failures so that defects
may be discovered and corrected. Testing cannot establish that a product
functions properly under all conditions, but only that it does not function
properly under specific conditions. The scope of software testing may
include the examination of code as well as the execution of that code in
various environments and conditions as well as examining the aspects of
code: does it do what it is supposed to do and do what it needs to do. In
the current culture of software development, a testing organization may
be separate from the development team. There are various roles for
testing team members. Information derived from software testing may be
used to correct the process by which software is developed.
Every software product has a target audience. For example, the audience
for video game software is completely different from banking software.
Therefore, when an organization develops or otherwise invests in a
software product, it can assess whether the software product will be
acceptable to its end users, its target audience, its purchasers, and other
stakeholders. Software testing assists in making this assessment.
Testing approach
Static, dynamic, and passive testing: There are many approaches
available in software testing. Reviews, walkthroughs, or inspections are
referred to as static testing, whereas executing programmed code with a
given set of test cases is referred to as dynamic testing.
Static testing is often implicit, like proofreading, plus when
programming tools/text editors check source code structure or compilers
(pre-compilers) check syntax and data flow as static program analysis.
Dynamic testing takes place when the program itself is run. Dynamic
testing may begin before the program is 100% complete, in order to test
particular sections of code, and is applied to discrete functions or
modules. Typical techniques for this are either using stubs/drivers or
execution from a debugger environment. Static testing involves
verification, whereas dynamic testing also involves validation.
Passive testing means verifying the system behaviour without any
interaction with the software product. Contrary to active testing, testers
do not provide any test data but look at system logs and traces. They
mine for patterns and specific behaviour in order to make some kind of
decisions. This is related to offline runtime verification and log analysis.
Exploratory approach: Exploratory testing is an approach to software
testing that is concisely described as simultaneous learning, test design,
and test execution. Cem Kaner, who coined the term in 1984, defines
exploratory testing as "a style of software testing that emphasizes the
personal freedom and responsibility of the individual tester to
continually optimize the quality of his/her work by treating test-related
learning, test design, test execution, and test result interpretation as
mutually supportive activities that run in parallel throughout the project."
The "box" approach: Software testing methods are traditionally
divided into white- and black-box testing. These two approaches are
used to describe the point of view that the tester takes when designing
test cases. A hybrid approach called grey-box testing may also be
applied to software testing methodology. As grey-box testing, which
develops tests from specific design elements, has gained prominence,
this "arbitrary distinction" between black- and white-box testing has
faded somewhat.
White-box testing: White-box testing (also known as clear box
testing, glass box testing, transparent box testing, and structural testing)
verifies the internal structures or workings of a program, as opposed to
the functionality exposed to the end-user. In white-box testing, an
internal perspective of the system (the source code), as well as
programming skills, are used to design test cases. The tester chooses
inputs to exercise paths through the code and determine the appropriate
outputs. This is analogous to testing nodes in a circuit, e.g., in-circuit
testing (ICT).
While white-box testing can be applied at the unit, integration,
and system levels of the software testing process, it is usually done at the
unit level. It can test paths within a unit, paths between units during
integration, and between subsystems during a system–level test. Though
this method of test design can uncover many errors or problems, it might
not detect unimplemented parts of the specification or missing
requirements. Techniques used in white-box testing include:

• API testing – testing of the application using public and private APIs
(Application programming interfaces)
• Code coverage – creating tests to satisfy some criteria of code
coverage (e.g., the test designer can create tests to cause all statements
in the program to be executed at least once)
• Fault injection methods – intentionally introducing faults to gauge the
efficacy of testing strategies
• Mutation testing methods
• Static testing methods
Code coverage tools can evaluate the completeness of a test suite that
was created with any method, including black-box testing. This allows
the software team to examine parts of a system that are rarely tested and
ensures that the most important function points have been tested. Code
coverage, as a software metric, can be reported as a percentage.
100% statement coverage ensures that all code paths or branches (in
terms of control flow) are executed at least once. This is helpful in
ensuring correct functionality, but not sufficient, since the same code
may process different inputs correctly or incorrectly. Pseudo-tested
functions and methods are those that are covered but not specified (it
is possible to remove their body without breaking any test case).
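The "pseudo-tested" case above is easy to demonstrate. In the sketch below, the first test achieves 100% statement coverage of the (invented) function yet asserts nothing, so deleting the function body would not break it; the second test turns the coverage into an actual check:

```python
def apply_discount(price, percent):
    """Function under test (hypothetical example)."""
    return price - price * percent / 100

def test_pseudo_tested():
    # Executes every statement, but never inspects the result:
    # the function is covered yet effectively untested.
    apply_discount(200, 10)

def test_properly():
    # The assertion makes the coverage meaningful.
    assert apply_discount(200, 10) == 180

test_pseudo_tested()
test_properly()
```
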
Black-box testing: Black-box testing (also known as functional testing)
treats the software as a "black box," examining functionality without any
knowledge of the internal implementation and without seeing the source
code. The testers are only aware of what the software is supposed to do,
not how it does it. Black-box testing methods include: equivalence
partitioning, boundary value analysis, all-pairs testing, state
transition tables, decision table testing, fuzz testing, model-based
testing, use case testing, exploratory testing, and specification-based
testing.
Specification-based testing aims to test the functionality of software
according to the applicable requirements. This level of testing usually
requires thorough test cases to be provided to the tester, who then can
simply verify that, for a given input, the output value (or behaviour)
either "is" or "is not" the same as the expected value specified in the test
case. Test cases are built around specifications and requirements, i.e.,
what the application is supposed to do. It uses external descriptions of
the software, including specifications, requirements, and designs to
derive test cases. These tests can be functional or non-functional, though
usually functional.
Specification-based testing may be necessary to assure correct
functionality, but it is insufficient to guard against complex or high-risk
situations. One advantage of the black box technique is that no
programming knowledge is required. Whatever biases the programmers
may have had, the tester likely has a different set and may emphasize
different areas of functionality. On the other hand, black-box testing has
been said to be "like a walk in a dark labyrinth without a flashlight."
Because they do not examine the source code, there are situations when
a tester writes many test cases to check something that could have been
tested by only one test case or leaves some parts of the program untested.
This method of test can be applied to all levels of software testing: unit,
integration, system and acceptance. It typically comprises most if
not all testing at higher levels, but can also dominate unit testing as well.
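Equivalence partitioning and boundary value analysis, listed above, can be illustrated against a hypothetical validation rule (order quantities of 1–99 are accepted). One representative per equivalence class, plus the values on and just beside each boundary, gives a small but systematic black-box test set:

```python
def is_valid_quantity(qty):
    """Hypothetical rule under test: order quantities 1-99 are accepted."""
    return 1 <= qty <= 99

# Black-box cases derived purely from the specification, not the code.
cases = [
    (0, False),    # just below the lower boundary (invalid class)
    (1, True),     # lower boundary
    (50, True),    # representative of the valid class
    (99, True),    # upper boundary
    (100, False),  # just above the upper boundary (invalid class)
]
for qty, expected in cases:
    assert is_valid_quantity(qty) == expected, f"failed for qty={qty}"
```
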
Component interface testing: Component interface testing is a
variation of black-box testing, with the focus on the data values beyond
just the related actions of a subsystem component. The practice of
component interface testing can be used to check the handling of data
passed between various units, or subsystem components, beyond full
integration testing between those units. The data being passed can be
considered as "message packets" and the range or data types can be
checked, for data generated from one unit, and tested for validity before
being passed into another unit. One option for interface testing is to
keep a separate log file of data items being passed, often with a
timestamp logged to allow analysis of thousands of cases of data
passed between units for days or weeks. Tests can include checking the
handling of some extreme data values while other interface variables
are passed as normal values. Unusual data values in an interface can
help explain unexpected performance in the next unit.
Visual testing: The aim of visual testing is to provide developers with
the ability to examine what was happening at the point of software
failure by presenting the data in such a way that the developer can easily
find the information she or he requires, and the information is expressed
clearly.
At the core of visual testing is the idea that showing someone a problem
(or a test failure), rather than just describing it, greatly increases clarity
and understanding. Visual testing, therefore, requires the recording of
the entire test process – capturing everything that occurs on the test
system in video format. Output videos are supplemented by real-time
tester input via a picture-in-picture webcam and audio commentary from
microphones. Visual testing provides a number of advantages. The
quality of communication is increased drastically because testers can
show the problem (and the events leading up to it) to the developer, as
opposed to just describing it, and the need to replicate test failures
will cease to exist in
many cases. The developer will have all the evidence she or he requires
of a test failure and can instead focus on the cause of the fault and how it
should be fixed.
Ad hoc testing and exploratory testing are important methodologies for
checking software integrity, because they require less preparation time to
implement, while important bugs can be found quickly. In ad hoc
testing, where testing takes place in an improvised, impromptu way, the
ability of the tester(s) to base testing off documented methods and then
improvise variations of those tests can result in a more rigorous
examination of defect fixes. However, unless strict documentation of the
procedures is maintained, one of the limits of ad hoc testing is a lack of
repeatability.
Grey-box testing: Grey-box testing (American spelling: gray-box
testing) involves having knowledge of internal data structures and
algorithms for the purpose of designing tests, while executing those
tests at the user, or black-box, level. The tester will often have access
to both "the source code and the executable binary." Grey-box testing
may also include reverse engineering (using dynamic code analysis) to
determine, for instance,
boundary values or error messages. Manipulating input data and
formatting output do not qualify as grey-box, as the input and output are
clearly outside of the "black box" that we are calling the system under
test. This distinction is particularly important when conducting
integration testing between two modules of code written by two different
developers, where only the interfaces are exposed for the test.
By knowing the underlying concepts of how the software works, the
tester makes better-informed testing choices while testing the software
from outside. Typically, a grey-box tester will be permitted to set up an
isolated testing environment with activities such as seeding a database.
The tester can observe the state of the product being tested after
performing certain actions such as executing SQL statements against the
database and then executing queries to ensure that the expected changes
have been reflected. Grey-box testing implements intelligent test
scenarios, based on limited information. This will particularly apply to
data type handling, exception handling, and so on.
Testing levels
Broadly speaking, there are at least three levels of testing: unit testing,
integration testing, and system testing. However, a fourth level,
acceptance testing, may be included by developers. This may be in the
form of operational acceptance testing or be simple end-user (beta)
testing, testing to ensure the software meets functional expectations.
Based on the ISTQB Certified Tester Foundation Level syllabus, test
levels include those four levels, and the fourth level is named
acceptance testing. Tests are frequently grouped into one of these levels
by where they are added in the software development process, or by the
level of specificity of the test.
Unit Testing: Unit testing refers to tests that verify the functionality of a
specific section of code, usually at the function level. In an object-
oriented environment, this is usually at the class level, and the minimal
unit tests include the constructors and destructors.
These types of tests are usually written by developers as they work on
code (white-box style), to ensure that the specific function is working as
expected. One function might have multiple tests, to catch corner cases
or other branches in the code. Unit testing alone cannot verify the
functionality of a piece of software, but rather is used to ensure that the
building blocks of the software work independently from each other.
Unit testing is a software development process that involves a
synchronized application of a broad spectrum of defect prevention and
detection strategies in order to reduce software development risks, time,
and costs. It is performed by the software developer or engineer during
the construction phase of the software development life cycle. Unit
testing aims to eliminate construction errors before code is promoted to
additional testing; this strategy is intended to increase the quality of the
resulting software as well as the efficiency of the overall development
process.
Depending on the organization's expectations for software development,
unit testing might include static code analysis, data-flow analysis,
metrics analysis, peer code reviews, code coverage analysis and other
software testing practices.
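A minimal developer-written unit test of the kind described, using Python's standard unittest module; the cart_total function is invented for illustration:

```python
import unittest

def cart_total(prices, tax_rate=0.0):
    """Function under test: sum of item prices plus tax."""
    return round(sum(prices) * (1 + tax_rate), 2)

class CartTotalTest(unittest.TestCase):
    def test_empty_cart(self):
        # Corner case: an empty cart must total zero.
        self.assertEqual(cart_total([]), 0.0)

    def test_with_tax(self):
        # Normal branch: subtotal 150.0 with 10% tax.
        self.assertEqual(cart_total([100.0, 50.0], tax_rate=0.1), 165.0)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(CartTotalTest)
)
```
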
Integration Testing: Integration testing is any type of software testing
that seeks to verify the interfaces between components against a software
design. Software components may be integrated in an iterative way or all
together ("big bang"). Normally the former is considered a better
practice, since it allows interface issues to be located more quickly and
fixed.
Integration testing works to expose defects in the interfaces and
interaction between integrated components (modules). Progressively
larger groups of tested software components corresponding to elements
of the architectural design are integrated and tested until the software
works as a system.
Integration tests usually involve a lot of code, and produce traces that are
larger than those produced by unit tests. This has an impact on the ease
of localizing the fault when an integration test fails. To overcome this
issue, it has been proposed to automatically cut the large tests in smaller
pieces to improve fault localization.
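A small sketch of an interface-level integration test: two hypothetical modules of a shop-style application, a price formatter and an order summary that consumes it, are exercised together so that a defect at their interface (for instance, a formatting change) is exposed:

```python
# "Module A": price formatting component (hypothetical).
def format_price(amount):
    return f"Rs. {amount:.2f}"

# "Module B": consumes Module A across their interface (hypothetical).
def order_summary(items):
    total = sum(price for _, price in items)
    lines = [f"{name}: {format_price(price)}" for name, price in items]
    lines.append(f"Total: {format_price(total)}")
    return "\n".join(lines)

# Integration check: data passed across the module boundary.
summary = order_summary([("Chair", 1499.00), ("Table", 3500.50)])
assert "Chair: Rs. 1499.00" in summary
assert "Total: Rs. 4999.50" in summary
```
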
System Testing: System testing tests a completely integrated system to
verify that the system meets its requirements. For example, a system test
might involve testing a login interface, then creating and editing an
entry, plus sending or printing results, followed by summary processing
or deletion (or archiving) of entries, then logoff.
Operational acceptance testing: Operational acceptance is used to conduct
operational readiness (pre-release) of a product, service or system as part of a
quality management system. OAT is a common type of non-functional software
testing, used mainly in software development and software maintenance
projects. This type of testing focuses on the operational readiness of the system
to be supported, or to become part of the production environment. Hence, it is
also known as operational readiness testing (ORT) or operations readiness and
assurance (OR&A) testing. Functional testing within OAT is limited to those
tests that are required to verify the non-functional aspects of the system.
In addition, the software testing should ensure that the portability of the
system, as well as working as expected, does not also damage or
partially corrupt its operating environment or cause other processes
within that environment to become inoperative.
Installation testing: Most software systems have installation
procedures that are needed before they can be used for their main
purpose. Testing these procedures to achieve an installed software
system that may be used is known as installation testing.
Compatibility testing: A common cause of software failure (real or
perceived) is a lack of compatibility with other application software,
operating systems (or operating system versions, old or new), or target
environments that differ greatly from the original (such as a terminal or
GUI application intended to be run on the desktop now being required to
become a Web application, which must render in a Web browser). For
example, in the case of a lack of backward compatibility, this can occur
because the programmers develop and test software only on the latest
version of the target environment, which not all users may be running.
This results in the unintended consequence that the latest work may not
function on earlier versions of the target environment, or on older
hardware that earlier versions of the target environment were capable of
using. Sometimes such issues can be fixed by proactively abstracting
operating system functionality into a separate program module or library.
Smoke and sanity testing: Sanity testing determines whether it is
reasonable to proceed with further testing.
Smoke testing consists of minimal attempts to operate the software,
designed to determine whether there are any basic problems that will
prevent it from working at all. Such tests can be used as a build
verification test.
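A build verification (smoke) test can be as small as checking that every critical page of the application responds at all, with no deeper behaviour checks; the route table below is a stand-in for a real application:

```python
# Stand-in routes for the application under test (hypothetical).
ROUTES = {
    "/": lambda: ("200 OK", "home page"),
    "/products": lambda: ("200 OK", "catalogue"),
    "/cart": lambda: ("200 OK", "shopping cart"),
}

def smoke_test(routes):
    """Minimal attempt to operate the software: every critical
    entry point must respond; nothing else is verified."""
    for path, handler in routes.items():
        status, _ = handler()
        if not status.startswith("200"):
            return False, path   # basic problem: stop further testing
    return True, None

ok, failing_path = smoke_test(ROUTES)
assert ok and failing_path is None
```
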
Regression Testing: Regression testing focuses on finding defects after
a major code change has occurred. Specifically, it seeks to uncover
software regressions, as degraded or lost features, including old bugs that
have come back. Such regressions occur whenever software
functionality that was previously working correctly, stops working as
intended. Typically, regressions occur as an unintended consequence of
program changes, when the newly developed part of the software
collides with the previously existing code. Regression testing is typically
the largest test effort in commercial software development, due to
checking numerous details in prior software features, and even new
software can be developed while using some old test cases to test parts
of the new design to ensure prior functionality is still supported.
Common methods of regression testing include re-running previous sets
of test cases and checking whether previously fixed faults have re-
emerged. The depth of testing depends on the phase in the release
process and the risk of the added features. They can either be complete,
for changes added late in the release or deemed to be risky, or be very
shallow, consisting of positive tests on each feature, if the changes are
early in the release or deemed to be of low risk. In regression testing, it
is important to have strong assertions on the existing behavior. For this,
it is possible to generate and add new assertions in existing test cases,
this is known as automatic test amplification.
Acceptance Testing: Acceptance testing can mean one of two things:
1. A smoke test is used as a build acceptance test prior to
further testing, e.g., before integration or regression.
2. Acceptance testing performed by the customer, often in their
lab environment on their own hardware, is known as user
acceptance testing (UAT). Acceptance testing may be
performed as part of the hand-off process between any two
phases of development.
Alpha Testing: Alpha testing is simulated or actual operational testing
by potential users/customers or an independent test team at the
developers' site. Alpha testing is often employed for off-the-shelf
software as a form of internal acceptance testing before the software
goes to beta testing.
Beta Testing: Beta testing comes after alpha testing and can be
considered a form of external user acceptance testing. Versions of the
software, known as beta versions, are released to a limited audience
outside of the programming team known as beta testers. The software is
released to groups of people so that further testing can ensure the product
has few faults or bugs. Beta versions can be made available to the open
public to increase the feedback field to a maximal number of future users
and to deliver value earlier, for an extended or even indefinite period of
time (perpetual beta).
Functional vs non-functional testing: Functional testing refers to
activities that verify a specific action or function of the code. These are
usually found in the code requirements documentation, although some
development methodologies work from use cases or user stories.
Functional tests tend to answer the question of "can the user do this" or
"does this particular feature work."
Non-functional testing refers to aspects of the software that may not be
related to a specific function or user action, such as scalability or other
performance, behaviour under certain constraints, or security. Testing
will determine the breaking point, the point at which extremes of
scalability or performance leads to unstable execution. Non-functional
requirements tend to be those that reflect the quality of the product,
particularly in the context of the suitability perspective of its users.
Continuous Testing: Continuous testing is the process of executing
automated tests as part of the software delivery pipeline to obtain
immediate feedback on the business risks associated with a software
release candidate. Continuous testing includes the validation of both
functional requirements and non-functional requirements; the scope of
testing extends from validating bottom-up requirements or user stories
to assessing the system requirements associated with overarching
business goals.
Destructive Testing: Destructive testing attempts to cause the software
or a subsystem to fail. It verifies that the software functions properly
even when it receives invalid or unexpected inputs, thereby establishing
the robustness of input validation and error-management routines.
Software fault injection, in the form of fuzzing, is an example of failure
testing. Various commercial non-functional testing tools are linked from
the software fault injection page; there are also numerous open-source
and free software tools available that perform destructive testing.
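A toy example in the spirit of the fuzzing described above: random, malformed inputs are fed to a hypothetical input-validation routine, which must reject them gracefully (return None) rather than crash:

```python
import random
import string

def parse_quantity(raw):
    """Routine under test (hypothetical): must never raise on bad input."""
    try:
        value = int(raw)
    except (TypeError, ValueError):
        return None                      # graceful rejection
    return value if 1 <= value <= 99 else None

random.seed(0)                           # reproducible fuzz run
fuzz_inputs = [None, "", "abc", "-5", "1e9", " 7 ", "42"] + [
    "".join(random.choices(string.printable, k=random.randint(1, 20)))
    for _ in range(200)
]
for raw in fuzz_inputs:
    result = parse_quantity(raw)         # destructive test: must not raise
    assert result is None or 1 <= result <= 99
```
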
Software performance testing: Performance testing is generally
executed to determine how a system or sub-system performs in terms of
responsiveness and stability under a particular workload. It can also
serve to investigate, measure, validate or verify other quality attributes of
the system, such as scalability, reliability and resource usage.
Load testing is primarily concerned with testing that the system can
continue to operate under a specific load, whether that be large
quantities of data or a large number of users. This is generally referred
to as software scalability. The related load testing activity, when
performed as a non-functional activity, is often referred to as endurance
testing. Volume testing is a way to test software functions even when
certain components (for example a file or database) increase radically in
size. Stress testing is a way to test reliability under unexpected or rare
workloads. Stability testing (often referred to as load or endurance
testing) checks to see if the software can continuously function well in
or above an acceptable period.
There is little agreement on what the specific goals of performance
testing are. The terms load testing, performance testing, scalability
testing, and volume testing, are often used interchangeably.
Real-time software systems have strict timing constraints. To test if
timing constraints are met, real-time testing is used.
Usability Testing: The aim of usability testing is to check whether the
user interface is easy to use and understand. It is concerned mainly with
the use of the application. This is not a kind of testing that can be
automated; actual human users are needed, monitored by skilled UI
designers.

Accessibility Testing: Accessibility testing may include compliance with
standards such as:

• Americans with Disabilities Act of 1990
• Section 508 Amendment to the Rehabilitation Act of 1973
• Web Accessibility Initiative (WAI) of the World Wide Web
Consortium (W3C)

Security Testing: Security testing is essential for software that processes
confidential data, to prevent system intrusion by hackers.
The International Organization for Standardization (ISO) defines this as
a "type of testing conducted to evaluate the degree to which a test item,
and associated data and information, are protected so that unauthorised
persons or systems cannot use, read or modify them, and authorized
persons or systems are not denied access to them."
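One concrete security test is checking a login path against SQL injection. The sketch below uses Python's built-in `sqlite3` with an in-memory database; the table, the `login_unsafe`/`login_safe` functions, and the credentials are all invented for illustration. The unsafe version builds SQL by string concatenation, so a classic injection string bypasses the password check; the safe version uses parameterized queries, where user input is never interpreted as SQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('admin', 'secret')")

def login_unsafe(name, password):
    # String concatenation: vulnerable to SQL injection.
    query = ("SELECT COUNT(*) FROM users WHERE name = '%s' "
             "AND password = '%s'" % (name, password))
    return conn.execute(query).fetchone()[0] > 0

def login_safe(name, password):
    # Parameterized query: the driver escapes the values for us.
    query = "SELECT COUNT(*) FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchone()[0] > 0

payload = "' OR '1'='1"   # classic injection string used as a test input
```

A security test suite would assert that `login_safe` rejects `payload` while still accepting the genuine password.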

Internationalization and Localization: Testing for internationalization
and localization validates that the software can be used with different
languages and geographic regions. The process of pseudolocalization is
used to test the ability of an application to be translated into another
language, and to make it easier to identify when the localization process
may introduce new bugs into the product. Globalization testing verifies
that the software is adapted for a new culture (such as different
currencies or time zones).
Actual translation to human languages must be tested, too. Possible
localization and globalization failures include:
• Software is often localized by translating a list of strings out of
context, and the translator may choose the wrong translation for
an ambiguous source string.
• Technical terminology may become inconsistent, if the project
is translated by several people without proper coordination or if
the translator is imprudent.
• Literal word-for-word translations may sound inappropriate,
artificial or too technical in the target language.
• Untranslated messages in the original language may be left hard
coded in the source code.
• Some messages may be created automatically at run time and
the resulting string may be ungrammatical, functionally
incorrect, misleading or confusing.
• Software may use a keyboard shortcut that has no function on
the source language's keyboard layout, but is used for typing
characters in the layout of the target language.
• Software may lack support for the character encoding of the
target language.
• Fonts and font sizes that are appropriate in the source language
may be inappropriate in the target language; for example, CJK
characters may become unreadable, if the font is too small.
• A string in the target language may be longer than the software
can handle. This may make the string partly invisible to the user
or cause the software to crash or malfunction.
• Software may lack proper support for reading or writing
bidirectional text.
• Software may display images with text that was not localized.
• Localized operating systems may have differently named
system configuration files and environment variables and
different formats for date and currency.
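Several of the failures listed above (untranslated hard-coded strings, strings longer than the UI can handle) are exactly what pseudolocalization is designed to surface. The helper below is a minimal sketch of the technique, with the expansion factor and bracket markers chosen arbitrarily: it swaps vowels for accented look-alikes and pads the string to mimic the growth typical of translation, so untranslated text stands out as plain ASCII and overflow bugs appear early.

```python
def pseudo_localize(message, expansion=0.4):
    """Return a pseudo-localized version of `message`: accented characters
    reveal hard-coded (untranslated) strings, and padding simulates the
    length growth of real translations; brackets expose truncation."""
    accents = str.maketrans("aeiouAEIOU", "àéîöûÀÉÎÖÛ")
    padded = message.translate(accents)
    extra = int(len(message) * expansion)   # simulate ~40% text growth
    return "[" + padded + "·" * extra + "]"
```

Running the whole UI through such a filter before real translation begins catches encoding, layout, and hard-coding problems cheaply.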

Development Testing: Development testing is a software development
process that involves the synchronized application of a broad spectrum
of defect prevention and detection strategies in order to reduce software
development risks, time, and costs. It is performed by the software
developer or engineer during the construction phase of the software
development lifecycle. Development Testing aims to eliminate
construction errors before code is promoted to other testing; this strategy
is intended to increase the quality of the resulting software as well as the
efficiency of the overall development process.
Depending on the organization's expectations for software
development, Development Testing might include static code analysis,
data flow analysis, metrics analysis, peer code reviews, unit testing,
code coverage analysis, traceability, and other software testing
practices.
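Of the practices listed, unit testing is the most common during construction. The sketch below uses Python's standard `unittest` module; the `apply_discount` function is a hypothetical example of the kind of small unit a developer would test before promoting code.

```python
import unittest

def apply_discount(price, percent):
    """Return the price after a percentage discount, rounded to 2 places."""
    if not 0 <= percent <= 100:
        raise ValueError("discount must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(2500.0, 10), 2250.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(999.99, 0), 999.99)

    def test_invalid_discount_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```

Tests like these run on every build, so construction errors are caught before the code reaches system or acceptance testing.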

A/B Testing: A/B testing is a method of running a controlled experiment
to determine if a proposed change is more effective than the current
approach. Customers are routed to either a current version (control) of a
feature, or to a modified version (treatment) and data is collected to
determine which version is better at achieving the desired outcome.
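The mechanics can be sketched in a few lines. This is a toy simulation, not a real experiment framework: the routing rule and the conversion rates are invented, and a production system would add a statistical significance test before declaring a winner.

```python
import random

def assign_group(user_id, seed=7):
    """Deterministically route a user to the control or treatment group,
    so the same user always sees the same version."""
    return "treatment" if (user_id + seed) % 2 else "control"

def run_ab_test(control_rate, treatment_rate, n_per_group=10000, seed=1):
    """Simulate conversions in each group and report the observed rates."""
    rng = random.Random(seed)
    observed = {}
    for group, rate in (("control", control_rate),
                        ("treatment", treatment_rate)):
        conversions = sum(1 for _ in range(n_per_group) if rng.random() < rate)
        observed[group] = conversions / n_per_group
    return observed
```

With enough users per group, the observed rates estimate which version better achieves the desired outcome.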

Concurrent Testing: Concurrent or concurrency testing assesses the
behaviour and performance of software and systems that use concurrent
computing, generally under normal usage conditions. Typical problems
this type of testing will expose are deadlocks, race conditions and
problems with shared memory/resource handling.
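A race condition of the kind this testing looks for can be reproduced deliberately. In the sketch below (a demonstration, not a testing tool), several threads increment a shared counter; the deliberately split read-then-write is not atomic, so without a lock some updates can be lost, while the locked version always reaches the expected total.

```python
import threading

def increment_counter(use_lock, threads=8, iterations=25000):
    """Run several threads that each bump a shared counter. The unlocked
    read-modify-write is not atomic, so concurrent updates can be lost."""
    counter = {"value": 0}
    lock = threading.Lock()

    def worker():
        for _ in range(iterations):
            if use_lock:
                with lock:
                    counter["value"] += 1
            else:
                current = counter["value"]       # read
                counter["value"] = current + 1   # write: a thread switch
                                                 # here loses an update

    pool = [threading.Thread(target=worker) for _ in range(threads)]
    for t in pool:
        t.start()
    for t in pool:
        t.join()
    return counter["value"]
```

A concurrency test asserts the locked variant always totals `threads * iterations`; the unlocked variant may fall short, which is precisely the lost-update defect.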

Conformance Testing or Type Testing: In software testing,
conformance testing verifies that a product performs according to its
specified standards. Compilers, for instance, are extensively tested to
determine whether they meet the recognized standard for that language.
IMPLEMENTATION

The software implementation stage involves the transformation of the
software technical data package (TDP) into one or more fabricated,
integrated, and tested software configuration items that are ready for
software acceptance testing. The primary activities of software
implementation include the:

• Fabrication of software units to satisfy structural unit
specifications.

• Assembly, integration, and testing of software components
into a software configuration item.
• Prototyping challenging software components to resolve
implementation risks or establish a fabrication proof of
concept.

• Dry-run acceptance testing procedures to ensure that the
procedures are properly delineated and that the software
product (software configuration items (CIs) and computing
environment) is ready for acceptance testing.

Computer science: In computer science, an implementation is a realization
of a technical specification or algorithm as a program, software
component, or other computer system, through computer programming and
deployment. Many implementations may exist for a given specification or
standard. For example, web browsers contain implementations of World
Wide Web Consortium-recommended specifications, and software development
tools contain implementations of programming languages.
A special case occurs in object-oriented programming, when a concrete
class implements an interface; in this case the concrete class is an
implementation of the interface, and it includes methods which are
implementations of the methods specified by the interface.
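The interface case can be illustrated briefly. Python has no `interface` keyword, so the sketch below uses an abstract base class to play that role; `PaymentMethod` and `CardPayment` are hypothetical names invented for the example.

```python
from abc import ABC, abstractmethod

class PaymentMethod(ABC):
    """The 'interface': declares a method but provides no implementation."""
    @abstractmethod
    def charge(self, amount: float) -> str: ...

class CardPayment(PaymentMethod):
    """A concrete class: an implementation of the PaymentMethod interface."""
    def charge(self, amount: float) -> str:
        return f"charged {amount:.2f} to card"
```

The interface itself cannot be instantiated; only concrete classes that implement every declared method can.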
Information technology: In the information technology industry,
implementation refers to the post-sales process of guiding a client from
purchase to use of the software or hardware that was purchased. This
includes requirements analysis, scope analysis, customization, system
integration, user policies, user training, and delivery. These steps are
often overseen by a project manager using a project management
methodology. Software implementation involves several professionals
relevant to the knowledge-based economy, such as business analysts,
technical analysts, solution architects, and project managers.
To implement a system successfully, many interrelated tasks need to be
carried out in an appropriate sequence. Utilising a well-proven
implementation methodology and enlisting professional advice can help,
but often it is the number of tasks, poor planning, and inadequate
resourcing that cause problems with an implementation project, rather
than any of the tasks being particularly difficult. Similarly, with
cultural issues it is often the lack of adequate consultation and
two-way communication that inhibits achievement of the desired results.

Political science: In political science, implementation refers to the
carrying out of public policy. Legislatures pass laws that are then
carried out by public servants working in bureaucratic agencies. This
process consists of rule-making, rule-administration and
rule-adjudication. Factors impacting implementation include the
legislative intent, the administrative capacity of the implementing
bureaucracy, interest group activity and opposition, and presidential
and executive support.
In international relations, implementation refers to a stage of
international treaty-making. It represents the stage when international
provisions are enacted domestically through legislation and regulation.
The implementation stage is different from the ratification of an
international treaty.

Social and health science: Implementation is defined as a specified set
of activities designed to put into practice an activity or program of
known dimensions. According to this definition, implementation
processes are purposeful and are described in sufficient detail such
that independent observers can detect the presence and strength of the
"specific set of activities" related to implementation. In addition, the
activity or program being implemented is described in sufficient detail
so that independent observers can detect its presence and strength.
Water and natural resources: In water and natural resources,
implementation refers to the actualization of best management practices
with the ultimate goals of conserving natural resources and improving
the quality of water bodies.

Types of implementation:
• Direct changeover
• Parallel running (known as parallel)
• Phased implementation
• Pilot introduction (known as pilot)

Role of end users: System implementation generally benefits from high
levels of user involvement and management support. User participation
in the design and operation of the information system has several results.
First, if users are heavily involved in system design, they have more
opportunities to mould the system according to their priorities and
business requirements, and more opportunities to control the outcome.
Second, they are more likely to react positively to the change process.
Incorporating user knowledge and expertise leads to better solution.
The relationship between users and information system specialists has
traditionally been a problem area for information system
implementation efforts. Users and information system specialists tend to
have different backgrounds, interests, and priorities. This is referred
to as the user-designer communications gap. These differences lead to
divergent organizational loyalties, approaches to problem solving, and
vocabularies. Examples of these concerns are given below:
User concerns
• Will the system deliver the information I need for my work?
• How quickly can I access the data?
• How easily can I retrieve the data?
• How much clerical support will I need to enter data into the
system?
• How will the operation of the system fit into my daily business
schedule?
Designer concerns
• How much disk storage will the master file consume?
• How many lines of program code will it take to perform this
function?
• How can we cut down on CPU time when we run the system?

MAINTENANCE

Software maintenance in software engineering is the modification of a
software product after delivery to correct faults, to improve performance
or other attributes.
A common perception of maintenance is that it merely involves fixing
defects. However, one study indicated that over 80% of maintenance
effort is used for non-corrective actions. This perception is perpetuated
by users submitting problem reports that in reality are functionality
enhancements to the system. More recent studies put the bug-fixing
proportion closer to 21%.

Need for Maintenance

Software maintenance is needed to:

• Correct errors
• Accommodate changes in user requirements over time
• Adapt to changing hardware/software requirements
• Improve system efficiency
• Optimize the code to run faster
• Modify components
• Reduce any unwanted side effects

Types of Maintenance

1. Corrective Maintenance
Corrective maintenance aims to correct any remaining errors, regardless
of where they may occur: in specifications, design, coding, testing,
documentation, etc.
2. Adaptive Maintenance

It involves modifying the software to match changes in the ever-changing
environment.

3. Preventive Maintenance

It is the process by which we prevent our system from becoming obsolete.
It involves the concepts of re-engineering and reverse engineering, in
which an
old system with old technology is re-engineered using new technology.
This maintenance prevents the system from dying out.
4. Perfective Maintenance

It involves improving processing efficiency or performance, or
restructuring the software to enhance changeability. This may include
enhancement of existing system functionality, improvement in
computational efficiency, etc.

Cost of maintenance
The cost of software maintenance can be high. However, this doesn’t
negate the importance of software maintenance. In certain cases, software
maintenance can cost up to two-thirds of the entire software process cycle
or more than 50% of the SDLC processes.

The costs involved in software maintenance are due to multiple factors and
vary depending on the specific situation. The older the software, the more
maintenance will cost, as technologies (and coding languages) change
over time. Revamping an old piece of software to meet today’s technology
can be an exceptionally expensive process in certain situations.

In addition, engineers may not always be able to target the exact issues
when looking to upgrade or maintain a specific piece of software. This
causes them to use a trial and error method, which can result in many
hours of work.

There are certain ways to try to bring down software maintenance costs.
These include optimizing the type of programming used in the software,
strong typing, and functional programming.
When creating new software as well as taking on maintenance projects for
older models, software companies must take software maintenance costs
into consideration. Without maintenance, any software will be obsolete
and essentially useless over time.

Maintenance activities

IEEE provides a framework for sequential maintenance process activities.
It can be used in an iterative manner and can be extended so that
customized items and processes can be included.
These activities go hand-in-hand with each of the following phases:
• Identification & Tracing - It involves activities pertaining to
identification of the requirement for modification or maintenance. It
is generated by the user, or the system may itself report it via logs
or error messages. Here, the maintenance type is also classified.
• Analysis - The modification is analyzed for its impact on the
system, including safety and security implications. If the probable
impact is severe, an alternative solution is looked for. A set of
required modifications is then materialized into requirement
specifications. The cost of modification/maintenance is analyzed
and an estimate is concluded.
• Design - New modules, which need to be replaced or modified, are
designed against requirement specifications set in the previous
stage. Test cases are created for validation and verification.
• Implementation - The new modules are coded with the help of
structured design created in the design step. Every programmer is
expected to do unit testing in parallel.
• System Testing - Integration testing is done among newly created
modules. Integration testing is also carried out between new
modules and the system. Finally, the system is tested as a whole,
following regression testing procedures.
• Acceptance Testing - After testing the system internally, it is
tested for acceptance with the help of users. If at this stage the
user reports issues, they are addressed, or noted to be addressed
in the next iteration.
• Delivery - After acceptance testing, the system is deployed all over
the organization, either by a small update package or a fresh
installation of the system. The final testing takes place at the
client end after the software is delivered.
Training is provided if required, in addition to the hard copy of
the user manual.
• Maintenance Management - Configuration management is an
essential part of system maintenance. It is aided by version
control tools to manage versions, semi-versions, or patches.

Software re-engineering
When we need to update the software to keep it current with the market,
without impacting its functionality, it is called software
re-engineering. It is a thorough process in which the design of the
software is changed and programs are re-written.
Legacy software cannot keep pace with the latest technology available
in the market. As hardware becomes obsolete, updating the software
becomes a headache. Even if software grows old with time, its
functionality does not.
For example, initially Unix was developed in assembly language. When
language C came into existence, Unix was re-engineered in C, because
working in assembly language was difficult.
Other than this, programmers sometimes notice that a few parts of the
software need more maintenance than others, and that these also need
re-engineering.
Re-Engineering Process
• Decide what to re-engineer. Is it the whole software or a part of it?
• Perform Reverse Engineering, in order to obtain specifications of
existing software.
• Restructure Program if required. For example, changing
function-oriented programs into object-oriented programs.
• Re-structure data as required.

• Apply Forward Engineering concepts in order to obtain the
re-engineered software.
There are a few important terms used in software re-engineering:

Reverse Engineering
It is a process to achieve system specification by thoroughly analysing
and understanding the existing system. This process can be seen as a
reverse SDLC model, i.e., we try to reach a higher abstraction level by
analysing lower abstraction levels.
An existing system is a previously implemented design, about which we
may know nothing. Designers then do reverse engineering by looking at
the code and trying to recover the design. With the design in hand, they
try to conclude the specifications: thus going in reverse, from code to
system specification.
Program Restructuring
It is a process to re-structure and re-construct the existing software.
It is all about re-arranging the source code, either in the same
programming language or from one programming language to a different one.
Restructuring can involve source-code restructuring, data restructuring,
or both.
Restructuring does not impact the functionality of the software but
enhances reliability and maintainability. Program components which cause
errors very frequently can be changed, or updated, with restructuring.
The dependency of software on an obsolete hardware platform can be
removed via restructuring.

Forward Engineering
Forward engineering is a process of obtaining the desired software from
the specifications in hand, which were brought down by means of reverse
engineering. It assumes that some software engineering was already done
in the past.
Forward engineering is the same as the software engineering process,
with only one difference – it is always carried out after reverse
engineering.

Component reusability
A component is a part of software program code, which executes an
independent task in the system. It can be a small module or sub-system
itself.
Reuse Process
Two kinds of methods can be adopted: either keep the requirements the
same and adjust the components, or keep the components the same and
modify the requirements.

• Requirement Specification - The functional and non-functional
requirements with which a software product must comply are
specified, with the help of the existing system, user input, or both.
• Design - This is also a standard SDLC process step, where
requirements are defined in software terms. The basic architecture
of the system as a whole and of its sub-systems is created.
• Specify Components - By studying the software design, the
designers segregate the entire system into smaller components or
sub-systems. One complete software design turns into a collection
of a huge set of components working together.
• Search Suitable Components - The software component
repository is referred by designers to search for the matching
component, on the basis of functionality and intended software
requirements.
• Incorporate Components - All matched components are packed
together to shape them as complete software.

REFERENCES

• https://www.google.com
• https://www.tutorialspoint.com
• https://www.geeksforgeeks.org
• https://en.wikipedia.org
• https://www.reference.com
• https://www.freetutes.com
