Calibration of software quality: Fuzzy neural and rough neural computing approaches

W. Pedrycz, L. Han, J.F. Peters, S. Ramanna, R. Zhai

Summary

In this paper, the number of changes required to bring a given software product to a specified level of quality is estimated. To do so, the quality of the current software product is first assessed using the McCall software evaluation framework. The framework is hierarchical, with three levels: factors (the highest level, based on user views of software quality), criteria (the mid-level, based on characteristics of the software), and metrics (the lowest level, based on quantification of software quality). Since the quality of a software product is characterized by the interrelationships between these three levels, neural networks are used to discover them. Two types of neural networks are used: a fuzzy neural network and a rough neural network. The output of the fuzzy neural network turns out to be unstable, so a rough neural network is used instead, and it is found that the rough neural network utilizes more information than its fuzzy counterpart.
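
A rough sense of how such a hierarchy can be wired together is given by the following minimal sketch, assuming Pedrycz-style fuzzy OR-neurons with min as the t-norm and max as the s-norm. The metric values, weights, and criterion and factor names are invented for illustration and are not taken from the paper.

def or_neuron(inputs, weights):
    """OR-neuron: s-norm (max) over t-norm (min) of input/weight pairs."""
    return max(min(x, w) for x, w in zip(inputs, weights))

# Hypothetical metric values (lowest McCall level), normalized to [0, 1].
metrics = [0.8, 0.4, 0.9, 0.6]

# Hypothetical weights linking metrics to two mid-level criteria.
criteria_weights = [
    [0.9, 0.2, 0.7, 0.1],   # e.g. "consistency"
    [0.3, 0.8, 0.2, 0.9],   # e.g. "simplicity"
]
criteria = [or_neuron(metrics, w) for w in criteria_weights]

# Hypothetical weights linking criteria to a top-level quality factor.
factor_weights = [0.8, 0.6]
quality_factor = or_neuron(criteria, factor_weights)

print(f"criteria activations: {criteria}")
print(f"quality factor (e.g. maintainability): {quality_factor:.2f}")

Stacking the two layers mirrors the metrics-to-criteria and criteria-to-factors links of the McCall hierarchy; in the paper the connection weights are learned rather than fixed by hand.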

Empirical Assessment of Machine Learning based Software Defect Prediction Techniques

Venkata U.B. Challagulla, Farokh B. Bastani, I-Ling Yen, Raymond A. Paul

Summary

Here, different techniques for software defect prediction are compared: machine learning techniques (e.g. Artificial Neural Networks, Instance-based Learning, Bayesian Belief Networks, Decision Trees, and Rule Induction) and statistical models (e.g. stepwise multi-linear regression and multivariate models). The specific cases where each type of technique is more effective than the others are listed, along with some hybrid methods that combine both approaches, e.g. principal component analysis used to enhance the performance of neural networks. The analyses are carried out on the datasets of NASA's Metrics Data Program (MDP). For each dataset, the error predicted by each model is compared, along with the consistency of each model's predictions. It is noted that while no single prediction technique works equally well for all cases, instance-based learning (IBL) and the 1R rule learner are more consistent than the other models. It is also noted that size and complexity metrics alone are not sufficient for accurate prediction, and that probabilistic models such as Bayesian Belief Networks (BBN) are required.
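
The flavour of such a comparison can be reproduced with off-the-shelf learners. The sketch below is illustrative only: it uses scikit-learn on synthetic data rather than the MDP datasets, k-nearest neighbours stands in for instance-based learning, and a depth-1 decision tree stands in for the 1R rule learner.

from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a module-level defect metrics dataset.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

models = {
    "IBL (k-NN)": KNeighborsClassifier(n_neighbors=5),
    "1R (stump)": DecisionTreeClassifier(max_depth=1, random_state=0),
    "Naive Bayes": GaussianNB(),
    "Logistic": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10)
    # Mean accuracy plus spread; the spread is a rough consistency check.
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
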
Fault Prediction using Early Lifecycle Data

Yue Jiang, Bojan Cukic, Tim Menzies

Summary

Here, it is argued that metrics available early in the development lifecycle can be used to identify fault-prone software modules. In particular, statistical models are developed that minimize the probability of false alarm while maximizing the probability of correctly classifying a module. Techniques such as random forests and ROC curves are used to assess classification performance. Requirement metrics, module-based code metrics, and the fusion of requirement and module metrics serve as predictors; the predicted variable is whether one or more defects exist in a given module. Three types of models are run: first with only requirement metrics, then with only module metrics, and finally with combinations of the two. Various machine learning algorithms are used, e.g. logistic regression, 1R, and Naïve Bayes. The datasets come from NASA's Metrics Data Program. It is demonstrated that combining requirement metrics and code metrics gives better classification performance than either set of metrics alone.
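
A toy version of the three-model experiment can be set up as follows. The requirement and code metrics here are synthetic stand-ins for the NASA MDP features, and the feature names in the comments are invented.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 400
req_metrics = rng.normal(size=(n, 3))    # e.g. weak phrases, imperatives
code_metrics = rng.normal(size=(n, 6))   # e.g. LOC, cyclomatic complexity
# Synthetic defect labels influenced by both metric groups.
logits = req_metrics[:, 0] + code_metrics[:, 0] + rng.normal(scale=0.5, size=n)
y = (logits > 0).astype(int)

for name, X in [("requirements only", req_metrics),
                ("code only", code_metrics),
                ("fused", np.hstack([req_metrics, code_metrics]))]:
    proba = cross_val_predict(RandomForestClassifier(random_state=0),
                              X, y, cv=5, method="predict_proba")[:, 1]
    print(f"{name}: AUC = {roc_auc_score(y, proba):.3f}")
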

Modeling uncertainty in software engineering using rough sets


Phillip A. Laplante · Colin J. Neill

Summary

Here, uncertainty in software engineering is equated with the developer's or researcher's inability to measure or quantify certain properties of software and software processes. If the software's properties can be quantified within some universe of understanding U, however, then the problem of managing the uncertainties becomes approachable. In particular, a decision-making system based on rough set principles is demonstrated. As an example, a real-time operating system (RTOS) is characterized by attributes such as context switch time and interrupt latency that affect I/O timeliness, and the acceptability of a particular RTOS is decided based on decision rules extracted from the resulting table using rough set reduct-finding algorithms. Rules with high accuracy and high coverage are developed, showing that rough sets can help tackle the problem of uncertainty in specific cases.
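
The rough set machinery involved is compact enough to show directly. The sketch below computes lower and upper approximations of the "acceptable" RTOS set over a toy decision table; the attribute values, the deliberately conflicting row, and the accept/reject decisions are all invented for illustration.

from collections import defaultdict

# (context_switch, interrupt_latency) are condition attributes; the last
# column is the decision.
table = [
    ("low",  "low",  "accept"),
    ("low",  "low",  "accept"),
    ("low",  "high", "reject"),
    ("high", "low",  "accept"),
    ("high", "low",  "reject"),   # conflicting row: the source of roughness
    ("high", "high", "reject"),
]

# Partition objects into indiscernibility classes on condition attributes.
classes = defaultdict(set)
for i, row in enumerate(table):
    classes[row[:2]].add(i)

target = {i for i, row in enumerate(table) if row[2] == "accept"}
lower = set().union(*(c for c in classes.values() if c <= target))
upper = set().union(*(c for c in classes.values() if c & target))

print("lower approximation (certainly acceptable):", sorted(lower))
print("upper approximation (possibly acceptable):", sorted(upper))

Objects in the upper but not the lower approximation form the boundary region, which is exactly where the paper's notion of uncertainty lives.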
Modelling Uncertainty in Software Reliability Estimation using Fuzzy Entropy

Sheetal Khokhar, Harish Mittal

Summary

Here, the software reliability model developed is of the white-box type. This is necessary because, as systems grow in sophistication and complexity, the black-box model becomes increasingly erroneous: it considers only the interactions with the external environment and ignores the internal ones. The type of uncertainty emphasized is that due to changes in the operational profile. From historical data, and using UML, a Discrete Time Markov Chain (DTMC) model is developed to describe how the application is used. The transition probability matrix is obtained and fuzzified using a suitable membership function: the input is a transition probability, and the output is fuzzified into three levels (low, medium, and high), then defuzzified, i.e. converted into Shannon entropy. The value of the Shannon entropy thus serves as a measure of uncertainty. Using these entropy values, the uncertainty is determined for an e-commerce survey dataset in which the probability that an occasional or regular buyer will browse, register, or buy items is estimated; the corresponding total error is also obtained.
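
The entropy step is easy to reproduce in isolation. The sketch below computes the Shannon entropy of each row of a small DTMC transition matrix; the states and probabilities are invented stand-ins for the paper's e-commerce operational profile, and the fuzzification and defuzzification stages are omitted.

import numpy as np

states = ["browse", "register", "buy"]
# Hypothetical transition probability matrix (each row sums to 1).
P = np.array([
    [0.6, 0.3, 0.1],   # from "browse"
    [0.2, 0.3, 0.5],   # from "register"
    [0.1, 0.1, 0.8],   # from "buy"
])

def shannon_entropy(p):
    p = p[p > 0]                      # ignore zero-probability transitions
    return -np.sum(p * np.log2(p))    # entropy in bits

for state, row in zip(states, P):
    print(f"uncertainty leaving '{state}': {shannon_entropy(row):.3f} bits")

A near-uniform row (high entropy) signals an unpredictable usage step, which is where operational-profile changes hurt reliability estimates the most.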

Revisiting the Evaluation of Defect Prediction Models


Thilo Mende, Rainer Koschke

Summary

Here, the cost-effectiveness of defect prediction models is evaluated. A performance measure is developed that takes cost-effectiveness into account and assesses defect prediction models by comparing them to the optimal performance of an imaginary best classifier; the measure is expressed in terms of the lines of code (LOC) of a given module. The usability of this performance measure is investigated on publicly available defect datasets from the NASA Metrics Data Program (MDP). First, a trivial classifier based only on size (measured in lines of code) is evaluated using classical performance measures. Then five different classification algorithms, e.g. logistic regression and random forests, are compared against the trivial model on several datasets, using both a traditional performance measure and the newly proposed one. When evaluated with the traditional measure the trivial model performs well, while the newly proposed metric identifies it as an insufficient model.
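
One plausible reading of such an effort-aware measure is sketched below: modules are inspected in decreasing order of predicted risk, defects found are tracked against cumulative LOC, and the resulting curve is compared with that of an optimal ordering by actual defect density. The data and the area computation are invented for illustration; they follow the spirit of the paper's measure rather than its exact definition.

import numpy as np

loc     = np.array([100, 500, 50, 300, 200])   # module sizes (LOC)
defects = np.array([2,   1,   3,  0,   1])     # actual defect counts
score   = np.array([0.9, 0.8, 0.4, 0.3, 0.2])  # predicted risk

def auc_loc_curve(order):
    """Area under the cumulative-defects vs cumulative-LOC curve."""
    x = np.cumsum(loc[order]) / loc.sum()
    y = np.cumsum(defects[order]) / defects.sum()
    return np.trapz(np.concatenate([[0], y]), np.concatenate([[0], x]))

predicted = np.argsort(-score)                 # inspect riskiest-looking first
optimal   = np.argsort(-(defects / loc))       # best possible ordering
print(f"predicted ordering AUC: {auc_loc_curve(predicted):.3f}")
print(f"optimal ordering AUC:   {auc_loc_curve(optimal):.3f}")
# A ratio near 1 means the model approaches the imaginary best classifier.

Under this kind of measure, a size-only classifier scores poorly because it spends the inspection budget on large modules regardless of their defect density.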
Towards a Software Change Classification System: A Rough Set Approach

F. Peters and Sheela Ramanna

Summary

This paper examines the use of rough set theory to classify software changes. A rough set based system is developed to categorize software changes using change attributes such as the type of change, the module changed, the author, etc. A case study is presented in which changes from a telecom system are classified; the key change types identified are enhancements, corrections, and modifications. Relationships between the change attributes are analyzed, and reducts are found that reduce the number of attributes while preserving classification ability. The rough set system is able to successfully classify over 90% of the changes, demonstrating that rough sets can effectively handle vagueness in software change data and identify useful change patterns for management and process improvement.

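
A brute-force version of the reduct search described above can be written in a few lines. The change table below is a toy stand-in with invented attributes and decisions; a reduct is a minimal attribute subset that classifies the table as consistently as the full attribute set.

from itertools import combinations

attrs = ["type", "module", "author"]
# (type, module, author) -> decision (change category); values are invented.
rows = [
    (("fix",  "ui",   "a"), "correction"),
    (("fix",  "core", "a"), "correction"),
    (("new",  "ui",   "b"), "enhancement"),
    (("new",  "core", "b"), "enhancement"),
    (("edit", "ui",   "a"), "modification"),
]

def consistent(subset):
    """True if rows equal on `subset` always share the same decision."""
    seen = {}
    for cond, dec in rows:
        key = tuple(cond[attrs.index(a)] for a in subset)
        if seen.setdefault(key, dec) != dec:
            return False
    return True

reducts = []
for r in range(1, len(attrs) + 1):
    for subset in combinations(attrs, r):
        # Keep only minimal consistent subsets (no reduct contains another).
        if consistent(subset) and not any(set(s) <= set(subset) for s in reducts):
            reducts.append(subset)

print("reducts:", reducts)  # here ('type',) alone already decides the category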