Calibration of Software Quality: Fuzzy Neural and Rough Neural Computing Approaches
Summary
In this paper, the number of changes required to bring a given software product to a
specified level of quality is estimated. To do this, the quality of the current software product is
first assessed using the McCall software evaluation framework. The framework is
hierarchical, with three levels: factors (the highest level, based on user views of software quality),
criteria (the middle level, based on characteristics of the software), and metrics (the lowest level, based on
quantification of software quality). Because the quality of a software product is characterized by
the interrelationships among these three levels, neural networks are used to discover them. Two
types of neural networks are used: a fuzzy neural network and a rough neural network. Since the
output of the fuzzy neural network is found to be unstable, a rough neural network is used instead, and it is
found that the rough neural network utilizes more information than its fuzzy counterpart.
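One common way to realise a rough neuron (the paper's exact architecture may differ; this is an illustrative sketch) is as a pair of conventional neurons sharing the same inputs, so the unit emits a lower and an upper bound rather than a single activation. The interval between the bounds is the extra information a single fuzzy membership value cannot carry:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def rough_neuron(inputs, lower_w, upper_w, bias=0.0):
    """Illustrative rough neuron: two conventional neurons over the
    same inputs. The smaller activation is the pessimistic bound,
    the larger the optimistic bound, so the unit outputs an interval
    instead of a single value."""
    lo = sigmoid(sum(w * x for w, x in zip(lower_w, inputs)) + bias)
    hi = sigmoid(sum(w * x for w, x in zip(upper_w, inputs)) + bias)
    return min(lo, hi), max(lo, hi)
```

The width of the returned interval can then be read as a measure of how uncertain the network is about that quality assessment.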
Summary
Here, the main techniques for software defect prediction are compared: machine learning
techniques, e.g. artificial neural networks, instance-based learning (IBL), Bayesian belief
networks (BBN), decision trees, and rule induction (e.g. 1R), and statistical models, e.g. stepwise
multi-linear regression and multivariate models. The specific cases where each type of
technique is more effective than the others are listed. Some hybrid methods that combine both approaches
are also listed, e.g. using principal component analysis to enhance the performance of neural networks.
The analyses are done over the data sets in NASA's Metrics Data Program (MDP). The error of
each model on each data set is compared, along with the consistency of each model's
predictions. It is noted that while no single prediction technique works equally
well for all cases, IBL and 1R are more consistent than the other
models. It is also noted that size and complexity metrics alone are not sufficient for accurate
prediction, and that probabilistic models such as Bayesian belief networks are required.
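The 1R technique singled out above for its consistency is simple enough to sketch in full: for each attribute, map each of its values to the majority class observed with that value, then keep the attribute whose one-attribute rule set misclassifies the fewest rows. The attribute names and data below are hypothetical, not from the MDP data sets:

```python
from collections import Counter, defaultdict

def one_r(rows, target):
    """Minimal 1R (OneR) rule induction. Each row is a dict of
    attribute -> value; `target` names the class attribute."""
    best_attr, best_rules, best_errors = None, None, None
    attrs = [a for a in rows[0] if a != target]
    for attr in attrs:
        # Count class labels per attribute value.
        by_value = defaultdict(Counter)
        for row in rows:
            by_value[row[attr]][row[target]] += 1
        # Rule: each value predicts its majority class.
        rules = {v: c.most_common(1)[0][0] for v, c in by_value.items()}
        errors = sum(r[target] != rules[r[attr]] for r in rows)
        if best_errors is None or errors < best_errors:
            best_attr, best_rules, best_errors = attr, rules, errors
    return best_attr, best_rules

# Hypothetical module data: size and complexity vs. defect label.
rows = [
    {"size": "big",   "complex": "high", "defect": "yes"},
    {"size": "big",   "complex": "low",  "defect": "yes"},
    {"size": "small", "complex": "high", "defect": "yes"},
    {"size": "small", "complex": "low",  "defect": "no"},
]
attr, rules = one_r(rows, "defect")
```

Its appeal for benchmarking is exactly this minimalism: a single-attribute rule set is a strong, hard-to-beat baseline against which more elaborate models can be judged.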
Fault Prediction using Early Lifecycle Data
Summary
Here, it is argued that metrics available early in the development lifecycle can be used to
identify fault-prone software modules. In particular, statistical models are developed in which the
probability of false alarm is minimized and the probability of correctly classifying a module is
maximized. Techniques such as random forests and ROC curves are used to assess classification
performance. Requirement metrics, module-based code metrics, and the fusion of requirement
and module metrics serve as predictors. The predicted variable is whether one or more defects
exist in a given module. Three types of models are run: first with requirement metrics only,
then with module metrics only, and finally with combinations of the two. Various machine
learning algorithms are used, e.g. logistic regression, 1R, and Naïve Bayes. The data sets are
from NASA's Metrics Data Program (MDP). It is demonstrated that combining the requirement metrics
and code metrics gives better classification performance than either type of metric individually.
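The two quantities being traded off above are commonly written PD (probability of detection) and PF (probability of false alarm); a classifier's (PF, PD) pair is one operating point on its ROC curve. A sketch of computing that point from binary predictions, assuming 1 marks a faulty module:

```python
def roc_point(actual, predicted):
    """Compute the (PF, PD) operating point of a binary
    fault-proneness classifier:
      PD = TP / (TP + FN), recall on faulty modules;
      PF = FP / (FP + TN), false-alarm rate on clean modules."""
    tp = sum(1 for a, p in zip(actual, predicted) if a and p)
    fn = sum(1 for a, p in zip(actual, predicted) if a and not p)
    fp = sum(1 for a, p in zip(actual, predicted) if not a and p)
    tn = sum(1 for a, p in zip(actual, predicted) if not a and not p)
    pd = tp / (tp + fn) if tp + fn else 0.0
    pf = fp / (fp + tn) if fp + tn else 0.0
    return pf, pd
```

A good model sits toward the upper-left of ROC space: high PD at low PF, which is exactly the objective the fused requirement-plus-code models improve on.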
Summary
Here, the uncertainty in software engineering is equated with the developer's or researcher's inability
to measure or quantify certain properties of software and software processes. If, however, those
properties could be quantified within some universe of understanding U, the problem of
managing the uncertainties becomes tractable. In particular, a decision-making system based on
rough set principles is demonstrated. As an example, a real-time operating system (RTOS)
is characterized in a decision table by attributes such as context switch time and interrupt latency that
affect I/O timeliness, and the acceptability of a particular RTOS is decided based upon decision
rules extracted from the table using rough set reduct-finding algorithms. Rules with high
accuracy and high coverage are developed. It is thereby shown that rough sets can help tackle
the problem of uncertainty in specific cases.
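The core rough-set machinery behind such decision rules is the pair of lower and upper approximations of the decision class. The attribute names and toy table below are hypothetical stand-ins for the paper's RTOS decision table:

```python
from collections import defaultdict

def approximations(objects, condition_attrs, target_set):
    """Rough-set lower and upper approximations of a target set of
    object indices. Objects are dicts; two objects are indiscernible
    when they agree on every condition attribute."""
    classes = defaultdict(set)
    for i, obj in enumerate(objects):
        key = tuple(obj[a] for a in condition_attrs)
        classes[key].add(i)
    lower, upper = set(), set()
    for cls in classes.values():
        if cls <= target_set:      # class entirely inside the target
            lower |= cls
        if cls & target_set:       # class overlaps the target
            upper |= cls
    return lower, upper

# Hypothetical RTOS table: rows 0 and 3 are deemed acceptable.
rtos = [
    {"switch": "fast", "latency": "low"},    # 0: acceptable
    {"switch": "fast", "latency": "low"},    # 1: not acceptable
    {"switch": "slow", "latency": "high"},   # 2: not acceptable
    {"switch": "fast", "latency": "high"},   # 3: acceptable
]
lower, upper = approximations(rtos, ["switch", "latency"], {0, 3})
```

Rules induced from the lower approximation are certain (high accuracy); rules from the upper approximation are merely possible, and the gap between the two regions is precisely the uncertainty the table cannot resolve.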
Modelling Uncertainty in Software Reliability Estimation using
Fuzzy Entropy
Summary
Here, the software reliability model developed is a white-box model. This is necessary because,
as systems grow more sophisticated and complex, a black-box model becomes increasingly
erroneous: it considers only the interactions with the external environment and ignores the internal
ones. The type of uncertainty emphasized is the uncertainty due to changes in the operational profile.
From historical data, and using UML, a Discrete Time Markov Chain (DTMC) model is developed to
describe the application's behaviour. The transition probability matrix is obtained and fuzzified
using a suitable membership function: the input is the transition probability, and the output is fuzzified
into three levels, low, medium, and high. It is then defuzzified, i.e. converted into a Shannon entropy
value, which serves as a measure of uncertainty. From the Shannon entropy values,
the uncertainty is determined for an e-commerce survey data set in which the probability of an
occasional or regular buyer browsing, registering, or buying items is estimated. The corresponding
total error is also obtained.
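The entropy step can be made concrete: each row of the DTMC transition matrix is a probability distribution over next states, and its Shannon entropy quantifies how unpredictable the next transition is. The matrix below is a hypothetical browse/register/buy chain, not the paper's survey data:

```python
import math

def row_entropy(row):
    """Shannon entropy (in bits) of one row of a DTMC transition
    probability matrix. Higher entropy means the next state is less
    predictable, i.e. more operational-profile uncertainty."""
    return -sum(p * math.log2(p) for p in row if p > 0)

# Hypothetical e-commerce DTMC over states (browse, register, buy).
P = [
    [0.5, 0.3, 0.2],  # from browse: browse again, register, or buy
    [0.0, 0.5, 0.5],  # from register: register again or buy
    [0.0, 0.0, 1.0],  # buy is absorbing: zero uncertainty
]
uncertainty = [row_entropy(r) for r in P]
```

States whose rows have high entropy are where changes in the operational profile hurt reliability estimates the most, so they are the natural targets for extra testing effort.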