AIML Ritesh
Multiclass classification: e.g. labels 2, 0, 1. The labels may be:
1. Categorical
2. Numerical
Confusion Matrix
Note that for every multiclass confusion matrix, the net FP and net FN counts have the same value: each misclassified sample is a false positive for its predicted class and, at the same time, a false negative for its actual class. Thus the micro precision and micro recall can be calculated from these pooled counts. In this example all of the scores come out equal, but this does not hold in general.
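In standard notation, pooling the per-class counts $TP_i$, $FP_i$, $FN_i$ over all classes $i$:

```latex
\text{micro-}P = \frac{\sum_i TP_i}{\sum_i TP_i + \sum_i FP_i},
\qquad
\text{micro-}R = \frac{\sum_i TP_i}{\sum_i TP_i + \sum_i FN_i}
```

Since $\sum_i FP_i = \sum_i FN_i$, the two denominators are equal, so micro precision and micro recall (and hence micro F1) always coincide for single-label multiclass data.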
Macro Averaging
The macro-averaged scores are calculated for each class individually, and then the unweighted (simple) mean of these per-class measures is taken as the net global score. For the example we have been using, the scores are obtained as follows:
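As a sketch of the two averaging schemes (using a small made-up label set, not the notes' worked example), scikit-learn computes both directly:

```python
from sklearn.metrics import precision_score, recall_score

# Hypothetical 3-class labels, for illustration only
y_true = [0, 0, 1, 1, 2, 2, 2, 2]
y_pred = [0, 1, 1, 1, 2, 2, 0, 2]

# Macro: per-class precision first, then the unweighted mean
macro_p = precision_score(y_true, y_pred, average='macro')

# Micro: pool TP/FP/FN counts across all classes, then compute once
micro_p = precision_score(y_true, y_pred, average='micro')
micro_r = recall_score(y_true, y_pred, average='micro')

# For single-label multiclass data, micro precision == micro recall
assert micro_p == micro_r
```

Here the macro and micro scores differ (macro ≈ 0.72, micro = 0.75), illustrating that the averaging choice matters under class imbalance.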
Here a row represents the actual class and a column represents the predicted class. The "normalized" term means that each row is rescaled to sum to 1.00: each row of a normalized confusion matrix represents 100% of the elements of a particular topic, cluster, or class.
A confusion matrix is used to evaluate the quality of the output of a classifier. The diagonal elements represent the number of points for which the predicted label is equal to the true label, while off-diagonal elements are those that are mislabeled by the classifier. The higher the diagonal values of the confusion matrix, the better, indicating many correct predictions.
The figures show the confusion matrix with and without normalization by class support size (the number of elements in each class). This kind of normalization is useful in the case of class imbalance, giving a more visual interpretation of which classes are being misclassified.
from sklearn.metrics import confusion_matrix

# normalize='true' divides each row by its sum (normalization over true labels)
confusion_matrix(y_true, y_pred, normalize='true')
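Putting this together, a minimal runnable sketch (the labels below are made up for illustration):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical 3-class labels, for illustration only
y_true = [0, 0, 1, 1, 2, 2, 2, 2]
y_pred = [0, 1, 1, 1, 2, 2, 0, 2]

raw = confusion_matrix(y_true, y_pred)                      # raw counts
norm = confusion_matrix(y_true, y_pred, normalize='true')   # row-normalized

# Each row of the normalized matrix sums to 1.0 (100% of that true class)
assert np.allclose(norm.sum(axis=1), 1.0)

# Diagonal entries are the correctly classified points per class
print(np.diag(raw))   # [1 2 3] for these labels
```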