1. Accuracy
Accuracy measures how often the model’s predictions are correct overall, i.e. the proportion of correctly classified instances (both positive and negative) out of all instances.
Accuracy = (TP + TN) / (TP + TN + FP + FN)
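As a quick sketch, the formula can be evaluated directly from the four confusion-matrix counts; the counts below are hypothetical and only illustrate the arithmetic.
# hypothetical counts: 50 TP, 30 TN, 10 FP, 10 FN
TP, TN, FP, FN = 50, 30, 10, 10
accuracy = (TP + TN) / (TP + TN + FP + FN)
print(accuracy)  # (50 + 30) / 100 = 0.8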
2. Precision
Precision focuses on the quality of the model’s positive predictions. It tells
us how many of the instances predicted as positive are actually positive.
Precision is important in situations where false positives need to be
minimized, such as detecting spam emails or fraud.
Precision = TP / (TP + FP)
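As an illustration, scikit-learn’s precision_score computes this directly; the labels below are toy data made up for the sketch (1 = positive, 0 = negative).
from sklearn.metrics import precision_score

# toy labels, purely illustrative: they contain 3 TP, 2 FP, 1 FN, 1 TN
y_true = [1, 0, 1, 1, 0, 1, 0]
y_pred = [1, 1, 1, 0, 1, 1, 0]
print(precision_score(y_true, y_pred))  # 3 / (3 + 2) = 0.6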
3. Recall
Recall measures how well the model identifies all actual positive cases. It
shows the proportion of true positives detected out of all the actual
positive instances. High recall is essential when missing positive cases
has significant consequences, such as in medical diagnoses.
Recall = TP / (TP + FN)
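A matching sketch with scikit-learn’s recall_score, reusing the same made-up labels as above:
from sklearn.metrics import recall_score

y_true = [1, 0, 1, 1, 0, 1, 0]  # toy labels, 1 = positive
y_pred = [1, 1, 1, 0, 1, 1, 0]
print(recall_score(y_true, y_pred))  # 3 TP / (3 TP + 1 FN) = 0.75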
4. F1-Score
F1-score combines precision and recall into a single metric to balance
their trade-off. It provides a better sense of a model’s overall performance,
particularly for imbalanced datasets. The F1 score is helpful when both
false positives and false negatives are important, though it assumes
precision and recall are equally significant, which might not always align
with the use case.
F1-Score = (2 ⋅ Precision ⋅ Recall) / (Precision + Recall)
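Continuing with the same toy labels, scikit-learn’s f1_score combines the two values; a minimal sketch:
from sklearn.metrics import f1_score

y_true = [1, 0, 1, 1, 0, 1, 0]  # toy labels, 1 = positive
y_pred = [1, 1, 1, 0, 1, 1, 0]
print(f1_score(y_true, y_pred))  # 2 * 0.6 * 0.75 / (0.6 + 0.75) ≈ 0.667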
5. Specificity
Specificity is another important metric in the evaluation of classification
models, particularly in binary classification. It measures the ability of a
model to correctly identify negative instances. Specificity is also known as
the True Negative Rate. Its formula is:
Specificity = TN / (TN + FP)
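scikit-learn does not ship a dedicated specificity function, so one common approach is to read TN and FP off the confusion matrix; a sketch using the same toy labels (the unpacking order tn, fp, fn, tp follows scikit-learn’s 2x2 row-major layout):
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 1, 0]  # toy labels, 1 = positive
y_pred = [1, 1, 1, 0, 1, 1, 0]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(tn / (tn + fp))  # 1 / (1 + 2) ≈ 0.33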
6. Type 1 and Type 2 Errors
Type 1 error
A Type 1 Error occurs when the model incorrectly predicts a positive instance, but the actual instance is negative. This is also known as a false positive. Type 1 Errors affect the precision of a model, which measures the accuracy of positive predictions.
Type 1 Error = FP / (TN + FP)
Type 2 error
A Type 2 Error occurs when the model fails to predict a positive instance, even though it is actually positive. This is also known as a false negative. Type 2 Errors impact the recall of a model, which measures how well the model identifies all actual positive cases.
Type 2 Error = FN / (TP + FN)
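Both error rates can be computed from the same confusion-matrix counts; a sketch with the toy labels used earlier (hypothetical data, not the dog example below):
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 1, 0]  # toy labels, 1 = positive
y_pred = [1, 1, 1, 0, 1, 1, 0]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(fp / (fp + tn))  # Type 1 error rate (false positive rate): 2 / 3 ≈ 0.67
print(fn / (fn + tp))  # Type 2 error rate (false negative rate): 1 / 4 = 0.25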
Example:
True Positive (TP): The number of cases where both the predicted and the actual value are Dog.
True Negative (TN): The number of cases where both the predicted and the actual value are Not Dog.
False Positive (FP): The number of cases where the prediction is Dog while the actual value is Not Dog.
False Negative (FN): The number of cases where the prediction is Not Dog while the actual value is Dog.
Example: Confusion Matrix for Dog Image Recognition with Numbers
Index 1 2 3 4 5 6 7 8 9 10
Result TP FN TP TN TP FP TP TP TN TN
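The plotting lines that follow render the confusion matrix as a seaborn heatmap and assume a 2x2 matrix named cm already exists. A minimal sketch of that step, reconstructing the actual and predicted labels from the table above (5 TP, 1 FN, 1 FP, 3 TN); the names actual, predicted, and cm are assumptions for this sketch:
from sklearn.metrics import confusion_matrix

# labels reconstructed from the table above (TP FN TP TN TP FP TP TP TN TN)
actual    = ['Dog', 'Dog', 'Dog', 'Not Dog', 'Dog', 'Not Dog', 'Dog', 'Dog', 'Not Dog', 'Not Dog']
predicted = ['Dog', 'Not Dog', 'Dog', 'Not Dog', 'Dog', 'Dog', 'Dog', 'Dog', 'Not Dog', 'Not Dog']
cm = confusion_matrix(actual, predicted, labels=['Dog', 'Not Dog'])
print(cm)  # [[5 1]
           #  [1 3]]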
import matplotlib.pyplot as plt
import seaborn as sns
sns.heatmap(cm, annot=True, fmt='d', xticklabels=['Dog', 'Not Dog'], yticklabels=['Dog', 'Not Dog'])  # render cm (built above); tick labels are assumed
plt.gca().figure.subplots_adjust(bottom=0.2)
plt.gca().figure.text(0.5, 0.05, 'Prediction', ha='center', fontsize=13)
plt.show()
Output: the rendered confusion-matrix heatmap.