
Evaluation Metrics

CS229
Yining Chen
(Adapted from slides by Anand Avati)
May 1, 2020
Topics
● Why are metrics important?
● Binary classifiers
○ Rank view, Thresholding
● Metrics
○ Confusion Matrix
○ Point metrics: Accuracy, Precision, Recall / Sensitivity, Specificity, F-score
○ Summary metrics: AU-ROC, AU-PRC, Log-loss.
● Choosing Metrics
● Class Imbalance
○ Failure scenarios for each metric
● Multi-class
Why are metrics important?
- Training objective (cost function) is only a proxy for real-world objectives.
- Metrics help translate a business goal into a quantitative target (not all errors
  are equal).
- They help organize ML team effort towards that target.
  - Generally in the form of improving that metric on the dev set.
- Useful to quantify the "gap" between:
  - Desired performance and baseline (to estimate effort initially).
  - Desired performance and current performance.
  - Measuring progress over time.
- Useful for lower-level tasks and debugging (e.g. diagnosing bias vs. variance).
- Ideally the training objective would be the metric itself, but that is not always
  possible. Still, metrics are useful and important for evaluation.
Binary Classification
● x is the input
● y is the binary output (0/1)
● Model is ŷ = h(x)
● Two types of models
○ Models that output a categorical class directly (K-nearest neighbors, Decision tree)
○ Models that output a real-valued score (SVM, Logistic Regression)
■ Score could be a margin (SVM) or a probability (LR, NN)
■ Need to pick a threshold
■ We focus on this type (the other type can be viewed as a special case)
Score-based models
[Figure: examples ranked by score, from Score = 1 at the top to Score = 0 at the bottom, with positive and negative examples interleaved.]

Example of a score: the output of logistic regression.

For most metrics: only the ranking matters.
If there are too many examples: plot class-wise score histograms.

Prevalence = # positive examples / (# positive examples + # negative examples)
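A minimal Python sketch of this view (the scores, labels, and bin count are hypothetical, not from the slides): only the ranking of the scores matters for most metrics, and prevalence comes straight from the labels.

    import numpy as np

    # Hypothetical scores and labels standing in for the slide's ranked examples.
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, size=1000)                       # 1 = positive, 0 = negative
    scores = np.clip(0.4 + 0.3 * labels + 0.3 * rng.normal(size=1000), 0, 1)

    prevalence = labels.mean()                                   # #pos / (#pos + #neg)
    print(f"Prevalence: {prevalence:.3f}")

    # Class-wise histograms of the scores (useful when there are too many
    # examples to draw individually).
    pos_hist, edges = np.histogram(scores[labels == 1], bins=10, range=(0, 1))
    neg_hist, _ = np.histogram(scores[labels == 0], bins=10, range=(0, 1))
    print("positive-class histogram:", pos_hist)
    print("negative-class histogram:", neg_hist)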
Threshold -> Classifier -> Point Metrics
[Figure: choosing a threshold Th = 0.5 on the score turns the ranking into a classifier: examples scoring above the threshold are predicted positive, the rest predicted negative, and each falls under Label Positive or Label Negative.]
Point metrics: Confusion Matrix

At Th = 0.5:
                      Label Positive   Label Negative
    Predict Positive        9                2
    Predict Negative        1                8

Properties:
- Total sum is fixed (population).
- Column sums are fixed (class-wise population).
- Quality of model & threshold decide how columns are split into rows.
- We want diagonals to be "heavy", off-diagonals to be "light".
Point metrics: True Positives
TP = # examples that are label positive and predicted positive. At Th = 0.5: TP = 9.
Point metrics: True Negatives
TN = # examples that are label negative and predicted negative. At Th = 0.5: TN = 8.
Point metrics: False Positives
FP = # examples that are label negative but predicted positive. At Th = 0.5: FP = 2.
Point metrics: False Negatives
FN = # examples that are label positive but predicted negative. At Th = 0.5: FN = 1.
FP and FN are also called Type-1 and Type-2 errors, respectively.
Point metrics: Accuracy
Accuracy = (TP + TN) / Total. At Th = 0.5: Acc = (9 + 8) / 20 = 0.85.
Equivalent to 0-1 loss!
Point metrics: Precision
Precision = TP / (TP + FP). At Th = 0.5: Pr = 9 / 11 ≈ 0.818.
Point metrics: Positive Recall (Sensitivity)
Recall = Sensitivity = TP / (TP + FN). At Th = 0.5: Recall = 9 / 10 = 0.9.

Trivial 100% recall: pull everybody above the threshold.
Trivial 100% precision: push everybody below the threshold except one positive (green) on top.
(Hopefully no negative (gray) is ranked above it!)

Striving for good precision with 100% recall = pulling the lowest-ranked positive (green) as high as possible in the ranking.
Striving for good recall with 100% precision = pushing the top-ranked negative (gray) as low as possible in the ranking.
Point metrics: Negative Recall (Specificity)
Specificity = TN / (TN + FP). At Th = 0.5: Spec = 8 / 10 = 0.8.
Point metrics: F1-score
F1 = 2 * Precision * Recall / (Precision + Recall). At Th = 0.5: F1 = 2 * 0.818 * 0.9 / (0.818 + 0.9) ≈ 0.857.
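A small Python sketch of these point metrics, using 20 hypothetical examples constructed so that a threshold of 0.5 reproduces the slides' counts (TP = 9, TN = 8, FP = 2, FN = 1); this is an illustration, not the course's code.

    import numpy as np

    def point_metrics(y_true, scores, th):
        """Confusion-matrix counts and point metrics at a single threshold."""
        y_pred = (scores >= th).astype(int)
        tp = int(np.sum((y_pred == 1) & (y_true == 1)))
        tn = int(np.sum((y_pred == 0) & (y_true == 0)))
        fp = int(np.sum((y_pred == 1) & (y_true == 0)))
        fn = int(np.sum((y_pred == 0) & (y_true == 1)))
        acc = (tp + tn) / len(y_true)
        prec = tp / (tp + fp) if tp + fp else 1.0
        rec = tp / (tp + fn) if tp + fn else 0.0      # sensitivity
        spec = tn / (tn + fp) if tn + fp else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        return dict(TP=tp, TN=tn, FP=fp, FN=fn, Acc=acc, Pr=prec,
                    Recall=rec, Spec=spec, F1=f1)

    # Hypothetical data matching the slides at Th = 0.5: 9 positives and 2
    # negatives above the threshold, 1 positive and 8 negatives below it.
    y_true = np.array([1] * 9 + [0] * 2 + [1] * 1 + [0] * 8)
    scores = np.array([0.9] * 9 + [0.7] * 2 + [0.4] * 1 + [0.2] * 8)
    print(point_metrics(y_true, scores, th=0.5))
    # -> TP=9, TN=8, FP=2, FN=1, Acc=0.85, Pr~0.818, Recall=0.9, Spec=0.8, F1~0.857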
Point metrics: Changing threshold
Raising the threshold to Th = 0.6: TP = 7, TN = 8, FP = 2, FN = 3;
Acc = 0.75, Pr = 7/9 ≈ 0.778, Recall = 0.7, Spec = 0.8, F1 ≈ 0.737.

# effective thresholds = # examples + 1
Threshold Scanning (sweeping the threshold from Score = 1 down to Score = 0)

Threshold   TP   TN   FP   FN   Accuracy   Precision   Recall   Specificity   F1
1.00         0   10    0   10     0.50       1           0         1          0
0.95         1   10    0    9     0.55       1           0.1       1          0.182
0.90         2   10    0    8     0.60       1           0.2       1          0.333
0.85         2    9    1    8     0.55       0.667       0.2       0.9        0.308
0.80         3    9    1    7     0.60       0.750       0.3       0.9        0.429
0.75         4    9    1    6     0.65       0.800       0.4       0.9        0.533
0.70         5    9    1    5     0.70       0.833       0.5       0.9        0.625
0.65         5    8    2    5     0.65       0.714       0.5       0.8        0.588
0.60         6    8    2    4     0.70       0.750       0.6       0.8        0.667
0.55         7    8    2    3     0.75       0.778       0.7       0.8        0.737
0.50         8    8    2    2     0.80       0.800       0.8       0.8        0.800
0.45         9    8    2    1     0.85       0.818       0.9       0.8        0.857
0.40         9    7    3    1     0.80       0.750       0.9       0.7        0.818
0.35         9    6    4    1     0.75       0.692       0.9       0.6        0.783
0.30         9    5    5    1     0.70       0.643       0.9       0.5        0.750
0.25         9    4    6    1     0.65       0.600       0.9       0.4        0.720
0.20         9    3    7    1     0.60       0.562       0.9       0.3        0.692
0.15         9    2    8    1     0.55       0.529       0.9       0.2        0.667
0.10         9    1    9    1     0.50       0.500       0.9       0.1        0.643
0.05        10    1    9    0     0.55       0.526       1         0.1        0.690
0.00        10    0   10    0     0.50       0.500       1         0          0.667
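A Python sketch of the scan above, using 20 hypothetical examples whose label order (by descending score) matches the table; note that only # examples + 1 thresholds produce distinct classifiers.

    import numpy as np

    # Hypothetical 20 examples: labels listed in order of descending score,
    # with scores spaced 0.05 apart from 0.95 down to 0.00 (as in the table).
    y_true = np.array([1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1])
    scores = np.round(np.arange(0.95, -0.01, -0.05), 2)

    # 21 effective thresholds = # examples + 1.
    for th in np.round(np.arange(1.00, -0.01, -0.05), 2):
        pred = (scores >= th).astype(int)
        tp = int(np.sum((pred == 1) & (y_true == 1)))
        fp = int(np.sum((pred == 1) & (y_true == 0)))
        fn = int(np.sum((pred == 0) & (y_true == 1)))
        tn = int(np.sum((pred == 0) & (y_true == 0)))
        acc = (tp + tn) / 20
        prec = tp / (tp + fp) if tp + fp else 1.0   # convention: precision = 1 when nothing is predicted positive
        rec = tp / (tp + fn)
        spec = tn / (tn + fp)
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        print(f"{th:.2f}  TP={tp:2d} TN={tn:2d} FP={fp:2d} FN={fn:2d}  "
              f"Acc={acc:.2f} Pr={prec:.3f} Rec={rec:.1f} Spec={spec:.1f} F1={f1:.3f}")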
Summary metrics: Rotated ROC (Sensitivity vs. Specificity)
[Figure: ROC curve traced as the threshold sweeps from Score = 1 down to Score = 0, with Specificity = True Neg / Neg and Sensitivity = True Pos / Pos on the axes; the diagonal corresponds to random guessing.]

AUROC = Area Under the ROC curve
      = Prob[random positive is ranked higher than random negative]

Agnostic to prevalence!
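A Python sketch of the probabilistic interpretation of AUROC (the score distributions are hypothetical); the pairwise estimate should agree with sklearn's roc_auc_score.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    pos_scores = rng.normal(0.7, 0.2, size=500)    # hypothetical positive-class scores
    neg_scores = rng.normal(0.3, 0.2, size=500)    # hypothetical negative-class scores

    # AUROC = Prob[random positive ranked higher than random negative]
    # (ties counted as 1/2).
    diff = pos_scores[:, None] - neg_scores[None, :]
    auc_pairwise = np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

    y_true = np.concatenate([np.ones_like(pos_scores), np.zeros_like(neg_scores)])
    y_score = np.concatenate([pos_scores, neg_scores])
    print(auc_pairwise, roc_auc_score(y_true, y_score))   # the two should match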


Summary metrics: PRC (Recall vs. Precision)
[Figure: precision-recall curve traced as the threshold sweeps from Score = 1 down to Score = 0.]

Recall = Sensitivity = True Pos / Pos
Precision = True Pos / Predicted Pos

AUPRC = Area Under the PRC
      = Expected precision at a random threshold

Precision >= prevalence
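A Python sketch with hypothetical labels and scores: sklearn's average_precision_score is a step-wise approximation of the area under the PR curve, and a random classifier's average precision is roughly the prevalence.

    import numpy as np
    from sklearn.metrics import average_precision_score, roc_auc_score

    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, size=1000)
    y_score = np.clip(0.5 + 0.25 * (2 * y_true - 1) + 0.2 * rng.normal(size=1000), 0, 1)

    print("AUROC :", round(roc_auc_score(y_true, y_score), 3))
    print("AUPRC :", round(average_precision_score(y_true, y_score), 3))
    print("prevalence (PR baseline of a random classifier):", round(y_true.mean(), 3))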


Summary metrics: comparing two models
[Figure: Model A and Model B each score the same data set, with scores from Score = 1 down to Score = 0.]

Two models scoring the same data set. Is one of them better than the other?
Summary metrics: Log-Loss vs. Brier Score
[Figure: two models with the same ranking of the examples but different score values, from Score = 1 down to Score = 0.]

● Same ranking, and therefore the same AUROC, AUPRC, and accuracy!
● Both reward confident correct answers and heavily penalize confident wrong answers.
● One perfectly confident wrong prediction is fatal for log-loss (the Brier score stays bounded).
● Both are proper scoring rules: minimized when the predicted probabilities match the true probabilities -> encourage a well-calibrated model.
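A Python sketch contrasting the two losses on hypothetical predicted probabilities; one near-certain wrong prediction makes log-loss explode while the Brier score stays bounded.

    import numpy as np
    from sklearn.metrics import log_loss, brier_score_loss

    y_true = np.array([1, 1, 0, 0, 1])
    p_ok  = np.array([0.9, 0.8, 0.2, 0.1, 0.7])    # confident and correct
    p_bad = np.array([0.9, 0.8, 0.2, 0.1, 1e-9])   # one near-certain wrong answer

    for name, p in [("ok", p_ok), ("bad", p_bad)]:
        print(name,
              "log-loss:", round(log_loss(y_true, p), 3),
              "Brier:", round(brier_score_loss(y_true, p), 3))
    # The "bad" model's log-loss blows up (~4.3 here) while its Brier score
    # stays within [0, 1].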
Calibration vs. Discriminative Power
[Figure: reliability diagram (fraction of positives vs. model output) with an output histogram for each model.]

Logistic (th=0.5): Precision 0.872, Recall 0.851, F1 0.862, Brier 0.099
SVC (th=0.5):      Precision 0.872, Recall 0.852, F1 0.862, Brier 0.163

Nearly identical discriminative power (precision / recall / F1), but very different calibration (Brier score).
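A sketch of a reliability diagram like the one behind this slide, on a placeholder dataset and model (not the slide's actual experiment), using sklearn's calibration_curve.

    from sklearn.calibration import calibration_curve
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import brier_score_loss
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=5000, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    prob = model.predict_proba(X_te)[:, 1]

    # Fraction of positives vs. mean predicted probability, per bin.
    frac_pos, mean_pred = calibration_curve(y_te, prob, n_bins=10)
    for fp, mp in zip(frac_pos, mean_pred):
        print(f"mean predicted {mp:.2f} -> observed positive rate {fp:.2f}")
    print("Brier score:", round(brier_score_loss(y_te, prob), 3))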
Unsupervised Learning
● Log P(x) is a measure of fit in probabilistic models (GMM, Factor Analysis)
○ High log P(x) on the training set but low log P(x) on the test set is a sign of overfitting
○ The raw value of log P(x) is hard to interpret in isolation
● K-means is trickier (because of its fixed covariance assumption)
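A sketch of using log P(x) for evaluation with sklearn's GaussianMixture on synthetic data; score() returns the average per-sample log-likelihood.

    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = np.concatenate([rng.normal(0, 1, size=(500, 2)),
                        rng.normal(4, 1, size=(500, 2))])
    X_tr, X_te = train_test_split(X, random_state=0)

    for k in (2, 20):
        gmm = GaussianMixture(n_components=k, random_state=0).fit(X_tr)
        print(f"k={k:2d}  train log P(x): {gmm.score(X_tr):7.3f}   "
              f"test log P(x): {gmm.score(X_te):7.3f}")
    # A train score much higher than the test score (e.g. with k=20) is a sign
    # of overfitting; the raw values are hard to interpret in isolation.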


Class Imbalance
Symptom: Prevalence < 5% (no strict definition).

Metrics: May not be meaningful.

Learning: May not focus on the minority-class examples at all
(the majority class can overwhelm logistic regression, and to a lesser extent SVM).
What happens to the metrics under class imbalance?
Accuracy: Blindly predicting the majority class already achieves accuracy = 1 - prevalence, so that is the baseline.

Log-Loss: The majority class can dominate the loss.

AUROC: Easy to keep the AUC high by scoring most negatives very low.

AUPRC: Somewhat more robust than AUROC, but it has other challenges.

In general, as measures under class imbalance: Accuracy < AUROC < AUPRC.
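A Python sketch of these failure modes on hypothetical data with 1% prevalence: always predicting the majority class already gets ~99% accuracy, and AUROC stays high as long as most negatives are scored very low, while AUPRC drops much further.

    import numpy as np
    from sklearn.metrics import accuracy_score, average_precision_score, roc_auc_score

    rng = np.random.default_rng(0)
    n = 10_000
    y = (rng.random(n) < 0.01).astype(int)              # ~1% prevalence

    # "Blind" classifier: always predict the majority (negative) class.
    print("accuracy of always-negative:", accuracy_score(y, np.zeros(n, dtype=int)))

    # A model that scores most negatives very low but lets ~1% of negatives
    # outrank every positive.
    scores = rng.random(n) * 0.1
    scores[y == 1] = 0.8
    top_negs = rng.choice(np.flatnonzero(y == 0), size=100, replace=False)
    scores[top_negs] = 0.9
    print("AUROC:", round(roc_auc_score(y, scores), 3))              # stays ~0.99
    print("AUPRC:", round(average_precision_score(y, scores), 3))    # suffers much more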


Rotated ROC
[Figure: ranked scores for a fraud example with 1% prevalence. The 1% of positives ("fraudulent") are ranked above 98% of the negatives but below the top-scoring 1% of negatives. Axes: Specificity = True Neg / Neg, Sensitivity = True Pos / Pos.]

AUC = 98/99 ≈ 0.99
Multi-class
● The confusion matrix will be N x N (we still want heavy diagonals and light off-diagonals)
● Most metrics (except accuracy) are generally analyzed as multiple 1-vs-rest problems
● There are multiclass variants of AUROC and AUPRC (micro vs. macro averaging; see the sketch below)
● Class imbalance is common (in both the absolute and the relative sense)
● Cost-sensitive learning techniques (also help with binary imbalance)
○ Assign a weight to each block of the confusion matrix.
○ Incorporate the weights into the loss function.
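A sketch of multiclass AUROC with macro vs. micro averaging, plus cost-sensitive class weights, using sklearn on a placeholder dataset (the weights are arbitrary, for illustration).

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import label_binarize

    X, y = load_iris(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

    # Cost-sensitive learning: per-class weights folded into the loss function.
    clf = LogisticRegression(max_iter=1000,
                             class_weight={0: 1.0, 1: 5.0, 2: 1.0}).fit(X_tr, y_tr)
    proba = clf.predict_proba(X_te)

    # Macro average: one-vs-rest AUROC per class, then an unweighted mean.
    print("macro AUROC:", roc_auc_score(y_te, proba, multi_class="ovr", average="macro"))

    # Micro average: pool every one-vs-rest decision into a single binary problem.
    y_bin = label_binarize(y_te, classes=[0, 1, 2])
    print("micro AUROC:", roc_auc_score(y_bin.ravel(), proba.ravel()))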
Choosing Metrics
Some common patterns (see the sketch after this list):

- High precision is a hard constraint; optimize recall (search engine results, grammar correction): intolerant to FP.
  - Metric: Recall at Precision = XX%
- High recall is a hard constraint; optimize precision (medical diagnosis): intolerant to FN.
  - Metric: Precision at Recall = 100%
- Capacity constrained (can only act on the top K examples).
  - Metric: Precision in top-K.
- ……
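A Python sketch of these constrained metrics (y_true and scores are assumed numpy arrays, as in the earlier sketches): "recall at precision >= p" reads the PR curve, and "precision in top-K" just sorts by score.

    import numpy as np
    from sklearn.metrics import precision_recall_curve

    def recall_at_precision(y_true, scores, min_precision):
        """Best recall achievable at any threshold whose precision >= min_precision."""
        precision, recall, _ = precision_recall_curve(y_true, scores)
        ok = precision >= min_precision
        return recall[ok].max() if ok.any() else 0.0

    def precision_at_k(y_true, scores, k):
        """Precision among the k highest-scoring examples (capacity-constrained)."""
        top_k = np.argsort(scores)[::-1][:k]
        return y_true[top_k].mean()

    # Hypothetical usage, with y_true / scores as numpy arrays:
    # print(recall_at_precision(y_true, scores, min_precision=0.90))
    # print(precision_at_k(y_true, scores, k=100))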
Thank You!
