
Linear Classifiers, Linear Machines with Hinge Loss
Regression: A regression problem is the problem of determining a relationship between one or more independent variables and a real-valued, continuous output variable, given a set of observed values of the independent variables and the corresponding values of the output variable.

Regression Loss Function


Example: Mean Absolute Error

Imagine you are a data scientist working for a startup that sells handmade crafts online.
The company recently implemented a new pricing algorithm to predict the price of crafts
based on various features like size, material, and complexity. You want to evaluate the
performance of this new algorithm.
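A minimal sketch of how the Mean Absolute Error could be computed for this scenario; the prices below are made-up illustrative values, not data from the example.

import numpy as np

# Hypothetical true prices and prices predicted by the new algorithm (illustrative values)
true_prices = np.array([20.0, 35.0, 15.0, 50.0])
predicted_prices = np.array([22.0, 30.0, 18.0, 45.0])

# Mean Absolute Error: average absolute difference between targets and predictions
mae = np.mean(np.abs(true_prices - predicted_prices))
print(mae)  # (2 + 5 + 3 + 5) / 4 = 3.75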
Example: Mean Squared Log Error (MSLE)

target = [2.5, 5, 4, 8]
preds = [3, 5, 2.5, 7]

Ans: 0.0397
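A short NumPy sketch that reproduces this value; MSLE averages the squared differences between log(1 + y) and log(1 + y_hat).

import numpy as np

target = np.array([2.5, 5, 4, 8])
preds = np.array([3, 5, 2.5, 7])

# Mean Squared Log Error, computed on the shifted logs log(1 + x)
msle = np.mean((np.log1p(target) - np.log1p(preds)) ** 2)
print(round(msle, 4))  # 0.0397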
Binary Classification Loss Functions
Hinge Loss Example:

Suppose you have a binary classification problem with the following data point:
•True label (y): +1
•Predicted score (f(x)): +1

The hinge loss for this point, max(0, 1 − y·f(x)), is calculated as:

Hinge Loss = max(0, 1 − (1·1)) = max(0, 1 − 1) = max(0, 0) = 0

Consider another data point:


•True label (y): −1
•Predicted score (f(x)): +1
•Hinge Loss = max(0, 1 − (−1·1)) = max(0, 1 + 1) = max(0, 2) = 2

The hinge loss increases, indicating a misclassification.
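A small Python sketch that reproduces both calculations, assuming labels in {−1, +1} and a raw prediction score f(x).

def hinge_loss(y, fx):
    # Hinge loss for one example: zero when y * f(x) >= 1, growing linearly otherwise
    return max(0.0, 1.0 - y * fx)

print(hinge_loss(+1, 1.0))  # 0.0 -> correct, confident prediction
print(hinge_loss(-1, 1.0))  # 2.0 -> misclassified, larger loss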


1. Sigmoid cross-entropy
2. Weighted cross-entropy
Weighted cross-entropy loss is a variation of the standard cross-entropy loss, where
different classes are assigned different weights to handle class imbalance in the
dataset. This ensures that the model gives more importance to minority classes or
underrepresented labels.
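A minimal sketch of weighted binary cross-entropy, assuming a single pos_weight factor that scales the loss on positive (minority-class) examples; the weight and data values are illustrative.

import numpy as np

def weighted_binary_cross_entropy(y_true, p_pred, pos_weight=3.0, eps=1e-12):
    # y_true: 0/1 labels, p_pred: predicted probabilities for class 1.
    # pos_weight > 1 makes errors on the positive (minority) class cost more.
    p = np.clip(p_pred, eps, 1 - eps)
    losses = -(pos_weight * y_true * np.log(p) + (1 - y_true) * np.log(1 - p))
    return losses.mean()

y_true = np.array([1, 0, 0, 1])
p_pred = np.array([0.7, 0.2, 0.1, 0.4])
print(weighted_binary_cross_entropy(y_true, p_pred))  # errors on the positive class are weighted three times as heavily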
Softmax example
True class label: Class 2
Categorical Cross-Entropy Loss = −log(p(true class)) = −log(0.3) ≈ 1.204

Interpretation

• The loss is relatively high (1.204) because the predicted probability for the true class (class 2, with a probability of 0.3) is low.

• This demonstrates how the cross-entropy loss penalizes the model more when
the predicted probability for the correct class is lower, encouraging the model to
increase the accuracy of its predictions during training.
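With a one-hot target, categorical cross-entropy reduces to −log of the predicted probability of the true class, so the 1.204 figure can be reproduced directly; the other class probabilities below are illustrative, and only the 0.3 assigned to the true class affects the loss.

import numpy as np

# Illustrative softmax output over three classes; class 2 (index 1) is the true class
probs = np.array([0.5, 0.3, 0.2])
true_class = 1  # predicted probability 0.3 for the true class

# Categorical cross-entropy with a one-hot target: -log of the true-class probability
loss = -np.log(probs[true_class])
print(round(loss, 3))  # 1.204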
In the context of sparse categorical cross-entropy (or any cross-entropy loss), a higher value indicates a greater error or mismatch between the predicted probabilities and the true label.
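A small sketch contrasting a confident correct prediction with a poor one; the probability vectors are illustrative.

import numpy as np

# Sparse categorical cross-entropy takes the integer class label directly (no one-hot vector)
def sparse_cce(probs, label):
    return -np.log(probs[label])

print(sparse_cce(np.array([0.1, 0.8, 0.1]), 1))  # ~0.223 -> good prediction, small loss
print(sparse_cce(np.array([0.5, 0.3, 0.2]), 1))  # ~1.204 -> poor prediction, larger loss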
