Linear Classifiers, Loss
Imagine you are a data scientist working for a startup that sells handmade crafts online.
The company recently implemented a new pricing algorithm to predict the price of crafts
based on various features like size, material, and complexity. You want to evaluate the
performance of this new algorithm.
Example: Mean Squared Log Error (MSLE)
MSLE averages the squared differences of log-transformed values: MSLE = (1/n) Σᵢ (log(1 + yᵢ) − log(1 + ŷᵢ))²
target = [2.5, 5, 4, 8]
preds = [3, 5, 2.5, 7]
Ans: 0.0397
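A minimal Python sketch reproducing this answer. It computes MSLE directly from the formula above and, for comparison, via scikit-learn's mean_squared_log_error (variable names mirror the slide):

```python
import numpy as np
from sklearn.metrics import mean_squared_log_error

target = [2.5, 5, 4, 8]   # true prices
preds = [3, 5, 2.5, 7]    # predicted prices

# Direct formula: mean of squared differences of log(1 + x)
msle = np.mean((np.log1p(target) - np.log1p(preds)) ** 2)
print(round(msle, 4))  # 0.0397

# scikit-learn computes the same quantity
print(round(mean_squared_log_error(target, preds), 4))  # 0.0397
```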
Binary Classification Loss Functions
Hinge Loss Example:
Suppose you have a binary classification problem with the following data point:
• True label (y): +1
• Predicted score (f(x)): +1
Hinge loss is defined as max(0, 1 − y · f(x)). Here, max(0, 1 − (+1)(+1)) = max(0, 0) = 0: the point lies exactly on the margin, so it incurs no loss (see the sketch below).
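A small Python sketch of this computation; the helper name hinge_loss and the extra scores (0.3 and −0.5) are illustrative additions, not from the slide:

```python
def hinge_loss(y, score):
    """Hinge loss for one example: max(0, 1 - y * f(x))."""
    return max(0.0, 1.0 - y * score)

# The slide's data point: true label +1, predicted score +1
print(hinge_loss(+1, +1))    # 0.0 -- exactly on the margin, no loss

# Assumed extra cases to show the shape of the loss:
print(hinge_loss(+1, 0.3))   # 0.7 -- correct side, but inside the margin
print(hinge_loss(+1, -0.5))  # 1.5 -- wrong side, penalized more heavily
```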
Interpretation
• The loss is high (1.204) because the predicted probability for the true class (class 2) is low, only 0.3; indeed, −log(0.3) ≈ 1.204.
• This demonstrates how the cross-entropy loss penalizes the model more when the predicted probability for the correct class is lower, encouraging the model to assign higher probability to the correct class during training.
In the context of sparse categorical cross-entropy (or any cross-entropy loss), a higher value indicates a greater error or mismatch between the predicted probabilities and the true label.
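A sketch that reproduces the 1.204 figure. Only the 0.3 probability for the true class is given on the slide; the full three-class vector [0.5, 0.2, 0.3] and the 0-based index 2 are assumptions for illustration:

```python
import numpy as np

# Assumed 3-class probability vector; the true class (index 2) gets 0.3,
# matching the slide. The other entries are made up for illustration.
probs = np.array([0.5, 0.2, 0.3])
true_class = 2

# Sparse categorical cross-entropy for one example:
# the negative natural log of the probability assigned to the true class.
loss = -np.log(probs[true_class])
print(round(loss, 3))  # 1.204
```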