Losses

The document discusses the loss functions used in neural networks, including Mean Squared Error (MSE), Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), Binary Cross-entropy, Categorical Cross-entropy, and Sparse Categorical Cross-entropy. It explains how each loss function is calculated and where each applies in regression and classification problems. Loss functions are crucial for quantifying a model's prediction error so the learning algorithm can minimize it.

Uploaded by

ayaazouz1997
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
27 views9 pages

Losses

The document discusses various loss functions used in neural networks, including Mean Squared Error (MSE), Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), Binary Cross-entropy, Categorical Cross-entropy, and Sparse Categorical Crossentropy. It explains how each loss function is calculated and their applications in regression and classification problems. Loss functions are crucial for helping models understand prediction errors and improve their learning algorithms.

Uploaded by

ayaazouz1997
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 9

LOSSES

Mean Squared Error (MSE)


Mean squared error is a loss/cost function for regression-based neural networks. MSE is calculated by taking the mean of the squared individual errors. Mathematically, MSE is defined as:

MSE = (1/n) * Σ (y_i - ŷ_i)^2

MSE can be implemented in Python as:

import numpy as np

def mse(y_true, y_pred):
    # Mean of the squared differences between true and predicted values
    return np.mean((y_true - y_pred) ** 2)
Mean Absolute Error (MAE)
Mean absolute error, or MAE for short, is another loss function used in regression problems. The MAE of a neural network is calculated by taking the mean of the absolute differences between the predicted values and the actual values. Mathematically, MAE is defined as:

MAE = (1/n) * Σ |y_i - ŷ_i|

def mae(y_true, y_pred):
    # Mean of the absolute differences between true and predicted values
    return np.mean(np.abs(y_true - y_pred))
Root Mean Squared Error (RMSE)
RMSE is calculated by taking the square root of the MSE for a given set of true and predicted labels. This is the mathematical formula for RMSE:

RMSE = sqrt((1/n) * Σ (y_i - ŷ_i)^2)

def rmse(y_true, y_pred):
    # Square root of the mean squared error (np.root does not exist; use np.sqrt)
    mean_squared_error = np.mean((y_true - y_pred) ** 2)
    return np.sqrt(mean_squared_error)
Binary Cross-entropy
The binary cross-entropy loss function, also called log loss, is used to calculate the loss for a neural network performing binary classification, i.e. predicting one out of two classes. The sigmoid function is typically used at the neural network output to produce a probability between 0 and 1. The binary cross-entropy for a set of true and predicted labels is calculated with the following formula:

BCE = -(1/n) * Σ [y_i * log(ŷ_i) + (1 - y_i) * log(1 - ŷ_i)]
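Following the pattern of the regression losses above, binary cross-entropy can be sketched in NumPy as well. The 1e-12 clipping epsilon is an assumption added here to keep log() finite, not part of the formula itself:

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred):
    # Clip predictions away from exact 0 and 1 so log() never overflows
    # (the 1e-12 epsilon is an assumed numerical safeguard)
    eps = 1e-12
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
```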
Categorical Cross-entropy
Categorical cross-entropy is the loss function used for multi-class classification tasks. It has much the same formula as binary cross-entropy, with a few changes:

CCE = -(1/n) * Σ_i Σ_c y_{i,c} * log(ŷ_{i,c})

The true labels provided to the categorical cross-entropy loss function are one-hot encoded: each true label is a vector of zeros with a single 1 at the correct class. Each predicted label is a vector of probabilities for the respective classes, typically produced by a softmax output layer. For example, a true label [0, 1, 0] might be paired with the prediction probabilities [0.1, 0.7, 0.2], giving a loss of -log(0.7) for that sample.
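The formula above translates directly into NumPy. This is a minimal sketch; the function name and the 1e-12 clipping constant are assumptions added here to avoid log(0):

```python
import numpy as np

def categorical_cross_entropy(y_true, y_pred):
    # y_true: one-hot rows, shape (n, num_classes)
    # y_pred: probability rows (e.g. softmax output), same shape
    eps = 1e-12  # assumed numerical safeguard against log(0)
    y_pred = np.clip(y_pred, eps, 1.0)
    # Sum over classes for each sample, then average over samples
    return -np.mean(np.sum(y_true * np.log(y_pred), axis=1))
```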
Sparse Categorical Crossentropy
Categorical cross-entropy becomes very memory-inefficient when there is a large number of classes, say 1000: each one-hot label is then a large array of zeros with a single 1. In such cases, we use the sparse categorical cross-entropy loss function. It works on label-encoded (integer) targets instead of one-hot encoded ones, which saves memory and speeds up computation when working with a large number of classes.
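With integer targets, the loss simply picks out the predicted probability of the correct class for each sample. A minimal NumPy sketch (function name and epsilon are assumptions, as before):

```python
import numpy as np

def sparse_categorical_cross_entropy(y_true, y_pred):
    # y_true: integer class indices, shape (n,)
    # y_pred: per-class probabilities, shape (n, num_classes)
    eps = 1e-12  # assumed numerical safeguard against log(0)
    # Fancy indexing selects the probability of the true class per sample
    picked = y_pred[np.arange(len(y_true)), y_true]
    return -np.mean(np.log(np.clip(picked, eps, 1.0)))
```

For one-hot labels and their integer-encoded equivalents, this returns the same value as the categorical cross-entropy above.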
Loss functions help a model (or a neural network) determine how wrong its predictions are, which in turn tells the learning algorithm how to minimize that error.
MAE, MSE, and RMSE are used for regression problems.
Binary Crossentropy, Categorical Crossentropy, and Sparse Categorical Crossentropy loss
functions are used for classification problems.
