
ARTIFICIAL INTELLIGENCE in Python LANGUAGE

Chapter 3.3: Artificial Neural Network - Activation, Loss & Accuracy

3.3.1. Activation functions
• Role of activation functions:
• Activation functions are used to activate (transform) the signal at the output of a neuron.
• There are two types of activation functions: activation functions for the hidden layers and activation functions for the output layer.
• When one has to map non-linear functions, non-linear activation functions (such as sigmoid, softmax, or ReLU) can be used, as illustrated in the sketch below.
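
A minimal sketch of this hidden-layer / output-layer split (assuming the Activation_ReLU and Activation_SoftMax classes defined later in this chapter; the array layer_outputs is a hypothetical stand-in for a dense layer's output):

import numpy as np

# Hypothetical output of a dense layer for one sample
layer_outputs = np.array([[1.0, -2.0, 3.0]])

# Non-linear activation at a hidden layer
hidden_activation = Activation_ReLU()
hidden_activation.forward(layer_outputs)

# Activation at the output layer of a classifier
output_activation = Activation_SoftMax()
output_activation.forward(hidden_activation.output)
print(output_activation.output)  # bounded, normalized class probabilities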

• Step activation function:
The simplest activation function; it is not used anymore today.

$y = \mathrm{sign}\left(\sum_{i=1}^{N} a_i x_i\right), \qquad \mathrm{sign}(g) = \begin{cases} 0, & g < 0 \\ 1, & g \ge 0 \end{cases}$
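
The chapter gives no code for this function; here is a minimal sketch in the style of the other activation classes below (the class name Activation_Step and the np.where formulation are illustrative assumptions):

import numpy as np

class Activation_Step:
    # Forward pass
    def forward(self, inputs):
        # Output 1 where the input is >= 0, and 0 elsewhere
        self.output = np.where(inputs >= 0, 1, 0)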

• Linear activation function:
The linear activation is generally used at the output layer for regression applications.

#Activation Linear function
class Activation_Linear:
    # Forward pass
    def forward(self, inputs):
        # Calculate output values from input: pass the inputs through unchanged
        self.output = inputs
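
A hypothetical usage example (the values are illustrative):

import numpy as np

linear = Activation_Linear()
linear.forward(np.array([[-1.5, 0.0, 2.3]]))
print(linear.output)  # [[-1.5  0.   2.3]] - identical to the inputs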

• Sigmoid activation function:
Usually used in hidden layers in order to allow a better optimization of weights and biases.

#Activation function Sigmoid
import numpy as np

class Activation_Sigmoid:
    # Forward pass
    def forward(self, inputs):
        # Calculate output values from input
        self.output = 1 / (1 + np.exp(-inputs))
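
A hypothetical usage example, showing that the outputs are squashed into the range (0, 1):

import numpy as np

sigmoid = Activation_Sigmoid()
sigmoid.forward(np.array([[-4.0, 0.0, 4.0]]))
print(sigmoid.output)  # approx [[0.018 0.5 0.982]]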

• Rectified Linear activation function (ReLU):
Rapid and efficient, it is today the most widely used activation function.

import numpy as np

class Activation_ReLU:
    # Forward pass
    def forward(self, inputs):
        # Calculate output values from input
        self.output = np.maximum(0, inputs)
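
A hypothetical usage example, showing that negative values are clipped to 0 while positive values pass through:

import numpy as np

relu = Activation_ReLU()
relu.forward(np.array([[-2.0, -0.5, 0.0, 1.7]]))
print(relu.output)  # [[0.  0.  0.  1.7]]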

• Softmax activation function:
In order to design a classifier with a neural network, the softmax activation is a good choice at the output layer, as it returns bounded and normalized output values that represent the probabilities of the output neurons. The sum of the probabilities is equal to 1:

$S_{i,j} = \dfrac{e^{z_{i,j}}}{\sum_{l=1}^{L} e^{z_{i,l}}}$

#Activation function SoftMax
import numpy as np

class Activation_SoftMax:
    # Forward pass
    def forward(self, inputs):
        # Get unnormalized probabilities (subtracting the row maximum keeps np.exp from overflowing)
        exp_values = np.exp(inputs - np.max(inputs, axis=1, keepdims=True))
        self.exp_values = exp_values
        # Normalize them for each sample
        probabilities = exp_values / np.sum(exp_values, axis=1, keepdims=True)
        self.output = probabilities
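
A hypothetical usage example, checking that each output row sums to 1:

import numpy as np

softmax = Activation_SoftMax()
softmax.forward(np.array([[1.0, 2.0, 3.0]]))
print(softmax.output)          # approx [[0.090 0.245 0.665]]
print(np.sum(softmax.output))  # 1.0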

3.3.2. Loss & Accuracy
• Role of the loss function (cost function):
• Allows the estimation of the network's error, i.e. how wrong the model is.
• Ideally, a perfect model has a loss equal to 0.
• Generally there are two principal types of loss functions: MSE (mean squared error) loss for regression applications, and categorical cross-entropy loss for classification / identification applications.

• Categorical Cross-Entropy Loss:
Categorical cross-entropy loss is used with a softmax output. The formula for the determination of the function is as follows:

CCE loss function: $L = -\sum_i y_i \log \hat{y}_i$, with $y$ the actual distribution and $\hat{y}$ the predicted distribution.

Example: $y = (1, 0, 0)$, $\hat{y}_{\mathrm{softmax}} = (0.7, 0.2, 0.1)$

Loss calculation: $L = -(1 \cdot \log 0.7 + 0 \cdot \log 0.2 + 0 \cdot \log 0.1) \approx 0.36$
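
A quick NumPy check of this worked example (log is the natural logarithm here):

import numpy as np

print(-np.log(0.7))  # approx 0.357, i.e. 0.36 as above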

• Categorical Cross-Entropy Loss code:

# Common Loss class and Categorical Cross-Entropy Loss class
import numpy as np

class Loss:
    # Calculate the data and regularization losses given
    # the output and the ground truth values
    def calculate(self, output, y):
        # Calculate sample losses:
        sample_losses = self.forward(output, y)
        # Calculate mean loss:
        data_loss = np.mean(sample_losses)
        # Return loss:
        return data_loss

class Loss_CategoricalCrossentropy(Loss):
    # Forward pass
    def forward(self, y_pred, y_true):
        samples = len(y_pred)
        y_pred_clipped = np.clip(y_pred, 1e-7, 1 - 1e-7)  # 1e-7: avoid log(0)
        # Probabilities for target values only if categorical labels
        if len(y_true.shape) == 1:
            # The label array is 1D: sparse class indices, e.g. [0 1 ... 1]
            correct_confidence = y_pred_clipped[range(samples), y_true]
        elif len(y_true.shape) == 2:
            # The label array is 2D: one-hot rows [[0 0 ... 1 ... 0 0], ...]
            correct_confidence = np.sum(y_pred_clipped * y_true, axis=1)
        # Losses calculation
        negative_log_likelihoods = -np.log(correct_confidence)
        return negative_log_likelihoods

• Categorical Cross-Entropy Loss code example:

#Example Loss Function
import numpy as np
import Loss as loss  # type: ignore

# Test the class now:
softmax_output = np.array([[0.7, 0.1, 0.2],
                           [0.1, 0.5, 0.4],
                           [0.02, 0.9, 0.08]])

#class_target = np.array([0, 1, 1])  # sparse-label alternative
class_target = np.array([[1, 0, 0],
                         [0, 1, 0],
                         [0, 1, 0]])

loss_function = loss.Loss_CategoricalCrossentropy()
loss_CCE = loss_function.calculate(softmax_output, class_target)
print(loss_CCE)

• Mean Squared Error Loss:
Mean squared error loss is generally used for the outputs of regression applications (or prediction applications). The formula for the determination of the function is as follows:

MSE loss function: $L = \dfrac{1}{N_{\mathrm{output}}} \sum_i \left(\hat{y}_i - y_i\right)^2$, with $y$ the actual distribution and $\hat{y}$ the predicted distribution.
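
The chapter gives no code for this loss; here is a minimal sketch following the same Loss base class used above (the class name Loss_MeanSquaredError is an assumption, not from the original):

import numpy as np

class Loss_MeanSquaredError(Loss):
    # Forward pass
    def forward(self, y_pred, y_true):
        # Mean of the squared differences over the output neurons, per sample
        return np.mean((y_true - y_pred) ** 2, axis=-1)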

• Accuracy:
For a classification problem, one simple way to compute the accuracy is to compare the softmax output vector (using the argmax function) with the label vector. An example of code is as follows:

import numpy as np

# 3 sample outputs
softmax_outputs = np.array([[0.7, 0.2, 0.1],
                            [0.5, 0.1, 0.4],
                            [0.02, 0.9, 0.08]])
# Target (ground truth labels) for the 3 samples
class_targets = np.array([0, 1, 1])

# Calculate the indices of the max values along axis 1 (across each sample)
predictions = np.argmax(softmax_outputs, axis=1)
print(predictions)

# If the targets are one-hot encoded (2D), convert them to class indices
if len(class_targets.shape) == 2:
    class_targets = np.argmax(class_targets, axis=1)

# Calculate the accuracy (True = 1, False = 0)
accuracy = np.mean(predictions == class_targets)
print("Accuracy: ", accuracy)

Introduction to Artificial Intelligence

END OF CHAPTER 3.3
