ML Module 2
Input Layer
Function: The input layer receives the input features and passes them directly to the hidden layer.
Components: It contains as many neurons as there are features in the input data; each neuron in the input layer corresponds to one component of the input vector.
Hidden Layer
Function: This layer uses radial basis functions (RBFs) to conduct the non-linear transformation of the input data.
Components: Neurons in the hidden layer apply the RBF to the incoming data. The Gaussian function is the most frequently used RBF.
RBF Neurons: Every neuron in the hidden layer has a center (also referred to as a prototype vector) and a spread parameter (σ). The neuron's output depends on the distance between the input vector and its center, and the spread parameter controls how quickly the output falls off as that distance grows.
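Concretely, in the Gaussian case a hidden neuron with center c and spread σ responds to an input x with

φ(x) = exp( −‖x − c‖² / (2σ²) ),

so the activation is largest when x coincides with the center and falls off exponentially as x moves away from it.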
Output Layer
Function: The output layer combines the hidden-layer outputs with weighted sums to produce the network's final output.
Components: It consists of neurons that form a linear combination of the hidden-layer outputs. The weights of this combination are adjusted during training to reduce the error between the network's predictions and the actual target values.
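Putting the three layers together, here is a minimal sketch of a forward pass through a single-output RBF network with Gaussian hidden units and a linear output (the function name and shapes are illustrative assumptions):

```python
import numpy as np

def rbf_forward(x, centers, sigmas, weights, bias=0.0):
    """Forward pass of a simple single-output RBF network (illustrative sketch).

    x       : (d,)   input feature vector
    centers : (m, d) one center (prototype vector) per hidden RBF neuron
    sigmas  : (m,)   spread parameter of each hidden neuron
    weights : (m,)   linear output weights
    """
    # Hidden layer: Gaussian RBF of the distance between x and each center
    dists = np.linalg.norm(centers - x, axis=1)
    hidden = np.exp(-dists**2 / (2 * sigmas**2))
    # Output layer: linear (weighted-sum) combination of the hidden activations
    return hidden @ weights + bias
```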
Training Process of a Radial Basis Function Neural Network
An RBF neural network is trained in three stages: choosing the centers, determining the spread parameters, and training the output weights.
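A minimal sketch of these three stages, assuming k-means for the centers, a single shared spread chosen by a common heuristic, and least squares for the output weights (the function name and the particular heuristic are assumptions, not the only options):

```python
import numpy as np
from sklearn.cluster import KMeans

def train_rbf(X, y, n_hidden=10):
    # Stage 1: choose the centers, here by clustering the training inputs
    centers = KMeans(n_clusters=n_hidden, n_init=10).fit(X).cluster_centers_

    # Stage 2: spread; one shared sigma from the maximum distance between centers
    d_max = np.max(np.linalg.norm(centers[:, None] - centers[None, :], axis=2))
    sigma = d_max / np.sqrt(2 * n_hidden)

    # Stage 3: output weights by least squares on the hidden-layer activations
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)  # (n, m)
    H = np.exp(-dists**2 / (2 * sigma**2))
    weights, *_ = np.linalg.lstsq(H, y, rcond=None)
    return centers, sigma, weights
```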
Differences between RBF Network (RBFN) and MLP
• RBFN: the computation nodes in the hidden layer are different from those in the output layer. MLP: the hidden and output nodes follow a common computational model.
• RBFN: the hidden layer is non-linear and the output layer is linear. MLP: the hidden and output layers are typically both non-linear.
• RBFN: the argument of the activation function is the Euclidean distance between the input vector and the centre. MLP: each hidden unit computes the inner product of the input vector and its synaptic weight vector.
• RBFN: exponentially decaying, localized characteristics (local approximation). MLP: global approximation to the non-linear input-output mapping.
• RBFN: the hidden nodes operate differently, i.e. they have different models. MLP: the hidden nodes share a common model, though not necessarily the same activation function.
• RBFN: takes the difference of the input vector and the weight (centre) vector. MLP: takes the product of the input vector and the weight vector.
• RBFN: faster training process. MLP: slower training process.
• RBFN: slower in practical use (at prediction time). MLP: faster in practical use.
CNN Training
CNNs are trained using a supervised learning approach. This means that the CNN is
given a set of labeled training images. The CNN then learns to map the input
images to their correct labels.
• Data Preparation: The training images are preprocessed to ensure that they are
all in the same format and size.
• Loss Function: A loss function is used to measure how well the CNN is
performing on the training data. It is typically computed from the discrepancy
between the predicted labels and the actual labels of the training images.
• Optimizer: An optimizer is used to update the weights of the CNN in order to
minimize the loss function.
• Backpropagation: Backpropagation is a technique used to calculate the
gradients of the loss function with respect to the weights of the CNN. The
gradients are then used to update the weights of the CNN via the optimizer,
as in the training-loop sketch below.
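A minimal sketch of these steps, assuming PyTorch and a dummy batch standing in for preprocessed training images (the architecture and hyperparameters are illustrative assumptions):

```python
import torch
import torch.nn as nn

# Dummy preprocessed batch: 8 RGB images of 32x32 pixels, labels from 10 classes (assumption)
images = torch.randn(8, 3, 32, 32)
labels = torch.randint(0, 10, (8,))

# A tiny CNN mapping images to class scores
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 10),
)
loss_fn = nn.CrossEntropyLoss()                           # loss: predicted vs. actual labels
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # optimizer updates the weights

# One training step
optimizer.zero_grad()
outputs = model(images)            # forward pass
loss = loss_fn(outputs, labels)    # measure the error on this batch
loss.backward()                    # backpropagation: gradients of the loss w.r.t. the weights
optimizer.step()                   # weight update that reduces the loss
```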
CNN Evaluation
Disadvantages of CNN
• Complexity: CNNs can be complex and difficult to train, especially for
large datasets.
• Resource-Intensive: CNNs require significant computational resources
for training and deployment.
• Data Requirements: CNNs need large amounts of labeled data for
training.
• Interpretability: CNNs can be difficult to interpret, making it challenging
to understand their predictions.
Ensemble Learning
Ensemble Learning is a machine learning technique that integrates multiple models, called weak learners, to create a single effective model for prediction. It is used to enhance accuracy, reduce variance, and limit overfitting. Here we will learn different ensemble techniques and their algorithms.
Bagging (Bootstrap Aggregating)
• Bagging is a technique that involves creating multiple versions of a model
and combining their outputs to improve overall performance.
• In bagging, several base models are trained on different subsets of the
training data, and their predictions are aggregated to make the final decision,
as sketched below. The subsets are created using bootstrapping, a statistical
technique in which samples are drawn with replacement, meaning some data
points can appear more than once in a subset.
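A minimal bagging sketch, assuming decision trees as the base models and majority voting to aggregate (the function name and the assumption of non-negative integer class labels are illustrative):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagging_predict(X, y, X_test, n_models=10, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    all_preds = []
    for _ in range(n_models):
        # Bootstrap: draw n samples *with replacement*, so some points repeat
        idx = rng.integers(0, n, size=n)
        tree = DecisionTreeClassifier().fit(X[idx], y[idx])
        all_preds.append(tree.predict(X_test))
    all_preds = np.stack(all_preds)  # shape (n_models, n_test)
    # Aggregate: majority vote over the base models (labels assumed to be 0..K-1)
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, all_preds)
```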
Gradient Boosting
Gradient Boosting is a more general approach to boosting that builds models sequentially, with each new model fitting the residual errors of the previous model.
• The models are trained to minimize a loss function, which can be customized for the specific task.
• We can perform both regression and classification tasks using Gradient Boosting.
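A minimal gradient-boosting sketch for regression with squared loss, assuming small decision trees as the base models (the function names and hyperparameters are illustrative assumptions):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gradient_boost_fit(X, y, n_rounds=100, lr=0.1, max_depth=2):
    base = y.mean()                            # initial model: a constant prediction
    pred = np.full(len(y), base)
    trees = []
    for _ in range(n_rounds):
        residuals = y - pred                   # errors of the current model
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residuals)
        pred = pred + lr * tree.predict(X)     # add the new model, scaled by the learning rate
        trees.append(tree)
    return base, trees

def gradient_boost_predict(base, trees, X, lr=0.1):
    pred = np.full(len(X), base)
    for tree in trees:
        pred = pred + lr * tree.predict(X)
    return pred
```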