
NPTEL WEEK 4

Q1. What is optimization? Name the algorithms used for it. (3 marks)

Optimization is the process of training a model iteratively so as to minimize (or maximize) an objective function. We optimize a model by adjusting its parameters at each step, producing a more accurate model with a lower error rate.

Two commonly used algorithms are:

1. Gradient Descent: a minimization algorithm that finds a local minimum of a differentiable function by repeatedly stepping in the direction of the negative gradient.
2. Stochastic Gradient Descent (SGD): here the data points used to compute each gradient step are selected at random (one sample, or a small mini-batch, at a time). "Stochastic" essentially means probabilistic. A minimal sketch of both algorithms is given after this list.
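A minimal Python sketch (assuming NumPy; the least-squares problem, learning rates, and iteration counts are illustrative choices, not part of the notes) contrasting the two algorithms:

import numpy as np

# Fit w so that X @ w approximates y (a simple least-squares problem).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))            # 100 samples, 3 features
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)

def grad(w, Xb, yb):
    # Gradient of the mean squared error on the batch (Xb, yb).
    return Xb.T @ (Xb @ w - yb) / len(yb)

# Gradient descent: every step uses the whole dataset.
w_gd = np.zeros(3)
for _ in range(200):
    w_gd -= 0.1 * grad(w_gd, X, y)

# Stochastic gradient descent: every step uses one randomly chosen sample.
w_sgd = np.zeros(3)
for _ in range(2000):
    i = rng.integers(len(y))
    w_sgd -= 0.01 * grad(w_sgd, X[i:i+1], y[i:i+1])

print("GD estimate :", w_gd)
print("SGD estimate:", w_sgd)

Both estimates should land close to true_w; SGD trades noisier individual steps for much cheaper per-step computation.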

Q2. What is an activation function and why do we use one? (3 marks)

An activation function introduces non-linearity into the output of a neuron. It decides whether a neuron should be activated by computing the weighted sum of the inputs, adding a bias to it, and transforming the result.

WHY DO WE NEED IT: A neural network without an activation function is essentially just a linear regression model. The activation function applies a non-linear transformation to the input, making the network capable of learning and performing more complex tasks.
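A minimal sketch of a single neuron (assuming NumPy; the input, weight, and bias values are hypothetical) showing the weighted sum, the added bias, and the non-linear activation:

import numpy as np

def sigmoid(z):
    # Non-linear activation: squashes any real number into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def neuron(x, w, b):
    z = np.dot(w, x) + b      # weighted sum of inputs plus bias
    return sigmoid(z)         # activation decides the neuron's output

x = np.array([0.5, -1.0, 2.0])   # example inputs
w = np.array([0.8, 0.2, -0.4])   # example weights
b = 0.1                          # example bias
print(neuron(x, w, b))           # a value between 0 and 1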
Q3. How are neural networks trained? Explain with an example. (6 marks*)

1. Training a neural network is essentially an optimization problem. A neural network has three kinds of layers: the input layer, the hidden layers, and the output layer. During training we try to optimize the weights within the model.
2. Each connection between layers has a weight that is initially assigned arbitrarily. During training the weights are constantly updated, moving towards their optimum values.
3. The updates are driven by an optimization algorithm. An optimizer such as SGD (Stochastic Gradient Descent) minimizes the given loss function by adjusting the weights so that the loss becomes as small as possible.
4. During training we supply the model with data and labels for that data, for example images of cats and dogs together with a label saying whether each image is a cat or a dog. Once an image passes through the entire model, an output is produced at the end, for instance 0.75 for the image being a cat and 0.25 for it being a dog. The loss is the error between this prediction and what the image actually is, and SGD minimizes this error to make the model as accurate as possible in its predictions.
5. The model compares its output with the true result and repeats the process, passing the data through over and over again so that SGD can improve accuracy. The weights are adjusted in every iteration to enhance the accuracy of the output. A minimal training-loop sketch is given after this list.
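A minimal training-loop sketch (illustrative only; the synthetic features, labels, network size, and learning rate are hypothetical and not part of the notes), where a tiny one-layer "cat vs dog" classifier is trained with stochastic gradient descent:

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))              # 200 examples, 4 features each (made up)
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # 1 = "cat", 0 = "dog" (synthetic labels)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(4)   # weights start arbitrarily and are updated during training
b = 0.0
lr = 0.1

for epoch in range(20):               # pass the data over and over again
    for i in rng.permutation(len(y)): # SGD: one randomly ordered example at a time
        p = sigmoid(X[i] @ w + b)     # forward pass: prediction, e.g. 0.75 "cat"
        err = p - y[i]                # error between prediction and true label
        w -= lr * err * X[i]          # adjust weights to reduce the loss
        b -= lr * err

acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print("training accuracy:", acc)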
Q4. Define non-linear activation functions and their variants. (6 marks)

Non-linear activation functions perform the mapping between the inputs and the response variables. Their main purpose is to convert the input signal of a node in a neural network into an output signal. That output signal is then used as an input to the next layer in the model.

VARIANTS OF ACTIVATION FUNCTIONS:

1. Linear function: the output is simply proportional to the input, so it applies no non-linear transformation; it is typically used in the output layer for regression tasks.
2. Sigmoid function: a function whose graph is 'S' shaped. In deep learning it is used to add non-linearity to a model, e.g. for regression of bounded quantities such as probabilities between 0 and 1.
3. Tanh function: also known as the hyperbolic tangent function. It is mathematically a shifted and scaled version of the sigmoid function; the two are similar and can be derived from each other. (It generally works better than sigmoid.)
4. ReLU function: stands for Rectified Linear Unit. It is the most widely used activation function, chiefly implemented in the hidden layers of neural networks.
5. Softmax function: a generalization of the sigmoid function that is handy when we are handling classification problems. A simple sketch of each variant is given after this list.
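Minimal sketches (assuming NumPy) of the variants listed above:

import numpy as np

def linear(x, a=1.0):
    return a * x                       # proportional to the input, no non-linearity

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))    # 'S'-shaped, output in (0, 1)

def tanh(x):
    return np.tanh(x)                  # shifted/scaled sigmoid, output in (-1, 1)

def relu(x):
    return np.maximum(0.0, x)          # rectified linear unit, zero for negative inputs

def softmax(x):
    e = np.exp(x - np.max(x))          # subtract the max for numerical stability
    return e / e.sum()                 # probabilities that sum to 1

z = np.array([-1.0, 0.0, 2.0])
print(sigmoid(z), tanh(z), relu(z), softmax(z), sep="\n")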
Q5. Define a neural network in deep learning. How does it work, and what are its types? Explain with a suitable diagram. (12 marks**)

A neural network is a computational model whose network architecture works like the human brain. It has many layers; each layer performs a specific function, and the more complex the network is, the more layers it has. That is why a neural network is also called a multi-layer perceptron.

Working:

A neural network has three kinds of layers. These layers are made up of nodes.

1. The input layer: it picks up the input signals and transfers them to the next layer, gathering the data from the outside world.

2. The hidden layers: one or more hidden layers perform all the back-end calculation tasks. A network can even have zero hidden layers.

3. The output layer: it transmits the final result of the hidden layers' calculations.
The input layer receives the data and passes it on to the hidden layer. The links between the two layers assign a weight to each input at random. Each input is multiplied by its weight, and a bias is added to it. The weighted total is then passed to the activation function, which decides which nodes should fire for feature extraction. The model applies an activation function at the output layer to deliver the output. To reduce error, the weights are adjusted and the error is back-propagated. A sketch of this forward pass is given below.
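A minimal forward-pass sketch (illustrative; the layer sizes, inputs, and random weights are hypothetical) of the flow described above, from input layer to hidden layer to output layer:

import numpy as np

rng = np.random.default_rng(2)

def relu(z):
    return np.maximum(0.0, z)

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

x = np.array([0.2, 0.7, 0.1])    # input layer: data from the outside world

W1 = rng.normal(size=(4, 3))     # weights input -> hidden, assigned at random
b1 = np.zeros(4)                 # bias added to each hidden node
h = relu(W1 @ x + b1)            # weighted sum plus bias, then activation

W2 = rng.normal(size=(2, 4))     # weights hidden -> output
b2 = np.zeros(2)
out = softmax(W2 @ h + b2)       # output activation delivers the final result

print(out)                       # class probabilities that sum to 1

In training, the error between out and the true label would be back-propagated to adjust W1, b1, W2, and b2.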

Types of neural networks:

1. Recurrent Neural Network (RNN): recurrent neural networks are a class of neural networks that are helpful for modeling sequence data. They can produce predictive results on sequential data that other algorithms cannot.
2. Convolutional Neural Network (CNN): this network consists of one or more convolutional layers. A convolutional layer applies a convolution to its input before passing it on to the next layer. CNNs are used in natural language processing and image recognition.
3. Radial Basis Function Neural Network (RBFNN): radial basis function networks are a type of artificial neural network used for function approximation problems. They are distinguished from other neural networks by their universal approximation capability and faster learning speed.
4. Feedforward Neural Network (FNN): in this network, the information moves in only one direction, forward, from the input nodes, through the hidden nodes (if any), and to the output nodes. There are no cycles or loops in the network.
5. Modular Neural Network: a neural network characterized by a series of independent neural networks moderated by some intermediary. Each independent network serves as a module and operates on separate inputs to accomplish a subtask of the overall task the network is meant to perform.
