
MID - Notes

Saturday, February 25, 2023 9:39 PM

Question: Types of Neural Networks. Explain with diagrams.

Answer:
Three types of neural networks are explained with diagrams below.

1. Feedforward Network:

A feedforward neural network is a type of artificial neural network in which information flows in one direction, from input to output, without loops or cycles. It consists of an input layer of source neurons, at least one middle or hidden layer of computational neurons, and an output layer of computational neurons. The input layer receives the initial input data, and the output layer produces the final output.

Here's an example diagram of a feedforward neural network with two hidden layers:

[Diagram: input layer → first hidden layer → second hidden layer → output layer]
In this example, the input layer is connected to the first hidden layer, which is in turn connected to the second hidden layer, which is connected to the output layer. Each node in the hidden layers applies a nonlinear activation function to its inputs, allowing the network to learn complex mappings between input and output.

The number of nodes in each hidden layer and the choice of activation function are hyperparameters that must be chosen based on the specific problem being solved. The output layer typically uses a linear activation function for regression problems and a softmax activation function for classification problems.
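As a concrete illustration, here is a minimal sketch of the forward pass in Python with NumPy. The layer sizes, random weights, and the choice of ReLU hidden activations are illustrative assumptions, not something specified in these notes; the softmax output follows the classification case described above.

```python
import numpy as np

def relu(z):
    # Nonlinear activation applied in the hidden layers
    return np.maximum(0.0, z)

def softmax(z):
    # Softmax output activation, typical for classification problems
    e = np.exp(z - z.max())
    return e / e.sum()

# Illustrative sizes: 4 inputs -> hidden layers of 5 and 3 -> 2 outputs
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(5, 4)), np.zeros(5)
W2, b2 = rng.normal(size=(3, 5)), np.zeros(3)
W3, b3 = rng.normal(size=(2, 3)), np.zeros(2)

def forward(x):
    # Information flows one way: input -> hidden 1 -> hidden 2 -> output
    h1 = relu(W1 @ x + b1)
    h2 = relu(W2 @ h1 + b2)
    return softmax(W3 @ h2 + b3)

print(forward(np.array([1.0, 0.5, -0.3, 0.8])))  # probabilities over 2 classes
```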

2. Hopfield Network:
The Hopfield network is a type of artificial neural network that
uses feedback connections to store and recall patterns from
memory. It is commonly used for associative memory and
pattern recognition tasks. The network is initialized with a set of
pattern vectors that are stored in the weights of the network.
When a new input pattern vector is presented to the network, it
activates the nodes and iteratively updates their activation until
the network reaches a stable state. The resulting pattern vector
is then compared to the stored patterns, and if it matches one of
them, the network outputs that pattern as the recalled pattern.
The Hopfield network is a powerful tool for associative memory
and pattern recognition, with potential applications in areas such
as image and speech recognition.

The Hopfield network uses McCulloch and Pitts neurons with the sign activation function as its computing element:

$$y = \operatorname{sign}(x) = \begin{cases} +1, & \text{if } x \geq 0 \\ -1, & \text{if } x < 0 \end{cases}$$

The current state of the Hopfield network is determined by the current outputs of all neurons, y1, y2, ..., yn.

Thus, for a single-layer n-neuron network, the state can be defined by the state vector as:

$$\mathbf{Y} = [\, y_1 \;\; y_2 \;\; \cdots \;\; y_n \,]^T$$

In the Hopfield network, synaptic weights between neurons are usually represented in matrix form as follows:

$$\mathbf{W} = \sum_{m=1}^{M} \mathbf{Y}_m \mathbf{Y}_m^T - M\,\mathbf{I}$$

where M is the number of states to be memorized by the network, Ym is the n-dimensional binary vector, I is the n × n identity matrix, and superscript T denotes matrix transposition.
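To make the storage rule and the sign-based recall concrete, here is a small Python sketch. The bipolar patterns and the synchronous update loop are illustrative assumptions (classic Hopfield recall updates neurons asynchronously); the weight matrix follows the formula above.

```python
import numpy as np

def store(patterns):
    # W = sum_m Y_m Y_m^T - M I  (store M bipolar patterns, zero diagonal)
    n = patterns.shape[1]
    return sum(np.outer(p, p) for p in patterns) - len(patterns) * np.eye(n)

def recall(W, x, steps=20):
    # Iterate the sign activation until the network reaches a stable state
    y = x.copy()
    for _ in range(steps):
        y_new = np.where(W @ y >= 0, 1, -1)
        if np.array_equal(y_new, y):  # stable state: pattern recalled
            break
        y = y_new
    return y

# Two illustrative bipolar patterns of length 6
patterns = np.array([[ 1,  1, 1, -1, -1, -1],
                     [-1, -1, 1,  1,  1, -1]])
W = store(patterns)
noisy = np.array([1, -1, 1, -1, -1, -1])  # corrupted copy of pattern 0
print(recall(W, noisy))                   # converges back to pattern 0
```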

3. Kohonen Network:
The Kohonen network, also called a Self-Organizing Map (SOM), is an unsupervised neural network that maps high-dimensional input data onto a 2D grid. The nodes in the competition layer compete to match the input data; the winning node, known as the Best Matching Unit (BMU), and its neighboring nodes adjust their weights to become better matches. The network learns to organize the input data on the map, allowing intuitive visualization and understanding of high-dimensional data. Kohonen networks have applications in image processing, pattern recognition, and clustering, and are a powerful tool for data visualization and dimensionality reduction.

The competitive learning rule defines the change Δwij applied to synaptic weight wij as

$$\Delta w_{ij} = \begin{cases} \alpha\,(x_i - w_{ij}), & \text{if neuron } j \text{ wins the competition} \\ 0, & \text{if neuron } j \text{ loses the competition} \end{cases}$$

where xi is the input signal and α is the learning rate parameter.

The matching criterion is equivalent to the minimum Euclidean distance between vectors. The Euclidean distance between a pair of n-by-1 vectors X and Wj is defined by

$$d = \lVert \mathbf{X} - \mathbf{W}_j \rVert = \left[\, \sum_{i=1}^{n} (x_i - w_{ij})^2 \,\right]^{1/2}$$

where xi and wij are the ith elements of the vectors X and Wj, respectively.

To identify the winning neuron jX that best matches the input vector X, we may apply the following condition:

$$j_X = \min_{j} \, \lVert \mathbf{X} - \mathbf{W}_j \rVert, \qquad j = 1, 2, \ldots, m$$

where m is the number of neurons in the Kohonen layer.
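The pieces above fit together in a short Python sketch of competitive learning. The 4×4 grid, learning rate, and random training data are illustrative assumptions, and the neighborhood update and decay schedules of a full SOM are omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 4 * 4                 # 3-dimensional inputs, 4x4 Kohonen layer
W = rng.random((m, n))          # one weight vector W_j per competing neuron
alpha = 0.5                     # learning rate parameter

def winner(x):
    # j_X = min_j ||X - W_j||: minimum Euclidean distance picks the BMU
    return np.argmin(np.linalg.norm(W - x, axis=1))

def update(x):
    # Competitive learning rule: only the winning neuron's weights change,
    # delta w_ij = alpha * (x_i - w_ij)
    j = winner(x)
    W[j] += alpha * (x - W[j])

# Train on random inputs; a full SOM would also pull the BMU's grid
# neighbors toward x, shrinking alpha and the neighborhood over time.
for _ in range(1000):
    update(rng.random(n))
print(winner(np.array([0.2, 0.8, 0.5])))  # index of the best matching unit
```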

