
Artificial Neural Network

These are computational models inspired by the human brain. Many recent advances in Artificial Intelligence, including voice recognition, image recognition, and robotics, are built on them. They are biologically inspired simulations performed on a computer to carry out specific tasks such as -
 Clustering

 Classification

 Pattern Recognition


In general, an Artificial Neural Network is a biologically inspired network of artificial neurons configured to perform specific tasks. These biological methods of computing are regarded as the next major advancement in the computing industry.

Neural Network

The term ‘Neural’ originates from ‘neuron’, the basic functional unit of the human (animal) nervous system: nerve cells present in the brain and other parts of the human (animal) body. A neural network is a group of algorithms that identify the underlying relationships in a set of data, much as the human brain does. A neural network can adapt to changing input, so the network produces the best possible result without redesigning the output criteria.

Advantages and Disadvantages

The advantages of neural networks are listed below:

 A neural network can perform tasks that a linear program cannot.
 When an element of the neural network fails, its parallel nature allows the rest of the network to continue without any problem.
 A neural network learns from examples, so reprogramming is not necessary.
 It can be implemented in any application.
 It can be performed without any problem.

The disadvantages of neural networks are described below:

 The neural network needs training to operate.


 The architecture of a neural network is different
from the architecture of microprocessors. Therefore,
emulation is necessary.
 Large neural networks require high processing time.


What are the parts of a Neuron and their Functions?

The typical nerve cell of the human brain comprises four parts:
Function of Dendrite

It receives signals from other neurons.

Soma (cell body)

It sums all the incoming signals to generate input.

Axon Structure

When the sum reaches a threshold value, the neuron fires,


and the signal travels down the axon to the other neurons.

Synapses Working

The point of interconnection of one neuron with other


neurons. The amount of signal transmitted depends upon
the strength (synaptic weights) of the connections.

The connections can be inhibitory (decreasing strength) or excitatory (increasing strength) in nature. So, a neural network, in general, is a connected network of billions of neurons with trillions of interconnections between them.
Difference between Brain and Computer

The difference between Artificial Neural Networks (ANN) and Biological Neural Networks (BNN):

| Characteristics | Artificial Neural Network (ANN) | Biological (Real) Neural Network (BNN) |
| --- | --- | --- |
| Speed | Faster in processing information. Response time is in nanoseconds. | Slower in processing information. Response time is in milliseconds. |
| Processing | Serial processing. | Massively parallel processing. |
| Size & Complexity | Less size and complexity. It does not perform complex pattern recognition tasks. | A highly complex and dense network of interconnected neurons, of the order of 10^11 neurons with 10^15 interconnections. |
| Storage | Information storage is replaceable, meaning new data replaces old data. | Information storage is adaptable, meaning new information is added by adjusting the interconnection strengths without destroying old information. |
| Fault tolerance | Fault intolerant. Corrupted information cannot be retrieved in case of failure of the system. | Fault tolerant. Information is stored in a distributed manner, so performance degrades gracefully under partial damage. |
| Control Mechanism | There is a control unit for controlling computing activities. | No specific control mechanism external to the computing task. |

Artificial Neural Networks and Biological Neural Networks

Neural Networks resemble the human brain in the following two ways -
 A neural network acquires knowledge through learning.
 A neural network's knowledge is stored within the inter-neuron connection strengths, known as synaptic weights.
| Von Neumann Architecture Based Computing | ANN Based Computing |
| --- | --- |
| Serial processing - instructions and problem rules are processed one at a time (sequentially). | Parallel processing - several processors perform simultaneously (multitasking). |
| Functions logically with a set of if & else rules - a rule-based approach. | Functions by learning patterns from given inputs (image, text or video, etc.). |
| Programmable with higher-level languages such as C, Java, C++, etc. | The ANN is, in essence, the program itself. |
| Requires either big or error-prone parallel processors. | Uses application-specific multi-chips. |
Artificial Neural Network (ANN) Vs Biological Neural Network (BNN)

 The Biological Neural Network's dendrites are analogous to the weighted inputs, based on their synaptic interconnections, in the ANN.
 The cell body is comparable to the artificial neuron unit, comprising a summation and a threshold unit.
 The axon carries the output, analogous to the output unit of the ANN. So, an ANN is modeled on the working of basic biological neurons.

How does an Artificial Neural Network work?
 It can be viewed as a weighted directed graph in which artificial neurons are nodes, and directed edges with weights are connections between neuron outputs and neuron inputs.
 The Artificial Neural Network receives information from the external world in the form of patterns and images, represented as vectors. These inputs are designated by the notation x(n) for n inputs.
 Each input is multiplied by its corresponding weight. Weights are the information used by the neural network to solve a problem. Typically, a weight represents the strength of the interconnection between neurons inside the Neural Network.
 The weighted inputs are all summed up inside the computing unit (artificial neuron). In case the weighted sum is zero, a bias is added to make the output non-zero or to scale up the system response. The bias input is always equal to ‘1’ and carries its own weight.
 The sum can correspond to any numerical value from 0 to infinity. To limit the response to the desired value, a threshold value is set up. For this, the sum is passed through an activation function.
 The activation function is set as the transfer function used to get the desired output. There are linear as well as nonlinear activation functions.
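To make these steps concrete, here is a minimal sketch of a single artificial neuron in Python. The input values, weights, bias, and threshold are illustrative assumptions, not values from the text.

```python
def artificial_neuron(inputs, weights, bias, threshold=0.0):
    """Compute the output of a single artificial neuron.

    Each input x_i is multiplied by its weight w_i, the products are
    summed, and the bias (an input fixed at 1 with its own weight) is
    added. The sum is then passed through a simple threshold
    activation to limit the response.
    """
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias * 1.0
    return 1 if weighted_sum > threshold else 0

# Hypothetical example: three inputs with hand-picked weights.
x = [0.5, -1.0, 2.0]
w = [0.4, 0.7, 0.1]
print(artificial_neuron(x, w, bias=0.05))   # fires (1) only if the weighted sum exceeds the threshold
```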
What are the commonly used activation functions?
Some of the commonly used activation functions are binary, sigmoidal (linear), and tan hyperbolic sigmoidal (nonlinear) functions.
 Binary - The output has only two values, either 0 or 1. For this, a threshold value is set up. If the net weighted input is greater than the threshold, the output is taken as one, otherwise zero.
 Sigmoidal Hyperbolic - This function has an ‘S’ shaped curve. Here the tan hyperbolic function is used to approximate the output from the net input. The function is defined as f(x) = 1 / (1 + exp(-λx)), where λ is the steepness parameter.
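Below is a minimal sketch of these two activation functions, the binary threshold and the sigmoid with steepness parameter λ; the sample inputs, the threshold, and the λ value are illustrative.

```python
import math

def binary_activation(net_input, threshold=0.0):
    """Binary activation: output is 1 if the net weighted input
    exceeds the threshold, otherwise 0."""
    return 1 if net_input > threshold else 0

def sigmoid_activation(net_input, steepness=1.0):
    """Sigmoidal activation f(x) = 1 / (1 + exp(-lambda * x)),
    an S-shaped curve squashing any input into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-steepness * net_input))

for x in (-2.0, 0.0, 2.0):
    print(x, binary_activation(x), round(sigmoid_activation(x, steepness=2.0), 4))
```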

What are the various types of Neural Networks?

| Parameter | Types | Description |
| --- | --- | --- |
| Based on the connection pattern | Feedforward, Recurrent | Feedforward - the graph has no loops. Recurrent - loops occur because of feedback. |
| Based on the number of hidden layers | Single-layer, Multi-layer | Single-layer - having one hidden layer, e.g., Single Perceptron. Multi-layer - having multiple hidden layers, e.g., Multilayer Perceptron. |
| Based on the nature of weights | Fixed, Adaptive | Fixed - weights are fixed a priori and not changed at all. Adaptive - weights are updated during training. |
| Based on the memory unit | Static, Dynamic | Static - memoryless unit; the current output depends only on the current input, e.g., Feedforward network. Dynamic - memory unit; the output depends on the current input as well as previous outputs, e.g., Recurrent Neural Network. |

Neural Network Architecture Types


 Perceptron Model in Neural Networks
 Radial Basis Function Neural Network
 Multilayer Perceptron Neural Network
 Recurrent Neural Network
 Long Short-Term Memory Neural Network (LSTM)
 Hopfield Network
 Boltzmann Machine Neural Network
 Convolutional Neural Network
 Modular Neural Network
 Physical Neural Network

 Perceptron Model

This Neural Network has two input units and one output unit with no hidden layers. It is also known as a ‘single-layer perceptron.’
 Radial Basis Function
These networks are similar to the feed-forward Neural Network, except that a radial basis function is used as the activation function of the neurons.
 Multilayer Perceptron

These networks use more than one hidden layer of


neurons, unlike single-layer perceptron. These are also
known as Deep Feedforward Neural Networks.
 Recurrent

A type of Neural Network in which the hidden layer neurons have self-connections, so it possesses memory. At any instant, a hidden layer neuron receives activation both from the lower layer and from its own previous activation value.
 Long Short-Term Memory Neural Network (LSTM)

The type of Neural Network in which a memory cell is incorporated into the hidden layer neurons is called an LSTM network.
 Hopfield Network

A fully interconnected network of neurons in which each neuron is connected to every other neuron. The network is trained with input patterns by setting the neuron values to the desired pattern, after which its weights are computed and then left unchanged. Once trained for one or more patterns, the network will converge to the learned patterns. In this respect it differs from other Neural Networks.
 Boltzmann Machine Neural Network

These networks are similar to the Hopfield network, except that some neurons are input neurons, while others are hidden. The weights are initialized randomly and learned during training.
 Convolutional Neural Network
These networks apply learned convolutional filters to local regions of the input and are widely used for image-related tasks such as image classification and object recognition.
 Modular Neural Network

It is a combined structure of different types of neural networks, such as the multilayer perceptron, the Hopfield Network, the Recurrent Neural Network, etc., each incorporated as a single module into the network to perform an independent subtask of the complete task.
 Physical Neural Network

In this type of Artificial Neural Network, electrically adjustable resistive material is used to emulate synapses, instead of simulating them in software.


Hardware Architecture for Neural Networks


Two types of methods are used to implement hardware for neural networks:
 Software simulation in conventional computer

 A special hardware solution for decreasing


execution time.
When Neural Networks are used with fewer processing units and weights, software simulation is performed directly on the computer, e.g., for simple voice recognition. When Neural Network algorithms develop to the point where useful things can be done with thousands of neurons and tens of thousands of synapses, high-performance neural network hardware becomes essential for practical operation, e.g., GPUs (Graphical Processing Units) for Deep Learning algorithms in object recognition, image classification, etc. The implementation's performance is measured in connections per second (CPS), i.e., the number of data chunks transported through the neural network's edges per second, while the performance of the learning algorithm is measured in connection updates per second (CUPS).

Learning Techniques
The neural network learns by adjusting its weights and bias
(threshold) iteratively to yield the desired output. These
are also called free parameters. For learning to take place,
the Neural Network is trained first. The training is
performed using a defined set of rules, also known as the
learning algorithm.

Training Algorithms

 Gradient Descent Algorithm


This is the simplest training algorithm, used in the case of a supervised training model. If the actual output differs from the target output, the difference, or error, is computed. The gradient descent algorithm then changes the weights of the network in such a manner as to minimize this error.
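As a rough illustration of the idea, the following sketch applies a gradient descent weight update to a single linear neuron with squared error; the tiny dataset, learning rate, and iteration count are assumptions made for the example.

```python
# Gradient descent for a single linear neuron: y_hat = w*x + b,
# minimizing the squared error (y_hat - y)^2 on a tiny made-up dataset.
data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]   # (input, target) pairs
w, b = 0.0, 0.0
learning_rate = 0.05

for _ in range(200):
    for x, y in data:
        y_hat = w * x + b
        error = y_hat - y               # difference between actual and target output
        w -= learning_rate * error * x  # move the weight against the gradient
        b -= learning_rate * error
print(round(w, 3), round(b, 3))          # approaches roughly w = 2, b = 0
```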
 Back Propagation Algorithm
It is an extension of the gradient-based delta learning rule. Here, after the error (the difference between the target and the actual output) is found, it is propagated backward from the output layer to the input layer via the hidden layer. It is used in the case of multi-layer neural networks.
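The following sketch shows backpropagation on a tiny network with one hidden layer: the output error is propagated back through the hidden layer, and both weight matrices are updated. The XOR data, layer sizes, learning rate, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: learn XOR with a 2-4-1 multi-layer network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input  -> hidden weights
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output weights
lr = 0.5
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass through the hidden and output layers.
    H = sigmoid(X @ W1 + b1)
    Y = sigmoid(H @ W2 + b2)
    # Backward pass: the output error is propagated to the hidden layer.
    dY = (Y - T) * Y * (1 - Y)        # delta at the output layer
    dH = (dY @ W2.T) * H * (1 - H)    # delta at the hidden layer
    W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(axis=0)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(axis=0)

print(Y.round(2).ravel())   # should approach [0, 1, 1, 0]
```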

Learning Data Sets


 Training Data Set
A set of examples used for learning, i.e., to fit the parameters [weights] of the network. One epoch comprises one full training cycle on the training set.
 Validation Data Set

A set of examples used to tune the parameters [i.e., the architecture] of the network, for example, to choose the number of hidden units in a Neural Network.
 Test Data Set

A set of examples used only to assess the performance [generalization] of a fully specified network, i.e., to check whether it successfully predicts the output for known inputs.
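A minimal sketch of dividing a dataset into these three sets; the 70/15/15 proportions and the helper name split_dataset are illustrative choices, not something prescribed by the text.

```python
import random

def split_dataset(examples, train_frac=0.70, val_frac=0.15, seed=42):
    """Shuffle the examples and split them into training,
    validation, and test sets."""
    examples = list(examples)
    random.Random(seed).shuffle(examples)
    n_train = int(len(examples) * train_frac)
    n_val = int(len(examples) * val_frac)
    train = examples[:n_train]                  # used to fit the weights
    val = examples[n_train:n_train + n_val]     # used to tune architecture choices
    test = examples[n_train + n_val:]           # used only to assess generalization
    return train, val, test

train, val, test = split_dataset(range(100))
print(len(train), len(val), len(test))   # 70 15 15
```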

What are the Five Algorithms to Train a Neural Network?


 Hebbian Learning Rule
 Self - Organizing Kohonen Rule
 Hopfield Network Law
 LMS algorithm (Least Mean Square)
 Competitive Learning

What is the architecture of a Neural Network?

A typical Neural Network contains a large number of artificial neurons, called units, arranged in a series of layers. A typical Artificial Neural Network comprises the following layers -

 Input layer - It contains the units (artificial neurons) that receive input from the outside world, on which the network will learn, recognize, or otherwise process.
 Output layer - It contains the units that produce the network's response to the learned task.
 Hidden layer - These units sit between the input and output layers. The hidden layer's job is to transform the input into something that the output units can use.
In a fully connected Neural Network, each hidden neuron is linked to every neuron in the previous (input) layer and in the next (output) layer, as sketched below.
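As a sketch of this fully connected layout, the snippet below wires up an input, hidden, and output layer with dense weight matrices; the layer sizes and tanh activation are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

n_inputs, n_hidden, n_outputs = 4, 3, 2   # arbitrary layer sizes

# In a fully connected network every hidden unit is linked to every
# input unit, and every output unit to every hidden unit, so the
# weight matrices have shapes (inputs x hidden) and (hidden x outputs).
W_in_hidden = rng.normal(size=(n_inputs, n_hidden))
W_hidden_out = rng.normal(size=(n_hidden, n_outputs))

x = rng.normal(size=n_inputs)              # one input vector from the outside world
hidden = np.tanh(x @ W_in_hidden)          # hidden layer transforms the input
output = np.tanh(hidden @ W_hidden_out)    # output layer produces the response
print(output.shape)                        # (2,)
```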
What are the Learning Techniques in Neural Networks?

Here is a list of Learning Techniques

 Supervised Learning
 Unsupervised Learning
 Reinforcement Learning
 Offline Learning
 Online Learning
Each of these is discussed below.

 Supervised Learning
In this learning, the training data is fed to the network, and the desired output is known. The weights are adjusted until the output matches the desired value.
 Unsupervised Learning
Only the input data is used to train the network; the desired output is not known. The network classifies the input data and adjusts the weights by extracting features from the input data.
 Reinforcement Learning

Here, the output value is unknown, but the network


provides feedback on whether the output is right or wrong.
It is Semi-Supervised Learning.

 Offline Learning

The weight vector adjustment and threshold adjustment


are made only after the training set is shown to the
network. It is also called Batch Learning.

 Online Learning
The adjustment of the weight and threshold is made after
presenting each training sample to the network.
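The difference between the two schedules can be sketched as follows for a single weight: offline (batch) learning accumulates the update over the whole training set before applying it, while online learning applies an update after each sample. The one-weight model, data, and learning rate are illustrative.

```python
data = [(1.0, 2.0), (2.0, 3.9), (3.0, 6.1)]   # made-up (input, target) pairs
lr = 0.05

def batch_update(w):
    """Offline / batch learning: adjust the weight only after the
    whole training set has been shown to the network."""
    total = sum((w * x - y) * x for x, y in data)
    return w - lr * total

def online_update(w):
    """Online learning: adjust the weight after each training sample."""
    for x, y in data:
        w -= lr * (w * x - y) * x
    return w

print(batch_update(1.0), online_update(1.0))
```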

Learning and Development in Neural Networks


Learning occurs when the weights inside the network get updated after many iterations. For example, suppose we have inputs in the form of patterns for two different classes of patterns, I and O, as shown below, with b the bias and y the desired output.
Pattern y x1 x2 x3 x4 x5 x6 x7 x8 x9
I 1 1 1 1 -1 1 -1 1 1 1
O -1 1 1 1 1 -1 1 1 1 1
We want to classify each input pattern as either pattern ‘I’ or ‘O.’ The following steps are performed:
 Nine inputs from x1 - x9 and bias b (input having
weight value 1) are fed to the network for the first
pattern.
 Initially, weights are initialized to zero.

 Then weights are updated for each neuron using the


formulae: Δ wi = xi y for i = 1 to 9 (Hebb’s Rule)
 Finally, new weights are found using the formulae:

 wi(new) = wi(old) + Δwi

 Wi(new) = [1 1 1 -1 1 -1 1 1 1]

 The second pattern is input to the network. This time, the weights are not initialized to zero; the initial weights used here are the final weights obtained after presenting the first pattern. By doing so, the network retains what it has already learned.
 The steps from 1 - 4 are repeated for second inputs.

 The new weights are Wi(new) = [0 0 0 -2 2 -2 0 0 0]


So, these weights correspond to the learning ability of the
network to classify the input patterns successfully.
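The calculation above can be reproduced with a short script that applies Hebb's rule Δwi = xi·y to both patterns in turn, using the same input vectors and desired outputs given in the text.

```python
# Hebb's rule applied to the two patterns 'I' and 'O' from the text.
patterns = {
    "I": ([1, 1, 1, -1, 1, -1, 1, 1, 1], 1),    # inputs x1..x9, desired output y = 1
    "O": ([1, 1, 1, 1, -1, 1, 1, 1, 1], -1),    # desired output y = -1
}

weights = [0] * 9                                # weights start at zero
for name, (x, y) in patterns.items():
    deltas = [xi * y for xi in x]                # Hebb's rule: delta_wi = xi * y
    weights = [w + d for w, d in zip(weights, deltas)]
    print(name, weights)
# After 'I': [1, 1, 1, -1, 1, -1, 1, 1, 1]
# After 'O': [0, 0, 0, -2, 2, -2, 0, 0, 0]
```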

What are the use-cases of Neural Networks?

There are four broad use-cases of Neural Networks:

 Classification Neural Network


 Prediction Neural Network
 Clustering Neural Network
 Association Neural Network
Classification Neural Network

A Neural Network can be trained to classify a given pattern


or dataset into a predefined class. It uses Feedforward
Networks.

Prediction Neural Network

A Neural Network can be trained to produce outputs that


are expected from a given input. E.g., - Stock market
prediction.

Clustering Neural Network

The Neural Network can identify unique features of the data and classify them into different categories without any prior knowledge of the data. The following networks are used for clustering -

 Competitive networks
 Adaptive Resonance Theory Networks
 Kohonen Self-Organizing Maps.
Association Neural Network

The Neural Network is trained to remember a particular pattern. When a noisy pattern is presented to the network, the network associates it with the closest pattern in its memory or discards it. E.g., Hopfield Networks, which perform recognition, classification, clustering, etc.

What are the applications of neural networks?


Top applications of neural networks:
 Neural Network for Machine Learning

 Face Recognition using it

 Neuro-Fuzzy Model and its applications

 Neural Networks for data-intensive applications


Neural Networks for Pattern Recognition

Pattern recognition is the study of how machines can


observe the environment, learn to distinguish patterns of
interest from their background, and make sound and
reasonable decisions about the patterns' categories. Some
examples of the pattern are - fingerprint images, a
handwritten word, a human face, or a speech signal. Given
an input pattern, its recognition involves the following tasks -
 Supervised classification - the input pattern is identified as a member of a predefined class.
 Unsupervised classification - the pattern is assigned to a hitherto-unknown class.
So, the recognition problem here is essentially a classification or categorization task. The design of pattern recognition systems usually involves the following three aspects -
 Data acquisition and preprocessing

 Data representation

 Decision Making

Approaches For Pattern Recognition


 Template Matching
 Statistical

 Syntactic Matching
The following Neural Network architectures are used for Pattern Recognition -
 Multilayer Perceptron

 Kohonen SOM (Self Organizing Map)

 Radial Basis Function Network (RBF)


Neuro-Fuzzy Model and its applications

What is Fuzzy logic?

Fuzzy logic refers to logic developed to express degrees of truth by assigning values between 0 and 1, unlike traditional Boolean logic, which allows only 0 or 1.
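As a small sketch of the idea, a fuzzy membership function maps a measurement to a degree of truth between 0 and 1 rather than a hard true/false; the 'warm temperature' function below is purely illustrative.

```python
def warm_membership(temp_celsius):
    """Degree to which a temperature counts as 'warm': 0 below 15 degrees C,
    1 above 25 degrees C, and a linear ramp in between (a fuzzy truth value)."""
    if temp_celsius <= 15:
        return 0.0
    if temp_celsius >= 25:
        return 1.0
    return (temp_celsius - 15) / 10.0

for t in (10, 18, 22, 30):
    print(t, warm_membership(t))   # 0.0, 0.3, 0.7, 1.0
```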

What is Fuzzy logic role in Neural networks?

Fuzzy logic and neural networks have one thing in common: they can both be used to solve pattern recognition problems and other problems that do not involve a mathematical model.

What are the applications of Neuro-Fuzzy Model?

Systems combining both fuzzy logic and neural networks are called neuro-fuzzy systems. These hybrid systems combine the advantages of neural networks and fuzzy logic to perform better than either alone. Fuzzy logic and neural networks have been integrated for use in the following applications -

 Automotive engineering
 Applicant screening for jobs
 Control of cranes
 Monitoring of glaucoma
In a hybrid (neuro-fuzzy) model, Neural Network learning algorithms are fused with the fuzzy reasoning of fuzzy logic. The neural network determines the values of the parameters, while the if-then rules are handled by fuzzy logic.

Neural Network for Machine Learning


 Multilayer Perceptron (supervised classification)
 Back Propagation Network (supervised
classification)
 Hopfield Network (for pattern association)
 Deep Neural Networks (unsupervised clustering)

Neural Networks for data-intensive applications


Neural networks have been successfully applied to a broad spectrum of data-intensive applications, such as:
| Application | Architecture / Algorithm | Activation Function |
| --- | --- | --- |
| Process modeling and control | Radial Basis Network | Radial Basis |
| Machine Diagnostics | Multilayer Perceptron | Tan-Sigmoid Function |
| Portfolio Management | Classification Supervised Algorithm | Tan-Sigmoid Function |
| Target Recognition | Modular Neural Network | Tan-Sigmoid Function |
| Medical Diagnosis | Multilayer Perceptron | Tan-Sigmoid Function |
| Credit Rating | Logistic Discriminant Analysis with ANN, Support Vector Machine | Logistic function |
| Targeted Marketing | Back Propagation Algorithm | Logistic function |
| Voice recognition | Multilayer Perceptron, Deep Neural Networks (Convolutional Neural Networks) | Logistic function |
| Financial Forecasting | Backpropagation Algorithm | Logistic function |
| Intelligent searching | Deep Neural Network | Logistic function |
| Fraud detection | Gradient Descent Algorithm and Least Mean Square (LMS) algorithm | Logistic function |

Face Recognition using Artificial Neural Networks

Face recognition entails comparing an image against a database of saved faces to identify the person in the input picture. It involves dividing each image into two parts: one containing the targets (faces) and one containing the background. The associated task of face detection reflects the fact that images need to be analyzed and faces detected before they can be recognized.


What is learning rule in neural network?

The learning rule is a type of mathematical logic. It helps the network learn from the present conditions and improve its efficiency and performance. The learning procedure of the brain modifies its neural structure: the strengthening or weakening of its synaptic connections depends on their activity. Learning rules in the Neural Network:

 Hebbian learning rule: It determines how to modify the weights of the nodes of a network.
 Perceptron learning rule: The network starts its learning by assigning a random value to each weight.
 Delta learning rule: The modification to a node's synaptic weight is equal to the product of the error and the input.
 Correlation learning rule: It is similar to supervised learning.
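As a minimal sketch of the delta learning rule described above, the update below changes each weight by the learning rate times the error times the corresponding input; the data, learning rate, and iteration count are illustrative assumptions.

```python
# Delta rule: delta_w_i = learning_rate * (target - output) * x_i
samples = [([1.0, 0.5], 1.0), ([0.2, 0.9], 0.0)]   # made-up (inputs, target) pairs
weights = [0.0, 0.0]    # the perceptron rule would instead start these at random values
lr = 0.1

for _ in range(20):
    for x, target in samples:
        output = sum(w * xi for w, xi in zip(weights, x))
        error = target - output
        weights = [w + lr * error * xi for w, xi in zip(weights, x)]
print([round(w, 3) for w in weights])
```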
