Neural Networks
https://thecleverprogrammer.com/2020/07/19/image-classification-with-ann/
While artificial neurons are inspired by biological neurons and share some key characteristics, they
are simplified mathematical models designed for specific computational tasks. Biological neurons are
highly complex and versatile, serving various functions in the human brain beyond the scope of
artificial neurons.
The following points summarize the similarities and differences between artificial neurons in neural networks and biological neurons:
- Information processing: Artificial neurons receive inputs, perform computations, and generate outputs; biological neurons receive signals, process them, and transmit signals.
- Learning: Artificial neurons adjust weights during training to learn patterns and tasks, using training techniques including backpropagation; biological neurons exhibit synaptic plasticity.
- An input layer: The first layer, which accepts input data and passes it on to the layers that follow.
- One or more hidden layers: Layers between the input and output layers that process data via interconnected neurons.
- An output layer: The layer at the end of the network, which generates predictions or outputs.
- Neurons: The fundamental processing units, which take in inputs, apply weights and biases, and then produce an output through an activation function.
- Weights: Parameters that control the strength of the connections between neurons and thus how information flows through the network.
- Biases: Additional neuron parameters that shift the input to the activation function, adjusting the network's behavior.
- Activation function: A non-linear function applied to the weighted sum of inputs in each neuron, introducing non-linearity into the network.
- Forward propagation: The process of generating predictions by passing input data through the network from the input layer to the output layer.
- Backpropagation: The process of computing gradients of the error with respect to the weights and biases so that they can be adjusted during training.
- Loss function: A function that gauges the performance of the network by measuring the difference between predicted results and actual labels.
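The pieces above (weights, biases, activation, forward propagation, loss) can be sketched in a few lines of Python. This is a minimal illustration with arbitrary random weights, assuming NumPy is available; it is not a production implementation:

```python
import numpy as np

def sigmoid(z):
    # Activation function: a non-linear squashing of the weighted sum
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    # Forward propagation: input layer -> hidden layer -> output layer
    h = sigmoid(W1 @ x + b1)      # hidden neurons: weights, bias, activation
    return sigmoid(W2 @ h + b2)   # output neuron

# Toy network: 2 inputs -> 2 hidden neurons -> 1 output
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 2)), np.zeros(2)
W2, b2 = rng.normal(size=(1, 2)), np.zeros(1)

x, y = np.array([0.5, -0.2]), np.array([1.0])
y_hat = forward(x, W1, b1, W2, b2)

# Loss function: squared error between prediction and label
loss = float(((y_hat - y) ** 2).sum())
print(f"prediction={y_hat[0]:.3f}, loss={loss:.3f}")
```

During training, backpropagation would compute the gradient of this loss with respect to W1, b1, W2, and b2 and nudge them downhill; that step is omitted here for brevity.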
Neural Networks

Pros:
- Can often work more efficiently, and for longer, than humans.
- Can be programmed to learn from prior outcomes and strive to make smarter future calculations.
- Are continually being extended to new fields with more difficult problems.
- When an element of the network fails, its parallel nature lets it continue without any problem.

Cons:
- Still rely on hardware that may require labour and expertise to maintain.
- May take long periods of time to develop the code and algorithms.
- Usually report an estimated range or estimated amount that may not actualize.
- Need training to operate.
1. Data Preparation
2. Network Architecture Design
3. Data Pre-processing
4. Training the ANN
5. Validation and Tuning
6. Deployment and Testing
7. Decision-Making and Insights
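Steps 1-6 above can be sketched end to end with scikit-learn, assuming it is available. The dataset, layer sizes, and hyperparameters here are arbitrary placeholders, not recommendations:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

# 1. Data preparation: a synthetic binary-classification dataset
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 3. Data pre-processing: scale features to zero mean, unit variance
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# 2./4. Architecture design and training: one hidden layer of 16 neurons
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

# 5./6. Validation and testing on held-out data
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```

Step 7 (decision-making and insights) would then use `clf.predict` on new data and interpret the results in the problem's own terms.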
1. Non-linearity
2. Adaptability and Learning
3. Parallel Processing
4. Robustness
5. Feature Extraction
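The non-linearity property can be seen directly: two stacked linear maps always collapse into a single linear map, whereas inserting an activation such as ReLU between them breaks that collapse, which is what lets a network model curved decision boundaries. A small NumPy sketch:

```python
import numpy as np

def relu(z):
    # Rectified linear unit: zero for negative inputs, identity otherwise
    return np.maximum(0.0, z)

W1, W2 = np.array([[2.0]]), np.array([[3.0]])
x, x_neg = np.array([1.5]), np.array([-1.5])

# Two stacked linear layers are equivalent to one linear layer...
assert np.allclose(W2 @ (W1 @ x), (W2 @ W1) @ x)

# ...but with ReLU in between, negative inputs are clipped to zero,
# so the composite map is no longer linear.
print(W2 @ relu(W1 @ x), W2 @ relu(W1 @ x_neg))
```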
- Structure: Artificial networks are built from layered architectures; biological networks comprise interconnected neurons.
- Complexity: Artificial networks can have complex architectures and layers; biological networks vary in complexity across species.
- Fault tolerance: Artificial networks are resistant to noise and incomplete data; biological neurons are susceptible to noise and errors.
- Replication and control: Artificial networks are easily replicated and controlled for experiments; biological networks have inherent control systems with regulatory mechanisms.
Applications include:
- Automotive engineering
- Job applicant screening
- Crane control
- Glaucoma monitoring
In a hybrid (neuro-fuzzy) model, neural network learning algorithms are fused with the fuzzy reasoning of fuzzy logic: the neural network determines the values of the parameters, while the if-then rules are handled by fuzzy logic.
Application, network type, and activation function:
- Machine diagnostics: multilayer perceptron, tan-sigmoid function.
- Portfolio management: classification supervised algorithm, tan-sigmoid function.
- Target recognition: modular neural network, tan-sigmoid function.
- Medical diagnosis: multilayer perceptron, tan-sigmoid function.
- Credit rating: logistic discriminant analysis with ANN and support vector machine, logistic function.
- Fraud detection: gradient-descent algorithm and least mean square (LMS) algorithm, logistic function.
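The two activation functions recurring above, the tan-sigmoid (hyperbolic tangent) and the logistic function, can be computed directly with only the standard library:

```python
import math

def logistic(z):
    # Logistic (sigmoid) function: maps any real input into (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def tan_sigmoid(z):
    # Tan-sigmoid (hyperbolic tangent): maps any real input into (-1, 1)
    return math.tanh(z)

for z in (-2.0, 0.0, 2.0):
    print(f"z={z:+.1f}  logistic={logistic(z):.3f}  tanh={tan_sigmoid(z):+.3f}")
```

Both are S-shaped; the main practical difference is the output range, which affects how the next layer's inputs are centered.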
Approach and explanation:
- Weight Analysis: Analyze neuron weights to determine feature importance and connection strengths.
- Activation Visualization: Visualize neuron activations to understand how information is processed through hidden layers.
- Feature Visualization: Generate inputs that maximize neuron activations, revealing what each neuron recognizes.
- Interpretability Techniques: Use LIME and SHAP to provide local and global explanations for model predictions.
- Knowledge Distillation: Train a smaller model to mimic the MLP's behavior, facilitating insight extraction.
- Layer-wise Relevance Propagation: Assign relevance scores to neurons and connections, showing their contributions.
- Activation Maximization: Optimize inputs to maximize neuron activations, revealing neuron preferences.
- Rule Extraction: Derive human-readable rules or decision trees from the MLP for a transparent representation.
- Ensemble and Ablation Studies: Analyze ensembles and perform ablation studies to understand model behavior and components.
- Task-specific Interpretation: Tailor interpretation approaches to the specific problem domain, e.g., visualizing filters.
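As one illustration of the weight-analysis approach, assuming scikit-learn is available: a trained `MLPClassifier` exposes its weight matrices via the `coefs_` attribute, and summing the absolute input-to-hidden weights per feature gives a crude feature-importance signal. This is a heuristic sketch, not a rigorous attribution method:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Train a small MLP on synthetic data
X, y = make_classification(n_samples=300, n_features=5, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500,
                    random_state=0).fit(X, y)

# coefs_[0] is the input-to-hidden weight matrix (n_features x n_hidden).
# Summing absolute weights per input row is a rough importance signal.
importance = np.abs(mlp.coefs_[0]).sum(axis=1)
for i, score in enumerate(importance):
    print(f"feature {i}: total |weight| = {score:.2f}")
```

For the model-agnostic approaches in the list (LIME, SHAP), dedicated libraries exist; weight analysis is simply the cheapest first look.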