ASC Summarized Unit-1

The document provides an overview of biological and artificial neurons, detailing their structures, functions, and the concept of synapses. It also discusses various neural network architectures, learning techniques, and memory types, including auto-associative and hetero-associative memory. Key elements such as activation functions and the perceptron convergence rule are highlighted, illustrating the foundational concepts of neural networks.


Neuron, Nerve Structure, and Synapse

A biological neuron is a fundamental unit of the nervous system that transmits


information through electrical and chemical signals. It consists of three main
parts:
1. Dendrites: Branch-like structures that receive signals from other
neurons.
2. Cell Body (Soma): Contains the nucleus and other organelles. It
processes the incoming signals.
3. Axon: A long, slender projection that transmits signals to other neurons,
muscles, or glands.
At the end of the axon are synaptic terminals, where the neuron
communicates with the next neuron or effector cell through the synapse. The
synapse involves the release of neurotransmitters, which cross the synaptic
cleft and bind to receptors on the post-synaptic cell, propagating the signal.
Diagram: Biological Neuron
Dendrites Soma Axon Synaptic Terminals
(Input) (Processor) (Output) (Communication Point)
| | | |
[~~~]---->[Cell Body]---->[---------]---->[~~~~~Synapse~~~~~]

Artificial Neuron and Its Model


Artificial neurons are mathematical models inspired by biological neurons. A
basic artificial neuron consists of:
1. Inputs: Represent features or data points.
2. Weights: Each input is multiplied by a weight to indicate its importance.
3. Summation Function: Computes the weighted sum of inputs.
4. Activation Function: Introduces non-linearity and determines the output
of the neuron.
The output of an artificial neuron is given by:
Output y = f( Σ (wᵢ · xᵢ) + b )
Where:
• xᵢ: Input values
• wᵢ: Weights
• b: Bias term
• f: A non-linear activation function like sigmoid, ReLU, or tanh.
Diagram: Artificial Neuron
Inputs Weights Summation Activation
x1, x2 ---> w1, w2 ---> (Σ) ---> f(Σ) ---> Output
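
Example (Python): A single artificial neuron
The short sketch below is an illustrative implementation of the model above (not part of the original notes); the sigmoid is used as the activation function, and the input, weight, and bias values are arbitrary examples.

import math

def sigmoid(z):
    # Squash any real number into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def artificial_neuron(inputs, weights, bias):
    # Weighted sum of inputs plus bias, passed through the activation function
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(z)

# Example: two inputs x1, x2 with weights w1, w2 and a bias b
print(artificial_neuron([0.5, -1.0], [0.8, 0.2], bias=0.1))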

Activation Functions
1. Sigmoid Function:
o Outputs values between 0 and 1.
o Used in binary classification.
2. ReLU (Rectified Linear Unit):
o Outputs max(0, x); efficient and widely used in deep networks.
3. Tanh Function:
o Outputs values between -1 and 1.
4. Softmax Function: Used for multi-class classification.
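
Example (Python): Activation functions
As a quick reference, the snippet below sketches these four functions using only the standard library; the function names and test values are illustrative.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))   # output in (0, 1)

def relu(x):
    return max(0.0, x)                  # output in [0, infinity)

def tanh(x):
    return math.tanh(x)                 # output in (-1, 1)

def softmax(xs):
    # Turn a list of scores into probabilities that sum to 1
    shifted = [x - max(xs) for x in xs]        # subtract the max for numerical stability
    exps = [math.exp(x) for x in shifted]
    total = sum(exps)
    return [e / total for e in exps]

print(sigmoid(0.0), relu(-2.0), tanh(1.0))
print(softmax([2.0, 1.0, 0.1]))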

Neural Network Architecture


1. Single-Layer Feedforward Network:
o Consists of one layer of artificial neurons.
o Example: Perceptron, which is used for linearly separable
problems.
Diagram: Single-Layer Feedforward Network
Input Layer ----> Output Layer
[x1, x2] ----> [y1]
2. Multilayer Feedforward Network:
o Has multiple layers: input layer, hidden layers, and output layer.
o Example: Multilayer Perceptron (MLP), capable of solving non-linear
problems (a forward-pass sketch follows this list).
Diagram: Multilayer Feedforward Network
Input Layer ---> Hidden Layer ---> Output Layer
[x1, x2] ---> [h1, h2] ---> [y1]
3. Recurrent Neural Network (RNN):
o Contains feedback loops.
o Suitable for sequential data like time series, speech, or text.
o Example: Long Short-Term Memory (LSTM).
Diagram: Recurrent Neural Network
Input --> Hidden State --> Output
              ^       |
              |_______|
         (feedback loop)
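
Example (Python): Forward pass through a multilayer feedforward network
A minimal sketch, assuming one hidden layer, tanh activations, and illustrative weight and bias values chosen only for demonstration; it is not a trained network.

import math

def forward(x, weights, biases):
    # Propagate an input vector through successive layers.
    # weights[k] is a list of rows (one row per neuron in layer k),
    # biases[k] is the bias vector of layer k; tanh is the activation.
    for W, b in zip(weights, biases):
        x = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + bi)
             for row, bi in zip(W, b)]
    return x

# Example: 2 inputs -> 2 hidden units -> 1 output (all values are illustrative)
hidden_W = [[0.5, -0.3],
            [0.8,  0.2]]
hidden_b = [0.1, -0.1]
output_W = [[1.0, -1.0]]
output_b = [0.0]
print(forward([1.0, 0.5], [hidden_W, output_W], [hidden_b, output_b]))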

Learning Techniques
1. Supervised Learning: Uses labeled data. Example: Classification,
regression.
2. Unsupervised Learning: Uses unlabeled data. Example: Clustering,
dimensionality reduction.
3. Reinforcement Learning: Involves an agent interacting with an
environment to maximize rewards. Example: Game-playing AI.

Perceptron and Convergence Rule


A perceptron is a single-layer neural network used for binary classification. The
convergence rule states that if the data is linearly separable, the perceptron
learning algorithm will converge to a solution after a finite number of steps.
Algorithm:
1. Initialize weights and bias.
2. For each training example with target t and output y, update the weights and bias
using: wᵢ ← wᵢ + η(t − y)xᵢ and b ← b + η(t − y), where η is the learning rate
(a minimal training sketch follows the diagram below).
3. Repeat until convergence.
Diagram: Perceptron
Inputs ---> Weights ---> Activation ---> Output
[x1, x2] [w1, w2] [f] [y1]
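
Example (Python): Perceptron training
The sketch below applies the update rule above to the AND function, which is linearly separable, so the algorithm converges; the learning rate and epoch limit are illustrative choices.

def train_perceptron(samples, targets, lr=0.1, epochs=100):
    # samples: list of input vectors; targets: desired outputs (0 or 1)
    w = [0.0] * len(samples[0])   # initialize weights
    b = 0.0                       # initialize bias
    for _ in range(epochs):
        errors = 0
        for x, t in zip(samples, targets):
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            if y != t:
                # Perceptron update rule: w <- w + lr*(t - y)*x, b <- b + lr*(t - y)
                w = [wi + lr * (t - y) * xi for wi, xi in zip(w, x)]
                b += lr * (t - y)
                errors += 1
        if errors == 0:           # converged: every sample classified correctly
            break
    return w, b

# Example: learn the linearly separable AND function
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
T = [0, 0, 0, 1]
print(train_perceptron(X, T))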

Auto-Associative and Hetero-Associative Memory


1. Auto-Associative Memory:
o Stores patterns and retrieves them when provided with a noisy or
incomplete version of the pattern.
o Example: Hopfield Network (a minimal recall sketch follows this list).
Diagram: Auto-Associative Memory
Input ---> Memory ---> Output (similar to input)
[x1] ---> [Network] ---> [x1]
2. Hetero-Associative Memory:
o Maps input patterns to different output patterns.
o Example: Translation tasks in machine learning.
Diagram: Hetero-Associative Memory
Input ---> Memory ---> Output (different pattern)
[x1] ---> [Network] ---> [y1]
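
Example (Python): Auto-associative recall
A minimal sketch of auto-associative storage and recall in the style of a Hopfield network, assuming bipolar (+1/-1) patterns and a simple Hebbian outer-product weight rule; the stored pattern and the noisy probe are illustrative values.

def store(patterns):
    # Hebbian outer-product rule: W[i][j] accumulates correlations between units i and j
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:                        # patterns use bipolar values (+1 / -1)
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def recall(W, state, steps=5):
    # Repeatedly update every unit to the sign of its weighted input
    n = len(state)
    for _ in range(steps):
        state = [1 if sum(W[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state

# Store one pattern, then recover it from a noisy copy (one flipped bit)
W = store([[1, -1, 1, -1]])
print(recall(W, [1, -1, -1, -1]))   # expected: [1, -1, 1, -1]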
