Neural Network Theory

Neural networks, first proposed by McCulloch and Pitts in 1943, describe brain functions using interconnected neurons that process input signals and produce outputs. These networks consist of layers of nodes that assign weights to incoming connections, firing only when a certain threshold is met. McCulloch and Pitts demonstrated that a neural network could theoretically compute any function that a digital computer can.

NEURAL NETWORK THEORY
Overview
• Neural networks were first proposed in 1943
by Warren McCulloch and Walter Pitts.
• McCulloch and Pitts's paper provided a way
to describe brain functions in abstract
terms, and showed that simple elements
connected in a neural network can have
immense computational power.
• Basically, a neuron takes an input signal (dendrite),
processes it like a CPU (soma), and passes the output through
a cable-like structure to other connected neurons (axon to
synapse to the next neuron's dendrite). This is biologically
simplified, as there is a lot more going on, but at a high
level this is what a neuron in our brain does: it takes an
input, processes it, and produces an output.
• Our sense organs interact with the outer world and send
visual and sound information to the neurons. Say you are
watching a comedy show. The information your brain receives
is taken in by the "laugh or not" set of neurons that help
you decide whether to laugh. Each neuron gets fired/activated
only when its respective criterion (more on this later) is met.
• Of course, this is not entirely accurate. In reality, it is
not just a couple of neurons doing the decision making.
There is a massively parallel interconnected network of about
10¹¹ (100 billion) neurons in our brain, and their connections
are not as simple as described above.
• Now the sense organs pass the information to the
first/lowest layer of neurons to process it. The output of
that processing is passed on to the next layers in a
hierarchical manner; some of the neurons fire and some
won't, and this process goes on until it results in a final output.
• This massively parallel network also ensures that
there is a division of work. Each neuron fires only
when its intended criterion is met, i.e., a neuron
may perform a certain role in response to a certain stimulus.
• It is believed that neurons are arranged in a
hierarchical fashion and each layer has its own
role and responsibility. To detect a face, the brain
could be relying on the entire network and not on
a single layer.
• Now that we have established how a
biological neuron works, let's look at what
McCulloch and Pitts had to offer.
• The first computational model of a neuron was proposed by
Warren McCulloch (neuroscientist) and Walter Pitts (logician)
in 1943.
• A neural net consists of thousands or even millions of simple
processing nodes that are densely interconnected. Most of
today’s neural nets are organized into layers of nodes, and
they’re “feed-forward,” meaning that data moves through
them in only one direction.
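The layered, feed-forward arrangement described above can be sketched in a few lines. This is a rough illustration only: the layer sizes and weights below are made up, and thresholds are left out here (they are described on a later slide).

```python
# Sketch of a feed-forward pass: data moves through layers of
# weighted nodes in one direction only. All values are illustrative.

def forward(inputs, layers):
    """Pass a list of input numbers through successive layers of weighted nodes."""
    activations = inputs
    for weights in layers:  # each layer is a list of per-node weight lists
        activations = [sum(x * w for x, w in zip(activations, node_w))
                       for node_w in weights]
    return activations

# Two inputs -> hidden layer of 2 nodes -> single output node:
layers = [
    [[0.5, 0.5], [1.0, -1.0]],  # hidden layer weights (hypothetical)
    [[1.0, 0.5]],               # output node weights (hypothetical)
]
print(forward([1.0, 2.0], layers))  # data flows strictly forward
```

Each node here simply sums its weighted inputs; the next slides add the threshold that decides whether a node "fires" at all.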
• An individual node might be connected to several nodes in
the layer beneath it, from which it receives data, and to
several nodes in the layer above it, to which it sends data.
• To each of its incoming connections, a node will
assign a number known as a “weight.” When the
network is active, the node receives a different
number over each of its connections and multiplies it
by the associated weight. It then adds the resulting
products together, yielding a single number.
• If that number is below a threshold value, the node
passes no data to the next layer. If the number
exceeds the threshold value, the node "fires," which
in today's neural nets generally means sending the
number (the sum of the weighted inputs) along all of
its outgoing connections.
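A minimal sketch of such a threshold node follows; the specific weights and threshold value are illustrative choices, not taken from the slides.

```python
# A single threshold node as described above: multiply each incoming
# value by its connection weight, sum the products, and "fire" (pass
# the sum along) only if the sum exceeds the threshold.

def node(inputs, weights, threshold):
    """Return the weighted sum if it exceeds the threshold, else 0 (no firing)."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return total if total > threshold else 0.0

# A node receiving three numbers over weighted connections (values hypothetical):
out = node([1.0, 0.5, 0.2], weights=[0.4, 0.3, 0.9], threshold=0.5)
print(out)  # the weighted sum exceeds 0.5, so the node fires and emits it
```

With weaker inputs the same node stays silent and passes nothing to the next layer, which is exactly the "fires or not" behavior the slide describes.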
• The neural nets described by McCulloch
and Pitts in 1943 had thresholds and
weights, but they weren't arranged into
layers, and the researchers didn't specify
any training mechanism.
• What McCulloch and Pitts showed was
that a neural net could, in principle,
compute any function that a digital
computer can.
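One way to see this claim concretely: binary threshold units with fixed weights can implement the basic logic gates, from which any Boolean function (and hence any digital computation) can be built. The weight and threshold choices below are standard textbook values, not taken directly from the 1943 paper.

```python
# McCulloch-Pitts-style binary units realizing logic gates.
# Weights and thresholds are conventional illustrative choices.

def mp_unit(inputs, weights, threshold):
    """Binary threshold unit: output 1 iff the weighted sum reaches the threshold."""
    return 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0

def AND(a, b): return mp_unit([a, b], [1, 1], 2)
def OR(a, b):  return mp_unit([a, b], [1, 1], 1)
def NOT(a):    return mp_unit([a], [-1], 0)

# XOR cannot be computed by any single unit, but a small two-layer
# net of units handles it -- a hint of why networks matter:
def XOR(a, b): return AND(OR(a, b), NOT(AND(a, b)))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", XOR(a, b))
```

Since AND, OR, and NOT suffice to build any Boolean circuit, networks of such units can in principle compute anything a digital computer can, which is the heart of the 1943 result.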
Thank You
