
Neural network

A neural network is a network or circuit of neurons, or in a modern sense, an artificial
neural network, composed of artificial neurons or nodes.[1] Thus a neural network is either
a biological neural network, made up of real biological neurons, or an artificial neural
network, used for solving artificial intelligence (AI) problems. The connections of the
biological neuron are modeled as weights. A positive weight reflects an excitatory
connection, while negative values mean inhibitory connections. All inputs are modified by a
weight and summed. This activity is referred to as a linear combination. Finally, an
activation function controls the amplitude of the output. For example, an acceptable range
of output is usually between 0 and 1, or it could be −1 and 1.
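
As a concrete illustration of the weighted sum and activation function described above, the
following is a minimal sketch in Python; the particular inputs, weights, and the choice of a
logistic sigmoid activation are illustrative assumptions, not taken from the article.

    import math

    def artificial_neuron(inputs, weights, bias=0.0):
        """Compute the output of a single artificial neuron.

        Each input is modified by its weight and summed (a linear combination);
        an activation function then bounds the output, here to the range (0, 1)
        via the logistic sigmoid.
        """
        linear_combination = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-linear_combination))

    # Example: two excitatory (positive) weights and one inhibitory (negative) weight.
    print(artificial_neuron([0.5, 0.9, 0.2], [0.8, 0.4, -0.6]))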

These artificial networks may be used for predictive modeling, adaptive control and
applications where they can be trained via a dataset. Self-learning resulting from
experience can occur within networks, which can derive conclusions from a complex and
seemingly unrelated set of information.[2]

[Figure: Simplified view of a feedforward artificial neural network]

Contents
Overview
History
Artificial intelligence
Applications
Neuroscience
Types of models
Criticism
Recent improvements
See also
References
External links

Overview
A biological neural network is composed of a group or groups of chemically connected or functionally
associated neurons. A single neuron may be connected to many other neurons and the total number of
neurons and connections in a network may be extensive. Connections, called synapses, are usually
formed from axons to dendrites, though dendrodendritic synapses[3] and other connections are possible.
Apart from the electrical signaling, there are other forms of signaling that arise from neurotransmitter
diffusion.

Artificial intelligence, cognitive modeling, and neural networks are information processing paradigms
inspired by the way biological neural systems process data. Artificial intelligence and cognitive modeling
try to simulate some properties of biological neural networks. In the artificial intelligence field, artificial
neural networks have been applied successfully to speech recognition, image analysis and adaptive
control, in order to construct software agents (in computer and video games) or autonomous robots.

Historically, digital computers evolved from the von Neumann model, and operate via the execution of
explicit instructions via access to memory by a number of processors. On the other hand, the origins of
neural networks are based on efforts to model information processing in biological systems. Unlike the
von Neumann model, neural network computing does not separate memory and processing.

Neural network theory has served both to better identify how the neurons in the brain function and to
provide the basis for efforts to create artificial intelligence.

History
The preliminary theoretical base for contemporary neural networks was independently proposed by
Alexander Bain[4] (1873) and William James[5] (1890). In their work, both thoughts and body activity
resulted from interactions among neurons within the brain.

For Bain,[4] every activity led to the firing of a certain set of
neurons. When activities were repeated, the connections between
those neurons strengthened. According to his theory, this
repetition was what led to the formation of memory. The general
scientific community at the time was skeptical of Bain's[4] theory
because it required what appeared to be an inordinate number of
neural connections within the brain. It is now apparent that the
brain is exceedingly complex and that the same brain “wiring”
can handle multiple problems and inputs.

James's[5] theory was similar to Bain's;[4] however, he suggested that memories and actions
resulted from electrical currents flowing among the neurons in the brain. His model, by
focusing on the flow of electrical currents, did not require individual neural connections
for each memory or action.

[Figure: Computer simulation of the branching architecture of the dendrites of pyramidal
neurons.[6]]

C. S. Sherrington[7] (1898) conducted experiments to test James's theory. He ran electrical currents down
the spinal cords of rats. However, instead of demonstrating an increase in electrical current as projected
by James, Sherrington found that the electrical current strength decreased as the testing continued over
time. Importantly, this work led to the discovery of the concept of habituation.

McCulloch and Pitts[8] (1943) created a computational model for neural networks based on mathematics
and algorithms. They called this model threshold logic. The model paved the way for neural network
research to split into two distinct approaches. One approach focused on biological processes in the brain
and the other focused on the application of neural networks to artificial intelligence.
In the late 1940s psychologist Donald Hebb[9] created a hypothesis of learning based on the mechanism
of neural plasticity that is now known as Hebbian learning. Hebbian learning is considered to be a
'typical' unsupervised learning rule and its later variants were early models for long term potentiation.
These ideas started being applied to computational models in 1948 with Turing's B-type machines.
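
A minimal sketch of a Hebbian-style weight update in the spirit of the rule described above
(connections between co-active neurons are strengthened); the learning rate and activity
values are illustrative assumptions.

    def hebbian_update(weights, pre_activity, post_activity, learning_rate=0.1):
        """Strengthen each weight in proportion to the product of
        pre- and post-synaptic activity (a basic Hebbian rule)."""
        return [w + learning_rate * x * post_activity
                for w, x in zip(weights, pre_activity)]

    weights = [0.0, 0.0, 0.0]
    pre = [1.0, 0.5, 0.0]     # activity of the input (pre-synaptic) neurons
    post = 1.0                # activity of the output (post-synaptic) neuron
    weights = hebbian_update(weights, pre, post)
    print(weights)            # connections to the active inputs are strengthened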

Farley and Clark[10] (1954) first used computational machines, then called calculators, to simulate a
Hebbian network at MIT. Other neural network computational machines were created by Rochester,
Holland, Habit, and Duda[11] (1956).

Rosenblatt[12] (1958) created the perceptron, an algorithm for pattern recognition based on a two-layer
learning computer network using simple addition and subtraction. With mathematical notation,
Rosenblatt also described circuitry not in the basic perceptron, such as the exclusive-or circuit, a circuit
whose mathematical computation could not be processed until after the backpropagation algorithm was
created by Werbos[13] (1975).
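
The perceptron learning rule itself uses only simple addition and subtraction, as noted
above. The following sketch (with an illustrative AND dataset and learning rate) shows the
basic update; a single such unit cannot learn exclusive-or, which is the limitation
discussed below.

    def train_perceptron(samples, epochs=20, lr=1.0):
        """Rosenblatt-style perceptron: adjust the weights by adding or
        subtracting the input whenever the prediction is wrong."""
        n = len(samples[0][0])
        w, b = [0.0] * n, 0.0
        for _ in range(epochs):
            for x, target in samples:
                prediction = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
                error = target - prediction          # -1, 0, or +1
                w = [wi + lr * error * xi for wi, xi in zip(w, x)]
                b += lr * error
        return w, b

    # Linearly separable AND function: learnable by a single perceptron.
    and_data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
    print(train_perceptron(and_data))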

Neural network research stagnated after the publication of machine learning research by Marvin Minsky
and Seymour Papert[14] (1969). They discovered two key issues with the computational machines that
processed neural networks. The first issue was that single-layer neural networks were incapable of
processing the exclusive-or circuit. The second significant issue was that computers were not
sophisticated enough to effectively handle the long run time required by large neural networks. Neural
network research slowed until computers achieved greater processing power. Also key in later advances
was the backpropagation algorithm which effectively solved the exclusive-or problem (Werbos 1975).[13]
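
To illustrate why backpropagation resolves the exclusive-or limitation, here is a minimal
sketch of a two-layer network trained by gradient descent on XOR; the number of hidden
units, learning rate, and iteration count are illustrative assumptions, and convergence
depends on the random initialization.

    import math, random

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    random.seed(0)
    HIDDEN = 4  # a few hidden units are enough to represent XOR
    w_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(HIDDEN)]
    b_h = [0.0] * HIDDEN
    w_o = [random.uniform(-1, 1) for _ in range(HIDDEN)]
    b_o = 0.0

    xor_data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
    lr = 0.5

    def forward(x):
        h = [sigmoid(sum(w * xi for w, xi in zip(w_h[j], x)) + b_h[j]) for j in range(HIDDEN)]
        y = sigmoid(sum(w * hj for w, hj in zip(w_o, h)) + b_o)
        return h, y

    for _ in range(20000):
        for x, target in xor_data:
            h, y = forward(x)
            # Backward pass: propagate the output error back to the hidden layer.
            delta_o = (y - target) * y * (1 - y)
            delta_h = [delta_o * w_o[j] * h[j] * (1 - h[j]) for j in range(HIDDEN)]
            # Gradient-descent weight updates.
            for j in range(HIDDEN):
                w_o[j] -= lr * delta_o * h[j]
                b_h[j] -= lr * delta_h[j]
                for i in range(2):
                    w_h[j][i] -= lr * delta_h[j] * x[i]
            b_o -= lr * delta_o

    for x, target in xor_data:
        print(x, target, round(forward(x)[1], 2))  # outputs should approach the XOR targets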

The parallel distributed processing of the mid-1980s became popular under the name connectionism. The
text by Rumelhart and McClelland[15] (1986) provided a full exposition on the use of connectionism in
computers to simulate neural processes.

Neural networks, as used in artificial intelligence, have traditionally been viewed as simplified models of
neural processing in the brain, even though the relation between this model and brain biological
architecture is debated, as it is not clear to what degree artificial neural networks mirror brain
function.[16]

Artificial intelligence
A neural network (NN), in the case of artificial neurons called artificial neural network (ANN) or
simulated neural network (SNN), is an interconnected group of natural or artificial neurons that uses a
mathematical or computational model for information processing based on a connectionistic approach to
computation. In most cases an ANN is an adaptive system that changes its structure based on external or
internal information that flows through the network.

In more practical terms neural networks are non-linear statistical data modeling or decision making tools.
They can be used to model complex relationships between inputs and outputs or to find patterns in data.

An artificial neural network involves a network of simple processing elements (artificial neurons) which
can exhibit complex global behavior, determined by the connections between the processing elements
and element parameters. Artificial neurons were first proposed in 1943 by Warren McCulloch, a
neurophysiologist, and Walter Pitts, a logician, who first collaborated at the University of Chicago.[17]

One classical type of artificial neural network is the recurrent Hopfield network.
The concept of a neural network appears to have first been proposed by Alan Turing in his
1948 paper Intelligent Machinery, in which he called them "B-type unorganised machines".[18]

The utility of artificial neural network models lies in the fact that they can be used to infer a function
from observations and also to use it. Unsupervised neural networks can also be used to learn
representations of the input that capture the salient characteristics of the input distribution, e.g., see the
Boltzmann machine (1983), and more recently, deep learning algorithms, which can implicitly learn the
distribution function of the observed data. Learning in neural networks is particularly useful in
applications where the complexity of the data or task makes the design of such functions by hand
impractical.

Applications
Neural networks can be used in different fields. The tasks to which artificial neural networks are applied
tend to fall within the following broad categories:

Function approximation, or regression analysis, including time series prediction and
modeling.
Classification, including pattern and sequence recognition, novelty detection and sequential
decision making.
Data processing, including filtering, clustering, blind signal separation and compression.
Application areas of ANNs include nonlinear system identification[19] and control (vehicle control,
process control), game-playing and decision making (backgammon, chess, racing), pattern recognition
(radar systems, face identification, object recognition), sequence recognition (gesture, speech,
handwritten text recognition), medical diagnosis, financial applications, data mining (or knowledge
discovery in databases, "KDD"), visualization and e-mail spam filtering. For example, it is
possible to create a semantic profile of a user's interests from images, using a network
trained for object recognition.[20]

Neuroscience
Theoretical and computational neuroscience is the field concerned with the theoretical analysis and
computational modeling of biological neural systems. Since neural systems are intimately related to
cognitive processes and behaviour, the field is closely related to cognitive and behavioural modeling.

The aim of the field is to create models of biological neural systems in order to understand how
biological systems work. To gain this understanding, neuroscientists strive to make a link between
observed biological processes (data), biologically plausible mechanisms for neural processing and
learning (biological neural network models) and theory (statistical learning theory and information
theory).

Types of models
Many models are used, defined at different levels of abstraction and modeling different aspects of neural
systems. They range from models of the short-term behaviour of individual neurons, through models of
the dynamics of neural circuitry arising from interactions between individual neurons, to models of
behaviour arising from abstract neural modules that represent complete subsystems. These include
models of the long-term and short-term plasticity of neural systems and its relation to learning and
memory, from the individual neuron to the system level.
Criticism
A common criticism of neural networks, particularly in robotics, is that they require a large diversity of
training for real-world operation. This is not surprising, since any learning machine needs sufficient
representative examples in order to capture the underlying structure that allows it to generalize to new
cases. Dean Pomerleau, in his research presented in the paper "Knowledge-based Training of Artificial
Neural Networks for Autonomous Robot Driving," uses a neural network to train a robotic vehicle to
drive on multiple types of roads (single lane, multi-lane, dirt, etc.). A large amount of his research is
devoted to (1) extrapolating multiple training scenarios from a single training experience, and (2)
preserving past training diversity so that the system does not become overtrained (if, for example, it is
presented with a series of right turns—it should not learn to always turn right). These issues are common
in neural networks that must decide from amongst a wide variety of responses, but can be dealt with in
several ways, for example by randomly shuffling the training examples, by using a numerical
optimization algorithm that does not take too large steps when changing the network connections
following an example, or by grouping examples in so-called mini-batches.
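
A minimal sketch of two of the remedies mentioned above, shuffling the training examples and
grouping them into mini-batches; the batch size and dataset are illustrative assumptions.

    import random

    def minibatches(examples, batch_size=32, shuffle=True):
        """Yield the training examples in shuffled mini-batches, so the network
        does not see long runs of similar cases (e.g. only right turns) and each
        update averages over a small, mixed group of examples."""
        order = list(examples)
        if shuffle:
            random.shuffle(order)
        for start in range(0, len(order), batch_size):
            yield order[start:start + batch_size]

    # Illustrative dataset of (input, label) pairs.
    data = [([i, i + 1], i % 2) for i in range(100)]
    for batch in minibatches(data, batch_size=16):
        pass  # one gradient step per mini-batch would go here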

A. K. Dewdney, a former Scientific American columnist, wrote in 1997, "Although neural nets do solve a
few toy problems, their powers of computation are so limited that I am surprised anyone takes them
seriously as a general problem-solving tool" (Dewdney, p. 82).

Arguments for Dewdney's position are that implementing large and effective software neural
networks requires committing considerable processing and storage resources. While the brain
has hardware tailored to the task of processing signals through a graph of neurons,
simulating even a highly simplified form on von Neumann technology may compel a neural
network designer to fill many millions of database rows for its connections, which can
consume vast amounts of computer memory and hard disk space. Furthermore, the designer of
neural network systems will often need to simulate the transmission of signals through many
of these connections and their associated neurons, which often demands enormous amounts of
CPU processing power and time. While neural networks often yield effective programs, they
too often do so at the cost of efficiency (they tend to consume considerable amounts of time
and money).

Arguments against Dewdney's position are that neural nets have been successfully used to solve many
complex and diverse tasks, such as autonomously flying aircraft.[21]

Technology writer Roger Bridgman commented on Dewdney's statements about neural nets:

Neural networks, for instance, are in the dock not only because they have been hyped to
high heaven, (what hasn't?) but also because you could create a successful net without
understanding how it worked: the bunch of numbers that captures its behaviour would in all
probability be "an opaque, unreadable table...valueless as a scientific resource".

In spite of his emphatic declaration that science is not technology, Dewdney seems here to
pillory neural nets as bad science when most of those devising them are just trying to be
good engineers. An unreadable table that a useful machine could read would still be well
worth having.[22]
Although it is true that analyzing what has been learned by an artificial neural network is difficult, it is
much easier to do so than to analyze what has been learned by a biological neural network. Moreover,
recent emphasis on the explainability of AI has contributed towards the development of methods, notably
those based on attention mechanisms, for visualizing and explaining learned neural networks.
Furthermore, researchers involved in exploring learning algorithms for neural networks are gradually
uncovering generic principles which allow a learning machine to be successful. For example, Bengio and
LeCun (2007) wrote an article regarding local vs non-local learning, as well as shallow vs deep
architecture.[23]

Some other criticisms came from believers of hybrid models (combining neural networks and symbolic
approaches). They advocate the intermix of these two approaches and believe that hybrid models can
better capture the mechanisms of the human mind (Sun and Bookman, 1990).

Recent improvements
While initially research had been concerned mostly with the electrical characteristics of neurons, a
particularly important part of the investigation in recent years has been the exploration of the role of
neuromodulators such as dopamine, acetylcholine, and serotonin on behaviour and learning.

Biophysical models, such as BCM theory, have been important in understanding mechanisms for
synaptic plasticity, and have had applications in both computer science and neuroscience. Research is
ongoing in understanding the computational algorithms used in the brain, with some recent biological
evidence for radial basis networks and neural backpropagation as mechanisms for processing data.

Computational devices have been created in CMOS for both biophysical simulation and neuromorphic
computing. More recent efforts show promise for creating nanodevices for very large scale principal
components analyses and convolution.[24] If successful, these efforts could usher in a new era of neural
computing that is a step beyond digital computing,[25] because it depends on learning rather than
programming and because it is fundamentally analog rather than digital even though the first
instantiations may in fact be with CMOS digital devices.

Between 2009 and 2012, the recurrent neural networks and deep feedforward neural networks developed
in the research group of Jürgen Schmidhuber at the Swiss AI Lab IDSIA have won eight international
competitions in pattern recognition and machine learning.[26] For example, multi-dimensional long short
term memory (LSTM)[27][28] won three competitions in connected handwriting recognition at the 2009
International Conference on Document Analysis and Recognition (ICDAR), without any prior
knowledge about the three different languages to be learned.

Variants of the back-propagation algorithm as well as unsupervised methods by Geoff Hinton and
colleagues at the University of Toronto can be used to train deep, highly nonlinear neural
architectures,[29] similar to the 1980 Neocognitron by Kunihiko Fukushima,[30] and the "standard
architecture of vision",[31] inspired by the simple and complex cells identified by David H. Hubel and
Torsten Wiesel in the primary visual cortex.

Radial basis function and wavelet networks have also been introduced. These can be shown to offer best
approximation properties and have been applied in nonlinear system identification and classification
applications.[19]
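
A minimal sketch of a radial basis function network of the kind mentioned above: Gaussian
basis functions centred on a few points, with the output weights fitted by linear least
squares. The target function, centres, and basis width are illustrative assumptions.

    import numpy as np

    def rbf_design_matrix(x, centres, width):
        """Gaussian radial basis functions evaluated at the inputs."""
        return np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2 * width ** 2))

    # Illustrative 1-D regression problem.
    rng = np.random.default_rng(0)
    x = np.linspace(-3, 3, 200)
    y = np.sin(x) + 0.1 * rng.standard_normal(x.size)

    centres = np.linspace(-3, 3, 10)   # basis-function centres
    width = 0.8                        # shared Gaussian width

    Phi = rbf_design_matrix(x, centres, width)
    weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # linear readout weights

    y_hat = Phi @ weights
    print("mean squared error:", float(np.mean((y_hat - y) ** 2)))
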
Deep learning feedforward networks alternate convolutional layers and max-pooling layers, topped by
several pure classification layers. Fast GPU-based implementations of this approach have won several
pattern recognition contests, including the IJCNN 2011 Traffic Sign Recognition Competition[32] and the
ISBI 2012 Segmentation of Neuronal Structures in Electron Microscopy Stacks challenge.[33] Such
neural networks also were the first artificial pattern recognizers to achieve human-competitive or even
superhuman performance[34] on benchmarks such as traffic sign recognition (IJCNN 2012), or the
MNIST handwritten digits problem of Yann LeCun and colleagues at NYU.
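
A minimal sketch of the layer pattern described above, alternating convolutional and
max-pooling layers topped by classification layers, written here with PyTorch for
concreteness; the channel counts, input size (32x32 RGB images), and number of output
classes are illustrative assumptions, not the architectures used in the cited contests.

    import torch
    import torch.nn as nn

    # Convolution/max-pooling blocks followed by fully connected classification layers.
    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),                 # 32x32 -> 16x16
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),                 # 16x16 -> 8x8
        nn.Flatten(),
        nn.Linear(32 * 8 * 8, 128), nn.ReLU(),
        nn.Linear(128, 10),              # 10 output classes (an illustrative choice)
    )

    dummy = torch.randn(1, 3, 32, 32)    # one illustrative 32x32 RGB image
    print(model(dummy).shape)            # -> torch.Size([1, 10])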

See also
ADALINE
Adaptive resonance theory
Biological cybernetics
Biologically inspired computing
Cerebellar model articulation controller
Cognitive architecture
Cognitive science
Connectomics
Cultured neuronal networks
Deep learning
Digital morphogenesis
Exclusive or
Gene expression programming
Group method of data handling
Habituation
In situ adaptive tabulation
Memristor
Multilinear subspace learning
Neural network software
Nonlinear system identification
Parallel constraint satisfaction processes
Parallel distributed processing
Predictive analytics
Radial basis function network
Self-organizing map
Simulated reality
Support vector machine
Tensor product network
Time delay neural network

References
1. Hopfield, J. J. (1982). "Neural networks and physical systems with emergent collective
computational abilities" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC346238). Proc. Natl.
Acad. Sci. U.S.A. 79 (8): 2554–2558. Bibcode:1982PNAS...79.2554H (https://ui.adsabs.har
vard.edu/abs/1982PNAS...79.2554H). doi:10.1073/pnas.79.8.2554 (https://doi.org/10.107
3%2Fpnas.79.8.2554). PMC 346238 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC34623
8). PMID 6953413 (https://pubmed.ncbi.nlm.nih.gov/6953413).
2. "Neural Net or Neural Network - Gartner IT Glossary" (https://www.gartner.com/it-glossary/n
eural-net-or-neural-network). www.gartner.com.
3. Arbib, p.666
4. Bain (1873). Mind and Body: The Theories of Their Relation. New York: D. Appleton and
Company.
5. James (1890). The Principles of Psychology (https://archive.org/details/principlespsych01ja
megoog). New York: H. Holt and Company.
6. Cuntz, Hermann (2010). "PLoS Computational Biology Issue Image | Vol. 6(8) August
2010". PLoS Computational Biology. 6 (8): ev06.i08. doi:10.1371/image.pcbi.v06.i08 (http
s://doi.org/10.1371%2Fimage.pcbi.v06.i08).
7. Sherrington, C.S. (1898). "Experiments in Examination of the Peripheral Distribution of the
Fibers of the Posterior Roots of Some Spinal Nerves". Proceedings of the Royal Society of
London. 190: 45–186. doi:10.1098/rstb.1898.0002 (https://doi.org/10.1098%2Frstb.1898.00
02).
8. McCulloch, Warren; Walter Pitts (1943). "A Logical Calculus of Ideas Immanent in Nervous
Activity". Bulletin of Mathematical Biophysics. 5 (4): 115–133. doi:10.1007/BF02478259 (htt
ps://doi.org/10.1007%2FBF02478259).
9. Hebb, Donald (1949). The Organization of Behavior (https://archive.org/details/in.ernet.dli.2
015.226341). New York: Wiley.
10. Farley, B.; W.A. Clark (1954). "Simulation of Self-Organizing Systems by Digital Computer".
IRE Transactions on Information Theory. 4 (4): 76–84. doi:10.1109/TIT.1954.1057468 (http
s://doi.org/10.1109%2FTIT.1954.1057468).
11. Rochester, N.; J.H. Holland, L.H. Habit and W.L. Duda (1956). "Tests on a cell assembly
theory of the action of the brain, using a large digital computer". IRE Transactions on
Information Theory. 2 (3): 80–93. doi:10.1109/TIT.1956.1056810 (https://doi.org/10.1109%2
FTIT.1956.1056810).
12. Rosenblatt, F. (1958). "The Perceptron: A Probabilistic Model For Information Storage And
Organization In The Brain". Psychological Review. 65 (6): 386–408.
CiteSeerX 10.1.1.588.3775 (https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.588.
3775). doi:10.1037/h0042519 (https://doi.org/10.1037%2Fh0042519). PMID 13602029 (http
s://pubmed.ncbi.nlm.nih.gov/13602029).
13. Werbos, P.J. (1975). Beyond Regression: New Tools for Prediction and Analysis in the
Behavioral Sciences.
14. Minsky, M.; S. Papert (1969). An Introduction to Computational Geometry. MIT Press.
ISBN 978-0-262-63022-1.
15. Rumelhart, D.E.; James McClelland (1986). Parallel Distributed Processing: Explorations in
the Microstructure of Cognition (https://archive.org/details/paralleldistribu00rume).
Cambridge: MIT Press.
16. Russell, Ingrid. "Neural Networks Module" (https://web.archive.org/web/20140529155320/ht
tp://uhaweb.hartford.edu/compsci/neural-networks-definition.html). Archived from the
original (http://uhaweb.hartford.edu/compsci/neural-networks-definition.html) on 29 May
2014. Retrieved 2012.
17. McCulloch, Warren; Pitts, Walter (1943). "A Logical Calculus of Ideas Immanent in Nervous
Activity". Bulletin of Mathematical Biophysics. 5 (4): 115–133. doi:10.1007/BF02478259 (htt
ps://doi.org/10.1007%2FBF02478259).
18. Copeland, B. Jack, ed. (2004). The Essential Turing. Oxford University Press. p. 403.
ISBN 978-0-19-825080-7.
19. Billings, S. A. (2013). Nonlinear System Identification: NARMAX Methods in the Time,
Frequency, and Spatio-Temporal Domains. Wiley. ISBN 978-1-119-94359-4.
20. Wieczorek, Szymon; Filipiak, Dominik; Filipowska, Agata (2018). "Semantic Image-Based
Profiling of Users' Interests with Neural Networks" (https://www.researchgate.net/publicatio
n/328964756). Studies on the Semantic Web. 36 (Emerging Topics in Semantic
Technologies). doi:10.3233/978-1-61499-894-5-179 (https://doi.org/10.3233%2F978-1-6149
9-894-5-179).
21. Administrator, NASA (June 5, 2013). "Dryden Flight Research Center - News Room: News
Releases: NASA NEURAL NETWORK PROJECT PASSES MILESTONE" (http://www.nasa.
gov/centers/dryden/news/NewsReleases/2003/03-49.html). NASA.
22. Roger Bridgman's defence of neural networks (http://members.fortunecity.com/templarserie
s/popper.html)
23. "Scaling Learning Algorithms towards {AI} - LISA - Publications - Aigaion 2.0" (http://www.ir
o.umontreal.ca/~lisa/publications2/index.php/publications/show/4). www.iro.umontreal.ca.
24. Yang, J. J.; et al. (2008). "Memristive switching mechanism for metal/oxide/metal
nanodevices". Nat. Nanotechnol. 3 (7): 429–433. doi:10.1038/nnano.2008.160 (https://doi.o
rg/10.1038%2Fnnano.2008.160). PMID 18654568 (https://pubmed.ncbi.nlm.nih.gov/186545
68).
25. Strukov, D. B.; et al. (2008). "The missing memristor found". Nature. 453 (7191): 80–83.
Bibcode:2008Natur.453...80S (https://ui.adsabs.harvard.edu/abs/2008Natur.453...80S).
doi:10.1038/nature06932 (https://doi.org/10.1038%2Fnature06932). PMID 18451858 (http
s://pubmed.ncbi.nlm.nih.gov/18451858).
26. "2012 Kurzweil AI Interview with Jürgen Schmidhuber on the eight competitions won by his
Deep Learning team 2009–2012" (http://www.kurzweilai.net/how-bio-inspired-deep-learning-
keeps-winning-competitions).
27. Graves, Alex; Schmidhuber, Jürgen (2008). "Offline Handwriting Recognition with
Multidimensional Recurrent Neural Networks" (http://papers.nips.cc/paper/3449-offline-hand
writing-recognition-with-multidimensional-recurrent-neural-networks). In Bengio, Yoshua;
Schuurmans, Dale; Lafferty, John; Williams, Chris K. I.; Culotta, Aron (eds.). Advances in
Neural Information Processing Systems 21 (NIPS'21). Neural Information Processing
Systems (NIPS) Foundation. pp. 545–552.
28. Graves, A.; Liwicki, M.; Fernandez, S.; Bertolami, R.; Bunke, H.; Schmidhuber, J. (2009). "A
Novel Connectionist System for Improved Unconstrained Handwriting Recognition". IEEE
Transactions on Pattern Analysis and Machine Intelligence. 31 (5): 855–868.
CiteSeerX 10.1.1.139.4502 (https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.139.
4502). doi:10.1109/TPAMI.2008.137 (https://doi.org/10.1109%2FTPAMI.2008.137).
PMID 19299860 (https://pubmed.ncbi.nlm.nih.gov/19299860).
29. Hinton, G. E.; Osindero, S.; Teh, Y. (2006). "A fast learning algorithm for deep belief nets" (h
ttp://www.cs.toronto.edu/~hinton/absps/fastnc.pdf) (PDF). Neural Computation. 18 (7):
1527–1554. CiteSeerX 10.1.1.76.1541 (https://citeseerx.ist.psu.edu/viewdoc/summary?doi=
10.1.1.76.1541). doi:10.1162/neco.2006.18.7.1527 (https://doi.org/10.1162%2Fneco.2006.1
8.7.1527). PMID 16764513 (https://pubmed.ncbi.nlm.nih.gov/16764513).
30. Fukushima, K. (1980). "Neocognitron: A self-organizing neural network model for a
mechanism of pattern recognition unaffected by shift in position". Biological Cybernetics. 36
(4): 93–202. doi:10.1007/BF00344251 (https://doi.org/10.1007%2FBF00344251).
PMID 7370364 (https://pubmed.ncbi.nlm.nih.gov/7370364).
31. Riesenhuber, M.; Poggio, T. (1999). "Hierarchical models of object recognition in cortex".
Nature Neuroscience. 2 (11): 1019–1025. doi:10.1038/14819 (https://doi.org/10.1038%2F14
819). PMID 10526343 (https://pubmed.ncbi.nlm.nih.gov/10526343).
32. D. C. Ciresan, U. Meier, J. Masci, J. Schmidhuber. Multi-Column Deep Neural Network for
Traffic Sign Classification (https://people.lu.usi.ch/mascij/data/papers/2012_nn_traffic.pdf).
Neural Networks, 2012.
33. D. Ciresan, A. Giusti, L. Gambardella, J. Schmidhuber. Deep Neural Networks Segment
Neuronal Membranes in Electron Microscopy Images (https://papers.nips.cc/paper/4741-de
ep-neural-networks-segment-neuronal-membranes-in-electron-microscopy-images.pdf). In
Advances in Neural Information Processing Systems (NIPS 2012), Lake Tahoe, 2012.
34. D. C. Ciresan, U. Meier, J. Schmidhuber. Multi-column Deep Neural Networks for Image
Classification. IEEE Conf. on Computer Vision and Pattern Recognition CVPR 2012.

External links
A Brief Introduction to Neural Networks (D. Kriesel) (http://www.dkriesel.com/en/science/neu
ral_networks) - Illustrated, bilingual manuscript about artificial neural networks; Topics so
far: Perceptrons, Backpropagation, Radial Basis Functions, Recurrent Neural Networks,
Self Organizing Maps, Hopfield Networks.
Review of Neural Networks in Materials Science (http://www.msm.cam.ac.uk/phase-trans/a
bstracts/neural.review.html)
Artificial Neural Networks Tutorial in three languages (Univ. Politécnica de Madrid) (http://w
ww.gc.ssr.upm.es/inves/neural/ann1/anntutorial.html)
Another introduction to ANN (http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/rep
ort.html)
Next Generation of Neural Networks (https://www.youtube.com/watch?v=AyzOUbkUf3M) -
Google Tech Talks
Performance of Neural Networks (http://www.msm.cam.ac.uk/phase-trans/2009/performanc
e.html)
Neural Networks and Information (http://www.msm.cam.ac.uk/phase-trans/2009/review_Bha
deshia_SADM.pdf)
Sanderson, Grant (October 5, 2017). "But what is a Neural Network?" (https://www.youtube.
com/watch?v=aircAruvnKk&list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi).
3Blue1Brown – via YouTube.

