AI Term Paper
Honors 220C
Professor Freeman
December 7, 2019
Neuromorphic AI
Through our course in AI we have delved into many aspects of the field, from Neural Networks
and Machine Learning to the ethics of AI and consciousness. I decided to investigate the
intersection of software and hardware and how that pertains to AI. Neuromorphic AI refers
to Neuromorphic chips used in conjunction with Spiking Neural Networks (SNNs). The electrical
components mimic those of the brain, with hierarchical connectivity, dendritic compartments,
and synaptic delays. What these terms really mean is that we are building hardware and
software that imitates the biology of the brain. Neuromorphic chips were originally created in
the 1980s, but back then only specific algorithms could be programmed into the chips (Wired).
They could only be used for a single process, and so lacked the capacity of the human cortex.
Our brains can handle many sensations and impulses while transmitting and analyzing data
at the same time. Computer chips that replicate that process and those capabilities would be
revolutionary. I will first cover the background of Neuromorphic chips and recent research into
them, and later I will discuss industry examples and what these chips could mean for our society.
These Neuromorphic chips are meant to simulate how a neuron in the brain works, with
axons connecting to the dendrites of other neurons. AI would allow the chips to run algorithms
that adapt and change while also passing information at non-metronomic intervals, thereby
acting even more like a neuron in the brain: growing and learning, but also varying the pace at
which information is received and passed on. These chips use probabilistic computing to weigh
the many uncertainties and contradictions in the data they receive, and to learn from that data
at a speed and scale humans are not capable of. A well-known statistical method called Monte
Carlo simulation is one of the underlying models in these chips (Intel). A Monte Carlo simulation
involves calculating an equation many times with random inputs in order to predict a range of
likely outcomes (Python). Using these probabilistic simulation methods helps the chips work in a
more fluid manner, so that they can adapt to various data inputs and learn faster.
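To make the Monte Carlo idea concrete, here is a minimal sketch in Python (my own toy example, not code from Intel's chips): it estimates pi by sampling random points and checking how many land inside a quarter circle, with the estimate tightening as more random inputs are drawn.

    import random

    def estimate_pi(num_samples=1_000_000):
        """Monte Carlo: sample random points in the unit square and
        count the fraction that lands inside the quarter circle."""
        inside = 0
        for _ in range(num_samples):
            x, y = random.random(), random.random()
            if x * x + y * y <= 1.0:
                inside += 1
        return 4.0 * inside / num_samples  # quarter-circle area is pi/4

    print(estimate_pi())  # roughly 3.14, closer with more samples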
As we have learned in class, Neural Networks are entirely based upon probabilities, so that the
computer can learn which outputs are more likely and what they might mean based upon the
results of each layer of the network. It makes sense that SNNs would work in a similar way,
except that they now pass information as discrete spikes. One of the papers I read (Tavanaei)
explains the relation between artificial neural networks (ANNs) and spiking neural networks.
SNNs are more hardware-friendly and energy-efficient; however, they are difficult to train, since
backpropagation can't be used with neurons that communicate in discrete spikes of information.
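To see why backpropagation struggles here, note that a spiking neuron's output is a hard step function of its potential, so its derivative is zero almost everywhere. A quick numeric check (my own illustration, with an arbitrary threshold of 1.0):

    def spike(potential, threshold=1.0):
        # A spiking neuron's output: a hard step, not a smooth curve.
        return 1.0 if potential >= threshold else 0.0

    # Finite-difference "gradient" of the spike output with respect to
    # the potential: zero on both sides of the threshold, so gradient
    # descent gets no signal to follow.
    eps = 1e-6
    for v in (0.5, 0.999, 1.001, 1.5):
        grad = (spike(v + eps) - spike(v - eps)) / (2 * eps)
        print(v, grad)  # prints 0.0 for every value tested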
The paper goes on to explain the differences between ANNs and SNNs, as well as the various
mathematical functions that are used for the synaptic spike trains and more. Deep Convolutional
Neural Networks (DCNNs) are also mentioned, along with how they can be converted to SNNs to
improve the abilities of the SNNs and achieve Deep Learning with neuromorphic chips. The
diagrams that represent these networks are similar to what we went over in class, except that
there are now spiking units. In general, each spike train leads to a convolution, then to a pool,
then back to a convolution, and this repeats as the number of feature maps increases until the
spike counter is reached. Spiking neuromorphic chips have had trouble in the past with Deep
Learning, mainly with accuracy, but a lot of work has been done to remedy that.
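One way to picture the ANN-to-SNN conversion is rate coding: an ANN unit's activation is reinterpreted as a firing rate, so counting spikes over a time window approximately recovers the original activation. A small sketch (my own simplified illustration, with made-up numbers, not the exact scheme from the paper):

    import random

    def rate_encode(activation, timesteps=1000, max_rate=0.5):
        # Emit a spike at each timestep with probability proportional
        # to the (clamped) activation value.
        p = min(max(activation, 0.0), 1.0) * max_rate
        return [1 if random.random() < p else 0 for _ in range(timesteps)]

    # An activation of 0.8 becomes a spike train whose firing rate,
    # divided by max_rate, recovers roughly 0.8 (up to sampling noise).
    train = rate_encode(0.8)
    print(sum(train) / (len(train) * 0.5))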
My main takeaway from this paper is that SNNs have a similar outline to the neural networks we
studied, except that each spiking neuron accumulates a potential from its inputs; when that
potential crosses a threshold, the neuron generates a signal that travels to other neurons,
which, in turn, increase or decrease their potentials in accordance with this signal. This
flexibility is what sets Neuromorphic chips apart from traditional CPUs.
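Here is a minimal sketch of that accumulate-and-fire behavior, a leaky integrate-and-fire neuron in Python (my own simplified model with arbitrary constants, not the circuit of any particular chip):

    def simulate_lif(input_spikes, weight=0.6, leak=0.9, threshold=1.0):
        # The potential decays a little each step, rises with each
        # weighted input spike, and the neuron fires and resets once
        # the potential crosses the threshold.
        potential = 0.0
        output_spikes = []
        for spike_in in input_spikes:
            potential = potential * leak + weight * spike_in
            if potential >= threshold:
                output_spikes.append(1)
                potential = 0.0  # reset after firing
            else:
                output_spikes.append(0)
        return output_spikes

    # Clustered input spikes push the neuron over threshold; isolated
    # ones leak away before it can fire.
    print(simulate_lif([1, 1, 0, 0, 1, 1, 1, 0, 0, 0]))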
Neuromorphic chips currently don’t have high-volume applications, which is one of the reasons
why they have stayed in the background for so long. Big Data has been a hot topic in the last
decade, and technology that cannot handle a massive flow of data gets passed over, which has
slowed the adoption of these chips by the general technology industry. Another important
aspect of these chips is that they are made of silicon, which makes for a more durable chip.
Modern chips have run into physical limits as processing speeds have risen and higher volumes
of data are processed, and the heat they generate is a serious problem. Neuromorphic chips are
able to use much less power, while processing faster and adapting more quickly to their data
inputs. Jeff Hawkins, founder of Numenta, believes strongly in the importance of building these
chips in silicon because of these benefits. His company is working to reverse-engineer the
neocortex, essentially aiding neuromorphic engineering from the neuroscience side.
Neuromorphic engineering can teach us things about how the brain works, but we still need
neuroscience research to understand the biology and chemistry of the brain.
Companies like Intel and IBM have already been performing research in the area
of Neuromorphic AI. Intel has a chip that they call “Loihi”, consisting of a 128-core design and
made on a 14nm process technology (Intel). I see this as a relevant application of condensed
matter physics and nanotechnology, where these circuits are so small that we must deal with
the quantum effects that occur. “What makes this a big deal is that these chips require far less
power to process AI algorithms. For example, one neuromorphic chip made by IBM contains
five times as many transistors as a standard Intel processor, yet consumes only 70 milliwatts of
power. An Intel processor would use anywhere from 35 to 140 watts, or up to 2000 times more
power” (Wired).
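Taking the wattage figures in that quote at face value, the ratio is easy to verify:

    ibm_chip_watts = 0.070           # the 70 mW IBM neuromorphic chip
    for cpu_watts in (35.0, 140.0):  # typical Intel processor range
        print(f"{cpu_watts:.0f} W is {cpu_watts / ibm_chip_watts:.0f}x the neuromorphic chip")
    # 35 W -> 500x, 140 W -> 2000x, matching the "up to 2000 times" claim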
Power is key as well. If these chips are improved with AI and designed to better handle large
data sets, then they could be mass produced all over the world and used in almost every
technology that a CPU governs today. This significant reduction in power would put far less
strain on energy grids and could help shrink our carbon footprint. I foresee these chips being
used in renewable energy, as a way to reduce the power cost of producing energy. One of the
reasons that nuclear fusion is not yet a viable source of energy is that fusion requires more
energy to combine the particles than the amount of energy produced. Computer systems that
require less energy and are far more dynamic could be one of the developments that helps
close that gap.
IBM has built a chip called TrueNorth. They decided to focus on the architecture of the
system because they saw that today’s supercomputers are no match for the organic wetware
that makes up the human brain. “To underscore this divergence between the brain and today’s
computers, note that a 'human-scale' simulation with 100 trillion synapses required 96 Blue
Gene/Q racks of the Lawrence Livermore National Lab Sequoia supercomputer” (Dharmendra
Modha, IBM Fellow). To make up for this disparity, IBM dramatically increased the count of
cores and transistors on the chip, to 4096 cores and 5.4 billion transistors (IBM). While running
a complex neural network, the chip uses less than 100mW of power and has a power density of
20mW/cm^2 (IBM). Intel’s ‘Loihi’ chip also consumes less than 100mW of power,
demonstrating how closely in competition these two companies are. I believe that this
competition is good, as long as there is still collaboration across the various research areas of
this field.
Even if neuromorphic chips don’t achieve quite the revolution that some minds speculate, they
will still push computing in a more efficient, brain-inspired direction.
Most of the broad ethical concerns of AI still apply to this technology. These chips can serve
many different applications of AI, so the ethical concerns will vary depending on the discipline
they are integrated into. If this technology allows computers to adapt quicker and learn faster,
we could see the rise of general AI even sooner than previously thought, although it remains
heavily debated whether general AI will ever arrive. What is not debated is the consensus that
Neuromorphic engineering will make AI much more efficient (Techolution). It is essential that
we have legislation that provides guidelines, but as we have seen in class, our governments are
far behind in this area.
In conclusion, Neuromorphic AI moves our hardware and software closer to the biology of
the brain. By simulating how adaptive and quick-learning neurons are with our hardware, we
can dramatically decrease the power consumed and increase the speed at which these chips
process data and learn. All neuromorphic chips are based around transforming input spikes
into output spikes, using neural networks and probabilistic methods. If these chips become
more widely available, we should see far-reaching impacts, from research and scientific
discovery to the everyday technology our society runs on.
References
Wired: https://www.wired.co.uk/article/ai-neuromorphic-chips-brains
NIST: https://www.nist.gov/programs-projects/neuromorphic-computing
Intel: https://www.intel.com/content/www/us/en/research/neuromorphic-computing.html
Python (Practical Business Python): https://pbpython.com/monte-carlo.html
MIT Technology Review: https://www.technologyreview.com/s/526506/neuromorphic-chips/
Mepits: https://www.mepits.com/tutorial/286/vlsi/neuromorphic-chip
Techolution: https://techolution.com/neuromorphic-computing-2030-ai-mega-trends/
The Next Platform: https://www.nextplatform.com/2017/03/29/neuromorphic-quantum-supercomputing-mesh-deep-learning/
IBM: research.ibm.com/articles/brain-chip.shtml
Tavanaei: https://arxiv.org/pdf/1804.08150.pdf
Human Brain Project: https://www.humanbrainproject.eu/en/silicon-brains/
Numenta: https://numenta.com/