A Sociological Study of the Official History of the Perceptrons Controversy
Mikel Olazaran
Neural Nets
FIGURE 1
Multilayer Network
[Diagram: input units connected through hidden units to output units.]
Single-layer Machines
In the late 1950s and early 1960s groups from several universities
and laboratories carried out research and implementation projects
in neural nets. Among the most important projects were those
headed by Frank Rosenblatt (Cornell University and Cornell
Aeronautical Laboratory, CAL), Bernard Widrow (Department
of Electrical Engineering, Stanford University) and Charles Rosen
(Stanford Research Institute, SRI).
The number of neural-net projects or groups is difficult to
quantify. In their critical study of neural nets (analyzed later in this
paper), Minsky and Papert alleged that, after Rosenblatt's work,
there were perhaps as many as a hundred groups (in an interview
conversation this number went up to 'thousands').
Rosenblatt's (1958) [perceptron] schemes quickly took root, and soon there
were perhaps as many as a hundred groups, large and small, experimenting with
the model either as a 'learning machine' or in the guise of 'adaptive' or 'self-
organizing' networks or 'automatic control' systems.13
FIGURE 2
Perceptron
[Diagram: a perceptron of order 6, with modifiable connections to the output.]
FIGURE 3
Processing Unit
[Diagram: inputs v1 ... vn, weighted by w1 ... wn, feed a summing element and threshold device that produce the unit's output.]
FIGURE 4
Simplified Perceptron
[Diagram: input units v1 ... vn connected directly to the output units.]
The Navy revealed the embryo of an electronic computer today that it expects
will be able to walk, talk, see, write, reproduce itself and be conscious of its
existence. Later perceptrons will be able to recognize people and call out their
names and instantly translate speech in one language to speech and writing in
another language, it was predicted.17
Many of the people at MIT [referring to the symbolic AI leaders] felt that
Rosenblatt primarily wanted to get press coverage, but that wasn't true at all.
As a consequence many of them disparaged everything he did, and much of
what the Office of Naval Research did in supporting him. They felt that we were
not sufficiently scientific, and that we didn't use the right criteria. That was just
not true. Rosenblatt did get a lot of publicity, and we welcomed it for many
reasons. At that time, he was with Cornell Aeronautical Laboratory, and they
also welcomed it. But at ONR - as with any government organization - in
order to continue to get public support, they have to have press releases, so that
people know what you are doing. It is their right. If you do something good, you
should publicize it, leading then to more support.19
Minsky and his crew thought that Frank Rosenblatt's work was a waste of time, and they certainly thought that our work at SRI was a waste of time. Minsky really didn't believe in perceptrons, he didn't think it was the way to go. I know he knocked the hell out of our perceptron business.21
There are now in the world machines that think, that learn and that create.
Moreover, their ability to do these things is going to increase rapidly until - in
a visible future - the range of problems they can handle will be coextensive
with the range to which the human mind has been applied.24
FIGURE 5
[Diagram: threshold units with weighted inputs realizing simple logical functions of two inputs.]
Neural-net researchers themselves pointed out that the limitations of single-layer machines could be overcome only by multilayer networks, and not simply by building bigger single-layer, more advanced machines:
For example, the 'and' [function] . . . can be realized with the [single-layer]
linear-logic circuit . . . while the exclusive-or [functions] . . . require a cascade
linear logic arrangement [hidden units] .... [The limitations of single-layer
networks] are extremely severe . . . since the percentage of realizable logical
functions becomes vanishingly small as the number of input variables increases.
The chances of obtaining an arbitrary specified response are correspondingly
reduced. More sophisticated approaches must therefore be undertaken. A
number of alternatives are possible .... The most attractive appears to be
multiple-layer logical circuit arrangements, since it is known that any function
can thereby be realized. . . . However, no general criteria on the basis of which
intermediate logical layers can be taught functions required for over-all network
realization of the desired input/output relationship have been discovered.39
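The technical point in this quotation can be made concrete. The sketch below, in Python, is a minimal illustration and not drawn from the sources quoted (the weights, thresholds and search grid are my own assumptions): a single threshold unit realizes 'and'; a brute-force search over a grid of weights finds no single-layer realization of 'exclusive-or' (and, by linear separability, none exists for any weights); and a two-layer 'cascade' with one hidden unit realizes it.

```python
# Illustrative sketch: a single threshold unit realizes AND but not XOR,
# while a two-layer ("cascade") arrangement with one hidden unit does.
# All weights, thresholds and the search grid are illustrative choices.

def threshold_unit(inputs, weights, theta):
    """Classic linear threshold unit: fires iff the weighted sum reaches theta."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= theta else 0

def single_layer_can_compute(target):
    """Brute-force search over a small grid of weights and thresholds.
    (For XOR the failure is a theorem for *all* weights; the grid just illustrates it.)"""
    grid = [w / 2 for w in range(-8, 9)]  # -4.0 ... 4.0 in steps of 0.5
    for w1 in grid:
        for w2 in grid:
            for theta in grid:
                if all(threshold_unit(x, (w1, w2), theta) == target(x)
                       for x in [(0, 0), (0, 1), (1, 0), (1, 1)]):
                    return True
    return False

AND = lambda x: x[0] & x[1]
XOR = lambda x: x[0] ^ x[1]

print(single_layer_can_compute(AND))  # True: e.g. weights (1, 1), theta 1.5
print(single_layer_can_compute(XOR))  # False: XOR is not linearly separable

def two_layer_xor(x):
    """XOR via a hidden unit: (x1 OR x2) minus a penalty when both fire."""
    h = threshold_unit(x, (1, 1), 1.5)                        # hidden unit detects x1 AND x2
    return threshold_unit((x[0], x[1], h), (1, 1, -2), 0.5)   # OR minus 2*AND

print([two_layer_xor(x) for x in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```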
Theorists are divided on the question of how closely the brain's methods of
storage, recall, and data processing resemble those practised in engineering
today. On the one hand, there is the view that the brain operates by built-in
algorithmic methods analogous to those employed in digital computers, while
on the other hand, there is the view [Rosenblatt's view] that the brain operates
by non-algorithmic methods, bearing little resemblance to the familiar rules of
logic and mathematics which are built into digital devices.42
The models which conceive of the brain as a strictly digital, Boolean algebra
device, always involve either an impossibly large number of discrete elements,
or else a precision of the 'wiring diagram' and synchronization of the system
which is quite unlike the conditions observed in a biological nervous system.43
In the middle 1960s Papert and Minsky set out to kill the perceptron, or, at least, to establish its limitations - a task that Minsky felt was a sort of social service they could perform for the artificial intelligence community.44
In the late 1950s and early 1960s, after Rosenblatt's work, there was a great wave of neural network research activity. There were maybe thousands of projects. For example Stanford Research Institute had a good project. But nothing happened. The machines were very limited. So I would say by [the mid-1960s] people were getting worried. They were trying to get money to build bigger machines, but they didn't seem to be going anywhere. That's when Papert and I tried to work out the theory of what was possible for the machines without loops [feedforward perceptrons].45
There was some hostility in the energy behind the research reported in Perceptrons .... Part of our drive came, as we quite plainly acknowledged in our book, from the fact that funding and research energy were being dissipated on . . . misleading attempts to use connectionist methods in practical applications.46
Earlier I showed that early neural-net researchers were aware of problems like connectedness (especially worrying in object and letter recognition). Nevertheless, in Minsky and Papert's study those problems acquired an 'anomalous' status. Larry Laudan has defined an 'anomalous problem' as one that both (a) resists solution within a scientific approach, and (b) has an acceptable solution within a competing research tradition; but in controversies, notions like 'resistance to solution' and 'acceptable solution within a competing tradition of research' are evaluated differently by the contending groups. The anomalous character of a problem increases if researchers agree to compare the solution (or the lack of solution) given by a tradition of research with the solution given by a competing one. One important move in Minsky and Papert's rhetoric was to claim that problems such as parity or connectedness could easily be solved using conventional algorithms in serial computers.55
Many of the theorems show that perceptrons cannot recognize certain kinds of
patterns. Does this mean that it will be hard to build machines to recognize
those patterns? No. All the patterns we have discussed can be handled by quite
simple algorithms for general-purpose computers.57
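Their claim is easily illustrated. The following minimal sketch (an assumed example, not taken from Minsky and Papert's book) computes the parity predicate - hard for bounded-order perceptrons - in one line of conventional serial code:

```python
# Parity of a binary pattern: trivial for a serial program, yet hard for
# bounded-order perceptrons (Minsky & Papert's parity theorem).
def parity(pattern):
    """Return 1 if the pattern has an odd number of active points, else 0."""
    return sum(pattern) % 2

print(parity([1, 0, 1, 1]))  # 1: three active points
print(parity([1, 0, 0, 1]))  # 0: two active points
```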
FIGURE 7
Interpretative Flexibility
FIGURE 8
[Diagram of the relations C/T and T/E.]
Source: H.M. Collins, Artificial Experts: Social Knowledge and Intelligent Machines (Cambridge, MA: MIT Press, 1990), 32.
The simple perceptron (which consists of a set of inputs, one layer of neurons,
and a single output, with no feedback or cross coupling) is not at all what a
perceptron enthusiast would consider a typical perceptron. He would be more
interested in perceptrons with several layers, feedback and cross coupling...
The simple perceptron was studied first, and for it the 'perceptron convergence
theorem' was proved. This was encouraging, not because the simple perceptron
is itself a reasonable brain model (which it certainly is not; no existing
perceptron can even begin to compete with a mouse!), but because it showed
that adaptive neural nets, in their simplest forms, could, in principle, improve.
This suggested that more complicated networks might exhibit some interesting
behavior. Minsky and Papert view the role of the simple perceptron differently.
Thus, what the perceptronists took to be a temporary handhold, Minsky and
Papert interpret as the final structure.62
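The 'perceptron convergence theorem' mentioned in this passage concerns the simple error-correction training rule: when a separating set of weights exists, the rule needs only finitely many corrections to find one. A minimal sketch of that rule follows; the training task ('and'), learning rate and epoch count are my own illustrative assumptions:

```python
# Minimal sketch of the perceptron error-correction rule. The convergence
# theorem guarantees only finitely many corrections are needed whenever the
# training data are linearly separable (as AND is).
def train_perceptron(samples, epochs=20, lr=1.0):
    w = [0.0, 0.0]   # weights
    b = 0.0          # bias (equivalently, a trainable threshold)
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0
            err = target - out   # +1, 0 or -1
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_samples)
print([1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0
       for x, _ in and_samples])  # [0, 0, 0, 1]
```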
When I first saw the book, years and years ago, I came to the conclusion that
they had defined the idea of a perceptron sufficiently narrowly so that they
could prove that it couldn't do anything. I thought that the book was relevant,
in the sense that it was good mathematics. It was good that somebody did that,
but we had already gone so far beyond that. Not beyond the specific
mathematics that they had done. But the structures of the networks, and the
kinds of models that we were working on were so much more complicated and
sophisticated than what they had discussed in the book. All the difficulties, all
the things that they could prove that the perceptron couldn't do were pretty
much of noninterest, because we were working with things so much more
sophisticated than the models that they were studying. The things they could
prove you couldn't do were pretty much irrelevant.63
Once the controversy was closed, most people used Minsky and Papert's proofs against neural nets without ever going into them.64
As in Pinch's case, the authority of Minsky and Papert's proofs
can be linked to the importance of the axiomatic or 'arithmetic
ideal' in science,65 although in this case this ideal should be applied
not only to those specific disputed objects but also to the more
general differences between the symbolic and neural-net
approaches. Symbolic AI is based on the capabilities of the
computer for manipulating symbolic expressions in ways sensitive
to their logico-syntactical - and therefore discrete - structure.
Although the question of proving what a computer program can
do is by no means trivial,66 symbolic AI was much closer to the
arithmetic (and rationalist) ideal than the subsymbolic, environment-
driven, trained (not programmed) neural-net approach (which was
closer to self-organizing, cybernetic systems).67
So far I have analyzed Minsky and Papert's proofs about single-
layer perceptrons. But what about multilayer nets? The question
of learning in multilayer nets had been on neural-net researchers'
agenda since the late 1950s, and was widely seen by them as a
critical issue. According to the official history of the debate,
Minsky and Papert showed that progress in neural nets as a whole
(not just in single-layer systems) was not possible. But what
Minsky and Papert actually said (in the formal literature) was
much less than that.
The perceptron has shown itself worthy of study despite (and even because of!) its severe limitations. It has many features to attract attention: its linearity; its intriguing learning theorem; its clear paradigmatic simplicity as a kind of parallel computation. There is no reason to suppose that any of these virtues carry over to the many-layered version. Nevertheless, we consider it to be an important research problem to elucidate (or reject) our intuitive judgement that the extension is sterile. Perhaps some powerful convergence theorem will be discovered, or some profound reason for the failure to produce an interesting 'learning theorem' for the multilayered machine will be found.68
In the official history, however, this cautious conjecture became the claim that progress in neural nets was not possible. We must now analyze the process of
closure of this debate - the process through which interpretative
flexibility was reduced, and controversy closed. In other words,
the question now is to explain the emergence of the official-history
view and its social functions.
Programs usually had to be run many times before all errors were found and fixed. Since the debugging process was slower than CPU or input/output speeds by yet further orders of magnitude, after receiving their output and fixing their programs, programmers would have to wait, frustrated, in a queue until the machine was again free .... The ... small number of available computers (especially in universities) meant intense competition for computer time.70
At that time [in the 1960s], the Office of Naval Research had funds of the order of $40K or $50K. ARPA was able to fund hundreds of thousands, even millions. Rosenblatt never attracted that kind of money, because he wasn't offering a large pay-off. By pay-off I mean not in the scientific sense, but in the application sense, world problem solving. Again, his work was much more, I would say, traditional science. The Office of Naval Research never gave him the kind of money that he really required, and he was not successful in getting money from the Science Foundation or from ARPA. One can draw the conclusion that if he had had the money he would have made enormous progress. That's too easy an answer, because it doesn't always follow that large amounts of money make the difference. Well before the Minsky and Papert book came, Rosenblatt was not successful in attracting more money, that I know for a fact.73
Jon Guice has studied the role of ARPA and the MIT-area defence and research community in the process of closure of the perceptrons controversy. He has documented in detail ARPA's decision to concentrate its IPTO funding resources on a few symbolic AI centres from the early 1960s (Minsky's MIT group) and mid-1960s (Stanford, CMU and other smaller institutions), at the same time as it explicitly rejected applications to fund neural-net research.74 This decision by ARPA was a very important factor in the legitimation of symbolic AI and in the closure of the perceptrons controversy. Guice has also pointed out the importance of an unconventional, satirical paper entitled Artificial Intelligence, written by consultant Louis Fein in 1963.75
Fein asks the reader to imagine that a Federal agency sends a request to bid on research and development work in AI to several companies. The author then includes the request to bid, the companies' replies, and an evaluation of the proposals by an external technical expert who advises the agency. Pseudonyms are used for the agency (Bright Field), bidding companies (Optimystica; Dandylines Enterprises; Search Limited, Inc.; Search Unlimited; and Calculated Risks, Inc.) and evaluator (J.R. 'Bubbles' Piercer, from Pessimyths, Inc., a consultancy). The bidding companies represent different AI perspectives and research groups: self-organization (Optimystica), neural nets (Dandylines, which could refer to the SRI neural-net group) . . .
These events of the early-to-mid 1960s could well be the 'marker event' that Newell was looking for
in his account of the emergence of symbolic AI.
Through the early 1960s, all the researchers concerned with mechanistic
approaches to mental functions knew about each other's work and attended the
same conferences. It was one big, somewhat chaotic, scientific happening. The
four issues I have identified - continuous versus symbolic systems, problem
solving versus recognition, psychology versus neurophysiology, and performance versus learning - provided a large space within which the total field sorted
itself out. Workers of a wide combination of persuasions on these issues could
be identified. Until the mid-1950s, the central focus had been dominated by
cybernetics, which had a position on two of the issues - using continuous
systems and orientation towards neurophysiology - but no strong position on
the other two. The emergence of programs as a medium of exploration
activated all four of these issues, which then gradually led to the emergence of a
single composite issue defined by a combination of all four dimensions
[symbolic, problem solving, psychology, performance]. This process was essentially complete by 1965, although I do not have any marker event. [Later Newell
points to one more 'issue'.] Most pattern recognition and self-organizing
systems were highly-parallel network structures. Many were modelled after
neurophysiological structures. Most symbolic-performance systems were serial
programs. Thus, the contrast between serial and parallel (especially highly-
parallel) systems was explicit during the first decade of AI. The contrast was
coordinated with the other four issues I have just discussed.79
. . . such as the stable magnetic orientations and domains in a magnetic system. Any
physical system whose dynamics in phase space is dominated by a substantial
number of locally stable states to which it is attracted can therefore be regarded
as a content-addressable memory. The physical system will be a potentially
useful memory if, in addition, any prescribed set of states can readily be made
the stable states of the system.97
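Hopfield's point can be sketched in a few lines of code. In the illustration below (the stored pattern, network size and update schedule are my own assumptions, not Hopfield's simulations), one pattern is stored in symmetric Hebbian weights; asynchronous threshold updates can only lower the network 'energy', so a degraded probe slides back into the stored pattern, the nearest local minimum:

```python
import random

# Illustrative Hopfield-style sketch: store one pattern in symmetric Hebbian
# weights, then let asynchronous threshold updates pull a degraded version
# back to the stored state (a local minimum of the energy).
random.seed(0)

stored = [1, -1, 1, -1, 1, -1, 1, -1]            # pattern, units in {-1, +1}
n = len(stored)
W = [[0 if i == j else stored[i] * stored[j] for j in range(n)] for i in range(n)]

def energy(s):
    """Hopfield energy E = -1/2 * sum_ij W_ij s_i s_j; never raised by an update."""
    return -0.5 * sum(W[i][j] * s[i] * s[j] for i in range(n) for j in range(n))

probe = stored[:]
for i in random.sample(range(n), 3):             # degrade 3 of the 8 units
    probe[i] = -probe[i]

state = probe[:]
for _ in range(5):                               # a few asynchronous sweeps
    for i in random.sample(range(n), n):
        field = sum(W[i][j] * state[j] for j in range(n))
        state[i] = 1 if field >= 0 else -1       # local update; energy cannot rise

print(energy(probe), energy(state))              # retrieval lowers the energy
print(state == stored)                           # True: the stored pattern is recalled
```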
A learning algorithm was discovered for the Boltzmann machine that provided
the first counterexample to the conjecture by Minsky and Papert that extensions
of the perceptron learning rule to multilayered networks were not possible.'0
The problem, as noted by Minsky and Papert, is that whereas there is a very
simple guaranteed learning rule for all the problems that can be solved without
hidden units, namely the perceptron convergence procedure (or the variation
originally due to Widrow and Hoff, which we call the delta rule), there is no
equally powerful rule for learning in networks with hidden units. The standard
delta rule [Widrow's LMS or delta rule algorithm] essentially implements
gradient descent in sum-squared error for linear activation functions. In this
case, without hidden units, the error surface is shaped like a bowl with only one
minimum, so gradient descent is guaranteed to find the best set of weights. With
hidden units, however, it is not so obvious how to compute the derivatives, and
the error surface is not concave upwards, so there is the danger of getting stuck
in local minima. The main theoretical contribution of this [paper] is to show that
there is an efficient way of computing the derivatives. The main empirical
contribution is to show that the apparently fatal problem of local minima is
irrelevant in a wide variety of learning tasks. Although our learning results do
not guarantee that we can find a solution for all solvable problems, our analysis
and results have shown that as a practical matter, the error propagation scheme
leads to solutions in virtually every case. In short, we believe that we have
answered Minsky and Papert's challenge and have found a learning result
sufficiently powerful to demonstrate that their pessimism about learning in
multilayer machines was misplaced.107
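The 'error propagation scheme' of this quotation can be sketched compactly: gradient descent on sum-squared error, with the derivatives for the hidden layer obtained by the chain rule through the sigmoid units. The sketch below is an illustration under assumed choices (network size, learning rate, epoch count, and the XOR task, the canonical problem requiring hidden units); it is not the original simulations:

```python
import math, random

# Sketch of gradient descent with hidden units ("error propagation"),
# applied to XOR. Network size, learning rate and epochs are illustrative.
random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

H = 3                                   # hidden units
w_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b_h = [random.uniform(-1, 1) for _ in range(H)]
w_o = [random.uniform(-1, 1) for _ in range(H)]
b_o = random.uniform(-1, 1)

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

for _ in range(20000):
    for x, t in data:
        # forward pass
        h = [sigmoid(w_h[j][0] * x[0] + w_h[j][1] * x[1] + b_h[j]) for j in range(H)]
        y = sigmoid(sum(w_o[j] * h[j] for j in range(H)) + b_o)
        # backward pass: chain rule through the sigmoids (sum-squared error)
        d_y = (y - t) * y * (1 - y)
        for j in range(H):
            d_h = d_y * w_o[j] * h[j] * (1 - h[j])   # error propagated to hidden unit j
            w_o[j] -= 0.5 * d_y * h[j]
            w_h[j][0] -= 0.5 * d_h * x[0]
            w_h[j][1] -= 0.5 * d_h * x[1]
            b_h[j] -= 0.5 * d_h
        b_o -= 0.5 * d_y

for x, t in data:
    h = [sigmoid(w_h[j][0] * x[0] + w_h[j][1] * x[1] + b_h[j]) for j in range(H)]
    y = sigmoid(sum(w_o[j] * h[j] for j in range(H)) + b_o)
    print(x, t, round(y, 2))   # outputs should approach 0, 1, 1, 0
    # (local minima are possible in principle; other seeds may need more epochs)
```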
Concluding Summary
* NOTES
The work on which this paper is based was supported by a Basque Government scholarship for doctoral research at the Department of Sociology of the University of Edinburgh (1988-91). I would like to thank the following people for encouraging and helping me throughout my research: Donald MacKenzie (my supervisor), James Fleck, Alfonso Molina and David Willshaw (University of Edinburgh); Peter Dayan (University of Toronto); and Jesus Maria Larrazabal.
11. Neural nets also have a very important engineering aspect, directed toward
special purpose hardware implementation. Nevertheless, the main developments of
the evolution of neural nets have occurred around AI. AI can be seen as the core of
a wider, interdisciplinary field called 'cognitive science'. Although cognitive science
did not emerge as a differentiated discipline until the late 1970s, many of its main
problems were studied under different headings earlier.
12. For the history of symbolic AI, see: James Fleck, The Structure and
Development of Artificial Intelligence: A Case Study in the Sociology of Science
(unpublished MSc dissertation, University of Manchester, 1978); Pamela McCorduck, Machines Who Think: A Personal Inquiry into the History and Prospects of
Artificial Intelligence (New York: W.H. Freeman, 1979); Fleck, 'Development and
Establishment in Artificial Intelligence', in Norbert Elias, Herminio Martins and
Richard Whitley (eds), Scientific Establishments and Hierarchies, Sociology of the
Sciences Yearbook, Vol. 6 (Dordrecht: D. Reidel, 1982), 169-217; Fleck, 'Postscript: The Commercialisation of Artificial Intelligence', in Brian P. Bloomfield
(ed.), The Question of AI (London: Croom-Helm, 1987), 149-64; Paul N.
Edwards, The Closed World: Computers and the Politics of Discourse in Cold War
America (Cambridge, MA: MIT Press, Inside Technology series, 1995, forthcoming).
13. M.L. Minsky and S.A. Papert, Perceptrons: An Introduction to Computational Geometry (Cambridge, MA: MIT Press, 1969), 19. Minsky and Papert refer
to F. Rosenblatt, 'The Perceptron: a Probabilistic Model for Information Storage
and Organization in the Brain', Psychological Review, Vol. 65 (1958), 386-408.
14. National Physical Laboratory (NPL), Mechanisation of Thought Processes,
Vols I and II (London: Her Majesty's Stationery Office, 1959); Marshall C. Yovits
and S. Cameron (eds), Self-organizing Systems: Proceedings of an Interdisciplinary
Conference, Chicago, IL, 5-6 May 1959 (New York: Pergamon Press, 1960); H.
von Foerster and G.W. Zopf (eds), Illinois Symposium on Principles of Self-
organization, University of Illinois, Urbana, IL, 1960 (New York: Pergamon Press,
1962); Yovits, G.T. Jacobi and G.D. Goldstein (eds), Self-organizing Systems 1962
(Washington, DC: Spartan, 1962). In the 'Mechanisation of Thought Processes'
conference there were contributions from approaches including symbolic AI (M.
Minsky and J. McCarthy), 'cybernetics' (Donald M. MacKay, W. Ross Ashby),
pattern recognition (Oliver G. Selfridge, A.M. Uttley, Warren S. McCulloch and
Wilf K. Taylor) and neural networks (F. Rosenblatt). In the 1962 conference on
self-organization, there were contributions from perspectives including neural
modelling (Leon D. Harmon), brain theory/neural networks (W.S. McCulloch,
Michael A. Arbib, Jack D. Cowan), neural networks (F. Rosenblatt, B. Widrow),
neural networks/electrophysiological experiments (B.G. Farley), symbolic AI (A.
Newell) and 'cybernetics' (D.M. MacKay). Within the cybernetics movement,
work was done which was related to neural networks in diverse ways and degrees.
Oliver Selfridge's (NPL, op. cit., 511-26) hybrid Pandemonium system is an
example; another one is work on pattern recognition in Britain by A.M. Uttley
(ibid., 119-47). For early neural-network papers, see also J.A. Anderson and Edward Rosenfeld (eds), Neurocomputing: Foundations of Research (Cambridge,
MA: MIT Press, 1988).
15. F. Rosenblatt, On the Convergence of Reinforcement Procedures in Simple
Perceptrons (Buffalo, NY: Cornell Aeronautical Laboratory Report VG-1196-G-4,
1960); Rosenblatt, Principles of Neurodynamics (New York: Spartan, 1962); B.
Widrow and M.E. Hoff, 'Adaptive Switching Circuits', 1960 IRE WESCON
Convention Record (New York: IRE, 1960), 96-104.
16. Rosenblatt, Principles of Neurodynamics, op. cit. note 15, 111.
17. 'New Navy Device Learns by Doing', The New York Times (8 July 1958),
25:2. For similar statements, see: 'Electronic "Brain" Teaches Itself', The New
York Times (13 July 1958), at iv 9:6; 'Rival', The New Yorker (6 December 1958),
44-45.
18. McCorduck, op. cit. note 12, 87. For a list of people irritated by Rosenblatt, see ibid., 88.
19. Yovits, interview, 28 November 1989.
20. Robert Hecht-Nielsen, Neurocomputing (Reading, MA: Addison-Wesley, 1990), 16-17. Hecht-Nielsen refers to Minsky & Papert, op. cit. note 13.
21. Rosen, interview, 10 November 1989.
22. See Collins (1983), op. cit. note 1, 49; Latour, op. cit. note 1; Star, op. cit. note 1.
23. This is due in part to the fact that AI affects social discourses about the similarities and differences between human beings and machines: see J. Fleck, 'Artificial Intelligence and Industrial Robots: An Automatic End for Utopian Thought', in E. Mendelsohn and Helga Nowotny (eds), Nineteen Eighty-Four: Science between Utopia and Dystopia, Sociology of the Sciences Yearbook, Vol. 8 (Dordrecht: D. Reidel, 1984), 189-231.
24. Cited in Hubert L. Dreyfus, What Computers Can't Do: The Limits of Artificial Intelligence (New York: Harper Colophon, 2nd edn, 1979), 81-82. See also Edwards, op. cit. note 12, 315 (draft publication).
25. Dreyfus, op. cit. note 24.
26. Some parts of Dreyfus's work are not far from the kind of contributions that the sociology of knowledge could make to AI and cognitive science. It is interesting to note that, although Dreyfus's work provoked a strong critical reaction from the AI leaders in the 1960s (see McCorduck, op. cit. note 12, Chapter 9), recently someone as qualified as Minsky has implicitly recognized that such philosophical research can make positive contributions to AI. See the following quote from Minsky's opening talk at the 1988 IEEE International Conference on Neural Networks, in the heat of the neural-net revival: 'Minsky, who has been criticized by many for the conclusions he and Papert make in Perceptrons, opened his defense with the line "Everybody seems to think I'm the devil". Then he made the statement, "I was wrong about Dreyfus too, but I haven't admitted it yet", which brought another round of applause', from Randolph K. Zeitvogel, 'ICNN Reviewed', Synapse Connection (now Neural Technology Update), Vol. 2-8 (1988), 10-11. For a recent discussion about AI from the perspective of the sociology of knowledge, see H.M. Collins, Artificial Experts: Social Knowledge and Intelligent Machines (Cambridge, MA: MIT Press, 1990).
27. The use of the 'core set' concept in controversy studies is due to Collins: 'Core Set' & Changing Order, op. cit. note 1.
28. McCorduck, op. cit. note 12, 88.
29. Rosenblatt, op. cit. note 13, reprinted in Anderson & Rosenfeld (eds), op. cit. note 14, 92-114, at 96; Rosenblatt, Principles of Neurodynamics, op. cit. note 15, 67-70; F. Rosenblatt, 'Strategic Approaches to the Study of Brain Models', in von Foerster & Zopf (eds), op. cit. note 14, 385-96, at 390-91.
30. Rosenblatt, Principles of Neurodynamics, op. cit. note 15, 306.
94. See, for example: Michael J. Mulkay, 'Three Models of Scientific Development', Sociological Review, Vol. 23 (1975), 509-26; Mulkay, G. Nigel Gilbert and S. Woolgar, 'Problem Areas and Research Networks in Science', Sociology, Vol. 9 (1975), 187-203.
95. It has been pointed out that Hopfield was a well-recognized physicist who could 'afford' to attempt to make a contribution in an area that had not yet been recognized as a valuable or respectable one: see Anderson & Rosenfeld, op. cit. note 14, 457.
96. J.J. Hopfield, 'Neural Networks and Physical Systems with Emergent Collective Computational Abilities', Proceedings of the National Academy of Sciences, Vol. 79 (1982), 2554-58, reprinted in Anderson & Rosenfeld, op. cit. note 14, 460-64. The crucial aspect of Hopfield's contribution - a consequence of his use of the spin-glass metaphor - was the notion of the 'energy' of a (symmetrically-connected) neural net. The energy of a Hopfield system (a global measure of its performance) decreases every time a unit updates its state (a local operation), until a local minimum (a stable state of the system) is reached. Thus the local activity of each unit contributes to the minimization of a global property of the whole system. Patterns are stored at local minima of the energy function. One of the most important properties of this type of net is that it can work as a content-addressable memory so that, under the right circumstances, it will retrieve correct whole patterns when presented with degraded versions of input patterns.
97. Hopfield, op. cit. note 96 (reprinted version), 460.
98. David H. Ackley, G.E. Hinton and T.J. Sejnowski, 'A Learning Algorithm for Boltzmann Machines', Cognitive Science, Vol. 9 (1985), 147-69.
receive enough recognition for his previous work: see S. Grossberg, 'Competitive
Learning: From Interactive Activation to Adaptive Resonance', Cognitive Science,
Vol. 11 (1987), 23-63.
109. M.L. Minsky and S.A. Papert, Perceptrons: An Introduction to Computational Geometry (Cambridge, MA: MIT Press, 1988), 260-61; this is a second,
enlarged edition of Minsky & Papert, op. cit. note 13.
110. See, for example, Olazaran (1991), op. cit. note 9, 274-75.
111. Ibid., 282-92.
112. Ibid., 276-81.
113. Ibid., 293-307.
114. I have adopted these categories from Pinch's case study: see Pinch, op. cit.
note 7.