Teleosemantics
Edited by
GRAHAM MACDONALD
AND DAVID PAPINEAU
1. NATURALISM
impacts on the wider world, this argues that they cannot themselves lie out-
side the physical realm, for if they did then the bodily movements and so on
would have non-physical causes as well as physical ones.
Still, the doctrine that the mental realm cannot be outside the physic-
al realm is less straightforward than it seems. For one thing, it is unclear
how ‘physical’ is to be understood in this context. As Carl Hempel poin-
ted out some decades ago (Hempel 1969), our present best physical theory
is likely to be overtaken in the future by a better theory, which argues
that we would be wrong to constrain our naturalizing ambitions by our
present understanding of the physical. On the other hand, the constraint
that our ontology must respect some future physical theory seems no con-
straint at all, given our ignorance of that future theory. So any attempt
properly to articulate the idea that the mental cannot be separate from
the ‘physical’ threatens to make this doctrine either overly restrictive or
quite empty.
It seems to us that this challenge can be satisfactorily answered. The
recent literature contains a number of alternative suggestions for defining
‘physical’ in a way which leaves physicalism both plausible and contentful
(for a brief survey of the options, see Jackson 1998: 6–9).
However, even given such a definition of ‘physical’, there is another
familiar respect in which physicalism about the mental needs further elu-
cidation: how strictly should we read the requirement that the mental not
be ‘ontologically supplementary’ to the physical realm, as we put it above?
Few naturalists would want to subscribe to ‘type physicalism’, in the sense
of requiring each respectable mental property to be strictly identical with
some property describable in the language of physics. Rather, ontological
indistinctness from the physical is widely agreed by physicalists to require
only that mental properties should supervene on physical facts, in the sense
that they are metaphysically determined by the physical facts. But now this
threatens to remove the teeth from naturalism once more. Supervenience
on the physical is not a strong requirement: for example, moral properties
are generally supposed to supervene on physical properties, even by philo-
sophers who would strongly resist the idea that moral facts can somehow be
investigated by the methods of the natural sciences.
Still, the demands of supervenience are not empty. If you hold that
certain properties, while not type-identical to physical properties, are nev-
ertheless metaphysically supervenient on them, then surely you owe some
explanation of why this should be so. What is it about mental properties,
say—or indeed moral properties—that makes it the case that a mental or
moral difference must be due to a physical difference? A satisfactory answer
need not type identify mental or moral properties with physical properties,
but it will need to give some account of the nature of these properties that
will explain why their instances should be metaphysically fixed by the phys-
ical facts.
The view that representational facts are functional facts can be seen as an
answer to this challenge. The concept of function that is used in biology is
itself a contested notion. In fact, it is likely that there is no ‘one’ notion of
function employed in biology. We shall consider some alternative analyses
of ‘function’ below. But, on any account, two things are clear. First, func-
tional properties are a paradigm of properties that are not type-reducible to
physical properties. There is no strictly physical property that is necessary
and sufficient for being a wing, say. All it takes for something to be a wing is
that it have the function of enabling flight: beyond this there is no limit to
the physical variety of different kinds of wing. And the same goes for other
familiar functional categories in biology, like being a stomach, or a heart, or
an eye. Second, despite this lack of type reducibility, it is clear that the func-
tional facts are metaphysically determined by the physical facts. Two items
could not possibly have all the same physical features (including their phys-
ical histories and environments) yet not have the same functional features.
Once the (wide) physical properties of something are given, then its func-
tional nature is fixed. (Different analyses of biological function will explain
this supervenience on the physical in different ways, but all will agree that
the functional facts do supervene on the physical facts.)
So an analysis of representational facts as functional facts will imply that
representational properties are not type-identical to physical properties, yet
at the same time will explain why representational facts must supervene on
the physical facts and thus be naturalistically acceptable.
Even if teleosemantics does not type reduce representational properties
to physical properties, for the reasons just explained, it may reduce them to
biological properties. (Thus Ruth Millikan, the most prominent of teleose-
manticists, entitled her first book Language, Thought, and Other Biological
Categories, 1984.) In defence of this biologically reductionist view, it can be
observed that teleosemantics aims to offer an explicit account of representa-
tional properties by appealing to a notion of function that is used in biolo-
gical theorizing. On the other hand, it is unclear whether the facts to which
teleosemanticists reduce representational properties should really qualify as
biological facts, given that they standardly involve cognitive mechanisms
that would normally be counted as in the realm of psychology rather than
biology. In the end, we do not think that much hangs on whether we think
of teleosemantics as a type reduction to biological properties. The more
important point is that teleosemantics offers a naturalistically acceptable
explanation of representation, whether or not we also count this as a bio-
logical reduction.
of fertilizing ova, but only one in a zillion actually does this. Provided the
pay-off from success sufficiently outweighs the costs of failure, biological
mechanisms can have functions that they fail to achieve far more often than
not. Indeed there are good reasons why we should expect a snake repres-
entation production mechanism to have just this structure: the pay-off from
success (avoiding a real danger of snake bite) so far outweighs the costs of
representing falsely (needless evasive action) that it makes biological sense
to err on the side of caution, and produce the representation in response to
even the most fallible signs of snakes.
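To see the asymmetry at work, here is a minimal sketch (mine, not the authors'; the cost figures are invented purely for illustration) of the expected-cost comparison that makes a hair-trigger snake detector the biologically sensible design:

```python
# Hypothetical costs: a missed snake is vastly worse than needless evasive action.
COST_MISSED_SNAKE = 1000.0   # cost of failing to represent a snake that is there
COST_FALSE_ALARM = 1.0       # cost of evasive action when no snake is present

def expected_cost(p_snake: float, trigger: bool) -> float:
    """Expected cost of producing (or withholding) the snake representation,
    given the probability that a snake is actually present."""
    if trigger:
        return (1.0 - p_snake) * COST_FALSE_ALARM   # pay only for false alarms
    return p_snake * COST_MISSED_SNAKE              # pay only for misses

# Even on very weak evidence (a 1-in-500 chance of a snake), triggering the
# representation is the cheaper option, so most tokens of it will be false.
p = 0.002
print(expected_cost(p, trigger=True))    # about 1.0
print(expected_cost(p, trigger=False))   # 2.0
```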
So teleosemantics, as so far outlined, stands diametrically opposed to the
kind of input-based causal or verificationist theories that imply that false rep-
resentations are atypical. Given the frequency with which false representa-
tions are in fact found, this would seem to count in favour of the teleosemant-
ic approach. However, not all thinkers within the teleosemantic camp regard
its commitment to output-based content as an unalloyed advantage. Con-
sider the following well-known thought-experiment devised by Paul Piet-
roski (1992), and discussed by a number of contributors to this volume. The
kimu are simple creatures, with very limited sensory abilities, whose only
enemies are the snorf, who hunt them every day at dawn. A mutation endows
one of them with a disposition to sense and approach red things. This dispos-
ition is a biological advantage to its possessors, since it leads them to climb a
nearby hill every dawn, the better to observe the red sunrise, and means that
they thereby avoid the marauding snorf, who do not climb hills. As a result,
the disposition spreads through the kimu population.
Now, consider the state a kimu gets into when it is stimulated by some-
thing red. It seems natural to credit this state with the content red. But
an output-based teleosemantics argues differently. Nothing good happens
to the kimu just because they approach something red. Most of their red-
approaching behaviour is just a waste of time. It is only when this behaviour
takes them away from the dangerous snorf that it yields any biological
advantage. So an output-based teleosemantics will deem the state in ques-
tion to represent snorf-free, or predator-free, or some such.
This strikes many as strongly counter-intuitive, especially when it is fur-
ther specified that the kimu cannot tell a snorf from a sausage, and would
be perfectly happy to approach any snorf who happened to colour them-
selves red. Whatever the other virtues of teleosemantics, it seems wrong for
it to conclude that the kimu’s state signifies snorf-free, rather than simply
red. After all, by hypothesis the kimu’s senses are tracking the presence or
absence of redness, not the presence or absence of snorf.
There are alternative versions of teleosemantics that promise to analyse
cases like these differently. These alternatives place more emphasis on the
processes that produce representations than the purely output-based kind
3. FUNCTIONS
Millikan Functions
Central to the etiological account is the idea that individuals gain functional
traits as a result of being replicated. Millikan (1984, 1993) offers a highly
abstract account of replication. A simplified version goes like this: item A is
a reproduction of individual B if and only if B has some determinate properties
in common with A, and this correlation of properties can be explained by
a natural law. These common properties are the reproductively established characters.
Non-Genetic Selection
The first resource enabling extension of the etiological strategy can be called
derivative functionality: devices can have the (direct) function of producing
effects that themselves have (derived) functions. The second resource avail-
able to the teleosemanticist is non-genetic selection.
So far we have not paused to analyse the notion of selection. In fact this
notion applies much more broadly than to genetically based selection. All it
requires is a set of items that have the characteristics of:
4. SUMMARIES OF CHAPTERS
This, the ‘folk epistemology objection’, has been aired before, so Jackson
replies to various responses to the objection (cf. Braddon-Mitchell and
Jackson 1997, 2002; Papineau 2001). One such response claims that
correlations between opaque selection states and transparent states could
allow for access to the opaque content via the transparent state. But such
a correlation, argues Jackson, won’t deliver justified opinion about the
opaque content unless the folk have knowledge of the correlation—and
in most cases, they won’t. Strengthening the correlation to identity fails
the same test—the epistemic properties of (folk-accessible) intentional
content differ from the epistemic properties of any selectional role states.
Interpreting a teleosemantic theory along the lines of functional role-
realizing state theories, where the teleosemantic state is the realizer, also
won’t do; the relevant content property will be the role property, the one
to which the folk have epistemic access.
Ruth Millikan (Chapter 5) addresses a concern that was raised some time
ago by Christopher Peacocke (1992) that the content assigned to any rep-
resentation by her teleosemantic theory would be essentially anti-realistic. It
would be this because content would be sensitive to, and only to, selection
processes, and selection can only operate on what is available to those pro-
cesses. In other words, no selection-transcendent content could be assigned
to any representation. But it seems as though we do understand content
that has not itself been selected for, content that is ‘useless’ from the point
of view of, say, reproductive advantage. Millikan shows how her teleose-
mantic approach can embrace the idea of ‘useless’ content, content that is
derived from selection processes (however these are understood) but is not
itself selected for. Her solution makes essential use of extended selection
processes: any process involving trial-and-error learning counts as a selec-
tion process. Further, any set of systematic mapping rules that are selected
for will contain the capacity to generate such ‘useless content’.
This theme of non-selected (but still teleosemantic) representational con-
tent is pursued by Dan Ryder (Chapter 6), who applies the general form of
Millikan’s notion of derived relational proper functions to the brain. Specifically,
he uses the idea of a modelling machine whose function is the production
of models of those items that are fed into it (its inputs), and applies it to
the way in which natural kinds can be modelled in the brain. He shows how
a cell can become tuned to a source (a bird, say) that is the location of con-
stantly correlated features (feathers, a beak, etc.). Cellular networks tuned
in this way model the environment, which is what they were designed (via
natural selection) to do. Ryder shows how such cellular networks can meet a
particular challenge, that of showing how the extension of one concept can
be determined as different from that of another concept even though the two
extensions have superficially resembling members.
allow for the phenomenon of ‘unexploited content’, content that the system
containing it is unable to use. One may have a representation at one’s disposal
but be unable to exploit all of its representational features, perhaps because
one has not been taught how to use those features. Teleosemantics, they
argue, requires content to be truly ascribed only after the ability to exploit
is acquired—so for the teleosemanticist there can be no ‘unexploited’ con-
tent. They trace this failure to take into account unexploited content to a
tendency to conflate representation and indication. There are significant dif-
ferences between the two, but both can have unexploited content, content
that is there prior to selection—and, it is argued, the kind of content that is
needed for representationalist cognitive science must allow for some part of
it to be unexploited. The conclusion is that it is an objection to teleosemantic
theories that they cannot accommodate unexploited content.
As many of these chapters illustrate, teleosemantics has been used
primarily to account for cognitive aspects of mentality. Carolyn Price
(Chapter 10) applies the ‘High Church teleosemantic theory’ of Millikan
to the determination of the content of emotional appraisals, these being
intentional states (e.g. beliefs) triggering the occurrence of our emotions. A
particular problem is how to distinguish such appraisals from dispassionate
evaluative judgements. Price begins by listing various functions of
emotional appraisals, such as providing motivation, focusing attention on
relevant information, limiting the set of responses the subject will choose
from, and triggering expressive behaviour. Given this, she raises the question
‘What kind of content does an emotional appraisal have—descriptive,
directive, or mixed?’ In the light of the variety of functions such appraisals
are called upon to perform, Price suggests that a mixed, descriptive and
directive, content is called for. Do evaluative judgements similarly have
mixed content? She suggests that they do, but that the directive content of
the two is different, in that the directive content of the emotional appraisal
will be more detailed about the response than will the directive content
of the evaluative judgement. The descriptive content of the emotional
appraisal will also be tied to avoidable threats, this restriction not being
applied to the descriptive content of evaluative judgements. Correlatively,
the temporal content of an emotional appraisal will be restricted to the
present, near past, or near future, whereas the temporal content of an
evaluative judgement need not be so restricted.
Language is at the core of the cognitive revolution that has transformed
psychology over the last forty years or so, and it is also the central paradigm
for the most prominent attempt to synthesize psychology and evolutionary
theory. A single and distinctively modular view of language has emerged out
of both these perspectives, one that encourages a certain idealization. Lin-
guistic competence is uniform, independent of other cognitive capacities,
and with a developmental trajectory that is largely independent of envir-
onmental input (Pinker 1994, 1997). Thus language is seen as a paradigm
of John Tooby and Leda Cosmides’ concept of ‘evoked culture’: linguist-
ic experience serves only to select a specific item from a menu of innately
available options (Tooby and Cosmides 1992). In explaining this concept,
Tooby and Cosmides appeal to the metaphor of a jukebox. The human gen-
ome pre-stores a set of options, and the different experiences provided by
different cultures select different elements out of this option set. I shall argue
to the contrary that variability between speakers, the sensitivity of linguistic
development to environmental input, and the limits of encapsulation are
not noise. They are central to the language and its evolution.
Evolutionary arguments about language face a problem. Evidentially ro-
bust theories of the evolution of language are in short supply. That is no
accident. Language is unique; no other living species communicates with
even a simple language.¹ Moreover, it leaves no direct trace in the
¹ The claim that there is a qualitative difference between human language and animal
signal systems is not controversial, though there are conflicting views on the nature of the
crucial differences: see Hauser et al. (2002), Jackendoff and Pinker (2005), and Pinker
and Jackendoff (2005) for divergent views on the crucial organizational difference
between language and other systems. Hauser and his allies think that recursive structure
is the crucial novelty; Jackendoff and Pinker think that this understates the significance
of the productive lexicon. Deacon, in contrast to both groups, argues that the crucial
difference is semantic not organizational: terms in natural language are symbols, not
natural signs (Deacon 1997).
² The biology–culture dichotomy is not a good one (Oyama et al. 2001). But by
‘biological’ I mean that phenotypic similarity is transmitted from parent to offspring
exclusively by the flow of genes, not through any form of social learning.
³ Tomasello argues that this is true not just of the vocabulary of a language but
also of its structural features: many morphological features—for example, tense–aspect
systems—begin as specific items of vocabulary which are then incorporated as markers;
for example, ‘going’ and ‘will’ have been converted into future constructions in English
(Tomasello 1999; 2003: 42–4).
I agree with Pinker and other defenders of the modularity hypothesis that
language is very cognitively demanding, and the evolution of language must
versions of the module within the one population (Chomskysoft™ 1.2 and
Chomskysoft™ 1.3), as there must be as it evolves, then some members of the
population will speak a language that is partially incommensurable to other
members of the same group. Without the right module, even if my con-
versational partner deploys modal constructions, I will neither understand
nor come to understand them.⁴ Remember, too, that structural innova-
tions are not merely additive. If, for example, Chomskysoft™ 1.3 differs from
its predecessor by having a regularized tense–aspect system modifying the
verb phrase, then sentence formation using that module will be pervasively
different from that of its predecessor. Incommensurability will not be
an occasional glitch in cross-module interactions. It will be a typical and
persistent feature of these interactions, for those equipped with the sim-
pler module cannot acquire the new structures (or, at least, cannot acquire
native-speaker ease in the use of them).
Thus, consider a case where a language module, Ancestral, is fixed in a
population. Consider a Variant that would be superior in expressive power
if it were universal in a population. It by no means follows that Variant
can invade. Even if agents are motivated to cooperate, high error rates make
coordination difficult, and the invading form may increase error rates ini-
tially, even though it would be a superior system if it were fixed in the
population. An ability to embed clauses multiply or to indicate tense and
aspect by inflections threatens just to generate extra communication fail-
ures. The fitness structure is similar to that of a small number of Tit-for-Tat
variants in an otherwise All-Defect population. Those using the Tit-for-Tat
strategy find it hard to become established, for they do worse against the majority
practice than that practice does against itself. The same is true of Variant, for
the pay-off structure is as follows: Variant–Variant > Ancestral–Ancestral
> Variant–Ancestral, because Variant–Ancestral interactions will result in
more communication and coordination failures than either of the other pair-
ings. But since Variants are rare and Ancestrals are common, unless there is
sharp population segregation most Ancestral interactions will be with other
Ancestrals, and few Variant interactions will be with other Variants. Thus
Variants can invade only if (i) Variant–Variant interactions are much more
productive than Ancestral–Ancestral interactions; or (ii) Variant–Ancestral
interactions are only slightly less productive than Ancestral–Ancestral inter-
actions; or (iii) Variants rarely interact with Ancestrals.
The conditions which would allow Variant to invade may have been
satisfied in early hominid populations. But it is not likely that they were.
For there are reasons for suspecting that Variant–Ancestral interactions
⁴ Or, at best, I will understand them with more difficulty and with more misunder-
standings.
⁵ Moreover, accent has the further consequence that when individuals do transfer,
they carry some of their history with them. So it acts as a membership badge whether
you want one or not. Our linguistic identity is both difficult to fake and difficult to
conceal. The same is of course true of other social conventions which are both complex
and arbitrary: how to use your fish-knife; in which direction the port circulates. These
points about the role of language differences in our social life are well taken, but the
drifter strategy was probably never a genuine option for our hominid ancestors. Amongst
chimps, for example, only female adolescents have an opportunity to migrate.
Few doubt that some of the cognitive skills involved in language use are
specialized and encapsulated: those involved, for example, in the phono-
logical analysis of utterances in your native language. Likewise, few argue
that all of those cognitive skills are specialized and encapsulated. Fodor
nominated the pragmatics of language use as a paradigm of unencapsu-
lated problem solving (Fodor 1983). For example, there is no telling in
advance what information you will need to work out why a speaker says
something that is obviously and spectacularly false and who therefore can-
not be saying what they mean. So no module can be designed for this task.
In this section, I will argue that identifying the literal meaning of an agent’s
utterance is more like the case of pragmatics than that of phonological ana-
lysis.
Grice, Bennett, and others have argued for a metarepresentational pic-
ture of language. Understanding the meaning of an utterance involves
recognizing and representing the speaker’s communicative intentions
(Grice 1957; Bennett 1976). Understanding utterances essentially involves
recognizing that speakers are intentional systems. Ruth Millikan does not
doubt that we sometimes interpret others via identifying their commu-
nicative intentions, especially when participants in an exchange share no
language. But she thinks that as a general model of language use this picture
is psychologically implausible. Our system of linguistic interpretation is not
designed to recognize communicative intentions. Rather, its function is to
generate the same belief in the audience that caused the agent’s utterance.⁶
For Millikan, interpretation is much more like perception than it is like
inference. The system of meaning conventions forms the channel conditions
for ‘natural telepathy’. When word tokens fulfil their proper function, a
thought in the speaker’s mind appears in the mind of the audience (Millikan
1984, 1998).
it does not follow that he has the concept of a word, concept, or symbol.
It is true that the concept–word correspondences to which Jack is sensit-
ive are less stable than the connections between hue, saturation, and colour
judgements. For colour vision depends on constant features of the phys-
ical environment, whereas the TIGER–‘tiger’ correspondence depends on
linguistic regularities. But those regularities are stable over human gener-
ations. Moreover, as with colour vision, the process is involuntary. Once
Two-Aardvarks says ‘tiger’, Jack cannot help but hear it as that term. Thus
Ruth Millikan argues that once the system of correlations or connections
has been established (and it can be established gradually, without delib-
erate engineering or consciousness), decoding can be causally sensitive to
these connections, without agents having to have thoughts about these con-
nections. You can speak and understand without being able to talk (or
think) about speaking or understanding. On this picture, understanding
the semantics of others’ utterances is assimilated to module-like processes
(though Millikan does not use this terminology): it is automatic, unreflect-
ive; not subject to central control.
I doubt that this is right. ‘Hue’, ‘saturation’, and other terms that pick
out the components of visual cognition are not elements of folk vocabulary.
The science of human colour vision is not a refined, more subtle, more pre-
cise but recognizable version of folk thought on colour experience. Likewise
syntax and phonology are not refined versions of folk thought about lan-
guage. In contrast, ‘word’ is a folk word, as is ‘means’, ‘is a name of’, ‘is
about’, and so forth. Much of semantics is a refined, extended, and more
precise version of folk views of language and meaning. I do not think this
is an accident. Seeing does not depend on being able to talk about seeing.
For the most part, the utility of vision does not depend on our capacity
to reflect on vision. In contrast, the utility of language is more closely tied
to our capacity to reflect on language. I think there are three reasons why
this is so. First, in most circumstances we do not see, or in other ways rep-
resent, our retinal image. That image is just a causal intermediary on the
road to perceptual belief. What another agent says cannot be a mere causal
intermediary in this sense, for that would lead us to be liable to deception,
manipulation, and swallowing whole the errors of others. Even if we do
typically accept what others tell us, it is still true that what others say is eval-
uated, and hence must be represented. Furthermore, the fact that we have
been told something is often an important datum in itself. It is a clue to
what others believe and want. Second, language learning and the division of
linguistic labour seems to require that we represent language as well as use
it. In learning language, we acquire much of our vocabulary from expres-
sions rather than instances. Ostensive learning plays some role in acquis-
ition, but many terms are acquired from representations of their targets,
rather than from those targets themselves. We may not need folk semantic
concepts to talk. But how could we acquire a concept from a word without
having concepts for ‘word’, ‘about’, and so on? Early language use could
not rely on smooth, well-established, uniform, and relatively context-free
use of language. Stable word–concept correlations represent coevolution-
ary achievements, not initial starting points. Many early uses of language
would have often been more like language-contact situations using pidgins.
And pidgins, as many have pointed out, rely heavily on pragmatics. They
rely on speakers thinking about what their conversational partners meant,
asking themselves ‘What did he mean by “pushpush”? Could he really have
meant . . . ?’
Thus, whatever the literal meaning turns out to be, the processes by
which one language user identifies that literal meaning of an utterance do
not seem to be modular; they are not like perceptual processing. They are
conceptualized and they are not encapsulated.
conditional rules: ‘if you are rich and powerful in a polygamous society,
marry many wives; if you are rich and powerful in a monogamous envir-
onment, be sure to marry a highly nubile wife of high genetic quality.’
However, there are limits on wired-in conditional responses. The range
of variation and its significance must itself be constant. So innate mod-
ules do presuppose unchanging environments, though perhaps what does
not change is the range and significance of environmental variability. Even
if humans have been widely distributed through ecospace, innate systems
can direct adaptive responses so long as the region of space humans have
occupied has been stable over time.
However, there is no reason to think that the environment of human
language has been stable, even in the extended sense discussed above. For
one thing, the physical and biological environment of human evolution
has been increasingly unstable, as a result of climate change and of the
expansion of hominid species out of their ancestral range (Potts 1996,
1998; Calvin 2002). Moreover, and perhaps more importantly, human
language capacities have coevolved with other cognitive capacities and
with the processes of culture change. Language is not the only distinctive
human cognitive capacity: we differ from our closest living kin in having
far greater powers to represent the future (Suddendorf and Corballis
1997), causal interactions (Tomasello 2000), the mental life of other
agents (Heyes 1998), moral and other norms (Boyd and Richerson 1992;
Richerson and Boyd 1998). If, as is surely likely, these capacities arose
with language rather than preceded it, then the coevolution of language
with these other cognitive faculties would have greatly altered the expressive
demands on human language. Moreover, the coevolution of cognition
with culture has built a mechanism that results in the cumulative change
of human environments. The generation-by-generation construction of
specialist vocabularies discussed earlier is an instance of a more general
process. Humans are niche constructors: we rework our own environment;
think of shelters and clothes; the domestication of animals; the use of
tools (Odling-Smee 1994; Odling-Smee et al. 2003). Moreover, our niche
construction is cumulative: generation N + 1 inherits a changed world
from generation N and further modifies the world N + 2 will inherit. So
Michael Tomasello has argued that, in contrast to the great apes, there are
three timescales that matter in understanding human minds and human
culture. The great apes have social but not cultural lives, and hence there
are only two timescales in their cognitive histories: those of phylogeny and
ontogeny. In understanding human cognition, there is a third timescale:
that of the history of culture, as complex capacities are assembled. As
these are built, they interact with and transform individual ontogeny and
biological history (Tomasello 1999a).
1. INTRODUCTION
¹ The basic ideas of teleosemantics can also be used to try to explain the semantic
properties of public representations (Millikan 1984). But the main focus, both in the
initial work and in more recent discussions, has been mental representation. I will say
quite a bit in this chapter about how mental and public representation are related, but I
will not discuss teleosemantic treatments of public representation itself.
seems to express only a vague hope that some form of informational seman-
tics will succeed (1998: 12). Teleosemantics seems to have a fair number
of people still working on it, with various degrees of faith, as can be seen
in this volume. Millikan’s enthusiasm about her initial proposals seems
undiminished, in contrast to Fodor. But the teleosemantic program is not
insulated from the general turn away from optimism. Sometimes an idea
loses momentum in philosophy for no good reason—because of a mixture
of internal fatigue and a shift in professional fashion, for example. It is pos-
sible that this is what happened with naturalistic theories of representation.
But I think that many people have been quietly wondering for a few years
whether the naysayers might have been right all along.
More concretely, I think there is a growing suspicion that we have been
looking for the wrong kind of theory, in some big sense. Naturalistic treat-
ments of semantic properties have somehow lost proper contact with the
phenomena, both in philosophy of mind and in parts of philosophy of lan-
guage. But this suspicion is not accompanied by any consensus on how to
rectify the problem. In this chapter, my response to this difficult situation is
to re-examine some basic issues, put together a sketch of one possible altern-
ative approach, and then work forward again with the aid of this sketch.²
So a lot of the chapter is concerned with the idea of mental representa-
tion in general, and what philosophy can contribute to our understanding
of this phenomenon. These foundational discussions take up the next two
sections. Section 4 then looks at some empirical work that makes use of
the idea of mental representation—a different empirical literature from the
ones that philosophers usually focus on. Then in Section 5 I look at teleo-
semantics from the perspective established in the preceding sections.
2. REPRESENTATIONALISM REASSESSED
Let us look more closely at the ‘basic representationalist model’, and also
at versions that make use of a resemblance relation. In this section I will
discuss three characteristics of the model, and will also discuss in more
detail the problem of regresses and pseudo-explanations.
The first feature of the model I will discuss perhaps looks harmless, but
I will say quite a lot about it. When we have a situation that fits the basic
⁵ See e.g. Ramsey et al. (1991); van Gelder (1995); Clark (1997).
(This concept will be discussed some more in the next section.) The rat is
guiding its behavior, in some specific spatial task, by using this inner struc-
ture. It seems we can say that this is a case of the rat using the state of X (the
inner structure) as a guide to Y. But, of course, all the rat is doing is receiv-
ing input of various kinds, and combining this with various pre-existing
inner states to control behavior. It does not single out X, single out Y, and
decide to use the former as a guide to the latter.
From the point of view of the scientist, there is no problem here. The
rat is situated in a particular environment—a maze, for example. If the
scientist has reason to posit inner representations, he or she can say that
the representations are being used to deal with this particular maze. The
scientist applies what I will call a ‘thin behavioral’ specification of the target.
This is fine in practice, at least in simple cases. It is also rather philo-
sophically unsatisfying. It is natural from the scientist’s point of view to say
that the rat is using X as a guide to Y, but as far as the mechanics of the
situation are concerned, the ‘as a guide to Y’ claim seems extraneous. There
will also be a lot of vagueness in thin behavioral specifications of targets.
We have a different and richer specification of the target when it is picked
out explicitly by a separate representational act. Against this, it might be
argued that worrying about a richer and sharper specification of the target
is worrying about something that is not part of the ‘empirical skeleton’ of
representation use, and hence should not detain us. I will return to this issue
below.
The third issue I will discuss in this section is not an essential part of the
basic representationalist model, but is a feature of many applications and
developments of it. It is common when talking of mental representation
in ways inspired by the kinds of considerations discussed above to posit a
resemblance relation, albeit an abstract one, between representation and tar-
get. In what I regard as well-developed versions of this idea, the target itself
is not specified by the presence of a resemblance relation; the specification
of the target is a separate matter. Rather, the idea is that given that some
internal structure X is being consulted as a guide to Y, this consultation can
only be expected to be successful or adaptive to the extent that there is a
suitable resemblance relation between the two. So the goal, in some sense,
of consulting a representation is to exploit a resemblance relation between
representation and target.
At first glance, it surely seems clear that this should be regarded as an
optional feature of the representationalist model. Some and only some pub-
lic representations work via resemblance; why should this not be true also
of internal representations? However, it is quite common in this area to
use the notion of resemblance far more broadly, and see the exploitation
of resemblance relations as a general or invariable feature of mental rep-
resentation. Sometimes, it seems to me, these claims are made in a way
that uses an extremely weak concept of resemblance or similarity. In other
cases, the concept of resemblance being used is not especially diluted, and
a genuinely strong claim is being expressed. The underlying line of reason-
ing might perhaps be something like this. In the public case, the available
relations between X and Y that might be exploited are roughly the three
distinguished many years ago by C. S. Peirce: resemblance, indication, and
conventionally established relations. The last of these is off the table in the
case of mental representation. The second can be assimilated to the first,
once resemblance or isomorphism is construed in a suitably abstract way.
So the only kind of relation that really matters here is resemblance.
For this or other reasons, many discussions of mental representation
extend the language of resemblance to cover a very broad class of cases. In
Randy Gallistel’s entry for ‘Mental Representation’ in the Elsevier
Encyclopedia of the Social and Behavioral Sciences (2001) he insists that all
representations exhibit an isomorphism with the represented domain. In cor-
respondence, Gallistel confirmed that cases usually discussed by philosophers
using concepts of information or indication (thermostats, fuel gauges, etc.)
are treated by him as involving abstract isomorphisms. Millikan’s teleo-
semantic theory uses concepts of mapping and correspondence in similarly
broad ways; occasionally she explicitly says that her theory vindicates the idea
that inner representations ‘picture’ or ‘mirror’ the world (1984: 233, 314).
And earlier I quoted Cummins (1994), who claimed that the exploitation
of structural similarity is the key to all sophisticated cognition.
I do not want to deny that there are some very subtle but still reasonable
notions of resemblance that may be used here, especially those employed in
logic and mathematics. My aim is not to restrict the talk of resemblance and
mapping to cases where some very obvious notion of picturing is involved.
Yet I resist the idea that some suitably abstract resemblance or isomorphism
relation is always involved in mental representation. When X is consulted
to guide behavior towards Y, this may involve the exploitation of an ante-
cedently specifiable resemblance relation, but it may not. It can be tempting
to add here that there must be some natural relation between representation
and target that makes the representation worth consulting. And from there,
it can seem that resemblance or isomorphism is the only genuine candid-
ate. But this is not so. Once we have an intelligent brain, it can generate
and adaptively manipulate representations that do not have any simple, eas-
ily exploited relation to their targets. (Strong versions of the ‘language of
thought’ hypothesis are expressions of this possibility: Fodor 1975.) For
this reason, I see no reason to accept the Cummins hypothesis that was
quoted earlier. That hypothesis arises out of a desire for an overly simple
explanation for when and why it is worth consulting X to deal with Y.
More precisely, there are (as we often find) strong and weak ways to read
the Cummins hypothesis, with the strong way unjustified and the weak way
misleading. In strong forms, the hypothesis was criticized in the previous
paragraph. In weak forms, the notion of similarity or resemblance is exten-
ded too far, and becomes post hoc in its application. (If a representation
was successfully and systematically consulted to deal with some target, there
must have been a similarity or isomorphism present of some kind . . .)
Before leaving this topic, I should note in fairness that the Cummins
hypothesis I have focused on here was expressed in a note attached as com-
mentary (1994) to a reprinting of an earlier chapter. The same ideas were
followed up in his 1996 book, but I have chosen to focus on a formula-
tion that Cummins presented in a rather ‘unofficial’ way. Secondly, I am
aware that the representational role of abstract but not-trivial resemblance
relations, especially those with mathematical description, needs a far more
detailed treatment than I have given it here.
The final topic I will discuss in this section is a general challenge to the
usefulness of the representationalist model. I call it a challenge to the ‘use-
fulness’ of the model, but the challenge is derived from stronger arguments,
often directed against the model’s very coherence. My aim here is to modify
and moderate an older form of challenge.
I argued that the empirical skeleton of public representation use might
be used as a model for some kinds of mental processing. But might it be
possible to see, in advance, reasons why this will be a bad or misleading
model? Famous arguments due to Wittgenstein (1953) and the tradition
of work following him are relevant here. One form of argument that is
especially relevant holds that if we import the basic structure of represent-
ation use into the head, we find that the reader or interpreter part of the
mechanism has to be so smart that we have an apparent regress, or pseudo-
explanation.
A version of this challenge to representationalist explanation in cognitive
science is expressed by Warren Goldfarb (1992). He is discussing a hypo-
thesis that people with perfect pitch make use of ‘mental tuning forks’. This
concept was introduced in a newspaper discussion of a piece of neuros-
cientific work on the different neural activity of people with and without
perfect pitch. Goldfarb regards the hypothesis of mental tuning forks as
pseudo-explanatory in the extreme.
Tuning forks! Are they sounding all the time? If so, what a cacophony! How does
the subject know which fork’s pitch to pick out of the cacophony when confronted
with a tone to identify? If they are not always sounding, how does she know which
one to sound when confronted with a tone?
Real tuning forks give us the means to identify pitches, but they do so because
we have the practices and abilities to use them. The internal standard is supposed
to give us the means to identify items, but without practices and abilities, for
the internal standard is also meant to operate by itself, in a self-sufficient man-
ner. (If it were not, it would be otiose: why not settle for practices and abilities
themselves? . . .) (Goldfarb 1992: 114–15)
This line of thought might also be used to express a challenge to the
Cummins hypothesis that I have discussed several times in this chapter.
Cummins wants to explain intelligence by giving the mind access to some-
thing with the same structure as its target. Call this structure S. If the
mind’s problem is dealing with things that exhibit S, how does it help to
put something with S inside the head? The mind still has to detect and
respond to S, just as it did when S was outside.
When the challenge is expressed in these strong sorts of terms, the right
reply to it is to connect the representationalist model to the basic ideas
of ‘homuncular functionalism’ (Dennett 1978; Lycan 1981). The internal
representation is not supposed to be ‘self-sufficient’, to use Goldfarb’s term.
It would need a reader or interpreter; there must be something akin to
‘practices and abilities’. But the mind’s interpreter mechanism need not
have the whole set of practices and abilities of a human agent. The inter-
preter can be much less sophisticated than this (more ‘stupid’, as the hom-
uncular functionalist literature used to say), and might operate in a way
that is only somewhat analogous to a human agent using an external repres-
entation. The representationalist holds that positing this kind of separation
between a representation-like structure with an exploitable relation to a tar-
get and a subsystem to make use of that structure is a good hypothesis about
the mind. If we put these two components together, some special cognitive
capacities become possible.
So if the challenge is expressed by saying that we can see in advance
that no explanatory progress can be made with the basic representational-
ist model, then the challenge can be defused. But the fact that we have this
in-principle answer does not mean that we will necessarily make progress
in the actual world, by using the representationalist model. It may well be
that, for reasons akin to those expressed in the traditional challenge, there
is little in fact to be gained by employing the model. This will depend on
what the mind’s structure is actually like. In order to have some explanatory
usefulness, there needs to be the right kind of interaction between a rep-
resentation and reader in the mind. Putting it in homuncular functionalist
terms, the reader needs to be smart enough for its interaction with the rep-
resentation to be reader-like, but not so smart that the model collapses into
homuncularism of the bad kind.
This section will look at one family of applications of the basic repres-
entational model in psychology and other cognitive sciences. The work
discussed in this section makes use of the concept of a mental or cognit-
ive map—a representational structure with some similarity or analogy to
familiar external maps, like street maps. This is obviously not the only way
to develop and apply the basic representational model in trying to under-
stand mental processes, but it is a very natural way to do so. As I noted in
Sections 2 and 3 of this chapter, there is a way of thinking about the repres-
entationalist model that leads people to think of resemblance or isomorph-
ism as a crucial relation between internal and external states. Looking for
inner map-like structures is a way to develop this idea.
The literature on inner maps is also, as I see it, a rather pure and dir-
ect way to use the basic representationalist model to think about the mind.
The literature on inner maps in the cognitive sciences is partially separate
from the tradition that emphasizes computation, logic, and language-like
representation. The empirical work on cognitive maps in question is often
(unsurprisingly) concerned with spatial skills, usually in non-linguistic ani-
mals. So this is a somewhat simpler arena in which the role of the repres-
entational model can be investigated. In particular, we do not have to worry
about the possible effects of public language capacities on the representa-
tional powers of thought.⁶
The notion of inner maps is also interesting because it seems to be a kind
of ‘attractor’ concept, one that people come back to over and over again
and from different parts of science and philosophy. There is something very
appealing about this idea, but of course it also raises in a vivid way the pit-
falls discussed at the end of the previous section. I should also emphasize
that the discussion in this section is an initial foray into this literature; I
hope to discuss it in more detail on another occasion. Here I will also dis-
cuss scientific work rather than philosophical work (see Braddon-Mitchell
and Jackson 1996 for a relevant philosophical discussion).
In psychology, the father of the idea of inner maps is E. C. Tolman
(1948). For Tolman, the hypothesis of ‘cognitive maps’ was put forward in
response to some particular forms of intelligent behavior, studied primarily
in rats and seen especially (though not exclusively) in dealing with space.
The crucial contrast that Tolman had in mind when he developed this
⁶ For some speculations on these issues that complement the present discussion, see
Godfrey-Smith (forthcoming b).
⁷ For a survey in comparative psychology that uses the concept, see Roberts (1998).
For a review of the neuroscientific work, see Jeffrey et al. (forthcoming).
⁸ Tolman himself, after contrasting his mapping idea with the stimulus–response
model, then compared more task-specific ‘strip maps’ with more flexible ‘comprehensive
maps’. He saw this as a gradient distinction. Once we have this distinction, the contrast
Suppose the discussion in the preceding sections is on the right track. What
then is the status of the large body of philosophical work on naturalistic
theories of mental representation? In particular, what, if anything, have we
learned from teleosemantics? I take these two questions in turn.
The basic representationalist model is a schematic, vague sort of struc-
ture, and also one that is not usually described in rigorously naturalistic
terms. So the following question presents itself: supposing that we formu-
lated a version of the model in purely naturalistic terms, exactly what sorts
of semantic description could be given a principled basis in the model?
When this question is asked about a very simple and stripped-down ver-
sion of the model, we know from decades of philosophical work that the
available semantic descriptions will exhibit a range of indeterminacies and
breakdowns. But there is the possibility that richer versions of the model
may support more determinate and fine-grained semantic descriptions than
stripped-down versions do. So it is possible to take the basic model and
embed it in a more elaborate and detailed scenario, where all the extra com-
ponents of the scenario are described in purely naturalistic terms. We can
do this, and then ask which additional kinds of semantic description attach
plausibly to the resulting structure. For example, we can try to embed the
basic model in a surrounding context that will make a sharp and principled
notion of misrepresentation available, or the discrimination of contents
that involve coextensive concepts.
I will focus once again on the target problem. I claimed in earlier sections
that the basic representationalist model can be employed, and is often em-
ployed, with a ‘thin behavioral’ specification of the target. In practice, there
is no problem saying that the target of the rat’s inner map (if there is one)
is the maze with which it is dealing. This idea was accepted, while noting
that from a philosophical point of view the specification of the target here
seems somewhat vague and extraneous. In a teleosemantic version of the
representationalist model, however, the target becomes far from extraneous.
This is because of the role of the target in a feedback process that shapes the
representation-using mechanisms.
I will use Millikan’s theory to illustrate this. A central concept in her
account is that of an ‘indicative intentional icon’. A wide range of semantic-
ally evaluable phenomena turn out to involve these structures, including
bee dances, indicative natural language sentences, and human beliefs. Mil-
likan says that an indicative intentional icon is a structure that ‘stands
midway’ between producer and consumer mechanisms that can both be
characterized in terms of biological function. The consumer mechanisms
modify their activities in response to the state of the icon in a way that
only leads systematically to the performance of the consumers’ biological
functions if a particular state of the world obtains. That state is (roughly)
the content of the icon. More specifically, though, if we have a set-up of
this kind then the icon is ‘supposed to map’ onto the world in a partic-
ular way—via the application of a particular rule or function (in the mathematical
sense). Given the way that the consumers will respond to the state of the
icon, if the world is in such-and-such a state then the icon is supposed (in a
biological sense) to be in a corresponding state.
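As a rough structural gloss (my toy sketch, not Millikan's own formulation; all names are invented), the producer–icon–consumer arrangement can be put schematically: the consumer acts on the icon alone, and its activity yields its biological payoff only when the world is in the state the icon is supposed to map onto.

```python
# Toy sketch of the producer / icon / consumer structure (all names hypothetical).
# The icon 'stands midway': the consumer responds only to the icon, and its
# response serves its function only if the world is in the corresponding state.

WORLD_STATES = ["predator_near", "all_clear"]
SUPPOSED_MAPPING = {"predator_near": "icon_on", "all_clear": "icon_off"}

def producer(world_state: str, reliable: bool = True) -> str:
    """Produce an icon state; an unreliable producer can token a 'false' icon."""
    return SUPPOSED_MAPPING[world_state] if reliable else "icon_on"

def consumer(icon_state: str) -> str:
    """Modify behaviour in response to the icon, blind to the world itself."""
    return "flee" if icon_state == "icon_on" else "forage"

def payoff(action: str, world_state: str) -> int:
    """The consumer's response pays off only if the world cooperates."""
    if action == "flee":
        return 1 if world_state == "predator_near" else -1   # wasted flight
    return 1 if world_state == "all_clear" else -10          # eaten while foraging

for world in WORLD_STATES:
    icon = producer(world)
    action = consumer(icon)
    print(world, icon, action, payoff(action, world))
```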
What this story involves, in abstract terms, is a combination of the basic
representationalist model plus a feedback process, in which relations be-
tween actions produced and the state of the world can shape the represen-
tation-using mechanisms. We suppose that the success of actions controlled
by the consultation of an inner representation is determined by the state
of some particular part of the world, and these successes and failures have
consequences for the modification of the cognitive system. The particular
feedback process that Millikan uses is biological natural selection. Within-
generation change is handled by an elaborate story about how selection
on learning mechanisms generates teleo-functional characterization of the
products of those learning mechanisms. It is the abstract idea of a suit-
able feedback process (or what Larry Wright once called a ‘consequence
etiology’) that is most relevant here, however (see Wright 1976). If we have
a feedback process of the right kind, then the representationalist model can
be employed in such a way that the specification of the target becomes a
natural part of the story—a real part of the mechanism. The aspect of the
But I have in mind a slightly different set of empirical claims, and from
the point of view of the present chapter, the more philosophical and more
empirical aspects of teleosemantics have often been mixed together in com-
plicated ways.
One empirical hypothesis has remained live throughout the chapter: does
the basic representationalist model pick out an important natural kind that
figures in the causal organization of intelligent organisms? I have described
representationalism as a ‘model’, but models can be used to represent accur-
ately the real structure of the world. Models in the present sense should not
be associated, in any general way, with instrumentalism.
To that hypothesis about the basic model we can add hypotheses about
the evolutionarily embedded versions of the model that figure in teleo-
semantics. One hypothesis is especially vivid. Might it be the case that
the only natural systems that instantiate the basic representationalist mod-
el have been shaped not just by evolution in general, but by the particular
kinds of selective histories that figure in the teleosemantic literature? Might
a particular kind of natural selection be the only feasible etiology for the kind
of structure seen in the basic representational model?
Some advocates of teleosemantics might want to say, at this point, that
this was always the whole point of the program—the orientation of the
work has always been empirical, though very abstract. To me, however, it
seems that if an empirical hypothesis of this kind is supposed to be center
stage, it has not been properly separated before now from various other
ideas—ideas of the kind discussed in the earlier parts of this section.
6. CONCLUSION
This has been a somewhat sprawling chapter, but I will give a brief sum-
mary of my main points.
We may need some new views about the kind of application that repres-
entational talk has to inner states and processes. In this chapter I tried to
develop one such view, by treating representationalism as the application of
a model. This view is not offered as an account of our most basic mentalistic
concepts, although it might be linked with an account like Sellars’s. When
taken seriously in a scientific or philosophical context, the basic represent-
ationalist model does have interesting foundational problems, but these are
not insoluble in principle. Much of the literature on ‘cognitive maps’, espe-
cially in comparative psychology, is a rather pure application of the basic
representationalist model. Teleosemantics, especially Millikan’s work, can
be seen as a philosophical elaboration of the same model. Teleosemantics
embeds a version of the basic model in a detailed biological scenario, and
² Teleosemanticists face a lot more problems than the one I will be discussing—see,
for example, Perlman (2002), Enç (2002), and Walsh (2002) and, of course, Fodor
(1990)—but one thing at a time.
³ I’m not sure the quality we are after is normative (see Dretske 2001 for my reasons).
That will depend, I suppose, on what one means by ‘normative’. But whatever it is, it is
something capable of grounding the difference between the true and the false, right and
wrong, correct and incorrect, valid and invalid. It is, in a word, something capable of
putting the mis- into representation and the mal- into function.
⁴ Roughly, the kind of functions associated with Cummins's (1975) causal-role analysis
of function.
⁵ Hardcastle’s (2002) effort to give causal-role functions a normative quality is not
much help to materialists. She traces the normativity of a thing’s causal role (what she
calls ‘pragmatic’ functions) to the explanatory purposes of scientists (see pp. 152–3).
Such normativity is obviously not available to explain, at least not in naturalistic terms,
the normativity or intentionality of the mental.
In the case of artifacts, the history that defines the function or purpose of a
thing relates to the intentions and purposes of those who design and use
it. In the case of biological organs and mechanisms, it comes from—where
else?—the evolutionary history or individual development of a system. It
is for this reason that defective things have to have a history. If they don’t,
they aren’t defective no matter how much they fail to work the way we want
them to.⁶
Injured, healthy, strained, stretched, diseased, flawed, ill, sick, damaged,
spoiled, ruined, marred, contaminated, defiled, corrupted, infected, mal-
formed—they are all like that. Nothing can be any of these things unless
it is the result of some historical process that has defined what that thing’s
function is (or the kind of thing it is—see footnote 5) and, therefore, what
it is supposed to be doing. If a thing is marred or damaged, for instance,
it departs in some degree from a standard that defines how it, or things of
that sort, should be. And so it is in psychological affairs. Without a history
(intentions of purposeful agents provide their own kind of history) there
are no mistakes. Or misrepresentations. There are, therefore, no representa-
tions.
It is for this reason that teleosemantics is committed to a historical, an
etiological, conception of functions. That, at least, is what I believe. I’m
not, however, going to spend more time arguing for it.⁷ For present pur-
poses, I will simply assume it. That will suffice because if this assumption
is false, if teleosemantics does not need history in its theory of meaning, if,
somehow, an object that materializes randomly can, at the first moment of
its existence (before it has acquired any relevant history), have an appro-
priate function and, thereby, misrepresent the world around it, then the
problem I’m going to discuss (below) is not really a problem for teleo-
semantics. So the worst-case scenario for teleosemantics is if the assumption
is true. So, in defense of teleosemantics, I assume it true in order to show
that, even on a worst-case scenario, teleosemantics is a plausible theory of
representation.
If the function of a thing, and thereby its representational power, is
derived from history, teleosemantics is a particularly strong form of exter-
nalism about the mind. The external facts on which mental content (and,
therefore, the mental) supervenes are not only facts external to the repres-
entation, they are facts that sometimes (in the case of biological functions,
for instance) relate to the very remote past. Not only does the mind not
supervene on the current physical state of a system, it does not supervene on
the current global state of the universe. According to teleosemantics, what
we think and experience today—indeed, the fact that we think and experi-
ence anything at all today—depends not only on what is going on in us and
around us, but on events and conditions that existed long ago and (prob-
ably) far away. A physical duplicate of a conscious being, a person (?) who
lacked the appropriate history—a history that gave its internal states the
requisite functions—would not think and experience anything at all. The
internal machinery would function—causally speaking—in the same way,
but it wouldn’t have the same (or, indeed, any) function. There would be
no representations. The duplicate would be a zombie—devoid of thought
and experience.⁸
To describe this result as a problem for teleosemantics may be too gener-
ous. Some would describe it as a reductio ad absurdum. This result not only
strains the imagination of hard-headed materialists (I have heard respec-
ted materialists describe it as preposterous), it seems to many to be at odds
with the obvious fact that we know—often enough anyway—what we are
thinking and experiencing. And even if we don’t always know exactly what
we think and experience, we certainly know that we think and experience.
We know that we are conscious beings, and conscious beings (on a repres-
entational view of the mind) are beings that have thoughts and experiences.
Descartes gives us this much: the first and most indubitable fact is that we
think. But if thinking about or experiencing orange pumpkins (or anything
else, for that matter) requires us (or the mechanisms and organs in us) to
have had a certain history, as teleosemantics (on our assumption) tells us,
then either we can know, by introspection, by the fact that we are conscious
beings, that we've had such a history, or we need to study history in order to
find out whether we are really conscious. Either way, it is absurd.
⁹ Throughout I use (like Descartes) ‘thought’ as a very general category. It (for me)
includes all propositional attitudes. In wondering whether P, in hoping, fearing, or
regretting that P, and in wanting or desiring that P, one is (in this broad sense) thinking
that P. To know that you think (in this broad sense) that P, then, is to know that you
occupy a mental state with intentional content. It is not merely to know that you (for
instance) think rather than fear that P.
¹⁰ This point could also be expressed by talking about the difference (in the context of
knowledge or belief attributions) between attributive vs. referential uses of descriptions.
Following Boër and Lycan’s (1986: 18) description of the difference between a referential
sense of knowing who the murderer is (in which it is not necessary to know the murderer
murdered anyone) and the attributive sense (where this is necessary), we could say that
children know what they think in the referential sense (where it is not necessary to know
they think it), not the attributive sense (where it is necessary to know this).
can know what they think without knowing they think it in the same sense
you can know what my brother is doing without knowing it is my brother
doing it. But this, as my critic has been quick to point out and as I am now
willing to concede, only shows that Suzy knows of what she thinks that it is
that the dog is loose (the phrase ‘of what she thinks’ kept carefully outside
the that-clause that expresses what it is Suzy really knows). It does not show
that Suzy knows that what she thinks is that the dog is loose (the phrase
‘what she thinks’ here occurring inside the scope of the knowledge attribu-
tion). So it does not, not in any relevant sense, show that Suzy knows what
she thinks—much less that she knows what she thinks without knowing
she thinks it.
So, to make the next step in my argument, let me shift to a person who,
unlike a child, possesses the relevant concepts and beliefs. I will describe
an analogous situation—knowing what someone said—and suggest that it
provides a model for knowing what one thinks.
Clyde gets a telephone call from his good friend Harold. Harold tells
him that he is going on vacation for two weeks. Clyde hears him say this
and, let us suppose, hears him say it under ideal telephonic conditions (no
static, clear articulation, etc.), the kind of conditions that would ordinarily
prompt us to say that Clyde knows what Harold said. So far, I hope, there
is nothing suspicious. Now the twist. There are several people, all practical
jokers, who, quite unknown to Clyde, enjoy telephoning Clyde and imit-
ating Harold. They are very good at it. As far as Clyde can tell, the call he
received from Harold could have been from any one of these other people.
It sometimes is one of these other people. Unaware of the past deceptions,
and, therefore, the very real present possibilities they create, Clyde not only
believes (correctly as it turns out), without doubt or hesitation, that it is
Harold he is talking to, but (incorrectly as it turns out) that he knows it is
Harold.¹¹
I have Gettierized¹² Clyde’s belief that it was Harold on the phone while
leaving intact his evidence for what it was that Harold said to him. The
question I’m interested in is this: does the fact that Clyde does not know
it was Harold who said he was going on vacation mean that he doesn’t
know what Harold said to him? If asked (‘What did Harold say?’), Clyde
will tell you, confidently and truthfully, exactly what Harold said. If asked
whether he knows—and, if so, how he knows—that this is what Harold
¹¹ I here assume that if Clyde can’t tell the difference between Harold’s voice on the
phone and the voices of several other people, any one of whom might be calling, then,
whether or not he realizes it, he doesn’t know it is Harold. He certainly can’t hear that it
is Harold.
¹² That is, I have described conditions in which Clyde has a justified true belief (that
it is Harold he is talking to) that does not constitute knowledge.
said, Clyde will tell you, once again confidently and (I submit) truthfully,
that he knows this because he heard him say it. If anyone ever knows what
another person says on the phone, Clyde, given the circumstances, surely,
knows what Harold said. Yet, Clyde doesn’t know it was Harold who said
it. Clyde thinks he knows. This, indeed, is why he so confidently reports
what he knows by referring to the caller as Harold. But the truth of the
matter is that Clyde is ignorant about who called him.
Unlike the earlier case of the child, we now have an example in which
the agent does understand the phrase (‘what Harold said’) being used to
pick out the proposition that he heard expressed on the phone. He not
only understands it, he confidently (and truly!) believes it refers to what
he heard. That is why he describes what he heard the caller say as ‘what
Harold said’. Unlike the case of the child who does not believe, does not
even understand, that ‘what I think’ (when said or thought by her) is a
correct description of the mental state whose content (namely, that the
dog is loose) she has special access to, Clyde does understand—indeed, he
truly and confidently believes—that 'what I heard Harold say' is a correct
description of the content he has special (auditory) access to, the proposi-
tion he heard expressed on the telephone. Why isn’t this enough to know
not (once again) that it was Harold who said he was going on vacation, but
that what Harold said was that he was going on vacation? If it is enough,
then, it seems, we have an attractive externalist model of introspection. Just
as Clyde can know what it was Harold said without knowing, at least not
by hearing, that it was Harold who said it, why can’t a person know what it
is he thinks (by, say, introspection) without knowing, not by introspection,
that he thinks it?¹³
¹³ Strictly speaking, the analogy with knowing what you think vs. knowing that you
think it should contrast knowing what Harold said with knowing that he said it—not,
as I have done, with knowing that it was Harold who said it. With a few minor alterations
this could be done. All we need to imagine are things—programmed sound synthesizers,
for example, or (thanks to Doug MacLean for this suggestion) parrots—that can produce
the same sounds as Harold when he says that he is going on vacation without actually
saying or asserting anything. I assume here that parrots and machines that make the
sounds ‘I am going on vacation’ are not actually saying they are going on vacation. They
utter the words (and, therefore, perhaps, in direct discourse say) ‘I am going on vacation’,
but they do not, by producing these sounds, say (indirect discourse) that they are going
on vacation. When Clyde hears Harold on the phone saying that he is going on vacation,
therefore, he can know (by hearing) what Harold said without knowing (at least not by
hearing) that anything was said.
I have chosen to run the analogy as I have in the text because it is simpler and more
intuitive and it makes the point equally well. The important point, once again, is that
the way you know the x is y may be, and often is, quite different from the way you know
that it is x that is y.
authority about the first, we enjoy no privileged access to the second fact.
It may be, as teleosemanticists have it, that to know you are thinking about
or experiencing pumpkins requires information not obtained by looking
inward. Introspection doesn’t tell you that you think, only what you think.
If this is right, we have an answer to the epistemological objection to teleo-
semantics. A special authority about, and a privileged access to, one’s own
thoughts and experiences is compatible with a historical theory of thought
and experience. The only remaining question is whether this answer to the
objection gives a plausible account of self-knowledge. Is this really all that
introspection yields? Do we, in fact, use a different method to find out that
we think from the method (if it is a method) that tells us what we think?
My purpose here was only to argue that there was no valid epistemolo-
gical objection to teleosemantics. I’ve already done this. I should quit now.
But I can’t resist a few remarks about the plausibility of this picture of the
mind’s knowledge of itself.
As my examples show, we often know that x is y by some direct meth-
od (hearing, seeing, introspection) without knowing, without being able to
know, by that same direct method, that it is x that is y. If we know that it is x
that is y, our way of knowing this may be, and often is, quite different from
our way of knowing that x is y. I don’t have to see, even be able to see, that
it is water that is boiling to see that the water is boiling. There is, after all,
nothing about water to distinguish it from gin, vodka, and a variety of other
liquids. I needn’t be able to see that it is water in order to knowingly refer to
what I see to be boiling as water. So if it is water, and if I reasonably and truly
believe it is water, there is nothing to prevent me from saying I can see that
the water is boiling. That, I submit, is how I know that the water is boiling.
If I know it at all, though, that isn’t how I know it is water that is boiling. If
I actually know it is water, I probably know that in some way other than the
way I know it is boiling. The fact that I came to know (or believe) it is water
by chemical analysis doesn’t mean I can’t see that the water is boiling.
Why shouldn’t the same be true of introspection. The fact that I found
out I think by having someone (parents? teachers? friends? Descartes?) tell
me doesn’t mean I can’t now discover what I think by simple introspection.
The analogy with ordinary perception can be pushed a little further. Per-
ception of ordinary dry goods tells us what is in the physical world, not that
there is a physical world. I see that there are cookies in the jar, people in the
room, and (by the newspapers) continued violence in the Middle East. That
is how I know there are cookies, people, and violence in these places. Cook-
ies, people, and violence are physical things that exist independently of my
perception of them. Do I, therefore, know, by visual perception, by seeing,
that there are things that exist independently of my perception of them?
Can I see that there is a material world and that, therefore, solipsism is false?
I don’t think so. It seems more reasonable to say that assuming there is a
physical world, or assuming we know (in some other way) that there is a
physical world, perception tells us what sorts of things are in it—cookies,
people, and violence. Visual perception has the job of telling me what phys-
ical objects I see, not that I see physical objects. If my perceptual faculties
had the latter job, the job of telling me that I was (in effect) not hallucin-
ating, not aware of some figment of my own imagination, they would be
incapable of discharging their responsibilities. For, as we all know, hallucin-
atory cookie jars can, and sometimes do, look much the same as real cookie
jars. You can’t see the difference. If it’s a real object you see, perception
will tell you whether it’s an orange or a banana (a difference that is plainly
visible), but perception cannot tell you whether it’s a real orange or just a
figment of your imagination. That difference isn’t visible.
Memory has a similar structure. Memory tells us what happened in the
past—the specifics, as it were, of personal history. It does not tell us there
is a past. I can remember (hence, know) what I had for breakfast this morn-
ing. No trick at all. I distinctly remember that it was granola. Nonetheless,
despite the fact that what I remember implies that the past is real (if it isn't
real, I didn't have breakfast this morning and hence do not remember having
granola for breakfast this morning), it doesn't follow that I can remember that the
past is real. If I know the past is real, I don’t know this by remembering
that it is real. That isn’t a way to answer Russell’s skeptical question about
the past.¹⁵ If I know the past is real, I know it in some way other than by
memory. Memory is a faculty that tells me what occurred in the past given
that there was a past just as perception tells me what is in the material world
given that there is a material world. Maybe I have to know the past is real in
order to remember what I had for breakfast this morning (I doubt it, but let
that pass), and maybe I have to know there is a physical world to see wheth-
er there are cookies in the jar (let that pass too), but the point is that I do
not have to know these things by memory and vision in order for memory
and vision to tell me (give me knowledge of) what I had for breakfast this
morning and what is in the cookie jar.
Introspection is like that. Introspection tells me what is in my mind,
what it is I am thinking, wanting, hoping, expecting, and the kind of exper-
iences I am having. It doesn’t tell me I really have a mind, mental states
with content. If I know that at all, I know it in some way other than by
introspection, the faculty that, given that I have thoughts and feelings, tells
me what I’m thinking and feeling.
¹⁵ Russell’s question: How do you know the world and all its contents were not created
a few moments ago complete with memory traces, fossils, history books, etc.—complete,
that is, with all the indicators you rely on to tell you about the past?
REFERENCES
Ariew, A., Cummins, R., and Perlman, M. (eds.) (2002), Functions: New Essays in the Philosophy of Psychology and Biology (Oxford: Oxford University Press).
Boër, S. E., and Lycan, W. (1986), Knowing Who (Cambridge, Mass.: Bradford Books, MIT Press).
Boorse, C. (2002), 'A Rebuttal on Functions', in Ariew et al. (2002: 63–112).
Cummins, R. (1975), 'Functional Analysis', Journal of Philosophy, 72/20: 741–65.
Dretske, F. (1969), Seeing and Knowing (Chicago: University of Chicago Press).
Dretske, F. (1970), 'Epistemic Operators', Journal of Philosophy, 68/24: 1007–23.
Dretske, F. (1971), 'Conclusive Reasons', Australasian Journal of Philosophy, 49/1: 1–22.
Dretske, F. (1972), 'Contrastive Statements', Philosophical Review, 81/4: 411–37.
Dretske, F. (1981), Knowledge and the Flow of Information (Cambridge, Mass.: Bradford Books, MIT Press).
Dretske, F. (1988), Explaining Behavior (Cambridge, Mass.: Bradford Books, MIT Press).
Dretske, F. (1995), Naturalizing the Mind (Cambridge, Mass.: Bradford Books, MIT Press).
Dretske, F. (2001), 'Norms, History, and the Mental', in Denis Walsh (ed.), Naturalism, Evolution and Mind (Cambridge: Cambridge University Press); previously pub. in Dretske, Perception, Knowledge, and Belief: Selected Essays (Cambridge: Cambridge University Press, 2000).
Dretske, F. (2005), 'The Case against Closure', in Matthias Steup and Ernest Sosa (eds.), Contemporary Debates in Epistemology (Malden, Mass.: Blackwell).
Enç, B. (2002), 'Indeterminacy of Function Attributions', in Ariew et al. (2002: 291–313).
Fodor, J. (1990), A Theory of Content and Other Essays (Cambridge, Mass.: Bradford Books, MIT Press).
Grice, P. (1957), 'Meaning', Philosophical Review, 66: 377–88.
Hardcastle, V. G. (2002), 'On the Normativity of Functions', in Ariew et al. (2002: 144–56).
Ludlow, P., and Martin, N. (eds.) (1998), Externalism and Self-Knowledge (Stanford, Calif.: CSLI Publications).
Millikan, R. (1989), 'In Defense of Proper Functions', Philosophy of Science, 56/2: 288–302.
Neander, K. (1991), 'The Teleological Notion of Function', Australasian Journal of Philosophy, 69: 454–68.
Perlman, M. (2002), 'Pagan Teleology: Adaptational Role and the Philosophy of Mind', in Ariew et al. (2002: 263–90).
Walsh, D. M. (2002), 'Brentano's Chestnuts', in Ariew et al. (2002: 314–37).
4
The Epistemological Objection
to Opaque Teleological Theories
of Content
Frank Jackson
After you’ve seen someone walk through a minefield, you have a pretty
good idea where they think the mines are located. After you’ve dined reg-
ularly with someone, you have a pretty good idea of what they like to eat
and drink. And so on and so forth. It is a commonplace that what people
do and say tells the folk—that is, you and me, Shakespeare and Aristotle,
most of us when we are not drawing on specialist knowledge in cognit-
ive science or whatever—a good deal about the contents of the beliefs and
desires of our fellow human beings, and more generally about what they
think. The observation of behaviour in circumstances often grounds jus-
tified belief about the contents of intentional states, and, what is more,
we qua members of the folk know this. This commonplace does not pre-
sume behaviourism of course—we all know that the direction of the wind
is indicated by the behaviour of a windsock, but facts about the wind cannot
be analysed in terms of facts about windsocks.
The fact that this is a commonplace means that any theory of content
should respect it. This chapter argues that certain versions of teleological
theories of content are inconsistent with it, and more generally with the fact
we folk qua folk have many justified beliefs about the contents of beliefs
and desires, and thought more generally. The versions I have in mind are
theories offering biconditionals like:
(A) x believes that P iff x is in a state that . . .
where the ellipsis is filled by a clause that relates to selectional matters
that are opaque to the folk, and where these biconditionals are understood
¹ For a view of this kind, see Papineau (1993: 94). He is not offering it as a finished
theory but as a sketch to give the general idea. Our argumentation will be independent
of the various qualifications and refinements.
structures are selected for, they do not qua folk have opinions, let alone
justified ones, about what these structures are selected for—the selectional
histories of the ear and the larynx are major research topics. Moreover, no
one thinks that observations of subjects interacting with their environments
are enough to justify attributing properties like having a state selected to co-
vary with such and such, or having a state selected to bring about so and so,
to subjects. To suppose otherwise would make a nonsense of all the work
that went into establishing the theory of evolution.
At first glance, we seem to have a serious problem for teleological theories
of content of the kind in question, henceforth opaque selectional theor-
ies of content: they seem to imply that the folk do not have the justified
opinions about the contents of the intentional states of their fellow humans
that they clearly do have, and that, in particular, even extended observations
of interactions with environments are not enough to justify ascribing con-
tents. After all, the folk do not have justified opinions about magnolia metal
because they have never heard of it, and, even if they had heard of it, would
not normally have justified opinions about which metals are examples of
it.² In the same way, it seems that they would not, on opaque selectional
theories, have justified opinions about intentional contents because these
contents are properties they have never heard of, or need never have heard
of. And, even when they have heard of them, the absence or presence of
these properties is not something the folk need have a justified view about
in order to have justified opinions about what their fellows are thinking,
and is not something we get justified belief concerning out of observation of
interactions with the environment.
My contention in this chapter is that what seems right at first glance
seems right after a number of glances. I will argue that opaque selectional
theories of content cannot explain how it is that we folk have the justi-
fied opinions we do about intentional contents. They cannot explain, for
instance, why Shakespeare was so often justified in his views concerning
what those around him wanted and believed based on his observations of
their behaviour.
Our argument will be independent of the detail of the various teleo-
logical accounts of content provided only that they are opaque selectional
accounts in the sense already explained. Where necessary I will frame mat-
ters in terms of the very simple teleological account of the content of belief
and desire given above, but the point being made will not depend on the
simplification (which all parties can agree is substantial); it will depend on
the opacity.
I know many teleological theorists will insist that their theory cannot be in
trouble from such a ‘quick’ objection and that the objection commits some
simple mistake or other. Most of the rest of this chapter is a series of replies
to the various objections to the folk epistemology objection, or FEO, as I’ll
call it, from people who insist that I have made some simple mistake or
other. But let me spell out the structure of the objection before we proceed
to look at some responses to it. The objection in schematic form runs thus:
Premise 1. Having an intentional state with such and such content is a
property justifiably ascribed in so and so circumstances. (Premise supported
by reflection on Shakespeare and the folk generally.)
Premise 2. Having a state, or being in a state, with so and so a selec-
tional history is not justifiably ascribed in so and so circumstances. (Premise
supported by reflection on what is needed to justify belief in selectional
matters.)
Conclusion. Having an intentional state with such and such content
is not identical with being in a state with so and so a selectional history.
(Leibniz’s Law)
This argument is valid and there is no problem about applying Leibniz’s
Law to properties of properties. The same style of argument can, of course,
be run for the intentional states themselves, for being the belief that P as
opposed to having the belief that P, for instance.
It is time to look at the possible objections. They are all constructed from
objections that I have come across in one form or another, but I have not
sourced them for fear of misrepresenting them.
given a selectional story but not why they are justified in holding that they
get it right.
⁵ Papineau is explicit that this is how he understands teleological theories; see his
comments on theoretical reductions at (1993: 93), and his comments at the beginning
of (2001). Braddon-Mitchell and Jackson (1997) argues, from the perspective of the critic
rather than the supporter, that this is the best way to read the view.
However, the theory also holds that although the relevant content prop-
erty and the relevant selectional property are distinct, they metaphysically
necessitate each other. This is not a teleological theory of content. It is no
more a teleological theory of content than necessitarian dual attribute
theories of mind are versions of physicalism. Some dual attribute theorists hold
that the phenomenal feels of sensory states are properties distinct from any
that appear in physicalist theories of mind, but that they are necessarily
connected to properties that appear in physicalist theories. The necessary
connection does not turn their view into a version of physicalism. Mutatis
mutandis for teleology.
It follows that we need to add something to (D) in order to get a view
that can properly be regarded as a teleological theory of content. The obvi-
ous addition, as we in effect noted near the beginning, is an identity claim.
In addition to holding that a suitable instance of (D) is a necessary a posteri-
ori biconditional, teleologists should hold that suitable instances of
(E1) having such and such content = being a state that plays so and so a
selectional role
and
(E2) being in a state with such and such content = being in a state that
plays so and so a selectional role
are necessary a posteriori truths. Again, the water–H2O case might be held
to be suggestive. Not only is it a necessary a posteriori truth that x is water
iff x is H2O, it is a necessary a posteriori truth that water is identical with
H2O (modulo worlds where there is no water).
But now we have the trouble we noted at the outset. (E1) and (E2) are
false: for each it is the case that the property on the left-hand side differs in
its epistemic properties from the property on the right-hand side; that’s the
nub of the FEO. And a similar consideration applies in the case of scientific
identities. When Shakespeare believed that there is water in a glass in front
of him, he believed that things are a certain way in the glass, but what he
believed about how things are in the glass is not what we believe when we
believe that a glass contains H2O. It follows that how things are believed to
be when water is believed to be somewhere is not how things are believed
to be when H2O is believed to be somewhere. And this is what one would
expect given the a posteriori nature of the identity between water and H2O.
If what one believes about how things are when one believes that there is
water is the same as what one believes about how things are when one
believes that there is H2O, then from the very moment humans believed
that there is water, they believed that there is H2O. But the a posteriori
⁶ For some of the issues here, see e.g. Soames 2002: 4 and the discussion that follows.
say that something has the colour of the sky is not being blue; it is being
same-coloured with the sky. Likewise, the property we believe something to
have when we believe it to be the colour of the sky is not being blue (on the
obvious reading); it is being same-coloured with the sky. Being water and
being H2O differ just as being blue and being the colour of the sky differ,
but water and H2O are one and the same just as (being) blue and the colour
of the sky are one and the same property.
Many of the things we say about how things are are to the effect that some-
thing has a property that itself has a property. To be fragile is to have
some internal nature that causes breaking on dropping: there is the inter-
nal nature, and there is what it does or would do. Or take the colour of the
sky example just mentioned: to say that something has the colour of the sky
is to say that it has a colour that has the property of including the sky in
its extension. We might call the properties ascribed in such cases ‘second-
order’, meaning not that they are properties of properties but that they are
properties possessed by x in virtue of x’s possession of a property that itself
has a property: the property of having whatever property is so and so. An
example much discussed in the philosophy of mind is the version of func-
tionalism tailored to be compatible with a type–type mind–brain identity
theory.⁷ It draws a sharp distinction between being in pain and pain. To
be in pain is to instantiate a kind that fills so and so functional role; pain is
the kind (one that may or may not vary from creature to creature, etc.) that
does fill the role. This gives two property identities:
being in pain = instantiating a property that fills so and so functional
role
pain = the property that fills so and so functional role (neural state N
in such and such creatures, as it might be).
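Put in explicitly quantificational terms (the lambda and iota notation, and the role name R, are introduced here only for display), the contrast is:
\[
\textit{being in pain} \;=\; \lambda x.\,\exists P\,[\,P \text{ fills functional role } R \;\wedge\; P(x)\,]
\]
\[
\textit{pain} \;=\; \iota P\,[\,P \text{ fills functional role } R\,]
\]
The first is a second-order property in the sense just given, the property of having some property that fills R; the second is whichever first-order property in fact fills R.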
In the same way, runs the objection to FEO, selectional theories must dis-
tinguish two properties. One is the property we ascribe to someone when
we say that they believe that P or (same thing) the property we believe them
to have when we believe that they believe that P. This property is not a
selectional property—it is, for instance, a property we folk are justified
in believing someone to have in circumstances where we are not justified
in believing them to have any relevant selectional property. However, this
property is the property of having a property which is thus and so, and this
is confirmed by the fact that when ‘blockhead’ cases and Martian marion-
ette cases are described to us, we immediately designate them as cases where
there is no contentful thought.⁸ That which generates ‘their’ interactions
is not of the right kind, and we know this qua member of the folk. It is,
therefore, plausible that the property we ascribe to x when we believe that x
believes that P, or when we use ‘believes that P’ of x, is a second-order prop-
erty in the requisite sense. But it is important that the 'thus and so' in (F) be
given an undemanding reading in the sense of a reading that preserves the
point we have been focusing on: the folk qua folk have plenty of justified
beliefs about content. For example, if we spelt out ‘thus and so’ in a way
that stated that the internal workings be carbon-based, as in
(F carbon) believing that P = the property of having a carbon-based
property that is thus and so,
we would wrongly make it the case that the folk who do not know that
we are carbon-based—Shakespeare and Aristotle would be examples—lack
justified beliefs about content.⁹
All the same, this leaves quite a bit of room to manoeuvre. There is a lot
about internal goings on that is available to the folk from interaction pat-
terns. Consider the phenomenon of imprinting. Baby ducks are disposed
to keep company with the first thing of a suitable sort that they see after
hatching. Usually it is the mother duck obviously, but sometimes it is a
dog, the experimenter, or whatever. They imprint on the first thing that
they see, and their behaviour is explained in terms of their having imprin-
ted on the mother, the dog, or . . . Although we ascribe imprinting on the
basis of observation of behaviour, what we ascribe, or are in a position to
ascribe, goes well beyond behavioural patterns. We know, for instance, that:
the duck’s initial sighting lays down a persisting trace inside the chicken,
otherwise its following behaviour would fade away quickly; that the nature
of the internal trace that is laid down is a function of the nature of the
thing first seen, otherwise it would not be able to discriminate between the
thing first seen and things seen subsequently; and that the trace laid down
is causally connected to the ways the legs and the head operate, otherwise
the information being carried by the trace inside the baby duck would be
irrelevant to the movements of its head and legs in sustaining the accom-
panying behaviour. All of this is something available and implicitly known
to the folk. The little reasoning sketches given above do not call on spe-
cialist knowledge in cognitive science. However, it is also true that what is
so available is restricted to the causal, functional, and informational under-
pinnings of the behavioural interactions. It concerns leaving traces, causal
transactions between traces, causal links to legs, the causal role of the eyes,
and so on. In order to preserve the folk-availability of content, the ‘thus and
so’ in (F ) must pertain to matters of this kind. What is obvious to the folk
and something they may justifiably believe and ascribe on the basis inter alia
of observations of behaviour can be complex, but the complexity is restric-
ted to complex causal and functional roles, and the like. It does not include
selectional roles. If it did, Wallace and Darwin's insights would have been
available to the folk qua folk.
This blocks the possibility of the identification in (G) being with selec-
tional properties. The situation can be put as follows. We need an un-
demanding reading of the ‘thus and so’—one that makes the existence of
a property that is thus and so folk-available—in order for (F) to meet the
folk-availability constraint. Mark any such reading ‘thus and so*’. But then
what we get from (G) is that belief that P = the property that is thus and
so* and the property that is thus and so* is not a selectional property. It
might be a functional property of the kind we can justifiably believe in
given the information available to the folk, or it might be whatever prop-
erty plays the relevant functional role, because if we can justifiably believe
that some functional property is instantiated we can justifiably believe that
there is a property playing the functional role. But in neither case is it a
selectional property. Selectional properties are not folk-available, and they
do not play the relevant functional roles—that is done by neural properties,
or maybe internal functional architecture.
In sum, the second-order property way of reading selectional cum tele-
ological theories of content faces a dilemma. Any reading of (F) that allows
some instance of (G) to be an identification of a content property with a
selectional property is a reading that means that the folk-availability con-
straint is violated. Any reading of (F) that meets the folk-availability clause
blocks any instance of (G) being an identification of content with a selec-
tional property.
"You went wrong near the beginning when you said 'there is no problem
about applying Leibniz’s Law to properties of properties’. There is a prob-
lem if the properties of properties are epistemic ones like being justifiably
supposed to obtain in so and so circumstances. The opacity of belief con-
texts tells us that."
Reply. This is tantamount to denying that we have beliefs about prop-
erties. For consider the right response for those who hold that ‘a = b’, ‘S
believes that a is F', and 'It is false that S believes that b is F' can be true
together. The right response is that when S believes that a is F, S does not
have a belief about a, the thing. S has, rather, a belief about the proposition
that a is F, and the explanation of how it can be false that S believes that
b is F and true that S believes that a is F, when a = b, is that the proposi-
tion that b is F is a different proposition from the proposition that a is F, and the beliefs in
question are about propositions and not objects. Thus there is no violation
of Leibniz’s Law. (Of course, some insist that S’s belief that a is F is about a,
but they are the same people who insist that if S believes that a is F, then S
ipso facto believes that b is F in the case where a = b.)
I think we should resist any suggestion that we do not have beliefs about
properties. We really do have beliefs about how things are in certain parts
of the world and how things are with certain things. Our beliefs really do
place things in categories, and that’s to assign them properties. Our assign-
ments may be correct or incorrect but it is properties, not something else or
nothing, that get assigned.
It might be objected that an argument like the one just rehearsed for
objects can be developed for properties in order to show that we do not
strictly have beliefs about properties. Surely the following can be true
together
(H) S believes that a is blue,
(I) It is false that S believes that a has the colour of the sky,
(J) Blue = the colour of the sky.
How so if S’s belief is about the single property—blue (= the colour of
the sky)? The answer is that we can read (I) in two different ways. We can
read the claim that S believes that a has the colour of the sky as being true
when S believes that a is blue. Is there some other colour that S believes a
to have? On this reading it is not possible for (H), (I), and (J) to be true
together. The more obvious way to read (I) is as saying that it is false that
S believes that a has the property of being same-coloured with the sky. But
blue ≠ being same-coloured with the sky. There is no way to read (I) that
makes trouble for the intuitive view that when S believes that a is F, S has a
belief about a property, namely, the property that S believes a to have.
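The two readings can be displayed explicitly (the subscripted labels and the belief operator B_S are introduced here only for display):
\[
(\mathrm{I}_1)\quad \neg B_S(a \text{ has } c),\ \text{where } c \text{ is the colour the sky in fact has, namely blue}
\]
\[
(\mathrm{I}_2)\quad \neg B_S(a \text{ is same-coloured with the sky})
\]
Given (J), (I1) cannot be true alongside (H); (I2) can be, since blue and being same-coloured with the sky are distinct properties, which is the point just made.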
¹ I use the following abbreviations: LTOBC for Language, Thought and Other Biological
Categories; OCCI for On Clear and Confused Ideas; VM for Varieties of Meaning.
its spatial relation to me. But its spatial relation to me is a perfectly objective
relation, and I am best off if I perceive it accurately.
Second, that a creature represents only a few features of its world does
not mean that it represents its world as having only a few features. The limit
of one’s representations of the world is not a representation of the limits
of one’s world. That two creatures represent different aspects of the world
they live in does not imply that they represent different worlds or that they
represent the world as being different ways. What I know about the world is
very strictly limited. It does not follow that I represent the world as having
strict limits. In the vocabulary of OCCI, to suppose that this follows would
be to ‘import completeness’ (OCCI, ch. 8).
appropriate genes are responsible for determining what the original forms of
reinforcement will be for a particular animal species. But forms of reinforce-
ment, such as alleviation of hungry or of thirsty feelings, or the presence of a
sweet taste in the mouth, are not themselves means toward fulfillment of any
purposes of the genes. Sweet tastes, for example, although correlated during
the history of the species with the presence of needed nutrition, are not, in
themselves, involved in any direct causal process resulting in increased nutri-
tion. Behaviors reinforced by sweet tastes (M&Ms) are selected for
producing sweet tastes, so they have as a natural purpose to bring in sweet
tastes. Also, true, the disposition to be reinforced by sweet tastes was selec-
ted because sweet tastes typically indicated nutritive value. But behaviors can
succeed in bringing in sweet tastes without succeeding in increasing nutri-
tion. That is what saccharin is sold for. Although the purpose of procuring
sweet tastes on the genetic level is gaining calories, the psychological purpose
of behaviors conditioned by sweets is merely to procure sweet tastes. So beha-
viors can succeed in fulfilling their natural biological purposes of bringing
in sweet tastes without fulfilling any more basic functions for which genes
were selected. In the broad sense meant here these behaviors have ‘biological
utility’, although in a narrower sense—on a shallower level—of course they
do not. We generally distinguish these levels of purpose by calling one ‘psy-
chological’ and the other ‘biological’, but looked at carefully, the level of
psychological purposes produced by reinforcement is just another layer of
biological purpose, though these are not so direct as the purposes, say, of
the heart’s beating or the stomach’s juices flowing. It is in the broad sense in
which psychological purposes are included among biological purposes that
the content of a representation is said to rest on its biological utility.
Processes of practical reasoning also proceed by trial and error, generat-
ing a third level of natural selection, hence of natural purposes. The nat-
ural purpose of behaviors selected through practical reasoning is to reach
whatever goals are represented at the start of the reasoning as the ends for
which means are to be selected. The final implementing intentions that
immediately generate behaviors have been selected because these repres-
entations have led, in the inner world of the thinker, to representations of
the original goal as fulfilled. If the practical reasoning operates properly,
implementing these selected intentions will produce implementation of the
original goal. The interesting and challenging question concerns the origin
of these represented goals, in particular, their relation to more basic biolo-
gical purposes resting more directly on genetic selection. What determines
which conscious goals we pursue? Perhaps most are derivative from prior
goals that we have. We aim for them because we believe they will lead to
ends already established as goals. But what mechanism selected the original
goals from which these goals were derived?
Clearly our original conscious goals are not the same as whatever our
genes aim for. Babies do not come into the world fervently desiring to
live a long life and produce lots more babies. Nor do they come into the
world desiring to obtain just those goodies that will or would serve to
reinforce their behaviors through conditioning. This is true in the first
instance simply because babies don’t come into the world thinking about
anything, hence explicitly desiring anything. They have to develop concepts
before they can explicitly desire things. But let us suppose that a child has
developed adequate concepts of all the things that do or will happen to
reinforce her behaviors, from sweet tastes to sexual pleasure to smiles. She
is able to think about each of these things. Merely developing concepts
of these things is not the same as conceiving these things AS things that
reinforce or AS things wanted, however. Just as there may be a large gap
between what reinforces an animal’s behavior and what has deeper biolo-
gical utility for the animal—between sweet tastes and nutrition or between
sexual pleasure and having babies—there may be a large gap between what
actually reinforces behavior and what one wants or what one understands
to be a cause of reinforcement. Often we may know when we are attracted
or repulsed without knowing exactly why we are attracted or repulsed, or
we may know that we are happy or sad without knowing why. Knowing
exactly what it is that we want is by no means automatic. Likely we only
find this out by experience, indeed, by something analogous to hypothesis
formation and testing. ‘I don’t know why it is but shopping for Christmas
always makes me anxious’, we say, or ‘There is something about electric
trains that just fascinates me.’ We may wrongly suppose that having a lot
of money is what is required to make us happy, or being allowed to sleep
in every day without having morning commitments. Indeed, just what it is
that makes people happy has proved to be an extremely challenging ques-
tion for clinical and social psychology. Moreover, knowing what it is that
attracts us or repels us need not lead to a reasoned desire for or against
that thing. Often we have other conflicting interests to consider. Still, the
mechanisms through which goals are projected by conscious desire and
reason are undoubtedly mechanisms that our genes have been selected for
engendering.² These mechanisms are still with us because they have some-
times—often enough—produced behaviors that benefited human genes in
the past. Aims and goals that are products of these mechanisms are also bio-
logical purposes in the broad sense intended, even though it may happen
very often that they do not lead to fulfillment of any more direct purposes
² The claim, made by Fodor and others, that our human cognitive mechanisms may
actually have arrived of a piece without benefit of natural selection is discussed in VM,
ch. 1 n. 2 and ch. 2 n. 5.
of the genes, perhaps not even producing behaviors that are psychologically
rewarding. Fulfillment of these aims and goals still has biological utility in
the broad sense that is meant.
Now one of the most important jobs that beliefs are designed to do,
surely, is to combine with other beliefs to form new true beliefs. This kind
of function contributes to biological utility whenever any of the beliefs
formed in this way turns out to have biological utility. Even the most highly
theoretical of beliefs are not excluded from having biological utility, then,
so long as they participate in chains of reasoning that eventually bear prac-
tical fruit. Still, surely many beliefs that we humans have do not ever help to
serve any of our goals, so the question does arise how these beliefs can have
content on the teleologist’s view.
First we should be clear that the teleologist’s position is not that each indi-
vidual belief must help to serve a biological function. Rather, beliefs must
fall within a general system of representation where the semantic rules for
the system are determined by reference to the way its consumers, its inter-
preters, are designed to use these representations. But in order to have been
designed to make and use representations in the system in a certain way,
the producers and consumers or their ancestors must have been using rep-
resentations from the same system productively in the past. Otherwise these
mechanisms would not be representation producers and consumers. They
would not have been selected, through evolutionary history or through
prior learning, for their capacity to coordinate through the use of repres-
entations. But there is another alternative. There may be ways that the
producers have been designed to learn or to be tuned to produce repres-
entations that are coordinated with ways the consumers are designed to
be tuned to use them, so that the producer’s rules and the consumer’s
use dispositions are somehow tailored in advance to match one another.
If the former—if the producer and consumer (or their ancestors) have a
history of being coordinated through the use of a particular semantic sys-
tem, we need to know how to generalize from past successful coordinations
to determine a unique general semantic rule that determines content in
those cases where no actual coordination occurs. If the latter—if produ-
cer and consumer have been designed to learn to respond in a coordinated
way without actually practicing together, we need to understand how this
marvel is accomplished. I have argued that the semantic rules governing
representations that humans use in perception and thought are sometimes
derived one of these ways and sometimes the other. Let me discuss these
alternatives in turn.
Suppose, first, that the rules are determined by a series of past successful
coordinations between the producers and consumers of the representations.
Still, we might question whether there is only one way to generalize from a
series of past successful uses so as to determine a general rule of correspond-
ence between representations and candidates represented. Recall Kripke’s
(1982) worries that no matter how many past examples of correct addi-
tions one begins with, these examples will never determine the correct way
to go on to new examples using the ‘plus rule’. But the two cases are very
different. The rule we are seeking is one that not only coincides with past
successful cases of coordination but conformity with which, in each case,
causally generated the coordination. In each case, had the representation
been different the consumer would have reacted differently, and this reac-
tion would not have served its purpose, or would not have served it owing
(in part) to the fact or condition that was represented. Kripke’s difficulty
was that he could not appeal to dispositions that he has to react to the plus
are produced, for example, through the dorsal visual channels (VM, ch. 14).
And there are surely limitations on the extremes that humans can perceptu-
ally represent or interpret directly for action, just as there are on the dances
that bees can dance and interpret.
Pressing questions about the representation of affairs with which humans
could not possibly interact arise only when we are pretty certain that we
do in fact represent these things. For example, Peacocke (1992) has asked
about representing propositions that concern things outside our light cone.
No interaction with such things is physically possible even in principle.
What makes this question legitimate is that we are pretty sure that we
can formulate questions about what is outside our light cone, and even
though we are not able to answer these questions, it is possible that someone
might have an unfounded belief about some such matter, and that the belief
would in fact have a truth value.
Again, my suggestion on this matter will involve an appeal to composi-
tionality.⁴ But first, I need to locate the problem within the framework of
the theory of empirical concepts with which I have been working (LTOBC,
OCCI ). Representations produced and consumed by systems that employ
empirical concepts are examples of a kind of representation that producers
and consumers can learn how or be tuned to use cooperatively without
actually practicing together. Their production and use dispositions can be
tailored in advance to fit one another. It will take me a few moments to
explain how this can be. I will not defend my position on this matter at any
length, however. Discussion and defense can be found in VM chapter 19,
LTOBC chapters 15–19, and OCCI chapter 7. The implied realist onto-
logy is explained and defended in OCCI chapter 2 and in LTOBC chapters
14–17.
⁴ So does Caroline Price’s solution (2000). I agree with Price’s analysis as far as it goes.
But the question how we acquire concepts of objective kinds, objects, and properties in
the first place so as to recombine them compositionally requires explanation.
⁵ In OCCI, ch. 6, and in VM, ch. 9, I argue that gathering information by believing
what another human says is in all relevant ways exactly like gathering information
through direct perception. The result is that people can have basic concepts of things
that they don’t yet know how to recognize ‘in the flesh’.
dances are correct. There really is nectar in both of those places. Nor do
the bees have any way of representing where there isn’t any nectar. Bee
dances are not sensitive to a negation transformation. A subject–predicate
sentence and its negation, on the other hand, are explicitly incompatible,
incompatible right on the surface. Similarly, humans can think negative
thoughts, and these thoughts contrast explicitly with possible positive
thoughts. Whether the way human thoughts are coded resembles the way
language is coded in any other way, certainly our thoughts are sensitive
to a negation transformation. Whenever we have opportunity to gather
the same information in two different ways, through two different natural
information channels yielding different proximal stimulations, we have a
chance to gain evidence about whether our various methods of attempting
to identify the entities represented in the subjects and the predicates of
these judgments are each converging to focus on some single objectively
same thing. Consistent agreement in judgments is evidence that these
various methods of making the same judgment are all converging on
the same distal affair, bouncing off the same target, as it were. If the
same belief is confirmed by sight, by touch, by hearing, by testimony,
by various inductions one has made, and is confirmed also by theoretical
considerations (inference is a method of identification too), this is sterling
evidence for the univocity of the various methods one has used to identify
each of the various facets of the world that the belief concerns.
Thus the same object that is square as perceived from here should be
square as perceived from there and square by feel and square by check-
ing with a carpenter’s square and square by measuring its diagonals and
square by hearing from another person that it is square.⁶ Similarly, if a per-
son is tall and good at mathematics as recognized today, that same person
should prove tall and good at mathematics when reidentified tomorrow.
Both one’s general methods of reidentifying individuals and one’s methods
of recognizing height and mathematical skill are corroborated in this way as
methods of reidentifying objective selfsames. That the same chemical sub-
stance is found to melt at the same temperature by checking with an alcohol
thermometer, a mercury thermometer, a resistance thermometer, a gas ther-
mometer, a bimetal expansion thermometer, and a thermal thermometer
is evidence both that one is able to recognize the same chemical substance
again and that there is indeed some real quantity (unlike caloric pressure)
that is being measured by all of these instruments. If a multitude of differ-
ent operational definitions are found to correlate with one another exactly,
then they can be assumed all to measure the same thing. Moreover, and
⁶ See n. 5 above.
⁷ ‘If we see on a road one house nearer to us than another, our other senses will bear
out the view that it is nearer; for example, it will be reached sooner if we walk along
the road. Other people will agree that the house which looks nearer to us is nearer; the
ordnance map will take the same view . . .’ (Russell 1912: 31). This is Russell’s argument
that a real spatial relation between the houses corresponds to the nearer than relation of
which we are aware between certain sense data.
⁸ The remainder of this chapter follows paragraphs in VM, ch. 19, very closely, with
the kind permission of the MIT Press.
⁹ External negation, which operates on the sentence as a whole, is called ‘immunizing’
negation in (Millikan 1984). Horn (1989) gives a parallel analysis calling it ‘meta-
linguistic’ negation as opposed to ‘descriptive’ negation. The claim is that immunizing
or metalinguistic negation is not a semantic operator.
¹ For example, I ignore pure internal conceptual role theories, since they fail to
explain how a concept can even have an extension. No psychosemantics has ever been
given for ‘long-armed’ role theories (e.g. Harman 1987), and the early informational
theories (Stampe 1977; Dretske 1981) have been shown to suffer fatal problems with
disjunctivitis and the like (Fodor 1990).
attempt and fail to account for the representation of kinds, or they fall back
on something like an intention to refer to a kind—not exactly the most
auspicious move for a reductive theory.
There are a number of problems that prevent non-teleosemantic
theories from explaining how it is possible to represent kinds. A concept
of a kind K must exclude from its extension things that superficially
resemble K but whose underlying nature is different, e.g. a concept of
water excludes XYZ (Putnam 1975). Any psychosemantic theory that
depends exclusively upon the intrinsic properties (including dispositions)
of the representer to determine extension will thus fail to provide for
the unequivocal representation of kinds, by reason of the familiar twin
cases. This problem infects theories based on isomorphism (Cummins
1996), information (Usher 2001), and nomological covariance, including
(as Aydede 1997 demonstrates) Fodor’s asymmetric dependence theory in
its most recent guise (Fodor 1994).
Some information-based and nomological covariance theories avoid the
twin problem by adding further conditions on extension determination.
Whether or not these moves help with twins, they do not help with kinds.
For example, Prinz (2002) requires a representation’s content to be its ‘in-
cipient cause’, the thing that explains the concept’s acquisition. Such modi-
fied informational and nomological covariance theories inevitably fall victim
to a second problem for such theories, the problem of epistemically ideal
conditions. On a nomological covariance theory, surely it is the category that
exhibits the best nomological covariance with a representation that is its con-
tent. (What else could it be?) Since the covariation of a representation with
its content will always be better in ideal epistemic circumstances, a nomo-
logical covariance theory will always incorrectly dictate that the content of
a representation of kind K is rather K-in-epistemically-ideal-circumstances.
Getting the right content cannot be achieved by ruling out those
factors that are ‘merely epistemic’, e.g. by adding an ‘ideal conditions’
clause (Stampe 1977; Stalnaker 1984), since which factors are merely
epistemic will depend upon which factors are semantic (McLaughlin 1987).
If the symbol really means K-in-good-light, being in good light is not a
merely epistemic factor. (And if the symbol really means K-in-weak-light,
good light will not be ideal.) The only remaining way to rule out the
epistemic factors from the content of a representation on a nomological
covariance theory would be by ad hoc restriction of the class of candidate
contents to those that fit with how we in fact carve up the world. Because
we find these categories intuitive, this move can easily go unnoticed.
But this is to ignore the fact that a psychosemantics will be part of an
explanation for why those categories are the intuitive ones, i.e. why our
representations have the contents they do. The restrictions on content must
kind will have a cluster of typical properties, some subset of which will be
those that explain its detection (or however the mapping is brought about)
and the proper performance of the consumer’s relevant functions. Why not
say that the representation denotes either the subset or disjunction of selec-
tionally explanatory properties, rather than the kind itself? A kind is not
identical to a subset or disjunction of its properties. This seems to raise the
spectre of twin problems for Millikan—the selectionally explanatory prop-
erties could be clarity, liquidity, and potability, properties that H₂O and
XYZ share. (We shall see later that some additional resources allow Millikan
to deal with this problem, if not for the ubiquitous fly-snapping frog, then
at least for us.)
As far as I know, all other reductive theories face a relative of one of
the problems above. We must face the possibility that a reductive natur-
alistic psychosemantics cannot explain the representation of kinds directly.
However, there are also a number of non-reductive strategies for explaining
the possibility of representing kinds, strategies that appeal in some way to
the representation wielder’s intentions. Such an appeal might be used in a
reductionist project if the representations (normally concepts) mobilized in
these intentions do not themselves include any representations of kinds. I
think this hope is forlorn, but the reductionist has a lesson to learn from the
attempt.
What sort of intention would do the trick? Well, an intention to pick out
a kind, of course. For instance, the representation of specific kinds could
be accounted for by mental description. Just as one might say that unicorns
are picked out in thought descriptionally (‘a horse with a horn’—otherwise
a puzzling case, especially for the naturalist, since unicorns do not exist),
perhaps one could pick out a specific kind with ‘a φ that is a kind’, where
‘φ’ denotes some complex property whose representation can be accom-
modated by your favourite reductive theory. It would remain to give some
reductive account of the concept of kindhood; however, if concepts of par-
ticular kinds are difficult to account for with current reductive theories
of intentionality, then the concept of kindhood seems even more difficult.
Further, how would such a concept be acquired without prior concepts of
specific kinds as examples?
Perhaps a further non-reductive move could be made, filling out the
concept of kindhood descriptionally. A well-developed account of kind-
hood exists that could be put to use in this way. According to the ‘uni-
fied property cluster’ account (Boyd 1991; Kornblith 1993; Millikan 1999,
2000), a natural kind is characterized by a set of correlated properties,
where some further principle explains why they are correlated, and thus
why reliable inductive generalizations can be made over them. For example,
water is a substance with multiple correlated properties like liquidity in
² Teleological theories typically put this ‘supposed to’ in terms of function. Here, this
stretches the normal use of ‘function’ a little; normally we say that something has the
function of doing something, not of being a certain way. However, it is convenient to use
the term to cover both sorts of supposed-tos, and this is how I shall use it. What matters
is the normativity, not the functionality per se.
³ Consider a structure S1, where the elements of S1 are interrelated by a single type of two-place relation, R1, according to some particular pattern. That is, R1 obtains between certain specific pairs of elements of S1. S1 is isomorphic to another structure, S2, if there is a relation R2 (also two-place) and a one-to-one function f mapping the elements of S1 onto the elements of S2 such that: for all x and y belonging to S1, x R1 y if and only if f(x) R2 f(y). This definition may be extended to n-place relations in the obvious way (Russell 1927: 249–50; Anderson 1995).
of using the model to fill in missing information about the world (‘predict-
ive use’, broadly speaking). Another important use of models is in practical
reasoning, in figuring out how to act (‘directive use’). For instance, the scale
model of a building might be used as a guide for its construction. (Elsewhere,
I have argued that the occurrent attitudes are the causal role equivalents of
these two uses; 2002.)
Just like representation in indicators and maps, representation in models
is a functional property—mere isomorphism is insufficient. A rocky out-
crop that just happens to be isomorphic to the Spirit of St Louis does not
represent the Spirit of St Louis because the isomorphism in question is not
a normative one—the rock is not supposed to be isomorphic to the Spirit of
St Louis. A model represents because it has the function of mirroring or
being isomorphic to some other structure.⁴
Structures are composed of elements that enter into relations. When two
structures are isomorphic, an element of one is said to correspond to a partic-
ular element in the other, within the context of that isomorphism. These
two relations, isomorphism and correspondence, are promoted to being
representational properties when they become normative or functional. A
model represents a structure S when it has the function of being isomorph-
ic to S, and the model’s elements then represent the elements of S because
they have the function of corresponding to them. Thus representation in
models comes in two related varieties, one for the model, and the other for
its elements. A model of the Spirit of St Louis models the Spirit of St Louis,
while the left wingtip of the model stands in for the left wingtip of the Spirit
of St Louis.
3. MODEL-BUILDING
the mould. Then it injects a substance that hardens inside the mould, and
finally it breaks the mould and ejects a small-scale model of the original
object.
Why is it that we can say that the scale model this machine produces is
a model of the original object? Suppose the original object is the Spirit of
St Louis. (It is a big machine!) There need not be any intention to produce
a model of the Spirit of St Louis at work here. Perhaps someone just set
this model-making machine loose on the world, letting it wander about,
making models of whatever it happens to come across. (Of course, there
were intentions operative in the production of the machine; what we have
eliminated is any specific intention to produce a model of the Spirit of St
Louis.) The scale model produced is a model of the Spirit of St Louis simply
because the plane is what served as a template for production of the model.
The function of this machine is not to produce isomorphs of particular
things; it has the more general function of producing isomorphs of whatever
it is given as input. Each individual model inherits its function of mirroring
some specific object O from this general function, and the fact that O is
the input that figures in its causal history. Consequently, for any particular
model the machine produces, we must know that model’s causal history in
order to know what it represents.
But there is something else we need to know: the machine’s design prin-
ciples. In our example, the spatial structure of the model represents the spa-
tial structure of the thing modelled. But the model has a number of other
structural features besides its spatial structure; for example it has a density
structure. However, these other structural features are not representation-
al. Even if it fortuitously turned out that our scale model of the Spirit of St
Louis has exactly the same density structure as the Spirit of St Louis, the dens-
ity structure of the model would not correctly represent the density structure
of the plane (just as a black-and-white TV doesn’t correctly represent the col-
our of a zebra). This is because if the scale model happened to have a density
structure that mirrored the density structure of the real plane, it would be
entirely by accident, in the sense that it would not be by design.
A model-making machine is designed so that certain specific types of
relational features of input objects will cause the production of a specific
type of isomorphic structure. Those features of the input object that, by
design, determine the isomorphism for the automatic scale modeller are spa-
tial relations—and so spatial relations are the only relations the model rep-
resents, that it has the function of mirroring. Similarly, the only relational
features of the model that are structured by the input object, by design, are
spatial relations. Thus the design principles of the automatic scale modeller
tell us that only the spatial features of the model it produces do any rep-
resenting. When supplemented with the production history of a particular
model, the design principles can tell us exactly what that model and its ele-
ments represent, i.e. what the model has the function of being isomorphic
to, and what its elements have the function of corresponding to in the con-
text of that isomorphism. Similarly for any other model-making machine:
the machine’s design principles plus the causal history of a particular model
will tell us what that model represents.
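The two ingredients just identified, the machine's design principles (which relational features get copied) and a particular model's causal history (which object served as template), can be rendered schematically as follows. This is only a sketch of the thought experiment, not an analysis; the feature names and numerical values are invented.

```python
# Schematic rendering of the automatic scale modeller (illustrative only).
# Design principle: only the SPATIAL structure of the input is copied.
# Causal history: the model records which object served as its template.

DESIGN_PRINCIPLE = {"copied_features": ["spatial_structure"]}  # fixed by the machine's design

def make_model(input_object):
    """Produce a model of whatever object the machine happens to encounter."""
    return {
        "template": input_object["name"],  # causal history: what the model is of
        "spatial_structure": dict(input_object["spatial_structure"]),  # copied by design
        # Density, colour, etc. are NOT copied: any match there is accidental,
        # so those features of the model do no representing.
    }

spirit_of_st_louis = {
    "name": "Spirit of St Louis",
    "spatial_structure": {"wingtip_to_wingtip": 14.0, "nose_to_tail": 8.4},  # values illustrative
    "density_structure": {"mean_density": "unknown"},
}

model = make_model(spirit_of_st_louis)
# What the model represents = design principles + causal history:
print(model["template"], DESIGN_PRINCIPLE["copied_features"], model["spatial_structure"])
```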
Note that the automatic scale modeller is capable of producing inaccur-
ate models. Perhaps a piece of the machine falls off during its operation,
and introduces a lump into the model of the plane. This model says some-
thing false about the plane’s structure. Alternatively, it may be that the
general design principles for the machine fail in certain unforeseen circum-
stances, e.g. perhaps deep holes in an object cannot be fully penetrated by
the modelling clay. In both of these types of inaccuracies, the machine fails
to produce what it is supposed to produce, namely a structure spatially iso-
morphic to its input.
In the automatic scale modeller, there are two stages to the production of
a genuine model with a specific content. I propose that we can apply these
two stages of model production to the brain, in particular to the cerebral
cortex (because the thalamocortical system is the most likely brain structure
to subserve mentality). The first stage is the design of the model-making
machine, either intentional design (the automatic scale modeller) or evol-
utionary design (the cortex). The second stage is exactly the same in both:
template-based production of specific models according to the design prin-
ciples of the machine, as determined by the first stage. This is what it is to
acquire new representations through (non-reinforcement) learning.
If we suppose that the seat of the mind, the cerebral cortex, is designed
(by natural selection) to build models of the environment, the crucial ques-
tion that arises is this: what are the design principles of the cortex? In the
next section I will describe, from a functional point of view, the essentials
of these design principles according to the SINBAD theory. First, though,
a little preview of how this foray into neuroscience will help us eventually
answer the question we started out with, of how it is possible to represent
kinds.
The models the cortex is designed to build are dynamic models.⁵
The elements of a static model and the isomorphic structure it represents
are constants, like the position of the tip of the plane’s wing, and the pos-
ition of the tip of the model’s wing (relative to other points internal to the
plane). By contrast, in a dynamic model the elements in the isomorphic
⁵ The earliest extended physicalist discussion of the dynamic isomorphism idea, and a
defence of its relation to the mind, occurs in Kenneth Craik’s The Nature of Explanation
(1943). See also Cummins (1989) and McGinn (1989, ch. 3).
Figure 6.2. (Panels (a) and (b); figure labels: stimulus dimension, external variable, internal variable.)
flashes, and another begins its life tuned to booms. Through a process of
association, the pairwise correlation between flashes and booms (in thun-
derstorms) comes to be reflected in a mirroring covariation between the
neurons tuned to flashes and booms.
There are a number of reasons why the cortical design principles can-
not be those of classical associationism. One particularly serious problem
with the associationist proposal is that it is too impoverished to explain
our capacity to reason (Fodor 1983). In any case, there is neurophysiolo-
gical evidence that the regularity structure in the environment that guides
production of cortical models is not simple pairwise correlational struc-
ture, as the associationist supposes. Rather, the template regularity pattern
is of multiple correlations, i.e. multiple features that are all mutually correl-
ated (Figure 6.2b) (Favorov and Ryder 2004). This proposal also receives
support from psychology. While people tend to be quite poor at learning
pairwise correlations, unless the correlated features are highly salient and
the correlation is perfect or near-perfect (Jennings et al. 1982), when mul-
tiple mutual correlations are present in a data-set, people suddenly become
The relevant cortical design principles apply in the first instance to pyr-
amidal cells (see Figure 6.3), the most common neuron type in the cere-
bral cortex (70 to 80 per cent of the neurons in the cortex fall into this
class—see Abeles 1991; Douglas and Martin 1998). Like any other neur-
on, a pyramidal cell receives inputs on its dendrites, which are the elaborate
tree-like structures as depicted on the cell in Figure 6.3. A cortical pyram-
idal cell typically receives thousands of connections from other neurons,
some of which are excitatory, which increase activity, and others of which
are inhibitory, which decrease activity. (Activity is a generic term for a sig-
nal level.) Each principal dendrite—an entire tree-like structure attached
to the cell body—produces an activity determined by all of the excitatory
and inhibitory inputs that it receives. This activity is that dendrite’s output,
which it passes onto the cell body. The output of the whole cell (which it
delivers elsewhere via its axon) is determined in turn by the outputs of its
principal dendrites.
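As a purely functional caricature of the signal flow just described (not a claim about the underlying biophysics), one can picture each principal dendrite as computing an activity from its weighted excitatory and inhibitory inputs, with the cell's output determined by the dendritic activities. The sketch below makes this explicit; the weights, the clipping rule, and the combining rule are invented for illustration.

```python
# Toy caricature of the signal flow described above (illustrative only):
# each principal dendrite turns its excitatory (+) and inhibitory (-)
# weighted inputs into an activity; the cell's output combines the
# dendritic activities. All numbers and rules are invented.

def dendrite_activity(inputs, weights):
    """Activity of one principal dendrite: weighted sum of its inputs,
    clipped at zero (inhibition can cancel excitation but not go negative)."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return max(0.0, total)

def cell_output(dendrite_activities):
    """Output of the whole cell, passed on via its axon: here simply the
    mean of the principal dendrites' activities."""
    return sum(dendrite_activities) / len(dendrite_activities)

# Two principal dendrites, each with three synapses
# (positive weight = excitatory synapse, negative = inhibitory).
inputs_1, weights_1 = [1.0, 0.5, 1.0], [0.8, 0.6, -0.4]
inputs_2, weights_2 = [0.0, 1.0, 1.0], [0.7, 0.9, -0.2]

acts = [dendrite_activity(inputs_1, weights_1),
        dendrite_activity(inputs_2, weights_2)]
print(cell_output(acts))
```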
The input–output profile of a dendrite, and thus its contribution to the
whole cell’s output, can be modified by adjusting the strengths of its syn-
aptic connections, and possibly by modifying other properties of the dend-
rite as well, like its shape (Woolley 1999; McAllister 2000). An important
question in neuroscience is: what principles underlie the adjustments a cell
makes in order to settle on some input to output causal profile? Why do
Figure 6.3. A typical cortical pyramidal cell. The dendrites form the input region
of the cell, which transmits its output via the axon. There is a total of five principal
dendrites visible on this cell. (‘Dendrite’ can refer either to a principal dendrite or
to a sub-branch of a principal dendrite.) Axons from other neurons synapse on one
or more of the thousands of tiny spines covering the dendrites; inhibitory synapses
may also occur between spines.
⁶ For full details of the SINBAD theory, please see Ryder and Favorov (2001), Ryder
(2004), and Favorov and Ryder (2004).
and dendrite B to feathers. Because beaks and feathers are consistently cor-
related in the environment, the dendrites will consistently match.
Of course, there are more complex forms of mutual predictability than
simple correlation. Real dendrites can receive thousands of inputs, and they
are capable of integrating these inputs in complex ways. So the dendrites
can find not just simple correlations between beaks and feathers, but also
what I call ‘complex correlations’ between functions of multiple inputs.
Consider another cell. Suppose that amongst the detectors its first dend-
rite is connected to are a bird detector and a George Washington detector,
and for its second dendrite, a roundness detector and a silveriness detect-
or. (Clearly detectors that no well-equipped organism should be without!)
There is no consistent simple correlation between any two of these, but
there is a consistent complex correlation—bird XOR George Washington
is correlated with round AND silvery. So in order to match consistently, the
dendrites will have to adjust their input–output profiles to satisfy two truth
tables. The first dendrite will learn to contribute 50 per cent when [bird
XOR George Washington] is satisfied, and the second one will learn to con-
tribute 50 per cent only when [round AND silvery] is satisfied; otherwise
they will both be inactive (output = 0). Since these two functions are cor-
related in the environment, the two dendrites will now always match their
activities, and adjustment in this cell will cease.
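To see why adjustment ceases once the dendrites latch onto the complex correlation, consider the following toy check. It is only an illustration: the sample 'environment' is invented so that, as the example assumes, [bird XOR George Washington] holds exactly when [round AND silvery] does, and the 50 per cent contribution rule is taken from the text above.

```python
# Toy check of the complex correlation described above (illustrative only;
# the sample 'environment' is invented so that [bird XOR George Washington]
# holds exactly when [round AND silvery] does).

def dendrite_1(bird, george_washington):
    """First dendrite: contributes 50 per cent when [bird XOR GW] is satisfied."""
    return 0.5 if bird != george_washington else 0.0

def dendrite_2(round_, silvery):
    """Second dendrite: contributes 50 per cent when [round AND silvery] is satisfied."""
    return 0.5 if (round_ and silvery) else 0.0

# Each sample: (bird, george_washington, round, silvery)
environment = [
    (True,  False, True,  True),   # a quarter, tails side up
    (False, True,  True,  True),   # a quarter, heads side up
    (False, False, True,  False),  # a round, non-silvery object
    (False, False, False, False),  # something else entirely
]

for bird, gw, rnd, silver in environment:
    a1, a2 = dendrite_1(bird, gw), dendrite_2(rnd, silver)
    assert a1 == a2   # the dendrites match on every encounter,
                      # so adjustment in this cell ceases
print("dendritic matching achieved on all samples")
```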
Consistent environmental correlations are not accidental: there is virtu-
ally always a reason behind the correlations. For example, the correlation
between beaks and feathers in the first example isn’t accidental—they are
correlated because there exists a natural kind, birds, whose historical nature
(an evolutionary lineage) explains why they tend to have both beaks and
feathers. What will happen to a cell that has one dendrite that comes to
respond to beaks, while the other comes to respond to feathers? The cell
will respond to birds—the thing that explains the correlations in its inputs.
Similarly, the second cell will come to respond to the kind that explains the
complex correlations in its inputs, namely American quarters.
SINBAD cells thus have a strong tendency to tune to sources of correl-
ation. Different cells will tune to different sources of correlation, depend-
ing upon what inputs they receive. Each cell’s tuning is to be explained
by a particular source, and the correlations that source is responsible for
(Figure 6.4). When this tuning process takes place over an entire network,
the network is transformed so that its flows of activation come to mir-
ror regular variation in its containing organism’s environment. Where the
environment has some important variable—a source of correlation—the
network will have a cell that has tuned to that source of correlation. And
where there is a predictive relation among sources of correlation, the net-
work will be disposed to mirror that relation. The activities of the network’s
cells will covary in just the way that their correspondents in the environ-
ment do. In short, the network becomes dynamically isomorphic to the
environment.
Figure 6.4. (Figure labels: sensory input, source of correlation, external variable.)
The reason that a cortical SINBAD network develops into a dynamic iso-
morphism is that cells’ inputs are not only sensory, but also (in fact primar-
ily) derived from within the cortical network. A cell’s tuning is guided, in
part, by these intracortical connections (Phillips and Singer 1997). Tun-
ing—changing a cell’s dispositions to react to the environment—occurs
through the modification of a cell’s dispositions to react to activity in oth-
er cells, mediated by intracortical connections. It is these latter dispositions
that come to mirror environmental regularities.
Remember, on the classical associationist picture, a pairwise correlation
in the environment comes to be mirrored in the brain (Figure 6.2a). On
the SINBAD picture, it’s not simple pairwise correlation that comes to
be mirrored in the brain, but patterns of multiple correlations, plus their
sources (Figure 6.4). In Figure 6.5, where one SINBAD cell’s inputs come
from other SINBAD cells in the cortical network, it can be seen that,
through the process of cell tuning, the regularities obtaining among the
Figure 6.5. When a SINBAD cell receives intracortical inputs from other SIN-
BAD cells, the relations among sources of correlation come to be mirrored in those
connections. (The mirroring is shown as bidirectional since cortical connections
tend to be reciprocal, though the two directions are mediated by distinct ‘cables’).
sources of correlation upon which these cells depend for dendritic matching
will come to be reflected in their intracortical connections. Since sources
of correlation are interrelated both within and across levels (cats are related
to fur and to mice, water is related to taps and salt, and grass is related
to greenness and to suburbia), an extensive network develops, as a cell’s
dendrites come to use other cells’ outputs in finding a function that allows
them to match.⁷ A cell may start with a tenuous correlational seed,⁸ but
this subtle sign of those correlations’ source is enough to put the cell on a
path towards discovering the multitude of regularities in which that source
participates. As the cell achieves more and more robust dendritic matching,
⁷ Naturally, in getting their dendrites to match, cells can take advantage not only of
intrinsic features, but also of relational features of sources of correlation.
⁸ If there is no correlation available, the cell’s activity will be low, and it will elaborate
its dendrites in search of new inputs until it is able to find a correlation (for a review of
dendritic growth, which occurs throughout life, see Quartz and Sejnowski 1997). If it is
still unsuccessful, at some point the cell will ‘give up’ and degenerate (Edelman 1987).
⁹ If you are worried that there are far too many sources of correlation our brains
need to have some cells tune to, consider the fact that in the densely interconnected
human cerebral cortex, there are somewhere between 11 and 25 billion pyramidal
cells (Pakkenberg and Gundersen 1997). Compare this to a good adult vocabulary of
50,000 words. (There is also a mechanism to prevent too many cells from tuning to the
same source of correlation—see Favorov and Ryder 2004.)
¹⁰ Of course, this will create a mismatch between dendrites; if a previously correlated
input is consistently absent, the dendrite will learn to ignore it in order to achieve a match
again with the other dendrites on the cell.
Can the SINBAD theory explain how the representation of kinds is pos-
sible? It should be uncontroversial that the cortical network, if it is a SIN-
BAD network, is a model-building machine. Clearly the cortical network is
supposed to structure itself isomorphically with regularities in the environ-
ment; the utility of this isomorphism is undeniable, for filling in missing
information about the world, and in practical reasoning. But our main
question, of how it is possible to represent kinds, turns upon the nature
of the specific design principles of a SINBAD network. The design prin-
ciples of a model-making machine dictate its general function, and thus
what type of structure it represents. We have seen that SINBAD networks
have a strong tendency to become dynamic isomorphisms that mirror regu-
larities organized around sources of correlation. The result we now want to
get is that this tendency is teleofunctional: that the cortical SINBAD net-
work was designed to develop such isomorphisms, and consequently that
SINBAD cells are supposed to correspond to sources of correlation. Given
the analysis of model representation from Section 2, it would follow that
SINBAD cells represent sources of correlation. Since kinds form one type
of source of correlation, we would have shown how it is possible to repres-
ent kinds.
Note the contrast between this approach and the descriptionist’s. The
descriptionist begins with a (complex) representation of a cluster of observ-
able properties that typically characterize some kind. But, observes the de-
scriptionist, no such representation can ever represent a kind—it will never
have the right extension to do so (owing to twin problems, for example).
The only way to mentally represent a kind, they continue, is to represent
it as a kind in a very strong sense: one must have a detailed conception of
kindhood, and somehow link this with the representation of the cluster of
observable properties. (In Section 1, I tried to show that this was not a very
promising approach.)
According to my proposal, this detailed conception of kindhood is not
necessary. Contra the descriptionist, it is possible for a representation that
does not include anything like a conception of kindhood nevertheless to
‘get the extension right’ in the case of a kind. The automatic scale modeller
produces representations of (relative) spatial points on objects in virtue of
having the general purpose or function of producing correspondences to spa-
tial points. It need not have anything like a conception of spatial-pointhood
in order to do this. Similarly, a representing device may produce repres-
entations of kinds in virtue of having the general purpose or function of
producing correspondences to kinds, while utterly lacking a conception of
kindhood. (In fact, my proposal is slightly different, of course. It says that
the cerebral cortex has a general purpose or function of producing corres-
pondences to the broader class of sources of correlation, but that still means
it can ‘get the extensions right’ in the case of kinds, since kinds are simply a
variety of source of correlation.) Does this count as representing a kind ‘as
a kind’? I don’t know. Some would take a detailed conception of kindhood
to be necessary for that phrase properly to apply. All I care about at the
moment is solving the problem described in Section 1, which was the prob-
lem of ‘getting the extensions right’, something that no other extant theory
can manage. (Perhaps SINBAD neurosemantics can also help explain how
it is possible to acquire the concept of kindhood, as well as the conception
that typically accompanies it, but I make no claims about that here.)
So all we need to show is that SINBAD cells have the general purpose
or function of corresponding to sources of correlation (or, if you prefer,
that the cortex has the function of ‘producing’ cells exhibiting such cor-
respondences). Since evolution is the designer here, we need to make it
plausible that the SINBAD mechanism was selected for the properties of its
interaction specifically with sources of correlation, that its being structured
by sources of correlation in particular confers some benefit compared to
other types of model-building (e.g. pairwise association). This is eminently
plausible. We saw that the clustering of numerous (possibly complex) prop-
erties around a source of correlation allows a cell that tunes to that source to
have multiple lines of ‘evidence’ for its presence. The result is an extremely
powerful predictive network, with multipotent capabilities for filling in.¹¹
Importantly, SINBAD cells must tune to reliable sources of multiple correl-
ations in order for the network to exhibit this sort of power; the particular
advantage of the network depends entirely upon the inductive richness of
sources of correlation. SINBAD cells are plausibly built (by evolution) to
take advantage of this inductive richness—they have a strong tendency to
tune to sources of correlation, and this tendency is what ultimately pro-
duces a rich isomorphism.
¹¹ Note that several functions may overlap on a single dendrite, and typically cells will
operate in population units, with all members of one population corresponding to the
same source of correlation. So the capacity for filling in is astronomical (Ryder 2002).
There are several related aspects to the way in which SINBAD cells
take advantage of the inductive richness of sources of correlation.¹² First,
this richness permits a SINBAD network, given the nature of its units, to
develop a correspondingly rich inductive network, not of scattered pairwise
correlations (as in an associative network), but of interrelated regularities
grounded in the deep structure of the environment. This richness ensures
robust prediction through redundancy—there are many ways to predict the
same thing.
Second, the inductive richness of sources of correlation facilitates future
learning. Because sources of correlation are inductively rich, once a SIN-
BAD cell starts to tune to one by discovering some of the correlated proper-
ties it exhibits (the correlational seed), the cell (given its special properties)
is in a uniquely advantageous position to discover further correlation.¹³
(This will be the case as long as its dendrites have not come to match their
activities perfectly, which they almost never will, owing to the presence of
noise in the cortical network.) We saw that in this way, a cell continually
adds to its lines of ‘evidence’ for the presence of the source of correlation to
which it is tuning, indefinitely enriching the model’s isomorphism.
Relatedly, the success of SINBAD networks depends upon a cell, dur-
ing the course of learning, receiving useful inputs from other cells in the
network. These other inputs will be much more useful to aid dendritic
matching if these other cells have tuned to real kinds, so that their outputs
carry information about real kinds. That is because environmental regular-
ities are fundamentally determined by interactions among real kinds (and
other sources of correlation). So, given the nature of the SINBAD mechan-
ism, it’s vital that cells tune to sources of correlation in order to develop a
nice isomorphism.
A way to think of it at the network level is this: the SINBAD cortical
network was selected for mirroring, not just regularities, but grounded reg-
ularities (grounded in sources of correlation). But, one might ask, how can
mirroring grounded regularities be selected for? In order for a mechanism to
be designed to mirror grounded regularities, it must incorporate some way
of picking them out. But how could any mechanism do this?
¹² They are relatives of functions that Millikan attributes to empirical concepts (in
the previous chapter in this volume, and her 2000, ch. 3): ‘accumulating information’
about substances (including real kinds), and ‘applying information previously gathered’
about substances.
¹³ This has a corollary: learning becomes much more efficient when a SINBAD
network creates correspondences to sources of correlation. Compared to the acquisition
of a model lacking such correspondences, fewer relations involving fewer variables must
be learned in order for the model to be inferentially complete. (See Favorov and Ryder
2004; Kursun and Favorov 2004).
¹⁴ Here follows a brief account. For more details, see Ryder (2002, 2004).
of the properties that have helped cause it to fire.¹⁶ These properties are cor-
related owing to their being properties of this particular source, so the cell
finally comes to tune to and correspond to that source of correlation. This
is the functionally normal route for a SINBAD model to adopt a particular
configuration, the way it was designed to work: some specific source of cor-
relation causes (or explains) each cell’s achievement of dendritic matching.
Only in this way can a cell participate in a reliable predictive network. It is
equivalent to the automatic scale modeller taking in an object through its
input door, producing a nice mould, and spitting out a perfect model.
In a SINBAD model, deviations from the way structuring is supposed to
proceed by design will be deviations from the way tuning is sup-
posed to proceed by design. These will include causal interactions with
things that have inhibited a cell from achieving its current level of matching
success, and in most cases, these inhibitors will not be templates for the cell.
For example, consider the following history of SINBAD model produc-
tion. Suppose a cell had been gradually tuning to cats. Perhaps a dog caused
the cell to fire at some point, because in certain conditions, dogs look like
cats. Let us say that the dog made three of the cell’s dendrites match, while
there was a failure to match for two dendrites. The SINBAD learning rule
made some of these dendrites modify their connections. But this does not
improve their overall matching success. The dendrites will tend to move
away from functions that pick out cats (functions they had previously been
tending towards), without taking them any closer to functions that pick out
dogs, or anything else. Without consistent ‘training’ through exposure to
multiple dogs, the dendrites are unlikely to modify their behaviour so as to
increase their sensitivity to features characteristic of dogs. (In fact, the only
thing they might improve their sensitivity towards, in this case, is this par-
ticular dog in this particular circumstance—and that is not even a source of
correlation.) Subsequent exposures to cats bring the dendrites back towards
the function that picks out cats, and the cell back towards better match-
ing success (and thus predictive utility). That response to a dog inhibited
the cell from achieving its current level of matching success; it led it away
from finding the correlated functions due to cats, the isomorphism it has
now settled on. The dog was something that affected the model, but not
¹⁶ Despite the fact that the causal relation between the activity in a SINBAD cell’s
dendrite and a source of correlation is mediated by some intervening physiology, it is
still causation in virtue of some determinate property of the stimulus. Which property
was causally relevant to the activity in the synapse can be identified by counterfactuals.
Suppose an instance of a particular shade of red causes a cell to fire. Had the stimulus
been a different shade of red, the cell would have fired anyway. However, had the
stimulus been blue, it would not have fired. Then the property that was causally relevant
to the synaptic activity was redness, not the particular shade of red, nor colouredness.
according to design. It features in the history of the cell’s tuning, but it did
not cause or explain its matching success, i.e. the aspect of model structuring
that occurs by design. That dog was a stray rock in the SINBAD mechan-
ism, while the kind cat was this cell’s template. A cell’s template is a source
of correlation that explains its current matching success.
This does not mean that the model-building machine was broken when it
changed so as to reduce its predictive utility; it was just functioning sub-
optimally. Also note that our conclusion that this cell represents cats is
consistent with an alternative history in which the cell permanently veers
off its course, eventually tuning to and representing dogs. In this case,
the cell would have different properties with a different explanation for its
matching success, and it would be in a different model, with a different
history—indeed, at some point in its progress, the kind cat may cease to
provide any explanation for its matching success. Its matching success may
depend entirely upon its previous exposure to dogs.¹⁷ On the other hand, if
the kind cat (as well as dog) continues to explain its matching success, the
cell will be disjunctive or ‘equivocal’, where two kinds are confused as being
the same (Ryder 2002). (See Millikan 2000 on equivocal concepts, which
certainly exist and so ought to be psychosemantically explicable.) This is
another way model design can proceed sub-optimally. It is sub-optimal
since it will lead to inductive errors.
So the design principles of the cortical model-making machine pick out,
as a cell’s template, only the things that have helped that cell achieve its
current matching success (where that is measured holding its response pro-
file and current broad environment fixed). Anything else does not explain
the creation of an internal model according to the cortical design principles.
Therefore, a single SINBAD cell¹⁸ has the function of corresponding only
to the source of correlation that actually helped it achieve the degree of
dendritic matching it has attained thus far. That is the source of correlation
¹⁷ Note that this avoids a problem that arose for Dretske’s (now abandoned) account
of representation in Knowledge and the Flow of Information (Dretske 1981). In this
book, Dretske identifies the content of an indicator with the information that was
instrumental in causing it to develop a particular sensitivity during its ‘learning period’.
When it responds to something else after the learning period, it misrepresents. Suppose
an indicator has been responding to As, and then it responds to a B. On Dretske’s
old theory, if this latter response is part of the learning period, Bs will be part of the
indicator’s content, but if it is part of the ‘use’ period, then the indicator misrepresents
the B as an A. Which is it? As Loewer (1987) points out, the problem for Dretske is
that there is no principled way to distinguish between the ‘learning’ period and the ‘use’
period. In SINBAD neurosemantics, there is no need to distinguish between a learning
period and a use period. You just need to ask whether B explains to some non-negligible
extent, the cell’s current matching success.
¹⁸ I note again that representations that actually have a cognitive effect will typically
involve populations of SINBAD cells.
the cell represents. Anything else that it responds to, has responded to, or
corresponds to in the context of some isomorphism is not part of the cell’s
representational content.
So not only can SINBAD cells have the function of corresponding to
kinds in the context of an isomorphism, the details of the SINBAD mech-
anism allow us to determine exactly which kind (or other source of cor-
relation) a particular SINBAD cell has the function of corresponding to.
An element of a model represents that which it has the function of corres-
ponding to. So if all goes well, that kind will be the unique representational
content of the cell. Since SINBAD cells are the basic elements of a SINBAD
network, we can also determine which regularity structure a whole network
has the function of being isomorphic to, and thus models. Because of their
inductive richness and SINBAD’s penchant for such richness, kinds will
tend to figure prominently in these internal models. Which, in addition to
the evidence linking SINBAD to the cortex, and the cortex to the mind, is
an important reason to suppose that mental representation, at least in us, is
SINBAD representation.
will essay a more comprehensive defence of the idea that perceptual exper-
iences are representations. This paragraph will suffice for now.) Clearly,
something can have a certain function but not perform it in a particular
instance. Thus, the above theory is compatible with falsity of attribution.
A virtue of TMS Theory is that it seems to make the question we are
dealing with—‘What is the feature attributed to a thing by an experience
of it as blue?’—recognizably scientific. Scientists in many areas of the life
sciences—evolutionary biology, physiology, psychology, ethology, etc.—
ask about the biological function of animal organs and the various states
and conditions of these organs. The methodology of addressing these ques-
tions is contested, but constitutes fairly familiar territory nonetheless. Thus,
the Teleosemantic Theory serves as a bridge between philosophy and cog-
nitive science.
TMS Theory meets an immediate point of resistance when one tries to apply
it to cognitively sophisticated organisms. What is the functionally appro-
priate response to a given perceptually represented situation—for instance,
to the experience of something as blue? Is there such a thing? In Matthen
(1988) I argued that there is not. It seemed obvious to me then that while
sensory states in primitive organisms, or vestiges of these states in sophistic-
ated organisms (e.g. the sneeze, the startle response, etc.), might lead directly
to autonomic responses, perceptual states in human and other higher anim-
als might lead to any number of responses depending on other volitional and
cognitive states. There is no determinate way that we are supposed to react
to the presence of a blue thing, much less to one that merely appears blue.
Suppose I see a nearly full glass of beer on the table. It is by no means
automatic that I will pick it up and take a sip. Is it mine or somebody else’s?
Am I drunk or sober? Is it six o’clock on a hot evening when I am look-
ing forward to my first drink of the day? Or is it six in the morning when
I have found an unfinished glass that someone left there overnight? These
alternatives suggest that the perceptual state feeds into a complicated pro-
cess of contextualization, cognitive assessment, and decision-making. This
is its function, not the initiation of some autonomic response. It seems not
just simplistic, then, but flat out wrong-headed to define perceptual content
in terms of some single response which perception is supposed to initiate.
Call this the Problem of Multiple Responses.
Responding to this Problem, I suggested that the states in lower organisms
(or in ourselves) that tie into autonomic responses were not full-fledged
perceptual states but only ‘quasi-perceptual’. The function of full-fledged
perceptual states, such as the one illustrated above, was merely to ‘detect’
(or, using the terminology introduced above, ‘indicate’—see Dretske 1988;
Matthen 1989) certain situations, leaving it up to the perceiver to decide
what ought to be done with the information so provided. Perception is
detection, I proposed, not the initiation of an appropriate response. By so
arguing, I brought my position close to that of Dretske (1988) as quoted
above. In short, I proposed:
Weak TMS Theory. E attributes F to x if an occurrence of E is supposed
to indicate that x is F.
The normative element contained in the words ‘supposed to’ is essential
here; it distinguishes TMS Theory from the Indicator Meta-Semantic
Theory. In the weaker version of TMS Theory, there is no requirement that
a perceptual state should initiate a specific response to the situation repres-
ented, as there is in the (stronger) TMS Theory articulated earlier: indeed,
the claim was that perceptual states proper (as opposed to quasi-perceptual
states) do not initiate specific responses.
This proposal immediately raised Ruth Millikan’s ire (1989/1993). Mil-
likan is, of course, a pioneer of teleosemantic theories (though her focus
was originally on language and communication, rather than perception),
and as such she subsumes perceptual states under the broader category of
representations. A representation, she says, is an item made for use by a
‘consumer’. It accords by a certain rule with the situation it represents
(‘accords by a certain rule’ is, I think, her version of ‘supposed to indic-
ate’), and the consumer is thereby enabled to respond appropriately to that
situation. A beaver splash representing danger is an example. If a beaver
splashes its tail in the danger-representing way, and danger in fact exists,
then other beavers in the vicinity, the consumers of this representation, will
do what they are supposed to do in the presence of danger. If it splashes
its tail when there is no danger, then these consumers will thereby be given
reason to do what would have been appropriate had there been danger, but
not what is appropriate in the actual non-threatening situation. Potentially,
this could cause the beaver’s normal activities to be disrupted. Thus, the
accuracy of the representation is, as Millikan says, a normal condition of the
success of the consumer’s actions. (In response to the Problem of Multiple
Responses, this should be amended to read ‘normal condition of the success
of the consumer’s choice of actions’.)
The Problem of Multiple Responses to a given situation is beside the
point, Millikan argued. What is important is that a connection exist be-
tween a given cognitive state and the situation or feature that it is supposed
to represent. With regard to the glass of beer, the accuracy of my perception
is a normal condition for the success both of drinking it (when that is
which the consumer uses this representational state is irrelevant. All that is
relevant is that any action-choice will be disrupted by the representation’s
not being in accord with the situation it represents.
Millikan, however, went much further than I did with detecting.
Whereas I had been content to assume that in very low-level organisms,
the representational state that accorded with environmental situations was
the very same as the one that initiated the action, Millikan—rightly, I
think—insisted on distinguishing between the detector function and the
effector-triggering function even in simple ‘pushmi-pullyu’ representations,
as she calls them (1995)—states that combine detector and effector
functions in a single package.
Imagine that a certain cell-body in a unicellular organism enters into a
particular chemical state when a particular situation obtains, and that this
chemical state initiates a certain behaviour in the organism. (Perhaps the
lack of some nutrient causes a molecule to be stripped of a nucleotide,
which in turn causes the absent nutrient to be synthesized.) The chemical
state of this cell-body both detects the occurrence of the triggering situ-
ation and initiates the appropriate response to that situation, and is, as
such, a pushmi-pullyu representation. Despite such co-location of detect-
ing and effecting, it is important to distinguish these two capacities, and to
link them in precisely the way that Millikan does, i.e. to say that the suc-
cessful performance of the detector function is a normal condition for the
successful performance of the effector function. The detector manufactures
representations; the effector is a consumer of the representation.
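The separability of the two functions can be put schematically. The sketch below is only an illustration of the hypothetical unicellular case; the function names and the 'nutrient' details are invented. Even though a single state does both jobs, detecting and effecting are distinct functions, and accurate detection is a normal condition for the effecting to succeed.

```python
# Toy sketch of a pushmi-pullyu state in the hypothetical unicellular case
# above: one chemical state both registers the lack of a nutrient (detector
# role) and triggers its synthesis (effector role). Names are invented.

def detect(nutrient_present):
    """Detector function: produce the representation-bearing chemical state.
    The state is 'stripped' (True) exactly when the nutrient is lacking."""
    return not nutrient_present

def effect(stripped_state):
    """Effector (consumer) function: respond to the chemical state by
    synthesizing the nutrient."""
    return "synthesize nutrient" if stripped_state else "do nothing"

# Normal case: detection is accurate, so the consumer's response is apt.
print(effect(detect(nutrient_present=False)))   # -> synthesize nutrient

# Misrepresentation: the state gets stripped with the nutrient present; the
# effector still does what would have been appropriate had the nutrient been
# lacking. Accuracy of the detector is a normal condition of its success.
print(effect(True))                              # -> synthesize nutrient (wasted effort)
```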
This is an absolutely crucial insight for teleosemantics. However, the
point goes deeper than Millikan realized. Effector functions cannot be dis-
regarded or ignored when we ask about detector functions. To be sure, it
is a function of detectors to be accurate, and this accuracy forms part of
the normal conditions of effector success. But this does not mean that we
can state the detector function in abstraction from effector function. For
we cannot properly identify which feature the detector is supposed to detect
accurately except by reference to effector function. This is what Millikan
(and I) failed to appreciate. Both of us assumed (I surmise) that we could
find the feature in question by looking ‘upstream’, i.e. by looking at the
environment. In fact, we can find it only by looking at the consumer. Or so
I shall argue.
Imagine an actor who is told to react with pretended surprise to the blue
presentation. She rehearses this action again and again, and perfects a look
of surprise. She has not avoided habituation, but has simply ignored it in
her outward reaction. There are multiple responses to the repeated blue
presentation, then, but there is also an autonomic response.
Habituation illustrates a more general phenomenon. Any perceptual state
alters the state of inner epistemic ‘organs’, with the consequence that re-
sponses to further presentations of that or other stimuli are altered. In
computational terms, these epistemic organs have been represented, ever
since Donald Hebb, as networks of connected neurons. Every new per-
ceptual state affects the strengths of the connections between neurons in
these networks. In more cognitive terms, one might say that the organ-
ism maintains a set of inner expectations, and every incoming perceptual
state automatically occasions an update of these expectations. With respect
to these expectations, a new stimulus will reinforce an expectation to the
degree that it is experienced as similar to occurrences encountered before, or
weaken an expectation to the degree that it is dissimilar to what has been
encountered before. Such mechanisms for creating, maintaining, and extin-
guishing expectations are, as Quine (1969: 306) argues, innate. For ‘There
could be no induction, no habit formation, no conditioning, without prior
dispositions on the subject’s part to treat one stimulation as more nearly
similar to a second than to a third.’ This implies that there can be no
learning without an unlearned, i.e. innate, capacity to measure perceptual
similarity.
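One crude way to picture such updating (a toy sketch only, not a model endorsed in the text; the similarity measure, learning rate, and stimuli are invented) is an expectation strength that is nudged up by stimuli similar to those already encountered and down by dissimilar ones, with the overt response shrinking as the expectation grows, which is just habituation.

```python
# Toy sketch of expectation updating and habituation (illustrative only).
# A stored 'expectation' is strengthened by stimuli similar to those already
# encountered and weakened by dissimilar ones; the overt response declines
# as the expectation grows. Similarity metric and learning rate are invented.

def similarity(a, b):
    """Innate similarity measure over one stimulus dimension (0..1)."""
    return max(0.0, 1.0 - abs(a - b))

def update(expectation, remembered, stimulus, rate=0.3):
    """Nudge the expectation up for familiar-looking stimuli, down otherwise."""
    s = similarity(stimulus, remembered)
    return expectation + rate * (s - 0.5)      # reinforce if similar, weaken if not

expectation, remembered = 0.0, 0.4             # initial state
for stimulus in [0.4, 0.42, 0.38, 0.41, 1.4]:  # four similar presentations, then a novel one
    expectation = max(0.0, min(1.0, update(expectation, remembered, stimulus)))
    response = 1.0 - expectation               # habituated response shrinks as expectation grows
    print(f"stimulus={stimulus:.2f}  expectation={expectation:.2f}  response={response:.2f}")
```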
Broadly speaking, these expectations are of two sorts. First, we have
expectations concerning individual objects—about where they will be,
what colour or shape they will be, etc. Second, we have expectations about
associations among features: about what size goes with what weight, what
colour goes with what taste, and so on. Though these expectations change
measurably only after a series of perceptual presentations, the most plaus-
ible way to represent the connection between perception, on the one hand,
and learning or memory, on the other, is to posit that each and every per-
ceptual state has some effect on some expectation or ‘epistemic’ net. (The
term ‘epistemic’ is perhaps inflated in this context. The point is that even
very simple organisms are capable of forming environmental representa-
tions that last longer than a single interaction with that environment. In
the case of habituation, each presentation lasts only a second or so, but the
change to the response disposition lasts much longer. The term ‘epistem-
ic’ is used to mark this longer duration, the change of dispositions over the
long run; no suggestion of reasoning or justification is intended.)
In view of the occurrence of such autonomic epistemic responses, we
may elaborate TMS Theory: the responses initiated by perceptual states
have states that accord with certain states of the environment. So it seems
that the right way to approach the question is to find out what states of
the environment detector states are supposed to accord with. The question
we are asking demands more than that we establish a correlation between
detector states and environmental states. What we are looking for here is a
theory of function within which to embed any such correlation. One pop-
ular theory about colour perception is that it is supposed to detect surface
spectral reflectance (cf. Hilbert 1987, 1992; Matthen 1988). The idea is
that the colour vision system sorts things by surface spectral reflectance, and
that an organism has colour vision just in case it is able to discriminate such
reflectances. This is an organism-independent way of characterizing sens-
ory feature. Having looked at the real-world functional correlate of colour
perception states, we arrive at a characterization of colours in purely phys-
ical terms. Call this the ‘upstream’ approach: it looks upstream from the
perceiver to find the meaning of perceptual representations.
The upstream approach neglects the consumer. Every detector function
is associated with effector functions within the same organism. As we saw
in the previous section, perceptual functions are associated with epistemic
functions. These epistemic functions result in new representations, which
are associated with various further effector functions, and so on. All these
functions together form a system that serves the organism in its interac-
tions with the environment. Consider the detector system from this point
of view. It will work well if it sorts things together when they are ‘sup-
posed to’ be treated the same for these epistemic purposes, and sorts them
differently, i.e. differentiates them, when they are supposed to be treated
differently. To revert to Millikan’s insight about the consumer, perceptu-
al classification is correct if it serves the organism’s effector functions; it is
incorrect if it disrupts these functions. For example, the perceptual sorting
function that serves habituation is supposed to group things together when
it is appropriate to ignore a new presentation as not constituting news. This
is the ‘downstream’ approach. It attends to the results of perceptual sorting
activities, not to the stimuli that occasion these activities, for the meaning of
perceptual experiences.
This point of view is dictated by a simple fact. Biological function
arises out of evolutionary history. The function of an organ is that feature
of it that contributed positively to the selection history of the type of
organism in which it occurs. In a system of organ functions like the one
sketched above, functions must be coordinated. If a sensory system co-
classifies things that need to be differentiated for the normal functioning of
effector organs, the organism suffers. Conversely, if the organism develops
activities that demand a different classification scheme than its detector
organs provide, it will suffer. Early teleosemantic theories concentrated on
The emphasis on the coevolution of the provider and the consumer of rep-
resentations—i.e. of detector and effector systems within the same organ-
ism—clears up a well-known problem for the Teleosemantic Theory. In
early versions of teleosemantics, psychological data was used to show that
sense features were physically characterized kinds. For example, David Hil-
bert (1987, 1992) and I (Matthen 1988) argued that colours were surface
spectral reflectances. (In light of the argument given in the previous section,
the conclusion ought to have been that human colours were surface spectral
reflectances grouped by a particular similarity metric.) Recently, however,
it has been pointed out (by Boghossian and Velleman 1991 and Braddon-
Mitchell and Jackson 1997) that we do not as naive perceivers know colours
under this description. After all, people in ancient times knew colours well
enough, just as well as the naive observer today knows them—and surface
spectral reflectances had not even been discovered. Yet, it seems that we do
know the colours. For sensory features of this sort, to experience them is
to know them. So, it appears, the Teleosemantic Theory cannot account
for the subjective content of sense features, i.e. for how they present them-
selves to us. (Actually, it is unclear why a naive perceiver should know a
meta-semantic, as opposed to a semantic, fact about the meaning of exper-
ience: nevertheless, I will take the difficulty at face value. That is: we do
have a naive grasp of colour and TMS Theory should explain this grasp. On
the other hand, TMS Theory need not stumble on the fact that the naive
colour-perceiver need know nothing of evolutionary theory.)
Clearly, the consumer’s perspective helps with this difficulty. For instead
of trying to find the definition of sense features upstream at the head of the
process that starts with a thing out in the world, the consumer’s perspect-
ive looks downstream to the effector system for this definition. Now, one
might say that the effector system possesses one kind of ‘knowledge’ of what
a detector system’s determinations mean—it knows this in terms of its own
response. We claimed at the end of Section 3 that the proper way to define
a sense feature was in terms of sameness of response: blue is that feature
of things that brings about the same response as some paradigm—the sky,
perhaps—up to some degree of similarity. Since the epistemic response to
sensory states can be assumed to be known—tacitly, but nonetheless com-
pletely—the problem of how we know sense features seems not to arise at
all within this framework. The knowledge is instinctive and contained in
how we respond to things. (Boghossian and Velleman, who attacked ‘phys-
icalist’ theories of colour, could agree since a response-defined theory is not,
by their lights, physicalist. Braddon-Mitchell and Jackson, however, were
attacking teleosemantic theories as such, and they should find the present
argument liberating.)
In the next and final section, we examine the role of perceptual experi-
ence in making explicit the kind of tacit knowledge discussed in the present
section.
achieved, Lewis says, S1 and S2 are signals. In Lewis’s case, the requisite
combination is achieved by agreement between the parties. But Skyrms
(1996) shows that, under natural selection, coordinated action plans have
an ‘attractive force’ of their own, and there is no need for extrinsic acts
of agreement. As he (1996: 103) says: ‘Signaling system equilibria . . . must
emerge in the games of common interest that Lewis originally considered.’
Like Lewis, Skyrms was considering signals between organisms that have an
interest in achieving a coordinated signalling system. Here, we are consider-
ing a signal coordination problem between subsystems of a single organism.
Since the subsystems of a single organism perish or prosper according to
whether the organism does, the commonality of ‘interest’ is guaranteed. If
coordination is not achieved, the organism will be less fit, and consequently
both the detector and the effector will perish, regardless of how effective
they may be considered in themselves.
Notice that if there is to be a possibility of signalling in such a situ-
ation, the sexton has to have at least as many putative signals available
to him as there are circumstances that demand different actions on the
part of Revere. But neither cares at all which communicative action plan
is adopted, as long as the mutually desired result ensues. It follows that
there is always a choice as to which signal is associated with which circum-
stance. Thus, the association between signal and circumstance is a matter
of convention: there is always a choice among possible association schemes,
and nothing matters other than that both the sexton and Revere agree on
which signal is to be used in which set of circumstances. Earlier, I stip-
ulated that there were n features that could be attributed to a stimulus
x, and correspondingly n actions, each one appropriate when the corres-
ponding feature is detected. We now see that the sensory system would
need n non-coercive signals to inform the effector organs of which of these
features is detected. The important point to note is that, as Lewis and
Skyrms demonstrate, it does not matter which signal is associated with which feature, so long as the actions taken by the epistemic effector systems coordinate properly with the signalling code chosen by the sensory
detector system. A coercive signal has to be chosen for its effects. If I am
going to ensure that you do the right thing by directly manipulating your
body, then I must choose my actions in such a way as to achieve this end.
With a non-coercive signal, all that matters is that my action plan coordin-
ates with yours. Thus, which signal stands for which determination of the
sensory system is contingent and historical. What matters in the case of a
non-coercive signal is that the ‘speaker’ and the ‘hearer’ should coordinate
their attitudes.
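The point about conventionality can be made vivid with a toy version of a Lewis–Skyrms signalling game. This is my own sketch under simplifying assumptions (n discrete states, signals, and acts), not a reconstruction of either author's formal apparatus.

from itertools import permutations

n = 3
states = range(n)

def payoff(sender_code, receiver_plan):
    # One unit of common 'interest' for each state in which the act the
    # receiver performs on hearing the signal matches that state.
    return sum(1 for s in states if receiver_plan[sender_code[s]] == s)

equilibria = []
for code in permutations(range(n)):       # sender: which signal goes with which state
    plan = {signal: state for state, signal in enumerate(code)}   # receiver matched to that code
    equilibria.append((code, payoff(code, plan)))

# All n! matched sender/receiver pairs earn the maximum payoff of n, so nothing
# but history and convention settles which code is actually used.
print(equilibria)

The sketch makes the two claims in the text visible: there is a plurality of coordination equilibria, and the payoff structure alone gives natural selection no reason to prefer one of them to another.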
Where are we now? In Section 4, I proposed that sensory systems co-
evolve with effector systems. Here I am adding a further wrinkle to that tale.
The claim is that under certain circumstances, the effector function is such
that, instead of direct manipulation, a non-coercive signal of the detector
state needs to be sent from detector to effector. Perceptual experience is
such a signal. The analysis of these signalling problems by Lewis and Skyrms
demonstrates that the meaning of a given experience is determined by an
arbitrary coordination scheme which emerges as a part of the coevolution
of detector and effector systems. In these circumstances, it is a matter of
convention and history which signal is associated with which circumstance
for the purposes of communication between systems. It is a convention in
the sense that (a) there is a plurality of coordination equilibria, and (b) nat-
ural selection does not ‘prefer’ one to another. This is the sense in which
we can take perceptual experience to be a representation with conventional
meaning.
1. INTRODUCTION
I see some newspaper blown by the wind as a cat slinking, and thus I rep-
resent the newspaper as a cat. Three things are involved: (i) a representation,
presumably some neural event, (ii) its target, in this case the newspaper,
and (iii) the content of the representation, cat slinking. In this case, the rep-
resentation misrepresents its target because there is a mismatch between its
target and its content. A philosophical theory of mental content is princip-
ally concerned with the relation between items of the first and third kind.
Such a theory tries to answer the question: in virtue of what does a mental
representation have the content it has?¹ An obvious desideratum for such a
theory of content is that it gets the contents of representations right.
It’s easy to give a theory of mental content that ascribes some content to
mental representations. There’s the Today is Tuesday Theory, for example,
which says that all of our brain states have the content Today is Tuesday.²
This allows for misrepresentation because it entails that all of our brain
states are wrong six days of the week. However, it is a terrible theory because
it gets the contents of very few mental representations right. What we want
is a theory that entails that we are thinking that today is Tuesday only if we
are thinking that today is Tuesday, and that entails that we are thinking that
the plants need watering if we are thinking that the plants need watering
instead.
While this desideratum is obvious, it is surprisingly difficult to apply.
Consider the notorious case of the frog. A normal frog will snap at anything
that’s moving and suitably small and contrastive with its background (for
short, at anything small, dark, and moving). At least as philosophers tell the
tale, in their natural habitat the small, dark, moving things are mostly flies
that are nutritious for frogs. There has been debate about whether this or
that theory of mental content generates suitable content in this case, and
yet—and this is one of the most frustrating things for those fresh to the
debate—opinions differ as to what the correct content is. In the case of the
frog, philosophers have variously argued that the content of its visual rep-
resentation is fly; frog food; a parcel of chemicals nutritious for frogs; something
small, dark, and moving; small, dark, moving food; or something indetermin-
ate between these.³
How can we use simple system cases to test our theories if we cannot agree
on what the content is? What we need is some independent ground for believ-
ing one content ascription rather than another. This chapter tries to provide
such a ground. Here I argue that some candidate contents serve the purposes
of mainstream cognitive science better than others do, mainstream cognitive
science being understood as that science that uses an information-processing
approach to provide operational explanations of cognitive capacities. I claim
that some candidate contents can and some cannot play a role in such explan-
ations. This is a contentious beginning but I am content to make my con-
clusion conditional on the assumption that a theory of content should try
to meet the needs of mainstream cognitive science.
Subject to this condition, if my argument here is along the right lines,
we will have a good reason to reject standard teleological theories, such
as Ruth Millikan’s (1991), as well as some non-standard ones, such as Kim
Sterelny’s (1990) and Nicholas Agar’s (1993). On the positive side, we
will also have good reason to take another look at informational theor-
ies like those offered by Jerry Fodor (1990b), Fred Dretske (1994), Pierre
Jacob (1997), and Neander (1995, forthcoming).⁴ Note that the last three
of these are teleological theories of mental content, so this chapter should
not be construed as an argument against all teleological theories.
In what follows I switch from frogs to toads. The perceptual systems of
frogs and toads are very similar and the neuroethological literature on the two
overlaps to a great extent, but toads let me make my point a little more vividly.
³ Jerry Fodor (1990a) and Kim Sterelny (1990) say it represents its target as a fly. Ruth
Millikan (1991) says it represents it as frog food and Carolyn Price (1998, 2001) says it
represents it as a parcel of chemicals nutritious for frogs. Fodor (1990b: 106), who has
changed his mind, along with Fred Dretske (1994), who has also changed his, Pierre Jacob
(1997), and I (Neander 1995) say it represents it as something small, dark, and moving.
Nicholas Agar (1993) says it is small, dark, moving food. And Dretske (1986), David
Papineau (1998), and Daniel Dennett (1995) suggest that the content is indeterminate
between these things. This is a very incomplete list of those who have participated in this
debate, but it is representative.
⁴ This chapter is from my forthcoming book Mental Representation: The Natural and
the Normative in a Darwinian World (MIT Press).
I hope readers will enjoy or at least endure with patience the short excursion
into toad neuroethology that takes place in Section 2. I also hope that even
those who disagree with the implications I draw from it in Section 3 will find
the issues interesting. The smaller goal is to argue for a particular content
ascription in a particular case, but the larger goal is obviously more import-
ant. It is to illustrate the way in which content ascriptions should cohere with
neuroethological analyses of relevant cognitive capacities.
As for the relation between neuroethology and cognitive science, the two
are continuous. The relevant domain of study for frogs and toads is referred
to as ‘‘neuroethology’’ not ‘‘cognitive science’’ since the latter applies mostly
to the study of our own species. However, neuroethology does the same sort
of thing for other animals as cognitive science does for us; it studies such
things as perception, motor control, and decision making in non-human
animals, including primates. Its aims and methodological tools are also
much the same despite some obvious differences, such as in the ethical con-
straints that scientists feel themselves to be under and the fact that verbal
responses by research subjects have an important place in one and none
in the other. In both cases, the aim is to understand the normal flow and
transformation of information and its neural substrate. Sometimes expli-
citly computational models are developed in both cases (e.g. compare Marr
1982 and Cobas and Arbib 1992). Moreover, biologists believe that much
of what they have learned about the anuran (frog and toad) nervous system
applies—and many of the concepts developed in studying them are applic-
able—to a wide range of vertebrate species (Ewert et al. 1983: 414). Given
this substantial and methodological continuity, I sometimes use ‘‘cognitive
science’’ to refer to both cognitive science and neuroethology, generically.
2. TOAD NEUROETHOLOGY
of the seminal paper by Jerome Y. Lettvin and his colleagues (1959), ‘What
the Frog’s Eye Tells the Frog’s Brain’, which first sparked philosophical
interest in the frog. But this is an oversimplification even of this paper.
Lettvin et al.’s claim was that ‘‘prey’’ discrimination occurred in retinal
cells, but anuran retinal cells are more complex than mammalian ones and
the relevant process was not thought to be mere transduction (even mam-
malian retinal cells do more than mere transduction). In any case, more
recent research has undermined Lettvin et al.’s claim. It turns out that fur-
ther information processing involving mid-brain structures is required for
‘‘prey’’/‘‘predator’’/‘‘other’’ discrimination. Five decades of intensive
research further on, biologists are still trying to unravel the complexities.
Some philosophers complain that the frog example is ‘numbingly fami-
liar’ but my sympathy with this is tempered by the fact that we have main-
tained an impressive collective ignorance about the real live case. Besides,
both the frog and the toad are genuinely excellent subjects for our purposes.
As I have already remarked, the anuran brain is similar in terms of broad
principles to those of many other vertebrate species. Also, while it is cer-
tainly very complex, it is nonetheless relatively simple compared to other
vertebrate brains, and so it is easier to understand. In addition, frogs and
toads have been intensively studied—they are the amphibian equivalent of
Drosophila. As a result, neuroethologists have a more complete understand-
ing of their nervous systems than they have of most other vertebrate nervous
systems (Ewert et al. 1983: 413).
positive conditioning can affect their responses: if, for instance, the odor of
mealworms accompanies feeding for a time it can subsequently strengthen
their prey-catching response and even override what would otherwise
constitute non-prey-like (‘‘other’’) features. What follows concerns their
capacity to distinguish ‘‘prey’’, prior to any such conditioning.
David Papineau (1998: 5 n. 1) wonders if frogs and the like have a
belief–desire psychological structure, and suggests that creatures lack
determinate representational content if they lack a belief–desire psycholo-
gical structure. Whether toads have a belief–desire psychological structure
depends on how demanding the notions of belief and desire are, but it’s
worth noting in passing that toads have motivational states and therefore
states that have a desire-like direction of fit, and they also have information-
al states and therefore states that have a belief-like direction of fit. In other
words, they have states that were designed to tell them what conditions
obtain and they have states that were designed to cause certain conditions to
obtain. Toad behavior is also somewhat flexible, even aside from its modest
learning potential, in the sense that it’s modified in appropriate ways on the
basis of different informational and motivational states.
Toads respond to large looming predator-like stimuli by a range of beha-
viors that biologists describe as sidestepping, ducking, puffing up, rising up
stiff-legged, excreting toxic oils, and turning, crawling, or leaping away. In
response to a prey-like stimulus, in contrast, researchers report that the toad
displays a sequence of behavioral elements, which are said to consist typ-
ically in orienting (o) toward the stimulus, stalking or approaching it (a),
fixating or viewing the prey from front on (f), and snapping at it (s) by
lunging and/or extending its tongue and/or snapping its jaw. There is some
flexibility in how these behavioral elements are combined. The toad might
simply snap if the prey is in front and within range, and orienting and
approaching can be left out or repeated as often as required. So we might
have sequences such as f-s, o-f-s, o-a-a-f-s, o-o-a-o-a-o-o-a-o-o-a-a-f-a-f-s,
and so on.
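As a rough illustration of this flexible chaining (my own sketch, not a model drawn from the neuroethological literature; the predicates for being centred and in range are hypothetical stand-ins), sequences of this kind can be generated by a simple loop:

def prey_capture_sequence(centred, in_range, max_steps=20):
    # Orient until the stimulus is centred, approach until it is in range,
    # then fixate and snap; orienting and approaching may repeat or be skipped.
    sequence = []
    for _ in range(max_steps):
        if not centred():
            sequence.append("o")
        elif not in_range():
            sequence.append("a")
        else:
            sequence.extend(["f", "s"])
            break
    return "-".join(sequence)

centred_checks = iter([False, True, True])   # toy readings: off-centre once, then centred
range_checks = iter([False, True])           # toy readings: out of range once, then in range
print(prey_capture_sequence(lambda: next(centred_checks), lambda: next(range_checks)))   # o-a-f-s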
Usually, a toad’s response to a stimulus deemed prey-like involves an ini-
tial orienting toward it, unless the toad is already so oriented. A normal
toad does not orient toward predator-like stimuli or (prior to conditioning)
other-like stimuli. But if a toad sees a prey-like stimulus move out from
behind a barrier it first moves around the barrier, which can involve turning
away from the prey.
If the toad is placed in a glass dome and a prey-like item is rotated hori-
zontally at a constant distance around the toad the sequence is o-o-o-o-o-o-o-
o-o . . . and so on, until the toad habituates, which takes about sixty seconds.
This rotating procedure has been used in some experiments (that I describe
below) to gauge the extent to which a stimulus counts as prey-like for the
toad. The more turns a motivated toad makes in a thirty-second interval,
the more the stimulus is considered prey-like for the toad.
Some parts of the entire prey-capture sequence are classified as fixed-
action patterns. That’s to say that once they have begun they cannot be
modified in the light of further information. For instance, if a dummy prey
disappears after a critical point in the fixation phase, the toad still snaps
and gulps and, as the neuroethologists report, often licks its mouth in seem-
ing satisfaction. As ethologists use the term, the sign stimuli for an innate
releasing mechanism for a fixed-action pattern are those features of the
environment that release or trigger the relevant behavior. The sign stimuli
for an innate releasing mechanism can be ascertained by purely behavioral
studies through the use of dummy stimuli and the careful variation of vari-
ables, a practice that goes back to the famous studies of Konrad Lorenz and
Niko Tinbergen in the first half of the twentieth century.
Behavioral studies show that the toad distinguishes ‘‘prey’’ from ‘‘preda-
tor’’ and ‘‘other’’ by some quite specific features of a moving stimulus. The
range of the relevant dimensions and the preference curves differ from spe-
cies to species; there are hundreds of species of toad. However, the story is
much the same across the range. In a famous series of studies (summarized
in Ewert et al. 1983), Jörg-Peter Ewert and his colleagues used a variety of
dummy stimuli including cardboard cutouts with three distinct configura-
tions. These consisted of (i) rectangles of constant width and varying lengths
moved in a direction parallel to their longest axis, dubbed ‘‘worms’’, (ii) rect-
angles of constant width and varying lengths moved in a direction perpendic-
ular to their longest axis, dubbed ‘‘anti-worms’’, and (iii) squares of different
sizes, dubbed ‘‘squares’’. In brief, as shown in Figure 8.1, the ‘‘worms’’ pro-
voke prey-capture behavior, although ‘‘anti-worms’’ of the same dimensions
are ignored, and the ‘‘squares’’ produce prey-capture behavior if they are
small enough and avoidance behavior when they are larger.
As you can see, the categories ‘‘worm’’, ‘‘anti-worm’’, and ‘‘square’’ do
not quite correspond with the categories ‘‘prey’’, ‘‘other’’, and ‘‘predator’’.
However, a toad’s prey tends to be worm-like. That is, it tends to be within
certain size parameters and moving in a direction that parallels its longest
axis. Again, I use scare quotes to remind us that not all ‘‘worms’’ are real
worms and not all real worms are ‘‘worms’’. ‘‘Worms’’ can be crickets or
millipedes, or cardboard cutouts, and a real worm can be an ‘‘anti-worm’’
if, for example, it is stunned, hung by its tail, and moved perpendicular to
its longest axis.
Visual ‘‘prey’’/‘‘predator’’/‘‘other’’ discrimination is not affected by fea-
tures of the stimuli that are not captured by these dummy stimuli. For
Figure 8.1. The toad's behavioral response to worms, anti-worms, and squares, measured in turns per minute, varies with the length s of each kind of stimulus. After Ewert (1980)
⁵ This creates problems for Price’s (1998, 2001) solution to the functional
indeterminacy problem(s). It conflicts with her claim that her ‘abstraction condition’
rules in favor of the content toad food (she is discussing frog food, but extending the same
reasoning to toads, we get toad food). According to Price, when we determine ‘the unique
correct description’ of the function of a device and hence, on her view, the description
that determines the contents of its representations, we must not consider the internal
design of the device or that of the collaborating systems. Where ‘d’ is the relevant device,
Price says, ‘we do not need to know how d’s fellow components make their contribution.
Nor do we need to know about the design of the device itself' (1998: 70). This doesn't
favor toad food over something more like item moving in a direction that is parallel to
its longest axis because we don’t need to know how the detection device or its fellow
components work (in the sense of knowing anything about their underlying structural
or functional design) in order to know the sign stimuli that trigger the prey-catching
response. We can discover them while treating the frog or toad as a black box.
Table 8.1. A comparison of three classes of retinal ganglion cells in a typical toad.
Neuronal class Approx. ERF diameter (deg.) IRF strength Preferred stimulus
R2 4 +++ Dimming
R3 8 ++ Dimming or brightening
R4 16 + Brightening
Source: Adapted from Ewert et al. (1983: 444).
surround. In the common toad, R2s have the smallest receptive fields, with center diameters of 4°. They are primarily ''off-center'', meaning that their centers respond best to a dimming of light. R3s have somewhat larger receptive fields, with center diameters of 8°, and they have ''on–off-centers'', meaning that their centers respond well to either dimming or brightening. R4s have the largest receptive fields, with centers that have diameters of 16°, and they respond best to an increase of light. Some of these properties are compared in Table 8.1.
Since the surround inhibits the cell, each cell responds best when the
entire center is stimulated and none of the surround is. Thus, an R2 cell will
respond most strongly when a dark circle entirely fills its center’s receptive
field. These cells can thus provide information about changes in illumin-
ation in the visual field, which can be used to extract information about
the size, shape, and motion of moving stimuli. The response patterns of
R2, R3, and R4 retinal ganglion cells to the ‘‘worm’’, ‘‘anti-worm’’, and
‘‘square’’ configurations are given in Figure 8.2.
If we compare these responses to the behavioral response patterns of the
toad, shown in Figure 8.1, we can see that none of these retinal ganglion
cells have excitation patterns that mirror the response of a motivated toad.
For instance, the behavioral response increases with the length of the worm-
like stimuli, but none of the retinal ganglion cells show the same preference.
So neurobiologists conclude that ‘‘prey’’/‘‘predator’’/‘‘other’’ discrimina-
tion requires further processing beyond that performed by the retinal gan-
glion cells.
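The centre–surround behaviour described above can be sketched minimally as follows. This is my illustration only; the relative weighting of centre against surround is an assumption, not a value taken from Ewert's data.

def off_center_response(frac_center_dark, frac_surround_dark,
                        w_center=1.0, w_surround=0.5):
    # An R2-like 'off-center' cell: excited in proportion to how much of its
    # centre is darkened, inhibited by darkening of its surround.
    return max(0.0, w_center * frac_center_dark - w_surround * frac_surround_dark)

print(off_center_response(1.0, 0.0))   # dark disc exactly filling the centre: maximal response
print(off_center_response(1.0, 1.0))   # larger dark stimulus spilling into the surround: weaker
print(off_center_response(0.3, 0.0))   # small dark spot covering part of the centre: intermediate

A cell of this kind carries information about local changes in illumination, but, as the text notes, nothing in its response profile by itself mirrors the toad's behavioural preference for worm-like stimuli.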
The retinal ganglion cells primarily extend to the optic tectum (T), a
mid-brain structure, and also to the thalamic pretectum (TH), in addition
to several other neural structures. Different kinds of retinal ganglion cells
go to different layers of the optic tectum. There’s also a crossing over, with
retinal ganglion cells from the right eye crossing to the left tectum and those
from the left eye going to the right tectum. Neighborhood relations are
preserved, which means that nearby retinal ganglion cells that record from
nearby retinal receptors and thus from nearby regions of visual space project
onto nearby parts of the tectal layers. In this sense, the tectum contains
multiple maps of the visual field.
Figure 8.2. Responses of retinal ganglion cells to three kinds of moving stimuli (A: R2 cells, B: R3 cells, C: R4 cells; impulses per second against length s in degrees). After Ewert (1980)
the response patterns of the motivated toad, as shown in Figure 8.1, you
can see that the match is quite close. It is sufficiently close for neuroetholo-
gists to think that the T5(2) cells might be the ‘prey-recognition neurons’.⁶
The response pattern of these T5(2) cells is largely explained as a balance
of the inputs from two other classes of cells, an excitatory input from anoth-
er class of tectal cells, the T5(1) cells, and an inhibitory input from some
thalamic pretectal cells, the TH3 cells. The activation patterns of TH3 and
T5(1) cells are shown in Figures 8.3a and b. Electrode implantation and
lesion experiments provide evidence that the TH3 cells have this inhibitory
effect. Electrical stimulation of the TH3 cells reduces the response of T5(2)
neurons. And if the thalamic pretectum is removed, the toad’s prey-capture
response becomes disinhibited, so that the toad acts as if everything that
moves is prey: it will then orient toward its own extremities, toward large
predator-like cardboard squares, to the hand of the experimenter, and so
on. Smaller lesions in the thalamic pretectum produce the same response
with respect to smaller parts of the visual field. Neuroethologists conceive of
this ensemble of TH3, T5(1), and T5(2) cells as something like an AND-
gate (Ewert et al. 1983: 442), or as an AND-gate with a NOT-gate on one
of the inputs (it does not have a discrete on–off character, but the ana-
logy with computational components described as AND-gates is apt). A
schematic presentation of the proposed operation of this ‘‘gate’’ is given in
Figure 8.4.
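A minimal sketch of the proposed ensemble may help; it is my illustration of the excitation-minus-inhibition idea, not Ewert's quantitative model, and the weights and firing rates are invented for the example.

def t5_2_response(t5_1_rate, th3_rate, w_excite=1.0, w_inhibit=1.0):
    # T5(2) output: excitation from T5(1) minus inhibition from pretectal TH3,
    # thresholded at zero -- roughly an AND-gate with a NOT-gate on one input.
    return max(0.0, w_excite * t5_1_rate - w_inhibit * th3_rate)

print(t5_2_response(t5_1_rate=30.0, th3_rate=5.0))    # 'worm': strong T5(1), weak TH3 -> strong output
print(t5_2_response(t5_1_rate=30.0, th3_rate=28.0))   # 'square': both strong -> output largely cancelled
print(t5_2_response(t5_1_rate=30.0, th3_rate=0.0))    # lesioned pretectum: disinhibited -> everything
                                                      # that moves is treated as 'prey'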
Figure 8.3. Responses of thalamic TH3 cells, tectal T5(1) cells, and tectal T5(2) cells (number of impulses per second against length s in degrees). Only (c), the T5(2) cells, has the same pattern as the behavioral response of the toad, shown in Figure 8.1. After Ewert (1980)
⁶ The match is not perfect. One difference is that the maximally stimulating length of the worm-like stimulus is 16° for behavior and only 8° for the T5(2) cells (for further discussion of this point, see Camhi 1984: 230–7).
Figure 8.4. The T5(1), TH3, T5(2) ensemble. The response of the T5(2) cells is primarily controlled by the inhibiting influence of the TH3 cells and the excitatory influence of the T5(1) cells. After Ewert (1980), from Carew (2000)
Other areas of the toad's brain also mediate visual ''prey''/''predator''/''other'' discrimination. For example, cells in the telencephalon inhibit the activity of the thalamus. Their removal in the poor toad results in hyper-
excited visually induced escape behavior and eliminates visually induced
prey-capture behavior altogether. However, as I say, the areas that are
primarily responsible for visual ‘‘prey’’/‘‘predator’’/‘‘other’’ discrimination
are the optic tectum and the thalamus. In sum, it is thought that visually
induced prey-capture is primarily mediated by the optic tectum moderated
by the thalamus (and that visually induced predator-avoidance is primarily
mediated by the thalamus moderated by the optic tectum).
the firing of the optic fibers (the retinal ganglion cells) turns out to be the
‘‘prey’’-representation. Our best bet to date is that ‘‘prey’’-recognition has
only occurred once T5(2) excitation has occurred.
For this reason, I shall speak of a high frequency of action potential activ-
ity in a T5(2) cell—hereafter ‘‘+T5(2)’’ or ‘‘T5(2) excitation’’—as the
relevant representation. This is better than the usual vague or worse talk but
it could be an oversimplification. There are a number of different features of
neuronal events that could carry information. One is the average rate of fir-
ing of neurons, and this seems to be involved in this case. However, it might
not be a case of local coding (i.e. a single cell for a single feature). Instead,
a cluster of cells (e.g. with overlapping tuning curves) might be involved.
Luckily, the issue does not substantially impact the main argument in this
chapter. Only certain details would need changing.
Assuming that the relevant representation is a +T5(2), the question is
''What is the content of a +T5(2)?'' Before this case can be used to test our
theories of mental content we need an independent basis for attributing one
content to it rather than another. In Sections 3.1 and 3.2, I suggest that
we need to observe the coherence constraint: if the contents of mental rep-
resentations are to play a role in explaining cognitive capacities, they must
cohere with the relevant information processing. At the risk of laboring the
point, let me stress that this is not offered as a theory of mental content.
Coherence is an intentional notion, but that’s fine and dandy here because
I am not offering a theory of mental content at this point. I am offering a
heuristic or a principle to be used in identifying contents in simple systems
so that we can determine pre-theoretically (i.e. pre-philosophical-theory-of-
content-ly) what the correct content is.
⁷ I hope it’s clear that none of this is meant to solve the so-called functional
indeterminacy problem. It’s one thing to argue for a given content, and another to show
that one’s theory delivers it.
Several considerations speak in favor of it. One is the fact (at least prior
to any conditioning) that a normal toad does not snap at such an ‘‘anti-
worm’’. Only an abnormal toad, such as a toad with an ablated thalamus,
reliably snaps at ‘‘anti-worms’’. It seems unfortunate in a theory of content
if it implies that correct representation requires abnormality.
Putting neurological impairment aside, the contents toad food or member
of a prey species also seem unmotivated from the perspective of the informa-
tion processing and underlying mechanism. As we’ve seen, the mechanisms
that precede +T5(2)s are sensitive to environmental features relevant to the
size, shape, and motion relative to shape of the stimulus. They can therefore
support a content that concerns the configuration of visible features. They
do not support the other contents in so far as they are insensitive to wheth-
er the stimulus is nutritious or whether it is a member of a certain species.
No detectors of chemicals nutritious for toads are involved; no detectors of
individuating characters of species are involved. At most, we can say that
the mechanisms that detect the configuration of visible features approxim-
ate as detectors of these.⁸
Natural selection has (as biologists say) satisficed. Other things being
equal, a capacity to tell nutritious things apart from non-nutritious things
would have been good for the toad, as would a capacity to recognize mem-
bers of species that have historically been caught and eaten by them. But the
plain fact is that neither of these capacities evolved. Such capacities might
require fancier cognitive equipment, which is expensive to build, operate,
and maintain, or the needed mutations might have never arisen. Either way,
these are not the capacities that a toad possesses. Instead the toad possesses a
different capacity that merely approximates these.⁹
Of course, it is the capacity possessed and not the capacity approxim-
ated that neuroethologists aim to explain. In part this is a methodological
point. As noted at the end of Section 2.2, neuroethologists need an exact
description of the preferences of the motivated toad before looking for their
neural basis or else they will be looking for the wrong thing. In part, it is
also a logical point. You cannot explain how your car does 30 miles per gallon if it cannot do 30 miles per gallon and can only do 28 instead. You could explain how it could do 30 if it were altered in various ways, but neuroethologists are not primarily interested in explaining how to improve on the design of creatures like toads.
⁸ If what's meant by ''prey'' is not a member of a species that has historically been hunted by the toad, but rather something that can be caught and eaten, the suggestion is much more promising. Not, in my view, because edibility is a Gibsonian affordance, but because +T5(2)s lead to further information processing that controls catching and eating, and we do not want to preclude the possibility that +T5(2)s have more content than their perceptual content (i.e. motor or motivational content).
⁹ One could argue that the toad has a capacity to recognize that food has a certain probability of being present, but notice that the toad would still not be in error when it tokened at something with the right configuration of visible features.
What of the content toad food that is moving in a direction that parallels its
longest axis? This kind of mixed proposal, intended to be ecumenical, doesn’t
disregard the neuroethology as egregiously as others do, but the compromise
needs motivating. It is a virtue of it that it lets us count a +T5(2) produced
in response to a nutritious ‘‘anti-worm’’ as erroneous. But most of the objec-
tions mentioned above in relation to toad food apply equally well to toad food
with the right configuration of features. The toad does not have a capacity to
recognize whether or not the stimulus is nutritious. It has a capacity to recog-
nize whether the stimulus has the right configuration of features. This latter
capacity approximates the former capacity. But a capacity approximated is
a capacity approximated, not a capacity possessed.
and this is the content we get when we appeal to selection history. How-
ever, even if a theory of content must make reference to selection history,
there are different ways to do so, and some theories of content that appeal to
selection history generate contents of the sort favored here, as was noted in
the introduction. Of course, whether they are satisfactory on other grounds
remains to be seen. I am sometimes accused of not being teleological in pro-
posing this argument. However, I see myself as steering us toward a more
promising type of teleological theory.
3. Millikan (2000, app.) argues that arguments like mine lead us to a
bad end. She thinks that such arguments lead to the conclusion that con-
tent cannot go beyond what can be discriminated. In fact, to my dismay,
Millikan interprets an earlier paper of mine (Neander 1995) as asserting
this, and as even asserting that there can be no distal contents (and no distal
functions)! I did not say this (I certainly did not mean to say this), and
nor am I saying it here. I shall not try to unravel the prior confusion but
we should be clear that the present argument does not entail that content
cannot go beyond what can be discriminated.
Millikan maintains that arguments like mine are ‘‘verificationist’’ (Mil-
likan 2000, app.). She also levels the claim against Jerry Fodor, for his
having said that, ‘According to informational semantics, if it’s necessary
that a creature can’t distinguish X s from Y s, it follows that a creature can’t
have a concept that applies to X s but not to Y s’ (Fodor 1995: 32.)
Fodor (1990b: 107–9) makes a similar point, in explaining his asym-
metric-dependency theory, which entails that the frog represents (de dicto)
small, black dots rather than flies. Fodor defends this outcome because, he
says, if we think of the content as fly we would have to allow that some
mistakes are nomologically necessary for the frog. He sheds light on what
he means by this when he adds that if we think of the content as fly, ‘There
is no world compatible with the perceptual mechanisms of frogs in which
they can avoid mistaking black dots for flies’ (Fodor 1990b: 107–9).
Fodor refers to the implication as ‘an attenuated sort of verificationism’
(1990b: 108), an admission that Millikan turns into a critique. However,
Fodor’s principle is not really verificationist. For one thing, it applies to
concepts not sentences or sentences in the head. For another, it only applies
to conceptual primitives. Admittedly, most concepts are conceptual primit-
ives according to Fodor, or anyway the Fodor from back then, but we need
to note that we only approach even an attenuated form of verificationism if
we combine Fodor’s principle with this further claim.
Even with respect to conceptual primitives, Fodor’s principle only entails
that their contents cannot go beyond what can be discriminated in a certain
depend on subtle behavioral cues on the predator’s part. How they flee (e.g.
whether they engage in stotting, and how close they let the predator get
before fleeing) depends on the type of predator (Griffin 2001: 71). Further,
they can recognize a lion-like animal from different distances and differ-
ent angles of view, and on the basis of seeing only small parts of it (Griffin
2001: 128). Compare this to the toad’s simple perceptual capacities. The
toad’s detection of prey-like or predator-like stimuli is not mere transduc-
tion, but it is nonetheless based on a fairly simple configuration of visible
features. A good theory of content should respect these gradations in soph-
istication.
4. Another way to respond to the argument in this chapter is to argue
that an innately programmed inference is involved in the case of the toad.
It seems undeniable that the toad is processing information about the con-
figuration of visible features. But it could be argued that the toad’s brain
infers the presence of something nutritious (or the presence of a member
of a prey-species) from this configuration of visible features. According to
most contemporary theories of perception, perceptual processing is often
inferential. It often involves innate assumptions that are ‘‘embodied’’ in the
processing. The relevant inferences are not deliberate or conscious but are
implicit in unconscious processing. So why shouldn’t this be what is hap-
pening in the toad’s case?
The problem with this response is that there is no motivation from the
perspective of neuroethology for thinking that an inference is involved. And
we are looking for pre-theory-of-content reasons for preferring one content
over another. For one thing, if it seems plausible that the toad infers the
presence of nutrients, it should seem equally plausible that it infers the pres-
ence of a member of a prey species, since the configuration of features can
be considered equally good evidence for both. There is nothing in the neur-
oethology that distinguishes between these two competing interpretations.
For another thing, the claim that there is an inference or something like an
inference has empirical implications, if we are realists about representations.
Those who press this reply must allow that some representation in the first
place represents the configuration of features. There must be one represent-
ing the premise (or quasi-premise) and another representing the conclusion
(or quasi-conclusion) of the inference (or quasi-inference). To insist that
one and the same representation represents both premise and conclusion is
an ad hoc move.
Let us take a quick look at a case that cognitive scientists count as
inferential: our perception of parallel lines converging in the distance. It
is thought that we see them as parallel because our perceptual processing
embodies the assumption that lines that converge toward the horizon are
parallel and receding. The idea is that we infer, or sort of pseudo-infer, from
the presence of converging lines in the 2D sketch that parallel and receding
lines are present in the scene, and so our 3D sketch of it represents them
as parallel and receding (Marr 1982). Notice that when cognitive scientists
make this proposal they posit two stages of representation: the 2D sketch
of converging lines and their subsequent 3D interpretation as parallel and
receding. In the case of the toad, there is no evidence of a relevant second
step. Once the toad tokens a +T5(2), it appears to have done all it does by
way of ‘‘prey’’/‘‘predator’’/‘‘other’’ discrimination. From there, it moves on
to processes that govern orienting, approaching, more precise localization
of the prey, and so on.
In any case, the inferential response cannot save certain theories (e.g.
Millikan’s, at least on her construal of it), which do not allow that any
representation of the configuration of visual features occurs. If there is no
representation of the configuration of visual features there can be no infer-
ence from such a representation.
In relation to this, it is also worth noting that a general principle of
information-processing accounts of perception is that visible properties
must be represented before invisible ones are (see e.g. Palmer 1999:
85–92, 146–50). What is meant by a ''visible property'' here is not entirely
clear. However, the idea is that the perceptual system must begin with the
surface features of a perceived object that are presented to the perceptual
system. It must begin with the hide of the cow, not its inner nature, and
with the side of the cow facing the perceiver, not the parts hidden from
view. This principle (often regarded as the downfall of a Gibsonian theory
of perception)¹⁰ makes good sense. And any theory that entails that the
toad’s perceptual system only represents the stimulus as nutritious violates
this principle.
5. Nor does it help to claim that toads need to know about toad food.
Millikan says that mice need to know about hawks, not merely about the
properties by which they recognize hawks (Millikan 2000, app.). But this
is just plain wrong. The mouse does not need to know about hawks as such.
As long as it escapes, the capacity to recognize hawk-like properties suf-
fices (satisfices). Nor do toads need to know about toad food as such. What
matters is whether they eat the food, not whether they represent it as such.¹¹
¹⁰ I ignore Gibson’s theory of perception in this chapter because it’s not mainstream
cognitive science.
¹¹ This recalls Fodor’s point: ‘All that’s required for frog snaps to be functional is that
they normally succeed in getting the flies into the frogs; and, so long as the little black
dots in the frog’s Normal environment are flies, the snaps do this equally well on either
account of their intentional objects. The mathematics of survival comes out precisely the
same either way’ (Fodor 1990b: 106). As Fodor goes on to say, this is the kind of thing
that makes philosophers feel that content makes no difference, but we have seen that it
makes an explanatory difference.
1. INTRODUCTION
¹ There are, of course, initiatives in cognitive science that are not representation-
alist—e.g. the dynamic systems approach advocated by van Gelder and Port (1995)
and others. If non-representationalist approaches ultimately carry the day, then disputes
about how mental representation should be understood in cognitive theory will have
been idle. For the most part, we simply assume in what follows that some form of repres-
entationalism is correct. But, now and again, we phrase matters more methodologically,
as points about what representationalist explanations of cognition need to assume rather
than as points about what cognitive processes actually require.
2. TELEOSEMANTICS
3. UNEXPLOITED CONTENT
the basic idea. Imagine someone who learns to use road maps to find a route
from point A to point B. A study of the map might lead to the following
plan: make a left at the third intersection, then another left at the next cross
street, followed by an immediate right. It never occurs to this person to use
the map to extract distance information until, one day, someone suggests that
the map shows a shorter route than the one generated. Prior to this insight,
our imaginary subject uses the map in a way that would be insensitive to vari-
ous geometrical distortions, such as shrinking the north–south axis relative
to the east–west axis. If assignments of representational content are limited
by the abilities its user actually has to exploit the map, we will have to say that
there is no distance information there to be exploited until after the user has
learned to exploit it. And this will evidently make it impossible to explain
how the user could learn to effectively compare routes for length: you can-
not learn to exploit content that isn’t there. Indeed, it is evident that this
story makes no sense at all unless we concede that relative distances are rep-
resented before the user learns to exploit that information. Even if the user
never exploits relative-distance information, we are forced to allow that it is
there to be exploited, since, under the right conditions, the user could have
learned to use maps to compare distances. This would not be possible if the
map did not represent relative distances.
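As a minimal sketch of the map example (an illustration under simplifying assumptions, not anything from the original; the intersections and road network below are made up): a map can be stored as coordinates plus connections, the route-following consumer exploits only the connections, and the distance information carried by the very same representation goes unexploited until a length-comparing consumer uses it.

import math

coords = {"A": (0, 0), "B": (3, 0), "C": (3, 4), "D": (0, 2)}   # made-up intersections
roads = {("A", "B"), ("B", "C"), ("A", "D"), ("D", "C")}        # made-up connections

def turn_by_turn(route):
    # Exploits only the connection structure: which intersection follows which.
    assert all((a, b) in roads or (b, a) in roads for a, b in zip(route, route[1:]))
    return " -> ".join(route)

def route_length(route):
    # Exploits the geometry that was represented all along, unexploited by turn_by_turn.
    return sum(math.dist(coords[a], coords[b]) for a, b in zip(route, route[1:]))

print(turn_by_turn(["A", "B", "C"]))     # A -> B -> C
print(route_length(["A", "B", "C"]))     # 7.0
print(route_length(["A", "D", "C"]))     # about 5.6: the shorter route the map showed all along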
How seriously should we take this sort of example? We think the les-
son is far-reaching and fundamental. To begin with, the idea that a brain
can learn to exploit previously unexploited structure in its representations
is presupposed by all neural network models of learning. Such learning typ-
ically consists in adjusting synaptic weights so as to respond properly to
input activation patterns. This whole process makes no sense unless it is
assumed that input patterns represent proximal stimuli prior to learning,
and that the representational content of input patterns remains the same
throughout learning. Prior to learning, the network cannot properly exploit
input representations: that is precisely what the process of weight adjust-
ment achieves over time.²
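As a minimal sketch of what such weight adjustment looks like (an illustration, not a claim about any particular network model; the task and input codes are invented), note that the input patterns mean the same thing before and after training, and that early responses count as errors only because they do.

import numpy as np

def train(inputs, targets, epochs=50, lr=0.1):
    rng = np.random.default_rng(1)
    w = rng.normal(scale=0.1, size=inputs.shape[1])
    for _ in range(epochs):
        for x, t in zip(inputs, targets):
            y = 1.0 if x @ w > 0 else 0.0   # the network's current (initially poor) response
            w += lr * (t - y) * x           # perceptron rule: adjust weights toward the target
    return w

# The input codes are fixed throughout; only the ability to exploit them changes.
X = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 1.0], [1.0, 1.0, 1.0], [0.0, 0.0, 1.0]])
T = np.array([1.0, 0.0, 1.0, 0.0])          # respond iff the first feature is present
w = train(X, T)
print([1.0 if x @ w > 0 else 0.0 for x in X])   # matches T after training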
Having come this far, we can see that the problem of learning to exploit
‘lower-level’ (‘upstream’) representations must be ubiquitous in the brain, if
we assume that the brain acquires new knowledge and abilities via synaptic
weight adjustment. In perceptual learning, for example, proximal stimuli
must be represented before the appropriate cortical structures learn or
evolve to exploit those representations in target location and recognition.
² We have heard it said that the network creates the content of its input patterns as
learning progresses. But if we say this, we have no reason to say that early responses are
errors. And if early responses are not errors, why change the weights in any particular
direction? Indeed, why change them at all?
them. In general, content only becomes adaptive, hence a candidate for the
kind of content-fixing selection contemplated in teleosemantics, when and
if the ability to exploit it is acquired. Evidently, there can be no selection
for an ability to exploit content that isn’t there. The ‘opportunity’ to evolve
the ability to exploit texture gradients in visual representations as depth cues
simply cannot arise unless and until depth-representing texture gradients
become available to exploit.³
Reflection on this last point suggests that the same difficulty arises
whether the ability to exploit some feature of a representation is learned
or evolved. For concreteness, assume that the ability to exploit texture
gradients as depth cues is learned. While the current state of neuroscience
provides no definitive account of such learning, it is perfectly intelligible to
suppose it involves the systematic adjustment of synaptic weights in some
substructure of the visual cortex. Evidently, if the ability to learn to exploit
texture gradients itself evolved after texture gradients became available in
early visual representations, we have a situation exactly like the one just
rehearsed: teleosemantics assigns no content to unexploited features of
representations, and this undermines the obvious explanation of how the
ability to learn to exploit such features might later become adaptive.
To sum up: once our attention is directed to the phenomenon of un-
exploited content, it is natural to ask how the ability to exploit previously
unexploited content might be acquired. Learning in the individual, and
evolution in the species, are the obvious answers. Equally obvious, how-
ever, is that teleosemantics cannot allow for evolving the ability to exploit
previously unexploited content: that requires content to pre-date selection,
and teleosemantics requires selection to pre-date content.
It seems likely that the very possibility of unexploited content has been
overlooked in philosophical theories of content because of a failure to dis-
tinguish representation from indication. In this section, we digress a bit to
explain how we understand this distinction, and conclude by suggesting
how exclusive attention to indication tends to make the phenomenon of
unexploited content difficult to discern.⁴
³ The underlying general point here, that selection for a given capacity requires that
the capacity already exist in some part of the population, is not new. See e.g. Macdonald
(1989).
⁴ This section draws heavily from Cummins and Poirier (2004).
5.1. Terminology
Some authors (e.g. Schiffer 1987) use ‘‘mental representation’’ to mean any
mental state or process that has a semantic content. On this usage, a belief
that the Normans invaded England in 1066 counts as a mental representa-
tion, as does the desire to be rich. This is not how we use the term. As we
use the term, a mental representation is an element in a scheme of semantic-
ally individuated types whose tokens are manipulated—structurally trans-
formed—by (perhaps computational) mental processes. Such a scheme
might be language-like, as the Language of Thought hypothesis asserts
(Fodor 1975), or it might consist of (activation) vectors in a multidimen-
sional vector space as connectionists suppose (e.g. Churchland 1995). Or
it might be something quite different: a system of holograms, or images,
for example.⁵ An indicator, on the other hand, simply produces structurally
arbitrary outputs that signal the presence or magnitude of some property in
its ‘receptive field’.
5.2. Indication
We begin with some influential examples.
• Thermostats typically contain a bimetallic element whose shape indicates
the ambient temperature.
• Edge detector cells were discovered by David Hubel and Torsten Wiesel
(1962). They write: ‘The most effective stimulus configurations, dictated
by the spatial arrangements of excitatory and inhibitory regions, were
long narrow rectangles of light (slits), straight-line borders between areas
of different brightness (edges), and dark rectangular bars against a light
background.’
• ‘Idiot lights’ in your car come on when, for example, the fuel level is low,
or the oil pressure is low, or the engine coolant is too hot.
‘‘Indication’’ is just a semantic-sounding word for detection. Since we
need a way to mark the distinction between the mechanism that does the
detection, and the state or process that is the signal that the target has been
detected, we will say that the cells studied by Hubel and Wiesel are indic-
ators, and that the pattern of electrical spikes they emit when they fire are
⁵ It is possible that the brain employs several such schemes. See Cummins (1996b)
and Cummins et al. (2001) for further discussion of this possibility.
⁶ The theory is generally credited to Denis Stampe (1977). Its most prominent
advocates are Fodor (1987) and Dretske (1981).
⁷ Representation, on the view advocated by Cummins (1996a), is grounded in
isomorphism. Since isomorphism is plainly transitive, it might seem that representation
must be transitive too. In a sense, this is right: the things that stand in the isomorphism
relation are structures—sets of ‘objects’ and relations on them. If S1 is isomorphic to S2,
and S2 is isomorphic to S3, then S1 is isomorphic to S3. An actual physical representation,
however, is not an abstract object; it has a structure—actually, several—but it isn’t itself
a structure. The connected graph structure of a paper road map is isomorphic to the
street and intersection structure of a town, but not to the town’s topology. The town’s
topology is isomorphic to the topology of a citrus grove. But no structure of the road
map need be isomorphic to any structure of the grove.
⁸ It is what Haugeland would call a recording of the picture. See Haugeland (1990).
5.4. Discussion
If indication is your paradigm of mental content, as it is bound to be if
you hold some form of causal theory, you are going to focus on what
fixes the content of an indicator signal.¹⁰ Whatever fixes the content of an
indicator signal, it is not its structural properties. In this context, therefore,
motivation is lacking for thinking about which aspects of a representation’s
structure can usefully be processed, and whether the ability to do that processing is learned or evolved or a combination of both. Maps rub your nose in the possibility of unexploited content; idiot lights do not.
⁹ We do not mean to imply here that the shape of a spike train is never significant. The point is rather that two indicators can have the same spike train, yet indicate different things.
¹⁰ See Cummins (1997) for more on the marriage between causal theories, indication, and the Language of Thought.
There can, however, be unexploited indicator signals. Think of the color-
coded idiot lights at intersections: you have to learn that red means stop,
green means go. Before learning, this is also unexploited content (though
not what we have been calling representational content), and, unsurpris-
ingly, it makes trouble for teleosemantics. Teleosemantics implies that an
indicator signal has no content until there has been selection for the indic-
ator that generates it. But the ability to exploit, or to learn to exploit, an
indicator signal can only evolve if the indicator is already there signaling its
target.
Magnetosomes are magnetically polarized structures (typically ferrite
surrounded by a membrane) in single-cell ocean-dwelling anaerobic
bacteria. The orientation of these structures correlates with the direction
of the earth’s magnetic field. By following the magnetic orientation in a
particular direction, organisms far from the equator can avoid aerobic water
near the surface. For this to work, magnetosomes must be chained and
attached at both ends of the cell to form a reasonably straight line in the
direction of locomotion (see Figure 9.2). This is because the orientation
of the organism is simply a consequence of the orientation of the chain
of polarized molecules. The whole body of the bacterium is a floating
compass needle. The organism swims, and will move in whatever direction
it happens to point.
Figure 9.2. Magnetotactic bacterium from the Chiemsee, Bavaria, Germany (Biomagnetism Group, University of Munich). Dark blobs are sulfur granules
Chaining, of course, is simply a physical consequence of having a lot of little magnets suspended in proximity. They will stick together north to south. What is not so obvious is why the north pole of the string winds up
attached at the ‘front’—i.e. direction of locomotion—end of the organ-
ism. However this happens, it is certainly possible, indeed probable, that
the attachment process evolved after magnetosomes themselves appeared
within the cell body of anaerobic bacteria. Selectionist theories imply that
magnetosome chains did not indicate the direction of anaerobic water until
after it became adaptive to do so, i.e. only after the evolution of attach-
ment. But surely it is in part because they did indicate the direction of
anaerobic water that the attachment process was adaptive enough to be
selected for.
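The dialectical point can be put schematically. In the following toy sketch (all names and numbers are invented, and it forms no part of the authors' argument), the chain indicates the field direction as a matter of physics, while only the later attachment allows the organism's movement to exploit that indication.

import math
import random

FIELD_DIRECTION = math.radians(200.0)  # invented geomagnetic heading for the example

def chain_orientation(field_direction: float) -> float:
    # The chain aligns with the field as a matter of physics: it indicates the
    # field direction whether or not anything yet exploits that fact.
    return field_direction

def swim_direction(field_direction: float, attached: bool) -> float:
    # Only when the chain is attached along the axis of locomotion does the whole
    # organism behave as a compass needle, so that its movement exploits the indication.
    if attached:
        return chain_orientation(field_direction)
    return random.uniform(0.0, 2.0 * math.pi)  # heading unrelated to the field

print(math.degrees(swim_direction(FIELD_DIRECTION, attached=False)))  # some arbitrary heading
print(math.degrees(swim_direction(FIELD_DIRECTION, attached=True)))   # approximately 200: follows the field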
6. CONCLUSION
¹¹ See Cummins and Poirier (2004) for a discussion of how indicators might become
‘source-free’ and function as terms.
REFERENCES
Allen, C., Bekoff, M., and Lauder, G. (eds.) (1998), Nature’s Purposes (Cambridge, Mass.: MIT Press).
Ariew, A., Cummins, R., and Perlman, M. (eds.) (2002), Functions: New Essays in the Philosophy of Psychology and Biology (Oxford: Oxford University Press).
Buller, D. (ed.) (1999), Function, Selection, and Design (New York: SUNY Press).
Churchland, P. (1995), The Engine of Reason, the Seat of the Soul (Cambridge, Mass.: MIT Press).
1. INTRODUCTION
Harry was visiting his friend Mel. He had wandered into the kitchen to
make a cup of tea, when he heard a sound behind him. He swung round
to see Mel’s cat Fluffy hissing at him from the floor. Seeing Fluffy’s aggres-
sive stance, Harry panicked: he kicked out at the cat and bolted from the
kitchen.
Later, Harry described to Mel what happened: ‘I evaluated the situation’,
he said, ‘and judged that Fluffy presented a danger.’ Harry, it seems, was
not being entirely honest: there is a difference between the dispassionate
evaluation that he described to Mel and the emotional response that he
actually produced. But what could the difference be? The problem is par-
ticularly puzzling if we accept that emotions are not simply bodily feelings
or urges, but involve appraisals of some kind. How can we distinguish emo-
tional appraisals from dispassionate evaluations?
There are a number of ways in which the distinctive character of an
emotional appraisal might be explained. One strategy is to suggest that
emotional appraisals are psychological states of a distinctive type. They are
not judgements, thoughts, desires, or perceptual states; nor are they some
combination of these. Rather, they are produced by a separate psychological
mechanism, and have a special role to play in our psychology (Griffiths
1990). Again, it might be suggested that emotional appraisals have a special
kind of content.¹ For example, it is sometimes suggested that the content of
emotional appraisals is related in a distinctive way to the subject’s interests
or concerns (Averill 1980: 310; Nussbaum 2001: 52 n.).
¹ I use the term ‘content’ to refer to the truth (or correctness) conditions or the
satisfaction conditions of intentional states.
These strategies might be seen as alternatives; but they are not mutually
exclusive. It may be that a complete explanation of the distinctive character
of emotional appraisals will refer to their origin, their function, and their
content. Indeed, it is reasonable to expect these considerations to be linked.
In what follows, I would like to explore these links. In particular, I would
like to explore the idea that the content of emotional appraisals is depend-
ent on their function. I shall concentrate on a single case study: Harry’s
appraisal of the danger presented by Fluffy. I shall refer to this appraisal
as AP.
My discussion will assume a certain theoretical background: that is, a
teleosemantic theory of intentional content of the kind suggested by Ruth
Millikan (1984, 1989). A theory of this kind begins from the claim that the
content of an intentional state is determined by its biological function; or,
more precisely, by the biological function of the mechanism that produced
it, together with the way in which that mechanism normally works. Mil-
likan’s version of the theory is marked by two distinctive features. First,
on her approach, the notions of a biological function and of normality
are understood historically.² Secondly, on her account, we cannot determ-
ine the content of an intentional state by considering only the conditions
under which it would normally be produced: we must also consider the
way in which a state of that type would normally benefit the subject (Mil-
likan 1984: 100; 1989: 286; Price 2001: 82). I shall follow Karen Neander
in referring to a theory of this type as a High Church teleosemantic theory
(HCT) (Neander 1995).
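As a rough gloss, and only as my own illustrative rendering rather than Millikan's or Price's formulation, the two ingredients of such a theory might be sketched as follows.

from dataclasses import dataclass

@dataclass
class StateType:
    name: str
    normally_produced_when: str   # how the producing mechanism normally works
    normally_benefits_when: str   # the condition under which tokens normally benefit the subject

def hct_content(state: StateType) -> str:
    # On the gloss sketched here, production conditions alone do not fix content;
    # the normal benefit condition is needed as well.
    return (f"'{state.name}' represents that {state.normally_benefits_when} "
            f"(normally produced when {state.normally_produced_when})")

signal = StateType(
    name="danger signal",
    normally_produced_when="a fast-approaching shape crosses the receptive field",
    normally_benefits_when="there is a predator to be avoided",
)
print(hct_content(signal))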
Elsewhere, I have attempted to apply a version of HCT to the content of
relatively simple intentional occurrences, such as sensory signals; and I have
attempted to extend that account to increasingly sophisticated intentional
devices, including perceptual states with singular content, spatial maps,
judgements, and desires (Price 2001). In this chapter, I shall rely on some of
the claims and distinctions that I introduced in my earlier discussion. I shall
introduce these bits of baggage, as briefly as I can, as I go along.
In order to present a teleosemantic account of the content of emotional
appraisals, it will be necessary to say something about the function that
these states play in our psychology. This is, of course, a highly controver-
sial issue; it is also an empirical issue, on which philosophers are entitled to
do no more than speculate. In the next two sections, I shall offer a spec-
ulative account of the function of certain emotional appraisals, drawing
I shall begin with some brief remarks aimed at clarifying what I mean by
an emotional appraisal. I take it that the occurrence of an emotion is a
complex event, involving a structured pattern of physiological and psycho-
logical changes, triggered by an intentional state of some kind.³ I use the
term ‘emotional appraisal’ to refer to the intentional state that triggers these
changes.⁴
In this chapter, I shall assume that the changes involved in a particular
occurrence of emotion are triggered by a single appraisal. This assumption
may well turn out to be false. It has been suggested that an occurrence of
emotion involves at least two processes: a speedy, automated process and
a slower process that involves higher-level cognitive capacities (Oatley and
Johnson-Laird 1987; LeDoux 1998; Toates 2002). If so, it is possible that
these processes are initiated by two or more distinct appraisals, each per-
forming a different set of functions. If this turns out to be correct, it will be
necessary to adjust my account in order to accommodate this point.
³ For two different views of what might be included within an occurrence of emotion,
see Ekman (1980: 80–95); Goldie (2000: 12–16).
⁴ For present purposes, I shall sidestep questions about how these states are produced,
leaving it open whether they are generated by a separate psychological mechanism or
whether they are triggered by beliefs or desires. If HCT is correct, a complete account of
the content of emotional appraisals will need to take account of the way in which they
are produced; but I do not have space to explore this issue properly here.
may acquire during the course of our lives (Millikan 1986: 72; Price 2001:
191–211).
General-purpose systems benefit from a capacity to deploy information
and practical skills in a highly flexible way. There is no telling in advance
how any of the intentional states produced by the system will be used in
pursuing the subject’s interests. On the other hand, systems of this kind
face some significant challenges. First, because a general-purpose system
serves a wide range of different interests, it needs to have some means
of determining which interest to pursue in any given situation. Secondly,
general-purpose systems have access to a broad range of information, only a
small subset of which is relevant in any given situation. The system must
have some way to ensure that the right information is brought to bear
on the problem. A similar problem arises with respect to the behavioural
responses that the system is able to deploy: the system must ensure that the
responses that it considers are likely to be of some use. These problems are
particularly pressing in situations that need to be dealt with quickly.
General-purpose systems contrast with special-purpose systems. In par-
ticular, they contrast with intentional systems that are interest-dependent:
these are systems that serve a specific interest or set of interests—for ex-
ample, predator-avoidance or nest-building. Interest-dependent systems
range from simple recognition–response mechanisms to systems with relat-
ively sophisticated inferential capacities. The information and behavioural
responses available to an interest-dependent system are used to promote
only the interest or set of interests that the system serves. As a result, organ-
isms that possess only interest-dependent intentional systems will be able to
make only limited use of the information and skills that they acquire.
Interest-dependent systems avoid some of the problems that confront
general-purpose intentional systems. In particular, a system that serves only
a single interest will not need to decide which interest to serve in any
given situation. Moreover, such a system will not face the same degree
of difficulty in focusing on relevant information and effective behavioural
responses: a system of this kind will normally have access only to informa-
tion that is likely to help it to execute the task that it functions to execute;
moreover, the behavioural responses that it is able to deploy will normally
be limited to responses that have, in the past, proved effective in enabling it
to execute this task.
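The contrast can be put schematically; the following sketch is purely illustrative, with all names invented, and is not drawn from Price's text.

def interest_dependent_system(stimulus: str) -> str:
    # Serves a single interest (say, predator-avoidance): no problem arises of
    # choosing which interest to pursue, and only task-relevant responses are available.
    return "flee" if stimulus == "looming shadow" else "do nothing"

def general_purpose_system(situation: dict, interests: list, knowledge: dict) -> str:
    # Must (i) settle which interest to pursue, (ii) filter the information that
    # bears on it, and (iii) choose from an open-ended repertoire of responses.
    interest = max(interests, key=lambda i: situation.get(i, 0.0))                 # (i)
    relevant = {k: v for k, v in knowledge.items() if interest in v["bears_on"]}   # (ii)
    options = [v["suggests"] for v in relevant.values()]                           # (iii)
    return options[0] if options else "deliberate further"

print(interest_dependent_system("looming shadow"))
print(general_purpose_system(
    situation={"safety": 0.9, "thirst": 0.2},
    interests=["safety", "thirst"],
    knowledge={"exit nearby": {"bears_on": {"safety"}, "suggests": "head for the exit"}},
))

The point of the sketch is only that the selection problems in (i) and (ii) simply do not arise for the interest-dependent system.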
The distinction between interest-dependent and general-purpose
systems bears some similarities to Fodor’s distinction between modular
and non-modular systems. Fodor characterizes modules as systems that
are both domain-specific and informationally encapsulated: that is, they
have a specific informational or executive task to perform and are able
to draw on information from only a limited set of sources (Fodor 1983:
With this distinction at the ready, we can now turn our attention to the
first question that we need to address: what is the biological function of an
emotional appraisal?
It is not axiomatic that emotional appraisals have a biological function
at all. For this to be the case, one of two possibilities must be realised. One
possibility is that we have inherited a set of emotional capacities that once
helped our ancestors to survive and to reproduce, and which thereby help to
explain our presence today. If so, our emotional capacities will function to
help us in just the way in which they helped our ancestors.
Secondly, it is possible that our emotional capacities are not inherited,
but develop during the course of our lives, in a way that depends on our
experiences as individuals and as members of a certain social group. It
might be thought that this scenario is incompatible with the claim that our
emotional capacities have a biological function. But the two claims can be
made compatible if we assume that the process of learning is controlled by
mechanisms that themselves have a biological function. If such mechanisms
exist, their function will be to generate emotional capacities suited to our
physical and social environment. On this scenario, our emotional capacities
could be ascribed functions deriving from the function of these mechan-
isms, together with the way in which they normally work (Millikan 1984:
46–7; Price 2001: 124–9). In what follows, I shall assume that one of these
possibilities is realised. I shall not try to adjudicate between them.⁵
I shall begin from the suggestion that the function of emotions is
to enable us to deal with what Paul Ekman calls fundamental life tasks
(Ekman 1992: 171). These include hazards, such as an encounter with a
large predator; and opportunities, such as an encounter with a potential
mate. These situations are crucial for the subject, in the sense that an inappropriate response can be highly damaging. In addition, they are often emergencies, to which it is necessary to respond very quickly. According to the life task hypothesis, the function of emotional appraisals is to enable us to deal effectively with situations of these kinds.⁶
⁵ For some different views, see Ekman (1980, 1992); Griffiths (1997); Averill (1980); Goldie (2000: 84–122).
How do emotional appraisals help us to deal with these situations?
Emotional appraisals are produced quickly, enabling the subject to identify
the situation without delay. They trigger physiological changes, preparing
the subject for action; and they prompt expressive behaviour, signalling
to others how the subject is likely to react. In addition, they generate
behaviour that is designed to resolve the situation: for example, fleeing from
the threat; avenging the insult; celebrating the goal. Unlike some of the
expressive behaviours triggered by emotional appraisals, these behaviours
are not plausibly regarded as stereotypical responses: they are generated by
practical inference. This implies that emotional appraisals are capable of
influencing practical decision-making in some way.
How do emotional appraisals influence decision-making? First,
emotional appraisals are plausibly regarded as sources of motivation.
Moreover, emotional motivations are urgent. In other words, they do not
compete on an equal footing with the subject’s other goals. Gripped by
fear, Harry would not normally balance his desire to avoid being injured by
Fluffy against his desire to make himself a cup of tea: his panicky appraisal
prompts him to treat the goal of protecting himself from Fluffy as his only
current concern.
Secondly, emotions seem to influence the way in which we think. We
saw earlier that a subject who is capable of general-purpose reasoning faces
a fundamental difficulty: they must ensure that, in deciding how to act in a
particular situation, they focus primarily on considerations that are relevant
to the situation. Ronald de Sousa suggests that emotions provide a solution
to this problem: emotions function to frame our reasoning, by focusing our
attention on information that is relevant to the problem at hand (de Sousa
1987: 190–6; see also Evans 2002). If this proposal is correct, then one
function of AP is to fix Harry’s attention on the threat posed by Fluffy and
on any aspect of the situation that might help him to escape or to ward off
the threat.
De Sousa’s proposal concerns the information that the subject considers
before acting. We saw earlier that a similar difficulty arises with respect to
the possible actions that the subject will consider. Oatley and Johnson-Laird suggest that a further function of an emotional appraisal is to focus the subject’s attention on a specific set of responses to the situation (Oatley and Johnson-Laird 1987: 37). This seems plausible. If we are told that Harry is overcome by fear, we would expect him not only to try to avoid being injured by Fluffy, but to do so in certain predictable ways—for example, by running away from her, by trying to fend her off, or by cowering in a corner. We would not expect him to react by talking calmly to her, even if this is known to be an effective way to deal with angry cats. To respond by talking gently to Fluffy would require Harry to master his fear, not to act in accordance with it. This lends support to the suggestion that AP works to narrow down the types of action that Harry is able to consider.
⁶ See also Griffiths (1990); Tooby and Cosmides (1990); Lazarus (1991); Johnson-Laird and Oatley (1992); LeDoux (1998). Theorists who have argued for the life task hypothesis have generally supposed that our emotional capacities are inherited; however, it would be possible to combine the life task hypothesis with the view that our emotional capacities are learned.
De Sousa sums up his proposal by suggesting that the function of emo-
tions is to cause the mechanisms that are responsible for practical reas-
oning to operate as if they were informationally encapsulated modules. I
would like to suggest instead that one function of an emotional apprais-
al is to cause our general-purpose reasoning system to act as if it were a
special-purpose, interest-dependent system. As we saw earlier, such a system
will focus exclusively on a single task; it will normally have access only to
information that is relevant to that task; and it will be able to generate a
limited set of actions, each of which has already proved effective in dealing
with that task.
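Put schematically, and again only as an illustrative sketch with invented details rather than anything proposed in the chapter, the idea is that the appraisal temporarily reconfigures general-purpose deliberation.

from dataclasses import dataclass, field

@dataclass
class Appraisal:
    goal: str                                              # the single interest treated as overriding
    relevant_tags: set = field(default_factory=set)        # what counts as relevant information
    action_repertoire: list = field(default_factory=list)  # the narrowed set of responses

def decide(appraisal: Appraisal, beliefs: dict) -> str:
    # Practical inference 'framed' by the appraisal: only tagged information is
    # consulted, and only actions in the narrowed repertoire are considered.
    framed = {k: v for k, v in beliefs.items() if v["tag"] in appraisal.relevant_tags}
    for action in appraisal.action_repertoire:
        if any(action in v["supports"] for v in framed.values()):
            return action
    return appraisal.action_repertoire[0]  # fall back to some in-repertoire response

panic = Appraisal(goal="avoid being injured by the cat",
                  relevant_tags={"threat", "escape"},
                  action_repertoire=["run away", "fend off", "cower"])
beliefs = {
    "the door is open":   {"tag": "escape",   "supports": {"run away"}},
    "the tea is brewing": {"tag": "domestic", "supports": set()},
}
print(decide(panic, beliefs))  # 'run away': the tea never enters the deliberation

The sketch is not a model of emotion; it merely displays the sense in which a general-purpose reasoner, so framed, behaves for a while like an interest-dependent one.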
All this suggests that an emotional appraisal has a very complex function.
We might describe it roughly as follows. In a situation in which the subject
is confronted by a certain type of crucial situation, an emotional appraisal
functions:
1. to prompt the subject immediately to find a way to resolve the situation,
without regard to other considerations;
2. to focus the subject’s attention on a narrow range of possible actions;
3. to focus the subject’s attention on information that will help him or her
to choose one of these possible actions and to perform it in an effective
way;
4. to trigger physiological changes that prepare the subject to carry out one
of those solutions;
5. to trigger expressive behaviour that signals the subject’s situation and
likely actions to other organisms.
How plausible is this account? Certainly it is not complete: it ignores,
for example, the longer-term effects of our emotional experiences on mood,
motivation, and memory. Moreover, this account does not fit all types of
emotional appraisal equally well. In particular, clause (2) does not apply to
Intentional states can be divided into different categories. First, there are
states that register that a certain situation obtains or that a certain type of
event has occurred. These states have descriptive content, identical with the
information that they normally carry. Factual judgements possess purely
descriptive content: they function to convey information, which can be
used to satisfy a range of different goals. I have argued elsewhere that the
states produced by simple signalling systems also have purely descriptive
content. The function of a simple signalling system is to trigger some ste-
reotypical response whenever a certain condition arises. The sensory sys-
tem that triggers the eye blink reflex is a system of this kind: this system
that he has about his situation. For example, he might take account of
information about the proximity of the threat, the availability of exits or
suitable hiding places, and so on. If so, AP will represent avoiding injury as a
goal to be achieved.
Moreover, I also suggested that some emotional appraisals have a fur-
ther function—to direct the subject’s attention to a specific set of possible
actions. I suggested that this was true of panicky appraisals like the one
produced by Harry. The options that occur to Harry might include run-
ning away, lashing out at the threat, or hiding from it. Again, these are
not stereotypical responses: they are performed in a way that takes account
of information about the situation: for example, information about the
location of the threat, the position of an exit, and so on. This implies
that AP will represent not only the overall goal of avoiding injury, but
also a set of sub-goals: running away, lashing out at the threat, hiding, or
whatever.
Finally, this directive content will include a temporal element: it will
represent the time at which the subject is to respond to the situation. Pre-
sumably, a panicky appraisal will normally prompt the subject to act imme-
diately, but this may not be true of all types of emotional appraisal: there
may be some kinds of emotional appraisal that do not normally prompt
the subject to act immediately, but only at some time in the future. We
have already seen, for example, that anxious appraisals do not motivate
an immediate response. This highlights the need to distinguish between
the claim that an emotional appraisal functions to prompt the subject to
act immediately and the claim that it functions to ensure that the subject
prioritises the problem. An appraisal that prompts the subject to respond
at some time in the future may nonetheless function to ensure that the
subject begins to search for an effective response straight away. But this
‘straight away’ does not need to be represented, because, as we have seen,
prioritising the problem is not something that the subject needs to decide
how to do.
If all this is correct, it suggests that the content of AP should be expressed
in something like the following way: that cat is very threatening; avoid being
injured by it—by running away from it now or by lashing out at it now or by
hiding from it now.
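Purely as an illustration of how the descriptive, directive, and temporal components hang together, and not as Price's own notation, the proposed content might be written out as follows.

ap_content = {
    # descriptive component: the condition that the appraisal registers
    "descriptive": "that cat is very threatening",
    # directive component: the overall goal and the narrowed sub-goals
    "directive": {
        "goal": "avoid being injured by it",
        "sub_goals": ["running away from it", "lashing out at it", "hiding from it"],
    },
    # temporal element: when the subject is to respond
    "time": "now",
}

# Rendering it as a single clause, roughly as in the text above:
means = " or by ".join(f"{g} {ap_content['time']}" for g in ap_content["directive"]["sub_goals"])
print(f"{ap_content['descriptive']}; {ap_content['directive']['goal']} -- by {means}.")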
What justifies this ascription of content to AP? If we accept HCT, this
ascription will be justified by the history of this type of appraisal. If the
capacity to produce panicky appraisals is inherited, the crucial point will
be that this capacity sometimes enabled Harry’s ancestors to avoid being
injured by prompting them to run away from the threat, or to lash out at
it, or to hide from it. If we assume that Harry’s capacity to produce pan-
icky appraisals is learned, then this ascription will depend on the claim that
this capacity has, in the past, enabled Harry (or perhaps others in his com-
munity) to avoid being injured by behaving in one of these ways. In either
case, the content of AP will reflect the successes of the past.
⁹ One possibility is that an evaluative judgement depends, not only on how things
are in the environment, but also on the subject’s desires or preferences. For example, it
might be suggested that an evaluative judgement about danger will normally carry the
information that the subject desires to avoid injury. (I find this suggestion attractive, but
I am not clear whether it is correct.) If it turned out that evaluations, but not emotional
appraisals, normally carry information about the subject’s desires, this would constitute
a difference in content between the two states—a contrast explained, not by a difference
in function, but by a difference in the way in which these states are normally produced.
¹⁰ Millikan tentatively endorses this suggestion (Millikan 1995a).
do. As a result, the judgement that Fluffy is dangerous presents the threat
posed by Fluffy as one consideration among others. Unlike an emotional
appraisal, it might well be overridden in favour of some other goal.
It is possible to make a connection between this point and claims made
by other writers concerning the distinction between evaluative judgements
and emotional appraisals. Both Michael Stocker and Peter Goldie have
suggested that the contrast between emotional appraisals and evaluative
judgements is not a matter of their content—at least, not in the sense in
which I am using the term here—but rather of the way in which their
content is entertained by or presented to the subject. Stocker suggests that
there is a distinctive sense in which the content of an emotional apprais-
al is ‘taken seriously’ by the subject (Stocker 1987). Goldie suggests that
the difference between an emotional appraisal and an evaluative judgement
is analogous to the difference between an indexical and a non-indexical
thought: in feeling fear, he suggests, ‘you are emotionally engaged with
the world, and typically you are poised for action in a new way’ (Goldie
2000: 61). Similarly, I have suggested that one difference between AP and
an evaluative judgement is that the content of AP is presented to Harry
as an immediate and overriding priority. As we have seen, however, this is
not a difference in content: because AP does not represent prioritising the
situation as a goal.
However, there are a number of ways in which the content of AP might
be thought to differ from the content of an evaluative judgement. In what
follows, I shall investigate some of these differences. I shall begin by consid-
ering the directive content of the two kinds of state.
Indeed, there is at least one reason to deny that this will be the case.
According to HCT, as we have seen, the content of AP will be determined
by the nature of situations in which the production of a panicky appraisal
has benefited Harry or Harry’s ancestors. This implies that these situations
had a feature in common: that is, they all involved a threat that could be
avoided or overcome in some way. More precisely, they all involved a threat
that could be avoided by running away, or by lashing out, or by hiding.
Where a situation involves a threat that cannot (in principle) be avoided in
one of these ways, the capacity to produce a panicky appraisal could not nor-
mally benefit the subject. This implies that AP will represent the presence,
not merely of a threat, but of a threat that can, in principle, be avoided in
one of these ways. If Harry is confronted by a threat of some other kind—
for example, if he hears that he is suffering from a dangerous illness—it
would not be appropriate for him to produce an appraisal of this kind.¹¹
At first glance, it might seem that a similar restriction will apply to
Harry’s evaluative judgements. For example, it might be thought that a
subject could not normally benefit from a capacity to evaluate a situation as
dangerous if that situation cannot be avoided. If this were correct, it would
imply that Harry’s evaluative concept ‘dangerous’ will apply only to dangers
that he can, in principle, avoid. Of course, even if this were correct, the descriptive content of this evaluative judgement would still be broader than the descriptive content of a panicky appraisal. This is because the
directive content of the evaluative judgement ‘That cat is dangerous’ does
not concern a specific set of possible actions, but only concerns some more
general goal, such as avoiding injury. As a result, there would be no need to
suppose that the evaluative concept ‘dangerous’ concerns only dangers that
can be escaped by running away, lashing out, or hiding. So the evaluative
judgement would apply to a wider range of cases than AP.
However, I have argued elsewhere that, in the case of subjects who are
capable of a certain rather sophisticated form of reasoning, the content
of judgements and desires is not subject to this kind of restriction (Price
2001: 241–50).¹² Subjects who are capable of this kind of reasoning are
able to recognise that certain goals are (in principle) beyond their reach,
and to understand this as resulting from a combination of their own lim-
ited capacities and of some independent feature of the situation—the dis-
tance of a place, say, or the complexity of a problem. In what follows,
I shall refer to this form of reasoning as R-reasoning. A subject who is
capable of R-reasoning might sometimes save themselves time and trouble
¹³ The fact that a frightened person is more likely to judge that they are in a dangerous
situation is not in itself evidence for the existence of this inferential connection. This
phenomenon might arise from the fact that the feeling of fear focuses the subject’s
attention on the frightening aspects of the situation, and so on the evidence that supports
that evaluation.
also Maclaurin and Dyke 2002). If someone has suffered a loss, we would
expect them to feel sad; but, in most cases, we would expect their sorrow
to diminish after a time. Again, if someone has been treated very unjustly,
we would expect them to feel angry; but we would expect their anger to
fade over time. Something similar applies to fear: we would expect Harry’s
panic to fade away as soon as he has escaped from Fluffy. Indeed, in many
cases, we would think that there is something inappropriate or irrational
about an emotion that does not diminish in intensity over time. In contrast,
if someone judges that they have suffered a flagrant injustice or survived a
very dangerous encounter, we would not expect their evaluation of what has
happened to change as time passes: they should not judge that the injustice
was any less flagrant or the encounter any less dangerous because it occurred
a long time ago.
A consideration of the function of these two kinds of state can help
us to make sense of this distinction.¹⁴ I have suggested that the function
of a panicky appraisal is to mobilise physiological, cognitive, and behavi-
oural resources to deal with a threat that currently confronts the subject.
The ‘currently’ is important: Harry would not normally benefit from a
propensity to produce a panicky appraisal in response to the information
that some threat had occurred in the past, or that some threat will occur at
some time in the future. A panicky appraisal, then, will represent a threat
as present or as imminent; but not as past, or in the future. Of course,
Harry may remember a past encounter with Fluffy, and feel afraid. But his
appraisal will represent the threat as present or imminent, not as past or in
the more distant future.
The range of temporal content that an emotional appraisal can convey
will depend on the nature of the emotion. Panicky actions are likely to be
useful only in the midst of the crisis. In contrast, angry actions might help
the subject to deal with offences that are just about to happen, that are
happening, or that have happened in the recent past. This implies that an
angry appraisal might represent an offence as imminent, present, or in the
recent past—but not as belonging to the remote past or the distant future.
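The suggested contrast can be summarised schematically; the encoding below is invented for illustration and forms no part of the chapter's argument.

TEMPORAL_RANGE = {
    "panicky appraisal": {"imminent", "present"},
    "angry appraisal": {"imminent", "present", "recent past"},
    "evaluative judgement": {"remote past", "recent past", "present",
                             "imminent", "distant future"},
}

def can_represent(state_type: str, temporal_location: str) -> bool:
    """True if, on the sketch above, a state of this type can locate its object at that time."""
    return temporal_location in TEMPORAL_RANGE[state_type]

print(can_represent("panicky appraisal", "recent past"))     # False
print(can_represent("angry appraisal", "recent past"))       # True
print(can_represent("evaluative judgement", "remote past"))  # True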
In contrast, there are no such restrictions on the temporal content of an
evaluative judgement. For example, if Harry can judge that Fluffy is dan-
gerous as he eyes her across the kitchen, he can also judge that Fluffy was
dangerous when he remembers the event the next day. It is easy to see how
the capacity to produce this past-tense evaluation might be of use to Harry
in drawing further inferences—for example, the inference that he ought to
avoid visiting people who live with cats. Again, Harry can also judge that
Fluffy will still be dangerous two years from now: this is because he can
¹⁴ Maclaurin and Dyke offer a similar explanation (Maclaurin and Dyke 2002).
12. CONCLUSION
REFERENCES
¹⁵ I have read early drafts of this chapter at seminars at York University and at the
Open University. I would like to thank the participants for their thoughtful comments.
The chapter has also benefited from conversations with Philip Percival, Kevin Sludds,
and Fred Toates. I am grateful for their help.