Chapter 6: Jay Lemke
Background
Let us start with your way into academic life and social semiotics.
I was a student at the University of Chicago first in mathematics and then in physics. I had
had an interest even from a very young age in astronomy and cosmology, and that led to an
interest in the theory of relativity and quantum mechanics. I knew a lot about those things
when I went to university, so in fact I finished my first degree in three years, and was already
taking postgraduate courses in my third year as an undergraduate. I had therefore a good head
start at a young age in physics, and I completed my PhD at the University of Chicago when I
was about 24 or 25 years old, studying proton-antiproton annihilation. I got interested in the
teaching of science, partly because I did not think science was very well taught, even at an
excellent university like the one I was at. Instead of going on in physics, where I had an
offer of postdoctoral study, I accepted an offer to go to the City University of New York as
an assistant professor in both physics and the field of science education. Because of some
retirements of the more senior people, I quickly became the lead person in the field of science
education at the university. I initiated a research program in science education, and I gradually
let go of my research program in theoretical physics. It was very difficult, however, because
as a physicist I had been trained to always do research in relationship to a well-developed
theory, but there was no well-developed theory in the teaching of science. At that time,
Piaget’s theory was dominant in education, and this theory might have been useful if I had been
studying young children. My area of work was with secondary school students and with their
teachers. The focus there is more on the content and the concepts of science, and there was no
good theory, and there still is no good theory, of how to do science with these kinds of
students. I decided to create a theory, and being a physicist I was ambitious. I thought that this
could not be all that difficult to do. I create theories in physics. I can certainly create theories
about the learning of science. Of course, it turned out to be much more difficult than I
thought.
I had decided from a fairly early time that an important issue was the communication
of scientific ideas in the classroom, and this communication was done mostly in the form of
oral language with some additions of diagrams and writing on the chalkboard. I proposed a
research project to the National Science Foundation in the US to make tape and video recordings
of science classrooms, from younger secondary school students to more advanced students and
even some university classes. I wanted to analyze how the meanings of the ideas and the
concepts were expressed by the teacher and understood by the students. To do this I needed a
theory of language that would enable me to analyze what was being said and to interpret what
was being meant conceptually. I talked to some friends of mine who were in anthropology
and linguistics, and they asked me if I had read Kenneth Lee Pike or Michael Halliday. I was
on vacation in London that summer, and on a shelf in a bookstore I saw Michael Halliday’s
book Language as Social Semiotic. I was very impressed with what I read in that book, and it
seemed to me that Halliday had exactly the approach to language that I needed in order to do
research on science classrooms. I arranged therefore to go to Sydney in 1979. I had a friend
who was a young professor in anthropology and linguistics there, and he introduced me to
Michael Halliday. Halliday invited me to give a paper while I was there, which I did. It was
called “Action, Context and Meaning”, which was essentially my approach to language as one
component of human action and human communication in a situational context and a context
of culture. Halliday and I got along well, and he invited me to come back the following year.
My proposal to the National Science Foundation had been approved and I had money to take
a year off from my teaching in New York, and to go both to England and Australia to consult
with various people about how to analyze the language of science teaching. That really
launched my academic career in the direction of studying linguistics and communication as
they applied to science, and this was the direction of my work for quite a while.
In the late 1990s, I became interested in some broader issues. My work on language
and science education had led me to see the importance of various social and cultural factors
in the learning of science that went outside the classroom. I also became interested in the
history of mathematical and scientific language, and how it had diverged from everyday
language. That was a subject that Michael Halliday was also interested in at that period of
time. I was also by then dissatisfied with looking only at language. It was clear that you also
had to look at other semiotic modalities, certainly diagrams, graphs, maps, charts and pictures.
What seemed to me most important was how they were all integrated. Since I had already
taken a semiotic approach to language, I wanted to take a semiotic approach to what we today
call multimodality, and to expand the application of systemic linguistics from the analysis of
discourse to the analysis of multimodal activity and multimodal texts. I was still a physicist
in my habitus, so I turned to the field of complex systems analysis in physics to see if there
were any ideas that might be helpful in this regard. I made a little progress, but not as much as
I wanted. I turned back again to the study of multimedia and, at that point, computer games.
This was around 2000-2002. I studied the nature of multimedia and multimodal interactions
in computer games and their applications to teaching and learning. The work on complex
systems analysis and computer games took place mostly when I was at the University of
Michigan, where I went after New York. Coming here to the University of California, San Diego,
I began to work with younger children, which was a very new experience; they were playing
computer games with undergraduates from the university as a sort of informal learning
experience. In studying this, something became very clear to me: you cannot understand the
process of learning without including the emotional component. My most recent work is
therefore oriented toward integrating the analysis of the cognitive or ideational dimensions of
learning with the affective or emotional and interpersonal dimensions.
How did you develop your contact with Halliday and other social semioticians?
My introduction to systemic functional linguistics and social semiotics came in those early
days in 1978-79. I was a visitor for extended periods at Halliday’s department in Sydney in
1981, 1982 and 1983, and back in 1985 and 1987, and about every second year thereafter through the
1990s. That was the time when Halliday was writing his Introduction to Functional Grammar
and using draft versions of it in his MA course in Applied Linguistics in Sydney. I sat in on
many of the classes in my summer holiday, which was the middle of the teaching year in
Australia. Halliday and I had many conversations, and I also had conversations with Jim
Martin, Ruqaiya Hasan and many other people. In that period, we began to talk with people in
anthropology, literature and visual arts about possible generalizations of the organizing
principles of language as social semiotics to other forms like visual forms, music and dance
forms. We organized what we called the Newtown Semiotic Circle. Newtown is one of the
neighborhoods of Sydney that is next door to the university. We had a very interdisciplinary
group there, and in fact we pretty much invented the term ‘social semiotics’ in that group.
This was really the origin of social semiotics. Michael Halliday and Ruqaiya Hasan
participated in that group, and so did Gunther Kress, Theo van Leeuwen, Jim Martin and I.
My friend Allan Ramsay was there, Terry Threadgold came from the literature department,
and anthropologists came as well.
It was not so much that social semiotics was an influence on me, as it was that I was
bringing to this beginning of social semiotics various perspectives that I had already been
exposed to. One of the criticisms was that many people who did systemic functional
research in those days were focusing too narrowly on the work of Halliday and only on the
grammar, and that attention had to be paid to what was going on in the intellectual world
around language and society, political analysis, critical theory and feminism. The Newtown
Semiotic Circle was engaged in broadening out the enterprise from linguistics to social
semiotics. Michael Halliday had already intended that from the beginning, and made it clear
in the book Language as Social Semiotic that the purpose of systemic functional linguistics
was to provide a tool for critical social analysis. He was all in favor of doing this. He reserved
to himself the toolmaker role, but there is no point in making a tool if nobody is going to
use it for something. He wanted to see it widely used in the analysis of ideology in the media
and feminist perspectives in literature, which Terry Threadgold was doing, and many other
such applications. What I was bringing came partly from my background in physics, which
was rather unique in the group. I also brought in my reading in cybernetics and information
theory and of Gregory Bateson’s work, especially his book Steps to an Ecology of Mind from
1972.
I am eclectic and believe in bricolage where you borrow ideas and tools and ways of
thinking from everyone that you can, and you put them together in your own way for your
own purposes. Around that time, I began to read about intertextuality from the French theorist
Michel Riffaterre (ref.?), and then Julia Kristeva and Mikhail Bakhtin. I became enormously
interested in Bakhtin, especially his ideas about heteroglossia. In the multimodal area, we
discussed the work of Roland Barthes, who, as far as I could tell, stole many of his ideas from
Hjelmslev without credit. Barthes’ work Elements of Semiology from 1964 reads almost like a
summary of Hjelmslev, especially the ideas about connotative semiotics. Barthes applied
these ideas, not only to literature, but also to some very early analysis of multimodality.
Foucault was another perspective that I brought into the discussions in the Newtown group
fairly early, especially his work on the archaeology of knowledge. I was also reading Pierre
Bourdieu and bringing some of his notions in as well, to help give a more solid sociological
foundation to social semiotics. Social semiotics in the beginning was very strong on the
semiotics, and had only good intentions about the ‘social’. There was not an equally
sophisticated social model to go with the sophisticated linguistic model, and you needed both
to bring them together as social semiotics.
The sign
Peirce is significant here. While Halliday’s semiotics came from Saussure and Hjelmslev,
Peirce had a completely separate and independent approach to semiotics, and in some ways,
one that I thought was better. Peirce, for whatever reasons, whether they were logical or just
temperamental, liked to have things come in threes rather than twos. So he did not build his
semiotics on the signifier and the signified or on a content plane and an expression plane, but
he always included a third element. In his scheme, the signifier is the ‘representamen’, the
thing that does the representing, and what we usually think of as the signified, he calls ‘the
object’. There is, however, a third element, ‘the interpretant’, which I think of as ‘a system
that does the interpreting’. In other words, no signifier, no sign in the sense of the material
expression, points by itself to the signified or the meaning it is supposed to stand for. You have to make
that connection! You have to do what Halliday calls ‘construe’, and of course you construe
according to systems of social convention. Semiotics is very good at describing those systems
of social convention. That is in some ways what the grammar is doing for you, but there has
to be a third element that actually does this construing. For Peirce, it is not the semiotic
relation that is fundamental; it is the process of semiosis. This fits in very well with my own
perspective, coming from physics and from a notion that if meaning-making takes place, it
takes place because material beings are engaged in material processes and are doing things
that make the meaning happen. I certainly understood from a very early point that the Peircian
perspective had an advantage over the Saussurian one.
Do you find the central Peircian notions icon, index and symbol useful?
Yes, I do. Like many other people, I have found these notions to be among the most
interesting contributions of Peirce, though you do have to remember that for Peirce icon,
index and symbol is only one of three or four different triads, which try to characterize the
relationship between signifier and signified, representamen and object, in terms of different
principles of interpretation and different kinds of relationships they can have to each other. In
practical terms, I think that symbol is the least useful concept because it is the one that is most
generic for linguistic signs and other kinds of arbitrary signs. The other two have,
however, proved especially fascinating. If you know Peirce’s theory, the most fundamental
concepts are firstness, secondness and thirdness. Peirce decided not to give names to them,
because glosses would be too misleading. Roughly speaking, firstness is a kind of similarity
of form, secondness is a relationship through causality and thirdness is a relationship through
convention. In the simplest way of applying this to signifier and signified, you have a
similarity of form, as you might have between the map of Norway and the country of Norway.
This is iconicity. Many believe that visual representation is more iconic than linguistic
representation usually is.
Indexicality is a very fundamental concept for the Jakobsonian functional linguistics
tradition. Michael Silverstein’s writings about indexicality have made it into a major tool in
the thinking of linguistic anthropologists (ref.?). They point out that a lot of the meaning that
we ascribe to signs or to acts and actions as signs, comes not simply from their denotation, but
from what in another way of thinking is connotation. I may be talking about icons, indexes and
symbols, but I am talking about it in English, and that tells you something about me. And I
am talking about it in American English, and that says something more about me. You may
even hear a certain throatiness in my voice because I have been talking a lot, and that also tells
you something about me. From your point of view as the interpreting system, there are many
layers of meaning in the words I say, which have some kind of physical or causal relationship
to me as the speaker. You can take that even further, not just to me as the speaker, but to the
culture and historical period from which I am speaking, Foucault’s episteme, the set of
conventions of what it is possible to say at this time in history about some things like icons,
indexes and symbols. Indexical meaning is therefore a very powerful tool in that way. I would
say that indexicality is the number 1 tool, and iconicity maybe has a tenth of the power of
indexicality as a tool. The symbol concept is taken so much for granted that it has almost no
power as a tool anymore.
Why do you think that so few of the social semioticians have pointed to Peirce?
I think there are two reasons perhaps, a push and a pull. The push is that Peirce is very hard to
read. He wrote thousands of pages, and they are not well organized. And there is more than
one Peirce, an early Peirce, a middle Peirce and a later Peirce, and they are almost like three
different writers. He had continuity in his thinking, but it is very hard to establish that
continuity when reading him. I do not care too much about whether I am misinterpreting Peirce
because I am not claiming to be a Peirce expert. I am claiming to be able to say useful things
with ideas that I think I got from Peirce. That is the push away from Peirce. The pull side is
that in social semiotics, there is a very strong sense of loyalty to the Hallidayian tradition and
its way of formulating ideas. Halliday did not rely very much on Peirce. He relied a lot more
on Hjelmslev who in turn relied on Saussure.
Yes. I read the Prolegomena to a Theory of Language (1953 [1943]) very early on Halliday’s
suggestion. I was very much influenced by it, especially by the connotative semiotics, and I
saw the relationship to Roland Barthes. Through Barthes’ way of applying connotative
semiotics to multimodal texts, I saw how one could apply the general Hjelmslevian tradition to
multimodal texts. In some ways, connotative semiotics has something in common with my
concept of metaredundancy, and with Peirce’s notion of infinite semiosis or chains of
signification where the first signifier points to some signified for an interpretive system, but
that in turn can point to another one, and that can again point to another one and so forth.
Peirce describes this as linear chains. In Hjelmslev and Barthes, there is a kind of meta-
relation hierarchy, so it is relations of relations of relations of relations. What points to the
next level connotatively is neither the signifier nor the signified of the first sign, but rather the
relationship between the two of them, or in Peircian terms the meaning-making action. This
meaning-making action then becomes the signifier for the next level. This felt very
comfortable to me, but I must admit, Hjelmslev’s book is also difficult to read. I have said to
my students: “When you have finished reading Hjelmslev, start over again in chapter 1 and
read all the way through a second time, because a lot of the things in the first half of the book
only make sense once you have read the second half of the book.”
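To make the structural point concrete, here is a minimal sketch, not from the interview, of how a connotative hierarchy stacks relations on relations: the whole first-order sign, not its signifier or signified alone, serves as the signifier of the next level. The rose example and the class names are only illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Sign:
    # A signifier is either a material form (a string here) or a whole
    # lower-order sign, as in the Hjelmslev/Barthes connotative model.
    signifier: Union[str, "Sign"]
    signified: str

# First-order (denotative) sign: the word 'rose' points to a flower.
denotation = Sign(signifier="the word 'rose'", signified="a flower")

# Connotative sign: the entire first-order relation, not just one of its poles,
# becomes the signifier for a further, culturally conventional meaning.
connotation = Sign(signifier=denotation, signified="romantic love")

# And again: relations of relations of relations.
meta = Sign(signifier=connotation, signified="the register of greeting cards")

def depth(sign: Sign) -> int:
    """Count how many levels of signification are stacked up."""
    return 1 if isinstance(sign.signifier, str) else 1 + depth(sign.signifier)

print(depth(meta))  # 3: denotation -> connotation -> meta-connotation
```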
Metaredundancy
Your concept of metaredundancy has turned out to be significant. What do you mean by
metaredundancy?
Metaredundancy is one of the more difficult notions in my own toolkit. The inspiration comes
from Gregory Bateson’s work and his notion of meta-learning (ref.?). In Bateson’s own
account of his work, the notion developed from watching dolphins in a pool. What Bateson
noticed was that at a certain point, the dolphins caught on to the fact that the trainers did not
just want them to learn certain tricks. They wanted them to learn lots of different tricks to
impress the audience. The dolphins being quite smart, instead of just doing the next trick,
invented a dozen new tricks and behaviors that had never been seen before. The dolphins had
in some sense learned how to learn. They had figured out what the learning task was about at
a higher level, a meta-level. Bateson elaborated this idea in many ways and related it to his
theories of communication and to the origins of schizophrenia. The notion of hierarchical
relationships, relations of relations of relations, was a big and powerful idea at the time. It was
the foundation of French structuralism, and it was important in so-called
second-order cybernetics. Such ideas were also very familiar to me because of my studies in
mathematics and physics. I had also read Bertrand Russell (ref.) and Alfred North Whitehead
(ref.) on the theory of logical types, which was also fundamental for Bateson’s work.
I tried to construct a systematic version of these ideas in order to basically account for
the role of context in meaning. When you study the signifier and the signified, you will see
that the same signifier does not always point to the same signified. The same word, the same
sentence, the same gesture does not always have the same meaning. So what determines
which meaning it has? We usually say the context determines it. The next question is what is
the context, and how do you know which context is relevant to use to determine the meaning
in each case. Logically, you would answer that the norms of your culture tell you which context
is the one in which this particular sign should be interpreted as having this particular meaning.
You begin to build up a meta-hierarchy. The relationship between the signifier and the
signified is itself, mathematically speaking, a contingent probability relationship. In other
words, given all the possible interpretations of the given signifier, the interpretations have
different probabilities of being the most useful or most shared interpretation, depending on the
context.
There are, however, multiple contexts, and for each of those contexts, there is a
different probability distribution of the interpretation of the sign. If you are in a different
culture, even in the same or the most similar context, there will be a different set of
probability distributions. People often misunderstand the term redundancy, taking it to mean
100% redundancy, but that is not what it means. It is a probability. It simply means that A and
B are redundant if given A you have a better than random chance of guessing B. It might not
be B; it might be C or D, but as long as it is not equally likely to be C or D, we can talk about
redundancy. If it is equally likely to be B or C or D, there is no redundancy there. Given the
context, then you know which probability distribution to use in interpreting the signs, but
equally, if you know the probability distributions for interpreting the signs, you can figure out
which context you are in. This goes both ways. Culture is the next level. If you know the
culture, then the rules of the culture tell you which probabilities are assigned to the
interpretation of the sign in which contexts. But you could also go the other way. If you know
which probability distributions hold in which contexts, then you know which culture you are
in.
Of course you have to make it much more precise. You cannot just set up one culture
against another culture. Cultures and contexts are multidimensional, but the principle of
metaredundancy is there. The first ones are redundant with each other, which makes the first-
order redundancy; the next level is a second-order redundancy, the next a third-order redundancy,
and so on. In principle there is no limit to how far you go, though I do not think human brains operate
with more than four or five of these levels of redundancy. As a general term for this, we have
the notion metaredundancy.
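A toy numerical sketch of this point, using the "rough" example that comes up later in the interview and entirely made-up probabilities: the context selects which probability distribution governs the interpretation of the sign, and the culture selects which contexts carry which distributions. The culture and context labels are illustrative assumptions, not anything from the interview.

```python
# Context selects P(signified | signifier); culture selects which
# context-to-distribution mapping applies. All numbers are invented.
P = {
    "culture_A": {
        "carpentry_workshop":  {"uneven surface": 0.8, "bad luck": 0.1, "approximate": 0.1},
        "casual_conversation": {"uneven surface": 0.1, "bad luck": 0.7, "approximate": 0.2},
    },
    "culture_B": {
        "casual_conversation": {"uneven surface": 0.3, "bad luck": 0.3, "approximate": 0.4},
    },
}

def interpret(culture: str, context: str) -> str:
    """Third-order level (culture) selects the second-order level (context),
    which selects the first-order signifier-signified probabilities."""
    dist = P[culture][context]
    return max(dist, key=dist.get)

# First-order redundancy: given the signifier 'rough' and a context, one reading
# is a much better than random guess (0.7 against a chance level of 1/3).
print(interpret("culture_A", "casual_conversation"))   # 'bad luck'

# Little or no redundancy: in culture_B's casual conversation the readings are
# nearly equiprobable, so knowing the signifier barely helps you guess the meaning.
print(interpret("culture_B", "casual_conversation"))   # 'approximate' (only just)
```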
Metafunctions, communication, text, and genre
You use Halliday’s three metafunctions in your work with texts, but you have given them new
names: presentation, orientation and organization. Why?
I have done that because I wanted to be able to generalize from the case of language to the
case of other semiotic modalities, and particularly to multimodal texts. I tried to find the
common denominator. In art criticism, they talk about iconography, which is the ideational
component, about perspective, which has to do with an interpersonal, attitudinal component,
and about composition which is more or less the textual aspect. In music, there is another
terminology, and so it is for gesture, posture and action more generally. I wanted a common
or neutral set of terms that were generic and could be applied across the different modes. I
came up with these terms to represent the cross-modal dimensions of meaning making. The
claim is that regardless of how you make meaning, with whatever sign-system or combination of
sign-systems, you always have a presentational aspect or dimension, something that specifies
what it is about, what the content is, the ‘what’ of it; and you have something that is
orientational, which expresses your evaluative and attitudinal stance towards it and ultimately
towards whoever you are communicating with or co-acting with. You also have an
organizational element, regardless of how many dimensions and modes may be involved.
These notions are a generalization or an expansion of Halliday’s metafunctions.
Communication is one of those terms like context or culture that is useful mainly because it is
vague. The attempt to give it a precise definition is, I personally think, counterproductive,
although I know many people will disagree with that. I once defined ‘communication’ by
saying: “Communication is the creation of community.” For me, that is in the broadest sense
what communication is. Communication is the processes which bind a community together.
People within a community communicate more often and more intensely in more important
ways with each other than they do with people who are not part of that community. I do not
believe that communication is the transfer of the same meaning from one person to the other.
I believe that communication comprises the social processes by which communities bind
themselves together, and it does not have to be in language. I think in general it is in joint
collaborative or interactive action that communication takes place.
Text has a number of different meanings. I have usually distinguished between an objective
meaning of text, by which I mean the actual physical, material text, the ink on the paper or the
lights on the computer screen, versus the meaning text, by which I mean the meanings which
are interpreted by some interpreter from the objective text. Peirce would say by some system
of interpreters, given all the contexts, all the intertexts, you know. Halliday of course uses text
as the fundamental unit of meaning. If an expression or an action has meaning in a
community, it is a text. You can then of course ask: “What is not a text?” For me, that is an
isolated signifier. You pull a word out of the dictionary, like ‘rough’. ‘Rough’ is not a text. If
someone tells me they had bad luck, and I say “Rough!”, that, however, is a text.
You also use the notion of genre in your work. What is genre to you?
In the most basic sense, a genre is for me just a type as opposed to a token. Let me take the
most fundamental kind of genre, an action or activity genre. It is people doing things in a way
that is typical, recognized, repeated and repeatable in a culture. What we usually think of as a
genre, however, is a text genre or a discourse genre, and then you are looking at the language
or multimodal expression of that particular activity genre. If you are writing out a text, then
the fundamental genre is the genre of the activity of writing for instance a haiku. The written
haiku itself is a byproduct of the activity, a trace or an index of the action genre that produced
it. Of course it inherits from the action genre certain kinds of features, which is what we
usually think of in a more linguistic sense of genre. It tends for instance to inherit a division
into parts. Each of the parts has a differentiated function within the whole, and there are some
constraints on the sequential or temporal ordering of those parts.
I think it is very close to Jim Martin’s notion of genre taken somewhat in isolation. Martin
wants to embed the notion of genre in a sort of stratificational hierarchy. I think that this is
sometimes a useful thing to do, and sometimes maybe it is not so useful. Clearly, what is
central to Martin’s notion are stages which are from my point of view the functional parts of
the genre. He is mainly interested in describing what those functional parts are, and what their
functions are.
Text scales is a concept I have created and found useful in my own work. When I started to
analyze discourse, my initial interests were in the discourse of science classrooms. At that
time, there was still a major debate as to whether there was some kind of grammatical
organization above the sentence. Important questions were what counts as grammatical
organization and what does not, and how is meaning organized above the level of the
sentence. Today we all take it for granted that there is a lot of organization of meaning above
the scale of the sentence. But what I wanted to know was what kind of meanings you can
make with longer texts that you cannot make with shorter texts. This seemed to me to be
really a fundamental question. I developed therefore this notion of text scales, to ask what
kind of meaning you can make with a long complex sentence that you cannot make with a
single clause, what kind of meaning you can make with a paragraph or a logical argument that
you cannot make with a single sentence, with a novel and not with a short story, with a
dissertation and not with a 15-page article, and so forth. I think that we all recognize that there
really are such distinctions of meaning. Exactly what these differences in meaning are is not so obvious, so it
is a productive question to ask about text scales.
It is important to be consistent with my larger point of view in which it is always the
activity or the activity genre, the people doing things, that is fundamental and not the text.
This translates into activity scales, or time scales as I have more recently called them, in which
fundamental question is not just what kinds of meaning can you make over longer periods of
activity than over shorter periods, but how do the meanings or the actions that you take over
shorter time scales cumulate and integrate into the meanings that you make over the longer
scales. This question has led to a very productive way of looking at things in terms of cross-
scale relationships.
The fundamental model that I have used for this work comes indirectly from
developmental biology, which is another field I collaborated in earlier in my career, and also
from complex systems theory more recently. The model is essentially a three-level
‘sandwich’ in which you put whatever level you are interested in in the middle, and then you
look at least one level above it, meaning longer than it, and one level below, meaning shorter
than it. The meanings that you make, or the actions and activities you do, that typically take
place at the level in focus are themselves organizations of smaller activities and actions, or
units of words or sentences below, and they are subject to the constraints and affordances of
the longer term activities that are going on at the time. If you are in the middle of the science
lesson, those longer scales are represented by the entire lesson or the unit or topic that you are
dealing with. The shorter scale is represented by the individual utterances. The level in focus
is typically the episode, a series of interactions between students and teachers around a
particular topic that achieves a particular aim or objective.
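A minimal sketch of the three-level model just described, with unit names and rough timescales that are purely illustrative assumptions: pick a focal unit, then look one level down at its constituent parts and one level up at the longer activity that constrains it.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Unit:
    name: str                      # e.g. "utterance", "episode", "lesson"
    timescale: str                 # rough characteristic duration
    parts: List["Unit"] = field(default_factory=list)
    parent: Optional["Unit"] = None

def sandwich(focus: Unit):
    """Return (level below, level in focus, level above) for analysis."""
    return focus.parts, focus, focus.parent

lesson = Unit("science lesson", "tens of minutes")
episode = Unit("episode on 'energy transfer'", "a few minutes", parent=lesson)
episode.parts = [Unit("teacher question", "seconds"), Unit("student answer", "seconds")]
lesson.parts = [episode]

below, focal, above = sandwich(episode)
print([u.name for u in below], "->", focal.name, "-> constrained by", above.name)
```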
Multimodality
What would you say are your major theoretical contributions to the field of multimodality?
I think the most important tool that I have added is my notion of multiplicative or multiplying
meaning, which essentially is an argument from information theory and from the cybernetic
approach. If you have several different codes, or several different sets of alternatives, and you
are deploying them simultaneously, then the set of all the possible combinations of them is the
relevant set for deciding the information value of any particular choice or instance. If I had
three sets of three choices each, and if I deployed them separately and independently and I
made one choice, I would have an information value of one out of three for each of them (one out
of three, plus one out of three, plus one out of three, considered one at a time), whereas if I
deploy them in conjunction with each other, so that I have to make a selection from all three
simultaneously, I have a space of three times three times three, twenty-seven different
possibilities. If I choose one, then the specificity of that is one in twenty-seven, or an
information value of twenty-seven. So there is a combinatorial explosion of possible
combinations. Because of redundancies and genre constraints, it is not a complete explosion.
Not every one of the twenty-seven, or twenty-seven billion, possibilities is equally likely, but in
general it is a lot more than the sum of just adding them up one at a time separately. You can
then think about what are the possible
combinations that you can multiply. For me, one of the most basic ones is multiplying the
presentational times the orientational times the organizational. Even within the metafunctions,
there are different sub-dimensions and different modes. So I am also multiplying the linguistic
by the visual by the mathematical. At the very least, this gives you a large checklist of things
to pay attention to. How does for instance the orientational aspect of the visual interact and
combine with the orientational aspect of the linguistic? It is not always homogeneous, so the
orientational aspect of the visual can for instance interact with the presentational aspect of the
linguistic.
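A back-of-the-envelope sketch of the combinatorics (my arithmetic, with illustrative option labels rather than anything from the interview): three independent systems of three options each give twenty-seven joint combinations, and redundancy or genre constraints prune that space without making it merely additive.

```python
from math import log2
from itertools import product

# Three toy systems of three options each; the labels are invented for illustration.
systems = {
    "presentational": ["process", "thing", "quality"],
    "orientational":  ["assert", "question", "command"],
    "organizational": ["given-new", "list", "nucleus-satellite"],
}

separate_options = sum(len(v) for v in systems.values())      # 3 + 3 + 3 = 9
joint_combinations = len(list(product(*systems.values())))    # 3 * 3 * 3 = 27

print(separate_options, joint_combinations)
print(f"{sum(log2(len(v)) for v in systems.values()):.2f} bits deployed jointly")  # ~4.75

# Redundancy and genre constraints prune the space: if, say, commands never
# co-occur with nucleus-satellite organization, the effective space shrinks,
# but it still grows multiplicatively rather than additively.
allowed = [c for c in product(*systems.values())
           if not (c[1] == "command" and c[2] == "nucleus-satellite")]
print(len(allowed))                                            # 24
```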
Is the idea of multimodality the largest innovation in social semiotics in the last twenty years?
It probably is. The innovation before multimodality was the critical perspective, I think; the
political and social dimension and the ideological dimension of analysis, which still remains
on the agenda. But once you look at things in terms of multimodality, there is no going back.
Everything is multimodal, even a printed text on a page, the choice of typeface, the bold and
the italics, the headers. Most of the texts that we look at online are multimodal. Video is an
inherently multimodal text which has auditory dimensions: sound dimensions, and in many cases music
dimensions, in addition to the visual organization dimensions and the talk and language that is
going on.
How do you distinguish between modes? If we talk about modes multiplying, would you for
instance say that color is a separate mode? Or music?
Some of this I think is not so much an inherent feature of the modality of expression as it is a
historical product of how people have deployed the affordances, the potentials of the modes
of expression. Music, I think, does qualify as a full-fledged semiotic resource system, because
historically it has been developed for purposes that have brought out most of its potential in
that way. It tends to have a more dominantly attitudinal or emotional function rather than an
ideational function, but it can also have an ideational function. Color, on the other hand, has
not been developed historically in our culture in such a way. I do not think we really have
color syntagms that operate as a separate semiotic modality. If I flashed a series of colors at
you I do not know if you would make any special sense out of that sequencing. Maybe if I
flashed them in the sequence of the rainbow. Or I might be able to flash sequences that are
more like female-identified colors or male-identified colors or darker colors and brighter
colors. We have not historically, however, turned color into an autonomous, or semi-
autonomous, system the way we have with music.
So what may fruitfully be analyzed as a separate mode might change through history?
Yes, I think so. There is a tendency to conventionally fuse existing modes together, and to
some extent to also separate them from each other. I think this is most clear developmentally.
For very young children, drawing and writing are not separate modes, yet. They have to be
taught how to distinguish drawing from writing and to create one separate semiotic resource
system for drawing and a different system for writing. Something that was originally,
primordially, developmentally a single system gets split into two. I see no reason why two
different systems might not become fused through their redundancy relations moving up
towards one hundred per cent. Once they reach one hundred per cent then they are no longer
two separate systems, because combining them cannot give you any additional meaning.
People have been debating that for a very long time. I have always taken the position that
because we are a logocentric culture we have a vested interest in trying to see that language is
special, and usually better than the other modes. I remain a sceptic about that. Moreover, it is a
purely academic abstraction to say that language exists as a separate mode. It is never deployed,
and it cannot be deployed, as a separate mode. There is no expression plane for language
which is not also an expression plane for some other semiotic modality. In spoken language there
is the timbre of my voice, my accent and all the indexical features we were talking about
before. In written language, there are typographical features and choices. It seems to me that it
is more productive to ask the question: What does language add to the multimodal mix? What
are its special strengths or the special purposes for which we tend to use language more than
the other modes? I think categorization and subcategorization is one of those. Language is
really good at classifying things and making them into groups, types, sets, categories,
subcategories and overlappings of categories. Language is also very good at separating out
processes from participants, which in some ways may be a very artificial thing to do. It is not
the way reality actually appears to us or is constructed; we only think it is because we are
using language to describe it to ourselves. But if you want to do that, and there are occasions
when you do want to do that, language is a particularly good way of doing it.
Yes, indeed, the differential, functional affordances of the different modalities within the
multimodal mix. Michael O’Toole has talked about mono-functional tendencies in various
genres (ref?). One could talk in some ways about functional specialization of modes. If you
look at mathematics it is really a highly functionally specialized mode. It is extremely good at
looking at meaning by degree, quantitative relationships of more and less, and the ways in
which more or less of one thing correlates with or is functionally related to more or less of
something else. That was its original specialized function. Of course it has taken on many
other specialized functions too, especially in the last hundred years.
I think cultures tend to preferentially use certain modes for certain meaning purposes.
We tend to use music for emotional and rhythmic kinds of purposes and language for
classifying and also narrative purposes. We use visual representations for showing spatial
relationships and so forth.
Literacy, like context and culture, is another one of those notions that work best if you do not
define it too precisely. The meaning of literacy has changed so much over my lifetime from
being almost exclusively the ability to read print text and gain basic information value from
doing so, to the use of written language for your own purposes. Once one goes to a
multimodal view on literacy, then literacy and multimodal semiotic competence are more or
less the same thing. I do not really see how to distinguish them anymore. I think one can talk
about specialized literacies, like you can have a mathematical literacy in the register of
differential calculus, or you can have a literacy in the texts of systemic functional linguistics.
To me there are literacies, but if you talk about literacy as such it does not mean much more
than either the everyday registers of colloquial language or generalized multimodal semiotic
competence.
Could we hope to develop a general social semiotics of all modes? And in what ways would
that be different from traditional semiotics?
I do not think you can ever really have the modes separately. I think the only general
semiotics you can have is a fully multimodal semiotics. And I think the main difference from
the traditional approach is the social part. Traditional semiotics has been very formalist with
emphasis mainly on forms, relations and contrasts of forms, and constraints on combinations
of forms. It has been much less about functions of those forms and combinations of forms and
how they are shaped historically by the kinds of things that people want to do with the forms,
including their political objectives, their identity issues and their practical necessities.
How can social semiotics and SFL contribute to the study of science?
There are many different aspects to science; there is scientific research itself, there is our
understanding of science as a social phenomenon, and there is the teaching of science. I
believe that perspectives from SFL and social semiotics can contribute to all of those aspects,
but in rather different ways. The contribution to the teaching of science is obvious: helping
students to understand that a lot of what goes under the name of scientific thinking or
scientific concepts is really a matter of specific forms of scientific language, of talking science,
of scientific reasoning and of scientific genres, both spoken and written.
I think critical discourse analysis is important for understanding science as a social
phenomenon. Science is not a purely isolated, neutral, arbitrary, even objective activity. It is
culturally and historically situated, and it is deeply interdependent on economic interests and
even to some extent on political ideologies. There is a strong gender identity component in
many of the ideas and programs of science, particularly in certain subfields of science. You
need tools to be able to persuade people that the things you say are right. Critical discourse
analysis, using tools from systemic functional linguistics and perspectives from social
semiotics, provides very powerful tools for actually doing such analysis.
In scientific research itself I think it depends on the field, and even more on the
component of scientific research. One of the areas that is of great interest in scientific research
today is representation. How do you represent large, complex data sets and complex
interrelationships of different variables? We are in the age of big data, and we have enormous
power of representation through computerization, but we have not really invented a lot of new
genres of representation, certainly not much in the sense of verbal genres. A scientific article
today reads pretty much like one from 50 years ago, though maybe not like one from 150 years ago. The
design of effective scientific representations is a major issue in scientific research today. You cannot
understand your data unless you can meaningfully represent it to yourself, and you cannot
communicate persuasive arguments about your data unless you have representations that
enable you to share your insights and argue about the representations with other members of
the scientific community. In particular, I think that multimodal analysis with a grounding in
social semiotic approaches, and ultimately in principles deriving from SFL, is potentially a
very important tool for the design of better scientific representations and also for new
scientific representations.
What is your contribution to this field?
Pretty obviously my book Talking Science from 1990 has had a fairly substantial impact in
the fields of science and mathematics education and on approaches to research on classroom
discourse. Some of my notions about the application of the multimodality approach to
multimodal scientific text, the integration of mathematics and various kinds of representations
such as tables, charts and diagrams along with the linguistic text in the genre of the scientific
article, have been useful. For very many people, this has been a model of how to look at those
interrelationships even in other genres. I have not been in the business of designing new
scientific representations, though I might. Some of my social semiotic perspectives have had
some small influence in terms of getting people, at least in science education, to have a more
sociocultural perspective on the teaching of science. I leave it to other people to try to
persuade the scientists to do that.
Halliday has talked about the evolution of the language of science. Is this a field you have
worked with?
I think Halliday has probably done more of that, particularly on the language side. His work
on the rise of nominalization as a tool in scientific writing and discourse and other changes in
the way in which scientists explain the relationships of concepts and ideas that went along
with nominalization is historically accurate and very important. My own contribution has
been more on the multimodal side, trying to look at the ways in which scientific language and
the scientific use of mathematical expressions have become mutually adapted to one another
over time as there has been greater and greater use of mathematical representations in
scientific writing. If you go back to the 1700s, a lot of the mathematical expression was very
verbal, and even some of the diagrams and formulas mixed grammatically normal English or
Latin sentences with mathematical symbols and expressions. We do not do it that way
anymore. There is an almost completely separate grammar for the mathematics, which is
ultimately derived from the grammar of natural language, but has become very specialized
and diverged from it alongside the special register grammar of different areas of science. I
know physics the best, and the way in which the different grammars accommodate and
complement each other has evolved textually over time. Something similar has gone on with
visual representations in scientific writing. They are extremely important, more so than people
outside sciences realize. There are many scientific articles in which an expert will look first
and primarily at the figures and diagrams, and refer to the language only in order to clarify the
meaning of what is in the graph, the diagram or the chart. And of course the ways in which
visual elements are accommodated in a complementary way in the creation of the whole
multimodal scientific text is an even more complex phenomenon, with its own historical changes,
than just the integration of the mathematical symbolism with the text.
Is it a fundamental aim of scientific text to present one truth, and if so, how can that
truth be represented?
It is certainly true that the genres and registers of science have evolved to state precise
meanings and in general to imply that there is a true or correct description of reality or of a
phenomenon. Scientific texts allow, however, for degrees of certainty about what is being said
or claimed, and there are even representational conventions for that. When you represent data
in many forms of graphs, you can for instance put indicators saying that it could be a little bit
more or a little bit less than what I think it is, but certainly not a lot more or a lot less. Science
does not, however, really have a set of conventions for dealing with multiple points of view
about a phenomenon, or multiple interpretations, except to see them as contrastive
alternatives; only one of which can be right. There have been certain periods in the history of
science when scientists have lived uncomfortably with the notion that there might be more
than one correct explanation of what is going on. The most famous recent period of that was
the so-called wave-particle duality in quantum mechanics, the argument of Niels Bohr
regarding complementarity and the notion that reality is inherently too complex for any one
single point of view to be able to explain it all. Then you need to have multiple points of view
in order to get a grasp on reality. That only lasted for about 20 years, and it was a very
uncomfortable 20 years in the history of science because it went so much against the kind of
scientific philosophy that scientists themselves have, which is perhaps a bit more naïve than
what many philosophers and sociologists of science have. Scientists do not have a strong
motivation for developing multiperspectival approaches, and they are uncomfortable with the
notion that you might never be able to resolve a problem. It undermines the identity of the
belief-system and maybe even the value-system of scientists.
Do we need another language and other representations for talking about the future aspects of
science, for instance the environmental challenges?
This is a big question. Scientists have for years been convinced of the dangers of climate
change and have presented the data and the evidence and representations to decision-makers,
who also hear from other people, who do not want to give up making more money in order to
save the earth. I think the question here is not so much a scientific question as it is a rhetorical
and political question. One of the problems is that scientists are not very good at making their
case to the public. In designing better scientific representations, a key feature is to find ways
to make them more friendly to non-experts, so that for example many issues of probability
and statistics, which are fundamental to the scientific representation of controversial issues,
like environmental policy issues, are simply understandable to the average person. Purely in
representational and rhetorical terms I think that science could do a lot better, but scientists do
not have the time or the expertise to do that. They need people who are good at translating
scientific findings of relevance to policy into terms, not for policy makers or experts, or for
the staff of a congressman or a senator, but for the mass media and for ordinary
people.
You have in recent years worked with cognition and emotions. How do you combine
cognition and emotions?
It goes back to semiotics and natural science, which are the two sources of all my work. The
complex systems I was interested in combined human social systems with natural ecological
systems. Many of the core theoretical ideas on doing this come from the field of bio-
semiotics, which is probably better known in Europe than in the United States. For me, it was
the attempt to figure out how meaning-making processes, which for me now includes feeling
processes, take place in material systems; a kind of materialist rather than a formalist view of
semiotics. I asked: “Why can’t we do for our perspective on feelings and emotions, what
we’ve already done for our perspective on cognition?” That is to go outside of the head, make
it culturally specific, make it situated in interactions in an environment in a context, and in
effect turn it into a social semiotics so it becomes a social semiotics of feeling.
At the same time, I questioned the traditional distinction between affect and cognition.
Reading about embodied cognition for instance, or some of the papers that Paul Thibault has
written about embodied models of language, or ‘languaging’ as he calls it (ref. ?), and works
of people like Tim Ingold (ref.?), an anthropologist from Scotland, and Maxine Sheets-
Johnstone (ref. ?), who is a theorist of movement and dance, it became clearer and clearer to
me that the traditional distinction between thought and feeling is an ideological distinction. It
is a distinction that has to do with gender stereotypes. There is, however, no meaning-making
without feeling, and there is no feeling apart from its meaningfulness for us. I began to look into
the neurological basis of the relationship between thinking and feeling. The so-called
feeling centers and the so-called thinking centers of the brain are actually tightly integrated with
one another. I also looked at bio-semiotics inspired by Jesper Hoffmeyer in Denmark, whose
primary interest is in the biological origins of semiosis (ref.?). For me, the interest was in the
single unitary biological phenomenon of feeling and cognition, which we then rip apart,
separating it into feeling versus meaning, which is not something we should do. I have a
chapter coming out in a new handbook on advances in semiotics edited by Peter Trifonas
from Toronto which is called Feeling and Meaning – A Unitary Bio-Semiotic Approach.
For me, aesthetics in the general sense of the Greek aisthesis, which means feeling, is
fundamental to all aspects of meaning. We tend to associate it with feelings of beauty, but in
an earlier generation it was also associated with moral feelings and in the ancient Greeks it
was associated with stronger passions, as with tragedy or comedy. In this broad sense
aesthetics is fundamental to all aspects of meaning, even scientific meaning or mathematical
meaning. Mathematicians find aesthetic beauty in certain kinds of mathematical proofs, and
regard other kinds of mathematical proofs, which are equally mathematically valid, as being
‘ugly’. Just like computer scientists will regard certain programming algorithms as elegant
and beautiful and they will get all emotionally excited about them, and they will regard others
as ugly kludges and ad hoc add-ons that have no real ‘computer sensibility’. In scientific texts
scientists also get quite emotional about a beautifully designed experiment in the laboratory,
or a beautiful mathematical formula or expression of an idea or generalization. Perhaps the
thing that is most often overlooked is that even the so-called neutral academic objectivity is
itself a feeling. It is not the absence of feeling, it is a particular feeling that one has and
cultivates. There is an aesthetics – if you like – of neutrality, as well as an aesthetics of
passion and engagement.
Can aesthetics be integrated in the three metafunctions?
For me aesthetics is a sort of subcategory, or more generally a branch, within the area of
feelings, emotions, evaluations, and stances towards things. Formally it would come within
the orientational semiotic function. That is similar in many ways to Jim Martin’s work on
appraisal. He sees aesthetics as an evaluative component within the interpersonal
metafunction, and looks at linguistic resources for the expression of attitude and evaluation,
in which the evaluation of objects by aesthetic criteria is one sub-component. I would
tend generally to go along with that, but it is not just about the semiotic expression and
evaluation. It is about the feeling that you get on the basis of which you decide that you are
going to evaluate it in this way. I think the feeling in many ways comes first in aesthetic
evaluation, and then you look for a semiotic way to justify and explain why you feel that way
about something.
I think that the aesthetic, emotional, affective dimension is one of the most important
challenges for the future. It has been somewhat neglected and not well integrated. This has a
connection to what Jim Martin proposed as Positive Discourse Analysis, looking for examples
of ways in which discourse represents the best rather than the worst in human beings. It is
obviously very easy to find the worst. In his analysis for example of the discourse of
reconciliation in South Africa, he was striving to find the positive, the aspirational
possibilities of humanity. We need to ask what we find inspiring in multimodal texts. How do
we use multimodal texts and actions as feeling and meaning mediators to bring out the best in
ourselves? I think that is a worthwhile program for the future.
How do the cognitive and the emotional enter into education and into the design of education?
Unfortunately, the emotional does not enter in nearly as much as it should. I think this is another
one of our cultural shortcomings. We associate the emotional with women and children and
not with men who go to war or rule the world. In education there is an emphasis, to some
extent, on emotion and play and feelings for very young children in schools. But then
certainly in American culture, and I think in many other European cultures, a time comes
when the only thing we teach children about emotion is how to control it, which means how
to suppress it, and how to make it consistent with what the culture considers appropriate
emotion in a situation. We do not pay attention to how we can use emotion, how it can
become a tool in the same way we use meaning as a tool in order to solve problems, to
creatively come up with new solutions, to design, create and produce new things, and simply
find new ways of enjoying life. If you apply this to science education, some of the students
find science education very dull and boring, whereas many scientists find science very exciting and
emotionally engaging and inspiring. So there is something wrong, there is something missing
in the teaching of science, and perhaps the teaching of many other subjects. This concerns
recognition of the emotional and aesthetic dimensions of the subject and the legitimacy of
enjoying yourself when you are learning things.
Digital media
You have been especially interested in new digital media and you have worked with these
media within the framework of social semiotics. What are your main theoretical contributions
in this field?
Initially I began by taking concepts that I had used with more traditional media and seeing how they would need to be changed or extended for new media. My notions of intertextuality
led to my analyses of hypertext. The fundamental question was: When you go across a link in
a hypertext, what is the meaning relation between the source of the link and the target of the
link? Are those relations the same kinds of intertextual relationships that we normally have?
Is there a need for clarification or specification of those relationships? How are they
organized? How do they appear in different genres? I did some work on that and presented it
at an SFL conference in Cardiff a long time ago. I never published very much about it, mainly because I moved fairly quickly into broader analyses of web-based materials, such as web sites from NASA, and comparisons between print text and web-based scientific text.
I kept, however, following the progression of new media, always looking for the ones
that offered the greatest multimodal affordances. Computer games appealed greatly to me
around 2000, as they did also to James Paul Gee who was studying similar things about the
same time (ref.?). Computer games are a dynamic medium, a medium in which events unfold
in time in a way that is not as completely under the control of the user as in a web site. If I
follow a trajectory of links from here to there in a web site I control the timing and the pacing
of those shifts, but in a computer game, the program controls as much as I do. Sometimes it is
even a fight between me trying to slow things down and the program trying to speed things
up. I became very interested in issues of pacing and timing and time scales and the traversals
and trajectories over time in dynamic computer multimedia like computer games. That
became one of the areas that I published on.
I was also looking at what happens to the multiplicative model of presentational and orientational meaning in this new space. Obviously, it gets bigger and more complex, and there are many more different kinds of combinations that take place, affected by
the temporal dimension. But the thing that struck me as most evidently different was the role
of feeling and emotion – though it turned out that it was not really different, it was just more
obvious. The pacing was closely related to the anxiety that you had in playing the game. In
most of these games you can die, or at least fail in a way that feels unpleasant to you, and you
have anxiety about whether it is going to happen. When the pace of the game goes more and
more rapidly your anxiety increases. The choices that you make in the game are not based
purely on the meanings that those choices have; they also are based on the feeling state that
you are in at the time you make them. A feedback loop develops between the meaning choices you make, the consequences of those choices for your feeling state, and the effects of the feeling state on the subsequent meaning choices you make. So the meaning–feeling cycle becomes a single integrated unit of analysis for understanding the trajectory of
what you do and what happens in the course of playing the game. The impetus for my most
recent work has been trying to articulate a more unified and integrated theory of meaning and
feeling.
I think we are now well aware that technological hype and technological fixes are dangerous.
It is too easy to promise too much or to expect too much. No technology is going to make
learning better unless it occurs in the context of some genuine motivation for learning, and
some thoughtful support for learning from the environment and the teacher. That being said,
there is still a lot of opportunity and a lot of affordances. If you replace print textbooks with multimedia learning support materials or systems, you can use more dynamic media, integrating a lot more video and, in the case of science, a lot more animations and simulations into learning. You cannot pick up your printed textbook, see a diagram of the relationship between pressure and volume, push down on the plunger, and watch the pressure and volume indicators change; with a well-designed online textbook you ought to be able to do that. It is clearly preferable to replace print textbooks in this way.
More specifically: what is the role of software in designing modern texts?
It depends on what kinds of software one is talking about. There is software behind your
ability to push on a plunger in a simulation of a pressure–volume relationship in your online physics textbook. There is also more general-purpose software that enables you, for example, to create your own simulation of a situation. And there is software that enables students to learn how to program computers in more fundamental ways. There has been a debate in education about whether learning computer programming should be a fundamental literacy or not. My own view is that general-purpose computer programming knowledge beyond the most elementary
level is a specialized skill that should be learned by people who have some particular reason
or desire to learn it, not necessarily by everyone. But what you might call scripting-level programming skills are probably the kind of skill that almost everyone will at some point or other find useful to know. In today's reality, however, such skills are highly specific to particular kinds of genres and tasks, so the conventions that you learn for doing it with one programming system will not necessarily work in a different programming system. I have the feeling that twenty or forty years from now there will be generic conventions that will
apply across many different kinds of scripting. But we are not there yet.
I use media to mean the material technologies of expression and modalities to mean the
semiotic resource systems, but in choosing the best-sounding English word for something I do not always stick rigorously to my own definitions. Hypermodality is basically hypertext plus multimodality. Today, in practice, most new multimodal texts or systems are also hypermodal, and you probably do not need the term hypermodality anymore. It is something that is taken for granted as one aspect or affordance of modern multimodal systems. Meta-media literacy is somewhat different. What I meant by meta-media literacy, or meta-media in general, was that
the expression plane could be customized to the ways of communicating that you found most
comfortable, while leaving the content plane relatively unchanged. Imagine that you had the
online multimodal version of your textbook and you could request the textbook to present a
concept to you using more language and less mathematics, or more visuals and less
mathematics, or more mathematics and more language and less visuals, or to use
preferentially certain kinds of visuals rather than other kinds of visuals. The idea of meta-
media was simply the notion of a modally customizable presentation of what was still the same underlying content database.
You previously mentioned that you see the medium as part of the immediate context of the
text.
Yes, the material medium, meaning the technology and the conventions for operating and using that technology, is a part of the context of the text that we usually background. We usually do not pay attention to it unless it stops working. A book is a material medium, a multimodal technology, but if two pages of the book are stuck together, or the paper has not been slit or cut properly, then suddenly the medium becomes a relevant context. Some avant-garde authors and publishers have even deliberately played around with the conventions of
the book, whether it is playing around with the convention that you read sequentially page
after page, or playing around with the convention that some pages may be cut away or only
half of the page exists. Art books of many kinds play with these conventions, and then you
suddenly realize that it is a relevant part of the context. Usually the medium is 'black boxed', as the Latour (ref?) terminology has it, that is, something so taken for granted that as long as it is not going wrong you do not actively regard it as part of the context.
Systemic functional linguistics is not widely used and taught in American linguistics
departments. The reason for that is historical. Because Chomsky was American, Chomsky’s
students became the professors in many, if not most, linguistics departments in the United
States. Unfortunately the Chomskian school had a very exclusionary approach to linguistics.
They felt they had the right way of looking at linguistics, and everybody else was wrong.
They suppressed and pushed out every approach other than their own, not just SFL. SFL has, however, found its home outside of linguistics departments: in applied linguistics areas such as language teaching, in computational and anthropological linguistics, in discourse analysis methods throughout the social sciences, and, through social semiotics, in multimedia and multimodal analysis methods. Social semiotics and SFL have a significant footprint in the United States, but not in the linguistics departments, not in formal linguistics as a discipline.
Where can we go to find social semiotics and SFL on the American continent?
Social semiotics is stronger in Canada than in the United States, because it is a British
tradition. Much as the academic world likes to deny these things, national loyalties, which are propagated by who is the student of whom and who goes to which university to get their PhD, have an enormous impact on the different shapes that schools of thought take in different countries. In Canada, Toronto has been a big center for this approach, not least through the communicational linguistic studies of Michael Gregory and his students. Many of Gregory's PhD students are now at different universities across Canada. The educational linguistic approach is strongly represented at the University of British Columbia.
In the United States, the situation is complicated by the fact that there are other
functional linguistic traditions besides SFL. Functional linguistics has become pretty strong in
anthropology, for instance, in the tradition of Dell Hymes and John Gumperz. Then there is my colleague at the University of Michigan, Judith Irvine, and the Santa Barbara tradition of Sandra Thompson. In the east, the University of Pennsylvania, where Dell Hymes was for many years, is a center. The Jakobsonian school from Harvard, represented by Michael Silverstein at the University of Chicago and all his students, must also be mentioned. Many of
these scholars went into linguistic anthropology because they would not have been able to get
jobs in Chomskian linguistics departments. The antipathy of the Chomskians forced all the
functionalists to become close allies of one another, and there has been much exchange of
ideas. Computational linguistics and corpus linguistics are other areas where there is a large
amount of influence from SFL.
As you see it, what are the major advantages and deficits of systemic functional linguistics?
The major advantage is clearly the paradigmatic orientation to meaning: that you analyze
wording choices directly in relationship to differences of meaning, which is certainly not
possible in a Chomskian model of language. It is possible in many of the other functionalist
approaches to language, but in most of those it requires a great deal more intuition on the part
of the analyst to get a satisfactory analysis. I also think that systemic functional linguistics is
probably more teachable to a wider range of users than many of the other approaches are.
Systemic functional linguistics takes, however, a long time to learn, and this may be one of
the disadvantages. There is a lot of specialized terminology, and there are a number of difficult concepts. Halliday often says that people are not accustomed to thinking
grammatically, and this is true. It takes an effort to learn how to think grammatically even
enough to be able to use SFL grammar as a tool for discourse analysis. One of the other
criticisms of SFL which I have mentioned briefly before is that it tends to be rather inward
looking. People in the field tend to talk mostly to other people who are also in the field and
may not look as much outside to other flavors of functional linguistics. They could look more
into cultural anthropology and political sociology, and also into other areas of semiotics like
biosemiotics, and into new paradigms like embodied cognition and embodied meaning-making, as Paul Thibault, and not many others, has done.
Do politics or ideology and linguistics go hand in hand, as Halliday, with his early dream of a Marxist linguistics, might have said?
They can go hand in hand. Sometimes they go too much hand in hand, as when some people
using a Critical Discourse Analysis approach may have already decided about the political
analysis before they do the linguistic analysis or the multimodal analysis of their texts. As a result, they are not learning how to understand the political situation from the analysis of the texts. Instead they are merely trying to prove that their political analysis was correct all along with the evidence that they gain from the text. Rhetorically perhaps, this could be a useful
strategy; sometimes you do need to beat people over the head with evidence for some things
that are obviously true. From a research point of view, however, I think this is not a good
strategy.
In your view, what are the most important current and future trends in social semiotics and
systemic functional linguistics?
One of them certainly is the move towards integrating a semiotic approach with an embodied
approach, taking up the implications of a basically materialist model of communication and
language, and taking them a step further to talk about what it means to have animate bodies
moving, touching, interacting, doing while speaking, gesturing, drawing and writing and so
forth. A second one, which has some affinities with the first, is a greater emphasis on feelings and emotions and on the integration of feeling and meaning: how to take that into account so as to get richer forms of analysis that do not simply describe what something means and how it means it, but also give an account of how we feel about it and why we feel the way we do in relationship to the meanings that we make with it. Those would be the two that I am most
optimistic about having a productive future in the field.
There are a number of other areas that have been going on for a while and that may or may
not become more productive in the future. Jim Martin’s Positive Discourse Analysis that we
mentioned earlier may be one of those. I think applications of SFL in corpus linguistics have
been going on for a while now, and with the increasing sophistication of computer language
models, there may be a further future for that. It may become possible to extend Halliday’s
program of looking at the relative probabilities of different choices within system networks as
a function of different genres or registers, so that one could even get quantitative models of
register in the program of generative computational linguistics. And of course, there is always
the holy grail of being able to parse natural language with the computer, which we still seem
to be as far away from as we were fifty years ago.