
Audiovisual Translation

(Subtitling, Audio Description)

Brasília-DF.
Elaboration

Claudio Ramos de Lima Chaves

Production

Linguistics Review and Publishing Technical Appraisal Team


Contents

PRESENTATION

ORGANIZATION OF THE STUDY AND RESEARCH BOOK

INTRODUCTION

UNIT I
TYPES OF AUDIOVISUAL

CHAPTER 1
TYPES OF AUDIOVISUAL AIDS USED FOR TEACHING

CHAPTER 2
DUBBING AND VOICE-OVER

UNIT II
SUBTITLING AND AUDIO DESCRIPTION

CHAPTER 1
SUBTITLING

CHAPTER 2
AUDIO DESCRIPTION

BIBLIOGRAPHY
Presentation

Dear student,

The editorial proposal for this Study and Research Book brings together the elements considered necessary for developing your studies with confidence and quality. It is characterized by updated, dynamic and pertinent content, as well as by the interactivity and modernity of its structure, which suits the Distance Education methodology.

With this material, we intend to reflect on and understand the different kinds of knowledge on offer, allowing you to expand specific concepts in the area and to work in a competent and conscientious way, as befits a professional who seeks continuous education to overcome the challenges that scientific and technological evolution imposes on the contemporary world.

This publication was prepared with the intention of being a valuable aid on the long road to be covered in both personal and professional life. Use it as an instrument for success in your career.

Editorial Board

Organization of the Study
and Research Book

Aiming to make your study easier, the contents of this book are organized into units, subdivided into chapters, in a didactic, objective and coherent manner. They will be addressed through basic texts, with topics to think about and other editorial resources intended to make the reading more enjoyable. At the end, bibliographic sources are also indicated, so you can deepen your studies with complementary readings and research.

The following is a brief description of the icons used in the organization of the Study and Research Books.

Discussion

Texts that try to instigate the student to reflect on a certain subject, placed either before the material itself or after a passage the author has added.

To think about

Questions included throughout the study so that the student can pause and reflect on the content studied, or issues that help his/her reasoning. It is important for the student to check his/her knowledge, experiences and feelings. These reflections are the starting point for constructing his/her conclusions.

Suggested complementary study

Suggestions for extra readings, films and websites for further study, forum
discussions or face-to-face meetings when appropriate.

Attention

Alerts for important details/issues that help to summarize or conclude the subject addressed.

To know more

Complementary information to elucidate the summaries and conclusions on the subject addressed.

Summarizing

An excerpt that aims to sum up relevant information of the content, making it easier for the student to understand more complex passages.

(Not) to conclude

Integrative text, at the end of the module, that motivates the student to continue
learning or stimulates other considerations on the module studied.

Introduction

We live in a society greatly influenced by the media. With the appearance of new technologies, new forms of international and intercultural communication have also appeared, which have led to new forms of translation. Cinematography, as a part of the media, has become one of the most widespread and influential forms of art.

The translation of cinematographic products is called audiovisual translation, though one can find many synonymous names, such as film translation, TV translation, screen translation and many others.

With the social phenomenon of globalization, translators face the urgent need to translate films in short periods of time but to a high standard of quality. Translation Studies theorists have taken up the challenge of developing the theoretical background and frameworks for performing audiovisual translation and managing its possible constraints and challenges.

Of course, the use of audiovisual aids is not restricted to the cinema industry, so we shall present other uses of them as well, such as for teaching purposes.

The aim of this fascicle is to give a theoretical overview of audiovisual translation, which is defined as the transfer from one language into another of the verbal components contained in audiovisual works and products. Thus, the object of the fascicle is mostly audiovisual translation from the Translation Studies perspective.

Objectives

»» Give a direct theoretical overview of audiovisual translation.

»» Define the various terms relating to audiovisual translation.

»» Show that audiovisual translation is relevant not only to the entertainment industry, but also to the social inclusion of impaired people.

»» Show some academic studies on the topic of audiovisual translation.

»» Instigate students to investigate this quite recent academic area of studies further by themselves.

UNIT I
TYPES OF AUDIOVISUAL

We may find three basic issues in the AV field: the relationships between pictures, soundtrack and verbal output; between the target language/culture and a foreign language/culture; and, finally, between the spoken code and the written one.

Talking about AVT: terms

Long before video became popular, the term used was film translation, although the term language transfer ignored the extralinguistic characteristics of these texts (sound effects, music, image, and so on). Now, people normally use the term audiovisual translation.

Multimedia translation can also refer to games, theatre, comics and digital resources such as CD-ROMs. Screen translation is currently a widely used term, though thought ‘too narrow’ by some (O’Hagan 2007, 158). The term versioning is used within the industry.

Types of AVT: revoicing

Dubbing is basically replacing a spoken dialogue track in one language with a translated dialogue track in another. Normally, it involves some adaptation of the text to the on-camera characters, which might include lip-synchronization for the moments when the chest or face of a particular speaker is visible in a medium shot. (Take a look at the following Erin Brockovich example: http://www.youtube.com/watch?v=CtGcmch4t-A)

We usually think of dubbing as interlingual, but many examples of intralingual dubbing also exist in feature films – cf. the more general phenomenon of post-synchronisation.

Voice-over

Voice-over (or, perhaps, half dubbing) normally happens when a TV programme, feature film, interview or documentary is adapted/translated and then broadcast almost in synchrony by a journalist or an actor. In voice-over, the original sound is either turned down or removed entirely after a few seconds.

<http://www.youtube.com/watch?v=Rr48yysAu0M>.


Free Commentary

Free commentary is a kind of adaptation for a new audience, featuring comments, omissions, clarifications and additions. Synchronisation is normally done with the on-screen images rather than with the soundtrack. Free commentary is regularly used for corporate/promotional videos, documentaries and children’s programmes, such as “March of the Penguins”.

AVT might also include in-vision signing for the deaf and hard of hearing:

<http://www.youtube.com/watch?v=tzWqbG3qF3w>.

Audio description is normally used for the partially sighted and the blind and it
regularly comprises some reading of information while describing what is happening
on the screen (costumes, facial expressions, body language, action etc.).

Hitchcock clip <http://www.youtube.com/watch?v=tzWqbG3qF3w>.

Audiosubtitling is normally intended for the partially sighted and the blind and necessarily involves the speaking of subtitles, either by a speech synthesiser or by a human actor. It may come with AD.
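Audiosubtitling pipelines typically start from a timed subtitle file. As a minimal illustrative sketch (not from this fascicle), the snippet below parses cues from the common SubRip (.srt) format — the text a speech synthesiser or human voice artist would then read aloud at each timestamp. The sample data is invented, the parser is simplified, and real .srt files can vary in formatting.

```python
import re

# Parse simplified SubRip (.srt) cues into (start, end, text) tuples.
# A TTS engine or voice artist would speak each cue's text
# between its start and end times (in seconds).
CUE = re.compile(
    r"(\d+)\s*\n"                       # cue index
    r"(\d{2}:\d{2}:\d{2}),(\d{3})\s*-->\s*"
    r"(\d{2}:\d{2}:\d{2}),(\d{3})\s*\n"
    r"(.*?)(?:\n\n|\Z)",                # text runs until a blank line
    re.S,
)

def to_seconds(hms: str, ms: str) -> float:
    h, m, s = (int(x) for x in hms.split(":"))
    return h * 3600 + m * 60 + s + int(ms) / 1000

def parse_srt(srt: str):
    cues = []
    for m in CUE.finditer(srt.strip()):
        start = to_seconds(m.group(2), m.group(3))
        end = to_seconds(m.group(4), m.group(5))
        # Join multi-line cue text into one spoken utterance.
        text = " ".join(m.group(6).splitlines())
        cues.append((start, end, text))
    return cues

sample = """\
1
00:00:01,000 --> 00:00:03,500
Good evening.

2
00:00:04,000 --> 00:00:06,000
Welcome to the
programme.
"""

for start, end, text in parse_srt(sample):
    print(f"[{start:.1f}-{end:.1f}s] {text}")
```

In a real audiosubtitling workflow, the synthesiser would also have to duck or pause around the audio description track so the two voices do not overlap.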

Videogame translation (either subtitling or dubbing) covers the ‘pre-rendered cinematic elements known as cut-scenes’. It is now usually known as videogame localization or games localization.


ABBREVIATIONS AND ACRONYMS

CALL Computer Assisted Language Learning
CD Compact Disc
DVD Digital Versatile Disc
EFL English as a Foreign Language
IM Instant Messaging
LCD Liquid Crystal Display
L2 Second Language
OHP Overhead Projector
TEFL Teaching English as a Foreign Language
VCR Video Cassette Recorder
CHAPTER 1
Types of audiovisual aids used for teaching

Different types of visual aids

Several types of visual aids exist; some are in common use, while others are being left behind by advances in technology. You will find here some advice to help you make the most of the ones that are commonly used.

PowerPoint (or equivalent)

What is PowerPoint?

PowerPoint is a computer program that allows you to create and show slides to support a class. You can combine text, graphics and multimedia content to create professional classes. As a class tool, it is normally used to structure and organise your presentation, to create a consistent and professional format, to provide a backdrop for your class’s contents and to bring animation to your slides in order to give them greater visual impact.

The software has become greatly popular, and you have most certainly seen it used by your teachers, professors and fellow students, or even in presentations unrelated to the university. As it is the world’s most popular presentation software, learning how to present using PowerPoint will naturally increase your employability. Used well, PowerPoint can improve the clarity of your classes and presentations and help you to illustrate your message and engage your audience.

Not all classes require support from PowerPoint, of course, so you should consider whether it is appropriate or not. This decision will need to take into account the venue of your class, the availability of equipment, the time available and the expectations of the students. Whether you choose to use PowerPoint or not, you should carefully plan your class in order to reach your objectives.

In short, it is now the most common visual aid. If it is used well, it can really help your presentation; on the other hand, if it is used badly, it might bring the opposite result. The general principles are: use a big enough font (minimum 20pt), keep the background simple, use animations when appropriate, and make things visual; don’t make the font so small that it can’t be read, don’t use a fussy background image – it gets distracting – and don’t use countless slides full of bulleted lists that all look alike.

Overhead projector for slides and transparencies

Slides and transparencies are normally displayed on the overhead projector (OHP), a very useful, although old, tool found in most schools and colleges. It enlarges and projects your slides onto a wall or screen with no need for the lights to be dimmed.
These slides can be produced in three ways:

»» spontaneously produced slides: these can be written while you speak in order to illustrate some of your points, or to register comments from the students during the class;

»» pre-prepared slides: these can be images or words produced on a computer or handwritten/drawn;

»» a mixture of both: try adding to pre-prepared slides while giving your class to create a sense of movement, signal detailed interrelationships or highlight change.

A useful rule is to use 18/20-point text if you are making slides with text on a computer; this way you can make sure the text on each slide is big enough to be read from the back of the room. Of course, this will also help reduce the quantity of information presented on each slide.

So, you should avoid giving your students or your audience long texts or overly complex diagrams to read, since this will limit their ability to follow what you are saying. Also, always try to avoid lists full of abstract words: they are ultimately uninformative or misleading.

Professors and teachers looking for modern ways to develop understanding and communicate information may not consider OHPs their first choice, mostly because, when overused, they rapidly lose their efficacy by boring students. Nevertheless, when used appropriately, they still prove quite beneficial.

An OHP needs appropriate classroom space. Ideally, it would be placed near an outlet, with an extension cord if necessary, at the front of the room on any flat surface, since classroom desks, which are normally angled, are quite problematic unless you can use some books to prop the projector up.


Using an OHP to share your transparencies with the class clearly helps ease group discussions. Groups in the class can also quickly record their conversations and work to share with the other students. Such strategies strongly benefit students who respond better to visual learning cues.

Although OHPs seem out of date in most of our now more technologically advanced classrooms, they still provide valuable back-up for those times when the Internet (or any other technological tool) fails to work properly and the professor or teacher still needs to show some visuals to the whole class. Professors and teachers may keep salient information on the transparency to carry on with an alternate lesson, previously planned and prepared.

Hearing-impaired and deaf students all benefit from OHPs used to display or add visual aids to the discussion or lesson. Professors and teachers need to remember to partially or totally dim the lights in order to make the image more clearly visible, but these students may also need to see their classroom interpreters along with the images on the OHP.

Flip chart

A flip chart is a large pad of paper on a stand. It is a quite flexible way of registering
information throughout your class – you may even want to use pre-prepared sheets for
some of the key points.

You should register information as you carry on with your class, keeping only one main idea on each sheet. Then you may flip back through the pad in order to recap your main points. You may also use the turning of a page to clearly mark your progression from one point to the next. Always remember to keep your writing readable and clear and any diagrams as simple as possible.

Flip charts come in several forms, though they are most commonly displayed on a tripod.

All the text is usually handwritten with markers, and the charts may include figures. Any sheet can be flipped over at any time by the presenter in order to move on to a new page.

A very useful feature is that some flip charts carry a reduced version, such as an outline, of the page facing the audience printed on the back of the previous page, which allows the presenter to see the outline of what the audience is seeing. Others may have teaching notes printed on their backs.

Finally, flip charts may be used in several different settings: as a drawing board for art students, in teaching institutions of any type, in classrooms of all kinds and in any kind of presentation in which the paper pads are pre-filled with information on a specific subject.

White or black board

White boards and black boards, the latter also called chalkboards, can be very useful in helping to explain sequences of routines or ideas, especially in the sciences. Professors and teachers might use them to clarify their titles or to register their key points as they introduce the class, since this provides them with a fixed list to help them recap as the class goes along.

Professors should not expect students to follow spoken descriptions of a given process or experiment alone. Instead, they should write each of the stages on the board, including any complex terminology, as in a glossary, or any precise references, in order to guide students in taking more accurate notes.

Nevertheless, once they have put something on the board, they must either rub it off or leave it there – and both actions may prove distracting to students or audience. Also, they should always make sure students have written down a particular reference before rubbing it all off – one of the most frustrating things at school or college is not being given enough time for these notes.

If it is YOU using the tool, avoid leaving out-of-date points from an earlier moment of your class on the board, since this might confuse your students. If you need to write ‘live’, don’t forget to check that your students or audience can clearly read your writing.

Move away after you write something on the board so that students can see what you have just written. Remember that your students should always have a clear view of the board. Also, always be careful not to block learners who might be sitting at the sides of the room.

With classes of young learners, you may need to develop the ability to write on the board while still keeping eyes in the back of your head. So never turn your back on your class for any long period. The best professors and teachers write on the board while still keeping an eye on their students.

Always write as clearly as possible on the board and try to make sure the words/text are large enough for everyone at the back of the class to see and comprehend.


With a blackboard and chalk, make sure the board is washed often so that the writing stays clear. With a whiteboard, on the other hand, make sure you are using a pen in a colour that everyone can read – black or blue are best. Keep the coloured ones for highlighting points.

You should practise writing in straight lines all along the board. Of course, in some languages the handwriting style, or even some letters, might look a bit different. Always try to point out the relevant differences to your students and make sure they can all clearly read what you have written.

Another important tip is to check what you write as you write it. Many students have visual memories, so we have to be careful about the accuracy of grammar and spelling, particularly if you intend to have your students copy it down in their notebooks for further study.

Always check with your students that they are ready for you to clean the board. If you are waiting for some to finish an exercise or to finish copying, never leave the others with nothing to do: ask them to start the warm-up for a coming activity orally, or to work on a personalised example while they wait.

Organising your board

If your board is untidy and messy, what your students write in their notebooks will be messy as well.

It is always a good idea to divide the board into as many sections as the topics or subjects you are going to lecture on. You should reserve at least one part of the board for use throughout the lesson, which can be reused and cleaned off, and another part for the most important information, which can stay there for the entire lesson.

For instance, you may list the essential aims/activities for that particular class so that your students can see what is still to come, and tick items off as they are achieved along the class. At the end, you might want to review the lesson aims so students can actually evaluate what they think they have learnt.

If you are teaching older learners, you may want to insert other important information,
such as vocabulary or key grammar points which might be needed for the class. You
may want to leave the lower part of the board empty for you, or even your students, to
write on.


A professor or teacher may use the board in several different ways in the classroom; it is not only for writing things up, such as new vocabulary. For instance, you might use your board to reinforce or support oral instruction.

Another example would be writing the page number for a given exercise on the board in a bigger class, which saves the teacher or professor a lot of repetition. During project work or group work, you might use the board to show the organization of your class.

You can insert items for correction from oral activities, short texts, exercises or messages.
In this special case, coloured pens or chalks are quite useful for inserting dialogue parts.

You may also use your board to provide records of how a word is used, of structures and of new words, or you may brainstorm new vocabulary with the class in a spidergram. Of course, with more advanced classes, you might want to provide a record of a class discussion or even give some help with planning for writing.

You may want to use the wide surface of your board to fix and show all kinds of items – flashcards, pictures and posters. Also, you may have students come out to the board to talk about or point to various items. Online images or magazine and newspaper pictures can be used for a variety of spoken activities. Flashcards, the most popular among professors and teachers, may also be used for a variety of games, not only for simple matching activities.

Always try to encourage students to engage with the activity and come out to the board to describe, order, select or choose pictures, since all of this will probably make your classroom a more interactive place as well as reduce excess teacher talking time.

You may always display other items, such as authentic materials – photos, adverts, maps – as well as your students’ own work. Remember that you do not need to stick to the board.

You could additionally (or solely) display items all around the room, especially if they are not big enough for the whole class to see clearly from the front. On top of that, you may also ask your students to look at the materials while moving around.

Playing games

There are countless different games which can be played using just the board. Professors and teachers regularly need a repertoire of interesting board games to use as lesson-ending activities, fillers or warmers that require no preparation at all.


You can play many other games apart from the common and traditional ones of noughts and crosses (answering questions for an O or an X) and hangman.

‘Pictogram’, for instance, can be adapted to all levels (students draw a picture and the others guess the word). When teaching younger learners, spelling races prove very popular. Most word games are an excellent way of revising vocabulary and settling classes. You may use jumbled sentences or anagrams, or even some very basic words with missing vowels for very young learners.

Using visuals

You do not need to be a genius at drawing in order to use drawings and pictures with your students. Some of the worst drawings may bring the most fun to a class. One thing you should try is to master some basic faces with expressions, and stick men, for example.

If you like explaining stories and texts to your students, drawing pictures is definitely an essential skill you should develop. So why not practise storytelling with some basic pictures fixed on the board? Remember that you can always ask your students to come out to the board and draw too – this has always been a fun activity at whatever level your students are. You may also create picture stories along with your students and use them later in your course.

Some other visuals which are useful to draw would be large-scale pictures such as a
plan of a house, a plan of a town, maps etc.

One of the most useful and most commonly used visual aids in teaching is the chalkboard.
It is a board with a smooth surface, usually painted black or dark green, for writing on
with chalk.

Chalkboards were originally made of smooth, thin sheets of black or dark grey slate
stone. Modern versions are often green or brown and are thus sometimes called a green
board or brown board instead.

For many schools in the world, a chalkboard is the main teaching resource. A chalkboard
is a reusable writing surface on which text or drawings are made with chalk or other
erasable markers.

They are widely used in all sectors of education and training and are most suitable for displaying notes and diagrams during a lesson and for working through calculations or similar exercises in front of the class. They can be used repeatedly, as they can be easily cleaned with a duster. They also give both the professor or teacher and the students a nice opportunity to practise their speaking skills.

You should try to keep your board interactive

It is always a good idea to ask students to come to the board to work, present, write or even draw. The same works with group activities.

Even discipline can be worked on with board activities – end a noisy class by giving the students a short word game or a copying exercise. One thing widely used by teachers with younger children is writing a child’s name up on the board, but nowadays there is some discussion about whether this kind of exposure in front of the whole group is actually effective.

Of course, your board is a tool for organization too, so you should also use it to keep yourself on track with a lesson, or as a memory store for the parts of the class still to come.

Paper handouts

Handouts are normally quite useful. You may use a handout if your information proves too detailed to fit on a slide or into a regular lecture, or if you need your students to take away a full record of your findings. Of course, you should consider the benefits of giving out your handouts at the right part of the class: beginning, middle or end.

The problems are simple and direct: given out too early, handouts may become a distraction; given out too late, and your students might have already taken a great number of unnecessary notes; given out in the middle, and students will inevitably read rather than listen to your own words.

One simple and good way of avoiding such pitfalls is to give handouts out slightly incomplete and at key stages during the class. Even with grammar, the teacher may highlight the missing details orally, which encourages students to fill in the gaps in their handouts. They seem to enjoy the competition instantly.

Video (DVD and YouTube)

The most relevant aspect of using video is that it gives you an opportunity to show stimulating visual information. Normally, teachers and professors use videos to bring sound, pictures and movement into their classes. But you should always make sure that the video is relevant to your content; otherwise it becomes only fun, not constructive. Always tell students what they should look for, and avoid showing any more of the video than is necessary for the tasks.

There is no doubt that the use of video in lower and higher educational settings is quickly accelerating in departments of all disciplines, from the arts, sciences and humanities to continuing professional curricula. That is because videos can be used not only for teaching, of course, but also for learning and studying anywhere, in and out of the classroom.

Video is very often attractive as a way to capture lecture contents and to present direct instruction. It is believed that, of all the technological devices involved in a given learning experience, video is frequently the most resource-intensive and the most visible, so it becomes natural to assume that it will be the most impactful in any teaching process.

Indeed, it is a powerful medium but, as with anything else used in the teaching process, video must be made with strong pedagogical choices in mind in order to be most effective. The lecture is just one strategy on the instructional palette, and video is but one tool in the media toolbox.

Video can also be designed for presenting case studies, interviews, digital storytelling,
student directed projects, and more. Choosing the appropriate instructional strategy
and pairing it with an effective media format is part of the analysis performed during
your course design process.

The objective of such a resource should be to identify a set of best practices which you can apply to the types of video you might make as supporting material for students’ learning tasks, to ensure that your video is as effective and engaging as possible.

Artefacts or props

Artefacts and props can often prove very useful when making a presentation or giving a lecture. If you decide to bring an artefact to your class or lecture, just make sure the object can be clearly seen, and always be prepared to be asked by your students to pass it round, or to move it to different parts of a larger room to help your students view it in detail.

Always remember that this will take some time, and bear in mind that, when students are immersed in the process of looking at an object, they will quite often find it difficult to pay attention to your actual talk. So you should conceal large props until the moment in the class when you need them, as they will almost certainly distract your students’ attention.

Non-projected Materials

Non-projected materials are the different forms of instructional materials or teaching
aids that do not require any form of projection before being used. These include
textual and non-textual materials, which refer to all the print and non-print materials
used by teachers and learners in the instructional process.

The print materials are textbooks, magazines, periodicals, journals and newspapers;
the non-print materials include the chalkboard, charts, maps, graphs, posters,
specimens, objects, handouts, models, flip charts and wall charts.

Charts

Charts are pictorial ways of representing relationships between variables, objects,
ideas or things. They summarize and present a great deal of information in a small
amount of space. Charts are graphic teaching materials and include diagrams, posters,
pictures, maps and graphs. They have several advantages.

They are used when teaching and learning speaking skills and when discussing the
relationships between things. Their main function is to show relationships such as
comparisons, relative amounts, developments, processes, classification and
organization. They are easy to use and can be displayed in a variety of ways.

You can use a flipchart, a poster, or an overhead projector, which can project a giant
image of your chart on a screen. Besides, whenever electricity is unavailable or
interrupted, charts are useful to illustrate group reports and to provide a written
record of points made by students as they work to improve their speaking skills.

Maps

Maps are another kind of non-projected material: representations of the whole or
parts of the Earth's surface. They indicate location, distance, extent, area, and land
and water forms. They are useful for enriching basic speaking skills, understanding
direction, recognizing scale, computing distance, and pronouncing and interpreting
symbols and labels.

Beyond the geographical world, maps also have essential benefits in the world of
language. They give language teachers a picture of the development of world
languages. Besides, they bring great benefits for both students and teachers in most
speaking-targeted classrooms: students can talk about the countries they select from
the map, using the skill.

Graph

A graph is a pictorial representation of statistical data in an easy-to-understand
format. Because statistics are abstract summaries of many examples, most listeners
find that graphs help make the data more concrete when practising speaking skills.

Many EFL teachers try to teach intonation with the use of graphs, while students can
train themselves in how to make such sounds in order to be fluent enough in speech.
Graphs seem quite effective for showing overall relationships and trends among
syllables.

There are several types of graphs: bar graphs, line graphs, pie graphs and picture
graphs. Many of today's computer presentation programs can easily convert statistics
into such visual forms.
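To make this conversion concrete, here is a minimal, hypothetical sketch in Python that turns a small set of statistics into a visual form, in this case a simple text bar graph. The function name and the sample data are invented for illustration; real presentation programs do the same job graphically, producing bar, line, pie and picture graphs.

```python
def text_bar_graph(data, width=20):
    """Render a dict of label -> value as horizontal text bars."""
    largest = max(data.values())
    lines = []
    for label, value in data.items():
        # Scale each value to the chosen bar width.
        bar = "#" * round(width * value / largest)
        lines.append(f"{label:<10}{bar} {value}")
    return "\n".join(lines)

# Invented class statistics, purely for illustration.
scores = {"listening": 70, "speaking": 35, "reading": 60}
print(text_bar_graph(scores))
```

Even such a rough chart shows relative amounts at a glance, which is exactly the function this section attributes to graphs.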

Wall Chart

A wall chart, on the other hand, is a large flat printed sheet of paper, card or cloth
which displays related sets of information in the process of teaching and learning
speaking skills. A wall chart contains much more information than a poster, which
usually conveys only one message.

It can be used as a teaching chart for a lesson. Besides, it can be used to stimulate
interest and provide motivation, act as a source of ideas or topics for discussion and as
memory substitute and an information store.

Wall charts must be large enough for the entire class to see them during speaking
practice. They should therefore be displayed and fixed where everyone has instant
access to them.


Flipchart

Flipcharts are often used in business presentations and training sessions. A flipchart
is normally just a large paper pad resting on an easel, and it is easy to practise
speaking skills with one. During your presentation, you need only flip the page to
reveal your next visual.

Flipcharts are best used when you have brief information to display or when you want
to summarize comments from the audience during a presentation. Most experienced
flipchart users recommend lined paper to keep your words and drawings neat and
well organized.

Model

A model can be defined as a scale representation of a given object. It may be smaller
or larger than the actual object. For instance, you might want to make a model of a
human tongue position, with the aid of cotton and paper, in order to practice some
speaking skills. Of course, this one would naturally be smaller than the actual object.
On the other hand, a model of the vocal cords would probably be larger than the real
ones.

Make sure, however, that any model you use is large enough to be seen by all members
of your audience. A model can be an enlargement or a reduction of its original; it
represents a replica of the original item. It can be used effectively to explain the
operating principles of various types of equipment.

Models are especially adaptable to small-group discussions of speaking skills in which
learners can ask questions. A model is more effective if it works like the original and
can be taken apart and reassembled.

<http://www.models/mock-ups.edu>.

Objects

An object refers to a real thing collected for instructional use, while a specimen is a
sample of a real object or material. Objects and specimens are both real, but they are
not substitutes for each other. They are three-dimensional in nature and provide
firsthand, direct experiences.

Another category of material resources that can be valuable in language teaching is
actual or real objects used in the classroom. These objects are called realia, and they
can have a powerful impact on students' interest and motivate them to acquire
speaking skills more easily. Such objects bring the real outside world into the
classroom.

Using objects and specimens as teaching aids, a teacher can teach speaking skills and
students can learn them easily. They can be touched, smelled, heard and even tasted,
as well as seen, which can motivate learners to acquire speaking skills quickly.

Posters

Posters consist of pictures, words and/or numbers drawn on a large, flat sheet of
paper, cloth or card. They convey important messages and catch the eye of viewers
easily. They can be based on audio stories used for acquiring speaking skills and are
often used to introduce the story to the students before the audio is played. Posters
should be as big as possible, with clear illustrations and a short message.

They are useful for communicating simple and clear information to individuals and
groups (for example, about speaking skills) and can be used effectively both in schools
and in development work. Posters also help to introduce and elicit the speaking skills
of a specific lesson. Moreover, they enhance students' speaking skills as students
discuss and practise them.

Pictures

Pictures, on the other hand, are the most commonly available graphic aids. They
include photographs, paintings and illustrations clipped from periodicals. Pictures
are effective additional aids in an EFL classroom. As the saying attributed to a
20th-century Chinese philosopher goes, "one picture is worth a thousand words".

Using different relevant pictures makes the classroom interesting and interactive.
Pictures help the teacher visualize the content of the class, and they normally make
learners a bit more attentive and far more engaged in school tasks.

A lesson becomes more contextualized, and even more real, when pictures are chosen
to introduce a topic to the learners. Most learners also get an overview of the lesson
and are able to generate ideas better when they use pictures, since pictures improve
learners' ability to comprehend any given subject.

<http://aaudiovisualaids.blogspot.com/2010/10/av-aids-in-teaching.html>.


Cartoon

The word cartoon has various meanings based on several different forms of visual art
and illustration. It is a visual resource with a humorous character. Even though the
term has evolved over time, its original meaning belonged to fine art, where a cartoon
meant a preparatory drawing for a piece of art such as a painting. It conveys a subtle
message. In a cartoon, the features of objects and people are exaggerated along with
generally recognized symbols. In short, a cartoon is a figurative and subtle graphic aid
for EFL speaking skills.

Flashcard

According to the Teaching and Applied Linguistics Dictionary (2010), a flashcard is a
card with words, sentences or pictures on it, used as an aid or cue in language lessons,
especially in teaching and learning speaking skills.

<http://avaudiovisualaids.blogspot.com/2010/10/av-aids-in-teaching.html>.

Flashcards have been used in teaching English as a second language not only for
teaching speaking skills, but also for teaching prepositions, articles, sentence
structures, tenses and phrasal verbs. They are useful aids for teaching students and
having them practice speaking drills in foreign language learning.

Flashcards facilitate phoneme recognition when students are poor speakers. Learners
often encounter and are exposed to new words through flashcards, and most students
go on pronouncing them to review their skills afterwards.

Working with flashcards improves speaking-skill retrieval because learners are
presented with an L2 word on one side of the card and its pronunciation on the other.
Finally, they can easily practice speaking and recall meanings, inasmuch as the
phonemes appear on the two different sides of the card.

Why use all these? – A Brazilian perspective

We shall now present some studies from English-speaking countries which have
emphasized the relevance of visual grammar in most English language learning
contexts. For instance, the New London Group refers to "the increasing multiplicity
and integration of significant modes of meaning-making, where the textual is also
related to the visual, the audio, the spatial, the behavioral, and so on".

Visual literacy might be connected to multimodality, which refers to the use of
multiple semiotic resources with the aim of producing or interpreting meanings. In
this part of our fascicle, we first expand the notion of communicative competence to
include visual literacy as a quite relevant skill for ESL/EFL students to develop, so
they are able to interact more effectively with people from English-speaking
communities.

After that, we shall provide readers with some discussion of the importance of a
multimodal approach to ESL/EFL teaching, as well as offer a number of suggestions
on how to connect visual literacy more specifically to young learners, as it may be
found in advertisements and video games, drawing their attention to task-based
pedagogical activities which might be integrated into the EFL syllabus.

Multimodal communicative competence (MMCC)

This topic involves the use and knowledge of language in terms of the spatial, audio,
gestural and visual dimensions of communication, including, of course, computer-
mediated communication. Most literacy practices include these various semiotic
meanings, with which all ESL/EFL learners must be familiar.

In the construction of a student's identity in ESL, for instance, the contextual nature
of both writing and reading must be emphasized, as well as the way regular literacy is
deeply bound up with specific sociocultural relationships, institutions and contexts.

It seems that MMCC will allow ESL/EFL learners to become, and feel, better prepared
for multiple literacy practices in both their sociocultural and professional experiences
with native and non-native speakers of English.

When students have easy access to the Internet in class, young students can interact
with others from all around the world through Twitter, Instagram, blogs, fotologs,
Facebook and other e-environments. Such experiences with diversified types of
multimodal texts in this ever-growing English-speaking world can quickly become a
productive means of engaging students' MMCC in English.

The importance of multimodality in the ESL/EFL process of teaching and learning, in
relation to the newest demands in terms of literacy, is that it is no longer acceptable
to think of literacy as isolated from a large array of economic, technological and social
factors.

Two related yet distinct issues deserve to be especially highlighted: on the one hand,
the move from the long dominance of writing over the other skills to the new
dominance of the image; on the other, the change from the dominance of the medium
of the book to the dominance of the medium of the screen.

These two movements are producing a new revolution in both the effects and uses of
literacy and of the many associated means of communicating and representing, in
every domain and at every level.

Our concern in this part of our fascicle is actually to bring up new possibilities for ESL/
EFL teaching. ESL/EFL teachers should always try to understand that “to communicate
is to work in making meaning” (KRESS, 2003:11) and that “visual structures realize
meanings as linguistic structures do also, and thereby point to different interpretations
of experience and different forms of social interaction” (KRESS and van LEEUWEN,
1996:2).

A further relevant educational feature of multimodality is that, whatever the subject
or discipline, students nowadays are expected to produce and interpret various texts
which integrate verbal and visual modalities, as well as more complex interweavings
of verbiage, image and sound in most filmic media and in other performative
modalities.

It must be emphasized that learners need to move beyond their natural practical
expertise in the computer-based technologies involved in the school curriculum and
integrate some understanding of frameworks for semiotic analysis, for instance, the
grammar of visual design by Kress and van Leeuwen.

With the aim of implementing visual meaning-making resources, as well as
reconceptualizing literacy in EFL as a united enterprise of visual and verbal
meaning-making, professors and teachers must become familiar with the basic
principles of systemic-functional linguistics.

People use language to represent any kind of experience of the world, including the
worlds in their own minds, and to describe states and events and the participants
involved in them, which is the ideational metafunction. People also use language to
interact with one another, to establish or change their relations, to express their own
viewpoint on things in the world, to influence others' behaviour, and to establish or
maintain social relations, which is the interpersonal metafunction.


Finally, when using language, people organize their messages in various ways which
may show how they combine with other messages in their context, as well as with any
wider context in which people might be writing or talking; that is the textual
metafunction.

In terms of visual grammar (KRESS and van LEEUWEN, 1996; 2006), the ideational
metafunction is known as representational, the interpersonal as interactive, and the
textual, as compositional.

Ideational/representational structures visually and verbally construct the nature of
events, the circumstances and instances in which they happen, and the participants
and objects involved; interactive/interpersonal visual and verbal resources construct
the nature of the relationships among speakers and listeners, writers and readers,
and also viewers and what they view; textual/compositional meanings are linked to
the distribution of information value, or the relative emphasis among elements of the
image and text.

Understanding the textual, interpersonal and ideational metafunctions in relation to
visual and verbal semiotics, as well as how they are used together in communication,
is an asset for EFL students and teachers.

Meaningful opportunities in multimodality for ESL/EFL learners

Whether an ESL/EFL course is designed for teenage or adult students, it must provide
a link between linguistic form and function (understood here as language use, speech
function). In ESL/EFL classes, the link between these two important aspects of language
learning should be included in classroom activities, as “learners must acquire both to be
fully competent in their second language” (DOUGHTY, 1998: 128).

As SLA (Second Language Acquisition) studies have shown, a focus on form which
promotes meaningful communication seems to be the best practice. “Striking a balance
between emphasizing accurate production of second language forms and promoting
meaningful communication in real contexts has become a vital concern” (DOUGHTY,
1998:149).

In this respect, developing a metalanguage for visual literacy in the classroom and
using visuals might lead to a great deal of meaningful language practice.

During the last thirty years or so, communicative language courses around the world
have proliferated, and pedagogy has changed from "language in isolation to language
as communication" (DOUGHTY, 1998:134), offering the opportunity for ESL/EFL
learners to use the target language in class activities.

However, even though "the tenets and methodologies of the communicative approach
arguably constitute the dominant paradigm in current English language teaching"
(POOLE, 2002:75), in many parts of the world EFL teaching still focuses on forms
only, on grammatical terminology per se, to the detriment of the development of
meaningful, fluent use of English.

One may see the necessity of treating focus on form as scaffolding: not as monotonous,
unstimulating lists of long, traditional grammatical terms, but as a practical tool for
practice and interaction.

Theoretically, from a systemic-functional linguistic (SFL) perspective on second
language pedagogy, much of what is called 'interaction in the classroom' also proves
important, and a basic premise is that "language development arises from general
circumstances of use and communicative interaction" (PERRETT, 2000:93).

Christie (2002), who analyses classroom discourse drawing on SFL theory and
Bernstein's work on pedagogic discourse, also sees the need for teachers to provide
scaffolding and important pedagogic opportunities, help students understand
technical language, and integrate instructional and regulative registers.

From her analysis of several classes in the English-speaking world, she proposes an
appropriate integration of the regulative and instructional registers, with teacher
intervention to effectively help students to develop, classify and frame knowledge. She
calls for "an education that values knowledge and that values the learner, by seeking
to make available to learners as explicitly and unambiguously as possible significant
and useful information and ideas” (CHRISTIE, 2002:179).

Incorporating multimodal skills in the teaching of English

One interesting way for professors and teachers to begin the journey towards the use
of visual literacy in their EFL classrooms and lectures, specifically for youngsters, is
to gather different types of pictures from leaflets, magazines and newspapers and
organize them into a data bank, in sets which emphasize the three functions proposed
in Kress and van Leeuwen's (1996; 2006) grammar of visual design: compositional,
that is, the arrangement of elements in a given visual space, with subsets containing
different categories; interactive, which refers to the various relations between and
among the interlocutors; and representational, which corresponds to concepts or
actions.

In this approach, students are invited to contribute to the existing data bank, which is
naturally a motivating factor for the expected interaction. Just as with professional
databanks, students are able to create their own picture databank and integrate it
into the required future activities.

Whether EFL teachers who subscribe to forms of communicative language teaching
tend to support task-based or content-based instruction (or even a combination of
the two), it seems that incorporating a multimodal approach can be a rewarding
experience, as I point out (HEBERLE, 2006). My experience in EFL teacher training
courses in Brazil has shown that teachers respond positively to an integration of TBI
or CBI with multimodality and multiliteracy. The main points emphasized are that:

1. The approach may give opportunities for groups or pairs of students to
negotiate meaning in English when they must solve a problem. For
instance, the teacher may assign them specific task-based activities which
could include pictures of people, places or objects in advertisements.
Students would have to choose one specific picture, giving reasons for
their choices based on their intuitions and interests, but their analysis
should be based on the multimodal metalanguage.

2. Regarding a link with real-world contexts and topics of students'
interests, as proposed by CLT, students may be asked to bring their
favourite magazines to class so as to discuss the visual-verbal synergy
and the different meanings available in them. Heberle and Meurer (2007)
present a brief multimodal analysis for EFL classes.

3. In terms of joining academic texts and EFL, following CBI, teachers may
ask students to discuss aspects of their cultural heritage, such as of their
town, their family background or any culture that interests them. They
would have to carry out research in books and magazines, as well as in
on-line museums and/or art galleries, and then present their findings in
oral or written assignments. Christie (2002) provides a rich discussion of
secondary school geography lessons which can be adapted to EFL classes.

4. Students may access different hyperlinks at http://www.allposters.com/
and choose a category to discuss (for instance, Entertainment or Art, as
seen below). First, they would give a general view of the category itself
regarding the main topics listed, and then they would select one picture
for a more detailed analysis.

CALLOW, J. (Ed.). Image matters: visual texts in the classroom. Marrickville, NSW:
Primary English Teaching Association, 1999.

UNSWORTH, L. (Ed.). Researching language in schools and communities. London
and Washington: Cassell, 2000, pp. 222-244.

The advent of video games as motivation for learning English

Video games have always been one of the most popular phenomena for youngsters
worldwide, and if EFL professors and teachers are able to incorporate particular
aspects of such games into their classes, chances are that students will indeed feel
motivated to participate in the proposed activities. Such games can be a motivating
and definitely suitable resource for most EFL students to apply the visual grammar
they have studied and, in doing so, naturally learn English.

As suggested by Unsworth (which you can read in the previous “Saiba Mais” section)
“the first phase of the task with video games would involve understanding the game
instructions and strategies in English – a good opportunity for collaborative peer work
– and actually playing the game”.

Unsworth also adds that

it is useful to start off using games that are based on movies since the
background story and characters are likely to be well known to teenagers
and they can use their background knowledge to assist them to negotiate
the English and the new work on reading images grammatically.

The video game 'Ultraviolet' is based on the movie released by Sony, at http://www.
sonypictures.com/movies/ultraviolet/site/ (access June 2019). If, in the future, the
video game is no longer available, the lecturer may use the actual film, which is
available in most video rental stores or online.

While looking at this picture, the students can make suggestions as to what Violet is
doing or what is going on. The picture is a clear example of a visual narrative
representation, one which also emphasizes the concepts of beauty and power: Violet,
an elegant, long-haired, sexy brunette, is positioned at an oblique angle in relation to
the viewers, but looks directly at them, demanding a reaction, and points a sword
towards us/them. In this case, viewers can foresee her courage and power.

The teacher can also discuss different aspects of visuals with students, such as the
colours involved, the descriptions of the scenes and the actions performed by Violet.
For this specific game, as in the film, each frame could be used by groups or pairs of
students for visual analysis and discussion. They can discuss the visual and the verbal
actions.

At http://xbox.gamespy.com/ (access June 2019), the teacher and students can also find
several games to analyse. For instance, students can watch demo videos of the game
The Godfather Xbox. If students know the movie and/or the book, they can compare
the similarities and differences between the different media.

Another popular game which may serve as an effective stimulus for the development
of students' practice in visual grammar and in EFL is SimCity (http://simcity.ea.com/
about/simcity4/overview.php).

Unlike the other games mentioned above, this game seems particularly adequate if
teachers are concerned with non-violence and a more peaceful setting. Here the players
can act as mayor and carry out tasks to “govern” their “own virtual metropolis”.

Notice that on the site, shown below, there are a series of commands (imperative form)
for students to examine, and visually speaking the player has power over the scenes, as
s/he is looking down, from a high angle.

These examples are just an illustration of the possibilities available for teachers. If they
do not have easy access to the Internet in their classes, they can provide transparencies
for the overhead projector and discuss them in class with their students.

EFL students have constantly explained that they learned a lot of English through video
games, as they wanted to play but had to understand the instructions, the verbs related
to actions and the descriptions of the scenes.

Different frames of the selected site for analysis can be used for classroom discussion.
The class can observe the actions in the narrative sections and the teacher can then help
students to construct a narrative with different frames, using specific metalanguage
from visual grammar to make their stories vivid. Alternatively, students may be asked
to examine the site outside class, as homework assignments, and bring pictures of their
own favourite video games to comment on in class.

With different kinds of video games, besides analysing the scenes visually in terms of
actions, the class can discuss the verbal actions and the wording used to describe the
main characters and their missions.


Another discussion might be proposed regarding the visual and verbal meanings and
violence, whether the violent scenes shown in a specific game can also be detected in
the visuals. Based on APPRAISAL choices, students could examine different kinds of
adjectives and explain how the visual representations correlate with the verbal choices.

Teenagers may find that examining visual-verbal meanings can be stimulating in
terms of their productive and receptive skills in English. I understand that applying
visual literacy in EFL teaching for teenagers represents a valid path for meaningful
learning. Task-based or content-based instruction with analysis of images and their
interpretive possibilities, and classroom discussions around them, such as those
suggested here, hold the potential to expand students' skills in learning English.

What we are also proposing is the integration of visual literacy skills into students'
learning, to make them understand and explore the notion that images are not
evident and obvious, but socioculturally constructed (KRESS and van LEEUWEN,
1996).

Thus, visuals should be seen not as a separate or add-on strategy, but as a valid tool
in EFL teaching and learning. I hope to have emphasized the relevance of multimodal
meaning-making in different literacy practices in teachers' and students' academic,
social/cultural and civic life.

No doubt the classroom activities presented are just suggestions, not strict rules to
be followed, and teachers need to consider their own institutional demands but, most
importantly, their own students' sociocultural and personal values.

For this, it is important to redraw the disciplinary and cultural boundaries of foreign
language study towards what some professors call “the pursuit of communicative
happiness”. We believe that incorporating a multimodal, multiliteracy approach to the
TESOL context may contribute to a better understanding of these boundaries.

Why research and develop this topic?

In this competitive world, where communication is necessary to ensure our survival
in a global society, the ability to use the English language is vital, especially in
business, the sciences and technology. It is necessary to encourage teachers and
students to have better contact with technology, to reinforce, practice and increase
knowledge in different areas through practical use of the English language.

Many enterprises and factories offer job vacancies to applicants who are capable in
English (both written and spoken). They promise good positions and better prospects
for those who use English fluently and accurately. Besides, TV stations such as the
BBC, ABC, VOA, etc. present different programs in English to those who are eager to
improve their English-speaking skills.

Moreover, English has an important place in educational programs, especially in the
learning and teaching of speaking skills. It is a helpful tool for this generation. The
modern world of media demands good knowledge of language skills, and the spoken
skill can be acquired with the use of audiovisual materials, given their assistance with
speaking.

Many researchers suggest that audiovisual materials should be applied in teaching
speaking skills. Teaching speaking skills through audiovisual materials thus makes
the teaching process easier, more effortless and more enjoyable.

However, researchers have regularly found little advancement in English-speaking
skills at the schools studied, which might be due to factors related to teachers,
students and administrative staff. Based on the researcher's personal observation as
an EFL teacher in the school, the students also lack interest in improving their
speaking skills.

On the other hand, administrative bodies were not very worried about the issue,
though they should have played an important role in paving the way for both teachers
and students to improve their speaking skills through the use of audiovisual
materials.

Many professors quote the popular saying attributed to Confucius that underlines the
need for students' involvement with audiovisual materials in order to practice
speaking skills: "I hear, I forget; I see, I remember; I do, I understand".

The above saying implies the following meanings. “I hear, I forget” is closely related
to one’s imagination. Traditionally, teachers depend too much on verbal exposition,
which makes students hear and then forget what has been said.

Unless the individual has a pragmatic, practical imagination, it will be difficult to
visualize objects and events. “I see, I remember”, on the other hand, means that we
remember more of what we see. As a sensory organ, the eye is very highly developed
compared to the other sensory organs.

It is quite natural that the knowledge gained through the sense of sight is more accurate
and permanent than that gained through the other senses. Hence, what one sees is
better remembered. More than 80 per cent of our knowledge is gained through our eyes.

Whereas “I do, I understand” implies that, when one is engaged in a practical activity
involving physical work (practical work in the laboratory, workshop or field), all the
senses are used to perceive. Knowledge is gained through the engagement of all the
senses.

This is conceived as learning by direct experience, and its outcome is pragmatic. A
lot of self-activity is involved in this way of learning. It is thus an ideal method of
making pupils acquire complete knowledge through the engagement of all the senses
by using audiovisual materials.

Undoubtedly, audiovisual aids are instructional aids used in the classroom to encourage
the teaching-learning process. As Singh (2005) defines: “Any device which by sight and
sound increases the individual’s experience beyond that acquired through reading may
be described as an audiovisual aid”. Audiovisual aids are instructional devices used in
the classroom to encourage learning and make it easier and more interesting.

Materials such as television, radio, projectors, filmstrips, models, maps and charts are
called audiovisual materials. The term “audiovisual material” is commonly used to refer
to instructional materials that convey meaning without complete dependence upon
verbal symbols or language.

Teaching students with the help of audiovisual materials keeps them more engaged in
their lessons and enlivens their fluency and understanding. Video presentations, slide
shows, PowerPoint and other media revolutionize the way teachers teach their students
and play an important role in their learning of speaking skills.

Besides, they have various advantages: they help to make the learning process more
effective and conceptual; they help teachers to grab students’ attention and build their
interest and motivation in the learning process. Audiovisual materials enhance the
energy level of teaching; they also help relieve overburdened classrooms and provide
students with a realistic approach and experience.

Different researchers classify audiovisual materials differently according to their uses.
They put forward different definitions of audiovisual materials, but these definitions
have something in common: all of them highlight the fact that audiovisual materials
mean “exposure to real language”.

“Audiovisual materials are materials that we can use in the classroom and that have not
been changed in any way for ESL students”. They are also recorded tools for improving
speaking skills that can be reused many times over. As we know, the term audiovisual
is basically a combination of two words: audio refers to what we can hear, and visual
refers to what we can see.


Some authors relate audiovisual materials to the principle of the six “Cs”: audiovisual
materials are devices that make the text more concentrated, compact/concise, coherent,
comprehensible, correspondent and codable. All such aids therefore make our
understanding clear through our senses.

According to Madhuri, AVM tools can improve students’ speaking skills several times
over, more than other methods. AVM can be defined as stimulating materials and
devices that combine sound and sight in teaching to facilitate learning by activating
more than one sensory channel.

The most widely accepted function of audiovisual devices is their use in aiding
understanding. Learning can be sped up by using models, movies, filmstrips and
pictorial materials to supplement textbooks. These devices bring color and significance
to the ideas presented by the teacher.

Abstract ideas can be made concrete in the minds of the students using these devices.
The use of devices such as pictures and objects arouses emotions and incites the
individual to action. Speaking ability can best be developed using audiovisual materials
(http://www.benefitof.net/benefits-of-visual-learning/).

However, these audiovisual materials are not without their challenges. Teachers may
face technical problems during their use or demonstration. Administrative problems
may also occur, bringing a negative impact on teachers’ use of audiovisual materials to
teach speaking skills.

Besides, while these materials are in use, the classroom environment also has its own
impact on students’ practice of speaking skills.

1. What are the most valuable audiovisual aids in your opinion? Explain
why.

2. In your own words, what are the main benefits of visual learning?

CHAPTER 2
Dubbing and Voice-over

Voice-over vs. dubbing: two sides of the same tone
If you have ever watched one of those old noir films in which a narrator, usually a
troubled one, keeps talking about his dire life in existential grief, then you are familiar
with voice-over in the movies. Applied in many ways in the cinema industry, which has
granted it iconic status in today’s pop culture, the technique also has a more practical,
everyday use in radio and news.

In an interview where the person speaks a foreign language, most production companies
use voice actors to record over the original audio of the interview. The viewer then
partially hears the interviewee speaking his or her own language in the background,
with the voice actor’s interpretation in the foreground.

Generally, companies mix the voice actor at a much louder volume and have the
translation lag a few seconds behind the original recorded track. This technique is quite
useful, since it allows the viewer to understand and hear the speaker’s words at once.
It is generally referred to as UN-style voice-over.

A more common audiovisual process is what is called dubbing. Of course, we are not
talking here about the electronic music genre of the same name; dubbing is basically
when all the sound elements are mixed, including the original production track and
the chosen additional recordings; together, they make up what the industry normally
calls the ‘complete soundtrack’.

In the video production world, the term “dubbing” is commonly used when the original
audio track is entirely replaced by the voice actors’ track. Contrary to voice-over
(UN-style), which normally preserves most of the original track underneath the voice
actor, the dubbing process must be carefully synchronized and timed to match the
speaker’s lips, intonations, and even meaning.

This is frequently called lip-sync dubbing. As one may imagine, such a process is lengthy
and arduous; quite often, the voice actor has to work with the film editors in a studio,
re-recording segments in which the audio and visuals are proving difficult to match.

Although a wide range of terms has been used in the past to refer to the translation
of audiovisual productions (screen translation, film translation and multimedia
translation, among others), audiovisual translation (AVT) seems to be the term most
widely used nowadays both in the industry and in academia, and will also be used
throughout this chapter.

In line with current research in this field, AVT is here understood as an umbrella term
referring to a wide range of practices related to the translation of audiovisual content.
Frequently, such practices involve making audiovisual programmes available and
more accessible to viewers who do not speak or understand the language of the original
script, and thus require interlingual translation in the form of subtitling or dubbing.

However, they might also involve providing audiences with sensory (e.g. hearing or
visual) impairments with accessible audiovisual material, frequently requiring
intersemiotic or intralingual translation.

In this respect, AVT also involves practices such as audio description (AD) for the
partially sighted and the totally blind, subtitling for the hard-of-hearing and the totally
deaf (SDH), as well as sign language interpreting (SLI). As the title makes clear, the
focus of this chapter is on dubbing and subtitling, the two most widespread interlingual
AVT modes used for the translation of films.

Nonetheless, an overview of the different AVT modes will be provided below and the
reader can investigate the literature cited further. References to other modes will also
be made throughout the chapter as necessary.

As far as the language transfer of audiovisual material is concerned, a distinction is
made between two fundamental approaches: revoicing and subtitling. In the former,
oral output is transferred aurally in the target language by inserting a new soundtrack;
in the latter, there is a change from spoken to written mode, and dialogue and other
verbal elements are transferred as written text on screen.

Within these two umbrella approaches, further classifications of AVT techniques or
modes can be established. Subtitling can be defined as a transfer mode consisting of
showing a written text, normally positioned on the lower part of the screen, which
attempts to recount the original speakers’ dialogue, as well as the discursive elements
that appear in the image (placards, inscriptions, graffiti, inserts, letters and the like)
and the information contained in the soundtrack as a whole.
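That written text is typically delivered in a subtitle file. One widely used format is SubRip (.srt), in which each cue is numbered and given a time range. As a minimal illustration, the Python sketch below formats one such cue; the helper names are invented for this example:

```python
def srt_timestamp(seconds):
    """Format seconds as the HH:MM:SS,mmm timestamp used in .srt files."""
    total_ms = round(seconds * 1000)
    h, rem = divmod(total_ms, 3_600_000)   # hours
    m, rem = divmod(rem, 60_000)           # minutes
    s, ms = divmod(rem, 1000)              # seconds and milliseconds
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def srt_cue(index, start, end, text):
    """One numbered cue: index line, time range, then the on-screen text."""
    return f"{index}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n"

print(srt_cue(1, 1.2, 3.95, "I'll be back."))
# 1
# 00:00:01,200 --> 00:00:03,950
# I'll be back.
```

Real subtitle work, of course, also involves reading speed, line breaks and positioning, which a sketch like this does not attempt to model.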


Within subtitling, it is common to distinguish between intralingual and interlingual
modes, the former including subtitling in the same language as the original dialogue
for foreign-language learning and SDH. When subtitling live programmes, SDH can be
produced through respeaking, a technique in which speech recognition software is used
to convert the original dialogue – which is respoken by a respeaker – into subtitles
(ROMERO FRESCO 2011).

In addition to these, classifications of subtitling modes also include surtitling, whereby
subtitles for opera and theatre performances are projected above the stage, and
fansubbing, which is subtitling done by fans for fans and normally distributed for free
over the internet.

As far as revoicing is concerned, dubbing and voiceover are the two most widely used
interlingual modes. Chaume (2012: 1) defines dubbing as a type of AVT which “consists
of replacing the original track of a film’s (or any audiovisual text) source language
dialogues with another track on which translated dialogues have been recorded in the
target language.”

Dubbing is often associated with lip synchronisation, emphasising the need to
synchronise the translated dialogue with the lip movements of the characters on screen.
However, not all cases of dubbing require lip synch, as when a character (e.g. a narrator)
is off-screen.

As a result, a distinction is often made between off-screen dubbing and lip synch
dubbing. Unlike dubbing, in voiceover there is no replacement of audio tracks,
but an overlapping: the original and the translated tracks of dialogue are presented
simultaneously to the target viewer, with the volume of the former lowered, though still
audible, to avoid confusion.

In this AVT mode, which is often associated with non-fictional programmes such as
documentaries but also used to translate fictional material in certain East European
countries, the translated dialogue track usually starts and finishes a few seconds after
and before the original dialogue (FRANCO et al. 2010: 43).
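The voiceover mixing just described (original track still audible at lowered volume, translated track lagging a few seconds behind) can be sketched as a toy Python function operating on lists of audio samples. This is purely illustrative; the function name, gain and lag values are assumptions, not a production audio pipeline:

```python
import math

def un_style_voiceover(original, translation, sr, duck_gain=0.25, lag_seconds=2.0):
    """Overlay a translated voice track on the original dialogue track.

    The original stays audible at a reduced volume (duck_gain), while the
    translation starts a few seconds late, mimicking the UN-style lag.
    """
    lag = int(lag_seconds * sr)                      # lag in samples
    length = max(len(original), lag + len(translation))
    mix = [0.0] * length
    for i, s in enumerate(original):                 # ducked original track
        mix[i] += duck_gain * s
    for i, s in enumerate(translation):              # delayed translated track
        mix[lag + i] += s
    return [max(-1.0, min(1.0, s)) for s in mix]     # guard against clipping

# Toy mono "tracks": one second each of a pure tone, at an 8 kHz sample rate.
sr = 8000
original = [0.8 * math.sin(2 * math.pi * 220 * n / sr) for n in range(sr)]
translation = [0.8 * math.sin(2 * math.pi * 330 * n / sr) for n in range(sr)]
mix = un_style_voiceover(original, translation, sr, lag_seconds=0.5)
print(len(mix))  # 0.5 s lag + 1 s of translation = 12000 samples
```

The point of the sketch is the two design choices the text names: the gain reduction that keeps the source language audible, and the deliberate delay that lets the viewer hear the speaker before the translation takes over.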

From its very beginnings, AVT has been closely linked to technological developments.
First cinema and then, after World War II, TV sets started to become commonplace in
homes to such an extent that by the 1950s television was already the main source of
entertainment and the primary medium for influencing public opinion.

Since then, analogue technology has become digital and, by the mid-1990s, the Video
Home System (VHS) tape had given way to the then revolutionary Digital Versatile/Video
Disc (DVD), which, in turn, is being phased out and overtaken by movie streaming
services on the web, known in some countries as ‘internet cinemas’.

The way in which we consume audiovisual productions has also been altered
significantly, from the early large public spaces represented by cinemas, to the family
experience of watching the television in the relative privacy of the living room, to the
more individualistic approach of watching our favourite programmes in front of our
personal computer, tablet or smartphone.

Today’s viewers tend to be more independent and impatient when it comes to their
watching habits and expectations, and want to be able to enjoy their preferred
programmes whenever they choose and on any of the devices they possess.

Video-on-demand services are a commercial response to meet the needs of this new
breed of viewers by allowing them to watch what they want, when they want and in the
quantities that they want.

This evolution has brought along a shift from the printed paper to the digital screen,
foregrounding the audiovisualisation of communication in our society and triggering a
similar boom in the practice of audiovisual translation that can only continue to expand
and flourish in the foreseeable future.

Despite being a well-established professional practice, AVT remained a relatively
unknown field of research until recently. The first studies written on the topic were
brief and published in a wide range of outlets, from cinema magazines and newspapers
to translation journals, which has the adverse effect of making any attempt at dipping
into the historiography of AVT a rather complex venture.

Laks’s (1957) “Le sous-titrage de films” can be considered a pioneering work for its
comprehensive overview of this professional practice.

In the 1960s and 1970s, scholarly interest in subtitling can be considered lethargic, in
spite of some works appearing on the topic, with the Babel journal publishing a
special issue on cinema translation in 1960, the monograph by Hesse-Quack (1969)
on the process of dubbing, and a book by several scholars on the phonetic, semiotic,
aesthetic and psychological aspects of dubbing.

In 1987, the first ever Conference on Dubbing and Subtitling was organised in Stockholm,
under the auspices of the European Broadcasting Union, triggering an unprecedented
interest in AVT that materialised in the publication of new articles and books in the
field, among which the ones by Pommier (1988), Luyken et al. (1991), Ivarsson (1992)
and Ivarsson and Carroll (1998) are perhaps the most important.


The golden age of AVT, coinciding with digitalization, can be traced back to the 1990s,
when the field became the object of systematic research from a translational perspective
and saw the publication of collective volumes, monographs and doctoral theses, along
with the organisation of domain-specific international conferences and the development
of university curricula specialising in AVT.

Since then, we have witnessed an exponential increase in the number of contributions
and scholarly activities on AVT, signalling a move from the margins to the centre of
academic debates and highlighting the fact that the field has gained social significance
and visibility, has finally come of age academically and has a most promising and
inspiring future.

Main research methods

Given AVT’s heterogeneous and interdisciplinary nature, scholars working in the field
have looked for inspiration in related disciplines in their search for theoretical
frameworks and methodological approaches that can be exploited to account for AVT
practices and processes.

When it comes to investigating the different professional practices, the trend has been
to study them together under the umbrella term of audiovisual translation, even though
their study would gain in depth and substance if approached individually.

Although they share some commonalities, the differences that separate them justify
more targeted analyses.

For example, the change from oral to written normally does not occur in dubbing
processes; the strategies of condensation and deletion are pivotal to subtitling but not
so much to dubbing; the transfer of discourse markers, exclamations and interjections is
not a challenge in subtitling, but it is critical in dubbing; the representation of linguistic
variation is virtually impossible to achieve in written subtitles; and the presence of the
source language, as well as its cohabitation with the target language in the subtitled
version, straitjackets the potential solutions in a way that does not happen in dubbing.

Early scholarly debates on the topic tended to consist of value-laden comparisons
contrasting dubbing and subtitling, to focus exclusively on the linguistic code to the
detriment of the remaining signifying audio and visual codes, and to describe the role
of the various professionals involved as well as the actual translation processes.

Although it is understandable in an emerging discipline that so much attention was
paid to these particular themes, and although the results certainly contributed to the
advancement of the discipline, the concern was that this type of research failed to
embrace the communicative richness of audiovisual texts in their entirety.


To correct this imbalance, scholars have advocated the use of “autochthonous models”
(PÉREZ-GONZÁLEZ 2014: 96) for the study of AVT built on interdisciplinary and
integrative methodological and theoretical foundations.

Despite its relative youth, AVT has come of age academically and can be considered a
consolidated field of research within the broader area of Translation Studies.

The development of AVT as a discipline has been accompanied by an evolution in key
topics and debates and, if early studies on the topic can be said to have focused on
the distinctiveness and autonomy of AVT, interdisciplinarity and cross-fertilisation are
certainly the way forward.

The traditional focus of the pioneering scholarly studies conducted in the field of AVT
tended to be biased towards the analysis of the role played by language, the challenges
encountered when carrying out the linguistic transfer and the translational strategies
activated by the translators to overcome them.

With the passing of the years, the scope of the research has widened considerably to
encompass many other aspects that directly impinge on the transfer that takes place
during AVT.

Studies on more traditional practices such as subtitling and dubbing coexist these days
with investigations on accessibility of the media and show some change of focus from
the idiosyncrasies of the original text to the possible, multiple effects that the resulting
translation has on different viewers.

In this regard, AVT professors and researchers have proved increasingly willing to rely
on technology, as well as on statistical analysis, to interrogate the data under scrutiny,
and the study of reception and process has become pivotal in recent academic
exchanges, with the viewer becoming the focal point of the investigation.

Experimental research based on empirical enquiry has thus become one of the relatively
recent developments in AVT, as academics are no longer content with describing a given
state of affairs or taking for granted certain inherited premises that have been passed
on unchallenged in the literature.

Rather, contemporary AVT scholars are eager to test the validity of their theories
experimentally, to explore the cognitive effort involved in the translational process, or
to describe the effects that AVT practices have on the various heterogeneous groups
that make up the audience, on translators-to-be and on professionals working in the
field and, in these pursuits, they exploit biometric methodologies, new technologies
and statistical data analysis tools.


This attempt to measure all kinds of human behaviour involves the application of
physiological instruments, such as eye trackers, which are frequently used in many
fields like the social sciences and advertising, to the experimental investigation of AVT.

Such devices, which offer metrics about visual information by measuring eye movement
and eye position, have been helping scholars interested in AVT to move from speculation
to the observation of subjects and data-based research.

In this new research ecosystem, eye tracking is widely used in experimental research
in AVT to gauge the attention paid by viewers to the various parts of the screen, in
an attempt to gain a better understanding of their cognitive processes whilst watching
(and reading the subtitles of) the audiovisual programme.

Eye trackers are far from being the only technique, and more traditional ones, such as
interviews and questionnaires, as well as a wide array of other biometric tools, are also
being used to conduct examinations centred on audience reception: galvanic skin
response devices to measure participants’ levels of arousal, and webcams to record and
conduct facial expression analysis, informing researchers about respondents’ basic
emotions (surprise, anger, joy and so on), assessing whether these emotions find
expression in any observable behaviour, and monitoring their engagement in the activity.
Electrocardiograms (ECG) and electroencephalography (EEG) have also been tested.

EEG is a neuroimaging technique that helps to assess the brain activity associated with
perception, cognitive behaviour and emotional processes by foregrounding the parts
of the brain that are active while participants perform a task or are exposed to certain
stimulus material. ECG, on the other hand, monitors heart activity in an attempt to
track respondents’ physical state and their anxiety and stress levels, which in turn can
bring helpful insights into the processes concerned.

Dubbing

Definitions

In filmmaking, dubbing is defined as the process of adding new dialogue or other
sounds to the soundtrack of a movie that has already been shot. Dubbing is most
familiar to audiences as the means of translating foreign-language movies into the
audience’s native language.

Whenever a foreign language is dubbed, the translation of the original dialogue must be
carefully matched to the lip movements of the actors in each particular scene.


Of course, dubbed soundtracks rarely equal the artistic quality of the original
foreign-language soundtracks, and subtitles are therefore preferred by many viewers as
the best means of understanding the dialogue in foreign films.

Dubbing is also frequently used in the original-language version of a soundtrack,
mostly for technical reasons. Filmmakers use it to correct defects arising from
synchronized filming, in which the actors’ voices are recorded simultaneously with the
photography.

Synchronously recorded dialogue may prove inaudible or unclear in a long-distance
shot or because of accidental air traffic overhead while shooting, or else it may simply
be impossible to hide a microphone close enough to capture the actors’ voices
intelligibly.

Dubbing allows the filmmaker to obtain high-quality dialogue regardless of the actual
conditions that existed while shooting.

Dubbing is also used to add all sorts of sound effects to the original shooting soundtrack,
and it may even be used in musicals to substitute a more pleasing voice for that of an
actor performing a song on camera.

Filmmakers in some countries rely on dubbing to supply the soundtrack of the entire
film, mostly because the technique can be less troublesome and less expensive than
synchronized filming.

Dubbing techniques

As you should already be aware, most dubbing is done in the post-production stage,
nowadays in the comfort of a modern recording studio. It is also a highly time-consuming
job, which requires well-trained hands to deliver real quality output.

So, what is dubbing, really?

In short, it is the process of furnishing a film or tape with a new soundtrack; we follow
the dubbing process in order to obtain a noise-free dialogue track.

There will be plenty of noise, distortion and disturbance during shooting, and the
chances are extremely high that much of the dialogue performed by the actor or actress
will not be properly captured in the process.


In dubbing, the artist who actually played the role in the movie, or another dubbing
actor whose voice closely resembles that of the artist, repeats the dialogue in the studio.
The dialogue is repeated while the artist watches the film sequence by sequence, in lip
sync, adding fitting expressions and pertinent feelings.

Dubbing is mostly done in what experts call dry studios, in which the artist hears the
original pilot track, or even the location sound, through headphones in order to bring
the character to life.

Let us talk about one of the most successful movies ever. James Cameron’s Avatar
collected over $440 million at the domestic box office alone and earned more than
double that internationally. The actual viewing experience in various non-English-speaking
countries has naturally been quite different, because those audiences hear dubbing
artists reading Sigourney Weaver’s and Sam Worthington’s lines.

While most Americans associate dubbing with famously out-of-sync martial-arts
B-movies, the technique is no joke for audiences around the rest of the world, because
most huge-budget movies originally come from the United States.

But, how does dubbing work?

Hollywood movies have always been dubbed into Spanish, German and French, mostly
because those countries all have quite sizable film-going communities.

There are generally two Spanish versions, one for Latin America and one for Spain.
The decision depends on the kind of movie and its projected market value in any given
country.

Animated films tend to be dubbed into more languages than live action, because
animation is basically aimed at children, who are generally not able to read subtitles.

For example, Disney’s “The Princess and the Frog” was to be dubbed into almost
40 languages, while the studio’s live-action “The Sorcerer’s Apprentice” was scheduled
for only nine.

In the process, the studio must hire translators, who normally live in the local country.
They begin by creating a draft word-for-word translation. Then, in many cases, the
translation is further tweaked to make the sentences and words fit better with the
actors’ English-speaking mouths.

The translator will try to make the “labials”—the consonants that cause the mouth to
close, such as M, B, and P—match up with the ones in the English original version.

Next, the studio starts casting voice-over actors in each country, often with the help of
local studios and, occasionally, the film’s director. The foreign actors’ voices have to
match the age, texture, and comedic sense of the original.

For a big celebrity such as Johnny Depp or Jim Carrey, a single actor in each country
will dub all of the star’s films. Koichi Yamadera, for instance, is the official Jim Carrey
of Japan. Studios also sometimes employ local celebrities, like when Disney hired the
French singer Charles Aznavour to do the voice of Ed Asner’s protagonist in the movie
Up.

On rare occasions, the original actor will do the dubbing himself. Viggo Mortensen
speaks Spanish, so he did a Spanish dub for Hidalgo. For the Castilian dub of G-Force,
Penélope Cruz was unavailable, so her sister, Monica Cruz, got the job instead.

To record the parts, actors read from a script while watching the video. Before each
line of dialogue, they’ll hear three beeps, and at the moment when the fourth imaginary
beep should have sounded, they start their line. That, at least, is the standard method.

The French use a completely different technique known as “rythmo band.” As the movie
plays, lines of dialogue scroll across the bottom of the screen in calligraphy, which
stretches and compresses in different places, indicating precisely how to shoehorn the
words into the character’s mouth movements.

While actors using the regular process record about 10 lines per hour, those using rythmo
band record two or three times as many and are more in sync with the character’s lips.

Performers often develop their own recording tricks. Natasha Perez, an actress in
Los Angeles who does Spanish dubbing, says that since Spanish sentences typically
take longer to say than the equivalent in English, she has to say her lines as quickly as
possible and will often start speaking a tad early. To record a kiss, she’ll either kiss the
top of her hand (for a peck) or put the tip of her thumb together with the tip of her index
finger and kiss the hole (for a French).

To finish the process, the studio takes the film’s main soundtrack and strips out the
English voices, creating what is called an M&E—music and effects track. Once the
foreign actors’ voices are recorded, sound editors take the M&E and stick the foreign
dialogue in the right places. The sound mixers then blend the dialogue with the music
and sound effects so that everything sounds fluid.
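The assembly step just described (stripping the original voices into an M&E track, then placing each foreign dialogue take at its timecode) can be sketched as a toy Python function. The name assemble_dub and the sample data are invented for illustration; real mixing also involves levels, reverb and loudness standards:

```python
def assemble_dub(music_effects, dialogue_takes, sr):
    """Lay translated dialogue takes onto a copy of the M&E track.

    dialogue_takes is a list of (start_seconds, samples) pairs, one per
    recorded line, each placed at the timecode of the original line.
    """
    mix = list(music_effects)              # start from the M&E track
    for start, take in dialogue_takes:
        pos = int(start * sr)              # timecode in samples
        for i, s in enumerate(take):
            mix[pos + i] += s              # overlay the take on the M&E
    return mix

# Toy example: a 1-second silent M&E at 1 kHz, with one 0.1 s line at t = 0.2 s.
sr = 1000
me_track = [0.0] * sr
takes = [(0.2, [0.5] * 100)]
dubbed = assemble_dub(me_track, takes, sr)
print(dubbed[250], dubbed[100])  # 0.5 inside the take, 0.0 outside it
```

Note how this differs from UN-style voice-over: nothing of the original voices survives, because the dialogue is rebuilt from scratch on top of the music-and-effects bed.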


Voice-over

Voice-over definition

Voice-over (also known as off-camera or off-stage commentary) is a production
technique in which a voice that is not part of the narrative (non-diegetic) is used in
theatre, filmmaking, television production, radio or other kinds of presentation.

The voiceover is read from a prepared script and may be spoken by someone who
appears elsewhere in the production or by a specialist or well-known voice talent (Sean
Connery and Morgan Freeman have often been called on for such roles).

Synchronous voice-over, in which the voice narrates the action as it takes place, is still
the most common and easiest technique. Although rarer, asynchronous voice-over has
also been used in cinema.

The latter is commonly prerecorded and placed over the top of a video or movie, and is
generally used in news reports or documentaries to explain information. Voiceovers are
also often used in on-hold messages and video games, as well as for information and
announcements at events and tourist destinations. They may even be read live at big
events such as award presentations.

Remember: a voiceover is added on top of any pre-existing dialogue. It should not be
confused with the process of replacing the dialogue with a translated version of it,
which is called revoicing or dubbing.

Voice-over techniques

Character device

In Herman Melville’s Moby Dick (1956), Ishmael (Richard Basehart) narrates the
story, and he sometimes comments on the action in voiceover, as do Joe Gillis
(William Holden) in Sunset Boulevard (1950), Eric Erickson (William Holden) in
The Counterfeit Traitor (1962), the adult Pip (John Mills) in Great Expectations
(1946), and Michael York in a 1974 television remake.


Voiceover technique is likewise used to give voices and personalities to animated
characters. Noteworthy and versatile voice actors include Mel Blanc, Daws Butler,
Don Messick, Paul Frees, and June Foray.

Characterization techniques in voiceover are used to give personality and voice to
fictional characters. These techniques have attracted some controversy, particularly
regarding white radio entertainers who would mimic black speech patterns.

Radio made this racial mockery easier to get away with because it was a
non-confrontational platform on which broadcasters could freely express anything
they saw fit. It also became the ideal medium for voice impersonations.

Characterization has always been popular in culture and in all forms of media. In the
late 1920s, radio started to stray away from reporting exclusively on musical and
sporting events; instead, it began to create serial talk shows as well as shows with
fictional storylines.

The technique of characterization can be a creative outlet to expand on film and radio,
but it must be done carefully.

Creative device

In film, the filmmaker places the sound of a human voice (or voices) over images
shown on the screen that may or may not be related to the words that are being spoken.
Consequently, voiceovers are sometimes used to create ironic counterpoint.

Also, sometimes they can be random voices not directly connected to the people seen
on the screen. In works of fiction, the voiceover is often by a character reflecting on
his or her past, or by a person external to the story who usually has a more complete
knowledge of the events in the film than the other characters.

Voiceovers are often used to create the effect of storytelling by a character/omniscient
narrator. For example, in The Usual Suspects, the character of Roger “Verbal” Kint has
voiceover segments as he is recounting details of a crime. Classic voiceovers in cinema
history can be heard in Citizen Kane and The Naked City.

Sometimes, voiceover can be used to aid continuity in edited versions of films, for the
audience to gain a better understanding of what has gone on between scenes. This was
done when the film Joan of Arc (1948), starring Ingrid Bergman, turned out to be far
from the box-office and critical hit that was expected, and it was edited down from 145
minutes to 100 minutes for its second run in theaters.

The edited version, which circulated for years, used narration to conceal the fact that
large chunks of the film had been cut out. In the full-length version, restored in 1998
and released on DVD in 2004, the voiceover narration is heard only at the beginning of
the film.

Film noir is especially associated with the voiceover technique. The golden age of
first-person narration was during the 1940s. Film noir typically used male voiceover
narration but there are a few rare female voiceovers.

In radio, voiceovers are an integral part of the creation of the radio program. The
voiceover artist might be used to remind listeners of the station name or as characters
to enhance or develop show content.

During the 1980s, the British broadcasters Steve Wright and Kenny Everett used
voiceover artists to create a virtual “posse” or studio crew who contributed to the
programmes. It is believed that this principle was in play long before that time. The
American radio broadcaster Howard Stern has also used voiceovers in this way.

Educational or descriptive device

The voiceover has many applications in non-fiction as well. Television news is often
presented as a series of video clips of newsworthy events, with voiceover by the reporters
describing the significance of the scenes being presented; these are interspersed with
straight video of the news anchors describing stories for which video is not shown.

Television networks such as The History Channel and the Discovery Channel make
extensive use of voiceovers. On NBC, the television show Starting Over used Sylvia
Villagran as the voiceover narrator to tell a story.

Live sports broadcasts are usually shown as extensive voiceovers by sports commentators
over video of the sporting event.

Game shows formerly made extensive use of voiceovers to introduce contestants and
describe available or awarded prizes, but this technique has diminished as shows have
moved toward predominantly cash prizes. The most prolific announcers have included
Don Pardo, Johnny Olson, John Harlan, Jay Stewart, Gene Wood and Johnny Gilbert.

Voiceover commentary by a leading critic, historian, or by the production personnel
themselves is often a prominent feature of the release of feature films or documentaries
on DVD.


Commercial device

The commercial use of voiceover in television advertising has been popular since the
beginning of radio broadcasting.

In the early years, before effective sound recording and mixing, announcements were
produced live in a studio with the entire cast, crew and, usually, orchestra. A corporate
sponsor hired a producer, who in turn hired writers and voice actors to perform
comedy or drama.

Manufacturers often use a distinctive voice to help with brand messaging, frequently
retaining talent on a long-term exclusive contract.

The industry expanded very rapidly with the advent of television in the 1950s, and the
age of highly produced serial radio shows ended. The ability to record high-quality
sound on magnetic tape also created opportunities.

Digital recording has revolutionized the industry, thanks to the proliferation of PCs,
smartphones (iOS and Android 5.0+), dedicated recording devices, free or inexpensive
recording and editing software, USB microphones of reasonable quality, and the
increasing use of home studios.

The sound recording industry uses the term “presence” as the standard of a
good-quality voiceover, particularly for commercial purposes. “Presence” measures
how legitimate a voice sounds, specifically that of a voiceover.

Advances in technology for sound recording have helped voiceovers reach this standard.
These technological advances have worked continuously on diminishing “the noise
of the system...and thus reducing the distance perceived between the object and its
representation.”

The voice-over industry works in tandem with the advertising industry to deliver
high-quality branding, and as a whole it is worth millions. In Britain alone, commercial
advertising that uses voiceovers reaches about 89 percent of adults.

Translation

In some countries, such as Russia and Poland, voiceover provided by an artist is
commonly used on television programs as a language localization technique, as an
alternative to full dub localization.


In Brazil, multiple voiceover is also common, but each film (or episode) is normally
voiced by three to six actors. The voice artists try to match the original voice and
preserve the intonation.

The main reason for using this type of translation is that, unlike synchronized voice
translation, it takes relatively little time to produce: there is no need to synchronize
the voices with the characters’ lip movements, since the quieted original audio
underneath compensates for any mismatch.

When there is no speaking in the film for some time, the original sound is turned up.
Recently, as more films are distributed with separate voice and music-and-effects
tracks, some voiceover translations in Bulgaria are produced by turning down only the
voice track, thus not affecting the other sounds.
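The practice of quieting the original audio under the voiceover and turning it back up in pauses is known in audio production as ducking. A minimal sketch of the idea, using plain Python lists of amplitude samples (the function name, the 0.5 attenuation factor and the silence threshold are illustrative assumptions, not industry parameters):

```python
def duck_original(original, voiceover, atten=0.5, threshold=0.0):
    """Mix a voiceover track over an original track.

    Wherever the voiceover is speaking (sample above the silence
    threshold), the original audio is attenuated; wherever it is
    silent, the original plays at full volume.
    """
    mixed = []
    for orig, vo in zip(original, voiceover):
        if abs(vo) > threshold:   # narration present: duck the original
            mixed.append(orig * atten + vo)
        else:                     # no narration: full original audio
            mixed.append(orig)
    return mixed

# Narration covers only the two middle samples of the original track.
original = [1.0, 1.0, 1.0, 1.0]
voiceover = [0.0, 0.25, 0.25, 0.0]
print(duck_original(original, voiceover))  # [1.0, 0.75, 0.75, 1.0]
```

Real mixing works on tens of thousands of samples per second and applies smooth fades rather than hard switches, but the logic is the same: the translated voice sits on top, and the original dialogue is only lowered, never removed.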

One actor always reads the translation crew’s names over the show’s ending credits
(except for when there is dialogue over the credits).

Voice over and dubbing in Brazil

The subject perspective

Audiovisual translation (AVT) is a relatively recent field of research when compared
with most other fields in the area, such as technical translation and literary
translation. Nevertheless, AVT has already been the subject of a considerable number
of publications by practitioners and scholars alike.

However, most, if not all, of these publications make it clear that developments in the
field are still limited in terms of the genres studied and the kinds of transfer involved.

There are two unquestionable facts when it comes to AVT research, both in Europe and
here in Brazil:

a. The focus on subtitling and dubbing. Television interpreting and voice-over
translation have consistently been ignored by researchers. Quoting
Jean-Pierre Mailhac, who wrote about the voice-over of commercial
videos: “Audio-visual translation appears to be essentially synonymous
with subtitling and dubbing” (in Gambier (ed.) 1998:208).

b. The focus on fictional output, mostly on feature films, although factual
output represents a large portion of the media repertoire. Moreover,
although factual output is frequently subtitled, subtitling is usually studied
within fiction. This preference for fiction may also be observed in statistics
on television broadcasting - such as those published by the European
Research Institute or by the European Audiovisual Observatory – where
figures concerning factual programs are rarely given much attention, if
any.

A good illustration of these two facts may be found in the latest edition of Language
Transfer and Audiovisual Communication (LTAC), a bibliography compiled by Yves
Gambier. The book is far from a comprehensive bibliography on AVT, but it is to date
essentially the only substantial research reference in this recent field, and significant
data may be found in it.

Among the more than 1,200 entries which constitute the volume:

»» Entries about ‘voice-over’, which, as just explained, is actually one of the
exemplary modes of language transfer in factual programs, amount to
only 11.

»» Entries mentioning the discourse types most commonly used in factual
programs are scarce: not a single entry for ‘interviews’; only 2 entries for
‘narration’; and barely 7 entries for ‘commentary’.

»» The ‘great’ amount of only 21 entries can be identified as studies of TV
programs commonly classified as non-fictional genres, among them:
commercials (one entry); political debates and/or panel discussions
(4 entries); news (9 entries); current affairs (one entry); and the
documentary film (6 entries).

The revised volume of the publication thus accounts for a total of 41 entries that
explicitly and directly refer to non-fiction. Even when entries that might relate to the
factual genre without referring to it explicitly are taken into account, namely 18 entries
mentioning interpreting and 3 mentioning programs either for television or radio, the
list reaches a ‘grand’ total of only 62 entries.

As mentioned before, this bibliography is by no means comprehensive. Most Brazilian
studies, for instance, are not to be found there. One can recall at least 7 theses and
dissertations on AVT, as well as other research projects recently developed in the
country. Once more, though, virtually all of them deal with the dubbing and/or
subtitling of fictional output.

Most current work in Brazil dealing with factual output is concerned mainly with
intralingual translation, such as sign language translation and closed captioning.

It is worth mentioning that quite a lot has been written since 1997, but with the
aforementioned imbalance between research on factual output and research on
fictional output, as far as AVT is concerned.

There are at least two reasons for this imbalance in research. The first is the strong
tradition of literary scholarship, which has permeated most, if not all, AVT studies in
particular and translation studies in general.

The second reason might be, and this is only a guess, the false idea unfortunately
shared by many scholars that the translation of factual material poses no real
challenge for a serious translator, and should therefore be of little, if any, interest to
the researcher. Translation of factual output thus ends up being neglected, dismissed
as a boring object of study that supposedly lacks the subjectivity and creativity
inherent in fiction.

One immediate consequence of this scarcity of research on factual output is the
absence of conceptual and terminological consensus.

Of course, there is still a lot to be studied in such an open field. Despite the obstacles,
most academic studies of voiced-over documentaries have shown that factual output
is a promising and growing field for AVT research which should not be
underestimated. Research on this kind of output rightly challenges our beliefs about
theoretical or idealized statements concerning translation and translation studies, as
well as the facts presented on the small screen.

It naturally leads us to consider the ethics involved in such activity; certainly, it
stimulates interdisciplinarity while focusing on the relevant role of factual AVT in
discussions of media discourse.

Research on translated audiovisual factual material may ultimately constitute a
relevant barometer of today’s agenda of discourse globalization – or of the assumption
that we exchange information on equal terms.


Finally, and on a somewhat separate note, we would suggest reception research as a
valuable asset in AVT, and we believe most readers would agree with us here.

As scholars, we tend to make assumptions about viewers, often underestimating them.
But we should accept that, being the translation scholars and researchers that we are,
we are necessarily more critical viewers. Therefore, we can no longer simply assume
that we share a vaguely, let alone strongly, similar view of AVT with monolingual
viewers. We definitely need to know how viewers actually consume AVT if real
advance in the field is our goal.

The professional perspective

In 1991, when Georg-Michael Luyken, director of the European Institute for the Media,
published a book called Overcoming Language Barriers in Television, the team Jatalon
(or Equipe Jatalon) launched in Brazil the booklet Manual do Vídeo.

This booklet was the only national publication for the general audience that explained,
among other things, how the Brazilian set of technical parameters for the home video
subtitling market was created.

While European researchers at EIM discussed the commercial and linguistic aspects
of language transfer, Equipe Jatalon’s professionals defined the basic set of norms and
technical standards that have been adopted by all Brazilian subtitling companies to this
date.

Equipe Jatalon was the first group of professionals who assessed objectively the quality
of electronic subtitles in Brazil. They pointed out, in the section called “Subtitling
Techniques”, that they did not aim at researching subtitles in their translation aspect.

Instead, their idea was “to put ourselves in the shoes of the viewer who is going to read
from seven hundred to two thousand lines of text on screen and who, after this really
tough task, should still feel comfortable and be willing to watch another film” (1991:78,
my translation).

The question, then, was: How to establish subtitling parameters? Being a highly
qualified group of electronic engineers and audiovisual professionals, Equipe Jatalon
used to write articles about video equipment and technical essays about audiovisual
productions for the newspaper Folha de São Paulo.

By 1986, this newspaper had received a lot of letters and phone calls from readers
complaining about their difficulty in ‘reading’ the subtitles of home video films. Due

to the lack of subtitle preparation norms, Equipe Jatalon, sponsored by Folha de São
Paulo, set out to create them.

The starting point was to answer two questions: (1) What kind of characters are people
in Brazil used to reading? and (2) What kind of subtitles are Brazilians used to? Based
on newspapers, magazine texts and cinema subtitles, the group decided to establish
the following standard rules:

»» Subtitles would be yellow in colour.

»» Subtitles would be centralised, with words accented according to the
official orthographic rules of the Portuguese language.

»» One subtitle line would be displayed on screen for at least one second and
would be synchronised with the lines spoken by the actors.

»» One subtitle would have no more than two lines.

»» Font size would comply with the Specifications for Safe Action and Safe
Title Areas Test Patterns for Television Systems, established by the
Society of Motion Pictures and Television Engineers, in the USA.

»» Subtitle texts would be written in upper and lower case.
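Rules like the ones above lend themselves to automatic checking. As a rough illustration only (not any company’s actual tool): the two-line maximum and one-second minimum come straight from the list, while the 40-character line limit is a hypothetical value added for the example.

```python
def check_subtitle(text, start, end, max_lines=2, min_seconds=1.0, max_chars=40):
    """Return a list of rule violations for one subtitle.

    text  -- subtitle text, lines separated by '\n'
    start -- display start time in seconds
    end   -- display end time in seconds
    """
    problems = []
    lines = text.split("\n")
    if len(lines) > max_lines:
        problems.append("more than %d lines" % max_lines)
    for line in lines:
        if len(line) > max_chars:
            problems.append("line longer than %d characters" % max_chars)
    if end - start < min_seconds:
        problems.append("displayed for less than %.1f second(s)" % min_seconds)
    return problems

print(check_subtitle("Hello.\nHow are you?", 10.0, 12.5))  # [] (no violations)
print(check_subtitle("One\nTwo\nThree", 10.0, 10.4))
# ['more than 2 lines', 'displayed for less than 1.0 second(s)']
```

Professional subtitling software enforces constraints of exactly this kind (plus reading-speed limits), which is how the Equipe Jatalon parameters could be applied consistently across an entire film.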

It is fair to say that Equipe Jatalon not only defined the boundaries that subtitling
translators were not to trespass from then on, but also established the aesthetics of
Brazilian subtitles.

Although subtitles are, strictly speaking, image-invading elements that inevitably draw
part of the viewer’s attention away from the scenes, Equipe Jatalon was well aware of
their productive and relevant communicative function: their basic starting point has
always been the viewer’s reading comfort while exposed to the scenes and to the large
amount of on-screen information.

Those rules have not changed much in recent years, and there has been little need for
them to change: they have proved quite satisfactory. Equipe Jatalon did their job well,
one might say, irrespective of the fact that Brazilian cable television stations and
subtitling companies have done very little, or nothing, to improve the translation skills
of their own subtitlers.

Fortunately, most universities in Brazil have already noticed the relevance of this area
of study, and some experiences in the field have already surfaced to be shared here in
this fascicle.


While magazine and newspaper articles have been strongly criticising the poor quality
of cinema, home video and television translation for almost 30 years now, the last ten
years have seen an average of at least one doctoral thesis or master’s dissertation per
year, which shows that serious research is being done on this unique form of
translation.

The reality is that the market depends on universities to supply it with trained
professionals, and that has been the best way to improve the quality of subtitling in all
aspects. Unfortunately, the Brazilian subtitling industry nowadays has different goals
and does not devote itself to that crucial task.

The academic perspective

Research on audiovisual translation in Brazil is being carried out by both the academy
and the companies that deal with this kind of translation, such as subtitling laboratories
and dubbing studios.

However, despite having common interests, there is no exchange of ideas between the
academy and these companies. They do their research separately and do not share the
results.

This situation must change, and it is about time that the dialogue between the people
interested in this subject begins. Companies will benefit from academic findings and
researchers will have access to valuable data and will also get feedback from professionals.

Every researcher working in this area knows how difficult it is to collect audiovisual
data in our country. Since little is known about the work conducted by non-academic
institutions, our focus here will be mainly on academic research, more specifically the
research carried out at the State University of Ceará. But before talking about the
Fortaleza experience, let us look at what has already been done in the area.

The largest amount of research has been produced in the postgraduate programs of
Brazilian universities, where some students interested in audiovisual translation
decided to write their theses and dissertations on the subject.

They are just isolated projects, not connected to the fields of study developed by the
Post-Graduation programs. Most of them are about the translation of fiction, focusing
on open subtitling.

There is no academic research on dubbing and voice-over, except for Eliana Franco’s
thesis (2000), which was written in Europe. Some examples of these studies are the
dissertations by Cortiano (1990), Franco (1991), Bamba (1997), Rodrigues (1998) and
Gonçalves (1998), the thesis by Mouzat (1995) and my own thesis (Araújo, 2000).

At the State University of Ceará, audiovisual translation is one of the fields of study in
a research group registered at one of the official government agencies that promote
research, CNPq. The group is entitled “Translation and Lexicology” and comprises the
works of teachers and of graduate and postgraduate students from Foreign Language
Teaching and Applied Linguistics, respectively.

Three of these students are working on screen translation, but from different perspectives.
One of them (Carlos Augusto Viana da Silva) is dealing with intersemiotic translation or
transmutation, more precisely the translation/adaptation of Virginia Woolf’s book Mrs.
Dalloway to the screen. The other student (Antônia Célia Nobre) has been investigating
the importance of knowing audiovisual features when dealing with subtitling.

She reflects on a personal experience as an amateur subtitler in Fortaleza. Her task
became quite difficult once she approached subtitling as if it were literary translation;
only during the translation process did she realize that the audiovisual environment
had to be considered. The last student (Graeme Clive Hodgson) has been investigating
the simultaneous interpreting of the Oscar Awards, especially the translation of
humor.

As for the teachers’ research at UECE, the main focus nowadays is on closed captions.
This form of translation is relatively new in the country, and no research on the topic
had ever been done before.

We borrowed the term “closed caption” from the U.S. because the system available
in Brazil to produce these subtitles is the American one. Except for feature films,
this system transcribes the speech rather than condenses it. It is different from the
European system, which produces intralingual condensed titles, and which is called
“closed subtitles”.

The corpus of our research includes feature films and factual television programs
translated by means of closed captioning. They were videotaped from the only Brazilian
television channel that exhibits intralingual titles, Rede Globo de Televisão.

Closed captions on this channel appear in: (a) all news programs (Bom Dia Brasil,
Jornal da Globo, Jornal Nacional, Jornal Hoje); (b) a news magazine (Fantástico); (c) a
talk show (O Programa do Jô); and (d) some films exhibited on Monday evenings under
the series title Tela Quente. Some of the films were: Romeo and Juliet, The Man in the
Iron Mask, and The Truth about Cats and Dogs.


The type of test that will check the effectiveness of the captions has not been defined
yet. However, the passages to be used in the tests have already been selected.

What is being currently done is the description of the titles exhibited on the Brazilian
television channel in the form of a data record that contains all the features that may
affect the reception of the titles positively or negatively, such as number of subtitles per
minute, shot changes, synchronism between speech, title and image etc.
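A data record of that kind could be sketched as a simple structure; the field names and the derived captions-per-minute figure below are illustrative assumptions, not the UECE group’s actual instrument.

```python
from dataclasses import dataclass

@dataclass
class CaptionRecord:
    """One observation in a caption reception-study data record."""
    program: str             # e.g. "Jornal Nacional"
    captions: int            # number of captions in the excerpt
    duration_seconds: float  # length of the excerpt
    shot_changes: int        # cuts during the excerpt
    synchronized: bool       # caption in sync with speech/image?

    def captions_per_minute(self):
        """Caption density, one of the features that may affect reception."""
        return self.captions / (self.duration_seconds / 60.0)

rec = CaptionRecord("Jornal Nacional", captions=30,
                    duration_seconds=120.0, shot_changes=14,
                    synchronized=False)
print(rec.captions_per_minute())  # 15.0
```

Collecting such records for every videotaped excerpt makes it straightforward to compare, say, caption density and synchrony across news programs and feature films.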

I will finish this short exposition by presenting some of the hypotheses we are
investigating in this research. I will not, however, mention the hypotheses related to
orality markers, which are being investigated mainly by my colleague Eliana Franco.

Firstly, we assume that in the news programs, in the television magazine and in the talk
show, subtitles do not synchronize with either speech or image.

This lack of synchrony may impair the reception of the titles. Secondly, we assume
that, in the case of a monologue (e.g. a reporter reading the news), the title can be
understood even when it does not synchronize with speech or image. And finally, we
assume that the synchrony not observed in the factual programs of the corpus is
respected in feature films, which helps viewers make the best of closed captions.

Some voice over / voice acting exercises

You know that guy at the gym with the gigantic arms and neck who counts his reps
so loudly that you can hear him despite your headphones? You know who I’m talking
about. The same guy who sounds like he could be auditioning to be one of the backup
lions in The Lion King on Broadway.

You can say what you want about him, but one thing is sure: He’s in great shape. Now
look at your arms and neck. Probably not so developed, and that’s OK. You’re a voice
actor; your strength comes from the inside. When you look inside what do you see?
Flabby little vocal cords and a limp tongue?

If you want to be the best voice actor you can be, you need to think of that guy at the
gym. Instead of bulging biceps, deltoids, and quadriceps, your muscle groups are the
lips, diaphragm, and tongue. Once you own those muscles, you can focus on the finesse
and technique necessary to execute your assignments and get more and more rolling
in. So here is a list of muscle and technique exercises that will help you realize your full
voice acting potential.


1. Start low, breathe correctly

Your breath comes from the descent of your diaphragm, the floor between your lungs
and heart and your abdominal cavity, where a bunch of other slippery organs like the
pancreas, spleen, and stomach sit. As the diaphragm pulls down, it creates low pressure
inside your lungs, which sucks in air. When you breathe out, the diaphragm pushes
back up, expelling the air. Your lungs don’t contract and expand by themselves.

Too often people try to suck air in by lifting their shoulders, but if the diaphragm doesn’t
move, you don’t breathe. And your breath is what vibrates your vocal cords, so nothing
vocal happens without it.

Exercise: Find a counter or sturdy table that is about 2-3 inches above your belly
button. Lean on it so that your weight is resting on your stomach. Using only the
force of your diaphragm, breathe in enough so that your entire body is pushed back.
Think of it as a push-up by breathing.

2. Buzzing lips

Your lips polish the sounds that your vocal cords have created. Lips are also a direct
extension of your jaw and cheek muscles. Speaking clearly and at the appropriate
volume requires muscle development in order to move your lips properly.

You can’t exactly do pull-ups with your lips, but the good news is you don’t need to.
Lips are about flexibility and endurance.

Exercise: Have you ever tried to play the trumpet or another horn instrument? Put
your lips together and buzz them forcefully, enough so that you’re making a mouth
trumpet sound. Now fluctuate between a higher pitched sound (faster buzzing) and a
lower pitched sound (slower buzzing).

Stand in front of a mirror. When you’re buzzing as fast as possible, your lips should look
like a blur in front of your face. Using your breathing exercises, try to buzz your lips as
long as possible before running out of breath. Use a timer to measure your progress.

3. Rotate the tongue

Aside from your vocal cords, your tongue is probably the second most important speech
organ. It shapes all of your words and syllables. It’s what gets in the way when you get
tripped up by a tongue twister. The tongue is a muscle like the lips and diaphragm.
Making it stronger and more agile will make you a stronger and more agile voice actor.


Exercise: The tongue helicopter. Take your tongue and move it around the part of your
mouth that’s outside your teeth but inside your lips. Make a full circle. When you’re
standing in front of the mirror, you will be able to see the bump of your tongue run
around the bottom of your mouth by your chin and the top of your mouth under your
nose. Now just like a helicopter taking off, speed up the rotation. Once you’ve gotten a
good speed, change direction. You’ll gain both strength and control.

4. Go for endurance

The vocal cords need to be strengthened as well, but you have to be careful to start
slowly and work your way up. Your vocal cords are one of the most sensitive parts of
your body, and injuring them could mean saying goodbye to your dream career of voice
acting. The best way to start is to read out loud, but just reading without tracking what
you’re doing is not going to help you improve.

Exercise: Read for half an hour straight – out loud – with a decibel meter. Cut the text
into 500-word passages and time yourself. How long did it take you to read those 500
words?

Now, for the next 500 words, read at exactly that same speed again, don’t let yourself
go faster or slower. The decibel meter will make sure that you are keeping a consistent
volume. If your volume starts to drop off towards the end of the passage, it means that
you aren’t breathing correctly.
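Consistency across those 500-word passages is easy to quantify. A small sketch of the arithmetic (the 10% tolerance is just an example value, not a vocal-coaching standard):

```python
def words_per_minute(word_count, seconds):
    """Reading speed of one passage."""
    return word_count * 60.0 / seconds

def pace_is_consistent(timings, tolerance=0.10):
    """True if every passage's speed stays within `tolerance`
    (a fraction) of the first passage's speed.

    timings -- list of (word_count, seconds) pairs
    """
    baseline = words_per_minute(*timings[0])
    return all(
        abs(words_per_minute(w, s) - baseline) <= tolerance * baseline
        for w, s in timings
    )

# 500 words in 200 s is 150 wpm; slowing to 240 s drops to 125 wpm.
print(words_per_minute(500, 200))                    # 150.0
print(pace_is_consistent([(500, 200), (500, 240)]))  # False
print(pace_is_consistent([(500, 200), (500, 205)]))  # True
```

Logging a pair of numbers per passage like this turns “don’t let yourself go faster or slower” into something you can actually measure session after session.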

When you’ve got this exercise down, move on to more difficult passages. Pull out an old
copy of Nathaniel Hawthorne or Adam Smith’s Wealth of Nations and try navigating
very complex sentences. If you can find your breathing and sustain your energy in one
of those famous, paragraph-long sentences, you’re on the right track.

5. Work on your silences

As a voice actor, silences are critical. Not only do they make editing easier after the
fact, but they also control the rhythm, and they are when you breathe! Humans have a
tendency to fill every silence.

In fact, when we speak, our phrases are continuous sounds that our brain deciphers
into different words and meanings. Punctuation is your friend. Commas and periods
should be noticeable; they are there for a reason!

Exercise: Record yourself speaking with a program where you can see the recording
waveform, like GarageBand or Audacity. The wider the amplitude of the waveform, the
more sound you were making. When there is no amplitude between the wider parts,
just a flat line, you completed a perfect silence.

Your waveform should have perfect silences regularly through the recording. If you
start to see fewer and fewer silences towards the end of your recording, it means that
you were rushing.
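Those flat-line silences can also be located programmatically. A toy sketch over a list of amplitude samples (the 0.05 threshold and the three-sample minimum are illustrative values, not editor defaults):

```python
def find_silences(samples, threshold=0.05, min_length=3):
    """Return (start, end) index pairs of silent stretches.

    A sample is 'silent' when its absolute amplitude is below
    `threshold`; only stretches of at least `min_length` samples
    count as a real silence.
    """
    silences, start = [], None
    for i, s in enumerate(samples):
        if abs(s) < threshold:
            if start is None:
                start = i          # a silent stretch begins here
        else:
            if start is not None and i - start >= min_length:
                silences.append((start, i))
            start = None
    if start is not None and len(samples) - start >= min_length:
        silences.append((start, len(samples)))
    return silences

wave = [0.8, 0.9, 0.0, 0.0, 0.0, 0.7, 0.0, 0.6]
print(find_silences(wave))  # [(2, 5)]
```

Counting how many silences this finds in the first and second halves of a recording gives a quick, objective check on whether the pauses thin out towards the end.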

6. Sing a variety of music

Getting out of your comfort zone is probably the most fun of all voice over exercises.
Singing is a great way to work on endurance, breathing, and different voice ranges. A
lot can be said about your talents if you can sing Barry White and Beyoncé side by side!
Singing has the added benefit of having a built-in rhythm, which will naturally rub off
on you the longer you sing.

Exercise: Get out your headphones and listen to a song you like in front of the lyrics.
Then, turn on your microphone and get ready to record. Now, sing the song along with
the version playing in your headphones. Once it’s done, go back and listen to your
recording.

Sure, it might sound terrible, but the idea is to identify where you can improve. In this
voice over exercise, your capacity to understand how you sound is key to being a voice
actor, and it’s often easier to do that when you’re comparing yourself to a song than
comparing a voice recording to how you think it should sound.

The beauty of these exercises is that they all build upon each other. It’s impossible to
forget about one piece of the puzzle, because if you do, you won’t progress with the
other parts.

Voice acting is a challenging career that requires you to perform at your absolute best.
Remember the guy at the gym: he’s always there, and he’s always grunting, but he is
unquestionably the strongest one.

1. What is the best definition of dubbing?

2. What is the main difference between voice-over and dubbing?

3. In your own words, explain the main steps one should follow in order
to exercise their voice acting.

UNIT II
SUBTITLING AND AUDIO DESCRIPTION

CHAPTER 1
Subtitling

What is subtitling?
Subtitling is one field within the area of audiovisual translation, alongside audio
description, voice-over and dubbing. In other words, the audiovisual language of TV
programs or films is transferred into a written form so as to be understandable by
target audiences who are not familiar with its source language.

Subtitling was first used in Europe in 1929, when the first talkies reached the
continent. Before going further into its challenges and strategies, it is important to
give a clear theoretical definition of subtitling. Some scholars define it as ‘the
process of providing synchronized captions for film and television dialogue’.

Subtitling is also defined as a way of supplementing the original voice soundtrack
by adding written text on the screen. Its main role is therefore to ease foreign
viewers’ access to audiovisual productions, such as videos and movies, in any given
foreign language.

The criteria of subtitling

Of course, subtitling differs a lot from the translation of most written texts. So how
does it differ? Tornqvist (1998, p. 10), in his book “The problem of subtitling”,
mentions four main differences between translating written texts and subtitling,
which can be taken as the crucial criteria defining the subtitling field. He states:

1. The reader of translated text does not compare the source text with the
target, while in the subtitle, this comparison happens automatically
especially if the viewer speaks the source language.

2. The translator of written texts naturally has more space in order to add
footnotes, explanations etc. when something difficult is found in the
original text, and the subtitler, of course, is not able to do this.

3. It can be said that inter-textual translation always involves translation
from one written text to another, but subtitling, on the other hand,
necessarily involves translation from spoken language into lines of
written text.

4. In subtitling, extended messages must be condensed to fit subtitling
requirements, whereas written texts have more space to present them.

Challenges

All types of translation have their own challenges or difficulties. Subtitling, as part
of this field, has its own formal (quantitative) and contextual (qualitative)
restrictions. The contextual restrictions are those imposed on the subtitle by the
visual context, while the formal ones deal with the way of presenting the subtitle.
Some scholars add that the number of possible audiovisual translation problems is
endless, and a list that would account for each one of them can never be finite. The
many challenges which surround the subtitling process can be classified into three
main types: technical, cultural and linguistic.

Technical challenges

This is the most prominent kind of challenge involved in the whole subtitling
process, and it may impose restrictions on the translator’s work, unlike what
normally happens when translating written texts. Technical challenges can be
classified into:

»» Space: Translators are limited to a restricted number of characters
throughout the whole subtitling process: a maximum of two lines per
image, of about 37 characters per line. This number may differ slightly
from one language to another. Of course, the chosen characters have a
great effect on this number: a wide pair such as (mw) clearly takes more
space than a narrow one such as (li).

»» Time: One of the most important technical limits is the time allowed
for a subtitle, which is normally no longer than six seconds on screen;
that is, the content has to be cut down to fit the character limits as well
as the time the subtitle is shown on screen. This may influence how well
viewers are able to catch the subtitle and understand the content.
Therefore, the correct choice of words to present the content within a
limited number of words may help with this issue.

»» Spotting: The actual subtitle on the screen must be precisely matched
with the dialogue. Nevertheless, subtitling does not only include the
dialogues of narrators or real characters; it must also include other
meaningful letters, signs or any other written elements that may be
relevant for the comprehension of what is being shown on screen.

»» Position on screen: Pictures on the screen are 720 pixels wide by
576 pixels high, and the subtitle must be positioned at least 10% in from
each frame edge, centred at the bottom of the screen.

»» Font: The standard font type, size and colour have effects on the
subtitles, for instance whether the characters appear with or without a
shadowed background.
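The quantitative constraints above lend themselves to a simple check. The following Python sketch encodes the limits described in this section: at most two lines of about 37 characters each, no more than six seconds on screen, and a 10% safe margin inside a 720x576 frame. The figures come from the text; the function names are illustrative.

```python
# Constraint values taken from the section above.
MAX_LINES = 2
MAX_CHARS_PER_LINE = 37
MAX_DURATION_S = 6.0

def check_subtitle(lines, duration_s):
    """Return a list of constraint violations (empty if the cue is valid)."""
    problems = []
    if len(lines) > MAX_LINES:
        problems.append(f"too many lines: {len(lines)} > {MAX_LINES}")
    for n, line in enumerate(lines, start=1):
        if len(line) > MAX_CHARS_PER_LINE:
            problems.append(f"line {n} too long: {len(line)} > {MAX_CHARS_PER_LINE} chars")
    if duration_s > MAX_DURATION_S:
        problems.append(f"on screen too long: {duration_s}s > {MAX_DURATION_S}s")
    return problems

def safe_area(width=720, height=576, margin=0.10):
    """Pixel box (left, top, right, bottom) inside the 10% margins."""
    return (int(width * margin), int(height * margin),
            int(width * (1 - margin)), int(height * (1 - margin)))

print(check_subtitle(["This cue fits comfortably", "on two short lines."], 4.0))  # []
print(safe_area())  # (72, 57, 648, 518)
```

A cue that passes `check_subtitle` with an empty list satisfies the space and time limits described above; `safe_area` gives the region within which the subtitle must be placed.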

Cultural challenges

Culture-bound elements have proved to be extra challenges for the professional
subtitler. Differences between the cultural norms of different countries arise
through the use of language and in translating from one language to another,
especially during subtitling, because it deals with audiovisual materials (TOURY,
1995, p. 38).

All of this can be seen in the style adopted by the subtitler, such as domestication,
foreignization, functionalism etc., for instance with names of characters or famous
places which the audience may or may not be familiar with.

Among the most popular forms of cultural challenge for professional subtitlers is
humor, mostly because laughter is sometimes far more important than the actual
meaning of what is being said, as in certain TV series such as ‘Friends’. Part of
humor consists of international jokes, which can be translated literally and are
easy to understand.

Taboo and swear words are generally culture-related words which may be kept in
some subtitles and deleted from others for many reasons, such as when they are
actually forbidden in the target culture, as with heavily loaded expressions, bloody
swearing etc.

Another example is the representation of people from a particular field or who
hold a certain position, like ‘MP’ in Britain, the person who represents people in the
House of Commons; other countries use ‘Deputy’ instead of ‘MP’ to refer to that
person (CINTAS AND REMAEL, 2010, p. 37).

Linguistic challenges

One of the linguistic challenges facing subtitlers is that the linguistic choices in
subtitling are not really random; in other words, characters in audiovisual
programs or movies might convey certain effects through their intonation, lexicon,
syntax, grammar etc., which carry connotative meaning in addition to the
denotative one.

There are many linguistic constraints related to subtitling.

»» Accents and pronunciation: these require special experience or skill
from the subtitler.

»» Dialects: these are directly related to certain geographical areas.

»» Idiolect: this is a personal manner of speaking, which speakers may
even change from person to person within their circle of relations.

»» Sociolects: these are directly related to a certain economic status,
which might include financial situation and even location inside a city or
town.

Other types of linguistic challenges are grammatical mistakes in dialogue, which
have to be corrected in the subtitles.

Skopos and subtitling

Before starting to talk about the adopted strategies for subtitling, let us clarify the
purpose of a subtitle.

No one can deny that the subtitle serves a specific function, which is to make
audiovisual materials understandable to the audience, whether they speak a
foreign language or are hard of hearing. Here comes the role of Skopos, when the
subtitler applies a specific function to his or her subtitling for a specific language
audience.

Schjoldager (2008, p. 166) states that the Skopos, or ‘function’, cannot be the same
in the source (the audiovisual material) and the target (the subtitle). Kristensen
(2009) states that the sender of the source text does not address the same target
audiences.

Therefore, she insists that the Skopos can help the translator to decide which
macrostrategy can be applied to certain subtitling translation.

Furthermore, the source may have different functions within the same language as
well as in many other languages: ‘… the designated Skopos will determine which
one is more appropriate in a given situation’ for a specific subtitle.

All of this shows that the professional subtitler plays the main role in enforcing
specific functions within the subtitle in order to successfully achieve a certain
purpose.

Strategies

A large number of strategies can be found for handling the subtitling challenges
mentioned above. The first attempt to set standard strategies for overcoming
subtitling challenges was by Vinay and Darbelnet, who presented influential
strategies which were further developed by other scholars.

Subtitling strategies may be classified into two levels, micro-strategies and
macro-strategies, although these concepts are not always clear to those who come
to study the topic. Macro-strategies formulate the overall framework of the
translation, while micro-strategies deal with individual translation problems at
word and sentence level.

Macro-strategies

As presented previously, the Skopos focuses on the function of the target text.
Thus, the macro-strategies help the translator decide how to translate the source
text. Some authors state that there are two types of macro-strategy: the source-
oriented one, which focuses on the source text, and the target-oriented one, which
focuses on the target text.

Thomsen (2009) adds that if the subtitler (translator) focuses on the form and
content of the source, as in documentaries, then the translation is source-oriented.
If the subtitler focuses on the effects of the text more than on the semantic
meaning, then the translation is target-oriented.

The subtitler, as a mediator between cultures, or even in the case of intralingual
subtitling, has to convey to the target audience the same information as the source,
unless it is adapted for the target receptors. For example, if an American TV series
set in New York is subtitled to fit Brazilian culture, the translator must find
equivalent street names, cafés etc.

Micro-strategies

Once the macro-strategy has been decided, the micro-strategy level comes into
play. There are many strategies on this level adopted by professional translators.

Gottlieb’s proposed strategies, for instance, clarify why the translator has chosen
to translate in a certain way; one of them, resignation, occurs in all types of verbal
transmission.

Marfa ran a case study on Gottlieb’s proposed strategies for subtitling films from
English into Spanish, and Maryam did another case study applying Gottlieb’s
strategies to subtitling Black English movies. All these studies confirmed the
success of those strategies in overcoming subtitling challenges at the micro level.

Moreover, Schjoldager (2008, p. 92) develops Gottlieb’s strategies and proposes
twelve types of subtitling strategies at the micro level, as follows:

1. Direct transfer: This strategy does not translate the source text words,
but transfers them directly to the subtitle, as in subtitling within the
same language for the hard of hearing.

2. Calque: This strategy presents a translation of the source text words
with a structure very close to the original. Sometimes this results in the
target text sounding ‘unidiomatic’ (ibid., p. 94).

3. Direct translation: This strategy needs no explanation, because it
represents a direct transfer of the source meaning to the target.

4. Oblique translation: It bears some similarity to direct translation,
but transfers the whole source context.

5. Explicitation: This strategy makes the implicit information in the
source text as explicit as possible in the target one.

6. Paraphrase: It gives the translator more freedom to formulate the
meaning of the target text according to their preference while preserving
the main content of the source.

7. Condensation: This strategy allows the translator to shorten the
subtitle according to the time and space limits, to overcome the technical
challenges. Condensation is sub-classified into two types:

a. condensation and reformulation at word level, which is classified into
six types;

b. condensation and reformulation at sentence level, which is classified
into nine types.

8. Adaptation: This is used when the source text contains a word or
reference which does not exist in the target for certain linguistic or
cultural reasons, such as (RL Gang with Arabic subtitles).

9. Addition: It is not used widely in subtitling, only when the translator
wants to add something to the source text.

10. Deletion: This strategy is applied when an element from the source
text is excluded for technical, cultural or linguistic reasons. Deletion is
sub-classified into two types: deletion or omission at word level, and
deletion or omission at sentence level.

11. Substitution: Translators use this strategy when they need to
replace the source meaning with a different one in the target. Translators
prefer not to use this strategy, in order to keep the target ‘subtitle’ close
to the source, unless they do it for a certain function.

12. Permutation: This strategy is applied when the translator includes
some of the source items in the target despite certain difficulties. This
can be seen in humoristic elements and wordplays.

It can be concluded that subtitlers must be aware of some extra concepts in order
to produce good work that can be classed as professional.

These concepts are:

»» The concept of culture: The subtitler must be aware of the cultural
issues in his or her translation, as part of the mediation between cultures,
with full understanding of and respect for their differences.

»» The use of translation strategies: This varies at the macro and
micro levels, considering the sociocultural, ideological and stylistic
effects.

»» The norms-linking concept: It is important for translators to
remember the link between translation norms and technical constraints,
especially with recent developments in technology, internet downloading
of films and subtitling software.

»» The written and oral concept: This is reflected in the
sociolinguistic role and the responsibility of the subtitler for using the
writing conventions adopted in a specific language.

All of this shows that translation studies help to develop audiovisual research by
bringing in relevance theory, Skopos theory etc. to overcome the standing
difficulties in the way of subtitling.

Have you ever seen your favorite films on DVD or on TV showing some strange text
at the bottom of the screen, sometimes even in strange, foreign languages? Have
you ever watched DVDs which display a list of different languages in which subtitles
are available for you to read?

That is because, as the word suggests, subtitles are titles, or simply texts, given at the
bottom (sub) part of the screen. Most of these are actual transcriptions of the words
being spoken on the screen by commentators, newsreaders or actors, but most
commonly in a language different from the one you natively speak.

Subtitling can be defined as the process by which specialists simultaneously
translate most of the audio text from a video into one or more different languages.
The whole process is composed of two necessarily simultaneous conversions:

The conversion from audio (aural) to graphic (visual / textual) mode (transcription)
– i.e. the mode of reception is changed from audio to textual. Information that was
available to be heard now becomes available to be read.

This is akin to transcription, except that the audio information is in one language
whereas the written (transcribed) information is in a different language.

The conversion (translation) from one language to another: in the process of
subtitling, the audio text is converted to a different language while being transcribed.
Hence, for example, a Chinese dialogue which is spoken on screen and heard by the
audience is converted into text-graphic subtitles in English and made available to
be read.

One can say that

»» subtitling = translation + transcription

›› Because there is oral-to-written translation, there is also an element
of interpretation:

»» subtitling = interpretation + translation + transcription

›› However, considering that translation is already a written activity,
the need to specify transcription is obviated. Hence, we can consider
the model:

»» subtitling = interpretation + translation

›› Another criticism of the above formula is that interpretation is
actually already a translation activity, with only the modes differing,
so it is actually interpretation plus transcription:

»» subtitling = interpretation + transcription

›› A second-level analysis of each element will perhaps remove the
overlaps and hence the confusion:

›› interpretation = listening (understanding) + translating (language
conversion) + speaking
›› translation = reading + translating (language conversion) + writing
›› transcription = listening + writing

»» subtitling = listening + understanding + converting the language + writing

›› Hence subtitling, at its core, is a combination of various elements of
interpretation, translation and transcription. It is a closely blended
combination of all three activities.

Types of subtitling

Intralingual or closed-caption subtitling is done for the benefit of the deaf and hard
of hearing. There is a legal obligation on British broadcasting channels to subtitle a
certain proportion of their material. For live broadcasts, it is now often automated
through a combination of respeaking and voice recognition.

Intralingual subtitling is also used to address regional linguistic variations
(politically problematic).

Interlingual (open caption) subtitling involves moving from the oral dialogue to one
or two written lines and from one language to another, sometimes to two other
languages (bilingual subtitling).

Surtitling, often used in opera, is one-line subtitling placed above a theatre stage or
in the back of the seats, displayed during the performance. Live subtitles are
pre-prepared but added at the time of broadcast. Live (real-time) subtitling is used
in interviews and in situations where no script is available beforehand; it involves
respeaking and voice recognition.

Dubbing & subtitling countries

Gottlieb’s (1998) ‘four blocks’:

1. source-language countries

2. dubbing countries

3. voiceover countries

4. subtitling countries

Source-language countries are English-speaking, with hardly any non-anglophone
imports. Imported material tends to be subtitled rather than dubbed, though more
commercial foreign films may also be dubbed. Imported material tends to consist of
‘art’ movies, aimed at a literate audience.

Some exceptional genres exist, notably animation and martial-arts films (‘kung fu’
dubbing). The DVD format has introduced greater variability here (O’Hagan 2007,
157).

Dubbing countries mainly include German-, Italian-, Spanish- and French-speaking
countries in Europe and elsewhere. Nearly all imported films and TV programmes
in these countries are dubbed. There is some availability of original-language
products, particularly in larger urban centres.

Voiceover countries include Russia, Poland and other large or medium-sized speech
communities for whom dubbing would be very expensive and for whom subtitling is
not favoured, e.g. for reasons of literacy rates. In feature films, one narrator
interprets all the dialogue; the volume of the original soundtrack is turned down
while he or she is speaking. Sometimes there are two narrators (one male, one
female).

Subtitling countries include several non-European speech communities as well as
several small European countries with a high literacy rate, for whom dubbing is
prohibitively expensive. In many countries different AV norms apply to theatrical
exhibition and television broadcast (and there are differences between national
broadcasters and cable).

Gottlieb draws our attention to many types of subtitling, including:

»» FL => domestic majority language

»» National minority languages => majority language

»» Majority language => immigrant language

»» Subtitling local varieties => common written language

»» Revoicing FL dialogue in a favoured language with subtitles in a
non-favoured domestic language.

How Subtitles Are Made – See Hear – BBC Two:

<http://www.youtube.com/watch?v=u2K9-JPIPjg>.

The Brazilian case with Subtitling

Translation by means of subtitles is the condensed interpretation of a movie or
television show. It is currently widely used in Brazil in many different media. Before
the popularization of cable TV, this method of translation only appeared in theaters
and on home video. Currently, subtitles can also be seen on pay TV, where
subtitling is more common than dubbing.

Research in this area took place mainly in the mid-1980s in Europe and in the early
1990s in Brazil. In Europe, it is the subtitling companies, distributors and television
channels that carry out research, to facilitate the dissemination of their
programming across several different cultures.

In Brazil, almost all research so far has been carried out by academia, that is, by
universities, through several theses and dissertations. The results of this research
have not yet reached professionals in the field, even though such research takes into
account the context in which translation is performed. Every researcher in this area
therefore needs to know the conditions of production in order to analyze them. An
important aspect to consider is the process of subtitling.

Here we will present the main characteristics of the production of the most common
captions in our country – the closed caption and the open caption. The focus will be
especially on those produced for television and video. Before we start this presentation,
we would like to clarify some questions on the subject.

1. The Classification of Subtitles

Captions can be classified according to two parameters: LINGUISTIC and
TECHNICAL. As for the first parameter, they can be either intralingual or
interlingual. The INTRALINGUAL caption is the one in the same language as the
spoken text. It is used in programs for hearing-impaired viewers, in programs
aimed at learners of a foreign language (Gottlieb, 1998) and in parts of television
news programs whose sound is not very audible.

The INTERLINGUAL caption is the best-known type: it renders the dialogues of a
film or TV program in a foreign language into the target (arrival) language in the
form of a written code. It is the most popular type of caption, since it is the one used
in cinemas, on video and on Brazilian television.

As for the technical aspect, captions can be open or closed. The open caption is
superimposed on the image before transmission or display, that is, it always
appears on the screen and does not depend on a decoder to be triggered. It can be
“virtual” (in the case of satellite transmission), “burned” with acid (in films for the
cinema) or electronically recorded (in movies for video distribution). It may be
yellow or white in color, and it may appear centered on the screen or aligned left or
right.

The closed caption is written in white, in upper or lower case, on a black stripe.
Access is at the discretion of the viewer by means of a subtitle decoder (closed
caption key) located (when available) on the TV remote control. These subtitles are
converted into electronic codes and inserted in line 21 of the vertical blanking
interval of the TV signal. They can be of two types.

The rotary (or roll-up) caption is one whose lines rise from the bottom of the TV
screen continuously, at a maximum of four lines at a time (in Brazil there are always
two lines), and the words that compose it are displayed from left to right. It is
usually the type used for live closed captioning. This rotary subtitling system can be
found in news programs, such as “Fantástico” on the Globo network.

The pop-on caption is one whose phrases or sentences appear as a whole, not word
by word as happens with the rotary caption. They stay temporarily on the screen,
usually in sync with the audio, then disappear or are replaced by other subtitles.
This is the type of caption used in prerecorded programs. The pop-on caption
resembles the open caption. Globo uses it for its movies and mini-series.

It is worth mentioning that in this case the mini-series is subtitled from the movie
dubbed into Portuguese, which does not occur with the open-captioned version,
translated directly from the English movie. Here we have a pivot translation, that is,
a translation made from another translation.

From the above, we can see that the closed caption in Brazil usually appears in
intralingual translations. In the case of the roll-up closed caption, the translation
approaches a transcription, that is, the closed caption carries almost all of the
original speech, unlike the open caption, in which much condensation is present.

In England, according to De Linde & Kay (1998), closed captions are edited and
called Closed Captions, not Closed Caption as in the American system. This editing,
typical of the European system, also happens in Brazilian closed pop-on subtitles.
However, as they are produced from the dubbed text, the amount of condensation
is less than in the open caption.

After this brief exposition of the definitions of open and closed captions, we will
begin to discuss the process of producing them.

2. The Open Captions

The process of open subtitling for video and television in Brazil works in the
following way. First, the translator receives the tape to be translated from the
“laboratory” or “subtitling company”. After translation comes the MARKING (of the
beginning and the end of each caption), performed by a professional called the
MARKER. The subtitles are then reviewed by the REVIEWER and finally recorded
onto the tape by computer or by an operator.

Therefore, we can see that whoever puts the subtitles on tape is not the translator
but a professional called the CAPTIONER. In order to differentiate this professional
from the translator, the author called him a CAPTIONIST, making use of a term
already in use in Brazil.

The subtitling of a film can be done with or without the help of specific software. All
subtitling companies and TV stations use specific subtitling software in the marking
and recording phases of the subtitles within the client’s sub-matrix, but only a few
allow their translators to use it when subtitling. The use of the program facilitates
the work of preparing the subtitles.

If the subtitling company or TV station does not provide the subtitling software it
has adopted, the subtitling is performed with a normal word processor – usually
Word for Windows. When electronic subtitling was done with only a character
generator, the translator was asked to do his captions on graph paper or typed
paper. In order to do this, the captioner had to mark the dialogue mentally or, in
possession of the script, mark the breaks in the written version of the text.

In Brazil, computer programs for subtitling (SYSTIMES and SCANTITLING, known
as CAVENA) are not very often used by translators because of the difficulty of
accessing them. The process takes place as follows: the distributor passes the master
tape to the subtitling company, which then hires the translator.

The marking is done manually with the help of the Word program and the Time
Code Reader. The working tape, that is, the tape to be subtitled, comes with a clock
which marks the hours, minutes, seconds and frames of the movie.

This clock is called the Time Code Reader (TCR) and is the main working tool of the
captioner. This tool becomes even more important if subtitling is not done with
software. The TCR 01:20:33:01 shows that the film has already run for one hour,
twenty minutes and 33 seconds.

The picture is at frame 1 of a total of 29 frames per second. After the caption has
been completed, the captioner delivers his translation to the marker. Then, after
going through the reviewer, the process comes to an end with the subtitler.
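The TCR arithmetic can be sketched in a few lines of Python. The function name is illustrative; the 29 frames-per-second figure is the one given in the text (broadcast NTSC is nominally 29.97 fps).

```python
# Convert an HH:MM:SS:FF timecode into elapsed seconds, counting frames
# as fractions of a second at the frame rate mentioned in the text.

FPS = 29  # figure from the text; NTSC broadcast is nominally 29.97 fps

def tcr_to_seconds(tcr):
    """'01:20:33:01' -> elapsed seconds (frames counted as fractions)."""
    hours, minutes, seconds, frames = (int(part) for part in tcr.split(":"))
    return hours * 3600 + minutes * 60 + seconds + frames / FPS

elapsed = tcr_to_seconds("01:20:33:01")
print(round(elapsed, 2))  # 4833.03 seconds: 1 h 20 min 33 s plus one frame
```

This makes the correction above easy to verify: 01:20:33:01 corresponds to one hour, twenty minutes and 33 seconds of running time, plus a single frame.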

3. The Closed Captions

Closed captioning in Brazil is produced by a company called Steno do Brazil. Globo,
the only broadcaster that regularly captions its programming, sends the signal by
satellite to the company, which captions the schedule, often in real time, because
many reports are broadcast live.

The professional in charge of the subtitling is called a “stenotipista” (stenocaptioner
in English) and uses a computerized stenograph. To perform this task, the
professional must be a good typist, as he needs to type an average of 160 words per
minute. At times he has to deal with reporters who speak up to 187 words per
minute in programs subtitled live (KLEIN, 2000).

The stenocaptioner is equipped with a special keyboard, the stenotype. It has 24
keys that can be operated simultaneously, which makes typing much faster. Another
major factor in its agility is the fact that words are typed by sound, that is, by
approximate phonetics rather than spelling. With just a few typed sounds, a
computer program searches the dictionary and finds the desired word. Sometimes,
however, this does not happen, and an unwanted word is produced.

These are the main characteristics of the subtitling process in Brazil. Any analysis of
the product of this process needs to consider the process’s influence on the
translation. This procedure is necessary because subtitling, although it uses writing
as an instrument, cannot be treated as a written text. As we have seen, the process is
much more complex, involving many technical, textual and translational aspects.

UNIT II │ SUBTITLING AND AUDIO DESCRIPTION

1. What are the challenges faced by those who engage in subtitling?

2. In your opinion, in which category of countries on Gottlieb's list should your country be included, and why?

3. What are the two types of captions and what are the main characteristics of each?

CHAPTER 2
Audio description

What is audio description?


Can you imagine trying to enjoy a live performance, movie, or TV show without being able to see it? It would be extremely challenging to get a complete understanding of what is actually happening on the screen.

You would most probably miss most, if not all, of the crucial information that is expressed visually, through scenery, character actions, or gestures, rather than through audio. It is easy to see how much detail can be conveyed in every single image of a movie or video.

Now, take a moment to imagine this description: “A snowman shuffles up to a purple flower peeping out of deep snow. He takes a deep sniff. His nose lands on a frozen pond. A reindeer looks up and pants like a dog. Seeing the reindeer slip on the ice, the snowman smiles and moves toward him, though actually, he’s running on the spot. The reindeer falls on his chin. The snowman uses his arm as a crutch. The reindeer paddles his front legs.”

Without any illustrations, this description paints a vivid picture of a certain scene from
the movie Frozen.

Listen to the description yourself in the official movie trailer:

<https://www.youtube.com/watch?v=O7j4_aP8dWA>. (accessed in June 2019).

A Definition of Audio Description (AD)

AD consists of the clear and objective description of all visually conveyed information: body and facial expressions that communicate something, titles, changes of time and space, special effects, costumes, opening and closing credits, information about the setting, and any text written on the screen.

AD allows the user to receive the information contained in the image at the same time that it is shown, allowing the person to fully enjoy the work and build his or her own subjective reading of the narrative, in the same way a sighted viewer does.


The descriptions are placed in the spaces between the dialogues and the significant sounds of the film or show; they are never superimposed on relevant audio, so that the narration harmonizes with the soundtrack of the work.

How is it done?

Different technical tools are used depending on the medium we are working on. For movies, series, soap operas or documentaries, the usual media are cinema, television and DVD.

In audiovisual products, the AD is added on a second audio channel. In the case of television, this is done through a channel that provides an extra audio band, usually activated by the SAP (Secondary Audio Program) key on the equipment.

For plays, the support is the spectacle itself and, in this case, AD can only be done live.

AD may be

Recorded

To produce recorded audio description, the process takes place in the following stages:

Study and screenplay

A specialized scriptwriter studies the work to be described and produces a script with the texts to be narrated. The creation of the script is a delicate and subjective task, which must follow the international standards and techniques established in countries where AD is already standardized.

The AD speeches are slotted into the gaps in the film's audio, so the script must indicate exactly where each speech is to be embedded in the original audio of the film; the scriptwriter therefore needs to work directly from a copy of the film with Time Code (the time reference that synchronizes audio and video).

If the script is written by more than one audio describer (for instance, in short-deadline work), a specialized reviewer should standardize the language and vocabulary.


»» Rehearsals and adjustments: After the script has been completed, the actor-describer must test the placement of the narrated speeches in the previously chosen places. This is the time when small timing adjustments, or the exchange of one word for another, occur so that the description is considered adequate.

»» Recording: With the script ready and the rehearsals done, the actor-describer enters the studio, accompanied by a recording director and a recording technician, to record the descriptions contained in the script.

»» Synchronization: The extra audio file, containing the AD, is edited and mixed with the original soundtrack of the movie or program; in the case of television and DVD, it goes through an extra audio channel. In the case of the cinema, the sound file is transmitted to headphones, so that the descriptions complement the original sound of the movie.

Rehearsed live AD

AD can be done live. This form is more appropriate in Film Festivals, plays, dance
shows, operas and other artistic manifestations in general.

In rehearsed live AD, the preparation of the AD speeches is done in the same way as in recorded AD, but the actor-describer performs the narration live. The first two steps, Study and screenplay and Rehearsals and adjustments, are identical to those of recorded AD.

»» Performed live: This is the performance of the AD at the same time the work is displayed. In this type of AD, which is usually done in cinemas and theaters, the equipment used is the same as that of simultaneous translation: the describers sit in booths narrating into microphones, and the sound is transmitted to users through headphones.

The movie or play session normally takes place without any interference for the rest of the audience. The AD user hears the original sound of the movie or play through the theater's own sound system, or the voices of the actors on stage, while the described content arrives through the headset.


Simultaneous AD

In this format, the describer does not have prior knowledge of the work to be described; there is therefore no script, nor any possibility of rehearsing it. It is thus the only format possible for products that are transmitted live.

Because of this characteristic, simultaneous AD is subject to failures and to overlaps between the describer's speech and the characters' speech, since the work has not been previously studied.

»» Training: For the result to be satisfactory, the professional who performs simultaneous AD must have undergone specific training.

AD in non-dubbed foreign films

When the product is foreign and not dubbed, the actor-describers must also perform an interpreted reading of the film's dialogues translated into, for example, Portuguese. This service is not dubbing, because the interpreted reading does not completely cover the original voices of the characters.

The viewer listens to the original dialogue but, in the form of a voice-over, also hears the interpreted reading of the dialogues, along with the AD of the scenes. This interpretation of the dialogues should subtly follow the mood of the scenes and the characters, but always in a lower, discreet tone. The preparation of the script, rehearsals and adjustments is similar to that of recorded AD.

What about the Internet? Wouldn’t it be relevant to have AD available online?

By 2020, the world is expected to have about 75 million blind people and three times that number of partially sighted people.

Although there is an almost uncountable amount of information on the internet, much of this content is not yet accessible to the portion of the population with visual impairment.

Let us imagine the following situation: a blind man wants to buy a laptop but, despite browsing the internet with the help of a screen reader, finds that much of the visual information cannot be interpreted.


This person may be looking for a simple laptop or a more sophisticated one, but the lack of detail will compromise the purchase decision, and often he/she will not have the autonomy to buy it alone, as a sighted person would.

It is as if a sighted person (normovisual) entered a site with eyes closed and had to make a purchase. The chances of buying the wrong product, or one not to their taste, are quite high.

How to make contents accessible?

This can be done through audio description (AD), which we have been describing here as a technique used to translate images into words.

AD was born in the United States in the 1970s, from the master's thesis of the researcher Gregory Frazier. In Brazil, the technique appeared in 2003 through the “Assim Vivemos” festival, which features films about people with disabilities.

But although AD is most prominent in audiovisual media, the technique can also be applied in other situations, such as plays, dance shows, didactic materials etc.

Because it is a technique, it consists of procedures and rules for achieving a good result. In AD's case, there are guidelines for guiding production. Among these guidelines, we shall highlight two:

1. Describe what you see

This guideline says a lot about what you will not describe. According to the US audio describer Joel Snyder, “describe what you see” is about narrating what is there to be seen, including what a lay viewer may not have realized was visually available.

Moreover, in the AD process it is necessary not to give more information than the work in question permits. That is, making material accessible does not mean helping a person with visual impairment to “understand it better”.

Always bear in mind that a person with visual impairment lacks only the vision; the person should therefore have the same autonomy that sighted people (normovisuais) have.

2. Neutrality

Neutrality is closely related to the previous guideline. It consists of translating what one observes in order to give the essential elements to the AD user, who can then create in his or her own mind the image of what has been described.


Speaking of neutrality at the moment of narration, Francisco Lima, a professor at the Federal University of Pernambuco, advises that the describer should not transmit through the voice what he or she thinks or feels about the work.

In other words, the narration of the AD must be consistent with the work. If it is an action scene, the voice should keep pace, so that the visually impaired person has the same feelings as the people who are actually viewing the moving image.

Contents with accessibility

Below, we have selected some types of materials and explain how to make them accessible to visually impaired people.

Videos

A survey conducted by Cisco showed that, by 2020, 85% of internet traffic will be video. But how can this kind of content be made accessible? Precisely through AD.

Although this technique is performed by specialized professionals, whether you run a company and are thinking of hiring a scriptwriter, or you wish to do the AD in-house, you need to look at a few points:

Scripting

This is an essential step and requires great care on the part of the scriptwriter.

The professional will watch the video, make some initial notes and list the images that are actually relevant to the context.

It is important to emphasize that speech is slower than images and that in some videos it will be impossible to describe every scene. In addition, the AD must fit into the intervals between speeches and must not compromise the work as a whole.
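The constraint that descriptions must fit into the pauses between speeches can be pictured as a simple gap search over dialogue timings. The times and the minimum-gap threshold below are illustrative assumptions, not values from any standard:

```python
def find_description_slots(dialogue, min_gap=2.0):
    """Return the (start, end) gaps between dialogue intervals that are long
    enough to hold an AD speech. `dialogue` is a sorted list of (start, end)
    tuples in seconds."""
    slots = []
    for (_, prev_end), (next_start, _) in zip(dialogue, dialogue[1:]):
        if next_start - prev_end >= min_gap:
            slots.append((prev_end, next_start))
    return slots

# Three lines of dialogue with one pause wide enough for a description.
dialogue = [(0.0, 3.5), (4.0, 9.0), (12.5, 15.0)]
print(find_description_slots(dialogue))  # [(9.0, 12.5)]
```

In practice the scriptwriter does this selection by ear and by Time Code, but the principle is the same: only pauses above a certain length can carry a description.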

Consulting

Once the script is ready, it is time for it to be validated by the consultant, that is, an AD professional with training in the area who is also a person with visual impairment.

The consultant’s job is to review the script, considering two aspects:

»» whether the insertions contained in it make it possible to construct images in the mind of the listener;


»» whether the sentences are in accordance with grammar rules.

If the consultant deems it necessary, he/she proposes improvements to the document. After this analysis, he/she sends the document back to the scriptwriter.

Voice recording

Ideally, the narration should be performed by a person who already has some practice in doing so. If that is not possible, what you need to know is that the narration must keep pace with the work.

If it is a teaser video, extremely fast and lively, the AD narration should keep up with that rhythm; likewise, if there is information on the screen delivered with emotion, the narrator should follow it.

Editing

Editing is extremely painstaking work that demands a lot of attention. Although it often seems simple, it is what will carry all the work of the scriptwriter and the consultant.

But, after all, what is so different about this step? The editor needs to follow the guidance set out in the script and, in addition, often needs to speed up the voice slightly so that a given phrase fits into the available time.

Approval of the consultant and screenwriter

Finally, the work ends when the consultant and the scriptwriter evaluate the narration, the editing and the final cut.

This step is important because, depending on the context of the work, it may be necessary to make changes before making the video available.

Webinars

Imagine that you, or the company you work for, will run a webinar on good practices for Landing Pages, an extremely visual subject.

When the webinar is also intended for people with visual impairment, we should remember to describe some information:

»» Location (a room with colorful poufs, for example);


»» Clothing (if it is very distinctive or refers to the theme);

»» Shared screen presentations need explanation.

In the case of tables, they must contain captions and, depending on how long they remain on the screen, they should also be e-mailed to all participants.

Blog/eBook

A visually impaired person can easily access a PDF or a blog through screen reader software.

However, some details within this material may compromise the understanding of the subject. Let us explore such details.

Font size

As much as computers offer ways to magnify the screen, the zoom mode can often break the layout of the content and make it difficult to read. After all, a person with visual impairment may be blind or may have some other degree of reduced visual acuity. Therefore, the font size could be 14 instead of 10.

One must also be careful when publishing PDF materials. One should check how the file was created: whether it contains searchable text, that is, the text itself, or whether the content is available mostly as images.

The latter makes it considerably more difficult for people with visual impairment who use screen reader software to read the content; in practice, it can make reading impossible.

ALT description

Many marketers know that the ALT attribute is used to optimize on-page SEO, but it can also contribute to accessibility. ALT is used in HTML code and its purpose is to provide an alternative (hence the name ALT) text for the image.

If you cannot use ALT to make the image accessible, the suggestion is to create a caption that describes it.
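As a rough illustration of how missing ALT text can be spotted, here is a sketch using Python's standard-library HTML parser; the markup and file names are invented for the example:

```python
from html.parser import HTMLParser

class AltChecker(HTMLParser):
    """Collect the `src` of every <img> tag that lacks a non-empty alt attribute."""

    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not attrs.get("alt"):
                self.missing.append(attrs.get("src", "?"))

checker = AltChecker()
checker.feed(
    '<img src="laptop.jpg" alt="Silver laptop, open, seen from the front">'
    '<img src="banner.jpg">'
)
print(checker.missing)  # ['banner.jpg']
```

Only the second image is flagged, because its description is missing; a screen reader would have nothing to announce for it.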


Images

Images make a big difference when we talk about content and apply to all kinds of commercial, non-commercial, personal and group contexts.

That is why, in addition to inserting the ALT text, it is important to add an accessible caption, that is, one written with the aid of AD.

AD in static images should follow a fundamental guideline: we describe from the general to the specific.

In addition, the description must follow a logical order, that is, it should run from top to bottom and from left to right.

Describing the general means saying what the image is about.

As for the specific, we bring in the more detailed items of the image, without losing sight of clarity and objectivity.

Infographics / graphics

Each person has one of the five senses that is more acute than the others, and working with a diversity of content formats can help reach everyone.

Another visual way to present information is through infographics. But to make this content accessible, we need to pay attention to some points during the drafting process.

Size of the figures

Usually infographics have icons/images that help us understand them.

When we are working on material for an audience that has some difficulty seeing, we must enlarge such icons and save them at a higher resolution, in case it is necessary to zoom in.

Captions

In addition, the use of captions can help in understanding this kind of graphic piece, bringing in information that did not fit into the image.

In static images, we must pay special attention to the order in which we describe: the background with its respective color; which geometric figure is used or, alternatively, what the figure resembles; and whether the graphics are aligned right, centered or left.


Also, if a text follows the normal reading flow, from left to right and from top to bottom, it is not necessary to keep repeating “below, below, below”.

You should say it only once, unless there is a break in the sequence, for example a text below and then another, centered one.

In this case, it is necessary to point out that the text is centered, since the person with visual impairment would not otherwise be aware of this change in format.

It should be noted that if there is such a change, it is because those who produced the image wanted to give prominence to what was centered, so it should be described.

Also, it is not necessary to use expressions like “it is written” or “written text”: if the text comes between quotation marks, people can identify that it is a message contained in the image.

Landing Page

A Landing Page (LP) is a conversion page that aims to turn a visitor into a Lead. RD Station Landing Pages are 90% accessible to the visually impaired.

However, if you use another LP format, remember that you need to follow the best practices available, such as captchas. Nevertheless, avoid captchas based on images (for example, those that ask you to click on pictures of “cars” to prove you are not a robot).

E-mail

Good e-mail formatting practice should also be remembered when we are talking about making a message more accessible. That is, do not use only images; combine text and images to make the content accessible. And, of course, describe the images using the basic rules of AD.

Let’s imagine that you are going to send out a promotional marketing e-mail announcing a large sale of winter clothes.

Information about color, size, and values must be accessible so that visually impaired
people can have guaranteed access to the products.


Finally, when we talk about Inbound Marketing and about educating our audience at each stage of the buying journey with the goal of turning them into customers, we must be open to the diversity of people.

We need to produce content that reaches the person's soul, but we must not forget that this person may be visually impaired.

That is why using assistive technology techniques can contribute not only to the inclusion of these people, but can also be decisive in attracting (or not) a new Lead.

Can audio description be translated into other languages?

In today's subtitling and dubbing processes, the translator receives the preliminary script, and sometimes the continuity, to work from when rendering the different versions.

A brief look at the current situation of audio description (AD) in Europe shows that in
the wake of pioneering countries such as the United Kingdom and Spain, a second wave
of countries is now starting to provide AD.

Portugal had its first fully accessible DVD, O Nascimento de Cristo (Hardwicke, 2007), officially presented at the second Media for All conference in Leiria in November 2007. In the same year, the first Dutch audio-described film, Blind (van den Dop, 2007), was premiered in Utrecht.

The year 2009 saw the production of the first commercial Belgian-Dutch DVD with AD, Loft (VAN LOOY, 2008), and in July the first Italian film festival to offer AD, the Roma Fiction Fest, was organised.

In fact, these are no more than a few examples; AD appears to be experiencing a boom. This, of course, means that research cannot lag behind.

However, most research into AD to date focuses on AD as a new text type, its interaction
with the other semiotic systems of the film, and the need for AD guidelines. The present
paper will look at the challenges presented by the interlingual translation of existing
AD scripts.

On the basis of two case studies it aims to gain some insight into the following issues:
why translate AD? Is the translation of AD comparable to other forms of audiovisual
translation (AVT) and/or does it present specific problems?


Audio description as a form of audiovisual translation

As Holmes pointed out when detailing his seminal classification of Translation Studies
(TS), existing scientific disciplines scramble to reconsider and adapt their paradigms
and models when a new area of investigation crops up that may be of interest to their
own field, or, as Mulkay put it: “Science tends to proceed by means of the discovery of
new areas of ignorance”.

Although AD is relatively new, both as a commercial practice and as an area of research, we can say that it has already acquired some form of ‘tenure’ within the broader field of AVT. On the one hand, this is no doubt an issue of power and expediency, the appropriation of an area of teaching and research that already had a certain tradition elsewhere.

Witness, for example, the publication of articles in, amongst others, the Journal of Visual Impairment and Blindness. On the other hand, research from a TS and, especially, an AVT perspective can undoubtedly also offer valuable contributions.

Indeed, AD bears some obvious resemblances with other forms of text production and
other AVT practices more specifically. Subtitling is a case in point. In both AD and
subtitling, the source text is a multisemiotic product and its translation into a new
target text is constrained by linguistic as well as extra-linguistic factors.

Let us home in on the latter. In subtitling, the length of the subtitle is determined by
the duration of the spoken dialogue, the target audience’s reading speed and by various
technical constraints, such as the number of characters and lines of text available.

AD is faced with similar problems as the descriptions must be fitted in between (film)
dialogues and must not interfere with the spoken text or relevant sound effects of a
given film.

Moreover, both subtitlers and describers must be aware of the fact that their translations
will become an integral part of a multimodal product and will therefore interact with
the various other semiotic channels of the source text.

This means that some of the skills of both types of ‘translators’ are very similar: they must be able to create a concise and stylistically appropriate text that promotes the comprehension and interpretation of the source text within which it is integrated, for an audience from a different cultural background.


One particularly relevant concept that comes to mind in this context is Chaume’s
(2004) concept of intersemiotic cohesion: neither subtitles nor audio descriptions are
self-sufficient texts.

They lack coherence and cohesion when they are read (or heard) out of context, deprived of the input from the visual and aural channels of the film (or other audiovisual product).

Intersemiotic cohesion in subtitling “refers to the way it connects language directly to the soundtrack and to images on screen, making use of the information they supply to create a coherent linguistic-visual whole”.

However, AD also differs from other forms of AVT in significant ways. For one, there is
the essential difference in the actual nature of the translation process.

Notwithstanding the multimodality of any subtitled product, ‘traditional’ intralingual and interlingual subtitling (as opposed to SDH) can be said to remain intrasemiotic rather than intersemiotic translation, even if it is not always a case of ‘translation proper’ in Jakobson’s terms.

AD, on the other hand, even in its script-writing phase, is a form of intersemiotic translation, producing a text for a heterogeneous yet also specific target audience that has no, or limited, access to the visual information of the filmic text.

This creates various new and unique problems for the describer, the main one being
the transition from visually to verbally conveyed information, both of a denotative and
a connotative type.

The translation of audio description

Notwithstanding its particularities, studying AD from an AVT perspective clearly has its advantages. Strangely enough, however, the interlingual translation of existing audio descriptions, another form of AVT, is almost conspicuous by its absence in AVT practice and research.

One reason may be that there is only a limited amount of research material available.
However, AD translation does happen (witness our study) and, we claim, will increase
in the (near) future, if only because it may be perceived as a cost-cutting factor by
international translation companies, film producers and distributors.

To the best of our knowledge, the translation of AD is mentioned in some articles or conference papers, but there is disagreement, at best, about its efficiency as a working method.


It is claimed that translating an AD may take just as long or even longer than creating
a new AD script.

By contrast, Bourne and Jiménez write that “translating audio descriptions would seem to offer considerable advantages in terms of time and therefore cost in comparison with the present practice by which audio descriptions are written from scratch by professional describers in different languages”. With this article we want to demonstrate that the issue is well worth investigating.

There seem, in fact, to be a few arguments in favour of translating existing audio descriptions. Most translators are not fully trained in intersemiotic translation, nor are they experts in film studies.

As a result, they are faced with two completely new and particularly time-consuming
challenges: the description of visual images into words and content selection. Indeed,
there is often insufficient time available to describe all the relevant information shown
in the image. By contrast, when translating an audio description, the selection of visual
material and the wording have already been done.

Translators of audio descriptions will, no doubt, be faced with other problems, but the
question is: are these problems different from translation problems that they would
encounter in other translational contexts?

And if they are not, might this mean that the translation of audio descriptions is more
central to the core business of TS and AVT studies than AD proper? These are the main
questions that will be considered in this article.

Besides, a related, more practical consideration deserves brief mention. It is safe to say
that worldwide training in AD, both at universities and in companies, is relatively new.
In other words, countries starting out with AD have no or few trained describers.

As it happens, many of the countries that are relatively new to AD are also countries that
import a lot of foreign productions and therefore have plenty of professional translators
with experience in AVT, translators who are familiar with the constraints governing
AVT translation.

In short, apart from the issue of whether the translation of audio descriptions may or
may not be an economically more viable choice, it remains a logical option for countries
that are just starting to provide AD.


General (audiovisual) translation issues

A first category of problems that we noticed seems to occur in other types of translation as well; hence we label them general (audiovisual) translation issues. However, based on the idea that AD texts can be considered a new text type, we decided to take a closer look at them to determine whether they occur equally frequently in other types of translation or can nevertheless be ascribed to the specificity of our source material.

Grammatical problems

Two TCPs relating to grammatical issues stand out: the translation of the conjunction ‘as’ (see example 1) and the translation of ING-forms.

The use of ING-forms abounds, including ING-forms that are part of a verb phrase featuring a verb in a continuous tense, different types of non-finite ING-clauses, and combinations of ING-clauses with different functions (adverbial clauses, nominal clauses, adjectival clauses), often with the ING-form in initial position and used without a subject, as in examples (2) and (3) below.

1. Rob runs after Rachel as she rushes through some reeds. (ZB)

2. Passing a mirror, she sees herself and averts her eyes. (B)

3. Noticing Catherine staring at her, Marie, a young woman who has long
wavy hair, lowers her arms, then heads out of the room.

Both the occurrences of ‘as’ and the multiple ones of ING-forms appear to call for multiple translation strategies: firstly, because different contexts and functions call for different choices, and secondly, because there is no one-to-one grammatical equivalent in Dutch, which means a choice must always be made.

The need for different solutions, even for the translation of ING-forms with the same grammatical function, may be due to the visual context or the visual data to be described, the narrative context (e.g. whether or not two actions are interpreted as simultaneous), or the linguistic target context (e.g. depending on the sentence or clause into which the translation of the item must be incorporated).

In most cases the formal translation shift does not result in meaning shifts, but in
some cases it does. In the following section we will have a closer look at the different
translation strategies for ‘as’, since covering both grammatical issues would entail
covering too much ground for the scope of this paper.

Although the word ‘as’ can have different functions, its predominance as a subordinating conjunction in both our English source texts is striking. Moreover, as shown in the table below, ‘as’ appears to occur considerably more often than some other very common English conjunctions such as WHEN, BECAUSE and SINCE.

Apparently, ‘as’ is a very useful word when it comes to audio describing film. Ideally,
of course, the importance of this word count should be checked against a similar word
count using corpora of different types of naturally occurring standard British English,
research that we are planning as a result of the present case study.
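As a rough illustration of the kind of word count meant here, the sketch below tallies selected conjunctions in a snippet of script. The sample text is invented (loosely styled after the AD examples above) and the function name is our own; a real study would run this over full AD corpora and reference corpora of standard British English.

```python
import re
from collections import Counter

def conjunction_counts(text, conjunctions=("as", "when", "because", "since")):
    """Tally selected conjunctions in a script, case-insensitively."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(tokens)
    return {c: counts[c] for c in conjunctions}

# Invented mini-script, not taken from the corpus under study.
script = (
    "Rob runs after Rachel as she rushes through some reeds. "
    "As she passes a mirror, she averts her eyes. "
    "When the door opens, she steps inside."
)
print(conjunction_counts(script))
# → {'as': 2, 'when': 1, 'because': 0, 'since': 0}
```

Even this toy run shows the pattern the case study reports: ‘as’ outnumbering the other common temporal and causal conjunctions.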

For now, our hypothesis is that ‘as’ is so omnipresent in both English AD scripts because
of its temporal function as a conjunction denoting simultaneity (see e.g. example 2).

Film equals action taking place in time, and often involves simultaneous actions or action
sequences. These can take place within one setting or in different settings connected
through cross-cutting or camera movement. At the risk of being slightly speculative, we
would say that in AD, ‘as’ functions as the verbal equivalent of such visual editing and
the depiction of (near) simultaneous action.

The subordinating conjunction ‘as’ can act as a subordinator of finite clauses, and as a
subordinator of -ed clauses or verbless clauses. ‘As’ can function as a subordinator of
time, indicating simultaneity (4), but also as a subordinator signalling reason (5) and,
occasionally, concession (6).

4. As it grew dark, we could hear the hum of the mosquitoes.

5. As Jane was the eldest, she looked after the others.

6. Naked as I was, I braved the storm.

Quirk et al. also point out that clauses introduced by ‘as’ can sometimes give rise to a
certain ambiguity, suggesting both a temporal and a causal relationship, as in example
(7), where the person referred to as ‘he’ could be hearing the conversation ‘because’ he
is standing close to the kitchen.

7. As he was standing near the door, he could hear the conversation in the
kitchen.

Besides these basic functions, Quirk et al. also discuss instances of ‘as’ in clauses of
similarity and comparison (8), as well as in comment clauses (9 and 10).

8. (Just) as a moth is attracted by a light, so he was fascinated by her.


‘As’ in comment clauses functions as a relative pronoun in a sentential relative clause,
or as a subordinator:

9. She is extremely popular among students, as is common knowledge.

10. He is the best candidate, as it seems.

Extralinguistic cultural references

One universal type of TCP is what Pedersen calls extralinguistic cultural references
(ECRs), a term he defines as a reference that is attempted by means of any culture-bound
linguistic expression, which refers to an extralinguistic entity or process, and which is
assumed to have a discourse referent that is identifiable to a relevant audience, as this
referent is within the encyclopaedic knowledge of this audience.

In other words, ECRs are expressions that refer to entities outside language such as
names of people, places, institutions, food and customs, which a person may not know,
even if s/he knows the language in question.

They occur in all types of texts and their translations, and neither AD nor its translation
constitute an exception. Again, the issue is worth exploring here in more detail, given
the unique nature of the original source material, i.e. the film.

In the first phase, the description of the visual images into words, the relation between
the source ECR and the target ECR is not one of transition from one natural language
into another but from a visual concept into a verbal concept.

One fundamental difference between verbal and visual communication is that visual
images – especially on the more concrete level – seem to be, to a much higher degree,
‘pre-filled’ with meaning; in other words, visual communication does not generalise.

The linguistic term ‘car’ is a very general concept that can conjure up many different
mental images in the reader, whereas ‘a car’ shown in a film is always a specific type of
vehicle. This obviously has consequences for the describer, who will never be shown a
general concept, but always a very specific reference.

As in other types of translation, the strategy chosen to describe that ECR will be
based on (AD) constraints, the describer’s own knowledge and research skills, and an
evaluation of the target audience. If the concept belongs to or is known in the target
culture, a specific term can be used to describe the ECR.


Otherwise a more generalising description may be an option. However, it seems that to
a larger extent than in natural languages, the preferred strategy will also be based on
the encyclopaedic knowledge of the describer.

While for verbal communication sources that explain specific ECRs abound, such
sources are not (yet?) available to an audio describer trying to find out what a certain
visual ECR is, unless the master script provides the information.

Only if the describer can visually identify the ECR (e.g. by making use of a picture
dictionary) or knows its name can it be rendered by a specific natural-language
equivalent. In other cases, a ‘guild house’ typical of, for example, Flanders may become
a ‘stately house’, to give but one example.

Retention

“The police clear the children away and one of them finds a sten gun in amongst the
vegetables.”

“A polícia libera as crianças e uma delas acha uma submetralhadora entre os
vegetais.”

Direct translation

“Once outside they run over to Muntze’s staff car which is parked nearby.”

“Uma vez lá fora, eles correm até o carro particular de Muntze, que está estacionado
ali perto.”

Substitution

“Rachel gets off the bike, walks over to the door of a terraced house and rings the
bell.”

“Rachel sai da bicicleta, anda até a porta de uma casa com um terraço e toca a
campainha.”

Generalisation

“Van Gein gets out and puts on his trilby.”

“Van Gein sai e coloca seu chapéu.”
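
The four strategies can be kept apart by tabulating them. The sketch below merely organises the example fragments quoted above into a small data structure; the dictionary layout and variable names are our own and purely illustrative, not part of Pedersen's framework.

```python
# Pedersen's ECR rendering strategies, illustrated with the
# English-Portuguese example fragments quoted above.
ecr_strategies = {
    "retention": ("finds a sten gun in amongst the vegetables",
                  "acha uma submetralhadora entre os vegetais"),
    "direct translation": ("run over to Muntze's staff car",
                           "correm até o carro particular de Muntze"),
    "substitution": ("the door of a terraced house",
                     "a porta de uma casa com um terraço"),
    "generalisation": ("puts on his trilby",
                       "coloca seu chapéu"),
}

for strategy, (source, target) in ecr_strategies.items():
    print(f"{strategy}: {source!r} -> {target!r}")
```

Laid out like this, the gradient from keeping the source reference (retention) to flattening it into a superordinate term (generalisation) is easy to see.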


Given that the translation of an existing audio description is an instance of pivot
translation (cf. supra), the person translating the initial description in fact has a
double task.

First, the translators will have to consider a new target audience. If the source ECR is
not an established concept in the target culture, it can be substituted, replaced by a
more general target language reference, or omitted altogether.

If the audience is expected to know the ECR from the source text, the reference can
be retained (possibly adapted to target language standards) or a direct translation
provided. This requires careful judgement; one might wonder, for instance, whether
the Dutch audience know what a ‘Kübelwagen’ is, but one might ask the same question
about the English audience.

This brings us to the translator’s second task. The AD-translator cannot take the verbal
source text for granted and will have to evaluate whether the original describer has
chosen a suitable strategy and/or level of specificity in the source description.

Unlike other instances of indirect translation, where the pivot language is generally
chosen because the translator does not (sufficiently) understand the source language,
the translator of an audio description understands both the source material (the film)
and the language of the original description and should be able to make a correct
assessment of the strategies used in the initial AD. Indeed, we have pointed out above
that the translated version does make use of filmic (non-verbal) information.

As the examples from our case study show, it is safe to say that ECRs, just like the
conjunction ‘as’, constitute a TCP for translators of audio descriptions. But although we
categorised the translation of extralinguistic cultural references as a general translation
problem, our study seems to show that at least part of the issue is very closely related
to the visual source material; the translator of any audio description will have to
evaluate the strategy adopted by the describer and go back to the film to see if another
strategy or translation might be more suitable.

This could indicate that translation of cultural references in audio descriptions is not a
general problem after all but should also be classified as an AD translation issue.

Science “tends to proceed by means of the discovery of new areas of ignorance”, and
research tends to elicit more questions than it solves. Our case study shows that claims
in favour of or against the efficiency of translating audio descriptions are premature.


Audio descriptions, we have shown, share characteristics with other forms of AVT but
have their own specificity as well: first, because AD is a new text type, and second,
because it functions as a pivot translation in this particular set-up. This means that
audio descriptions that function as source texts for subsequent translations of necessity
have their own challenges.

More concretely, our case study shows that some word types and/or grammatical forms
(e.g. ‘as’) may appear more often and with different or more limited functions in AD.
This can result in TCPs (especially if the ST usage is ambiguous) and cause
language-pair-specific problems if the word in question has no easy translational
equivalent and occurs more often in AD than in other text types.

With respect to ECRs, the challenge resides in the source text as a pivot translation that
the translator can or must double check with the film, and the differences in cultural
distance that might occur between film – audio description – translation.

On the other hand, a large portion of audio descriptions is made up of very straightforward,
short and simple sentences, and the style of source-text audio descriptions varies.
Quantifying the translation effort is therefore a complicated matter.

By way of conclusion, we would say that it is not necessarily the case that the translation
of AD is more central to TS than audio description itself. We do plead for the integration
of both AD and AD translation in AVT courses because they require related skills.

And we would like to end with a call for more research, on large corpora, into the
specificities of AD as a text type as well as more research into AD translation, in order
to identify the challenges and start tackling them.

Practice of AD – You may want to adapt it to translation practice

First, note that these tips apply to ONLINE Audio Description. Also, Web Content
Accessibility Guidelines (WCAG) 2.0 defines how to make Web content more accessible
to people with disabilities. Accessibility involves a wide range of disabilities, including
visual, auditory, physical, speech, cognitive, language, learning, and neurological
disabilities.

Audio Description (AD) in video content has proved vital for ensuring that people
who are blind or vision impaired can experience video presentations such as movies
and TV shows. In Australia, AD content is growing in the areas of cinema and DVD.


Internationally, we have seen huge growth in audio described television. However, AD
content still has a low profile online.

The good news is that in recent times there have been a lot more discussions about AD,
including several great email conversations relating to techniques and best practice in
a number of W3C working groups.

The challenge, though, is that it is hard to know the best way to go about audio describing
online video; while WCAG Level ‘AA’ compliance requires AD, developers have many
questions for which the answers are not clearly defined. The questions I’ve been asked
and have seen online generally fall under the following four:

What does AD mean in terms of WCAG 2.0? It is difficult to understand what needs to
be done between level ‘A’ and ‘AA’.

What should I do when there’s no room for audio description, such as a talking head?

Should I embed two audio tracks or have two separate video files?

Audio description usage online seems to be a bit like the Yeti: I hear about it but I never
come across it. Do I really have to do it?

This month’s column will focus on addressing these questions, and on providing you
with some quick tips on how to ensure that your video is compliant with WCAG while
making sure that people who are blind or vision impaired can enjoy the time-based
media content.

Tip 1: audio description in level ‘A’ can be a text description

WCAG 2.0 has two different success criteria for audio description, depending on
whether you are trying to achieve ‘A’ or ‘AA’ compliance. Level ‘A’ states that:

1.2.3 Audio Description or Media Alternative (Prerecorded): An alternative for
time-based media or audio description of the prerecorded video content is provided for
synchronized media, except when the media is a media alternative for text and is clearly
labelled as such.

What this means is that there needs to be an alternative, but it doesn’t have to be actual
AD. An alternative could be a text transcript located on the same webpage as the video
content, describing the visual aspects of the video. However, if it is possible to provide
traditional AD then the W3C encourages you to do so.


Tip 2: level ‘AA’ websites really do need audio described video

Many people argue that there’s no need for audio described content as there’s hardly
any out there.

The WCAG 2.0 Level ‘AA’ guidelines state that:

1.2.5 Audio Description (Prerecorded): Audio description is provided for all prerecorded
video content in synchronized media. (Level AA)

This guideline is pretty clear: audio description really does need to be included for a
website to be truly Level ‘AA’ compliant.
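
The difference between the two success criteria can be captured in a toy decision helper. This is not a conformance checker – real WCAG evaluation involves far more than two booleans – and the function name and return values are invented; it simply sketches the logic of SC 1.2.3 versus SC 1.2.5 as described above.

```python
def wcag_video_level(has_audio_description, has_text_alternative):
    """Rough reading of WCAG 2.0 SC 1.2.3 (Level A) and SC 1.2.5
    (Level AA) for prerecorded synchronized media with visual content."""
    if has_audio_description:
        # SC 1.2.5 satisfied, which also covers SC 1.2.3.
        return "AA"
    if has_text_alternative:
        # SC 1.2.3 accepts a media alternative (e.g. a transcript) instead of AD.
        return "A"
    return "fails"

print(wcag_video_level(True, False))   # AD track present -> AA
print(wcag_video_level(False, True))   # transcript only -> A
print(wcag_video_level(False, False))  # neither -> fails
```

The middle case is the crux of Tip 1: a transcript near the video keeps you at Level ‘A’, but only genuine audio description gets you to Level ‘AA’.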

The question then is: if so many websites aim for Level ‘AA’ compliance, why isn’t there
more AD video content? Some organisations argue that it’s too hard to do. This appears
to be the view held by the Queensland state government, which even took the
extraordinary step of removing AD from its policies while keeping all other aspects of
WCAG 2.0 Level ‘AA’. In practice, though, it’s not as challenging as it first appears, and
some of the other tips here can help.

Tip 3: if there’s no room for audio description during the video, put context AD in at the beginning

There’s been a lot of discussion recently about what to do if there’s a video featuring a
‘talking head’. For example, if a government Minister is giving a speech about something
which has a lot of talking and no breaks.

Some have argued that a bit of alternative text about the speech located near the video
negates the need for AD, while others argue that this does not work because, if the video
is embedded somewhere else, the text description is lost. Still others argue that there’s
no real need for AD in these circumstances, but this raises concern among people who
are blind or vision impaired, who argue that they need some context for the speech.

A possible solution is to put a bit of AD at the start of the video, explaining the context
of the speech, the location of the speech, the person talking and any other important
information relevant to the event.

This would effectively set the scene for someone who is vision impaired, would require
very little effort and almost no additional editing, and would ensure Level ‘AA’
compliance.


Tip 4: put a standard version and an AD version online together rather than trying to embed multiple audio tracks

In the case of a talking head, it would make sense just to have the audio described
version but in more standard AD video releases such as a movie or TV show, it’s good
for people to have the option of viewing the standard version or an AD version.

Many developers have asked whether it’s better to put multiple audio tracks into one
video file or to have two separate video files, and my advice would be to have two separate
versions: while multiple audio tracks work well on DVD, and most online video formats
support multiple audio tracks, the time and effort involved in getting everything
synchronised and in ensuring that people have a media player that supports toggling
between tracks makes it very difficult.

A great example of best practice on this is the BBC iPlayer. The BBC iPlayer has nearly
all of its television programs audio described, and simply offers an additional AD copy
of the show on its website, along with the standard audio version.

This approach works well and due to the low cost of data storage these days, hardly
makes any difference to the bottom line on that front. The amount of AD present on the
BBC network dispels the myth that there is no AD content available on the web.

Tip 5: there may be an AD soundtrack available already

When it comes to online content, people often assume that there’s no AD soundtrack
for the video footage being used even though the footage may have been in the cinema
or released on DVD. It may save you time and effort to track down an AD version of the
footage you need to use rather than create an AD soundtrack.

So, while AD may require a bit more planning than some of the other WCAG 2.0
guidelines, it may not be as difficult as first thought, and it will make a big difference in
accessibility for people who are blind or vision impaired. Additional information is also
available on creating audio description files.

1. What is an audio description? What is it for? Who is it for?

2. What is the new audio describer?

3. What are extralinguistic cultural references and in what way do they interfere in
the translation process?

Bibliography

BAKER, M. (Ed.). Routledge encyclopedia of translation studies. London: Routledge, 1998.

BAÑOS, Rocío; DÍAZ-CINTAS, Jorge. “Language and translation in film: dubbing and subtitling”. In: MALMKJAER, Kirsten (Ed.). The Routledge Handbook of Translation Studies and Linguistics. London: Routledge, 2018. pp. 313-326.

CHAUME, Frederic. “Dubbing practices in Europe: localisation beats globalisation”. Linguistica Antverpiensia, 6, pp. 203-217, 2007.

CHRISTIE, F. Classroom discourse analysis: a functional perspective. London and New York: Continuum, 2002.

CINTAS, D.; REMAEL, A. Audiovisual translation: subtitling. Manchester: St. Jerome Publishing, 2010.

DE LINDE, Z.; KAY, N. The semiotics of subtitling. Manchester: St. Jerome, 1999.

DOUGHTY, C. Acquiring competence in a second language. In: BYRNES, H. (Ed.). Learning Foreign and Second Languages. New York: The Modern Language Association of America, 1998. pp. 128-156.

FAIRCLOUGH, N. Media discourse. London: Edward Arnold, 1995.

FAIRCLOUGH, N. Analysing discourse: textual analysis for social research. London and New York: Routledge, 2003.

GOTTLIEB, Henrik. Subtitling – a new university discipline. In: DOLLERUP, Cay; LADEGAARD, Anne (Eds.). Teaching Translation and Interpreting: Training, Talent and Experience. Amsterdam/Philadelphia: John Benjamins Publishing Company, 1992.

GOTTLIEB, H. Subtitling. In: BAKER, M. (Ed.). Routledge encyclopedia of translation studies. London: Routledge, 1998. pp. 244-248.

KARAMITROGLOU, Fotios. Towards a Methodology for the Investigation of Norms in Audiovisual Translation. Amsterdam: Rodopi, 2000. 300 pp.

KLEIN, M. “Legendagem de programas ainda é pouco utilizada na TV brasileira”. Folha de São Paulo: Caderno de TV, 2000.

KRESS, G. Linguistic Processes in Sociocultural Practices. Oxford: Oxford University Press, 1989.

KRESS, G. Literacy in the New Media Age. London/New York: Routledge, 2003.

KRESS, G.; van LEEUWEN, T. Reading images: the grammar of visual design. London: Routledge, 1996.

KRESS, G. R.; van LEEUWEN, T. Reading images: the grammar of visual design. 2nd ed. London/New York: Routledge, 2006.

MATAMALA, Anna. Translations for dubbing as dynamic texts: strategies in film synchronization. Universitat Autònoma de Barcelona, 2010.

O’CONNEL, E. Screen translation. In: KUHIWCZAK, P.; LITTAU, K. (Eds.). A Companion to Translation Studies. Toronto: Multilingual Matters Ltd., 2007. pp. 120-133.

PEDERSEN, J. High felicity: a speech act approach to quality assessment in subtitling. In: CHIARO, D.; HEISS, C.; BUCARIA, C. (Eds.). Between Text and Image: Updating Research in Screen Translation. Amsterdam: John Benjamins, 2008. pp. 101-115.

SCHJOLDAGER, A. Understanding Translation. Denmark: Academia Publications, 2008/2009.

SCHWARTZ, Barbara. Translation for dubbing and voice-over. In: The Oxford Handbook of Translation Studies, 2011.

SHUTTLEWORTH, M.; COWIE, M. Dictionary of Translation Studies. London: St. Jerome Publishing Company, 1997.

THOMSEN, Jane E. A comparative analysis of macro- and micro-strategies in subtitling and dubbing. 2009. Available at: <http://theses.asb.dk/projekter/research/shrek%28217808%29/>. [Accessed 13 May 2019].

TÖRNQVIST, E. Ingmar Bergman Abroad: The Problems of Subtitling. Amsterdam: Vossiuspers AUP, 1998.

University of California-Berkeley: Tools for the Classroom Setting.

UNSWORTH, L. (Ed.). Researching language in schools and communities. London and Washington: Cassell, 2000. pp. 222-244.

ZABALBEASCOA, Patrick. Developing Translation Studies to Better Account for Audiovisual Texts and Other New Forms of Text Production. Unpublished PhD thesis. Universitat de Lleida, Spain, 1993. 379 pp.
