Introduction To Audio Design & Effects

This document provides an introduction to audio design and effects. It discusses key concepts related to sound perception, including hearing, listening and localization. It also covers the physics of sound, including frequency, pitch, and the characteristics of low, mid and high frequencies. Finally, it discusses digital audio and classifications of sound, including diegetic versus non-diegetic sound.


NMC512: AUDIO DESIGN & EFFECTS

CHAPTER 1: INTRODUCTION TO AUDIO DESIGN & EFFECTS
PERCEPTION OF SOUND:
• Three basic requirements: a sound source, a medium and a receiver. Sound can be understood through two conditions: the sound designer represents the sound, and the audience processes the sound to derive meaning.
• Also known as audiation (a psychological process experienced in our thoughts). For example, silent reading causes our mind to sound the words in our head, and a voiceover allows us to hear the interior thoughts of a character.
• HEARING: the physiological process that excites the brain and causes a physiological sensation when acoustic energy arrives at our ears.
• LISTENING: the ability to derive meaning from sound and to respond and perceive through an active listening process.
• COCKTAIL PARTY EFFECT: the ability to filter out extraneous sound while focusing on a selected sound.
• LOCALIZATION: the ability to perceive specific sound placements within the sound field (left and right speakers, panning and surround systems); see the panning sketch after this list.
• ACOUSTICS: the characteristics of sound interacting in a given space.
• RHYTHM: an identifiable pattern of sound and silence.
• TEMPO: the speed of the rhythm.
• NOISE: any unwanted sound found in the soundtrack.
• SILENCE: zero sound/mute/dead air used to create tension, release or contrast.
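To make localization and panning concrete, here is a minimal sketch of constant-power panning, which places a mono signal between the left and right speakers. The sine/cosine pan law, the NumPy dependency and the parameter names are illustrative assumptions, not something specified in the slides.

```python
import numpy as np

def constant_power_pan(mono: np.ndarray, pan: float) -> np.ndarray:
    """Place a mono signal in the stereo field.

    pan runs from -1.0 (hard left) through 0.0 (centre) to +1.0 (hard right).
    The constant-power (sine/cosine) pan law keeps the perceived loudness
    roughly steady as the source moves across the sound field.
    """
    theta = (pan + 1.0) * np.pi / 4.0        # map [-1, 1] onto [0, pi/2]
    left = np.cos(theta) * mono
    right = np.sin(theta) * mono
    return np.stack([left, right], axis=-1)  # shape: (samples, 2)

# Example: a half-second noise burst panned halfway toward the right speaker.
burst = np.random.default_rng(0).uniform(-1.0, 1.0, 22050)
stereo = constant_power_pan(burst, pan=0.5)
```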
THE PHYSICS OF SOUND:
• The physics of sound is the study of the physical attributes
of sound waves. There are many ways that sound can be
described and measured.
• Sound Travels In Waves
• When a disturbance of air particles happens, a series of waves spreads out from the source in all directions. Those waveforms are composed of cycles of compressed and decompressed air molecules. These waves will eventually lose their energy and settle back to a resting state unless disturbed again.
• The ability of air to return to a resting state is the elastic property that allows sound to travel so efficiently and effectively.
• The more reflective surfaces there are in a space, the longer it will take for the room to return to a stable resting state. The more absorbent the materials in the room, the quicker the room will lose its acoustic energy.
THE PHYSICS OF SOUND:
• Frequency and pitch
• In the physics of sound, frequency is defined as the number of cycles, or complete waveforms, that occur within a period of one second, measured in hertz (Hz); 1 cycle per second equals 1 Hz.
• A complete cycle is defined as one compression and one rarefaction. The number of cycles within a second can also be given a musical value, called pitch. Pitch is the relative highness or lowness of a tone or frequency.
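As a small illustration of the frequency/pitch relationship described above, the sketch below (assuming NumPy and a 44.1 kHz sample rate) generates pure tones with a given number of cycles per second; the 440 Hz reference for the pitch A4 is standard practice rather than something stated in the slides.

```python
import numpy as np

SAMPLE_RATE = 44100  # samples per second (assumed; matches CD audio)

def sine_tone(frequency_hz: float, duration_s: float) -> np.ndarray:
    """Generate a pure tone containing `frequency_hz` complete cycles per second."""
    t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
    return np.sin(2.0 * np.pi * frequency_hz * t)

# 440 complete compression/rarefaction cycles per second is heard as the pitch A4;
# 40 cycles per second falls inside the low-frequency octaves discussed next.
a4 = sine_tone(440.0, duration_s=1.0)
low_rumble = sine_tone(40.0, duration_s=1.0)
```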

• Low Frequencies
• Low frequencies generally encompass the first two octaves of the range of our hearing. This range starts at 20 cycles and ends around 80 cycles. The general perception of low frequencies extends a bit higher than 80 cycles, generally up to 100 or 125 Hz. It is this frequency range that gives us the feeling sensation of sound in the form of vibration.
• Low Mid Frequencies
• The low mid frequencies pick up where the low frequencies leave off and extend up to about 400
cycles or so. These frequencies encompass the fundamental frequencies of a majority of the musical
instruments. Therefore, this frequency range is critically important to capture accurately. It will also
help to define the low frequency range.
THE PHYSICS OF SOUND:
Midrange Frequencies
• The midrange frequencies generally start from around 400 cycles and extend up to approximately 2.5
kHz. This range of frequencies adds definition and intensity to a sound. It is this frequency range that
gives us most of our localization cues. Balancing these frequencies between the instruments is critical
to creating separation and definition within each of the individual instruments.
Hi Mid Frequencies
• Hi mid frequencies generally start around 2.5 kHz and extend up to approximately 5 or 6 kHz. This
frequency range provides the presence of individual sounds. Our hearing is optimized in this range
because it is the area where we define the hard consonants of different words. This frequency range
is all about detail and our ability to clearly understand what it is we are hearing.
High Frequencies
• The high-frequency range starts around 5 or 6 kHz and extends up to the limit of our hearing at
approximately 20 kHz. This frequency range is important to create the sense of size and space in a
recording. Because high frequencies travel most efficiently in the upper area of a room, they give a
sense of height that is necessary for the proper imaging and accurate representation of any
recording.
DIGITAL AUDIO
• Digital audio is an audio (sound) signal that has been recorded as, or converted into, digital form, where the sound wave of the audio signal is encoded as a continuous sequence of numerical samples, typically at CD audio quality: a 16-bit sample depth at 44,100 samples per second. Digital audio is also the name for the entire technology of sound recording and reproduction using audio signals that have been encoded in digital form.
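A minimal sketch of what "numerical samples in continuous sequence" means at CD quality, assuming NumPy: the signal is measured 44,100 times per second and each measurement is rounded to a 16-bit integer.

```python
import numpy as np

SAMPLE_RATE = 44100  # samples per second (CD audio)
BIT_DEPTH = 16       # each sample stored as a 16-bit signed integer

def quantize_16bit(signal: np.ndarray) -> np.ndarray:
    """Quantize a float signal in [-1.0, 1.0] to 16-bit PCM samples."""
    max_int = 2 ** (BIT_DEPTH - 1) - 1       # 32767
    clipped = np.clip(signal, -1.0, 1.0)
    return np.round(clipped * max_int).astype(np.int16)

# One second of a 1 kHz tone becomes 44,100 integer samples.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
pcm = quantize_16bit(np.sin(2.0 * np.pi * 1000.0 * t))
print(pcm.shape, pcm.dtype)                  # (44100,) int16
```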
• In a digital audio system, an analog electrical signal representing the sound is converted with an analog-to-digital converter (ADC) into a digital signal, typically using pulse-code modulation. This digital signal can then be recorded, edited, modified and copied using digital audio workstation computers, audio playback machines and other digital tools. When the sound engineer wishes to listen to the recording on headphones or loudspeakers (or when a consumer wishes to listen to a digital sound file of a song), a digital-to-analog converter (DAC) performs the reverse process, converting the digital signal back into an analog signal, which is then sent through an audio power amplifier to a loudspeaker.
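To sketch the store-and-play-back chain, the example below writes 16-bit PCM samples to a WAV file with Python's standard-library wave module and reads them back unchanged, which also illustrates why digital copies suffer no generation loss. The file name and the tone being written are hypothetical choices for the example.

```python
import wave
import numpy as np

SAMPLE_RATE = 44100
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
pcm = np.round(np.sin(2.0 * np.pi * 440.0 * t) * 32767).astype(np.int16)

# Store the digital samples; any player's DAC can later turn them back into an analog signal.
with wave.open("tone.wav", "wb") as wav_file:   # hypothetical file name
    wav_file.setnchannels(1)                    # mono
    wav_file.setsampwidth(2)                    # 2 bytes per sample = 16-bit depth
    wav_file.setframerate(SAMPLE_RATE)
    wav_file.writeframes(pcm.tobytes())

# Reading the file back yields byte-identical samples: no generation loss.
with wave.open("tone.wav", "rb") as wav_file:
    recovered = np.frombuffer(wav_file.readframes(wav_file.getnframes()), dtype=np.int16)
assert np.array_equal(recovered, pcm)
```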
• Digital audio systems may include compression, storage, processing and transmission components. Conversion to a digital format allows convenient manipulation, storage, transmission and retrieval of an audio signal. Unlike analog audio, in which making copies of a recording results in generation loss (a degradation of signal quality), digital audio allows any number of copies to be made without any degradation of signal quality.
SOUND CLASSIFICATION:
• Michel Chion’s categorization of three basic listening modes:
• Causal listening is when we focus on, or recognize, the cause or source of the sound. We gather information based on the sound: where a sound is located, what type of object caused the sound, and so on. Studies have shown that we identify sound by creating a mental picture of the cause, sometimes in the form of a mental stereotype, or through words that allow us to describe the sound (Ballas 2007).
• In comparison, semantic listening refers to the ways in which we listen to, and interpret, a message
that is bound by semantics, such as spoken words. Causal and semantic listening are not mutually
exclusive—we can listen to both the words someone says as well as how someone says them (and the
fact that, for example, a voice is the source of the sound).
• Finally, reduced listening denotes listening that focuses on the traits of the sound, independent of
cause and meaning. Reduced listening focuses, for instance, on the quality or timbre of a sound. For
example, in Fallout 3 (Bethesda Game Studios 2008), a broadcast tower is sending out beeping signals:
if we were listening causally, we would likely assume that some form of electronic equipment is making
the sound. If we were listening semantically, we may listen to the message, in Morse code, and if we
were listening in a reduced fashion, we may describe the sound as a sine wave at about 3000 Hz in
short bursts of approximately one half-second each.
SOUND CLASSIFICATION:
DIEGETIC SOUND:
Sound whose source is visible on the screen or whose source is implied to be present by the action of the film:
• voices of characters
• sounds made by objects in the story
• music represented as coming from instruments in the story space (source music)
Diegetic sound is any sound presented as originating from a source within the film's world.
Diegetic sound can be either on screen or off screen, depending on whether its source is within the frame or outside the frame.
Another term for diegetic sound is actual sound.
Diegesis is a Greek word for "recounted story". The film's diegesis is the total world of the story action.
NON-DIEGETIC SOUND:
Sound whose source is neither visible on the screen nor implied to be present in the action:
• narrator's commentary
• sound effects added for dramatic effect
• mood music
Non-diegetic sound is represented as coming from a source outside the story space.
The distinction between diegetic and non-diegetic sound depends on our understanding of the conventions of film viewing and listening. We know that certain sounds are represented as coming from the story world, while others are represented as coming from outside the space of the story events.
A play with diegetic and non-diegetic conventions can be used to create ambiguity (horror) or to surprise the audience (comedy).
Another term for non-diegetic sound is commentary sound.
AUDIO AS NARRATIVE FUNCTIONS IN CREATIVE CONTENT:
• GUIDED PERCEPTION – the music score guides the audience toward a non-literal interpretation of the visuals, promoting a “softer” perception of the content.
• DRAWING THE AUDIENCE INTO THE NARRATIVE – the composer creates sound to draw the audience out of their present reality and into the cinematic experience; the soundtrack also provides closure while transitioning the audience back to reality.
• DIRECTING THE EYE – sonic foreground, mid-ground and background levels are set using volume, panning, delay and reverb; see the sketch after this list.
• ESTABLISHING OR CLARIFYING POINT OF VIEW – thematic music is assigned to different individuals/characters to establish or clarify the point of view of a scene, especially when more than one character is present. Example: the Ratatouille scene between the rats and the elderly woman.
• CLARIFYING THE SUBTEXT – an idea from the silent film era, used to clarify the emotion of a scene; known as subtext scoring. Music is designed to support neutral or ambiguous expressions of a character and to reveal the character's inner thoughts and feelings.
• CONTRASTING REALITY AND SUBJECTIVITY – sound designed to depart from objective reality, found in slow-motion scenes, montage sequences and dream sequences. In slow-motion scenes, diegetic sound is slowed down and lowered in pitch; in dream sequences, the sound becomes washed out with additional reverb to distance it from reality; montage sequences are driven by the exclusivity of the sound/music.
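The sketch referenced under DIRECTING THE EYE: one simple way (assuming NumPy) to push a sound toward the sonic background by lowering its level and blurring it with a feedback delay. The specific gain, delay time and mix values are illustrative assumptions, not figures from the slides.

```python
import numpy as np

SAMPLE_RATE = 44100

def push_to_background(dry: np.ndarray, gain: float = 0.4, delay_s: float = 0.08,
                       feedback: float = 0.5, wet_mix: float = 0.6) -> np.ndarray:
    """Lower a sound's level and smear it with a feedback delay so it sits
    behind louder, drier foreground elements in the mix."""
    delay = int(delay_s * SAMPLE_RATE)
    attenuated = dry * gain                       # quieter reads as further away
    wet = np.zeros_like(attenuated)
    for i in range(delay, len(attenuated)):       # simple feedback delay line
        wet[i] = attenuated[i - delay] + feedback * wet[i - delay]
    return (1.0 - wet_mix) * attenuated + wet_mix * wet

# Example: push a one-second noise texture toward the background of a mix.
texture = np.random.default_rng(1).uniform(-1.0, 1.0, SAMPLE_RATE)
background = push_to_background(texture)
```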
AUDIO AS NARRATIVE FUNCTIONS IN CREATIVE CONTENT:
• EXTENDING THE FIELD OF VISION – sound is worth a thousand pictures: on-screen and off-screen sound. On screen: visible objects that are producing sound. Off screen: sound created to follow objects as they move on and off screen, and used to imply any visual that is not shown.
• TENSION AND RELEASE – creating sounds that represent the emotions of tension and calmness. A BUILD is a cue that combines harmonic tension and volume to create an expectation for a given event. Shock chords (dissonant clusters of notes), harmonization of a melody with minor 2nds, air-raid sirens, crying babies, growling animals and snake rattles create a feeling of tension, while the sound of crickets, lapping waves and gentle rain has a calming effect.
• CONTINUITY – not limited to the physical and temporal realms; continuity is also important emotionally. When developing cues, continuity can be promoted through instrumentation.
• PROMOTING CHARACTER DEVELOPMENT – animators create characters, not actors; SFX and music reinforce a character's physical traits and movements. Any object can be personified or anthropomorphized through dialogue, music and SFX; conversely, humans are often caricatured in animation. Example: footsteps covered with instrument sounds to convey the character's size.
• DIALOGUE, MUSIC/SCORE AND SOUND EFFECTS
