SEM-END PROJECT
ON
SOUND DESIGN
BY
RAGHAV CHAUDHARI
ID NO.: 160561022
CERTIFICATE
Internal Examiner
External Examiner
Principal
ACKNOWLEDGMENT
RAGHAV CHAUDHARI
ID NO.: 160561022
CREATIVE ARTS & MEDIA STUDIES
INDEX
• Sound Designing
• The Designer’s Work
• The Equipment
• Mastering
• Sound Editing
• Equalization Techniques
• Audio Script
SOUND DESIGNING
Sound design is the art and practice of creating soundtracks for a
variety of needs. It involves specifying, acquiring or creating auditory
elements using audio production techniques and tools.
It is the process of recording, acquiring, manipulating or generating
audio elements. It is employed in a variety of disciplines including
filmmaking, television production, theatre, sound recording and
reproduction, live performance, sound art, post-production and video
game software development. Sound design most commonly involves the
manipulation of previously composed or recorded audio, such as music
and sound effects. In some instances it may also involve the composition
or recording of audio to create a desired effect or mood.
Examples of sound design include the eerie scream of an alien, or the
sound of weapons going off in a video game. A sound designer is the one
who creates all of these stunning sound effects. Sound design requires
both technological and artistic skills. A sound designer needs to develop
excellent knowledge and techniques in recording, mixing, and special
effects in order to create unique and interesting sounds. There are a host
of modern software and hardware synths which are used to create new
and unheard sounds and SFX.
THE DESIGNER’S WORK
Sound designers and composers begin their work by studying the script,
gathering as much information as they can about any sound or music it
calls for. As in all other aspects of design, an early meeting with the
director and the design team is essential to get a clear understanding of
the production concept.
Some directors will already have very clear ideas about what the sound
effects and/or music should sound like, while others may request that
the sound designer/composer sit in on rehearsals to assist with
developing effects and music to fit the specific contexts in which they
will be used.
The Sound Designer works closely with the Director and a range of
other staff to create the aural world for the audience. For example:
• The Sound Designer may advise on how to best hear the performers,
which may involve acoustic adjustments to the theatre and set, or the
addition and configuration of radio and/or float mics for the
performers.
In order to be successful, the professional Sound Designer must have a
huge array of creative and technical skill sets, including a well-
developed sense of hearing; a comprehensive understanding of musical
history and genre; a musician’s sensitivity to timbre, rhythm, melody,
harmony and musical structure; and a deep understanding of
psychoacoustics, system engineering, acoustics, computer networking,
component integration, and of the systems for sophisticated audio
distribution. Technical skills across a variety of computer operating
systems and software are fundamental, as is the ability to learn new
concepts and equipment in a world of fast-paced technological
development. But perhaps most importantly, Sound Designers
understand the tremendous power of sound to aid the storytelling
process, to transport an audience directly into the vortex of the
performance and to make that performance a truly unforgettable
experience.
THE EQUIPMENT
COMPUTER:
These days, since recording studios are almost ALL digital… the first
thing you obviously need is a computer. And while you can use any old
computer at first, you should eventually invest in the best one you can
afford, because today’s DAWs can be EXTREMELY hard on processing
resources, and making full use of their features requires a blazing-fast
computer.
DAW (DIGITAL AUDIO WORKSTATION):
A digital audio workstation (DAW) is an electronic device or
application software used for recording, editing and producing audio
files. DAWs come in a wide variety of configurations from a single
software program on a laptop, to an integrated stand-alone unit, all the
way to a highly complex configuration of numerous components
controlled by a central computer. Regardless of configuration, modern
DAWs have a central interface that allows the user to alter and mix
multiple recordings and tracks into a final produced piece.[1]
DAWs are used for the production and recording of music, songs,
speech, radio, television, soundtracks, podcasts, sound effects and
nearly any other situation where complex recorded audio is needed.
AUDIO INTERFACE:
Once you’ve got the software, the next thing you’ll need is an audio
interface…
Which has the primary purpose of providing all the necessary
connections to send your music:
• INTO the computer when recording, and…
• OUT of the computer during playback.
MICROPHONE:
The oldest item on this list by far… Microphones have been around
since long before recording studios ever existed. Yet ironically, in all
those years, very little about them has changed. And many of the top
models from a half-century ago are still among the industry standards of
today.
That’s not to say that microphones are a simple topic, because it’s
actually quite the opposite. Recording studios typically carry several
dozen mics or more… each one used to achieve:
• a different sound
• from different instruments
• in different situations
STUDIO MONITORS:
In the pro audio world, we call them either studio monitors or
nearfield monitors.
And while they might look similar to plain old speakers…THEY’RE
NOT. Compared to consumer speakers, which typically accentuate
certain frequency bands in order to improve the listening experience for
certain audiences, studio monitors are designed with the opposite goal
of providing a perfectly FLAT frequency response, so engineers can hear
a mix as it truly is, flaws and all…so they can adjust accordingly.
CABLES:
There are various types of cables available in the pro audio world,
which serve the purpose of connecting microphones to audio interfaces,
and audio interfaces to studio monitors.
MICROPHONE STANDS:
They actually come in many shapes and sizes, each designed for specific
tasks.
POP FILTERS:
One peculiar fact about your mouth is that it expels a strong burst of air
whenever you pronounce “p” or “b” sounds. In normal conversation,
you don’t even notice it. But when singing into a microphone, that blast
of air is heard as a low frequency “thump” known as popping,
which is both unpleasant to the ears, and unacceptable on a recording.
Pop filters are designed to solve this problem by catching the blast of air
before it hits the diaphragm of the mic.
AUDIO CONSOLE/ MIXER:
In sound recording and reproduction, and sound reinforcement systems,
a mixing console is an electronic device for combining sounds of many
different audio signals. Inputs to the console include microphones being
used by singers and for picking up acoustic instruments, signals from
electric or electronic instruments, or recorded music. Depending on the
type, a mixer is able to control analog or digital signals. The modified
signals are summed to produce the combined output signals, which can
then be broadcast, amplified through a sound reinforcement system or
recorded.
Mixing consoles are used in many applications, including recording
studios, public address systems, sound reinforcement systems,
nightclubs, broadcasting, television, and film post-production. A typical,
simple application combines signals from microphones on stage into an
amplifier that drives one set of loudspeakers for the audience. A DJ
mixer may have only two channels, for mixing two record players. A
coffeehouse's tiny stage might only have a six channel mixer, enough for
two singer-guitarists and a percussionist. A nightclub stage's mixer for
rock music shows may have 24 channels for mixing the signals from a
rhythm section, lead guitar and several vocalists. A mixing console in a
professional recording studio may have as many as 96 channels.[1]
In practice, mixers do more than simply mix signals. They can provide
phantom power for condenser microphones; pan control, which changes
a sound's apparent position in the stereo soundfield; filtering and
equalization, which enables sound engineers to boost or cut selected
frequencies to improve the sound; dynamic range compression, which
allows engineers to increase the overall gain of the system or channel
without exceeding the dynamic limits of the system; routing facilities, to
send the signal from the mixer to another device, such as a sound
recording system or a control room; and monitoring facilities, whereby
one of a number of sources can be routed to loudspeakers or
headphones for listening, often without affecting the mixer's main
output.[2] Some mixers have onboard electronic effects, such as reverb.
Some mixers intended for small venue live performance applications
may include an integrated power amplifier.
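To make the signal flow described above concrete, here is a minimal sketch, in Python with NumPy (an assumed choice; the section names no particular tool), of the core of what a mixer does: sum several mono inputs into a stereo output, with a per-channel gain fader in dB and a constant-power pan control. The input signals and settings are hypothetical.

    import numpy as np

    def mix_to_stereo(tracks):
        """Sum mono tracks into one stereo bus.

        tracks: list of (samples, gain_db, pan) tuples, where pan runs
        from -1.0 (hard left) to +1.0 (hard right).
        """
        length = max(len(samples) for samples, _, _ in tracks)
        bus = np.zeros((length, 2))
        for samples, gain_db, pan in tracks:
            gain = 10 ** (gain_db / 20.0)          # fader: dB -> linear gain
            theta = (pan + 1.0) * np.pi / 4.0      # constant-power pan law
            x = np.zeros(length)
            x[:len(samples)] = samples
            bus[:, 0] += gain * np.cos(theta) * x  # left channel
            bus[:, 1] += gain * np.sin(theta) * x  # right channel
        return bus

    # Hypothetical one-second inputs at 44.1 kHz: a "vocal" and a "bass" tone.
    fs = 44100
    t = np.linspace(0.0, 1.0, fs, endpoint=False)
    vocal = 0.5 * np.sin(2 * np.pi * 440.0 * t)
    bass = 0.5 * np.sin(2 * np.pi * 80.0 * t)
    stereo = mix_to_stereo([(vocal, -3.0, 0.0), (bass, -6.0, -0.3)])

The constant-power law keeps a sound's perceived loudness roughly steady as it moves across the stereo field, which is how pan pots on real consoles are usually designed.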
MASTERING
Mastering is the term most commonly used to refer to the process of
taking an audio mix and preparing it for distribution. There are several
considerations in this process: unifying the sound of a record,
maintaining consistency across an album, and preparing for
distribution.
How mastering impacts the sound of the record
One goal of mastering is to correct mix balance issues and enhance
particular sonic characteristics, taking a good mix (usually in the form
of a stereo file) and putting the final touches on it. This can involve
adjusting levels and general “sweetening” of the mix. Think of it as the
difference between a good-sounding mix and a professional-sounding,
finished master.
This process can involve adding broad equalization, applying
compression, limiting, etc. This is often actually referred to as
“premastering” in the world of LP and CD replication, but let’s refer to
it as mastering for simplicity.
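As a rough illustration of the "compression, limiting" step, here is a deliberately crude sketch in Python/NumPy (an assumption, not how any particular mastering tool works): raise the overall level, then hard-limit the peaks at a ceiling. A real mastering limiter uses look-ahead and smooth gain reduction rather than flat clipping.

    import numpy as np

    def raise_and_limit(x, gain_db=3.0, ceiling=0.98):
        """Broad level increase followed by a hard peak limit.

        np.clip is a crude stand-in for a real limiter, which would
        apply look-ahead and smoothed gain reduction instead.
        """
        y = x * 10 ** (gain_db / 20.0)   # overall "sweetening" gain
        return np.clip(y, -ceiling, ceiling)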
How mastering can impart consistency across an album
Consideration also has to be made for how the individual tracks work
together when played one after another in an album sequence. Is there a consistent
sound? Are the levels matched? Does the collection have a common
“character” and play back evenly so that the listener doesn’t have to
adjust the volume?
This process is generally included in the previous step, with the
additional evaluation of how individual tracks sound in sequence and in
relation to each other. This doesn’t mean that you simply make one
preset and use it on all your tracks so that they have a consistent sound.
Instead, the goal is to reconcile the differences between tracks while
maintaining (or even enhancing) the character of each of them, which
will most likely mean different settings for different tracks.
Preparation for distribution
The final step usually involves preparing the song or sequence of songs
for download, manufacturing and/or duplication/replication. This step
varies depending on the intended delivery format. In the case of a CD, it
can mean converting to 16 bit/44.1 kHz audio through resampling
and/or dithering, and setting track indexes, track gaps, PQ codes, and
other CD-specific markings. For web-centered distribution, you might
need to adjust the levels to prepare for conversion to AAC, MP3 or hi-
resolution files and include the required metadata.
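For the CD case above, the resample-dither-quantize chain could be sketched as follows (Python with NumPy/SciPy assumed; TPDF dither is one common choice among several):

    import numpy as np
    from scipy.signal import resample_poly

    def prepare_for_cd(x, src_rate):
        """Resample a float mix (-1.0..1.0) to 44.1 kHz, then dither
        and quantize it to 16-bit PCM."""
        y = resample_poly(x, 44100, src_rate)  # sample-rate conversion
        lsb = 1.0 / 32768.0                    # one 16-bit quantization step
        # TPDF dither: difference of two uniform noises, one LSB wide.
        dither = (np.random.rand(len(y)) - np.random.rand(len(y))) * lsb
        y = np.clip(y + dither, -1.0, 1.0 - lsb)
        return (y * 32768.0).astype(np.int16)

    # e.g. cd_master = prepare_for_cd(mix_48k, 48000)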
SOUND EDITING
With the advent of new technologies in audio editing, editing over the
years has become more accurate and easier. Software and hardware
programs are designed specifically to help editors piece together music
or audio pieces. These programs are generally referred to as digital
audio workstations (DAWs). The idea behind audio editing is usually to
take a piece of music and slice it and dice it so that it is free of errors and
consistent to listen to.
Editing can be purely for audio (e.g., audio podcasts, music CDs, etc.)
or it can be for a video. For audio which needs to be synced with video,
the editors are provided with a video clip and an audio clip that both
need to be matched. Obviously, the video clip isn’t going to undergo
any editing because it is the section of media that the music is supposed
to conform to (not the other way around).
In many cases, the audio editor is given a file that works with their
specific DAW. They can then manipulate virtually every part of the
musical piece. Most DAWs give you access to all of the individual tracks
that go into making a complete song. That means editors gain access to
the vocal track, the guitar track (or other instrumental track), the drum
track, and many more. This is not just an mp3 audio file, but, instead, a
song divided into its individual tracks (or stems). It’s also conveniently
placed into a visual interface—generally conveyed as “waveform”—that
is a visual representation of each audio track.
Some general applications of audio editing, several of which appear in
the sketch after this list, are:
• Remove breaths, coughs, a ringing phone, or any
other unwanted interference
• Remove repeated dialogues
• Add music intro/outro
• Stretch/shorten audio and sound effects according to
the length of the visual.
• Splice together audio recorded at different sittings
• Sync up different musical instruments so that they all
sound on the beat.
• Loop, slice and edit beats.
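As a sketch of several of these operations, here is what they might look like using pydub, a Python audio-editing library; the file names and edit points are hypothetical:

    from pydub import AudioSegment  # pydub slices audio by milliseconds

    take1 = AudioSegment.from_wav("take1.wav")        # hypothetical files
    take2 = AudioSegment.from_wav("take2.wav")
    intro = AudioSegment.from_wav("intro_music.wav")

    clean = take1[:12000] + take2[3500:]    # splice two sittings, dropping a bad span
    episode = intro.fade_out(2000) + clean  # add a music intro
    loop = take1[0:2000] * 4                # loop a two-second beat four times

    episode.export("episode.wav", format="wav")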
As a sound wave reaches the end of its medium, it undergoes certain
characteristic behaviours. Whether the end of the medium is marked by
a wall, a canyon cliff, or the interface with water, there is likely to be
some transmission/refraction, reflection and/or diffraction occurring.
Reflection of sound waves off of barriers results in some observable
behaviours which you have likely experienced. If you have ever been
inside of a large canyon, you have likely observed an echo resulting
from the reflection of sound waves off the canyon walls. Suppose you
are in a canyon and you give a holler. Shortly after the holler, you would
hear the echo of the holler - a faint sound resembling the original sound.
This echo results from the reflection of sound off the distant canyon
walls and its ultimate return to your ear. If the canyon wall is more than
approximately 17 meters away from where you are standing, then the
sound wave will take more than 0.1 seconds to reflect and return to you.
Since the perception of a sound usually endures in memory for only 0.1
seconds, there will be a small time delay between the perception of the
original sound and the perception of the reflected sound. Thus, we call
the perception of the reflected sound wave an echo.
A reverberation is perceived when the reflected sound wave reaches
your ear in less than 0.1 second after the original sound wave. Since the
original sound wave is still held in memory, there is no time delay
between the perception of the reflected sound wave and the original
sound wave. The two sound waves tend to combine as one very
prolonged sound wave. If you have ever sung in the shower (and we
know that you have), then you have probably experienced a
reverberation. The Pavarotti-like sound which you hear is the result of
the reflection of the sounds you create combining with the original
sounds. Because the shower walls are typically less than 17 meters away,
these reflected sound waves combine with your original sound waves to
create a prolonged sound - a reverberation.
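The 17-meter figure used above follows directly from the speed of sound, roughly 343 m/s in air at room temperature: the reflection travels to the wall and back, so the delay is 2d/343 seconds, which crosses the 0.1-second threshold at about 17 meters. A quick check in Python:

    SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

    def reflection_delay(distance_m):
        """Round-trip delay for sound reflecting off a wall distance_m away."""
        return 2.0 * distance_m / SPEED_OF_SOUND

    for d in (3.0, 10.0, 30.0):
        t = reflection_delay(d)
        kind = "echo" if t > 0.1 else "reverberation"
        print(f"{d:5.1f} m -> {t * 1000:6.1f} ms ({kind})")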
EQUALIZATION TECHNIQUES
An equalizer is an audio device with multiple frequency controls for
adjusting sound tone quality. There are many reasons for using an
equalizer.
The main problems in a mix are usually excess muddiness and honk.
Before beginning to equalize any instrument, it is important to listen to
the sound in order to identify any nuances or unwanted frequencies.
Muddiness normally comes from bass heavy instruments, such as the
kick drum, bass guitar, and the lower end of the piano. The frequencies
responsible are usually centered between 100-400 Hz. However, simply
cutting these frequencies will make the sound thin as this area does
contribute to the body and rhythm of a mix.
The best way to deal with muddiness is to scan the lower frequencies
with a high Q setting and a moderate boost level of about 8 dB. Once
excess muddiness is found, cut the signal right down, and then slowly
bring up the gain control until there is a good balance between body and
muddiness.
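One band of this sweep-and-cut technique can be sketched as a peaking biquad filter, using the standard RBJ audio-EQ-cookbook design (Python/SciPy and the example frequencies are assumptions):

    import numpy as np
    from scipy.signal import lfilter

    def peaking_eq(x, fs, f0, q, gain_db):
        """One peaking-EQ band (RBJ audio-EQ-cookbook biquad).

        Positive gain_db boosts around f0 (for sweeping); negative cuts.
        """
        A = 10 ** (gain_db / 40.0)
        w0 = 2 * np.pi * f0 / fs
        alpha = np.sin(w0) / (2 * q)
        b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
        a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
        return lfilter(b / a[0], a / a[0], x)

    # Sweep with a narrow +8 dB boost to expose mud, then cut once found.
    fs = 44100
    x = np.random.randn(fs)                      # stand-in for the instrument
    probe = peaking_eq(x, fs, 250.0, 8.0, 8.0)   # high-Q boost while scanning
    fixed = peaking_eq(x, fs, 250.0, 4.0, -6.0)  # moderate cut at the spot found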
The idea is to fill up the audio spectrum; applying cut to several
sounds in the same frequency area will just leave a hole in a mix. If
too much bass is lost from the
sound, try adding some moderate boost to the sub-bass region at around
60 Hz to compensate.
The honk area is focused between 500-3000 Hz and determines how
honky and prominent an instrument is in the mix. Excess output at this
range can sound cheap, boxy and can cause unwanted ear fatigue. If
boosting in this area, be very cautious, especially on vocals. Human
hearing is extremely sensitive at these frequencies with the slightest
boost or cut resulting in a huge change in the sound.
If there are any irritating frequencies, sweep through the sound with a
medium curve at about 10 dB of cut. Once the frequencies are located, adjust
the amount of cut as desired. It may be necessary to compensate for any
cutting in this area by applying a small amount of boost around the 5-8
kHz area to liven up anything that may have been severely affected by
the cutting. This will help to preserve the overall brightness.
Exercise caution when cutting frequencies and do not cut the same
frequency on all of the sounds.
When applying equalization, it is advisable to use large amounts of cut
or boost initially; this helps give a better idea of the frequencies being
affected. The human ear can quickly become used to an equalized
sound, so making quick, successive adjustments on the gain knob until
the frequency sounds about right works best.
The equalizer can be useful for creating space in the mix by balancing
frequencies. If certain instruments occupy a similar frequency band,
they will end up masking each other within that particular area of the
audio spectrum, often resulting in a muddy sound.
To allow elements to best fit together, there has to be some juggling of
frequencies so that each instrument has its own predominant frequency
range. For example, if a kick drum is heavy and powerful in the 80 Hz
region but is getting muddied up by the bassline, attenuating the
bassline around this frequency will free up valuable mix room, allowing
the kick to shine through. The result is that the mix will sound more
clear and distinct.
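In terms of the hypothetical peaking_eq sketch above, that carving could be a single moderate cut on the bassline, for example peaking_eq(bassline, fs, 80.0, 2.0, -4.0), leaving the 80 Hz region to the kick drum.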
THE MERMAID
Once upon a time, there lived a little boy named
Dippy. He was an orphan living in a small village
near the sea. Dippy was a kind boy, and yet everyone
in his village hated him because of the scar on his
face, which people believed to be a magical curse.
Thus he was forced to live in a small hut on the
outskirts of the village. Regardless of how the
villagers treated him, Dippy always put up a happy
face and worked very hard fishing and selling at the
market. Only a few bought his fish and many hated
him, which made him really sad.
One fine morning, as Dippy was fishing at his regular
spot, he noticed bubbles rising to the surface of the
water. He leaned forward to get a closer look into
the water when a girl suddenly sprang out of the
water and hit the shore. This startled Dippy, and he
fell back. He slowly walked towards her and looked at
a beautiful blonde girl dressed in ragged clothes.
She blinked her eyes as Dippy looked into eyes as
beautiful as an ocean. There was something about her;
something told him that she was DIFFERENT.
A BIG SMILE ran across his face as he looked at her.
LISA
(Are You A Human..? )
His happiness was short-lived, for this QUESTION hurt
him badly. Since his birth, he had been mistreated
and bullied by everyone in the village, and THIS
GIRL, whom he had just met, had the same opinion
about him. This made him ANGRY and SAD at the same
time. Tears started filling his eyes as he pushed her
aside and walked away from her.
She quickly gets up and blocks his path, asking him
the same question.
LISA
(Are You A Human..?)
Dippy tries to avoid her but she pops up repeating
the same question again.
LISA
(Are You A Human..?)
Unable to control his anger anymore, Dippy summons
all his might and yells at her...
DIPPY
(Yes, I'm A Huumannnn...)
...tears start rolling from his eyes.
She HUGS him & LIFTS him up out of joy...
LISA
(YessS...I did it!)
Confused DIPPY stares at her...
LISA
(Hi...I'm LISA. Will You Be My Friend?)
...as she extends her hand.
DIPPY
(I'm DIPPY... Nice to meet you.)
Tears roll down across his face as he smiles. He
feels happy that someone has accepted him for who he
is. Everyone hated him because of his appearance, but
this girl looked past his scar. This meant a lot to
him. Lisa has been watching DIPPY cry and has no idea
what to do or how to stop him from crying... So, she
quickly grabs his hand...
DIPPY
(Yes... Let's be friends)
Lisa responds to him with a BIG SMILE.
DIPPY
(Where are you from and what were you doing
underwater..?)
LISA
(I'm the youngest daughter of NEPTUNE, the KING OF
ATLANTIS...)
DIPPY
(Are you....?)
LISA
(YessssS... I'm a MERMAID..!)
…Contd.