
MIXING MASTERCLASS

A guide for Producers & Artists
INTRODUCTION
WHAT IS MIXING?
Mixing is the operation of merging, into a single soundtrack, the sounds of several other tracks: dialogue, music, and noise.

MAIN ELEMENTS

The audio mixing process can be understood as the balancing and organization of various sound sources, taking into account the following basic elements:
• Volumes
• Panorama (Pan)
• Equalization
• Compression
• Harmonic Excitation (Drive, Amplifier Simulators, etc.)
• Effects (Delay, Chorus, Reverb, etc.)

BASIC METHODOLOGY

Mixing can be understood as a spiral process. We usually do not work on an element and consider it “ready” until the end of the mix. We need to work on and listen to every element present in the music, and we will probably have to revisit each of them a few times until we find the most suitable way for it to fit and stay balanced with the rest of the mix.

MIXING WITH ELEMENTS IN SOLO

Although soloing is the most intuitive way to start making adjustments to the elements in the mix, it is also the most treacherous. A track can sound interesting on its own, but it generally will not fit with the other elements. Soloing leads to mistakes more often and makes you spend more time correcting and hunting down sonic inconsistencies.

MIXING WITH ALL ELEMENTS

The approach that seems less intuitive and more difficult at first will lead you to ever more consistent and interesting results throughout the learning process. Only then will you begin to gain full command of your audio tools.

Perhaps the most basic way to start your mix is to make a Rough Mix:

1. You can choose to reset all faders or start from the point where the faders already are. If you participated in the recording and production, chances are you have already shaped the sound in the previous phases, so I recommend that you do not zero the faders. If you are mixing for someone else, bounce the session exactly as you received it. This bounce will be your reference point. Then, if you wish, you can lower the faders and start from scratch. Use this bounce as a reference throughout the mixing process to know where the song came from and where it is going. It is very common, when learning, to over-process tracks; in that case, returning to the original bounce for reference is very important. If the “mixed” and “processed” sound is worse than the “raw” track, you will know in time. Avoid unnecessary processing!
2. Remove all plugins (leave only those that are part of the production, such as special-effect plugins or amplifier simulators);
3. Start raising the faders (if you zeroed them) or balancing them, little by little, using volume only;
4. Ideally, do this process in groups, such as the drum group (or electronic beats), percussion, string instruments, keyboards/synthesizers, and vocals;
5. Alongside the volume adjustment, work on the pan adjustment, which will let you arrange the elements more clearly in the stereo field (left/right).

VOLUME AND PANNING


These are the most basic elements of the mix and therefore the most important. At all times, you will be adjusting volumes and panning to reach a more accurate result, but this basic adjustment will guide your entire process. So watch out: if you place an element with an extreme pan to the left, for example, it will be a lot of work, at a more advanced stage of the mixing process, to change that element's positioning without drastically affecting the whole. Each element within the mix depends on all the other elements of the mix, so be well aware of this throughout the process!

We can understand mixing as a process in 4 dimensions. We can see the elements arranged as a 3-dimensional image, which moves along time (the 4th dimension). Therefore, the audio elements can be arranged in the mix along these dimensions.

This form of association with images helps a lot in our day-to-day mixing process and can facilitate understanding, speeding up the learning of techniques in a short time.

FUNCTIONAL MIXING
Although the image visualizing the instruments is very useful as a “snapshot” of your mix, we can use the concept of “Functional Mixing” to further assist in the process of organizing the mix.
Each instrument or element in the mix has its own well-defined musical function. Generally speaking, drums and bass create the base, guitars complement the base but can also create details, while voices carry the main message and remain the focus of attention. Some elements remain active for the entire duration of the song, while some synthesizers and percussion may appear only in short sections of the arrangement. Understanding all the elements, and judging the function and importance of each of them within the mix, will guide our mixing process and methodology.
There are no rules for choosing the order in which to work on the elements of a mix, but understanding their functions makes the process as a whole easier. Most mixing engineers follow this reasoning: base, then complements, then voices and details. This reasoning follows the logic of civil construction, for example.
Engineers plan to build a house by laying the foundation, slabs, and beams (the base), then walls and roof (the complements), and then the interior cladding, doors, and windows (the voices and details).
Working on the rhythmic elements first makes it easier to balance and place the harmony elements and voices in the mix. Most of the time, adopting this process leads to a much faster result in the mixing. Even so, depending on the music, some other element may draw the engineer's attention and become the starting point instead of the rhythmic elements or base. In any case, though, the base will always receive a lot of attention right at the start of the process. Obviously, we cannot forget the details, as they can easily ruin a mix. Some elements, even if they play during only a small part of the song's arrangement, can obstruct or mask key elements of the music if treated incorrectly.

EQUALISERS

Equalizers are used to change the gain in specific portions of the audio signal's spectrum. In practical terms, they are used to change the “color” of an element within the mix.

Filter Types:
Each equalizer can have one or more types of filters:
• Bell (also called Peak): boosts or cuts around the chosen center frequency, with a width defined by the Q parameter (when available);
• High-Pass Filter (HPF): removes frequencies below the selected (cut-off) frequency and only lets frequencies above it pass (a small code sketch follows this list);
• Low-Pass Filter (LPF): the opposite of the HPF; removes only the frequencies above the selected frequency;
• High-Shelf: boosts or cuts all frequencies above the selected frequency;
• Low-Shelf: boosts or cuts all frequencies below the selected frequency.
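
As a quick, practical illustration of the HPF described above, here is a minimal Python/SciPy sketch. The 80 Hz cut-off, sample rate, and function name are our own assumptions for the example, not something taken from this guide:

from scipy.signal import butter, lfilter

def high_pass(audio, cutoff_hz=80.0, sr=44100, order=4):
    # Remove content below cutoff_hz (e.g. rumble on a vocal track)
    b, a = butter(order, cutoff_hz, btype="highpass", fs=sr)
    return lfilter(b, a, audio)

# e.g. cleaned_vocal = high_pass(vocal_track, cutoff_hz=100)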

COMPRESSORS
Perhaps the most misunderstood and misused tool in the audio world, the compressor basically does what its name says: it compresses the dynamic variation of the audio. It is natural that a drum recording has variation in the intensity of the kick and snare hits, or that in a guitar recording some chords come out stronger than others.
This is all natural, but in the mix it is very important to have control over these elements so that we can organize the sound more precisely. We could simply ride a volume fader to make these dynamic adjustments, but compressors exist precisely to do this task automatically, especially considering that many elements may have the problems described above and it would simply be impossible to address all of these situations individually.

The illustration below shows the difference between an audio signal before and after compression:

What the compressor did, in practical terms, was hold back the loudest portions of the audio, generating a fuller sound (with fewer peaks). Through the make-up gain, the lower-intensity portions are then raised, and in this way we create a smaller difference between the strongest and weakest portions of the signal. This is what we call reducing the dynamic range of the audio.
To perform this work, a basic compressor uses 5 parameters:
• Threshold;
• Attack;
• Release;
• Ratio;
• Make-Up Gain.
Depending on its architecture, a compressor may have all of these parameters, additional ones, or only some of them. Some compressors, for example, have attack or threshold values pre-defined by the manufacturer (which cannot be changed by the user) but let you select the release and/or ratio. Threshold is the parameter that defines the point from which the compressor starts to act: at first, the compressor operates only on the region of the signal that is above the threshold (highest in amplitude).

Let's assume that a guitar track's peaks hover around -20 dBFS on average. After a certain point, the musician starts strumming the chords a little harder and the signal consequently becomes stronger. If we set the compressor threshold to -18 dBFS (above the -20 dBFS level), the compressor will not act during most of the audio signal. From the moment the musician starts playing harder and the sound passes -18 dBFS, the compressor will start working and will keep working until the signal falls back below -18 dBFS. Obviously, the compressor depends on another important parameter to decide what to do once the threshold is exceeded: the Ratio.

Ratio is the compression ratio (or rate). In the following illustration we see different lines with the values 1:1, 2:1, 4:1 and 20:1. If our compressor is set to 1:1, no compression will occur: 1:1 means that for every 1 dB that passes above the threshold, 1 dB will appear at the output, so the signal remains intact. However, if the compressor is set to 2:1, we will have light compression: for every 2 dB that exceeds the threshold, only 1 dB will appear at the output. In our previous case, let's assume that at a certain point the musician plays a chord that peaks at -12 dBFS. Our threshold was set at -18 dBFS, so the signal exceeded it by 6 dB. At a ratio of 2:1, instead of the output peaking at -12 dBFS it will peak at -15 dBFS, since we get a reduction of 3 dB (half of the amount by which the signal exceeded the threshold). The higher the ratio, the greater the compression. Very strong compression (for example 20:1) is what we call a limiter: if a signal exceeds the threshold of a limiter, it is practically cut off, leaving only the portion below the threshold. A very aggressive limiter is what we call a brickwall limiter.
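
To make the ratio arithmetic concrete, here is a minimal Python sketch of the static gain math described above. The function name is ours, and attack, release, and make-up gain are ignored:

def compressed_peak_dbfs(input_dbfs, threshold_dbfs=-18.0, ratio=2.0):
    # Peak level at the compressor output for a given input peak level
    if input_dbfs <= threshold_dbfs:
        return input_dbfs                      # below threshold: left untouched
    overshoot = input_dbfs - threshold_dbfs    # dB above the threshold
    return threshold_dbfs + overshoot / ratio  # only 1 dB out per "ratio" dB in

# The guitar example from the text: -12 dBFS peak, -18 dBFS threshold, 2:1 ratio
print(compressed_peak_dbfs(-12.0))             # -15.0
print(compressed_peak_dbfs(-12.0, ratio=20))   # about -17.7: behaves like a limiter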

The attack parameter defines how fast the compressor goes into action once the signal exceeds the threshold, and release defines how fast the compressor stops compressing once the signal returns below the threshold. These are parameters that define the shape (or mold) of the sound. A very fast attack sounds more aggressive, while a slower attack lets the transients and low-frequency portions of the sound pass without being compressed too much, producing a more natural sound.
A quick release also makes the sound more aggressive, since the transition from compressed to uncompressed sound happens very abruptly. A medium-to-long release is more commonly used in practical situations, as it keeps the sound softer and more controlled. There are several schools of thought and different applications regarding the choice of a compressor's attack and release times, but there are no definitive rules here.
After the sound is compressed, the natural impression is that it has become quieter. To compensate for this, most compressors have the make-up gain parameter, which brings the compressed signal back to roughly the perceived volume of the uncompressed audio.

Expanders & Gates
Expanders and gates work much like compressors, but instead of reducing the dynamic range of a signal, they increase it. The easiest way to understand how an expander or gate works is, for example, on the sound of the snare, kick drum, or toms. There is a lot of bleed from cymbals and other parts of the kit in a snare microphone. In a mix, we can try to “isolate” the snare sound as much as possible through the use of an expander or gate. When the snare is hit, the sound is left unprocessed, but in the intervals between snare hits the expander “reduces” the background noise. Basically, what it does is increase the dynamic range of the audio downwards, pushing the audio toward lower signal levels.

The difference between an expander and a gate is basically the same as the difference between a compressor and a limiter: the ratio of a gate is much higher than the ratio of an expander. The idea is exactly analogous to that of a compressor. The threshold defines at what point the expander (or gate) will begin to reduce (expand downwards) the audio. The ratio is written in reverse, i.e. 1:2, 1:4, 1:10, 1:20.

A ratio of 1:4 means that for every 1 dB the signal falls below the threshold, it will be expanded 4 dB downwards; in other words, we get the sensation of pushing the “dirt” 4 dB down for each dB below the threshold. The roles of attack and release are inverted here compared to compressors.
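
In the same spirit as the compressor example earlier, here is a minimal Python sketch of the downward-expansion arithmetic. The threshold, ratio, and range values are our own assumptions, and attack, release, and hold are ignored:

def expanded_level_dbfs(input_dbfs, threshold_dbfs=-40.0, ratio=4.0, range_db=60.0):
    # Output level for a 1:ratio downward expander or gate
    if input_dbfs >= threshold_dbfs:
        return input_dbfs                                 # above threshold: untouched
    undershoot = threshold_dbfs - input_dbfs              # dB below the threshold
    reduction = min(undershoot * (ratio - 1), range_db)   # extra attenuation, capped by range
    return input_dbfs - reduction

# Cymbal bleed at -50 dBFS with a -40 dBFS threshold and a 1:4 ratio:
print(expanded_level_dbfs(-50.0))   # pushed down to -80.0 dBFS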
The release time defines how quickly the expander starts to act, i.e. how fast it closes and pushes the sound down. When the sound returns above the threshold, the expander (or gate) “opens”, and how fast this opening happens is defined by the attack parameter. Depending on their architecture, expanders and gates also have two special parameters: range and hold. Range defines the maximum reduction, in dB, that the processor can apply, thus creating a “floor” for the background noise. Hold is used to “ask” the expander to wait a certain time (in milliseconds) before it starts pushing the sound down; the release time starts counting only after the hold time. If hold is 0 ms, only the release time is taken into account. There is also a special type of expander, the upward expander, which, like the compressor, works on the upper part of the signal but performs a dynamic “expansion” upwards. The upward expander is used to treat badly processed audio, mainly signals that were compressed or limited in an inappropriate and extreme way. It does not recover the original audio quality, but it allows a totally “limited” or squashed signal to sit in a mix with a level of compression similar to that of the other elements.

REVERBS, DELAYS & EFFECTS
Reverberation, or simply reverb, is perhaps the most intuitive signal processing for people. Hearing the sound of a guitar as if it were being played inside a theater is perhaps one of the most natural things to imagine in terms of sound processing. Reverb, delay, phaser, chorus, and flanger are all time-based effects.

Basically, the dry sound (without processing) occurs at a certain point in time, and a few milliseconds (or, depending on the effect, a few seconds) later the processed sound is added to the original sound, creating the effect. Reverb can be created in the digital world in two basic ways: digital algorithms or convolution. The most common way is through digital algorithms, where the processor takes the original signal, simulates sound reflections on the walls of an imaginary room or environment, and lets the sound keep reflecting in that environment for a certain period of time. This sound is combined with the original, producing the reverberated sound. A convolution reverb instead uses a stimulus recorded in the real world (which we call an impulse response); this stimulus, basically a very short sound (usually a burst of pink noise of extremely short duration), is used to mathematically process the original signal so that it feels as if it had been recorded in the room where the impulse was measured.

Physically, reverberation occurs as shown in the figure above. The original signal (the initial pulse) is reproduced in a given environment. After a very short time (the pre-delay), the original signal first reaches the walls of the room, where the first reflections of the sound occur (early reflections). The sound is then reflected by the various walls and keeps bouncing between them for a period of time (late reflections, or the reverb tail). The time it takes for the sound to decay by 60 dB from the start of the primary reflections is what we call the reverb time (in some processors it is called decay time).

Thus, the main parameters of a reverb processor are:
• Type: We can select the environment as a room, hall, plate, chamber and so on;
• Decay: reverb time (usually shown in seconds);
• Pre-Delay: Delay between the original impulse and the first reflections; a higher pre-delay
value creates the feeling of a larger room;
• Room Size: Determines the physical size of the room and generally increases the decay
proportionately, if the room is large;
• Diffusion: determines the amount of diffusion in the room; a room with uneven surfaces tends to “spread” the sound reflections more, generating a more “colorful” sound (higher diffusion value), while a room with flatter surfaces tends to spread the sound less, generating a more neutral and transparent sound (lower diffusion value).

Unlike reverb, which generates many reflections that are perceived as one large mass of sound, delay can be perceived as distinct repetitions of the audio. The basic processing of a delay is quite simple: the audio passes through the processor, which stores the content in its memory; after a certain predefined period of time (the delay time), the stored sound is repeated and added to the original sound for the length of time specified by the feedback. This stored sound can also be processed before being replayed; this processing is done by the modulation unit found in several delay processors. The modulation is used to vary the delay repetition time through the depth and rate parameters. With the modulation unit active, a delay processor can create phaser, chorus, and flanger effects, which are basically forms of delay created by modulating the repeated signal.

HARMONIC DRIVERS
It is very common nowadays to see amplifier simulators and plugin emulations of audio hardware. These types of processors use what we call harmonic excitation. We can record a clean guitar sound and then process it entirely during the mix, choosing a virtual amplifier, the type of speaker, the type of microphone used in the recording, and so on. All this processing defines the timbre of the audio signal; that is, it alters the characteristics of, and the relationship between, the fundamental frequencies and harmonics of the signal. We can use these tools not only to fully shape a guitar recorded clean, without any processing, but also to add energy, create harmonic distortion, and produce many other interesting effects. There are many such tools available on the market, but some widely used ones are:
• Waves NLS: analog mixing-console simulator;
• Waves Vitamin: multiband harmonic exciter;
• Guitar Rig, Amplitube and Ampeg SVX: guitar and bass amplifier simulators;
• Kramer Tape, UAD Studer and Ampex Tape Recorder: simulators of magnetic tape recorders;
• PSA Sansamp and UAD Thermionic Culture Vulture: distortion units.

IMAGE MANIPULATOR (MID/SIDE)

Mid/side is a signal-processing technique that lets us treat stereo audio in a different way. In mid/side (or M/S), we can process the fully in-phase audio (the content that comes out equally from both speakers and therefore sits in the center of the stereo image) separately from the out-of-phase audio (the content that comes out only from the sides, with the pan opened up); more details about M/S appear in the Mastering section. This allows us to do things such as expand the sensation of stereo width of a synthesizer recorded in two channels, or even turn a stereo sound into mono (by summing the two channels). We can also generate so-called fake stereo, creating a feeling of depth and openness in a sound that was originally recorded in mono. Various tools are available on the market for this kind of manipulation in mixing, such as the following (a small sketch of the underlying M/S math appears after the list):
• Waves S1 Stereo Imager;
• Waves PS22 Mono to Stereo Enhancer;
• UAD Precision K-Stereo Ambience Recovery;
• iZotope Ozone (allows stereo image widening, equalization and compression in Mid/Side mode).
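
To show what these tools are doing under the hood, here is a minimal, hypothetical Python/NumPy sketch of the mid/side encode and decode math itself. The function names and the width parameter are ours; real plugins add filtering and safeguards on top of this:

import numpy as np

def encode_ms(left, right):
    mid = (left + right) / 2.0       # in-phase content: the centre of the image
    side = (left - right) / 2.0      # out-of-phase content: the sides of the image
    return mid, side

def decode_ms(mid, side, width=1.0):
    side = side * width              # width > 1 widens, width = 0 collapses to mono
    return mid + side, mid - side    # back to left / right

# e.g. widen a stereo synth bounce by 20%:
left = np.zeros(44100)
right = np.zeros(44100)              # stand-in audio buffers
mid, side = encode_ms(left, right)
new_left, new_right = decode_ms(mid, side, width=1.2)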

NOISE REDUCERS
A successful production depends on impeccable mixing, but we must not forget that the editing work before the mix is vital. When we are editing the audio, tightening the performance, choosing the best takes, tuning voices, and so on, we can come across situations in which the audio needs to be repaired. Plosive “air punch” sounds when recording vocals with dynamic microphones, clicks, unwanted noises, cable noise or hum, and so on, can and should be dealt with during the music production process. There are numerous audio “restoration” tools available on the market from various brands. The most important thing, however, is to understand the processing categories and what they do. Nowadays, many people use these tools completely incorrectly, and it is necessary to understand when, what, and how to use them:
• De-clicker: tool used to remove occasional clicks caused by electrical surges, discontinuities in the sound wave, clicks caused by incorrectly edited audio (without fades or crossfades), crackling, and so on;
• De-clipper: tool used to “undo” the flattening of the sound wave (digital clipping) that occurs mainly when the audio is captured with too much gain at the preamplifier, or when one or more moments of a performance exceed the amplitude limit (0 dBFS) of the recording system. It is a tool based on upward expansion;
• De-crackler: Used to remove noise (crackles) present in recordings of vinyl records;
• De-noiser: Tool that takes care of reducing constant noise in an audio file, such as noise
from cassette tapes, noise from guitar amplifiers and so on;
• Hum Removal: This tool is basically a specific equalizer that takes care of eliminating noise
caused by interference in the electrical network, which generates the so-called “hum” in
the sound (very common in guitars and instruments susceptible to electrical interference);
We must use any of these noise-reduction tools very carefully, as they leave many artifacts in the audio when used incorrectly. They are corrective measures for when we have an audio problem and cannot re-record, edit, or use another take. Always prefer natural audio with some noise over clean but extremely processed audio.

MASTERING MASTERCLASS

A guide for Producers & Artists
INTRODUCTION
Mastering is often thought of as a mysterious art form. This guide aims to tackle that mystery
head on—to not just explain what mastering is, but to outline how one might go about achiev-
ing the primary goal of any good mastering engineer. And what’s that primary goal? It’s simple:
to prepare an audio recording for distribution while ensuring it sounds at least as good (if not
better!) when it goes out than it did when it came in.
You’ve just finished mixing what you think is a pretty good recording. The playing is good, the
recording is clean, and the mix is decent. Mastering is a process that can, and with practice
often does, take recordings to the next level. What mastering shouldn’t be expected to do is
completely reinvent the sound of your recording. Mastering is not a substitute for good mixing,
or good arranging for that matter! “Loud” records are a result of good writing/arranging/mixing
and mastering. They are made to sound good and loud (if loud is what you are after) from the
get-go, not just at the end. Once you have reached the final step of mixing with something that
represents your best effort, something that you are proud of, then it’s time to dig in and see
how much further mastering can get you toward the sound that you hear in your mind’s ear.
In the end there are no right answers, no wrong answers, and no hard and fast rules. However,
there are some well-known principles of audio production and mastering that are worth think-
ing through as you experiment.

WHAT IS MASTERING?

Although there are many definitions of what “mastering” is, for the purpose of this guide we
refer to “mastering” as the process of taking a mix and preparing it for distribution. In general,
this involves the following steps and goals.

The Sound of a Record

The goal of this step is to take a good mix (usually in the form of a stereo file) and put the final
touches on it. This can involve adjusting levels and general “sweetening” of the mix. Think of
it as the difference between a good-sounding mix and a professional-sounding master. This
process can, when necessary, involve adding things such as broad equalization, compression,
limiting, etc. This process is often actually referred to as “premastering”.

Consistency Across an Album

Consideration also has to be made for how the individual tracks of an album work together
when played one after another. Is there a consistent sound? Are the levels matched? Does the
collection have a common “character,” or at least play back evenly so that the listener doesn’t
have to adjust the volume?
This process is generally included in the previous step, with the additional evaluation of how
individual tracks sound in sequence and in relation to each other. This doesn’t mean that you
simply make one preset and use it on all your tracks so that they have a consistent sound.
Instead, the goal is to reconcile the differences between tracks while maintaining (or even
enhancing) the character of each of them, which will most likely mean different settings for
different tracks.

Preparation for Distribution
The final step usually involves preparing the song or sequence of songs for download, manu-
facturing, and/or duplication/replication. This step varies depending on the intended delivery
format. In the case of a CD or streaming, it can mean converting to 16 bit/44.1 kHz audio through
resampling and/or dithering, and setting track indexes, track gaps etc. For web-centered distri-
bution, you might need to adjust the levels to prepare for conversion to AAC, MP3, or hi-resolu-
tion files and include the required metadata.
MASTERING BASICS

When mastering, you’re typically working with a limited set of specific processors.
Compressors, limiters, and expanders are used to adjust the dynamics of a mix. For adjusting
the dynamics of specific frequencies or instruments (such as controlling bass or de-essing
vocals) a multiband dynamic processor might be required. A single-band compressor simply
applies any changes to the entire range of frequencies in the mix.
• Equalizers: shape the tonal balance of the mix.
• Stereo Imaging: can adjust the perceived width and image of the sound field.
• Harmonic Exciters: can add an edge or “sparkle” to the mix.
• Limiters/Maximizers: can increase the overall level of the sound by limiting the peaks to prevent clipping.
• Dither: provides the ability to convert higher word-length recordings (e.g. 24 or 32 bit) to lower bit depths (e.g. 16 bit) while maintaining dynamic range and minimizing quantization distortion (a small sketch of this follows the list).
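
As a rough illustration of that last bullet, here is a minimal, hypothetical NumPy sketch of reducing a floating-point mix to 16 bit with TPDF dither. The dither shape and scaling are common textbook choices, not any specific product's algorithm, and noise shaping is omitted:

import numpy as np

def dither_to_16bit(mix_float, seed=0):
    # Quantize a float mix (range -1..1) to 16-bit PCM using triangular (TPDF) dither
    rng = np.random.default_rng(seed)
    lsb = 1.0 / 32768.0
    dither = (rng.uniform(-0.5, 0.5, mix_float.shape) +
              rng.uniform(-0.5, 0.5, mix_float.shape)) * lsb   # +/- 1 LSB, triangular PDF
    dithered = np.clip(mix_float + dither, -1.0, 1.0 - lsb)
    return np.round(dithered * 32768.0).astype(np.int16)

# e.g. master_16 = dither_to_16bit(master_float)   # master_float: 24/32-bit float array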

EQUALISERS

There are many different types of equalizers, and they are all meant to boost or cut specific
ranges of frequencies. EQs are typically made up of several bands. A band of EQ is a single filter.
By combining bands, you can create a nearly infinite number of equalization shapes. Paramet-
ric equalizers provide the greatest level of control for each band. They allow for independent
control of the three variables—amplitude, center frequency, and bandwidth—that make up a
bell or peaking equalizer.
The picture below shows the equalizer screen of Ableton Live's EQ Eight, but the principles are the same for most parametric EQs. There are eight sets of arrows, which represent eight bands
of equalization.

Below is the Pro-Q equalizer from FabFilter.

DYNAMICS

Mastering the dynamics of a mix using compressors, limiters, and expanders is probably the
most challenging step of the process, but the one that can make the most difference between
a basement tape and a commercial-sounding mix. Taking the time to understand dynamics
processing can be well worth the effort.

There are a few things that make mastering dynamics challenging. The effect is subtle, at least if done correctly. It's not something you clearly hear, like a flanger or reverb, but instead something that changes the character of the mix. If you think about it, compression removes something (dynamic range), and so what you hear is the absence of something.
A compressor is not necessarily working all the time. Since it changes in response to the dy-
namics in the music, you can’t listen for one specific effect. Level histograms and compression
meters can be invaluable for referencing when the compression is occurring, and by how much.
Not all compressors are created equal. While the concept is simple enough (restrain the volume when it crosses a threshold), the design and implementation (and therefore the quality)
of compressors varies considerably. Applying a quality compressor correctly, however, can
smooth the peaks and valleys in your mix and make it sound fuller, smoother, or allow you to
increase the average level (if that’s the desired goal).

These are the four main types of compressors used in mastering:

Vari-Mu (also called Variable-Mu or tube compressors):

The best known is the Fairchild, from the 1950s. Vari-Mu compressors are not very fast and have the characteristic of coloring the sound; in other words, they create harmonics, a kind of subtle saturation.

OPTO:
These are optical compressors. The most representative optical compressor is the Teletronix LA-2A. This famous unit has received several emulations, such as the Waves CLA-2A (shown in the image above).

VCA:
VCAs are more modern compressors. Perhaps the most famous examples are the DBX 160 and the SSL Bus Compressor, which was emulated by Cytomic; if you are an Ableton user, that emulation is the Glue Compressor.

(DBX 160)

VCAs are, for the most part, fast and transparent compressors. In addition, they have precise attack and release control.

FET:
FET technology allows these compressors to be ultra-fast. In addition, they impart a peculiar coloring to the audio: they are aggressive compressors with personality. Perhaps the most famous in this category is Universal Audio's 1176.

This is one of the most used compressors in the mixing and mastering of the greatest albums in history, including Michael Jackson's vocals.

LOUDNESS MAXIMIZER (Limiting)
Using tools like the Brainworx bx_limiter to perform limiting is not solely about making a recording louder, though that is a consideration. Judicious use of a limiter can also enhance the perceived presence and impact of a track.

Most sound editors have a Normalize function. The Normalize function analyzes your entire mix, finds the highest peak, and adjusts the gain of the entire mix so that the highest peak in the mix is at 0 dBFS (the verge of clipping) or a specified target level. The rest of the music is then adjusted in level by the same amount. However, all this does is put the single highest peak on the verge of clipping.
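
To make that concrete, here is a minimal, hypothetical NumPy sketch of what the Normalize step does: one fixed gain for the whole mix, set by the single highest peak (the -0.3 dBFS target is just an example value):

import numpy as np

def normalize(mix, target_dbfs=-0.3):
    # Apply one fixed gain so the highest peak lands at target_dbfs
    peak = np.max(np.abs(mix))                    # the single loudest sample
    target_linear = 10 ** (target_dbfs / 20.0)    # convert the dBFS target to linear gain
    return mix * (target_linear / peak)

# e.g. normalized = normalize(master_float)       # master_float: float array in -1..1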
The principle behind a limiter is that you can limit the peaks at the threshold and then bring up the rest of the mix. The bulk of the mix can be raised because the peaks have been cut down, so nothing overloads 0 dBFS.
A tiny bit of limiting is almost unnoticeable. In fact, if you were to limit or clip a single sample, it
is beyond our perception to notice that at all.

STEREO IMAGING
Most pop/rock-based musical idioms have the following in common: the most important
elements are the drums and the vocals. To that end, the kick, snare, and lead vocal tracks are
usually panned to the center. When you use a stereo widener, you are therefore usually empha-
sizing the other elements in the mix. A little of that might help, but only a little.

Other issues come into play regarding phase relationships and overall sonic clarity, so if you use a widening tool, listen to be sure that the heart of your recording isn't diminished.

METERS

Here are the three main types of meters and their uses in mastering:
Level Meters:
Level meters are probably the most familiar and ubiquitous meter. They will
usually display Peak level (the level of a signal from moment to moment)
and RMS or “average” level (the level of a signal averaged over a short
window of time). Both types of information are important but for different
reasons.
Peak level tells us how close the signal is to the point of distortion. It’s a way
of helping us understand whether we have any headroom to bring a signal
up without changing anything else about it and staying below the point of
distortion.
RMS level gives us information that relates to our perception of volume. Our
brain processes information during a short window of time to evaluate how
loud something is in our environment, and an RMS level is a way to attempt
to give feedback about that. However, RMS doesn’t relate directly to per-
ception in the sense that it doesn’t take frequency content and balance into
account.
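
As a rough illustration of the two readings, here is a minimal, hypothetical NumPy sketch that computes peak and short-window RMS levels in dBFS for a floating-point mix (the 300 ms window is an arbitrary choice; real meters add ballistics and calibration):

import numpy as np

def peak_dbfs(mix):
    # Highest instantaneous sample level, in dBFS
    return 20 * np.log10(np.max(np.abs(mix)))

def rms_dbfs(mix, sr=44100, window_s=0.3):
    # RMS level of one short analysis window, in dBFS (a rough loudness cue)
    window = mix[: int(sr * window_s)]
    return 20 * np.log10(np.sqrt(np.mean(window ** 2)))

# e.g. print(peak_dbfs(master_float), rms_dbfs(master_float))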

The relationship between Peak level and RMS level will vary widely depending on the dynamics
of the mix and the genre of the music. That makes it hard to generalize too much. However, if one had to give a general idea about where the RMS level should generally sit with respect to 0 dBFS:
• Electronica: -8 to -12
• Pop/RnB: -10 to -14
• Rock: -12 to -16
• Acoustic idioms (Jazz, Classical, folk music related): -14 to -20

Spectrograms:
While not exactly a meter, a spectrogram does map levels, or energy, across the spectrum. It is
a helpful tool that provides a reality check on what you hear and, to an extent, what you can’t
hear—especially in the nether regions of the bass and the region close to 20 kHz. Spectro-
grams are good for helping you quickly diagnose a problem frequency or set of frequencies.
For instance, when a track has a problem with sibilance (the sound a vocal “S” makes), it usually shows up quite readily on a spectrogram. That makes it easy to focus an EQ or a de-esser on the problem.
It’s helpful to have a spectrogram running while working in the EQ module to help you focus
your EQ settings. It’s also useful to have one running at the beginning and the end of your mas-
tering chain to keep tabs on what is happening and how your original mix has been altered.

Vectorscope/Correlation/Stereo Image Metering
This sort of metering is probably the most under-appreciated, and in some ways this isn't surprising. The concept of thinking about mono compatibility, and the width of a stereo image,
isn’t always discussed often enough when people begin to learn about audio engineering or
mastering.
When you are mastering, you want to be sure that the main instruments in a mix don’t disap-
pear when you listen to it in mono. A correlation meter gives you visual feedback so you can be
sure that the recording has a strong orientation toward mono, and that it most certainly does
not spend too much time with a strong orientation toward out of phase information. Why?
Scenarios such as vinyl cutting, MP3 encoding, and terrestrial broadcasting (radio and TV) rely
heavily on the mono signal being in good proportion. In any of these scenarios, too much out-
of-phase information will cause unpleasant artifacts for listening and, in some cases, actually
cause a mix to be sent back to the mixer as unusable.
As with any mastering process, a major problem with phase is best corrected at the mix stage.
If that’s not possible, one can try to address these problems using Mid/Side processing or the
stereo imaging tool. In these cases, the correlation meter is a helpful gauge for making your
adjustments.

