Elements of Radio Programme

The document discusses the key elements of radio programming including human voice, music, sound effects, and silence. It also describes the process of radio production including pre-production, production, and post-production stages. Finally, it covers basic radio equipment like microphones, headphones, and talkbacks.

Uploaded by Ketan Jain

ELEMENTS OF RADIO PROGRAMME

Radio is a medium of voice, sound effects, music, and silence.

 Human voice 

It is the spoken word or voices of newsreaders and RJs that stimulate our imagination when
we listen to the radio. They are the personification of radio, providing a personality with
which we identify and connect to the programme.

Since radio is exclusively an audio medium, the voice should be clear, of high quality, and
pleasant to listen to. Presenters modulate their voices to create an impact and engage
their audiences in the show.

There are two aspects to the use of the human voice in radio production. First, there should
be a well-written script to be spoken; then someone has to speak or read it before a
microphone in a studio.

 Music 

Music is the soul of radio. It is used in different ways on radio. 

Film songs and classical music programmes are aired as independent programmes. 

It is also used as theme music of various radio programmes or signature tunes to give a
distinct identity to the radio station. In any event, the music played on most radio stations is
not randomly selected by individual presenters or producers, but it is governed by a music
policy that has been developed to appeal to the station’s target audience.

Music can suggest a change in scenes and locations. It could be something as simple as
drum beats or bells. It also helps in breaking monotony. 

It can be used to give the desired effect of happy or unhappy situations, fear or joy. For
instance, if the scene is of a celebration, fun and upbeat music will be used.

Music adds colour and life to any spoken word programme. It helps to set a particular mood
and convey or enhance emotions. 

 Sound effects 

Sound effects are artificially created or enhanced sounds used to emphasize or express an
action, mood, or feeling.
They are often synchronized with specific actions and are created with foley work or digital
processing, or taken from a sound-effects library or the original sound of a scene. For
example, two coconut shells can produce the sound of horses' hooves, and crushing an
aluminium wrapper can sound like a raging fire; this technique is called foley.

Sound effects give a radio programme meaning and a sense of location. They add realism to a
programme and help a listener use imagination. For example, to create the scene of a
temple, sound effects of bells, prayer chants, and crowd murmurs would be used.

They add depth and realism to the programme and thus significantly shape the audience's
experience.

 Silence 

Silence or pauses are essential while presenting a programme: they give listeners time to
register the information they have just heard and allow them to immerse themselves fully in
the experience.

It can be used imaginatively to convey meaning. It emphasizes the emotion that every
single sound creates. For example: Silence after the news of a character’s death can
enhance the emotion of shock and grief. A long pause after an unexpected encounter can
add to the element of suspense.

Silence creates drama. Example: The silence after a door slam or a slap can add more
gravity to the scene. 

It is commonly utilized to great effect due to the emotional impact it has on listeners, the
level of immersion it creates, and the intrigue it generates to better captivate the
audience.

Thus, when used appropriately, it adds to the quality of the radio programme.

RADIO PRODUCTION PROCESS


The process of radio production can be divided into three stages.

1. Pre-production Stage

This is the planning and development stage. It involves idea generation, research, scripting,
discussions with all the crew members and actors, location hunting, arrangement of
equipment, booking of editing shifts, and budgeting.

It is all about getting a clear idea of what you want to make, and it is perhaps the most
important stage of all.

 Brainstorming

It starts with ideation and conceptualization, in which the subject matter is decided. For
example, the producer decides that the topic or theme of the programme will be girl child
education.

 Planning

It is followed by drawing up a plan of action in which the format of the programme, the
writers and voice actors, the equipment required, and the venue and location for the
recording are decided. Here, the producer decides that the most potent format would be a
radio drama.

Now, all the characters will have their own distinct voice. He/she now decides whose voice
would suit the character the best.

For example, an actor with a mature and bold voice will be preferred for a serious adult role. 

 Scripting

Then the script is prepared and examined to make it suitable for radio. All dialogues, sound
effects, music, etc., are decided for the girl child education programme.

Next, all the administrative work involving paperwork for permissions and contracts is done.

 Paper work

Once the producer has decided on the voice actors, it is time to invite them to accept the
job assignment and make all the terms of agreement clear. This agreement, specifying the
salary and other terms and conditions, is referred to as the contract.

 Rehearsal

Finally, rehearsal is done by the voice actors before final recording to bring out the best of
the characters' emotions. 

2. Production Stage

The production stage involves recording the entire material for the final programme. 

It takes place in a proper studio, using computers, with all the voice actors present to record.

The recording itself also requires selecting and positioning the microphones, the type of
tapes to be used, the output of the sound from the audio mixer, etc. 
For example, it was decided that the format of the programme would be a drama. Since it has
multiple characters, an omnidirectional microphone will be best suited for recording.

Recording is the responsibility of the producer. It is therefore necessary for him/her to
check the studio recorder, the studio clock, and the magnetic tape to be used sufficiently in
advance so that there is no hassle at the time of recording.

3. Post-production Stage

This is the final stage of production in which the recorded programme is given a shape and
structure with the help of editing.

It involves cutting the audio to fit the predetermined time limit, removing any uninteresting
or repetitive material, arranging the audio in a logical sequence, and adding the required
sound effects, music, or even pauses for the assembly of the final programme.

For instance, if the sound effect of a baby girl crying is needed, it will be added in this stage.

If there was any awkward silence or fumbling during the voice acting, it can be cut out while
editing in post production.

The promotion and publicity of the programme both on radio and in other media is also a
part of this stage. This is done to ensure that people know about the programmes and also
listen to them.

EQUIPMENT
MICROPHONES

Microphone is an instrument for converting sound waves into electrical energy variations which
may then be amplified, recorded and transmitted.

On the basis of directivity, there are three types of microphones.

 Unidirectional

As the name suggests, it picks up sound from one direction. It suppresses sounds from the
rear and sides and hears best in one direction: the front of the mic.

They are often used by announcers, presenters and newsreaders.

 Bi-directional

The voice or sound is picked up from two directions—from front and back but not from sides.
They can be used in an interview with two people facing each other with the mic between
them.

 Omnidirectional

The word omni means “all”. This microphone picks up sound from all directions.

It is used when a number of voices are used in a single programme like a radio discussion or
a radio drama.

On the basis of mechanism, we have three types:

 Dynamic

Their sound pickup device consists of a diaphragm that is attached to a movable coil. As the
diaphragm vibrates with the air pressure from the sound, the coil moves within a magnetic
field, generating an electric current. Also called moving-coil microphone.

They can be worked close to the sound source and still withstand high sound levels without
damage to the microphone or excessive input overload (distortion of very high-volume
sounds). They can also withstand fairly extreme temperatures, therefore, are ideal outdoor
mics.

 Condenser

Their diaphragm consists of a condenser plate that vibrates with the sound pressure against
another fixed condenser plate, called the backplate. Also called a capacitor microphone; the
electret mic is a common variant.

They are much more sensitive to physical shock, temperature change, and input overload,
but they usually produce higher-quality sound when used at greater distances from the
sound source. 

 Ribbon

Their sound pickup device consists of a ribbon that vibrates with the sound pressures within
a magnetic field. Also called velocity microphone.

Similar in sensitivity and quality to condenser mics, ribbon microphones produce a warmer
sound that is frequently preferred by singers. They are strictly for indoor use.
 
HEADPHONES AND TALK BACKS

Headphones are a pair of padded speakers worn over the ears in order to listen to audio
signals without other people hearing them.
They allow the listener to have a direct line of audio straight to the ears, which helps
recognize words with much more clarity. They also allow the presenter to monitor the
loudness and expression of his/her voice.

Talkback is a microphone-and-receiver system installed in a recording/mixing console for
communication between people in the control room and performers in the recording studio.

Using this tool, the engineer or producer can communicate with performers wearing
headphones while they are performing in the studio without interfering with the recording.

It is also used to announce the title or other relevant information at the beginning of a
recording.

AUDIO MIXERS

The audio mixer allows us to control the volume of a limited number of sound inputs and mix
them into a single output signal. It is needed whenever there are a number of sound sources
to select, blend together, and control (such as a couple of microphones, CD, VCR audio
output, etc.). The output of this unit is fed to the recorder.

Each input source comes into the mixer through a channel (vertical columns on the board)
which contains a number of rotary potentiometer knobs and buttons, each performing a
different function.

The basic input controls of the channel are: 

 Gain

The gain control sets the level of the incoming signal, applying amplification (boosting the
signal) or attenuation (reducing the signal).

The level is normalized when we have a healthy sound signal coming in that still has enough
headroom, so the loudest portions aren’t overmodulated. Overmodulation occurs when the
incoming signal is too loud and the signal becomes distorted. Too much gain equals
distortion.
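The gain and headroom idea can be sketched numerically. Below is a minimal Python illustration (the function names are ours, not part of any mixer's API): a gain in decibels maps to a linear factor of 10^(dB/20), and overmodulation is simply the signal exceeding the available ceiling.

```python
def apply_gain(samples, gain_db):
    """Scale samples by a gain given in decibels: +6 dB roughly doubles
    the amplitude, -6 dB roughly halves it."""
    factor = 10 ** (gain_db / 20.0)
    return [s * factor for s in samples]

def overmodulated(samples, ceiling=1.0):
    """True when any sample exceeds the ceiling, i.e. the boosted
    signal has run out of headroom and will distort."""
    return any(abs(s) > ceiling for s in samples)

signal = [0.3, -0.5, 0.4]          # healthy level with headroom to spare
hot = apply_gain(signal, 12.0)     # +12 dB is roughly a 4x boost
```

Here `overmodulated(hot)` reports clipping while the original `signal` passes: too much gain equals distortion.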

 Equaliser

The EQ allows us to add or subtract a given frequency. On a simple board, it’s broken up
into high, mid and low. Some mixing boards expand to high, high-mid, low-mid and low
frequencies. 

The best practice is to use an EQ to remove troublesome frequencies, giving a clearer, less
muddy sound. It is used for making sounds more intelligible and reducing feedback.

 Pan

Panning allows us to move the sound to the left or to the right or keep it in the center. This
will allow us to play with the stereo image. 
The stereo image is the perceived spatial location of a given sound source. If you pan an
input 30 degrees to the left, the source will seem to sit slightly to your left. Playing with
the panning of each channel gives us more space in the mix.
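One common way to implement panning is the constant-power pan law, sketched below in Python (the choice of law is our assumption; consoles differ): left and right gains follow cosine and sine curves so overall loudness stays constant as the source moves across the stereo image.

```python
import math

def pan(sample, position):
    """Constant-power pan. position runs from -1.0 (hard left)
    through 0.0 (centre) to +1.0 (hard right); returns (left, right)."""
    angle = (position + 1.0) * math.pi / 4.0   # maps to 0 .. pi/2
    return sample * math.cos(angle), sample * math.sin(angle)

left, right = pan(1.0, 0.0)   # centred: both channels near 0.707
```

At the centre each channel sits about 3 dB down, so the summed power matches a hard-panned signal instead of bulging in the middle.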

 Fader

The fader controls the level of any given channel sent to the main mix. If you want an input
to be the dominant element of your mix, you push its fader up to full.

 Phantom

Phantom power is a DC voltage (usually 12-48 volts) used to power the electronics of a
condenser microphone. For some (non-electret) condensers it may also be used to provide
the polarizing voltage for the element itself.

Functions of an Audio Mixer

1. Input: to preamplify and control the volume of the various incoming signals

2. Mix: to combine and balance two or more incoming signals

3. Quality control: to manipulate the sound characteristics

4. Output: to route the combined signals to a specific output

5. Monitor: to listen to the sounds before or as their signals are actually recorded or
broadcast
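The input, mix, and output functions above can be sketched as a toy model in Python (illustrative only, not a real console): each channel is scaled by its own fader, then the channels are summed into one output signal.

```python
def mix(channels, faders):
    """Combine several equal-length channels into one output signal,
    each channel scaled by its fader (0.0 = silent, 1.0 = full)."""
    out = [0.0] * len(channels[0])
    for channel, fader in zip(channels, faders):
        for i, sample in enumerate(channel):
            out[i] += sample * fader
    return out

voice = [0.2, 0.4, 0.2]
music = [0.6, 0.6, 0.6]
programme = mix([voice, music], [1.0, 0.5])   # music bedded under voice
```

The half-level fader on the music channel keeps the bed from masking the voice, which is exactly the balance step the mixer exists for.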

TRANSMITTERS 

Transmitters are the devices which transmit the sound signals to the listeners' radio sets.
They are large in comparison to the other equipment installed in the studio or control room.

The strength and type of the transmitter determines the coverage area of broadcast.

Transmitters are available in different capacities, such as 1 kW to 100 kW, 200 kW, 250 kW or above.


The location is decided according to capacity. A 1 kW or other low-power transmitter is
normally installed in the vicinity of the studio, whereas high-power transmitters are
installed outside the city.

Likewise, there are Medium Wave (MW) radio broadcast transmitters and Short Wave (SW)
radio broadcast transmitters.
RECORDING & TROUBLESHOOTING 

INDOOR

 Studio

A studio is a specialized facility for sound recording, mixing, and audio production of
instrumental or vocal musical performances, spoken words, and other sounds. 

It typically consists of a live room equipped with microphones and mic stands, where
instrumentalists and vocalists perform; and the control room, where sound engineers and
record producers operate professional audio mixing consoles, or computers with specialized
software suites to mix, manipulate (e.g., by adjusting the equalization and adding effects)
and route the sound for analogue recording or digital recording. The two rooms are mostly
partitioned by a glass window in the middle. 

Ideally, the space is specially designed by an acoustician to achieve the desired acoustic
properties (sound diffusion, a low level of reflections, adequate reverberation time for the
size of the room, etc.).

It is sound proof and usually has only one door which is not very easy to open. This prevents
any noise from outside interfering with the recording of high quality audio.

 Acoustics

Acoustics refers to the science concerned with the production, control, transmission,
reception, and effects of sound. The application of acoustics can be seen in almost all
aspects of modern studios.

The goal in creating good acoustics within a broadcast studio is to lower the level of ambient
echo, which in turn gives greater clarity to the original sound and a crisp quality to the
broadcast signal.

There are three types of surfaces which come into play while talking about acoustics:
Reflective, Absorbing, and Diffusing. 

Fine-tuning sound quality inside a studio setting requires strategic placement of sound
absorption surfaces to control reverb time and diffusion materials to control "placement" of
the sound energy. 

By lowering the level of echo with soundproofed walls, the room delivers cleaner sound and
becomes a far more usable studio for broadcasting. Unwanted sound-wave reflections can be
captured by introducing sound panels mounted on the studio walls or ceiling.

 Perspective 
Sound perspective refers to the apparent distance of a sound source, evidenced by its
volume, timbre, and pitch.

Distance, i.e. how close to or far from you a sound seems to be, is created mainly by relative
loudness, or sound perspective. The louder a sound, the closer it seems to the listener.

A closer sound perspective may sometimes be simulated by recording with a directional
microphone, which rejects sound from other directions. A more distant perspective may
sometimes be simulated in post-production by processing the sound and mixing in other
sounds.

Sound perspective can also give us clues as to who is present in a scene, where they are, and
their relative importance to the narrative.
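The loudness-to-distance relationship can be approximated with the inverse-distance law (a simplification for a point source in open air): pressure amplitude falls off roughly as 1/distance, about 6 dB quieter per doubling. A short Python sketch, with illustrative names:

```python
def at_distance(samples, distance):
    """Scale a recorded signal as if the source were 'distance' metres
    away, using the 1/d inverse-distance law (reference level at 1 m)."""
    factor = 1.0 / max(distance, 1.0)
    return [s * factor for s in samples]

close = at_distance([0.8, -0.8], 1.0)   # full level, sounds near
far = at_distance([0.8, -0.8], 4.0)     # quarter amplitude, sounds distant
```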

OUTDOOR

 Ambience 

Ambience or ambient audio means the background sounds which are present in a scene or
location.

Example: if the scene happens in a house in a village it will have the ambience of lots of
birds chirping in the background, while a house in a city would have sounds of vehicles.

It provides a sense of the place where, and perhaps of the time when, events occur. It creates
an atmospheric setting and draws the listener into the surroundings of that environment.

It performs a number of other functions including:

 Providing audio continuity between scenes.
 Preventing an unnatural silence when no other sound is present.
 Establishing or reinforcing the mood.

 Noise 

Noise is any interference or unwanted sound in a given environment, with the exclusion of
the primary sound. 

A sound might be unwanted because it is loud, unpleasant, annoying, intrusive, or distracting.
Noise is also considered a mixture of many different sound frequencies at high decibel
levels.

Noise perception is subjective. For instance, the sound of a violin is generally very soothing
and beautiful. But in case we are recording something where it is acting as a distraction, its
sound becomes noise. Factors such as the magnitude, characteristics, duration, and time of
occurrence may affect one's subjective impression of the noise.
 EDITING AND MIXING

Sound editing is a post production process involving selection, rearrangement, cutting, and
re-recording of sounds. It is also done to give a creative effect through new juxtapositions of
speech, music, sound and silence.

The basic functions of editing are:

1. Shaping a Programme: The recordings are rearranged in a logical and natural flow to
structure a programme having a beginning, a middle and an end. Certain portions of the
recorded programme, such as repetitions or undesired parts, may have to be removed by
editing. If there is a sudden change of mood or subject, a
pause may be added.

2. Timing a Programme: The recorded programme is adjusted to fit a predetermined
duration. The duration of a programme is usually kept a little short, and in no case
should it exceed the allotted time.

3. Cutting of Unwanted Material: Editing removes the uninteresting, repetitive or
technically unacceptable parts. For example, during recording there could be slips of
the tongue, 'hems', needlessly long pauses or other mistakes in reading a script. All
these need to be cut.

4. Retakes: If certain parts of a programme are not recorded to the full satisfaction of
the producer, retakes may be taken and inserted.

Sound editing helps in designing a complete package, setting the mood, pace and flow of
the story, and communicating the right aesthetic to the audience.

It is key in creating an emotion-evoking masterpiece to which the audience can connect. It
offers a final chance to clarify and intensify the intended meaning.

It is thus both an art and a science, as it requires technical knowledge of tools and
software as well as artistic storytelling skills and a creative outlook.

 EDITING PROCESS

Each recording method (such as linear, non-linear, etc.) has its own editing procedure.
However, they all involve some common basic steps.

The first step of the audio editing process is listening to the original recording. This gives the
producer an idea of whether the production has all the elements that are needed.

The next step involves identifying the portions that need to be deleted. Here, emphasis is
laid on the dialogues. The beginning and the end of the unwanted portion are marked for cutting
and later rejoined for uninterrupted flow. During the dialogue editing phase, the sound editor
is also focusing on EQ, monitoring the frequencies of the dialogue sound and removing any
frequencies that are unwanted from the file.
The next step is to add in sound effects that will make the performances more realistic and
engaging, for example, foley of a door slamming, a glass dropping, a person walking
across the room, or any other sound to accompany the action being performed in the
scene.

It is followed by the layering of ambient sound such as birds chirping, people talking, etc.

After every action in the scene has been given a dedicated sound, the music is added to
create a mood or enhance emotions.

Finally, all the audio files are mixed so that they are in sync.

TYPES OF SOUND EDITING

 Mechanical splicing

It involved physically cutting the tape at the edit points. A special chalk-like pen was used to
place markings and there were machines that allowed a film editor to view the film while
editing, providing a convenient way to determine more precisely where to cut or splice a
particular part of a print. The two ends of the remaining tape were then joined back together
with an adhesive tape.

 Linear editing

Linear editing is basically selecting audio clips from one tape and copying them in a specific
sequential order onto another tape. It does not allow random access or selection and
arrangement of shots. All tape-based editing systems are therefore called linear because
they involve starting with the first shot and working through to the last shot. It is also known
as tape to tape editing.

Its drawback is that it is time consuming and destructive in nature. Every time it is used, the
quality falls due to wear and tear.

 Non linear editing

The audio information is stored in digital form on computer hard disks or read/write optical
discs. Once audio material has been transferred on to a hard disk or pendrive, it can be
manipulated, cut, rearranged or treated digitally in a variety of ways depending on the
software used. 

An advantage of this over other methods is that it is non-destructive editing, i.e. it leaves
the original recording intact. It is therefore possible to do the same edit several times, or
to try alternatives, without any worry of wear and tear.

Once edited, the finished recording can be saved to the hard disk, written on to an individual
CD or DVD, or directly exported to other digital media platforms.
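Non-destructive editing can be pictured as an edit decision list applied to an untouched source file. A small Python sketch (sample lists stand in for audio files; the names are illustrative):

```python
def render(recording, edit_list):
    """Play back only the (start, end) sample ranges named in the edit
    list, in the given order; the source recording is never modified."""
    output = []
    for start, end in edit_list:
        output.extend(recording[start:end])
    return output

take = [10, 11, 12, 13, 14, 15, 16, 17]
# skip samples 3-4 (a fumble); the edit can be redone or varied at no cost
edited = render(take, [(0, 3), (5, 8)])
```

Because `take` is left intact, the same edit can be attempted any number of times with no wear and tear, which is exactly the advantage over tape-based methods.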

SOUND MIXING
Sound mixing is the process of matching the audio levels of all of the sounds, from dialogue
to foley to music. A sound mixer must tweak every single audio file in order to make the
production sound clear, crisp, and seamless.
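A common first step in matching levels is peak normalization, sketched here in Python (one simple strategy among several; real mixes also weigh perceived loudness, not just peaks):

```python
def normalize_peak(samples, target=0.9):
    """Scale a track so its loudest sample lands exactly at 'target',
    giving every track a comparable starting level before mixing."""
    peak = max(abs(s) for s in samples)
    if peak == 0.0:
        return list(samples)       # silent track: nothing to scale
    return [s * (target / peak) for s in samples]

dialogue = normalize_peak([0.05, -0.12, 0.08])   # quiet take brought up
```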

 ADDING SOUND EFFECTS AND MUSIC


SOUND EFFECTS

A sound effect is an artificially created or enhanced sound used to emphasize or express an
action, mood, or feeling. It is often synchronized with specific actions and can be created
with foley or digital processing, or taken from a sound-effects library or the original sound
of a scene.
Foley example: The realistic sound of bacon frying can be the crumpling of cellophane. 

Sound effects in a radio programme give meaning and a sense of location. They add depth
and realism to a programme and help a listener use imagination, significantly shaping the
audience's experience.

The term also refers to a process/technique applied to a recording while editing on the
software in the post production stage. Some typical effects used in recording and amplified
performances are:

 Echo: To simulate the effect of reverberation in a large hall or cavern, one or several
delayed signals are added to the original signal with a minimum delay of 35
milliseconds.
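The delay-and-add scheme just described is easy to sketch in Python. This is a single echo tap with illustrative names; real echo units add several taps and often feed the output back into the delay line:

```python
def add_echo(samples, sample_rate=44100, delay_ms=350, decay=0.5):
    """Mix an attenuated copy of the signal back in after a delay;
    delays above roughly 35 ms are heard as a distinct echo."""
    delay = int(sample_rate * delay_ms / 1000)
    out = list(samples) + [0.0] * delay        # room for the echo tail
    for i, s in enumerate(samples):
        out[i + delay] += s * decay
    return out
```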

 Phaser: To give a "synthesized" or electronic effect to natural sounds, such as
human speech, the signal is split, a portion is filtered with an all-pass filter to produce
a phase shift, and then the unfiltered and filtered signals are mixed. The voice of C-
3PO from Star Wars was created by taking the actor's voice and treating it with a
phaser.

 Equalization: Different frequency bands are attenuated or boosted to produce
desired spectral characteristics.

 Time stretching: Changes the speed of an audio signal without affecting its pitch.
 Filtering: Boosts or attenuates certain frequency ranges using lowpass, high-pass,
band-pass or band-stop filters. Band-pass filtering of voice can simulate the effect of
a telephone because telephones use band-pass filters.

Some functions of sound effects are:

 Defining Space

Sound defines space by establishing distance, direction of movement, position, openness,
and dimension. Thunder at a low sound level tells you that a storm is some distance away;
as the storm moves closer, the thunder grows louder.

By varying sound level, it is also possible to indicate direction of movement. As a person
leaves a room, the sound will gradually change from loud to soft; conversely, as a sound
source gets closer, the level changes from soft to loud.

 Focusing Attention

It draws attention and provides the listener with a focus. In a scene of a large room filled
with people, the sounds blend together, but if a person shouts or begins choking, the sound
directs attention to that individual.

 Establishing Locale

Sounds can establish a locale. Example: honking car horns and screeching brakes place
you in city traffic and the whir and the clank of machinery places you in a factory.

 Emphasizing Action

Sounds can emphasize or highlight action. A person falling down a flight of stairs tumbles all
the harder if each bump is accented. A car crash becomes a shattering collision by
emphasizing the impact and the sonic aftermath, including silence. Creaking floorboards
underscore someone slowly and methodically sneaking up on an objective.

 Intensifying Action

It increases or heightens dramatic impact. A car's twisted metal settling in the aftermath of a
collision emits an agonized groaning sound. In animation, sound (and music) intensifies the
extent of a character's running, falling, crashing, skidding, chomping, and chasing.

 Depicting Identity

Barking identifies a dog, slurred speech identifies a drunk, and so on. On a more
informational level, sound can also give a character or an object its own distinctive sound
signature: the rattle of a rattlesnake to identify a slippery villain with venomous
intent; thin, clear, hard sounds to convey a cold character devoid of compassion; labored,
asthmatic breathing to identify a character's constant struggle in dealing with life.

 Setting Pace

Sounds, or the lack of them, help set pace. In a war battle, bomb bursts can vary from boom-
boom-boom to boom, then a few pauses, followed by boom-boom. Additionally, the sounds of
bullets hitting or ricocheting off different objects can add not only to the scene's rhythm
but also to its sonic variety.

MUSIC

Music is used in different ways on radio. There are programmes of music and music is also
used in different programmes. These include signature tunes, music used as effects in radio
plays and features.

The basic elements of music are pitch (which governs melody and harmony), rhythm (and its
associated concepts tempo, meter, and articulation), dynamics, and the sonic qualities of
timbre and texture.

Music in a production can have three uses: as production source, as source, and for
underscoring. 

Production source music emanates from an on-screen singer or ensemble and is produced
live during shooting or in post production. Source music is background music from an on-
screen source such as a stereo, radio, or jukebox; it is added during post. Underscore music
is original or library music added to enhance informational or emotional content. 

One essential difference between sound effects and music is that sound effects are
generally associated with action and music with reaction.

Music can be added for the following purposes:

 Establishing Locale

Many musical styles and themes are indigenous to particular regions. Example: beats of tribal
drums usually remind us of Africa.

 Emphasizing Action

Example: A dramatic chord underscores shock or a moment of decision. A romantic theme
highlights that flash of attraction between lovers when their eyes first meet. Tempo
increasing from slow to fast emphasizes impending danger.

 Intensifying Action
The scariness of sinister music builds to a climax behind a scene of sheer terror and crashes
in a final, frightening chord. The repetition of a short melody, phrase, or rhythm intensifies
boredom, the threat of danger, or an imminent action.

 Depicting Identity

Music can identify characters, events, and programs. A dark, brooding theme characterizes
the bad guy. Sweet music indicates a gentle, sympathetic personality. Strong, evenly
rhythmic music suggests the relentless character out to right wrongs. 

 Setting Pace

Music sets pace mainly through tempo and rhythm. Slow tempo suggests dignity,
importance, or dullness; fast tempo suggests gaiety, agility, or triviality. Changing tempo
from slow to fast accelerates pace and escalates action; changing from fast to slow
decelerates pace and winds down or concludes action. Regular rhythm suggests stability,
monotony, or simplicity; irregular rhythm suggests complexity, excitement, or instability.
Using up-tempo music for a slow-moving scene accelerates the movements within the scene
and vice versa.

 Unifying Transition

Music is used to provide transitions between scenes for the same reasons that sounds are
used: to overlap, lead in, segue, and lead out. Overlapping music provides continuity from
one scene to the next. Leading-in music establishes the mood, atmosphere, locale, pace,
and so on of the next scene before it actually occurs. Segued music changes the mood,
atmosphere, pace, subject, and so on from one scene to the next. By gradually lowering and
raising levels, music can also be used to lead out of a scene. A complete fade-out and fade-
in makes a definite break in continuity; hence the music used at the fade-in would be
different from that used at the fade-out.
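The lead-out/lead-in by gradually lowering and raising levels is a crossfade. A minimal linear version in Python (linear gain curves are an assumption for simplicity; equal-power curves are also common in practice):

```python
def crossfade(outgoing, incoming, overlap):
    """Fade the last 'overlap' samples of the outgoing clip down while
    the first 'overlap' samples of the incoming clip fade up."""
    mixed = list(outgoing[:len(outgoing) - overlap])
    for i in range(overlap):
        g = i / overlap                      # gain rises 0 -> 1
        tail = outgoing[len(outgoing) - overlap + i]
        mixed.append(tail * (1.0 - g) + incoming[i] * g)
    mixed.extend(incoming[overlap:])
    return mixed

scene_a = [1.0, 1.0, 1.0, 1.0]
scene_b = [0.5, 0.5, 0.5, 0.5]
joined = crossfade(scene_a, scene_b, 2)   # overlapped, continuous join
```

Setting `overlap` to zero gives a hard cut, the "definite break in continuity" described above, while larger overlaps give a smoother segue.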

 Recalling or Foretelling Events

Example: A character begins to tell or think about the first time she saw her future husband
at a party as the music that was playing during that moment is heard. A soldier kisses his girl
good-bye as he goes off to war, but the background music indicates that he will not return.

 Evoking Atmosphere, Feeling, or Mood

It can evoke feelings that are obvious and easy to suggest, such as love, hate, and awe, and
also subtle feelings such as friendship, estrangement, pity, and kindness. Music can convey
the most obvious and the subtlest of moods: ecstasy, depression, melancholy, and
amiability.

 AUDIO FILTERS

A filter is a circuit capable of passing, boosting (amplifying), or attenuating (cutting) certain
frequencies. It can extract important frequencies from signals that also contain undesirable
or irrelevant frequencies. It filters out the noise or reduces the interference of the external
signals that could affect the quality or the performance of any communication system. 

For example: suppose you are recording a vox pop in a crowded market and the speaker's
voice is drowned out by the surrounding noise. With the help of a filter, the unwanted noise
frequencies can be attenuated so that the voice stands out clearly.

Filters are used to clean up a signal or shape the sound creatively.

There are four primary types of filters: 

1. Low pass filter

It passes frequencies that are lower than the cutoff, and progressively cuts the frequencies
above the cutoff.

Low-pass filters are widely used in audio applications to remove high-frequency noise such
as hiss. Once the unwanted high-frequency content has been filtered out, the resulting
signal sounds cleaner and clearer.

2. High pass filter

It is the opposite of a low pass filter. So in this case, frequencies below the cutoff are
removed while higher frequencies are preserved.

It is mainly used to remove rumble and any other noise below the lowest fundamental
frequency of a sound. It is also used to create tension before a drop, so there is more of an
impact when the low-end returns.
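The two filter types above can be sketched as minimal first-order digital filters in Python. This is an illustrative sketch rather than production code: the function names and the simple one-pole design are assumptions of this example, and real audio software uses steeper, higher-order filters.

```python
import math

def low_pass(samples, cutoff_hz, sample_rate_hz):
    """First-order low-pass: passes frequencies below cutoff_hz,
    progressively attenuating those above it."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)   # analog RC time constant
    dt = 1.0 / sample_rate_hz
    alpha = dt / (rc + dt)                 # smoothing factor between 0 and 1
    out = [samples[0]]
    for x in samples[1:]:
        # Each output sample moves only a fraction alpha towards the input,
        # so rapid (high-frequency) changes are smoothed away.
        out.append(out[-1] + alpha * (x - out[-1]))
    return out

def high_pass(samples, cutoff_hz, sample_rate_hz):
    """First-order high-pass: the opposite of low_pass -- it keeps rapid
    changes and removes slow drift and rumble below cutoff_hz."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate_hz
    alpha = rc / (rc + dt)
    out = [samples[0]]
    for i in range(1, len(samples)):
        out.append(alpha * (out[-1] + samples[i] - samples[i - 1]))
    return out

# Example: a 2 kHz tone sampled at 8 kHz is strongly attenuated by a
# low-pass filter with a 100 Hz cutoff, while a 20 Hz tone passes through.
fs = 8000
tone = [math.sin(2 * math.pi * 2000 * n / fs) for n in range(fs)]
filtered = low_pass(tone, cutoff_hz=100, sample_rate_hz=fs)
# max(abs(s) for s in filtered) is well below the input's peak of 1.0
```

The same pattern (pick a cutoff, derive a smoothing factor, walk through the samples) underlies the band-pass and notch filters discussed next, which combine a low-pass and a high-pass stage.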

3. Band pass filter

It is a combination of a low-pass and a high-pass filter. It progressively removes
frequencies both below and above the passband, passing only a narrow “band” of audio. 

To use a band-pass filter, you first select the bandwidth (say 600–820 Hz). Frequencies
within that range are passed (and may be boosted), while those outside it are attenuated. It
comes in very handy when you need to isolate only a select range of frequencies. 

Band-passed audio tends to sound brittle and tinny, which makes the effect useful for
imitating speakers with a limited range of frequencies, such as clock-radio speakers and
intercom systems. It can be applied to human speech to simulate an old telephone. A
band-pass filter can also enhance the warmth of a sound by isolating and boosting the
low-mids.

4. Notch filter

A band-stop filter, sometimes called a band-reject or notch filter, is the opposite of a band-
pass filter: it passes everything through except the band of frequencies around the cutoff.

This is useful for attenuating mic feedback in live settings, or for removing electrical hum,
without affecting the audio in any noticeable way.
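Building on the idea that a band-pass is a high-pass followed by a low-pass, and a notch is its complement, the last two filter types can be sketched in Python. This is a crude, assumption-laden illustration (the helper names and the first-order building blocks are mine; real equalizers use much sharper designs):

```python
import math

def _one_pole(samples, cutoff_hz, fs, mode):
    """Tiny first-order filter used as a building block.
    mode='low' keeps frequencies below cutoff_hz; mode='high' keeps those above."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / fs
    out = [samples[0]]
    if mode == "low":
        alpha = dt / (rc + dt)
        for x in samples[1:]:
            out.append(out[-1] + alpha * (x - out[-1]))
    else:
        alpha = rc / (rc + dt)
        for i in range(1, len(samples)):
            out.append(alpha * (out[-1] + samples[i] - samples[i - 1]))
    return out

def band_pass(samples, low_hz, high_hz, fs):
    """Cascade: high-pass at low_hz, then low-pass at high_hz,
    so only the band between the two cutoffs survives."""
    return _one_pole(_one_pole(samples, low_hz, fs, "high"), high_hz, fs, "low")

def notch(samples, low_hz, high_hz, fs):
    """Band-stop: subtract the band-passed signal from the original,
    attenuating the band (e.g. mains hum) while keeping the rest."""
    bp = band_pass(samples, low_hz, high_hz, fs)
    return [s - b for s, b in zip(samples, bp)]

# Example: notch out a 55 Hz hum while leaving higher frequencies intact.
fs = 8000
hum = [math.sin(2 * math.pi * 55 * n / fs) for n in range(fs)]
cleaned = notch(hum, 40, 70, fs)
# The 55 Hz component is substantially reduced; content well outside
# the 40-70 Hz band passes through largely unchanged.
```

With first-order stages the notch is shallow; in practice a dedicated biquad notch would cut the offending frequency far more precisely, but the structural idea is the same.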

 EVALUATION

The programme evaluation process is essentially about problem solving and creatively
seeking new ideas to meet the programme aim. It can be done on the following aspects:

 Production evaluation

Firstly, the programme should meet proper technical and operational standards. This
means there is no audible distortion, the intelligibility and logic are clear, the sound quality,
balance and levels are correct, the fades and other transitions are properly executed, the
pauses are not abrupt, and the edits are unnoticeable. 

Secondly, a statement of purpose should be formulated for every programme so that it has a
specific direction and aim. This involves identifying the target audience, how the programme
intends to serve that audience, and how well it goes about doing so.

Thirdly, a professional evaluation of content and format should be done. This assesses
whether the interviews are up to standard, whether the script is clear, how well the presenter
delivers it, what was unique about the programme, how it appealed to the target audience,
and what can be improved.

 Programme quality evaluation

It takes eight components into consideration:

1. Appropriateness: Does the programme meet the needs of the audience while
respecting its educational, social or cultural background? 
2. Creativity: Does the programme have the element of originality, innovation, and
creativity, combining the science and logic of communication with the art of delight
and surprise? This leaves a more lasting impression, differentiating the
memorable from the dull, bland or predictable.

3. Accuracy: Are the facts presented in the programme accurate and honest, and do
they give a balanced view, in the sense that they are fair to people with different views?

4. Eminence: A quality programme is likely to include talented and established
performers and writers who are eminent in their own sphere. Their presence gives
authority and stature to the programme.

5. Holistic: The programme should be not only logical but also emotionally engaging,
arousing feelings of awe, sadness, excitement – or even anger at injustice.

6. Technical innovation: Is the programme daring – either in its production methods or
in the way in which the audience is involved?

7. Personal enhancement: The programme should have some effect on the listener
such as to give pleasure, to increase knowledge, to provoke or to challenge.

8. Personal rapport: The listener intuitively appreciates a programme that is perceived
as well researched, meticulous, diverse and deep, or has personal impact – in short,
is distinctive. He then identifies not only with the programme and its people, but also
with the station. This way programmes earn a reciprocal benefit of loyalty and a sense
of ownership.

 Audience evaluation 

Before the production, audience research tells the broadcaster specific facts about the
audience size, their listening habits, reaction to a particular station or to individual
programmes, and their interests and preferences. This measurement and discovery of
audiences is important to producers, station managers, and advertisers or sponsors who buy
time slots on different stations. 

Constructing a properly representative sample of interviewees is a process requiring
extreme precision and care.

Several methods of measurement are used and, in each, people are selected at random
from a specific category to represent the target population. A correct sample covers all
demographic groups and categories in terms of age, gender, social or occupational status,
ethnic culture, language and lifestyle, etc.

After production, audience research tells whether the programme was successful in fulfilling
its aims. For instance, did the intended message reach the audience? What are their views
on the presenters and on competing stations? Did their channel preference shift towards
your station? Did they take the action the programme intended? One of the key questions
is always to find out why someone did not listen to your programme. 

To answer these questions, a survey is taken in which people may be interviewed face to
face or over the phone, respondents complete a listening diary, or a selected sample wears
a small personal meter that records which stations the wearer has been in the audible
presence of during his or her waking hours.

Another system involves wearing a watch that records and compresses four seconds of
sound during every minute that the watch is worn. At the end of a period (daily or weekly) the
data contained in the device is sent to a computer via a telephone line for analysis. 

Another method of research is through research panels scattered throughout the coverage
area, which give qualitative feedback on programmes by means of a questionnaire; over
time this may usefully indicate changes in listening patterns. Such panels are also
appropriate where the programme is designed for a specific minority, such as farmers,
hospital patients, or a particular ethnic or language group. 

Before large-scale use, any draft questionnaire should be tested with a pilot group to reveal
ambiguities or misunderstandings. 

In some cases, responses by letter may also be collected, although this method has many limitations.

 Cost evaluation

This involves formulating a budget covering all programme costs: staff salaries,
office overheads, studio time, transmission costs, copyrights, permissions, cost of research,
hiring of equipment, survey costs, etc.

Furthermore, if information about the size of the audience is available, the cost per listener
hour can be calculated by dividing the cost per hour by the number of listeners. This is an
important indicator, and it should be decided what cost per listener hour is acceptable for
each programme format.
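The cost-per-listener-hour calculation can be sketched as follows (the figures are purely hypothetical):

```python
def cost_per_listener_hour(cost_per_hour, listeners):
    """Cost per listener hour = programme cost per hour / audience size."""
    return cost_per_hour / listeners

# A programme costing 5,000 per hour that reaches 20,000 listeners costs
# 0.25 per listener hour; halving the audience doubles the figure.
print(cost_per_listener_hour(5000, 20000))   # 0.25
print(cost_per_listener_hour(5000, 10000))   # 0.5
```

The same arithmetic shows why a costly minority programme can still be justified: its cost per listener hour is high, but the value it delivers is measured against its purpose, not its audience size alone.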

Relatively cheap programmes which attract a substantial audience may or may not be what
a station wants to produce. It may also want to provide programmes that are more costly to
make and designed for a minority audience such as the disabled or for a specific educational
purpose. These will have a higher cost per listener hour, but will also give a channel its
public service credibility.

Thus, it is important for each programme to be true to its purpose – to achieve results in
those areas for which it is designed.
