Module 4 FL
FOCUS IN LEARNING
BEHAVIORIST PERSPECTIVE
BEHAVIORISM: PAVLOV, THORNDIKE, WATSON, SKINNER
In this module, challenge yourself to attain the following learning outcomes.
The theory of behaviorism focuses on the study of observable and measurable behavior. It emphasizes that behavior is mostly learned through conditioning and reinforcement (reward and punishment). It gives little attention to the mind or to the thought processes that may occur in it. Contributions to the development of the behaviorist theory came largely from PAVLOV, WATSON, THORNDIKE, AND SKINNER.
[Advance organizer: Behaviorism – Connectionism (Thorndike) and its primary laws: Law of Effect, Law of Exercise, Law of Readiness]
BEHAVIORISM
IVAN PAVLOV – a Russian physiologist, is well known for his work in classical conditioning or stimulus substitution. Pavlov's most renowned experiment involved meat, a dog, and a bell. Initially, Pavlov was measuring the dog's salivation in order to study digestion. This was when he stumbled upon classical conditioning.
PAVLOV’s Experiment. Before conditioning, ringing the bell (neutral stimulus) caused no
response from the dog. Placing food (unconditioned stimulus) in front of the dog initiated
salivation (unconditioned response). During conditioning, the bell was rung a few seconds
before the dog was presented with food. After conditioning, the ringing of the bell (conditioned
stimulus) alone produced salivation (conditioned response). This is classical conditioning.
Stage 1 – Before conditioning: bell (neutral stimulus) produces no response.
Stage 2 – During conditioning: bell (neutral stimulus) paired with meat (unconditioned stimulus) produces salivation (unconditioned response).
Stage 3 – After conditioning: bell (conditioned stimulus) alone produces salivation (conditioned response).
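To see the three stages as a simple process, here is a minimal toy sketch in Python. The Dog class, the learning rate, and the 0.5 salivation threshold are illustrative assumptions and are not part of Pavlov's actual experiment.

# Toy model of the three stages of classical conditioning.
# The class name, learning rate, and threshold are illustrative assumptions.

class Dog:
    def __init__(self):
        self.association = 0.0  # strength of the bell-food association (0 = none)

    def pair_bell_with_food(self, learning_rate=0.3):
        # During conditioning: each bell-food pairing strengthens the association.
        self.association += learning_rate * (1.0 - self.association)

    def responds_to_bell(self):
        # The dog salivates to the bell alone once the association is strong enough.
        return self.association > 0.5

dog = Dog()
print(dog.responds_to_bell())        # Stage 1 (before conditioning): False
for _ in range(5):                   # Stage 2 (during conditioning): repeated pairings
    dog.pair_bell_with_food()
print(dog.responds_to_bell())        # Stage 3 (after conditioning): True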
CLASSICAL CONDITIONING
Somehow you were conditioned to associate particular objects with your teacher. So, at present, when you encounter these objects, you are also reminded of your teacher. This is an example of classical conditioning.
PAVLOV also had the following findings:
Stimulus Generalization – once the dog has learned to salivate at the sound of the bell, it will
salivate at other similar sounds.
Extinction – if you stop pairing the bell with the food, salivation will eventually cease in response to the bell.
Spontaneous Recovery – extinguished responses can be recovered after time has elapsed, but will soon be extinguished again if the dog is not presented with the food.
Discrimination – the dog could learn to discriminate between similar bells (stimuli) and discern which bell would result in the presentation of food and which would not.
Higher Order Conditioning – once the dog has been conditioned to associate the bell with food, another neutral stimulus, such as a light, may be flashed at the same time that the bell is rung. Eventually, the dog will salivate at the flash of the light without the sound of the bell.
EDWARD L. THORNDIKE
Edward Thorndike's Connectionism theory gave us the original S-R framework of behavioral psychology. More than a hundred years ago he wrote a textbook entitled Educational Psychology; he was the first one to use this term. He explained that learning is the result of associations forming between stimuli (S) and responses (R). Such associations or habits become strengthened or weakened by the nature and frequency of the S-R pairings. The model for S-R theory was trial-and-error learning, in which certain responses came to be repeated more than others because of rewards. The main principle of connectionism (like all behavioral theory) was that learning could be adequately explained without considering any unobservable internal states.
THORNDIKE's theory of connectionism states that learning has taken place when a strong connection or bond between stimulus and response is formed. He came up with three primary laws.
LAW OF EFFECT – the law of effect states that a connection between a stimulus and a response is strengthened when the consequence is positive (reward) and weakened when the consequence is negative (punishment). Thorndike later revised this law when he found that negative rewards (punishment) do not necessarily weaken bonds, and that some seemingly pleasurable consequences do not necessarily motivate performance.
LAW OF EXERCISE – this tells us that the more an S-R (stimulus-response) bond is practiced, the stronger it will become. "Practice makes perfect" seems to be associated with this. However, like the law of effect, the law of exercise also had to be revised when Thorndike found that practice without feedback does not necessarily enhance performance.
LAW OF READINESS – this states that the more ready the learner is to respond to the stimulus, the stronger will be the bond between them. When a person is ready to respond to a stimulus and is not made to respond, it becomes annoying to the person. For example, if the teacher says, "Okay, we will now watch the movie (stimulus) you've been waiting for," and suddenly the power goes off, the students will feel frustrated because they were ready to respond to the stimulus but were prevented from doing so. Likewise, if a person is not at all ready to respond to a stimulus and is asked to respond, that also becomes annoying. For instance, the teacher calls on a student to stand up and recite, then asks the question and expects the student to respond right away when he is still not ready. This will be annoying to the student. That is why teachers should remember to ask the question first and wait for a few seconds before calling on anyone to answer.
PRINCIPLES DERIVED FROM THORNDIKE’s CONNECTIONISM
1. Learning requires both practice and rewards (laws of effect/exercise).
2. A series of S-R connections can be chained together if they belong to the same action sequence (law of readiness).
3. Transfer of learning occurs because of previously encountered situations.
4. Intelligence is a function of the number of connections learned.
JOHN B. WATSON
John B. Watson was the first American psychologist to work with Pavlov's ideas. He, too, was initially involved in animal studies, then later became involved in human behavior research.
He considered that humans are born with a few reflexes and the emotional reactions of love and rage. All other behavior is learned through stimulus-response associations through conditioning. He believed in the power of conditioning so much that he claimed that, if he were given a dozen healthy infants, he could make them into anything you wanted them to be, basically through making stimulus-response connections through conditioning.
Experiment on Albert
Watson applied classical conditioning in his experiment concerning Albert, a young child, and a white rat. In the beginning, Albert was not afraid of the rat, but Watson made a sudden loud noise each time Albert touched the rat. Because Albert was frightened by the loud noise, he soon became conditioned to fear and avoid the rat. Later, the child's response was generalized to other small animals; now he was also afraid of small animals. Watson then extinguished, or made the child unlearn, the fear by showing the rat without the loud noise.
Surely, Watson's research methods would be questioned today; nevertheless, his work did clearly show the role of conditioning in the development of emotional responses to certain stimuli. This may help us understand the fears, phobias, and prejudices that people develop.
BURRHUS FREDERIC SKINNER
He believed in the stimulus-response pattern of conditioned behavior. His theory zeroed in only on changes in observable behavior, excluding any likelihood of processes taking place in the mind.
Skinner's work differs from that of the three behaviorists before him in that he studied operant behavior. Thus, his theory came to be known as OPERANT CONDITIONING.
OPERANT CONDITIONING is based upon the notion that learning is the result of change in overt behavior. Changes in behavior are the result of an individual's response to events (stimuli) that occur in the environment. A response produces a consequence, such as defining a word, hitting a ball, or solving a math problem. When a particular stimulus-response (S-R) pattern is reinforced (rewarded), the individual is conditioned to respond.
REINFORCEMENT is the key element in SKINNER's S-R theory. A reinforcer is anything that strengthens the desired response. There are positive reinforcers and negative reinforcers.
A POSITIVE REINFORCER is any stimulus that is given or added to increase the response. An example of positive reinforcement is when a teacher promises extra time in the play area to children who behave well during the lesson. Another is a mother who promises a new cell phone to her son if he gets good grades. Other examples are verbal praise, star stamps, and stickers.
A NEGATIVE REINFORCER is any stimulus that results in the increased frequency of a response when it is withdrawn or removed. A negative reinforcer is not a punishment; in fact, it serves as a reward. For instance, a teacher announces that a student who gets an average grade of 1.5 for the two grading periods will no longer take the final examinations. The negative reinforcer is the final exam, and removing it, we realize, is a form of reward for working hard and getting an average grade of 1.5.
A NEGATIVE REINFORCER is different from a punishment because a punishment is a consequence intended to result in reduced responses. An example would be a student who always comes late not being allowed to join group work that has already begun (punishment) and, therefore, losing points for that activity. The punishment was given to reduce the response of repeatedly coming to class late.
SKINNER also looked into extinction or non-reinforcement: responses that are not reinforced are not likely to be repeated. For example, ignoring a student's misbehavior may extinguish that behavior.
Shaping of Behavior – an animal in a cage may take a very long time to figure out that pressing a lever will produce food. To accomplish such behavior, successive approximations of the behavior are rewarded until the animal learns the association between the lever and the food reward.
Behavioral chaining – comes about when a series of steps needs to be learned. The animal masters each step in sequence until the entire sequence is learned.
Reinforcement schedules – once the desired behavioral response is accomplished, reinforcement does not have to be 100%; in fact, it can be maintained more successfully through what Skinner referred to as partial reinforcement schedules.
Fixed interval schedules – the target response is reinforced after a fixed amount of time has passed since the last reinforcement. Example: a bird in a cage is given food (reinforcer) every 10 minutes, regardless of how many times it presses the bar.
Variable interval schedules – this is similar to fixed interval schedules, but the amount of time that must pass between reinforcements varies. Example: the bird may receive the food (reinforcer) at different intervals, not every ten minutes.
Fixed ratio schedules – a fixed number of correct responses must occur before reinforcement may recur. Example: the bird will be given food (reinforcer) every time it presses the bar 5 times.
Variable ratio schedules – the number of correct responses required for reinforcement varies. Example: the bird is given food (reinforcer) after it presses the bar 3 times, then after 10 times, then after 4 times, so the bird will not be able to predict how many times it needs to press the bar before it gets food again.
Variable interval and, especially, variable ratio schedules produce steadier and more persistent rates of response because the learners cannot predict when the reinforcement will come, although they know that they will eventually succeed. An example of this is why people continue to buy lotto tickets even when an almost negligible percentage of people actually win. It is true that there is very rarely a big winner, but once in a while somebody hits the jackpot (reinforcement). People cannot predict when the jackpot can be won (variable interval), so they continue to buy tickets (repetition of response).
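To make the contrast between ratio schedules concrete, here is a minimal simulation sketch in Python. The press count, the ratio of five, and the function names are illustrative assumptions, not anything taken from Skinner's procedure.

import random

# Toy simulation of two partial reinforcement schedules.

def fixed_ratio(presses, ratio=5):
    # Reinforce after every `ratio`-th bar press.
    return sum(1 for p in range(1, presses + 1) if p % ratio == 0)

def variable_ratio(presses, mean_ratio=5):
    # Reinforce each press with probability 1/mean_ratio,
    # so the number of presses per reward varies unpredictably.
    return sum(1 for _ in range(presses) if random.random() < 1 / mean_ratio)

random.seed(0)
presses = 100
print("Fixed ratio (every 5th press):", fixed_ratio(presses), "rewards")
print("Variable ratio (about 1 in 5):", variable_ratio(presses), "rewards")

Running the sketch shows the fixed ratio schedule delivering exactly one reward per five presses, while the variable ratio count fluctuates from run to run, which mirrors why the learner cannot predict when reinforcement will come.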
Implications of operant conditioning. These implications are given for programmed instruction (a simple sketch of such a question-answer frame follows the list).
1. Practice should take the form of question (stimulus) – answer (response) frames which expose the student to the subject in gradual steps.
2. Require that the learner make a response to every frame and receive immediate feedback.
3. Try to arrange the difficulty of the questions so that the response is always correct and, hence, a positive reinforcement.
4. Ensure that good performance in the lesson is paired with secondary reinforcers such as verbal praise, prizes, and good grades.
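As a rough illustration of points 1 and 2, here is a minimal question-answer frame loop in Python. The questions, the praise messages, and the function name run_frames are made-up examples and not part of the module.

# A minimal sketch of a programmed-instruction "frame": a question (stimulus),
# a required learner response, and immediate feedback (reinforcement).

frames = [
    ("Who is known for classical conditioning?", "pavlov"),
    ("Whose theory is known as operant conditioning?", "skinner"),
]

def run_frames(frames):
    for question, correct in frames:
        answer = input(question + " ")          # learner must respond to every frame
        if answer.strip().lower() == correct:   # immediate feedback follows the response
            print("Correct! Well done.")        # verbal praise as a secondary reinforcer
        else:
            print(f"Not quite. The answer is {correct.title()}.")

if __name__ == "__main__":
    run_frames(frames)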