
Module 3

Sensation & Perception


Unit 1: Theories of perception:
Gestalt approach
The Gestalt approach to perception focuses on how we perceive images as
organized wholes rather than just a collection of individual elements.
Gestalt psychologists, such as Max Wertheimer, Kurt Koffka, and Wolfgang Köhler,
focused on understanding how people perceive entire objects rather than their
isolated features.
The phrase "The whole is greater than the sum of its parts" encapsulates the core idea
of Gestalt theory, which highlights that perceptual organization plays a powerful role
in how we interpret visual stimuli.
In the 1920s, they studied how we perceive patterns and objects as wholes rather
than as a collection of small sensations, which was the earlier structuralist view.
This is the process of organizing elements of the environment into recognizable
objects. Once we perceive something (like a Dalmatian in a picture), it becomes hard
to unsee it.
 Perception and Grouping: Perception helps us organize and make sense of
confusing stimuli by grouping similar elements, reducing the number of things to
process and identifying which objects belong together.
 Perceptual Organization According to Gestalt theory, the way elements are
arranged within an image influences how we perceive it. For instance, we may see
a circle or cross even if the shapes are incomplete (due to principles like closure
and good continuation).
 Gestalt Principles of Perceptual Organization: These are the rules or
tendencies that guide how we group visual elements into whole objects. Five major
principles include:
Proximity: Objects that are close together are perceived as a group (e.g., dots
forming rows because they are closer horizontally than vertically).
Similarity: Objects that are similar in appearance are grouped together (e.g., columns formed by elements of the same color or shape).
Good Continuation: We prefer to perceive continuous, smooth lines rather than abrupt changes in direction (e.g., intersecting lines are seen as continuous rather than separate objects).
Closure: We tend to complete incomplete figures, filling in gaps to see a closed shape
(e.g., mentally completing a rectangle with missing parts).
Common Fate: Objects moving together are perceived as part of the same group, a
principle commonly seen in motion-based grouping.
 Law of Prägnanz: This overarching principle states that of all possible ways of
interpreting a visual display, we select the one that results in the simplest and
most stable form. For instance, in the case of illusory contours, we are more likely
to interpret a shape as a simple geometric figure (like a triangle) because it is
easier for our brain to process.
 Figure-Ground Example: A familiar room might highlight certain objects (like
faces in photos), while others (like walls) fade into the background. An example of
figure-ground perception can be the classic white vase/black faces image.
 Use of Gestalt in Novel Stimuli: Research shows people use Gestalt principles
even for unfamiliar stimuli, recognizing fragments of geometric shapes faster if
they follow Gestalt laws like closure (e.g., a triangle).
 Gestalt and Human Perception: The Gestalt principles are simple but
fundamental to how we organize perceptions. Even infants use these principles
(e.g., proximity), but they may be unique to humans, as baboons don't experience
some visual illusions (e.g., Ebbinghaus illusion) like humans do.
 Top-Down Processing: Knowledge influences perception. Once we perceive a
certain pattern (e.g., faces in rocks), it becomes hard to see it differently. This
involves perceptual organization based on prior knowledge.
 Gestalt Laws as Heuristics: These laws are not always accurate and can
sometimes lead to incorrect perceptions, but they are fast and efficient, working
most of the time (e.g., mistaking tree stumps for an animal in the woods).
 Perception and Intelligence: Perception is considered "intelligent" as it involves
both top-down knowledge and the automatic application of Gestalt principles,
which are tuned to our environment. Grouping objects based on similarity or
proximity, though it may seem simple, reflects our perceptual system’s adaptation
to environmental patterns
 Limitations of Gestalt Principles: While Gestalt principles describe how we
perceive forms and patterns, they don't provide explanations for why or how these
processes occur. For deeper understanding, explanatory theories of perception are
needed.

Modern Perspectives on Gestalt Principles:


While the Gestalt principles remain influential, modern cognitive psychology has
continued to explore these ideas, including their application in infant perception
studies and attempts to formalize the law of Prägnanz (e.g., through minimal model
theory). Despite their intuitive appeal, these principles don't fully explain the cognitive
and physiological processes behind perception, leaving researchers with ongoing
questions about how exactly these principles operate in the brain.

Gibson affordance theory


Gibson’s direct perception theory is a unique approach to understanding how we
perceive our environment. While it is classified as a bottom-up theory, it diverges
from traditional views by emphasizing the active role of individuals in interacting with
their surroundings.
1. Ecological Approach: Gibson labeled his theory as an ecological approach,
emphasizing the interaction between individuals and their environments.
Role of Movement: Unlike passive reception of sensory information, perception
involves active movement through the environment.
2. Optic Array: The pattern of light that reaches the eye forms an optic array,
which is structured and contains all relevant visual information from the
environment.
Invariant Information: This optic array provides unambiguous information about the
spatial layout of objects. It includes:

 Texture Gradients: Variations in texture density can inform depth perception.
 Horizon Ratio: The ratio of an object's height to the distance between its base and the horizon is invariant and aids in size perception.
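To see why this horizon ratio is invariant (a standard result in ecological optics, added here as a hedged illustration rather than something stated in formula form in these notes): for an object of physical height $h$ standing on the ground plane, viewed by an observer whose eyes are at height $e$, the horizon optically cuts every such object at the observer's eye height, so

$$\frac{\text{projected height of object}}{\text{base-to-horizon extent}} = \frac{h}{e},$$

a quantity that stays the same however far away the object is, even though its retinal size shrinks with distance.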

 Optic Flow Patterns: Movement through an environment generates optic flow patterns, which convey information about direction, speed, and distance.
Optic Flow and Pilot Training: When approaching a landing strip, the point of focus appears stationary while the rest of the environment moves, providing crucial information for navigation.

 Affordances: The potential uses of objects. For example, a chair “affords” sitting, while a ladder “affords” climbing.
The specific affordance perceived may vary based on an individual's current needs or psychological state (e.g., hunger, anger).
3. Resonance: Process of Perception: Gibson likens perception to tuning a
radio. Just as a radio resonates with electromagnetic signals when tuned
correctly, humans resonate with the information present in the environment.
Holistic Functioning: The nervous system operates holistically, allowing for
automatic perception of information.

Evaluation of Gibson's Approach


1. Philosophical Impact: Gibson’s theory emphasizes the inseparable relationship
between organisms and their environments, reinforcing the importance of context
in perception.
2. Richness of Visual Stimuli: He correctly identified that traditional laboratory
studies often underestimated the complexity of real-world stimuli, which provide
more information than static displays.
3. Visual Illusions: Gibson acknowledged that many visual illusions arise from
normal perception, and not merely from unusual stimuli designed to confuse.
4. Challenges to Gibson's Theory:
o Complex Processes: Critics argue that the processes involved in identifying
invariants and affordances are more complicated than Gibson suggested.
Marr (1982) criticized Gibson for not recognizing the complexity of detecting
physical invariants.
o Seeing vs. Seeing As: Gibson’s theory may not adequately address the
distinction between simply perceiving an object and interpreting its meaning
based on prior knowledge.
o Internal Representations: Critics point out that internal representations
(such as memories) play a crucial role in perception, as evidenced by studies
with chimpanzees navigating their environment efficiently by recalling past
experiences.
Gibson's direct perception theory significantly contributes to our understanding of how
we perceive the world around us, emphasizing the richness of sensory information and
the importance of active engagement with our environment.

Marr & Nishihara- computational approach


Marr and Nishihara (1978) made significant contributions to our understanding of
object recognition through their computational approach, emphasizing the necessity
of 3-D model representations. Their work arose from the realization that traditional 2-
D sketches are limited in their ability to identify objects because they are viewpoint-
centered. This means that the representation of an object can vary drastically
depending on the angle from which it is viewed, complicating the recognition process.
To address these challenges, Marr and Nishihara proposed a 3-D model
representation, which contains viewpoint-invariant information, ensuring that the
representation remains consistent regardless of the viewing angle.
Marr and Nishihara identified three essential criteria for an effective 3-D
representation:
1. Accessibility: The representation must be easily constructed from visual
stimuli, allowing for straightforward processing and recognition.
2. Scope and Uniqueness:
o Scope refers to the applicability of the representation across various
shapes within a given category.
o Uniqueness ensures that different views of an object yield the same
standard representation.
3. Stability and Sensitivity:
o Stability incorporates similarities among objects, ensuring that common
features are captured.
o Sensitivity reflects the representation’s ability to recognize salient
differences between objects.

Hierarchical Organization and Primitive Units


Marr and Nishihara proposed that the primitive units for describing objects should be
cylindrical structures with a major axis. This axis-based approach was adopted
because the main axes of an object are often easier to identify regardless of the
viewing position. In contrast, precise shape characteristics are more variable and less
reliable for recognition.
The hierarchical organization of these primitive units allows for a structured
representation of complex objects. High-level units provide information about the
overall shape, while low-level units offer more detailed descriptions. For example, the
human form can be decomposed into a series of cylinders at various levels of
generality, allowing for recognition of human figures regardless of the viewing angle.

Object Recognition Process


According to Marr and Nishihara, the process of object recognition involves matching
the constructed 3-D model representation against a catalogue of stored
representations in memory. To achieve this, it is necessary to identify the major axes
of the visual stimulus. They proposed that concavities—areas where the contour
points inward—are identified first. In the human form, these concavities appear in
areas like the armpits, which help segment the visual image into recognizable parts
(e.g., arms, legs, torso, and head). Once these segments are identified, the main axis
of each can be established.

Advantages of Concavity and Axis-Based Representation


The emphasis on concavities and axis-based representations offers several
advantages for object recognition:
1. Role of Concavities: Identifying concavities is crucial for recognizing complex
shapes.
For example, the distinction between faces and goblets can be clarified by recognizing
concave features, which help delineate facial features or parts of the goblet.
2. Invariance Across Angles: The lengths and arrangements of axes can be
computed for most visual objects, regardless of the angle from which they are
viewed. This aids in maintaining a stable representation during recognition.
3. Comparative Distinction: Axis information facilitates comparisons between
different objects. For instance, the relative lengths of body segments can help
distinguish between humans and other primates.

Gregory- inferential theory


Gregory (1970, 1980) suggests many visual illusions arise from misapplying
knowledge from three-dimensional perception to two-dimensional figures.
Size constancy: Objects are perceived as having constant size, regardless of distance,
contrasting with the decreasing size of the retinal image as objects recede.
Ponzo Illusion:
 Long lines appear to represent railway lines or roads receding
into the distance.
 The top line is perceived as further away, leading to the
conclusion that it must be larger, even though rectangles A
and B are the same size in the retinal image.
Müller-Lyer Illusion:
 Vertical lines are perceived to differ in
length due to their presentation (inside vs.
outside corners of a room).
 Assumption: The vertical line in the left
figure appears longer because it is perceived as further away.
Depth Cues and Perception:
 Gregory argues depth cues are automatically used, making two-dimensional figures
appear three-dimensional under certain conditions.
 Evidence: Depth cues in photographs create stronger illusions than in drawings

Criticism of Gregory’s Theory:


 Claims about universal perception of Müller-Lyer figures as three-dimensional are
disputed.
 The illusion persists when fins are replaced with different attachments, supporting
the incorrect comparison theory (Matlin and Foley, 1997).
 Coren and Girgus (1972) found that changing the color of the fins reduced the
illusion, suggesting a reliance on visual context.

Experimental Evidence:
 DeLucia and Hochberg (1991) showed the Müller-Lyer effect in three-dimensional
displays where all fins were at the same distance, challenging the misapplied size-
constancy explanation.
 Action-based studies (e.g., Gentilucci et al., 1996; Aglioti et al., 1995) found
reduced effects of visual illusions during physical actions like pointing or gripping.

Evaluation of the Constructivist Approach


1. Strengths: The constructivist approach reveals various interesting perceptual
phenomena, suggesting processes similar to those proposed by constructivist
theorists are at play.
2. Critiques: Accuracy of Perception: The approach predicts frequent errors in
perception, yet perceptions are typically accurate, indicating that the
environment offers more information than assumed.
o Experimental Stimuli: Many studies utilize artificial stimuli, and brief
presentations can enhance top-down processing over bottom-up
processes.
o Hypothesis Formation: It’s unclear what hypotheses observers form,
leading to potential misinterpretations of stimuli.
o General Theory Limitations: Gregory’s misapplied size-constancy
theory fails to adequately explain most visual illusions, which may depend
on a range of factors rather than a single general theory.

Neisser-Schema Theory.
Developed by Ulric Neisser in 1976.
Also known as the Interactive Theory of Perception.
Proposes that perception is a cyclical process involving the interaction between
internal schemata (mental templates) and environmental stimuli.

Perceptual Cycle Model (PCM):


 Cyclical Process: Perception involves a continuous feedback loop between top-
down processing (influenced by prior knowledge) and bottom-up processing
(driven by sensory input).
 Neisser argues that to function effectively, we cannot rely solely on either data-
driven or theory-driven processes.
 The cycle consists of three processes:
Modification of Schemata: Internal representations are adjusted based on new
information.
Modification of Attention: Focus and attention are directed by expectations and
context.
Modification of Available Information: The sensory information received is shaped
by expectations and prior knowledge.

Perceptual Exploration:
 Expectations guide our exploration of the perceptual field.
 Sensory organs and locomotive actions facilitate the gathering of relevant
information while filtering out the irrelevant.
 The gathered information can modify existing schemata.

Ways to Modify Schemata:


Corrective Aspect:
 This involves using information from the environment to correct inaccuracies in
anticipatory schemata.
 When sensory data contradicts expectations, the schema is either modified or
replaced.
Elaborative Aspect:
 Refers to the enhancement and depth of schemata through experience and
interaction with the environment.
 More coherent and detailed schemata are developed with increased use and
exposure.

Ecological Validity:
o Neisser's model emphasizes the importance of context and ecological factors
in perception, ensuring that the theory aligns with real-world experiences.

Unit 2: Theories of pattern recognition:


Biederman-Geon theory
Irving Biederman (1987) proposed a theory of object perception that uses a type of featural analysis that is also consistent with some of the Gestalt principles of perceptual organization.
This theory is also known as the Recognition-by-Components (RBC) theory.
The recognition-by-components theory explains our ability to perceive 3-D objects with the help of simple geometric shapes. It proposes that all complex forms are composed of geons.
Geon theory, as espoused by Biederman, proposes that the recognition of an object, such as a telephone, a suitcase, or even more complex forms, consists of recognition by components (RBC), in which complex forms are broken down into simple forms.
Biederman proposed that when people view objects, they segment them into simple geometric components, called geons. He proposed a total of 36 such primitive components (visual primitives). Arcs, wedges, spheres, blocks and cylinders are all examples of geons.
He believed we can construct mental representations of a very large set of common objects by dividing the whole into its parts, or geons.
We pay attention not just to which geons are present but also to the arrangement of geons. The geons can also be recomposed into alternative arrangements.
The geons are simple and viewpoint-invariant (i.e., identifiable from various viewpoints). One test of geon theory developed by Biederman is the use of degraded forms. The recognition of components is carried out in two stages, as elaborated below:
 Edge extraction: In this stage, core information is extracted from the retinal image of the object.
 Encoding non-accidental features: Here, the detected geons are matched with memory to form a meaningful perception.

1) The first step is described by Biederman as edge extraction: an early stage that responds to differences in surface characteristics (luminance, texture, or colour) and provides a line-drawing description of the object.
2) Next it must be decided how the object should be segmented so that the number of parts or components (geons) can be found. Biederman (1987) agreed with Marr and Nishihara (1978) that when segmenting a visual image into geons, the concave parts of the object's contour are of specific importance.

3) The last vital element of the theory is deciding which edge information from an object possesses the necessary characteristic of remaining invariant across different viewing angles.
Biederman (1987) states five invariant (non-accidental) properties of edges: points on a curve (Curvature), sets of points in parallel (Parallelism), edges terminating at a common point (Co-Termination), points in a straight line (Co-Linearity), and Symmetry.
The theory continues to explain that a visual object's geons are formed from these invariant properties: for example, a book is constructed of three parallel edges and no curved edges, but a cup has two parallel edges connecting the curved edges.
EXPERIMENT:
Kirkpatrick-Steger, Wasserman, and Biederman (1998) examined whether geons are
important to pigeons in recognizing line drawings.
Pigeons were first trained to discriminate among four drawings of a watering can, an
iron, a desk lamp, and a sailboat using the four-key choice procedure. Each of the
training stimuli contained four geons. Once the pigeons had attained a high level of
accuracy on the training procedure, they were tested with versions of the objects in
which one or three components were deleted or a single component was moved away
from the other three.
All four pigeons continued to discriminate the original drawings at a high level of
accuracy in these tests. When a single geon was either moved or deleted, there was
no apparent effect on recognition accuracy. In contrast, deleting three geons produced
a significant disruption in accuracy scores, but performance was still above the
chance level of 25%. The pattern of results is consistent with studies of recognition of
partial objects in humans, where the principle of three-geon sufficiency was observed.

LIMITATIONS:
Does not adequately explain how we recognize particular features.
Fails to explain the effects of prior expectations and environmental context on some
phenomena of pattern perception.
Not all perception researchers accept the notion of geons as fundamental units of
object perception (Tarr and Bulthoff, 1995).

Neisser-View based approach


 The view-based approach to object or pattern recognition by Ulric Neisser (1967) holds that objects are recognized holistically through comparison with a stored analog (an internal copy built from past experience).
 These are viewer dependent approaches to perception.
 One major View based approach is the template matching theory.

Template matching theory:


 External pattern matched to a stored internal representation
 A reasonable first step to approaching such a task is to define a measure or a cost
measuring the “distance” or “similarity” between the (known) reference patterns
and the (unknown) test pattern, in order to perform the matching operation known
as template matching
 The incoming sensory information is compared directly to copies (templates) stored
in the long-term memory.
 These copies are stored from our past experiences and learning
 We recognize a pattern by comparing it with our set of templates. We then choose
the exact template that perfectly matches what we observe (Selfridge & Neisser,
1960).
 It holds that a great number of templates have been created by our life experience,
each template being associated with a meaning.
 In other words, the process of perception thus involves comparing incoming
information to the templates we have stored, and looking for a match.
 These approaches are computational in nature. Template matching has been performed at two levels:
1) Pixel Level Template Matching
2) Higher Level Template Matching
1. Pixel Level Template Matching: Consists of 4 types
Total Templates: Template is the same size as the input image. There is no rotation
or translation invariance.

Partial Templates: The template is free from background.
 Multiple matches are allowed.
 Partial matches may also be allowed.
 Care must be taken, since a partial “F” template could easily be matched to an “E”.

Piece Templates: Templates that match one feature of a figure.


These templates break a pattern into its component segments for example, “A” can
be broken down into “/”, “\” and “-”
The order in which templates are compared to the scene is important: the largest
templates must be first, since they contain the most information and may subsume
smaller templates.

Flexible Templates: These templates can handle stretching, misorientation, and other possible deviations.
A good prototype of a known object is first obtained and represented parametrically.
A general problem with pixel-based matching is its sensitivity to rotation and translation; in addition, real images are rarely perfect, suffering from blurring, stretching, and other distortions.
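The pixel-level idea described above can be shown with a minimal sketch (not part of the original notes; the function name and toy image are assumptions made for illustration). It implements the "distance" measure mentioned earlier as a sum of squared differences computed at every position of a sliding window; as noted above, this simplest form has no rotation, scale, or flexibility handling.

```python
import numpy as np

def match_template_ssd(image, template):
    """Minimal pixel-level template matching.

    Slides the template over the image and scores each position with the
    sum of squared differences (SSD); the lowest score is the best match.
    No rotation, scale, or deformation invariance is provided.
    """
    ih, iw = image.shape
    th, tw = template.shape
    best_score, best_pos = np.inf, None
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            window = image[y:y + th, x:x + tw]
            score = np.sum((window - template) ** 2)  # "distance" between patterns
            if score < best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score

# Toy usage: find a bright 2x2 block inside a small grayscale image.
image = np.zeros((6, 6)); image[3:5, 2:4] = 1.0
template = np.ones((2, 2))
print(match_template_ssd(image, template))  # -> ((3, 2), 0.0)
```

Swapping SSD for normalized cross-correlation is a common way to make the score tolerant of overall brightness changes; the pyramid and edge-based refinements discussed below address the cost of scanning large images.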
2. High Level Template Matching:
 Operates on an image that has typically been segmented into regions of interest.
 Regions can be described in terms of area, average intensity, rate of change of intensity, and curvature, and can also be compared (bigger than, adjacent to, above, distance between).
 Templates are described in relationship between regions. Production rules and
other linguistic representation have been used. Also, statistical methods
(relaxation-based techniques) have been applied to perform the matching.
 Consists of 2 types:

Feature – based Matching:


When the template image has strong features, a feature-based approach may be considered; the approach may prove especially useful if the match in the search image might be transformed in some fashion.
Since the approach does not consider the entirety of the template image, it can be more computationally efficient when working with source images of larger resolution.

Template – based Matching:


For templates without strong features, or when the bulk of the template image constitutes the matching image, a template-based approach may be effective.
Since template-based matching may require sampling a large number of points, it is possible to reduce the number of sampling points by reducing the resolution of the search and template images by the same factor and performing the operation on the resultant downsized images (multi-resolution, or pyramid, image processing).
Consists of 4 types:
Image Pyramid: A series of images, each the result of downsampling (scaling down, by a factor of two in this case) the previous element.
Pyramid Processing: At each level of the pyramid we need an appropriately downsampled picture of the reference image, i.e., both the input image pyramid and the template image pyramid should be computed.
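A minimal sketch of how such a pyramid can be built (an illustrative assumption, not code from the notes): each level is produced by averaging 2x2 blocks of the level below, which is one common way to downsample by a factor of two.

```python
import numpy as np

def build_pyramid(image, levels):
    """Build a simple image pyramid by repeatedly downsampling by a factor of two.

    Each level is produced by averaging 2x2 blocks of the previous level,
    so level 0 is the full-resolution image and higher levels are coarser.
    """
    pyramid = [image.astype(float)]
    for _ in range(levels - 1):
        prev = pyramid[-1]
        h, w = (prev.shape[0] // 2) * 2, (prev.shape[1] // 2) * 2  # trim odd edges
        blocks = prev[:h, :w].reshape(h // 2, 2, w // 2, 2)
        pyramid.append(blocks.mean(axis=(1, 3)))  # 2x2 average = downsample
    return pyramid

# Usage: pyramids are built for both the input image and the template;
# the coarsest level is searched first to narrow down candidate positions.
img = np.random.rand(64, 64)
for level, p in enumerate(build_pyramid(img, 4)):
    print(level, p.shape)   # (64, 64), (32, 32), (16, 16), (8, 8)
```

Searching the coarsest level first and refining only the promising positions at finer levels is what makes the pyramid search cheaper than matching at full resolution everywhere.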
Grayscale based Matching:
Although in some applications the orientation of the object is uniform and fixed, it is often the case that objects to be detected appear rotated.
In template matching algorithms the classic pyramid search is adapted to allow multi-angle matching, i.e., identification of rotated instances of the template.
This is achieved by computing not just the template image pyramid, but a set of pyramids – one for each possible rotation of the template. During the pyramid search on the input image the algorithm identifies (template orientation, template position) pairs rather than template positions alone. Similarly to the original scheme, on each level of the search the algorithm verifies only those (position, orientation) pairs that scored well on the previous level (i.e., seemed to match the template in the image of lower resolution).
The technique of pyramid matching together with multi-angle search constitutes the Grayscale-based Template Matching method.
Edge – based Matching:
Edge-based Matching enhances the previously discussed Grayscale-based Matching using a crucial observation – that the shape of any object is defined mainly by the shape of its edges.
Therefore, instead of matching the whole template, we can extract its edges and match only the nearby pixels, thus avoiding some unnecessary computations. In common applications the achieved speed-up is usually significant.
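A minimal sketch of the edge-based idea (illustrative only; the function names, the gradient-threshold edge detector, and the toy template are assumptions, not the method of any specific library): edges are extracted from the template once, and each candidate patch is then compared only at those edge positions.

```python
import numpy as np

def edge_map(image, threshold=0.2):
    """Crude edge extraction: gradient magnitude from finite differences."""
    gy, gx = np.gradient(image.astype(float))
    return np.hypot(gx, gy) > threshold

def edge_based_score(image_patch, template, template_edges):
    """Compare a candidate patch to the template only at the template's edge pixels,
    skipping the uniform interior and so avoiding some unnecessary computation."""
    diff = (image_patch - template)[template_edges]
    return np.mean(diff ** 2)

# Sketch of use inside a sliding-window search (candidate positions assumed found elsewhere):
template = np.zeros((5, 5)); template[1:4, 1:4] = 1.0     # a small bright square
template_edges = edge_map(template)
patch = template.copy()                                   # a perfectly matching patch
print(edge_based_score(patch, template, template_edges))  # -> 0.0
```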

Applications:
 Object Recognition
 Biological Area
 Eye detection in facial image

Merits:
Estimates become quite good with enough data.
Template matching is the most effective technique for pattern-recognition machines that read numbers and letters presented in standardized, constrained contexts (for example, machines that read the account numbers printed on bank cheques or the postal zip codes on envelopes).

Demerits:
Slight changes in size or orientation can cause problems.
It often requires several templates to represent one object, and these templates may be of different sizes.
Template matching requires high computational power, because the detection of large patterns in an image is time consuming.

Selfridge--pandemonium model
 Pandemonium is a model of bottom-up pattern recognition.
 It has applications in artificial intelligence and pattern recognition.
 The theory was developed by the artificial intelligence pioneer Oliver Selfridge in
1959. This model is now recognized as the basis of visual perception in cognitive
science.
 Pandemonium architecture arose in response to the inability of template matching theories to offer a biologically plausible explanation of the image constancy phenomenon.
 The basic idea of the pandemonium architecture is that a pattern is first perceived in its parts before the "whole".

Four Kinds of Demons:


In the Pandemonium model, there are four kinds of demons:
 Image demons
 Feature demons
 Cognitive demons
 Decision demons.
Image Demons:
The first kind of demons are image demons, which convert the proximal stimulus into
representations, or internal depictions of information, that higher-level demons can
assess.
Feature Demons:
Each representation is scanned by several “feature demons”, each looking for a
different particular feature (such as a curved or a vertical line). If a demon finds such
a feature, that demon screams.
Feature demons communicate the level of confidence that the feature is present by
screaming more softly or loudly
Cognitive Demons:
Cognitive (Letter) demons cannot look at the stimulus itself but can only listen to the
feature demons.
The Cognitive (letter) demons pay particular attention to the demons associated with
their particular letter.
Cognitive (Letter) demons scream when the output from the feature demons
convinces them their letter is in the representation—again, more loudly or softly,
depending on the level of confidence.
Decision Demons:
A single decision demon listens to all this screaming and decides what letter is being
presented.

Features Of Pandemonium Model:


The Pandemonium model, named with a sense of humor, illustrates a number of
important aspects of featural analysis.
Demons can scream more loudly or softly, depending on the clarity and quality of the
input. This allows for the fact that real-life stimuli are often degraded or incomplete,
yet objects and patterns can still be recognized.
Feature demons can be linked to letter demons in such a way that more important
features carry greater weight. This takes into account that some features matter more
than others in pattern recognition. Take the case of the letter A. In writing this letter,
some people are sloppy about their slanted vertical lines (sometimes the lines are
almost parallel), yet the A is still often recognizable. Without the horizontal line,
however, the pattern seems to stop being an A. In the Pandemonium model, then, the
letter demon for A would be more tightly connected to the horizontal-line feature
demon than it would be to the slanted-line demons.
Last, the weights of the various features can be changed over time, allowing for
learning. Thus a demon model could learn to recognize my mother’s handwritten A’s,
even though she makes her capital A’s with no slanted lines, only two very curved
lines.
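To make the weighting idea concrete, below is a minimal sketch of feature demons feeding weighted letter demons and a single decision demon (the feature names and weight values are hypothetical, chosen to mirror the letter-A example above; they are not Selfridge's actual parameters):

```python
# Hypothetical feature weights: which features each letter demon listens for,
# and how strongly (the horizontal bar matters most for 'A', as in the example above).
LETTER_WEIGHTS = {
    "A": {"slanted_left": 0.5, "slanted_right": 0.5, "horizontal": 2.0},
    "H": {"vertical_left": 1.0, "vertical_right": 1.0, "horizontal": 1.5},
}

def pandemonium_decide(feature_shouts):
    """Decision demon: pick the letter whose (weighted) cognitive demon shouts loudest.

    feature_shouts maps feature names to confidences in [0, 1], i.e., how loudly
    each feature demon is screaming for its feature.
    """
    letter_shouts = {
        letter: sum(weight * feature_shouts.get(feature, 0.0)
                    for feature, weight in weights.items())
        for letter, weights in LETTER_WEIGHTS.items()
    }
    return max(letter_shouts, key=letter_shouts.get), letter_shouts

# A degraded 'A' with sloppy, nearly parallel slants but a clear horizontal bar:
print(pandemonium_decide({"slanted_left": 0.4, "slanted_right": 0.3, "horizontal": 0.9}))
```

In this sketch, learning would correspond to gradually adjusting the weight values over time, as the passage above describes.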

Limitations Of Pandemonium Model:


 The Pandemonium Model fails to provide a proper definition of what does and does not constitute a feature, except in very restricted domains such as the perception of letters.
 The letter H is composed of 2 long vertical lines and a short horizontal line; but if
we rotate the H 90 degrees in either direction, it is now composed of 2 long
horizontal lines and a short vertical line. In order to recognize the rotated H as H,
we would need a rotated H cognitive demon.
 Thus we might end up with a system that requires a large amount of cognitive
demons in order to produce accurate recognition, which would lead to the same
biological plausibility criticism of the template matching models.
 Recognition is only driven by physical characteristics.
 The Pandemonium Model can lead to overgeneralization and misleading
conclusions as the process for recognizing complex three-dimensional objects is
much different than simple schematics.
 Errors in the Model can occur due to similar features of letters.
 The pandemonium model has the processing stages of pattern recognition almost backwards.
 The Model does not define the features required for recognizing complex objects such as a human face or a ball.
 If there were a different set of features for every such object, the number of feature sets would be huge.

Eleanor Gibson & Lewin-Distinctive features


In the late 1950s Gibson and Richard Walk, a professor at Cornell, developed the
“visual cliff” experiment.

Visual Cliff Experiment:


Aim: In 1959, psychologists Eleanor Gibson and Richard Walk set out to research depth perception in babies. They wanted to know if depth perception is a learned behavior or something that we are born with. In order to study this, Gibson and Walk used the visual cliff experiment.
Procedure: Gibson and Walk studied 36 babies between the ages of six and 14
months, and all of the babies could crawl. The infants were placed one at a time on a
visual cliff. It was created using a large glass table that was raised about a foot off the
floor. Half of the glass table had a checker pattern underneath in order to create the
appearance of a ‘shallow side.’ In order to create a ‘deep side,’ a checker pattern was
created on the floor; this side is the visual cliff. Even though the glass table extends
all the way across, the placement of the checker pattern on the floor creates the
illusion of a sudden drop-off. Researchers placed a foot-wide centreboard between the
shallow side and the deep side. The mother would call for the babies, and then the
researchers would see if the baby would “cross” the visual point of the cliff or not.
Results: Nine of the infants did not move off the centreboard. All of the 27 infants
who did move, crossed into the shallow side when their mothers called them from the
shallow side. Three of the infants crawled off the visual cliff toward their mother when
called from the deep side. This study showed that depth perception is likely an inborn
trait in humans.
Evaluation: Overall, this experiment wasn’t very ethical. They used babies so the
babies weren’t informed (informed consent) about the experiment (even though the
parents were, it still shouldn’t be allowed because the parent isn’t being tested, the
baby is). They used deception on the babies by making them believe they couldn’t
crawl to their mothers because of the cliff. And the babies didn’t have a choice of
withdrawal (even though the parent might, but it’s still unfair because again, the baby
is the one being tested).

Lewin – Distinctive Features:


 Several feature-analysis theories propose a more flexible approach, in which a
visual stimulus is composed of a small number of characteristics or components
(Gordon, 2004)
 Each characteristic is called a distinctive feature.
 They argue that we store a list of distinctive features for each letter.
For example, the distinctive features for the letter R include a curved component, a vertical line, and a diagonal line. When you look at a new letter, your visual system notes the presence or absence of the various features. It then compares this list with the features stored in memory for each letter of the alphabet (a minimal sketch of this comparison follows this list).
 Eleanor Gibson (1969) proposed that the distinctive features for each alphabet letter remain constant, whether the letter is handwritten, printed, or typed. These models can also explain how we perceive a wide variety of two-dimensional patterns.
 The distinctive feature approach of Eleanor Gibson and Lewin (1975) is an object-centered view of pattern recognition in which the environment has no influence. Here, neural cells in the cerebral cortex are activated on the basis of past experience to recognize the distinctive features of stimuli.
 Distinctive features are defining characteristics of letters, like the slanting line of R that distinguishes it from P. These distinctive features remain the same regardless of font/orientation.
 Neurological evidence also supports the identification of distinctive features. This model is also known as the PB model.
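A minimal sketch of the comparison referred to above (the feature lists and the scoring rule are illustrative assumptions, not Gibson's actual feature sets):

```python
# Hypothetical distinctive-feature lists (illustrative only; the real feature sets
# proposed by Gibson are more detailed than these few entries).
FEATURE_LISTS = {
    "R": {"curve", "vertical", "diagonal"},
    "P": {"curve", "vertical"},
    "E": {"vertical", "horizontal"},
}

def recognise_letter(observed_features):
    """Compare the observed feature set with each stored list and return the
    letter whose distinctive features overlap most (and mismatch least)."""
    def score(stored):
        return len(stored & observed_features) - len(stored ^ observed_features)
    return max(FEATURE_LISTS, key=lambda letter: score(FEATURE_LISTS[letter]))

print(recognise_letter({"curve", "vertical", "diagonal"}))  # -> 'R'
```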

Limitation:
 This theory may explain how letters are recognised but cannot explain how we recognize complex real-world objects like a horse.
 Studies show that we sometimes notice a distinctive feature of a whole figure but may miss the same feature if it is presented in isolation; in the real world we see objects only as wholes.
 It could not explain the full complexity of pattern recognition.
 It does not specify which brain-cell detector responds to which stimuli.

Unit 3: Theories of Pain Perception
Pain is defined as an unpleasant sensory and emotional experience associated with, or occurring without, actual tissue damage. Pain perception is influenced by a variety of environmental factors, including the context of the stimuli, social responses and contingencies, and cultural or ethnic background.
Pain perception involves a number of psychological processes, including attentional orienting to the painful sensation and its source, cognitive appraisal of the meaning of the sensation, and the subsequent emotional, psychophysiological, and behavioral reaction, which then feed back to influence pain perception.

Types Of Pain
Acute pain is a sharp pain of short duration with an easily identified cause. Often it is localized in a small area before spreading to neighbouring areas, and it is usually treated with medication. Chronic pain is intermittent or constant pain of varying intensity that lasts for longer periods. Chronic pain is somewhat difficult to treat and needs professional expert care. A number of theories have been postulated to describe the mechanisms underlying pain perception.
Nociceptive pain describes pain arising from physical damage or potential damage to the body, whereas neuropathic pain describes pain that develops when the nervous system is damaged or not working properly because of disease or injury.

Specificity Theory of Pain (von Frey, 1895)


The specificity theory of pain posits that pain is a distinct sensory modality with
dedicated pathways and mechanisms, similar to vision or hearing.
 Dedicated Apparatus: Pain has its specific peripheral and central structures.
 Direct Transmission: Nerve impulses generated by injury travel directly to a
"pain center" in the brain.
 No Interaction: Pain signals do not interact or combine with other sensory
signals.
 Linear Process: The pathway is fixed—stimulus to injury to pain perception.
 Simplistic Model: Criticized for not addressing psychological or contextual
influences on pain perception.

Pattern Theory of Pain


The Pattern Theory, also known as the Nonspecificity Theory, suggests that pain is not
linked to specific fibers or receptors. Instead:
 Pain results from intense stimulation of nonspecific receptors shared with other
senses, such as touch.
 The nerves detecting pain are not exclusive to this sensation but report multiple
sensory inputs.
 The defining factor for pain perception is the amount of stimulation, rather than
its specific source.
 Pain emerges as a pattern of nerve impulses rather than activation of dedicated
pain receptors.
 This theory contrasts with the specificity theory, emphasizing overlap in sensory
pathways.

Gate Control Theory of Pain


 Proposed by Melzack and Wall (1965): pain perception is modulated by a "gate" in the spinal cord, whose state is influenced by activity in large and small nerve fibers.
 Noxious stimuli must pass through these gates to be perceived as pain in the brain.
 Gates can open (facilitating pain transmission) or close (inhibiting pain).
 Influencing factors include:
1. Biological: Injury or neural activity.
2. Chemical: Drugs altering pain transmission.
3. Psychological: Emotions and attention.
4. Cognitive: Brain instructions modulating the gate's status.
 Challenges linear pain models (e.g., specificity theory).
 Explains phenomena like placebo effects and the impact of distraction on pain.
 Introduced the concept of a "gating mechanism" in pain pathways.

Limitations of Pain Theories


 Did not account for neurons in the central nervous system (CNS) that respond to
both non-nociceptive and nociceptive stimuli (e.g., wide-dynamic range neurons).
 Focus on cutaneous pain and do not address issues pertaining to deep tissue,
visceral, or muscular pains.
 These models are focused on acute pain and do not address mechanisms of
persistent pain or the chronification of pain.
 Oversimplifications and flaws in their presentation.

Pain Threshold and Pain Management


Pain threshold is the minimum intensity at which a person begins to perceive, or sense, a stimulus as being painful. Pain tolerance is the maximum amount, or level, of pain a person can tolerate or bear. Having a high pain tolerance is not necessarily a good thing, because it can result in patients not feeling, or ignoring, their body's warning signals that something is wrong.
When listening to a sound, the level of loudness, or pressure, at which the sound becomes painful is described as the pain threshold for that person at that time. The pain threshold varies by person, often based on the frequency of the sound, and it can be age-dependent. The threshold for pain can differ between men and women, and can fluctuate based on many other factors.

Managing pain without medicines


Many non-medicine treatments are available to help you manage your pain. A
combination of treatments and therapies is often more effective than just one. Some
non-medicine options include:
 heat or cold – use ice packs immediately after an injury to reduce swelling. Heat
packs are better for relieving chronic muscle or joint injuries.
 physical therapies – such as walking, stretching, strengthening or aerobic
exercises may help reduce pain, keep you mobile and improve your mood. You
may need to increase your exercise very slowly to avoid over-doing it.
 massage – this is better suited to soft tissue injuries and should be avoided if the
pain is in the joints. There is some evidence that suggests massage may help
manage pain, but it is not recommended as a long-term therapy.
 relaxation and stress management techniques – including meditation
and yoga
 cognitive behaviour therapy (CBT) – this form of therapy can help you learn
to change how you think and, in turn, how you feel and behave about pain. This is
a valuable strategy for learning to self-manage chronic pain.
 acupuncture – a component of traditional Chinese medicine. Acupuncture
involves inserting thin needles into specific points on the skin. It aims to restore
balance within the body and encourage it to heal by releasing natural pain-
relieving compounds (endorphins). Some people find that acupuncture reduces the
severity of their pain and enables them to maintain function. Scientific evidence
for the effectiveness of acupuncture in managing pain is inconclusive.
 Electrical stimulation - involves using a device to send a gentle electric
current to your nerves or muscles. This can help treat pain by interrupting or
blocking the pain signals. Types include:

1. transcutaneous electrical nerve stimulation (TENS) therapy – minute electrical currents pass through the skin via electrodes, prompting a pain-relieving response from the body. There is not enough published evidence to support the use of TENS for the treatment of some chronic pain conditions. However, some people with chronic pain that is unresponsive to other treatments may experience a benefit.
2. Deep brain or spinal cord stimulation - sends low levels of electricity directly
into the brain or spinal cord to relieve pain.
3. Biofeedback techniques - use electronic devices to measure body functions
such as breathing and heart rate. This teaches you to be more aware of your body
functions, so you can learn to control them.

Pain medicines
Many people will use a pain medicine (analgesic) at some time in their lives. The main
types of pain medicines are:
 paracetamol – often recommended as the first medicine to relieve short-term pain
 aspirin – for short-term relief of fever and mild-to-moderate pain (such as period
pain or headache)
 non-steroidal anti-inflammatory drugs (NSAIDs), such as ibuprofen – these
medicines relieve pain and reduce inflammation (redness and swelling)
 opioid medications, such as codeine, morphine and oxycodone – these medicines
are reserved for severe or cancer pain
 local anaesthetics
 some antidepressants
 some anti-epileptic medicines.

Unit 4: Theories of constancies


Visual perception refers to the ability to interpret and make sense of visual
information from the environment through the process of sight.
Notice that when you look at an object, you acquire specific bits of information about
it, including its location, shape, texture, size, and (for familiar objects) name. This is
acquired through visual perception.
A constancy is a tendency for qualities of objects to stay the same despite changes in
the way we view the objects.
The main constancies include
 Perceptual constancy
 Size constancy
 Shape constancy
1) Perceptual constancy:
 The brain’s ability to recognize objects consistently, even if they appear altered.
 The phenomenon in which an object or its properties (e.g., size, shape, color)
appear unchanged despite variations in the stimulus itself or in the external
conditions of observation, such as object orientation or level of illumination.
 Function of Mental Shortcuts: Perceptual constancies act as mental shortcuts,
helping us identify objects despite distortions or changes.
 Types of Constancies: The brain maintains object recognition through size,
shape, and light constancies, adjusting for distance, orientation, and lighting.
 Purpose: This ability helps us accurately perceive our environment, facilitating
effective interaction with the world around us.

Importance of perceptual constancy:


1. Accurate Perception: Perceptual constancy enables us to perceive the world
accurately, avoiding overwhelming sensory confusion.
2. Consistency in Size and Shape: By perceiving size and shape consistently,
we interact predictably with our surroundings.
3. Crucial for Everyday Tasks: Essential for tasks like navigating spaces and
recognizing objects, enhancing our ability to function effectively.

Applications of perceptual constancy:


1. Robotics and Machine Vision: Robots use perceptual constancy to recognize
objects despite changes in distance, lighting, or orientation, essential in real-
time tasks like manufacturing.
2. Aviation and Driving: Pilots and drivers rely on size and distance constancies
to gauge distances to other vehicles, obstacles, and runways, enhancing safety
in navigation.
3. Advertising and Product Design: Perceptual constancy ensures product
recognizability across platforms and lighting, helping maintain brand identity in
diverse contexts.

Advantages of perceptual constancy:


1. Stable Perception in a Changing Environment: Enables us to recognize
objects as the same, even with changes in lighting or distance, aiding effective
navigation (e.g., perceiving a car as the same size whether near or far).
2. Improved Object Recognition: Enhances our ability to identify familiar
objects under different conditions, which is essential for tasks like driving or
recognizing faces in various lighting.

Disadvantages of perceptual constancy:


1. Difficulty in Detecting Gradual Changes: Constancy mechanisms can
obscure gradual changes in an object’s appearance, such as brightness shifts,
which may hinder tasks like monitoring displays.
2. Illusions and Misinterpretation: Perceptual constancy can lead to errors,
such as in the Müller-Lyer illusion, where size constancy mechanisms cause
misinterpretation by over-applying constancy in specific contexts.
2) Size constancy:
 The ability to perceive an object as being the same size despite the fact that the
size of its retinal image changes depending on its distance from the observer.
 Function: This phenomenon enables consistent size perception despite varying
distances from the observer.
 Mechanism: Our brains adjust for changes in retinal image size, allowing us to
perceive an object’s actual size regardless of distance changes (Sternberg &
Sternberg, 2016).
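As a worked illustration of this adjustment (a standard textbook relation, added here as a hedged sketch rather than a claim from these notes): the visual angle $\theta$ subtended by an object of physical height $h$ at distance $d$ is approximately

$$\theta \approx 2\arctan\!\left(\frac{h}{2d}\right) \approx \frac{h}{d} \quad \text{for small angles},$$

so doubling the distance roughly halves the retinal image. The size-distance invariance idea is that perceived size $S$ is recovered by scaling the retinal angle by the perceived distance $D$,

$$S \approx \theta \times D,$$

so as long as distance is registered correctly, $S$ remains constant even though $\theta$ changes.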

Examples of size constancy:


1. Train on a Platform: Watching a train approach on a railway platform, the retinal image of the train enlarges as it nears, but we perceive the train as having a constant size rather than “growing.”
2. Cars on a Highway: When cars move away on a highway, they appear smaller on the retina, yet we perceive all cars as roughly the same size, despite their distance.
3. People Approaching: When people are far away, they look smaller on the retina. As they approach, they appear larger, but we understand they are not physically growing in size; the brain interprets the changing retinal image.

Illusions related to size constancy:


1. Müller-Lyer Illusion: Two identical line segments appear different in length due to arrow-like tails at their ends. Outward-pointing tails make the line seem longer, while inward-pointing tails make it seem shorter. This illusion tricks the brain by using depth cues that typically support size constancy.
2. Ponzo Illusion: Two identical lines are placed between converging lines, resembling receding railroad tracks. The line near the converging point appears longer, even though both lines are the same length. The brain interprets the converging lines as depth cues, making the top line seem farther away and larger to maintain size constancy.

Applications of size constancy:


1. Virtual Reality (VR): VR developers apply size
constancy to ensure virtual objects appear consistently sized as users move,
enhancing the realism of the experience.
2. Photography and Film: Size constancy is used to create illusions of size or
scale, allowing artists to manipulate perceptions for storytelling or artistic
effects.
3. Psychological and Perceptual Research: Psychologists study size constancy
to explore human perception and understand how illusions, like the Müller-Lyer
illusion, challenge our visual processing.
4. Education: Teaching visual illusions and constancy concepts helps children
understand perception and develop critical thinking skills in science and
psychology.

Advantages of size constancy:


1. Consistent Perception Across Distances: Size constancy helps us perceive
objects as stable in size, even when their distance changes, allowing us to
navigate confidently without objects appearing to "shrink" or "grow"
unpredictably.
2. Improved Object Recognition: It facilitates quick identification of objects
from different distances, such as recognizing a friend across the street or up
close, by perceiving their body as a consistent size.
3. Practical for Design and Technology: Size constancy is crucial in
applications like virtual reality (VR), architecture, and product design, helping
create immersive, stable, and realistic experiences.

Disadvantages of size constancy:


1. Vulnerability to Optical Illusions: Size constancy can lead to perceptual
errors, especially in optical illusions like the Ponzo and Müller-Lyer illusions,
where depth cues create false perceptions of size.
2. Dependency on Contextual Cues: Size constancy depends on contextual
cues, such as surrounding objects or converging lines. Without these cues, size
perception may be inaccurate, causing errors in tasks that require precision.
3. Dependence on Visual Cues: Size constancy relies on visual cues like lighting
and perspective. In environments with limited cues, such as underwater or in
featureless landscapes, size constancy can fail, leading to perceptual mistakes.

3) Shape constancy:
 A type of perceptual constancy in which an object is perceived as having the same
shape when viewed at different angles. For example, a plate is still perceived as
circular despite appearing as an oval when viewed from the side.
 Function: It ensures that an object's shape is perceived as constant, even when its
orientation or retinal image changes. This ability prevents objects from appearing
different depending on the observer's position (e.g., a circle always appears
circular regardless of viewpoint) (Sternberg & Sternberg, 2016).
 Neuropsychological Basis: The extrastriate cortex is involved in processing
shape constancy, according to neuropsychological imaging (Kanwisher et al., 1996,
1997).

Examples of shape constancy:


1. Dining Room Table: A rectangular table appears rectangular from various
angles, even when viewed from the side, where it may seem trapezoidal due to
perspective distortion.
2. Dinner Plates: A round plate looks circular when viewed from above, but appears elliptical when viewed from the side. Despite this retinal change, we still perceive it as round.
3. Doors: A rectangular door appears rectangular when viewed head-on, but may project a trapezoidal shape when opened. However, we continue to perceive it as rectangular.
4. Books: A book maintains its rectangular shape whether it is lying flat or stood up vertically, with no change in perception despite the change in viewpoint.
5. Coins: A round coin remains perceived as circular, even when tilted and appearing elliptical on the retina.
6. Screens and Monitors: A flat-screen TV or computer monitor retains its
rectangular shape in our perception, even when viewed from an angle where it
appears distorted.

Applications of shape constancy:


1. Object Recognition: Shape constancy is crucial for recognizing objects from
different angles, helping identify items like doors, furniture, and vehicles,
regardless of their orientation.
2. Education: It aids in teaching geometry and spatial reasoning by helping
students understand how shapes maintain their form across various viewpoints.
3. Design and Architecture: Architects and designers use shape constancy
principles to create structures that appear stable and recognizable from
different perspectives, ensuring easy navigation within spaces.
4. Psychology and Neuroscience: Research on shape constancy deepens our
understanding of visual perception in the brain and informs therapies for visual
processing disorders.
5. Safety and Navigation: Shape constancy helps individuals judge distances
and navigate obstacles effectively in everyday tasks like driving or walking,
reducing accident risks.
6. Augmented and Virtual Reality: In AR, VR, computer vision, and robotics,
shape constancy is essential for ensuring that 3D objects maintain a consistent
shape as they change orientation, improving user experience and engagement.

Advantages of shape constancy:


1. Efficient Interaction: Shape constancy enables efficient manipulation of
objects, allowing us to accurately grasp items like cups or books without
needing to adjust our perception based on their orientation.
2. Visual Stability & Spatial Awareness: It provides a stable visual experience,
reducing confusion in dynamic environments. This is crucial for tasks like
identifying vehicles or road signs from different angles.
3. Learning and Memory: Shape constancy aids in recognizing and remembering
shapes consistently, which is essential for learning geometric concepts and
understanding spatial relationships, especially in educational settings.
4. Enhanced Communication: Recognizing familiar faces and body shapes relies
on shape constancy, allowing us to identify individuals even from different
angles or distances during social interactions.
5. Art and Design: Artists and designers use shape constancy principles to create
works that maintain their intended shapes regardless of the viewer's
perspective, enhancing visual appeal and aesthetics.

Disadvantages of shape constancy:


1. Perceptual Errors: Shape constancy can lead to misperceptions when objects
are viewed from unusual angles or in poor lighting, causing the brain to
misjudge the object's shape, identity, or spatial relationships.
2. Overgeneralization: The tendency to maintain a constant perception of shape
may result in misidentifying similar-looking objects, hindering accurate
recognition.
3. Difficulty with Novel Shapes: Encountering unfamiliar shapes can make
recognition challenging, as the brain may struggle to apply shape constancy
effectively without familiar contours.
4. Cognitive Load: Maintaining shape constancy can increase cognitive load, as
the brain must continuously adjust perceptions based on changing perspectives,
potentially leading to fatigue or slower reaction times in tasks requiring quick
shape judgments.

Modalities beyond vision


TACTILE
 Tactile perceptual constancy refers to the brain's ability to maintain a stable
perception of tactile properties (e.g., texture, roughness) despite variations in how
an object is touched.
 Integration of Inputs: The brain integrates different tactile inputs such as
pressure, texture, and temperature to create a coherent representation of objects
being touched.
 Object Identification: This grouping of tactile information helps individuals
identify and differentiate objects, even when tactile stimuli change due to factors
like movement or pressure variations.

Echolocation:
 Echolocation in Blind Individuals: Blind individuals trained in echolocation
demonstrate perceptual constancy specific to auditory stimuli. They can discern
changes in reflected sound waves based on object properties, not variations in
their own echolocation clicks.
 Development through Learning: This study suggests that perceptual constancy
is not solely inherent but can be developed through learning and experience, even
within novel sensory modalities like echolocation.
 Adaptation to New Sensory Input: The ability to apply perceptual constancy to
auditory stimuli highlights how sensory adaptation can enhance environmental
understanding, even in the absence of sight.

Olfactory perceptual constancy:


 Olfactory perceptual constancy is the ability to consistently recognize and identify
a specific smell, despite changes in its intensity, context, or environmental
conditions.
 Stable Perception of Odors: This ability allows individuals to maintain a stable
perception of odors based on previous experiences and memory associations.
 Example: For instance, the smell of coffee brewing at home can still be recognized
later at a café, even if the scent is less intense or mixed with other odors, such as
pastries.

Theories of illusions
 An illusion is a misleading image (optical illusion) or a perception that deceives or
misleads intellectually, leading to a misinterpretation of an object’s actual nature.
 Importance: Illusions are significant because they offer insights into how the
visual system functions and how our brain processes sensory information,
revealing its mechanisms and potential areas where perception can be distorted.

Illusions Involving Line Length or Distance
Müller-Lyer Illusion:
 Illusion Description: In the Müller-Lyer illusion, two
horizontal lines appear to be of different lengths, although
they are actually the same. The illusion is created by the
direction of the arrow-like tails at the ends of the lines.
 Psychophysical Methods: The illusion can be demonstrated using
psychophysical techniques, such as magnitude estimation. In a study by McClellan,
Bernstein, and Garbin (1984), participants were asked to compare the Müller-Lyer
figures with a reference line (100 units in length).
 Magnitude Estimation: The study found that observers judged the "wings-
outward" version of the Müller-Lyer illusion to be longer (112 units) compared to
the "wings-inward" version (99 units), despite both lines being the same length.
 Expansion vs. Contraction: The study reinforced previous findings that the
Müller-Lyer illusion is primarily due to the perceived expansion of the "wings-
outward" version of the line, rather than the contraction of the "wings-inward"
version.

Sander Parallelogram Illusion:


 Illusion Description: The Sander
parallelogram illusion involves two
versions where the distance XA is equal
to the distance AY. Despite this, XA is
perceived as significantly longer than
AY.
 Perceptual Misjudgment: The illusion occurs because the surrounding
parallelogram distorts the perception of the line lengths. When viewed with the
parallelogram, XA appears much longer than AY, even though they are the same.
 Alternate Perception: If you can ignore the surrounding parallelogram, the figure
XAY can be perceived as an upside-down isosceles triangle, which provides a
different spatial interpretation that reduces the illusion.

Ponzo Illusion:
1. Illusion Description: In the Ponzo illusion, two parallel
horizontal lines of equal length are perceived differently. The
upper line appears longer than the bottom line when both are
flanked by converging oblique lines (similar to railroad tracks
receding into the distance).
2. Visual Mechanism: The illusion occurs because the brain
interprets the converging lines as depth cues, perceiving the
upper line as farther away and thus larger, maintaining size constancy.
3. Name: This illusion is also referred to as the "filled space-open space" illusion,
as the top line is perceived to be in a "filled" space (closer to the top of the
converging lines) while the bottom line is in an "open" space.

Horizontal-Vertical Illusion:
 Line Length Distortion:
1. Vertical lines next to horizontal lines appear longer than
when presented alone.
2. Horizontal lines next to vertical lines appear shorter
than when presented alone.
 Effect of Line Interruption: When a line is interrupted by
another line, it is perceived as being shorter than the same line when it is
uninterrupted.
 Perceptual Mechanism: This illusion occurs due to the brain's reliance on
surrounding context, where the presence of other lines influences the perception of
length, leading to misjudgments based on their orientation or position.

Line-Length and Distance Illusion - explanation:


 Misapplied Constancy: The theory suggests that people apply size constancy
principles incorrectly when interpreting illusions. In the Ponzo illusion, for
example, observers misapply distance cues (such as converging lines) to make
judgments about length, perceiving the upper line as longer because it appears
farther away.
 Experience Influence: The theory argues that past experience with distance
cues influences how people interpret illusions. Individuals have learned to use
depth cues, like converging lines, to judge size and distance in the real world,
but in illusions, these cues are misapplied.
 Age and Experience: Studies show that younger children (under 5 years old)
are less susceptible to the Ponzo illusion, while older children and adults are
more affected. This suggests that experience and cognitive development play a
role in susceptibility to illusions.
 Cross-Cultural and Contextual Differences: Research comparing students
from different cultural backgrounds (University of Guam vs. Pennsylvania State
University) revealed that adding depth cues, like texture and perspective, did
not significantly increase the Ponzo illusion effect for the Guam students. This
implies that cultural or environmental experiences can also influence the degree
to which people are deceived by illusions.

Eye Movement Theory:


 Illusions and Eye Movements: The theory posits that illusions are caused by
differences in eye movement patterns, such as mistracking between stimuli.
 Müller-Lyer Illusion: In the wings-outward version of this illusion, the eyes trace a
longer path, leading people to perceive the line as longer.
 Extent and Effort of Eye Movements: Illusions may result from:
o Extent of eye movement (as in Müller-Lyer).
o Effort of eye movement, such as in the horizontal-vertical illusion where
vertical eye movements are more effortful than horizontal ones.
 Research Findings: Studies show that the eyes travel longer for lines perceived
as longer, supporting the theory that eye movement correlates with perception.
 Fixation Experiment: In an experiment with dots, observers who fixated on one
dot and shifted their gaze a shorter distance judged the distance between the dots
to be shorter than those who had to move their eyes a greater distance.

Incorrect Comparison Theory:


 Theory Explanation: The theory suggests that observers make judgments
based on the incorrect parts of the figure.
 Müller-Lyer Illusion: In this case, observers cannot separate the lines from the
wings, leading them to compare the distances between the ends of the wings,
which distorts their perception of line length.
 Comparison to Eye-Movement Theory: Both theories do not rely on the
previous experiences of the viewer to explain the illusions, but focus on
perceptual processes during the viewing of the figure.

Illusion Involving Area


1. Ames Room: An Ames room is designed to look like a normal rectangular room
from the front, but it has an unusual structure where the ceiling and floor are
inclined, and the right corner is closer to the observer than the left corner.
2. Viewing Setup: The observer looks through a peephole to eliminate depth
perception, using a monocular view to enhance the illusion and remove depth
cues from binocular vision.
3. Perceptual Illusion: The illusion causes viewers to perceive two individuals as
standing at the same depth in the room, even though one is much closer than
the other.
4. Size Distortion: The individual on the right appears much
larger due to being closer, despite both appearing at the same
depth, illustrating how the room's design manipulates
perception of size and distance.
Moon Illusion:
 Moon Illusion: Observers typically perceive the moon to be about 30% larger
at the horizon than at the zenith (highest point in the sky).
 Early Theories:
1. Refraction Theory: Proposed by Aristotle and Ptolemy, suggesting that
the moon appears larger at the horizon due to light rays passing through
more of the Earth's atmosphere.
2. This theory has been ruled out, as physical factors like color and
brightness do not account for the illusion.
 Apparent-Distance Theory: Suggests that the moon appears farther away
when it is on the horizon compared to when it is at the zenith. The difference in
perceived distance could explain the size difference.
 Complexity of the Illusion: The moon illusion likely arises from multiple
perceptual factors, and there is no single mechanism that fully explains it.

Illusion in Music:
1. Shepard’s Tone Illusion: Shepard (1961) used complex tones that seem to
continuously increase in pitch, creating the illusion of an endlessly rising pitch,
despite returning to the original tone after many iterations. This was achieved
by manipulating the harmonics.
2. Octave Illusion: Created by Deutsch (1983), this illusion involves presenting
one tone to each ear, with the tones being an octave apart (e.g., G4 in one ear
and G5 in the other). When the tones are switched between ears, listeners
perceive the tones as coming from a single location, even though they are
physically different.

Illusory Movement:
1. Stroboscopic Movement: This illusion occurs when a series of rapidly
presented static images or lights create the perception of continuous motion,
due to how the retina processes the stimulation.
2. Autokinesis: A stationary object, when viewed against a featureless
background, appears to move. This is thought to result from spontaneous, tiny
movements of the eyes.

Unit 5: Classical and modern psychophysics:


Classical Psychophysical Methods
What is Psychophysics?
 Psychophysics is the branch of psychology that explores the relationship between
the objective physical properties of a stimulus (like its intensity) and the
subjective perception of that stimulus (like its perceived brightness).
 Origins: The term "psychophysics" was coined by Gustav Theodor Fechner in
his 1860 book Elements of Psychophysics. Fechner aimed to connect the physical
world (objective stimuli) with the mental experience (subjective sensation).
 Fechner’s Inspiration: Fechner's work was influenced by earlier experimental
results on touch and light by Ernst Heinrich Weber in the 1830s, which laid the
groundwork for understanding sensory perception.
 Two Transformations:
1. Psychophysical Processes: Fechner suggested that human sensation
occurs in two stages—stimulus and sensation. These stages involve
transformations of sensory information.
Outer Psychophysics: This focuses on the relationship between the physical
stimulus and the resulting sensation, based on observation and experimentation.
Inner Psychophysics: This examines the neural mechanisms and cognitive
processes (like attention and memory) that affect how we process and perceive
sensory input, focusing on how the brain encodes sensory information.

History of Psychophysics
 Fechner’s Discovery (October 22, 1850): Fechner had an epiphany about
measuring sensation, realizing that it could be done in the same way as physical
measurements—by using a null matching procedure.
 Null Matching: This method involves adjusting one stimulus until it matches
another in terms of perception. The "null" or "no difference" judgment is key, where
the two stimuli are perceived as identical in sensation.
 Scientific Use of Null Matching: Fechner emphasized that this procedure allows
for consistent and objective measurements, whether done by trained scientists or
untrained observers using instruments designed to replicate the null matching
procedure (e.g., pH meters in chemistry labs, which align with the titration method
used by chemists).
 Sensation and Judgment: Fechner argued that it’s not possible to quantify the
magnitude of sensations (e.g., one light being exactly "twice as bright" as another),
but we can assess whether sensations are present and whether two sensations are
the same. This formed the basis of classical psychophysics.
 Fechner’s Contribution: He established that sensations could be specified based
on the stimulus values needed to produce specific types of judgments. This led to
the development of the framework in psychophysics, where we can compare
sensations and their corresponding stimulus intensities, allowing for measurements
like brightness comparison.

Threshold
Absolute Threshold: This is the smallest amount of stimulus energy required for the
sensory system to detect a stimulus 50% of the time. It represents the minimum
intensity at which a stimulus is perceived.
Difference Threshold: Also known as the just noticeable difference (JND), this is
the minimum detectable difference between two stimuli. Like the absolute threshold,
the difference threshold is the smallest difference that can be perceived 50% of the
time.
Threshold Discrimination
 Discrimination Experiments: These experiments aim to determine the point
at which a subject can detect the difference between two stimuli (e.g., two
weights or sounds).
 Point of Subjective Equality (PSE): This is the point at which the subject
perceives two stimuli (such as weights) to be the same, even though they may
differ physically.
 Just Noticeable Difference (JND): Also known as the difference limen (DL),
this refers to the smallest difference in stimuli that the subject can detect. It is
typically defined as the difference that can be noticed 50% of the time
(proportion p = 50%).

METHOD OF CONSTANT STIMULI


A psychophysical procedure for determining the sensory threshold by randomly
presenting several stimuli known to be close to the threshold. The threshold is the
stimulus value that was detected 50% of the time. Also called the constant stimulus
method.
Random Presentation: In this method, different levels of a stimulus are presented
randomly across trials. This randomness prevents the subject from predicting the
stimulus intensity for the next trial, thereby reducing errors caused by habituation or
expectation.
Absolute Thresholds: For absolute thresholds, the subject simply reports whether or
not they can detect the stimulus at different levels.
Difference Thresholds: For difference thresholds, a constant comparison
stimulus is used alongside the varied stimulus levels. The subject compares the two
and reports whether they detect a difference.
Origin: The method of constant stimuli was first described by Friedrich Hegelmaier
in his 1852 paper.
Method of Constant Stimuli is typically represented on a graph:
Graph Structure:
 The x-axis represents trials, with each trial corresponding to a specific stimulus
presentation.
 The y-axis represents the stimulus
level (intensity) used in each trial.

Participant Response: The participant responds with "yes" or "no" to indicate
whether they detected the stimulus at the given intensity.

Data Plotting: The data points are plotted along the y-axis for each trial,
reflecting the stimulus intensity used.

Detection Rate Curve:


1. A separate line or sigmoidal curve
is drawn to represent the
proportion of "yes" responses at
each stimulus level.
2. As the stimulus intensity increases, the likelihood of a "yes" response typically
rises, forming the sigmoidal curve.
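
To make the summary concrete, the short Python sketch below shows one way the "yes" proportions could be tabulated and a 50% threshold read off by linear interpolation; the stimulus levels, response counts, and the interpolation itself are illustrative assumptions, not data from any particular experiment.

# Minimal sketch: summarizing method-of-constant-stimuli data and estimating the
# absolute threshold as the 50% "yes" point. All values are hypothetical.
levels     = [1, 2, 3, 4, 5, 6]          # stimulus intensities (arbitrary units)
yes_counts = [1, 3, 8, 14, 18, 20]       # "yes, detected" responses out of 20 trials each
n_trials   = 20

proportions = [count / n_trials for count in yes_counts]

# Linear interpolation between the two levels whose proportions straddle 0.5.
threshold = None
for (x0, p0), (x1, p1) in zip(zip(levels, proportions), zip(levels[1:], proportions[1:])):
    if p0 < 0.5 <= p1:
        threshold = x0 + (0.5 - p0) * (x1 - x0) / (p1 - p0)
        break

print("Proportion 'yes' per level:", proportions)
print("Estimated threshold (50% point):", round(threshold, 2))   # ~3.33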

METHOD OF LIMITS
 The Method of Limits is a psychophysical procedure used to determine the sensory
threshold by gradually increasing or decreasing the magnitude of the stimulus in
discrete steps.
Procedure:
 A stimulus of a given intensity is presented to the participant.
 If the stimulus is perceived, the intensity is decreased on the next trial, continuing
until the participant no longer detects the stimulus.
 If the stimulus is not perceived, the intensity is increased on the next trial,
continuing until the participant detects the stimulus.
Example: In a light detection experiment, you might be presented with a bright light
and asked if you can see it. If you can, the intensity is decreased in the next trial. This
process continues until you report no longer being able to detect the light.

Ascending Method
 The stimulus is presented initially at a very low intensity.
 The intensity is gradually increased until the subject can perceive the stimulus.
 The threshold is determined by the intensity level at which the subject first
detects the stimulus.

Descending Method
 The stimulus is presented at a high intensity.
 The intensity is gradually decreased until the subject can no longer perceive
the stimulus.
 The threshold is determined by the
intensity level at which the subject
stops detecting the stimulus.

Threshold Determination: In both methods, the threshold is considered to be the
stimulus level where the subject just detects the stimulus.

Alternating Methods: In experiments, the ascending and descending methods are
alternated to reduce biases, and the thresholds from both methods are averaged to
provide a more accurate measure.
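
As a rough illustration of how the two series are combined, the Python sketch below averages hypothetical crossover points (the intensities at which the observer's response changed) from ascending and descending runs; all values are invented for the example.

# Minimal sketch of threshold estimation with the method of limits.
# Each series records the intensity at which the response changed; values are hypothetical.
ascending_crossovers  = [4.0, 4.5, 3.5, 4.0]   # first intensity detected, per ascending run
descending_crossovers = [3.0, 3.5, 3.0, 2.5]   # last intensity detected, per descending run

mean_ascending  = sum(ascending_crossovers) / len(ascending_crossovers)    # 4.0
mean_descending = sum(descending_crossovers) / len(descending_crossovers)  # 3.0

# Alternating series are averaged to reduce habituation and anticipation errors.
threshold = (mean_ascending + mean_descending) / 2
print("Estimated threshold:", threshold)       # 3.5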
Two Point Threshold Experiment
Purpose: The experiment aims to determine the smallest distance at which a person
can distinguish between two tactile stimuli applied to the skin (e.g., fingertips).
Procedure:
 The participant closes their eyes to eliminate visual cues.
 The researcher applies two points of stimulation to a specific area of the skin,
starting at a known distance likely above the perceptual threshold.
 The participant is asked if they feel one point or two points.
 If the participant reports feeling only one point, the researcher increases the
distance slightly.
 If the participant reports feeling two points, the researcher decreases the
distance.
Threshold Measurement: After several trials, the average of the smallest
distances at which the participant can accurately distinguish the two points is
recorded as the two-point threshold.
Application: This experiment demonstrates the method of limits, where the distance
between the two stimuli is systematically varied to identify the smallest perceptible
separation.
Errors
Error of Habituation:
 This error occurs when a participant tends to report a "one-point sensation"
in the ascending series (where the distance between the points is increased)
and a "two-point sensation" in the descending series (where the distance is
decreased), even if they do not actually feel the points distinctly.
 It’s a bias in response, where participants may become accustomed to the
pattern of increasing or decreasing stimuli.

Error of Anticipation:
 This error happens when a participant changes their response to the stimulus
before actually feeling it.
 Essentially, they anticipate the stimulus based on the direction of the changes
in distance (ascending or descending), rather than responding to the actual
sensory experience.

METHOD OF ADJUSTMENT
A psychophysical technique in which the participant adjusts a variable stimulus to
match a constant or standard.
Purpose: It measures how people perceive differences between stimuli.
Process: In this technique, participants adjust a variable stimulus to match a
constant or standard stimulus.
Example: One common example is adjusting a comparison stimulus to match the
brightness of a standard visual stimulus.
Other Names: It is also referred to as the adjustment method, error method,
method of average error, or method of equivalents
How It Works
 Two Stimuli:
o Standard Stimulus (St): A fixed stimulus with a known intensity (e.g.,
brightness, loudness, or length).
o Comparison Stimulus (Co): A stimulus that starts at a random intensity
and can be adjusted by the subject.
Task: The subject adjusts the comparison stimulus (Co) until it appears to match the
standard stimulus (St) in terms of intensity or other characteristics being studied (e.g.,
brightness, sound level, or length).
Goal: To make the two stimuli appear the same in the characteristic being studied.
Key Components
Subjective Matching: The subject adjusts the comparison stimulus based on their
personal perception, making it subjective to their sensory experience.
Repeated Judgments: The adjustment process is repeated multiple times, with the
comparison stimulus starting at different initial intensities each time.
Point of Subjective Equality (PSE)
 Calculation: After all adjustments are made, the average of the adjustments is
calculated.
 Definition: The average is called the Point of Subjective Equality (PSE).
 Meaning: The PSE represents the point at which the subject perceives the two
stimuli as equal on average.
Error Calculation
Constant Error (CE): The difference between the standard stimulus (St) and the
Point of Subjective Equality (PSE) is called the Constant Error (CE).
Interpretation of CE:
1. Positive CE: If the PSE is greater than the standard stimulus (St), the subject
tends to overestimate the standard stimulus.
2. Negative CE: If the PSE is less than the standard stimulus (St), the subject
tends to underestimate the standard stimulus.
Example: If the comparison light (Co) is adjusted to appear brighter than the
standard light (St) on average, it’s overestimation. If the comparison light appears
dimmer, it’s underestimation.

Example - illustrating the Method of Adjustment:


 Müller-Lyer Illusion: The experiment measures the extent of the Müller-Lyer
illusion, where two lines appear different in length despite being identical.
 Experiment Setup:
1. The standard stimulus (St) is a line of 180 pixels in length.
2. Observers adjust the length of the comparison stimulus (left line) to
make it look identical to the standard (right line).
3. Ascending and descending trials are used to determine the comparison
line’s length.
 Results:
1. Ascending trial mean = 145 pixels.
2. Descending trial mean = 142 pixels.
3. Mean setting = 143.5 pixels, which is the Point of Subjective Equality
(PSE).
 Constant Error (CE):
1. CE = PSE - Standard stimulus (St) = 143.5 - 180 = -36.5 pixels.
2. The negative CE indicates that the observer underestimated the
standard line length by 36.5 pixels.
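The same arithmetic can be written as a brief Python sketch; the individual trial settings below are hypothetical, chosen only so that their means reproduce the 145- and 142-pixel values quoted above.

# Minimal sketch: Point of Subjective Equality (PSE) and Constant Error (CE) for the
# Mueller-Lyer adjustment example. Individual trial settings are assumed.
standard_length     = 180                      # standard stimulus (St), in pixels
ascending_settings  = [144, 146, 145, 145]     # hypothetical ascending-trial settings
descending_settings = [141, 143, 142, 142]     # hypothetical descending-trial settings

mean_ascending  = sum(ascending_settings) / len(ascending_settings)    # 145.0
mean_descending = sum(descending_settings) / len(descending_settings)  # 142.0

pse = (mean_ascending + mean_descending) / 2   # 143.5 pixels
ce  = pse - standard_length                    # -36.5 pixels (underestimation)

print("PSE:", pse)
print("Constant Error:", ce)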
Common Uses
1. Müller-Lyer Illusion: Used to study visual illusions by having subjects adjust
line lengths until they perceive them as equal, revealing biases in visual
perception.
2. Pitch Matching: In auditory experiments, subjects adjust the pitch of a tone to
match a reference tone.
3. Brightness or Color Matching: Common in vision studies where subjects
adjust the brightness or color of a stimulus to match a reference stimulus.
Comparison of the three classical methods:

Control over Stimuli:
- Method of Adjustment: Subject adjusts the stimuli.
- Method of Constant Stimuli: Experimenter controls stimuli in random order.
- Method of Limits: Experimenter controls stimuli in ascending/descending order.

Bias:
- Method of Adjustment: More prone to subject bias (e.g., overshooting or undershooting).
- Method of Constant Stimuli: Less prone to bias due to randomization.
- Method of Limits: Prone to anticipation and habituation effects.

Efficiency:
- Method of Adjustment: Fastest and most direct.
- Method of Constant Stimuli: Slowest and most thorough.
- Method of Limits: Faster than constant stimuli but slower than adjustment.

Accuracy:
- Method of Adjustment: Less accurate due to subject control.
- Method of Constant Stimuli: Most accurate due to randomization.
- Method of Limits: More accurate than adjustment but less than constant stimuli.

Example:
- Method of Adjustment: Adjusting brightness to match a reference.
- Method of Constant Stimuli: Randomly presented tones to detect perception.
- Method of Limits: Indicating when a sound becomes audible or inaudible as volume changes.

Primary Use:
- Method of Adjustment: Measuring subjective equality or matching.
- Method of Constant Stimuli: Measuring detection threshold.
- Method of Limits: Measuring detection or difference threshold.
Applications of Psychophysical Methods
1. Vision Research:
o Contrast Sensitivity: Measures the minimum contrast required to detect
objects.
o Brightness Perception: Assesses the thresholds for recognizing different
brightness levels.
2. Auditory Research:
o Hearing Sensitivity: Evaluates the ability to detect sounds at various
frequencies.
o Loudness Scaling: Subjects adjust volume levels to achieve perceptual
equality with reference sounds.
3. Pain Perception: Studies focused on pain tolerance and sensitivity using
controlled stimuli.
4. Consumer Testing:
o Taste and Smell: Used to assess consumer preferences for food,
fragrances, etc.
5. Clinical Diagnosis:
o Hearing or Vision Impairment: Assesses thresholds of perception in
clinical populations to diagnose impairments.

Fechner’s contributions
 Origin of Concept: The concept was originally postulated by Ernst Heinrich
Weber in 1834 to describe research on weight lifting.
 Fechner’s Role: Gustav Theodor Fechner, Weber’s student, applied this
concept to the measurement of sensation and developed it into the science of
psychophysics.
 Weber's Influence: Ernst Weber was one of the first to study the human
response to physical stimuli quantitatively.
 Fechner’s Law: Fechner named his first law in honor of Weber, who had
conducted the experiments necessary to formulate it.

Fechner’s law
Logarithmic Perception: The perceived intensity of
a stimulus increases logarithmically as its actual
intensity increases. This means that the larger the
stimulus, the more of it is needed to detect a change.
Example 1 - Weight Perception: Adding a 1-pound
weight to a 10-pound weight feels more significant
than adding the same 1-pound weight to a 100-pound
weight. The perception of change diminishes as the
base weight increases.
Example 2 - Price Perception: When buying a small item (e.g., $10), a $1 increase
feels significant. However, when buying a more expensive item (e.g., a $1,000
laptop), the same $1 increase is barely noticeable. Fechner’s Law suggests that our
sensitivity to price changes decreases as the total price increases.
Mathematical formulation
S = k × log(I)
Where:
 S is the perceived sensation.

 k is a constant that depends on the specific sense being measured (e.g., weight,
decibels, etc.).
 I is the intensity of the stimulus.
Implication: As the intensity of the stimulus increases, equal increases in intensity
lead to smaller increases in perceived sensation at higher levels of intensity.
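A minimal Python sketch of this logarithmic compression is given below, assuming k = 1 and base-10 logarithms purely for illustration (the choice of base only rescales the constant k).

import math

# Minimal sketch of Fechner's law S = k * log(I), with k = 1 and base-10 logs
# chosen only for illustration.
k = 1.0
for intensity in [10, 100, 1000, 10000]:
    sensation = k * math.log10(intensity)
    print(f"I = {intensity:>6}  ->  S = {sensation:.1f}")

# Each tenfold increase in intensity adds the same fixed amount (1.0) to the perceived
# sensation, illustrating the diminishing effect of equal absolute increases.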

Applications of the Weber-Fechner Law:


1. Sensory Perception and Product Design: The Weber-Fechner Law is used in
sensory-based product design to optimize user experience by understanding
how changes in stimuli (like weight, sound, or color) are perceived by
consumers.
2. Marketing and Pricing Strategies: In pricing, the law explains why small
price increases are less noticeable for higher-priced products compared to lower-
priced ones. Marketers use this principle to implement price hikes or discounts
without significantly altering consumers' perceived value of the product.
3. Psychophysics and Pain Management: The Weber-Fechner Law is applied in
pain management to understand and modulate patient responses to stimuli,
helping to improve treatments by factoring in how pain perception changes with
stimulus intensity.

Limitations of the Weber-Fechner Law:


1. Limited Range: The law is most effective within a specific range of stimulus
intensities and may not accurately describe perception at very low or very high
levels.
2. Doesn’t Account for Context: The law does not consider external factors,
such as context or environmental conditions, which can significantly affect
perception.
3. Focuses on One Sensory Modality: The Weber-Fechner Law primarily applies
to individual sensory modalities (e.g., vision or hearing) and may not effectively
explain interactions between different senses or multimodal perceptions

Weber’s law
Personal life
 Born: June 24, 1795, in Wittenberg, Germany.
 Family: The eldest of 13 children; his father was a professor of theology,
providing an academically stimulating environment.
 Education: Enrolled at the University of Wittenberg, where he studied
medicine and obtained his doctorate in 1815.
 University of Leipzig: Began his academic career in 1821, becoming a
professor of comparative anatomy.
 Contributions:
o Weber is considered one of the founders of experimental psychology.
o Most notable for his work in sensory perception, including the Just
Noticeable Difference (JND), Weber’s Law, and the Two-Point
Threshold (the minimum distance at which a person can distinguish two
separate points of contact on the skin).
o His work influenced Gustav Fechner, who expanded Weber’s ideas into
the broader field of psychophysics.
History of Weber’s Law:
 Origin: The law was initially postulated by Ernst Heinrich Weber in 1834 to
describe research on weight lifting.
 Fechner’s Contribution: Weber’s student, Gustav Theodor Fechner, later
applied Weber’s law to the measurement of sensation in relation to a stimulus.
Fechner expanded on this idea and developed it into the science of
psychophysics.
 Fechner’s Naming: Fechner named the resulting formula Weber’s Law, which is
sometimes referred to as the Fechner-Weber Law.
 Impact on Psychology: This allowed for the measurement of sensation in relation
to a physical stimulus, enabling the development of quantitative psychology.
 Psychophysics: The field of psychophysics studies the quantitative
relationship between psychological events (such as sensations) and physical
events (such as stimuli that produce them).
Foundational Principle: Weber’s Law is a key concept in sensory perception and
psychophysics, formulated by the German physiologist and psychologist Ernst
Heinrich Weber in the 19th century.
Just Noticeable Difference (JND): The law quantifies the smallest detectable
difference between two stimuli, known as the just noticeable difference (JND).
Proportional Relationship: According to Weber’s Law, the JND is proportional to the
magnitude of the original stimulus. This means the ability to detect changes in a
stimulus depends on the relative change rather than the absolute change in the
stimulus.

Formula:
ΔI / I = k
Where:
 ΔI is the minimum change in intensity required for detection (also known as
the just noticeable difference (JND)).
 I is the intensity of the original stimulus.
 k is the Weber fraction, a constant that varies for each type of sensory
modality (e.g., vision, hearing, etc.).
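
A small Python sketch of this proportional relationship is shown below; the Weber fraction k = 0.02 is an assumed illustrative value, roughly in the range often reported for lifted weights.

# Minimal sketch of Weber's law: the JND grows in proportion to the base intensity.
# The Weber fraction k = 0.02 is an assumed illustrative value.
k = 0.02
for base_intensity in [100, 200, 500, 1000]:      # e.g., grams of weight
    jnd = k * base_intensity                      # delta-I = k * I
    print(f"I = {base_intensity:>4} g  ->  JND = {jnd:.1f} g")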

Experiments
Weber’s Weight Discrimination Experiment:
Objective: In 1834, Ernst Heinrich Weber conducted an experiment to understand
how small changes in physical stimuli (specifically weight) could be detected by the
human senses.

Method:
 Participants were blindfolded to eliminate visual bias.
 They were given two equal standard weights (one in each hand) to familiarize
themselves with the weight.
 After this, a slightly heavier test weight was added to one hand.
 The participants were then asked to compare the weights and judge which one felt
heavier.

Observations:
 Participants found it harder to detect differences when the standard weight was
larger.
 Example: If the standard weight was 100 grams, they could easily detect an
additional 10 grams. However, with a 200-gram standard weight, they needed an
additional 20 grams to perceive a difference.

Conclusion: The experiment showed that the ability to perceive changes in weight
was not based on an absolute difference but rather on a constant ratio between
the two weights, which led to the formulation of Weber’s Law.
Weber's Two-Point Discrimination Test:
Purpose: The test was designed to measure the two-point threshold, which is the
minimum distance between two points on the skin at which an individual can
distinguish them as separate touches.

Method:
 Calipers with two sharp points were used to touch different areas of the
participant’s skin.
 Participants, who were blindfolded, were asked to report whether they felt one or
two distinct points of contact.
 The distance between the points was gradually adjusted until the participant could
no longer distinguish them as separate touches.

Test Locations: The test was conducted on various body parts, including fingertips,
face, lips, palms, and back, to examine differences in tactile sensitivity across body
locations.

Findings:
Tactile sensitivity varies significantly across different parts of the body.
 Fingertips: Sensitivity ranged from 2 to 8 mm, allowing participants to
distinguish points just a few millimeters apart.
 Lips: Sensitivity was slightly more refined, with a threshold of 2 to 4 mm.
 Palms: Less sensitive, with a threshold of 8 to 12 mm.
 Back or Shins: Much lower sensitivity, with a threshold of 30 to 40 mm, requiring
a much larger distance to distinguish separate points.
Fechner’s Sensory Threshold Experiments:
Extension of Weber’s Work: Gustav Fechner extended Weber's findings and
developed the field of psychophysics, applying Weber’s law to various other senses,
including vision and hearing.

Method:
 Participants were asked to observe stimuli such as lights of varying brightness or
sounds of different volumes.
 They were then asked to indicate the point at which they could detect a
difference between two levels of stimuli (the Just Noticeable Difference (JND)).
Findings:
 Fechner confirmed Weber's principle, showing that the JND follows a
proportional relationship across different sensory modalities.
 Key Observation:
1. If a stimulus (e.g., sound or light) is faint, only a small change is needed to
detect it.
2. If the initial stimulus is already intense, a larger change is required for it to be
noticeable.

Applications of Weber's Law in Cognitive Psychology:


Attention:
 Selective Attention: Our ability to focus on specific stimuli is influenced by their
intensity. A stimulus must surpass the JND (Just Noticeable Difference)
threshold to capture or shift attention. For example, loud sounds or bright lights are
more likely to attract attention.
 Habituation and Adaptation: If changes in stimulus intensity are too subtle
(below the JND), they may not grab attention, leading to sensory habituation
(e.g., getting used to the ticking of a clock).
Memory:
 Stimuli that exceed the JND threshold are more likely to be encoded in memory
because they stand out from background information. Minor changes that do not
surpass the JND may not be stored or recalled effectively.
 Distinct experiences, such as a new fragrance or a sudden loud noise, tend to be
more memorable than repetitive, subtle changes.
Decision Making:
 Weber’s Law helps explain how individuals make discriminative judgments
under uncertainty. When making decisions, people rely on noticeable
differences in information. If the change between two options (e.g., product prices
or features) is below the JND, the difference may not influence their choice.
 This principle is crucial in marketing and economics, where small changes in
discounts or packaging may not significantly impact consumer choices unless
they exceed the perceptual threshold.
Emotion and Sensory Perception:
Emotional states can alter perception thresholds. For example, when stressed or
anxious, individuals may notice smaller changes in stimuli, lowering their JND.
Conversely, in states of calmness or positive emotions, larger changes are required to
perceive a difference, raising the JND threshold.

Real-life Applications of Weber's Law:


Marketing:
Subtle changes in price, packaging, or product quality are adjusted to stay within or
exceed consumers' JND (Just Noticeable Difference), influencing purchasing
behavior.
Example: A small price increase might go unnoticed if it’s within the JND, but a
noticeable discount can attract attention and influence buying decisions.
Human-Computer Interaction (HCI):
User interfaces are designed considering perceptual limits, like optimal font sizes or
contrast, ensuring that critical changes are noticeable without overwhelming users.
Example: Notification sounds and visual alerts are calibrated to be noticeable but
not too distracting for the user.
Clinical Psychology:
Sensory tests for vision, hearing, or touch assess patients' ability to detect small
changes, aiding in the diagnosis of sensory impairments.
Example: Audiologists use JND tests to detect hearing loss by identifying the
smallest detectable change in sound frequency or intensity.

Strengths of Weber's Law:


1. Broad Applicability: Explains perception across various senses, such as
vision, hearing, and touch.
2. Simplicity: Provides a clear, proportional relationship between stimulus and
perception, making it easy to understand and apply.
3. Experimental Foundation: Based on empirical evidence, which helps
quantify the Just Noticeable Difference (JND) for different stimuli.
4. Useful in Practical Fields: Widely applied in areas like marketing, user
interface design, and clinical assessments.

Limitations of Weber's Law:


1. Breakdown at Extreme Intensities: Fails to predict perception accurately for
very weak or very strong stimuli, such as faint sounds or blinding light.
2. Assumption of Linearity: Assumes a constant ratio between stimulus
change and perception, which doesn’t always hold true across different
intensities.
3. Inflexibility Across Modalities: Different senses and stimuli types may not
follow the same proportional relationship, which is better addressed by
Fechner's Law or Steven's Power Law.

Stevens’ power law


Influence in Psychophysics: Stanley Smith Stevens was an influential American
psychologist who pioneered advancements in psychophysics.
Career and Contributions: Spent much of his career at Harvard University and
established the Psycho-Acoustic Laboratory. He advanced the understanding of
how the brain perceives physical stimuli like sound and light.
Stevens’ Power Law: Developed Stevens' Power Law in 1957, improving on the
Weber-Fechner Law. His research showed that sensory perception followed a
power function, rather than a logarithmic one.
Formula for Sensation: Proposed the formula S = kIⁿ, where S is perceived sensation,
I is stimulus intensity, k is a constant, and n varies based on the type of sensation.
This formula provided a more flexible model for different sensory experiences.
 Stevens' Power Law is a psychophysical concept which states that the strength of the
sensation perceived by the recipient grows as a power function of the physical
stimulus magnitude.
 Stevens' Power Law essentially states that as a stimulus increases in magnitude, the
perceived sensation increases in proportion to a power of that magnitude.
S = k × I^n
where,
 S- perceived sensation
 k- Constant
 I-stimulus intensity
 n- Power exponent
The exponent n tells us how sensitive we are to changes in
each type of stimulus:
 If n>1: Perception grows faster than physical intensity
(like for electric shock).
 If n<1: Perception grows slower than physical
intensity (like for brightness).
 If n=1: Perception grows linearly with physical
intensity (rare, but close for line length).

Brightness Example
Suppose we have a light with intensity 100 units. If we
double the intensity to 200 units, let’s calculate the perceived brightness:
Initial Perceived Brightness:
S = k × I^0.33
For simplicity, let's assume k = 1.
At 100 units of intensity:
S = 1 × 100^0.33 ≈ 4.57
Perceived Brightness at 200 units:
S = 1 × 200^0.33 ≈ 5.75
So, while the light's physical intensity doubled (from 100 to 200), the perceived
brightness only increased from about 4.6 to about 5.7—roughly a 26% increase, far
short of a doubling. This illustrates that our perception of brightness grows much
more slowly than the actual increase in intensity.

Shock Intensity Example


Suppose we start with a shock intensity of 10 units. Now we increase the intensity to
20 units. Let’s calculate the perceived shock intensity.
Initial Perceived Intensity:
S = k × I^1.6
Again, assume k = 1.
At 10 units:
S = 1 × 10^1.6 ≈ 39.81
Perceived Intensity at 20 units:
S = 1 × 20^1.6 ≈ 120.68
Here, even though the physical intensity only doubled, the perceived intensity
increased more than threefold, from about 39.8 to about 120.7! This shows that with
electric shocks, we are far more sensitive to increases in intensity.
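
The two worked examples can be reproduced with a short Python sketch of the power function, again assuming k = 1 and using the exponents quoted above (0.33 for brightness, 1.6 for electric shock).

# Minimal sketch of Stevens' power law S = k * I**n, reproducing the two examples above.
def perceived(intensity, n, k=1.0):
    return k * intensity ** n

print("Brightness:", round(perceived(100, 0.33), 2), "->", round(perceived(200, 0.33), 2))  # 4.57 -> 5.75
print("Shock:     ", round(perceived(10, 1.6), 2), "->", round(perceived(20, 1.6), 2))      # 39.81 -> 120.68
# Doubling light intensity raises perceived brightness by only about 26%, while
# doubling shock intensity roughly triples the perceived strength.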

History of Stevens' research in psychophysics:


Testing Various Stimuli: Stevens conducted extensive tests on different types
of stimuli to explore perception intensity.
Methods Used: He used magnitude estimation and magnitude production
techniques:
 Magnitude Estimation: Participants compare stimulus intensity against a
standard with a set modulus (a reference level for comparison).
 Magnitude Production: Participants are free to choose their own standard,
allowing them to produce or adjust stimuli intensity based on personal perception.
Impact: These methods allowed Stevens to quantify perception intensity, leading to
the development of Stevens' Power Law for a more accurate model of sensory
perception across different stimuli.

Magnitude Estimation
 Standard Reference: A reference stimulus is presented to participants to
establish a baseline for comparison.
Example: In a loudness experiment, a sound is played at a set level as the standard,
so participants know what "10" represents on the rating scale.
 Modulus: A numerical value assigned to the standard to create a consistent
rating scale.
Example: If the standard sound is given a modulus of "10," participants use this to
rate other sounds. A sound perceived as twice as loud might be rated "20," while one
half as loud might be rated "5."

Magnitude Production:
 Adjustment Task: The participant is asked to adjust a stimulus (like
brightness, sound, or weight) to reach a specific intensity level that corresponds to
a given numerical value.
Example: If provided a light with a reference brightness level of "10," participants
may be asked to make another light “twice as bright” by adjusting the dimmer switch
until they perceive it as “20.”
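
One common way to recover the exponent n from magnitude-estimation data is to fit a straight line in log-log coordinates, since taking logarithms of S = k × I^n gives log S = log k + n log I. The Python sketch below illustrates this with invented intensity/estimate pairs; the slope of the fitted line estimates n.

import math

# Minimal sketch: estimating the power-law exponent n from magnitude-estimation data
# by a straight-line fit in log-log coordinates. Intensities and estimates are invented.
intensities = [10, 20, 40, 80, 160]
estimates   = [5.0, 7.6, 11.5, 17.4, 26.4]    # hypothetical magnitude estimates

log_i = [math.log10(i) for i in intensities]
log_s = [math.log10(s) for s in estimates]

# Ordinary least-squares slope of log S on log I gives the exponent n.
mean_x = sum(log_i) / len(log_i)
mean_y = sum(log_s) / len(log_s)
numerator   = sum((x - mean_x) * (y - mean_y) for x, y in zip(log_i, log_s))
denominator = sum((x - mean_x) ** 2 for x in log_i)

print("Estimated exponent n:", round(numerator / denominator, 2))   # ~0.6 for these data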

Comparison of Fechner's law and Stevens' power law:

Adaptability:
- Fechner: Assumes a fixed relationship for all types of stimuli.
- Stevens: Adaptable; the exponent n varies by stimulus, accommodating different types of sensations.

Limitations:
- Fechner: Limited scope; less accurate for certain sensations and extreme intensities.
- Stevens: More complex; requires determining n for each type of stimulus.

Perceived Intensity:
- Fechner: Grows as a logarithm of physical intensity, showing diminishing sensitivity at higher levels.
- Stevens: Grows as a power function of physical intensity, allowing for varying sensitivity based on the type of sensation.

Application
1. Marketing: Adjustments in product features, such as color brightness or
sound intensity, can influence consumer perceptions of quality and
attractiveness. By understanding the perceived intensity, marketers can create
products that appeal to sensory expectations.
2. Room Acoustics: Magnitude production is applied to acoustic design in
spaces like concert halls, allowing designers to optimize sound intensity based
on how different levels of volume are perceived by audiences. This can ensure
balanced acoustics for enhanced auditory experiences.
3. Ergonomics: In designing workspaces, understanding perceived comfort in
lighting and sound levels helps optimize environments for productivity.
Adjustments in these stimuli based on human responses can lead to healthier
and more effective workspaces by tailoring to how intensities are perceived.

Strengths:
1. Empirical Support: Stevens' Power Law is backed by extensive research
across sensory domains, demonstrating that it more accurately fits perceptual
data compared to earlier models, like Weber-Fechner Law. This strong empirical
support makes it a dependable framework for understanding sensory
perception.
2. Predictive Power: The law's ability to predict how perceived intensity
changes with stimulus magnitude allows it to be applied effectively in
psychology, neuroscience, and practical fields like marketing. For instance, in
marketing, it can help predict how changes in product features, like brightness
or sound, influence consumer perception, enhancing targeted design and
product appeal.

Limitations:
1. Individual Differences: Stevens' Power Law assumes a consistent stimulus-
sensation relationship across individuals. However, there are substantial
individual differences in perception, which are often obscured when data is
averaged, reducing the model’s accuracy in capturing unique perceptual
responses.
2. Contextual Factors: External influences like environmental conditions, prior
experiences, and emotional states can significantly impact perception. These
contextual variables introduce variability that the formula doesn’t account for,
leading to limitations in real-world applications where context affects
perception.
3. Fails at Extremes: The law does not reliably predict perceptual changes at
very high or very low stimulus intensities. At these extremes, human perception
often deviates from the power function, indicating that the law is less effective
for extreme sensory conditions.

Signal detection theory


 A framework to analyze how participants categorize ambiguous stimuli,
distinguishing between signal (known process) and noise (chance occurrence).
 Purpose: Examines the ability to process information amidst distractions or
background noise.
 Example: In memory recognition, a signal represents familiarity with a known
stimulus, while noise refers to false familiarity with a new stimulus.
 Key Parameters:
o Sensitivity (d'): Measures the ability to distinguish between signal and
noise; higher sensitivity indicates better distinction.
o Decision Criterion (β): Reflects the threshold for deciding a signal
presence, showing individual biases and risk levels in decision-making.
 Applications: Used in psychology, medical diagnostics, and statistical decision-
making.

History:
Origins (1800s) - Gustav Fechner:
 Fechner, known as the founder of experimental psychology, explored how humans
perceive stimuli.
 His Weber-Fechner Law examined the relationship between stimulus magnitude
and perceived intensity.
 Although indirect, Fechner’s research on stimulus discrimination contributed to
SDT's later development.
World War II (1940s):
 Radar technology needed to distinguish between actual signals (enemy planes)
and "noise" (irrelevant signals).
 The need to minimize false alarms and misses led to the early principles of SDT
focused on decision-making under uncertainty.
Post-War Cognitive Science (1950s-60s):
 SDT principles expanded to cognitive science, exploring human decision-making
amid noise and ambiguity.
 Used in perception, memory, and attention studies, SDT became a framework
for understanding correct detections, false alarms, misses, and correct rejections.

Radar Operators During WWII:


 Radar operators used radar to detect enemy planes amidst noisy signals.
 The research on SDT related to this task was developed in the 1950s and 1960s by
Wilson P. Tanner, David M. Green, and John A. Swets.
Experiment Details:
 Participants: Radar operators.
 Task: Identify enemy planes (signal) from random blips (noise) on radar screens.
 Conditions: Sometimes enemy planes were present (signal), other times they
were absent (no signal).
Possible Outcomes:
 Hit: Correctly identified enemy
plane.
 False Alarm: Incorrectly identified
a plane when none was present.
 Correct Rejection: Correctly
identified no plane was present.
 Miss: Failed to detect a plane that
was present.
Key Concepts:
 Sensitivity (d'): Operator's ability to distinguish between true enemy signals and
noise. High sensitivity means better detection.
 Decision Criterion (β): The threshold the operator uses to decide if they should
report a signal. It reflects their level of caution or boldness in decision-making.
Implications:
 The experiment showed that decision-making is influenced by both signal
strength and psychological factors (e.g., expectations and costs of errors).
 SDT Framework: Quantifies and explains trade-offs in decision-making by using
factors like signal clarity and operator judgment.
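
Under the standard equal-variance Gaussian model, sensitivity and bias can be computed from the hit and false-alarm rates as d' = z(hit rate) − z(false-alarm rate) and c = −½[z(hit rate) + z(false-alarm rate)], with β = exp(d'·c). The Python sketch below applies these textbook formulas to invented trial counts; the numbers are assumptions for illustration only.

import math
from statistics import NormalDist

# Minimal sketch: sensitivity (d') and decision criterion from hit and false-alarm
# rates under the equal-variance Gaussian SDT model. Trial counts are invented.
hits, misses = 40, 10                         # signal-present trials
false_alarms, correct_rejections = 10, 40     # signal-absent trials

hit_rate = hits / (hits + misses)                               # 0.8
fa_rate  = false_alarms / (false_alarms + correct_rejections)   # 0.2

z = NormalDist().inv_cdf            # inverse of the standard normal CDF
d_prime   = z(hit_rate) - z(fa_rate)              # sensitivity, ~1.68 here
criterion = -0.5 * (z(hit_rate) + z(fa_rate))     # criterion location c, ~0 (unbiased)
beta      = math.exp(d_prime * criterion)         # likelihood-ratio criterion, ~1.0

print("d' =", round(d_prime, 2), " c =", round(criterion, 2), " beta =", round(beta, 2))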

Graphical Representation:
 X-axis: Represents signal strength, from noise only (left) to strong signal (right).
 Y-axis: Represents the probability distribution of occurrences for noise and signal.
The two curves show distributions for signal absent and signal present.
Components of the Diagram:
 Signal Absent (Red Curve):
Shows instances when only noise
is present. The further left, the
more likely the event is pure
noise.
 Signal Present (Black Curve):
Shows instances when a signal is
present. The farther right, the
stronger and more detectable the
signal.
 Criterion (Vertical Line): The
decision threshold. Anything to
the right is considered a signal, and anything to the left is considered noise.
The placement of the criterion affects the balance between misses and false
alarms.
Four Possible Outcomes:
 Hit (Black Area): Signal is present, and the observer correctly detects it (signal
strength above the criterion).
 Miss (Shaded Red Area in Black Curve): Signal is present, but the observer
fails to detect it (signal strength below the criterion).
 False Alarm (Shaded Red Area in Red Curve): No signal, but the observer
incorrectly detects a signal (noise surpassing the criterion).
 Correct Reject (Light Red Area): No signal, and the observer correctly
identifies that no signal is present (noise below the criterion).

Application
Sensitivity or Discriminability:
 SDT assesses how well people distinguish stimuli in noisy environments.
Example: Memory tests in different environments (quiet vs. noisy) show how external
conditions affect sensitivity to signals.
Meteorology:
 Meteorologists apply SDT principles to sift through noise in atmospheric data for
better weather predictions.
Example: Weather instruments filter out noise to focus on relevant data for accurate
forecasting.
Bias in Responses:
 SDT measures the bias in decision-making, influenced by factors like rewards or
penalties.
Example: In a memory test, penalties for missed answers might lead to more cautious
or liberal responses.
Technology and Electronics:
 SDT is used in modern systems to distinguish important signals from background
noise, especially in radar and electronic devices.
Example: Car radios use SDT to separate the desired signal from interference when
tuning between stations.

Merits
1. Quantitative Measurement: SDT provides a quantitative framework for
measuring sensitivity (d') and decision criteria (c), allowing for detailed
analysis of performance.
2. Distinction Between Sensitivity and Bias: SDT differentiates between
perceptual sensitivity and decision-making biases, improving
understanding of how signals are detected.
3. Applicability Across Disciplines: SDT is widely used in fields like
psychology, medicine, and telecommunications to assess detection
capabilities.
4. Modeling Complex Tasks: SDT can model more complex decision-
making tasks, such as multi-alternative choices, beyond basic signal
detection.
5. Useful for Diagnostic Testing: In clinical settings, SDT helps evaluate the
effectiveness of diagnostic tests by assessing true positive and false
positive rates.

Limitations:
1. Assumptions of Ideal Observer: SDT assumes an ideal observer model,
which may not accurately capture real-world decision-making complexities.
2. Cognitive Biases: Individual biases (e.g., response bias) can influence
detection performance, making interpretations more complicated.
3. Binary Outcomes: SDT typically focuses on binary outcomes (signal
present or absent), which may oversimplify more complex decision-making
situations.
4. Contextual Influences: Environmental and contextual factors can affect
detection performance but are often not considered in standard SDT models.
5. Limited by Experimental Design: The validity of SDT findings can be
impacted by experimental design factors, such as how stimuli are presented
and the instructions given to participants

ROC curve.
History
Origins: The ROC curve was developed in the 1940s to evaluate radar performance in
identifying enemy versus friendly signals. The term “receiver operating characteristic”
refers to radar operators, who were the “receivers” of these signals.
Key Figure: John A. Swets, an American psychologist and statistician, was
instrumental in adapting ROC analysis for use in psychology and medicine,
broadening its application beyond engineering.
Popularization: Swets' book, Signal Detection Theory and ROC Analysis in
Psychology and Diagnosis, helped popularize the ROC curve for evaluating true versus
false signals in psychology and medical diagnostics.
Significance in Diagnostics: ROC analysis became a valuable tool in fields like
disease detection and psychological assessment by visualizing the trade-off between
sensitivity and specificity, aiding in diagnostic accuracy evaluation.
Broader Applications: Over time, the ROC curve has expanded into fields such as
machine learning and data science, where it is used to assess model accuracy in
classification and decision-making models.

What is an ROC curve?


The ROC curve is a plot of the conditional hit (true-positive) proportion against the
conditional false-alarm (false-positive) proportion for all possible locations of the
decision cut point. The ROC curve depicts data from signal detection experiments.

Area Under Curve (AUC): A single metric representing the overall performance of a
binary classification model, based on the area under its ROC curve.
True Positive Rate (Sensitivity): The proportion of actual positives correctly
identified by the model. The y-axis, or dependent variable, is the true positive rate.
False Positive Rate (1 − Specificity): The proportion of actual negatives that the
model incorrectly classifies as positive. The x-axis, or independent variable, is the
false positive rate.
True Negative Rate (Specificity): The proportion of actual negatives correctly
identified by the model.
Structure of an ROC Curve
Axes

 X axis represents the False Positive Rate (1-Specificity)


 Y axis represents the True Positive Rate (Sensitivity)

The diagonal line represents the performance of a random classifier.

The curved line shows the performance of a model; the closer it is to the top-left
corner, the better the model's performance.

Area Under Curve (AUC) represents discrimination ability; AUC = 1 indicates perfect
discrimination.

Path of the curve: the curve starts at the origin (0,0) and ideally progresses towards
the point (1,1), indicating perfect sensitivity and specificity.
How to plot an ROC using an example?
Imagine a study where participants were asked to detect a faint sound (signal) in
presence of background noise

Step 1 - Define outcomes in a cognitive psychological context


In this task
 A hit occurs when a participant correctly identifies the presence of the sound (True
positive)
 A miss happens when the participant fails to detect the sound when it is present.
(False negative)
 A false alarm happens when the participant reports hearing a sound that is not
actually there. (False positive)
 A correct rejection occurs when the participant correctly identifies that no sound
was present. (True negative)
We assume the participants responded under a range of decision thresholds for
detecting the sound, resulting in various rates of hits and false alarms.

Step 2 - Collect data using Confusion Matrix


This step includes identifying hits, misses, false alarms and correct rejections using
the yes-no method of data collection. Let's assume the study recorded the following
data for a single threshold level:

                         Predicted positive        Predicted negative

Actual positive          30 (Hits)                 20 (Misses)

Actual negative          10 (False Alarms)         40 (Correct Rejections)
Step 3 - Calculate True Positive Rate and False Positive Rate.
To calculate the true positive rate (TPR) and false positive rate (FPR) from the
confusion matrix:

True Positive Rate (TPR), also known as Sensitivity or Recall:
TPR = Hits (TP) / [Hits (TP) + Misses (FN)] = 30 / (30 + 20) = 0.60

False Positive Rate (FPR), also known as 1 − Specificity:
FPR = False Alarms (FP) / [False Alarms (FP) + Correct Rejections (TN)] = 10 / (10 + 40) = 0.20
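The same arithmetic can be expressed as a short Python sketch (added for illustration, reusing the counts from the confusion matrix above; the variable names are ours):

# Counts from the worked confusion matrix above
hits = 30                  # true positives (TP)
misses = 20                # false negatives (FN)
false_alarms = 10          # false positives (FP)
correct_rejections = 40    # true negatives (TN)

tpr = hits / (hits + misses)                              # sensitivity / recall
fpr = false_alarms / (false_alarms + correct_rejections)  # 1 - specificity
specificity = correct_rejections / (correct_rejections + false_alarms)

print(f"TPR = {tpr:.2f}, FPR = {fpr:.2f}, specificity = {specificity:.2f}")
# Expected output: TPR = 0.60, FPR = 0.20, specificity = 0.80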

Step 4 – Generate ROC curve.


 Vary the threshold at which a participant decides a sound is present
 Calculate TPR and FPR for each threshold
 Plot FPR on the x-axis and TPR on the y-axis (a worked sketch follows this list)
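The threshold-sweeping procedure can be sketched in Python as follows. This is an illustration rather than the module's actual data: it assumes an equal-variance Gaussian model, simulates decision-variable samples for noise-only and signal-plus-noise trials, sweeps the "yes" criterion, and plots the resulting (FPR, TPR) pairs.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
noise = rng.normal(loc=0.0, scale=1.0, size=1000)    # noise-only trials
signal = rng.normal(loc=1.0, scale=1.0, size=1000)   # signal + noise trials (d' = 1 in this model)

thresholds = np.linspace(-4, 5, 200)                 # candidate decision criteria
tpr = [(signal > c).mean() for c in thresholds]      # hit rate at each criterion
fpr = [(noise > c).mean() for c in thresholds]       # false-alarm rate at each criterion

plt.plot(fpr, tpr, label="simulated observer")
plt.plot([0, 1], [0, 1], "--", label="chance (random classifier)")
plt.xlabel("False Positive Rate (1 - specificity)")
plt.ylabel("True Positive Rate (sensitivity)")
plt.title("ROC curve from a simulated detection task")
plt.legend()
plt.show()

Lenient (low) thresholds push both the hit and false-alarm rates towards 1, moving the operating point towards (1, 1); strict thresholds move it towards (0, 0). Connecting the points across all thresholds traces out the ROC curve.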

Signal Detection Theory (SDT) & ROC Curve


Development and Purpose: Signal Detection Theory (SDT) and the ROC
(Receiver Operating Characteristic) curve were developed in the early 1950s by
Peterson and Birdsall to mathematically analyze signal detection performance.
Decision Criterion:
 SDT introduces the concept of an optimal decision criterion, or cutpoint, to balance
true signals and false alarms.
 The ROC curve illustrates how adjusting this criterion impacts the trade-off
between true positives (hits) and false positives (false alarms).
Sensitivity Measurement:
 Sensitivity (or discrimination ability) reflects an observer’s skill in distinguishing
noise from signal-plus-noise situations.
 The ROC curve displays the true positive rate against the false positive rate, with
higher sensitivity indicated by closer proximity to the top-left corner.
Beyond Signal-to-Noise Ratio (SNR):
 While SNR measures signal clarity, the ROC curve provides a broader assessment
of detection performance under various noise conditions.
 It allows for the evaluation of detection across different SNR levels, showing
performance adaptability.
Ideal vs. Human Observers:
 SDT benchmarks an "ideal observer" for comparison with human performance.
 The ROC curve and the Area Under the Curve (AUC) help quantify how close human
detection aligns with the ideal.
Significance of the ROC Curve in SDT: The ROC curve is central in SDT for
examining detection performance, visualizing criterion shifts, and assessing sensitivity
across SNR conditions, providing a practical measurement tool for signal detection
analysis.
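Under the standard equal-variance Gaussian SDT model, sensitivity (d') and the decision criterion (c) can be recovered from a single pair of hit and false-alarm rates via z-transforms, and the area under the corresponding ROC follows from d'. The following is a minimal Python sketch added for illustration (it reuses the rates from the worked example above and assumes scipy is available):

# Assumes the equal-variance Gaussian SDT model
from scipy.stats import norm

hit_rate = 0.60           # TPR from the worked example
false_alarm_rate = 0.20   # FPR from the worked example

z_hit = norm.ppf(hit_rate)           # z-transform of the hit rate
z_fa = norm.ppf(false_alarm_rate)    # z-transform of the false-alarm rate

d_prime = z_hit - z_fa               # sensitivity: separation of noise and signal+noise distributions
criterion = -(z_hit + z_fa) / 2      # response criterion: positive values = conservative responding
auc = norm.cdf(d_prime / 2 ** 0.5)   # area under the equal-variance Gaussian ROC

print(f"d' = {d_prime:.2f}, c = {criterion:.2f}, AUC = {auc:.2f}")

A larger d' indicates better discrimination of signal from noise, while a positive criterion c reflects a conservative observer who requires stronger evidence before reporting the signal.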
Advantages
Visual Representation: Provides a clear, graphical view of a model’s performance
across different classification thresholds, simplifying comparisons between models or
classifiers.
Threshold Selection: Shows the trade-off between sensitivity (true positive rate)
and specificity (1 − false positive rate), aiding in the choice of an optimal threshold
tailored to specific tasks.
Performance Comparison: Enables comparison of different models or algorithms
independent of threshold selection, with the AUC (Area Under the Curve) providing a
single scalar value to summarize performance for easy quantitative comparison.
Application Across Domains: Used across various fields, such as medical
diagnostics and machine learning, for evaluating binary classification, highlighting its
versatility and wide applicability.

Limitations
Primarily for Binary Classification: Best suited for binary classification tasks;
requires adjustments for multi-class classification, which complicates interpretation.
Insensitive to Class Imbalance: Can give an overly optimistic view with
imbalanced datasets, as it does not consider class distribution; Precision-Recall curves
may be more informative in these cases.
Limited by AUC Interpretation: AUC reduces the ROC curve to a single number,
which can obscure specific trade-offs. Models with similar AUCs may perform
differently in practice.
No Optimal Threshold Selection: Does not provide a direct method for choosing
the best decision threshold to balance false positives and false negatives, which is
crucial in practical applications.
Assumes Independence and Identical Distribution: Assumes positive and
negative instances are independent and identically distributed, which may not hold in
real-world data.
Less Informative for Rare Events: Can be misleading for rare events or highly
imbalanced data, as high specificity can be achieved without effective class
distinction.
Complexity in High-Dimensional Spaces: Interpreting ROC curves in high-
dimensional or complex models is challenging, and nuances in model performance
may be overlooked.
Limited Utility with Continuous or Ordinal Data: Less suitable for tasks with
ordinal or continuous outcomes, as they don’t fit well with the binary, threshold-based
ROC analysis.

Applications
Evaluating Diagnostic Tests: Widely used in medical research to assess
diagnostic test accuracy by comparing sensitivity and specificity, useful in detecting
diseases like cancer or heart conditions.
Comparing Classifiers: Helps compare multiple classification models to determine
the best trade-off between true positives and false positives, aiding in model
selection.
Assessing Credit Scoring Models: Used in finance to evaluate credit scoring
models, identifying the optimal threshold to classify borrowers as low or high risk.
Fraud Detection: Applied in fraud detection to measure model effectiveness in
distinguishing fraudulent from non-fraudulent transactions, where accurate
classification is critical.
Image and Object Recognition: Used in computer vision to evaluate object
recognition and image classification models by balancing detection accuracy with
minimizing false alarms.
Drug Discovery and Toxicology: In pharmaceutical research, it helps assess
models predicting drug efficacy or toxicity, aiding in identifying safe and effective
compounds.
Speech and Signal Processing: Evaluates models in speech recognition or signal
processing by distinguishing true signals, such as speech or alarms, from noise.
Weather and Climate Prediction: Applied in meteorology to evaluate models
predicting extreme weather events, balancing accurate warnings with minimizing
false alarms.
