MODULE 3 - Sensation and Perception
Experimental Evidence:
DeLucia and Hochberg (1991) showed the Müller-Lyer effect in three-dimensional
displays where all fins were at the same distance, challenging the misapplied size-
constancy explanation.
Action-based studies (e.g., Gentilucci et al., 1996; Aglioti et al., 1995) found
reduced effects of visual illusions during physical actions like pointing or gripping.
Neisser's Schema Theory
Developed by Ulric Neisser in 1976.
Also known as the Interactive Theory of Perception.
Proposes that perception is a cyclical process involving the interaction between
internal schemata (mental templates) and environmental stimuli.
Perceptual Exploration:
Expectations guide our exploration of the perceptual field.
Sensory organs and locomotive actions facilitate the gathering of relevant
information while filtering out the irrelevant.
The gathered information can modify existing schemata.
Ecological Validity:
Neisser's model emphasizes the importance of context and ecological factors
in perception, ensuring that the theory aligns with real-world experiences.
LIMITATIONS:
Does not adequately explain how we recognize particular features.
Fails to explain the effects of prior expectations and environmental context on some
phenomena of pattern perception.
Not all perception researchers accept the notion of geons as fundamental units of
object perception (Tarr and Bulthoff, 1995).
Applications:
Object Recognition
Biological Area
Eye detection in facial images
Merits:
Estimates become quite good with enough data
Template matching is most effective in pattern-recognition machines that read
numbers and letters presented in standardized, constrained contexts (for
example, scanners that read credit card numbers, or machines that check the
postal ZIP codes printed on envelopes)
Demerits:
Slight changes in size or orientation variation can cause problems
It often uses several templates to represent one object
Templates may be of different sizes
Template matching requires high computational power because the detection of large
patterns of an image is time consuming.
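The merits and demerits above follow directly from how the technique computes a match. Below is a minimal Python sketch of template matching via normalized cross-correlation; the function name and scoring rule are illustrative choices for these notes, not a specific production algorithm.

```python
import numpy as np

def match_template(image: np.ndarray, template: np.ndarray):
    """Slide the template over every position in the image and return the
    (row, col) of the best match, scored by normalized cross-correlation."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            if denom == 0:
                continue  # flat patch or flat template: correlation undefined
            score = float((p * t).sum() / denom)  # ranges from -1 to 1
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score
```

Because the comparison is position-by-position against a fixed-size template, a character that is slightly scaled or rotated scores poorly, and the nested sliding loops are costly for large images, which is exactly the sensitivity to size and orientation and the computational demand listed above.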
Selfridge's Pandemonium Model
Pandemonium is a model of bottom-up pattern recognition.
It has applications in artificial intelligence and pattern recognition.
The theory was developed by the artificial intelligence pioneer Oliver Selfridge in
1959. This model is now recognized as a foundational account of visual pattern
recognition in cognitive science.
The pandemonium architecture arose in response to the inability of template-
matching theories to offer a biologically plausible explanation of the image-
constancy phenomenon.
The basic idea of the pandemonium architecture is that a pattern is first
perceived in its parts before the "whole".
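To make the "parts before the whole" idea concrete, here is a minimal Python sketch of the demon hierarchy: feature demons report simple features, cognitive demons "shout" in proportion to how well those features match their letter, and a decision demon picks the loudest shout. The feature set, letter templates, and shouting rule are invented for illustration and are not Selfridge's original specification.

```python
# Each cognitive demon knows how many of each feature its letter contains.
LETTER_TEMPLATES = {
    "A": {"oblique_line": 2, "horizontal_line": 1},
    "H": {"vertical_line": 2, "horizontal_line": 1},
    "O": {"curve": 1},
}

def decision_demon(observed_features: dict) -> str:
    """Return the letter whose cognitive demon shouts loudest."""
    def shout(template: dict) -> int:
        # Louder shout = more observed features shared with the letter.
        return sum(min(observed_features.get(f, 0), n)
                   for f, n in template.items())
    return max(LETTER_TEMPLATES, key=lambda L: shout(LETTER_TEMPLATES[L]))

# Feature demons have reported two vertical lines and one horizontal line.
print(decision_demon({"vertical_line": 2, "horizontal_line": 1}))  # -> "H"
```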
Limitation:
This theory may explain how letters are recognised but cannot explain how we
recognize complex real-world objects like a horse.
Studies show that we sometimes notice a distinctive feature within a whole
figure but miss the same feature when it is presented in isolation; in the real
world we see an object only as a whole.
It could not explain the full complexity of pattern recognition.
It does not specify which brain-cell detector responds to which stimuli.
Types Of Pain
Acute pain is a sharp pain of short duration with an easily identified cause. It is
often localized in a small area before spreading to neighbouring areas, and it is
usually treated with medications. Chronic pain is intermittent or constant pain of
varying intensity that lasts for longer periods. Chronic pain is more difficult to
treat and requires expert professional care. A number of theories have been
postulated to describe the mechanisms underlying pain perception.
Nociceptive pain describes pain caused by actual or potential physical damage
to the body, while neuropathic pain describes pain that develops when the
nervous system is damaged or not working properly because of disease or
injury.
Pain medicines
Many people will use a pain medicine (analgesic) at some time in their lives. The main
types of pain medicines are:
paracetamol – often recommended as the first medicine to relieve short-term pain
aspirin – for short-term relief of fever and mild-to-moderate pain (such as period
pain or headache)
non-steroidal anti-inflammatory drugs (NSAIDs), such as ibuprofen – these
medicines relieve pain and reduce inflammation (redness and swelling)
opioid medications, such as codeine, morphine and oxycodone – these medicines
are reserved for severe or cancer pain
local anaesthetics
some antidepressants
some anti-epileptic medicines.
3) Shape constancy:
A type of perceptual constancy in which an object is perceived as having the same
shape when viewed at different angles. For example, a plate is still perceived as
circular despite appearing as an oval when viewed from the side.
Function: It ensures that an object's shape is perceived as constant, even when its
orientation or retinal image changes. This ability prevents objects from appearing
different depending on the observer's position (e.g., a circle always appears
circular regardless of viewpoint) (Sternberg & Sternberg, 2016).
Neuropsychological Basis: The extrastriate cortex is involved in processing
shape constancy, according to neuropsychological imaging (Kanwisher et al., 1996,
1997).
Echolocation:
Echolocation in Blind Individuals: Blind individuals trained in echolocation
demonstrate perceptual constancy specific to auditory stimuli. They can discern
changes in reflected sound waves based on object properties, not variations in
their own echolocation clicks.
Development through Learning: This study suggests that perceptual constancy
is not solely inherent but can be developed through learning and experience, even
within novel sensory modalities like echolocation.
Adaptation to New Sensory Input: The ability to apply perceptual constancy to
auditory stimuli highlights how sensory adaptation can enhance environmental
understanding, even in the absence of sight.
Theories of illusions
An illusion is a misleading image (optical illusion) or a perception that deceives or
misleads intellectually, leading to a misinterpretation of an object’s actual nature.
Importance: Illusions are significant because they offer insights into how the
visual system functions and how our brain processes sensory information,
revealing its mechanisms and potential areas where perception can be distorted.
Ponzo Illusion:
1. Illusion Description: In the Ponzo illusion, two parallel
horizontal lines of equal length are perceived differently. The
upper line appears longer than the bottom line when both are
flanked by converging oblique lines (similar to railroad tracks
receding into the distance).
2. Visual Mechanism: The illusion occurs because the brain
interprets the converging lines as depth cues, perceiving the
upper line as farther away and thus larger, maintaining size constancy.
3. Name: This illusion is also referred to as the "filled space-open space" illusion,
as the top line is perceived to be in a "filled" space (closer to the top of the
converging lines) while the bottom line is in an "open" space.
Horizontal-Vertical Illusion:
Line Length Distortion:
1. Vertical lines next to horizontal lines appear longer than
when presented alone.
2. Horizontal lines next to vertical lines appear shorter
than when presented alone.
Effect of Line Interruption: When a line is interrupted by
another line, it is perceived as being shorter than the same line when it is
uninterrupted.
Perceptual Mechanism: This illusion occurs due to the brain's reliance on
surrounding context, where the presence of other lines influences the perception of
length, leading to misjudgments based on their orientation or position.
Illusion in Music:
1. Shepard’s Tone Illusion: Shepard (1961) used complex tones that seem to
continuously increase in pitch, creating the illusion of an endlessly rising pitch,
despite returning to the original tone after many iterations. This was achieved
by manipulating the harmonics.
2. Octave Illusion: Created by Deutsch (1983), this illusion involves presenting
one tone to each ear, with the tones being an octave apart (e.g., G4 in one ear
and G5 in the other). When the tones are switched between ears, listeners
perceive the tones as coming from a single location, even though they are
physically different.
Illusory Movement:
1. Stroboscopic Movement: This illusion occurs when a series of rapidly
presented static images or lights creates the perception of continuous motion,
due to how the visual system processes the stimulation.
2. Autokinesis: A stationary object, when viewed against a featureless
background, appears to move. This is thought to result from spontaneous, tiny
movements of the eyes.
History of Psychophysics
Fechner’s Discovery (October 22, 1850): Fechner had an epiphany about
measuring sensation, realizing that it could be done in the same way as physical
measurements—by using a null matching procedure.
Null Matching: This method involves adjusting one stimulus until it matches
another in terms of perception. The "null" or "no difference" judgment is key, where
the two stimuli are perceived as identical in sensation.
Scientific Use of Null Matching: Fechner emphasized that this procedure allows
for consistent and objective measurements, whether done by trained scientists or
untrained observers using instruments designed to replicate the null matching
procedure (e.g., pH meters in chemistry labs, which align with the titration method
used by chemists).
Sensation and Judgment: Fechner argued that it’s not possible to quantify the
magnitude of sensations (e.g., one light being exactly "twice as bright" as another),
but we can assess whether sensations are present and whether two sensations are
the same. This formed the basis of classical psychophysics.
Fechner’s Contribution: He established that sensations could be specified based
on the stimulus values needed to produce specific types of judgments. This led to
the development of the framework in psychophysics, where we can compare
sensations and their corresponding stimulus intensities, allowing for measurements
like brightness comparison.
Threshold
Absolute Threshold: This is the smallest amount of stimulus energy required for the
sensory system to detect a stimulus 50% of the time. It represents the minimum
intensity at which a stimulus is perceived.
Difference Threshold: Also known as the just noticeable difference (JND), this is
the minimum detectable difference between two stimuli. Like the absolute threshold,
the difference threshold is the smallest difference that can be perceived 50% of the
time.
Threshold Discrimination
Discrimination Experiments: These experiments aim to determine the point
at which a subject can detect the difference between two stimuli (e.g., two
weights or sounds).
Point of Subjective Equality (PSE): This is the point at which the subject
perceives two stimuli (such as weights) to be the same, even though they may
differ physically.
Just Noticeable Difference (JND): Also known as the difference limen (DL),
this refers to the smallest difference in stimuli that the subject can detect. It is
typically defined as the difference that can be noticed 50% of the time
(proportion p = 50%).
METHOD OF LIMITS
The Method of Limits is a psychophysical procedure used to determine the sensory
threshold by gradually increasing or decreasing the magnitude of the stimulus in
discrete steps.
Procedure:
A stimulus of a given intensity is presented to the participant.
If the stimulus is perceived, the intensity is decreased on the next trial, continuing
until the participant no longer detects the stimulus.
If the stimulus is not perceived, the intensity is increased on the next trial,
continuing until the participant detects the stimulus.
Example: In a light detection experiment, you might be presented with a bright light
and asked if you can see it. If you can, the intensity is decreased in the next trial. This
process continues until you report no longer being able to detect the light.
Ascending Method
The stimulus is presented initially at a very low intensity.
The intensity is gradually increased until the subject can perceive the stimulus.
The threshold is determined by the intensity level at which the subject first
detects the stimulus.
Descending Method
The stimulus is presented at a high intensity.
The intensity is gradually decreased until the subject can no longer perceive
the stimulus.
The threshold is determined by the intensity level at which the subject stops
detecting the stimulus.
Error of Anticipation:
This error happens when a participant changes their response before the
stimulus intensity actually warrants it.
Essentially, they anticipate the change based on the direction of the series
(ascending or descending), rather than responding to the actual sensory
experience.
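The ascending/descending procedure can be simulated to show how a threshold is estimated from crossover points. The sketch below assumes a noisy observer with a "true" threshold of 10 units; the starting intensities, step size, and noise level are all hypothetical.

```python
import random

def run_series(true_threshold, start, step, ascending, noise=0.5):
    """Run one method-of-limits series; return the intensity at which the
    simulated observer's response first changes (the crossover point)."""
    intensity = start
    last_response = None
    while True:
        # Noisy observer: detects when intensity exceeds a jittered threshold.
        detected = intensity > true_threshold + random.gauss(0, noise)
        if last_response is not None and detected != last_response:
            return intensity
        last_response = detected
        intensity += step if ascending else -step

random.seed(1)
true_threshold = 10.0
crossovers = []
for i in range(20):
    ascending = i % 2 == 0  # alternate series to balance anticipation errors
    start = 5.0 if ascending else 15.0
    crossovers.append(run_series(true_threshold, start, 1.0, ascending))

print(f"Estimated threshold: {sum(crossovers) / len(crossovers):.2f}")
```

Averaging crossovers from alternating ascending and descending series is what lets the errors of anticipation and habituation largely cancel out.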
METHOD OF ADJUSTMENT
A psychophysical technique in which the participant adjusts a variable stimulus to
match a constant or standard.
Purpose: It measures how people perceive differences between stimuli.
Process: In this technique, participants adjust a variable stimulus to match a
constant or standard stimulus.
Example: One common example is adjusting a comparison stimulus to match the
brightness of a standard visual stimulus.
Other Names: It is also referred to as the adjustment method, error method,
method of average error, or method of equivalents
How It Works
Two Stimuli:
o Standard Stimulus (St): A fixed stimulus with a known intensity (e.g.,
brightness, loudness, or length).
o Comparison Stimulus (Co): A stimulus that starts at a random intensity
and can be adjusted by the subject.
Task: The subject adjusts the comparison stimulus (Co) until it appears to match the
standard stimulus (St) in terms of intensity or other characteristics being studied (e.g.,
brightness, sound level, or length).
Goal: To make the two stimuli appear the same in the characteristic being studied.
Key Components
Subjective Matching: The subject adjusts the comparison stimulus based on their
personal perception, making it subjective to their sensory experience.
Repeated Judgments: The adjustment process is repeated multiple times, with the
comparison stimulus starting at different initial intensities each time.
Point of Subjective Equality (PSE)
Calculation: After all adjustments are made, the average of the adjustments is
calculated.
Definition: The average is called the Point of Subjective Equality (PSE).
Meaning: The PSE represents the point at which the subject perceives the two
stimuli as equal on average.
Error Calculation
Constant Error (CE): The difference between the standard stimulus (St) and the
Point of Subjective Equality (PSE) is called the Constant Error (CE).
Interpretation of CE:
1. Positive CE: If the PSE is greater than the standard stimulus (St), the subject
tends to overestimate the standard stimulus.
2. Negative CE: If the PSE is less than the standard stimulus (St), the subject
tends to underestimate the standard stimulus.
Example: If the comparison light (Co) is adjusted to appear brighter than the
standard light (St) on average, it’s overestimation. If the comparison light appears
dimmer, it’s underestimation.
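Computing the PSE and CE from adjustment data is a matter of averaging and subtracting. A minimal sketch, assuming hypothetical brightness-matching settings and a standard stimulus of 50 units:

```python
# Hypothetical data: each value is the final setting of the comparison
# stimulus (Co) after one adjustment trial, in arbitrary brightness units.
adjustments = [52.1, 49.8, 53.4, 51.0, 50.6, 52.9, 48.7, 51.8]
standard = 50.0  # intensity of the standard stimulus (St)

pse = sum(adjustments) / len(adjustments)  # Point of Subjective Equality
ce = pse - standard                        # Constant Error

print(f"PSE = {pse:.2f}")
print(f"CE  = {ce:+.2f} ({'over' if ce > 0 else 'under'}estimation of St)")
```

With these invented settings the PSE comes out near 51.3, a positive CE, i.e. overestimation of the standard in the sense defined above.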
Fechner's contributions
Origin of Concept: The concept was originally postulated by Ernst Heinrich
Weber in 1834 to describe research on weight lifting.
Fechner’s Role: Gustav Theodor Fechner, Weber’s student, applied this
concept to the measurement of sensation and developed it into the science of
psychophysics.
Weber's Influence: Ernst Weber was one of the first to study the human
response to physical stimuli quantitatively.
Fechner’s Law: Fechner named his first law in honor of Weber, who had
conducted the experiments necessary to formulate it.
Fechner’s law
Logarithmic Perception: The perceived intensity of
a stimulus increases logarithmically as its actual
intensity increases. This means that the larger the
stimulus, the more of it is needed to detect a change.
Example 1 - Weight Perception: Adding a 1-pound
weight to a 10-pound weight feels more significant
than adding the same 1-pound weight to a 100-pound
weight. The perception of change diminishes as the
base weight increases.
Example 2 - Price Perception: When buying a small item (e.g., $10), a $1 increase
feels significant. However, when buying a more expensive item (e.g., a $1,000
laptop), the same $1 increase is barely noticeable. Fechner’s Law suggests that our
sensitivity to price changes decreases as the total price increases.
Mathematical formulation
S = k log(I)
Where:
S is the perceived sensation.
k is a constant that depends on the specific sense being measured (e.g., weight,
decibels, etc.).
I is the intensity of the stimulus.
Implication: As the intensity of the stimulus increases, equal increases in intensity
lead to smaller increases in perceived sensation at higher levels of intensity.
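A short sketch makes this diminishing-returns implication visible. It assumes k = 1 and a natural logarithm (the choice of base only rescales k):

```python
import math

def fechner_sensation(intensity, k=1.0):
    """Perceived sensation under Fechner's law: S = k * log(I)."""
    return k * math.log(intensity)

# Equal physical steps of +10 units yield shrinking steps in sensation.
previous = fechner_sensation(10)
for i in (20, 30, 40):
    s = fechner_sensation(i)
    print(f"I = {i}: S = {s:.3f} (gain of {s - previous:.3f})")
    previous = s
# The 10 -> 20 step adds about 0.693 to S; the 30 -> 40 step adds only ~0.288.
```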
Weber's law
Personal life
Born: June 24, 1795, in Wittenberg, Germany.
Family: The eldest of 13 children; his father was a professor of theology,
providing an academically stimulating environment.
Education: Enrolled at the University of Wittenberg, where he studied
medicine and obtained his doctorate in 1815.
University of Leipzig: Began his academic career in 1821, becoming a
professor of comparative anatomy.
Contributions:
o Weber is considered one of the founders of experimental psychology.
o Most notable for his work in sensory perception, including the Just
Noticeable Difference (JND), Weber’s Law, and the Two-Point
Threshold (the minimum distance at which a person can distinguish two
separate points of contact on the skin).
o His work influenced Gustav Fechner, who expanded Weber’s ideas into
the broader field of psychophysics.
History of Weber’s Law:
Origin: The law was initially postulated by Ernst Heinrich Weber in 1834 to
describe research on weight lifting.
Fechner’s Contribution: Weber’s student, Gustav Theodor Fechner, later
applied Weber’s law to the measurement of sensation in relation to a stimulus.
Fechner expanded on this idea and developed it into the science of
psychophysics.
Fechner’s Naming: Fechner named the resulting formula Weber’s Law, which is
sometimes referred to as the Fechner-Weber Law.
Impact on Psychology: This allowed for the measurement of sensation in relation
to a physical stimulus, enabling the development of quantitative psychology.
Psychophysics: The field of psychophysics studies the quantitative
relationship between psychological events (such as sensations) and physical
events (such as stimuli that produce them).
Foundational Principle: Weber’s Law is a key concept in sensory perception and
psychophysics, formulated by the German physiologist and psychologist Ernst
Heinrich Weber in the 19th century.
Just Noticeable Difference (JND): The law quantifies the smallest detectable
difference between two stimuli, known as the just noticeable difference (JND).
Proportional Relationship: According to Weber’s Law, the JND is proportional to the
magnitude of the original stimulus. This means the ability to detect changes in a
stimulus depends on the relative change rather than the absolute change in the
stimulus.
Formula:
ΔI / I = k
Where:
ΔI is the minimum change in intensity required for detection (also known as
the just noticeable difference (JND)).
I is the intensity of the original stimulus.
k is the Weber fraction, a constant that varies for each type of sensory
modality (e.g., vision, hearing, etc.).
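The formula translates directly into code. The sketch below uses a Weber fraction of 0.1 for lifted weights so that it mirrors the 10-grams-on-100-grams example given later in these notes; empirical Weber fractions vary by study and by modality.

```python
def jnd(intensity, weber_fraction):
    """Just noticeable difference under Weber's law: delta_I = k * I."""
    return weber_fraction * intensity

k_weight = 0.1  # illustrative Weber fraction for lifted weights
for grams in (100, 200, 500, 1000):
    print(f"Standard = {grams:4d} g  ->  JND = {jnd(grams, k_weight):.0f} g")
# A 100 g standard needs ~10 g more to notice; a 1000 g standard needs ~100 g.
```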
Experiments
Weber’s Weight Discrimination Experiment:
Objective: In 1834, Ernst Heinrich Weber conducted an experiment to understand
how small changes in physical stimuli (specifically weight) could be detected by the
human senses.
Method:
Participants were blindfolded to eliminate visual bias.
They were given two equal standard weights (one in each hand) to familiarize
themselves with the weight.
After this, a slightly heavier test weight was added to one hand.
The participants were then asked to compare the weights and judge which one felt
heavier.
Observations:
Participants found it harder to detect differences when the standard weight was
larger.
Example: If the standard weight was 100 grams, they could easily detect an
additional 10 grams. However, with a 200-gram standard weight, they needed an
additional 20 grams to perceive a difference.
Conclusion: The experiment showed that the ability to perceive changes in weight
was not based on an absolute difference but rather on a constant ratio between
the two weights, which led to the formulation of Weber’s Law.
Weber's Two-Point Discrimination Test:
Purpose: The test was designed to measure the two-point threshold, which is the
minimum distance between two points on the skin at which an individual can
distinguish them as separate touches.
Method:
Calipers with two sharp points were used to touch different areas of the
participant’s skin.
Participants, who were blindfolded, were asked to report whether they felt one or
two distinct points of contact.
The distance between the points was gradually adjusted until the participant could
no longer distinguish them as separate touches.
Test Locations: The test was conducted on various body parts, including fingertips,
face, lips, palms, and back, to examine differences in tactile sensitivity across body
locations.
Findings:
Tactile sensitivity varies significantly across different parts of the body.
Fingertips: Sensitivity ranged from 2 to 8 mm, allowing participants to
distinguish points just a few millimeters apart.
Lips: Sensitivity was slightly more refined, with a threshold of 2 to 4 mm.
Palms: Less sensitive, with a threshold of 8 to 12 mm.
Back or Shins: Much lower sensitivity, with a threshold of 30 to 40 mm, requiring
a much larger distance to distinguish separate points.
Fechner’s Sensory Threshold Experiments:
Extension of Weber’s Work: Gustav Fechner extended Weber's findings and
developed the field of psychophysics, applying Weber’s law to various other senses,
including vision and hearing.
Method:
Participants were asked to observe stimuli such as lights of varying brightness or
sounds of different volumes.
They were then asked to indicate the point at which they could detect a
difference between two levels of stimuli (the Just Noticeable Difference (JND)).
Findings:
Fechner confirmed Weber's principle, showing that the JND follows a
proportional relationship across different sensory modalities.
Key Observation:
1. If a stimulus (e.g., sound or light) is faint, only a small change is needed to
detect it.
2. If the initial stimulus is already intense, a larger change is required for it to be
noticeable.
Brightness Example
Suppose we have a light with intensity 100 units. If we
double the intensity to 200 units, let’s calculate the perceived brightness:
Initial Perceived Brightness:
Using the power function S = k × I^0.33 and, for simplicity, k = 1:
At 100 units of intensity: S = 1 × 100^0.33 ≈ 4.64
Perceived Brightness at 200 units: S = 1 × 200^0.33 ≈ 5.85
So, while the light's physical intensity doubled (from 100 to 200), the perceived
brightness only increased from about 4.64 to 5.85, far from doubling. This
illustrates that our perception of brightness grows more slowly than the actual
increase in intensity.
Magnitude Estimation
Standard Reference: A reference stimulus is presented to participants to
establish a baseline for comparison.
Example: In a loudness experiment, a sound is played at a set level as the standard,
so participants know what "10" represents on the rating scale.
Modulus: A numerical value assigned to the standard to create a consistent
rating scale.
Example: If the standard sound is given a modulus of "10," participants use this to
rate other sounds. A sound perceived as twice as loud might be rated "20," while one
half as loud might be rated "5."
Magnitude Production:
Adjustment Task: The participant is asked to adjust a stimulus (like
brightness, sound, or weight) to reach a specific intensity level that corresponds to
a given numerical value.
Example: If provided a light with a reference brightness level of "10," participants
may be asked to make another light “twice as bright” by adjusting the dimmer switch
until they perceive it as “20.”
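Magnitude-estimation data are typically summarized by fitting Stevens' power law, S = c × I^n, which is linear in log-log coordinates, so the exponent n falls out of a straight-line fit. A minimal sketch with hypothetical intensities and ratings (constructed to follow an exponent near 0.33):

```python
import numpy as np

# Hypothetical magnitude-estimation data: stimulus intensities and the
# average numeric ratings participants assigned (modulus: 10 at I = 100).
intensity = np.array([50, 100, 200, 400, 800], dtype=float)
rating = np.array([7.9, 10.0, 12.6, 15.9, 20.0])

# Stevens' law S = c * I^n is linear in log-log space:
# log S = log c + n * log I, so the slope of the fit is the exponent n.
slope, intercept = np.polyfit(np.log(intensity), np.log(rating), 1)
print(f"Estimated exponent n = {slope:.2f}")   # ~0.33 for these data
print(f"Estimated constant c = {np.exp(intercept):.2f}")
```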
Comparison of Stevens' Power Law and the Weber-Fechner Law:
ADAPTABILITY:
Stevens' Power Law: Adaptable; n varies by stimulus, accommodating different
types of sensations.
Weber-Fechner Law: Assumes a fixed relationship for all types of stimuli.
LIMITATIONS:
Stevens' Power Law: More complex; requires determining n for each type of
stimulus.
Weber-Fechner Law: Limited scope; less accurate for certain sensations and
extreme intensities.
PERCEIVED INTENSITY:
Stevens' Power Law: Grows as a power function of physical intensity, allowing
for varying sensitivity based on the type of sensation.
Weber-Fechner Law: Grows as a logarithm of physical intensity, showing
diminishing sensitivity at higher levels.
Application
1. Marketing: Adjustments in product features, such as color brightness or
sound intensity, can influence consumer perceptions of quality and
attractiveness. By understanding the perceived intensity, marketers can create
products that appeal to sensory expectations.
2. Room Acoustics: Magnitude production is applied to acoustic design in
spaces like concert halls, allowing designers to optimize sound intensity based
on how different levels of volume are perceived by audiences. This can ensure
balanced acoustics for enhanced auditory experiences.
3. Ergonomics: In designing workspaces, understanding perceived comfort in
lighting and sound levels helps optimize environments for productivity.
Adjustments in these stimuli based on human responses can lead to healthier
and more effective workspaces by tailoring to how intensities are perceived.
Strengths:
1. Empirical Support: Stevens' Power Law is backed by extensive research
across sensory domains, demonstrating that it more accurately fits perceptual
data compared to earlier models, such as the Weber-Fechner Law. This strong empirical
support makes it a dependable framework for understanding sensory
perception.
2. Predictive Power: The law's ability to predict how perceived intensity
changes with stimulus magnitude allows it to be applied effectively in
psychology, neuroscience, and practical fields like marketing. For instance, in
marketing, it can help predict how changes in product features, like brightness
or sound, influence consumer perception, enhancing targeted design and
product appeal.
Limitations:
1. Individual Differences: Stevens' Power Law assumes a consistent stimulus-
sensation relationship across individuals. However, there are substantial
individual differences in perception, which are often obscured when data is
averaged, reducing the model’s accuracy in capturing unique perceptual
responses.
2. Contextual Factors: External influences like environmental conditions, prior
experiences, and emotional states can significantly impact perception. These
contextual variables introduce variability that the formula doesn’t account for,
leading to limitations in real-world applications where context affects
perception.
3. Fails at Extremes: The law does not reliably predict perceptual changes at
very high or very low stimulus intensities. At these extremes, human perception
often deviates from the power function, indicating that the law is less effective
for extreme sensory conditions.
History of Signal Detection Theory (SDT):
Origins (1800s) - Gustav Fechner:
Fechner, known as the founder of experimental psychology, explored how humans
perceive stimuli.
His Weber-Fechner Law examined the relationship between stimulus magnitude
and perceived intensity.
Although indirect, Fechner’s research on stimulus discrimination contributed to
SDT's later development.
World War II (1940s):
Radar technology needed to distinguish between actual signals (enemy planes)
and "noise" (irrelevant signals).
The need to minimize false alarms and misses led to the early principles of SDT
focused on decision-making under uncertainty.
Post-War Cognitive Science (1950s-60s):
SDT principles expanded to cognitive science, exploring human decision-making
amid noise and ambiguity.
Used in perception, memory, and attention studies, SDT became a framework
for understanding correct detections, false alarms, misses, and correct rejections.
Graphical Representation:
X-axis: Represents signal strength, from noise only (left) to strong signal (right).
Y-axis: Represents the probability distribution of occurrences for noise and signal.
The two curves show distributions for signal absent and signal present.
Components of the Diagram:
Signal Absent (Red Curve):
Shows instances when only noise
is present. The further left, the
more likely the event is pure
noise.
Signal Present (Black Curve):
Shows instances when a signal is
present. The farther right, the
stronger and more detectable the
signal.
Criterion (Vertical Line): The
decision threshold. Anything to
the right is considered a signal, and anything to the left is considered noise.
The placement of the criterion affects the balance between misses and false
alarms.
Four Possible Outcomes:
Hit (Black Area): Signal is present, and the observer correctly detects it (signal
strength above the criterion).
Miss (Shaded Red Area in Black Curve): Signal is present, but the observer
fails to detect it (signal strength below the criterion).
False Alarm (Shaded Red Area in Red Curve): No signal, but the observer
incorrectly detects a signal (noise surpassing the criterion).
Correct Rejection (Light Red Area): No signal, and the observer correctly
identifies that no signal is present (noise below the criterion).
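From these outcomes, SDT derives a sensitivity index (d') and a criterion measure (c) using the inverse of the standard normal distribution. A minimal sketch with hypothetical hit and false-alarm rates:

```python
from scipy.stats import norm

def dprime_and_criterion(hit_rate, fa_rate):
    """Sensitivity d' = z(H) - z(FA); criterion c = -(z(H) + z(FA)) / 2."""
    z_h, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return z_h - z_fa, -(z_h + z_fa) / 2

# Hypothetical observer: 80% hits, 20% false alarms.
d, c = dprime_and_criterion(0.80, 0.20)
print(f"d' = {d:.2f}, criterion c = {c:.2f}")  # d' ~ 1.68, c = 0 (unbiased)
```

A positive c would indicate a conservative criterion (fewer "yes" responses), a negative c a liberal one; d' captures how far apart the signal and noise distributions in the diagram above really are.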
Application
Sensitivity or Discriminability:
SDT assesses how well people distinguish stimuli in noisy environments.
Example: Memory tests in different environments (quiet vs. noisy) show how external
conditions affect sensitivity to signals.
Meteorology:
Meteorologists apply SDT principles to sift through noise in atmospheric data for
better weather predictions.
Example: Weather instruments filter out noise to focus on relevant data for accurate
forecasting.
Bias in Responses:
SDT measures the bias in decision-making, influenced by factors like rewards or
penalties.
Example: In a memory test, penalizing missed answers may push participants toward
a more liberal criterion, while penalizing false alarms encourages more cautious
responding.
Technology and Electronics:
SDT is used in modern systems to distinguish important signals from background
noise, especially in radar and electronic devices.
Example: Car radios use SDT to separate the desired signal from interference when
tuning between stations.
Merits
1. Quantitative Measurement: SDT provides a quantitative framework for
measuring sensitivity (d') and decision criteria (c), allowing for detailed
analysis of performance.
2. Distinction Between Sensitivity and Bias: SDT differentiates between
perceptual sensitivity and decision-making biases, improving
understanding of how signals are detected.
3. Applicability Across Disciplines: SDT is widely used in fields like
psychology, medicine, and telecommunications to assess detection
capabilities.
4. Modeling Complex Tasks: SDT can model more complex decision-
making tasks, such as multi-alternative choices, beyond basic signal
detection.
5. Useful for Diagnostic Testing: In clinical settings, SDT helps evaluate the
effectiveness of diagnostic tests by assessing true positive and false
positive rates.
Limitations:
1. Assumptions of Ideal Observer: SDT assumes an ideal observer model,
which may not accurately capture real-world decision-making complexities.
2. Cognitive Biases: Individual biases (e.g., response bias) can influence
detection performance, making interpretations more complicated.
3. Binary Outcomes: SDT typically focuses on binary outcomes (signal
present or absent), which may oversimplify more complex decision-making
situations.
4. Contextual Influences: Environmental and contextual factors can affect
detection performance but are often not considered in standard SDT models.
5. Limited by Experimental Design: The validity of SDT findings can be
impacted by experimental design factors, such as how stimuli are presented
and the instructions given to participants.
ROC curve.
History
Origins: The ROC curve was developed in the 1940s to evaluate radar performance in
identifying enemy versus friendly signals. The term “receiver operating characteristic”
refers to radar operators, who were the “receivers” of these signals.
Key Figure: John A. Swets, an American psychologist and statistician, was
instrumental in adapting ROC analysis for use in psychology and medicine,
broadening its application beyond engineering.
Popularization: Swets' book, Signal Detection Theory and ROC Analysis in
Psychology and Diagnosis, helped popularize the ROC curve for evaluating true versus
false signals in psychology and medical diagnostics.
Significance in Diagnostics: ROC analysis became a valuable tool in fields like
disease detection and psychological assessment by visualizing the trade-off between
sensitivity and specificity, aiding in diagnostic accuracy evaluation.
Broader Applications: Over time, the ROC curve has expanded into fields such as
machine learning and data science, where it is used to assess model accuracy in
classification and decision-making models.
Area Under Curve (AUC): A single metric representing the overall performance of
a binary classification model, based on the area under its ROC curve.
True Positive Rate (Sensitivity): Proportion of actual positives correctly
identified by the model. The Y-axis, or dependent variable, is the true positive rate.
False Positive Rate (1 − Specificity): Proportion of actual negatives incorrectly
classified as positive by the model. The X-axis, or independent variable, is the
false positive rate.
True Negative Rate (Specificity): Proportion of actual negatives correctly
identified by the model.
Structure of an ROC Curve
Axes: The Y-axis plots the true positive rate (sensitivity) and the X-axis plots the
false positive rate (1 − specificity).
The curved line shows the performance of a model; the closer it lies to the top-left
corner, the better the model's performance.
Path of the curve: the curve starts at the origin (0,0) and ends at (1,1); the closer
it bows toward the top-left corner (0,1), the closer the model is to perfect
sensitivity and specificity.
How to plot an ROC using an example?
Imagine a study where participants were asked to detect a faint sound (signal) in
the presence of background noise:
                  Predicted positive       Predicted negative
Actual positive   Hits (TP)                Misses (FN)
Actual negative   10 False Alarms (FP)     40 Correct Rejections (TN)
Step 3 - Calculate True Positive Rate and False Positive Rate.
The true positive rate (TPR) and false positive rate (FPR) are calculated from the
confusion matrix:
True Positive Rate (TPR), also known as Sensitivity or Recall:
TPR = Hits (TP) / (Hits (TP) + Misses (FN))
False Positive Rate (FPR), also known as the fall-out:
FPR = False Alarms (FP) / (False Alarms (FP) + Correct Rejections (TN))
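These two rates are all that is needed to trace an ROC curve: sweeping the decision criterion across the score distributions yields one (FPR, TPR) point per setting, and the area under the resulting curve can be approximated numerically. The sketch below invents normally distributed signal and noise scores; every distribution parameter is an assumption for illustration.

```python
import numpy as np

# Hypothetical detection scores: higher values are more "signal-like".
rng = np.random.default_rng(0)
noise_trials = rng.normal(0.0, 1.0, 500)    # signal-absent trials
signal_trials = rng.normal(1.5, 1.0, 500)   # signal-present trials

# Sweep the criterion from strict to lenient; each setting yields one
# (FPR, TPR) point on the ROC curve.
criteria = np.linspace(-4.0, 6.0, 200)
tpr = [(signal_trials > c).mean() for c in criteria]  # Hits / (Hits + Misses)
fpr = [(noise_trials > c).mean() for c in criteria]   # FAs / (FAs + CRs)

# Reverse so FPR runs from 0 to 1, then integrate with the trapezoidal rule.
auc = np.trapz(tpr[::-1], fpr[::-1])
print(f"AUC ~ {auc:.3f}")  # about 0.86 for this separation of distributions
```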
Limitations
Primarily for Binary Classification: Best suited for binary classification tasks;
requires adjustments for multi-class classification, which complicates interpretation.
Insensitive to Class Imbalance: Can give an overly optimistic view with
imbalanced datasets, as it does not consider class distribution; Precision-Recall curves
may be more informative in these cases.
Limited by AUC Interpretation: AUC reduces the ROC curve to a single number,
which can obscure specific trade-offs. Models with similar AUCs may perform
differently in practice.
No Optimal Threshold Selection: Does not provide a direct method for choosing
the best decision threshold to balance false positives and false negatives, which is
crucial in practical applications.
Assumes Independence and Identical Distribution: Assumes positive and
negative instances are independent and identically distributed, which may not hold in
real-world data.
Less Informative for Rare Events: Can be misleading for rare events or highly
imbalanced data, as high specificity can be achieved without effective class
distinction.
Complexity in High-Dimensional Spaces: Interpreting ROC curves in high-
dimensional or complex models is challenging, and nuances in model performance
may be overlooked.
Limited Utility with Continuous or Ordinal Data: Less suitable for tasks with
ordinal or continuous outcomes, as they don’t fit well with the binary, threshold-based
ROC analysis.
Applications
Evaluating Diagnostic Tests: Widely used in medical research to assess
diagnostic test accuracy by comparing sensitivity and specificity, useful in detecting
diseases like cancer or heart conditions.
Comparing Classifiers: Helps compare multiple classification models to determine
the best trade-off between true positives and false positives, aiding in model
selection.
Assessing Credit Scoring Models: Used in finance to evaluate credit scoring
models, identifying the optimal threshold to classify borrowers as low or high risk.
Fraud Detection: Applied in fraud detection to measure model effectiveness in
distinguishing fraudulent from non-fraudulent transactions, where accurate
classification is critical.
Image and Object Recognition: Used in computer vision to evaluate object
recognition and image classification models by balancing detection accuracy with
minimizing false alarms.
Drug Discovery and Toxicology: In pharmaceutical research, it helps assess
models predicting drug efficacy or toxicity, aiding in identifying safe and effective
compounds.
Speech and Signal Processing: Evaluates models in speech recognition or signal
processing by distinguishing true signals, such as speech or alarms, from noise.
Weather and Climate Prediction: Applied in meteorology to evaluate models
predicting extreme weather events, balancing accurate warnings with minimizing
false alarms.