Authors: Elisa Magosso; Filippo Cona; Cristiano Cuppini and Mauro Ursino
Affiliation: University of Bologna, Italy
Keyword(s): Visual-auditory Integration, Ventriloquism, Hebbian Plasticity, Aftereffect Generalization.
Related Ontology Subjects/Areas/Topics: Artificial Intelligence; Biomedical Engineering; Biomedical Signal Processing; Computational Intelligence; Computational Neuroscience; Health Engineering and Technology Applications; Human-Computer Interaction; Methodologies and Methods; Neural Networks; Neurocomputing; Neurotechnology, Electronics and Informatics; Pattern Recognition; Physiological Computing Systems; Sensor Networks; Signal Processing; Soft Computing; Theory and Methods
Abstract:
When an auditory stimulus and a visual stimulus are presented simultaneously at disparate spatial locations, the sound is perceived as shifted toward the visual stimulus (ventriloquism effect). After adaptation to a ventriloquism situation, enduring sound shifts are observed even in the absence of the visual stimulus (ventriloquism aftereffect). Experimental studies report discordant results on aftereffect generalization across sound frequencies, ranging from an aftereffect confined to the sound frequency used during adaptation to one transferring across several octaves. Here, we present a model of visual-auditory interactions that simulates the ventriloquism effect and reproduces the ventriloquism aftereffect via Hebbian plasticity rules. The model is well suited to investigating aftereffect generalization, as the simulated auditory neurons code for both the spatial and the spectral properties of auditory stimuli. The model provides a plausible hypothesis to interpret the discordant results in the literature, showing that different sound intensities may produce different extents of aftereffect generalization. Model mechanisms and hypotheses are discussed in relation to the neurophysiological and psychophysical literature.
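The core mechanism summarized in the abstract is Hebbian adaptation of connections onto auditory spatial neurons during exposure to a spatially displaced visual stimulus. The following is only a minimal illustrative sketch of that idea, not the authors' model: the network architecture, Gaussian tuning widths, learning rate, and all other parameter values are assumptions introduced here for illustration.

```python
import numpy as np

# A 1-D map of auditory neurons, one per degree of azimuth. During
# "ventriloquism exposure", the population response is biased toward a
# visual stimulus displaced from the true sound location, and the input
# weights are updated with a simple Hebbian rule (post x pre). After
# adaptation, a sound presented alone is decoded at a shifted location
# (an aftereffect). All numbers below are illustrative assumptions.

n = 180
positions = np.arange(n)

def gaussian_input(center, sigma=10.0):
    """Population activity pattern for a stimulus at `center`."""
    return np.exp(-0.5 * ((positions - center) / sigma) ** 2)

def perceived_location(activity):
    """Decode stimulus position as the population centroid."""
    return float(np.sum(positions * activity) / np.sum(activity))

weights = np.eye(n)            # initially each neuron prefers its own location
sound_pos, visual_shift = 90, 10   # visual stimulus 10 deg to the right
lr = 0.02

for _ in range(200):           # adaptation phase (ventriloquism exposure)
    pre = gaussian_input(sound_pos)                   # auditory input
    post = gaussian_input(sound_pos + visual_shift)   # visually biased response
    weights += lr * np.outer(post, pre)               # Hebbian update
    weights /= weights.sum(axis=1, keepdims=True)     # keep rows normalized

# Aftereffect test: sound alone, no visual stimulus
response = weights @ gaussian_input(sound_pos)
shift = perceived_location(response) - sound_pos      # positive: toward vision
```

In this toy setting the decoded location after adaptation lies between the true sound position and the former visual position, i.e. the sound alone is heard displaced toward where the visual stimulus used to be.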