When I look at my coffee machine as it makes funny noises, this is an instance of multisensory perception: I perceive the event by means of both vision and audition. But very often we receive sensory stimulation from a multisensory event by means of only one sense modality. If I hear the noisy coffee machine in the next room (without seeing it), how do I represent the visual aspects of this multisensory event?
The aim of this research project is to bring together empirical findings about multimodal perception and empirical findings about (visual, auditory, tactile) mental imagery, and to argue that on occasions like the one described above, we have multimodal mental imagery: perceptual processing in one sense modality (here, vision) that is triggered by sensory stimulation in another sense modality (here, audition).
Multimodal mental imagery is rife. The vast majority of what we perceive are multisensory events: events that can be perceived in more than one sense modality, like the noisy coffee machine. And most of the time we are acquainted with these multisensory events via only a subset of the sense modalities involved; all the other aspects of these events are represented by means of multimodal mental imagery. This means that multimodal mental imagery is a crucial element of almost all instances of everyday perception, which has wider implications for the philosophy of perception and beyond, including epistemological questions about whether we can trust our senses.
Focusing on multimodal mental imagery can also help us understand a number of puzzling perceptual phenomena, such as sensory substitution and synaesthesia. Further, manipulating mental imagery has recently become an important clinical procedure in various branches of psychiatry, as well as in counteracting implicit bias. Using multimodal mental imagery, rather than voluntarily and consciously conjured-up mental imagery, can lead to real progress in these experimental paradigms.