When Just One Sense Is Available, Multisensory Experience Fills in the Blanks
A new article is out describing how we use our senses to fill in the blanks when we are only provided with input from one modality. Talking on the phone is a good example: here, we receive only auditory input. According to a new study, knowing the face of the person you're talking to helps you recognize their voice.
From the article by Liza Gross:
Our brains are wired in such a way that we can recognize a friend or loved one almost as easily whether we hear their voice or see their face. Specialized areas of the brain—in this case, the visual and auditory networks—are specially tuned to different properties of physical objects. These properties can be represented by multiple sensory modalities, so that a voice conveys nearly as much information about a person’s identity as a face. This redundancy allows rapid, automatic recognition of multimodal stimuli. It may also underlie “unimodal” perception—hearing a voice on the phone, for example—by automatically reproducing cues that are usually provided by other senses. In this view, as you listen to the caller’s voice, you imagine their face to try to identify the speaker. In a new study, Katharina von Kriegstein and Anne-Lise Giraud used functional magnetic resonance imaging (fMRI) to explore this possibility and understand how multimodal features like voices and faces are integrated in the human brain.
Read more at PLoS Biology