Ears Can’t Take All the Credit: Facial Expressions Influence Hearing

When robots stretched parts of a subject’s face in different directions, the subject heard different words. (Image courtesy of Takayuki Ito / Haskins Laboratories)

In everyday conversation, we often mishear words, usually leaving both speaker and listener confused. While these slip-ups are easy to blame on our ears or on simple inattentiveness, Yale researchers have shown that speech perception is also linked to somatosensory function – the sense that processes touch and other stimuli from the skin and body tissues.

According to the motor theory of speech perception, activity in the brain's motor areas accompanies the perception of speech. Until recently, however, there had been very little research on the role of somatosensory function in speech.

That began to change with two observations: during speech, the skin of the face is stretched in consistent patterns of deformation, and facial skin contains many cutaneous mechanoreceptors, neurons that respond to mechanical changes such as stretch and pressure. Together, these observations led researchers to study the role of somatosensory processes in pronunciation and speech perception.

In a paper recently published in the Proceedings of the National Academy of Sciences, Takayuki Ito and colleagues at Haskins Laboratories tested whether stretching a subject's facial skin in a particular direction while playing a word in the subject's ear affected what was actually heard.

The study used a computer to present words drawn at random from a synthesized acoustic continuum running between "head" and "had." At the same time, a robotic device stretched the subject's facial skin in a particular direction. The subjects were then asked to identify the word they had just heard.
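To make the design concrete, here is a minimal sketch of how such a trial sequence might be organized. The parameter values, function names, stimulus handling, and simulated listener are hypothetical placeholders for illustration only, not the authors' actual experimental code or settings.

```python
import csv
import random

# Hypothetical parameters: a 10-step synthesized continuum between "head" and
# "had", plus three skin-stretch directions and a no-stretch control.
CONTINUUM_STEPS = list(range(1, 11))          # 1 = clear "head" ... 10 = clear "had"
STRETCH_CONDITIONS = ["up", "down", "backward", "none"]
N_TRIALS = 80

def present_trial(step, stretch):
    """Placeholder for one trial: in a real setup, the audio stimulus and the
    robotic skin stretch would be triggered together here, and the listener's
    keypress collected. The response is simulated so the sketch runs without
    hardware, with a bias term standing in for the reported vertical effect."""
    bias = {"up": -1.0, "down": 1.0, "backward": 0.0, "none": 0.0}[stretch]
    p_had = (step - 5.5 + bias) / 10 + 0.5
    return "had" if random.random() < max(0.0, min(1.0, p_had)) else "head"

def run_session(filename="responses.csv"):
    """Run randomized trials and log one row per trial."""
    with open(filename, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["trial", "step", "stretch", "response"])
        for trial in range(N_TRIALS):
            step = random.choice(CONTINUUM_STEPS)
            stretch = random.choice(STRETCH_CONDITIONS)
            writer.writerow([trial, step, stretch, present_trial(step, stretch)])

if __name__ == "__main__":
    run_session()
```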

As Ito explains, “We made the temporal pattern in order to imitate the jaw motion that produces the ‘head’ and ‘had’ sounds.” The directions of skin stretch tested were up, down, and backward, produced by plastic tabs attached to the skin at the sides of a subject’s mouth.

Based on their results, Ito and colleagues showed a distinct relationship between the direction in which the skin was stretched and the word the subjects heard. Upward deformation of the skin increased the probability that subjects perceived the word as “head,” while downward deformation made them more likely to identify it as “had.” Stretching the skin backward, however, did not alter the subjects’ perception of the word.
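One simple way to summarize such data is to tally the proportion of “head” responses under each stretch condition. The sketch below assumes the hypothetical responses.csv log produced by the earlier example and is illustrative only, not the authors' analysis.

```python
import csv
from collections import defaultdict

def head_proportions(filename="responses.csv"):
    """Compute the proportion of 'head' responses per stretch condition,
    reading the trial log written by the session sketch above."""
    counts = defaultdict(lambda: [0, 0])  # condition -> [head responses, total trials]
    with open(filename, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["stretch"]][1] += 1
            if row["response"] == "head":
                counts[row["stretch"]][0] += 1
    return {cond: heads / total for cond, (heads, total) in counts.items() if total}

if __name__ == "__main__":
    for cond, prop in sorted(head_proportions().items()):
        print(f"{cond:>8}: {prop:.2f} proportion 'head' responses")
```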

According to Ito, this result confirmed the team’s original idea that “the word ‘head’ is associated with a higher jaw position, while ‘had’ is linked to a lower jaw position,” which explains why the effect appeared only for vertical skin stretch. “In the future, we plan to test for similar results with words related to the horizontal positioning of the jaw,” said Ito.

In a broader context, these results illustrate that speech perception can be shaped by altered somatosensory input: what an individual hears at a given moment depends in part on signals from the skin. According to Ito, the findings carry real significance for the study of human communication. “This area of research is still beginning. Our results help show that a lot of sensory information is involved in one cognitive process. Knowledge about the facial skin and its role in motion information could help in speech therapy and rehabilitation.”

Moving forward, research on the role of somatosensory input in hearing will not only advance scientific understanding of brain function but also reveal more about the intricacies of human communication.