Multi-modal language comprehension as a joint activity: The influence of eye gaze on the processing of speech and co-speech gesture in multi-party communication
Traditionally, language comprehension has been studied as a solitary
and unimodal activity. Here, we investigate language comprehension
as a joint activity, i.e., in a dynamic social context involving multiple
participants in different roles with different perspectives, while taking
into account the multimodal nature of face-to-face communication. We
simulated a triadic communication context involving a speaker alternating her gaze between two different recipients, conveying
information not only via speech but gesture as well. Participants thus
viewed video-recorded speech-only or speech+gesture utterances
referencing objects (e.g., “he likes the laptop” + TYPING ON LAPTOP gesture)
when being addressed (direct gaze) or unaddressed (averted gaze). The video clips
were followed by two object images (laptop, towel). Participants’ task was to choose the object that matched the
speaker’s message (i.e., the laptop). Unaddressed recipients responded
significantly more slowly than addressees for speech-only utterances.
However, perceiving the same speech accompanied by gestures sped
them up to levels identical to those of addressees. Thus, when speech
processing suffers due to being unaddressed, gestures become more
prominent and boost comprehension of a speaker’s spoken message.
Our findings illuminate how participants process multimodal language
and how this process is influenced by eye gaze, an important social cue
facilitating coordination in the joint activity of conversation.
Publication type
Talk
Publication date
2013