Judith Holler

Publications

  • Bosker, H. R., Peeters, D., & Holler, J. (2020). How visual cues to speech rate influence speech perception. Quarterly Journal of Experimental Psychology, 73(10), 1523-1536. doi:10.1177/1747021820914564.

    Abstract

    Spoken words are highly variable and therefore listeners interpret speech sounds relative to the surrounding acoustic context, such as the speech rate of a preceding sentence. For instance, a vowel midway between short /ɑ/ and long /a:/ in Dutch is perceived as short /ɑ/ in the context of preceding slow speech, but as long /a:/ if preceded by a fast context. Despite the well-established influence of visual articulatory cues on speech comprehension, it remains unclear whether visual cues to speech rate also influence subsequent spoken word recognition. In two ‘Go Fish’-like experiments, participants were presented with audio-only (auditory speech + fixation cross), visual-only (mute videos of talking head), and audiovisual (speech + videos) context sentences, followed by ambiguous target words containing vowels midway between short /ɑ/ and long /a:/. In Experiment 1, target words were always presented auditorily, without visual articulatory cues. Although the audio-only and audiovisual contexts induced a rate effect (i.e., more long /a:/ responses after fast contexts), the visual-only condition did not. When, in Experiment 2, target words were presented audiovisually, rate effects were observed in all three conditions, including visual-only. This suggests that visual cues to speech rate in a context sentence influence the perception of following visual target cues (e.g., duration of lip aperture), which at an audiovisual integration stage bias participants’ target categorization responses. These findings contribute to a better understanding of how what we see influences what we hear.
  • Macuch Silva, V., Holler, J., Ozyurek, A., & Roberts, S. G. (2020). Multimodality and the origin of a novel communication system in face-to-face interaction. Royal Society Open Science, 7: 182056. doi:10.1098/rsos.182056.

    Abstract

    Face-to-face communication is multimodal at its core: it consists of a combination of vocal and visual signalling. However, current evidence suggests that, in the absence of an established communication system, visual signalling, especially in the form of visible gesture, is a more powerful form of communication than vocalisation, and therefore likely to have played a primary role in the emergence of human language. This argument is based on experimental evidence of how vocal and visual modalities (i.e., gesture) are employed to communicate about familiar concepts when participants cannot use their existing languages. To investigate this further, we introduce an experiment where pairs of participants performed a referential communication task in which they described unfamiliar stimuli in order to reduce reliance on conventional signals. Visual and auditory stimuli were described in three conditions: using visible gestures only, using non-linguistic vocalisations only and given the option to use both (multimodal communication). The results suggest that even in the absence of conventional signals, gesture is a more powerful mode of communication compared to vocalisation, but that there are also advantages to multimodality compared to using gesture alone. Participants with an option to produce multimodal signals had comparable accuracy to those using only gesture, but gained an efficiency advantage. The analysis of the interactions between participants showed that interactants developed novel communication systems for unfamiliar stimuli by deploying different modalities flexibly to suit their needs and by taking advantage of multimodality when required.
  • Ripperda, J., Drijvers, L., & Holler, J. (2020). Speeding up the detection of non-iconic and iconic gestures (SPUDNIG): A toolkit for the automatic detection of hand movements and gestures in video data. Behavior Research Methods, 52(4), 1783-1794. doi:10.3758/s13428-020-01350-2.

    Abstract

    In human face-to-face communication, speech is frequently accompanied by visual signals, especially communicative hand gestures. Analyzing these visual signals requires detailed manual annotation of video data, which is often a labor-intensive and time-consuming process. To facilitate this process, we here present SPUDNIG (SPeeding Up the Detection of Non-iconic and Iconic Gestures), a tool to automatize the detection and annotation of hand movements in video data. We provide a detailed description of how SPUDNIG detects hand movement initiation and termination, as well as open-source code and a short tutorial on an easy-to-use graphical user interface (GUI) of our tool. We then provide a proof-of-principle and validation of our method by comparing SPUDNIG’s output to manual annotations of gestures by a human coder. While the tool does not entirely eliminate the need of a human coder (e.g., for false positives detection), our results demonstrate that SPUDNIG can detect both iconic and non-iconic gestures with very high accuracy, and could successfully detect all iconic gestures in our validation dataset. Importantly, SPUDNIG’s output can directly be imported into commonly used annotation tools such as ELAN and ANVIL. We therefore believe that SPUDNIG will be highly relevant for researchers studying multimodal communication due to its annotations significantly accelerating the analysis of large video corpora.

    Additional information

    data and materials
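
    To make the approach described above more concrete: SPUDNIG detects hand-movement initiation and termination in video and exports the resulting intervals to annotation tools such as ELAN. The following minimal Python sketch shows one way such threshold-based movement detection could look, assuming per-frame wrist keypoints (e.g., from a pose estimator) are already available as (x, y) coordinates. It is not SPUDNIG's actual implementation; the function name, parameters, and threshold values are illustrative assumptions only.

        import numpy as np

        def detect_movements(wrist_xy, fps, speed_threshold=0.5, min_duration=0.2):
            """Return (start_s, end_s) intervals in which the wrist moves faster than
            speed_threshold (keypoint units per frame) for at least min_duration seconds.
            All parameter values are arbitrary placeholders, not SPUDNIG's settings."""
            xy = np.asarray(wrist_xy, dtype=float)               # shape: (n_frames, 2)
            speed = np.linalg.norm(np.diff(xy, axis=0), axis=1)  # per-frame displacement
            moving = speed > speed_threshold

            intervals, start = [], None
            for i, is_moving in enumerate(moving):
                if is_moving and start is None:
                    start = i                                    # movement initiation
                elif not is_moving and start is not None:
                    if (i - start) / fps >= min_duration:        # keep sufficiently long runs
                        intervals.append((start / fps, i / fps))
                    start = None                                 # movement termination
            if start is not None and (len(moving) - start) / fps >= min_duration:
                intervals.append((start / fps, len(moving) / fps))
            return intervals                                     # e.g., write to CSV for ELAN import

    A real pipeline would additionally smooth keypoint trajectories, handle missing detections, and distinguish gestures from other hand movements, which is why a human coder remains necessary for checks such as false-positive detection.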
  • Sekine, K., Schoechl, C., Mulder, K., Holler, J., Kelly, S., Furman, R., & Ozyurek, A. (2020). Evidence for children's online integration of simultaneous information from speech and iconic gestures: An ERP study. Language, Cognition and Neuroscience, 35(10), 1283-1294. doi:10.1080/23273798.2020.1737719.

    Abstract

    Children perceive iconic gestures along with the speech they hear. Previous studies have shown that children integrate information from both modalities. Yet it is not known whether children can integrate both types of information simultaneously, as soon as they are available, as adults do, or whether they initially process them separately and integrate them later. Using electrophysiological measures, we examined the online neurocognitive processing of gesture-speech integration in 6- to 7-year-old children. We focused on the N400 event-related potential component, which is modulated by semantic integration load. Children watched video clips of matching or mismatching gesture-speech combinations, which varied the semantic integration load. The ERPs showed that the amplitude of the N400 was larger in the mismatching condition than in the matching condition. This finding provides the first neural evidence that by the age of 6 or 7, children integrate multimodal semantic information in an online fashion comparable to that of adults.
  • Ter Bekke, M., Drijvers, L., & Holler, J. (2020). The predictive potential of hand gestures during conversation: An investigation of the timing of gestures in relation to speech. In Proceedings of the 7th GESPIN - Gesture and Speech in Interaction Conference. Stockholm: KTH Royal Institute of Technology.

    Abstract

    In face-to-face conversation, recipients might use the bodily movements of the speaker (e.g. gestures) to facilitate language processing. It has been suggested that one way through which this facilitation may happen is prediction. However, for this to be possible, gestures would need to precede speech, and it is unclear whether this is true during natural conversation.
    In a corpus of Dutch conversations, we annotated hand gestures that represent semantic information and occurred during questions, and the word(s) which corresponded most closely to the gesturally depicted meaning. Thus, we tested whether representational gestures temporally precede their lexical affiliates. Further, to see whether preceding gestures may indeed facilitate language processing, we asked whether the gesture-speech asynchrony predicts the response time to the question the gesture is part of.
    Gestures and their strokes (the most meaningful movement component) indeed preceded the corresponding lexical information, thus demonstrating their predictive potential. However, while questions with gestures received faster responses than questions without, there was no evidence that questions with larger gesture-speech asynchronies received faster responses. These results suggest that gestures indeed have the potential to facilitate predictive language processing, but further analyses on larger datasets are needed to test for links between asynchrony and processing advantages.
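    To make the asynchrony measure concrete, here is a small, purely illustrative Python sketch of how gesture-speech asynchrony could be computed from annotation onset times. The data structure and field names are assumptions for illustration, not the authors' analysis code, and the example values are placeholders.

        from dataclasses import dataclass

        @dataclass
        class AnnotatedQuestion:
            gesture_onset: float     # seconds: onset of the gesture (or its stroke)
            affiliate_onset: float   # seconds: onset of the corresponding word(s)
            response_time: float     # seconds: gap before the answer begins

        def asynchrony(q: AnnotatedQuestion) -> float:
            """Positive values mean the gesture preceded its lexical affiliate."""
            return q.affiliate_onset - q.gesture_onset

        # Placeholder example values, for illustration only:
        questions = [AnnotatedQuestion(1.20, 1.55, 0.30), AnnotatedQuestion(4.10, 4.18, 0.65)]
        for q in questions:
            print(f"asynchrony = {asynchrony(q):+.2f} s, response time = {q.response_time:.2f} s")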
  • Holler, J., Kendrick, K. H., Casillas, M., & Levinson, S. C. (2015). Editorial: Turn-taking in human communicative interaction. Frontiers in Psychology, 6: 1919. doi:10.3389/fpsyg.2015.01919.
  • Holler, J., Kokal, I., Toni, I., Hagoort, P., Kelly, S. D., & Ozyurek, A. (2015). Eye’m talking to you: Speakers’ gaze direction modulates co-speech gesture processing in the right MTG. Social Cognitive & Affective Neuroscience, 10, 255-261. doi:10.1093/scan/nsu047.

    Abstract

    Recipients process information from speech and co-speech gestures, but it is currently unknown how this processing is influenced by the presence of other important social cues, especially gaze direction, a marker of communicative intent. Such cues may modulate neural activity in regions associated either with the processing of ostensive cues, such as eye gaze, or with the processing of semantic information, provided by speech and gesture.
    Participants were scanned (fMRI) while taking part in triadic communication involving two recipients and a speaker. The speaker uttered sentences that were and were not accompanied by complementary iconic gestures. Crucially, the speaker alternated her gaze direction, thus creating two recipient roles: addressed (direct gaze) vs unaddressed (averted gaze) recipient. The comprehension of Speech&Gesture relative to SpeechOnly utterances recruited middle occipital, middle temporal and inferior frontal gyri, bilaterally. The calcarine sulcus and posterior cingulate cortex were sensitive to differences between direct and averted gaze. Most importantly, Speech&Gesture utterances, but not SpeechOnly utterances, produced additional activity in the right middle temporal gyrus when participants were addressed. Marking communicative intent with gaze direction modulates the processing of speech–gesture utterances in cerebral areas typically associated with the semantic processing of multi-modal communicative acts.
  • Holler, J., & Kendrick, K. H. (2015). Unaddressed participants’ gaze in multi-person interaction: Optimizing recipiency. Frontiers in Psychology, 6: 98. doi:10.3389/fpsyg.2015.00098.

    Abstract

    One of the most intriguing aspects of human communication is its turn-taking system. It requires the ability to process on-going turns at talk while planning the next, and to launch this next turn without considerable overlap or delay. Recent research has investigated the eye movements of observers of dialogues to gain insight into how we process turns at talk. More specifically, this research has focused on the extent to which we are able to anticipate the end of current and the beginning of next turns. At the same time, there has been a call for shifting experimental paradigms exploring social-cognitive processes away from passive observation towards online processing. Here, we present research that responds to this call by situating state-of-the-art technology for tracking interlocutors’ eye movements within spontaneous, face-to-face conversation. Each conversation involved three native speakers of English. The analysis focused on question-response sequences involving just two of those participants, thus rendering the third momentarily unaddressed. Temporal analyses of the unaddressed participants’ gaze shifts from current to next speaker revealed that unaddressed participants are able to anticipate next turns, and moreover, that they often shift their gaze towards the next speaker before the current turn ends. However, an analysis of the complex structure of turns at talk revealed that the planning of these gaze shifts virtually coincides with the points at which the turns first become recognizable as possibly complete. We argue that the timing of these eye movements is governed by an organizational principle whereby unaddressed participants shift their gaze at a point that appears interactionally most optimal: It provides unaddressed participants with access to much of the visual, bodily behavior that accompanies both the current speaker’s and the next speaker’s turn, and it allows them to display recipiency with regard to both speakers’ turns.
  • Kelly, S., Healey, M., Ozyurek, A., & Holler, J. (2015). The processing of speech, gesture and action during language comprehension. Psychonomic Bulletin & Review, 22, 517-523. doi:10.3758/s13423-014-0681-7.

    Abstract

    Hand gestures and speech form a single integrated system of meaning during language comprehension, but is gesture processed with speech in a unique fashion? We had subjects watch multimodal videos that presented auditory (words) and visual (gestures and actions on objects) information. Half of the subjects related the audio information to a written prime presented before the video, and the other half related the visual information to the written prime. For half of the multimodal video stimuli, the audio and visual information contents were congruent, and for the other half, they were incongruent. For all subjects, stimuli in which the gestures and actions were incongruent with the speech produced more errors and longer response times than did stimuli that were congruent, but this effect was less prominent for speech-action stimuli than for speech-gesture stimuli. However, subjects focusing on visual targets were more accurate when processing actions than gestures. These results suggest that although actions may be easier to process than gestures, gestures may be more tightly tied to the processing of accompanying speech.
  • Peeters, D., Chu, M., Holler, J., Hagoort, P., & Ozyurek, A. (2015). Electrophysiological and kinematic correlates of communicative intent in the planning and production of pointing gestures and speech. Journal of Cognitive Neuroscience, 27(12), 2352-2368. doi:10.1162/jocn_a_00865.

    Abstract

    In everyday human communication, we often express our communicative intentions by manually pointing out referents in the material world around us to an addressee, often in tight synchronization with referential speech. This study investigated whether and how the kinematic form of index finger pointing gestures is shaped by the gesturer's communicative intentions and how this is modulated by the presence of concurrently produced speech. Furthermore, we explored the neural mechanisms underpinning the planning of communicative pointing gestures and speech. Two experiments were carried out in which participants pointed at referents for an addressee while the informativeness of their gestures and speech was varied. Kinematic and electrophysiological data were recorded online. It was found that participants prolonged the duration of the stroke and poststroke hold phase of their gesture to be more communicative, in particular when the gesture was carrying the main informational burden in their multimodal utterance. Frontal and P300 effects in the ERPs suggested the importance of intentional and modality-independent attentional mechanisms during the planning phase of informative pointing gestures. These findings contribute to a better understanding of the complex interplay between action, attention, intention, and language in the production of pointing gestures, a communicative act core to human interaction.
  • Rowbotham, S., Lloyd, D. M., Holler, J., & Wearden, A. (2015). Externalizing the private experience of pain: A role for co-speech gestures in pain communication? Health Communication, 30(1), 70-80. doi:10.1080/10410236.2013.836070.

    Abstract

    Despite the importance of effective pain communication, talking about pain represents a major challenge for patients and clinicians because pain is a private and subjective experience. Focusing primarily on acute pain, this article considers the limitations of current methods of obtaining information about the sensory characteristics of pain and suggests that spontaneously produced “co-speech hand gestures” may constitute an important source of information here. Although this is a relatively new area of research, we present recent empirical evidence that reveals that co-speech gestures contain important information about pain that can both add to and clarify speech. Following this, we discuss how these findings might eventually lead to a greater understanding of the sensory characteristics of pain, and to improvements in treatment and support for pain sufferers. We hope that this article will stimulate further research and discussion of this previously overlooked dimension of pain communication.
  • Schubotz, L., Holler, J., & Ozyurek, A. (2015). Age-related differences in multi-modal audience design: Young, but not old speakers, adapt speech and gestures to their addressee's knowledge. In G. Ferré, & M. Tutton (Eds.), Proceedings of the 4th GESPIN - Gesture & Speech in Interaction Conference (pp. 211-216). Nantes: Université de Nantes.

    Abstract

    Speakers can adapt their speech and co-speech gestures for addressees. Here, we investigate whether this ability is modulated by age. Younger and older adults participated in a comic narration task in which one participant (the speaker) narrated six short comic stories to another participant (the addressee). One half of each story was known to both participants, the other half only to the speaker. Younger but not older speakers used more words and gestures when narrating novel story content as opposed to known content. We discuss cognitive and pragmatic explanations of these findings and relate them to theories of gesture production.
  • Cleary, R. A., Poliakoff, E., Galpin, A., Dick, J. P., & Holler, J. (2011). An investigation of co-speech gesture production during action description in Parkinson’s disease. Parkinsonism & Related Disorders, 17, 753-756. doi:10.1016/j.parkreldis.2011.08.001.

    Abstract

    Methods
    The present study provides a systematic analysis of co-speech gestures which spontaneously accompany the description of actions in a group of PD patients (N = 23, Hoehn and Yahr Stage III or less) and age-matched healthy controls (N = 22). The analysis considers different co-speech gesture types, using established classification schemes from the field of gesture research. The analysis focuses on the rate of these gestures as well as on their qualitative nature. In doing so, the analysis attempts to overcome several methodological shortcomings of research in this area.
    Results
    Contrary to expectation, gesture rate was not significantly affected in our patient group, with relatively mild PD. This indicates that co-speech gestures could compensate for speech problems. However, while gesture rate seems unaffected, the qualitative precision of gestures representing actions was significantly reduced.
    Conclusions
    This study demonstrates the feasibility of carrying out fine-grained, detailed analyses of gestures in PD and offers insights into an as yet neglected facet of communication in patients with PD. Based on the present findings, an important next step is the closer investigation of the qualitative changes in gesture (including different communicative situations) and an analysis of the heterogeneity in co-speech gesture production in PD.
  • Holler, J., & Wilkin, K. (2011). Co-speech gesture mimicry in the process of collaborative referring during face-to-face dialogue. Journal of Nonverbal Behavior, 35, 133-153. doi:10.1007/s10919-011-0105-6.

    Abstract

    Mimicry has been observed regarding a range of nonverbal behaviors, but only recently have researchers started to investigate mimicry in co-speech gestures. These gestures are considered to be crucially different from other aspects of nonverbal behavior due to their tight link with speech. This study provides evidence of mimicry in co-speech gestures in face-to-face dialogue, the most common forum of everyday talk. In addition, it offers an analysis of the functions that mimicked co-speech gestures fulfill in the collaborative process of creating a mutually shared understanding of referring expressions. The implications bear on theories of gesture production, research on grounding, and the mechanisms underlying behavioral mimicry.
  • Holler, J., Tutton, M., & Wilkin, K. (2011). Co-speech gestures in the process of meaning coordination. In Proceedings of the 2nd GESPIN - Gesture & Speech in Interaction Conference, Bielefeld, 5-7 Sep 2011.

    Abstract

    This study uses a classical referential communication task to investigate the role of co-speech gestures in the process of coordination. The study manipulates both the common ground between the interlocutors and the visibility of the gestures they use. The findings show that co-speech gestures are an integral part of the referential utterances speakers produced with regard to both initial and repeated references, and that the availability of gestures appears to impact on interlocutors’ referential coordination. The results are discussed with regard to past research on common ground as well as theories of gesture production.
  • Holler, J., & Wilkin, K. (2011). An experimental investigation of how addressee feedback affects co-speech gestures accompanying speakers’ responses. Journal of Pragmatics, 43, 3522-3536. doi:10.1016/j.pragma.2011.08.002.

    Abstract

    There is evidence that co-speech gestures communicate information to addressees and that they are often communicatively intended. However, we still know comparatively little about the role of gestures in the actual process of communication. The present study offers a systematic investigation of speakers’ gesture use before and after addressee feedback. The findings show that when speakers responded to addressees’ feedback, gesture rate remained constant when this feedback encouraged clarification, elaboration or correction. However, speakers gestured proportionally less often after feedback when providing confirmatory responses. That is, speakers may not be drawing on gesture in response to addressee feedback per se, but particularly with responses that enhance addressees’ understanding. Further, the large majority of speakers’ gestures changed in their form. They tended to be more precise, larger, or more visually prominent after feedback. Some changes in gesture viewpoint were also observed. In addition, we found that speakers used deixis in speech and gaze to increase the salience of gestures occurring in response to feedback. Speakers appear to conceive of gesture as a useful modality in redesigning utterances to make them more accessible to addressees. The findings further our understanding of recipient design and co-speech gestures in face-to-face dialogue.
    Highlights

    ► Gesture rate remains constant in response to addressee feedback when the response aims to correct or clarify understanding.
    ► But gesture rate decreases when speakers provide confirmatory responses to feedback signalling correct understanding.
    ► Gestures are more communicative in response to addressee feedback, particularly in terms of precision, size and visual prominence.
    ► Speakers make gestures in response to addressee feedback more salient by using deictic markers in speech and gaze.
  • Holler, J. (2011). Verhaltenskoordination, Mimikry und sprachbegleitende Gestik in der Interaktion [Behavioural coordination, mimicry, and co-speech gesture in interaction]. Psychotherapie - Wissenschaft: Special issue: "Sieh mal, wer da spricht" - der Koerper in der Psychotherapie Teil IV, 1(1), 56-64. Retrieved from http://www.psychotherapie-wissenschaft.info/index.php/psy-wis/article/view/13/65.
  • Kelly, S., Byrne, K., & Holler, J. (2011). Raising the stakes of communication: Evidence for increased gesture production as predicted by the GSA framework. Information, 2(4), 579-593. doi:10.3390/info2040579.

    Abstract

    Theorists of language have argued that co-speech hand gestures are an intentional part of social communication. The present study provides evidence for these claims by showing that speakers adjust their gesture use according to their perceived relevance to the audience. Participants were asked to read about items that were and were not useful in a wilderness survival scenario, under the pretense that they would then explain (on camera) what they learned to one of two different audiences. For one audience (a group of college students in a dormitory orientation activity), the stakes of successful communication were low; for the other audience (a group of students preparing for a rugged camping trip in the mountains), the stakes were high. In their explanations to the camera, participants in the high stakes condition produced three times as many representational gestures, and spent three times as much time gesturing, as participants in the low stakes condition. This study extends previous research by showing that the anticipated consequences of one’s communication—namely, the degree to which information may be useful to an intended recipient—influence speakers’ use of gesture.
  • Wilkin, K., & Holler, J. (2011). Speakers’ use of ‘action’ and ‘entity’ gestures with definite and indefinite references. In G. Stam, & M. Ishino (Eds.), Integrating gestures: The interdisciplinary nature of gesture (pp. 293-308). Amsterdam: John Benjamins.

    Abstract

    Common ground is an essential prerequisite for coordination in social interaction, including language use. When referring back to a referent in discourse, this referent is ‘given information’ and therefore in the interactants’ common ground. When a referent is being referred to for the first time, a speaker introduces ‘new information’. The analyses reported here are on gestures that accompany such references when they include definite and indefinite grammatical determiners. The main finding from these analyses is that referents referred to by definite and indefinite articles were equally often accompanied by gesture, but speakers tended to accompany definite references with gestures focusing on action information and indefinite references with gestures focusing on entity information. The findings suggest that speakers use speech and gesture together to design utterances appropriate for speakers with whom they share common ground.

