Judith Holler

Presentations

  • Dockendorff, M., Holler, J., & Knoblich, G. (2023). Saying things with actions — or how instrumental actions can take on a communicative function. Talk presented at the 9th biennial Joint Action Meeting (JAM). Budapest, Hungary. 2023-07-10 - 2023-07-12.
  • Emmendorfer, A. K., Banovac, L., Gorter, A., & Holler, J. (2023). Visual signals as response mobilization cues in face-to-face conversation. Talk presented at the 8th Gesture and Speech in Interaction (GESPIN 2023). Nijmegen, The Netherlands. 2023-09-13 - 2023-09-15.
  • Emmendorfer, A. K., & Holler, J. (2023). Addressee gaze direction and response timing signal upcoming response preference: Evidence from behavioral and EEG experiments. Poster presented at the 15th Annual Meeting of the Society for the Neurobiology of Language (SNL 2023), Marseille, France.
  • Emmendorfer, A. K., & Holler, J. (2023). The influence of speaker gaze on addressees’ response planning: Evidence from behavioral and EEG data. Poster presented at the 15th Annual Meeting of the Society for the Neurobiology of Language (SNL 2023), Marseille, France.
  • Holler, J. (2023). Multimodal addressee responses as tools for coordination and adaptation in conversational interaction. Talk presented at the 9th biennial Joint Action Meeting (JAM). Budapest, Hungary. 2023-07-10 - 2023-07-12.
  • Holler, J. (2023). Human language processing as a multimodal, situated activity. Talk presented at the 21st International Multisensory Research Forum (IMRF 2023). Brussels, Belgium. 2023-06-23 - 2023-06-30.
  • Mazzini, S., Holler, J., Hagoort, P., & Drijvers, L. (2023). Investigating inter-brain synchrony during (un-)successful face-to-face communication. Poster presented at the 9th biennial Joint Action Meeting (JAM), Budapest, Hungary.
  • Mazzini, S., Holler, J., Hagoort, P., & Drijvers, L. (2023). Inter-brain synchrony during (un)successful face-to-face communication. Poster presented at the 15th Annual Meeting of the Society for the Neurobiology of Language (SNL 2023), Marseille, France.
  • Mazzini, S., Holler, J., Hagoort, P., & Drijvers, L. (2023). Studying the association between co-speech gestures, mutual understanding and inter-brain synchrony in face-to-face conversations. Poster presented at the 15th Annual Meeting of the Society for the Neurobiology of Language (SNL 2023), Marseille, France.
  • Mazzini, S., Holler, J., Hagoort, P., & Drijvers, L. (2023). Inter-brain synchrony during (un)successful face-to-face communication. Poster presented at the 19th NVP Winter Conference on Brain and Cognition, Egmond aan Zee, The Netherlands.

    Abstract

    Human communication requires interlocutors to mutually understand each other. Previous research has suggested inter-brain synchrony as an important feature of social interaction, since it has been observed during joint attention, speech interactions and cooperative tasks. Nonetheless, it is still unknown whether inter-brain synchrony is actually related to successful face-to-face communication. Here, we use dual-EEG to study whether inter-brain synchrony is modulated during episodes of successful and unsuccessful communication in clear and noisy communication settings. Dyads performed a tangram-based referential communication task with and without background noise, while both their EEG and audiovisual behavior were recorded. Other-initiated repairs were annotated in the audiovisual data and were used as indices of unsuccessful and successful communication. More specifically, we compared inter-brain synchrony during episodes of miscommunication (repair initiations) and episodes of mutual understanding (repair solutions and acceptance phases) in the clear and the noise conditions. We expect that when communication is successful, inter-brain synchrony will be stronger than when communication is unsuccessful, and that these patterns will be most pronounced in the noise condition. Results are currently being analyzed and will be presented and discussed with respect to the inter-brain neural signatures underlying the process of mutual understanding in face-to-face conversation.
  • Ter Bekke, M., Holler, J., & Drijvers, L. (2023). Do listeners use speakers’ iconic hand gestures to predict upcoming words? Talk presented at the 9th biennial Joint Action Meeting (JAM). Budapest, Hungary. 2023-07-10 - 2023-07-12.
  • Ter Bekke, M., Drijvers, L., & Holler, J. (2023). Do listeners use speakers’ iconic gestures to predict upcoming words? Poster presented at the 8th Gesture and Speech in Interaction (GESPIN 2023), Nijmegen, The Netherlands.
  • Ter Bekke, M., Drijvers, L., & Holler, J. (2023). Gestures speed up responses to questions. Poster presented at the 8th Gesture and Speech in Interaction (GESPIN 2023), Nijmegen, The Netherlands.
  • Ter Bekke, M., Drijvers, L., & Holler, J. (2023). Do listeners use speakers’ iconic hand gestures to predict upcoming words? Poster presented at the 15th Annual Meeting of the Society for the Neurobiology of Language (SNL 2023), Marseille, France.
  • Trujillo, J. P., & Holler, J. (2023). Investigating the multimodal compositionality and comprehension of intended meanings using virtual agents. Talk presented at the 9th biennial Joint Action Meeting (JAM). Budapest, Hungary. 2023-07-10 - 2023-07-12.
  • Trujillo, J. P., Dyer, R. M. K., & Holler, J. (2023). Differences in partner empathy are associated with interpersonal kinetic and prosodic entrainment during conversation. Poster presented at the 9th biennial Joint Action Meeting (JAM), Budapest, Hungary.
  • Holler, J. (2013). Gesture use in social context: The influence of common ground on co-speech gesture production in dyadic interaction. Talk presented at the Humanities Lab, Lund University. Lund, Sweden.
  • Holler, J. (2013). Gesture use in social context: The influence of common ground on co-speech gesture production in dyadic interaction. Talk presented at Laboratoire Parole et Langage, Aix-Marseille Université. Aix-en-Provence, France.
  • Holler, J., Schubotz, L., Kelly, S., Hagoort, P., & Ozyurek, A. (2013). Multi-modal language comprehension as a joint activity: The influence of eye gaze on the processing of speech and co-speech gesture in multi-party communication. Talk presented at the 5th Joint Action Meeting. Berlin. 2013-07-26 - 2013-07-29.

    Abstract

    Traditionally, language comprehension has been studied as a solitary and unimodal activity. Here, we investigate language comprehension as a joint activity, i.e., in a dynamic social context involving multiple participants in different roles with different perspectives, while taking into account the multimodal nature of face-to-face communication. We simulated a triadic communication context involving a speaker alternating her gaze between two different recipients, conveying information not only via speech but via gesture as well. Participants thus viewed video-recorded speech-only or speech+gesture utterances referencing objects (e.g., “he likes the laptop”/+TYPING-ON-LAPTOP gesture) when being addressed (direct gaze) or unaddressed (averted gaze). The video clips were followed by two object images (laptop-towel). Participants’ task was to choose the object that matched the speaker’s message (i.e., laptop). Unaddressed recipients responded significantly more slowly than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped them up to levels identical to those of addressees. Thus, when speech processing suffers due to being unaddressed, gestures become more prominent and boost comprehension of a speaker’s spoken message. Our findings illuminate how participants process multimodal language and how this process is influenced by eye gaze, an important social cue facilitating coordination in the joint activity of conversation.
  • Holler, J., Schubotz, L., Kelly, S., Schuetze, M., Hagoort, P., & Ozyurek, A. (2013). Here's not looking at you, kid! Unaddressed recipients benefit from co-speech gestures when speech processing suffers. Poster presented at the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013), Berlin, Germany.
  • Holler, J., Kelly, S., Hagoort, P., Schubotz, L., & Ozyurek, A. (2013). Speakers' social eye gaze modulates addressed and unaddressed recipients' comprehension of gesture and speech in multi-party communication. Talk presented at the 5th Biennial Conference of Experimental Pragmatics (XPRAG 2013). Utrecht, The Netherlands. 2013-09-04 - 2013-09-06.
  • Peeters, D., Chu, M., Holler, J., Ozyurek, A., & Hagoort, P. (2013). Getting to the point: The influence of communicative intent on the form of pointing gestures. Talk presented at the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013). Berlin, Germany. 2013-08-01 - 2013-08-03.
  • Peeters, D., Chu, M., Holler, J., Ozyurek, A., & Hagoort, P. (2013). The influence of communicative intent on the form of pointing gestures. Poster presented at the Fifth Joint Action Meeting (JAM5), Berlin, Germany.
  • Tutton, M., & Holler, J. (2013). How degree of verbal interaction affects the communication of static locative information. Talk presented at the 5th International Conference of the Association Française de Linguistique Cognitive: Empirical Approaches to Multi-modality and to Language Variation (AFLiCo 5). Lille, France. 2013-05-15 - 2013-05-17.