Judith Holler

Presentations

  • Holler, J. (2015). Gesture, gaze and the body in the coordination of turns in conversation. [invited talk]. Talk presented at the symposium ‘How cognition supports social interaction: From joint action to dialogue’, 19th Conference of the European Society for Cognitive Psychology (ESCoP). Paphos, Cyprus. 2015-09-17 - 2015-09-20.

    Abstract

    Human language has long been considered a unimodal activity, with the body being considered a mere vehicle for expressing acoustic linguistic meaning. But theories of language evolution point towards a close link between vocal and visual communication early on in history, pinpointing gesture as the origin of human language. Some consider this link between gesture and communicative vocalisations as having been temporary, with conventionalized linguistic code eventually replacing early bodily signaling. Others argue for this link being permanent, positing that even fully-fledged human language is a multi-modal phenomenon, with visual signals forming integral components of utterances in face-to-face conversation. My research provides evidence for the latter. Based on this research, I will provide insights into some of the factors and principles governing multimodal language use in adult interaction. My talk consists of three parts: First, I will present empirical findings showing that movements we produce with our body are indeed integral to spoken language and closely linked to communicative intentions underlying speaking. Second, I will show that bodily signals, first and foremost manual gestures, play an active role in the coordination of meaning during face-to-face interaction, including fundamental processes like the grounding of referential utterances. Third, I will present recent findings on the role of bodily communicative acts in the psycholinguistically challenging context of turn-taking during conversation. Together, the data I present form the basis of a framework aiming to capture multi-modal language use and processing situated in face-to-face interaction, the environment in which language first emerged, is acquired and used most.
  • Holler, J., & Kendrick, K. H. (2015). Gesture, gaze, and the body in the organisation of turn-taking for conversation. Talk presented at the 19th Meeting of the European Society for Cognitive Psychology (ESCoP 2015). Paphos, Cyprus. 2015-09-17 - 2015-09-20.

    Abstract

    The primordial site of conversation is face-to-face social interaction where participants make use of visual modalities, as well as talk, in the coordination of collaborative action. This most basic observation leads to a fundamental question: what is the place of multimodal resources such as these in the organisation of turn-taking for conversation? To answer this question, we collected a corpus of both dyadic and triadic face-to-face interactions between adults, with the aim to build on existing observations of the use of visual bodily modalities in conversation (e.g., Duncan, 1972; Goodwin, 1981; Kendon, 1967; Lerner, 2003; Mondada, 2007; Oloff, 2013; Rossano, 2012; Sacks & Schegloff, 2002; Schegloff, 1998).

    The corpus retains the spontaneity and naturalness of everyday talk as much as possible while combining it with state-of-the-art technology to allow for exact, detailed analyses of verbal and visual conversational behaviours. Each participant (1) was filmed by three high definition video cameras (providing a frontal plus two lateral views) allowing for fine-grained, frame-by-frame analyses of bodily conduct, as well as the precise measurement of how individual bodily behaviours are timed with respect to each other, and with respect to speech; (2) wore a head-mounted microphone providing high quality recordings of the audio signal suitable for determining on- and off-sets of speaking turns, as well as inter-turn gaps, with high precision, (3) wore head-mounted eye-tracking glasses to monitor eye movements and fixations overlaid onto a video recording of the visual scene the participant was viewing at any given moment (including the other [two] participant[s] and the surroundings in which the conversation took place). The HD video recordings of body behaviour, the eye-tracking video recordings, and the audio recordings from all 2/3 participants engaged in each conversation were then integrated within a single software application (ELAN) for synchronised playback and analysis.

    The analyses focus on the use and interplay of visual bodily resources, including eye gaze, co-speech gestures, and body posture, during conversational coordination, as well as on how these signals interweave with participants’ turns at talk. The results provide insight into the process of turn projection as evidenced by participants’ gaze behaviour with a focus on the role different bodily cues play in this context, and into how concurrent visual and verbal resources are involved in turn construction and turn allocation. This project will add to our understanding of core issues in the field of CA, such as by elucidating the role of multi-modality and number of participants engaged in talk-in-interaction (Schegloff, 2009).

    References

    Duncan, S. (1972). Some signals and rules for taking speaking turns in conversations. Journal of Personality and Social Psychology, 23, 283-292.

    Goodwin, C. (1981). Conversational organization: Interaction between speakers and hearers. New York: Academic Press.

    Kendon, A. (1967). Some functions of gaze-direction in social interaction. Acta Psychologica, 26, 22-63.

    Lerner, G. H. (2003). Selecting next speaker: The context-sensitive operation of a context-free organization. Language in Society, 32(2), 177-201.

    Mondada, L. (2007). Multimodal resources for turn-taking: Pointing and the emergence of possible next speakers. Discourse Studies, 9, 195-226.

    Oloff, F. (2013). Embodied withdrawal after overlap resolution. Journal of Pragmatics, 46, 139-156.

    Rossano, F. (2012). Gaze behavior in face-to-face interaction. PhD Thesis, Radboud University Nijmegen, Nijmegen.

    Sacks, H., & Schegloff, E. (2002). Home position. Gesture, 2, 133-146.

    Schegloff, E. (1998). Body torque. Social Research, 65, 535-596.

    Schegloff, E. (2009). One perspective on Conversation Analysis: Comparative perspectives. In J. Sidnell (ed.), Conversation Analysis: Comparative perspectives, pp. 357-406. Cambridge: Cambridge University Press.
  • Holler, J., & Kendrick, K. H. (2015). Gesture, gaze, and the body in the organisation of turn-taking for conversation. Poster presented at the 14th International Pragmatics Conference, Antwerp, Belgium.

  • Holler, J. (2015). On the pragmatics of multi-modal face-to-face communication: Gesture, speech and gaze in the coordination of mental states and social interaction. Talk presented at the 4th GESPIN - Gesture & Speech in Interaction Conference. Nantes, France. 2015-09-02 - 2015-09-04.

    Abstract

    Coordination is at the heart of human conversation. In order to interact with one another through talk, we must coordinate at many levels, first and foremost at the level of our mental states, intentions and conversational contributions. In this talk, I will present findings on the pragmatics of multi-modal communication from both production and comprehension studies. In terms of production, I will throw light on (1) how co-speech gestures are used in the coordination of meaning to allow interactants to arrive at a shared understanding of the things we talk about, as well as on (2) how gesture and gaze are employed in the coordination of speaking turns in spontaneous conversation, with special reference to the psycholinguistic and cognitive challenges that turn-taking poses. In terms of comprehension, I will focus on communicative intentions and the interplay of ostensive and semantic multi-modal signals in triadic communication contexts. My talk will bring these different findings together to make the argument for richer research paradigms that capture more of the complexities and sociality of face-to-face conversational interaction. Advancing the field of multi-modal communication in this way will allow us to more fully understand the psycholinguistic processes that underlie human language use and language comprehension.
  • Holler, J. (2015). Visible communicative acts in the coordination of interaction. [invited talk]. Talk presented at Institute for Language Sciences, Cologne University. Cologne, Germany. 2015-06-11.
  • Hömke, P., Holler, J., & Levinson, S. C. (2015). Blinking as addressee feedback in face-to-face conversation. Talk presented at the 6th Joint Action Meeting. Budapest, Hungary. 2015-07-01 - 2015-07-04.
  • Hömke, P., Holler, J., & Levinson, S. C. (2015). Blinking as addressee feedback in face-to-face dialogue?. Poster presented at the 19th Workshop on the Semantics and Pragmatics of Dialogue (SemDial 2015 / goDIAL), Gothenburg, Sweden.
  • Hömke, P., Holler, J., & Levinson, S. C. (2015). Blinking as addressee feedback in face-to-face dialogue?. Talk presented at the Nijmegen-Tilburg Multi-modality workshop. Tilburg, The Netherlands. 2015-10-22.
  • Hömke, P., Holler, J., & Levinson, S. C. (2015). Blinking as addressee feedback in face-to-face dialogue?. Talk presented at the Donders Discussions Conference. Nijmegen, The Netherlands. 2015-11-05.
  • Humphries, S., Holler, J., Crawford, T., & Poliakoff, E. (2015). Investigating gesture viewpoint during action description in Parkinson’s Disease. Talk presented at the Research into Imagery and Observation Conference. Stirling, Scotland. 2015-05-14 - 2015-05-15.
  • Humphries, S., Holler, J., Crawford, T., & Poliakoff, E. (2015). Investigating gesture viewpoint during action description in Parkinson’s Disease. Talk presented at the School of Psychological Sciences PGR Conference. Manchester, England.
  • Kendrick, K. H., & Holler, J. (2015). Triadic participation in question-answer sequences. Talk presented at Revisiting Participation – Language and Bodies in Interaction workshop. Basel, Switzerland. 2015-06-24 - 2015-06-27.
  • Kendrick, K. H., & Holler, J. (2015). Triadic participation in question-answer sequences. Talk presented at the 14th International Pragmatics Conference. Antwerp, Belgium. 2015-07-26 - 2015-07-31.
  • Schubotz, L., Holler, J., & Ozyurek, A. (2015). Age-related differences in multi-modal audience design: Young, but not old speakers, adapt speech and gestures to their addressee's knowledge. Talk presented at the 4th GESPIN - Gesture & Speech in Interaction Conference. Nantes, France. 2015-09-02 - 2015-09-05.
  • Schubotz, L., Drijvers, L., Holler, J., & Ozyurek, A. (2015). The cocktail party effect revisited in older and younger adults: When do iconic co-speech gestures help?. Poster presented at Donders Sessions 2015, Nijmegen, The Netherlands.
  • Schubotz, L., Drijvers, L., Holler, J., & Ozyurek, A. (2015). The cocktail party effect revisited in older and younger adults: When do iconic co-speech gestures help?. Talk presented at Donders Discussions 2015. Nijmegen, The Netherlands. 2015-11-05.
  • Cotroneo, C., Holler, J., & Connell, L. (2012). Gesture and the embodiment of auditory perceptual information. Poster presented at the 5th Embodied and Situated Language Processing Conference (ESLP 2012), Newcastle upon Tyne, UK.
  • Herrera, E., Poliakoff, E., Holler, J., McDonald, K., & Cuetos, F. (2012). Naming dynamic actions in Parkinson's disease. Poster presented at the 16th International Congress of Parkinson's Disease and Movement Disorders, Dublin, Ireland.
  • Holler, J. (2012). Contextualising gesture: Experimental studies of social processes in gesture production and comprehension. Talk presented at the Department of Psychology, University of Sheffield. Sheffield, UK. 2012-04.
  • Holler, J. (2012). Gesture use in social context. Talk presented at the Tilburg Centre for Cognition and Communication, Tilburg University. Tilburg, The Netherlands. 2012-02.
  • Holler, J. (2012). Gesture use in social context: The influence of common ground on gesture use in dyadic interaction. Talk presented at the Cologne-Aachen Gesture Colloquium Series, University of Cologne. Cologne, Germany. 2012-01.
  • Holler, J., Kelly, S., Hagoort, P., & Ozyurek, A. (2012). Overhearing gesture: The influence of eye gaze direction on the comprehension of iconic gestures. Poster presented at the Social Cognition, Engagement, and the Second-Person-Perspective Conference, Cologne, Germany.
  • Holler, J., Kelly, S., Hagoort, P., & Ozyurek, A. (2012). Overhearing gesture: The influence of eye gaze direction on the comprehension of iconic gestures. Poster presented at the EPS workshop 'What if.. the study of language started from the investigation of signed, rather than spoken language?', London, UK.
  • Holler, J., Kelly, S., Hagoort, P., & Ozyurek, A. (2012). The influence of gaze direction on the comprehension of speech and gesture in triadic communication. Talk presented at the 18th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2012). Riva del Garda, Italy. 2012-09-06 - 2012-09-08.

    Abstract

    Human face-to-face communication is a multi-modal activity. Recent research has shown that, during comprehension, recipients integrate information from speech with that contained in co-speech gestures (e.g., Kelly et al., 2010). The current studies take this research one step further by investigating the influence of another modality, namely eye gaze, on speech and gesture comprehension, to advance our understanding of language processing in more situated contexts. In spite of the large body of literature on processing of eye gaze, very few studies have investigated its processing in the context of communication (but see, e.g., Staudte & Crocker, 2011 for an exception). In two studies we simulated a triadic communication context in which a speaker alternated their gaze between our participant and another (alleged) participant. Participants thus viewed speech-only or speech + gesture utterances either in the role of addressee (direct gaze) or in the role of unaddressed recipient (averted gaze). In Study 1, participants (N = 32) viewed video-clips of a speaker producing speech-only (e.g. “she trained the horse”) or speech+gesture utterances conveying complementary information (e.g. “she trained the horse”+WHIPPING gesture). Participants were asked to judge whether a word displayed on screen after each video-clip matched what the speaker said or not. In half of the cases, the word matched a previously uttered word, requiring a “yes” answer. In all other cases, the word matched the meaning of the gesture the actor had performed, thus requiring a ‘no’ answer.
  • Holler, J., Kelly, S., Hagoort, P., & Ozyurek, A. (2012). When gestures catch the eye: The influence of gaze direction on co-speech gesture comprehension in triadic communication. Talk presented at the 5th Conference of the International Society for Gesture Studies (ISGS 5). Lund, Sweden. 2012-07-24 - 2012-07-27.
  • Holler, J., Kelly, S., Hagoort, P., & Ozyurek, A. (2012). When gestures catch the eye: The influence of gaze direction on co-speech gesture comprehension in triadic communication. Talk presented at the 34th Annual Meeting of the Cognitive Science Society (CogSci 2012). Sapporo, Japan. 2012-08-01 - 2012-08-04.
  • Humphries, S., Poliakoff, E., & Holler, J. (2012). Action representation in co-speech gestures in Parkinson's Disease. Poster presented at the Parkinson's UK Research Conference, York, UK.
  • Humphries, S., Poliakoff, E., & Holler, J. (2012). How does Parkinson’s Disease affect the way people use gestures to communicate about actions?. Poster presented at Parkinson’s UK Research Conference, York.

    Abstract

    Objective: To examine how co-speech gestures depicting actions are affected in Parkinson’s disease (PD), and to explore how gestures might be related to measures of verbal fluency and action naming. Background: PD affects not only motor abilities, but also language and communication. Language is more impaired for words relating to motor content; e.g., patients take longer to name actions with a high compared to a low motor content. Co-speech gestures embody a form of action which is tightly linked to language and which represents meaningful information that forms a unified whole together with that contained in speech. However, co-speech gestures have rarely been investigated in PD. Recent data showed that gestural precision was reduced in PD patients when describing actions, suggesting that the mental representations of actions underlying their co-speech gestures have become less specific. We investigated this phenomenon for a wider range of actions than the original study, and also explored the possible relationship between verbal fluency/naming deficits and gestures. Method: Sixteen PD patients and 13 IQ-matched healthy controls were video recorded describing pictures and video clips of actions, such as running and knitting. Participants also completed measures of verbal fluency (generating as many words as possible in one minute for certain phonological and semantic categories) and action naming. Results: Analysis is in progress. We are comparing the rate of co-speech gesture production as well as the precision of action-related co-speech gestures between PD patients and controls. We will also examine the relationship between gestures and scores on tasks of verbal fluency and action naming. Conclusions: Investigating co-speech gestures associated with actions has implications for understanding both communication and action representation in Parkinson’s.
  • Kelly, S., Ozyurek, A., Healey, M., & Holler, J. (2012). The communicative influence of gesture and action during speech comprehension: Gestures have the upper hand. Talk presented at the Acoustics 2012 Hong Kong Conference and Exhibition. Hong Kong. 2012-05-13 - 2012-05-18.
  • Kokal, I., Holler, J., Ozyurek, A., Kelly, S., Toni, I., & Hagoort, P. (2012). Eye'm talking to you: Speakers' gaze direction modulates the integration of speech and iconic gestures in the right MTG. Poster presented at the 4th Annual Neurobiology of Language Conference (NLC 2012), San Sebastian, Spain.
  • Kokal, I., Holler, J., Ozyurek, A., Kelly, S., Toni, I., & Hagoort, P. (2012). Eye'm talking to you: The role of the Middle Temporal Gyrus in the integration of gaze, gesture and speech. Poster presented at the Social Cognition, Engagement, and the Second-Person-Perspective Conference, Cologne, Germany.
  • Rowbotham, S., Wearden, A., Holler, J., & Lloyd, D. (2012). Investigating the association between pain catastrophising and co-speech gesture production during pain communication. Talk presented at the 8th Annual Scientific Meeting of the UK Society for Behavioural Medicine. Manchester, UK. 2012-12-10 - 2012-12-11.
  • Rowbotham, S., Wearden, A., Holler, J., & Lloyd, D. (2012). The relationship between pain catastrophizing and gesture production during pain communication. Poster presented at the British Psychological Society Division of Health Psychology Section Annual Conference, Liverpool, UK.
  • Rowbotham, S., Holler, J., Wearden, A., & Lloyd, D. (2012). The semantic interplay of speech and co-speech gestures in the description of pain sensations. Talk presented at the 5th Conference of the International Society for Gesture Studies (ISGS 5). Lund, Sweden. 2012-07-24 - 2012-07-27.
  • Theakston, A., & Holler, J. (2012). The effect of co-speech gesture on children's comprehension and production of complex syntactic constructions. Talk presented at the 4th UK Cognitive Linguistics Conference. London, UK. 2012-07-10 - 2012-07-12.
  • Tutton, M., & Holler, J. (2012). The influence of verbal interaction on speaker's gestural communication of mutually shared knowledge. Talk presented at the 5th Conference of the International Society for Gesture Studies (ISGS 5). Lund, Sweden. 2012-07-24 - 2012-07-27.
