Holler, J. (2016). On the pragmatics of multi-modal communication: Gesture, speech and gaze in the coordination of mental states and social interaction [invited talk]. Talk presented at the Centre for Research on Social Interactions, University of Neuchâtel. Neuchâtel, Switzerland. 2016-05-18.
Abstract
Coordination is at the heart of human conversation. In order to interact with one another through talk, we must coordinate at many levels, first and foremost at the level of our mental states, intentions and conversational contributions. In this talk, I will present findings on the pragmatics of multi-modal communication from both production and comprehension studies. In terms of production, I will talk about (1) how co-speech gestures are used in the coordination of meaning, allowing interactants to arrive at a shared understanding of the things they talk about, and (2) how gesture and gaze are employed in the coordination of speaking turns in conversation, with special reference to the psycholinguistic and cognitive challenges that turn-taking poses. In terms of comprehension, I will focus on the interplay of ostensive (social gaze) and semantic (gesture) signals in the context of intention perception and language processing. My talk will bring different sets of findings together to argue for richer research paradigms that capture more of the complexities and sociality of face-to-face conversational interaction. Advancing the field of multi-modal communication research in this direction will allow us to more fully understand the psycholinguistic processes that underlie human language use and language comprehension.
Holler, J. (2016). The role of the body in coordinating minds and utterances in interaction [invited talk]. Talk presented at the International Workshop on Language Production (IWLP 2016). La Jolla, CA, USA. 2016-07-25 - 2016-07-27.
Abstract
Human language has long been considered a unimodal activity, with the body being considered a mere vehicle for expressing acoustic linguistic meaning. But theories of language evolution point towards a close link between vocal and visual communication early on in history, pinpointing gesture as the origin of human language. Some consider this link between gesture and communicative vocalisations as having been temporary, with conventionalized linguistic code eventually replacing early bodily signaling. Others argue for this link being permanent, positing that even fully-fledged human language is a multi-modal phenomenon, with visual signals forming integral components of utterances in face-to-face conversation. My research provides evidence for the latter. Based on this research, I will provide insights into some of the factors and principles governing multi-modal language use in adult interaction. My talk consists of three parts: First, I will present empirical findings showing that movements we produce with our body are indeed integral to spoken language and closely linked to communicative intentions underlying speaking. Second, I will show that bodily signals, first and foremost manual gestures, play an active role in the coordination of meaning during face-to-face interaction, including fundamental processes like the grounding of referential utterances. Third, I will present recent findings on the role of bodily communicative acts in the psycholinguistically challenging context of turn-taking during conversation. Together, the data I present form the basis of a framework aiming to capture multi-modal language use and processing situated in face-to-face interaction, the environment in which language first emerged, is acquired and used most.
Holler, J., & Kendrick, K. H. (2016). Turn-timing and the body: Gesture speeds up conversation. Talk presented at the 7th Conference of the International Society for Gesture Studies (ISGS7). Paris, France. 2016-07-18 - 2016-07-22.
Abstract
Conversation is the core niche of human multi-modal language use, and it is characterized by a system of taking turns. This organization poses a particular psycholinguistic challenge for its participants: given that the gap between two speaking turns averages around just 200 ms (Stivers et al., 2009), while the production of even single-word utterances takes a minimum of 600 ms (Indefrey & Levelt, 2004), language production and comprehension must largely run in parallel; while listening to an on-going turn, a next speaker has to predict the upcoming content and end of that turn to start preparing their own and launch it on time (Levinson, 2013). Recently, research has begun to investigate the cognitive processes underpinning turn-taking (see Holler et al., 2015 for an overview), but this research has focused on the spoken modality. The present study investigates the role co-speech gestures may play in this process. We analysed a corpus of 7 casual face-to-face conversations between English speakers for all question-response sequences (N=281), the gestures that accompanied the identified set of questions, and the timing of these gestures with respect to the speaking turns they accompanied. Moreover, we measured the length of all inter-turn gaps in our set. Our main research question was whether the length of the gap between turns varied systematically as a consequence of questions being accompanied by gesture. Our results revealed that this is indeed the case: Questions with a gestural component were responded to significantly faster than questions without a gestural component. This finding holds when we consider head and hand gestures separately, when we control for points of possible completion in the verbal utterance prior to turn end, and when we control for complexity associated with question type. Furthermore, our findings revealed that, within the group of questions accompanied by gestures, those questions whose gestures retracted prior to turn end were responded to faster than questions whose gestures retracted following turn end. This study provides evidence that gestures accompanying spoken questions in conversation facilitate the coordination of turns. While experimental studies have demonstrated beneficial effects of gestures on language processing, this is the first evidence that gestures may benefit processing even in the rich, cognitively challenging context of conversational interaction. That is, gestures appear to play an important psycholinguistic function during immersed, in situ language processing. Experimental work is currently exploring at which level (semantic, pragmatic, perceptual) the facilitative effects we found are operating. The findings not only suggest psycholinguistic processing benefits but also expand on previous turn-taking models that restrict the function of gesture to turn-yielding/-keeping cues (Duncan, 1972) as well as on turn-taking models focusing primarily on the verbal modality (Sacks et al., 1974).
Hömke, P., Holler, J., & Levinson, S. C. (2016). Blinking as addressee feedback in face-to-face conversation. Talk presented at the 7th Conference of the International Society for Gesture Studies (ISGS7). Paris, France. 2016-07-18 - 2016-07-22.
Poliakoff, E., Humphries, S., Crawford, T., & Holler, J. (2016). Varying the degree of motion in actions influences gestural action depictions in Parkinson’s disease. Talk presented at the British Neuropsychological Society Autumn Meeting. London, UK. 2016-10-26 - 2016-10-27.
Abstract
In communication, speech is often accompanied by co-speech gestures, which embody a link between language and action. Language impairments in Parkinson’s disease (PD) are particularly pronounced for action-related words in comparison to nouns. People with PD produce fewer gestures from a first-person perspective when they describe others’ actions (Humphries et al., 2016), which may reflect a difficulty in simulation. We extended this to investigate the gestural depiction of other types of action information, such as “manner” (how an action is performed) and “path” (the trajectory of a moving figure in space). We also explored whether the level of motion required to perform an action influences the way that people with PD use gestures to depict those actions. Thirty-seven people with PD and 35 age-matched controls viewed a cartoon which included low motion actions (e.g. hiding, knocking) and high motion actions (e.g. running, climbing), and described it to an addressee. We analysed the co-speech gestures they spontaneously produced while doing so. Overall gesture rate was similar in both groups, but people with PD produced action gestures at a significantly lower rate than controls in both motion conditions. Also, people with PD produced significantly fewer manner and first-person action gestures than controls in the high motion condition (but not the low motion condition). Our findings suggest that motor impairments in PD contribute to the way in which actions, especially high motion actions, are depicted gesturally. Thus, people with Parkinson’s may have particular difficulty cognitively representing high motion actions.
Schubotz, L., Drijvers, L., Holler, J., & Ozyurek, A. (2016). The cocktail party effect revisited in older and younger adults: When do iconic co-speech gestures help? Poster presented at the 8th Speech in Noise Workshop (SpiN 2016), Groningen, The Netherlands.
Holler, J. (2015). Gesture, gaze and the body in the coordination of turns in conversation [invited talk]. Talk presented at the symposium ‘How cognition supports social interaction: From joint action to dialogue’ at the 19th Conference of the European Society for Cognitive Psychology (ESCoP). Paphos, Cyprus. 2015-09-17 - 2015-09-20.
Abstract
Human language has long been considered a unimodal activity, with the body being considered a mere vehicle for expressing acoustic linguistic meaning. But theories of language evolution point towards a close link between vocal and visual communication early on in history, pinpointing gesture as the origin of human language. Some consider this link between gesture and communicative vocalisations as having been temporary, with conventionalized linguistic code eventually replacing early bodily signaling. Others argue for this link being permanent, positing that even fully-fledged human language is a multi-modal phenomenon, with visual signals forming integral components of utterances in face-to-face conversation. My research provides evidence for the latter. Based on this research, I will provide insights into some of the factors and principles governing multi-modal language use in adult interaction. My talk consists of three parts: First, I will present empirical findings showing that movements we produce with our body are indeed integral to spoken language and closely linked to communicative intentions underlying speaking. Second, I will show that bodily signals, first and foremost manual gestures, play an active role in the coordination of meaning during face-to-face interaction, including fundamental processes like the grounding of referential utterances. Third, I will present recent findings on the role of bodily communicative acts in the psycholinguistically challenging context of turn-taking during conversation. Together, the data I present form the basis of a framework aiming to capture multi-modal language use and processing situated in face-to-face interaction, the environment in which language first emerged, is acquired and used most.
Holler, J., & Kendrick, K. H. (2015). Gesture, gaze, and the body in the organisation of turn-taking for conversation. Talk presented at the 19th Meeting of the European Society for Cognitive Psychology (ESCoP 2015). Paphos, Cyprus. 2015-09-17 - 2015-09-20.
Abstract
The primordial site of conversation is face-to-face social interaction, where participants make use of visual modalities, as well as talk, in the coordination of collaborative action. This most basic observation leads to a fundamental question: what is the place of multimodal resources such as these in the organisation of turn-taking for conversation? To answer this question, we collected a corpus of both dyadic and triadic face-to-face interactions between adults, with the aim of building on existing observations of the use of visual bodily modalities in conversation (e.g., Duncan, 1972; Goodwin, 1981; Kendon, 1967; Lerner, 2003; Mondada, 2007; Oloff, 2013; Rossano, 2012; Sacks & Schegloff, 2002; Schegloff, 1998).
The corpus retains the spontaneity and naturalness of everyday talk as much as possible while combining it with state-of-the-art technology to allow for exact, detailed analyses of verbal and visual conversational behaviours. Each participant (1) was filmed by three high-definition video cameras (providing a frontal plus two lateral views), allowing for fine-grained, frame-by-frame analyses of bodily conduct, as well as the precise measurement of how individual bodily behaviours are timed with respect to each other and with respect to speech; (2) wore a head-mounted microphone providing high-quality recordings of the audio signal suitable for determining on- and off-sets of speaking turns, as well as inter-turn gaps, with high precision; and (3) wore head-mounted eye-tracking glasses to monitor eye movements and fixations overlaid onto a video recording of the visual scene the participant was viewing at any given moment (including the other [two] participant[s] and the surroundings in which the conversation took place). The HD video recordings of body behaviour, the eye-tracking video recordings, and the audio recordings from all 2/3 participants engaged in each conversation were then integrated within a single software application (ELAN) for synchronised playback and analysis.
The analyses focus on the use and interplay of visual bodily resources, including eye gaze, co-speech gestures, and body posture, during conversational coordination, as well as on how these signals interweave with participants’ turns at talk. The results provide insight into the process of turn projection as evidenced by participants’ gaze behaviour with a focus on the role different bodily cues play in this context, and into how concurrent visual and verbal resources are involved in turn construction and turn allocation. This project will add to our understanding of core issues in the field of CA, such as by elucidating the role of multi-modality and number of participants engaged in talk-in-interaction (Schegloff, 2009).
References
Duncan, S. (1972). Some signals and rules for taking speaking turns in conversations. Journal of Personality and Social Psychology, 23, 283-292.
Goodwin, C. (1981). Conversational organization: Interaction between speakers and hearers. New York: Academic Press.
Kendon, A. (1967). Some functions of gaze-direction in social interaction. Acta Psychologica, 26, 22-63.
Lerner, G. H. (2003). Selecting next speaker: The context-sensitive operation of a context-free organization. Language in Society, 32(2), 177-201.
Mondada, L. (2007). Multimodal resources for turn-taking: Pointing and the emergence of possible next speakers. Discourse Studies, 9, 195-226.
Oloff, F. (2013). Embodied withdrawal after overlap resolution. Journal of Pragmatics, 46, 139-156.
Rossano, F. (2012). Gaze behavior in face-to-face interaction. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Sacks, H., & Schegloff, E. (2002). Home position. Gesture, 2, 133-146.
Schegloff, E. (1998). Body torque. Social Research, 65, 535-596.
Schegloff, E. (2009). One perspective on Conversation Analysis: Comparative perspectives. In J. Sidnell (ed.), Conversation Analysis: Comparative perspectives, pp. 357-406. Cambridge: Cambridge University Press.
Holler, J., & Kendrick, K. H. (2015). Gesture, gaze, and the body in the organisation of turn-taking for conversation. Poster presented at the 14th International Pragmatics Conference, Antwerp, Belgium.
Abstract
The primordial site of conversation is face-to-face social interaction, where participants make use of visual modalities, as well as talk, in the coordination of collaborative action. This most basic observation leads to a fundamental question: what is the place of multimodal resources such as these in the organisation of turn-taking for conversation? To answer this question, we collected a corpus of both dyadic and triadic face-to-face interactions between adults, with the aim of building on existing observations of the use of visual bodily modalities in conversation (e.g., Duncan, 1972; Goodwin, 1981; Kendon, 1967; Lerner, 2003; Mondada, 2007; Oloff, 2013; Rossano, 2012; Sacks & Schegloff, 2002; Schegloff, 1998).
The corpus retains the spontaneity and naturalness of everyday talk as much as possible while combining it with state-of-the-art technology to allow for exact, detailed analyses of verbal and visual conversational behaviours. Each participant (1) was filmed by three high-definition video cameras (providing a frontal plus two lateral views), allowing for fine-grained, frame-by-frame analyses of bodily conduct, as well as the precise measurement of how individual bodily behaviours are timed with respect to each other and with respect to speech; (2) wore a head-mounted microphone providing high-quality recordings of the audio signal suitable for determining on- and off-sets of speaking turns, as well as inter-turn gaps, with high precision; and (3) wore head-mounted eye-tracking glasses to monitor eye movements and fixations overlaid onto a video recording of the visual scene the participant was viewing at any given moment (including the other [two] participant[s] and the surroundings in which the conversation took place). The HD video recordings of body behaviour, the eye-tracking video recordings, and the audio recordings from all 2/3 participants engaged in each conversation were then integrated within a single software application (ELAN) for synchronised playback and analysis.
The analyses focus on the use and interplay of visual bodily resources, including eye gaze, co-speech gestures, and body posture, during conversational coordination, as well as on how these signals interweave with participants’ turns at talk. The results provide insight into the process of turn projection as evidenced by participants’ gaze behaviour, with a focus on the role different bodily cues play in this context, and into how concurrent visual and verbal resources are involved in turn construction and turn allocation. This project will add to our understanding of core issues in the field of CA, such as by elucidating the role of multi-modality and number of participants engaged in talk-in-interaction (Schegloff, 2009).
References
Duncan, S. (1972). Some signals and rules for taking speaking turns in conversations. Journal of Personality and Social Psychology, 23, 283-292.
Goodwin, C. (1981). Conversational organization: Interaction between speakers and hearers. New York: Academic Press.
Kendon, A. (1967). Some functions of gaze-direction in social interaction. Acta Psychologica, 26, 22-63.
Lerner, G. H. (2003). Selecting next speaker: The context-sensitive operation of a context-free organization. Language in Society, 32(2), 177-201.
Mondada, L. (2007). Multimodal resources for turn-taking: Pointing and the emergence of possible next speakers. Discourse Studies, 9, 195-226.
Oloff, F. (2013). Embodied withdrawal after overlap resolution. Journal of Pragmatics, 46, 139-156.
Rossano, F. (2012). Gaze behavior in face-to-face interaction. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Sacks, H., & Schegloff, E. (2002). Home position. Gesture, 2, 133-146.
Schegloff, E. (1998). Body torque. Social Research, 65, 535-596.
Schegloff, E. (2009). One perspective on Conversation Analysis: Comparative perspectives. In J. Sidnell (ed.), Conversation Analysis: Comparative perspectives, pp. 357-406. Cambridge: Cambridge University Press.
Holler, J. (2015). On the pragmatics of multi-modal face-to-face communication: Gesture, speech and gaze in the coordination of mental states and social interaction. Talk presented at the 4th GESPIN - Gesture & Speech in Interaction Conference. Nantes, France. 2015-09-02 - 2015-09-04.
Abstract
Coordination is at the heart of human conversation. In order to interact with one another through talk, we must coordinate at many levels, first and foremost at the level of our mental states, intentions and conversational contributions. In this talk, I will present findings on the pragmatics of multi-modal communication from both production and comprehension studies. In terms of production, I will throw light on (1) how co-speech gestures are used in the coordination of meaning to allow interactants to arrive at a shared understanding of the things they talk about, as well as on (2) how gesture and gaze are employed in the coordination of speaking turns in spontaneous conversation, with special reference to the psycholinguistic and cognitive challenges that turn-taking poses. In terms of comprehension, I will focus on communicative intentions and the interplay of ostensive and semantic multi-modal signals in triadic communication contexts. My talk will bring these different findings together to make the argument for richer research paradigms that capture more of the complexities and sociality of face-to-face conversational interaction. Advancing the field of multi-modal communication in this way will allow us to more fully understand the psycholinguistic processes that underlie human language use and language comprehension.
Holler, J. (2015). Visible communicative acts in the coordination of interaction. [invited talk]. Talk presented at Institute for Language Sciences, Cologne University. Cologne, Germany. 2015-06-11.
Hömke, P., Holler, J., & Levinson, S. C. (2015). Blinking as addressee feedback in face-to-face conversation. Talk presented at the 6th Joint Action Meeting. Budapest, Hungary. 2015-07-01 - 2015-07-04.
Hömke, P., Holler, J., & Levinson, S. C. (2015). Blinking as addressee feedback in face-to-face dialogue? Poster presented at the 19th Workshop on the Semantics and Pragmatics of Dialogue (SemDial 2015 / goDIAL), Gothenburg, Sweden.
Hömke, P., Holler, J., & Levinson, S. C. (2015). Blinking as addressee feedback in face-to-face dialogue? Talk presented at the Nijmegen-Tilburg Multi-modality workshop. Tilburg, The Netherlands. 2015-10-22.
Hömke, P., Holler, J., & Levinson, S. C. (2015). Blinking as addressee feedback in face-to-face dialogue? Talk presented at the Donders Discussions Conference. Nijmegen, The Netherlands. 2015-11-05.
Humphries, S., Holler, J., Crawford, T., & Poliakoff, E. (2015). Investigating gesture viewpoint during action description in Parkinson’s Disease. Talk presented at the Research into Imagery and Observation Conference. Stirling, Scotland. 2015-05-14 - 2015-05-15.
Humphries, S., Holler, J., Crawford, T., & Poliakoff, E. (2015). Investigating gesture viewpoint during action description in Parkinson’s Disease. Talk presented at the School of Psychological Sciences PGR Conference. Manchester, England.
Kendrick, K. H., & Holler, J. (2015). Triadic participation in question-answer sequences. Talk presented at Revisiting Participation – Language and Bodies in Interaction workshop. Basel, Switzerland. 2015-06-24 - 2015-06-27.
Kendrick, K. H., & Holler, J. (2015). Triadic participation in question-answer sequences. Talk presented at the 14th International Pragmatics Conference. Antwerp, Belgium. 2015-07-26 - 2015-07-31.
Schubotz, L., Holler, J., & Ozyurek, A. (2015). Age-related differences in multi-modal audience design: Young, but not old, speakers adapt speech and gestures to their addressee's knowledge. Talk presented at the 4th GESPIN - Gesture & Speech in Interaction Conference. Nantes, France. 2015-09-02 - 2015-09-05.
Schubotz, L., Drijvers, L., Holler, J., & Ozyurek, A. (2015). The cocktail party effect revisited in older and younger adults: When do iconic co-speech gestures help? Poster presented at Donders Sessions 2015, Nijmegen, The Netherlands.
Schubotz, L., Drijvers, L., Holler, J., & Ozyurek, A. (2015). The cocktail party effect revisited in older and younger adults: When do iconic co-speech gestures help? Talk presented at Donders Discussions 2015. Nijmegen, The Netherlands. 2015-11-05.