Dona, L., Özyürek, A., Holler, J., Woensdregt, M., & Raviv, L. (2024). The role of facial expressions signalling confidence or doubt in language emergence. Poster presented at the IMPRS Conference 2024, Nijmegen, The Netherlands.
-
Dona, L., Özyürek, A., Holler, J., Woensdregt, M., & Raviv, L. (2024). Communicating confidence and doubt through the face: Implications for language emergence. Poster presented at the Highlights in the Language Sciences Conference 2024, Nijmegen, The Netherlands.
-
Emmendorfer, A. K., & Holler, J. (2024). The influence of speaker gaze on addressee response planning: Evidence from EEG data. Poster presented at the IMPRS Conference 2024, Nijmegen, The Netherlands.
-
Emmendorfer, A. K., & Holler, J. (2024). Addressee facial signals indicate upcoming response: Evidence from an online VR experiment. Poster presented at the Highlights in the Language Sciences Conference 2024, Nijmegen, The Netherlands.
-
Ghaleb, E., Rasenberg, M., Pouw, W., Toni, I., Holler, J., Özyürek, A., & Fernandez, R. (2024). Analysing cross-speaker convergence through the lens of automatically detected shared linguistic constructions. Poster presented at the 46th Annual Meeting of the Cognitive Science Society (CogSci 2024), Rotterdam, The Netherlands.
-
Ghaleb, E., Burenko, I., Rasenberg, M., Pouw, W., Uhrig, P., Wilson, A., Toni, I., Holler, J., Özyürek, A., & Fernández, R. (2024). Temporal alignment and integration of audio-visual cues for co-speech gesture detection. Poster presented at the Highlights in the Language Sciences Conference 2024, Nijmegen, The Netherlands.
-
Mazzini, S., Holler, J., Hagoort, P., & Drijvers, L. (2024). Inter-brain synchrony during (un)successful face-to-face communication. Poster presented at the Highlights in the Language Sciences Conference 2024, Nijmegen, The Netherlands.
-
Ter Bekke, M., Drijvers, L., & Holler, J. (2024). Co-speech hand gestures are used to predict upcoming meaning. Poster presented at the Highlights in the Language Sciences Conference 2024, Nijmegen, The Netherlands.
-
Ter Bekke, M., Drijvers, L., & Holler, J. (2024). Gestures speed up responses by improving predictions of upcoming meaning. Poster presented at the Highlights in the Language Sciences Conference 2024, Nijmegen, The Netherlands.
-
Emmendorfer, A. K., & Holler, J. (2023). Addressee gaze direction and response timing signal upcoming response preference: Evidence from behavioral and EEG experiments. Poster presented at the 15th Annual Meeting of the Society for the Neurobiology of Language (SNL 2023), Marseille, France.
-
Emmendorfer, A. K., & Holler, J. (2023). The influence of speaker gaze on addressees’ response planning: Evidence from behavioral and EEG data. Poster presented at the 15th Annual Meeting of the Society for the Neurobiology of Language (SNL 2023), Marseille, France.
-
Mazzini, S., Holler, J., Hagoort, P., & Drijvers, L. (2023). Investigating inter-brain synchrony during (un-)successful face-to-face communication. Poster presented at the 9th bi-annual Joint Action Meeting (JAM), Budapest, Hungary.
-
Mazzini, S., Holler, J., Hagoort, P., & Drijvers, L. (2023). Inter-brain synchrony during (un)successful face-to-face communication. Poster presented at the 15th Annual Meeting of the Society for the Neurobiology of Language (SNL 2023), Marseille, France.
-
Mazzini, S., Holler, J., Hagoort, P., & Drijvers, L. (2023). Studying the association between co-speech gestures, mutual understanding and inter-brain synchrony in face-to-face conversations. Poster presented at the 15th Annual Meeting of the Society for the Neurobiology of Language (SNL 2023), Marseille, France.
-
Mazzini, S., Holler, J., Hagoort, P., & Drijvers, L. (2023). Inter-brain synchrony during (un)successful face-to-face communication. Poster presented at the 19th NVP Winter Conference on Brain and Cognition, Egmond aan Zee, The Netherlands.
Abstract
Human communication requires interlocutors to mutually understand each other. Previous research has suggested inter-brain synchrony as an important feature of social interaction, since it has been observed during joint attention, speech interactions and cooperative tasks. Nonetheless, it is still unknown whether inter-brain synchrony is actually related to successful face-to-face communication. Here, we use dual-EEG to study whether inter-brain synchrony is modulated during episodes of successful and unsuccessful communication in clear and noisy communication settings. Dyads performed a tangram-based referential communication task with and without background noise, while both their EEG and audiovisual behavior were recorded. Other-initiated repairs were annotated in the audiovisual data and were used as indices of unsuccessful and successful communication. More specifically, we compared inter-brain synchrony during episodes of miscommunication (repair initiations) and episodes of mutual understanding (repair solutions and acceptance phases) in the clear and the noise condition. We expect that when communication is successful, inter-brain synchrony will be stronger than when communication is unsuccessful, and we expect that these patterns will be most pronounced in the noise condition. Results are currently being analyzed and will be presented and discussed with respect to the inter-brain neural signatures underlying the process of mutual understanding in face-to-face conversation. -
Ter Bekke, M., Drijvers, L., & Holler, J. (2023). Do listeners use speakers’ iconic gestures to predict upcoming words? Poster presented at the 8th Gesture and Speech in Interaction (GESPIN 2023), Nijmegen, The Netherlands.
-
Ter Bekke, M., Drijvers, L., & Holler, J. (2023). Gestures speed up responses to questions. Poster presented at the 8th Gesture and Speech in Interaction (GESPIN 2023), Nijmegen, The Netherlands.
-
Ter Bekke, M., Drijvers, L., & Holler, J. (2023). Do listeners use speakers’ iconic hand gestures to predict upcoming words? Poster presented at the 15th Annual Meeting of the Society for the Neurobiology of Language (SNL 2023), Marseille, France.
-
Trujillo, J. P., Dyer, R. M. K., & Holler, J. (2023). Differences in partner empathy are associated with interpersonal kinetic and prosodic entrainment during conversation. Poster presented at the 9th bi-annual Joint Action Meeting (JAM), Budapest, Hungary.
-
Drijvers, L., & Holler, J. (2022). Face-to-face spatial orientation fine-tunes the brain for neurocognitive processing in conversation. Poster presented at the 14th Annual Meeting of the Society for the Neurobiology of Language (SNL 2022), Philadelphia, PA, USA.
-
Emmendorfer, A. K., Gorter, A., & Holler, J. (2022). Interactive gestures as response mobilizing cues? Evidence from corpus, behavioral, and EEG data. Poster presented at the 14th Annual Meeting of the Society for the Neurobiology of Language (SNL 2022), Philadelphia, PA, USA.
-
Emmendorfer, A. K., Banovac, L., & Holler, J. (2022). Investigating the role of speaker gaze in response mobilization: Evidence from corpus, behavioral, and EEG data. Poster presented at the 14th Annual Meeting of the Society for the Neurobiology of Language (SNL 2022), Philadelphia, PA, USA.
-
Mazzini, S., Holler, J., Hagoort, P., & Drijvers, L. (2022). Intra- and inter-brain synchrony during (un)successful face-to-face communication. Poster presented at the 18th NVP Winter Conference on Brain and Cognition, Egmond aan Zee, The Netherlands.
-
Mazzini, S., Holler, J., Hagoort, P., & Drijvers, L. (2022). Intra- and inter-brain synchrony during (un)successful face-to-face communication. Poster presented at Neurobiology of Language: Key Issues and Ways Forward II, online.
-
Mazzini, S., Holler, J., Hagoort, P., & Drijvers, L. (2022). Intra- and inter-brain synchrony during (un)successful face-to-face communication. Poster presented at the 14th Annual Meeting of the Society for the Neurobiology of Language (SNL 2022), Philadelphia, PA, USA.
-
Nota, N., Trujillo, J. P., & Holler, J. (2022). Facial signals in multimodal communication: The effect of eyebrow movements on social action attribution. Poster presented at the 18th NVP Winter Conference on Brain and Cognition, Egmond aan Zee, The Netherlands.
-
Nota, N., Trujillo, J. P., & Holler, J. (2022). Conversational eyebrow frowns facilitate question identification: An online VR study. Poster presented at the IMPRS Conference 2022, Nijmegen, The Netherlands.
-
Nota, N., Trujillo, J. P., & Holler, J. (2022). Specific facial signals associate with subcategories of social actions conveyed through questions. Poster presented at the Face2face: Advancing the science of social interaction Royal Society London discussion meeting, online.
-
Nota, N., Trujillo, J. P., & Holler, J. (2022). Specific facial signals associate with subcategories of social actions conveyed through questions. Poster presented at the Donders Poster Sessions 2022, Nijmegen, The Netherlands.
-
Ter Bekke, M., Drijvers, L., & Holler, J. (2022). Hand gestures speed up responses to questions. Poster presented at the 18th NVP Winter Conference on Brain and Cognition, Egmond aan Zee, The Netherlands.
-
Trujillo, J. P., Levinson, S. C., & Holler, J. (2022). Multimodal adaptation is a two-way street: A multiscale investigation of the human communication system's response to visual disruption. Poster presented at the Face2face: Advancing the science of social interaction Royal Society London discussion meeting, online.
-
Trujillo, J. P., & Holler, J. (2022). The bodily kinematics of signaling conversational social action. Poster presented at the 9th International Society for Gesture Studies conference (ISGS 2022), Chicago, IL, USA.
-
Nota, N., Trujillo, J. P., & Holler, J. (2021). Facial signals and social actions in multimodal face-to-face interaction. Poster presented at the 4th Experimental Pragmatics in Italy Conference (XPRAG.it 2020(21)), online.
-
Schubotz, L., Ozyurek, A., & Holler, J. (2018). Age-related differences in multimodal recipient design. Poster presented at the 10th Dubrovnik Conference on Cognitive Science, Dubrovnik, Croatia.
-
Schubotz, L., Drijvers, L., Holler, J., & Ozyurek, A. (2016). The cocktail party effect revisited in older and younger adults: When do iconic co-speech gestures help? Poster presented at the 8th Speech in Noise Workshop (SpiN 2016), Groningen, The Netherlands.
-
Holler, J., & Kendrick, K. H. (2015). Gesture, gaze, and the body in the organisation of turn-taking for conversation. Poster presented at the 14th International Pragmatics Conference, Antwerp, Belgium.
Abstract
The primordial site of conversation is face-to-face social interaction where participants make use of visual modalities, as well as talk, in the coordination of collaborative action. This most basic observation leads to a fundamental question: what is the place of multimodal resources such as these in the organisation of turn-taking for conversation? To answer this question, we collected a corpus of both dyadic and triadic face-to-face interactions between adults, with the aim to build on existing observations of the use of visual bodily modalities in conversation (e.g., Duncan, 1972; Goodwin, 1981; Kendon, 1967; Lerner 2003; Mondada 2007; Oloff, 2013; Rossano, 2012; Sacks & Schegloff, 2002; Schegloff, 1998). The corpus retains the spontaneity and naturalness of everyday talk as much as possible while combining it with state-of-the-art technology to allow for exact, detailed analyses of verbal and visual conversational behaviours. Each participant (1) was filmed by three high definition video cameras (providing a frontal plus two lateral views) allowing for fine-grained, frame-by-frame analyses of bodily conduct, as well as the precise measurement of how individual bodily behaviours are timed with respect to each other, and with respect to speech; (2) wore a head-mounted microphone providing high quality recordings of the audio signal suitable for determining on- and off-sets of speaking turns, as well as inter-turn gaps, with high precision, (3) wore head-mounted eye-tracking glasses to monitor eye movements and fixations overlaid onto a video recording of the visual scene the participant was viewing at any given moment (including the other [two] participant[s] and the surroundings in which the conversation took place). 
The HD video recordings of body behaviour, the eye-tracking video recordings, and the audio recordings from all 2/3 participants engaged in each conversation were then integrated within a single software application (ELAN) for synchronised playback and analysis. The analyses focus on the use and interplay of visual bodily resources, including eye gaze, co-speech gestures, and body posture, during conversational coordination, as well as on how these signals interweave with participants’ turns at talk. The results provide insight into the process of turn projection as evidenced by participants’ gaze behaviour with a focus on the role different bodily cues play in this context, and into how concurrent visual and verbal resources are involved in turn construction and turn allocation. This project will add to our understanding of core issues in the field of CA, such as by elucidating the role of multi-modality and the number of participants engaged in talk-in-interaction (Schegloff, 2009).
References
Duncan, S. (1972). Some signals and rules for taking speaking turns in conversations. Journal of Personality and Social Psychology, 23, 283-292.
Goodwin, C. (1981). Conversational organization: Interaction between speakers and hearers. New York: Academic Press.
Kendon, A. (1967). Some functions of gaze-direction in social interaction. Acta Psychologica, 26, 22-63.
Lerner, G. H. (2003). Selecting next speaker: The context-sensitive operation of a context-free organization. Language in Society, 32(2), 177-201.
Mondada, L. (2007). Multimodal resources for turn-taking: Pointing and the emergence of possible next speakers. Discourse Studies, 9, 195-226.
Oloff, F. (2013). Embodied withdrawal after overlap resolution. Journal of Pragmatics, 46, 139-156.
Rossano, F. (2012). Gaze behavior in face-to-face interaction. PhD thesis, Radboud University Nijmegen, Nijmegen.
Sacks, H., & Schegloff, E. (2002). Home position. Gesture, 2, 133-146.
Schegloff, E. (1998). Body torque. Social Research, 65, 535-596.
Schegloff, E. (2009). One perspective on Conversation Analysis: Comparative perspectives. In J. Sidnell (ed.), Conversation Analysis: Comparative perspectives, pp. 357-406. Cambridge: Cambridge University Press. -
Hömke, P., Holler, J., & Levinson, S. C. (2015). Blinking as addressee feedback in face-to-face dialogue? Poster presented at the 19th Workshop on the Semantics and Pragmatics of Dialogue (SemDial 2015 / goDIAL), Gothenburg, Sweden.
-
Schubotz, L., Drijvers, L., Holler, J., & Ozyurek, A. (2015). The cocktail party effect revisited in older and younger adults: When do iconic co-speech gestures help? Poster presented at Donders Sessions 2015, Nijmegen, The Netherlands.
-
Humphries, S., Holler, J., Crawford, T., & Poliakoff, E. (2014). Representing actions in co-speech gestures in Parkinson's Disease. Poster presented at the 6th International Society for Gesture Studies Congress, San Diego, California, USA.
Abstract
Parkinson’s disease (PD) is a progressive, neurological disorder caused by the loss of dopaminergic cells in the basal ganglia, which is involved in motor control. This leads to the cardinal motor symptoms of PD: tremor, bradykinesia (slowness of movement), rigidity and postural instability. PD also leads to general cognitive impairment (executive function, memory, visuospatial abilities), and language impairments; PD patients perform worse at language tasks such as providing word definitions and naming objects, generating lists of verbs, and naming actions. Thus, there seems to be a particular impairment for action-language. Despite the fact that action and language are both impaired in PD, little research has explored if and how co-speech gestures, which embody a link between these two domains, are affected. The Gesture as Simulated Action hypothesis argues that gestures arise from cognitive representations or simulations of actions. It has been argued that people with PD may be less able to cognitively represent, simulate and imagine actions, which could account for their action-language impairment and may also mean that gestures are affected. Recently, it has been shown that while there is not a straightforward reduction in gesture use in PD, patients’ gestures which described actions are less precise/informative than those of controls. However, participants only described two actions, and to a knowing addressee (so the task was not communicative). The present study extended this by asking participants to describe a wide range of actions in an apparently communicative task, and compared viewpoint as well as precision between the two groups. Gesture viewpoint was examined in order to provide a window into the cognitive representations underlying gesture, by demonstrating whether or not the speaker has placed themselves as the agent within the action (character viewpoint), requiring a cognitive simulation of the action.
Overall, studying gestures in PD has clinical relevance, and will provide insight into the cognitive basis of gestures in healthy people. 25 PD patients and 25 age-matched controls viewed 10 pictures and 10 videos depicting a range of actions and described them to help an addressee identify the correct stimulus. No difference in the rate of gesture production between the two groups was found. However, the precision of gestures describing actions was found to be significantly lower in the PD group. Furthermore, the proportion of gestures produced from character viewpoint was found to differ between the groups, with PD patients producing significantly fewer C-VPT gestures. This suggests that the cognitive representations underlying the gestures have changed in PD, and that people with PD are less able to imagine themselves as the agent of the action. This supports the GSA hypothesis by demonstrating that gesture production changes when the ability to perform and to cognitively simulate actions is impaired. Our next study will assess the relationships between cognitive factors affected in PD and gesture, and motor imagery ability and gesture. The study will also examine gestures produced by people with PD when describing a wide range of semantic content in various communicative situations. -
Peeters, D., Chu, M., Holler, J., Hagoort, P., & Ozyurek, A. (2014). Behavioral and neurophysiological correlates of communicative intent in the production of pointing gestures. Poster presented at the Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam, The Netherlands.
-
Schubotz, L., Holler, J., & Ozyurek, A. (2014). The impact of age and mutually shared knowledge on multi-modal utterance design. Poster presented at the 6th International Society for Gesture Studies Congress, San Diego, California, USA.
Abstract
Previous work suggests that the communicative behavior of older adults differs systematically from that of younger adults. For instance, older adults produce significantly fewer representational gestures than younger adults in monologue description tasks (Cohen & Borsoi, 1996; Feyereisen & Havard, 1999). In addition, older adults seem to have more difficulty than younger adults in establishing common ground (i.e. knowledge, assumptions, and beliefs mutually shared between a speaker and an addressee, Clark, 1996) in speech in a referential communication paradigm (Horton & Spieler, 2007). Here we investigated whether older adults take such common ground into account when designing multi-modal utterances for an addressee. The present experiment compared the speech and co-speech gesture production of two age groups (young: 20-30 years, old: 65-75 years) in an interactive setting, manipulating the amount of common ground between participants.
Thirty-two pairs of naïve participants (16 young, 16 old, same-age pairs only) took part in the experiment. One of the participants (the speaker) narrated short cartoon stories to the other participant (the addressee) (task 1) and gave instructions on how to assemble a 3D model from wooden building blocks (task 2). In both tasks, we varied the amount of information mutually shared between the two participants (common ground manipulation). Additionally, we also obtained a range of cognitive measures from the speaker: verbal working memory (operation span task), visual working memory (visual patterns test and Corsi block test), processing speed and executive functioning (trail making test parts A + B) and a semantic fluency measure (animal naming task). Preliminary data analysis of about half the final sample suggests that overall, speakers use fewer words per narration/instruction when there is shared knowledge with the addressee, in line with previous findings (e.g. Clark & Wilkes-Gibbs, 1986). This effect is larger for young than for old adults, potentially indicating that older adults have more difficulties taking common ground into account when formulating utterances. Further, representational co-speech gestures were produced at the same rate by both age groups regardless of common ground condition in the narration task (in line with Campisi & Özyürek, 2013). In the building block task, however, the trend for the young adults is to gesture at a higher rate in the common ground condition, suggesting that they rely more on the visual modality here (cf. Holler & Wilkin, 2009). The same trend could not be found for the old adults. Within the next three months, we will extend our analysis a) by taking a wider range of gesture types (interactive gestures, beats) into account and b) by looking at qualitative features of speech (information content) and co-speech gestures (size, shape, timing). Finally, we will correlate the resulting data with the data from the cognitive tests.
This study will contribute to a better understanding of the communicative strategies of a growing aging population as well as to the body of research on co-speech gesture use in addressee design. It also addresses the relationship between cognitive abilities on the one hand and co-speech gesture production on the other hand, potentially informing existing models of co-speech gesture production. -
Wilby, F., Riddell, C., Lloyd, D., Wearden, A., & Holler, J. (2014). Naming with words and gestures in children with Down Syndrome. Poster presented at the 6th International Society for Gesture Studies Congress, San Diego, California, USA.
Abstract
Several researchers have shown a close relationship between gesture and language in typically developing children and in children with developmental disorders involving delayed or impaired linguistic abilities. Most of these studies reported that, when children are limited in cognitive, linguistic, metalinguistic, and articulatory skills, they may compensate for some of these limitations with gestures (Capone & McGregor, 2004). Some researchers also highlighted that children with Down Syndrome (DS) show a preference for nonverbal communication, using more gestures with respect to typically developing (TD) children (Stefanini, Caselli & Volterra, 2011). The present study investigates the lexical comprehension and production abilities as well as the frequency and the form of gestural production in children with DS. In particular, we are interested in the frequency of gesture production (deictic and representational) and the types of representational gesture produced. Four gesture types were coded, including own body, size and shape, body-part-as-object and imagined-object. Fourteen children with DS (34 months of developmental age, 54 months of chronological age) and a comparison group of 14 typically developing (TD) children (29 months of chronological age) matched for gender and developmental age were assessed through the parent questionnaire MB-CDI and a direct test of lexical comprehension and production (PiNG). Children with DS show a general weakness in lexical comprehension and production. As for the composition of the lexical repertoire, for both groups of children, nouns are understood and produced in higher percentages compared to predicates. Children with DS produce more representational gestures than TD children in the comprehension task, and above all with predicates; on the contrary, both groups of children exhibit the same number of gestures on the MB-CDI and in the lexical production task.
Children with DS produced more unimodal gestural answers than the control group. Children from both groups produced all four gesture types (own body 53%, size and shape 9%, body-part-as-object 25%, and imagined-object 14%). Chi-square analysis revealed no significant difference in the type of gesture produced between the two groups of children for both lexical categories. For both groups the distribution of gesture types reflects an item effect (e.g. 100% of the gestures produced for the pictures lion, kissing and washing were own body, and 100% of those produced for small and long were size and shape). For some items (e.g. comb, talking on the phone) children in both groups produced both types (body-part-as-object and imagined-object) with similar frequency. These data on the types of representational gestures produced by the two groups show a similar conceptual representation in TD children and in children with DS despite a greater impairment of the spoken linguistic abilities in the latter. Future investigations are needed to confirm these preliminary results. -
Holler, J., Schubotz, L., Kelly, S., Schuetze, M., Hagoort, P., & Ozyurek, A. (2013). Here's not looking at you, kid! Unaddressed recipients benefit from co-speech gestures when speech processing suffers. Poster presented at the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013), Berlin, Germany.
-
Peeters, D., Chu, M., Holler, J., Ozyurek, A., & Hagoort, P. (2013). The influence of communicative intent on the form of pointing gestures. Poster presented at the Fifth Joint Action Meeting (JAM5), Berlin, Germany.
-
Cotroneo, C., Holler, J., & Connell, L. (2012). Gesture and the embodiment of auditory perceptual information. Poster presented at the 5th Embodied and Situated Language Processing Conference (ESLP 2012), Newcastle upon Tyne, UK.
-
Herrera, E., Poliakoff, E., Holler, J., McDonald, K., & Cuetos, F. (2012). Naming dynamic actions in Parkinson's disease. Poster presented at the 16th International Congress of Parkinson's Disease and Movement Disorders, Dublin, Ireland.
-
Holler, J., Kelly, S., Hagoort, P., & Ozyurek, A. (2012). Overhearing gesture: The influence of eye gaze direction on the comprehension of iconic gestures. Poster presented at the Social Cognition, Engagement, and the Second-Person-Perspective Conference, Cologne, Germany.
-
Holler, J., Kelly, S., Hagoort, P., & Ozyurek, A. (2012). Overhearing gesture: The influence of eye gaze direction on the comprehension of iconic gestures. Poster presented at the EPS workshop 'What if... the study of language started from the investigation of signed, rather than spoken language?', London, UK.
-
Humphries, S., Poliakoff, E., & Holler, J. (2012). Action representation in co-speech gestures in Parkinson's Disease. Poster presented at the Parkinson's UK Research Conference, York, UK.
-
Humphries, S., Poliakoff, E., & Holler, J. (2012). How does Parkinson’s Disease affect the way people use gestures to communicate about actions? Poster presented at the Parkinson’s UK Research Conference, York, UK.
Abstract
Objective: To examine how co-speech gestures depicting actions are affected in Parkinson’s disease (PD), and to explore how gestures might be related to measures of verbal fluency and action naming. Background: PD affects not only motor abilities, but also language and communication. Language is more impaired for words relating to motor content; e.g., patients take longer to name actions with a high compared to a low motor content. Co-speech gestures embody a form of action which is tightly linked to language and which represents meaningful information that forms a unified whole together with that contained in speech. However, co-speech gestures have rarely been investigated in PD. Recent data showed that gestural precision was reduced in PD patients when describing actions, suggesting that the mental representations of actions underlying their co-speech gestures have become less specific. We investigated this phenomenon for a wider range of actions than the original study, and also explored the possible relationship between verbal fluency/naming deficits and gestures. Method: Sixteen PD patients and 13 IQ-matched healthy controls were video recorded describing pictures and video clips of actions, such as running and knitting. Participants also completed measures of verbal fluency (generating as many words as possible in one minute for certain phonological and semantic categories) and action naming. Results: Analysis is in progress. We are comparing the rate of co-speech gesture production as well as the precision of action-related co-speech gestures between PD patients and controls. We will also examine the relationship between gestures and scores on tasks of verbal fluency and action naming. Conclusions: Investigating co-speech gestures associated with actions has implications for understanding both communication and action representation in Parkinson’s. -
Kokal, I., Holler, J., Ozyurek, A., Kelly, S., Toni, I., & Hagoort, P. (2012). Eye'm talking to you: Speakers' gaze direction modulates the integration of speech and iconic gestures in the right MTG. Poster presented at the 4th Annual Neurobiology of Language Conference (NLC 2012), San Sebastian, Spain.
-
Kokal, I., Holler, J., Ozyurek, A., Kelly, S., Toni, I., & Hagoort, P. (2012). Eye'm talking to you: The role of the Middle Temporal Gyrus in the integration of gaze, gesture and speech. Poster presented at the Social Cognition, Engagement, and the Second-Person-Perspective Conference, Cologne, Germany.
-
Rowbotham, S., Wearden, A., Holler, J., & Lloyd, D. (2012). The relationship between pain catastrophizing and gesture production during pain communication. Poster presented at the British Psychological Society Division of Health Psychology Section Annual Conference, Liverpool, UK.