Asli Ozyurek

Presentations

  • Kan, U., Gökgöz, K., Sumer, B., Tamyürek, E., & Özyürek, A. (2022). Emergence of negation in a Turkish homesign system: Insights from the family context. Talk presented at the Joint Conference on Language Evolution (JCoLE). Kanazawa, Japan. 2022-09-05 - 2022-09-08.
  • Karadöller, D. Z., Manhardt, F., Peeters, D., Özyürek, A., & Ortega, G. (2022). Beyond cognates: Both iconicity and gestures pave the way for speakers in learning signs in L2 at first exposure. Talk presented at the 9th International Society for Gesture Studies conference (ISGS 2022). Chicago, IL, USA. 2022-07-12 - 2022-07-15.
  • Karadöller, D. Z., Manhardt, F., Peeters, D., Özyürek, A., & Ortega, G. (2022). Beyond cognates: Both iconicity and gestures pave the way for speakers in learning signs in L2 at first exposure. Talk presented at the International Conference on Sign Language Acquisition (ICSLA 4). Online. 2022-06-23 - 2022-06-25.
  • Karadöller, D. Z., Sumer, B., Ünal, E., & Özyürek, A. (2022). Relationship between spatial language experience and spatial memory: Evidence from deaf children with late sign language exposure. Talk presented at the International Conference on Sign Language Acquisition (ICSLA 4). Online. 2022-06-23 - 2022-06-25.
  • Karadöller, D. Z., Sumer, B., Ünal, E., & Özyürek, A. (2022). Geç işaret dili ediniminin uzamsal dil ve bellek ilişkisine etkileri [Effects of late sign language acquisition on the relationship between spatial language and memory]. Talk presented at 3. Gelişim Psikolojisi Sempozyumu [3rd Symposium on Developmental Psychology]. Istanbul, Turkey. 2022-07-08 - 2022-07-09.
  • Kırbaşoğlu, K., Ünal, E., Karadöller, D. Z., Sumer, B., & Özyürek, A. (2022). Konuşma ve jestlerde uzamsal ifadelerin gelişimi [Development of spatial expressions in speech and gesture]. Poster presented at 3. Gelişim Psikolojisi Sempozyumu [3rd Symposium on Developmental Psychology], Istanbul, Turkey.
  • Mamus, E., Speed, L., Özyürek, A., & Majid, A. (2022). Sensory modality influences the encoding of motion events in speech but not co-speech gestures. Talk presented at the 9th International Society for Gesture Studies conference (ISGS 2022). Chicago, IL, USA. 2022-07-12 - 2022-07-15.
  • Mamus, E., Speed, L., Rissman, L., Majid, A., & Özyürek, A. (2022). Visual experience affects motion event descriptions in speech and gesture. Talk presented at the 9th International Society for Gesture Studies conference (ISGS 2022). Chicago, IL, USA. 2022-07-12 - 2022-07-15.
  • Özyürek, A., Ünal, E., Manhardt, F., & Brouwer, S. (2022). Modality specific differences in speech, gesture and sign modulate visual attention differentially during message preparation. Talk presented at the 9th International Society for Gesture Studies conference (ISGS 2022). Chicago, IL, USA. 2022-07-12 - 2022-07-15.
  • Özyürek, A. (2022). Multimodality as design feature of human language capacity [keynote]. Talk presented at Institute on Multimodality 2022: Minds, Media, Technology. Bielefeld, Germany. 2022-08-28 - 2022-09-06.
  • Sekine, K., & Özyürek, A. (2022). Gestures give a hand to children's understanding of degraded speech. Talk presented at the 9th International Society for Gesture Studies conference (ISGS 2022). Chicago, IL, USA. 2022-07-12 - 2022-07-15.
  • Slonimska, A., Özyürek, A., & Capirci, O. (2022). Simultaneity as an emergent property of sign languages. Talk presented at the Joint Conference on Language Evolution (JCoLE). Kanazawa, Japan. 2022-09-05 - 2022-09-08.
  • Sumer, B., & Özyürek, A. (2022). Language use in deaf children with early-signing versus late-signing deaf parents. Talk presented at the International Conference on Sign Language Acquisition (ICSLA 4). Online. 2022-06-23 - 2022-06-25.
  • Ünal, E., Kırbaşoğlu, K., Karadöller, D. Z., Sumer, B., & Özyürek, A. (2022). Children's multimodal spatial expressions vary across the complexity of relations. Poster presented at the 8th International Symposium on Brain and Cognitive Science, online.
  • Azar, Z., Backus, A., & Ozyurek, A. (2014). Discourse management: Reference tracking of subject referents in speech and gesture in Turkish narratives. Talk presented at the 17th International Conference on Turkish Linguistics. Rouen, France. 2014-09-03 - 2014-09-05.
  • Dimitrova, D. V., Chu, M., Wang, L., Ozyurek, A., & Hagoort, P. (2014). Beat gestures modulate the processing of focused and non-focused words in context. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam, the Netherlands.

    Abstract

    Information in language is organized according to a principle called information structure: new and important information (focus) is highlighted and distinguished from less important information (non-focus). Most studies so far have been concerned with how focused information is emphasized linguistically and suggest that listeners expect focus to be accented and process it more deeply than non-focus (Wang et al., 2011). Little is known about how listeners deal with non-verbal cues like beat gestures, which also emphasize the words they accompany, similarly to pitch accent. ERP studies suggest that beat gestures facilitate the processing of phonological, syntactic, and semantic aspects of speech (Biau & Soto-Faraco, 2013; Holle et al., 2012; Wang & Chu, 2013). It is unclear whether listeners expect beat gestures to be aligned with the information structure of the message. The present ERP study addresses this question by testing whether beat gestures modulate the processing of accented-focused vs. unaccented-non-focused words in context in a similar way. Participants watched movies with short dialogues and performed a comprehension task. In each dialogue, the answer “He bought the books via Amazon” contained a target word (“books”) which was combined with a beat gesture, a control hand movement (e.g., a self-touching movement), or no gesture. Based on the preceding context, the target word was either in focus and accented, when preceded by a question like “Did the student buy the books or the magazines via Amazon?”, or in non-focus and unaccented, when preceded by a question like “Did the student buy the books via Amazon or via Marktplaats?”. The gestures started 500 ms prior to the target word. All gesture parameters (hand shape, naturalness, emphasis, duration, and gesture-speech alignment) were determined in behavioural tests. ERPs were time-locked to gesture onset to examine gesture effects, and to target word onset for pitch accent effects. We applied a cluster-based random permutation analysis to test for main effects and gesture-accent interactions in both time-locking procedures. We found that accented words elicited a positive main effect between 300-600 ms post target onset. Words accompanied by a beat gesture and a control movement elicited sustained positivities between 200-1300 ms post gesture onset. These independent effects of pitch accent and beat gesture are in line with previous findings (Dimitrova et al., 2012; Wang & Chu, 2013). We also found an interaction between control gesture and pitch accent (1200-1300 ms post gesture onset), showing that accented words accompanied by a control movement elicited a negativity relative to unaccented words. The present data show that beat gestures do not differentially modulate the processing of accented-focused vs. unaccented-non-focused words. Beat gestures engage a positive and long-lasting neural signature, which appears independent from the information structure of the message. Our study suggests that non-verbal cues like beat gestures play a unique role in emphasizing information in speech.
  • Dimitrova, D. V., Chu, M., Wang, L., Ozyurek, A., & Hagoort, P. (2014). Independent effects of beat gesture and pitch accent on processing words in context. Poster presented at the 20th Architectures and Mechanisms for Language Processing Conference (AMLAP 2014), Edinburgh, UK.
  • Kelly, S., Healey, M., Ozyurek, A., & Holler, J. (2014). The integration of gestures and actions with speech: Should we welcome the empty-handed to language comprehension? Talk presented at the 6th Conference of the International Society for Gesture Studies (ISGS 6). San Diego, CA, USA. 2014-07-08 - 2014-07-11.

    Abstract

    Background: Gesture and speech are theorized to form a single integrated system of meaning during language production (McNeill, 1992), and evidence is mounting that this integration applies to language comprehension as well (Kelly, Ozyurek & Maris, 2010). However, it is unknown whether gesture is uniquely integrated with speech or is processed like any other manual action. To explore this issue, we compared the extent to which speech is integrated with hand gestures versus actual actions on objects during comprehension. Method: The present study employed a priming paradigm in two experiments. In Experiment 1, subjects watched multimodal videos that presented auditory (words) and visual (gestures and actions on objects) information. Half the subjects related the audio information to a written prime presented before the video, and the other half related the visual information to the written prime. For half of the multimodal video stimuli, the audio and visual information was congruent, and for the other half, incongruent. The task was to press one button if the written prime was the same as the visual (31 subjects) or audio (31 subjects) information in the target video, or another button if different. RT and accuracy were recorded. Results: In Experiment 2, we reversed the priming sequence with a different set of 18 subjects. Now the video became the prime and the written verb followed as the target, but the task was the same with one difference: to indicate whether the written target was related or unrelated to only the audio information (speech) in the preceding video prime. ERPs were recorded to the written targets. In Experiment 1, subjects in both the audio and visual target tasks were less accurate when processing stimuli in which gestures and actions were incongruent versus congruent with speech, F(1, 60) = 22.90, p < .001, but this effect was less prominent for speech-action than for speech-gesture stimuli. However, subjects were more accurate when identifying actions versus gestures, F(1, 60) = 8.03, p = .006. In Experiment 2, there were two early ERP effects. When primed with gesture, incongruent primes produced a larger P1, t(17) = 3.75, p = 0.002, and P2, t(17) = 3.02, p = 0.008, to the target words than the congruent condition in the grand-averaged ERPs (reflecting early perceptual and attentional processes). However, there were no significant differences between congruent and incongruent conditions when primed with action. Discussion: The incongruency effect replicates and extends previous work by Kelly et al. (2010) by showing not only a bi-directional influence of gesture and speech, but also of action and speech. In addition, the results show that while actions are easier to process than gestures (Exp. 1), gestures may be more tightly tied to the processing of accompanying speech (Exps. 1 & 2). These results suggest that even though gestures are perceptually less informative than actions, they may be treated as communicatively more informative in relation to the accompanying speech. In this way, the two types of visual information might have different status in language comprehension.
  • Peeters, D., Chu, M., Holler, J., Hagoort, P., & Ozyurek, A. (2014). Behavioral and neurophysiological correlates of communicative intent in the production of pointing gestures. Poster presented at the Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam, the Netherlands.
  • Peeters, D., Chu, M., Holler, J., Hagoort, P., & Ozyurek, A. (2014). Behavioral and neurophysiological correlates of communicative intent in the production of pointing gestures. Talk presented at the 6th Conference of the International Society for Gesture Studies (ISGS 6). San Diego, CA, USA. 2014-07-08 - 2014-07-11.
  • Peeters, D., Azar, Z., & Ozyurek, A. (2014). The interplay between joint attention, physical proximity, and pointing gesture in demonstrative choice. Talk presented at the 36th Annual Meeting of the Cognitive Science Society (CogSci2014). Québec City, Canada. 2014-07-23 - 2014-07-26.
  • Schubotz, L., Holler, J., & Ozyurek, A. (2014). The impact of age and mutually shared knowledge on multi-modal utterance design. Poster presented at the 6th International Society for Gesture Studies Congress, San Diego, California, USA.

    Abstract

    Previous work suggests that the communicative behavior of older adults differs systematically from that of younger adults. For instance, older adults produce significantly fewer representational gestures than younger adults in monologue description tasks (Cohen & Borsoi, 1996; Feyereisen & Havard, 1999). In addition, older adults seem to have more difficulty than younger adults in establishing common ground (i.e., knowledge, assumptions, and beliefs mutually shared between a speaker and an addressee; Clark, 1996) in speech in a referential communication paradigm (Horton & Spieler, 2007). Here we investigated whether older adults take such common ground into account when designing multi-modal utterances for an addressee. The present experiment compared the speech and co-speech gesture production of two age groups (young: 20-30 years, old: 65-75 years) in an interactive setting, manipulating the amount of common ground between participants.
    Thirty-two pairs of naïve participants (16 young, 16 old, same-age pairs only) took part in the experiment. One of the participants (the speaker) narrated short cartoon stories to the other participant (the addressee) (task 1) and gave instructions on how to assemble a 3D model from wooden building blocks (task 2). In both tasks, we varied the amount of information mutually shared between the two participants (common ground manipulation). Additionally, we also obtained a range of cognitive measures from the speaker: verbal working memory (operation span task), visual working memory (visual patterns test and Corsi block test), processing speed and executive functioning (trail making test parts A + B), and a semantic fluency measure (animal naming task). Preliminary data analysis of about half the final sample suggests that, overall, speakers use fewer words per narration/instruction when there is shared knowledge with the addressee, in line with previous findings (e.g. Clark & Wilkes-Gibbs, 1986). This effect is larger for young than for old adults, potentially indicating that older adults have more difficulties taking common ground into account when formulating utterances. Further, representational co-speech gestures were produced at the same rate by both age groups regardless of common ground condition in the narration task (in line with Campisi & Özyürek, 2013). In the building block task, however, the trend for the young adults is to gesture at a higher rate in the common ground condition, suggesting that they rely more on the visual modality here (cf. Holler & Wilkin, 2009). The same trend could not be found for the old adults. Within the next three months, we will extend our analysis (a) by taking a wider range of gesture types (interactive gestures, beats) into account and (b) by looking at qualitative features of speech (information content) and co-speech gestures (size, shape, timing). Finally, we will correlate the resulting data with the data from the cognitive tests.
    This study will contribute to a better understanding of the communicative strategies of a growing aging population as well as to the body of research on co-speech gesture use in addressee design. It also addresses the relationship between cognitive abilities on the one hand and co-speech gesture production on the other, potentially informing existing models of co-speech gesture production.
