Eijk, L., Rasenberg, M., Arnese, F., Blokpoel, M., Dingemanse, M., Doeller, C. F., Ernestus, M., Holler, J., Milivojevic, B., Özyürek, A., Pouw, W., Van Rooij, I., Schriefers, H., Toni, I., Trujillo, J. P., & Bögels, S. (2022). The CABB dataset: A multimodal corpus of communicative interactions for behavioural and neural analyses. NeuroImage, 264: 119734. doi:10.1016/j.neuroimage.2022.119734.
Abstract
We present a dataset of behavioural and fMRI observations acquired in the context of humans involved in multimodal referential communication. The dataset contains audio/video and motion-tracking recordings of face-to-face, task-based communicative interactions in Dutch, as well as behavioural and neural correlates of participants’ representations of dialogue referents. Seventy-one pairs of unacquainted participants performed two interleaved interactional tasks in which they described and located 16 novel geometrical objects (i.e., Fribbles) yielding spontaneous interactions of about one hour. We share high-quality video (from three cameras), audio (from head-mounted microphones), and motion-tracking (Kinect) data, as well as speech transcripts of the interactions. Before and after engaging in the face-to-face communicative interactions, participants’ individual representations of the 16 Fribbles were estimated. Behaviourally, participants provided a written description (one to three words) for each Fribble and positioned them along 29 independent conceptual dimensions (e.g., rounded, human, audible). Neurally, fMRI signal evoked by each Fribble was measured during a one-back working-memory task. To enable functional hyperalignment across participants, the dataset also includes fMRI measurements obtained during visual presentation of eight animated movies (35 minutes total). We present analyses for the various types of data demonstrating their quality and consistency with earlier research. Besides high-resolution multimodal interactional data, this dataset includes different correlates of communicative referents, obtained before and after face-to-face dialogue, allowing for novel investigations into the relation between communicative behaviours and the representational space shared by communicators. This unique combination of data can be used for research in neuroscience, psychology, linguistics, and beyond. -
Kan, U., Gökgöz, K., Sumer, B., Tamyürek, E., & Özyürek, A. (2022). Emergence of negation in a Turkish homesign system: Insights from the family context. In A. Ravignani, R. Asano, D. Valente, F. Ferretti, S. Hartmann, M. Hayashi, Y. Jadoul, M. Martins, Y. Oseki, E. D. Rodrigues, O. Vasileva, & S. Wacewicz (Eds.), The evolution of language: Proceedings of the Joint Conference on Language Evolution (JCoLE) (pp. 387-389). Nijmegen: Joint Conference on Language Evolution (JCoLE). -
Rasenberg, M., Pouw, W., Özyürek, A., & Dingemanse, M. (2022). The multimodal nature of communicative efficiency in social interaction. Scientific Reports, 12: 19111. doi:10.1038/s41598-022-22883-w.
Abstract
How does communicative efficiency shape language use? We approach this question by studying it at the level of the dyad, and in terms of multimodal utterances. We investigate whether and how people minimize their joint speech and gesture efforts in face-to-face interactions, using linguistic and kinematic analyses. We zoom in on other-initiated repair—a conversational microcosm where people coordinate their utterances to solve problems with perceiving or understanding. We find that efforts in the spoken and gestural modalities are wielded in parallel across repair turns of different types, and that people repair conversational problems in the most cost-efficient way possible, minimizing the joint multimodal effort for the dyad as a whole. These results are in line with the principle of least collaborative effort in speech and with the reduction of joint costs in non-linguistic joint actions. The results extend our understanding of those coefficiency principles by revealing that they pertain to multimodal utterance design.
Additional information
Data and analysis scripts -
Rasenberg, M., Özyürek, A., Bögels, S., & Dingemanse, M. (2022). The primacy of multimodal alignment in converging on shared symbols for novel referents. Discourse Processes, 59(3), 209-236. doi:10.1080/0163853X.2021.1992235.
Abstract
When people establish shared symbols for novel objects or concepts, they have been shown to rely on the use of multiple communicative modalities as well as on alignment (i.e., cross-participant repetition of communicative behavior). Yet these interactional resources have rarely been studied together, so little is known about if and how people combine multiple modalities in alignment to achieve joint reference. To investigate this, we systematically track the emergence of lexical and gestural alignment in a referential communication task with novel objects. Quantitative analyses reveal that people frequently use a combination of lexical and gestural alignment, and that such multimodal alignment tends to emerge earlier compared to unimodal alignment. Qualitative analyses of the interactional contexts in which alignment emerges reveal how people flexibly deploy lexical and gestural alignment (independently, simultaneously or successively) to adjust to communicative pressures. -
Schubotz, L., Özyürek, A., & Holler, J. (2022). Individual differences in working memory and semantic fluency predict younger and older adults' multimodal recipient design in an interactive spatial task. Acta Psychologica, 229: 103690. doi:10.1016/j.actpsy.2022.103690.
Abstract
Aging appears to impair the ability to adapt speech and gestures based on knowledge shared with an addressee (common ground-based recipient design) in narrative settings. Here, we test whether this extends to spatial settings and is modulated by cognitive abilities. Younger and older adults gave instructions on how to assemble 3D models from building blocks on six consecutive trials. We induced mutually shared knowledge by either showing speaker and addressee the model beforehand, or not. Additionally, shared knowledge accumulated across the trials. Younger and crucially also older adults provided recipient-designed utterances, indicated by a significant reduction in the number of words and of gestures when common ground was present. Additionally, we observed a reduction in semantic content and a shift in cross-modal distribution of information across trials. Rather than age, individual differences in verbal and visual working memory and semantic fluency predicted the extent of addressee-based adaptations. Thus, in spatial tasks, individual cognitive abilities modulate the interactive language use of both younger and older adults.
Additional information
1-s2.0-S0001691822002050-mmc1.docx -
Slonimska, A., Özyürek, A., & Capirci, O. (2022). Simultaneity as an emergent property of efficient communication in language: A comparison of silent gesture and sign language. Cognitive Science, 46(5): e13133. doi:10.1111/cogs.13133.
Abstract
Sign languages use multiple articulators and iconicity in the visual modality which allow linguistic units to be organized not only linearly but also simultaneously. Recent research has shown that users of an established sign language such as LIS (Italian Sign Language) use simultaneous and iconic constructions as a modality-specific resource to achieve communicative efficiency when they are required to encode informationally rich events. However, it remains to be explored whether the use of such simultaneous and iconic constructions recruited for communicative efficiency can be employed even without a linguistic system (i.e., in silent gesture) or whether they are specific to linguistic patterning (i.e., in LIS). In the present study, we conducted the same experiment as in Slonimska et al. with 23 Italian speakers using silent gesture and compared the results of the two studies. The findings showed that while simultaneity was afforded by the visual modality to some extent, its use in silent gesture was nevertheless less frequent and qualitatively different than when used within a linguistic system. Thus, the use of simultaneous and iconic constructions for communicative efficiency constitutes an emergent property of sign languages. The present study highlights the importance of studying modality-specific resources and their use for linguistic expression in order to promote a more thorough understanding of the language faculty and its modality-specific adaptive capabilities. -
Slonimska, A., Özyürek, A., & Capirci, O. (2022). Simultaneity as an emergent property of sign languages. In A. Ravignani, R. Asano, D. Valente, F. Ferretti, S. Hartmann, M. Hayashi, Y. Jadoul, M. Martins, Y. Oseki, E. D. Rodrigues, O. Vasileva, & S. Wacewicz (Eds.), The evolution of language: Proceedings of the Joint Conference on Language Evolution (JCoLE) (pp. 678-680). Nijmegen: Joint Conference on Language Evolution (JCoLE). -
Sumer, B., & Özyürek, A. (2022). Cross-modal investigation of event component omissions in language development: A comparison of signing and speaking children. Language, Cognition and Neuroscience, 37(8), 1023-1039. doi:10.1080/23273798.2022.2042336.
Abstract
Language development research suggests a universal tendency for children to be under-informative in narrating motion events by omitting components such as Path, Manner or Ground. However, this assumption has not been tested for children acquiring sign language. Due to the affordances of the visual-spatial modality of sign languages for iconic expression, signing children might omit event components less frequently than speaking children. Here we analysed motion event descriptions elicited from deaf children (4–10 years) acquiring Turkish Sign Language (TİD) and their Turkish-speaking peers. While children omitted all types of event components more often than adults, signing children and adults encoded more Path and Manner in TİD than their peers in Turkish. These results provide more evidence for a general universal tendency for children to omit event components as well as a modality bias for sign languages to encode both Manner and Path more frequently than spoken languages. -
Sumer, B., & Özyürek, A. (2022). Language use in deaf children with early-signing versus late-signing deaf parents. Frontiers in Communication, 6: 804900. doi:10.3389/fcomm.2021.804900.
Abstract
Previous research has shown that spatial language is sensitive to the effects of delayed language exposure. Locative encodings of late-signing deaf adults varied from those of early-signing deaf adults in the preferred types of linguistic forms. In the current study, we investigated whether such differences would be found in the spatial language use of deaf children with deaf parents who are either early or late signers of Turkish Sign Language (TİD). We analyzed locative encodings elicited from these two groups of deaf children for the use of different linguistic forms and the types of classifier handshapes. Our findings revealed differences between these two groups of deaf children in their preferred types of linguistic forms, which showed parallels to differences between late versus early deaf adult signers as reported by earlier studies. Deaf children in the current study, however, were similar to each other in the type of classifier handshapes that they used in their classifier constructions. Our findings have implications for expanding current knowledge of the extent to which variation in language input (i.e., from early vs. late deaf signers) is reflected in children’s productions, as well as for the role of linguistic input in language development in general. -
Ter Bekke, M., Özyürek, A., & Ünal, E. (2022). Speaking but not gesturing predicts event memory: A cross-linguistic comparison. Language and Cognition, 14(3), 362-384. doi:10.1017/langcog.2022.3.
Abstract
Every day people see, describe, and remember motion events. However, the relation between multimodal encoding of motion events in speech and gesture, and memory is not yet fully understood. Moreover, whether language typology modulates this relation remains to be tested. This study investigates whether the type of motion event information (path or manner) mentioned in speech and gesture predicts which information is remembered and whether this varies across speakers of typologically different languages. Dutch and Turkish speakers watched and described motion events and completed a surprise recognition memory task. For both Dutch and Turkish speakers, manner memory was at chance level. Participants who mentioned path in speech during encoding were more accurate at detecting changes to the path in the memory task. The relation between mentioning path in speech and path memory did not vary cross-linguistically. Finally, co-speech gesture did not predict memory above mentioning path in speech. These findings suggest that how speakers describe a motion event in speech is more important than the typology of the speakers’ native language in predicting motion event memory. The motion event videos are available for download for future research at https://osf.io/p8cas/.
Additional information
S1866980822000035sup001.docx -
Trujillo, J. P., Özyürek, A., Kan, C., Sheftel-Simanova, I., & Bekkering, H. (2022). Differences in functional brain organization during gesture recognition between autistic and neurotypical individuals. Social Cognitive and Affective Neuroscience, 17(11), 1021-1034. doi:10.1093/scan/nsac026.
Abstract
Persons with and without autism process sensory information differently. Differences in sensory processing are directly relevant to social functioning and communicative abilities, which are known to be hampered in persons with autism. We collected functional magnetic resonance imaging (fMRI) data from 25 autistic individuals and 25 neurotypical individuals while they performed a silent gesture recognition task. We exploited brain network topology, a holistic quantification of how networks within the brain are organized, to provide new insights into how visual communicative signals are processed in autistic and neurotypical individuals. Performing graph theoretical analysis, we calculated two network properties of the action observation network: local efficiency, as a measure of network segregation, and global efficiency, as a measure of network integration. We found that persons with autism and neurotypical persons differ in how the action observation network is organized. Persons with autism utilize a more clustered, local-processing-oriented network configuration (i.e., higher local efficiency), rather than the more integrative network organization seen in neurotypicals (i.e., higher global efficiency). These results shed new light on the complex interplay between social and sensory processing in autism.
Additional information
nsac026_supp.zip -
Ünal, E., Manhardt, F., & Özyürek, A. (2022). Speaking and gesturing guide event perception during message conceptualization: Evidence from eye movements. Cognition, 225: 105127. doi:10.1016/j.cognition.2022.105127.
Abstract
Speakers’ visual attention to events is guided by linguistic conceptualization of information in spoken language production and in language-specific ways. Does production of language-specific co-speech gestures further guide speakers’ visual attention during message preparation? Here, we examine the link between visual attention and multimodal event descriptions in Turkish. Turkish is a verb-framed language where speakers’ speech and gesture show language specificity with path of motion mostly expressed within the main verb accompanied by path gestures. Turkish-speaking adults viewed motion events while their eye movements were recorded during non-linguistic (viewing-only) and linguistic (viewing-before-describing) tasks. The relative attention allocated to path over manner was higher in the linguistic task compared to the non-linguistic task. Furthermore, the relative attention allocated to path over manner within the linguistic task was higher when speakers (a) encoded path in the main verb versus outside the verb and (b) used additional path gestures accompanying speech versus not. Results strongly suggest that speakers’ visual attention is guided by language-specific event encoding not only in speech but also in gesture. This provides evidence consistent with models that propose integration of speech and gesture at the conceptualization level of language production and suggests that the links between the eye and the mouth may be extended to the eye and the hand. -
Emmorey, K., & Ozyurek, A. (2014). Language in our hands: Neural underpinnings of sign language and co-speech gesture. In M. S. Gazzaniga & G. R. Mangun (Eds.), The cognitive neurosciences (5th ed., pp. 657-666). Cambridge, MA: MIT Press. -
Furman, R., Kuntay, A., & Ozyurek, A. (2014). Early language-specificity of children's event encoding in speech and gesture: Evidence from caused motion in Turkish. Language, Cognition and Neuroscience, 29, 620-634. doi:10.1080/01690965.2013.824993.
Abstract
Previous research on language development shows that children are tuned early on to the language-specific semantic and syntactic encoding of events in their native language. Here we ask whether language-specificity is also evident in children's early representations in gesture accompanying speech. In a longitudinal study, we examined the spontaneous speech and cospeech gestures of eight Turkish-speaking children aged one to three and focused on their caused motion event expressions. In Turkish, unlike in English, the main semantic elements of caused motion such as Action and Path can be encoded in the verb (e.g. sok- ‘put in’) and the arguments of a verb can be easily omitted. We found that Turkish-speaking children's speech indeed displayed these language-specific features and focused on verbs to encode caused motion. More interestingly, we found that their early gestures also manifested specificity. Children used iconic cospeech gestures (from 19 months onwards) as often as pointing gestures and represented semantic elements such as Action with Figure and/or Path that reinforced or supplemented speech in language-specific ways until the age of three. In the light of previous reports on the scarcity of iconic gestures in English-speaking children's early productions, we argue that the language children learn shapes gestures and how they get integrated with speech in the first three years of life. -
Holler, J., Schubotz, L., Kelly, S., Hagoort, P., Schuetze, M., & Ozyurek, A. (2014). Social eye gaze modulates processing of speech and co-speech gesture. Cognition, 133, 692-697. doi:10.1016/j.cognition.2014.08.008.
Abstract
In human face-to-face communication, language comprehension is a multi-modal, situated activity. However, little is known about how we combine information from different modalities during comprehension, and how perceived communicative intentions, often signaled through visual signals, influence this process. We explored this question by simulating a multi-party communication context in which a speaker alternated her gaze between two recipients. Participants viewed speech-only or speech + gesture object-related messages when being addressed (direct gaze) or unaddressed (gaze averted to other participant). They were then asked to choose which of two object images matched the speaker’s preceding message. Unaddressed recipients responded significantly more slowly than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped unaddressed recipients up to a level identical to that of addressees. That is, when unaddressed recipients’ speech processing suffers, gestures can enhance the comprehension of a speaker’s message. We discuss our findings with respect to two hypotheses attempting to account for how social eye gaze may modulate multi-modal language comprehension. -
Ortega, G., Sumer, B., & Ozyurek, A. (2014). Type of iconicity matters: Bias for action-based signs in sign language acquisition. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 1114-1119). Austin, TX: Cognitive Science Society.
Abstract
Early studies investigating sign language acquisition claimed that signs whose structures are motivated by the form of their referent (iconic) are not favoured in language development. However, recent work has shown that the first signs in deaf children’s lexicon are iconic. In this paper we go a step further and ask whether different types of iconicity modulate learning sign-referent links. Results from a picture description task indicate that children and adults used signs with two possible variants differentially. While children signing to adults favoured variants that map onto actions associated with a referent (action signs), adults signing to another adult produced variants that map onto objects’ perceptual features (perceptual signs). Parents interacting with children used more action variants than signers in adult-adult interactions. These results are in line with claims that language development is tightly linked to motor experience and that iconicity can be a communicative strategy in parental input. -
Ozyurek, A. (2014). Hearing and seeing meaning in speech and gesture: Insights from brain and behaviour. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 369(1651): 20130296. doi:10.1098/rstb.2013.0296.
Abstract
As we speak, we use not only the arbitrary form–meaning mappings of the speech channel but also motivated form–meaning correspondences, i.e. iconic gestures that accompany speech (e.g. inverted V-shaped hand wiggling across gesture space to demonstrate walking). This article reviews what we know about processing of semantic information from speech and iconic gestures in spoken languages during comprehension of such composite utterances. Several studies have shown that comprehension of iconic gestures involves brain activations known to be involved in semantic processing of speech: i.e. modulation of the electrophysiological recording component N400, which is sensitive to the ease of semantic integration of a word to previous context, and recruitment of the left-lateralized frontal–posterior temporal network (left inferior frontal gyrus (IFG), medial temporal gyrus (MTG) and superior temporal gyrus/sulcus (STG/S)). Furthermore, we integrate the information coming from both channels, recruiting brain areas such as left IFG, posterior superior temporal sulcus (STS)/MTG and even motor cortex. Finally, this integration is flexible: the temporal synchrony between the iconic gesture and the speech segment, as well as the perceived communicative intent of the speaker, modulate the integration process. Whether these findings are special to gestures or are shared with actions or other visual accompaniments to speech (e.g. lips) or other visual symbols such as pictures is discussed, as well as the implications for a multimodal view of language. -
Peeters, D., Azar, Z., & Ozyurek, A. (2014). The interplay between joint attention, physical proximity, and pointing gesture in demonstrative choice. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 1144-1149). Austin, TX: Cognitive Science Society. -
Sumer, B., Perniss, P., Zwitserlood, I., & Ozyurek, A. (2014). Learning to express "left-right" & "front-behind" in a sign versus spoken language. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 1550-1555). Austin, TX: Cognitive Science Society.
Abstract
Developmental studies show that it takes longer for children learning spoken languages to acquire viewpoint-dependent spatial relations (e.g., left-right, front-behind), compared to ones that are not viewpoint-dependent (e.g., in, on, under). The current study investigates how children learn to express viewpoint-dependent relations in a sign language where depicted spatial relations can be communicated in an analogue manner in the space in front of the body or by using body-anchored signs (e.g., tapping the right and left hand/arm to mean left and right). Our results indicate that the visual-spatial modality might have a facilitating effect on learning to express these spatial relations (especially in encoding of left-right) in a sign language (i.e., Turkish Sign Language) compared to a spoken language (i.e., Turkish). -
Furman, R., Ozyurek, A., & Küntay, A. C. (2010). Early language-specificity in Turkish children's caused motion event expressions in speech and gesture. In K. Franich, K. M. Iserman, & L. L. Keil (Eds.), Proceedings of the 34th Boston University Conference on Language Development. Volume 1 (pp. 126-137). Somerville, MA: Cascadilla Press. -
Kelly, S. D., Ozyurek, A., & Maris, E. (2010). Two sides of the same coin: Speech and gesture mutually interact to enhance comprehension. Psychological Science, 21, 260-267. doi:10.1177/0956797609357327.
Abstract
Gesture and speech are assumed to form an integrated system during language production. Based on this view, we propose the integrated‐systems hypothesis, which explains two ways in which gesture and speech are integrated—through mutual and obligatory interactions—in language comprehension. Experiment 1 presented participants with action primes (e.g., someone chopping vegetables) and bimodal speech and gesture targets. Participants related primes to targets more quickly and accurately when they contained congruent information (speech: “chop”; gesture: chop) than when they contained incongruent information (speech: “chop”; gesture: twist). Moreover, the strength of the incongruence affected processing, with fewer errors for weak incongruities (speech: “chop”; gesture: cut) than for strong incongruities (speech: “chop”; gesture: twist). Crucial for the integrated‐systems hypothesis, this influence was bidirectional. Experiment 2 demonstrated that gesture’s influence on speech was obligatory. The results confirm the integrated‐systems hypothesis and demonstrate that gesture and speech form an integrated system in language comprehension. -
Kita, S., Ozyurek, A., Allen, S., & Ishizuka, T. (2010). Early links between iconic gestures and sound symbolic words: Evidence for multimodal protolanguage. In A. D. Smith, M. Schouwstra, B. de Boer, & K. Smith (Eds.), Proceedings of the 8th International Conference on the Evolution of Language (EVOLANG 8) (pp. 429-430). Singapore: World Scientific. -
Ozyurek, A., Zwitserlood, I., & Perniss, P. M. (2010). Locative expressions in signed languages: A view from Turkish Sign Language (TID). Linguistics, 48(5), 1111-1145. doi:10.1515/LING.2010.036.
Abstract
Locative expressions encode the spatial relationship between two (or more) entities. In this paper, we focus on locative expressions in signed language, which use the visual-spatial modality for linguistic expression, specifically in
Turkish Sign Language ( Türk İşaret Dili, henceforth TİD). We show that TİD uses various strategies in discourse to encode the relation between a Ground entity (i.e., a bigger and/or backgrounded entity) and a Figure entity (i.e., a
smaller entity, which is in the focus of attention). Some of these strategies exploit affordances of the visual modality for analogue representation and support evidence for modality-specific effects on locative expressions in sign languages.
However, other modality-specific strategies, e.g., the simultaneous expression of Figure and Ground, which have been reported for many other sign languages, occurs only sparsely in TİD. Furthermore, TİD uses categorical as well as analogical structures in locative expressions. On the basis of
these findings, we discuss differences and similarities between signed and spoken languages to broaden our understanding of the range of structures used in natural language (i.e., in both the visual-spatial or oral-aural modalities) to encode locative relations. A general linguistic theory of spatial relations, and specifically of locative expressions, must take all structures that
might arise in both modalities into account before it can generalize over the human language faculty. -
Ozyurek, A. (2010). The role of iconic gestures in production and comprehension of language: Evidence from brain and behavior. In S. Kopp & I. Wachsmuth (Eds.), Gesture in embodied communication and human-computer interaction: 8th International Gesture Workshop, GW 2009, Bielefeld, Germany, February 25-27, 2009. Revised selected papers (pp. 1-10). Berlin: Springer. -
Senghas, A., Ozyurek, A., & Goldin-Meadow, S. (2010). The evolution of segmentation and sequencing: Evidence from homesign and Nicaraguan Sign Language. In A. D. Smith, M. Schouwstra, B. de Boer, & K. Smith (Eds.), Proceedings of the 8th International Conference on the Evolution of Language (EVOLANG 8) (pp. 279-289). Singapore: World Scientific.