Eijk, L., Rasenberg, M., Arnese, F., Blokpoel, M., Dingemanse, M., Doeller, C. F., Ernestus, M., Holler, J., Milivojevic, B., Özyürek, A., Pouw, W., Van Rooij, I., Schriefers, H., Toni, I., Trujillo, J. P., & Bögels, S. (2022). The CABB dataset: A multimodal corpus of communicative interactions for behavioural and neural analyses. NeuroImage, 264: 119734. doi:10.1016/j.neuroimage.2022.119734.
Abstract
We present a dataset of behavioural and fMRI observations acquired in the context of humans involved in multimodal referential communication. The dataset contains audio/video and motion-tracking recordings of face-to-face, task-based communicative interactions in Dutch, as well as behavioural and neural correlates of participants’ representations of dialogue referents. Seventy-one pairs of unacquainted participants performed two interleaved interactional tasks in which they described and located 16 novel geometrical objects (i.e., Fribbles) yielding spontaneous interactions of about one hour. We share high-quality video (from three cameras), audio (from head-mounted microphones), and motion-tracking (Kinect) data, as well as speech transcripts of the interactions. Before and after engaging in the face-to-face communicative interactions, participants’ individual representations of the 16 Fribbles were estimated. Behaviourally, participants provided a written description (one to three words) for each Fribble and positioned them along 29 independent conceptual dimensions (e.g., rounded, human, audible). Neurally, fMRI signal evoked by each Fribble was measured during a one-back working-memory task. To enable functional hyperalignment across participants, the dataset also includes fMRI measurements obtained during visual presentation of eight animated movies (35 minutes total). We present analyses for the various types of data demonstrating their quality and consistency with earlier research. Besides high-resolution multimodal interactional data, this dataset includes different correlates of communicative referents, obtained before and after face-to-face dialogue, allowing for novel investigations into the relation between communicative behaviours and the representational space shared by communicators. This unique combination of data can be used for research in neuroscience, psychology, linguistics, and beyond. -
Kan, U., Gökgöz, K., Sumer, B., Tamyürek, E., & Özyürek, A. (2022). Emergence of negation in a Turkish homesign system: Insights from the family context. In A. Ravignani, R. Asano, D. Valente, F. Ferretti, S. Hartmann, M. Hayashi, Y. Jadoul, M. Martins, Y. Oseki, E. D. Rodrigues, O. Vasileva, & S. Wacewicz (Eds.), The evolution of language: Proceedings of the Joint Conference on Language Evolution (JCoLE) (pp. 387-389). Nijmegen: Joint Conference on Language Evolution (JCoLE). -
Rasenberg, M., Pouw, W., Özyürek, A., & Dingemanse, M. (2022). The multimodal nature of communicative efficiency in social interaction. Scientific Reports, 12: 19111. doi:10.1038/s41598-022-22883-w.
Abstract
How does communicative efficiency shape language use? We approach this question by studying it at the level of the dyad, and in terms of multimodal utterances. We investigate whether and how people minimize their joint speech and gesture efforts in face-to-face interactions, using linguistic and kinematic analyses. We zoom in on other-initiated repair—a conversational microcosm where people coordinate their utterances to solve problems with perceiving or understanding. We find that efforts in the spoken and gestural modalities are wielded in parallel across repair turns of different types, and that people repair conversational problems in the most cost-efficient way possible, minimizing the joint multimodal effort for the dyad as a whole. These results are in line with the principle of least collaborative effort in speech and with the reduction of joint costs in non-linguistic joint actions. The results extend our understanding of those coefficiency principles by revealing that they pertain to multimodal utterance design.
Additional information: Data and analysis scripts -
Rasenberg, M., Özyürek, A., Bögels, S., & Dingemanse, M. (2022). The primacy of multimodal alignment in converging on shared symbols for novel referents. Discourse Processes, 59(3), 209-236. doi:10.1080/0163853X.2021.1992235.
Abstract
When people establish shared symbols for novel objects or concepts, they have been shown to rely on the use of multiple communicative modalities as well as on alignment (i.e., cross-participant repetition of communicative behavior). Yet these interactional resources have rarely been studied together, so little is known about if and how people combine multiple modalities in alignment to achieve joint reference. To investigate this, we systematically track the emergence of lexical and gestural alignment in a referential communication task with novel objects. Quantitative analyses reveal that people frequently use a combination of lexical and gestural alignment, and that such multimodal alignment tends to emerge earlier compared to unimodal alignment. Qualitative analyses of the interactional contexts in which alignment emerges reveal how people flexibly deploy lexical and gestural alignment (independently, simultaneously or successively) to adjust to communicative pressures. -
Schubotz, L., Özyürek, A., & Holler, J. (2022). Individual differences in working memory and semantic fluency predict younger and older adults' multimodal recipient design in an interactive spatial task. Acta Psychologica, 229: 103690. doi:10.1016/j.actpsy.2022.103690.
Abstract
Aging appears to impair the ability to adapt speech and gestures based on knowledge shared with an addressee (common ground-based recipient design) in narrative settings. Here, we test whether this extends to spatial settings and is modulated by cognitive abilities. Younger and older adults gave instructions on how to assemble 3D models from building blocks on six consecutive trials. We induced mutually shared knowledge by either showing speaker and addressee the model beforehand, or not. Additionally, shared knowledge accumulated across the trials. Younger and crucially also older adults provided recipient-designed utterances, indicated by a significant reduction in the number of words and of gestures when common ground was present. Additionally, we observed a reduction in semantic content and a shift in cross-modal distribution of information across trials. Rather than age, individual differences in verbal and visual working memory and semantic fluency predicted the extent of addressee-based adaptations. Thus, in spatial tasks, individual cognitive abilities modulate the interactive language use of both younger and older adults.
Additional information: 1-s2.0-S0001691822002050-mmc1.docx -
Slonimska, A., Özyürek, A., & Capirci, O. (2022). Simultaneity as an emergent property of efficient communication in language: A comparison of silent gesture and sign language. Cognitive Science, 46(5): e13133. doi:10.1111/cogs.13133.
Abstract
Sign languages use multiple articulators and iconicity in the visual modality which allow linguistic units to be organized not only linearly but also simultaneously. Recent research has shown that users of an established sign language such as LIS (Italian Sign Language) use simultaneous and iconic constructions as a modality-specific resource to achieve communicative efficiency when they are required to encode informationally rich events. However, it remains to be explored whether the use of such simultaneous and iconic constructions recruited for communicative efficiency can be employed even without a linguistic system (i.e., in silent gesture) or whether they are specific to linguistic patterning (i.e., in LIS). In the present study, we conducted the same experiment as in Slonimska et al. with 23 Italian speakers using silent gesture and compared the results of the two studies. The findings showed that while simultaneity was afforded by the visual modality to some extent, its use in silent gesture was nevertheless less frequent and qualitatively different than when used within a linguistic system. Thus, the use of simultaneous and iconic constructions for communicative efficiency constitutes an emergent property of sign languages. The present study highlights the importance of studying modality-specific resources and their use for linguistic expression in order to promote a more thorough understanding of the language faculty and its modality-specific adaptive capabilities. -
Slonimska, A., Özyürek, A., & Capirci, O. (2022). Simultaneity as an emergent property of sign languages. In A. Ravignani, R. Asano, D. Valente, F. Ferretti, S. Hartmann, M. Hayashi, Y. Jadoul, M. Martins, Y. Oseki, E. D. Rodrigues, O. Vasileva, & S. Wacewicz (Eds.), The evolution of language: Proceedings of the Joint Conference on Language Evolution (JCoLE) (pp. 678-680). Nijmegen: Joint Conference on Language Evolution (JCoLE). -
Sumer, B., & Özyürek, A. (2022). Cross-modal investigation of event component omissions in language development: A comparison of signing and speaking children. Language, Cognition and Neuroscience, 37(8), 1023-1039. doi:10.1080/23273798.2022.2042336.
Abstract
Language development research suggests a universal tendency for children to be under-informative in narrating motion events by omitting components such as Path, Manner or Ground. However, this assumption has not been tested for children acquiring sign language. Due to the affordances of the visual-spatial modality of sign languages for iconic expression, signing children might omit event components less frequently than speaking children. Here we analysed motion event descriptions elicited from deaf children (4–10 years) acquiring Turkish Sign Language (TİD) and their Turkish-speaking peers. While children omitted all types of event components more often than adults, signing children and adults encoded more Path and Manner in TİD than their peers in Turkish. These results provide more evidence for a general universal tendency for children to omit event components as well as a modality bias for sign languages to encode both Manner and Path more frequently than spoken languages. -
Sumer, B., & Özyürek, A. (2022). Language use in deaf children with early-signing versus late-signing deaf parents. Frontiers in Communication, 6: 804900. doi:10.3389/fcomm.2021.804900.
Abstract
Previous research has shown that spatial language is sensitive to the effects of delayed language exposure. Locative encodings of late-signing deaf adults varied from those of early-signing deaf adults in the preferred types of linguistic forms. In the current study, we investigated whether such differences would be found in spatial language use of deaf children with deaf parents who are either early or late signers of Turkish Sign Language (TİD). We analyzed locative encodings elicited from these two groups of deaf children for the use of different linguistic forms and the types of classifier handshapes. Our findings revealed differences between these two groups of deaf children in their preferred types of linguistic forms, which showed parallels to differences between late versus early deaf adult signers as reported by earlier studies. Deaf children in the current study, however, were similar to each other in the type of classifier handshapes that they used in their classifier constructions. Our findings have implications for expanding current knowledge on to what extent variation in language input (i.e., from early vs. late deaf signers) is reflected in children’s productions as well as the role of linguistic input on language development in general. -
Ter Bekke, M., Özyürek, A., & Ünal, E. (2022). Speaking but not gesturing predicts event memory: A cross-linguistic comparison. Language and Cognition, 14(3), 362-384. doi:10.1017/langcog.2022.3.
Abstract
Every day people see, describe, and remember motion events. However, the relation between multimodal encoding of motion events in speech and gesture, and memory is not yet fully understood. Moreover, whether language typology modulates this relation remains to be tested. This study investigates whether the type of motion event information (path or manner) mentioned in speech and gesture predicts which information is remembered and whether this varies across speakers of typologically different languages. Dutch- and Turkish-speakers watched and described motion events and completed a surprise recognition memory task. For both Dutch- and Turkish-speakers, manner memory was at chance level. Participants who mentioned path in speech during encoding were more accurate at detecting changes to the path in the memory task. The relation between mentioning path in speech and path memory did not vary cross-linguistically. Finally, the co-speech gesture did not predict memory above mentioning path in speech. These findings suggest that how speakers describe a motion event in speech is more important than the typology of the speakers’ native language in predicting motion event memory. The motion event videos are available for download for future research at https://osf.io/p8cas/.
Additional information: S1866980822000035sup001.docx -
Trujillo, J. P., Özyürek, A., Kan, C., Sheftel-Simanova, I., & Bekkering, H. (2022). Differences in functional brain organization during gesture recognition between autistic and neurotypical individuals. Social Cognitive and Affective Neuroscience, 17(11), 1021-1034. doi:10.1093/scan/nsac026.
Abstract
Persons with and without autism process sensory information differently. Differences in sensory processing are directly relevant to social functioning and communicative abilities, which are known to be hampered in persons with autism. We collected functional magnetic resonance imaging (fMRI) data from 25 autistic individuals and 25 neurotypical individuals while they performed a silent gesture recognition task. We exploited brain network topology, a holistic quantification of how networks within the brain are organized, to provide new insights into how visual communicative signals are processed in autistic and neurotypical individuals. Performing graph theoretical analysis, we calculated two network properties of the action observation network: local efficiency, as a measure of network segregation, and global efficiency, as a measure of network integration. We found that persons with autism and neurotypical persons differ in how the action observation network is organized. Persons with autism utilize a more clustered, local-processing-oriented network configuration (i.e., higher local efficiency), rather than the more integrative network organization seen in neurotypicals (i.e., higher global efficiency). These results shed new light on the complex interplay between social and sensory processing in autism.
Additional information: nsac026_supp.zip -
Ünal, E., Manhardt, F., & Özyürek, A. (2022). Speaking and gesturing guide event perception during message conceptualization: Evidence from eye movements. Cognition, 225: 105127. doi:10.1016/j.cognition.2022.105127.
Abstract
Speakers’ visual attention to events is guided by linguistic conceptualization of information in spoken language production and in language-specific ways. Does production of language-specific co-speech gestures further guide speakers’ visual attention during message preparation? Here, we examine the link between visual attention and multimodal event descriptions in Turkish. Turkish is a verb-framed language where speakers’ speech and gesture show language specificity with path of motion mostly expressed within the main verb accompanied by path gestures. Turkish-speaking adults viewed motion events while their eye movements were recorded during non-linguistic (viewing-only) and linguistic (viewing-before-describing) tasks. The relative attention allocated to path over manner was higher in the linguistic task compared to the non-linguistic task. Furthermore, the relative attention allocated to path over manner within the linguistic task was higher when speakers (a) encoded path in the main verb versus outside the verb and (b) used additional path gestures accompanying speech versus not. Results strongly suggest that speakers’ visual attention is guided by language-specific event encoding not only in speech but also in gesture. This provides evidence consistent with models that propose integration of speech and gesture at the conceptualization level of language production and suggests that the links between the eye and the mouth may be extended to the eye and the hand. -
Campisi, E., & Ozyurek, A. (2013). Iconicity as a communicative strategy: Recipient design in multimodal demonstrations for adults and children. Journal of Pragmatics, 47, 14-27. doi:10.1016/j.pragma.2012.12.007.
Abstract
Humans are the only species that uses communication to teach new knowledge to novices, usually to children (Tomasello, 1999; Csibra & Gergely, 2006). This context of communication can employ “demonstrations” and it takes place with or without the help of objects (Clark, 1996). Previous research has focused on understanding the nature of demonstrations for very young children and with objects involved. However, little is known about the strategies used in demonstrating an action to an older child in comparison to another adult and without the use of objects, i.e., with gestures only. We tested whether, during demonstration of an action, speakers use different degrees of iconicity in gestures for a child compared to an adult. Eighteen Italian subjects described to a camera how to make coffee, imagining the listener as a 12-year-old child, a novice adult, or an expert adult. While speech was found more informative both for the novice adult and for the child compared to the expert adult, the rate of iconic gestures increased and they were more informative and bigger only for the child compared to both of the adult conditions. Iconicity in gestures can be a powerful communicative strategy in teaching new knowledge to children in demonstrations, and this is in line with claims that it can be used as a scaffolding device in grounding knowledge in experience (Perniss et al., 2010). -
Debreslioska, S., Ozyurek, A., Gullberg, M., & Perniss, P. M. (2013). Gestural viewpoint signals referent accessibility. Discourse Processes, 50(7), 431-456. doi:10.1080/0163853x.2013.824286.
Abstract
The tracking of entities in discourse is known to be a bimodal phenomenon. Speakers achieve cohesion in speech by alternating between full lexical forms, pronouns, and zero anaphora as they track referents. They also track referents in co-speech gestures. In this study, we explored how viewpoint is deployed in reference tracking, focusing on representations of animate entities in German narrative discourse. We found that gestural viewpoint systematically varies depending on discourse context. Speakers predominantly use character viewpoint in maintained contexts and observer viewpoint in reintroduced contexts. Thus, gestural viewpoint seems to function as a cohesive device in narrative discourse. The findings expand on and provide further evidence for the coordination between speech and gesture on the discourse level that is crucial to understanding the tight link between the two modalities. -
Gentner, D., Ozyurek, A., Gurcanli, O., & Goldin-Meadow, S. (2013). Spatial language facilitates spatial cognition: Evidence from children who lack language input. Cognition, 127, 318-330. doi:10.1016/j.cognition.2013.01.003.
Abstract
Does spatial language influence how people think about space? To address this question, we observed children who did not know a conventional language, and tested their performance on nonlinguistic spatial tasks. We studied deaf children living in Istanbul whose hearing losses prevented them from acquiring speech and whose hearing parents had not exposed them to sign. Lacking a conventional language, the children used gestures, called homesigns, to communicate. In Study 1, we asked whether homesigners used gesture to convey spatial relations, and found that they did not. In Study 2, we tested a new group of homesigners on a Spatial Mapping Task, and found that they performed significantly worse than hearing Turkish children who were matched to the deaf children on another cognitive task. The absence of spatial language thus went hand-in-hand with poor performance on the nonlinguistic spatial task, pointing to the importance of spatial language in thinking about space. -
Holler, J., Schubotz, L., Kelly, S., Schuetze, M., Hagoort, P., & Ozyurek, A. (2013). Here's not looking at you, kid! Unaddressed recipients benefit from co-speech gestures when speech processing suffers. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 2560-2565). Austin, TX: Cognitive Science Society. Retrieved from http://mindmodeling.org/cogsci2013/papers/0463/index.html.
Abstract
In human face-to-face communication, language comprehension is a multi-modal, situated activity. However, little is known about how we combine information from these different modalities, and how perceived communicative intentions, often signaled through visual signals, such as eye gaze, may influence this processing. We address this question by simulating a triadic communication context in which a speaker alternated her gaze between two different recipients. Participants thus viewed speech-only or speech+gesture object-related utterances when being addressed (direct gaze) or unaddressed (averted gaze). Two object images followed each message and participants’ task was to choose the object that matched the message. Unaddressed recipients responded significantly slower than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped them up to a level identical to that of addressees. That is, when speech processing suffers due to not being addressed, gesture processing remains intact and enhances the comprehension of a speaker’s message. -
Ortega, G., & Ozyurek, A. (2013). Gesture-sign interface in hearing non-signers' first exposure to sign. In Proceedings of the Tilburg Gesture Research Meeting [TiGeR 2013].
Abstract
Natural sign languages and gestures are complex communicative systems that allow the incorporation of features of a referent into their structure. They differ, however, in that signs are more conventionalised because they consist of meaningless phonological parameters. There is some evidence that, despite non-signers finding iconic signs more memorable, they can have more difficulty articulating their exact phonological components. In the present study, hearing non-signers took part in a sign repetition task in which they had to imitate as accurately as possible a set of iconic and arbitrary signs. Their renditions showed that iconic signs were articulated significantly less accurately than arbitrary signs. Participants were recalled six months later to take part in a sign generation task. In this task, participants were shown the English translation of the iconic signs they imitated six months prior. For each word, participants were asked to generate a sign (i.e., an iconic gesture). The handshapes produced in the sign repetition and sign generation tasks were compared to detect instances in which both renditions presented the same configuration. There was a significant correlation between articulation accuracy in the sign repetition task and handshape overlap. These results suggest some form of gestural interference in the production of iconic signs by hearing non-signers. We also suggest that in some instances non-signers may deploy their own conventionalised gesture when producing some iconic signs. These findings are interpreted as evidence that non-signers process iconic signs as gestures and that in production, only when sign and gesture have overlapping features will they be capable of producing the phonological components of signs accurately. -
Peeters, D., Chu, M., Holler, J., Ozyurek, A., & Hagoort, P. (2013). Getting to the point: The influence of communicative intent on the kinematics of pointing gestures. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013) (pp. 1127-1132). Austin, TX: Cognitive Science Society.
Abstract
In everyday communication, people not only use speech but also hand gestures to convey information. One intriguing question in gesture research has been why gestures take the specific form they do. Previous research has identified the speaker-gesturer’s communicative intent as one factor shaping the form of iconic gestures. Here we investigate whether communicative intent also shapes the form of pointing gestures. In an experimental setting, twenty-four participants produced pointing gestures identifying a referent for an addressee. The communicative intent of the speaker-gesturer was manipulated by varying the informativeness of the pointing gesture. A second independent variable was the presence or absence of concurrent speech. As a function of their communicative intent and irrespective of the presence of speech, participants varied the durations of the stroke and the post-stroke hold-phase of their gesture. These findings add to our understanding of how the communicative context influences the form that a gesture takes.
Additional information: http://mindmodeling.org/cogsci2013/papers/0219/index.html -
Senghas, A., Ozyurek, A., & Goldin-Meadow, S. (2013). Homesign as a way-station between co-speech gesture and sign language: The evolution of segmenting and sequencing. In R. Botha & M. Everaert (Eds.), The evolutionary emergence of language: Evidence and inference (pp. 62-77). Oxford: Oxford University Press. -
Sumer, B., Zwitserlood, I., Perniss, P. M., & Ozyurek, A. (2013). Acquisition of locative expressions in children learning Turkish Sign Language (TİD) and Turkish. In E. Arik (Ed.), Current directions in Turkish Sign Language research (pp. 243-272). Newcastle upon Tyne: Cambridge Scholars Publishing.
Abstract
In sign languages, where space is often used to talk about space, expressions of spatial relations (e.g., ON, IN, UNDER, BEHIND) may rely on analogue mappings of real space onto signing space. In contrast, spoken languages express space in mostly categorical ways (e.g. adpositions). This raises interesting questions about the role of language modality in the acquisition of expressions of spatial relations. However, whether and to what extent modality influences the acquisition of spatial language is controversial – mostly due to the lack of direct comparisons of Deaf children to Deaf adults and to age-matched hearing children in similar tasks. Furthermore, the previous studies have taken English as the only model for spoken language development of spatial relations.
Therefore, we present a balanced study in which spatial expressions by deaf and hearing children in two different age-matched groups (preschool children and school-age children) are systematically compared, as well as compared to the spatial expressions of adults. All participants performed the same tasks, describing angular (LEFT, RIGHT, FRONT, BEHIND) and non-angular spatial configurations (IN, ON, UNDER) of different objects (e.g. apple in box; car behind box).
The analysis of the descriptions with non-angular spatial relations does not show an effect of modality on the development of locative expressions in TİD and Turkish. However, preliminary results of the analysis of expressions of angular spatial relations suggest that signers provide angular information in their spatial descriptions more frequently than Turkish speakers in all three age groups, thus showing a potentially different developmental pattern in this domain. Implications of the findings with regard to the development of relations in spatial language and cognition will be discussed. -
Zwitserlood, I., Perniss, P. M., & Ozyurek, A. (2013). Expression of multiple entities in Turkish Sign Language (TİD). In E. Arik (Ed.), Current directions in Turkish Sign Language research (pp. 272-302). Newcastle upon Tyne: Cambridge Scholars Publishing.
Abstract
This paper reports on an exploration of the ways in which multiple entities are expressed in Turkish Sign Language (TİD). The (descriptive and quantitative) analyses provided are based on a corpus of both spontaneous data and specifically elicited data, in order to provide as comprehensive an account as possible. We have found several devices in TİD for expression of multiple entities, in particular localization, spatial plural predicate inflection, and a specific form used to express multiple entities that are side by side in the same configuration (not reported for any other sign language to date), as well as numerals and quantifiers. In contrast to some other signed languages, TİD does not appear to have a productive system of plural reduplication. We argue that none of the devices encountered in the TİD data is a genuine plural marking device and that the plural interpretation of multiple entity localizations and plural predicate inflections is a by-product of the use of space to indicate the existence or the involvement in an event of multiple entities. -
Furman, R., Ozyurek, A., & Küntay, A. C. (2010). Early language-specificity in Turkish children's caused motion event expressions in speech and gesture. In K. Franich, K. M. Iserman, & L. L. Keil (Eds.), Proceedings of the 34th Boston University Conference on Language Development. Volume 1 (pp. 126-137). Somerville, MA: Cascadilla Press. -
Kelly, S. D., Ozyurek, A., & Maris, E. (2010). Two sides of the same coin: Speech and gesture mutually interact to enhance comprehension. Psychological Science, 21, 260-267. doi:10.1177/0956797609357327.
Abstract
Gesture and speech are assumed to form an integrated system during language production. Based on this view, we propose the integrated‐systems hypothesis, which explains two ways in which gesture and speech are integrated—through mutual and obligatory interactions—in language comprehension. Experiment 1 presented participants with action primes (e.g., someone chopping vegetables) and bimodal speech and gesture targets. Participants related primes to targets more quickly and accurately when they contained congruent information (speech: “chop”; gesture: chop) than when they contained incongruent information (speech: “chop”; gesture: twist). Moreover, the strength of the incongruence affected processing, with fewer errors for weak incongruities (speech: “chop”; gesture: cut) than for strong incongruities (speech: “chop”; gesture: twist). Crucial for the integrated‐systems hypothesis, this influence was bidirectional. Experiment 2 demonstrated that gesture’s influence on speech was obligatory. The results confirm the integrated‐systems hypothesis and demonstrate that gesture and speech form an integrated system in language comprehension. -
Kita, S., Ozyurek, A., Allen, S., & Ishizuka, T. (2010). Early links between iconic gestures and sound symbolic words: Evidence for multimodal protolanguage. In A. D. Smith, M. Schouwstra, B. de Boer, & K. Smith (Eds.), Proceedings of the 8th International Conference on the Evolution of Language (EVOLANG 8) (pp. 429-430). Singapore: World Scientific. -
Ozyurek, A., Zwitserlood, I., & Perniss, P. M. (2010). Locative expressions in signed languages: A view from Turkish Sign Language (TID). Linguistics, 48(5), 1111-1145. doi:10.1515/LING.2010.036.
Abstract
Locative expressions encode the spatial relationship between two (or more) entities. In this paper, we focus on locative expressions in signed languages, which use the visual-spatial modality for linguistic expression, specifically in Turkish Sign Language (Türk İşaret Dili, henceforth TİD). We show that TİD uses various strategies in discourse to encode the relation between a Ground entity (i.e., a bigger and/or backgrounded entity) and a Figure entity (i.e., a smaller entity, which is in the focus of attention). Some of these strategies exploit affordances of the visual modality for analogue representation and provide evidence for modality-specific effects on locative expressions in sign languages. However, other modality-specific strategies, e.g., the simultaneous expression of Figure and Ground, which have been reported for many other sign languages, occur only sparsely in TİD. Furthermore, TİD uses categorical as well as analogical structures in locative expressions. On the basis of these findings, we discuss differences and similarities between signed and spoken languages to broaden our understanding of the range of structures used in natural language (i.e., in both the visual-spatial or oral-aural modalities) to encode locative relations. A general linguistic theory of spatial relations, and specifically of locative expressions, must take all structures that might arise in both modalities into account before it can generalize over the human language faculty. -
Ozyurek, A. (2010). The role of iconic gestures in production and comprehension of language: Evidence from brain and behavior. In S. Kopp & I. Wachsmuth (Eds.), Gesture in embodied communication and human-computer interaction: 8th International Gesture Workshop, GW 2009, Bielefeld, Germany, February 25-27, 2009. Revised selected papers (pp. 1-10). Berlin: Springer. -
Senghas, A., Ozyurek, A., & Goldin-Meadow, S. (2010). The evolution of segmentation and sequencing: Evidence from homesign and Nicaraguan Sign Language. In A. D. Smith, M. Schouwstra, B. de Boer, & K. Smith (Eds.), Proceedings of the 8th International Conference on the Evolution of Language (EVOLANG 8) (pp. 279-289). Singapore: World Scientific.