Habets, B., Kita, S., Shao, Z., Ozyurek, A., & Hagoort, P. (2011). The role of synchrony and ambiguity in speech–gesture integration during comprehension. Journal of Cognitive Neuroscience, 23, 1845-1854. doi:10.1162/jocn.2010.21462.
Abstract
During face-to-face communication, one not only hears speech but also sees a speaker's communicative hand movements. It has been shown that such hand gestures play an important role in communication, where the two modalities influence each other's interpretation. A gesture typically overlaps temporally with coexpressive speech, but the gesture is often initiated before (but not after) the coexpressive speech. The present ERP study investigated what degree of asynchrony in speech and gesture onsets is optimal for semantic integration of concurrent gesture and speech. Videos of a person gesturing were combined with speech segments that were either semantically congruent or incongruent with the gesture. Although gesture and speech always overlapped in time, they were presented with three different degrees of asynchrony. In the SOA 0 condition, gesture onset and speech onset were simultaneous. In the SOA 160 and 360 conditions, speech was delayed by 160 and 360 msec, respectively. ERPs time-locked to speech onset showed a significant difference between semantically congruent and incongruent gesture–speech combinations on the N400 for the SOA 0 and 160 conditions. No significant difference was found for the SOA 360 condition. These results imply that speech and gesture are integrated most efficiently when the difference in onsets does not exceed a certain time span, because iconic gestures need speech to be disambiguated in a way relevant to the speech context.
Ozyurek, A. (2011). Language in our hands: The role of the body in language, cognition and communication [Inaugural lecture]. Nijmegen: Radboud University Nijmegen.
Abstract
Even though most studies of language have focused on the speech channel and/or viewed language as an amodal abstract system, there is growing evidence for the role our bodily actions and perceptions play in language and communication. In this context, Özyürek discusses what our meaningful visible bodily actions reveal about our language capacity. Drawing on cross-linguistic, behavioral, and neurobiological research, she shows that co-speech gestures reflect the imagistic, iconic aspects of the events talked about and at the same time interact with language production and comprehension processes. Sign languages can likewise be characterized as having an abstract system of linguistic categories while also using iconicity in several aspects of language structure and processing. Studying language multimodally reveals how grounded language is in our visible bodily actions and opens up new lines of research for studying language in its situated, natural face-to-face context.
Ozyurek, A., & Perniss, P. M. (2011). Event representations in signed languages. In J. Bohnemeyer & E. Pederson (Eds.), Event representations in language and cognition (pp. 84-107). New York: Cambridge University Press.
Perniss, P. M., Zwitserlood, I., & Ozyurek, A. (2011). Does space structure spatial language? Linguistic encoding of space in sign languages. In L. Carlson, C. Holscher, & T. Shipley (Eds.), Proceedings of the 33rd Annual Meeting of the Cognitive Science Society (pp. 1595-1600). Austin, TX: Cognitive Science Society.
Furman, R., & Ozyurek, A. (2006). The use of discourse markers in adult and child Turkish oral narratives: Şey, yani and işte. In S. Yagcioglu & A. Dem Deger (Eds.), Advances in Turkish linguistics (pp. 467-480). Izmir: Dokuz Eylul University Press.
Furman, R., Ozyurek, A., & Allen, S. E. M. (2006). Learning to express causal events across languages: What do speech and gesture patterns reveal? In D. Bamman, T. Magnitskaia, & C. Zaller (Eds.), Proceedings of the 30th Annual Boston University Conference on Language Development (pp. 190-201). Somerville, MA: Cascadilla Press.
Gullberg, M., & Ozyurek, A. (2006). Report on the Nijmegen Lectures 2004: Susan Goldin-Meadow 'The Many Faces of Gesture'. Gesture, 6(1), 151-164.
Küntay, A. C., & Ozyurek, A. (2006). Learning to use demonstratives in conversation: What do language specific strategies in Turkish reveal? Journal of Child Language, 33(2), 303-320. doi:10.1017/S0305000906007380.
Abstract
Pragmatic development requires the ability to use linguistic forms, along with non-verbal cues, to focus an interlocutor's attention on a referent during conversation. We investigate the development of this ability by examining how the use of demonstratives is learned in Turkish, where a three-way demonstrative system (bu, şu, o) obligatorily encodes both distance contrasts (i.e. proximal and distal) and the absence or presence of the addressee's visual attention on the referent. A comparison of demonstrative use by Turkish children (6 four- and 6 six-year-olds) and 6 adults during conversation shows that adult-like use of the attention-directing demonstrative, şu, is not mastered even at the age of six, while the distance contrasts are learned earlier. This language-specific development reveals that designing referential forms in consideration of the recipient's attentional status during conversation is a pragmatic feat that takes more than six years to develop.
Allen, S., Ozyurek, A., Kita, S., Brown, A., Turanli, R., & Ishizuka, T. (2003). Early speech about manner and path in Turkish and English: Universal or language-specific? In B. Beachley, A. Brown, & F. Conlin (Eds.), Proceedings of the 27th Annual Boston University Conference on Language Development (pp. 63-72). Somerville, MA: Cascadilla Press.
Kita, S., & Ozyurek, A. (2003). What does cross-linguistic variation in semantic coordination of speech and gesture reveal? Evidence for an interface representation of spatial thinking and speaking. Journal of Memory and Language, 48(1), 16-32. doi:10.1016/S0749-596X(02)00505-3.
Abstract
Gestures that spontaneously accompany speech convey information coordinated with the concurrent speech. There has been considerable theoretical disagreement about the process by which this informational coordination is achieved. Some theories predict that the information encoded in gesture is not influenced by how information is verbally expressed. However, others predict that gestures encode only what is encoded in speech. This paper investigates this issue by comparing informational coordination between speech and gesture across different languages. Narratives in Turkish, Japanese, and English were elicited using an animated cartoon as the stimulus. It was found that gestures used to express the same motion events were influenced simultaneously by (1) how features of motion events were expressed in each language, and (2) spatial information in the stimulus that was never verbalized. From this, it is concluded that gestures are generated from spatio-motoric processes that interact on-line with the speech production process. Through the interaction, spatio-motoric information to be expressed is packaged into chunks that are verbalizable within a processing unit for speech formulation. In addition, we propose a model of speech and gesture production as one of a class of frameworks that are compatible with the data.
Senghas, A., Ozyurek, A., & Kita, S. (2003). Encoding motion events in an emerging sign language: From Nicaraguan gestures to Nicaraguan signs. In A. E. Baker, B. van den Bogaerde, & O. A. Crasborn (Eds.), Crosslinguistic perspectives in sign language research (pp. 119-130). Hamburg: Signum Press.