Azar, Z., Backus, A., & Ozyurek, A. (2016). Multimodal reference tracking in Dutch and Turkish discourse: Role of culture and typological differences. Poster presented at the 7th Conference of the International Society for Gesture Studies (ISGS7), Paris, France.
Abstract
Previous studies show that during discourse narrations, speakers use fuller forms in speech (e.g., full noun phrases (NPs)) and gesture more when referring back to already introduced referents, and use reduced forms in speech (e.g., overt and null pronouns) and gesture less when maintaining referents (Gullberg, 2006; Yoshioka, 2008; Debreslioska et al., 2013; Perniss & Özyürek, 2015). Thus, the quantity of coding material in speech and in co-speech gesture shows parallelism. However, these studies have focused mostly on Indo-European languages, and we do not know whether the parallel relation between speech and co-speech gesture during discourse narration generalizes to languages with different pronominal systems. Furthermore, these studies have not taken into account whether a language is used in a high- or low-gesture culture as a possible modulating factor. Aiming to fill this gap, we directly compare multimodal discourse narrations in Turkish and Dutch, two languages that have different constraints on the use of overt pronouns (preferred in Dutch) versus null pronouns (preferred in Turkish) and that differ in whether gender is marked in the pronouns (Dutch) or not (Turkish). We elicited discourse narrations in Turkey and the Netherlands from 40 speakers (20 Dutch; 20 Turkish) using two short silent videos. Each speaker was paired with a naive addressee during data collection. We first divided the discourse into main clauses. We then coded each animate subject referring expression for its linguistic type (i.e., NP, pronoun, null pronoun) and its co-reference context (i.e., re-introduction, maintenance). As for the co-speech gesture data, we first coded all types of gestures in order to determine whether Turkish and Dutch cultures differ in overall gesture rate (per clause). We then focused on the abstract deictic gestures to space that temporally aligned with the subject referent of each main clause to calculate the proportion of gesturally marked subject referents. Our gesture rate analyses reveal that Turkish speakers overall produce more gestures than Dutch speakers (p<.001), suggesting that Turkish is a relatively high-gesture culture compared to Dutch. Our speech analyses show that both Turkish and Dutch speakers use mainly NPs to re-introduce subject referents and reduced forms for maintained referents (null pronouns in Turkish and overt pronouns in Dutch). Our gesture analyses show that both Turkish and Dutch speakers gestured more with re-introduced subject referents than with maintained subject referents (p<.001). However, Turkish speakers gestured more frequently with pronouns than Dutch speakers. Taken together, the results show that speakers of both languages organize information structure in discourse in a similar manner and vary the quantity of coding material in their speech and gesture in parallel to mark the co-reference context, a discourse strategy independent of whether the speakers are from a relatively high- or low-gesture culture and regardless of the differences in the pronominal systems of their languages. As a novel contribution, however, we show that pragmatics interacts with contextual and linguistic factors modulating gestures: pragmatically marked forms in speech are more likely to be marked with gestures as well (more gestures with pronouns but not with NPs in Turkish compared to Dutch). -
Azar, Z., Backus, A., & Ozyurek, A. (2016). Pragmatic relativity: Gender and context affect the use of personal pronouns in discourse differentially across languages. Talk presented at the 38th Annual Meeting of the Cognitive Science Society (CogSci 2016). Philadelphia, PA, US. 2016-08-11 - 2016-08-13.
Abstract
Speakers use different referring expressions (REs) in pragmatically appropriate ways to produce coherent narratives. Languages, however, differ in a) whether REs as arguments can be dropped and b) whether personal pronouns encode gender. We examine two languages that differ from each other in these two aspects and ask whether the co-reference context and the gender encoding options affect the use of REs differentially. We elicited narratives from Dutch and Turkish speakers about two types of three-person events, one involving people of the same gender and the other of mixed gender. Speakers re-introduced referents into the discourse with fuller forms (NPs) and maintained them with reduced forms (overt or null pronouns). Turkish speakers used pronouns mainly to mark emphasis, and only Dutch speakers used pronouns differentially across the two types of videos. We argue that the linguistic possibilities available in their languages tune speakers into taking different principles into account to produce pragmatically coherent narratives. -
Drijvers, L., Ozyurek, A., & Jensen, O. (2016). Gestural enhancement of degraded speech comprehension engages the language network, motor and visual cortex as reflected by a decrease in the alpha and beta band. Talk presented at Sensorimotor Speech Processing Symposium. London, UK. 2016-08-16.
-
Drijvers, L., Ozyurek, A., & Jensen, O. (2016). Gestural enhancement of degraded speech comprehension engages the language network, motor and visual cortex as reflected by a decrease in the alpha and beta band. Poster presented at the 20th International Conference on Biomagnetism (BioMag 2016), Seoul, South Korea.
-
Drijvers, L., Ozyurek, A., & Jensen, O. (2016). Gestural enhancement of degraded speech comprehension engages the language network, motor and visual cortex as reflected by a decrease in the alpha and beta band. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.
Abstract
Face-to-face communication involves the integration of speech and visual information, such as iconic co-speech gestures. Iconic gestures in particular, which illustrate object attributes, actions and space, can enhance speech comprehension in adverse listening conditions (e.g. Holle et al., 2010). Using magnetoencephalography (MEG), we aimed to identify the networks and the neuronal dynamics associated with the enhancement of (degraded) speech comprehension by gestures. Our central hypothesis was that gestures enhance degraded speech comprehension, and that decreases in alpha and beta power reflect engagement, whereas increases in gamma reflect active processing in task-relevant networks (Jensen & Mazaheri, 2010; Jokisch & Jensen, 2007). Participants (n = 30) were presented with videos of an actress uttering Dutch action verbs. Speech was presented either clear or degraded by applying noise-vocoding (6-band), and was accompanied by videos of an actor performing an iconic gesture depicting the action (clear speech + gesture, C-SG; degraded speech + gesture, D-SG) or no gesture (clear speech only, C-S; degraded speech only, D-S). We quantified changes in time-frequency representations of oscillatory power as the video unfolded. The sources of the task-specific modulations were identified using a beamformer approach. Gestural enhancement, calculated by comparing (D-SG vs D-S) to (C-SG vs C-S), revealed significant interactions between the occurrence of a gesture and degraded speech, particularly in the alpha, beta and gamma bands. Gestural enhancement was reflected by a beta decrease in motor areas, indicative of engagement of the motor system during gesture observation, especially when speech was degraded. A beta band decrease was also observed in the language network, including left inferior frontal gyrus, a region involved in semantic unification operations, and left superior temporal regions. This suggests a higher semantic unification load when a gesture is presented together with degraded versus clear speech. We also observed a gestural enhancement effect in the alpha band in visual areas. This suggests that visual areas are more engaged when a gesture is present, most likely reflecting the allocation of visual attention, especially when speech is degraded, which is in line with the functional inhibition hypothesis (see Jensen & Mazaheri, 2010). Finally, we observed gamma band effects in left-temporal areas, suggesting facilitated binding of speech and gesture into a unified representation, especially when speech is degraded. In conclusion, our results support earlier claims on the recruitment of a left-lateralized network including motor areas, STS/MTG and LIFG in speech-gesture integration and gestural enhancement of speech (see Ozyurek, 2014). Our findings provide novel insight into the neuronal dynamics associated with speech-gesture integration: decreases in alpha and beta power reflect the engagement of the visual and language/motor networks, respectively, whereas a gamma band increase reflects integration in left prefrontal cortex. In future work we will characterize the interaction between these networks by means of functional connectivity analysis. -
Drijvers, L., Ozyurek, A., & Jensen, O. (2016). Gestural enhancement of degraded speech comprehension engages the language network, motor and visual cortex as reflected by a decrease in the alpha and beta band. Poster presented at the Language in Interaction Summerschool on Human Language: From Genes and Brains to Behavior, Berg en Dal, The Netherlands.
-
Drijvers, L., Ozyurek, A., & Jensen, O. (2016). Gestural enhancement of degraded speech comprehension engages the language network, motor cortex and visual cortex. Talk presented at the 2nd Workshop on Psycholinguistic Approaches to Speech Recognition in Adverse Conditions (PASRAC). Nijmegen, The Netherlands. 2016-10-31 - 2016-11-01.
-
Drijvers, L., & Ozyurek, A. (2016). Native language status of the listener modulates the neural integration of speech and gesture in clear and adverse listening conditions. Poster presented at the Eighth Annual Meeting of the Society for the Neurobiology of Language (SNL 2016), London, UK.
Abstract
Face-to-face communication consists of integrating speech and visual input, such as co-speech gestures. Iconic gestures (e.g. a drinking gesture) can enhance speech comprehension, especially when speech is difficult to comprehend, such as in noise (e.g. Holle et al., 2010) or in non-native speech comprehension (e.g. Sueyoshi & Hardison, 2005). Previous behavioral and neuroimaging studies have argued that the integration of speech and gestures is stronger when speech intelligibility decreases (e.g. Holle et al., 2010), but that in clear speech, non-native listeners benefit more from gestures than native listeners (Dahl & Ludvigsen, 2014; Sueyoshi & Hardison, 2005). So far, the neurocognitive mechanisms by which non-native speakers integrate speech and gestures in adverse listening conditions remain unknown. We investigated whether highly proficient non-native speakers of Dutch make use of iconic co-speech gestures as much as native speakers during clear and degraded speech comprehension. In an EEG study, native (n = 23) and non-native (German, n = 23) speakers of Dutch watched videos of an actress uttering Dutch action verbs. Speech was presented either clear or degraded by applying noise-vocoding (6-band), and was accompanied by a matching or mismatching iconic gesture. This allowed us to calculate the effects of both speech degradation and the semantic congruency of the gesture on the N400 component. The N400 was taken as an index of semantic integration effort (Kutas & Federmeier, 2011). In native listeners, N400 amplitude was sensitive both to mismatches between speech and gesture and to degradation; the most pronounced N400 was found in response to degraded speech and a mismatching gesture (DMM), followed by degraded speech and a matching gesture (DM), clear speech and a mismatching gesture (CMM), and clear speech and a matching gesture (CM) (DMM>DM>CMM>CM, all p < .05). In non-native speakers, we found a difference between CMM and CM but not between DMM and DM. However, degraded conditions differed from clear conditions (DMM=DM>CMM>CM, all significant comparisons p < .05). Directly comparing native to non-native speakers, the N400 effect (i.e. the difference between CMM and CM / DMM and DM) was greater for non-native speakers in clear speech, but greater for native speakers in degraded speech. These results provide further evidence for the claim that in clear speech, non-native speakers benefit more from gestural information than native speakers, as indexed by a larger N400 effect for the mismatch manipulation. Both native and non-native speakers show integration effort during degraded speech comprehension. However, native speakers require less effort to recognize auditory cues in degraded speech than non-native speakers, resulting in a larger N400 for degraded speech with a mismatching gesture for natives than for non-natives. Conversely, non-native speakers require more effort to resolve auditory cues when speech is degraded and can therefore not benefit as much as native speakers from auditory cues to map the semantic information from gesture onto speech. In sum, non-native speakers can benefit from gestural information in speech comprehension more than native listeners, but not when speech is degraded. Our findings suggest that the native language of the listener modulates multimodal semantic integration in adverse listening conditions. -
Drijvers, L., & Ozyurek, A. (2016). Native language status of the listener modulates the neural integration of speech and gesture in clear and adverse listening conditions. Poster presented at the 2nd Workshop on Psycholinguistic Approaches to Speech Recognition in Adverse Conditions (PASRAC), Nijmegen, The Netherlands.
-
Drijvers, L., Ozyurek, A., & Jensen, O. (2016). Oscillatory and temporal dynamics show engagement of the language network, motor system and visual cortex during gestural enhancement of degraded speech. Talk presented at the Donders Discussions 2016. Nijmegen, The Netherlands. 2016-11-23 - 2016-11-24.
-
Drijvers, L., & Ozyurek, A. (2016). What do iconic gestures and visible speech contribute to degraded speech comprehension?. Poster presented at the Nijmegen Lectures 2016, Nijmegen, The Netherlands.
-
Drijvers, L., & Ozyurek, A. (2016). Visible speech enhanced: What do gestures and lip movements contribute to degraded speech comprehension?. Poster presented at the 8th Speech in Noise Workshop (SpiN 2016), Groningen, The Netherlands.
-
Drijvers, L., & Ozyurek, A. (2016). Visible speech enhanced: What do iconic gestures and lip movements contribute to degraded speech comprehension?. Talk presented at the 7th Conference of the International Society for Gesture Studies (ISGS7). Paris, France. 2016-07-18 - 2016-07-22.
Abstract
Natural, face-to-face communication consists of an audiovisual binding that integrates speech and visual information, such as iconic co-speech gestures and lip movements. Especially in adverse listening conditions, such as in noise, this visual information can enhance speech comprehension. However, the contributions of lip movements and iconic gestures to understanding speech in noise have mostly been studied separately. Here, we investigated the contribution of iconic gestures and lip movements to degraded speech comprehension in a joint context. In a free-recall task, participants watched short videos of an actress uttering an action verb. The verb was presented in clear speech, severely degraded speech (2-band noise-vocoding) or moderately degraded speech (6-band noise-vocoding), and participants viewed the actress with her lips blocked, with her lips visible, or with her lips visible and making an iconic co-speech gesture. Additionally, we presented these clips without audio, either with just the lip movements present or with both lip movements and gestures present, to investigate how much information listeners could get from visual input alone. Our results reveal that when listeners perceive degraded speech in a visual context, they benefit more from gestural information than from lip movements alone. This benefit is larger at moderate noise levels, where auditory cues are still moderately reliable, than at severe noise levels, where auditory cues are no longer reliable. As a result, listeners are only able to benefit from this additive effect of ‘double’ multimodal enhancement of iconic gestures and lip movements when there are enough auditory cues present to map lip movements to the phonological information in the speech signal. -
Karadöller, D. Z., Sumer, B., & Ozyurek, A. (2016). Effect of language modality on development of spatial cognition and memory. Poster presented at the Language in Interaction Summerschool on Human Language: From Genes and Brains to Behavior, Berg en Dal, The Netherlands.
-
Ortega, G., & Ozyurek, A. (2016). Generalisable patterns of gesture distinguish semantic categories in communication without language: Evidence from pantomime. Talk presented at the 7th Conference of the International Society for Gesture Studies (ISGS7). Paris, France. 2016-07-18 - 2016-07-22.
Abstract
There is a long-standing assumption that gestural forms are shaped by a set of modes of representation (acting, representing, drawing, moulding), with each technique expressing the speaker's focus of attention on specific aspects of a referent (Müller, 2013). Only recently, however, has the relationship between gestural forms and modes of representation been linked to 1) the semantic categories they represent (i.e., objects, actions) and 2) the affordances of the referents. Here we investigate these relations when speakers are asked to communicate about different types of referents in pantomime. This mode of communication has revealed generalisable ordering of the constituents of events across speakers of different languages (Goldin-Meadow, So, Özyürek, & Mylander, 2008), but it remains an empirical question whether it also draws on systematic patterns to distinguish different semantic categories. Twenty speakers of Dutch participated in a pantomime generation task. They had to produce a gesture that conveyed the same meaning as a word on a computer screen without speaking. Participants saw 10 words from three semantic categories: actions with objects (e.g., to drink), manipulable objects (e.g., mug), and non-manipulable objects (e.g., building). Pantomimes were categorised according to their mode of representation and also for the use of deictics (pointing, showing or eye gaze). Further, the ordering of different representations was noted when more than one gesture was produced. Actions with objects elicited mainly individual gestures (mean: 1.1, range: 1-2), while manipulable objects (mean: 1.8, range: 1-4) and non-manipulable objects (mean: 1.6, range: 1-4) primarily elicited more than one pantomime, as sequences of interrelated gestures. Actions with objects were mostly represented with one gesture, through re-enactment of the action (e.g., raising a closed fist to the mouth for ‘to drink’), while manipulable objects were mostly represented through an acting gesture followed by a deictic (e.g., raising a closed fist to the mouth and then pointing at the fist). Non-manipulable objects, however, were represented through a drawing gesture followed by an acting one (e.g., tracing a rectangle and then pretending to walk through a door). In the absence of language, the form of gestures is constrained by objects' affordances (i.e., manipulable or not) and by the communicative need to discriminate across semantic categories (i.e., objects or actions). Gestures adopt an acting or drawing mode of representation depending on the affordances of the referent, which echoes patterns observed in the forms of co-speech gestures (Masson-Carro, Goudbeek, & Krahmer, 2015). We also show for the first time that the use and ordering of deictics and the different modes of representation operate in tandem to distinguish between semantically related concepts (e.g., to drink and mug). When forced to communicate without language, participants show consistent patterns in their strategies to distinguish different semantic categories. -
Ozyurek, A., & Ortega, G. (2016). Language in the visual modality: Co-speech Gesture and Sign. Talk presented at the Language in Interaction Summerschool on Human Language: From Genes and Brains to Behavior. Berg en Dal, The Netherlands. 2016-07-03 - 2016-07-14.
Abstract
As humans, our ability to communicate and use language is instantiated not only in the vocal modality but also in the visual modality. The main examples of this are sign languages and the (co-speech) gestures used in spoken languages. Sign languages, the natural languages of Deaf communities, use systematic and conventionalized movements of the hands, face, and body for linguistic expression. Co-speech gestures, though non-linguistic, are produced and perceived in tight semantic and temporal integration with speech. Thus, language, in its primary face-to-face context (both phylogenetically and ontogenetically), is a multimodal phenomenon. In fact, the visual modality seems to be a more common means of communication than speech when we consider both deaf and hearing individuals. Most research on language, however, has focused on spoken/written language and has rarely considered the visual context it is embedded in to understand our linguistic capacity. This talk gives a brief review of what we know so far about what the visual expressive resources of language look like in both spoken and sign languages and about their role in communication and cognition, broadening our scope of language. We will argue, based on these recent findings, that our models of language need to take visual modes of communication into account and provide a unified framework for how the semiotic and expressive resources of the visual modality are recruited for both spoken and sign languages, as well as the consequences for processing, also considering their neural underpinnings. -
Schubotz, L., Drijvers, L., Holler, J., & Ozyurek, A. (2016). The cocktail party effect revisited in older and younger adults: When do iconic co-speech gestures help?. Poster presented at the 8th Speech in Noise Workshop (SpiN 2016), Groningen, The Netherlands.
-
Slonimska, A., Ozyurek, A., & Campisi, E. (2016). The role of addressee’s age in use of ostensive signals to gestures and their effectiveness. Talk presented at the 3rd Attentive Listener in the Visual World (AttLis 2016) workshop. Potsdam, Germany. 2016-03-10 - 2016-03-11.
-
Slonimska, A., Ozyurek, A., & Campisi, E. (2016). Markers of communicative intent through ostensive signals and their effectiveness in multimodal demonstrations to adults and children. Talk presented at the 7th Conference of the International Society for Gesture Studies (ISGS7). Paris, France. 2016-07-18 - 2016-07-22.
Abstract
In face-to-face interaction, people adapt their multimodal message to fit their addressees’ informational needs. In doing so, they are likely to mark their communicative intent by accentuating the relevant information provided by both speech and gesture. In the present study we were interested in the strategies by which speakers highlight their gestures (by means of ostensive signals like eye gaze and/or ostensive speech) for children in comparison to adults in a multimodal demonstration task. Moreover, we investigated the effectiveness of these ostensive signals to gestures and asked whether addressees shift their attention to the gestures highlighted by the speakers through different ostensive signals. Previous research has identified some of these ostensive signals (Streeck, 1993; Gullberg & Kita, 2009), but has not investigated how often they occur and whether they are designed for, and attended to by, different types of addressees. Forty-eight Italians, born and raised in Sicily, participated in the study. Sixteen of the Italian adult participants (12 female, 7 male, age range 20-30) were assigned the role of speaker, while the other 16 adults and 16 children (age range 9-10) took the role of addressee. The task of the speaker was to describe the rules of a children’s game, which consists of using wooden blocks of different shapes to make a path without gaps. Speakers’ descriptions were coded for words and representational gestures, as well as for three types of ostensive signals highlighting the gestures: 1) eye gaze, 2) ostensive speech, and 3) a combination of eye gaze and ostensive speech directed to the gesture. Addressees’ eye gaze to speakers’ gestures was coded, and it was annotated whether their gaze was directed at a highlighted or a non-highlighted gesture. Overall, eye gaze was the most common signal, followed by ostensive speech and multimodal signals. We found that speakers were likely to highlight more gestures with children than with adults when all three types of signals were considered together. However, when treated separately, the results revealed that speakers used more combined ostensive signals for children than for adults, but they were also likely to use more eye gaze towards their gestures with other adults than with children. Furthermore, both groups of addressees gazed more at gestures highlighted by the speakers than at gestures that were not highlighted at all. The present study provides the first quantitative insights into how speakers highlight their gestures and whether the age of the addressee influences the effectiveness of the ostensive signals. Speakers mark the communicative relevance of their gestures with different types of ostensive signals and by taking different types of addressees into account. In turn, addressees, not only adults but also children, take advantage of the signals provided to these gestures. -
Sumer, B., Zwitserlood, I., & Ozyurek, A. (2016). Hands in motion: Learning to express motion events in a sign and a spoken language. Poster presented at the 12th International Conference on Theoretical Issues in Sign Language Research (TISLR12), Melbourne, Australia.
-
Azar, Z., Backus, A., & Ozyurek, A. (2015). Multimodal reference tracking in monolingual and bilingual discourse. Talk presented at the Nijmegen-Tilburg Multimodality Workshop. Tilburg, The Netherlands. 2015-10-22.
-
Drijvers, L., & Ozyurek, A. (2015). Visible speech enhanced: What do gestures and lips contribute to speech comprehension in noise?. Talk presented at the Nijmegen-Tilburg Multi-modality workshop. Tilburg, The Netherlands. 2015-10-22.
-
Ozyurek, A. (2015). The role of gesture in language evolution: Beyond the gesture-first hypotheses. Talk presented at the SMART Cognitive Science: the Amsterdam Conference – Workshop, Evolution of Language: The co-evolution of biology and culture. Amsterdam, the Netherlands. 2015-03-25 - 2015-03-26.
Abstract
It has been a popular view to propose that gesture preceded and paved the way for the evolution of (spoken) language (e.g., Corballis, Tomasello, Arbib). However, these views do not take into account recent findings on the neural and cognitive infrastructure of how modern humans (adults and children) use gestures in various communicative contexts. Based on this current knowledge, I will revisit gesture-first theories of language evolution and discuss alternatives more compatible with the multimodal nature of modern human language. -
Peeters, D., Snijders, T. M., Hagoort, P., & Ozyurek, A. (2015). The role of left inferior frontal gyrus in the integration of pointing gestures and speech. Talk presented at the 4th GESPIN - Gesture & Speech in Interaction Conference. Nantes, France. 2015-09-02 - 2015-09-04.
-
Peeters, D., Snijders, T. M., Hagoort, P., & Ozyurek, A. (2015). The neural integration of pointing gesture and speech in a visual context: An fMRI study. Poster presented at the 7th Annual Society for the Neurobiology of Language Conference (SNL 2015), Chicago, USA.
Additional information
http://www.neurolang.org/programs/SNL2015_Abstracts.pdf -
Schubotz, L., Holler, J., & Ozyurek, A. (2015). Age-related differences in multi-modal audience design: Young, but not old speakers, adapt speech and gestures to their addressee's knowledge. Talk presented at the 4th GESPIN - Gesture & Speech in Interaction Conference. Nantes, France. 2015-09-02 - 2015-09-05.
-
Schubotz, L., Drijvers, L., Holler, J., & Ozyurek, A. (2015). The cocktail party effect revisited in older and younger adults: When do iconic co-speech gestures help?. Poster presented at Donders Sessions 2015, Nijmegen, The Netherlands.
-
Schubotz, L., Drijvers, L., Holler, J., & Ozyurek, A. (2015). The cocktail party effect revisited in older and younger adults: When do iconic co-speech gestures help?. Talk presented at Donders Discussions 2015. Nijmegen, The Netherlands. 2015-11-05.
-
Slonimska, A., Ozyurek, A., & Campisi, E. (2015). Markers of communicative relevance of gesture. Talk presented at the “Nijmegen-Tilburg Multi-modality“ workshop. Tilburg, The Netherlands. 2015-10-24.
-
Slonimska, A., Ozyurek, A., & Campisi, E. (2015). Ostensive signals: Markers of communicative relevance of gesture during demonstration to adults and children. Talk presented at the 4th GESPIN - Gesture & Speech in Interaction Conference. Nantes, France. 2015-09-02 - 2015-09-04.
-
Azar, Z., Backus, A., & Ozyurek, A. (2014). Discourse management: Reference tracking of subject referents in speech and gesture in Turkish narratives. Talk presented at the 17th International Conference on Turkish Linguistics. Rouen, France. 2014-09-03 - 2014-09-05.
-
Dimitrova, D. V., Chu, M., Wang, L., Ozyurek, A., & Hagoort, P. (2014). Beat gestures modulate the processing of focused and non-focused words in context. Poster presented at the Sixth Annual Meeting of the Society for the Neurobiology of Language (SNL 2014), Amsterdam, the Netherlands.
Abstract
Information in language is organized according to a principle called information structure: new and important information (focus) is highlighted and distinguished from less important information (non-focus). Most studies so far have been concerned with how focused information is emphasized linguistically and suggest that listeners expect focus to be accented and process it more deeply than non-focus (Wang et al., 2011). Little is known about how listeners deal with non-verbal cues like beat gestures, which also emphasize the words they accompany, similarly to pitch accent. ERP studies suggest that beat gestures facilitate the processing of phonological, syntactic, and semantic aspects of speech (Biau & Soto-Faraco, 2013; Holle et al., 2012; Wang & Chu, 2013). It is unclear whether listeners expect beat gestures to be aligned with the information structure of the message. The present ERP study addresses this question by testing whether beat gestures modulate the processing of accented-focused vs. unaccented-non-focused words in context in a similar way. Participants watched movies with short dialogues and performed a comprehension task. In each dialogue, the answer “He bought the books via Amazon” contained a target word (“books”) which was combined with a beat gesture, a control hand movement (e.g., self touching movement) or no gesture. Based on the preceding context, the target word was either in focus and accented, when preceded by a question like “Did the student buy the books or the magazines via Amazon?”, or the target word was in non-focus and unaccented, when preceded by a question like “Did the student buy the books via Amazon or via Marktplaats?”. The gestures started 500 ms prior to the target word. All gesture parameters (hand shape, naturalness, emphasis, duration, and gesture-speech alignment) were determined in behavioural tests. ERPs were time-locked to gesture onset to examine gesture effects, and to target word onset for pitch accent effects. We applied a cluster-based random permutation analysis to test for main effects and gesture-accent interactions in both time-locking procedures. We found that accented words elicited a positive main effect between 300-600 ms post target onset. Words accompanied by a beat gesture and a control movement elicited sustained positivities between 200-1300 ms post gesture onset. These independent effects of pitch accent and beat gesture are in line with previous findings (Dimitrova et al., 2012; Wang & Chu, 2013). We also found an interaction between control gesture and pitch accent (1200-1300 ms post gesture onset), showing that accented words accompanied by a control movement elicited a negativity relative to unaccented words. The present data show that beat gestures do not differentially modulate the processing of accented-focused vs. unaccented-non-focused words. Beat gestures engage a positive and long-lasting neural signature, which appears independent from the information structure of the message. Our study suggests that non-verbal cues like beat gestures play a unique role in emphasizing information in speech. -
Dimitrova, D. V., Chu, M., Wang, L., Ozyurek, A., & Hagoort, P. (2014). Independent effects of beat gesture and pitch accent on processing words in context. Poster presented at the 20th Architectures and Mechanisms for Language Processing Conference (AMLAP 2014), Edinburgh, UK.
-
Kelly, S., Healey, M., Ozyurek, A., & Holler, J. (2014). The integration of gestures and actions with speech: Should we welcome the empty-handed to language comprehension?. Talk presented at the 6th Conference of the International Society for Gesture Studies (ISGS 6). San Diego, CA, USA. 2014-07-08 - 2014-07-11.
Abstract
Background: Gesture and speech are theorized to form a single integrated system of meaning during language production (McNeill, 1992), and evidence is mounting that this integration applies to language comprehension as well (Kelly, Ozyurek & Maris, 2010). However, it is unknown whether gesture is uniquely integrated with speech or is processed like any other manual action. To explore this issue, we compared the extent to which speech is integrated with hand gestures versus actual actions on objects during comprehension. Method: The present study employed a priming paradigm in two experiments. In Experiment 1, subjects watched multimodal videos that presented auditory (words) and visual (gestures and actions on objects) information. Half the subjects related the audio information to a written prime presented before the video, and the other half related the visual information to the written prime. For half of the multimodal video stimuli, the audio and visual information was congruent, and for the other half, incongruent. The task was to press one button if the written prime was the same as the visual (31 subjects) or audio (31 subjects) information in the target video or another button if different. RT and accuracy were recorded. Results: In Experiment 2, we reversed the priming sequence with a different set of 18 subjects. Now the video became the prime and the written verb followed as the target, but the task was the same with one difference: to indicate whether the written target was related or unrelated to only the audio information (speech) in the preceding video prime. ERPs were recorded to the written targets. In Experiment 1, subjects in both the audio and visual target tasks were less accurate when processing stimuli in which gestures and actions were incongruent versus congruent with speech, F(1, 60) = 22.90, p<.001, but this effect was less prominent for speech-action than for speech-gesture stimuli. However, subjects were more accurate when identifying actions versus gestures, F(1, 60) = 8.03, p = .006. In Experiment 2, there were two early ERP effects. When primed with gesture, incongruent primes produced a larger P1, t (17) = 3.75, p = 0.002, and P2, t (17) = 3.02, p = 0.008, to the target words than the congruent condition in the grand-averaged ERPs (reflecting early perceptual and attentional processes). However, there were no significant differences between congruent and incongruent conditions when primed with action. Discussion: The incongruency effect replicates and extends previous work by Kelly et al. (2010) by showing not only a bi-directional influence of gesture and speech, but also of action and speech. In addition, the results show that while actions are easier to process than gestures (Exp. 1), gestures may be more tightly tied to the processing of accompanying speech (Exps. 1 & 2). These results suggest that even though gestures are perceptually less informative than actions, they may be treated as communicatively more informative in relation to the accompanying speech. In this way, the two types of visual information might have different status in language comprehension. -
Peeters, D., Chu, M., Holler, J., Hagoort, P., & Ozyurek, A. (2014). Behavioral and neurophysiological correlates of communicative intent in the production of pointing gestures. Poster presented at the Annual Meeting of the Society for the Neurobiology of Language [SNL2014], Amsterdam, the Netherlands.
-
Peeters, D., Chu, M., Holler, J., Hagoort, P., & Ozyurek, A. (2014). Behavioral and neurophysiological correlates of communicative intent in the production of pointing gestures. Talk presented at the 6th Conference of the International Society for Gesture Studies (ISGS6). San Diego, CA, USA. 2014-07-08 - 2014-07-11.
-
Peeters, D., Azar, Z., & Ozyurek, A. (2014). The interplay between joint attention, physical proximity, and pointing gesture in demonstrative choice. Talk presented at the 36th Annual Meeting of the Cognitive Science Society (CogSci2014). Québec City, Canada. 2014-07-23 - 2014-07-26.
-
Schubotz, L., Holler, J., & Ozyurek, A. (2014). The impact of age and mutually shared knowledge on multi-modal utterance design. Poster presented at the 6th International Society for Gesture Studies Congress, San Diego, California, USA.
Abstract
Previous work suggests that the communicative behavior of older adults differs systematically from that of younger adults. For instance, older adults produce significantly fewer representational gestures than younger adults in monologue description tasks (Cohen & Borsoi, 1996; Feyereisen & Havard, 1999). In addition, older adults seem to have more difficulty than younger adults in establishing common ground (i.e. knowledge, assumptions, and beliefs mutually shared between a speaker and an addressee, Clark, 1996) in speech in a referential communication paradigm (Horton & Spieler, 2007). Here we investigated whether older adults take such common ground into account when designing multi-modal utterances for an addressee. The present experiment compared the speech and co-speech gesture production of two age groups (young: 20-30 years, old: 65-75 years) in an interactive setting, manipulating the amount of common ground between participants. Thirty-two pairs of naïve participants (16 young, 16 old, same-age pairs only) took part in the experiment. One of the participants (the speaker) narrated short cartoon stories to the other participant (the addressee) (task 1) and gave instructions on how to assemble a 3D model from wooden building blocks (task 2). In both tasks, we varied the amount of information mutually shared between the two participants (common ground manipulation). Additionally, we also obtained a range of cognitive measures from the speaker: verbal working memory (operation span task), visual working memory (visual patterns test and Corsi block test), processing speed and executive functioning (trail making test parts A + B) and a semantic fluency measure (animal naming task). Preliminary data analysis of about half the final sample suggests that overall, speakers use fewer words per narration/instruction when there is shared knowledge with the addressee, in line with previous findings (e.g. Clark & Wilkes-Gibbs, 1986). This effect is larger for young than for old adults, potentially indicating that older adults have more difficulties taking common ground into account when formulating utterances. Further, representational co-speech gestures were produced at the same rate by both age groups regardless of common ground condition in the narration task (in line with Campisi & Özyürek, 2013). In the building block task, however, the trend for the young adults is to gesture at a higher rate in the common ground condition, suggesting that they rely more on the visual modality here (cf. Holler & Wilkin, 2009). The same trend could not be found for the old adults. Within the next three months, we will extend our analysis a) by taking a wider range of gesture types (interactive gestures, beats) into account and b) by looking at qualitative features of speech (information content) and co-speech gestures (size, shape, timing). Finally, we will correlate the resulting data with the data from the cognitive tests. This study will contribute to a better understanding of the communicative strategies of a growing aging population as well as to the body of research on co-speech gesture use in addressee design. It also addresses the relationship between cognitive abilities on the one hand and co-speech gesture production on the other hand, potentially informing existing models of co-speech gesture production. -
Holler, J., Schubotz, L., Kelly, S., Hagoort, P., & Ozyurek, A. (2013). Multi-modal language comprehension as a joint activity: The influence of eye gaze on the processing of speech and co-speech gesture in multi-party communication. Talk presented at the 5th Joint Action Meeting. Berlin. 2013-07-26 - 2013-07-29.
Abstract
Traditionally, language comprehension has been studied as a solitary and unimodal activity. Here, we investigate language comprehension as a joint activity, i.e., in a dynamic social context involving multiple participants in different roles with different perspectives, while taking into account the multimodal nature of face-to-face communication. We simulated a triadic communication context involving a speaker alternating her gaze between two different recipients, conveying information not only via speech but via gesture as well. Participants thus viewed video-recorded speech-only or speech+gesture utterances referencing objects (e.g., “he likes the laptop” + TYPING-ON-LAPTOP gesture) when being addressed (direct gaze) or unaddressed (averted gaze). The video clips were followed by two object images (laptop, towel). Participants’ task was to choose the object that matched the speaker’s message (i.e., laptop). Unaddressed recipients responded significantly slower than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped them up to levels identical to those of addressees. Thus, when speech processing suffers due to being unaddressed, gestures become more prominent and boost comprehension of a speaker’s spoken message. Our findings illuminate how participants process multimodal language and how this process is influenced by eye gaze, an important social cue facilitating coordination in the joint activity of conversation. -
Holler, J., Schubotz, L., Kelly, S., Schuetze, M., Hagoort, P., & Ozyurek, A. (2013). Here's not looking at you, kid! Unaddressed recipients benefit from co-speech gestures when speech processing suffers. Poster presented at the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013), Berlin, Germany.
-
Holler, J., Kelly, S., Hagoort, P., Schubotz, L., & Ozyurek, A. (2013). Speakers' social eye gaze modulates addressed and unaddressed recipients' comprehension of gesture and speech in multi-party communication. Talk presented at the 5th Biennial Conference of Experimental Pragmatics (XPRAG 2013). Utrecht, The Netherlands. 2013-09-04 - 2013-09-06.
-
Ortega, G., & Ozyurek, A. (2013). Gesture-sign interface in hearing non-signers' first exposure to sign. Talk presented at the Tilburg Gesture Research Meeting [TiGeR 2013]. Tilburg, the Netherlands. 2013-06-19 - 2013-06-21.
Abstract
Natural sign languages and gestures are complex communicative systems that allow the incorporation of features of a referent into their structure. They differ, however, in that signs are more conventionalised because they consist of meaningless phonological parameters. There is some evidence that, despite finding iconic signs more memorable, non-signers can have more difficulty articulating their exact phonological components. In the present study, hearing non-signers took part in a sign repetition task in which they had to imitate as accurately as possible a set of iconic and arbitrary signs. Their renditions showed that iconic signs were articulated significantly less accurately than arbitrary signs. Participants were recalled six months later to take part in a sign generation task. In this task, participants were shown the English translation of the iconic signs they had imitated six months prior. For each word, participants were asked to generate a sign (i.e., an iconic gesture). The handshapes produced in the sign repetition and sign generation tasks were compared to detect instances in which both renditions presented the same configuration. There was a significant correlation between articulation accuracy in the sign repetition task and handshape overlap. These results suggest some form of gestural interference in the production of iconic signs by hearing non-signers. We also suggest that in some instances non-signers may deploy their own conventionalised gestures when producing some iconic signs. These findings are interpreted as evidence that non-signers process iconic signs as gestures and that, in production, only when sign and gesture have overlapping features will they be capable of producing the phonological components of signs accurately. -
Peeters, D., Chu, M., Holler, J., Ozyurek, A., & Hagoort, P. (2013). Getting to the point: The influence of communicative intent on the form of pointing gestures. Talk presented at the 35th Annual Meeting of the Cognitive Science Society (CogSci 2013). Berlin, Germany. 2013-08-01 - 2013-08-03.
-
Peeters, D., Chu, M., Holler, J., Ozyurek, A., & Hagoort, P. (2013). The influence of communicative intent on the form of pointing gestures. Poster presented at the Fifth Joint Action Meeting (JAM5), Berlin, Germany.
-
Holler, J., Kelly, S., Hagoort, P., & Ozyurek, A. (2012). Overhearing gesture: The influence of eye gaze direction on the comprehension of iconic gestures. Poster presented at the Social Cognition, Engagement, and the Second-Person-Perspective Conference, Cologne, Germany.
-
Holler, J., Kelly, S., Hagoort, P., & Ozyurek, A. (2012). Overhearing gesture: The influence of eye gaze direction on the comprehension of iconic gestures. Poster presented at the EPS workshop 'What if... the study of language started from the investigation of signed, rather than spoken language?', London, UK.
-
Holler, J., Kelly, S., Hagoort, P., & Ozyurek, A. (2012). The influence of gaze direction on the comprehension of speech and gesture in triadic communication. Talk presented at the 18th Annual Conference on Architectures and Mechanisms for Language Processing (AMLaP 2012). Riva del Garda, Italy. 2012-09-06 - 2012-09-08.
Abstract
Human face-to-face communication is a multi-modal activity. Recent research has shown that, during comprehension, recipients integrate information from speech with that contained in co-speech gestures (e.g., Kelly et al., 2010). The current studies take this research one step further by investigating the influence of another modality, namely eye gaze, on speech and gesture comprehension, to advance our understanding of language processing in more situated contexts. In spite of the large body of literature on processing of eye gaze, very few studies have investigated its processing in the context of communication (but see, e.g., Staudte & Crocker, 2011 for an exception). In two studies we simulated a triadic communication context in which a speaker alternated their gaze between our participant and another (alleged) participant. Participants thus viewed speech-only or speech + gesture utterances either in the role of addressee (direct gaze) or in the role of unaddressed recipient (averted gaze). In Study 1, participants (N = 32) viewed video-clips of a speaker producing speech-only (e.g. “she trained the horse”) or speech+gesture utterances conveying complementary information (e.g. “she trained the horse”+WHIPPING gesture). Participants were asked to judge whether a word displayed on screen after each video-clip matched what the speaker said or not. In half of the cases, the word matched a previously uttered word, requiring a “yes” answer. In all other cases, the word matched the meaning of the gesture the actor had performed, thus requiring a ‘no’ answer. -
Holler, J., Kelly, S., Hagoort, P., & Ozyurek, A. (2012). When gestures catch the eye: The influence of gaze direction on co-speech gesture comprehension in triadic communication. Talk presented at the 5th Conference of the International Society for Gesture Studies (ISGS 5). Lund, Sweden. 2012-07-24 - 2012-07-27.
-
Holler, J., Kelly, S., Hagoort, P., & Ozyurek, A. (2012). When gestures catch the eye: The influence of gaze direction on co-speech gesture comprehension in triadic communication. Talk presented at the 34th Annual Meeting of the Cognitive Science Society (CogSci 2012). Sapporo, Japan. 2012-08-01 - 2012-08-04.
-
Kelly, S., Ozyurek, A., Healey, M., & Holler, J. (2012). The communicative influence of gesture and action during speech comprehension: Gestures have the upper hand. Talk presented at the Acoustics 2012 Hong Kong Conference and Exhibition. Hong Kong. 2012-05-13 - 2012-05-18.
-
Kokal, I., Holler, J., Ozyurek, A., Kelly, S., Toni, I., & Hagoort, P. (2012). Eye'm talking to you: Speakers' gaze direction modulates the integration of speech and iconic gestures in the right MTG. Poster presented at the 4th Annual Neurobiology of Language Conference (NLC 2012), San Sebastian, Spain.
-
Kokal, I., Holler, J., Ozyurek, A., Kelly, S., Toni, I., & Hagoort, P. (2012). Eye'm talking to you: The role of the Middle Temporal Gyrus in the integration of gaze, gesture and speech. Poster presented at the Social Cognition, Engagement, and the Second-Person-Perspective Conference, Cologne, Germany.
-
Peeters, D., Ozyurek, A., & Hagoort, P. (2012). Behavioral and neural correlates of deictic reference. Poster presented at the 18th Annual Conference on Architectures and Mechanisms for Language Processing [AMLaP 2012], Riva del Garda, Italy.
-
Peeters, D., Ozyurek, A., & Hagoort, P. (2012). The comprehension of exophoric reference: An ERP study. Poster presented at the Fourth Annual Neurobiology of Language Conference (NLC), San Sebastian, Spain.
Abstract
An important property of language is that it can be used exophorically, for instance in referring to entities in the extra-linguistic context of a conversation using demonstratives such as “this” and “that”. Despite large-scale cross-linguistic descriptions of demonstrative systems, the mechanisms underlying the comprehension of such referential acts are poorly understood. Therefore, we investigated the neural mechanisms underlying demonstrative comprehension in situated contexts. Twenty-three participants were presented on a computer screen with pictures containing a speaker and two similar objects. One of the objects was close to the speaker, whereas the other was either distal from the speaker but optically close to the participant (“sagittal orientation”), or distal from both (“lateral orientation”). The speaker pointed to one object, and participants heard sentences spoken by the speaker containing a proximal (“this”) or distal (“that”) demonstrative, and a correct or incorrect noun-label (i.e., a semantic violation). EEG was recorded continuously and time-locked to the onset of demonstratives and nouns. Semantic violations on the noun-label yielded a significant, widespread N400 effect, regardless of the objects’ orientation. Comparing the comprehension of proximal to distal demonstratives in the sagittal orientation yielded a similar N400 effect, both for the close and the far referent. Interestingly, no demonstrative effect was found when objects were oriented laterally. Our findings suggest a similar time-course for demonstrative and noun-label processing. However, the comprehension of demonstratives depends on the spatial orientation of potential referents, whereas noun-label comprehension does not. These findings reveal new insights into the mechanisms underlying everyday demonstrative comprehension. -
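The abstract above reports mean-amplitude effects in an N400 time window, time-locked to demonstrative and noun onset. Purely as an illustrative sketch of that kind of windowed-amplitude comparison, and not the authors' actual pipeline, the Python snippet below contrasts two conditions on simulated single-trial data; the 300-500 ms window, sampling rate, and all variable names are assumptions.

```python
# Illustrative sketch only: comparing mean EEG amplitude in an assumed
# N400 window (300-500 ms) between two conditions on simulated data.
# Nothing here reproduces the authors' preprocessing or statistics.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials, n_samples, sfreq = 40, 600, 1000   # 600 ms epochs at 1000 Hz (assumed)
times = np.arange(n_samples) / sfreq         # seconds from stimulus onset

# Simulated single-trial amplitudes (microvolts) for two conditions,
# with a slightly more negative deflection in the "violation" condition.
control   = rng.normal(0.0, 2.0, size=(n_trials, n_samples))
violation = rng.normal(0.0, 2.0, size=(n_trials, n_samples))
violation[:, (times >= 0.3) & (times <= 0.5)] -= 1.5

# Mean amplitude per trial inside the assumed N400 window.
window = (times >= 0.3) & (times <= 0.5)
control_mean   = control[:, window].mean(axis=1)
violation_mean = violation[:, window].mean(axis=1)

# Paired comparison across trials; a real ERP analysis would average
# within participants and test across participants instead.
t_val, p_val = stats.ttest_rel(violation_mean, control_mean)
print(f"t = {t_val:.2f}, p = {p_val:.4f}")
```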
Peeters, D., & Ozyurek, A. (2012). The role of contextual factors in the use of demonstratives: Differences between Turkish and Dutch. Talk presented at the 6th Lodz Symposium: New Developments in Linguistic Pragmatics. Lodz, Poland. 2012-05-26 - 2012-05-28.
Abstract
An important feature of language is that it enables human beings to refer to entities, actions and events in the external world. In everyday interaction, one can refer to concrete entities in the extra-linguistic physical environment of a conversation by using demonstratives such as this and that. Traditionally, the choice of which demonstrative to use has been explained in terms of the distance of the referent [1]. In contrast, recent observational studies in different languages have suggested that factors such as joint attention also play an important role in demonstrative choice [2][3]. These claims have never been tested in a controlled setting and across different languages. Therefore, we tested demonstrative choice in a controlled elicitation task in two languages that previously have only been studied observationally: Turkish and Dutch. In our study, twenty-nine Turkish and twenty-four Dutch participants were presented with pictures including a speaker, an addressee and an object (the referent). They were asked which demonstrative they would use in the depicted situations. Besides the distance of the referent, we manipulated the addressee's focus of visual attention, the presence of a pointing gesture, and the sentence type. A repeated measures analysis of variance showed that, in addition to the distance of the referent, the focus of attention of the addressee on the referent and the type of sentence in which a demonstrative was used influenced demonstrative choice in Turkish. In Dutch, only the distance of the referent and the sentence type influenced demonstrative choice. Our cross-linguistic findings show that in different languages, people take into account both similar and different aspects of triadic situations to select a demonstrative. These findings reject descriptions of demonstrative systems that explain demonstrative choice in terms of one single variable, such as distance. The controlled study of referring acts in triadic situations is a valuable extension to observational research, in that it gives us the possibility to look more specifically into the interplay between language, attention, and other contextual factors influencing how people refer to entities in the world. References: [1] Levinson, S. C. (1983). Pragmatics. Cambridge: Cambridge University Press. [2] Diessel, H. (2006). Demonstratives, joint attention and the emergence of grammar. Cognitive Linguistics, 17(4), 463-489. [3] Küntay, A. C., & Özyürek, A. (2006). Learning to use demonstratives in conversation: What do language specific strategies in Turkish reveal? Journal of Child Language, 33, 303-320. -
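The abstract above describes a repeated measures analysis of variance on demonstrative choice with referent distance, addressee attention, pointing, and sentence type as factors. As a hedged illustration only, and not the authors' analysis or data, the sketch below shows how such a design could be analysed in Python with statsmodels' AnovaRM on simulated per-participant proportions of distal-demonstrative use; the two factors shown, their levels, the sample size, and the simulated effect sizes are all assumptions made for the example.

```python
# Illustrative sketch only: a repeated measures ANOVA on simulated
# proportions of distal-demonstrative use, with two within-participant
# factors (referent distance, addressee attention). All data invented.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
participants = range(1, 25)              # 24 simulated participants
distance_levels  = ["near", "far"]
attention_levels = ["on_referent", "elsewhere"]

rows = []
for p in participants:
    for d in distance_levels:
        for a in attention_levels:
            base = 0.7 if d == "far" else 0.3          # assumed distance effect
            base += 0.1 if a == "elsewhere" else 0.0   # assumed attention effect
            rows.append({
                "participant": p,
                "distance": d,
                "attention": a,
                "prop_distal": float(np.clip(base + rng.normal(0, 0.1), 0, 1)),
            })
df = pd.DataFrame(rows)

# Two-way repeated measures ANOVA with participant as the repeated unit
# (exactly one observation per participant per factor combination).
res = AnovaRM(df, depvar="prop_distal", subject="participant",
              within=["distance", "attention"]).fit()
print(res)
```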
Peeters, D., & Ozyurek, A. (2012). The role of contextual factors in the use of demonstratives: Differences between Turkish and Dutch. Poster presented at The IMPRS Relations in Relativity Workshop, Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands.
-
Ozyurek, A. (2011). Language in our hands: The role of the body in language, cognition and communication [Inaugural lecture]. Talk presented at The Radboud University Nijmegen. Nijmegen, The Netherlands. 2011-05-26.
-
Peeters, D., & Ozyurek, A. (2011). Demonstrating the importance of joint attention in the use of demonstratives: The case of Turkish. Poster presented at The 4th Biennial Conference of Experimental Pragmatics [XPRAG 2011], Barcelona, Spain.
-
Nyst, V., De Vos, C., Perniss, P. M., & Ozyurek, A. (2007). The typology of space in sign languages: Developing a descriptive format for cross-linguistic comparison. Talk presented at Cross-Linguistic Research on Sign Languages 2. Max Planck Institute for Psycholinguistics, Nijmegen. 2007-04-13.
-
Brown, A., Ozyurek, A., Allen, S., Kita, S., Ishizuka, T., & Furman, R. (2004). Does event structure influence children's motion event expressions. Poster presented at 29th Boston University Conference on Language Development, Boston.
Abstract
This study focuses on understanding of event structure, in particular the relationship between Manner and Path. Narratives were elicited from twenty 3-year-olds and twenty adults using 6 animated motion events that were divided into two groups based on Goldberg's (1997) distinction between causal (Manner-inherent; e.g. roll down) and non-causal (Manner-incidental; e.g. spin while going up) relationships between Manner and Path. The data revealed that adults and children are sensitive to differences between inherent and incidental Manner. Adults significantly reduced use of canonical syntactic constructions for Manner-incidental events, employing other constructions. Children, however, while significantly reducing use of canonical syntactic constructions for Manner-incidental events, did not exploit alternative constructions. Instead, they omitted Manner from their speech altogether. A follow-up lexical task showed that children had knowledge of all omitted Manners. Given that this strategic omission of Manner is not lexically motivated, the results are discussed in relation to implications for pragmatics and memory load.