Drijvers, L., Small, S. L., & Skipper, J. I. (2025). Language is widely distributed throughout the brain. Nature Reviews Neuroscience, 26: 189. doi:10.1038/s41583-024-00903-0.
-
Emmendorfer, A. K., & Holler, J. (2025). Facial signals shape predictions about the nature of upcoming conversational responses. Scientific Reports, 15: 1381. doi:10.1038/s41598-025-85192-y.
Abstract
Increasing evidence suggests that interlocutors use visual communicative signals to form predictions about unfolding utterances, but there is little data on the predictive potential of facial signals in conversation. In an online experiment with virtual agents, we examine whether facial signals produced by an addressee may allow speakers to anticipate the response to a question before it is given. Participants (n = 80) viewed videos of short conversation fragments between two virtual humans. Each fragment ended with the Questioner asking a question, followed by a pause during which the Responder looked either straight at the Questioner (baseline), or averted their gaze, or accompanied the straight gaze with one of the following facial signals: brow raise, brow frown, nose wrinkle, smile, squint, mouth corner pulled back (dimpler). Participants then indicated on a 6-point scale whether they expected a “yes” or “no” response. Analyses revealed that all signals received different ratings relative to the baseline: brow raises, dimplers, and smiles were associated with more positive responses, gaze aversions, brow frowns, nose wrinkles, and squints with more negative responses. Our findings show that interlocutors may form strong associations between facial signals and upcoming responses to questions, highlighting their predictive potential in face-to-face conversation.
Additional information
supplementary materials -
Esmer, Ş. C., Turan, E., Karadöller, D. Z., & Göksun, T. (2025). Sources of variation in preschoolers’ relational reasoning: The interaction between language use and working memory. Journal of Experimental Child Psychology, 252: 106149. doi:10.1016/j.jecp.2024.106149.
Abstract
Previous research has suggested the importance of relational language and working memory in children’s relational reasoning. The tendency to use language (e.g., using more relational than object-focused language, prioritizing focal objects over background in linguistic descriptions) could reflect children’s biases toward the relational versus object-based solutions in a relational match-to-sample (RMTS) task. In the absence of any apparent object match as a foil option, object-focused children might rely on other cognitive mechanisms (i.e., working memory) to choose a relational match in the RMTS task. The current study examined the interactive roles of language- and working memory-related sources of variation in Turkish-learning preschoolers’ relational reasoning. We collected data from 4- and 5-year-olds (N = 41) via Zoom in the RMTS task, a scene description task, and a backward word span task. Generalized binomial mixed effects models revealed that children who used more relational language and background-focused scene descriptions performed worse in the relational reasoning task. Furthermore, children with less frequent relational language use and focal object descriptions of the scenes benefited more from working memory to succeed in the relational reasoning task. These results suggest additional working memory demands for object-focused children to choose relational matches in the RMTS task, highlighting the importance of examining the interactive effects of different cognitive mechanisms on relational reasoning.
Additional information
supplementary material -
Göksun, T., Aktan-Erciyes, A., Karadöller, D. Z., & Demir-Lira, Ö. E. (2025). Multifaceted nature of early vocabulary development: Connecting child characteristics with parental input types. Child Development Perspectives, 19(1), 30-37. doi:10.1111/cdep.12524.
Abstract
Children need to learn the demands of their native language in the early vocabulary development phase. In this dynamic process, parental multimodal input may shape neurodevelopmental trajectories while also being tailored by child-related factors. Moving beyond typically characterized group profiles, in this article, we synthesize growing evidence on the effects of parental multimodal input (amount, quality, or absence), domain-specific input (space and math), and language-specific input (causal verbs and sound symbols) on preterm, full-term, and deaf children's early vocabulary development, focusing primarily on research with children learning Turkish and Turkish Sign Language. We advocate for a theoretical perspective, integrating neonatal characteristics and parental input, and acknowledging the unique constraints of languages. -
Karadöller, D. Z., Demir-Lira, Ö. E., & Göksun, T. (2025). Full-term children with lower vocabulary scores receive more multimodal math input than preterm children. Journal of Cognition and Development. Advance online publication. doi:10.1080/15248372.2025.2470245.
Abstract
One of the earliest sources of mathematical input arises in dyadic parent–child interactions. However, the emphasis has been on parental input only in speech and how input varies across different environmental and child-specific factors remains largely unexplored. Here, we investigated the relationship among parental math input modality and type, children’s gestational status (being preterm vs. full-term born), and vocabulary development. Using book-reading as a medium for parental math input in dyadic interaction, we coded specific math input elicited by Turkish-speaking parents and their 26-month-old children (N = 58, 24 preterms) for speech-only and multimodal (speech and gestures combined) input. Results showed that multimodal math input, as opposed to speech-only math input, was uniquely associated with gestational status, expressive vocabulary, and the interaction between the two. Full-term children with lower expressive vocabulary scores received more multimodal input compared to their preterm peers. However, there was no association between expressive vocabulary and multimodal math input for preterm children. Moreover, cardinality was the most frequent type for both speech-only and multimodal input. These findings suggest that the specific type of multimodal math input can be produced as a function of children’s gestational status and vocabulary development. -
Lokhesh, N. N., Swaminathan, K., Shravan, G., Menon, D., Mishra, S., Nandanwar, A., & Mishra, C. (2025). Welcome to the library: Integrating social robots in Indian libraries. In O. Palinko, L. Bodenhagen, J.-J. Cabibihan, K. Fischer, S. Šabanović, K. Winkle, L. Behera, S. S. Ge, D. Chrysostomou, W. Jiang, & H. He (Eds.), Social Robotics: 16th International Conference, ICSR + AI 2024, Odense, Denmark, October 23–26, 2024, Proceedings (pp. 239-246). Singapore: Springer. doi:10.1007/978-981-96-3525-2_20.
Abstract
Libraries are very often considered the hallway to developing knowledge. However, the lack of adequate staff within Indian libraries makes catering to visitors’ needs difficult. Previous systems that have sought to address libraries’ needs through automation have mostly been limited to storage and fetching, while lacking in their interaction aspect. We propose to address this issue by incorporating social robots within Indian libraries that can communicate with visitors and address their queries in a multi-modal fashion, attempting to make the experience more natural and appealing while helping reduce the burden on librarians. In this paper, we propose and deploy a Furhat robot as a robot librarian by programming it with certain core librarian functionalities. We evaluate our system with a physical robot librarian (N = 26). The results show that the robot librarian was found to be very informative and overall left participants with a positive impression and preference. -
Mishra, C., Skantze, G., Hagoort, P., & Verdonschot, R. G. (2025). Perception of emotions in human and robot faces: Is the eye region enough? In O. Palinko, L. Bodenhagen, J.-J. Cabibihan, K. Fischer, S. Šabanović, K. Winkle, L. Behera, S. S. Ge, D. Chrysostomou, W. Jiang, & H. He (Eds.), Social Robotics: 16th International Conference, ICSR + AI 2024, Odense, Denmark, October 23–26, 2024, Proceedings (pp. 290-303). Singapore: Springer.
Abstract
The increased interest in developing next-gen social robots has raised questions about the factors affecting the perception of robot emotions. This study investigates the impact of robot appearances (human-like, mechanical) and face regions (full-face, eye-region) on human perception of robot emotions. A between-subjects user study (N = 305) was conducted where participants were asked to identify the emotions being displayed in videos of robot faces, as well as a human baseline. Our findings reveal three important insights for effective social robot face design in Human-Robot Interaction (HRI): Firstly, robots equipped with a back-projected, fully animated face – regardless of whether they are more human-like or more mechanical-looking – demonstrate a capacity for emotional expression comparable to that of humans. Secondly, the recognition accuracy of emotional expressions in both humans and robots declines when only the eye region is visible. Lastly, within the constraint of only the eye region being visible, robots with more human-like features significantly enhance emotion recognition. -
Özer, D., Özyürek, A., & Göksun, T. (2025). Spatial working memory is critical for gesture processing: Evidence from gestures with varying semantic links to speech. Psychonomic Bulletin & Review. Advance online publication. doi:10.3758/s13423-025-02642-4.
Abstract
Gestures express redundant or complementary information to the speech they accompany by depicting visual and spatial features of referents. In doing so, they recruit both spatial and verbal cognitive resources that underpin the processing of visual semantic information and its integration with speech. The relation between spatial and verbal skills and gesture comprehension, where gestures may serve different roles in relation to speech, is yet to be explored. This study examined the role of spatial and verbal skills in processing gestures that expressed redundant or complementary information to speech during the comprehension of spatial relations between objects. Turkish-speaking adults (N = 74) watched videos describing the spatial location of objects that involved perspective-taking (left-right) or not (on-under) with speech and gesture. Gestures either conveyed redundant information to speech (e.g., saying and gesturing “left”) or complemented the accompanying demonstrative in speech (e.g., saying “here,” gesturing “left”). We also measured participants’ spatial (the Corsi block span and the mental rotation tasks) and verbal skills (the digit span task). Our results revealed nuanced interactions between these skills and spatial language comprehension, depending on the modality in which the information was expressed. One insight emerged prominently: spatial skills, particularly spatial working memory capacity, were related to enhanced comprehension of visual semantic information conveyed through gestures, especially when this information was not present in the accompanying speech. This study highlights the critical role of spatial working memory in gesture processing and underscores the importance of examining the interplay among cognitive and contextual factors to understand the complex dynamics of multimodal language. -
Rubio-Fernandez, P. (2025). First acquiring articles in a second language: A new approach to the study of language and social cognition. Lingua, 313: 103851. doi:10.1016/j.lingua.2024.103851.
Abstract
Pragmatic phenomena are characterized by extreme variability, which makes it difficult to draw sound generalizations about the role of social cognition in pragmatic language by and large. I introduce cultural evolutionary pragmatics as a new framework for the study of the interdependence between language and social cognition, and point at the study of common-ground management across languages and ages as a way to test the reliance of pragmatic language on social cognition. I illustrate this new research line with three experiments on article use by second language speakers, whose mother tongue lacks articles. These L2 speakers are known to find article use challenging and it is often argued that their difficulties stem from articles being pragmatically redundant. Contrary to this view, the results of this exploratory study support the view that proficient article use requires automatizing basic socio-cognitive processes, offering a window into the interdependence between language and social cognition. -
Rubio-Fernandez, P., Berke, M. D., & Jara-Ettinger, J. (2025). Tracking minds in communication. Trends in Cognitive Sciences, 29(3), 269-281. doi:10.1016/j.tics.2024.11.005.
Abstract
How might social cognition help us communicate through language? At what levels does this interaction occur? In classical views, social cognition is independent of language, and integrating the two can be slow, effortful, and error-prone. But new research into word level processes reveals that communication is brimming with social micro-processes that happen in real time, guiding even the simplest choices like how we use adjectives, articles, and demonstratives. We interpret these findings in the context of advances in theoretical models of social cognition and propose a Communicative Mind-Tracking framework, where social micro-processes aren’t a secondary process in how we use language—they are fundamental to how communication works. -
Soberanes, M., Pérez-Ramírez, C. A., & Assaneo, M. F. (2025). Insights into the effect of general attentional state, coarticulation, and primed speech rate in phoneme production time. Journal of Speech, Language, and Hearing Research. Advance online publication. doi:10.1044/2025_JSLHR-24-00595.
Abstract
Purpose:
This study aimed to identify how a set of predefined factors modulates phoneme articulation time within a speaker.
Method:
We used a custom in-lab system that records lip muscle activity through electromyography signals, aligned with the produced speech, to measure phoneme articulation time. Twenty Spanish-speaking participants (12 females) were evaluated while producing sequences of a consonant–vowel syllable, with each sequence consisting of repeated articulations of either /pa/ or /pu/. Before starting the sequences, participants underwent a priming step with either a fast or slow speech rate. Additionally, the general attentional state level was assessed at the beginning, middle, and end of the protocol. To analyze the variability in the duration of /p/ and vowel articulation, we fitted individual linear mixed-models considering three factors: general attentional state level, priming rate, and coarticulation effects (for /p/, i.e., followed by /a/ or /u/) or phoneme identity (for vowels, i.e., being /a/ or /u/).
Results:
We found that the level of general attentional state positively correlated with production time for both the consonant /p/ and the vowels. Additionally, /p/ production was influenced by the nature of the following vowel (i.e., coarticulation effects), while vowel production time was affected by the primed speech rate.
Conclusions:
Phoneme duration appears to be influenced by both stable, speaker-specific characteristics (idiosyncratic traits) and internal, state-dependent factors related to the speaker's condition at the time of speech production. While some factors affect both consonants and vowels, others specifically modify only one of these types.
Additional information
supplemental material -
Tilston, O., Holler, J., & Bangerter, A. (2025). Opening social interactions: The coordination of approach, gaze, speech and handshakes during greetings. Cognitive Science, 49(2): e70049. doi:10.1111/cogs.70049.
Abstract
Despite the importance of greetings for opening social interactions, their multimodal coordination processes remain poorly understood. We used a naturalistic, lab-based setup where pairs of unacquainted participants approached and greeted each other while unaware their greeting behavior was studied. We measured the prevalence and time course of multimodal behaviors potentially culminating in a handshake, including motor behaviors (e.g., walking, standing up, hand movements like raise, grasp, and retraction), gaze patterns (using eye tracking glasses), and speech (close and distant verbal salutations). We further manipulated the visibility of partners’ eyes to test its effect on gaze. Our findings reveal that gaze to a partner's face increases over the course of a greeting, but is partly averted during approach and is influenced by the visibility of partners’ eyes. Gaze helps coordinate handshakes, by signaling intent and guiding the grasp. The timing of adjacency pairs in verbal salutations is comparable to the precision of floor transitions in the main body of conversations, and varies according to greeting phase, with distant salutation pair parts featuring more gaps and close salutation pair parts featuring more overlap. Gender composition and a range of multimodal behaviors affect whether pairs chose to shake hands or not. These findings fill several gaps in our understanding of greetings and provide avenues for future research, including advancements in social robotics and human−robot interaction. -
Trujillo, J. P., & Holler, J. (2025). Multimodal information density is highest in question beginnings, and early entropy is associated with fewer but longer visual signals. Discourse Processes. Advance online publication. doi:10.1080/0163853X.2024.2413314.
Abstract
When engaged in spoken conversation, speakers convey meaning using both speech and visual signals, such as facial expressions and manual gestures. An important question is how information is distributed in utterances during face-to-face interaction when information from visual signals is also present. In a corpus of casual Dutch face-to-face conversations, we focus on spoken questions in particular because they occur frequently, thus constituting core building blocks of conversation. We quantified information density (i.e. lexical entropy and surprisal) and the number and relative duration of facial and manual signals. We tested whether lexical information density or the number of visual signals differed between the first and last halves of questions, as well as whether the number of visual signals occurring in the less-predictable portion of a question was associated with the lexical information density of the same portion of the question in a systematic manner. We found that information density, as well as number of visual signals, were higher in the first half of questions, and specifically lexical entropy was associated with fewer, but longer visual signals. The multimodal front-loading of questions and the complementary distribution of visual signals and high entropy words in Dutch casual face-to-face conversations may have implications for the parallel processes of utterance comprehension and response planning during turn-taking.
Additional information
supplemental material -
Trujillo, J. P., Dyer, R. M. K., & Holler, J. (2025). Dyadic differences in empathy scores are associated with kinematic similarity during conversational question-answer pairs. Discourse Processes. Advance online publication. doi:10.1080/0163853X.2025.2467605.
Abstract
During conversation, speakers coordinate and synergize their behaviors at multiple levels, and in different ways. The extent to which individuals converge or diverge in their behaviors during interaction may relate to interpersonal differences relevant to social interaction, such as empathy as measured by the empathy quotient (EQ). An association between interpersonal difference in empathy and interpersonal entrainment could help to throw light on how interlocutor characteristics influence interpersonal entrainment. We investigated this possibility in a corpus of unconstrained conversation between dyads. We used dynamic time warping to quantify entrainment between interlocutors of head motion, hand motion, and maximum speech f0 during question–response sequences. We additionally calculated interlocutor differences in EQ scores. We found that, for both head and hand motion, greater difference in EQ was associated with higher entrainment. Thus, we consider that people who are dissimilar in EQ may need to “ground” their interaction with low-level movement entrainment. There was no significant relationship between f0 entrainment and EQ score differences. -
Ünal, E., Kırbaşoğlu, K., Karadöller, D. Z., Sumer, B., & Özyürek, A. (2025). Gesture reduces mapping difficulties in the development of spatial language depending on the complexity of spatial relations. Cognitive Science, 49(2): e70046. doi:10.1111/cogs.70046.
Abstract
In spoken languages, children acquire locative terms in a cross-linguistically stable order. Terms similar in meaning to in and on emerge earlier than those similar to front and behind, followed by left and right. This order has been attributed to the complexity of the relations expressed by different locative terms. An additional possibility is that children may be delayed in expressing certain spatial meanings partly due to difficulties in discovering the mappings between locative terms in speech and the spatial relations they express. We investigate cognitive and mapping difficulties in the domain of spatial language by comparing how children map spatial meanings onto speech versus visually motivated forms in co-speech gesture across different spatial relations. Twenty-four 8-year-old and 23 adult native Turkish speakers described four-picture displays where the target picture depicted in-on, front-behind, or left-right relations between objects. As the complexity of spatial relations increased, children were more likely to rely on gestures as opposed to speech to informatively express the spatial relation. Adults overwhelmingly relied on speech to informatively express the spatial relation, and this did not change across the complexity of spatial relations. Nevertheless, even when spatial expressions in both speech and co-speech gesture were considered, children lagged behind adults when expressing the most complex left-right relations. These findings suggest that cognitive development and mapping difficulties introduced by the modality of expressions interact in shaping the development of spatial language.
Additional information
list of stimuli and descriptions -
Yılmaz, B., Doğan, I., Karadöller, D. Z., Demir-Lira, Ö. E., & Göksun, T. (2025). Parental attitudes and beliefs about mathematics and the use of gestures in children’s math development. Cognitive Development, 73: 101531. doi:10.1016/j.cogdev.2024.101531.
Abstract
Children vary in mathematical skills even before formal schooling. The current study investigated how parental math beliefs, parents’ math anxiety, and children's spontaneous gestures contribute to preschool-aged children’s math performance. Sixty-three Turkish-reared children (33 girls, Mage = 49.9 months, SD = 3.68) were assessed on verbal counting, cardinality, and arithmetic tasks (nonverbal and verbal). Results showed that parental math beliefs were related to children’s verbal counting, cardinality and arithmetic scores. Children whose parents have higher math beliefs along with low math anxiety scored highest in the cardinality task. Children’s gesture use was also related to lower cardinality performance and the relation between parental math beliefs and children’s performance became stronger when child gestures were absent. These findings highlight the importance of parent and child-related contributors in explaining the variability in preschool-aged children’s math skills. -
Zora, H., Kabak, B., & Hagoort, P. (2025). Relevance of prosodic focus and lexical stress for discourse comprehension in Turkish: Evidence from psychometric and electrophysiological data. Journal of Cognitive Neuroscience, 37(3), 693-736. doi:10.1162/jocn_a_02262.
Abstract
Prosody underpins various linguistic domains ranging from semantics and syntax to discourse. For instance, prosodic information in the form of lexical stress modifies meanings and, as such, syntactic contexts of words, as in Turkish kaz-má "pickaxe" (noun) versus káz-ma "do not dig" (imperative). Likewise, prosody indicates the focused constituent of an utterance, such as the noun phrase filling the wh-spot in a dialogue like What did you eat? I ate----. In the present study, we investigated the relevance of such prosodic variations for discourse comprehension in Turkish. We aimed to answer how lexical stress and prosodic focus mismatches on critical noun phrases (resulting in grammatical anomalies involving both semantics and syntax, and in discourse-level anomalies, respectively) affect the perceived correctness of an answer to a question in a given context. To that end, 80 native speakers of Turkish, 40 participating in a psychometric experiment and 40 participating in an EEG experiment, were asked to judge the acceptability of prosodic mismatches that occurred either separately or concurrently. Psychometric results indicated that lexical stress mismatch led to a lower correctness score than prosodic focus mismatch, and combined mismatch received the lowest score. Consistent with the psychometric data, EEG results revealed an N400 effect to combined mismatch, and this effect was followed by a P600 response to lexical stress mismatch. Conjointly, these results suggest that every source of prosodic information is immediately available and codetermines the interpretation of an utterance; however, semantically and syntactically relevant lexical stress information is assigned more significance by the language comprehension system compared with prosodic focus information. -
Emmorey, K., & Ozyurek, A. (2014). Language in our hands: Neural underpinnings of sign language and co-speech gesture. In M. S. Gazzaniga, & G. R. Mangun (Eds.), The cognitive neurosciences (5th ed., pp. 657-666). Cambridge, Mass: MIT Press. -
Furman, R., Kuntay, A., & Ozyurek, A. (2014). Early language-specificity of children's event encoding in speech and gesture: Evidence from caused motion in Turkish. Language, Cognition and Neuroscience, 29, 620-634. doi:10.1080/01690965.2013.824993.
Abstract
Previous research on language development shows that children are tuned early on to the language-specific semantic and syntactic encoding of events in their native language. Here we ask whether language-specificity is also evident in children's early representations in gesture accompanying speech. In a longitudinal study, we examined the spontaneous speech and cospeech gestures of eight Turkish-speaking children aged one to three and focused on their caused motion event expressions. In Turkish, unlike in English, the main semantic elements of caused motion such as Action and Path can be encoded in the verb (e.g. sok- ‘put in’) and the arguments of a verb can be easily omitted. We found that Turkish-speaking children's speech indeed displayed these language-specific features and focused on verbs to encode caused motion. More interestingly, we found that their early gestures also manifested specificity. Children used iconic cospeech gestures (from 19 months onwards) as often as pointing gestures and represented semantic elements such as Action with Figure and/or Path that reinforced or supplemented speech in language-specific ways until the age of three. In the light of previous reports on the scarcity of iconic gestures in English-speaking children's early productions, we argue that the language children learn shapes gestures and how they get integrated with speech in the first three years of life. -
Holler, J., Schubotz, L., Kelly, S., Hagoort, P., Schuetze, M., & Ozyurek, A. (2014). Social eye gaze modulates processing of speech and co-speech gesture. Cognition, 133, 692-697. doi:10.1016/j.cognition.2014.08.008.
Abstract
In human face-to-face communication, language comprehension is a multi-modal, situated activity. However, little is known about how we combine information from different modalities during comprehension, and how perceived communicative intentions, often signaled through visual signals, influence this process. We explored this question by simulating a multi-party communication context in which a speaker alternated her gaze between two recipients. Participants viewed speech-only or speech + gesture object-related messages when being addressed (direct gaze) or unaddressed (gaze averted to other participant). They were then asked to choose which of two object images matched the speaker’s preceding message. Unaddressed recipients responded significantly more slowly than addressees for speech-only utterances. However, perceiving the same speech accompanied by gestures sped unaddressed recipients up to a level identical to that of addressees. That is, when unaddressed recipients’ speech processing suffers, gestures can enhance the comprehension of a speaker’s message. We discuss our findings with respect to two hypotheses attempting to account for how social eye gaze may modulate multi-modal language comprehension. -
Ortega, G., Sumer, B., & Ozyurek, A. (2014). Type of iconicity matters: Bias for action-based signs in sign language acquisition. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 1114-1119). Austin, Tx: Cognitive Science Society.
Abstract
Early studies investigating sign language acquisition claimed that signs whose structures are motivated by the form of their referent (iconic) are not favoured in language development. However, recent work has shown that the first signs in deaf children’s lexicon are iconic. In this paper we go a step further and ask whether different types of iconicity modulate learning sign-referent links. Results from a picture description task indicate that children and adults used signs with two possible variants differentially. While children signing to adults favoured variants that map onto actions associated with a referent (action signs), adults signing to another adult produced variants that map onto objects’ perceptual features (perceptual signs). Parents interacting with children used more action variants than signers in adult-adult interactions. These results are in line with claims that language development is tightly linked to motor experience and that iconicity can be a communicative strategy in parental input. -
Ozyurek, A. (2014). Hearing and seeing meaning in speech and gesture: Insights from brain and behaviour. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 369(1651): 20130296. doi:10.1098/rstb.2013.0296.
Abstract
As we speak, we use not only the arbitrary form–meaning mappings of the speech channel but also motivated form–meaning correspondences, i.e. iconic gestures that accompany speech (e.g. inverted V-shaped hand wiggling across gesture space to demonstrate walking). This article reviews what we know about processing of semantic information from speech and iconic gestures in spoken languages during comprehension of such composite utterances. Several studies have shown that comprehension of iconic gestures involves brain activations known to be involved in semantic processing of speech: i.e. modulation of the electrophysiological recording component N400, which is sensitive to the ease of semantic integration of a word to previous context, and recruitment of the left-lateralized frontal–posterior temporal network (left inferior frontal gyrus (IFG), medial temporal gyrus (MTG) and superior temporal gyrus/sulcus (STG/S)). Furthermore, we integrate the information coming from both channels recruiting brain areas such as left IFG, posterior superior temporal sulcus (STS)/MTG and even motor cortex. Finally, this integration is flexible: the temporal synchrony between the iconic gesture and the speech segment, as well as the perceived communicative intent of the speaker, modulate the integration process. Whether these findings are special to gestures or are shared with actions or other visual accompaniments to speech (e.g. lips) or other visual symbols such as pictures are discussed, as well as the implications for a multimodal view of language. -
Peeters, D., Azar, Z., & Ozyurek, A. (2014). The interplay between joint attention, physical proximity, and pointing gesture in demonstrative choice. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 1144-1149). Austin, Tx: Cognitive Science Society. -
Sumer, B., Perniss, P., Zwitserlood, I., & Ozyurek, A. (2014). Learning to express "left-right" & "front-behind" in a sign versus spoken language. In P. Bello, M. Guarini, M. McShane, & B. Scassellati (Eds.), Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci 2014) (pp. 1550-1555). Austin, Tx: Cognitive Science Society.
Abstract
Developmental studies show that it takes longer for children learning spoken languages to acquire viewpoint-dependent spatial relations (e.g., left-right, front-behind), compared to ones that are not viewpoint-dependent (e.g., in, on, under). The current study investigates how children learn to express viewpoint-dependent relations in a sign language where depicted spatial relations can be communicated in an analogue manner in the space in front of the body or by using body-anchored signs (e.g., tapping the right and left hand/arm to mean left and right). Our results indicate that the visual-spatial modality might have a facilitating effect on learning to express these spatial relations (especially in encoding of left-right) in a sign language (i.e., Turkish Sign Language) compared to a spoken language (i.e., Turkish).