-
Drijvers, L., Small, S. L., & Skipper, J. I. (2025). Language is widely distributed throughout the brain. Nature Reviews Neuroscience, 26: 189. doi:10.1038/s41583-024-00903-0.
-
Emmendorfer, A. K., & Holler, J. (2025). Facial signals shape predictions about the nature of upcoming conversational responses. Scientific Reports, 15: 1381. doi:10.1038/s41598-025-85192-y.
Abstract
Increasing evidence suggests that interlocutors use visual communicative signals to form predictions about unfolding utterances, but there is little data on the predictive potential of facial signals in conversation. In an online experiment with virtual agents, we examine whether facial signals produced by an addressee may allow speakers to anticipate the response to a question before it is given. Participants (n = 80) viewed videos of short conversation fragments between two virtual humans. Each fragment ended with the Questioner asking a question, followed by a pause during which the Responder looked either straight at the Questioner (baseline), or averted their gaze, or accompanied the straight gaze with one of the following facial signals: brow raise, brow frown, nose wrinkle, smile, squint, mouth corner pulled back (dimpler). Participants then indicated on a 6-point scale whether they expected a “yes” or “no” response. Analyses revealed that all signals received different ratings relative to the baseline: brow raises, dimplers, and smiles were associated with more positive responses, gaze aversions, brow frowns, nose wrinkles, and squints with more negative responses. Our findings show that interlocutors may form strong associations between facial signals and upcoming responses to questions, highlighting their predictive potential in face-to-face conversation. -
Esmer, Ş. C., Turan, E., Karadöller, D. Z., & Göksun, T. (2025). Sources of variation in preschoolers’ relational reasoning: The interaction between language use and working memory. Journal of Experimental Child Psychology, 252: 106149. doi:10.1016/j.jecp.2024.106149.
Abstract
Previous research has suggested the importance of relational language and working memory in children’s relational reasoning. The tendency to use language (e.g., using more relational than object-focused language, prioritizing focal objects over background in linguistic descriptions) could reflect children’s biases toward the relational versus object-based solutions in a relational match-to-sample (RMTS) task. In the absence of any apparent object match as a foil option, object-focused children might rely on other cognitive mechanisms (i.e., working memory) to choose a relational match in the RMTS task. The current study examined the interactive roles of language- and working memory-related sources of variation in Turkish-learning preschoolers’ relational reasoning. We collected data from 4- and 5-year-olds (N = 41) via Zoom in the RMTS task, a scene description task, and a backward word span task. Generalized binomial mixed effects models revealed that children who used more relational language and background-focused scene descriptions performed worse in the relational reasoning task. Furthermore, children with less frequent relational language use and focal object descriptions of the scenes benefited more from working memory to succeed in the relational reasoning task. These results suggest additional working memory demands for object-focused children to choose relational matches in the RMTS task, highlighting the importance of examining the interactive effects of different cognitive mechanisms on relational reasoning. -
Göksun, T., Aktan-Erciyes, A., Karadöller, D. Z., & Demir-Lira, Ö. E. (2025). Multifaceted nature of early vocabulary development: Connecting child characteristics with parental input types. Child Development Perspectives, 19(1), 30-37. doi:10.1111/cdep.12524.
Abstract
Children need to learn the demands of their native language in the early vocabulary development phase. In this dynamic process, parental multimodal input may shape neurodevelopmental trajectories while also being tailored by child-related factors. Moving beyond typically characterized group profiles, in this article, we synthesize growing evidence on the effects of parental multimodal input (amount, quality, or absence), domain-specific input (space and math), and language-specific input (causal verbs and sound symbols) on preterm, full-term, and deaf children's early vocabulary development, focusing primarily on research with children learning Turkish and Turkish Sign Language. We advocate for a theoretical perspective, integrating neonatal characteristics and parental input, and acknowledging the unique constraints of languages. -
Karadöller, D. Z., Demir-Lira, Ö. E., & Göksun, T. (2025). Full-term children with lower vocabulary scores receive more multimodal math input than preterm children. Journal of Cognition and Development. Advance online publication. doi:10.1080/15248372.2025.2470245.
Abstract
One of the earliest sources of mathematical input arises in dyadic parent–child interactions. However, the emphasis has been on parental input only in speech and how input varies across different environmental and child-specific factors remains largely unexplored. Here, we investigated the relationship among parental math input modality and type, children’s gestational status (being preterm vs. full-term born), and vocabulary development. Using book-reading as a medium for parental math input in dyadic interaction, we coded specific math input elicited by Turkish-speaking parents and their 26-month-old children (N = 58, 24 preterms) for speech-only and multimodal (speech and gestures combined) input. Results showed that multimodal math input, as opposed to speech-only math input, was uniquely associated with gestational status, expressive vocabulary, and the interaction between the two. Full-term children with lower expressive vocabulary scores received more multimodal input compared to their preterm peers. However, there was no association between expressive vocabulary and multimodal math input for preterm children. Moreover, cardinality was the most frequent type for both speech-only and multimodal input. These findings suggest that the specific type of multimodal math input can be produced as a function of children’s gestational status and vocabulary development. -
Lokhesh, N. N., Swaminathan, K., Shravan, G., Menon, D., Mishra, S., Nandanwar, A., & Mishra, C. (2025). Welcome to the library: Integrating social robots in Indian libraries. In O. Palinko, L. Bodenhagen, J.-J. Cabibihan, K. Fischer, S. Šabanović, K. Winkle, L. Behera, S. S. Ge, D. Chrysostomou, W. Jiang, & H. He (Eds.), Social Robotics: 16th International Conference, ICSR + AI 2024, Odense, Denmark, October 23–26, 2024, Proceedings (pp. 239-246). Singapore: Springer. doi:10.1007/978-981-96-3525-2_20.
Abstract
Libraries are very often considered the hallway to developing knowledge. However, the lack of adequate staff within Indian libraries makes catering to the visitors’ needs difficult. Previous systems that have sought to address libraries’ needs through automation have mostly been limited to storage and fetching aspects while lacking in their interaction aspect. We propose to address this issue by incorporating social robots within Indian libraries that can communicate and address the visitors’ queries in a multi-modal fashion, attempting to make the experience more natural and appealing while helping reduce the burden on the librarians. In this paper, we propose and deploy a Furhat robot as a robot librarian by programming it on certain core librarian functionalities. We evaluate our system with a physical robot librarian (N = 26). The results show that the robot librarian was found to be very informative and left participants with a positive overall impression and preference. -
Mishra, C., Skantze, G., Hagoort, P., & Verdonschot, R. G. (2025). Perception of emotions in human and robot faces: Is the eye region enough? In O. Palinko, L. Bodenhagen, J.-J. Cabibihan, K. Fischer, S. Šabanović, K. Winkle, L. Behera, S. S. Ge, D. Chrysostomou, W. Jiang, & H. He (Eds.), Social Robotics: 16th International Conference, ICSR + AI 2024, Odense, Denmark, October 23–26, 2024, Proceedings (pp. 290-303). Singapore: Springer.
Abstract
The increased interest in developing next-gen social robots has raised questions about the factors affecting the perception of robot emotions. This study investigates the impact of robot appearances (human-like, mechanical) and face regions (full-face, eye-region) on human perception of robot emotions. A between-subjects user study (N = 305) was conducted where participants were asked to identify the emotions being displayed in videos of robot faces, as well as a human baseline. Our findings reveal three important insights for effective social robot face design in Human-Robot Interaction (HRI): Firstly, robots equipped with a back-projected, fully animated face – regardless of whether they are more human-like or more mechanical-looking – demonstrate a capacity for emotional expression comparable to that of humans. Secondly, the recognition accuracy of emotional expressions in both humans and robots declines when only the eye region is visible. Lastly, within the constraint of only the eye region being visible, robots with more human-like features significantly enhance emotion recognition. -
Özer, D., Özyürek, A., & Göksun, T. (2025). Spatial working memory is critical for gesture processing: Evidence from gestures with varying semantic links to speech. Psychonomic Bulletin & Review. Advance online publication. doi:10.3758/s13423-025-02642-4.
Abstract
Gestures express redundant or complementary information to the speech they accompany by depicting visual and spatial features of referents. In doing so, they recruit both spatial and verbal cognitive resources that underpin the processing of visual semantic information and its integration with speech. The relation between spatial and verbal skills and gesture comprehension, where gestures may serve different roles in relation to speech, is yet to be explored. This study examined the role of spatial and verbal skills in processing gestures that expressed redundant or complementary information to speech during the comprehension of spatial relations between objects. Turkish-speaking adults (N = 74) watched videos describing the spatial location of objects that involved perspective-taking (left-right) or not (on-under) with speech and gesture. Gestures either conveyed redundant information to speech (e.g., saying and gesturing “left”) or complemented the accompanying demonstrative in speech (e.g., saying “here,” gesturing “left”). We also measured participants’ spatial (the Corsi block span and the mental rotation tasks) and verbal skills (the digit span task). Our results revealed nuanced interactions between these skills and spatial language comprehension, depending on the modality in which the information was expressed. One insight emerged prominently. Spatial skills, particularly spatial working memory capacity, were related to enhanced comprehension of visual semantic information conveyed through gestures, especially when this information was not present in the accompanying speech. This study highlights the critical role of spatial working memory in gesture processing and underscores the importance of examining the interplay among cognitive and contextual factors to understand the complex dynamics of multimodal language. -
Rubio-Fernandez, P. (2025). First acquiring articles in a second language: A new approach to the study of language and social cognition. Lingua, 313: 103851. doi:10.1016/j.lingua.2024.103851.
Abstract
Pragmatic phenomena are characterized by extreme variability, which makes it difficult to draw sound generalizations about the role of social cognition in pragmatic language by and large. I introduce cultural evolutionary pragmatics as a new framework for the study of the interdependence between language and social cognition, and point to the study of common-ground management across languages and ages as a way to test the reliance of pragmatic language on social cognition. I illustrate this new research line with three experiments on article use by second language speakers whose mother tongue lacks articles. These L2 speakers are known to find article use challenging, and it is often argued that their difficulties stem from articles being pragmatically redundant. Contrary to this view, the results of this exploratory study support the view that proficient article use requires automatizing basic socio-cognitive processes, offering a window into the interdependence between language and social cognition. -
Rubio-Fernandez, P., Berke, M. D., & Jara-Ettinger, J. (2025). Tracking minds in communication. Trends in Cognitive Sciences, 29(3), 269-281. doi:10.1016/j.tics.2024.11.005.
Abstract
How might social cognition help us communicate through language? At what levels does this interaction occur? In classical views, social cognition is independent of language, and integrating the two can be slow, effortful, and error-prone. But new research into word-level processes reveals that communication is brimming with social micro-processes that happen in real time, guiding even the simplest choices, like how we use adjectives, articles, and demonstratives. We interpret these findings in the context of advances in theoretical models of social cognition and propose a Communicative Mind-Tracking framework, where social micro-processes aren’t a secondary process in how we use language; they are fundamental to how communication works. -
Soberanes, M., Pérez-Ramírez, C. A., & Assaneo, M. F. (2025). Insights into the effect of general attentional state, coarticulation, and primed speech rate in phoneme production time. Journal of Speech, Language, and Hearing Research. Advance online publication. doi:10.1044/2025_JSLHR-24-00595.
Abstract
Purpose:
This study aimed to identify how a set of predefined factors modulates phoneme articulation time within a speaker.
Method:
We used a custom in-lab system that records lip muscle activity through electromyography signals, aligned with the produced speech, to measure phoneme articulation time. Twenty Spanish-speaking participants (12 females) were evaluated while producing sequences of a consonant–vowel syllable, with each sequence consisting of repeated articulations of either /pa/ or /pu/. Before starting the sequences, participants underwent a priming step with either a fast or slow speech rate. Additionally, the general attentional state level was assessed at the beginning, middle, and end of the protocol. To analyze the variability in the duration of /p/ and vowel articulation, we fitted individual linear mixed-models considering three factors: general attentional state level, priming rate, and coarticulation effects (for /p/, i.e., followed by /a/ or /u/) or phoneme identity (for vowels, i.e., being /a/ or /u/).
Results:
We found that the level of general attentional state positively correlated with production time for both the consonant /p/ and the vowels. Additionally, /p/ production was influenced by the nature of the following vowel (i.e., coarticulation effects), while vowel production time was affected by the primed speech rate.
Conclusions:
Phoneme duration appears to be influenced by both stable, speaker-specific characteristics (idiosyncratic traits) and internal, state-dependent factors related to the speaker's condition at the time of speech production. While some factors affect both consonants and vowels, others specifically modify only one of these types. -
Tilston, O., Holler, J., & Bangerter, A. (2025). Opening social interactions: The coordination of approach, gaze, speech and handshakes during greetings. Cognitive Science, 49(2): e70049. doi:10.1111/cogs.70049.
Abstract
Despite the importance of greetings for opening social interactions, their multimodal coordination processes remain poorly understood. We used a naturalistic, lab-based setup where pairs of unacquainted participants approached and greeted each other while unaware their greeting behavior was studied. We measured the prevalence and time course of multimodal behaviors potentially culminating in a handshake, including motor behaviors (e.g., walking, standing up, hand movements like raise, grasp, and retraction), gaze patterns (using eye tracking glasses), and speech (close and distant verbal salutations). We further manipulated the visibility of partners’ eyes to test its effect on gaze. Our findings reveal that gaze to a partner's face increases over the course of a greeting, but is partly averted during approach and is influenced by the visibility of partners’ eyes. Gaze helps coordinate handshakes, by signaling intent and guiding the grasp. The timing of adjacency pairs in verbal salutations is comparable to the precision of floor transitions in the main body of conversations, and varies according to greeting phase, with distant salutation pair parts featuring more gaps and close salutation pair parts featuring more overlap. Gender composition and a range of multimodal behaviors affected whether pairs chose to shake hands or not. These findings fill several gaps in our understanding of greetings and provide avenues for future research, including advancements in social robotics and human−robot interaction. -
Trujillo, J. P., & Holler, J. (2025). Multimodal information density is highest in question beginnings, and early entropy is associated with fewer but longer visual signals. Discourse Processes. Advance online publication. doi:10.1080/0163853X.2024.2413314.
Abstract
When engaged in spoken conversation, speakers convey meaning using both speech and visual signals, such as facial expressions and manual gestures. An important question is how information is distributed in utterances during face-to-face interaction when information from visual signals is also present. In a corpus of casual Dutch face-to-face conversations, we focus on spoken questions in particular because they occur frequently, thus constituting core building blocks of conversation. We quantified information density (i.e. lexical entropy and surprisal) and the number and relative duration of facial and manual signals. We tested whether lexical information density or the number of visual signals differed between the first and last halves of questions, as well as whether the number of visual signals occurring in the less-predictable portion of a question was associated with the lexical information density of the same portion of the question in a systematic manner. We found that information density, as well as number of visual signals, were higher in the first half of questions, and specifically lexical entropy was associated with fewer, but longer visual signals. The multimodal front-loading of questions and the complementary distribution of visual signals and high entropy words in Dutch casual face-to-face conversations may have implications for the parallel processes of utterance comprehension and response planning during turn-taking. -
Trujillo, J. P., Dyer, R. M. K., & Holler, J. (2025). Dyadic differences in empathy scores are associated with kinematic similarity during conversational question-answer pairs. Discourse Processes. Advance online publication. doi:10.1080/0163853X.2025.2467605.
Abstract
During conversation, speakers coordinate and synergize their behaviors at multiple levels, and in different ways. The extent to which individuals converge or diverge in their behaviors during interaction may relate to interpersonal differences relevant to social interaction, such as empathy as measured by the empathy quotient (EQ). An association between interpersonal difference in empathy and interpersonal entrainment could help to throw light on how interlocutor characteristics influence interpersonal entrainment. We investigated this possibility in a corpus of unconstrained conversation between dyads. We used dynamic time warping to quantify entrainment between interlocutors of head motion, hand motion, and maximum speech f0 during question–response sequences. We additionally calculated interlocutor differences in EQ scores. We found that, for both head and hand motion, greater difference in EQ was associated with higher entrainment. Thus, we consider that people who are dissimilar in EQ may need to “ground” their interaction with low-level movement entrainment. There was no significant relationship between f0 entrainment and EQ score differences. -
Ünal, E., Kırbaşoğlu, K., Karadöller, D. Z., Sumer, B., & Özyürek, A. (2025). Gesture reduces mapping difficulties in the development of spatial language depending on the complexity of spatial relations. Cognitive Science, 49(2): e70046. doi:10.1111/cogs.70046.
Abstract
In spoken languages, children acquire locative terms in a cross-linguistically stable order. Terms similar in meaning to in and on emerge earlier than those similar to front and behind, followed by left and right. This order has been attributed to the complexity of the relations expressed by different locative terms. An additional possibility is that children may be delayed in expressing certain spatial meanings partly due to difficulties in discovering the mappings between locative terms in speech and spatial relation they express. We investigate cognitive and mapping difficulties in the domain of spatial language by comparing how children map spatial meanings onto speech versus visually motivated forms in co-speech gesture across different spatial relations. Twenty-four 8-year-old and 23 adult native Turkish-speakers described four-picture displays where the target picture depicted in-on, front-behind, or left-right relations between objects. As the complexity of spatial relations increased, children were more likely to rely on gestures as opposed to speech to informatively express the spatial relation. Adults overwhelmingly relied on speech to informatively express the spatial relation, and this did not change across the complexity of spatial relations. Nevertheless, even when spatial expressions in both speech and co-speech gesture were considered, children lagged behind adults when expressing the most complex left-right relations. These findings suggest that cognitive development and mapping difficulties introduced by the modality of expressions interact in shaping the development of spatial language. -
Yılmaz, B., Doğan, I., Karadöller, D. Z., Demir-Lira, Ö. E., & Göksun, T. (2025). Parental attitudes and beliefs about mathematics and the use of gestures in children’s math development. Cognitive Development, 73: 101531. doi:10.1016/j.cogdev.2024.101531.
Abstract
Children vary in mathematical skills even before formal schooling. The current study investigated how parental math beliefs, parents’ math anxiety, and children's spontaneous gestures contribute to preschool-aged children’s math performance. Sixty-three Turkish-reared children (33 girls, Mage = 49.9 months, SD = 3.68) were assessed on verbal counting, cardinality, and arithmetic tasks (nonverbal and verbal). Results showed that parental math beliefs were related to children’s verbal counting, cardinality and arithmetic scores. Children whose parents have higher math beliefs along with low math anxiety scored highest in the cardinality task. Children’s gesture use was also related to lower cardinality performance and the relation between parental math beliefs and children’s performance became stronger when child gestures were absent. These findings highlight the importance of parent and child-related contributors in explaining the variability in preschool-aged children’s math skills. -
Zora, H., Kabak, B., & Hagoort, P. (2025). Relevance of prosodic focus and lexical stress for discourse comprehension in Turkish: Evidence from psychometric and electrophysiological data. Journal of Cognitive Neuroscience, 37(3), 693-736. doi:10.1162/jocn_a_02262.
Abstract
Prosody underpins various linguistic domains ranging from semantics and syntax to discourse. For instance, prosodic information in the form of lexical stress modifies meanings and, as such, syntactic contexts of words as in Turkish kaz-má "pickaxe" (noun) versus káz-ma "do not dig" (imperative). Likewise, prosody indicates the focused constituent of an utterance as the noun phrase filling the wh-spot in a dialogue like What did you eat? I ate ___. In the present study, we investigated the relevance of such prosodic variations for discourse comprehension in Turkish. We aimed at answering how lexical stress and prosodic focus mismatches on critical noun phrases (resulting in grammatical anomalies involving both semantics and syntax, and in discourse-level anomalies, respectively) affect the perceived correctness of an answer to a question in a given context. To that end, 80 native speakers of Turkish, 40 participating in a psychometric experiment and 40 participating in an EEG experiment, were asked to judge the acceptability of prosodic mismatches that occur either separately or concurrently. Psychometric results indicated that lexical stress mismatch led to a lower correctness score than prosodic focus mismatch, and combined mismatch received the lowest score. Consistent with the psychometric data, EEG results revealed an N400 effect to combined mismatch, and this effect was followed by a P600 response to lexical stress mismatch. Conjointly, these results suggest that every source of prosodic information is immediately available and codetermines the interpretation of an utterance; however, semantically and syntactically relevant lexical stress information is assigned more significance by the language comprehension system compared with prosodic focus information. -
Brown, A. R., Pouw, W., Brentari, D., & Goldin-Meadow, S. (2021). People are less susceptible to illusion when they use their hands to communicate rather than estimate. Psychological Science, 32, 1227-1237. doi:10.1177/0956797621991552.
Abstract
When we use our hands to estimate the length of a stick in the Müller-Lyer illusion, we are highly susceptible to the illusion. But when we prepare to act on sticks under the same conditions, we are significantly less susceptible. Here, we asked whether people are susceptible to illusion when they use their hands not to act on objects but to describe them in spontaneous co-speech gestures or conventional sign languages of the deaf. Thirty-two English speakers and 13 American Sign Language signers used their hands to act on, estimate the length of, and describe sticks eliciting the Müller-Lyer illusion. For both gesture and sign, the magnitude of illusion in the description task was smaller than the magnitude of illusion in the estimation task and not different from the magnitude of illusion in the action task. The mechanisms responsible for producing gesture in speech and sign thus appear to operate not on percepts involved in estimation but on percepts derived from the way we act on objects. -
Fisher, V. J. (2021). Embodied songs: Insights into the nature of cross-modal meaning-making within sign language informed, embodied interpretations of vocal music. Frontiers in Psychology, 12: 624689. doi:10.3389/fpsyg.2021.624689.
Abstract
Embodied song practices involve the transformation of songs from the acoustic modality into an embodied-visual form, to increase meaningful access for d/Deaf audiences. This goes beyond the translation of lyrics, by combining poetic sign language with other bodily movements to embody the para-linguistic expressive and musical features that enhance the message of a song. To date, the limited research into this phenomenon has focussed on linguistic features and interactions with rhythm. The relationship between bodily actions and music has not been probed beyond an assumed implication of conformance. However, as the primary objective is to communicate equivalent meanings, the ways that the acoustic and embodied-visual signals relate to each other should reveal something about underlying conceptual agreement. This paper draws together a range of pertinent theories from within a grounded cognition framework including semiotics, analogy mapping and cross-modal correspondences. These theories are applied to embodiment strategies used by prominent d/Deaf and hearing Dutch practitioners, to unpack the relationship between acoustic songs, their embodied representations, and their broader conceptual and affective meanings. This leads to the proposition that meaning primarily arises through shared patterns of internal relations across a range of amodal and cross-modal features with an emphasis on dynamic qualities. These analogous patterns can inform metaphorical interpretations and trigger shared emotional responses. This exploratory survey offers insights into the nature of cross-modal and embodied meaning-making, as a jumping-off point for further research. -
Karadöller, D. Z., Sumer, B., Ünal, E., & Ozyurek, A. (2021). Spatial language use predicts spatial memory of children: Evidence from sign, speech, and speech-plus-gesture. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 672-678). Vienna: Cognitive Science Society.
Abstract
There is a strong relation between children’s exposure to spatial terms and their later memory accuracy. In the current study, we tested whether the production of spatial terms by children themselves predicts memory accuracy and whether and how language modality of these encodings modulates memory accuracy differently. Hearing child speakers of Turkish and deaf child signers of Turkish Sign Language described pictures of objects in various spatial relations to each other and were later tested for their memory accuracy of these pictures in a surprise memory task. We found that having described the spatial relation between the objects predicted better memory accuracy. However, the modality of these descriptions in sign, speech, or speech-plus-gesture did not reveal differences in memory accuracy. We discuss the implications of these findings for the relation between spatial language, memory, and the modality of encoding. -
Karadöller, D. Z., Sumer, B., & Ozyurek, A. (2021). Effects and non-effects of late language exposure on spatial language development: Evidence from deaf adults and children. Language Learning and Development, 17(1), 1-25. doi:10.1080/15475441.2020.1823846.
Abstract
Late exposure to the first language, as in the case of deaf children with hearing parents, hinders the production of linguistic expressions, even in adulthood. Less is known about the development of language soon after language exposure and if late exposure hinders all domains of language in children and adults. We compared late signing adults and children (MAge = 8;5) 2 years after exposure to sign language, to their age-matched native signing peers in expressions of two types of locative relations that are acquired in certain cognitive-developmental order: view-independent (IN-ON-UNDER) and view-dependent (LEFT-RIGHT). Late signing children and adults differed from native signers in their use of linguistic devices for view-dependent relations but not for view-independent relations. These effects were also modulated by the morphological complexity. Hindering effects of late language exposure on the development of language in children and adults are not absolute but are modulated by cognitive and linguistic complexity. -
Mamus, E., Speed, L. J., Ozyurek, A., & Majid, A. (2021). Sensory modality of input influences encoding of motion events in speech but not co-speech gestures. In T. Fitch, C. Lamm, H. Leder, & K. Teßmar-Raible (Eds.), Proceedings of the 43rd Annual Conference of the Cognitive Science Society (CogSci 2021) (pp. 376-382). Vienna: Cognitive Science Society.
Abstract
Visual and auditory channels have different affordances and this is mirrored in what information is available for linguistic encoding. The visual channel has high spatial acuity, whereas the auditory channel has better temporal acuity. These differences may lead to different conceptualizations of events and affect multimodal language production. Previous studies of motion events typically present visual input to elicit speech and gesture. The present study compared events presented as audio-only, visual-only, or multimodal (visual+audio) input and assessed speech and co-speech gesture for path and manner of motion in Turkish. Speakers with audio-only input mentioned path more and manner less in verbal descriptions, compared to speakers who had visual input. There was no difference in the type or frequency of gestures across conditions, and gestures were dominated by path-only gestures. This suggests that input modality influences speakers’ encoding of path and manner of motion events in speech, but not in co-speech gestures. -
Manhardt, F. (2021). A tale of two modalities. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Manhardt, F., Brouwer, S., & Ozyurek, A. (2021). A tale of two modalities: Sign and speech influence each other in bimodal bilinguals. Psychological Science, 32(3), 424-436. doi:10.1177/0956797620968789.
Abstract
Bimodal bilinguals are hearing individuals fluent in a sign and a spoken language. Can the two languages influence each other in such individuals despite differences in the visual (sign) and vocal (speech) modalities of expression? We investigated cross-linguistic influences on bimodal bilinguals’ expression of spatial relations. Unlike spoken languages, sign uses iconic linguistic forms that resemble physical features of objects in a spatial relation and thus expresses specific semantic information. Hearing bimodal bilinguals (n = 21) fluent in Dutch and Sign Language of the Netherlands and their hearing nonsigning and deaf signing peers (n = 20 each) described left/right relations between two objects. Bimodal bilinguals expressed more specific information about physical features of objects in speech than nonsigners, showing influence from sign language. They also used fewer iconic signs with specific semantic information than deaf signers, demonstrating influence from speech. Bimodal bilinguals’ speech and signs are shaped by two languages from different modalities.
Additional information
supplementary materials -
Nielsen, A. K. S., & Dingemanse, M. (2021). Iconicity in word learning and beyond: A critical review. Language and Speech, 64(1), 52-72. doi:10.1177/0023830920914339.
Abstract
Interest in iconicity (the resemblance-based mapping between aspects of form and meaning) is in the midst of a resurgence, and a prominent focus in the field has been the possible role of iconicity in language learning. Here we critically review theory and empirical findings in this domain. We distinguish local learning enhancement (where the iconicity of certain lexical items influences the learning of those items) and general learning enhancement (where the iconicity of certain lexical items influences the later learning of non-iconic items or systems). We find that evidence for local learning enhancement is quite strong, though not as clear cut as it is often described and based on a limited sample of languages. Despite common claims about broader facilitatory effects of iconicity on learning, we find that current evidence for general learning enhancement is lacking. We suggest a number of productive avenues for future research and specify what types of evidence would be required to show a role for iconicity in general learning enhancement. We also review evidence for functions of iconicity beyond word learning: iconicity enhances comprehension by providing complementary representations, supports communication about sensory imagery, and expresses affective meanings. Even if learning benefits may be modest or cross-linguistically varied, on balance, iconicity emerges as a vital aspect of language. -
Ozyurek, A. (2021). Considering the nature of multimodal language from a crosslinguistic perspective. Journal of Cognition, 4(1): 42. doi:10.5334/joc.165.
Abstract
Language in its primary face-to-face context is multimodal (e.g., Holler and Levinson, 2019; Perniss, 2018). Thus, understanding how expressions in the vocal and visual modalities together contribute to our notions of language structure, use, processing, and transmission (i.e., acquisition, evolution, emergence) in different languages and cultures should be a fundamental goal of language sciences. This requires a new framework of language that brings together how arbitrary and non-arbitrary and motivated semiotic resources of language relate to each other. The current commentary evaluates such a proposal by Murgiano et al. (2021) from a crosslinguistic perspective, taking variation as well as systematicity in multimodal utterances into account. -
Pouw, W., Dingemanse, M., Motamedi, Y., & Ozyurek, A. (2021). A systematic investigation of gesture kinematics in evolving manual languages in the lab. Cognitive Science, 45(7): e13014. doi:10.1111/cogs.13014.
Abstract
Silent gestures consist of complex multi-articulatory movements but are now primarily studied through categorical coding of the referential gesture content. The relation of categorical linguistic content with continuous kinematics is therefore poorly understood. Here, we reanalyzed the video data from a gestural evolution experiment (Motamedi, Schouwstra, Smith, Culbertson, & Kirby, 2019), which showed increases in the systematicity of gesture content over time. We applied computer vision techniques to quantify the kinematics of the original data. Our kinematic analyses demonstrated that gestures become more efficient and less complex in their kinematics over generations of learners. We further detect the systematicity of gesture form on the level of the gesture kinematic interrelations, which directly scales with the systematicity obtained on semantic coding of the gestures. Thus, from continuous kinematics alone, we can tap into linguistic aspects that were previously only approachable through categorical coding of meaning. Finally, going beyond issues of systematicity, we show how unique gesture kinematic dialects emerged over generations as isolated chains of participants gradually diverged over iterations from other chains. We, thereby, conclude that gestures can come to embody the linguistic system at the level of interrelationships between communicative tokens, which should calibrate our theories about form and linguistic content. -
Pouw, W., Wit, J., Bögels, S., Rasenberg, M., Milivojevic, B., & Ozyurek, A. (2021). Semantically related gestures move alike: Towards a distributional semantics of gesture kinematics. In V. G. Duffy (Ed.), Digital human modeling and applications in health, safety, ergonomics and risk management. Human body, motion and behavior: 12th International Conference, DHM 2021, Held as Part of the 23rd HCI International Conference, HCII 2021 (pp. 269-287). Berlin: Springer. doi:10.1007/978-3-030-77817-0_20. -
Pouw, W., Proksch, S., Drijvers, L., Gamba, M., Holler, J., Kello, C., Schaefer, R. S., & Wiggins, G. A. (2021). Multilevel rhythms in multimodal communication. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences, 376: 20200334. doi:10.1098/rstb.2020.0334.
Abstract
It is now widely accepted that the brunt of animal communication is conducted via several modalities, e.g. acoustic and visual, either simultaneously or sequentially. This is a laudable multimodal turn relative to traditional accounts of temporal aspects of animal communication which have focused on a single modality at a time. However, the fields that are currently contributing to the study of multimodal communication are highly varied, and still largely disconnected given their sole focus on a particular level of description or their particular concern with human or non-human animals. Here, we provide an integrative overview of converging findings that show how multimodal processes occurring at neural, bodily, as well as social interactional levels each contribute uniquely to the complex rhythms that characterize communication in human and non-human animals. Though we address findings for each of these levels independently, we conclude that the most important challenge in this field is to identify how processes at these different levels connect. -
Pouw, W., De Jonge-Hoekstra, L., Harrison, S. J., Paxton, A., & Dixon, J. A. (2021). Gesture-speech physics in fluent speech and rhythmic upper limb movements. Annals of the New York Academy of Sciences, 1491(1), 89-105. doi:10.1111/nyas.14532.
Abstract
Communicative hand gestures are often coordinated with prosodic aspects of speech, and salient moments of gestural movement (e.g., quick changes in speed) often co-occur with salient moments in speech (e.g., near peaks in fundamental frequency and intensity). A common understanding is that such gesture and speech coordination is culturally and cognitively acquired, rather than having a biological basis. Recently, however, the biomechanical physical coupling of arm movements to speech movements has been identified as a potentially important factor in understanding the emergence of gesture-speech coordination. Specifically, in the case of steady-state vocalization and mono-syllable utterances, forces produced during gesturing are transferred onto the tensioned body, leading to changes in respiratory-related activity and thereby affecting vocalization F0 and intensity. In the current experiment (N = 37), we extend this previous line of work to show that gesture-speech physics impacts fluent speech, too. Compared with non-movement, participants who are producing fluent self-formulated speech, while rhythmically moving their limbs, demonstrate heightened F0 and amplitude envelope, and such effects are more pronounced for higher-impulse arm versus lower-impulse wrist movement. We replicate that acoustic peaks arise especially during moments of peak-impulse (i.e., the beat) of the movement, namely around deceleration phases of the movement. Finally, higher deceleration rates of higher-mass arm movements were related to higher peaks in acoustics. These results confirm a role for physical-impulses of gesture affecting the speech system. We discuss the implications of
gesture-speech physics for understanding of the emergence of communicative gesture, both ontogenetically and phylogenetically.
Additional information
data and analyses -
Schubotz, L., Holler, J., Drijvers, L., & Ozyurek, A. (2021). Aging and working memory modulate the ability to benefit from visible speech and iconic gestures during speech-in-noise comprehension. Psychological Research, 85, 1997-2011. doi:10.1007/s00426-020-01363-8.
Abstract
When comprehending speech-in-noise (SiN), younger and older adults benefit from seeing the speaker’s mouth, i.e. visible speech. Younger adults additionally benefit from manual iconic co-speech gestures. Here, we investigate to what extent younger and older adults benefit from perceiving both visual articulators while comprehending SiN, and whether this is modulated by working memory and inhibitory control. Twenty-eight younger and 28 older adults performed a word recognition task in three visual contexts: mouth blurred (speech-only), visible speech, or visible speech + iconic gesture. The speech signal was either clear or embedded in multitalker babble. Additionally, there were two visual-only conditions (visible speech, visible speech + gesture). Accuracy levels for both age groups were higher when both visual articulators were present compared to either one or none. However, older adults received a significantly smaller benefit than younger adults, although they performed equally well in speech-only and visual-only word recognition. Individual differences in verbal working memory and inhibitory control partly accounted for age-related performance differences. To conclude, perceiving iconic gestures in addition to visible speech improves younger and older adults’ comprehension of SiN. Yet, the ability to benefit from this additional visual information is modulated by age and verbal working memory. Future research will have to show whether these findings extend beyond the single word level.
Additional information
supplementary material -
Schubotz, L. (2021). Effects of aging and cognitive abilities on multimodal language production and comprehension in context. PhD Thesis, Radboud University Nijmegen, Nijmegen.
Additional information
full text via Radboud Repository -
Slonimska, A., Ozyurek, A., & Capirci, O. (2021). Using depiction for efficient communication in LIS (Italian Sign Language). Language and Cognition, 13(3), 367-396. doi:10.1017/langcog.2021.7.
Abstract
Meanings communicated with depictions constitute an integral part of how speakers and signers actually use language (Clark, 2016). Recent studies have argued that, in sign languages, a depicting strategy like constructed action (CA), in which a signer enacts the referent, is used for referential purposes in narratives. Here, we tested the referential function of CA in a more controlled experimental setting and outside a narrative context. Given the iconic properties of CA, we hypothesized that this strategy could be used for efficient information transmission. Thus, we asked if use of CA increased with the increase in the information required to be communicated. Twenty-three deaf signers of LIS described unconnected images, which varied in the amount of information represented, to another player in a director–matcher game. Results revealed that participants used CA to communicate core information about the images and also increased the use of CA as images became informatively denser. The findings show that iconic features of CA can be used for referential function in addition to its depictive function outside a narrative context and to achieve communicative efficiency. -
Trujillo, J. P., Ozyurek, A., Holler, J., & Drijvers, L. (2021). Speakers exhibit a multimodal Lombard effect in noise. Scientific Reports, 11: 16721. doi:10.1038/s41598-021-95791-0.
Abstract
In everyday conversation, we are often challenged with communicating in non-ideal settings, such as in noise. Increased speech intensity and larger mouth movements are used to overcome noise in constrained settings (the Lombard effect). How we adapt to noise in face-to-face interaction, the natural environment of human language use, where manual gestures are ubiquitous, is currently unknown. We asked Dutch adults to wear headphones with varying levels of multi-talker babble while attempting to communicate action verbs to one another. Using quantitative motion capture and acoustic analyses, we found that (1) noise is associated with increased speech intensity and enhanced gesture kinematics and mouth movements, and (2) acoustic modulation only occurs when gestures are not present, while kinematic modulation occurs regardless of co-occurring speech. Thus, in face-to-face encounters the Lombard effect is not constrained to speech but is a multimodal phenomenon where the visual channel carries most of the communicative burden.
Additional information
supplementary material -
Trujillo, J. P., Ozyurek, A., Kan, C. C., Sheftel-Simanova, I., & Bekkering, H. (2021). Differences in the production and perception of communicative kinematics in autism. Autism Research, 14(12), 2640-2653. doi:10.1002/aur.2611.
Abstract
In human communication, social intentions and meaning are often revealed in the way we move. In this study, we investigate the flexibility of human communication in terms of kinematic modulation in a clinical population, namely, autistic individuals. The aim of this study was twofold: to assess (a) whether communicatively relevant kinematic features of gestures differ between autistic and neurotypical individuals, and (b) if autistic individuals use communicative kinematic modulation to support gesture recognition. We tested autistic and neurotypical individuals on a silent gesture production task and a gesture comprehension task. We measured movement during the gesture production task using a Kinect motion tracking device in order to determine if autistic individuals differed from neurotypical individuals in their gesture kinematics. For the gesture comprehension task, we assessed whether autistic individuals used communicatively relevant kinematic cues to support recognition. This was done by using stick-light figures as stimuli and testing for a correlation between the kinematics of these videos and recognition performance. We found that (a) silent gestures produced by autistic and neurotypical individuals differ in communicatively relevant kinematic features, such as the number of meaningful holds between movements, and (b) while autistic individuals are overall unimpaired at recognizing gestures, they processed repetition and complexity, measured as the amount of submovements perceived, differently than neurotypicals do. These findings highlight how subtle aspects of neurotypical behavior can be experienced differently by autistic individuals. They further demonstrate the relationship between movement kinematics and social interaction in high-functioning autistic individuals.
Additional information
supporting information