  • Alagöz, G., Eising, E., Mekki, Y., Bignardi, G., Fontanillas, P., 23andMe Research Team, Nivard, M. G., Luciano, M., Cox, N. J., Fisher, S. E., & Gordon, R. L. (2025). The shared genetic architecture and evolution of human language and musical rhythm. Nature Human Behaviour, 9, 376-390. doi:10.1038/s41562-024-02051-y.

    Abstract

    Rhythm and language-related traits are phenotypically correlated, but their genetic overlap is largely unknown. Here, we leveraged two large-scale genome-wide association studies of rhythm (N=606,825) and dyslexia (N=1,138,870) to shed light on their shared genetics. Our results reveal an intricate shared genetic and neurobiological architecture, and lay the groundwork for resolving longstanding debates about the potential co-evolution of human language and musical traits.
  • Bethke, S., Meyer, A. S., & Hintz, F. (2025). The German Auditory and Image (GAudI) vocabulary test: A new German receptive vocabulary test and its relationships to other tests measuring linguistic experience. PLOS ONE, 20: e0318115. doi:10.1371/journal.pone.0318115.

    Abstract

    Humans acquire word knowledge through producing and comprehending spoken and written language. Word learning continues into adulthood and knowledge accumulates across the lifespan. Therefore, receptive vocabulary size is often conceived of as a proxy for linguistic experience and plays a central role in assessing individuals’ language proficiency. There is currently no valid open access test available for assessing receptive vocabulary size in German-speaking adults. We addressed this gap and developed the German Auditory and Image Vocabulary Test (GAudI). In the GAudI, participants are presented with spoken test words and have to indicate their meanings by selecting the corresponding picture from a set of four alternatives. Here we describe the development of the test and provide evidence for its validity. Specifically, we report a study in which 168 German-speaking participants completed the GAudI and five other tests tapping into linguistic experience: one test measuring print exposure, two tests measuring productive vocabulary, one test assessing knowledge of book language grammar, and a test of receptive vocabulary that was normed in adolescents. The psychometric properties of the GAudI and its relationships to the other tests demonstrate that it is a suitable tool for measuring receptive vocabulary size. We offer an open-access digital test environment that can be used for research purposes, accessible via https://ems13.mpi.nl/bq4_customizable_de/researchers_welcome.php.
  • Bujok, R., Meyer, A. S., & Bosker, H. R. (2025). Audiovisual perception of lexical stress: Beat gestures and articulatory cues. Language and Speech, 68(1), 181-203. doi:10.1177/00238309241258162.

    Abstract

    Human communication is inherently multimodal. Not only auditory speech but also visual cues can be used to understand another talker. Most studies of audiovisual speech perception have focused on the perception of speech segments (i.e., speech sounds). However, less is known about the influence of visual information on the perception of suprasegmental aspects of speech like lexical stress. In two experiments, we investigated the influence of different visual cues (e.g., facial articulatory cues and beat gestures) on the audiovisual perception of lexical stress. We presented auditory lexical stress continua of disyllabic Dutch stress pairs together with videos of a speaker producing stress on the first or second syllable (e.g., articulating VOORnaam or voorNAAM). Moreover, we combined and fully crossed the face of the speaker producing lexical stress on either syllable with a gesturing body producing a beat gesture on either the first or second syllable. Results showed that people successfully used visual articulatory cues to stress in muted videos. However, in audiovisual conditions, we were not able to find an effect of visual articulatory cues. In contrast, we found that the temporal alignment of beat gestures with speech robustly influenced participants' perception of lexical stress. These results highlight the importance of considering suprasegmental aspects of language in multimodal contexts.
  • Coopmans, C. W., De Hoop, H., Tezcan, F., Hagoort, P., & Martin, A. E. (2025). Language-specific neural dynamics extend syntax into the time domain. PLOS Biology, 23: e3002968. doi:10.1371/journal.pbio.3002968.

    Abstract

    Studies of perception have long shown that the brain adds information to its sensory analysis of the physical environment. A touchstone example for humans is language use: to comprehend a physical signal like speech, the brain must add linguistic knowledge, including syntax. Yet, syntactic rules and representations are widely assumed to be atemporal (i.e., abstract and not bound by time), so they must be translated into time-varying signals for speech comprehension and production. Here, we test 3 different models of the temporal spell-out of syntactic structure against brain activity of people listening to Dutch stories: an integratory bottom-up parser, a predictive top-down parser, and a mildly predictive left-corner parser. These models build exactly the same structure but differ in when syntactic information is added by the brain—this difference is captured in the (temporal distribution of the) complexity metric “incremental node count.” Using temporal response function models with both acoustic and information-theoretic control predictors, node counts were regressed against source-reconstructed delta-band activity acquired with magnetoencephalography. Neural dynamics in left frontal and temporal regions most strongly reflect node counts derived by the top-down method, which postulates syntax early in time, suggesting that predictive structure building is an important component of Dutch sentence comprehension. The absence of strong effects of the left-corner model further suggests that its mildly predictive strategy does not represent Dutch language comprehension well, in contrast to what has been found for English. Understanding when the brain projects its knowledge of syntax onto speech, and whether this is done in language-specific ways, will inform and constrain the development of mechanistic models of syntactic structure building in the brain.
  • Goral, M., Antolovic, K., Hejazi, Z., & Schulz, F. M. (2025). Using a translanguaging framework to examine language production in a trilingual person with aphasia. Clinical Linguistics & Phonetics, 39(1), 1-20. doi:10.1080/02699206.2024.2328240.

    Abstract

    When language abilities in aphasia are assessed in clinical and research settings, the standard practice is to examine each language of a multilingual person separately. But many multilingual individuals, with and without aphasia, mix their languages regularly when they communicate with other speakers who share their languages. We applied a novel approach to scoring language production of a multilingual person with aphasia. Our aim was to discover whether the assessment outcome would differ meaningfully when we count accurate responses in only the target language of the assessment session versus when we apply a translanguaging framework, that is, count all accurate responses, regardless of the language in which they were produced. The participant is a Farsi-German-English speaking woman with chronic moderate aphasia. We examined the participant’s performance on two picture-naming tasks, an answering wh-question task, and an elicited narrative task. The results demonstrated that scores in English, the participant’s third-learned and least-impaired language, did not differ between the two scoring methods. Performance in German, the participant’s moderately impaired second language, benefited from translanguaging-based scoring across the board. In Farsi, her weakest language post-CVA, the participant’s scores were higher under the translanguaging-based scoring approach in some but not all of the tasks. Our findings suggest that whether translanguaging-based scoring makes a difference in the results obtained depends on relative language abilities and on pragmatic constraints, with additional influence of the linguistic distances between the languages in question.
  • Karaca, F., Brouwer, S., Unsworth, S., & Huettig, F. (2025). Child heritage speakers’ reading skills in the majority language and exposure to the heritage language support morphosyntactic prediction in speech. Bilingualism: Language and Cognition. Advance online publication. doi:10.1017/S1366728925000331.

    Abstract

    We examined the morphosyntactic prediction ability of child heritage speakers and the role of reading skills and language experience in predictive processing. Using visual world eye-tracking, we focused on predictive use of case-marking cues in Turkish with monolingual (N=49, Mage=83 months) and heritage children, who were early bilinguals of Turkish and Dutch (N=30, Mage=90 months). We found quantitative differences in the magnitude of the prediction ability of monolingual and heritage children; however, their overall prediction ability was on par. The heritage speakers’ prediction ability was facilitated by their reading skills in Dutch, but not in Turkish, as well as by their heritage language exposure, but not by engagement in literacy activities. These findings emphasize the facilitatory role of reading skills and spoken language experience in predictive processing. This study is the first to show that in a developing bilingual mind, effects of reading on prediction can take place across modalities and across languages.

    Additional information

    data and analysis scripts
  • Karaca, F. (2025). On knowing what lies ahead: The interplay of prediction, experience, and proficiency. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Matetovici, M., Spruit, A., Colonnesi, C., Garnier‐Villarreal, M., & Noom, M. (2025). Parent and child gender effects in the relationship between attachment and both internalizing and externalizing problems of children between 2 and 5 years old: A dyadic perspective. Infant Mental Health Journal: Infancy and Early Childhood. Advance online publication. doi:10.1002/imhj.70002.

    Abstract

    Acknowledging that the parent–child attachment is a dyadic relationship, we investigated differences between pairs of parents and preschool children based on gender configurations in the association between attachment and problem behavior. We looked at mother–daughter, mother–son, father–daughter, and father–son dyads, but also compared mothers and fathers, daughters and sons, and same versus different gender pairs. We employed multigroup structural equation modeling to explore moderation effects of gender in a sample of 446 independent pairs of parents and preschool children (2–5 years old) from the Netherlands. A stronger association between both secure and avoidant attachment and internalizing problems was found for father–son dyads compared to father–daughter dyads. A stronger association between both secure and avoidant attachment and externalizing problems was found for mother–son dyads compared to mother–daughter and father–daughter dyads. Sons showed a stronger negative association between secure attachment and externalizing problems, a stronger positive association between avoidant attachment and externalizing problems, and a stronger negative association between secure attachment and internalizing problems compared to daughters. These results provide evidence for gender moderation and demonstrate that a dyadic approach can reveal patterns of associations that would not be recognized if parent and child gender effects were assessed separately.

    Additional information

    analysis code
  • Mazzini*, S., Seijdel*, N., & Drijvers*, L. (2025). Autistic individuals benefit from gestures during degraded speech comprehension. Autism, 29(2), 544-548. doi:10.1177/13623613241286570.

    Abstract

    *All authors contributed equally to this work
    Meaningful gestures enhance degraded speech comprehension in neurotypical adults, but it is unknown whether this is the case for neurodivergent populations, such as autistic individuals. Previous research demonstrated atypical multisensory and speech-gesture integration in autistic individuals, suggesting that integrating speech and gestures may be more challenging and less beneficial for speech comprehension in adverse listening conditions in comparison to neurotypicals. Conversely, autistic individuals could also benefit from additional cues to comprehend speech in noise, as they encounter difficulties in filtering relevant information from noise. We here investigated whether gestural enhancement of degraded speech comprehension differs for neurotypical (n = 40, mean age = 24.1) compared to autistic (n = 40, mean age = 26.8) adults. Participants watched videos of an actress uttering a Dutch action verb in clear or degraded speech accompanied with or without a gesture, and completed a free-recall task. Gestural enhancement was observed for both autistic and neurotypical individuals, and did not differ between groups. In contrast to previous literature, our results demonstrate that autistic individuals do benefit from gestures during degraded speech comprehension, similar to neurotypicals. These findings provide relevant insights to improve communication practices with autistic individuals and to develop new interventions for speech comprehension.
  • Ye, C., McQueen, J. M., & Bosker, H. R. (2025). Effect of auditory cues to lexical stress on the visual perception of gestural timing. Attention, Perception & Psychophysics. Advance online publication. doi:10.3758/s13414-025-03072-z.

    Abstract

    Speech is often accompanied by gestures. Since beat gestures—simple nonreferential up-and-down hand movements—frequently co-occur with prosodic prominence, they can indicate stress in a word and hence influence spoken-word recognition. However, little is known about the reverse influence of auditory speech on visual perception. The current study investigated whether lexical stress has an effect on the perceived timing of hand beats. We used videos in which a disyllabic word, embedded in a carrier sentence (Experiment 1) or in isolation (Experiment 2), was coupled with an up-and-down hand beat, while varying their degrees of asynchrony. Results from Experiment 1, a novel beat timing estimation task, revealed that gestures were estimated to occur closer in time to the pitch peak in a stressed syllable than their actual timing, hence reducing the perceived temporal distance between gestures and stress by around 60%. Using a forced-choice task, Experiment 2 further demonstrated that listeners tended to perceive a gesture, falling midway between two syllables, on the syllable receiving stronger cues to stress than the other, and this auditory effect was greater when gestural timing was most ambiguous. Our findings suggest that f0 and intensity are the driving force behind the temporal attraction effect of stress on perceived gestural timing. This study provides new evidence for auditory influences on visual perception, supporting bidirectionality in audiovisual interaction between speech-related signals that occur in everyday face-to-face communication.
  • Mooijman, S., Schoonen, R., Goral, M., Roelofs, A., & Ruiter, M. B. (2025). Why do bilingual speakers with aphasia alternate between languages? A study into their experiences and mixing patterns. Aphasiology. Advance online publication. doi:10.1080/02687038.2025.2452928.

    Abstract

    Background

    The factors that contribute to language alternation by bilingual speakers with aphasia have been debated. Some studies suggest that atypical language mixing results from impairments in language control, while others posit that mixing is a way to enhance communicative effectiveness. To address this question, most prior research examined the appropriateness of language mixing in connected speech tasks.
    Aims

    The goal of this study was to provide new insight into the question of whether language mixing in aphasia reflects a strategy to enhance verbal effectiveness or involuntary behaviour resulting from impaired language control.
    Methods & procedures

    Semi-structured web-based interviews with bilingual speakers with aphasia (N = 19) with varying language backgrounds were conducted. The interviews were transcribed and coded for: (1) self-reports regarding language control and compensation, (2) instances of language mixing, and (3) in two cases, instances of repair initiation.
    Outcomes & results

    The results showed that several participants reported language control difficulties but that the knowledge of additional languages could also be recruited to compensate for lexical retrieval problems. Most participants showed no or very few instances of mixing and the observed mixes appeared to adhere to the pragmatic context and known functions of switching. Three participants exhibited more marked switching behaviour and reported corresponding difficulties with language control. Instances of atypical mixing did not coincide with clear problems initiating conversational repair.
    Conclusions

    Our study highlights the variability in language mixing patterns of bilingual speakers with aphasia. Furthermore, most of the individuals in the study appeared to be able to effectively control their languages, and to alternate between their languages for compensatory purposes. Control deficits resulting in atypical language mixing were observed in a small number of participants.
  • Papoutsi, C., Tourtouri, E. N., Piai, V., Lampe, L. F., & Meyer, A. S. (2025). Fast and slow errors: What naming latencies of errors reveal about the interplay of attentional control and word planning in speeded picture naming. Journal of Experimental Psychology: Learning, Memory, and Cognition. Advance online publication. doi:10.1037/xlm0001472.

    Abstract

    Speakers sometimes produce lexical errors, such as saying “salt” instead of “pepper.” This study aimed to better understand the origin of lexical errors by assessing whether they arise from a hasty selection and premature decision to speak (premature selection hypothesis) or from momentary attentional disengagement from the task (attentional lapse hypothesis). We analyzed data from a speeded picture naming task (Lampe et al., 2023) and investigated whether lexical errors are produced as fast as target (i.e., correct) responses, thus arising from premature selection, or whether they are produced more slowly than target responses, thus arising from lapses of attention. Using ex-Gaussian analyses, we found that lexical errors were slower than targets in the tail, but not in the normal part of the response time distribution, with the tail effect primarily resulting from errors that were not coordinates, that is, members of the target’s semantic category. Moreover, we compared the coordinate errors and target responses in terms of their word-intrinsic properties and found that they were overall more frequent, shorter, and acquired earlier than targets. Given the present findings, we conclude that coordinate errors occur due to a premature selection but in the context of intact attentional control, following the same lexical constraints as targets, while other errors, given the variability in their nature, may vary in their origin, with one potential source being lapses of attention.
  • Rohrer, P. L., Bujok, R., Van Maastricht, L., & Bosker, H. R. (2025). From “I dance” to “she danced” with a flick of the hands: Audiovisual stress perception in Spanish. Psychonomic Bulletin & Review. Advance online publication. doi:10.3758/s13423-025-02683-9.

    Abstract

    When talking, speakers naturally produce hand movements (co-speech gestures) that contribute to communication. Evidence in Dutch suggests that the timing of simple up-and-down, non-referential “beat” gestures influences spoken word recognition: the same auditory stimulus was perceived as CONtent (noun, capitalized letters indicate stressed syllables) when a beat gesture occurred on the first syllable, but as conTENT (adjective) when the gesture occurred on the second syllable. However, these findings were based on a small number of minimal pairs in Dutch, limiting their generalizability. We therefore tested this effect in Spanish, where lexical stress is highly relevant in the verb conjugation system, distinguishing bailo, “I dance” with word-initial stress from bailó, “she danced” with word-final stress. Testing a larger sample (N = 100), we also assessed whether individual differences in working memory capacity modulated how much individuals relied on the gestures in spoken word recognition. The results showed that, similar to Dutch, Spanish participants were biased to perceive lexical stress on the syllable that visually co-occurred with a beat gesture, with the effect being strongest when the acoustic stress cues were most ambiguous. No evidence was found for by-participant effect sizes to be influenced by individual differences in phonological or visuospatial working memory. These findings reveal that gestural-speech coordination impacts lexical stress perception in a language where listeners are regularly confronted with such lexical stress contrasts, highlighting the impact of gestures’ timing on prominence perception and spoken word recognition.
  • Roos, N. M. (2025). Naming a picture in context: Paving the way to investigate language recovery after stroke. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Sander, J., Zhang, Y., & Rowland, C. F. (2025). Language acquisition occurs in multimodal social interaction: A commentary on Karadöller, Sümer and Özyürek [invited commentary]. First Language. Advance online publication. doi:10.1177/01427237251326984.

    Abstract

    We argue that language learning occurs in triadic interactions, where caregivers and children engage not only with each other but also with objects, actions and non-verbal cues that shape language acquisition. We illustrate this using two studies on real-time interactions in spoken and signed language. The first examines shared book reading, showing how caregivers use speech, gestures and gaze coordination to establish joint attention, facilitating word-object associations. The second study explores joint attention in spoken and signed interactions, demonstrating that signing dyads rely on a wider range of multimodal behaviours – such as touch, vibrations and peripheral gaze – compared to speaking dyads. Our data highlight how different language modalities shape attentional strategies. We advocate for research that fully incorporates the dynamic interplay between language, attention and environment.
  • Severijnen, G. G. A. (2025). A blessing in disguise: How prosodic variability challenges but also aids successful speech perception. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Ter Bekke, M. (2025). On how gestures facilitate prediction and fast responding during conversation. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Ter Bekke, M., Drijvers, L., & Holler, J. (2025). Co-speech hand gestures are used to predict upcoming meaning. Psychological Science, 36(4), 237-248. doi:10.1177/09567976251331041.

    Abstract

    In face-to-face conversation, people use speech and gesture to convey meaning. Seeing gestures alongside speech facilitates comprehenders’ language processing, but crucially, the mechanisms underlying this facilitation remain unclear. We investigated whether comprehenders use the semantic information in gestures, typically preceding related speech, to predict upcoming meaning. Dutch adults listened to questions asked by a virtual avatar. Questions were accompanied by an iconic gesture (e.g., typing) or meaningless control movement (e.g., arm scratch) followed by a short pause and target word (e.g., “type”). A Cloze experiment showed that gestures improved explicit predictions of upcoming target words. Moreover, an EEG experiment showed that gestures reduced alpha and beta power during the pause, indicating anticipation, and reduced N400 amplitudes, demonstrating facilitated semantic processing. Thus, comprehenders use iconic gestures to predict upcoming meaning. Theories of linguistic prediction should incorporate communicative bodily signals as predictive cues to capture how language is processed in face-to-face interaction.

    Additional information

    supplementary material
  • Van Geert, E., Ding, R., & Wagemans, J. (2025). A cross-cultural comparison of aesthetic preferences for neatly organized compositions: Native Chinese- versus Native Dutch-speaking samples. Empirical Studies of the Arts, 43(1), 250-275. doi:10.1177/02762374241245917.

    Abstract

    Do aesthetic preferences for images of neatly organized compositions (e.g., images collected on blogs like Things Organized Neatly©) generalize across cultures? In an earlier study, focusing on stimulus and personal properties related to order and complexity, Western participants indicated their preference for one of two simultaneously presented images (100 pairs). In the current study, we compared the data of the native Dutch-speaking participants from this earlier sample (N = 356) to newly collected data from a native Chinese-speaking sample (N = 220). Overall, aesthetic preferences were quite similar across cultures. When relating preferences for each sample to ratings of order, complexity, soothingness, and fascination collected from a Western, mainly Dutch-speaking sample, the results hint at a cross-culturally consistent preference for images that Western participants rate as more ordered, but a cross-culturally diverse relation between preferences and complexity.
  • Byun, K.-S., De Vos, C., Bradford, A., Zeshan, U., & Levinson, S. C. (2018). First encounters: Repair sequences in cross-signing. Topics in Cognitive Science, 10(2), 314-334. doi:10.1111/tops.12303.

    Abstract

    Most human communication is between people who speak or sign the same languages. Nevertheless, communication is to some extent possible where there is no language in common, as every tourist knows. How this works is of some theoretical interest (Levinson 2006). A nice arena to explore this capacity is when deaf signers of different languages meet for the first time, and are able to use the iconic affordances of sign to begin communication. Here we focus on Other-Initiated Repair (OIR), that is, where one signer makes clear he or she does not understand, thus initiating repair of the prior conversational turn. OIR sequences are typically of a three-turn structure (Schegloff 2007) including the problem source turn (T-1), the initiation of repair (T0), and the turn offering a problem solution (T+1). These sequences seem to have a universal structure (Dingemanse et al. 2013). We find that in most cases where such OIRs occur, the signer of the troublesome turn (T-1) foresees potential difficulty, and marks the utterance with 'try markers' (Sacks & Schegloff 1979, Moerman 1988) which pause to invite recognition. The signers use repetition, gestural holds, prosodic lengthening and eye gaze at the addressee as such try markers. Moreover, when T-1 is try-marked this allows for faster response times of T+1 with respect to T0. This finding suggests that signers in these 'first encounter' situations actively anticipate potential trouble and, through try-marking, mobilize and facilitate OIRs. The suggestion is that heightened meta-linguistic awareness can be utilized to deal with these problems at the limits of our communicational ability.
  • Byun, K.-S., De Vos, C., Roberts, S. G., & Levinson, S. C. (2018). Interactive sequences modulate the selection of expressive forms in cross-signing. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 67-69). Toruń, Poland: NCU Press. doi:10.12775/3991-1.012.
  • Croijmans, I. (2018). Wine expertise shapes olfactory language and cognition. PhD Thesis, Radboud University, Nijmegen.
  • Dai, B., Chen, C., Long, Y., Zheng, L., Zhao, H., Bai, X., Liu, W., Zhang, Y., Liu, L., Guo, T., Ding, G., & Lu, C. (2018). Neural mechanisms for selectively tuning into the target speaker in a naturalistic noisy situation. Nature Communications, 9: 2405. doi:10.1038/s41467-018-04819-z.

    Abstract

    The neural mechanism for selectively tuning in to a target speaker while tuning out the others in a multi-speaker situation (i.e., the cocktail-party effect) remains elusive. Here we addressed this issue by measuring brain activity simultaneously from a listener and from multiple speakers while they were involved in naturalistic conversations. Results consistently show selectively enhanced interpersonal neural synchronization (INS) between the listener and the attended speaker at left temporal–parietal junction, compared with that between the listener and the unattended speaker across different multi-speaker situations. Moreover, INS increases significantly prior to the occurrence of verbal responses, and even when the listener’s brain activity precedes that of the speaker. The INS increase is independent of brain-to-speech synchronization in both the anatomical location and frequency range. These findings suggest that INS underlies the selective process in a multi-speaker situation through neural predictions at the content level but not the sensory level of speech.

    Additional information

    Dai_etal_2018_sup.pdf
  • Den Hoed, J., Sollis, E., Venselaar, H., Estruch, S. B., Derizioti, P., & Fisher, S. E. (2018). Functional characterization of TBR1 variants in neurodevelopmental disorder. Scientific Reports, 8: 14279. doi:10.1038/s41598-018-32053-6.

    Abstract

    Recurrent de novo variants in the TBR1 transcription factor are implicated in the etiology of sporadic autism spectrum disorders (ASD). Disruptions include missense variants located in the T-box DNA-binding domain and previous work has demonstrated that they disrupt TBR1 protein function. Recent screens of thousands of simplex families with sporadic ASD cases uncovered additional T-box variants in TBR1 but their etiological relevance is unclear. We performed detailed functional analyses of de novo missense TBR1 variants found in the T-box of ASD cases, assessing many aspects of protein function, including subcellular localization, transcriptional activity and protein-interactions. Only two of the three tested variants severely disrupted TBR1 protein function, despite in silico predictions that all would be deleterious. Furthermore, we characterized a putative interaction with BCL11A, a transcription factor that was recently implicated in a neurodevelopmental syndrome involving developmental delay and language deficits. Our findings enhance understanding of molecular functions of TBR1, as well as highlighting the importance of functional testing of variants that emerge from next-generation sequencing, to decipher their contributions to neurodevelopmental disorders like ASD.

    Additional information

    Electronic supplementary material
  • Devanna, P., Chen, X. S., Ho, J., Gajewski, D., Smith, S. D., Gialluisi, A., Francks, C., Fisher, S. E., Newbury, D. F., & Vernes, S. C. (2018). Next-gen sequencing identifies non-coding variation disrupting miRNA binding sites in neurological disorders. Molecular Psychiatry, 23(5), 1375-1384. doi:10.1038/mp.2017.30.

    Abstract

    Understanding the genetic factors underlying neurodevelopmental and neuropsychiatric disorders is a major challenge given their prevalence and potential severity for quality of life. While large-scale genomic screens have made major advances in this area, for many disorders the genetic underpinnings are complex and poorly understood. To date the field has focused predominantly on protein coding variation, but given the importance of tightly controlled gene expression for normal brain development and disorder, variation that affects non-coding regulatory regions of the genome is likely to play an important role in these phenotypes. Herein we show the importance of 3′ untranslated region (3′UTR) non-coding regulatory variants across neurodevelopmental and neuropsychiatric disorders. We devised a pipeline for identifying and functionally validating putatively pathogenic variants from next generation sequencing (NGS) data. We applied this pipeline to a cohort of children with severe specific language impairment (SLI) and identified a functional, SLI-associated variant affecting gene regulation in cells and post-mortem human brain. This variant and the affected gene (ARHGEF39) represent new putative risk factors for SLI. Furthermore, we identified 3′UTR regulatory variants across autism, schizophrenia and bipolar disorder NGS cohorts, demonstrating their impact on neurodevelopmental and neuropsychiatric disorders. Our findings show the importance of investigating non-coding regulatory variants when determining risk factors contributing to neurodevelopmental and neuropsychiatric disorders. In the future, integration of such regulatory variation with protein coding changes will be essential for uncovering the genetic causes of complex neurological disorders and the fundamental mechanisms underlying health and disease.

    Additional information

    mp201730x1.docx
  • Dingemanse, M., Blythe, J., & Dirksmeyer, T. (2018). Formats for other-initiation of repair across languages: An exercise in pragmatic typology. In I. Nikolaeva (Ed.), Linguistic Typology: Critical Concepts in Linguistics. Vol. 4 (pp. 322-357). London: Routledge.

    Abstract

    In conversation, people regularly deal with problems of speaking, hearing, and understanding. We report on a cross-linguistic investigation of the conversational structure of other-initiated repair (also known as collaborative repair, feedback, requests for clarification, or grounding sequences). We take stock of formats for initiating repair across languages (comparable to English huh?, who?, y’mean X?, etc.) and find that different languages make available a wide but remarkably similar range of linguistic resources for this function. We exploit the patterned variation as evidence for several underlying concerns addressed by repair initiation: characterising trouble, managing responsibility, and handling knowledge. The concerns do not always point in the same direction and thus provide participants in interaction with alternative principles for selecting one format over possible others. By comparing conversational structures across languages, this paper contributes to pragmatic typology: the typology of systems of language use and the principles that shape them.
  • Drijvers, L., & Trujillo, J. P. (2018). Commentary: Transcranial magnetic stimulation over left inferior frontal and posterior temporal cortex disrupts gesture-speech integration. Frontiers in Human Neuroscience, 12: 256. doi:10.3389/fnhum.2018.00256.

    Abstract

    A commentary on
    Transcranial Magnetic Stimulation over Left Inferior Frontal and Posterior Temporal Cortex Disrupts Gesture-Speech Integration

    by Zhao, W., Riggs, K., Schindler, I., and Holle, H. (2018). J. Neurosci. doi: 10.1523/JNEUROSCI.1748-17.2017
  • Drijvers, L., Ozyurek, A., & Jensen, O. (2018). Alpha and beta oscillations index semantic congruency between speech and gestures in clear and degraded speech. Journal of Cognitive Neuroscience, 30(8), 1086-1097. doi:10.1162/jocn_a_01301.

    Abstract

    Previous work revealed that visual semantic information conveyed by gestures can enhance degraded speech comprehension, but the mechanisms underlying these integration processes under adverse listening conditions remain poorly understood. We used MEG to investigate how oscillatory dynamics support speech–gesture integration when integration load is manipulated by auditory (e.g., speech degradation) and visual semantic (e.g., gesture congruency) factors. Participants were presented with videos of an actress uttering an action verb in clear or degraded speech, accompanied by a matching (mixing gesture + “mixing”) or mismatching (drinking gesture + “walking”) gesture. In clear speech, alpha/beta power was more suppressed in the left inferior frontal gyrus and motor and visual cortices when integration load increased in response to mismatching versus matching gestures. In degraded speech, beta power was less suppressed over posterior STS and medial temporal lobe for mismatching compared with matching gestures, showing that integration load was lowest when speech was degraded and mismatching gestures could not be integrated and disambiguate the degraded signal. Our results thus provide novel insights on how low-frequency oscillatory modulations in different parts of the cortex support the semantic audiovisual integration of gestures in clear and degraded speech: When speech is clear, the left inferior frontal gyrus and motor and visual cortices engage because higher-level semantic information increases semantic integration load. When speech is degraded, posterior STS/middle temporal gyrus and medial temporal lobe are less engaged because integration load is lowest when visual semantic information does not aid lexical retrieval and speech and gestures cannot be integrated.
  • Drijvers, L., Ozyurek, A., & Jensen, O. (2018). Hearing and seeing meaning in noise: Alpha, beta and gamma oscillations predict gestural enhancement of degraded speech comprehension. Human Brain Mapping, 39(5), 2075-2087. doi:10.1002/hbm.23987.

    Abstract

    During face-to-face communication, listeners integrate speech with gestures. The semantic information conveyed by iconic gestures (e.g., a drinking gesture) can aid speech comprehension in adverse listening conditions. In this magnetoencephalography (MEG) study, we investigated the spatiotemporal neural oscillatory activity associated with gestural enhancement of degraded speech comprehension. Participants watched videos of an actress uttering clear or degraded speech, accompanied by a gesture or not and completed a cued-recall task after watching every video. When gestures semantically disambiguated degraded speech comprehension, an alpha and beta power suppression and a gamma power increase revealed engagement and active processing in the hand-area of the motor cortex, the extended language network (LIFG/pSTS/STG/MTG), medial temporal lobe, and occipital regions. These observed low- and high-frequency oscillatory modulations in these areas support general unification, integration and lexical access processes during online language comprehension, and simulation of and increased visual attention to manual gestures over time. All individual oscillatory power modulations associated with gestural enhancement of degraded speech comprehension predicted a listener's correct disambiguation of the degraded verb after watching the videos. Our results thus go beyond the previously proposed role of oscillatory dynamics in unimodal degraded speech comprehension and provide first evidence for the role of low- and high-frequency oscillations in predicting the integration of auditory and visual information at a semantic level.

    Additional information

    hbm23987-sup-0001-suppinfo01.docx
  • Drijvers, L., & Ozyurek, A. (2018). Native language status of the listener modulates the neural integration of speech and iconic gestures in clear and adverse listening conditions. Brain and Language, 177-178, 7-17. doi:10.1016/j.bandl.2018.01.003.

    Abstract

    Native listeners neurally integrate iconic gestures with speech, which can enhance degraded speech comprehension. However, it is unknown how non-native listeners neurally integrate speech and gestures, as they might process visual semantic context differently than natives. We recorded EEG while native and highly-proficient non-native listeners watched videos of an actress uttering an action verb in clear or degraded speech, accompanied by a matching ('to drive'+driving gesture) or mismatching gesture ('to drink'+mixing gesture). Degraded speech elicited an enhanced N400 amplitude compared to clear speech in both groups, revealing an increase in neural resources needed to resolve the spoken input. A larger N400 effect was found in clear speech for non-natives compared to natives, but in degraded speech only for natives. Non-native listeners might thus process gesture more strongly than natives when speech is clear, but need more auditory cues to facilitate access to gestural semantic information when speech is degraded.
  • Drozdova, P. (2018). The effects of nativeness and background noise on the perceptual learning of voices and ambiguous sounds. PhD Thesis, Radboud University, Nijmegen.
  • Duarte, R., Uhlmann, M., Van den Broek, D., Fitz, H., Petersson, K. M., & Morrison, A. (2018). Encoding symbolic sequences with spiking neural reservoirs. In Proceedings of the 2018 International Joint Conference on Neural Networks (IJCNN). doi:10.1109/IJCNN.2018.8489114.

    Abstract

    Biologically inspired spiking networks are an important tool to study the nature of computation and cognition in neural systems. In this work, we investigate the representational capacity of spiking networks engaged in an identity mapping task. We compare two schemes for encoding symbolic input, one in which input is injected as a direct current and one where input is delivered as a spatio-temporal spike pattern. We test the ability of networks to discriminate their input as a function of the number of distinct input symbols. We also compare performance using either membrane potentials or filtered spike trains as state variable. Furthermore, we investigate how the circuit behavior depends on the balance between excitation and inhibition, and the degree of synchrony and regularity in its internal dynamics. Finally, we compare different linear methods of decoding population activity onto desired target labels. Overall, our results suggest that even this simple mapping task is strongly influenced by design choices on input encoding, state-variables, circuit characteristics and decoding methods, and these factors can interact in complex ways. This work highlights the importance of constraining computational network models of behavior by available neurobiological evidence.
  • Eekhof, L. S., Eerland, A., & Willems, R. M. (2018). Readers’ insensitivity to tense revealed: No differences in mental simulation during reading of present and past tense stories. Collabra: Psychology, 4(1): 16. doi:10.1525/collabra.121.

    Abstract

    While the importance of mental simulation during literary reading has long been recognized, we know little about the factors that determine when, what, and how much readers mentally simulate. Here we investigate the influence of a specific text characteristic, namely verb tense (present vs. past), on mental simulation during literary reading. Verbs usually denote the actions and events that take place in narratives and hence it is hypothesized that verb tense will influence the amount of mental simulation elicited in readers. Although the present tense is traditionally considered to be more “vivid”, this study is one of the first to experimentally assess this claim. We recorded eye-movements while subjects read stories in the past or present tense and collected data regarding self-reported levels of mental simulation, transportation and appreciation. We found no influence of tense on any of the offline measures. The eye-tracking data showed a slightly more complex pattern. Although we did not find a main effect of sensorimotor simulation content on reading times, we were able to link the degree to which subjects slowed down when reading simulation eliciting content to offline measures of attention and transportation, but this effect did not interact with the tense of the story. Unexpectedly, we found a main effect of tense on reading times per word, with past tense stories eliciting longer first fixation durations and gaze durations. However, we were unable to link this effect to any of the offline measures. In sum, this study suggests that tense does not play a substantial role in the process of mental simulation elicited by literary stories.

    Additional information

    Data Accessibility
  • Estruch, S. B., Graham, S. A., Quevedo, M., Vino, A., Dekkers, D. H. W., Deriziotis, P., Sollis, E., Demmers, J., Poot, R. A., & Fisher, S. E. (2018). Proteomic analysis of FOXP proteins reveals interactions between cortical transcription factors associated with neurodevelopmental disorders. Human Molecular Genetics, 27(7), 1212-1227. doi:10.1093/hmg/ddy035.

    Abstract

    FOXP transcription factors play important roles in neurodevelopment, but little is known about how their transcriptional activity is regulated. FOXP proteins cooperatively regulate gene expression by forming homo- and hetero-dimers with each other. Physical associations with other transcription factors might also modulate the functions of FOXP proteins. However, few FOXP-interacting transcription factors have been identified so far. Therefore, we sought to discover additional transcription factors that interact with the brain-expressed FOXP proteins, FOXP1, FOXP2 and FOXP4, through affinity-purifications of protein complexes followed by mass spectrometry. We identified seven novel FOXP-interacting transcription factors (NR2F1, NR2F2, SATB1, SATB2, SOX5, YY1 and ZMYM2), five of which have well-established roles in cortical development. Accordingly, we found that these transcription factors are co-expressed with FoxP2 in the deep layers of the cerebral cortex and also in the Purkinje cells of the cerebellum, suggesting that they may cooperate with the FoxPs to regulate neural gene expression in vivo. Moreover, we demonstrated that etiological mutations of FOXP1 and FOXP2, known to cause neurodevelopmental disorders, severely disrupted the interactions with FOXP-interacting transcription factors. Additionally, we pinpointed specific regions within FOXP2 sequence involved in mediating these interactions. Thus, by expanding the FOXP interactome we have uncovered part of a broader neural transcription factor network involved in cortical development, providing novel molecular insights into the transcriptional architecture underlying brain development and neurodevelopmental disorders.
  • Estruch, S. B. (2018). Characterization of transcription factors in monogenic disorders of speech and language. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Fairs, A., Bögels, S., & Meyer, A. S. (2018). Dual-tasking with simple linguistic tasks: Evidence for serial processing. Acta Psychologica, 191, 131-148. doi:10.1016/j.actpsy.2018.09.006.

    Abstract

    In contrast to the large amount of dual-task research investigating the coordination of a linguistic and a nonlinguistic task, little research has investigated how two linguistic tasks are coordinated. However, such research would greatly contribute to our understanding of how interlocutors combine speech planning and listening in conversation. In three dual-task experiments we studied how participants coordinated the processing of an auditory stimulus (S1), which was either a syllable or a tone, with selecting a name for a picture (S2). Two SOAs, of 0 ms and 1000 ms, were used. To vary the time required for lexical selection and to determine when lexical selection took place, the pictures were presented with categorically related or unrelated distractor words. In Experiment 1 participants responded overtly to both stimuli. In Experiments 2 and 3, S1 was not responded to overtly, but determined how to respond to S2, by naming the picture or reading the distractor aloud. Experiment 1 yielded additive effects of SOA and distractor type on the picture naming latencies. The presence of semantic interference at both SOAs indicated that lexical selection occurred after response selection for S1. With respect to the coordination of S1 and S2 processing, Experiments 2 and 3 yielded inconclusive results. In all experiments, syllables interfered more with picture naming than tones. This is likely because the syllables activated phonological representations also implicated in picture naming. The theoretical and methodological implications of the findings are discussed.

    Additional information

    1-s2.0-S0001691817305589-mmc1.pdf
  • Felker, E. R., Troncoso Ruiz, A., Ernestus, M., & Broersma, M. (2018). The ventriloquist paradigm: Studying speech processing in conversation with experimental control over phonetic input. The Journal of the Acoustical Society of America, 144(4), EL304-EL309. doi:10.1121/1.5063809.

    Abstract

    This article presents the ventriloquist paradigm, an innovative method for studying speech processing in dialogue whereby participants interact face-to-face with a confederate who, unbeknownst to them, communicates by playing pre-recorded speech. Results show that the paradigm convinces more participants that the speech is live than a setup without the face-to-face element, and it elicits more interactive conversation than a setup in which participants believe their partner is a computer. By reconciling the ecological validity of a conversational context with full experimental control over phonetic exposure, the paradigm offers a wealth of new possibilities for studying speech processing in interaction.
  • Frank, S. L., & Yang, J. (2018). Lexical representation explains cortical entrainment during speech comprehension. PLoS One, 13(5): e0197304. doi:10.1371/journal.pone.0197304.

    Abstract

    Results from a recent neuroimaging study on spoken sentence comprehension have been interpreted as evidence for cortical entrainment to hierarchical syntactic structure. We present a simple computational model that predicts the power spectra from this study, even though the model's linguistic knowledge is restricted to the lexical level, and word-level representations are not combined into higher-level units (phrases or sentences). Hence, the cortical entrainment results can also be explained from the lexical properties of the stimuli, without recourse to hierarchical syntax.
  • Franken, M. K. (2018). Listening for speaking: Investigations of the relationship between speech perception and production. PhD Thesis, Radboud University, Nijmegen.

    Abstract

    Speaking and listening are complex tasks that we perform on a daily basis, almost without conscious effort. Interestingly, speaking almost never occurs without listening: whenever we speak, we at least hear our own speech. The research in this thesis is concerned with how the perception of our own speech influences our speaking behavior. We show that unconsciously, we actively monitor this auditory feedback of our own speech. This way, we can efficiently take action and adapt articulation when an error occurs and auditory feedback does not correspond to our expectation. Processing the auditory feedback of our speech does not, however, automatically affect speech production. It is subject to a number of constraints. For example, we do not just track auditory feedback, but also its consistency. If auditory feedback is more consistent over time, it has a stronger influence on speech production. In addition, we investigated how auditory feedback during speech is processed in the brain, using magnetoencephalography (MEG). The results suggest the involvement of a broad cortical network including both auditory and motor-related regions. This is consistent with the view that the auditory center of the brain is involved in comparing auditory feedback to our expectation of auditory feedback. If this comparison yields a mismatch, motor-related regions of the brain can be recruited to alter the ongoing articulations.

    Additional information

    full text via Radboud Repository
  • Franken, M. K., Acheson, D. J., McQueen, J. M., Hagoort, P., & Eisner, F. (2018). Opposing and following responses in sensorimotor speech control: Why responses go both ways. Psychonomic Bulletin & Review, 25(4), 1458-1467. doi:10.3758/s13423-018-1494-x.

    Abstract

    When talking, speakers continuously monitor and use the auditory feedback of their own voice to control and inform speech production processes. When speakers are provided with auditory feedback that is perturbed in real time, most of them compensate for this by opposing the feedback perturbation. But some speakers follow the perturbation. In the current study, we investigated whether the state of the speech production system at perturbation onset may determine what type of response (opposing or following) is given. The results suggest that whether a perturbation-related response is opposing or following depends on ongoing fluctuations of the production system: It initially responds by doing the opposite of what it was doing. This effect and the non-trivial proportion of following responses suggest that current production models are inadequate: They need to account for why responses to unexpected sensory feedback depend on the production-system’s state at the time of perturbation.
  • Franken, M. K., Eisner, F., Acheson, D. J., McQueen, J. M., Hagoort, P., & Schoffelen, J.-M. (2018). Self-monitoring in the cerebral cortex: Neural responses to pitch-perturbed auditory feedback during speech production. NeuroImage, 179, 326-336. doi:10.1016/j.neuroimage.2018.06.061.

    Abstract

    Speaking is a complex motor skill which requires near instantaneous integration of sensory and motor-related information. Current theory hypothesizes a complex interplay between motor and auditory processes during speech production, involving the online comparison of the speech output with an internally generated forward model. To examine the neural correlates of this intricate interplay between sensory and motor processes, the current study uses altered auditory feedback (AAF) in combination with magnetoencephalography (MEG). Participants vocalized the vowel /e/ and heard auditory feedback that was temporarily pitch-shifted by only 25 cents, while neural activity was recorded with MEG. As a control condition, participants also heard the recordings of the same auditory feedback that they heard in the first half of the experiment, now without vocalizing. The participants were not aware of any perturbation of the auditory feedback. We found auditory cortical areas responded more strongly to the pitch shifts during vocalization. In addition, auditory feedback perturbation resulted in spectral power increases in the θ and lower β bands, predominantly in sensorimotor areas. These results are in line with current models of speech production, suggesting auditory cortical areas are involved in an active comparison between a forward model's prediction and the actual sensory input. Subsequently, these areas interact with motor areas to generate a motor response. Furthermore, the results suggest that θ and β power increases support auditory-motor interaction, motor error detection and/or sensory prediction processing.
  • Goriot, C., Broersma, M., McQueen, J. M., Unsworth, S., & Van Hout, R. (2018). Language balance and switching ability in children acquiring English as a second language. Journal of Experimental Child Psychology, 173, 168-186. doi:10.1016/j.jecp.2018.03.019.

    Abstract

    This study investigated whether relative lexical proficiency in Dutch and English in child second language (L2) learners is related to executive functioning. Participants were Dutch primary school pupils of three different age groups (4–5, 8–9, and 11–12 years) who either were enrolled in an early-English schooling program or were age-matched controls not on that early-English program. Participants performed tasks that measured switching, inhibition, and working memory. Early-English program pupils had greater knowledge of English vocabulary and more balanced Dutch–English lexicons. In both groups, lexical balance, a ratio measure obtained by dividing vocabulary scores in English by those in Dutch, was related to switching but not to inhibition or working memory performance. These results show that for children who are learning an L2 in an instructional setting, and for whom managing two languages is not yet an automatized process, language balance may be more important than L2 proficiency in influencing the relation between childhood bilingualism and switching abilities.
  • Hahn, L. E., Benders, T., Snijders, T. M., & Fikkert, P. (2018). Infants' sensitivity to rhyme in songs. Infant Behavior and Development, 52, 130-139. doi:10.1016/j.infbeh.2018.07.002.

    Abstract

    Children’s songs often contain rhyming words at phrase endings. In this study, we investigated whether infants can already recognize this phonological pattern in songs. Earlier studies using lists of spoken words were equivocal on infants’ spontaneous processing of rhymes (Hayes, Slater, & Brown, 2000; Jusczyk, Goodman, & Baumann, 1999). Songs, however, constitute an ecologically valid rhyming stimulus, which could allow for spontaneous processing of this phonological pattern in infants. Novel children’s songs with rhyming and non-rhyming lyrics using pseudo-words were presented to 35 9-month-old Dutch infants using the Headturn Preference Procedure. Infants on average listened longer to the non-rhyming songs, with around half of the infants however exhibiting a preference for the rhyming songs. These results highlight that infants have the processing abilities to benefit from their natural rhyming input for the development of their phonological abilities.
  • Havron, N., Raviv, L., & Arnon, I. (2018). Literate and preliterate children show different learning patterns in an artificial language learning task. Journal of Cultural Cognitive Science, 2, 21-33. doi:10.1007/s41809-018-0015-9.

    Abstract

    Literacy affects many aspects of cognitive and linguistic processing. Among them, it increases the salience of words as units of linguistic processing. Here, we explored the impact of literacy acquisition on children’s learning of an artificial language. Recent accounts of L1–L2 differences relate adults’ greater difficulty with language learning to their smaller reliance on multiword units. In particular, multiword units are claimed to be beneficial for learning opaque grammatical relations like grammatical gender. Since literacy impacts the reliance on words as units of processing, we ask if and how acquiring literacy may change children’s language-learning results. We looked at children’s success in learning novel noun labels relative to their success in learning article-noun gender agreement, before and after learning to read. We found that preliterate first graders were better at learning agreement (larger units) than at learning nouns (smaller units), and that the difference between the two trial types significantly decreased after these children acquired literacy. In contrast, literate third graders were as good in both trial types. These findings suggest that literacy affects not only language processing, but also leads to important differences in language learning. They support the idea that some of children’s advantage in language learning comes from their previous knowledge and experience with language—and specifically, their lack of experience with written texts.
  • Hoey, E., & Kendrick, K. H. (2018). Conversation analysis. In A. M. B. De Groot, & P. Hagoort (Eds.), Research methods in psycholinguistics and the neurobiology of language: A practical guide (pp. 151-173). Hoboken: Wiley.

    Abstract

    Conversation Analysis (CA) is an inductive, micro-analytic, and predominantly qualitative method for studying human social interactions. This chapter describes and illustrates the basic methods of CA. We first situate the method by describing its sociological foundations, key areas of analysis, and particular approach in using naturally occurring data. The bulk of the chapter is devoted to practical explanations of the typical conversation analytic process for collecting data and producing an analysis. We analyze a candidate interactional practice – the assessment-implicative interrogative – using real data extracts as a demonstration of the method, explicitly laying out the relevant questions and considerations for every stage of an analysis. The chapter concludes with some discussion of quantitative approaches to conversational interaction, and links between CA and psycholinguistic concerns.
  • Hoey, E. (2018). How speakers continue with talk after a lapse in conversation. Research on Language and Social Interaction, 51(3), 329-346. doi:10.1080/08351813.2018.1485234.

    Abstract

    How do conversational participants continue with turn-by-turn talk after a momentary lapse? If all participants forgo the option to speak at possible sequence completion, an extended silence may emerge that can indicate a lack of anything to talk about next. For the interaction to proceed recognizably as a conversation, the postlapse turn needs to implicate more talk. Using conversation analysis, I examine three practical alternatives regarding sequentially implicative postlapse turns: Participants may move to end the interaction, continue with some prior matter, or start something new. Participants are shown using resources grounded in the interaction’s overall structural organization, the materials from the interaction-so-far, the mentionables they bring to interaction, and the situated environment itself. Comparing these alternatives, there’s suggestive quantitative evidence for a preference for continuation. The analysis of lapse resolution shows lapses as places for the management of multiple possible courses of action. Data are in U.S. and UK English.
  • Hömke, P., Holler, J., & Levinson, S. C. (2018). Eye blinks are perceived as communicative signals in human face-to-face interaction. PLoS One, 13(12): e0208030. doi:10.1371/journal.pone.0208030.

    Abstract

    In face-to-face communication, recurring intervals of mutual gaze allow listeners to provide speakers with visual feedback (e.g. nodding). Here, we investigate the potential feedback function of one of the subtlest of human movements—eye blinking. While blinking tends to be subliminal, the significance of mutual gaze in human interaction raises the question whether the interruption of mutual gaze through blinking may also be communicative. To answer this question, we developed a novel, virtual reality-based experimental paradigm, which enabled us to selectively manipulate blinking in a virtual listener, creating small differences in blink duration resulting in ‘short’ (208 ms) and ‘long’ (607 ms) blinks. We found that speakers unconsciously took into account the subtle differences in listeners’ blink duration, producing substantially shorter answers in response to long listener blinks. Our findings suggest that, in addition to physiological, perceptual and cognitive functions, listener blinks are also perceived as communicative signals, directly influencing speakers’ communicative behavior in face-to-face communication. More generally, these findings may be interpreted as shedding new light on the evolutionary origins of mental-state signaling, which is a crucial ingredient for achieving mutual understanding in everyday social interaction.

    Additional information

    Supporting information
  • Huisman, J. L. A., & Majid, A. (2018). Psycholinguistic variables matter in odor naming. Memory & Cognition, 46, 577-588. doi:10.3758/s13421-017-0785-1.

    Abstract

    People from Western societies generally find it difficult to name odors. In trying to explain this, the olfactory literature has proposed several theories that focus heavily on properties of the odor itself but rarely discuss properties of the label used to describe it. However, recent studies show speakers of languages with dedicated smell lexicons can name odors with relative ease. Has the role of the lexicon been overlooked in the olfactory literature? Word production studies show properties of the label, such as word frequency and semantic context, influence naming; but this field of research focuses heavily on the visual domain. The current study combines methods from both fields to investigate word production for olfaction in two experiments. In the first experiment, participants named odors whose veridical labels were either high-frequency or low-frequency words in Dutch, and we found that odors with high-frequency labels were named correctly more often. In the second experiment, edibility was used for manipulating semantic context in search of a semantic interference effect, presenting the odors in blocks of edible and inedible odor source objects to half of the participants. While no evidence was found for a semantic interference effect, an effect of word frequency was again present. Our results demonstrate psycholinguistic variables—such as word frequency—are relevant for olfactory naming, and may, in part, explain why it is difficult to name odors in certain languages. Olfactory researchers cannot afford to ignore properties of an odor’s label.
  • Janssen, R., Moisik, S. R., & Dediu, D. (2018). Agent model reveals the influence of vocal tract anatomy on speech during ontogeny and glossogeny. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 171-174). Toruń, Poland: NCU Press. doi:10.12775/3991-1.042.
  • Janssen, R., & Dediu, D. (2018). Genetic biases affecting language: What do computer models and experimental approaches suggest? In T. Poibeau, & A. Villavicencio (Eds.), Language, Cognition and Computational Models (pp. 256-288). Cambridge: Cambridge University Press.

    Abstract

    Computer models of cultural evolution have shown language properties emerging on interacting agents with a brain that lacks dedicated, nativist language modules. Notably, models using Bayesian agents provide a precise specification of (extra-)linguistic factors (e.g., genetic) that shape language through iterated learning (biases on language), and demonstrate that weak biases get expressed more strongly over time (bias amplification). Other models attempt to lessen assumptions on agents’ innate predispositions even more, and emphasize self-organization within agents, highlighting glossogenesis (the development of language from a nonlinguistic state). Ultimately however, one also has to recognize that biology and culture are strongly interacting, forming a coevolving system. As such, computer models show that agents might (biologically) evolve to a state predisposed to language adaptability, where (culturally) stable language features might get assimilated into the genome via Baldwinian niche construction. In summary, while many questions about language evolution remain unanswered, it is clear that it is not to be completely understood from a purely biological, cognitivist perspective. Language should be regarded as (partially) emerging on the social interactions between large populations of speakers. In this context, agent models provide a sound approach to investigate the complex dynamics of genetic biasing on language and speech.
  • Janssen, R., Moisik, S. R., & Dediu, D. (2018). Modelling human hard palate shape with Bézier curves. PLoS One, 13(2): e0191557. doi:10.1371/journal.pone.0191557.

    Abstract

    People vary at most levels, from the molecular to the cognitive, and the shape of the hard palate (the bony roof of the mouth) is no exception. The patterns of variation in the hard palate are important for the forensic sciences and (palaeo)anthropology, and might also play a role in speech production, both in pathological cases and normal variation. Here we describe a method based on Bézier curves, whose main aim is to generate possible shapes of the hard palate in humans for use in computer simulations of speech production and language evolution. Moreover, our method can also capture existing patterns of variation using few and easy-to-interpret parameters, and fits actual data obtained from MRI traces very well with as little as two or three free parameters. When compared to the widely-used Principal Component Analysis (PCA), our method fits actual data slightly worse for the same number of degrees of freedom. However, it is much better at generating new shapes without requiring a calibration sample, its parameters have clearer interpretations, and their ranges are grounded in geometrical considerations.
  • Janssen, R. (2018). Let the agents do the talking: On the influence of vocal tract anatomy on speech during ontogeny. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Kirsch, J. (2018). Listening for the WHAT and the HOW: Older adults' processing of semantic and affective information in speech. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Koch, X. (2018). Age and hearing loss effects on speech processing. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Kochari, A. R., & Ostarek, M. (2018). Introducing a replication-first rule for PhD projects (commentary on Zwaan et al., ‘Making replication mainstream’). Behavioral and Brain Sciences, 41: e138. doi:10.1017/S0140525X18000730.

    Abstract

    Zwaan et al. mention that young researchers should conduct replications as a small part of their portfolio. We extend this proposal and suggest that conducting and reporting replications should become an integral part of PhD projects and be taken into account in their assessment. We discuss how this would help not only scientific advancement, but also PhD candidates’ careers.
  • Kolipakam, V. (2018). A holistic approach to understanding pre-history. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Kolipakam, V., Jordan, F., Dunn, M., Greenhill, S. J., Bouckaert, R., Gray, R. D., & Verkerk, A. (2018). A Bayesian phylogenetic study of the Dravidian language family. Royal Society Open Science, 5: 171504. doi:10.1098/rsos.171504.

    Abstract

    The Dravidian language family consists of about 80 varieties (Hammarström H. 2016 Glottolog 2.7) spoken by 220 million people across southern and central India and surrounding countries (Steever SB. 1998 In The Dravidian languages (ed. SB Steever), pp. 1–39: 1). Neither the geographical origin of the Dravidian language homeland nor its exact dispersal through time are known. The history of these languages is crucial for understanding prehistory in Eurasia, because despite their current restricted range, these languages played a significant role in influencing other language groups including Indo-Aryan (Indo-European) and Munda (Austroasiatic) speakers. Here, we report the results of a Bayesian phylogenetic analysis of cognate-coded lexical data, elicited first hand from native speakers, to investigate the subgrouping of the Dravidian language family, and provide dates for the major points of diversification. Our results indicate that the Dravidian language family is approximately 4500 years old, a finding that corresponds well with earlier linguistic and archaeological studies. The main branches of the Dravidian language family (North, Central, South I, South II) are recovered, although the placement of languages within these main branches diverges from previous classifications. We find considerable uncertainty with regard to the relationships between the main branches.
  • Kouwenhoven, H., Ernestus, M., & Van Mulken, M. (2018). Register variation by Spanish users of English. The Nijmegen Corpus of Spanish English. Corpus Linguistics and Linguistic Theory, 14(1), 35-63. doi:10.1515/cllt-2013-0054.

    Abstract

    English serves as a lingua franca in situations with varying degrees of formality. How formality affects non-native speech has rarely been studied. We investigated register variation by Spanish users of English by comparing formal and informal speech from the Nijmegen Corpus of Spanish English that we created. This corpus comprises speech from thirty-four Spanish speakers of English in interaction with Dutch confederates in two speech situations. Formality affected the amount of laughter and overlapping speech and the number of Spanish words. Moreover, formal speech had a more informational character than informal speech. We discuss how our findings relate to register variation in Spanish.

  • Kung, C. (2018). Speech comprehension in a tone language: The role of lexical tone, context, and intonation in Cantonese-Chinese. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Lattenkamp, E. Z., Vernes, S. C., & Wiegrebe, L. (2018). Mammalian models for the study of vocal learning: A new paradigm in bats. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 235-237). Toruń, Poland: NCU Press. doi:10.12775/3991-1.056.
  • Lattenkamp, E. Z., & Vernes, S. C. (2018). Vocal learning: A language-relevant trait in need of a broad cross-species approach. Current Opinion in Behavioral Sciences, 21, 209-215. doi:10.1016/j.cobeha.2018.04.007.

    Abstract

    Although humans are unmatched in their capacity to produce speech and learn language, comparative approaches in diverse animal models are able to shed light on the biological underpinnings of language-relevant traits. In the study of vocal learning, a trait crucial for spoken language, passerine birds have been the dominant models, driving invaluable progress in understanding the neurobiology and genetics of vocal learning despite being only distantly related to humans. To date, there is sparse evidence that our closest relatives, nonhuman primates, have the capability to learn new vocalisations. However, a number of other mammals have shown the capacity for vocal learning, such as some cetaceans, pinnipeds, elephants, and bats, and we anticipate that with further study more species will gain membership to this (currently) select club. A broad, cross-species comparison of vocal learning, coupled with careful consideration of the components underlying this trait, is crucial to determine how human speech and spoken language is biologically encoded and how it evolved. We emphasise the need to draw on the pool of promising species that have thus far been understudied or neglected. This is by no means a call for fewer studies in songbirds, or an unfocused treasure-hunt, but rather an appeal for structured comparisons across a range of species, considering phylogenetic relationships, ecological and morphological constraints, developmental and social factors, and neurogenetic underpinnings. Herein, we promote a comparative approach highlighting the importance of studying vocal learning in a broad range of model species, and describe a common framework for targeted cross-taxon studies to shed light on the biology and evolution of vocal learning.
  • Lattenkamp, E. Z., Vernes, S. C., & Wiegrebe, L. (2018). Volitional control of social vocalisations and vocal usage learning in bats. Journal of Experimental Biology, 221(14): jeb.180729. doi:10.1242/jeb.180729.

    Abstract

    Bats are gregarious, highly vocal animals that possess a broad repertoire of social vocalisations. For in-depth studies of their vocal behaviours, including vocal flexibility and vocal learning, it is necessary to gather repeatable evidence from controlled laboratory experiments on isolated individuals. However, such studies are rare for one simple reason: eliciting social calls in isolation and under operant control is challenging and has rarely been achieved. To overcome this limitation, we designed an automated setup that allows conditioning of social vocalisations in a new context, and tracks spectro-temporal changes in the recorded calls over time. Using this setup, we were able to reliably evoke social calls from temporarily isolated lesser spear-nosed bats (Phyllostomus discolor). When we adjusted the call criteria that could result in food reward, bats responded by adjusting temporal and spectral call parameters. This was achieved without the help of an auditory template or social context to direct the bats. Our results demonstrate vocal flexibility and vocal usage learning in bats. Our setup provides a new paradigm that allows the controlled study of the production and learning of social vocalisations in isolated bats, overcoming limitations that have, until now, prevented in-depth studies of these behaviours.

    Additional information

    JEB180729supp.pdf
  • Lefever, E., Hendrickx, I., Croijmans, I., Van den Bosch, A., & Majid, A. (2018). Discovering the language of wine reviews: A text mining account. In N. Calzolari, K. Choukri, C. Cieri, T. Declerck, S. Goggi, K. Hasida, H. Isahara, B. Maegaard, J. Mariani, H. Mazo, A. Moreno, J. Odijk, S. Piperidis, & T. Tokunaga (Eds.), Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018) (pp. 3297-3302). Paris: LREC.

    Abstract

    It is widely held that smells and flavors are impossible to put into words. In this paper we test this claim by seeking predictive patterns in wine reviews, which ostensibly aim to provide guides to perceptual content. Wine reviews have previously been critiqued as random and meaningless. We collected an English corpus of wine reviews with their structured metadata, and applied machine learning techniques to automatically predict the wine's color, grape variety, and country of origin. To train the three supervised classifiers, three different information sources were incorporated: lexical bag-of-words features, domain-specific terminology features, and semantic word embedding features. In addition, using regression analysis we investigated basic review properties, i.e., review length, average word length, and their relationship to the scalar values of price and review score. Our results show that wine experts do share a common vocabulary to describe wines and they use this in a consistent way, which makes it possible to automatically predict wine characteristics based on the review text alone. This means that odors and flavors may be more expressible in language than typically acknowledged.
  • Lopopolo, A., Frank, S. L., Van den Bosch, A., Nijhof, A., & Willems, R. M. (2018). The Narrative Brain Dataset (NBD), an fMRI dataset for the study of natural language processing in the brain. In B. Devereux, E. Shutova, & C.-R. Huang (Eds.), Proceedings of LREC 2018 Workshop "Linguistic and Neuro-Cognitive Resources (LiNCR) (pp. 8-11). Paris: LREC.

    Abstract

    We present the Narrative Brain Dataset, an fMRI dataset that was collected during spoken presentation of short excerpts of three stories in Dutch. Together with the brain imaging data, the dataset contains the written versions of the stimulation texts. The texts are accompanied with stochastic (perplexity and entropy) and semantic computational linguistic measures. The richness and unconstrained nature of the data allows the study of language processing in the brain in a more naturalistic setting than is common for fMRI studies. We hope that by making NBD available we serve the double purpose of providing useful neural data to researchers interested in natural language processing in the brain and to further stimulate data sharing in the field of neuroscience of language.
  • Lupyan, G., Wendorf, A., Berscia, L. M., & Paul, J. (2018). Core knowledge or language-augmented cognition? The case of geometric reasoning. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 252-254). Toruń, Poland: NCU Press. doi:10.12775/3991-1.062.
  • Lutzenberger, H. (2018). Manual and nonmanual features of name signs in Kata Kolok and sign language of the Netherlands. Sign Language Studies, 18(4), 546-569. doi:10.1353/sls.2018.0016.

    Abstract

    Name signs are based on descriptions, initialization, and loan translations. Nyst and Baker (2003) have found crosslinguistic similarities in the phonology of name signs, such as a preference for one-handed signs and for the head location. Studying Kata Kolok (KK), a rural sign language without indigenous fingerspelling, strongly suggests that one-handedness is not correlated to initialization, but represents a more general feature of name sign phonology. Like in other sign languages, the head location is used frequently in both KK and Sign Language of the Netherlands (NGT) name signs. The use of nonmanuals, however, is strikingly different. NGT name signs are always accompanied by mouthings, which are absent in KK. Instead, KK name signs may use mouth gestures; these may disambiguate manually identical name signs, and even form independent name signs without any manual features.
  • Mainz, N. (2018). Vocabulary knowledge and learning: Individual differences in adult native speakers. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Mamus, E., & Karadöller, D. Z. (2018). Anıları Zihinde Canlandırma [Imagery in autobiographical memories]. In S. Gülgöz, B. Ece, & S. Öner (Eds.), Hayatı Hatırlamak: Otobiyografik Belleğe Bilimsel Yaklaşımlar [Remembering Life: Scientific Approaches to Autobiographical Memory] (pp. 185-200). Istanbul, Turkey: Koç University Press.
  • Merkx, D., & Scharenborg, O. (2018). Articulatory feature classification using convolutional neural networks. In Proceedings of Interspeech 2018 (pp. 2142-2146). doi:10.21437/Interspeech.2018-2275.

    Abstract

    The ultimate goal of our research is to improve an existing speech-based computational model of human speech recognition on the task of simulating the role of fine-grained phonetic information in human speech processing. As part of this work we are investigating articulatory feature classifiers that are able to create reliable and accurate transcriptions of the articulatory behaviour encoded in the acoustic speech signal. Articulatory feature (AF) modelling of speech has received a considerable amount of attention in automatic speech recognition research. Different approaches have been used to build AF classifiers, most notably multi-layer perceptrons. Recently, deep neural networks have been applied to the task of AF classification. This paper aims to improve AF classification by investigating two different approaches: 1) investigating the usefulness of a deep Convolutional neural network (CNN) for AF classification; 2) integrating the Mel filtering operation into the CNN architecture. The results showed a remarkable improvement in classification accuracy of the CNNs over state-of-the-art AF classification results for Dutch, most notably in the minority classes. Integrating the Mel filtering operation into the CNN architecture did not further improve classification performance.
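    The paper's second approach folds the Mel filtering operation into the CNN itself. Since a Mel filterbank is a fixed linear map from spectrum bins to Mel channels, it can be expressed as a matrix applied to the power spectrum. The sketch below builds such a triangular filterbank in NumPy; this is the standard textbook construction, not the authors' implementation, and all parameter values are illustrative.

```python
import numpy as np

def mel(f):
    # Hz -> mel (O'Shaughnessy formula)
    return 2595.0 * np.log10(1.0 + f / 700.0)

def inv_mel(m):
    # mel -> Hz
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters=40, n_fft=512, sr=16000):
    """Triangular Mel filterbank matrix, shape (n_filters, n_fft//2 + 1)."""
    # Filter edges equally spaced on the mel scale, mapped back to FFT bins
    m_points = np.linspace(mel(0.0), mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * inv_mel(m_points) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):                      # rising slope
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):                      # falling slope
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

fb = mel_filterbank()
print(fb.shape)  # (40, 257)
```

    Inside a network, such a matrix can be used as a fixed (or trainable) linear layer acting on spectrogram frames, which is one way to realise the "Mel filtering inside the CNN" idea the abstract describes.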
  • Mostert, P., Albers, A. M., Brinkman, L., Todorova, L., Kok, P., & De Lange, F. P. (2018). Eye movement-related confounds in neural decoding of visual working memory representations. eNeuro, 5(4): ENEURO.0401-17.2018. doi:10.1523/ENEURO.0401-17.2018.

    Abstract

    A relatively new analysis technique, known as neural decoding or multivariate pattern analysis (MVPA), has become increasingly popular for cognitive neuroimaging studies over recent years. These techniques promise to uncover the representational contents of neural signals, as well as the underlying code and the dynamic profile thereof. A field in which these techniques have led to novel insights in particular is that of visual working memory (VWM). In the present study, we subjected human volunteers to a combined VWM/imagery task while recording their neural signals using magnetoencephalography (MEG). We applied multivariate decoding analyses to uncover the temporal profile underlying the neural representations of the memorized item. Analysis of gaze position however revealed that our results were contaminated by systematic eye movements, suggesting that the MEG decoding results from our originally planned analyses were confounded. In addition to the eye movement analyses, we also present the original analyses to highlight how these might have readily led to invalid conclusions. Finally, we demonstrate a potential remedy, whereby we train the decoders on a functional localizer that was specifically designed to target bottom-up sensory signals and as such avoids eye movements. We conclude by arguing for more awareness of the potentially pervasive and ubiquitous effects of eye movement-related confounds.
  • Ostarek, M. (2018). Envisioning language: An exploration of perceptual processes in language comprehension. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Ostarek, M., Ishag, I., Joosen, D., & Huettig, F. (2018). Saccade trajectories reveal dynamic interactions of semantic and spatial information during the processing of implicitly spatial words. Journal of Experimental Psychology: Learning, Memory, and Cognition, 44(10), 1658-1670. doi:10.1037/xlm0000536.

    Abstract

    Implicit up/down words, such as bird and foot, systematically influence performance on visual tasks involving immediately following targets in compatible vs. incompatible locations. Recent studies have observed that the semantic relation between prime words and target pictures can strongly influence the size and even the direction of the effect: Semantically related targets are processed faster in congruent vs. incongruent locations (location-specific priming), whereas unrelated targets are processed slower in congruent locations. Here, we used eye-tracking to investigate the moment-to-moment processes underlying this pattern. Our reaction time results for related targets replicated the location-specific priming effect and showed a trend towards interference for unrelated targets. We then used growth curve analysis to test how up/down words and their match vs. mismatch with immediately following targets in terms of semantics and vertical location influences concurrent saccadic eye movements. There was a strong main effect of spatial association on linear growth with up words biasing changes in y-coordinates over time upwards relative to down words (and vice versa). Similar to the RT data, this effect was strongest for semantically related targets and reversed for unrelated targets. Intriguingly, all conditions showed a bias in the congruent direction in the initial stage of the saccade. Then, at around halfway into the saccade the effect kept increasing in the semantically related condition, and reversed in the unrelated condition. These results suggest that online processing of up/down words triggers direction-specific oculomotor processes that are dynamically modulated by the semantic relation between prime words and targets.
  • Piepers, J., & Redl, T. (2018). Gender-mismatching pronouns in context: The interpretation of possessive pronouns in Dutch and Limburgian. In B. Le Bruyn, & J. Berns (Eds.), Linguistics in the Netherlands 2018 (pp. 97-110). Amsterdam: Benjamins.

    Abstract

    Gender-(mis)matching pronouns have been studied extensively in experiments. However, a phenomenon common to various languages has thus far been overlooked: the systemic use of non-feminine pronouns when referring to female individuals. The present study is the first to provide experimental insights into the interpretation of such a pronoun: Limburgian zien ‘his/its’ and Dutch zijn ‘his/its’ are grammatically ambiguous between masculine and neuter, but while Limburgian zien can refer to women, the Dutch equivalent zijn cannot. Employing an acceptability judgment task, we presented speakers of Limburgian (N = 51) with recordings of sentences in Limburgian featuring zien, and speakers of Dutch (N = 52) with Dutch translations of these sentences featuring zijn. All sentences featured a potential male or female antecedent embedded in a stereotypically male or female context. We found that ratings were higher for sentences in which the pronoun could refer back to the antecedent. For Limburgians, this extended to sentences mentioning female individuals. Context further modulated sentence appreciation. Possible mechanisms regarding the interpretation of zien as coreferential with a female individual will be discussed.
  • Popov, V., Ostarek, M., & Tenison, C. (2018). Practices and pitfalls in inferring neural representations. NeuroImage, 174, 340-351. doi:10.1016/j.neuroimage.2018.03.041.

    Abstract

    A key challenge for cognitive neuroscience is deciphering the representational schemes of the brain. Stimulus-feature-based encoding models are becoming increasingly popular for inferring the dimensions of neural representational spaces from stimulus-feature spaces. We argue that such inferences are not always valid because successful prediction can occur even if the two representational spaces use different, but correlated, representational schemes. We support this claim with three simulations in which we achieved high prediction accuracy despite systematic differences in the geometries and dimensions of the underlying representations. Detailed analysis of the encoding models' predictions showed systematic deviations from ground-truth, indicating that high prediction accuracy is insufficient for making representational inferences. This fallacy applies to the prediction of actual neural patterns from stimulus-feature spaces and we urge caution in inferring the nature of the neural code from such methods. We discuss ways to overcome these inferential limitations, including model comparison, absolute model performance, visualization techniques and attentional modulation.
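    The core point, that successful prediction can occur even when the neural code uses different but correlated dimensions, can be illustrated with a toy simulation in the spirit of those in the paper. This NumPy sketch is our own illustration, not the authors' code: the simulated "neural" patterns are coded in latent dimensions L, yet an encoding model fit on a merely correlated feature space S still predicts them well.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stim, n_dim, n_vox = 500, 8, 40

# "True" representational dimensions used by the simulated brain
L = rng.standard_normal((n_stim, n_dim))
# Researcher's stimulus-feature space: merely *correlated* with L
S = L + 0.3 * rng.standard_normal((n_stim, n_dim))
# Simulated neural patterns are coded in L, not in S
N = L @ rng.standard_normal((n_dim, n_vox))

# Fit a linear encoding model from S and evaluate on held-out stimuli
train, test = slice(0, 400), slice(400, 500)
B, *_ = np.linalg.lstsq(S[train], N[train], rcond=None)
N_hat = S[test] @ B

# Mean per-voxel prediction correlation on the test set
r = np.mean([np.corrcoef(N[test][:, v], N_hat[:, v])[0, 1]
             for v in range(n_vox)])
print(float(r) > 0.8)  # prints True: high accuracy despite the "wrong" feature space
```

    High prediction accuracy here would license the (incorrect) inference that the neural space is organised along the dimensions of S, which is exactly the fallacy the paper warns against.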
  • Raviv, L., & Arnon, I. (2018). Systematicity, but not compositionality: Examining the emergence of linguistic structure in children and adults using iterated learning. Cognition, 181, 160-173. doi:10.1016/j.cognition.2018.08.011.

    Abstract

    Recent work suggests that cultural transmission can lead to the emergence of linguistic structure as speakers’ weak individual biases become amplified through iterated learning. However, to date no published study has demonstrated a similar emergence of linguistic structure in children. The lack of evidence from child learners constitutes a problematic gap in the literature: if such learning biases impact the emergence of linguistic structure, they should also be found in children, who are the primary learners in real-life language transmission. However, children may differ from adults in their biases given age-related differences in general cognitive skills. Moreover, adults’ performance on iterated learning tasks may reflect existing (and explicit) linguistic biases, partially undermining the generality of the results. Examining children’s performance can also help evaluate contrasting predictions about their role in emerging languages: do children play a larger or smaller role than adults in the creation of structure? Here, we report a series of four iterated artificial language learning studies (based on Kirby, Cornish & Smith, 2008) with both children and adults, using a novel child-friendly paradigm. Our results show that linguistic structure does not emerge more readily in children compared to adults, and that adults are overall better in both language learning and in creating linguistic structure. When languages could become underspecified (by allowing homonyms), children and adults were similar in developing consistent mappings between meanings and signals in the form of structured ambiguities. However, when homonymy was not allowed, only adults created compositional structure. This study is a first step in using iterated language learning paradigms to explore child-adult differences. It provides the first demonstration that cultural transmission has a different effect on the languages produced by children and adults: While children were able to develop systematicity, their languages did not show compositionality. We focus on the relation between learning and structure creation as a possible explanation for our findings and discuss implications for children’s role in the emergence of linguistic structure.

    Additional information

    results A results B results D stimuli
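    The bias-amplification premise behind iterated-learning work like the study above — weak individual biases becoming amplified across generations of learners — can be sketched in a few lines. This toy simulation is hypothetical (it is not the authors' child-friendly paradigm): each learner probability-matches its input through a small transmission bottleneck, plus a weak bias toward regular forms.

```python
import random

random.seed(1)

BIAS = 0.05    # weak per-learner bias toward regularity
SAMPLE = 20    # transmission bottleneck: forms observed per generation

def next_generation(p_regular):
    # Learner estimates the proportion of regular forms from a small sample,
    # then nudges that estimate upward by its weak bias
    observed = sum(random.random() < p_regular for _ in range(SAMPLE))
    return min(1.0, observed / SAMPLE + BIAS)

p = 0.5        # initial language: half of the forms are regular
for _ in range(50):
    p = next_generation(p)
print(round(p, 2))  # weak bias amplified toward full regularity
```

    Because each generation's small upward nudge compounds across the transmission chain, the language drifts toward regularity far beyond what any single learner's bias would produce, which is the amplification effect the abstract refers to.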
  • Raviv, L., & Arnon, I. (2018). The developmental trajectory of children’s auditory and visual statistical learning abilities: Modality-based differences in the effect of age. Developmental Science, 21(4): e12593. doi:10.1111/desc.12593.

    Abstract

    Infants, children and adults are capable of extracting recurring patterns from their environment through statistical learning (SL), an implicit learning mechanism that is considered to have an important role in language acquisition. Research over the past 20 years has shown that SL is present from very early infancy and found in a variety of tasks and across modalities (e.g., auditory, visual), raising questions on the domain generality of SL. However, while SL is well established for infants and adults, only little is known about its developmental trajectory during childhood, leaving two important questions unanswered: (1) Is SL an early-maturing capacity that is fully developed in infancy, or does it improve with age like other cognitive capacities (e.g., memory)? and (2) Will SL have similar developmental trajectories across modalities? Only few studies have looked at SL across development, with conflicting results: some find age-related improvements while others do not. Importantly, no study to date has examined auditory SL across childhood, nor compared it to visual SL to see if there are modality-based differences in the developmental trajectory of SL abilities. We addressed these issues by conducting a large-scale study of children's performance on matching auditory and visual SL tasks across a wide age range (5–12y). Results show modality-based differences in the development of SL abilities: while children's learning in the visual domain improved with age, learning in the auditory domain did not change in the tested age range. We examine these findings in light of previous studies and discuss their implications for modality-based differences in SL and for the role of auditory SL in language acquisition. A video abstract of this article can be viewed at: https://www.youtube.com/watch?v=3kg35hoF0pw.

    Additional information

    Video abstract of the article
  • Raviv, L., Meyer, A. S., & Lev-Ari, S. (2018). The role of community size in the emergence of linguistic structure. In C. Cuskley, M. Flaherty, H. Little, L. McCrohon, A. Ravignani, & T. Verhoef (Eds.), Proceedings of the 12th International Conference on the Evolution of Language (EVOLANG XII) (pp. 402-404). Toruń, Poland: NCU Press. doi:10.12775/3991-1.096.
  • Redl, T., Eerland, A., & Sanders, T. J. M. (2018). The processing of the Dutch masculine generic zijn ‘his’ across stereotype contexts: An eye-tracking study. PLoS One, 13(10): e0205903. doi:10.1371/journal.pone.0205903.

    Abstract

    Language users often infer a person’s gender when it is not explicitly mentioned. This information is included in the mental model of the described situation, giving rise to expectations regarding the continuation of the discourse. Such gender inferences can be based on two types of information: gender stereotypes (e.g., nurses are female) and masculine generics, which are grammatically masculine word forms that are used to refer to all genders in certain contexts (e.g., To each his own). In this eye-tracking experiment (N = 82), which is the first to systematically investigate the online processing of masculine generic pronouns, we tested whether the frequently used Dutch masculine generic zijn ‘his’ leads to a male bias. In addition, we tested the effect of context by introducing male, female, and neutral stereotypes. We found no evidence for the hypothesis that the generically-intended masculine pronoun zijn ‘his’ results in a male bias. However, we found an effect of stereotype context. After introducing a female stereotype, reading about a man led to an increase in processing time. However, the reverse did not hold, which parallels the finding in social psychology that men are penalized more for gender-nonconforming behavior. This suggests that language processing is not only affected by the strength of stereotype contexts; the associated disapproval of violating these gender stereotypes affects language processing, too.

    Additional information

    pone.0205903.s001.pdf data files
  • Rietbergen, M., Roelofs, A., Den Ouden, H., & Cools, R. (2018). Disentangling cognitive from motor control: Influence of response modality on updating, inhibiting, and shifting. Acta Psychologica, 191, 124-130. doi:10.1016/j.actpsy.2018.09.008.

    Abstract

    It is unclear whether cognitive and motor control are parallel and interactive or serial and independent processes. According to one view, cognitive control refers to a set of modality-nonspecific processes that act on supramodal representations and precede response modality-specific motor processes. An alternative view is that cognitive control represents a set of modality-specific operations that act directly on motor-related representations, implying dependence of cognitive control on motor control. Here, we examined the influence of response modality (vocal vs. manual) on three well-established subcomponent processes of cognitive control: shifting, inhibiting, and updating. We observed effects of all subcomponent processes in reaction times. The magnitude of these effects did not differ between response modalities for shifting and inhibiting, in line with a serial, supramodal view. However, the magnitude of the updating effect differed between modalities, in line with an interactive, modality-specific view. These results suggest that updating represents a modality-specific operation that depends on motor control, whereas shifting and inhibiting represent supramodal operations that act independently of motor control.
  • Scharenborg, O., & Merkx, D. (2018). The role of articulatory feature representation quality in a computational model of human spoken-word recognition. In Proceedings of the Machine Learning in Speech and Language Processing Workshop (MLSLP 2018).

    Abstract

    Fine-Tracker is a speech-based model of human speech recognition. While previous work has shown that Fine-Tracker is successful at modelling aspects of human spoken-word recognition, its speech recognition performance is not comparable to that of human performance, possibly due to suboptimal intermediate articulatory feature (AF) representations. This study investigates the effect of improved AF representations, obtained using a state-of-the-art deep convolutional network, on Fine-Tracker's simulation and recognition performance. Although the improved AF quality resulted in improved speech recognition, it, surprisingly, did not lead to an improvement in Fine-Tracker's simulation power.
  • Schoenmakers, G.-J., & Piepers, J. (2018). Echter kan het wel. Levende Talen Magazine, 105(4), 10-13.
  • Shitova, N. (2018). Electrophysiology of competition and adjustment in word and phrase production. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Sikora, K. (2018). Executive control in language production by adults and children with and without language impairment. PhD Thesis, Radboud University, Nijmegen, The Netherlands.

    Abstract

    The present study examined how the updating, inhibiting, and shifting abilities underlying executive control influence spoken noun-phrase production. Previous studies provided evidence that updating and inhibiting, but not shifting, influence picture naming response time (RT). However, little is known about the role of executive control in more complex forms of language production like generating phrases. We assessed noun-phrase production using picture description and a picture-word interference procedure. We measured picture description RT to assess length, distractor, and switch effects, which were assumed to reflect, respectively, the updating, inhibiting, and shifting abilities of adult participants. Moreover, for each participant we obtained scores on executive control tasks that measured verbal and nonverbal updating, nonverbal inhibiting, and nonverbal shifting. We found that both verbal and nonverbal updating scores correlated with the overall mean picture description RTs. Furthermore, the length effect in the RTs correlated with verbal but not nonverbal updating scores, while the distractor effect correlated with inhibiting scores. We did not find a correlation between the switch effect in the mean RTs and the shifting scores. However, the shifting scores correlated with the switch effect in the normal part of the underlying RT distribution. These results suggest that updating, inhibiting, and shifting each influence the speed of phrase production, thereby demonstrating a contribution of all three executive control abilities to language production.

    Additional information

    full text via Radboud Repository
  • Sikora, K., & Roelofs, A. (2018). Switching between spoken language-production tasks: the role of attentional inhibition and enhancement. Language, Cognition and Neuroscience, 33(7), 912-922. doi:10.1080/23273798.2018.1433864.

    Abstract

    Since Pillsbury [1908. Attention. London: Swan Sonnenschein & Co], the issue of whether attention operates through inhibition or enhancement has been on the scientific agenda. We examined whether overcoming previous attentional inhibition or enhancement is the source of asymmetrical switch costs in spoken noun-phrase production and colour-word Stroop tasks. In Experiment 1, using bivalent stimuli, we found asymmetrical costs in response times for switching between long and short phrases and between Stroop colour naming and reading. However, in Experiment 2, using bivalent stimuli for the weaker tasks (long phrases, colour naming) and univalent stimuli for the stronger tasks (short phrases, word reading), we obtained an asymmetrical switch cost for phrase production, but a symmetrical cost for Stroop. The switch cost evidence was quantified using Bayesian statistical analyses. Our findings suggest that switching between phrase types involves inhibition, whereas switching between colour naming and reading involves enhancement. Thus, the attentional mechanism depends on the language-production task involved. The results challenge theories of task switching that assume only one attentional mechanism, inhibition or enhancement, rather than both mechanisms.
  • Stoehr, A., Benders, T., Van Hell, J. G., & Fikkert, P. (2018). Heritage language exposure impacts voice onset time of Dutch–German simultaneous bilingual preschoolers. Bilingualism: Language and Cognition, 21(3), 598-617. doi:10.1017/S1366728917000116.

    Abstract

    This study assesses the effects of age and language exposure on VOT production in 29 simultaneous bilingual children aged 3;7 to 5;11 who speak German as a heritage language in the Netherlands. Dutch and German have a binary voicing contrast, but the contrast is implemented with different VOT values in the two languages. The results suggest that bilingual children produce ‘voiced’ plosives similarly in their two languages, and these productions are not monolingual-like in either language. Bidirectional cross-linguistic influence between Dutch and German can explain these results. Yet, the bilinguals seemingly have two autonomous categories for Dutch and German ‘voiceless’ plosives. In German, the bilinguals’ aspiration is not monolingual-like, but bilinguals with more heritage language exposure produce more target-like aspiration. Importantly, the amount of exposure to German has no effect on the majority language's ‘voiceless’ category. This implies that more heritage language exposure is associated with more language-specific voicing systems.
  • Stoehr, A. (2018). Speech production, perception, and input of simultaneous bilingual preschoolers: Evidence from voice onset time. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Thorin, J., Sadakata, M., Desain, P., & McQueen, J. M. (2018). Perception and production in interaction during non-native speech category learning. The Journal of the Acoustical Society of America, 144(1), 92-103. doi:10.1121/1.5044415.

    Abstract

    Establishing non-native phoneme categories can be a notoriously difficult endeavour—in both speech perception and speech production. This study asks how these two domains interact in the course of this learning process. It investigates the effect of perceptual learning and related production practice of a challenging non-native category on the perception and/or production of that category. A four-day perceptual training protocol on the British English /æ/-/ɛ/ vowel contrast was combined with either related or unrelated production practice. After feedback on perceptual categorisation of the contrast, native Dutch participants in the related production group (N = 19) pronounced the trial's correct answer, while participants in the unrelated production group (N = 19) pronounced similar but phonologically unrelated words. Comparison of pre- and post-tests showed significant improvement over the course of training in both perception and production, but no differences between the groups were found. The lack of an effect of production practice is discussed in the light of previous, competing results and models of second-language speech perception and production. This study confirms that, even in the context of related production practice, perceptual training boosts production learning.
  • Tribushinina, E., Mak, M., Dubinkina, E., & Mak, W. M. (2018). Adjective production by Russian-speaking children with developmental language disorder and Dutch–Russian simultaneous bilinguals: Disentangling the profiles. Applied Psycholinguistics, 39(5), 1033-1064. doi:10.1017/S0142716418000115.

    Abstract

    Bilingual children with reduced exposure to one or both languages may have language profiles that are apparently similar to those of children with developmental language disorder (DLD). Children with DLD receive enough input, but have difficulty using this input for acquisition due to processing deficits. The present investigation aims to determine aspects of adjective production that are differentially affected by reduced input (in bilingualism) and reduced intake (in DLD). Adjectives were elicited from Dutch–Russian simultaneous bilinguals with limited exposure to Russian and Russian-speaking monolinguals with and without DLD. An antonym elicitation task was used to assess the size of adjective vocabularies, and a degree task was employed to compare the preferences of the three groups in the use of morphological, lexical, and syntactic degree markers. The results revealed that adjective–noun agreement is affected to the same extent by both reduced input and reduced intake. The size of adjective lexicons is also negatively affected by both, but more so by reduced exposure. However, production of morphological degree markers and learning of semantic paradigms are areas of relative strength in which bilinguals outperform monolingual children with DLD. We suggest that reduced input might be counterbalanced by linguistic and cognitive advantages of bilingualism.
  • Tromp, J. (2018). Indirect request comprehension in different contexts. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Tromp, J., Peeters, D., Meyer, A. S., & Hagoort, P. (2018). The combined use of Virtual Reality and EEG to study language processing in naturalistic environments. Behavior Research Methods, 50(2), 862-869. doi:10.3758/s13428-017-0911-9.

    Abstract

    When we comprehend language, we often do this in rich settings in which we can use many cues to understand what someone is saying. However, it has traditionally been difficult to design experiments with rich three-dimensional contexts that resemble our everyday environments, while maintaining control over the linguistic and non-linguistic information that is available. Here we test the validity of combining electroencephalography (EEG) and Virtual Reality (VR) to overcome this problem. We recorded electrophysiological brain activity during language processing in a well-controlled three-dimensional virtual audiovisual environment. Participants were immersed in a virtual restaurant, while wearing EEG equipment. In the restaurant participants encountered virtual restaurant guests. Each guest was seated at a separate table with an object on it (e.g. a plate with salmon). The restaurant guest would then produce a sentence (e.g. “I just ordered this salmon.”). The noun in the spoken sentence could either match (“salmon”) or mismatch (“pasta”) with the object on the table, creating a situation in which the auditory information was either appropriate or inappropriate in the visual context. We observed a reliable N400 effect as a consequence of the mismatch. This finding validates the combined use of VR and EEG as a tool to study the neurophysiological mechanisms of everyday language comprehension in rich, ecologically valid settings.
  • Trompenaars, T. (2018). Empathy for the inanimate. Linguistics in the Netherlands, 35, 125-138. doi:10.1075/avt.00009.tro.

    Abstract

    Narrative fiction may invite us to share the perspective of characters which are very much unlike ourselves. Inanimate objects featuring as protagonists or narrators are an extreme example of this. The way readers experience these characters was examined by means of a narrative immersion study. Participants (N = 200) judged narratives containing animate or inanimate characters in predominantly Agent or Experiencer roles. Narratives with inanimate characters were judged to be less emotionally engaging. This effect was influenced by the dominant thematic role associated with the character: inanimate Agents led to more defamiliarization compared to their animate counterparts than inanimate Experiencers. I argue for an integrated account of thematic roles and animacy in literary experience and linguistics in general.
  • Trompenaars, T., Hogeweg, L., Stoop, W., & De Hoop, H. (2018). The language of an inanimate narrator. Open Linguistics, 4, 707-721. doi:10.1515/opli-2018-0034.

    Abstract

    We show by means of a corpus study that the language used by the inanimate first person narrator in the novel Specht en zoon deviates from what we would expect on the basis of the fact that the narrator is inanimate, but at the same time also differs from the language of a human narrator in the novel De wijde blik on several linguistic dimensions. Whereas the human narrator is associated strongly with action verbs, preferring the Agent role, the inanimate narrator is much more limited to the Experiencer role, predominantly associated with cognition and sensory verbs. Our results show that animacy as a linguistic concept may be refined by taking into account the myriad ways in which an entity's conceptual animacy may be expressed: we accept the conceptual animacy of the inanimate narrator despite its inability to act on its environment, showing this need not be a requirement for animacy.
  • Trujillo, J. P., Simanova, I., Bekkering, H., & Ozyurek, A. (2018). Communicative intent modulates production and perception of actions and gestures: A Kinect study. Cognition, 180, 38-51. doi:10.1016/j.cognition.2018.04.003.

    Abstract

    Actions may be used to directly act on the world around us, or as a means of communication. Effective communication requires the addressee to recognize the act as being communicative. Humans are sensitive to ostensive communicative cues, such as direct eye gaze (Csibra & Gergely, 2009). However, there may be additional cues present in the action or gesture itself. Here we investigate features that characterize the initiation of a communicative interaction in both production and comprehension.

    We asked 40 participants to perform 31 pairs of object-directed actions and representational gestures in more- or less-communicative contexts. Data were collected using motion capture technology for kinematics and video recording for eye-gaze. With these data, we focused on two issues. First, if and how actions and gestures are systematically modulated when performed in a communicative context. Second, if observers exploit such kinematic information to classify an act as communicative.

    Our study showed that during production the communicative context modulates space–time dimensions of kinematics and elicits an increase in addressee-directed eye-gaze. Naïve participants detected communicative intent in actions and gestures preferentially using eye-gaze information, only utilizing kinematic information when eye-gaze was unavailable.

    Our study highlights the general communicative modulation of action and gesture kinematics during production but also shows that addressees only exploit this modulation to recognize communicative intention in the absence of eye-gaze. We discuss these findings in terms of distinctive but potentially overlapping functions of addressee directed eye-gaze and kinematic modulations within the wider context of human communication and learning.
  • Ung, D. C., Iacono, G., Méziane, H., Blanchard, E., Papon, M.-A., Selten, M., Van Rhijn, J. R., Montjean, R., Rucci, J., Martin, S., Fleet, A., Birling, M.-C., Marouillat, S., Roepman, R., Selloum, M., Lux, A., Thépault, R.-A., Hamel, P., Mittal, K., Vincent, J. B., Dorseuil, O., Stunnenberg, H. G., Billuart, P., Nadif Kasri, N., Hérault, Y., & Laumonnier, F. (2018). Ptchd1 deficiency induces excitatory synaptic and cognitive dysfunctions in mouse. Molecular Psychiatry, 23, 1356-1367. doi:10.1038/mp.2017.39.

    Abstract

    Synapse development and neuronal activity represent fundamental processes for the establishment of cognitive function. Structural organization as well as signalling pathways from receptor stimulation to gene expression regulation are mediated by synaptic activity and misregulated in neurodevelopmental disorders such as autism spectrum disorder (ASD) and intellectual disability (ID). Deleterious mutations in the PTCHD1 (Patched domain containing 1) gene have been described in male patients with X-linked ID and/or ASD. The structure of PTCHD1 protein is similar to the Patched (PTCH1) receptor; however, the cellular mechanisms and pathways associated with PTCHD1 in the developing brain are poorly determined. Here we show that PTCHD1 displays a C-terminal PDZ-binding motif that binds to the postsynaptic proteins PSD95 and SAP102. We also report that PTCHD1 is unable to rescue the canonical sonic hedgehog (SHH) pathway in cells depleted of PTCH1, suggesting that both proteins are involved in distinct cellular signalling pathways. We find that Ptchd1 deficiency in male mice (Ptchd1−/y) induces global changes in synaptic gene expression, affects the expression of the immediate-early expression genes Egr1 and Npas4 and finally impairs excitatory synaptic structure and neuronal excitatory activity in the hippocampus, leading to cognitive dysfunction, motor disabilities and hyperactivity. Thus our results support that PTCHD1 deficiency induces a neurodevelopmental disorder causing excitatory synaptic dysfunction.

    Additional information

    mp201739x1.pdf
  • Van Rhijn, J. R., Fisher, S. E., Vernes, S. C., & Nadif Kasri, N. (2018). Foxp2 loss of function increases striatal direct pathway inhibition via increased GABA release. Brain Structure and Function, 223(9), 4211-4226. doi:10.1007/s00429-018-1746-6.

    Abstract

    Heterozygous mutations of the Forkhead-box protein 2 (FOXP2) gene in humans cause childhood apraxia of speech. Loss of Foxp2 in mice is known to affect striatal development and impair motor skills. However, it is unknown if striatal excitatory/inhibitory balance is affected during development and if the imbalance persists into adulthood. We investigated the effect of reduced Foxp2 expression, via a loss-of-function mutation, on striatal medium spiny neurons (MSNs). Our data show that heterozygous loss of Foxp2 decreases excitatory (AMPA receptor-mediated) and increases inhibitory (GABA receptor-mediated) currents in D1 dopamine receptor positive MSNs of juvenile and adult mice. Furthermore, reduced Foxp2 expression increases GAD67 expression, leading to both increased presynaptic content and release of GABA. Finally, pharmacological blockade of inhibitory activity in vivo partially rescues motor skill learning deficits in heterozygous Foxp2 mice. Our results suggest a novel role for Foxp2 in the regulation of striatal direct pathway activity through managing inhibitory drive.

    Additional information

    429_2018_1746_MOESM1_ESM.docx
  • Van Campen, A. D., Kunert, R., Van den Wildenberg, W. P. M., & Ridderinkhof, K. R. (2018). Repetitive transcranial magnetic stimulation over inferior frontal cortex impairs the suppression (but not expression) of action impulses during action conflict. Psychophysiology, 55(3): e13003. doi:10.1111/psyp.13003.

    Abstract

    In the recent literature, the effects of noninvasive neurostimulation on cognitive functioning appear to lack consistency and replicability. We propose that such effects may be concealed unless dedicated, sensitive, and process-specific dependent measures are used. The expression and subsequent suppression of response capture are often studied using conflict tasks. Response-time distribution analyses have been argued to provide specific measures of the susceptibility to make fast impulsive response errors, as well as the proficiency of the selective suppression of these impulses. These measures of response capture and response inhibition are particularly sensitive to experimental manipulations and clinical deficiencies that are typically obfuscated in commonly used overall performance analyses. Recent work using structural and functional imaging techniques links these behavioral outcome measures to the integrity of frontostriatal networks. These studies suggest that the presupplementary motor area (pre-SMA) is linked to the susceptibility to response capture whereas the right inferior frontal cortex (rIFC) is associated with the selective suppression of action impulses. Here, we used repetitive transcranial magnetic stimulation (rTMS) to test the causal involvement of these two cortical areas in response capture and inhibition in the Simon task. Disruption of rIFC function specifically impaired selective suppression of conflicting action tendencies, whereas the anticipated increase of fast impulsive errors after perturbing pre-SMA function was not confirmed. These results provide a proof of principle of the notion that the selection of appropriate dependent measures is perhaps crucial to establish the effects of neurostimulation on specific cognitive functions.
  • Vanlangendonck, F., Takashima, A., Willems, R. M., & Hagoort, P. (2018). Distinguishable memory retrieval networks for collaboratively and non-collaboratively learned information. Neuropsychologia, 111, 123-132. doi:10.1016/j.neuropsychologia.2017.12.008.

    Abstract

    Learning often occurs in communicative and collaborative settings, yet almost all research into the neural basis of memory relies on participants encoding and retrieving information on their own. We investigated whether learning linguistic labels in a collaborative context at least partly relies on cognitively and neurally distinct representations, as compared to learning in an individual context. Healthy human participants learned labels for sets of abstract shapes in three different tasks. They came up with labels with another person in a collaborative communication task (collaborative condition), by themselves (individual condition), or were given pre-determined unrelated labels to learn by themselves (arbitrary condition). Immediately after learning, participants retrieved and produced the labels aloud during a communicative task in the MRI scanner. The fMRI results show that the retrieval of collaboratively generated labels as compared to individually learned labels engages brain regions involved in understanding others (mentalizing or theory of mind) and autobiographical memory, including the medial prefrontal cortex, the right temporoparietal junction and the precuneus. This study is the first to show that collaboration during encoding affects the neural networks involved in retrieval.
  • Vanlangendonck, F., Willems, R. M., & Hagoort, P. (2018). Taking common ground into account: Specifying the role of the mentalizing network in communicative language production. PLoS One, 13(10): e0202943. doi:10.1371/journal.pone.0202943.
  • De Vos, J., Schriefers, H., Nivard, M. C., & Lemhöfer, K. (2018). A meta‐analysis and meta‐regression of incidental second language word learning from spoken input. Language Learning, 68(4), 906-941. doi:10.1111/lang.12296.

    Abstract

    We meta‐analyzed the effectiveness of incidental second language word learning from spoken input. Our sample contained 105 effect sizes from 32 primary studies employing meaning‐focused word‐learning activities with 1,964 participants with typical cognitive functioning. The random‐effects meta‐analysis yielded a mean effect size of g = 1.05, reflecting generally large vocabulary gains from spoken input in meaning‐focused activities. A meta‐regression with three substantive and two methodological predictors also revealed that adult participants outperformed children in terms of word learning and that interactive learning tasks were more effective than noninteractive ones. Furthermore, learning scores were higher when measured with recognition than with recall tests. Methodologically, the use of a no‐input control group seemed to protect against an overestimation of learning effects, evidenced by smaller effect sizes. Finally, whether a pretest–posttest design was used did not influence effect sizes. All data and the analysis script are publicly available.
  • De Vos, J., Schriefers, H., & Lemhöfer, K. (2018). Noticing vocabulary holes aids incidental second language word learning: An experimental study. Bilingualism: Language and Cognition, 22(3), 500-515. doi:10.1017/S1366728918000019.

    Abstract

    Noticing the hole (NTH) occurs when speakers want to say something, but realise they do not know the right word(s). Such awareness of lacking knowledge supposedly facilitates the acquisition of the unknown word(s) from later input (Swain, 1993). We tested this claim by experimentally inducing NTH in a second language (L2) for some participants (experimental), but not others (control). Then, in a price comparison game, all participants were exposed to spoken L2 input containing the to-be-learned words. They were unaware of taking part in an L2 study. Post-tests showed that participants who had noticed holes in their vocabulary had indeed learned more words compared to participants who had not. This held both for the experimental group as well as those participants in the control group who later reported to have noticed holes. Thus, when we become aware of vocabulary holes, the first step to improve our vocabulary is already taken.