  • Akamine, S., Ghaleb, E., Rasenberg, M., Fernandez, R., Meyer, A. S., & Özyürek, A. (2024). Speakers align both their gestures and words not only to establish but also to maintain reference to create shared labels for novel objects in interaction. In L. K. Samuelson, S. L. Frank, A. Mackey, & E. Hazeltine (Eds.), Proceedings of the 46th Annual Meeting of the Cognitive Science Society (CogSci 2024) (pp. 2435-2442).

    Abstract

    When we communicate with others, we often repeat aspects of each other's communicative behavior such as sentence structures and words. Such behavioral alignment has been mostly studied for speech or text. Yet, language use is mostly multimodal, flexibly using speech and gestures to convey messages. Here, we explore the use of alignment in speech (words) and co-speech gestures (iconic gestures) in a referential communication task aimed at finding labels for novel objects in interaction. In particular, we investigate how people flexibly use lexical and gestural alignment to create shared labels for novel objects and whether alignment in speech and gesture are related over time. The present study shows that interlocutors establish shared labels multimodally, and alignment in words and iconic gestures are used throughout the interaction. We also show that the amount of lexical alignment positively associates with the amount of gestural alignment over time, suggesting a close relationship between alignment in the vocal and manual modalities.

    Additional information

    link to eScholarship
  • Alvarez van Tussenbroek, I., Knörnschild, M., Nagy, M., Ten Cate, C. J., & Vernes, S. C. (2024). Morphological diversity in the brains of 12 Neotropical bat species. Acta Chiropterologica, 25(2), 323-338. doi:10.3161/15081109ACC2023.25.2.011.

    Abstract

    Comparative neurobiology allows us to investigate relationships between phylogeny and the brain and understand the evolution of traits. Bats constitute an attractive group of mammalian species for comparative studies, given their large diversity in behavioural phenotypes, brain morphology, and array of specialised traits. Currently, the order Chiroptera contains over 1,450 species within 21 families and spans ca. 65 million years of evolution. To date, 194 Neotropical bat species (ca. 13% of the total number of species around the world) have been recorded in Central America. This study includes qualitative and quantitative macromorphological descriptions of the brains of 12 species from six families of Neotropical bats. These analyses, which include histological neuronal staining of two species from different families (Phyllostomus hastatus and Saccopteryx bilineata), show substantial diversity in brain macromorphology including brain shape and size, exposure of mesencephalic regions, and cortical and cerebellar fissure depth. Brain macromorphology can in part be explained by phylogeny as species within the same family are more similar to each other. However, macromorphology cannot be explained by evolutionary time alone as brain differences between some phyllostomid bats are larger than between species from the family Emballonuridae despite being of comparable diverging distances in the phylogenetic tree. This suggests that faster evolutionary changes in brain morphology occurred in phyllostomids — although a larger number of species needs to be studied to confirm this. Our results show the rich diversity in brain morphology that bats provide for comparative and evolutionary studies.
  • Alvarez van Tussenbroek, I., Knörnschild, M., Nagy, M., O'Toole, B. P., Formenti, G., Philge, P., Zhang, N., Abueg, L., Brajuka, N., Jarvis, E., Volkert, T. L., Gray, J. L., Pieri, M., Mai, M., Teeling, E. C., Vernes, S. C., The Bat Biology Foundation, & The Bat1K Consortium (2024). The genome sequence of Rhynchonycteris naso, Peters, 1867 (Chiroptera, Emballonuridae, Rhynchonycteris). Wellcome Open Research, 9: 361. doi:10.12688/wellcomeopenres.19959.1.

    Abstract

    We present a reference genome assembly from an individual male Rhynchonycteris naso (Chordata; Mammalia; Chiroptera; Emballonuridae). The genome sequence is 2.46 Gb in span. The majority of the assembly is scaffolded into 22 chromosomal pseudomolecules, with the Y sex chromosome assembled.
  • Alvarez van Tussenbroek, I. (2024). Neotropical bat species: An exploration of brain morphology and genetics. PhD Thesis, Leiden University, Leiden.
  • Amelink, J., Postema, M., Kong, X., Schijven, D., Carrion Castillo, A., Soheili-Nezhad, S., Sha, Z., Molz, B., Joliot, M., Fisher, S. E., & Francks, C. (2024). Imaging genetics of language network functional connectivity reveals links with language-related abilities, dyslexia and handedness. Communications Biology, 7: 1209. doi:10.1038/s42003-024-06890-3.

    Abstract

    Language is supported by a distributed network of brain regions with a particular contribution from the left hemisphere. A multi-level understanding of this network requires studying the genetic architecture of its functional connectivity and hemispheric asymmetry. We used resting state functional imaging data from 29,681 participants from the UK Biobank to measure functional connectivity between 18 left-hemisphere regions implicated in multimodal sentence-level processing, as well as their homotopic regions in the right-hemisphere, and interhemispheric connections. Multivariate genome-wide association analysis of this total network, based on common genetic variants (with population frequencies above 1%), identified 14 loci associated with network functional connectivity. Three of these loci were also associated with hemispheric differences of intrahemispheric connectivity. Polygenic dispositions to lower language-related abilities, dyslexia and left-handedness were associated with generally reduced leftward asymmetry of functional connectivity, but with some trait- and connection-specific exceptions. Exome-wide association analysis based on rare, protein-altering variants (frequencies < 1%) suggested 7 additional genes. These findings shed new light on the genetic contributions to language network connectivity and its asymmetry based on both common and rare genetic variants, and reveal genetic links to language-related traits and hemispheric dominance for hand preference.
  • Anijs, M. (2024). Networks within networks: Probing the neuronal and molecular underpinnings of language-related disorders using human cell models. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Bonandrini, R., Gornetti, E., & Paulesu, E. (2024). A meta-analytical account of the functional lateralization of the reading network. Cortex, 177, 363-384. doi:10.1016/j.cortex.2024.05.015.

    Abstract

    The observation that the neural correlates of reading are left-lateralized is ubiquitous in the cognitive neuroscience and neuropsychological literature. Still, reading is served by a constellation of neural units, and the extent to which these units are consistently left-lateralized is unclear. In this regard, the functional lateralization of the fusiform gyrus is of particular interest, by virtue of its hypothesized role as a “visual word form area”. A quantitative Activation Likelihood Estimation meta-analysis was conducted on activation foci from 35 experiments investigating silent reading, and both a whole-brain and a Bayesian ROI-based approach were used to assess the lateralization of the data submitted to meta-analysis. Perirolandic areas showed the highest level of left-lateralization, the fusiform cortex and the parietal cortex exhibited only a moderate pattern of left-lateralization, while in the occipital and insular cortices and in the cerebellum the lateralization was the lowest observed. The relatively limited functional lateralization of the fusiform gyrus was further explored in a regression analysis on the lateralization profile of each study. The functional lateralization of the fusiform gyrus during reading was positively associated with the lateralization of the precentral and inferior occipital gyri and negatively associated with the lateralization of the triangular portion of the inferior frontal gyrus and of the temporal pole. Overall, the present data highlight how lateralization patterns differ within the reading network. Furthermore, the present data highlight how the functional lateralization of the fusiform gyrus during reading is related to the degree of functional lateralization of other language brain areas.
  • Çetinçelik, M. (2024). A look into language: The role of visual cues in early language acquisition in the infant brain. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Çetinçelik, M., Jordan‐Barros, A., Rowland, C. F., & Snijders, T. M. (2024). The effect of visual speech cues on neural tracking of speech in 10‐month‐old infants. European Journal of Neuroscience, 60(6), 5381-5399. doi:10.1111/ejn.16492.

    Abstract

    While infants' sensitivity to visual speech cues and the benefit of these cues have been well-established by behavioural studies, there is little evidence on the effect of visual speech cues on infants' neural processing of continuous auditory speech. In this study, we investigated whether visual speech cues, such as the movements of the lips, jaw, and larynx, facilitate infants' neural speech tracking. Ten-month-old Dutch-learning infants watched videos of a speaker reciting passages in infant-directed speech while electroencephalography (EEG) was recorded. In the videos, either the full face of the speaker was displayed or the speaker's mouth and jaw were masked with a block, obstructing the visual speech cues. To assess neural tracking, speech-brain coherence (SBC) was calculated, focusing particularly on the stress and syllabic rates (1–1.75 and 2.5–3.5 Hz respectively in our stimuli). First, overall, SBC was compared to surrogate data, and then, differences in SBC in the two conditions were tested at the frequencies of interest. Our results indicated that infants show significant tracking at both stress and syllabic rates. However, no differences were identified between the two conditions, meaning that infants' neural tracking was not modulated further by the presence of visual speech cues. Furthermore, we demonstrated that infants' neural tracking of low-frequency information is related to their subsequent vocabulary development at 18 months. Overall, this study provides evidence that infants' neural tracking of speech is not necessarily impaired when visual speech cues are not fully visible and that neural tracking may be a potential mechanism in successful language acquisition.

  • Çetinçelik, M., Rowland, C. F., & Snijders, T. M. (2024). Does the speaker’s eye gaze facilitate infants’ word segmentation from continuous speech? An ERP study. Developmental Science, 27(2): e13436. doi:10.1111/desc.13436.

    Abstract

    The environment in which infants learn language is multimodal and rich with social cues. Yet, the effects of such cues, such as eye contact, on early speech perception have not been closely examined. This study assessed the role of ostensive speech, signalled through the speaker's eye gaze direction, on infants’ word segmentation abilities. A familiarisation-then-test paradigm was used while electroencephalography (EEG) was recorded. Ten-month-old Dutch-learning infants were familiarised with audio-visual stories in which a speaker recited four sentences with one repeated target word. The speaker addressed them either with direct or with averted gaze while speaking. In the test phase following each story, infants heard familiar and novel words presented via audio-only. Infants’ familiarity with the words was assessed using event-related potentials (ERPs). As predicted, infants showed a negative-going ERP familiarity effect to the isolated familiarised words relative to the novel words over the left-frontal region of interest during the test phase. While the word familiarity effect did not differ as a function of the speaker's gaze over the left-frontal region of interest, there was also a (not predicted) positive-going early ERP familiarity effect over right fronto-central and central electrodes in the direct gaze condition only. This study provides electrophysiological evidence that infants can segment words from audio-visual speech, regardless of the ostensiveness of the speaker's communication. However, the speaker's gaze direction seems to influence the processing of familiar words.
  • Cheung, C.-Y., Kirby, S., & Raviv, L. (2024). The role of gender, social bias and personality traits in shaping linguistic accommodation: An experimental approach. In J. Nölle, L. Raviv, K. E. Graham, S. Hartmann, Y. Jadoul, M. Josserand, T. Matzinger, K. Mudd, M. Pleyer, A. Slonimska, & S. Wacewicz (Eds.), The Evolution of Language: Proceedings of the 15th International Conference (EVOLANG XV) (pp. 80-82). Nijmegen: The Evolution of Language Conferences. doi:10.17617/2.3587960.
  • Collins, J. (2024). Linguistic areas and prehistoric migrations. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Ding, R., Ten Oever, S., & Martin, A. E. (2024). Delta-band activity underlies referential meaning representation during pronoun resolution. Journal of Cognitive Neuroscience, 36(7), 1472-1492. doi:10.1162/jocn_a_02163.

    Abstract

    Human language offers a variety of ways to create meaning, one of which is referring to entities, objects, or events in the world. One such meaning maker is understanding to whom or to what a pronoun in a discourse refers. To understand a pronoun, the brain must access matching entities or concepts that have been encoded in memory from previous linguistic context. Models of language processing propose that internally stored linguistic concepts, accessed via exogenous cues such as phonological input of a word, are represented as (a)synchronous activities across a population of neurons active at specific frequency bands. Converging evidence suggests that delta band activity (1–3 Hz) is involved in temporal and representational integration during sentence processing. Moreover, recent advances in the neurobiology of memory suggest that recollection engages neural dynamics similar to those which occurred during memory encoding. Integrating these two lines of research, we here tested the hypothesis that the neural dynamic patterns underlying referential meaning representation, especially in the delta frequency range, would be reinstated during pronoun resolution. By leveraging neural decoding techniques (i.e., representational similarity analysis) on a magnetoencephalogram data set acquired during a naturalistic story-listening task, we provide evidence that delta-band activity underlies referential meaning representation. Our findings suggest that, during spoken language comprehension, endogenous linguistic representations such as referential concepts may be proactively retrieved and represented via activation of their underlying dynamic neural patterns.
  • Dona, L., & Schouwstra, M. (2024). Balancing regularization and variation: The roles of priming and motivatedness. In J. Nölle, L. Raviv, K. E. Graham, S. Hartmann, Y. Jadoul, M. Josserand, T. Matzinger, K. Mudd, M. Pleyer, A. Slonimska, & S. Wacewicz (Eds.), The Evolution of Language: Proceedings of the 15th International Conference (EVOLANG XV) (pp. 130-133). Nijmegen: The Evolution of Language Conferences.
  • Duengen, D., Polotzek, M., O'Sullivan, E., & Ravignani, A. (2024). Anecdotal observations of socially learned vocalizations in harbor seals. Animal Behavior and Cognition, 11, 393-403. doi:10.26451/abc.11.03.04.2024.

    Abstract

    Harbor seals (Phoca vitulina) are more solitary than many other pinnipeds. Yet, they are capable of vocal learning, a form of social learning. Most extant literature examines social animals when investigating social learning, despite sociality not being a prerequisite. Here, we report two formerly silent harbor seals who initiated vocalizations, after having repeatedly observed a conspecific receiving food rewards for vocalizing. Our observations suggest both social and vocal learning in a group of captive harbor seals, a species that lives semi-solitarily in the wild. We propose that, in this case, social learning acted as a shortcut to acquiring food rewards compared to the comparatively costly asocial learning.
  • Düngen, D., Jadoul, Y., & Ravignani, A. (2024). Vocal usage learning and vocal comprehension learning in harbor seals. BMC Neuroscience, 25: 48. doi:10.1186/s12868-024-00899-4.

    Abstract

    Background

    Which mammals show vocal learning abilities, e.g., can learn new sounds, or learn to use sounds in new contexts? Vocal usage and comprehension learning are submodules of vocal learning. Specifically, vocal usage learning is the ability to learn to use a vocalization in a new context; vocal comprehension learning is the ability to comprehend a vocalization in a new context. Among mammals, harbor seals (Phoca vitulina) are good candidates to investigate vocal learning. Here, we test whether harbor seals are capable of vocal usage and comprehension learning.

    Results

    We trained two harbor seals to (i) switch contexts from a visual to an auditory cue. In particular, the seals first produced two vocalization types in response to two hand signs; they then transitioned to producing these two vocalization types upon the presentation of two distinct sets of playbacks of their own vocalizations. We then (ii) exposed the seals to a combination of trained and novel vocalization stimuli. In a final experiment, (iii) we broadcasted only novel vocalizations of the two vocalization types to test whether seals could generalize from the trained set of stimuli to only novel items of a given vocal category. Both seals learned all tasks and took ≤ 16 sessions to succeed across all experiments. In particular, the seals showed contextual learning through switching the context from former visual to novel auditory cues, vocal matching and generalization. Finally, by responding to the played-back vocalizations with distinct vocalizations, the animals showed vocal comprehension learning.

    Conclusions

    It has been suggested that harbor seals are vocal learners; however, to date, these observations had not been confirmed in controlled experiments. Here, through three experiments, we could show that harbor seals are capable of both vocal usage and comprehension learning.
  • Eekhof, L. S. (2024). Reading the mind: The relationship between social cognition and narrative processing. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Eekhof, L. S., & Mar, R. A. (2024). Does reading about fictional minds make us more curious about real ones? Language and Cognition, 16(1), 176-196. doi:10.1017/langcog.2023.30.

    Abstract

    Although there is a large body of research assessing whether exposure to narratives boosts social cognition immediately afterward, not much research has investigated the underlying mechanism of this putative effect. This experiment investigates the possibility that reading a narrative increases social curiosity directly afterward, which might explain the short-term boosts in social cognition reported by some others. We developed a novel measure of state social curiosity and collected data from participants (N = 222) who were randomly assigned to read an excerpt of narrative fiction or expository nonfiction. Contrary to our expectations, we found that those who read a narrative exhibited less social curiosity afterward than those who read an expository text. This result was not moderated by trait social curiosity. An exploratory analysis uncovered that the degree to which texts present readers with social targets predicted less social curiosity. Our experiment demonstrates that reading narratives, or possibly texts with social content in general, may engage and fatigue social-cognitive abilities, causing a temporary decrease in social curiosity. Such texts might also temporarily satisfy the need for social connection, temporarily reducing social curiosity. Both accounts are in line with theories describing how narratives result in better social cognition over the long term.
  • He, J., Frances, C., Creemers, A., & Brehm, L. (2024). Effects of irrelevant unintelligible and intelligible background speech on spoken language production. Quarterly Journal of Experimental Psychology, 77(8), 1745-1769. doi:10.1177/17470218231219971.

    Abstract

    Earlier work has explored spoken word production during irrelevant background speech such as intelligible and unintelligible word lists. The present study compared how different types of irrelevant background speech (word lists vs. sentences) influenced spoken word production relative to a quiet control condition, and whether the influence depended on the intelligibility of the background speech. Experiment 1 presented native Dutch speakers with Chinese word lists and sentences. Experiment 2 presented a similar group with Dutch word lists and sentences. In both experiments, the lexical selection demands in speech production were manipulated by varying name agreement (high vs. low) of the to-be-named pictures. Results showed that background speech, regardless of its intelligibility, disrupted spoken word production relative to a quiet condition, but no effects of word lists versus sentences in either language were found. Moreover, the disruption by intelligible background speech compared with the quiet condition was eliminated when planning low name agreement pictures. These findings suggest that any speech, even unintelligible speech, interferes with production, which implies that the disruption of spoken word production is mainly phonological in nature. The disruption by intelligible background speech can be reduced or eliminated via top–down attentional engagement.
  • Goltermann*, O., Alagöz*, G., Molz, B., & Fisher, S. E. (2024). Neuroimaging genomics as a window into the evolution of human sulcal organization. Cerebral Cortex, 34(3): bhae078. doi:10.1093/cercor/bhae078.

    Abstract

    * Ole Goltermann and Gökberk Alagöz contributed equally.
    Primate brain evolution has involved prominent expansions of the cerebral cortex, with largest effects observed in the human lineage. Such expansions were accompanied by fine-grained anatomical alterations, including increased cortical folding. However, the molecular bases of evolutionary alterations in human sulcal organization are not yet well understood. Here, we integrated data from recently completed large-scale neuroimaging genetic analyses with annotations of the human genome relevant to various periods and events in our evolutionary history. These analyses identified single-nucleotide polymorphism (SNP) heritability enrichments in fetal brain human-gained enhancer (HGE) elements for a number of sulcal structures, including the central sulcus, which is implicated in human hand dexterity. We zeroed in on a genomic region that harbors DNA variants associated with left central sulcus shape, an HGE element, and genetic loci involved in neurogenesis including ZIC4, to illustrate the value of this approach for probing the complex factors contributing to human sulcal evolution.

  • Goncharova, M. V., Jadoul, Y., Reichmuth, C., Fitch, W. T., & Ravignani, A. (2024). Vocal tract dynamics shape the formant structure of conditioned vocalizations in a harbor seal. Annals of the New York Academy of Sciences, 1538(1), 107-116. doi:10.1111/nyas.15189.

    Abstract

    Formants, or resonance frequencies of the upper vocal tract, are an essential part of acoustic communication. Articulatory gestures—such as jaw, tongue, lip, and soft palate movements—shape formant structure in human vocalizations, but little is known about how nonhuman mammals use those gestures to modify formant frequencies. Here, we report a case study with an adult male harbor seal trained to produce an arbitrary vocalization composed of multiple repetitions of the sound wa. We analyzed jaw movements frame-by-frame and matched them to the tracked formant modulation in the corresponding vocalizations. We found that the jaw opening angle was strongly correlated with the first (F1) and, to a lesser degree, with the second formant (F2). F2 variation was better explained by the jaw angle opening when the seal was lying on his back rather than on the belly, which might derive from soft tissue displacement due to gravity. These results show that harbor seals share some common articulatory traits with humans, where the F1 depends more on the jaw position than F2. We propose further in vivo investigations of seals to further test the role of the tongue on formant modulation in mammalian sound production.
  • De Hoyos, L., Barendse, M. T., Schlag, F., Van Donkelaar, M. M. J., Verhoef, E., Shapland, C. Y., Klassmann, A., Buitelaar, J., Verhulst, B., Fisher, S. E., Rai, D., & St Pourcain, B. (2024). Structural models of genome-wide covariance identify multiple common dimensions in autism. Nature Communications, 15: 1770. doi:10.1038/s41467-024-46128-8.

    Abstract

    Common genetic variation has been associated with multiple symptoms in Autism Spectrum Disorder (ASD). However, our knowledge of shared genetic factor structures contributing to this highly heterogeneous neurodevelopmental condition is limited. Here, we developed a structural equation modelling framework to directly model genome-wide covariance across core and non-core ASD phenotypes, studying autistic individuals of European descent using a case-only design. We identified three independent genetic factors most strongly linked to language/cognition, behaviour and motor development, respectively, when studying a population-representative sample (N=5,331). These analyses revealed novel associations. For example, developmental delay in acquiring personal-social skills was inversely related to language, while developmental motor delay was linked to self-injurious behaviour. We largely confirmed the three-factorial structure in independent ASD-simplex families (N=1,946), but uncovered simplex-specific genetic overlap between behaviour and language phenotypes. Thus, the common genetic architecture in ASD is multi-dimensional and contributes, in combination with ascertainment-specific patterns, to phenotypic heterogeneity.
  • Jansen, M. G., Zwiers, M. P., Marques, J. P., Chan, K.-S., Amelink, J., Altgassen, M., Oosterman, J. M., & Norris, D. G. (2024). The Advanced BRain Imaging on ageing and Memory (ABRIM) data collection: Study protocol and rationale. PLOS ONE, 19(6): e0306006. doi:10.1371/journal.pone.0306006.

    Abstract

    To understand the neurocognitive mechanisms that underlie heterogeneity in cognitive ageing, recent scientific efforts have led to a growing public availability of imaging cohort data. The Advanced BRain Imaging on ageing and Memory (ABRIM) project aims to add to these existing datasets by taking an adult lifespan approach to provide a cross-sectional, normative database with a particular focus on connectivity, myelinization and iron content of the brain in concurrence with cognitive functioning, mechanisms of reserve, and sleep-wake rhythms. ABRIM freely shares MRI and behavioural data from 295 participants between 18–80 years, stratified by age decade and sex (median age 52, IQR 36–66, 53.20% females). The ABRIM MRI collection consists of both the raw and pre-processed structural and functional MRI data to facilitate data usage among both expert and non-expert users. The ABRIM behavioural collection includes measures of cognitive functioning (i.e., global cognition, processing speed, executive functions, and memory), proxy measures of cognitive reserve (e.g., educational attainment, verbal intelligence, and occupational complexity), and various self-reported questionnaires (e.g., on depressive symptoms, pain, and the use of memory strategies in daily life and during a memory task). In a sub-sample (n = 120), we recorded sleep-wake rhythms using an actigraphy device (Actiwatch 2, Philips Respironics) for a period of 7 consecutive days. Here, we provide an in-depth description of our study protocol, pre-processing pipelines, and data availability. ABRIM provides a cross-sectional database on healthy participants throughout the adult lifespan, including numerous parameters relevant to improve our understanding of cognitive ageing. Therefore, ABRIM enables researchers to model the advanced imaging parameters and cognitive topologies as a function of age, identify the normal range of values of such parameters, and to further investigate the diverse mechanisms of reserve and resilience.
  • Karaca, F., Brouwer, S., Unsworth, S., & Huettig, F. (2024). Morphosyntactic predictive processing in adult heritage speakers: Effects of cue availability and spoken and written language experience. Language, Cognition and Neuroscience, 39(1), 118-135. doi:10.1080/23273798.2023.2254424.

    Abstract

    We investigated prediction skills of adult heritage speakers and the role of written and spoken language experience on predictive processing. Using visual world eye-tracking, we focused on predictive use of case-marking cues in verb-medial and verb-final sentences in Turkish with adult Turkish heritage speakers (N = 25) and Turkish monolingual speakers (N = 24). Heritage speakers predicted in verb-medial sentences (when verb-semantic and case-marking cues were available), but not in verb-final sentences (when only case-marking cues were available) while monolinguals predicted in both. Prediction skills of heritage speakers were modulated by their spoken language experience in Turkish and written language experience in both languages. Overall, these results strongly suggest that verb-semantic information is needed to scaffold the use of morphosyntactic cues for prediction in heritage speakers. The findings also support the notion that both spoken and written language experience play an important role in predictive spoken language processing.
  • Karadöller, D. Z., Sümer, B., Ünal, E., & Özyürek, A. (2024). Sign advantage: Both children and adults’ spatial expressions in sign are more informative than those in speech and gestures combined. Journal of Child Language, 51(4), 876-902. doi:10.1017/S0305000922000642.

    Abstract

    Expressing Left-Right relations is challenging for speaking-children. Yet, this challenge was absent for signing-children, possibly due to iconicity in the visual-spatial modality of expression. We investigate whether there is also a modality advantage when speaking-children’s co-speech gestures are considered. Eight-year-old child and adult hearing monolingual Turkish speakers and deaf signers of Turkish-Sign-Language described pictures of objects in various spatial relations. Descriptions were coded for informativeness in speech, sign, and speech-gesture combinations for encoding Left-Right relations. The use of co-speech gestures increased the informativeness of speakers’ spatial expressions compared to speech-only. This pattern was more prominent for children than adults. However, signing-adults and children were more informative than child and adult speakers even when co-speech gestures were considered. Thus, both speaking- and signing-children benefit from iconic expressions in visual modality. Finally, in each modality, children were less informative than adults, pointing to the challenge of this spatial domain in development.
  • Kocsis, K., Düngen, D., Jadoul, Y., & Ravignani, A. (2024). Harbour seals use rhythmic percussive signalling in interaction and display. Animal Behaviour, 207, 223-234. doi:10.1016/j.anbehav.2023.09.014.

    Abstract

    Multimodal rhythmic signalling abounds across animal taxa. Studying its mechanisms and functions can highlight adaptive components in highly complex rhythmic behaviours, like dance and music. Pinnipeds, such as the harbour seal, Phoca vitulina, are excellent comparative models to assess rhythmic capacities. Harbour seals engage in rhythmic percussive behaviours which, until now, have not been described in detail. In our study, eight zoo-housed harbour seals (two pups, two juveniles and four adults) were passively monitored by audio and video during their pupping/breeding season. All juvenile and adult animals performed percussive signalling with their fore flippers in agonistic conditions, both on land and in water. Flipper slap sequences produced on the ground or on the seals' bodies were often highly regular in their interval duration, that is, were quasi-isochronous, at a 200–600 beats/min pace. Three animals also showed significant lateralization in slapping. In contrast to slapping on land, display slapping in water, performed only by adult males, showed slower tempo by one order of magnitude, and a rather motivic temporal structure. Our work highlights that percussive communication is a significant part of harbour seals' behavioural repertoire. We hypothesize that its forms of rhythm production may reflect adaptive functions such as regulating internal states and advertising individual traits.
  • Koutamanis, E. (2024). Spreading the word: Cross-linguistic influence in the bilingual child's lexicon. PhD Thesis, Radboud University, Nijmegen.
  • Koutamanis, E., Kootstra, G. J., Dijkstra, T., & Unsworth, S. (2024). Cognate facilitation in single- and dual-language contexts in bilingual children’s word processing. Linguistic Approaches to Bilingualism, 14(4), 577-608. doi:10.1075/lab.23009.kou.

    Abstract

    We examined the extent to which cognate facilitation effects occurred in simultaneous bilingual children’s production and comprehension and how these were modulated by language dominance and language context. Bilingual Dutch-German children, ranging from Dutch-dominant to German-dominant, performed picture naming and auditory lexical decision tasks in single-language and dual-language contexts. Language context was manipulated with respect to the language of communication (with the experimenter and in instructional videos) and by means of proficiency tasks. Cognate facilitation effects emerged in both production and comprehension and interacted with both dominance and context. In a single-language context, stronger cognate facilitation effects were found for picture naming in children’s less dominant language, in line with previous studies on individual differences in lexical activation. In the dual-language context, this pattern was reversed, suggesting inhibition of the dominant language at the decision level. Similar effects were observed in lexical decision. These findings provide evidence for an integrated bilingual lexicon in simultaneous bilingual children and shed more light on the complex interplay between lexicon-internal and lexicon-external factors modulating the extent of lexical cross-linguistic influence more generally.
  • Koutamanis, E., Kootstra, G. J., Dijkstra, T., & Unsworth, S. (2024). Cross-linguistic influence in the simultaneous bilingual child's lexicon: An eye-tracking and primed picture selection study. Bilingualism: Language and Cognition, 27(3), 377-387. doi:10.1017/S136672892300055X.

    Abstract

    In a between-language lexical priming study, we examined to what extent the two languages in a simultaneous bilingual child's lexicon interact, while taking individual differences in language exposure into account. Primary-school-aged Dutch–Greek bilinguals performed a primed picture selection task combined with eye-tracking. They matched pictures to auditorily presented Dutch target words preceded by Greek prime words. Their reaction times and eye movements were recorded. We tested for effects of between-language phonological priming, translation priming, and phonological priming through translation. Priming effects emerged in reaction times and eye movements in all three conditions, at different stages of processing, and unaffected by language exposure. These results extend previous findings for bilingual toddlers and bilingual adults. Processing similarities between these populations indicate that, across different stages of development, bilinguals have an integrated lexicon that is accessed in a language-nonselective way and is susceptible to interactions within and between different types of lexical representation.
  • Mamus, E. (2024). Perceptual experience shapes how blind and sighted people express concepts in multimodal language. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Mazzini, S., Yadnik, S., Timmers, I., Rubio-Gozalbo, E., & Jansma, B. M. (2024). Altered neural oscillations in classical galactosaemia during sentence production. Journal of Inherited Metabolic Disease, 47(4), 575-833. doi:10.1002/jimd.12740.

    Abstract

    Classical galactosaemia (CG) is a hereditary disease in galactose metabolism that, despite dietary treatment, is characterized by a wide range of cognitive deficits, among which is language production. CG brain functioning has been studied with several neuroimaging techniques, which revealed both structural and functional atypicalities. In the present study, for the first time, we compared the oscillatory dynamics, especially the power spectrum and time–frequency representations (TFR), in the electroencephalography (EEG) of CG patients and healthy controls while they were performing a language production task. Twenty-one CG patients and 19 healthy controls described animated scenes, either in full sentences or in words, indicating two levels of complexity in syntactic planning. Based on previous work on the P300 event related potential (ERP) and its relation with theta frequency, we hypothesized that the oscillatory activity of patients and controls would differ in theta power and TFR. With regard to behavior, reaction times showed that patients are slower, reflecting the language deficit. In the power spectrum, we observed significantly higher power in patients in delta (1–3 Hz), theta (4–7 Hz), beta (15–30 Hz) and gamma (30–70 Hz) frequencies, but not in alpha (8–12 Hz), suggesting an atypical oscillatory profile. The time-frequency analysis revealed significantly weaker event-related theta synchronization (ERS) and alpha desynchronization (ERD) in patients in the sentence condition. The data support the hypothesis that CG language difficulties relate to theta–alpha brain oscillations.

    Additional information

    table S1 and S2
  • Mickan, A., Slesareva, E., McQueen, J. M., & Lemhöfer, K. (2024). New in, old out: Does learning a new language make you forget previously learned foreign languages? Quarterly Journal of Experimental Psychology, 77(3), 530-550. doi:10.1177/17470218231181380.

    Abstract

    Anecdotal evidence suggests that learning a new foreign language (FL) makes you forget previously learned FLs. To seek empirical evidence for this claim, we tested whether learning words in a previously unknown L3 hampers subsequent retrieval of their L2 translation equivalents. In two experiments, Dutch native speakers with knowledge of English (L2), but not Spanish (L3), first completed an English vocabulary test, based on which 46 participant-specific, known English words were chosen. Half of those were then learned in Spanish. Finally, participants’ memory for all 46 English words was probed again in a picture naming task. In Experiment 1, all tests took place within one session. In Experiment 2, we separated the English pre-test from Spanish learning by a day and manipulated the timing of the English post-test (immediately after learning vs. 1 day later). By separating the post-test from Spanish learning, we asked whether consolidation of the new Spanish words would increase their interference strength. We found significant main effects of interference in naming latencies and accuracy: Participants speeded up less and were less accurate to recall words in English for which they had learned Spanish translations, compared with words for which they had not. Consolidation time did not significantly affect these interference effects. Thus, learning a new language indeed comes at the cost of subsequent retrieval ability in other FLs. Such interference effects set in immediately after learning and do not need time to emerge, even when the other FL has been known for a long time.

    Additional information

    supplementary material
  • Mooijman, S. (2024). Control of language in bilingual speakers with and without aphasia. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Mooijman, S., Schoonen, R., Roelofs, A., & Ruiter, M. B. (2024). Benefits of free language choice in bilingual individuals with aphasia. Aphasiology, 38(11), 1793-1831. doi:10.1080/02687038.2024.2326239.

    Abstract

    Background

    Forced switching between languages poses demands on control abilities, which may be difficult to meet for bilinguals with aphasia. Freely choosing languages has been shown to increase naming efficiency in healthy bilinguals, and lexical accessibility was found to be a predictor of language choice. The overlap between bilingual language switching and other types of switching is as yet unclear.

    Aims

    This study aimed to examine the benefits of free language choice for bilinguals with aphasia and to investigate the overlap of between- and within-language switching abilities.

    Methods & Procedures

    Seventeen bilinguals with aphasia completed a questionnaire and four web-based picture naming tasks: single-language naming in the first and second language separately; voluntary switching between languages; cued and predictable switching between languages; cued and predictable switching between phrase types in the first language. Accuracy and naming latencies were analysed using (generalised) linear mixed-effects models.

    Outcomes & Results

    The results showed higher accuracy and faster naming for the voluntary switching condition compared to single-language naming and cued switching. Both voluntary and cued language switching yielded switch costs, and voluntary switch costs were larger. Ease of lexical access was a reliable predictor for voluntary language choice. We obtained no statistical evidence for differences or associations between switch costs in between- and within-language switching.

    Conclusions

    Several results point to benefits of voluntary language switching for bilinguals with aphasia. Freely mixing languages improved naming accuracy and speed, and ease of lexical access affected language choice. There was no statistical evidence for overlap of between- and within-language switching abilities. This study highlights the benefits of free language choice for bilinguals with aphasia.
  • Mooijman, S., Schoonen, R., Ruiter, M. B., & Roelofs, A. (2024). Voluntary and cued language switching in late bilingual speakers. Bilingualism: Language and Cognition, 27(4), 610-627. doi:10.1017/S1366728923000755.

    Abstract

    Previous research examining the factors that determine language choice and voluntary switching mainly involved early bilinguals. Here, using picture naming, we investigated language choice and switching in late Dutch–English bilinguals. We found that naming was overall slower in cued than in voluntary switching, but switch costs occurred in both types of switching. The magnitude of switch costs differed depending on the task and language, and was moderated by L2 proficiency. Self-rated rather than objectively assessed proficiency predicted voluntary switching and ease of lexical access was associated with language choice. Between-language and within-language switch costs were not correlated. These results highlight self-rated proficiency as a reliable predictor of voluntary switching, with language modulating switch costs. As in early bilinguals, ease of lexical access was related to word-level language choice of late bilinguals.
  • Papoutsi*, C., Zimianiti*, E., Bosker, H. R., & Frost, R. L. A. (2024). Statistical learning at a virtual cocktail party. Psychonomic Bulletin & Review, 31, 849-861. doi:10.3758/s13423-023-02384-1.

    Abstract

    * These two authors contributed equally to this study
    Statistical learning – the ability to extract distributional regularities from input – is suggested to be key to language acquisition. Yet, evidence for the human capacity for statistical learning comes mainly from studies conducted in carefully controlled settings without auditory distraction. While such conditions permit careful examination of learning, they do not reflect the naturalistic language learning experience, which is replete with auditory distraction – including competing talkers. Here, we examine how statistical language learning proceeds in a virtual cocktail party environment, where the to-be-learned input is presented alongside a competing speech stream with its own distributional regularities. During exposure, participants in the Dual Talker group concurrently heard two novel languages, one produced by a female talker and one by a male talker, with each talker virtually positioned at opposite sides of the listener (left/right) using binaural acoustic manipulations. Selective attention was manipulated by instructing participants to attend to only one of the two talkers. At test, participants were asked to distinguish words from part-words for both the attended and the unattended languages. Results indicated that participants’ accuracy was significantly higher for trials from the attended vs. unattended
    language. Further, the performance of this Dual Talker group was no different compared to a control group who heard only one language from a single talker (Single Talker group). We thus conclude that statistical learning is modulated by selective attention, being relatively robust against the additional cognitive load provided by competing speech, emphasizing its efficiency in naturalistic language learning situations.

    Additional information

    supplementary file
  • Peirolo, M., Meyer, A. S., & Frances, C. (2024). Investigating the causes of prosodic marking in self-repairs: An automatic process? In Y. Chen, A. Chen, & A. Arvaniti (Eds.), Proceedings of Speech Prosody 2024 (pp. 1080-1084). doi:10.21437/SpeechProsody.2024-218.

    Abstract

    Natural speech involves repair. These repairs are often highlighted through prosodic marking (Levelt & Cutler, 1983). Prosodic marking usually entails an increase in pitch, loudness, and/or duration that draws attention to the corrected word. While it is established that natural self-repairs typically elicit prosodic marking, the exact cause of this is unclear. This study investigates whether producing a prosodic marking emerges from an automatic correction process or has a communicative purpose. In the current study, we elicit corrections to test whether all self-corrections elicit prosodic marking. Participants carried out a picture-naming task in which they described two images presented on-screen. To prompt self-correction, the second image was altered in some cases, requiring participants to abandon their initial utterance and correct their description to match the new image. This manipulation was compared to a control condition in which only the orientation of the object would change, eliciting no self-correction while still presenting a visual change. We found that the replacement of the item did not elicit a prosodic marking, regardless of the type of change. Theoretical implications and research directions are discussed, in particular theories of prosodic planning.
  • Quaresima, A. (2024). A Bridge not too far: Neurobiological causal models of word recognition. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • de Reus, K., Benítez-Burraco, A., Hersh, T. A., Groot, N., Lambert, M. L., Slocombe, K. E., Vernes, S. C., & Raviv, L. (2024). Self-domestication traits in vocal learning mammals. In J. Nölle, L. Raviv, K. E. Graham, S. Hartmann, Y. Jadoul, M. Josserand, T. Matzinger, K. Mudd, M. Pleyer, A. Slonimska, & S. Wacewicz (Eds.), The Evolution of Language: Proceedings of the 15th International Conference (EVOLANG XV) (pp. 105-108). Nijmegen: The Evolution of Language Conferences.
  • Rohrer, P. L., Bujok, R., Van Maastricht, L., & Bosker, H. R. (2024). The timing of beat gestures affects lexical stress perception in Spanish. In Y. Chen, A. Chen, & A. Arvaniti (Eds.), Proceedings Speech Prosody 2024 (pp. 702-706). doi:10.21437/SpeechProsody.2024-142.

    Abstract

    It has been shown that when speakers produce hand gestures, addressees are attentive towards these gestures, using them to facilitate speech processing. Even relatively simple “beat” gestures are taken into account to help process aspects of speech such as prosodic prominence. In fact, recent evidence suggests that the timing of a beat gesture can influence spoken word recognition. In what has been termed the manual McGurk effect, Dutch participants, when presented with lexical stress minimal pair continua in Dutch, were biased to hear lexical stress on the syllable that coincided with a beat gesture. However, little is known about how this manual McGurk effect would surface in languages other than Dutch, with different acoustic cues to prominence, and variable gestures. Therefore, this study tests the effect in Spanish where lexical stress is arguably even more important, being a contrastive cue in the regular verb conjugation system. Results from 24 participants corroborate the effect in Spanish, namely that when given the same auditory stimulus, participants were biased to perceive lexical stress on the syllable that visually co-occurred with a beat gesture. These findings extend the manual McGurk effect to a different language, emphasizing the impact of gestures' timing on prosody perception and spoken word recognition.
  • Roos, N. M., Chauvet, J., & Piai, V. (2024). The Concise Language Paradigm (CLaP), a framework for studying the intersection of comprehension and production: Electrophysiological properties. Brain Structure and Function, 229, 2097-2113. doi:10.1007/s00429-024-02801-8.

    Abstract

    Studies investigating language commonly isolate one modality or process, focusing on comprehension or production. Here, we present a framework for a paradigm that combines both: the Concise Language Paradigm (CLaP), tapping into comprehension and production within one trial. The trial structure is identical across conditions, presenting a sentence followed by a picture to be named. We tested 21 healthy speakers with EEG to examine three time periods during a trial (sentence, pre-picture interval, picture onset), yielding contrasts of sentence comprehension, contextually and visually guided word retrieval, object recognition, and naming. In the CLaP, sentences are presented auditorily (constrained, unconstrained, reversed), and pictures appear as normal (constrained, unconstrained, bare) or scrambled objects. Imaging results revealed different evoked responses after sentence onset for normal and time-reversed speech. Further, we replicated the context effect of alpha-beta power decreases before picture onset for constrained relative to unconstrained sentences, and could clarify that this effect arises from power decreases following constrained sentences. Brain responses locked to picture-onset differed as a function of sentence context and picture type (normal vs. scrambled), and naming times were fastest for pictures in constrained sentences, followed by scrambled picture naming, and equally fast for bare and unconstrained picture naming. Finally, we also discuss the potential of the CLaP to be adapted to different focuses, using different versions of the linguistic content and tasks, in combination with electrophysiology or other imaging methods. These first results of the CLaP indicate that this paradigm offers a promising framework to investigate the language system.
  • Sander, J., Çetinçelik, M., Zhang, Y., Rowland, C. F., & Harmon, Z. (2024). Why does joint attention predict vocabulary acquisition? The answer depends on what coding scheme you use. In L. K. Samuelson, S. L. Frank, M. Toneva, A. Mackey, & E. Hazeltine (Eds.), Proceedings of the 46th Annual Meeting of the Cognitive Science Society (CogSci 2024) (pp. 1607-1613).

    Abstract

    Despite decades of study, we still know less than we would like about the association between joint attention (JA) and language acquisition. This is partly because of disagreements on how to operationalise JA. In this study, we examine the impact of applying two different, influential JA operationalisation schemes to the same dataset of child-caregiver interactions, to determine which yields a better fit to children's later vocabulary size. Two coding schemes— one defining JA in terms of gaze overlap and one in terms of social aspects of shared attention—were applied to video-recordings of dyadic naturalistic toy-play interactions (N=45). We found that JA was predictive of later production vocabulary when operationalised as shared focus (study 1), but also that its operationalisation as shared social awareness increased its predictive power (study 2). Our results emphasise the critical role of methodological choices in understanding how and why JA is associated with vocabulary size.
  • Severijnen, G. G. A., Bosker, H. R., & McQueen, J. M. (2024). Your “VOORnaam” is not my “VOORnaam”: An acoustic analysis of individual talker differences in word stress in Dutch. Journal of Phonetics, 103: 101296. doi:10.1016/j.wocn.2024.101296.

    Abstract

    Different talkers speak differently, even within the same homogeneous group. These differences lead to acoustic variability in speech, causing challenges for correct perception of the intended message. Because previous descriptions of this acoustic variability have focused mostly on segments, talker variability in prosodic structures is not yet well documented. The present study therefore examined acoustic between-talker variability in word stress in Dutch. We recorded 40 native Dutch talkers from a participant sample with minimal dialectal variation and balanced gender, producing segmentally overlapping words (e.g., VOORnaam vs. voorNAAM; ‘first name’ vs. ‘respectable’, capitalization indicates lexical stress), and measured different acoustic cues to stress. Each individual participant’s acoustic measurements were analyzed using Linear Discriminant Analyses, which provide coefficients for each cue, reflecting the strength of each cue in a talker’s productions. On average, talkers primarily used mean F0, intensity, and duration. Moreover, each participant also employed a unique combination of cues, illustrating large prosodic variability between talkers. In fact, classes of cue-weighting tendencies emerged, differing in which cue was used as the main cue. These results offer the most comprehensive acoustic description, to date, of word stress in Dutch, and illustrate that large prosodic variability is present between individual talkers.
  • Severijnen, G. G. A., Gärtner, V. M., Walther, R. F. E., & McQueen, J. M. (2024). Talker-specific perceptual learning about lexical stress: stability over time. In Y. Chen, A. Chen, & A. Arvaniti (Eds.), Proceedings of Speech Prosody 2024 (pp. 657-661). doi:10.21437/SpeechProsody.2024-133.

    Abstract

    Talkers vary in how they speak, resulting in acoustic variability in segments and prosody. Previous studies showed that listeners deal with segmental variability through perceptual learning and that these learning effects are stable over time. The present study examined whether this is also true for lexical stress variability. Listeners heard Dutch minimal pairs (e.g., VOORnaam vs. voorNAAM, ‘first name’ vs. ‘respectable’) spoken by two talkers. Half of the participants heard Talker 1 using only F0 to signal lexical stress and Talker 2 using only intensity. The other half heard the reverse. After a learning phase, participants were tested on words spoken by these talkers with conflicting stress cues (‘mixed items’; e.g., Talker 1 saying voornaam with F0 signaling initial stress and intensity signaling final stress). We found that, despite the conflicting cues, listeners perceived these items following what they had learned. For example, participants hearing the example mixed item described above who had learned that Talker 1 used F0 perceived initial stress (VOORnaam) but those who had learned that Talker 1 used intensity perceived final stress (voorNAAM). Crucially, this result was still present in a delayed test phase, showing that talker-specific learning about lexical stress is stable over time.
  • Slaats, S. (2024). On the interplay between lexical probability and syntactic structure in language comprehension. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Slaats, S., Meyer, A. S., & Martin, A. E. (2024). Lexical surprisal shapes the time course of syntactic structure building. Neurobiology of Language, 5(4), 942-980. doi:10.1162/nol_a_00155.

    Abstract

    When we understand language, we recognize words and combine them into sentences. In this article, we explore the hypothesis that listeners use probabilistic information about words to build syntactic structure. Recent work has shown that lexical probability and syntactic structure both modulate the delta-band (<4 Hz) neural signal. Here, we investigated whether the neural encoding of syntactic structure changes as a function of the distributional properties of a word. To this end, we analyzed MEG data of 24 native speakers of Dutch who listened to three fairytales with a total duration of 49 min. Using temporal response functions and a cumulative model-comparison approach, we evaluated the contributions of syntactic and distributional features to the variance in the delta-band neural signal. This revealed that lexical surprisal values (a distributional feature), as well as bottom-up node counts (a syntactic feature), positively contributed to the model of the delta-band neural signal. Subsequently, we compared responses to the syntactic feature between words with high- and low-surprisal values. This revealed a delay in the response to the syntactic feature as a consequence of the surprisal value of the word: high-surprisal values were associated with a delayed response to the syntactic feature by 150–190 ms. The delay was not affected by word duration, and did not have a lexical origin. These findings suggest that the brain uses probabilistic information to infer syntactic structure, and highlight the importance of time in this process.

    Additional information

    supplementary data
  • Sommers, R. P. (2024). Neurobiology of reference. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Stärk, K. (2024). The company language keeps: How distributional cues influence statistical learning for language. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Ter Bekke, M., Drijvers, L., & Holler, J. (2024). Hand gestures have predictive potential during conversation: An investigation of the timing of gestures in relation to speech. Cognitive Science, 48(1): e13407. doi:10.1111/cogs.13407.

    Abstract

    During face-to-face conversation, transitions between speaker turns are incredibly fast. These fast turn exchanges seem to involve next speakers predicting upcoming semantic information, such that next turn planning can begin before a current turn is complete. Given that face-to-face conversation also involves the use of communicative bodily signals, an important question is how bodily signals such as co-speech hand gestures play into these processes of prediction and fast responding. In this corpus study, we found that hand gestures that depict or refer to semantic information started before the corresponding information in speech, which held both for the onset of the gesture as a whole, as well as the onset of the stroke (the most meaningful part of the gesture). This early timing potentially allows listeners to use the gestural information to predict the corresponding semantic information to be conveyed in speech. Moreover, we provided further evidence that questions with gestures got faster responses than questions without gestures. However, we found no evidence for the idea that how much a gesture precedes its lexical affiliate (i.e., its predictive potential) relates to how fast responses were given. The findings presented here highlight the importance of the temporal relation between speech and gesture and help to illuminate the potential mechanisms underpinning multimodal language processing during face-to-face conversation.
  • Ter Bekke, M., Drijvers, L., & Holler, J. (2024). Gestures speed up responses to questions. Language, Cognition and Neuroscience, 39(4), 423-430. doi:10.1080/23273798.2024.2314021.

    Abstract

    Most language use occurs in face-to-face conversation, which involves rapid turn-taking. Seeing communicative bodily signals in addition to hearing speech may facilitate such fast responding. We tested whether this holds for co-speech hand gestures by investigating whether these gestures speed up button press responses to questions. Sixty native speakers of Dutch viewed videos in which an actress asked yes/no-questions, either with or without a corresponding iconic hand gesture. Participants answered the questions as quickly and accurately as possible via button press. Gestures did not impact response accuracy, but crucially, gestures sped up responses, suggesting that response planning may be finished earlier when gestures are seen. How much gestures sped up responses was not related to their timing in the question or their timing with respect to the corresponding information in speech. Overall, these results are in line with the idea that multimodality may facilitate fast responding during face-to-face conversation.
  • Ter Bekke, M., Levinson, S. C., Van Otterdijk, L., Kühn, M., & Holler, J. (2024). Visual bodily signals and conversational context benefit the anticipation of turn ends. Cognition, 248: 105806. doi:10.1016/j.cognition.2024.105806.

    Abstract

    The typical pattern of alternating turns in conversation seems trivial at first sight. But a closer look quickly reveals the cognitive challenges involved, with much of it resulting from the fast-paced nature of conversation. One core ingredient to turn coordination is the anticipation of upcoming turn ends so as to be able to ready oneself for providing the next contribution. Across two experiments, we investigated two variables inherent to face-to-face conversation, the presence of visual bodily signals and preceding discourse context, in terms of their contribution to turn end anticipation. In a reaction time paradigm, participants anticipated conversational turn ends better when seeing the speaker and their visual bodily signals than when they did not, especially so for longer turns. Likewise, participants were better able to anticipate turn ends when they had access to the preceding discourse context than when they did not, and especially so for longer turns. Critically, the two variables did not interact, showing that visual bodily signals retain their influence even in the context of preceding discourse. In a pre-registered follow-up experiment, we manipulated the visibility of the speaker's head, eyes and upper body (i.e. torso + arms). Participants were better able to anticipate turn ends when the speaker's upper body was visible, suggesting a role for manual gestures in turn end anticipation. Together, these findings show that seeing the speaker during conversation may critically facilitate turn coordination in interaction.
  • Titus, A., Dijkstra, T., Willems, R. M., & Peeters, D. (2024). Beyond the tried and true: How virtual reality, dialog setups, and a focus on multimodality can take bilingual language production research forward. Neuropsychologia, 193: 108764. doi:10.1016/j.neuropsychologia.2023.108764.

    Abstract

    Bilinguals possess the ability of expressing themselves in more than one language, and typically do so in contextually rich and dynamic settings. Theories and models have indeed long considered context factors to affect bilingual language production in many ways. However, most experimental studies in this domain have failed to fully incorporate linguistic, social, or physical context aspects, let alone combine them in the same study. Indeed, most experimental psycholinguistic research has taken place in isolated and constrained lab settings with carefully selected words or sentences, rather than under rich and naturalistic conditions. We argue that the most influential experimental paradigms in the psycholinguistic study of bilingual language production fall short of capturing the effects of context on language processing and control presupposed by prominent models. This paper therefore aims to enrich the methodological basis for investigating context aspects in current experimental paradigms and thereby move the field of bilingual language production research forward theoretically. After considering extensions of existing paradigms proposed to address context effects, we present three far-ranging innovative proposals, focusing on virtual reality, dialog situations, and multimodality in the context of bilingual language production.
  • Titus, A., & Peeters, D. (2024). Multilingualism at the market: A pre-registered immersive virtual reality study of bilingual language switching. Journal of Cognition, 7(1), 24-35. doi:10.5334/joc.359.

    Abstract

    Bilinguals, by definition, are capable of expressing themselves in more than one language. But which cognitive mechanisms allow them to switch from one language to another? Previous experimental research using the cued language-switching paradigm supports theoretical models that assume that both transient, reactive and sustained, proactive inhibitory mechanisms underlie bilinguals’ capacity to flexibly and efficiently control which language they use. Here we used immersive virtual reality to test the extent to which these inhibitory mechanisms may be active when unbalanced Dutch-English bilinguals i) produce full sentences rather than individual words, ii) to a life-size addressee rather than only into a microphone, iii) using a message that is relevant to that addressee rather than communicatively irrelevant, iv) in a rich visual environment rather than in front of a computer screen. We observed a reversed language dominance paired with switch costs for the L2 but not for the L1 when participants were stand owners in a virtual marketplace and informed their monolingual customers in full sentences about the price of their fruits and vegetables. These findings strongly suggest that the subtle balance between the application of reactive and proactive inhibitory mechanisms that support bilingual language control may be different in the everyday life of a bilingual compared to in the (traditional) psycholinguistic laboratory.
  • Uluşahin, O., Bosker, H. R., McQueen, J. M., & Meyer, A. S. (2024). Knowledge of a talker’s f0 affects subsequent perception of voiceless fricatives. In Y. Chen, A. Chen, & A. Arvaniti (Eds.), Proceedings of Speech Prosody 2024 (pp. 432-436).

    Abstract

    The human brain deals with the infinite variability of speech through multiple mechanisms. Some of them rely solely on information in the speech input (i.e., signal-driven) whereas some rely on linguistic or real-world knowledge (i.e., knowledge-driven). Many signal-driven perceptual processes rely on the enhancement of acoustic differences between incoming speech sounds, producing contrastive adjustments. For instance, when an ambiguous voiceless fricative is preceded by a high fundamental frequency (f0) sentence, the fricative is perceived as having a lower spectral center of gravity (CoG). However, it is not clear whether knowledge of a talker’s typical f0 can lead to similar contrastive effects. This study investigated a possible talker f0 effect on fricative CoG perception. In the exposure phase, two groups of participants (N=16 each) heard the same talker at high or low f0 for 20 minutes. Later, in the test phase, participants rated fixed-f0 /?ɔk/ tokens as being /sɔk/ (i.e., high CoG) or /ʃɔk/ (i.e., low CoG), where /?/ represents a fricative from a 5-step /s/-/ʃ/ continuum. Surprisingly, the data revealed the opposite of our contrastive hypothesis, whereby hearing high f0 instead biased perception towards high CoG. Thus, we demonstrated that talker f0 information affects fricative CoG perception.
  • Ünal, E., Mamus, E., & Özyürek, A. (2024). Multimodal encoding of motion events in speech, gesture, and cognition. Language and Cognition, 16(4), 785-804. doi:10.1017/langcog.2023.61.

    Abstract

    How people communicate about motion events and how this is shaped by language typology are mostly studied with a focus on linguistic encoding in speech. Yet, human communication typically involves an interactional exchange of multimodal signals, such as hand gestures that have different affordances for representing event components. Here, we review recent empirical evidence on multimodal encoding of motion in speech and gesture to gain a deeper understanding of whether and how language typology shapes linguistic expressions in different modalities, and how this changes across different sensory modalities of input and interacts with other aspects of cognition. Empirical evidence strongly suggests that Talmy’s typology of event integration predicts multimodal event descriptions in speech and gesture and visual attention to event components prior to producing these descriptions. Furthermore, variability within the event itself, such as type and modality of stimuli, may override the influence of language typology, especially for expression of manner.
  • Verhoef, E., Allegrini, A. G., Jansen, P. R., Lange, K., Wang, C. A., Morgan, A. T., Ahluwalia, T. S., Symeonides, C., EAGLE-Working Group, Eising, E., Franken, M.-C., Hypponen, E., Mansell, T., Olislagers, M., Omerovic, E., Rimfeld, K., Schlag, F., Selzam, S., Shapland, C. Y., Tiemeier, H., Whitehouse, A. J. O., Saffery, R., Bønnelykke, K., Reilly, S., Pennell, C. E., Wake, M., Cecil, C. A., Plomin, R., Fisher, S. E., & St Pourcain, B. (2024). Genome-wide analyses of vocabulary size in infancy and toddlerhood: Associations with Attention-Deficit/Hyperactivity Disorder and cognition-related traits. Biological Psychiatry, 95(1), 859-869. doi:10.1016/j.biopsych.2023.11.025.

    Abstract

    Background

    The number of words children produce (expressive vocabulary) and understand (receptive vocabulary) changes rapidly during early development, partially due to genetic factors. Here, we performed a meta–genome-wide association study of vocabulary acquisition and investigated polygenic overlap with literacy, cognition, developmental phenotypes, and neurodevelopmental conditions, including attention-deficit/hyperactivity disorder (ADHD).

    Methods

    We studied 37,913 parent-reported vocabulary size measures (English, Dutch, Danish) for 17,298 children of European descent. Meta-analyses were performed for early-phase expressive (infancy, 15–18 months), late-phase expressive (toddlerhood, 24–38 months), and late-phase receptive (toddlerhood, 24–38 months) vocabulary. Subsequently, we estimated single nucleotide polymorphism–based heritability (SNP-h2) and genetic correlations (rg) and modeled underlying factor structures with multivariate models.

    Results

    Early-life vocabulary size was modestly heritable (SNP-h2 = 0.08–0.24). Genetic overlap between infant expressive and toddler receptive vocabulary was negligible (rg = 0.07), although each measure was moderately related to toddler expressive vocabulary (rg = 0.69 and rg = 0.67, respectively), suggesting a multifactorial genetic architecture. Both infant and toddler expressive vocabulary were genetically linked to literacy (e.g., spelling: rg = 0.58 and rg = 0.79, respectively), underlining genetic similarity. However, a genetic association of early-life vocabulary with educational attainment and intelligence emerged only during toddlerhood (e.g., receptive vocabulary and intelligence: rg = 0.36). Increased ADHD risk was genetically associated with larger infant expressive vocabulary (rg = 0.23). Multivariate genetic models in the ALSPAC (Avon Longitudinal Study of Parents and Children) cohort confirmed this finding for ADHD symptoms (e.g., at age 13; rg = 0.54) but showed that the association effect reversed for toddler receptive vocabulary (rg = −0.74), highlighting developmental heterogeneity.

    Conclusions

    The genetic architecture of early-life vocabulary changes during development, shaping polygenic association patterns with later-life ADHD, literacy, and cognition-related traits.
  • Zhao, J., Martin, A. E., & Coopmans, C. W. (2024). Structural and sequential regularities modulate phrase-rate neural tracking. Scientific Reports, 14: 16603. doi:10.1038/s41598-024-67153-z.

    Abstract

    Electrophysiological brain activity has been shown to synchronize with the quasi-regular repetition of grammatical phrases in connected speech—so-called phrase-rate neural tracking. Current debate centers around whether this phenomenon is best explained in terms of the syntactic properties of phrases or in terms of syntax-external information, such as the sequential repetition of parts of speech. As these two factors were confounded in previous studies, much of the literature is compatible with both accounts. Here, we used electroencephalography (EEG) to determine if and when the brain is sensitive to both types of information. Twenty native speakers of Mandarin Chinese listened to isochronously presented streams of monosyllabic words, which contained either grammatical two-word phrases (e.g., catch fish, sell house) or non-grammatical word combinations (e.g., full lend, bread far). Within the grammatical conditions, we varied two structural factors: the position of the head of each phrase and the type of attachment. Within the non-grammatical conditions, we varied the consistency with which parts of speech were repeated. Tracking was quantified through evoked power and inter-trial phase coherence, both derived from the frequency-domain representation of EEG responses. As expected, neural tracking at the phrase rate was stronger in grammatical sequences than in non-grammatical sequences without syntactic structure. Moreover, it was modulated by both attachment type and head position, revealing the structure-sensitivity of phrase-rate tracking. We additionally found that the brain tracks the repetition of parts of speech in non-grammatical sequences. These data provide an integrative perspective on the current debate about neural tracking effects, revealing that the brain utilizes regularities computed over multiple levels of linguistic representation in guiding rhythmic computation.
  • Azar, Z., Backus, A., & Ozyurek, A. (2017). Highly proficient bilinguals maintain language-specific pragmatic constraints on pronouns: Evidence from speech and gesture. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 81-86). Austin, TX: Cognitive Science Society.

    Abstract

    The use of subject pronouns by bilingual speakers using both a pro-drop and a non-pro-drop language (e.g. Spanish heritage speakers in the USA) is a well-studied topic in research on cross-linguistic influence in language contact situations. Previous studies looking at bilinguals with different proficiency levels have yielded conflicting results on whether there is transfer from the non-pro-drop patterns to the pro-drop language. Additionally, previous research has focused on speech patterns only. In this paper, we study the two modalities of language, speech and gesture, and ask whether and how they reveal cross-linguistic influence on the use of subject pronouns in discourse. We focus on elicited narratives from heritage speakers of Turkish in the Netherlands, in both Turkish (pro-drop) and Dutch (non-pro-drop), as well as from monolingual control groups. The use of pronouns was not very common in monolingual Turkish narratives and was constrained by the pragmatic contexts, unlike in Dutch. Furthermore, Turkish pronouns were more likely to be accompanied by localized gestures than Dutch pronouns, presumably because pronouns in Turkish are pragmatically marked forms. We did not find any cross-linguistic influence in bilingual speech or gesture patterns, in line with studies (speech only) of highly proficient bilinguals. We therefore suggest that speech and gesture parallel each other not only in monolingual but also in bilingual production. Highly proficient heritage speakers who have been exposed to diverse linguistic and gestural patterns of each language from early on maintain monolingual patterns of pragmatic constraints on the use of pronouns multimodally.
  • Barthel, M., Meyer, A. S., & Levinson, S. C. (2017). Next speakers plan their turn early and speak after turn-final 'go-signals'. Frontiers in Psychology, 8: 393. doi:10.3389/fpsyg.2017.00393.

    Abstract

    In conversation, turn-taking is usually fluid, with next speakers taking their turn right after the end of the previous turn. Most, but not all, previous studies show that next speakers start to plan their turn early, if possible already during the incoming turn. The present study makes use of the list-completion paradigm (Barthel et al., 2016), analyzing speech onset latencies and eye-movements of participants in a task-oriented dialogue with a confederate. The measures are used to disentangle the contributions to the timing of turn-taking of early planning of content on the one hand and initiation of articulation as a reaction to the upcoming turn-end on the other hand. Participants named objects visible on their computer screen in response to utterances that did, or did not, contain lexical and prosodic cues to the end of the incoming turn. In the presence of an early lexical cue, participants showed earlier gaze shifts toward the target objects and responded faster than in its absence, whereas the presence of a late intonational cue only led to faster response times and did not affect the timing of participants' eye movements. The results show that with a combination of eye-movement and turn-transition time measures it is possible to tease apart the effects of early planning and response initiation on turn timing. They are consistent with models of turn-taking that assume that next speakers (a) start planning their response as soon as the incoming turn's message can be understood and (b) monitor the incoming turn for cues to turn-completion so as to initiate their response when turn-transition becomes relevant.
  • Bouhali, F., Mongelli, V., & Cohen, L. (2017). Musical literacy shifts asymmetries in the ventral visual cortex. NeuroImage, 156, 445-455. doi:10.1016/j.neuroimage.2017.04.027.

    Abstract

    The acquisition of literacy has a profound impact on the functional specialization and lateralization of the visual cortex. Due to the overall lateralization of the language network, specialization for printed words develops in the left occipitotemporal cortex, allegedly inducing a secondary shift of visual face processing to the right, in literate as compared to illiterate subjects. Applying the same logic to the acquisition of high-level musical literacy, we predicted that, in musicians as compared to non-musicians, occipitotemporal activations should show a leftward shift for music reading, and an additional rightward push for face perception. To test these predictions, professional musicians and non-musicians viewed pictures of musical notation, faces, words, tools and houses in the MRI, and laterality was assessed in the ventral stream combining ROI and voxel-based approaches. The results supported both predictions, and allowed us to locate the leftward shift to the inferior temporal gyrus and the rightward shift to the fusiform cortex. Moreover, these laterality shifts generalized to categories other than music and faces. Finally, correlation measures across subjects did not support a causal link between the leftward and rightward shifts. Thus the acquisition of an additional perceptual expertise extensively modifies the laterality pattern in the visual system.

    Additional information

    1-s2.0-S1053811917303208-mmc1.docx
  • Brand, S. (2017). The processing of reduced word pronunciation variants by natives and learners: Evidence from French casual speech. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Carrion Castillo, A., Maassen, B., Franke, B., Heister, A., Naber, M., Van der Leij, A., Francks, C., & Fisher, S. E. (2017). Association analysis of dyslexia candidate genes in a Dutch longitudinal sample. European Journal of Human Genetics, 25(4), 452-460. doi:10.1038/ejhg.2016.194.

    Abstract

    Dyslexia is a common specific learning disability with a substantive genetic component. Several candidate genes have been proposed to be implicated in dyslexia susceptibility, such as DYX1C1, ROBO1, KIAA0319, and DCDC2. Associations with variants in these genes have also been reported with a variety of psychometric measures tapping into the underlying processes that might be impaired in dyslexic people. In this study, we first conducted a literature review to select single nucleotide polymorphisms (SNPs) in dyslexia candidate genes that had been repeatedly implicated across studies. We then assessed the SNPs for association in the richly phenotyped longitudinal data set from the Dutch Dyslexia Program. We tested for association with several quantitative traits, including word and nonword reading fluency, rapid naming, phoneme deletion, and nonword repetition. In this, we took advantage of the longitudinal nature of the sample to examine if associations were stable across four educational time-points (from 7 to 12 years). Two SNPs in the KIAA0319 gene were nominally associated with rapid naming, and these associations were stable across different ages. Genetic association analysis with complex cognitive traits can be enriched through the use of longitudinal information on trait development.
  • Collins, J. (2017). Real and spurious correlations involving tonal languages. In N. J. Enfield (Ed.), Dependencies in language: On the causal ontology of linguistics systems (pp. 129-139). Berlin: Language Science Press.
  • Cortázar-Chinarro, M., Lattenkamp, E. Z., Meyer-Lucht, Y., Luquet, E., Laurila, A., & Höglund, J. (2017). Drift, selection, or migration? Processes affecting genetic differentiation and variation along a latitudinal gradient in an amphibian. BMC Evolutionary Biology, 17: 189. doi:10.1186/s12862-017-1022-z.

    Abstract

    Past events like fluctuations in population size and post-glacial colonization processes may influence the relative importance of genetic drift, migration and selection when determining the present day patterns of genetic variation. We disentangle how drift, selection and migration shape neutral and adaptive genetic variation in 12 moor frog populations along a 1700 km latitudinal gradient. We studied genetic differentiation and variation at a MHC exon II locus and a set of 18 microsatellites.

    Results

    Using outlier analyses, we identified the MHC II exon 2 (corresponding to the β-2 domain) locus and one microsatellite locus (RCO8640) to be subject to diversifying selection, while five microsatellite loci showed signals of stabilizing selection among populations. STRUCTURE and DAPC analyses on the neutral microsatellites assigned populations to a northern and a southern cluster, reflecting two different post-glacial colonization routes found in previous studies. Genetic variation overall was lower in the northern cluster. The signature of selection on MHC exon II was weaker in the northern cluster, possibly as a consequence of smaller and more fragmented populations.

    Conclusion

    Our results show that historical demographic processes combined with selection and drift have led to a complex pattern of differentiation along the gradient where some loci are more divergent among populations than predicted from drift expectations due to diversifying selection, while other loci are more uniform among populations due to stabilizing selection. Importantly, both overall and MHC genetic variation are lower at northern latitudes. Due to lower evolutionary potential, the low genetic variation in northern populations may increase the risk of extinction when confronted with emerging pathogens and climate change.
  • Dai, B., McQueen, J. M., Hagoort, P., & Kösem, A. (2017). Pure linguistic interference during comprehension of competing speech signals. The Journal of the Acoustical Society of America, 141, EL249-EL254. doi:10.1121/1.4977590.

    Abstract

    Speech-in-speech perception can be challenging because the processing of competing acoustic and linguistic information leads to informational masking. Here, a method is proposed to isolate the linguistic component of informational masking while keeping the distractor's acoustic information unchanged. Participants performed a dichotic listening cocktail-party task before and after training on 4-band noise-vocoded sentences that became intelligible through the training. Distracting noise-vocoded speech interfered more with target speech comprehension after training (i.e., when intelligible) than before training (i.e., when unintelligible) at −3 dB SNR. These findings confirm that linguistic and acoustic information have distinct masking effects during speech-in-speech comprehension.
  • Dediu, D., Janssen, R., & Moisik, S. R. (2017). Language is not isolated from its wider environment: Vocal tract influences on the evolution of speech and language. Language and Communication, 54, 9-20. doi:10.1016/j.langcom.2016.10.002.

    Abstract

    Language is not a purely cultural phenomenon somehow isolated from its wider environment, and we may only understand its origins and evolution by seriously considering its embedding in this environment as well as its multimodal nature. By environment here we understand other aspects of culture (such as communication technology, attitudes towards language contact, etc.), of the physical environment (ultraviolet light incidence, air humidity, etc.), and of the biological infrastructure for language and speech. We are specifically concerned in this paper with the latter, in the form of the biases, constraints and affordances that the anatomy and physiology of the vocal tract create on speech and language. In a nutshell, our argument is that (a) there is an under-appreciated amount of inter-individual variation in vocal tract (VT) anatomy and physiology, (b) variation that is non-randomly distributed across populations, and that (c) results in systematic differences in phonetics and phonology between languages. Relevant differences in VT anatomy include the overall shape of the hard palate, the shape of the alveolar ridge, the relationship between the lower and upper jaw, to mention just a few, and our data offer a new way to systematically explore such differences and their potential impact on speech. These differences generate very small biases that nevertheless can be amplified by the repeated use and transmission of language, affecting language diachrony and resulting in cross-linguistic synchronic differences. Moreover, the same type of biases and processes might have played an essential role in the emergence and evolution of language, and might allow us a glimpse into the speech and language of extinct humans by, for example, reconstructing the anatomy of parts of their vocal tract from the fossil record and extrapolating the biases we find in present-day humans.
  • Drijvers, L., & Ozyurek, A. (2017). Visual context enhanced: The joint contribution of iconic gestures and visible speech to degraded speech comprehension. Journal of Speech, Language, and Hearing Research, 60, 212-222. doi:10.1044/2016_JSLHR-H-16-0101.

    Abstract

    Purpose This study investigated whether and to what extent iconic co-speech gestures contribute to information from visible speech to enhance degraded speech comprehension at different levels of noise-vocoding. Previous studies of the contributions of these 2 visual articulators to speech comprehension have only been performed separately.

    Method Twenty participants watched videos of an actress uttering an action verb and completed a free-recall task. The videos were presented in 3 speech conditions (2-band noise-vocoding, 6-band noise-vocoding, clear), 3 multimodal conditions (speech + lips blurred, speech + visible speech, speech + visible speech + gesture), and 2 visual-only conditions (visible speech, visible speech + gesture).

    Results Accuracy levels were higher when both visual articulators were present compared with 1 or none. The enhancement effects of (a) visible speech, (b) gestural information on top of visible speech, and (c) both visible speech and iconic gestures were larger in 6-band than 2-band noise-vocoding or visual-only conditions. Gestural enhancement in 2-band noise-vocoding did not differ from gestural enhancement in visual-only conditions.
  • Drozdova, P., Van Hout, R., & Scharenborg, O. (2017). L2 voice recognition: The role of speaker-, listener-, and stimulus-related factors. The Journal of the Acoustical Society of America, 142(5), 3058-3068. doi:10.1121/1.5010169.

    Abstract

    Previous studies examined various factors influencing voice recognition and learning with mixed results. The present study investigates the separate and combined contribution of these various speaker-, stimulus-, and listener-related factors to voice recognition. Dutch listeners, with arguably incomplete phonological and lexical knowledge in the target language, English, learned to recognize the voice of four native English speakers, speaking in English, during a four-day training. Training was successful and listeners' accuracy was shown to be influenced by the acoustic characteristics of speakers and the sound composition of the words used in the training, but not by lexical frequency of the words, nor the lexical knowledge of the listeners or their phonological aptitude. Although not conclusive, listeners with a lower working memory capacity seemed to be slower in learning voices than listeners with a higher working memory capacity. The results reveal that speaker-related, listener-related, and stimulus-related factors accumulate in voice recognition, while lexical information turns out not to play a role in successful voice learning and recognition. This implies that voice recognition operates at the prelexical processing level.
  • Ernestus, M., Kouwenhoven, H., & Van Mulken, M. (2017). The direct and indirect effects of the phonotactic constraints in the listener's native language on the comprehension of reduced and unreduced word pronunciation variants in a foreign language. Journal of Phonetics, 62, 50-64. doi:10.1016/j.wocn.2017.02.003.

    Abstract

    This study investigates how the comprehension of casual speech in foreign languages is affected by the phonotactic constraints in the listener’s native language. Non-native listeners of English with different native languages heard short English phrases produced by native speakers of English or Spanish and they indicated whether these phrases included can or can’t. Native Mandarin listeners especially tended to interpret can’t as can. We interpret this result as a direct effect of the ban on word-final /nt/ in Mandarin. Both the native Mandarin and the native Spanish listeners did not take full advantage of the subsegmental information in the speech signal cueing reduced can’t. This finding is probably an indirect effect of the phonotactic constraints in their native languages: these listeners have difficulties interpreting the subsegmental cues because these cues do not occur or have different functions in their native languages. Dutch resembles English in the phonotactic constraints relevant to the comprehension of can’t, and native Dutch listeners showed similar patterns in their comprehension of native and non-native English to native English listeners. This result supports our conclusion that the major patterns in the comprehension results are driven by the phonotactic constraints in the listeners’ native languages.
  • Franken, M. K., Eisner, F., Schoffelen, J.-M., Acheson, D. J., Hagoort, P., & McQueen, J. M. (2017). Audiovisual recalibration of vowel categories. In Proceedings of Interspeech 2017 (pp. 655-658). doi:10.21437/Interspeech.2017-122.

    Abstract

    One of the most daunting tasks of a listener is to map a continuous auditory stream onto known speech sound categories and lexical items. A major issue with this mapping problem is the variability in the acoustic realizations of sound categories, both within and across speakers. Past research has suggested listeners may use visual information (e.g., lipreading) to calibrate these speech categories to the current speaker. Previous studies have focused on audiovisual recalibration of consonant categories. The present study explores whether vowel categorization, which is known to show less sharply defined category boundaries, also benefits from visual cues.

    Participants were exposed to videos of a speaker pronouncing one out of two vowels, paired with audio that was ambiguous between the two vowels. After exposure, it was found that participants had recalibrated their vowel categories. In addition, individual variability in audiovisual recalibration is discussed. It is suggested that listeners’ category sharpness may be related to the weight they assign to visual information in audiovisual speech perception. Specifically, listeners with less sharp categories assign more weight to visual information during audiovisual speech recognition.
  • Franken, M. K., Acheson, D. J., McQueen, J. M., Eisner, F., & Hagoort, P. (2017). Individual variability as a window on production-perception interactions in speech motor control. The Journal of the Acoustical Society of America, 142(4), 2007-2018. doi:10.1121/1.5006899.

    Abstract

    An important part of understanding speech motor control consists of capturing the interaction between speech production and speech perception. This study tests a prediction of theoretical frameworks that have tried to account for these interactions: if speech production targets are specified in auditory terms, individuals with better auditory acuity should have more precise speech targets, evidenced by decreased within-phoneme variability and increased between-phoneme distance. A study was carried out consisting of perception and production tasks in counterbalanced order. Auditory acuity was assessed using an adaptive speech discrimination task, while production variability was determined using a pseudo-word reading task. Analyses of the production data were carried out to quantify average within-phoneme variability as well as average between-phoneme contrasts. Results show that individuals not only vary in their production and perceptual abilities, but that better discriminators have more distinctive vowel production targets (that is, targets with less within-phoneme variability and greater between-phoneme distances), confirming the initial hypothesis. This association between speech production and perception did not depend on local phoneme density in vowel space. This study suggests that better auditory acuity leads to more precise speech production targets, which may be a consequence of auditory feedback affecting speech production over time.
  • Frega, M., van Gestel, S. H. C., Linda, K., Van der Raadt, J., Keller, J., Van Rhijn, J. R., Schubert, D., Albers, C. A., & Kasri, N. N. (2017). Rapid neuronal differentiation of induced pluripotent stem cells for measuring network activity on micro-electrode arrays. Journal of Visualized Experiments, e54900. doi:10.3791/54900.

    Abstract

    Neurons derived from human induced Pluripotent Stem Cells (hiPSCs) provide a promising new tool for studying neurological disorders. In the past decade, many protocols for differentiating hiPSCs into neurons have been developed. However, these protocols are often slow with high variability, low reproducibility, and low efficiency. In addition, the neurons obtained with these protocols are often immature and lack adequate functional activity both at the single-cell and network levels unless the neurons are cultured for several months. Partially due to these limitations, the functional properties of hiPSC-derived neuronal networks are still not well characterized. Here, we adapt a recently published protocol that describes production of human neurons from hiPSCs by forced expression of the transcription factor neurogenin-2. This protocol is rapid (yielding mature neurons within 3 weeks) and efficient, with nearly 100% conversion efficiency of transduced cells (>95% of DAPI-positive cells are MAP2 positive). Furthermore, the protocol yields a homogeneous population of excitatory neurons that would allow the investigation of cell-type specific contributions to neurological disorders. We modified the original protocol by generating stably transduced hiPSC cells, giving us explicit control over the total number of neurons. These cells are then used to generate hiPSC-derived neuronal networks on micro-electrode arrays. In this way, the spontaneous electrophysiological activity of hiPSC-derived neuronal networks can be measured and characterized, while retaining interexperimental consistency in terms of cell density. The presented protocol is broadly applicable, especially for mechanistic and pharmacological studies on human neuronal networks.

    Additional information

    video component of this article
  • Guadalupe, T., Mathias, S. R., Van Erp, T. G. M., Whelan, C. D., Zwiers, M. P., Abe, Y., Abramovic, L., Agartz, I., Andreassen, O. A., Arias-Vásquez, A., Aribisala, B. S., Armstrong, N. J., Arolt, V., Artiges, E., Ayesa-Arriola, R., Baboyan, V. G., Banaschewski, T., Barker, G., Bastin, M. E., Baune, B. T., Blangero, J., Bokde, A. L., Boedhoe, P. S., Bose, A., Brem, S., Brodaty, H., Bromberg, U., Brooks, S., Büchel, C., Buitelaar, J., Calhoun, V. D., Cannon, D. M., Cattrell, A., Cheng, Y., Conrod, P. J., Conzelmann, A., Corvin, A., Crespo-Facorro, B., Crivello, F., Dannlowski, U., De Zubicaray, G. I., De Zwarte, S. M., Deary, I. J., Desrivières, S., Doan, N. T., Donohoe, G., Dørum, E. S., Ehrlich, S., Espeseth, T., Fernández, G., Flor, H., Fouche, J.-P., Frouin, V., Fukunaga, M., Gallinat, J., Garavan, H., Gill, M., Suarez, A. G., Gowland, P., Grabe, H. J., Grotegerd, D., Gruber, O., Hagenaars, S., Hashimoto, R., Hauser, T. U., Heinz, A., Hibar, D. P., Hoekstra, P. J., Hoogman, M., Howells, F. M., Hu, H., Hulshoff Pol, H. E., Huyser, C., Ittermann, B., Jahanshad, N., Jönsson, E. G., Jurk, S., Kahn, R. S., Kelly, S., Kraemer, B., Kugel, H., Kwon, J. S., Lemaitre, H., Lesch, K.-P., Lochner, C., Luciano, M., Marquand, A. F., Martin, N. G., Martínez-Zalacaín, I., Martinot, J.-L., Mataix-Cols, D., Mather, K., McDonald, C., McMahon, K. L., Medland, S. E., Menchón, J. M., Morris, D. W., Mothersill, O., Maniega, S. M., Mwangi, B., Nakamae, T., Nakao, T., Narayanaswaamy, J. C., Nees, F., Nordvik, J. E., Onnink, A. M. H., Opel, N., Ophoff, R., Martinot, M.-L.-P., Orfanos, D. P., Pauli, P., Paus, T., Poustka, L., Reddy, J. Y., Renteria, M. 
E., Roiz-Santiáñez, R., Roos, A., Royle, N. A., Sachdev, P., Sánchez-Juan, P., Schmaal, L., Schumann, G., Shumskaya, E., Smolka, M. N., Soares, J. C., Soriano-Mas, C., Stein, D. J., Strike, L. T., Toro, R., Turner, J. A., Tzourio-Mazoyer, N., Uhlmann, A., Valdés Hernández, M., Van den Heuvel, O. A., Van der Meer, D., Van Haren, N. E.., Veltman, D. J., Venkatasubramanian, G., Vetter, N. C., Vuletic, D., Walitza, S., Walter, H., Walton, E., Wang, Z., Wardlaw, J., Wen, W., Westlye, L. T., Whelan, R., Wittfeld, K., Wolfers, T., Wright, M. J., Xu, J., Xu, X., Yun, J.-Y., Zhao, J., Franke, B., Thompson, P. M., Glahn, D. C., Mazoyer, B., Fisher, S. E., & Francks, C. (2017). Human subcortical asymmetries in 15,847 people worldwide reveal effects of age and sex. Brain Imaging and Behavior, 11(5), 1497-1514. doi:10.1007/s11682-016-9629-z.

    Abstract

    The two hemispheres of the human brain differ functionally and structurally. Despite over a century of research, the extent to which brain asymmetry is influenced by sex, handedness, age, and genetic factors is still controversial. Here we present the largest ever analysis of subcortical brain asymmetries, in a harmonized multi-site study using meta-analysis methods. Volumetric asymmetry of seven subcortical structures was assessed in 15,847 MRI scans from 52 datasets worldwide. There were sex differences in the asymmetry of the globus pallidus and putamen. Heritability estimates, derived from 1170 subjects belonging to 71 extended pedigrees, revealed that additive genetic factors influenced the asymmetry of these two structures and that of the hippocampus and thalamus. Handedness had no detectable effect on subcortical asymmetries, even in this unprecedented sample size, but the asymmetry of the putamen varied with age. Genetic drivers of asymmetry in the hippocampus, thalamus and basal ganglia may affect variability in human cognition, including susceptibility to psychiatric disorders.

    Additional information

    11682_2016_9629_MOESM1_ESM.pdf
  • Guadalupe, T. (2017). The biology of variation in anatomical brain asymmetries. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Hartung, F. (2017). Getting under your skin: The role of perspective and simulation of experience in narrative comprehension. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Hartung, F., Hagoort, P., & Willems, R. M. (2017). Readers select a comprehension mode independent of pronoun: Evidence from fMRI during narrative comprehension. Brain and Language, 170, 29-38. doi:10.1016/j.bandl.2017.03.007.

    Abstract

    Perspective is a crucial feature for communicating about events. Yet it is unclear how linguistically encoded perspective relates to cognitive perspective taking. Here, we tested the effect of perspective taking with short literary stories. Participants listened to stories with 1st or 3rd person pronouns referring to the protagonist, while undergoing fMRI. When comparing action events with 1st and 3rd person pronouns, we found no evidence for a neural dissociation depending on the pronoun. A split sample approach based on the self-reported experience of perspective taking revealed 3 comprehension preferences. One group showed a strong 1st person preference, another a strong 3rd person preference, while a third group engaged in 1st and 3rd person perspective taking simultaneously. Comparing brain activations of the groups revealed different neural networks. Our results suggest that comprehension is perspective dependent, though not on the perspective suggested by the text but on the reader’s (situational) preference.
  • Hartung, F., Withers, P., Hagoort, P., & Willems, R. M. (2017). When fiction is just as real as fact: No differences in reading behavior between stories believed to be based on true or fictional events. Frontiers in Psychology, 8: 1618. doi:10.3389/fpsyg.2017.01618.

    Abstract

    Experiments have shown that compared to fictional texts, readers read factual texts faster and have better memory for described situations. Reading fictional texts on the other hand seems to improve memory for exact wordings and expressions. Most of these studies used a ‘newspaper’ versus ‘literature’ comparison. In the present study, we investigated the effect of readers’ expectations as to whether information is true or fictional with a subtler manipulation by labelling short stories as either based on true or fictional events. In addition, we tested whether narrative perspective or individual preference in perspective taking affects reading true or fictional stories differently. In an online experiment, participants (final N=1742) read one story which was introduced as based on true events or as fictional (factor fictionality). The story could be narrated in either 1st or 3rd person perspective (factor perspective). We measured immersion in and appreciation of the story, perspective taking, as well as memory for events. We found no evidence that knowing a story is fictional or based on true events influences reading behavior or experiential aspects of reading. We suggest that it is not whether a story is true or fictional, but rather expectations towards certain reading situations (e.g. reading newspaper or literature) which affect behavior by activating appropriate reading goals. Results further confirm that narrative perspective partially influences perspective taking and experiential aspects of reading.
  • Heyselaar, E., Hagoort, P., & Segaert, K. (2017). How social opinion influences syntactic processing – An investigation using virtual reality. PLoS One, 12(4): e0174405. doi:10.1371/journal.pone.0174405.
  • Heyselaar, E., Hagoort, P., & Segaert, K. (2017). In dialogue with an avatar, language behavior is identical to dialogue with a human partner. Behavior Research Methods, 49(1), 46-60. doi:10.3758/s13428-015-0688-7.

    Abstract

    The use of virtual reality (VR) as a methodological tool is becoming increasingly popular in behavioral research as its flexibility allows for a wide range of applications. This new method has not been as widely accepted in the field of psycholinguistics, however, possibly due to the assumption that language processing during human-computer interactions does not accurately reflect human-human interactions. Yet at the same time there is a growing need to study human-human language interactions in a tightly controlled context, which has not been possible using existing methods. VR, however, offers experimental control over parameters that cannot be (as finely) controlled in the real world. As such, in this study we aim to show that human-computer language interaction is comparable to human-human language interaction in virtual reality. In the current study we compare participants’ language behavior in a syntactic priming task with human versus computer partners: we used a human partner, a human-like avatar with human-like facial expressions and verbal behavior, and a computer-like avatar which had this humanness removed. As predicted, our study shows comparable priming effects between the human and human-like avatar suggesting that participants attributed human-like agency to the human-like avatar. Indeed, when interacting with the computer-like avatar, the priming effect was significantly decreased. This suggests that when interacting with a human-like avatar, sentence processing is comparable to interacting with a human partner. Our study therefore shows that VR is a valid platform for conducting language research and studying dialogue interactions in an ecologically valid manner.
  • Heyselaar, E. (2017). Influences on the magnitude of syntactic priming. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Heyselaar, E., Segaert, K., Walvoort, S. J., Kessels, R. P., & Hagoort, P. (2017). The role of nondeclarative memory in the skill for language: Evidence from syntactic priming in patients with amnesia. Neuropsychologia, 101, 97-105. doi:10.1016/j.neuropsychologia.2017.04.033.

    Abstract

    Syntactic priming, the phenomenon in which participants adopt the linguistic behaviour of their partner, is widely used in psycholinguistics to investigate syntactic operations. Although the phenomenon of syntactic priming is well documented, the memory system that supports the retention of this syntactic information long enough to influence future utterances, is not as widely investigated. We aim to shed light on this issue by assessing patients with Korsakoff's amnesia on an active-passive syntactic priming task and compare their performance to controls matched in age, education, and premorbid intelligence. Patients with Korsakoff's syndrome display deficits in all subdomains of declarative memory, yet their nondeclarative memory remains intact, making them an ideal patient group to determine which memory system supports syntactic priming. In line with the hypothesis that syntactic priming relies on nondeclarative memory, the patient group shows strong priming tendencies (12.6% passive structure repetition). Our healthy control group did not show a priming tendency, presumably due to cognitive interference between declarative and nondeclarative memory. We discuss the results in relation to amnesia, aging, and compensatory mechanisms.
  • Hibar, D. P., Adams, H. H. H., Jahanshad, N., Chauhan, G., Stein, J. L., Hofer, E., Rentería, M. E., Bis, J. C., Arias-Vasquez, A., Ikram, M. K., Desrivieres, S., Vernooij, M. W., Abramovic, L., Alhusaini, S., Amin, N., Andersson, M., Arfanakis, K., Aribisala, B. S., Armstrong, N. J., Athanasiu, L., Axelsson, T., Beecham, A. H., Beiser, A., Bernard, M., Blanton, S. H., Bohlken, M. M., Boks, M. P., Bralten, J., Brickman, A. M., Carmichael, O., Chakravarty, M. M., Chen, Q., Ching, C. R. K., Chouraki, V., Cuellar-Partida, G., Crivello, F., den Brabander, A., Doan, N. T., Ehrlich, S., Giddaluru, S., Goldman, A. L., Gottesman, R. F., Grimm, O., Griswold, M. E., Guadalupe, T., Gutman, B. A., Hass, J., Haukvik, U. K., Hoehn, D., Holmes, A. J., Hoogman, M., Janowitz, D., Jia, T., Jørgensen, K. N., Mirza-Schreiber, N., Kasperaviciute, D., Kim, S., Klein, M., Krämer, B., Lee, P. H., Liewald, D. C. M., Lopez, L. M., Luciano, M., Macare, C., Marquand, A. F., Matarin, M., Mather, K. A., Mattheisen, M., McKay, D. R., Milaneschi, Y., Maniega, S. M., Nho, K., Nugent, A. C., Nyquist, P., Olde Loohuis, L. M., Oosterlaan, J., Papmeyer, M., Pirpamer, L., Pütz, B., Ramasamy, A., Richards, J. S., Risacher, S., Roiz-Santiañez, R., Rommelse, N., Ropele, S., Rose, E., Royle, N. A., Rundek, T., Sämann, P. G., Saremi, A., Satizabal, C. L., Schmaal, L., Schork, A. J., Shen, L., Shin, J., Shumskaya, E., Smith, A. V., Sprooten, E., Strike, L. T., Teumer, A., Tordesillas-Gutierrez, D., Toro, R., Trabzuni, D., Trompet, S., Vaidya, D., Van der Grond, J., Van der Lee, S. J., Van der Meer, D., Van Donkelaar, M. M. J., Van Eijk, K. R., van Erp, T. G. 
M., Van Rooij, D., Walton, E., Westlye, L. T., Whelan, C. D., Windham, B. G., Winkler, A. M., Wittfeld, K. M., Woldehawariat, G., Wolf, C., Wolfers, T., Yanek, L. R., Yang, J., Zijdenbos, A., Zwiers, M. P., Agartz, I., Almasy, L., Ames, D., Amouyel, P., Andreassen, O. A., Arepalli, S., Assareh, A. A., Barral, S., Bastin, M. E., Becker, D. M., Becker, J. T., Bennett, D. A., Blangero, J., Van Bokhoven, H., Boomsma, D. I., Brodaty, H., Brouwer, R. M., Brunner, H. G., Buckner, R. L., Buitelaar, J. K., Bulayeva, K. B., Cahn, W., Calhoun, V. D., Cannon, D. M., Cavalleri, G. L., Cheng, C.-Y., Cichon, S., Cookson, M. R., Corvin, A., Crespo-Facorro, B., Curran, J. E., Czisch, M., Dale, A. M., Davies, G. E., De Craen, A. J. M., De Geus, E. J. C., De Jager, P. L., De Zubicaray, G. i., Deary, I. J., Debette, S., DeCarli, C., Delanty, N., Depondt, C., DeStefano, A., Dillman, A., Djurovic, S., Donohoe, G., Drevets, W. C., Duggirala, R., Dyer, T. D., Enzinger, C., Erk, S., Espeseth, T., Fedko, I. O., Fernández, G., Ferrucci, L., Fisher, S. E., Fleischman, D. A., Ford, I., Fornage, M., Foroud, T. M., Fox, P. T., Francks, C., Fukunaga, M., Gibbs, J. R., Glahn, D. C., Gollub, R. L., Göring, H. H. H., Green, R. C., Gruber, O., Gudnason, V., Guelfi, S., Haberg, A. K., Hansell, N. K., Hardy, J., Hartman, C. A., Hashimoto, R., Hegenscheid, K., Heinz, A., Le Hellard, S., Hernandez, D. G., Heslenfeld, D. J., Ho, B.-C., Hoekstra, P. J., Hoffmann, W., Hofman, A., Holsboer, F., Homuth, G., Hosten, N., Hottenga, J.-J., Huentelman, M., Pol, H. E. H., Ikeda, M., Jack Jr., C. R., Jenkinson, M., Johnson, R., Jonsson, E. G., Jukema, J. W., Kahn, R. S., Kanai, R., Kloszewska, I., Knopman, D. S., Kochunov, P., Kwok, J. B., Lawrie, S. M., Lemaître, H., Liu, X., Longo, D. L., Lopez, O. L., Lovestone, S., Martinez, O., Martinot, J.-L., Mattay, V. S., McDonald, C., Mcintosh, A. M., McMahon, F., McMahon, K. L., Mecocci, P., Melle, I., Meyer-Lindenberg, A., Mohnke, S., Montgomery, G. W., Morris, D. 
W., Mosley, T. H., Mühleisen, T. W., Müller-Myhsok, B., Nalls, M. A., Nauck, M., Nichols, T. E., Niessen, W. J., Nöthen, M. M., Nyberg, L., Ohi, K., Olvera, R. L., Ophoff, R. A., Pandolfo, M., Paus, T., Pausova, Z., Penninx, B. W. J. H., Pike, G. B., Potkin, S. G., Psaty, B. M., Reppermund, S., Rietschel, M., Roffman, J. L., Romanczuk-Seiferth, N., Rotter, J. I., Ryten, M., Sacco, R. L., Sachdev, P. S., Saykin, A. J., Schmidt, R., Schmidt, H., Schofield, P. R., Sigursson, S., Simmons, A., Singleton, A., Sisodiya, S. M., Smith, C., Smoller, J. W., Soininen, H., Steen, V. M., Stott, D. J., Sussmann, J. E., Thalamuthu, A., Toga, A. W., Traynor, B. J., Troncoso, J., Tsolaki, M., Tzourio, C., Uitterlinden, A. G., Hernández, M. C. V., Van der Brug, M., Van der Lugt, A., Van der Wee, N. J. A., Van Haren, N. E. M., Van Tol, M.-J., Vardarajan, B. N., Vellas, B., Veltman, D. J., Völzke, H., Walter, H., Wardlaw, J. M., Wassink, T. H., Weale, M. e., Weinberger, D. R., Weiner, M., Wen, W., Westman, E., White, T., Wong, T. Y., Wright, C. B., Zielke, R. H., Zonderman, A. B., Martin, N. G., Van Duijn, C. M., Wright, M. J., Longstreth, W. W. T., Schumann, G., Grabe, H. J., Franke, B., Launer, L. J., Medland, S. E., Seshadri, S., Thompson, P. M., & Ikram, A. (2017). Novel genetic loci associated with hippocampal volume. Nature Communications, 8: 13624. doi:10.1038/ncomms13624.

    Abstract

    The hippocampal formation is a brain structure integrally involved in episodic memory, spatial navigation, cognition and stress responsiveness. Structural abnormalities in hippocampal volume and shape are found in several common neuropsychiatric disorders. To identify the genetic underpinnings of hippocampal structure here we perform a genome-wide association study (GWAS) of 33,536 individuals and discover six independent loci significantly associated with hippocampal volume, four of them novel. Of the novel loci, three lie within genes (ASTN2, DPP4 and MAST4) and one is found 200 kb upstream of SHH. A hippocampal subfield analysis shows that a locus within the MSRB3 gene shows evidence of a localized effect along the dentate gyrus, subiculum, CA1 and fissure. Further, we show that genetic variants associated with decreased hippocampal volume are also associated with increased risk for Alzheimer’s disease (rg=−0.155). Our findings suggest novel biological pathways through which human genetic variation influences hippocampal volume and risk for neuropsychiatric illness.

    Additional information

    ncomms13624-s1.pdf ncomms13624-s2.xlsx
  • Hoey, E. (2017). [Review of the book Temporality in Interaction]. Studies in Language, 41(1), 232-238. doi:10.1075/sl.41.1.08hoe.
  • Hoey, E. (2017). Lapse organization in interaction. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Hoey, E. (2017). Sequence recompletion: A practice for managing lapses in conversation. Journal of Pragmatics, 109, 47-63. doi:10.1016/j.pragma.2016.12.008.

    Abstract

    Conversational interaction occasionally lapses as topics become exhausted or as participants are left with no obvious thing to talk about next. In this article I look at episodes of ordinary conversation to examine how participants resolve issues of speakership and sequentiality in lapse environments. In particular, I examine one recurrent phenomenon—sequence recompletion—whereby participants bring to completion a sequence of talk that was already treated as complete. Using conversation analysis, I describe four methods for sequence recompletion: turn-exiting, action redoings, delayed replies, and post-sequence transitions. With this practice, participants use verbal and vocal resources to locally manage their participation framework when ending one course of action and potentially starting up a new one.
  • Hömke, P., Holler, J., & Levinson, S. C. (2017). Eye blinking as addressee feedback in face-to-face conversation. Research on Language and Social Interaction, 50, 54-70. doi:10.1080/08351813.2017.1262143.

    Abstract

    Does blinking function as a type of feedback in conversation? To address this question, we built a corpus of Dutch conversations, identified short and long addressee blinks during extended turns, and measured their occurrence relative to the end of turn constructional units (TCUs), the location where feedback typically occurs. Addressee blinks were indeed timed to the end of TCUs. Also, long blinks were more likely than short blinks to occur during mutual gaze, with nods or continuers, and their occurrence was restricted to sequential contexts in which signaling understanding was particularly relevant, suggesting a special signaling capacity of long blinks.
  • Iacozza, S., Costa, A., & Duñabeitia, J. A. (2017). What do your eyes reveal about your foreign language? Reading emotional sentences in a native and foreign language. PLoS One, 12(10): e0186027. doi:10.1371/journal.pone.0186027.

    Abstract

    Foreign languages are often learned in emotionally neutral academic environments which differ greatly from the familiar context where native languages are acquired. This difference in learning contexts has been argued to lead to reduced emotional resonance when confronted with a foreign language. In the current study, we investigated whether the reactivity of the sympathetic nervous system in response to emotionally-charged stimuli is reduced in a foreign language. To this end, pupil sizes were recorded while reading aloud emotional sentences in the native or foreign language. Additionally, subjective ratings of emotional impact were provided after reading each sentence, allowing us to further investigate foreign language effects on explicit emotional understanding. Pupillary responses showed a larger effect of emotion in the native than in the foreign language. However, such a difference was not present for explicit ratings of emotionality. These results reveal that the sympathetic nervous system reacts differently depending on the language context, which in turn suggests a deeper emotional processing when reading in a native compared to a foreign language.

    Additional information

    pone.0186027.s001.docx
  • Jongman, S. R., Roelofs, A., Scheper, A., & Meyer, A. S. (2017). Picture naming in typically developing and language impaired children: The role of sustained attention. International Journal of Language & Communication Disorders, 52(3), 323-333. doi:10.1111/1460-6984.12275.

    Abstract

    Children with specific language impairment (SLI) have problems not only with language performance but also with sustained attention, which is the ability to maintain alertness over an extended period of time. Although there is consensus that this ability is impaired with respect to processing stimuli in the auditory perceptual modality, conflicting evidence exists concerning the visual modality.
    Aims

    To address the outstanding issue of whether the impairment in sustained attention is limited to the auditory domain, or whether it is domain-general. Furthermore, to test whether children's sustained attention ability relates to their word-production skills.
    Methods & Procedures

    Groups of 7–9 year olds with SLI (N = 28) and typically developing (TD) children (N = 22) performed a picture-naming task and two sustained attention tasks, namely auditory and visual continuous performance tasks (CPTs).
    Outcomes & Results

    Children with SLI performed worse than TD children on picture naming and on both the auditory and visual CPTs. Moreover, performance on both the CPTs correlated with picture-naming latencies across developmental groups.
    Conclusions & Implications

    These results provide evidence for a deficit in both auditory and visual sustained attention in children with SLI. Moreover, the study indicates there is a relationship between domain-general sustained attention and picture-naming performance in both TD and language-impaired children. Future studies should establish whether this relationship is causal. If attention influences language, training of sustained attention may improve language production in children from both developmental groups.
  • Karadöller, D. Z., Sumer, B., & Ozyurek, A. (2017). Effects of delayed language exposure on spatial language acquisition by signing children and adults. In G. Gunzelmann, A. Howes, T. Tenbrink, & E. Davelaar (Eds.), Proceedings of the 39th Annual Conference of the Cognitive Science Society (CogSci 2017) (pp. 2372-2376). Austin, TX: Cognitive Science Society.

    Abstract

    Deaf children born to hearing parents are exposed to language input quite late, which has long-lasting effects on language production. Previous studies with deaf individuals mostly focused on linguistic expressions of motion events, which have several event components. We do not know if similar effects emerge in simple events such as descriptions of spatial configurations of objects. Moreover, previous data mainly come from late adult signers. Little is known about the language development of late signing children soon after learning sign language. We compared simple event descriptions of late signers of Turkish Sign Language (adults, children) to age-matched native signers. Our results indicate that while late signers in both age groups are native-like in frequency of expressing a relational encoding, they lag behind native signers in using morphologically complex linguistic forms compared to other simple forms. Late signing children performed similarly to adults and thus showed no development over time.
  • Kavaklioglu, T., Guadalupe, T., Zwiers, M., Marquand, A. F., Onnink, M., Shumskaya, E., Brunner, H., Fernandez, G., Fisher, S. E., & Francks, C. (2017). Structural asymmetries of the human cerebellum in relation to cerebral cortical asymmetries and handedness. Brain Structure and Function, 22, 1611-1623. doi:10.1007/s00429-016-1295-9.

    Abstract

    There is evidence that the human cerebellum is involved not only in motor control but also in other cognitive functions. Several studies have shown that language-related activation is lateralized toward the right cerebellar hemisphere in most people, in accordance with leftward cerebral cortical lateralization for language and a general contralaterality of cerebral–cerebellar activations. In terms of behavior, hand use elicits asymmetrical activation in the cerebellum, while hand preference is weakly associated with language lateralization. However, it is not known how, or whether, these functional relations are reflected in anatomy. We investigated volumetric gray matter asymmetries of cerebellar lobules in an MRI data set comprising 2226 subjects. We tested these cerebellar asymmetries for associations with handedness, and for correlations with cerebral cortical anatomical asymmetries of regions important for language or hand motor control, as defined by two different automated image analysis methods and brain atlases, and supplemented with extensive visual quality control. No significant associations of cerebellar asymmetries to handedness were found. Some significant associations of cerebellar lobular asymmetries to cerebral cortical asymmetries were found, but none of these correlations were greater than 0.14, and they were mostly method-/atlas-dependent. On the basis of this large and highly powered study, we conclude that there is no overt structural manifestation of cerebellar functional lateralization and connectivity, in respect of hand motor control or language laterality.
  • Klein, M., Van Donkelaar, M., Verhoef, E., & Franke, B. (2017). Imaging genetics in neurodevelopmental psychopathology. American Journal of Medical Genetics Part B: Neuropsychiatric Genetics, 174(5), 485-537. doi:10.1002/ajmg.b.32542.

    Abstract

    Neurodevelopmental disorders are defined by highly heritable problems during development and brain growth. Attention-deficit/hyperactivity disorder (ADHD), autism spectrum disorders (ASDs), and intellectual disability (ID) are frequent neurodevelopmental disorders, with common comorbidity among them. Imaging genetics studies on the role of disease-linked genetic variants on brain structure and function have been performed to unravel the etiology of these disorders. Here, we reviewed imaging genetics literature on these disorders attempting to understand the mechanisms of individual disorders and their clinical overlap. For ADHD and ASD, we selected replicated candidate genes implicated through common genetic variants. For ID, which is mainly caused by rare variants, we included genes for relatively frequent forms of ID occurring comorbid with ADHD or ASD. We reviewed case-control studies and studies of risk variants in healthy individuals. Imaging genetics studies for ADHD were retrieved for SLC6A3/DAT1, DRD2, DRD4, NOS1, and SLC6A4/5HTT. For ASD, studies on CNTNAP2, MET, OXTR, and SLC6A4/5HTT were found. For ID, we reviewed the genes FMR1, TSC1 and TSC2, NF1, and MECP2. Alterations in brain volume, activity, and connectivity were observed. Several findings were consistent across studies, implicating, for example, SLC6A4/5HTT in brain activation and functional connectivity related to emotion regulation. However, many studies had small sample sizes, and hypothesis-based, brain region-specific studies were common. Results from available studies confirm that imaging genetics can provide insight into the link between genes, disease-related behavior, and the brain. However, the field is still in its early stages, and conclusions about shared mechanisms cannot yet be drawn.
  • Kunert, R., & Jongman, S. R. (2017). Entrainment to an auditory signal: Is attention involved? Journal of Experimental Psychology: General, 146(1), 77-88. doi:10.1037/xge0000246.

    Abstract

    Many natural auditory signals, including music and language, change periodically. The effect of such auditory rhythms on the brain is unclear however. One widely held view, dynamic attending theory, proposes that the attentional system entrains to the rhythm and increases attention at moments of rhythmic salience. In support, 2 experiments reported here show reduced response times to visual letter strings shown at auditory rhythm peaks, compared with rhythm troughs. However, we argue that an account invoking the entrainment of general attention should further predict rhythm entrainment to also influence memory for visual stimuli. In 2 pseudoword memory experiments we find evidence against this prediction. Whether a pseudoword is shown during an auditory rhythm peak or not is irrelevant for its later recognition memory in silence. Other attention manipulations, dividing attention and focusing attention, did result in a memory effect. This raises doubts about the suggested attentional nature of rhythm entrainment. We interpret our findings as support for auditory rhythm perception being based on auditory-motor entrainment, not general attention entrainment.
  • Kunert, R. (2017). Music and language comprehension in the brain. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Lam, N. H. L. (2017). Comprehending comprehension: Insights from neuronal oscillations on the neuronal basis of language. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Lam, K. J. Y., Bastiaansen, M. C. M., Dijkstra, T., & Rueschemeyer, S. A. (2017). Making sense: motor activation and action plausibility during sentence processing. Language, Cognition and Neuroscience, 32(5), 590-600. doi:10.1080/23273798.2016.1164323.

    Abstract

    The current electroencephalography study investigated the relationship between the motor and (language) comprehension systems by simultaneously measuring mu and N400 effects. Specifically, we examined whether the pattern of motor activation elicited by verbs depends on the larger sentential context. A robust N400 congruence effect confirmed the contextual manipulation of action plausibility, a form of semantic congruency. Importantly, this study showed that: (1) Action verbs elicited more mu power decrease than non-action verbs when sentences described plausible actions. Action verbs thus elicited more motor activation than non-action verbs. (2) In contrast, when sentences described implausible actions, mu activity was present but the difference between the verb types was not observed. The increased processing associated with a larger N400 thus coincided with mu activity in sentences describing implausible actions. Altogether, context-dependent motor activation appears to play a functional role in deriving context-sensitive meaning.
  • Lewis, A. G., Schoffelen, J.-M., Hoffmann, C., Bastiaansen, M. C. M., & Schriefers, H. (2017). Discourse-level semantic coherence influences beta oscillatory dynamics and the N400 during sentence comprehension. Language, Cognition and Neuroscience, 32(5), 601-617. doi:10.1080/23273798.2016.1211300.

    Abstract

    In this study, we used electroencephalography to investigate the influence of discourse-level semantic coherence on electrophysiological signatures of local sentence-level processing. Participants read groups of four sentences that could either form coherent stories or were semantically unrelated. For semantically coherent discourses compared to incoherent ones, the N400 was smaller at sentences 2–4, while the visual N1 was larger at the third and fourth sentences. Oscillatory activity in the beta frequency range (13–21 Hz) was higher for coherent discourses. We relate the N400 effect to a disruption of local sentence-level semantic processing when sentences are unrelated. Our beta findings can be tentatively related to disruption of local sentence-level syntactic processing, but it cannot be fully ruled out that they are instead (or also) related to disrupted local sentence-level semantic processing. We conclude that manipulating discourse-level semantic coherence does have an effect on oscillatory power related to local sentence-level processing.
  • Lewis, A. G. (2017). Explorations of beta-band neural oscillations during language comprehension: Sentence processing and beyond. PhD Thesis, Radboud University Nijmegen, Nijmegen.
  • Lockwood, G. (2017). Talking sense: The behavioural and neural correlates of sound symbolism. PhD Thesis, Radboud University, Nijmegen.
  • Lopopolo, A., Frank, S. L., Van den Bosch, A., & Willems, R. M. (2017). Using stochastic language models (SLM) to map lexical, syntactic, and phonological information processing in the brain. PLoS One, 12(5): e0177794. doi:10.1371/journal.pone.0177794.

    Abstract

    Language comprehension involves the simultaneous processing of information at the phonological, syntactic, and lexical level. We track these three distinct streams of information in the brain by using stochastic measures derived from computational language models to detect neural correlates of phoneme, part-of-speech, and word processing in an fMRI experiment. Probabilistic language models have proven to be useful tools for studying how language is processed as a sequence of symbols unfolding in time. Conditional probabilities between sequences of words are at the basis of probabilistic measures such as surprisal and perplexity which have been successfully used as predictors of several behavioural and neural correlates of sentence processing. Here we computed perplexity from sequences of words and their parts of speech, and their phonemic transcriptions. Brain activity time-locked to each word is regressed on the three model-derived measures. We observe that the brain keeps track of the statistical structure of lexical, syntactic and phonological information in distinct areas.

    Additional information

    Data availability
  • Mainz, N., Shao, Z., Brysbaert, M., & Meyer, A. S. (2017). Vocabulary Knowledge Predicts Lexical Processing: Evidence from a Group of Participants with Diverse Educational Backgrounds. Frontiers in Psychology, 8: 1164. doi:10.3389/fpsyg.2017.01164.

    Abstract

    Vocabulary knowledge is central to a speaker's command of their language. In previous research, greater vocabulary knowledge has been associated with advantages in language processing. In this study, we examined the relationship between individual differences in vocabulary and language processing performance more closely by (i) using a battery of vocabulary tests instead of just one test, and (ii) testing not only university students (Experiment 1) but young adults from a broader range of educational backgrounds (Experiment 2). Five vocabulary tests were developed, including multiple-choice and open antonym and synonym tests and a definition test, and administered together with two established measures of vocabulary. Language processing performance was measured using a lexical decision task. In Experiment 1, vocabulary and word frequency were found to predict word recognition speed while we did not observe an interaction between the effects. In Experiment 2, word recognition performance was predicted by word frequency and the interaction between word frequency and vocabulary, with high-vocabulary individuals showing smaller frequency effects. While overall the individual vocabulary tests were correlated and showed similar relationships with language processing as compared to a composite measure of all tests, they appeared to share less variance in Experiment 2 than in Experiment 1. Implications of our findings concerning the assessment of vocabulary size in individual differences studies and the investigation of individuals from more varied backgrounds are discussed.

    Additional information

    Supplementary Material Appendices.pdf
